* [dts][PATCH V1 0/2] completion testplan and optimize testsuite script
@ 2023-03-28 7:20 Wei Ling
2023-03-28 7:20 ` [dts][PATCH V1 1/2] test_plans/pvp_qemu_multi_paths_port_restart: completion testplan Wei Ling
2023-03-28 7:20 ` [dts][PATCH V1 2/2] tests/pvp_qemu_multi_paths_port_restart: optimize testsuite script Wei Ling
0 siblings, 2 replies; 5+ messages in thread
From: Wei Ling @ 2023-03-28 7:20 UTC
To: dts; +Cc: Wei Ling
1. Add the missing `-a 0000:04:00.0` parameter when starting testpmd in
the VM, and reduce the test case 10 re-run count from 100 to 10 to
shorten the run time.
2. Use the PmdOutput API instead of send_expect() to start testpmd.
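In practice, the reworked VM-side launch looks roughly like this (a
condensed sketch of start_vm_testpmd() from patch 2/2; PmdOutput and the
self.* objects are the DTS framework pieces the suite already uses):

    # Condensed sketch; assumes a DTS TestCase context (self.vm_dut).
    from framework.pmd_output import PmdOutput

    vm0_pmd = PmdOutput(self.vm_dut)
    vm0_pci = self.vm_dut.get_port_pci(0)  # e.g. 0000:04:00.0
    vm0_pmd.start_testpmd(
        cores=self.vm_dut.get_core_list(config="1S/2C/1T"),
        eal_param="-a %s" % vm0_pci,  # allow-list only the virtio device
        param="--nb-cores=1 --txd=1024 --rxd=1024",
        fixed_prefix=True,
    )
    vm0_pmd.execute_cmd("set fwd mac")
    vm0_pmd.execute_cmd("start")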
Wei Ling (2):
test_plans/pvp_qemu_multi_paths_port_restart: completion testplan
tests/pvp_qemu_multi_paths_port_restart: optimize testsuite script
...emu_multi_paths_port_restart_test_plan.rst | 130 ++++++++------
...Suite_pvp_qemu_multi_paths_port_restart.py | 161 ++++++++----------
2 files changed, 150 insertions(+), 141 deletions(-)
--
2.25.1
* [dts][PATCH V1 1/2] test_plans/pvp_qemu_multi_paths_port_restart: completion testplan
2023-03-28 7:20 [dts][PATCH V1 0/2] completion testplan and optimize testsuite script Wei Ling
@ 2023-03-28 7:20 ` Wei Ling
2023-03-28 7:20 ` [dts][PATCH V1 2/2] tests/pvp_qemu_multi_paths_port_restart: optimize testsuite script Wei Ling
1 sibling, 0 replies; 5+ messages in thread
From: Wei Ling @ 2023-03-28 7:20 UTC
To: dts; +Cc: Wei Ling
Add the missing `-a 0000:04:00.0` parameter when starting testpmd in the
VM, and reduce the test case 10 re-run count from 100 to 10 to shorten
the run time.
Signed-off-by: Wei Ling <weix.ling@intel.com>
---
...emu_multi_paths_port_restart_test_plan.rst | 130 +++++++++++-------
1 file changed, 80 insertions(+), 50 deletions(-)
diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
index 7e24290a..84ee68de 100644
--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
@@ -44,8 +44,8 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
3. On VM, bind virtio net to vfio-pci and run testpmd::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -53,15 +53,18 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
testpmd>show port stats all
-5. Port restart 100 times by below command and re-calculate the average througnput,verify the throughput is not zero after port restart::
+5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
+ testpmd>port stop 0
+ testpmd>show port stats 0
+
+6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- ...
- testpmd>stop
- testpmd>show port stats all
- testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
Test Case 2: pvp test with virtio 0.95 normal path
==================================================
@@ -90,8 +93,8 @@ Test Case 2: pvp test with virtio 0.95 normal path
3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip \
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \
+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -102,14 +105,17 @@ Test Case 2: pvp test with virtio 0.95 normal path
5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
- testpmd>show port stats all
+ testpmd>port stop 0
+ testpmd>show port stats 0
6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
-Test Case 3: pvp test with virtio 0.95 vrctor_rx path
+Test Case 3: pvp test with virtio 0.95 vector_rx path
=====================================================
1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
@@ -136,8 +142,8 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path
3. On VM, bind virtio net to vfio-pci and run testpmd without ant tx-offloads::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0,vectorized=1 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -148,12 +154,15 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path
5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
- testpmd>show port stats all
+ testpmd>port stop 0
+ testpmd>show port stats 0
6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
Test Case 4: pvp test with virtio 1.0 mergeable path
====================================================
@@ -182,8 +191,8 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
3. On VM, bind virtio net to vfio-pci and run testpmd::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -194,12 +203,15 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
- testpmd>show port stats all
+ testpmd>port stop 0
+ testpmd>show port stats 0
6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
Test Case 5: pvp test with virtio 1.0 normal path
=================================================
@@ -228,8 +240,8 @@ Test Case 5: pvp test with virtio 1.0 normal path
3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip\
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \
+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -240,14 +252,17 @@ Test Case 5: pvp test with virtio 1.0 normal path
5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
- testpmd>show port stats all
+ testpmd>port stop 0
+ testpmd>show port stats 0
6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
-Test Case 6: pvp test with virtio 1.0 vrctor_rx path
+Test Case 6: pvp test with virtio 1.0 vector_rx path
====================================================
1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
@@ -274,8 +289,8 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path
3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0,vectorized=1 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -286,12 +301,15 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path
5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
- testpmd>show port stats all
+ testpmd>port stop 0
+ testpmd>show port stats 0
6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
Test Case 7: pvp test with virtio 1.1 mergeable path
====================================================
@@ -320,8 +338,8 @@ Test Case 7: pvp test with virtio 1.1 mergeable path
3. On VM, bind virtio net to vfio-pci and run testpmd::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -332,12 +350,15 @@ Test Case 7: pvp test with virtio 1.1 mergeable path
5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
- testpmd>show port stats all
+ testpmd>port stop 0
+ testpmd>show port stats 0
6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
Test Case 8: pvp test with virtio 1.1 normal path
=================================================
@@ -366,8 +387,8 @@ Test Case 8: pvp test with virtio 1.1 normal path
3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip\
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \
+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -378,14 +399,17 @@ Test Case 8: pvp test with virtio 1.1 normal path
5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
- testpmd>show port stats all
+ testpmd>port stop 0
+ testpmd>show port stats 0
6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
-Test Case 9: pvp test with virtio 1.1 vrctor_rx path
+Test Case 9: pvp test with virtio 1.1 vector_rx path
====================================================
1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
@@ -412,8 +436,8 @@ Test Case 9: pvp test with virtio 1.1 vrctor_rx path
3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0,vectorized=1 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -424,15 +448,18 @@ Test Case 9: pvp test with virtio 1.1 vrctor_rx path
5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
- testpmd>show port stats all
+ testpmd>port stop 0
+ testpmd>show port stats 0
6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
-Test Case 10: pvp test with virtio 1.0 mergeable path restart 100 times
-=======================================================================
+Test Case 10: pvp test with virtio 1.0 mergeable path restart 10 times
+======================================================================
1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
@@ -458,8 +485,8 @@ Test Case 10: pvp test with virtio 1.0 mergeable path restart 100 times
3. On VM, bind virtio net to vfio-pci and run testpmd::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
- --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -470,11 +497,14 @@ Test Case 10: pvp test with virtio 1.0 mergeable path restart 100 times
5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::
testpmd>stop
- testpmd>show port stats all
+ testpmd>port stop 0
+ testpmd>show port stats 0
6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
+ testpmd>clear port stats all
+ testpmd>port start all
testpmd>start
- testpmd>show port stats all
+ testpmd>show port stats 0
-7. Rerun steps 4-6 100 times to check stability.
+7. Rerun steps 4-6 10 times to check stability.
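
Steps 5 and 6 are what the suite automates in patch 2/2; condensed into a
minimal sketch, assuming the framework.pmd_output.PmdOutput helper that
patch switches to:

    import re

    def port_restart(pmd):
        # Sketch of the vhost-side stop/restart check; `pmd` is assumed
        # to be a framework.pmd_output.PmdOutput instance driving the
        # vhost testpmd session.
        pmd.execute_cmd("stop")
        pmd.execute_cmd("port stop 0")
        out = pmd.execute_cmd("show port stats 0")
        assert re.search(r"Rx-pps:\s*(\d*)", out).group(1) == "0", \
            "throughput not zero after port stop"
        pmd.execute_cmd("clear port stats all")
        pmd.execute_cmd("port start all")
        out = pmd.execute_cmd("show port info all")
        assert "down" not in re.findall(r"Link\s*status:\s*([a-z]*)", out), \
            "port can not up after restart"
        pmd.execute_cmd("start")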
--
2.25.1
* [dts][PATCH V1 2/2] tests/pvp_qemu_multi_paths_port_restart: optimize testsuite script
2023-03-28 7:20 [dts][PATCH V1 0/2] completion testplan and optimize testsuite script Wei Ling
2023-03-28 7:20 ` [dts][PATCH V1 1/2] test_plans/pvp_qemu_multi_paths_port_restart: completion testplan Wei Ling
@ 2023-03-28 7:20 ` Wei Ling
2023-04-04 3:11 ` He, Xingguang
2023-04-11 8:53 ` lijuan.tu
1 sibling, 2 replies; 5+ messages in thread
From: Wei Ling @ 2023-03-28 7:20 UTC
To: dts; +Cc: Wei Ling
Use the PmdOutput API instead of send_expect() to start testpmd.
Signed-off-by: Wei Ling <weix.ling@intel.com>
---
...Suite_pvp_qemu_multi_paths_port_restart.py | 161 ++++++++----------
1 file changed, 70 insertions(+), 91 deletions(-)
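The change in miniature (a sketch only; the PmdOutput calls are the ones
the diff below introduces, shown in the suite's own context):

    # Before: build the command line by hand and drive it with raw
    # send_expect() calls, matching the prompt and timeout explicitly.
    cmd = self.path + eal_params + " -- -i --nb-cores=1 --txd=1024 --rxd=1024"
    self.vhost.send_expect(cmd, "testpmd> ", 120)
    self.vhost.send_expect("set fwd mac", "testpmd> ", 120)

    # After: PmdOutput assembles the command line and handles the prompt.
    self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user)
    self.vhost_user_pmd.start_testpmd(
        cores=self.core_list,
        eal_param="--vdev 'eth_vhost0,iface=vhost-net,queues=1'",
        param="--nb-cores=1 --txd=1024 --rxd=1024",
        prefix="vhost-user",
        fixed_prefix=True,
        ports=[self.pci_info],
    )
    self.vhost_user_pmd.execute_cmd("set fwd mac")
    self.vhost_user_pmd.execute_cmd("start")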
diff --git a/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py b/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py
index 64bb4436..64661a1e 100644
--- a/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py
+++ b/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py
@@ -2,19 +2,12 @@
# Copyright(c) 2010-2019 Intel Corporation
#
-"""
-DPDK Test suite.
-Benchmark pvp qemu test with 3 RX/TX PATHs,
-includes Mergeable, Normal, Vector_RX.
-Cover virtio 1.1 1.0 and virtio 0.95.Also cover
-port restart test with each path
-"""
import re
import time
-import framework.utils as utils
from framework.packet import Packet
from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
from framework.test_case import TestCase
from framework.virt_common import VM
@@ -28,7 +21,6 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
self.core_config = "1S/3C/1T"
self.dut_ports = self.dut.get_ports()
self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
- # get core mask
self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
self.core_list = self.dut.get_core_list(
self.core_config, socket=self.ports_socket
@@ -41,21 +33,24 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
out = self.tester.send_expect("ls -d %s" % self.out_path, "# ")
if "No such file or directory" in out:
self.tester.send_expect("mkdir -p %s" % self.out_path, "# ")
+ self.base_dir = self.dut.base_dir.replace("~", "/root")
# create an instance to set stream field setting
self.pktgen_helper = PacketGeneratorHelper()
self.pci_info = self.dut.ports_info[0]["pci"]
self.number_of_ports = 1
self.path = self.dut.apps_name["test-pmd"]
self.testpmd_name = self.path.split("/")[-1]
+ self.vhost_user = self.dut.new_session(suite="vhost-user")
+ self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user)
def set_up(self):
"""
Run before each test case.
"""
# Clean the execution ENV
- self.dut.send_expect("rm -rf ./vhost.out", "#")
self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+ self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#")
# Prepare the result table
self.table_header = [
"FrameSize(B)",
@@ -66,82 +61,67 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
]
self.result_table_create(self.table_header)
- self.vhost = self.dut.new_session(suite="vhost-user")
-
def start_vhost_testpmd(self):
"""
start testpmd on vhost
"""
- self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
- self.dut.send_expect("rm -rf ./vhost-net*", "#")
- vdev = [r"'net_vhost0,iface=vhost-net,queues=1'"]
- eal_params = self.dut.create_eal_parameters(
- cores=self.core_list, prefix="vhost", ports=[self.pci_info], vdevs=vdev
+ eal_param = "--vdev 'eth_vhost0,iface=vhost-net,queues=1'"
+ param = "--nb-cores=1 --txd=1024 --rxd=1024"
+ self.vhost_user_pmd.start_testpmd(
+ cores=self.core_list,
+ eal_param=eal_param,
+ param=param,
+ prefix="vhost-user",
+ fixed_prefix=True,
+ ports=[self.pci_info],
)
- para = " -- -i --nb-cores=1 --txd=1024 --rxd=1024"
- command_line_client = self.path + eal_params + para
- self.vhost.send_expect(command_line_client, "testpmd> ", 120)
- self.vhost.send_expect("set fwd mac", "testpmd> ", 120)
- self.vhost.send_expect("start", "testpmd> ", 120)
+ self.vhost_user_pmd.execute_cmd("set fwd mac")
+ self.vhost_user_pmd.execute_cmd("start")
def start_vm_testpmd(self, path):
"""
start testpmd in vm depend on different path
"""
+ self.vm0_pmd = PmdOutput(self.vm_dut)
+ self.vm0_pci = self.vm_dut.get_port_pci(0)
+ self.vm_core_config = "1S/2C/1T"
+ self.vm_core_list = self.vm_dut.get_core_list(config=self.vm_core_config)
+ eal_param = "-a %s" % (self.vm0_pci)
+ param = "--nb-cores=1 --txd=1024 --rxd=1024"
if path == "mergeable":
- command = (
- self.path + "-c 0x3 -n 3 -- -i " + "--nb-cores=1 --txd=1024 --rxd=1024"
- )
+ eal_param = eal_param
+ param = param
elif path == "normal":
- command = (
- self.path
- + "-c 0x3 -n 3 -- -i "
- + "--tx-offloads=0x0 --enable-hw-vlan-strip "
- + "--nb-cores=1 --txd=1024 --rxd=1024"
- )
+ eal_param = eal_param
+ param = "--tx-offloads=0x0 --enable-hw-vlan-strip " + param
elif path == "vector_rx":
- command = (
- self.path
- + "-c 0x3 -n 3 -a %s,vectorized=1 -- -i "
- + "--nb-cores=1 --txd=1024 --rxd=1024"
- ) % self.vm_dut.get_port_pci(0)
- self.vm_dut.send_expect(command, "testpmd> ", 30)
- self.vm_dut.send_expect("set fwd mac", "testpmd> ", 30)
- self.vm_dut.send_expect("start", "testpmd> ", 30)
+ eal_param = eal_param + ",vectorized=1"
+ param = param
+ self.vm0_pmd.start_testpmd(
+ cores=self.vm_core_list, eal_param=eal_param, param=param, fixed_prefix=True
+ )
+ self.vm0_pmd.execute_cmd("set fwd mac")
+ self.vm0_pmd.execute_cmd("start")
- def start_one_vm(self, modem=0, mergeable=0, packed=0):
+ def start_one_vm(self, disable_modern=False, mrg_rxbuf=False, packed=False):
"""
start qemu
"""
self.vm = VM(self.dut, "vm0", "vhost_sample")
vm_params = {}
vm_params["driver"] = "vhost-user"
- vm_params["opt_path"] = "./vhost-net"
+ vm_params["opt_path"] = "%s/vhost-net" % self.base_dir
vm_params["opt_mac"] = self.virtio1_mac
- if modem == 1 and mergeable == 0 and packed == 0:
- vm_params[
- "opt_settings"
- ] = "disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024"
- elif modem == 1 and mergeable == 1 and packed == 0:
- vm_params[
- "opt_settings"
- ] = "disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024"
- elif modem == 0 and mergeable == 0 and packed == 0:
- vm_params[
- "opt_settings"
- ] = "disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024"
- elif modem == 0 and mergeable == 1 and packed == 0:
- vm_params[
- "opt_settings"
- ] = "disable-modern=true,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024"
- elif modem == 1 and mergeable == 0 and packed == 1:
- vm_params[
- "opt_settings"
- ] = "disable-modern=false,mrg_rxbuf=off,packed=on,rx_queue_size=1024,tx_queue_size=1024"
- elif modem == 1 and mergeable == 1 and packed == 1:
- vm_params[
- "opt_settings"
- ] = "disable-modern=false,mrg_rxbuf=on,packed=on,rx_queue_size=1024,tx_queue_size=1024"
+ disable_modern_param = "true" if disable_modern else "false"
+ mrg_rxbuf_param = "on" if mrg_rxbuf else "off"
+ packed_param = ",packed=on" if packed else ""
+ vm_params[
+ "opt_settings"
+ ] = "disable-modern=%s,mrg_rxbuf=%s,rx_queue_size=1024,tx_queue_size=1024%s" % (
+ disable_modern_param,
+ mrg_rxbuf_param,
+ packed_param,
+ )
self.vm.set_vm_device(**vm_params)
try:
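
The six hard-coded disable-modern/mrg_rxbuf/packed branches collapse into
a single format string; as a standalone sketch (opt_settings here is a
hypothetical helper name, not part of the suite):

    # Sketch of the new mapping from boolean flags to the QEMU virtio-net
    # opt_settings string; runnable on its own.
    def opt_settings(disable_modern, mrg_rxbuf, packed):
        return (
            "disable-modern=%s,mrg_rxbuf=%s,rx_queue_size=1024,tx_queue_size=1024%s"
            % (
                "true" if disable_modern else "false",
                "on" if mrg_rxbuf else "off",
                ",packed=on" if packed else "",
            )
        )

    # virtio 1.1 (packed ring) mergeable path, as test case 7 requests it:
    assert opt_settings(False, True, True) == (
        "disable-modern=false,mrg_rxbuf=on,"
        "rx_queue_size=1024,tx_queue_size=1024,packed=on"
    )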
@@ -157,7 +137,7 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
"""
loop = 1
while loop <= 5:
- out = self.vhost.send_expect("show port stats 0", "testpmd>", 60)
+ out = self.vhost_user_pmd.execute_cmd("show port stats 0")
lines = re.search("Rx-pps:\s*(\d*)", out)
result = lines.group(1)
if result == "0":
@@ -174,7 +154,7 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
"""
loop = 1
while loop <= 5:
- out = self.vhost.send_expect("show port info all", "testpmd> ", 120)
+ out = self.vhost_user_pmd.execute_cmd("show port info all")
port_status = re.findall("Link\s*status:\s*([a-z]*)", out)
if "down" not in port_status:
break
@@ -184,13 +164,13 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
self.verify("down" not in port_status, "port can not up after restart")
def port_restart(self):
- self.vhost.send_expect("stop", "testpmd> ", 120)
- self.vhost.send_expect("port stop 0", "testpmd> ", 120)
+ self.vhost_user_pmd.execute_cmd("stop")
+ self.vhost_user_pmd.execute_cmd("port stop 0")
self.check_port_throughput_after_port_stop()
- self.vhost.send_expect("clear port stats all", "testpmd> ", 120)
- self.vhost.send_expect("port start all", "testpmd> ", 120)
+ self.vhost_user_pmd.execute_cmd("clear port stats all")
+ self.vhost_user_pmd.execute_cmd("port start all")
self.check_port_link_status_after_port_restart()
- self.vhost.send_expect("start", "testpmd> ", 120)
+ self.vhost_user_pmd.execute_cmd("start")
def update_table_info(self, case_info, frame_size, Mpps, throughtput, Cycle):
results_row = [frame_size]
@@ -272,21 +252,21 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
"""
close testpmd about vhost-user and vm_testpmd
"""
- self.vhost.send_expect("quit", "#", 60)
- self.vm_dut.send_expect("quit", "#", 60)
+ self.vhost_user_pmd.quit()
+ self.vm0_pmd.quit()
def close_session(self):
"""
close session of vhost-user
"""
- self.dut.close_session(self.vhost)
+ self.dut.close_session(self.vhost_user)
def test_perf_pvp_qemu_mergeable_mac(self):
"""
Test Case 1: pvp test with virtio 0.95 mergeable path
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=0, mergeable=1)
+ self.start_one_vm(disable_modern=True, mrg_rxbuf=True, packed=False)
self.start_vm_testpmd(path="mergeable")
self.send_and_verify("virtio0.95 mergeable")
self.close_all_testpmd()
@@ -298,7 +278,7 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
Test Case 2: pvp test with virtio 0.95 normal path
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=0, mergeable=0)
+ self.start_one_vm(disable_modern=True, mrg_rxbuf=False, packed=False)
self.start_vm_testpmd(path="normal")
self.send_and_verify("virtio0.95 normal")
self.close_all_testpmd()
@@ -310,7 +290,7 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
Test Case 3: pvp test with virtio 0.95 vrctor_rx path
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=0, mergeable=0)
+ self.start_one_vm(disable_modern=True, mrg_rxbuf=False, packed=False)
self.start_vm_testpmd(path="vector_rx")
self.send_and_verify("virtio0.95 vector_rx")
self.close_all_testpmd()
@@ -322,7 +302,7 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
Test Case 4: pvp test with virtio 1.0 mergeable path
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=1, mergeable=1)
+ self.start_one_vm(disable_modern=False, mrg_rxbuf=True, packed=False)
self.start_vm_testpmd(path="mergeable")
self.send_and_verify("virtio1.0 mergeable")
self.close_all_testpmd()
@@ -334,7 +314,7 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
Test Case 5: pvp test with virtio 1.0 normal path
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=1, mergeable=0)
+ self.start_one_vm(disable_modern=False, mrg_rxbuf=False, packed=False)
self.start_vm_testpmd(path="normal")
self.send_and_verify("virtio1.0 normal")
self.close_all_testpmd()
@@ -346,7 +326,7 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
Test Case 6: pvp test with virtio 1.0 vrctor_rx path
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=1, mergeable=0)
+ self.start_one_vm(disable_modern=False, mrg_rxbuf=False, packed=False)
self.start_vm_testpmd(path="vector_rx")
self.send_and_verify("virtio1.0 vector_rx")
self.close_all_testpmd()
@@ -358,7 +338,7 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
Test Case 7: pvp test with virtio 1.1 mergeable path
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=1, mergeable=1, packed=1)
+ self.start_one_vm(disable_modern=False, mrg_rxbuf=True, packed=True)
self.start_vm_testpmd(path="mergeable")
self.send_and_verify("virtio1.1 mergeable")
self.close_all_testpmd()
@@ -370,7 +350,7 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
Test Case 8: pvp test with virtio 1.1 normal path
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=1, mergeable=0, packed=1)
+ self.start_one_vm(disable_modern=False, mrg_rxbuf=False, packed=True)
self.start_vm_testpmd(path="normal")
self.send_and_verify("virtio1.1 normal")
self.close_all_testpmd()
@@ -382,26 +362,26 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
Test Case 9: pvp test with virtio 1.1 vrctor_rx path
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=1, mergeable=0, packed=1)
+ self.start_one_vm(disable_modern=False, mrg_rxbuf=False, packed=True)
self.start_vm_testpmd(path="vector_rx")
self.send_and_verify("virtio1.1 vector_rx")
self.close_all_testpmd()
self.result_table_print()
self.vm.stop()
- def test_perf_pvp_qemu_modern_mergeable_mac_restart_100_times(self):
+ def test_perf_pvp_qemu_modern_mergeable_mac_restart_10_times(self):
"""
- Test Case 10: pvp test with virtio 1.0 mergeable path restart 100 times
+ Test Case 10: pvp test with virtio 1.0 mergeable path restart 10 times
"""
self.start_vhost_testpmd()
- self.start_one_vm(modem=1, mergeable=1)
+ self.start_one_vm(disable_modern=False, mrg_rxbuf=True, packed=False)
self.start_vm_testpmd(path="mergeable")
case_info = "virtio1.0 mergeable"
Mpps, throughput = self.calculate_avg_throughput(64)
self.update_table_info(case_info, 64, Mpps, throughput, "Before Restart")
- for cycle in range(100):
- self.logger.info("now port restart %d times" % (cycle + 1))
+ for cycle in range(10):
+ self.logger.info("now port restart %d times" % (cycle + 1))
self.port_restart()
Mpps, throughput = self.calculate_avg_throughput(64)
self.update_table_info(
@@ -418,10 +398,9 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
"""
self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
- self.close_session()
- time.sleep(2)
def tear_down_all(self):
"""
Run after each test suite.
"""
+ self.close_session()
--
2.25.1
* RE: [dts][PATCH V1 2/2] tests/pvp_qemu_multi_paths_port_restart: optimize testsuite script
2023-03-28 7:20 ` [dts][PATCH V1 2/2] tests/pvp_qemu_multi_paths_port_restart: optimize testsuite script Wei Ling
@ 2023-04-04 3:11 ` He, Xingguang
2023-04-11 8:53 ` lijuan.tu
1 sibling, 0 replies; 5+ messages in thread
From: He, Xingguang @ 2023-04-04 3:11 UTC
To: Ling, WeiX, dts; +Cc: Ling, WeiX
> -----Original Message-----
> From: Wei Ling <weix.ling@intel.com>
> Sent: Tuesday, March 28, 2023 3:21 PM
> To: dts@dpdk.org
> Cc: Ling, WeiX <weix.ling@intel.com>
> Subject: [dts][PATCH V1 2/2] tests/pvp_qemu_multi_paths_port_restart:
> optimize testsuite script
>
> Use the PmdOutput API instead of send_expect() to start testpmd.
>
> Signed-off-by: Wei Ling <weix.ling@intel.com>
> ---
Acked-by: Xingguang He <xingguang.he@intel.com>
* [dts][PATCH V1 2/2] tests/pvp_qemu_multi_paths_port_restart: optimize testsuite script
2023-03-28 7:20 ` [dts][PATCH V1 2/2] tests/pvp_qemu_multi_paths_port_restart: optimize testsuite script Wei Ling
2023-04-04 3:11 ` He, Xingguang
@ 2023-04-11 8:53 ` lijuan.tu
1 sibling, 0 replies; 5+ messages in thread
From: lijuan.tu @ 2023-04-11 8:53 UTC
To: dts, Wei Ling; +Cc: Wei Ling
On Tue, 28 Mar 2023 15:20:43 +0800, Wei Ling <weix.ling@intel.com> wrote:
> Use the PmdOutput API instead of send_expect() to start testpmd.
>
> Signed-off-by: Wei Ling <weix.ling@intel.com>
Series applied, thanks