From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1] vm2vm_virtio_net_perf_dsa: modify vhost-user back-end queue number at startup
Date: Thu, 15 Jun 2023 16:54:12 +0800
Message-ID: <20230615085412.1122975-1-weix.ling@intel.com>

1. As of DPDK commit b82f55c0 (net/vhost: use API to set max queue pairs),
the vhost-user back-end must be configured with the same number of queues
as the front-end, so modify the testplan and testsuite accordingly (see the
note after the diffstat below).

2. Remove the 'tso=1' parameter from testsuite case 3 and case 11 to keep
them in sync with the testplan.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../vm2vm_virtio_net_perf_dsa_test_plan.rst   |  60 ++++++----
 tests/TestSuite_vm2vm_virtio_net_perf_dsa.py  | 106 +++++++++---------
 2 files changed, 93 insertions(+), 73 deletions(-)
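
Note on the queue/vectors relationship exercised by this change: when the
front-end is started with N queue pairs, the virtio-net device gets
",mq=on,vectors=2*N+2" appended, and the vhost-user back-end vdev must be
created with queues=N as well. A minimal Python sketch of that argument
handling (hypothetical helper names; it mirrors the start_vms() hunk and
the vdev strings below):

    def build_vm_opt_settings(base_args, queues):
        # 2 MSI-X vectors per queue pair (TX + RX) plus config and control
        # vectors, hence 2 + 2 * queues; a single queue keeps QEMU defaults.
        mq_param = ",mq=on,vectors=%d" % (2 + 2 * queues) if queues > 1 else ""
        return base_args + mq_param

    def build_vhost_vdev(index, queues, tso=False):
        # The back-end must declare the same max queue pairs as the
        # front-end requests (DPDK commit b82f55c0).
        tso_param = ",tso=1" if tso else ""
        return "--vdev 'net_vhost%d,iface=vhost-net%d,client=1%s,queues=%d'" % (
            index, index, tso_param, queues)

    # e.g. queues=8 -> opt_settings gains ",mq=on,vectors=18" and both vhost
    # vdevs keep queues=8 even when testpmd later relaunches with --rxq=1 --txq=1.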

diff --git a/test_plans/vm2vm_virtio_net_perf_dsa_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_dsa_test_plan.rst
index 0e742387..a6efbf69 100644
--- a/test_plans/vm2vm_virtio_net_perf_dsa_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_dsa_test_plan.rst
@@ -264,8 +264,10 @@ The dynamic change of multi-queues number, iova as VA and PA mode also test.
 
 12. Quit vhost ports and relaunch vhost ports w/o dsa channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --no-pci --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --no-pci \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8' \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
 	testpmd>start
 
 13. On VM1, set virtio device::
@@ -280,8 +282,10 @@ The dynamic change of multi-queues number, iova as VA and PA mode also test.
 
 16. Quit vhost ports and relaunch vhost ports with 1 queues::
 
-	 <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --no-pci --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=4' \
-	 --vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=4'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+	 <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --no-pci \
+	 --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8' \
+	 --vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8' \
+	 -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
 	 testpmd>start
 
 17. On VM1, set virtio device::
@@ -359,16 +363,20 @@ The dynamic change of multi-queues number also test.
 
 8. Quit vhost ports and relaunch vhost ports w/o dsa channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --no-pci --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --no-pci \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 9. Rerun step 6-7.
 
 10. Quit vhost ports and relaunch vhost ports with 1 queues::
 
-	 <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --no-pci --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-	 --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+	 <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --no-pci \
+	 --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+	 --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \
+	 -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
 	 testpmd>start
 
 11. On VM1, set virtio device::
@@ -840,8 +848,10 @@ The dynamic change of multi-queues number also test.
 
 10. Quit vhost ports and relaunch vhost ports w/o dsa channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8' \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
 	testpmd>start
 
 11. On VM1, set virtio device::
@@ -856,8 +866,10 @@ The dynamic change of multi-queues number also test.
 
 14. Quit vhost ports and relaunch vhost ports with 1 queues::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=4' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=4'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8' \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
 	testpmd>start
 
 15. On VM1, set virtio device::
@@ -940,16 +952,20 @@ The dynamic change of multi-queues number also test.
 
 8. Quit vhost ports and relaunch vhost ports w/o dsa channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,legacy-ol-flags=1' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,legacy-ol-flags=1'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,legacy-ol-flags=1' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,legacy-ol-flags=1' \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 9. Rerun step 6-7.
 
 10. Quit vhost ports and relaunch vhost ports with 1 queues::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
 	testpmd>start
 
 11. On VM1, set virtio device::
@@ -1254,8 +1270,10 @@ and kernel driver and perform SW checksum in Rx/Tx path.. The dynamic change of
 
 10. Quit vhost ports and relaunch vhost ports w/o dsa channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=16' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=16'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=16' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=16' \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -1263,8 +1281,10 @@ and kernel driver and perform SW checksum in Rx/Tx path.. The dynamic change of
 
 12. Quit vhost ports and relaunch vhost ports with 1 queues::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=16' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=16' \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
 	testpmd>set fwd csum
 	testpmd>start
 
diff --git a/tests/TestSuite_vm2vm_virtio_net_perf_dsa.py b/tests/TestSuite_vm2vm_virtio_net_perf_dsa.py
index b4d55fac..6ab1aff9 100644
--- a/tests/TestSuite_vm2vm_virtio_net_perf_dsa.py
+++ b/tests/TestSuite_vm2vm_virtio_net_perf_dsa.py
@@ -79,7 +79,7 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             )
         self.vhost_user_pmd.execute_cmd("start")
 
-    def start_vms(self, server_mode=False, vm_queue=1, vm_config="vhost_sample"):
+    def start_vms(self, server_mode=False, queues=1, vm_config="vhost_sample"):
         """
         start two VM, each VM has one virtio device
         """
@@ -92,10 +92,11 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
                 vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i
             else:
                 vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + ",server"
-            if vm_queue > 1:
-                vm_params["opt_queue"] = vm_queue
+            if queues > 1:
+                vm_params["opt_queue"] = queues
+            mq_param = ",mq=on,vectors=%s" % (2 + 2 * queues) if queues > 1 else ""
             vm_params["opt_mac"] = "52:54:00:00:00:0%d" % (i + 1)
-            vm_params["opt_settings"] = self.vm_args
+            vm_params["opt_settings"] = self.vm_args + mq_param
             vm_info.set_vm_device(**vm_params)
             try:
                 vm_dut = vm_info.start(set_target=False)
@@ -148,7 +149,7 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             port_options=port_options,
         )
         self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
-        self.start_vms(server_mode=False, vm_queue=1)
+        self.start_vms(server_mode=False, queues=1)
         self.BC.config_2_vms_ip()
         self.BC.check_scp_file_between_2_vms()
         self.BC.run_iperf_test_between_2_vms()
@@ -212,8 +213,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             ports=dsas[0:1],
             port_options=port_options,
         )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
-        self.start_vms(server_mode=True, vm_queue=8)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
+        self.start_vms(server_mode=True, queues=8)
         self.BC.config_2_vms_combined(combined=8)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -334,8 +335,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
 
         self.vhost_user_pmd.quit()
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=4,tso=1' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=4,tso=1'"
+            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,tso=1' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,tso=1'"
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1"
         self.start_vhost_testpmd(
@@ -395,8 +396,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
         )
         port_options = {dsas[0]: "max_queues=8"}
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[%s]' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[%s]'"
+            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'"
             % (dmas, dmas)
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
@@ -407,8 +408,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             ports=dsas[0:1],
             port_options=port_options,
         )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
-        self.start_vms(server_mode=True, vm_queue=8)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
+        self.start_vms(server_mode=True, queues=8)
         self.BC.config_2_vms_combined(combined=8)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -418,8 +419,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
 
         self.vhost_user_pmd.quit()
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,tso=1' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,tso=1'"
+            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8'"
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4"
         self.start_vhost_testpmd(
@@ -437,8 +438,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
 
         self.vhost_user_pmd.quit()
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=4,tso=1' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=4,tso=1'"
+            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8'"
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1"
         self.start_vhost_testpmd(
@@ -477,7 +478,7 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             port_options=port_options,
         )
         self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=1)
+        self.start_vms(server_mode=False, queues=1)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
         self.BC.check_scp_file_between_2_vms()
@@ -542,8 +543,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             ports=dsas[0:1],
             port_options=port_options,
         )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=8)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
+        self.start_vms(server_mode=False, queues=8)
         self.BC.config_2_vms_combined(combined=8)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -637,8 +638,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             ports=dsas,
             port_options=port_options,
         )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=off,guest_tso4=off,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=8)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=off,guest_tso4=off,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
+        self.start_vms(server_mode=False, queues=8)
         self.BC.config_2_vms_combined(combined=8)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -676,7 +677,7 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             port_options=port_options,
         )
         self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=1)
+        self.start_vms(server_mode=False, queues=1)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
         self.BC.check_scp_file_between_2_vms()
@@ -741,8 +742,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             ports=dsas[0:1],
             port_options=port_options,
         )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=8)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
+        self.start_vms(server_mode=False, queues=8)
         self.BC.config_2_vms_combined(combined=8)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -774,8 +775,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             param=vhost_param,
             no_pci=True,
         )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
-        self.start_vms(server_mode=False, vm_queue=1)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
+        self.start_vms(server_mode=False, queues=1)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
         self.BC.check_scp_file_between_2_vms()
@@ -870,8 +871,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             param=vhost_param,
             no_pci=True,
         )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
-        self.start_vms(server_mode=True, vm_queue=8)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
+        self.start_vms(server_mode=True, queues=8)
         self.BC.config_2_vms_combined(combined=8)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -973,8 +974,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
 
         self.vhost_user_pmd.quit()
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=4,tso=1' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=4,tso=1'"
+            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,tso=1' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,tso=1'"
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1"
         self.start_vhost_testpmd(
@@ -1054,8 +1055,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             )
         )
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[%s]' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[%s]'"
+            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'"
             % (dmas1, dmas2)
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
@@ -1065,8 +1066,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             param=vhost_param,
             no_pci=True,
         )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=off,guest_tso4=off,guest_ecn=on,guest_ufo=on,host_ufo=on"
-        self.start_vms(server_mode=True, vm_queue=8)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=off,guest_tso4=off,guest_ecn=on,guest_ufo=on,host_ufo=on"
+        self.start_vms(server_mode=True, queues=8)
         self.BC.config_2_vms_combined(combined=8)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -1076,8 +1077,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
 
         self.vhost_user_pmd.quit()
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,legacy-ol-flags=1' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,legacy-ol-flags=1'"
+            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,legacy-ol-flags=1' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,legacy-ol-flags=1'"
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
         self.start_vhost_testpmd(
@@ -1092,8 +1093,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
         self.BC.check_iperf_result_between_2_vms()
         self.vhost_user_pmd.quit()
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=4,tso=1' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=4,tso=1'"
+            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8'"
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1"
         self.start_vhost_testpmd(
@@ -1128,7 +1129,7 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             no_pci=True,
         )
         self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=1)
+        self.start_vms(server_mode=False, queues=1)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
         self.BC.check_scp_file_between_2_vms()
@@ -1198,9 +1199,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             )
         )
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,tso=1,dmas=[%s]' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,queues=8,tso=1,dmas=[%s]'"
-            % (dmas1, dmas2)
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=8,dmas=[%s]'" % (dmas1, dmas2)
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
         self.start_vhost_testpmd(
@@ -1209,8 +1209,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             param=vhost_param,
             no_pci=True,
         )
-        self.vm_args = "mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=8)
+        self.vm_args = "mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
+        self.start_vms(server_mode=False, queues=8)
         self.BC.config_2_vms_combined(combined=8)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -1315,8 +1315,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
             param=vhost_param,
             no_pci=True,
         )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=8)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
+        self.start_vms(server_mode=False, queues=8)
         self.BC.config_2_vms_combined(combined=8)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -1499,8 +1499,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
         self.vhost_user_pmd.execute_cmd("port config 1 tx_offload tcp_cksum on")
         self.vhost_user_pmd.execute_cmd("port start all")
         self.vhost_user_pmd.execute_cmd("start")
-        self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=off,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
-        self.start_vms(server_mode=True, vm_queue=16)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=off,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
+        self.start_vms(server_mode=True, queues=16)
         self.BC.config_2_vms_combined(combined=16)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
@@ -1593,8 +1593,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
 
         self.vhost_user_pmd.quit()
         vhost_eal_param = (
-            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,tso=1' "
-            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,tso=1'"
+            "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=16,tso=1' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=16,tso=1'"
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1"
         self.start_vhost_testpmd(
@@ -1778,8 +1778,8 @@ class TestVM2VMVirtioNetPerfDsa(TestCase):
         self.vhost_user_pmd.execute_cmd("port config 1 tx_offload tcp_cksum on")
         self.vhost_user_pmd.execute_cmd("port start all")
         self.vhost_user_pmd.execute_cmd("start")
-        self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=off,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=16)
+        self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=off,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
+        self.start_vms(server_mode=False, queues=16)
         self.BC.config_2_vms_combined(combined=16)
         self.BC.config_2_vms_ip()
         self.BC.check_ping_between_2_vms()
-- 
2.34.1

