* [dts] [PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2 zero_copy testcase
@ 2021-04-13 2:44 Ling Wei
2021-04-13 2:47 ` Ling, WeiX
2021-04-20 3:08 ` Wang, Yinan
0 siblings, 2 replies; 4+ messages in thread
From: Ling Wei @ 2021-04-13 2:44 UTC (permalink / raw)
To: dts; +Cc: Ling Wei
As the DPDK community removed support for the dequeue zero copy feature in
DPDK 20.11, delete the two zero_copy test cases from the test suite and test plan.
Signed-off-by: Ling Wei <weix.ling@intel.com>
---
.../perf_vm2vm_virtio_net_perf_test_plan.rst | 103 +-----------------
tests/TestSuite_perf_vm2vm_virtio_net_perf.py | 47 ++------
2 files changed, 8 insertions(+), 142 deletions(-)
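For reference, a minimal standalone sketch of how the vhost testpmd command line
is assembled once the dequeue-zero-copy vdev argument is dropped; the paths, core
list and EAL options below are illustrative placeholders, not values taken from
the DTS framework or the suite config:

    # Simplified, hypothetical stand-in for start_vhost_testpmd() after this
    # patch; in the real suite, base_dir, the testpmd path and the EAL
    # parameters come from the DTS framework (self.base_dir, self.path,
    # self.dut.create_eal_parameters()).
    base_dir = "/root/dpdk"                                    # illustrative
    path = "./x86_64-native-linuxapp-gcc/app/testpmd"          # illustrative
    eal_params = " -l 2-4 -n 4 --no-pci --file-prefix=vhost "  # illustrative

    vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=1' " % base_dir
    vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=1' " % base_dir
    para = " -- -i --nb-cores=2 --txd=1024 --rxd=1024"

    # No ",dequeue-zero-copy=1" suffix is appended any more.
    command_line = path + eal_params + vdev1 + vdev2 + para
    print(command_line)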
diff --git a/test_plans/perf_vm2vm_virtio_net_perf_test_plan.rst b/test_plans/perf_vm2vm_virtio_net_perf_test_plan.rst
index 60e7abd0..c284c0f3 100644
--- a/test_plans/perf_vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/perf_vm2vm_virtio_net_perf_test_plan.rst
@@ -97,58 +97,8 @@ Test Case 1: Perf VM2VM split ring vhost-user/virtio-net test with tcp traffic
6. Check iperf throughput can get expected data.
-Test Case 2: Perf VM2VM split ring vhost-user/virtio-net dequeue zero-copy test with tcp traffic
-================================================================================================
-1. Launch the Vhost sample by below commands::
-
- rm -rf vhost-net*
- ./testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
- --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
- testpmd>start
-
-2. Launch VM1 and VM2::
-
- taskset -c 13 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
- -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
- -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
- -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
- -chardev socket,id=char0,path=./vhost-net0 \
- -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
-
- taskset -c 15 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
- -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
- -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
- -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
- -chardev socket,id=char0,path=./vhost-net1 \
- -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocal::
-
- ifconfig ens5 1.1.1.2
- arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocal::
-
- ifconfig ens5 1.1.1.8
- arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Check the iperf performance between two VMs by below commands::
-
- Under VM1, run: `iperf -s -i 1`
- Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-6. Check iperf throughput can get expected data.
-
-Test Case 3: Perf VM2VM packed ring vhost-user/virtio-net test with tcp traffic
+Test Case 2: Perf VM2VM packed ring vhost-user/virtio-net test with tcp traffic
===============================================================================
1. Launch the Vhost sample by below commands::
@@ -198,54 +148,3 @@ Test Case 3: Perf VM2VM packed ring vhost-user/virtio-net test with tcp traffic
Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
6. Check iperf throughput can get expected data.
-
-Test Case 4: Perf VM2VM packed ring vhost-user/virtio-net dequeue zero-copy test with tcp traffic
-=================================================================================================
-
-1. Launch the Vhost sample by below commands::
-
- rm -rf vhost-net*
- ./testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
- --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
- testpmd>start
-
-2. Launch VM1 and VM2::
-
- taskset -c 13 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
- -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
- -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
- -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
- -chardev socket,id=char0,path=./vhost-net0 \
- -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
-
- taskset -c 15 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
- -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
- -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
- -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
- -chardev socket,id=char0,path=./vhost-net1 \
- -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocal::
-
- ifconfig ens3 1.1.1.2
- arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocal::
-
- ifconfig ens3 1.1.1.8
- arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Check the iperf performance between two VMs by below commands::
-
- Under VM1, run: `iperf -s -i 1`
- Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-6. Check iperf throughput can get expected data.
\ No newline at end of file
diff --git a/tests/TestSuite_perf_vm2vm_virtio_net_perf.py b/tests/TestSuite_perf_vm2vm_virtio_net_perf.py
index 7116eece..ac0e38a2 100644
--- a/tests/TestSuite_perf_vm2vm_virtio_net_perf.py
+++ b/tests/TestSuite_perf_vm2vm_virtio_net_perf.py
@@ -164,16 +164,12 @@ class TestPerfVM2VMVirtioNetPerf(TestCase):
self.verify("FAIL" not in status_result, "Exceeded Gap")
- def start_vhost_testpmd(self, zerocopy=False):
+ def start_vhost_testpmd(self):
"""
launch the testpmd with different parameters
"""
- if zerocopy is True:
- zerocopy_arg = ",dequeue-zero-copy=1"
- else:
- zerocopy_arg = ""
- vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=1%s' " % (self.base_dir, zerocopy_arg)
- vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=1%s' " % (self.base_dir, zerocopy_arg)
+ vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=1' " % self.base_dir
+ vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=1' " % self.base_dir
eal_params = self.dut.create_eal_parameters(cores=self.cores_list, prefix='vhost', no_pci=True)
para = " -- -i --nb-cores=2 --txd=1024 --rxd=1024"
self.command_line = self.path + eal_params + vdev1 + vdev2 + para
@@ -227,11 +223,11 @@ class TestPerfVM2VMVirtioNetPerf(TestCase):
self.vm_dut[0].send_expect("arp -s %s %s" % (self.virtio_ip2, self.virtio_mac2), "#", 10)
self.vm_dut[1].send_expect("arp -s %s %s" % (self.virtio_ip1, self.virtio_mac1), "#", 10)
- def prepare_test_env(self, zerocopy, path_mode, packed_mode=False):
+ def prepare_test_env(self, path_mode, packed_mode=False):
"""
start vhost testpmd and qemu, and config the vm env
"""
- self.start_vhost_testpmd(zerocopy)
+ self.start_vhost_testpmd()
self.start_vms(mode=path_mode, packed=packed_mode)
self.config_vm_env()
@@ -310,24 +306,10 @@ class TestPerfVM2VMVirtioNetPerf(TestCase):
"""
VM2VM split ring vhost-user/virtio-net test with tcp traffic
"""
- zerocopy = False
path_mode = "tso"
self.test_target = "split_tso"
self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
- self.prepare_test_env(zerocopy, path_mode)
- self.start_iperf_and_verify_vhost_xstats_info(mode="tso")
- self.handle_expected()
- self.handle_results()
-
- def test_vm2vm_split_ring_dequeue_zero_copy_iperf_with_tso(self):
- """
- VM2VM split ring vhost-user/virtio-net zero copy test with tcp traffic
- """
- zerocopy = True
- path_mode = "tso"
- self.test_target = "split_zero_copy_tso"
- self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
- self.prepare_test_env(zerocopy, path_mode)
+ self.prepare_test_env(path_mode)
self.start_iperf_and_verify_vhost_xstats_info(mode="tso")
self.handle_expected()
self.handle_results()
@@ -336,26 +318,11 @@ class TestPerfVM2VMVirtioNetPerf(TestCase):
"""
VM2VM packed ring vhost-user/virtio-net test with tcp traffic
"""
- zerocopy = False
path_mode = "tso"
self.test_target = "packed_tso"
self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
packed_mode = True
- self.prepare_test_env(zerocopy, path_mode, packed_mode)
- self.start_iperf_and_verify_vhost_xstats_info(mode="tso")
- self.handle_expected()
- self.handle_results()
-
- def test_vm2vm_packed_ring_dequeue_zero_copy_iperf_with_tso(self):
- """
- VM2VM packed ring vhost-user/virtio-net zero copy test with tcp traffic
- """
- zerocopy = True
- path_mode = "tso"
- packed_mode = True
- self.test_target = "packed_zero_copy_tso"
- self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
- self.prepare_test_env(zerocopy, path_mode, packed_mode)
+ self.prepare_test_env(path_mode, packed_mode)
self.start_iperf_and_verify_vhost_xstats_info(mode="tso")
self.handle_expected()
self.handle_results()
--
2.25.1
* Re: [dts] [PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2 zero_copy testcase
2021-04-13 2:44 [dts] [PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2 zero_copy testcase Ling Wei
@ 2021-04-13 2:47 ` Ling, WeiX
2021-05-07 6:05 ` Tu, Lijuan
2021-04-20 3:08 ` Wang, Yinan
1 sibling, 1 reply; 4+ messages in thread
From: Ling, WeiX @ 2021-04-13 2:47 UTC (permalink / raw)
To: dts
[-- Attachment #1: Type: text/plain, Size: 314 bytes --]
> -----Original Message-----
> From: Ling, WeiX <weix.ling@intel.com>
> Sent: Tuesday, April 13, 2021 10:45 AM
> To: dts@dpdk.org
> Cc: Ling, WeiX <weix.ling@intel.com>
> Subject: [dts][PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2
> zero_copy testcase
>
Tested-by: Wei Ling <weix.ling@intel.com>
[-- Attachment #2: TestPerfVM2VMVirtioNetPerf.log --]
[-- Type: application/octet-stream, Size: 292966 bytes --]
* Re: [dts] [PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2 zero_copy testcase
2021-04-13 2:47 ` Ling, WeiX
@ 2021-05-07 6:05 ` Tu, Lijuan
0 siblings, 0 replies; 4+ messages in thread
From: Tu, Lijuan @ 2021-05-07 6:05 UTC (permalink / raw)
To: Ling, WeiX, dts
> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Ling, WeiX
> Sent: 2021年4月13日 10:48
> To: dts@dpdk.org
> Subject: Re: [dts] [PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2
> zero_copy testcase
>
> > -----Original Message-----
> > From: Ling, WeiX <weix.ling@intel.com>
> > Sent: Tuesday, April 13, 2021 10:45 AM
> > To: dts@dpdk.org
> > Cc: Ling, WeiX <weix.ling@intel.com>
> > Subject: [dts][PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2
> > zero_copy testcase
> >
> Tested-by: Wei Ling <weix.ling@intel.com>
Applied
* Re: [dts] [PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2 zero_copy testcase
2021-04-13 2:44 [dts] [PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2 zero_copy testcase Ling Wei
2021-04-13 2:47 ` Ling, WeiX
@ 2021-04-20 3:08 ` Wang, Yinan
1 sibling, 0 replies; 4+ messages in thread
From: Wang, Yinan @ 2021-04-20 3:08 UTC (permalink / raw)
To: Ling, WeiX, dts; +Cc: Ling, WeiX
Acked-by: Wang, Yinan <yinan.wang@intel.com>
> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Ling Wei
> Sent: 2021年4月13日 10:45
> To: dts@dpdk.org
> Cc: Ling, WeiX <weix.ling@intel.com>
> Subject: [dts] [PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2
> zero_copy testcase
>