From: Ling Wei <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Ling Wei <weix.ling@intel.com>
Subject: [dts] [PATCH V1] tests/perf_vm2vm_virtio_net_perf: delete 2 zero_copy testcases
Date: Tue, 13 Apr 2021 10:44:56 +0800
Message-ID: <20210413024456.525859-1-weix.ling@intel.com>
As the DPDK community removed support for the dequeue zero-copy feature in
DPDK 20.11, delete the zero-copy cases from the test suite and test plan.
Signed-off-by: Ling Wei <weix.ling@intel.com>
---
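Note (not part of the commit message): for reference, below is a minimal
standalone Python sketch of how the vhost testpmd command line is built after
this change, i.e. without the removed dequeue-zero-copy=1 vdev argument. The
base_dir value and the core list are placeholders; in the suite they come from
self.base_dir and self.dut.create_eal_parameters().

    # Illustrative sketch only: assembles the simplified testpmd command line
    # used by start_vhost_testpmd() after dropping dequeue-zero-copy=1
    # (the feature was removed from DPDK in 20.11).
    base_dir = "/root/dpdk"  # placeholder for self.base_dir
    vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=1' " % base_dir
    vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=1' " % base_dir
    eal = "./testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost "  # placeholder EAL args
    params = "-- -i --nb-cores=2 --txd=1024 --rxd=1024"
    print(eal + vdev1 + vdev2 + params)
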
.../perf_vm2vm_virtio_net_perf_test_plan.rst | 103 +-----------------
tests/TestSuite_perf_vm2vm_virtio_net_perf.py | 47 ++------
2 files changed, 8 insertions(+), 142 deletions(-)
diff --git a/test_plans/perf_vm2vm_virtio_net_perf_test_plan.rst b/test_plans/perf_vm2vm_virtio_net_perf_test_plan.rst
index 60e7abd0..c284c0f3 100644
--- a/test_plans/perf_vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/perf_vm2vm_virtio_net_perf_test_plan.rst
@@ -97,58 +97,8 @@ Test Case 1: Perf VM2VM split ring vhost-user/virtio-net test with tcp traffic
6. Check iperf throughput can get expected data.
-Test Case 2: Perf VM2VM split ring vhost-user/virtio-net dequeue zero-copy test with tcp traffic
-================================================================================================
-1. Launch the Vhost sample by below commands::
-
- rm -rf vhost-net*
- ./testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
- --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
- testpmd>start
-
-2. Launch VM1 and VM2::
-
- taskset -c 13 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
- -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
- -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
- -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
- -chardev socket,id=char0,path=./vhost-net0 \
- -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
-
- taskset -c 15 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
- -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
- -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
- -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
- -chardev socket,id=char0,path=./vhost-net1 \
- -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocal::
-
- ifconfig ens5 1.1.1.2
- arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocal::
-
- ifconfig ens5 1.1.1.8
- arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Check the iperf performance between two VMs by below commands::
-
- Under VM1, run: `iperf -s -i 1`
- Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-6. Check iperf throughput can get expected data.
-
-Test Case 3: Perf VM2VM packed ring vhost-user/virtio-net test with tcp traffic
+Test Case 2: Perf VM2VM packed ring vhost-user/virtio-net test with tcp traffic
===============================================================================
1. Launch the Vhost sample by below commands::
@@ -198,54 +148,3 @@ Test Case 3: Perf VM2VM packed ring vhost-user/virtio-net test with tcp traffic
Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
6. Check iperf throughput can get expected data.
-
-Test Case 4: Perf VM2VM packed ring vhost-user/virtio-net dequeue zero-copy test with tcp traffic
-=================================================================================================
-
-1. Launch the Vhost sample by below commands::
-
- rm -rf vhost-net*
- ./testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
- --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
- testpmd>start
-
-2. Launch VM1 and VM2::
-
- taskset -c 13 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
- -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
- -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
- -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
- -chardev socket,id=char0,path=./vhost-net0 \
- -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
-
- taskset -c 15 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
- -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
- -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
- -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
- -chardev socket,id=char0,path=./vhost-net1 \
- -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocal::
-
- ifconfig ens3 1.1.1.2
- arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocal::
-
- ifconfig ens3 1.1.1.8
- arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Check the iperf performance between two VMs by below commands::
-
- Under VM1, run: `iperf -s -i 1`
- Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-6. Check iperf throughput can get expected data.
\ No newline at end of file
diff --git a/tests/TestSuite_perf_vm2vm_virtio_net_perf.py b/tests/TestSuite_perf_vm2vm_virtio_net_perf.py
index 7116eece..ac0e38a2 100644
--- a/tests/TestSuite_perf_vm2vm_virtio_net_perf.py
+++ b/tests/TestSuite_perf_vm2vm_virtio_net_perf.py
@@ -164,16 +164,12 @@ class TestPerfVM2VMVirtioNetPerf(TestCase):
self.verify("FAIL" not in status_result, "Exceeded Gap")
- def start_vhost_testpmd(self, zerocopy=False):
+ def start_vhost_testpmd(self):
"""
launch the testpmd with different parameters
"""
- if zerocopy is True:
- zerocopy_arg = ",dequeue-zero-copy=1"
- else:
- zerocopy_arg = ""
- vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=1%s' " % (self.base_dir, zerocopy_arg)
- vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=1%s' " % (self.base_dir, zerocopy_arg)
+ vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=1' " % self.base_dir
+ vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=1' " % self.base_dir
eal_params = self.dut.create_eal_parameters(cores=self.cores_list, prefix='vhost', no_pci=True)
para = " -- -i --nb-cores=2 --txd=1024 --rxd=1024"
self.command_line = self.path + eal_params + vdev1 + vdev2 + para
@@ -227,11 +223,11 @@ class TestPerfVM2VMVirtioNetPerf(TestCase):
self.vm_dut[0].send_expect("arp -s %s %s" % (self.virtio_ip2, self.virtio_mac2), "#", 10)
self.vm_dut[1].send_expect("arp -s %s %s" % (self.virtio_ip1, self.virtio_mac1), "#", 10)
- def prepare_test_env(self, zerocopy, path_mode, packed_mode=False):
+ def prepare_test_env(self, path_mode, packed_mode=False):
"""
start vhost testpmd and qemu, and config the vm env
"""
- self.start_vhost_testpmd(zerocopy)
+ self.start_vhost_testpmd()
self.start_vms(mode=path_mode, packed=packed_mode)
self.config_vm_env()
@@ -310,24 +306,10 @@ class TestPerfVM2VMVirtioNetPerf(TestCase):
"""
VM2VM split ring vhost-user/virtio-net test with tcp traffic
"""
- zerocopy = False
path_mode = "tso"
self.test_target = "split_tso"
self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
- self.prepare_test_env(zerocopy, path_mode)
- self.start_iperf_and_verify_vhost_xstats_info(mode="tso")
- self.handle_expected()
- self.handle_results()
-
- def test_vm2vm_split_ring_dequeue_zero_copy_iperf_with_tso(self):
- """
- VM2VM split ring vhost-user/virtio-net zero copy test with tcp traffic
- """
- zerocopy = True
- path_mode = "tso"
- self.test_target = "split_zero_copy_tso"
- self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
- self.prepare_test_env(zerocopy, path_mode)
+ self.prepare_test_env(path_mode)
self.start_iperf_and_verify_vhost_xstats_info(mode="tso")
self.handle_expected()
self.handle_results()
@@ -336,26 +318,11 @@ class TestPerfVM2VMVirtioNetPerf(TestCase):
"""
VM2VM packed ring vhost-user/virtio-net test with tcp traffic
"""
- zerocopy = False
path_mode = "tso"
self.test_target = "packed_tso"
self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
packed_mode = True
- self.prepare_test_env(zerocopy, path_mode, packed_mode)
- self.start_iperf_and_verify_vhost_xstats_info(mode="tso")
- self.handle_expected()
- self.handle_results()
-
- def test_vm2vm_packed_ring_dequeue_zero_copy_iperf_with_tso(self):
- """
- VM2VM packed ring vhost-user/virtio-net zero copy test with tcp traffic
- """
- zerocopy = True
- path_mode = "tso"
- packed_mode = True
- self.test_target = "packed_zero_copy_tso"
- self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
- self.prepare_test_env(zerocopy, path_mode, packed_mode)
+ self.prepare_test_env(path_mode, packed_mode)
self.start_iperf_and_verify_vhost_xstats_info(mode="tso")
self.handle_expected()
self.handle_results()
--
2.25.1