From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1] optimize pvp_qemu_multi_paths_port_restart testplan and testsuite
Date: Fri, 23 Dec 2022 15:35:45 +0800
Message-ID: <20221223073545.756712-1-weix.ling@intel.com>
1. Add the `disable-modern=true` parameter in the virtio 0.95 testcases.
2. Add `-a 0000:af:00.0` when starting the vhost-user testpmd.
3. Add `-a 0000:04:00.0,vectorized=1` in the virtio 0.95 and virtio 1.0
vector_rx path cases.
Signed-off-by: Wei Ling <weix.ling@intel.com>
---
...emu_multi_paths_port_restart_test_plan.rst | 108 +++++++++---------
...Suite_pvp_qemu_multi_paths_port_restart.py | 4 +-
2 files changed, 56 insertions(+), 56 deletions(-)
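As a quick sketch of the vhost side touched by this patch: the plan's "bind 1 NIC port to vfio-pci" step is usually done with the DPDK `dpdk-devbind.py` helper, and the testpmd invocation below is the one this patch extends with the `-a 0000:af:00.0` allow-list option. The PCI address and binary path are the plan's examples, not fixed values:

```shell
# Hypothetical bind step (the usual DPDK helper; address is the plan's example):
#   ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0

# Assemble the vhost-user testpmd command line this patch adds "-a" to.
PCI_ADDR="0000:af:00.0"
CMD="./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a ${PCI_ADDR} --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024"
echo "${CMD}"
```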
diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
index 017ea5f0..a621738d 100644
--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
@@ -19,27 +19,27 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
Test Case 1: pvp test with virtio 0.95 mergeable path
=====================================================
-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
2. Launch VM with mrg_rxbuf feature on::
- qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+ qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
- -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+ -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
-vnc :10
3. On VM, bind virtio net to vfio-pci and run testpmd::
@@ -66,26 +66,26 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
Test Case 2: pvp test with virtio 0.95 normal path
==================================================
-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
2. Launch VM with mrg_rxbuf feature off::
- qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+ qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-vnc :10
3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::
@@ -112,31 +112,31 @@ Test Case 2: pvp test with virtio 0.95 normal path
Test Case 3: pvp test with virtio 0.95 vector_rx path
=====================================================
-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
2. Launch VM with mrg_rxbuf feature off::
- qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+ qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
- -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-vnc :10
3. On VM, bind virtio net to vfio-pci and run testpmd without any tx-offloads::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \
--nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -158,23 +158,23 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path
Test Case 4: pvp test with virtio 1.0 mergeable path
====================================================
-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
2. Launch VM with 1 virtio, note: we need to add "disable-modern=false" to enable virtio 1.0::
- qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+ qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -204,23 +204,23 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
Test Case 5: pvp test with virtio 1.0 normal path
=================================================
-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
2. Launch VM with 1 virtio, note: we need to add "disable-modern=false" to enable virtio 1.0::
- qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+ qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
@@ -250,23 +250,23 @@ Test Case 5: pvp test with virtio 1.0 normal path
Test Case 6: pvp test with virtio 1.0 vector_rx path
====================================================
-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
2. Launch VM with 1 virtio, note: we need to add "disable-modern=false" to enable virtio 1.0::
- qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+ qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
- -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
- -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
@@ -274,7 +274,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path
3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \
--nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
diff --git a/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py b/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py
index 2b753eb1..9ae83dfe 100644
--- a/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py
+++ b/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py
@@ -101,8 +101,8 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
)
elif path == "vector_rx":
command = (
- self.path + "-c 0x3 -n 3 -- -i " + "--nb-cores=1 --txd=1024 --rxd=1024"
- )
+ self.path + "-c 0x3 -n 3 -a %s,vectorized=1 -- -i " + "--nb-cores=1 --txd=1024 --rxd=1024"
+ ) % self.vm_dut.get_port_pci(0)
self.vm_dut.send_expect(command, "testpmd> ", 30)
self.vm_dut.send_expect("set fwd mac", "testpmd> ", 30)
self.vm_dut.send_expect("start", "testpmd> ", 30)
--
2.25.1
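For reference, the vector_rx guest command assembled by the Python hunk above can be sketched standalone. The `path` and PCI address values here are stand-ins; in the real suite they come from `self.path` and `self.vm_dut.get_port_pci(0)`:

```python
# Sketch of the vector_rx testpmd command construction from the suite.
# Both values below are hypothetical placeholders for this illustration.
path = "./x86_64-native-linuxapp-gcc/app/dpdk-testpmd "
pci = "0000:04:00.0"  # the suite obtains this from self.vm_dut.get_port_pci(0)

# String concatenation first, then %-formatting fills in the PCI address,
# yielding the "-a <pci>,vectorized=1" allow-list argument the patch adds.
command = (
    path + "-c 0x3 -n 3 -a %s,vectorized=1 -- -i " + "--nb-cores=1 --txd=1024 --rxd=1024"
) % pci

print(command)
```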