test suite reviews and discussions
* Re: [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan: update packed ring cbdma cases due to Qemu not support packed ring server mode
  2021-06-09 11:50 [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan: update packed ring cbdma cases due to Qemu not support packed ring server mode Yinan Wang
@ 2021-06-09  8:31 ` Tu, Lijuan
  2021-06-10  0:53   ` Wang, Yinan
  0 siblings, 1 reply; 3+ messages in thread
From: Tu, Lijuan @ 2021-06-09  8:31 UTC (permalink / raw)
  To: Wang, Yinan, dts; +Cc: Wang, Yinan



> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Yinan Wang
> Sent: 2021年6月9日 19:51
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan: update
> packed ring cbdma cases due to Qemu not support packed ring server mode
> 

I am confused. I saw the changes, and I think older qemu versions support packed ring server mode.
So do you mean that some newer qemu versions do not support packed ring server mode?
If so, could you please clarify which qemu versions are required for your cases.

> 
> [quoted patch trimmed; the full patch appears later in this thread]



* [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan: update packed ring cbdma cases due to Qemu not support packed ring server mode
@ 2021-06-09 11:50 Yinan Wang
  2021-06-09  8:31 ` Tu, Lijuan
  0 siblings, 1 reply; 3+ messages in thread
From: Yinan Wang @ 2021-06-09 11:50 UTC (permalink / raw)
  To: dts; +Cc: Yinan Wang

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 .../vm2vm_virtio_net_perf_test_plan.rst       | 106 +++---------------
 1 file changed, 17 insertions(+), 89 deletions(-)

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 78418e00..c3a6d739 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -71,7 +71,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -82,7 +82,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocal::
 
@@ -461,7 +461,7 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2::
+2. Launch VM1 and VM2 with qemu 5.2.0::
 
     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -516,7 +516,7 @@ Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp
     --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1],dmathr=512'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2 on socket 1::
+2. Launch VM1 and VM2 on socket 1 with qemu 5.2.0::
 
     taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -573,7 +573,7 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2::
+2. Launch VM1 and VM2 with qemu 5.2.0::
 
     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 40 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -628,7 +628,7 @@ Test Case 10: Check packed ring virtio-net device capability
     --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2,set TSO and UFO on in qemu command::
+2. Launch VM1 and VM2 with qemu 5.2.0,set TSO and UFO on in qemu command::
 
     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -672,11 +672,11 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
-2. Launch VM1 and VM2::
+2. Launch VM1 and VM2 with qemu 5.2.0::
 
     taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -685,7 +685,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
+    -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
 
@@ -696,7 +696,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
+    -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
 
@@ -721,43 +721,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
-7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
-
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
-
-8. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-10. Quit vhost ports and relaunch vhost ports with 1 queues::
-
-     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-     testpmd>start
-
-11. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-12. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
-
-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-14. Check the iperf performance, ensure queue0 can work from vhost side::
-
-     Under VM1, run: `iperf -s -i 1`
-     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+7. Rerun step 5-6 five times.
 
 Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
 =========================================================================================================================
@@ -765,8 +729,8 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
 2. Launch VM1 and VM2::
@@ -778,7 +742,7 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
+    -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
 
@@ -789,7 +753,7 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
+    -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
 
@@ -814,40 +778,4 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
-7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
-
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
-
-8. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-10. Quit vhost ports and relaunch vhost ports with 1 queues::
-
-     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-     testpmd>start
-
-11. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-12. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
-
-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-14. Check the iperf performance, ensure queue0 can work from vhost side::
-
-     Under VM1, run: `iperf -s -i 1`
-     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+7. Rerun step 5-6 five times.
\ No newline at end of file
-- 
2.25.1
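
The revised step 7 in both CBDMA cases replaces the quit/relaunch sequence with rerunning the scp/iperf checks of steps 5-6 five times. As an illustrative sketch only (not part of the patch; the helper name is hypothetical), the iperf command pair each rerun issues can be built without executing it:

```python
# Build the iperf commands from steps 5-6 of the test plan, without
# running them here; per the new step 7, the pair is rerun five times.
def iperf_commands(server_ip: str = "1.1.1.2", seconds: int = 60):
    vm1 = ["iperf", "-s", "-i", "1"]  # run under VM1 (server)
    vm2 = ["iperf", "-c", server_ip, "-i", "1", "-t", str(seconds)]  # under VM2
    return vm1, vm2

rounds = [iperf_commands() for _ in range(5)]  # five reruns per step 7
print(len(rounds))  # 5
```

A real harness would dispatch these command lists into each VM (e.g. over ssh) and compare throughput across rounds.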



* Re: [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan: update packed ring cbdma cases due to Qemu not support packed ring server mode
  2021-06-09  8:31 ` Tu, Lijuan
@ 2021-06-10  0:53   ` Wang, Yinan
  0 siblings, 0 replies; 3+ messages in thread
From: Wang, Yinan @ 2021-06-10  0:53 UTC (permalink / raw)
  To: Tu, Lijuan, dts



> -----Original Message-----
> From: Tu, Lijuan <lijuan.tu@intel.com>
> Sent: 2021年6月9日 16:31
> To: Wang, Yinan <yinan.wang@intel.com>; dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: RE: [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan:
> update packed ring cbdma cases due to Qemu not support packed ring
> server mode
> 
> 
> 
> > -----Original Message-----
> > From: dts <dts-bounces@dpdk.org> On Behalf Of Yinan Wang
> > Sent: 2021年6月9日 19:51
> > To: dts@dpdk.org
> > Cc: Wang, Yinan <yinan.wang@intel.com>
> > Subject: [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan:
> update
> > packed ring cbdma cases due to Qemu not support packed ring server
> mode
> >
> 
> I am confused. I saw the changes, and I think older qemu versions support
> packed ring server mode. So do you mean that some newer qemu versions do
> not support packed ring server mode? If so, could you please clarify which
> qemu versions are required for your cases.
Hi Lijuan,
Qemu does not support packed ring server mode now, so this patch changes the test plan accordingly.
Since some older qemu versions have a bug in the server mode tests, we need to move to a newer qemu that includes the bug fix.
BR,
Yinan
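
The fix above hinges on running a qemu new enough to include the bug fix (the test plan pins 5.2.0). As a hedged, illustrative sketch only (not part of the patch; `meets_minimum` is a hypothetical helper), a harness could gate the packed ring cases on the reported qemu version:

```python
# Sketch: check that a qemu version string meets the 5.2.0 minimum
# this thread settles on. The parsing is an assumption for
# illustration; real qemu version strings may carry distro suffixes.
def meets_minimum(version: str, minimum: str = "5.2.0") -> bool:
    """Return True if a dotted version string is >= the minimum."""
    def parse(v: str):
        # Keep only the leading numeric components (e.g. "5.2.0").
        return tuple(int(part) for part in v.split(".")[:3])
    return parse(version) >= parse(minimum)

print(meets_minimum("5.2.0"))  # True
print(meets_minimum("4.2.1"))  # False
```

In practice the version string would come from `qemu-system-x86_64 --version` on the host under test.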
> 
> > Signed-off-by: Yinan Wang <yinan.wang@intel.com>
> > ---
> >  .../vm2vm_virtio_net_perf_test_plan.rst       | 106 +++---------------
> >  1 file changed, 17 insertions(+), 89 deletions(-)
> >
> > diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> > b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> > index 78418e00..c3a6d739 100644
> > --- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> > +++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> > @@ -71,7 +71,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net
> test
> > with tcp traffic
> >      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
> >      -chardev socket,id=char0,path=./vhost-net0 \
> >      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
> > -    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-
> >
> modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest
> _tso
> > 4=on,guest_ecn=on -vnc :10
> > +    -device
> > + virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-
> modern=fal
> > +
> >
> se,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,g
> ues
> > + t_ecn=on -vnc :10
> >
> >     taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -
> smp 1 -
> > m 4096 \
> >      -object memory-backend-file,id=mem,size=4096M,mem-
> > path=/mnt/huge,share=on \ @@ -82,7 +82,7 @@ Test Case 1: VM2VM split
> ring
> > vhost-user/virtio-net test with tcp traffic
> >      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
> >      -chardev socket,id=char0,path=./vhost-net1 \
> >      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
> > -    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-
> >
> modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest
> _tso
> > 4=on,guest_ecn=on -vnc :12
> > +    -device
> > + virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-
> modern=fal
> > +
> >
> se,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,g
> ues
> > + t_ecn=on -vnc :12
> >
> >  3. On VM1, set virtio device IP and run arp protocal::
> >
> > @@ -461,7 +461,7 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
> >      --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
> >      testpmd>start
> >
> > -2. Launch VM1 and VM2::
> > +2. Launch VM1 and VM2 with qemu 5.2.0::
> >
> >      qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
> >      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> > @@ -516,7 +516,7 @@ Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp
> >      --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1],dmathr=512'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
> >      testpmd>start
> >
> > -2. Launch VM1 and VM2 on socket 1::
> > +2. Launch VM1 and VM2 on socket 1 with qemu 5.2.0::
> >
> >      taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
> >      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> > @@ -573,7 +573,7 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
> >      --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
> >      testpmd>start
> >
> > -2. Launch VM1 and VM2::
> > +2. Launch VM1 and VM2 with qemu 5.2.0::
> >
> >      qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 40 -m 4096 \
> >      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> > @@ -628,7 +628,7 @@ Test Case 10: Check packed ring virtio-net device capability
> >      --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
> >      testpmd>start
> >
> > -2. Launch VM1 and VM2,set TSO and UFO on in qemu command::
> > +2. Launch VM1 and VM2 with qemu 5.2.0,set TSO and UFO on in qemu command::
> >
> >      qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
> >      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> > @@ -672,11 +672,11 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
> >  1. Launch the Vhost sample by below commands::
> >
> >      rm -rf vhost-net*
> > -    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
> > -    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> > +    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
> > +    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> >      testpmd>start
> >
> > -2. Launch VM1 and VM2::
> > +2. Launch VM1 and VM2 with qemu 5.2.0::
> >
> >      taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
> >      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> > @@ -685,7 +685,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
> >      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> >      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
> >      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
> > -    -chardev socket,id=char0,path=./vhost-net0,server \
> > +    -chardev socket,id=char0,path=./vhost-net0 \
> >      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
> >      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
> >
> > @@ -696,7 +696,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
> >      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> >      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
> >      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
> > -    -chardev socket,id=char0,path=./vhost-net1,server \
> > +    -chardev socket,id=char0,path=./vhost-net1 \
> >      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
> >      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
> >
> > @@ -721,43 +721,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
> >      Under VM1, run: `iperf -s -i 1`
> >      Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> >
> > -7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
> > -
> > -    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
> > -    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> > -    testpmd>start
> > -
> > -8. Scp 1MB file form VM1 to VM2::
> > -
> > -    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> > -
> > -9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
> > -
> > -    Under VM1, run: `iperf -s -i 1`
> > -    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> > -
> > -10. Quit vhost ports and relaunch vhost ports with 1 queues::
> > -
> > -     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
> > -     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
> > -     testpmd>start
> > -
> > -11. On VM1, set virtio device::
> > -
> > -      ethtool -L ens5 combined 1
> > -
> > -12. On VM2, set virtio device::
> > -
> > -      ethtool -L ens5 combined 1
> > -
> > -13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
> > -
> > -     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> > -
> > -14. Check the iperf performance, ensure queue0 can work from vhost side::
> > -
> > -     Under VM1, run: `iperf -s -i 1`
> > -     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> > +7. Rerun step 5-6 five times.
> >
> >  Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
> >  =========================================================================================================================
> > @@ -765,8 +729,8 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
> >  1. Launch the Vhost sample by below commands::
> >
> >      rm -rf vhost-net*
> > -    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
> > -    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> > +    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
> > +    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> >      testpmd>start
> >
> >  2. Launch VM1 and VM2::
> > @@ -778,7 +742,7 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
> >      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> >      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
> >      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
> > -    -chardev socket,id=char0,path=./vhost-net0,server \
> > +    -chardev socket,id=char0,path=./vhost-net0 \
> >      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
> >      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
> >
> > @@ -789,7 +753,7 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
> >      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> >      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
> >      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
> > -    -chardev socket,id=char0,path=./vhost-net1,server \
> > +    -chardev socket,id=char0,path=./vhost-net1 \
> >      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
> >      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
> >
> > @@ -814,40 +778,4 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
> >      Under VM1, run: `iperf -s -i 1`
> >      Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> >
> > -7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
> > -
> > -    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
> > -    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> > -    testpmd>start
> > -
> > -8. Scp 1MB file form VM1 to VM2::
> > -
> > -    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> > -
> > -9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
> > -
> > -    Under VM1, run: `iperf -s -i 1`
> > -    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> > -
> > -10. Quit vhost ports and relaunch vhost ports with 1 queues::
> > -
> > -     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
> > -     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
> > -     testpmd>start
> > -
> > -11. On VM1, set virtio device::
> > -
> > -      ethtool -L ens5 combined 1
> > -
> > -12. On VM2, set virtio device::
> > -
> > -      ethtool -L ens5 combined 1
> > -
> > -13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
> > -
> > -     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> > -
> > -14. Check the iperf performance, ensure queue0 can work from vhost side::
> > -
> > -     Under VM1, run: `iperf -s -i 1`
> > -     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> > +7. Rerun step 5-6 five times.
> > \ No newline at end of file
> > --
> > 2.25.1
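
Since the updated cases above pin a specific QEMU (5.2.0 appears in the new steps) because older or other builds differ on packed ring server mode support, a small version gate can help whoever scripts these cases. This is only a sketch: the `version_ge` helper, the `required=5.2.0` floor, and the `sed` extraction of the QEMU version string are assumptions, not part of the patch.

```shell
# Hypothetical pre-check for the packed ring cases (not part of the patch):
# skip the run unless the installed QEMU meets the assumed 5.2.0 floor.

# version_ge A B -> exit status 0 when dotted version A >= B,
# using GNU sort's version ordering (-V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required=5.2.0
# "QEMU emulator version X.Y.Z ..." -> "X.Y.Z"; empty if qemu is missing.
detected=$(qemu-system-x86_64 --version 2>/dev/null | sed -n 's/.*version \([0-9.]*\).*/\1/p')

if version_ge "${detected:-0}" "$required"; then
    echo "qemu $detected ok, packed ring cases can run"
else
    echo "qemu ${detected:-not found} older than $required, skipping packed ring cases" >&2
fi
```

The `sort -V` trick avoids hand-parsing version components: if the required version still sorts first when the two are version-sorted together, the detected version is at least as new.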
> 



Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-09 11:50 [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan: update packed ring cbdma cases due to Qemu not support packed ring server mode Yinan Wang
2021-06-09  8:31 ` Tu, Lijuan
2021-06-10  0:53   ` Wang, Yinan
