test suite reviews and discussions
From: "Tu, Lijuan" <lijuan.tu@intel.com>
To: "Wang, Yinan" <yinan.wang@intel.com>, "dts@dpdk.org" <dts@dpdk.org>
Cc: "Wang, Yinan" <yinan.wang@intel.com>
Subject: Re: [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan: update packed ring cbdma cases due to Qemu not support packed ring server mode
Date: Wed, 9 Jun 2021 08:31:22 +0000	[thread overview]
Message-ID: <dab007717b3d46b09ecf4602186310df@intel.com> (raw)
In-Reply-To: <20210609115030.179202-1-yinan.wang@intel.com>



> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Yinan Wang
> Sent: June 9, 2021 19:51
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan: update packed ring cbdma cases due to Qemu not support packed ring server mode
> 

I'm confused. I saw the changes, and I believe older qemu versions do support packed ring server mode.
So do you mean that some newer qemu versions don't support packed ring server mode?
If so, could you please clarify which qemu versions are required for your cases?
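Since the answer presumably hinges on which qemu build is installed on the DUT, a small version gate in the test harness would make the requirement explicit. The sketch below is my own illustration, not part of the patch; `qemu_version_ok` is a hypothetical helper, and in practice the first argument would be parsed from `qemu-system-x86_64 --version` output.

```shell
#!/bin/sh
# Hypothetical helper: succeed if the detected qemu version ($1) is at
# least the required minimum ($2), using sort -V for version ordering.
qemu_version_ok() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example with a hard-coded version string; a real check would parse
# `qemu-system-x86_64 --version` instead of assuming "5.2.0".
if qemu_version_ok "5.2.0" "4.2.0"; then
    echo "qemu is new enough"
else
    echo "qemu is too old"
fi
```

The `sort -V | head -n1` trick works because the minimum requirement sorts first exactly when the detected version meets it.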

> Signed-off-by: Yinan Wang <yinan.wang@intel.com>
> ---
>  .../vm2vm_virtio_net_perf_test_plan.rst       | 106 +++---------------
>  1 file changed, 17 insertions(+), 89 deletions(-)
> 
> diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> index 78418e00..c3a6d739 100644
> --- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> +++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> @@ -71,7 +71,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
>      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net0 \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
> -    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
> +    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
> 
>     taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
>      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> @@ -82,7 +82,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
>      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
>      -chardev socket,id=char0,path=./vhost-net1 \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
> -    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
> +    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
> 
>  3. On VM1, set virtio device IP and run arp protocal::
> 
> @@ -461,7 +461,7 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
>      --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
>      testpmd>start
> 
> -2. Launch VM1 and VM2::
> +2. Launch VM1 and VM2 with qemu 5.2.0::
> 
>      qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
>      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> @@ -516,7 +516,7 @@ Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp
>      --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1],dmathr=512'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
>      testpmd>start
> 
> -2. Launch VM1 and VM2 on socket 1::
> +2. Launch VM1 and VM2 on socket 1 with qemu 5.2.0::
> 
>      taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
>      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> @@ -573,7 +573,7 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
>      --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
>      testpmd>start
> 
> -2. Launch VM1 and VM2::
> +2. Launch VM1 and VM2 with qemu 5.2.0::
> 
>      qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 40 -m 4096 \
>      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> @@ -628,7 +628,7 @@ Test Case 10: Check packed ring virtio-net device capability
>      --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
>      testpmd>start
> 
> -2. Launch VM1 and VM2,set TSO and UFO on in qemu command::
> +2. Launch VM1 and VM2 with qemu 5.2.0,set TSO and UFO on in qemu command::
> 
>      qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
>      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> @@ -672,11 +672,11 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
>  1. Launch the Vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
> -    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> +    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
> +    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
>      testpmd>start
> 
> -2. Launch VM1 and VM2::
> +2. Launch VM1 and VM2 with qemu 5.2.0::
> 
>      taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
>      -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
> @@ -685,7 +685,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
>      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
>      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
> -    -chardev socket,id=char0,path=./vhost-net0,server \
> +    -chardev socket,id=char0,path=./vhost-net0 \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
> 
> @@ -696,7 +696,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
>      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
>      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
> -    -chardev socket,id=char0,path=./vhost-net1,server \
> +    -chardev socket,id=char0,path=./vhost-net1 \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
> 
> @@ -721,43 +721,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
>      Under VM1, run: `iperf -s -i 1`
>      Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> 
> -7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
> -
> -    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
> -    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> -    testpmd>start
> -
> -8. Scp 1MB file form VM1 to VM2::
> -
> -    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> -
> -9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
> -
> -    Under VM1, run: `iperf -s -i 1`
> -    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> -
> -10. Quit vhost ports and relaunch vhost ports with 1 queues::
> -
> -     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
> -     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
> -     testpmd>start
> -
> -11. On VM1, set virtio device::
> -
> -      ethtool -L ens5 combined 1
> -
> -12. On VM2, set virtio device::
> -
> -      ethtool -L ens5 combined 1
> -
> -13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
> -
> -     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> -
> -14. Check the iperf performance, ensure queue0 can work from vhost side::
> -
> -     Under VM1, run: `iperf -s -i 1`
> -     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> +7. Rerun step 5-6 five times.
> 
>  Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
>  =========================================================================================================================
> @@ -765,8 +729,8 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
>  1. Launch the Vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
> -    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> +    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
> +    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
>      testpmd>start
> 
>  2. Launch VM1 and VM2::
> @@ -778,7 +742,7 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
>      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
>      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
> -    -chardev socket,id=char0,path=./vhost-net0,server \
> +    -chardev socket,id=char0,path=./vhost-net0 \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
> 
> @@ -789,7 +753,7 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
>      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
>      -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
> -    -chardev socket,id=char0,path=./vhost-net1,server \
> +    -chardev socket,id=char0,path=./vhost-net1 \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
> 
> @@ -814,40 +778,4 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
>      Under VM1, run: `iperf -s -i 1`
>      Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> 
> -7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
> -
> -    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
> -    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
> -    testpmd>start
> -
> -8. Scp 1MB file form VM1 to VM2::
> -
> -    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> -
> -9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
> -
> -    Under VM1, run: `iperf -s -i 1`
> -    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> -
> -10. Quit vhost ports and relaunch vhost ports with 1 queues::
> -
> -     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
> -     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
> -     testpmd>start
> -
> -11. On VM1, set virtio device::
> -
> -      ethtool -L ens5 combined 1
> -
> -12. On VM2, set virtio device::
> -
> -      ethtool -L ens5 combined 1
> -
> -13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
> -
> -     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> -
> -14. Check the iperf performance, ensure queue0 can work from vhost side::
> -
> -     Under VM1, run: `iperf -s -i 1`
> -     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
> +7. Rerun step 5-6 five times.
> \ No newline at end of file
> --
> 2.25.1
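For reference, the validation loop that the patch collapses into "Rerun step 5-6 five times" could be driven from the host roughly as follows. This is only a sketch of the test flow, not DTS code; the guest-side scp/iperf commands are shown as comments because they run inside VM1 and VM2, and the `1.1.1.x` addresses are taken from the test plan's own examples.

```shell
#!/bin/sh
# Sketch: repeat the scp + iperf validation (steps 5-6) five times.
for i in 1 2 3 4 5; do
    echo "validation iteration $i"
    # Step 5 (inside VM1): scp a 1MB file to VM2
    #   scp [xxx] root@1.1.1.8:/
    # Step 6 (iperf between the guests):
    #   VM1: iperf -s -i 1
    #   VM2: iperf -c 1.1.1.2 -i 1 -t 60
done
```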



Thread overview: 3+ messages
2021-06-09 11:50 Yinan Wang
2021-06-09  8:31 ` Tu, Lijuan [this message]
2021-06-10  0:53   ` Wang, Yinan
