test suite reviews and discussions
From: "Tu, Lijuan" <lijuan.tu@intel.com>
To: "Wang, Yinan" <yinan.wang@intel.com>, "dts@dpdk.org" <dts@dpdk.org>
Cc: "Wang, Yinan" <yinan.wang@intel.com>
Subject: Re: [dts] [PATCH v1] test_plans: add vm2vm cases with iperf performance check
Date: Mon, 13 Jan 2020 07:53:59 +0000
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BBA88EE@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <20200112215151.5045-1-yinan.wang@intel.com>

Applied.

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Yinan
> Sent: Monday, January 13, 2020 5:52 AM
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v1] test_plans: add vm2vm cases with iperf
> performance check
> 
> From: Wang Yinan <yinan.wang@intel.com>
> 
> add performance check for vm2vm cases
> 
> Signed-off-by: Wang Yinan <yinan.wang@intel.com>
> ---
>  .../vm2vm_virtio_net_perf_test_plan.rst       | 128 ++++++++++++++++--
>  1 file changed, 116 insertions(+), 12 deletions(-)
> 
> diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> index 0fe8400..2db0339 100644
> --- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> +++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> @@ -50,7 +50,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
> 
>  1. Launch the Vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -l 2-4 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
>      testpmd>start
> 
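A side note on the core arguments changed above: `-c 0xF0000000` selects lcores via a hex bitmask (bits 28-31), while the new `-l 2-4` lists lcores 2, 3 and 4 directly; with `--nb-cores=2`, one of the three acts as the main lcore and two do forwarding. A small sketch (Python, illustration only, not part of the patch) for converting between the two notations:

```python
# Convert between the EAL -l core list and the equivalent -c hex coremask.

def cores_to_mask(cores):
    """Build the -c coremask from an iterable of lcore ids."""
    mask = 0
    for c in cores:
        mask |= 1 << c
    return mask

def mask_to_cores(mask):
    """Recover the lcore ids encoded in a -c coremask."""
    return [c for c in range(mask.bit_length()) if mask >> c & 1]

# -l 2-4 corresponds to -c 0x1c
print(hex(cores_to_mask(range(2, 5))))   # 0x1c
# the old -c 0xF0000000 selected lcores 28-31
print(mask_to_cores(0xF0000000))         # [28, 29, 30, 31]
```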
>  2. Launch VM1 and VM2::
> @@ -92,7 +92,59 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
>      Port 0 should have tx packets above 1522
>      Port 1 should have rx packets above 1522
> 
> -Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic
> +7. Check that iperf throughput reaches the expected level.
> +
> +Test Case 2: VM2VM split ring vhost-user/virtio-net dequeue zero-copy test with tcp traffic
> +===========================================================================================
> +
> +1. Launch the Vhost sample by below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -l 2-4 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
> +    testpmd>start
> +
> +2. Launch VM1 and VM2::
> +
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +     -vnc :12 -daemonize
> +
> +    qemu-system-x86_64 -name us-vhost-vm2 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +     -vnc :11 -daemonize
> +
> +3. On VM1, set the virtio device IP and add a static ARP entry::
> +
> +    ifconfig ens3 1.1.1.2
> +    arp -s 1.1.1.8 52:54:00:00:00:02
> +
> +4. On VM2, set the virtio device IP and add a static ARP entry::
> +
> +    ifconfig ens3 1.1.1.8
> +    arp -s 1.1.1.2 52:54:00:00:00:01
> +
> +5. Check the iperf performance between two VMs by below commands::
> +
> +    Under VM1, run: `iperf -s -i 1`
> +    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
> +
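For the throughput check in step 7, automation typically parses the iperf client report rather than eyeballing it. A minimal parser sketch (Python; the sample line below is synthetic and shown only to illustrate the report format, and the pass threshold is test-bed specific):

```python
import re

def iperf_bandwidth_mbps(line):
    """Extract the bandwidth from an iperf (v2) report line, in Mbits/sec.

    Returns None when the line carries no bandwidth field.
    """
    m = re.search(r"([\d.]+)\s+([KMG])bits/sec", line)
    if not m:
        return None
    value = float(m.group(1))
    scale = {"K": 1e-3, "M": 1.0, "G": 1e3}[m.group(2)]
    return value * scale

# Synthetic sample line, illustrating the format only (not a measurement):
sample = "[  3]  0.0-30.0 sec  10.5 GBytes  3.00 Gbits/sec"
print(iperf_bandwidth_mbps(sample))  # 3000.0
```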
> +6. Check that both VMs can receive and send big packets to each other::
> +
> +    testpmd>show port xstats all
> +    Port 0 should have tx packets above 1522
> +    Port 1 should have rx packets above 1522
> +
> +7. Check that iperf throughput reaches the expected level.
> +
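The xstats check in step 6 can likewise be scripted by turning the `name: value` lines of `show port xstats all` into a dict. A sketch (Python; the output below is synthetic, and the size-bucket counter name is an assumption for illustration — take the real names from the actual testpmd output):

```python
def parse_xstats(text):
    """Parse `show port xstats all`-style 'name: value' lines into a dict."""
    stats = {}
    for line in text.splitlines():
        line = line.strip()
        if ":" in line:
            name, _, value = line.partition(":")
            value = value.strip()
            if value.isdigit():  # keep only integer counters
                stats[name.strip()] = int(value)
    return stats

# Synthetic output; the size-bucket stat name below is assumed for illustration.
sample = """
tx_good_packets: 52341
tx_size_1523_to_max_packets: 1874
"""
stats = parse_xstats(sample)
print(stats["tx_size_1523_to_max_packets"] > 0)  # True
```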
> +Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
>  =========================================================================
> 
>  1. Launch the Vhost sample by below commands::
> @@ -138,7 +190,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic
>      Port 0 should have tx packets above 1522
>      Port 1 should have rx packets above 1522
> 
> -Test Case 3: Check split ring virtio-net device capability
> +Test Case 4: Check split ring virtio-net device capability
>  ==========================================================
> 
>  1. Launch the Vhost sample by below commands::
> @@ -177,7 +229,7 @@ Test Case 3: Check split ring virtio-net device capability
>      tx-tcp-ecn-segmentation: on
>      tx-tcp6-segmentation: on
> 
> -Test Case 4: VM2VM virtio-net split ring mergeable zero copy test with large packet payload valid check
> +Test Case 5: VM2VM virtio-net split ring mergeable zero copy test with large packet payload valid check
>  =======================================================================================================
> 
>  1. Launch the Vhost sample by below commands::
> @@ -218,7 +270,7 @@ Test Case 4: VM2VM virtio-net split ring mergeable zero copy test with large pac
> 
>      Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> 
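Since the scp step is meant to prove that large payloads survive the zero-copy path uncorrupted, comparing checksums on both ends makes the pass/fail criterion explicit. A sketch (Python; the in-memory payload is a stand-in for the real transferred file):

```python
import hashlib

def md5sum(data):
    """MD5 digest of a byte payload, usable as before/after scp evidence."""
    return hashlib.md5(data).hexdigest()

# In the real test: hash the file on VM1 before scp and on VM2 after scp,
# then compare. Here a synthetic payload stands in for the large test file.
payload = b"x" * (8 * 1024 * 1024)  # 8 MiB stand-in
sent = md5sum(payload)
received = md5sum(payload)
print(sent == received)  # True
```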
> -Test Case 5: VM2VM virtio-net split ring non-mergeable zero copy test with large packet payload valid check
> +Test Case 6: VM2VM virtio-net split ring non-mergeable zero copy test with large packet payload valid check
>  ===========================================================================================================
> 
>  1. Launch the Vhost sample by below commands::
> @@ -259,13 +311,13 @@ Test Case 5: VM2VM virtio-net split ring non-mergeable zero copy test with large
> 
>      Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> 
> -Test Case 6: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
> +Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
>  ==========================================================================
> 
>  1. Launch the Vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -l 2-4 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  2. Launch VM1 and VM2::
> @@ -307,7 +359,7 @@ Test Case 6: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
>      Port 0 should have tx packets above 1522
>      Port 1 should have rx packets above 1522
> 
> -Test Case 7: VM2VM packed ring vhost-user/virtio-net test with udp traffic
> +7. Check that iperf throughput reaches the expected level.
> +
> +Test Case 8: VM2VM packed ring vhost-user/virtio-net dequeue zero-copy test with tcp traffic
> +============================================================================================
> +
> +1. Launch the Vhost sample by below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -l 2-4 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
> +    testpmd>start
> +
> +2. Launch VM1 and VM2::
> +
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
> +     -vnc :12 -daemonize
> +
> +    qemu-system-x86_64 -name us-vhost-vm2 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
> +     -vnc :11 -daemonize
> +
> +3. On VM1, set the virtio device IP and add a static ARP entry::
> +
> +    ifconfig ens3 1.1.1.2
> +    arp -s 1.1.1.8 52:54:00:00:00:02
> +
> +4. On VM2, set the virtio device IP and add a static ARP entry::
> +
> +    ifconfig ens3 1.1.1.8
> +    arp -s 1.1.1.2 52:54:00:00:00:01
> +
> +5. Check the iperf performance between two VMs by below commands::
> +
> +    Under VM1, run: `iperf -s -i 1`
> +    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
> +
> +6. Check that both VMs can receive and send big packets to each other::
> +
> +    testpmd>show port xstats all
> +    Port 0 should have tx packets above 1522
> +    Port 1 should have rx packets above 1522
> +
> +7. Check that iperf throughput reaches the expected level.
> +
> +Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
>  ==========================================================================
> 
>  1. Launch the Vhost sample by below commands::
> @@ -353,7 +457,7 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with udp traffic
>      Port 0 should have tx packets above 1522
>      Port 1 should have rx packets above 1522
> 
> -Test Case 8: Check packed ring virtio-net device capability
> +Test Case 10: Check packed ring virtio-net device capability
>  ===========================================================
> 
>  1. Launch the Vhost sample by below commands::
> @@ -392,8 +496,8 @@ Test Case 8: Check packed ring virtio-net device capability
>      tx-tcp-ecn-segmentation: on
>      tx-tcp6-segmentation: on
> 
> -Test Case 9: VM2VM packed ring virtio-net mergeable dequeue zero copy test with large packet payload valid check
> -================================================================================================================
> +Test Case 11: VM2VM packed ring virtio-net mergeable dequeue zero copy test with large packet payload valid check
> +=================================================================================================================
> 
>  1. Launch the Vhost sample by below commands::
> 
> @@ -433,7 +537,7 @@ Test Case 9: VM2VM packed ring virtio-net mergeable dequeue zero copy test with
> 
>      Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> 
> -Test Case 10: VM2VM packed ring virtio-net non-mergeable dequeue zero copy test with large packet payload valid check
> +Test Case 12: VM2VM packed ring virtio-net non-mergeable dequeue zero copy test with large packet payload valid check
>  =====================================================================================================================
> 
>  1. Launch the Vhost sample by below commands::
> --
> 2.17.1


Thread overview:
2020-01-12 21:51 Yinan
2020-01-13  7:53 ` Tu, Lijuan [this message]
