From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <dts-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id D150FA0509;
	Wed, 6 Apr 2022 11:09:45 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id C958340E2D;
	Wed, 6 Apr 2022 11:09:45 +0200 (CEST)
Received: from mga06.intel.com (mga06.intel.com [134.134.136.31])
	by mails.dpdk.org (Postfix) with ESMTP id EA1F040689
	for <dts@dpdk.org>; Wed, 6 Apr 2022 11:09:42 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649236183; x=1680772183;
	h=from:to:cc:subject:date:message-id:mime-version:
	content-transfer-encoding;
	bh=tfIvxs6ji3cvcuxAOs/CEPVNmAlfFZbr4hg9P9BtAHY=;
	b=RZCU2VTyhJ05Dw57/0nc2M42h89mRU47XHECLthnUrL7wXTVTHgTZmFr
	zyT0fQNUwRXtQ1+UoQ/GyC5OjoQvbZLYkGHl+5qP8F+Rqgn3wdRbc9+d+
	kjHwr+nkcr9vyjt8/fH9sPP1ZDX+wqILdruqeLeq9xYKtdrXCsU91sWPt
	kCLIXg1vnHbPZKOXHn9BDjP7BHSmhaBJ6LsMSApkSpMXOUUWIRkZjBJ2M
	7L4PUn3l1dZl1tVdtcNPa8cCxghLR/0WLmedtscb2eb6jETjZDWKwt0X8
	sCV+upTQu8/jkmH6HPIdsjl1xxMu0wFySAAWDJ2KahqMcBRKU1YJAaLu5
	w==;
X-IronPort-AV: E=McAfee;i="6200,9189,10308"; a="321688770"
X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="321688770"
Received: from orsmga008.jf.intel.com ([10.7.209.65])
	by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
	06 Apr 2022 02:09:42 -0700
X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="570424788"
Received: from unknown (HELO localhost.localdomain) ([10.239.251.222])
	by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
	06 Apr 2022 02:09:39 -0700
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 1/5] test_plans/vm2vm_virtio_net_perf_test_plan: delete CBDMA test case
Date: Wed, 6 Apr 2022 17:09:33 +0800
Message-Id: <20220406090933.28267-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: dts@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: test suite reviews and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dts-bounces@dpdk.org

As commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
has landed, delete the CBDMA related cases from
test_plans/vm2vm_virtio_net_perf_test_plan.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../vm2vm_virtio_net_perf_test_plan.rst | 720 ++----------------
 1 file changed, 84 insertions(+), 636 deletions(-)

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 6e679b5b..9787b658 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -44,88 +44,62 @@ in the UDP/IP stack with vm2vm split ring and packed ring vhost-user/virtio-net
 and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 3. Check Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring
 vhost-user/virtio-net mergeable path with CBDMA channel.
-4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost enqueue operation with multi-CBDMA channels.
+4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost
+enqueue operation with multi-CBDMA channels.
+
 Note:
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > v5.1.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1,
+due to the reconnect issue that exists in old qemu when testing multi-queues.
 3.For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.
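+
+A quick way to confirm these version requirements before testing (a minimal
+sketch; the qemu binary name is an assumption, and uname is run inside the VM)::
+
+    # qemu-system-x86_64 --version
+    # uname -r
+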
-Test flow
-=========
-
-Virtio-net <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-net
-
-Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
-=========================================================================
-
-1. Launch the Vhost sample on socket 0 by below commands::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
-    -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>start
-
-2. Launch VM1 and VM2 on socket 1::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+For more about dpdk-testpmd sample, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
 
-    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+For virtio-user vdev parameter, you can refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
 
-3. On VM1, set virtio device IP and run arp protocol::
+Prerequisites
+=============
 
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
+Topology
+--------
+    Test flow: Virtio-net-->Vhost-->Testpmd-->Vhost-->Virtio-net
 
-4. On VM2, set virtio device IP and run arp protocol::
+Hardware
+--------
+    Supported NICs: ALL
 
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
+Software
+--------
+    Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
 
-5. Check the iperf performance with different packet size between two VMs by below commands::
+General set up
+--------------
+1. Compile DPDK::
 
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=
+    # ninja -C -j 110
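+
+For example, with a native x86_64 build target (the build directory name below
+is only an illustration, not part of the original plan; substitute your own)::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all \
+      --default-library=static x86_64-native-linuxapp-gcc
+    # ninja -C x86_64-native-linuxapp-gcc -j 110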
-6. Check 2VMs can receive and send big packets to each other::
+Test case
+=========
 
-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1522
-    Port 1 should have rx packets above 1522
+Common steps
+------------
 
-Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic
-======================================================================================
+Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
+-------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test split ring to get tcp traffic throughput between 2 VMs.
 
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
+1. Launch the Vhost sample on socket 0 by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:80:04.0]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:80:04.1]' \
-    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>vhost enable tx all
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
+    -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2::
+2. Launch VM1 and VM2 on socket 1::
 
     taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -136,7 +110,8 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
     taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -147,7 +122,8 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
     ifconfig ens5 1.1.1.2
     arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
@@ -159,7 +135,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     ifconfig ens5 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
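+
+(If ifconfig/arp are not available in the guest, the equivalent iproute2
+commands can be used; the interface name ens5 is an assumption that depends on
+the guest's device enumeration)::
+
+    ip addr add 1.1.1.8/24 dev ens5
+    ip neigh add 1.1.1.2 lladdr 52:54:00:00:00:01 nud permanent dev ens5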
-5. Check the iperf performance between two VMs by below commands::
+5. Check the iperf performance with different packet size between two VMs by below commands::
 
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
@@ -170,18 +146,16 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-7. Check throughput and compare with case1, CBDMA enable performance should larger than w/o CBDMA performance when cross socket.
-
-Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
-=========================================================================
+Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic
+-------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test split ring to get udp traffic throughput between 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2::
 
@@ -195,7 +169,8 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -206,7 +181,8 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -229,13 +205,13 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 4: Check split ring virtio-net device capability
-==========================================================
+Test Case 3: Check split ring virtio-net device capability
+----------------------------------------------------------
+This case uses testpmd and QEMU to test split ring device capability in 2 VMs.
 
1.
Launch the Vhost sample by below commands:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' \ -- -i --nb-cores=2 --txd=1024 --rxd=1024 @@ -252,7 +228,8 @@ Test Case 4: Check split ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -263,7 +240,8 @@ Test Case 4: Check split ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2:: @@ -279,247 +257,13 @@ Test Case 4: Check split ring virtio-net device capability tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on -Test Case 5: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check -============================================================================================================================== - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 using qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Quit and relaunch vhost w/ diff CBDMA channels:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -8. Rerun step 5-6. - -9. 
Quit and relaunch vhost w/ iova=pa:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -10. Rerun step 5-6. - -11. Quit and relaunch vhost w/o CBDMA channels:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 - testpmd>vhost enable tx all - testpmd>start - -12. On VM1, set virtio device:: - - ethtool -L ens5 combined 4 - -13. On VM2, set virtio device:: - - ethtool -L ens5 combined 4 - -14. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -15. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -16. Quit and relaunch vhost with 1 queues:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 - testpmd>vhost enable tx all - testpmd>start - -17. On VM1, set virtio device:: - - ethtool -L ens5 combined 1 - -18. On VM2, set virtio device:: - - ethtool -L ens5 combined 1 - -19. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -20. Check the iperf performance, ensure queue0 can work from vhost side:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -Test Case 6: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check -================================================================================================================================== - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 using qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Quit and relaunch vhost ports w/o CBDMA channels:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -8. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -10. Quit and relaunch vhost ports with 1 queues:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 - testpmd>vhost enable tx all - testpmd>start - -11. On VM1, set virtio device:: - - ethtool -L ens5 combined 1 - -12. 
On VM2, set virtio device::
-
-    ethtool -L ens5 combined 1
-
-13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
-
-14. Check the iperf performance, ensure queue0 can work from vhost side::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
-==========================================================================
+Test Case 4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
+--------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test packed ring to get tcp traffic throughput between 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
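+
+(Optional sanity check that testpmd created both vhost-user sockets, assuming
+the default working directory used above)::
+
+    # ls -l ./vhost-net0 ./vhost-net1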
@@ -536,7 +280,8 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,\
+    mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -547,7 +292,8 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,\
+    mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -570,73 +316,13 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic
-=======================================================================================
-
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \
-    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 on socket 1 with qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
-
-    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-6. Check 2VMs can receive and send big packets to each other::
-
-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1522
-    Port 1 should have rx packets above 1522
-
-7. Check throughput and compare with case6, CBDMA enable performance should larger than w/o CBDMA performance when cross socket.
-
-Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
-==========================================================================
+Test Case 5: VM2VM packed ring vhost-user/virtio-net test with udp traffic
+--------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test packed ring to get udp traffic throughput between 2 VMs.
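+
+Note: for a UDP run, iperf is typically started in UDP mode in both VMs; a
+minimal sketch (the -b bandwidth value is an assumption)::
+
+    Under VM1, run: `iperf -s -u -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -u -b 1G -i 1 -t 60`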
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2::
 
@@ -653,7 +339,8 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -664,7 +351,8 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -687,13 +375,13 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 10: Check packed ring virtio-net device capability
-============================================================
+Test Case 6: Check packed ring virtio-net device capability
+-----------------------------------------------------------
+This case uses testpmd and QEMU to test packed ring device capability in 2 VMs.
 
1.
Launch the Vhost sample by below commands:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' \ -- -i --nb-cores=2 --txd=1024 --rxd=1024 @@ -710,7 +398,8 @@ Test Case 10: Check packed ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -721,7 +410,8 @@ Test Case 10: Check packed ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,host_ufo=on,guest_ufo=on,guest_ecn=on,packed=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,host_ufo=on,guest_ufo=on,guest_ecn=on,packed=on -vnc :12 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2:: @@ -736,245 +426,3 @@ Test Case 10: Check packed ring virtio-net device capability tx-tcp-segmentation: on tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on - -Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check -===================================================================================================================== - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Rerun step 5-6 five times. - -Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check -========================================================================================================================= - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Rerun step 5-6 five times. - -Test Case 13: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa -========================================================================================================= - -1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \ - --iova=pa -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 on socket 1 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Check 2VMs can receive and send big packets to each other:: - - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 - -Test Case 14: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check -================================================================================================================================= - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Rerun step 5-6 five times. -- 2.25.1