From mboxrd@z Thu Jan 1 00:00:00 1970
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 2/4] test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan: add DPDK22.03 new feature
Date: Fri, 15 Apr 2022 17:04:38 +0800
Message-Id: <20220415090438.261234-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
List-Id: test suite reviews and discussions <dts.dpdk.org>

As commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
integrated dmadev into the vhost asynchronous data path, add the new
test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 ...p_virtio_user_4k_pages_cbdma_test_plan.rst | 455 ++++++++++++++++++
 1 file changed, 455 insertions(+)
 create mode 100644 test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan.rst

diff --git a/test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan.rst b/test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan.rst
new file mode 100644
index 00000000..e510aa8b
--- /dev/null
+++ b/test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan.rst
@@ -0,0 +1,455 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+=============================================
+vhost/virtio-user pvp with 4K-pages test plan
+=============================================
+
+DPDK 19.02 added support for using virtio-user without hugepages.
+The --no-huge mode was augmented to use memfd-backed memory (on systems that support memfd),
+to allow using virtio-user-based NICs without hugepages.
+
+For more about the dpdk-testpmd sample application, please refer to the DPDK documentation:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For virtio-user vdev parameters, please refer to the DPDK documentation:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: Vhost-user-->Virtio-user
+
+Hardware
+--------
+Supported NICs: ALL
+
+Software
+--------
+Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
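+
+   For example, with the x86_64-native-linuxapp-gcc build directory assumed by
+   the dpdk-testpmd commands throughout this plan, the build step may look like::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    # ninja -C x86_64-native-linuxapp-gcc -j 110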
+
+2. Get the PCI device ID and DMA device IDs of the DUT; for example, 0000:18:00.0 is a PCI device ID, and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA channels to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+
+   For example, bind 1 NIC port and 1 CBDMA channel::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0 0000:80:04.0
+
+Test Case 1: Basic test vhost/virtio-user split ring with 4K-pages and cbdma enable
+------------------------------------------------------------------------------------
+This case uses testpmd to test the split ring path with 4K-pages and cbdma enable.
+
+1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:af:00.0 -a 0000:80:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=1 --lcore-dma=[lcore32@0000:80:04.0]
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+
+4. Launch virtio-user with 4K-pages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 33-34 -n 4 --no-huge -m 1024 --file-prefix=virtio-user --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1 -- -i
+    testpmd> set fwd mac
+    testpmd> start
+
+5. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with below command::
+
+    testpmd> show port stats all
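+
+   Optionally, since both testpmd instances run with --no-huge, hugepage usage
+   can be double-checked on the host while testpmd is running (a sanity check
+   only, not a pass/fail criterion of this case)::
+
+    grep Huge /proc/meminfo
+
+   With 4K-pages in use, HugePages_Free should remain equal to HugePages_Total.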
+
+Test Case 2: Basic test vhost/virtio-user packed ring with 4K-pages and cbdma enable
+-------------------------------------------------------------------------------------
+This case uses testpmd to test the packed ring path with 4K-pages and cbdma enable.
+
+1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    modprobe vfio-pci
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:af:00.0 0000:80:04.0
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:af:00.0 -a 0000:80:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=1 --lcore-dma=[lcore32@0000:80:04.0]
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+
+4. Launch virtio-user with 4K-pages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 33-34 -n 4 --no-huge -m 1024 --file-prefix=virtio-user --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
+    testpmd> set fwd mac
+    testpmd> start
+
+5. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with below command::
+
+    testpmd> show port stats all
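+
+   If no external packet generator is available, a rough functional check can
+   reuse testpmd's own traffic from the vhost side, similar to Test Case 5
+   below (a functional sketch only, not a substitute for the throughput test)::
+
+    testpmd> set txpkts 64
+    testpmd> start tx_first 32
+    testpmd> show port stats all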
+
+Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and CBDMA enable test with tcp traffic
+----------------------------------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the split ring path with 4K-pages and cbdma enable to forward packets.
+
+1. Bind 2 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30-32 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:80:04.0 -a 0000:80:04.1 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore31@0000:80:04.0,lcore32@0000:80:04.1]
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mkdir /mnt/tmpfs_nohuge1
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=8G
+    mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=8G
+
+4. Launch VM1 and VM2::
+
+    taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+    taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+
+5. On VM1, set the virtio device IP address and add a static ARP entry::
+
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+6. On VM2, set the virtio device IP address and add a static ARP entry::
+
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+7. Check the iperf performance between the two VMs with below commands::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+8. Check that the 2 VMs can receive and send big packets to each other::
+
+    testpmd> show port xstats all
+    Port 0 should have tx packets above 1522
+    Port 1 should have rx packets above 1522
+
+Test Case 4: vm2vm vhost/virtio-net packed ring multi queues with 4K-pages and cbdma enable
+--------------------------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the packed ring path with 4K-pages and cbdma enable to forward packets.
+
+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \
+    0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore30@0000:00:04.4,lcore30@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore32@0000:80:04.4,lcore32@0000:80:04.5,lcore32@0000:80:04.6,lcore33@0000:80:04.7]
+
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mkdir /mnt/tmpfs_nohuge1
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+    mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G
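+
+   Optionally, verify that both tmpfs mounts are in place before launching the
+   VMs::
+
+    mount | grep tmpfs_nohuge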
+
+4. Launch VM1 and VM2 with QEMU::
+
+    taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+5. On VM1, set the virtio device IP address and add a static ARP entry::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+6. On VM2, set the virtio device IP address and add a static ARP entry::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+7. Scp a 1MB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`, where [xxx] is the file name
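+
+   Optionally, verify the integrity of the transferred file with checksums::
+
+    Under VM1, run: `md5sum [xxx]`
+    Under VM2, run: `md5sum /[xxx]`
+
+   The two digests should match.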
+
+8. Check the iperf performance between the two VMs with below commands::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+9. Quit and relaunch vhost with different CBDMA channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:80:04.0,lcore31@0000:00:04.2,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.1,lcore32@0000:00:04.3,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore32@0000:80:04.4,lcore32@0000:80:04.5,lcore32@0000:80:04.6,lcore33@0000:80:04.7]
+    testpmd> start
+
+10. Rerun steps 6-7.
+
+11. Quit and relaunch vhost without CBDMA channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
+    testpmd> start
+
+12. On VM1, set the virtio device::
+
+    ethtool -L ens5 combined 4
+
+13. On VM2, set the virtio device::
+
+    ethtool -L ens5 combined 4
+
+14. Scp a 1MB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`, where [xxx] is the file name
+
+15. Check the iperf performance and compare it with the CBDMA enabled performance; ensure the CBDMA enabled performance is higher::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+16. Quit and relaunch vhost with 1 queue::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+    testpmd> start
+
+17. On VM1, set the virtio device::
+
+    ethtool -L ens5 combined 1
+
+18. On VM2, set the virtio device::
+
+    ethtool -L ens5 combined 1
+
+19. Scp a 1MB file from VM1 to VM2 and check that the file can be forwarded successfully by scp::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`, where [xxx] is the file name
+
+20. Check the iperf performance; ensure queue0 can work from the vhost side::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
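+
+   Optionally, confirm inside each VM that a queue reconfiguration took effect::
+
+    ethtool -l ens5
+
+   The "Current hardware settings" section should report the configured
+   Combined count.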
+
+Test Case 5: loopback packed ring large chain packets 4K-pages stress test with server mode and cbdma enable
+-------------------------------------------------------------------------------------------------------------
+This case uses testpmd to stress test the packed ring path with 4K-pages and cbdma enable with chained packet forwarding.
+
+1. Bind 1 CBDMA channel to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30-31 -n 4 --no-huge -m 1024 -a 0000:80:04.0 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' --iova=va -- -i --no-numa --socket-num=1 --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore31@0000:80:04.0]
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+
+4. Launch virtio-user and start testpmd::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 -n 4 --no-huge -m 1024 --file-prefix=testpmd0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1 \
+    -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
+    testpmd> set fwd mac
+    testpmd> start
+
+5. Send large packets from vhost and check that virtio can receive them::
+
+    testpmd> set txpkts 65535,65535,65535,65535,65535
+    testpmd> start tx_first 32
+    testpmd> show port stats all
+
+Test Case 6: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and cbdma enable
+-----------------------------------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the split and packed ring paths with 1G/4K-pages and cbdma enable to forward packets.
+
+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \
+    0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7]
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mkdir /mnt/tmpfs_nohuge1
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+    mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G
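+
+   Note that vhost in this case runs on hugepages (no --no-huge) while the VMs
+   use 4K-pages, so the host needs 1G hugepages available before step 2; the
+   count can be checked with::
+
+    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages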
+
+4. Launch VM1 and VM2 with QEMU::
+
+    taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,packed=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+5. On VM1, set the virtio device IP address and add a static ARP entry::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+6. On VM2, set the virtio device IP address and add a static ARP entry::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+7. Scp a 1MB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`, where [xxx] is the file name
+
+8. Check the iperf performance between the two VMs with below commands::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-- 
2.25.1