From mboxrd@z Thu Jan  1 00:00:00 1970
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 5/7] test_plans/vm2vm_virtio_net_dsa_test_plan: add vm2vm_virtio_net_dsa testplan
Date: Sat, 7 May 2022 05:59:39 -0400
Message-Id: <20220507095939.311120-1-weix.ling@intel.com>
Add vm2vm_virtio_net_dsa_test_plan.rst into test_plans.

Signed-off-by: Wei Ling
---
 test_plans/vm2vm_virtio_net_dsa_test_plan.rst | 1388 +++++++++++++++++
 1 file changed, 1388 insertions(+)
 create mode 100644 test_plans/vm2vm_virtio_net_dsa_test_plan.rst

diff --git a/test_plans/vm2vm_virtio_net_dsa_test_plan.rst b/test_plans/vm2vm_virtio_net_dsa_test_plan.rst
new file mode 100644
index 00000000..9906714b
--- /dev/null
+++ b/test_plans/vm2vm_virtio_net_dsa_test_plan.rst
@@ -0,0 +1,1388 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+=====================================================
+VM2VM vhost-user/virtio-net with DSA driver test plan
+=====================================================
+
+Description
+===========
+
+The vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and is implemented in an asynchronous way.
+In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
+channels, and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue operation with CBDMA channels is supported
+in both split and packed ring.
+
+This document provides the test plan for testing the following features when Vhost-user uses the asynchronous data path with
+the DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in VM2VM virtio-net topology.
+1. Check the Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring
+vhost-user/virtio-net mergeable path.
+2. Check that the payload of large packets (larger than 1MB) is valid after forwarding packets with vm2vm split ring
+and packed ring vhost-user/virtio-net mergeable and non-mergeable paths.
+3. Dynamic change of the multi-queue number in vm2vm vhost-user/virtio-net with split ring and packed ring.
+
+IOMMU impact:
+If the IOMMU is off, idxd can work with iova=pa.
+If the IOMMU is on, the kernel DSA driver can only work with iova=va by programming the IOMMU; it can't use iova=pa (forwarding does not work because the packet payload is wrong).
+
+Note:
+1. For packed virtqueue virtio-net tests, qemu version > 4.2.0 and VM kernel version > v5.1 are required, and packed ring multi-queue does not support reconnect in qemu yet.
+2. For split virtqueue virtio-net with multi-queue server mode tests, qemu version >= 5.2.0 is required, due to a reconnect issue in older qemu with multi-queue tests.
+3. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
+exceed the IOMMU's max capability; it is better to use 1G guest hugepages.
+4. A DPDK local patch for the vhost pmd is needed when testing the Vhost asynchronous data path with testpmd, and this suite has not yet been automated.
+
+Prerequisites
+=============
+
+Topology
+--------
+   Test flow: Virtio-net <-> Vhost-user <-> Testpmd <-> Vhost-user <-> Virtio-net
+
+Software
+--------
+   iperf
+   qemu: https://download.qemu.org/qemu-6.2.0.tar.xz
+
+General set up
+--------------
+1. Compile DPDK::
+
+   # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=
+   # ninja -C -j 110
+   For example,
+   CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
+   ninja -C x86_64-native-linuxapp-gcc -j 110
+
+2. Get the PCI device ID and DSA device ID of the DUT. For example, 0000:4f:00.1 is a PCI device ID and 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+
+   # ./usertools/dpdk-devbind.py -s
+
+   Network devices using kernel driver
+   ===================================
+   0000:4f:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
+
+   DMA devices using kernel driver
+   ===============================
+   0000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+   0000:6f:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+   0000:74:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+   0000:79:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+   0000:e7:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+   0000:ec:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+   0000:f1:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+   0000:f6:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind DSA devices to the DPDK vfio-pci driver::
+
+   # ./usertools/dpdk-devbind.py -b vfio-pci
+
+   For example, bind 2 DMA devices to the vfio-pci driver:
+   # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
+
+.. note::
+
+   One DPDK DSA device can create 8 WQs at most. Below is an example where the DPDK DSA device will create one and
+   eight WQs for DSA devices 0000:e7:01.0 and 0000:ec:01.0. The value of "max_queues" is 1~8:
+   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:e7:01.0,max_queues=1 -- -i
+   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:ec:01.0,max_queues=8 -- -i
+
+2. Bind DSA devices to the kernel idxd driver, and configure a Work Queue (WQ)::
+
+   # ./usertools/dpdk-devbind.py -b idxd
+   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q
+
+.. note::
+
+   It is better to reset the WQ before operating DSA devices bound to the idxd driver:
+   # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset
+   You can check it by 'ls /dev/dsa'
+   numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
+   numWq: number of workqueues per DSA endpoint, where 1<=numWq<=8
+
+   For example, bind 2 DMA devices to the idxd driver and configure WQs:
+
+   # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
+   Check the WQs by 'ls /dev/dsa'; you can find "wq0.0 wq2.0 wq2.1 wq2.2 wq2.3"
+
+Test Case 1: VM2VM vhost-user/virtio-net split ring test TSO with dsa dpdk driver
+---------------------------------------------------------------------------------
+This case tests the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path
+by verifying the TSO/cksum in the TCP/IP stack when vhost uses asynchronous enqueue operations with the dsa dpdk driver.
+
+1. Bind 1 dsa device to vfio-pci like common step 1::
+
+   # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0
+
+2. 
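The test cases below steer each vhost worker lcore to specific DSA virtual channels through the ``--lcore-dma`` option, realizing the M:N vring/DMA-channel mapping described earlier. As a reading aid, here is a minimal sketch of how one of those mapping strings decomposes; the ``parse_lcore_dma`` helper is illustrative only, not a DPDK API (testpmd does this parsing internally):

```python
# Decompose a --lcore-dma argument, such as the ones used in the test
# cases below, into {lcore: [DSA queue, ...]}. Illustrative helper only.
def parse_lcore_dma(arg):
    mapping = {}
    for entry in arg.strip("[]").split(","):
        lcore, _, queue = entry.partition("@")
        mapping.setdefault(lcore, []).append(queue)
    return mapping

m = parse_lcore_dma("[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q0]")
print(m)  # {'lcore2': ['0000:e7:01.0-q0', '0000:e7:01.0-q1'], 'lcore3': ['0000:ec:01.0-q0']}
```

Note that the same queue may appear under several lcores and one lcore may drive several queues, which is exactly the M:N sharing the description section mentions.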
Launch the Vhost testpmd by below commands::

+   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:e7:01.0,max_queues=2 \
+   --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \
+   --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \
+   --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore3@0000:e7:01.0-q1]
+   testpmd>start
+
+3. Launch VM1 and VM2 with split ring mergeable path and tso on::
+
+   # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+   -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
+   -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+   -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+   -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+   -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+   -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+   -chardev socket,id=char0,path=./vhost-net0 \
+   -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+   -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+   # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+   -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
+   -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+   -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+   -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+   -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+   -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+   -chardev socket,id=char0,path=./vhost-net1 \
+   -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+   -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+4. On VM1, set the virtio device IP and run the arp protocol::
+
+   # ifconfig ens5 1.1.1.2
+   # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set the virtio device IP and run the arp protocol::
+
+   # ifconfig ens5 1.1.1.8
+   # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Check the iperf performance between the two VMs by below commands::
+
+   # iperf -s -i 1
+   # iperf -c 1.1.1.2 -i 1 -t 60
+
+7. Check that the 2 VMs can receive and send big packets to each other through the vhost log. Port 0 should have tx packets above 1522, and Port 1 should have rx packets above 1522::
+
+   testpmd>show port xstats all
+
+Test Case 2: VM2VM vhost-user/virtio-net split ring mergeable path 8 queues test with large packet payload with dsa dpdk driver
+-------------------------------------------------------------------------------------------------------------------------------
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packets forwarding in
+vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses asynchronous enqueue operations with the dsa dpdk driver.
+The dynamic change of the multi-queue number and iova as VA and PA mode are also tested.
+
+1. Bind 4 dsa devices to vfio-pci like common step 1::
+
+   # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0
+
+2. Launch the Vhost testpmd by below commands::
+
+   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:e7:01.0,max_queues=8 -a 0000:ec:01.0,max_queues=8 \
+   --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+   --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+   --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1,lcore2@0000:e7:01.0-q2,lcore2@0000:e7:01.0-q3,lcore2@0000:e7:01.0-q4,lcore2@0000:e7:01.0-q5,lcore3@0000:e7:01.0-q6,lcore3@0000:e7:01.0-q7,lcore4@0000:ec:01.0-q0,lcore4@0000:ec:01.0-q1,lcore4@0000:ec:01.0-q2,lcore4@0000:ec:01.0-q3,lcore4@0000:ec:01.0-q4,lcore4@0000:ec:01.0-q5,lcore4@0000:ec:01.0-q6,lcore5@0000:ec:01.0-q7]
+   testpmd>start
+
+3. Launch VM1 and VM2 using qemu 6.2.0::
+
+   # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+   -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
+   -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+   -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+   -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+   -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+   -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+   -chardev socket,id=char0,path=./vhost-net0,server \
+   -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+   -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+   # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+   -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
+   -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+   -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+   -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+   -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+   -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+   -chardev socket,id=char0,path=./vhost-net1,server \
+   -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+   -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+4. On VM1, set the virtio device IP and run the arp protocol::
+
+   # ethtool -L ens5 combined 8
+   # ifconfig ens5 1.1.1.2
+   # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set the virtio device IP and run the arp protocol::
+
+   # ethtool -L ens5 combined 8
+   # ifconfig ens5 1.1.1.8
+   # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp a 1MB file from VM1 to VM2::
+
+   # scp root@1.1.1.8:/
+
+7. Check the iperf performance between the two VMs by below commands::
+
+   # iperf -s -i 1
+   # iperf -c 1.1.1.2 -i 1 -t 60
+
+8. 
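The scp and iperf steps above transfer a large file and measure throughput, but the pass criterion of this case is that the payload arrives intact. A minimal, self-contained sketch of that validity check follows; it runs locally as an illustration, whereas on the real setup the second digest would be computed on VM2 after the scp:

```python
import hashlib
import os
import tempfile

# Build a 1 MB payload and record its digest before the transfer.
payload = os.urandom(1024 * 1024)
digest_before = hashlib.md5(payload).hexdigest()

# Stand-in for the scp in step 6: write the file out and read it back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name
with open(path, "rb") as f:
    digest_after = hashlib.md5(f.read()).hexdigest()
os.unlink(path)

# The payload is valid only if both digests match.
print("payload valid" if digest_before == digest_after else "payload corrupted")
```

Comparing digests rather than byte counts catches the corrupted-payload failure mode mentioned in the IOMMU note (forwarding "works" but the data is wrong).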
Quit and relaunch vhost w/ diff dsa channels::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --f= ile-prefix=3Dvhost -a 0000:f1:01.0,max_queues=3D8 -a 0000:f6:01.0,max_queu= es=3D8 \=0D + --vdev 'net_vhost0,iface=3Dvhost-net0,client=3D1,queues=3D8,dmas=3D[txq0;= txq1;txq2;txq3;txq4;txq5;txq6]' \=0D + --vdev 'net_vhost1,iface=3Dvhost-net1,client=3D1,queues=3D8,dmas=3D[txq1;= txq2;txq3;txq4;txq5;txq6]' \=0D + --iova=3Dva -- -i --nb-cores=3D4 --txd=3D1024 --rxd=3D1024 --rxq=3D8 --tx= q=3D8 --lcore-dma=3D[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore2@0= 000:f1:01.0-q2,lcore2@0000:f1:01.0-q3,lcore3@0000:f1:01.0-q0,lcore3@0000:f1= :01.0-q2,lcore3@0000:f1:01.0-q4,lcore3@0000:f1:01.0-q5,lcore3@0000:f1:01.0-= q6,lcore3@0000:f1:01.0-q7,lcore4@0000:f1:01.0-q1,lcore4@0000:f1:01.0-q3,lco= re4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore4@0000:f6:01.0-q2,lcore4@00= 00:f6:01.0-q3,lcore4@0000:f6:01.0-q4,lcore4@0000:f6:01.0-q5,lcore4@0000:f6:= 01.0-q6,lcore5@0000:f6:01.0-q7]=0D + testpmd>start=0D +=0D +9. Rerun step 6-7.=0D +=0D +10. 
Quit and relaunch vhost w/ iova=3Dpa::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --f= ile-prefix=3Dvhost -a 0000:e7:01.0,max_queues=3D8 -a 0000:ec:01.0,max_queu= es=3D8 \=0D + --vdev 'net_vhost0,iface=3Dvhost-net0,client=3D1,queues=3D8,dmas=3D[txq0;= txq1;txq2;txq3;txq4;txq5;txq6]' \=0D + --vdev 'net_vhost1,iface=3Dvhost-net1,client=3D1,queues=3D8,dmas=3D[txq1;= txq2;txq3;txq4;txq5;txq6]' \=0D + --iova=3Dpa -- -i --nb-cores=3D4 --txd=3D1024 --rxd=3D1024 --rxq=3D8 --tx= q=3D8 --lcore-dma=3D[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1,lcore2@0= 000:e7:01.0-q2,lcore2@0000:e7:01.0-q3,lcore3@0000:e7:01.0-q0,lcore3@0000:e7= :01.0-q2,lcore3@0000:e7:01.0-q4,lcore3@0000:e7:01.0-q5,lcore3@0000:e7:01.0-= q6,lcore3@0000:e7:01.0-q7,lcore4@0000:e7:01.0-q1,lcore4@0000:e7:01.0-q3,lco= re4@0000:ec:01.0-q0,lcore4@0000:ec:01.0-q1,lcore4@0000:ec:01.0-q2,lcore4@00= 00:ec:01.0-q3,lcore4@0000:ec:01.0-q4,lcore4@0000:ec:01.0-q5,lcore4@0000:ec:= 01.0-q6,lcore5@0000:ec:01.0-q7]=0D + testpmd>start=0D +=0D +11. Rerun step 6-7.=0D +=0D +12. Quit vhost ports and relaunch vhost ports w/o dsa channels::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --f= ile-prefix=3Dvhost --vdev 'net_vhost0,iface=3Dvhost-net0,client=3D1,queues= =3D8' \=0D + --vdev 'net_vhost1,iface=3Dvhost-net1,client=3D1,queues=3D8' -- -i --nb-= cores=3D4 --txd=3D1024 --rxd=3D1024 --rxq=3D4 --txq=3D4=0D + testpmd>start=0D +=0D +13. On VM1, set virtio device::=0D +=0D + # ethtool -L ens5 combined 4=0D +=0D +14. On VM2, set virtio device::=0D +=0D + # ethtool -L ens5 combined 4=0D +=0D +15. Rerun step 6-7.=0D +=0D +16. 
Quit vhost ports and relaunch vhost ports with 1 queues::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --= file-prefix=3Dvhost --vdev 'net_vhost0,iface=3Dvhost-net0,client=3D1,queues= =3D4' \=0D + --vdev 'net_vhost1,iface=3Dvhost-net1,client=3D1,queues=3D4' -- -i --nb= -cores=3D4 --txd=3D1024 --rxd=3D1024 --rxq=3D1 --txq=3D1=0D + testpmd>start=0D +=0D +17. On VM1, set virtio device::=0D +=0D + # ethtool -L ens5 combined 1=0D +=0D +18. On VM2, set virtio device::=0D +=0D + # ethtool -L ens5 combined 1=0D +=0D +19. Rerun step 6-7.=0D +=0D +Test Case 3: VM2VM vhost-user/virtio-net split ring non-mergeable path 8 q= ueues test with large packet payload with dsa dpdk driver=0D +--------------------------------------------------------------------------= ----------------------------------------------------------=0D +This case uses iperf and scp to test the payload of large packet (larger t= han 1MB) is valid after packets forwarding in=0D +vm2vm vhost-user/virtio-net split ring non-mergeable path when vhost uses = the asynchronous enqueue operations with dsa dpdk driver.=0D +The dynamic change of multi-queues number also test.=0D +=0D +1. Bind 2 dsa device to vfio-pci like common step 1::=0D +=0D + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0=0D +=0D +2. 
Launch the Vhost sample by below commands::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --f= ile-prefix=3Dvhost -a 0000:e7:01.0,max_queues=3D8 -a 0000:ec:01.0,max_queue= s=3D8 \=0D + --vdev 'net_vhost0,iface=3Dvhost-net0,client=3D1,queues=3D8,dmas=3D[txq0;= txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \=0D + --vdev 'net_vhost1,iface=3Dvhost-net1,client=3D1,queues=3D8,dmas=3D[txq0;= txq1;txq2;txq3;txq4;txq5;txq6]' \=0D + --iova=3Dva -- -i --nb-cores=3D4 --txd=3D1024 --rxd=3D1024 --rxq=3D8 --tx= q=3D8 --lcore-dma=3D[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1,lcore2@0= 000:e7:01.0-q2,lcore2@0000:e7:01.0-q3,lcore2@0000:e7:01.0-q4,lcore2@0000:e7= :01.0-q5,lcore3@0000:e7:01.0-q6,lcore3@0000:e7:01.0-q7,lcore4@0000:ec:01.0-= q0,lcore4@0000:ec:01.0-q1,lcore4@0000:ec:01.0-q2,lcore4@0000:ec:01.0-q3,lco= re4@0000:ec:01.0-q4,lcore4@0000:ec:01.0-q5,lcore4@0000:ec:01.0-q6,lcore5@00= 00:ec:01.0-q7]=0D + testpmd>start=0D +=0D +3. Launch VM1 and VM2::=0D +=0D + # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_6= 4 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \=0D + -object memory-backend-file,id=3Dmem,size=3D4096M,mem-path=3D/mnt/huge1G0= ,share=3Don \=0D + -numa node,memdev=3Dmem -mem-prealloc -drive file=3D/root/xingguang/ubunt= u20-04.img \=0D + -chardev socket,path=3D/tmp/vm1_qga0.sock,server,nowait,id=3Dvm1_qga0 -de= vice virtio-serial \=0D + -device virtserialport,chardev=3Dvm1_qga0,name=3Dorg.qemu.guest_agent.2 -= daemonize \=0D + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=3D= nttsip1 \=0D + -netdev user,id=3Dnttsip1,hostfwd=3Dtcp:127.0.0.1:6002-:22 \=0D + -chardev socket,id=3Dchar0,path=3D./vhost-net0,server \=0D + -netdev type=3Dvhost-user,id=3Dnetdev0,chardev=3Dchar0,vhostforce,queues= =3D8 \=0D + -device virtio-net-pci,netdev=3Dnetdev0,mac=3D52:54:00:00:00:01,disable-m= odern=3Dfalse,mrg_rxbuf=3Doff,mq=3Don,vectors=3D40,csum=3Don,guest_csum=3Do= 
n,host_tso4=3Don,guest_tso4=3Don,guest_ecn=3Don,guest_ufo=3Don,host_ufo=3Do= n -vnc :10=0D +=0D + # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_6= 4 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \=0D + -object memory-backend-file,id=3Dmem,size=3D4096M,mem-path=3D/mnt/huge1G1= ,share=3Don \=0D + -numa node,memdev=3Dmem -mem-prealloc -drive file=3D/root/xingguang/ubunt= u20-04-2.img \=0D + -chardev socket,path=3D/tmp/vm2_qga0.sock,server,nowait,id=3Dvm2_qga0 -de= vice virtio-serial \=0D + -device virtserialport,chardev=3Dvm2_qga0,name=3Dorg.qemu.guest_agent.2 -= daemonize \=0D + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=3D= nttsip1 \=0D + -netdev user,id=3Dnttsip1,hostfwd=3Dtcp:127.0.0.1:6003-:22 \=0D + -chardev socket,id=3Dchar0,path=3D./vhost-net1,server \=0D + -netdev type=3Dvhost-user,id=3Dnetdev0,chardev=3Dchar0,vhostforce,queues= =3D8 \=0D + -device virtio-net-pci,netdev=3Dnetdev0,mac=3D52:54:00:00:00:02,disable-m= odern=3Dfalse,mrg_rxbuf=3Doff,mq=3Don,vectors=3D40,csum=3Don,guest_csum=3Do= n,host_tso4=3Don,guest_tso4=3Don,guest_ecn=3Don,guest_ufo=3Don,host_ufo=3Do= n -vnc :12=0D +=0D +4. On VM1, set virtio device IP and run arp protocal::=0D +=0D + # ethtool -L ens5 combined 8=0D + # ifconfig ens5 1.1.1.2=0D + # arp -s 1.1.1.8 52:54:00:00:00:02=0D +=0D +5. On VM2, set virtio device IP and run arp protocal::=0D +=0D + # ethtool -L ens5 combined 8=0D + # ifconfig ens5 1.1.1.8=0D + # arp -s 1.1.1.2 52:54:00:00:00:01=0D +=0D +6. Scp 1MB file form VM1 to VM2::=0D +=0D + # scp root@1.1.1.8:/=0D +=0D +7. Check the iperf performance between two VMs by below commands::=0D +=0D + # iperf -s -i 1=0D + # iperf -c 1.1.1.2 -i 1 -t 60=0D +=0D +8. 
Quit vhost ports and relaunch vhost ports w/o dsa channels::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --f= ile-prefix=3Dvhost --vdev 'net_vhost0,iface=3Dvhost-net0,client=3D1,queues= =3D8' \=0D + --vdev 'net_vhost1,iface=3Dvhost-net1,client=3D1,queues=3D8' -- -i --nb-= cores=3D4 --txd=3D1024 --rxd=3D1024 --rxq=3D8 --txq=3D8=0D + testpmd>start=0D +=0D +9. Rerun step 6-7.=0D +=0D +10. Quit vhost ports and relaunch vhost ports with 1 queues::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --= file-prefix=3Dvhost --vdev 'net_vhost0,iface=3Dvhost-net0,client=3D1,queues= =3D8' \=0D + --vdev 'net_vhost1,iface=3Dvhost-net1,client=3D1,queues=3D8' -- -i --nb= -cores=3D4 --txd=3D1024 --rxd=3D1024 --rxq=3D1 --txq=3D1=0D + testpmd>start=0D +=0D +11. On VM1, set virtio device::=0D +=0D + # ethtool -L ens5 combined 1=0D +=0D +12. On VM2, set virtio device::=0D +=0D + # ethtool -L ens5 combined 1=0D +=0D +13. Rerun step 6-7.=0D +=0D +Test Case 4: VM2VM vhost-user/virtio-net packed ring test TSO with dsa dpd= k driver=0D +--------------------------------------------------------------------------= ---------=0D +This case test the function of Vhost tx offload in the topology of vhost-u= ser/virtio-net packed ring mergeable path =0D +by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchro= nous enqueue operations with dsa dpdk driver.=0D +=0D +1. Bind 2 dsa device to vfio-pci like common step 1::=0D +=0D + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0=0D +=0D +2. 
Launch the Vhost sample by below commands::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --f= ile-prefix=3Dvhost -a 0000:e7:01.0,max_queues=3D1 -a 0000:ec:01.0,max_queue= s=3D1 \=0D + --vdev 'net_vhost0,iface=3Dvhost-net0,queues=3D1,dmas=3D[txq0]' \=0D + --vdev 'net_vhost1,iface=3Dvhost-net1,queues=3D1,dmas=3D[txq0]' \=0D + --iova=3Dva -- -i --nb-cores=3D2 --txd=3D1024 --rxd=3D1024 --lcore-dma=3D= [lcore3@0000:e7:01.0-q0,lcore4@0000:ec:01.0-q0]=0D + testpmd>start=0D +=0D +3. Launch VM1 and VM2 with qemu::=0D +=0D + # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_6= 4 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \=0D + -object memory-backend-file,id=3Dmem,size=3D4096M,mem-path=3D/mnt/huge1G0= ,share=3Don \=0D + -numa node,memdev=3Dmem -mem-prealloc -drive file=3D/root/xingguang/ubunt= u20-04.img \=0D + -chardev socket,path=3D/tmp/vm1_qga0.sock,server,nowait,id=3Dvm1_qga0 -de= vice virtio-serial \=0D + -device virtserialport,chardev=3Dvm1_qga0,name=3Dorg.qemu.guest_agent.2 -= daemonize \=0D + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=3D= nttsip1 \=0D + -netdev user,id=3Dnttsip1,hostfwd=3Dtcp:127.0.0.1:6002-:22 \=0D + -chardev socket,id=3Dchar0,path=3D./vhost-net0 \=0D + -netdev type=3Dvhost-user,id=3Dnetdev0,chardev=3Dchar0,vhostforce \=0D + -device virtio-net-pci,netdev=3Dnetdev0,mac=3D52:54:00:00:00:01,disable-m= odern=3Dfalse,mrg_rxbuf=3Don,csum=3Don,guest_csum=3Don,host_tso4=3Don,guest= _tso4=3Don,guest_ecn=3Don,packed=3Don -vnc :10=0D +=0D + # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_6= 4 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \=0D + -object memory-backend-file,id=3Dmem,size=3D4096M,mem-path=3D/mnt/huge1G1= ,share=3Don \=0D + -numa node,memdev=3Dmem -mem-prealloc -drive file=3D/root/xingguang/ubunt= u20-04-2.img \=0D + -chardev socket,path=3D/tmp/vm2_qga0.sock,server,nowait,id=3Dvm2_qga0 -de= vice virtio-serial \=0D + -device 
virtserialport,chardev=3Dvm2_qga0,name=3Dorg.qemu.guest_agent.2 -= daemonize \=0D + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=3D= nttsip1 \=0D + -netdev user,id=3Dnttsip1,hostfwd=3Dtcp:127.0.0.1:6003-:22 \=0D + -chardev socket,id=3Dchar0,path=3D./vhost-net1 \=0D + -netdev type=3Dvhost-user,id=3Dnetdev0,chardev=3Dchar0,vhostforce \=0D + -device virtio-net-pci,netdev=3Dnetdev0,mac=3D52:54:00:00:00:02,disable-m= odern=3Dfalse,mrg_rxbuf=3Don,csum=3Don,guest_csum=3Don,host_tso4=3Don,guest= _tso4=3Don,guest_ecn=3Don,packed=3Don -vnc :12=0D +=0D +4. On VM1, set virtio device IP and run arp protocal::=0D +=0D + # ifconfig ens5 1.1.1.2=0D + # arp -s 1.1.1.8 52:54:00:00:00:02=0D +=0D +5. On VM2, set virtio device IP and run arp protocal::=0D +=0D + # ifconfig ens5 1.1.1.8=0D + # arp -s 1.1.1.2 52:54:00:00:00:01=0D +=0D +6. Check the iperf performance between two VMs by below commands::=0D +=0D + # iperf -s -i 1=0D + # iperf -c 1.1.1.2 -i 1 -t 60=0D +=0D +7. Check that 2VMs can receive and send big packets to each other through = vhost log. Port 0 should have tx packets above 1522, Port 1 should have rx = packets above 1522::=0D +=0D + testpmd>show port xstats all=0D +=0D +Test Case 5: VM2VM vhost-user/virtio-net packed ring mergeable path 8 queu= es test with large packet payload with dsa dpdk driver=0D +--------------------------------------------------------------------------= -------------------------------------------------------=0D +This case uses iperf and scp to test the payload of large packet (larger t= han 1MB) is valid after packets forwarding in =0D +vm2vm vhost-user/virtio-net packed ring mergeable path when vhost uses the= asynchronous enqueue operations with dsa dpdk driver.=0D +The dynamic change of multi-queues number also test.=0D +=0D +1. Bind 8 dsa device to vfio-pci like common step 1::=0D +=0D + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01= .0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0=0D +=0D +2. 
Launch the Vhost sample by below commands::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --f= ile-prefix=3Dvhost -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:= 79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \=0D + --vdev 'net_vhost0,iface=3Dvhost-net0,queues=3D8,dmas=3D[txq0;txq1;txq2;t= xq3;txq4;txq5;txq6]' \=0D + --vdev 'net_vhost1,iface=3Dvhost-net1,queues=3D8,dmas=3D[txq1;txq2;txq3;t= xq4;txq5;txq6]' \=0D + --iova=3Dva -- -i --nb-cores=3D4 --txd=3D1024 --rxd=3D1024 --rxq=3D8 --tx= q=3D8 \=0D + --lcore-dma=3D[lcore2@0000:6a:01.0-q0,lcore2@0000:6f:01.0-q1,lcore2@0000:= 74:01.0-q2,lcore2@0000:79:01.0-q3,lcore3@0000:6a:01.0-q0,lcore3@0000:74:01.= 0-q2,lcore3@0000:e7:01.0-q4,lcore3@0000:ec:01.0-q5,lcore3@0000:f1:01.0-q6,l= core3@0000:f6:01.0-q7,lcore4@0000:6f:01.0-q1,lcore4@0000:79:01.0-q3,lcore4@= 0000:6a:01.0-q1,lcore4@0000:6f:01.0-q2,lcore4@0000:74:01.0-q3,lcore4@0000:7= 9:01.0-q4,lcore4@0000:e7:01.0-q5,lcore4@0000:ec:01.0-q6,lcore4@0000:f1:01.0= -q7,lcore5@0000:f6:01.0-q0]=0D + testpmd>start=0D +=0D +3. 
3. Launch VM1 and VM2 with qemu::

    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10

    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12

4. On VM1, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.2
    # arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.8
    # arp -s 1.1.1.2 52:54:00:00:00:01

6. Scp a 1MB file from VM1 to VM2::

    # scp root@1.1.1.8:/

7. Check the iperf performance between the two VMs with the below commands::

    # iperf -s -i 1
    # iperf -c 1.1.1.2 -i 1 -t 60

8. Rerun step 6-7 five times.

Test Case 6: VM2VM vhost-user/virtio-net packed ring non-mergeable path 8 queues test with large packet payload with dsa dpdk driver
------------------------------------------------------------------------------------------------------------------------------------
This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in
vm2vm vhost-user/virtio-net packed ring non-mergeable path when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
The dynamic change of the multi-queue number is also tested.

1. Bind 8 DSA devices to vfio-pci as in common step 1::

    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
2. Launch the vhost sample with the below commands::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[lcore2@0000:6a:01.0-q0,lcore2@0000:6f:01.0-q1,lcore2@0000:74:01.0-q2,lcore2@0000:79:01.0-q3,lcore2@0000:e7:01.0-q4,lcore2@0000:ec:01.0-q5,lcore3@0000:f1:01.0-q6,lcore3@0000:f6:01.0-q7,lcore4@0000:6a:01.0-q1,lcore4@0000:6f:01.0-q2,lcore4@0000:74:01.0-q3,lcore4@0000:79:01.0-q4,lcore4@0000:e7:01.0-q5,lcore4@0000:ec:01.0-q6,lcore4@0000:f1:01.0-q7,lcore5@0000:f6:01.0-q0]
    testpmd>start

3. Launch VM1 and VM2 with qemu::

    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10

    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12

4. On VM1, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.2
    # arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.8
    # arp -s 1.1.1.2 52:54:00:00:00:01

6. Scp a 1MB file from VM1 to VM2::

    # scp root@1.1.1.8:/

7. Check the iperf performance between the two VMs with the below commands::

    # iperf -s -i 1
    # iperf -c 1.1.1.2 -i 1 -t 60
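The pass criterion for the iperf step is normally a stable, non-zero bandwidth figure in the client log. The sketch below extracts that figure from a captured iperf summary line; the sample line and its numbers are illustrative, not measured results.

```shell
#!/bin/sh
# Parse the bandwidth fields out of an iperf client summary line.
# The sample line is illustrative; a real run produces similar output.
line='[  3]  0.0-60.0 sec  55.6 GBytes  7.96 Gbits/sec'

# The last two whitespace-separated fields are the value and the unit.
bw=$(printf '%s\n' "$line" | awk '{print $(NF-1), $NF}')
echo "measured bandwidth: $bw"

# Treat a 0.00 reading as a failed forwarding path.
case "$bw" in
    "0.00"*) echo "FAIL: no traffic between the VMs" ;;
    *)       echo "OK" ;;
esac
```

In a real run the `line` variable would come from the iperf client output, e.g. `tail -n 1` of the client log.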
8. Rerun step 6-7 five times.

Test Case 7: VM2VM vhost-user/virtio-net packed ring test TSO with dsa dpdk driver and pa mode
-----------------------------------------------------------------------------------------------
This case tests the function of vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path
by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with dsa dpdk driver and iova as PA mode.

1. Bind 2 DSA devices to vfio-pci as in common step 1::

    # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0

2. Launch the vhost sample with PA mode with the below commands::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost -a 0000:e7:01.0 -a 0000:ec:01.0 \
    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \
    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \
    --iova=pa -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:e7:01.0-q0,lcore4@0000:ec:01.0-q0]
    testpmd>start
3. Launch VM1 and VM2 with qemu::

    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10

    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12

4. On VM1, set the virtio device IP and add a static ARP entry::

    # ifconfig ens5 1.1.1.2
    # arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set the virtio device IP and add a static ARP entry::

    # ifconfig ens5 1.1.1.8
    # arp -s 1.1.1.2 52:54:00:00:00:01

6. Check the iperf performance between the two VMs with the below commands::

    # iperf -s -i 1
    # iperf -c 1.1.1.2 -i 1 -t 60

7. Check that the two VMs can receive and send big packets to each other through the vhost log. Port 0 should have tx packets above 1522, and Port 1 should have rx packets above 1522::

    testpmd>show port xstats all

Test Case 8: VM2VM vhost-user/virtio-net packed ring mergeable path 8 queues test with large packet payload with dsa dpdk driver and pa mode
---------------------------------------------------------------------------------------------------------------------------------------------
This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in
vm2vm vhost-user/virtio-net packed ring mergeable path when vhost uses the asynchronous enqueue operations with dsa dpdk driver
and iova as PA mode. The dynamic change of the multi-queue number is also tested.

1. Bind 8 DSA devices to vfio-pci as in common step 1::

    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
2. Launch the vhost sample with the below commands::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[lcore2@0000:6a:01.0-q0,lcore2@0000:6f:01.0-q1,lcore2@0000:74:01.0-q2,lcore2@0000:79:01.0-q3,lcore3@0000:6a:01.0-q0,lcore3@0000:74:01.0-q2,lcore3@0000:e7:01.0-q4,lcore3@0000:ec:01.0-q5,lcore3@0000:f1:01.0-q6,lcore3@0000:f6:01.0-q7,lcore4@0000:6f:01.0-q1,lcore4@0000:79:01.0-q3,lcore4@0000:6a:01.0-q1,lcore4@0000:6f:01.0-q2,lcore4@0000:74:01.0-q3,lcore4@0000:79:01.0-q4,lcore4@0000:e7:01.0-q5,lcore4@0000:ec:01.0-q6,lcore4@0000:f1:01.0-q7,lcore5@0000:f6:01.0-q0]
    testpmd>start
3. Launch VM1 and VM2 with qemu::

    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10

    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12

4. On VM1, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.2
    # arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.8
    # arp -s 1.1.1.2 52:54:00:00:00:01

6. Scp a 1MB file from VM1 to VM2::

    # scp root@1.1.1.8:/

7. Check the iperf performance between the two VMs with the below commands::

    # iperf -s -i 1
    # iperf -c 1.1.1.2 -i 1 -t 60

8. Rerun step 6-7 five times.

Test Case 9: VM2VM vhost-user/virtio-net split ring test TSO with dsa kernel driver
------------------------------------------------------------------------------------
This case tests the function of vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path
by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with dsa kernel driver.

1. Bind 1 DSA device to idxd as in common step 2::

    ls /dev/dsa  # check the wq configuration; reset if any exists
    # ./usertools/dpdk-devbind.py -u 6a:01.0
    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
    ls /dev/dsa  # check that the wq configuration succeeded

2. Launch the vhost sample with the below commands::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \
    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \
    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3]
    testpmd>start
3. Launch VM1 and VM2 on socket 1::

    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10

    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12

4. On VM1, set the virtio device IP and add a static ARP entry::

    # ifconfig ens5 1.1.1.2
    # arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set the virtio device IP and add a static ARP entry::

    # ifconfig ens5 1.1.1.8
    # arp -s 1.1.1.2 52:54:00:00:00:01

6. Check the iperf performance between the two VMs with the below commands::

    # iperf -s -i 1
    # iperf -c 1.1.1.2 -i 1 -t 60

7. Check that the two VMs can receive and send big packets to each other through the vhost log. Port 0 should have tx packets above 1522, and Port 1 should have rx packets above 1522::

    testpmd>show port xstats all

Test Case 10: VM2VM vhost-user/virtio-net split ring mergeable path 8 queues test with large packet payload with dsa kernel driver
-----------------------------------------------------------------------------------------------------------------------------------
This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in
vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous enqueue operations with dsa kernel driver.
The dynamic change of the multi-queue number is also tested.

1. Bind 2 DSA devices to idxd as in common step 2::

    ls /dev/dsa  # check the wq configuration; reset if any exists
    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
    ls /dev/dsa  # check that the wq configuration succeeded
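The ``ls /dev/dsa`` check in step 1 can be made explicit by counting the configured work queues instead of eyeballing the listing. The sketch below runs that count against a sample listing; the device and wq names are illustrative of what `dpdk_idxd_cfg.py -q 8` on devices 0 and 2 would produce.

```shell
#!/bin/sh
# Sample /dev/dsa listing after configuring 8 WQs on DSA devices 0 and 2
# (illustrative; on a real system this comes from: ls /dev/dsa).
listing='dsa0 wq0.0 wq0.1 wq0.2 wq0.3 wq0.4 wq0.5 wq0.6 wq0.7
dsa2 wq2.0 wq2.1 wq2.2 wq2.3 wq2.4 wq2.5 wq2.6 wq2.7'

# Count entries whose name starts with "wq".
wqs=$(printf '%s\n' "$listing" | tr ' ' '\n' | grep -c '^wq')
echo "configured work queues: $wqs"
[ "$wqs" -eq 16 ] && echo "wq configuration looks complete"
```

Two devices with 8 queues each should yield 16 wq entries; fewer than that means a `dpdk_idxd_cfg.py` invocation failed.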
2. Launch the vhost sample with the below commands::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
    --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq0.3,lcore2@wq0.4,lcore2@wq0.5,lcore3@wq0.6,lcore3@wq0.7,lcore4@wq2.0,lcore4@wq2.1,lcore4@wq2.2,lcore4@wq2.3,lcore4@wq2.4,lcore4@wq2.5,lcore4@wq2.6,lcore5@wq2.7]
    testpmd>start

3. Launch VM1 and VM2 using qemu::

    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10

    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12

4. On VM1, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.2
    # arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.8
    # arp -s 1.1.1.2 52:54:00:00:00:01

6. Scp a 1MB file from VM1 to VM2::

    # scp root@1.1.1.8:/

7. Check the iperf performance between the two VMs with the below commands::

    # iperf -s -i 1
    # iperf -c 1.1.1.2 -i 1 -t 60
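Beyond throughput, the scp step is really a payload-integrity check: the copied 1MB file must arrive bit-identical. One way to make that explicit is to compare checksums on both ends. The sketch below demonstrates the idea locally, with a plain `cp` standing in for the scp transfer between the VMs (all paths are temporary and illustrative).

```shell
#!/bin/sh
# Create a 1MB file of random data, copy it, and verify the payload
# survived intact. In the test above the copy is done with scp between
# VM1 and VM2; the checksum comparison is the same either way.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=1024 2>/dev/null
cp "$src" "$dst"

a=$(md5sum "$src" | awk '{print $1}')
b=$(md5sum "$dst" | awk '{print $1}')
if [ "$a" = "$b" ]; then echo "payload intact"; else echo "payload corrupted"; fi
rm -f "$src" "$dst"
```

In the real test the same comparison would run `md5sum` on VM1's source file and on the copy received by VM2.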
8. Quit and relaunch vhost with different DSA channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq0.3,lcore3@wq0.0,lcore3@wq0.2,lcore3@wq0.4,lcore3@wq0.5,lcore3@wq0.6,lcore3@wq0.7,lcore4@wq0.1,lcore4@wq0.3,lcore4@wq2.0,lcore4@wq2.1,lcore4@wq2.2,lcore4@wq2.3,lcore4@wq2.4,lcore4@wq2.5,lcore4@wq2.6,lcore5@wq2.7]
    testpmd>start

9. Rerun step 6-7.

10. Quit vhost ports and relaunch vhost ports without DSA channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
    testpmd>start

11. On VM1, set the virtio device::

    # ethtool -L ens5 combined 4

12. On VM2, set the virtio device::

    # ethtool -L ens5 combined 4

13. Rerun step 6-7.

14. Quit vhost ports and relaunch vhost ports with 1 queue::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
    testpmd>start

15. On VM1, set the virtio device::

    # ethtool -L ens5 combined 1

16. On VM2, set the virtio device::

    # ethtool -L ens5 combined 1
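After each `ethtool -L` change in the steps above, the active queue count can be read back with `ethtool -l` before rerunning the traffic. The sketch below parses a captured sample of that output; the sample text is illustrative of the command's usual format, not taken from a real run.

```shell
#!/bin/sh
# Sample "ethtool -l ens5" output after "ethtool -L ens5 combined 1"
# (illustrative).
out='Channel parameters for ens5:
Pre-set maximums:
Combined:	8
Current hardware settings:
Combined:	1'

# The second "Combined:" value is the current setting.
current=$(printf '%s\n' "$out" | awk '/Combined:/ {v=$2} END {print v}')
echo "current combined queues: $current"
[ "$current" -eq 1 ] && echo "queue change applied"
```

On a real guest the `out` variable would be filled from `ethtool -l ens5`, and the expected value would be 8, 4, or 1 depending on the step.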
17. Rerun step 6-7.

Test Case 11: VM2VM vhost-user/virtio-net split ring non-mergeable path 8 queues test with large packet payload with dsa kernel driver
---------------------------------------------------------------------------------------------------------------------------------------
This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in
vm2vm vhost-user/virtio-net split ring non-mergeable path when vhost uses the asynchronous enqueue operations with dsa kernel driver.
The dynamic change of the multi-queue number is also tested.

1. Bind 2 DSA devices to idxd as in common step 2::

    ls /dev/dsa  # check the wq configuration; reset if any exists
    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
    ls /dev/dsa  # check that the wq configuration succeeded

2. Launch the vhost sample with the below commands::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq0.3,lcore2@wq0.4,lcore2@wq0.5,lcore3@wq0.6,lcore3@wq0.7,lcore4@wq2.0,lcore4@wq2.1,lcore4@wq2.2,lcore4@wq2.3,lcore4@wq2.4,lcore4@wq2.5,lcore4@wq2.6,lcore5@wq2.7]
    testpmd>start
3. Launch VM1 and VM2::

    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10

    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12

4. On VM1, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.2
    # arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set the virtio device IP and add a static ARP entry::

    # ethtool -L ens5 combined 8
    # ifconfig ens5 1.1.1.8
    # arp -s 1.1.1.2 52:54:00:00:00:01

6. Scp a 1MB file from VM1 to VM2::

    # scp root@1.1.1.8:/

7. Check the iperf performance between the two VMs with the below commands::

    # iperf -s -i 1
    # iperf -c 1.1.1.2 -i 1 -t 60

8. Quit vhost ports and relaunch vhost ports without DSA channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
    testpmd>start

9. Rerun step 6-7.

10. Quit vhost ports and relaunch vhost ports with 1 queue::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
    testpmd>start

11. On VM1, set the virtio device::

    # ethtool -L ens5 combined 1

12. On VM2, set the virtio device::

    # ethtool -L ens5 combined 1
13. Rerun step 6-7.

Test Case 12: VM2VM vhost-user/virtio-net packed ring test TSO with dsa kernel driver
--------------------------------------------------------------------------------------
This case tests the function of vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path
by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with dsa kernel driver.

1. Bind 2 DSA devices to idxd::

    ls /dev/dsa  # check the wq configuration; reset if any exists
    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 2
    ls /dev/dsa  # check that the wq configuration succeeded

2. Launch the vhost sample with the below commands::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \
    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \
    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@wq0.0,lcore4@wq2.0]
    testpmd>start
+
+    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+
+4. On VM1, set virtio device IP and run arp protocol::
+
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set virtio device IP and run arp protocol::
+
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Check the iperf performance between two VMs by below commands::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+7. Check that the two VMs can receive and send big packets to each other through the vhost log. Port 0 should have tx packets above 1522 and Port 1 should have rx packets above 1522::
+
+    testpmd>show port xstats all
+
+Test Case 13: VM2VM vhost-user/virtio-net packed ring mergeable path 8 queues test with large packet payload with dsa kernel driver
+-----------------------------------------------------------------------------------------------------------------------------------
+This case uses iperf and scp to check that the payload of large packets (larger than 1MB) is valid after packet forwarding in
+the vm2vm vhost-user/virtio-net packed ring mergeable path when vhost uses asynchronous enqueue operations with the dsa kernel driver.
+Dynamic change of the number of multi-queues is also tested.
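The "payload is valid" claim in this and the following cases can be made explicit by comparing checksums on both sides of the transfer. A minimal sketch, with hypothetical paths (in the real test the copy goes over the 1.1.1.x link with scp between the two VMs):

```shell
# Sketch only: verify a 1MB payload survives the transfer intact.
# Paths are illustrative; in the test plan the copy is done with
# 'scp ... root@1.1.1.8:/' from VM1 to VM2.
dd if=/dev/urandom of=/tmp/payload_1mb bs=1M count=1 2>/dev/null
src_sum=$(md5sum /tmp/payload_1mb | awk '{print $1}')
cp /tmp/payload_1mb /tmp/payload_1mb_rx   # stand-in for the scp'd copy
dst_sum=$(md5sum /tmp/payload_1mb_rx | awk '{print $1}')
if [ "$src_sum" = "$dst_sum" ]; then
    echo "payload OK"
else
    echo "payload corrupted"
fi
```

On the real setup the second checksum would be computed inside VM2 on the received file rather than on a local copy.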
+
+1. Bind 8 dsa devices to idxd like common step 2::
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+    ls /dev/dsa #check wq configure success
+
+2. Launch the Vhost sample by below commands::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[lcore2@wq0.0,lcore2@wq2.1,lcore2@wq4.2,lcore2@wq6.3,lcore3@wq0.0,lcore3@wq4.2,lcore3@wq8.4,lcore3@wq10.5,lcore3@wq12.6,lcore3@wq14.7,lcore4@wq2.1,lcore4@wq6.3,lcore4@wq0.1,lcore4@wq2.2,lcore4@wq4.3,lcore4@wq6.4,lcore4@wq8.5,lcore4@wq10.6,lcore4@wq12.7,lcore5@wq14.0]
+    testpmd>start
+
+3. Launch VM1 and VM2 with qemu::
+
+    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+
+4. On VM1, set virtio device IP and run arp protocol::
+
+    # ethtool -L ens5 combined 8
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set virtio device IP and run arp protocol::
+
+    # ethtool -L ens5 combined 8
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp 1MB file from VM1 to VM2::
+
+    # scp root@1.1.1.8:/
+
+7. Check the iperf performance between two VMs by below commands::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+8. Rerun step 6-7 five times.
+
+Test Case 14: VM2VM vhost-user/virtio-net packed ring non-mergeable path 8 queues test with large packet payload with dsa kernel driver
+---------------------------------------------------------------------------------------------------------------------------------------
+This case uses iperf and scp to check that the payload of large packets (larger than 1MB) is valid after packet forwarding in
+the vm2vm vhost-user/virtio-net packed ring non-mergeable path when vhost uses asynchronous enqueue operations with the dsa kernel driver.
+Dynamic change of the number of multi-queues is also tested.
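The "rerun step 6-7 five times" steps in these cases can be scripted; a minimal sketch, with the scp and iperf client commands left as comments because they need the live VMs:

```shell
# Sketch only: repeat the scp + iperf check five times.
# The real commands need both VMs up, so they stay commented out.
for run in 1 2 3 4 5; do
    echo "iteration $run"
    # scp root@1.1.1.8:/           # step 6: copy the 1MB file
    # iperf -c 1.1.1.2 -i 1 -t 60  # step 7: measure throughput
done
```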
+
+1. Bind 8 dsa devices to idxd like common step 2::
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+    ls /dev/dsa #check wq configure success
+
+2. Launch the Vhost sample by below commands::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[lcore2@wq0.0,lcore2@wq2.1,lcore2@wq4.2,lcore2@wq6.3,lcore3@wq0.0,lcore3@wq4.2,lcore3@wq8.4,lcore3@wq10.5,lcore3@wq12.6,lcore3@wq14.7,lcore4@wq2.1,lcore4@wq6.3,lcore4@wq0.1,lcore4@wq2.2,lcore4@wq4.3,lcore4@wq6.4,lcore4@wq8.5,lcore4@wq10.6,lcore4@wq12.7,lcore5@wq14.0]
+    testpmd>start
+
+3. Launch VM1 and VM2 with qemu::
+
+    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+
+4. On VM1, set virtio device IP and run arp protocol::
+
+    # ethtool -L ens5 combined 8
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set virtio device IP and run arp protocol::
+
+    # ethtool -L ens5 combined 8
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp 1MB file from VM1 to VM2::
+
+    # scp root@1.1.1.8:/
+
+7. Check the iperf performance between two VMs by below commands::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+8. Rerun step 6-7 five times.
+
+Test Case 15: VM2VM vhost-user/virtio-net split ring non-mergeable 16 queues test with large packet payload with dsa dpdk and kernel driver
+-------------------------------------------------------------------------------------------------------------------------------------------
+This case uses iperf and scp to check that the payload of large packets (larger than 1MB) is valid after packet forwarding in
+the vm2vm vhost-user/virtio-net split ring non-mergeable path when vhost uses asynchronous enqueue operations with the dsa dpdk
+and kernel driver. Dynamic change of the number of multi-queues is also tested.
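The paired "ls /dev/dsa" checks in the bind steps can be turned into an explicit count of configured work queues. A minimal sketch over a sample listing (on the real host the listing would come straight from `ls /dev/dsa`):

```shell
# Sketch only: count work queues configured on DSA device 0.
# 'listing' stands in for the output of 'ls /dev/dsa'; after
# 'dpdk_idxd_cfg.py -q 8 0' we expect 8 entries named wq0.*.
listing="wq0.0
wq0.1
wq0.2
wq0.3
wq0.4
wq0.5
wq0.6
wq0.7
wq2.0"
wq_dev0=$(printf '%s\n' "$listing" | grep -c '^wq0\.')
echo "device 0 work queues: $wq_dev0"
```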
+
+1. Bind 4 dsa devices to vfio-pci and 4 dsa devices to idxd like common step 1-2::
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+    ls /dev/dsa #check wq configure success
+
+    # ./usertools/dpdk-devbind.py -u 0000:e7:01.0 0000:ec:01.0 0000:f1:01.0 0000:f6:01.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0 0000:f1:01.0 0000:f6:01.0
+
+2. Launch the Vhost sample by below commands::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:e7:01.0 -a 0000:ec:01.0 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1,lcore2@0000:ec:01.0-q0,lcore2@0000:ec:01.0-q1,lcore3@wq0.0,lcore3@wq2.0,lcore4@0000:e7:01.0-q4,lcore4@0000:e7:01.0-q5,lcore4@0000:ec:01.0-q4,lcore4@0000:ec:01.0-q5,lcore5@wq4.1,lcore5@wq2.1]
+    testpmd>start
+
+3. Launch VM1 and VM2::
+
+    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+4. On VM1, set virtio device IP and run arp protocol::
+
+    # ethtool -L ens5 combined 16
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set virtio device IP and run arp protocol::
+
+    # ethtool -L ens5 combined 16
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp 1MB file from VM1 to VM2::
+
+    # scp root@1.1.1.8:/
+
+7. Check the iperf performance between two VMs by below commands::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+8. Quit vhost ports and relaunch vhost ports with different dsa channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:e7:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore2@wq4.2,lcore3@wq6.1,lcore3@wq6.3,lcore4@0000:e7:01.0-q1,lcore4@0000:f6:01.0-q0,lcore4@wq4.2,lcore4@wq6.0,lcore5@wq4.2,lcore5@wq6.0]
+    testpmd>start
+
+9. Rerun step 6-7.
+
+10. Quit vhost ports and relaunch vhost ports w/o dsa channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+    testpmd>start
+
+11. Rerun step 6-7.
+
+12. Quit vhost ports and relaunch vhost ports with 1 queue::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+    testpmd>start
+
+13. On VM1, set virtio device::
+
+    # ethtool -L ens5 combined 1
+
+14. On VM2, set virtio device::
+
+    # ethtool -L ens5 combined 1
+
+15. Rerun step 6-7.
+
+Test Case 16: VM2VM vhost-user/virtio-net packed ring mergeable 16 queues test with large packet payload with dsa dpdk and kernel driver
+----------------------------------------------------------------------------------------------------------------------------------------
+This case uses iperf and scp to check that the payload of large packets (larger than 1MB) is valid after packet forwarding in
+the vm2vm vhost-user/virtio-net packed ring mergeable path when vhost uses asynchronous enqueue operations with the dsa dpdk
+and kernel driver. Dynamic change of the number of multi-queues is also tested.
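When the queue count is changed inside the guest with `ethtool -L ens5 combined N`, the result can be read back with `ethtool -l`. A minimal sketch that parses sample output (on a live guest the interface would be queried directly):

```shell
# Sketch only: read back the current combined channel count.
# 'sample' stands in for 'ethtool -l ens5' output inside the guest
# after 'ethtool -L ens5 combined 1'.
sample="Channel parameters for ens5:
Pre-set maximums:
Combined: 16
Current hardware settings:
Combined: 1"
# The last 'Combined:' line holds the current setting.
current=$(printf '%s\n' "$sample" | awk '/Combined:/ {v=$2} END {print v}')
echo "current combined channels: $current"
```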
+
+1. Bind 4 dsa devices to vfio-pci and 4 dsa devices to idxd like common step 1-2::
+
+    ls /dev/dsa #check wq configure, reset if exist
+
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 4
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 6
+    ls /dev/dsa #check wq configure success
+
+2. Launch the Vhost sample by below commands::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq2.0,lcore2@wq2.1,lcore3@wq0.1,lcore3@wq2.0,lcore3@0000:e7:01.0-q4,lcore3@0000:ec:01.0-q5,lcore3@0000:f1:01.0-q6,lcore3@0000:f6:01.0-q7,lcore4@0000:e7:01.0-q4,lcore4@0000:ec:01.0-q5,lcore4@0000:f1:01.0-q1,lcore4@wq2.0,lcore5@wq4.1,lcore5@wq2.0,lcore5@wq4.1,lcore5@wq6.2]
+    testpmd>start
+
+3. Launch VM1 and VM2 with qemu::
+
+    # taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+    # taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+
+4. On VM1, set virtio device IP and run arp protocol::
+
+    # ethtool -L ens5 combined 16
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set virtio device IP and run arp protocol::
+
+    # ethtool -L ens5 combined 16
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp 1MB file from VM1 to VM2::
+
+    # scp root@1.1.1.8:/
+
+7. Check the iperf performance between two VMs by below commands::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+8. Rerun step 6-7 five times.
+
+
-- 
2.25.1