From mboxrd@z Thu Jan  1 00:00:00 1970
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 2/7] test_plans/basic_4k_pages_dsa_test_plan: add basic_4k_pages_dsa testplan
Date: Sat, 7 May 2022 05:58:55 -0400
Message-Id: <20220507095855.310940-1-weix.ling@intel.com>

Add basic_4k_pages_dsa_test_plan.rst into test_plans.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/basic_4k_pages_dsa_test_plan.rst | 1146 +++++++++++++++++++
 1 file changed, 1146 insertions(+)
 create mode 100644 test_plans/basic_4k_pages_dsa_test_plan.rst

diff --git a/test_plans/basic_4k_pages_dsa_test_plan.rst b/test_plans/basic_4k_pages_dsa_test_plan.rst
new file mode 100644
index 00000000..235fa5f4
--- /dev/null
+++ b/test_plans/basic_4k_pages_dsa_test_plan.rst
@@ -0,0 +1,1146 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+================================================================
+vhost async operation with DSA driver using 4K-pages test plan
+================================================================
+
+Description
+===========
+
+Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
+In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
+channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue operation with DMA channels is supported
+in both split and packed ring.
+
+This document provides the test plan for testing some basic functions of Vhost-user using the asynchronous data path with the
+DSA driver (kernel idxd driver and DPDK vfio-pci driver) in a 4K-pages memory environment:
+1. Test the Vhost asynchronous data path with the DSA driver in the PVP topology environment with testpmd.
+2. Check the Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with the vm2vm split ring and packed ring
+vhost-user/virtio-net mergeable path.
+3. Check that the payload of large packets (larger than 1MB) is valid after forwarding packets with the vm2vm split ring
+and packed ring vhost-user/virtio-net mergeable and non-mergeable paths.
+4. Check the dynamic change of the multi-queue number in vm2vm vhost-user/virtio-net with split ring and packed ring.
+5. Check Vhost-user using 1G hugepages and virtio-user using 4K-pages.
+
+DPDK 19.02 added support for using virtio-user without hugepages. The --no-huge mode was augmented to use memfd-backed
+memory (on systems that support memfd), to allow using virtio-user-based NICs without hugepages.
+
+Note:
+1. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page-by-page mapping may
+exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+2. A DPDK local patch for the vhost PMD is needed when testing the Vhost asynchronous data path with testpmd, and the suite has not yet been automated.
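+
+The following is a minimal sketch of how this M:N mapping is expressed on the vhost command line throughout this plan:
+"dmas" lists the virtqueues that use DMA acceleration, and "--lcore-dma" assigns DSA virtual channels to lcores. The
+fragments below are illustrative and reuse device addresses from the launch commands in the test cases::
+
+    # one vring accelerated by one DSA channel (1:1):
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' ... --lcore-dma=[lcore4@0000:6a:01.0-q0]
+    # several vrings sharing channels of the same DSA device (M:N):
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3]' ... --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0]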
+
+Prerequisites
+=============
+
+General set up
+--------------
+1. Turn off transparent hugepage in grub by adding GRUB_CMDLINE_LINUX="transparent_hugepage=never".
+
+2. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+    For example,
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
+
+3. Get the PCI device ID and DSA device ID of the DUT, for example, 0000:4f:00.1 is the PCI device ID and 0000:6a:01.0 - 0000:f6:01.0 are the DSA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:4f:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:6f:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:74:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:79:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:e7:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:ec:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:f1:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:f6:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+
+4. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_4k
+    mkdir /mnt/tmpfs_4k_2
+    mount tmpfs /mnt/tmpfs_4k -t tmpfs -o size=4G
+    mount tmpfs /mnt/tmpfs_4k_2 -t tmpfs -o size=4G
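+
+Before running the cases below, the 4K-pages environment can optionally be sanity-checked. This is a hedged sketch
+using standard Linux tools rather than an official step of this plan::
+
+    # confirm the two tmpfs mounts exist:
+    mount | grep tmpfs_4k
+    # confirm no hugepages are reserved, so --no-huge runs really use 4K pages:
+    grep Huge /proc/meminfo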
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+    For example:
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:4f:00.1
+
+2. Bind DSA devices to the DPDK vfio-pci driver::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DSA device id>
+    For example, bind 2 DMA devices to the vfio-pci driver:
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
+
+.. note::
+
+    One DPDK DSA device can create 8 WQs at most. Below is an example, where the DPDK DSA device will create one WQ and
+    eight WQs for DSA devices 0000:e7:01.0 and 0000:ec:01.0 respectively. The value of "max_queues" is 1~8:
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:e7:01.0,max_queues=1 -- -i
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:ec:01.0,max_queues=8 -- -i
+
+3. Bind DSA devices to the kernel idxd driver, and configure Work Queues (WQs)::
+
+    # ./usertools/dpdk-devbind.py -b idxd <DSA device id>
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q <numWq> <numDevices>
+
+.. note::
+
+    It is better to reset the WQs when you need to operate DSA devices that are bound to the idxd driver:
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices>
+    You can check the result with 'ls /dev/dsa'.
+    numDevices: number of the device, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
+    numWq: number of workqueues per DSA endpoint, where 1<=numWq<=8
+
+    For example, bind 2 DMA devices to the idxd driver and configure WQs:
+
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
+    Check the WQs with 'ls /dev/dsa'; you should find "wq0.0 wq2.0 wq2.1 wq2.2 wq2.3".
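+
+Throughout this plan, the --lcore-dma mapping uses two different channel name forms depending on which driver owns
+the DSA device; both lines below are taken from launch commands in the following test cases::
+
+    # DSA device bound to vfio-pci (DPDK driver): channels are addressed as <DSA BDF>-q<channel>
+    --lcore-dma=[lcore4@0000:6a:01.0-q0]
+    # DSA device bound to idxd (kernel driver): channels are addressed as wq<device>.<queue>
+    --lcore-dma=[lcore4@wq0.0]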
+
+Test Case 1: Basic test vhost/virtio-user split ring with 4K-pages and dsa dpdk driver
+----------------------------------------------------------------------------------------
+This case uses testpmd and a Traffic Generator (for example, Trex) to test the split ring when vhost uses the asynchronous
+enqueue operations with the dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:1 in a 4K-pages environment.
+
+1. Bind one dsa device (6a:01.0) and one nic port (4f:00.1) to vfio-pci like common steps 1-2.
+
+2. Launch vhost::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=1 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:6a:01.0-q0]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with 4K-pages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 -- -i
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets with different packet sizes, including [64, 128, 256, 512, 1024, 1518], from the packet generator and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 2: Basic test vhost/virtio-user packed ring with 4K-pages and dsa dpdk driver
+-----------------------------------------------------------------------------------------
+This case uses testpmd and a Traffic Generator (for example, Trex) to test the packed ring when vhost uses the asynchronous
+enqueue operations with the dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:1 in a 4K-pages environment.
+
+1. Bind one dsa device (6a:01.0) and one nic port (4f:00.1) to vfio-pci like common steps 1-2.
+
+2. Launch vhost::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=4 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:6a:01.0-q0]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with 4K-pages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets with different packet sizes, including [64, 128, 256, 512, 1024, 1518], from the packet generator and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 3: PVP split ring multi-queues with 4K-pages and dsa dpdk driver
+---------------------------------------------------------------------------
+This case uses testpmd and a Traffic Generator (for example, Trex) to test split ring multi-queues when vhost uses the asynchronous
+enqueue operations with the dsa dpdk driver and the mapping between vrings and dsa virtual channels is M:N in a 4K-pages environment.
+
+1. Bind 8 dsa devices and one nic port to vfio-pci like common steps 1-2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+
+2. Launch vhost by the below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:4f:00.1 -a 0000:6a:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator and check that the throughput reaches the expected level::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port and check from the vhost log that packets exist in all 8 queues in both RX and TX directions::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same throughput as above is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Quit and relaunch vhost with a different M:N (M:1; M>N) mapping between vrings and dsa virtual channels with 1G hugepages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:6f:01.0-q1,lcore14@0000:74:01.0-q2,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2]
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Rerun steps 3-5.
+
+Test Case 4: PVP packed ring multi-queues with 4K-pages and dsa dpdk driver
+----------------------------------------------------------------------------
+This case uses testpmd and a Traffic Generator (for example, Trex) to test packed ring multi-queues when vhost uses the asynchronous
+enqueue operations with the dsa dpdk driver and the mapping between vrings and dsa virtual channels is M:N in a 4K-pages environment.
+
+1. Bind 8 dsa devices and one nic port to vfio-pci like common steps 1-2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+
+2. Launch vhost by the below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:4f:00.1 -a 0000:6a:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator and check that the throughput reaches the expected level::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port and check from the vhost log that packets exist in all 8 queues in both RX and TX directions::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same throughput as above is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Quit and relaunch vhost with a different M:N (M:1; M>N) mapping between vrings and dsa virtual channels with 1G hugepages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:6f:01.0-q1,lcore14@0000:74:01.0-q2,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2]
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Rerun steps 3-5.
+
+Test Case 5: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
+-------------------------------------------------------------------------------------------------------
+This case tests the function of Vhost tx offload in the vhost-user/virtio-net split ring mergeable path topology
+by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with the dsa dpdk
+driver in a 4K-pages environment. An optional offload verification sketch follows this test case.
+
+1. Bind 1 dsa device to vfio-pci like common step 2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0
+
+2. Launch vhost by the below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:6a:01.0,max_queues=2 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:6a:01.0-q0,lcore4@0000:6a:01.0-q1]
+    testpmd>start
+
+3. Launch VM1 and VM2::
+
+    taskset -c 32 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+    taskset -c 33 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+
+4. On VM1, set the virtio device IP and configure a static arp entry::
+
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set the virtio device IP and configure a static arp entry::
+
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Check the iperf performance between the two VMs by the below commands (run the iperf server on VM1 and the iperf client on VM2)::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+7. Check from the vhost log that the two VMs can receive and send big packets to each other. Port 0 should have tx packets above 1522 and Port 1 should have rx packets above 1522::
+
+    testpmd>show port xstats all
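+
+A hedged way to confirm inside the guests that TSO/cksum offloads are actually exercised (ethtool and tcpdump are
+assumed to be installed in the VM images; the interface name ens5 comes from the steps above)::
+
+    # inside a VM: confirm the offloads negotiated by the virtio-net device
+    ethtool -k ens5 | grep -E 'tcp-segmentation-offload|tx-checksumming'
+    # while iperf is running, frames larger than the 1518-byte MTU-sized frame on the sender indicate TSO is in use
+    tcpdump -i ens5 -nn -c 5 'tcp and greater 2000'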
+
+Test Case 6: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
+--------------------------------------------------------------------------------------------------------
+This case tests the function of Vhost tx offload in the vhost-user/virtio-net packed ring mergeable path topology
+by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with the dsa dpdk
+driver in a 4K-pages environment.
+
+1. Bind 1 dsa device to vfio-pci like common step 2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0
+
+2. Launch vhost by the below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:6a:01.0 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:6a:01.0-q0,lcore4@0000:6a:01.0-q1]
+    testpmd>start
+
+3. Launch VM1 and VM2::
+
+    taskset -c 32 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+    taskset -c 33 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+
+4. On VM1, set the virtio device IP and configure a static arp entry::
+
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set the virtio device IP and configure a static arp entry::
+
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Check the iperf performance between the two VMs by the below commands (run the iperf server on VM1 and the iperf client on VM2)::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+7. Check from the vhost log that the two VMs can receive and send big packets to each other. Port 0 should have tx packets above 1522 and Port 1 should have rx packets above 1522::
+
+    testpmd>show port xstats all
+
+Test Case 7: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa dpdk driver
+--------------------------------------------------------------------------------------------------------
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in
+the vm2vm vhost-user/virtio-net multi-queues mergeable path when vhost uses the asynchronous enqueue operations with the dsa dpdk driver.
+One virtio-net is split ring and the other is packed ring. The vhost runs in a 1G-hugepages environment and the virtio-user runs in a
+4K-pages environment. A payload verification sketch follows this test case.
+
+1. Bind 8 dsa devices to vfio-pci like common step 2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+
+2. Launch vhost::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore2@0000:6a:01.0-q0,lcore2@0000:6f:01.0-q1,lcore2@0000:74:01.0-q2,lcore2@0000:79:01.0-q3,lcore3@0000:6a:01.0-q0,lcore3@0000:74:01.0-q2,lcore3@0000:e7:01.0-q4,lcore3@0000:ec:01.0-q5,lcore3@0000:f1:01.0-q6,lcore3@0000:f6:01.0-q7,lcore4@0000:6f:01.0-q1,lcore4@0000:79:01.0-q3,lcore4@0000:6a:01.0-q1,lcore4@0000:6f:01.0-q2,lcore4@0000:74:01.0-q3,lcore4@0000:79:01.0-q4,lcore4@0000:e7:01.0-q5,lcore4@0000:ec:01.0-q6,lcore4@0000:f1:01.0-q7,lcore5@0000:f6:01.0-q0]
+    testpmd>start
+
+3. Launch VM1 and VM2::
+
+    taskset -c 32 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+    taskset -c 33 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+
+4. On VM1, set the virtio device IP and configure a static arp entry::
+
+    # ethtool -L ens5 combined 8
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set the virtio device IP and configure a static arp entry::
+
+    # ethtool -L ens5 combined 8
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp a 1MB file from VM1 to VM2::
+
+    # scp <1MB file> root@1.1.1.8:/
+
+7. Check the iperf performance between the two VMs by the below commands (run the iperf server on VM1 and the iperf client on VM2)::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
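+
+Step 6 checks that a large payload survives forwarding. A minimal sketch of one way to verify the payload
+byte-for-byte (assuming dd and md5sum are available in both guests; the file path is illustrative)::
+
+    # on VM1: create a 1MB test file and record its checksum
+    dd if=/dev/urandom of=/root/test_1m bs=1M count=1
+    md5sum /root/test_1m
+    # copy it to VM2 as in step 6, then on VM2 compare the checksum
+    scp /root/test_1m root@1.1.1.8:/root/
+    md5sum /root/test_1m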
+
+Test Case 8: Basic test vhost/virtio-user split ring with 4K-pages and dsa kernel driver
+-----------------------------------------------------------------------------------------
+This case uses testpmd and a Traffic Generator (for example, Trex) to test the split ring when vhost uses the asynchronous
+enqueue operations with the dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1 in a 4K-pages environment.
+
+1. Bind one nic port to vfio-pci and one dsa device to idxd like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configuration, reset if it exists
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset 0
+
+    # ./usertools/dpdk-devbind.py -u 6a:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+    ls /dev/dsa #check wq configuration success
+
+2. Launch vhost::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@wq0.0]
+    testpmd>start
+
+3. Launch virtio-user with 4K-pages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 -- -i
+    testpmd>start
+
+4. Send packets with different packet sizes, including [64, 128, 256, 512, 1024, 1518], from the packet generator and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 9: Basic test vhost/virtio-user packed ring with 4K-pages and dsa kernel driver
+------------------------------------------------------------------------------------------
+This case uses testpmd and a Traffic Generator (for example, Trex) to test the packed ring when vhost uses the asynchronous
+enqueue operations with the dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1 in a 4K-pages environment.
+
+1. Bind one nic port to vfio-pci and one dsa device to idxd like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configuration, reset if it exists
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset 0
+
+    # ./usertools/dpdk-devbind.py -u 6a:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+    ls /dev/dsa #check wq configuration success
+
+2. Launch vhost::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@wq0.1]
+    testpmd>start
+
+3. Launch virtio-user with 4K-pages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
+    testpmd>start
+
+4. Send packets with different packet sizes, including [64, 128, 256, 512, 1024, 1518], from the packet generator and check the throughput with the below command::
+
+    testpmd>show port stats all
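+
+For the kernel-driver cases, the WQ configuration can also be inspected through the idxd sysfs interface. This is an
+optional, hedged check; the sysfs path assumes the upstream Linux idxd driver ABI::
+
+    ls /dev/dsa                              # configured WQs show up as wq<device>.<queue>
+    cat /sys/bus/dsa/devices/wq0.0/state     # expected to report "enabled" after dpdk_idxd_cfg.py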
+
+Test Case 10: PVP split ring multi-queues with 4K-pages and dsa kernel driver
+------------------------------------------------------------------------------
+This case uses testpmd and a Traffic Generator (for example, Trex) to test split ring multi-queues when vhost uses the asynchronous
+enqueue operations with the dsa kernel driver and the mapping between vrings and dsa virtual channels is M:N in a 4K-pages environment.
+
+1. Bind one nic port to vfio-pci and 8 dsa devices to idxd like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configuration, reset if it exists
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+    ls /dev/dsa #check wq configuration success
+
+2. Launch vhost by the below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:4f:00.1 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.7,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore13@wq0.2,lcore13@wq0.3,lcore13@wq0.4,lcore14@wq0.2,lcore14@wq0.3,lcore14@wq0.4,lcore14@wq0.5,lcore15@wq0.0,lcore15@wq0.1,lcore15@wq0.2,lcore15@wq0.3,lcore15@wq0.4,lcore15@wq0.5,lcore15@wq0.6,lcore15@wq0.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator and check that the throughput reaches the expected level::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port and check from the vhost log that packets exist in all 8 queues in both RX and TX directions::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same throughput as above is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Quit and relaunch vhost with a different M:N (M:1; M>N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq0.2,lcore14@wq0.1,lcore14@wq0.2,lcore15@wq0.1,lcore15@wq0.2]
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Rerun steps 4-6.
+
+Test Case 11: PVP packed ring multi-queues with 4K-pages and dsa kernel driver
+-------------------------------------------------------------------------------
+This case uses testpmd and a Traffic Generator (for example, Trex) to test packed ring multi-queues when vhost uses the asynchronous
+enqueue operations with the dsa kernel driver and the mapping between vrings and dsa virtual channels is M:N in a 4K-pages environment.
+
+1. Bind one nic port to vfio-pci and 8 dsa devices to idxd like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configuration, reset if it exists
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+    ls /dev/dsa #check wq configuration success
+
+2. Launch vhost by the below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:4f:00.1 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+    --iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
+    --lcore-dma=[lcore11@wq0.0,lcore12@wq2.1,lcore13@wq4.2,lcore14@wq6.3]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator and check that the throughput reaches the expected level::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port and check from the vhost log that packets exist in all 8 queues in both RX and TX directions::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same throughput as above is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Quit and relaunch vhost with a different M:N (M:1; M>N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.7,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore13@wq0.2,lcore13@wq0.3,lcore13@wq0.4,lcore14@wq0.2,lcore14@wq0.3,lcore14@wq0.4,lcore14@wq0.5,lcore15@wq0.0,lcore15@wq0.1,lcore15@wq0.2,lcore15@wq0.3,lcore15@wq0.4,lcore15@wq0.5,lcore15@wq0.6,lcore15@wq0.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Rerun steps 4-6.
+
+Test Case 12: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
+----------------------------------------------------------------------------------------------------------
+This case tests the function of Vhost tx offload in the vhost-user/virtio-net split ring mergeable path topology
+by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with the dsa kernel
+driver in a 4K-pages environment. An optional UFO check follows this test case.
+
+1. Bind 1 dsa device to idxd like common step 3::
+
+    ls /dev/dsa #check wq configuration, reset if it exists
+    # ./usertools/dpdk-devbind.py -u 6a:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+    ls /dev/dsa #check wq configuration success
+
+2. Launch the Vhost sample by the below commands::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \
+    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0 --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3]
+    testpmd>start
+
+3. Launch VM1 and VM2 on socket 1::
+
+    taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+4. On VM1, set the virtio device IP and configure a static arp entry::
+
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set the virtio device IP and configure a static arp entry::
+
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Check the iperf performance between the two VMs by the below commands (run the iperf server on VM1 and the iperf client on VM2)::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+7. Check from the vhost log that the two VMs can receive and send big packets to each other. Port 0 should have tx packets above 1522 and Port 1 should have rx packets above 1522::
+
+    testpmd>show port xstats all
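+
+This case additionally enables guest_ufo/host_ufo on the virtio-net devices; a hedged one-liner to confirm UFO was
+negotiated inside a guest (ethtool assumed available in the VM image)::
+
+    ethtool -k ens5 | grep udp-fragmentation-offload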
+
+Test Case 13: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
+-----------------------------------------------------------------------------------------------------------
+This case tests the function of Vhost tx offload in the vhost-user/virtio-net packed ring mergeable path topology
+by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with the dsa kernel
+driver in a 4K-pages environment.
+
+1. Bind 2 dsa devices to idxd like common step 3::
+
+    ls /dev/dsa #check wq configuration, reset if it exists
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    ls /dev/dsa #check wq configuration success
+
+2. Launch the Vhost sample by the below commands::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \
+    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@wq0.0,lcore4@wq2.0]
+    testpmd>start
+
+3. Launch VM1 and VM2 with qemu::
+
+    taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+    taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+
+4. On VM1, set the virtio device IP and configure a static arp entry::
+
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set the virtio device IP and configure a static arp entry::
+
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Check the iperf performance between the two VMs by the below commands (run the iperf server on VM1 and the iperf client on VM2)::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+7. Check from the vhost log that the two VMs can receive and send big packets to each other. Port 0 should have tx packets above 1522 and Port 1 should have rx packets above 1522::
+
+    testpmd>show port xstats all
+
+Test Case 14: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver
+-----------------------------------------------------------------------------------------------------------
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in
+the vm2vm vhost-user/virtio-net split and packed ring mergeable path when vhost uses the asynchronous enqueue operations with
+the dsa kernel driver. The vhost runs in a 1G-hugepages environment and the virtio-user runs in a 4K-pages environment.
+
+1. Bind 8 dsa devices to idxd like common step 3::
+
+    ls /dev/dsa #check wq configuration, reset if it exists
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+    ls /dev/dsa #check wq configuration success
+
+2. Launch vhost::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore2@wq0.0,lcore2@wq2.1,lcore2@wq4.2,lcore2@wq6.3,lcore3@wq0.0,lcore3@wq4.2,lcore3@wq8.4,lcore3@wq10.5,lcore3@wq12.6,lcore3@wq14.7,lcore4@wq2.1,lcore4@wq6.3,lcore4@wq0.1,lcore4@wq2.2,lcore4@wq4.3,lcore4@wq6.4,lcore4@wq8.5,lcore4@wq10.6,lcore4@wq12.7,lcore5@wq14.0]
+    testpmd>start
+
+3. Launch VM1 and VM2::
+
+    taskset -c 32 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+    taskset -c 33 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+
+4. On VM1, set the virtio device IP and configure a static arp entry::
+
+    # ethtool -L ens5 combined 8
+    # ifconfig ens5 1.1.1.2
+    # arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set the virtio device IP and configure a static arp entry::
+
+    # ethtool -L ens5 combined 8
+    # ifconfig ens5 1.1.1.8
+    # arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp a 1MB file from VM1 to VM2::
+
+    # scp <1MB file> root@1.1.1.8:/
+
+7. Check the iperf performance between the two VMs by the below commands (run the iperf server on VM1 and the iperf client on VM2)::
+
+    # iperf -s -i 1
+    # iperf -c 1.1.1.2 -i 1 -t 60
+
+Test Case 15: PVP split and packed ring dynamic queue number test with dsa dpdk and kernel driver
+--------------------------------------------------------------------------------------------------
+This case uses testpmd and a Traffic Generator (for example, Trex) to test split and packed ring when vhost uses the asynchronous
+enqueue operations with the dsa dpdk and kernel drivers, and checks that vhost-user can work well when the queue number changes
+dynamically. For reference, a note on changing the queue number from the testpmd prompt follows this test case.
+
+1. Bind 2 dsa devices to idxd, 2 dsa devices to vfio-pci and one nic port to vfio-pci like common steps 1-3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configuration, reset if it exists
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
+    ls /dev/dsa #check wq configuration success
+
+2. Launch vhost::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-huge -m 1024 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0;txq1]' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2 --no-numa --socket-num=0 --lcore-dma=[lcore3@wq0.0,lcore3@wq2.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with split ring mergeable in-order path by the below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets from the packet generator with random IPs and check that the performance meets the expected target.
+
+5. Stop the vhost port and check from the vhost log that packets exist in both RX and TX directions in 2 queues.
+
+6. Quit and relaunch vhost as the below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-huge -m 1024 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:e7:01.0 -a 0000:ec:01.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:e7:01.0-q0,lcore3@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q2,lcore3@0000:ec:01.0-q3]
+    testpmd>set fwd mac
+    testpmd>start
+
+7. Send imix packets from the packet generator with random IPs and check that the performance meets the expected target.
+
+8. Stop the vhost port and check from the vhost log that packets exist in both RX and TX directions in 4 queues.
9. Quit and relaunch vhost as below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-huge -m 1024 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:e7:01.0 -a 0000:ec:01.0 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5]' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 --lcore-dma=[lcore3@wq0.0,lcore3@wq2.0,lcore3@wq2.2,lcore3@0000:e7:01.0-q0,lcore3@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q3]
    testpmd>set fwd mac
    testpmd>start

10. Send imix packets from the packet generator with random IPs, check the performance can reach the target.

11. Stop the vhost port, check from the vhost log that both the RX and TX direction have packets in 8 queues.

12. Quit and relaunch vhost with different channels as below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-huge -m 1024 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:e7:01.0 -a 0000:ec:01.0 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 --lcore-dma=[lcore3@wq0.0,lcore3@wq0.1,lcore3@wq2.1,lcore3@wq2.0,lcore3@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q3]
    testpmd>set fwd mac
    testpmd>start

13. Send imix packets from the packet generator with random IPs, check the performance can reach the target.

14. Stop the vhost port, check from the vhost log that both the RX and TX direction have packets in 8 queues.

15. Quit and relaunch virtio-user with packed ring mergeable in-order path by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
    testpmd>set fwd mac
    testpmd>start

16. Rerun steps 4-5.

Test Case 16: VM2VM split ring vhost-user/virtio-net non-mergeable 4k-pages 16 queues dsa dpdk and kernel driver test with large packet payload valid check
------------------------------------------------------------------------------------------------------------------------------------------------------------
This case uses iperf and scp to check that the payload of large packets (larger than 1MB) is valid after packet forwarding in the
vm2vm vhost-user/virtio-net split ring non-mergeable path when vhost uses the asynchronous enqueue operations with the dsa dpdk
and kernel drivers. The dynamic change of the multi-queue number is also tested.

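The payload check in this case is driven by the scp transfer in the steps below. As an illustrative sketch only (the file name is hypothetical; the plan itself only requires that the transfer completes with an intact payload), validity can be confirmed by comparing checksums on both VMs::

    # dd if=/dev/urandom of=/root/test_1m.img bs=1M count=1   # hypothetical 1MB test file created on VM1
    # md5sum /root/test_1m.img                                # digest on VM1 before the transfer
    # md5sum /test_1m.img                                     # digest on VM2 after scp; the two digests must match
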
Bind 4 dsa devices to vfio-pci and 4 dsa devices to idxd like common steps 2-3::

    ls /dev/dsa #check wq configure, reset if exist
    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0
    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
    ls /dev/dsa #check wq configure success
    # ./usertools/dpdk-devbind.py -u 0000:e7:01.0 0000:ec:01.0 0000:f1:01.0 0000:f6:01.0
    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0 0000:f1:01.0 0000:f6:01.0

2. Launch the Vhost sample by below commands::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 -m 1024 --no-huge --file-prefix=vhost \
    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --no-numa --socket-num=0 \
    --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1,lcore2@0000:ec:01.0-q0,lcore2@0000:ec:01.0-q1,lcore3@wq0.0,lcore3@wq2.0,lcore4@0000:e7:01.0-q4,lcore4@0000:e7:01.0-q5,lcore4@0000:ec:01.0-q4,lcore4@0000:ec:01.0-q5,lcore5@wq4.1,lcore5@wq2.1]
    testpmd>start

3. Launch VM1 and VM2::

    taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10

    taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12

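Note: before step 4, it can be worth confirming inside each VM that the virtio-net device really exposes the 16 queue pairs negotiated by the queues=16 and mq=on options above. A minimal sketch (the interface name ens5 follows the later steps; the exact output layout depends on the guest kernel)::

    # ethtool -l ens5    # 'Combined' under 'Pre-set maximums' should report 16
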
4. On VM1, set virtio device IP and run arp protocol::

    # ethtool -L ens5 combined 16
    # ifconfig ens5 1.1.1.2
    # arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set virtio device IP and run arp protocol::

    # ethtool -L ens5 combined 16
    # ifconfig ens5 1.1.1.8
    # arp -s 1.1.1.2 52:54:00:00:00:01

6. Scp a 1MB file from VM1 to VM2::

    # scp <file> root@1.1.1.8:/

7. Check the iperf performance between the two VMs by below commands::

    # iperf -s -i 1
    # iperf -c 1.1.1.2 -i 1 -t 60

8. Quit vhost ports and relaunch vhost ports with different dsa channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13]' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13]' \
    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --no-numa --socket-num=0 --lcore-dma=[lcore2@0000:e7:01.0-q2,lcore2@0000:f1:01.0-q3,lcore2@0000:f1:01.0-q1,lcore2@wq4.2,lcore3@wq6.1,lcore3@wq6.3,lcore4@0000:e7:01.0-q2,lcore4@0000:f6:01.0-q5,lcore4@wq4.2,lcore4@wq6.0,lcore5@wq4.2,lcore5@wq6.0]
    testpmd>start

9. Rerun steps 6-7.

10. Quit vhost ports and relaunch vhost ports without dsa channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 -m 1024 --no-huge --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --no-numa --socket-num=0
    testpmd>start

11. Rerun steps 6-7.

12. Quit vhost ports and relaunch vhost ports with 1 queue::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 -m 1024 --no-huge --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0
    testpmd>start

13. On VM1, set virtio device::

    # ethtool -L ens5 combined 1

14. On VM2, set virtio device::

    # ethtool -L ens5 combined 1

15. Rerun steps 5-6.

Test Case 17: vm2vm packed ring vhost-user/virtio-net mergeable 16 queues dsa dpdk and kernel driver test with large packet payload valid check
------------------------------------------------------------------------------------------------------------------------------------------------
This case uses iperf and scp to check that the payload of large packets (larger than 1MB) is valid after packet forwarding in the
vm2vm vhost-user/virtio-net packed ring mergeable path when vhost uses the asynchronous enqueue operations with the dsa dpdk
and kernel drivers. The dynamic change of the multi-queue number is also tested.

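Note: the --lcore-dma lists in this case mix the two DSA drivers. A wqX.Y entry names a kernel idxd work queue exposed under /dev/dsa, while a 0000:xx:01.0-qN entry names a queue of a DSA device bound to vfio-pci. An annotated fragment of the syntax used below (illustration only, not an additional configuration)::

    --lcore-dma=[lcore2@wq0.0,lcore3@0000:e7:01.0-q4]
    # lcore2 -> work queue /dev/dsa/wq0.0 (kernel idxd driver)
    # lcore3 -> queue 4 of the DSA device at PCI address 0000:e7:01.0 (vfio-pci driver)
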
Bind 4 dsa devices to vfio-pci and 4 dsa devices to idxd like common steps 2-3::

    ls /dev/dsa #check wq configure, reset if exist
    # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0
    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0
    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 2
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 4
    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 6
    ls /dev/dsa #check wq configure success

2. Launch the Vhost sample by below commands::

    rm -rf vhost-net*
    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
    --vdev 'net_vhost0,iface=vhost-net0,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
    --vdev 'net_vhost1,iface=vhost-net1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --no-numa --socket-num=0 \
    --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq2.0,lcore2@wq2.1,lcore3@wq0.1,lcore3@wq2.0,lcore3@0000:e7:01.0-q4,lcore3@0000:ec:01.0-q5,lcore3@0000:f1:01.0-q6,lcore3@0000:f6:01.0-q7,lcore4@0000:e7:01.0-q4,lcore4@0000:ec:01.0-q5,lcore4@0000:f1:01.0-q1,lcore4@wq2.0,lcore5@wq4.1,lcore5@wq2.0,lcore5@wq4.1,lcore5@wq6.2]
    testpmd>start

3. Launch VM1 and VM2 with qemu::

    taskset -c 7 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10

    taskset -c 8 /root/xingguang/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/root/xingguang/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1 \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12

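Note: the hostfwd rules above forward each guest's SSH port to the host, which is how the following steps reach the VMs. Assuming sshd is running in the guests (the port numbers follow the -netdev user options above), the VMs can be reached with::

    # ssh -p 6002 root@127.0.0.1    # VM1
    # ssh -p 6003 root@127.0.0.1    # VM2
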
4. On VM1, set virtio device IP and run arp protocol::

    # ethtool -L ens5 combined 16
    # ifconfig ens5 1.1.1.2
    # arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set virtio device IP and run arp protocol::

    # ethtool -L ens5 combined 16
    # ifconfig ens5 1.1.1.8
    # arp -s 1.1.1.2 52:54:00:00:00:01

6. Scp a 1MB file from VM1 to VM2::

    # scp <file> root@1.1.1.8:/

7. Check the iperf performance between the two VMs by below commands::

    # iperf -s -i 1
    # iperf -c 1.1.1.2 -i 1 -t 60

8. Rerun steps 6-7 five times.
--
2.25.1