From mboxrd@z Thu Jan 1 00:00:00 1970
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 4/7] test_plans/pvp_vhost_dsa_test_plan: add pvp_vhost_dsa testplan
Date: Sat, 7 May 2022 05:59:19 -0400
Message-Id: <20220507095919.311060-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=y
Content-Transfer-Encoding: quoted-printable
List-Id: test suite reviews and discussions

Add pvp_vhost_dsa_test_plan.rst test_plans.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/pvp_vhost_dsa_test_plan.rst | 2600 ++++++++++++++++++++++++
 1 file changed, 2600 insertions(+)
 create mode 100644 test_plans/pvp_vhost_dsa_test_plan.rst

diff --git a/test_plans/pvp_vhost_dsa_test_plan.rst b/test_plans/pvp_vhost_dsa_test_plan.rst
new file mode 100644
index 00000000..4d8625bc
--- /dev/null
+++ b/test_plans/pvp_vhost_dsa_test_plan.rst
@@ -0,0 +1,2600 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+   IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+====================================================
+PVP vhost async operation with DSA driver test plan
+====================================================
+
+Description
+===========
+
+This document provides the test plan for testing the vhost asynchronous
+data path with the DSA driver (kernel idxd driver and DPDK vfio-pci driver)
+in the PVP topology environment with testpmd.
+
+DSA is a kind of DMA engine. The vhost asynchronous data path leverages DMA devices
+to offload memory copies from the CPU, and it is implemented in an asynchronous way.
+The Linux kernel and DPDK both provide a DSA driver (the kernel idxd driver and the
+DPDK vfio-pci driver). No matter which driver is used, the DPDK DMA library is used
+in the data path to offload copies to DSA; the only difference is which driver
+configures DSA. This enables applications, such as OVS, to save CPU cycles and hide
+memory copy overhead, thus achieving higher throughput. Vhost doesn't manage DMA
+devices; applications, such as OVS, need to manage and configure DSA devices.
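+In testpmd, which is used throughout this plan, that application-side configuration is
+expressed by the ``dmas`` list on the vhost vdev (which virtqueues may use async DMA)
+together with the ``--lcore-dma`` option (which lcore drives which DSA virtual channel).
+A minimal single-queue sketch, reusing the example device address from the test cases
+below::
+
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0]' \
+    --lcore-dma=[lcore3@0000:f6:01.0-q0]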
Applications need to tell vhost what DSA devices to use in every = data path function call.=0D +This design enables the flexibility for applications to dynamically use DM= A channels in different=0D +function modules, not limited in vhost. In addition, vhost supports M:N ma= pping between vrings=0D +and DMA virtual channels. Specifically, one vring can use multiple differe= nt DMA channels=0D +and one DMA channel can be shared by multiple vrings at the same time.=0D +=0D +IOMMU impact:=0D +If iommu off, idxd can work with iova=3Dpa=0D +If iommu on, kernel dsa driver only can work with iova=3Dva by program IOM= MU, can't use iova=3Dpa(fwd not work due to pkts payload wrong).=0D +=0D +Note: DPDK local patch that about vhost pmd is needed when testing Vhost a= synchronous data path with testpmd, and the suite has not yet been automate= d.=0D +=0D +Prerequisites=0D +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=0D +=0D +Topology=0D +--------=0D + Test flow: TG-->NIC-->Vhost-user-->Virtio-user-->Vhost-user-->NIC-->TG=0D +=0D +Hardware=0D +--------=0D + Supportted NICs: ALL=0D +=0D +Software=0D +--------=0D + Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz=0D +=0D +General set up=0D +--------------=0D +1. Compile DPDK::=0D +=0D + # CC=3Dgcc meson --werror -Denable_kmods=3DTrue -Dlibdir=3Dlib -Dexamples= =3Dall --default-library=3D=0D + # ninja -C -j 110=0D + For example,=0D + CC=3Dgcc meson --werror -Denable_kmods=3DTrue -Dlibdir=3Dlib -Dexamples= =3Dall --default-library=3Dx86_64-native-linuxapp-gcc=0D + ninja -C x86_64-native-linuxapp-gcc -j 110=0D +=0D +2. 
Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00= .1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::=0D +=0D + # ./usertools/dpdk-devbind.py -s=0D + =0D + Network devices using kernel driver=0D + =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=0D + 0000:4f:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=3Dice unused= =3Dvfio-pci=0D +=0D + DMA devices using kernel driver=0D + =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=0D + 0000:6a:01.0 'Device 0b25' drv=3Didxd unused=3Dvfio-pci=0D + 0000:6f:01.0 'Device 0b25' drv=3Didxd unused=3Dvfio-pci=0D + 0000:74:01.0 'Device 0b25' drv=3Didxd unused=3Dvfio-pci=0D + 0000:79:01.0 'Device 0b25' drv=3Didxd unused=3Dvfio-pci=0D + 0000:e7:01.0 'Device 0b25' drv=3Didxd unused=3Dvfio-pci=0D + 0000:ec:01.0 'Device 0b25' drv=3Didxd unused=3Dvfio-pci=0D + 0000:f1:01.0 'Device 0b25' drv=3Didxd unused=3Dvfio-pci=0D + 0000:f6:01.0 'Device 0b25' drv=3Didxd unused=3Dvfio-pci=0D +=0D +Test case=0D +=3D=3D=3D=3D=3D=3D=3D=3D=3D=0D +=0D +Common steps=0D +------------=0D +1. Bind 1 NIC port to vfio-pci::=0D +=0D + # ./usertools/dpdk-devbind.py -b vfio-pci =0D +=0D + For example:=0D + # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:4f.1=0D +=0D +2. Bind DSA devices to DPDK vfio-pci driver::=0D +=0D + # ./usertools/dpdk-devbind.py -b vfio-pci =0D +=0D + For example, bind 2 DMA devices to vfio-pci driver:=0D + # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:= 01.0=0D +=0D +.. note::=0D +=0D + One DPDK DSA device can create 8 WQ at most. Below is an example, where D= PDK DSA device will create one and=0D + eight WQ for DSA deivce 0000:e7:01.0 and 0000:ec:01.0. 
The value of =E2= =80=9Cmax_queues=E2=80=9D is 1~8:=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 00= 00:e7:01.0,max_queues=3D1 -- -i=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 00= 00:ec:01.0,max_queues=3D8 -- -i=0D +=0D +3. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ)::= =0D +=0D + # ./usertools/dpdk-devbind.py -b idxd =0D + # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q =0D +=0D +.. note::=0D +=0D + Better to reset WQ when need operate DSA devices that bound to idxd drvie= r: =0D + # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset = =0D + You can check it by 'ls /dev/dsa'=0D + numDevices: number of devices, where 0<=3DnumDevices<=3D7, corresponding = to 0000:6a:01.0 - 0000:f6:01.0=0D + numWq: Number of workqueues per DSA endpoint, where 1<=3DnumWq<=3D8=0D +=0D + For example, bind 2 DMA devices to idxd driver and configure WQ:=0D +=0D + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0=0D + # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0=0D + # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2=0D + Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq2.0 wq2.1 wq2.2 wq2.3"=0D +=0D +4. 
Send imix packets [64,1518] to NIC by traffic generator::=0D +=0D + The imix packets include packet size [64, 128, 256, 512, 1024, 1518], and= the format of packet is as follows.=0D + +-------------+-------------+-------------+-------------+=0D + | MAC | MAC | IPV4 | IPV4 |=0D + | Src address | Dst address | Src address | Dst address |=0D + |-------------|-------------|-------------|-------------|=0D + | Random MAC | Virtio mac | Random IP | Random IP |=0D + +-------------+-------------+-------------+-------------+=0D + All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.= =0D +=0D +Test Case 1: PVP split ring all path vhost enqueue operations with 1:1 map= ping between vrings and dsa dpdk driver channels=0D +--------------------------------------------------------------------------= -------------------------------------------------=0D +This case uses testpmd and Traffic Generator(For example, Trex) to test pe= rformance of split ring in each virtio path with 1 core and 1 queue=0D +when vhost uses the asynchronous enqueue operations with dsa dpdk driver a= nd the mapping between vrings and dsa virtual channels is 1:1.=0D +Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.=0D +=0D +1. Bind one dsa device(f6:01.0) and one nic port(4f:00.1) to vfio-pci like= common step 1-2.=0D +=0D +2. Launch vhost by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --f= ile-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=3D1 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D1,dmas=3D[txq0],dma_ring_size= =3D2048' \=0D + --iova=3Dva -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024 \=0D + --lcore-dma=3D[lcore3@0000:f6:01.0-q0]=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +3. 
Launch virtio-user with inorder mergeable path::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D1,in_order=3D1,queues=3D1 \=0D + -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +4. Send imix packets [64,1518] from packet generator, check the throughput= can get expected data::=0D +=0D + testpmd>show port stats all=0D +=0D +5. Stop vhost port, check that there are packets in both directions of RX = and TX in each queue from vhost log::=0D +=0D + testpmd>stop=0D +=0D +6. Restart vhost port and send imix packets again, then check the throuhpu= t can get expected data::=0D +=0D + testpmd>start=0D + testpmd>show port stats all=0D +=0D +7. Relaunch virtio-user with mergeable path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D1,in_order=3D0,queues=3D1 \=0D + -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step = 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D1,queues=3D1 \=0D + -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +9. 
Relaunch virtio-user with non-mergeable path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D0,queues=3D1 \=0D + -- -i --enable-hw-vlan-strip --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D0,queues=3D1,vectorized=3D1 \=0D + -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +11. Quit all testpmd and relaunch vhost by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --f= ile-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=3D8 \=0D + --vdev 'net_vhost0,iface=3D/tmp/vhost-net0,queues=3D1,dmas=3D[txq0]' \=0D + --iova=3Dva -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024 \=0D + --lcore-dma=3D[lcore11@0000:6a:01.0-q1]=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +12. Rerun step 3-6.=0D +=0D +13. Quit all testpmd and relaunch vhost by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --f= ile-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=3D8 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D1,dmas=3D[txq0],dma_ring_size= =3D2048' \=0D + --iova=3Dpa -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024 \=0D + --lcore-dma=3D[lcore3@0000:f6:01.0-q3]=0D + testpmd>set fwd csum=0D + testpmd>start=0D +=0D +14. 
Rerun step 3-10 with csum fwd.=0D +=0D +Test Case 2: PVP split ring all path multi-queues vhost async enqueue with= 1:1 mapping between vrings and dsa dpdk driver channels=0D +--------------------------------------------------------------------------= ---------------------------------------------------------=0D +This case uses testpmd and Traffic Generator(For example, Trex) to test pe= rformance of split ring in each virtio path with multi-queues=0D +when vhost uses the asynchronous enqueue operations with dsa dpdk driver a= nd the mapping between vrings and dsa virtual channels is 1:1.=0D +Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested. =0D +=0D +1. Bind 8 dsa device(6a:01.0-f6:01.0) and one nic port(4f:00.1) to vfio-pc= i like common step 1-2::=0D +=0D + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01= .0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0=0D + # ./usertools/dpdk-devbind.py -b vfio-pci 0000:4f:00.1=0D +=0D +2. Launch vhost by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -= -file-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=3D8 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D8,dmas=3D[txq0;txq1;txq2;txq3= ;txq4;txq5;txq6;txq7],dma_ring_size=3D2048' \=0D + --iova=3Dva -- -i --nb-cores=3D8 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd= =3D1024 \=0D + --lcore-dma=3D[lcore11@0000:f6:01.0-q0,lcore12@0000:f6:01.0-q1,lcore13@00= 00:f6:01.0-q2,lcore14@0000:f6:01.0-q3,lcore15@0000:f6:01.0-q4,lcore16@0000:= f6:01.0-q5,lcore17@0000:f6:01.0-q6,lcore18@0000:f6:01.0-q7]=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +3. 
Launch virtio-user with inorder mergeable path::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D1,in_order=3D1,queues=3D8 \=0D + -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +4. Send imix packets [64,1518] from packet generator, check the throughput= can get expected data::=0D +=0D + testpmd>show port stats all=0D +=0D +5. Stop vhost port, check that there are packets in both directions of RX = and TX in each queue from vhost log::=0D +=0D + testpmd>stop=0D +=0D +6. Restart vhost port and send imix packets again, then check the throuhpu= t can get expected data::=0D +=0D + testpmd>start=0D + testpmd>show port stats all=0D +=0D +7. Relaunch virtio-user with mergeable path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D1,in_order=3D0,queues=3D8 \=0D + -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step = 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D1,queues=3D8 \=0D + -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +9. 
Relaunch virtio-user with non-mergeable path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D0,queues=3D8 \=0D + -- -i --enable-hw-vlan-strip --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1= 024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D0,queues=3D8,vectorized=3D1 \=0D + -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +11. Quit all testpmd and relaunch vhost with diff channel by below command= ::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -= -file-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=3D1 -a 0000= :6f:01.0,max_queues=3D1 \=0D + -a 0000:74:01.0,max_queues=3D1 -a 0000:79:01.0,max_queues=3D1 -a 0000:e7:= 01.0,max_queues=3D1 -a 0000:ec:01.0,max_queues=3D1 -a 0000:f1:01.0,max_queu= es=3D1 -a 0000:f6:01.0,max_queues=3D1 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D8,dmas=3D[txq0;txq1;txq2;txq3= ;txq4;txq5;txq6;txq7],dma_ring_size=3D2048' \=0D + --iova=3Dva -- -i --nb-cores=3D8 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd= =3D1024 \=0D + --lcore-dma=3D[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q0,lcore13@00= 00:74:01.0-q0,lcore14@0000:79:01.0-q0,lcore15@0000:e7:01.0-q0,lcore16@0000:= ec:01.0-q0,lcore17@0000:f1:01.0-q0,lcore18@0000:f6:01.0-q0]=0D + testpmd>set fwd csum=0D + testpmd>start=0D +=0D +12. Rerun step 3-6 with csum fwd.=0D +=0D +13. 
Quit all testpmd and relaunch vhost with pa mode by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -= -file-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 000= 0:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -= a 0000:f6:01.0 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D8,dmas=3D[txq0;txq1;txq2;txq3= ;txq4;txq5;txq6;txq7],dma_ring_size=3D2048' \=0D + --iova=3Dpa -- -i --nb-cores=3D8 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd= =3D1024 \=0D + --lcore-dma=3D[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@00= 00:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:= ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]=0D + testpmd>set fwd csum=0D + testpmd>start=0D +=0D +14. Rerun step 3-6 with csum fwd.=0D +=0D +Test Case 3: PVP split ring all path multi-queues vhost enqueue operations= with M to 1 mapping between vrings and CBDMA virtual channels=0D +--------------------------------------------------------------------------= ---------------------------------------------------------------=0D +This case uses testpmd and Traffic Generator(For example, Trex) to test pe= rformance of split ring in each virtio path with multi-queues=0D +when vhost uses the asynchronous enqueue operations with dsa dpdk driver a= nd the mapping between vrings and dsa virtual channels is M:1.=0D +Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested. =0D +=0D +1. Bind 1 dsa device and one nic port to vfio-pci like comon step 1-2::=0D +=0D + # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1 f6:01.0=0D +=0D +2. 
Launch vhost by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -= -file-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=3D1 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D8,dmas=3D[txq0;txq1;txq2;txq3= ;txq4;txq5;txq6;txq7],dma_ring_size=3D2048' \=0D + --iova=3Dva -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd= =3D1024 \=0D + --lcore-dma=3D[lcore11@0000:f1:01.0-q0]=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +3. Launch virtio-user with inorder mergeable path::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D1,in_order=3D1,queues=3D8 \=0D + -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +4. Send imix packets [64,1518] from packet generator, check the throughput= can get expected data::=0D +=0D + testpmd>show port stats all=0D +=0D +5. Stop vhost port, check that there are packets in both directions of RX = and TX in each queue from vhost log::=0D +=0D + testpmd>stop=0D +=0D +6. Restart vhost port and send imix packets again, then check the throuhpu= t can get expected data::=0D +=0D + testpmd>start=0D + testpmd>show port stats all=0D +=0D +7. Relaunch virtio-user with mergeable path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D1,in_order=3D0,queues=3D8 \=0D + -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +8. 
Relaunch virtio-user with inorder non-mergeable path, then repeat step = 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D1,queues=3D8 \=0D + -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D0,queues=3D8 \=0D + -- -i --enable-hw-vlan-strip --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1= 024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D0,queues=3D8,vectorized=3D1 \=0D + -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +11. Quit all testpmd and relaunch vhost with pa mode by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -= -file-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=3D1 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D8,dmas=3D[txq0;txq1;txq2;txq3= ;txq4;txq5;txq6;txq7],dma_ring_size=3D2048' \=0D + --iova=3Dpa -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd= =3D1024 \=0D + --lcore-dma=3D[lcore11@0000:f1:01.0-q3]=0D + testpmd>set fwd csum=0D + testpmd>start=0D +=0D +12. 
Rerun step 6 with csum fwd.=0D +=0D +Test Case 4: PVP split ring all path multi-queues vhost enqueue operations= with 1 to N mapping between vrings and CBDMA virtual channels=0D +--------------------------------------------------------------------------= ---------------------------------------------------------------=0D +This case uses testpmd and Traffic Generator(For example, Trex) to test pe= rformance of split ring in each virtio path with multi-queues=0D +when vhost uses the asynchronous enqueue operations with dsa dpdk driver a= nd the mapping between vrings and dsa virtual channels is 1:N.=0D +Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.=0D +=0D +1. Bind 8 dsa device and one nic port to vfio-pci like cmmon step 1-2::=0D +=0D + # ./usertools/dpdk-devbind.py -b 4f:00.1=0D + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01= .0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0=0D +=0D +2. Launch vhost by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -= -file-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=3D8 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D1,dmas=3D[txq0],dma_ring_size= =3D2048' \=0D + --iova=3Dva -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024 \=0D + --lcore-dma=3D[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@00= 00:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:= f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +3. Launch virtio-user with inorder mergeable path::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D1,in_order=3D1,queues=3D1 \=0D + -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +4. 
Send imix packets [64,1518] from packet generator, check the throughput= can get expected data::=0D +=0D + testpmd>show port stats all=0D +=0D +5. Stop vhost port, check that there are packets in both directions of RX = and TX in each queue from vhost log::=0D +=0D + testpmd>stop=0D +=0D +6. Restart vhost port and send imix packets again, then check the throuhpu= t can get expected data::=0D +=0D + testpmd>start=0D + testpmd>show port stats all=0D +=0D +7. Relaunch virtio-user with mergeable path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D1,in_order=3D0,queues=3D1 \=0D + -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step = 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D1,queues=3D1 \=0D + -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D0,queues=3D1 \=0D + -- -i --enable-hw-vlan-strip --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +10. 
Relaunch virtio-user with vector_rx path, then repeat step 4-6::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D0,in_order=3D0,queues=3D1,vectorized=3D1 \=0D + -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +11. Quit all testpmd and relaunch vhost with diff channel by below command= ::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -= -file-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=3D1 -a 0000= :6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a= 0000:f1:01.0 -a 0000:f6:01.0 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D8,dmas=3D[txq0],dma_ring_size= =3D2048' \=0D + --iova=3Dva -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024 \=0D + --lcore-dma=3D[lcore11@0000:6a:01.0-q0,lcore11@0000:6f:01.0-q1,lcore11@00= 00:74:01.0-q2,lcore11@0000:79:01.0-q3,lcore11@0000:e7:01.0-q4,lcore11@0000:= ec:01.0-q5,lcore11@0000:f1:01.0-q6,lcore11@0000:f6:01.0-q7]=0D + testpmd>set fwd csum=0D + testpmd>start=0D +=0D +12. Rerun step 3-10 with csum fwd.=0D +=0D +13. Quit all testpmd and relaunch vhost with pa mode by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -= -file-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 000= 0:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -= a 0000:f6:01.0 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D8,dmas=3D[txq0],dma_ring_size= =3D2048' \=0D + --iova=3Dpa -- -i --nb-cores=3D1 --txd=3D1024 --rxd=3D1024 \=0D + --lcore-dma=3D[lcore11@0000:6a:01.0-q0,lcore11@0000:6f:01.0-q1,lcore11@00= 00:74:01.0-q2,lcore11@0000:79:01.0-q3,lcore11@0000:e7:01.0-q4,lcore11@0000:= ec:01.0-q5,lcore11@0000:f1:01.0-q6,lcore11@0000:f6:01.0-q7]=0D + testpmd>set fwd csum=0D + testpmd>start=0D +=0D +14. 
Rerun step 8 with csum fwd.=0D +=0D +Test Case 5: PVP split ring all path multi-queues vhost enqueue operations= with M to N mapping between vrings and CBDMA virtual channels=0D +--------------------------------------------------------------------------= ---------------------------------------------------------------=0D +This case uses testpmd and Traffic Generator(For example, Trex) to test pe= rformance of split ring in each virtio path with multi-queues=0D +when vhost uses the asynchronous enqueue operations with dsa dpdk driver a= nd the mapping between vrings and dsa virtual channels is M:N.=0D +Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.=0D +=0D +1. Bind 8 dsa device and one nic port to vfio-pci like common step 1-2::=0D +=0D + # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1=0D + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01= .0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0 =0D +=0D +2. Launch vhost by below command::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -= -file-prefix=3Dvhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=3D8 \=0D + --vdev 'net_vhost0,iface=3D/tmp/s0,queues=3D8,dmas=3D[txq0;txq1;txq2],dma= _ring_size=3D2048' \=0D + --iova=3Dva -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd= =3D1024 \=0D + --lcore-dma=3D[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@00= 00:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:= f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +3. Launch virtio-user with inorder mergeable path::=0D +=0D + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --n= o-pci --file-prefix=3Dvirtio \=0D + --vdev=3Dnet_virtio_user0,mac=3D00:01:02:03:04:05,path=3D/tmp/s0,mrg_rxbu= f=3D1,in_order=3D1,queues=3D8 \=0D + -- -i --nb-cores=3D1 --txq=3D8 --rxq=3D8 --txd=3D1024 --rxd=3D1024=0D + testpmd>set fwd mac=0D + testpmd>start=0D +=0D +4. 
4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::

    testpmd>show port stats all

5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::

    testpmd>stop

6. Restart vhost port and send imix packets again, then check the throughput can get expected data::

    testpmd>start
    testpmd>show port stats all

7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start
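The "expected data" in steps 4 and 6 is usually judged against the theoretical line rate of the generator's imix profile. A rough estimate can be computed as below; the 50/50 weighting of the [64,1518] endpoints is an assumption for illustration, since real imix profiles define their own weights:

```python
def line_rate_mpps(link_gbps: float, frame_sizes, weights):
    """Theoretical Ethernet line rate in Mpps for a weighted frame mix.
    Each frame costs its size plus 20 bytes of preamble + inter-frame gap
    on the wire."""
    avg_wire_bytes = sum(w * (s + 20) for s, w in zip(frame_sizes, weights)) / sum(weights)
    return link_gbps * 1e9 / 8 / avg_wire_bytes / 1e6

# Illustrative 50/50 mix of the [64,1518] imix endpoints on a 100G link.
mpps = line_rate_mpps(100, [64, 1518], [1, 1])
```

Comparing the measured `show port stats all` rate against such an estimate (minus an expected software overhead) is how a pass threshold is typically derived.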
10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

11. Quit all testpmd and relaunch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
    testpmd>set fwd mac
    testpmd>start

12. Rerun step 3-6.

13. Quit all testpmd and relaunch vhost with different channels by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6f:01.0-q1,lcore11@0000:74:01.0-q2,lcore11@0000:79:01.0-q3,lcore11@0000:e7:01.0-q4,lcore11@0000:ec:01.0-q5,lcore11@0000:f1:01.0-q6,lcore11@0000:f6:01.0-q7]
    testpmd>set fwd csum
    testpmd>start

14. Rerun step 7 with csum fwd.
15. Quit all testpmd and relaunch vhost with pa mode by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
    testpmd>set fwd csum
    testpmd>start

16. Rerun step 9 with csum fwd.

Test Case 6: PVP split ring dynamic queues vhost async operation with dsa dpdk driver channels
----------------------------------------------------------------------------------------------
This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of split ring when vhost uses the asynchronous enqueue operations
with dsa dpdk driver and whether vhost-user can work well when the queue number changes dynamically. Both iova as VA and PA mode have been tested.

1. Bind 8 dsa devices and 1 NIC port to vfio-pci like common step 1-2::

    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:4f:00.1
    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0

2. Launch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:4f:00.1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
    testpmd>set fwd mac
    testpmd>start
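Because vhost runs with ``client=1`` and virtio-user with ``server=1``, vhost can be quit and relaunched with a different queue count and ``dmas`` list while the connection recovers. The helper below (hypothetical, for illustration only) shows which active TX queues end up DMA-accelerated for a given ``dmas`` option:

```python
def dma_accelerated_queues(dmas: str, active_txq: int):
    """Given a vhost vdev 'dmas' option such as '[txq0;txq1;txq2;txq3]',
    return the async (DMA-offloaded) queue ids that are actually active
    when the port runs with `active_txq` TX queues."""
    ids = [int(tok[3:]) for tok in dmas.strip('[]').split(';') if tok]
    return [q for q in ids if q < active_txq]

# Test Case 6 step 6 relaunches vhost with dmas=[txq0;txq1;txq2;txq3]
# and --txq=4, so all four active queues use the async enqueue path.
accel = dma_accelerated_queues('[txq0;txq1;txq2;txq3]', 4)
```

Queues not listed in ``dmas`` still carry traffic, but through the synchronous CPU-copy path, which is why the later steps check all queues for packets, not only the accelerated ones.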
3. Launch virtio-user by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
    testpmd>set fwd mac
    testpmd>start

4. Send imix packets [64,1518] from packet generator with random IP, check performance can get target.

5. Stop vhost port, check vhost RX and TX direction both exist packets in 2 queues from vhost log.

6. Quit and relaunch vhost with 1:1 mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=4 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
    --iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3]
    testpmd>set fwd mac
    testpmd>start

7. Rerun step 4.

8. Stop vhost port, check vhost RX and TX direction both exist packets in 4 queues from vhost log.
9. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=8 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
    testpmd>set fwd mac
    testpmd>start

10. Rerun step 4.

11. Stop vhost port, check vhost RX and TX direction both exist packets in 8 queues from vhost log.
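Steps 5, 8 and 11 all reduce to the same check on the vhost stop output: every active queue must show traffic in both directions. Expressed as a sketch over per-queue counters (the counter values here are hypothetical):

```python
def queues_with_traffic(rx_counts, tx_counts):
    """Return queue ids where both RX and TX saw packets - the condition
    the 'check ... both exist packets in N queues' steps verify."""
    return [q for q, (rx, tx) in enumerate(zip(rx_counts, tx_counts))
            if rx > 0 and tx > 0]

# Hypothetical counters read from 'testpmd> stop' for a 4-queue run.
ok = queues_with_traffic([100, 98, 101, 99], [100, 98, 101, 99])
```

A queue showing RX but no TX (or vice versa) would indicate that forwarding or the async enqueue path is broken for that queue.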
12. Quit and relaunch vhost with different M:N(1:N;M<N) mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:f6:01.0-q7,lcore12@0000:6f:01.0-q1,lcore12@0000:74:01.0-q2,lcore12@0000:79:01.0-q3,lcore13@0000:74:01.0-q2,lcore13@0000:79:01.0-q3,lcore13@0000:e7:01.0-q4,lcore14@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore14@0000:e7:01.0-q4,lcore14@0000:ec:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2,lcore15@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore15@0000:ec:01.0-q5,lcore15@0000:f1:01.0-q6,lcore15@0000:f6:01.0-q7]
    testpmd>set fwd mac
    testpmd>start

13. Rerun step 10-11.

14. Quit and relaunch vhost with different M:N(M:1;M>N) mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=4 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
    testpmd>set fwd mac
    testpmd>start

15. Rerun step 10-11.
16. Quit and relaunch vhost with different M:N(M:1;M>N) mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=3 -a 0000:6f:01.0,max_queues=3 -a 0000:74:01.0,max_queues=3 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:6f:01.0-q1,lcore14@0000:74:01.0-q2,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2]
    testpmd>set fwd mac
    testpmd>start

17. Rerun step 10-11.

18. Quit and relaunch vhost with iova=pa by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=4 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=pa -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
    testpmd>set fwd mac
    testpmd>start
19. Rerun step 10-11.

Test Case 7: PVP packed ring all path vhost enqueue operations with 1:1 mapping between vrings and dsa dpdk driver channels
----------------------------------------------------------------------------------------------------------------------------
This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring in each virtio path with 1 core and
1 queue when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels
is 1:1. Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.

1. Bind one dsa device and one nic port to vfio-pci like common step 1-2::

    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1 f6:01.0

2. Launch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore3@0000:f6:01.0-q0]
    testpmd>set fwd mac
    testpmd>start

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::

    testpmd>show port stats all

5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::

    testpmd>stop
6. Restart vhost port and send imix packets again, then check the throughput can get expected data::

    testpmd>start
    testpmd>show port stats all

7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start
11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1,queue_size=1025 \
    -- -i --nb-cores=1 --txd=1025 --rxd=1025
    testpmd>set fwd mac
    testpmd>start

12. Quit all testpmd and relaunch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore3@0000:f6:01.0-q3]
    testpmd>set fwd csum
    testpmd>start

13. Rerun step 11 with csum fwd.

Test Case 8: PVP packed ring all path multi-queues vhost async enqueue operation with 1:1 mapping between vrings and dsa dpdk driver channels
----------------------------------------------------------------------------------------------------------------------------------------------
This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring in each virtio path with multi-queues
when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:1.
Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.

1. Bind 8 dsa devices and one nic port to vfio-pci like common step 1-2::

    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
2. Launch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:f6:01.0-q0,lcore12@0000:f6:01.0-q1,lcore13@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q3,lcore15@0000:f6:01.0-q4,lcore16@0000:f6:01.0-q5,lcore17@0000:f6:01.0-q6,lcore18@0000:f6:01.0-q7]
    testpmd>set fwd mac
    testpmd>start

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::

    testpmd>show port stats all

5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::

    testpmd>stop

6. Restart vhost port and send imix packets again, then check the throughput can get expected data::

    testpmd>start
    testpmd>show port stats all

7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start
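Steps 3 to 9 sweep the packed-ring virtio paths by toggling ``mrg_rxbuf`` and ``in_order`` in the ``--vdev`` string while ``packed_vq=1`` stays set. A sketch that generates those vdev strings (the helper itself is illustrative, mirroring the commands above):

```python
def virtio_vdev(mrg_rxbuf: int, in_order: int, packed_vq: bool = True,
                vectorized: bool = False, queues: int = 8,
                path: str = '/tmp/s0', mac: str = '00:01:02:03:04:05') -> str:
    """Build the net_virtio_user0 --vdev string used by the relaunch steps."""
    opts = [f'mac={mac}', f'path={path}',
            f'mrg_rxbuf={mrg_rxbuf}', f'in_order={in_order}']
    if packed_vq:
        opts.append('packed_vq=1')
    opts.append(f'queues={queues}')
    if vectorized:
        opts.append('vectorized=1')
    return 'net_virtio_user0,' + ','.join(opts)

# The four non-vectorized packed-ring paths exercised in steps 3 and 7-9:
# inorder mergeable, mergeable, inorder non-mergeable, non-mergeable.
paths = {(m, i): virtio_vdev(m, i) for m in (1, 0) for i in (1, 0)}
```

The vectorized path of step 10 adds ``vectorized=1`` plus the ``--force-max-simd-bitwidth=512`` EAL flag on top of the inorder non-mergeable combination.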
8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1,queue_size=1025 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
    testpmd>set fwd mac
    testpmd>start
12. Quit all testpmd and relaunch vhost with different channels by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
    testpmd>set fwd csum
    testpmd>start

13. Rerun step 11 with csum fwd.

14. Quit all testpmd and relaunch vhost with pa mode by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
    testpmd>set fwd csum
    testpmd>start
15. Rerun step 3-6 with csum fwd.

Test Case 9: PVP packed ring all path multi-queues vhost async enqueue operation with M:1 mapping between vrings and dsa dpdk driver channels
----------------------------------------------------------------------------------------------------------------------------------------------
This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring in each virtio path with multi-queues
when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is M:1.
Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.

1. Bind 1 dsa device and one nic port to vfio-pci like common step 1-2::

    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1 f1:01.0

2. Launch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f1:01.0,max_queues=1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:f1:01.0-q0]
    testpmd>set fwd mac
    testpmd>start

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::

    testpmd>show port stats all
5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::

    testpmd>stop

6. Restart vhost port and send imix packets again, then check the throughput can get expected data::

    testpmd>start
    testpmd>show port stats all

7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start
10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1,queue_size=1025 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
    testpmd>set fwd mac
    testpmd>start

12. Quit all testpmd and relaunch vhost with pa mode by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f1:01.0,max_queues=8 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:f1:01.0-q3]
    testpmd>set fwd csum
    testpmd>start
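Step 11 deliberately picks ``queue_size=1025``: the vectorized datapath relies on power-of-two ring sizes for its index masking, so a 1025-entry ring is expected to fall back to the scalar path (this is our reading of the test intent). The check itself is the classic bit trick:

```python
def is_pow2(n: int) -> bool:
    """True when n is a power of two - the usual precondition for the
    vectorized virtio datapath's index masking (n & (n-1) clears the
    lowest set bit, leaving 0 only for powers of two)."""
    return n > 0 and (n & (n - 1)) == 0

# The two ring sizes used by steps 10 and 11 respectively.
ring_checks = {n: is_pow2(n) for n in (1024, 1025)}
```

Running both ring sizes confirms that selecting or rejecting the vectorized path does not break forwarding.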
13. Rerun step 7 with csum fwd.

Test Case 10: PVP packed ring all path multi-queues vhost async enqueue operation with 1:N mapping between vrings and dsa dpdk driver channels
-----------------------------------------------------------------------------------------------------------------------------------------------
This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring in each virtio path with multi-queues
when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:N.
Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.

1. Bind 8 dsa devices and one nic port to vfio-pci like common step 1-2::

    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0

2. Launch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
    testpmd>set fwd mac
    testpmd>start

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start
4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::

    testpmd>show port stats all

5. Stop vhost port, check vhost RX and TX direction both exist packets from vhost log::

    testpmd>stop

6. Restart vhost port and send imix packets again, check the same throughput can be achieved as above::

    testpmd>start
    testpmd>show port stats all

7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start
10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1,queue_size=1025 \
    -- -i --nb-cores=1 --txd=1025 --rxd=1025
    testpmd>set fwd mac
    testpmd>start

12. Quit all testpmd and relaunch vhost with different channels by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
    testpmd>set fwd csum
    testpmd>start

13. Rerun step 11 with csum fwd.
+14. Quit all testpmd and relaunch vhost with pa mode by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
+    testpmd>set fwd csum
+    testpmd>start
+
+15. Rerun step 8 with csum fwd.
+
+Test Case 11: PVP packed ring all path multi-queues vhost async enqueue operation with 1:N mapping between vrings and dsa dpdk driver channels
+---------------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test performance of packed ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:N.
+Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
+
+1. Bind 8 dsa devices and one nic port to vfio-pci like common step 1-2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+
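The vhost command in the next step maps all eight DSA virtual channels of 0000:f6:01.0 to a single lcore, which is the 1:N relationship under test. As a hedged, illustrative aside (not part of the test procedure), the per-lcore channel count can be derived mechanically from the ``--lcore-dma`` list itself; the mapping string below mirrors the one used in step 2:

```shell
# Hedged sketch: count how many DSA channels each lcore owns in a
# --lcore-dma list; a 1:N mapping shows a single lcore with count N.
lcore_dma="lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7"
# split entries, keep the lcore part before '@', count occurrences
echo "$lcore_dma" | tr ',' '\n' | cut -d@ -f1 | sort | uniq -c
```

One output line per lcore, count first; here a single line for lcore11 with count 8.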
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,queues=8,in_order=1,packed_vq=1,vectorized=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,queues=8,in_order=1,packed_vq=1,vectorized=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd>set fwd mac
+    testpmd>start
+
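Step 11 above deliberately uses a descriptor and queue size of 1025 to exercise the non-power-of-2 ring path. As an illustrative aside (not part of the test procedure), a size ``n`` is a power of two exactly when ``n & (n - 1)`` is zero, which 1025 fails:

```shell
# Hedged sketch: power-of-two check via the n & (n-1) bit trick.
n=1025
echo $(( n & (n - 1) ))   # non-zero: 1025 is not a power of two
n=1024
echo $(( n & (n - 1) ))   # zero: 1024 is a power of two
```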
+12. Quit all testpmd and relaunch vhost with different channels by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
+    testpmd>set fwd mac
+    testpmd>start
+
+13. Rerun step 11.
+
+14. Quit all testpmd and relaunch vhost with different channels by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
+    testpmd>set fwd csum
+    testpmd>start
+
+15. Rerun step 11 with csum fwd.
+
+16. Quit all testpmd and relaunch vhost with pa mode by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
+    testpmd>set fwd csum
+    testpmd>start
+
+17. Rerun step 9 with csum fwd.
+
+Test Case 12: PVP packed ring dynamic queues vhost async operation with dsa dpdk driver channels
+------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test performance of packed ring when vhost uses the asynchronous enqueue operations
+with dsa dpdk driver and checks whether vhost-user can work well when the queue number changes dynamically. Both iova as VA and PA mode have been tested.
+
+1. Bind 8 dsa devices and 1 NIC port to vfio-pci like common step 1-2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:4f:00.1
+    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator with random ip, check performance can get target.
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in 2 queues from vhost log.
+
+6. Quit and relaunch vhost with 1:1 mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+    --iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3]
+    testpmd>set fwd mac
+    testpmd>start
+
+7. Send imix packets [64,1518] from packet generator with random ip, check performance can get target.
+
+8. Stop vhost port, check that there are packets in both directions of RX and TX in 4 queues from vhost log.
+
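The per-queue checks in steps 5 and 8 amount to verifying that no queue shows a zero RX or TX counter when the vhost port is stopped. A hedged helper for eyeballing that (the stats line format below is an assumption modeled on typical testpmd forward-stats output, not a quote from it):

```shell
# Hedged sketch: scan sample forward-stats lines and flag any zero
# RX-packets/TX-packets counter.
stats='RX-packets: 151 TX-packets: 151
RX-packets: 143 TX-packets: 143'
echo "$stats" | awk '{for(i=1;i<NF;i++) if(($i=="RX-packets:"||$i=="TX-packets:") && $(i+1)+0==0) bad=1} END{print (bad ? "FAIL" : "PASS")}'
```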
+9. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Send imix packets [64,1518] from packet generator with random ip, check performance can get target.
+
+11. Stop vhost port, check that there are packets in both directions of RX and TX in 8 queues from vhost log.
+
+12. Quit and relaunch vhost with diff M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:f6:01.0-q7,lcore12@0000:6f:01.0-q1,lcore12@0000:74:01.0-q2,lcore12@0000:79:01.0-q3,lcore13@0000:74:01.0-q2,lcore13@0000:79:01.0-q3,lcore13@0000:e7:01.0-q4,lcore14@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore14@0000:e7:01.0-q4,lcore14@0000:ec:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2,lcore15@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore15@0000:ec:01.0-q5,lcore15@0000:f1:01.0-q6,lcore15@0000:f6:01.0-q7]
+    testpmd>set fwd mac
+    testpmd>start
+
+13. Rerun step 10-11.
+
+14. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
+    testpmd>set fwd mac
+    testpmd>start
+
+15. Rerun step 10-11.
+
+16. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:6f:01.0-q1,lcore14@0000:74:01.0-q2,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2]
+    testpmd>set fwd mac
+    testpmd>start
+
+17. Rerun step 10-11.
+
+18. Quit and relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=pa -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
+    testpmd>set fwd mac
+    testpmd>start
+
+19. Rerun step 10-11.
+
+Test Case 13: PVP split ring all path vhost enqueue operations with 1:1 mapping between vrings and dsa kernel driver channels
+----------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test performance of split ring in each virtio path with 1 core and 1 queue
+when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1.
+Iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
+
+1. Bind one dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset 0
+
+    # ./usertools/dpdk-devbind.py -u 6a:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore3@wq0.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,vectorized=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore3@wq0.1]
+    testpmd>set fwd csum
+    testpmd>start
+
+12. Rerun step 3-6 with csum fwd.
+
+Test Case 14: PVP split ring all path multi-queues vhost async enqueue with 1:1 mapping between vrings and dsa kernel driver channels
+------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test performance of split ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1.
+Iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
+
+1. Bind 3 dsa devices to idxd driver and one nic port to vfio-pci like common step 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 4
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3,lcore15@wq0.4,lcore16@wq0.5,lcore17@wq0.6,lcore18@wq0.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost with different channels by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq2.0,lcore12@wq2.1,lcore13@wq2.2,lcore14@wq2.3,lcore15@wq4.0,lcore16@wq4.1,lcore17@wq4.2,lcore18@wq4.3]
+    testpmd>set fwd csum
+    testpmd>start
+
+12. Rerun step 10 with csum fwd.
+
+Test Case 15: PVP split ring all path multi-queues vhost async enqueue with M:1 mapping between vrings and dsa kernel driver channels
+------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test performance of split ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is M:1.
+Iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
+
+1. Bind 1 dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+    ls /dev/dsa #check wq configure success
+
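After `dpdk_idxd_cfg.py -q 2 0` configures two work queues on idxd device 0, /dev/dsa is expected to expose matching wq0.x nodes. As a hedged, illustrative check (the sample listing below is an assumption, not captured output), the count can be verified from an `ls /dev/dsa` listing:

```shell
# Hedged sketch: count device-0 work queues in a sample `ls /dev/dsa` listing.
ls_output="wq0.0 wq0.1"
echo "$ls_output" | tr ' ' '\n' | grep -c '^wq0\.'   # expect 2 after `-q 2 0`
```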
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.1]
+    testpmd>set fwd csum
+    testpmd>start
+
+12. Rerun step 10 with csum fwd.
+
+Test Case 16: PVP split ring all path multi-queues vhost async enqueue with 1:N mapping between vrings and dsa kernel driver channels
+------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test performance of split ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:N.
+Iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
+
+1. Bind 3 dsa devices to idxd driver and one nic port to vfio-pci like common step 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
+    testpmd>set fwd mac
+    testpmd>start
+
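Step 1 above configures 8 work queues on each of the idxd devices numbered 0, 2 and 4 (the numbering follows the order the devices were bound in step 1). As a hedged, illustrative aside, the set of wq nodes expected under /dev/dsa after this configuration can be enumerated mechanically:

```shell
# Hedged sketch: enumerate the wq nodes expected after `-q 8` on devices 0, 2, 4,
# then count them; 3 devices x 8 queues = 24 nodes.
for dev in 0 2 4; do
    for q in 0 1 2 3 4 5 6 7; do
        printf 'wq%s.%s\n' "$dev" "$q"
    done
done | wc -l
```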
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,vectorized=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost with different channels by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq2.0,lcore11@wq2.1,lcore11@wq2.2,lcore11@wq2.3,lcore11@wq4.4,lcore11@wq4.5,lcore11@wq4.6,lcore11@wq4.7]
+    testpmd>set fwd csum
+    testpmd>start
+
+12. 
Rerun steps 3-6 with csum fwd.
+
+Test Case 17: PVP split ring all path multi-queues vhost async enqueue with M:N mapping between vrings and dsa kernel driver channels
+--------------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of split ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations with the dsa kernel driver and the mapping between vrings and dsa virtual channels is M:N.
+IOVA as VA mode is used; both 'mac fwd' and 'csum fwd' are tested.
+
+1. Bind 8 DSA devices to the idxd driver and one NIC port to vfio-pci like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+    ls /dev/dsa #check wq configure success
+
+2. 
Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check that the throughput meets the expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from the vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check that the throughput meets the expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. 
Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Rerun steps 3-6.
+
+13. 
Quit all testpmd and relaunch vhost with different channels by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq2.1,lcore11@wq4.2,lcore11@wq6.3,lcore11@wq8.4,lcore11@wq10.5,lcore11@wq12.6,lcore11@wq14.7]
+    testpmd>set fwd csum
+    testpmd>start
+
+14. Rerun step 11 with csum fwd.
+
+Test Case 18: PVP split ring dynamic queues vhost async operation with dsa kernel driver channels
+----------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of split ring when vhost uses the asynchronous enqueue operations
+with the dsa kernel driver, and checks that vhost-user works well when the queue number changes dynamically.
+
+1. Bind 8 DSA devices to the idxd driver and 1 NIC port to vfio-pci like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+    ls /dev/dsa #check wq configure success
+
+2. 
Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator with random ip, check that the performance reaches the target.
+
+5. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions of 2 queues.
+
+6. Quit and relaunch vhost with 1:1 mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+    --iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
+    --lcore-dma=[lcore11@wq0.0,lcore12@wq2.1,lcore13@wq4.2,lcore14@wq6.3]
+    testpmd>set fwd mac
+    testpmd>start
+
+7. Rerun step 3.
+
+8. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions of 4 queues.
+
+9. 
Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.7,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore13@wq0.2,lcore13@wq0.3,lcore13@wq0.4,lcore14@wq0.2,lcore14@wq0.3,lcore14@wq0.4,lcore14@wq0.5,lcore15@wq0.0,lcore15@wq0.1,lcore15@wq0.2,lcore15@wq0.3,lcore15@wq0.4,lcore15@wq0.5,lcore15@wq0.6,lcore15@wq0.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Rerun step 3.
+
+11. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions of 8 queues.
+
+12. Quit and relaunch vhost with different M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq14.7,lcore12@wq2.1,lcore12@wq4.2,lcore12@wq6.3,lcore13@wq4.2,lcore13@wq6.3,lcore13@wq8.4,lcore14@wq4.2,lcore14@wq6.3,lcore14@wq8.4,lcore14@wq10.5,lcore15@wq0.0,lcore15@wq2.1,lcore15@wq4.2,lcore15@wq6.3,lcore15@wq8.4,lcore15@wq10.5,lcore15@wq12.6,lcore15@wq14.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+13. Rerun steps 10-11.
+
+14. 
Quit and relaunch vhost with different M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq0.2,lcore14@wq0.1,lcore14@wq0.2,lcore15@wq0.1,lcore15@wq0.2]
+    testpmd>set fwd mac
+    testpmd>start
+
+15. Rerun steps 10-11.
+
+16. Quit and relaunch vhost with different M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq2.1,lcore13@wq4.2,lcore14@wq2.1,lcore14@wq4.2,lcore15@wq2.1,lcore15@wq4.2]
+    testpmd>set fwd mac
+    testpmd>start
+
+17. Rerun steps 10-11.
+
+Test Case 19: PVP packed ring all path vhost enqueue operations with 1:1 mapping between vrings and dsa kernel driver channels
+----------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring in each virtio path with 1 core and 1 queue
+when vhost uses the asynchronous enqueue operations with the dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1.
+IOVA as VA mode is used; both 'mac fwd' and 'csum fwd' are tested.
+
+1. 
Bind one DSA device to the idxd driver and one NIC port to vfio-pci like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset 0
+
+    # ./usertools/dpdk-devbind.py -u 6a:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore3@wq0.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check that the throughput meets the expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from the vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check that the throughput meets the expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. 
Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. 
Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txd=1025 --rxd=1025
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Quit all testpmd and relaunch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore3@wq0.1]
+    testpmd>set fwd csum
+    testpmd>start
+
+13. Rerun steps 3-6 with csum fwd.
+
+Test Case 20: PVP packed ring all path multi-queues vhost async enqueue with 1:1 mapping between vrings and dsa kernel driver channels
+--------------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations with the dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1.
+IOVA as VA mode is used; both 'mac fwd' and 'csum fwd' are tested.
+
+1. 
Bind 3 DSA devices to the idxd driver and one NIC port to vfio-pci like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 4
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3,lcore15@wq0.4,lcore16@wq0.5,lcore17@wq0.6,lcore18@wq0.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check that the throughput meets the expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from the vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check that the throughput meets the expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. 
Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. 
Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Quit all testpmd and relaunch vhost with different channels by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq2.0,lcore12@wq2.1,lcore13@wq2.2,lcore14@wq2.3,lcore15@wq4.0,lcore16@wq4.1,lcore17@wq4.2,lcore18@wq4.3]
+    testpmd>set fwd csum
+    testpmd>start
+
+13. Rerun step 11 with csum fwd.
+
+Test Case 21: PVP packed ring all path multi-queues vhost async enqueue with M:1 mapping between vrings and dsa kernel driver channels
+--------------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations with the dsa kernel driver and the mapping between vrings and dsa virtual channels is M:1.
+IOVA as VA mode is used; both 'mac fwd' and 'csum fwd' are tested.
+
+1. 
Bind 1 DSA device to the idxd driver and one NIC port to vfio-pci like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check that the throughput meets the expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from the vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check that the throughput meets the expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. 
Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. 
Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Quit all testpmd and relaunch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.1]
+    testpmd>set fwd csum
+    testpmd>start
+
+13. Rerun step 7 with csum fwd.
+
+Test Case 22: PVP packed ring all path multi-queues vhost async enqueue with 1:N mapping between vrings and dsa kernel driver channels
+--------------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations with the dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:N.
+IOVA as VA mode is used; both 'mac fwd' and 'csum fwd' are tested.
+
+1. 
Bind 3 DSA devices to the idxd driver and one NIC port to vfio-pci like common steps 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from packet generator, check that the throughput meets the expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from the vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check that the throughput meets the expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. 
Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat steps 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1,vectorized=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. 
11. Relaunch virtio-user with vector_rx path when ring size is not a power of 2, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1,vectorized=1,queue_size=1025 \
    -- -i --nb-cores=1 --txd=1025 --rxd=1025
    testpmd>set fwd mac
    testpmd>start

12. Quit all testpmd and relaunch vhost with diff channel by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@wq2.0,lcore11@wq2.1,lcore11@wq2.2,lcore11@wq2.3,lcore11@wq4.4,lcore11@wq4.5,lcore11@wq4.6,lcore11@wq4.7]
    testpmd>set fwd csum
    testpmd>start

13. Rerun step 8 with csum fwd.

Test Case 23: PVP packed ring all path multi-queues vhost async enqueue with M:N mapping between vrings and dsa kernel driver channels
-------------------------------------------------------------------------------------------------------------------------------------
This case uses testpmd and Traffic Generator (for example, Trex) to test performance of packed ring in each virtio path with multi-queues
when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is M:N.
Both 'mac fwd' and 'csum fwd' have been tested with iova as VA mode.
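The `dmas=[...]` vdev argument used throughout these cases enumerates which TX queues perform asynchronous enqueue; in the multi-queue cases below the list grows to several entries. A throwaway helper (hypothetical, for illustration only) can build it:

```shell
# Hypothetical helper: build the dmas=[txq0;txq1;...] list for the first N
# TX queues, matching the vdev syntax used in the vhost commands below.
gen_dmas() {
    local nq=$1 out="" q
    for ((q = 0; q < nq; q++)); do
        out="${out}txq${q};"
    done
    printf 'dmas=[%s]\n' "${out%;}"     # drop trailing semicolon
}

gen_dmas 3    # three async TX queues, as in step 2 of this case
```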
1. Bind 8 DSA devices to idxd driver and one NIC port to vfio-pci as common step 1 and 3::

    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1

    ls /dev/dsa #check wq configure, reset if exist
    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
    ls /dev/dsa #check wq configure success

2. Launch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
    testpmd>set fwd mac
    testpmd>start

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::

    testpmd>show port stats all
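Step 1 above repeats the same `dpdk_idxd_cfg.py` invocation for every even-numbered DSA instance. When scripting the setup, a loop along these lines (a sketch; it only prints the commands rather than running them) keeps the sequence consistent:

```shell
# Hypothetical sketch: print the dpdk_idxd_cfg.py commands that configure
# <nq> work queues on each of <ndev> DSA instances numbered 0,2,4,...
print_dsa_cfg() {
    local ndev=$1 nq=$2 i
    for ((i = 0; i < ndev; i++)); do
        echo "./drivers/dma/idxd/dpdk_idxd_cfg.py -q ${nq} $((i * 2))"
    done
}

print_dsa_cfg 8 8    # the eight commands from step 1
```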
5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::

    testpmd>stop

6. Restart vhost port and send imix packets again, then check the throughput can get expected data::

    testpmd>start
    testpmd>show port stats all

7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start
10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1,vectorized=1 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

11. Relaunch virtio-user with vector_rx path when ring size is not a power of 2, then repeat step 4-6::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1,vectorized=1,queue_size=1025 \
    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
    testpmd>set fwd mac
    testpmd>start

12. Quit all testpmd and relaunch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
    testpmd>set fwd mac
    testpmd>start

13. Rerun step 3-6.
14. Quit all testpmd and relaunch vhost with diff channel by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
    --lcore-dma=[lcore11@wq0.0,lcore11@wq2.1,lcore11@wq4.2,lcore11@wq6.3,lcore11@wq8.4,lcore11@wq10.5,lcore11@wq12.6,lcore11@wq14.7]
    testpmd>set fwd csum
    testpmd>start

15. Rerun step 3-6 with csum fwd.

Test Case 24: PVP packed ring dynamic queues vhost async operation with dsa kernel driver channels
--------------------------------------------------------------------------------------------------
This case uses testpmd and Traffic Generator (for example, Trex) to test performance of packed ring when vhost uses the asynchronous enqueue
operations with dsa kernel driver and whether vhost-user can work well when the queue number changes dynamically.

1. Bind 8 DSA devices to idxd driver and 1 NIC port to vfio-pci as common step 1 and 3::

    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1

    ls /dev/dsa #check wq configure, reset if exist
    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
    ls /dev/dsa #check wq configure success
2. Launch vhost by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:4f:00.1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
    testpmd>set fwd mac
    testpmd>start

3. Launch virtio-user by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
    testpmd>set fwd mac
    testpmd>start

4. Send imix packets [64,1518] from packet generator with random ip, check performance can get target.

5. Stop vhost port, check that the vhost RX and TX directions both have packets in 2 queues from vhost log.

6. Quit and relaunch vhost with 1:1 mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
    --iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
    --lcore-dma=[lcore11@wq0.0,lcore12@wq2.1,lcore13@wq4.2,lcore14@wq6.3]
    testpmd>set fwd mac
    testpmd>start

7. Rerun step 3.

8. Stop vhost port, check that the vhost RX and TX directions both have packets in 4 queues from vhost log.
9. Quit and relaunch vhost with M:N (1:N; M<N) mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
    --lcore-dma=[lcore11@wq0.0,lcore11@wq0.7,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore13@wq0.2,lcore13@wq0.3,lcore13@wq0.4,lcore14@wq0.2,lcore14@wq0.3,lcore14@wq0.4,lcore14@wq0.5,lcore15@wq0.0,lcore15@wq0.1,lcore15@wq0.2,lcore15@wq0.3,lcore15@wq0.4,lcore15@wq0.5,lcore15@wq0.6,lcore15@wq0.7]
    testpmd>set fwd mac
    testpmd>start

10. Rerun step 3.

11. Stop vhost port, check that the vhost RX and TX directions both have packets in 8 queues from vhost log.

12. Quit and relaunch vhost with diff M:N (1:N; M<N) mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
    --lcore-dma=[lcore11@wq0.0,lcore11@wq14.7,lcore12@wq2.1,lcore12@wq4.2,lcore12@wq6.3,lcore13@wq4.2,lcore13@wq6.3,lcore13@wq8.4,lcore14@wq4.2,lcore14@wq6.3,lcore14@wq8.4,lcore14@wq10.5,lcore15@wq0.0,lcore15@wq2.1,lcore15@wq4.2,lcore15@wq6.3,lcore15@wq8.4,lcore15@wq10.5,lcore15@wq12.6,lcore15@wq14.7]
    testpmd>set fwd mac
    testpmd>start

13. Rerun step 10-11.
14. Quit and relaunch vhost with diff M:N (M:1; M>N) mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
    --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq0.2,lcore14@wq0.1,lcore14@wq0.2,lcore15@wq0.1,lcore15@wq0.2]
    testpmd>set fwd mac
    testpmd>start

15. Rerun step 10-11.

16. Quit and relaunch vhost with diff M:N (M:1; M>N) mapping between vrings and dsa virtual channels::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
    --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq2.1,lcore13@wq4.2,lcore14@wq2.1,lcore14@wq4.2,lcore15@wq2.1,lcore15@wq4.2]
    testpmd>set fwd mac
    testpmd>start

17. Rerun step 10-11.

Test Case 25: PVP split and packed ring dynamic queues vhost async operation with dsa dpdk and kernel driver channels
---------------------------------------------------------------------------------------------------------------------
This case uses testpmd and Traffic Generator (for example, Trex) to test split ring and packed ring when vhost uses the asynchronous enqueue
operations with both dsa dpdk driver and dsa kernel driver and whether vhost-user can work well when the queue number changes dynamically.
IOVA as VA mode has been tested.
1. Bind 2 DSA devices to idxd driver, 2 DSA devices and 1 NIC port to vfio-pci as common step 1-3::

    # ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1

    ls /dev/dsa #check wq configure, reset if exist
    # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 e7:01.0 ec:01.0
    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
    # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
    ls /dev/dsa #check wq configure success

2. Launch vhost::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0;txq1]' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2 --lcore-dma=[lcore3@wq0.0,lcore3@wq2.0]
    testpmd>set fwd mac
    testpmd>start

3. Launch virtio-user with split ring mergeable in-order path by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
    testpmd>set fwd mac
    testpmd>start

4. Send imix packets from packet generator with random ip, check performance can get target.

5. Stop vhost port, check that the vhost RX and TX directions both have packets in 2 queues from vhost log.
6. Quit and relaunch vhost as below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=4 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0;txq1;txq2;txq3]' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4 --lcore-dma=[lcore3@0000:e7:01.0-q0,lcore3@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q2,lcore3@0000:ec:01.0-q3]
    testpmd>set fwd mac
    testpmd>start

7. Send imix packets from packet generator with random ip, check performance can get target.

8. Stop vhost port, check that the vhost RX and TX directions both have packets in 4 queues from vhost log.

9. Quit and relaunch vhost as below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=4 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5]' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --lcore-dma=[lcore3@wq0.0,lcore3@wq2.0,lcore3@wq2.2,lcore3@0000:e7:01.0-q0,lcore3@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q3]
    testpmd>set fwd mac
    testpmd>start

10. Send imix packets from packet generator with random ip, check performance can get target.

11. Stop vhost port, check that the vhost RX and TX directions both have packets in 8 queues from vhost log.
12. Quit and relaunch vhost with diff channels as below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=4 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --lcore-dma=[lcore3@wq0.0,lcore3@wq0.1,lcore3@wq2.1,lcore3@wq2.0,lcore3@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q3]
    testpmd>set fwd mac
    testpmd>start

13. Send imix packets from packet generator with random ip, check performance can get target.

14. Stop vhost port, check that the vhost RX and TX directions both have packets in 8 queues from vhost log.

15. Quit and relaunch virtio-user with packed ring mergeable in-order path by below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
    testpmd>set fwd mac
    testpmd>start

16. Rerun steps 10-11.
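The recurring ``ls /dev/dsa #check wq configure success`` checks can be tightened into a count of the expected work queues. The helper below (a sketch, not part of DTS) counts ``wqX.Y`` entries from an ``ls`` listing supplied on stdin:

```shell
# Hypothetical check: count wqX.Y entries in an "ls /dev/dsa" listing read
# from stdin; grep exits non-zero when nothing matches, so force success.
count_wqs() {
    grep -c '^wq' || true
}

# Against a live system:  ls /dev/dsa | count_wqs
# e.g. after step 1 of Test Case 23/24, 8 devices x 8 queues should give 64.
printf 'wq0.0\nwq0.1\nwq2.0\n' | count_wqs    # prints 3
```

Comparing the count against the expected total (queues per device times configured devices) turns a visual check into a pass/fail condition.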