From mboxrd@z Thu Jan 1 00:00:00 1970
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 1/3] test_plans/vhost_cbdma_test_plan: modify testplan by DPDK command change
Date: Fri, 22 Apr 2022 18:01:59 +0800
Message-Id: <20220422100159.1565307-1-weix.ling@intel.com>

As commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
integrated dmadev into the vhost asynchronous data path, modify the
vhost_cbdma test plan to match the DPDK 22.03 library change.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 1414 ++++++++++++++++++++------
 1 file changed, 1118 insertions(+), 296 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index c8f8b8c5..051f089e 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -1,404 +1,1226 @@
-.. Copyright (c) <2021>, Intel Corporation
-   All rights reserved.
-
-   Redistribution and use in source and binary forms, with or without
-   modification, are permitted provided that the following conditions
-   are met:
-
-   - Redistributions of source code must retain the above copyright
-     notice, this list of conditions and the following disclaimer.
-
-   - Redistributions in binary form must reproduce the above copyright
-     notice, this list of conditions and the following disclaimer in
-     the documentation and/or other materials provided with the
-     distribution.
-
-   - Neither the name of Intel Corporation nor the names of its
-     contributors may be used to endorse or promote products derived
-     from this software without specific prior written permission.
-
-   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
-   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
-   OF THE POSSIBILITY OF SUCH DAMAGE.
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
 
 ==========================================================
 DMA-accelerated Tx operations for vhost-user PMD test plan
 ==========================================================
 
-Overview
+Description
+===========
+
+This document provides the test plan for testing the vhost asynchronous
+data path with testpmd in the PVP topology environment.
+
+The vhost asynchronous data path leverages DMA devices to offload memory
+copies from the CPU, and it is implemented in an asynchronous way. It
+enables applications, like OVS, to save CPU cycles and hide memory copy
+overhead, thus achieving higher throughput.
+
+Vhost doesn't manage DMA devices; applications, like OVS, need to manage
+and configure them. Applications need to tell vhost which DMA devices to
+use in every data path function call. This design gives applications the
+flexibility to dynamically use DMA channels in different function
+modules, not limited to vhost.
+
+In addition, vhost supports M:N mapping between vrings and DMA virtual
+channels. Specifically, one vring can use multiple different DMA channels,
+and one DMA channel can be shared by multiple vrings at the same time.
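+For example, expressed in the vdev "dmas" and "--lcore-dma" syntax used
+throughout this plan, the sketch below (placeholder PCI addresses; it
+illustrates the notation only, not a required configuration) enables
+asynchronous enqueue on two queues and gives the forwarding lcore two
+DMA virtual channels to use::
+
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=2,dmas=[txq0;txq1]' \
+    --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]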
+The reason for enabling one vring to use multiple DMA channels is that
+it's possible for more than one dataplane thread to enqueue packets to
+the same vring, each with its own DMA virtual channel. Besides, the
+number of DMA devices is limited. For the purpose of scaling, it's
+necessary to support sharing DMA channels among vrings.
+
+DPDK 21.11 adds vfio support for DMA devices in vhost. When DMA devices
+are bound to the vfio driver, VA mode is the default and recommended.
+For PA mode, page by page mapping may exceed the IOMMU's max capability,
+so it is better to use 1G guest hugepages.
+
+For more about the dpdk-testpmd sample, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For the virtio-user vdev parameters, you can refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: TG-->NIC-->Vhost-->Virtio-->Vhost-->NIC-->TG
+
+Hardware
 --------
+Supported NICs: ALL
+
+Software
+--------
+Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK ("<dpdk build dir>" is a placeholder for the build directory)::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+
+2. Get the PCI device ID and DMA device IDs of the DUT; for example, 0000:18:00.0 is the PCI device ID and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA devices to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device ids>
+
+   For example, bind 1 NIC port and 2 CBDMA devices::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:18:00.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1
+
+2. Send imix packets [64,1518] to the NIC by traffic generator::
+
+    The imix packets include packet sizes [64, 128, 256, 512, 1024, 1518], and the format of the packets is as follows.
+    +-------------+-------------+-------------+-------------+
+    | MAC         | MAC         | IPV4        | IPV4        |
+    | Src address | Dst address | Src address | Dst address |
+    |-------------|-------------|-------------|-------------|
+    | Any MAC     | Virtio mac  | Fixed IP    | Any IP      |
+    +-------------+-------------+-------------+-------------+
+    All the packets in this test plan use the Virtio mac 00:11:22:33:44:10 and a fixed src IP.
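+
+After common step 1, the binding can be confirmed before launching
+testpmd; a sketch of the expected "dpdk-devbind.py -s" output is below
+(device names and IDs differ per platform)::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using DPDK-compatible driver
+    ============================================
+    0000:18:00.0 'Device 159b' drv=vfio-pci unused=ice
+
+    DMA devices using DPDK-compatible driver
+    ========================================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=vfio-pci unused=ioatdma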
+
+Test Case 1: PVP split ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+---------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of split ring in each virtio path with 1 core and 1 queue
+when vhost uses asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1.
+Both iova=va and iova=pa modes are tested.
+
+1. Bind 1 NIC port and 1 CBDMA device to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and then check that the throughput reaches the expected value::
+
+    testpmd> show port stats all
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
+
+    testpmd> stop
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
+
+    testpmd> start
+    testpmd> show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+Note::
+
+    Rx offload(s) are requested when using the split ring non-mergeable path, so add the parameter "--enable-hw-vlan-strip".
+
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+11. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
+
+12. Rerun steps 3-10.
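+
+The throughput check in step 4 and the per-queue check in step 5 read
+testpmd's standard counters; a sketch of the relevant output shape is
+below (all values are illustrative, not expected results)::
+
+    testpmd> show port stats all
+      ######################## NIC statistics for port 0 ########################
+      RX-packets: 115043366  RX-missed: 0  RX-bytes: 7362775424
+      ...
+      Throughput (since last show)
+      Rx-pps:      5084616
+      Tx-pps:      5084616
+      ############################################################################
+
+    testpmd> stop
+      ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
+      RX-packets: 115043366      TX-packets: 115043366      TX-dropped: 0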
+
+Test Case 2: PVP split ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of split ring in each virtio path with multi-queues
+when vhost uses asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1.
+Both iova=va and iova=pa modes are tested.
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput reaches the expected value::
+
+    testpmd> show port stats all
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
+
+    testpmd> stop
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
+
+    testpmd> start
+    testpmd> show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+11. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+12. Rerun step 7.
+
+Test Case 3: PVP split ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of split ring in each virtio path with multi-queues
+when vhost uses asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:1.
+Both iova=va and iova=pa modes are tested.
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput reaches the expected value::
+
+    testpmd> show port stats all
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
+
+    testpmd> stop
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
+
+    testpmd> start
+    testpmd> show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+11. Quit all testpmd and relaunch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
+
+12. Rerun steps 4-6.
+
+13. Quit all testpmd and relaunch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
+
+14. Rerun step 7.
+
+15. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
+
+16. Rerun step 7.
+
+Test Case 4: PVP split ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels
+---------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of split ring in each virtio path when vhost uses
+asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:N.
+Both iova=va and iova=pa modes are tested.
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput reaches the expected value::
+
+    testpmd> show port stats all
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
+
+    testpmd> stop
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
+
+    testpmd> start
+    testpmd> show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+11. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+12. Rerun step 9.
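+
+For quick reference, the mapping shapes exercised in Test Cases 2-4 differ
+only in how "--lcore-dma" pairs lcores with DMA channels relative to the
+queue count; a condensed sketch (abridged lists, same addresses as above)::
+
+    1:1 (Test Case 2): 8 queues, --lcore-dma=[lcore11@0000:00:04.0,...,lcore18@0000:00:04.7]   # each vring has its own channel
+    M:1 (Test Case 3): 8 queues, --lcore-dma=[lcore11@0000:00:04.0]                            # all vrings share one channel
+    1:N (Test Case 4): 1 queue,  --lcore-dma=[lcore11@0000:00:04.0,...,lcore11@0000:00:04.7]   # one vring uses many channels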
+
+Test Case 5: PVP split ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of split ring in each virtio path with multi-queues
+when vhost uses asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:N.
+Both iova=va and iova=pa modes are tested.
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput reaches the expected value::
+
+    testpmd> show port stats all
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
+
+    testpmd> stop
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
+
+    testpmd> start
+    testpmd> show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+11. Quit all testpmd and relaunch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+12. Rerun step 8.
+
+13. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+14. Rerun step 10.
+
+Test Case 6: PVP split ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
+---------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of split ring when vhost uses asynchronous enqueue
+operations, and to check whether vhost-user can work well when the queue number changes dynamically.
+Both iova=va and iova=pa modes are tested.
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+3. Launch virtio-user by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput reaches the expected value::
+
+    testpmd> show port stats all
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
+
+    testpmd> stop
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
+
+    testpmd> start
+    testpmd> show port stats all
+7. Quit and relaunch vhost with 1:1 mapping between vrings and CBDMA virtual channels, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+    --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3]
+    testpmd> set fwd mac
+    testpmd> start
+
+8. Quit and relaunch vhost with M:N (1:N; M<N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+9. Quit and relaunch vhost with a different M:N (M:1; M>N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+    testpmd> set fwd mac
+    testpmd> start
+
+10. Quit and relaunch vhost with iova=pa by below command, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+    testpmd> set fwd mac
+    testpmd> start
+
+Test Case 7: PVP packed ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of packed ring in each virtio path with 1 core and 1 queue
+when vhost uses asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1.
+Both iova=va and iova=pa modes are tested.
+
+1. Bind 1 NIC port and 1 CBDMA device to vfio-pci, as common step 1.
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput reaches the expected value::
+
+    testpmd> show port stats all
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
+
+    testpmd> stop
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
+
+    testpmd> start
+    testpmd> show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+Note::
+
+    If the building and running environment supports (AVX512 || NEON) && the in-order feature is negotiated && Rx mergeable
+    is not negotiated && TCP_LRO Rx offloading is disabled && the vectorized option is enabled, the packed virtqueue vectorized Rx path will be selected.
+
+11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025
+    testpmd> set fwd mac
+    testpmd> start
+12. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
+
+13. Rerun steps 3-6.
+
+Test Case 8: PVP packed ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+-----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of packed ring in each virtio path with multi-queues
+when vhost uses asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1.
+Both iova=va and iova=pa modes are tested.
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput reaches the expected value::
+
+    testpmd> show port stats all
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
+
+    testpmd> stop
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
+
+    testpmd> start
+    testpmd> show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
+
+11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd> set fwd mac
+    testpmd> start
+
+12. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
+
+13. Rerun step 7.
+
+Test Case 9: PVP packed ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels
+-----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of packed ring in each virtio path with multi-queues
+when vhost uses asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:1.
+Both iova=va and iova=pa modes are tested.
+
+1. Bind 1 NIC port and 1 CBDMA device to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
-
-This feature supports to offload large data movement in vhost enqueue operations
-from the CPU to the I/OAT(a DMA engine in Intel's processor) device for every queue.
-In addition, a queue can only use one I/OAT device, and I/OAT devices cannot be shared
-among vhost ports and queues. That is, an I/OAT device can only be used by one queue at
-a time. DMA devices(e.g.,CBDMA) used by queues are assigned by users; for a queue without
-assigning a DMA device, the PMD will leverages librte_vhost to perform vhost enqueue
-operations. Moreover, users cannot enable I/OAT acceleration for live-migration. Large
-copies are offloaded from the CPU to the DMA engine in an asynchronous manner. The CPU
-just submits copy jobs to the DMA engine and without waiting for DMA copy completion;
-there is no CPU intervention during DMA data transfer. By overlapping CPU
-computation and DMA copy, we can save precious CPU cycles and improve the overall
-throughput for vhost-user PMD based applications, like OVS. Due to startup overheads
-associated with DMA engines, small copies are performed by the CPU.
-DPDK 21.11 adds vfio support for DMA device in vhost. When DMA devices are bound to
-vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping
-may exceed IOMMU's max capability, better to use 1G guest hugepage.
-
-We introduce a new vdev parameter to enable DMA acceleration for Tx operations of queues:
-- dmas: This parameter is used to specify the assigned DMA device of a queue.
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
-
-Here is an example:
-./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 \
---vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0] \
---iova=va -- -i'
+
+3. Launch virtio-user with inorder mergeable path::
-
-Test Case 1: PVP split ring all path vhost enqueue operations with cbdma
-========================================================================
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
-
-Packet pipeline:
-================
-TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput reaches the expected value::
-
-1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command::
+
+    testpmd> show port stats all
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
-
-2. Launch virtio-user with inorder mergeable path::
+
+    testpmd> stop
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
-
-3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port and send imix pkts again, check get same throuhput::
+
+    testpmd> start
+    testpmd> show port stats all
-
-   testpmd>show port stats all
-   testpmd>stop
-   testpmd>start
-   testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-4. Relaunch virtio-user with mergeable path, then repeat step 3::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
-   -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::
-
-7. Relaunch virtio-user with vector_rx path, then repeat step 3::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6::
-
-8. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd> set fwd mac
+    testpmd> start
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \
-   --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+12. Quit all testpmd and relaunch vhost by below command::
-
-9. Rerun steps 2-7.
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
-
-Test Case 2: PVP split ring dynamic queue number vhost enqueue operations with cbdma
-=====================================================================================
+
+13. Rerun steps 3-6.
-
-1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command::
+
+14. Quit all testpmd and relaunch vhost by below command::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
-
-2. Launch virtio-user by below command::
+
+15. Rerun step 7.
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+
+16. Quit all testpmd and relaunch vhost with iova=pa by below command::
-
-3. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0]
+    testpmd> set fwd mac
+    testpmd> start
-
-4. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
-
-5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma::
+
+17. Rerun step 8.
+
+Test Case 10: PVP packed ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels
+-----------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, Trex) to test the performance of packed ring in each virtio path when vhost uses
+asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:N.
+Both iova=va and iova=pa modes are tested.
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3]' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
-
-6. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
+
+2. Launch vhost by below command::
-
-7. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd> set fwd mac
+    testpmd> start
-
-8. Quit and relaunch vhost with 8 queues w/ cbdma::
+
+3. Launch virtio-user with inorder mergeable path::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd> set fwd mac
+    testpmd> start
-
-9. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
+
+4. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput reaches the expected value::
-
-10. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+
+    testpmd> show port stats all
-
-11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma::
+
+5. Stop the vhost port and check from the vhost log that there are packets in both the RX and TX directions of each queue::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5]' \
-   --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+
+    testpmd> stop
-
-12. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
+
+6. Restart the vhost port and send imix packets again, then check that the throughput reaches the expected value::
-
-13. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+
+    testpmd> start
+    testpmd> show port stats all
-
-Test Case 3: PVP packed ring all path vhost enqueue operations with cbdma
-=========================================================================
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

-Packet pipeline:
-================
-TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
+   -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command::
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+   -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-2. Launch virtio-user with inorder mergeable path::
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
+   -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port and send imix pkts again, check get same throuhput::
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::

-   testpmd>show port stats all
-   testpmd>stop
-   testpmd>start
-   testpmd>show port stats all
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \
+   -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-4. Relaunch virtio-user with mergeable path, then repeat step 3::
+11. Relaunch virtio-user with vectorized path and a ring size that is not a power of 2, then repeat step 4-6::

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \
+   -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025
+   testpmd> set fwd mac
+   testpmd> start

-5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
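+
+The vectorized paths in steps 10 and 11 are only taken when the CPU provides
+AVX512 and testpmd is allowed to use it via --force-max-simd-bitwidth=512.
+A quick pre-check on the DUT (a sketch, assuming a Linux host)::
+
+   # the avx512f flag must be present for the 512-bit vectorized path
+   grep -o 'avx512f' /proc/cpuinfo | head -1
+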
+12. Quit all testpmd and relaunch vhost with iova=pa by below command::

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+   # .//app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+   -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+   --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+   --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+   --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+   testpmd> set fwd mac
+   testpmd> start

-6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
+13. Rerun step 9.

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+Test Case 11: PVP packed ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
+------------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring in each virtio path with multiple queues
+when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:N.
+Both iova as VA and PA modes are tested.

-7. Relaunch virtio-user with vectorized path, then repeat step 3::
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+2. Launch vhost by below command::

-8. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 3::
+   # .//app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+   -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+   --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
+   --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+   --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+   testpmd> set fwd mac
+   testpmd> start

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \
-   -- -i --nb-cores=1 --txd=1025 --rxd=1025
-   >set fwd mac
-   >start
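+
+Each entry in the --lcore-dma list pairs one forwarding lcore with one DMA
+device, so the M:N mapping of this case is expressed entirely through that
+list. As an illustration of the two extremes (a sketch, reusing the BDFs of
+this plan)::
+
+   # 1:N - a single lcore served by two CBDMA channels
+   --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]
+   # N:1 - two lcores sharing a single CBDMA channel
+   --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0]
+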
+3. Launch virtio-user with inorder mergeable path::

-9. Quit all testpmd and relaunch vhost with iova=pa by below command::
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \
+   -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \
-   --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+4. Send imix packets [64,1518] from packet generator as common step 2, and check that the expected throughput is reached::

-10. Rerun steps 2-8.
+   testpmd> show port stats all

-Test Case 4: PVP packed ring dynamic queue number vhost enqueue operations with cbdma
-=====================================================================================
+5. Stop vhost port, check that there are packets in both RX and TX directions of each queue from vhost log::

-1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command::
+   testpmd> stop

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+6. Restart vhost port and send imix packets again, then check that the expected throughput is reached::

-2. Launch virtio-user by below command::
+   testpmd> start
+   testpmd> show port stats all

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1,packed_vq=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

-3. Send imix packets from packet generator with random ip, check perforamnce can get target.
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \
+   -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-4. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::

-5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma::
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \
+   -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3]' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::

-6. Send imix packets from packet generator with random ip, check perforamnce can get target.
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \
+   -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-7. Stop vhost port, check vhost RX and TX direction both exist packtes in 4 queues from vhost log.
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::

-8. Quit and relaunch vhost with 8 queues w/ cbdma::
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \
+   -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+11. Relaunch virtio-user with vectorized path and a ring size that is not a power of 2, then repeat step 4-6::

-9. Send imix packets from packet generator with random ip, check perforamnce can get target.
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \
+   -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+   testpmd> set fwd mac
+   testpmd> start

-10. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+12. Quit all testpmd and relaunch vhost by below command::

-11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma::
+   # .//app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+   -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+   --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+   --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+   --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+   testpmd> set fwd mac
+   testpmd> start

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5]' \
-   --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+13. Rerun step 7.

-12. Send imix packets from packet generator with random ip, check perforamnce can get target.
+14. Quit all testpmd and relaunch vhost with iova=pa by below command::

-13. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+   # .//app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+   -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+   --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+   --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+   --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+   testpmd> set fwd mac
+   testpmd> start

-Test Case 5: loopback split ring large chain packets stress test with cbdma enqueue
-====================================================================================
+15. Rerun step 9.

-Packet pipeline:
-================
-Vhost <--> Virtio
+Test Case 12: PVP packed ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
+-----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring when vhost uses the asynchronous
+enqueue operations, and to check whether vhost-user works well when the queue number changes dynamically.
+Both iova as VA and PA modes are tested.

-1. Bind 1 CBDMA channel to vfio-pci and launch vhost::
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-   --iova=va -- -i --nb-cores=1 --mbuf-size=65535
+2. Launch vhost by below command::

-2. Launch virtio and start testpmd::
+   # .//app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \
+   --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1' \
+   --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \
-   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1, \
-   mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048 \
-   -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
-   >start
+3. Launch virtio-user by below command::

-3. Send large packets from vhost, check virtio can receive packets::
+   # .//app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,server=1,packed_vq=1 \
+   -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+   testpmd> set fwd mac
+   testpmd> start

-   testpmd> vhost enable tx all
-   testpmd> set txpkts 65535,65535,65535,65535,65535
-   testpmd> start tx_first 32
-   testpmd> show port stats all
+4. Send imix packets [64,1518] from packet generator as common step 2, and check that the expected throughput is reached::

-4. Quit all testpmd and relaunch vhost with iova=pa::
+   testpmd> show port stats all

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-   --iova=pa -- -i --nb-cores=1 --mbuf-size=65535
+5. Stop vhost port, check that there are packets in both RX and TX directions of each queue from vhost log::

-5. Rerun steps 2-3.
+   testpmd> stop
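+
+This case exercises dynamic queue numbers; on the virtio-user side the queue
+count can also be changed at runtime through testpmd port commands. A sketch of
+one such sequence (the exact commands used by the test suite may differ)::
+
+   testpmd> port stop all
+   testpmd> port config all rxq 8
+   testpmd> port config all txq 8
+   testpmd> port start all
+   testpmd> start
+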
-Test Case 6: loopback packed ring large chain packets stress test with cbdma enqueue
-====================================================================================
+6. Restart vhost port and send imix packets again, then check that the expected throughput is reached::

-Packet pipeline:
-================
-Vhost <--> Virtio
+   testpmd> start
+   testpmd> show port stats all

-1. Bind 1 CBDMA channel to vfio-pci and launch vhost::
+7. Quit and relaunch vhost with 1:1 mapping between vrings and CBDMA virtual channels, then repeat step 4-6::

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-   --iova=va -- -i --nb-cores=1 --mbuf-size=65535
+   # .//app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+   -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+   --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+   --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+   --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3]
+   testpmd> set fwd mac
+   testpmd> start

-2. Launch virtio and start testpmd::
+9. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6::

+   # .//app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+   -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+   --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
+   --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+   --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7]
+   testpmd> set fwd mac
+   testpmd> start

-3. Send large packets from vhost, check virtio can receive packets::
+11. Quit and relaunch vhost with a different M:N(M:1;M>N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6::

-   testpmd> vhost enable tx all
-   testpmd> set txpkts 65535,65535,65535,65535,65535
-   testpmd> start tx_first 32
-   testpmd> show port stats all
+   # .//app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+   -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+   --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+   --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+   --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+   testpmd> set fwd mac
+   testpmd> start

-4. Quit all testpmd and relaunch vhost with iova=pa::
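+
+Note that --iova=pa, as used in the next step, requires DPDK to operate on
+physical addresses; with vfio-pci this generally works only in no-IOMMU mode
+(a sketch, applicable only when the platform IOMMU is disabled)::
+
+   # allow vfio to run without an IOMMU before binding the devices
+   echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+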
+13. Quit and relaunch vhost with iova=pa by below command::

-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' --iova=pa -- -i --nb-cores=1 --mbuf-size=65535
+   # .//app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+   -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+   --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+   --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+   --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+   testpmd> set fwd mac
+   testpmd> start

-5. Rerun steps 2-3.
+14. Rerun steps 4-6.
-- 
2.25.1