From mboxrd@z Thu Jan 1 00:00:00 1970
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 1/3] test_plans/vhost_cbdma_test_plan: modify test plan for DPDK 22.03 changes
Date: Fri, 1 Apr 2022 15:52:31 +0800
Message-Id: <20220401075231.4175720-1-weix.ling@intel.com>

As commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
reworked the vhost asynchronous data path in the DPDK 22.03 library,
modify the vhost_cbdma test plan accordingly.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 1392 ++++++++++++++++++++------
 1 file changed, 1096 insertions(+), 296 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index c8f8b8c5..ce558d72 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -1,404 +1,1204 @@
-.. Copyright (c) <2021>, Intel Corporation
-   All rights reserved.
-
-   Redistribution and use in source and binary forms, with or without
-   modification, are permitted provided that the following conditions
-   are met:
-
-   - Redistributions of source code must retain the above copyright
-     notice, this list of conditions and the following disclaimer.
-
-   - Redistributions in binary form must reproduce the above copyright
-     notice, this list of conditions and the following disclaimer in
-     the documentation and/or other materials provided with the
-     distribution.
-
-   - Neither the name of Intel Corporation nor the names of its
-     contributors may be used to endorse or promote products derived
-     from this software without specific prior written permission.
-
-   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
-   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
-   OF THE POSSIBILITY OF SUCH DAMAGE.
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
 
 ==========================================================
 DMA-accelerated Tx operations for vhost-user PMD test plan
 ==========================================================
 
-Overview
+Description
+===========
+
+Vhost asynchronous data path leverages DMA devices to offload memory
+copies from the CPU, and it is implemented in an asynchronous way. It
+enables applications, like OVS, to save CPU cycles and hide memory copy
+overhead, thus achieving higher throughput.
+
+Vhost doesn't manage DMA devices; applications, like OVS, need to
+manage and configure DMA devices. Applications need to tell vhost what
+DMA devices to use in every data path function call. This design gives
+applications the flexibility to dynamically use DMA channels in
+different function modules, not limited to vhost.
+
+In addition, vhost supports M:N mapping between vrings and DMA virtual
+channels. Specifically, one vring can use multiple different DMA channels
+and one DMA channel can be shared by multiple vrings at the same time.
+The reason for enabling one vring to use multiple DMA channels is that
+more than one data-plane thread may enqueue packets to the same vring
+with its own DMA virtual channel. Besides, the number of DMA devices
+is limited. For the purpose of scaling, it's necessary to support
+sharing DMA channels among vrings.
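+
+The following is a minimal sketch of how an application is expected to
+drive this asynchronous data path; it assumes the rte_vhost_async.h API
+introduced by commit 53d3f4778c in DPDK 22.03 (exact signatures should
+be checked against the installed headers), and error handling is
+omitted for brevity::
+
+    #include <rte_mbuf.h>
+    #include <rte_vhost.h>
+    #include <rte_vhost_async.h>
+
+    /* Control path: run once before enabling async on a virtqueue. */
+    static void
+    setup_async(int vid, uint16_t queue_id, int16_t dma_id, uint16_t vchan)
+    {
+        /* Tell vhost about the DMA virtual channel it may use. */
+        rte_vhost_async_dma_configure(dma_id, vchan);
+        /* Register the virtqueue for asynchronous enqueue. */
+        rte_vhost_async_channel_register(vid, queue_id);
+    }
+
+    /* Data path: the DMA channel is passed in every call, which is what
+     * allows the M:N mapping between vrings and DMA virtual channels. */
+    static uint16_t
+    async_enqueue(int vid, uint16_t queue_id, struct rte_mbuf **pkts,
+                  uint16_t count, int16_t dma_id, uint16_t vchan)
+    {
+        struct rte_mbuf *done[32];
+        uint16_t nr_done;
+        uint16_t nr_enq;
+
+        nr_enq = rte_vhost_submit_enqueue_burst(vid, queue_id, pkts,
+                                                count, dma_id, vchan);
+        /* Poll completed copies back so their mbufs can be freed. */
+        nr_done = rte_vhost_poll_enqueue_completed(vid, queue_id, done,
+                                                   32, dma_id, vchan);
+        rte_pktmbuf_free_bulk(done, nr_done);
+        return nr_enq;
+    }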
+
+DPDK 21.11 added vfio support for DMA devices in vhost. When DMA devices
+are bound to the vfio driver, VA mode is the default and recommended.
+For PA mode, page-by-page mapping may exceed the IOMMU's max capability,
+so it is better to use 1G guest hugepages.
+
+For more information about the dpdk-testpmd application, please refer
+to the DPDK documentation:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For virtio-user vdev parameters, please refer to the DPDK documentation:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+Prerequisites
+=============
+
+Topology
+--------
+    Test flow: TG-->NIC-->Vhost-->Virtio-->Vhost-->NIC-->TG
+
+Hardware
 --------
+    Supported NICs: ALL
+
+Software
+--------
+    Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+
+2. Get the PCI device ID and DMA device ID of the DUT. For example, 0000:18:00.0 is a PCI device ID, and 0000:00:04.0 and 0000:00:04.1 are DMA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA channels to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+
+   For example, bind 1 NIC port and 2 CBDMA channels::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:18:00.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1
+
+2. Inject imix packets to the NIC by traffic generator.
+
+   The packet sizes include [64, 128, 256, 512, 1024, 1518], and the format of the packets is as follows.
+
+   +-------------+-------------+-------------+-------------+
+   | MAC         | MAC         | IPV4        | IPV4        |
+   | Src address | Dst address | Src address | Dst address |
+   +-------------+-------------+-------------+-------------+
+   | Any MAC     | Virtio mac  | Any IP      | Any IP      |
+   +-------------+-------------+-------------+-------------+
+
+   All the packets in this test plan use the Virtio mac 00:11:22:33:44:10.
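+
+As an illustration only (not a test step), the frame layout above could
+be produced with DPDK's protocol headers as in the sketch below; the
+concrete source addresses are placeholders for the "any" values::
+
+    #include <string.h>
+    #include <netinet/in.h>
+    #include <rte_byteorder.h>
+    #include <rte_ether.h>
+    #include <rte_ip.h>
+
+    /* Fill one Ethernet/IPv4 header as described in the table above:
+     * any src MAC/IPs, dst MAC fixed to the virtio mac 00:11:22:33:44:10. */
+    static void
+    fill_imix_header(uint8_t *buf, uint16_t frame_len)
+    {
+        struct rte_ether_hdr *eth = (struct rte_ether_hdr *)buf;
+        struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);
+        static const struct rte_ether_addr virtio_mac = {
+            { 0x00, 0x11, 0x22, 0x33, 0x44, 0x10 } };
+
+        memset(buf, 0, frame_len);
+        rte_ether_addr_copy(&virtio_mac, &eth->dst_addr);
+        rte_eth_random_addr(eth->src_addr.addr_bytes);      /* any MAC */
+        eth->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+
+        ip->version_ihl = 0x45;               /* IPv4, 20-byte header */
+        ip->total_length = rte_cpu_to_be_16(frame_len - sizeof(*eth));
+        ip->time_to_live = 64;
+        ip->next_proto_id = IPPROTO_UDP;
+        ip->src_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 1));  /* any IP */
+        ip->dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 2));  /* any IP */
+        ip->hdr_checksum = rte_ipv4_cksum(ip);
+    }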
+
+Test Case 1: PVP split ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+---------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of split ring all path vhost enqueue
+operations with 1 to 1 mapping between vrings and CBDMA virtual channels.
+
+1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+   Here, dmas=[txq0] selects the virtqueue that uses the asynchronous data
+   path, and --lcore-dma assigns CBDMA channel 0000:00:04.0 to lcore 11.
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same expected throughput is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Rerun steps 3-10.
+
+Test Case 2: PVP split ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of split ring all path multi-queues vhost
+enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels.
+
+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same expected throughput is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Rerun step 7.
+
+Test Case 3: PVP split ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of split ring all path multi-queues vhost
+enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels.
+
+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same expected throughput is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Rerun steps 4-6.
+
+13. Quit all testpmd and relaunch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+14. Rerun step 7.
+
+15. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+16. Rerun step 7.
+
+Test Case 4: PVP split ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels
+---------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of split ring all path vhost enqueue
+operations with 1 to N mapping between vrings and CBDMA virtual channels.
+
+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same expected throughput is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Rerun step 9.
+
+Test Case 5: PVP split ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of split ring all path multi-queues vhost
+enqueue operations with M to N mapping between vrings and CBDMA virtual channels.
+
+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same expected throughput is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Rerun step 8.
+
+13. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+14. Rerun step 10.
+
+Test Case 6: PVP split ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
+---------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test whether vhost-user can work well when split ring vhost enqueue
+operations use M to N mapping between vrings and CBDMA virtual channels and the queue number changes dynamically.
+
+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same expected throughput is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Quit and relaunch vhost with 1:1 mapping between vrings and CBDMA virtual channels, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+    --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3]
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Quit and relaunch vhost with M:N (1:N; M<N) mapping between vrings and CBDMA virtual channels, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Quit and relaunch vhost with diff M:N (M:1; M>N) mapping between vrings and CBDMA virtual channels, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Quit and relaunch vhost with iova=pa by below command, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+    testpmd>set fwd mac
+    testpmd>start
+
+Test Case 7: PVP packed ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring all path vhost enqueue
+operations with 1 to 1 mapping between vrings and CBDMA virtual channels.
+
+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same expected throughput is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Relaunch virtio-user with vectorized path and a ring size that is not a power of 2, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd>set fwd mac
+    testpmd>start
+
+13. Rerun steps 3-6.
+
+Test Case 8: PVP packed ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+-----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring all path multi-queues vhost
+enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels.
+
+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
+
+    testpmd>show port stats all
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
+
+    testpmd>stop
+
+6. Restart the vhost port and send imix packets again, then check that the same expected throughput is achieved::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Relaunch virtio-user with vectorized path and a ring size that is not a power of 2, then repeat steps 4-6::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
+
+13. Rerun step 7.
+
+Test Case 9: PVP packed ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels
+-----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring all path multi-queues vhost
+enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels.
+
+1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0]
+    testpmd>set fwd mac
+    testpmd>start
-This feature supports to offload large data movement in vhost enqueue operations
-from the CPU to the I/OAT(a DMA engine in Intel's processor) device for every queue.
-In addition, a queue can only use one I/OAT device, and I/OAT devices cannot be shared
-among vhost ports and queues. That is, an I/OAT device can only be used by one queue at
-a time. DMA devices(e.g.,CBDMA) used by queues are assigned by users; for a queue without
-assigning a DMA device, the PMD will leverages librte_vhost to perform vhost enqueue
-operations. Moreover, users cannot enable I/OAT acceleration for live-migration. Large
-copies are offloaded from the CPU to the DMA engine in an asynchronous manner. The CPU
-just submits copy jobs to the DMA engine and without waiting for DMA copy completion;
-there is no CPU intervention during DMA data transfer. By overlapping CPU
-computation and DMA copy, we can save precious CPU cycles and improve the overall
-throughput for vhost-user PMD based applications, like OVS. Due to startup overheads
-associated with DMA engines, small copies are performed by the CPU.
-DPDK 21.11 adds vfio support for DMA device in vhost. When DMA devices are bound to
-vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping
-may exceed IOMMU's max capability, better to use 1G guest hugepage.
-
-We introduce a new vdev parameter to enable DMA acceleration for Tx operations of queues:
-- dmas: This parameter is used to specify the assigned DMA device of a queue.
+
+3. Launch virtio-user with inorder mergeable path::
-
-Here is an example:
-./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 \
---vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0] \
---iova=va -- -i'
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-Test Case 1: PVP split ring all path vhost enqueue operations with cbdma
-========================================================================
+    testpmd>set fwd mac
+    testpmd>start
-Packet pipeline:
-================
-TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
-1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command::
+
+    testpmd>show port stats all
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
-   >set fwd mac
-   >start
+
+    testpmd>stop
-2. Launch virtio-user with inorder mergeable path::
+6. Restart the vhost port and send imix packets again, then check that the same expected throughput is achieved::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+    testpmd>start
+    testpmd>show port stats all
-3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port and send imix pkts again, check get same throuhput::
+
+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::
-
-   testpmd>show port stats all
-   testpmd>stop
-   testpmd>start
-   testpmd>show port stats all
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-4. Relaunch virtio-user with mergeable path, then repeat step 3::
+    testpmd>set fwd mac
+    testpmd>start
-5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
-6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
+
+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
-   -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
-7. Relaunch virtio-user with vector_rx path, then repeat step 3::
+
+11. Relaunch virtio-user with vectorized path and a ring size that is not a power of 2, then repeat steps 4-6::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024
-   >set fwd mac
-   >start
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd>set fwd mac
+    testpmd>start
-8. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+12. Quit all testpmd and relaunch vhost by below command::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \
-   --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0]
-   >set fwd mac
-   >start
+    testpmd>set fwd mac
+    testpmd>start
-9. Rerun steps 2-7.
+
+13. Rerun steps 3-6.
-Test Case 2: PVP split ring dynamic queue number vhost enqueue operations with cbdma
-=====================================================================================
+
+14. Quit all testpmd and relaunch vhost by below command::
-1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0]
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    testpmd>set fwd mac
+    testpmd>start
-   >set fwd mac
-   >start
+
+15. Rerun step 7.
-2. Launch virtio-user by below command::
+
+16. Quit all testpmd and relaunch vhost with iova=pa by below command::
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
-   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
-   -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0]
-   >set fwd mac
-   >start
+    testpmd>set fwd mac
+    testpmd>start
+
-3. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
+17. Rerun step 8.
-4. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+
+Test Case 10: PVP packed ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels
+-----------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (for example, Trex) to test the performance of packed ring all path vhost enqueue
+operations with 1 to N mapping between vrings and CBDMA virtual channels.
-5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma::
+
+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3]' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+
+2. Launch vhost by below command::
-6. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
-7. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+
+3. Launch virtio-user with inorder mergeable path::
-8. Quit and relaunch vhost with 8 queues w/ cbdma::
+
+    # ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
-
-   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-   >set fwd mac
-   >start
+
+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected data::
-9. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
+
+    testpmd>show port stats all
-10. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+
+5. Stop the vhost port, and check from the vhost log that packets exist in both RX and TX directions (in all queues when started with multi-queues)::
+
-7. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+3. Launch virtio-user with inorder mergeable path::

-8. Quit and relaunch vhost with 8 queues w/ cbdma::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-    >set fwd mac
-    >start

+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected value::

-9. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.

+    testpmd>show port stats all

-10. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.

+5. Stop vhost port, check that packets exist in both the vhost RX and TX directions (in all queues when started with multi-queue) from the vhost log::

-11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma::

+    testpmd>stop

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5]' \
-    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-    >set fwd mac
-    >start

+6. Restart vhost port and send imix packets again, check that the same expected throughput is reached::

-12. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.

+    testpmd>start
+    testpmd>show port stats all
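
+.. note::
+
+   The per-queue check in step 5 relies on standard testpmd behaviour: when forwarding
+   is stopped with ``testpmd>stop``, testpmd prints per-stream forward statistics (one
+   RX/TX queue pair per stream), so the RX-packets and TX-packets counters of every
+   queue can be inspected there.
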
-13. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.

+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::

-Test Case 3: PVP packed ring all path vhost enqueue operations with cbdma
-=========================================================================

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-Packet pipeline:
-================
-TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::

-1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \
-    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start

+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::

-2. Launch virtio-user with inorder mergeable path::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start

+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::

-3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port and send imix pkts again, check get same throuhput::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-    testpmd>show port stats all
-    testpmd>stop
-    testpmd>start
-    testpmd>show port stats all

+11. Relaunch virtio-user with vectorized path and ring size is not a power of 2, then repeat steps 4-6::

-4. Relaunch virtio-user with mergeable path, then repeat step 3::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025
+    testpmd>set fwd mac
+    testpmd>start
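
+.. note::
+
+   For reference, the datapath exercised in steps 3 and 7-11 is selected purely by the
+   virtio-user devargs: ``mrg_rxbuf=1,in_order=1`` gives the inorder mergeable path,
+   ``mrg_rxbuf=1,in_order=0`` the mergeable path, ``mrg_rxbuf=0,in_order=1`` the inorder
+   non-mergeable path, ``mrg_rxbuf=0,in_order=0`` the non-mergeable path, and
+   ``vectorized=1`` together with ``--force-max-simd-bitwidth=512`` requests the
+   vectorized path; ``packed_vq=1`` keeps every variant on the packed ring layout.
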
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start

+12. Quit all testpmd and relaunch vhost with iova=pa by below command::

-5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start

+13. Rerun step 9.

-6. Relaunch virtio-user with non-mergeable path, then repeat step 3::

+Test Case 11: PVP packed ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
+------------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, TRex) to test the performance of packed ring all path multi-queues vhost enqueue
+operations with an M to N mapping between vrings and CBDMA virtual channels.

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start

+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.

-7. Relaunch virtio-user with vectorized path, then repeat step 3::

+2. Launch vhost by below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start
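
+.. note::
+
+   Note the deliberate asymmetry in the command above: eight queues are configured but
+   only ``dmas=[txq0;txq1;txq2]`` are attached to DMA channels, so enqueue on txq0-txq2
+   is expected to be CBDMA-accelerated while the remaining queues use the normal CPU
+   copy path. This is part of the M:N coverage of this case.
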
-8. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 3::

+3. Launch virtio-user with inorder mergeable path::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \
-    -- -i --nb-cores=1 --txd=1025 --rxd=1025
-    >set fwd mac
-    >start

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-9. Quit all testpmd and relaunch vhost with iova=pa by below command::

+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected value::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \
-    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start

+    testpmd>show port stats all

-10. Rerun steps 2-8.

+5. Stop vhost port, check that packets exist in both the vhost RX and TX directions (in all queues when started with multi-queue) from the vhost log::

-Test Case 4: PVP packed ring dynamic queue number vhost enqueue operations with cbdma
-=====================================================================================

+    testpmd>stop

-1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command::

+6. Restart vhost port and send imix packets again, check that the same expected throughput is reached::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-    >set fwd mac
-    >start

+    testpmd>start
+    testpmd>show port stats all

-2. Launch virtio-user by below command::

+7. Relaunch virtio-user with mergeable path, then repeat steps 4-6::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1,packed_vq=1 \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-    >set fwd mac
-    >start

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-3. Send imix packets from packet generator with random ip, check perforamnce can get target.

+8. Relaunch virtio-user with inorder non-mergeable path, then repeat steps 4-6::

-4. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma::

+9. Relaunch virtio-user with non-mergeable path, then repeat steps 4-6::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3]' \
-    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-    >set fwd mac
-    >start

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-6. Send imix packets from packet generator with random ip, check perforamnce can get target.

+10. Relaunch virtio-user with vectorized path, then repeat steps 4-6::

-7. Stop vhost port, check vhost RX and TX direction both exist packtes in 4 queues from vhost log.

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-8. Quit and relaunch vhost with 8 queues w/ cbdma::

+11. Relaunch virtio-user with vectorized path and ring size is not a power of 2, then repeat steps 4-6::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-    >set fwd mac
-    >start

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd>set fwd mac
+    testpmd>start
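
+.. note::
+
+   Steps 10 and 11 differ only in the ring size. The vectorized packed ring path assumed
+   here requires a power-of-2 ring size, so with ``queue_size=1025`` virtio-user is
+   expected to fall back to the scalar packed ring path; step 11 therefore verifies that
+   this fallback still works against the CBDMA-accelerated vhost side.
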
-9. Send imix packets from packet generator with random ip, check perforamnce can get target.

+12. Quit all testpmd and relaunch vhost by below command::

-10. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start

-11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma::

+13. Rerun step 7.

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5]' \
-    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-    >set fwd mac
-    >start

+14. Quit all testpmd and relaunch vhost with iova=pa by below command::

-12. Send imix packets from packet generator with random ip, check perforamnce can get target.

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start

-13. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.

+15. Rerun step 9.

-Test Case 5: loopback split ring large chain packets stress test with cbdma enqueue
-====================================================================================

+Test Case 12: PVP packed ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
+-----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and a traffic generator (for example, TRex) to check that vhost-user works well when packed ring vhost enqueue
+operations use an M to N mapping between vrings and CBDMA virtual channels and the queue number changes dynamically.

-Packet pipeline:
-================
-Vhost <--> Virtio

+1. Bind 1 NIC port and 8 CBDMA channels to vfio-pci, as common step 1.

-1. Bind 1 CBDMA channel to vfio-pci and launch vhost::

+2. Launch vhost by below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-    --iova=va -- -i --nb-cores=1 --mbuf-size=65535

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
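
+.. note::
+
+   The dynamic queue number scenario depends on the reconnect semantics assumed by this
+   plan: vhost runs as the client (``client=1``) and virtio-user as the server
+   (``server=1`` in the next step), so vhost can be quit and relaunched with a different
+   queue count and DMA mapping while the virtio-user side keeps running and reconnects.
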
-2. Launch virtio and start testpmd::

+3. Launch virtio-user by below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1, \
-    mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048 \
-    -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
-    >start

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,server=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start

-3. Send large packets from vhost, check virtio can receive packets::

+4. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput reaches the expected value::

-    testpmd> vhost enable tx all
-    testpmd> set txpkts 65535,65535,65535,65535,65535
-    testpmd> start tx_first 32
-    testpmd> show port stats all

+    testpmd>show port stats all

-4. Quit all testpmd and relaunch vhost with iova=pa::

+5. Stop vhost port, check that packets exist in both the vhost RX and TX directions (in all queues when started with multi-queue) from the vhost log::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-    --iova=pa -- -i --nb-cores=1 --mbuf-size=65535

+    testpmd>stop

-5. Rerun steps 2-3.

+6. Restart vhost port and send imix packets again, check that the same expected throughput is reached::

-Test Case 6: loopback packed ring large chain packets stress test with cbdma enqueue
-====================================================================================

+    testpmd>start
+    testpmd>show port stats all

-Packet pipeline:
-================
-Vhost <--> Virtio

+7. Quit and relaunch vhost with 1:1 mapping between vrings and CBDMA virtual channels, then repeat steps 4-6::

-1. Bind 1 CBDMA channel to vfio-pci and launch vhost::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+    --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3]
+    testpmd>set fwd mac
+    testpmd>start
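
+.. note::
+
+   In the relaunch above, four forwarding cores (``--nb-cores=4``), four queue pairs and
+   four CBDMA devices line up one to one, so each vring gets a dedicated DMA virtual
+   channel: the 1:1 mapping named in this step.
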
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-    --iova=va -- -i --nb-cores=1 --mbuf-size=65535

+8. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and CBDMA virtual channels, then repeat steps 4-6::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7]
+    testpmd>set fwd mac
+    testpmd>start

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1, \
-    mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048 \
-    -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
-    >start

+9. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and CBDMA virtual channels, then repeat steps 4-6::

-3. Send large packets from vhost, check virtio can receive packets::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+    testpmd>set fwd mac
+    testpmd>start

-    testpmd> vhost enable tx all
-    testpmd> set txpkts 65535,65535,65535,65535,65535
-    testpmd> start tx_first 32
-    testpmd> show port stats all

+10. Quit and relaunch vhost with iova=pa by below command::

-4. Quit all testpmd and relaunch vhost with iova=pa::

+    <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+    testpmd>set fwd mac
+    testpmd>start

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' --iova=pa -- -i --nb-cores=1 --mbuf-size=65535

+11. Rerun steps 4-6.

-5. Rerun steps 2-3.
--
2.25.1