From: Lingli Chen <linglix.chen@intel.com>
To: dts@dpdk.org
Cc: Lingli Chen <linglix.chen@intel.com>
Subject: [dts][PATCH V4 2/3] test_plans/vhost_cbdma: modify test plan to cover more test points
Date: Wed, 26 Jan 2022 17:09:06 +0800
Message-ID: <20220126090906.915545-1-linglix.chen@intel.com>

v1:
Modify test plan to cover more test points.
v2:
Fix the test plan "make html" format issue.

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 365 ++++++++++++++++-----------
 1 file changed, 211 insertions(+), 154 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index 3d0e518a..c8f8b8c5 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -1,34 +1,34 @@
 .. Copyright (c) <2021>, Intel Corporation
-   All rights reserved.
-
-   Redistribution and use in source and binary forms, with or without
-   modification, are permitted provided that the following conditions
-   are met:
-
-   - Redistributions of source code must retain the above copyright
-     notice, this list of conditions and the following disclaimer.
-
-   - Redistributions in binary form must reproduce the above copyright
-     notice, this list of conditions and the following disclaimer in
-     the documentation and/or other materials provided with the
-     distribution.
-
-   - Neither the name of Intel Corporation nor the names of its
-     contributors may be used to endorse or promote products derived
-     from this software without specific prior written permission.
-
-   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
-   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
-   OF THE POSSIBILITY OF SUCH DAMAGE.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    - Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+
+    - Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in
+      the documentation and/or other materials provided with the
+      distribution.
+
+    - Neither the name of Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+    FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+    COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+    INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+    (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+    SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+    HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+    STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+    OF THE POSSIBILITY OF SUCH DAMAGE.
 
 ==========================================================
 DMA-accelerated Tx operations for vhost-user PMD test plan
@@ -38,41 +38,42 @@ Overview
 --------
 
 This feature supports offloading large data movement in vhost enqueue operations
-from the CPU to the I/OAT device for every queue. Note that I/OAT acceleration
-is just enabled for split rings now. In addition, a queue can only use one I/OAT
-device, and I/OAT devices cannot be shared among vhost ports and queues. That is,
-an I/OAT device can only be used by one queue at a time. DMA devices used by
-queues are assigned by users; for a queue without assigning a DMA device, the
-PMD will leverages librte_vhost to perform vhost enqueue operations. Moreover,
-users cannot enable I/OAT acceleration for live-migration. Large copies are
-offloaded from the CPU to the DMA engine in an asynchronous manner. The CPU just
-submits copy jobs to the DMA engine and without waiting for DMA copy completion;
+from the CPU to the I/OAT device (a DMA engine in Intel processors) for every queue.
+In addition, a queue can only use one I/OAT device, and I/OAT devices cannot be shared
+among vhost ports and queues. That is, an I/OAT device can only be used by one queue at
+a time. DMA devices (e.g., CBDMA) used by queues are assigned by users; for a queue
+without an assigned DMA device, the PMD leverages librte_vhost to perform vhost enqueue
+operations. Moreover, users cannot enable I/OAT acceleration for live-migration. Large
+copies are offloaded from the CPU to the DMA engine in an asynchronous manner. The CPU
+just submits copy jobs to the DMA engine without waiting for DMA copy completion;
 there is no CPU intervention during DMA data transfer. By overlapping CPU
 computation and DMA copy, we can save precious CPU cycles and improve the overall
 throughput for vhost-user PMD based applications, like OVS. Due to startup overheads
 associated with DMA engines, small copies are performed by the CPU.
+DPDK 21.11 adds vfio support for DMA devices in vhost. When DMA devices are bound to
+the vfio driver, VA mode is the default and recommended mode. For PA mode, page-by-page
+mapping may exceed the IOMMU's max capability; it is better to use 1G guest hugepages.
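+
+For example, 1G hugepages can be reserved through sysfs (a generic sketch; the page
+count depends on the platform, and on some kernels 1G pages can only be reserved at
+boot via the kernel command line)::
+
+    echo 8 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages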
 
-We introduce a new vdev parameter to enable DMA acceleration for Tx
-operations of queues:
-
- - dmas: This parameter is used to specify the assigned DMA device of
-   a queue.
+We introduce a new vdev parameter to enable DMA acceleration for Tx operations of queues:
+
+- dmas: This parameter is used to specify the assigned DMA device of a queue.
 
 Here is an example::
- $ ./dpdk-testpmd -c f -n 4 \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]'
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=va -- -i
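+
+Before launching, the CBDMA and NIC devices need to be bound to vfio-pci. A typical
+sequence (example PCI address; use the addresses present on your platform)::
+
+    modprobe vfio-pci
+    ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:04.0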
 
-Test Case 1: PVP Split all path with DMA-accelerated vhost enqueue
-==================================================================
+Test Case 1: PVP split ring all path vhost enqueue operations with cbdma
+========================================================================
 
-Packet pipeline: 
+Packet pipeline:
 ================
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
-1. Bind one cbdma port and one nic port to vfio-pci, then launch vhost by below command::
+1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
@@ -80,11 +81,11 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
-3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port, then check throughput again::
+3. Send imix packets [64,1518] from packet generator, check the throughput can reach the expected value, restart vhost port and send imix packets again, check the same throughput can be achieved::
 
     testpmd>show port stats all
     testpmd>stop
@@ -95,7 +96,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
@@ -103,7 +104,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
@@ -111,77 +112,99 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
 7. Relaunch virtio-user with vector_rx path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
-Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx operations
-=========================================================================================
+8. Quit all testpmd and relaunch vhost with iova=pa by below command::
 
-1. Bind 8 cbdma channels and one nic port to vfio-pci, then launch vhost by below command::
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+9. Rerun steps 2-7.
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-     >set fwd mac
-     >start
+Test Case 2: PVP split ring dynamic queue number vhost enqueue operations with cbdma
+=====================================================================================
+
+1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    >set fwd mac
+    >start
 
 2. Launch virtio-user by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start
 
-3. Send imix packets from packet generator with random ip, check perforamnce can get target.
+3. Send imix packets [64,1518] from packet generator with random IP, check performance can reach the target.
 
 4. Stop vhost port, check that both RX and TX directions have packets in 8 queues from the vhost log.
 
-5. Quit vhost port and relaunch vhost with 4 queues w/ cbdma::
+5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start
 
-6. Send imix packets from packet generator with random ip, check perforamnce can get target.
+6. Send imix packets [64,1518] from packet generator with random IP, check performance can reach the target.
 
-7. Stop vhost port, check vhost RX and TX direction both exist packtes in 4 queues from vhost log.
+7. Stop vhost port, check that both RX and TX directions have packets in 8 queues from the vhost log.
 
-8. Quit vhost port and relaunch vhost with 8 queues w/ cbdma::
+8. Quit and relaunch vhost with 8 queues w/ cbdma::
 
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start
 
-9. Send imix packets from packet generator with random ip, check perforamnce can get target.
+9. Send imix packets [64,1518] from packet generator with random IP, check performance can reach the target.
 
 10. Stop vhost port, check that both RX and TX directions have packets in 8 queues from the vhost log.
 
-Test Case 3: PVP packed ring all path with DMA-accelerated vhost enqueue
-========================================================================
+11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5]' \
+    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    >set fwd mac
+    >start
+
+12. Send imix packets [64,1518] from packet generator with random IP, check performance can reach the target.
+
+13. Stop vhost port, check that both RX and TX directions have packets in 8 queues from the vhost log.
+
+Test Case 3: PVP packed ring all path vhost enqueue operations with cbdma
+=========================================================================
 
-Packet pipeline: 
+Packet pipeline:
 ================
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
-1. Bind one cbdma port and one nic port to vfio-pci, then launch vhost by below command::
+1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
@@ -189,11 +212,11 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
-3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port, then check throughput again::
+3. Send imix packets [64,1518] from packet generator, check the throughput can reach the expected value, restart vhost port and send imix packets again, check the same throughput can be achieved::
 
     testpmd>show port stats all
     testpmd>stop
@@ -204,7 +227,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
@@ -212,7 +235,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
@@ -220,44 +243,52 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
 7. Relaunch virtio-user with vectorized path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
-8. Relaunch virtio-user with vector_rx path, then repeat step 3::
+8. Relaunch virtio-user with vectorized path and a ring size that is not a power of 2, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
-    -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txd=1025 --rxd=1025
     >set fwd mac
     >start
 
-Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx operations
-==========================================================================================
+9. Quit all testpmd and relaunch vhost with iova=pa by below command::
 
-1. Bind 8 cbdma channels and one nic port to vfio-pci, then launch vhost by below command::
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \
+    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+10. Rerun steps 2-8.
+
+Test Case 4: PVP packed ring dynamic queue number vhost enqueue operations with cbdma
+=====================================================================================
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-     >set fwd mac
-     >start
+1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    >set fwd mac
+    >start
 
 2. Launch virtio-user by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start
 
@@ -265,11 +296,11 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 
 4. Stop vhost port, check that both RX and TX directions have packets in 8 queues from the vhost log.
 
-5. Quit vhost port and relaunch vhost with 4 queues w/ cbdma::
+5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3]' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start
 
@@ -277,11 +308,11 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 
 7. Stop vhost port, check that both RX and TX directions have packets in 8 queues from the vhost log.
 
-8. Quit vhost port and relaunch vhost with 8 queues w/ cbdma::
+8. Quit and relaunch vhost with 8 queues w/ cbdma::
 
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start
 
@@ -289,59 +320,85 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 
 10. Stop vhost port, check that both RX and TX directions have packets in 8 queues from the vhost log.
 
-Test Case 5: Compare PVP split ring performance between CPU copy, CBDMA copy and Sync copy
-==========================================================================================
+11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma::
 
-1. Bind one cbdma port and one nic port which on same numa to vfio-pci, then launch vhost by below command::
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5]' \
+    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    >set fwd mac
+    >start
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start
+12. Send imix packets from packet generator with random IP, check performance can reach the target.
 
-2. Launch virtio-user with inorder mergeable path::
+13. Stop vhost port, check that both RX and TX directions have packets in 8 queues from the vhost log.
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start
+Test Case 5: loopback split ring large chain packets stress test with cbdma enqueue
+====================================================================================
 
-3. Send packets with 64b and 1518b seperately from packet generator, record the throughput as sync copy throughput for 64b and cbdma copy for 1518b::
+Packet pipeline:
+================
+Vhost <--> Virtio
 
-    testpmd>show port stats all
+1. Bind 1 CBDMA channel to vfio-pci and launch vhost::
 
-4.Quit vhost side, relaunch with below cmd::
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --mbuf-size=65535
 
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
+2. Launch virtio and start testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048 \
+    -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
     >start
 
-5. Send packets with 1518b from packet generator, record the throughput as sync copy throughput for 1518b::
+3. Send large chain packets (five 65535-byte segments per packet) from vhost, check virtio can receive packets::
 
-    testpmd>show port stats all
+    testpmd> vhost enable tx all
+    testpmd> set txpkts 65535,65535,65535,65535,65535
+    testpmd> start tx_first 32
+    testpmd> show port stats all
 
-6. Quit two testpmd, relaunch vhost by below command::
+4. Quit all testpmd and relaunch vhost with iova=pa::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=pa -- -i --nb-cores=1 --mbuf-size=65535
 
-7. Launch virtio-user with inorder mergeable path::
+5. Rerun steps 2-3.
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
+Test Case 6: loopback packed ring large chain packets stress test with cbdma enqueue
+====================================================================================
+
+Packet pipeline:
+================
+Vhost <--> Virtio
+
+1. Bind 1 CBDMA channel to vfio-pci and launch vhost::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --mbuf-size=65535
+
+2. Launch virtio and start testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048 \
+    -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
     >start
 
-8. Send packets with 64b from packet generator, record the throughput as cpu copy for 64b::
+3. Send large chain packets (five 65535-byte segments per packet) from vhost, check virtio can receive packets::
 
-    testpmd>show port stats all
+    testpmd> vhost enable tx all
+    testpmd> set txpkts 65535,65535,65535,65535,65535
+    testpmd> start tx_first 32
+    testpmd> show port stats all
+
+4. Quit all testpmd and relaunch vhost with iova=pa::
 
-9. Check performance can meet below requirement::
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=pa -- -i --nb-cores=1 --mbuf-size=65535
 
-   (1)CPU copy vs. sync copy delta < 10% for 64B packet size
-   (2)CBDMA copy vs sync copy delta > 5% for 1518 packet size
+5. Rerun steps 2-3.
-- 
2.25.1

