* [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan.rst
@ 2021-03-02 15:22 Yinan Wang
  2021-03-03  5:16 ` Tu, Lijuan
  0 siblings, 1 reply; 7+ messages in thread
From: Yinan Wang @ 2021-03-02 15:22 UTC (permalink / raw)
  To: dts; +Cc: Yinan Wang

1. Change test traffic to imix packets for better case coverage.
2. Update normal path to vectorized path for better case coverage.
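
For context, imix [64,1518] means a mix of frame sizes spanning 64 to 1518
bytes rather than a single fixed size. How the mix is produced is
generator-specific; a minimal Scapy sketch, assuming a hypothetical TG
interface name "ens1" and reusing the virtio-user MAC from the plan, could
look like::

    python3 - <<'EOF'
    # Sketch only: build a 64..1518-byte frame mix and send it in a loop.
    from scapy.all import Ether, IP, Raw, sendp
    sizes = [64, 128, 256, 512, 1024, 1280, 1518]
    pkts = []
    for size in sizes:
        base = Ether(dst="00:01:02:03:04:05") / IP()
        pad = size - len(base) - 4          # reserve 4 bytes for the FCS
        pkts.append(base / Raw(b"x" * pad))
    sendp(pkts, iface="ens1", loop=1)       # "ens1" is a placeholder
    EOF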

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index bbfa22c1..a1ceda74 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -86,7 +86,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     >set fwd mac
     >start
 
-3. Send packets with packet size [64,1518] from packet generator, check the throughput can get expected data, restart vhost port, then check throughput again::
+3. Send imix packets [64,1518] from packet generator, check that the expected throughput is reached, restart vhost port, then check throughput again::
 
     testpmd>show port stats all
     testpmd>stop
@@ -158,11 +158,11 @@ Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
     testpmd>start
     testpmd>show port stats all
 
-6. Relaunch virtio-user with 2 queues::
+6. Relaunch virtio-user with vectorized path and 2 queues::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,vectorized=1,queues=2,server=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
     >start
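
The vectorized path used in step 6 depends on the virtio-user devargs
mrg_rxbuf=0, in_order=1 and vectorized=1 set together. A hedged way to
confirm which Rx path the PMD actually selected (the log-type name below is
assumed from recent DPDK sources) is to relaunch virtio-user with debug
logging enabled::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
    --log-level=pmd.net.virtio.init:debug \
    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,vectorized=1,queues=2,server=1 \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2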
 
-- 
2.25.1


* [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan.rst
@ 2021-07-29 18:00 Yinan Wang
  0 siblings, 0 replies; 7+ messages in thread
From: Yinan Wang @ 2021-07-29 18:00 UTC (permalink / raw)
  To: dts; +Cc: Yinan Wang

1. Correct test app name.
2. Add a tip that cbdma cases need special dpdk code.
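
The "special dpdk code" tip means the async vhost patches must be applied to
a local DPDK tree before building. A minimal sketch, assuming a generic
meson build and a placeholder patch name::

    cd dpdk
    git apply /path/to/async-vhost.patch    # placeholder name, not upstream
    meson setup x86_64-native-linuxapp-gcc
    ninja -C x86_64-native-linuxapp-gcc
    # the corrected app name is x86_64-native-linuxapp-gcc/app/dpdk-testpmd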

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 54 ++++++++++++++--------------
 1 file changed, 28 insertions(+), 26 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index 325b5d87..e4aad3c0 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -64,6 +64,8 @@ Here is an example:
  $ ./dpdk-testpmd -c f -n 4 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024'
 
+Note: All cases in this test plan require a DPDK build with local patches applied to support the async vhost PMD.
+
 Test Case 1: PVP Split all path with DMA-accelerated vhost enqueue
 ==================================================================
 
@@ -73,14 +75,14 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    ./dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
 2. Launch virtio-user with inorder mergeable path::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -95,7 +97,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 4. Relaunch virtio-user with mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -103,7 +105,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -111,7 +113,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -119,7 +121,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 7. Relaunch virtio-user with vector_rx path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    ./dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -131,7 +133,7 @@ Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx o
 
 1. Bind 8 cbdma channels and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+    ./dpdk-testpmd -n 4 -l 28-29  \
      --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
      -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
      >set fwd mac
@@ -139,7 +141,7 @@ Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx o
 
 2. Launch virtio-user by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
@@ -151,7 +153,7 @@ Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx o
 
 5. Quit vhost port and relaunch vhost with 4 queues w/ cbdma::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+    ./dpdk-testpmd -n 4 -l 28-29  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4
     >set fwd mac
@@ -163,7 +165,7 @@ Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx o
 
 8. Quit vhost port and relaunch vhost with 8 queues w/ cbdma::
 
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+     ./dpdk-testpmd -n 4 -l 28-29  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
@@ -182,14 +184,14 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    ./dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
 2. Launch virtio-user with inorder mergeable path::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -204,7 +206,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 4. Relaunch virtio-user with mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -212,7 +214,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -220,7 +222,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -228,7 +230,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 7. Relaunch virtio-user with vectorized path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    ./dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -237,7 +239,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 8. Relaunch virtio-user with vector_rx path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    ./dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
     -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
@@ -249,7 +251,7 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 
 1. Bind 8 cbdma channels and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+    ./dpdk-testpmd -n 4 -l 28-29  \
      --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
      -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
      >set fwd mac
@@ -257,7 +259,7 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 
 2. Launch virtio-user by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1,packed_vq=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
@@ -269,7 +271,7 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 
 5. Quit vhost port and relaunch vhost with 4 queues w/ cbdma::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+    ./dpdk-testpmd -n 4 -l 28-29  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4
     >set fwd mac
@@ -281,7 +283,7 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 
 8. Quit vhost port and relaunch vhost with 8 queues w/ cbdma::
 
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+     ./dpdk-testpmd -n 4 -l 28-29  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
@@ -296,14 +298,14 @@ Test Case 5: Compare PVP split ring performance between CPU copy, CBDMA copy and
 
 1. Bind one cbdma port and one nic port which on same numa to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=1024' \
+    ./dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
 2. Launch virtio-user with inorder mergeable path::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -315,7 +317,7 @@ Test Case 5: Compare PVP split ring performance between CPU copy, CBDMA copy and
 
 4. Quit vhost side, relaunch with below cmd::
 
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=2000' \
+ ./dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=2000' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
@@ -326,14 +328,14 @@ Test Case 5: Compare PVP split ring performance between CPU copy, CBDMA copy and
 
 6. Quit two testpmd, relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1' \
+    ./dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
 7. Launch virtio-user with inorder mergeable path::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
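
The "bind ... to igb_uio" steps throughout this plan use DPDK's
dpdk-devbind.py helper. A minimal sketch, assuming the igb_uio module from
dpdk-kmods is already built, with the CBDMA addresses taken from the
commands above and a placeholder NIC address::

    modprobe uio
    insmod igb_uio.ko                                    # from dpdk-kmods
    ./usertools/dpdk-devbind.py -b igb_uio 0000:18:00.0  # NIC port (placeholder)
    ./usertools/dpdk-devbind.py -b igb_uio 0000:80:04.0 0000:80:04.1 \
        0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 \
        0000:80:04.6 0000:80:04.7                        # 8 CBDMA channels
    ./usertools/dpdk-devbind.py --status                 # verify the binding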
-- 
2.25.1


* [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan.rst
@ 2021-04-01 17:12 Yinan Wang
  2021-04-07  2:19 ` Tu, Lijuan
  0 siblings, 1 reply; 7+ messages in thread
From: Yinan Wang @ 2021-04-01 17:12 UTC (permalink / raw)
  To: dts; +Cc: Yinan Wang

Add cases for cbdma packed ring test.

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 140 ++++++++++++++++++++++++++-
 1 file changed, 136 insertions(+), 4 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index a1ceda74..c827adaa 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -1,4 +1,4 @@
-.. Copyright (c) <2020>, Intel Corporation
+.. Copyright (c) <2021>, Intel Corporation
    All rights reserved.
 
    Redistribution and use in source and binary forms, with or without
@@ -126,10 +126,10 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     >set fwd mac
     >start
 
-Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
-=============================================================================
+Test Case2: Split ring dynamic queue number test for DMA-accelerated vhost Tx operations
+========================================================================================
 
-1. Bind four cbdma port and one nic port to igb_uio, then launch vhost by below command::
+1. Bind four cbdma channels and one nic port to igb_uio, then launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
@@ -222,4 +222,136 @@ Test Case3: CBDMA threshold value check
     dma parameters: vid1,qid0,dma*,threshold:4096
     dma parameters: vid1,qid2,dma*,threshold:4096
 
+Test Case 4: PVP packed ring all path with DMA-accelerated vhost enqueue
+========================================================================
 
+Packet pipeline: 
+================
+TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
+
+1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+2. Launch virtio-user with inorder mergeable path::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+3. Send imix packets [64,1518] from packet generator, check that the expected throughput is reached, restart vhost port, then check throughput again::
+
+    testpmd>show port stats all
+    testpmd>stop
+    testpmd>start
+    testpmd>show port stats all
+
+4. Relaunch virtio-user with mergeable path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+7. Relaunch virtio-user with vectorized path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+8. Relaunch virtio-user with vector_rx path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+Test Case5: Packed ring dynamic queue number test for DMA-accelerated vhost Tx operations
+=========================================================================================
+
+1. Bind four cbdma channels and one nic port to igb_uio, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    >set fwd mac
+    >start
+
+2. Launch virtio-user by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    >set fwd mac
+    >start
+
+3. Send imix packets [64,1518] from packet generator with random ip, check that the performance target is met.
+
+4. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions on both queues.
+
+5. On virtio-user side, dynamically change the rx queue number from 2 to 1, then check that RX/TX works normally on one queue::
+
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all
+
+6. Relaunch virtio-user with vectorized path and 2 queues::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,vectorized=1,queues=2,server=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    >set fwd mac
+    >start
+
+7. Send imix packets [64,1518] from packet generator with random ip, check that the performance target is met.
+
+8. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions on queue0.
+
+9. On vhost side, dynamically change the rx queue number from 2 to 1, then check that RX/TX works normally on one queue::
+
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all
+
+10. Relaunch vhost with another two cbdma channels and 2 queues, check that the performance target is met::
+
+     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@00:04.5;txq1@00:04.6],dmathr=512' \
+     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+     >set fwd mac
+     >start
+
+11. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions on both queues.
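
For the "check ... from vhost log" steps, one hedged reading, assuming the
plan refers to testpmd's per-stream statistics, is to inspect the per-queue
counters on the vhost side; both queues should show non-zero RX-packets and
TX-packets::

    testpmd>show fwd stats all
    testpmd>stop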
-- 
2.25.1


* [dts]  [PATCH v1] test_plans/vhost_cbdma_test_plan.rst
@ 2020-12-15 23:46 Yinan Wang
  2020-12-21  7:33 ` Tu, Lijuan
  0 siblings, 1 reply; 7+ messages in thread
From: Yinan Wang @ 2020-12-15 23:46 UTC (permalink / raw)
  To: dts; +Cc: Yinan Wang

Add one new cbdma case and optimize the dynamic queue number test case.

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 117 ++++++++++++++++++---------
 1 file changed, 80 insertions(+), 37 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index b2230900..504b9aa0 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -61,7 +61,7 @@ operations of queues:
    otherwise, leverage librte_vhost to perform memory copy.
 
 Here is an example:
- $ ./testpmd -c f -n 4 \
+ $ ./dpdk-testpmd -c f -n 4 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024'
 
 Test Case 1: PVP Split all path with DMA-accelerated vhost enqueue
@@ -73,14 +73,14 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
-     set fwd mac
-     start
+    >set fwd mac
+    >start
 
 2. Launch virtio-user with inorder mergeable path::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -95,7 +95,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 4. Relaunch virtio-user with mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -103,15 +103,15 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0in_order=1,queues=1 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
 6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -119,7 +119,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 7. Relaunch virtio-user with vector_rx path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -129,54 +129,97 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
 =============================================================================
 
-1. Bind two cbdma port and one nic port to igb_uio, then launch vhost by below command::
+1. Bind four cbdma port and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
-     set fwd mac
-     start
+    >set fwd mac
+    >start
 
-2. Launch virtio-user by below ccd ommand::
+2. Launch virtio-user by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
     >start
 
-3. Send packets with packet size [64,1518] from packet generator with random ip, check perforamnce can get target and RX/TX can work normally in two queues.
+3. Send packets with packet size [64,1518] from packet generator with random ip, check that the performance target is met.
 
-4. On virtio-user side, dynamic change rx queue numbers from 2 queue to 1 queues, then check one queue RX/TX can work normally::
+4. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions on both queues.
 
-     start
-     stop
-     port stop all
-     port config all rxq 1
-     port start all
-     start
+5. On virtio-user side, dynamically change the rx queue number from 2 to 1, then check that RX/TX works normally on one queue::
+
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all
 
-5. Relaunch virtio-user with queues=2, check RX/TX can work normally in two queues::
+6. Relaunch virtio-user with 2 queues::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
     >start
 
-4. On vhost side, dynamic change rx queue numbers from 2 queue to 1 queues, then check one queue RX/TX can work normally::
+7. Send packets with packet size [64,1518] from packet generator with random ip, check that the performance target is met.
 
-     start
-     stop
-     port stop all
-     port config all rxq 1
-     port start all
-     start
+8. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions on queue0.
 
-6. Relaunch vhost with another two cbdma channels, check perforamnce can get target and RX/TX can work normally in two queueus::
+9. On vhost side, dynamically change the rx queue number from 2 to 1, then check that RX/TX works normally on one queue::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.0],dmathr=512' \
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all
+
+10. Relaunch vhost with another two cbdma channels and 2 queues, check that the performance target is met::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@00:04.5;txq1@00:04.6],dmathr=512' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
-    >start
\ No newline at end of file
+    >start
+
+11. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions on both queues.
+
+Test Case3: CBDMA threshold value check
+========================================
+
+1. Bind four cbdma port to igb_uio, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@80:04.0;txq1@80:04.1],dmathr=512' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@80:04.2;txq1@80:04.3],dmathr=4096' -- \
+    -i --nb-cores=1 --rxq=2 --txq=2
+    >start
+
+2. Launch virtio-user1::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2
+    >start
+
+3. Launch virtio-user0::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2
+    >start
+  
+4. Check from the vhost log that the cbdma threshold value for each vhost port is configured correctly::
+
+    dma parameters: vid0,qid0,dma*,threshold:512
+    dma parameters: vid0,qid2,dma*,threshold:512
+    dma parameters: vid1,qid0,dma*,threshold:4096
+    dma parameters: vid1,qid2,dma*,threshold:4096
+
+
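
As the introduction of the plan describes, dmathr is a per-queue copy-length
threshold: copies larger than it are offloaded to the CBDMA channel, while
smaller ones are done by the CPU in librte_vhost. Assuming the vhost testpmd
output is captured to a file named vhost.log, the step 4 check can be
scripted::

    grep "dma parameters" vhost.log
    # expect threshold:512 for vid0 queues and threshold:4096 for vid1 queues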
-- 
2.25.1

