* [dts] [PATCH 01/11 v1] test_plans: add packed ring cases for loopback_multi_paths_port_restart
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 02/11 v1] test_plans: add packed ring cases for loopback_multi_queues Yinan
` (10 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
...ack_multi_paths_port_restart_test_plan.rst | 109 ++++++++++++------
1 file changed, 76 insertions(+), 33 deletions(-)
diff --git a/test_plans/loopback_multi_paths_port_restart_test_plan.rst b/test_plans/loopback_multi_paths_port_restart_test_plan.rst
index 0b9b87a..1536ebf 100644
--- a/test_plans/loopback_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/loopback_multi_paths_port_restart_test_plan.rst
@@ -34,16 +34,10 @@
vhost/virtio loopback with multi-paths and port restart test plan
=================================================================
-Description
-===========
+This test plan covers the split virtqueue in-order mergeable, in-order non-mergeable, mergeable, non-mergeable and vector_rx paths, and the packed virtqueue in-order mergeable, in-order non-mergeable, mergeable and non-mergeable paths. Each path also covers a port restart test, where only one packet is sent each time from testpmd after the restart.
-Benchmark vhost/virtio-user loopback test with 8 rx/tx paths.
-Includes mergeable, normal, vector_rx, inorder mergeable,
-inorder no-mergeable, virtio 1.1 mergeable, virtio 1.1 inorder, virtio 1.1 normal path.
-Also cover port restart test with each path.
-
-Test Case 1: loopback test with virtio 1.1 mergeable path
-=========================================================
+Test Case 1: loopback test with packed ring mergeable path
+==========================================================
1. Launch vhost by below command::
@@ -79,11 +73,12 @@ Test Case 1: loopback test with virtio 1.1 mergeable path
6. Restart port at vhost side and re-calculate the average throughput, verify the throughput is not zero after port restart::
testpmd>port start 0
- testpmd>start tx_first 32
+ testpmd>set burst 1
+ testpmd>start tx_first 1
testpmd>show port stats all
-Test Case 2: loopback test with virtio 1.1 normal path
-======================================================
+Test Case 2: loopback test with packed ring non-mergeable path
+==============================================================
1. Launch vhost by below command::
@@ -119,11 +114,53 @@ Test Case 2: loopback test with virtio 1.1 normal path
6. Restart port at vhost side and re-calculate the average throughput, verify the throughput is not zero after port restart::
testpmd>port start 0
+ testpmd>set burst 1
+ testpmd>start tx_first 1
+ testpmd>show port stats all
+
+Test Case 3: loopback test with packed ring inorder mergeable path
+==================================================================
+
+1. Launch vhost by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -n 4 -l 2-4 --socket-mem 1024,1024 --legacy-mem --no-pci \
+ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+2. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
+ >set fwd mac
+ >start
+
+3. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 128, 256, 512, 1024, 1518]::
+
+ testpmd>set txpkts [frame_size]
testpmd>start tx_first 32
+
+4. Repeat the below command to get the throughput 10 times, then calculate the average throughput::
+
+ testpmd>show port stats all
+
+5. Stop port at vhost side and re-calculate the average throughput, verify the throughput is zero after port stop::
+
+ testpmd>stop
+ testpmd>port stop 0
+ testpmd>show port stats all
+
+6. Restart port at vhost side and re-calculate the average throughput, verify the throughput is not zero after port restart::
+
+ testpmd>port start 0
+ testpmd>set burst 1
+ testpmd>start tx_first 1
testpmd>show port stats all
-Test Case 3: loopback test with virtio 1.1 inorder path
-=======================================================
+Test Case 4: loopback test with packed ring inorder non-mergeable path
+======================================================================
1. Launch vhost by below command::
@@ -136,7 +173,7 @@ Test Case 3: loopback test with virtio 1.1 inorder path
./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=0 \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1 \
-- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>set fwd mac
>start
@@ -159,11 +196,12 @@ Test Case 3: loopback test with virtio 1.1 inorder path
6. Restart port at vhost side and re-calculate the average throughput, verify the throughput is not zero after port restart::
testpmd>port start 0
- testpmd>start tx_first 32
+ testpmd>set burst 1
+ testpmd>start tx_first 1
testpmd>show port stats all
-Test Case 4: loopback test with inorder mergeable path
-======================================================
+Test Case 5: loopback test with split ring inorder mergeable path
+==================================================================
1. Launch vhost by below command::
@@ -199,11 +237,12 @@ Test Case 4: loopback test with inorder mergeable path
6. Restart port at vhost side and re-calculate the average throughput, verify the throughput is not zero after port restart::
testpmd>port start 0
- testpmd>start tx_first 32
+ testpmd>set burst 1
+ testpmd>start tx_first 1
testpmd>show port stats all
-Test Case 5: loopback test with inorder no-mergeable path
-=========================================================
+Test Case 6: loopback test with split ring inorder non-mergeable path
+=====================================================================
1. Launch vhost by below command::
@@ -239,11 +278,12 @@ Test Case 5: loopback test with inorder no-mergeable path
6. Restart port at vhost side and re-calculate the average throughput, verify the throughput is not zero after port restart::
testpmd>port start 0
- testpmd>start tx_first 32
+ testpmd>set burst 1
+ testpmd>start tx_first 1
testpmd>show port stats all
-Test Case 6: loopback test with mergeable path
-==============================================
+Test Case 7: loopback test with split ring mergeable path
+=========================================================
1. Launch vhost by below command::
@@ -279,11 +319,12 @@ Test Case 6: loopback test with mergeable path
testpmd>stop
testpmd>port stop 0
testpmd>port start 0
- testpmd>start tx_first 32
+ testpmd>set burst 1
+ testpmd>start tx_first 1
testpmd>show port stats all
-Test Case 7: loopback test with normal path
-===========================================
+Test Case 8: loopback test with split ring non-mergeable path
+=============================================================
1. Launch vhost by below command::
@@ -319,11 +360,12 @@ Test Case 7: loopback test with normal path
6. Restart port at vhost side and re-calculate the average throughput, verify the throughput is not zero after port restart::
testpmd>port start 0
- testpmd>start tx_first 32
+ testpmd>set burst 1
+ testpmd>start tx_first 1
testpmd>show port stats all
-Test Case 8: loopback test with vector_rx path
-==============================================
+Test Case 9: loopback test with split ring vector_rx path
+=========================================================
1. Launch vhost by below command::
@@ -337,7 +379,7 @@ Test Case 8: loopback test with vector_rx path
./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0 \
- -- -i --tx-offloads=0x0 --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
>set fwd mac
>start
@@ -359,5 +401,6 @@ Test Case 8: loopback test with vector_rx path
6. Restart port at vhost side and re-calculate the average throughput, verify the throughput is not zero after port restart::
testpmd>port start 0
- testpmd>start tx_first 32
- testpmd>show port stats all
+ testpmd>set burst 1
+ testpmd>start tx_first 1
+ testpmd>show port stats all
\ No newline at end of file
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
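As a reference for the paths exercised by the patch above, the following is a minimal sketch that collects, in one place, the virtio-user --vdev devargs combinations used to select each rx/tx path; the shell variable names are illustrative only, and the authoritative launch commands are the ones inside the test cases::

    # Minimal sketch: devargs combinations used above to pick each rx/tx path.
    # Variable names are illustrative; see the test cases for the full commands.
    VDEV_BASE="net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net"

    # packed ring (virtio 1.1) paths
    PACKED_MERGEABLE="${VDEV_BASE},packed_vq=1,mrg_rxbuf=1,in_order=0"
    PACKED_NON_MERGEABLE="${VDEV_BASE},packed_vq=1,mrg_rxbuf=0,in_order=0"
    PACKED_INORDER_MERGEABLE="${VDEV_BASE},packed_vq=1,mrg_rxbuf=1,in_order=1"
    PACKED_INORDER_NON_MERGEABLE="${VDEV_BASE},packed_vq=1,mrg_rxbuf=0,in_order=1"

    # split ring (virtio 1.0) paths simply drop packed_vq=1
    SPLIT_INORDER_MERGEABLE="${VDEV_BASE},mrg_rxbuf=1,in_order=1"
    SPLIT_INORDER_NON_MERGEABLE="${VDEV_BASE},mrg_rxbuf=0,in_order=1"
    SPLIT_MERGEABLE="${VDEV_BASE},mrg_rxbuf=1,in_order=0"
    SPLIT_NON_MERGEABLE="${VDEV_BASE},mrg_rxbuf=0,in_order=0"

    # vector_rx reuses the split non-mergeable devargs, but testpmd is started
    # without --tx-offloads/--enable-hw-vlan-strip/--rss-ip (see Test Case 9)
    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 --legacy-mem --no-pci \
        --file-prefix=virtio --vdev="${SPLIT_NON_MERGEABLE}" \
        -- -i --nb-cores=1 --txd=1024 --rxd=1024
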
* [dts] [PATCH 02/11 v1] test_plans: add packed ring cases for loopback_multi_queues
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
2020-02-28 6:09 ` [dts] [PATCH 01/11 v1] test_plans: add packed ring cases for loopback_multi_paths_port_restart Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 03/11 v1] test_plans: add packed ring test case for pvp_virtio_user_2M_hugepages Yinan
` (9 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../loopback_multi_queues_test_plan.rst | 473 ++++++++++++++----
1 file changed, 372 insertions(+), 101 deletions(-)
diff --git a/test_plans/loopback_multi_queues_test_plan.rst b/test_plans/loopback_multi_queues_test_plan.rst
index 2021c3e..75cfd8d 100644
--- a/test_plans/loopback_multi_queues_test_plan.rst
+++ b/test_plans/loopback_multi_queues_test_plan.rst
@@ -30,37 +30,33 @@
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
OF THE POSSIBILITY OF SUCH DAMAGE.
-=====================================================
-vhost/virtio-user loopback with multi-queue test plan
-=====================================================
+======================================================
+vhost/virtio-user loopback with multi-queues test plan
+======================================================
-Description
-===========
+This test plan tests vhost/virtio-user loopback multi-queues with different split virtqueue and packed virtqueue rx/tx paths: the split virtqueue in-order mergeable, in-order non-mergeable, mergeable, non-mergeable and vector_rx paths, and the packed virtqueue in-order mergeable, in-order non-mergeable, mergeable and non-mergeable paths. Virtio-user supports 8 queues at maximum; check that performance grows linearly when 8 queues and 8 cores are enabled, and note that the cores should be in the same socket.
-Benchmark vhost/virtio-user loopback multi-queues test with 8 rx/tx paths, virtio-user support 8 queues in maximum.
-Includes mergeable, normal, vector_rx, inorder mergeable,
-inorder no-mergeable, virtio 1.1 mergeable, virtio 1.1 inorder, virtio 1.1 normal path.
+Test Case 1: loopback with virtio 1.1 mergeable path using 1 queue and 8 queues
+===============================================================================
-Test Case 1: loopback 2 queues test with virtio 1.1 mergeable path
-==================================================================
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --no-pci \
- --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
- >set fwd mac
- >start
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
-3. Send packets with vhost-testpmd,[frame_size] is the parameter changs in [64, 128, 256, 512, 1024, 1518]::
+3. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 1518]::
testpmd>set txpkts [frame_size]
testpmd>start tx_first 32
@@ -69,29 +65,56 @@ Test Case 1: loopback 2 queues test with virtio 1.1 mergeable path
testpmd>show port stats all
-5. Check each queue's RX/TX packet numbers::
+5. Check each RX/TX queue has packets, then quit testpmd::
testpmd>stop
+ testpmd>quit
+
+6. Launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+7. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
-Test Case 2: loopback 2 queues test with virtio 1.1 normal path
-===============================================================
+8. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
+
+9. Get the throughput 10 times and calculate the average throughput; check that the throughput of 8 queues is eight times that of 1 queue::
+
+ testpmd>show port stats all
+
+Test Case 2: loopback with virtio 1.1 non-mergeable path using 1 queue and 8 queues
+===================================================================================
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --no-pci \
- --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
- >set fwd mac
- >start
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
3. Send packets with vhost-testpmd,[frame_size] is the parameter changs in [64, 128, 256, 512, 1024, 1518]::
@@ -102,29 +125,56 @@ Test Case 2: loopback 2 queues test with virtio 1.1 normal path
testpmd>show port stats all
-5. Check each queue's RX/TX packet numbers::
+5. Check each RX/TX queue has packets, then quit testpmd::
testpmd>stop
+ testpmd>quit
+
+6. Launch testpmd by below command::
-Test Case 3: loopback 2 queues test with virtio 1.1 inorder path
-================================================================
+ rm -rf vhost-net*
+ ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+7. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+8. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 128, 256, 512, 1024, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
+
+9. Get the throughput 10 times and calculate the average throughput; check that the throughput of 8 queues is eight times that of 1 queue::
+
+ testpmd>show port stats all
+
+Test Case 3: loopback with virtio 1.0 inorder mergeable path using 1 queue and 8 queues
+=======================================================================================
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --no-pci \
- --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
- >set fwd mac
- >start
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=1 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
3. Send packets with vhost-testpmd,[frame_size] is the parameter changs in [64, 128, 256, 512, 1024, 1518]::
@@ -135,29 +185,56 @@ Test Case 3: loopback 2 queues test with virtio 1.1 inorder path
testpmd>show port stats all
-5. Check each queue's RX/TX packet numbers::
+5. Check each RX/TX queue has packets, then quit testpmd::
testpmd>stop
+ testpmd>quit
+
+6. Launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+7. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=1 \
+ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
-Test Case 4: loopback 2 queues test with inorder mergeable path
-===============================================================
+8. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 128, 256, 512, 1024, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
+
+9. Get the throughput 10 times and calculate the average throughput; check that the throughput of 8 queues is eight times that of 1 queue::
+
+ testpmd>show port stats all
+
+Test Case 4: loopback with virtio 1.0 inorder non-mergeable path using 1 queue and 8 queues
+===========================================================================================
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --no-pci \
- --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=1 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
- >set fwd mac
- >start
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=1 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
3. Send packets with vhost-testpmd,[frame_size] is the parameter changs in [64, 128, 256, 512, 1024, 1518]::
@@ -168,29 +245,56 @@ Test Case 4: loopback 2 queues test with inorder mergeable path
testpmd>show port stats all
-5. Check each queue's RX/TX packet numbers::
+5. Check each RX/TX queue has packets, then quit testpmd::
testpmd>stop
+ testpmd>quit
+
+6. Launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+7. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=1 \
+ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+8. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 128, 256, 512, 1024, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
+
+9. Get the throughput 10 times and calculate the average throughput; check that the throughput of 8 queues is eight times that of 1 queue::
-Test Case 5: loopback 2 queues test with inorder no-mergeable path
-==================================================================
+ testpmd>show port stats all
+
+Test Case 5: loopback with virtio 1.0 mergeable path using 1 queue and 8 queues
+===============================================================================
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --no-pci \
- --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=0 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
- >set fwd mac
- >start
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=0 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
3. Send packets with vhost-testpmd,[frame_size] is the parameter changs in [64, 128, 256, 512, 1024, 1518]::
@@ -201,29 +305,56 @@ Test Case 5: loopback 2 queues test with inorder no-mergeable path
testpmd>show port stats all
-5. Check each queue's RX/TX packet numbers::
+5. Check each RX/TX queue has packets, then quit testpmd::
testpmd>stop
+ testpmd>quit
+
+6. Launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+7. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=0 \
+ -- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+8. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 128, 256, 512, 1024, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
-Test Case 6: loopback 8 queues test with mergeable path
-=======================================================
+9. Get the throughput 10 times and calculate the average throughput; check that the throughput of 8 queues is eight times that of 1 queue::
+
+ testpmd>show port stats all
+
+Test Case 6: loopback with virtio 1.0 non-mergeable path using 1 queue and 8 queues
+===================================================================================
1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -l 2-6 -n 4 --socket-mem 1024,1024 --no-pci \
- --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 7-11 --socket-mem 1024,1024 \
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,in_order=0,mrg_rxbuf=1 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=4--rxq=8 --txq=8 --txd=1024 --rxd=1024
- >set fwd mac
- >start
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0 \
+ -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
3. Send packets with vhost-testpmd,[frame_size] is the parameter changs in [64, 128, 256, 512, 1024, 1518]::
@@ -234,30 +365,56 @@ Test Case 6: loopback 8 queues test with mergeable path
testpmd>show port stats all
-5. Check each queue's RX/TX packet numbers::
+5. Check each RX/TX queue has packets, then quit testpmd::
testpmd>stop
+ testpmd>quit
+
+6. Launch testpmd by below command::
+ rm -rf vhost-net*
+ ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+7. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0 \
+ -- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+8. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 128, 256, 512, 1024, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
-Test Case 7: loopback 8 queues test with normal path
-====================================================
+9. Get the throughput 10 times and calculate the average throughput; check that the throughput of 8 queues is eight times that of 1 queue::
-1. Launch vhost by below command::
+ testpmd>show port stats all
+
+Test Case 7: loopback with virtio 1.0 vector_rx path using 1 queue and 8 queues
+===============================================================================
+
+1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-6 --socket-mem 1024,1024 --legacy-mem --no-pci \
- --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=8' -- \
- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
- >set fwd mac
+ ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 7-11 --socket-mem 1024,1024 \
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,in_order=0,mrg_rxbuf=0 \
- -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
- >set fwd mac
- >start
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
3. Send packets with vhost-testpmd,[frame_size] is the parameter changs in [64, 128, 256, 512, 1024, 1518]::
@@ -268,29 +425,116 @@ Test Case 7: loopback 8 queues test with normal path
testpmd>show port stats all
-5. Check each queue's RX/TX packet numbers::
+5. Check each RX/TX queue has packets, then quit testpmd::
testpmd>stop
+ testpmd>quit
-Test Case 8: loopback 8 queues test with vector_rx path
-=======================================================
+6. Launch testpmd by below command::
-1. Launch vhost by below command::
+ rm -rf vhost-net*
+ ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+7. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0 \
+ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+8. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 128, 256, 512, 1024, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
+
+9. Get the throughput 10 times and calculate the average throughput; check that the throughput of 8 queues is eight times that of 1 queue::
+
+ testpmd>show port stats all
+
+Test Case 8: loopback with virtio 1.1 inorder mergeable path using 1 queue and 8 queues
+=======================================================================================
+
+1. Launch testpmd by below command::
rm -rf vhost-net*
- ./testpmd -n 4 -l 2-6 --socket-mem 1024,1024 --legacy-mem --no-pci \
- --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=8' -- \
- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
- >set fwd mac
+ ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
2. Launch virtio-user by below command::
- ./testpmd -n 4 -l 7-11 --socket-mem 1024,1024 \
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,in_order=0,mrg_rxbuf=0 \
- -- -i --tx-offloads=0x0 --rss-ip --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
- >set fwd mac
- >start
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+3. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
+
+4. Get throughput 10 times and calculate the average throughput::
+
+ testpmd>show port stats all
+
+5. Check each RX/TX queue has packets, then quit testpmd::
+
+ testpmd>stop
+ testpmd>quit
+
+6. Launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+7. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+8. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
+
+9. Get the throughput 10 times and calculate the average throughput; check that the throughput of 8 queues is eight times that of 1 queue::
+
+ testpmd>show port stats all
+
+Test Case 9: loopback with virtio 1.1 inorder non-mergeable path using 1 queue and 8 queues
+===========================================================================================
+
+1. Launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+ -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+2. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
3. Send packets with vhost-testpmd,[frame_size] is the parameter changs in [64, 128, 256, 512, 1024, 1518]::
@@ -301,6 +545,33 @@ Test Case 8: loopback 8 queues test with vector_rx path
testpmd>show port stats all
-5. Check each queue's RX/TX packet numbers::
+5. Check each RX/TX queue has packets, then quit testpmd::
testpmd>stop
+ testpmd>quit
+
+6. Launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
+ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+
+7. Launch virtio-user by below command::
+
+ ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
+ --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+8. Send packets with vhost-testpmd, [frame_size] is the parameter that changes in [64, 128, 256, 512, 1024, 1518]::
+
+ testpmd>set txpkts [frame_size]
+ testpmd>start tx_first 32
+
+9. Get the throughput 10 times and calculate the average throughput; check that the throughput of 8 queues is eight times that of 1 queue::
+
+ testpmd>show port stats all
\ No newline at end of file
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
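Step 9 in each case above expects the 8-queue throughput to be eight times the 1-queue throughput. Below is a minimal shell sketch of that comparison; the helper name, the example Mpps figures and the 10% margin are assumptions for illustration, the real numbers come from averaging the "show port stats all" readings::

    #!/bin/sh
    # Hedged sketch: compare averaged 1-queue vs 8-queue throughput (in Mpps)
    # and check for ~8x scaling within an assumed 10% margin.
    check_linear_scaling() {
        one_q=$1      # averaged Mpps measured with 1 queue
        eight_q=$2    # averaged Mpps measured with 8 queues
        lower=$(echo "$one_q * 8 * 0.9" | bc -l)
        if [ "$(echo "$eight_q >= $lower" | bc -l)" -eq 1 ]; then
            echo "PASS: 8 queues ($eight_q Mpps) scales ~8x vs 1 queue ($one_q Mpps)"
        else
            echo "FAIL: 8 queues ($eight_q Mpps) is below 8x of 1 queue ($one_q Mpps)"
        fi
    }

    # hypothetical example values, not measured results:
    check_linear_scaling 7.2 56.5
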
* [dts] [PATCH 03/11 v1] test_plans: add packed ring test case for pvp_virtio_user_2M_hugepages
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
2020-02-28 6:09 ` [dts] [PATCH 01/11 v1] test_plans: add packed ring cases for loopback_multi_paths_port_restart Yinan
2020-02-28 6:09 ` [dts] [PATCH 02/11 v1] test_plans: add packed ring cases for loopback_multi_queues Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 04/11 v1] test_plans: add packed ring cases for pvp_virtio_user_4k_pages Yinan
` (8 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
...pvp_virtio_user_2M_hugepages_test_plan.rst | 24 +++++++++++++++++--
1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
index 8a8355f..6a80b89 100644
--- a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
+++ b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
@@ -39,8 +39,8 @@ Description
Before 18.05, virtio-user can only work 1G hugepage. After 18.05, more hugepage pages can be represented by single fd (file descriptor)file, so virtio-user can work with 2M hugepage now. The key parameter is "--single-file-segments" when launch virtio-user.
-Test Case1: Basic test for virtio-user 2M hugepage
-===================================================
+Test Case1: Basic test for virtio-user split ring 2M hugepage
+==============================================================
1. Before the test, plese make sure only 2M hugepage are mounted in host.
@@ -55,6 +55,26 @@ Test Case1: Basic test for virtio-user 2M hugepage
--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,queues=1 -- -i
+3. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+ testpmd>show port stats all
+
+Test Case2: Basic test for virtio-user packed ring 2M hugepage
+===============================================================
+
+1. Before the test, please make sure only 2M hugepages are mounted on the host.
+
+2. Bind one port to igb_uio, launch vhost::
+
+ ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --file-prefix=vhost \
+ --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
+
+3. Launch virtio-user with 2M hugepage::
+
+ ./testpmd -l 5-6 -n 4 --no-pci --socket-mem 1024,1024 --single-file-segments --file-prefix=virtio-user \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,packed_vq=1,queues=1 -- -i
+
+
3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
testpmd>show port stats all
\ No newline at end of file
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
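Step 1 of both cases above requires that only 2M hugepages are mounted on the host. A minimal sketch of preparing and checking that precondition is below; the mount point and page counts are assumptions, only the --single-file-segments requirement comes from the test plan itself::

    # reserve 2M hugepages and release any 1G pages (counts are illustrative)
    echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

    # mount a hugetlbfs backed by 2M pages (mount point is an assumption)
    mkdir -p /mnt/huge
    mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge

    # sanity checks before launching vhost/virtio-user
    grep Huge /proc/meminfo          # HugePages_Total should reflect the 2M pool
    grep hugetlbfs /proc/mounts      # only the 2M-page mount should be present
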
* [dts] [PATCH 04/11 v1] test_plans: add packed ring cases for pvp_virtio_user_4k_pages
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
` (2 preceding siblings ...)
2020-02-28 6:09 ` [dts] [PATCH 03/11 v1] test_plans: add packed ring test case for pvp_virtio_user_2M_hugepages Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 05/11 v1] test_plans: add packed ring test case for vhost_enqueue_interrupt Yinan
` (7 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../pvp_virtio_user_4k_pages_test_plan.rst | 30 +++++++++++++++++--
1 file changed, 28 insertions(+), 2 deletions(-)
diff --git a/test_plans/pvp_virtio_user_4k_pages_test_plan.rst b/test_plans/pvp_virtio_user_4k_pages_test_plan.rst
index a7ba2a0..ea3dcc8 100644
--- a/test_plans/pvp_virtio_user_4k_pages_test_plan.rst
+++ b/test_plans/pvp_virtio_user_4k_pages_test_plan.rst
@@ -40,8 +40,8 @@ Prerequisites
-------------
Turn off transparent hugepage in grub by adding GRUB_CMDLINE_LINUX="transparent_hugepage=never"
-Test Case1: Basic test vhost/virtio-user with 4K-pages
-======================================================
+Test Case1: Basic test vhost/virtio-user split ring with 4K-pages
+=================================================================
1. Bind one port to vfio-pci, launch vhost::
@@ -62,6 +62,32 @@ Test Case1: Basic test vhost/virtio-user with 4K-pages
--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1 -- -i
testpmd>start
+4. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+ testpmd>show port stats all
+
+Test Case2: Basic test vhost/virtio-user packed ring with 4K-pages
+==================================================================
+
+1. Bind one port to vfio-pci, launch vhost::
+
+ modprobe vfio-pci
+ ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
+ ./testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
+ --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1' -- -i --no-numa --socket-num=0
+ testpmd>start
+
+2. Prepare tmpfs with 4K-pages::
+
+ mkdir /mnt/tmpfs_yinan
+ mount tmpfs /mnt/tmpfs_yinan -t tmpfs -o size=4G
+
+3. Launch virtio-user with 4K-pages::
+
+ ./testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
+ testpmd>start
+
4. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
testpmd>show port stats all
\ No newline at end of file
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
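Both cases above launch vhost and virtio-user with --no-huge -m 1024, so the point of the test is that DPDK runs on 4K pages only. A minimal sketch of the related sanity checks is below; the tmpfs path is taken from step 2, the rest is an assumption::

    # transparent hugepages were disabled via GRUB (transparent_hugepage=never);
    # the active setting can be confirmed at runtime:
    cat /sys/kernel/mm/transparent_hugepage/enabled   # expect "[never]"

    # confirm the 4K-backed tmpfs from step 2 is mounted
    mount | grep /mnt/tmpfs_yinan

    # with --no-huge, DPDK does not need any reserved hugepages
    grep -E 'HugePages_(Total|Free)' /proc/meminfo
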
* [dts] [PATCH 05/11 v1] test_plans: add packed ring test case for vhost_enqueue_interrupt
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
` (3 preceding siblings ...)
2020-02-28 6:09 ` [dts] [PATCH 04/11 v1] test_plans: add packed ring cases for pvp_virtio_user_4k_pages Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 06/11 v1] test_plans: add packed ring test cases for vhost_event_idx_interrupt Yinan
` (6 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../vhost_enqueue_interrupt_test_plan.rst | 53 +++++++++++++++++--
1 file changed, 48 insertions(+), 5 deletions(-)
diff --git a/test_plans/vhost_enqueue_interrupt_test_plan.rst b/test_plans/vhost_enqueue_interrupt_test_plan.rst
index c20bd61..b4bc21f 100644
--- a/test_plans/vhost_enqueue_interrupt_test_plan.rst
+++ b/test_plans/vhost_enqueue_interrupt_test_plan.rst
@@ -38,15 +38,17 @@ Description
===========
Vhost enqueue interrupt need test with l3fwd-power sample, small packets send from virtio-user to vhost side,
-check vhost-user cores can be wakeup,and vhost-user cores should be back to sleep after stop sending packets from virtio side.
+check that vhost-user cores can be woken up, and that vhost-user cores go back to sleep after packets stop being sent
+from the virtio side.
+
Test flow
=========
Virtio-user --> Vhost-user
-Test Case1: Wake up vhost-user core with l3fwd-power sample
-============================================================
+Test Case1: Wake up split ring vhost-user core with l3fwd-power sample
+======================================================================
1. Launch virtio-user with server mode::
@@ -65,8 +67,8 @@ Test Case1: Wake up vhost-user core with l3fwd-power sample
4. Stop and restart testpmd again, check vhost-user core will sleep and wakeup again.
-Test Case2: Wake up vhost-user cores with l3fwd-power sample when multi queues are enabled
-===========================================================================================
+Test Case2: Wake up split ring vhost-user cores with l3fwd-power sample when multi queues are enabled
+=====================================================================================================
1. Launch virtio-user with server mode::
@@ -84,4 +86,45 @@ Test Case2: Wake up vhost-user cores with l3fwd-power sample when multi queues
testpmd>set fwd txonly
testpmd>start
+4. Stop and restart testpmd again, check that the vhost-user cores will sleep and wake up again.
+
+Test Case3: Wake up packed ring vhost-user core with l3fwd-power sample
+=======================================================================
+
+1. Launch virtio-user with server mode::
+
+ ./testpmd -l 7-8 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=1 -- -i
+
+2. Build l3fwd-power sample and launch l3fwd-power with a virtual vhost device::
+
+ ./l3fwd-power -l 0-3 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci \
+ --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1,packed_vq=1,client=1' -- -p 0x1 --parse-ptype 1 --config "(0,0,2)"
+
+3. Send packet by testpmd, check vhost-user core will keep wakeup status::
+
+ testpmd>set fwd txonly
+ testpmd>start
+
+4. Stop and restart testpmd again, check that the vhost-user core will sleep and wake up again.
+
+Test Case4: Wake up packed ring vhost-user cores with l3fwd-power sample when multi queues are enabled
+=======================================================================================================
+
+1. Launch virtio-user with server mode::
+
+ ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4 -- -i --rxq=4 --txq=4 --rss-ip
+
+2. Build l3fwd-power sample and launch l3fwd-power with a virtual vhost device::
+
+ ./l3fwd-power -l 9-12 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --log-level=9 \
+ --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,packed_vq=1,client=1' -- -p 0x1 --parse-ptype 1 \
+ --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)"
+
+3. Send packet by testpmd, check vhost-user multi-cores will keep wakeup status::
+
+ testpmd>set fwd txonly
+ testpmd>start
+
4. Stop and restart testpmd again, check vhost-user cores will sleep and wakeup again.
\ No newline at end of file
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
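The pass criteria above are that the vhost-user cores of l3fwd-power wake up while testpmd sends in txonly mode and go back to sleep once it stops. A minimal sketch of observing that from another terminal is below; it only uses standard tools and assumes a single l3fwd-power instance is running::

    # watch per-thread CPU usage of l3fwd-power: the lcore threads should sit
    # near 0% while idle and rise close to 100% once testpmd starts sending
    top -H -p "$(pidof l3fwd-power)"

    # or sample it non-interactively once per second (sysstat package)
    pidstat -t -p "$(pidof l3fwd-power)" 1
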
* [dts] [PATCH 06/11 v1] test_plans: add packed ring test cases for vhost_event_idx_interrupt
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
` (4 preceding siblings ...)
2020-02-28 6:09 ` [dts] [PATCH 05/11 v1] test_plans: add packed ring test case for vhost_enqueue_interrupt Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 07/11 v1] test_plans: add packed ring test cases for vhost_user_live_migration Yinan
` (5 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../vhost_event_idx_interrupt_test_plan.rst | 182 +++++++++++++++++-
1 file changed, 175 insertions(+), 7 deletions(-)
diff --git a/test_plans/vhost_event_idx_interrupt_test_plan.rst b/test_plans/vhost_event_idx_interrupt_test_plan.rst
index e785755..28f079d 100644
--- a/test_plans/vhost_event_idx_interrupt_test_plan.rst
+++ b/test_plans/vhost_event_idx_interrupt_test_plan.rst
@@ -37,15 +37,18 @@ vhost event idx interrupt mode test plan
Description
===========
-Vhost event idx interrupt need test with l3fwd-power sample, send small packets from virtio-net to vhost side,check vhost-user cores can be wakeup status,and vhost-user cores should be sleep status after stop sending packets from virtio side.
+Vhost event idx interrupt needs to be tested with the l3fwd-power sample: send small packets
+from virtio-net to the vhost side, check that vhost-user cores are in wakeup status, and
+that vhost-user cores go back to sleep status after packets stop being sent from the virtio
+side. For the packed virtqueue tests, qemu version > 4.2.0 is required.
Test flow
=========
Virtio-net --> Vhost-user
-Test Case 1: wake up vhost-user core with event idx interrupt mode
-==================================================================
+Test Case 1: wake up split ring vhost-user core with event idx interrupt mode
+=============================================================================
1. Launch l3fwd-power example app with client mode::
@@ -86,8 +89,8 @@ Test Case 1: wake up vhost-user core with event idx interrupt mode
5. Check vhost related core is waked up by reading l3fwd-power log.
-Test Case 2: wake up vhost-user cores with event idx interrupt mode 16 queues test
-==================================================================================
+Test Case 2: wake up split ring vhost-user cores with event idx interrupt mode 16 queues test
+=============================================================================================
1. Launch l3fwd-power example app with client mode::
@@ -150,8 +153,8 @@ Test Case 2: wake up vhost-user cores with event idx interrupt mode 16 queues te
...
L3FWD_POWER: lcore 15 is waked up from rx interrupt on port 0 queue 15
-Test Case 3: wake up vhost-user cores by multi virtio-net in VMs with event idx interrupt mode
-==============================================================================================
+Test Case 3: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode
+=========================================================================================================
1. Launch l3fwd-power example app with client mode::
@@ -208,3 +211,168 @@ Test Case 3: wake up vhost-user cores by multi virtio-net in VMs with event idx
#send packets to vhost
6. Check vhost related cores are waked up with l3fwd-power log.
+
+Test Case 4: wake up packed ring vhost-user core with event idx interrupt mode
+==============================================================================
+
+1. Launch l3fwd-power example app with client mode::
+
+ ./l3fwd-power -l 1 \
+ -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci\
+ --log-level=9 \
+ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \
+ -- -p 0x1 \
+ --parse-ptype 1 \
+ --config "(0,0,1)"
+
+2. Launch VM1 with server mode::
+
+ taskset -c 33 \
+ qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+ -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,server,id=char0,path=/vhost-net0 \
+ -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,packed=on -vnc :10 -daemonize
+
+3. Relaunch the l3fwd-power sample to bring the port up::
+
+ ./l3fwd-power -l 0-3 \
+ -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci\
+ --log-level=9 \
+ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \
+ -- -p 0x1 \
+ --parse-ptype 1 \
+ --config "(0,0,1)"
+
+4. On the VM, set the IP for the virtio device and send packets to vhost with the below commands::
+
+ ifconfig [ens3] 1.1.1.2
+ #[ens3] is the virtual device name
+ ping 1.1.1.3
+ #send packets to vhost
+
+5. Check that the vhost-related core is woken up by reading the l3fwd-power log.
+
+Test Case 5: wake up packed ring vhost-user cores with event idx interrupt mode 16 queues test
+==============================================================================================
+
+1. Launch l3fwd-power example app with client mode::
+
+ ./l3fwd-power -l 1-16 \
+ -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci\
+ --log-level=9 \
+ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1' \
+ -- -p 0x1 \
+ --parse-ptype 1 \
+ --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
+
+2. Launch VM1 with server mode::
+
+ taskset -c 17-18 \
+ qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+ -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,server,id=char0,path=/vhost-net0 \
+ -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,mq=on,packed=on,vectors=40 -vnc :10 -daemonize
+
+3. Relaunch the l3fwd-power sample to bring the port up::
+
+ ./l3fwd-power -l 1-16 \
+ -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci\
+ --log-level=9 \
+ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1' \
+ -- -p 0x1 \
+ --parse-ptype 1 \
+ --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
+
+4. Set virtio-net with 16 queues and give virtio-net an IP address::
+
+ ethtool -L [ens3] combined 16 # [ens3] is the name of virtio-net
+ ifconfig [ens3] 1.1.1.1
+
+5. Send packets with different IPs from virtio-net; note that each vcpu should be bound to a different packet sending process::
+
+ taskset -c 0 ping 1.1.1.2
+ taskset -c 1 ping 1.1.1.3
+ taskset -c 2 ping 1.1.1.4
+ taskset -c 3 ping 1.1.1.5
+ taskset -c 4 ping 1.1.1.6
+ taskset -c 5 ping 1.1.1.7
+ taskset -c 6 ping 1.1.1.8
+ taskset -c 7 ping 1.1.1.9
+ taskset -c 8 ping 1.1.1.2
+ taskset -c 9 ping 1.1.1.2
+ taskset -c 10 ping 1.1.1.2
+ taskset -c 11 ping 1.1.1.2
+ taskset -c 12 ping 1.1.1.2
+ taskset -c 13 ping 1.1.1.2
+ taskset -c 14 ping 1.1.1.2
+ taskset -c 15 ping 1.1.1.2
+
+6. Check that the vhost-related cores are woken up in the l3fwd-power log, such as the following::
+
+ L3FWD_POWER: lcore 0 is waked up from rx interrupt on port 0 queue 0
+ ...
+ ...
+ L3FWD_POWER: lcore 15 is waked up from rx interrupt on port 0 queue 15
+
+Test Case 6: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode
+==========================================================================================================
+
+1. Launch l3fwd-power example app with client mode::
+
+ ./l3fwd-power -l 1-2 \
+ -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci\
+ --log-level=9 \
+ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \
+ --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1' \
+ -- -p 0x3 \
+ --parse-ptype 1 \
+ --config "(0,0,1),(1,0,2)"
+
+2. Launch VM1 and VM2 with server mode::
+
+ taskset -c 33 \
+ qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+ -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,server,id=char0,path=/vhost-net0 \
+ -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,packed=on -vnc :10 -daemonize
+
+ taskset -c 34 \
+ qemu-system-x86_64 -name us-vhost-vm2 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,server,id=char0,path=/vhost-net1 \
+ -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,csum=on,packed=on -vnc :11 -daemonize
+
+3. Relaunch the l3fwd-power sample to bring the port up::
+
+ ./l3fwd-power -l 0-3 \
+ -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci\
+ --log-level=9 \
+ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \
+ --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1' \
+ -- -p 0x3 \
+ --parse-ptype 1 \
+ --config "(0,0,1),(1,0,2)"
+
+4. On VM1, set ip for virtio device and send packets to vhost::
+
+ ifconfig [ens3] 1.1.1.2
+ #[ens3] is the virtual device name
+ ping 1.1.1.3
+ #send packets to vhost
+
+5. On VM2, also set ip for virtio device and send packets to vhost::
+
+ ifconfig [ens3] 1.1.1.4
+ #[ens3] is the virtual device name
+ ping 1.1.1.5
+ #send packets to vhost
+
+6. Check that the vhost-related cores are woken up in the l3fwd-power log.
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
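For the 16-queue cases above, step 6 expects a wake-up message for every lcore in the l3fwd-power output. A minimal sketch of that check is below; it assumes the l3fwd-power output was redirected to a file named l3fwd-power.log, which is not part of the test plan itself::

    # list the distinct wake-up messages; with 16 queues there should be 16
    # different "lcore N ... queue M" lines
    grep "is waked up from rx interrupt" l3fwd-power.log | sort -u

    # total number of wake-up events seen so far
    grep -c "is waked up from rx interrupt" l3fwd-power.log
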
* [dts] [PATCH 07/11 v1] test_plans: add packed ring test cases for vhost_user_live_migration
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
` (5 preceding siblings ...)
2020-02-28 6:09 ` [dts] [PATCH 06/11 v1] test_plans: add packed ring test cases for vhost_event_idx_interrupt Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 08/11 v1] test_plans: add packed ring test cases for vhost_virtio_pmd_interrupt Yinan
` (4 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../vhost_user_live_migration_test_plan.rst | 398 +++++++++++++++++-
1 file changed, 390 insertions(+), 8 deletions(-)
diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
index ec32e82..2626f7a 100644
--- a/test_plans/vhost_user_live_migration_test_plan.rst
+++ b/test_plans/vhost_user_live_migration_test_plan.rst
@@ -35,6 +35,7 @@ Vhost User Live Migration Tests
===============================
This feature is to make sure vhost user live migration works based on testpmd.
+For the packed virtqueue tests, qemu version > 4.2.0 is required.
Prerequisites
-------------
@@ -63,8 +64,8 @@ NFS configuration
backup# mount -t nfs -o nolock,vers=4 host-ip:/home/osimg/live_mig /mnt/nfs
-Test Case 1: migrate with virtio-pmd
-====================================
+Test Case 1: migrate with split ring virtio-pmd
+===============================================
On host server side:
@@ -163,8 +164,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:
backup server # ssh -p 5555 127.0.0.1
backup VM # screen -r vm
-Test Case 2: migrate with virtio-pmd zero-copy enabled
-======================================================
+Test Case 2: migrate with split ring virtio-pmd zero-copy enabled
+=================================================================
On host server side:
@@ -263,8 +264,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:
backup server # ssh -p 5555 127.0.0.1
backup VM # screen -r vm
-Test Case 3: migrate with virtio-net
-====================================
+Test Case 3: migrate with split ring virtio-net
+===============================================
On host server side:
@@ -351,8 +352,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:
backup server # ssh -p 5555 127.0.0.1
backup VM # screen -r vm
-Test Case 4: adjust virtio-net queue numbers while migrating with virtio-net
-============================================================================
+Test Case 4: adjust split ring virtio-net queue numbers while migrating with virtio-net
+=======================================================================================
On host server side:
@@ -442,3 +443,384 @@ On the backup server, run the vhost testpmd on the host and launch VM:
backup server # ssh -p 5555 127.0.0.1
backup VM # screen -r vm
+
+Test Case 5: migrate with packed ring virtio-pmd
+================================================
+
+On host server side:
+
+1. Create enough hugepages for testpmd and qemu backend memory::
+
+ host server# mkdir /mnt/huge
+ host server# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+2. Bind host port to igb_uio and start testpmd with vhost port::
+
+ host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+ host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+ host server# testpmd>start
+
+3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -vnc :10 -daemonize
+
+On the backup server, run the vhost testpmd on the host and launch VM:
+
+4. Set up huge pages, bind one port to igb_uio and run testpmd on the backup server; the commands are very similar to those on the host::
+
+ backup server # mkdir /mnt/huge
+ backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
+ backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+ backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+ backup server # testpmd>start
+
+5. Launch the VM on the backup server. The command is similar to the one on the host, but needs the extra " -incoming tcp:0:4444 " option for live migration. Make sure the VM image is on the NFS mounted folder, i.e. exactly the same image used on the host server::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -incoming tcp:0:4444 \
+ -vnc :10 -daemonize
+
+6. SSH to host VM and scp the DPDK folder from host to VM::
+
+ host server# ssh -p 5555 127.0.0.1
+ host server# scp -P 5555 -r <dpdk_folder>/ 127.0.0.1:/root
+
+7. Run testpmd in VM::
+
+ host VM# cd /root/<dpdk_folder>
+ host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
+ host VM# modprobe uio
+ host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+ host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
+ host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+ host VM# screen -S vm
+ host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
+ host VM# testpmd>set fwd rxonly
+ host VM# testpmd>set verbose 1
+ host VM# testpmd>start
+
+8. Send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) from the tester port::
+
+ tester# scapy
+ tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+ tester# sendp(p, iface="p5p1", inter=1, loop=1)
+
+9. Check that virtio-pmd can receive the packet, then detach the screen session so it can be re-attached on the backup server::
+
+ host VM# testpmd>port 0/queue 0: received 1 packets
+ host VM# ctrl+a+d
+
+10. Start Live migration, ensure the traffic is continuous::
+
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:backup server:4444
+ host server # (qemu)info migrate
+ host server # Check if the migrate is active and not failed.
+
+11. Query the migration status in the monitor; once the status shows "completed", the migration is done::
+
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
+
+12. After live migration, go to the backup server and check if the virtio-pmd can continue to receive packets::
+
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
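+
+As an additional check inside the backup VM, the RX counters reported by testpmd should keep increasing while the tester is still sending, for example::
+
+ backup VM # testpmd>show port stats all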
+
+Test Case 6: migrate with packed ring virtio-pmd zero-copy enabled
+==================================================================
+
+On host server side:
+
+1. Create enough hugepages for testpmd and qemu backend memory::
+
+ host server# mkdir /mnt/huge
+ host server# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+2. Bind the host port to igb_uio and start testpmd with the vhost port; note: do not start the vhost port before launching qemu::
+
+ host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+ host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
+
+3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -vnc :10 -daemonize
+
+On the backup server, run the vhost testpmd on the host and launch VM:
+
+4. Set up huge pages, bind one port to igb_uio and run testpmd on the backup server; the commands are very similar to those on the host::
+
+ backup server # mkdir /mnt/huge
+ backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
+ backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+ backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
+
+5. Launch the VM on the backup server. The command is similar to the one on the host, but needs the extra " -incoming tcp:0:4444 " option for live migration. Make sure the VM image is on the NFS mounted folder, i.e. exactly the same image used on the host server::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -incoming tcp:0:4444 \
+ -vnc :10 -daemonize
+
+6. SSH to host VM and scp the DPDK folder from host to VM::
+
+ host server# ssh -p 5555 127.0.0.1
+ host server# scp -P 5555 -r <dpdk_folder>/ 127.0.0.1:/root
+
+7. Run testpmd in VM::
+
+ host VM# cd /root/<dpdk_folder>
+ host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
+ host VM# modprobe uio
+ host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+ host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
+ host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+ host VM# screen -S vm
+ host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
+ host VM# testpmd>set fwd rxonly
+ host VM# testpmd>set verbose 1
+ host VM# testpmd>start
+
+8. Start the vhost testpmd on the host and send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) from the tester port::
+
+ host# testpmd>start
+ tester# scapy
+ tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+ tester# sendp(p, iface="p5p1", inter=1, loop=1)
+
+9. Check that virtio-pmd can receive packets, then detach the screen session so it can be re-attached on the backup server::
+
+ host VM# testpmd>port 0/queue 0: received 1 packets
+ host VM# ctrl+a+d
+
+10. Start Live migration, ensure the traffic is continuous::
+
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:backup server:4444
+ host server # (qemu)info migrate
+ host server # Check if the migrate is active and not failed.
+
+11. Query the migration status in the monitor; once the status shows "completed", the migration is done::
+
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
+
+12. After live migration, go to the backup server, start the vhost testpmd and check if virtio-pmd can continue to receive packets::
+
+ backup server # testpmd>start
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
+
+Test Case 7: migrate with packed ring virtio-net
+================================================
+
+On host server side:
+
+1. Create enough hugepages for testpmd and qemu backend memory::
+
+ host server# mkdir /mnt/huge
+ host server# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+2. Bind host port to igb_uio and start testpmd with vhost port::
+
+ host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+ host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+ host server# testpmd>start
+
+3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -vnc :10 -daemonize
+
+On the backup server, run the vhost testpmd on the host and launch VM:
+
+4. Set up huge pages, bind one port to igb_uio and run testpmd on the backup server; the commands are very similar to those on the host::
+
+ backup server # mkdir /mnt/huge
+ backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
+ backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+ backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+ backup server # testpmd>start
+
+5. Launch the VM on the backup server. The command is similar to the one on the host, but needs the extra " -incoming tcp:0:4444 " option for live migration. Make sure the VM image is on the NFS mounted folder, i.e. exactly the same image used on the host server::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -incoming tcp:0:4444 \
+ -vnc :10 -daemonize
+
+6. SSH to the host VM and bring the virtio-net link up::
+
+ host server# ssh -p 5555 127.0.0.1
+ host vm # ifconfig eth0 up
+ host VM# screen -S vm
+ host VM# tcpdump -i eth0
+
+7. Send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) from the tester port::
+
+ tester# scapy
+ tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+ tester# sendp(p, iface="p5p1", inter=1, loop=1)
+
+8. Check that virtio-net can receive the packet, then detach the screen session so it can be re-attached on the backup server::
+
+ host VM# testpmd>port 0/queue 0: received 1 packets
+ host VM# ctrl+a+d
+
+9. Start Live migration, ensure the traffic is continuous::
+
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:backup server:4444
+ host server # (qemu)info migrate
+ host server # Check if the migrate is active and not failed.
+
+10. Query the migration status in the monitor; once the status shows "completed", the migration is done::
+
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
+
+11. After live migration, go to the backup server and check if the virtio-net can continue to receive packets::
+
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
+
+Test Case 8: adjust packed ring virtio-net queue numbers while migrating with virtio-net
+=========================================================================================
+
+On host server side:
+
+1. Create enough hugepages for testpmd and qemu backend memory::
+
+ host server# mkdir /mnt/huge
+ host server# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+2. Bind host port to igb_uio and start testpmd with vhost port::
+
+ host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+ host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
+ host server# testpmd>start
+
+3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=4 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,mq=on,vectors=10,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -vnc :10 -daemonize
+
+On the backup server, run the vhost testpmd on the host and launch VM:
+
+4. Set up huge pages, bind one port to igb_uio and run testpmd on the backup server; the commands are very similar to those on the host::
+
+ backup server # mkdir /mnt/huge
+ backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
+ backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+ backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
+ backup server # testpmd>start
+
+5. Launch the VM on the backup server. The command is similar to the one on the host, but needs the extra " -incoming tcp:0:4444 " option for live migration. Make sure the VM image is on the NFS mounted folder, i.e. exactly the same image used on the host server::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=4 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,mq=on,vectors=10,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -incoming tcp:0:4444 \
+ -vnc :10 -daemonize
+
+6. SSH to the host VM and bring the virtio-net link up::
+
+ host server# ssh -p 5555 127.0.0.1
+ host vm # ifconfig eth0 up
+ host VM# screen -S vm
+ host VM# tcpdump -i eth0
+
+7. Send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) from the tester port::
+
+ tester# scapy
+ tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+ tester# sendp(p, iface="p5p1", inter=1, loop=1)
+
+8. Check that virtio-net can receive the packet, then detach the screen session so it can be re-attached on the backup server::
+
+ host VM# testpmd>port 0/queue 0: received 1 packets
+ host VM# ctrl+a+d
+
+9. Start Live migration, ensure the traffic is continuous::
+
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:backup server:4444
+ host server # (qemu)info migrate
+ host server # Check if the migrate is active and not failed.
+
+10. Change virtio-net queue numbers from 1 to 4 while migrating::
+
+ host server # ethtool -L ens3 combined 4
+
+11. Query the migration status in the monitor; once the status shows "completed", the migration is done::
+
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
+
+12. After live migration, go to the backup server and check if the virtio-net can continue to receive packets::
+
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
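+
+The adjusted queue number can also be double-checked inside the backup VM (ens3 is the example interface name used in step 10)::
+
+ backup VM # ethtool -l ens3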
+
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* [dts] [PATCH 08/11 v1] test_plans: add packed ring test cases for vhost_virtio_pmd_interrupt
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
` (6 preceding siblings ...)
2020-02-28 6:09 ` [dts] [PATCH 07/11 v1] test_plans: add packed ring test cases for vhost_user_live_migration Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 09/11 v1] test_plans: add packed ring test cases for vhost_virtio_user_interrupt Yinan
` (3 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../vhost_virtio_pmd_interrupt_test_plan.rst | 42 ++++++++++++++++++-
1 file changed, 40 insertions(+), 2 deletions(-)
diff --git a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
index 03f4d50..389d8d8 100644
--- a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
+++ b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
@@ -34,8 +34,11 @@
vhost/virtio-pmd interrupt mode test plan
=========================================
-Virtio-pmd interrupt need test with l3fwd-power sample, small packets send from traffic generator to virtio-pmd side,check virtio-pmd cores can be wakeup status,and virtio-pmd cores should be sleep status after stop sending packets from traffic generator.
-This test plan cover virtio 0.95 and virtio 1.0 test.
+Virtio-pmd interrupt mode needs to be tested with the l3fwd-power sample: small packets are sent from the traffic generator
+to the virtio-pmd side, then check that the virtio-pmd cores are in wakeup status, and that the virtio-pmd cores go back to
+sleep status after the traffic generator stops sending packets. This test plan covers virtio 0.95,
+virtio 1.0 and virtio 1.1 tests. For the packed virtqueue tests, qemu version > 4.2.0 is required.
+
Prerequisites
=============
@@ -151,3 +154,38 @@ Test Case 3: Basic virtio-1.0 interrupt test with 4 queues
6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
7. Stop the date transmitter, check all related core will be back to sleep status.
+
+Test Case 4: Packed ring virtio interrupt test with 16 queues
+=============================================================
+
+1. Bind one NIC port to igb_uio, then launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -c 0x1ffff -n 4 --socket-mem 1024 1024 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
+
+2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on::
+
+ taskset -c 34-35 \
+ qemu-system-x86_64 -name us-vhost-vm2 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=16,sockets=1 -drive file=/home/osimg/noiommu-ubt16.img \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+ -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=16 \
+ -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=40,packed=on \
+ -vnc :11 -daemonize
+
+3. Bind virtio port to vfio-pci::
+
+ modprobe vfio enable_unsafe_noiommu_mode=1
+ modprobe vfio-pci
+ ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
+
+4. In VM, launch l3fwd-power sample::
+
+ ./l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3),(0,4,4),(0,5,5),(0,6,6),(0,7,7),(0,8,8),(0,9,9),(0,10,10),(0,11,11),(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype
+
+5. Send packets with random destination IP addresses from the packet generator to the host nic; the packets will be distributed to all queues. Check in the l3fwd-power log that all related cores are woken up (an example generator command is sketched below).
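+
+For example, packets with randomized destination IPs can be generated with scapy, re-using the tester MAC address and interface name that appear in the other test plans of this series (RandIP() picks a new random destination each time the packet is built)::
+
+ tester# scapy
+ tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP(dst=RandIP())/UDP()/Raw('x'*20)
+ tester# sendp(p, iface="p5p1", loop=1)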
+
+6. Change the destination IP address to a fixed IP; packets will be distributed to one queue only. Check in the l3fwd-power log that only the related core is woken up.
+
+7. Stop the data transmitter and check that all related cores go back to sleep status.
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* [dts] [PATCH 09/11 v1] test_plans: add packed ring test cases for vhost_virtio_user_interrupt
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
` (7 preceding siblings ...)
2020-02-28 6:09 ` [dts] [PATCH 08/11 v1] test_plans: add packed ring test cases for vhost_virtio_pmd_interrupt Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 10/11 v1] test_plans: add test cases for virtio_event_idx_interrupt Yinan
` (2 subsequent siblings)
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../vhost_virtio_user_interrupt_test_plan.rst | 88 +++++++++++++++++--
1 file changed, 80 insertions(+), 8 deletions(-)
diff --git a/test_plans/vhost_virtio_user_interrupt_test_plan.rst b/test_plans/vhost_virtio_user_interrupt_test_plan.rst
index d948dc7..149d373 100644
--- a/test_plans/vhost_virtio_user_interrupt_test_plan.rst
+++ b/test_plans/vhost_virtio_user_interrupt_test_plan.rst
@@ -34,11 +34,13 @@
vhost/virtio-user interrupt mode test plan
==========================================
-Virtio-user interrupt need test with l3fwd-power sample, small packets send from traffic generator to virtio side, check virtio-user cores can be wakeup status, and virtio-user cores should be sleep status after stop sending packets from traffic generator.
-This test plan cover both vhost-net and vhost-user as the backend.
+Virtio-user interrupt mode needs to be tested with the l3fwd-power sample: small packets are sent from the traffic generator
+to the virtio side, then check that the virtio-user cores are in wakeup status, and that the virtio-user cores go back to sleep
+status after the traffic generator stops sending packets. This test plan covers both vhost-net and
+vhost-user as the backend.
-Test Case1: Virtio-user interrupt test with vhost-user as backend
-=================================================================
+Test Case1: Split ring virtio-user interrupt test with vhost-user as backend
+============================================================================
flow: TG --> NIC --> Vhost --> Virtio
@@ -58,8 +60,8 @@ flow: TG --> NIC --> Vhost --> Virtio
5. Restart sending packets with packet generator, check virtio-user related core change to wakeup status again.
-Test Case2: Virtio-user interrupt test with vhost-net as backend
-===============================================================================
+Test Case2: Split ring virtio-user interrupt test with vhost-net as backend
+===========================================================================
flow: Tap --> Vhost-net --> Virtio
@@ -80,8 +82,8 @@ flow: Tap --> Vhost-net --> Virtio
5. Restart sending packets with tap device, check virtio-user related core change to wakeup status again.
-Test Case3: LSC event between vhost-user and virtio-user
-===============================================================================
+Test Case3: LSC event between vhost-user and virtio-user with split ring
+========================================================================
flow: Vhost <--> Virtio
@@ -106,3 +108,73 @@ flow: Vhost <--> Virtio
testpmd> show port info 0
#it should show "down"
+
+Test Case4: Packed ring virtio-user interrupt test with vhost-user as backend
+=============================================================================
+
+flow: TG --> NIC --> Vhost --> Virtio
+
+1. Bind one NIC port to igb_uio, launch testpmd with a virtual vhost device as backend::
+
+ ./testpmd -c 0x7c -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --rxq=1 --txq=1
+ testpmd>start
+
+2. Start l3fwd-power with a virtio-user device::
+
+ ./l3fwd-power -c 0xc000 -n 4 --socket-mem 1024,1024 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
+ --vdev=virtio_user0,path=./vhost-net,packed_vq=1 -- -p 1 -P --config="(0,0,14)" --parse-ptype
+
+3. Send packets with packet generator, check the virtio-user related core can be wakeup status.
+
+4. Stop sending packets with packet generator, check virtio-user related core change to sleep status.
+
+5. Restart sending packets with packet generator, check virtio-user related core change to wakeup status again.
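+
+Besides the l3fwd-power log, the state of the polling core can be cross-checked from its CPU utilization (a rough check; core 14 is the one given in the --config option above, and its usage should be high when awake and near zero when sleeping)::
+
+ top -H -p $(pidof l3fwd-power)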
+
+Test Case5: Packed ring virtio-user interrupt test with vhost-net as backend
+=================================================================================
+
+flow: Tap --> Vhost-net --> Virtio
+
+1. Start l3fwd-power with a virtio-user device, vhost-net as backend::
+
+ ./l3fwd-power -c 0xc000 -n 4 --socket-mem 1024,1024 --legacy-mem --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
+ --vdev=virtio_user0,path=/dev/vhost-net,packed_vq=1 -- -p 1 -P --config="(0,0,14)" --parse-ptype
+
+2. Vhost-net will generate one tap device, normally tap0; configure it and generate packets on it using the ping command::
+
+ ifconfig tap0 up
+ ifconfig tap0 1.1.1.1
+ ping -I tap0 1.1.1.2
+
+3. Check that the virtio-user related core can be woken up.
+
+4. Stop sending packets with tap device, check virtio-user related core change to sleep status.
+
+5. Restart sending packets with tap device, check virtio-user related core change to wakeup status again.
+
+Test Case6: LSC event between vhost-user and virtio-user with packed ring
+=========================================================================
+
+flow: Vhost <--> Virtio
+
+1. Start vhost-user side::
+
+ ./testpmd -c 0x3000 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i
+ testpmd>set fwd mac
+ testpmd>start
+
+2. Start virtio-user side::
+
+ ./testpmd -c 0xc000 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1 -- -i --tx-offloads=0x00
+ testpmd>set fwd mac
+ testpmd>start
+
+3. Check the virtio-user side link status::
+
+ testpmd> show port info 0
+ #it should show "up"
+
+4. Quit the vhost-user side with testpmd, then check the virtio-user side link status::
+
+ testpmd> show port info 0
+ #it should show "down"
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* [dts] [PATCH 10/11 v1] test_plans: add test cases for virtio_event_idx_interrupt
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
` (8 preceding siblings ...)
2020-02-28 6:09 ` [dts] [PATCH 09/11 v1] test_plans: add packed ring test cases for vhost_virtio_user_interrupt Yinan
@ 2020-02-28 6:09 ` Yinan
2020-02-28 6:09 ` [dts] [PATCH 11/11 v1] test_plans: add packed ring test cases for virtio_pvp_regression Yinan
2020-03-03 7:28 ` [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Tu, Lijuan
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../virtio_event_idx_interrupt_test_plan.rst | 123 ++++++++++++++++--
1 file changed, 115 insertions(+), 8 deletions(-)
diff --git a/test_plans/virtio_event_idx_interrupt_test_plan.rst b/test_plans/virtio_event_idx_interrupt_test_plan.rst
index e51d808..8293032 100644
--- a/test_plans/virtio_event_idx_interrupt_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_test_plan.rst
@@ -37,16 +37,17 @@ virtio event idx interrupt mode test plan
Description
===========
-This feature is to suppress interrupts for performance improvement, need compare interrupt times with and without
-virtio event idx enabled. Also need cover driver reload and live migration test.
+This feature suppresses interrupts for performance improvement; the test compares
+interrupt times with and without virtio event idx enabled. It also covers the driver
+reload test. For the packed virtqueue tests, qemu version > 4.2.0 is required.
Test flow
=========
TG --> NIC --> Vhost-user --> Virtio-net
-Test Case 1: compare interrupt times with and without virtio event idx enabled
-==============================================================================
+Test Case 1: Compare interrupt times with and without split ring virtio event idx enabled
+=========================================================================================
1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
@@ -78,8 +79,8 @@ Test Case 1: compare interrupt times with and without virtio event idx enabled
6. Compare interrupt times between virtio event_idx enabled and virtio event_idx disabled.
-Test Case 2: virtio-pci driver reload test
-==========================================
+Test Case 2: Split ring virtio-pci driver reload test
+=====================================================
1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
@@ -116,8 +117,8 @@ Test Case 2: virtio-pci driver reload test
6. Rerun step4 and step5 100 times to check event idx workable after driver reload.
-Test Case 3: wake up virtio-net cores with event idx interrupt mode 16 queues test
-==================================================================================
+Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 16 queues test
+=============================================================================================
1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
@@ -150,3 +151,109 @@ Test Case 3: wake up virtio-net cores with event idx interrupt mode 16 queues te
testpmd>stop
testpmd>start
testpmd>stop
+
+Test Case 4: Compare interrupt times with and without packed ring virtio event idx enabled
+==========================================================================================
+
+1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
+
+ rm -rf vhost-net*
+ ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i \
+ --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>start
+
+2. Launch VM::
+
+ taskset -c 32-33 \
+ qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+ -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
+ -vnc :12 -daemonize
+
+3. On VM1, set virtio device IP::
+
+ ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
+
+4. Send 10M packets from the packet generator to the nic, then check the virtio-net interrupt times with the below command in the VM::
+
+ cat /proc/interrupts
+
+5. Disable the virtio event idx feature (one possible way is sketched below) and rerun step 1 ~ step 4.
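+
+One possible way to disable the feature for the comparison run, assuming the event_idx device property is available in your qemu build, is to turn it off on the virtio-net device in the VM launch command::
+
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on,event_idx=off \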
+
+6. Compare interrupt times between virtio event_idx enabled and virtio event_idx disabled.
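+
+A minimal way to compare is to snapshot the virtio interrupt counters in the VM before and after each traffic run and diff them (the interrupt line names depend on the guest kernel)::
+
+ grep virtio /proc/interrupts > event_idx_on_before.txt
+ # send the 10M packets, then take a second snapshot
+ grep virtio /proc/interrupts > event_idx_on_after.txt
+ diff event_idx_on_before.txt event_idx_on_after.txt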
+
+Test Case 5: Packed ring virtio-pci driver reload test
+======================================================
+
+1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
+
+ rm -rf vhost-net*
+ ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd>start
+
+2. Launch VM::
+
+ taskset -c 32-33 \
+ qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+ -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
+ -vnc :12 -daemonize
+
+3. On VM1, set the virtio device IP, send 10M packets from the packet generator to the nic, then check that the virtio device can receive packets::
+
+ ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
+ tcpdump -i [ens3]
+
+4. Reload virtio-net driver by below cmds::
+
+ ifconfig [ens3] down
+ ./dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net
+ ./dpdk-devbind.py -b virtio-pci [00:03.0]
+
+5. Check virtio device can receive packets again::
+
+ ifconfig [ens3] 1.1.1.2
+ tcpdump -i [ens3]
+
+6. Rerun step 4 and step 5 100 times to check that event idx still works after driver reload; a scripted sketch is shown below.
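+
+A scripted sketch of the 100 iterations inside the VM, using the example interface name and PCI address from step 4 (substitute the real ones; the interface name may change after rebinding)::
+
+ IF=ens3      # virtio-net interface name in the VM
+ PCI=00:03.0  # pci address of the virtio-net device
+ for i in $(seq 1 100); do
+     ifconfig $IF down
+     ./dpdk-devbind.py -u $PCI
+     ./dpdk-devbind.py -b virtio-pci $PCI
+     ifconfig $IF 1.1.1.2 up
+     timeout 10 tcpdump -i $IF -c 10
+ done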
+
+Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode 16 queues test
+==============================================================================================
+
+1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-17 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+ testpmd>start
+
+2. Launch VM::
+
+ taskset -c 32-33 \
+ qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu16.img \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+ -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
+ -vnc :12 -daemonize
+
+3. On VM1, set the virtio device IP address and enable virtio-net with 16 queues::
+
+ ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
+ ethtool -L [ens3] combined 16
+
+4. Send 10M packets with different IP addresses from the packet generator to the nic, check the virtio-net interrupt times with the below command in the VM::
+
+ cat /proc/interrupts
+
+5. After a two-hour stress test, stop and restart testpmd, then check that each queue still receives new packets::
+
+ testpmd>stop
+ testpmd>start
+ testpmd>stop
\ No newline at end of file
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* [dts] [PATCH 11/11 v1] test_plans: add packed ring test cases for virtio_pvp_regression
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
` (9 preceding siblings ...)
2020-02-28 6:09 ` [dts] [PATCH 10/11 v1] test_plans: add test cases for virtio_event_idx_interrupt Yinan
@ 2020-02-28 6:09 ` Yinan
2020-03-03 7:28 ` [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Tu, Lijuan
11 siblings, 0 replies; 13+ messages in thread
From: Yinan @ 2020-02-28 6:09 UTC (permalink / raw)
To: dts; +Cc: Wang Yinan
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../virtio_pvp_regression_test_plan.rst | 95 +++++++++++++++++--
1 file changed, 89 insertions(+), 6 deletions(-)
diff --git a/test_plans/virtio_pvp_regression_test_plan.rst b/test_plans/virtio_pvp_regression_test_plan.rst
index 32374b0..2242920 100644
--- a/test_plans/virtio_pvp_regression_test_plan.rst
+++ b/test_plans/virtio_pvp_regression_test_plan.rst
@@ -34,7 +34,12 @@
vhost/virtio-pmd qemu regression test plan
==========================================
-Add feature combind cases to capture regression issue: cover 2 queues + reconnect + multi qemu version + multi-paths with virtio1.0 and virtio0.95.
+Add feature-combined cases to capture regression issues: cover 2 queues
++ reconnect + multiple qemu versions + multi-paths with virtio 1.0,
+virtio 0.95 and virtio 1.1. For the packed virtqueue (virtio 1.1) tests,
+qemu version > 4.2.0 is required. The qemu launch parameters
+(rx_queue_size=1024,tx_queue_size=1024) can only be supported with qemu
+version greater than or equal to 2.10.
Test flow
=========
@@ -80,8 +85,8 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
6. Kill VM, then re-launch VM, check if the reconnect can work and ensure the traffic can continue.
-Test Case 2: pvp test with virtio 0.95 normal path
-==================================================
+Test Case 2: pvp test with virtio 0.95 non-mergeable path
+=========================================================
1. Bind one port to igb_uio, then launch testpmd by below command::
@@ -197,8 +202,8 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
6. Kill VM, then re-launch VM, check if the reconnect can work and ensure the traffic can continue.
-Test Case 5: pvp test with virtio 1.0 normal path
-=================================================
+Test Case 5: pvp test with virtio 1.0 non-mergeable path
+========================================================
1. Bind one port to igb_uio, then launch testpmd by below command::
@@ -273,4 +278,82 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path
5. Quit vhost-user, then re-launch, check if the reconnect can work and ensure the traffic can continue.
-6. Kill VM, then re-launch VM, check if the reconnect can work and ensure the traffic can continue.
\ No newline at end of file
+6. Kill VM, then re-launch VM, check if the reconnect can work and ensure the traffic can continue.
+
+Test Case 7: pvp test with virtio 1.1 mergeable path
+====================================================
+
+1. Bind one port to igb_uio, then launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+2. Check that the DUT already has qemu 4.2.0 installed (a quick version check is shown after the launch command below), then launch VM::
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 3 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15,packed_vq=1 \
+ -vnc :10
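+
+A quick way to confirm the installed qemu version before launching the VM::
+
+ qemu-system-x86_64 --version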
+
+3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::
+
+ ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\
+ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+
+ testpmd>show port stats all
+
+5. Quit vhost-user, then re-launch it (see the sketch below), and check that the reconnect works and the traffic can continue.
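+
+For example, it is enough to quit testpmd and re-run the same vhost-user command from step 1 (do not remove the vhost-net socket here, qemu owns it in server mode; the client=1 option lets vhost reconnect), then confirm that forwarding resumed::
+
+ testpmd>quit
+ ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+ testpmd>show port stats all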
+
+6. Kill VM, then re-launch VM, check if the reconnect can work and ensure the traffic can continue.
+
+Test Case 8: pvp test with virtio 1.1 non-mergeable path
+=========================================================
+
+1. Bind one port to igb_uio, then launch testpmd by below command::
+
+ rm -rf vhost-net*
+ ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+2. Check that the DUT already has qemu 4.2.0 installed, then launch VM::
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 3 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15,packed_vq=1 \
+ -vnc :10
+
+3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::
+
+ ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\
+ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+ testpmd>set fwd mac
+ testpmd>start
+
+4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+
+ testpmd>show port stats all
+
+5. Quit vhost-user, then re-launch, check if the reconnect can work and ensure the traffic can continue.
+
+6. Kill VM, then re-launch VM, check if the reconnect can work and ensure the traffic can continue.
--
2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dts] [PATCH 00/11 v1] test_plans: add packed ring cases
2020-02-28 6:09 [dts] [PATCH 00/11 v1] test_plans: add packed ring cases Yinan
` (10 preceding siblings ...)
2020-02-28 6:09 ` [dts] [PATCH 11/11 v1] test_plans: add packed ring test cases for virtio_pvp_regression Yinan
@ 2020-03-03 7:28 ` Tu, Lijuan
11 siblings, 0 replies; 13+ messages in thread
From: Tu, Lijuan @ 2020-03-03 7:28 UTC (permalink / raw)
To: Wang, Yinan, dts; +Cc: Wang, Yinan
Applied the series, thanks
> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Yinan
> Sent: Friday, February 28, 2020 2:10 PM
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH 00/11 v1] test_plans: add packed ring cases
>
> From: Wang Yinan <yinan.wang@intel.com>
>
> As packed ring is supported since dpdk19.11 and qemu 4.2.0 also supported
> now, add packed ring cases in virtio regression cases.
>
> Wang Yinan (11):
> test_plans: add packed ring cases for
> loopback_multi_paths_port_restart
> test_plans: add packed ring cases for loopback_multi_queues
> test_plans: add packed ring test case for pvp_virtio_user_2M_hugepages
> test_plans: add packed ring cases for pvp_virtio_user_4k_pages
> test_plans: add packed ring test case for vhost_enqueue_interrupt
> test_plans: add packed ring test cases for vhost_event_idx_interrupt
> test_plans: add packed ring test cases for vhost_user_live_migration
> test_plans: add packed ring test cases for vhost_virtio_pmd_interrupt
> test_plans: add packed ring test cases for vhost_virtio_user_interrupt
> test_plans: add test cases for virtio_event_idx_interrupt
> test_plans: add packed ring test cases for virtio_pvp_regression
>
> ...ack_multi_paths_port_restart_test_plan.rst | 109 ++--
> .../loopback_multi_queues_test_plan.rst | 473 ++++++++++++++----
> ...pvp_virtio_user_2M_hugepages_test_plan.rst | 24 +-
> .../pvp_virtio_user_4k_pages_test_plan.rst | 30 +-
> .../vhost_enqueue_interrupt_test_plan.rst | 53 +-
> .../vhost_event_idx_interrupt_test_plan.rst | 182 ++++++-
> .../vhost_user_live_migration_test_plan.rst | 398 ++++++++++++++-
> .../vhost_virtio_pmd_interrupt_test_plan.rst | 42 +-
> .../vhost_virtio_user_interrupt_test_plan.rst | 88 +++-
> .../virtio_event_idx_interrupt_test_plan.rst | 123 ++++-
> .../virtio_pvp_regression_test_plan.rst | 95 +++-
> 11 files changed, 1435 insertions(+), 182 deletions(-)
>
> --
> 2.17.1
^ permalink raw reply [flat|nested] 13+ messages in thread