From: Wenjie Li <wenjiex.a.li@intel.com>
To: dts@dpdk.org
Cc: Wenjie Li <wenjiex.a.li@intel.com>
Subject: [dts] [PATCH V1] test_plans: fix build warnings
Date: Mon, 22 Jul 2019 15:08:49 +0800	[thread overview]
Message-ID: <1563779329-32092-1-git-send-email-wenjiex.a.li@intel.com> (raw)

Fix build warnings when generating the test plan documentation:

* index.rst: remove stale entries and add missing test plans to the index
* loopback_virtio_user_server_mode_test_plan.rst: fix literal block indentation
* nic_single_core_perf_test_plan.rst: fix list, literal block and table markup
* pvp_vhost_user_reconnect_test_plan.rst: add a blank line before the bullet list
* pvp_virtio_bonding_test_plan.rst: fix numbered list indentation

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/index.rst                          | 14 +++-
 ...back_virtio_user_server_mode_test_plan.rst | 84 +++++++++----------
 test_plans/nic_single_core_perf_test_plan.rst | 18 ++--
 .../pvp_vhost_user_reconnect_test_plan.rst    |  1 +
 test_plans/pvp_virtio_bonding_test_plan.rst   |  4 +-
 5 files changed, 69 insertions(+), 52 deletions(-)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 52d4e55..d0ebeb5 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -81,7 +81,6 @@ The following are the test plans for the DPDK DTS automated test system.
     l3fwdacl_test_plan
     link_flowctrl_test_plan
     link_status_interrupt_test_plan
-    loopback_multi_paths_port_restart_performance_test_plan
     loopback_multi_paths_port_restart_test_plan
     loopback_virtio_user_server_mode_test_plan
     mac_filter_test_plan
@@ -174,16 +173,22 @@ The following are the test plans for the DPDK DTS automated test system.
     vhost_dequeue_zero_copy_test_plan
     vxlan_gpe_support_in_i40e_test_plan
     pvp_diff_qemu_version_test_plan
-    pvp_qemu_zero_copy_test_plan
     pvp_share_lib_test_plan
     pvp_vhost_user_built_in_net_driver_test_plan
     pvp_virtio_user_2M_hugepages_test_plan
     pvp_virtio_user_multi_queues_test_plan
-    vhost_gro_test_plan
     virtio_unit_cryptodev_func_test_plan
     virtio_user_for_container_networking_test_plan
     eventdev_perf_test_plan
     eventdev_pipeline_perf_test_plan
+    pvp_qemu_multi_paths_port_restart_test_plan
+    pvp_vhost_user_reconnect_test_plan
+    pvp_virtio_bonding_test_plan
+    pvp_virtio_user_4k_pages_test_plan
+    vdev_primary_secondary_test_plan
+    vhost_1024_ethports_test_plan
+    virtio_pvp_regression_test_plan
+    virtio_user_as_exceptional_path
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
@@ -217,3 +222,6 @@ The following are the test plans for the DPDK DTS automated test system.
     efd_test_plan
     example_build_test_plan
     flow_classify_test_plan
+    dpdk_hugetlbfs_mount_size_test_plan
+    nic_single_core_perf_test_plan
+    power_managerment_throughput_test_plan
\ No newline at end of file
diff --git a/test_plans/loopback_virtio_user_server_mode_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
index 45388f4..1dd17d1 100644
--- a/test_plans/loopback_virtio_user_server_mode_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
@@ -143,15 +143,15 @@ Test Case 3: loopback reconnect test with virtio 1.1 mergeable path and server m
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 4: loopback reconnect test with virtio 1.1 normal path and server mode
 ================================================================================
@@ -215,15 +215,15 @@ Test Case 4: loopback reconnect test with virtio 1.1 normal path and server mode
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 5: loopback reconnect test with virtio 1.0 mergeable path and server mode
 ===================================================================================
@@ -287,15 +287,15 @@ Test Case 5: loopback reconnect test with virtio 1.0 mergeable path and server m
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 6: loopback reconnect test with virtio 1.0 inorder mergeable path and server mode
 ===========================================================================================
@@ -359,15 +359,15 @@ Test Case 6: loopback reconnect test with virtio 1.0 inorder mergeable path and
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 7: loopback reconnect test with virtio 1.0 inorder no-mergeable path and server mode
 ==============================================================================================
@@ -431,15 +431,15 @@ Test Case 7: loopback reconnect test with virtio 1.0 inorder no-mergeable path a
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 8: loopback reconnect test with virtio 1.0 normal path and server mode
 ================================================================================
@@ -503,15 +503,15 @@ Test Case 8: loopback reconnect test with virtio 1.0 normal path and server mode
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 9: loopback reconnect test with virtio 1.0 vector_rx path and server mode
 ===================================================================================
@@ -575,12 +575,12 @@ Test Case 9: loopback reconnect test with virtio 1.0 vector_rx path and server m
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
\ No newline at end of file
+      testpmd>stop
\ No newline at end of file
diff --git a/test_plans/nic_single_core_perf_test_plan.rst b/test_plans/nic_single_core_perf_test_plan.rst
index 428d5db..4157c31 100644
--- a/test_plans/nic_single_core_perf_test_plan.rst
+++ b/test_plans/nic_single_core_perf_test_plan.rst
@@ -38,12 +38,14 @@ Prerequisites
 =============
 
 1. Hardware:
-    1) nic_single_core_perf test for FVL25G: two dual port FVL25G nics,
+
+    1.1) nic_single_core_perf test for FVL25G: two dual port FVL25G nics,
         all installed on the same socket, pick one port per nic
-    3) nic_single_core_perf test for NNT10G : four 82599 nics,
+    1.2) nic_single_core_perf test for NNT10G: four 82599 nics,
         all installed on the same socket, pick one port per nic
   
-2. Software: 
+2. Software::
+
     dpdk: git clone http://dpdk.org/git/dpdk
     scapy: http://www.secdev.org/projects/scapy/
     dts (next branch): git clone http://dpdk.org/git/tools/dts, 
@@ -51,12 +53,13 @@ Prerequisites
     Trex code: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz 
                (to be run in stateless Layer 2 mode, see section in
                 Getting Started Guide for more details)
-    python-prettytable: 
+    python-prettytable:
         apt install python-prettytable (for ubuntu os) 
         or dnf install python-prettytable (for fedora os). 
 
 3. Connect all the selected nic ports to traffic generator(IXIA,TREX,
-   PKTGEN) ports(TG ports).
+   PKTGEN) ports(TG ports)::
+
     2 TG 25g ports for FVL25G ports
     4 TG 10g ports for 4 NNT10G ports
     
@@ -86,19 +89,24 @@ Test Case : Single Core Performance Measurement
 6) Result tables for different NICs:
 
    FVL25G:
+
    +------------+---------+-------------+---------+---------------------+
    | Frame Size | TXD/RXD |  Throughput |   Rate  | Expected Throughput |
    +------------+---------+-------------+---------+---------------------+
    |     64     |   512   | xxxxxx Mpps |   xxx % |     xxx    Mpps     |
+   +------------+---------+-------------+---------+---------------------+
    |     64     |   2048  | xxxxxx Mpps |   xxx % |     xxx    Mpps     |
    +------------+---------+-------------+---------+---------------------+
 
    NNT10G:
+
    +------------+---------+-------------+---------+---------------------+
    | Frame Size | TXD/RXD |  Throughput |   Rate  | Expected Throughput |
    +------------+---------+-------------+---------+---------------------+
    |     64     |   128   | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
+   +------------+---------+-------------+---------+---------------------+
    |     64     |   512   | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
+   +------------+---------+-------------+---------+---------------------+
    |     64     |   2048  | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
    +------------+---------+-------------+---------+---------------------+
 
diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index a2ccdb1..9cc1ddc 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -49,6 +49,7 @@ Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vh
   When DPDK vhost-user restarts from an normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again. Note
   that QEMU version v2.7 or above is required for this reconnect feature.
   Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds. This is useful in two cases:
+
     * When QEMU is not started yet.
     * When QEMU restarts (for example due to a guest OS reboot).
 
diff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst
index a90e7d3..c45b3f7 100644
--- a/test_plans/pvp_virtio_bonding_test_plan.rst
+++ b/test_plans/pvp_virtio_bonding_test_plan.rst
@@ -50,7 +50,7 @@ Test case 1: vhost-user/virtio-pmd pvp bonding test with mode 0
 ===============================================================
 Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
 
-1.  Bind one port to igb_uio,launch vhost by below command::
+1. Bind one port to igb_uio,launch vhost by below command::
 
     ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -112,7 +112,7 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
 Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 to 6
 ===================================================================================
 
-1.  Bind one port to igb_uio,launch vhost by below command::
+1. Bind one port to igb_uio,launch vhost by below command::
 
     ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
-- 
2.17.2


