test suite reviews and discussions
* [dts] [PATCH V1] test_plans: fix build warnings
@ 2019-07-22  7:08 Wenjie Li
  2019-08-06  9:00 ` Tu, Lijuan
  0 siblings, 1 reply; 4+ messages in thread
From: Wenjie Li @ 2019-07-22  7:08 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

fix build warnings

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/index.rst                          | 14 +++-
 ...back_virtio_user_server_mode_test_plan.rst | 84 +++++++++----------
 test_plans/nic_single_core_perf_test_plan.rst | 18 ++--
 .../pvp_vhost_user_reconnect_test_plan.rst    |  1 +
 test_plans/pvp_virtio_bonding_test_plan.rst   |  4 +-
 5 files changed, 69 insertions(+), 52 deletions(-)
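Most of the hunks below simply re-indent literal blocks. In reStructuredText, a paragraph ending in `::` introduces a literal block, and inside a numbered list item the block must be indented more deeply than the item's own text; when the block sits at the same indent as the paragraph, Sphinx emits warnings such as "Literal block expected; none found", which is what this patch cleans up. A minimal sketch of the corrected pattern (testpmd commands taken from the test plans below; the deeper six-space indent is the fix):

```rst
10. Port restart at vhost side by below command and re-calculate the
    average throughput::

      testpmd>stop
      testpmd>port stop 0
      testpmd>port start 0
      testpmd>start tx_first 32
      testpmd>show port stats all
```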

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 52d4e55..d0ebeb5 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -81,7 +81,6 @@ The following are the test plans for the DPDK DTS automated test system.
     l3fwdacl_test_plan
     link_flowctrl_test_plan
     link_status_interrupt_test_plan
-    loopback_multi_paths_port_restart_performance_test_plan
     loopback_multi_paths_port_restart_test_plan
     loopback_virtio_user_server_mode_test_plan
     mac_filter_test_plan
@@ -174,16 +173,22 @@ The following are the test plans for the DPDK DTS automated test system.
     vhost_dequeue_zero_copy_test_plan
     vxlan_gpe_support_in_i40e_test_plan
     pvp_diff_qemu_version_test_plan
-    pvp_qemu_zero_copy_test_plan
     pvp_share_lib_test_plan
     pvp_vhost_user_built_in_net_driver_test_plan
     pvp_virtio_user_2M_hugepages_test_plan
     pvp_virtio_user_multi_queues_test_plan
-    vhost_gro_test_plan
     virtio_unit_cryptodev_func_test_plan
     virtio_user_for_container_networking_test_plan
     eventdev_perf_test_plan
     eventdev_pipeline_perf_test_plan
+    pvp_qemu_multi_paths_port_restart_test_plan
+    pvp_vhost_user_reconnect_test_plan
+    pvp_virtio_bonding_test_plan
+    pvp_virtio_user_4k_pages_test_plan
+    vdev_primary_secondary_test_plan
+    vhost_1024_ethports_test_plan
+    virtio_pvp_regression_test_plan
+    virtio_user_as_exceptional_path
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
@@ -217,3 +222,6 @@ The following are the test plans for the DPDK DTS automated test system.
     efd_test_plan
     example_build_test_plan
     flow_classify_test_plan
+    dpdk_hugetlbfs_mount_size_test_plan
+    nic_single_core_perf_test_plan
+    power_managerment_throughput_test_plan
\ No newline at end of file
diff --git a/test_plans/loopback_virtio_user_server_mode_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
index 45388f4..1dd17d1 100644
--- a/test_plans/loopback_virtio_user_server_mode_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
@@ -143,15 +143,15 @@ Test Case 3: loopback reconnect test with virtio 1.1 mergeable path and server m
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 4: loopback reconnect test with virtio 1.1 normal path and server mode
 ================================================================================
@@ -215,15 +215,15 @@ Test Case 4: loopback reconnect test with virtio 1.1 normal path and server mode
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 5: loopback reconnect test with virtio 1.0 mergeable path and server mode
 ===================================================================================
@@ -287,15 +287,15 @@ Test Case 5: loopback reconnect test with virtio 1.0 mergeable path and server m
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 6: loopback reconnect test with virtio 1.0 inorder mergeable path and server mode
 ===========================================================================================
@@ -359,15 +359,15 @@ Test Case 6: loopback reconnect test with virtio 1.0 inorder mergeable path and
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 7: loopback reconnect test with virtio 1.0 inorder no-mergeable path and server mode
 ==============================================================================================
@@ -431,15 +431,15 @@ Test Case 7: loopback reconnect test with virtio 1.0 inorder no-mergeable path a
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 8: loopback reconnect test with virtio 1.0 normal path and server mode
 ================================================================================
@@ -503,15 +503,15 @@ Test Case 8: loopback reconnect test with virtio 1.0 normal path and server mode
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 9: loopback reconnect test with virtio 1.0 vector_rx path and server mode
 ===================================================================================
@@ -575,12 +575,12 @@ Test Case 9: loopback reconnect test with virtio 1.0 vector_rx path and server m
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
\ No newline at end of file
+      testpmd>stop
\ No newline at end of file
diff --git a/test_plans/nic_single_core_perf_test_plan.rst b/test_plans/nic_single_core_perf_test_plan.rst
index 428d5db..4157c31 100644
--- a/test_plans/nic_single_core_perf_test_plan.rst
+++ b/test_plans/nic_single_core_perf_test_plan.rst
@@ -38,12 +38,14 @@ Prerequisites
 =============
 
 1. Hardware:
-    1) nic_single_core_perf test for FVL25G: two dual port FVL25G nics,
+
+    1.1) nic_single_core_perf test for FVL25G: two dual port FVL25G nics,
         all installed on the same socket, pick one port per nic
-    3) nic_single_core_perf test for NNT10G : four 82599 nics,
+    1.2) nic_single_core_perf test for NNT10G: four 82599 nics,
         all installed on the same socket, pick one port per nic
   
-2. Software: 
+2. Software::
+
     dpdk: git clone http://dpdk.org/git/dpdk
     scapy: http://www.secdev.org/projects/scapy/
     dts (next branch): git clone http://dpdk.org/git/tools/dts, 
@@ -51,12 +53,13 @@ Prerequisites
     Trex code: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz 
                (to be run in stateless Layer 2 mode, see section in
                 Getting Started Guide for more details)
-    python-prettytable: 
+    python-prettytable:
         apt install python-prettytable (for ubuntu os) 
         or dnf install python-prettytable (for fedora os). 
 
 3. Connect all the selected nic ports to traffic generator(IXIA,TREX,
-   PKTGEN) ports(TG ports).
+   PKTGEN) ports(TG ports)::
+
     2 TG 25g ports for FVL25G ports
     4 TG 10g ports for 4 NNT10G ports
     
@@ -86,19 +89,24 @@ Test Case : Single Core Performance Measurement
 6) Result tables for different NICs:
 
    FVL25G:
+
    +------------+---------+-------------+---------+---------------------+
    | Frame Size | TXD/RXD |  Throughput |   Rate  | Expected Throughput |
    +------------+---------+-------------+---------+---------------------+
    |     64     |   512   | xxxxxx Mpps |   xxx % |     xxx    Mpps     |
+   +------------+---------+-------------+---------+---------------------+
    |     64     |   2048  | xxxxxx Mpps |   xxx % |     xxx    Mpps     |
    +------------+---------+-------------+---------+---------------------+
 
    NNT10G:
+
    +------------+---------+-------------+---------+---------------------+
    | Frame Size | TXD/RXD |  Throughput |   Rate  | Expected Throughput |
    +------------+---------+-------------+---------+---------------------+
    |     64     |   128   | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
+   +------------+---------+-------------+---------+---------------------+
    |     64     |   512   | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
+   +------------+---------+-------------+---------+---------------------+
    |     64     |   2048  | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
    +------------+---------+-------------+---------+---------------------+
 
diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index a2ccdb1..9cc1ddc 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -49,6 +49,7 @@ Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vh
   When DPDK vhost-user restarts from an normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again. Note
   that QEMU version v2.7 or above is required for this reconnect feature.
   Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds. This is useful in two cases:
+
     * When QEMU is not started yet.
     * When QEMU restarts (for example due to a guest OS reboot).
 
diff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst
index a90e7d3..c45b3f7 100644
--- a/test_plans/pvp_virtio_bonding_test_plan.rst
+++ b/test_plans/pvp_virtio_bonding_test_plan.rst
@@ -50,7 +50,7 @@ Test case 1: vhost-user/virtio-pmd pvp bonding test with mode 0
 ===============================================================
 Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
 
-1.  Bind one port to igb_uio,launch vhost by below command::
+1. Bind one port to igb_uio,launch vhost by below command::
 
     ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -112,7 +112,7 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
 Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 to 6
 ===================================================================================
 
-1.  Bind one port to igb_uio,launch vhost by below command::
+1. Bind one port to igb_uio,launch vhost by below command::
 
     ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
-- 
2.17.2



* Re: [dts] [PATCH V1] test_plans: fix build warnings
  2019-07-22  7:08 [dts] [PATCH V1] test_plans: fix build warnings Wenjie Li
@ 2019-08-06  9:00 ` Tu, Lijuan
  0 siblings, 0 replies; 4+ messages in thread
From: Tu, Lijuan @ 2019-08-06  9:00 UTC (permalink / raw)
  To: Li, WenjieX A, dts; +Cc: Li, WenjieX A

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Wenjie Li
> Sent: Monday, July 22, 2019 3:09 PM
> To: dts@dpdk.org
> Cc: Li, WenjieX A <wenjiex.a.li@intel.com>
> Subject: [dts] [PATCH V1] test_plans: fix build warnings
> 
> fix build warnings
> 
> Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>



* [dts] [PATCH V1] test_plans: fix build warnings
@ 2019-08-12 14:22 Wenjie Li
  2019-08-12  7:06 ` Tu, Lijuan
  0 siblings, 1 reply; 4+ messages in thread
From: Wenjie Li @ 2019-08-12 14:22 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

fix build warnings

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/af_xdp_test_plan.rst | 142 ++++++++++++++++----------------
 test_plans/index.rst            |   8 +-
 2 files changed, 77 insertions(+), 73 deletions(-)
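The af_xdp changes below adjust the numbered sub-steps: `2)Start` without a space after the enumerator is not recognized as a list item by the reST parser, and sub-steps indented as deeply as their literal blocks produce further build warnings. A small sketch of the corrected nesting, following the indentation convention the hunks introduce (two spaces for the sub-item, six for its literal block):

```rst
1. One queue.

  1) Start the testpmd with one queue::

      ./testpmd -l 29,30 -n 6 --no-pci \
      --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1 \
      -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
```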

diff --git a/test_plans/af_xdp_test_plan.rst b/test_plans/af_xdp_test_plan.rst
index d12ef0c..58e6077 100644
--- a/test_plans/af_xdp_test_plan.rst
+++ b/test_plans/af_xdp_test_plan.rst
@@ -139,36 +139,36 @@ Test case 4: multiqueue
 
 1. One queue.
 
-   1) Start the testpmd with one queue::
+  1) Start the testpmd with one queue::
 
-    ./testpmd -l 29,30 -n 6 --no-pci \
-    --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1 \
-    -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
+      ./testpmd -l 29,30 -n 6 --no-pci \
+      --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1 \
+      -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
 
-   2) Assign the kernel core::
+  2) Assign the kernel core::
 
-    ./set_irq_affinity 34 enp216s0f0
+      ./set_irq_affinity 34 enp216s0f0
 
-   3) Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
+  3) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
 
 2. Four queues.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 4
+      ethtool -L enp216s0f0 combined 4
 
-   2)Start the testpmd with four queues::
+  2) Start the testpmd with four queues::
 
-    ./testpmd -l 29,30-33 -n 6 --no-pci \
-    --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4 \
-    -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
+      ./testpmd -l 29,30-33 -n 6 --no-pci \
+      --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4 \
+      -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
 
-   3)Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34-37 enp216s0f0
+      ./set_irq_affinity 34-37 enp216s0f0
 
-   4)Send packets with different dst IP address by packet generator
+  4) Send packets with different dst IP address by packet generator
       with different packet size from 64 bytes to 1518 bytes, check the throughput.
       The packets were distributed to the four queues.
 
@@ -177,45 +177,45 @@ Test case 5: multiqueue and zero copy
 
 1. One queue and zero copy.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 1
+      ethtool -L enp216s0f0 combined 1
 
-   2) Start the testpmd with one queue::
+  2) Start the testpmd with one queue::
 
-    ./testpmd -l 29,30 -n 6 --no-pci \
-    --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1,pmd_zero_copy=1 \
-    -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
+      ./testpmd -l 29,30 -n 6 --no-pci \
+      --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1,pmd_zero_copy=1 \
+      -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
 
-   3) Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34 enp216s0f0
+      ./set_irq_affinity 34 enp216s0f0
 
-   4) Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
-      Expect the performance is better than non-zero-copy.
+  4) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
+     Expect the performance is better than non-zero-copy.
 
 2. Four queues and zero copy.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 4
+      ethtool -L enp216s0f0 combined 4
 
-   2) Start the testpmd with four queues::
+  2) Start the testpmd with four queues::
 
-    ./testpmd -l 29,30-33 -n 6 --no-pci \
-    --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4,pmd_zero_copy=1 \
-    -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
+      ./testpmd -l 29,30-33 -n 6 --no-pci \
+      --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4,pmd_zero_copy=1 \
+      -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
 
-   3) Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34-37 enp216s0f0
+      ./set_irq_affinity 34-37 enp216s0f0
 
-   4) Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
-      The packets were distributed to the four queues.
-      Expect the performance of four queues is better than one queue.
-      Expect the performance is better than non-zero-copy.
+  4) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
+     The packets should be distributed across the four queues.
+     Expect the performance of four queues to be better than one queue.
+     Expect the performance to be better than non-zero-copy.
 
 Test case 6: need_wakeup
 ========================
@@ -242,57 +242,57 @@ Test case 7: xdpsock sample performance
 
 1. One queue.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 1
+      ethtool -L enp216s0f0 combined 1
 
-   2) Start the xdp socket with one queue::
+  2) Start the xdp socket with one queue::
 
-    #taskset -c 30 ./xdpsock -l -i enp216s0f0
+      #taskset -c 30 ./xdpsock -l -i enp216s0f0
 
-   3) Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34 enp216s0f0
+      ./set_irq_affinity 34 enp216s0f0
 
-   4) Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
+  4) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
 
 2. Four queues.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 4
+      ethtool -L enp216s0f0 combined 4
 
-   2) Start the xdp socket with four queues::
+  2) Start the xdp socket with four queues::
 
-    #taskset -c 30 ./xdpsock -l -i enp216s0f0 -q 0
-    #taskset -c 31 ./xdpsock -l -i enp216s0f0 -q 1
-    #taskset -c 32 ./xdpsock -l -i enp216s0f0 -q 2
-    #taskset -c 33 ./xdpsock -l -i enp216s0f0 -q 3
+      #taskset -c 30 ./xdpsock -l -i enp216s0f0 -q 0
+      #taskset -c 31 ./xdpsock -l -i enp216s0f0 -q 1
+      #taskset -c 32 ./xdpsock -l -i enp216s0f0 -q 2
+      #taskset -c 33 ./xdpsock -l -i enp216s0f0 -q 3
 
-   3)Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34-37 enp216s0f0
+      ./set_irq_affinity 34-37 enp216s0f0
 
-   4)Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
-      The packets were distributed to the four queues.
-      Expect the performance of four queues is better than one queue.
+  4) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
+     The packets should be distributed across the four queues.
+     Expect the performance of four queues to be better than one queue.
 
 3. Need_wakeup.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 1
+      ethtool -L enp216s0f0 combined 1
 
-   2) Start the xdp socket with four queues::
+  2) Start the xdp socket with one queue::
 
-    #taskset -c 30 ./xdpsock -l -i enp216s0f0
+      #taskset -c 30 ./xdpsock -l -i enp216s0f0
 
-   3)Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 30 enp216s0f0
+      ./set_irq_affinity 30 enp216s0f0
 
-   4) Send packets by packet generator with different packet size from 64 bytes
-      to 1518 bytes, check the throughput.
-      Expect the performance is better than no need_wakeup.
+  4) Send packets by packet generator with different packet size from 64 bytes
+     to 1518 bytes, check the throughput.
+     Expect the performance to be better than without need_wakeup.
\ No newline at end of file
diff --git a/test_plans/index.rst b/test_plans/index.rst
index a8269fa..28f5a69 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -189,7 +189,7 @@ The following are the test plans for the DPDK DTS automated test system.
     vdev_primary_secondary_test_plan
     vhost_1024_ethports_test_plan
     virtio_pvp_regression_test_plan
-    virtio_user_as_exceptional_path
+    virtio_user_as_exceptional_path_test_plan
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
@@ -225,4 +225,8 @@ The following are the test plans for the DPDK DTS automated test system.
     flow_classify_test_plan
     dpdk_hugetlbfs_mount_size_test_plan
     nic_single_core_perf_test_plan
-    power_managerment_throughput_test_plan
\ No newline at end of file
+    power_managerment_throughput_test_plan
+    ethtool_stats_test_plan
+    iavf_test_plan
+    packet_capture_test_plan
+    packet_ordering_test_plan
-- 
2.17.2
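[Editor's note] The four-queue zero-copy case in the patch above repeats the same three steps (reserve hardware queues, start testpmd on the AF_XDP vdev, pin the NIC IRQs). As a minimal sketch, the setup can be parameterized in POSIX sh; the interface name, core lists, and `set_irq_affinity` helper are taken from the plan's own examples and must be adjusted for the actual NIC and CPU layout. The script only prints the commands (testpmd needs DPDK, the real interface, and root privileges), it does not run them:

```shell
#!/bin/sh
# Sketch of the multiqueue + zero-copy setup from the test plan.
# IFACE, the core lists, and ./set_irq_affinity are the plan's examples
# (assumptions for illustration) -- adjust for your environment.
IFACE=enp216s0f0
QUEUES=4

# Step 1: reserve the hardware queues on the kernel side.
echo "ethtool -L ${IFACE} combined ${QUEUES}"

# Step 2: compose the AF_XDP vdev argument and the testpmd command line.
VDEV="net_af_xdp0,iface=${IFACE},start_queue=0,queue_count=${QUEUES},pmd_zero_copy=1"
echo "./testpmd -l 29,30-33 -n 6 --no-pci --vdev ${VDEV} -- -i --nb-cores=${QUEUES} --rxq=${QUEUES} --txq=${QUEUES} --port-topology=loop"

# Step 3: pin the kernel IRQ handling away from the PMD cores.
echo "./set_irq_affinity 34-37 ${IFACE}"
```

Changing `QUEUES` to 1 (and the core lists accordingly) reproduces the one-queue variant of the same case.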


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dts] [PATCH V1] test_plans: fix build warnings
  2019-08-12 14:22 Wenjie Li
@ 2019-08-12  7:06 ` Tu, Lijuan
  0 siblings, 0 replies; 4+ messages in thread
From: Tu, Lijuan @ 2019-08-12  7:06 UTC (permalink / raw)
  To: Li, WenjieX A, dts; +Cc: Li, WenjieX A

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Wenjie Li
> Sent: Monday, August 12, 2019 10:22 PM
> To: dts@dpdk.org
> Cc: Li, WenjieX A <wenjiex.a.li@intel.com>
> Subject: [dts] [PATCH V1] test_plans: fix build warnings
> 
> fix build warnings
> 
> Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
> ---
>  test_plans/af_xdp_test_plan.rst | 142 ++++++++++++++++----------------
>  test_plans/index.rst            |   8 +-
>  2 files changed, 77 insertions(+), 73 deletions(-)
> 


^ permalink raw reply	[flat|nested] 4+ messages in thread
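[Editor's note] In the four-queue xdpsock sample case of the patch, each queue gets its own xdpsock instance pinned to a consecutive core. That pattern can be sketched as a small loop; core and queue numbers follow the plan's examples, and the script prints the commands rather than executing them, since xdpsock needs the real interface and root privileges:

```shell
#!/bin/sh
# Sketch: generate the per-queue xdpsock launch lines from the plan's
# four-queue sample. IFACE and FIRST_CORE mirror the plan's examples.
IFACE=enp216s0f0
FIRST_CORE=30

# One xdpsock instance per queue, pinned to consecutive cores 30-33.
for q in 0 1 2 3; do
    core=$((FIRST_CORE + q))
    echo "taskset -c ${core} ./xdpsock -l -i ${IFACE} -q ${q}"
done
```

The same loop with a single iteration (`for q in 0`) gives the one-queue and need_wakeup variants.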

end of thread, back to index

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-22  7:08 [dts] [PATCH V1] test_plans: fix build warnings Wenjie Li
2019-08-06  9:00 ` Tu, Lijuan
2019-08-12 14:22 Wenjie Li
2019-08-12  7:06 ` Tu, Lijuan
