test suite reviews and discussions
* [dts] [PATCH V1] test_plans: fix build warnings
@ 2019-12-22  5:20 Wenjie Li
  2019-12-27  6:08 ` Tu, Lijuan
  0 siblings, 1 reply; 6+ messages in thread
From: Wenjie Li @ 2019-12-22  5:20 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

Fix build warnings in test plans.

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/cbdma_test_plan.rst                        | 10 +++++-----
 ...nable_package_download_in_ice_driver_test_plan.rst |  6 +++---
 test_plans/index.rst                                  |  9 ++++++++-
 test_plans/port_representor_test_plan.rst             | 11 ++---------
 test_plans/vhost_dequeue_zero_copy_test_plan.rst      | 11 ++++++-----
 5 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/test_plans/cbdma_test_plan.rst b/test_plans/cbdma_test_plan.rst
index 3cace07..aee05a6 100644
--- a/test_plans/cbdma_test_plan.rst
+++ b/test_plans/cbdma_test_plan.rst
@@ -87,7 +87,7 @@ where,
     or not
 
 Packet pipeline: 
-===============
+================
 NIC RX -> copy packet -> free original -> update mac addresses -> NIC TX
 
 Test Case1: CBDMA basic test with differnet size packets
@@ -158,7 +158,7 @@ Test Case5: CBDMA performance cmparison between mac-updating and no-mac-updating
 
 4. Check performance from ioat app::
 
-Total packets Tx:                   xxx [pps]
+    Total packets Tx:                   xxx [pps]
 
 5.Launch ioatfwd app::
 
@@ -168,7 +168,7 @@ Total packets Tx:                   xxx [pps]
 
 7. Check performance from ioat app::
 
-Total packets Tx:                   xxx [pps]
+    Total packets Tx:                   xxx [pps]
   
 Test Case6: CBDMA performance cmparison between HW copies and SW copies using different packet size
 ===================================================================================================
@@ -183,7 +183,7 @@ Test Case6: CBDMA performance cmparison between HW copies and SW copies using di
 
 4. Check performance from ioat app::
 
-Total packets Tx:                   xxx [pps]
+    Total packets Tx:                   xxx [pps]
 
 5.Launch ioatfwd app with three cores::
 
@@ -193,4 +193,4 @@ Total packets Tx:                   xxx [pps]
 
 7. Check performance from ioat app and compare with hw copy test::
 
-Total packets Tx:                   xxx [pps]
+    Total packets Tx:                   xxx [pps]
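
The ioat checks above only quote the expected "Total packets Tx" line. A minimal sketch of how that figure could be pulled out of the sample application's output for comparison, assuming only the line format shown above (the helper name is illustrative, not part of the patch)::

    import re

    def parse_ioat_tx_pps(output):
        """Return the 'Total packets Tx' rate in pps, or None if absent."""
        match = re.search(r"Total packets Tx:\s+(\d+)\s+\[pps\]", output)
        return int(match.group(1)) if match else None

    sample = "Total packets Tx:                   12345678 [pps]"
    assert parse_ioat_tx_pps(sample) == 12345678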
diff --git a/test_plans/enable_package_download_in_ice_driver_test_plan.rst b/test_plans/enable_package_download_in_ice_driver_test_plan.rst
index 55a83e2..4139191 100644
--- a/test_plans/enable_package_download_in_ice_driver_test_plan.rst
+++ b/test_plans/enable_package_download_in_ice_driver_test_plan.rst
@@ -182,6 +182,6 @@ In this case, b1:00.0 interface is specific interface.
 
 Check the initial output log, it shows::
 
-EAL: PCI device 0000:b1:00.0 on NUMA socket 0
-EAL:   probe driver: 8086:1593 net_ice
-**ice_load_pkg(): pkg to be loaded: 1.2.100.0, ICE COMMS Package**
\ No newline at end of file
+  EAL: PCI device 0000:b1:00.0 on NUMA socket 0
+  EAL:   probe driver: 8086:1593 net_ice
+  **ice_load_pkg(): pkg to be loaded: 1.2.100.0, ICE COMMS Package**
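
A minimal sketch of how the corrected log check could be automated, assuming only the three EAL lines quoted above (the helper name is illustrative, not part of the patch)::

    import re

    def ice_comms_pkg_loaded(log):
        """True if the init log reports the ICE COMMS package being loaded."""
        return re.search(r"ice_load_pkg\(\): pkg to be loaded: \S+, ICE COMMS Package",
                         log) is not None

    log = ("EAL: PCI device 0000:b1:00.0 on NUMA socket 0\n"
           "EAL:   probe driver: 8086:1593 net_ice\n"
           "ice_load_pkg(): pkg to be loaded: 1.2.100.0, ICE COMMS Package\n")
    assert ice_comms_pkg_loaded(log)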
diff --git a/test_plans/index.rst b/test_plans/index.rst
index eb506e4..0fdc039 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -231,7 +231,6 @@ The following are the test plans for the DPDK DTS automated test system.
     dpdk_hugetlbfs_mount_size_test_plan
     nic_single_core_perf_test_plan
     power_managerment_throughput_test_plan
-    ethtool_stats_test_plan
     iavf_test_plan
     packet_capture_test_plan
     packet_ordering_test_plan
@@ -240,3 +239,11 @@ The following are the test plans for the DPDK DTS automated test system.
 
     fips_cryptodev_test_plan
     flow_filtering_test_plan
+    af_xdp_2_test_plan
+    cbdma_test_plan
+    flexible_rxd_test_plan
+    ipsec_gw_and_library_test_plan
+    port_control_test_plan
+    port_representor_test_plan
+    vm2vm_virtio_user_test_plan
+    vmdq_dcb_test_plan
diff --git a/test_plans/port_representor_test_plan.rst b/test_plans/port_representor_test_plan.rst
index e7c1849..c54b7ec 100644
--- a/test_plans/port_representor_test_plan.rst
+++ b/test_plans/port_representor_test_plan.rst
@@ -195,12 +195,5 @@ Description: use control testpmd to set vlan
     scapy> sendp(pkts, iface="ens785f0")
 
 3. check port stats in 2 VF testpmd:
-  expected result:
-  2 VF testpmds should receive 10 packets separately.
-
-
-
-
-
-
-
+    expected result:
+    2 VF testpmds should receive 10 packets separately.
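
The plan does not show how pkts is built, so the following scapy sketch is purely illustrative: the VF MAC addresses and VLAN id are placeholders, not values taken from the test plan::

    from scapy.all import Dot1Q, Ether, IP, sendp

    vf_macs = ["00:11:22:33:44:55", "00:11:22:33:44:66"]   # hypothetical VF MACs
    pkts = [Ether(dst=mac) / Dot1Q(vlan=1) / IP()
            for mac in vf_macs for _ in range(10)]          # 10 packets per VF
    sendp(pkts, iface="ens785f0")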
diff --git a/test_plans/vhost_dequeue_zero_copy_test_plan.rst b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
index 100afca..0c550d8 100644
--- a/test_plans/vhost_dequeue_zero_copy_test_plan.rst
+++ b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
@@ -37,11 +37,12 @@ vhost dequeue zero-copy test plan
 Description
 ===========
 
-Vhost dequeue zero-copy is a performance optimization for vhost, the copy in the dequeue path is avoided in order to improve the performance. The test cases cover split ring and packed ring. 
+Vhost dequeue zero-copy is a performance optimization for vhost, the copy in the dequeue path is avoided in order to improve the performance. The test cases cover split ring and packed ring.
 Notice:
-*All packed ring case need special qemu version.
-*In the PVP case, when packet size is 1518B, 10G nic could be the performance bottleneck, so we use 40G traffic genarator and 40G nic.
-*Also as vhost zero copy mbufs should be consumed as soon as possible, don't start send packets at vhost side before VM and virtio-pmd launched.
+
+* All packed ring case need special qemu version.
+* In the PVP case, when packet size is 1518B, 10G nic could be the performance bottleneck, so we use 40G traffic genarator and 40G nic.
+* Also as vhost zero copy mbufs should be consumed as soon as possible, don't start send packets at vhost side before VM and virtio-pmd launched.
 
 Test flow
 =========
@@ -386,4 +387,4 @@ Test Case 8: pvp packed ring dequeue zero-copy test with driver reload test
 
 8. Check each queue's rx/tx packet numbers at vhost side::
 
-    testpmd>stop
\ No newline at end of file
+    testpmd>stop
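
Step 8 only issues "testpmd>stop"; a minimal sketch of how the per-queue check could be done on that output, assuming only that testpmd prints RX-packets/TX-packets counters per forwarding stream (the exact layout is not quoted in the plan)::

    import re

    def all_queues_saw_packets(stop_output, expected_queues):
        """True if at least `expected_queues` RX and TX counters are non-zero."""
        rx = [int(n) for n in re.findall(r"RX-packets:\s*(\d+)", stop_output)]
        tx = [int(n) for n in re.findall(r"TX-packets:\s*(\d+)", stop_output)]
        return (sum(1 for n in rx if n) >= expected_queues and
                sum(1 for n in tx if n) >= expected_queues)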
-- 
2.17.2


* [dts] [PATCH V1] test_plans: fix build warnings
@ 2019-08-12 14:22 Wenjie Li
  2019-08-12  7:06 ` Tu, Lijuan
  0 siblings, 1 reply; 6+ messages in thread
From: Wenjie Li @ 2019-08-12 14:22 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

Fix build warnings in test plans.

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/af_xdp_test_plan.rst | 142 ++++++++++++++++----------------
 test_plans/index.rst            |   8 +-
 2 files changed, 77 insertions(+), 73 deletions(-)

diff --git a/test_plans/af_xdp_test_plan.rst b/test_plans/af_xdp_test_plan.rst
index d12ef0c..58e6077 100644
--- a/test_plans/af_xdp_test_plan.rst
+++ b/test_plans/af_xdp_test_plan.rst
@@ -139,36 +139,36 @@ Test case 4: multiqueue
 
 1. One queue.
 
-   1) Start the testpmd with one queue::
+  1) Start the testpmd with one queue::
 
-    ./testpmd -l 29,30 -n 6 --no-pci \
-    --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1 \
-    -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
+      ./testpmd -l 29,30 -n 6 --no-pci \
+      --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1 \
+      -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
 
-   2) Assign the kernel core::
+  2) Assign the kernel core::
 
-    ./set_irq_affinity 34 enp216s0f0
+      ./set_irq_affinity 34 enp216s0f0
 
-   3) Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
+  3) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
 
 2. Four queues.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 4
+      ethtool -L enp216s0f0 combined 4
 
-   2)Start the testpmd with four queues::
+  2) Start the testpmd with four queues::
 
-    ./testpmd -l 29,30-33 -n 6 --no-pci \
-    --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4 \
-    -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
+      ./testpmd -l 29,30-33 -n 6 --no-pci \
+      --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4 \
+      -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
 
-   3)Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34-37 enp216s0f0
+      ./set_irq_affinity 34-37 enp216s0f0
 
-   4)Send packets with different dst IP address by packet generator
+  4) Send packets with different dst IP address by packet generator
       with different packet size from 64 bytes to 1518 bytes, check the throughput.
       The packets were distributed to the four queues.
 
@@ -177,45 +177,45 @@ Test case 5: multiqueue and zero copy
 
 1. One queue and zero copy.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 1
+      ethtool -L enp216s0f0 combined 1
 
-   2) Start the testpmd with one queue::
+  2) Start the testpmd with one queue::
 
-    ./testpmd -l 29,30 -n 6 --no-pci \
-    --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1,pmd_zero_copy=1 \
-    -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
+      ./testpmd -l 29,30 -n 6 --no-pci \
+      --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1,pmd_zero_copy=1 \
+      -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
 
-   3) Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34 enp216s0f0
+      ./set_irq_affinity 34 enp216s0f0
 
-   4) Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
-      Expect the performance is better than non-zero-copy.
+  4) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
+     Expect the performance is better than non-zero-copy.
 
 2. Four queues and zero copy.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 4
+      ethtool -L enp216s0f0 combined 4
 
-   2) Start the testpmd with four queues::
+  2) Start the testpmd with four queues::
 
-    ./testpmd -l 29,30-33 -n 6 --no-pci \
-    --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4,pmd_zero_copy=1 \
-    -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
+      ./testpmd -l 29,30-33 -n 6 --no-pci \
+      --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4,pmd_zero_copy=1 \
+      -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
 
-   3) Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34-37 enp216s0f0
+      ./set_irq_affinity 34-37 enp216s0f0
 
-   4) Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
-      The packets were distributed to the four queues.
-      Expect the performance of four queues is better than one queue.
-      Expect the performance is better than non-zero-copy.
+  4) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
+     The packets were distributed to the four queues.
+     Expect the performance of four queues is better than one queue.
+     Expect the performance is better than non-zero-copy.
 
 Test case 6: need_wakeup
 ========================
@@ -242,57 +242,57 @@ Test case 7: xdpsock sample performance
 
 1. One queue.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 1
+      ethtool -L enp216s0f0 combined 1
 
-   2) Start the xdp socket with one queue::
+  2) Start the xdp socket with one queue::
 
-    #taskset -c 30 ./xdpsock -l -i enp216s0f0
+      #taskset -c 30 ./xdpsock -l -i enp216s0f0
 
-   3) Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34 enp216s0f0
+      ./set_irq_affinity 34 enp216s0f0
 
-   4) Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
+  4) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
 
 2. Four queues.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 4
+      ethtool -L enp216s0f0 combined 4
 
-   2) Start the xdp socket with four queues::
+  2) Start the xdp socket with four queues::
 
-    #taskset -c 30 ./xdpsock -l -i enp216s0f0 -q 0
-    #taskset -c 31 ./xdpsock -l -i enp216s0f0 -q 1
-    #taskset -c 32 ./xdpsock -l -i enp216s0f0 -q 2
-    #taskset -c 33 ./xdpsock -l -i enp216s0f0 -q 3
+      #taskset -c 30 ./xdpsock -l -i enp216s0f0 -q 0
+      #taskset -c 31 ./xdpsock -l -i enp216s0f0 -q 1
+      #taskset -c 32 ./xdpsock -l -i enp216s0f0 -q 2
+      #taskset -c 33 ./xdpsock -l -i enp216s0f0 -q 3
 
-   3)Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 34-37 enp216s0f0
+      ./set_irq_affinity 34-37 enp216s0f0
 
-   4)Send packets with different dst IP address by packet generator
-      with different packet size from 64 bytes to 1518 bytes, check the throughput.
-      The packets were distributed to the four queues.
-      Expect the performance of four queues is better than one queue.
+  4) Send packets with different dst IP address by packet generator
+     with different packet size from 64 bytes to 1518 bytes, check the throughput.
+     The packets were distributed to the four queues.
+     Expect the performance of four queues is better than one queue.
 
 3. Need_wakeup.
 
-   1) Set hardware queue::
+  1) Set hardware queue::
 
-    ethtool -L enp216s0f0 combined 1
+      ethtool -L enp216s0f0 combined 1
 
-   2) Start the xdp socket with four queues::
+  2) Start the xdp socket with four queues::
 
-    #taskset -c 30 ./xdpsock -l -i enp216s0f0
+      #taskset -c 30 ./xdpsock -l -i enp216s0f0
 
-   3)Assign the kernel core::
+  3) Assign the kernel core::
 
-    ./set_irq_affinity 30 enp216s0f0
+      ./set_irq_affinity 30 enp216s0f0
 
-   4) Send packets by packet generator with different packet size from 64 bytes
-      to 1518 bytes, check the throughput.
-      Expect the performance is better than no need_wakeup.
+  4) Send packets by packet generator with different packet size from 64 bytes
+     to 1518 bytes, check the throughput.
+     Expect the performance is better than no need_wakeup.
\ No newline at end of file
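
One way to produce the "different dst IP address" stream the multiqueue steps above rely on, sketched with scapy; the generator-side interface name, address range and frame padding are placeholders, not part of the patch::

    from scapy.all import Ether, IP, Raw, sendp

    pkts = [Ether() / IP(src="192.168.0.1", dst="192.168.1.%d" % i) / Raw("x" * 26)
            for i in range(1, 101)]              # 100 flows, roughly 64-byte frames
    sendp(pkts, iface="enp216s0f1")              # placeholder generator-side port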
diff --git a/test_plans/index.rst b/test_plans/index.rst
index a8269fa..28f5a69 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -189,7 +189,7 @@ The following are the test plans for the DPDK DTS automated test system.
     vdev_primary_secondary_test_plan
     vhost_1024_ethports_test_plan
     virtio_pvp_regression_test_plan
-    virtio_user_as_exceptional_path
+    virtio_user_as_exceptional_path_test_plan
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
@@ -225,4 +225,8 @@ The following are the test plans for the DPDK DTS automated test system.
     flow_classify_test_plan
     dpdk_hugetlbfs_mount_size_test_plan
     nic_single_core_perf_test_plan
-    power_managerment_throughput_test_plan
\ No newline at end of file
+    power_managerment_throughput_test_plan
+    ethtool_stats_test_plan
+    iavf_test_plan
+    packet_capture_test_plan
+    packet_ordering_test_plan
-- 
2.17.2


* [dts] [PATCH V1] test_plans: fix build warnings
@ 2019-07-22  7:08 Wenjie Li
  2019-08-06  9:00 ` Tu, Lijuan
  0 siblings, 1 reply; 6+ messages in thread
From: Wenjie Li @ 2019-07-22  7:08 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

Fix build warnings in test plans.

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/index.rst                          | 14 +++-
 ...back_virtio_user_server_mode_test_plan.rst | 84 +++++++++----------
 test_plans/nic_single_core_perf_test_plan.rst | 18 ++--
 .../pvp_vhost_user_reconnect_test_plan.rst    |  1 +
 test_plans/pvp_virtio_bonding_test_plan.rst   |  4 +-
 5 files changed, 69 insertions(+), 52 deletions(-)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 52d4e55..d0ebeb5 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -81,7 +81,6 @@ The following are the test plans for the DPDK DTS automated test system.
     l3fwdacl_test_plan
     link_flowctrl_test_plan
     link_status_interrupt_test_plan
-    loopback_multi_paths_port_restart_performance_test_plan
     loopback_multi_paths_port_restart_test_plan
     loopback_virtio_user_server_mode_test_plan
     mac_filter_test_plan
@@ -174,16 +173,22 @@ The following are the test plans for the DPDK DTS automated test system.
     vhost_dequeue_zero_copy_test_plan
     vxlan_gpe_support_in_i40e_test_plan
     pvp_diff_qemu_version_test_plan
-    pvp_qemu_zero_copy_test_plan
     pvp_share_lib_test_plan
     pvp_vhost_user_built_in_net_driver_test_plan
     pvp_virtio_user_2M_hugepages_test_plan
     pvp_virtio_user_multi_queues_test_plan
-    vhost_gro_test_plan
     virtio_unit_cryptodev_func_test_plan
     virtio_user_for_container_networking_test_plan
     eventdev_perf_test_plan
     eventdev_pipeline_perf_test_plan
+    pvp_qemu_multi_paths_port_restart_test_plan
+    pvp_vhost_user_reconnect_test_plan
+    pvp_virtio_bonding_test_plan
+    pvp_virtio_user_4k_pages_test_plan
+    vdev_primary_secondary_test_plan
+    vhost_1024_ethports_test_plan
+    virtio_pvp_regression_test_plan
+    virtio_user_as_exceptional_path
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
@@ -217,3 +222,6 @@ The following are the test plans for the DPDK DTS automated test system.
     efd_test_plan
     example_build_test_plan
     flow_classify_test_plan
+    dpdk_hugetlbfs_mount_size_test_plan
+    nic_single_core_perf_test_plan
+    power_managerment_throughput_test_plan
\ No newline at end of file
diff --git a/test_plans/loopback_virtio_user_server_mode_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
index 45388f4..1dd17d1 100644
--- a/test_plans/loopback_virtio_user_server_mode_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
@@ -143,15 +143,15 @@ Test Case 3: loopback reconnect test with virtio 1.1 mergeable path and server m
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 4: loopback reconnect test with virtio 1.1 normal path and server mode
 ================================================================================
@@ -215,15 +215,15 @@ Test Case 4: loopback reconnect test with virtio 1.1 normal path and server mode
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 5: loopback reconnect test with virtio 1.0 mergeable path and server mode
 ===================================================================================
@@ -287,15 +287,15 @@ Test Case 5: loopback reconnect test with virtio 1.0 mergeable path and server m
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 6: loopback reconnect test with virtio 1.0 inorder mergeable path and server mode
 ===========================================================================================
@@ -359,15 +359,15 @@ Test Case 6: loopback reconnect test with virtio 1.0 inorder mergeable path and
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 7: loopback reconnect test with virtio 1.0 inorder no-mergeable path and server mode
 ==============================================================================================
@@ -431,15 +431,15 @@ Test Case 7: loopback reconnect test with virtio 1.0 inorder no-mergeable path a
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 8: loopback reconnect test with virtio 1.0 normal path and server mode
 ================================================================================
@@ -503,15 +503,15 @@ Test Case 8: loopback reconnect test with virtio 1.0 normal path and server mode
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
+      testpmd>stop
 
 Test Case 9: loopback reconnect test with virtio 1.0 vector_rx path and server mode
 ===================================================================================
@@ -575,12 +575,12 @@ Test Case 9: loopback reconnect test with virtio 1.0 vector_rx path and server m
 
 10. Port restart at vhost side by below command and re-calculate the average throughput::
 
-    testpmd>stop
-    testpmd>port stop 0
-    testpmd>port start 0
-    testpmd>start tx_first 32
-    testpmd>show port stats all
+      testpmd>stop
+      testpmd>port stop 0
+      testpmd>port start 0
+      testpmd>start tx_first 32
+      testpmd>show port stats all
 
 11. Check each RX/TX queue has packets::
 
-    testpmd>stop
\ No newline at end of file
+      testpmd>stop
\ No newline at end of file
diff --git a/test_plans/nic_single_core_perf_test_plan.rst b/test_plans/nic_single_core_perf_test_plan.rst
index 428d5db..4157c31 100644
--- a/test_plans/nic_single_core_perf_test_plan.rst
+++ b/test_plans/nic_single_core_perf_test_plan.rst
@@ -38,12 +38,14 @@ Prerequisites
 =============
 
 1. Hardware:
-    1) nic_single_core_perf test for FVL25G: two dual port FVL25G nics,
+
+    1.1) nic_single_core_perf test for FVL25G: two dual port FVL25G nics,
         all installed on the same socket, pick one port per nic
-    3) nic_single_core_perf test for NNT10G : four 82599 nics,
+    1.2) nic_single_core_perf test for NNT10G: four 82599 nics,
         all installed on the same socket, pick one port per nic
   
-2. Software: 
+2. Software::
+
     dpdk: git clone http://dpdk.org/git/dpdk
     scapy: http://www.secdev.org/projects/scapy/
     dts (next branch): git clone http://dpdk.org/git/tools/dts, 
@@ -51,12 +53,13 @@ Prerequisites
     Trex code: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz 
                (to be run in stateless Layer 2 mode, see section in
                 Getting Started Guide for more details)
-    python-prettytable: 
+    python-prettytable:
         apt install python-prettytable (for ubuntu os) 
         or dnf install python-prettytable (for fedora os). 
 
 3. Connect all the selected nic ports to traffic generator(IXIA,TREX,
-   PKTGEN) ports(TG ports).
+   PKTGEN) ports(TG ports)::
+
     2 TG 25g ports for FVL25G ports
     4 TG 10g ports for 4 NNT10G ports
     
@@ -86,19 +89,24 @@ Test Case : Single Core Performance Measurement
 6) Result tables for different NICs:
 
    FVL25G:
+
    +------------+---------+-------------+---------+---------------------+
    | Frame Size | TXD/RXD |  Throughput |   Rate  | Expected Throughput |
    +------------+---------+-------------+---------+---------------------+
    |     64     |   512   | xxxxxx Mpps |   xxx % |     xxx    Mpps     |
+   +------------+---------+-------------+---------+---------------------+
    |     64     |   2048  | xxxxxx Mpps |   xxx % |     xxx    Mpps     |
    +------------+---------+-------------+---------+---------------------+
 
    NNT10G:
+
    +------------+---------+-------------+---------+---------------------+
    | Frame Size | TXD/RXD |  Throughput |   Rate  | Expected Throughput |
    +------------+---------+-------------+---------+---------------------+
    |     64     |   128   | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
+   +------------+---------+-------------+---------+---------------------+
    |     64     |   512   | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
+   +------------+---------+-------------+---------+---------------------+
    |     64     |   2048  | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
    +------------+---------+-------------+---------+---------------------+
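
A minimal sketch of how the corrected result tables above could be rendered with python-prettytable (listed in the prerequisites); the row values are placeholders::

    from prettytable import ALL, PrettyTable

    table = PrettyTable(["Frame Size", "TXD/RXD", "Throughput", "Rate",
                         "Expected Throughput"])
    table.hrules = ALL                      # rule after every row, as in the plan
    table.add_row([64, 512, "xxxxxx Mpps", "xxx %", "xxx Mpps"])
    table.add_row([64, 2048, "xxxxxx Mpps", "xxx %", "xxx Mpps"])
    print(table)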
 
diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index a2ccdb1..9cc1ddc 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -49,6 +49,7 @@ Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vh
   When DPDK vhost-user restarts from an normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again. Note
   that QEMU version v2.7 or above is required for this reconnect feature.
   Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds. This is useful in two cases:
+
     * When QEMU is not started yet.
     * When QEMU restarts (for example due to a guest OS reboot).
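
A plain-Python illustration of the client-mode behaviour described above (not DPDK code): keep retrying the server's Unix domain socket until QEMU creates it; the socket path is a placeholder::

    import socket
    import time

    def connect_with_retry(path="/tmp/vhost-net", interval=1.0):
        while True:
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                sock.connect(path)
                return sock           # server (QEMU) is up, connection established
            except OSError:
                sock.close()
                time.sleep(interval)  # server not started yet or restarting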
 
diff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst
index a90e7d3..c45b3f7 100644
--- a/test_plans/pvp_virtio_bonding_test_plan.rst
+++ b/test_plans/pvp_virtio_bonding_test_plan.rst
@@ -50,7 +50,7 @@ Test case 1: vhost-user/virtio-pmd pvp bonding test with mode 0
 ===============================================================
 Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
 
-1.  Bind one port to igb_uio,launch vhost by below command::
+1. Bind one port to igb_uio,launch vhost by below command::
 
     ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -112,7 +112,7 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
 Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 to 6
 ===================================================================================
 
-1.  Bind one port to igb_uio,launch vhost by below command::
+1. Bind one port to igb_uio,launch vhost by below command::
 
     ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
-- 
2.17.2



