test suite reviews and discussions
* [dts] [PATCH V1] test_plans:fix build warning
@ 2019-09-23  2:53 Wenjie Li
  0 siblings, 0 replies; 7+ messages in thread
From: Wenjie Li @ 2019-09-23  2:53 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

fix build warning

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/bbdev_test_plan.rst     |  6 +++++-
 test_plans/index.rst               |  7 ++++++-
 test_plans/sriov_kvm_test_plan.rst | 14 ++++++++++----
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/test_plans/bbdev_test_plan.rst b/test_plans/bbdev_test_plan.rst
index 800e7f6..7fa0e2a 100644
--- a/test_plans/bbdev_test_plan.rst
+++ b/test_plans/bbdev_test_plan.rst
@@ -67,12 +67,14 @@ Prerequisites
 =============
 
 1. OS and Hardware
+
    (a) An AVX2 supporting machine
-   (b) Windriver TS 2 or CentOS 7 operating systems
+   (b) Windriver TS 2 or CentOS 7 operating systems 
        (Fedora 25 and Ubuntu 16.04 is ok.)
    (c) Intel ICC compiler installed
 
 2. FlexRAN SDK Libraries
+
    To build DPDK with the *turbo_sw* PMD the user is required to download
    the export controlled ``FlexRAN SDK`` Libraries.
    An account at Intel Resource Design Center needs to be registered from
@@ -84,6 +86,7 @@ Prerequisites
    You can refer to the file dpdk/doc/guides/bbdevs/turbo_sw.rst.
 
 3. PMD setting
+
    Current BBDEV framework is en-suited with two vdev PMD drivers:
    null and turbo_sw.
    1) Null PMD is similar to cryptodev Null PMD, which is an empty driver to
@@ -101,6 +104,7 @@ Prerequisites
    They are both located in the build configuration file ``common_base``.
 
 4. Test tool
+
    A test suite for BBDEV is packaged with the framework to ease the
    validation needs for various functions and use cases.
    The tool to use for validation and testing is called: test-bbdev,
diff --git a/test_plans/index.rst b/test_plans/index.rst
index 28f5a69..e15823e 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -133,6 +133,7 @@ The following are the test plans for the DPDK DTS automated test system.
     compressdev_qat_pmd_test_plan
     compressdev_zlib_pmd_test_plan
     enable_package_download_in_ice_driver_test_plan
+    multicast_test_plan
 
     veb_switch_test_plan
     vf_daemon_test_plan
@@ -151,7 +152,6 @@ The following are the test plans for the DPDK DTS automated test system.
     vhost_multi_queue_qemu_test_plan
     vhost_pmd_xstats_test_plan
     vhost_qemu_mtu_test_plan
-    vhost_tso_test_plan
     vhost_user_live_migration_test_plan
     vm_power_manager_test_plan
     vmdq_test_plan
@@ -230,3 +230,8 @@ The following are the test plans for the DPDK DTS automated test system.
     iavf_test_plan
     packet_capture_test_plan
     packet_ordering_test_plan
+    bbdev_test_plan
+    performance_thread_test_plan
+
+    fips_cryptodev_test_plan
+    flow_filtering_test_plan
\ No newline at end of file
diff --git a/test_plans/sriov_kvm_test_plan.rst b/test_plans/sriov_kvm_test_plan.rst
index f22b2ed..1b84f78 100644
--- a/test_plans/sriov_kvm_test_plan.rst
+++ b/test_plans/sriov_kvm_test_plan.rst
@@ -135,7 +135,7 @@ Send 10 packets with VF0 mac address and make sure the packets will be
 forwarded by VF1.
 
 Test Case2: Mirror Traffic between 2VMs with Pool up mirroring
-===========================================================
+==============================================================
 
 Set up common 2VM prerequisites.
 
@@ -216,7 +216,7 @@ After test need reset mirror rule::
     PF testpmd-> reset port 0 mirror-rule 0
 
 Test Case6: Mirror Traffic between 2VMs with Vlan mirroring
-==========================================================
+===========================================================
 
 Set up common 2VM prerequisites.
 
@@ -237,7 +237,7 @@ After test need reset mirror rule::
     PF testpmd-> reset port 0 mirror-rule 0
 
 Test Case7: Mirror Traffic between 2VMs with up link mirroring & down link mirroring
-==================================================================================
+====================================================================================
 
 Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts::
 
@@ -247,9 +247,12 @@ Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts::
 When mirroring only between two Vfs, pool up (or down) mirroring and up (or down) link mirroring lead
 to the same behavior, so we randomly choose one way to mirror in both up and down directions.
 up link mirroring as below:
+
    1. Pool up mirroring (Case 2)
    2. Uplink port mirroring(Case 4)
+
 down link mirroring as below:
+
    1. Pool down mirroring(Fortville only, Case 3)
    2. Downlink port mirroring(Case 5)
 
@@ -274,7 +277,7 @@ After test need reset mirror rule::
     PF testpmd-> reset port 0 mirror-rule 1
 
 Test Case8: Mirror Traffic between 2VMs with Vlan & with up link mirroring & down link mirroring
-=============================================================================================
+================================================================================================
 
 Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts::
 
@@ -284,9 +287,12 @@ Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts::
 When mirroring only between two Vfs, pool up (or down) mirroring and up (or down) link mirroring lead
 to the same behavior, so we randomly choose one way to mirror in both up and down directions.
 up link mirroring as below:
+
    1. Pool up mirroring (Case 2)
    2. Uplink port mirroring(Case 4)
+
 down link mirroring as below:
+
    1. Pool down mirroring(Fortville only, Case 3)
    2. Downlink port mirroring(Case 5)
 
-- 
2.17.2
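The sriov_kvm hunks above all extend section-title underlines to match the title length, which is the cause of Sphinx's "Title underline too short" warning. As a rough illustration (not part of the patch; the function name is made up), this class of warning can be caught before a build with a few lines of Python:

```python
import re

# An RST section underline: one punctuation character repeated (2+ times).
UNDERLINE = re.compile(r"^([=\-~^\"'#*+`:._])\1+\s*$")

def short_underlines(text):
    """Return (line_number, title) pairs whose underline is too short."""
    lines = text.splitlines()
    bad = []
    for i, line in enumerate(lines[:-1]):
        nxt = lines[i + 1]
        if line.strip() and UNDERLINE.match(nxt) \
                and len(nxt.rstrip()) < len(line.rstrip()):
            bad.append((i + 1, line.rstrip()))
    return bad

doc = "Test Case2: Mirror Traffic\n=====================\n"
print(short_underlines(doc))   # flags the title on line 1
```

Underlines longer than the title are fine; only short ones are flagged, matching what the warning actually objects to.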


^ permalink raw reply	[flat|nested] 7+ messages in thread
* [dts] [PATCH V1] test_plans:fix build warning
@ 2019-09-26  3:27 Wenjie Li
  0 siblings, 0 replies; 7+ messages in thread
From: Wenjie Li @ 2019-09-26  3:27 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

fix build warning

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/index.rst                          |  3 +-
 .../vhost_user_live_migration_test_plan.rst   | 60 +++++++++----------
 test_plans/vm2vm_virtio_pmd_test_plan.rst     | 18 +++---
 3 files changed, 41 insertions(+), 40 deletions(-)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index e15823e..a10d171 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -158,6 +158,7 @@ The following are the test plans for the DPDK DTS automated test system.
     vf_l3fwd_test_plan
     softnic_test_plan
     vm_hotplug_test_plan
+    mdd_test_plan
 
     virtio_1.0_test_plan
     vhost_enqueue_interrupt_test_plan
@@ -234,4 +235,4 @@ The following are the test plans for the DPDK DTS automated test system.
     performance_thread_test_plan
 
     fips_cryptodev_test_plan
-    flow_filtering_test_plan
\ No newline at end of file
+    flow_filtering_test_plan
diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
index 3814196..ec32e82 100644
--- a/test_plans/vhost_user_live_migration_test_plan.rst
+++ b/test_plans/vhost_user_live_migration_test_plan.rst
@@ -94,7 +94,7 @@ On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4.  Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
@@ -148,20 +148,20 @@ On the backup server, run the vhost testpmd on the host and launch VM:
 
 10. Start Live migration, ensure the traffic is continuous::
 
-    host server # telnet localhost 3333
-    host server # (qemu)migrate -d tcp:backup server:4444
-    host server # (qemu)info migrate
-    host server # Check if the migrate is active and not failed.
+     host server # telnet localhost 3333
+     host server # (qemu)migrate -d tcp:backup server:4444
+     host server # (qemu)info migrate
+     host server # Check if the migrate is active and not failed.
 
 11. Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done::
 
-    host server # (qemu)info migrate
-    host server # (qemu)Migration status: completed
+     host server # (qemu)info migrate
+     host server # (qemu)Migration status: completed
 
 12. After live migration, go to the backup server and check if the virtio-pmd can continue to receive packets::
 
-    backup server # ssh -p 5555 127.0.0.1
-    backup VM # screen -r vm
+     backup server # ssh -p 5555 127.0.0.1
+     backup VM # screen -r vm
 
 Test Case 2: migrate with virtio-pmd zero-copy enabled
 ======================================================
@@ -193,7 +193,7 @@ On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4.  Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
@@ -247,21 +247,21 @@ On the backup server, run the vhost testpmd on the host and launch VM:
 
 10. Start Live migration, ensure the traffic is continuous::
 
-    host server # telnet localhost 3333
-    host server # (qemu)migrate -d tcp:backup server:4444
-    host server # (qemu)info migrate
-    host server # Check if the migrate is active and not failed.
+     host server # telnet localhost 3333
+     host server # (qemu)migrate -d tcp:backup server:4444
+     host server # (qemu)info migrate
+     host server # Check if the migrate is active and not failed.
 
 11. Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done::
 
-    host server # (qemu)info migrate
-    host server # (qemu)Migration status: completed
+     host server # (qemu)info migrate
+     host server # (qemu)Migration status: completed
 
 12. After live migration, go to the backup server start vhost testpmd and check if the virtio-pmd can continue to receive packets::
 
-    backup server # testpmd>start
-    backup server # ssh -p 5555 127.0.0.1
-    backup VM # screen -r vm
+     backup server # testpmd>start
+     backup server # ssh -p 5555 127.0.0.1
+     backup VM # screen -r vm
 
 Test Case 3: migrate with virtio-net
 ====================================
@@ -294,7 +294,7 @@ On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4.  Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
@@ -343,13 +343,13 @@ On the backup server, run the vhost testpmd on the host and launch VM:
 
 10. Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done::
 
-    host server # (qemu)info migrate
-    host server # (qemu)Migration status: completed
+     host server # (qemu)info migrate
+     host server # (qemu)Migration status: completed
 
 11. After live migration, go to the backup server and check if the virtio-net can continue to receive packets::
 
-    backup server # ssh -p 5555 127.0.0.1
-    backup VM # screen -r vm
+     backup server # ssh -p 5555 127.0.0.1
+     backup VM # screen -r vm
 
 Test Case 4: adjust virtio-net queue numbers while migrating with virtio-net
 ============================================================================
@@ -382,7 +382,7 @@ On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4.  Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
@@ -431,14 +431,14 @@ On the backup server, run the vhost testpmd on the host and launch VM:
 
 10. Change virtio-net queue numbers from 1 to 4 while migrating::
 
-    host server # ethtool -L ens3 combined 4
+     host server # ethtool -L ens3 combined 4
 
 11. Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done::
 
-    host server # (qemu)info migrate
-    host server # (qemu)Migration status: completed
+     host server # (qemu)info migrate
+     host server # (qemu)Migration status: completed
 
 12. After live migration, go to the backup server and check if the virtio-net can continue to receive packets::
 
-    backup server # ssh -p 5555 127.0.0.1
-    backup VM # screen -r vm
\ No newline at end of file
+     backup server # ssh -p 5555 127.0.0.1
+     backup VM # screen -r vm
diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
index ead7d58..06c76b8 100644
--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
@@ -326,10 +326,10 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
 
 11. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::
 
-    ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
-    testpmd>set fwd mac
-    testpmd>set burst 1
-    testpmd>start tx_first 10
+     ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
+     testpmd>set fwd mac
+     testpmd>set burst 1
+     testpmd>start tx_first 10
 
 12. Check payload is correct in each dumped packets.
 
@@ -408,10 +408,10 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
 
 11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
 
-    ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
-    testpmd>set fwd mac
-    testpmd>set burst 1
-    testpmd>start tx_first 10
+     ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+     testpmd>set fwd mac
+     testpmd>set burst 1
+     testpmd>start tx_first 10
 
 12. Check payload is correct in each dumped packets.
 
@@ -450,4 +450,4 @@ Test Case 7: vm2vm vhost-user/virtio1.1-pmd mergeable path test with payload che
     testpmd>set burst 1
     testpmd>start tx_first 10
 
-5. Check payload is correct in each dumped packets.
\ No newline at end of file
+5. Check payload is correct in each dumped packets.
-- 
2.17.2
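Two of the hunks above exist only to add the final newline that the `\ No newline at end of file` marker records. As a sketch (not part of the patch; the helper name is illustrative), such files can be found ahead of time like this:

```python
from pathlib import Path

def files_missing_final_newline(paths):
    """Return those paths whose last byte is not a newline."""
    missing = []
    for p in paths:
        data = Path(p).read_bytes()
        # Empty files are fine; only non-empty files need a trailing '\n'.
        if data and not data.endswith(b"\n"):
            missing.append(p)
    return missing
```

Running this over `test_plans/*.rst` before committing would keep the marker out of future diffs.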


* [dts] [PATCH V1] test_plans: fix build warning
@ 2019-06-10  1:43 Wenjie Li
  0 siblings, 0 replies; 7+ messages in thread
From: Wenjie Li @ 2019-06-10  1:43 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

fix build warnings in test plans.

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/eventdev_perf_test_plan.rst |  6 ++---
 test_plans/vhost_gro_test_plan.rst     | 31 +++++++++++++++++---------
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/test_plans/eventdev_perf_test_plan.rst b/test_plans/eventdev_perf_test_plan.rst
index ed1312a..0b9da9f 100644
--- a/test_plans/eventdev_perf_test_plan.rst
+++ b/test_plans/eventdev_perf_test_plan.rst
@@ -51,7 +51,8 @@ Description: Execute performance test with Atomic_atq type of stage in multi-flo
 
    # ./build/dpdk-test-eventdev -l 22-23 -w eventdev_device_bus_id -w device_bus_id -- --prod_type_ethdev --nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=A --wlcores=23
 
-    Parameters:
+    Parameters::
+	
        -l CORELIST        : List of cores to run on
                             The argument format is <c1>[-c2][,c3[-c4],...]
                             where c1, c2, etc are core indexes between 0 and 24
@@ -272,5 +273,4 @@ Description: Execute performance test with Ordered_queue type of stage in multi-
 
 2. Use Ixia to send huge number of packets(with same 5-tuple and different 5-tuple)
 
-3. Observe the speed of packets received(Rx-rate) on Ixia.
-
+3. Observe the speed of packets received(Rx-rate) on Ixia.
\ No newline at end of file
diff --git a/test_plans/vhost_gro_test_plan.rst b/test_plans/vhost_gro_test_plan.rst
index a0ea3b9..e2652b3 100644
--- a/test_plans/vhost_gro_test_plan.rst
+++ b/test_plans/vhost_gro_test_plan.rst
@@ -92,7 +92,9 @@ Test Case1: DPDK GRO lightmode test with tcp traffic
     testpmd>port start 1
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -145,7 +147,9 @@ Test Case2: DPDK GRO heavymode test with tcp traffic
     testpmd>port start 1
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -198,7 +202,9 @@ Test Case3: DPDK GRO heavymode_flush4 test with tcp traffic
     testpmd>port start 1
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -226,11 +232,14 @@ Test Case4: DPDK GRO test with vxlan traffic
 
 Vxlan topology
 --------------
-  VM          Host
-50.1.1.2      50.1.1.1
-   |           |
-1.1.2.3       1.1.2.4
-   |------------Testpmd------------|
+
+::
+
+    VM          Host
+  50.1.1.2      50.1.1.1
+     |           |
+  1.1.2.3       1.1.2.4
+     |------------Testpmd------------|
 
 1. Connect two nic port directly, put nic2 into another namesapce and create Host VxLAN port::
 
@@ -269,7 +278,9 @@ Vxlan topology
     testpmd>port start 1
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -294,4 +305,4 @@ Vxlan topology
 5. Start iperf test, run iperf server at vm side and iperf client at host side, check throughput in log::
 
     Host side :  ip netns exec t2 iperf -c 50.1.1.2 -i 2 -t 60 -f g -m
-    VM side:     iperf -s -f g
+    VM side:     iperf -s -f g
\ No newline at end of file
-- 
2.17.2
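The recurring fix in this patch turns `...driver::` into `...driver:` followed by a blank line and a detached `::`, so the literal block inside the numbered list item gets the separation docutils expects. A hypothetical helper mirroring that rewrite (the name and exact indentation are illustrative, taken from the pattern in the hunks, not from any DTS tooling):

```python
import re

def detach_literal_marker(line):
    """Split 'N.  some text::' into the detached form used in this patch:
    'N.  some text:', a blank line, then an indented '::' on its own."""
    m = re.match(r"^(\s*\d+\.\s+.*?)::\s*$", line)
    if not m:
        return [line]
    return [m.group(1) + ":", "", "  ::"]

print(detach_literal_marker(
    "3.  Set up vm with virto device and using kernel virtio-net driver::"))
```

Lines that do not end a numbered item with `::` pass through unchanged.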


* [dts] [PATCH V1] test_plans: fix build warning
@ 2019-05-22 10:51 Wenjie Li
  2019-05-29  2:36 ` Tu, Lijuan
  0 siblings, 1 reply; 7+ messages in thread
From: Wenjie Li @ 2019-05-22 10:51 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

From: "Wenjie Li" <wenjiex.a.li@intel.com>

fix build warnings in test plans

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/dpdk_gro_lib_test_plan.rst         | 36 +++++++++++++------
 test_plans/dpdk_gso_lib_test_plan.rst         | 16 ++++++---
 test_plans/macsec_for_ixgbe_test_plan.rst     | 14 ++++----
 test_plans/vf_l3fwd_test_plan.rst             |  4 +--
 .../vhost_dequeue_zero_copy_test_plan.rst     |  4 ++-
 5 files changed, 50 insertions(+), 24 deletions(-)

diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst
index 4ca78ef..410ae68 100644
--- a/test_plans/dpdk_gro_lib_test_plan.rst
+++ b/test_plans/dpdk_gro_lib_test_plan.rst
@@ -87,7 +87,9 @@ Modify the testpmd code as following::
 Test flow
 =========
 
-NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
+::
+
+  NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
 
 Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic
 =========================================================
@@ -119,7 +121,9 @@ Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic
     testpmd>port start 1
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -172,7 +176,9 @@ Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic
     testpmd>port start 1
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -225,7 +231,9 @@ Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic
     testpmd>port start 1
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -253,11 +261,15 @@ Test Case4: DPDK GRO test with vxlan traffic
 
 Vxlan topology
 --------------
-  VM          Host
-50.1.1.2      50.1.1.1
-   |           |
-1.1.2.3       1.1.2.4
-   |------------Testpmd------------|
+
+::
+
+    VM          Host
+  50.1.1.2      50.1.1.1
+     |           |
+  1.1.2.3       1.1.2.4
+     |------------Testpmd------------|
+
 
 1. Connect two nic port directly, put nic2 into another namesapce and create Host VxLAN port::
 
@@ -296,7 +308,9 @@ Vxlan topology
     testpmd>port start 1
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -321,4 +335,4 @@ Vxlan topology
 5. Start iperf test, run iperf server at vm side and iperf client at host side, check throughput in log::
 
     Host side :  ip netns exec t2 iperf -c 50.1.1.2 -i 2 -t 60 -f g -m
-    VM side:     iperf -s -f g
+    VM side:     iperf -s -f g
\ No newline at end of file
diff --git a/test_plans/dpdk_gso_lib_test_plan.rst b/test_plans/dpdk_gso_lib_test_plan.rst
index e88cff7..8de5f56 100644
--- a/test_plans/dpdk_gso_lib_test_plan.rst
+++ b/test_plans/dpdk_gso_lib_test_plan.rst
@@ -81,7 +81,9 @@ Modify the testpmd code as following::
 Test flow
 =========
 
-NIC2(In kernel) <- NIC1(DPDK) <- testpmd(csum fwd) <- Vhost <- Virtio-net
+::
+
+  NIC2(In kernel) <- NIC1(DPDK) <- testpmd(csum fwd) <- Vhost <- Virtio-net
 
 Test Case1: DPDK GSO test with tcp traffic
 ==========================================
@@ -109,7 +111,9 @@ Test Case1: DPDK GSO test with tcp traffic
     testpmd>port start 0
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -169,7 +173,9 @@ Test Case3: DPDK GSO test with vxlan traffic
     testpmd>port start 0
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
@@ -221,7 +227,9 @@ Test Case4: DPDK GSO test with gre traffic
     testpmd>port start 0
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
diff --git a/test_plans/macsec_for_ixgbe_test_plan.rst b/test_plans/macsec_for_ixgbe_test_plan.rst
index 893e7a6..79b7d57 100644
--- a/test_plans/macsec_for_ixgbe_test_plan.rst
+++ b/test_plans/macsec_for_ixgbe_test_plan.rst
@@ -73,12 +73,14 @@ Prerequisites
 1. Hardware:
 
    * 1x Niantic NIC (2x 10G)
-     port0:
-       pci address: 07:00.0
-       mac address: 00:00:00:00:00:01
-     port1:
-       pci address: 07:00.1
-       mac address: 00:00:00:00:00:02
+     ::
+
+       port0:
+         pci address: 07:00.0
+         mac address: 00:00:00:00:00:01
+       port1:
+         pci address: 07:00.1
+         mac address: 00:00:00:00:00:02
 
    * 2x IXIA ports (10G)
 
diff --git a/test_plans/vf_l3fwd_test_plan.rst b/test_plans/vf_l3fwd_test_plan.rst
index 97e3ab7..9260cb0 100644
--- a/test_plans/vf_l3fwd_test_plan.rst
+++ b/test_plans/vf_l3fwd_test_plan.rst
@@ -89,7 +89,7 @@ Setup overview
 Set up topology as above based on the NIC used.
 
 Test Case 1: Measure performance with kernel PF & dpdk VF
-========================================================
+=========================================================
 
 1, Bind PF ports to kernel driver, i40e or ixgbe, then create 1 VF from each PF,
 take XL710 for example::
@@ -132,7 +132,7 @@ Fill out this table with results.
 
 
 Test Case 2: Measure performance with dpdk PF & dpdk VF
-======================================================
+=======================================================
 
 1, Bind PF ports to igb_uio driver, then create 1 VF from each PF,
 take XL710 for example::
diff --git a/test_plans/vhost_dequeue_zero_copy_test_plan.rst b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
index 3ec5e53..3c74c77 100644
--- a/test_plans/vhost_dequeue_zero_copy_test_plan.rst
+++ b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
@@ -323,7 +323,9 @@ Test topology: NIC2(In kernel) <- NIC1(DPDK) <- testpmd(csum fwd) <- Vhost <- Vi
     testpmd>port start 0
     testpmd>start
 
-3.  Set up vm with virto device and using kernel virtio-net driver::
+3.  Set up vm with virto device and using kernel virtio-net driver:
+
+  ::
 
     taskset -c 13 \
     qemu-system-x86_64 -name us-vhost-vm1 \
-- 
2.17.2


* [dts] [PATCH V1] test_plans: fix build warning
@ 2019-03-01  6:45 Wenjie Li
  2019-03-01  7:57 ` Tu, Lijuan
  0 siblings, 1 reply; 7+ messages in thread
From: Wenjie Li @ 2019-03-01  6:45 UTC (permalink / raw)
  To: dts; +Cc: Wenjie Li

fix build warnings in these test_plans:
1. update the list in index.rst
2. fix the formatting

Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
---
 test_plans/index.rst                          | 80 ++++++++++------
 .../ipsec_gw_cryptodev_func_test_plan.rst     | 54 ++++++-----
 test_plans/l2fwd_cryptodev_func_test_plan.rst | 94 ++++++++++---------
 test_plans/pmd_bonded_8023ad_test_plan.rst    |  6 +-
 ...host_single_core_performance_test_plan.rst | 12 +--
 ...rtio_single_core_performance_test_plan.rst | 20 ++--
 .../vlan_ethertype_config_test_plan.rst       |  2 +-
 7 files changed, 149 insertions(+), 119 deletions(-)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index a114231..b7d155b 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -42,57 +42,93 @@ The following are the test plans for the DPDK DTS automated test system.
     checksum_offload_test_plan
     cloud_filter_test_plan
     coremask_test_plan
-    crypto_perf_test_plan
+    cryptodev_perf_crypto-perf_test_plan
+    ddp_gtp_qregion_test_plan
+    ddp_gtp_test_plan
+    ddp_mpls_test_plan
+    ddp_ppp_l2tp_test_plan
     dual_vlan_test_plan
     dynamic_config_test_plan
+    dynamic_flowtype_test_plan
+    dynamic_queue_test_plan
     etag_test_plan
+    external_memory_test_plan
     external_mempool_handler_test_plan
     fdir_test_plan
     floating_veb_test_plan
+    flow_classify_softnic_test_plan
     fortville_rss_granularity_config_test_plan
     ftag_test_plan
     generic_filter_test_plan
-    hotplug_test_plan
+    generic_flow_api_test_plan
     hotplug_mp_test_plan
+    hotplug_test_plan
     ieee1588_test_plan
+    inline_ipsec_test_plan
     interrupt_pmd_test_plan
+    ip_pipeline_test_plan
     ipfrag_test_plan
     ipgre_test_plan
-    ip_pipeline_test_plan
+    ipsec_gw_cryptodev_func_test_plan
     ipv4_reassembly_test_plan
+    ixgbe_vf_get_extra_queue_information_test_plan
     jumboframes_test_plan
     kni_test_plan
-    l2fwd_crypto_test_plan
+    l2fwd_cryptodev_func_test_plan
     l2fwd_test_plan
     l3fwd_em_test_plan
     l3fwd_test_plan
+    l3fwdacl_test_plan
     link_flowctrl_test_plan
     link_status_interrupt_test_plan
+    loopback_multi_paths_port_restart_performance_test_plan
+    loopback_multi_paths_port_restart_test_plan
     mac_filter_test_plan
     macsec_for_ixgbe_test_plan
-    mempool_exthandler_test_plan
+    metering_and_policing_test_plan
+    multiple_pthread_test_plan
     NICStatistics_test_plan
     nvgre_test_plan
+    pmd_bonded_8023ad_test_plan
     pmd_bonded_test_plan
+    pmd_stacked_bonded_test_plan
+    pmd_test_plan
     pmdpcap_test_plan
     pmdrss_hash_test_plan
     pmdrssreta_test_plan
-    pmd_test_plan
     ptype_mapping_test_plan
+    pvp_multi_paths_performance_test_plan
+    pvp_multi_paths_vhost_single_core_performance_test_plan
+    pvp_multi_paths_virtio_single_core_performance_test_plan
+    qinq_filter_test_plan
+    qos_api_test_plan
+    qos_meter_test_plan
+    qos_sched_test_plan
+    queue_region_test_plan
     queue_start_stop_test_plan
+    rss_to_rte_flow_test_plan
+    runtime_vf_queue_number_kernel_test_plan
+    runtime_vf_queue_number_maxinum_test_plan
+    runtime_vf_queue_number_test_plan
+    rxtx_offload_test_plan
     scatter_test_plan
     short_live_test_plan
     shutdown_api_test_plan
     sriov_kvm_test_plan
     stability_test_plan
+    sw_eventdev_pipeline_sample_test_plan
     tso_test_plan
     tx_preparation_test_plan
     uni_pkt_test_plan
     userspace_ethtool_test_plan
+    vlan_ethertype_config_test_plan
+    vlan_test_plan
+    vxlan_test_plan
     veb_switch_test_plan
     vf_daemon_test_plan
-    vf_jumboframe_test_plan
     vf_interrupt_pmd_test_plan
+    vf_jumboframe_test_plan
+    vf_kernel_test_plan
     vf_macfilter_test_plan
     vf_offload_test_plan
     vf_packet_rxtx_test_plan
@@ -101,45 +137,26 @@ The following are the test plans for the DPDK DTS automated test system.
     vf_rss_test_plan
     vf_to_vf_nic_bridge_test_plan
     vf_vlan_test_plan
+    vhost_multi_queue_qemu_test_plan
     vhost_pmd_xstats_test_plan
+    vhost_qemu_mtu_test_plan
     vhost_tso_test_plan
     vhost_user_live_migration_test_plan
     virtio_1.0_test_plan
-    vlan_ethertype_config_test_plan
-    vlan_test_plan
-    vmdq_test_plan
     vm_power_manager_test_plan
-    vxlan_test_plan
-    ixgbe_vf_get_extra_queue_information_test_plan
-    queue_region_test_plan
-    inline_ipsec_test_plan
-    sw_eventdev_pipeline_sample_test_plan
-    dynamic_flowtype_test_plan
-    vf_kernel_test_plan
-    multiple_pthread_test_plan
-    qinq_filter_test_plan
-    generic_flow_api_test_plan
-    rss_to_rte_flow_test_plan
-    ddp_gtp_test_plan
-    ddp_gtp_qregion_test_plan
-    ddp_ppp_l2tp_test_plan
-    ddp_mpls_test_plan
-    runtime_queue_number_test_plan
-    dynamic_queue_test_plan
-    vhost_multi_queue_qemu_test_plan
-    vhost_qemu_mtu_test_plan
+    vmdq_test_plan
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
-    unit_tests_cryptodev_test_plan
+    unit_tests_cryptodev_func_test_plan
     unit_tests_dump_test_plan
     unit_tests_eal_test_plan
     unit_tests_kni_test_plan
+    unit_tests_loopback_test_plan
     unit_tests_lpm_test_plan
     unit_tests_mbuf_test_plan
     unit_tests_mempool_test_plan
     unit_tests_pmd_perf_test_plan
-    unit_tests_loopback_test_plan
     unit_tests_power_test_plan
     unit_tests_qos_test_plan
     unit_tests_ringpmd_test_plan
@@ -159,3 +176,4 @@ The following are the test plans for the DPDK DTS automated test system.
     ptpclient_test_plan
     distributor_test_plan
     efd_test_plan
+    example_build_test_plan
diff --git a/test_plans/ipsec_gw_cryptodev_func_test_plan.rst b/test_plans/ipsec_gw_cryptodev_func_test_plan.rst
index fd26fab..a103fac 100644
--- a/test_plans/ipsec_gw_cryptodev_func_test_plan.rst
+++ b/test_plans/ipsec_gw_cryptodev_func_test_plan.rst
@@ -126,23 +126,25 @@ Prerequisites
 
 To test CryptoDev API, an example ipsec-secgw is added into DPDK.
 
-The test commands of ipsec-secgw is below:
+The test commands of ipsec-secgw are below::
 
 
-   ./build/ipsec-secgw [EAL options] --
-                        -p PORTMASK -P -u PORTMASK -j FRAMESIZE
-                        -l -w REPLAY_WINOW_SIZE -e -a
-                        --config (port,queue,lcore)[,(port,queue,lcore]
-                        --single-sa SAIDX
-                        --rxoffload MASK
-                        --txoffload MASK
-                        -f CONFIG_FILE_PATH
-compile the applications :
+    ./build/ipsec-secgw [EAL options] --
+        -p PORTMASK -P -u PORTMASK -j FRAMESIZE
+        -l -w REPLAY_WINDOW_SIZE -e -a
+        --config (port,queue,lcore)[,(port,queue,lcore)]
+        --single-sa SAIDX
+        --rxoffload MASK
+        --txoffload MASK
+        -f CONFIG_FILE_PATH
+
+compile the applications::
 
     make -C ./examples/ipsec-secgw
 
 
-Configuration File Syntax
+Configuration File Syntax:
+
     The ``-f CONFIG_FILE_PATH`` option enables the application read and
     parse the configuration file specified, and configures the application
     with a given set of SP, SA and Routing entries accordingly. The syntax of
@@ -196,10 +198,11 @@ Cryptodev AES-NI algorithm validation matrix is showed in table below.
 | CIPHER_HASH | 3DES_CBC    | ENCRYPT     | 128         |  SHA1_HMAC  | GENERATE    |
 +-------------+-------------+-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -w 0000:60:00.0 -w 0000:60:00.2
- --vdev crypto_aesni_mb_pmd_1 --vdev=crypto_aesni_mb_pmd_2 -l 9,10,11 -n 6  -- -P  --config "(0,0,10),(1,0,11)"
--u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg
+    --vdev crypto_aesni_mb_pmd_1 --vdev=crypto_aesni_mb_pmd_2 -l 9,10,11 -n 6  -- -P  --config "(0,0,10),(1,0,11)"
+    -u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg
 
 Sub-case: QAT test case
 ---------------------------
@@ -226,10 +229,11 @@ Cryptodev QAT algorithm validation matrix is showed in table below.
 | AEAD        | AES_GCM     | ENCRYPT     | 128         |
 +-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -w 0000:60:00.0 -w 0000:60:00.2
--w 0000:1a:01.0 -l 9,10,11 -n 6  -- -P  --config "(0,0,10),(1,0,11)" -u 0x1 -p 0x3
--f /root/dts/local_conf/ipsec_test.cfg
+    -w 0000:1a:01.0 -l 9,10,11 -n 6  -- -P  --config "(0,0,10),(1,0,11)" -u 0x1 -p 0x3
+    -f /root/dts/local_conf/ipsec_test.cfg
 
 Sub-case: AES-GCM test case
 ------------------------------
@@ -242,10 +246,11 @@ Cryptodev AES-GCM algorithm validation matrix is showed in table below.
 | AEAD        | AES_GCM     | ENCRYPT     | 128         |
 +-------------+-------------+-------------+-------------+
 
-example:
-     ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -w 0000:60:00.0 -w 0000:60:00.2
---vdev crypto_aesni_gcm_pmd_1 --vdev=crypto_aesni_gcm_pmd_2 -l 9,10,11 -n 6  -- -P  --config "(0,0,10),(1,0,11)"
--u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg
+example::
+
+    ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -w 0000:60:00.0 -w 0000:60:00.2
+    --vdev crypto_aesni_gcm_pmd_1 --vdev=crypto_aesni_gcm_pmd_2 -l 9,10,11 -n 6  -- -P  --config "(0,0,10),(1,0,11)"
+    -u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg
 
 Sub-case: NULL test case
 ------------------------------
@@ -258,7 +263,8 @@ Cryptodev NULL algorithm validation matrix is showed in table below.
 | CIPHER_HASH | NULL        | ENCRYPT     | 0           |  NULL       | GENERATE    |
 +-------------+-------------+-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/ipsec-secgw/build/ipsec-secgw --socket-mem 2048,0 --legacy-mem -w 0000:60:00.0 -w 0000:60:00.2
---vdev crypto_null_pmd_1 --vdev=crypto_null_pmd_2 -l 9,10,11 -n 6  -- -P  --config "(0,0,10),(1,0,11)"
--u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg
\ No newline at end of file
+    --vdev crypto_null_pmd_1 --vdev=crypto_null_pmd_2 -l 9,10,11 -n 6  -- -P  --config "(0,0,10),(1,0,11)"
+    -u 0x1 -p 0x3 -f /root/dts/local_conf/ipsec_test.cfg
diff --git a/test_plans/l2fwd_cryptodev_func_test_plan.rst b/test_plans/l2fwd_cryptodev_func_test_plan.rst
index 27b90b9..4176770 100644
--- a/test_plans/l2fwd_cryptodev_func_test_plan.rst
+++ b/test_plans/l2fwd_cryptodev_func_test_plan.rst
@@ -277,7 +277,7 @@ Prerequisites
 
 To test CryptoDev API, an example l2fwd-crypto is added into DPDK.
 
-The test commands of l2fwd-crypto is below:
+The test commands of l2fwd-crypto are below::
 
     ./build/l2fwd-crypto [EAL options] -- [-p PORTMASK] [-q NQ] [-s] [-T PERIOD] /
     [--cdev_type HW/SW/ANY] [--chain HASH_CIPHER/CIPHER_HASH/CIPHER_ONLY/HASH_ONLY/AEAD] /
@@ -316,7 +316,7 @@ correctly. The steps how to use ZUClibrary is described in DPDK code directory
 dpdk/doc/guides/cryptodevs/zuc.rst.
 
 Test case: Cryptodev l2fwd test
-=============================
+===============================
 
 For function test, the DUT forward UDP packets generated by scapy.
 
@@ -334,7 +334,7 @@ and compare the payload with correct answer pre-stored in scripts::
     |          | <-------------> |          |
     +----------+                 +----------+
 
-compile the applications:
+compile the applications::
 
     make -C ./examples/l2fwd-crypto
 
@@ -376,14 +376,15 @@ Cryptodev AES-NI algorithm validation matrix is showed in table below.
 | CIPHER_HASH | 3DES_CBC    | ENCRYPT     | 128         |  SHA256_HMAC| GENERATE    |
 +-------------+-------------+-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/l2fwd-crypto/build/l2fwd-crypto --socket-mem 1024,0 --legacy-mem -l 6,7,8 -n 2
---vdev crypto_aesni_mb --vdev crypto_aesni_mb -- -p 0x1 --chain CIPHER_ONLY --cdev_type SW
---cipher_algo aes-cbc --cipher_op ENCRYPT --cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
---cipher_iv 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f --no-mac-updating
+    --vdev crypto_aesni_mb --vdev crypto_aesni_mb -- -p 0x1 --chain CIPHER_ONLY --cdev_type SW
+    --cipher_algo aes-cbc --cipher_op ENCRYPT --cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
+    --cipher_iv 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f --no-mac-updating
 
 Sub-case: QAT test case
----------------------------
+-----------------------
 
 Cryptodev QAT algorithm validation matrix is showed in table below.
 
@@ -435,14 +436,15 @@ Cryptodev QAT algorithm validation matrix is showed in table below.
 | AEAD        | AES_CCM     | ENCRYPT     | 128         |
 +-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/l2fwd-crypto/build/l2fwd-crypto --socket-mem 1024,0 --legacy-mem -l 6,7,8 -n 2
--- -p 0x1 --chain CIPHER_ONLY --cdev_type HW --cipher_algo aes-cbc --cipher_op ENCRYPT
---cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
---cipher_iv 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f --no-mac-updating
+    -- -p 0x1 --chain CIPHER_ONLY --cdev_type HW --cipher_algo aes-cbc --cipher_op ENCRYPT
+    --cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
+    --cipher_iv 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f --no-mac-updating
 
 Sub-case: OPENSSL test case
---------------------------
+---------------------------
 
 Cryptodev OPENSSL algorithm validation matrix is showed in table below.
 
@@ -478,15 +480,16 @@ Cryptodev OPENSSL algorithm validation matrix is showed in table below.
 | CIPHER_HASH | 3DES_CBC    | ENCRYPT     | 128         |  SHA256_HMAC| GENERATE    |
 +-------------+-------------+-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/l2fwd-crypto/build/l2fwd-crypto --socket-mem 1024,0 --legacy-mem -l 6,7,8 -n 2
---vdev crypto_openssl_pmd --vdev crypto_openssl_pmd -- -p 0x1 --chain CIPHER_ONLY
---cdev_type SW --cipher_algo aes-cbc --cipher_op ENCRYPT
---cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
---cipher_iv 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f --no-mac-updating
+    --vdev crypto_openssl_pmd --vdev crypto_openssl_pmd -- -p 0x1 --chain CIPHER_ONLY
+    --cdev_type SW --cipher_algo aes-cbc --cipher_op ENCRYPT
+    --cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
+    --cipher_iv 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f --no-mac-updating
 
 Sub-case: QAT/SNOW3G Snow3G test case
-------------------------------
+-------------------------------------
 
 Cryptodev Snow3G algorithm validation matrix is showed in table below.
 Cipher only, hash-only and chaining functionality is supported for Snow3g.
@@ -503,14 +506,15 @@ Cipher only, hash-only and chaining functionality is supported for Snow3g.
 | HASH_ONLY   | UIA2        | GENERATE    | 128         |
 +-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/l2fwd-crypto/build/l2fwd-crypto --socket-mem 1024,0 --legacy-mem -l 6,7,8 -n 2
--- -p 0x1 --chain HASH_ONLY --cdev_type HW --auth_algo snow3g-uia2 --auth_op GENERATE
---auth_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
---auth_iv 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 --digest 4 --no-mac-updating
+    -- -p 0x1 --chain HASH_ONLY --cdev_type HW --auth_algo snow3g-uia2 --auth_op GENERATE
+    --auth_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
+    --auth_iv 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 --digest 4 --no-mac-updating
 
 Sub-case: QAT/KASUMI Kasumi test case
-------------------------------
+-------------------------------------
 
 Cryptodev Kasumi algorithm validation matrix is showed in table below.
 Cipher only, hash-only and chaining functionality is supported for Kasumi.
@@ -527,14 +531,15 @@ Cipher only, hash-only and chaining functionality is supported for Kasumi.
 | HASH_ONLY   | F9          | GENERATE    | 128         |
 +-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/l2fwd-crypto/build/l2fwd-crypto --socket-mem 1024,0 --legacy-mem -l 6,7,8 -n 2
---vdev crypto_kasumi_pmd --vdev crypto_kasumi_pmd -- -p 0x1 --chain HASH_ONLY --cdev_type SW
---auth_algo kasumi-f9 --auth_op GENERATE
---auth_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f --digest 4 --no-mac-updating
+    --vdev crypto_kasumi_pmd --vdev crypto_kasumi_pmd -- -p 0x1 --chain HASH_ONLY --cdev_type SW
+    --auth_algo kasumi-f9 --auth_op GENERATE
+    --auth_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f --digest 4 --no-mac-updating
 
 Sub-case: QAT/ZUC Zuc test case
-------------------------------
+-------------------------------
 
 Cryptodev ZUC algorithm validation matrix is showed in table below.
 Cipher only, hash-only and chaining functionality is supported for ZUC.
@@ -551,14 +556,15 @@ Cipher only, hash-only and chaining functionality is supported for ZUC.
 | HASH_ONLY   | EIA3        | GENERATE    | 128         |
 +-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/l2fwd-crypto/build/l2fwd-crypto --socket-mem 1024,0 --legacy-mem -l 6,7,8 -n 2
---vdev crypto_zuc_pmd --vdev crypto_zuc_pmd -- -p 0x1 --chain HASH_ONLY --cdev_type SW
---auth_algo zuc-eia3 --auth_op GENERATE --auth_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
---auth_iv 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 --digest 4 --no-mac-updating
+    --vdev crypto_zuc_pmd --vdev crypto_zuc_pmd -- -p 0x1 --chain HASH_ONLY --cdev_type SW
+    --auth_algo zuc-eia3 --auth_op GENERATE --auth_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
+    --auth_iv 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 --digest 4 --no-mac-updating
 
 Sub-case: AESNI-GCM test case
---------------------------
+-----------------------------
 
 Cryptodev AESNI-GCM algorithm validation matrix is showed in table below.
 
@@ -578,15 +584,16 @@ Cryptodev AESNI-GCM algorithm validation matrix is showed in table below.
 | CIPHER_HASH | AES-GMAC    | ENCRYPT     | 128         |  SHA256_HMAC| GENERATE    |
 +-------------+-------------+-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/l2fwd-crypto/build/l2fwd-crypto --socket-mem 1024,0 --legacy-mem -l 6,7,8 -n 2
---vdev crypto_aesni_gcm_pmd --vdev crypto_aesni_gcm_pmd -- -p 0x1 --chain AEAD --cdev_type SW
---aead_algo aes-gcm --aead_op ENCRYPT --aead_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
---aead_iv 00:01:02:03:04:05:06:07:08:09:0a:0b --aad 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
---digest 16 --no-mac-updating
+    --vdev crypto_aesni_gcm_pmd --vdev crypto_aesni_gcm_pmd -- -p 0x1 --chain AEAD --cdev_type SW
+    --aead_algo aes-gcm --aead_op ENCRYPT --aead_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
+    --aead_iv 00:01:02:03:04:05:06:07:08:09:0a:0b --aad 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
+    --digest 16 --no-mac-updating
 
 Sub-case: QAT/NULL null test case
-------------------------------
+---------------------------------
 
 Cryptodev NULL algorithm validation matrix is showed in table below.
 Cipher only, hash-only and chaining functionality is supported for NULL.
@@ -603,7 +610,8 @@ Cipher only, hash-only and chaining functionality is supported for NULL.
 | HASH_ONLY   | NULL        | GENERATE    | 0           |
 +-------------+-------------+-------------+-------------+
 
-example:
+example::
+
     ./examples/l2fwd-crypto/build/l2fwd-crypto --socket-mem 2048,0 --legacy-mem -l 9,10,66 -n 6
---vdev crypto_null_pmd --vdev crypto_null_pmd  --  -p 0x1 --chain CIPHER_ONLY --cdev_type SW
---cipher_algo null --cipher_op ENCRYPT --no-mac-updating
\ No newline at end of file
+    --vdev crypto_null_pmd --vdev crypto_null_pmd  --  -p 0x1 --chain CIPHER_ONLY --cdev_type SW
+    --cipher_algo null --cipher_op ENCRYPT --no-mac-updating
diff --git a/test_plans/pmd_bonded_8023ad_test_plan.rst b/test_plans/pmd_bonded_8023ad_test_plan.rst
index 03de918..4e45e08 100644
--- a/test_plans/pmd_bonded_8023ad_test_plan.rst
+++ b/test_plans/pmd_bonded_8023ad_test_plan.rst
@@ -138,11 +138,9 @@ steps
 
 Test Case : basic behavior mac
 ==============================
-#. bonded device's default mac is one of each slave's mac after one slave has
-   been added.
+#. bonded device's default mac is one of each slave's mac after one slave has been added.
 #. when no slave attached, mac should be 00:00:00:00:00:00
-#. slave's mac restore the MAC addresses that the slave has before they were
-enslaved.
+#. slave's mac restores the MAC address that the slave had before it was enslaved.
 
 steps
 -----
diff --git a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
index 19b3096..247f162 100644
--- a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-=====================================
+=======================================
 PVP multi-paths vhost single core Tests
-=====================================
+=======================================
 
 Description
 ===========
@@ -43,7 +43,7 @@ no-mergeable, Virtio 1.1 mergeable, Virtio 1.1 no-mergeable Path.
 For vhost single core test, give 2 cores for virtio and 1 core for vhost, use io fwd at virtio side to lower the virtio workload.
 
 Test Case 1: vhost single core performance test with Virtio 1.1 mergeable path
-=======================================================================
+==============================================================================
 
 flow: 
 TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG
@@ -67,7 +67,7 @@ TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG
 3. Send packet with packet generator with different packet size, check the throughput.
 
 Test Case 2: vhost single core performance test with Virtio 1.1 no-mergeable path
-=======================================================================
+=================================================================================
 
 flow: 
 TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG
@@ -91,7 +91,7 @@ TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG
 3. Send packet with packet generator with different packet size, check the throughput.
 
 Test Case 3: vhost single core performance test with Inorder mergeable path
-=======================================================================
+===========================================================================
 
 flow: 
 TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG
@@ -115,7 +115,7 @@ TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG
 3. Send packet with packet generator with different packet size, check the throughput.
 
 Test Case 4: vhost single core performance test with Inorder no-mergeable path
-=======================================================================
+==============================================================================
 
 flow: 
 TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG
diff --git a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
index e9378bf..5c64d5d 100644
--- a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-=====================================
+========================================
 PVP multi-paths virtio single core Tests
-=====================================
+========================================
 
 Description
 ===========
@@ -43,7 +43,7 @@ no-mergeable, Virtio 1.1 mergeable, Virtio 1.1 no-mergeable Path.
 For virtio single core test,give 2 cores for vhost and 1 core for virtio, use io fwd at vhost side to lower the vhost workload.
 
 Test Case 1: virtio single core performance test with Virtio 1.1 mergeable path
-=======================================================================
+===============================================================================
 
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
@@ -68,7 +68,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 3. Send packet with packet generator with different packet size, check the throughput.
 
 Test Case 2: virtio single core performance test with Virtio 1.1 no-mergeable path
-=======================================================================
+==================================================================================
 
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
@@ -93,7 +93,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 3. Send packet with packet generator with different packet size, check the throughput.
 
 Test Case 3: virtio single core performance test with Inorder mergeable path
-=======================================================================
+============================================================================
 
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
@@ -118,7 +118,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 3. Send packet with packet generator with different packet size, check the throughput.
 
 Test Case 4: virtio single core performance test with Inorder no-mergeable path
-=======================================================================
+===============================================================================
 
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
@@ -143,7 +143,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 3. Send packet with packet generator with different packet size, check the throughput.
 
 Test Case 5: virtio single core performance test with Mergeable path
-=======================================================================
+====================================================================
 
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
@@ -168,7 +168,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 3. Send packet with packet generator with different packet size, check the throughput.
 
 Test Case 6: virtio single core performance test with Normal path
-=======================================================================
+=================================================================
 
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
@@ -193,7 +193,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 3. Send packet with packet generator with different packet size, check the throughput.
 
 Test Case 7: virtio single core performance test with Vector_RX path
-=======================================================================
+====================================================================
 
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
@@ -215,4 +215,4 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
\ No newline at end of file
+3. Send packet with packet generator with different packet size, check the throughput.
diff --git a/test_plans/vlan_ethertype_config_test_plan.rst b/test_plans/vlan_ethertype_config_test_plan.rst
index 7096614..2462cf1 100644
--- a/test_plans/vlan_ethertype_config_test_plan.rst
+++ b/test_plans/vlan_ethertype_config_test_plan.rst
@@ -142,7 +142,7 @@ Test Case 4: test VLAN header stripping with changing VLAN TPID
       testpmd> vlan set outer tpid 0xA100 0
 
 4. Send 1 packet with VLAN TPID 0xA100 and VLAN Tag 16 on port ``A``.
-  Verify that packet received in port ``B`` without VLAN Tag Identifier
+   Verify that packet received in port ``B`` without VLAN Tag Identifier
 
 5. Disable vlan header stripping on port ``0``::
 
-- 
2.17.2


Thread overview: 7+ messages
2019-09-23  2:53 [dts] [PATCH V1] test_plans:fix build warning Wenjie Li
  -- strict thread matches above, loose matches on Subject: below --
2019-09-26  3:27 Wenjie Li
2019-06-10  1:43 [dts] [PATCH V1] test_plans: fix " Wenjie Li
2019-05-22 10:51 Wenjie Li
2019-05-29  2:36 ` Tu, Lijuan
2019-03-01  6:45 Wenjie Li
2019-03-01  7:57 ` Tu, Lijuan
