test suite reviews and discussions
* [dts] [PATCH] test_plans: fix doc build warnings
@ 2018-04-24 11:18 Marvin Liu
  2018-04-24 11:24 ` [dts] [PATCH v2] " Marvin Liu
  0 siblings, 1 reply; 3+ messages in thread
From: Marvin Liu @ 2018-04-24 11:18 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu


Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/test_plans/ddp_gtp_qregion_test_plan.rst b/test_plans/ddp_gtp_qregion_test_plan.rst
index 4a08e41..ced70ff 100644
--- a/test_plans/ddp_gtp_qregion_test_plan.rst
+++ b/test_plans/ddp_gtp_qregion_test_plan.rst
@@ -193,7 +193,9 @@ Test Case: Outer IPv6 dst controls GTP-C queue in queue region
     GTP_U_Header()/Raw('x'*20)
 	
 10. Send different outer src GTP-C packet, check pmd receives packet from 
-    same queue::
+    same queue
+
+.. code-block:: console
 
     p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002",
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/
@@ -405,7 +407,9 @@ Test Case: Inner IP src controls GTP-U IPv4 queue in queue region
     IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)
 
 10. Send different dst GTP-U IPv4 packet, check pmd receives packet from same
-    queue::
+    queue
+
+.. code-block:: console
     
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IP(src="1.1.1.1",dst="2.2.2.3")/UDP()/Raw('x'*20)
@@ -452,7 +456,9 @@ Test Case: Inner IP dst controls GTP-U IPv4 queue in queue region
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IP(src="1.1.1.1",dst="2.2.2.3")/UDP()/Raw('x'*20)
 
-10. Send different src address, check pmd receives packet from same queue::
+10. Send different src address, check pmd receives packet from same queue
+
+.. code-block:: console
 
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)
@@ -635,14 +641,14 @@ Test Case: Inner IPv6 src controls GTP-U IPv6 queue in queue region
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
 		
 10. Send different inner dst GTP-U IPv6 packet, check pmd receives packet 
-    from same queue::
+    from same queue
+
+.. code-block:: console
 
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001",
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP()/Raw('x'*20)
-	
 
-	
 Test Case: Inner IPv6 dst controls GTP-U IPv6 queue in queue region
 =========================================================================
 1. Check flow type to pctype mapping::
@@ -693,7 +699,9 @@ Test Case: Inner IPv6 dst controls GTP-U IPv6 queue in queue region
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP()/Raw('x'*20)
 
 10. Send different inner src GTP-U IPv6 packets, check pmd receives packet 
-    from same queue::
+    from same queue
+
+.. code-block:: console
 
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002",
diff --git a/test_plans/index.rst b/test_plans/index.rst
index 40984b2..b2a7d28 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -120,6 +120,15 @@ The following are the test plans for the DPDK DTS automated test system.
     qinq_filter_test_plan
     ddp_gtp_test_plan
     generic_flow_api_test_plan
+    ddp_gtp_qregion_test_plan
+    interrupt_pmd_kvm_test_plan
+    ipingre_test_plan
+    multi_vm_test_plan
+    runtime_queue_number_test_plan
+    sriov_live_migration_test_plan
+    vhost_multi_queue_qemu_test_plan
+    vhost_qemu_mtu_test_plan
+    vlan_fm10k_test_plan
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
@@ -150,3 +159,5 @@ The following are the test plans for the DPDK DTS automated test system.
     ptpclient_test_plan
     distributor_test_plan
     efd_test_plan
+    l2fwd_fork_test_plan
+    l3fwdacl_test_plan
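The entries added to ``index.rst`` above address Sphinx's "document isn't included in any toctree" warnings. A rough pre-check for such omissions can be sketched as follows (a heuristic illustration only; the function name and sample data are ours, and Sphinx itself remains the authoritative check):

```python
import re

def missing_from_toctree(index_text: str, plan_files: list) -> list:
    """Return plan documents present on disk but absent from index.rst's
    toctree -- the source of the 'not included in any toctree' warnings
    this hunk fixes. Heuristic sketch, not Sphinx's own resolution."""
    listed = set(re.findall(r"^\s+(\w+_test_plan)\s*$", index_text, re.M))
    return sorted(f for f in plan_files if f not in listed)

index = "    ddp_gtp_test_plan\n    ddp_gtp_qregion_test_plan\n"
plans = ["ddp_gtp_test_plan", "ddp_gtp_qregion_test_plan",
         "vlan_fm10k_test_plan"]
print(missing_from_toctree(index, plans))  # -> ['vlan_fm10k_test_plan']
```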
diff --git a/test_plans/runtime_queue_number_test_plan.rst b/test_plans/runtime_queue_number_test_plan.rst
index f3353c9..fc07bb5 100644
--- a/test_plans/runtime_queue_number_test_plan.rst
+++ b/test_plans/runtime_queue_number_test_plan.rst
@@ -425,7 +425,9 @@ Test case: pass through VF to VM
  
 5. Bind VF to kernel driver i40evf, check the rxq and txq number.
    if set VF Max possible RX queues and TX queues to 2 by PF,
-   the VF rxq and txq number is 2::
+   the VF rxq and txq number is 2
+
+.. code-block:: console
 
     #ethtool -S eth0
     NIC statistics:
diff --git a/test_plans/sriov_live_migration_test_plan.rst b/test_plans/sriov_live_migration_test_plan.rst
index cd690b6..11d5997 100644
--- a/test_plans/sriov_live_migration_test_plan.rst
+++ b/test_plans/sriov_live_migration_test_plan.rst
@@ -1,289 +1,319 @@
-.. Copyright (c) <2016>, Intel Corporation
-      All rights reserved.
-
-   Redistribution and use in source and binary forms, with or without
-   modification, are permitted provided that the following conditions
-   are met:
-
-   - Redistributions of source code must retain the above copyright
-     notice, this list of conditions and the following disclaimer.
-
-   - Redistributions in binary form must reproduce the above copyright
-     notice, this list of conditions and the following disclaimer in
-     the documentation and/or other materials provided with the
-     distribution.
-
-   - Neither the name of Intel Corporation nor the names of its
-     contributors may be used to endorse or promote products derived
-     from this software without specific prior written permission.
-
-   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
-   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
-   OF THE POSSIBILITY OF SUCH DAMAGE.
-
-====================
-SRIOV live migration
-====================
-Qemu not support migrate a Virtual Machine which has an SR-IOV Virtual
-Function (VF). To get work around of this, bonding PMD and VirtIO is used.
-
-Prerequisites
--------------
-Connect three ports to one switch, these three ports are from Host, Backup
-host and tester.
-
-Start nfs service and export nfs to backup host IP:
-    host# service rpcbind start
-    host# service nfs start
-    host# cat /etc/exports
-    host# /home/vm-image backup-host-ip(rw,sync,no_root_squash)
-
-Make sure host nfsd module updated to v4 version(v2 not support file > 4G)
-
-Enable vhost pmd in configuration file and rebuild dpdk on host and backup host
-    CONFIG_RTE_LIBRTE_PMD_VHOST=y
-
-Create enough hugepages for testpmd and qemu backend memory.
-    host# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    host# mount -t hugetlbfs hugetlbfs /mnt/huge
-
-Generate VF device with host port and backup host port
-    host# echo 1 > /sys/bus/pci/devices/0000\:01\:00.1/sriov_numvfs
-    backup# echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
-    
-Test Case 1: migrate with tap VirtIO
-====================================
-Start qemu on host server 
-    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
-          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 \
-          -no-reboot \
-          -drive file=/home/vm-image/vm0.img,format=raw \
-          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
-          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
-          -device pci-assign,host=01:10.1,id=vf1 \
-          -monitor telnet::3333,server,nowait \
-          -serial telnet:localhost:5432,server,nowait \
-          -daemonize
-
-Bridge tap and PF device into one bridge
-    host# brctl addbr br0
-    host# brctl addif br0 tap1
-    host# brctl addif br0 $PF
-    host# ifconfig tap1 up
-    host# ifconfig $PF up
-    host# ifconfig br0 up
-
-Login into vm and bind VirtIO and VF device to igb_uio, then start testpmd
-    host# telnet localhost 5432
-    host vm# cd /root/dpdk
-    host vm# modprobe uio
-    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host vm# modprobe -r ixgbevf
-    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
-    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
-
-Create bond device with VF and virtIO
-    testpmd> create bonded device 1 0
-    testpmd> add bonding slave 0 2
-    testpmd> add bonding slave 1 2
-    testpmd> set bonding primary 1 2
-    testpmd> port start 2
-    testpmd> set portlist 2
-    testpmd> show config fwd
-    testpmd> set fwd rxonly
-    testpmd> set verbose 1
-    testpmd> start
-
-Send packets from tester with bonding device's mac and check received
-    tester# scapy
-    tester# >>> VF="AA:BB:CC:DD:EE:FF"
-    tester# >>> sendp([Ether(dst=VF, src=get_if_hwaddr('p5p1')/IP()/UDP()/Raw('x' * 18)],
-                       iface='p5p1', loop=1, inter=1)
-
-Start qemu on backup server 
-    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
-          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-          -numa node,memdev=mem -mem-prealloc \
-          -smp 4 -cpu host -name VM1 \
-          -no-reboot \
-          -drive file=/mnt/nfs/vm0.img,format=raw \
-          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
-          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
-          -incoming tcp:0:4444 \
-          -monitor telnet::3333,server,nowait \
-          -serial telnet:localhost:5432,server,nowait \
-          -daemonize
-
-Bridge tap and PF device into one bridge on backup server
-    backup# brctl addbr br0
-    backup# brctl addif br0 tap1
-    backup# brctl addif br0 $PF
-    backup# ifconfig tap1 up
-    backup# ifconfig $PF up
-    backup# ifconfig br0 up
-
-Before migration, remove VF device in host VM
-    testpmd> remove bonding slave 1 2
-    testpmd> port stop 1
-    testpmd> port close 1
-    testpmd> port detach 1
-
-Delete VF device in qemu monitor and then start migration
-    host# telnet localhost 3333
-        (qemu) device_del vf1
-        (qemu) migrate -d tcp:backup server ip:4444
-
-Check in migration process, still can receive packets
-
-After migration, check backup vm can receive packets
-
-After migration done, attached backup VF device
-    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
-
-Login backup VM and attach VF device
-    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
-    backup vm# testpmd> stop
-    backup vm# testpmd> port attach 0000:00:04.0
-
-Change backup VF mac address to same of host VF device
-    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
-    testpmd> port start 1
-    testpmd> add bonding slave 1 2
-    testpmd> set bonding primary 1 2
-    testpmd> show bonding config 2
-    testpmd> show port stats all
-
-Remove virtio device
-    testpmd> remove bonding slave 0 2
-    testpmd> show bonding config 2
-    testpmd> port stop 0
-    testpmd> port close 0
-    testpmd> port detach 0
-
-Check bonding device still can received packets
-
-
-Test Case 2: migrate with vhost user pmd
-========================================
-Start testpmd with vhost user pmd device on host
-    host# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
-          --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
-          --socket-mem 1024 -- -i
-
-Start qemu with vhost user on host
-    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm \
-          -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
-          -drive file=/home/vm-image/vm0.img,format=raw \
-          -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-          -chardev socket,id=char0,path=/root/dpdk/vhost-net \
-          -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-          -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
-          -device pci-assign,host=01:10.1,id=vf1 \
-          -monitor telnet::3333,server,nowait \
-          -serial telnet:localhost:5432,server,nowait \
-          -daemonize
-
-Start testpmd on backup host with vhost user pmd device
-    backup# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
-            --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
-            --socket-mem 1024 -- -i
-
-Start qemu with vhost user on backup host
-    backup# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
-            -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-            -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
-            -drive file=/mnt/nfs/vm0.img,format=raw \
-            -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
-            -chardev socket,id=char0,path=/root/dpdk/vhost-net \
-            -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-            -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
-            -monitor telnet::3333,server,nowait \
-            -serial telnet:localhost:5432,server,nowait \
-            -incoming tcp:0:4444 \
-            -daemonize
-
-Login into host vm, start testpmd with virtio and VF devices
-    host# telnet localhost 5432
-    host vm# cd /root/dpdk
-    host vm# modprobe uio
-    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host vm# modprobe -r ixgbevf
-    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
-    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
-
-Check host vhost pmd connect with VM’s virtio device
-    host# testpmd> host testpmd message for connection
-
-Create bond device and then add VF and virtio devices into bonding device
-    host vm# testpmd> create bonded device 1 0
-    host vm# testpmd> add bonding slave 0 2
-    host vm# testpmd> add bonding slave 1 2
-    host vm# testpmd> set bonding primary 1 2
-    host vm# testpmd> port start 2
-    host vm# testpmd> set portlist 2
-    host vm# testpmd> show config fwd
-    host vm# testpmd> set fwd rxonly
-    host vm# testpmd> set verbose 1
-    host vm# testpmd> start
-
-Send packets matched bonding device’s mac from tester, check packets received
-by bonding device
-
-Before migration, removed VF device from bonding device. After that, bonding device
-can’t receive packets
-    host vm# testpmd> remove bonding slave 1 2
-    host vm# testpmd> port stop 1
-    host vm# testpmd> port close 1
-    host vm# testpmd> port detach 1
-
-Delete VF device in qemu monitor and then start migration
-    host# telnet localhost 3333
-    host# (qemu) device_del vf1
-    host# (qemu) migrate -d tcp:10.239.129.125:4444
-
-After migration done, add backup VF device into backup VM
-    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
-
-Login into backup VM and bind VF device to igb_uio
-    backup# ssh -p 5555 root@localhost
-    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
-
-Connect to backup VM serial port  and attach backup VF device
-    backup# telnet localhost 5432
-    backup vm# testpmd> port attach 0000:00:04.0
-
-Change backup VF mac address to match host VF device
-    backup vm# testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
-
-Add backup VF device into bonding device
-    backup vm# testpmd> port start 1
-    backup vm# testpmd> add bonding slave 1 2
-    backup vm# testpmd> set bonding primary 1 2
-    backup vm# testpmd> show bonding config 2
-    backup vm# testpmd> show port stats all
-
-Remove virtio device from backup bonding device
-    backup vm# testpmd> remove bonding slave 0 2
-    backup vm# testpmd> show bonding config 2
-    backup vm# testpmd> port stop 0
-    backup vm# testpmd> port close 0
-    backup vm# testpmd> port detach 0
-    backup vm# 
-
-Check still can receive packets matched VF mac address
-
+.. Copyright (c) <2016>, Intel Corporation
+      All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+====================
+SRIOV live migration
+====================
+Qemu does not support migrating a Virtual Machine that has an SR-IOV Virtual
+Function (VF) attached. To work around this, the bonding PMD and VirtIO are used.
+
+Prerequisites
+-------------
+Connect three ports to one switch: one each from the host, the backup
+host and the tester.
+
+Start nfs service and export nfs to backup host IP::
+
+    host# service rpcbind start
+    host# service nfs start
+    host# cat /etc/exports
+    host# /home/vm-image backup-host-ip(rw,sync,no_root_squash)
+
+Make sure the host nfsd module is updated to v4 (v2 does not support files > 4G).
+
+Enable the vhost pmd in the configuration file and rebuild dpdk on both the
+host and the backup host::
+
+    CONFIG_RTE_LIBRTE_PMD_VHOST=y
+
+Create enough hugepages for testpmd and qemu backend memory::
+
+    host# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+Generate VF device with host port and backup host port::
+
+    host# echo 1 > /sys/bus/pci/devices/0000\:01\:00.1/sriov_numvfs
+    backup# echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
+
+Test Case 1: migrate with tap VirtIO
+====================================
+Start qemu on host server::
+
+    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
+          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 \
+          -no-reboot \
+          -drive file=/home/vm-image/vm0.img,format=raw \
+          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
+          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
+          -device pci-assign,host=01:10.1,id=vf1 \
+          -monitor telnet::3333,server,nowait \
+          -serial telnet:localhost:5432,server,nowait \
+          -daemonize
+
+Bridge tap and PF device into one bridge::
+
+    host# brctl addbr br0
+    host# brctl addif br0 tap1
+    host# brctl addif br0 $PF
+    host# ifconfig tap1 up
+    host# ifconfig $PF up
+    host# ifconfig br0 up
+
+Log in to the VM, bind the VirtIO and VF devices to igb_uio, then start testpmd::
+
+    host# telnet localhost 5432
+    host vm# cd /root/dpdk
+    host vm# modprobe uio
+    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+    host vm# modprobe -r ixgbevf
+    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
+    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+Create bond device with VF and virtIO::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> set bonding primary 1 2
+    testpmd> port start 2
+    testpmd> set portlist 2
+    testpmd> show config fwd
+    testpmd> set fwd rxonly
+    testpmd> set verbose 1
+    testpmd> start
+
+Send packets from the tester to the bonding device's MAC and check they are received::
+
+    tester# scapy
+    tester# >>> VF="AA:BB:CC:DD:EE:FF"
+    tester# >>> sendp([Ether(dst=VF, src=get_if_hwaddr('p5p1'))/IP()/UDP()/Raw('x' * 18)],
+                       iface='p5p1', loop=1, inter=1)
+
+Start qemu on backup server::
+
+    backup# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
+          -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+          -numa node,memdev=mem -mem-prealloc \
+          -smp 4 -cpu host -name VM1 \
+          -no-reboot \
+          -drive file=/mnt/nfs/vm0.img,format=raw \
+          -net nic,model=e1000 -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+          -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
+          -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01 \
+          -incoming tcp:0:4444 \
+          -monitor telnet::3333,server,nowait \
+          -serial telnet:localhost:5432,server,nowait \
+          -daemonize
+
+Bridge tap and PF device into one bridge on backup server::
+
+    backup# brctl addbr br0
+    backup# brctl addif br0 tap1
+    backup# brctl addif br0 $PF
+    backup# ifconfig tap1 up
+    backup# ifconfig $PF up
+    backup# ifconfig br0 up
+
+Before migration, remove VF device in host VM::
+
+    testpmd> remove bonding slave 1 2
+    testpmd> port stop 1
+    testpmd> port close 1
+    testpmd> port detach 1
+
+Delete VF device in qemu monitor and then start migration::
+
+    host# telnet localhost 3333
+        (qemu) device_del vf1
+        (qemu) migrate -d tcp:<backup host IP>:4444
+
+Check that packets are still received while the migration is in progress.
+
+After migration, check that the backup VM can receive packets.
+
+After migration is done, attach the backup VF device::
+
+    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
+
+Log in to the backup VM and attach the VF device::
+
+    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
+    backup vm# testpmd> stop
+    backup vm# testpmd> port attach 0000:00:04.0
+
+Change the backup VF MAC address to match the host VF device::
+
+    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
+    testpmd> port start 1
+    testpmd> add bonding slave 1 2
+    testpmd> set bonding primary 1 2
+    testpmd> show bonding config 2
+    testpmd> show port stats all
+
+Remove virtio device::
+
+    testpmd> remove bonding slave 0 2
+    testpmd> show bonding config 2
+    testpmd> port stop 0
+    testpmd> port close 0
+    testpmd> port detach 0
+
+Check that the bonding device can still receive packets.
+
+
+Test Case 2: migrate with vhost user pmd
+========================================
+Start testpmd with vhost user pmd device on host::
+
+    host# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
+          --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
+          --socket-mem 1024 -- -i
+
+Start qemu with vhost user on host::
+
+    host# /usr/local/bin/qemu-system-x86_64 -enable-kvm \
+          -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+          -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
+          -drive file=/home/vm-image/vm0.img,format=raw \
+          -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+          -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+          -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+          -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
+          -device pci-assign,host=01:10.1,id=vf1 \
+          -monitor telnet::3333,server,nowait \
+          -serial telnet:localhost:5432,server,nowait \
+          -daemonize
+
+Start testpmd on backup host with vhost user pmd device::
+
+    backup# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 \
+            --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net,queues=1' \
+            --socket-mem 1024 -- -i
+
+Start qemu with vhost user on backup host::
+
+    backup# /usr/local/bin/qemu-system-x86_64 -enable-kvm -m 2048 \
+            -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+            -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -name VM1 -no-reboot \
+            -drive file=/mnt/nfs/vm0.img,format=raw \
+            -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+            -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+            -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+            -device virtio-net-pci,netdev=netdev0,mac=00:00:00:00:00:01 \
+            -monitor telnet::3333,server,nowait \
+            -serial telnet:localhost:5432,server,nowait \
+            -incoming tcp:0:4444 \
+            -daemonize
+
+Log in to the host VM and start testpmd with the virtio and VF devices::
+
+    host# telnet localhost 5432
+    host vm# cd /root/dpdk
+    host vm# modprobe uio
+    host vm# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+    host vm# modprobe -r ixgbevf
+    host vm# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0 00:04.0
+    host vm# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    host vm# ./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+Check that the host vhost pmd has connected with the VM's virtio device::
+
+    host# testpmd> host testpmd message for connection
+
+Create bond device and then add VF and virtio devices into bonding device::
+
+    host vm# testpmd> create bonded device 1 0
+    host vm# testpmd> add bonding slave 0 2
+    host vm# testpmd> add bonding slave 1 2
+    host vm# testpmd> set bonding primary 1 2
+    host vm# testpmd> port start 2
+    host vm# testpmd> set portlist 2
+    host vm# testpmd> show config fwd
+    host vm# testpmd> set fwd rxonly
+    host vm# testpmd> set verbose 1
+    host vm# testpmd> start
+
+Send packets matching the bonding device's MAC from the tester and check that
+they are received by the bonding device.
+
+Before migration, remove the VF device from the bonding device. After that, the
+bonding device can't receive packets::
+
+    host vm# testpmd> remove bonding slave 1 2
+    host vm# testpmd> port stop 1
+    host vm# testpmd> port close 1
+    host vm# testpmd> port detach 1
+
+Delete VF device in qemu monitor and then start migration::
+
+    host# telnet localhost 3333
+    host# (qemu) device_del vf1
+    host# (qemu) migrate -d tcp:10.239.129.125:4444
+
+After migration is done, add the backup VF device into the backup VM::
+
+    backup# (qemu) device_add pci-assign,host=03:10.0,id=vf1
+
+Log in to the backup VM and bind the VF device to igb_uio::
+
+    backup# ssh -p 5555 root@localhost
+    backup vm# ./dpdk/tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0
+
+Connect to the backup VM serial port and attach the backup VF device::
+
+    backup# telnet localhost 5432
+    backup vm# testpmd> port attach 0000:00:04.0
+
+Change the backup VF MAC address to match the host VF device::
+
+    backup vm# testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
+
+Add backup VF device into bonding device::
+
+    backup vm# testpmd> port start 1
+    backup vm# testpmd> add bonding slave 1 2
+    backup vm# testpmd> set bonding primary 1 2
+    backup vm# testpmd> show bonding config 2
+    backup vm# testpmd> show port stats all
+
+Remove virtio device from backup bonding device::
+
+    backup vm# testpmd> remove bonding slave 0 2
+    backup vm# testpmd> show bonding config 2
+    backup vm# testpmd> port stop 0
+    backup vm# testpmd> port close 0
+    backup vm# testpmd> port detach 0
+
+Check that packets matching the VF MAC address are still received.
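As a side note on the hugepage prerequisite in the plan above, the reservation can be sanity-checked against qemu's memory-backend-file size. A small sketch (the 2048 kB page size, 4096-page count, and 2048M backend come from the plan; the helper name is ours, not part of DTS):

```python
# Sanity-check the hugepage reservation against qemu's memory backend size.
# Plan values: 4096 pages of 2048 kB each, and a 2048M backend per VM.

def hugepage_bytes(nr_pages: int, page_kb: int = 2048) -> int:
    """Total bytes reserved by writing nr_pages to nr_hugepages."""
    return nr_pages * page_kb * 1024

reserved = hugepage_bytes(4096)        # host reservation from the plan
qemu_backend = 2048 * 1024 * 1024      # -object memory-backend-file,size=2048M

# The 8 GB reservation covers the VM backend plus testpmd's mempools.
assert reserved == 8 * 1024**3
assert reserved >= qemu_backend
print(reserved // qemu_backend)  # -> 4 such backends would fit
```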
diff --git a/test_plans/vhost_multi_queue_qemu_test_plan.rst b/test_plans/vhost_multi_queue_qemu_test_plan.rst
index c2a7558..bb13a81 100644
--- a/test_plans/vhost_multi_queue_qemu_test_plan.rst
+++ b/test_plans/vhost_multi_queue_qemu_test_plan.rst
@@ -85,7 +85,8 @@ flow:
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one port to igb_uio, then launch testpmd by below command, 
-   ensure the vhost using 2 queues: 
+   ensure that vhost is using 2 queues::
+
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
@@ -106,7 +107,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     -vnc :2 -daemonize
 
 3. On VM, bind virtio net to igb_uio and run testpmd,
-   using one queue for testing at first  ::
+   using one queue for testing at first::
  
     ./testpmd -c 0x7 -n 3 -- -i --rxq=1 --txq=1 --tx-offloads=0x0 \
     --rss-ip --nb-cores=1
@@ -114,6 +115,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     testpmd>start
 
 4. Use scapy send packet::
+
     #scapy
     >>>pk1= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.1")/UDP()/("X"*64)]
     >>>pk2= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.7")/UDP()/("X"*64)]
@@ -159,7 +161,8 @@ flow:
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one port to igb_uio, then launch testpmd by below command, 
-   ensure the vhost using 2 queues: 
+   ensure that vhost is using 2 queues::
+
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
@@ -180,7 +183,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     -vnc :2 -daemonize
 
 3. On VM, bind virtio net to igb_uio and run testpmd,
-   using one queue for testing at first  ::
+   using one queue for testing at first::
  
     ./testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 \
     --tx-offloads=0x0 --rss-ip --nb-cores=2
@@ -188,6 +191,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     testpmd>start
  
 4. Use scapy send packet::
+
     #scapy
     >>>pk1= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.1")/UDP()/("X"*64)]
     >>>pk2= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.7")/UDP()/("X"*64)]
diff --git a/test_plans/virtio_1.0_test_plan.rst b/test_plans/virtio_1.0_test_plan.rst
index 69f6794..265c586 100644
--- a/test_plans/virtio_1.0_test_plan.rst
+++ b/test_plans/virtio_1.0_test_plan.rst
@@ -44,8 +44,7 @@ test with virtio0.95 to ensure they can co-exist. Besides, we need test virtio
 
 
 Test Case 1: test_func_vhost_user_virtio1.0-pmd with different tx-offloads
-=======================================================================
-
+==========================================================================
 Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1 or 2.5.0.
 
 1. Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.::
-- 
1.9.3
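
The blank lines added after `::` in the hunks above address a docutils rule: a literal block must be separated from its introducing line by a blank line, otherwise the doc build emits a warning and the block is not rendered verbatim. A minimal stdlib sketch of a checker for this pattern (a hypothetical helper for illustration, not part of DTS):

```python
def find_literal_block_warnings(lines):
    """Return 1-based numbers of '::' lines not followed by a blank line.

    reStructuredText expects a blank line between a '::' marker and the
    indented literal block below it; when that blank line is missing the
    doc build warns and the block is not rendered as literal text.
    (Hypothetical helper for illustration -- not part of DTS.)
    """
    hits = []
    for i, line in enumerate(lines):
        next_nonblank = i + 1 < len(lines) and lines[i + 1].strip()
        if line.rstrip().endswith("::") and next_nonblank:
            hits.append(i + 1)  # 1-based line number of the '::' line
    return hits
```

For example, `find_literal_block_warnings(["4. Use scapy send packet::", "    #scapy"])` flags line 1, while the patched form with a blank line between the two passes cleanly.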

^ permalink raw reply	[flat|nested] 3+ messages in thread

* [dts] [PATCH v2] test_plans: fix doc build warnings
  2018-04-24 11:18 [dts] [PATCH] test_plans: fix doc build warnings Marvin Liu
@ 2018-04-24 11:24 ` Marvin Liu
  0 siblings, 0 replies; 3+ messages in thread
From: Marvin Liu @ 2018-04-24 11:24 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu

v2: remove internal test plan modifications

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/test_plans/ddp_gtp_qregion_test_plan.rst b/test_plans/ddp_gtp_qregion_test_plan.rst
index 4a08e41..ced70ff 100644
--- a/test_plans/ddp_gtp_qregion_test_plan.rst
+++ b/test_plans/ddp_gtp_qregion_test_plan.rst
@@ -193,7 +193,9 @@ Test Case: Outer IPv6 dst controls GTP-C queue in queue region
     GTP_U_Header()/Raw('x'*20)
 	
 10. Send different outer src GTP-C packet, check pmd receives packet from 
-    same queue::
+    same queue
+
+.. code-block:: console
 
     p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002",
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/
@@ -405,7 +407,9 @@ Test Case: Inner IP src controls GTP-U IPv4 queue in queue region
     IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)
 
 10. Send different dst GTP-U IPv4 packet, check pmd receives packet from same
-    queue::
+    queue
+
+.. code-block:: console
     
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IP(src="1.1.1.1",dst="2.2.2.3")/UDP()/Raw('x'*20)
@@ -452,7 +456,9 @@ Test Case: Inner IP dst controls GTP-U IPv4 queue in queue region
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IP(src="1.1.1.1",dst="2.2.2.3")/UDP()/Raw('x'*20)
 
-10. Send different src address, check pmd receives packet from same queue::
+10. Send different src address, check pmd receives packet from same queue
+
+.. code-block:: console
 
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)
@@ -635,14 +641,14 @@ Test Case: Inner IPv6 src controls GTP-U IPv6 queue in queue region
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
 		
 10. Send different inner dst GTP-U IPv6 packet, check pmd receives packet 
-    from same queue::
+    from same queue
+
+.. code-block:: console
 
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001",
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP()/Raw('x'*20)
-	
 
-	
 Test Case: Inner IPv6 dst controls GTP-U IPv6 queue in queue region
 =========================================================================
 1. Check flow type to pctype mapping::
@@ -693,7 +699,9 @@ Test Case: Inner IPv6 dst controls GTP-U IPv6 queue in queue region
     dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP()/Raw('x'*20)
 
 10. Send different inner src GTP-U IPv6 packets, check pmd receives packet 
-    from same queue::
+    from same queue
+
+.. code-block:: console
 
     p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/
     IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002",
diff --git a/test_plans/index.rst b/test_plans/index.rst
index 40984b2..875be3e 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -120,6 +120,13 @@ The following are the test plans for the DPDK DTS automated test system.
     qinq_filter_test_plan
     ddp_gtp_test_plan
     generic_flow_api_test_plan
+    ddp_gtp_qregion_test_plan
+    interrupt_pmd_kvm_test_plan
+    ipingre_test_plan
+    multi_vm_test_plan
+    runtime_queue_number_test_plan
+    vhost_multi_queue_qemu_test_plan
+    vhost_qemu_mtu_test_plan
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
diff --git a/test_plans/runtime_queue_number_test_plan.rst b/test_plans/runtime_queue_number_test_plan.rst
index f3353c9..fc07bb5 100644
--- a/test_plans/runtime_queue_number_test_plan.rst
+++ b/test_plans/runtime_queue_number_test_plan.rst
@@ -425,7 +425,9 @@ Test case: pass through VF to VM
  
 5. Bind VF to kernel driver i40evf, check the rxq and txq number.
    if set VF Max possible RX queues and TX queues to 2 by PF,
-   the VF rxq and txq number is 2::
+   the VF rxq and txq number is 2
+
+.. code-block:: console
 
     #ethtool -S eth0
     NIC statistics:
diff --git a/test_plans/vhost_multi_queue_qemu_test_plan.rst b/test_plans/vhost_multi_queue_qemu_test_plan.rst
index c2a7558..bb13a81 100644
--- a/test_plans/vhost_multi_queue_qemu_test_plan.rst
+++ b/test_plans/vhost_multi_queue_qemu_test_plan.rst
@@ -85,7 +85,8 @@ flow:
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one port to igb_uio, then launch testpmd by below command, 
-   ensure the vhost using 2 queues: 
+   ensure the vhost using 2 queues::
+
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
@@ -106,7 +107,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     -vnc :2 -daemonize
 
 3. On VM, bind virtio net to igb_uio and run testpmd,
-   using one queue for testing at first  ::
+   using one queue for testing at first::
  
     ./testpmd -c 0x7 -n 3 -- -i --rxq=1 --txq=1 --tx-offloads=0x0 \
     --rss-ip --nb-cores=1
@@ -114,6 +115,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     testpmd>start
 
 4. Use scapy send packet::
+
     #scapy
     >>>pk1= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.1")/UDP()/("X"*64)]
     >>>pk2= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.7")/UDP()/("X"*64)]
@@ -159,7 +161,8 @@ flow:
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one port to igb_uio, then launch testpmd by below command, 
-   ensure the vhost using 2 queues: 
+   ensure the vhost using 2 queues::
+
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
@@ -180,7 +183,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     -vnc :2 -daemonize
 
 3. On VM, bind virtio net to igb_uio and run testpmd,
-   using one queue for testing at first  ::
+   using one queue for testing at first::
  
     ./testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 \
     --tx-offloads=0x0 --rss-ip --nb-cores=2
@@ -188,6 +191,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     testpmd>start
  
 4. Use scapy send packet::
+
     #scapy
     >>>pk1= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.1")/UDP()/("X"*64)]
     >>>pk2= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.7")/UDP()/("X"*64)]
diff --git a/test_plans/virtio_1.0_test_plan.rst b/test_plans/virtio_1.0_test_plan.rst
index 69f6794..265c586 100644
--- a/test_plans/virtio_1.0_test_plan.rst
+++ b/test_plans/virtio_1.0_test_plan.rst
@@ -44,8 +44,7 @@ test with virtio0.95 to ensure they can co-exist. Besides, we need test virtio
 
 
 Test Case 1: test_func_vhost_user_virtio1.0-pmd with different tx-offloads
-=======================================================================
-
+==========================================================================
 Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1 or 2.5.0.
 
 1. Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.::
-- 
1.9.3
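
The virtio_1.0 hunk above lengthens the `=` adornment to match its section title; docutils reports "Title underline too short." whenever an underline does not span the full title. A stdlib sketch of that check (a hypothetical helper for illustration, not part of DTS):

```python
def find_short_underlines(lines):
    """Return 1-based numbers of adornment lines shorter than their title.

    docutils warns 'Title underline too short.' when a line of repeated
    section-adornment punctuation does not cover the title above it.
    (Hypothetical helper for illustration -- not part of DTS.)
    """
    hits = []
    for i in range(len(lines) - 1):
        title, under = lines[i].rstrip(), lines[i + 1].rstrip()
        # An adornment line repeats one punctuation character throughout.
        if under and len(set(under)) == 1 and under[0] in '=-~^"#*+':
            if title and len(under) < len(title):
                hits.append(i + 2)  # 1-based number of the short underline
    return hits
```

Run against the pre-patch virtio_1.0 heading, the 74-character title over a 71-character underline would be flagged; after the hunk both lines are 74 characters and the check passes.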


* [dts] [PATCH] test_plans: fix doc build warnings
@ 2018-01-10 22:35 Marvin Liu
  0 siblings, 0 replies; 3+ messages in thread
From: Marvin Liu @ 2018-01-10 22:35 UTC (permalink / raw)
  To: dts; +Cc: Marvin Liu

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/test_plans/ddp_gtp_test_plan.rst b/test_plans/ddp_gtp_test_plan.rst
index 3dcba1b..3e6c954 100644
--- a/test_plans/ddp_gtp_test_plan.rst
+++ b/test_plans/ddp_gtp_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-================
-DDP GTP-C/GTP-U 
-================
+===============================
+Fortville DDP GTP-C/GTP-U Tests
+===============================
 
 FVL6 supports DDP (Dynamic Device Personalization) to program analyzer/parser
 via AdminQ. Profile can be used to update FVL configuration tables via MMIO
diff --git a/test_plans/distributor_test_plan.rst b/test_plans/distributor_test_plan.rst
index e627faa..106936d 100644
--- a/test_plans/distributor_test_plan.rst
+++ b/test_plans/distributor_test_plan.rst
@@ -30,9 +30,10 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-==================
-Packet distributor
-==================
+============================================
+Sample Application Tests: Packet distributor
+============================================
+
 Packet Distributor library is a library designed to be used for dynamic
 load balancing of traffic while supporting single packet at a time operation.
 When using this library, the logical cores in use are to be considered in
diff --git a/test_plans/dynamic_flowtype_test_plan.rst b/test_plans/dynamic_flowtype_test_plan.rst
index 39054b1..d7eab5f 100644
--- a/test_plans/dynamic_flowtype_test_plan.rst
+++ b/test_plans/dynamic_flowtype_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-========================================
-Dynamic mapping of Flow Types to PCTYPEs
-========================================
+========================================================
+Fortville Dynamic Mapping of Flow Types to PCTYPEs Tests
+========================================================
 
 More protocols can be added dynamically using dynamic device personalization 
 profiles (DDP).
diff --git a/test_plans/efd_test_plan.rst b/test_plans/efd_test_plan.rst
index a6b5e3d..d6962e7 100644
--- a/test_plans/efd_test_plan.rst
+++ b/test_plans/efd_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-========================
-Elastic Flow Distributor
-========================
+==================================================
+Sample Application Tests: Elastic Flow Distributor
+==================================================
 
 Description
 -----------
diff --git a/test_plans/index.rst b/test_plans/index.rst
index ac1959a..40984b2 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -110,6 +110,16 @@ The following are the test plans for the DPDK DTS automated test system.
     vmdq_test_plan
     vm_power_manager_test_plan
     vxlan_test_plan
+    ixgbe_vf_get_extra_queue_information_test_plan
+    queue_region_test_plan
+    inline_ipsec_test_plan
+    sw_eventdev_pipeline_sample_test_plan
+    dynamic_flowtype_test_plan
+    vf_kernel_test_plan
+    multiple_pthread_test_plan
+    qinq_filter_test_plan
+    ddp_gtp_test_plan
+    generic_flow_api_test_plan
 
     unit_tests_cmdline_test_plan
     unit_tests_crc_test_plan
@@ -137,12 +147,6 @@ The following are the test plans for the DPDK DTS automated test system.
     skeleton_test_plan
     timer_test_plan
     vxlan_sample_test_plan
-
-    distributor_test_plan.rst
-    efd_test_plan.rst
-    multiple_pthread_test_plan.rst
-    ptpclient_test_plan.rst
-    qinq_filter_test_plan.rst
-    vf_kernel_test_plan.rst
-    ddp_gtp_test_plan.rst
-    generic_flow_api_test_plan.rst
+    ptpclient_test_plan
+    distributor_test_plan
+    efd_test_plan
diff --git a/test_plans/inline_ipsec_test_plan.rst b/test_plans/inline_ipsec_test_plan.rst
index 8725284..9bec8cc 100644
--- a/test_plans/inline_ipsec_test_plan.rst
+++ b/test_plans/inline_ipsec_test_plan.rst
@@ -30,9 +30,10 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-======================
-Inline IPsec Test Plan
-======================
+==========================
+Niantic Inline IPsec Tests
+==========================
+
 This test plan describe the method of validation inline hardware acceleration
 of symmetric crypto processing of IPsec flows on Intel® 82599 10 GbE
 Controller (IXGBE) within the cryptodev framework.
@@ -136,15 +137,17 @@ Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
 	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
 	0x2 --config="(0,0,20),(1,0,21)" -f ./enc.cfg
 
-Use scapy to listen on unprotected port
-sniff(iface='%s',count=1,timeout=10)
+Use scapy to listen on unprotected port::
+
+    sniff(iface='%s',count=1,timeout=10)
 	
 Use scapy send burst(32) normal packets with dst ip (192.168.105.0) to protected port.
 
-Check burst esp packets received from unprotected port.
-tcpdump -Xvvvi ens802f1
-tcpdump: listening on ens802f1, link-type EN10MB (Ethernet), capture size 262144 bytes
-06:10:25.674233 IP (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto ESP (50), length 108)
+Check burst esp packets received from unprotected port::
+
+    tcpdump -Xvvvi ens802f1
+    tcpdump: listening on ens802f1, link-type EN10MB (Ethernet), capture size 262144 bytes
+    06:10:25.674233 IP (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto ESP (50), length 108)
     172.16.1.5 > 172.16.2.5: ESP(spi=0x000003ed,seq=0x9), length 88
         0x0000:  4500 006c 0000 0000 4032 1f36 ac10 0105  E..l....@2.6....
         0x0010:  ac10 0205 0000 03ed 0000 0009 0000 0000  ................
@@ -156,7 +159,7 @@ tcpdump: listening on ens802f1, link-type EN10MB (Ethernet), capture size 262144
 
 Check esp packets' format is correct.
 
-See decrypted packets on scapy output
+See decrypted packets on scapy output::
 
     ###[ IP ]###
       version   = 4
@@ -179,6 +182,7 @@ See decrypted packets on scapy output
 Test Case: IPSec Encryption with Jumboframe
 ===========================================
 Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
+
 	sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev 
 	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
 	0x2 --config="(0,0,20),(1,0,21)" -f ./enc.cfg
@@ -283,6 +287,7 @@ Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
 	sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev 
 	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
 	0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg
+
 Default frame size is 1518, Send two burst(1000) esp packets to unprotected port.
 
 First one will produce an error "IPSEC_ESP: failed crypto op" in the IPsec application, 
diff --git a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
index 4942910..e621eeb 100644
--- a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
+++ b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-=======================================================
-Improve ixgbe_get_vf_queue to include extra information
-=======================================================
+==========================================================
+Niantic ixgbe_get_vf_queue Include Extra Information Tests
+==========================================================
 
 Description
 ===========
diff --git a/test_plans/macsec_for_ixgbe_test_plan.rst b/test_plans/macsec_for_ixgbe_test_plan.rst
index 597531d..401723e 100644
--- a/test_plans/macsec_for_ixgbe_test_plan.rst
+++ b/test_plans/macsec_for_ixgbe_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-=====================================================
-Niantic: Media Access Control Security (MACsec) Tests
-=====================================================
+====================================================
+Niantic Media Access Control Security (MACsec) Tests
+====================================================
 
 Description
 ===========
diff --git a/test_plans/ptpclient_test_plan.rst b/test_plans/ptpclient_test_plan.rst
index 7e349b5..0ee0f1e 100644
--- a/test_plans/ptpclient_test_plan.rst
+++ b/test_plans/ptpclient_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-===============
-EEE1588 Sample 
-===============
+==================================
+Sample Application Tests: IEEE1588
+==================================
 
 The PTP (Precision Time Protocol) client sample application is a simple 
 example of using the DPDK IEEE1588 API to communicate with a PTP master 
diff --git a/test_plans/qinq_filter_test_plan.rst b/test_plans/qinq_filter_test_plan.rst
index fc2aef8..e180341 100644
--- a/test_plans/qinq_filter_test_plan.rst
+++ b/test_plans/qinq_filter_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-===============================   
-Cloud filters for QinQ steering
-===============================
+===============================================
+Fortville Cloud filters for QinQ steering Tests
+===============================================
 This document provides test plan for testing the function of Fortville:
 QinQ filter function
 
diff --git a/test_plans/queue_region_test_plan.rst b/test_plans/queue_region_test_plan.rst
index af80f50..20ac496 100644
--- a/test_plans/queue_region_test_plan.rst
+++ b/test_plans/queue_region_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-======================================
-API to configure queue regions for RSS
-======================================
+===========================================
+Fortville Configure RSS Queue Regions Tests
+===========================================
 Description
 ===========
 
diff --git a/test_plans/sw_eventdev_pipeline_sample_test_plan.rst b/test_plans/sw_eventdev_pipeline_sample_test_plan.rst
index c837bcc..1da0dc1 100644
--- a/test_plans/sw_eventdev_pipeline_sample_test_plan.rst
+++ b/test_plans/sw_eventdev_pipeline_sample_test_plan.rst
@@ -29,6 +29,7 @@
    STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
+
 ===============================
 Eventdev Pipeline SW PMD Tests
 ===============================
diff --git a/test_plans/vf_kernel_test_plan.rst b/test_plans/vf_kernel_test_plan.rst
index b8f08d5..8c150be 100644
--- a/test_plans/vf_kernel_test_plan.rst
+++ b/test_plans/vf_kernel_test_plan.rst
@@ -30,9 +30,9 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-===========================
-VFD as SRIOV Policy Manager
-===========================
+=================================
+VFD as SRIOV Policy Manager Tests
+=================================
 
 VFD is SRIOV Policy Manager (daemon) running on the host allowing
 configuration not supported by kernel NIC driver, supports ixgbe and
-- 
1.9.3
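
The index.rst hunk above adds missing entries (and drops stray `.rst` suffixes) so that every test plan is reachable from the toctree; otherwise Sphinx warns that a document "isn't included in any toctree". A stdlib sketch that lists such orphans (a hypothetical helper for illustration, not part of DTS; it takes filename lists rather than scanning a directory):

```python
def orphan_documents(toctree_entries, rst_files):
    """Return documents present on disk but missing from the toctree.

    Sphinx warns "document isn't included in any toctree" for each such
    file. Entries may carry a stray ".rst" suffix (as the hunk above
    removes), so the suffix is normalized away before comparison.
    (Hypothetical helper for illustration -- not part of DTS.)
    """
    def stem(name):
        name = name.strip()
        return name[:-4] if name.endswith(".rst") else name

    entries = {stem(e) for e in toctree_entries}
    return sorted(stem(f) for f in rst_files
                  if f.endswith(".rst")
                  and stem(f) not in entries
                  and f != "index.rst")
```

With the pre-patch index.rst, `orphan_documents` would report plans such as `ddp_gtp_qregion_test_plan` until the toctree entries are added.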

