* [dts][PATCH V1]test_plans/cvl_dcf_qos: correct some typos
@ 2021-11-12  0:30 Peng Yuan
  2021-11-23 14:19 ` Tu, Lijuan
From: Peng Yuan @ 2021-11-12  0:30 UTC (permalink / raw)
  To: dts; +Cc: Peng Yuan

Correct some typos and clarify some descriptions.

Signed-off-by: Peng Yuan <yuan.peng@intel.com>

diff --git a/test_plans/cvl_dcf_qos_test_plan.rst b/test_plans/cvl_dcf_qos_test_plan.rst
index 3d96a0f5..1ceaed0c 100644
--- a/test_plans/cvl_dcf_qos_test_plan.rst
+++ b/test_plans/cvl_dcf_qos_test_plan.rst
@@ -54,6 +54,7 @@ available BW (min, max) per TC etc. To be accomplished by way of new advance vir
 4. Enable DPDK iAVF to set TC to queue mapping, to be accomplished by way of new advance virtchnl op.
 
 Note: The AVF inherits TCs that are configured on the PF. DCF tunes the BW allocation per TC for the target AVF.
+Priority: TC7>…>TC0. TC7 has the highest priority, TC0 has the lowest priority.
 
 QoS configuration ownership::
 
@@ -80,8 +81,9 @@ Prerequisites
 =============
 
 1. Hardware:
-   1 port from columbiaville_100g(NIC-1), 2 ports from columbiaville_25g(NIC-2)
-   one 100G cable, one 10G cable::
+   1 port from columbiaville_100g(NIC-1), 2 ports from columbiaville_25g(NIC-2);
+   one 100G cable, one 10G cable;
+   The connection is shown in the table below::
 
     +---------------------------------+
     |  DUT           |  IXIA          |
@@ -92,10 +94,10 @@ Prerequisites
     | NIC-2,Port-1  ---  NIC-2,Port-2 |
     +---------------------------------+
 
-   assume that device ID and pci address of NIC-1,Port-1 are ens785f0 and 18:00.0,
+   Assume that device ID and pci address of NIC-1,Port-1 are ens785f0 and 18:00.0,
    device ID and pci address of NIC-2,Port-1 are ens802f0 and 86:00.0.
 
-2. software:
+2. Software:
 
    - dpdk: http://dpdk.org/git/dpdk
    - runtime command: https://doc.dpdk.org/guides/testpmd_app_ug/testpmd_funcs.html
@@ -107,7 +109,7 @@ Prerequisites
 
         yum install lldpad
 
-3. don't allow the peer port modify the DCB parameter::
+3. Don't allow the peer port modify the DCB parameter::
 
     lldptool -T -i ens802f0 -V ETS-CFG willing=no
     lldptool -T -i ens785f0 -V ETS-CFG willing=no
@@ -116,7 +118,7 @@ Prerequisites
 
     lldptool -T -i enp785s0f0 -V ETS-CFG willing=no tsa=0:strict,1:strict,2:strict,3:strict,4:strict,5:strict,6:strict,7:strict up2tc=0:0,1:0,2:0,3:1,4:2,5:0,6:0,7:0 tcbw=10,30,60,0,0,0,0,0
 
-   or configure DCB with dcbgetset tool(an internal tool to set DCB without peer negotiation)::
+   Or configure DCB with dcbgetset tool(an internal tool to set DCB without peer negotiation)::
 
     ./dcbgetset enp785s0f0 --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,30,60,0,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
 
@@ -125,7 +127,7 @@ Prerequisites
     echo 2 > /sys/bus/pci/devices/0000:18:00.0/sriov_numvfs
     echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs
 
-6. Set each VF0 as trust and set each VF1 mac address::
+6. Set each VF0 to trust mode and set each VF1's mac address::
 
     ip link set dev ens785f0 vf 0 trust on
     ip link set ens785f0 vf 1 mac 00:11:22:33:44:55
@@ -142,20 +144,20 @@ Prerequisites
     0000:86:01.0 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf
     0000:86:01.1 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf
 
-8. 100G testpmd setting, just use 100G NIC port, start testpmd::
+8. For the 100G testpmd setting, use only the 100G NIC port, and launch testpmd as below::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -- -i --txq=8 --rxq=8 --nb-cores=8 --port-topology=loop
 
-   10G testpmd setting, use 100G NIC and 25G NIC ports, start testpmd::
+   For the 10G testpmd setting, use the 100G NIC and 25G NIC ports, and launch testpmd as below::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -a 86:01.0,cap=dcf -a 86:01.1 -- -i --txq=8 --rxq=8 --nb-cores=8
 
 Test case 1: strict mode, check peak_tb_rate
 ============================================
 Note: In strict mode, the "--tcbw" parameter is invalid.
-the throughput is only limited by shaper profile,
+The throughput is only limited by shaper profile,
 and the TC priority follows the default order.
-this case is to check the max rate of node limited by peak_tb_rate,
+This case is to check the max rate of node limited by peak_tb_rate,
 when the scheduler use strict priority mode in different user priority.
 
 1. DCB setting, set 3 TCs bandwidth with strict mode::
@@ -163,7 +165,7 @@ when the scheduler use strict priority mode in different user priority.
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,30,60,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0   
     ifconfig ens785f0 up
 
-2. start testpmd with 100G setting, then set profile and TC mapping::
+2. Start testpmd with 100G setting, then set profile and TC mapping::
 
     port stop all
     add port tm node shaper profile 0 1 1000000 0 2000000 0 0 0    
@@ -198,19 +200,19 @@ when the scheduler use strict priority mode in different user priority.
     set fwd mac
     start
 
-3. since configured 3 TCs, send four streams from IXIA,
+3. Since configured 3 TCs, send four streams from IXIA,
    mac address is vf1's mac address: "00:11:22:33:44:55", vlan=0, user priority=2,5,3,4,
    user priority 2/5 map to TC0, 3 maps to TC1, 4 maps TC2.
-   frame size is 68 bytes(4 bytes vlan field added to 64 bytes packet), each stream desired 25% max rate.
+   frame size is 68 bytes (4-byte vlan field added to a 64-byte packet), each stream desires 25% of the max rate (100Gbps).
    send each stream separately, check the TX throughput of each priority and queue mapping:
    user priority=2, throughput is 2MBps, mapping to queue0-3;
    user priority=5, throughput is 2MBps, mapping to queue0-3;
    user priority=3, throughput is 4MBps, mapping to queue4-5;
    user priority=4, throughput is 4MBps, mapping to queue6-7;
    send the four streams synchronously, the sum of throughput is 10MBps.
-   and the queue mapping is correct.
+   and the queue mapping is the same as when each stream is sent separately.
 
-4. restart testpmd, and map all the queues to 2 TCs(TC0 and TC1)::
+4. Restart testpmd, and map all the queues to 2 TCs(TC0 and TC1)::
 
     port stop all
     add port tm node shaper profile 0 1 1000000 0 2000000 0 0 0
@@ -243,7 +245,7 @@ when the scheduler use strict priority mode in different user priority.
     set fwd mac
     start
 
-5. send the same four streams as step3.
+5. Send the same four streams as step3.
    send each stream separately, check the TX throughput of each priority and queue mapping:
    stream 4 are dropped by vf1.
    user priority=2, throughput is 2MBps, mapping to queue0-3;
@@ -256,9 +258,9 @@ when the scheduler use strict priority mode in different user priority.
 Test case 2: ets mode, check peak_tb_rate
 =========================================
 Note: In ETS mode, the "--tcbw" parameter is valid.
-the throughput is only limited by TC bandwidth distribution and shaper profile,
+The throughput is only limited by TC bandwidth distribution and shaper profile,
 and the TC priority follows the value of the "--tcbw" setting.
-this case is to check the max rate of node limited by tcbw distribution and peak_tb_rate.
+This case is to check the max rate of node limited by tcbw distribution and peak_tb_rate,
 when the scheduler use ETS mode in different user priority.
 
 1. DCB setting, set 2 TCs bandwidth with ets mode::
@@ -268,7 +270,7 @@ when the scheduler use ETS mode in different user priority.
     ifconfig ens785f0 up
     ifconfig ens802f0 up
 
-2. start testpmd with 10G setting, then set profile and TC mapping::
+2. Start testpmd with 10G setting, then set profile and TC mapping::
 
     set portlist 0,2,1,3
     show config fwd
@@ -319,11 +321,11 @@ when the scheduler use ETS mode in different user priority.
     set fwd mac
     start
 
-3. send two streams from IXIA, vlan=0, priority=0/4(TC0/TC1),
+3. Send two streams from IXIA, vlan=0, priority=0/4(TC0/TC1),
    mac address is VF1's mac address "00:11:22:33:44:55",
-   frame size is 68 bytes(4 bytes vlan field added to 64 bytes packet), each stream desired 50% max rate.
+   frame size is 68 bytes (4-byte vlan field added to a 64-byte packet), each stream desires 50% of the max rate (100Gbps).
    send each stream separately, check the port3(VF1 of 25G port) stats:
-   both of the tx rate is about 7.3Gbps(linerate);
+   each tx rate is about 7.3Gbps (line rate);
    stop forward, check queue mapping:
    when send stream of UP=0, the tx queues are queue0-queue3;
    when send stream of UP=4, the tx queues are queue4-queue7;
@@ -332,7 +334,7 @@ when the scheduler use ETS mode in different user priority.
    stop forward, check the result:
    the throughput's proportion of queue0-3 and queue4-7 is about 20:80 as the DCB TC bandwidth setting.
 
-4. set both two profiles' PIR to 500000000, other settings are the same as step2, 
+4. Set both two profiles' PIR to 500000000, other settings are the same as step2,
    send same streams as step3.
    send each stream separately, check the port3 tx rate is about 3.95Gbps, closed to the PIR 4Gbps,
    check queue mapping is same as step3.
@@ -345,7 +347,8 @@ when the scheduler use ETS mode in different user priority.
 
 Test case 3: strict mode, check cmit_tb_rate
 ============================================
-this case is to check the guaranteed rate of node set by cmit_tb_rate.
+This case is to check the guaranteed rate of node set by cmit_tb_rate.
+Note: currently the cmit_tb_rate setting does not take effect; it is not supported by the kernel.
 
 1. DCB setting, set 2 TCs bandwidth with strict mode::
 
@@ -354,9 +357,9 @@ this case is to check the guaranteed rate of node set by cmit_tb_rate.
     ifconfig ens785f0 up
     ifconfig ens802f0 up
 
-2. start testpmd with 10G setting, then set profile and TC mapping as test_case 2 step2.
+2. Start testpmd with 10G setting, then set profile and TC mapping as test_case 2 step2.
 
-3. send two streams from IXIA,
+3. Send two streams from IXIA,
    mac address is VF1's mac address "00:11:22:33:44:55", vlan=0, priority=0/3(TC0/TC1),
    frame size is 68 bytes(4 bytes vlan field added to 64 bytes packet), each stream desired 50% max rate.
    send each stream separately, check the port3(VF1 of 25G port) tx rate is about 7.3Gbps,
@@ -368,7 +371,10 @@ this case is to check the guaranteed rate of node set by cmit_tb_rate.
    queue4-queue7 which mapping to TC1 have about 7.22Gbps.
 
 Note: the cmit of the profile is not supported now, so the current expected result is:
-all the TX throughput should at TC1, TC0 should has no throughput.
+   all the Tx throughput should be at TC1; TC0 should have no throughput.
+   queue0-queue3 which map to TC0 have no throughput,
+   queue4-queue7 which map to TC1 have about 7.3Gbps.
+
 
 Test case 4: ets mode, check the TC throughput of min BW allocation
 ===================================================================
@@ -441,7 +447,7 @@ this case is to check the TC throughput of min BW allocation.
     set fwd mac
     start
 
-3. send 8 streams from IXIA, vlan=0, priority=0-7(TC0-TC7),
+3. Send 8 streams from IXIA, vlan=0, priority=0-7(TC0-TC7),
    mac address is VF1's mac address "00:11:22:33:44:55",
    frame size is 1024 bytes, each stream desired 12.5% max rate.
    send each stream separately, the PIR can be reached, and the queue mapping is correct:
@@ -454,10 +460,10 @@ this case is to check the TC throughput of min BW allocation.
    UP6(TC2) stream maps queue6-7, the throughput is 8Gbps.
    UP7(TC2) stream maps queue6-7, the throughput is 8Gbps.
    send 8 streams synchronously, check throughput is 9.77Gbps,
-   TC2 and TC1's PIR can be satisfied, and the rest rate is given to TC0.
+   TC2's and TC1's PIR (1.2Gbps/8Gbps) can be satisfied, and the remaining rate is given to TC0.
 
-4. set frame size to 68bytes, send 8 streams synchronously,
-   check the throughput is 7.273Gbps. all the TC can’t reach PIR.
+4. Set frame size to 68 bytes, send 8 streams synchronously,
+   check the throughput is about 7.273Gbps; none of the TCs can reach PIR.
    TC0 rate is 0.072Gbps, occupys 0.01 ets BW.
    TC1 rate is 0.72Gbps, occupys 0.1 ets BW.
    TC2 rate is 6.48Gbps, occupys 0.89 ets BW.
@@ -465,13 +471,13 @@ this case is to check the TC throughput of min BW allocation.
 
 Test case 5: 2 iavf VFs, strict mode, check peak_tb_rate
 ========================================================
-each VF's max rate is limited by the sum of peak_tb_rate of all TCs binded to it.
+Each VF's max rate is limited by the sum of the peak_tb_rate of all TCs bound to it.
 
 1. DCB setting, set 3 TCs bandwidth with strict mode::
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,30,60,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
 
-2. create 3 VFs::
+2. Create 3 VFs::
 
     echo 3 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
     ./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1 18:01.2
@@ -479,7 +485,7 @@ each VF's max rate is limited by the sum of peak_tb_rate of all TCs binded to it
     ip link set ens785f0 vf 1 mac 00:11:22:33:44:55
     ip link set ens785f0 vf 2 mac 00:11:22:33:44:66
 
-3. start testpmd with 100G setting, different vsi node of same TC node use different profiles::
+3. Start testpmd with 100G setting, different vsi node of same TC node use different profiles::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -a 18:01.2 -a 18:01.3 -- -i --txq=8 --rxq=8 --port-topology=loop --nb-cores=8
     port stop all
@@ -524,14 +530,14 @@ each VF's max rate is limited by the sum of peak_tb_rate of all TCs binded to it
     add port tm leaf node 2 5 800 0 1 2 0 0 0xffffffff 0 0
     add port tm leaf node 2 6 700 0 1 2 0 0 0xffffffff 0 0
     add port tm leaf node 2 7 700 0 1 2 0 0 0xffffffff 0 0
-    port tm hierarchy commit 2 yes vlan=0, priority=1,2,3,4, vlan=0, priority=1,2,3,4,
+    port tm hierarchy commit 2 yes
     port start all
     set fwd mac
     start
 
-4. send 8 streams, stream0-3’s mac address is vf1's, vlan=0, priority=1/2/3/4(TC0/TC0/TC1/TC2),
+4. Send 8 streams, stream0-3’s mac address is vf1's, vlan=0, priority=1/2/3/4(TC0/TC0/TC1/TC2),
    stream4-7' mac address is vf2's, vlan=0, priority=1,2,3,4,
-   send each stream separatly, check the stats:
+   send each stream separately, check the stats:
    stream0 maps queue0-1 of port 1, the throughput reaches PIR of profile 1(16Mbps).
    stream1 maps queue0-1 of port 1, the throughput reaches PIR of profile 1(16Mbps).
    stream2 maps queue2-3 of port 1, the throughput reaches PIR of profile 2(32Mbps).
@@ -541,18 +547,19 @@ each VF's max rate is limited by the sum of peak_tb_rate of all TCs binded to it
    stream6 maps queue2-5 of port 2, the throughput reaches PIR of profile 1(16Mbps).
    stream7 maps queue6-7 of port 2, the throughput reaches PIR of profile 2(32Mbps).
    send all streams synchronously, each 12.5%max, check the sum of throughput reach 128Mbps.
-   each stream's queue mapping is correct.
+   each stream's queue mapping is correct, the same as when sent separately.
 
 Test case 6: 2 iavf VFs, strict mode, check cmit_tb_rate
 ========================================================
-each VF's guaranteed rate is set by the cmit_tb_rate of TC0 binded to it.
+Each VF's guaranteed rate is set by the cmit_tb_rate of TC0 bound to it.
+Note: currently the cmit_tb_rate setting does not take effect; it is not supported by the kernel.
 
 1. DCB setting, set 3 TCs bandwidth with strict mode::
 
     ./dcbgetset ens785f0 --ieee --up2tc 0,0,0,1,0,0,0,0 --tcbw 20,80,0,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
     ./dcbgetset ens802f0 --ieee --up2tc 0,0,0,1,0,0,0,0 --tcbw 20,80,0,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
 
-2. create 3 VFs on each pf::
+2. Create 3 VFs on each pf::
 
     echo 3 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
     ip link set dev ens785f0 vf 0 trust on
@@ -565,7 +572,7 @@ each VF's guaranteed rate is set by the cmit_tb_rate of TC0 binded to it.
     ip link set ens802f0 vf 2 mac 00:11:22:33:44:88
     ./usertools/dpdk-devbind.py -b vfio-pci 86:01.0 86:01.1 86:01.2
 
-3. start testpmd with 10G setting::
+3. Start testpmd with 10G setting::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -a 18:01.2 -a 86:01.0,cap=dcf -a 86:01.1 -a 86:01.2 -- -i --txq=8 --rxq=8 --nb-cores=8
     set portlist 0,3,1,4,2,5
@@ -645,9 +652,9 @@ each VF's guaranteed rate is set by the cmit_tb_rate of TC0 binded to it.
     set fwd mac
     start
 
-4. send 4 streams synchronously, stream0-1's mac address is vf1's, vlan id=0, UP=2/3(TC0/TC1),
+4. Send 4 streams synchronously, stream0-1's mac address is vf1's, vlan id=0, UP=2/3(TC0/TC1),
    streams2-3's mac address is vf2's, vlan id=0, UP=2/3(TC0/TC1),
-   frame size 68 bytes, each stream allocates 25%max.
+   frame size is 68 bytes, each stream allocates 25% max.
    check the vf4 and vf5 stats, the sum of tx rate is 7.27Gbps, each vf tx is 3.64Gbps.
    in each vf, TC0 should occupied 0.8Gbps, the rest of throughput is occupied by TC1, which is about 2.84Gbps
    stop the fwd, check each queue's tx stats,
@@ -655,11 +662,14 @@ each VF's guaranteed rate is set by the cmit_tb_rate of TC0 binded to it.
    vf4's queue4-queue7 and vf5's queue2-queue7 map to TC1, which occupied 2.84Gbps.
 
 Note: now, the cmit_tb_rate setting can't take work, it is not supported by kernel.
-so the current status should be: all the TX throughput are occupied by TC1, TC0 should have no throughput.
+   So the current result of step 4 should be:
+   all the Tx throughput is occupied by TC1; TC0 should have no throughput.
+   vf4's queue0-queue3 and vf5's queue0-queue1 map to TC0, which have no throughput.
+   vf4's queue4-queue7 and vf5's queue2-queue7 map to TC1, which occupy 3.64Gbps.
 
 Test case 7: 2 iavf VFs, ets mode
 =================================
-in ETS mode, calculate the sum value of different vf node which binded to same TC,
+In ETS mode, calculate the sum value of the different vf nodes bound to the same TC,
 the proportion of the value of different TC is consistent to TC bandwitch distribution
 
 1. DCB setting, set 3 TCs bandwidth with ets mode::
@@ -667,7 +677,7 @@ the proportion of the value of different TC is consistent to TC bandwitch distri
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,30,60,0,0,0,0,0 --tsa 2,2,2,2,2,2,2,2 --pfc 0,0,0,0,0,0,0,0   
     ./dcbgetset ens802f0  --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,30,60,0,0,0,0,0 --tsa 2,2,2,2,2,2,2,2 --pfc 0,0,0,0,0,0,0,0   
 
-2. create 3 VFs on each pf::
+2. Create 3 VFs on each pf::
 
     echo 3 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
     ip link set dev ens785f0 vf 0 trust on
@@ -680,7 +690,7 @@ the proportion of the value of different TC is consistent to TC bandwitch distri
     ip link set ens802f0 vf 2 mac 00:11:22:33:44:88
     ./usertools/dpdk-devbind.py -b vfio-pci 86:01.0 86:01.1 86:01.2
 
-3. start testpmd with 10G setting::
+3. Start testpmd with 10G setting::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -a 18:01.2 -a 86:01.0,cap=dcf -a 86:01.1 -a 86:01.2 -- -i --txq=8 --rxq=8 --nb-cores=8
     set portlist 0,3,1,4,2,5
@@ -772,11 +782,12 @@ the proportion of the value of different TC is consistent to TC bandwitch distri
     set fwd mac
     start
 
-4. send 8 streams, stream0-3’s mac address is vf1's, vlan=0, priority=1/2/3/4(TC0/TC0/TC1/TC2),
+4. Send 8 streams synchronously, stream0-3’s mac address is vf1's, vlan=0, priority=1/2/3/4(TC0/TC0/TC1/TC2),
    stream4-7’s mac address is vf2's, vlan=0, priority=1/2/3/4(TC0/TC0/TC1/TC2),
    frame size 68 bytes, each stream allocates 12.5%max.
    calculate the sum of vf1 and vf2 tx rate which belongs to TC0, mark it as t0,
-   calculate the sum of vf1 and vf2 tx rate which belongs to TC1 and TC2, mark them as t1 and t2.
+   calculate the sum of the vf1 and vf2 tx rates which belong to TC1, mark it as t1,
+   calculate the sum of the vf1 and vf2 tx rates which belong to TC2, mark it as t2,
    check the proportion of t0:t1:t2 is 1:3:6, which can match the ets bandwidth limit 1:3:6,
    and the queue mapping is:
    stream1 maps queue0-1 of vf1,
@@ -790,13 +801,13 @@ the proportion of the value of different TC is consistent to TC bandwitch distri
 
 Test case 8: strict mode, 8 TCs
 ===============================
-this case is to check QoS Tx side processing with max TC number set in strict priority mode.
+This case is to check QoS Tx side processing with max TC number set in strict priority mode.
 
 1. DCB setting, set 8 TCs bandwidth with strict mode::
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,1,2,3,4,5,6,7 --tcbw 10,30,60,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
 
-2. start testpmd with 100G setting::
+2. Start testpmd with 100G setting::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -- -i --txq=8 --rxq=8 --port-topology=loop --nb-cores=8
     port stop all
@@ -851,10 +862,10 @@ this case is to check QoS Tx side processing with max TC number set in strict pr
     set fwd mac
     start
 
-3. send 8 streams vlan id=0, UP0-UP7,68bytes, each stream 12.5%max, which is much more than PIR.
-   Tx is limited by PIR, each TC can reach to PIR.
+3. Send 8 streams synchronously, vlan id=0, UP0-UP7, 68 bytes, each stream 12.5% max, which is much more than PIR.
+   check Tx is limited by PIR; each TC can reach PIR.
 
-4. change the shaper profile::
+4. Change the shaper profile::
 
     port stop all
     add port tm node shaper profile 0 1 1000000 0 1780000000 0 0 0
@@ -906,23 +917,23 @@ this case is to check QoS Tx side processing with max TC number set in strict pr
     set fwd mac
     start
 
-5. send 8 streams vlan id=0, UP0-UP7, 68bytes, each stream 12.5%max, which is less than PIR.
+5. Send 8 streams synchronously, vlan id=0, UP0-UP7, 68 bytes, each stream 12.5% max, which is less than PIR.
    stop the forward, check all the Tx packet drop is at queue0, which maps to TC0.
    the throughput satisfy TC7-TC1 by priority.
 
-6. send 8 streams vlan id=0, UP0-UP7,1024bytes, each stream 12.5%max, which is less than PIR.
+6. Send 8 streams synchronously, vlan id=0, UP0-UP7, 1024 bytes, each stream 12.5% max, which is less than PIR.
    stop the forward, check all the Tx packet drop is at queue0, which maps to TC0.
    the throughput satisfy TC7-TC1 by priority.
 
 Test case 9: strict mode, 1 TC
 ==============================
-this case is to check QoS Tx side processing with min TC number set in strict priority mode.
+This case is to check QoS Tx side processing with min TC number set in strict priority mode.
 
 1. DCB setting, set 1 TC bandwidth with strict mode::
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,0,0,0,0,0 --tcbw 10,30,60,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
 
-2. start testpmd with 100G setting::
+2. Start testpmd with 100G setting::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -- -i --txq=8 --rxq=8 --port-topology=loop --nb-cores=8
     port stop all
@@ -947,20 +958,20 @@ this case is to check QoS Tx side processing with min TC number set in strict pr
     set fwd mac
     start
 
-3. send 8 streams vlan id=0, UP0-UP7, which all map to TC0, 68bytes, each stream 12.5%max.
-   check the sum of Tx throughput can reach PIR.
-   only send 1 stream, check the Tx throughput can reach PIR too.
+3. Send 8 streams synchronously, vlan id=0, UP0-UP7, which all map to TC0, 68 bytes, each stream 12.5% max.
+   check the sum of Tx throughput can reach PIR (8Gbps).
+   send only 1 stream, check the Tx throughput can reach PIR (8Gbps) too.
 
 Test case 10: ets mode, 8 TCs
 =============================
-this case is to check QoS Tx side processing with max TC number set in ETS mode.
+This case is to check QoS Tx side processing with max TC number set in ETS mode.
 
 1. DCB setting, set 8 TCs bandwidth with ets mode::
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,1,2,3,4,5,6,7 --tcbw 5,10,15,10,20,1,30,9 --tsa 2,2,2,2,2,2,2,2 --pfc 0,0,0,0,0,0,0,0   
     ./dcbgetset ens802f0  --ieee --up2tc 0,1,2,3,4,5,6,7 --tcbw 5,10,15,10,20,1,30,9 --tsa 2,2,2,2,2,2,2,2 --pfc 0,0,0,0,0,0,0,0   
 
-2. start testpmd with 10G setting::
+2. Start testpmd with 10G setting::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -a 86:01.0,cap=dcf -a 86:01.1 -- -i --txq=8 --rxq=8 --nb-cores=8
     set portlist 0,2,1,3
@@ -1064,21 +1075,23 @@ this case is to check QoS Tx side processing with max TC number set in ETS mode.
     set fwd mac
     start
 
-3. send 8 streams vlan id=0, UP0-UP7, which map TC0-TC7, 68bytes, each stream 12.5%max,
+3. Send 8 streams synchronously, vlan id=0, UP0-UP7, which map to TC0-TC7, 68 bytes, each stream 12.5% max,
    check port3 stats, the Tx rate is 7.3Gbps.
    stop forward, check the tx rate, queue0-queue4 correspond to TC0-TC4, can reach the PIR(100MBps),
    queue6 which corresponds to TC6 is limited by PIR(200MBps) too.
    queue7(maps to TC7) is limited by Rx IXIA traffic, can’t reach PIR(400MBps),
-   and TC5(maps to queue5) is the lowest priority, other TCs must be satisfied first,
+   and queue5 (maps to TC5) is the lowest priority, other TCs must be satisfied first,
    so TC5 and TC7 are limited by the bandwidth distribution 1:9.
 
-4. set profile of port2 as below::
+4. Set profile of port2 as below::
 
     add port tm node shaper profile 2 1 1000000 0 100000000 0 0 0    
     add port tm node shaper profile 2 2 1000000 0 250000000 0 0 0    
     add port tm node shaper profile 2 3 1000000 0 100000000 0 0 0    
 
-   queue0-queue4 and queue6-7 can reach PIR(are limited by PIR),
+   Send the same 8 streams synchronously,
+   check port3 stats, the Tx rate is 7.3Gbps.
+   stop forward, check the tx rate: queue0-queue4 and queue6-7 can reach PIR (are limited by PIR),
    queue5(corresponds to TC5) is the lowest priority (1% BW set by DCB), 
    the rest rate are put to queue 5, may be more than 1% of whole throughput.
 
@@ -1088,18 +1101,20 @@ this case is to check QoS Tx side processing with max TC number set in ETS mode.
     add port tm node shaper profile 2 2 0 0 0 0 0 0    
     add port tm node shaper profile 2 3 0 0 0 0 0 0    
 
-   check all the steam's tx throughput proportion is due to ets bandwidth distribution.
+   Send the same 8 streams synchronously,
+   check port3 stats, the Tx rate is 7.3Gbps.
+   check all the streams' tx throughput proportion follows the ets bandwidth distribution (5:10:15:10:20:1:30:9).
 
 Test case 11: ets mode, 1 TC
 ============================
-this case is to check QoS Tx side processing with min TC number set in ETS mode.
+This case is to check QoS Tx side processing with min TC number set in ETS mode.
 
 1. DCB setting, set 1 TC bandwidth with ets mode::
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,0,0,0,0,0 --tcbw 100,0,0,0,0,0,0,0 --tsa 2,2,2,2,2,2,2,2 --pfc 0,0,0,0,0,0,0,0   
     ./dcbgetset ens802f0  --ieee --up2tc 0,0,0,0,0,0,0,0 --tcbw 100,0,0,0,0,0,0,0 --tsa 2,2,2,2,2,2,2,2 --pfc 0,0,0,0,0,0,0,0   
 
-2. start testpmd with 10G setting::
+2. Start testpmd with 10G setting::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -a 86:01.0,cap=dcf -a 86:01.1 -- -i --txq=8 --rxq=8 --nb-cores=8
     set portlist 0,2,1,3
@@ -1143,20 +1158,20 @@ this case is to check QoS Tx side processing with min TC number set in ETS mode.
     set fwd mac
     start
 
-3. send 8 streams vlan id=0, UP0-UP7, 68bytes, each stream 12.5%max.
+3. Send 8 streams synchronously, vlan id=0, UP0-UP7, 68 bytes, each stream 12.5% max.
    check the sum of Tx throughput can reach 7.3Gbps.
    only send 1 stream, check the Tx throughput can reach 7.3Gbps too.
 
 Test case 12: query qos setting
 ===============================
-the case is to check the support to query QoS settings.
+The case is to check the support to query QoS settings.
 
 1. DCB setting, set 3 TCs bandwidth with strict mode::
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,30,60,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
     ifconfig ens785f0 up
 
-2. start testpmd with 100G setting, then set profile and TC mapping::
+2. Start testpmd with 100G setting, then set profile and TC mapping::
 
     port stop all
     add port tm node shaper profile 0 1 1000000 0 2000000 0 0 0    
@@ -1187,29 +1202,29 @@ the case is to check the support to query QoS settings.
     port tm hierarchy commit 1 no
     port start all
 
-3. show port tm capability::
+3. Show port tm capability::
 
     show port tm cap 1
 
-   show port tm level capability::
+   Show port tm level capability::
 
     show port tm level cap 1 0
     show port tm level cap 1 1
     show port tm level cap 1 2
 
-   check shaper_private_rate_max are the same::
+   Check shaper_private_rate_max are the same::
 
     shaper_private_rate_max 12500000000
 
-   the value is speed of the port.
-   the shaper_private_rate_min is 0.
+   The value is speed of the port.
+   The shaper_private_rate_min is 0.
 
-   show port tm node capability::
+   Show port tm node capability::
 
     show port tm node cap 1 900
     show port tm node cap 1 800
 
-   check shaper_private_rate_max and shaper_private_rate_min,
+   Check shaper_private_rate_max and shaper_private_rate_min,
    the TC node value is consistent to profile setting.
    node 900::
 
@@ -1221,93 +1236,93 @@ the case is to check the support to query QoS settings.
     cap.shaper_private_rate_min 1000000
     cap.shaper_private_rate_max 4000000
 
-   check all the unit of rate is consistent which is Bps.
-   show capability of node 0-7 for port 1::
+   Check all the unit of rate is consistent which is Bps.
+   Show capability of node 0-7 for port 1::
 
     show port tm node cap 1 1
     node parameter null: not support capability get (error 22)
 
-   it's not supported by queue node.
+   It's not supported by queue node.
 
-4. show port tm node type::
+4. Show port tm node type::
 
     show port tm node type 1 0
     show port tm node type 1 900
     show port tm node type 1 1000
 
-   the result is::
+   The result is::
 
     leaf node
     nonleaf node
     nonleaf node
 
-   check the type is correct.
+   Check the type is correct.
 
 Test case 13: pf reset
 ======================
-this case is to check if the QoS setting works after resetting PF.
+This case is to check if the QoS setting works after resetting PF.
 
-1. run the test case 1, the result is as expected.
+1. Run the test case 1, the result is as expected.
 
-2. reset pf::
+2. Reset pf::
 
     echo 1 > /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/reset
 
-3. send same streams as step1, check no packets received and transmitted.
+3. Send same streams as step1, check no packets received and transmitted.
 
 Test case 14: vf reset
 ======================
-this case is to check if the QoS setting works after resetting VF.
+This case is to check if the QoS setting works after resetting VF.
 
-1. run the test case 1, the result is as expected.
+1. Run the test case 1, the result is as expected.
 
-2. reset VF1 by setting mac addr::
+2. Reset VF1 by setting mac addr::
 
     ip link set ens785f0 vf 1 mac 00:11:22:33:44:66
 
-   then execute below command in testpmd::
+   Then execute below command in testpmd::
 
     port stop 1
     port reset 1
     port start 1
     start
 
-3. send same streams in step1 but with VF1's new mac address "00:11:22:33:44:66",
+3. Send same streams in step1 but with VF1's new mac address "00:11:22:33:44:66",
    check TC0 stream maps to all queues, TC1 and TC2 stream map to queue0.
 
-4. set the qos settings as test case 1 step2 again.
+4. Set the qos settings as test case 1 step2 again.
    send the same steams, check the same result as step 1.
 
 Test case 15: link status change
 ================================
-this case is to check if the QoS setting works after link status change.
+This case is to check if the QoS setting works after link status change.
 
-1. run the test case 1, the result is as expected.
+1. Run the test case 1, the result is as expected.
 
-2. change the link status::
+2. Change the link status::
 
     ifconfig ens785f0 down
 
-   check TC setting is not changed, the queue mapping is not changed,
+   Check that the TC setting is not changed, the queue mapping is not changed,
    The Tx rate is not changed.
 
-3. change the link status again::   
+3. Change the link status again::
 
     ifconfig ens785f0 up
 
-   check the status, get the same result.
+   Check the status, get the same result.
 
 Test case 16: DCB setting TC change
 ===================================
-this case is to check if the QoS setting works after DCB setting TC change.
+This case is to check if the QoS setting works after DCB setting TC change.
 
-1. run the test case 1, the result is as expected.
+1. Run the test case 1, the result is as expected.
 
-2. reset the DCB setting as below::
+2. Reset the DCB setting as below::
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,40,50,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
 
-3. send the same streams as step 1,
+3. Send the same streams as step 1,
    Only send TC0 stream, queue0-queue7 of both Rx and Tx have traffic, load is balancing.
    Only send TC1/TC2 streams, only queue0 has Rx and Tx traffic.
 
@@ -1319,14 +1334,14 @@ Test case 17: negative case for requested VF
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,1,0,0,0,0 --tcbw 20,80,0,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
 
-2. create 2 VFs::
+2. Create 2 VFs::
 
     echo 2 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
     ./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
     ip link set dev ens785f0 vf 0 trust on
     ip link set ens785f0 vf 1 mac 00:11:22:33:44:55
 
-3. start testpmd with 100G setting::
+3. Start testpmd with 100G setting::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -- -i --txq=8 --rxq=8 --port-topology=loop --nb-cores=8
     port stop all
@@ -1345,7 +1360,7 @@ Set 3 VSIs, more than 2 VFs created::
 
 Subcase 2: Valid number of TCs for the target VF
 ------------------------------------------------
-1. configured 2 TCs by DCB, but only set 1 TC node::
+1. Configure 2 TCs by DCB, but only set 1 TC node::
 
     add port tm node shaper profile 0 1 63000 0 12500000000 0 0 0        
     add port tm nonleaf node 0 1000000 -1 0 1 0 -1 1 0 0
@@ -1369,7 +1384,7 @@ Subcase 2: Valid number of TCs for the target VF
     ice_dcf_commit_check(): Not all VFs are binded to TC1
     no error: (no stated reason) (error 0)
 
-3. add 1 profile, but use 2 profiles::
+3. Add 1 profile, but use 2 profiles::
 
     add port tm node shaper profile 0 1 1000000 0 2000000 0 0 0    
     add port tm nonleaf node 0 1000 -1 0 1 0 -1 1 0 0              
@@ -1409,7 +1424,7 @@ Subcase 3: Valid Min and Max values
     add port tm leaf node 0 3 800000 0 1 2 2 0 0xffffffff 0 0
     testpmd> port tm hierarchy commit 0 no
 
-   the setting commit successfully.
+   The setting is committed successfully.
 
 2.Min BW for the given TC must be less than that of Max BW::
 
@@ -1436,10 +1451,10 @@ Subcase 3: Valid Min and Max values
     add port tm leaf node 0 3 800000 0 1 2 1 0 0xffffffff 0 0
     port tm hierarchy commit 0 yes
 
-   the setting commit successfully.
+   The setting is committed successfully.
 
 3. Max BW must be less than or equal to negotiated link speed for the port
-1).one iavf VF, two TCs::
+1). One iavf VF, two TCs::
 
     add port tm node shaper profile 0 1 1000000000 0 12000000000 0 0 0
     add port tm nonleaf node 0 1000000 -1 0 1 0 -1 1 0 0
@@ -1453,7 +1468,7 @@ Subcase 3: Valid Min and Max values
     port tm hierarchy commit 0 yes
     ice_dcf_validate_tc_bw(): Total value of TC0 min bandwidth and other TCs' max bandwidth 104000000kbps should be less than port link speed 100000000kbps
 
-2).two iavf VF, two TCs::
+2). Two iavf VFs, two TCs::
 
     echo 3 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
     ./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1 18:01.2
@@ -1505,7 +1520,7 @@ Subcase 3: Valid Min and Max values
     set fwd mac
     start
 
-   send two streams from IXIA, vlan=0, priority=0/3(TC0/TC1),
+   Send two streams from IXIA, vlan=0, priority=0/3(TC0/TC1),
    mac address is VF1's mac address "00:11:22:33:44:55", frame size is 1024 bytes, 100% max rate.
    send each stream separately, check the TX throughput of each TC can reach linerate.
    and the queue mapping is correct.
@@ -1516,14 +1531,14 @@ Test case 18: negative case for req VF to update its queue to TC mapping
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,30,60,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
 
-2. create 2 VFs::
+2. Create 2 VFs::
 
     echo 2 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
     ./usertools/dpdk-devbind.py -b vfio-pci 18:01.0 18:01.1
     ip link set dev ens785f0 vf 0 trust on
     ip link set ens785f0 vf 1 mac 00:11:22:33:44:55
 
-3. start testpmd with 100G setting::
+3. Start testpmd with 100G setting::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -- -i --txq=8 --rxq=8 --port-topology=loop --nb-cores=8
     port stop all
@@ -1579,7 +1594,7 @@ Subcase 1: Total number of queue pairs match to what the VF is allocated
 
 Subcase 1: Number of TCs match is less than TC enabled on the VF
 ----------------------------------------------------------------
-1. not all VF0 VSI nodes binded to TC::
+1. Not all VF0 VSI nodes are bound to a TC::
 
     add port tm node shaper profile 0 1 1000000 0 2000000 0 0 0
     add port tm node shaper profile 0 2 1000000 0 4000000 0 0 0
@@ -1595,7 +1610,7 @@ Subcase 1: Number of TCs match is less than TC enabled on the VF
     ice_dcf_commit_check(): Not all VFs are binded to TC2
     no error: (no stated reason) (error 0)
 
-2. not all VF1 VSI nodes binded to TC::
+2. Not all VF1 VSI nodes are bound to a TC::
 
     add port tm node shaper profile 0 1 1000000 0 2000000 0 0 0
     add port tm node shaper profile 0 2 1000000 0 4000000 0 0 0
@@ -1625,7 +1640,7 @@ Subcase 1: Number of TCs match is less than TC enabled on the VF
     iavf_hierarchy_commit(): Does not set VF vsi nodes to all TCs
     no error: (no stated reason) (error 0)
 
-3. not all VF1 VSI nodes mapping to queues, set successfully::
+3. Not all VF1 VSI nodes are mapped to queues; the setting succeeds::
 
     add port tm nonleaf node 1 1000 -1 0 1 0 0 1 0 0
     add port tm nonleaf node 1 900 1000 0 1 1 0 1 0 0
@@ -1641,12 +1656,12 @@ Subcase 1: Number of TCs match is less than TC enabled on the VF
     add port tm leaf node 1 7 800 0 1 2 0 0 0xffffffff 0 0
     port tm hierarchy commit 1 yes
 
-   send TC0 and TC1 streams, the queue mapping is correct.
+   Send TC0 and TC1 streams, the queue mapping is correct.
 
 Subcase 3: Number of TCs match is more than TC enabled on the VF
 ----------------------------------------------------------------
 The TC number should be consistent to the TC enabled by lldptool.
-run the below steps sequentially.
+Run the steps below sequentially.
 
 1. TC node number is more than TC enabled by lldptool::
 
@@ -1680,7 +1695,7 @@ run the below steps sequentially.
     add port tm nonleaf node 1 600 1000 0 1 1 0 1 0 0
     node id: too many TCs (error 33)
 
-4. map the nonexist VSI node to queues::
+4. Map the nonexistent VSI node to queues::
 
     add port tm leaf node 1 0 900 0 1 2 0 0 0xffffffff 0 0
     add port tm leaf node 1 1 900 0 1 2 0 0 0xffffffff 0 0
@@ -1721,7 +1736,7 @@ There can't be overlap between queue to TC mapping::
 
 Subcase 5: Non-contiguous TC setting in queue mapping
 -----------------------------------------------------
-1. set non-contiguous TC and queue mapping::
+1. Set non-contiguous TC and queue mapping::
 
     add port tm node shaper profile 0 1 1000000 0 2000000 0 0 0    
     add port tm node shaper profile 0 2 1000000 0 4000000 0 0 0    
@@ -1753,7 +1768,7 @@ Subcase 5: Non-contiguous TC setting in queue mapping
     set fwd mac
     start
 
-2. send four streams from IXIA, vlan=0, priority=2/5/3/4(TC0/TC0/TC1/TC2),
+2. Send four streams from IXIA, vlan=0, priority=2/5/3/4(TC0/TC0/TC1/TC2),
    mac address is VF1's mac address "00:11:22:33:44:55".
    frame size is 68 bytes, each stream desired 25% max rate.
    send each stream separately, check the TX throughput of each priority and queue mapping:
@@ -1763,16 +1778,16 @@ Subcase 5: Non-contiguous TC setting in queue mapping
 
 Test case 19: different vlan ID
 ===============================
-there are different streams with different vlan id, but with same user priority.
-the sum of the streams's throughput is limited by the tcbw distribution or peak_tb_rate.
+There are different streams with different vlan IDs but the same user priority.
+The sum of the streams' throughput is limited by the tcbw distribution or peak_tb_rate.
 
 1. DCB setting, set 3 TCs bandwidth with strict mode::
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,30,60,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
     ifconfig ens785f0 up
 
-2. start testpmd with 100G setting, add vlan filter,
-   then set profile and TC mapping::
+2. Start testpmd with 100G setting, add vlan filter,
+   then set profile and TC mapping::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-10 -n 4 -a 18:01.0,cap=dcf -a 18:01.1 -- -i --txq=8 --rxq=8 --nb-cores=8 --port-topology=loop
     port stop all
@@ -1812,7 +1827,7 @@ the sum of the streams's throughput is limited by the tcbw distribution or peak_
     set fwd mac
     start
 
-3. send 8 streams from IXIA, mac address is VF1's mac address "00:11:22:33:44:55",
+3. Send 8 streams from IXIA, mac address is VF1's mac address "00:11:22:33:44:55",
    frame size is 68 bytes, each stream desired 25% max rate.
    stream 0-3, vlan id=0, priority=0/1/3/4(TC0/TC0/TC1/TC2),
    stream 4-5, vlan id=1, priority=0/3(TC0/TC1),
@@ -1820,18 +1835,19 @@ the sum of the streams's throughput is limited by the tcbw distribution or peak_
    only send stream 0,1,4,6 synchronously, the throughput is 2MBps, mapping queue 0-3
    only send steam 2 and 5 synchronously, the throughput is 4MBps, mapping queue 4-5
    only send steam 3 and 7 synchronously, the throughput is 4MBps, mapping queue 6-7
-   send all the streams synchronously, the throughput is 10MBps, queue mapping is correct.
+   send all the streams synchronously, the throughput is 10MBps, queue mapping is correct,
+   which is the same as in the previous steps.
 
 Test case 20: delete qos setting
 ================================
-the case is to check the support to delete QoS settings.
+This case is to check support for deleting QoS settings.
 
 1. DCB setting, set 3 TCs bandwidth with strict mode::
 
     ./dcbgetset ens785f0  --ieee --up2tc 0,0,0,1,2,0,0,0 --tcbw 10,30,60,0,0,0,0,0 --tsa 0,0,0,0,0,0,0,0 --pfc 0,0,0,0,0,0,0,0
     ifconfig ens785f0 up
 
-2. start testpmd with 100G setting, then set profile and TC mapping::
+2. Start testpmd with 100G setting, then set profile and TC mapping::
 
     port stop all
     add port tm node shaper profile 0 1 1000000 0 2000000 0 0 0
@@ -1847,7 +1863,7 @@ the case is to check the support to delete QoS settings.
     add port tm leaf node 0 4 700 0 1 2 2 0 0xffffffff 0 0
     add port tm leaf node 0 5 700 0 1 2 2 0 0xffffffff 0 0
 
-3. delete the shaper profile and nonleaf node::
+3. Delete the shaper profile and nonleaf node::
 
     del port tm node 0 1000
     node id: cannot delete a node which has children (error 33)
@@ -1856,8 +1872,8 @@ the case is to check the support to delete QoS settings.
     del port tm node shaper profile 0 1
     shaper profile null: profile in use (error 10)
 
-   the nodes can't be deleted due to the children nodes.
-   delete the leaf nodes first, then delete the nonleaf nodes and shaper profile::
+   The nodes can't be deleted because they have child nodes.
+   Delete the leaf nodes first, then delete the nonleaf nodes and shaper profile::
 
     del port tm node 0 5
     del port tm node 0 4
@@ -1872,17 +1888,17 @@ the case is to check the support to delete QoS settings.
     del port tm node shaper profile 0 1
     del port tm node shaper profile 0 2
 
-   deleted successfully.
+   Deleted successfully.
 
-4. add the settings again as step2, then commit the configuration::
+4. Add the settings again as step2, then commit the configuration::
 
     port tm hierarchy commit 0 no
 
-   delete the leaf node::
+   Delete the leaf node::
 
     del port tm node 0 5
     cause unspecified: already committed (error 1)
 
-   check the node can't be deleted after committed.
+   Check that the node can't be deleted after the commit.
 
-5. all the operation has the same result on port 1.
+5. All the operations have the same results on port 1.
-- 
2.17.1



* RE: [dts][PATCH V1]test_plans/cvl_dcf_qos: correct some typos
  2021-11-12  0:30 [dts][PATCH V1]test_plans/cvl_dcf_qos: correct some typos Peng Yuan
@ 2021-11-23 14:19 ` Tu, Lijuan
  0 siblings, 0 replies; 2+ messages in thread
From: Tu, Lijuan @ 2021-11-23 14:19 UTC (permalink / raw)
  To: Peng, Yuan, dts; +Cc: Peng, Yuan

> -----Original Message-----
> From: Peng Yuan <yuan.peng@intel.com>
> Sent: November 12, 2021 8:30
> To: dts@dpdk.org
> Cc: Peng, Yuan <yuan.peng@intel.com>
> Subject: [dts][PATCH V1]test_plans/cvl_dcf_qos: correct some typos
> 
> Correct some typos, and clarify some description.
> 
> Signed-off-by: Peng Yuan <yuan.peng@intel.com>

Applied with commit subject changed.

