test suite reviews and discussions
* [dts][PATCH V2 1/3] test_plans/index:add new testplan index
@ 2022-09-01 17:12 Zhimin Huang
  2022-09-01 17:12 ` [dts][PATCH V2 2/3] test_plans/ice_enable_basic_hqos_on_pf:add new testplan Zhimin Huang
  2022-09-01 17:12 ` [dts][PATCH V2 3/3] tests/ice_enable_basic_hqos_on_pf:add new test suite Zhimin Huang
  0 siblings, 2 replies; 3+ messages in thread
From: Zhimin Huang @ 2022-09-01 17:12 UTC (permalink / raw)
  To: dts; +Cc: Zhimin Huang

add ice_enable_basic_hqos_on_pf_test_plan to the test plan index

Signed-off-by: Zhimin Huang <zhiminx.huang@intel.com>
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index f9efc044..1ec7843e 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -158,6 +158,7 @@ The following are the test plans for the DPDK DTS automated test system.
     compressdev_qat_pmd_test_plan
     compressdev_zlib_pmd_test_plan
     enable_package_download_in_ice_driver_test_plan
+    ice_enable_basic_hqos_on_pf_test_plan
     multicast_test_plan
     ethtool_stats_test_plan
     metrics_test_plan.rst
-- 
2.17.1



* [dts][PATCH V2 2/3] test_plans/ice_enable_basic_hqos_on_pf:add new testplan
  2022-09-01 17:12 [dts][PATCH V2 1/3] test_plans/index:add new testplan index Zhimin Huang
@ 2022-09-01 17:12 ` Zhimin Huang
  2022-09-01 17:12 ` [dts][PATCH V2 3/3] tests/ice_enable_basic_hqos_on_pf:add new test suite Zhimin Huang
  1 sibling, 0 replies; 3+ messages in thread
From: Zhimin Huang @ 2022-09-01 17:12 UTC (permalink / raw)
  To: dts; +Cc: Zhimin Huang

add 22.07 new feature testplan

Signed-off-by: Zhimin Huang <zhiminx.huang@intel.com>
---
 .../ice_enable_basic_hqos_on_pf_test_plan.rst | 546 ++++++++++++++++++
 1 file changed, 546 insertions(+)
 create mode 100644 test_plans/ice_enable_basic_hqos_on_pf_test_plan.rst

diff --git a/test_plans/ice_enable_basic_hqos_on_pf_test_plan.rst b/test_plans/ice_enable_basic_hqos_on_pf_test_plan.rst
new file mode 100644
index 00000000..d785fb2b
--- /dev/null
+++ b/test_plans/ice_enable_basic_hqos_on_pf_test_plan.rst
@@ -0,0 +1,546 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation
+
+==================================
+ICE Enable basic HQoS on PF driver
+==================================
+
+Description
+===========
+A switch chip is used to fan out network ports because an Intel NIC does not provide that many ports.
+In this solution the NIC may be configured in 4x10G, 4x25G or 100G mode, and all of these modes connect to a switch chip.
+The bandwidth of the outside switch ports is lower than that of the NIC, e.g. 1G or 10G, so the NIC should limit the flow bandwidth to each switch port.
+To support this scenario, a 3-level Tx scheduler is needed:
+
+   - queue (each CPU core has 1 queue per outside switch port)
+   - queue group (maps to an outside switch port)
+   - port (local MAC port)
+
+The PMD is required to support the features below:
+
+   - At least a 3-layer Tx scheduler (Port --> Queue Group --> Queue)
+   - SP or RR scheduling on queue groups
+   - SP, RR or WFQ scheduling on queues
+   - Bandwidth configuration on each layer
+
+.. note::
+
+   Node priority 0 is highest priority, 7 is lowest priority.
+   SP: Strict Priority arbitration scheme.
+   RR: Round Robin arbitration scheme.
+   WFQ: Weighted Fair Queue arbitration scheme.
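+
+The hierarchy is built with the testpmd traffic-management commands documented
+in the testpmd runtime guide referenced in the Software section below. As a
+reading aid (not an additional requirement), the argument layout of the
+commands used throughout this plan is::
+
+   add port tm node shaper profile <port> <profile_id> <cmit_rate> <cmit_size> <peak_rate> <peak_size> <pkt_len_adjust> <pkt_mode>
+   add port tm nonleaf node <port> <node_id> <parent_id> <priority> <weight> <level> <profile_id> <n_sp_priorities> <stats_mask> <n_shared_shapers>
+   add port tm leaf node <port> <node_id> <parent_id> <priority> <weight> <level> <profile_id> <cman> <wred_profile> <stats_mask> <n_shared_shapers>
+
+Shaper rates are given in bytes per second. The node IDs (1000000/900000/...)
+are arbitrary values chosen by this plan, and the leaf node IDs correspond to
+the Tx queue IDs. For example, the two-group, 8-queue hierarchy used by most
+cases below is::
+
+   1000000 (root, level 0)
+     900000 (level 1)
+       800000 (level 2)
+         700000 (queue group 0, level 3) -> leaf nodes 0-3 (queues 0-3)
+         600000 (queue group 1, level 3) -> leaf nodes 4-7 (queues 4-7)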
+
+Prerequisites
+=============
+
+Topology
+--------
+one port from ICE_100G-E810C_QSFP(NIC-1), two ports from ICE_25G-E810_XXV_SFP(NIC-2);
+
+one 100G cable, one 10G cable;
+
+The connection is as below table::
+
+    +---------------------------------+
+    |  DUT           |  IXIA          |
+    +=================================+
+    |               100G              |
+    | NIC-1,Port-1  ---  IXIA, Port-1 |
+    |               10G               |
+    | NIC-2,Port-1  ---  NIC-2,Port-2 |
+    +---------------------------------+
+
+Hardware
+--------
+1. One NIC ICE_100G-E810C_QSFP (NIC-1) and one NIC ICE_25G-E810_XXV_SFP (NIC-2);
+   one 100G cable and one 10G cable.
+   Assume that the interface name and PCI address of NIC-1, Port-1 are ens785f0 and 18:00.0,
+   and those of NIC-2, Port-1 are ens802f0 and 86:00.0.
+2. One 100Gbps traffic generator; assume it is an IXIA port for the case design.
+
+Software
+--------
+dpdk: http://dpdk.org/git/dpdk
+
+runtime command: https://doc.dpdk.org/guides/testpmd_app_ug/testpmd_funcs.html
+
+General Set Up
+--------------
+1. Copy specific ice package to /lib/firmware/updates/intel/ice/ddp/ice.pkg,
+   then load driver::
+
+      # cp <ice package> /lib/firmware/updates/intel/ice/ddp/ice.pkg
+      # rmmod ice
+      # insmod <ice build dir>/ice.ko
+
+2. Compile DPDK::
+
+      # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib --default-library=static <dpdk build dir>
+      # ninja -C <dpdk build dir> -j 110
+
+3. Get the PCI addresses and interfaces of the DUT and tester ports.
+   For example, the interface name and PCI address of NIC-1, Port-1 are ens785f0 and 18:00.0,
+   and those of NIC-2, Port-1 are ens802f0 and 86:00.0.
+   Bind the DUT ports to dpdk::
+
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 18:00.0 86:00.0
+
+4. Launch testpmd with 8 or 16 queues according to the case design::
+
+      <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -c 0x3fffe -n 4 -- -i --rxq=16 --txq=16
+
+Test Case
+=========
+Common Steps
+------------
+IXIA sends 8 streams (1-8) with different packet sizes (64/128/256/512/1024/1518/512/1024).
+Stream 2 has UP1, stream 3 has UP2, and the other streams have UP0.
+The line rate is 100Gbps and each stream occupies 12.5% of the max line rate.
+The 10Gbps cable limits the TX rate of NIC-2, Port-1 to 10Gbps.
+Check that the actual TX throughput of ens802f0 (86:00.0) is about 8.25Gbps.
+To check the throughput ratio of each queue group and queue,
+stop forwarding and check the TX-packets ratio of the queues;
+the TX-packets ratio of the queues is the same as their TX throughput ratio.
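+
+The streams are generated by IXIA, but as a rough sketch (this assumes Scapy is
+used instead of IXIA and is not part of the plan), the 8 streams with their
+VLAN user priority (UP, the 802.1Q PCP field) could be built as follows::
+
+   from scapy.all import IP, UDP, Dot1Q, Ether, Raw
+
+   PKT_LEN = [64, 128, 256, 512, 1024, 1518, 512, 1024]
+   UP = [0, 1, 2, 0, 0, 0, 0, 0]
+   DST_MAC = "00:11:22:33:44:55"   # placeholder for the MAC of NIC-1, Port-1
+
+   streams = []
+   for size, prio in zip(PKT_LEN, UP):
+       base = Ether(dst=DST_MAC) / Dot1Q(prio=prio, vlan=prio) / IP() / UDP()
+       pad = max(0, size - len(base))        # pad up to the target frame size
+       streams.append(base / Raw(b"\x58" * pad))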
+
+Test Case 1: queuegroup_RR_queue_WFQ_RR_nolimit
+-----------------------------------------------
+RR Scheduling on queue groups.
+WFQ Scheduling on queue group 0, RR Scheduling on queue group 1.
+No bandwidth limit.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with 8 queues::
+
+      <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -c 0x3fffe -n 4 -- -i --rxq=8 --txq=8
+
+2. Configure 2 groups: group 0 (queue 0 to queue 3) and group 1 (queue 4 to queue 7).
+   Use the RR scheduler algo between group 0 and group 1,
+   WFQ within group 0 (1:2:3:4) and RR within group 1::
+
+      testpmd> add port tm node shaper profile 1 1 100000000 0 100000000 0 0 0
+      testpmd> add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 600000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 4 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 5 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 6 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 7 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> port tm hierarchy commit 1 no
+      testpmd> start
+
+3. Send streams from IXIA
+
+4. Check the TX throughput of port 1::
+
+      testpmd> show port stats 1
+
+   Check the throughput ratio of each queue group and queue::
+
+      testpmd> stop
+
+   The TX throughput of queue group 0 and group 1 is the same.
+   The TX throughput ratio of queues 0-3 is 1:2:3:4.
+   The TX throughput ratio of queues 4-7 is 1:1:1:1.
+
+Test Case 2: queuegroup_SP_queue_WFQ_RR_nolimit
+-----------------------------------------------
+SP Scheduling on queue groups.
+WFQ Scheduling on queue group 0, RR Scheduling on queue group 1.
+No bandwidth limit.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with 8 queues::
+
+      <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -c 0x3fffe -n 4 -- -i --rxq=8 --txq=8
+
+2. Configure 2 groups: group 0 (queue 0 to queue 3) and group 1 (queue 4 to queue 7).
+   Use the SP scheduler algo between group 0 and group 1 (priorities 0/1),
+   WFQ within group 0 (1:2:3:4) and RR within group 1::
+
+      testpmd> add port tm node shaper profile 1 1 100000000 0 100000000 0 0 0
+      testpmd> add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 600000 800000 1 1 3 -1 1 0 0
+      testpmd> add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 4 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 5 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 6 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 7 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> port tm hierarchy commit 1 no
+      testpmd> start
+
+3. Send streams from IXIA
+
+4. Check the TX throughput of port 1::
+
+      testpmd> show port stats 1
+
+   Check the throughput ratio of each queue group and queue::
+
+      testpmd> stop
+
+   Queue group 1 has no TX throughput.
+   The TX throughput ratio of queues 0-3 is 1:2:3:4.
+
+Test Case 3: queuegroup_RR_queue_WFQ_RR
+---------------------------------------
+RR Scheduling on queue groups.
+WFQ Scheduling on queue group 0, RR Scheduling on queue group 1.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with 8 queues::
+
+      <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -c 0x3fffe -n 4 -- -i --rxq=8 --txq=8
+
+2. Configure 2 groups: group 0 (queue 0 to queue 3) and group 1 (queue 4 to queue 7).
+   Use the RR scheduler algo between group 0 and group 1,
+   WFQ within group 0 (1:2:3:4) and RR within group 1.
+   Set the rate limit on group 1 to 300MBps::
+
+      testpmd> add port tm node shaper profile 1 1 300000000 0 300000000 0 0 0
+      testpmd> add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 600000 800000 0 1 3 1 1 0 0
+      testpmd> add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 4 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 5 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 6 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 7 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> port tm hierarchy commit 1 no
+      testpmd> start
+
+3. Send streams from IXIA
+
+4. Check the TX throughput of port 1::
+
+      testpmd> show port stats 1
+
+   Check the throughput ratio of each queue group and queue::
+
+      testpmd> stop
+
+   Check that the TX throughput of queue group 1 is limited to 2.4Gbps.
+   The TX throughput ratio of queues 0-3 is 1:2:3:4.
+   The TX throughput ratio of queues 4-7 is 1:1:1:1.
+
+Test Case 4: queuegroup_SP_queue_WFQ_SP
+---------------------------------------
+SP Scheduling on queue groups.
+WFQ Scheduling on queue group 0, SP Scheduling on queue group 1.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with 12 queues::
+
+      <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -c 0x3fffe -n 4 -- -i --rxq=12 --txq=12
+
+2. Configure 2 groups: group 0 (queue 0 to queue 3) and group 1 (queue 4 to queue 11).
+   Use the SP scheduler algo between group 0 and group 1 (0/1),
+   WFQ within group 0 (1:2:3:4) and SP within group 1 (priorities 0/2/1/2/3/3/5/7).
+   Set the rate limit on group 0 to 300MBps and
+   set the rate limits on the queues of group 1 to (10/10/100/20/300/400/no/10) MBps::
+
+      testpmd> add port tm node shaper profile 1 1 300 0 300000000 0 0 0
+      testpmd> add port tm node shaper profile 1 2 300 0 100000000 0 0 0
+      testpmd> add port tm node shaper profile 1 3 300 0 10000000 0 0 0
+      testpmd> add port tm node shaper profile 1 4 300 0 20000000 0 0 0
+      testpmd> add port tm node shaper profile 1 5 200 0 400000000 0 0 0
+      testpmd> add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 700000 800000 0 1 3 1 1 0 0
+      testpmd> add port tm nonleaf node 1 600000 800000 7 1 3 -1 1 0 0
+      testpmd> add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 4 600000 0 1 4 3 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 5 600000 2 1 4 3 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 6 600000 1 1 4 2 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 7 600000 2 1 4 4 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 8 600000 3 1 4 1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 9 600000 3 1 4 5 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 10 600000 5 1 4 3 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 11 600000 7 1 4 3 0 0xffffffff 0 0
+      testpmd> port tm hierarchy commit 1 no
+      testpmd> start
+
+3. Send streams from IXIA
+
+4. Check the TX throughput of port 1::
+
+      testpmd> show port stats 1
+
+   Check the throughput ratio of each queue group and queue::
+
+      testpmd> stop
+
+   Check that the TX throughput of queue group 0 is limited to 2.4Gbps.
+   The TX throughput ratio of queues 0-3 is 1:2:3:4.
+   The throughput of queue 4 is limited to 80Mbps,
+   queue 5 is limited to 80Mbps,
+   queue 6 is limited to 800Mbps,
+   queue 7 is limited to 160Mbps,
+   queue 8 and queue 9 share the rest of the throughput of queue group 1
+   and have the same throughput,
+   and queue 10 and queue 11 have little throughput.
+
+Test Case 5: queuegroup_RR_queue_RR_SP_WFQ
+------------------------------------------
+RR Scheduling on queue groups.
+RR Scheduling on queue group 0, SP Scheduling on queue group 1,
+WFQ Scheduling on queue group 2.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with 16 queues::
+
+      <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -c 0x3fffe -n 4 -- -i --rxq=16 --txq=16
+
+2. Configure 3 groups: group 0 (queue 0 to queue 3), group 1 (queue 4 to queue 7) and group 2 (queue 8 to queue 15).
+   Use the RR scheduler algo between group 0, group 1 and group 2,
+   RR within group 0 (1:1:1:1), SP within group 1 (priorities 0/4/1/7) and WFQ within group 2 (4:2:2:100:3:1:5:7).
+   Set the rate limits on queues 4-7 to (100/no/400/100) MBps::
+
+      testpmd> add port tm node shaper profile 1 1 300 0 300000000 0 0 0
+      testpmd> add port tm node shaper profile 1 2 100 0 100000000 0 0 0
+      testpmd> add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 600000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 500000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 1 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 2 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 3 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 4 600000 0 1 4 2 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 5 600000 4 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 6 600000 1 1 4 1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 7 600000 7 1 4 2 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 8 500000 0 4 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 9 500000 0 2 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 10 500000 0 2 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 11 500000 0 100 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 12 500000 0 3 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 13 500000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 14 500000 0 5 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 15 500000 0 7 4 -1 0 0xffffffff 0 0
+      testpmd> port tm hierarchy commit 1 no
+      testpmd> start
+
+3. Send streams from IXIA
+
+4. Check the TX throughput of port 1::
+
+      testpmd> show port stats 1
+
+   Check the throughput ratio of each queue group and queue::
+
+      testpmd> stop
+
+   Check that the TX throughput ratio of queue groups 0/1/2 is 1:1:1.
+   The TX throughput ratio of queues 0-3 is 1:1:1:1.
+   The throughput of queue 4 is limited to 800Mbps,
+   queue 5 has little throughput,
+   queue 6 has the rest of the throughput of queue group 1,
+   and queue 7 has little throughput.
+   The throughput ratio of queues 8-15 aligns with (4:2:2:100:3:1:5:7).
+
+Test Case 6: queuegroup_SP_queue_RR_SP_WFQ
+------------------------------------------
+SP Scheduling on queue groups.
+RR Scheduling on queue group 0, SP Scheduling on queue group 1,
+WFQ Scheduling on queue group 2.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with 16 queues::
+
+      <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -c 0x3fffe -n 4 -- -i --rxq=16 --txq=16
+
+2. Configure 3 groups: group 0 (queue 0 to queue 3), group 1 (queue 4 to queue 7) and group 2 (queue 8 to queue 15).
+   Use the SP scheduler algo between group 0, group 1 and group 2 (priorities 0/1/2),
+   RR within group 0 (1:1:1:1), SP within group 1 (priorities 0/4/1/7) and WFQ within group 2 (4:2:2:100:3:1:5:7).
+   Set the rate limit on group 0 to 100MBps, on group 1 to 100MBps
+   and on group 2 to 300MBps.
+   Set the rate limit on queue 0, queue 1 and queue 4 to 300MBps,
+   with no rate limit on the other queues::
+
+      testpmd> add port tm node shaper profile 1 1 300 0 300000000 0 0 0
+      testpmd> add port tm node shaper profile 1 2 100 0 100000000 0 0 0
+      testpmd> add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 700000 800000 0 1 3 2 1 0 0
+      testpmd> add port tm nonleaf node 1 600000 800000 1 1 3 2 1 0 0
+      testpmd> add port tm nonleaf node 1 500000 800000 2 1 3 1 1 0 0
+      testpmd> add port tm leaf node 1 0 700000 0 1 4 1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 1 700000 0 1 4 1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 2 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 3 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 4 600000 0 1 4 1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 5 600000 4 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 6 600000 1 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 7 600000 7 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 8 500000 0 4 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 9 500000 0 2 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 10 500000 0 2 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 11 500000 0 100 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 12 500000 0 3 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 13 500000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 14 500000 0 5 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 15 500000 0 7 4 -1 0 0xffffffff 0 0
+      testpmd> port tm hierarchy commit 1 no
+      testpmd> start
+
+3. Send streams from IXIA
+
+4. Check the TX throughput of port 1::
+
+      testpmd> show port stats 1
+
+   Check the throughput ratio of each queue group and queue::
+
+      testpmd> stop
+
+   Check that the TX throughput ratio of queue groups 0/1/2 is 1:1:3
+   and that the sum of the TX throughput is 4Gbps.
+   The TX throughput ratio of queues 0-3 is 1:1:1:1.
+   The throughput of queue 4 is limited to 800Mbps,
+   queues 5-7 have little throughput,
+   and the throughput ratio of queues 8-15 aligns with (4:2:2:100:3:1:5:7).
+
+Test Case 7: queuegroup_RR_queue_WFQ_WFQ
+----------------------------------------
+RR Scheduling on queue groups.
+WFQ Scheduling on queue group 0, WFQ Scheduling on queue group 1.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with 8 queues::
+
+      <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -c 0x3fffe -n 4 -- -i --rxq=8 --txq=8
+
+2. Configure 2 groups: group 0 (queue 0 to queue 3) and group 1 (queue 4 to queue 7).
+   Use the RR scheduler algo between group 0 and group 1,
+   WFQ within group 0 (1:2:3:4) and WFQ within group 1 (1:2:3:4).
+   Set the bandwidth limits on the queues of group 0 to (10/10/40/no) MBps
+   and on the queues of group 1 to (40/30/no/no) MBps::
+
+      testpmd> add port tm node shaper profile 1 1 10000000 0 10000000 0 0 0
+      testpmd> add port tm node shaper profile 1 2 20000000 0 20000000 0 0 0
+      testpmd> add port tm node shaper profile 1 3 30000000 0 30000000 0 0 0
+      testpmd> add port tm node shaper profile 1 4 40000000 0 40000000 0 0 0
+      testpmd> add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 600000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm leaf node 1 0 700000 0 1 4 1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 1 700000 0 2 4 1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 2 700000 0 3 4 4 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 4 600000 0 1 4 4 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 5 600000 0 2 4 3 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 6 600000 0 3 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 7 600000 0 4 4 -1 0 0xffffffff 0 0
+      testpmd> port tm hierarchy commit 1 no
+      testpmd> start
+
+3. Send streams from IXIA
+
+4. Check the TX throughput of port 1::
+
+      testpmd> show port stats 1
+
+   Check the throughput ratio of each queue group and queue::
+
+      testpmd> stop
+
+   Check that the TX throughput of queue group 0 and group 1 is the same.
+   Check that the TX throughput of queue 0 is limited to 10MBps,
+   queue 1 is limited to 10MBps, queue 2 is limited to 40MBps,
+   and queue 3 has the rest of the throughput of queue group 0.
+   Queue 4 is limited to 40MBps, queue 5 is limited to 30MBps,
+   queue 6 and queue 7 have the rest of the throughput of queue group 1,
+   and the throughput ratio of queue 6 and queue 7 is 3:4.
+
+Test Case 8: negative case
+--------------------------
+Configure invalid parameters and check that the expected errors are reported.
+
+Test Steps
+~~~~~~~~~~
+1. Launch testpmd with 16 queues::
+
+      <dpdk dir># ./<dpdk build dir>/app/dpdk-testpmd -c 0x3fffe -n 4 -- -i --rxq=16 --txq=16
+
+2. Configure 2 groups, with the WFQ scheduler algo between group 0 and group 1 (weights 1:2)::
+
+      testpmd> add port tm node shaper profile 1 1 100000000 0 100000000 0 0 0
+      testpmd> add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0
+      testpmd> add port tm nonleaf node 1 600000 800000 0 2 3 -1 1 0 0
+      ice_tm_node_add(): weight != 1 not supported in level 3
+
+3. Configure RR scheduler algo on groups, and set queue 3 weight to 201::
+
+      testpmd> port stop 1
+      testpmd> del port tm node 1 600000
+      testpmd> add port tm nonleaf node 1 600000 800000 0 1 3 -1 1 0 0
+      testpmd> port start 1
+      testpmd> add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 3 700000 0 201 4 -1 0 0xffffffff 0 0
+      node weight: weight must be between 1 and 200 (error 21)
+
+4. Reset the queue 3 weight to 200 and set the queue 11 node priority to 8::
+
+      testpmd> add port tm leaf node 1 3 700000 0 200 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 4 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 5 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 6 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 7 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 8 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 9 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 10 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 11 600000 8 1 4 -1 0 0xffffffff 0 0
+      node priority: priority should be less than 8 (error 20)
+
+5. Reset the queue 11 node priority to 7,
+   assign queues 4-15 (more than 8 queues) to queue group 1 and commit::
+
+      testpmd> add port tm leaf node 1 11 600000 7 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 12 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 13 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 14 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> add port tm leaf node 1 15 600000 0 1 4 -1 0 0xffffffff 0 0
+      testpmd> port tm hierarchy commit 1 no
+      ice_move_recfg_lan_txq(): move lan queue 12 failed
+      ice_hierarchy_commit(): move queue 12 failed
+      cause unspecified: (no stated reason) (error 1)
+
+6. Check that all the reported errors are as expected.
-- 
2.17.1



* [dts][PATCH V2 3/3] tests/ice_enable_basic_hqos_on_pf:add new test suite
  2022-09-01 17:12 [dts][PATCH V2 1/3] test_plans/index:add new testplan index Zhimin Huang
  2022-09-01 17:12 ` [dts][PATCH V2 2/3] test_plans/ice_enable_basic_hqos_on_pf:add new testplan Zhimin Huang
@ 2022-09-01 17:12 ` Zhimin Huang
  1 sibling, 0 replies; 3+ messages in thread
From: Zhimin Huang @ 2022-09-01 17:12 UTC (permalink / raw)
  To: dts; +Cc: Zhimin Huang

add 22.07 new feature test suite

Signed-off-by: Zhimin Huang <zhiminx.huang@intel.com>
---
 .../TestSuite_ice_enable_basic_hqos_on_pf.py  | 652 ++++++++++++++++++
 1 file changed, 652 insertions(+)
 create mode 100644 tests/TestSuite_ice_enable_basic_hqos_on_pf.py

diff --git a/tests/TestSuite_ice_enable_basic_hqos_on_pf.py b/tests/TestSuite_ice_enable_basic_hqos_on_pf.py
new file mode 100644
index 00000000..a35f06fb
--- /dev/null
+++ b/tests/TestSuite_ice_enable_basic_hqos_on_pf.py
@@ -0,0 +1,652 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2022 Intel Corporation
+#
+"""
+DPDK Test suite.
+ICE Enable basic HQoS on PF driver.
+"""
+
+import re
+from copy import deepcopy
+from pprint import pformat
+
+from framework.packet import Packet
+from framework.pktgen import TRANSMIT_CONT
+from framework.pmd_output import PmdOutput
+from framework.settings import HEADER_SIZE, get_nic_name
+from framework.test_case import TestCase
+
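+# Stream settings from the test plan's Common Steps: 8 streams with the frame
+# sizes below; stream 2 carries UP1, stream 3 carries UP2 and the rest UP0.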
+PKT_LEN = [64, 128, 256, 512, 1024, 1518, 512, 1024]
+STREAM_UP_CONFIG = [0, 1, 2, 0, 0, 0, 0, 0]
+LINERATE = 100
+
+
+class TestIceEnableBasicHqosOnPF(TestCase):
+    def set_up_all(self):
+        self.dut_ports = self.dut.get_ports()
+        self.verify(len(self.dut_ports) >= 2, "Insufficient ports")
+        "test topo is port 0 is 100G-E810C and port 1 is 25G-E810"
+        self.skip_case(
+            self.check_require_nic_for_test(),
+            "Topology is ICE_100G-E810C_QSFP and ICE_25G-E810_XXV_SFP",
+        )
+        self.cores = "1S/9C/1T"
+        # check core num
+        core_list = self.dut.get_core_list(self.cores)
+        self.verify(len(core_list) >= 9, "Insufficient cores for testing")
+        self.tester_port0 = self.tester.get_local_port(self.dut_ports[0])
+        self.tester_port1 = self.tester.get_local_port(self.dut_ports[1])
+        self.dut_port0_mac = self.dut.get_mac_address(self.dut_ports[0])
+
+        self.pmd_output = PmdOutput(self.dut)
+
+    def get_nic_info_from_ports_cfg(self):
+        """
+        get port.cfg nic type/intf/pci list
+        :return: port config nic list
+        """
+        nic_list = []
+        for id in self.dut.get_ports():
+            nic_dict = {}
+            for info in ["type", "intf", "pci"]:
+                nic_dict[info] = self.dut.ports_info[id][info]
+                if info == "type":
+                    nic_dict["name"] = get_nic_name(nic_dict[info])
+            nic_list.append(nic_dict)
+        return nic_list
+
+    def check_require_nic_for_test(self):
+        """
+        check port 0 is E810_100G and port 1 is E810_25G
+        :return: check status, True or False
+        """
+        for id, _nic in enumerate(self.get_nic_info_from_ports_cfg()):
+            if _nic["name"] not in ["ICE_100G-E810C_QSFP", "ICE_25G-E810_XXV_SFP"]:
+                return False
+            if id == 0 and _nic["name"] != "ICE_100G-E810C_QSFP":
+                return False
+        return True
+
+    def launch_testpmd(self, param=""):
+        """
+        start testpmd and check testpmd link status
+        :param param: rxq/txq
+        """
+        self.pmd_output.start_testpmd(self.cores, param=param)
+        res = self.pmd_output.wait_link_status_up("all", timeout=15)
+        self.verify(res is True, "some port link is down")
+        self.testpmd_flag = True
+
+    def close_testpmd(self):
+        """
+        close testpmd
+        """
+        if not self.testpmd_flag:
+            return
+        try:
+            self.pmd_output.quit()
+        except Exception as e:
+            self.logger.error("failed to quit testpmd: {}".format(e))
+        self.testpmd_flag = False
+
+    def get_queue_packets_stats(self, port=1):
+        """
+        get testpmd tx pkts stats
+        :param port: tx port
+        :return: pkts list
+        """
+        output = self.pmd_output.execute_cmd("stop")
+        self.pmd_output.execute_cmd("start")
+        p = re.compile("TX Port= %d/Queue=.*\n.*TX-packets: ([0-9]+)\s" % port)
+        tx_pkts = list(map(int, p.findall(output)))
+        return tx_pkts
+
+    def add_stream_to_pktgen(self, txport, rxport, send_pkts, option):
+        """
+        add streams to pktgen and return streams id
+        """
+        stream_ids = []
+        for pkt in send_pkts:
+            _option = deepcopy(option)
+            _option["pcap"] = pkt
+            stream_id = self.tester.pktgen.add_stream(txport, rxport, send_pkts[0])
+            self.tester.pktgen.config_stream(stream_id, _option)
+            stream_ids.append(stream_id)
+        return stream_ids
+
+    def config_stream(self, fields, frame_size=64):
+        """
+        config stream and return pkt
+        """
+        pri = fields
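+        # per the test plan, the VLAN id and VLAN priority (UP) of each
+        # stream are both set to the stream's UP value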
+        pkt_config = {
+            "type": "VLAN_UDP",
+            "pkt_layers": {
+                "ether": {"dst": self.dut_port0_mac},
+                "vlan": {"vlan": pri, "prio": pri},
+                "raw": {"payload": ["58"] * self.get_pkt_len(frame_size)},
+            },
+        }
+        pkt_type = pkt_config.get("type")
+        pkt_layers = pkt_config.get("pkt_layers")
+        pkt = Packet(pkt_type=pkt_type)
+        for layer in list(pkt_layers.keys()):
+            pkt.config_layer(layer, pkt_layers[layer])
+        return pkt.pktgen.pkt
+
+    def testpmd_query_stats(self):
+        """
+        traffic callback function, return port 1 stats
+        """
+        output = self.pmd_output.execute_cmd("show port stats 1")
+        if not output:
+            return
+        port_pat = ".*NIC statistics for (port \d+) .*"
+        rx_pat = ".*Rx-pps:\s+(\d+)\s+Rx-bps:\s+(\d+).*"
+        tx_pat = ".*Tx-pps:\s+(\d+)\s+Tx-bps:\s+(\d+).*"
+        port = re.findall(port_pat, output, re.M)
+        rx = re.findall(rx_pat, output, re.M)
+        tx = re.findall(tx_pat, output, re.M)
+        if not port or not rx or not tx:
+            return
+        stat = {}
+        for port_id, (rx_pps, rx_bps), (tx_pps, tx_bps) in zip(port, rx, tx):
+            stat[port_id] = {
+                "rx_pps": float(rx_pps),
+                "rx_bps": float(rx_bps),
+                "tx_pps": float(tx_pps),
+                "tx_bps": float(tx_bps),
+            }
+        self.pmd_stat = stat
+
+    def get_pkt_len(self, frame_size):
+        HEADER_SIZE["vlan"] = 4
+        headers_size = sum([HEADER_SIZE[x] for x in ["eth", "ip", "vlan", "udp"]])
+        pktlen = frame_size - headers_size
+        return pktlen
+
+    def start_traffic(self, send_pkts):
+        """
+        send stream and return results
+        """
+        self.tester.pktgen.clear_streams()
+        duration = 20
+        s_option = {
+            "stream_config": {
+                "txmode": {},
+                "transmit_mode": TRANSMIT_CONT,
+                "rate": LINERATE,
+            },
+            "fields_config": {
+                "ip": {
+                    "src": {
+                        "start": "198.18.0.0",
+                        "end": "198.18.0.255",
+                        "step": 1,
+                        "action": "random",
+                    },
+                },
+            },
+        }
+        stream_ids = self.add_stream_to_pktgen(
+            self.tester_port0, self.tester_port0, send_pkts, s_option
+        )
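+        # while pktgen measures throughput, the callback queries testpmd port
+        # stats and stores them in self.pmd_stat for the throughput checks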
+        traffic_opt = {
+            "method": "throughput",
+            "duration": duration,
+            "interval": duration - 5,
+            "callback": self.testpmd_query_stats,
+        }
+        result = self.tester.pktgen.measure(stream_ids, traffic_opt)
+        return result
+
+    def get_traffic_results(self):
+        """
+        get traffic results, append results, port stats, queue stats
+        """
+        stream = []
+        results = []
+        for id in range(len(STREAM_UP_CONFIG)):
+            pkt = self.config_stream(STREAM_UP_CONFIG[id], frame_size=PKT_LEN[id])
+            stream.append(pkt)
+        result = self.start_traffic(stream)
+        queue_stats = self.get_queue_packets_stats()
+        results.append([result, self.pmd_stat, queue_stats])
+        return results
+
+    def check_traffic_throughput(self, expect_results, rel_results):
+        """
+        compare traffic throughput with expect results
+        """
+        status = False
+        for traffic_task, _result in zip(expect_results, rel_results):
+            _expected, unit, port = traffic_task
+            result, pmd_stat, _ = _result
+            real_stat = pmd_stat.get(f"port {port}", {})
+            real_bps = real_stat.get("tx_bps")
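+            # unit semantics of the checks below: "MBps" means the measured
+            # rate may not exceed the expected MB/s value by more than `bias`
+            # percent, "-MBps" means it must stay below the expected value,
+            # and "Gbps"/"rGbps" means it must be within `bias` percent of
+            # the expected Gb/s value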
+            bias = 10
+            if unit == "MBps":
+                status = ((real_bps / 8 / 1e6 - _expected) * 100 / _expected) < bias
+            elif unit == "-MBps":
+                status = real_bps / 8 / 1e6 < _expected
+            elif unit in ["Gbps", "rGbps"]:
+                status = abs(((real_bps / 1e9 - _expected) * 100 / _expected)) < bias
+            msg = (
+                f"{pformat(traffic_task)}"
+                " not get expected throughput value, real is: "
+                f"{pformat(pmd_stat)}"
+            )
+            self.verify(status, msg)
+
+    def check_queue_pkts_ratio(self, expected, results):
+        """
+        check queue ratio
+        """
+        for result in results:
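+            # queue-to-group layout from the test plan: queues 0-3 are group
+            # 0; with two expected groups the rest form group 1, with three
+            # expected groups queues 4-7 form group 1 and 8+ form group 2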
+            queue_group0 = result[-1][:4]
+            queue_group1 = result[-1][4:]
+            queue_group2 = []
+            if len(expected) == 3:
+                queue_group1 = result[-1][4:8]
+                queue_group2 = result[-1][8:]
+            queue_stats = [queue_group0, queue_group1, queue_group2]
+            for id, ex in enumerate(expected):
+                total_pkts = sum(queue_stats[id])
+                total_ratio = sum(ex)
+                if not ex:
+                    self.verify(not total_pkts, "queue group 1 should have no TX throughput")
+                    return
+                ratio = []
+                for idx, queue_stat in enumerate(queue_stats[id]):
+                    percentage = queue_stat / total_pkts * 100
+                    ratio.append(percentage)
+                bias = 10
+                for idx, percentage in enumerate(ex):
+                    percentage = percentage / total_ratio * 100
+                    _bias = abs(ratio[idx] - percentage) / percentage * 100
+                    self.logger.info(
+                        "ratio and percentage:{}".format((ratio[idx], percentage))
+                    )
+                    if _bias < bias:
+                        continue
+                    else:
+                        msg = "can not get expected queue ratio"
+                        self.verify(False, msg)
+
+    def check_queue_group_throughput(self, expected, results):
+        """
+        check queue group ratio
+        """
+        for result in results:
+            queue_group0 = result[-1][:4]
+            queue_group1 = result[-1][4:]
+            queue_group2 = []
+            if len(expected) == 3:
+                queue_group1 = result[-1][4:8]
+                queue_group2 = result[-1][8:]
+            queue_stats = [queue_group0, queue_group1, queue_group2]
+            group_totals = [sum(group) for group in queue_stats]
+            total_pkts = sum(group_totals)
+            total_ratio = sum(expected)
+            ratio = []
+            for idx, group_total in enumerate(group_totals):
+                percentage = group_total / total_pkts * 100
+                ratio.append(percentage)
+            bias = 10
+            for idx, percentage in enumerate(expected):
+                percentage = percentage / total_ratio * 100
+                _bias = abs(ratio[idx] - percentage) / percentage * 100
+                self.logger.info(
+                    "ratio and percentage:{}".format((ratio[idx], percentage))
+                )
+                if _bias < bias:
+                    continue
+                else:
+                    msg = "can not get expected queue ratio"
+                    self.verify(False, msg)
+
+    def test_perf_queuegroup_RR_queue_WFQ_RR_nolimit(self):
+
+        self.launch_testpmd(param="--rxq=8 --txq=8")
+        cmds = [
+            "add port tm node shaper profile 1 1 100000000 0 100000000 0 0 0",
+            "add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0",
+            "add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0",
+            "add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0",
+            "add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0",
+            "add port tm nonleaf node 1 600000 800000 0 1 3 -1 1 0 0",
+            "add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 4 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 5 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 6 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 7 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "port tm hierarchy commit 1 no",
+            "start",
+        ]
+        for cmd in cmds:
+            self.pmd_output.execute_cmd(cmd)
+
+        traffic_tasks = [
+            [8.25, "Gbps", 1],
+        ]
+        results = self.get_traffic_results()
+        self.check_traffic_throughput(traffic_tasks, results)
+        expected = [[1, 2, 3, 4], [1, 1, 1, 1]]
+        self.check_queue_pkts_ratio(expected, results)
+        expected = [1, 1]
+        self.check_queue_group_throughput(expected, results)
+
+    def test_perf_queuegroup_SP_queue_WFQ_RR_nolimit(self):
+
+        self.launch_testpmd(param="--rxq=8 --txq=8")
+        cmds = [
+            "add port tm node shaper profile 1 1 100000000 0 100000000 0 0 0",
+            "add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0",
+            "add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0",
+            "add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0",
+            "add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0",
+            "add port tm nonleaf node 1 600000 800000 1 1 3 -1 1 0 0",
+            "add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 4 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 5 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 6 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 7 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "port tm hierarchy commit 1 no",
+            "start",
+        ]
+        for cmd in cmds:
+            self.pmd_output.execute_cmd(cmd)
+
+        traffic_tasks = [
+            [8.25, "Gbps", 1],
+        ]
+        results = self.get_traffic_results()
+        self.check_traffic_throughput(traffic_tasks, results)
+        expected = [[1, 2, 3, 4], []]
+        self.check_queue_pkts_ratio(expected, results)
+
+    def test_perf_queuegroup_RR_queue_WFQ_RR(self):
+
+        self.launch_testpmd(param="--rxq=8 --txq=8")
+        cmds = [
+            "add port tm node shaper profile 1 1 300000000 0 300000000 0 0 0",
+            "add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0",
+            "add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0",
+            "add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0",
+            "add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0",
+            "add port tm nonleaf node 1 600000 800000 0 1 3 1 1 0 0",
+            "add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 4 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 5 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 6 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 7 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "port tm hierarchy commit 1 no",
+            "start",
+        ]
+        for cmd in cmds:
+            self.pmd_output.execute_cmd(cmd)
+
+        traffic_tasks = [
+            [8.25, "Gbps", 1],
+        ]
+        results = self.get_traffic_results()
+        self.check_traffic_throughput(traffic_tasks, results)
+        expected = [[1, 2, 3, 4], [8, 8, 80, 16, 300, 400, 1, 10]]
+        self.check_queue_pkts_ratio(expected, results)
+
+    def test_perf_queuegroup_SP_queue_WFQ_SP(self):
+
+        self.launch_testpmd(param="--rxq=12 --txq=12")
+        cmds = [
+            "add port tm node shaper profile 1 1 300 0 300000000 0 0 0",
+            "add port tm node shaper profile 1 2 300 0 100000000 0 0 0",
+            "add port tm node shaper profile 1 3 300 0 10000000 0 0 0",
+            "add port tm node shaper profile 1 4 300 0 20000000 0 0 0",
+            "add port tm node shaper profile 1 5 200 0 400000000 0 0 0",
+            "add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0",
+            "add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0",
+            "add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0",
+            "add port tm nonleaf node 1 700000 800000 0 1 3 1 1 0 0",
+            "add port tm nonleaf node 1 600000 800000 7 1 3 -1 1 0 0",
+            "add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 4 600000 0 1 4 3 0 0xffffffff 0 0",
+            "add port tm leaf node 1 5 600000 2 1 4 3 0 0xffffffff 0 0",
+            "add port tm leaf node 1 6 600000 1 1 4 2 0 0xffffffff 0 0",
+            "add port tm leaf node 1 7 600000 2 1 4 4 0 0xffffffff 0 0",
+            "add port tm leaf node 1 8 600000 3 1 4 1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 9 600000 3 1 4 5 0 0xffffffff 0 0",
+            "add port tm leaf node 1 10 600000 5 1 4 3 0 0xffffffff 0 0",
+            "add port tm leaf node 1 11 600000 7 1 4 3 0 0xffffffff 0 0",
+            "port tm hierarchy commit 1 no",
+            "start",
+        ]
+        for cmd in cmds:
+            self.pmd_output.execute_cmd(cmd)
+
+        traffic_tasks = [
+            [8.25, "Gbps", 1],
+        ]
+        results = self.get_traffic_results()
+        self.check_traffic_throughput(traffic_tasks, results)
+        expected = [[1, 2, 3, 4], [8, 8, 80, 16, 240, 240, 1, 1]]
+        self.check_queue_pkts_ratio(expected, results)
+        expected = [2, 1]
+        self.check_queue_group_throughput(expected, results)
+
+    def test_perf_queuegroup_RR_queue_RR_SP_WFQ(self):
+
+        self.launch_testpmd(param="--rxq=16 --txq=16")
+        cmds = [
+            "add port tm node shaper profile 1 1 300 0 300000000 0 0 0",
+            "add port tm node shaper profile 1 2 100 0 100000000 0 0 0",
+            "add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0",
+            "add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0",
+            "add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0",
+            "add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0",
+            "add port tm nonleaf node 1 600000 800000 0 1 3 -1 1 0 0",
+            "add port tm nonleaf node 1 500000 800000 0 1 3 -1 1 0 0",
+            "add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 1 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 2 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 3 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 4 600000 0 1 4 2 0 0xffffffff 0 0",
+            "add port tm leaf node 1 5 600000 4 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 6 600000 1 1 4 1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 7 600000 7 1 4 2 0 0xffffffff 0 0",
+            "add port tm leaf node 1 8 500000 0 4 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 9 500000 0 2 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 10 500000 0 2 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 11 500000 0 100 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 12 500000 0 3 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 13 500000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 14 500000 0 5 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 15 500000 0 7 4 -1 0 0xffffffff 0 0",
+            "port tm hierarchy commit 1 no",
+            "start",
+        ]
+        for cmd in cmds:
+            self.pmd_output.execute_cmd(cmd)
+
+        traffic_tasks = [
+            [8.25, "Gbps", 1],
+        ]
+        results = self.get_traffic_results()
+        self.check_traffic_throughput(traffic_tasks, results)
+        expected = [[1, 1, 1, 1], [8, 1, 20, 1], [4, 2, 2, 100, 3, 1, 5, 7]]
+        self.check_queue_pkts_ratio(expected, results)
+        expected = [1, 1, 1]
+        self.check_queue_group_throughput(expected, results)
+
+    def test_perf_queuegroup_SP_queue_RR_SP_WFQ(self):
+
+        self.launch_testpmd(param="--rxq=16 --txq=16")
+        cmds = [
+            "add port tm node shaper profile 1 1 300 0 300000000 0 0 0",
+            "add port tm node shaper profile 1 2 100 0 100000000 0 0 0",
+            "add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0",
+            "add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0",
+            "add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0",
+            "add port tm nonleaf node 1 700000 800000 0 1 3 2 1 0 0",
+            "add port tm nonleaf node 1 600000 800000 1 1 3 2 1 0 0",
+            "add port tm nonleaf node 1 500000 800000 2 1 3 1 1 0 0",
+            "add port tm leaf node 1 0 700000 0 1 4 1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 1 700000 0 1 4 1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 2 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 3 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 4 600000 0 1 4 1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 5 600000 4 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 6 600000 1 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 7 600000 7 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 8 500000 0 4 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 9 500000 0 2 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 10 500000 0 2 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 11 500000 0 100 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 12 500000 0 3 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 13 500000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 14 500000 0 5 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 15 500000 0 7 4 -1 0 0xffffffff 0 0",
+            "port tm hierarchy commit 1 no",
+            "start",
+        ]
+        for cmd in cmds:
+            self.pmd_output.execute_cmd(cmd)
+
+        traffic_tasks = [
+            [4, "Gbps", 1],
+        ]
+        results = self.get_traffic_results()
+        self.check_traffic_throughput(traffic_tasks, results)
+        expected = [[1, 1, 1, 1], [800, 1, 1, 1], [4, 2, 2, 100, 3, 1, 5, 7]]
+        self.check_queue_pkts_ratio(expected, results)
+        expected = [1, 1, 3]
+        self.check_queue_group_throughput(expected, results)
+
+    def test_perf_queuegroup_RR_queue_WFQ_WFQ(self):
+
+        self.launch_testpmd(param="--rxq=8 --txq=8")
+        cmds = [
+            "add port tm node shaper profile 1 1 10000000 0 10000000 0 0 0",
+            "add port tm node shaper profile 1 2 20000000 0 20000000 0 0 0",
+            "add port tm node shaper profile 1 3 30000000 0 30000000 0 0 0",
+            "add port tm node shaper profile 1 4 40000000 0 40000000 0 0 0",
+            "add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0",
+            "add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0",
+            "add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0",
+            "add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0",
+            "add port tm nonleaf node 1 600000 800000 0 1 3 -1 1 0 0",
+            "add port tm leaf node 1 0 700000 0 1 4 1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 1 700000 0 2 4 1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 2 700000 0 3 4 4 0 0xffffffff 0 0",
+            "add port tm leaf node 1 3 700000 0 4 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 4 600000 0 1 4 4 0 0xffffffff 0 0",
+            "add port tm leaf node 1 5 600000 0 2 4 3 0 0xffffffff 0 0",
+            "add port tm leaf node 1 6 600000 0 3 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 7 600000 0 4 4 -1 0 0xffffffff 0 0",
+            "port tm hierarchy commit 1 no",
+            "start",
+        ]
+        for cmd in cmds:
+            self.pmd_output.execute_cmd(cmd)
+
+        traffic_tasks = [
+            [8.25, "Gbps", 1],
+        ]
+        results = self.get_traffic_results()
+        self.check_traffic_throughput(traffic_tasks, results)
+        expected = [[4, 3, 170, 220], [1, 1, 1, 300]]
+        self.check_queue_pkts_ratio(expected, results)
+        expected = [1, 1]
+        self.check_queue_group_throughput(expected, results)
+
+    def test_perf_negative_case(self):
+
+        self.launch_testpmd(param="--rxq=16 --txq=16")
+        cmd1 = [
+            "add port tm node shaper profile 1 1 100000000 0 100000000 0 0 0",
+            "add port tm nonleaf node 1 1000000 -1 0 1 0 -1 1 0 0",
+            "add port tm nonleaf node 1 900000 1000000 0 1 1 -1 1 0 0",
+            "add port tm nonleaf node 1 800000 900000 0 1 2 -1 1 0 0",
+            "add port tm nonleaf node 1 700000 800000 0 1 3 -1 1 0 0",
+            "add port tm nonleaf node 1 600000 800000 0 2 3 -1 1 0 0",
+        ]
+        output = ""
+        for cmd in cmd1:
+            output += self.pmd_output.execute_cmd(cmd)
+        check_msg = "ice_tm_node_add(): weight != 1 not supported in level 3"
+        self.verify(
+            check_msg in output, "the expected error message was not reported"
+        )
+        cmd2 = [
+            "port stop 1",
+            "del port tm node 1 600000",
+            "add port tm nonleaf node 1 600000 800000 0 1 3 -1 1 0 0",
+            "port start 1",
+            "add port tm leaf node 1 0 700000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 1 700000 0 2 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 2 700000 0 3 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 3 700000 0 201 4 -1 0 0xffffffff 0 0",
+        ]
+        output = ""
+        for cmd in cmd2:
+            output += self.pmd_output.execute_cmd(cmd)
+        check_msg = "node weight: weight must be between 1 and 200 (error 21)"
+        self.verify(
+            check_msg in output, "the expected error message was not reported"
+        )
+        cmd3 = [
+            "add port tm leaf node 1 3 700000 0 200 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 4 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 5 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 6 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 7 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 8 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 9 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 10 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 11 600000 8 1 4 -1 0 0xffffffff 0 0",
+        ]
+        output = ""
+        for cmd in cmd3:
+            output += self.pmd_output.execute_cmd(cmd)
+        check_msg = "node priority: priority should be less than 8 (error 20)"
+        self.verify(
+            check_msg in output, "the expected error message was not reported"
+        )
+        cmd4 = [
+            "add port tm leaf node 1 11 600000 7 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 12 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 13 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 14 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "add port tm leaf node 1 15 600000 0 1 4 -1 0 0xffffffff 0 0",
+            "port tm hierarchy commit 1 no",
+        ]
+        output = ""
+        for cmd in cmd4:
+            output += self.pmd_output.execute_cmd(cmd)
+        check_msg = "ice_move_recfg_lan_txq(): move lan queue 12 failed\r\nice_hierarchy_commit(): move queue 12 failed\r\ncause unspecified: (no stated reason) (error 1)"
+        self.verify(
+            check_msg in output, "the expected error message was not reported"
+        )
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.close_testpmd()
+        self.dut.kill_all()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.kill_all()
-- 
2.17.1


