test suite reviews and discussions
* RE: [dts] [PATCH V3 7/7] tests/vf_pmd_stacked_bonded: add cases to test vf bonded
  2023-03-21 17:40 ` [dts] [PATCH V3 7/7] tests/vf_pmd_stacked_bonded: " Song Jiale
@ 2023-03-21 10:21   ` Peng, Yuan
  2023-03-28  0:57   ` lijuan.tu
  1 sibling, 0 replies; 10+ messages in thread
From: Peng, Yuan @ 2023-03-21 10:21 UTC (permalink / raw)
  To: Jiale, SongX, dts; +Cc: Jiale, SongX



> -----Original Message-----
> From: Song Jiale <songx.jiale@intel.com>
> Sent: Wednesday, March 22, 2023 1:40 AM
> To: dts@dpdk.org
> Cc: Jiale, SongX <songx.jiale@intel.com>
> Subject: [dts] [PATCH V3 7/7] tests/vf_pmd_stacked_bonded: add cases to
> test vf bonded
> 
> add cases to test vf bonded.
> 
> Signed-off-by: Song Jiale <songx.jiale@intel.com>
> ---

Acked-by: Yuan Peng <yuan.peng@intel.com>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dts] [PATCH V3 0/7] add cases to test vf bonded
@ 2023-03-21 17:40 Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 1/7] test_plans/index: add 3 test suites " Song Jiale
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Song Jiale @ 2023-03-21 17:40 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale

add cases to test vf bonded.

Song Jiale (7):
  test_plans/index: add 3 test suites to test vf bonded
  test_plans/vf_pmd_bonded: add cases to test vf bonded
  tests/vf_pmd_bonded: add cases to test vf bonded
  test_plans/vf_pmd_bonded_8023ad: add cases to test vf bonded
  tests/vf_pmd_bonded_8023ad: add cases to test vf bonded
  test_plans/vf_pmd_stacked_bonded: add cases to test vf bonded
  tests/vf_pmd_stacked_bonded: add cases to test vf bonded

 test_plans/index.rst                          |    3 +
 test_plans/vf_pmd_bonded_8023ad_test_plan.rst |  477 ++++
 test_plans/vf_pmd_bonded_test_plan.rst        |  509 ++++
 .../vf_pmd_stacked_bonded_test_plan.rst       |  416 ++++
 tests/TestSuite_vf_pmd_bonded.py              | 2169 +++++++++++++++++
 tests/TestSuite_vf_pmd_bonded_8023ad.py       |  660 +++++
 tests/TestSuite_vf_pmd_stacked_bonded.py      |  613 +++++
 7 files changed, 4847 insertions(+)
 create mode 100644 test_plans/vf_pmd_bonded_8023ad_test_plan.rst
 create mode 100644 test_plans/vf_pmd_bonded_test_plan.rst
 create mode 100644 test_plans/vf_pmd_stacked_bonded_test_plan.rst
 create mode 100644 tests/TestSuite_vf_pmd_bonded.py
 create mode 100644 tests/TestSuite_vf_pmd_bonded_8023ad.py
 create mode 100644 tests/TestSuite_vf_pmd_stacked_bonded.py

-- 
2.25.1



* [dts] [PATCH V3 1/7] test_plans/index: add 3 test suites to test vf bonded
  2023-03-21 17:40 [dts] [PATCH V3 0/7] add cases to test vf bonded Song Jiale
@ 2023-03-21 17:40 ` Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 2/7] test_plans/vf_pmd_bonded: add cases " Song Jiale
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Song Jiale @ 2023-03-21 17:40 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale

add cases to test vf bonded. 

Signed-off-by: Song Jiale <songx.jiale@intel.com>
---
 test_plans/index.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 0770a935..45e1f3e0 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -125,6 +125,9 @@ The following are the test plans for the DPDK DTS automated test system.
     pmd_bonded_8023ad_test_plan
     pmd_bonded_test_plan
     pmd_stacked_bonded_test_plan
+    vf_pmd_bonded_test_plan
+    vf_pmd_bonded_8023ad_test_plan
+    vf_pmd_stacked_bonded_test_plan
     pmd_test_plan
     pmdpcap_test_plan
     pmdrss_hash_test_plan
-- 
2.25.1



* [dts] [PATCH V3 2/7] test_plans/vf_pmd_bonded: add cases to test vf bonded
  2023-03-21 17:40 [dts] [PATCH V3 0/7] add cases to test vf bonded Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 1/7] test_plans/index: add 3 test suites " Song Jiale
@ 2023-03-21 17:40 ` Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 3/7] tests/vf_pmd_bonded: " Song Jiale
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Song Jiale @ 2023-03-21 17:40 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale


add cases to test vf bonded.

Signed-off-by: Song Jiale <songx.jiale@intel.com>
---
 test_plans/vf_pmd_bonded_test_plan.rst | 509 +++++++++++++++++++++++++
 1 file changed, 509 insertions(+)
 create mode 100644 test_plans/vf_pmd_bonded_test_plan.rst

diff --git a/test_plans/vf_pmd_bonded_test_plan.rst b/test_plans/vf_pmd_bonded_test_plan.rst
new file mode 100644
index 00000000..49dbc8d1
--- /dev/null
+++ b/test_plans/vf_pmd_bonded_test_plan.rst
@@ -0,0 +1,509 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 Intel Corporation
+
+================
+VF Bonding Tests
+================
+
+Provide the ability to support Link Bonding for 1GbE and 10GbE ports, similar to the ability found in Linux, to allow the aggregation of multiple (slave) NICs into a single logical interface between a server and a switch. A new PMD will then process these interfaces based on the mode of operation specified and supported. This provides support for redundant links, fault tolerance and/or load balancing of networks. Bonding may also be used in connection with 802.1q VLAN support.
+The following is a good overview: http://www.cyberciti.biz/howto/question/static/linux-ethernet-bonding-driver-howto.php
+
+Requirements
+============
+
+* The Bonding mode SHOULD be specified via an API for a logical bonded interface used for link aggregation.
+* A new PMD layer SHALL operate on the bonded interfaces and may be used in connection with 802.1q VLAN support.
+* Bonded ports SHALL maintain statistics similar to those of normal ports.
+* The slave links SHALL be monitored for link status changes. See also the concept of up/down time delay to handle situations such as a switch reboot: it is possible that its ports report "link up" status before they become usable.
+* The following bonding modes SHALL be available:
+
+  - Mode = 0 (balance-rr) Round-robin policy: (default). Transmit packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance. Packets may be bulk dequeued from devices then serviced in round-robin manner. The order should be specified so that it corresponds to the other side.
+
+  - Mode = 1 (active-backup) Active-backup policy: Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface's MAC address is externally visible on only one NIC (port) to avoid confusing the network switch. This mode provides fault tolerance. Active-backup policy is useful for implementing high availability solutions using two hubs.
+
+  - Mode = 2 (balance-xor) XOR policy: Transmit network packets based on the default transmit policy. The default policy (layer2) is a simple [(source MAC address XOR'd with destination MAC address) modulo slave count].  Alternate transmit policies may be selected. The default transmit policy selects the same NIC slave for each destination MAC address. This mode provides load balancing and fault tolerance.
+
+  - Mode = 3 (broadcast) Broadcast policy: Transmit network packets on all slave network interfaces. This mode provides fault tolerance but is only suitable for special cases.
+
+  - Mode = 4 (802.3ad) IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. This mode requires a switch that supports IEEE 802.3ad Dynamic link aggregation. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR layer2 policy.
+
+  - Mode = 5 (balance-tlb) Adaptive transmit load balancing. Linux bonding driver mode that does not require any special network switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
+
+  - Mode = 6 (balance-alb) Adaptive load balancing. Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.
+* The available transmit policies SHALL be as follows:
+
+  - layer2: Uses XOR of hardware MAC addresses to generate the hash.  The formula is (source MAC XOR destination MAC) modulo slave count. This algorithm will place all traffic to a particular network peer on the same slave. This algorithm is 802.3ad compliant.
+  - layer3+4: This policy uses upper layer protocol information, when available, to generate the hash. This allows traffic to a particular network peer to span multiple slaves, although a single connection will not span multiple slaves. The formula for unfragmented TCP and UDP packets is ((source port XOR dest port) XOR ((source IP XOR dest IP) AND 0xffff)) modulo slave count. For fragmented TCP or UDP packets and all other IP protocol traffic, the source and destination port information is omitted. For non-IP traffic, the formula is the same as for the layer2 transmit hash policy. This policy is intended to mimic the behavior of certain switches, notably Cisco switches with PFC2 as well as some Foundry and IBM products. This algorithm is not fully 802.3ad compliant. A single TCP or UDP conversation containing both fragmented and unfragmented packets will see packets striped across two interfaces, which may result in out-of-order delivery. Most traffic types will not meet these criteria, as TCP rarely fragments traffic, and most UDP traffic is not involved in extended conversations. Other implementations of 802.3ad may or may not tolerate this noncompliance.
+
+* Upon unbonding the bonding PMD driver MUST restore the MAC addresses that the slaves had before they were enslaved.
+* According to the bond type, when the bond interface is placed in promiscuous mode it will propagate the setting to the slave devices as follows: for modes 0, 2, 3 and 4 the promiscuous mode setting is propagated to all slaves.
+* Mode=0, 2, 3 generally require that the switch have the appropriate ports grouped together (e.g. Cisco 5500 series with EtherChannel support or may be called a trunk group).
+
+* Goals:
+
+  - Provide a forwarding example that demonstrates Link Bonding for 2/4x 1GbE ports and 2x 10GbE with the ability to specify the links to be bound, the port order if required, and the bonding type to be used. MAC address of the bond MUST be settable or taken from its first slave device. The example SHALL also allow the enable/disable of promiscuous mode and disabling of the bonding resulting in the return of the normal interfaces and the ability to bring up and down the logical bonded link.
+  - Provide the performance for each of these modes.
+
+This bonding test plan mainly tests basic bonding APIs via testpmd, the supported modes (0-3), and each mode's performance in R1.7.
+
+Prerequisites for Bonding
+=========================
+
+* NIC requirements.
+
+  - Tester: 4 NIC ports
+  - DUT: 4 NIC ports
+
+* Create 1 VF on each of the 4 DUT ports::
+
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.2/sriov_numvfs
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.3/sriov_numvfs
+
+* Disable spoofchk for each VF::
+
+     ip link set dev {pf0_iface} vf 0 spoofchk off
+     ip link set dev {pf1_iface} vf 0 spoofchk off
+     ip link set dev {pf2_iface} vf 0 spoofchk off
+     ip link set dev {pf3_iface} vf 0 spoofchk off
+
+* Connections ports between tester and DUT
+
+   - TESTER---------------DUT
+   - portA----------------vf0
+   - portB----------------vf1
+   - portC----------------vf2
+   - portD----------------vf3
+
+
+Test Setup#1 for Functional test
+================================
+
+Tester has 4 ports (portA-portD) and DUT has 4 VFs (vf0-vf3); connect portA to vf0, portB to vf1, portC to vf2, and portD to vf3.
+
+
+Test Case1: Basic bonding--Create bonded devices and slaves
+===========================================================
+
+Use Setup#1.
+
+Create a bonded device, add the first slave, and verify the default bonded device has default mode 0 and a default primary slave. Below are the sample commands and output::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
+    .....
+    Port 0 Link Up - speed 10000 Mbps - full-duplex
+    Port 1 Link Up - speed 10000 Mbps - full-duplex
+    Port 2 Link Up - speed 10000 Mbps - full-duplex
+    Port 3 Link Up - speed 10000 Mbps - full-duplex
+    Done
+    testpmd> create bonded device 1 1    (arguments are mode and socket; if not set, default mode=0, default socket=0)
+    Created new bonded device (Port 4)
+    testpmd> add bonding slave 1 4
+    Adding port 1 as slave
+    testpmd> show bonding config 4
+    - Dev basic:
+      Bonding mode: ACTIVE_BACKUP(1)
+      Slaves (1): [1]
+      Active Slaves: []
+      Current Primary: [1]
+    testpmd> port start 4
+    ......
+    Done
+    testpmd> show bonding config 4
+    - Dev basic:
+      Bonding mode: ACTIVE_BACKUP(1)
+      Slaves (1): [1]
+      Active Slaves: [1]
+      Current Primary: [1]
+
+Create another bonded device, and check that a slave already added to the first bonded device cannot be added to the second::
+
+    testpmd> create bonded device 1 1
+    Created new bonded device (Port 5)
+    testpmd> add bonding slave 0 4
+    Adding port 0 as slave
+    testpmd> add bonding slave 0 5
+    Failed to add port 0 as slave
+
+Change the bonding mode and verify if it works::
+
+    testpmd> set bonding mode 3 4
+    testpmd> show bonding config 4
+
+Add the 2nd slave, change the primary slave to the 2nd slave, and verify that it works::
+
+    testpmd> add bonding slave 2 4
+    testpmd> set bonding primary 2 4
+    testpmd> show bonding config 4
+
+Remove the slaves and check the bonded device again. Below are the sample commands::
+
+    testpmd> remove bonding slave 1 4
+    testpmd> show bonding config 4    (verify that slave 1 is removed from the slaves/active slaves)
+    testpmd> remove bonding slave 0 4
+    testpmd> remove bonding slave 2 4    (this command cannot succeed, since a bonded device needs at least 1 slave)
+    testpmd> show bonding config 4
+
+
+Test Case2: Basic bonding--MAC Address Test
+===========================================
+
+Use Setup#1.
+
+Create a bonded device, add one slave, and verify the bonded device's MAC address is the slave's MAC::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
+    .....
+    Port 0 Link Up - speed 10000 Mbps - full-duplex
+    Port 1 Link Up - speed 10000 Mbps - full-duplex
+    Port 2 Link Up - speed 10000 Mbps - full-duplex
+    Port 3 Link Up - speed 10000 Mbps - full-duplex
+    Done
+    testpmd> create bonded device 1 1
+    testpmd> add bonding slave 1 4
+    testpmd> show port info 1
+     ********************* Infos for port 1  *********************
+    MAC address: 90:E2:BA:4A:54:81
+    Connect to socket: 0
+    memory allocation on the socket: 0
+    Link status: up
+    Link speed: 10000 Mbps
+    Link duplex: full-duplex
+    Promiscuous mode: enabled
+    Allmulticast mode: disabled
+    Maximum number of MAC addresses: 127
+    Maximum number of MAC addresses of hash filtering: 4096
+    VLAN offload:
+       strip on
+       filter on
+       qinq(extend) off
+    testpmd> show port info 4
+     ********************* Infos for port 4  *********************
+    MAC address: 90:E2:BA:4A:54:81
+    Connect to socket: 1
+    memory allocation on the socket: 0
+    Link status: down
+    Link speed: 10000 Mbps
+    Link duplex: full-duplex
+    Promiscuous mode: enabled
+    Allmulticast mode: disabled
+    Maximum number of MAC addresses: 1
+    Maximum number of MAC addresses of hash filtering: 0
+    VLAN offload:
+      strip off
+      filter off
+      qinq(extend) off
+
+Continue with the above case: add the 2nd slave and check the configuration of the bonded device. Verify the bonded device's MAC address is that of the primary slave and that all slaves' MAC addresses are the same. Below are the sample commands::
+
+    testpmd> add bonding slave 2 4
+    testpmd> show bonding config 4
+    testpmd> show port info 1    (check that ports 1, 2 and 4 have the same MAC address as port 1)
+    testpmd> show port info 4
+    testpmd> show port info 2
+
+Set the bonded device's MAC address, and verify the bonded port's and slaves' MAC addresses have changed to the new MAC address::
+
+    testpmd> set bonding mac_addr 4 00:11:22:00:33:44
+    testpmd> show port info 1    (check that ports 1, 2 and 4 have the new MAC address)
+    testpmd> show port info 4
+    testpmd> show port info 2
+
+Change the primary slave to the 2nd slave, and verify that the bonded device's MAC and the slaves' MACs are unchanged.
+Remove the 2nd slave from the bonded device, then verify that the 2nd slave's MAC address is restored to its original value::
+
+    testpmd> port start 4    (make sure port 4 has the primary slave)
+    testpmd> show bonding config 4
+    testpmd> set bonding primary 2 4
+    testpmd> show bonding config 4    (verify that port 2 is the primary slave)
+    testpmd> show port info 4
+    testpmd> show port info 2
+    testpmd> show port info 1    (verify that the bonding port's and the slaves' MACs are still the original ones)
+    testpmd> remove bonding slave 2 4
+    testpmd> show bonding config 4    (verify the primary slave after the removal)
+    testpmd> show port info 2    (check that port 2 has been restored to its original MAC)
+    testpmd> show port info 4    (verify that the bonding device's and remaining slaves' MACs are unchanged after removing the primary slave)
+    testpmd> show port info 1
+
+Add another slave (the 3rd slave), then remove it from the bonded device, and verify the slave's MAC address is restored to the correct value::
+
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+    testpmd> remove bonding slave 3 4
+    testpmd> show bonding config 4
+    testpmd> show port info 3    (check that port 3 has been restored to the correct MAC)
+
+
+Test Case3: Basic bonding--Device Promiscuous Mode Test
+========================================================
+
+Use Setup#1.
+
+Set vf0 trust on::
+
+   ip link set dev {pf0_iface} vf 0 trust on
+
+Create a bonded device and add 3 slaves. Set promiscuous mode on the bonded eth dev. Verify all slaves of the bonded device are changed to promiscuous mode::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
+    .....
+    Port 0 Link Up - speed 10000 Mbps - full-duplex
+    Port 1 Link Up - speed 10000 Mbps - full-duplex
+    Port 2 Link Up - speed 10000 Mbps - full-duplex
+    Port 3 Link Up - speed 10000 Mbps - full-duplex
+    Done
+    testpmd> create bonded device 3 1
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+    testpmd> add bonding slave 2 4
+    testpmd> show port info all    (check that ports 0, 1, 2 and 4 have promiscuous mode enabled)
+     ********************* Infos for port 0  *********************
+    MAC address: 90:E2:BA:4A:54:80
+    Connect to socket: 0
+    memory allocation on the socket: 0
+    Link status: up
+    Link speed: 10000 Mbps
+    Link duplex: full-duplex
+    **Promiscuous mode: enabled**
+    Allmulticast mode: disabled
+    Maximum number of MAC addresses: 127
+    Maximum number of MAC addresses of hash filtering: 4096
+    VLAN offload:
+      strip on
+      filter on
+      qinq(extend) off
+
+Send 1 packet to any bonded slave port (e.g. vf0) with a destination MAC different from that of the eth dev (00:11:22:33:44:55), and verify that the data is received on both the slave and the bonded device (vf0 and port 4)::
+
+    testpmd> set portlist 3,4
+    testpmd> port start all
+    testpmd> start
+    testpmd> show port stats all    (verify vf0 has received 1 packet, port 4 has received 1 packet, and vf3 has transmitted 1 packet)
+
+Disable promiscuous mode on the bonded device. Verify all slaves of the bonded eth dev have changed to non-promiscuous mode. This applies to modes 0, 2, 3 and 4; for other modes, such as mode 1, it applies only to the active slave::
+
+    testpmd> set promisc 4 off
+    testpmd> show port info all    (verify that ports 0, 1, 2 and 4 have promiscuous mode disabled, depending on the mode)
+
+Send 1 packet to any bonded slave port (e.g. vf0) with a MAC not belonging to that slave, and verify that the data is received on neither the bonded device nor the slave::
+
+    testpmd> show port stats all    (verify vf0 has NOT received the packet, and port 4 has not received it either)
+
+Send 1 packet to any bonded slave port (e.g. vf0) with that slave's MAC, and verify that the data is received on the bonded device and the slave, since the MAC address is correct::
+
+    testpmd> show port stats all    (verify vf0 has received 1 packet, port 4 has received 1 packet, and vf3 has transmitted 1 packet)
+
+Test Case4: Mode 0(Round Robin) TX/RX test
+==========================================
+
+TX:
+
+Add ports 0-2 as slave devices to the bonded port 4.
+Send a packet stream from port D on the traffic generator to be forwarded through the bonded port.
+Verify that traffic is distributed equally in a round-robin manner through ports 0-2 on the DUT back to the traffic generator.
+The sum of the packets received on ports A-C should equal the total packets sent from port D.
+The sum of the packets transmitted on ports 0-2 should equal the total packets transmitted from the bonded port 4 and received on vf3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
+    ....
+
+    testpmd> create bonded device 0 1
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+    testpmd> add bonding slave 2 4
+    testpmd> set portlist 3,4
+    testpmd> port start all
+    testpmd> start
+    testpmd> show port stats all    (check vf0-3 and port 4 tx/rx packet stats)
+
+Send 100 packets to vf3 and verify that vf3 receives 100 packets and port 4 transmits 100 packets; meanwhile, the sum of the packets transmitted on ports 0-2 should equal the total packets transmitted from port 4::
+
+    testpmd> show port stats all    (verify vf3 has 100 rx packets, vf0-2 have 100 tx packets in total, and port 4 has 100 tx packets)
+
+RX:
+Add ports 0-2 as slave devices to the bonded port 4.
+Send a packet stream from port A, B or C on the traffic generator to be forwarded through the bonded port 4 to vf3.
+Verify that the sum of the packets transmitted from the traffic generator port equals the total packets received on port 4 and transmitted on vf3.
+Send a packet stream from the other 2 traffic generator ports connected to the bonded port's slave ports.
+Verify the data transmission/reception counts.
+
+Send 10 packets from port 0-2 to vf3::
+
+    testpmd> clear port stats all
+    testpmd> show port stats all    (verify vf0-2 have 10 rx packets each, port 4 has 30 rx packets, and vf3 has 30 tx packets)
+
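The round-robin expectation in the test cases above can be sanity-checked with a small model; this is an illustrative Python sketch (not part of the DTS suite) that assumes ideal per-packet round-robin with no burst dequeueing:

```python
# Hypothetical helper: expected per-slave TX counts when `num_packets`
# are distributed round-robin over `num_slaves` slave ports.
def round_robin_counts(num_packets: int, num_slaves: int) -> list:
    base, extra = divmod(num_packets, num_slaves)
    # the first `extra` slaves in the rotation carry one extra packet
    return [base + (1 if i < extra else 0) for i in range(num_slaves)]

counts = round_robin_counts(100, 3)
assert sum(counts) == 100              # total equals the packets sent from port D
assert max(counts) - min(counts) <= 1  # distribution is (nearly) equal
```

In practice testpmd may dequeue packets in bursts, so per-slave counts can deviate slightly from this ideal model.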
+
+Test Case5: Mode 0(Round Robin) Bring one slave link down
+=========================================================
+
+Add ports 0-2 as slave devices to the bonded port 4.
+Bring the link on either port 0, 1 or 2 down.
+Send a packet stream from port D on the traffic generator to be forwarded through the bonded port.
+Verify that the forwarded traffic is distributed equally in a round-robin manner through the active bonded ports on the DUT back to the traffic generator.
+The sum of the packets received on ports A-C should equal the total packets sent from port D.
+The sum of the packets transmitted on the active bonded ports should equal the total packets transmitted from the bonded port 4.
+No traffic should be sent on the slave port whose link was brought down.
+Bring the link back up on that slave port.
+Verify that round-robin operation resumes across all bonded ports.
+
+Test Case6: Mode 0(Round Robin) Bring all slave links down
+==========================================================
+
+Add ports 0-2 as slave devices to the bonded port 4.
+Bring the links down on all bonded ports.
+Verify that the bonded callback for link down is called.
+Verify that no traffic is forwarded through the bonded device.
+
+Test Case7: Mode 1(Active Backup) TX/RX Test
+============================================
+
+Add ports 0-2 as slave devices to the bonded port 4. Set port 0 as the active slave on the bonded device::
+
+    testpmd> create bonded device 1 1
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+    testpmd> add bonding slave 2 4
+    testpmd> show port info 4    (check the MAC address of the bonded device)
+    testpmd> set portlist 3,4
+    testpmd> port start all
+    testpmd> start
+
+Send a packet stream (100 packets) from port A on the traffic generator to be forwarded through the bonded port 4 to vf3. Verify that the sum of the packets transmitted from traffic generator port A equals the total packets received on vf0 and port 4 and transmitted on vf3::
+
+    testpmd> show port stats all    (verify vf0 receives 100 packets, port 4 receives 100 packets, and vf3 transmits 100 packets)
+
+Send a packet stream (100 packets) from port D on the traffic generator to be forwarded through vf3 to the bonded port 4. Verify that the sum of the packets (100 packets) transmitted from the traffic generator port equals the total packets received on vf3 and transmitted on port 4 and vf0::
+
+    testpmd> show port stats all    (verify vf0 and port 4 TX 100 packets, and vf3 receives 100 packets)
+
+Test Case8: Mode 1(Active Backup) Change active slave, RX/TX test
+=================================================================
+
+Continuing from Test Case7.
+Change the active slave port from vf0 to vf1. Verify that the bonded device's MAC has changed to slave 1's MAC::
+
+    testpmd> set bonding primary 1 4
+
+Repeat the transmission and reception (TX/RX) test and verify that data is now transmitted and received through the new active slave and no longer through vf0.
+
+
+Test Case9: Mode 1(Active Backup) Link up/down active eth dev
+==============================================================
+
+Bring the link between port A and vf0 down.
+Verify that the active slave has been changed from vf0.
+Repeat the transmission and reception test and verify that data is now transmitted and received through the new active slave and no longer through vf0.
+
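The failover behaviour exercised by Test Case9 and Test Case10 can be modelled in a few lines; this is an illustrative sketch, not the bonding PMD's actual selection logic:

```python
# Minimal model of mode 1 (active-backup): traffic uses the primary
# while its link is up; otherwise the first remaining up slave takes
# over; with no up slaves the bond cannot forward at all.
def active_slave(link_up: dict, primary: int):
    if link_up.get(primary):
        return primary
    for port, up in link_up.items():
        if up:
            return port
    return None  # no active slaves

links = {0: True, 1: True, 2: True}
assert active_slave(links, primary=0) == 0   # vf0 carries all traffic
links[0] = False                             # bring vf0's link down
assert active_slave(links, primary=0) == 1   # failover to the next up slave
assert active_slave({0: False, 1: False, 2: False}, primary=0) is None
```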
+Test Case10: Mode 1(Active Backup) Bring all slave links down
+=============================================================
+
+Bring all slave ports of bonded port down.
+Verify that bonded callback for link down is called and no active slaves.
+Verify that data cannot be sent or received through the bonded port. Send 100 packets to vf3 and verify that the bonded port does not transmit them.
+
+Test Case11: Mode 2(Balance XOR) TX Load Balance test
+=====================================================
+
+Bonded port will activate each slave eth dev based on the following hash function::
+
+    ((dst_mac XOR src_mac) % (number of slave ports))
+
+Send 300 packets from the non-bonded port (vf3), and verify these packets are forwarded to the bonded device, which transmits them across its slaves.
+Verify that each slave receives the correct number of packets according to the policy; the total number of packets across the slaves should equal 300.
+
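The layer2 hash above can be sketched as follows when reasoning about the expected per-slave counts; this is illustrative Python, and the byte-wise XOR folding is an assumption rather than necessarily DPDK's exact hash:

```python
# Illustrative balance-xor (layer2) slave selection:
# ((dst_mac XOR src_mac) % slave_count), folding the MAC bytes by XOR.
def l2_slave_index(src_mac: bytes, dst_mac: bytes, slave_count: int) -> int:
    h = 0
    for s, d in zip(src_mac, dst_mac):
        h ^= s ^ d
    return h % slave_count

src = bytes.fromhex("001122334455")
# identical MACs hash to slave 0; varying the last dst byte moves
# packets across the slaves
assert l2_slave_index(src, src, 3) == 0
assert all(0 <= l2_slave_index(src, bytes.fromhex("00112233445%x" % b), 3) < 3
           for b in range(16))
```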
+
+Test Case12: Mode 2(Balance XOR) TX Load Balance Link down
+==========================================================
+
+Bring the link of one slave down.
+Send 300 packets from the non-bonded port (vf3), and verify these packets are forwarded to the bonded device.
+Verify that each active slave receives the correct number of packets (according to the mode policy), and that the downed slave receives none.
+
+Test Case13: Mode 2(Balance XOR) Bring all slave links down
+===========================================================
+
+Bring all slave links down.
+Verify that bonded callback for link down is called.
+Verify no packet can be sent.
+
+Test Case14: Mode 2(Balance XOR) Layer 3+4 forwarding
+=========================================================
+
+Use "xmit_hash_policy()" to change to this forwarding mode.
+Create a stream of traffic which will exercise all slave ports using the transmit policy::
+
+    ((SRC_PORT XOR DST_PORT) XOR ((SRC_IP XOR DST_IP) AND 0xffff) % (number of slave ports))
+
+Transmit data through the bonded device, and verify the TX packet count for each slave port is as expected.
+
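When building such a stream, the policy formula above can be exercised with a small script; this is an illustrative sketch, and DPDK's exact field folding may differ:

```python
import ipaddress

# Illustrative layer3+4 slave selection, mirroring
# ((SRC_PORT XOR DST_PORT) XOR ((SRC_IP XOR DST_IP) AND 0xffff)) % slaves.
def l34_slave_index(src_ip, dst_ip, src_port, dst_port, slave_count):
    sip = int(ipaddress.ip_address(src_ip))
    dip = int(ipaddress.ip_address(dst_ip))
    h = (src_port ^ dst_port) ^ ((sip ^ dip) & 0xFFFF)
    return h % slave_count

# Varying only the UDP source port over a small range already hits all
# 3 slaves, so such a stream exercises every slave port.
hits = {l34_slave_index("192.168.0.1", "192.168.0.2", p, 5000, 3)
        for p in range(5000, 5016)}
assert hits == {0, 1, 2}
```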
+Test Case15: Mode 2(Balance XOR) RX test
+========================================
+
+Send 100 packets to each bonded slave (vf0, vf1, vf2).
+Verify that each slave receives 100 packets and the bonded device receives a total of 300 packets.
+Verify that the bonded device forwards 300 packets to the non-bonded port (vf3).
+
+Test Case16: Mode 3(Broadcast) TX/RX Test
+=========================================
+
+Add ports 0-2 as slave devices to the bonded port 4. Set port 0 as the active slave on the bonded device::
+
+    testpmd> create bonded device 3 1
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+    testpmd> add bonding slave 2 4
+    testpmd> show port info 4    (check the MAC address of the bonded device)
+    testpmd> set portlist 3,4
+    testpmd> port start all
+    testpmd> start
+
+RX: Send a packet stream (100 packets) from port A on the traffic generator to be forwarded through the bonded port 4 to vf3. Verify that the sum of the packets transmitted from traffic generator port A equals the total packets received on vf0 and port 4 and on port D (traffic generator)::
+
+    testpmd> show port stats all    (verify vf0 receives 100 packets, port 4 receives 100 packets, and vf3 transmits 100 packets)
+
+TX: Send a packet stream (100 packets) from port D on the traffic generator to be forwarded through vf3 to the bonded port 4. Verify that vf3 receives the 100 packets and that the bonded port and each slave transmit 100 packets::
+
+    testpmd> show port stats all    (verify vf3 RX 100 packets, and vf0, 1, 2 and 4 TX 100 packets)
+
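For reference, the broadcast-mode TX expectation is simple replication; a trivial illustrative model:

```python
# Mode 3 (broadcast) TX model: every packet entering the bond is
# replicated to each slave, so each slave's TX count equals the input.
def broadcast_tx_counts(num_packets: int, num_slaves: int) -> list:
    return [num_packets] * num_slaves

assert broadcast_tx_counts(100, 3) == [100, 100, 100]  # matches Test Case16 TX
```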
+Test Case17: Mode 3(Broadcast) Bring one slave link down
+========================================================
+
+Bring one slave port's link down. Send 100 packets through port D to vf3, which forwards them to the bonded device (port 4); verify that the bonded device and the remaining slaves TX the correct number of packets (100 packets on each active port).
+
+
+Test Case18: Mode 3(Broadcast) Bring all slave links down
+=========================================================
+
+Bring all slave ports of the bonded port down.
+Verify that the bonded callback for link down is called.
+Verify that data cannot be sent or received through the bonded port.
+
+Test Case19: Mode 5(TLB) Base Test
+==================================
+
+Repeat Test Case1, Test Case2 and Test Case3 using a bonding device with mode 5 (TLB).
+Create the bonding device with mode 5 (TLB)::
+
+   testpmd> create bonded device 5 0
+
+Test Case20: Mode 5(TLB) TX/RX test
+====================================
+Repeat Test Case4 using a bonding device with mode 5 (TLB).
+Create the bonding device with mode 5 (TLB)::
+
+   testpmd> create bonded device 5 0
+
+Test Case21: Mode 5(TLB) Bring one slave link down
+==================================================
+
+Bring one slave port link down. Send 10000 packets through portA to vf3, then vf3 forwards them to the bonded device (port4).
+Verify that the bonded device and the other slaves TX the correct number of packets, with load balancing across all active slaves,
+10000 packets in total.
+
+Test Case22: Mode 5(TLB) Bring all slave links down
+===================================================
+
+Bring all slave ports of the bonded port down.
+Verify that the bonded callback for link down is called.
+Verify that data cannot be sent or received through the bonded port.
\ No newline at end of file
-- 
2.25.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dts] [PATCH V3 3/7] tests/vf_pmd_bonded: add cases to test vf bonded
  2023-03-21 17:40 [dts] [PATCH V3 0/7] add cases to test vf bonded Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 1/7] test_plans/index: add 3 test suites " Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 2/7] test_plans/vf_pmd_bonded: add cases " Song Jiale
@ 2023-03-21 17:40 ` Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 4/7] test_plans/vf_pmd_bonded_8023ad: " Song Jiale
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Song Jiale @ 2023-03-21 17:40 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale

add cases to test vf bonded.

Signed-off-by: Song Jiale <songx.jiale@intel.com>
---
 tests/TestSuite_vf_pmd_bonded.py | 2169 ++++++++++++++++++++++++++++++
 1 file changed, 2169 insertions(+)
 create mode 100644 tests/TestSuite_vf_pmd_bonded.py

diff --git a/tests/TestSuite_vf_pmd_bonded.py b/tests/TestSuite_vf_pmd_bonded.py
new file mode 100644
index 00000000..ca3b2823
--- /dev/null
+++ b/tests/TestSuite_vf_pmd_bonded.py
@@ -0,0 +1,2169 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+
+Test VF PMD bonded devices.
+"""
+
+import random
+import re
+import time
+from socket import htonl, htons
+
+import framework.utils as utils
+import tests.bonding as bonding
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase
+
+SOCKET_0 = 0
+SOCKET_1 = 1
+
+MODE_ROUND_ROBIN = "ROUND_ROBIN(0)"
+MODE_ACTIVE_BACKUP = "ACTIVE_BACKUP(1)"
+MODE_XOR_BALANCE = "BALANCE(2)"
+MODE_BROADCAST = "BROADCAST(3)"
+MODE_LACP = "8023AD(4)"
+MODE_TLB_BALANCE = "TLB(5)"
+MODE_ALB_BALANCE = "ALB(6)"
+
+FRAME_SIZE_64 = 64
+FRAME_SIZE_65 = 65
+FRAME_SIZE_128 = 128
+FRAME_SIZE_256 = 256
+FRAME_SIZE_512 = 512
+FRAME_SIZE_1024 = 1024
+FRAME_SIZE_1280 = 1280
+FRAME_SIZE_1518 = 1518
+
+S_MAC_IP_PORT = [
+    ("52:00:00:00:00:00", "10.239.129.65", 61),
+    ("52:00:00:00:00:01", "10.239.129.66", 62),
+    ("52:00:00:00:00:02", "10.239.129.67", 63),
+]
+
+D_MAC_IP_PORT = []
+LACP_MESSAGE_SIZE = 128
+
+
+class TestVFPmdBonded(TestCase):
+    def get_stats(self, portid, rx_tx):
+        """
+        Get packets number from port statistic
+        """
+
+        out = self.dut.send_expect("show port stats %d" % portid, "testpmd> ")
+
+        if rx_tx == "rx":
+            result_scanner = (
+                r"RX-packets: ([0-9]+)\s*RX-missed: ([0-9]+)\s*RX-bytes:  ([0-9]+)"
+            )
+        elif rx_tx == "tx":
+            result_scanner = (
+                r"TX-packets: ([0-9]+)\s*TX-errors: ([0-9]+)\s*TX-bytes:  ([0-9]+)"
+            )
+        else:
+            return None
+
+        scanner = re.compile(result_scanner, re.DOTALL)
+        m = scanner.search(out)
+
+        return m.groups()
+
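The scanner in get_stats() can be exercised standalone against a sample of testpmd's "show port stats" output (the sample text below is abbreviated and hypothetical, but follows testpmd's column layout, including the two spaces after "RX-bytes:" that the pattern expects):

```python
import re

# Abbreviated, made-up "show port stats" output in testpmd's layout.
sample = (
    "  ######################## NIC statistics for port 0 ########################\n"
    "  RX-packets: 100        RX-missed: 0          RX-bytes:  6400\n"
    "  TX-packets: 100        TX-errors: 0          TX-bytes:  6400\n"
)

# Same pattern as get_stats() for the "rx" direction.
scanner = re.compile(
    r"RX-packets: ([0-9]+)\s*RX-missed: ([0-9]+)\s*RX-bytes:  ([0-9]+)", re.DOTALL
)
rx_pkts, rx_missed, rx_bytes = (int(g) for g in scanner.search(sample).groups())
```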
+    def parse_ether_ip(self, dest_port, **ether_ip):
+        """
+        ether_ip:
+            'ether':
+                {
+                    'dest_mac':False
+                    'src_mac':"52:00:00:00:00:00"
+                }
+            'dot1q':
+                {
+                    'vlan':1
+                }
+            'ip':
+                {
+                    'dest_ip':"10.239.129.88"
+                    'src_ip':"10.239.129.65"
+                }
+            'udp':
+                {
+                    'dest_port':53
+                    'src_port':53
+                }
+        """
+        ret_ether_ip = {}
+        ether = {}
+        dot1q = {}
+        ip = {}
+        udp = {}
+        try:
+            dut_dest_port = self.vf_ports[dest_port]
+        except Exception as e:
+            dut_dest_port = dest_port
+
+        query_type = "mac"
+        if not ether_ip.get("ether"):
+            ether["dest_mac"] = self.bond_inst.get_port_mac(dut_dest_port, query_type)
+            ether["src_mac"] = "52:00:00:00:00:00"
+        else:
+            if not ether_ip["ether"].get("dest_mac"):
+                ether["dest_mac"] = self.bond_inst.get_port_mac(
+                    dut_dest_port, query_type
+                )
+            else:
+                ether["dest_mac"] = ether_ip["ether"]["dest_mac"]
+            if not ether_ip["ether"].get("src_mac"):
+                ether["src_mac"] = "52:00:00:00:00:00"
+            else:
+                ether["src_mac"] = ether_ip["ether"]["src_mac"]
+
+        if not ether_ip.get("dot1q"):
+            pass
+        else:
+            if not ether_ip["dot1q"].get("vlan"):
+                dot1q["vlan"] = "1"
+            else:
+                dot1q["vlan"] = ether_ip["dot1q"]["vlan"]
+
+        if not ether_ip.get("ip"):
+            ip["dest_ip"] = "10.239.129.88"
+            ip["src_ip"] = "10.239.129.65"
+        else:
+            if not ether_ip["ip"].get("dest_ip"):
+                ip["dest_ip"] = "10.239.129.88"
+            else:
+                ip["dest_ip"] = ether_ip["ip"]["dest_ip"]
+            if not ether_ip["ip"].get("src_ip"):
+                ip["src_ip"] = "10.239.129.65"
+            else:
+                ip["src_ip"] = ether_ip["ip"]["src_ip"]
+
+        if not ether_ip.get("udp"):
+            udp["dest_port"] = 53
+            udp["src_port"] = 53
+        else:
+            if not ether_ip["udp"].get("dest_port"):
+                udp["dest_port"] = 53
+            else:
+                udp["dest_port"] = ether_ip["udp"]["dest_port"]
+            if not ether_ip["udp"].get("src_port"):
+                udp["src_port"] = 53
+            else:
+                udp["src_port"] = ether_ip["udp"]["src_port"]
+
+        ret_ether_ip["ether"] = ether
+        ret_ether_ip["dot1q"] = dot1q
+        ret_ether_ip["ip"] = ip
+        ret_ether_ip["udp"] = udp
+
+        return ret_ether_ip
+
+    def send_packet(
+        self,
+        dest_port,
+        src_port=False,
+        frame_size=FRAME_SIZE_64,
+        count=1,
+        invert_verify=False,
+        **ether_ip,
+    ):
+        """
+        Send count packet to portid
+        count: 1 or 2 or 3 or ... or 'MANY'
+               if count is 'MANY', then set count=100000,
+               and send packets for 5 seconds.
+        ether_ip:
+            'ether':
+                {
+                    'dest_mac':False
+                    'src_mac':"52:00:00:00:00:00"
+                }
+            'dot1q':
+                {
+                    'vlan':1
+                }
+            'ip':
+                {
+                    'dest_ip':"10.239.129.88"
+                    'src_ip':"10.239.129.65"
+                }
+            'udp':
+                {
+                    'dest_port':53
+                    'src_port':53
+                }
+        """
+        during = 0
+        loop = 0
+        try:
+            count = int(count)
+        except ValueError as e:
+            if count == "MANY":
+                during = 5
+                count = 100000
+            else:
+                raise e
+
+        if not src_port:
+            gp0rx_pkts, gp0rx_err, gp0rx_bytes = [
+                int(_) for _ in self.get_stats(self.vf_ports[dest_port], "rx")
+            ]
+            itf = self.tester.get_interface(
+                self.tester.get_local_port(self.dut_ports[dest_port])
+            )
+        else:
+            gp0rx_pkts, gp0rx_err, gp0rx_bytes = [
+                int(_) for _ in self.get_stats(dest_port, "rx")
+            ]
+            itf = src_port
+
+        ret_ether_ip = self.parse_ether_ip(dest_port, **ether_ip)
+
+        pktlen = frame_size - 18
+        padding = pktlen - 20
+
+        start = time.time()
+        while True:
+            self.tester.scapy_foreground()
+            self.tester.scapy_append('nutmac="%s"' % ret_ether_ip["ether"]["dest_mac"])
+            self.tester.scapy_append('srcmac="%s"' % ret_ether_ip["ether"]["src_mac"])
+
+            if ether_ip.get("dot1q"):
+                self.tester.scapy_append("vlanvalue=%d" % ret_ether_ip["dot1q"]["vlan"])
+            self.tester.scapy_append('destip="%s"' % ret_ether_ip["ip"]["dest_ip"])
+            self.tester.scapy_append('srcip="%s"' % ret_ether_ip["ip"]["src_ip"])
+            self.tester.scapy_append("destport=%d" % ret_ether_ip["udp"]["dest_port"])
+            self.tester.scapy_append("srcport=%d" % ret_ether_ip["udp"]["src_port"])
+            if not ret_ether_ip.get("dot1q"):
+                pkt = (
+                    'sendp([Ether(dst=nutmac, src=srcmac)/IP(dst=destip, src=srcip, len=%s)/\
+UDP(sport=srcport, dport=destport)/Raw(load="\x50"*%s)], iface="%s", count=%d, verbose=False)'
+                    % (pktlen, padding, itf, count)
+                )
+                self.tester.scapy_append(pkt)
+            else:
+                pkt = (
+                    'sendp([Ether(dst=nutmac, src=srcmac)/Dot1Q(vlan=vlanvalue)/IP(dst=destip, src=srcip, len=%s)/\
+UDP(sport=srcport, dport=destport)/Raw(load="\x50"*%s)], iface="%s", count=%d, verbose=False)'
+                    % (pktlen, padding, itf, count)
+                )
+                self.tester.scapy_append(pkt)
+            self.tester.scapy_execute(timeout=180)
+            loop += 1
+
+            now = time.time()
+            if (now - start) >= during:
+                break
+        time.sleep(0.5)
+
+        if not src_port:
+            p0rx_pkts, p0rx_err, p0rx_bytes = [
+                int(_) for _ in self.get_stats(self.vf_ports[dest_port], "rx")
+            ]
+        else:
+            p0rx_pkts, p0rx_err, p0rx_bytes = [
+                int(_) for _ in self.get_stats(dest_port, "rx")
+            ]
+
+        p0rx_pkts -= gp0rx_pkts
+        p0rx_bytes -= gp0rx_bytes
+
+        if not invert_verify:
+            self.verify(p0rx_pkts >= count * loop, "Data not received by port")
+        else:
+            global LACP_MESSAGE_SIZE
+            self.verify(
+                p0rx_pkts == 0 or p0rx_bytes / p0rx_pkts == LACP_MESSAGE_SIZE,
+                "Data received by port, but should not.",
+            )
+        return count * loop
+
+    def get_value_from_str(self, key_str, regx_str, string):
+        """
+        Get some values from the given string by the regular expression.
+        """
+        pattern = r"(?<=%s)%s" % (key_str, regx_str)
+        s = re.compile(pattern)
+        res = s.search(string)
+        if res is None:
+            return " "
+        else:
+            return res.group(0)
+
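get_value_from_str() builds a lookbehind from the key string, which works because Python lookbehinds must be fixed-width and every key used in this suite is a literal. A minimal sketch against a made-up fragment of "show port info" output:

```python
import re

# Hypothetical fragment of testpmd "show port info" output.
out = "MAC address: 52:00:00:00:00:01\nLink speed: 10000 Mbps\n"

# Fixed-width lookbehind on the key, then the value pattern.
pattern = r"(?<=%s)%s" % ("Link speed: ", r"\d+")
match = re.compile(pattern).search(out)
value = match.group(0) if match else " "
```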
+    def get_detail_from_port_info(self, key_str, regx_str, port):
+        """
+        Get the detail info from the output of pmd cmd 'show port info <port num>'.
+        """
+        out = self.dut.send_expect("show port info %d" % port, "testpmd> ")
+        find_value = self.get_value_from_str(key_str, regx_str, out)
+        return find_value
+
+    def get_port_mac(self, port_id):
+        """
+        Get the specified port MAC.
+        """
+        return self.get_detail_from_port_info(
+            "MAC address: ", "([0-9A-F]{2}:){5}[0-9A-F]{2}", port_id
+        )
+
+    def get_port_connect_socket(self, port_id):
+        """
+        Get the socket id which the specified port is connecting with.
+        """
+        return self.get_detail_from_port_info("Connect to socket: ", "\d+", port_id)
+
+    def get_port_memory_socket(self, port_id):
+        """
+        Get the socket id which the specified port memory is allocated on.
+        """
+        return self.get_detail_from_port_info(
+            "memory allocation on the socket: ", "\d+", port_id
+        )
+
+    def get_port_link_status(self, port_id):
+        """
+        Get the specified port link status now.
+        """
+        return self.get_detail_from_port_info("Link status: ", "\S+", port_id)
+
+    def get_port_link_speed(self, port_id):
+        """
+        Get the specified port link speed now.
+        """
+        return self.get_detail_from_port_info("Link speed: ", "\d+", port_id)
+
+    def get_port_link_duplex(self, port_id):
+        """
+        Get the specified port link mode, duplex or simplex.
+        """
+        return self.get_detail_from_port_info("Link duplex: ", "\S+", port_id)
+
+    def get_port_promiscuous_mode(self, port_id):
+        """
+        Get the promiscuous mode of port.
+        """
+        return self.get_detail_from_port_info("Promiscuous mode: ", "\S+", port_id)
+
+    def get_port_allmulticast_mode(self, port_id):
+        """
+        Get the allmulticast mode of port.
+        """
+        return self.get_detail_from_port_info("Allmulticast mode: ", "\S+", port_id)
+
+    def get_port_vlan_offload(self, port_id):
+        """
+        Function: get the port vlan setting info.
+        return value:
+            'strip':'on'
+            'filter':'on'
+            'qinq':'off'
+        """
+        vlan_info = {}
+        vlan_info["strip"] = self.get_detail_from_port_info("strip ", "\S+", port_id)
+        vlan_info["filter"] = self.get_detail_from_port_info("filter", "\S+", port_id)
+        vlan_info["qinq"] = self.get_detail_from_port_info(
+            "qinq\(extend\) ", "\S+", port_id
+        )
+        return vlan_info
+
+    def get_info_from_bond_config(self, key_str, regx_str, bond_port):
+        """
+        Get info by executing the command "show bonding config".
+        """
+        out = self.dut.send_expect("show bonding config %d" % bond_port, "testpmd> ")
+        find_value = self.get_value_from_str(key_str, regx_str, out)
+        return find_value
+
+    def get_bond_mode(self, bond_port):
+        """
+        Get the mode of the chosen bonding device.
+        """
+        return self.get_info_from_bond_config("Bonding mode: ", "\S*", bond_port)
+
+    def get_bond_balance_policy(self, bond_port):
+        """
+        Get the balance transmit policy of bonding device.
+        """
+        return self.get_info_from_bond_config("Balance Xmit Policy: ", "\S+", bond_port)
+
+    def get_bond_slaves(self, bond_port):
+        """
+        Get all the slaves of the bonding device which you choose.
+        """
+        try:
+            return self.get_info_from_bond_config(
+                "Slaves \(\d\): \[", "\d*( \d*)*", bond_port
+            )
+        except Exception as e:
+            return self.get_info_from_bond_config("Slaves: \[", "\d*( \d*)*", bond_port)
+
+    def get_bond_active_slaves(self, bond_port):
+        """
+        Get the active slaves of the bonding device which you choose.
+        """
+        try:
+            return self.get_info_from_bond_config(
+                "Active Slaves \(\d\): \[", "\d*( \d*)*", bond_port
+            )
+        except Exception as e:
+            return self.get_info_from_bond_config(
+                "Acitve Slaves: \[", "\d*( \d*)*", bond_port
+            )
+
+    def get_bond_primary(self, bond_port):
+        """
+        Get the primary slave of the bonding device which you choose.
+        """
+        return self.get_info_from_bond_config("Current Primary: \[", "\d*", bond_port)
+
+    def create_bonded_device(self, mode="", socket=0, verify_detail=False):
+        """
+        Create a bonding device with the parameters you specified.
+        """
+        p = r"\w+\((\d+)\)"
+        mode_id = int(re.match(p, mode).group(1))
+        out = self.dut.send_expect(
+            "create bonded device %s %d" % (mode_id, socket), "testpmd> "
+        )
+        self.verify(
+            "Created new bonded device" in out,
+            "Create bonded device on mode [%s] socket [%d] failed" % (mode, socket),
+        )
+        bond_port = self.get_value_from_str(
+            "Created new bonded device net_bonding_testpmd_[\d] on \(port ", "\d+", out
+        )
+        bond_port = int(bond_port)
+
+        if verify_detail:
+            out = self.dut.send_expect(
+                "show bonding config %d" % bond_port, "testpmd> "
+            )
+            self.verify(
+                "Bonding mode: %s" % mode in out,
+                "Bonding mode display error when create bonded device",
+            )
+            self.verify(
+                "Slaves: []" in out, "Slaves display error when create bonded device"
+            )
+            self.verify(
+                "Active Slaves: []" in out,
+                "Active Slaves display error when create bonded device",
+            )
+            self.verify(
+                "Primary: []" not in out,
+                "Primary display error when create bonded device",
+            )
+
+            out = self.dut.send_expect("show port info %d" % bond_port, "testpmd> ")
+            self.verify(
+                "Connect to socket: %d" % socket in out,
+                "Bonding port connect socket error",
+            )
+            self.verify(
+                "Link status: down" in out, "Bonding port default link status error"
+            )
+            self.verify(
+                "Link speed: None" in out, "Bonding port default link speed error"
+            )
+
+        return bond_port
+
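The mode constants defined at the top of the file embed the numeric testpmd mode id in parentheses; create_bonded_device() extracts it with a regular expression before issuing the CLI command. A standalone sketch of that parsing step:

```python
import re

# Mode constant in the same format as the module-level definitions.
MODE_ACTIVE_BACKUP = "ACTIVE_BACKUP(1)"

# Extract the numeric mode id and build the testpmd command for socket 0.
mode_id = int(re.match(r"\w+\((\d+)\)", MODE_ACTIVE_BACKUP).group(1))
command = "create bonded device %d %d" % (mode_id, 0)
```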
+    def start_port(self, port):
+        """
+        Start a port which the testpmd can see.
+        """
+        self.pmdout.execute_cmd("port start %s" % str(port))
+
+    def add_slave_to_bonding_device(self, bond_port, invert_verify=False, *slave_port):
+        """
+        Add the ports into the bonding device as slaves.
+        """
+        if len(slave_port) <= 0:
+            utils.RED("No ports exist when adding slaves to the bonded device")
+        for slave_id in slave_port:
+            self.pmdout.execute_cmd("add bonding slave %d %d" % (slave_id, bond_port))
+            slaves = self.get_info_from_bond_config(
+                "Slaves \(\d\): \[", "\d*( \d*)*", bond_port
+            )
+            if not invert_verify:
+                self.verify(str(slave_id) in slaves, "Add port as bonding slave failed")
+            else:
+                self.verify(
+                    str(slave_id) not in slaves,
+                    "Port was added as a bonding slave, but it should have failed",
+                )
+
+    def remove_slave_from_bonding_device(
+        self, bond_port, invert_verify=False, *slave_port
+    ):
+        """
+        Remove the specified slave port from the bonding device.
+        """
+        if len(slave_port) <= 0:
+            utils.RED("No ports exist when removing slaves from the bonded device")
+        for slave_id in slave_port:
+            self.dut.send_expect(
+                "remove bonding slave %d %d" % (int(slave_id), bond_port), "testpmd> "
+            )
+            out = self.get_info_from_bond_config("Slaves: \[", "\d*( \d*)*", bond_port)
+            if not invert_verify:
+                self.verify(
+                    str(slave_id) not in out, "Failed to remove slave from the bonding device"
+                )
+            else:
+                self.verify(
+                    str(slave_id) in out,
+                    "Slave was removed from the bonding device, but removal should have failed",
+                )
+
+    def remove_all_slaves(self, bond_port):
+        """
+        Remove all slaves of specified bound device.
+        """
+        all_slaves = self.get_bond_slaves(bond_port)
+        all_slaves = all_slaves.split()
+        if len(all_slaves) == 0:
+            pass
+        else:
+            self.remove_slave_from_bonding_device(bond_port, False, *all_slaves)
+
+    def set_primary_for_bonding_device(
+        self, bond_port, slave_port, invert_verify=False
+    ):
+        """
+        Set the primary slave for the bonding device.
+        """
+        self.dut.send_expect(
+            "set bonding primary %d %d" % (slave_port, bond_port), "testpmd> "
+        )
+        out = self.get_info_from_bond_config("Primary: \[", "\d*", bond_port)
+        if not invert_verify:
+            self.verify(str(slave_port) in out, "Set bonding primary port failed")
+        else:
+            self.verify(
+                str(slave_port) not in out,
+                "Set bonding primary port succeeded, but it should not",
+            )
+
+    def set_mode_for_bonding_device(self, bond_port, mode_id):
+        """
+        Set the mode for the bonding device.
+        """
+        self.dut.send_expect(
+            "set bonding mode %d %d" % (mode_id, bond_port), "testpmd> "
+        )
+        mode_value = self.get_bond_mode(bond_port)
+        self.verify(str(mode_id) in mode_value, "Set bonding mode failed")
+
+    def set_mac_for_bonding_device(self, bond_port, mac):
+        """
+        Set the MAC for the bonding device.
+        """
+        self.dut.send_expect(
+            "set bonding mac_addr %s %s" % (bond_port, mac), "testpmd> "
+        )
+        new_mac = self.get_port_mac(bond_port)
+        self.verify(new_mac == mac, "Set bonding mac failed")
+
+    def set_balance_policy_for_bonding_device(self, bond_port, policy):
+        """
+        Set the balance transmit policy for the bonding device.
+        """
+        self.dut.send_expect(
+            "set bonding balance_xmit_policy %d %s" % (bond_port, policy), "testpmd> "
+        )
+        new_policy = self.get_bond_balance_policy(bond_port)
+        policy = "BALANCE_XMIT_POLICY_LAYER" + policy.lstrip("l")
+        self.verify(new_policy == policy, "Set bonding balance policy failed")
+
+    def send_default_packet_to_slave(
+        self, unbound_port, bond_port, pkt_count=100, **slaves
+    ):
+        """
+        Send packets to the slaves and count the slaves' RX packets
+        and the unbonded port's TX packets.
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** slaves:
+        ******* 'active'=[]
+        ******* 'inactive'=[]
+        """
+        summary = 0
+
+        # send to slave ports
+        pkt_orig = self.get_all_stats(unbound_port, "tx", bond_port, **slaves)
+        for slave in slaves["active"]:
+            temp_count = self.send_packet(
+                self.vf_ports[slave], False, FRAME_SIZE_64, pkt_count
+            )
+            summary += temp_count
+        for slave in slaves["inactive"]:
+            self.send_packet(
+                self.vf_ports[slave], False, FRAME_SIZE_64, pkt_count, True
+            )
+        time.sleep(1)
+        pkt_now = self.get_all_stats(unbound_port, "tx", bond_port, **slaves)
+
+        for key in pkt_now:
+            for num in [0, 1, 2]:
+                pkt_now[key][num] -= pkt_orig[key][num]
+
+        return pkt_now, summary
+
+    def send_customized_packet_to_slave(
+        self, unbound_port, bond_port, *pkt_info, **slaves
+    ):
+        """
+        Send packets to the slaves and count the slaves' RX packets
+        and the unbonded port's TX packets.
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** pkt_info: the first is necessary which will describe the packet,
+                      the second is optional which will describe the params of
+                      the function send_packet
+        *** slaves:
+        ******* 'active'=[]
+        ******* 'inactive'=[]
+        """
+        pkt_orig = {}
+        pkt_now = {}
+        temp_count = 0
+        summary = 0
+
+        pkt_info_len = len(pkt_info)
+        if pkt_info_len < 1:
+            self.verify(False, "pkt_info must contain at least one member!")
+
+        ether_ip = pkt_info[0]
+        if pkt_info_len > 1:
+            pkt_size = pkt_info[1].get("frame_size", FRAME_SIZE_64)
+            pkt_count = pkt_info[1].get("pkt_count", 1)
+            invert_verify = pkt_info[1].get("verify", False)
+        else:
+            pkt_size = FRAME_SIZE_64
+            pkt_count = 1
+            invert_verify = False
+
+        # send to slave ports
+        pkt_orig = self.get_all_stats(unbound_port, "tx", bond_port, **slaves)
+        for slave in slaves["active"]:
+            temp_count = self.send_packet(
+                self.vf_ports[slave],
+                False,
+                pkt_size,
+                pkt_count,
+                invert_verify,
+                **ether_ip,
+            )
+            summary += temp_count
+        for slave in slaves["inactive"]:
+            self.send_packet(
+                self.vf_ports[slave], False, FRAME_SIZE_64, pkt_count, True
+            )
+        pkt_now = self.get_all_stats(unbound_port, "tx", bond_port, **slaves)
+
+        for key in pkt_now:
+            for num in [0, 1, 2]:
+                pkt_now[key][num] -= pkt_orig[key][num]
+
+        return pkt_now, summary
+
+    def send_default_packet_to_unbound_port(
+        self, unbound_port, bond_port, pkt_count, **slaves
+    ):
+        """
+        Send packets to the unbound port and count the unbound port's RX packets
+        and the slaves' TX packets.
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** slaves:
+        ******* 'active':[]
+        ******* 'inactive':[]
+        """
+        pkt_orig = {}
+        pkt_now = {}
+        summary = 0
+
+        # send to unbonded device
+        pkt_orig = self.get_all_stats(unbound_port, "rx", bond_port, **slaves)
+        summary = self.send_packet(unbound_port, False, FRAME_SIZE_64, pkt_count)
+        pkt_now = self.get_all_stats(unbound_port, "rx", bond_port, **slaves)
+
+        for key in pkt_now:
+            for num in [0, 1, 2]:
+                pkt_now[key][num] -= pkt_orig[key][num]
+
+        return pkt_now, summary
+
+    def send_customized_packet_to_unbound_port(
+        self, unbound_port, bond_port, policy, vlan_tag=False, pkt_count=100, **slaves
+    ):
+        """
+        Verify that packets are transmitted correctly in XOR mode.
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** policy:'L2' , 'L23' or 'L34'
+        *** vlan_tag:False or True
+        *** slaves:
+        ******* 'active'=[]
+        ******* 'inactive'=[]
+        """
+        pkt_orig = {}
+        pkt_now = {}
+        summary = 0
+        temp_count = 0
+
+        # send to unbound_port
+        pkt_orig = self.get_all_stats(unbound_port, "rx", bond_port, **slaves)
+        query_type = "mac"
+        dest_mac = self.bond_inst.get_port_mac(self.vf_ports[unbound_port], query_type)
+        dest_ip = "10.239.129.88"
+        dest_port = 53
+
+        global D_MAC_IP_PORT
+        D_MAC_IP_PORT = [dest_mac, dest_ip, dest_port]
+
+        ether_ip = {}
+        ether = {}
+        ip = {}
+        udp = {}
+
+        ether["dest_mac"] = False
+        ip["dest_ip"] = dest_ip
+        udp["dest_port"] = 53
+        if vlan_tag:
+            dot1q = {}
+            dot1q["vlan"] = random.randint(1, 50)
+            ether_ip["dot1q"] = dot1q
+
+        ether_ip["ether"] = ether
+        ether_ip["ip"] = ip
+        ether_ip["udp"] = udp
+
+        global S_MAC_IP_PORT
+        source = S_MAC_IP_PORT
+
+        for src_mac, src_ip, src_port in source:
+            ether_ip["ether"]["src_mac"] = src_mac
+            ether_ip["ip"]["src_ip"] = src_ip
+            ether_ip["udp"]["src_port"] = src_port
+            temp_count = self.send_packet(
+                unbound_port, False, FRAME_SIZE_64, pkt_count, False, **ether_ip
+            )
+            summary += temp_count
+        pkt_now = self.get_all_stats(unbound_port, "rx", bond_port, **slaves)
+
+        for key in pkt_now:
+            for num in [0, 1, 2]:
+                pkt_now[key][num] -= pkt_orig[key][num]
+
+        return pkt_now, summary
+
+    #
+    # Test cases.
+    #
+    def set_up_all(self):
+        """
+        Run at the start of the test suite.
+        """
+        self.verify("bsdapp" not in self.target, "Bonding is not supported on FreeBSD")
+        self.frame_sizes = [64, 65, 128, 256, 512, 1024, 1280, 1518]
+
+        self.eth_head_size = 18
+        self.ip_head_size = 20
+        self.udp_header_size = 8
+        self.dut_ports = self.dut.get_ports()
+        self.verify(len(self.dut_ports) >= 4, "Insufficient ports")
+        self.dport_info0 = self.dut.ports_info[self.dut_ports[0]]
+        self.dport_ifaces0 = self.dport_info0["intf"]
+        self.dport_info1 = self.dut.ports_info[self.dut_ports[1]]
+        self.dport_ifaces1 = self.dport_info1["intf"]
+        tester_port0 = self.tester.get_local_port(self.dut_ports[0])
+        self.tport_iface0 = self.tester.get_interface(tester_port0)
+        tester_port1 = self.tester.get_local_port(self.dut_ports[1])
+        self.tport_iface1 = self.tester.get_interface(tester_port1)
+        self.flag = "link-down-on-close"
+        self.default_stats = self.tester.get_priv_flags_state(
+            self.tport_iface0, self.flag
+        )
+        if self.default_stats:
+            for port in self.dut_ports:
+                tester_port = self.tester.get_local_port(port)
+                tport_iface = self.tester.get_interface(tester_port)
+                self.tester.send_expect(
+                    "ethtool --set-priv-flags %s %s on" % (tport_iface, self.flag), "# "
+                )
+        self.create_vfs(pfs_id=self.dut_ports, vf_num=1)
+        self.vf_ports = list(range(len(self.vfs_pci)))
+        self.pmdout = PmdOutput(self.dut)
+
+        self.tester_bond = "bond0"
+        # initialize bonding common methods name
+        config = {
+            "parent": self,
+            "pkt_name": "udp",
+            "pkt_size": FRAME_SIZE_64,
+            "src_mac": "52:00:00:00:00:03",
+            "src_ip": "10.239.129.65",
+            "src_port": 61,
+            "dst_ip": "10.239.129.88",
+            "dst_port": 53,
+        }
+        self.bond_inst = bonding.PmdBonding(**config)
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        if self.running_case in ["test_bound_promisc_opt", "test_tlb_basic"]:
+            self.dut.send_expect(
+                "ip link set %s vf 0 trust on" % (self.dport_ifaces0), "# "
+            )
+        self.pmdout.start_testpmd(
+            cores="1S/4C/1T",
+            ports=self.vfs_pci,
+        )
+
+    def create_vfs(self, pfs_id, vf_num):
+        self.sriov_vfs_port = []
+        self.vfs_pci = []
+        self.dut.bind_interfaces_linux(self.kdriver)
+        pfs_id = pfs_id if isinstance(pfs_id, list) else [pfs_id]
+        for pf_id in pfs_id:
+            self.dut.generate_sriov_vfs_by_port(pf_id, vf_num)
+            self.sriov_vfs_port += self.dut.ports_info[self.dut_ports[pf_id]][
+                "vfs_port"
+            ]
+            dport_iface = self.dut.ports_info[self.dut_ports[pf_id]]["intf"]
+            self.dut.send_expect(
+                "ip link set %s vf 0 spoofchk off" % (dport_iface), "# "
+            )
+        for vf in self.sriov_vfs_port:
+            self.vfs_pci.append(vf.pci)
+        try:
+            for port in self.sriov_vfs_port:
+                port.bind_driver(self.drivername)
+
+        except Exception as e:
+            self.dut.destroy_all_sriov_vfs()
+            raise Exception(e)
+
+    def verify_bound_basic_opt(self, mode_set):
+        """
+        Do some basic operations on bonded devices and slaves,
+        such as adding, removing, setting the primary or setting the mode.
+        """
+        p = r"\w+\((\d+)\)"
+        mode_id = int(re.match(p, mode_set).group(1))
+        bond_port_0 = self.create_bonded_device(mode_set, SOCKET_0, True)
+        self.add_slave_to_bonding_device(bond_port_0, False, self.vf_ports[1])
+
+        mode_value = self.get_bond_mode(bond_port_0)
+        self.verify("%s" % mode_set in mode_value, "Setting bonding mode error")
+
+        bond_port_1 = self.create_bonded_device(mode_set, SOCKET_0)
+        self.add_slave_to_bonding_device(bond_port_0, False, self.vf_ports[0])
+        self.add_slave_to_bonding_device(bond_port_1, True, self.vf_ports[0])
+
+        OTHER_MODE = mode_id + 1 if not mode_id else mode_id - 1
+        self.set_mode_for_bonding_device(bond_port_0, OTHER_MODE)
+        self.set_mode_for_bonding_device(bond_port_0, mode_id)
+
+        self.add_slave_to_bonding_device(bond_port_0, False, self.vf_ports[2])
+        time.sleep(3)
+        self.set_primary_for_bonding_device(bond_port_0, self.vf_ports[2])
+
+        self.remove_slave_from_bonding_device(bond_port_0, False, self.vf_ports[2])
+        primary_now = self.get_bond_primary(bond_port_0)
+        self.verify(
+            int(primary_now) == self.vf_ports[1],
+            "Reset primary slave failed after removing primary slave",
+        )
+
+        for bond_port in [bond_port_0, bond_port_1]:
+            self.remove_all_slaves(bond_port)
+
+        self.dut.send_expect("quit", "# ")
+        self.pmdout.start_testpmd(
+            cores="1S/4C/1T",
+            ports=self.vfs_pci,
+        )
+
+    def verify_bound_mac_opt(self, mode_set):
+        """
+        Create a bonded device, add slaves one by one,
+        and verify that the bonded device's MAC behavior varies with the mode.
+        """
+        mac_address_0_orig = self.get_port_mac(self.vf_ports[0])
+        mac_address_1_orig = self.get_port_mac(self.vf_ports[1])
+        mac_address_2_orig = self.get_port_mac(self.vf_ports[2])
+        mac_address_3_orig = self.get_port_mac(self.vf_ports[3])
+
+        bond_port = self.create_bonded_device(mode_set, SOCKET_1)
+        self.add_slave_to_bonding_device(bond_port, False, self.vf_ports[1])
+
+        mac_address_bond_orig = self.get_port_mac(bond_port)
+        self.verify(
+            mac_address_1_orig == mac_address_bond_orig,
+            "Bonded device MAC address is not the same as the first slave's MAC",
+        )
+
+        self.add_slave_to_bonding_device(bond_port, False, self.vf_ports[2])
+        mac_address_2_now = self.get_port_mac(self.vf_ports[2])
+        mac_address_bond_now = self.get_port_mac(bond_port)
+        if mode_set in [MODE_ROUND_ROBIN, MODE_XOR_BALANCE, MODE_BROADCAST]:
+            self.verify(
+                mac_address_1_orig == mac_address_bond_now
+                and mac_address_bond_now == mac_address_2_now,
+                "Not all slave MAC addresses are the same as the bonded device's in mode %s"
+                % mode_set,
+            )
+        else:
+            self.verify(
+                mac_address_1_orig == mac_address_bond_now
+                and mac_address_bond_now != mac_address_2_now,
+                "Slave MAC addresses should not all be the same in mode %s" % mode_set,
+            )
+
+        new_mac = "00:11:22:00:33:44"
+        self.set_mac_for_bonding_device(bond_port, new_mac)
+        self.start_port(bond_port)
+        mac_address_1_now = self.get_port_mac(self.vf_ports[1])
+        mac_address_2_now = self.get_port_mac(self.vf_ports[2])
+        mac_address_bond_now = self.get_port_mac(bond_port)
+        if mode_set in [MODE_ROUND_ROBIN, MODE_XOR_BALANCE, MODE_BROADCAST]:
+            self.verify(
+                mac_address_1_now
+                == mac_address_2_now
+                == mac_address_bond_now
+                == new_mac,
+                "Failed to set the MAC for the bonded device in mode %s" % mode_set,
+            )
+        elif mode_set == MODE_LACP:
+            self.verify(
+                mac_address_bond_now == new_mac
+                and mac_address_1_now != new_mac
+                and mac_address_2_now != new_mac
+                and mac_address_1_now != mac_address_2_now,
+                "Failed to set the MAC for the bonded device in mode %s" % mode_set,
+            )
+        elif mode_set in [MODE_ACTIVE_BACKUP, MODE_TLB_BALANCE]:
+            self.verify(
+                mac_address_bond_now == new_mac
+                and mac_address_1_now == new_mac
+                and mac_address_bond_now != mac_address_2_now,
+                "Failed to set the MAC for the bonded device in mode %s" % mode_set,
+            )
+
+        self.set_primary_for_bonding_device(bond_port, self.vf_ports[2], False)
+        mac_address_1_now = self.get_port_mac(self.vf_ports[1])
+        mac_address_2_now = self.get_port_mac(self.vf_ports[2])
+        mac_address_bond_now = self.get_port_mac(bond_port)
+        self.verify(
+            mac_address_bond_now == new_mac,
+            "Bonded device MAC changed when setting the primary slave",
+        )
+
+        mac_address_1_orig = mac_address_1_now
+        self.remove_slave_from_bonding_device(bond_port, False, self.vf_ports[2])
+        mac_address_2_now = self.get_port_mac(self.vf_ports[2])
+        self.verify(
+            mac_address_2_now == mac_address_2_orig,
+            "Slave MAC did not revert to its original address after removal",
+        )
+
+        mac_address_1_now = self.get_port_mac(self.vf_ports[1])
+        mac_address_bond_now = self.get_port_mac(bond_port)
+        self.verify(
+            mac_address_bond_now == new_mac and mac_address_1_now == mac_address_1_orig,
+            "Bonding device or slave MAC changed after removing the primary slave",
+        )
+
+        self.remove_all_slaves(bond_port)
+        self.dut.send_expect("quit", "# ")
+        self.pmdout.start_testpmd(
+            cores="1S/4C/1T",
+            ports=self.vfs_pci,
+        )
+
+    def verify_bound_promisc_opt(self, mode_set):
+        """
+        Set promiscuous mode on the bonded device, then verify that the bonded
+        device and all slaves behave as expected for the given mode.
+        """
+        unbound_port = self.vf_ports[3]
+        bond_port = self.create_bonded_device(mode_set, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (unbound_port, bond_port), "testpmd> "
+        )
+        self.start_port(bond_port)
+        self.dut.send_expect("start", "testpmd> ")
+
+        port_disabled_num = 0
+        testpmd_all_ports = self.vf_ports
+        testpmd_all_ports.append(bond_port)
+        for port_id in testpmd_all_ports:
+            value = self.get_detail_from_port_info(
+                "Promiscuous mode: ", "enabled", port_id
+            )
+            if not value:
+                port_disabled_num += 1
+        self.verify(
+            port_disabled_num == 0,
+            "Not all slaves of the bonded device have promiscuous mode enabled by default.",
+        )
+
+        ether_ip = {}
+        ether = {}
+        ether["dest_mac"] = "00:11:22:33:44:55"
+        ether_ip["ether"] = ether
+
+        send_param = {}
+        pkt_count = 1
+        send_param["pkt_count"] = pkt_count
+        pkt_info = [ether_ip, send_param]
+
+        slaves = {}
+        slaves["active"] = [self.vf_ports[0]]
+        slaves["inactive"] = []
+        curr_primary = self.vf_ports[0]
+
+        pkt_now, summary = self.send_customized_packet_to_slave(
+            unbound_port, bond_port, *pkt_info, **slaves
+        )
+        if mode_set == MODE_LACP:
+            do_transmit = False
+            pkt_size = 0
+            if pkt_now[unbound_port][0]:
+                do_transmit = True
+                pkt_size = pkt_now[unbound_port][2] / pkt_now[unbound_port][0]
+            self.verify(
+                do_transmit and pkt_size != LACP_MESSAGE_SIZE,
+                "Data not received by slave or bonding device when promiscuous enabled",
+            )
+        else:
+            self.verify(
+                pkt_now[self.vf_ports[0]][0] == pkt_now[bond_port][0]
+                and pkt_now[bond_port][0] == pkt_count,
+                "Data not received by slave or bonding device when promiscuous enabled",
+            )
+
+        self.dut.send_expect("set promisc %s off" % bond_port, "testpmd> ")
+        port_disabled_num = 0
+        testpmd_all_ports = [
+            self.vf_ports[0],
+            self.vf_ports[1],
+            self.vf_ports[2],
+            bond_port,
+        ]
+        for port_id in testpmd_all_ports:
+            value = self.get_detail_from_port_info(
+                "Promiscuous mode: ", "disabled", port_id
+            )
+            if value == "disabled":
+                port_disabled_num += 1
+        if mode_set in [MODE_ROUND_ROBIN, MODE_XOR_BALANCE, MODE_BROADCAST]:
+            self.verify(
+                port_disabled_num == 4,
+                "Not all slaves of the bonded device have promiscuous mode disabled in mode %s."
+                % mode_set,
+            )
+        elif mode_set == MODE_LACP:
+            self.verify(
+                port_disabled_num == 1,
+                "Expected only the bonded device to have promiscuous mode "
+                "disabled in mode %s" % mode_set,
+            )
+        else:
+            self.verify(
+                port_disabled_num == 2,
+                "Expected only the bonded device and the primary slave to have "
+                "promiscuous mode disabled in mode %s." % mode_set,
+            )
+            curr_primary = int(self.get_bond_primary(bond_port))
+            slaves["active"] = [curr_primary]
+
+        if mode_set != MODE_LACP:
+            send_param["verify"] = True
+        pkt_now, summary = self.send_customized_packet_to_slave(
+            unbound_port, bond_port, *pkt_info, **slaves
+        )
+        if mode_set == MODE_LACP:
+            do_transmit = False
+            pkt_size = 0
+            if pkt_now[unbound_port][0]:
+                do_transmit = True
+                pkt_size = pkt_now[unbound_port][2] / pkt_now[unbound_port][0]
+            self.verify(
+                not do_transmit or pkt_size == LACP_MESSAGE_SIZE,
+                "Data received by slave or bonding device when promiscuous disabled",
+            )
+        else:
+            self.verify(
+                pkt_now[curr_primary][0] == 0 and pkt_now[bond_port][0] == 0,
+                "Data received by slave or bonding device when promiscuous disabled",
+            )
+
+        pkt_now, summary = self.send_default_packet_to_slave(
+            self.vf_ports[3], bond_port, pkt_count, **slaves
+        )
+        if mode_set == MODE_LACP:
+            do_transmit = False
+            pkt_size = 0
+            if pkt_now[unbound_port][0]:
+                do_transmit = True
+                pkt_size = pkt_now[unbound_port][2] / pkt_now[unbound_port][0]
+            self.verify(
+                not do_transmit or pkt_size != LACP_MESSAGE_SIZE,
+                "RX or TX packet number not correct when promiscuous disabled",
+            )
+        else:
+            self.verify(
+                pkt_now[curr_primary][0] == pkt_now[bond_port][0]
+                and pkt_now[self.vf_ports[3]][0] == pkt_now[bond_port][0]
+                and pkt_now[bond_port][0] == pkt_count,
+                "RX or TX packet number not correct when promiscuous disabled",
+            )
+
+        # Stop fwd threads first before removing slaves from bond to avoid
+        # races and crashes
+        self.dut.send_expect("stop", "testpmd> ")
+        self.remove_all_slaves(bond_port)
+        self.dut.send_expect("quit", "# ")
+
+    def test_bound_basic_opt(self):
+        """
+        Test Case1: Basic bonding--Create bonded devices and slaves
+        """
+        self.verify_bound_basic_opt(MODE_ACTIVE_BACKUP)
+
+    def test_bound_mac_opt(self):
+        """
+        Test Case2: Basic bonding--MAC Address Test
+        """
+        self.verify_bound_mac_opt(MODE_BROADCAST)
+
+    def test_bound_promisc_opt(self):
+        """
+        Test Case3: Basic bonding--Device Promiscuous Mode Test
+        """
+        self.verify_bound_promisc_opt(MODE_BROADCAST)
+
+    def admin_tester_port(self, local_port, status):
+        """
+        Bring the specified tester network interface port "up" or "down".
+        """
+        if self.tester.get_os_type() == "freebsd":
+            self.tester.admin_ports(local_port, status)
+        else:
+            eth = self.tester.get_interface(local_port)
+            self.tester.admin_ports_linux(eth, status)
+        time.sleep(10)
+
+    def verify_round_robin_rx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify that received packets are all correct in round-robin mode.
+            slaves:
+                'active' = []
+                'inactive' = []
+        """
+        pkt_count = 100
+        pkt_now = {}
+        pkt_now, summary = self.send_default_packet_to_slave(
+            unbound_port, bond_port, pkt_count=pkt_count, **slaves
+        )
+
+        self.verify(
+            pkt_now[unbound_port][0] == pkt_count * len(slaves["active"]),
+            "Unbonded port has a wrong TX packet count in mode 0",
+        )
+        self.verify(
+            pkt_now[bond_port][0] == pkt_count * len(slaves["active"]),
+            "Bonded port has a wrong RX packet count in mode 0",
+        )
+
+    def verify_round_robin_tx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify that transmitted packets are all correct in round-robin mode.
+            slaves:
+                'active' = []
+                'inactive' = []
+        """
+        pkt_count = 300
+        pkt_now = {}
+        pkt_now, summary = self.send_default_packet_to_unbound_port(
+            unbound_port, bond_port, pkt_count=pkt_count, **slaves
+        )
+
+        if len(slaves["active"]) == 0:
+            self.verify(
+                pkt_now[bond_port][0] == 0,
+                "Bonded port should not have TX packets in mode 0 when all slaves are down",
+            )
+        else:
+            self.verify(
+                pkt_now[bond_port][0] == pkt_count,
+                "Bonded port has a wrong TX packet count in mode 0",
+            )
+        for slave in slaves["active"]:
+            self.verify(
+                pkt_now[slave][0] == pkt_count // len(slaves["active"]),
+                "Active slave has a wrong TX packet count in mode 0",
+            )
+        for slave in slaves["inactive"]:
+            self.verify(
+                pkt_now[slave][0] == 0,
+                "Inactive slave has a wrong TX packet count in mode 0",
+            )
+
+    def test_round_robin_rx_tx(self):
+        """
+        Test Case4: Mode 0(Round Robin) TX/RX test
+        """
+        bond_port = self.create_bonded_device(MODE_ROUND_ROBIN, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+
+        slaves = {}
+        slaves["active"] = [self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]]
+        slaves["inactive"] = []
+        self.verify_round_robin_rx(self.vf_ports[3], bond_port, **slaves)
+        self.verify_round_robin_tx(self.vf_ports[3], bond_port, **slaves)
+
+    def test_round_robin_one_slave_down(self):
+        """
+        Test Case5: Mode 0(Round Robin) Bring one slave link down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        bond_port = self.create_bonded_device(MODE_ROUND_ROBIN, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+
+        stat = self.tester.get_port_status(
+            self.tester.get_local_port(self.dut_ports[0])
+        )
+        self.dut.send_expect("show bonding config %d" % bond_port, "testpmd> ")
+        self.dut.send_expect("show port info all", "testpmd> ")
+
+        try:
+            slaves = {}
+            slaves["active"] = [self.vf_ports[1], self.vf_ports[2]]
+            slaves["inactive"] = [self.vf_ports[0]]
+            self.verify_round_robin_rx(self.vf_ports[3], bond_port, **slaves)
+            self.verify_round_robin_tx(self.vf_ports[3], bond_port, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+
+    def test_round_robin_all_slaves_down(self):
+        """
+        Test Case6: Mode 0(Round Robin) Bring all slave links down
+        """
+        bond_port = self.create_bonded_device(MODE_ROUND_ROBIN, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "down")
+
+        try:
+            slaves = {}
+            slaves["active"] = []
+            slaves["inactive"] = [
+                self.vf_ports[0],
+                self.vf_ports[1],
+                self.vf_ports[2],
+            ]
+            self.verify_round_robin_rx(self.vf_ports[3], bond_port, **slaves)
+            self.verify_round_robin_tx(self.vf_ports[3], bond_port, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "up")
+
+    def get_all_stats(self, unbound_port, rx_tx, bond_port, **slaves):
+        """
+        Get the stats of every port that testpmd can discover.
+        Parameters:
+        *** unbound_port: pmd port id
+        *** rx_tx: stat direction of the unbound port, 'rx' or 'tx'
+        *** bond_port: bonding port
+        *** slaves:
+        ******** 'active' = []
+        ******** 'inactive' = []
+        """
+        pkt_now = {}
+
+        if rx_tx == "rx":
+            bond_stat = "tx"
+        else:
+            bond_stat = "rx"
+
+        pkt_now[unbound_port] = [int(_) for _ in self.get_stats(unbound_port, rx_tx)]
+        pkt_now[bond_port] = [int(_) for _ in self.get_stats(bond_port, bond_stat)]
+        for slave in slaves["active"]:
+            pkt_now[slave] = [int(_) for _ in self.get_stats(slave, bond_stat)]
+        for slave in slaves["inactive"]:
+            pkt_now[slave] = [int(_) for _ in self.get_stats(slave, bond_stat)]
+
+        return pkt_now
+
+    def verify_active_backup_rx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify the RX packets are all correct in the active-backup mode.
+        Parameters:
+        *** slaves:
+        ******* 'active' = []
+        ******* 'inactive' = []
+        """
+        pkt_count = 100
+        pkt_now = {}
+
+        slave_num = len(slaves["active"])
+        active_flag = 1 if slave_num else 0
+
+        pkt_now, summary = self.send_default_packet_to_slave(
+            unbound_port, bond_port, pkt_count=pkt_count, **slaves
+        )
+
+        self.verify(
+            pkt_now[bond_port][0] == pkt_count * slave_num,
+            "Wrong RX packet count on the bonded port in mode 1",
+        )
+        self.verify(
+            pkt_now[unbound_port][0] == pkt_count * active_flag,
+            "Wrong TX packet count on the unbound port in mode 1",
+        )
+        for slave in slaves["inactive"]:
+            self.verify(
+                pkt_now[slave][0] == 0,
+                "Wrong RX packet count on an inactive port in mode 1",
+            )
+        for slave in slaves["active"]:
+            self.verify(
+                pkt_now[slave][0] == pkt_count,
+                "Wrong RX packet count on an active port in mode 1",
+            )
+
+    def verify_active_backup_tx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify the TX packets are all correct in the active-backup mode.
+        Parameters:
+        *** slaves:
+        ******* 'active' = []
+        ******* 'inactive' = []
+        """
+        pkt_count = 100
+        pkt_now = {}
+
+        if len(slaves["active"]) != 0:
+            primary_port = slaves["active"][0]
+            active_flag = 1
+        else:
+            primary_port = None
+            active_flag = 0
+
+        pkt_now, summary = self.send_default_packet_to_unbound_port(
+            unbound_port, bond_port, pkt_count=pkt_count, **slaves
+        )
+
+        self.verify(
+            pkt_now[bond_port][0] == pkt_count * active_flag,
+            "Wrong RX packet count on the bonded port in mode 1",
+        )
+        if active_flag == 1:
+            self.verify(
+                pkt_now[primary_port][0] == pkt_count,
+                "Wrong TX packet count on the primary port in mode 1",
+            )
+        for slave in slaves["inactive"]:
+            self.verify(
+                pkt_now[slave][0] == 0,
+                "Wrong TX packet count on an inactive port in mode 1",
+            )
+        for slave in [slave for slave in slaves["active"] if slave != primary_port]:
+            self.verify(
+                pkt_now[slave][0] == 0,
+                "Wrong TX packet count on a backup port in mode 1",
+            )
+
+    def test_active_backup_rx_tx(self):
+        """
+        Test Case7: Mode 1(Active Backup) TX/RX Test
+        """
+        bond_port = self.create_bonded_device(MODE_ACTIVE_BACKUP, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        time.sleep(5)
+
+        slaves = {}
+        slaves["active"] = [self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]]
+        slaves["inactive"] = []
+        self.verify_active_backup_rx(self.vf_ports[3], bond_port, **slaves)
+        self.verify_active_backup_tx(self.vf_ports[3], bond_port, **slaves)
+
+    def test_active_backup_change_primary(self):
+        """
+        Test Case8: Mode 1(Active Backup) Change active slave, RX/TX test
+        """
+        bond_port = self.create_bonded_device(MODE_ACTIVE_BACKUP, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.set_primary_for_bonding_device(bond_port, self.vf_ports[1])
+        time.sleep(5)
+
+        slaves = {}
+        slaves["active"] = [self.vf_ports[1], self.vf_ports[0], self.vf_ports[2]]
+        slaves["inactive"] = []
+        self.verify_active_backup_rx(self.vf_ports[3], bond_port, **slaves)
+        self.verify_active_backup_tx(self.vf_ports[3], bond_port, **slaves)
+
+    def test_active_backup_one_slave_down(self):
+        """
+        Test Case9: Mode 1(Active Backup) Link up/down active eth dev
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        bond_port = self.create_bonded_device(MODE_ACTIVE_BACKUP, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+        primary_port = int(self.get_bond_primary(bond_port))
+
+        try:
+            slaves = {}
+            active_slaves = [self.vf_ports[1], self.vf_ports[2]]
+            active_slaves.remove(primary_port)
+            slaves["active"] = [primary_port]
+            slaves["active"].extend(active_slaves)
+            slaves["inactive"] = [self.vf_ports[0]]
+            self.verify_active_backup_rx(self.vf_ports[3], bond_port, **slaves)
+            self.verify_active_backup_tx(self.vf_ports[3], bond_port, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+
+    def test_active_backup_all_slaves_down(self):
+        """
+        Test Case10: Mode 1(Active Backup) Bring all slave links down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        bond_port = self.create_bonded_device(MODE_ACTIVE_BACKUP, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "down")
+
+        try:
+            slaves = {}
+            slaves["active"] = []
+            slaves["inactive"] = [
+                self.vf_ports[0],
+                self.vf_ports[1],
+                self.vf_ports[2],
+            ]
+            self.verify_active_backup_rx(self.vf_ports[3], bond_port, **slaves)
+            self.verify_active_backup_tx(self.vf_ports[3], bond_port, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "up")
+
+    def translate_mac_str_into_int(self, mac_str):
+        """
+        Translate a MAC address string into an integer.
+        """
+        return int(mac_str.replace(":", ""), 16)
+
+    def mac_hash(self, dest_mac, src_mac):
+        """
+        Generate the hash value from the source and destination MAC addresses.
+        """
+        src_xor_dest = self.translate_mac_str_into_int(
+            dest_mac
+        ) ^ self.translate_mac_str_into_int(src_mac)
+        # Fold the 48-bit XOR result into three 16-bit words and XOR them.
+        return htons(
+            (src_xor_dest >> 32)
+            ^ ((src_xor_dest >> 16) & 0xFFFF)
+            ^ (src_xor_dest & 0xFFFF)
+        )
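The L2 folding above can be exercised standalone. A minimal sketch (the helper names `mac_to_int` and `l2_hash` are illustrative, not part of the suite) of the same policy: XOR the two 48-bit MACs, fold into three 16-bit words, and byte-swap with `htons`:

```python
from socket import htons

def mac_to_int(mac_str):
    # "00:11:22:33:44:55" -> 0x001122334455
    return int(mac_str.replace(":", ""), 16)

def l2_hash(dest_mac, src_mac):
    x = mac_to_int(dest_mac) ^ mac_to_int(src_mac)
    # Fold the 48-bit XOR into three 16-bit words and XOR them together,
    # the same computation mac_hash() performs with its xor_value_* terms.
    return htons((x >> 32) ^ ((x >> 16) & 0xFFFF) ^ (x & 0xFFFF))
```

Identical MACs hash to zero, and the hash is symmetric in its arguments, which is why reversing a flow lands it on the same slave.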
+
+    def translate_ip_str_into_int(self, ip_str):
+        """
+        Translate a dotted-quad IPv4 address string into an integer.
+        """
+        ip_int = 0
+        for num, ip_part in enumerate(reversed(ip_str.split("."))):
+            ip_int += int(ip_part) << (num * 8)
+        return ip_int
+
+    def ipv4_hash(self, dest_ip, src_ip):
+        """
+        Generate the hash value with the source and destination IP.
+        """
+        dest_ip_int = self.translate_ip_str_into_int(dest_ip)
+        src_ip_int = self.translate_ip_str_into_int(src_ip)
+        return htonl(dest_ip_int ^ src_ip_int)
+
+    def udp_hash(self, dest_port, src_port):
+        """
+        Generate the hash value with the source and destination port.
+        """
+        return htons(dest_port ^ src_port)
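Taken together, `ipv4_hash()` and `udp_hash()` make up the L34 policy. A self-contained sketch (helper names are illustrative) of how a flow's addresses and ports select an active slave under balance-XOR, following the same fold-and-modulo step used in `policy_and_slave_hash()`:

```python
import socket
import struct

def ip_to_int(ip_str):
    # Big-endian integer form of a dotted-quad address,
    # equivalent to translate_ip_str_into_int() above.
    return struct.unpack("!I", socket.inet_aton(ip_str))[0]

def l34_hash(src_ip, dst_ip, src_port, dst_port):
    h = socket.htonl(ip_to_int(dst_ip) ^ ip_to_int(src_ip))  # ipv4_hash()
    h ^= socket.htons(dst_port ^ src_port)                   # udp_hash()
    # Fold the upper bits down, as policy_and_slave_hash() does for L34.
    h ^= h >> 16
    h ^= h >> 8
    return h

def pick_slave(src_ip, dst_ip, src_port, dst_port, active_slaves):
    # A given flow always lands on the same slave while membership is stable.
    return active_slaves[
        l34_hash(src_ip, dst_ip, src_port, dst_port) % len(active_slaves)
    ]
```

Because every term is an XOR, the hash is symmetric: the reverse direction of a flow hashes to the same bucket.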
+
+    def policy_and_slave_hash(self, policy, **slaves):
+        """
+        Generate the hash value by the policy and active slave number.
+        *** policy:'L2' , 'L23' or 'L34'
+        *** slaves:
+        ******* 'active'=[]
+        ******* 'inactive'=[]
+        """
+        global S_MAC_IP_PORT
+        source = S_MAC_IP_PORT
+
+        global D_MAC_IP_PORT
+        dest_mac = D_MAC_IP_PORT[0]
+        dest_ip = D_MAC_IP_PORT[1]
+        dest_port = D_MAC_IP_PORT[2]
+
+        hash_values = []
+        if len(slaves["active"]) != 0:
+            for src_mac, src_ip, src_port in source:
+                if policy == "L2":
+                    hash_value = self.mac_hash(dest_mac, src_mac)
+                elif policy == "L23":
+                    hash_value = self.mac_hash(dest_mac, src_mac) ^ self.ipv4_hash(
+                        dest_ip, src_ip
+                    )
+                else:
+                    hash_value = self.ipv4_hash(dest_ip, src_ip) ^ self.udp_hash(
+                        dest_port, src_port
+                    )
+
+                if policy in ("L23", "L34"):
+                    hash_value ^= hash_value >> 16
+                hash_value ^= hash_value >> 8
+                hash_value = hash_value % len(slaves["active"])
+                hash_values.append(hash_value)
+
+        return hash_values
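Given the per-flow bucket indices produced above, the expected TX distribution that the XOR tests check is simply `pkt_count` times the number of flows landing in each slave's bucket. A hedged sketch (the function name is illustrative; it assumes `active_slaves` is listed in the bonded device's active-slave order, as reported by `show bonding config`):

```python
from collections import Counter

def expected_tx_counts(hash_values, active_slaves, pkt_count):
    # hash_values: one bucket index per flow; bucket i maps to active_slaves[i].
    flows_per_bucket = Counter(hash_values)
    return {
        slave: pkt_count * flows_per_bucket[i]
        for i, slave in enumerate(active_slaves)
    }
```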
+
+    def slave_map_hash(self, port, order_ports):
+        """
+        Find the hash bucket (index in the bonded device's active-slave order)
+        for the given slave port id.
+        """
+        if len(order_ports) == 0:
+            return None
+        else:
+            order_ports = order_ports.split()
+            return order_ports.index(str(port))
+
+    def verify_xor_rx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify that packets are received correctly in XOR mode.
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** slaves:
+        ******* 'active'=[]
+        ******* 'inactive'=[]
+        """
+        pkt_count = 100
+        pkt_now = {}
+
+        pkt_now, summary = self.send_default_packet_to_slave(
+            unbound_port, bond_port, pkt_count=pkt_count, **slaves
+        )
+
+        for slave in slaves["active"]:
+            self.verify(
+                pkt_now[slave][0] == pkt_count,
+                "Active slave has a wrong RX packet count in XOR mode",
+            )
+        for slave in slaves["inactive"]:
+            self.verify(
+                pkt_now[slave][0] == 0,
+                "Inactive slave has a wrong RX packet count in XOR mode",
+            )
+        self.verify(
+            pkt_now[unbound_port][0] == pkt_count * len(slaves["active"]),
+            "Unbonded port has a wrong TX packet count in XOR mode",
+        )
+
+    def verify_xor_tx(self, unbound_port, bond_port, policy, vlan_tag=False, **slaves):
+        """
+        Verify that packets are transmitted correctly in XOR mode.
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** policy:'L2' , 'L23' or 'L34'
+        *** vlan_tag:False or True
+        *** slaves:
+        ******* 'active'=[]
+        ******* 'inactive'=[]
+        """
+        pkt_count = 100
+        pkt_now = {}
+
+        pkt_now, summary = self.send_customized_packet_to_unbound_port(
+            unbound_port,
+            bond_port,
+            policy,
+            vlan_tag=vlan_tag,
+            pkt_count=pkt_count,
+            **slaves,
+        )
+
+        hash_values = self.policy_and_slave_hash(policy, **slaves)
+
+        order_ports = self.get_bond_active_slaves(bond_port)
+        for slave in slaves["active"]:
+            slave_map_hash = self.slave_map_hash(slave, order_ports)
+            self.verify(
+                pkt_now[slave][0] == pkt_count * hash_values.count(slave_map_hash),
+                "XOR load balance transmit error on the link up port",
+            )
+        for slave in slaves["inactive"]:
+            self.verify(
+                pkt_now[slave][0] == 0,
+                "XOR load balance transmit error on the link down port",
+            )
+
+    def test_xor_tx(self):
+        """
+        Test Case11: Mode 2(Balance XOR) TX Load Balance test
+        """
+        bond_port = self.create_bonded_device(MODE_XOR_BALANCE, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+
+        slaves = {}
+        slaves["active"] = [self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]]
+        slaves["inactive"] = []
+
+        self.verify_xor_tx(self.vf_ports[3], bond_port, "L2", False, **slaves)
+
+    def test_xor_tx_one_slave_down(self):
+        """
+        Test Case12: Mode 2(Balance XOR) TX Load Balance Link down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        bond_port = self.create_bonded_device(MODE_XOR_BALANCE, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[2], self.vf_ports[1]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+
+        try:
+            slaves = {}
+            slaves["active"] = [self.vf_ports[1], self.vf_ports[2]]
+            slaves["inactive"] = [self.vf_ports[0]]
+
+            self.verify_xor_tx(self.vf_ports[3], bond_port, "L2", False, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+
+    def test_xor_tx_all_slaves_down(self):
+        """
+        Test Case13: Mode 2(Balance XOR) Bring all slave links down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        bond_port = self.create_bonded_device(MODE_XOR_BALANCE, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "down")
+
+        try:
+            slaves = {}
+            slaves["active"] = []
+            slaves["inactive"] = [
+                self.vf_ports[0],
+                self.vf_ports[1],
+                self.vf_ports[2],
+            ]
+
+            self.verify_xor_tx(self.vf_ports[3], bond_port, "L2", False, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "up")
+
+    def vlan_strip_and_filter(self, action="off", *ports):
+        """
+        Open or shutdown the vlan strip and filter option of specified port.
+        """
+        for port_id in ports:
+            self.dut.send_expect(
+                "vlan set strip %s %d" % (action, port_id), "testpmd> "
+            )
+            self.dut.send_expect(
+                "vlan set filter %s %d" % (action, port_id), "testpmd> "
+            )
+
+    def test_xor_l34_forward(self):
+        """
+        Test Case14: Mode 2(Balance XOR) Layer 3+4 forwarding
+        """
+        bond_port = self.create_bonded_device(MODE_XOR_BALANCE, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.set_balance_policy_for_bonding_device(bond_port, "l34")
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+
+        slaves = {}
+        slaves["active"] = [self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]]
+        slaves["inactive"] = []
+
+        self.verify_xor_tx(self.vf_ports[3], bond_port, "L34", False, **slaves)
+        self.vlan_strip_and_filter(
+            "off",
+            self.vf_ports[0],
+            self.vf_ports[1],
+            self.vf_ports[2],
+            self.vf_ports[3],
+            bond_port,
+        )
+        self.verify_xor_tx(self.vf_ports[3], bond_port, "L34", True, **slaves)
+
+    def test_xor_rx(self):
+        """
+        Test Case15: Mode 2(Balance XOR) RX test
+        """
+        bond_port = self.create_bonded_device(MODE_XOR_BALANCE, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+
+        slaves = {}
+        slaves["active"] = [self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]]
+        slaves["inactive"] = []
+
+        self.verify_xor_rx(self.vf_ports[3], bond_port, **slaves)
+
+    def verify_broadcast_rx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify that packets are received correctly in broadcast mode.
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** slaves:
+        ******* 'active':[]
+        ******* 'inactive':[]
+        """
+        pkt_count = 100
+        pkt_now = {}
+
+        pkt_now, summary = self.send_default_packet_to_slave(
+            unbound_port, bond_port, pkt_count=pkt_count, **slaves
+        )
+
+        for slave in slaves["active"]:
+            self.verify(
+                pkt_now[slave][0] == pkt_count, "Slave RX packet not correct in mode 3"
+            )
+        for slave in slaves["inactive"]:
+            self.verify(pkt_now[slave][0] == 0, "Slave RX packet not correct in mode 3")
+        self.verify(
+            pkt_now[unbound_port][0] == pkt_count * len(slaves["active"]),
+            "Unbonded port TX packet not correct in mode 3",
+        )
+        self.verify(
+            pkt_now[bond_port][0] == pkt_count * len(slaves["active"]),
+            "Bonded device RX packet not correct in mode 3",
+        )
+
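The broadcast-mode counter arithmetic behind these checks can be sanity-checked in isolation (hypothetical counts, mirroring the pkt_count used above):

```python
# In broadcast (mode 3) RX, each active slave receives its own copy of the
# burst, so the unbonded forwarding port transmits one burst per active
# slave and the bonded device counts the aggregate.
pkt_count = 100
active, inactive = [0, 1, 2], []

expected_slave_rx = {s: pkt_count for s in active}   # one burst per slave
expected_unbound_tx = pkt_count * len(active)        # all bursts forwarded
expected_bond_rx = pkt_count * len(active)           # aggregate on the bond
```

These are the same products the `self.verify` calls above compare against the testpmd port statistics.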
+    def verify_broadcast_tx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify that packets are transmitted correctly in broadcast mode.
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** slaves:
+        ******* 'active':[]
+        ******* 'inactive':[]
+        """
+        pkt_count = 100
+        pkt_now = {}
+
+        pkt_now, summary = self.send_default_packet_to_unbound_port(
+            unbound_port, bond_port, pkt_count=pkt_count, **slaves
+        )
+
+        for slave in slaves["active"]:
+            self.verify(
+                pkt_now[slave][0] == pkt_count, "Slave TX packet not correct in mode 3"
+            )
+        for slave in slaves["inactive"]:
+            self.verify(pkt_now[slave][0] == 0, "Slave TX packet not correct in mode 3")
+        self.verify(
+            pkt_now[unbound_port][0] == pkt_count,
+            "Unbonded port RX packet not correct in mode 3",
+        )
+        self.verify(
+            pkt_now[bond_port][0] == pkt_count * len(slaves["active"]),
+            "Bonded device TX packet not correct in mode 3",
+        )
+
+    def test_broadcast_rx_tx(self):
+        """
+        Test Case16: Mode 3(Broadcast) TX/RX Test
+        """
+        bond_port = self.create_bonded_device(MODE_BROADCAST, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+
+        slaves = {}
+        slaves["active"] = [self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]]
+        slaves["inactive"] = []
+
+        self.verify_broadcast_rx(self.vf_ports[3], bond_port, **slaves)
+        self.verify_broadcast_tx(self.vf_ports[3], bond_port, **slaves)
+
+    def test_broadcast_tx_one_slave_down(self):
+        """
+        Test Case17: Mode 3(Broadcast) Bring one slave link down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        bond_port = self.create_bonded_device(MODE_BROADCAST, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+
+        try:
+            slaves = {}
+            slaves["active"] = [self.vf_ports[1], self.vf_ports[2]]
+            slaves["inactive"] = [self.vf_ports[0]]
+
+            self.verify_broadcast_tx(self.vf_ports[3], bond_port, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+
+    def test_broadcast_tx_all_slaves_down(self):
+        """
+        Test Case18: Mode 3(Broadcast) Bring all slave links down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        bond_port = self.create_bonded_device(MODE_BROADCAST, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "down")
+
+        try:
+            slaves = {}
+            slaves["active"] = []
+            slaves["inactive"] = [
+                self.vf_ports[0],
+                self.vf_ports[1],
+                self.vf_ports[2],
+            ]
+
+            self.verify_broadcast_tx(self.vf_ports[3], bond_port, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "up")
+
+    def verify_tlb_rx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify that packets are received correctly in TLB (mode 5).
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** slaves:
+        ******* 'active':[]
+        ******* 'inactive':[]
+        """
+        pkt_count = 100
+        pkt_now = {}
+
+        slave_num = slaves["active"].__len__()
+        if slave_num != 0:
+            active_flag = 1
+        else:
+            active_flag = 0
+
+        pkt_now, summary = self.send_default_packet_to_slave(
+            unbound_port, bond_port, pkt_count=pkt_count, **slaves
+        )
+
+        self.verify(
+            pkt_now[unbound_port][0] == pkt_count * active_flag,
+            "Unbonded device has an incorrect TX packet count in TLB",
+        )
+        self.verify(
+            pkt_now[bond_port][0] == pkt_count * slave_num,
+            "Bonded device has an incorrect RX packet count in TLB",
+        )
+        for slave in slaves["inactive"]:
+            self.verify(
+                pkt_now[slave][0] == 0,
+                "Inactive slave has an incorrect RX packet count in TLB",
+            )
+        for slave in slaves["active"]:
+            self.verify(
+                pkt_now[slave][0] == pkt_count,
+                "Active slave has an incorrect RX packet count in TLB",
+            )
+
+    def verify_tlb_tx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify that packets are transmitted correctly in TLB (mode 5).
+        Parameters:
+        *** unbound_port: the unbonded port id
+        *** bond_port: the bonded device port id
+        *** slaves:
+        ******* 'active':[]
+        ******* 'inactive':[]
+        """
+        pkt_count = "MANY"
+
+        # send to unbonded device
+        pkt_now, summary = self.send_default_packet_to_unbound_port(
+            unbound_port, bond_port, pkt_count=pkt_count, **slaves
+        )
+
+        active_slaves = len(slaves["active"])
+        if active_slaves:
+            mean = float(summary) / float(active_slaves)
+            active_flag = 1
+        else:
+            active_flag = 0
+
+        for slave in slaves["active"]:
+            self.verify(
+                pkt_now[slave][0] > mean * 0.8 and pkt_now[slave][0] < mean * 1.2,
+                "Slave TX packet not correct in mode 5!",
+            )
+        for slave in slaves["inactive"]:
+            self.verify(
+                pkt_now[slave][0] == 0, "Slave TX packet not correct in mode 5!!"
+            )
+        self.verify(
+            pkt_now[unbound_port][0] == summary,
+            "Unbonded port RX packet not correct in TLB",
+        )
+        self.verify(
+            pkt_now[bond_port][0] == summary * active_flag,
+            "Bonded device TX packet not correct in TLB",
+        )
+
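The ±20% tolerance band used by verify_tlb_tx above can be sketched on its own (hypothetical per-slave counts):

```python
# TLB (mode 5) should spread TX traffic roughly evenly across the active
# slaves; each slave's count must fall within 20% of the per-slave mean.
def within_tlb_tolerance(per_slave_tx, total):
    mean = total / len(per_slave_tx)
    return all(mean * 0.8 < tx < mean * 1.2 for tx in per_slave_tx)

balanced = within_tlb_tolerance([90, 100, 110], 300)   # inside the band
skewed = within_tlb_tolerance([60, 100, 140], 300)     # outside the band
```

A loose band is used deliberately: TLB steers by load, so an exact equal split is not expected from a finite burst.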
+    def test_tlb_basic(self):
+        """
+        Test Case19: Mode 5(TLB) Base Test
+        """
+        self.verify_bound_basic_opt(MODE_TLB_BALANCE)
+        self.verify_bound_mac_opt(MODE_TLB_BALANCE)
+        self.verify_bound_promisc_opt(MODE_TLB_BALANCE)
+
+    def test_tlb_rx_tx(self):
+        """
+        Test Case20: Mode 5(TLB) TX/RX test
+        """
+        bond_port = self.create_bonded_device(MODE_TLB_BALANCE, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+
+        slaves = {}
+        slaves["active"] = [self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]]
+        slaves["inactive"] = []
+
+        self.verify_tlb_rx(self.vf_ports[3], bond_port, **slaves)
+        self.verify_tlb_tx(self.vf_ports[3], bond_port, **slaves)
+
+    def test_tlb_one_slave_down(self):
+        """
+        Test Case21: Mode 5(TLB) Bring one slave link down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        bond_port = self.create_bonded_device(MODE_TLB_BALANCE, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+
+        try:
+            slaves = {}
+            slaves["active"] = [self.vf_ports[1], self.vf_ports[2]]
+            slaves["inactive"] = [self.vf_ports[0]]
+
+            self.verify_tlb_rx(self.vf_ports[3], bond_port, **slaves)
+            self.verify_tlb_tx(self.vf_ports[3], bond_port, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+
+    def test_tlb_all_slaves_down(self):
+        """
+        Test Case22: Mode 5(TLB) Bring all slave links down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        bond_port = self.create_bonded_device(MODE_TLB_BALANCE, SOCKET_0)
+        self.add_slave_to_bonding_device(
+            bond_port, False, self.vf_ports[0], self.vf_ports[1], self.vf_ports[2]
+        )
+        self.dut.send_expect(
+            "set portlist %d,%d" % (self.vf_ports[3], bond_port), "testpmd> "
+        )
+        self.start_port("all")
+        self.dut.send_expect("start", "testpmd> ")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "down")
+        self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "down")
+
+        try:
+            slaves = {}
+            slaves["active"] = []
+            slaves["inactive"] = [
+                self.vf_ports[0],
+                self.vf_ports[1],
+                self.vf_ports[2],
+            ]
+
+            self.verify_tlb_rx(self.vf_ports[3], bond_port, **slaves)
+            self.verify_tlb_tx(self.vf_ports[3], bond_port, **slaves)
+        finally:
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[0]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[1]), "up")
+            self.admin_tester_port(self.tester.get_local_port(self.dut_ports[2]), "up")
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.pmdout.quit()
+        if self.running_case in ["test_bound_promisc_opt", "test_tlb_basic"]:
+            self.dut.send_expect(
+                "ip link set %s vf 0 trust off" % (self.dport_ifaces0), "# "
+            )
+
+    def tear_down_all(self):
+        """
+        Run after the test suite.
+        """
+        self.dut.kill_all()
+        self.dut.destroy_all_sriov_vfs()
+        if self.default_stats:
+            for port in self.dut_ports:
+                tester_port = self.tester.get_local_port(port)
+                tport_iface = self.tester.get_interface(tester_port)
+                self.tester.send_expect(
+                    "ethtool --set-priv-flags %s %s %s"
+                    % (tport_iface, self.flag, self.default_stats),
+                    "# ",
+                )
-- 
2.25.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dts] [PATCH V3 4/7] test_plans/vf_pmd_bonded_8023ad: add cases to test vf bonded
  2023-03-21 17:40 [dts] [PATCH V3 0/7] add cases to test vf bonded Song Jiale
                   ` (2 preceding siblings ...)
  2023-03-21 17:40 ` [dts] [PATCH V3 3/7] tests/vf_pmd_bonded: " Song Jiale
@ 2023-03-21 17:40 ` Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 5/7] tests/vf_pmd_bonded_8023ad: " Song Jiale
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Song Jiale @ 2023-03-21 17:40 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale

add cases to test vf bonded.

Signed-off-by: Song Jiale <songx.jiale@intel.com>
---
 test_plans/vf_pmd_bonded_8023ad_test_plan.rst | 477 ++++++++++++++++++
 1 file changed, 477 insertions(+)
 create mode 100644 test_plans/vf_pmd_bonded_8023ad_test_plan.rst

diff --git a/test_plans/vf_pmd_bonded_8023ad_test_plan.rst b/test_plans/vf_pmd_bonded_8023ad_test_plan.rst
new file mode 100644
index 00000000..74895f09
--- /dev/null
+++ b/test_plans/vf_pmd_bonded_8023ad_test_plan.rst
@@ -0,0 +1,477 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 Intel Corporation
+
+====================================
+VF Link Bonding for mode 4 (802.3ad)
+====================================
+
+This test plan mainly tests the link bonding mode 4 (802.3ad) function via
+testpmd.
+
+Link bonding mode 4 is IEEE 802.3ad dynamic link aggregation. It creates
+aggregation groups that share the same speed and duplex settings and utilizes
+all slaves in the active aggregator according to the 802.3ad specification.
+DPDK implements it based on the 802.1AX specification, which includes the LACP
+and Marker protocols. This mode requires a switch that supports IEEE 802.3ad
+dynamic link aggregation.
+
+Note: slave selection for outgoing traffic is done according to the transmit
+hash policy, which may be changed from the default simple XOR layer 2 policy.
+
+Requirements
+============
+#. Bonded ports shall maintain statistics similar to normal port.
+
+#. The slave links shall be monitored for link status changes. See also the
+   concept of up/down time delays, which handles situations such as a switch
+   reboot, where ports may report "link up" status before they become usable.
+
+#. Upon unbonding, the bonding PMD driver must restore the MAC addresses that
+   the slaves had before they were enslaved.
+
+#. According to the bond type, when the bond interface is placed in promiscuous
+   mode it will propagate the setting to the slave devices.
+
+#. LACP control packet filtering offload: a performance improvement that uses
+   hardware offloads to speed up packet classification.
+
+#. Support three 802.3ad aggregation selection logic modes (stable/bandwidth/
+   count). The Selection Logic selects a compatible Aggregator for a port, using
+   the port LAG ID. The Selection Logic may determine that the link should be
+   operated as a standby link if there are constraints on the simultaneous
+   attachment of ports that have selected the same Aggregator.
+
+#. Technical details refer to the content at::
+
+    http://dpdk.org/ml/archives/dev/2017-May/066143.html
+
+#. DPDK technical details refer to::
+
+    dpdk/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst:
+      ``Link Aggregation 802.3AD (Mode 4)``
+
+#. The Linux technical document of 802.3ad serves as a testing reference::
+
+    https://www.kernel.org/doc/Documentation/networking/bonding.txt:``802.3ad``
+
+Prerequisites for Bonding
+=========================
+All link ports of the switch/DUT should run at the same data rate and support full duplex.
+
+Functional testing hardware configuration
+-----------------------------------------
+NIC and DUT ports requirements:
+
+- Tester: 2 ports of nic
+- DUT:    2 ports of nic
+
+Create 1 VF on each of the two DUT ports::
+
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs
+
+Disable spoofchk for the VFs::
+
+   ip link set dev {pf0_iface} vf 0 spoofchk off
+   ip link set dev {pf1_iface} vf 0 spoofchk off
+
+Port topology diagram::
+
+   Tester                             DUT
+   .-------.                      .------------.
+   | port0 | <------------------> | port0(VF0) |
+   | port1 | <------------------> | port1(VF1) |
+   '-------'                      '------------'
+
+Test Case : basic behavior start/stop
+=====================================
+#. Check the bonded device stop/start actions under frequent operation.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
+
+#. run testpmd command of bonding::
+
+    testpmd> port stop all
+    testpmd> create bonded device 4 0
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> set allmulti 0 on
+    testpmd> set allmulti 1 on
+    testpmd> set allmulti 2 on
+    testpmd> set portlist 2
+
+#. Execute this step in a loop 10 times and check that the bonded device still works::
+
+    testpmd> port stop all
+    testpmd> port start all
+    testpmd> start
+    testpmd> show bonding config 2
+    testpmd> stop
+
+#. quit testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : basic behavior mac
+==============================
+#. The bonded device's default MAC is the MAC of one of its slaves once a slave has been added.
+#. When no slave is attached, the MAC should be 00:00:00:00:00:00.
+#. On unbonding, each slave restores the MAC address it had before it was enslaved.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
+
+#. run testpmd command of bonding::
+
+    testpmd> port stop all
+    testpmd> create bonded device 4 0
+
+#. Check that the bonded device's MAC is 00:00:00:00:00:00::
+
+    testpmd> show port info 2
+
+#. add two slaves to bond port::
+
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> port start all
+
+#. Check that the bonded device's MAC is the MAC of one of its slaves::
+
+    testpmd> show port info 0
+    testpmd> show port info 1
+    testpmd> show port info 2
+
+#. quit testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : basic behavior link up/down
+=======================================
+#. The bonded device should be in down status when it has no slaves.
+#. The bonded device should have the same link status as its slaves.
+#. The active slave list should change as slave status changes.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
+
+#. run testpmd command of bonding::
+
+    testpmd> port stop all
+    testpmd> create bonded device 4 0
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> set allmulti 0 on
+    testpmd> set allmulti 1 on
+    testpmd> set allmulti 2 on
+    testpmd> set portlist 2
+
+#. stop bonded device and check bonded device/slaves link status::
+
+    testpmd> port stop 2
+    testpmd> show port info 2
+    testpmd> show port info 1
+    testpmd> show port info 0
+
+#. start bonded device and check bonded device/slaves link status::
+
+    testpmd> port start 2
+    testpmd> show port info 2
+    testpmd> show port info 1
+    testpmd> show port info 0
+
+#. quit testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : basic behavior promiscuous mode
+============================================
+#. The bonded device's promiscuous mode should be ``enabled`` by default.
+#. The bonded device and its slave devices should have the same promiscuous mode status.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
+
+#. run testpmd command of bonding::
+
+    testpmd> port stop all
+    testpmd> create bonded device 4 0
+
+#. check if bonded device promiscuous mode is ``enabled``::
+
+    testpmd> show port info 2
+
+#. add two slaves and check if promiscuous mode is ``enabled``::
+
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> show port info 0
+    testpmd> show port info 1
+
+#. disable bonded device promiscuous mode and check promiscuous mode::
+
+    testpmd> set promisc 2 off
+    testpmd> show port info 2
+
+#. enable bonded device promiscuous mode and check promiscuous mode::
+
+    testpmd> set promisc 2 on
+    testpmd> show port info 2
+
+#. check slaves' promiscuous mode::
+
+    testpmd> show port info 0
+    testpmd> show port info 1
+
+#. quit testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : basic behavior agg mode
+===================================
+#. ``stable`` is the default aggregation mode.
+#. Check the 802.3ad aggregation mode configuration; supported <agg_option>
+   values::
+
+      count
+      stable
+      bandwidth
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i
+
+#. run testpmd command of bonding::
+
+    testpmd> port stop all
+    testpmd> create bonded device 4 0
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> set allmulti 0 on
+    testpmd> set allmulti 1 on
+    testpmd> set allmulti 2 on
+    testpmd> set portlist 2
+    testpmd> port start all
+    testpmd> show bonding config 2
+    testpmd> set bonding agg_mode 2 <agg_option>
+
+#. Check that the agg_mode was set successfully::
+
+    testpmd> show bonding config 2
+    - Dev basic:
+       Bonding mode: 8023AD(4)
+       Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER2
+       IEEE802.3AD Aggregator Mode: <agg_option>
+       Slaves (2): [0 1]
+       Active Slaves (2): [0 1]
+       Current Primary: [0]
+    - Lacp info:
+        IEEE802.3 port: 2
+        fast period: 900 ms
+        slow period: 29000 ms
+        short timeout: 3000 ms
+        long timeout: 90000 ms
+        aggregate wait timeout: 2000 ms
+        tx period: 500 ms
+        rx marker period: 2000 ms
+        update timeout: 100 ms
+        aggregation mode: count
+
+        Slave Port: 0
+        Aggregator port id: 0
+        selection: SELECTED
+        Actor detail info:
+                system priority: 65535
+                system mac address: 7A:1A:91:74:32:46
+                port key: 8448
+                port priority: 65280
+                port number: 256
+                port state: ACTIVE AGGREGATION DEFAULTED
+        Partner detail info:
+                system priority: 65535
+                system mac address: 00:00:00:00:00:00
+                port key: 256
+                port priority: 65280
+                port number: 0
+                port state: ACTIVE
+
+        Slave Port: 1
+        Aggregator port id: 0
+        selection: SELECTED
+        Actor detail info:
+                system priority: 65535
+                system mac address: 5E:F7:F5:3E:58:D8
+                port key: 8448
+                port priority: 65280
+                port number: 512
+                port state: ACTIVE AGGREGATION DEFAULTED
+        Partner detail info:
+                system priority: 65535
+                system mac address: 00:00:00:00:00:00
+                port key: 256
+                port priority: 65280
+                port number: 0
+                port state: ACTIVE
+
+#. quit testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : basic behavior dedicated queues
+===========================================
+#. Check that 802.3ad dedicated queues are ``disable`` by default.
+#. Check setting 802.3ad dedicated queues; supported options::
+
+      disable
+      enable
+
+.. note:: only the ``ice`` driver supports enabling dedicated queues on a VF bonded port
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
+
+#. run testpmd command of bonding::
+
+    testpmd> port stop all
+    testpmd> create bonded device 4 0
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> show bonding config 2
+
+#. check if dedicated_queues is disabled successfully::
+
+    testpmd> set bonding lacp dedicated_queues 2 disable
+
+#. check if bonded port can start::
+
+    testpmd> port start all
+    testpmd> start
+
+#. check if dedicated_queues is enabled successfully::
+
+    testpmd> stop
+    testpmd> port stop all
+    testpmd> set bonding lacp dedicated_queues 2 enable
+
+#. check if bonded port can start::
+
+    testpmd> port start all
+    testpmd> start
+
+#. quit testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
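The success criterion for the enable/disable steps above is the message testpmd prints in reply. A small sketch of that check, assuming the reply strings asserted by the companion TestSuite_vf_pmd_bonded_8023ad.py in this series; the helper name `check_dedicated_queues_reply` is hypothetical:

```python
def check_dedicated_queues_reply(status, out):
    # Map the requested state to the message testpmd prints on success;
    # these strings mirror the ones verified by the test suite in this series.
    expected = {
        "enable": "queues for LACP control packets enabled",
        "disable": "queues for LACP control packets disabled",
    }
    return expected[status] in out

print(check_dedicated_queues_reply("enable",
      "Dedicated queues for LACP control packets enabled"))
```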
+Test Case : command line option
+===============================
+#. check command line option::
+
+    slave=<0000:xx:00.0>
+    agg_mode=<bandwidth | stable | count>
+
+#. compare bonding configuration with expected configuration.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x0f -n 4 \
+    --vdev 'net_bonding0,slave=0000:xx:00.0,slave=0000:xx:00.1,mode=4,agg_mode=<agg_option>'  \
+    -- -i --port-topology=chained
+
+#. run testpmd command of bonding::
+
+    testpmd> port stop all
+
+#. check if the bonded device has been created and slaves have been bonded successfully::
+
+    testpmd> show bonding config 2
+    - Dev basic:
+       Bonding mode: 8023AD(4)
+       Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER2
+       IEEE802.3AD Aggregator Mode: <agg_option>
+       Slaves (2): [0 1]
+       Active Slaves (2): [0 1]
+       Current Primary: [0]
+    - Lacp info:
+        IEEE802.3 port: 2
+        fast period: 900 ms
+        slow period: 29000 ms
+        short timeout: 3000 ms
+        long timeout: 90000 ms
+        aggregate wait timeout: 2000 ms
+        tx period: 500 ms
+        rx marker period: 2000 ms
+        update timeout: 100 ms
+        aggregation mode: <agg_option>
+
+#. check if bonded port can start::
+
+    testpmd> port start all
+    testpmd> start
+
+#. stop forwarding and stop all ports::
+
+    testpmd> stop
+    testpmd> port stop all
+
+#. quit testpmd::
+
+    testpmd> quit
+
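Building the `--vdev` option for this case is mechanical and easy to get wrong by hand. A minimal sketch of composing it from the slave PCI addresses; `build_bonding_vdev` is an illustrative helper, not a DTS API:

```python
def build_bonding_vdev(slave_pcis, mode=4, agg_mode="count", name="net_bonding0"):
    # Compose the --vdev argument for a mode-4 (802.3ad) bonded device,
    # following the slave=/mode=/agg_mode= format used in this test plan.
    parts = [name]
    parts += ["slave=" + pci for pci in slave_pcis]
    parts += ["mode={}".format(mode), "agg_mode={}".format(agg_mode)]
    return "--vdev '{}'".format(",".join(parts))

print(build_bonding_vdev(["0000:31:01.0", "0000:31:09.0"], agg_mode="stable"))
```

The PCI addresses shown are placeholders; substitute the VF addresses bound in the first step.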
-- 
2.25.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dts] [PATCH V3 5/7] tests/vf_pmd_bonded_8023ad: add cases to test vf bonded
  2023-03-21 17:40 [dts] [PATCH V3 0/7] add cases to test vf bonded Song Jiale
                   ` (3 preceding siblings ...)
  2023-03-21 17:40 ` [dts] [PATCH V3 4/7] test_plans/vf_pmd_bonded_8023ad: " Song Jiale
@ 2023-03-21 17:40 ` Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 6/7] test_plans/vf_pmd_stacked_bonded: " Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 7/7] tests/vf_pmd_stacked_bonded: " Song Jiale
  6 siblings, 0 replies; 10+ messages in thread
From: Song Jiale @ 2023-03-21 17:40 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale

add cases to test vf bonded.

Signed-off-by: Song Jiale <songx.jiale@intel.com>
---
 tests/TestSuite_vf_pmd_bonded_8023ad.py | 660 ++++++++++++++++++++++++
 1 file changed, 660 insertions(+)
 create mode 100644 tests/TestSuite_vf_pmd_bonded_8023ad.py

diff --git a/tests/TestSuite_vf_pmd_bonded_8023ad.py b/tests/TestSuite_vf_pmd_bonded_8023ad.py
new file mode 100644
index 00000000..9939f1a9
--- /dev/null
+++ b/tests/TestSuite_vf_pmd_bonded_8023ad.py
@@ -0,0 +1,660 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+#
+
+import re
+import time
+import traceback
+
+# import bonding lib(common methods for pmd bonding command)
+import tests.bonding as bonding
+from framework.exception import VerifyFailure
+from framework.test_case import TestCase
+
+from .bonding import FRAME_SIZE_64, MODE_LACP
+
+
+######################
+# bonding 802.3ad mode
+######################
+class TestVFPmdBonded8023AD(TestCase):
+    AGG_MODES = ["bandwidth", "stable", "count"]
+    DEDICATED_QUEUES = ["disable", "enable"]
+
+    #
+    # On dut, dpdk bonding
+    #
+
+    def set_8023ad_agg_mode(self, bond_port, mode="bandwidth"):
+        """
+        set bonding agg_mode <port_id> <agg_name>
+
+        Set 802.3ad aggregator mode
+        """
+        cmd = "set bonding agg_mode %d %s" % (bond_port, mode)
+        self.bond_inst.d_console(cmd)
+        cur_mode = self.bond_inst.get_bonding_info(bond_port, "agg_mode")
+        if mode == cur_mode:
+            fmt = "set bonding agg_mode <{0}> successfully"
+            self.logger.info(fmt.format(mode))
+        else:
+            msg = "failed to set bonding agg_mode <{0}>".format(mode)
+            self.logger.error(msg)
+            raise VerifyFailure(msg)
+
+    def get_8023ad_agg_mode(self, bond_port):
+        """
+        get 802.3ad mode  aggregator Mode
+        """
+        cur_mode = self.bond_inst.get_bonding_info(bond_port, "agg_mode")
+        return cur_mode
+
+    def set_8023ad_dedicated_queues(self, bond_port, status="disable"):
+        """
+        set 802.3ad dedicated_queues mode (enable|disable)
+        """
+        cmds = [
+            [
+                "set bonding lacp dedicated_queues %s %s" % (bond_port, status),
+                ["", "port %s failed" % bond_port, False],
+                2,
+            ],
+        ]
+        out = self.bond_inst.d_console(cmds)
+        # check testpmd's reply for the requested status
+        if status == "enable":
+            expected_msg = "queues for LACP control packets enabled"
+            err_fmt = "link bonding mode 4 (802.3ad) set {0} failed"
+            self.verify(expected_msg in out, err_fmt.format(status))
+        elif status == "disable":
+            expected_msg = "queues for LACP control packets disabled"
+            err_fmt = "link bonding mode 4 (802.3ad) set {0} failed"
+            self.verify(expected_msg in out, err_fmt.format(status))
+
+    def set_special_command(self, bond_port):
+        cmds = [
+            "set allmulti 0 on",
+            "set allmulti 1 on",
+            "set allmulti {} on".format(bond_port),
+            "set portlist {}".format(bond_port),
+        ]
+        [self.bond_inst.d_console([cmd, "testpmd>", 15]) for cmd in cmds]
+
+    def set_8023ad_bonded(self, slaves, bond_mode, ignore=True):
+        """set 802.3ad bonded mode for the specified bonding mode"""
+        specified_socket = self.dut.get_numa_id(slaves[0])
+        # create bonded device, add slaves in it
+        bond_port = self.bond_inst.create_bonded_device(bond_mode, specified_socket)
+        if not ignore:
+            # when no slave attached, mac should be 00:00:00:00:00:00
+            self.bonding_8023ad_check_macs_without_slaves(bond_port)
+        # add slave
+        self.bond_inst.add_slave(bond_port, False, "", *slaves)
+        # set special command
+        self.set_special_command(bond_port)
+        return bond_port
+
+    def set_8023ad_bonded2(self, slaves, bond_mode, ignore=True):
+        """set 802.3ad bonded mode for the specified bonding mode"""
+        specified_socket = self.dut.get_numa_id(slaves[0])
+        # create bonded device, add slaves in it
+        bond_port = self.bond_inst.create_bonded_device(bond_mode, specified_socket)
+        if not ignore:
+            # when no slave attached, mac should be 00:00:00:00:00:00
+            self.bonding_8023ad_check_macs_without_slaves(bond_port)
+        # add slave
+        self.bond_inst.add_slave(bond_port, False, "", *slaves)
+        return bond_port
+
+    def get_pci_link(self, slaves):
+        """get slaves ports pci address"""
+        slaves_pci = []
+        for port_id in slaves:
+            slaves_pci.append(self.dut.ports_info[port_id]["pci"])
+        if not slaves_pci:
+            msg = "can't find tx_port pci"
+            self.logger.error(msg)
+            raise VerifyFailure(msg)
+        return slaves_pci
+
+    def set_bond_port_ready(self, tx_port, bond_port):
+        cmd = "set portlist {0},{1}".format(tx_port, bond_port)
+        self.bond_inst.d_console(cmd)
+        # for port link up is slow and unstable,
+        # every port should start one by one
+        cmds = []
+        port_num = len(self.sriov_vfs_port)
+        start_fmt = "port start {0}".format
+        for cnt in range(port_num):
+            cmds.append([start_fmt(cnt), "", 5])
+        self.bond_inst.d_console(cmds)
+        time.sleep(10)
+        self.bond_inst.d_console([start_fmt(self.bond_port), "", 15])
+        time.sleep(5)
+        self.bond_inst.d_console(["start", "", 10])
+        self.verify(
+            self.bond_inst.testpmd.wait_link_status_up("all"),
+            "Failed to set bond port ready!!!",
+        )
+
+    def run_8023ad_pre(self, slaves, bond_mode):
+        bond_port = self.set_8023ad_bonded(slaves, bond_mode)
+        # the port should be stopped and restarted to make sure it re-syncs
+        # with its partner when testpmd links with switch equipment
+        cmds = ["port stop all", "", 15]
+        self.bond_inst.d_console(cmds)
+        time.sleep(2)
+        cmds = ["port start all", "", 10]
+        self.bond_inst.d_console(cmds)
+        self.verify(
+            self.bond_inst.testpmd.wait_link_status_up("all"),
+            "run_8023ad_pre: Failed to start all port",
+        )
+        return bond_port
+
+    def bonding_8023ad_check_macs_without_slaves(self, bond_port):
+        query_type = "mac"
+        bond_port_mac = self.bond_inst.get_port_mac(bond_port, query_type)
+        default_mac = "00:00:00:00:00:00"
+        if bond_port_mac == default_mac:
+            msg = "bond port default mac is [{0}]".format(default_mac)
+            self.logger.info(msg)
+        else:
+            fmt = "bond port default mac is [{0}], not expected mac"
+            msg = fmt.format(bond_port_mac)
+            self.logger.warning(msg)
+
+    def bonding_8023ad_check_macs(self, slaves, bond_port):
+        """check if bonded device's mac is one of its slaves mac"""
+        query_type = "mac"
+        bond_port_mac = self.bond_inst.get_port_mac(bond_port, query_type)
+        if bond_port_mac == "00:00:00:00:00:00":
+            msg = "bond port hasn't set mac address"
+            self.logger.info(msg)
+            return
+
+        for port_id in slaves:
+            slave_mac = self.bond_inst.get_port_info(port_id, query_type)
+            if bond_port_mac == slave_mac:
+                fmt = "bonded device's mac is slave [{0}]'s mac [{1}]"
+                msg = fmt.format(port_id, slave_mac)
+                self.logger.info(msg)
+                return port_id
+        else:
+            fmt = "bonded device's current mac [{0}] " + "is not one of its slaves mac"
+            msg = fmt.format(bond_port_mac)
+            # it is not supported by dpdk, but supported by linux normal
+            # bonding/802.3ad tool
+            self.logger.warning("bonding_8023ad_check_macs: " + msg)
+
+    def check_bonded_device_mac_change(self, slaves, bond_port):
+        remove_slave = 0
+        cur_slaves = slaves[1:]
+        self.bond_inst.remove_slaves(bond_port, False, *[remove_slave])
+        self.bonding_8023ad_check_macs(cur_slaves, bond_port)
+
+    def check_bonded_device_start(self, bond_port):
+        cmds = [
+            ["port stop all", "", 15],
+            ["port start %s" % bond_port, "", 10],
+            ["start", [" ", "core dump", False]],
+        ]
+        self.bond_inst.d_console(cmds)
+        time.sleep(2)
+
+    def stop_bonded_device(self, bond_port):
+        cmds = [
+            ["stop", "", 10],
+            ["port stop %s" % bond_port, "", 10],
+        ]
+        self.bond_inst.d_console(cmds)
+        time.sleep(2)
+
+    def check_bonded_device_up_down(self, bond_port):
+        # stop bonded device
+        cmd = "port stop {0}".format(bond_port)
+        self.bond_inst.d_console(cmd)
+        status = self.bond_inst.get_port_info(bond_port, "link_status")
+        if status != "down":
+            msg = "bond port {0} fail to set down".format(bond_port)
+            self.logger.error(msg)
+            raise VerifyFailure(msg)
+        else:
+            msg = "bond port {0} set down successful !".format(bond_port)
+            self.logger.info(msg)
+        # start bonded device
+        cmds = ["port start {0}".format(bond_port), "", 10]
+        self.bond_inst.d_console(cmds)
+        self.verify(
+            self.bond_inst.testpmd.wait_link_status_up("all", timeout=30),
+            "bond port {0} fail to set up".format(bond_port),
+        )
+
+    def check_bonded_device_promisc_mode(self, slaves, bond_port):
+        # disable bonded device promiscuous mode
+        cmd = "set promisc {0} off".format(bond_port)
+        self.bond_inst.d_console(cmd)
+        time.sleep(2)
+        status = self.bond_inst.get_port_info(bond_port, "promiscuous_mode")
+        if status != "disabled":
+            fmt = "bond port {0} fail to set promiscuous mode disabled"
+            msg = fmt.format(bond_port)
+            self.logger.warning(msg)
+        else:
+            fmt = "bond port {0} set promiscuous mode disabled successful !"
+            msg = fmt.format(bond_port)
+            self.logger.info(msg)
+        self.bond_inst.d_console("start")
+        time.sleep(2)
+        # check slave promiscuous mode
+        for port_id in slaves:
+            status = self.bond_inst.get_port_info(port_id, "promiscuous_mode")
+            if status != "disabled":
+                fmt = (
+                    "slave port {0} promiscuous mode "
+                    "isn't the same as bond port 'disabled', "
+                )
+                msg = fmt.format(port_id)
+                self.logger.warning(msg)
+                # dpdk developer hasn't completed this function as linux
+                # document description about `Promiscuous mode`, ignore it here
+                # temporarily
+                # raise VerifyFailure(msg)
+            else:
+                fmt = "slave port {0} promiscuous mode is 'disabled' too"
+                msg = fmt.format(port_id)
+                self.logger.info(msg)
+        # enable bonded device promiscuous mode
+        cmd = "set promisc {0} on".format(bond_port)
+        self.bond_inst.d_console(cmd)
+        time.sleep(3)
+        status = self.bond_inst.get_port_info(bond_port, "promiscuous_mode")
+        if status != "enabled":
+            fmt = "bond port {0} fail to set promiscuous mode enabled"
+            msg = fmt.format(bond_port)
+            self.logger.error(msg)
+            raise VerifyFailure(msg)
+        else:
+            fmt = "bond port {0} set promiscuous mode enabled successful !"
+            msg = fmt.format(bond_port)
+            self.logger.info(msg)
+        # check slave promiscuous mode
+        for port_id in slaves:
+            status = self.bond_inst.get_port_info(port_id, "promiscuous_mode")
+            if status != "enabled":
+                fmt = (
+                    "slave port {0} promiscuous mode "
+                    + "isn't the same as bond port 'enabled'"
+                )
+                msg = fmt.format(port_id)
+                self.logger.warning(msg)
+                # dpdk developer hasn't completed this function as linux
+                # document description about `Promiscuous mode`, ignore it here
+                # temporarily
+                # raise VerifyFailure(msg)
+            else:
+                fmt = "slave port {0} promiscuous mode is 'enabled' too"
+                msg = fmt.format(port_id)
+                self.logger.info(msg)
+
+    def check_8023ad_agg_modes(self, slaves, bond_mode):
+        """check aggregator mode"""
+        check_results = []
+        default_agg_mode = "stable"
+        for mode in self.AGG_MODES:
+            try:
+                self.bond_inst.start_testpmd(self.eal_param)
+                bond_port = self.set_8023ad_bonded(slaves, bond_mode)
+                cur_agg_mode = self.get_8023ad_agg_mode(bond_port)
+                if cur_agg_mode != default_agg_mode:
+                    fmt = "link bonding mode 4 (802.3ad) default agg mode " "isn't {0}"
+                    msg = fmt.format(default_agg_mode)
+                    self.logger.warning(msg)
+                # ignore default mode
+                if mode == default_agg_mode:
+                    fmt = "link bonding mode 4 (802.3ad) " "current agg mode is {0}"
+                    msg = fmt.format(mode)
+                    self.logger.info(msg)
+                    continue
+                cmds = [["port stop all", "", 15], ["port start all", "", 15]]
+                self.bond_inst.d_console(cmds)
+                self.set_8023ad_agg_mode(bond_port, mode)
+            except Exception as e:
+                check_results.append(e)
+                print(traceback.format_exc())
+            finally:
+                self.bond_inst.close_testpmd()
+                time.sleep(2)
+
+        if check_results:
+            for result in check_results:
+                self.logger.error(result)
+            raise VerifyFailure("check_8023ad_agg_modes is failed")
+
+    def check_8023ad_dedicated_queues(self, slaves, bond_mode):
+        """check 802.3ad dedicated queues"""
+        check_results = []
+        default_slow_queue = "unknown"
+        for mode in self.DEDICATED_QUEUES:
+            try:
+                self.bond_inst.start_testpmd(self.eal_param)
+                bond_port = self.set_8023ad_bonded2(slaves, bond_mode)
+                self.set_8023ad_dedicated_queues(bond_port, mode)
+            except Exception as e:
+                check_results.append(e)
+                print(traceback.format_exc())
+            finally:
+                self.bond_inst.close_testpmd()
+                time.sleep(2)
+
+        if check_results:
+            for result in check_results:
+                self.logger.error(result)
+            raise VerifyFailure("check_8023ad_dedicated_queues is failed")
+
+    def get_commandline_options(self, agg_mode):
+        # get bonding port configuration
+        slave_pcis = self.vfs_pci
+        # create commandline option format
+        bonding_name = "net_bonding0"
+        slaves_pci = ["slave=" + pci for pci in slave_pcis]
+        p = r"\w+\((\d+)\)"
+        mode_id = int(re.match(p, str(MODE_LACP)).group(1))
+        bonding_mode = "mode={0}".format(mode_id)
+        agg_config = "agg_mode={0}"
+        vdev_format = ",".join([bonding_name] + slaves_pci + [bonding_mode, agg_config])
+        # command line option
+        option = vdev_format.format(agg_mode)
+        vdev_option = " --vdev '{0}'".format(option)
+        # 802.3ad bond port only create one, it must be the max port number
+        bond_port = len(self.sriov_vfs_port)
+        return bond_port, vdev_option
+
+    def run_test_pre(self, agg_mode):
+        # get bonding port configuration
+        bond_port, vdev_option = self.get_commandline_options(agg_mode)
+        self.bond_port = bond_port
+        # boot up testpmd
+        eal_param = self.eal_param + vdev_option
+        self.bond_inst.start_testpmd(eal_option=eal_param)
+        cur_slaves, cur_agg_mode = self.bond_inst.get_bonding_info(
+            bond_port, ["slaves", "agg_mode"]
+        )
+        if agg_mode != cur_agg_mode:
+            fmt = "expected agg mode is [{0}], current agg mode is [{1}]"
+            msg = fmt.format(agg_mode, cur_agg_mode)
+            self.logger.warning(msg)
+        # get forwarding port
+        for port_id in range(len(self.sriov_vfs_port)):
+            # select a non-slave port as forwarding port to do transmitting
+            if str(port_id) not in cur_slaves:
+                tx_port_id = port_id
+                break
+        else:
+            tx_port_id = bond_port
+        # enable dedicated queues;
+        # only the ice driver supports this on a vf bonded port
+        if "ice" in self.kdriver:
+            self.set_8023ad_dedicated_queues(bond_port, "enable")
+        self.set_bond_port_ready(tx_port_id, bond_port)
+        slaves = [int(slave) for slave in cur_slaves]
+
+        return bond_port, slaves, tx_port_id
+
+    def run_dpdk_functional_pre(self):
+        mode = MODE_LACP
+        slaves = self.vf_ports[:]
+        self.bond_inst.start_testpmd(self.eal_param)
+        bond_port = self.run_8023ad_pre(slaves, mode)
+        return slaves, bond_port
+
+    def run_dpdk_functional_post(self):
+        self.bond_inst.close_testpmd()
+
+    def check_cmd_line_option_status(self, agg_mode, bond_port, slaves):
+        mode = str(MODE_LACP)
+        msgs = []
+        (
+            cur_mode,
+            cur_slaves,
+            cur_active_slaves,
+            cur_agg_mode,
+        ) = self.bond_inst.get_bonding_info(
+            bond_port, ["mode", "slaves", "active_slaves", "agg_mode"]
+        )
+        # check bonding mode
+        if mode != cur_mode:
+            fmt = "expected mode is [{0}], current mode is [{1}]"
+            msg = fmt.format(mode, cur_mode)
+            msgs.append(msg)
+        # check bonding 802.3ad agg mode
+        if agg_mode != cur_agg_mode:
+            fmt = "expected agg mode is [{0}], current agg mode is [{1}]"
+            msg = fmt.format(agg_mode, cur_agg_mode)
+            msgs.append(msg)
+        # check bonded slaves
+        _cur_slaves = [int(id) for id in cur_slaves]
+        if not _cur_slaves or sorted(slaves) != sorted(_cur_slaves):
+            slaves_str = " ".join([str(id) for id in slaves])
+            cur_slaves_str = (
+                " ".join([str(id) for id in _cur_slaves]) if _cur_slaves else ""
+            )
+            msg_format = "expected slaves is [{0}], current slaves is [{1}]"
+            msg = msg_format.format(slaves_str, cur_slaves_str)
+            msgs.append(msg)
+        # check active slaves status before ports start
+        if cur_active_slaves:
+            check_active_slaves = [int(id) for id in cur_active_slaves]
+            if sorted(slaves) != sorted(check_active_slaves):
+                slaves_str = " ".join([str(id) for id in slaves])
+                msg_fmt = (
+                    "expected active slaves is [{0}], " "current active slaves is [{1}]"
+                )
+                msg = msg_fmt.format(slaves_str, cur_active_slaves)
+                msgs.append(msg)
+        else:
+            msg = "active slaves should not be empty"
+            self.logger.warning(msg)
+            msgs.append(msg)
+        # check status after ports start
+        self.bond_inst.start_ports()
+        # set bonded device to active status
+        cur_active_slaves = [
+            int(id)
+            for id in self.bond_inst.get_bonding_info(bond_port, "active_slaves")
+        ]
+        if not cur_active_slaves or sorted(slaves) != sorted(cur_active_slaves):
+            slaves_str = " ".join([str(id) for id in slaves])
+            active_str = (
+                " ".join([str(id) for id in cur_active_slaves])
+                if cur_active_slaves
+                else ""
+            )
+            msg_fmt = (
+                "expected active slaves is [{0}], " "current active slaves is [{1}]"
+            )
+            msg = msg_fmt.format(slaves_str, active_str)
+            msgs.append(msg)
+        return msgs
+
+    #
+    # Test cases.
+    #
+    def set_up_all(self):
+        """
+        Run before each test suite
+        """
+        self.verify("bsdapp" not in self.target, "Bonding is not supported on FreeBSD")
+        # ------------------------------------------------------------
+        # link peer resource
+        self.dut_ports = self.dut.get_ports()
+        required_link = 2
+        self.dport_info0 = self.dut.ports_info[self.dut_ports[0]]
+        self.dport_ifaces = self.dport_info0["intf"]
+        self.verify(len(self.dut_ports) >= required_link, "Insufficient ports")
+        # Create a vf for each pf and get all vf info,
+        self.dut.restore_interfaces()
+        self.create_vfs(pfs_id=self.dut_ports[0:2], vf_num=1)
+        self.vf_ports = list(range(len(self.vfs_pci)))
+        self.eal_param = str()
+        for pci in self.vfs_pci:
+            self.eal_param += "-a {} ".format(pci)
+        # ------------------------------------------------------------
+        # 802.3ad related
+        self.bond_port = None
+        self.bond_slave = self.dut_ports[0]
+        # ----------------------------------------------------------------
+        # initialize bonding common methods name
+        config = {
+            "parent": self,
+            "pkt_name": "udp",
+            "pkt_size": FRAME_SIZE_64,
+            "src_mac": "52:00:00:00:00:03",
+            "src_ip": "10.239.129.65",
+            "src_port": 61,
+            "dst_ip": "10.239.129.88",
+            "dst_port": 53,
+        }
+        self.bond_inst = bonding.PmdBonding(**config)
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        pass
+
+    def create_vfs(self, pfs_id, vf_num):
+        self.sriov_vfs_port = []
+        self.vfs_pci = []
+        self.dut.bind_interfaces_linux(self.kdriver)
+        pfs_id = pfs_id if isinstance(pfs_id, list) else [pfs_id]
+        for pf_id in pfs_id:
+            self.dut.generate_sriov_vfs_by_port(pf_id, vf_num)
+            self.sriov_vfs_port += self.dut.ports_info[self.dut_ports[pf_id]][
+                "vfs_port"
+            ]
+            dport_iface = self.dut.ports_info[self.dut_ports[pf_id]]["intf"]
+            self.dut.send_expect(
+                "ip link set %s vf 0 spoofchk off" % (dport_iface), "# "
+            )
+        for vf in self.sriov_vfs_port:
+            self.vfs_pci.append(vf.pci)
+        try:
+            for port in self.sriov_vfs_port:
+                port.bind_driver(self.drivername)
+        except Exception as e:
+            self.dut.destroy_all_sriov_vfs()
+            raise Exception(e)
+
+    def test_basic_behav_startStop(self):
+        """
+        Test Case : basic behavior start/stop
+        """
+        msg = ""
+        slaves, bond_port = self.run_dpdk_functional_pre()
+        try:
+            for _ in range(10):
+                self.check_bonded_device_start(bond_port)
+                self.stop_bonded_device(bond_port)
+        except Exception as e:
+            print(traceback.format_exc())
+            msg = "bonding 8023ad check start/stop failed"
+        self.run_dpdk_functional_post()
+        if msg:
+            raise VerifyFailure(msg)
+
+    def test_basic_behav_mac(self):
+        """
+        Test Case : basic behavior mac
+        """
+        msg = ""
+        slaves, bond_port = self.run_dpdk_functional_pre()
+        try:
+            self.bonding_8023ad_check_macs(slaves, bond_port)
+            self.check_bonded_device_mac_change(slaves, bond_port)
+        except Exception as e:
+            msg = "bonding 8023ad check mac failed"
+        self.run_dpdk_functional_post()
+        if msg:
+            raise VerifyFailure(msg)
+
+    def test_basic_behav_upDown(self):
+        """
+        Test Case : basic behavior link up/down
+        """
+        msg = ""
+        slaves, bond_port = self.run_dpdk_functional_pre()
+        try:
+            self.check_bonded_device_up_down(bond_port)
+        except Exception as e:
+            msg = "bonding 8023ad check link up/down failed"
+        self.run_dpdk_functional_post()
+        if msg:
+            raise VerifyFailure(msg)
+
+    def test_basic_behav_promisc_mode(self):
+        """
+        Test Case : basic behavior promiscuous mode
+        """
+        msg = ""
+        slaves, bond_port = self.run_dpdk_functional_pre()
+        try:
+            self.check_bonded_device_promisc_mode(slaves, bond_port)
+        except Exception as e:
+            msg = "bonding 8023ad check promisc mode failed"
+        self.run_dpdk_functional_post()
+        if msg:
+            raise VerifyFailure(msg)
+
+    def test_command_line_option(self):
+        """
+        Test Case : command line option
+        """
+        agg_modes_msgs = []
+        for agg_mode in self.AGG_MODES:
+            bond_port, cur_slaves, tx_port_id = self.run_test_pre(agg_mode)
+            msgs = self.check_cmd_line_option_status(agg_mode, bond_port, cur_slaves)
+            if msgs:
+                agg_modes_msgs.append((msgs, agg_mode))
+            self.bond_inst.close_testpmd()
+        if agg_modes_msgs:
+            msgs = ""
+            for msg, agg_mode in agg_modes_msgs:
+                self.logger.warning(msg)
+                msgs += "fail to config from command line at {0}  ".format(agg_mode)
+            raise VerifyFailure(msgs)
+
+    def test_basic_behav_agg_mode(self):
+        """
+        Test Case : basic behavior agg mode
+        """
+        mode = MODE_LACP
+        self.check_8023ad_agg_modes(self.vf_ports, mode)
+
+    def test_basic_dedicated_queues(self):
+        """
+        Test Case : basic behavior dedicated queues
+        """
+        self.skip_case(
+            "ice" in self.kdriver,
+            "only the ice driver supports enabling dedicated queues on a vf bonded port",
+        )
+        mode = MODE_LACP
+        self.check_8023ad_dedicated_queues(self.vf_ports, mode)
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        try:
+            self.bond_inst.close_testpmd()
+        except Exception:
+            self.dut.kill_all()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.kill_all()
+        self.dut.destroy_all_sriov_vfs()
-- 
2.25.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dts] [PATCH V3 6/7] test_plans/vf_pmd_stacked_bonded: add cases to test vf bonded
  2023-03-21 17:40 [dts] [PATCH V3 0/7] add cases to test vf bonded Song Jiale
                   ` (4 preceding siblings ...)
  2023-03-21 17:40 ` [dts] [PATCH V3 5/7] tests/vf_pmd_bonded_8023ad: " Song Jiale
@ 2023-03-21 17:40 ` Song Jiale
  2023-03-21 17:40 ` [dts] [PATCH V3 7/7] tests/vf_pmd_stacked_bonded: " Song Jiale
  6 siblings, 0 replies; 10+ messages in thread
From: Song Jiale @ 2023-03-21 17:40 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale

add cases to test vf bonded.

Signed-off-by: Song Jiale <songx.jiale@intel.com>
---
 .../vf_pmd_stacked_bonded_test_plan.rst       | 416 ++++++++++++++++++
 1 file changed, 416 insertions(+)
 create mode 100644 test_plans/vf_pmd_stacked_bonded_test_plan.rst

diff --git a/test_plans/vf_pmd_stacked_bonded_test_plan.rst b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
new file mode 100644
index 00000000..9c9d9d2b
--- /dev/null
+++ b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
@@ -0,0 +1,416 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 Intel Corporation
+
+=================
+VF Stacked Bonded
+=================
+
+The stacked bonding mechanism allows a bonded port to be added as a slave to another bonded port.
+
+The demand arises from a discussion with a prospective customer for a 100G NIC
+based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
+support a proper x16 PCIe interface so the host sees a single netdev and that
+netdev corresponds directly to the 100G Ethernet port. They indicated that in
+their current system they bond multiple 100G NICs together, using the DPDK
+bonding API in their application. They are interested in an alternative source
+for the 100G NIC and are in conversation with Silicom, who ship a 100G
+RRC-based NIC (something like Boulder Rapids). The issue they have with the RRC
+NIC is that it presents as two PCIe interfaces (netdevs) instead of one. If
+DPDK bonding could operate at the 1st level on the two RRC netdevs to present a
+single netdev, the application could then bond multiple of these bonded
+interfaces to implement NIC bonding.
+
+Prerequisites
+=============
+
+hardware configuration
+----------------------
+
+All linked ports of the tester/DUT should run at the same data rate and support full-duplex.
+
+NIC/DUT/TESTER ports requirements:
+
+- Tester: 2/4 ports of nic
+- DUT:    2/4 ports of nic
+
+enable ``link-down-on-close`` in tester::
+
+   ethtool --set-priv-flags {tport_iface0} link-down-on-close on
+   ethtool --set-priv-flags {tport_iface1} link-down-on-close on
+   ethtool --set-priv-flags {tport_iface2} link-down-on-close on
+   ethtool --set-priv-flags {tport_iface3} link-down-on-close on
+
+create 1 vf for each of the 4 dut ports (``sriov_numvfs`` must be reset to 0
+before writing a new non-zero value)::
+
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.2/sriov_numvfs
+   echo 1 > /sys/bus/pci/devices/0000\:31\:00.3/sriov_numvfs
+
+disable spoofchk for each VF::
+
+     ip link set dev {pf0_iface} vf 0 spoofchk off
+     ip link set dev {pf1_iface} vf 0 spoofchk off
+     ip link set dev {pf2_iface} vf 0 spoofchk off
+     ip link set dev {pf3_iface} vf 0 spoofchk off
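The VF setup steps above can be generated programmatically. A minimal sketch (the PCI addresses and interface names passed in are placeholders, and `vf_setup_cmds` is a hypothetical helper, not part of DTS); note that the kernel rejects changing a non-zero ``sriov_numvfs`` directly, so it is reset to 0 first:

```python
def vf_setup_cmds(pf_pcis, pf_ifaces, num_vfs=1):
    """Build the shell commands for VF creation and spoofchk as listed above."""
    cmds = []
    for pci, iface in zip(pf_pcis, pf_ifaces):
        sysfs = "/sys/bus/pci/devices/{}/sriov_numvfs".format(pci)
        # reset to 0 first; a non-zero value cannot be overwritten directly
        cmds.append("echo 0 > " + sysfs)
        cmds.append("echo {} > {}".format(num_vfs, sysfs))
        # disable spoof checking on VF 0 of this PF
        cmds.append("ip link set dev {} vf 0 spoofchk off".format(iface))
    return cmds
```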
+
+port topology diagram(4 peer links)::
+
+    TESTER                                          DUT
+                 physical link              logical link
+    .---------.                .------------------------------------------------.
+    | portA 0 | <------------> | portB pf0vf0 <---> .--------.                  |
+    |         |                |                    | bond 0 | <-----> .------. |
+    | portA 1 | <------------> | portB pf1vf0 <---> '--------'         |      | |
+    |         |                |                                       |bond2 | |
+    | portA 2 | <------------> | portB pf2vf0 <---> .--------.         |      | |
+    |         |                |                    | bond 1 | <-----> '------' |
+    | portA 3 | <------------> | portB pf3vf0 <---> '--------'                  |
+    '---------'                '------------------------------------------------'
+
+Test cases
+==========
+The ``tx-offloads`` value is set based on the NIC type. The steps of the test
+cases that cover slave down handling are based on 4 ports; the other test
+cases' steps are based on 2 ports.
+
+Test Case: basic behavior
+=========================
+A bonded port is allowed to be added to another bonded port in the following
+modes::
+
+   balance-rr    0
+   active-backup 1
+   balance-xor   2
+   broadcast     3
+   balance-tlb   5
+   balance-alb   6
+
+#. 802.3ad mode is not supported if one or more slaves is a bonded device.
+#. add the same device twice to check that the exception handling is correct.
+#. the master bonded port and each slave have the same queue configuration.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create first bonded port and add one slave, check bond 2 config status::
+
+    testpmd> create bonded device <mode> 0
+    testpmd> add bonding slave 0 2
+    testpmd> show bonding config 2
+
+#. create second bonded port and add one slave, check bond 3 config status::
+
+    testpmd> create bonded device <mode> 0
+    testpmd> add bonding slave 1 3
+    testpmd> show bonding config 3
+
+#. create third bonded port and add the first/second bonded ports as its slaves.
+   check that the slaves are added successfully. stacked bonding is forbidden in
+   mode 4 (802.3ad), which will fail to add a bonded port as its slave::
+
+    testpmd> create bonded device <mode> 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+#. check that the master bonded port's and each slave port's queue configurations are the same::
+
+    testpmd> show bonding config 0
+    testpmd> show bonding config 1
+    testpmd> show bonding config 2
+    testpmd> show bonding config 3
+    testpmd> show bonding config 4
+
+#. start top level bond port to check ports start action::
+
+    testpmd> port start 4
+    testpmd> start
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+#. repeat the steps above with the following mode numbers::
+
+    balance-rr    0
+    active-backup 1
+    balance-xor   2
+    broadcast     3
+    802.3ad       4
+    balance-tlb   5
+
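The step sequence above can be sketched as a helper that emits the testpmd commands for one mode. This is an illustration only (`stacked_bond_cmds` is a hypothetical name); the port ids 2/3/4 assume exactly two physical ports are bound, so the bonded devices created receive those ids in order:

```python
def stacked_bond_cmds(mode):
    """testpmd command sequence for a two-level stacked bond over ports 0/1."""
    return [
        "port stop all",
        "create bonded device {} 0".format(mode),  # -> port 2
        "add bonding slave 0 2",
        "show bonding config 2",
        "create bonded device {} 0".format(mode),  # -> port 3
        "add bonding slave 1 3",
        "show bonding config 3",
        "create bonded device {} 0".format(mode),  # -> port 4 (top level)
        "add bonding slave 2 4",
        "add bonding slave 3 4",
        "show bonding config 4",
        "port start 4",  # starting the top-level bond propagates to slaves
        "start",
    ]
```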
+Test Case: active-backup stacked bonded rx traffic
+==================================================
+setup dut/testpmd stacked bonded ports, send tcp packet by scapy and check
+testpmd packet statistics.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create first bonded port and add one port as slave::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 2
+
+#. create second bonded port and add one port as slave::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 1 3
+
+#. create third bonded port and add the first/second bonded ports as its slaves,
+   check that the slaves are added successfully::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+#. start top level bond port::
+
+    testpmd> port start 4
+    testpmd> start
+
+#. send 100 tcp packets to portA 0 and portA 1::
+
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
+
+#. first/second bonded port should receive 100 packets, third bonded port
+   should receive 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
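The expected statistics above follow a simple accounting rule: each first-level bond receives what its slave received, and the top-level bond receives their sum. A small checker sketch (a hypothetical helper, assuming a dict of per-port RX counts keyed by port id):

```python
def check_stacked_rx(rx, bond1, bond2, top, sent=100):
    """Verify stacked-bond RX accounting: each 1st-level bond receives
    `sent` packets and the top-level bond receives their sum."""
    return (
        rx[bond1] == sent
        and rx[bond2] == sent
        and rx[top] == rx[bond1] + rx[bond2]
    )
```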
+Test Case: active-backup stacked bonded rx traffic with slave down
+==================================================================
+setup dut/testpmd stacked bonded ports, set one slave of 1st level bonded port
+to down status, send tcp packet by scapy and check testpmd packet statistics.
+
+steps
+-----
+
+#. bind four ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create first bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+
+#. set portB 0 down::
+
+    ethtool --set-priv-flags {portA 0} link-down-on-close on
+    ifconfig {portA 0} down
+
+.. note::
+
+    The vf port link status cannot be changed directly. Change the peer port to make the vf port link down.
+
+#. create second bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 2 5
+    testpmd> add bonding slave 3 5
+
+#. set portB 2 down::
+
+    ethtool --set-priv-flags {portA 2} link-down-on-close on
+    ifconfig {portA 2} down
+
+.. note::
+
+    The vf port link status cannot be changed directly. Change the peer port to make the vf port link down.
+
+#. create third bonded port and add the first/second bonded ports as its slaves,
+   check that the slaves are added successfully::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+#. start top level bond port::
+
+    testpmd> port start 6
+    testpmd> start
+
+#. send 100 packets to portB pf0vf0/portB pf1vf0/portB pf2vf0/portB pf3vf0 separately::
+
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portB pf0>)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portB pf2>)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 3>)
+
+#. check that the first/second bonded ports each receive 100 packets and the third
+   bonded device receives 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
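The DTS suite drives scapy through a console session, so sendp() lines like those above are built as strings before being sent. A sketch of that construction (`scapy_send_cmd` is a hypothetical helper; interface names are placeholders):

```python
def scapy_send_cmd(iface, count=100, interval=0.01):
    """Build the scapy sendp() line used in the traffic steps above."""
    # 60 bytes of zero padding, matching the test plan's packet layout
    pkt = "Ether()/IP()/TCP()/Raw('\\0'*60)"
    return 'sendp([{}], iface="{}", count={}, inter={}, verbose=False)'.format(
        pkt, iface, count, interval
    )
```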
+Test Case: balance-xor stacked bonded rx traffic
+================================================
+setup dut/testpmd stacked bonded ports, send tcp packet by scapy and check
+packet statistics.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create first bonded port and add one port as slave::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 2
+
+#. create second bonded port and add one port as slave::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 1 3
+
+#. create third bonded port and add the first/second bonded ports as its slaves,
+   check that the slaves are added successfully::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+#. start top level bond port::
+
+    testpmd> port start 4
+    testpmd> start
+
+#. send 100 tcp packets to portA 0 and portA 1 separately::
+
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
+
+#. check that the first/second bonded ports each receive 100 packets and the third
+   bonded device receives 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case: balance-xor stacked bonded rx traffic with slave down
+================================================================
+setup dut/testpmd stacked bonded ports, set one slave of 1st level bonded
+device to down status, send tcp packet by scapy and check packet statistics.
+
+steps
+-----
+
+#. bind four ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create first bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+
+#. set portB 0 down::
+
+    ethtool --set-priv-flags {portA 0} link-down-on-close on
+    ifconfig {portA 0} down
+
+.. note::
+
+    The vf port link status cannot be changed directly. Change the peer port to make the vf port link down.
+
+#. create second bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 2 5
+    testpmd> add bonding slave 3 5
+
+#. set portB 2 down::
+
+    ethtool --set-priv-flags {portA 2} link-down-on-close on
+    ifconfig {portA 2} down
+
+.. note::
+
+    The vf port link status cannot be changed directly. Change the peer port to make the vf port link down.
+
+#. create third bonded port and add the first/second bonded ports as its slaves,
+   check that the slaves are added successfully::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+#. start top level bond port::
+
+    testpmd> port start 6
+    testpmd> start
+
+#. send 100 packets to portB pf0vf0/portB pf1vf0/portB pf2vf0/portB pf3vf0 separately::
+
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portB pf0>)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portB pf2>)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 3>)
+
+#. check that the first/second bonded ports each receive 100 packets and the third
+   bonded device receives 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
-- 
2.25.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dts] [PATCH V3 7/7] tests/vf_pmd_stacked_bonded: add cases to test vf bonded
  2023-03-21 17:40 [dts] [PATCH V3 0/7] add cases to test vf bonded Song Jiale
                   ` (5 preceding siblings ...)
  2023-03-21 17:40 ` [dts] [PATCH V3 6/7] test_plans/vf_pmd_stacked_bonded: " Song Jiale
@ 2023-03-21 17:40 ` Song Jiale
  2023-03-21 10:21   ` Peng, Yuan
  2023-03-28  0:57   ` lijuan.tu
  6 siblings, 2 replies; 10+ messages in thread
From: Song Jiale @ 2023-03-21 17:40 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale

add cases to test vf bonded.

Signed-off-by: Song Jiale <songx.jiale@intel.com>
---
 tests/TestSuite_vf_pmd_stacked_bonded.py | 613 +++++++++++++++++++++++
 1 file changed, 613 insertions(+)
 create mode 100644 tests/TestSuite_vf_pmd_stacked_bonded.py

diff --git a/tests/TestSuite_vf_pmd_stacked_bonded.py b/tests/TestSuite_vf_pmd_stacked_bonded.py
new file mode 100644
index 00000000..5765481e
--- /dev/null
+++ b/tests/TestSuite_vf_pmd_stacked_bonded.py
@@ -0,0 +1,613 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+#
+
+import time
+import traceback
+
+# import dts/framework libs
+import framework.utils as utils
+
+# import bonding lib
+import tests.bonding as bonding
+from framework.exception import VerifyFailure
+from framework.test_case import TestCase
+
+from .bonding import (
+    FRAME_SIZE_64,
+    MODE_ACTIVE_BACKUP,
+    MODE_ALB_BALANCE,
+    MODE_BROADCAST,
+    MODE_LACP,
+    MODE_ROUND_ROBIN,
+    MODE_TLB_BALANCE,
+    MODE_XOR_BALANCE,
+)
+
+
+class TestVFPmdStackedBonded(TestCase):
+
+    #
+    # On dut, dpdk bonding
+    #
+    def check_bonded_device_queue_config(self, *devices):
+        """
+        check if master bonded device/slave device queue configuration
+        is the same.
+        """
+        # get master bonded device queue configuration
+        master = self.bond_inst.get_port_info(devices[0], "queue_config")
+        # get slave device queue configuration
+        for port_id in devices[1:]:
+            config = self.bond_inst.get_port_info(port_id, "queue_config")
+            if config == master:
+                continue
+            msg = (
+                "slave bonded port [{0}] config is different from top bonded port [{1}]"
+            ).format(port_id, devices[0])
+            raise VerifyFailure("bonded device queue config:: " + msg)
+
+    def set_stacked_bonded(self, slaveGrpOne, slaveGrpTwo, bond_mode, ignore=False):
+        """
+        set stacked bonded mode for a custom bonding mode
+        """
+        inst = self.bond_inst
+        socket_id = self.dut.get_numa_id(self.bond_slave)
+        # create first bonded device 1, add slaves in it
+        bond_port_1 = inst.create_bonded_device(bond_mode, socket_id)
+        inst.add_slave(bond_port_1, False, "", *slaveGrpOne)
+        # create second bonded device 2, add slaves in it
+        bond_port_2 = inst.create_bonded_device(bond_mode, socket_id)
+        inst.add_slave(bond_port_2, False, "", *slaveGrpTwo)
+        # create master bonded device 3, which is the top bonded device
+        master_bond_port = inst.create_bonded_device(bond_mode, socket_id)
+        # add bond bonded device 1 to bonded device 3
+        # check bonding config status
+        inst.add_slave(master_bond_port, False, "", *[bond_port_1])
+        # add bonded device 2 to bonded device 3
+        # check bonding config status
+        inst.add_slave(master_bond_port, False, "", *[bond_port_2])
+        # check if master bonding/each slaves queue configuration is the same.
+        if not ignore:
+            self.check_bonded_device_queue_config(
+                *[master_bond_port, bond_port_1, bond_port_2]
+            )
+
+        return [bond_port_1, bond_port_2, master_bond_port]
+
+    def set_third_stacked_bonded(self, bond_port, bond_mode):
+        """
+        set third level stacked bonded to check if stacked level can be set
+        more than 2
+        """
+        inst = self.bond_inst
+        socket_id = self.dut.get_numa_id(self.bond_slave)
+        third_bond_port = inst.create_bonded_device(bond_mode, socket_id)
+        inst.add_slave(third_bond_port, False, "", *[bond_port])
+
+    def duplicate_add_stacked_bonded(self, bond_port_1, bond_port_2, master_bond_port):
+        """
+        check if adding duplicate stacked bonded device is forbidden
+        """
+        inst = self.bond_inst
+        # check exception process
+        expected_str = "Slave device is already a slave of a bonded device"
+        # add bonded device 1 to bonded device 3
+        # check bonding config status
+        inst.add_slave(master_bond_port, False, expected_str, *[bond_port_1])
+        # add bonded device 2 to bonded device 3
+        # check bonding config status
+        inst.add_slave(master_bond_port, False, expected_str, *[bond_port_2])
+
+    def preset_stacked_bonded(self, slaveGrpOne, slaveGrpTwo, bond_mode):
+        bond_port_1, bond_port_2, master_bond_port = self.set_stacked_bonded(
+            slaveGrpOne, slaveGrpTwo, bond_mode, ignore=True
+        )
+        portList = [
+            slaveGrpOne[0],
+            slaveGrpTwo[0],
+            bond_port_1,
+            bond_port_2,
+            master_bond_port,
+        ]
+        cmds = [
+            ["port stop all", ""],
+            ["set portlist " + ",".join([str(port) for port in portList]), ""],
+            # start top level bond port only, and let it propagate the start
+            # action to slave bond ports and its the real nics.
+            ["port start {}".format(master_bond_port), " ", 15],
+        ]
+        self.bond_inst.d_console(cmds)
+        # blank space command is used to skip LSC event to avoid core dumped issue
+        time.sleep(5)
+        cmds = [[" ", ""], ["start", ""]]
+        self.bond_inst.d_console(cmds)
+        time.sleep(5)
+
+        return bond_port_1, bond_port_2, master_bond_port
+
+    def send_packets_by_scapy(self, **kwargs):
+        tx_iface = kwargs.get("port topo")[0]
+        # set interface ready to send packet
+        self.dut1 = self.dut.new_session()
+        cmd = "ifconfig {0} up".format(tx_iface)
+        self.dut1.send_expect(cmd, "# ", 30)
+        # stream config
+        send_pkts = kwargs.get("stream")
+        # stream config
+        stream_configs = kwargs.get("traffic configs")
+        count = stream_configs.get("count")
+        interval = stream_configs.get("interval", 0.01)
+        # run traffic
+        self.dut1.send_expect("scapy", ">>> ", 30)
+        cmd = (
+            "sendp("
+            + send_pkts[0].command()
+            + f',iface="{tx_iface}",count={count},inter={interval},verbose=False)'
+        )
+        out = self.dut1.send_expect(cmd, ">>> ")
+        self.verify("Error" not in out, "scapy failed to send packets!!!")
+        self.dut1.send_expect("quit()", "# ")
+        self.dut.close_session(self.dut1)
+
+    #
+    # packet transmission
+    #
+    def traffic(self, traffic_config, ports, tport_is_up=True):
+        # get ports statistics before sending packets
+        stats_pre = self.bond_inst.get_all_stats(ports)
+        # send packets
+        if tport_is_up:
+            self.bond_inst.send_packet(traffic_config)
+        else:
+            self.send_packets_by_scapy(**traffic_config)
+        # get ports statistics after sending packets
+        stats_post = self.bond_inst.get_all_stats(ports)
+        # calculate ports statistics result
+        for port_id in ports:
+            stats_post[port_id]["RX-packets"] -= stats_pre[port_id]["RX-packets"]
+            stats_post[port_id]["TX-packets"] -= stats_pre[port_id]["TX-packets"]
+
+        return stats_post
+
+    def config_port_traffic(self, tx_port, rx_port, total_pkt):
+        """set traffic configuration"""
+        traffic_config = {
+            "port topo": [tx_port, rx_port],
+            "stream": self.bond_inst.set_stream_to_slave_port(rx_port),
+            "traffic configs": {
+                "count": total_pkt,
+            },
+        }
+
+        return traffic_config
+
+    def active_slave_rx(self, slave, bond_port, mode):
+        msg = "send packet to active slave port <{0}>".format(slave)
+        self.logger.info(msg)
+        tx_intf = self.tester.get_interface(
+            self.tester.get_local_port(self.dut_ports[slave])
+        )
+        # get traffic config
+        traffic_config = self.config_port_traffic(tx_intf, slave, self.total_pkt)
+        # select ports for statistics
+        ports = [slave, bond_port]
+        # run traffic
+        stats = self.traffic(traffic_config, ports)
+        # check slave statistics
+        msg = "port <{0}> Data not received by port <{1}>".format(tx_intf, slave)
+        self.verify(stats[slave]["RX-packets"] >= self.total_pkt, msg)
+        msg = "tester port {0}  <----> dut port {1} is ok".format(tx_intf, slave)
+        self.logger.info(msg)
+        # check bond port statistics
+        self.verify(
+            stats[bond_port]["RX-packets"] >= self.total_pkt,
+            "Bond port has unexpected RX packet count in {0}".format(mode),
+        )
+
+    def inactive_slave_rx(self, slave, bond_port, mode):
+        msg = "send packet to inactive slave port <{0}>".format(slave)
+        self.logger.info(msg)
+        dport_info0 = self.dut.ports_info[self.dut_ports[slave]]
+        tx_intf = dport_info0["intf"]
+        # get traffic config
+        traffic_config = self.config_port_traffic(tx_intf, slave, self.total_pkt)
+        # select ports for statistics
+        ports = [slave, bond_port]
+        # run traffic
+        stats = self.traffic(traffic_config, ports, tport_is_up=False)
+        # check slave statistics
+        msg = ("port <{0}> Data received by port <{1}>, " "but should not.").format(
+            tx_intf, slave
+        )
+        self.verify(stats[slave]["RX-packets"] == 0, msg)
+        msg = "tester port {0}  <-|  |-> VF port {1} is blocked".format(tx_intf, slave)
+        self.logger.info(msg)
+        # check bond port statistics
+        self.verify(
+            stats[bond_port]["RX-packets"] == 0,
+            "Bond port has unexpected RX packets in {0}".format(mode),
+        )
+
+    def set_port_status(self, vfs_id, tport_inface, status):
+        # stop slave link by force
+        cmd = "ifconfig {0} {1}".format(tport_inface, status)
+        self.tester.send_expect(cmd, "# ")
+        time.sleep(3)
+        vfs_id = vfs_id if isinstance(vfs_id, list) else [vfs_id]
+        for vf in vfs_id:
+            cur_status = self.bond_inst.get_port_info(vf, "link_status")
+            self.logger.info("port {0} is [{1}]".format(vf, cur_status))
+            self.verify(cur_status == status, "expected status is [{0}]".format(status))
+
+    def check_traffic_with_one_slave_down(self, mode):
+        """
+        Verify that transmitting packets correctly when set one slave of
+        the bonded device link down.
+        """
+        results = []
+        # -------------------------------
+        # boot up testpmd
+        self.bond_inst.start_testpmd(self.eal_param)
+        try:
+            slaves = {"active": [], "inactive": []}
+            # -------------------------------
+            # preset stacked bonded device
+            slaveGrpOne = self.slaveGrpOne
+            slaveGrpTwo = self.slaveGrpTwo
+            bond_port_1, bond_port_2, master_bond_port = self.preset_stacked_bonded(
+                slaveGrpOne, slaveGrpTwo, mode
+            )
+            # ---------------------------------------------------
+            # set one slave of first bonded device link down
+            primary_slave = slaveGrpOne[0]
+            tester_port = self.tester.get_local_port(primary_slave)
+            tport_iface = self.tester.get_interface(tester_port)
+            self.set_port_status(
+                vfs_id=primary_slave, tport_inface=tport_iface, status="down"
+            )
+            slaves["inactive"].append(primary_slave)
+            # get slave status
+            primary_port, active_slaves = self.bond_inst.get_active_slaves(bond_port_1)
+            slaves["active"].extend(active_slaves)
+            if primary_slave in slaves["active"]:
+                msg = "{0} should not be in active slaves list".format(primary_slave)
+                raise Exception(msg)
+            # ---------------------------------------------------
+            # set one slave of second bonded device link down
+            primary_slave = slaveGrpTwo[0]
+            tester_port = self.tester.get_local_port(primary_slave)
+            tport_iface = self.tester.get_interface(tester_port)
+            self.set_port_status(
+                vfs_id=primary_slave, tport_inface=tport_iface, status="down"
+            )
+            slaves["inactive"].append(primary_slave)
+            # check active slaves
+            primary_port_2, active_slaves_2 = self.bond_inst.get_active_slaves(
+                bond_port_2
+            )
+            slaves["active"].extend(active_slaves_2)
+            if primary_slave in slaves["active"]:
+                msg = "{0} should not be in active slaves list".format(primary_slave)
+                raise Exception(msg)
+            # traffic testing
+            # active slave traffic testing
+            for slave in slaves["active"]:
+                self.active_slave_rx(slave, master_bond_port, mode)
+            # inactive slave traffic testing
+            for slave in slaves["inactive"]:
+                self.inactive_slave_rx(slave, master_bond_port, mode)
+        except Exception as e:
+            results.append(e)
+            self.logger.error(traceback.format_exc())
+        finally:
+            self.bond_inst.close_testpmd()
+
+        return results
+
+    def check_traffic(self, mode):
+        """normal traffic with all slaves are under active status.
+        verify the RX packets are all correct with stacked bonded device.
+        bonded device's statistics should be the sum of slaves statistics.
+        """
+        self.bond_inst.start_testpmd(self.eal_param)
+        slaveGrpOne = self.slaveGrpOne
+        slaveGrpTwo = self.slaveGrpTwo
+        bond_port_1, bond_port_2, master_bond_port = self.preset_stacked_bonded(
+            slaveGrpOne, slaveGrpTwo, mode
+        )
+        results = []
+        # check first bonded device
+        try:
+            self.logger.info("check first bonded device")
+            # active slave traffic testing
+            for slave in slaveGrpOne:
+                self.active_slave_rx(slave, bond_port_1, mode)
+        except Exception as e:
+            results.append(e)
+        # check second bonded device
+        try:
+            self.logger.info("check second bonded device")
+            # active slave traffic testing
+            for slave in slaveGrpTwo:
+                self.active_slave_rx(slave, bond_port_2, mode)
+        except Exception as e:
+            results.append(e)
+
+        # check top bonded device
+        try:
+            self.logger.info("check master bonded device")
+            # active slave traffic testing
+            for slave in slaveGrpOne + slaveGrpTwo:
+                self.active_slave_rx(slave, master_bond_port, mode)
+        except Exception as e:
+            results.append(e)
+
+        self.bond_inst.close_testpmd()
+
+        return results
+
+    def backup_check_traffic(self):
+        mode = MODE_ACTIVE_BACKUP
+        msg = "begin checking bonding backup(stacked) mode transmission"
+        self.logger.info(msg)
+        results = self.check_traffic(mode)
+        if results:
+            for item in results:
+                self.logger.error(item)
+            raise VerifyFailure("backup(stacked) mode: rx failed")
+
+    def backup_check_traffic_with_slave_down(self):
+        mode = MODE_ACTIVE_BACKUP
+        self.logger.info(
+            "begin checking bonding backup(stacked) "
+            "mode transmission with one slave down"
+        )
+        results = self.check_traffic_with_one_slave_down(mode)
+        if results:
+            for item in results:
+                self.logger.error(item)
+            msg = "backup(stacked) mode: rx with one slave down failed"
+            raise VerifyFailure(msg)
+
+    def xor_check_rx(self):
+        mode = MODE_XOR_BALANCE
+        msg = "begin checking bonding xor(stacked) mode transmission"
+        self.logger.info(msg)
+        results = self.check_traffic(mode)
+        if results:
+            for item in results:
+                self.logger.error(item)
+            raise VerifyFailure("xor(stacked) mode: rx failed")
+
+    def xor_check_stacked_rx_one_slave_down(self):
+        mode = MODE_XOR_BALANCE
+        self.logger.info(
+            "begin checking bonding xor(stacked) mode "
+            "transmission with one slave down"
+        )
+        results = self.check_traffic_with_one_slave_down(mode)
+        if results:
+            for item in results:
+                self.logger.error(item)
+            msg = "xor(stacked) mode: rx with one slave down failed"
+            raise VerifyFailure(msg)
+
+    #
+    # Test cases.
+    #
+    def set_up_all(self):
+        """
+        Run before each test suite
+        """
+        self.verify("bsdapp" not in self.target, "Bonding not support freebsd")
+        self.dut_ports = self.dut.get_ports()
+        self.dport_info0 = self.dut.ports_info[self.dut_ports[0]]
+        self.dport_ifaces = self.dport_info0["intf"]
+        num_ports = len(self.dut_ports)
+        self.verify(num_ports == 2 or num_ports == 4, "Insufficient ports")
+        tester_port0 = self.tester.get_local_port(self.dut_ports[0])
+        self.tport_iface0 = self.tester.get_interface(tester_port0)
+        self.flag = "link-down-on-close"
+        self.default_stats = self.tester.get_priv_flags_state(
+            self.tport_iface0, self.flag
+        )
+        # enable the peer port "link-down-on-close"
+        if self.default_stats:
+            for port in self.dut_ports:
+                tester_port = self.tester.get_local_port(port)
+                tport_iface = self.tester.get_interface(tester_port)
+                self.tester.send_expect(
+                    "ethtool --set-priv-flags %s %s on" % (tport_iface, self.flag), "# "
+                )
+        sep_index = len(self.dut_ports) // 2
+        # separate ports into two group as first level bond ports' slaves
+        self.slaveGrpOne = self.dut_ports[:sep_index]
+        self.slaveGrpTwo = self.dut_ports[sep_index:]
+        self.bond_slave = self.dut_ports[0]
+        # initialize bonding common methods name
+        self.total_pkt = 100
+        config = {
+            "parent": self,
+            "pkt_name": "udp",
+            "pkt_size": FRAME_SIZE_64,
+            "src_mac": "52:00:00:00:00:03",
+            "src_ip": "10.239.129.65",
+            "src_port": 61,
+            "dst_ip": "10.239.129.88",
+            "dst_port": 53,
+        }
+        self.bond_inst = bonding.PmdBonding(**config)
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        self.create_vfs(pf_list=self.dut_ports, vf_num=1)
+        self.eal_param = ""
+        for pci in self.vfs_pci:
+            self.eal_param += " -a %s" % pci
+
+    def create_vfs(self, pf_list, vf_num):
+        self.sriov_vfs_port = []
+        self.vfs_pci = []
+        self.dut.bind_interfaces_linux(self.kdriver)
+        pf_list = pf_list if isinstance(pf_list, list) else [pf_list]
+        for pf_id in pf_list:
+            self.dut.generate_sriov_vfs_by_port(pf_id, vf_num)
+            self.sriov_vfs_port += self.dut.ports_info[self.dut_ports[pf_id]][
+                "vfs_port"
+            ]
+            dport_iface = self.dut.ports_info[self.dut_ports[pf_id]]["intf"]
+            self.dut.send_expect(
+                "ip link set %s vf 0 spoofchk off" % (dport_iface), "# "
+            )
+        for vf in self.sriov_vfs_port:
+            self.vfs_pci.append(vf.pci)
+        try:
+            for port in self.sriov_vfs_port:
+                port.bind_driver(self.drivername)
+        except Exception:
+            self.dut.destroy_all_sriov_vfs()
+            raise
+
+    def test_basic_behav(self):
+        """
+        Test Case: basic behavior
+        Stacked bonding allows a bonded device to be added to another
+        bonded device. There are two limitations when creating a master
+        bond device:
+
+         - the total depth of nesting is limited to two levels,
+         - 802.3ad mode is not supported if one or more slaves is a bond device.
+
+        note: 802.3ad mode cannot be used on this bond device.
+
+        This case is aimed at testing the basic behavior of stacked bonded
+        commands.
+        """
+        # ------------------------------------------------
+        # check stacked bonded status, except mode 4 (802.3ad)
+        mode_list = [
+            MODE_ROUND_ROBIN,
+            MODE_ACTIVE_BACKUP,
+            MODE_XOR_BALANCE,
+            MODE_BROADCAST,
+            MODE_TLB_BALANCE,
+            MODE_ALB_BALANCE,
+        ]
+        slaveGrpOne = self.slaveGrpOne
+        slaveGrpTwo = self.slaveGrpTwo
+        check_result = []
+        for bond_mode in mode_list:
+            self.logger.info("begin mode <{0}> checking".format(bond_mode))
+            # boot up testpmd
+            self.bond_inst.start_testpmd(self.eal_param)
+            try:
+                self.logger.info("check bonding mode <{0}>".format(bond_mode))
+                # set up stacked bonded status
+                bond_port_1, bond_port_2, master_bond_port = self.set_stacked_bonded(
+                    slaveGrpOne, slaveGrpTwo, bond_mode
+                )
+                # check duplicate add slave
+                self.duplicate_add_stacked_bonded(
+                    bond_port_1, bond_port_2, master_bond_port
+                )
+                # check stacked limitation
+                self.set_third_stacked_bonded(master_bond_port, bond_mode)
+                # quit testpmd; resetting testpmd is not supported
+                self.logger.info("mode <{0}> done !".format(bond_mode))
+                check_result.append([bond_mode, None])
+            except Exception as e:
+                check_result.append([bond_mode, e])
+                self.logger.error(e)
+            finally:
+                self.bond_inst.close_testpmd()
+                time.sleep(5)
+        # ------------------------------------------------
+        # 802.3ad mode is not supported
+        # if one or more slaves is a bond device,
+        # so it should raise an exception
+        msg = ""
+        try:
+            # boot up testpmd
+            self.bond_inst.start_testpmd(self.eal_param)
+            # set up stacked bonded status
+            self.set_stacked_bonded(slaveGrpOne, slaveGrpTwo, MODE_LACP)
+            # quit testpmd; resetting testpmd is not supported
+            msg = "802.3ad mode was not forbidden in the stacked bonded setting"
+            check_result.append([MODE_LACP, msg])
+        except Exception as e:
+            check_result.append([MODE_LACP, None])
+        finally:
+            self.bond_inst.close_testpmd()
+
+        exception_flag = False
+        for bond_mode, e in check_result:
+            msg = "mode <{0}>".format(bond_mode)
+            if e:
+                self.logger.info(msg)
+                self.logger.error(e)
+                exception_flag = True
+            else:
+                self.logger.info(msg + " done !")
+        # if any checking item failed, raise an exception
+        if exception_flag:
+            raise VerifyFailure("some test items failed")
+        else:
+            self.logger.info("all test items passed")
+
+    def test_mode_backup_rx(self):
+        """
+        Test Case: active-backup stacked bonded rx traffic
+        """
+        self.backup_check_traffic()
+
+    def test_mode_backup_one_slave_down(self):
+        """
+        Test Case: active-backup stacked bonded rx traffic with slave down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        self.backup_check_traffic_with_slave_down()
+
+    def test_mode_xor_rx(self):
+        """
+        Test Case: balance-xor stacked bonded rx traffic
+        """
+        self.xor_check_rx()
+
+    def test_mode_xor_rx_one_slave_down(self):
+        """
+        Test Case: balance-xor stacked bonded rx traffic with slave down
+        """
+        self.verify(self.default_stats, "tester port does not support '%s'" % self.flag)
+        self.xor_check_stacked_rx_one_slave_down()
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        try:
+            self.bond_inst.close_testpmd()
+        except Exception:
+            self.dut.kill_all()
+        self.dut.destroy_all_sriov_vfs()
+        for port in self.dut_ports:
+            tport = self.tester.get_local_port(port)
+            tport_iface = self.tester.get_interface(tport)
+            cmd = "ifconfig {0} up".format(tport_iface)
+            self.tester.send_expect(cmd, "# ")
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.kill_all()
+        if self.default_stats:
+            for port in self.dut_ports:
+                tester_port = self.tester.get_local_port(port)
+                tport_iface = self.tester.get_interface(tester_port)
+                self.tester.send_expect(
+                    "ethtool --set-priv-flags %s %s %s"
+                    % (tport_iface, self.flag, self.default_stats),
+                    "# ",
+                )
-- 
2.25.1


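For reference, the slave-group split that `set_up_all` performs in this patch (halving `self.dut_ports` into the two first-level bond groups via `sep_index`) can be sketched standalone. The helper name below is illustrative only, not part of the patch:

```python
# Illustrative sketch (not part of the patch): split DUT ports into two
# equal groups, mirroring the sep_index logic in set_up_all. Each group
# becomes the slave set of one first-level bond device; the two bond
# devices are then enslaved to the second-level (master) bond device.
def split_slave_groups(ports):
    sep_index = len(ports) // 2
    return ports[:sep_index], ports[sep_index:]

# With 4 ports, each first-level bond device gets two slaves.
print(split_slave_groups([0, 1, 2, 3]))  # ([0, 1], [2, 3])
```

With 2 ports (the other count `set_up_all` accepts), each first-level bond device gets a single slave.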
^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dts] [PATCH V3 7/7] tests/vf_pmd_stacked_bonded: add cases to test vf bonded
  2023-03-21 17:40 ` [dts] [PATCH V3 7/7] tests/vf_pmd_stacked_bonded: " Song Jiale
  2023-03-21 10:21   ` Peng, Yuan
@ 2023-03-28  0:57   ` lijuan.tu
  1 sibling, 0 replies; 10+ messages in thread
From: lijuan.tu @ 2023-03-28  0:57 UTC (permalink / raw)
  To: dts, Song Jiale; +Cc: Song Jiale

On Tue, 21 Mar 2023 17:40:13 +0000, Song Jiale <songx.jiale@intel.com> wrote:
> add cases to test vf bonded.
> 
> Signed-off-by: Song Jiale <songx.jiale@intel.com>


Series applied, thanks

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2023-03-28  0:57 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-21 17:40 [dts] [PATCH V3 0/7] add cases to test vf bonded Song Jiale
2023-03-21 17:40 ` [dts] [PATCH V3 1/7] test_plans/index: add 3 test suites " Song Jiale
2023-03-21 17:40 ` [dts] [PATCH V3 2/7] test_plans/vf_pmd_bonded: add cases " Song Jiale
2023-03-21 17:40 ` [dts] [PATCH V3 3/7] tests/vf_pmd_bonded: " Song Jiale
2023-03-21 17:40 ` [dts] [PATCH V3 4/7] test_plans/vf_pmd_bonded_8023ad: " Song Jiale
2023-03-21 17:40 ` [dts] [PATCH V3 5/7] tests/vf_pmd_bonded_8023ad: " Song Jiale
2023-03-21 17:40 ` [dts] [PATCH V3 6/7] test_plans/vf_pmd_stacked_bonded: " Song Jiale
2023-03-21 17:40 ` [dts] [PATCH V3 7/7] tests/vf_pmd_stacked_bonded: " Song Jiale
2023-03-21 10:21   ` Peng, Yuan
2023-03-28  0:57   ` lijuan.tu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).