From: Song Jiale <songx.jiale@intel.com>
To: dts@dpdk.org
Cc: Song Jiale <songx.jiale@intel.com>
Subject: [dts] [PATCH V2 6/7] test_plans/vf_pmd_stacked_bonded: add test plan for vf bonding
Date: Fri, 6 Jan 2023 09:32:08 +0000
Message-ID: <20230106093209.317472-7-songx.jiale@intel.com>
In-Reply-To: <20230106093209.317472-1-songx.jiale@intel.com>
add test plan for vf bonding.
Signed-off-by: Song Jiale <songx.jiale@intel.com>
---
.../vf_pmd_stacked_bonded_test_plan.rst | 406 ++++++++++++++++++
1 file changed, 406 insertions(+)
create mode 100644 test_plans/vf_pmd_stacked_bonded_test_plan.rst
diff --git a/test_plans/vf_pmd_stacked_bonded_test_plan.rst b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
new file mode 100644
index 00000000..ab6c3287
--- /dev/null
+++ b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
@@ -0,0 +1,406 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2023 Intel Corporation
+
+==============
+Stacked Bonded
+==============
+
+The stacked bonding mechanism allows a bonded port to be added to another bonded port.
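+
+In testpmd terms, a minimal sketch of the idea (port IDs are illustrative and match
+the topology used in the test cases below)::
+
+ testpmd> create bonded device 1 0    # first-level bond, e.g. port 4
+ testpmd> add bonding slave 0 4       # VF slave
+ testpmd> create bonded device 1 0    # first-level bond, e.g. port 5
+ testpmd> add bonding slave 1 5       # VF slave
+ testpmd> create bonded device 1 0    # top-level bond, e.g. port 6
+ testpmd> add bonding slave 4 6       # a bonded port added as a slave of another bond
+ testpmd> add bonding slave 5 6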
+
+The demand arises from a discussion with a prospective customer for a 100G NIC
+based on RRC. The customer already uses Mellanox 100G NICs. The Mellanox 100G NIC
+exposes a full x16 PCIe interface, so the host sees a single netdev and that
+netdev corresponds directly to the 100G Ethernet port. In their current system
+they bond multiple 100G NICs together using the DPDK bonding API in their
+application. They are interested in an alternative source for the 100G NIC and
+are in conversation with Silicom, who ship a 100G RRC-based NIC (something like
+Boulder Rapids). The issue they have with the RRC NIC is that it presents as two
+PCIe interfaces (netdevs) instead of one. The question is whether DPDK bonding
+could operate at the first level on the two RRC netdevs to present a single
+netdev, so that the application could then bond multiple of these bonded
+interfaces to implement NIC bonding.
+
+Prerequisites
+=============
+
+hardware configuration
+----------------------
+
+All link ports of the tester/DUT should run at the same data rate and support
+full-duplex. Slave-down test cases need at least four ports; the other test cases
+can run with two ports.
+
+NIC/DUT/TESTER port requirements:
+
+- Tester: 2 NIC ports
+- DUT: 2 NIC ports
+
+Enable ``link-down-on-close`` on the tester ports::
+
+ ethtool --set-priv-flags {tport_iface0} link-down-on-close on
+ ethtool --set-priv-flags {tport_iface1} link-down-on-close on
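+
+To confirm that the flag took effect, the private flags can be read back (interface
+names are the same placeholders as above)::
+
+ ethtool --show-priv-flags {tport_iface0} | grep link-down-on-close
+ ethtool --show-priv-flags {tport_iface1} | grep link-down-on-close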
+
+Create 2 VFs on each of the two DUT ports::
+
+ echo 2 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
+ echo 2 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs
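+
+The PCI addresses of the new VFs (used in the bind step of each test case) can be
+listed with, for example::
+
+ lspci -D | grep -i "Virtual Function"
+ ./usertools/dpdk-devbind.py --status-dev net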
+
+Port topology diagram (2 peer links)::
+
+ TESTER DUT
+ physical link logical link
+ .---------. .------------------------------------------------.
+ | portA 0 | <------------> | portB pf0vf0 <---> .--------. |
+ | | | | bond 0 | <-----> .------. |
+ | portA 1 | <------------> | portB pf1vf0 <---> '--------' | | |
+ | | | |bond2 | |
+ | portA 0 | <------------> | portB pf0vf1 <---> .--------. | | |
+ | | | | bond 1 | <-----> '------' |
+ | portA 1 | <------------> | portB pf1vf1 <---> '--------' |
+ '---------' '------------------------------------------------'
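+
+The VF MAC addresses referenced in the traffic test cases below ({pf0_vf0_mac} and
+so on) can be read from the corresponding PF interface on the DUT (interface names
+are placeholders), for example::
+
+ ip link show {portB pf0}
+ ip link show {portB pf1}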
+
+Test cases
+==========
+The ``tx-offloads`` value is set based on the NIC type. The steps of the
+slave-down test cases are based on 4 ports; the steps of the other test cases
+are based on 2 ports.
+
+Test Case: basic behavior
+=========================
+A bonded port may be added to another bonded port. This is supported by the
+following modes::
+
+ balance-rr 0
+ active-backup 1
+ balance-xor 2
+ broadcast 3
+ balance-tlb 5
+ balance-alb 6
+
+#. 802.3ad mode is not supported if one or more of the slaves is a bonded device.
+#. add the same device twice to check that the exception handling is correct
+ (see the sketch after this list).
+#. the master bonded port and each of its slaves have the same queue configuration.
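+
+A minimal sketch of the two negative checks, assuming the port numbering used in
+the steps below (ports 0-3 are the VFs, bonded devices are created from port 4
+onwards); the failures are the expected outcomes, not captured output::
+
+ testpmd> create bonded device 4 0    # 802.3ad bond, becomes port 4
+ testpmd> create bonded device 0 0    # balance-rr bond, becomes port 5
+ testpmd> add bonding slave 5 4       # expected to fail: mode 4 rejects a bonded slave
+ testpmd> add bonding slave 0 5
+ testpmd> add bonding slave 0 5       # adding the same device twice should be rejected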
+
+steps
+-----
+
+#. bind the four VF ports::
+
+ ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>
+
+#. boot up testpmd, stop all ports::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+ testpmd> port stop all
+
+#. create the first bonded port and add two slaves, then check the bond 4 config status::
+
+ testpmd> create bonded device <mode> 0
+ testpmd> add bonding slave 0 4
+ testpmd> add bonding slave 2 4
+ testpmd> show bonding config 4
+
+#. create the second bonded port and add two slaves, then check the bond 5 config status::
+
+ testpmd> create bonded device <mode> 0
+ testpmd> add bonding slave 1 5
+ testpmd> add bonding slave 3 5
+ testpmd> show bonding config 5
+
+#. create the third bonded port and add the first/second bonded ports as its slaves,
+ then check that the slaves were added successfully. Stacked bonding is forbidden
+ in mode 4 (802.3ad): mode 4 will fail to add a bonded port as its slave::
+
+ testpmd> create bonded device <mode> 0
+ testpmd> add bonding slave 4 6
+ testpmd> add bonding slave 5 6
+ testpmd> show bonding config 6
+
+#. check that the queue configuration of the master bonded port and of each
+ slave port is the same::
+
+ testpmd> show bonding config 0
+ testpmd> show bonding config 1
+ testpmd> show bonding config 2
+ testpmd> show bonding config 3
+ testpmd> show bonding config 4
+ testpmd> show bonding config 5
+ testpmd> show bonding config 6
+
+#. start the top level bond port to check the ports' start action::
+
+ testpmd> port start 6
+ testpmd> start
+
+#. close testpmd::
+
+ testpmd> stop
+ testpmd> quit
+
+#. repeat the above steps with the following mode numbers::
+
+ balance-rr 0
+ active-backup 1
+ balance-xor 2
+ broadcast 3
+ 802.3ad 4
+ balance-tlb 5
+
+Test Case: active-backup stacked bonded rx traffic
+==================================================
+Set up DUT/testpmd stacked bonded ports, send TCP packets with scapy, and check
+the testpmd packet statistics.
+
+steps
+-----
+
+#. bind the four VF ports::
+
+ ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>
+
+#. boot up testpmd, stop all ports::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+ testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves::
+
+ testpmd> create bonded device 1 0
+ testpmd> add bonding slave 0 4
+ testpmd> add bonding slave 2 4
+
+#. create the second bonded port and add two ports as slaves::
+
+ testpmd> create bonded device 1 0
+ testpmd> add bonding slave 1 5
+ testpmd> add bonding slave 3 5
+
+#. create the third bonded port and add the first/second bonded ports as its
+ slaves, then check that the slaves were added successfully::
+
+ testpmd> create bonded device 1 0
+ testpmd> add bonding slave 4 6
+ testpmd> add bonding slave 5 6
+ testpmd> show bonding config 6
+
+#. start top level bond port::
+
+ testpmd> port start 6
+ testpmd> start
+
+#. send 100 TCP packets to each VF through portA 0 and portA 1::
+
+ sendp([Ether(dst={pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>, count=100)
+ sendp([Ether(dst={pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>, count=100)
+ sendp([Ether(dst={pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
+ sendp([Ether(dst={pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
+
+#. the first/second bonded ports should receive 400 packets, and the third bonded
+ port should receive 800 packets::
+
+ testpmd> show port stats all
+
+#. close testpmd::
+
+ testpmd> stop
+ testpmd> quit
+
+Test Case: active-backup stacked bonded rx traffic with slave down
+==================================================================
+Set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level bonded
+port to down status, send TCP packets with scapy, and check the testpmd packet
+statistics.
+
+steps
+-----
+
+#. bind four ports::
+
+ ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>
+
+#. boot up testpmd, stop all ports::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+ testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves::
+
+ testpmd> create bonded device 1 0
+ testpmd> add bonding slave 0 4
+ testpmd> add bonding slave 2 4
+
+#. set portB pf0vf0 and pf0vf1 down by bringing the peer tester port
+ (portA 0) down::
+
+ ethtool --set-priv-flags {portA 0} link-down-on-close on
+ ifconfig {portA 0} down
+
+.. note::
+
+ The VF port link status cannot be changed directly. Bring down the peer (tester)
+ port to force the VF port link down.
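+
+ To verify that the VF links actually went down, the link status can be read in
+ testpmd (ports 0 and 1 correspond to pf0vf0 and pf0vf1, assuming the VFs were
+ bound in the order shown in the topology above)::
+
+  testpmd> show port info 0
+  testpmd> show port info 1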
+
+#. create the second bonded port and add two ports as slaves::
+
+ testpmd> create bonded device 1 0
+ testpmd> add bonding slave 1 5
+ testpmd> add bonding slave 3 5
+
+#. create the third bonded port and add the first/second bonded ports as its
+ slaves, then check that the slaves were added successfully::
+
+ testpmd> create bonded device 1 0
+ testpmd> add bonding slave 4 6
+ testpmd> add bonding slave 5 6
+ testpmd> show bonding config 6
+
+#. start top level bond port::
+
+ testpmd> port start 6
+ testpmd> start
+
+#. send 100 packets to portB pf0vf0/portB pf0vf1/portB pf1vf0/portB pf1vf1 separately
+ (the pf0 VFs are reached through the DUT PF interface because portA 0 is down)::
+
+ sendp([Ether(dst={pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portB pf0>, count=100)
+ sendp([Ether(dst={pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portB pf0>, count=100)
+ sendp([Ether(dst={pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
+ sendp([Ether(dst={pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
+
+#. check that the first/second bonded ports receive 100 packets each and the
+ third bonded device receives 200 packets::
+
+ testpmd> show port stats all
+
+#. close testpmd::
+
+ testpmd> stop
+ testpmd> quit
+
+Test Case: balance-xor stacked bonded rx traffic
+================================================
+Set up DUT/testpmd stacked bonded ports, send TCP packets with scapy, and check
+the packet statistics.
+
+steps
+-----
+
+#. bind the four VF ports::
+
+ ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>
+
+#. boot up testpmd, stop all ports::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+ testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves::
+
+ testpmd> create bonded device 2 0
+ testpmd> add bonding slave 0 4
+ testpmd> add bonding slave 2 4
+
+#. create the second bonded port and add two ports as slaves::
+
+ testpmd> create bonded device 2 0
+ testpmd> add bonding slave 1 5
+ testpmd> add bonding slave 3 5
+
+#. create the third bonded port and add the first/second bonded ports as its
+ slaves, then check that the slaves were added successfully::
+
+ testpmd> create bonded device 2 0
+ testpmd> add bonding slave 4 6
+ testpmd> add bonding slave 5 6
+ testpmd> show bonding config 6
+
+#. start top level bond port::
+
+ testpmd> port start 6
+ testpmd> start
+
+#. send 100 TCP packets to each VF through portA 0 and portA 1::
+
+ sendp([Ether(dst={pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>, count=100)
+ sendp([Ether(dst={pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>, count=100)
+ sendp([Ether(dst={pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
+ sendp([Ether(dst={pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
+
+#. check that the first/second bonded ports receive 200 packets each and the
+ third bonded device receives 400 packets::
+
+ testpmd> show port stats all
+
+#. close testpmd::
+
+ testpmd> stop
+ testpmd> quit
+
+Test Case: balance-xor stacked bonded rx traffic with slave down
+================================================================
+Set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level bonded
+device to down status, send TCP packets with scapy, and check the packet statistics.
+
+steps
+-----
+
+#. bind four ports::
+
+ ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>
+
+#. boot up testpmd, stop all ports::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+ testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves, then stop port 1::
+
+ testpmd> create bonded device 2 0
+ testpmd> add bonding slave 0 4
+ testpmd> add bonding slave 2 4
+ testpmd> port stop 1
+
+#. create the second bonded port and add two ports as slaves, then stop port 3::
+
+ testpmd> create bonded device 2 0
+ testpmd> add bonding slave 1 5
+ testpmd> add bonding slave 3 5
+ testpmd> port stop 3
+
+#. set portB pf0vf0 and pf0vf1 down by bringing the peer tester port
+ (portA 0) down::
+
+ ethtool --set-priv-flags {portA 0} link-down-on-close on
+ ifconfig {portA 0} down
+
+.. note::
+
+ The VF port link status cannot be changed directly. Bring down the peer (tester)
+ port to force the VF port link down.
+
+#. create the third bonded port and add the first/second bonded ports as its
+ slaves, then check that the slaves were added successfully::
+
+ testpmd> create bonded device 2 0
+ testpmd> add bonding slave 4 6
+ testpmd> add bonding slave 5 6
+ testpmd> show bonding config 6
+
+#. start top level bond port::
+
+ testpmd> port start 6
+ testpmd> start
+
+#. send 100 packets to portB pf0vf0/portB pf0vf1/portB pf1vf0/portB pf1vf1 separately::
+
+ sendp([Ether(dst={pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portB pf0>, count=100)
+ sendp([Ether(dst={pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portB pf0>, count=100)
+ sendp([Ether(dst={pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
+ sendp([Ether(dst={pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
+
+#. check that the first/second bonded ports receive 100 packets each and the
+ third bonded device receives 200 packets::
+
+ testpmd> show port stats all
+
+#. close testpmd::
+
+ testpmd> stop
+ testpmd> quit
+
--
2.25.1