From: Song Jiale
To: dts@dpdk.org
Cc: Song Jiale
Subject: [dts] [PATCH V2 6/7] test_plans/vf_pmd_stacked_bonded: add test plan for vf bonding
Date: Fri, 6 Jan 2023 09:32:08 +0000
Message-Id: <20230106093209.317472-7-songx.jiale@intel.com>
In-Reply-To: <20230106093209.317472-1-songx.jiale@intel.com>
References: <20230106093209.317472-1-songx.jiale@intel.com>

add test plan for vf bonding.

Signed-off-by: Song Jiale
---
 .../vf_pmd_stacked_bonded_test_plan.rst | 406 ++++++++++++++++++
 1 file changed, 406 insertions(+)
 create mode 100644 test_plans/vf_pmd_stacked_bonded_test_plan.rst

diff --git a/test_plans/vf_pmd_stacked_bonded_test_plan.rst b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
new file mode 100644
index 00000000..ab6c3287
--- /dev/null
+++ b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
@@ -0,0 +1,406 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 Intel Corporation
+
+==============
+Stacked Bonded
+==============
+
+The stacked bonding mechanism allows a bonded port to be added to another
+bonded port.
+
+The demand arises from a discussion with a prospective customer for a 100G NIC
+based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
+support a proper x16 PCIe interface, so the host sees a single netdev and that
+netdev corresponds directly to the 100G Ethernet port. They indicated that in
+their current system they bond multiple 100G NICs together using the DPDK
+bonding API in their application.
+They are interested in looking at an alternative source for the 100G NIC and
+are in conversation with Silicom, who are shipping a 100G RRC based NIC
+(something like Boulder Rapids). The issue they have with the RRC NIC is that
+it presents as two PCIe interfaces (netdevs) instead of one. If DPDK bonding
+could operate at the first level on the two RRC netdevs to present a single
+netdev, the application could then bond multiple of these bonded interfaces to
+implement NIC bonding.
+
+Prerequisites
+=============
+
+hardware configuration
+----------------------
+
+All link ports of the tester/DUT should run at the same data rate and support
+full-duplex. Slave-down test cases need at least four ports; the other test
+cases can run with two ports.
+
+NIC/DUT/TESTER port requirements:
+
+- Tester: 2 NIC ports
+- DUT: 2 NIC ports
+
+Enable ``link-down-on-close`` on the tester::
+
+    ethtool --set-priv-flags {tport_iface0} link-down-on-close on
+    ethtool --set-priv-flags {tport_iface1} link-down-on-close on
+
+Create 2 VFs on each of the two DUT ports::
+
+    echo 2 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
+    echo 2 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs
+
+Port topology diagram (2 peer links)::
+
+      TESTER                                            DUT
+                  physical link                  logical link
+    .---------.                .-------------------------------------------------.
+    | portA 0 | <------------> | portB pf0vf0 <---> .--------.                   |
+    |         |                |                    | bond 0 | <-----> .------.  |
+    | portA 1 | <------------> | portB pf1vf0 <---> '--------'         |      |  |
+    |         |                |                                       |bond2 |  |
+    | portA 0 | <------------> | portB pf0vf1 <---> .--------.         |      |  |
+    |         |                |                    | bond 1 | <-----> '------'  |
+    | portA 1 | <------------> | portB pf1vf1 <---> '--------'                   |
+    '---------'                '-------------------------------------------------'
+
+Test cases
+==========
+The ``tx-offloads`` value is set based on the NIC type. The steps of the test
+cases that cover slave-down handling are based on 4 ports; the steps of the
+other test cases are based on 2 ports.
+
+Test Case: basic behavior
+=========================
+Adding a bonded port to another bonded port is supported by the following
+modes::
+
+    balance-rr    0
+    active-backup 1
+    balance-xor   2
+    broadcast     3
+    balance-tlb   5
+    balance-alb   6
+
+#. 802.3ad mode is not supported if one or more slaves is a bond device.
+#. add the same device twice to check that the exception handling is good.
+#. the master bonded port and each slave have the same queue configuration.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+    testpmd> port stop all
+
+#. create the first bonded port and add two slaves, check the bond 4 config status::
+
+    testpmd> create bonded device 0 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 2 4
+    testpmd> show bonding config 4
+
+#. create the second bonded port and add two slaves, check the bond 5 config status::
+
+    testpmd> create bonded device 0 0
+    testpmd> add bonding slave 1 5
+    testpmd> add bonding slave 3 5
+    testpmd> show bonding config 5
+
+#. create the third bonded port and add the first/second bonded ports as its
+   slaves, check that the slaves are added successfully. Stacked bonding is
+   forbidden in mode 4: mode 4 will fail to add a bonded port as its slave::
+
+    testpmd> create bonded device 0 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+#. check that the master bonded port's and each slave port's queue
+   configurations are the same::
+
+    testpmd> show bonding config 0
+    testpmd> show bonding config 1
+    testpmd> show bonding config 2
+    testpmd> show bonding config 3
+    testpmd> show bonding config 4
+    testpmd> show bonding config 5
+    testpmd> show bonding config 6
+
+#. start the top level bond port to check the port start action::
+
+    testpmd> port start 6
+    testpmd> start
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+#. repeat the steps above with the following mode numbers::
+
+    balance-rr    0
+    active-backup 1
+    balance-xor   2
+    broadcast     3
+    802.3ad       4
+    balance-tlb   5
+
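+The sequences above are typed at the interactive testpmd prompt. As a minimal
+illustration only (this is not how the DTS framework drives testpmd), the bond
+setup of this test case could be scripted with pexpect; the EAL arguments and
+the port/bond numbers below are taken from the steps above, everything else is
+an assumption::
+
+    import pexpect  # assumption: testpmd is driven through its interactive prompt
+
+    PROMPT = "testpmd> "
+
+    def testpmd_cmd(session, cmd):
+        """Send one command to the testpmd prompt and return its output."""
+        session.sendline(cmd)
+        session.expect_exact(PROMPT)
+        return session.before.decode()
+
+    # EAL arguments as used in the steps above; adjust to the local setup
+    session = pexpect.spawn(
+        "./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i")
+    session.expect_exact(PROMPT)
+
+    testpmd_cmd(session, "port stop all")
+    # first/second level bonds: mode 0 on socket 0, VF ports 0-3 as slaves
+    for bond, slaves in ((4, (0, 2)), (5, (1, 3))):
+        testpmd_cmd(session, "create bonded device 0 0")
+        for slave in slaves:
+            testpmd_cmd(session, "add bonding slave %d %d" % (slave, bond))
+    # top level bond 6 with the two first-level bonds as slaves
+    testpmd_cmd(session, "create bonded device 0 0")
+    testpmd_cmd(session, "add bonding slave 4 6")
+    testpmd_cmd(session, "add bonding slave 5 6")
+    print(testpmd_cmd(session, "show bonding config 6"))
+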
+Test Case: active-backup stacked bonded rx traffic
+==================================================
+Set up the DUT/testpmd stacked bonded ports, send TCP packets with scapy and
+check the testpmd packet statistics.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+    testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 2 4
+
+#. create the second bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 1 5
+    testpmd> add bonding slave 3 5
+
+#. create the third bonded port and add the first/second bonded ports as its
+   slaves, check that the slaves are added successfully::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+#. start the top level bond port::
+
+    testpmd> port start 6
+    testpmd> start
+
+#. send 100 TCP packets to each VF from tester ports portA 0 and portA 1 (see
+   the scapy sketch after this test case's steps)::
+
+    sendp([Ether({pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+
+#. check that the first/second bonded ports receive 400 packets and the third
+   bonded port receives 800 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
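+The ``sendp`` calls above leave the destination MAC and the tester interface
+to be filled in for the local setup. A minimal scapy sketch, assuming
+hypothetical VF MAC addresses and tester interface names, is shown below; the
+same pattern applies to the traffic steps of the later test cases::
+
+    from scapy.all import Ether, IP, TCP, Raw, sendp
+
+    # assumptions: VF MAC addresses as reported by testpmd ("show port info all")
+    # and the tester interface wired to the corresponding PF; replace with the
+    # real values of the local setup
+    streams = [
+        ("00:11:22:33:44:50", "ens1f0"),   # portB pf0vf0, reached via portA 0
+        ("00:11:22:33:44:51", "ens1f0"),   # portB pf0vf1, reached via portA 0
+        ("00:11:22:33:44:52", "ens1f1"),   # portB pf1vf0, reached via portA 1
+        ("00:11:22:33:44:53", "ens1f1"),   # portB pf1vf1, reached via portA 1
+    ]
+
+    for mac, iface in streams:
+        pkt = Ether(dst=mac) / IP() / TCP() / Raw('\0' * 60)
+        sendp(pkt, iface=iface, count=100)   # 100 packets per VF
+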
+Test Case: active-backup stacked bonded rx traffic with slave down
+==================================================================
+Set up the DUT/testpmd stacked bonded ports, set one slave of each first-level
+bonded port to down status, send TCP packets with scapy and check the testpmd
+packet statistics.
+
+steps
+-----
+
+#. bind four ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+    testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 2 4
+
+#. set portB pf0vf0 and pf0vf1 down::
+
+    ethtool --set-priv-flags {portA 0} link-down-on-close on
+    ifconfig {portA 0} down
+
+.. note::
+
+   The VF port link status cannot be changed directly. Change the link status
+   of the peer (tester) port to bring the VF port link down.
+
+#. create the second bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 1 5
+    testpmd> add bonding slave 3 5
+
+#. create the third bonded port and add the first/second bonded ports as its
+   slaves, check that the slaves are added successfully::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+#. start the top level bond port::
+
+    testpmd> port start 6
+    testpmd> start
+
+#. send 100 packets to portB pf0vf0/portB pf0vf1/portB pf1vf0/portB pf1vf1
+   separately::
+
+    sendp([Ether({pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+
+#. check that the first/second bonded ports receive 100 packets and the third
+   bonded device receives 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
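+The peer-port workaround described in the note above (and used again in the
+balance-xor slave-down case below) can be scripted on the tester. A minimal
+sketch, assuming a hypothetical tester interface name and that it runs as root
+on the tester::
+
+    import subprocess
+
+    def set_peer_link(iface, up):
+        """Toggle a tester port so the DUT VFs behind it see link up/down."""
+        # make sure the PHY really drops link when the netdev is brought down
+        subprocess.run(["ethtool", "--set-priv-flags", iface,
+                        "link-down-on-close", "on"], check=True)
+        subprocess.run(["ifconfig", iface, "up" if up else "down"], check=True)
+
+    # e.g. take portA 0 down before sending traffic, bring it back up afterwards
+    set_peer_link("ens1f0", up=False)
+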
+Test Case: balance-xor stacked bonded rx traffic
+================================================
+Set up the DUT/testpmd stacked bonded ports, send TCP packets with scapy and
+check the packet statistics.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+    testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 2 4
+
+#. create the second bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 1 5
+    testpmd> add bonding slave 3 5
+
+#. create the third bonded port and add the first/second bonded ports as its
+   slaves, check that the slaves are added successfully::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+#. start the top level bond port::
+
+    testpmd> port start 6
+    testpmd> start
+
+#. send 100 packets to each VF from tester ports portA 0 and portA 1::
+
+    sendp([Ether({pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+
+#. check that the first/second bonded ports receive 200 packets and the third
+   bonded device receives 400 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case: balance-xor stacked bonded rx traffic with slave down
+================================================================
+Set up the DUT/testpmd stacked bonded ports, set one slave of each first-level
+bonded device to down status, send TCP packets with scapy and check the packet
+statistics.
+
+steps
+-----
+
+#. bind four ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
+    testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves, set portA 0 down::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 2 4
+    testpmd> port stop 1
+
+#. create the second bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 1 5
+    testpmd> add bonding slave 3 5
+    testpmd> port stop 3
+
+#. set portB pf0vf0 and pf0vf1 down::
+
+    ethtool --set-priv-flags {portA 0} link-down-on-close on
+    ifconfig {portA 0} down
+
+.. note::
+
+   The VF port link status cannot be changed directly. Change the link status
+   of the peer (tester) port to bring the VF port link down.
+
+#. create the third bonded port and add the first/second bonded ports as its
+   slaves, check that the slaves are added successfully::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+#. start the top level bond port::
+
+    testpmd> port start 6
+    testpmd> start
+
+#. send 100 packets to portB pf0vf0/portB pf0vf1/portB pf1vf0/portB pf1vf1
+   separately::
+
+    sendp([Ether({pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+    sendp([Ether({pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
+
+#. check that the first/second bonded ports receive 100 packets and the third
+   bonded device receives 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
-- 
2.25.1