From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Tu, Lijuan"
To: yufengmx <"yufengx.mo@intel.com"@intel.com>, "dts@dpdk.org"
Date: Fri, 1 Feb 2019 05:48:38 +0000
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BA1FDA1@SHSMSX101.ccr.corp.intel.com>
References: <1548916365-182758-1-git-send-email-yufengx.mo@intel.com>
 <1548916365-182758-2-git-send-email-yufengx.mo@intel.com>
In-Reply-To: <1548916365-182758-2-git-send-email-yufengx.mo@intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dts] [PATCH V2][testpmd/stacked_bonded]: test plan for testpmd stacked bonded
X-BeenThere: dts@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: test suite reviews and discussions
X-List-Received-Date: Fri, 01 Feb 2019 05:48:43 -0000

Hi yufeng,

Applying: test plan for testpmd stacked bonded
.git/rebase-apply/patch:50: trailing whitespace.
The demand arises from a discussion with a prospective customer for a 100G NIC
.git/rebase-apply/patch:51: trailing whitespace.
based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
.git/rebase-apply/patch:52: trailing whitespace.
support a proper x16 PCIe interface so the host sees a single netdev and that
.git/rebase-apply/patch:53: trailing whitespace.
netdev corresponds directly to the 100G Ethernet port. They indicated that in
.git/rebase-apply/patch:54: trailing whitespace.
their current system they bond multiple 100G NICs together, using DPDK bonding
warning: squelched 12 whitespace errors
warning: 17 lines add whitespace errors.

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of yufengmx
> Sent: Thursday, January 31, 2019 2:33 PM
> To: dts@dpdk.org
> Cc: yufengmx <"yufengx.mo@intel.com"@intel.com>
> Subject: [dts] [PATCH V2][testpmd/stacked_bonded]: test plan for testpmd
> stacked bonded
>
>
> The stacked bonded mechanism allows a bonded port to be added to another
> bonded port.
>
> The demand arises from a discussion with a prospective customer for a 100G
> NIC based on RRC. The customer already uses Mellanox 100G NICs. Mellanox
> 100G NICs support a proper x16 PCIe interface, so the host sees a single
> netdev and that netdev corresponds directly to the 100G Ethernet port. They
> indicated that in their current system they bond multiple 100G NICs together,
> using the DPDK bonding API in their application. They are interested in an
> alternative source for the 100G NIC and are in conversation with Silicom, who
> are shipping a 100G RRC based NIC (something like Boulder Rapids). The issue
> they have with the RRC NIC is that it presents as two PCIe interfaces
> (netdevs) instead of one. If DPDK bonding could operate at the first level on
> the two RRC netdevs to present a single netdev, the application could then
> bond multiple of these bonded interfaces to implement NIC bonding.
>
> Signed-off-by: yufengmx
> ---
>  test_plans/pmd_stacked_bonded_test_plan.rst | 408 ++++++++++++++++++++++++++++
>  1 file changed, 408 insertions(+)
>  create mode 100644 test_plans/pmd_stacked_bonded_test_plan.rst
>
> diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst b/test_plans/pmd_stacked_bonded_test_plan.rst
> new file mode 100644
> index 0000000..9fbf75e
> --- /dev/null
> +++ b/test_plans/pmd_stacked_bonded_test_plan.rst
> @@ -0,0 +1,408 @@
> +.. Copyright (c) <2010-2019>, Intel Corporation
> +   All rights reserved.
> +
> +   Redistribution and use in source and binary forms, with or without
> +   modification, are permitted provided that the following conditions
> +   are met:
> +
> +   - Redistributions of source code must retain the above copyright
> +     notice, this list of conditions and the following disclaimer.
> +
> +   - Redistributions in binary form must reproduce the above copyright
> +     notice, this list of conditions and the following disclaimer in
> +     the documentation and/or other materials provided with the
> +     distribution.
> +
> +   - Neither the name of Intel Corporation nor the names of its
> +     contributors may be used to endorse or promote products derived
> +     from this software without specific prior written permission.
> +
> +   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> +   FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
> +   IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
> +   FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> +   CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
> +   OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
> +   BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
> +   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
> +   USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> +   DAMAGE.
> +
> +==============
> +Stacked Bonded
> +==============
> +
> +The stacked bonded mechanism allows a bonded port to be added to another
> +bonded port.
> +
> +The demand arises from a discussion with a prospective customer for a
> +100G NIC based on RRC. The customer already uses Mellanox 100G NICs.
> +Mellanox 100G NICs support a proper x16 PCIe interface, so the host sees
> +a single netdev and that netdev corresponds directly to the 100G
> +Ethernet port. They indicated that in their current system they bond
> +multiple 100G NICs together, using the DPDK bonding API in their
> +application. They are interested in an alternative source for the 100G
> +NIC and are in conversation with Silicom, who are shipping a 100G RRC
> +based NIC (something like Boulder Rapids). The issue they have with the
> +RRC NIC is that it presents as two PCIe interfaces (netdevs) instead of
> +one. If DPDK bonding could operate at the first level on the two RRC
> +netdevs to present a single netdev, the application could then bond
> +multiple of these bonded interfaces to implement NIC bonding.
> +
> +Prerequisites
> +=============
> +
> +hardware configuration
> +----------------------
> +
> +All link ports of the tester/DUT should run at the same data rate and
> +support full-duplex. Slave-down test cases need at least four ports;
> +the other test cases can run with two ports.
> +
> +NIC/DUT/TESTER ports requirements::
> +
> +    DUT: 2/4 ports.
> +    TESTER: 2/4 ports.
> +
> +port topology diagram (4 peer links)::
> +
> +   TESTER                                   DUT
> +                physical link              logical link
> +   .---------.                .-------------------------------------------.
> +   | portA 0 | <------------> | portB 0 <---> .--------.                  |
> +   |         |                |               | bond 0 | <-----> .------. |
> +   | portA 0a| <------------> | portB 1 <---> '--------'         |      | |
> +   |         |                |                                  |bond 2| |
> +   | portA 1 | <------------> | portB 2 <---> .--------.         |      | |
> +   |         |                |               | bond 1 | <-----> '------' |
> +   | portA 1a| <------------> | portB 3 <---> '--------'                  |
> +   '---------'                '-------------------------------------------'
> +
> +Test cases
> +==========
> +The ``tx-offloads`` value is set based on the NIC type. The steps of the
> +test cases that cover slave-down handling are based on 4 ports; the
> +other test cases' steps are based on 2 ports.
> +
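A side note for whoever scripts this plan: the interactive testpmd sequences in
the cases below can be driven from a test script, e.g. with pexpect. This is
only a minimal sketch, assuming the testpmd invocation from the steps; the
binary path, EAL options and port ids are placeholders, not fixed values:

    import pexpect

    # Minimal sketch: drive testpmd's interactive prompt to build a stacked
    # bond in active-backup mode (mode 1). Port ids follow the steps below:
    # physical ports 0/1, first-level bonds 2/3, second-level bond 4.
    child = pexpect.spawn("./testpmd -c 0x6 -n 4 -- -i", timeout=60)
    child.expect("testpmd> ")

    def cmd(line):
        # send one testpmd command and wait for the prompt to come back
        child.sendline(line)
        child.expect("testpmd> ")
        return child.before.decode()

    cmd("port stop all")
    cmd("create bonded device 1 0")    # 1st-level bond -> port 2
    cmd("add bonding slave 0 2")
    cmd("create bonded device 1 0")    # 1st-level bond -> port 3
    cmd("add bonding slave 1 3")
    cmd("create bonded device 1 0")    # 2nd-level bond -> port 4
    cmd("add bonding slave 2 4")
    cmd("add bonding slave 3 4")
    print(cmd("show bonding config 4"))
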
> +Test Case: basic behavior
> +=========================
> +A bonded port can be added to another bonded port in the following
> +modes::
> +
> +    balance-rr    0
> +    active-backup 1
> +    balance-xor   2
> +    broadcast     3
> +    balance-tlb   5
> +
> +#. 802.3ad mode is not supported if one or more slaves is a bond device.
> +#. add the same device twice to check that the exception handling is correct.
> +#. the master bonded port and each slave have the same queue configuration.
> +
> +steps
> +-----
> +
> +#. bind two ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create the first bonded port and add one slave, check bond 2 config status::
> +
> +    testpmd> create bonded device <mode> 0
> +    testpmd> add bonding slave 0 2
> +    testpmd> show bonding config 2
> +
> +#. create the second bonded port and add one slave, check bond 3 config status::
> +
> +    testpmd> create bonded device <mode> 0
> +    testpmd> add bonding slave 1 3
> +    testpmd> show bonding config 3
> +
> +#. create the third bonded port and add the first/second bonded ports as its
> +   slaves. Check that the slaves are added successfully. Stacked bonding is
> +   forbidden in mode 4; mode 4 will fail to add a bonded port as its slave::
> +
> +    testpmd> create bonded device <mode> 0
> +    testpmd> add bonding slave 2 4
> +    testpmd> add bonding slave 3 4
> +    testpmd> show bonding config 4
> +
> +#. check that the master bonded port's and the slave ports' queue
> +   configurations are the same::
> +
> +    testpmd> show bonding config 0
> +    testpmd> show bonding config 1
> +    testpmd> show bonding config 2
> +    testpmd> show bonding config 3
> +    testpmd> show bonding config 4
> +
> +#. start all ports to check the port start action::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> +
> +#. repeat the steps above with the following mode numbers::
> +
> +    balance-rr    0
> +    active-backup 1
> +    balance-xor   2
> +    broadcast     3
> +    802.3ad       4
> +    balance-tlb   5
> +
> +Test Case: active-backup stacked bonded rx traffic
> +==================================================
> +Set up DUT/testpmd stacked bonded ports, send TCP packets with scapy and
> +check testpmd packet statistics.
> +
> +steps
> +-----
> +
> +#. bind two ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create the first bonded port and add one port as slave::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 0 2
> +
> +#. create the second bonded port and add one port as slave::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 1 3
> +
> +#. create the third bonded port and add the first/second bonded ports as its
> +   slaves, check that the slaves are added successfully::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 2 4
> +    testpmd> add bonding slave 3 4
> +    testpmd> show bonding config 4
> +
> +#. start all bonded device ports::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. send 100 TCP packets to portA 0 and portA 1::
> +
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0 interface>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1 interface>)
> +
> +#. the first/second bonded ports should receive 100 packets each, the third
> +   bonded port should receive 200 packets::
> +
> +    testpmd> show port stats all
> +
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> +
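The sendp() steps above are what the tester side actually runs; a small
self-contained sketch of the 100-packet burst, assuming placeholder interface
names for the tester ports wired to portB 0/portB 1:

    from scapy.all import Ether, IP, TCP, Raw, sendp

    # Sketch of the "send 100 TCP packets" step; replace the interface
    # names with the real tester netdevs facing portB 0 and portB 1.
    pkt = Ether() / IP() / TCP() / Raw('\x00' * 60)
    for iface in ("tester_portA_0", "tester_portA_1"):
        sendp([pkt] * 100, iface=iface, verbose=False)  # 100 packets per port
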
> +Test Case: active-backup stacked bonded rx traffic with slave down
> +==================================================================
> +Set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level
> +bonded port to down status, send TCP packets with scapy and check testpmd
> +packet statistics.
> +
> +steps
> +-----
> +
> +#. bind four ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio \
> +        <pci address 1> <pci address 2> <pci address 3> <pci address 4>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create the first bonded port and add two ports as slaves::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 0 4
> +    testpmd> add bonding slave 1 4
> +
> +#. set portA 0a down::
> +
> +    ifconfig <portA 0a interface> down
> +
> +#. create the second bonded port and add two ports as slaves::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 2 5
> +    testpmd> add bonding slave 3 5
> +
> +#. set portA 1a down::
> +
> +    ifconfig <portA 1a interface> down
> +
> +#. create the third bonded port and add the first/second bonded ports as its
> +   slaves, check that the slaves are added successfully::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 4 6
> +    testpmd> add bonding slave 5 6
> +    testpmd> show bonding config 6
> +
> +#. start all ports::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
> +
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0 interface>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0a interface>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1 interface>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1a interface>)
> +
> +#. check that the first/second bonded ports receive 100 packets each and the
> +   third bonded device receives 200 packets::
> +
> +    testpmd> show port stats all
> +
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case: balance-xor stacked bonded rx traffic
> +================================================
> +Set up DUT/testpmd stacked bonded ports, send TCP packets with scapy and
> +check packet statistics.
> +
> +steps
> +-----
> +
> +#. bind two ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create the first bonded port and add one port as slave::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 0 2
> +
> +#. create the second bonded port and add one port as slave::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 1 3
> +
> +#. create the third bonded port and add the first/second bonded ports as its
> +   slaves, check that the slaves are added successfully::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 2 4
> +    testpmd> add bonding slave 3 4
> +    testpmd> show bonding config 4
> +
> +#. start all ports::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. send 100 packets to portA 0 and portA 1::
> +
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0 interface>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1 interface>)
> +
> +#. check that the first/second bonded ports receive 100 packets each and the
> +   third bonded device receives 200 packets::
> +
> +    testpmd> show port stats all
> +
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> +
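For the `show port stats all` checks above, the RX counters can be pulled out
of the captured testpmd output roughly like this (rx_packets() is an
illustrative helper, not a DTS API; it assumes the usual "NIC statistics for
port <n>" / "RX-packets:" layout of testpmd output):

    import re

    def rx_packets(stats_output):
        # Map port id -> RX-packets from captured `show port stats all` text.
        counts, port = {}, None
        for line in stats_output.splitlines():
            m = re.search(r"statistics for port (\d+)", line)
            if m:
                port = int(m.group(1))
            m = re.search(r"RX-packets:\s*(\d+)", line)
            if m and port is not None:
                counts[port] = int(m.group(1))
        return counts

    # expected for the cases above: counts[2] == counts[3] == 100, counts[4] == 200
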
> +Test Case: balance-xor stacked bonded rx traffic with slave down
> +================================================================
> +Set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level
> +bonded device to down status, send TCP packets with scapy and check packet
> +statistics.
> +
> +steps
> +-----
> +
> +#. bind four ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio \
> +        <pci address 1> <pci address 2> <pci address 3> <pci address 4>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create the first bonded port and add two ports as slaves, stop port 1::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 0 4
> +    testpmd> add bonding slave 1 4
> +    testpmd> port stop 1
> +
> +#. set portA 0a down::
> +
> +    ifconfig <portA 0a interface> down
> +
> +#. create the second bonded port and add two ports as slaves, stop port 3::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 2 5
> +    testpmd> add bonding slave 3 5
> +    testpmd> port stop 3
> +
> +#. set portA 1a down::
> +
> +    ifconfig <portA 1a interface> down
> +
> +#. create the third bonded port and add the first/second bonded ports as its
> +   slaves, check that the slaves are added successfully::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 4 6
> +    testpmd> add bonding slave 5 6
> +    testpmd> show bonding config 6
> +
> +#. start all ports::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
> +
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0 interface>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0a interface>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1 interface>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1a interface>)
> +
> +#. check that the first/second bonded ports receive 100 packets each and the
> +   third bonded device receives 200 packets::
> +
> +    testpmd> show port stats all
> +
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> \ No newline at end of file
> --
> 1.9.3
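One more scripting note for the slave-down cases: the manual
`ifconfig <interface> down` steps can be wrapped on the tester side with
iproute2, e.g. (set_link() and the interface names are illustrative
placeholders, not part of the plan):

    import subprocess

    def set_link(iface, up=True):
        # Sketch of the manual `ifconfig <interface> down/up` step via iproute2.
        state = "up" if up else "down"
        subprocess.run(["ip", "link", "set", "dev", iface, state], check=True)

    # e.g. set_link("tester_portA_0a", up=False) before the slave-down cases,
    # and set_link("tester_portA_0a", up=True) afterwards to restore the link.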