From: "Chen, Zhaoyan" <zhaoyan.chen@intel.com>
To: yufengmx <yufengx.mo@intel.com>,
	"dts@dpdk.org" <dts@dpdk.org>
Cc: "Chen, Zhaoyan" <zhaoyan.chen@intel.com>
Subject: Re: [dts] [PATCH V2][testpmd/stacked_bonded]: test plan for testpmd stacked bonded
Date: Fri, 1 Feb 2019 05:44:00 +0000	[thread overview]
Message-ID: <9DEEADBC57E43F4DA73B571777FECECA41C27A22@SHSMSX104.ccr.corp.intel.com> (raw)
In-Reply-To: <1548916365-182758-2-git-send-email-yufengx.mo@intel.com>

Acked-by: Zhaoyan Chen <zhaoyan.chen@intel.com>




Regards,
Zhaoyan Chen


> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of yufengmx
> Sent: Thursday, January 31, 2019 2:33 PM
> To: dts@dpdk.org
> Cc: yufengmx <yufengx.mo@intel.com>
> Subject: [dts] [PATCH V2][testpmd/stacked_bonded]: test plan for testpmd stacked
> bonded
> 
> 
> Stacked bonded mechanism allows a bonded port to be added to another bonded port.
> 
> The demand arises from a discussion with a prospective customer about a 100G NIC
> based on RRC. The customer already uses Mellanox 100G NICs, which present a proper
> x16 PCIe interface, so the host sees a single netdev that corresponds directly to
> the 100G Ethernet port. In their current system they bond multiple 100G NICs
> together using the DPDK bonding API in their application. They are interested in an
> alternative source for the 100G NIC and are in conversation with Silicom, who ship
> a 100G RRC-based NIC (something like Boulder Rapids). The issue with the RRC NIC is
> that it presents as two PCIe interfaces (netdevs) instead of one. If DPDK bonding
> could operate at the first level on the two RRC netdevs to present a single netdev,
> the application could then bond multiple of these bonded interfaces to implement
> NIC bonding.
> 
> Signed-off-by: yufengmx <yufengx.mo@intel.com>
> ---
>  test_plans/pmd_stacked_bonded_test_plan.rst | 408 ++++++++++++++++++++++++++++
>  1 file changed, 408 insertions(+)
>  create mode 100644 test_plans/pmd_stacked_bonded_test_plan.rst
> 
> diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst b/test_plans/pmd_stacked_bonded_test_plan.rst
> new file mode 100644
> index 0000000..9fbf75e
> --- /dev/null
> +++ b/test_plans/pmd_stacked_bonded_test_plan.rst
> @@ -0,0 +1,408 @@
> +.. Copyright (c) <2010-2019>, Intel Corporation
> +   All rights reserved.
> +
> +   Redistribution and use in source and binary forms, with or without
> +   modification, are permitted provided that the following conditions
> +   are met:
> +
> +   - Redistributions of source code must retain the above copyright
> +     notice, this list of conditions and the following disclaimer.
> +
> +   - Redistributions in binary form must reproduce the above copyright
> +     notice, this list of conditions and the following disclaimer in
> +     the documentation and/or other materials provided with the
> +     distribution.
> +
> +   - Neither the name of Intel Corporation nor the names of its
> +     contributors may be used to endorse or promote products derived
> +     from this software without specific prior written permission.
> +
> +   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> +   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> +   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> +   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> +   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> +   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> +   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> +   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> +   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> +   OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +==============
> +Stacked Bonded
> +==============
> +
> +Stacked bonded mechanism allows a bonded port to be added to another bonded
> +port.
> +
> +The demand arises from a discussion with a prospective customer about a
> +100G NIC based on RRC. The customer already uses Mellanox 100G NICs,
> +which present a proper x16 PCIe interface, so the host sees a single
> +netdev that corresponds directly to the 100G Ethernet port. In their
> +current system they bond multiple 100G NICs together using the DPDK
> +bonding API in their application. They are interested in an alternative
> +source for the 100G NIC and are in conversation with Silicom, who ship a
> +100G RRC-based NIC (something like Boulder Rapids). The issue with the
> +RRC NIC is that it presents as two PCIe interfaces (netdevs) instead of
> +one. If DPDK bonding could operate at the first level on the two RRC
> +netdevs to present a single netdev, the application could then bond
> +multiple of these bonded interfaces to implement NIC bonding.
> +
> +Prerequisites
> +=============
> +
> +hardware configuration
> +----------------------
> +
> +All ports of the tester/DUT should run at the same data rate and support
> +full-duplex. Slave-down test cases need at least four ports; other test
> +cases can run with two ports.
> +
> +NIC/DUT/TESTER ports requirements::
> +
> +     DUT:     2/4 ports.
> +     TESTER:  2/4 ports.
> +
> +port topology diagram (4 peer links)::
> +
> +       TESTER                                   DUT
> +                  physical link             logical link
> +     .---------.                .-------------------------------------------.
> +     | portA 0 | <------------> | portB 0 <---> .--------.                  |
> +     |         |                |               | bond 0 | <-----> .------. |
> +     | portA 0a| <------------> | portB 1 <---> '--------'         |      | |
> +     |         |                |                                  |bond2 | |
> +     | portA 1 | <------------> | portB 2 <---> .--------.         |      | |
> +     |         |                |               | bond 1 | <-----> '------' |
> +     | portA 1a| <------------> | portB 3 <---> '--------'                  |
> +     '---------'                '-------------------------------------------'
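
The stacked relationship in the diagram can be sketched as a small model. The `topology` dict and `physical_slaves` helper below are illustrative only (they are not part of testpmd or DPDK); bond names follow the diagram.

```python
# Hypothetical model of the 4-port stacked-bonded topology above:
# physical DUT ports 0..3 feed two first-level bonds, which feed a
# second-level (stacked) bond.

topology = {
    "bond0": [0, 1],              # first-level bond over DUT ports 0 and 1
    "bond1": [2, 3],              # first-level bond over DUT ports 2 and 3
    "bond2": ["bond0", "bond1"],  # second-level (stacked) bond
}

def physical_slaves(bond, topo):
    """Recursively flatten a stacked bond into its physical port ids."""
    ports = []
    for slave in topo[bond]:
        if isinstance(slave, str):      # nested bonded device
            ports.extend(physical_slaves(slave, topo))
        else:                           # physical port id
            ports.append(slave)
    return ports

print(physical_slaves("bond2", topology))  # -> [0, 1, 2, 3]
```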
> +
> +Test cases
> +==========
> +The ``tx-offloads`` value is set based on the NIC type. The steps of the
> +slave-down test cases are based on 4 ports; the steps of the other test
> +cases are based on 2 ports.
> +
> +Test Case: basic behavior
> +=========================
> +A bonded port can be added to another bonded port; this is supported by
> +the following modes::
> +
> +   balance-rr    0
> +   active-backup 1
> +   balance-xor   2
> +   broadcast     3
> +   balance-tlb   5
> +
> +#. 802.3ad mode is not supported if one or more of the slaves is a bonded device.
> +#. add the same device twice to check that the error handling is correct.
> +#. the master bonded port and each of its slaves use the same queue configuration.
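
The constraints above can be sketched as a minimal model. The `BondedDevice` class and mode constant are hypothetical illustrations of the rules, not the DPDK bonding implementation:

```python
# Sketch of the slave-add rules exercised by this case: duplicate slaves
# are rejected, and 802.3ad (mode 4) refuses bonded slaves. Mode numbers
# match the table above.

MODE_8023AD = 4

class BondedDevice:
    def __init__(self, mode):
        self.mode = mode
        self.slaves = []

    def add_slave(self, slave):
        if slave in self.slaves:
            raise ValueError("device already added as a slave")
        if self.mode == MODE_8023AD and isinstance(slave, BondedDevice):
            raise ValueError("802.3ad does not support bonded slaves")
        self.slaves.append(slave)

bond0 = BondedDevice(mode=1)          # active-backup
bond0.add_slave(0)
top = BondedDevice(mode=1)
top.add_slave(bond0)                  # stacking allowed for mode 1

lacp = BondedDevice(mode=MODE_8023AD)
try:
    lacp.add_slave(bond0)             # stacking rejected for mode 4
except ValueError as err:
    print(err)
```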
> +
> +steps
> +-----
> +
> +#. bind two ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create first bonded port and add one slave, check bond 2 config status::
> +
> +    testpmd> create bonded device <mode> 0
> +    testpmd> add bonding slave 0 2
> +    testpmd> show bonding config 2
> +
> +#. create second bonded port and add one slave, check bond 3 config status::
> +
> +    testpmd> create bonded device <mode> 0
> +    testpmd> add bonding slave 1 3
> +    testpmd> show bonding config 3
> +
> +#. create third bonded port and add the first/second bonded ports as its slaves,
> +   and check that the slaves are added successfully. stacked bonding is forbidden
> +   in mode 4; mode 4 will fail to add a bonded port as its slave::
> +
> +    testpmd> create bonded device <mode> 0
> +    testpmd> add bonding slave 2 4
> +    testpmd> add bonding slave 3 4
> +    testpmd> show bonding config 4
> +
> +#. check that the master bonded port's and the slave ports' queue
> +   configurations are the same::
> +
> +    testpmd> show bonding config 0
> +    testpmd> show bonding config 1
> +    testpmd> show bonding config 2
> +    testpmd> show bonding config 3
> +    testpmd> show bonding config 4
> +
> +#. start all ports to check the port start action::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> +
> +#. repeat the above steps with the following mode numbers::
> +
> +    balance-rr    0
> +    active-backup 1
> +    balance-xor   2
> +    broadcast     3
> +    802.3ad       4
> +    balance-tlb   5
> +
> +Test Case: active-backup stacked bonded rx traffic
> +==================================================
> +set up DUT/testpmd stacked bonded ports, send TCP packets with scapy, and
> +check testpmd packet statistics.
> +
> +steps
> +-----
> +
> +#. bind two ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create first bonded port and add one port as a slave::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 0 2
> +
> +#. create second bonded port and add one port as a slave::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 1 3
> +
> +#. create third bonded port and add the first/second bonded ports as its slaves,
> +   and check that the slaves are added successfully::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 2 4
> +    testpmd> add bonding slave 3 4
> +    testpmd> show bonding config 4
> +
> +#. start all bonded device ports::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. send 100 TCP packets to portA 0 and portA 1::
> +
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
> +
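If scapy is unavailable, an equivalent frame with a 60-byte zero payload can be assembled by hand. The field values below only approximate scapy's defaults (addresses are zeroed placeholders and checksums are left unset), so this is a sketch rather than a byte-exact reproduction:

```python
import struct

def build_frame(payload=b"\x00" * 60):
    """Build a minimal Ethernet/IPv4/TCP frame by hand."""
    # Ethernet: broadcast dst, zeroed src, EtherType 0x0800 (IPv4)
    eth = b"\xff" * 6 + b"\x00" * 6 + struct.pack("!H", 0x0800)
    total_len = 20 + 20 + len(payload)
    ip = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, total_len,  # version/IHL, TOS, total length
                     1, 0,                # identification, flags/fragment
                     64, 6, 0,            # TTL, proto=TCP, checksum unset
                     bytes(4), bytes(4))  # src/dst 0.0.0.0 placeholders
    tcp = struct.pack("!HHIIBBHHH",
                      20, 80,             # src/dst ports
                      0, 0,               # seq, ack
                      0x50, 0x02,         # data offset (5 words), SYN flag
                      8192, 0, 0)         # window, checksum unset, urgent
    return eth + ip + tcp + payload

frame = build_frame()
print(len(frame))  # 14 + 20 + 20 + 60 = 114 bytes
```
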
> +#. the first/second bonded ports should each receive 100 packets, and the third
> +   bonded port should receive 200 packets::
> +
> +    testpmd> show port stats all
> +
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case: active-backup stacked bonded rx traffic with slave down
> +==================================================================
> +set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level
> +bonded port to down status, send TCP packets with scapy, and check testpmd
> +packet statistics.
> +
> +steps
> +-----
> +
> +#. bind four ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2> \
> +                                               <pci address 3> <pci address 4>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create first bonded port and add two ports as slaves::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 0 4
> +    testpmd> add bonding slave 1 4
> +
> +#. set portA 0a down::
> +
> +    ifconfig <portA 0a> down
> +
> +#. create second bonded port and add two ports as slaves::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 2 5
> +    testpmd> add bonding slave 3 5
> +
> +#. set portA 1a down::
> +
> +    ifconfig <portA 1a> down
> +
> +#. create third bonded port and add the first/second bonded ports as its slaves,
> +   and check that the slaves are added successfully::
> +
> +    testpmd> create bonded device 1 0
> +    testpmd> add bonding slave 4 6
> +    testpmd> add bonding slave 5 6
> +    testpmd> show bonding config 6
> +
> +#. start all ports::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
> +
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0a>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1a>)
> +
> +#. check that the first/second bonded ports each receive 100 packets and the
> +   third bonded device receives 200 packets::
> +
> +    testpmd> show port stats all
> +
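The expected counters follow from simple accounting: each downed peer contributes nothing, so each first-level bond sees 100 packets and the stacked bond aggregates 200. A sketch (port names follow the topology diagram; the dict is illustrative, not testpmd output):

```python
# 100 packets are sent to each of the four tester ports, but the DUT
# ports whose tester peers are down receive nothing.
sent_per_port = 100
slave_up = {"portB 0": True, "portB 1": False,   # portA 0a peer is down
            "portB 2": True, "portB 3": False}   # portA 1a peer is down

rx = {p: (sent_per_port if up else 0) for p, up in slave_up.items()}
bond4 = rx["portB 0"] + rx["portB 1"]   # first-level bond
bond5 = rx["portB 2"] + rx["portB 3"]   # first-level bond
bond6 = bond4 + bond5                   # stacked bond aggregates both
print(bond4, bond5, bond6)  # 100 100 200
```
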
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case: balance-xor stacked bonded rx traffic
> +================================================
> +set up DUT/testpmd stacked bonded ports, send TCP packets with scapy, and
> +check packet statistics.
> +
> +steps
> +-----
> +
> +#. bind two ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create first bonded port and add one port as a slave::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 0 2
> +
> +#. create second bonded port and add one port as a slave::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 1 3
> +
> +#. create third bonded port and add the first/second bonded ports as its slaves,
> +   and check that the slaves are added successfully::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 2 4
> +    testpmd> add bonding slave 3 4
> +    testpmd> show bonding config 4
> +
> +#. start all ports::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. send 100 packets to portA 0 and portA 1::
> +
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
> +
> +#. check that the first/second bonded ports each receive 100 packets and the
> +   third bonded device receives 200 packets::
> +
> +    testpmd> show port stats all
> +
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> +
> +Test Case: balance-xor stacked bonded rx traffic with slave down
> +================================================================
> +set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level
> +bonded device to down status, send TCP packets with scapy, and check packet
> +statistics.
> +
> +steps
> +-----
> +
> +#. bind four ports::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2> \
> +                                               <pci address 3> <pci address 4>
> +
> +#. boot up testpmd, stop all ports::
> +
> +    ./testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
> +    testpmd> port stop all
> +
> +#. create first bonded port, add two ports as slaves, and stop slave port 1::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 0 4
> +    testpmd> add bonding slave 1 4
> +    testpmd> port stop 1
> +
> +#. set portA 0a down::
> +
> +    ifconfig <portA 0a> down
> +
> +#. create second bonded port, add two ports as slaves, and stop slave port 3::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 2 5
> +    testpmd> add bonding slave 3 5
> +    testpmd> port stop 3
> +
> +#. set portA 1a down::
> +
> +    ifconfig <portA 1a> down
> +
> +#. create third bonded port and add the first/second bonded ports as its slaves,
> +   and check that the slaves are added successfully::
> +
> +    testpmd> create bonded device 2 0
> +    testpmd> add bonding slave 4 6
> +    testpmd> add bonding slave 5 6
> +    testpmd> show bonding config 6
> +
> +#. start all ports::
> +
> +    testpmd> port start all
> +    testpmd> start
> +
> +#. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
> +
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0a>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
> +    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1a>)
> +
> +#. check that the first/second bonded ports each receive 100 packets and the
> +   third bonded device receives 200 packets::
> +
> +    testpmd> show port stats all
> +
> +#. close testpmd::
> +
> +    testpmd> stop
> +    testpmd> quit
> \ No newline at end of file
> --
> 1.9.3

  reply	other threads:[~2019-02-01  5:44 UTC|newest]

Thread overview: 4+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2019-01-31  6:32 [dts] [PATCH V2][testpmd/stacked_bonded]: test plan for testpmd stacked bonded stacked bonded mechanism allow a bonded port to be added to another bonded port yufengmx
2019-01-31  6:32 ` [dts] [PATCH V2][testpmd/stacked_bonded]: test plan for testpmd stacked bonded yufengmx
2019-02-01  5:44   ` Chen, Zhaoyan [this message]
2019-02-01  5:48   ` Tu, Lijuan
