From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Tu, Lijuan"
To: "Mo, YufengX", "Wu, ChangqingX", "dts@dpdk.org"
CC: "Mo, YufengX"
Date: Wed, 4 Sep 2019 05:04:52 +0000
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BB1BE71@SHSMSX101.ccr.corp.intel.com>
References: <1566797465-122445-1-git-send-email-yufengx.mo@intel.com>
 <1566797465-122445-2-git-send-email-yufengx.mo@intel.com>
In-Reply-To: <1566797465-122445-2-git-send-email-yufengx.mo@intel.com>
Subject: Re: [dts] [PATCH V1]test_plans/pmd_stacked_bonded: update test plan

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of yufengmx
> Sent: Monday, August 26, 2019 1:31 PM
> To: Wu, ChangqingX; dts@dpdk.org
> Cc: Mo, YufengX
> Subject: [dts] [PATCH V1]test_plans/pmd_stacked_bonded: update test plan
>
> The "port start all" action implies a race condition.
> The right way is to start the top level bond port only, and let it
> propagate the start action to the slave bond ports and their real NICs.
>
> Signed-off-by: yufengmx
> ---
>  test_plans/pmd_stacked_bonded_test_plan.rst | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst
> b/test_plans/pmd_stacked_bonded_test_plan.rst
> index be0da9a..a864f1f 100644
> --- a/test_plans/pmd_stacked_bonded_test_plan.rst
> +++ b/test_plans/pmd_stacked_bonded_test_plan.rst
> @@ -41,7 +41,7 @@ based on RRC. The customer already uses Mellanox
>  100G NICs. Mellanox 100G NICs support a proper x16 PCIe interface so the
>  host sees a single netdev and that netdev corresponds directly to the 100G
>  Ethernet port. They indicated that in their current system they bond
>  multiple 100G NICs together, using DPDK bonding
> -API in their application. They are interested in looking at an alternatve source
> +API in their application. They are interested in looking at an
> +alternative source
>  for the 100G NIC and are in conversation with Silicom who are shipping a
>  100G RRC based NIC (something like Boulder Rapids). The issue they have
>  with RRC NIC is that the NIC presents as two PCIe interfaces (netdevs)
>  instead of one. If the
> @@ -140,9 +140,9 @@ steps
>      testpmd> show bonding config 3
>      testpmd> show bonding config 4
>
> -#. start all ports to check ports start action::
> +#. start top level bond port to check ports start action::
>
> -    testpmd> port start all
> +    testpmd> port start 4
>      testpmd> start
>
>  #. close testpmd::
> @@ -194,9 +194,9 @@ steps
>      testpmd> add bonding slave 3 4
>      testpmd> show bonding config 4
>
> -#. start all bonded device ports::
> +#. start top level bond port::
>
> -    testpmd> port start all
> +    testpmd> port start 4
>      testpmd> start
>
>  #. send 100 tcp packets to portA 0 and portA 1::
> @@ -260,9 +260,9 @@ steps
>      testpmd> add bonding slave 5 6
>      testpmd> show bonding config 6
>
> -#. start all ports::
> +#. start top level bond port::
>
> -    testpmd> port start all
> +    testpmd> port start 6
>      testpmd> start
>
>  #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
> @@ -317,9 +317,9 @@ steps
>      testpmd> add bonding slave 3 4
>      testpmd> show bonding config 4
>
> -#. start all ports::
> +#. start top level bond port::
>
> -    testpmd> port start all
> +    testpmd> port start 4
>      testpmd> start
>
>  #. send 100 packets to portA 0 and portA 1::
> @@ -385,9 +385,9 @@ steps
>      testpmd> add bonding slave 5 6
>      testpmd> show bonding config 6
>
> -#. start all ports::
> +#. start top level bond port::
>
> -    testpmd> port start all
> +    testpmd> port start 6
>      testpmd> start
>
>  #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
> --
> 1.9.3
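
The quoted test plan drives testpmd from its CLI; the same single-start behaviour can also be exercised from an application through the DPDK bonding API. The sketch below is editorial and not part of the patch: port ids 0-3 for the physical NICs, the vdev names, the active-backup mode and the queue sizes are assumptions, and error handling is trimmed.

/*
 * Minimal sketch (assumptions noted above): build a two-level stacked
 * bond and start only the top-level bond port, as the updated test plan
 * does with "port start 4"/"port start 6" instead of "port start all".
 */
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_eth_bond.h>

int main(int argc, char **argv)
{
	struct rte_eth_conf conf = { 0 };
	struct rte_mempool *pool;
	int bond_a, bond_b, bond_top;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	pool = rte_pktmbuf_pool_create("mbufs", 8192, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (pool == NULL)
		return -1;

	/* First level: one bond device per pair of physical ports. */
	bond_a = rte_eth_bond_create("net_bonding_a",
			BONDING_MODE_ACTIVE_BACKUP, rte_socket_id());
	rte_eth_bond_slave_add(bond_a, 0);
	rte_eth_bond_slave_add(bond_a, 1);

	bond_b = rte_eth_bond_create("net_bonding_b",
			BONDING_MODE_ACTIVE_BACKUP, rte_socket_id());
	rte_eth_bond_slave_add(bond_b, 2);
	rte_eth_bond_slave_add(bond_b, 3);

	/* Second level: a top bond stacked on the two lower bond ports. */
	bond_top = rte_eth_bond_create("net_bonding_top",
			BONDING_MODE_ACTIVE_BACKUP, rte_socket_id());
	rte_eth_bond_slave_add(bond_top, bond_a);
	rte_eth_bond_slave_add(bond_top, bond_b);

	/*
	 * Configure and start ONLY the top-level bond port. The bonding
	 * PMD is expected to propagate the start down to the lower bonds
	 * and the physical NICs, which is what the updated test plan
	 * checks.
	 */
	rte_eth_dev_configure(bond_top, 1, 1, &conf);
	rte_eth_rx_queue_setup(bond_top, 0, 512, rte_socket_id(), NULL, pool);
	rte_eth_tx_queue_setup(bond_top, 0, 512, rte_socket_id(), NULL);
	rte_eth_dev_start(bond_top);

	return 0;
}

Starting every port individually, which is what "port start all" amounts to, can bring a slave up independently of (or ahead of) the bond port above it; that is presumably the race the commit message refers to. Starting only the top-level port leaves the bring-up ordering to the bonding PMD.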