From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Wu, ChangqingX"
To: "Mo, YufengX", "dts@dpdk.org"
Date: Mon, 26 Aug 2019 07:10:50 +0000
Message-ID: <7F81DD3887C58F49A6B2EFEC3C28E22E0B746E79@SHSMSX101.ccr.corp.intel.com>
References: <1566797465-122445-1-git-send-email-yufengx.mo@intel.com>
 <1566797465-122445-2-git-send-email-yufengx.mo@intel.com>
In-Reply-To: <1566797465-122445-2-git-send-email-yufengx.mo@intel.com>
Subject: Re: [dts] [PATCH V1]test_plans/pmd_stacked_bonded: update test plan

Tested-by: Wu, ChangqingX

-----Original Message-----
From: Mo, YufengX
Sent: Monday, August 26, 2019 1:31 PM
To: Wu, ChangqingX; dts@dpdk.org
Cc: Mo, YufengX
Subject: [dts][PATCH V1]test_plans/pmd_stacked_bonded: update test plan

The "port start all" action implies a race condition. The right way is to
start the top level bond port only, and let it propagate the start action to
the slave bond ports and their real NICs.

Signed-off-by: yufengmx
---
 test_plans/pmd_stacked_bonded_test_plan.rst | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst b/test_plans/pmd_stacked_bonded_test_plan.rst
index be0da9a..a864f1f 100644
--- a/test_plans/pmd_stacked_bonded_test_plan.rst
+++ b/test_plans/pmd_stacked_bonded_test_plan.rst
@@ -41,7 +41,7 @@ based on RRC. The customer already uses Mellanox 100G NICs.
 Mellanox 100G NICs support a proper x16 PCIe interface so the host sees a single
 netdev and that netdev corresponds directly to the 100G Ethernet port. They indicated
 that in their current system they bond multiple 100G NICs together, using DPDK bonding
-API in their application. They are interested in looking at an alternatve source
+API in their application. They are interested in looking at an alternative source
 for the 100G NIC and are in conversation with Silicom who are shipping a 100G RRC
 based NIC (something like Boulder Rapids). The issue they have with RRC NIC is that
 the NIC presents as two PCIe interfaces (netdevs) instead of one. If the
@@ -140,9 +140,9 @@ steps
     testpmd> show bonding config 3
     testpmd> show bonding config 4
 
-#. start all ports to check ports start action::
+#. start top level bond port to check ports start action::
 
-    testpmd> port start all
+    testpmd> port start 4
     testpmd> start
 
 #. close testpmd::
@@ -194,9 +194,9 @@ steps
     testpmd> add bonding slave 3 4
     testpmd> show bonding config 4
 
-#. start all bonded device ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 4
     testpmd> start
 
 #. send 100 tcp packets to portA 0 and portA 1::
@@ -260,9 +260,9 @@ steps
     testpmd> add bonding slave 5 6
     testpmd> show bonding config 6
 
-#. start all ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 6
     testpmd> start
 
 #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
@@ -317,9 +317,9 @@ steps
     testpmd> add bonding slave 3 4
     testpmd> show bonding config 4
 
-#. start all ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 4
     testpmd> start
 
 #. send 100 packets to portA 0 and portA 1::
@@ -385,9 +385,9 @@ steps
     testpmd> add bonding slave 5 6
     testpmd> show bonding config 6
 
-#. start all ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 6
     testpmd> start
 
 #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
-- 
1.9.3
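
A minimal stacked-bonded sketch of the point above, for readers who want to try
the start-order change outside the full test plan. It is an illustration only:
the bonding mode (0), socket id (0) and the resulting bond port numbers (2 and 3)
are assumptions rather than the plan's exact layout, and the trailing "# ..." notes
are annotations, not testpmd input. The only thing it shows is that starting the
top level bond port propagates the start action down to the slave bond port and
the physical ports, so "port start all" is not needed::

    testpmd> port stop all                # build the stack with every port stopped
    testpmd> create bonded device 0 0     # first-level bond over the physical ports, e.g. new port 2
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> create bonded device 0 0     # second-level (top) bond, e.g. new port 3
    testpmd> add bonding slave 2 3
    testpmd> show bonding config 3
    testpmd> port start 3                 # start only the top level bond port; it brings up port 2 and the physical ports
    testpmd> start                        # begin packet forwarding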