test suite reviews and discussions
* [dts] [PATCH V1]test_plans/pmd_stacked_bonded: update test plan
@ 2019-08-26  5:31 yufengmx
  2019-08-26  5:31 ` yufengmx
  0 siblings, 1 reply; 4+ messages in thread
From: yufengmx @ 2019-08-26  5:31 UTC (permalink / raw)
  To: changqingx.wu, dts; +Cc: yufengmx

 quote Nicolau, Radu
 port start all is not really a defect, but it exposes a race condition that this
 setup implies; the right way will be to start the top level bond port only, and
 let it propagate the start to the bonded ports, which will propagate it to
 the real NICs.
 
 fix typo 
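
As a reference, a minimal testpmd sketch of the recommended flow (bonding mode and
port IDs are illustrative and assumed to follow the test plan's numbering, with
physical ports 0/1, bottom level bonds 2/3 and the top level bond 4):

    # create two bottom level bonds, one over each physical port
    testpmd> create bonded device 0 0
    testpmd> add bonding slave 0 2
    testpmd> create bonded device 0 0
    testpmd> add bonding slave 1 3
    # stack a top level bond on the two bottom level bonds
    testpmd> create bonded device 0 0
    testpmd> add bonding slave 2 4
    testpmd> add bonding slave 3 4
    testpmd> show bonding config 4
    # start only the top level bond port; the start propagates down to the
    # slave bonds and then to the real NICs, avoiding the race condition
    testpmd> port start 4
    testpmd> start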

yufengmx (1):
  test_plans/pmd_stacked_bonded: update test plan

 test_plans/pmd_stacked_bonded_test_plan.rst | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

-- 
1.9.3



* [dts] [PATCH V1]test_plans/pmd_stacked_bonded: update test plan
  2019-08-26  5:31 [dts] [PATCH V1]test_plans/pmd_stacked_bonded: update test plan yufengmx
@ 2019-08-26  5:31 ` yufengmx
  2019-08-26  7:10   ` Wu, ChangqingX
  2019-09-04  5:04   ` Tu, Lijuan
  0 siblings, 2 replies; 4+ messages in thread
From: yufengmx @ 2019-08-26  5:31 UTC (permalink / raw)
  To: changqingx.wu, dts; +Cc: yufengmx


The port start all action implies a race condition. The right way is to start the
top level bond port only, and let it propagate the start action to the slave bond
ports and then to the real NICs.

Signed-off-by: yufengmx <yufengx.mo@intel.com>
---
 test_plans/pmd_stacked_bonded_test_plan.rst | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst b/test_plans/pmd_stacked_bonded_test_plan.rst
index be0da9a..a864f1f 100644
--- a/test_plans/pmd_stacked_bonded_test_plan.rst
+++ b/test_plans/pmd_stacked_bonded_test_plan.rst
@@ -41,7 +41,7 @@ based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
 support a proper x16 PCIe interface so the host sees a single netdev and that
 netdev corresponds directly to the 100G Ethernet port. They indicated that in
 their current system they bond multiple 100G NICs together, using DPDK bonding
-API in their application. They are interested in looking at an alternatve source
+API in their application. They are interested in looking at an alternative source
 for the 100G NIC and are in conversation with Silicom who are shipping a 100G
 RRC based NIC (something like Boulder Rapids). The issue they have with RRC NIC
 is that the NIC presents as two PCIe interfaces (netdevs) instead of one. If the
@@ -140,9 +140,9 @@ steps
     testpmd> show bonding config 3
     testpmd> show bonding config 4
 
-#. start all ports to check ports start action::
+#. start top level bond port to check ports start action::
 
-    testpmd> port start all
+    testpmd> port start 4
     testpmd> start
 
 #. close testpmd::
@@ -194,9 +194,9 @@ steps
     testpmd> add bonding slave 3 4
     testpmd> show bonding config 4
 
-#. start all bonded device ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 4
     testpmd> start
 
 #. send 100 tcp packets to portA 0 and portA 1::
@@ -260,9 +260,9 @@ steps
     testpmd> add bonding slave 5 6
     testpmd> show bonding config 6
 
-#. start all ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 6
     testpmd> start
 
 #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
@@ -317,9 +317,9 @@ steps
     testpmd> add bonding slave 3 4
     testpmd> show bonding config 4
 
-#. start all ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 4
     testpmd> start
 
 #. send 100 packets to portA 0 and portA 1::
@@ -385,9 +385,9 @@ steps
     testpmd> add bonding slave 5 6
     testpmd> show bonding config 6
 
-#. start all ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 6
     testpmd> start
 
 #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
-- 
1.9.3



* Re: [dts] [PATCH V1]test_plans/pmd_stacked_bonded: update test plan
  2019-08-26  5:31 ` yufengmx
@ 2019-08-26  7:10   ` Wu, ChangqingX
  2019-09-04  5:04   ` Tu, Lijuan
  1 sibling, 0 replies; 4+ messages in thread
From: Wu, ChangqingX @ 2019-08-26  7:10 UTC (permalink / raw)
  To: Mo, YufengX, dts

Tested-by: Wu, ChangqingX <changqingx.wu@intel.com>

-----Original Message-----
From: Mo, YufengX 
Sent: Monday, August 26, 2019 1:31 PM
To: Wu, ChangqingX <changqingx.wu@intel.com>; dts@dpdk.org
Cc: Mo, YufengX <yufengx.mo@intel.com>
Subject: [dts][PATCH V1]test_plans/pmd_stacked_bonded: update test plan


The port start all action implies a race condition. The right way is to start the top level bond port only, and let it propagate the start action to the slave bond ports and then to the real NICs.

Signed-off-by: yufengmx <yufengx.mo@intel.com>
---
 test_plans/pmd_stacked_bonded_test_plan.rst | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst b/test_plans/pmd_stacked_bonded_test_plan.rst
index be0da9a..a864f1f 100644
--- a/test_plans/pmd_stacked_bonded_test_plan.rst
+++ b/test_plans/pmd_stacked_bonded_test_plan.rst
@@ -41,7 +41,7 @@ based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
 support a proper x16 PCIe interface so the host sees a single netdev and that
 netdev corresponds directly to the 100G Ethernet port. They indicated that in
 their current system they bond multiple 100G NICs together, using DPDK bonding
-API in their application. They are interested in looking at an alternatve source
+API in their application. They are interested in looking at an alternative source
 for the 100G NIC and are in conversation with Silicom who are shipping a 100G
 RRC based NIC (something like Boulder Rapids). The issue they have with RRC NIC
 is that the NIC presents as two PCIe interfaces (netdevs) instead of one. If the
@@ -140,9 +140,9 @@ steps
     testpmd> show bonding config 3
     testpmd> show bonding config 4
 
-#. start all ports to check ports start action::
+#. start top level bond port to check ports start action::
 
-    testpmd> port start all
+    testpmd> port start 4
     testpmd> start
 
 #. close testpmd::
@@ -194,9 +194,9 @@ steps
     testpmd> add bonding slave 3 4
     testpmd> show bonding config 4
 
-#. start all bonded device ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 4
     testpmd> start
 
 #. send 100 tcp packets to portA 0 and portA 1::
@@ -260,9 +260,9 @@ steps
     testpmd> add bonding slave 5 6
     testpmd> show bonding config 6
 
-#. start all ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 6
     testpmd> start
 
 #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
@@ -317,9 +317,9 @@ steps
     testpmd> add bonding slave 3 4
     testpmd> show bonding config 4
 
-#. start all ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 4
     testpmd> start
 
 #. send 100 packets to portA 0 and portA 1::
@@ -385,9 +385,9 @@ steps
     testpmd> add bonding slave 5 6
     testpmd> show bonding config 6
 
-#. start all ports::
+#. start top level bond port::
 
-    testpmd> port start all
+    testpmd> port start 6
     testpmd> start
 
 #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
--
1.9.3



* Re: [dts] [PATCH V1]test_plans/pmd_stacked_bonded: update test plan
  2019-08-26  5:31 ` yufengmx
  2019-08-26  7:10   ` Wu, ChangqingX
@ 2019-09-04  5:04   ` Tu, Lijuan
  1 sibling, 0 replies; 4+ messages in thread
From: Tu, Lijuan @ 2019-09-04  5:04 UTC (permalink / raw)
  To: Mo, YufengX, Wu, ChangqingX, dts; +Cc: Mo, YufengX

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of yufengmx
> Sent: Monday, August 26, 2019 1:31 PM
> To: Wu, ChangqingX <changqingx.wu@intel.com>; dts@dpdk.org
> Cc: Mo, YufengX <yufengx.mo@intel.com>
> Subject: [dts] [PATCH V1]test_plans/pmd_stacked_bonded: update test plan
> 
> 
> The port start all action implies a race condition. The right way is to start the
> top level bond port only, and let it propagate the start action to the slave bond
> ports and then to the real NICs.
> 
> Signed-off-by: yufengmx <yufengx.mo@intel.com>
> ---
>  test_plans/pmd_stacked_bonded_test_plan.rst | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
> 
> diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst
> b/test_plans/pmd_stacked_bonded_test_plan.rst
> index be0da9a..a864f1f 100644
> --- a/test_plans/pmd_stacked_bonded_test_plan.rst
> +++ b/test_plans/pmd_stacked_bonded_test_plan.rst
> @@ -41,7 +41,7 @@ based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
>  support a proper x16 PCIe interface so the host sees a single netdev and that
>  netdev corresponds directly to the 100G Ethernet port. They indicated that in
>  their current system they bond multiple 100G NICs together, using DPDK bonding
> -API in their application. They are interested in looking at an alternatve source
> +API in their application. They are interested in looking at an alternative source
>  for the 100G NIC and are in conversation with Silicom who are shipping a 100G
>  RRC based NIC (something like Boulder Rapids). The issue they have with RRC NIC
>  is that the NIC presents as two PCIe interfaces (netdevs) instead of one. If the
> @@ -140,9 +140,9 @@ steps
>      testpmd> show bonding config 3
>      testpmd> show bonding config 4
> 
> -#. start all ports to check ports start action::
> +#. start top level bond port to check ports start action::
> 
> -    testpmd> port start all
> +    testpmd> port start 4
>      testpmd> start
> 
>  #. close testpmd::
> @@ -194,9 +194,9 @@ steps
>      testpmd> add bonding slave 3 4
>      testpmd> show bonding config 4
> 
> -#. start all bonded device ports::
> +#. start top level bond port::
> 
> -    testpmd> port start all
> +    testpmd> port start 4
>      testpmd> start
> 
>  #. send 100 tcp packets to portA 0 and portA 1::
> @@ -260,9 +260,9 @@ steps
>      testpmd> add bonding slave 5 6
>      testpmd> show bonding config 6
> 
> -#. start all ports::
> +#. start top level bond port::
> 
> -    testpmd> port start all
> +    testpmd> port start 6
>      testpmd> start
> 
>  #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
> @@ -317,9 +317,9 @@ steps
>      testpmd> add bonding slave 3 4
>      testpmd> show bonding config 4
> 
> -#. start all ports::
> +#. start top level bond port::
> 
> -    testpmd> port start all
> +    testpmd> port start 4
>      testpmd> start
> 
>  #. send 100 packets to portA 0 and portA 1::
> @@ -385,9 +385,9 @@ steps
>      testpmd> add bonding slave 5 6
>      testpmd> show bonding config 6
> 
> -#. start all ports::
> +#. start top level bond port::
> 
> -    testpmd> port start all
> +    testpmd> port start 6
>      testpmd> start
> 
>  #. send 100 packets to portA 0/portA 0a/portA 1/portA 1a separately::
> --
> 1.9.3



