DPDK patches and discussions
 help / color / mirror / Atom feed
* [PATCH 0/3] support setting lanes
@ 2024-03-12  7:52 Dengdui Huang
  2024-03-12  7:52 ` [PATCH 1/3] ethdev: " Dengdui Huang
                   ` (3 more replies)
  0 siblings, 4 replies; 51+ messages in thread
From: Dengdui Huang @ 2024-03-12  7:52 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

Some speeds can be achieved with different numbers of lanes. For
example, 100Gbps can be achieved using two lanes of 50Gbps or four
lanes of 25Gbps. When the two ends of a link use a different number
of lanes, the port cannot come up.

This patch set adds support for setting lanes in ethdev.
In addition, it modifies testpmd and the hns3 driver to adapt to it.
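
For illustration only (not part of this series), here is a minimal
sketch of how an application could request a fixed 100G link over four
25G lanes with the proposed API; the helper and flag names are the ones
introduced in patch 1/3:

#include <rte_ethdev.h>

/* Sketch: configure port_id for a fixed 100G link over 4 x 25G lanes. */
static int
configure_100g_4lanes(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = { 0 };
	uint32_t speed;

	speed = rte_eth_speed_bitflag(RTE_ETH_SPEED_NUM_100G,
				      RTE_ETH_LANES_4,
				      RTE_ETH_LINK_FULL_DUPLEX);
	if (speed == 0)
		return -EINVAL; /* speed/lanes/duplex combination not mappable */

	conf.link_speeds = speed | RTE_ETH_LINK_SPEED_FIXED;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}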

Dengdui Huang (3):
  ethdev: support setting lanes
  net/hns3: support setting lanes
  app/testpmd: support setting lanes

 app/test-pmd/cmdline.c                      | 110 ++++----
 app/test-pmd/config.c                       |  60 +++--
 doc/guides/rel_notes/release_24_03.rst      |  11 +-
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   3 +-
 drivers/net/bnxt/bnxt_ethdev.c              |   3 +-
 drivers/net/hns3/hns3_cmd.h                 |  15 +-
 drivers/net/hns3/hns3_common.c              |   3 +-
 drivers/net/hns3/hns3_ethdev.c              | 112 ++++----
 drivers/net/hns3/hns3_ethdev.h              |   2 +
 lib/ethdev/ethdev_driver.h                  |   1 -
 lib/ethdev/ethdev_linux_ethtool.c           | 101 +++++++-
 lib/ethdev/ethdev_private.h                 |   4 +
 lib/ethdev/ethdev_trace.h                   |   4 +-
 lib/ethdev/meson.build                      |   2 +
 lib/ethdev/rte_ethdev.c                     | 272 +++++++++++++++++++-
 lib/ethdev/rte_ethdev.h                     |  99 +++++--
 lib/ethdev/version.map                      |   7 +
 17 files changed, 635 insertions(+), 174 deletions(-)

-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH 1/3] ethdev: support setting lanes
  2024-03-12  7:52 [PATCH 0/3] support setting lanes Dengdui Huang
@ 2024-03-12  7:52 ` Dengdui Huang
  2024-03-19  3:02   ` Stephen Hemminger
  2024-03-12  7:52 ` [PATCH 2/3] net/hns3: " Dengdui Huang
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 51+ messages in thread
From: Dengdui Huang @ 2024-03-12  7:52 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

Some speeds can be achieved with different numbers of lanes. For
example, 100Gbps can be achieved using two lanes of 50Gbps or four
lanes of 25Gbps. When the two ends of a link use a different number
of lanes, the port cannot come up. This patch adds support for
setting and reporting lanes.

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
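Illustration only, not part of the patch: a sketch of how a caller
could walk a device's speed capability mask with the two helpers this
patch introduces, assuming the names added below:

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: print speed, lanes and duplex for each advertised capability bit. */
static void
dump_speed_capa(uint32_t speed_capa)
{
	struct rte_eth_speed_capa_info info;
	uint32_t i;

	for (i = 1; i < 32; i++) {
		if ((speed_capa & RTE_BIT32(i)) == 0)
			continue;
		if (rte_eth_speed_capa_to_info(RTE_BIT32(i), &info) != 0)
			continue; /* bit unknown to the mapping table */
		printf("%u Mbps, %u lane(s), %s\n", info.speed, info.lanes,
		       info.duplex == RTE_ETH_LINK_FULL_DUPLEX ? "FDX" : "HDX");
	}
}
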
 doc/guides/rel_notes/release_24_03.rst |   8 +-
 drivers/net/bnxt/bnxt_ethdev.c         |   3 +-
 drivers/net/hns3/hns3_ethdev.c         |   1 +
 lib/ethdev/ethdev_driver.h             |   1 -
 lib/ethdev/ethdev_linux_ethtool.c      | 101 ++++++++-
 lib/ethdev/ethdev_private.h            |   4 +
 lib/ethdev/ethdev_trace.h              |   4 +-
 lib/ethdev/meson.build                 |   2 +
 lib/ethdev/rte_ethdev.c                | 272 +++++++++++++++++++++++--
 lib/ethdev/rte_ethdev.h                |  99 +++++++--
 lib/ethdev/version.map                 |   7 +
 11 files changed, 466 insertions(+), 36 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 932688ca4d..75d93ee965 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -76,6 +76,10 @@ New Features
   * Added a fast path function ``rte_eth_tx_queue_count``
     to get the number of used descriptors of a Tx queue.
 
+* **Support setting lanes for ethdev.**
+
+  * Support setting lanes via the extended ``RTE_ETH_LINK_SPEED_*`` flags.
+  * Added a function to convert a bitmap flag to a link speed info struct.
+
 * **Added hash calculation of an encapsulated packet as done by the HW.**
 
   Added function to calculate hash when doing tunnel encapsulation:
@@ -240,9 +244,11 @@ ABI Changes
    This section is a comment. Do not overwrite or remove it.
    Also, make sure to start the actual text at the margin.
    =======================================================
-
 * No ABI change that would break compatibility with 23.11.
 
+* ethdev: Converting a numerical speed to a bitmap flag now takes lanes:
+  the function ``rte_eth_speed_bitflag`` gained a lanes parameter.
+
 
 Known Issues
 ------------
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index ba31ae9286..e881a7f3cc 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -711,7 +711,8 @@ static int bnxt_update_phy_setting(struct bnxt *bp)
 	}
 
 	/* convert to speedbit flag */
-	curr_speed_bit = rte_eth_speed_bitflag((uint32_t)link->link_speed, 1);
+	curr_speed_bit = rte_eth_speed_bitflag((uint32_t)link->link_speed,
+					       RTE_ETH_LANES_UNKNOWN, 1);
 
 	/*
 	 * Device is not obliged link down in certain scenarios, even
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index b10d1216d2..ecd3b2ef64 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5969,6 +5969,7 @@ hns3_get_speed_fec_capa(struct rte_eth_fec_capa *speed_fec_capa,
 	for (i = 0; i < RTE_DIM(speed_fec_capa_tbl); i++) {
 		speed_bit =
 			rte_eth_speed_bitflag(speed_fec_capa_tbl[i].speed,
+					      RTE_ETH_LANES_UNKNOWN,
 					      RTE_ETH_LINK_FULL_DUPLEX);
 		if ((speed_capa & speed_bit) == 0)
 			continue;
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 0dbf2dd6a2..bb7dc7acb7 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -2003,7 +2003,6 @@ __rte_internal
 int
 rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag);
 
-
 /*
  * Legacy ethdev API used internally by drivers.
  */
diff --git a/lib/ethdev/ethdev_linux_ethtool.c b/lib/ethdev/ethdev_linux_ethtool.c
index e792204b01..b776ec6173 100644
--- a/lib/ethdev/ethdev_linux_ethtool.c
+++ b/lib/ethdev/ethdev_linux_ethtool.c
@@ -111,10 +111,107 @@ static const uint32_t link_modes[] = {
 	[101] =      11, /* ETHTOOL_LINK_MODE_10baseT1S_P2MP_Half_BIT */
 };
 
+/*
+ * Link modes sorted with index as defined in ethtool.
+ * Values are lanes.
+ */
+static const uint32_t link_modes_lanes[] = {
+	  [0] =      1, /* ETHTOOL_LINK_MODE_10baseT_Half_BIT */
+	  [1] =      1, /* ETHTOOL_LINK_MODE_10baseT_Full_BIT */
+	  [2] =      1, /* ETHTOOL_LINK_MODE_100baseT_Half_BIT */
+	  [3] =      1, /* ETHTOOL_LINK_MODE_100baseT_Full_BIT */
+	  [4] =      1, /* ETHTOOL_LINK_MODE_1000baseT_Half_BIT */
+	  [5] =      1, /* ETHTOOL_LINK_MODE_1000baseT_Full_BIT */
+	 [12] =      1, /* ETHTOOL_LINK_MODE_10000baseT_Full_BIT */
+	 [15] =      1, /* ETHTOOL_LINK_MODE_2500baseX_Full_BIT */
+	 [17] =      1, /* ETHTOOL_LINK_MODE_1000baseKX_Full_BIT */
+	 [18] =      4, /* ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT */
+	 [19] =      1, /* ETHTOOL_LINK_MODE_10000baseKR_Full_BIT */
+	 [20] =      1, /* ETHTOOL_LINK_MODE_10000baseR_FEC_BIT */
+	 [21] =      2, /* ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT */
+	 [22] =      2, /* ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT */
+	 [23] =      4, /* ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT */
+	 [24] =      4, /* ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT */
+	 [25] =      4, /* ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT */
+	 [26] =      4, /* ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT */
+	 [27] =      4, /* ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT */
+	 [28] =      4, /* ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT */
+	 [29] =      4, /* ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT */
+	 [30] =      4, /* ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT */
+	 [31] =      1, /* ETHTOOL_LINK_MODE_25000baseCR_Full_BIT */
+	 [32] =      1, /* ETHTOOL_LINK_MODE_25000baseKR_Full_BIT */
+	 [33] =      1, /* ETHTOOL_LINK_MODE_25000baseSR_Full_BIT */
+	 [34] =      2, /* ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT */
+	 [35] =      2, /* ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT */
+	 [36] =      4, /* ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT */
+	 [37] =      4, /* ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT */
+	 [38] =      4, /* ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT */
+	 [39] =      4, /* ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT */
+	 [40] =      2, /* ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT */
+	 [41] =      1, /* ETHTOOL_LINK_MODE_1000baseX_Full_BIT */
+	 [42] =      1, /* ETHTOOL_LINK_MODE_10000baseCR_Full_BIT */
+	 [43] =      1, /* ETHTOOL_LINK_MODE_10000baseSR_Full_BIT */
+	 [44] =      1, /* ETHTOOL_LINK_MODE_10000baseLR_Full_BIT */
+	 [45] =      1, /* ETHTOOL_LINK_MODE_10000baseLRM_Full_BIT */
+	 [46] =      1, /* ETHTOOL_LINK_MODE_10000baseER_Full_BIT */
+	 [47] =      1, /* ETHTOOL_LINK_MODE_2500baseT_Full_BIT */
+	 [48] =      1, /* ETHTOOL_LINK_MODE_5000baseT_Full_BIT */
+	 [52] =      1, /* ETHTOOL_LINK_MODE_50000baseKR_Full_BIT */
+	 [53] =      1, /* ETHTOOL_LINK_MODE_50000baseSR_Full_BIT */
+	 [54] =      1, /* ETHTOOL_LINK_MODE_50000baseCR_Full_BIT */
+	 [55] =      1, /* ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT */
+	 [56] =      1, /* ETHTOOL_LINK_MODE_50000baseDR_Full_BIT */
+	 [57] =      2, /* ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT */
+	 [58] =      2, /* ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT */
+	 [59] =      2, /* ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT */
+	 [60] =      2, /* ETHTOOL_LINK_MODE_100000baseLR2_ER2_FR2_Full_BIT */
+	 [61] =      2, /* ETHTOOL_LINK_MODE_100000baseDR2_Full_BIT */
+	 [62] =      4, /* ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT */
+	 [63] =      4, /* ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT */
+	 [64] =      4, /* ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT */
+	 [65] =      4, /* ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT */
+	 [66] =      4, /* ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT */
+	 [67] =      1, /* ETHTOOL_LINK_MODE_100baseT1_Full_BIT */
+	 [68] =      1, /* ETHTOOL_LINK_MODE_1000baseT1_Full_BIT */
+	 [69] =      8, /* ETHTOOL_LINK_MODE_400000baseKR8_Full_BIT */
+	 [70] =      8, /* ETHTOOL_LINK_MODE_400000baseSR8_Full_BIT */
+	 [71] =      8, /* ETHTOOL_LINK_MODE_400000baseLR8_ER8_FR8_Full_BIT */
+	 [72] =      8, /* ETHTOOL_LINK_MODE_400000baseDR8_Full_BIT */
+	 [73] =      8, /* ETHTOOL_LINK_MODE_400000baseCR8_Full_BIT */
+	 [75] =      1, /* ETHTOOL_LINK_MODE_100000baseKR_Full_BIT */
+	 [76] =      1, /* ETHTOOL_LINK_MODE_100000baseSR_Full_BIT */
+	 [77] =      1, /* ETHTOOL_LINK_MODE_100000baseLR_ER_FR_Full_BIT */
+	 [78] =      1, /* ETHTOOL_LINK_MODE_100000baseCR_Full_BIT */
+	 [79] =      1, /* ETHTOOL_LINK_MODE_100000baseDR_Full_BIT */
+	 [80] =      2, /* ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT */
+	 [81] =      2, /* ETHTOOL_LINK_MODE_200000baseSR2_Full_BIT */
+	 [82] =      2, /* ETHTOOL_LINK_MODE_200000baseLR2_ER2_FR2_Full_BIT */
+	 [83] =      2, /* ETHTOOL_LINK_MODE_200000baseDR2_Full_BIT */
+	 [84] =      2, /* ETHTOOL_LINK_MODE_200000baseCR2_Full_BIT */
+	 [85] =      4, /* ETHTOOL_LINK_MODE_400000baseKR4_Full_BIT */
+	 [86] =      4, /* ETHTOOL_LINK_MODE_400000baseSR4_Full_BIT */
+	 [87] =      4, /* ETHTOOL_LINK_MODE_400000baseLR4_ER4_FR4_Full_BIT */
+	 [88] =      4, /* ETHTOOL_LINK_MODE_400000baseDR4_Full_BIT */
+	 [89] =      4, /* ETHTOOL_LINK_MODE_400000baseCR4_Full_BIT */
+	 [90] =      1, /* ETHTOOL_LINK_MODE_100baseFX_Half_BIT */
+	 [91] =      1, /* ETHTOOL_LINK_MODE_100baseFX_Full_BIT */
+	 [92] =      1, /* ETHTOOL_LINK_MODE_10baseT1L_Full_BIT */
+	 [93] =      8, /* ETHTOOL_LINK_MODE_800000baseCR8_Full_BIT */
+	 [94] =      8, /* ETHTOOL_LINK_MODE_800000baseKR8_Full_BIT */
+	 [95] =      8, /* ETHTOOL_LINK_MODE_800000baseDR8_Full_BIT */
+	 [96] =      8, /* ETHTOOL_LINK_MODE_800000baseDR8_2_Full_BIT */
+	 [97] =      8, /* ETHTOOL_LINK_MODE_800000baseSR8_Full_BIT */
+	 [98] =      8, /* ETHTOOL_LINK_MODE_800000baseVR8_Full_BIT */
+	 [99] =      1, /* ETHTOOL_LINK_MODE_10baseT1S_Full_BIT */
+	[100] =      1, /* ETHTOOL_LINK_MODE_10baseT1S_Half_BIT */
+	[101] =      1, /* ETHTOOL_LINK_MODE_10baseT1S_P2MP_Half_BIT */
+};
+
 uint32_t
 rte_eth_link_speed_ethtool(enum ethtool_link_mode_bit_indices bit)
 {
 	uint32_t speed;
+	uint8_t lanes;
 	int duplex;
 
 	/* get mode from array */
@@ -131,7 +228,9 @@ rte_eth_link_speed_ethtool(enum ethtool_link_mode_bit_indices bit)
 			RTE_ETH_LINK_FULL_DUPLEX;
 	speed &= RTE_GENMASK32(31, 1);
 
-	return rte_eth_speed_bitflag(speed, duplex);
+	lanes = link_modes_lanes[bit];
+
+	return rte_eth_speed_bitflag(speed, lanes, duplex);
 }
 
 uint32_t
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 0d36b9c30f..9092ab3a9e 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -79,4 +79,8 @@ void eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid);
 int eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
 int eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
 
+/* versioned functions */
+uint32_t rte_eth_speed_bitflag_v24(uint32_t speed, int duplex);
+uint32_t rte_eth_speed_bitflag_v25(uint32_t speed, uint8_t lanes, int duplex);
+
 #endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/ethdev_trace.h b/lib/ethdev/ethdev_trace.h
index 3bec87bfdb..5547b49cab 100644
--- a/lib/ethdev/ethdev_trace.h
+++ b/lib/ethdev/ethdev_trace.h
@@ -183,8 +183,10 @@ RTE_TRACE_POINT(
 
 RTE_TRACE_POINT(
 	rte_eth_trace_speed_bitflag,
-	RTE_TRACE_POINT_ARGS(uint32_t speed, int duplex, uint32_t ret),
+	RTE_TRACE_POINT_ARGS(uint32_t speed, uint8_t lanes, int duplex,
+			     uint32_t ret),
 	rte_trace_point_emit_u32(speed);
+	rte_trace_point_emit_u8(lanes);
 	rte_trace_point_emit_int(duplex);
 	rte_trace_point_emit_u32(ret);
 )
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index f1d2586591..2c9588d0b3 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -62,3 +62,5 @@ endif
 if get_option('buildtype').contains('debug')
     cflags += ['-DRTE_FLOW_DEBUG']
 endif
+
+use_function_versioning = true
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index f1c658f49e..522f8796b1 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -26,6 +26,7 @@
 #include <rte_class.h>
 #include <rte_ether.h>
 #include <rte_telemetry.h>
+#include <rte_function_versioning.h>
 
 #include "rte_ethdev.h"
 #include "rte_ethdev_trace_fp.h"
@@ -991,63 +992,111 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
 	return ret;
 }
 
-uint32_t
-rte_eth_speed_bitflag(uint32_t speed, int duplex)
+uint32_t __vsym
+rte_eth_speed_bitflag_v25(uint32_t speed, uint8_t lanes, int duplex)
 {
-	uint32_t ret;
+	uint32_t ret = 0;
 
 	switch (speed) {
 	case RTE_ETH_SPEED_NUM_10M:
+		if (lanes != RTE_ETH_LANES_UNKNOWN && lanes != RTE_ETH_LANES_1)
+			break;
 		ret = duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
 		break;
 	case RTE_ETH_SPEED_NUM_100M:
+		if (lanes != RTE_ETH_LANES_UNKNOWN && lanes != RTE_ETH_LANES_1)
+			break;
 		ret = duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
 		break;
 	case RTE_ETH_SPEED_NUM_1G:
+		if (lanes != RTE_ETH_LANES_UNKNOWN && lanes != RTE_ETH_LANES_1)
+			break;
 		ret = RTE_ETH_LINK_SPEED_1G;
 		break;
 	case RTE_ETH_SPEED_NUM_2_5G:
+		if (lanes != RTE_ETH_LANES_UNKNOWN && lanes != RTE_ETH_LANES_1)
+			break;
 		ret = RTE_ETH_LINK_SPEED_2_5G;
 		break;
 	case RTE_ETH_SPEED_NUM_5G:
+		if (lanes != RTE_ETH_LANES_UNKNOWN && lanes != RTE_ETH_LANES_1)
+			break;
 		ret = RTE_ETH_LINK_SPEED_5G;
 		break;
 	case RTE_ETH_SPEED_NUM_10G:
-		ret = RTE_ETH_LINK_SPEED_10G;
+		if (lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_10G;
+		if (lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_10G_4LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_20G:
-		ret = RTE_ETH_LINK_SPEED_20G;
+		if (lanes != RTE_ETH_LANES_UNKNOWN && lanes != RTE_ETH_LANES_2)
+			break;
+		ret = RTE_ETH_LINK_SPEED_20G_2LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_25G:
+		if (lanes != RTE_ETH_LANES_UNKNOWN && lanes != RTE_ETH_LANES_1)
+			break;
 		ret = RTE_ETH_LINK_SPEED_25G;
 		break;
 	case RTE_ETH_SPEED_NUM_40G:
-		ret = RTE_ETH_LINK_SPEED_40G;
+		if (lanes != RTE_ETH_LANES_UNKNOWN && lanes != RTE_ETH_LANES_4)
+			break;
+		ret = RTE_ETH_LINK_SPEED_40G_4LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_50G:
-		ret = RTE_ETH_LINK_SPEED_50G;
+		if (lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_50G;
+		if (lanes == RTE_ETH_LANES_2)
+			ret = RTE_ETH_LINK_SPEED_50G_2LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_56G:
-		ret = RTE_ETH_LINK_SPEED_56G;
+		if (lanes != RTE_ETH_LANES_UNKNOWN && lanes != RTE_ETH_LANES_4)
+			break;
+		ret = RTE_ETH_LINK_SPEED_56G_4LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_100G:
-		ret = RTE_ETH_LINK_SPEED_100G;
+		if (lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_100G;
+		if (lanes == RTE_ETH_LANES_2)
+			ret = RTE_ETH_LINK_SPEED_100G_2LANES;
+		if (lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_100G_4LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_200G:
-		ret = RTE_ETH_LINK_SPEED_200G;
+		if (lanes == RTE_ETH_LANES_2)
+			ret = RTE_ETH_LINK_SPEED_200G_2LANES;
+		if (lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_200G_4LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_400G:
-		ret = RTE_ETH_LINK_SPEED_400G;
+		if (lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_400G_4LANES;
+		if (lanes == RTE_ETH_LANES_8)
+			ret = RTE_ETH_LINK_SPEED_400G_8LANES;
 		break;
 	default:
 		ret = 0;
 	}
 
-	rte_eth_trace_speed_bitflag(speed, duplex, ret);
+	rte_eth_trace_speed_bitflag(speed, lanes, duplex, ret);
 
 	return ret;
 }
 
+uint32_t __vsym
+rte_eth_speed_bitflag_v24(uint32_t speed, int duplex)
+{
+	return rte_eth_speed_bitflag_v25(speed, RTE_ETH_LANES_UNKNOWN, duplex);
+}
+
+/* mark the v24 function as the older version, and v25 as the default version */
+VERSION_SYMBOL(rte_eth_speed_bitflag, _v24, 24);
+BIND_DEFAULT_SYMBOL(rte_eth_speed_bitflag, _v25, 25);
+MAP_STATIC_SYMBOL(uint32_t rte_eth_speed_bitflag(uint32_t speed, uint8_t lanes, int duplex),
+		  rte_eth_speed_bitflag_v25);
+
 const char *
 rte_eth_dev_rx_offload_name(uint64_t offload)
 {
@@ -1066,6 +1115,204 @@ rte_eth_dev_rx_offload_name(uint64_t offload)
 	return name;
 }
 
+int
+rte_eth_speed_capa_to_info(uint32_t link_speed,
+			   struct rte_eth_speed_capa_info *capa_info)
+{
+	const struct {
+		uint32_t speed_capa;
+		struct rte_eth_speed_capa_info capa_info;
+	} speed_capa_info_map[] = {
+		{
+			RTE_ETH_LINK_SPEED_10M_HD,
+				{
+					RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_HALF_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_10M,
+				{
+					RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100M_HD,
+				{
+					RTE_ETH_SPEED_NUM_100M,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_HALF_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100M,
+				{
+					RTE_ETH_SPEED_NUM_100M,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_1G,
+				{
+					RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_2_5G,
+				{
+					RTE_ETH_SPEED_NUM_2_5G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_5G,
+				{
+					RTE_ETH_SPEED_NUM_5G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_10G,
+				{
+					RTE_ETH_SPEED_NUM_10G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_20G_2LANES,
+				{
+					RTE_ETH_SPEED_NUM_20G,
+					RTE_ETH_LANES_2,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_25G,
+				{
+					RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_40G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_40G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_50G,
+				{
+					RTE_ETH_SPEED_NUM_50G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_56G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_56G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100G,
+				{
+					RTE_ETH_SPEED_NUM_100G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_200G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_200G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_400G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_400G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_10G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_10G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_50G_2LANES,
+				{
+					RTE_ETH_SPEED_NUM_50G,
+					RTE_ETH_LANES_2,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100G_2LANES,
+				{
+					RTE_ETH_SPEED_NUM_100G,
+					RTE_ETH_LANES_2,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_100G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_200G_2LANES,
+				{
+					RTE_ETH_SPEED_NUM_200G,
+					RTE_ETH_LANES_2,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_400G_8LANES,
+				{
+					RTE_ETH_SPEED_NUM_400G,
+					RTE_ETH_LANES_8,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		}
+	};
+	uint32_t i;
+
+	for (i = 0; i < RTE_DIM(speed_capa_info_map); i++)
+		if (link_speed == speed_capa_info_map[i].speed_capa) {
+			capa_info->speed = speed_capa_info_map[i].capa_info.speed;
+			capa_info->lanes = speed_capa_info_map[i].capa_info.lanes;
+			capa_info->duplex = speed_capa_info_map[i].capa_info.duplex;
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
 const char *
 rte_eth_dev_tx_offload_name(uint64_t offload)
 {
@@ -3111,8 +3358,9 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		ret = snprintf(str, len, "Link down");
 	else
-		ret = snprintf(str, len, "Link up at %s %s %s",
+		ret = snprintf(str, len, "Link up at %s %u lanes %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
+			eth_link->link_lanes,
 			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
 			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 147257d6a2..3383ad8495 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -288,24 +288,40 @@ struct rte_eth_stats {
 /**@{@name Link speed capabilities
  * Device supported speeds bitmap flags
  */
-#define RTE_ETH_LINK_SPEED_AUTONEG 0             /**< Autonegotiate (all speeds) */
-#define RTE_ETH_LINK_SPEED_FIXED   RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
-#define RTE_ETH_LINK_SPEED_10M_HD  RTE_BIT32(1)  /**<  10 Mbps half-duplex */
-#define RTE_ETH_LINK_SPEED_10M     RTE_BIT32(2)  /**<  10 Mbps full-duplex */
-#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3)  /**< 100 Mbps half-duplex */
-#define RTE_ETH_LINK_SPEED_100M    RTE_BIT32(4)  /**< 100 Mbps full-duplex */
-#define RTE_ETH_LINK_SPEED_1G      RTE_BIT32(5)  /**<   1 Gbps */
-#define RTE_ETH_LINK_SPEED_2_5G    RTE_BIT32(6)  /**< 2.5 Gbps */
-#define RTE_ETH_LINK_SPEED_5G      RTE_BIT32(7)  /**<   5 Gbps */
-#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**<  10 Gbps */
-#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**<  20 Gbps */
-#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**<  25 Gbps */
-#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**<  40 Gbps */
-#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**<  50 Gbps */
-#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**<  56 Gbps */
-#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
-#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
-#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG        0             /**< Autonegotiate (all speeds) */
+#define RTE_ETH_LINK_SPEED_FIXED          RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
+#define RTE_ETH_LINK_SPEED_10M_HD         RTE_BIT32(1)  /**<  10 Mbps half-duplex */
+#define RTE_ETH_LINK_SPEED_10M            RTE_BIT32(2)  /**<  10 Mbps full-duplex */
+#define RTE_ETH_LINK_SPEED_100M_HD        RTE_BIT32(3)  /**< 100 Mbps half-duplex */
+#define RTE_ETH_LINK_SPEED_100M           RTE_BIT32(4)  /**< 100 Mbps full-duplex */
+#define RTE_ETH_LINK_SPEED_1G             RTE_BIT32(5)  /**<   1 Gbps */
+#define RTE_ETH_LINK_SPEED_2_5G           RTE_BIT32(6)  /**< 2.5 Gbps */
+#define RTE_ETH_LINK_SPEED_5G             RTE_BIT32(7)  /**<   5 Gbps */
+#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**<  10 Gbps */
+#define RTE_ETH_LINK_SPEED_20G_2LANES     RTE_BIT32(9)  /**<  20 Gbps 2 lanes */
+#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**<  25 Gbps */
+#define RTE_ETH_LINK_SPEED_40G_4LANES     RTE_BIT32(11) /**<  40 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**<  50 Gbps */
+#define RTE_ETH_LINK_SPEED_56G_4LANES     RTE_BIT32(13) /**<  56 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
+#define RTE_ETH_LINK_SPEED_200G_4LANES    RTE_BIT32(15) /**< 200 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_400G_4LANES    RTE_BIT32(16) /**< 400 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17) /**<  10 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**<  50 Gbps 2 lanes */
+#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
+#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2 lanes */
+#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8 lanes */
+/**@}*/
+
+/**@{@name Legacy link speed capabilities
+ * Aliases with default lane counts, kept for compatibility with earlier versions
+ */
+#define RTE_ETH_LINK_SPEED_20G	RTE_ETH_LINK_SPEED_20G_2LANES
+#define RTE_ETH_LINK_SPEED_40G	RTE_ETH_LINK_SPEED_40G_4LANES
+#define RTE_ETH_LINK_SPEED_56G	RTE_ETH_LINK_SPEED_56G_4LANES
+#define RTE_ETH_LINK_SPEED_200G	RTE_ETH_LINK_SPEED_200G_4LANES
+#define RTE_ETH_LINK_SPEED_400G	RTE_ETH_LINK_SPEED_400G_4LANES
 /**@}*/
 
 /**@{@name Link speed
@@ -329,6 +345,25 @@ struct rte_eth_stats {
 #define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
 /**@}*/
 
+/**@{@name Link lane number
+ * Ethernet lane number
+ */
+#define RTE_ETH_LANES_UNKNOWN    0 /**< Unknown */
+#define RTE_ETH_LANES_1          1 /**< 1 lane */
+#define RTE_ETH_LANES_2          2 /**< 2 lanes */
+#define RTE_ETH_LANES_4          4 /**< 4 lanes */
+#define RTE_ETH_LANES_8          8 /**< 8 lanes */
+/**@}*/
+
+/**
+ * A structure used to store link speed capability information.
+ */
+struct rte_eth_speed_capa_info {
+	uint32_t speed;        /**< RTE_ETH_SPEED_NUM_ */
+	uint8_t  lanes;        /**< RTE_ETH_LANES_ */
+	uint8_t  duplex;       /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+};
+
 /**
  * A structure used to retrieve link-level information of an Ethernet port.
  */
@@ -338,6 +373,7 @@ struct __rte_aligned(8) rte_eth_link { /**< aligned for atomic64 read/write */
 	uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
 	uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
 	uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
+	uint16_t link_lanes   : 8;  /**< RTE_ETH_LANES_ */
 };
 
 /**@{@name Link negotiation
@@ -1641,6 +1677,13 @@ struct rte_eth_conf {
 #define RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP         RTE_BIT64(3)
 /** Device supports keeping shared flow objects across restart. */
 #define RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP RTE_BIT64(4)
+/**
+ * Device supports setting lanes. When the driver does not support setting
+ * lanes, the lane count behind a reported speed capability is ambiguous;
+ * for example, if the driver reports the 200G speed capability
+ * (@see RTE_ETH_LINK_SPEED_200G), the number of lanes used may be 2 or 4.
+ */
+#define RTE_ETH_DEV_CAPA_SETTING_LANES RTE_BIT64(5)
 /**@}*/
 
 /*
@@ -2301,12 +2344,30 @@ uint16_t rte_eth_dev_count_total(void);
  *
  * @param speed
  *   Numerical speed value in Mbps
+ * @param lanes
+ *   RTE_ETH_LANES_x
  * @param duplex
  *   RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
  * @return
  *   0 if the speed cannot be mapped
  */
-uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
+uint32_t rte_eth_speed_bitflag(uint32_t speed, uint8_t lanes, int duplex);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Convert a speed bitmap flag to link speed capability info
+ *
+ * @param link_speed
+ *   A single speed capability bit (RTE_ETH_LINK_SPEED_*)
+ * @param capa_info
+ *   A pointer to a structure filled with the speed, lanes and duplex
+ * @return
+ *   - (0) if successful.
+ *   - (-EINVAL) if bad parameter.
+ */
+__rte_experimental
+int rte_eth_speed_capa_to_info(uint32_t link_speed,
+				struct rte_eth_speed_capa_info *capa_info);
 
 /**
  * Get RTE_ETH_RX_OFFLOAD_* flag name.
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 79f6f5293b..0e9c560920 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -169,6 +169,12 @@ DPDK_24 {
 	local: *;
 };
 
+DPDK_25 {
+	global:
+
+	rte_eth_speed_bitflag;
+} DPDK_24;
+
 EXPERIMENTAL {
 	global:
 
@@ -320,6 +326,7 @@ EXPERIMENTAL {
 	# added in 24.03
 	__rte_eth_trace_tx_queue_count;
 	rte_eth_find_rss_algo;
+	rte_eth_speed_capa_to_info;
 	rte_flow_async_update_resized;
 	rte_flow_calc_encap_hash;
 	rte_flow_template_table_resizable;
-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH 2/3] net/hns3: support setting lanes
  2024-03-12  7:52 [PATCH 0/3] support setting lanes Dengdui Huang
  2024-03-12  7:52 ` [PATCH 1/3] ethdev: " Dengdui Huang
@ 2024-03-12  7:52 ` Dengdui Huang
  2024-03-12  7:52 ` [PATCH 3/3] app/testpmd: " Dengdui Huang
  2024-03-18 14:56 ` [PATCH 0/3] " Thomas Monjalon
  3 siblings, 0 replies; 51+ messages in thread
From: Dengdui Huang @ 2024-03-12  7:52 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

Some speeds can be achieved with different numbers of lanes. For
example, 100Gbps can be achieved using two lanes of 50Gbps or four
lanes of 25Gbps. When the two ends of a link use a different number
of lanes, the port cannot come up. This patch adds support for
setting and reporting lanes.

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
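Illustration only, not part of the patch: before requesting a specific
lane count, an application can check the capability flag this driver now
reports (a sketch, assuming the flag name from patch 1/3):

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: warn when the PMD may silently ignore a lanes request. */
static int
check_lanes_support(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0)
		printf("port %u: setting lanes is not supported\n", port_id);
	return 0;
}
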
 doc/guides/rel_notes/release_24_03.rst |   2 +
 drivers/net/hns3/hns3_cmd.h            |  15 ++--
 drivers/net/hns3/hns3_common.c         |   3 +-
 drivers/net/hns3/hns3_ethdev.c         | 111 +++++++++++--------------
 drivers/net/hns3/hns3_ethdev.h         |   2 +
 5 files changed, 66 insertions(+), 67 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 75d93ee965..c17334ac25 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -180,6 +180,8 @@ New Features
   * Added power-saving during polling within the ``rte_event_dequeue_burst()`` API.
   * Added support for DMA adapter.
 
+* **Added setting lanes for the hns3 PF driver.**
+
+  * This feature adds support for setting and reporting lanes.
 
 Removed Items
 -------------
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
index 79a8c1edad..31ff7b35d8 100644
--- a/drivers/net/hns3/hns3_cmd.h
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -753,7 +753,9 @@ struct hns3_config_mac_mode_cmd {
 struct hns3_config_mac_speed_dup_cmd {
 	uint8_t speed_dup;
 	uint8_t mac_change_fec_en;
-	uint8_t rsv[22];
+	uint8_t rsv[4];
+	uint8_t lanes;
+	uint8_t rsv1[17];
 };
 
 #define HNS3_TQP_ENABLE_B		0
@@ -789,12 +791,15 @@ struct hns3_sfp_type {
 #define HNS3_FIBER_LINK_SPEED_1G_BIT		BIT(0)
 #define HNS3_FIBER_LINK_SPEED_10G_BIT		BIT(1)
 #define HNS3_FIBER_LINK_SPEED_25G_BIT		BIT(2)
-#define HNS3_FIBER_LINK_SPEED_50G_BIT		BIT(3)
-#define HNS3_FIBER_LINK_SPEED_100G_BIT		BIT(4)
+#define HNS3_FIBER_LINK_SPEED_50G_R2_BIT	BIT(3)
+#define HNS3_FIBER_LINK_SPEED_100G_R4_BIT	BIT(4)
 #define HNS3_FIBER_LINK_SPEED_40G_BIT		BIT(5)
 #define HNS3_FIBER_LINK_SPEED_100M_BIT		BIT(6)
 #define HNS3_FIBER_LINK_SPEED_10M_BIT		BIT(7)
-#define HNS3_FIBER_LINK_SPEED_200G_BIT		BIT(8)
+#define HNS3_FIBER_LINK_SPEED_200G_EXT_BIT	BIT(8)
+#define HNS3_FIBER_LINK_SPEED_50G_R1_BIT	BIT(9)
+#define HNS3_FIBER_LINK_SPEED_100G_R2_BIT	BIT(10)
+#define HNS3_FIBER_LINK_SPEED_200G_R4_BIT	BIT(11)
 
 #define HNS3_FIBER_FEC_AUTO_BIT		BIT(0)
 #define HNS3_FIBER_FEC_BASER_BIT	BIT(1)
@@ -823,7 +828,7 @@ struct hns3_sfp_info_cmd {
 	uint32_t supported_speed; /* speed supported by current media */
 	uint32_t module_type;
 	uint8_t fec_ability; /* supported fec modes, see HNS3_FIBER_FEC_XXX_BIT */
-	uint8_t rsv0;
+	uint8_t lanes;
 	uint8_t pause_status;
 	uint8_t rsv1[5];
 };
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c
index 28c26b049c..cccb441780 100644
--- a/drivers/net/hns3/hns3_common.c
+++ b/drivers/net/hns3/hns3_common.c
@@ -92,7 +92,8 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	info->dev_capa = RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP |
-			 RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP;
+			 RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP |
+			 RTE_ETH_DEV_CAPA_SETTING_LANES;
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa |= RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				  RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index ecd3b2ef64..8a19c3ddd9 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2235,12 +2235,16 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
 		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
 		speed_capa |= RTE_ETH_LINK_SPEED_40G;
-	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_R1_BIT)
 		speed_capa |= RTE_ETH_LINK_SPEED_50G;
-	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
-		speed_capa |= RTE_ETH_LINK_SPEED_100G;
-	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
-		speed_capa |= RTE_ETH_LINK_SPEED_200G;
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_R2_BIT)
+		speed_capa |= RTE_ETH_LINK_SPEED_50G_2LANES;
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_R4_BIT)
+		speed_capa |= RTE_ETH_LINK_SPEED_100G_4LANES;
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_R2_BIT)
+		speed_capa |= RTE_ETH_LINK_SPEED_100G_2LANES;
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_R4_BIT)
+		speed_capa |= RTE_ETH_LINK_SPEED_200G_4LANES;
 
 	return speed_capa;
 }
@@ -2308,6 +2312,7 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
 	if (!mac->link_status)
 		new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 
+	new_link->link_lanes = mac->link_lanes;
 	new_link->link_duplex = mac->link_duplex;
 	new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link->link_autoneg = mac->link_autoneg;
@@ -2934,7 +2939,8 @@ hns3_map_tqp(struct hns3_hw *hw)
 }
 
 static int
-hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
+hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t lanes,
+			  uint8_t duplex)
 {
 	struct hns3_config_mac_speed_dup_cmd *req;
 	struct hns3_cmd_desc desc;
@@ -2989,6 +2995,7 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 	}
 
 	hns3_set_bit(req->mac_change_fec_en, HNS3_CFG_MAC_SPEED_CHANGE_EN_B, 1);
+	req->lanes = lanes;
 
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret)
@@ -3643,7 +3650,9 @@ hns3_mac_init(struct hns3_hw *hw)
 
 	pf->support_sfp_query = true;
 	mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
+	/* If lanes is set to 0, the firmware selects the default lane number. */
+	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, 0,
+					mac->link_duplex);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
 		return ret;
@@ -4052,6 +4061,7 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 		return 0;
 
 	mac_info->link_speed = rte_le_to_cpu_32(resp->sfp_speed);
+	mac_info->link_lanes = resp->lanes;
 	/*
 	 * if resp->supported_speed is 0, it means it's an old version
 	 * firmware, do not update these params.
@@ -4088,16 +4098,18 @@ hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
 }
 
 static int
-hns3_cfg_mac_speed_dup(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
+hns3_cfg_mac_speed_dup(struct hns3_hw *hw, uint32_t speed, uint8_t lanes,
+		       uint8_t duplex)
 {
 	struct hns3_mac *mac = &hw->mac;
 	int ret;
 
 	duplex = hns3_check_speed_dup(duplex, speed);
-	if (mac->link_speed == speed && mac->link_duplex == duplex)
+	if (mac->link_speed == speed && mac->link_lanes == lanes &&
+	    mac->link_duplex == duplex)
 		return 0;
 
-	ret = hns3_cfg_mac_speed_dup_hw(hw, speed, duplex);
+	ret = hns3_cfg_mac_speed_dup_hw(hw, speed, lanes, duplex);
 	if (ret)
 		return ret;
 
@@ -4106,6 +4118,7 @@ hns3_cfg_mac_speed_dup(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 		return ret;
 
 	mac->link_speed = speed;
+	mac->link_lanes = lanes;
 	mac->link_duplex = duplex;
 
 	return 0;
@@ -4150,6 +4163,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 		}
 
 		mac->link_speed = mac_info.link_speed;
+		mac->link_lanes = mac_info.link_lanes;
 		mac->supported_speed = mac_info.supported_speed;
 		mac->support_autoneg = mac_info.support_autoneg;
 		mac->link_autoneg = mac_info.link_autoneg;
@@ -4161,7 +4175,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 	}
 
 	/* Config full duplex for SFP */
-	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
+	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed, mac_info.link_lanes,
 				      RTE_ETH_LINK_FULL_DUPLEX);
 }
 
@@ -4512,11 +4526,11 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
 	case RTE_ETH_SPEED_NUM_40G:
 		return HNS3_FIBER_LINK_SPEED_40G_BIT;
 	case RTE_ETH_SPEED_NUM_50G:
-		return HNS3_FIBER_LINK_SPEED_50G_BIT;
+		return HNS3_FIBER_LINK_SPEED_50G_R2_BIT;
 	case RTE_ETH_SPEED_NUM_100G:
-		return HNS3_FIBER_LINK_SPEED_100G_BIT;
+		return HNS3_FIBER_LINK_SPEED_100G_R4_BIT;
 	case RTE_ETH_SPEED_NUM_200G:
-		return HNS3_FIBER_LINK_SPEED_200G_BIT;
+		return HNS3_FIBER_LINK_SPEED_200G_R4_BIT;
 	default:
 		hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
 		return 0;
@@ -4773,13 +4787,19 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
 		speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
 		break;
 	case RTE_ETH_LINK_SPEED_50G:
-		speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
+		speed_bit = HNS3_FIBER_LINK_SPEED_50G_R1_BIT;
 		break;
-	case RTE_ETH_LINK_SPEED_100G:
-		speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
+	case RTE_ETH_LINK_SPEED_50G_2LANES:
+		speed_bit = HNS3_FIBER_LINK_SPEED_50G_R2_BIT;
 		break;
-	case RTE_ETH_LINK_SPEED_200G:
-		speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
+	case RTE_ETH_LINK_SPEED_100G_4LANES:
+		speed_bit = HNS3_FIBER_LINK_SPEED_100G_R4_BIT;
+		break;
+	case RTE_ETH_LINK_SPEED_100G_2LANES:
+		speed_bit = HNS3_FIBER_LINK_SPEED_100G_R2_BIT;
+		break;
+	case RTE_ETH_LINK_SPEED_200G_4LANES:
+		speed_bit = HNS3_FIBER_LINK_SPEED_200G_R4_BIT;
 		break;
 	default:
 		speed_bit = 0;
@@ -4810,45 +4830,6 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
 	return 0;
 }
 
-static uint32_t
-hns3_get_link_speed(uint32_t link_speeds)
-{
-	uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
-
-	if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
-	    link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
-		speed = RTE_ETH_SPEED_NUM_10M;
-	if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
-	    link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
-		speed = RTE_ETH_SPEED_NUM_100M;
-	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
-		speed = RTE_ETH_SPEED_NUM_1G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
-		speed = RTE_ETH_SPEED_NUM_10G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
-		speed = RTE_ETH_SPEED_NUM_25G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
-		speed = RTE_ETH_SPEED_NUM_40G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_50G)
-		speed = RTE_ETH_SPEED_NUM_50G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
-		speed = RTE_ETH_SPEED_NUM_100G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_200G)
-		speed = RTE_ETH_SPEED_NUM_200G;
-
-	return speed;
-}
-
-static uint8_t
-hns3_get_link_duplex(uint32_t link_speeds)
-{
-	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
-	    (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
-		return RTE_ETH_LINK_HALF_DUPLEX;
-	else
-		return RTE_ETH_LINK_FULL_DUPLEX;
-}
-
 static int
 hns3_set_copper_port_link_speed(struct hns3_hw *hw,
 				struct hns3_set_link_speed_cfg *cfg)
@@ -4939,7 +4920,7 @@ hns3_set_fiber_port_link_speed(struct hns3_hw *hw,
 		return 0;
 	}
 
-	return hns3_cfg_mac_speed_dup(hw, cfg->speed, cfg->duplex);
+	return hns3_cfg_mac_speed_dup(hw, cfg->speed, cfg->lanes, cfg->duplex);
 }
 
 const char *
@@ -4979,15 +4960,23 @@ hns3_set_port_link_speed(struct hns3_hw *hw,
 static int
 hns3_apply_link_speed(struct hns3_hw *hw)
 {
+	struct rte_eth_speed_capa_info capa_info = { 0 };
 	struct rte_eth_conf *conf = &hw->data->dev_conf;
 	struct hns3_set_link_speed_cfg cfg;
+	int ret;
 
 	memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
 	cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
 			RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 	if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
-		cfg.speed = hns3_get_link_speed(conf->link_speeds);
-		cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
+		ret = rte_eth_speed_capa_to_info(conf->link_speeds, &capa_info);
+		if (ret) {
+			hns3_err(hw, "speed capability error, ret = %d", ret);
+			return ret;
+		}
+		cfg.speed = capa_info.speed;
+		cfg.lanes = capa_info.lanes;
+		cfg.duplex = capa_info.duplex;
 	}
 
 	return hns3_set_port_link_speed(hw, &cfg);
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 12d8299def..eae7f4bb12 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -168,6 +168,7 @@ struct hns3_set_link_speed_cfg {
 	uint32_t speed;
 	uint8_t duplex  : 1;
 	uint8_t autoneg : 1;
+	uint8_t lanes   : 6;
 };
 
 /* mac media type */
@@ -190,6 +191,7 @@ struct hns3_mac {
 	uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
 	uint8_t link_status  : 1; /* RTE_ETH_LINK_[DOWN/UP] */
 	uint32_t link_speed;      /* RTE_ETH_SPEED_NUM_ */
+	uint8_t link_lanes;       /* RTE_ETH_LANES_ */
 	/*
 	 * Some firmware versions support only the SFP speed query. In addition
 	 * to the SFP speed query, some firmware supports the query of the speed
-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH 3/3] app/testpmd: support setting lanes
  2024-03-12  7:52 [PATCH 0/3] support setting lanes Dengdui Huang
  2024-03-12  7:52 ` [PATCH 1/3] ethdev: " Dengdui Huang
  2024-03-12  7:52 ` [PATCH 2/3] net/hns3: " Dengdui Huang
@ 2024-03-12  7:52 ` Dengdui Huang
  2024-03-15 21:47   ` Damodharam Ammepalli
  2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
  2024-03-18 14:56 ` [PATCH 0/3] " Thomas Monjalon
  3 siblings, 2 replies; 51+ messages in thread
From: Dengdui Huang @ 2024-03-12  7:52 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

Extend the speed command to accept a lane number, and print the lane
number in the show port info output.

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
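Illustration only, not part of the patch: the expected interaction with
the extended command (output abridged; the values shown are hypothetical
and depend on the NIC):

testpmd> port stop all
testpmd> port config all speed 100000 lanes 4 duplex full
testpmd> port start all
testpmd> show port info 0
...
Link speed: 100000 Mbps
Link lanes: 4
Link duplex: full-duplex
...
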
 app/test-pmd/cmdline.c                      | 110 +++++++++++---------
 app/test-pmd/config.c                       |  60 +++++++----
 doc/guides/rel_notes/release_24_03.rst      |   1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   3 +-
 4 files changed, 103 insertions(+), 71 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f521a1fe9e..e66daf4bba 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1356,15 +1356,20 @@ struct cmd_config_speed_all {
 	cmdline_fixed_string_t keyword;
 	cmdline_fixed_string_t all;
 	cmdline_fixed_string_t item1;
+	cmdline_fixed_string_t lanes_item;
 	cmdline_fixed_string_t item2;
 	cmdline_fixed_string_t value1;
+	uint8_t lanes_value;
 	cmdline_fixed_string_t value2;
 };
 
 static int
-parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
+parse_and_check_speed_duplex(char *speedstr, uint8_t lanes, char *duplexstr,
+			     uint32_t *speed)
 {
 
+	uint32_t speed_num;
+	char *endptr;
 	int duplex;
 
 	if (!strcmp(duplexstr, "half")) {
@@ -1378,47 +1383,22 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
 		return -1;
 	}
 
-	if (!strcmp(speedstr, "10")) {
-		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
-				RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
-	} else if (!strcmp(speedstr, "100")) {
-		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
-				RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
-	} else {
-		if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
-			fprintf(stderr, "Invalid speed/duplex parameters\n");
-			return -1;
-		}
-		if (!strcmp(speedstr, "1000")) {
-			*speed = RTE_ETH_LINK_SPEED_1G;
-		} else if (!strcmp(speedstr, "2500")) {
-			*speed = RTE_ETH_LINK_SPEED_2_5G;
-		} else if (!strcmp(speedstr, "5000")) {
-			*speed = RTE_ETH_LINK_SPEED_5G;
-		} else if (!strcmp(speedstr, "10000")) {
-			*speed = RTE_ETH_LINK_SPEED_10G;
-		} else if (!strcmp(speedstr, "25000")) {
-			*speed = RTE_ETH_LINK_SPEED_25G;
-		} else if (!strcmp(speedstr, "40000")) {
-			*speed = RTE_ETH_LINK_SPEED_40G;
-		} else if (!strcmp(speedstr, "50000")) {
-			*speed = RTE_ETH_LINK_SPEED_50G;
-		} else if (!strcmp(speedstr, "100000")) {
-			*speed = RTE_ETH_LINK_SPEED_100G;
-		} else if (!strcmp(speedstr, "200000")) {
-			*speed = RTE_ETH_LINK_SPEED_200G;
-		} else if (!strcmp(speedstr, "400000")) {
-			*speed = RTE_ETH_LINK_SPEED_400G;
-		} else if (!strcmp(speedstr, "auto")) {
-			*speed = RTE_ETH_LINK_SPEED_AUTONEG;
-		} else {
-			fprintf(stderr, "Unknown speed parameter\n");
-			return -1;
-		}
+	if (!strcmp(speedstr, "auto")) {
+		*speed = RTE_ETH_LINK_SPEED_AUTONEG;
+		return 0;
 	}
 
-	if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
-		*speed |= RTE_ETH_LINK_SPEED_FIXED;
+	speed_num = strtol(speedstr, &endptr, 10);
+	if (*endptr != '\0') {
+		fprintf(stderr, "Unknown speed parameter\n");
+		return -1;
+	}
+
+	*speed = rte_eth_speed_bitflag(speed_num, lanes, duplex);
+	if (*speed == 0) {
+		fprintf(stderr, "Invalid speed/lanes/duplex combination\n");
+		return -1;
+	}
 
 	return 0;
 }
@@ -1429,19 +1409,27 @@ cmd_config_speed_all_parsed(void *parsed_result,
 			__rte_unused void *data)
 {
 	struct cmd_config_speed_all *res = parsed_result;
+	struct rte_eth_dev_info dev_info;
 	uint32_t link_speed;
 	portid_t pid;
+	int ret;
 
 	if (!all_ports_stopped()) {
 		fprintf(stderr, "Please stop all ports first\n");
 		return;
 	}
 
-	if (parse_and_check_speed_duplex(res->value1, res->value2,
-			&link_speed) < 0)
+	if (parse_and_check_speed_duplex(res->value1, res->lanes_value,
+					 res->value2, &link_speed) < 0)
 		return;
 
 	RTE_ETH_FOREACH_DEV(pid) {
+		ret = eth_dev_info_get_print_err(pid, &dev_info);
+		if (ret != 0)
+			return;
+		if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0)
+			fprintf(stderr, "The lanes setting may not take effect because "
+					"port %u does not support it\n", pid);
 		ports[pid].dev_conf.link_speeds = link_speed;
 	}
 
@@ -1460,6 +1448,11 @@ static cmdline_parse_token_string_t cmd_config_speed_all_item1 =
 static cmdline_parse_token_string_t cmd_config_speed_all_value1 =
 	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, value1,
 				"10#100#1000#2500#5000#10000#25000#40000#50000#100000#200000#400000#auto");
+static cmdline_parse_token_string_t cmd_config_speed_all_lanes_item =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, lanes_item, "lanes");
+static cmdline_parse_token_num_t cmd_config_speed_all_lanes_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_speed_all, lanes_value,
+			      RTE_UINT8);
 static cmdline_parse_token_string_t cmd_config_speed_all_item2 =
 	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, item2, "duplex");
 static cmdline_parse_token_string_t cmd_config_speed_all_value2 =
@@ -1470,14 +1463,16 @@ static cmdline_parse_inst_t cmd_config_speed_all = {
 	.f = cmd_config_speed_all_parsed,
 	.data = NULL,
 	.help_str = "port config all speed "
-		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto duplex "
-							"half|full|auto",
+		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
+		" lanes 1|2|4|8 duplex half|full|auto",
 	.tokens = {
 		(void *)&cmd_config_speed_all_port,
 		(void *)&cmd_config_speed_all_keyword,
 		(void *)&cmd_config_speed_all_all,
 		(void *)&cmd_config_speed_all_item1,
 		(void *)&cmd_config_speed_all_value1,
+		(void *)&cmd_config_speed_all_lanes_item,
+		(void *)&cmd_config_speed_all_lanes_value,
 		(void *)&cmd_config_speed_all_item2,
 		(void *)&cmd_config_speed_all_value2,
 		NULL,
@@ -1490,8 +1485,10 @@ struct cmd_config_speed_specific {
 	cmdline_fixed_string_t keyword;
 	portid_t id;
 	cmdline_fixed_string_t item1;
+	cmdline_fixed_string_t lanes_item;
 	cmdline_fixed_string_t item2;
 	cmdline_fixed_string_t value1;
+	uint8_t lanes_value;
 	cmdline_fixed_string_t value2;
 };
 
@@ -1501,7 +1498,9 @@ cmd_config_speed_specific_parsed(void *parsed_result,
 				__rte_unused void *data)
 {
 	struct cmd_config_speed_specific *res = parsed_result;
+	struct rte_eth_dev_info dev_info;
 	uint32_t link_speed;
+	int ret;
 
 	if (port_id_is_invalid(res->id, ENABLED_WARN))
 		return;
@@ -1511,8 +1510,15 @@ cmd_config_speed_specific_parsed(void *parsed_result,
 		return;
 	}
 
-	if (parse_and_check_speed_duplex(res->value1, res->value2,
-			&link_speed) < 0)
+	ret = eth_dev_info_get_print_err(res->id, &dev_info);
+	if (ret != 0)
+		return;
+	if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0)
+		fprintf(stderr, "The lanes setting may not take effect because "
+				"port %d does not support it\n", res->id);
+
+	if (parse_and_check_speed_duplex(res->value1, res->lanes_value,
+					 res->value2, &link_speed) < 0)
 		return;
 
 	ports[res->id].dev_conf.link_speeds = link_speed;
@@ -1535,6 +1541,12 @@ static cmdline_parse_token_string_t cmd_config_speed_specific_item1 =
 static cmdline_parse_token_string_t cmd_config_speed_specific_value1 =
 	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, value1,
 				"10#100#1000#2500#5000#10000#25000#40000#50000#100000#200000#400000#auto");
+static cmdline_parse_token_string_t cmd_config_speed_specific_lanes_item =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, lanes_item,
+								"lanes");
+static cmdline_parse_token_num_t cmd_config_speed_specific_lanes_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_speed_specific, lanes_value,
+			      RTE_UINT8);
 static cmdline_parse_token_string_t cmd_config_speed_specific_item2 =
 	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, item2,
 								"duplex");
@@ -1546,14 +1558,16 @@ static cmdline_parse_inst_t cmd_config_speed_specific = {
 	.f = cmd_config_speed_specific_parsed,
 	.data = NULL,
 	.help_str = "port config <port_id> speed "
-		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto duplex "
-							"half|full|auto",
+		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
+		" lanes 1|2|4|8 duplex half|full|auto",
 	.tokens = {
 		(void *)&cmd_config_speed_specific_port,
 		(void *)&cmd_config_speed_specific_keyword,
 		(void *)&cmd_config_speed_specific_id,
 		(void *)&cmd_config_speed_specific_item1,
 		(void *)&cmd_config_speed_specific_value1,
+		(void *)&cmd_config_speed_specific_lanes_item,
+		(void *)&cmd_config_speed_specific_lanes_value,
 		(void *)&cmd_config_speed_specific_item2,
 		(void *)&cmd_config_speed_specific_value2,
 		NULL,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 968d2164ab..c104327878 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -587,39 +587,51 @@ device_infos_display_speeds(uint32_t speed_capa)
 	if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
 		printf(" Autonegotiate (all speeds)");
 	if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
-		printf(" Disable autonegotiate (fixed speed)  ");
+		printf(" Disable autonegotiate (fixed speed) /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
-		printf(" 10 Mbps half-duplex  ");
+		printf(" 10Mbps_1lane_half-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_10M)
-		printf(" 10 Mbps full-duplex  ");
+		printf(" 10Mbps_1lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
-		printf(" 100 Mbps half-duplex  ");
+		printf(" 100Mbps_1lane_half-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_100M)
-		printf(" 100 Mbps full-duplex  ");
+		printf(" 100Mbps_1lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_1G)
-		printf(" 1 Gbps  ");
+		printf(" 1Gbps_1lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
-		printf(" 2.5 Gbps  ");
+		printf(" 2.5Gbps_1lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_5G)
-		printf(" 5 Gbps  ");
+		printf(" 5Gbps_1lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_10G)
-		printf(" 10 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_20G)
-		printf(" 20 Gbps  ");
+		printf(" 10Gbps_1lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_10G_4LANES)
+		printf(" 10Gbps_4lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_20G_2LANES)
+		printf(" 20Gbps_2lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_25G)
-		printf(" 25 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_40G)
-		printf(" 40 Gbps  ");
+		printf(" 25Gbps_1lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_40G_4LANES)
+		printf(" 40Gbps_4lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_50G)
-		printf(" 50 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_56G)
-		printf(" 56 Gbps  ");
+		printf(" 50Gbps_1lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G_2LANES)
+		printf(" 50Gbps_2lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_56G_4LANES)
+		printf(" 56Gbps_4lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_100G)
-		printf(" 100 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_200G)
-		printf(" 200 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_400G)
-		printf(" 400 Gbps  ");
+		printf(" 100Gbps_1lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G_2LANES)
+		printf(" 100Gbps_2lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G_4LANES)
+		printf(" 100Gbps_4lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G_4LANES)
+		printf(" 200Gbps_4lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G_2LANES)
+		printf(" 200Gbps_2lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_400G_4LANES)
+		printf(" 400Gbps_4lane_full-duplex /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_400G_8LANES)
+		printf(" 400Gbps_8lane_full-duplex /");
 }
 
 void
@@ -828,6 +840,10 @@ port_infos_display(portid_t port_id)
 
 	printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
 	printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
+	if (link.link_lanes == 0)
+		printf("Link lanes: unknown\n");
+	else
+		printf("Link lanes: %u\n", link.link_lanes);
 	printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 	       ("full-duplex") : ("half-duplex"));
 	printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index c17334ac25..46aceeee93 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -79,6 +79,7 @@ New Features
 * **Support setting lanes for ethdev.**
 
   * Support setting lanes via the extended ``RTE_ETH_LINK_SPEED_*`` flags.
   * Added a function to convert a bitmap flag to a link speed info struct.
+    See ``rte_eth_speed_capa_to_info()``.
 
 * **Added hash calculation of an encapsulated packet as done by the HW.**
 
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2fbf9220d8..087f7fe853 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -190,6 +190,7 @@ For example:
    memory allocation on the socket: 0
    Link status: up
    Link speed: 40000 Mbps
+   Link lanes: 4
    Link duplex: full-duplex
    Promiscuous mode: enabled
    Allmulticast mode: disabled
@@ -2065,7 +2066,7 @@ port config - speed
 Set the speed and duplex mode for all ports or a specific port::
 
    testpmd> port config (port_id|all) speed (10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto) \
-            duplex (half|full|auto)
+            lanes 1|2|4|8 duplex (half|full|auto)
 
 port config - queues/descriptors
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 3/3] app/testpmd: support setting lanes
  2024-03-12  7:52 ` [PATCH 3/3] app/testpmd: " Dengdui Huang
@ 2024-03-15 21:47   ` Damodharam Ammepalli
  2024-03-19  2:38     ` huangdengdui
  2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
  1 sibling, 1 reply; 51+ messages in thread
From: Damodharam Ammepalli @ 2024-03-15 21:47 UTC (permalink / raw)
  To: Dengdui Huang
  Cc: dev, ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

On Tue, Mar 12, 2024 at 12:52 AM Dengdui Huang <huangdengdui@huawei.com> wrote:
>
> Extend the speed command to support setting the number of lanes,
> and print the number of lanes in show info.
>
> Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
> ---
>  app/test-pmd/cmdline.c                      | 110 +++++++++++---------
>  app/test-pmd/config.c                       |  60 +++++++----
>  doc/guides/rel_notes/release_24_03.rst      |   1 +
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |   3 +-
>  4 files changed, 103 insertions(+), 71 deletions(-)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index f521a1fe9e..e66daf4bba 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1356,15 +1356,20 @@ struct cmd_config_speed_all {
>         cmdline_fixed_string_t keyword;
>         cmdline_fixed_string_t all;
>         cmdline_fixed_string_t item1;
> +       cmdline_fixed_string_t lanes_item;
>         cmdline_fixed_string_t item2;
>         cmdline_fixed_string_t value1;
> +       uint8_t lanes_value;
>         cmdline_fixed_string_t value2;
>  };
>
>  static int
> -parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
> +parse_and_check_speed_duplex(char *speedstr, uint8_t lanes, char *duplexstr,
> +                            uint32_t *speed)
>  {
>
We internally implemented a similar feature without changing the
existing testpmd speed command. Instead of modifying the existing
command set, we can have a separate command for the lanes
configuration, similar to the FEC configuration. Our internal
implementation looks something like this, without affecting existing
implementations:
testpmd> port stop 0
testpmd> port config 0 speed_lanes 4
testpmd> port config 0 speed 200000 duplex full
testpmd> port start 0
testpmd> show port summary 0
Number of available ports: 2
Port MAC Address       Name         Driver         Status   Link         Lanes
0    14:23:F2:C3:BA:D2 0000:b1:00.0 net_bnxt       up  200 Gbps 4
testpmd>
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 14:23:F2:C3:BA:D2
Device name: 0000:b1:00.0
Driver name: net_bnxt
Firmware-version: 228.9.115.0
Connect to socket: 2
memory allocation on the socket: 2
Link status: up
Link speed: 200 Gbps
Lanes: 4
Link duplex: full-duplex
Autoneg status: Off

> +       uint32_t speed_num;
> +       char *endptr;
>         int duplex;
>
>         if (!strcmp(duplexstr, "half")) {
> @@ -1378,47 +1383,22 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
>                 return -1;
>         }
>
> -       if (!strcmp(speedstr, "10")) {
> -               *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
> -                               RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
> -       } else if (!strcmp(speedstr, "100")) {
> -               *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
> -                               RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
> -       } else {
> -               if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
> -                       fprintf(stderr, "Invalid speed/duplex parameters\n");
> -                       return -1;
> -               }
> -               if (!strcmp(speedstr, "1000")) {
> -                       *speed = RTE_ETH_LINK_SPEED_1G;
> -               } else if (!strcmp(speedstr, "2500")) {
> -                       *speed = RTE_ETH_LINK_SPEED_2_5G;
> -               } else if (!strcmp(speedstr, "5000")) {
> -                       *speed = RTE_ETH_LINK_SPEED_5G;
> -               } else if (!strcmp(speedstr, "10000")) {
> -                       *speed = RTE_ETH_LINK_SPEED_10G;
> -               } else if (!strcmp(speedstr, "25000")) {
> -                       *speed = RTE_ETH_LINK_SPEED_25G;
> -               } else if (!strcmp(speedstr, "40000")) {
> -                       *speed = RTE_ETH_LINK_SPEED_40G;
> -               } else if (!strcmp(speedstr, "50000")) {
> -                       *speed = RTE_ETH_LINK_SPEED_50G;
> -               } else if (!strcmp(speedstr, "100000")) {
> -                       *speed = RTE_ETH_LINK_SPEED_100G;
> -               } else if (!strcmp(speedstr, "200000")) {
> -                       *speed = RTE_ETH_LINK_SPEED_200G;
> -               } else if (!strcmp(speedstr, "400000")) {
> -                       *speed = RTE_ETH_LINK_SPEED_400G;
> -               } else if (!strcmp(speedstr, "auto")) {
> -                       *speed = RTE_ETH_LINK_SPEED_AUTONEG;
> -               } else {
> -                       fprintf(stderr, "Unknown speed parameter\n");
> -                       return -1;
> -               }
> +       if (!strcmp(speedstr, "auto")) {
> +               *speed = RTE_ETH_LINK_SPEED_AUTONEG;
> +               return 0;
>         }
>
> -       if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
> -               *speed |= RTE_ETH_LINK_SPEED_FIXED;
> +       speed_num = strtol(speedstr, &endptr, 10);
> +       if (*endptr != '\0') {
> +               fprintf(stderr, "Unknown speed parameter\n");
> +               return -1;
> +       }
> +
> +       *speed = rte_eth_speed_bitflag(speed_num, lanes, duplex);
> +       if (*speed == 0) {
> +               fprintf(stderr, "param error\n");
> +               return -1;
> +       }
>
>         return 0;
>  }
> @@ -1429,19 +1409,27 @@ cmd_config_speed_all_parsed(void *parsed_result,
>                         __rte_unused void *data)
>  {
>         struct cmd_config_speed_all *res = parsed_result;
> +       struct rte_eth_dev_info dev_info;
>         uint32_t link_speed;
>         portid_t pid;
> +       int ret;
>
>         if (!all_ports_stopped()) {
>                 fprintf(stderr, "Please stop all ports first\n");
>                 return;
>         }
>
> -       if (parse_and_check_speed_duplex(res->value1, res->value2,
> -                       &link_speed) < 0)
> +       if (parse_and_check_speed_duplex(res->value1, res->lanes_value,
> +                                        res->value2, &link_speed) < 0)
>                 return;
>
>         RTE_ETH_FOREACH_DEV(pid) {
> +               ret = eth_dev_info_get_print_err(pid, &dev_info);
> +               if (ret != 0)
> +                       return;
> +               if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0)
> +                       fprintf(stderr, "The setting lane may not take effect because "
> +                                       "the port (%u) does not support it\n", pid);
>                 ports[pid].dev_conf.link_speeds = link_speed;
>         }
>
> @@ -1460,6 +1448,11 @@ static cmdline_parse_token_string_t cmd_config_speed_all_item1 =
>  static cmdline_parse_token_string_t cmd_config_speed_all_value1 =
>         TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, value1,
>                                 "10#100#1000#2500#5000#10000#25000#40000#50000#100000#200000#400000#auto");
> +static cmdline_parse_token_string_t cmd_config_speed_all_lanes_item =
> +       TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, lanes_item, "lanes");
> +static cmdline_parse_token_num_t cmd_config_speed_all_lanes_value =
> +       TOKEN_NUM_INITIALIZER(struct cmd_config_speed_all, lanes_value,
> +                             RTE_UINT8);
>  static cmdline_parse_token_string_t cmd_config_speed_all_item2 =
>         TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, item2, "duplex");
>  static cmdline_parse_token_string_t cmd_config_speed_all_value2 =
> @@ -1470,14 +1463,16 @@ static cmdline_parse_inst_t cmd_config_speed_all = {
>         .f = cmd_config_speed_all_parsed,
>         .data = NULL,
>         .help_str = "port config all speed "
> -               "10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto duplex "
> -                                                       "half|full|auto",
> +               "10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
> +               " lanes 1|2|4|8 duplex half|full|auto",
>         .tokens = {
>                 (void *)&cmd_config_speed_all_port,
>                 (void *)&cmd_config_speed_all_keyword,
>                 (void *)&cmd_config_speed_all_all,
>                 (void *)&cmd_config_speed_all_item1,
>                 (void *)&cmd_config_speed_all_value1,
> +               (void *)&cmd_config_speed_all_lanes_item,
> +               (void *)&cmd_config_speed_all_lanes_value,
>                 (void *)&cmd_config_speed_all_item2,
>                 (void *)&cmd_config_speed_all_value2,
>                 NULL,
> @@ -1490,8 +1485,10 @@ struct cmd_config_speed_specific {
>         cmdline_fixed_string_t keyword;
>         portid_t id;
>         cmdline_fixed_string_t item1;
> +       cmdline_fixed_string_t lanes_item;
>         cmdline_fixed_string_t item2;
>         cmdline_fixed_string_t value1;
> +       uint8_t lanes_value;
>         cmdline_fixed_string_t value2;
>  };
>
> @@ -1501,7 +1498,9 @@ cmd_config_speed_specific_parsed(void *parsed_result,
>                                 __rte_unused void *data)
>  {
>         struct cmd_config_speed_specific *res = parsed_result;
> +       struct rte_eth_dev_info dev_info;
>         uint32_t link_speed;
> +       int ret;
>
>         if (port_id_is_invalid(res->id, ENABLED_WARN))
>                 return;
> @@ -1511,8 +1510,15 @@ cmd_config_speed_specific_parsed(void *parsed_result,
>                 return;
>         }
>
> -       if (parse_and_check_speed_duplex(res->value1, res->value2,
> -                       &link_speed) < 0)
> +       ret = eth_dev_info_get_print_err(res->id, &dev_info);
> +       if (ret != 0)
> +               return;
> +       if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0)
> +               fprintf(stderr, "The setting lane may not take effect because "
> +                               "the port (%d) does not support it\n", res->id);
> +
> +       if (parse_and_check_speed_duplex(res->value1, res->lanes_value,
> +                                        res->value2, &link_speed) < 0)
>                 return;
>
>         ports[res->id].dev_conf.link_speeds = link_speed;
> @@ -1535,6 +1541,12 @@ static cmdline_parse_token_string_t cmd_config_speed_specific_item1 =
>  static cmdline_parse_token_string_t cmd_config_speed_specific_value1 =
>         TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, value1,
>                                 "10#100#1000#2500#5000#10000#25000#40000#50000#100000#200000#400000#auto");
> +static cmdline_parse_token_string_t cmd_config_speed_specific_lanes_item =
> +       TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, lanes_item,
> +                                                               "lanes");
> +static cmdline_parse_token_num_t cmd_config_speed_specific_lanes_value =
> +       TOKEN_NUM_INITIALIZER(struct cmd_config_speed_specific, lanes_value,
> +                             RTE_UINT8);
>  static cmdline_parse_token_string_t cmd_config_speed_specific_item2 =
>         TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, item2,
>                                                                 "duplex");
> @@ -1546,14 +1558,16 @@ static cmdline_parse_inst_t cmd_config_speed_specific = {
>         .f = cmd_config_speed_specific_parsed,
>         .data = NULL,
>         .help_str = "port config <port_id> speed "
> -               "10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto duplex "
> -                                                       "half|full|auto",
> +               "10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
> +               " lanes 1|2|4|8 duplex half|full|auto",
>         .tokens = {
>                 (void *)&cmd_config_speed_specific_port,
>                 (void *)&cmd_config_speed_specific_keyword,
>                 (void *)&cmd_config_speed_specific_id,
>                 (void *)&cmd_config_speed_specific_item1,
>                 (void *)&cmd_config_speed_specific_value1,
> +               (void *)&cmd_config_speed_specific_lanes_item,
> +               (void *)&cmd_config_speed_specific_lanes_value,
>                 (void *)&cmd_config_speed_specific_item2,
>                 (void *)&cmd_config_speed_specific_value2,
>                 NULL,
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 968d2164ab..c104327878 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -587,39 +587,51 @@ device_infos_display_speeds(uint32_t speed_capa)
>         if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
>                 printf(" Autonegotiate (all speeds)");
>         if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
> -               printf(" Disable autonegotiate (fixed speed)  ");
> +               printf(" Disable autonegotiate (fixed speed) /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
> -               printf(" 10 Mbps half-duplex  ");
> +               printf(" 10Mbps_1lane_half-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_10M)
> -               printf(" 10 Mbps full-duplex  ");
> +               printf(" 10Mbps_1lane_full-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
> -               printf(" 100 Mbps half-duplex  ");
> +               printf(" 100Mbps_1lane_half-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_100M)
> -               printf(" 100 Mbps full-duplex  ");
> +               printf(" 100Mbps_1lane_full-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_1G)
> -               printf(" 1 Gbps  ");
> +               printf(" 1Gbps_1lane_full-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
> -               printf(" 2.5 Gbps  ");
> +               printf(" 2.5Gbps_1lane_full-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_5G)
> -               printf(" 5 Gbps  ");
> +               printf(" 5Gbps_1lane_full-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_10G)
> -               printf(" 10 Gbps  ");
> -       if (speed_capa & RTE_ETH_LINK_SPEED_20G)
> -               printf(" 20 Gbps  ");
> +               printf(" 10Gbps_1lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_10G_4LANES)
> +               printf(" 10Gbps_4lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_20G_2LANES)
> +               printf(" 20Gbps_2lane_full-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_25G)
> -               printf(" 25 Gbps  ");
> -       if (speed_capa & RTE_ETH_LINK_SPEED_40G)
> -               printf(" 40 Gbps  ");
> +               printf(" 25Gbps_1lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_40G_4LANES)
> +               printf(" 40Gbps_4lane_full-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_50G)
> -               printf(" 50 Gbps  ");
> -       if (speed_capa & RTE_ETH_LINK_SPEED_56G)
> -               printf(" 56 Gbps  ");
> +               printf(" 50Gbps_1lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_50G_2LANES)
> +               printf(" 50Gbps_2lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_56G_4LANES)
> +               printf(" 56Gbps_4lane_full-duplex /");
>         if (speed_capa & RTE_ETH_LINK_SPEED_100G)
> -               printf(" 100 Gbps  ");
> -       if (speed_capa & RTE_ETH_LINK_SPEED_200G)
> -               printf(" 200 Gbps  ");
> -       if (speed_capa & RTE_ETH_LINK_SPEED_400G)
> -               printf(" 400 Gbps  ");
> +               printf(" 100Gbps_1lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_100G_2LANES)
> +               printf(" 100Gbps_2lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_100G_4LANES)
> +               printf(" 100Gbps_4lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_200G_4LANES)
> +               printf(" 200Gbps_4lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_200G_2LANES)
> +               printf(" 200Gbps_2lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_400G_4LANES)
> +               printf(" 400Gbps_4lane_full-duplex /");
> +       if (speed_capa & RTE_ETH_LINK_SPEED_400G_8LANES)
> +               printf(" 400Gbps_8lane_full-duplex /");
>  }
>
>  void
> @@ -828,6 +840,10 @@ port_infos_display(portid_t port_id)
>
>         printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
>         printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
> +       if (link.link_lanes == 0)
> +               printf("Link lanes: unknown\n");
> +       else
> +               printf("Link lanes: %u\n", link.link_lanes);
>         printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
>                ("full-duplex") : ("half-duplex"));
>         printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
> diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
> index c17334ac25..46aceeee93 100644
> --- a/doc/guides/rel_notes/release_24_03.rst
> +++ b/doc/guides/rel_notes/release_24_03.rst
> @@ -79,6 +79,7 @@ New Features
>  * **Support setting lanes for ethdev.**
>    * Support setting lanes by extended ``RTE_ETH_LINK_SPEED_*``.
>    * Added function to convert bitmap flag to the struct of link speed info.
> +  ``rte_eth_speed_capa_to_info()``
>
>  * **Added hash calculation of an encapsulated packet as done by the HW.**
>
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 2fbf9220d8..087f7fe853 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -190,6 +190,7 @@ For example:
>     memory allocation on the socket: 0
>     Link status: up
>     Link speed: 40000 Mbps
> +   Link lanes: 4
>     Link duplex: full-duplex
>     Promiscuous mode: enabled
>     Allmulticast mode: disabled
> @@ -2065,7 +2066,7 @@ port config - speed
>  Set the speed and duplex mode for all ports or a specific port::
>
>     testpmd> port config (port_id|all) speed (10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto) \
> -            duplex (half|full|auto)
> +            lanes 1|2|4|8 duplex (half|full|auto)
>
>  port config - queues/descriptors
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> --
> 2.33.0
>


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-12  7:52 [PATCH 0/3] support setting lanes Dengdui Huang
                   ` (2 preceding siblings ...)
  2024-03-12  7:52 ` [PATCH 3/3] app/testpmd: " Dengdui Huang
@ 2024-03-18 14:56 ` Thomas Monjalon
  2024-03-18 21:26   ` Damodharam Ammepalli
  3 siblings, 1 reply; 51+ messages in thread
From: Thomas Monjalon @ 2024-03-18 14:56 UTC (permalink / raw)
  To: Dengdui Huang
  Cc: dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

12/03/2024 08:52, Dengdui Huang:
> Some speeds can be achieved with different number of lanes. For example,
> 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> When use different lanes, the port cannot be up.

I'm not sure what you are referring to.
I suppose it is not PCI lanes.
Please could you link to an explanation of how a port is split in lanes?
Which hardware does this?




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-18 14:56 ` [PATCH 0/3] " Thomas Monjalon
@ 2024-03-18 21:26   ` Damodharam Ammepalli
  2024-03-18 21:42     ` Stephen Hemminger
                       ` (2 more replies)
  0 siblings, 3 replies; 51+ messages in thread
From: Damodharam Ammepalli @ 2024-03-18 21:26 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Dengdui Huang, dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 12/03/2024 08:52, Dengdui Huang:
> > Some speeds can be achieved with different number of lanes. For example,
> > 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> > When use different lanes, the port cannot be up.
>
> I'm not sure what you are referring to.
> I suppose it is not PCI lanes.
> Please could you link to an explanation of how a port is split in lanes?
> Which hardware does this?
>
>
>
This is a snapshot of 100Gb that the latest BCM576xx supports.
100Gb (NRZ: 25G per lane, 4 lanes) link speed
100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
100Gb (PAM4-112: 100G per lane, 1 lane) link speed

Let the user feed in lanes=< integer value> and the NIC driver decides
the matching combination speed x lanes that works. In future if a new speed
is implemented with more than 8 lanes, there wouldn't be a need
to touch this speed command. Using separate lane command would
be a better alternative to support already shipped products and only new
drivers would consider this lanes configuration, if applicable.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-18 21:26   ` Damodharam Ammepalli
@ 2024-03-18 21:42     ` Stephen Hemminger
  2024-03-18 22:55       ` Ajit Khaparde
  2024-03-20 11:41     ` Thomas Monjalon
  2024-03-20 12:31     ` Ferruh Yigit
  2 siblings, 1 reply; 51+ messages in thread
From: Stephen Hemminger @ 2024-03-18 21:42 UTC (permalink / raw)
  To: Damodharam Ammepalli
  Cc: Thomas Monjalon, Dengdui Huang, dev, ferruh.yigit,
	aman.deep.singh, yuying.zhang, andrew.rybchenko, liuyonglong,
	fengchengwen, haijie1, lihuisong

On Mon, 18 Mar 2024 14:26:33 -0700
Damodharam Ammepalli <damodharam.ammepalli@broadcom.com> wrote:

> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 12/03/2024 08:52, Dengdui Huang:  
> > > Some speeds can be achieved with different number of lanes. For example,
> > > 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> > > When use different lanes, the port cannot be up.  
> >
> > I'm not sure what you are referring to.
> > I suppose it is not PCI lanes.
> > Please could you link to an explanation of how a port is split in lanes?
> > Which hardware does this?
> >
> >
> >  
> This is a snapshot of 100Gb that the latest BCM576xx supports.
> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> 
> Let the user feed in lanes=< integer value> and the NIC driver decides
> the matching combination speed x lanes that works. In future if a new speed
> is implemented with more than 8 lanes, there wouldn't be a need
> to touch this speed command. Using separate lane command would
> be a better alternative to support already shipped products and only new
> drivers would consider this lanes configuration, if applicable.
> 

The DPDK does not need more driver specific knobs.
Shouldn't the PMD be able to auto negotiate the speed?
What does Linux do?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-18 21:42     ` Stephen Hemminger
@ 2024-03-18 22:55       ` Ajit Khaparde
  0 siblings, 0 replies; 51+ messages in thread
From: Ajit Khaparde @ 2024-03-18 22:55 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Damodharam Ammepalli, Thomas Monjalon, Dengdui Huang, dev,
	ferruh.yigit, aman.deep.singh, yuying.zhang, andrew.rybchenko,
	liuyonglong, fengchengwen, haijie1, lihuisong

On Mon, Mar 18, 2024 at 2:42 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Mon, 18 Mar 2024 14:26:33 -0700
> Damodharam Ammepalli <damodharam.ammepalli@broadcom.com> wrote:
>
> > On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > >
> > > 12/03/2024 08:52, Dengdui Huang:
> > > > Some speeds can be achieved with different number of lanes. For example,
> > > > 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> > > > When use different lanes, the port cannot be up.
> > >
> > > I'm not sure what you are referring to.
> > > I suppose it is not PCI lanes.
> > > Please could you link to an explanation of how a port is split in lanes?
> > > Which hardware does this?
> > >
> > >
> > >
> > This is a snapshot of 100Gb that the latest BCM576xx supports.
> > 100Gb (NRZ: 25G per lane, 4 lanes) link speed
> > 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> > 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> >
> > Let the user feed in lanes=< integer value> and the NIC driver decides
> > the matching combination speed x lanes that works. In future if a new speed
> > is implemented with more than 8 lanes, there wouldn't be a need
> > to touch this speed command. Using separate lane command would
> > be a better alternative to support already shipped products and only new
> > drivers would consider this lanes configuration, if applicable.
> >
>
> The DPDK does not need more driver specific knobs.
> Shouldn't the PMD be able to auto negotiate the speed?
Yes. It's possible to auto-negotiate. And that's the default.
Even for the lane count, a default number can be arrived at.

> What does Linux do?
ethtool was extended a while ago to allow configuring the number
of lanes along with speed and other settings.
But as usual, autoneg is possible.
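
For reference, the ethtool invocation would look roughly like the
following; the device name and values here are illustrative, not taken
from this thread:

  # force 100G built from two 50G lanes
  ethtool -s eth0 speed 100000 duplex full lanes 2 autoneg off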


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 3/3] app/testpmd: support setting lanes
  2024-03-15 21:47   ` Damodharam Ammepalli
@ 2024-03-19  2:38     ` huangdengdui
  0 siblings, 0 replies; 51+ messages in thread
From: huangdengdui @ 2024-03-19  2:38 UTC (permalink / raw)
  To: Damodharam Ammepalli
  Cc: dev, ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong



On 2024/3/16 5:47, Damodharam Ammepalli wrote:
> On Tue, Mar 12, 2024 at 12:52 AM Dengdui Huang <huangdengdui@huawei.com> wrote:
>>
>> Extend the speed command to support setting the number of lanes,
>> and print the number of lanes in show info.
>>
>> Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
>> ---
>>  app/test-pmd/cmdline.c                      | 110 +++++++++++---------
>>  app/test-pmd/config.c                       |  60 +++++++----
>>  doc/guides/rel_notes/release_24_03.rst      |   1 +
>>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |   3 +-
>>  4 files changed, 103 insertions(+), 71 deletions(-)
>>
>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
>> index f521a1fe9e..e66daf4bba 100644
>> --- a/app/test-pmd/cmdline.c
>> +++ b/app/test-pmd/cmdline.c
>> @@ -1356,15 +1356,20 @@ struct cmd_config_speed_all {
>>         cmdline_fixed_string_t keyword;
>>         cmdline_fixed_string_t all;
>>         cmdline_fixed_string_t item1;
>> +       cmdline_fixed_string_t lanes_item;
>>         cmdline_fixed_string_t item2;
>>         cmdline_fixed_string_t value1;
>> +       uint8_t lanes_value;
>>         cmdline_fixed_string_t value2;
>>  };
>>
>>  static int
>> -parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
>> +parse_and_check_speed_duplex(char *speedstr, uint8_t lanes, char *duplexstr,
>> +                            uint32_t *speed)
>>  {
>>
> We internally implemented a similar feature without changing the
> existing testpmd speed command. Instead of modifying the existing
> command set, we can have a separate command for the lanes
> configuration, similar to the FEC configuration. Our internal
> implementation looks something like this, without affecting existing
> implementations:
> testpmd> port stop 0
> testpmd> port config 0 speed_lanes 4

Hi, Damodharam,
Thanks for your review.

I think the lanes should be configured together with speed and duplex,
since they will eventually map to the link speed capabilities
(RTE_ETH_LINK_SPEED_*).

Would it be better to add the following new command?

testpmd> port config 0 speed 200000 lanes 4 duplex full

It can be used when the driver supports setting lanes, and cannot be
used when the driver does not.

What do you think?
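
On the application side, the new bits from this series would be
consumed roughly like this (a minimal sketch, names as in the patches,
error handling omitted):

	struct rte_eth_dev_info dev_info;
	struct rte_eth_link link;

	/* warn when the port cannot apply a lane setting */
	rte_eth_dev_info_get(port_id, &dev_info);
	if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0)
		printf("port %u: setting lanes is not supported\n", port_id);

	/* after link up, report the lane count (0 means unknown) */
	rte_eth_link_get_nowait(port_id, &link);
	if (link.link_lanes == 0)
		printf("port %u: link lanes unknown\n", port_id);
	else
		printf("port %u: link lanes %u\n", port_id, link.link_lanes);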

> testpmd> port config 0 speed 200000 duplex full
> testpmd> port start 0
> testpmd> show port summary 0
> Number of available ports: 2
> Port MAC Address       Name         Driver         Status   Link         Lanes
> 0    14:23:F2:C3:BA:D2 0000:b1:00.0 net_bnxt       up  200 Gbps 4

The summary command output will also include the number of lanes.
I will implement this in the next version.

> testpmd>
> testpmd> show port info 0
> 
> ********************* Infos for port 0  *********************
> MAC address: 14:23:F2:C3:BA:D2
> Device name: 0000:b1:00.0
> Driver name: net_bnxt
> Firmware-version: 228.9.115.0
> Connect to socket: 2
> memory allocation on the socket: 2
> Link status: up
> Link speed: 200 Gbps
> Lanes: 4
> Link duplex: full-duplex
> Autoneg status: Off
> 
>> +       uint32_t speed_num;
>> +       char *endptr;
>>         int duplex;
>>
>>         if (!strcmp(duplexstr, "half")) {
>> @@ -1378,47 +1383,22 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
>>                 return -1;
>>         }
>>
>> -       if (!strcmp(speedstr, "10")) {
>> -               *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
>> -                               RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
>> -       } else if (!strcmp(speedstr, "100")) {
>> -               *speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
>> -                               RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
>> -       } else {
>> -               if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
>> -                       fprintf(stderr, "Invalid speed/duplex parameters\n");
>> -                       return -1;
>> -               }
>> -               if (!strcmp(speedstr, "1000")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_1G;
>> -               } else if (!strcmp(speedstr, "2500")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_2_5G;
>> -               } else if (!strcmp(speedstr, "5000")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_5G;
>> -               } else if (!strcmp(speedstr, "10000")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_10G;
>> -               } else if (!strcmp(speedstr, "25000")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_25G;
>> -               } else if (!strcmp(speedstr, "40000")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_40G;
>> -               } else if (!strcmp(speedstr, "50000")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_50G;
>> -               } else if (!strcmp(speedstr, "100000")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_100G;
>> -               } else if (!strcmp(speedstr, "200000")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_200G;
>> -               } else if (!strcmp(speedstr, "400000")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_400G;
>> -               } else if (!strcmp(speedstr, "auto")) {
>> -                       *speed = RTE_ETH_LINK_SPEED_AUTONEG;
>> -               } else {
>> -                       fprintf(stderr, "Unknown speed parameter\n");
>> -                       return -1;
>> -               }
>> +       if (!strcmp(speedstr, "auto")) {
>> +               *speed = RTE_ETH_LINK_SPEED_AUTONEG;
>> +               return 0;
>>         }
>>
>> -       if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
>> -               *speed |= RTE_ETH_LINK_SPEED_FIXED;
>> +       speed_num = strtol(speedstr, &endptr, 10);
>> +       if (*endptr != '\0') {
>> +               fprintf(stderr, "Unknown speed parameter\n");
>> +               return -1;
>> +       }
>> +
>> +       *speed = rte_eth_speed_bitflag(speed_num, lanes, duplex);
>> +       if (*speed == 0) {
>> +               fprintf(stderr, "param error\n");
>> +               return -1;
>> +       }
>>
>>         return 0;
>>  }
>> @@ -1429,19 +1409,27 @@ cmd_config_speed_all_parsed(void *parsed_result,
>>                         __rte_unused void *data)
>>  {
>>         struct cmd_config_speed_all *res = parsed_result;
>> +       struct rte_eth_dev_info dev_info;
>>         uint32_t link_speed;
>>         portid_t pid;
>> +       int ret;
>>
>>         if (!all_ports_stopped()) {
>>                 fprintf(stderr, "Please stop all ports first\n");
>>                 return;
>>         }
>>
>> -       if (parse_and_check_speed_duplex(res->value1, res->value2,
>> -                       &link_speed) < 0)
>> +       if (parse_and_check_speed_duplex(res->value1, res->lanes_value,
>> +                                        res->value2, &link_speed) < 0)
>>                 return;
>>
>>         RTE_ETH_FOREACH_DEV(pid) {
>> +               ret = eth_dev_info_get_print_err(pid, &dev_info);
>> +               if (ret != 0)
>> +                       return;
>> +               if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0)
>> +                       fprintf(stderr, "The setting lane may not take effect because "
>> +                                       "the port (%u) does not support it\n", pid);
>>                 ports[pid].dev_conf.link_speeds = link_speed;
>>         }
>>
>> @@ -1460,6 +1448,11 @@ static cmdline_parse_token_string_t cmd_config_speed_all_item1 =
>>  static cmdline_parse_token_string_t cmd_config_speed_all_value1 =
>>         TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, value1,
>>                                 "10#100#1000#2500#5000#10000#25000#40000#50000#100000#200000#400000#auto");
>> +static cmdline_parse_token_string_t cmd_config_speed_all_lanes_item =
>> +       TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, lanes_item, "lanes");
>> +static cmdline_parse_token_num_t cmd_config_speed_all_lanes_value =
>> +       TOKEN_NUM_INITIALIZER(struct cmd_config_speed_all, lanes_value,
>> +                             RTE_UINT8);
>>  static cmdline_parse_token_string_t cmd_config_speed_all_item2 =
>>         TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, item2, "duplex");
>>  static cmdline_parse_token_string_t cmd_config_speed_all_value2 =
>> @@ -1470,14 +1463,16 @@ static cmdline_parse_inst_t cmd_config_speed_all = {
>>         .f = cmd_config_speed_all_parsed,
>>         .data = NULL,
>>         .help_str = "port config all speed "
>> -               "10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto duplex "
>> -                                                       "half|full|auto",
>> +               "10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
>> +               " lanes 1|2|4|8 duplex half|full|auto",
>>         .tokens = {
>>                 (void *)&cmd_config_speed_all_port,
>>                 (void *)&cmd_config_speed_all_keyword,
>>                 (void *)&cmd_config_speed_all_all,
>>                 (void *)&cmd_config_speed_all_item1,
>>                 (void *)&cmd_config_speed_all_value1,
>> +               (void *)&cmd_config_speed_all_lanes_item,
>> +               (void *)&cmd_config_speed_all_lanes_value,
>>                 (void *)&cmd_config_speed_all_item2,
>>                 (void *)&cmd_config_speed_all_value2,
>>                 NULL,
>> @@ -1490,8 +1485,10 @@ struct cmd_config_speed_specific {
>>         cmdline_fixed_string_t keyword;
>>         portid_t id;
>>         cmdline_fixed_string_t item1;
>> +       cmdline_fixed_string_t lanes_item;
>>         cmdline_fixed_string_t item2;
>>         cmdline_fixed_string_t value1;
>> +       uint8_t lanes_value;
>>         cmdline_fixed_string_t value2;
>>  };
>>
>> @@ -1501,7 +1498,9 @@ cmd_config_speed_specific_parsed(void *parsed_result,
>>                                 __rte_unused void *data)
>>  {
>>         struct cmd_config_speed_specific *res = parsed_result;
>> +       struct rte_eth_dev_info dev_info;
>>         uint32_t link_speed;
>> +       int ret;
>>
>>         if (port_id_is_invalid(res->id, ENABLED_WARN))
>>                 return;
>> @@ -1511,8 +1510,15 @@ cmd_config_speed_specific_parsed(void *parsed_result,
>>                 return;
>>         }
>>
>> -       if (parse_and_check_speed_duplex(res->value1, res->value2,
>> -                       &link_speed) < 0)
>> +       ret = eth_dev_info_get_print_err(res->id, &dev_info);
>> +       if (ret != 0)
>> +               return;
>> +       if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0)
>> +               fprintf(stderr, "The setting lane may not take effect because "
>> +                               "the port (%d) does not support it\n", res->id);
>> +
>> +       if (parse_and_check_speed_duplex(res->value1, res->lanes_value,
>> +                                        res->value2, &link_speed) < 0)
>>                 return;
>>
>>         ports[res->id].dev_conf.link_speeds = link_speed;
>> @@ -1535,6 +1541,12 @@ static cmdline_parse_token_string_t cmd_config_speed_specific_item1 =
>>  static cmdline_parse_token_string_t cmd_config_speed_specific_value1 =
>>         TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, value1,
>>                                 "10#100#1000#2500#5000#10000#25000#40000#50000#100000#200000#400000#auto");
>> +static cmdline_parse_token_string_t cmd_config_speed_specific_lanes_item =
>> +       TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, lanes_item,
>> +                                                               "lanes");
>> +static cmdline_parse_token_num_t cmd_config_speed_specific_lanes_value =
>> +       TOKEN_NUM_INITIALIZER(struct cmd_config_speed_specific, lanes_value,
>> +                             RTE_UINT8);
>>  static cmdline_parse_token_string_t cmd_config_speed_specific_item2 =
>>         TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, item2,
>>                                                                 "duplex");
>> @@ -1546,14 +1558,16 @@ static cmdline_parse_inst_t cmd_config_speed_specific = {
>>         .f = cmd_config_speed_specific_parsed,
>>         .data = NULL,
>>         .help_str = "port config <port_id> speed "
>> -               "10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto duplex "
>> -                                                       "half|full|auto",
>> +               "10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
>> +               " lanes 1|2|4|8 duplex half|full|auto",
>>         .tokens = {
>>                 (void *)&cmd_config_speed_specific_port,
>>                 (void *)&cmd_config_speed_specific_keyword,
>>                 (void *)&cmd_config_speed_specific_id,
>>                 (void *)&cmd_config_speed_specific_item1,
>>                 (void *)&cmd_config_speed_specific_value1,
>> +               (void *)&cmd_config_speed_specific_lanes_item,
>> +               (void *)&cmd_config_speed_specific_lanes_value,
>>                 (void *)&cmd_config_speed_specific_item2,
>>                 (void *)&cmd_config_speed_specific_value2,
>>                 NULL,
>> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
>> index 968d2164ab..c104327878 100644
>> --- a/app/test-pmd/config.c
>> +++ b/app/test-pmd/config.c
>> @@ -587,39 +587,51 @@ device_infos_display_speeds(uint32_t speed_capa)
>>         if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
>>                 printf(" Autonegotiate (all speeds)");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
>> -               printf(" Disable autonegotiate (fixed speed)  ");
>> +               printf(" Disable autonegotiate (fixed speed) /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
>> -               printf(" 10 Mbps half-duplex  ");
>> +               printf(" 10Mbps_1lane_half-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_10M)
>> -               printf(" 10 Mbps full-duplex  ");
>> +               printf(" 10Mbps_1lane_full-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
>> -               printf(" 100 Mbps half-duplex  ");
>> +               printf(" 100Mbps_1lane_half-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_100M)
>> -               printf(" 100 Mbps full-duplex  ");
>> +               printf(" 100Mbps_1lane_full-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_1G)
>> -               printf(" 1 Gbps  ");
>> +               printf(" 1Gbps_1lane_full-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
>> -               printf(" 2.5 Gbps  ");
>> +               printf(" 2.5Gbps_1lane_full-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_5G)
>> -               printf(" 5 Gbps  ");
>> +               printf(" 5Gbps_1lane_full-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_10G)
>> -               printf(" 10 Gbps  ");
>> -       if (speed_capa & RTE_ETH_LINK_SPEED_20G)
>> -               printf(" 20 Gbps  ");
>> +               printf(" 10Gbps_1lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_10G_4LANES)
>> +               printf(" 10Gbps_4lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_20G_2LANES)
>> +               printf(" 20Gbps_2lane_full-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_25G)
>> -               printf(" 25 Gbps  ");
>> -       if (speed_capa & RTE_ETH_LINK_SPEED_40G)
>> -               printf(" 40 Gbps  ");
>> +               printf(" 25Gbps_1lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_40G_4LANES)
>> +               printf(" 40Gbps_4lane_full-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_50G)
>> -               printf(" 50 Gbps  ");
>> -       if (speed_capa & RTE_ETH_LINK_SPEED_56G)
>> -               printf(" 56 Gbps  ");
>> +               printf(" 50Gbps_1lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_50G_2LANES)
>> +               printf(" 50Gbps_2lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_56G_4LANES)
>> +               printf(" 56Gbps_4lane_full-duplex /");
>>         if (speed_capa & RTE_ETH_LINK_SPEED_100G)
>> -               printf(" 100 Gbps  ");
>> -       if (speed_capa & RTE_ETH_LINK_SPEED_200G)
>> -               printf(" 200 Gbps  ");
>> -       if (speed_capa & RTE_ETH_LINK_SPEED_400G)
>> -               printf(" 400 Gbps  ");
>> +               printf(" 100Gbps_1lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_100G_2LANES)
>> +               printf(" 100Gbps_2lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_100G_4LANES)
>> +               printf(" 100Gbps_4lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_200G_4LANES)
>> +               printf(" 200Gbps_4lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_200G_2LANES)
>> +               printf(" 200Gbps_2lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_400G_4LANES)
>> +               printf(" 400Gbps_4lane_full-duplex /");
>> +       if (speed_capa & RTE_ETH_LINK_SPEED_400G_8LANES)
>> +               printf(" 400Gbps_8lane_full-duplex /");
>>  }
>>
>>  void
>> @@ -828,6 +840,10 @@ port_infos_display(portid_t port_id)
>>
>>         printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
>>         printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
>> +       if (link.link_lanes == 0)
>> +               printf("Link lanes: unknown\n");
>> +       else
>> +               printf("Link lanes: %u\n", link.link_lanes);
>>         printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
>>                ("full-duplex") : ("half-duplex"));
>>         printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
>> diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
>> index c17334ac25..46aceeee93 100644
>> --- a/doc/guides/rel_notes/release_24_03.rst
>> +++ b/doc/guides/rel_notes/release_24_03.rst
>> @@ -79,6 +79,7 @@ New Features
>>  * **Support setting lanes for ethdev.**
>>    * Support setting lanes by extended ``RTE_ETH_LINK_SPEED_*``.
>>    * Added function to convert bitmap flag to the struct of link speed info.
>> +  ``rte_eth_speed_capa_to_info()``
>>
>>  * **Added hash calculation of an encapsulated packet as done by the HW.**
>>
>> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>> index 2fbf9220d8..087f7fe853 100644
>> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>> @@ -190,6 +190,7 @@ For example:
>>     memory allocation on the socket: 0
>>     Link status: up
>>     Link speed: 40000 Mbps
>> +   Link lanes: 4
>>     Link duplex: full-duplex
>>     Promiscuous mode: enabled
>>     Allmulticast mode: disabled
>> @@ -2065,7 +2066,7 @@ port config - speed
>>  Set the speed and duplex mode for all ports or a specific port::
>>
>>     testpmd> port config (port_id|all) speed (10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto) \
>> -            duplex (half|full|auto)
>> +            lanes 1|2|4|8 duplex (half|full|auto)
>>
>>  port config - queues/descriptors
>>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> --
>> 2.33.0
>>
> 

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 1/3] ethdev: support setting lanes
  2024-03-12  7:52 ` [PATCH 1/3] ethdev: " Dengdui Huang
@ 2024-03-19  3:02   ` Stephen Hemminger
  2024-03-20  1:16     ` huangdengdui
  0 siblings, 1 reply; 51+ messages in thread
From: Stephen Hemminger @ 2024-03-19  3:02 UTC (permalink / raw)
  To: Dengdui Huang
  Cc: dev, ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

On Tue, 12 Mar 2024 15:52:36 +0800
Dengdui Huang <huangdengdui@huawei.com> wrote:

> -		ret = snprintf(str, len, "Link up at %s %s %s",
> +		ret = snprintf(str, len, "Link up at %s %ulanes %s %s",

Don't you want a space after %u?

Could you make it so that lanes is only part of the message if a
non-default value is used?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 1/3] ethdev: support setting lanes
  2024-03-19  3:02   ` Stephen Hemminger
@ 2024-03-20  1:16     ` huangdengdui
  0 siblings, 0 replies; 51+ messages in thread
From: huangdengdui @ 2024-03-20  1:16 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong



On 2024/3/19 11:02, Stephen Hemminger wrote:
> On Tue, 12 Mar 2024 15:52:36 +0800
> Dengdui Huang <huangdengdui@huawei.com> wrote:
> 
>> -		ret = snprintf(str, len, "Link up at %s %s %s",
>> +		ret = snprintf(str, len, "Link up at %s %ulanes %s %s",
> 
> Don't you want a space after %u?
> 
> Could you make it so that lanes is only part of the message if a
> non-default value is used?
Ok, I'll do it in the next version.
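
(That is, the message would become "Link up at %s %u lanes %s %s",
with the lanes part emitted only when a non-default lane count is
used.)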

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-18 21:26   ` Damodharam Ammepalli
  2024-03-18 21:42     ` Stephen Hemminger
@ 2024-03-20 11:41     ` Thomas Monjalon
  2024-03-20 12:31     ` Ferruh Yigit
  2 siblings, 0 replies; 51+ messages in thread
From: Thomas Monjalon @ 2024-03-20 11:41 UTC (permalink / raw)
  To: Damodharam Ammepalli
  Cc: dev, Dengdui Huang, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong,
	ajit.khaparde

18/03/2024 22:26, Damodharam Ammepalli:
> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 12/03/2024 08:52, Dengdui Huang:
> > > Some speeds can be achieved with different number of lanes. For example,
> > > 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> > > When use different lanes, the port cannot be up.
> >
> > I'm not sure what you are referring to.
> > I suppose it is not PCI lanes.
> > Please could you link to an explanation of how a port is split in lanes?
> > Which hardware does this?
> 
> This is a snapshot of 100Gb that the latest BCM576xx supports.
> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> 
> Let the user feed in lanes=< integer value> and the NIC driver decides
> the matching combination speed x lanes that works. In future if a new speed
> is implemented with more than 8 lanes, there wouldn't be a need
> to touch this speed command. Using separate lane command would
> be a better alternative to support already shipped products and only new
> drivers would consider this lanes configuration, if applicable.

Sorry, this does not provide enough explanation.
What is a lane? How does it work?
Is it only for Broadcom devices? Do you know of other devices?



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-18 21:26   ` Damodharam Ammepalli
  2024-03-18 21:42     ` Stephen Hemminger
  2024-03-20 11:41     ` Thomas Monjalon
@ 2024-03-20 12:31     ` Ferruh Yigit
  2024-03-21  2:02       ` huangdengdui
  2 siblings, 1 reply; 51+ messages in thread
From: Ferruh Yigit @ 2024-03-20 12:31 UTC (permalink / raw)
  To: Damodharam Ammepalli, Thomas Monjalon
  Cc: Dengdui Huang, dev, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

On 3/18/2024 9:26 PM, Damodharam Ammepalli wrote:
> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>>
>> 12/03/2024 08:52, Dengdui Huang:
>>> Some speeds can be achieved with different number of lanes. For example,
>>> 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
>>> When use different lanes, the port cannot be up.
>>
>> I'm not sure what you are referring to.
>> I suppose it is not PCI lanes.
>> Please could you link to an explanation of how a port is split in lanes?
>> Which hardware does this?
>>
>>
>>
> This is a snapshot of 100Gb that the latest BCM576xx supports.
> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> 
> Let the user feed in lanes=< integer value> and the NIC driver decides
> the matching combination speed x lanes that works. In future if a new speed
> is implemented with more than 8 lanes, there wouldn't be a need
> to touch this speed command. Using separate lane command would
> be a better alternative to support already shipped products and only new
> drivers would consider this lanes configuration, if applicable.
> 

As far as I understand, a lane is related to the physical layer of the
NIC: there are multiple copies of the transmitter, receiver, and
modulator HW blocks, each set is called a 'lane', and multiple lanes
work together to achieve the desired speed (please correct me if this
is wrong).

Why is just configuring the speed not enough? Why does the user need
to know the details and configuration of the lanes?
Will it work if the driver/device configures the "speed x lanes"
internally for the requested speed?

Is there a benefit to forcing a specific lane count for a specific
speed (like power optimization, just a wild guess)?


And +1 for auto-negotiation if possible.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-20 12:31     ` Ferruh Yigit
@ 2024-03-21  2:02       ` huangdengdui
  2024-03-21  8:28         ` Thomas Monjalon
  0 siblings, 1 reply; 51+ messages in thread
From: huangdengdui @ 2024-03-21  2:02 UTC (permalink / raw)
  To: Ferruh Yigit, Damodharam Ammepalli, Thomas Monjalon
  Cc: dev, aman.deep.singh, yuying.zhang, andrew.rybchenko,
	liuyonglong, fengchengwen, haijie1, lihuisong



On 2024/3/20 20:31, Ferruh Yigit wrote:
> On 3/18/2024 9:26 PM, Damodharam Ammepalli wrote:
>> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>>>
>>> 12/03/2024 08:52, Dengdui Huang:
>>>> Some speeds can be achieved with different number of lanes. For example,
>>>> 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
>>>> When use different lanes, the port cannot be up.
>>>
>>> I'm not sure what you are referring to.
>>> I suppose it is not PCI lanes.
>>> Please could you link to an explanation of how a port is split in lanes?
>>> Which hardware does this?
>>>
>>>
>>>
>> This is a snapshot of 100Gb that the latest BCM576xx supports.
>> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
>> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
>> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
>>
>> Let the user feed in lanes=< integer value> and the NIC driver decides
>> the matching combination speed x lanes that works. In future if a new speed
>> is implemented with more than 8 lanes, there wouldn't be a need
>> to touch this speed command. Using separate lane command would
>> be a better alternative to support already shipped products and only new
>> drivers would consider this lanes configuration, if applicable.
>>
> 
> As far as I understand, a lane is related to the physical layer of the
> NIC: there are multiple copies of the transmitter, receiver, and
> modulator HW blocks, each set is called a 'lane', and multiple lanes
> work together to achieve the desired speed (please correct me if this
> is wrong).
> 
> Why is just configuring the speed not enough? Why does the user need
> to know the details and configuration of the lanes?
> Will it work if the driver/device configures the "speed x lanes"
> internally for the requested speed?
> 
> Is there a benefit to forcing a specific lane count for a specific
> speed (like power optimization, just a wild guess)?
> 
> 
> And +1 for auto-negotiation if possible.

As you said above, multiple lanes work together to achieve the desired speed.
For example, the following solutions can be used to implement 100G:
1. Combine four 25G lanes
2. Combine two 50G lanes
3. Use a single 100G lane

Assume two ports are interconnected and both ports support the three
solutions above. If we just configure the speed to 100G, one port may use
four 25G lanes by default while the other port uses two 50G lanes by
default, and the link cannot come up. In this case, we need to configure
both ports to use the same solution (for example, two 50G lanes)
so that the link can come up.
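
For illustration, a minimal sketch (not a definitive implementation) of
how an application could pin both interconnected ports to the same
solution, assuming the extended rte_eth_speed_bitflag() and the
RTE_ETH_LANES_* macros proposed in this series:

#include <rte_ethdev.h>

/* Sketch: request fixed 100G over two 50G lanes on the given port. */
static int
force_100g_2lanes(uint16_t port_id)
{
	struct rte_eth_conf conf = { 0 };

	conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED |
			   rte_eth_speed_bitflag(RTE_ETH_SPEED_NUM_100G,
						 RTE_ETH_LANES_2,
						 RTE_ETH_LINK_FULL_DUPLEX);
	/* one Rx and one Tx queue just to keep the sketch short */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

Calling this on both ends (and then starting the ports) would make both
sides try the two-lane solution instead of their different defaults.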

ethtool has supported lanes configuration for a while already. [1]

[1]
https://lore.kernel.org/netdev/20201010154119.3537085-1-idosch@idosch.org/T/

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-21  2:02       ` huangdengdui
@ 2024-03-21  8:28         ` Thomas Monjalon
  2024-03-22  2:28           ` huangdengdui
  0 siblings, 1 reply; 51+ messages in thread
From: Thomas Monjalon @ 2024-03-21  8:28 UTC (permalink / raw)
  To: Damodharam Ammepalli, huangdengdui
  Cc: Ferruh Yigit, dev, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

21/03/2024 03:02, huangdengdui:
> 
> On 2024/3/20 20:31, Ferruh Yigit wrote:
> > On 3/18/2024 9:26 PM, Damodharam Ammepalli wrote:
> >> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >>>
> >>> 12/03/2024 08:52, Dengdui Huang:
> >>>> Some speeds can be achieved with different numbers of lanes. For example,
> >>>> 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> >>>> When the two ends use different lane counts, the port cannot come up.
> >>>
> >>> I'm not sure what you are referring to.
> >>> I suppose it is not PCI lanes.
> >>> Please could you link to an explanation of how a port is split in lanes?
> >>> Which hardware does this?
> >>>
> >> This is a snapshot of 100Gb that the latest BCM576xx supports.
> >> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
> >> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> >> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> >>
> >> Let the user feed in lanes=<integer value> and let the NIC driver decide
> >> the matching speed x lanes combination that works. In the future, if a new speed
> >> is implemented with more than 8 lanes, there wouldn't be a need
> >> to touch this speed command. Using a separate lanes command would
> >> be a better alternative to support already-shipped products, and only new
> >> drivers would consider this lanes configuration, if applicable.
> >>
> > 
> > As far as I understand, a lane is related to the physical layer of the
> > NIC: there are multiple copies of the transmitter, receiver, and modulator
> > HW blocks, each set is called a 'lane', and multiple lanes work together
> > to achieve the desired speed. (Please correct me if this is wrong.)
> > 
> > Why is configuring just the speed not enough? Why does the user need to
> > know the details and configuration of the lanes?
> > Will it work if the driver/device configures the "speed x lanes"
> > internally for the requested speed?
> > 
> > Is there a benefit to forcing a specific lane count for a specific speed
> > (like power optimization, just a wild guess)?
> > 
> > 
> > And +1 for auto-negotiation if possible.
> 
> As you said above, multiple lanes work together to achieve the desired speed.
> For example, the following solutions can be used to implement 100G:
> 1. Combine four 25G lanes
> 2. Combine two 50G lanes
> 3. Use a single 100G lane
> 
> Assume two ports are interconnected and both ports support the three
> solutions above. If we just configure the speed to 100G, one port may use
> four 25G lanes by default while the other port uses two 50G lanes by
> default, and the link cannot come up. In this case, we need to configure
> both ports to use the same solution (for example, two 50G lanes)
> so that the link can come up.

Why is this config not OK? How do we know?
Really, I have a very bad feeling about this feature.



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-21  8:28         ` Thomas Monjalon
@ 2024-03-22  2:28           ` huangdengdui
  2024-03-22  4:38             ` Jerin Jacob
  0 siblings, 1 reply; 51+ messages in thread
From: huangdengdui @ 2024-03-22  2:28 UTC (permalink / raw)
  To: Thomas Monjalon, Damodharam Ammepalli
  Cc: Ferruh Yigit, dev, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong



On 2024/3/21 16:28, Thomas Monjalon wrote:
> 21/03/2024 03:02, huangdengdui:
>>
>> On 2024/3/20 20:31, Ferruh Yigit wrote:
>>> On 3/18/2024 9:26 PM, Damodharam Ammepalli wrote:
>>>> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>>>>>
>>>>> 12/03/2024 08:52, Dengdui Huang:
>>>>>> Some speeds can be achieved with different numbers of lanes. For example,
>>>>>> 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
>>>>>> When the two ends use different lane counts, the port cannot come up.
>>>>>
>>>>> I'm not sure what you are referring to.
>>>>> I suppose it is not PCI lanes.
>>>>> Please could you link to an explanation of how a port is split in lanes?
>>>>> Which hardware does this?
>>>>>
>>>> This is a snapshot of 100Gb that the latest BCM576xx supports.
>>>> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
>>>> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
>>>> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
>>>>
>>>> Let the user feed in lanes=<integer value> and let the NIC driver decide
>>>> the matching speed x lanes combination that works. In the future, if a new speed
>>>> is implemented with more than 8 lanes, there wouldn't be a need
>>>> to touch this speed command. Using a separate lanes command would
>>>> be a better alternative to support already-shipped products, and only new
>>>> drivers would consider this lanes configuration, if applicable.
>>>>
>>>
>>> As far as I understand, a lane is related to the physical layer of the
>>> NIC: there are multiple copies of the transmitter, receiver, and modulator
>>> HW blocks, each set is called a 'lane', and multiple lanes work together
>>> to achieve the desired speed. (Please correct me if this is wrong.)
>>>
>>> Why is configuring just the speed not enough? Why does the user need to
>>> know the details and configuration of the lanes?
>>> Will it work if the driver/device configures the "speed x lanes"
>>> internally for the requested speed?
>>>
>>> Is there a benefit to forcing a specific lane count for a specific speed
>>> (like power optimization, just a wild guess)?
>>>
>>>
>>> And +1 for auto-negotiation if possible.
>>
>> As you said above, multiple lanes work together to achieve the desired speed.
>> For example, the following solutions can be used to implement 100G:
>> 1. Combine four 25G lanes
>> 2. Combine two 50G lanes
>> 3. Use a single 100G lane
>>
>> Assume two ports are interconnected and both ports support the three
>> solutions above. If we just configure the speed to 100G, one port may use
>> four 25G lanes by default while the other port uses two 50G lanes by
>> default, and the link cannot come up. In this case, we need to configure
>> both ports to use the same solution (for example, two 50G lanes)
>> so that the link can come up.
> 
> Why is this config not OK? How do we know?
> Really, I have a very bad feeling about this feature.
> 
> 
Sorry, I don't quite understand your question.
Are you asking why the link cannot come up when one port uses four 25G lanes and the other port uses two 50G lanes?

100GBASE-SR2 (two 50G lanes) and 100GBASE-SR4 (four 25G lanes) follow different standards at the physical layer. [1]
So it's not possible for them to communicate. Configuring lanes can help the driver choose the same standard.

[1]
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9844436

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-22  2:28           ` huangdengdui
@ 2024-03-22  4:38             ` Jerin Jacob
  2024-03-22  5:25               ` Ajit Khaparde
  0 siblings, 1 reply; 51+ messages in thread
From: Jerin Jacob @ 2024-03-22  4:38 UTC (permalink / raw)
  To: huangdengdui
  Cc: Thomas Monjalon, Damodharam Ammepalli, Ferruh Yigit, dev,
	aman.deep.singh, yuying.zhang, andrew.rybchenko, liuyonglong,
	fengchengwen, haijie1, lihuisong

On Fri, Mar 22, 2024 at 7:58 AM huangdengdui <huangdengdui@huawei.com> wrote:
>
>
>
> On 2024/3/21 16:28, Thomas Monjalon wrote:
> > 21/03/2024 03:02, huangdengdui:
> >>
> >> On 2024/3/20 20:31, Ferruh Yigit wrote:
> >>> On 3/18/2024 9:26 PM, Damodharam Ammepalli wrote:
> >>>> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >>>>>
> >>>>> 12/03/2024 08:52, Dengdui Huang:
> >>>>>> Some speeds can be achieved with different numbers of lanes. For example,
> >>>>>> 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> >>>>>> When the two ends use different lane counts, the port cannot come up.
> >>>>>
> >>>>> I'm not sure what you are referring to.
> >>>>> I suppose it is not PCI lanes.
> >>>>> Please could you link to an explanation of how a port is split in lanes?
> >>>>> Which hardware does this?
> >>>>>
> >>>> This is a snapshot of 100Gb that the latest BCM576xx supports.
> >>>> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
> >>>> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> >>>> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> >>>>
> >>>> Let the user feed in lanes=<integer value> and let the NIC driver decide
> >>>> the matching speed x lanes combination that works. In the future, if a new speed
> >>>> is implemented with more than 8 lanes, there wouldn't be a need
> >>>> to touch this speed command. Using a separate lanes command would
> >>>> be a better alternative to support already-shipped products, and only new
> >>>> drivers would consider this lanes configuration, if applicable.
> >>>>
> >>>
> >>> As far as I understand, a lane is related to the physical layer of the
> >>> NIC: there are multiple copies of the transmitter, receiver, and modulator
> >>> HW blocks, each set is called a 'lane', and multiple lanes work together
> >>> to achieve the desired speed. (Please correct me if this is wrong.)
> >>>
> >>> Why is configuring just the speed not enough? Why does the user need to
> >>> know the details and configuration of the lanes?
> >>> Will it work if the driver/device configures the "speed x lanes"
> >>> internally for the requested speed?
> >>>
> >>> Is there a benefit to forcing a specific lane count for a specific speed
> >>> (like power optimization, just a wild guess)?
> >>>
> >>>
> >>> And +1 for auto-negotiation if possible.
> >>
> >> As you said above, multiple lanes work together to achieve the desired speed.
> >> For example, the following solutions can be used to implement 100G:
> >> 1. Combine four 25G lanes
> >> 2. Combine two 50G lanes
> >> 3. Use a single 100G lane
> >>
> >> Assume two ports are interconnected and both ports support the three
> >> solutions above. If we just configure the speed to 100G, one port may use
> >> four 25G lanes by default while the other port uses two 50G lanes by
> >> default, and the link cannot come up. In this case, we need to configure
> >> both ports to use the same solution (for example, two 50G lanes)
> >> so that the link can come up.
> >
> > Why is this config not OK? How do we know?
> > Really, I have a very bad feeling about this feature.
> >
> >
> Sorry, I don't quite understand your question.
> Are you asking why the link cannot come up when one port uses four 25G lanes and the other port uses two 50G lanes?
>
> 100GBASE-SR2 (two 50G lanes) and 100GBASE-SR4 (four 25G lanes) follow different standards at the physical layer. [1]
> So it's not possible for them to communicate. Configuring lanes can help the driver choose the same standard.

Typically, low-level drivers or FW configure this.

For example, if FW configures a 100G port as 100GBASE-SR2, then two
ethdevs (port 0 and port 1) will show up.
Now, assume we expose this API and the end user configures port 1 as
25G lanes. If so:
a) What happens to port 0 and its state?
b) Will port 2 and port 3 show up after issuing this API (as the end user
configured 25G x 4 for 100G)? Will the application need to hotplug to
use the new ports?

> https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9844436

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-22  4:38             ` Jerin Jacob
@ 2024-03-22  5:25               ` Ajit Khaparde
  2024-03-22  5:51                 ` Jerin Jacob
  0 siblings, 1 reply; 51+ messages in thread
From: Ajit Khaparde @ 2024-03-22  5:25 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: huangdengdui, Thomas Monjalon, Damodharam Ammepalli,
	Ferruh Yigit, dev, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

On Thu, Mar 21, 2024 at 9:39 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Fri, Mar 22, 2024 at 7:58 AM huangdengdui <huangdengdui@huawei.com> wrote:
> >
> >
> >
> > On 2024/3/21 16:28, Thomas Monjalon wrote:
> > > 21/03/2024 03:02, huangdengdui:
> > >>
> > >> On 2024/3/20 20:31, Ferruh Yigit wrote:
> > >>> On 3/18/2024 9:26 PM, Damodharam Ammepalli wrote:
> > >>>> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > >>>>>
> > >>>>> 12/03/2024 08:52, Dengdui Huang:
> > >>>>>> Some speeds can be achieved with different numbers of lanes. For example,
> > >>>>>> 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> > >>>>>> When the two ends use different lane counts, the port cannot come up.
> > >>>>>
> > >>>>> I'm not sure what you are referring to.
> > >>>>> I suppose it is not PCI lanes.
> > >>>>> Please could you link to an explanation of how a port is split in lanes?
> > >>>>> Which hardware does this?
> > >>>>>
> > >>>> This is a snapshot of 100Gb that the latest BCM576xx supports.
> > >>>> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
> > >>>> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> > >>>> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> > >>>>
> > >>>> Let the user feed in lanes=<integer value> and let the NIC driver decide
> > >>>> the matching speed x lanes combination that works. In the future, if a new speed
> > >>>> is implemented with more than 8 lanes, there wouldn't be a need
> > >>>> to touch this speed command. Using a separate lanes command would
> > >>>> be a better alternative to support already-shipped products, and only new
> > >>>> drivers would consider this lanes configuration, if applicable.
> > >>>>
> > >>>
> > >>> As far as I understand, a lane is related to the physical layer of the
> > >>> NIC: there are multiple copies of the transmitter, receiver, and modulator
> > >>> HW blocks, each set is called a 'lane', and multiple lanes work together
> > >>> to achieve the desired speed. (Please correct me if this is wrong.)
> > >>>
> > >>> Why is configuring just the speed not enough? Why does the user need to
> > >>> know the details and configuration of the lanes?
> > >>> Will it work if the driver/device configures the "speed x lanes"
> > >>> internally for the requested speed?
> > >>>
> > >>> Is there a benefit to forcing a specific lane count for a specific speed
> > >>> (like power optimization, just a wild guess)?
> > >>>
> > >>>
> > >>> And +1 for auto-negotiation if possible.
> > >>
> > >> As you said above, multiple lanes work together to achieve the desired speed.
> > >> For example, the following solutions can be used to implement 100G:
> > >> 1. Combine four 25G lanes
> > >> 2. Combine two 50G lanes
> > >> 3. Use a single 100G lane
> > >>
> > >> Assume two ports are interconnected and both ports support the three
> > >> solutions above. If we just configure the speed to 100G, one port may use
> > >> four 25G lanes by default while the other port uses two 50G lanes by
> > >> default, and the link cannot come up. In this case, we need to configure
> > >> both ports to use the same solution (for example, two 50G lanes)
> > >> so that the link can come up.
> > >
> > > Why is this config not OK? How do we know?
> > > Really, I have a very bad feeling about this feature.
> > >
> > >
> > Sorry, I don't quite understand your question.
> > Are you asking why the link cannot come up when one port uses four 25G lanes and the other port uses two 50G lanes?
> >
> > 100GBASE-SR2 (two 50G lanes) and 100GBASE-SR4 (four 25G lanes) follow different standards at the physical layer. [1]
> > So it's not possible for them to communicate. Configuring lanes can help the driver choose the same standard.
>
> Typically, low-level drivers or FW configure this.
>
> For example, if FW configures a 100G port as 100GBASE-SR2, then two
> ethdevs (port 0 and port 1) will show up.
> Now, assume we expose this API and the end user configures port 1 as
> 25G lanes. If so:
> a) What happens to port 0 and its state?
There should be no impact to port 0.

> b) Will port 2 and port 3 show up after issuing this API (as the end user
> configured 25G x 4 for 100G)? Will the application need to hotplug to
> use the new ports?
No. The port count does not change. Nor does the number of PCI
functions seen by the host. Unless designed otherwise.

Changing the lane count does not change anything in physical terms.
What changes is the modulation or the signaling scheme.
The number of lanes which can be supported is determined by
the PHY itself and the cables used, and needs to be negotiated appropriately
with the remote partner - which is just like using a forced Ethernet speed
instead of auto-negotiated speeds.

I tried to search for some links, but it has not been easy.
So let me try to put something down. Sorry for the long mail.

In the above example, if the 100G port is configured for 2 lanes,
the SerDes in the PHY is configured such that each lane communicates
at 50G with the remote partner.
If the port is configured for 4 lanes, the internal SerDes is programmed
to use a lower 25G signaling scheme.

Each lane provides a parallel data transmission path; lanes are
typically implemented using multiple pairs of copper wires or optical fibers.

When the lane configuration of the Ethernet port is changed, the port speed
does not change. But the PHY FW uses a suitable modulation scheme to
achieve the total aggregate port speed.

The choice of modulation schemes includes NRZ and PAM4.

Typically, 100G uses four lanes at 25 Gbps each, which happens to be NRZ.
For higher Ethernet speeds, the shift has been to PAM4 signaling,
which allows twice the transmission rate compared to NRZ signaling.

So the same 100G port can be configured to send signals to the remote
partner using two lanes at 50G + 50G, or just 1 lane operating at 100G.

It gets interesting as the speeds increase.
200G can be achieved with 4 lanes at 50G or 2 lanes at 100G.

PAM4 offers improved bandwidth efficiency by encoding two bits per symbol
compared to NRZ's one bit per symbol. I think the Ethernet standards
committee has already completed work on 200G speed per lane.

BTW -
PAM4 stands for 4-Level Pulse Amplitude Modulation.
NRZ stands for Non-Return-to-Zero.
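
To make the arithmetic above concrete, here is an illustrative-only C
sketch mapping a (speed, lanes) pair to the per-lane rate and the
modulation the examples in this thread associate with it. Real PHYs/FW
negotiate this; the 25G threshold is simplified from the thread's
examples, not taken from a standard table:

#include <stdint.h>

/* Illustrative only: per-lane rate decides the likely modulation.
 * Assumes lanes > 0 and speed_mbps is the aggregate port speed. */
static const char *
guess_modulation(uint32_t speed_mbps, uint8_t lanes)
{
	uint32_t per_lane = speed_mbps / lanes;

	return per_lane <= 25000 ? "NRZ (1 bit per symbol)"
				 : "PAM4 (2 bits per symbol)";
}

/* guess_modulation(100000, 4) -> NRZ  (25G per lane)
 * guess_modulation(100000, 2) -> PAM4 (50G per lane)
 * guess_modulation(100000, 1) -> PAM4 (100G per lane)
 * guess_modulation(200000, 2) -> PAM4 (100G per lane) */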

Thanks
Ajit

>
> > https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9844436


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-22  5:25               ` Ajit Khaparde
@ 2024-03-22  5:51                 ` Jerin Jacob
  2024-03-22 13:51                   ` Thomas Monjalon
  0 siblings, 1 reply; 51+ messages in thread
From: Jerin Jacob @ 2024-03-22  5:51 UTC (permalink / raw)
  To: Ajit Khaparde
  Cc: huangdengdui, Thomas Monjalon, Damodharam Ammepalli,
	Ferruh Yigit, dev, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, liuyonglong, fengchengwen, haijie1, lihuisong

On Fri, Mar 22, 2024 at 10:56 AM Ajit Khaparde
<ajit.khaparde@broadcom.com> wrote:
>
> On Thu, Mar 21, 2024 at 9:39 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> >
> > On Fri, Mar 22, 2024 at 7:58 AM huangdengdui <huangdengdui@huawei.com> wrote:
> > >
> > >
> > >
> > > On 2024/3/21 16:28, Thomas Monjalon wrote:
> > > > 21/03/2024 03:02, huangdengdui:
> > > >>
> > > >> On 2024/3/20 20:31, Ferruh Yigit wrote:
> > > >>> On 3/18/2024 9:26 PM, Damodharam Ammepalli wrote:
> > > >>>> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > >>>>>
> > > >>>>> 12/03/2024 08:52, Dengdui Huang:
> > > >>>>>> Some speeds can be achieved with different numbers of lanes. For example,
> > > >>>>>> 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> > > >>>>>> When the two ends use different lane counts, the port cannot come up.
> > > >>>>>
> > > >>>>> I'm not sure what you are referring to.
> > > >>>>> I suppose it is not PCI lanes.
> > > >>>>> Please could you link to an explanation of how a port is split in lanes?
> > > >>>>> Which hardware does this?
> > > >>>>>
> > > >>>> This is a snapshot of 100Gb that the latest BCM576xx supports.
> > > >>>> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
> > > >>>> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> > > >>>> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> > > >>>>
> > > >>>> Let the user feed in lanes=<integer value> and let the NIC driver decide
> > > >>>> the matching speed x lanes combination that works. In the future, if a new speed
> > > >>>> is implemented with more than 8 lanes, there wouldn't be a need
> > > >>>> to touch this speed command. Using a separate lanes command would
> > > >>>> be a better alternative to support already-shipped products, and only new
> > > >>>> drivers would consider this lanes configuration, if applicable.
> > > >>>>
> > > >>>
> > > >>> As far as I understand, a lane is related to the physical layer of the
> > > >>> NIC: there are multiple copies of the transmitter, receiver, and modulator
> > > >>> HW blocks, each set is called a 'lane', and multiple lanes work together
> > > >>> to achieve the desired speed. (Please correct me if this is wrong.)
> > > >>>
> > > >>> Why is configuring just the speed not enough? Why does the user need to
> > > >>> know the details and configuration of the lanes?
> > > >>> Will it work if the driver/device configures the "speed x lanes"
> > > >>> internally for the requested speed?
> > > >>>
> > > >>> Is there a benefit to forcing a specific lane count for a specific speed
> > > >>> (like power optimization, just a wild guess)?
> > > >>>
> > > >>>
> > > >>> And +1 for auto-negotiation if possible.
> > > >>
> > > >> As you said above, multiple lanes work together to achieve the desired speed.
> > > >> For example, the following solutions can be used to implement 100G:
> > > >> 1. Combine four 25G lanes
> > > >> 2. Combine two 50G lanes
> > > >> 3. Use a single 100G lane
> > > >>
> > > >> Assume two ports are interconnected and both ports support the three
> > > >> solutions above. If we just configure the speed to 100G, one port may use
> > > >> four 25G lanes by default while the other port uses two 50G lanes by
> > > >> default, and the link cannot come up. In this case, we need to configure
> > > >> both ports to use the same solution (for example, two 50G lanes)
> > > >> so that the link can come up.
> > > >
> > > > Why is this config not OK? How do we know?
> > > > Really, I have a very bad feeling about this feature.
> > > >
> > > >
> > > Sorry, I don't quite understand your question.
> > > Are you asking why the link cannot come up when one port uses four 25G lanes and the other port uses two 50G lanes?
> > >
> > > 100GBASE-SR2 (two 50G lanes) and 100GBASE-SR4 (four 25G lanes) follow different standards at the physical layer. [1]
> > > So it's not possible for them to communicate. Configuring lanes can help the driver choose the same standard.
> >
> > Typically, low-level drivers or FW configure this.
> >
> > For example, if FW configures a 100G port as 100GBASE-SR2, then two
> > ethdevs (port 0 and port 1) will show up.
> > Now, assume we expose this API and the end user configures port 1 as
> > 25G lanes. If so:
> > a) What happens to port 0 and its state?
> There should be no impact to port 0.
>
> > b) Will port 2 and port 3 show up after issuing this API (as the end user
> > configured 25G x 4 for 100G)? Will the application need to hotplug to
> > use the new ports?
> No. The port count does not change. Nor does the number of PCI
> functions seen by the host. Unless designed otherwise.
>
> Changing the lane count does not change anything in physical terms.
> What changes is the modulation or the signaling scheme.
> The number of lanes which can be supported is determined by
> the PHY itself and the cables used, and needs to be negotiated appropriately
> with the remote partner - which is just like using a forced Ethernet speed
> instead of auto-negotiated speeds.

OK. It looks platform independent, then. At least with the cnxk driver,
the end user cannot simply change the lane config parameters
while traffic is active; also, it looks like other drivers need to run
SerDes training with the remote partner while reconfiguring it.

At least on the cnxk platform, 25G x 4 on 100G will show up as 4 ethdev devices.

Having said that, if other NICs support this feature without
disturbing current port states, I don't have an objection to this API.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v2 0/6] support setting lanes
  2024-03-12  7:52 ` [PATCH 3/3] app/testpmd: " Dengdui Huang
  2024-03-15 21:47   ` Damodharam Ammepalli
@ 2024-03-22  7:09   ` Dengdui Huang
  2024-03-22  7:09     ` [PATCH v2 1/6] ethdev: " Dengdui Huang
                       ` (6 more replies)
  1 sibling, 7 replies; 51+ messages in thread
From: Dengdui Huang @ 2024-03-22  7:09 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	ajit.khaparde, liuyonglong, fengchengwen, haijie1, lihuisong

At the physical layer, multiple lanes often work together
to achieve higher speeds, so a given speed can be achieved with different
numbers of lanes. For example, the following solutions can be used to
implement 100G:
1. Combine four 25G lanes
2. Combine two 50G lanes
3. Use a single 100G lane

Assume two ports are interconnected and both ports support the three
solutions above. If we just configure the speed to 100G, one port may use
four 25G lanes by default while the other port uses two 50G lanes by
default, and the port cannot come up. In this case, we need to configure
the ports to use the same solution (for example, two 50G lanes) so that
the ports can come up.

This patch set adds support for setting lanes for ethdev. An application
can use this feature to configure lanes and help both ends select the
same solution.

In addition, the testpmd and hns3 driver are modified to adapt to it.
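
For a rough idea of the intended usage, here is a minimal sketch
(assuming the RTE_ETH_DEV_CAPA_SETTING_LANES capability and the extended
rte_eth_speed_bitflag() from patch 1/6) of probing for lane-setting
support before requesting an explicit lane count:

#include <rte_ethdev.h>

/* Sketch: ask for 100G over two lanes only when the device advertises
 * the (proposed) lane-setting capability; otherwise let the driver
 * pick the lane count. */
static uint32_t
speed_flag_100g(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	uint8_t lanes = RTE_ETH_LANES_UNKNOWN;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;
	if (info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES)
		lanes = RTE_ETH_LANES_2;
	return rte_eth_speed_bitflag(RTE_ETH_SPEED_NUM_100G, lanes,
				     RTE_ETH_LINK_FULL_DUPLEX);
}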

Changelog:
v1->v2:
 - ethdev: update the parse link mode info function name
 - hns3 driver: modify FEC capability reporting
 - testpmd: add a command to configure speed with lanes
 - update the UT and some code cleanups

Dengdui Huang (6):
  ethdev: support setting lanes
  test: updated UT for setting lanes
  ethdev: add function to parse link mode info
  net/hns3: use parse link mode info function
  net/hns3: support setting lanes
  app/testpmd: support setting lanes

 app/test-pmd/cmdline.c                      | 199 +++++++++-----
 app/test-pmd/config.c                       |  78 ++++--
 app/test/test_ethdev_link.c                 |  18 +-
 doc/guides/rel_notes/release_24_03.rst      |  11 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   9 +
 drivers/net/bnxt/bnxt_ethdev.c              |   3 +-
 drivers/net/hns3/hns3_cmd.h                 |  15 +-
 drivers/net/hns3/hns3_common.c              |   2 +
 drivers/net/hns3/hns3_ethdev.c              | 206 ++++++++------
 drivers/net/hns3/hns3_ethdev.h              |   2 +
 lib/ethdev/ethdev_linux_ethtool.c           | 208 +++++++-------
 lib/ethdev/ethdev_private.h                 |   4 +
 lib/ethdev/ethdev_trace.h                   |   4 +-
 lib/ethdev/meson.build                      |   2 +
 lib/ethdev/rte_ethdev.c                     | 284 ++++++++++++++++++--
 lib/ethdev/rte_ethdev.h                     | 104 +++++--
 lib/ethdev/version.map                      |   7 +
 17 files changed, 829 insertions(+), 327 deletions(-)

-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
@ 2024-03-22  7:09     ` Dengdui Huang
  2024-03-22 13:58       ` Thomas Monjalon
  2024-03-22  7:09     ` [PATCH v2 2/6] test: updated UT for " Dengdui Huang
                       ` (5 subsequent siblings)
  6 siblings, 1 reply; 51+ messages in thread
From: Dengdui Huang @ 2024-03-22  7:09 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	ajit.khaparde, liuyonglong, fengchengwen, haijie1, lihuisong

Some speeds can be achieved with different numbers of lanes. For example,
100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
When the two ends use different lane counts, the port cannot come up.
This patch adds support for setting and reporting lanes.

In addition, add a device capability RTE_ETH_DEV_CAPA_SETTING_LANES.
When the device does not support it, if a speed supports different
numbers of lanes, the application does not know which lane count
is used by the device.

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
 doc/guides/rel_notes/release_24_03.rst |   6 +
 drivers/net/bnxt/bnxt_ethdev.c         |   3 +-
 drivers/net/hns3/hns3_ethdev.c         |   1 +
 lib/ethdev/ethdev_linux_ethtool.c      | 208 ++++++++++++-------------
 lib/ethdev/ethdev_private.h            |   4 +
 lib/ethdev/ethdev_trace.h              |   4 +-
 lib/ethdev/meson.build                 |   2 +
 lib/ethdev/rte_ethdev.c                |  85 +++++++---
 lib/ethdev/rte_ethdev.h                |  75 ++++++---
 lib/ethdev/version.map                 |   6 +
 10 files changed, 250 insertions(+), 144 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 7bd9ceab27..4621689c68 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -76,6 +76,9 @@ New Features
  * Added a fast path function ``rte_eth_tx_queue_count``
     to get the number of used descriptors of a Tx queue.
 
+* **Support setting lanes for ethdev.**
+  * Support setting lanes via the extended ``RTE_ETH_LINK_SPEED_*`` flags.
+
 * **Added hash calculation of an encapsulated packet as done by the HW.**
 
   Added function to calculate hash when doing tunnel encapsulation:
@@ -254,6 +257,9 @@ ABI Changes
 
 * No ABI change that would break compatibility with 23.11.
 
+* ethdev: Convert a numerical speed to a bitmap flag with lanes:
+  The function ``rte_eth_speed_bitflag`` gets a new lanes parameter.
+
 
 Known Issues
 ------------
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index ba31ae9286..e881a7f3cc 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -711,7 +711,8 @@ static int bnxt_update_phy_setting(struct bnxt *bp)
 	}
 
 	/* convert to speedbit flag */
-	curr_speed_bit = rte_eth_speed_bitflag((uint32_t)link->link_speed, 1);
+	curr_speed_bit = rte_eth_speed_bitflag((uint32_t)link->link_speed,
+					       RTE_ETH_LANES_UNKNOWN, 1);
 
 	/*
 	 * Device is not obliged link down in certain scenarios, even
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index b10d1216d2..ecd3b2ef64 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5969,6 +5969,7 @@ hns3_get_speed_fec_capa(struct rte_eth_fec_capa *speed_fec_capa,
 	for (i = 0; i < RTE_DIM(speed_fec_capa_tbl); i++) {
 		speed_bit =
 			rte_eth_speed_bitflag(speed_fec_capa_tbl[i].speed,
+					      RTE_ETH_LANES_UNKNOWN,
 					      RTE_ETH_LINK_FULL_DUPLEX);
 		if ((speed_capa & speed_bit) == 0)
 			continue;
diff --git a/lib/ethdev/ethdev_linux_ethtool.c b/lib/ethdev/ethdev_linux_ethtool.c
index e792204b01..6412845161 100644
--- a/lib/ethdev/ethdev_linux_ethtool.c
+++ b/lib/ethdev/ethdev_linux_ethtool.c
@@ -7,6 +7,10 @@
 #include "rte_ethdev.h"
 #include "ethdev_linux_ethtool.h"
 
+#define RTE_ETH_LINK_MODES_INDEX_SPEED	0
+#define RTE_ETH_LINK_MODES_INDEX_DUPLEX	1
+#define RTE_ETH_LINK_MODES_INDEX_LANES	2
+
 /* Link modes sorted with index as defined in ethtool.
  * Values are speed in Mbps with LSB indicating duplex.
  *
@@ -15,123 +19,119 @@
  * and allows to compile with new bits included even on an old kernel.
  *
  * The array below is built from bit definitions with this shell command:
- *   sed -rn 's;.*(ETHTOOL_LINK_MODE_)([0-9]+)([0-9a-zA-Z_]*).*= *([0-9]*).*;'\
- *           '[\4] = \2, /\* \1\2\3 *\/;p' /usr/include/linux/ethtool.h |
- *   awk '/_Half_/{$3=$3+1","}1'
+ *   sed -rn 's;.*(ETHTOOL_LINK_MODE_)([0-9]+)([a-zA-Z]+)([0-9_]+)([0-9a-zA-Z_]*)
+ *   .*= *([0-9]*).*;'\ '[\6] = {\2, 1, \4}, /\* \1\2\3\4\5 *\/;p'
+ *    /usr/include/linux/ethtool.h | awk '/_Half_/{$4=0","}1' |
+ *    awk '/, _}/{$5=1"},"}1' | awk '{sub(/_}/,"\}");}1'
  */
-static const uint32_t link_modes[] = {
-	  [0] =      11, /* ETHTOOL_LINK_MODE_10baseT_Half_BIT */
-	  [1] =      10, /* ETHTOOL_LINK_MODE_10baseT_Full_BIT */
-	  [2] =     101, /* ETHTOOL_LINK_MODE_100baseT_Half_BIT */
-	  [3] =     100, /* ETHTOOL_LINK_MODE_100baseT_Full_BIT */
-	  [4] =    1001, /* ETHTOOL_LINK_MODE_1000baseT_Half_BIT */
-	  [5] =    1000, /* ETHTOOL_LINK_MODE_1000baseT_Full_BIT */
-	 [12] =   10000, /* ETHTOOL_LINK_MODE_10000baseT_Full_BIT */
-	 [15] =    2500, /* ETHTOOL_LINK_MODE_2500baseX_Full_BIT */
-	 [17] =    1000, /* ETHTOOL_LINK_MODE_1000baseKX_Full_BIT */
-	 [18] =   10000, /* ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT */
-	 [19] =   10000, /* ETHTOOL_LINK_MODE_10000baseKR_Full_BIT */
-	 [20] =   10000, /* ETHTOOL_LINK_MODE_10000baseR_FEC_BIT */
-	 [21] =   20000, /* ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT */
-	 [22] =   20000, /* ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT */
-	 [23] =   40000, /* ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT */
-	 [24] =   40000, /* ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT */
-	 [25] =   40000, /* ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT */
-	 [26] =   40000, /* ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT */
-	 [27] =   56000, /* ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT */
-	 [28] =   56000, /* ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT */
-	 [29] =   56000, /* ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT */
-	 [30] =   56000, /* ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT */
-	 [31] =   25000, /* ETHTOOL_LINK_MODE_25000baseCR_Full_BIT */
-	 [32] =   25000, /* ETHTOOL_LINK_MODE_25000baseKR_Full_BIT */
-	 [33] =   25000, /* ETHTOOL_LINK_MODE_25000baseSR_Full_BIT */
-	 [34] =   50000, /* ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT */
-	 [35] =   50000, /* ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT */
-	 [36] =  100000, /* ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT */
-	 [37] =  100000, /* ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT */
-	 [38] =  100000, /* ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT */
-	 [39] =  100000, /* ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT */
-	 [40] =   50000, /* ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT */
-	 [41] =    1000, /* ETHTOOL_LINK_MODE_1000baseX_Full_BIT */
-	 [42] =   10000, /* ETHTOOL_LINK_MODE_10000baseCR_Full_BIT */
-	 [43] =   10000, /* ETHTOOL_LINK_MODE_10000baseSR_Full_BIT */
-	 [44] =   10000, /* ETHTOOL_LINK_MODE_10000baseLR_Full_BIT */
-	 [45] =   10000, /* ETHTOOL_LINK_MODE_10000baseLRM_Full_BIT */
-	 [46] =   10000, /* ETHTOOL_LINK_MODE_10000baseER_Full_BIT */
-	 [47] =    2500, /* ETHTOOL_LINK_MODE_2500baseT_Full_BIT */
-	 [48] =    5000, /* ETHTOOL_LINK_MODE_5000baseT_Full_BIT */
-	 [52] =   50000, /* ETHTOOL_LINK_MODE_50000baseKR_Full_BIT */
-	 [53] =   50000, /* ETHTOOL_LINK_MODE_50000baseSR_Full_BIT */
-	 [54] =   50000, /* ETHTOOL_LINK_MODE_50000baseCR_Full_BIT */
-	 [55] =   50000, /* ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT */
-	 [56] =   50000, /* ETHTOOL_LINK_MODE_50000baseDR_Full_BIT */
-	 [57] =  100000, /* ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT */
-	 [58] =  100000, /* ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT */
-	 [59] =  100000, /* ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT */
-	 [60] =  100000, /* ETHTOOL_LINK_MODE_100000baseLR2_ER2_FR2_Full_BIT */
-	 [61] =  100000, /* ETHTOOL_LINK_MODE_100000baseDR2_Full_BIT */
-	 [62] =  200000, /* ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT */
-	 [63] =  200000, /* ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT */
-	 [64] =  200000, /* ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT */
-	 [65] =  200000, /* ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT */
-	 [66] =  200000, /* ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT */
-	 [67] =     100, /* ETHTOOL_LINK_MODE_100baseT1_Full_BIT */
-	 [68] =    1000, /* ETHTOOL_LINK_MODE_1000baseT1_Full_BIT */
-	 [69] =  400000, /* ETHTOOL_LINK_MODE_400000baseKR8_Full_BIT */
-	 [70] =  400000, /* ETHTOOL_LINK_MODE_400000baseSR8_Full_BIT */
-	 [71] =  400000, /* ETHTOOL_LINK_MODE_400000baseLR8_ER8_FR8_Full_BIT */
-	 [72] =  400000, /* ETHTOOL_LINK_MODE_400000baseDR8_Full_BIT */
-	 [73] =  400000, /* ETHTOOL_LINK_MODE_400000baseCR8_Full_BIT */
-	 [75] =  100000, /* ETHTOOL_LINK_MODE_100000baseKR_Full_BIT */
-	 [76] =  100000, /* ETHTOOL_LINK_MODE_100000baseSR_Full_BIT */
-	 [77] =  100000, /* ETHTOOL_LINK_MODE_100000baseLR_ER_FR_Full_BIT */
-	 [78] =  100000, /* ETHTOOL_LINK_MODE_100000baseCR_Full_BIT */
-	 [79] =  100000, /* ETHTOOL_LINK_MODE_100000baseDR_Full_BIT */
-	 [80] =  200000, /* ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT */
-	 [81] =  200000, /* ETHTOOL_LINK_MODE_200000baseSR2_Full_BIT */
-	 [82] =  200000, /* ETHTOOL_LINK_MODE_200000baseLR2_ER2_FR2_Full_BIT */
-	 [83] =  200000, /* ETHTOOL_LINK_MODE_200000baseDR2_Full_BIT */
-	 [84] =  200000, /* ETHTOOL_LINK_MODE_200000baseCR2_Full_BIT */
-	 [85] =  400000, /* ETHTOOL_LINK_MODE_400000baseKR4_Full_BIT */
-	 [86] =  400000, /* ETHTOOL_LINK_MODE_400000baseSR4_Full_BIT */
-	 [87] =  400000, /* ETHTOOL_LINK_MODE_400000baseLR4_ER4_FR4_Full_BIT */
-	 [88] =  400000, /* ETHTOOL_LINK_MODE_400000baseDR4_Full_BIT */
-	 [89] =  400000, /* ETHTOOL_LINK_MODE_400000baseCR4_Full_BIT */
-	 [90] =     101, /* ETHTOOL_LINK_MODE_100baseFX_Half_BIT */
-	 [91] =     100, /* ETHTOOL_LINK_MODE_100baseFX_Full_BIT */
-	 [92] =      10, /* ETHTOOL_LINK_MODE_10baseT1L_Full_BIT */
-	 [93] =  800000, /* ETHTOOL_LINK_MODE_800000baseCR8_Full_BIT */
-	 [94] =  800000, /* ETHTOOL_LINK_MODE_800000baseKR8_Full_BIT */
-	 [95] =  800000, /* ETHTOOL_LINK_MODE_800000baseDR8_Full_BIT */
-	 [96] =  800000, /* ETHTOOL_LINK_MODE_800000baseDR8_2_Full_BIT */
-	 [97] =  800000, /* ETHTOOL_LINK_MODE_800000baseSR8_Full_BIT */
-	 [98] =  800000, /* ETHTOOL_LINK_MODE_800000baseVR8_Full_BIT */
-	 [99] =      10, /* ETHTOOL_LINK_MODE_10baseT1S_Full_BIT */
-	[100] =      11, /* ETHTOOL_LINK_MODE_10baseT1S_Half_BIT */
-	[101] =      11, /* ETHTOOL_LINK_MODE_10baseT1S_P2MP_Half_BIT */
+static const uint32_t link_modes[][3] = {
+	[0]   = {10, 0, 1},     /* ETHTOOL_LINK_MODE_10baseT_Half_BIT */
+	[1]   = {10, 1, 1},     /* ETHTOOL_LINK_MODE_10baseT_Full_BIT */
+	[2]   = {100, 0, 1},    /* ETHTOOL_LINK_MODE_100baseT_Half_BIT */
+	[3]   = {100, 1, 1},    /* ETHTOOL_LINK_MODE_100baseT_Full_BIT */
+	[4]   = {1000, 0, 1},   /* ETHTOOL_LINK_MODE_1000baseT_Half_BIT */
+	[5]   = {1000, 1, 1},   /* ETHTOOL_LINK_MODE_1000baseT_Full_BIT */
+	[12]  = {10000, 1, 1},  /* ETHTOOL_LINK_MODE_10000baseT_Full_BIT */
+	[15]  = {2500, 1, 1},   /* ETHTOOL_LINK_MODE_2500baseX_Full_BIT */
+	[17]  = {1000, 1, 1},   /* ETHTOOL_LINK_MODE_1000baseKX_Full_BIT */
+	[18]  = {10000, 1, 4},  /* ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT */
+	[19]  = {10000, 1, 1},  /* ETHTOOL_LINK_MODE_10000baseKR_Full_BIT */
+	[20]  = {10000, 1, 1},  /* ETHTOOL_LINK_MODE_10000baseR_FEC_BIT */
+	[21]  = {20000, 1, 2},  /* ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT */
+	[22]  = {20000, 1, 2},  /* ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT */
+	[23]  = {40000, 1, 4},  /* ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT */
+	[24]  = {40000, 1, 4},  /* ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT */
+	[25]  = {40000, 1, 4},  /* ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT */
+	[26]  = {40000, 1, 4},  /* ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT */
+	[27]  = {56000, 1, 4},  /* ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT */
+	[28]  = {56000, 1, 4},  /* ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT */
+	[29]  = {56000, 1, 4},  /* ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT */
+	[30]  = {56000, 1, 4},  /* ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT */
+	[31]  = {25000, 1, 1},  /* ETHTOOL_LINK_MODE_25000baseCR_Full_BIT */
+	[32]  = {25000, 1, 1},  /* ETHTOOL_LINK_MODE_25000baseKR_Full_BIT */
+	[33]  = {25000, 1, 1},  /* ETHTOOL_LINK_MODE_25000baseSR_Full_BIT */
+	[34]  = {50000, 1, 2},  /* ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT */
+	[35]  = {50000, 1, 2},  /* ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT */
+	[36]  = {100000, 1, 4}, /* ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT */
+	[37]  = {100000, 1, 4}, /* ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT */
+	[38]  = {100000, 1, 4}, /* ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT */
+	[39]  = {100000, 1, 4}, /* ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT */
+	[40]  = {50000, 1, 2},  /* ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT */
+	[41]  = {1000, 1, 1},   /* ETHTOOL_LINK_MODE_1000baseX_Full_BIT */
+	[42]  = {10000, 1, 1},  /* ETHTOOL_LINK_MODE_10000baseCR_Full_BIT */
+	[43]  = {10000, 1, 1},  /* ETHTOOL_LINK_MODE_10000baseSR_Full_BIT */
+	[44]  = {10000, 1, 1},  /* ETHTOOL_LINK_MODE_10000baseLR_Full_BIT */
+	[45]  = {10000, 1, 1},  /* ETHTOOL_LINK_MODE_10000baseLRM_Full_BIT */
+	[46]  = {10000, 1, 1},  /* ETHTOOL_LINK_MODE_10000baseER_Full_BIT */
+	[47]  = {2500, 1, 1},   /* ETHTOOL_LINK_MODE_2500baseT_Full_BIT */
+	[48]  = {5000, 1, 1},   /* ETHTOOL_LINK_MODE_5000baseT_Full_BIT */
+	[52]  = {50000, 1, 1},  /* ETHTOOL_LINK_MODE_50000baseKR_Full_BIT */
+	[53]  = {50000, 1, 1},  /* ETHTOOL_LINK_MODE_50000baseSR_Full_BIT */
+	[54]  = {50000, 1, 1},  /* ETHTOOL_LINK_MODE_50000baseCR_Full_BIT */
+	[55]  = {50000, 1, 1},  /* ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT */
+	[56]  = {50000, 1, 1},  /* ETHTOOL_LINK_MODE_50000baseDR_Full_BIT */
+	[57]  = {100000, 1, 2}, /* ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT */
+	[58]  = {100000, 1, 2}, /* ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT */
+	[59]  = {100000, 1, 2}, /* ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT */
+	[60]  = {100000, 1, 2}, /* ETHTOOL_LINK_MODE_100000baseLR2_ER2_FR2_Full_BIT */
+	[61]  = {100000, 1, 2}, /* ETHTOOL_LINK_MODE_100000baseDR2_Full_BIT */
+	[62]  = {200000, 1, 4}, /* ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT */
+	[63]  = {200000, 1, 4}, /* ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT */
+	[64]  = {200000, 1, 4}, /* ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT */
+	[65]  = {200000, 1, 4}, /* ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT */
+	[66]  = {200000, 1, 4}, /* ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT */
+	[67]  = {100, 1, 1},    /* ETHTOOL_LINK_MODE_100baseT1_Full_BIT */
+	[68]  = {1000, 1, 1},   /* ETHTOOL_LINK_MODE_1000baseT1_Full_BIT */
+	[69]  = {400000, 1, 8}, /* ETHTOOL_LINK_MODE_400000baseKR8_Full_BIT */
+	[70]  = {400000, 1, 8}, /* ETHTOOL_LINK_MODE_400000baseSR8_Full_BIT */
+	[71]  = {400000, 1, 8}, /* ETHTOOL_LINK_MODE_400000baseLR8_ER8_FR8_Full_BIT */
+	[72]  = {400000, 1, 8}, /* ETHTOOL_LINK_MODE_400000baseDR8_Full_BIT */
+	[73]  = {400000, 1, 8}, /* ETHTOOL_LINK_MODE_400000baseCR8_Full_BIT */
+	[75]  = {100000, 1, 1}, /* ETHTOOL_LINK_MODE_100000baseKR_Full_BIT */
+	[76]  = {100000, 1, 1}, /* ETHTOOL_LINK_MODE_100000baseSR_Full_BIT */
+	[77]  = {100000, 1, 1}, /* ETHTOOL_LINK_MODE_100000baseLR_ER_FR_Full_BIT */
+	[78]  = {100000, 1, 1}, /* ETHTOOL_LINK_MODE_100000baseCR_Full_BIT */
+	[79]  = {100000, 1, 1}, /* ETHTOOL_LINK_MODE_100000baseDR_Full_BIT */
+	[80]  = {200000, 1, 2}, /* ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT */
+	[81]  = {200000, 1, 2}, /* ETHTOOL_LINK_MODE_200000baseSR2_Full_BIT */
+	[82]  = {200000, 1, 2}, /* ETHTOOL_LINK_MODE_200000baseLR2_ER2_FR2_Full_BIT */
+	[83]  = {200000, 1, 2}, /* ETHTOOL_LINK_MODE_200000baseDR2_Full_BIT */
+	[84]  = {200000, 1, 2}, /* ETHTOOL_LINK_MODE_200000baseCR2_Full_BIT */
+	[85]  = {400000, 1, 4}, /* ETHTOOL_LINK_MODE_400000baseKR4_Full_BIT */
+	[86]  = {400000, 1, 4}, /* ETHTOOL_LINK_MODE_400000baseSR4_Full_BIT */
+	[87]  = {400000, 1, 4}, /* ETHTOOL_LINK_MODE_400000baseLR4_ER4_FR4_Full_BIT */
+	[88]  = {400000, 1, 4}, /* ETHTOOL_LINK_MODE_400000baseDR4_Full_BIT */
+	[89]  = {400000, 1, 4}, /* ETHTOOL_LINK_MODE_400000baseCR4_Full_BIT */
+	[90]  = {100, 0, 1},    /* ETHTOOL_LINK_MODE_100baseFX_Half_BIT */
+	[91]  = {100, 1, 1},    /* ETHTOOL_LINK_MODE_100baseFX_Full_BIT */
+	[92]  = {10, 1, 1},     /* ETHTOOL_LINK_MODE_10baseT1L_Full_BIT */
+	[93]  = {800000, 1, 8}, /* ETHTOOL_LINK_MODE_800000baseCR8_Full_BIT */
+	[94]  = {800000, 1, 8}, /* ETHTOOL_LINK_MODE_800000baseKR8_Full_BIT */
+	[95]  = {800000, 1, 8}, /* ETHTOOL_LINK_MODE_800000baseDR8_Full_BIT */
+	[96]  = {800000, 1, 8}, /* ETHTOOL_LINK_MODE_800000baseDR8_2_Full_BIT */
+	[97]  = {800000, 1, 8}, /* ETHTOOL_LINK_MODE_800000baseSR8_Full_BIT */
+	[98]  = {800000, 1, 8}, /* ETHTOOL_LINK_MODE_800000baseVR8_Full_BIT */
+	[99]  = {10, 1, 1},     /* ETHTOOL_LINK_MODE_10baseT1S_Full_BIT */
+	[100] = {10, 0, 1},     /* ETHTOOL_LINK_MODE_10baseT1S_Half_BIT */
+	[101] = {10, 0, 1},     /* ETHTOOL_LINK_MODE_10baseT1S_P2MP_Half_BIT */
 };
 
 uint32_t
 rte_eth_link_speed_ethtool(enum ethtool_link_mode_bit_indices bit)
 {
-	uint32_t speed;
-	int duplex;
+	uint32_t speed, duplex, lanes;
 
 	/* get mode from array */
 	if (bit >= RTE_DIM(link_modes))
 		return RTE_ETH_LINK_SPEED_AUTONEG;
-	speed = link_modes[bit];
-	if (speed == 0)
+	if (link_modes[bit][RTE_ETH_LINK_MODES_INDEX_SPEED] == 0)
 		return RTE_ETH_LINK_SPEED_AUTONEG;
 	RTE_BUILD_BUG_ON(RTE_ETH_LINK_SPEED_AUTONEG != 0);
 
-	/* duplex is LSB */
-	duplex = (speed & 1) ?
-			RTE_ETH_LINK_HALF_DUPLEX :
-			RTE_ETH_LINK_FULL_DUPLEX;
-	speed &= RTE_GENMASK32(31, 1);
-
-	return rte_eth_speed_bitflag(speed, duplex);
+	speed = link_modes[bit][RTE_ETH_LINK_MODES_INDEX_SPEED];
+	duplex = link_modes[bit][RTE_ETH_LINK_MODES_INDEX_DUPLEX];
+	lanes = link_modes[bit][RTE_ETH_LINK_MODES_INDEX_LANES];
+	return rte_eth_speed_bitflag(speed, duplex, lanes);
 }
 
 uint32_t
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 0d36b9c30f..9092ab3a9e 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -79,4 +79,8 @@ void eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid);
 int eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
 int eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
 
+/* versioned functions */
+uint32_t rte_eth_speed_bitflag_v24(uint32_t speed, int duplex);
+uint32_t rte_eth_speed_bitflag_v25(uint32_t speed, uint8_t lanes, int duplex);
+
 #endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/ethdev_trace.h b/lib/ethdev/ethdev_trace.h
index 3bec87bfdb..5547b49cab 100644
--- a/lib/ethdev/ethdev_trace.h
+++ b/lib/ethdev/ethdev_trace.h
@@ -183,8 +183,10 @@ RTE_TRACE_POINT(
 
 RTE_TRACE_POINT(
 	rte_eth_trace_speed_bitflag,
-	RTE_TRACE_POINT_ARGS(uint32_t speed, int duplex, uint32_t ret),
+	RTE_TRACE_POINT_ARGS(uint32_t speed, uint8_t lanes, int duplex,
+			     uint32_t ret),
 	rte_trace_point_emit_u32(speed);
+	rte_trace_point_emit_u8(lanes);
 	rte_trace_point_emit_int(duplex);
 	rte_trace_point_emit_u32(ret);
 )
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index f1d2586591..2c9588d0b3 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -62,3 +62,5 @@ endif
 if get_option('buildtype').contains('debug')
     cflags += ['-DRTE_FLOW_DEBUG']
 endif
+
+use_function_versioning = true
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index f1c658f49e..6571116fbf 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -26,6 +26,7 @@
 #include <rte_class.h>
 #include <rte_ether.h>
 #include <rte_telemetry.h>
+#include <rte_function_versioning.h>
 
 #include "rte_ethdev.h"
 #include "rte_ethdev_trace_fp.h"
@@ -991,63 +992,101 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
 	return ret;
 }
 
-uint32_t
-rte_eth_speed_bitflag(uint32_t speed, int duplex)
+uint32_t __vsym
+rte_eth_speed_bitflag_v25(uint32_t speed, uint8_t lanes, int duplex)
 {
-	uint32_t ret;
+	uint32_t ret = 0;
 
 	switch (speed) {
 	case RTE_ETH_SPEED_NUM_10M:
-		ret = duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_1)
+			ret = duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
 		break;
 	case RTE_ETH_SPEED_NUM_100M:
-		ret = duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_1)
+			ret = duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
 		break;
 	case RTE_ETH_SPEED_NUM_1G:
-		ret = RTE_ETH_LINK_SPEED_1G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_1G;
 		break;
 	case RTE_ETH_SPEED_NUM_2_5G:
-		ret = RTE_ETH_LINK_SPEED_2_5G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_2_5G;
 		break;
 	case RTE_ETH_SPEED_NUM_5G:
-		ret = RTE_ETH_LINK_SPEED_5G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_5G;
 		break;
 	case RTE_ETH_SPEED_NUM_10G:
-		ret = RTE_ETH_LINK_SPEED_10G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_10G;
+		else if (lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_10G_4LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_20G:
-		ret = RTE_ETH_LINK_SPEED_20G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_2)
+			ret = RTE_ETH_LINK_SPEED_20G_2LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_25G:
-		ret = RTE_ETH_LINK_SPEED_25G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_25G;
 		break;
 	case RTE_ETH_SPEED_NUM_40G:
-		ret = RTE_ETH_LINK_SPEED_40G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_40G_4LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_50G:
-		ret = RTE_ETH_LINK_SPEED_50G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_50G;
+		else if (lanes == RTE_ETH_LANES_2)
+			ret = RTE_ETH_LINK_SPEED_50G_2LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_56G:
-		ret = RTE_ETH_LINK_SPEED_56G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_56G_4LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_100G:
-		ret = RTE_ETH_LINK_SPEED_100G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_1)
+			ret = RTE_ETH_LINK_SPEED_100G;
+		else if (lanes == RTE_ETH_LANES_2)
+			ret = RTE_ETH_LINK_SPEED_100G_2LANES;
+		else if (lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_100G_4LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_200G:
-		ret = RTE_ETH_LINK_SPEED_200G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_200G_4LANES;
+		else if (lanes == RTE_ETH_LANES_2)
+			ret = RTE_ETH_LINK_SPEED_200G_2LANES;
 		break;
 	case RTE_ETH_SPEED_NUM_400G:
-		ret = RTE_ETH_LINK_SPEED_400G;
+		if (lanes == RTE_ETH_LANES_UNKNOWN || lanes == RTE_ETH_LANES_4)
+			ret = RTE_ETH_LINK_SPEED_400G_4LANES;
+		else if (lanes == RTE_ETH_LANES_8)
+			ret = RTE_ETH_LINK_SPEED_400G_8LANES;
 		break;
 	default:
 		ret = 0;
 	}
 
-	rte_eth_trace_speed_bitflag(speed, duplex, ret);
+	rte_eth_trace_speed_bitflag(speed, lanes, duplex, ret);
 
 	return ret;
 }
 
+uint32_t __vsym
+rte_eth_speed_bitflag_v24(uint32_t speed, int duplex)
+{
+	return rte_eth_speed_bitflag_v25(speed, RTE_ETH_LANES_UNKNOWN, duplex);
+}
+
+/* mark the v24 function as the older version, and v25 as the default version */
+VERSION_SYMBOL(rte_eth_speed_bitflag, _v24, 24);
+BIND_DEFAULT_SYMBOL(rte_eth_speed_bitflag, _v25, 25);
+MAP_STATIC_SYMBOL(uint32_t rte_eth_speed_bitflag(uint32_t speed, uint8_t lanes, int duplex),
+		  rte_eth_speed_bitflag_v25);
+
 const char *
 rte_eth_dev_rx_offload_name(uint64_t offload)
 {
@@ -3110,13 +3149,21 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 
 	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		ret = snprintf(str, len, "Link down");
-	else
+	else if (eth_link->link_lanes == RTE_ETH_LANES_UNKNOWN)
 		ret = snprintf(str, len, "Link up at %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
 			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
 			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 			"Autoneg" : "Fixed");
+	else
+		ret = snprintf(str, len, "Link up at %s %u lanes %s %s",
+			rte_eth_link_speed_to_str(eth_link->link_speed),
+			eth_link->link_lanes,
+			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
+			"FDX" : "HDX",
+			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
+			"Autoneg" : "Fixed");
 
 	rte_eth_trace_link_to_str(len, eth_link, str, ret);
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 147257d6a2..123b771046 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -288,24 +288,40 @@ struct rte_eth_stats {
 /**@{@name Link speed capabilities
  * Device supported speeds bitmap flags
  */
-#define RTE_ETH_LINK_SPEED_AUTONEG 0             /**< Autonegotiate (all speeds) */
-#define RTE_ETH_LINK_SPEED_FIXED   RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
-#define RTE_ETH_LINK_SPEED_10M_HD  RTE_BIT32(1)  /**<  10 Mbps half-duplex */
-#define RTE_ETH_LINK_SPEED_10M     RTE_BIT32(2)  /**<  10 Mbps full-duplex */
-#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3)  /**< 100 Mbps half-duplex */
-#define RTE_ETH_LINK_SPEED_100M    RTE_BIT32(4)  /**< 100 Mbps full-duplex */
-#define RTE_ETH_LINK_SPEED_1G      RTE_BIT32(5)  /**<   1 Gbps */
-#define RTE_ETH_LINK_SPEED_2_5G    RTE_BIT32(6)  /**< 2.5 Gbps */
-#define RTE_ETH_LINK_SPEED_5G      RTE_BIT32(7)  /**<   5 Gbps */
-#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**<  10 Gbps */
-#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**<  20 Gbps */
-#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**<  25 Gbps */
-#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**<  40 Gbps */
-#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**<  50 Gbps */
-#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**<  56 Gbps */
-#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
-#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
-#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG        0             /**< Autonegotiate (all speeds) */
+#define RTE_ETH_LINK_SPEED_FIXED          RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
+#define RTE_ETH_LINK_SPEED_10M_HD         RTE_BIT32(1)  /**<  10 Mbps half-duplex */
+#define RTE_ETH_LINK_SPEED_10M            RTE_BIT32(2)  /**<  10 Mbps full-duplex */
+#define RTE_ETH_LINK_SPEED_100M_HD        RTE_BIT32(3)  /**< 100 Mbps half-duplex */
+#define RTE_ETH_LINK_SPEED_100M           RTE_BIT32(4)  /**< 100 Mbps full-duplex */
+#define RTE_ETH_LINK_SPEED_1G             RTE_BIT32(5)  /**<   1 Gbps */
+#define RTE_ETH_LINK_SPEED_2_5G           RTE_BIT32(6)  /**< 2.5 Gbps */
+#define RTE_ETH_LINK_SPEED_5G             RTE_BIT32(7)  /**<   5 Gbps */
+#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**<  10 Gbps */
+#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**<  20 Gbps 2 lanes */
+#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**<  25 Gbps */
+#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**<  40 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**<  50 Gbps */
+#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**<  56 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
+#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17) /**<  10 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**<  50 Gbps 2 lanes */
+#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
+#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4 lanes */
+#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2 lanes */
+#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8 lanes */
+/**@}*/
+
+/**@{@name Link speed capabilities
+ * Default lanes, used for compatibility with earlier versions
+ */
+#define RTE_ETH_LINK_SPEED_20G_2LANES	RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_40G_4LANES	RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_56G_4LANES	RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_200G_4LANES	RTE_ETH_LINK_SPEED_200G
+#define RTE_ETH_LINK_SPEED_400G_4LANES	RTE_ETH_LINK_SPEED_400G
 /**@}*/
 
 /**@{@name Link speed
@@ -329,6 +345,16 @@ struct rte_eth_stats {
 #define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
 /**@}*/
 
+/**@{@name Link lane number
+ * Ethernet lane number
+ */
+#define RTE_ETH_LANES_UNKNOWN    0 /**< Unknown */
+#define RTE_ETH_LANES_1          1 /**< 1 lane */
+#define RTE_ETH_LANES_2          2 /**< 2 lanes */
+#define RTE_ETH_LANES_4          4 /**< 4 lanes */
+#define RTE_ETH_LANES_8          8 /**< 8 lanes */
+/**@}*/
+
 /**
  * A structure used to retrieve link-level information of an Ethernet port.
  */
@@ -338,6 +364,7 @@ struct __rte_aligned(8) rte_eth_link { /**< aligned for atomic64 read/write */
 	uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
 	uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
 	uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
+	uint16_t link_lanes   : 4;  /**< RTE_ETH_LANES_ */
 };
 
 /**@{@name Link negotiation
@@ -1641,6 +1668,12 @@ struct rte_eth_conf {
 #define RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP         RTE_BIT64(3)
 /** Device supports keeping shared flow objects across restart. */
 #define RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP RTE_BIT64(4)
+/**
+ * Device supports setting lanes. When the device does not support it
+ * and a speed supports different numbers of lanes, the application
+ * cannot know which number of lanes is used by the device.
+ */
+#define RTE_ETH_DEV_CAPA_SETTING_LANES RTE_BIT64(5)
 /**@}*/
 
 /*
@@ -2301,12 +2334,16 @@ uint16_t rte_eth_dev_count_total(void);
  *
  * @param speed
  *   Numerical speed value in Mbps
+ * @param lanes
+ *   Number of lanes (RTE_ETH_LANES_*).
+ *   RTE_ETH_LANES_UNKNOWN must be used when the device does not support
+ *   setting lanes.
  * @param duplex
  *   RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
  * @return
  *   0 if the speed cannot be mapped
  */
-uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
+uint32_t rte_eth_speed_bitflag(uint32_t speed, uint8_t lanes, int duplex);
 
 /**
  * Get RTE_ETH_RX_OFFLOAD_* flag name.
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 79f6f5293b..9fa2439976 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -169,6 +169,12 @@ DPDK_24 {
 	local: *;
 };
 
+DPDK_25 {
+	global:
+
+	rte_eth_speed_bitflag;
+} DPDK_24;
+
 EXPERIMENTAL {
 	global:
 
-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v2 2/6] test: updated UT for setting lanes
  2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
  2024-03-22  7:09     ` [PATCH v2 1/6] ethdev: " Dengdui Huang
@ 2024-03-22  7:09     ` Dengdui Huang
  2024-03-22  7:09     ` [PATCH v2 3/6] ethdev: add function to parse link mode info Dengdui Huang
                       ` (4 subsequent siblings)
  6 siblings, 0 replies; 51+ messages in thread
From: Dengdui Huang @ 2024-03-22  7:09 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	ajit.khaparde, liuyonglong, fengchengwen, haijie1, lihuisong

The lane number is added to the ethdev link test cases.

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
 app/test/test_ethdev_link.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index f063a5fe26..1c31bc0448 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -17,14 +17,15 @@ test_link_status_up_default(void)
 		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_lanes = RTE_ETH_LANES_1
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
 	printf("Default link up #1: %s\n", text);
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps 1 lanes FDX Autoneg",
 		text, strlen(text), "Invalid default link status string");
 
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
@@ -33,7 +34,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps 1 lanes HDX Fixed",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -41,7 +42,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown 1 lanes HDX Fixed",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -49,7 +50,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None 1 lanes HDX Fixed",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -57,6 +58,7 @@ test_link_status_up_default(void)
 	link_status.link_speed = RTE_ETH_SPEED_NUM_400G;
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
+	link_status.link_lanes = RTE_ETH_LANES_4;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -72,7 +74,8 @@ test_link_status_down_default(void)
 		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
 		.link_status = RTE_ETH_LINK_DOWN,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_lanes = RTE_ETH_LANES_1
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -92,7 +95,8 @@ test_link_status_invalid(void)
 		.link_speed = 55555,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_lanes = RTE_ETH_LANES_UNKNOWN
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v2 3/6] ethdev: add function to parse link mode info
  2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
  2024-03-22  7:09     ` [PATCH v2 1/6] ethdev: " Dengdui Huang
  2024-03-22  7:09     ` [PATCH v2 2/6] test: updated UT for " Dengdui Huang
@ 2024-03-22  7:09     ` Dengdui Huang
  2024-03-22  7:09     ` [PATCH v2 4/6] net/hns3: use parse link mode info function Dengdui Huang
                       ` (3 subsequent siblings)
  6 siblings, 0 replies; 51+ messages in thread
From: Dengdui Huang @ 2024-03-22  7:09 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	ajit.khaparde, liuyonglong, fengchengwen, haijie1, lihuisong

Added the function rte_eth_link_mode_parse() to parse
the speed number, lanes and duplex parameters
from the link speed capabilities bitmap flags.
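
As an illustration (not part of this patch), a caller could use the new
function as follows, assuming the structure fields and return values
defined below:

	struct rte_eth_link_mode_info info;
	int ret;

	ret = rte_eth_link_mode_parse(RTE_ETH_LINK_SPEED_100G_2LANES, &info);
	if (ret == 0)
		printf("speed = %u Mbps, lanes = %u, duplex = %u\n",
		       info.speed_num, info.lanes, info.duplex);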

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
 doc/guides/rel_notes/release_24_03.rst |   3 +
 lib/ethdev/rte_ethdev.c                | 199 +++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.h                |  29 ++++
 lib/ethdev/version.map                 |   1 +
 4 files changed, 232 insertions(+)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 4621689c68..b41b0028b2 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -78,6 +78,9 @@ New Features
 
 * **Support setting lanes for ethdev.**
   * Support setting lanes by extended ``RTE_ETH_LINK_SPEED_*``.
+  * Added function ``rte_eth_link_mode_parse()`` to parse the speed number,
+    lanes and duplex parameters from the link speed capabilities bitmap flags.
 
 * **Added hash calculation of an encapsulated packet as done by the HW.**
 
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 6571116fbf..bac7c652be 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1105,6 +1105,205 @@ rte_eth_dev_rx_offload_name(uint64_t offload)
 	return name;
 }
 
+int
+rte_eth_link_mode_parse(uint32_t link_speed,
+			struct rte_eth_link_mode_info *mode_info)
+{
+	const struct {
+		uint32_t speed_capa;
+		struct rte_eth_link_mode_info mode_info;
+	} speed_mode_info_map[] = {
+		{
+			RTE_ETH_LINK_SPEED_10M_HD,
+				{
+					RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_HALF_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_10M,
+				{
+					RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100M_HD,
+				{
+					RTE_ETH_SPEED_NUM_100M,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_HALF_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100M,
+				{
+					RTE_ETH_SPEED_NUM_100M,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_1G,
+				{
+					RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_2_5G,
+				{
+					RTE_ETH_SPEED_NUM_2_5G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_5G,
+				{
+					RTE_ETH_SPEED_NUM_5G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_10G,
+				{
+					RTE_ETH_SPEED_NUM_10G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_20G_2LANES,
+				{
+					RTE_ETH_SPEED_NUM_20G,
+					RTE_ETH_LANES_2,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_25G,
+				{
+					RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_40G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_40G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_50G,
+				{
+					RTE_ETH_SPEED_NUM_50G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_56G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_56G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100G,
+				{
+					RTE_ETH_SPEED_NUM_100G,
+					RTE_ETH_LANES_1,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_200G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_200G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_400G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_400G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_10G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_10G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_50G_2LANES,
+				{
+					RTE_ETH_SPEED_NUM_50G,
+					RTE_ETH_LANES_2,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100G_2LANES,
+				{
+					RTE_ETH_SPEED_NUM_100G,
+					RTE_ETH_LANES_2,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_100G_4LANES,
+				{
+					RTE_ETH_SPEED_NUM_100G,
+					RTE_ETH_LANES_4,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_200G_2LANES,
+				{
+					RTE_ETH_SPEED_NUM_200G,
+					RTE_ETH_LANES_2,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		},
+		{
+			RTE_ETH_LINK_SPEED_400G_8LANES,
+				{
+					RTE_ETH_SPEED_NUM_400G,
+					RTE_ETH_LANES_8,
+					RTE_ETH_LINK_FULL_DUPLEX
+				}
+		}
+	};
+	uint32_t i;
+
+	for (i = 0; i < RTE_DIM(speed_mode_info_map); i++) {
+		if (link_speed == speed_mode_info_map[i].speed_capa) {
+			mode_info->speed_num = speed_mode_info_map[i].mode_info.speed_num;
+			mode_info->lanes = speed_mode_info_map[i].mode_info.lanes;
+			mode_info->duplex = speed_mode_info_map[i].mode_info.duplex;
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
 const char *
 rte_eth_dev_tx_offload_name(uint64_t offload)
 {
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 123b771046..b7aacf6da8 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -355,6 +355,15 @@ struct rte_eth_stats {
 #define RTE_ETH_LANES_8          8 /**< 8 lanes */
 /**@}*/
 
+/**
+ * A structure used to store information of link mode.
+ */
+struct rte_eth_link_mode_info {
+	uint32_t speed_num;    /**< RTE_ETH_SPEED_NUM_ */
+	uint8_t  lanes;        /**< RTE_ETH_LANES_ */
+	uint8_t  duplex;       /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+};
+
 /**
  * A structure used to retrieve link-level information of an Ethernet port.
  */
@@ -2345,6 +2354,26 @@ uint16_t rte_eth_dev_count_total(void);
  */
 uint32_t rte_eth_speed_bitflag(uint32_t speed, uint8_t lanes, int duplex);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Parse the speed number, lanes and duplex parameters from the Link speed
+ * capabilities bitmap flags.
+ *
+ * @param link_speed
+ *   speed capabilities bitmap flag (RTE_ETH_LINK_SPEED_)
+ * @param mode_info
+ *   A pointer to a structure of type *rte_eth_link_mode_info* to be filled
+ *   with the link mode information.
+ * @return
+ *   - (0) if successful.
+ *   - (-EINVAL) if bad parameter.
+ */
+__rte_experimental
+int rte_eth_link_mode_parse(uint32_t link_speed,
+			    struct rte_eth_link_mode_info *mode_info);
+
 /**
  * Get RTE_ETH_RX_OFFLOAD_* flag name.
  *
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 9fa2439976..5726498dd1 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -326,6 +326,7 @@ EXPERIMENTAL {
 	# added in 24.03
 	__rte_eth_trace_tx_queue_count;
 	rte_eth_find_rss_algo;
+	rte_eth_link_mode_parse;
 	rte_flow_async_update_resized;
 	rte_flow_calc_encap_hash;
 	rte_flow_template_table_resizable;
-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v2 4/6] net/hns3: use parse link mode info function
  2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
                       ` (2 preceding siblings ...)
  2024-03-22  7:09     ` [PATCH v2 3/6] ethdev: add function to parse link mode info Dengdui Huang
@ 2024-03-22  7:09     ` Dengdui Huang
  2024-03-22  7:09     ` [PATCH v2 5/6] net/hns3: support setting lanes Dengdui Huang
                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 51+ messages in thread
From: Dengdui Huang @ 2024-03-22  7:09 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	ajit.khaparde, liuyonglong, fengchengwen, haijie1, lihuisong

This patch uses the framework's public function to
replace the driver's private functions.

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 51 +++++++---------------------------
 1 file changed, 10 insertions(+), 41 deletions(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index ecd3b2ef64..1b380ac75f 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -4810,45 +4810,6 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
 	return 0;
 }
 
-static uint32_t
-hns3_get_link_speed(uint32_t link_speeds)
-{
-	uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
-
-	if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
-	    link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
-		speed = RTE_ETH_SPEED_NUM_10M;
-	if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
-	    link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
-		speed = RTE_ETH_SPEED_NUM_100M;
-	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
-		speed = RTE_ETH_SPEED_NUM_1G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
-		speed = RTE_ETH_SPEED_NUM_10G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
-		speed = RTE_ETH_SPEED_NUM_25G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
-		speed = RTE_ETH_SPEED_NUM_40G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_50G)
-		speed = RTE_ETH_SPEED_NUM_50G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
-		speed = RTE_ETH_SPEED_NUM_100G;
-	if (link_speeds & RTE_ETH_LINK_SPEED_200G)
-		speed = RTE_ETH_SPEED_NUM_200G;
-
-	return speed;
-}
-
-static uint8_t
-hns3_get_link_duplex(uint32_t link_speeds)
-{
-	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
-	    (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
-		return RTE_ETH_LINK_HALF_DUPLEX;
-	else
-		return RTE_ETH_LINK_FULL_DUPLEX;
-}
-
 static int
 hns3_set_copper_port_link_speed(struct hns3_hw *hw,
 				struct hns3_set_link_speed_cfg *cfg)
@@ -4980,14 +4941,22 @@ static int
 hns3_apply_link_speed(struct hns3_hw *hw)
 {
 	struct rte_eth_conf *conf = &hw->data->dev_conf;
+	struct rte_eth_link_mode_info mode_info = {0};
 	struct hns3_set_link_speed_cfg cfg;
+	int ret;
 
 	memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
 	cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
 			RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 	if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
-		cfg.speed = hns3_get_link_speed(conf->link_speeds);
-		cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
+		ret = rte_eth_link_mode_parse(conf->link_speeds, &mode_info);
+		if (ret) {
+			hns3_err(hw, "failed to parse link mode, ret = %d", ret);
+			return ret;
+		}
+		cfg.speed = mode_info.speed_num;
+		cfg.duplex = mode_info.duplex;
 	}
 
 	return hns3_set_port_link_speed(hw, &cfg);
-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v2 5/6] net/hns3: support setting lanes
  2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
                       ` (3 preceding siblings ...)
  2024-03-22  7:09     ` [PATCH v2 4/6] net/hns3: use parse link mode info function Dengdui Huang
@ 2024-03-22  7:09     ` Dengdui Huang
  2024-03-22  7:09     ` [PATCH v2 6/6] app/testpmd: " Dengdui Huang
  2024-04-04 13:58     ` [PATCH v2 0/6] " Ferruh Yigit
  6 siblings, 0 replies; 51+ messages in thread
From: Dengdui Huang @ 2024-03-22  7:09 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	ajit.khaparde, liuyonglong, fengchengwen, haijie1, lihuisong

Some speeds can be achieved with different numbers of lanes. For example,
100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
When the two ends use different numbers of lanes, the port cannot come up.
This patch adds support for setting and reporting lanes.

In addition, when reporting FEC capability, it is incorrect to derive
speed_num from the speed capability bitmap when one speed supports
different numbers of lanes. This patch fixes that as well.
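
As a usage sketch (illustrative only; the names come from the ethdev API
extended earlier in this series), an application could request 100Gbps
over two 50Gbps lanes on a port that reports
RTE_ETH_DEV_CAPA_SETTING_LANES:

	struct rte_eth_conf conf = {0};
	uint32_t speed;

	speed = rte_eth_speed_bitflag(RTE_ETH_SPEED_NUM_100G, RTE_ETH_LANES_2,
				      RTE_ETH_LINK_FULL_DUPLEX);
	if (speed != 0) {
		conf.link_speeds = speed;
		/* then rte_eth_dev_configure() and rte_eth_dev_start() */
	}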

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
 doc/guides/rel_notes/release_24_03.rst |   2 +
 drivers/net/hns3/hns3_cmd.h            |  15 ++-
 drivers/net/hns3/hns3_common.c         |   2 +
 drivers/net/hns3/hns3_ethdev.c         | 158 ++++++++++++++++++-------
 drivers/net/hns3/hns3_ethdev.h         |   2 +
 5 files changed, 134 insertions(+), 45 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index b41b0028b2..c9b8740323 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -193,6 +193,8 @@ New Features
   * Added power-saving during polling within the ``rte_event_dequeue_burst()`` API.
   * Added support for DMA adapter.
 
+* **Added setting lanes for hns3 PF driver.**
+  * This feature adds support for setting and reporting lanes.
 
 Removed Items
 -------------
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
index 79a8c1edad..31ff7b35d8 100644
--- a/drivers/net/hns3/hns3_cmd.h
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -753,7 +753,9 @@ struct hns3_config_mac_mode_cmd {
 struct hns3_config_mac_speed_dup_cmd {
 	uint8_t speed_dup;
 	uint8_t mac_change_fec_en;
-	uint8_t rsv[22];
+	uint8_t rsv[4];
+	uint8_t lanes;
+	uint8_t rsv1[17];
 };
 
 #define HNS3_TQP_ENABLE_B		0
@@ -789,12 +791,15 @@ struct hns3_sfp_type {
 #define HNS3_FIBER_LINK_SPEED_1G_BIT		BIT(0)
 #define HNS3_FIBER_LINK_SPEED_10G_BIT		BIT(1)
 #define HNS3_FIBER_LINK_SPEED_25G_BIT		BIT(2)
-#define HNS3_FIBER_LINK_SPEED_50G_BIT		BIT(3)
-#define HNS3_FIBER_LINK_SPEED_100G_BIT		BIT(4)
+#define HNS3_FIBER_LINK_SPEED_50G_R2_BIT	BIT(3)
+#define HNS3_FIBER_LINK_SPEED_100G_R4_BIT	BIT(4)
 #define HNS3_FIBER_LINK_SPEED_40G_BIT		BIT(5)
 #define HNS3_FIBER_LINK_SPEED_100M_BIT		BIT(6)
 #define HNS3_FIBER_LINK_SPEED_10M_BIT		BIT(7)
-#define HNS3_FIBER_LINK_SPEED_200G_BIT		BIT(8)
+#define HNS3_FIBER_LINK_SPEED_200G_EXT_BIT	BIT(8)
+#define HNS3_FIBER_LINK_SPEED_50G_R1_BIT	BIT(9)
+#define HNS3_FIBER_LINK_SPEED_100G_R2_BIT	BIT(10)
+#define HNS3_FIBER_LINK_SPEED_200G_R4_BIT	BIT(11)
 
 #define HNS3_FIBER_FEC_AUTO_BIT		BIT(0)
 #define HNS3_FIBER_FEC_BASER_BIT	BIT(1)
@@ -823,7 +828,7 @@ struct hns3_sfp_info_cmd {
 	uint32_t supported_speed; /* speed supported by current media */
 	uint32_t module_type;
 	uint8_t fec_ability; /* supported fec modes, see HNS3_FIBER_FEC_XXX_BIT */
-	uint8_t rsv0;
+	uint8_t lanes;
 	uint8_t pause_status;
 	uint8_t rsv1[5];
 };
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c
index 28c26b049c..b6db012993 100644
--- a/drivers/net/hns3/hns3_common.c
+++ b/drivers/net/hns3/hns3_common.c
@@ -93,6 +93,8 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 
 	info->dev_capa = RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP |
 			 RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP;
+	if (!hns->is_vf)
+		info->dev_capa |= RTE_ETH_DEV_CAPA_SETTING_LANES;
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa |= RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				  RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 1b380ac75f..7f36c193a6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -63,12 +63,12 @@ struct hns3_intr_state {
 	uint32_t hw_err_state;
 };
 
-#define HNS3_SPEEDS_SUPP_FEC (RTE_ETH_LINK_SPEED_10G | \
-			      RTE_ETH_LINK_SPEED_25G | \
-			      RTE_ETH_LINK_SPEED_40G | \
-			      RTE_ETH_LINK_SPEED_50G | \
-			      RTE_ETH_LINK_SPEED_100G | \
-			      RTE_ETH_LINK_SPEED_200G)
+#define HNS3_SPEED_NUM_10G_BIT	RTE_BIT32(1)
+#define HNS3_SPEED_NUM_25G_BIT	RTE_BIT32(2)
+#define HNS3_SPEED_NUM_40G_BIT	RTE_BIT32(3)
+#define HNS3_SPEED_NUM_50G_BIT	RTE_BIT32(4)
+#define HNS3_SPEED_NUM_100G_BIT	RTE_BIT32(5)
+#define HNS3_SPEED_NUM_200G_BIT	RTE_BIT32(6)
 
 static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
 	{ RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
@@ -2234,13 +2234,17 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
 		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
-		speed_capa |= RTE_ETH_LINK_SPEED_40G;
-	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
+		speed_capa |= RTE_ETH_LINK_SPEED_40G_4LANES;
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_R1_BIT)
 		speed_capa |= RTE_ETH_LINK_SPEED_50G;
-	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
-		speed_capa |= RTE_ETH_LINK_SPEED_100G;
-	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
-		speed_capa |= RTE_ETH_LINK_SPEED_200G;
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_R2_BIT)
+		speed_capa |= RTE_ETH_LINK_SPEED_50G_2LANES;
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_R4_BIT)
+		speed_capa |= RTE_ETH_LINK_SPEED_100G_4LANES;
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_R2_BIT)
+		speed_capa |= RTE_ETH_LINK_SPEED_100G_2LANES;
+	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_R4_BIT)
+		speed_capa |= RTE_ETH_LINK_SPEED_200G_4LANES;
 
 	return speed_capa;
 }
@@ -2308,6 +2312,7 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
 	if (!mac->link_status)
 		new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 
+	new_link->link_lanes = mac->link_lanes;
 	new_link->link_duplex = mac->link_duplex;
 	new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link->link_autoneg = mac->link_autoneg;
@@ -2934,7 +2939,8 @@ hns3_map_tqp(struct hns3_hw *hw)
 }
 
 static int
-hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
+hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t lanes,
+			  uint8_t duplex)
 {
 	struct hns3_config_mac_speed_dup_cmd *req;
 	struct hns3_cmd_desc desc;
@@ -2989,6 +2995,7 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 	}
 
 	hns3_set_bit(req->mac_change_fec_en, HNS3_CFG_MAC_SPEED_CHANGE_EN_B, 1);
+	req->lanes = lanes;
 
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret)
@@ -3643,7 +3650,10 @@ hns3_mac_init(struct hns3_hw *hw)
 
 	pf->support_sfp_query = true;
 	mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
+	/* If lanes is 0, the firmware selects the default number of lanes. */
+	mac->link_lanes = RTE_ETH_LANES_UNKNOWN;
+	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_lanes,
+					mac->link_duplex);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
 		return ret;
@@ -4052,6 +4062,7 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 		return 0;
 
 	mac_info->link_speed = rte_le_to_cpu_32(resp->sfp_speed);
+	mac_info->link_lanes = resp->lanes;
 	/*
 	 * if resp->supported_speed is 0, it means it's an old version
 	 * firmware, do not update these params.
@@ -4088,16 +4099,18 @@ hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
 }
 
 static int
-hns3_cfg_mac_speed_dup(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
+hns3_cfg_mac_speed_dup(struct hns3_hw *hw, uint32_t speed, uint8_t lanes,
+		       uint8_t duplex)
 {
 	struct hns3_mac *mac = &hw->mac;
 	int ret;
 
 	duplex = hns3_check_speed_dup(duplex, speed);
-	if (mac->link_speed == speed && mac->link_duplex == duplex)
+	if (mac->link_speed == speed && mac->link_lanes == lanes &&
+	    mac->link_duplex == duplex)
 		return 0;
 
-	ret = hns3_cfg_mac_speed_dup_hw(hw, speed, duplex);
+	ret = hns3_cfg_mac_speed_dup_hw(hw, speed, lanes, duplex);
 	if (ret)
 		return ret;
 
@@ -4106,6 +4119,7 @@ hns3_cfg_mac_speed_dup(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 		return ret;
 
 	mac->link_speed = speed;
+	mac->link_lanes = lanes;
 	mac->link_duplex = duplex;
 
 	return 0;
@@ -4150,6 +4164,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 		}
 
 		mac->link_speed = mac_info.link_speed;
+		mac->link_lanes = mac_info.link_lanes;
 		mac->supported_speed = mac_info.supported_speed;
 		mac->support_autoneg = mac_info.support_autoneg;
 		mac->link_autoneg = mac_info.link_autoneg;
@@ -4161,7 +4176,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 	}
 
 	/* Config full duplex for SFP */
-	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
+	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed, mac_info.link_lanes,
 				      RTE_ETH_LINK_FULL_DUPLEX);
 }
 
@@ -4512,11 +4527,11 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
 	case RTE_ETH_SPEED_NUM_40G:
 		return HNS3_FIBER_LINK_SPEED_40G_BIT;
 	case RTE_ETH_SPEED_NUM_50G:
-		return HNS3_FIBER_LINK_SPEED_50G_BIT;
+		return HNS3_FIBER_LINK_SPEED_50G_R2_BIT;
 	case RTE_ETH_SPEED_NUM_100G:
-		return HNS3_FIBER_LINK_SPEED_100G_BIT;
+		return HNS3_FIBER_LINK_SPEED_100G_R4_BIT;
 	case RTE_ETH_SPEED_NUM_200G:
-		return HNS3_FIBER_LINK_SPEED_200G_BIT;
+		return HNS3_FIBER_LINK_SPEED_200G_R4_BIT;
 	default:
 		hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
 		return 0;
@@ -4769,17 +4784,23 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
 	case RTE_ETH_LINK_SPEED_25G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
 		break;
-	case RTE_ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G_4LANES:
 		speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
 		break;
 	case RTE_ETH_LINK_SPEED_50G:
-		speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
+		speed_bit = HNS3_FIBER_LINK_SPEED_50G_R1_BIT;
 		break;
-	case RTE_ETH_LINK_SPEED_100G:
-		speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
+	case RTE_ETH_LINK_SPEED_50G_2LANES:
+		speed_bit = HNS3_FIBER_LINK_SPEED_50G_R2_BIT;
 		break;
-	case RTE_ETH_LINK_SPEED_200G:
-		speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
+	case RTE_ETH_LINK_SPEED_100G_4LANES:
+		speed_bit = HNS3_FIBER_LINK_SPEED_100G_R4_BIT;
+		break;
+	case RTE_ETH_LINK_SPEED_100G_2LANES:
+		speed_bit = HNS3_FIBER_LINK_SPEED_100G_R2_BIT;
+		break;
+	case RTE_ETH_LINK_SPEED_200G_4LANES:
+		speed_bit = HNS3_FIBER_LINK_SPEED_200G_R4_BIT;
 		break;
 	default:
 		speed_bit = 0;
@@ -4900,7 +4921,7 @@ hns3_set_fiber_port_link_speed(struct hns3_hw *hw,
 		return 0;
 	}
 
-	return hns3_cfg_mac_speed_dup(hw, cfg->speed, cfg->duplex);
+	return hns3_cfg_mac_speed_dup(hw, cfg->speed, cfg->lanes, cfg->duplex);
 }
 
 const char *
@@ -4954,7 +4975,7 @@ hns3_apply_link_speed(struct hns3_hw *hw)
 			hns3_err(hw, "failed to parse link mode, ret = %d", ret);
 			return ret;
 		}
+		cfg.lanes = mode_info.lanes;
 		cfg.speed = mode_info.speed_num;
 		cfg.duplex = mode_info.duplex;
 	}
@@ -5927,20 +5948,49 @@ hns3_reset_service(void *param)
 		hns3_msix_process(hns, reset_level);
 }
 
+static uint32_t
+hns3_speed_num_capa_bit(uint32_t speed_num)
+{
+	uint32_t speed_bit;
+
+	switch (speed_num) {
+	case RTE_ETH_SPEED_NUM_10G:
+		speed_bit = HNS3_SPEED_NUM_10G_BIT;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		speed_bit = HNS3_SPEED_NUM_25G_BIT;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		speed_bit = HNS3_SPEED_NUM_40G_BIT;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		speed_bit = HNS3_SPEED_NUM_50G_BIT;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		speed_bit = HNS3_SPEED_NUM_100G_BIT;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		speed_bit = HNS3_SPEED_NUM_200G_BIT;
+		break;
+	default:
+		speed_bit = 0;
+		break;
+	}
+
+	return speed_bit;
+}
+
 static uint32_t
 hns3_get_speed_fec_capa(struct rte_eth_fec_capa *speed_fec_capa,
-			uint32_t speed_capa)
+			uint32_t speed_num_capa)
 {
 	uint32_t speed_bit;
 	uint32_t num = 0;
 	uint32_t i;
 
 	for (i = 0; i < RTE_DIM(speed_fec_capa_tbl); i++) {
-		speed_bit =
-			rte_eth_speed_bitflag(speed_fec_capa_tbl[i].speed,
-					      RTE_ETH_LANES_UNKNOWN,
-					      RTE_ETH_LINK_FULL_DUPLEX);
-		if ((speed_capa & speed_bit) == 0)
+		speed_bit = hns3_speed_num_capa_bit(speed_fec_capa_tbl[i].speed);
+		if ((speed_num_capa & speed_bit) == 0)
 			continue;
 
 		speed_fec_capa[num].speed = speed_fec_capa_tbl[i].speed;
@@ -5951,6 +6001,34 @@ hns3_get_speed_fec_capa(struct rte_eth_fec_capa *speed_fec_capa,
 	return num;
 }
 
+static uint32_t
+hns3_get_speed_num_capa(struct hns3_hw *hw)
+{
+	uint32_t speed_num_capa = 0;
+	uint32_t speed_capa;
+
+	speed_capa = hns3_get_speed_capa(hw);
+
+	if (speed_capa & RTE_ETH_LINK_SPEED_10G)
+		speed_num_capa |= HNS3_SPEED_NUM_10G_BIT;
+	if (speed_capa & RTE_ETH_LINK_SPEED_25G)
+		speed_num_capa |= HNS3_SPEED_NUM_25G_BIT;
+	if (speed_capa & RTE_ETH_LINK_SPEED_40G_4LANES)
+		speed_num_capa |= HNS3_SPEED_NUM_40G_BIT;
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G)
+		speed_num_capa |= HNS3_SPEED_NUM_50G_BIT;
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G_2LANES)
+		speed_num_capa |= HNS3_SPEED_NUM_50G_BIT;
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G_4LANES)
+		speed_num_capa |= HNS3_SPEED_NUM_100G_BIT;
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G_2LANES)
+		speed_num_capa |= HNS3_SPEED_NUM_100G_BIT;
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G_4LANES)
+		speed_num_capa |= HNS3_SPEED_NUM_200G_BIT;
+
+	return speed_num_capa;
+}
+
 static int
 hns3_fec_get_capability(struct rte_eth_dev *dev,
 			struct rte_eth_fec_capa *speed_fec_capa,
@@ -5958,11 +6036,11 @@ hns3_fec_get_capability(struct rte_eth_dev *dev,
 {
 	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	unsigned int speed_num;
-	uint32_t speed_capa;
+	uint32_t speed_num_capa;
 
-	speed_capa = hns3_get_speed_capa(hw);
-	/* speed_num counts number of speed capabilities */
-	speed_num = rte_popcount32(speed_capa & HNS3_SPEEDS_SUPP_FEC);
+	speed_num_capa = hns3_get_speed_num_capa(hw);
+	/* speed_num counts the number of supported speed values */
+	speed_num = rte_popcount32(speed_num_capa);
 	if (speed_num == 0)
 		return -ENOTSUP;
 
@@ -5975,7 +6053,7 @@ hns3_fec_get_capability(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	return hns3_get_speed_fec_capa(speed_fec_capa, speed_capa);
+	return hns3_get_speed_fec_capa(speed_fec_capa, speed_num_capa);
 }
 
 
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 12d8299def..070fc76420 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -168,6 +168,7 @@ struct hns3_set_link_speed_cfg {
 	uint32_t speed;
 	uint8_t duplex  : 1;
 	uint8_t autoneg : 1;
+	uint8_t lanes   : 4;
 };
 
 /* mac media type */
@@ -190,6 +191,7 @@ struct hns3_mac {
 	uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
 	uint8_t link_status  : 1; /* RTE_ETH_LINK_[DOWN/UP] */
 	uint32_t link_speed;      /* RTE_ETH_SPEED_NUM_ */
+	uint8_t link_lanes;       /* RTE_ETH_LANES_ */
 	/*
 	 * Some firmware versions support only the SFP speed query. In addition
 	 * to the SFP speed query, some firmware supports the query of the speed
-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v2 6/6] app/testpmd: support setting lanes
  2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
                       ` (4 preceding siblings ...)
  2024-03-22  7:09     ` [PATCH v2 5/6] net/hns3: support setting lanes Dengdui Huang
@ 2024-03-22  7:09     ` Dengdui Huang
  2024-04-04 13:58     ` [PATCH v2 0/6] " Ferruh Yigit
  6 siblings, 0 replies; 51+ messages in thread
From: Dengdui Huang @ 2024-03-22  7:09 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, aman.deep.singh, yuying.zhang, thomas,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	ajit.khaparde, liuyonglong, fengchengwen, haijie1, lihuisong

Add a command to configure speed together with lanes, and
print the lane number in the show port info command.
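
For example (an illustrative session; the port number is arbitrary):

	testpmd> port stop 0
	testpmd> port config 0 speed 100000 lanes 2 duplex full
	testpmd> port start 0
	testpmd> show port info 0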

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
---
 app/test-pmd/cmdline.c                      | 199 +++++++++++++-------
 app/test-pmd/config.c                       |  78 +++++---
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   9 +
 3 files changed, 194 insertions(+), 92 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f521a1fe9e..413bf735a2 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -696,6 +696,11 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" duplex (half|full|auto)\n"
 			"    Set speed and duplex for all ports or port_id\n\n"
 
+			"port config (port_id|all)"
+			" speed (10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto)"
+			" lanes (lane_num) duplex (half|full|auto)\n"
+			"    Set speed and duplex for all ports or port_id\n\n"
+
 			"port config (port_id|all) loopback (mode)\n"
 			"    Set loopback mode for all ports or port_id\n\n"
 
@@ -1357,14 +1362,19 @@ struct cmd_config_speed_all {
 	cmdline_fixed_string_t all;
 	cmdline_fixed_string_t item1;
 	cmdline_fixed_string_t item2;
+	cmdline_fixed_string_t item3;
 	cmdline_fixed_string_t value1;
-	cmdline_fixed_string_t value2;
+	uint8_t value2;
+	cmdline_fixed_string_t value3;
 };
 
 static int
-parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
+parse_and_check_speed_duplex(char *speedstr, uint8_t lanes, char *duplexstr,
+			     uint32_t *speed)
 {
 
+	uint32_t speed_num;
+	char *endptr;
 	int duplex;
 
 	if (!strcmp(duplexstr, "half")) {
@@ -1378,47 +1388,22 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
 		return -1;
 	}
 
-	if (!strcmp(speedstr, "10")) {
-		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
-				RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
-	} else if (!strcmp(speedstr, "100")) {
-		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
-				RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
-	} else {
-		if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
-			fprintf(stderr, "Invalid speed/duplex parameters\n");
-			return -1;
-		}
-		if (!strcmp(speedstr, "1000")) {
-			*speed = RTE_ETH_LINK_SPEED_1G;
-		} else if (!strcmp(speedstr, "2500")) {
-			*speed = RTE_ETH_LINK_SPEED_2_5G;
-		} else if (!strcmp(speedstr, "5000")) {
-			*speed = RTE_ETH_LINK_SPEED_5G;
-		} else if (!strcmp(speedstr, "10000")) {
-			*speed = RTE_ETH_LINK_SPEED_10G;
-		} else if (!strcmp(speedstr, "25000")) {
-			*speed = RTE_ETH_LINK_SPEED_25G;
-		} else if (!strcmp(speedstr, "40000")) {
-			*speed = RTE_ETH_LINK_SPEED_40G;
-		} else if (!strcmp(speedstr, "50000")) {
-			*speed = RTE_ETH_LINK_SPEED_50G;
-		} else if (!strcmp(speedstr, "100000")) {
-			*speed = RTE_ETH_LINK_SPEED_100G;
-		} else if (!strcmp(speedstr, "200000")) {
-			*speed = RTE_ETH_LINK_SPEED_200G;
-		} else if (!strcmp(speedstr, "400000")) {
-			*speed = RTE_ETH_LINK_SPEED_400G;
-		} else if (!strcmp(speedstr, "auto")) {
-			*speed = RTE_ETH_LINK_SPEED_AUTONEG;
-		} else {
-			fprintf(stderr, "Unknown speed parameter\n");
-			return -1;
-		}
+	if (!strcmp(speedstr, "auto")) {
+		*speed = RTE_ETH_LINK_SPEED_AUTONEG;
+		return 0;
 	}
 
-	if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
-		*speed |= RTE_ETH_LINK_SPEED_FIXED;
+	speed_num = strtol(speedstr, &endptr, 10);
+	if (*endptr != '\0') {
+		fprintf(stderr, "Unknown speed parameter\n");
+		return -1;
+	}
+
+	*speed = rte_eth_speed_bitflag(speed_num, lanes, duplex);
+	if (*speed == 0) {
+		fprintf(stderr, "param error\n");
+		return -1;
+	}
 
 	return 0;
 }
@@ -1426,22 +1411,37 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
 static void
 cmd_config_speed_all_parsed(void *parsed_result,
 			__rte_unused struct cmdline *cl,
-			__rte_unused void *data)
+			void *data)
 {
 	struct cmd_config_speed_all *res = parsed_result;
+	struct rte_eth_dev_info dev_info;
 	uint32_t link_speed;
 	portid_t pid;
+	int ret;
 
 	if (!all_ports_stopped()) {
 		fprintf(stderr, "Please stop all ports first\n");
 		return;
 	}
 
-	if (parse_and_check_speed_duplex(res->value1, res->value2,
-			&link_speed) < 0)
+	if (data)
+		ret = parse_and_check_speed_duplex(res->value1, res->value2,
+						   res->value3, &link_speed);
+	else
+		ret = parse_and_check_speed_duplex(res->value1, 0, res->value3,
+						   &link_speed);
+	if (ret < 0)
 		return;
 
 	RTE_ETH_FOREACH_DEV(pid) {
+		ret = eth_dev_info_get_print_err(pid, &dev_info);
+		if (ret != 0)
+			return;
+		if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0 && data) {
+			fprintf(stderr, "The port (%d) does not support setting lane\n",
+				pid);
+			return;
+		}
 		ports[pid].dev_conf.link_speeds = link_speed;
 	}
 
@@ -1460,26 +1460,51 @@ static cmdline_parse_token_string_t cmd_config_speed_all_item1 =
 static cmdline_parse_token_string_t cmd_config_speed_all_value1 =
 	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, value1,
 				"10#100#1000#2500#5000#10000#25000#40000#50000#100000#200000#400000#auto");
-static cmdline_parse_token_string_t cmd_config_speed_all_item2 =
-	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, item2, "duplex");
-static cmdline_parse_token_string_t cmd_config_speed_all_value2 =
-	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, value2,
+static cmdline_parse_token_string_t cmd_config_speed_all_item2 =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, item2, "lanes");
+static cmdline_parse_token_num_t cmd_config_speed_all_value2 =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_speed_all, value2,
+			      RTE_UINT8);
+static cmdline_parse_token_string_t cmd_config_speed_all_item3 =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, item3, "duplex");
+static cmdline_parse_token_string_t cmd_config_speed_all_value3 =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_all, value3,
 						"half#full#auto");
 
-static cmdline_parse_inst_t cmd_config_speed_all = {
+static cmdline_parse_inst_t cmd_config_speed_lanes_all = {
 	.f = cmd_config_speed_all_parsed,
-	.data = NULL,
+	.data = (void *)1,
 	.help_str = "port config all speed "
-		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto duplex "
-							"half|full|auto",
+		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
+		" lanes <lanes> duplex half|full|auto",
 	.tokens = {
 		(void *)&cmd_config_speed_all_port,
 		(void *)&cmd_config_speed_all_keyword,
 		(void *)&cmd_config_speed_all_all,
 		(void *)&cmd_config_speed_all_item1,
 		(void *)&cmd_config_speed_all_value1,
-		(void *)&cmd_config_speed_all_item2,
+		(void *)&cmd_config_speed_all__item2,
 		(void *)&cmd_config_speed_all_value2,
+		(void *)&cmd_config_speed_all_item3,
+		(void *)&cmd_config_speed_all_value3,
+		NULL,
+	},
+};
+
+static cmdline_parse_inst_t cmd_config_speed_all = {
+	.f = cmd_config_speed_all_parsed,
+	.data = (void *)0,
+	.help_str = "port config all speed "
+		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
+		" duplex half|full|auto",
+	.tokens = {
+		(void *)&cmd_config_speed_all_port,
+		(void *)&cmd_config_speed_all_keyword,
+		(void *)&cmd_config_speed_all_all,
+		(void *)&cmd_config_speed_all_item1,
+		(void *)&cmd_config_speed_all_value1,
+		(void *)&cmd_config_speed_all_item3,
+		(void *)&cmd_config_speed_all_value3,
 		NULL,
 	},
 };
@@ -1491,17 +1516,20 @@ struct cmd_config_speed_specific {
 	portid_t id;
 	cmdline_fixed_string_t item1;
 	cmdline_fixed_string_t item2;
+	cmdline_fixed_string_t item3;
 	cmdline_fixed_string_t value1;
-	cmdline_fixed_string_t value2;
+	uint8_t value2;
+	cmdline_fixed_string_t value3;
 };
 
 static void
 cmd_config_speed_specific_parsed(void *parsed_result,
-				__rte_unused struct cmdline *cl,
-				__rte_unused void *data)
+				__rte_unused struct cmdline *cl, void *data)
 {
 	struct cmd_config_speed_specific *res = parsed_result;
+	struct rte_eth_dev_info dev_info;
 	uint32_t link_speed;
+	int ret;
 
 	if (port_id_is_invalid(res->id, ENABLED_WARN))
 		return;
@@ -1511,8 +1539,23 @@ cmd_config_speed_specific_parsed(void *parsed_result,
 		return;
 	}
 
-	if (parse_and_check_speed_duplex(res->value1, res->value2,
-			&link_speed) < 0)
+	ret = eth_dev_info_get_print_err(res->id, &dev_info);
+	if (ret != 0)
+		return;
+
+	if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_SETTING_LANES) == 0 && data) {
+		fprintf(stderr, "The port (%d) does not support setting lanes\n",
+			res->id);
+		return;
+	}
+
+	if (data)
+		ret = parse_and_check_speed_duplex(res->value1, res->value2,
+						   res->value3, &link_speed);
+	else
+		ret = parse_and_check_speed_duplex(res->value1, 0, res->value3,
+						   &link_speed);
+	if (ret < 0)
 		return;
 
 	ports[res->id].dev_conf.link_speeds = link_speed;
@@ -1537,17 +1580,23 @@ static cmdline_parse_token_string_t cmd_config_speed_specific_value1 =
 				"10#100#1000#2500#5000#10000#25000#40000#50000#100000#200000#400000#auto");
 static cmdline_parse_token_string_t cmd_config_speed_specific_item2 =
 	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, item2,
+								"lanes");
+static cmdline_parse_token_num_t cmd_config_speed_specific_value2 =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_speed_specific, value2,
+			      RTE_UINT8);
+static cmdline_parse_token_string_t cmd_config_speed_specific_item3 =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, item3,
 								"duplex");
-static cmdline_parse_token_string_t cmd_config_speed_specific_value2 =
-	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, value2,
+static cmdline_parse_token_string_t cmd_config_speed_specific_value3 =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, value3,
 							"half#full#auto");
 
-static cmdline_parse_inst_t cmd_config_speed_specific = {
+static cmdline_parse_inst_t cmd_config_speed_lanes_specific = {
 	.f = cmd_config_speed_specific_parsed,
-	.data = NULL,
+	.data = (void *)1,
 	.help_str = "port config <port_id> speed "
-		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto duplex "
-							"half|full|auto",
+		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
+		" lanes <lane_num> duplex half|full|auto",
 	.tokens = {
 		(void *)&cmd_config_speed_specific_port,
 		(void *)&cmd_config_speed_specific_keyword,
@@ -1556,6 +1605,26 @@ static cmdline_parse_inst_t cmd_config_speed_specific = {
 		(void *)&cmd_config_speed_specific_value1,
 		(void *)&cmd_config_speed_specific_item2,
 		(void *)&cmd_config_speed_specific_value2,
+		(void *)&cmd_config_speed_specific_item3,
+		(void *)&cmd_config_speed_specific_value3,
+		NULL,
+	},
+};
+
+static cmdline_parse_inst_t cmd_config_speed_specific = {
+	.f = cmd_config_speed_specific_parsed,
+	.data = (void *)0,
+	.help_str = "port config <port_id> speed "
+		"10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto"
+		" duplex half|full|auto",
+	.tokens = {
+		(void *)&cmd_config_speed_specific_port,
+		(void *)&cmd_config_speed_specific_keyword,
+		(void *)&cmd_config_speed_specific_id,
+		(void *)&cmd_config_speed_specific_item1,
+		(void *)&cmd_config_speed_specific_value1,
+		(void *)&cmd_config_speed_specific_item3,
+		(void *)&cmd_config_speed_specific_value3,
 		NULL,
 	},
 };
@@ -13237,6 +13306,8 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_set_port_setup_on,
 	(cmdline_parse_inst_t *)&cmd_config_speed_all,
 	(cmdline_parse_inst_t *)&cmd_config_speed_specific,
+	(cmdline_parse_inst_t *)&cmd_config_speed_lanes_all,
+	(cmdline_parse_inst_t *)&cmd_config_speed_lanes_specific,
 	(cmdline_parse_inst_t *)&cmd_config_loopback_all,
 	(cmdline_parse_inst_t *)&cmd_config_loopback_specific,
 	(cmdline_parse_inst_t *)&cmd_config_rx_tx,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ba1007ace6..3f77f321a5 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -587,39 +587,51 @@ device_infos_display_speeds(uint32_t speed_capa)
 	if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
 		printf(" Autonegotiate (all speeds)");
 	if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
-		printf(" Disable autonegotiate (fixed speed)  ");
+		printf(" Disable autonegotiate (fixed speed) /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
-		printf(" 10 Mbps half-duplex  ");
+		printf(" 10Mbps_1lane_half-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_10M)
-		printf(" 10 Mbps full-duplex  ");
+		printf(" 10Mbps_1lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
-		printf(" 100 Mbps half-duplex  ");
+		printf(" 100Mbps_lane_half-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_100M)
-		printf(" 100 Mbps full-duplex  ");
+		printf(" 100Mbps_1lane_full-duplex /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_1G)
-		printf(" 1 Gbps  ");
+		printf(" 1Gbps_1lane /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
-		printf(" 2.5 Gbps  ");
+		printf(" 2.5Gbps_1lane /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_5G)
-		printf(" 5 Gbps  ");
+		printf(" 5Gbps_1lane /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_10G)
-		printf(" 10 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_20G)
-		printf(" 20 Gbps  ");
+		printf(" 10Gbps_1lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_10G_4LANES)
+		printf(" 10Gbps_4lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_20G_2LANES)
+		printf(" 20Gbps_2lane /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_25G)
-		printf(" 25 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_40G)
-		printf(" 40 Gbps  ");
+		printf(" 25Gbps_1lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_40G_4LANES)
+		printf(" 40Gbps_4lane /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_50G)
-		printf(" 50 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_56G)
-		printf(" 56 Gbps  ");
+		printf(" 50Gbps_1lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G_2LANES)
+		printf(" 50Gbps_2lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_56G_4LANES)
+		printf(" 56Gbps_4lane /");
 	if (speed_capa & RTE_ETH_LINK_SPEED_100G)
-		printf(" 100 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_200G)
-		printf(" 200 Gbps  ");
-	if (speed_capa & RTE_ETH_LINK_SPEED_400G)
-		printf(" 400 Gbps  ");
+		printf(" 100Gbps_1lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G_2LANES)
+		printf(" 100Gbps_2lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G_4LANES)
+		printf(" 100Gbps_4lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G_4LANES)
+		printf(" 200Gbps_4lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G_2LANES)
+		printf(" 200Gbps_2lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_400G_4LANES)
+		printf(" 400Gbps_4lane /");
+	if (speed_capa & RTE_ETH_LINK_SPEED_400G_8LANES)
+		printf(" 400Gbps_8lane /");
 }
 
 void
@@ -828,6 +840,10 @@ port_infos_display(portid_t port_id)
 
 	printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
 	printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
+	if (link.link_lanes == RTE_ETH_LANES_UNKNOWN)
+		printf("Link lanes: unknown\n");
+	else
+		printf("Link lanes: %u\n", link.link_lanes);
 	printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 	       ("full-duplex") : ("half-duplex"));
 	printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
@@ -962,8 +978,8 @@ port_summary_header_display(void)
 
 	port_number = rte_eth_dev_count_avail();
 	printf("Number of available ports: %i\n", port_number);
-	printf("%-4s %-17s %-12s %-14s %-8s %s\n", "Port", "MAC Address", "Name",
-			"Driver", "Status", "Link");
+	printf("%-4s %-17s %-12s %-14s %-8s %-8s %s\n", "Port", "MAC Address", "Name",
+			"Driver", "Status", "Link", "Lanes");
 }
 
 void
@@ -993,10 +1009,16 @@ port_summary_display(portid_t port_id)
 	if (ret != 0)
 		return;
 
-	printf("%-4d " RTE_ETHER_ADDR_PRT_FMT " %-12s %-14s %-8s %s\n",
-		port_id, RTE_ETHER_ADDR_BYTES(&mac_addr), name,
-		dev_info.driver_name, (link.link_status) ? ("up") : ("down"),
-		rte_eth_link_speed_to_str(link.link_speed));
+	if (link.link_lanes == RTE_ETH_LANES_UNKNOWN)
+		printf("%-4d " RTE_ETHER_ADDR_PRT_FMT " %-12s %-14s %-8s %-8s %s\n",
+			port_id, RTE_ETHER_ADDR_BYTES(&mac_addr), name,
+			dev_info.driver_name, (link.link_status) ? ("up") : ("down"),
+			rte_eth_link_speed_to_str(link.link_speed), "unknown");
+	else
+		printf("%-4d " RTE_ETHER_ADDR_PRT_FMT " %-12s %-14s %-8s %-8s %u\n",
+			port_id, RTE_ETHER_ADDR_BYTES(&mac_addr), name,
+			dev_info.driver_name, (link.link_status) ? ("up") : ("down"),
+			rte_eth_link_speed_to_str(link.link_speed), link.link_lanes);
 }
 
 void
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2fbf9220d8..118a0956bc 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -190,6 +190,7 @@ For example:
    memory allocation on the socket: 0
    Link status: up
    Link speed: 40000 Mbps
+   Link lanes: 4
    Link duplex: full-duplex
    Promiscuous mode: enabled
    Allmulticast mode: disabled
@@ -2067,6 +2068,14 @@ Set the speed and duplex mode for all ports or a specific port::
    testpmd> port config (port_id|all) speed (10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto) \
             duplex (half|full|auto)
 
+port config - speed with lanes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the speed, lanes and duplex mode for all ports or a specific port::
+
+   testpmd> port config (port_id|all) speed (10|100|1000|2500|5000|10000|25000|40000|50000|100000|200000|400000|auto) \
+            lanes (lane_num) duplex (half|full|auto)
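+
+For example, assuming the port supports setting lanes::
+
+   testpmd> port config 0 speed 100000 lanes 2 duplex full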
+
 port config - queues/descriptors
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.33.0


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-22  5:51                 ` Jerin Jacob
@ 2024-03-22 13:51                   ` Thomas Monjalon
  2024-03-25 14:04                     ` Jerin Jacob
  0 siblings, 1 reply; 51+ messages in thread
From: Thomas Monjalon @ 2024-03-22 13:51 UTC (permalink / raw)
  To: Ajit Khaparde, Jerin Jacob
  Cc: huangdengdui, Damodharam Ammepalli, Ferruh Yigit, dev,
	aman.deep.singh, yuying.zhang, andrew.rybchenko, liuyonglong,
	fengchengwen, haijie1, lihuisong

22/03/2024 06:51, Jerin Jacob:
> On Fri, Mar 22, 2024 at 10:56 AM Ajit Khaparde
> <ajit.khaparde@broadcom.com> wrote:
> >
> > On Thu, Mar 21, 2024 at 9:39 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > >
> > > On Fri, Mar 22, 2024 at 7:58 AM huangdengdui <huangdengdui@huawei.com> wrote:
> > > >
> > > >
> > > >
> > > > On 2024/3/21 16:28, Thomas Monjalon wrote:
> > > > > 21/03/2024 03:02, huangdengdui:
> > > > >>
> > > > >> On 2024/3/20 20:31, Ferruh Yigit wrote:
> > > > >>> On 3/18/2024 9:26 PM, Damodharam Ammepalli wrote:
> > > > >>>> On Mon, Mar 18, 2024 at 7:56 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > > >>>>>
> > > > >>>>> 12/03/2024 08:52, Dengdui Huang:
> > > > >>>>>> Some speeds can be achieved with different number of lanes. For example,
> > > > >>>>>> 100Gbps can be achieved using two lanes of 50Gbps or four lanes of 25Gbps.
> > > > >>>>>> When use different lanes, the port cannot be up.
> > > > >>>>>
> > > > >>>>> I'm not sure what you are referring to.
> > > > >>>>> I suppose it is not PCI lanes.
> > > > >>>>> Please could you link to an explanation of how a port is split in lanes?
> > > > >>>>> Which hardware does this?
> > > > >>>>>
> > > > >>>> This is a snapshot of 100Gb that the latest BCM576xx supports.
> > > > >>>> 100Gb (NRZ: 25G per lane, 4 lanes) link speed
> > > > >>>> 100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> > > > >>>> 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> > > > >>>>
> > > > >>>> Let the user feed in lanes=< integer value> and the NIC driver decides
> > > > >>>> the matching combination speed x lanes that works. In future if a new speed
> > > > >>>> is implemented with more than 8 lanes, there wouldn't be a need
> > > > >>>> to touch this speed command. Using separate lane command would
> > > > >>>> be a better alternative to support already shipped products and only new
> > > > >>>> drivers would consider this lanes configuration, if applicable.
> > > > >>>>
> > > > >>>
> > > > >>> As far as I understand, lane is related to the physical layer of the
> > > > >>> NIC, there are multiple copies of transmitter, receiver, modulator HW
> > > > >>> block and each set called as a 'lane' and multiple lanes work together
> > > > >>> to achieve desired speed. (please correct me if this is wrong).
> > > > >>>
> > > > >>> Why not just configuring the speed is not enough? Why user needs to know
> > > > >>> the detail and configuration of the lanes?
> > > > >>> Will it work if driver/device configure the "speed x lane" internally
> > > > >>> for the requested speed?
> > > > >>>
> > > > >>> Is there a benefit to force specific lane count for a specific speed
> > > > >>> (like power optimization, just a wild guess)?
> > > > >>>
> > > > >>>
> > > > >>> And +1 for auto-negotiation if possible.
> > > > >>
> > > > >> As you said above,,multiple lanes work together to achieve desired speed.
> > > > >> For example, the following solutions can be used to implement 100G:
> > > > >> 1. Combines four 25G lanes
> > > > >> 2. Combines two 50G lanes
> > > > >> 3. A single 100G lane
> > > > >>
> > > > >> It is assumed that two ports are interconnected and the two ports support
> > > > >> the foregoing three solutions. But, we just configured the speed to 100G and
> > > > >> one port uses four 25G lanes by default and the other port uses two 50G lanes
> > > > >> by default, the port cannot be up. In this case, we need to configure the
> > > > >> two ports to use the same solutions (for example, uses two 50G lanes)
> > > > >> so that the ports can be up.
> > > > >
> > > > > Why this config is not OK? How do we know?
> > > > > Really I have a very bad feeling about this feature.
> > > > >
> > > > >
> > > > Sorry, I don't quite understand your question.
> > > > Are you asking why the link cannot come up when one port uses four 25G lanes and the other port uses two 50G lanes?
> > > >
> > > > 100GBASE-SR2 (two 50G lanes) and 100GBASE-SR4 (four 25G lanes) have different standards at the physical layer.[1]
> > > > So it's not possible to communicate. Configuring lanes can help the driver choose the same standard.
> > >
> > > Typically, low-level drivers like FW configure this.
> > >
> > > For example, if FW configures a 100G port as 100GBASE-SR2, then two
> > > ethdevs (port 0 and port 1) will show up.
> > > Now, assume we expose this API and the end user configures port 1 as
> > > 25G lanes; if so,
> > > a) What happens to port0 and its state?
> > There should be no impact to port0.
> >
> > > b) Will port2 and port3 show up after issuing this API (as the end user
> > > configured 25Gx4 for 100G)? Will the application need to hotplug to
> > > use the new ports?
> > No. The port count does not change. Nor does the number of PCI
> > functions seen by the host. Unless designed otherwise.
> >
> > Changing the lane count does not change anything in physical terms.
> > What changes is the modulation or the signaling scheme.
> > The number of lanes which can be supported is determined by
> > the PHY itself and the cables used and needs to be negotiated appropriately
> > with the remote partner - which is just like using forced Ethernet Speed
> > instead of auto-negotiated speeds.

Thanks for the explanation Ajit.

> OK. It looks like it is platform independent then. At least for the cnxk driver, the end
> user cannot simply change the lane config parameters
> while traffic is active; also, it looks like other drivers need to do
> SerDes training with the remote partner while reconfiguring it.
> 
> At least on the cnxk platform, 25Gx4 on 100G will show up as 4 ethdev devices.

That's a strange behaviour.
Why show 4 ports which are not independent?

> Having said that, if other NICs support this feature without
> disturbing current port states, I don't have an objection to this API.




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-22  7:09     ` [PATCH v2 1/6] ethdev: " Dengdui Huang
@ 2024-03-22 13:58       ` Thomas Monjalon
  2024-03-22 15:15         ` Ajit Khaparde
  2024-03-25  6:24         ` huangdengdui
  0 siblings, 2 replies; 51+ messages in thread
From: Thomas Monjalon @ 2024-03-22 13:58 UTC (permalink / raw)
  To: Dengdui Huang
  Cc: dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	ajit.khaparde, liuyonglong, fengchengwen, haijie1, lihuisong

22/03/2024 08:09, Dengdui Huang:
> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */

I don't think it is a good idea to make this more complex.
It brings nothing as far as I can see, compared to having speed and lanes separated.
Can we have the lanes information as a separate value? No need for a bitmask.
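
For illustration, the decoupled shape could carry the lane count as a plain
value next to the speed, as in the sketch below; the struct and field names
are hypothetical, not the patch's actual changes and not existing ethdev API:

#include <stdint.h>

/* Hypothetical sketch: one plain lanes value per port instead of an
 * RTE_ETH_LINK_SPEED_* flag per speed-lane combination.
 */
struct eth_link_info {          /* illustrative, not struct rte_eth_link */
	uint32_t speed;         /* plain Mbps value, e.g. 100000 for 100G */
	uint8_t  lanes;         /* 1, 2, 4, 8, ... (0 = unknown) */
};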



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-22 13:58       ` Thomas Monjalon
@ 2024-03-22 15:15         ` Ajit Khaparde
  2024-03-22 17:32           ` Tyler Retzlaff
  2024-03-25  6:24         ` huangdengdui
  1 sibling, 1 reply; 51+ messages in thread
From: Ajit Khaparde @ 2024-03-22 15:15 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Dengdui Huang, dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	liuyonglong, fengchengwen, haijie1, lihuisong

On Fri, Mar 22, 2024 at 6:58 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 22/03/2024 08:09, Dengdui Huang:
> > -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> > -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> > -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> > -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> > -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> > -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> > -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> > -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> > -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> > +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> > +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> > +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> > +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> > +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> > +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> > +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> > +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> > +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> > +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> > +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> > +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> > +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> > +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> > +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
>
> I don't think it is a good idea to make this more complex.
> It brings nothing as far as I can see, compared to having speed and lanes separated.
> Can we have lanes information a separate value? no need for bitmask.
I agree.

>
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-22 15:15         ` Ajit Khaparde
@ 2024-03-22 17:32           ` Tyler Retzlaff
  2024-03-22 22:30             ` Damodharam Ammepalli
  0 siblings, 1 reply; 51+ messages in thread
From: Tyler Retzlaff @ 2024-03-22 17:32 UTC (permalink / raw)
  To: Ajit Khaparde
  Cc: Thomas Monjalon, Dengdui Huang, dev, ferruh.yigit,
	aman.deep.singh, yuying.zhang, andrew.rybchenko,
	damodharam.ammepalli, stephen, jerinjacobk, liuyonglong,
	fengchengwen, haijie1, lihuisong

On Fri, Mar 22, 2024 at 08:15:00AM -0700, Ajit Khaparde wrote:
> On Fri, Mar 22, 2024 at 6:58 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 22/03/2024 08:09, Dengdui Huang:
> > > -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> > > -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> > > -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> > > -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> > > -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> > > -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> > > -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> > > -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> > > -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> > > +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> > > +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> > > +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> > > +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> > > +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> > > +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> > > +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> > > +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> > > +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> > > +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> > > +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> > > +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> > > +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> > > +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> > > +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> >
> > I don't think it is a good idea to make this more complex.
> > It brings nothing as far as I can see, compared to having speed and lanes separated.
> > Can we have lanes information a separate value? no need for bitmask.
> I agree.

+1. An API design coupling the two together is definitely undesirable; it
seems like half the time you end up with a redundant RTE_ETH_LANES_UNKNOWN.

> 
> >
> >



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-22 17:32           ` Tyler Retzlaff
@ 2024-03-22 22:30             ` Damodharam Ammepalli
  0 siblings, 0 replies; 51+ messages in thread
From: Damodharam Ammepalli @ 2024-03-22 22:30 UTC (permalink / raw)
  To: Tyler Retzlaff
  Cc: Ajit Khaparde, Thomas Monjalon, Dengdui Huang, dev, ferruh.yigit,
	aman.deep.singh, yuying.zhang, andrew.rybchenko, stephen,
	jerinjacobk, liuyonglong, fengchengwen, haijie1, lihuisong

On Fri, Mar 22, 2024 at 10:32 AM Tyler Retzlaff
<roretzla@linux.microsoft.com> wrote:
>
> On Fri, Mar 22, 2024 at 08:15:00AM -0700, Ajit Khaparde wrote:
> > On Fri, Mar 22, 2024 at 6:58 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > >
> > > 22/03/2024 08:09, Dengdui Huang:
> > > > -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> > > > -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> > > > -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> > > > -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> > > > -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> > > > -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> > > > -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> > > > -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> > > > -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> > > > +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> > > > +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> > > > +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> > > > +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> > > > +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> > > > +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> > > > +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> > > > +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> > > > +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> > > > +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> > > > +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> > > > +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> > > > +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> > > > +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> > > > +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> > >
> > > I don't think it is a good idea to make this more complex.
> > > It brings nothing as far as I can see, compared to having speed and lanes separated.
> > > Can we have lanes information a separate value? no need for bitmask.
> > I agree.
>
> +1. An API design coupling the two together is definitely undesirable; it
> seems like half the time you end up with a redundant RTE_ETH_LANES_UNKNOWN.

https://patchwork.dpdk.org/project/dpdk/list/?series=31606
This is how we have implemented it internally, and we could use this as a reference.
ethtool 6.x allows lanes configuration this way:
ethtool -s ens6f0np0 speed <speed> duplex full autoneg off lanes <int>
>
> >
> > >
> > >
>
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-22 13:58       ` Thomas Monjalon
  2024-03-22 15:15         ` Ajit Khaparde
@ 2024-03-25  6:24         ` huangdengdui
  2024-03-25  9:30           ` Thomas Monjalon
  1 sibling, 1 reply; 51+ messages in thread
From: huangdengdui @ 2024-03-25  6:24 UTC (permalink / raw)
  To: Thomas Monjalon, ajit.khaparde, roretzla, Damodharam Ammepalli
  Cc: dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, damodharam.ammepalli, stephen, jerinjacobk,
	liuyonglong, fengchengwen, haijie1, lihuisong



On 2024/3/22 21:58, Thomas Monjalon wrote:
> 22/03/2024 08:09, Dengdui Huang:
>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> 
> I don't think it is a good idea to make this more complex.
> It brings nothing as far as I can see, compared to having speed and lanes separated.
> Can we have lanes information a separate value? no need for bitmask.
> 
Hi Thomas, Ajit, roretzla, damodharam,

I also considered this option at the beginning of the design,
but did not use it for the following reasons:
1. For the user, ethtool couples speed and lanes.
The result of querying the NIC capability is as follows:
Supported link modes:
        100000baseSR4/Full
        100000baseSR2/Full
The NIC capability is configured as follows:
ethtool -s eth1 speed 100000 lanes 4 autoneg off
ethtool -s eth1 speed 100000 lanes 2 autoneg off

Therefore, users are more accustomed to the coupling of speed and lanes.

2. For the PHY, when the physical layer capability is configured through the MDIO,
the speed and lanes are also coupled.
For example:
Table 45–7—PMA/PMD control 2 register bit definitions[1]
PMA/PMD type selection
                        1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
                        0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD

Therefore, coupling speeds and lanes is easier to understand.
And it is easier for the driver to report the supported lanes.

In addition, the code implementation is compatible with the old version.
When the driver does not support the lanes setting, the code does not need to be modified.

So I think the speed and lanes coupling is better.

[1]
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9844436
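
For illustration, here is a minimal sketch of how an application would request
a fixed 100 Gbps / 2-lane link with the coupled flags, assuming the new flags
are applied through the existing link_speeds field as defined in this patch
(the helper function and queue counts are placeholders, not code from the patch):

#include <string.h>
#include <rte_ethdev.h>

/* Minimal sketch: request a fixed 100 Gbps link over 2 lanes using the
 * coupled speed-lane flag defined in this patch series.
 */
static int
configure_100g_2lanes(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED |
			   RTE_ETH_LINK_SPEED_100G_2LANES;
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}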

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-25  6:24         ` huangdengdui
@ 2024-03-25  9:30           ` Thomas Monjalon
  2024-03-25 21:14             ` Damodharam Ammepalli
  2024-03-26  1:42             ` lihuisong (C)
  0 siblings, 2 replies; 51+ messages in thread
From: Thomas Monjalon @ 2024-03-25  9:30 UTC (permalink / raw)
  To: huangdengdui
  Cc: ajit.khaparde, roretzla, Damodharam Ammepalli, dev, ferruh.yigit,
	aman.deep.singh, yuying.zhang, andrew.rybchenko,
	damodharam.ammepalli, stephen, jerinjacobk, liuyonglong,
	fengchengwen, haijie1, lihuisong

25/03/2024 07:24, huangdengdui:
> 
> On 2024/3/22 21:58, Thomas Monjalon wrote:
> > 22/03/2024 08:09, Dengdui Huang:
> >> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> >> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> >> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> >> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> >> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> >> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> >> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> >> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> >> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> >> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> >> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> >> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> >> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> >> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> >> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> >> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> >> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> >> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> >> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> >> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> >> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> >> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> >> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> >> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> > 
> > I don't think it is a good idea to make this more complex.
> > It brings nothing as far as I can see, compared to having speed and lanes separated.
> > Can we have lanes information a separate value? no need for bitmask.
> > 
> Hi,Thomas, Ajit, roretzla, damodharam
> 
> I also considered the option at the beginning of the design.
> But this option is not used due to the following reasons:
> 1. For the user, ethtool couples speed and lanes.
> The result of querying the NIC capability is as follows:
> Supported link modes:
>         100000baseSR4/Full
>         100000baseSR2/Full
> The NIC capability is configured as follows:
> ethtool -s eth1 speed 100000 lanes 4 autoneg off
> ethtool -s eth1 speed 100000 lanes 2 autoneg off
> 
> Therefore, users are more accustomed to the coupling of speed and lanes.
> 
> 2. For the PHY, When the physical layer capability is configured through the MDIO,
> the speed and lanes are also coupled.
> For example:
> Table 45–7—PMA/PMD control 2 register bit definitions[1]
> PMA/PMD type selection
>                         1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
>                         0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
> 
> Therefore, coupling speeds and lanes is easier to understand.
> And it is easier for the driver to report the support lanes.
> 
> In addition, the code implementation is compatible with the old version.
> When the driver does not support the lanes setting, the code does not need to be modified.
> 
> So I think the speed and lanes coupling is better.

I don't think so.
You are mixing hardware implementation, user tool, and API.
Having a separate and simple API is cleaner and not more difficult to handle
in some get/set style functions.




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 0/3] support setting lanes
  2024-03-22 13:51                   ` Thomas Monjalon
@ 2024-03-25 14:04                     ` Jerin Jacob
  0 siblings, 0 replies; 51+ messages in thread
From: Jerin Jacob @ 2024-03-25 14:04 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Ajit Khaparde, huangdengdui, Damodharam Ammepalli, Ferruh Yigit,
	dev, aman.deep.singh, yuying.zhang, andrew.rybchenko,
	liuyonglong, fengchengwen, haijie1, lihuisong

On Fri, Mar 22, 2024 at 7:21 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 22/03/2024 06:51, Jerin Jacob:
> > On Fri, Mar 22, 2024 at 10:56 AM Ajit Khaparde
> > <ajit.khaparde@broadcom.com> wrote:
> > >
> > > On Thu, Mar 21, 2024 at 9:39 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > > >
> > > > On Fri, Mar 22, 2024 at 7:58 AM huangdengdui <huangdengdui@huawei.com> wrote:
> > > > >
> > > > >
> > > > >

> > > > For example, if FW configures a 100G port as 100GBASE-SR2, then two
> > > > ethdevs (port 0 and port 1) will show up.
> > > > Now, assume we expose this API and the end user configures port 1 as
> > > > 25G lanes; if so,
> > > > a) What happens to port0 and its state?
> > > There should be no impact to port0.
> > >
> > > > b) Will port2 and port3 show up after issuing this API (as the end user
> > > > configured 25Gx4 for 100G)? Will the application need to hotplug to
> > > > use the new ports?
> > > No. The port count does not change. Nor does the number of PCI
> > > functions seen by the host. Unless designed otherwise.
> > >
> > > Changing the lane count does not change anything in physical terms.
> > > What changes is the modulation or the signaling scheme.
> > > The number of lanes which can be supported is determined by
> > > the PHY itself and the cables used and needs to be negotiated appropriately
> > > with the remote partner - which is just like using forced Ethernet Speed
> > > instead of auto-negotiated speeds.
>
> Thanks for the explanation Ajit.
>
> > OK. It looks like it is platform independent then. At least for the cnxk driver, the end
> > user cannot simply change the lane config parameters
> > while traffic is active; also, it looks like other drivers need to do
> > SerDes training with the remote partner while reconfiguring it.
> >
> > At least on the cnxk platform, 25Gx4 on 100G will show up as 4 ethdev devices.
>
> That's a strange behaviour.
> Why show 4 ports which are not independent?

I checked the SerDes + NIC configuration again. It supports both modes:
showing up as one port vs four ports.

>
> > Having said that, if other NICs support this feature without
> > disturbing current port states, I don't have an objection to this API.
>
>
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-25  9:30           ` Thomas Monjalon
@ 2024-03-25 21:14             ` Damodharam Ammepalli
  2024-03-26  1:42             ` lihuisong (C)
  1 sibling, 0 replies; 51+ messages in thread
From: Damodharam Ammepalli @ 2024-03-25 21:14 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: huangdengdui, ajit.khaparde, roretzla, dev, ferruh.yigit,
	aman.deep.singh, yuying.zhang, andrew.rybchenko, stephen,
	jerinjacobk, liuyonglong, fengchengwen, haijie1, lihuisong

On Mon, Mar 25, 2024 at 2:30 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 25/03/2024 07:24, huangdengdui:
> >
> > On 2024/3/22 21:58, Thomas Monjalon wrote:
> > > 22/03/2024 08:09, Dengdui Huang:
> > >> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> > >> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> > >> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> > >> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> > >> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> > >> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> > >> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> > >> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> > >> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> > >> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> > >> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> > >> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> > >> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> > >> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> > >> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> > >> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> > >> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> > >> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> > >> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> > >> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> > >> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> > >> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> > >> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> > >> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> > >
> > > I don't think it is a good idea to make this more complex.
> > > It brings nothing as far as I can see, compared to having speed and lanes separated.
> > > Can we have lanes information a separate value? no need for bitmask.
> > >
> > Hi,Thomas, Ajit, roretzla, damodharam
> >
> > I also considered the option at the beginning of the design.
> > But this option is not used due to the following reasons:
> > 1. For the user, ethtool couples speed and lanes.
> > The result of querying the NIC capability is as follows:
> > Supported link modes:
> >         100000baseSR4/Full
> >         100000baseSR2/Full
> > The NIC capability is configured as follows:
> > ethtool -s eth1 speed 100000 lanes 4 autoneg off
> > ethtool -s eth1 speed 100000 lanes 2 autoneg off
> >
> > Therefore, users are more accustomed to the coupling of speed and lanes.
> >
> > 2. For the PHY, When the physical layer capability is configured through the MDIO,
> > the speed and lanes are also coupled.
> > For example:
> > Table 45–7—PMA/PMD control 2 register bit definitions[1]
> > PMA/PMD type selection
> >                         1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
> >                         0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
> >
> > Therefore, coupling speeds and lanes is easier to understand.
> > And it is easier for the driver to report the support lanes.
> >
> > In addition, the code implementation is compatible with the old version.
> > When the driver does not support the lanes setting, the code does not need to be modified.
> >
> > So I think the speed and lanes coupling is better.
>
> I don't think so.
> You are mixing hardware implementation, user tool, and API.
> Having a separate and simple API is cleaner and not more difficult to handle
> in some get/set style functions.
>
>
>

Agree with Thomas. DPDK lib/apps should be independent of HW implementation.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-25  9:30           ` Thomas Monjalon
  2024-03-25 21:14             ` Damodharam Ammepalli
@ 2024-03-26  1:42             ` lihuisong (C)
  2024-03-26  3:45               ` Ajit Khaparde
  2024-03-26 10:30               ` Thomas Monjalon
  1 sibling, 2 replies; 51+ messages in thread
From: lihuisong (C) @ 2024-03-26  1:42 UTC (permalink / raw)
  To: Thomas Monjalon, huangdengdui, Damodharam Ammepalli
  Cc: ajit.khaparde, roretzla, Damodharam Ammepalli, dev, ferruh.yigit,
	aman.deep.singh, yuying.zhang, andrew.rybchenko, stephen,
	jerinjacobk, liuyonglong, fengchengwen, haijie1


On 2024/3/25 17:30, Thomas Monjalon wrote:
> 25/03/2024 07:24, huangdengdui:
>> On 2024/3/22 21:58, Thomas Monjalon wrote:
>>> 22/03/2024 08:09, Dengdui Huang:
>>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
>>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
>>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
>>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
>>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
>>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
>>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
>>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
>>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
>>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
>>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
>>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
>>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
>>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
>>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
>>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
>>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
>>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
>>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
>>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
>>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
>>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
>>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
>>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
>>> I don't think it is a good idea to make this more complex.
>>> It brings nothing as far as I can see, compared to having speed and lanes separated.
>>> Can we have lanes information a separate value? no need for bitmask.
>>>
>> Hi,Thomas, Ajit, roretzla, damodharam
>>
>> I also considered the option at the beginning of the design.
>> But this option is not used due to the following reasons:
>> 1. For the user, ethtool couples speed and lanes.
>> The result of querying the NIC capability is as follows:
>> Supported link modes:
>>          100000baseSR4/Full
>>          100000baseSR2/Full
>> The NIC capability is configured as follows:
>> ethtool -s eth1 speed 100000 lanes 4 autoneg off
>> ethtool -s eth1 speed 100000 lanes 2 autoneg off
>>
>> Therefore, users are more accustomed to the coupling of speed and lanes.
>>
>> 2. For the PHY, When the physical layer capability is configured through the MDIO,
>> the speed and lanes are also coupled.
>> For example:
>> Table 45–7—PMA/PMD control 2 register bit definitions[1]
>> PMA/PMD type selection
>>                          1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
>>                          0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
>>
>> Therefore, coupling speeds and lanes is easier to understand.
>> And it is easier for the driver to report the support lanes.
>>
>> In addition, the code implementation is compatible with the old version.
>> When the driver does not support the lanes setting, the code does not need to be modified.
>>
>> So I think the speed and lanes coupling is better.
> I don't think so.
> You are mixing hardware implementation, user tool, and API.
> Having a separate and simple API is cleaner and not more difficult to handle
> in some get/set style functions.
Having a separate and simple API is cleaner. It's good.
But the supported lane capabilities have a lot to do with the specified
speed. This is determined by the related specification.
If we add a separate API for speed lanes, it is probably hard to check
the validity of the configuration for speed and lanes.
And a lane-setting API separated from speed is not good for
uniforming all PMDs' behavior in the ethdev layer.

The patch[1] is an example of this separate API.
I think it is not very good. It cannot tell the user and the PMD the following points:
1) The user doesn't know what lanes should or can be set for a specified speed
on one NIC.
2) What should a PMD do for supported lanes in its HW?

Anyway, if we add a setting-speed-lanes feature, we must report and set
speed and lanes capabilities for the user well;
otherwise, the user will be more confused.

[1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
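
For illustration, the kind of ethdev-layer validity check argued for above
could look like the sketch below; the capability struct and function names
are hypothetical, not taken from patch [1]:

#include <errno.h>
#include <stdint.h>
#include <rte_bitops.h>

/* Hypothetical capability entry: bit n of lanes_bitmap set means
 * (n + 1) lanes are supported at this speed.
 */
struct speed_lanes_capa {
	uint32_t speed_mbps;    /* e.g. 10000 for 10G, 100000 for 100G */
	uint32_t lanes_bitmap;  /* e.g. 0xA for 100G => 2 or 4 lanes */
};

/* Accept a (speed, lanes) pair only if the driver reported it. */
static int
validate_speed_lanes(const struct speed_lanes_capa *capa, unsigned int num,
		     uint32_t speed_mbps, uint32_t lanes)
{
	unsigned int i;

	if (lanes == 0 || lanes > 32)
		return -EINVAL;
	for (i = 0; i < num; i++) {
		if (capa[i].speed_mbps != speed_mbps)
			continue;
		return (capa[i].lanes_bitmap & RTE_BIT32(lanes - 1)) != 0 ?
		       0 : -EINVAL;
	}
	return -EINVAL; /* the speed itself is not supported */
}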

BR,
/Huisong
>
>
>
> .

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-26  1:42             ` lihuisong (C)
@ 2024-03-26  3:45               ` Ajit Khaparde
  2024-03-26 10:30               ` Thomas Monjalon
  1 sibling, 0 replies; 51+ messages in thread
From: Ajit Khaparde @ 2024-03-26  3:45 UTC (permalink / raw)
  To: lihuisong (C)
  Cc: Thomas Monjalon, huangdengdui, Damodharam Ammepalli, roretzla,
	dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, stephen, jerinjacobk, liuyonglong,
	fengchengwen, haijie1

On Mon, Mar 25, 2024 at 6:42 PM lihuisong (C) <lihuisong@huawei.com> wrote:
>
>
> On 2024/3/25 17:30, Thomas Monjalon wrote:
> > 25/03/2024 07:24, huangdengdui:
> >> On 2024/3/22 21:58, Thomas Monjalon wrote:
> >>> 22/03/2024 08:09, Dengdui Huang:
> >>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> >>> I don't think it is a good idea to make this more complex.
> >>> It brings nothing as far as I can see, compared to having speed and lanes separated.
> >>> Can we have lanes information a separate value? no need for bitmask.
> >>>
> >> Hi,Thomas, Ajit, roretzla, damodharam
> >>
> >> I also considered the option at the beginning of the design.
> >> But this option is not used due to the following reasons:
> >> 1. For the user, ethtool couples speed and lanes.
> >> The result of querying the NIC capability is as follows:
> >> Supported link modes:
> >>          100000baseSR4/Full
> >>          100000baseSR2/Full
So if DPDK provides a get lanes API, it should be able to tell the
number of lanes supported.
After that, the user should be able to pick one of the supported lane counts?

> >> The NIC capability is configured as follows:
> >> ethtool -s eth1 speed 100000 lanes 4 autoneg off
> >> ethtool -s eth1 speed 100000 lanes 2 autoneg off
> >>
> >> Therefore, users are more accustomed to the coupling of speed and lanes.
> >>
> >> 2. For the PHY, When the physical layer capability is configured through the MDIO,
> >> the speed and lanes are also coupled.
> >> For example:
> >> Table 45–7—PMA/PMD control 2 register bit definitions[1]
> >> PMA/PMD type selection
> >>                          1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
> >>                          0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
> >>
> >> Therefore, coupling speeds and lanes is easier to understand.
> >> And it is easier for the driver to report the support lanes.
> >>
> >> In addition, the code implementation is compatible with the old version.
> >> When the driver does not support the lanes setting, the code does not need to be modified.
> >>
> >> So I think the speed and lanes coupling is better.
> > I don't think so.
> > You are mixing hardware implementation, user tool, and API.
> > Having a separate and simple API is cleaner and not more difficult to handle
> > in some get/set style functions.
> Having a separate and simple API is cleaner. It's good.
> But supported lane capabilities have a lot to do with the specified
> speed. This is determined by releated specification.
> If we add a separate API for speed lanes, it probably is hard to check
> the validity of the configuration for speed and lanes.
> And the setting lane API sepparated from speed is not good for
> uniforming all PMD's behavior in ethdev layer.
>
> The patch[1] is an example for this separate API.
> I think it is not very good. It cannot tell user and PMD the follow points:
> 1) user don't know what lanes should or can be set for a specified speed
> on one NIC.
> 2) how should PMD do for a supported lanes in their HW?
>
> Anyway, if we add setting speed lanes feature, we must report and set
> speed and lanes capabilities for user well.
> otherwise, user will be more confused.
>
> [1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
>
> BR,
> /Huisong
> >
> >
> >
> > .

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-26  1:42             ` lihuisong (C)
  2024-03-26  3:45               ` Ajit Khaparde
@ 2024-03-26 10:30               ` Thomas Monjalon
  2024-03-26 11:15                 ` lihuisong (C)
  1 sibling, 1 reply; 51+ messages in thread
From: Thomas Monjalon @ 2024-03-26 10:30 UTC (permalink / raw)
  To: huangdengdui, Damodharam Ammepalli, lihuisong (C)
  Cc: ajit.khaparde, roretzla, Damodharam Ammepalli, dev, ferruh.yigit,
	aman.deep.singh, yuying.zhang, andrew.rybchenko, stephen,
	jerinjacobk, liuyonglong, fengchengwen, haijie1

26/03/2024 02:42, lihuisong (C):
> 
> On 2024/3/25 17:30, Thomas Monjalon wrote:
> > 25/03/2024 07:24, huangdengdui:
> >> On 2024/3/22 21:58, Thomas Monjalon wrote:
> >>> 22/03/2024 08:09, Dengdui Huang:
> >>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> >>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> >>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> >>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> >>> I don't think it is a good idea to make this more complex.
> >>> It brings nothing as far as I can see, compared to having speed and lanes separated.
> >>> Can we have lanes information a separate value? no need for bitmask.
> >>>
> >> Hi,Thomas, Ajit, roretzla, damodharam
> >>
> >> I also considered the option at the beginning of the design.
> >> But this option is not used due to the following reasons:
> >> 1. For the user, ethtool couples speed and lanes.
> >> The result of querying the NIC capability is as follows:
> >> Supported link modes:
> >>          100000baseSR4/Full
> >>          100000baseSR2/Full
> >> The NIC capability is configured as follows:
> >> ethtool -s eth1 speed 100000 lanes 4 autoneg off
> >> ethtool -s eth1 speed 100000 lanes 2 autoneg off
> >>
> >> Therefore, users are more accustomed to the coupling of speed and lanes.
> >>
> >> 2. For the PHY, When the physical layer capability is configured through the MDIO,
> >> the speed and lanes are also coupled.
> >> For example:
> >> Table 45–7—PMA/PMD control 2 register bit definitions[1]
> >> PMA/PMD type selection
> >>                          1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
> >>                          0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
> >>
> >> Therefore, coupling speeds and lanes is easier to understand.
> >> And it is easier for the driver to report the support lanes.
> >>
> >> In addition, the code implementation is compatible with the old version.
> >> When the driver does not support the lanes setting, the code does not need to be modified.
> >>
> >> So I think the speed and lanes coupling is better.
> > I don't think so.
> > You are mixing hardware implementation, user tool, and API.
> > Having a separate and simple API is cleaner and not more difficult to handle
> > in some get/set style functions.
> Having a separate and simple API is cleaner. It's good.
> But supported lane capabilities have a lot to do with the specified 
> speed. This is determined by releated specification.
> If we add a separate API for speed lanes, it probably is hard to check 
> the validity of the configuration for speed and lanes.
> And the setting lane API sepparated from speed is not good for 
> uniforming all PMD's behavior in ethdev layer.

Please let's be more specific.
There are 3 needs:
	- set PHY lane config
	- get PHY lane config
	- get PHY lane capabilities

There is no problem providing a function to get the number of PHY lanes.
It is possible to set the PHY lane count after defining a fixed speed.
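
For illustration, a minimal sketch of those three needs as get/set style
functions follows; the names and the capability struct are hypothetical
(in the spirit of the separate-API series mentioned in this thread),
not merged ethdev API:

/* Hypothetical capability entry: a speed paired with the lane counts
 * supported at that speed.
 */
struct rte_eth_speed_lanes_capa {
	uint32_t speed; /* link speed in Mbps, e.g. 100000 */
	uint32_t capa;  /* bitmap of supported lane counts */
};

int rte_eth_speed_lanes_set(uint16_t port_id, uint32_t lanes);
int rte_eth_speed_lanes_get(uint16_t port_id, uint32_t *lanes);
/* Fills 'capa' with up to 'num' entries, returns how many are available. */
int rte_eth_speed_lanes_get_capability(uint16_t port_id,
				       struct rte_eth_speed_lanes_capa *capa,
				       unsigned int num);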

> The patch[1] is an example for this separate API.
> I think it is not very good. It cannot tell user and PMD the follow points:
> 1) user don't know what lanes should or can be set for a specified speed 
> on one NIC.

This is about capabilities.
Can we say a HW will support a maximum number of PHY lanes in general?
We may need to associate the maximum speed per lane?
Do we really need to associate PHY lane and PHY speed numbers for capabilities?
Example: if a HW supports 100G-4-lanes and 200G-2-lanes,
may we assume it is also supporting 200G-4-lanes?

> 2) how should PMD do for a supported lanes in their HW?

I don't understand this question. Please rephrase.

> Anyway, if we add setting speed lanes feature, we must report and set 
> speed and lanes capabilities for user well.
> otherwise, user will be more confused.

Doing it well does not necessarily mean exposing all raw combinations as ethtool does.

> [1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-26 10:30               ` Thomas Monjalon
@ 2024-03-26 11:15                 ` lihuisong (C)
  2024-03-26 13:47                   ` Ajit Khaparde
  0 siblings, 1 reply; 51+ messages in thread
From: lihuisong (C) @ 2024-03-26 11:15 UTC (permalink / raw)
  To: Thomas Monjalon, huangdengdui, Damodharam Ammepalli
  Cc: ajit.khaparde, roretzla, dev, ferruh.yigit, aman.deep.singh,
	yuying.zhang, andrew.rybchenko, stephen, jerinjacobk,
	liuyonglong, fengchengwen, haijie1


On 2024/3/26 18:30, Thomas Monjalon wrote:
> 26/03/2024 02:42, lihuisong (C):
>> On 2024/3/25 17:30, Thomas Monjalon wrote:
>>> 25/03/2024 07:24, huangdengdui:
>>>> On 2024/3/22 21:58, Thomas Monjalon wrote:
>>>>> 22/03/2024 08:09, Dengdui Huang:
>>>>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
>>>>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
>>>>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
>>>>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
>>>>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
>>>>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
>>>>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
>>>>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
>>>>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
>>>>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
>>>>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
>>>>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
>>>>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
>>>>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
>>>>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
>>>>> I don't think it is a good idea to make this more complex.
>>>>> It brings nothing as far as I can see, compared to having speed and lanes separated.
>>>>> Can we have lanes information a separate value? no need for bitmask.
>>>>>
>>>> Hi,Thomas, Ajit, roretzla, damodharam
>>>>
>>>> I also considered the option at the beginning of the design.
>>>> But this option is not used due to the following reasons:
>>>> 1. For the user, ethtool couples speed and lanes.
>>>> The result of querying the NIC capability is as follows:
>>>> Supported link modes:
>>>>           100000baseSR4/Full
>>>>           100000baseSR2/Full
>>>> The NIC capability is configured as follows:
>>>> ethtool -s eth1 speed 100000 lanes 4 autoneg off
>>>> ethtool -s eth1 speed 100000 lanes 2 autoneg off
>>>>
>>>> Therefore, users are more accustomed to the coupling of speed and lanes.
>>>>
>>>> 2. For the PHY, When the physical layer capability is configured through the MDIO,
>>>> the speed and lanes are also coupled.
>>>> For example:
>>>> Table 45–7—PMA/PMD control 2 register bit definitions[1]
>>>> PMA/PMD type selection
>>>>                           1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
>>>>                           0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
>>>>
>>>> Therefore, coupling speeds and lanes is easier to understand.
>>>> And it is easier for the driver to report the support lanes.
>>>>
>>>> In addition, the code implementation is compatible with the old version.
>>>> When the driver does not support the lanes setting, the code does not need to be modified.
>>>>
>>>> So I think the speed and lanes coupling is better.
>>> I don't think so.
>>> You are mixing hardware implementation, user tool, and API.
>>> Having a separate and simple API is cleaner and not more difficult to handle
>>> in some get/set style functions.
>> Having a separate and simple API is cleaner. It's good.
>> But supported lane capabilities have a lot to do with the specified
>> speed. This is determined by releated specification.
>> If we add a separate API for speed lanes, it probably is hard to check
>> the validity of the configuration for speed and lanes.
>> And the setting lane API sepparated from speed is not good for
>> uniforming all PMD's behavior in ethdev layer.
> Please let's be more specific.
> There are 3 needs:
> 	- set PHY lane config
> 	- get PHY lane config
> 	- get PHY lane capabilities
IMO, these lane capabilities should be reported based on the supported speed
capabilities.
>
> There is no problem providing a function to get the number of PHY lanes.
> It is possible to set the PHY lane count after defining a fixed speed.
Yes, it's OK.
>
>> The patch[1] is an example for this separate API.
>> I think it is not very good. It cannot tell user and PMD the follow points:
>> 1) user don't know what lanes should or can be set for a specified speed
>> on one NIC.
> This is about capabilities.
> Can we say a HW will support a maximum number of PHY lanes in general?
> We may need to associate the maximum speed per lane?
> Do we really need to associate PHY lane and PHY speed numbers for capabilities?
Personally, it should contain the below relationship at least:
speed 10G  --> 1 lane | 4 lanes
speed 100G --> 2 lanes | 4 lanes
> Example: if a HW supports 100G-4-lanes and 200G-2-lanes,
> may we assume it is also supporting 200G-4-lanes?
I think we cannot assume that the NIC also supports 200G-4-lanes,
because it has a lot to do with the HW design.
>
>> 2) how should PMD do for a supported lanes in their HW?
> I don't understand this question. Please rephrase.
I mean that the PMD doesn't know how many lanes to set when the lane count from the user
is not supported at a fixed speed by the PMD.
So the ethdev layer should limit the available lane number based on a fixed
speed.
>
>> Anyway, if we add setting speed lanes feature, we must report and set
>> speed and lanes capabilities for user well.
>> otherwise, user will be more confused.
> Doing it well does not necessarily mean exposing all raw combinations as ethtool does.
Agreed.
>
>> [1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
>
>
> .

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-26 11:15                 ` lihuisong (C)
@ 2024-03-26 13:47                   ` Ajit Khaparde
  2024-03-26 18:11                     ` Ajit Khaparde
  2024-03-29  3:25                     ` lihuisong (C)
  0 siblings, 2 replies; 51+ messages in thread
From: Ajit Khaparde @ 2024-03-26 13:47 UTC (permalink / raw)
  To: lihuisong (C)
  Cc: Thomas Monjalon, huangdengdui, Damodharam Ammepalli, roretzla,
	dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, stephen, jerinjacobk, liuyonglong,
	fengchengwen, haijie1

[-- Attachment #1: Type: text/plain, Size: 6972 bytes --]

On Tue, Mar 26, 2024 at 4:15 AM lihuisong (C) <lihuisong@huawei.com> wrote:
>
>
> On 2024/3/26 18:30, Thomas Monjalon wrote:
> > 26/03/2024 02:42, lihuisong (C):
> >> On 2024/3/25 17:30, Thomas Monjalon wrote:
> >>> 25/03/2024 07:24, huangdengdui:
> >>>> On 2024/3/22 21:58, Thomas Monjalon wrote:
> >>>>> 22/03/2024 08:09, Dengdui Huang:
> >>>>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> >>>>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> >>>>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> >>>>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> >>>>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> >>>>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> >>>>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> >>>>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> >>>>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> >>>>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> >>>>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> >>>>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> >>>>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> >>>>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> >>>>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> >>>>> I don't think it is a good idea to make this more complex.
> >>>>> It brings nothing as far as I can see, compared to having speed and lanes separated.
> >>>>> Can we have lanes information a separate value? no need for bitmask.
> >>>>>
> >>>> Hi Thomas, Ajit, roretzla, damodharam
> >>>>
> >>>> I also considered the option at the beginning of the design.
> >>>> But this option is not used due to the following reasons:
> >>>> 1. For the user, ethtool couples speed and lanes.
> >>>> The result of querying the NIC capability is as follows:
> >>>> Supported link modes:
> >>>>           100000baseSR4/Full
> >>>>           100000baseSR2/Full
> >>>> The NIC capability is configured as follows:
> >>>> ethtool -s eth1 speed 100000 lanes 4 autoneg off
> >>>> ethtool -s eth1 speed 100000 lanes 2 autoneg off
> >>>>
> >>>> Therefore, users are more accustomed to the coupling of speed and lanes.
> >>>>
> >>>> 2. For the PHY, when the physical layer capability is configured through the MDIO,
> >>>> the speed and lanes are also coupled.
> >>>> For example:
> >>>> Table 45–7—PMA/PMD control 2 register bit definitions[1]
> >>>> PMA/PMD type selection
> >>>>                           1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
> >>>>                           0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
> >>>>
> >>>> Therefore, coupling speeds and lanes is easier to understand.
> >>>> And it is easier for the driver to report the supported lanes.
> >>>>
> >>>> In addition, the code implementation is compatible with the old version.
> >>>> When the driver does not support the lanes setting, the code does not need to be modified.
> >>>>
> >>>> So I think the speed and lanes coupling is better.
> >>> I don't think so.
> >>> You are mixing hardware implementation, user tool, and API.
> >>> Having a separate and simple API is cleaner and not more difficult to handle
> >>> in some get/set style functions.
> >> Having a separate and simple API is cleaner. It's good.
> >> But supported lane capabilities have a lot to do with the specified
> >> speed. This is determined by the related specification.
> >> If we add a separate API for speed lanes, it probably is hard to check
> >> the validity of the configuration for speed and lanes.
> >> And a lane setting API separated from speed is not good for
> >> unifying all PMDs' behavior in the ethdev layer.
> > Please let's be more specific.
> > There are 3 needs:
> >       - set PHY lane config
> >       - get PHY lane config
> >       - get PHY lane capabilities
> IMO, these lane capabilities should be reported based on supported speed
> capabilities.
> >
> > There is no problem providing a function to get the number of PHY lanes.
> > It is possible to set PHY lanes number after defining a fixed speed.
> yes it's ok.
> >
> >> The patch[1] is an example for this separate API.
> >> I think it is not very good. It cannot tell the user and PMD the following points:
> >> 1) users don't know what lanes should or can be set for a specified speed
> >> on one NIC.
> > This is about capabilities.
> > Can we say a HW will support a maximum number of PHY lanes in general?
> > We may need to associate the maximum speed per lane?
> > Do we really need to associate PHY lane and PHY speed numbers for capabilities?
> Personally, it should contain at least the relationship below.
> speed 10G  --> 1lane | 4lane
> speed 100G  --> 2lane | 4lane
> > Example: if a HW supports 100G-4-lanes and 200G-2-lanes,
> > may we assume it is also supporting 200G-4-lanes?
> I think we cannot assume that the NIC also supports 200G-4-lanes,
> because it has a lot to do with the HW design.
> >
> >> 2) how should PMD do for a supported lanes in their HW?
> > I don't understand this question. Please rephrase.
> I mean that the PMD doesn't know how many lanes to set when the lanes
> from the user are not supported at a fixed speed by the PMD.
> So the ethdev layer should limit the available lane number based on a fixed
> speed.

ethdev layer has generally been opaque. We should keep it that way.
The PMD should know what the HW supports.
So it should show the capabilities correctly. Right?
And if the user provides incorrect settings, it should reject it.

> >
> >> Anyway, if we add a speed lanes setting feature, we must report and set
> >> speed and lanes capabilities for the user well;
> >> otherwise, the user will be more confused.
> > Reporting them well does not necessarily mean exposing all raw combinations as ethtool does.
> Agreed.
> >
> >> [1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
> >
> >
> > .

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-26 13:47                   ` Ajit Khaparde
@ 2024-03-26 18:11                     ` Ajit Khaparde
  2024-03-26 18:21                       ` Damodharam Ammepalli
  2024-03-29  3:25                     ` lihuisong (C)
  1 sibling, 1 reply; 51+ messages in thread
From: Ajit Khaparde @ 2024-03-26 18:11 UTC (permalink / raw)
  To: lihuisong (C)
  Cc: Thomas Monjalon, huangdengdui, Damodharam Ammepalli, roretzla,
	dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, stephen, jerinjacobk, liuyonglong,
	fengchengwen, haijie1

[-- Attachment #1: Type: text/plain, Size: 7358 bytes --]

On Tue, Mar 26, 2024 at 6:47 AM Ajit Khaparde
<ajit.khaparde@broadcom.com> wrote:
>
> On Tue, Mar 26, 2024 at 4:15 AM lihuisong (C) <lihuisong@huawei.com> wrote:
> >
> >
> > On 2024/3/26 18:30, Thomas Monjalon wrote:
> > > 26/03/2024 02:42, lihuisong (C):
> > >> On 2024/3/25 17:30, Thomas Monjalon wrote:
> > >>> 25/03/2024 07:24, huangdengdui:
> > >>>> On 2024/3/22 21:58, Thomas Monjalon wrote:
> > >>>>> 22/03/2024 08:09, Dengdui Huang:
> > >>>>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> > >>>>> I don't think it is a good idea to make this more complex.
> > >>>>> It brings nothing as far as I can see, compared to having speed and lanes separated.
> > >>>>> Can we have lanes information a separate value? no need for bitmask.
> > >>>>>
> > >>>> Hi Thomas, Ajit, roretzla, damodharam
> > >>>>
> > >>>> I also considered the option at the beginning of the design.
> > >>>> But this option is not used due to the following reasons:
> > >>>> 1. For the user, ethtool couples speed and lanes.
> > >>>> The result of querying the NIC capability is as follows:
> > >>>> Supported link modes:
> > >>>>           100000baseSR4/Full
> > >>>>           100000baseSR2/Full
> > >>>> The NIC capability is configured as follows:
> > >>>> ethtool -s eth1 speed 100000 lanes 4 autoneg off
> > >>>> ethtool -s eth1 speed 100000 lanes 2 autoneg off
> > >>>>
> > >>>> Therefore, users are more accustomed to the coupling of speed and lanes.
> > >>>>
> > >>>> 2. For the PHY, when the physical layer capability is configured through the MDIO,
> > >>>> the speed and lanes are also coupled.
> > >>>> For example:
> > >>>> Table 45–7—PMA/PMD control 2 register bit definitions[1]
> > >>>> PMA/PMD type selection
> > >>>>                           1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
> > >>>>                           0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
> > >>>>
> > >>>> Therefore, coupling speeds and lanes is easier to understand.
> > >>>> And it is easier for the driver to report the supported lanes.
> > >>>>
> > >>>> In addition, the code implementation is compatible with the old version.
> > >>>> When the driver does not support the lanes setting, the code does not need to be modified.
> > >>>>
> > >>>> So I think the speed and lanes coupling is better.
> > >>> I don't think so.
> > >>> You are mixing hardware implementation, user tool, and API.
> > >>> Having a separate and simple API is cleaner and not more difficult to handle
> > >>> in some get/set style functions.
> > >> Having a separate and simple API is cleaner. It's good.
> > >> But supported lane capabilities have a lot to do with the specified
> > >> speed. This is determined by the related specification.
> > >> If we add a separate API for speed lanes, it probably is hard to check
> > >> the validity of the configuration for speed and lanes.
> > >> And a lane setting API separated from speed is not good for
> > >> unifying all PMDs' behavior in the ethdev layer.
> > > Please let's be more specific.
> > > There are 3 needs:
> > >       - set PHY lane config
> > >       - get PHY lane config
> > >       - get PHY lane capabilities
> > IMO, these lane capabilities should be reported based on supported speed
> > capabilities.
> > >
> > > There is no problem providing a function to get the number of PHY lanes.
> > > It is possible to set PHY lanes number after defining a fixed speed.
> > yes it's ok.
> > >
> > >> The patch[1] is an example for this separate API.
> > >> I think it is not very good. It cannot tell the user and PMD the following points:
> > >> 1) users don't know what lanes should or can be set for a specified speed
> > >> on one NIC.
> > > This is about capabilities.
> > > Can we say a HW will support a maximum number of PHY lanes in general?
> > > We may need to associate the maximum speed per lane?
> > > Do we really need to associate PHY lane and PHY speed numbers for capabilities?
> > Personally, it should contain at least the relationship below.
> > speed 10G  --> 1lane | 4lane
> > speed 100G  --> 2lane | 4lane
> > > Example: if a HW supports 100G-4-lanes and 200G-2-lanes,
> > > may we assume it is also supporting 200G-4-lanes?
> > I think we cannot assume that the NIC also supports 200G-4-lanes,
> > because it has a lot to do with the HW design.
> > >
> > >> 2) how should PMD do for a supported lanes in their HW?
> > > I don't understand this question. Please rephrase.
> > I mean that the PMD doesn't know how many lanes to set when the lanes
> > from the user are not supported at a fixed speed by the PMD.
> > So the ethdev layer should limit the available lane number based on a fixed
> > speed.
>
> ethdev layer has generally been opaque. We should keep it that way.
I mis-typed.
%s/opaque/transparent


> The PMD should know what the HW supports.
> So it should show the capabilities correctly. Right?
> And if the user provides incorrect settings, it should reject it.
>
> > >
> > > >> Anyway, if we add a speed lanes setting feature, we must report and set
> > > >> speed and lanes capabilities for the user well;
> > > >> otherwise, the user will be more confused.
> > > > Reporting them well does not necessarily mean exposing all raw combinations as ethtool does.
> > Agreed.
> > >
> > >> [1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
> > >
> > >
> > > .

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-26 18:11                     ` Ajit Khaparde
@ 2024-03-26 18:21                       ` Damodharam Ammepalli
  2024-03-30 11:38                         ` huangdengdui
  0 siblings, 1 reply; 51+ messages in thread
From: Damodharam Ammepalli @ 2024-03-26 18:21 UTC (permalink / raw)
  To: Ajit Khaparde
  Cc: lihuisong (C),
	Thomas Monjalon, huangdengdui, roretzla, dev, ferruh.yigit,
	aman.deep.singh, yuying.zhang, andrew.rybchenko, stephen,
	jerinjacobk, liuyonglong, fengchengwen, haijie1

[-- Attachment #1: Type: text/plain, Size: 9804 bytes --]

On Tue, Mar 26, 2024 at 11:12 AM Ajit Khaparde
<ajit.khaparde@broadcom.com> wrote:
>
> On Tue, Mar 26, 2024 at 6:47 AM Ajit Khaparde
> <ajit.khaparde@broadcom.com> wrote:
> >
> > On Tue, Mar 26, 2024 at 4:15 AM lihuisong (C) <lihuisong@huawei.com> wrote:
> > >
> > >
> > > On 2024/3/26 18:30, Thomas Monjalon wrote:
> > > > 26/03/2024 02:42, lihuisong (C):
> > > >> On 2024/3/25 17:30, Thomas Monjalon wrote:
> > > >>> 25/03/2024 07:24, huangdengdui:
> > > >>>> On 2024/3/22 21:58, Thomas Monjalon wrote:
> > > >>>>> 22/03/2024 08:09, Dengdui Huang:
> > > >>>>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> > > >>>>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> > > >>>>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> > > >>>>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> > > >>>>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> > > >>>>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> > > >>>>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> > > >>>>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> > > >>>>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
> > > >>>>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
> > > >>>>> I don't think it is a good idea to make this more complex.
> > > >>>>> It brings nothing as far as I can see, compared to having speed and lanes separated.
> > > >>>>> Can we have lanes information a separate value? no need for bitmask.
> > > >>>>>
> > > >>>> Hi Thomas, Ajit, roretzla, damodharam
> > > >>>>
> > > >>>> I also considered the option at the beginning of the design.
> > > >>>> But this option is not used due to the following reasons:
> > > >>>> 1. For the user, ethtool couples speed and lanes.
> > > >>>> The result of querying the NIC capability is as follows:
> > > >>>> Supported link modes:
> > > >>>>           100000baseSR4/Full
> > > >>>>           100000baseSR2/Full
> > > >>>> The NIC capability is configured as follows:
> > > >>>> ethtool -s eth1 speed 100000 lanes 4 autoneg off
> > > >>>> ethtool -s eth1 speed 100000 lanes 2 autoneg off
> > > >>>>
> > > >>>> Therefore, users are more accustomed to the coupling of speed and lanes.
> > > >>>>
> > > >>>> 2. For the PHY, when the physical layer capability is configured through the MDIO,
> > > >>>> the speed and lanes are also coupled.
> > > >>>> For example:
> > > >>>> Table 45–7—PMA/PMD control 2 register bit definitions[1]
> > > >>>> PMA/PMD type selection
> > > >>>>                           1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
> > > >>>>                           0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
> > > >>>>
> > > >>>> Therefore, coupling speeds and lanes is easier to understand.
> > > >>>> And it is easier for the driver to report the supported lanes.
> > > >>>>
> > > >>>> In addition, the code implementation is compatible with the old version.
> > > >>>> When the driver does not support the lanes setting, the code does not need to be modified.
> > > >>>>
> > > >>>> So I think the speed and lanes coupling is better.
> > > >>> I don't think so.
> > > >>> You are mixing hardware implementation, user tool, and API.
> > > >>> Having a separate and simple API is cleaner and not more difficult to handle
> > > >>> in some get/set style functions.
> > > >> Having a separate and simple API is cleaner. It's good.
> > > >> But supported lane capabilities have a lot to do with the specified
> > > >> speed. This is determined by the related specification.
> > > >> If we add a separate API for speed lanes, it probably is hard to check
> > > >> the validity of the configuration for speed and lanes.
> > > >> And a lane setting API separated from speed is not good for
> > > >> unifying all PMDs' behavior in the ethdev layer.
> > > > Please let's be more specific.
> > > > There are 3 needs:
> > > >       - set PHY lane config
> > > >       - get PHY lane config
> > > >       - get PHY lane capabilities
> > > IMO, these lane capabilities should be reported based on supported speed
> > > capabilities.
> > > >
> > > > There is no problem providing a function to get the number of PHY lanes.
> > > > It is possible to set PHY lanes number after defining a fixed speed.
> > > yes it's ok.
> > > >
> > > >> The patch[1] is an example for this separate API.
> > > >> I think it is not very good. It cannot tell the user and PMD the following points:
> > > >> 1) users don't know what lanes should or can be set for a specified speed
> > > >> on one NIC.
> > > > This is about capabilities.
> > > > Can we say a HW will support a maximum number of PHY lanes in general?
> > > > We may need to associate the maximum speed per lane?
> > > > Do we really need to associate PHY lane and PHY speed numbers for capabilities?
> > > Personally, it should contain at least the relationship below.
> > > speed 10G  --> 1lane | 4lane
> > > speed 100G  --> 2lane | 4lane
> > > > Example: if a HW supports 100G-4-lanes and 200G-2-lanes,
> > > > may we assume it is also supporting 200G-4-lanes?
> > > I think we cannot assume that the NIC also supports 200G-4-lanes,
> > > because it has a lot to do with the HW design.
> > > >
> > > >> 2) how should PMD do for a supported lanes in their HW?
> > > > I don't understand this question. Please rephrase.
> > > I mean that the PMD doesn't know how many lanes to set when the lanes
> > > from the user are not supported at a fixed speed by the PMD.
> > > So the ethdev layer should limit the available lane number based on a fixed
> > > speed.
> >
> > ethdev layer has generally been opaque. We should keep it that way.
> I mis-typed.
> %s/opaque/transparent
>
>
> > The PMD should know what the HW supports.
> > So it should show the capabilities correctly. Right?
> > And if the user provides incorrect settings, it should reject it.
> >
> > > >
> > > >> Anyway, if we add a speed lanes setting feature, we must report and set
> > > >> speed and lanes capabilities for the user well;
> > > >> otherwise, the user will be more confused.
> > > > Reporting them well does not necessarily mean exposing all raw combinations as ethtool does.
> > > Agreed.
> > > >
> > > >> [1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
> > > >
> > > >
> > > >
Our RFC patch's cmdline design is inspired by how ethtool works: the
lanes argument is optional at the user's choice, which keeps it backward
compatible when no lanes are configured. testpmd does not have that
flexibility in the speed command, so we resorted to a separate command
for lanes, and for all the other reasons mentioned earlier.

Second, on where the lanes validation logic should rest: irrespective of
lanes being part of the speed command or a separate lanes command, as
others said, the autonegotiation itself should suffice for link train
and up. Taking this example, if the link came up autonegotiated at 100G
PAM4-112, the user may, for whatever reasons (including those mentioned
earlier), want to force 100Gb NRZ (25G per lane, 4 lanes); the user
should be aware of the commands the tool offers, and the driver can do
the final validation for anomalies.

In any case, in the RFC patch we are doing lanes validation in
cmd_validate_lanes(portid_t pid, uint32_t *lanes), which gets populated
by the hw/driver based on the current autonegotiated link-up speed and
signalling type.

Today 400Gig is 4 lanes (pam4_56) or 8 lanes (pam4_112); in future, with
a new HW design, it may be 2 x 200Gig lanes, and at that time we don't
need to update testpmd, we handle it in the driver. Maybe a new speed of
800Gb+ can demand an update to the app/lib entries.
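
The RFC's actual body of cmd_validate_lanes() is not shown in this
thread; the following is only a minimal sketch of what such a check
could look like, with the signature simplified and the driver query
invented for illustration:

/* Illustrative sketch, not the RFC's code: ask the driver which lane
 * counts are valid at the current autonegotiated speed and check the
 * user's request against them. driver_get_valid_lanes() is hypothetical. */
static int
cmd_validate_lanes(portid_t pid, uint32_t req_lanes)
{
	uint32_t valid_lanes[8];
	unsigned int i, n;

	n = driver_get_valid_lanes(pid, valid_lanes, 8);
	for (i = 0; i < n; i++)
		if (valid_lanes[i] == req_lanes)
			return 1;
	return 0;
}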


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4233 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-26 13:47                   ` Ajit Khaparde
  2024-03-26 18:11                     ` Ajit Khaparde
@ 2024-03-29  3:25                     ` lihuisong (C)
  1 sibling, 0 replies; 51+ messages in thread
From: lihuisong (C) @ 2024-03-29  3:25 UTC (permalink / raw)
  To: Ajit Khaparde
  Cc: Thomas Monjalon, huangdengdui, Damodharam Ammepalli, roretzla,
	dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, stephen, jerinjacobk, liuyonglong,
	fengchengwen, haijie1


On 2024/3/26 21:47, Ajit Khaparde wrote:
> On Tue, Mar 26, 2024 at 4:15 AM lihuisong (C) <lihuisong@huawei.com> wrote:
>>
>> On 2024/3/26 18:30, Thomas Monjalon wrote:
>>> 26/03/2024 02:42, lihuisong (C):
>>>> On 2024/3/25 17:30, Thomas Monjalon wrote:
>>>>> 25/03/2024 07:24, huangdengdui:
>>>>>> On 2024/3/22 21:58, Thomas Monjalon wrote:
>>>>>>> 22/03/2024 08:09, Dengdui Huang:
>>>>>>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
>>>>>>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
>>>>>>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
>>>>>>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
>>>>>>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
>>>>>>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
>>>>>>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
>>>>>>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
>>>>>>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
>>>>>>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
>>>>>>> I don't think it is a good idea to make this more complex.
>>>>>>> It brings nothing as far as I can see, compared to having speed and lanes separated.
>>>>>>> Can we have lanes information a separate value? no need for bitmask.
>>>>>>>
>>>>>> Hi Thomas, Ajit, roretzla, damodharam
>>>>>>
>>>>>> I also considered the option at the beginning of the design.
>>>>>> But this option is not used due to the following reasons:
>>>>>> 1. For the user, ethtool couples speed and lanes.
>>>>>> The result of querying the NIC capability is as follows:
>>>>>> Supported link modes:
>>>>>>            100000baseSR4/Full
>>>>>>            100000baseSR2/Full
>>>>>> The NIC capability is configured as follows:
>>>>>> ethtool -s eth1 speed 100000 lanes 4 autoneg off
>>>>>> ethtool -s eth1 speed 100000 lanes 2 autoneg off
>>>>>>
>>>>>> Therefore, users are more accustomed to the coupling of speed and lanes.
>>>>>>
>>>>>> 2. For the PHY, when the physical layer capability is configured through the MDIO,
>>>>>> the speed and lanes are also coupled.
>>>>>> For example:
>>>>>> Table 45–7—PMA/PMD control 2 register bit definitions[1]
>>>>>> PMA/PMD type selection
>>>>>>                            1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
>>>>>>                            0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
>>>>>>
>>>>>> Therefore, coupling speeds and lanes is easier to understand.
>>>>>> And it is easier for the driver to report the supported lanes.
>>>>>>
>>>>>> In addition, the code implementation is compatible with the old version.
>>>>>> When the driver does not support the lanes setting, the code does not need to be modified.
>>>>>>
>>>>>> So I think the speed and lanes coupling is better.
>>>>> I don't think so.
>>>>> You are mixing hardware implementation, user tool, and API.
>>>>> Having a separate and simple API is cleaner and not more difficult to handle
>>>>> in some get/set style functions.
>>>> Having a separate and simple API is cleaner. It's good.
>>>> But supported lane capabilities have a lot to do with the specified
>>>> speed. This is determined by the related specification.
>>>> If we add a separate API for speed lanes, it probably is hard to check
>>>> the validity of the configuration for speed and lanes.
>>>> And a lane setting API separated from speed is not good for
>>>> unifying all PMDs' behavior in the ethdev layer.
>>> Please let's be more specific.
>>> There are 3 needs:
>>>        - set PHY lane config
>>>        - get PHY lane config
>>>        - get PHY lane capabilities
>> IMO, these lane capabilities should be reported based on supported speed
>> capabilities.
>>> There is no problem providing a function to get the number of PHY lanes.
>>> It is possible to set PHY lanes number after defining a fixed speed.
>> yes it's ok.
>>>> The patch[1] is an example for this separate API.
>>>> I think it is not very good. It cannot tell the user and PMD the following points:
>>>> 1) users don't know what lanes should or can be set for a specified speed
>>>> on one NIC.
>>> This is about capabilities.
>>> Can we say a HW will support a maximum number of PHY lanes in general?
>>> We may need to associate the maximum speed per lane?
>>> Do we really need to associate PHY lane and PHY speed numbers for capabilities?
>> Personally, it should contain at least the relationship below.
>> speed 10G  --> 1lane | 4lane
>> speed 100G  --> 2lane | 4lane
>>> Example: if a HW supports 100G-4-lanes and 200G-2-lanes,
>>> may we assume it is also supporting 200G-4-lanes?
>> I think we cannot assume that the NIC also supports 200G-4-lanes,
>> because it has a lot to do with the HW design.
>>>> 2) how should PMD do for a supported lanes in their HW?
>>> I don't understand this question. Please rephrase.
>> I mean that the PMD doesn't know how many lanes to set when the lanes
>> from the user are not supported at a fixed speed by the PMD.
>> So the ethdev layer should limit the available lane number based on a fixed
>> speed.
> ethdev layer has generally been opaque. We should keep it that way.
> The PMD should know what the HW supports.
If one operation is common for all PMDs, it is recommended to do it in
the ethdev layer.
> So it should show the capabilities correctly. Right?
Yes, the lane capabilities of different speeds are necessary for the user.
> And if the user provides incorrect settings, it should reject it.
There is no reason not to reject it in this case.
>
>>>> Anyway, if we add a speed lanes setting feature, we must report and set
>>>> speed and lanes capabilities for the user well;
>>>> otherwise, the user will be more confused.
>>> Reporting them well does not necessarily mean exposing all raw combinations as ethtool does.
>> Agreed.
>>>> [1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
>>>
>>> .

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-26 18:21                       ` Damodharam Ammepalli
@ 2024-03-30 11:38                         ` huangdengdui
  2024-04-01 20:07                           ` Thomas Monjalon
  0 siblings, 1 reply; 51+ messages in thread
From: huangdengdui @ 2024-03-30 11:38 UTC (permalink / raw)
  To: Damodharam Ammepalli, Ajit Khaparde, Thomas Monjalon
  Cc: lihuisong (C),
	Thomas Monjalon, roretzla, dev, ferruh.yigit, aman.deep.singh,
	yuying.zhang, andrew.rybchenko, stephen, jerinjacobk,
	liuyonglong, fengchengwen, haijie1



On 2024/3/27 2:21, Damodharam Ammepalli wrote:
> On Tue, Mar 26, 2024 at 11:12 AM Ajit Khaparde
> <ajit.khaparde@broadcom.com> wrote:
>>
>> On Tue, Mar 26, 2024 at 6:47 AM Ajit Khaparde
>> <ajit.khaparde@broadcom.com> wrote:
>>>
>>> On Tue, Mar 26, 2024 at 4:15 AM lihuisong (C) <lihuisong@huawei.com> wrote:
>>>>
>>>>
>>>> On 2024/3/26 18:30, Thomas Monjalon wrote:
>>>>> 26/03/2024 02:42, lihuisong (C):
>>>>>> On 2024/3/25 17:30, Thomas Monjalon wrote:
>>>>>>> 25/03/2024 07:24, huangdengdui:
>>>>>>>> On 2024/3/22 21:58, Thomas Monjalon wrote:
>>>>>>>>> 22/03/2024 08:09, Dengdui Huang:
>>>>>>>>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
>>>>>>>>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
>>>>>>>>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
>>>>>>>>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
>>>>>>>>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
>>>>>>>>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
>>>>>>>>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
>>>>>>>>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
>>>>>>>>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17)  /**< 10 Gbps 4lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2lanes */
>>>>>>>>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8lanes */
>>>>>>>>> I don't think it is a good idea to make this more complex.
>>>>>>>>> It brings nothing as far as I can see, compared to having speed and lanes separated.
>>>>>>>>> Can we have lanes information a separate value? no need for bitmask.
>>>>>>>>>
>>>>>>>> Hi Thomas, Ajit, roretzla, damodharam
>>>>>>>>
>>>>>>>> I also considered the option at the beginning of the design.
>>>>>>>> But this option is not used due to the following reasons:
>>>>>>>> 1. For the user, ethtool couples speed and lanes.
>>>>>>>> The result of querying the NIC capability is as follows:
>>>>>>>> Supported link modes:
>>>>>>>>           100000baseSR4/Full
>>>>>>>>           100000baseSR2/Full
>>>>>>>> The NIC capability is configured as follows:
>>>>>>>> ethtool -s eth1 speed 100000 lanes 4 autoneg off
>>>>>>>> ethtool -s eth1 speed 100000 lanes 2 autoneg off
>>>>>>>>
>>>>>>>> Therefore, users are more accustomed to the coupling of speed and lanes.
>>>>>>>>
>>>>>>>> 2. For the PHY, when the physical layer capability is configured through the MDIO,
>>>>>>>> the speed and lanes are also coupled.
>>>>>>>> For example:
>>>>>>>> Table 45–7—PMA/PMD control 2 register bit definitions[1]
>>>>>>>> PMA/PMD type selection
>>>>>>>>                           1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
>>>>>>>>                           0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
>>>>>>>>
>>>>>>>> Therefore, coupling speeds and lanes is easier to understand.
>>>>>>>> And it is easier for the driver to report the supported lanes.
>>>>>>>>
>>>>>>>> In addition, the code implementation is compatible with the old version.
>>>>>>>> When the driver does not support the lanes setting, the code does not need to be modified.
>>>>>>>>
>>>>>>>> So I think the speed and lanes coupling is better.
>>>>>>> I don't think so.
>>>>>>> You are mixing hardware implementation, user tool, and API.
>>>>>>> Having a separate and simple API is cleaner and not more difficult to handle
>>>>>>> in some get/set style functions.
>>>>>> Having a separate and simple API is cleaner. It's good.
>>>>>> But supported lane capabilities have a lot to do with the specified
>>>>>> speed. This is determined by the related specification.
>>>>>> If we add a separate API for speed lanes, it probably is hard to check
>>>>>> the validity of the configuration for speed and lanes.
>>>>>> And a lane setting API separated from speed is not good for
>>>>>> unifying all PMDs' behavior in the ethdev layer.
>>>>> Please let's be more specific.
>>>>> There are 3 needs:
>>>>>       - set PHY lane config
>>>>>       - get PHY lane config
>>>>>       - get PHY lane capabilities
>>>> IMO, these lane capabilities should be reported based on supported speed
>>>> capabilities.
>>>>>
>>>>> There is no problem providing a function to get the number of PHY lanes.
>>>>> It is possible to set PHY lanes number after defining a fixed speed.
>>>> yes it's ok.
>>>>>
>>>>>> The patch[1] is an example for this separate API.
>>>>>> I think it is not very good. It cannot tell the user and PMD the following points:
>>>>>> 1) users don't know what lanes should or can be set for a specified speed
>>>>>> on one NIC.
>>>>> This is about capabilities.
>>>>> Can we say a HW will support a maximum number of PHY lanes in general?
>>>>> We may need to associate the maximum speed per lane?
>>>>> Do we really need to associate PHY lane and PHY speed numbers for capabilities?
>>>> Personally, it should contain at least the relationship below.
>>>> speed 10G  --> 1lane | 4lane
>>>> speed 100G  --> 2lane | 4lane
>>>>> Example: if a HW supports 100G-4-lanes and 200G-2-lanes,
>>>>> may we assume it is also supporting 200G-4-lanes?
>>>> I think we cannot assume that the NIC also supports 200G-4-lanes,
>>>> because it has a lot to do with the HW design.
>>>>>
>>>>>> 2) how should PMD do for a supported lanes in their HW?
>>>>> I don't understand this question. Please rephrase.
>>>> I mean that the PMD doesn't know how many lanes to set when the lanes
>>>> from the user are not supported at a fixed speed by the PMD.
>>>> So the ethdev layer should limit the available lane number based on a fixed
>>>> speed.
>>>
>>> ethdev layer has generally been opaque. We should keep it that way.
>> I mis-typed.
>> %s/opaque/transparent
>>
>>
>>> The PMD should know what the HW supports.
>>> So it should show the capabilities correctly. Right?
>>> And if the user provides incorrect settings, it should reject it.
>>>
>>>>>
>>>>>> Anyway, if we add a speed lanes setting feature, we must report and set
>>>>>> speed and lanes capabilities for the user well;
>>>>>> otherwise, the user will be more confused.
>>>>> Reporting them well does not necessarily mean exposing all raw combinations as ethtool does.
>>>> Agreed.
>>>>>
>>>>>> [1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
>>>>>
>>>>>
>>>>>
> Our RFC patch's cmdline design is inspired by how ethtool works as it
> provides carriage return at user choice,
> which makes it backward compatible for no lanes config also. testpmd
> does not have that flexibility
> in the speed command and we resorted to a separate command for lanes
> and for all other reasons mentioned
> earlier.
> 
> 2nd, the lanes validation logic resting place, irrespective of lanes
> in speed or separate lanes command,
> like others said, the AutoNegotiation itself should suffice for link
> train and up. Taking this example,
> if the link came up at 100G PAM4-112 AN'd, and user for whatever
> reasons, even others mentioned earlier,
> may want it to force 100Gb NRZ which is 25G per lane 4, lanes), the
> user should aware of cmds the tool offers,
> and the driver can do final validation, for anomalies.
> 
> In any case, in RFC patch we are doing lanes validation in
> cmd_validate_lanes(portid_t pid, uint32_t *lanes), that gets populated
> by hw/driver based on current
> AN's link up speed and signalling type.
> 
> Today 400Gig is 4(pam4_56), 8(pam4_112) lanes and in future with a new
> HW design,
> it may be 2 x 200Gig lanes, at that time. we don't need to update
> testpmd, handle it in the driver.
> Maybe for a new speed 800Gb+, can demand an update to app/lib entries.
> 
As Thomas said, there are 3 needs:
 - set the number of lanes for a device.
 - get number of current lanes for a device.
 - get the number of lanes supported by the device.

For the first two needs, similar to Damodharam's RFC patch [1],
the lane setting and speed setting are separated; I'm happy to set
the lanes this way.

But, there are different solutions for the device to report the setting
lane capability, as follows:
1. Like the current patch, reporting device capabilities in speed and
   lane coupling mode. However, if we use this solution, we will have
   to couple the lanes setting with the speed setting.

2. Like Damodharam's RFC patch [1], the device reports the maximum
   number of supported lanes. Users can configure lanes arbitrarily,
   which is completely separated from the speed.

3. Similar to the FEC capability reported by a device, the device reports the
   relationship table of the number of lanes supported by the speed,
   for example:
      speed    lanes_capa
      50G      1,2
      100G     1,2,4
      200G     2,4

Options 1 and 2 have been discussed a lot above.

For solution 1, the speed and lanes are over-coupled, and the implementation is too
complex. But I think it's easier to understand and easier for the device to report
capabilities. In addition, the ethtool reporting capability also uses this mode.

For solution 2, as huisong said, users don't know what lanes should or can be set
for a specified speed on one NIC.

I think that when the device reports the capability, the lanes should be associated
with the speed. In this way, users can know which lanes are supported by the current
speed and verify the configuration validity.

So I think solution 3 is better. What do you think?

[1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
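
For reference, solution 3 is modelled on how FEC capabilities are
already reported by ethdev today; the existing API below shows the
shape a lanes variant would follow:

/* Existing FEC capability reporting in rte_ethdev.h */
struct rte_eth_fec_capa {
	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
	uint32_t capa;  /**< FEC capabilities bitmask */
};

int rte_eth_fec_get_capability(uint16_t port_id,
			       struct rte_eth_fec_capa *speed_fec_capa,
			       unsigned int num);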

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-03-30 11:38                         ` huangdengdui
@ 2024-04-01 20:07                           ` Thomas Monjalon
  2024-04-01 22:29                             ` Damodharam Ammepalli
  2024-04-02  8:37                             ` huangdengdui
  0 siblings, 2 replies; 51+ messages in thread
From: Thomas Monjalon @ 2024-04-01 20:07 UTC (permalink / raw)
  To: huangdengdui
  Cc: Damodharam Ammepalli, Ajit Khaparde, lihuisong (C),
	roretzla, dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, stephen, jerinjacobk, liuyonglong,
	fengchengwen, haijie1

30/03/2024 12:38, huangdengdui:
> But, there are different solutions for the device to report the setting
> lane capability, as follows:
> 1. Like the current patch, reporting device capabilities in speed and
>    lane coupling mode. However, if we use this solution, we will have
>    to couple the lanes setting with the speed setting.
> 
> 2. Like Damodharam's RFC patch [1], the device reports the maximum
>    number of supported lanes. Users can configure lanes arbitrarily,
>    which is completely separated from the speed.
> 
> 3. Similar to the FEC capability reported by a device, the device reports the
>    relationship table of the number of lanes supported by the speed,
>    for example:
>       speed    lanes_capa
>       50G      1,2
>       100G     1,2,4
>       200G     2,4
> 
> Options 1 and 2 have been discussed a lot above.
> 
> For solution 1, the speed and lanes are over-coupled, and the implementation is too
> complex. But I think it's easier to understand and easier for the device to report
> capabilities. In addition, the ethtool reporting capability also uses this mode.
> 
> For solution 2, as huisong said, users don't know what lanes should or can be set
> for a specified speed on one NIC.
> 
> I think that when the device reports the capability, the lanes should be associated
> with the speed. In this way, users can know which lanes are supported by the current
> speed and verify the configuration validity.
> 
> So I think solution 3 is better. What do you think?

I don't understand your proposals.
Please could you show the function signature for each option?




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-04-01 20:07                           ` Thomas Monjalon
@ 2024-04-01 22:29                             ` Damodharam Ammepalli
  2024-04-02  8:37                             ` huangdengdui
  1 sibling, 0 replies; 51+ messages in thread
From: Damodharam Ammepalli @ 2024-04-01 22:29 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: huangdengdui, Ajit Khaparde, lihuisong (C),
	roretzla, dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, stephen, jerinjacobk, liuyonglong,
	fengchengwen, haijie1

[-- Attachment #1: Type: text/plain, Size: 3818 bytes --]

On Mon, Apr 1, 2024 at 1:07 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 30/03/2024 12:38, huangdengdui:
> > But, there are different solutions for the device to report the setting
> > lane capability, as follows:
> > 1. Like the current patch, reporting device capabilities in speed and
> >    lane coupling mode. However, if we use this solution, we will have
> >    to couple the lanes setting with the speed setting.
> >
> > 2. Like Damodharam's RFC patch [1], the device reports the maximum
> >    number of supported lanes. Users can configure lanes arbitrarily,
> >    which is completely separated from the speed.
> >
> > 3. Similar to the FEC capability reported by a device, the device reports the
> >    relationship table of the number of lanes supported by the speed,
> >    for example:
> >       speed    lanes_capa
> >       50G      1,2
> >       100G     1,2,4
> >       200G     2,4
> >
> > Options 1 and 2 have been discussed a lot above.
> >
> > For solution 1, the speed and lanes are over-coupled, and the implementation is too
> > complex. But I think it's easier to understand and easier for the device to report
> > capabilities. In addition, the ethtool reporting capability also uses this mode.
> >
> > For solution 2, as huisong said, users don't know what lanes should or can be set
> > for a specified speed on one NIC.
> >
> > I think that when the device reports the capability, the lanes should be associated
> > with the speed. In this way, users can know which lanes are supported by the current
> > speed and verify the configuration validity.
> >
> > So I think solution 3 is better. What do you think?
>
> I don't understand your proposals.
> Please could you show the function signature for each option?
>
>
>
testpmd can query the driver, and the driver can export the latest bitmap, say in
rte_eth_speed_lanes_get()->supported_bmap

0  1Gb   link speed
1  10Gb  (NRZ: 10G per lane, 1 lane) link speed
2  25Gb  (NRZ: 25G per lane, 1 lane) link speed
3  40Gb  (NRZ: 10G per lane, 4 lanes) link speed
4  50Gb  (NRZ: 25G per lane, 2 lanes) link speed
5  100Gb (NRZ: 25G per lane, 4 lanes) link speed
6  50Gb  (PAM4-56: 50G per lane, 1 lane) link speed
7  100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
8  200Gb (PAM4-56: 50G per lane, 4 lanes) link speed
9  400Gb (PAM4-56: 50G per lane, 8 lanes) link speed
10 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
11 200Gb (PAM4-112: 100G per lane, 2 lanes) link speed
12 400Gb (PAM4-112: 100G per lane, 4 lanes) link speed
13 800Gb (PAM4-112: 100G per lane, 8 lanes) link speed
14 For future

In cmd_config_speed_specific_parsed()
   if (parse_and_check_speed_duplex(res->value1, res->value2, &link_speed) < 0)
        return;
+ /* validate speed x lanes combo */
+ if (!cmd_validate_lanes(res->id, link_speed))
+ return;

The driver can validate the remaining internal link parameters in
rte_eth_dev_start() before applying the config to the hardware.
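
A minimal sketch of that driver-side final check, with the helper names
invented for illustration (no specific PMD's code is quoted here):

/* Hypothetical PMD start hook: reject a speed x lanes combination the
 * hardware cannot do before programming the PHY. */
static int
pmd_dev_start(struct rte_eth_dev *dev)
{
	uint32_t speeds = dev->data->dev_conf.link_speeds;
	uint32_t lanes = pmd_requested_lanes(dev);     /* hypothetical */

	if (!pmd_speed_lanes_supported(speeds, lanes)) /* hypothetical */
		return -EINVAL;
	/* ... program the link parameters and start the port ... */
	return 0;
}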


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4233 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-04-01 20:07                           ` Thomas Monjalon
  2024-04-01 22:29                             ` Damodharam Ammepalli
@ 2024-04-02  8:37                             ` huangdengdui
  2024-04-02 15:28                               ` Stephen Hemminger
  2024-04-04 13:45                               ` Ferruh Yigit
  1 sibling, 2 replies; 51+ messages in thread
From: huangdengdui @ 2024-04-02  8:37 UTC (permalink / raw)
  To: Thomas Monjalon, Damodharam Ammepalli
  Cc: Damodharam Ammepalli, Ajit Khaparde, lihuisong (C),
	roretzla, dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, stephen, jerinjacobk, liuyonglong,
	fengchengwen, haijie1



On 2024/4/2 4:07, Thomas Monjalon wrote:
> 30/03/2024 12:38, huangdengdui:
>> But, there are different solutions for the device to report the setting
>> lane capability, as follows:
>> 1. Like the current patch, reporting device capabilities in speed and
>>    lane coupling mode. However, if we use this solution, we will have
>>    to couple the lanes setting with the speed setting.
>>
>> 2. Like Damodharam's RFC patch [1], the device reports the maximum
>>    number of supported lanes. Users can configure lanes arbitrarily,
>>    which is completely separated from the speed.
>>
>> 3. Similar to the FEC capability reported by a device, the device reports the
>>    relationship table of the number of lanes supported by the speed,
>>    for example:
>>       speed    lanes_capa
>>       50G      1,2
>>       100G     1,2,4
>>       200G     2,4
>>
>> Options 1 and 2 have been discussed a lot above.
>>
>> For solution 1, the speed and lanes are over-coupled, and the implementation is too
>> complex. But I think it's easier to understand and easier for the device to report
>> capabilities. In addition, the ethtool reporting capability also uses this mode.
>>
>> For solution 2, as huisong said, users don't know what lanes should or can be set
>> for a specified speed on one NIC.
>>
>> I think that when the device reports the capability, the lanes should be associated
>> with the speed. In this way, users can know which lanes are supported by the current
>> speed and verify the configuration validity.
>>
>> So I think solution 3 is better. What do you think?
> 
> I don't understand your proposals.
> Please could you show the function signature for each option?
> 
> 

I agree with separating the lanes setting from the speed setting.
I have a different proposal for device lanes capability reporting.

Three interfaces are added to lib/ethdev, like the FEC interfaces.
1. rte_eth_lanes_get(uint16_t port_id, uint32_t *capa)       /* get current lanes */
2. rte_eth_lanes_set(uint16_t port_id, uint32_t capa)
3. rte_eth_lanes_get_capa(uint16_t port_id, struct rte_eth_lanes_capa *speed_lanes_capa)

/* A structure used to get capabilities per link speed */
struct rte_eth_lanes_capa {
	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
	uint32_t capa;  /**< lanes capabilities bitmask */
};

For example, an ethdev reports the following lanes capability array:
struct rte_eth_lanes_capa device_capa[] = {
	{ RTE_ETH_SPEED_NUM_50G, 0x0003 }, //supports lanes 1 and 2 for 50G
	{ RTE_ETH_SPEED_NUM_100G, 0x000B } //supports lanes 1, 2 and 4 for 100G
};

The application can know which lanes are supported at a specified speed.

I think it's better to implement the setting lanes feature in this way.

Welcome to jump into the discussion.
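
Assuming the proposed rte_eth_lanes_get_capa() returns the number of
filled entries (a detail not spelled out above), an application could
consume the proposal like this sketch:

struct rte_eth_lanes_capa capa[8];
int i, num;

num = rte_eth_lanes_get_capa(port_id, capa); /* proposed API above */
for (i = 0; i < num; i++) {
	if (capa[i].speed != RTE_ETH_SPEED_NUM_100G)
		continue;
	if (capa[i].capa & RTE_BIT32(1))  /* bit 1 => 2 lanes supported */
		rte_eth_lanes_set(port_id, RTE_BIT32(1)); /* select 2 lanes */
}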

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-04-02  8:37                             ` huangdengdui
@ 2024-04-02 15:28                               ` Stephen Hemminger
  2024-04-04 13:45                               ` Ferruh Yigit
  1 sibling, 0 replies; 51+ messages in thread
From: Stephen Hemminger @ 2024-04-02 15:28 UTC (permalink / raw)
  To: huangdengdui
  Cc: Thomas Monjalon, Damodharam Ammepalli, Ajit Khaparde,
	lihuisong (C),
	roretzla, dev, ferruh.yigit, aman.deep.singh, yuying.zhang,
	andrew.rybchenko, jerinjacobk, liuyonglong, fengchengwen,
	haijie1

On Tue, 2 Apr 2024 16:37:39 +0800
huangdengdui <huangdengdui@huawei.com> wrote:

> On 2024/4/2 4:07, Thomas Monjalon wrote:
> > 30/03/2024 12:38, huangdengdui:  
> >> But, there are different solutions for the device to report the setting
> >> lane capability, as follows:
> >> 1. Like the current patch, reporting device capabilities in speed and
> >>    lane coupling mode. However, if we use this solution, we will have
> >>    to couple the lanes setting with the speed setting.
> >>
> >> 2. Like Damodharam's RFC patch [1], the device reports the maximum
> >>    number of supported lanes. Users can configure lanes arbitrarily,
> >>    which is completely separated from the speed.
> >>
> >> 3. Similar to the FEC capability reported by a device, the device reports the
> >>    relationship table of the number of lanes supported by the speed,
> >>    for example:
> >>       speed    lanes_capa
> >>       50G      1,2
> >>       100G     1,2,4
> >>       200G     2,4
> >>
> >> Options 1 and 2 have been discussed a lot above.
> >>
> >> For solution 1, the speed and lanes are over-coupled, and the implementation is too
> >> complex. But I think it's easier to understand and easier for the device to report
> >> capabilities. In addition, the ethtool reporting capability also uses this mode.
> >>
> >> For solution 2, as huisong said, users don't know what lanes should or can be set
> >> for a specified speed on one NIC.
> >>
> >> I think that when the device reports the capability, the lanes should be associated
> >> with the speed. In this way, users can know which lanes are supported at the current
> >> speed and validate the configuration.
> >>
> >> So I think solution 3 is better. What do you think?  
> > 
> > I don't understand your proposals.
> > Please could you show the function signature for each option?
> > 
> >   
> 
> I agree with separating the lanes setting from the speed setting.
> I have a different proposal for device lanes capability reporting.
> 
> Three interfaces are added to lib/ethdev, similar to the FEC interfaces.
> 1. rte_eth_lanes_get(uint16_t port_id, uint32_t *capa)       /* get current lanes */
> 2. rte_eth_lanes_set(uint16_t port_id, uint32_t capa)
> 3. rte_eth_lanes_get_capa(uint16_t port_id, struct rte_eth_lanes_capa *speed_lanes_capa)
> 
> /* A structure used to get capabilities per link speed */
> struct rte_eth_lanes_capa {
> 	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
> 	uint32_t capa;  /**< lanes capabilities bitmask */
> };
> 
> For example, an ethdev reports the following lanes capability array:
> struct rte_eth_lanes_capa device_capa[] = {
> 	{ RTE_ETH_SPEED_NUM_50G, 0x0003 },  // supports lanes 1 and 2 for 50G
> 	{ RTE_ETH_SPEED_NUM_100G, 0x000B }  // supports lanes 1, 2 and 4 for 100G
> };
> 
> The application can know which lanes are supported at a specified speed.
> 
> I think it's better to implement the lane-setting feature in this way.
> 
> Welcome to jump into the discussion.

Wouldn't the best way to handle this be to make lanes as similar as possible
to how link speed is handled in ethdev now? It would mean holding off until the
24.11 release to do it right, with things like adding link_lanes to the
rte_eth_link struct.
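
For illustration, that could look roughly like the sketch below. The existing
fields match today's rte_eth_link; the link_lanes field, its width and its
placement are assumptions for discussion only:

struct rte_eth_link {
	uint32_t link_speed;        /**< RTE_ETH_SPEED_NUM_* */
	uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
	uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[FIXED/AUTONEG] */
	uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
	uint16_t link_lanes;        /**< active lane count, 0 = unknown (new) */
};

A 16-bit count would also keep the struct within the 8 bytes it currently
occupies for the atomic link read/write path.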


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 1/6] ethdev: support setting lanes
  2024-04-02  8:37                             ` huangdengdui
  2024-04-02 15:28                               ` Stephen Hemminger
@ 2024-04-04 13:45                               ` Ferruh Yigit
  1 sibling, 0 replies; 51+ messages in thread
From: Ferruh Yigit @ 2024-04-04 13:45 UTC (permalink / raw)
  To: huangdengdui, Thomas Monjalon, Damodharam Ammepalli
  Cc: Ajit Khaparde, lihuisong (C),
	roretzla, dev, aman.deep.singh, yuying.zhang, andrew.rybchenko,
	stephen, jerinjacobk, liuyonglong, fengchengwen, haijie1

On 4/2/2024 9:37 AM, huangdengdui wrote:
> 
> 
> On 2024/4/2 4:07, Thomas Monjalon wrote:
>> 30/03/2024 12:38, huangdengdui:
>>> But, there are different solutions for the device to report the
>>> lane-setting capability, as follows:
>>> 1. Like the current patch, reporting device capabilities in speed and
>>>    lane coupling mode. However, if we use this solution, we will have
>>>    to couple the lanes setting with the speed setting.
>>>
>>> 2. Like Damodharam's RFC patch [1], the device reports the maximum
>>>    number of supported lanes. Users can configure any lane count,
>>>    which is completely decoupled from the speed.
>>>
>>> 3. Similar to the FEC capability reported by a device, the device reports the
>>>    relationship table of the number of lanes supported by the speed,
>>>    for example:
>>>       speed    lanes_capa
>>>       50G      1,2
>>>       100G     1,2,4
>>>       200G     2,4
>>>
>>> Options 1 and 2 have been discussed a lot above.
>>>
>>> For solution 1, the speed and lanes are over-coupled, and the implementation is too
>>> complex. But I think it's easier to understand and easier for the device to report
>>> capabilities. In addition, ethtool's capability reporting also uses this mode.
>>>
>>> For solution 2, as huisong said, users don't know which lanes should or can be
>>> set for a specified speed on a given NIC.
>>>
>>> I think that when the device reports the capability, the lanes should be associated
>>> with the speed. In this way, users can know which lanes are supported at the current
>>> speed and validate the configuration.
>>>
>>> So I think solution 3 is better. What do you think?
>>
>> I don't understand your proposals.
>> Please could you show the function signature for each option?
>>
>>
> 
> I agree with separating the lanes setting from the speed setting.
> I have a different proposal for device lanes capability reporting.
> 
> Three interfaces are added to lib/ethdev, similar to the FEC interfaces.
> 1. rte_eth_lanes_get(uint16_t port_id, uint32_t *capa)       /* get current lanes */
> 2. rte_eth_lanes_set(uint16_t port_id, uint32_t capa)
> 3. rte_eth_lanes_get_capa(uint16_t port_id, struct rte_eth_lanes_capa *speed_lanes_capa)
> 
> /* A structure used to get capabilities per link speed */
> struct rte_eth_lanes_capa {
> 	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
> 	uint32_t capa;  /**< lanes capabilities bitmask */
> };
> 
> For example, an ethdev reports the following lanes capability array:
> struct rte_eth_lanes_capa device_capa[] = {
> 	{ RTE_ETH_SPEED_NUM_50G, 0x0003 },  // supports lanes 1 and 2 for 50G
> 	{ RTE_ETH_SPEED_NUM_100G, 0x000B }  // supports lanes 1, 2 and 4 for 100G
> };
> 
> The application can know which lanes are supported at a specified speed.
> 
> I think it's better to implement the lane-setting feature in this way.
> 
> Welcome to jump into the discussion.
>

Hi Dengdui,

+1 to option 3; as lane capability is coupled with speed, it makes sense to
expose lane capability per speed, as is done for FEC capability.
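
For reference, the existing FEC counterpart in ethdev is:

struct rte_eth_fec_capa {
	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
	uint32_t capa;  /**< FEC capabilities bitmask */
};
int rte_eth_fec_get_capability(uint16_t port_id,
		struct rte_eth_fec_capa *speed_fec_capa, unsigned int num);

The proposed lanes query could follow the same shape; an extra "num"
argument as in the FEC call would let the application size the array
(that argument is a suggestion here, not part of the proposal above):

int rte_eth_lanes_get_capa(uint16_t port_id,
		struct rte_eth_lanes_capa *speed_lanes_capa, unsigned int num);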


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v2 0/6] support setting lanes
  2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
                       ` (5 preceding siblings ...)
  2024-03-22  7:09     ` [PATCH v2 6/6] app/testpmd: " Dengdui Huang
@ 2024-04-04 13:58     ` Ferruh Yigit
  6 siblings, 0 replies; 51+ messages in thread
From: Ferruh Yigit @ 2024-04-04 13:58 UTC (permalink / raw)
  To: Dengdui Huang, dev, damodharam.ammepalli
  Cc: aman.deep.singh, yuying.zhang, thomas, andrew.rybchenko, stephen,
	jerinjacobk, ajit.khaparde, liuyonglong, fengchengwen, haijie1,
	lihuisong

On 3/22/2024 7:09 AM, Dengdui Huang wrote:
> At the physical layer, multiple lanes are often used to work together
> to achieve higher speeds. So a given speed can be achieved with different
> numbers of lanes. For example, the following solutions can be used to
> implement 100G:
> 1. Combine four 25G lanes
> 2. Combine two 50G lanes
> 3. Use a single 100G lane
> 
> It is assumed that two ports are interconnected and both ports support
> the above three solutions. But if we just configure the speed to 100G,
> and one port uses four 25G lanes by default while the other uses two 50G
> lanes by default, the link cannot come up. In this case, we need to
> configure both ports to use the same solution (for example, two 50G
> lanes) so that the link can come up.
> 
> This patch set adds support for setting lanes in ethdev. Applications can
> use this feature to configure lanes so that both ports select the same solution.
>

Hi Dengdui, Damodharam,

As the details of the implementation are under discussion, I have a
high-level question.

Why/when does an application need to configure the lanes explicitly?

The above description mentions that if one port is configured as 4x25G and
the other as 2x50G, the link can't come up. How do you end up in this situation?

Let's assume the first port is configured as 100G and FW configures it as
4x25G; the user then configures the second port as 100G. Why can't FW
detect this and configure the ports with the correct lane configuration?

In this case, if we push the responsibility to the user, how will she
know, when configuring the second port, what the lane configuration of the
first port is, and what the proper lane configuration for the second port
is?

Instead of pushing this configuration to the user, why can't it be handled
internally?

As long as the device configures the speed the user requested, the lane
configuration has no impact on the user, right?
Is there a case/reason the user needs to explicitly select, let's say,
PAM4 over NRZ?


Thanks,
ferruh

^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2024-04-04 13:58 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-12  7:52 [PATCH 0/3] support setting lanes Dengdui Huang
2024-03-12  7:52 ` [PATCH 1/3] ethdev: " Dengdui Huang
2024-03-19  3:02   ` Stephen Hemminger
2024-03-20  1:16     ` huangdengdui
2024-03-12  7:52 ` [PATCH 2/3] net/hns3: " Dengdui Huang
2024-03-12  7:52 ` [PATCH 3/3] app/testpmd: " Dengdui Huang
2024-03-15 21:47   ` Damodharam Ammepalli
2024-03-19  2:38     ` huangdengdui
2024-03-22  7:09   ` [PATCH v2 0/6] " Dengdui Huang
2024-03-22  7:09     ` [PATCH v2 1/6] ethdev: " Dengdui Huang
2024-03-22 13:58       ` Thomas Monjalon
2024-03-22 15:15         ` Ajit Khaparde
2024-03-22 17:32           ` Tyler Retzlaff
2024-03-22 22:30             ` Damodharam Ammepalli
2024-03-25  6:24         ` huangdengdui
2024-03-25  9:30           ` Thomas Monjalon
2024-03-25 21:14             ` Damodharam Ammepalli
2024-03-26  1:42             ` lihuisong (C)
2024-03-26  3:45               ` Ajit Khaparde
2024-03-26 10:30               ` Thomas Monjalon
2024-03-26 11:15                 ` lihuisong (C)
2024-03-26 13:47                   ` Ajit Khaparde
2024-03-26 18:11                     ` Ajit Khaparde
2024-03-26 18:21                       ` Damodharam Ammepalli
2024-03-30 11:38                         ` huangdengdui
2024-04-01 20:07                           ` Thomas Monjalon
2024-04-01 22:29                             ` Damodharam Ammepalli
2024-04-02  8:37                             ` huangdengdui
2024-04-02 15:28                               ` Stephen Hemminger
2024-04-04 13:45                               ` Ferruh Yigit
2024-03-29  3:25                     ` lihuisong (C)
2024-03-22  7:09     ` [PATCH v2 2/6] test: updated UT for " Dengdui Huang
2024-03-22  7:09     ` [PATCH v2 3/6] ethdev: add function to parse link mode info Dengdui Huang
2024-03-22  7:09     ` [PATCH v2 4/6] net/hns3: use parse link mode info function Dengdui Huang
2024-03-22  7:09     ` [PATCH v2 5/6] net/hns3: support setting lanes Dengdui Huang
2024-03-22  7:09     ` [PATCH v2 6/6] app/testpmd: " Dengdui Huang
2024-04-04 13:58     ` [PATCH v2 0/6] " Ferruh Yigit
2024-03-18 14:56 ` [PATCH 0/3] " Thomas Monjalon
2024-03-18 21:26   ` Damodharam Ammepalli
2024-03-18 21:42     ` Stephen Hemminger
2024-03-18 22:55       ` Ajit Khaparde
2024-03-20 11:41     ` Thomas Monjalon
2024-03-20 12:31     ` Ferruh Yigit
2024-03-21  2:02       ` huangdengdui
2024-03-21  8:28         ` Thomas Monjalon
2024-03-22  2:28           ` huangdengdui
2024-03-22  4:38             ` Jerin Jacob
2024-03-22  5:25               ` Ajit Khaparde
2024-03-22  5:51                 ` Jerin Jacob
2024-03-22 13:51                   ` Thomas Monjalon
2024-03-25 14:04                     ` Jerin Jacob
