DPDK patches and discussions
* [PATCH 0/6] net/ice improve qos
@ 2024-01-02 19:42 Qi Zhang
  2024-01-02 19:42 ` [PATCH 1/6] net/ice: remove redundant code Qi Zhang
                   ` (6 more replies)
  0 siblings, 7 replies; 9+ messages in thread
From: Qi Zhang @ 2024-01-02 19:42 UTC (permalink / raw)
  To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang

This patchset enhances the ice rte_tm implementation.

Qi Zhang (6):
  net/ice: remove redundant code
  net/ice: support VSI level bandwidth config
  net/ice: support queue group weight configure
  net/ice: refactor hardware Tx sched node config
  net/ice: reset Tx sched node during commit
  net/ice: support Tx sched commit before dev_start

 drivers/net/ice/base/ice_sched.c |   4 +-
 drivers/net/ice/base/ice_sched.h |   7 +-
 drivers/net/ice/ice_ethdev.c     |   9 +
 drivers/net/ice/ice_ethdev.h     |   5 +
 drivers/net/ice/ice_tm.c         | 361 +++++++++++++++++++++----------
 5 files changed, 269 insertions(+), 117 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/6] net/ice: remove redundant code
  2024-01-02 19:42 [PATCH 0/6] net/ice improve qos Qi Zhang
@ 2024-01-02 19:42 ` Qi Zhang
  2024-01-02 19:42 ` [PATCH 2/6] net/ice: support VSI level bandwidth config Qi Zhang
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Qi Zhang @ 2024-01-02 19:42 UTC (permalink / raw)
  To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang

The committed flag for Tx scheduler configuration is not used
in PF-only mode; remove the redundant code.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_tm.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index f5ea47ae83..9e2f981fa3 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -390,13 +390,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	if (!params || !error)
 		return -EINVAL;
 
-	/* if already committed */
-	if (pf->tm_conf.committed) {
-		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-		error->message = "already committed";
-		return -EINVAL;
-	}
-
 	ret = ice_node_param_check(pf, node_id, priority, weight,
 				    params, error);
 	if (ret)
@@ -579,13 +572,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	if (!error)
 		return -EINVAL;
 
-	/* if already committed */
-	if (pf->tm_conf.committed) {
-		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-		error->message = "already committed";
-		return -EINVAL;
-	}
-
 	if (node_id == RTE_TM_NODE_ID_NULL) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "invalid node id";
-- 
2.31.1



* [PATCH 2/6] net/ice: support VSI level bandwidth config
  2024-01-02 19:42 [PATCH 0/6] net/ice improve qos Qi Zhang
  2024-01-02 19:42 ` [PATCH 1/6] net/ice: remove redundant code Qi Zhang
@ 2024-01-02 19:42 ` Qi Zhang
  2024-01-02 19:42 ` [PATCH 3/6] net/ice: support queue group weight configure Qi Zhang
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Qi Zhang @ 2024-01-02 19:42 UTC (permalink / raw)
  To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang

Enable the configuration of peak and committed rates for a Tx scheduler
node at the VSI level. This patch also consolidates rate configuration
across the various levels into a single function, 'ice_set_node_rate'.
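The conversion done by the new helper is worth spelling out: rte_tm shaper profiles carry rates in bytes per second, while the scheduler takes Kbps, and a missing profile (or a zero rate) selects the default bandwidth. A minimal sketch of that conversion, assuming ICE_SCHED_DFLT_BW is the 0xFFFFFFFF "use default" sentinel from the base code:

```c
#include <assert.h>
#include <stdint.h>

#define BITS_PER_BYTE 8
/* Sentinel requesting default bandwidth (assumed value of ICE_SCHED_DFLT_BW) */
#define SCHED_DFLT_BW 0xFFFFFFFFu

/* Convert a shaper rate in bytes per second to the Kbps value the
 * scheduler expects; 0 bytes/s (no limit configured) keeps the default. */
uint32_t rate_bytes_to_kbps(uint64_t bytes_per_sec)
{
	if (bytes_per_sec == 0)
		return SCHED_DFLT_BW;
	/* divide before multiplying, as the driver does, to limit overflow */
	return (uint32_t)(bytes_per_sec / 1000 * BITS_PER_BYTE);
}
```

With this helper, 125,000,000 bytes/s (1 Gbit/s) maps to 1,000,000 Kbps, and both the "no profile" and "rate 0" cases collapse to the default-bandwidth sentinel.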

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_sched.c |   2 +-
 drivers/net/ice/base/ice_sched.h |   4 +-
 drivers/net/ice/ice_tm.c         | 142 +++++++++++++++++++------------
 3 files changed, 91 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index a4d31647fe..23cc1ee50a 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4429,7 +4429,7 @@ ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
  * NOTE: Caller provides the correct SRL node in case of shared profile
  * settings.
  */
-static enum ice_status
+enum ice_status
 ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
 			  enum ice_rl_type rl_type, u32 bw)
 {
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 4b68f3f535..a600ff9a24 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -237,5 +237,7 @@ enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status ice_sched_replay_root_node_bw(struct ice_port_info *pi);
 enum ice_status
 ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
-
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw);
 #endif /* _ICE_SCHED_H_ */
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 9e2f981fa3..d9187af8af 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -663,6 +663,55 @@ static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int ice_set_node_rate(struct ice_hw *hw,
+			     struct ice_tm_node *tm_node,
+			     struct ice_sched_node *sched_node)
+{
+	enum ice_status status;
+	bool reset = false;
+	uint32_t peak = 0;
+	uint32_t committed = 0;
+	uint32_t rate;
+
+	if (tm_node == NULL || tm_node->shaper_profile == NULL) {
+		reset = true;
+	} else {
+		peak = (uint32_t)tm_node->shaper_profile->profile.peak.rate;
+		committed = (uint32_t)tm_node->shaper_profile->profile.committed.rate;
+	}
+
+	if (reset || peak == 0)
+		rate = ICE_SCHED_DFLT_BW;
+	else
+		rate = peak / 1000 * BITS_PER_BYTE;
+
+
+	status = ice_sched_set_node_bw_lmt(hw->port_info,
+					   sched_node,
+					   ICE_MAX_BW,
+					   rate);
+	if (status) {
+		PMD_DRV_LOG(ERR, "Failed to set max bandwidth for node %u", tm_node->id);
+		return -EINVAL;
+	}
+
+	if (reset || committed == 0)
+		rate = ICE_SCHED_DFLT_BW;
+	else
+		rate = committed / 1000 * BITS_PER_BYTE;
+
+	status = ice_sched_set_node_bw_lmt(hw->port_info,
+					   sched_node,
+					   ICE_MIN_BW,
+					   rate);
+	if (status) {
+		PMD_DRV_LOG(ERR, "Failed to set min bandwidth for node %u", tm_node->id);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				 int clear_on_fail,
 				 __rte_unused struct rte_tm_error *error)
@@ -673,13 +722,11 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
 	struct ice_tm_node *tm_node;
 	struct ice_sched_node *node;
-	struct ice_sched_node *vsi_node;
+	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
 	struct ice_vsi *vsi;
 	int ret_val = ICE_SUCCESS;
-	uint64_t peak = 0;
-	uint64_t committed = 0;
 	uint8_t priority;
 	uint32_t i;
 	uint32_t idx_vsi_child;
@@ -704,6 +751,18 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	for (i = 0; i < vsi_layer; i++)
 		node = node->children[0];
 	vsi_node = node;
+
+	tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
+
+	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+	if (ret_val) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		PMD_DRV_LOG(ERR,
+			    "configure vsi node %u bandwidth failed",
+			    tm_node->id);
+		goto reset_vsi;
+	}
+
 	nb_vsi_child = vsi_node->num_children;
 	nb_qg = vsi_node->children[0]->num_children;
 
@@ -722,7 +781,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			if (ret_val) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 				PMD_DRV_LOG(ERR, "start queue %u failed", qid);
-				goto fail_clear;
+				goto reset_vsi;
 			}
 			txq = dev->data->tx_queues[qid];
 			q_teid = txq->q_teid;
@@ -730,7 +789,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			if (queue_node == NULL) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 				PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
-				goto fail_clear;
+				goto reset_vsi;
 			}
 			if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
 				continue;
@@ -738,28 +797,19 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			if (ret_val) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 				PMD_DRV_LOG(ERR, "move queue %u failed", qid);
-				goto fail_clear;
+				goto reset_vsi;
 			}
 		}
-		if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
-			uint32_t node_teid = qgroup_sched_node->info.node_teid;
-			/* Transfer from Byte per seconds to Kbps */
-			peak = tm_node->shaper_profile->profile.peak.rate;
-			peak = peak / 1000 * BITS_PER_BYTE;
-			ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
-								   node_teid,
-								   ICE_AGG_TYPE_Q,
-								   tm_node->tc,
-								   ICE_MAX_BW,
-								   (u32)peak);
-			if (ret_val) {
-				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-				PMD_DRV_LOG(ERR,
-					    "configure queue group %u bandwidth failed",
-					    tm_node->id);
-				goto fail_clear;
-			}
+
+		ret_val = ice_set_node_rate(hw, tm_node, qgroup_sched_node);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			PMD_DRV_LOG(ERR,
+				    "configure queue group %u bandwidth failed",
+				    tm_node->id);
+			goto reset_vsi;
 		}
+
 		priority = 7 - tm_node->priority;
 		ret_val = ice_sched_cfg_sibl_node_prio_lock(hw->port_info, qgroup_sched_node,
 							    priority);
@@ -777,7 +827,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (idx_vsi_child >= nb_vsi_child) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "too many queues");
-			goto fail_clear;
+			goto reset_vsi;
 		}
 	}
 
@@ -786,37 +836,17 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		txq = dev->data->tx_queues[qid];
 		vsi = txq->vsi;
 		q_teid = txq->q_teid;
-		if (tm_node->shaper_profile) {
-			/* Transfer from Byte per seconds to Kbps */
-			if (tm_node->shaper_profile->profile.peak.rate > 0) {
-				peak = tm_node->shaper_profile->profile.peak.rate;
-				peak = peak / 1000 * BITS_PER_BYTE;
-				ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
-							   tm_node->tc, tm_node->id,
-							   ICE_MAX_BW, (u32)peak);
-				if (ret_val) {
-					error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-					PMD_DRV_LOG(ERR,
-						    "configure queue %u peak bandwidth failed",
-						    tm_node->id);
-					goto fail_clear;
-				}
-			}
-			if (tm_node->shaper_profile->profile.committed.rate > 0) {
-				committed = tm_node->shaper_profile->profile.committed.rate;
-				committed = committed / 1000 * BITS_PER_BYTE;
-				ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
-							   tm_node->tc, tm_node->id,
-							   ICE_MIN_BW, (u32)committed);
-				if (ret_val) {
-					error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-					PMD_DRV_LOG(ERR,
-						    "configure queue %u committed bandwidth failed",
-						    tm_node->id);
-					goto fail_clear;
-				}
-			}
+
+		queue_node = ice_sched_get_node(hw->port_info, q_teid);
+		ret_val = ice_set_node_rate(hw, tm_node, queue_node);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			PMD_DRV_LOG(ERR,
+				    "configure queue %u bandwidth failed",
+				    tm_node->id);
+			goto reset_vsi;
 		}
+
 		priority = 7 - tm_node->priority;
 		ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
 						 &q_teid, &priority);
@@ -838,6 +868,8 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 
 	return ret_val;
 
+reset_vsi:
+	ice_set_node_rate(hw, NULL, vsi_node);
 fail_clear:
 	/* clear all the traffic manager configuration */
 	if (clear_on_fail) {
-- 
2.31.1



* [PATCH 3/6] net/ice: support queue group weight configure
  2024-01-02 19:42 [PATCH 0/6] net/ice improve qos Qi Zhang
  2024-01-02 19:42 ` [PATCH 1/6] net/ice: remove redundant code Qi Zhang
  2024-01-02 19:42 ` [PATCH 2/6] net/ice: support VSI level bandwidth config Qi Zhang
@ 2024-01-02 19:42 ` Qi Zhang
  2024-01-02 19:42 ` [PATCH 4/6] net/ice: refactor hardware Tx sched node config Qi Zhang
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Qi Zhang @ 2024-01-02 19:42 UTC (permalink / raw)
  To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang

Enable the configuration of weight for a Tx scheduler node at
the queue group level. This patch also consolidates weight
configuration across the various levels by exposing the base
code API 'ice_sched_cfg_node_bw_alloc'.
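The weight passed to 'ice_sched_cfg_node_bw_alloc' sets a node's bandwidth allocation relative to its siblings: under weighted round-robin, a sibling's share of the parent's bandwidth is its weight over the sum of the sibling weights. A small illustrative helper (not driver code) showing that arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Fraction of the parent's bandwidth a sibling receives under weighted
 * round-robin: its own weight divided by the sum of all sibling weights. */
double wrr_share(uint16_t weight, const uint16_t *sibling_weights, int n)
{
	uint32_t total = 0;
	for (int i = 0; i < n; i++)
		total += sibling_weights[i];
	return total ? (double)weight / (double)total : 0.0;
}
```

So two queue groups with weights 3 and 1 split the VSI bandwidth 75%/25% when both are backlogged.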

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_sched.c |  2 +-
 drivers/net/ice/base/ice_sched.h |  3 +++
 drivers/net/ice/ice_tm.c         | 27 ++++++++++++++++++++-------
 3 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 23cc1ee50a..a1dd0c6ace 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3020,7 +3020,7 @@ ice_sched_update_elem(struct ice_hw *hw, struct ice_sched_node *node,
  *
  * This function configures node element's BW allocation.
  */
-static enum ice_status
+enum ice_status
 ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
 			    enum ice_rl_type rl_type, u16 bw_alloc)
 {
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index a600ff9a24..5b35fd564e 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -240,4 +240,7 @@ ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
 enum ice_status
 ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
 			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u16 bw_alloc);
 #endif /* _ICE_SCHED_H_ */
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d9187af8af..604d045e2c 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -529,7 +529,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
 			    level_id);
 
-	if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+	if (tm_node->weight != 1 &&
+	    level_id != ICE_TM_NODE_TYPE_QUEUE && level_id != ICE_TM_NODE_TYPE_QGROUP)
 		PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
 			    level_id);
 
@@ -725,7 +726,6 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
-	struct ice_vsi *vsi;
 	int ret_val = ICE_SUCCESS;
 	uint8_t priority;
 	uint32_t i;
@@ -819,6 +819,18 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				    tm_node->priority);
 			goto fail_clear;
 		}
+
+		ret_val = ice_sched_cfg_node_bw_alloc(hw, qgroup_sched_node,
+						      ICE_MAX_BW,
+						      (uint16_t)tm_node->weight);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+			PMD_DRV_LOG(ERR, "configure queue group %u weight %u failed",
+				    tm_node->id,
+				    tm_node->weight);
+			goto fail_clear;
+		}
+
 		idx_qg++;
 		if (idx_qg >= nb_qg) {
 			idx_qg = 0;
@@ -834,7 +846,6 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	TAILQ_FOREACH(tm_node, queue_list, node) {
 		qid = tm_node->id;
 		txq = dev->data->tx_queues[qid];
-		vsi = txq->vsi;
 		q_teid = txq->q_teid;
 
 		queue_node = ice_sched_get_node(hw->port_info, q_teid);
@@ -856,12 +867,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			goto fail_clear;
 		}
 
-		ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
-					     tm_node->tc, tm_node->id,
-					     ICE_MAX_BW, (u32)tm_node->weight);
+		queue_node = ice_sched_get_node(hw->port_info, q_teid);
+		ret_val = ice_sched_cfg_node_bw_alloc(hw, queue_node, ICE_MAX_BW,
+						      (uint16_t)tm_node->weight);
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
-			PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+			PMD_DRV_LOG(ERR, "configure queue %u weight %u failed",
+				    tm_node->id,
+				    tm_node->weight);
 			goto fail_clear;
 		}
 	}
-- 
2.31.1



* [PATCH 4/6] net/ice: refactor hardware Tx sched node config
  2024-01-02 19:42 [PATCH 0/6] net/ice improve qos Qi Zhang
                   ` (2 preceding siblings ...)
  2024-01-02 19:42 ` [PATCH 3/6] net/ice: support queue group weight configure Qi Zhang
@ 2024-01-02 19:42 ` Qi Zhang
  2024-01-02 19:42 ` [PATCH 5/6] net/ice: reset Tx sched node during commit Qi Zhang
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Qi Zhang @ 2024-01-02 19:42 UTC (permalink / raw)
  To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang

Consolidate Tx scheduler node configuration into a single function,
'ice_cfg_hw_node', where rate limit, weight and priority are
configured for the queue group level and queue level.
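Two details of the consolidated helper are easy to miss: rte_tm treats 0 as the highest priority while the scheduler's sibling-priority field uses the opposite ordering, so the patch maps a node priority p to 7 - p; and when no tm_node is attached it falls back to priority 0 and weight 4. A sketch of just that mapping (default values taken from the patch, function names illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Map an rte_tm priority (0 = highest, range 0..7) to the scheduler's
 * sibling priority; a detached node (NULL) gets the default of 0. */
uint8_t hw_sibling_priority(const uint8_t *tm_priority)
{
	return tm_priority ? (uint8_t)(7 - *tm_priority) : 0;
}

/* WRR weight, defaulting to 4 when no tm_node is attached. */
uint16_t hw_weight(const uint16_t *tm_weight)
{
	return tm_weight ? *tm_weight : 4;
}
```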

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_tm.c | 97 ++++++++++++++++++++--------------------
 1 file changed, 49 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 604d045e2c..20cc47fff1 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -713,6 +713,49 @@ static int ice_set_node_rate(struct ice_hw *hw,
 	return 0;
 }
 
+static int ice_cfg_hw_node(struct ice_hw *hw,
+			   struct ice_tm_node *tm_node,
+			   struct ice_sched_node *sched_node)
+{
+	enum ice_status status;
+	uint8_t priority;
+	uint16_t weight;
+	int ret;
+
+	ret = ice_set_node_rate(hw, tm_node, sched_node);
+	if (ret) {
+		PMD_DRV_LOG(ERR,
+			    "configure queue group %u bandwidth failed",
+			    sched_node->info.node_teid);
+		return ret;
+	}
+
+	priority = tm_node ? (7 - tm_node->priority) : 0;
+	status = ice_sched_cfg_sibl_node_prio(hw->port_info,
+					      sched_node,
+					      priority);
+	if (status) {
+		PMD_DRV_LOG(ERR, "configure node %u priority %u failed",
+			    sched_node->info.node_teid,
+			    priority);
+		return -EINVAL;
+	}
+
+	weight = tm_node ? (uint16_t)tm_node->weight : 4;
+
+	status = ice_sched_cfg_node_bw_alloc(hw, sched_node,
+					     ICE_MAX_BW,
+					     weight);
+	if (status) {
+		PMD_DRV_LOG(ERR, "configure node %u weight %u failed",
+			    sched_node->info.node_teid,
+			    weight);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				 int clear_on_fail,
 				 __rte_unused struct rte_tm_error *error)
@@ -726,8 +769,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
-	int ret_val = ICE_SUCCESS;
-	uint8_t priority;
+	int ret_val = 0;
 	uint32_t i;
 	uint32_t idx_vsi_child;
 	uint32_t idx_qg;
@@ -801,36 +843,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			}
 		}
 
-		ret_val = ice_set_node_rate(hw, tm_node, qgroup_sched_node);
+		ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR,
-				    "configure queue group %u bandwidth failed",
+				    "configure queue group node %u failed",
 				    tm_node->id);
 			goto reset_vsi;
 		}
 
-		priority = 7 - tm_node->priority;
-		ret_val = ice_sched_cfg_sibl_node_prio_lock(hw->port_info, qgroup_sched_node,
-							    priority);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
-			PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
-				    tm_node->priority);
-			goto fail_clear;
-		}
-
-		ret_val = ice_sched_cfg_node_bw_alloc(hw, qgroup_sched_node,
-						      ICE_MAX_BW,
-						      (uint16_t)tm_node->weight);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
-			PMD_DRV_LOG(ERR, "configure queue group %u weight %u failed",
-				    tm_node->id,
-				    tm_node->weight);
-			goto fail_clear;
-		}
-
 		idx_qg++;
 		if (idx_qg >= nb_qg) {
 			idx_qg = 0;
@@ -847,36 +868,16 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		qid = tm_node->id;
 		txq = dev->data->tx_queues[qid];
 		q_teid = txq->q_teid;
-
 		queue_node = ice_sched_get_node(hw->port_info, q_teid);
-		ret_val = ice_set_node_rate(hw, tm_node, queue_node);
+
+		ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR,
-				    "configure queue %u bandwidth failed",
+				    "configure queue node %u failed",
 				    tm_node->id);
 			goto reset_vsi;
 		}
-
-		priority = 7 - tm_node->priority;
-		ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
-						 &q_teid, &priority);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
-			PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
-			goto fail_clear;
-		}
-
-		queue_node = ice_sched_get_node(hw->port_info, q_teid);
-		ret_val = ice_sched_cfg_node_bw_alloc(hw, queue_node, ICE_MAX_BW,
-						      (uint16_t)tm_node->weight);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
-			PMD_DRV_LOG(ERR, "configure queue %u weight %u failed",
-				    tm_node->id,
-				    tm_node->weight);
-			goto fail_clear;
-		}
 	}
 
 	return ret_val;
-- 
2.31.1



* [PATCH 5/6] net/ice: reset Tx sched node during commit
  2024-01-02 19:42 [PATCH 0/6] net/ice improve qos Qi Zhang
                   ` (3 preceding siblings ...)
  2024-01-02 19:42 ` [PATCH 4/6] net/ice: refactor hardware Tx sched node config Qi Zhang
@ 2024-01-02 19:42 ` Qi Zhang
  2024-01-02 19:42 ` [PATCH 6/6] net/ice: support Tx sched commit before device start Qi Zhang
  2024-01-04  2:28 ` [PATCH 0/6] net/ice improve qos Wu, Wenjun1
  6 siblings, 0 replies; 9+ messages in thread
From: Qi Zhang @ 2024-01-02 19:42 UTC (permalink / raw)
  To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang

1. Always reset all Tx scheduler nodes at the beginning of a commit
   action. This prevents unexpected leftovers from a previous commit.
2. Reset all Tx scheduler nodes if a commit fails.

For leaf nodes, stop the queues, which removes the sched nodes from the
scheduler tree, then start the queues, which adds the sched nodes back
to the default topology.
For non-leaf nodes, simply reset them to their default parameters.
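The error-handling order matters here: once the leaves have been re-added via queue start, the tree is back in its default topology, and only then are the non-leaf nodes reset. A stubbed-out sketch of that commit/rollback flow (function names are stand-ins, not the driver's):

```c
#include <assert.h>

/* Stand-ins for the driver operations (hypothetical, always succeed). */
static int remove_leaf_nodes(void)  { return 0; } /* stop all Tx queues */
static int add_leaf_nodes(void)     { return 0; } /* start queues: default topo */
static int reset_noleaf_nodes(void) { return 0; }

/* Mirrors the flow in the patch: reset first, then configure; on failure,
 * unwind by re-adding the leaves and resetting the non-leaf nodes. */
int hierarchy_commit(int cfg_fails, int *rolled_back)
{
	*rolled_back = 0;
	if (remove_leaf_nodes())
		return -1;
	if (reset_noleaf_nodes())
		goto err_add;
	if (cfg_fails)           /* stands in for per-node configuration */
		goto err_remove;
	return 0;
err_remove:
	remove_leaf_nodes();
err_add:
	add_leaf_nodes();
	reset_noleaf_nodes();
	*rolled_back = 1;
	return -1;
}
```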

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_ethdev.h |   1 +
 drivers/net/ice/ice_tm.c     | 130 ++++++++++++++++++++++++++++-------
 2 files changed, 107 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 1338c80d14..3b2db6aaa6 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -478,6 +478,7 @@ struct ice_tm_node {
 	struct ice_tm_node **children;
 	struct ice_tm_shaper_profile *shaper_profile;
 	struct rte_tm_node_params params;
+	struct ice_sched_node *sched_node;
 };
 
 /* node type of Traffic Manager */
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 20cc47fff1..4d8dbff2dc 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -756,16 +756,91 @@ static int ice_cfg_hw_node(struct ice_hw *hw,
 	return 0;
 }
 
+static struct ice_sched_node *ice_get_vsi_node(struct ice_hw *hw)
+{
+	struct ice_sched_node *node = hw->port_info->root;
+	uint32_t vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+	uint32_t i;
+
+	for (i = 0; i < vsi_layer; i++)
+		node = node->children[0];
+
+	return node;
+}
+
+static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
+	struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
+	struct ice_tm_node *tm_node;
+	int ret;
+
+	/* reset vsi_node */
+	ret = ice_set_node_rate(hw, NULL, vsi_node);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "reset vsi node failed");
+		return ret;
+	}
+
+	/* reset queue group nodes */
+	TAILQ_FOREACH(tm_node, qgroup_list, node) {
+		if (tm_node->sched_node == NULL)
+			continue;
+
+		ret = ice_cfg_hw_node(hw, NULL, tm_node->sched_node);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "reset queue group node %u failed", tm_node->id);
+			return ret;
+		}
+		tm_node->sched_node = NULL;
+	}
+
+	return 0;
+}
+
+static int ice_remove_leaf_nodes(struct rte_eth_dev *dev)
+{
+	int ret = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ret = ice_tx_queue_stop(dev, i);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static int ice_add_leaf_nodes(struct rte_eth_dev *dev)
+{
+	int ret = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ret = ice_tx_queue_start(dev, i);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "start queue %u failed", i);
+			break;
+		}
+	}
+
+	return ret;
+}
+
 static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				 int clear_on_fail,
-				 __rte_unused struct rte_tm_error *error)
+				 struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
 	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
 	struct ice_tm_node *tm_node;
-	struct ice_sched_node *node;
 	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
@@ -777,23 +852,25 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	uint32_t nb_qg;
 	uint32_t qid;
 	uint32_t q_teid;
-	uint32_t vsi_layer;
 
-	for (i = 0; i < dev->data->nb_tx_queues; i++) {
-		ret_val = ice_tx_queue_stop(dev, i);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-			PMD_DRV_LOG(ERR, "stop queue %u failed", i);
-			goto fail_clear;
-		}
+	/* remove leaf nodes */
+	ret_val = ice_remove_leaf_nodes(dev);
+	if (ret_val) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		PMD_DRV_LOG(ERR, "remove leaf nodes failed");
+		goto fail_clear;
 	}
 
-	node = hw->port_info->root;
-	vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
-	for (i = 0; i < vsi_layer; i++)
-		node = node->children[0];
-	vsi_node = node;
+	/* reset no-leaf nodes. */
+	ret_val = ice_reset_noleaf_nodes(dev);
+	if (ret_val) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		PMD_DRV_LOG(ERR, "reset no-leaf nodes failed");
+		goto add_leaf;
+	}
 
+	/* config vsi node */
+	vsi_node = ice_get_vsi_node(hw);
 	tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
 
 	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
@@ -802,9 +879,10 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		PMD_DRV_LOG(ERR,
 			    "configure vsi node %u bandwidth failed",
 			    tm_node->id);
-		goto reset_vsi;
+		goto add_leaf;
 	}
 
+	/* config queue group nodes */
 	nb_vsi_child = vsi_node->num_children;
 	nb_qg = vsi_node->children[0]->num_children;
 
@@ -823,7 +901,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			if (ret_val) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 				PMD_DRV_LOG(ERR, "start queue %u failed", qid);
-				goto reset_vsi;
+				goto reset_leaf;
 			}
 			txq = dev->data->tx_queues[qid];
 			q_teid = txq->q_teid;
@@ -831,7 +909,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			if (queue_node == NULL) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 				PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
-				goto reset_vsi;
+				goto reset_leaf;
 			}
 			if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
 				continue;
@@ -839,7 +917,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			if (ret_val) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 				PMD_DRV_LOG(ERR, "move queue %u failed", qid);
-				goto reset_vsi;
+				goto reset_leaf;
 			}
 		}
 
@@ -849,7 +927,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(ERR,
 				    "configure queue group node %u failed",
 				    tm_node->id);
-			goto reset_vsi;
+			goto reset_leaf;
 		}
 
 		idx_qg++;
@@ -860,10 +938,11 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (idx_vsi_child >= nb_vsi_child) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "too many queues");
-			goto reset_vsi;
+			goto reset_leaf;
 		}
 	}
 
+	/* config queue nodes */
 	TAILQ_FOREACH(tm_node, queue_list, node) {
 		qid = tm_node->id;
 		txq = dev->data->tx_queues[qid];
@@ -876,14 +955,17 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(ERR,
 				    "configure queue node %u failed",
 				    tm_node->id);
-			goto reset_vsi;
+			goto reset_leaf;
 		}
 	}
 
 	return ret_val;
 
-reset_vsi:
-	ice_set_node_rate(hw, NULL, vsi_node);
+reset_leaf:
+	ice_remove_leaf_nodes(dev);
+add_leaf:
+	ice_add_leaf_nodes(dev);
+	ice_reset_noleaf_nodes(dev);
 fail_clear:
 	/* clear all the traffic manager configuration */
 	if (clear_on_fail) {
-- 
2.31.1



* [PATCH 6/6] net/ice: support Tx sched commit before device start
  2024-01-02 19:42 [PATCH 0/6] net/ice improve qos Qi Zhang
                   ` (4 preceding siblings ...)
  2024-01-02 19:42 ` [PATCH 5/6] net/ice: reset Tx sched node during commit Qi Zhang
@ 2024-01-02 19:42 ` Qi Zhang
  2024-01-04  2:28 ` [PATCH 0/6] net/ice improve qos Wu, Wenjun1
  6 siblings, 0 replies; 9+ messages in thread
From: Qi Zhang @ 2024-01-02 19:42 UTC (permalink / raw)
  To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang

Currently, the Tx hierarchy commit only takes effect if the device has
already been started, because after a dev start / stop cycle the queues
are removed and added back, which returns the Tx scheduler tree to its
original topology.

With this patch, the hierarchy commit function simply returns if the
device has not been started yet, and all the commit actions are
deferred to dev_start.
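The deferral itself is simple state-keeping: before dev_start the commit parameters are only recorded, and dev_start replays them. A reduced sketch of that decision (struct and names illustrative, modeled on the patch):

```c
#include <assert.h>
#include <stdbool.h>

struct tm_conf_state {
	bool committed;     /* a hierarchy commit has been requested */
	bool clear_on_fail; /* remembered for the deferred commit */
};

/* Returns true when the commit must run immediately (device started),
 * false when it was merely recorded for dev_start to pick up later. */
bool hierarchy_commit_now(bool dev_started, bool clear_on_fail,
			  struct tm_conf_state *conf)
{
	if (!dev_started) {
		conf->committed = true;
		conf->clear_on_fail = clear_on_fail;
		return false;
	}
	return true;
}
```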

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_ethdev.c |  9 +++++++++
 drivers/net/ice/ice_ethdev.h |  4 ++++
 drivers/net/ice/ice_tm.c     | 25 ++++++++++++++++++++++---
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3c3bc49dc2..72e13f95f8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3717,6 +3717,7 @@ ice_dev_start(struct rte_eth_dev *dev)
 	int mask, ret;
 	uint8_t timer = hw->func_caps.ts_func_info.tmr_index_owned;
 	uint32_t pin_idx = ad->devargs.pin_idx;
+	struct rte_tm_error tm_err;
 
 	/* program Tx queues' context in hardware */
 	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
@@ -3746,6 +3747,14 @@ ice_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	if (pf->tm_conf.committed) {
+		ret = ice_do_hierarchy_commit(dev, pf->tm_conf.clear_on_fail, &tm_err);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to commit Tx scheduler");
+			goto rx_err;
+		}
+	}
+
 	ice_set_rx_function(dev);
 	ice_set_tx_function(dev);
 
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3b2db6aaa6..fa4981ed14 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -504,6 +504,7 @@ struct ice_tm_conf {
 	uint32_t nb_qgroup_node;
 	uint32_t nb_queue_node;
 	bool committed;
+	bool clear_on_fail;
 };
 
 struct ice_pf {
@@ -686,6 +687,9 @@ int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
 			 struct ice_rss_hash_cfg *cfg);
 void ice_tm_conf_init(struct rte_eth_dev *dev);
 void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
+			    int clear_on_fail,
+			    struct rte_tm_error *error);
 extern const struct rte_tm_ops ice_tm_ops;
 
 static inline int
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 4d8dbff2dc..aa012897ed 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -52,6 +52,7 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
 	pf->tm_conf.nb_qgroup_node = 0;
 	pf->tm_conf.nb_queue_node = 0;
 	pf->tm_conf.committed = false;
+	pf->tm_conf.clear_on_fail = false;
 }
 
 void
@@ -832,9 +833,9 @@ static int ice_add_leaf_nodes(struct rte_eth_dev *dev)
 	return ret;
 }
 
-static int ice_hierarchy_commit(struct rte_eth_dev *dev,
-				 int clear_on_fail,
-				 struct rte_tm_error *error)
+int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
+			    int clear_on_fail,
+			    struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -959,6 +960,8 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		}
 	}
 
+	pf->tm_conf.committed = true;
+
 	return ret_val;
 
 reset_leaf:
@@ -974,3 +977,19 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	}
 	return ret_val;
 }
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+				 int clear_on_fail,
+				 struct rte_tm_error *error)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	/* if device not started, simply set committed flag and return. */
+	if (!dev->data->dev_started) {
+		pf->tm_conf.committed = true;
+		pf->tm_conf.clear_on_fail = clear_on_fail;
+		return 0;
+	}
+
+	return ice_do_hierarchy_commit(dev, clear_on_fail, error);
+}
-- 
2.31.1



* RE: [PATCH 0/6] net/ice improve qos
  2024-01-02 19:42 [PATCH 0/6] net/ice improve qos Qi Zhang
                   ` (5 preceding siblings ...)
  2024-01-02 19:42 ` [PATCH 6/6] net/ice: support Tx sched commit before device start Qi Zhang
@ 2024-01-04  2:28 ` Wu, Wenjun1
  2024-01-04  2:57   ` Zhang, Qi Z
  6 siblings, 1 reply; 9+ messages in thread
From: Wu, Wenjun1 @ 2024-01-04  2:28 UTC (permalink / raw)
  To: Zhang, Qi Z, Yang, Qiming; +Cc: dev

> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Wednesday, January 3, 2024 3:42 AM
> To: Yang, Qiming <qiming.yang@intel.com>; Wu, Wenjun1
> <wenjun1.wu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH 0/6] net/ice improve qos
> 
> The patchset enhanced ice rte_tm implemenations
> 
> Qi Zhang (6):
>   net/ice: remove redundent code
>   net/ice: support VSI level bandwidth config
>   net/ice: support queue group weight configure
>   net/ice: refactor hardware Tx sched node config
>   net/ice: reset Tx sched node during commit
>   net/ice: support Tx sched commit before dev_start
> 
>  drivers/net/ice/base/ice_sched.c |   4 +-
>  drivers/net/ice/base/ice_sched.h |   7 +-
>  drivers/net/ice/ice_ethdev.c     |   9 +
>  drivers/net/ice/ice_ethdev.h     |   5 +
>  drivers/net/ice/ice_tm.c         | 361 +++++++++++++++++++++----------
>  5 files changed, 269 insertions(+), 117 deletions(-)
> 
> --
> 2.31.1

Acked-by: Wenjun Wu <wenjun1.wu@intel.com>


* RE: [PATCH 0/6] net/ice improve qos
  2024-01-04  2:28 ` [PATCH 0/6] net/ice improve qos Wu, Wenjun1
@ 2024-01-04  2:57   ` Zhang, Qi Z
  0 siblings, 0 replies; 9+ messages in thread
From: Zhang, Qi Z @ 2024-01-04  2:57 UTC (permalink / raw)
  To: Wu, Wenjun1, Yang, Qiming; +Cc: dev



> -----Original Message-----
> From: Wu, Wenjun1 <wenjun1.wu@intel.com>
> Sent: Thursday, January 4, 2024 10:29 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH 0/6] net/ice improve qos
> 
> > -----Original Message-----
> > From: Zhang, Qi Z <qi.z.zhang@intel.com>
> > Sent: Wednesday, January 3, 2024 3:42 AM
> > To: Yang, Qiming <qiming.yang@intel.com>; Wu, Wenjun1
> > <wenjun1.wu@intel.com>
> > Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> > Subject: [PATCH 0/6] net/ice improve qos
> >
> > The patchset enhanced ice rte_tm implemenations
> >
> > Qi Zhang (6):
> >   net/ice: remove redundent code
> >   net/ice: support VSI level bandwidth config
> >   net/ice: support queue group weight configure
> >   net/ice: refactor hardware Tx sched node config
> >   net/ice: reset Tx sched node during commit
> >   net/ice: support Tx sched commit before dev_start
> >
> >  drivers/net/ice/base/ice_sched.c |   4 +-
> >  drivers/net/ice/base/ice_sched.h |   7 +-
> >  drivers/net/ice/ice_ethdev.c     |   9 +
> >  drivers/net/ice/ice_ethdev.h     |   5 +
> >  drivers/net/ice/ice_tm.c         | 361 +++++++++++++++++++++----------
> >  5 files changed, 269 insertions(+), 117 deletions(-)
> >
> > --
> > 2.31.1
> 
> Acked-by: Wenjun Wu <wenjun1.wu@intel.com>

Applied to dpdk-next-net-intel after fix the CI typo warning in PATCH 1/6.

Thanks
Qi


end of thread, other threads:[~2024-01-04  2:57 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-01-02 19:42 [PATCH 0/6] net/ice improve qos Qi Zhang
2024-01-02 19:42 ` [PATCH 1/6] net/ice: remove redundent code Qi Zhang
2024-01-02 19:42 ` [PATCH 2/6] net/ice: support VSI level bandwidth config Qi Zhang
2024-01-02 19:42 ` [PATCH 3/6] net/ice: support queue group weight configure Qi Zhang
2024-01-02 19:42 ` [PATCH 4/6] net/ice: refactor hardware Tx sched node config Qi Zhang
2024-01-02 19:42 ` [PATCH 5/6] net/ice: reset Tx sched node during commit Qi Zhang
2024-01-02 19:42 ` [PATCH 6/6] net/ice: support Tx sched commit before device start Qi Zhang
2024-01-04  2:28 ` [PATCH 0/6] net/ice improve qos Wu, Wenjun1
2024-01-04  2:57   ` Zhang, Qi Z
