From mboxrd@z Thu Jan 1 00:00:00 1970
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH 2/6] net/ice: support VSI level bandwidth config
Date: Tue, 2 Jan 2024 14:42:28 -0500
Message-Id: <20240102194232.3614305-3-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20240102194232.3614305-1-qi.z.zhang@intel.com>
References: <20240102194232.3614305-1-qi.z.zhang@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

Enable the configuration of peak and committed rates for a Tx scheduler
node at the VSI level. This patch also consolidates rate configuration
across various levels into a single function, 'ice_set_node_rate'.

Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_sched.c |   2 +-
 drivers/net/ice/base/ice_sched.h |   4 +-
 drivers/net/ice/ice_tm.c         | 142 +++++++++++++++++++------------
 3 files changed, 91 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index a4d31647fe..23cc1ee50a 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4429,7 +4429,7 @@ ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
  * NOTE: Caller provides the correct SRL node in case of shared profile
  * settings.
  */
-static enum ice_status
+enum ice_status
 ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
 			  enum ice_rl_type rl_type, u32 bw)
 {
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 4b68f3f535..a600ff9a24 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -237,5 +237,7 @@ enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status ice_sched_replay_root_node_bw(struct ice_port_info *pi);
 enum ice_status
 ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
-
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw);
 #endif /* _ICE_SCHED_H_ */
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 9e2f981fa3..d9187af8af 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -663,6 +663,55 @@ static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int ice_set_node_rate(struct ice_hw *hw,
+			     struct ice_tm_node *tm_node,
+			     struct ice_sched_node *sched_node)
+{
+	enum ice_status status;
+	bool reset = false;
+	uint32_t peak = 0;
+	uint32_t committed = 0;
+	uint32_t rate;
+
+	if (tm_node == NULL || tm_node->shaper_profile == NULL) {
+		reset = true;
+	} else {
+		peak = (uint32_t)tm_node->shaper_profile->profile.peak.rate;
+		committed = (uint32_t)tm_node->shaper_profile->profile.committed.rate;
+	}
+
+	if (reset || peak == 0)
+		rate = ICE_SCHED_DFLT_BW;
+	else
+		rate = peak / 1000 * BITS_PER_BYTE;
+
+
+	status = ice_sched_set_node_bw_lmt(hw->port_info,
+					   sched_node,
+					   ICE_MAX_BW,
+					   rate);
+	if (status) {
+		PMD_DRV_LOG(ERR, "Failed to set max bandwidth for node %u", tm_node->id);
+		return -EINVAL;
+	}
+
+	if (reset || committed == 0)
+		rate = ICE_SCHED_DFLT_BW;
+	else
+		rate = committed / 1000 * BITS_PER_BYTE;
+
+	status = ice_sched_set_node_bw_lmt(hw->port_info,
+					   sched_node,
+					   ICE_MIN_BW,
+					   rate);
+	if (status) {
+		PMD_DRV_LOG(ERR, "Failed to set min bandwidth for node %u", tm_node->id);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				int clear_on_fail,
 				__rte_unused struct rte_tm_error *error)
@@ -673,13 +722,11 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
 	struct ice_tm_node *tm_node;
 	struct ice_sched_node *node;
-	struct ice_sched_node *vsi_node;
+	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
 	struct ice_vsi *vsi;
 	int ret_val = ICE_SUCCESS;
-	uint64_t peak = 0;
-	uint64_t committed = 0;
 	uint8_t priority;
 	uint32_t i;
 	uint32_t idx_vsi_child;
@@ -704,6 +751,18 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	for (i = 0; i < vsi_layer; i++)
 		node = node->children[0];
 	vsi_node = node;
+
+	tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
+
+	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+	if (ret_val) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		PMD_DRV_LOG(ERR,
+			    "configure vsi node %u bandwidth failed",
+			    tm_node->id);
+		goto reset_vsi;
+	}
+
 	nb_vsi_child = vsi_node->num_children;
 	nb_qg = vsi_node->children[0]->num_children;
 
@@ -722,7 +781,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			if (ret_val) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 				PMD_DRV_LOG(ERR, "start queue %u failed", qid);
-				goto fail_clear;
+				goto reset_vsi;
 			}
 			txq = dev->data->tx_queues[qid];
 			q_teid = txq->q_teid;
@@ -730,7 +789,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			if (queue_node == NULL) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 				PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
-				goto fail_clear;
+				goto reset_vsi;
 			}
 			if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
 				continue;
@@ -738,28 +797,19 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			if (ret_val) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 				PMD_DRV_LOG(ERR, "move queue %u failed", qid);
-				goto fail_clear;
+				goto reset_vsi;
 			}
 		}
-		if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
-			uint32_t node_teid = qgroup_sched_node->info.node_teid;
-			/* Transfer from Byte per seconds to Kbps */
-			peak = tm_node->shaper_profile->profile.peak.rate;
-			peak = peak / 1000 * BITS_PER_BYTE;
-			ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
-								   node_teid,
-								   ICE_AGG_TYPE_Q,
-								   tm_node->tc,
-								   ICE_MAX_BW,
-								   (u32)peak);
-			if (ret_val) {
-				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-				PMD_DRV_LOG(ERR,
-					    "configure queue group %u bandwidth failed",
-					    tm_node->id);
-				goto fail_clear;
-			}
+
+		ret_val = ice_set_node_rate(hw, tm_node, qgroup_sched_node);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			PMD_DRV_LOG(ERR,
+				    "configure queue group %u bandwidth failed",
+				    tm_node->id);
+			goto reset_vsi;
 		}
+
 		priority = 7 - tm_node->priority;
 		ret_val = ice_sched_cfg_sibl_node_prio_lock(hw->port_info, qgroup_sched_node,
 							    priority);
@@ -777,7 +827,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (idx_vsi_child >= nb_vsi_child) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "too many queues");
-			goto fail_clear;
+			goto reset_vsi;
 		}
 	}
 
@@ -786,37 +836,17 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		txq = dev->data->tx_queues[qid];
 		vsi = txq->vsi;
 		q_teid = txq->q_teid;
-		if (tm_node->shaper_profile) {
-			/* Transfer from Byte per seconds to Kbps */
-			if (tm_node->shaper_profile->profile.peak.rate > 0) {
-				peak = tm_node->shaper_profile->profile.peak.rate;
-				peak = peak / 1000 * BITS_PER_BYTE;
-				ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
-							   tm_node->tc, tm_node->id,
-							   ICE_MAX_BW, (u32)peak);
-				if (ret_val) {
-					error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-					PMD_DRV_LOG(ERR,
-						    "configure queue %u peak bandwidth failed",
-						    tm_node->id);
-					goto fail_clear;
-				}
-			}
-			if (tm_node->shaper_profile->profile.committed.rate > 0) {
-				committed = tm_node->shaper_profile->profile.committed.rate;
-				committed = committed / 1000 * BITS_PER_BYTE;
-				ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
-							   tm_node->tc, tm_node->id,
-							   ICE_MIN_BW, (u32)committed);
-				if (ret_val) {
-					error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-					PMD_DRV_LOG(ERR,
-						    "configure queue %u committed bandwidth failed",
-						    tm_node->id);
-					goto fail_clear;
-				}
-			}
+
+		queue_node = ice_sched_get_node(hw->port_info, q_teid);
+		ret_val = ice_set_node_rate(hw, tm_node, queue_node);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			PMD_DRV_LOG(ERR,
+				    "configure queue %u bandwidth failed",
+				    tm_node->id);
+			goto reset_vsi;
 		}
+
 		priority = 7 - tm_node->priority;
 		ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
 						 &q_teid, &priority);
@@ -838,6 +868,8 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 
 	return ret_val;
 
+reset_vsi:
+	ice_set_node_rate(hw, NULL, vsi_node);
 fail_clear:
 	/* clear all the traffic manager configuration */
 	if (clear_on_fail) {
-- 
2.31.1