From: Wenjun Wu
To: dev@dpdk.org, qiming.yang@intel.com, qi.z.zhang@intel.com
Subject: [PATCH v10 2/7] net/ice/base: support queue BW allocation configuration
Date: Tue, 17 May 2022 12:59:11 +0800
Message-Id: <20220517045916.4073904-3-wenjun1.wu@intel.com>
In-Reply-To: <20220517045916.4073904-1-wenjun1.wu@intel.com>
References: <20220329014813.1092054-1-wenjun1.wu@intel.com>
 <20220517045916.4073904-1-wenjun1.wu@intel.com>

This patch adds bandwidth (BW) allocation support for queue scheduling
nodes to enable WFQ at the queue level.

Signed-off-by: Wenjun Wu
---
 drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_sched.h |  3 ++
 2 files changed, 67 insertions(+)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..4ca15bf8f8 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
 	return status;
 }
 
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+			  u32 bw_alloc)
+{
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		   u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *node;
+	struct ice_q_ctx *q_ctx;
+
+	ice_acquire_lock(&pi->sched_lock);
+	q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+	if (!q_ctx)
+		goto exit_q_bw_alloc;
+
+	node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+	if (!node) {
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+		goto exit_q_bw_alloc;
+	}
+
+	status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+	if (!status)
+		status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
 /**
  * ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
  * @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..184ad09e6a 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
 ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
 		       u8 *q_prio);
 enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		   u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
 ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
 		     enum ice_rl_type rl_type, u8 *bw_alloc);
 enum ice_status
-- 
2.25.1
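
Editor's note, for context only (not part of the patch): a minimal
usage sketch of the new ice_cfg_q_bw_alloc() API, assuming the caller
already holds a valid port_info pointer, VSI handle, TC and software
queue handles. The helper name ice_set_queue_weights(), the queue
handles 0/1 and the weight values 200/100 below are illustrative only;
only ice_cfg_q_bw_alloc(), ICE_MAX_BW, ICE_SUCCESS and the related
types come from the patch and the ice base code.

/* Illustrative helper (not in the patch): give queue 0 twice the
 * relative max-BW (WFQ) weight of queue 1 within one VSI/TC.
 */
static enum ice_status
ice_set_queue_weights(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
{
	enum ice_status status;

	status = ice_cfg_q_bw_alloc(pi, vsi_handle, tc, 0, ICE_MAX_BW, 200);
	if (status != ICE_SUCCESS)
		return status;

	return ice_cfg_q_bw_alloc(pi, vsi_handle, tc, 1, ICE_MAX_BW, 100);
}

The sketch returns the first non-ICE_SUCCESS status unchanged, which
mirrors the error-handling style of the surrounding base code.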