From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: Bruce Richardson, Stephen Hemminger
Subject: [PATCH v4 4/4] net/ice: limit the number of queues to sched capabilities
Date: Tue, 22 Oct 2024 17:39:44 +0100
Message-ID: <20241022163944.3564662-5-bruce.richardson@intel.com>
In-Reply-To: <20241022163944.3564662-1-bruce.richardson@intel.com>
References: <20241009170822.344716-1-bruce.richardson@intel.com>
 <20241022163944.3564662-1-bruce.richardson@intel.com>

Rather than assuming that each VSI can hold up to 256 queue pairs, or
the reported device limit, query the available nodes in the scheduler
tree to check that we are not overflowing the limit on the number of
child scheduling nodes at each level. Do this by multiplying
max_children for each level beyond the VSI and using that as an
additional cap on the number of queues.
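To make the arithmetic concrete, here is a rough standalone sketch of
the capping logic (not part of the patch itself; the cap_qps helper
name and the example fan-out values are hypothetical, and the real
code operates on hw->max_children inside
ice_vsi_config_tc_queue_mapping, as in the diff below):

    #include <stdint.h>
    #include <stdio.h>

    /* Multiply the max-children fan-out of each Tx scheduler layer,
     * from the VSI entry-point layer down to the queue-group
     * (second-last) layer; the product is the most queue nodes a
     * single VSI can reach in the tree.
     */
    static uint16_t
    cap_qps(uint16_t nb_qps, uint8_t entry_layer, uint8_t num_layers,
            const uint16_t *max_children)
    {
        /* uint32_t product plus the early break below avoid overflow */
        uint32_t max_nodes = 1;

        for (uint8_t i = entry_layer; i < num_layers - 1; i++) {
            max_nodes *= max_children[i];
            if (max_nodes >= nb_qps) /* already past the request */
                break;
        }
        return nb_qps < max_nodes ? nb_qps : (uint16_t)max_nodes;
    }

    int main(void)
    {
        /* hypothetical topology: 4 layers, 4 children per node */
        const uint16_t max_children[] = { 4, 4, 4, 4 };

        /* 256 requested queue pairs capped to 4 * 4 * 4 = 64 */
        printf("%u\n", cap_qps(256, 0, 4, max_children));
        return 0;
    }

Breaking out of the loop as soon as the running product reaches the
requested queue count is what keeps the uint32_t product from
overflowing, even on deep topologies with large fan-outs.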
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Stephen Hemminger
---
 drivers/net/ice/ice_ethdev.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 97092c8550..d5e94a6685 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -915,7 +915,7 @@ ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info)
 }
 
 static int
-ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
+ice_vsi_config_tc_queue_mapping(struct ice_hw *hw, struct ice_vsi *vsi,
 				struct ice_aqc_vsi_props *info,
 				uint8_t enabled_tcmap)
 {
@@ -931,13 +931,28 @@ ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
 	}
 
 	/* vector 0 is reserved and 1 vector for ctrl vsi */
-	if (vsi->adapter->hw.func_caps.common_cap.num_msix_vectors < 2)
+	if (vsi->adapter->hw.func_caps.common_cap.num_msix_vectors < 2) {
 		vsi->nb_qps = 0;
-	else
+	} else {
 		vsi->nb_qps = RTE_MIN
 			((uint16_t)vsi->adapter->hw.func_caps.common_cap.num_msix_vectors - 2,
 			RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC));
+
+		/* cap max QPs to what the HW reports as num-children for each layer.
+		 * Multiply num_children for each layer from the entry_point layer to
+		 * the qgroup, or second-last layer.
+		 * Avoid any potential overflow by using uint32_t type and breaking loop
+		 * once we have a number greater than the already configured max.
+		 */
+		uint32_t max_sched_vsi_nodes = 1;
+		for (uint8_t i = hw->sw_entry_point_layer; i < hw->num_tx_sched_layers - 1; i++) {
+			max_sched_vsi_nodes *= hw->max_children[i];
+			if (max_sched_vsi_nodes >= vsi->nb_qps)
+				break;
+		}
+		vsi->nb_qps = RTE_MIN(vsi->nb_qps, max_sched_vsi_nodes);
+	}
 
 	/* nb_qps(hex)  -> fls */
 	/* 0000         -> 0 */
 	/* 0001         -> 0 */
@@ -1709,7 +1724,7 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 			rte_cpu_to_le_16(hw->func_caps.fd_fltr_best_effort);
 
 		/* Enable VLAN/UP trip */
-		ret = ice_vsi_config_tc_queue_mapping(vsi,
+		ret = ice_vsi_config_tc_queue_mapping(hw, vsi,
 						      &vsi_ctx.info,
 						      ICE_DEFAULT_TCMAP);
 		if (ret) {
@@ -1733,7 +1748,7 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 		vsi_ctx.info.fd_options = rte_cpu_to_le_16(cfg);
 		vsi_ctx.info.sw_id = hw->port_info->sw_id;
 		vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
-		ret = ice_vsi_config_tc_queue_mapping(vsi,
+		ret = ice_vsi_config_tc_queue_mapping(hw, vsi,
 						      &vsi_ctx.info,
 						      ICE_DEFAULT_TCMAP);
 		if (ret) {
-- 
2.43.0