From mboxrd@z Thu Jan  1 00:00:00 1970
From: Anatoly Burakov
To: dev@dpdk.org
Cc: Jacob Keller, ian.stokes@intel.com, bruce.richardson@intel.com
Subject: [PATCH v4 011/103] net/ice/base: refactor control queue send delay
Date: Wed, 26 Jun 2024 12:40:59 +0100
X-Mailer: git-send-email 2.43.0
List-Id: DPDK patches and discussions
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Jacob Keller

Since we know that most sideband queue messages complete within 2-3
microseconds, introduce an initial 5 microsecond delay before entering
the main timeout loop. Use ice_flush(hw) first to ensure that the tail
bump is flushed immediately before delaying. In practice, this should
mean that almost all sideband messages are already complete at the first
ice_sq_done() check. Because the driver already uses non-sleeping delays,
checking more frequently should not negatively affect CPU usage.

Currently, the delay is specified using a macro, which makes the code
harder to read: you must look up the macro to find the actual delay
value. In general we try to avoid such "magic" numbers. However, delay
values aren't really magic; they're just numbers. Using a macro obscures
the intent here, and the macro names are rather long. I double-checked
the Linux kernel, and nearly all invocations of udelay, usleep_range, and
msleep today use raw values, or some multiple of HZ in the case of
msleep.
These changes should reduce the amount of time that the driver spins a
CPU, minimizing CPU waste, and reducing the time required to process most
control queue messages.

Signed-off-by: Jacob Keller
Signed-off-by: Ian Stokes
---
 drivers/net/ice/base/ice_controlq.c | 8 +++++++-
 drivers/net/ice/base/ice_controlq.h | 3 +--
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index cac04c6a98..c2cf747b65 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -1048,12 +1048,18 @@ ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	if (cq->sq.next_to_use == cq->sq.count)
 		cq->sq.next_to_use = 0;
 	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
+	ice_flush(hw);
+
+	/* Wait a short time before initial ice_sq_done() check, to allow
+	 * hardware time for completion.
+	 */
+	ice_usec_delay(5, false);
 
 	do {
 		if (ice_sq_done(hw, cq))
 			break;
 
-		ice_usec_delay(ICE_CTL_Q_SQ_CMD_USEC, false);
+		ice_usec_delay(10, false);
 		total_delay++;
 	} while (total_delay < cq->sq_cmd_timeout);
 
diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
index 5c5bb069d8..45394ee695 100644
--- a/drivers/net/ice/base/ice_controlq.h
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -35,8 +35,7 @@ enum ice_ctl_q {
 };
 
 /* Control Queue timeout settings - max delay 1s */
-#define ICE_CTL_Q_SQ_CMD_TIMEOUT	10000	/* Count 10000 times */
-#define ICE_CTL_Q_SQ_CMD_USEC		100	/* Check every 100usec */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT	100000	/* Count 100000 times */
 #define ICE_CTL_Q_ADMIN_INIT_TIMEOUT	10	/* Count 10 times */
 #define ICE_CTL_Q_ADMIN_INIT_MSEC	100	/* Check every 100msec */
-- 
2.43.0
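
For readers skimming the diff, the sketch below (not part of the patch)
models the timing behaviour the change switches to: flush, a single 5
microsecond warm-up delay, then a 10 microsecond polling interval for up
to 100000 iterations, i.e. roughly the same ~1 second worst case as the
old 10000 x 100 us scheme. The names ctrlq_sim, flush(), done(),
delay_us() and send_and_wait() are hypothetical stand-ins for
ice_flush(), ice_sq_done() and ice_usec_delay(); only elapsed time is
simulated, not real hardware.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver primitives touched by the patch.
 * The "simulation" just advances a counter instead of busy-waiting.
 */
struct ctrlq_sim {
	unsigned int usecs_until_done;	/* simulated command completion time */
	unsigned int elapsed_us;	/* simulated time spent waiting */
};

static void flush(struct ctrlq_sim *q)
{
	(void)q;	/* models ice_flush(): tail bump pushed to hardware */
}

static bool done(const struct ctrlq_sim *q)
{
	return q->elapsed_us >= q->usecs_until_done;	/* models ice_sq_done() */
}

static void delay_us(struct ctrlq_sim *q, unsigned int us)
{
	q->elapsed_us += us;	/* models ice_usec_delay(us, false) */
}

/* Polling pattern after the patch: flush, one 5 us warm-up delay, then
 * check every 10 us for up to timeout_count iterations.
 */
static bool send_and_wait(struct ctrlq_sim *q, unsigned int timeout_count)
{
	unsigned int total_delay = 0;

	flush(q);
	delay_us(q, 5);	/* most sideband commands finish within 2-3 us */

	do {
		if (done(q))
			return true;

		delay_us(q, 10);
		total_delay++;
	} while (total_delay < timeout_count);

	return done(q);
}

int main(void)
{
	/* A command that completes in 3 us is caught by the very first
	 * check; 100000 iterations of 10 us mirrors the new
	 * ICE_CTL_Q_SQ_CMD_TIMEOUT value.
	 */
	struct ctrlq_sim q = { .usecs_until_done = 3, .elapsed_us = 0 };

	printf("completed: %s, simulated wait: %u us\n",
	       send_and_wait(&q, 100000) ? "yes" : "no", q.elapsed_us);
	return 0;
}

The only behavioural differences from the old code are the flush plus
warm-up delay and the finer 10 us polling granularity; the worst-case
timeout stays at about one second because ICE_CTL_Q_SQ_CMD_TIMEOUT grows
from 10000 to 100000.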