From: Anatoly Burakov
To: dev@dpdk.org
Cc: Jacob Keller, bruce.richardson@intel.com, ian.stokes@intel.com
Subject: [PATCH v3 031/129] net/ice/base: refactor control queue send delay
Date: Tue, 25 Jun 2024 12:12:36 +0100
Message-ID: <01bc16fce525fbe54efc323b93f9bfb0aa3e3576.1719313663.git.anatoly.burakov@intel.com>

From: Jacob Keller

Since we know that most side band queue messages complete within 2-3
microseconds, introduce an initial 5 microsecond delay before we enter
the main timeout loop. Use ice_flush(hw) first to ensure that we
immediately flush the tail bump before delaying. This should mean that
in practice almost all side band messages will be completed at the
first ice_sq_done() check.

Because the driver already uses non-sleeping delays, checking more
frequently should not negatively affect CPU usage.

Currently, the delay is specified using a macro, which makes reading
the code difficult: you must look up the macro to figure out the delay
value. In general we try to avoid such "magic" numbers. However, delay
values aren't really magic; they are just numbers. Using a macro
obscures the intent here, and the macro names are rather long. I
double-checked the Linux kernel, and nearly all invocations of udelay,
usleep_range, and msleep today use raw values, or some multiple of HZ
in the case of msleep.
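As an illustrative sketch (not part of the patch itself, using the same
helpers that appear in the diff below), the resulting wait pattern in
the send path looks roughly like this:

	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
	ice_flush(hw);			/* push the tail bump to hardware */
	ice_usec_delay(5, false);	/* most side band messages finish in 2-3 usec */

	do {
		if (ice_sq_done(hw, cq))
			break;		/* descriptor completed */

		ice_usec_delay(10, false);
		total_delay++;
	} while (total_delay < cq->sq_cmd_timeout);

With ICE_CTL_Q_SQ_CMD_TIMEOUT raised to 100000 iterations of 10 usec
each, the maximum wait remains roughly 1 second, the same upper bound
as the previous 10000 x 100 usec loop.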
These changes should reduce the amount of time the driver spins on a
CPU, minimizing CPU waste and reducing the time required to process
most control queue messages.

Signed-off-by: Jacob Keller
Signed-off-by: Ian Stokes
---
 drivers/net/ice/base/ice_controlq.c | 8 +++++++-
 drivers/net/ice/base/ice_controlq.h | 3 +--
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index cac04c6a98..c2cf747b65 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -1048,12 +1048,18 @@ ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	if (cq->sq.next_to_use == cq->sq.count)
 		cq->sq.next_to_use = 0;
 	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
+	ice_flush(hw);
+
+	/* Wait a short time before initial ice_sq_done() check, to allow
+	 * hardware time for completion.
+	 */
+	ice_usec_delay(5, false);
 
 	do {
 		if (ice_sq_done(hw, cq))
 			break;
 
-		ice_usec_delay(ICE_CTL_Q_SQ_CMD_USEC, false);
+		ice_usec_delay(10, false);
 		total_delay++;
 	} while (total_delay < cq->sq_cmd_timeout);
 
diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
index 5c5bb069d8..45394ee695 100644
--- a/drivers/net/ice/base/ice_controlq.h
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -35,8 +35,7 @@ enum ice_ctl_q {
 };
 
 /* Control Queue timeout settings - max delay 1s */
-#define ICE_CTL_Q_SQ_CMD_TIMEOUT	10000	/* Count 10000 times */
-#define ICE_CTL_Q_SQ_CMD_USEC		100	/* Check every 100usec */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT	100000	/* Count 100000 times */
 #define ICE_CTL_Q_ADMIN_INIT_TIMEOUT	10	/* Count 10 times */
 #define ICE_CTL_Q_ADMIN_INIT_MSEC	100	/* Check every 100msec */
 
-- 
2.43.0