From mboxrd@z Thu Jan 1 00:00:00 1970
From: Timothy McDaniel <timothy.mcdaniel@intel.com>
To:
Cc: dev@dpdk.org, erik.g.carrillo@intel.com, harry.van.haaren@intel.com,
 jerinj@marvell.com, thomas@monjalon.net
Date: Wed, 14 Apr 2021 20:49:10 -0500
Message-Id: <1618451359-20693-19-git-send-email-timothy.mcdaniel@intel.com>
In-Reply-To: <1618451359-20693-1-git-send-email-timothy.mcdaniel@intel.com>
References: <20210316221857.2254-2-timothy.mcdaniel@intel.com>
 <1618451359-20693-1-git-send-email-timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH v4 18/27] event/dlb2: add v2.5 sparse cq mode

Update the low-level HW functions responsible for configuring sparse CQ
mode, in which each cache line contains a single QE instead of four. The
logic closely follows what was done for v2.0, but the new combined
register map for v2.0 and v2.5 uses new register and bit names.
Additionally, new register-access macros are used so that the code
performs the correct action for the hardware version in use.
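For reference, the register update the patch switches to is a plain
read-modify-write expressed through the new access macros. The short
sketch below mirrors that pattern in user space; it assumes DLB2_BIT_SET()
simply ORs a mask into the register image, and all SKETCH_* names,
offsets, and bit positions are illustrative stand-ins rather than the real
DLB2 register map or osdep macros.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins; the real offsets and masks live in the DLB2
 * combined v2.0/v2.5 register map, not here.
 */
#define SKETCH_CHP_CFG_CHP_CSR_CTRL        0x2c        /* hypothetical CSR offset */
#define SKETCH_CFG_64BYTES_QE_DIR_CQ_MODE  (1u << 16)  /* hypothetical bit */
#define SKETCH_CFG_64BYTES_QE_LDB_CQ_MODE  (1u << 17)  /* hypothetical bit */

/* Assumed semantics of the bit-set macro: OR a mask into a register image. */
#define SKETCH_BIT_SET(x, mask) ((x) |= (mask))

/* Fake CSR backing store so the sketch runs without hardware. */
static uint32_t fake_csr[64];

static uint32_t sketch_csr_rd(uint32_t offset)
{
	return fake_csr[offset / 4];
}

static void sketch_csr_wr(uint32_t offset, uint32_t val)
{
	fake_csr[offset / 4] = val;
}

/* Same read-modify-write shape as the new driver functions below. */
static void sketch_enable_sparse_dir_cq_mode(void)
{
	uint32_t ctrl = sketch_csr_rd(SKETCH_CHP_CFG_CHP_CSR_CTRL);

	SKETCH_BIT_SET(ctrl, SKETCH_CFG_64BYTES_QE_DIR_CQ_MODE);
	sketch_csr_wr(SKETCH_CHP_CFG_CHP_CSR_CTRL, ctrl);
}

int main(void)
{
	sketch_enable_sparse_dir_cq_mode();
	printf("CSR_CTRL = 0x%08x\n", sketch_csr_rd(SKETCH_CHP_CFG_CHP_CSR_CTRL));
	return 0;
}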
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 22 -----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 39 +++++++++++++++++++
 2 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f05f750f5..d53cce643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,28 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8cd1762cf..0f18bfeff 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6089,3 +6089,42 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 
 	return num;
 }
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ * ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
-- 
2.23.0
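As a usage note, the kernel-doc above requires both enable calls to land
before any scheduling domain is configured. The sketch below shows that
ordering only; the stubbed struct dlb2_hw, the printouts, and
sketch_init_device() are hypothetical stand-ins for the driver's real
probe/init path, not the actual DPDK code.

#include <stdio.h>

/* Opaque stand-in for the driver's handle type (illustrative only). */
struct dlb2_hw { int unused; };

/* Stubs standing in for the two functions added by this patch. */
static void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
{
	(void)hw;
	printf("directed CQs: one 64B QE per cache line\n");
}

static void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
{
	(void)hw;
	printf("load-balanced CQs: one 64B QE per cache line\n");
}

/* Hypothetical init path: sparse CQ mode is latched before any domain exists. */
static void sketch_init_device(struct dlb2_hw *hw, int sparse_cq)
{
	if (sparse_cq) {
		dlb2_hw_enable_sparse_dir_cq_mode(hw);
		dlb2_hw_enable_sparse_ldb_cq_mode(hw);
	}

	/* ... scheduling domains may be configured only after this point ... */
}

int main(void)
{
	struct dlb2_hw hw = { 0 };

	sketch_init_device(&hw, 1);
	return 0;
}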