From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id C0DEA43F5A;
	Wed, 1 May 2024 09:52:01 +0200 (CEST)
From: Ian Stokes
To: dev@dpdk.org
Cc: bruce.richardson@intel.com, Ian Stokes
Subject: [RFC] net/ice: Update base code with latest snapshot.
Date: Tue, 30 Apr 2024 16:40:14 +0100
Message-Id: <20240430154014.1026-1-ian.stokes@intel.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

The purpose of this patch is to update the shared code for the ice driver. The patch contains not only the updated base code files but also the changes within the DPDK code base required to enable compilation and basic functionality. The intent is to allow further regression testing while a final patch series is prepared that splits the code changes in the expected manner.

Signed-off-by: Ian Stokes
---
 drivers/net/ice/base/ice_acl.c | 58 +-
 drivers/net/ice/base/ice_acl.h | 50 +-
 drivers/net/ice/base/ice_acl_ctrl.c | 45 +-
 drivers/net/ice/base/ice_adminq_cmd.h | 459 ++-
 drivers/net/ice/base/ice_bitops.h | 8 +-
 drivers/net/ice/base/ice_cgu_regs.h | 90 +
 drivers/net/ice/base/ice_common.c | 1621 +++---
 drivers/net/ice/base/ice_common.h | 189 +-
 drivers/net/ice/base/ice_controlq.c | 213 +-
 drivers/net/ice/base/ice_controlq.h | 28 +-
 drivers/net/ice/base/ice_dcb.c | 79 +-
 drivers/net/ice/base/ice_dcb.h | 32 +-
 drivers/net/ice/base/ice_ddp.c | 115 +-
 drivers/net/ice/base/ice_ddp.h | 18 +-
 drivers/net/ice/base/ice_devids.h | 39 +-
 drivers/net/ice/base/ice_fdir.c | 50 +-
 drivers/net/ice/base/ice_fdir.h | 18 +-
 drivers/net/ice/base/ice_flex_pipe.c | 385 +-
 drivers/net/ice/base/ice_flex_pipe.h | 32 +-
 drivers/net/ice/base/ice_flex_type.h | 41 +-
 drivers/net/ice/base/ice_flow.c | 337 +-
 drivers/net/ice/base/ice_flow.h | 52 +-
 drivers/net/ice/base/ice_fwlog.c | 5 +
 drivers/net/ice/base/ice_fwlog.h | 4 +
 drivers/net/ice/base/ice_hw_autogen.h | 2569 ++++--
 drivers/net/ice/base/ice_lan_tx_rx.h | 87 +-
 drivers/net/ice/base/ice_metainit.c | 2 +-
 drivers/net/ice/base/ice_nvm.c | 321 +-
 drivers/net/ice/base/ice_nvm.h | 36 +-
 drivers/net/ice/base/ice_parser.c | 57 +-
 drivers/net/ice/base/ice_parser.h | 30 +-
 drivers/net/ice/base/ice_parser_rt.c | 88 +-
 drivers/net/ice/base/ice_parser_rt.h | 23 +-
 drivers/net/ice/base/ice_phy_regs.h | 84 +
 drivers/net/ice/base/ice_protocol_type.h | 17 +-
 drivers/net/ice/base/ice_ptp_consts.h | 3 +-
 drivers/net/ice/base/ice_ptp_hw.c | 4593 ++++------
 drivers/net/ice/base/ice_ptp_hw.h | 332 +-
 drivers/net/ice/base/ice_sbq_cmd.h | 1 -
 drivers/net/ice/base/ice_sched.c | 556 +--
 drivers/net/ice/base/ice_sched.h | 130 +-
 drivers/net/ice/base/ice_switch.c | 1893 +++----
 drivers/net/ice/base/ice_switch.h | 241 +-
 drivers/net/ice/base/ice_type.h | 194 +-
 drivers/net/ice/base/ice_vf_mbx.c | 4 +
 drivers/net/ice/base/ice_vf_mbx.h | 4 +
 drivers/net/ice/base/ice_vlan_mode.c | 47 +-
 drivers/net/ice/base/ice_vlan_mode.h | 2 +-
 drivers/net/ice/base/ice_xlt_kb.c | 4 +-
 drivers/net/ice/base/meson.build | 2 +
 drivers/net/ice/ice_diagnose.c | 9 +-
 drivers/net/ice/ice_ethdev.c | 55 +-
 52 files changed, 9105 insertions(+), 6247 deletions(-)
 create mode 100644 drivers/net/ice/base/ice_fwlog.c
 create mode 100644 drivers/net/ice/base/ice_fwlog.h
 create mode 100644 drivers/net/ice/base/ice_phy_regs.h
 create mode 100644 drivers/net/ice/base/ice_vf_mbx.c
 create mode 100644 drivers/net/ice/base/ice_vf_mbx.h

diff --git a/drivers/net/ice/base/ice_acl.c b/drivers/net/ice/base/ice_acl.c
index fd9c6d5c14..6ace29c946 100644
--- a/drivers/net/ice/base/ice_acl.c
+++ b/drivers/net/ice/base/ice_acl.c
@@ -13,7 +13,7 @@
  *
  * Allocate ACL table (indirect 0x0C10)
  */
-enum ice_status
+int
 ice_aq_alloc_acl_tbl(struct ice_hw *hw, struct ice_acl_alloc_tbl *tbl,
		      struct ice_sq_cd *cd)
 {
@@ -64,7 +64,7 @@ ice_aq_alloc_acl_tbl(struct ice_hw *hw, struct ice_acl_alloc_tbl *tbl,
  * format is 'struct ice_aqc_acl_generic', pass ptr to that struct
  * as 'buf' and its size as
'buf_size' */ -enum ice_status +int ice_aq_dealloc_acl_tbl(struct ice_hw *hw, u16 alloc_id, struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd) { @@ -78,7 +78,7 @@ ice_aq_dealloc_acl_tbl(struct ice_hw *hw, u16 alloc_id, return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); } -static enum ice_status +static int ice_aq_acl_entry(struct ice_hw *hw, u16 opcode, u8 tcam_idx, u16 entry_idx, struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd) { @@ -107,7 +107,7 @@ ice_aq_acl_entry(struct ice_hw *hw, u16 opcode, u8 tcam_idx, u16 entry_idx, * * Program ACL entry (direct 0x0C20) */ -enum ice_status +int ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd) { @@ -128,7 +128,7 @@ ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, * NOTE: Caller of this API to parse 'buf' appropriately since it contains * response (key and key invert) */ -enum ice_status +int ice_aq_query_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd) { @@ -137,7 +137,7 @@ ice_aq_query_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, } /* Helper function to alloc/dealloc ACL action pair */ -static enum ice_status +static int ice_aq_actpair_a_d(struct ice_hw *hw, u16 opcode, u16 alloc_id, struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd) { @@ -163,7 +163,7 @@ ice_aq_actpair_a_d(struct ice_hw *hw, u16 opcode, u16 alloc_id, * This command doesn't need and doesn't have its own command buffer * but for response format is as specified in 'struct ice_aqc_acl_generic' */ -enum ice_status +int ice_aq_alloc_actpair(struct ice_hw *hw, u16 alloc_id, struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd) { @@ -180,7 +180,7 @@ ice_aq_alloc_actpair(struct ice_hw *hw, u16 alloc_id, * * Deallocate ACL actionpair (direct 0x0C13) */ -enum ice_status +int ice_aq_dealloc_actpair(struct ice_hw *hw, u16 alloc_id, struct ice_aqc_acl_generic *buf, struct ice_sq_cd 
*cd) { @@ -189,7 +189,7 @@ ice_aq_dealloc_actpair(struct ice_hw *hw, u16 alloc_id, } /* Helper function to program/query ACL action pair */ -static enum ice_status +static int ice_aq_actpair_p_q(struct ice_hw *hw, u16 opcode, u8 act_mem_idx, u16 act_entry_idx, struct ice_aqc_actpair *buf, struct ice_sq_cd *cd) @@ -219,7 +219,7 @@ ice_aq_actpair_p_q(struct ice_hw *hw, u16 opcode, u8 act_mem_idx, * * Program action entries (indirect 0x0C1C) */ -enum ice_status +int ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, struct ice_aqc_actpair *buf, struct ice_sq_cd *cd) { @@ -237,7 +237,7 @@ ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, * * Query ACL actionpair (indirect 0x0C25) */ -enum ice_status +int ice_aq_query_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, struct ice_aqc_actpair *buf, struct ice_sq_cd *cd) { @@ -253,7 +253,7 @@ ice_aq_query_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, * De-allocate ACL resources (direct 0x0C1A). 
Used by SW to release all the * resources allocated for it using a single command */ -enum ice_status ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd) +int ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd) { struct ice_aq_desc desc; @@ -272,7 +272,7 @@ enum ice_status ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd) * * This function sends ACL profile commands */ -static enum ice_status +static int ice_acl_prof_aq_send(struct ice_hw *hw, u16 opc, u8 prof_id, struct ice_aqc_acl_prof_generic_frmt *buf, struct ice_sq_cd *cd) @@ -296,7 +296,7 @@ ice_acl_prof_aq_send(struct ice_hw *hw, u16 opc, u8 prof_id, * * Program ACL profile extraction (indirect 0x0C1D) */ -enum ice_status +int ice_prgm_acl_prof_xtrct(struct ice_hw *hw, u8 prof_id, struct ice_aqc_acl_prof_generic_frmt *buf, struct ice_sq_cd *cd) @@ -314,7 +314,7 @@ ice_prgm_acl_prof_xtrct(struct ice_hw *hw, u8 prof_id, * * Query ACL profile (indirect 0x0C21) */ -enum ice_status +int ice_query_acl_prof(struct ice_hw *hw, u8 prof_id, struct ice_aqc_acl_prof_generic_frmt *buf, struct ice_sq_cd *cd) @@ -330,9 +330,9 @@ ice_query_acl_prof(struct ice_hw *hw, u8 prof_id, * This function checks the counter bank range for counter type and returns * success or failure. */ -static enum ice_status ice_aq_acl_cntrs_chk_params(struct ice_acl_cntrs *cntrs) +static int ice_aq_acl_cntrs_chk_params(struct ice_acl_cntrs *cntrs) { - enum ice_status status = ICE_SUCCESS; + int status = 0; if (!cntrs || !cntrs->amount) return ICE_ERR_PARAM; @@ -373,14 +373,14 @@ static enum ice_status ice_aq_acl_cntrs_chk_params(struct ice_acl_cntrs *cntrs) * unsuccessful if returned counter value is invalid. In this case it returns * an error otherwise success. 
*/ -enum ice_status +int ice_aq_alloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, struct ice_sq_cd *cd) { struct ice_aqc_acl_alloc_counters *cmd; u16 first_cntr, last_cntr; struct ice_aq_desc desc; - enum ice_status status; + int status; /* check for invalid params */ status = ice_aq_acl_cntrs_chk_params(cntrs); @@ -412,13 +412,13 @@ ice_aq_alloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, * * De-allocate ACL counters (direct 0x0C17) */ -enum ice_status +int ice_aq_dealloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, struct ice_sq_cd *cd) { struct ice_aqc_acl_dealloc_counters *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; /* check for invalid params */ status = ice_aq_acl_cntrs_chk_params(cntrs); @@ -443,7 +443,7 @@ ice_aq_dealloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, * * Program ACL profile ranges (indirect 0x0C1E) */ -enum ice_status +int ice_prog_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, struct ice_aqc_acl_profile_ranges *buf, struct ice_sq_cd *cd) @@ -466,7 +466,7 @@ ice_prog_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, * * Query ACL profile ranges (indirect 0x0C22) */ -enum ice_status +int ice_query_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, struct ice_aqc_acl_profile_ranges *buf, struct ice_sq_cd *cd) @@ -488,13 +488,13 @@ ice_query_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, * * Allocate ACL scenario (indirect 0x0C14) */ -enum ice_status +int ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id, struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd) { struct ice_aqc_acl_alloc_scen *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; if (!scen_id) return ICE_ERR_PARAM; @@ -518,7 +518,7 @@ ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id, * * Deallocate ACL scenario (direct 0x0C15) */ -enum ice_status +int ice_aq_dealloc_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_sq_cd *cd) { struct ice_aqc_acl_dealloc_scen *cmd; @@ -541,7 +541,7 @@ 
ice_aq_dealloc_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_sq_cd *cd) * * Calls update or query ACL scenario */ -static enum ice_status +static int ice_aq_update_query_scen(struct ice_hw *hw, u16 opcode, u16 scen_id, struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd) { @@ -566,7 +566,7 @@ ice_aq_update_query_scen(struct ice_hw *hw, u16 opcode, u16 scen_id, * * Update ACL scenario (indirect 0x0C1B) */ -enum ice_status +int ice_aq_update_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd) { @@ -583,7 +583,7 @@ ice_aq_update_acl_scen(struct ice_hw *hw, u16 scen_id, * * Query ACL scenario (indirect 0x0C23) */ -enum ice_status +int ice_aq_query_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd) { diff --git a/drivers/net/ice/base/ice_acl.h b/drivers/net/ice/base/ice_acl.h index ac703be0a1..f7bb4c687d 100644 --- a/drivers/net/ice/base/ice_acl.h +++ b/drivers/net/ice/base/ice_acl.h @@ -126,77 +126,77 @@ struct ice_acl_cntrs { u16 last_cntr; }; -enum ice_status +int ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params); -enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw); -enum ice_status +int ice_acl_destroy_tbl(struct ice_hw *hw); +int ice_acl_create_scen(struct ice_hw *hw, u16 match_width, u16 num_entries, u16 *scen_id); -enum ice_status +int ice_aq_alloc_acl_tbl(struct ice_hw *hw, struct ice_acl_alloc_tbl *tbl, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_dealloc_acl_tbl(struct ice_hw *hw, u16 alloc_id, struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_query_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_alloc_actpair(struct ice_hw *hw, u16 alloc_id, struct ice_aqc_acl_generic 
*buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_dealloc_actpair(struct ice_hw *hw, u16 alloc_id, struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, struct ice_aqc_actpair *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_query_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, struct ice_aqc_actpair *buf, struct ice_sq_cd *cd); -enum ice_status ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd); +int ice_prgm_acl_prof_xtrct(struct ice_hw *hw, u8 prof_id, struct ice_aqc_acl_prof_generic_frmt *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_query_acl_prof(struct ice_hw *hw, u8 prof_id, struct ice_aqc_acl_prof_generic_frmt *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_alloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_dealloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, struct ice_sq_cd *cd); -enum ice_status +int ice_prog_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, struct ice_aqc_acl_profile_ranges *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_query_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, struct ice_aqc_acl_profile_ranges *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id, struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_dealloc_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_update_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_query_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd); -enum ice_status +int ice_acl_add_entry(struct ice_hw *hw, struct ice_acl_scen *scen, enum 
ice_acl_entry_prio prio, u8 *keys, u8 *inverts, struct ice_acl_act_entry *acts, u8 acts_cnt, u16 *entry_idx); -enum ice_status +int ice_acl_prog_act(struct ice_hw *hw, struct ice_acl_scen *scen, struct ice_acl_act_entry *acts, u8 acts_cnt, u16 entry_idx); -enum ice_status +int ice_acl_rem_entry(struct ice_hw *hw, struct ice_acl_scen *scen, u16 entry_idx); #endif /* _ICE_ACL_H_ */ diff --git a/drivers/net/ice/base/ice_acl_ctrl.c b/drivers/net/ice/base/ice_acl_ctrl.c index 2223a8313b..c31ba51fa2 100644 --- a/drivers/net/ice/base/ice_acl_ctrl.c +++ b/drivers/net/ice/base/ice_acl_ctrl.c @@ -74,7 +74,7 @@ ice_acl_scen_assign_entry_idx(struct ice_acl_scen *scen, * * To mark an entry available in scenario */ -static enum ice_status +static int ice_acl_scen_free_entry_idx(struct ice_acl_scen *scen, u16 idx) { if (idx >= scen->num_entry) @@ -83,7 +83,7 @@ ice_acl_scen_free_entry_idx(struct ice_acl_scen *scen, u16 idx) if (!ice_test_and_clear_bit(idx, scen->entry_bitmap)) return ICE_ERR_DOES_NOT_EXIST; - return ICE_SUCCESS; + return 0; } /** @@ -141,12 +141,12 @@ static u16 ice_acl_tbl_calc_end_idx(u16 start, u16 num_entries, u16 width) * * Initialize the ACL table by invalidating TCAM entries and action pairs. */ -static enum ice_status ice_acl_init_tbl(struct ice_hw *hw) +static int ice_acl_init_tbl(struct ice_hw *hw) { struct ice_aqc_actpair act_buf; struct ice_aqc_acl_data buf; - enum ice_status status = ICE_SUCCESS; struct ice_acl_tbl *tbl; + int status = 0; u8 tcam_idx, i; u16 idx; @@ -303,14 +303,14 @@ static void ice_acl_divide_act_mems_to_tcams(struct ice_acl_tbl *tbl) * values for the size of the table, but this will need to grow as more flow * entries are added by the user level. 
*/ -enum ice_status +int ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params) { u16 width, depth, first_e, last_e, i; struct ice_aqc_acl_generic *resp_buf; struct ice_acl_alloc_tbl tbl_alloc; struct ice_acl_tbl *tbl; - enum ice_status status; + int status; if (hw->acl_tbl) return ICE_ERR_ALREADY_EXISTS; @@ -423,7 +423,7 @@ ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params) * @hw: pointer to the hardware structure * @req: info of partition being allocated */ -static enum ice_status +static int ice_acl_alloc_partition(struct ice_hw *hw, struct ice_acl_scen *req) { u16 start = 0, cnt = 0, off = 0; @@ -547,7 +547,7 @@ ice_acl_alloc_partition(struct ice_hw *hw, struct ice_acl_scen *req) } } while (!done); - return cnt >= r_entries ? ICE_SUCCESS : ICE_ERR_MAX_LIMIT; + return cnt >= r_entries ? 0 : ICE_ERR_MAX_LIMIT; } /** @@ -737,14 +737,14 @@ ice_acl_commit_partition(struct ice_hw *hw, struct ice_acl_scen *scen, * @num_entries: number of entries to be allocated for the scenario * @scen_id: holds returned scenario ID if successful */ -enum ice_status +int ice_acl_create_scen(struct ice_hw *hw, u16 match_width, u16 num_entries, u16 *scen_id) { u8 cascade_cnt, first_tcam, last_tcam, i, k; struct ice_aqc_acl_scen scen_buf; struct ice_acl_scen *scen; - enum ice_status status; + int status; if (!hw->acl_tbl) return ICE_ERR_DOES_NOT_EXIST; @@ -845,11 +845,11 @@ ice_acl_create_scen(struct ice_hw *hw, u16 match_width, u16 num_entries, * @hw: pointer to the HW struct * @scen_id: ID of the remove scenario */ -static enum ice_status ice_acl_destroy_scen(struct ice_hw *hw, u16 scen_id) +static int ice_acl_destroy_scen(struct ice_hw *hw, u16 scen_id) { struct ice_acl_scen *scen, *tmp_scen; struct ice_flow_prof *p, *tmp; - enum ice_status status; + int status; if (!hw->acl_tbl) return ICE_ERR_DOES_NOT_EXIST; @@ -882,19 +882,19 @@ static enum ice_status ice_acl_destroy_scen(struct ice_hw *hw, u16 scen_id) ice_free(hw, scen); } - return 
ICE_SUCCESS; + return 0; } /** * ice_acl_destroy_tbl - Destroy a previously created LEM table for ACL * @hw: pointer to the HW struct */ -enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw) +int ice_acl_destroy_tbl(struct ice_hw *hw) { struct ice_acl_scen *pos_scen, *tmp_scen; struct ice_aqc_acl_generic resp_buf; struct ice_aqc_acl_scen buf; - enum ice_status status; + int status; u8 i; if (!hw->acl_tbl) @@ -947,7 +947,7 @@ enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw) ice_free(hw, hw->acl_tbl); hw->acl_tbl = NULL; - return ICE_SUCCESS; + return 0; } /** @@ -966,7 +966,7 @@ enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw) * The "keys" and "inverts" buffers must be of the size which is the same as * the scenario's width */ -enum ice_status +int ice_acl_add_entry(struct ice_hw *hw, struct ice_acl_scen *scen, enum ice_acl_entry_prio prio, u8 *keys, u8 *inverts, struct ice_acl_act_entry *acts, u8 acts_cnt, u16 *entry_idx) @@ -974,7 +974,7 @@ ice_acl_add_entry(struct ice_hw *hw, struct ice_acl_scen *scen, struct ice_aqc_acl_data buf; u8 entry_tcam, offset; u16 i, num_cscd, idx; - enum ice_status status = ICE_SUCCESS; + int status = 0; if (!scen) return ICE_ERR_DOES_NOT_EXIST; @@ -1043,14 +1043,14 @@ ice_acl_add_entry(struct ice_hw *hw, struct ice_acl_scen *scen, * * Program a scenario's action memory */ -enum ice_status +int ice_acl_prog_act(struct ice_hw *hw, struct ice_acl_scen *scen, struct ice_acl_act_entry *acts, u8 acts_cnt, u16 entry_idx) { u16 idx, entry_tcam, num_cscd, i, actx_idx = 0; struct ice_aqc_actpair act_buf; - enum ice_status status = ICE_SUCCESS; + int status = 0; if (entry_idx >= scen->num_entry) return ICE_ERR_MAX_LIMIT; @@ -1105,13 +1105,13 @@ ice_acl_prog_act(struct ice_hw *hw, struct ice_acl_scen *scen, * @scen: scenario to remove the entry from * @entry_idx: the scenario-relative index of the flow entry being removed */ -enum ice_status +int ice_acl_rem_entry(struct ice_hw *hw, struct ice_acl_scen *scen, u16 entry_idx) { struct 
ice_aqc_actpair act_buf; struct ice_aqc_acl_data buf; - enum ice_status status = ICE_SUCCESS; u16 num_cscd, idx, i; + int status = 0; u8 entry_tcam; if (!scen) @@ -1161,3 +1161,4 @@ ice_acl_rem_entry(struct ice_hw *hw, struct ice_acl_scen *scen, u16 entry_idx) return status; } + diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index 844e90bbce..33413c6c12 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -109,7 +109,6 @@ struct ice_aqc_list_caps { struct ice_aqc_list_caps_elem { __le16 cap; #define ICE_AQC_CAPS_VALID_FUNCTIONS 0x0005 -#define ICE_AQC_MAX_VALID_FUNCTIONS 0x8 #define ICE_AQC_CAPS_VSI 0x0017 #define ICE_AQC_CAPS_DCB 0x0018 #define ICE_AQC_CAPS_RSS 0x0040 @@ -120,6 +119,7 @@ struct ice_aqc_list_caps_elem { #define ICE_AQC_CAPS_1588 0x0046 #define ICE_AQC_CAPS_MAX_MTU 0x0047 #define ICE_AQC_CAPS_IWARP 0x0051 +#define ICE_AQC_CAPS_SENSOR_READING 0x0067 #define ICE_AQC_CAPS_PCIE_RESET_AVOIDANCE 0x0076 #define ICE_AQC_CAPS_POST_UPDATE_RESET_RESTRICT 0x0077 #define ICE_AQC_CAPS_NVM_MGMT 0x0080 @@ -129,8 +129,11 @@ struct ice_aqc_list_caps_elem { #define ICE_AQC_CAPS_EXT_TOPO_DEV_IMG3 0x0084 #define ICE_AQC_CAPS_TX_SCHED_TOPO_COMP_MODE 0x0085 #define ICE_AQC_CAPS_NAC_TOPOLOGY 0x0087 +#define ICE_AQC_CAPS_OROM_RECOVERY_UPDATE 0x0090 #define ICE_AQC_CAPS_ROCEV2_LAG 0x0092 - +#define ICE_AQC_BIT_ROCEV2_LAG 0x01 +#define ICE_AQC_BIT_SRIOV_LAG 0x02 +#define ICE_AQC_CAPS_NEXT_CLUSTER_ID 0x0096 u8 major_ver; u8 minor_ver; /* Number of resources described by this capability */ @@ -263,7 +266,12 @@ struct ice_aqc_set_port_params { (0x3F << ICE_AQC_SET_P_PARAMS_LOGI_PORT_ID_S) #define ICE_AQC_SET_P_PARAMS_IS_LOGI_PORT BIT(14) #define ICE_AQC_SET_P_PARAMS_SWID_VALID BIT(15) - u8 reserved[10]; + u8 lb_mode; +#define ICE_AQC_SET_P_PARAMS_LOOPBACK_MODE_VALID BIT(2) +#define ICE_AQC_SET_P_PARAMS_LOOPBACK_MODE_NORMAL 0x00 +#define ICE_AQC_SET_P_PARAMS_LOOPBACK_MODE_NO 0x01 +#define 
ICE_AQC_SET_P_PARAMS_LOOPBACK_MODE_HIGH 0x02 + u8 reserved[9]; }; /* These resource type defines are used for all switch resource @@ -306,6 +314,8 @@ struct ice_aqc_set_port_params { #define ICE_AQC_RES_TYPE_FLAG_SHARED BIT(7) #define ICE_AQC_RES_TYPE_FLAG_SCAN_BOTTOM BIT(12) #define ICE_AQC_RES_TYPE_FLAG_IGNORE_INDEX BIT(13) +#define ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_SHARED BIT(14) +#define ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_CTL BIT(15) #define ICE_AQC_RES_TYPE_FLAG_DEDICATED 0x00 @@ -486,6 +496,7 @@ struct ice_aqc_vsi_props { #define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S 0 #define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M (0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S) #define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA BIT(0) +#define ICE_AQ_VSI_SW_FLAG_RX_PASS_PRUNE_ENA BIT(3) #define ICE_AQ_VSI_SW_FLAG_LAN_ENA BIT(4) u8 veb_stat_id; #define ICE_AQ_VSI_SW_VEB_STAT_ID_S 0 @@ -552,7 +563,6 @@ struct ice_aqc_vsi_props { #define ICE_AQ_VSI_OUTER_TAG_VLAN_8100 0x2 #define ICE_AQ_VSI_OUTER_TAG_VLAN_9100 0x3 #define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT BIT(4) -#define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST BIT(6) #define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S 5 #define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M (0x3 << ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) #define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED 0x1 @@ -752,7 +762,7 @@ struct ice_aqc_recipe_content { #define ICE_AQ_RECIPE_ID_S 0 #define ICE_AQ_RECIPE_ID_M (0x3F << ICE_AQ_RECIPE_ID_S) #define ICE_AQ_RECIPE_ID_IS_ROOT BIT(7) -#define ICE_AQ_SW_ID_LKUP_IDX 0 +#define ICE_AQ_SW_ID_LKUP_IDX 0 u8 lkup_indx[5]; #define ICE_AQ_RECIPE_LKUP_DATA_S 0 #define ICE_AQ_RECIPE_LKUP_DATA_M (0x3F << ICE_AQ_RECIPE_LKUP_DATA_S) @@ -796,7 +806,7 @@ struct ice_aqc_recipe_data_elem { struct ice_aqc_recipe_to_profile { __le16 profile_id; u8 rsvd[6]; - ice_declare_bitmap(recipe_assoc, ICE_MAX_NUM_RECIPES); + u8 recipe_assoc[DIVIDE_AND_ROUND_UP(ICE_MAX_NUM_RECIPES, BITS_PER_BYTE)]; }; /* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3) @@ -813,12 
+823,30 @@ struct ice_aqc_sw_rules { __le32 addr_low; }; +/* Add switch rule response: + * Content of return buffer is same as the input buffer. The status field and + * LUT index are updated as part of the response + */ +struct ice_aqc_sw_rules_elem_hdr { + __le16 type; /* Switch rule type, one of T_... */ +#define ICE_AQC_SW_RULES_T_LKUP_RX 0x0 +#define ICE_AQC_SW_RULES_T_LKUP_TX 0x1 +#define ICE_AQC_SW_RULES_T_LG_ACT 0x2 +#define ICE_AQC_SW_RULES_T_VSI_LIST_SET 0x3 +#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR 0x4 +#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET 0x5 +#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR 0x6 + __le16 status; +}; + /* Add/Update/Get/Remove lookup Rx/Tx command/response entry * This structures describes the lookup rules and associated actions. "index" * is returned as part of a response to a successful Add command, and can be * used to identify the rule for Update/Get/Remove commands. */ struct ice_sw_rule_lkup_rx_tx { + struct ice_aqc_sw_rules_elem_hdr hdr; + __le16 recipe_id; #define ICE_SW_RECIPE_LOGICAL_PORT_FWD 10 /* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */ @@ -866,6 +894,8 @@ struct ice_sw_rule_lkup_rx_tx { #define ICE_SINGLE_ACT_PTR 0x2 #define ICE_SINGLE_ACT_PTR_VAL_S 4 #define ICE_SINGLE_ACT_PTR_VAL_M (0x1FFF << ICE_SINGLE_ACT_PTR_VAL_S) + /* Bit 17 should be set if pointed action includes a FWD cmd */ +#define ICE_SINGLE_ACT_PTR_HAS_FWD BIT(17) /* Bit 18 should be set to 1 */ #define ICE_SINGLE_ACT_PTR_BIT BIT(18) @@ -895,14 +925,17 @@ struct ice_sw_rule_lkup_rx_tx { * lookup-type */ __le16 hdr_len; - u8 hdr[STRUCT_HACK_VAR_LEN]; + u8 hdr_data[STRUCT_HACK_VAR_LEN]; }; +#pragma pack(1) /* Add/Update/Remove large action command/response entry * "index" is returned as part of a response to a successful Add command, and * can be used to identify the action for Update/Get/Remove commands. 
*/ struct ice_sw_rule_lg_act { + struct ice_aqc_sw_rules_elem_hdr hdr; + __le16 index; /* Index in large action table */ __le16 size; /* Max number of large actions */ @@ -957,63 +990,24 @@ struct ice_sw_rule_lg_act { #define ICE_LG_ACT_STAT_COUNT_M (0x7F << ICE_LG_ACT_STAT_COUNT_S) __le32 act[STRUCT_HACK_VAR_LEN]; /* array of size for actions */ }; +#pragma pack() +#pragma pack(1) /* Add/Update/Remove VSI list command/response entry * "index" is returned as part of a response to a successful Add command, and * can be used to identify the VSI list for Update/Get/Remove commands. */ struct ice_sw_rule_vsi_list { + struct ice_aqc_sw_rules_elem_hdr hdr; + __le16 index; /* Index of VSI/Prune list */ __le16 number_vsi; __le16 vsi[STRUCT_HACK_VAR_LEN]; /* Array of number_vsi VSI numbers */ }; - -#pragma pack(1) -/* Query VSI list command/response entry */ -struct ice_sw_rule_vsi_list_query { - __le16 index; - ice_declare_bitmap(vsi_list, ICE_MAX_VSI); -}; -#pragma pack() - -#pragma pack(1) -/* Add switch rule response: - * Content of return buffer is same as the input buffer. The status field and - * LUT index are updated as part of the response - */ -struct ice_aqc_sw_rules_elem { - __le16 type; /* Switch rule type, one of T_... 
*/ -#define ICE_AQC_SW_RULES_T_LKUP_RX 0x0 -#define ICE_AQC_SW_RULES_T_LKUP_TX 0x1 -#define ICE_AQC_SW_RULES_T_LG_ACT 0x2 -#define ICE_AQC_SW_RULES_T_VSI_LIST_SET 0x3 -#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR 0x4 -#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET 0x5 -#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR 0x6 - __le16 status; - union { - struct ice_sw_rule_lkup_rx_tx lkup_tx_rx; - struct ice_sw_rule_lg_act lg_act; - struct ice_sw_rule_vsi_list vsi_list; - struct ice_sw_rule_vsi_list_query vsi_list_query; - } pdata; -}; - #pragma pack() -/* PFC Ignore (direct 0x0301) - * The command and response use the same descriptor structure - */ -struct ice_aqc_pfc_ignore { - u8 tc_bitmap; - u8 cmd_flags; /* unused in response */ -#define ICE_AQC_PFC_IGNORE_SET BIT(7) -#define ICE_AQC_PFC_IGNORE_CLEAR 0 - u8 reserved[14]; -}; - -/* Set PFC Mode (direct 0x0303) - * Query PFC Mode (direct 0x0302) +/* Query PFC Mode (direct 0x0302) + * Set PFC Mode (direct 0x0303) */ struct ice_aqc_set_query_pfc_mode { u8 pfc_mode; @@ -1026,17 +1020,6 @@ struct ice_aqc_set_query_pfc_mode { u8 rsvd[15]; }; -/* Set DCB Parameters (direct 0x0306) */ -struct ice_aqc_set_dcb_params { - u8 cmd_flags; /* unused in response */ -#define ICE_AQC_LINK_UP_DCB_CFG BIT(0) -#define ICE_AQC_PERSIST_DCB_CFG BIT(1) - u8 valid_flags; /* unused in response */ -#define ICE_AQC_LINK_UP_DCB_CFG_VALID BIT(0) -#define ICE_AQC_PERSIST_DCB_CFG_VALID BIT(1) - u8 rsvd[14]; -}; - /* Get Default Topology (indirect 0x0400) */ struct ice_aqc_get_topo { u8 port_num; @@ -1116,9 +1099,9 @@ struct ice_aqc_txsched_elem { u8 generic; #define ICE_AQC_ELEM_GENERIC_MODE_M 0x1 #define ICE_AQC_ELEM_GENERIC_PRIO_S 0x1 -#define ICE_AQC_ELEM_GENERIC_PRIO_M (0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S) +#define ICE_AQC_ELEM_GENERIC_PRIO_M (0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S) #define ICE_AQC_ELEM_GENERIC_SP_S 0x4 -#define ICE_AQC_ELEM_GENERIC_SP_M (0x1 << ICE_AQC_ELEM_GENERIC_SP_S) +#define ICE_AQC_ELEM_GENERIC_SP_M (0x1 << ICE_AQC_ELEM_GENERIC_SP_S) 
#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S 0x5 #define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_M \ (0x3 << ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S) @@ -1308,10 +1291,11 @@ struct ice_aqc_get_phy_caps { /* 18.0 - Report qualified modules */ #define ICE_AQC_GET_PHY_RQM BIT(0) /* 18.1 - 18.3 : Report mode - * 000b - Report NVM capabilities - * 001b - Report topology capabilities - * 010b - Report SW configured - * 100b - Report default capabilities + * 000b - Report topology capabilities, without media + * 001b - Report topology capabilities, with media + * 010b - Report Active configuration + * 011b - Report PHY Type and FEC mode capabilities + * 100b - Report Default capabilities */ #define ICE_AQC_REPORT_MODE_S 1 #define ICE_AQC_REPORT_MODE_M (7 << ICE_AQC_REPORT_MODE_S) @@ -1398,7 +1382,18 @@ struct ice_aqc_get_phy_caps { #define ICE_PHY_TYPE_HIGH_100G_CAUI2 BIT_ULL(2) #define ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC BIT_ULL(3) #define ICE_PHY_TYPE_HIGH_100G_AUI2 BIT_ULL(4) -#define ICE_PHY_TYPE_HIGH_MAX_INDEX 4 +#define ICE_PHY_TYPE_HIGH_200G_CR4_PAM4 BIT_ULL(5) +#define ICE_PHY_TYPE_HIGH_200G_SR4 BIT_ULL(6) +#define ICE_PHY_TYPE_HIGH_200G_FR4 BIT_ULL(7) +#define ICE_PHY_TYPE_HIGH_200G_LR4 BIT_ULL(8) +#define ICE_PHY_TYPE_HIGH_200G_DR4 BIT_ULL(9) +#define ICE_PHY_TYPE_HIGH_200G_KR4_PAM4 BIT_ULL(10) +#define ICE_PHY_TYPE_HIGH_200G_AUI4_AOC_ACC BIT_ULL(11) +#define ICE_PHY_TYPE_HIGH_200G_AUI4 BIT_ULL(12) +#define ICE_PHY_TYPE_HIGH_200G_AUI8_AOC_ACC BIT_ULL(13) +#define ICE_PHY_TYPE_HIGH_200G_AUI8 BIT_ULL(14) +#define ICE_PHY_TYPE_HIGH_400GBASE_FR8 BIT_ULL(15) +#define ICE_PHY_TYPE_HIGH_MAX_INDEX 15 struct ice_aqc_get_phy_caps_data { __le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */ @@ -1548,7 +1543,16 @@ struct ice_aqc_get_link_status { __le32 addr_low; }; +enum ice_get_link_status_data_version { + ICE_GET_LINK_STATUS_DATA_V1 = 1, + ICE_GET_LINK_STATUS_DATA_V2 = 2, +}; + +#define ICE_GET_LINK_STATUS_DATALEN_V1 32 +#define ICE_GET_LINK_STATUS_DATALEN_V2 56 + /* Get link 
status response data structure, also used for Link Status Event */ +#pragma pack(1) struct ice_aqc_get_link_status_data { u8 topo_media_conflict; #define ICE_AQ_LINK_TOPO_CONFLICT BIT(0) @@ -1633,12 +1637,37 @@ struct ice_aqc_get_link_status_data { #define ICE_AQ_LINK_SPEED_40GB BIT(8) #define ICE_AQ_LINK_SPEED_50GB BIT(9) #define ICE_AQ_LINK_SPEED_100GB BIT(10) +#define ICE_AQ_LINK_SPEED_200GB BIT(11) #define ICE_AQ_LINK_SPEED_UNKNOWN BIT(15) - __le32 reserved3; /* Aligns next field to 8-byte boundary */ + __le16 reserved3; /* Aligns next field to 8-byte boundary */ + u8 ext_fec_status; +#define ICE_AQ_LINK_RS_272_FEC_EN BIT(0) /* RS 272 FEC enabled */ + u8 reserved4; __le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */ __le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */ + /* Get link status version 2 link partner data */ + __le64 lp_phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */ + __le64 lp_phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */ + u8 lp_fec_adv; +#define ICE_AQ_LINK_LP_10G_KR_FEC_CAP BIT(0) +#define ICE_AQ_LINK_LP_25G_KR_FEC_CAP BIT(1) +#define ICE_AQ_LINK_LP_RS_528_FEC_CAP BIT(2) +#define ICE_AQ_LINK_LP_50G_KR_272_FEC_CAP BIT(3) +#define ICE_AQ_LINK_LP_100G_KR_272_FEC_CAP BIT(4) +#define ICE_AQ_LINK_LP_200G_KR_272_FEC_CAP BIT(5) + u8 lp_fec_req; +#define ICE_AQ_LINK_LP_10G_KR_FEC_REQ BIT(0) +#define ICE_AQ_LINK_LP_25G_KR_FEC_REQ BIT(1) +#define ICE_AQ_LINK_LP_RS_528_FEC_REQ BIT(2) +#define ICE_AQ_LINK_LP_KR_272_FEC_REQ BIT(3) + u8 lp_flowcontrol; +#define ICE_AQ_LINK_LP_PAUSE_ADV BIT(0) +#define ICE_AQ_LINK_LP_ASM_DIR_ADV BIT(1) + u8 reserved[5]; }; +#pragma pack() + /* Set event mask command (direct 0x0613) */ struct ice_aqc_set_event_mask { u8 lport_num; @@ -1655,6 +1684,7 @@ struct ice_aqc_set_event_mask { #define ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED BIT(9) #define ICE_AQ_LINK_EVENT_TOPO_CONFLICT BIT(10) #define ICE_AQ_LINK_EVENT_MEDIA_CONFLICT BIT(11) +#define ICE_AQ_LINK_EVENT_PHY_FW_LOAD_FAIL BIT(12) u8 
reserved1[6]; }; @@ -1708,6 +1738,8 @@ struct ice_aqc_link_topo_params { #define ICE_AQC_LINK_TOPO_NODE_TYPE_CAGE 6 #define ICE_AQC_LINK_TOPO_NODE_TYPE_MEZZ 7 #define ICE_AQC_LINK_TOPO_NODE_TYPE_ID_EEPROM 8 +#define ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL 9 +#define ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_MUX 10 #define ICE_AQC_LINK_TOPO_NODE_TYPE_GPS 11 #define ICE_AQC_LINK_TOPO_NODE_CTX_S 4 #define ICE_AQC_LINK_TOPO_NODE_CTX_M \ @@ -1728,8 +1760,8 @@ struct ice_aqc_link_topo_addr { #define ICE_AQC_LINK_TOPO_HANDLE_M (0x3FF << ICE_AQC_LINK_TOPO_HANDLE_S) /* Used to decode the handle field */ #define ICE_AQC_LINK_TOPO_HANDLE_BRD_TYPE_M BIT(9) -#define ICE_AQC_LINK_TOPO_HANDLE_BRD_TYPE_LOM BIT(9) -#define ICE_AQC_LINK_TOPO_HANDLE_BRD_TYPE_MEZZ 0 +#define ICE_AQC_LINK_TOPO_HANDLE_BRD_TYPE_LOM 0 +#define ICE_AQC_LINK_TOPO_HANDLE_BRD_TYPE_MEZZ BIT(9) #define ICE_AQC_LINK_TOPO_HANDLE_NODE_S 0 /* In case of a Mezzanine type */ #define ICE_AQC_LINK_TOPO_HANDLE_MEZZ_NODE_M \ @@ -1745,8 +1777,13 @@ struct ice_aqc_link_topo_addr { struct ice_aqc_get_link_topo { struct ice_aqc_link_topo_addr addr; u8 node_part_num; -#define ICE_ACQ_GET_LINK_TOPO_NODE_NR_PCA9575 0x21 -#define ICE_ACQ_GET_LINK_TOPO_NODE_NR_GEN_GPS 0x48 +#define ICE_AQC_GET_LINK_TOPO_NODE_NR_PCA9575 0x21 +#define ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032 0x24 +#define ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384 0x25 +#define ICE_AQC_GET_LINK_TOPO_NODE_NR_E822_PHY 0x30 +#define ICE_AQC_GET_LINK_TOPO_NODE_NR_C827 0x31 +#define ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_CLK_MUX 0x47 +#define ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_GPS 0x48 u8 rsvd[9]; }; @@ -1774,6 +1811,7 @@ struct ice_aqc_get_link_topo_pin { #define ICE_AQC_LINK_TOPO_IO_FUNC_RED_LED 12 #define ICE_AQC_LINK_TOPO_IO_FUNC_GREEN_LED 13 #define ICE_AQC_LINK_TOPO_IO_FUNC_BLUE_LED 14 +#define ICE_AQC_LINK_TOPO_IO_FUNC_CLK_IN 20 #define ICE_AQC_LINK_TOPO_INPUT_IO_TYPE_S 5 #define ICE_AQC_LINK_TOPO_INPUT_IO_TYPE_M \ (0x7 << ICE_AQC_LINK_TOPO_INPUT_IO_TYPE_S) @@ -1782,11 +1820,11 
@@ struct ice_aqc_get_link_topo_pin { u8 output_io_params; #define ICE_AQC_LINK_TOPO_OUTPUT_IO_FUNC_S 0 #define ICE_AQC_LINK_TOPO_OUTPUT_IO_FUNC_M \ - (0x1F << \ ICE_AQC_LINK_TOPO_INPUT_IO_FUNC_NUM_S) + (0x1F << ICE_AQC_LINK_TOPO_OUTPUT_IO_FUNC_S) /* Use ICE_AQC_LINK_TOPO_IO_FUNC_* for the non-numerical options */ #define ICE_AQC_LINK_TOPO_OUTPUT_IO_TYPE_S 5 #define ICE_AQC_LINK_TOPO_OUTPUT_IO_TYPE_M \ - (0x7 << ICE_AQC_LINK_TOPO_INPUT_IO_TYPE_S) + (0x7 << ICE_AQC_LINK_TOPO_OUTPUT_IO_TYPE_S) /* Use ICE_AQC_LINK_TOPO_NODE_TYPE_* for the type values */ u8 output_io_flags; #define ICE_AQC_LINK_TOPO_OUTPUT_SPEED_S 0 @@ -1837,6 +1875,63 @@ struct ice_aqc_set_port_id_led { u8 rsvd[13]; }; +/* Get Port Options (indirect, 0x06EA) */ +struct ice_aqc_get_port_options { + u8 lport_num; + u8 lport_num_valid; +#define ICE_AQC_PORT_OPT_PORT_NUM_VALID BIT(0) + u8 port_options_count; +#define ICE_AQC_PORT_OPT_COUNT_S 0 +#define ICE_AQC_PORT_OPT_COUNT_M (0xF << ICE_AQC_PORT_OPT_COUNT_S) +#define ICE_AQC_PORT_OPT_MAX 16 + u8 innermost_phy_index; + u8 port_options; +#define ICE_AQC_PORT_OPT_ACTIVE_S 0 +#define ICE_AQC_PORT_OPT_ACTIVE_M (0xF << ICE_AQC_PORT_OPT_ACTIVE_S) +#define ICE_AQC_PORT_OPT_FORCED BIT(6) +#define ICE_AQC_PORT_OPT_VALID BIT(7) + u8 pending_port_option_status; +#define ICE_AQC_PENDING_PORT_OPT_IDX_S 0 +#define ICE_AQC_PENDING_PORT_OPT_IDX_M (0xF << ICE_AQC_PENDING_PORT_OPT_IDX_S) +#define ICE_AQC_PENDING_PORT_OPT_VALID BIT(7) + u8 rsvd[2]; + __le32 addr_high; + __le32 addr_low; +}; + +struct ice_aqc_get_port_options_elem { + u8 pmd; +#define ICE_AQC_PORT_INV_PORT_OPT 4 +#define ICE_AQC_PORT_OPT_PMD_COUNT_S 0 +#define ICE_AQC_PORT_OPT_PMD_COUNT_M (0xF << ICE_AQC_PORT_OPT_PMD_COUNT_S) +#define ICE_AQC_PORT_OPT_PMD_WIDTH_S 4 +#define ICE_AQC_PORT_OPT_PMD_WIDTH_M (0xF << ICE_AQC_PORT_OPT_PMD_WIDTH_S) + u8 max_lane_speed; +#define ICE_AQC_PORT_OPT_MAX_LANE_S 0 +#define ICE_AQC_PORT_OPT_MAX_LANE_M (0xF << ICE_AQC_PORT_OPT_MAX_LANE_S) +#define 
ICE_AQC_PORT_OPT_MAX_LANE_100M 0 +#define ICE_AQC_PORT_OPT_MAX_LANE_1G 1 +#define ICE_AQC_PORT_OPT_MAX_LANE_2500M 2 +#define ICE_AQC_PORT_OPT_MAX_LANE_5G 3 +#define ICE_AQC_PORT_OPT_MAX_LANE_10G 4 +#define ICE_AQC_PORT_OPT_MAX_LANE_25G 5 +#define ICE_AQC_PORT_OPT_MAX_LANE_50G 6 +#define ICE_AQC_PORT_OPT_MAX_LANE_100G 7 +#define ICE_AQC_PORT_OPT_MAX_LANE_200G 8 + u8 global_scid[2]; + u8 phy_scid[2]; + u8 pf2port_cid[2]; +}; + +/* Set Port Option (direct, 0x06EB) */ +struct ice_aqc_set_port_option { + u8 lport_num; + u8 lport_num_valid; +#define ICE_AQC_SET_PORT_OPT_PORT_NUM_VALID BIT(0) + u8 selected_port_option; + u8 rsvd[13]; +}; + /* Set/Get GPIO (direct, 0x06EC/0x06ED) */ struct ice_aqc_gpio { __le16 gpio_ctrl_handle; @@ -1930,10 +2025,17 @@ struct ice_aqc_nvm { #define ICE_AQC_NVM_REVERT_LAST_ACTIV BIT(6) /* Write Activate only */ #define ICE_AQC_NVM_ACTIV_SEL_MASK MAKEMASK(0x7, 3) #define ICE_AQC_NVM_FLASH_ONLY BIT(7) -#define ICE_AQC_NVM_POR_FLAG 0 /* Used by NVM Write completion on ARQ */ -#define ICE_AQC_NVM_PERST_FLAG 1 -#define ICE_AQC_NVM_EMPR_FLAG 2 -#define ICE_AQC_NVM_EMPR_ENA BIT(0) +#define ICE_AQC_NVM_RESET_LVL_M MAKEMASK(0x3, 0) /* Write reply only */ +#define ICE_AQC_NVM_POR_FLAG 0 +#define ICE_AQC_NVM_PERST_FLAG 1 +#define ICE_AQC_NVM_EMPR_FLAG 2 +#define ICE_AQC_NVM_EMPR_ENA BIT(0) /* Write Activate reply only */ + /* For Write Activate, several flags are sent as part of a separate + * flags2 field using a separate byte. 
For simplicity of the software + * interface, we pass the flags as a 16 bit value so these flags are + * all offset by 8 bits + */ +#define ICE_AQC_NVM_ACTIV_REQ_EMPR BIT(8) /* NVM Write Activate only */ __le16 module_typeid; __le16 length; #define ICE_AQC_NVM_ERASE_LEN 0xFFFF @@ -1963,7 +2065,54 @@ struct ice_aqc_nvm { #define ICE_AQC_NVM_LLDP_STATUS_M_LEN 4 /* In Bits */ #define ICE_AQC_NVM_LLDP_STATUS_RD_LEN 4 /* In Bytes */ +#define ICE_AQC_NVM_SDP_CFG_PTR_OFFSET 0xD8 +#define ICE_AQC_NVM_SDP_CFG_PTR_RD_LEN 2 /* In Bytes */ +#define ICE_AQC_NVM_SDP_CFG_PTR_M MAKEMASK(0x7FFF, 0) +#define ICE_AQC_NVM_SDP_CFG_PTR_TYPE_M BIT(15) +#define ICE_AQC_NVM_SDP_CFG_HEADER_LEN 2 /* In Bytes */ +#define ICE_AQC_NVM_SDP_CFG_SEC_LEN_LEN 2 /* In Bytes */ +#define ICE_AQC_NVM_SDP_CFG_DATA_LEN 14 /* In Bytes */ +#define ICE_AQC_NVM_SDP_CFG_MAX_SECTION_SIZE 7 +#define ICE_AQC_NVM_SDP_CFG_PIN_SIZE 10 +#define ICE_AQC_NVM_SDP_CFG_PIN_OFFSET 6 +#define ICE_AQC_NVM_SDP_CFG_PIN_MASK MAKEMASK(0x3FF, \ + ICE_AQC_NVM_SDP_CFG_PIN_OFFSET) +#define ICE_AQC_NVM_SDP_CFG_CHAN_OFFSET 4 +#define ICE_AQC_NVM_SDP_CFG_CHAN_MASK MAKEMASK(0x3, \ + ICE_AQC_NVM_SDP_CFG_CHAN_OFFSET) +#define ICE_AQC_NVM_SDP_CFG_DIR_OFFSET 3 +#define ICE_AQC_NVM_SDP_CFG_DIR_MASK MAKEMASK(0x1, \ + ICE_AQC_NVM_SDP_CFG_DIR_OFFSET) +#define ICE_AQC_NVM_SDP_CFG_SDP_NUM_OFFSET 0 +#define ICE_AQC_NVM_SDP_CFG_SDP_NUM_MASK MAKEMASK(0x7, \ + ICE_AQC_NVM_SDP_CFG_SDP_NUM_OFFSET) +#define ICE_AQC_NVM_SDP_CFG_NA_PIN_MASK MAKEMASK(0x1, 15) + +#define ICE_AQC_NVM_MINSREV_MOD_ID 0x130 #define ICE_AQC_NVM_TX_TOPO_MOD_ID 0x14B +#define ICE_AQC_NVM_CMPO_MOD_ID 0x153 + +/* Cage Max Power override NVM module */ +struct ice_aqc_nvm_cmpo { + __le16 length; +#define ICE_AQC_NVM_CMPO_ENABLE BIT(8) + __le16 cages_cfg[8]; +}; + +/* Used for reading and writing MinSRev using 0x0701 and 0x0703. Note that the + * type field is excluded from the section when reading and writing from + * a module using the module_typeid field with these AQ commands. 
+ */ +struct ice_aqc_nvm_minsrev { + __le16 length; + __le16 validity; +#define ICE_AQC_NVM_MINSREV_NVM_VALID BIT(0) +#define ICE_AQC_NVM_MINSREV_OROM_VALID BIT(1) + __le16 nvm_minsrev_l; + __le16 nvm_minsrev_h; + __le16 orom_minsrev_l; + __le16 orom_minsrev_h; +}; struct ice_aqc_nvm_tx_topo_user_sel { __le16 length; @@ -2003,6 +2152,29 @@ struct ice_aqc_nvm_checksum { u8 rsvd2[12]; }; +/* Used for NVM Sanitization command - 0x070C */ +struct ice_aqc_nvm_sanitization { + u8 cmd_flags; +#define ICE_AQ_NVM_SANITIZE_REQ_READ 0 +#define ICE_AQ_NVM_SANITIZE_REQ_OPERATE BIT(0) + +#define ICE_AQ_NVM_SANITIZE_READ_SUBJECT_NVM_BITS 0 +#define ICE_AQ_NVM_SANITIZE_READ_SUBJECT_NVM_STATE BIT(1) +#define ICE_AQ_NVM_SANITIZE_OPERATE_SUBJECT_CLEAR 0 + u8 values; +#define ICE_AQ_NVM_SANITIZE_NVM_BITS_HOST_CLEAN_SUPPORT BIT(0) +#define ICE_AQ_NVM_SANITIZE_NVM_BITS_BMC_CLEAN_SUPPORT BIT(2) +#define ICE_AQ_NVM_SANITIZE_NVM_STATE_HOST_CLEAN_DONE BIT(0) +#define ICE_AQ_NVM_SANITIZE_NVM_STATE_HOST_CLEAN_SUCCESS BIT(1) +#define ICE_AQ_NVM_SANITIZE_NVM_STATE_BMC_CLEAN_DONE BIT(2) +#define ICE_AQ_NVM_SANITIZE_NVM_STATE_BMC_CLEAN_SUCCESS BIT(3) +#define ICE_AQ_NVM_SANITIZE_OPERATE_HOST_CLEAN_DONE BIT(0) +#define ICE_AQ_NVM_SANITIZE_OPERATE_HOST_CLEAN_SUCCESS BIT(1) +#define ICE_AQ_NVM_SANITIZE_OPERATE_BMC_CLEAN_DONE BIT(2) +#define ICE_AQ_NVM_SANITIZE_OPERATE_BMC_CLEAN_SUCCESS BIT(3) + u8 reserved[14]; +}; + /* Get LLDP MIB (indirect 0x0A00) * Note: This is also used by the LLDP MIB Change Event (0x0A01) * as the format is the same. 
@@ -2214,6 +2386,21 @@ struct ice_aqc_get_set_rss_keys { u8 extended_hash_key[ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE]; }; +enum ice_lut_type { + ICE_LUT_VSI = 0, + ICE_LUT_PF = 1, + ICE_LUT_GLOBAL = 2, + ICE_LUT_TYPE_MASK = 3, + ICE_LUT_PF_SMALL = 5, /* yields ICE_LUT_PF when &= ICE_LUT_TYPE_MASK */ +}; + +enum ice_lut_size { + ICE_LUT_VSI_SIZE = 64, + ICE_LUT_PF_SMALL_SIZE = 128, + ICE_LUT_GLOBAL_SIZE = 512, + ICE_LUT_PF_SIZE = 2048, +}; + /* Get/Set RSS LUT (indirect 0x0B05/0x0B03) */ struct ice_aqc_get_set_rss_lut { #define ICE_AQC_GSET_RSS_LUT_VSI_VALID BIT(15) @@ -2222,21 +2409,13 @@ struct ice_aqc_get_set_rss_lut { __le16 vsi_id; #define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S 0 #define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M \ - (0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) - -#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI 0 -#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF 1 -#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL 2 + (ICE_LUT_TYPE_MASK << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) #define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S 2 #define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M \ - (0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) + (ICE_LUT_TYPE_MASK << ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) -#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128 128 -#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG 0 -#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512 512 #define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG 1 -#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K 2048 #define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG 2 #define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S 4 @@ -2259,6 +2438,15 @@ struct ice_aqc_clear_fd_table { u8 reserved[12]; }; +/* Sideband Control Interface Commands */ +/* Neighbor Device Request (indirect 0x0C00); also used for the response. 
*/ +struct ice_aqc_neigh_dev_req { + __le16 sb_data_len; + u8 reserved[6]; + __le32 addr_high; + __le32 addr_low; +}; + /* Allocate ACL table (indirect 0x0C10) */ #define ICE_AQC_ACL_KEY_WIDTH 40 #define ICE_AQC_ACL_KEY_WIDTH_BYTES 5 @@ -2751,7 +2939,6 @@ struct ice_aqc_dis_txq_item { (1 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S) __le16 q_id[STRUCT_HACK_VAR_LEN]; }; - #pragma pack() /* Tx LAN Queues Cleanup Event (0x0C31) */ @@ -2797,7 +2984,7 @@ struct ice_aqc_move_txqs_data { }; /* Download Package (indirect 0x0C40) */ -/* Also used for Update Package (indirect 0x0C42 and 0x0C41) */ +/* Also used for Update Package (indirect 0x0C41 and 0x0C42) */ struct ice_aqc_download_pkg { u8 flags; #define ICE_AQC_DOWNLOAD_PKG_LAST_BUF 0x01 @@ -2831,7 +3018,7 @@ struct ice_pkg_ver { }; #define ICE_PKG_NAME_SIZE 32 -#define ICE_SEG_ID_SIZE 28 +#define ICE_SEG_ID_SIZE 28 #define ICE_SEG_NAME_SIZE 28 struct ice_aqc_get_pkg_info { @@ -2850,6 +3037,29 @@ struct ice_aqc_get_pkg_info_resp { struct ice_aqc_get_pkg_info pkg_info[STRUCT_HACK_VAR_LEN]; }; +/* Configure CGU Error Reporting (direct, 0x0C60) */ +struct ice_aqc_cfg_cgu_err { + u8 cmd; +#define ICE_AQC_CFG_CGU_EVENT_SHIFT 0 +#define ICE_AQC_CFG_CGU_EVENT_MASK BIT(ICE_AQC_CFG_CGU_EVENT_SHIFT) +#define ICE_AQC_CFG_CGU_EVENT_EN (0 << ICE_AQC_CFG_CGU_EVENT_SHIFT) +#define ICE_AQC_CFG_CGU_EVENT_DIS ICE_AQC_CFG_CGU_EVENT_MASK +#define ICE_AQC_CFG_CGU_ERR_SHIFT 1 +#define ICE_AQC_CFG_CGU_ERR_MASK BIT(ICE_AQC_CFG_CGU_ERR_SHIFT) +#define ICE_AQC_CFG_CGU_ERR_EN (0 << ICE_AQC_CFG_CGU_ERR_SHIFT) +#define ICE_AQC_CFG_CGU_ERR_DIS ICE_AQC_CFG_CGU_ERR_MASK + u8 rsvd[15]; +}; + +/* CGU Error Event (direct, 0x0C60) */ +struct ice_aqc_event_cgu_err { + u8 err_type; +#define ICE_AQC_CGU_ERR_SYNCE_LOCK_LOSS BIT(0) +#define ICE_AQC_CGU_ERR_HOLDOVER_CHNG BIT(1) +#define ICE_AQC_CGU_ERR_TIMESYNC_LOCK_LOSS BIT(2) + u8 rsvd[15]; +}; + /* Driver Shared Parameters (direct, 0x0C90) */ struct ice_aqc_driver_shared_params { u8 set_or_get_op; @@ -2865,11 +3075,6 @@ 
struct ice_aqc_driver_shared_params { }; enum ice_aqc_driver_params { - /* OS clock index for PTP timer Domain 0 */ - ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR0 = 0, - /* OS clock index for PTP timer Domain 1 */ - ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR1, - /* Add new parameters above */ ICE_AQC_DRIVER_PARAM_MAX = 16, }; @@ -2883,21 +3088,34 @@ struct ice_aqc_event_lan_overflow { /* Debug Dump Internal Data (indirect 0xFF08) */ struct ice_aqc_debug_dump_internals { - u8 cluster_id; -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_SW 0 -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_ACL 1 -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_TXSCHED 2 -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_PROFILES 3 + __le16 cluster_id; /* Expresses next cluster ID in response */ +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_SW_E810 0 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_ACL_E810 1 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_TXSCHED_E810 2 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_PROFILES_E810 3 /* EMP_DRAM only dumpable in device debug mode */ -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_EMP_DRAM 4 -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_LINK 5 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_EMP_DRAM_E810 4 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_LINK_E810 5 /* AUX_REGS only dumpable in device debug mode */ -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_AUX_REGS 6 -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_DCB 7 -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_L2P 8 -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_QUEUE_MNG 9 -#define ICE_AQC_DBG_DUMP_CLUSTER_ID_FULL_CSR_SPACE 21 - u8 reserved; +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_AUX_REGS_E810 6 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_DCB_E810 7 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_L2P_E810 8 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_QUEUE_MNG_E810 9 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_FULL_CSR_SPACE_E810 21 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_MNG_TRANSACTIONS_E810 22 + +/* Start cluster to discover first available cluster */ +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_START_ALL 0 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_SW_E830 100 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_ACL_E830 101 
+#define ICE_AQC_DBG_DUMP_CLUSTER_ID_TXSCHED_E830 102 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_PROFILES_E830 103 +/* EMP_DRAM only dumpable in device debug mode */ +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_LINK_E830 105 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_DCB_E830 107 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_L2P_E830 108 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_QUEUE_MNG_E830 109 +#define ICE_AQC_DBG_DUMP_CLUSTER_ID_FULL_CSR_SPACE_E830 121 __le16 table_id; /* Used only for non-memory clusters */ __le32 idx; /* In table entries for tables, in bytes for memory */ __le32 addr_high; @@ -3038,6 +3256,8 @@ struct ice_aq_desc { struct ice_aqc_sw_gpio sw_read_write_gpio; struct ice_aqc_sff_eeprom read_write_sff_param; struct ice_aqc_set_port_id_led set_port_id_led; + struct ice_aqc_get_port_options get_port_options; + struct ice_aqc_set_port_option set_port_option; struct ice_aqc_get_sw_cfg get_sw_conf; struct ice_aqc_set_port_params set_port_params; struct ice_aqc_sw_rules sw_rules; @@ -3055,9 +3275,8 @@ struct ice_aq_desc { struct ice_aqc_nvm nvm; struct ice_aqc_nvm_cfg nvm_cfg; struct ice_aqc_nvm_checksum nvm_checksum; - struct ice_aqc_pfc_ignore pfc_ignore; + struct ice_aqc_nvm_sanitization sanitization; struct ice_aqc_set_query_pfc_mode set_query_pfc_mode; - struct ice_aqc_set_dcb_params set_dcb_params; struct ice_aqc_lldp_get_mib lldp_get_mib; struct ice_aqc_lldp_set_mib_change lldp_set_event; struct ice_aqc_lldp_add_delete_tlv lldp_add_delete_tlv; @@ -3070,6 +3289,7 @@ struct ice_aq_desc { struct ice_aqc_get_set_rss_lut get_set_rss_lut; struct ice_aqc_get_set_rss_key get_set_rss_key; struct ice_aqc_clear_fd_table clear_fd_table; + struct ice_aqc_neigh_dev_req neigh_dev; struct ice_aqc_acl_alloc_table alloc_table; struct ice_aqc_acl_tbl_actpair tbl_actpair; struct ice_aqc_acl_alloc_scen alloc_scen; @@ -3091,6 +3311,8 @@ struct ice_aq_desc { struct ice_aqc_get_vsi_resp get_vsi_resp; struct ice_aqc_download_pkg download_pkg; struct ice_aqc_get_pkg_info_list get_pkg_info_list; + struct 
ice_aqc_cfg_cgu_err config_cgu_err; + struct ice_aqc_event_cgu_err cgu_err; struct ice_aqc_driver_shared_params drv_shared_params; struct ice_aqc_debug_dump_internals debug_dump; struct ice_aqc_set_mac_lb set_mac_lb; @@ -3308,6 +3530,9 @@ enum ice_adminq_opc { ice_aqc_opc_nvm_sr_dump = 0x0707, ice_aqc_opc_nvm_save_factory_settings = 0x0708, ice_aqc_opc_nvm_update_empr = 0x0709, + ice_aqc_opc_nvm_pkg_data = 0x070A, + ice_aqc_opc_nvm_pass_component_tbl = 0x070B, + ice_aqc_opc_nvm_sanitization = 0x070C, /* LLDP commands */ ice_aqc_opc_lldp_get_mib = 0x0A00, @@ -3329,6 +3554,8 @@ enum ice_adminq_opc { ice_aqc_opc_get_rss_key = 0x0B04, ice_aqc_opc_get_rss_lut = 0x0B05, ice_aqc_opc_clear_fd_table = 0x0B06, + /* Sideband Control Interface commands */ + ice_aqc_opc_neighbour_device_request = 0x0C00, /* ACL commands */ ice_aqc_opc_alloc_acl_tbl = 0x0C10, ice_aqc_opc_dealloc_acl_tbl = 0x0C11, @@ -3363,6 +3590,10 @@ enum ice_adminq_opc { ice_aqc_opc_update_pkg = 0x0C42, ice_aqc_opc_get_pkg_info_list = 0x0C43, + /* 1588/SyncE commands/events */ + ice_aqc_opc_cfg_cgu_err = 0x0C60, + ice_aqc_opc_event_cgu_err = 0x0C60, + ice_aqc_opc_driver_shared_params = 0x0C90, /* Standalone Commands/Events */ diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h index 3b71c1b7f5..85e14a2358 100644 --- a/drivers/net/ice/base/ice_bitops.h +++ b/drivers/net/ice/base/ice_bitops.h @@ -375,7 +375,7 @@ static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u16 size) } /** - * ice_cp_bitmap - copy bitmaps. + * ice_cp_bitmap - copy bitmaps * @dst: bitmap destination * @src: bitmap to copy from * @size: Size of the bitmaps in bits @@ -418,10 +418,10 @@ ice_bitmap_set(ice_bitmap_t *dst, u16 pos, u16 num_bits) * Note that this function assumes it is operating on a bitmap declared using * ice_declare_bitmap. 
*/ -static inline int +static inline u16 ice_bitmap_hweight(ice_bitmap_t *bm, u16 size) { - int count = 0; + u16 count = 0; u16 bit = 0; while (size > (bit = ice_find_next_bit(bm, size, bit))) { @@ -433,7 +433,7 @@ ice_bitmap_hweight(ice_bitmap_t *bm, u16 size) } /** - * ice_cmp_bitmap - compares two bitmaps. + * ice_cmp_bitmap - compares two bitmaps * @bmp1: the bitmap to compare * @bmp2: the bitmap to compare with bmp1 * @size: Size of the bitmaps in bits diff --git a/drivers/net/ice/base/ice_cgu_regs.h b/drivers/net/ice/base/ice_cgu_regs.h index c44bfc1846..f24f4746dd 100644 --- a/drivers/net/ice/base/ice_cgu_regs.h +++ b/drivers/net/ice/base/ice_cgu_regs.h @@ -28,6 +28,42 @@ union nac_cgu_dword9 { u32 val; }; +#define NAC_CGU_DWORD10_E825C 0x28 +union nac_cgu_dword10_e825c { + struct { + u32 ja_pll_enable : 1; + u32 misc11 : 1; + u32 fdpll_enable : 1; + u32 fdpll_slow : 1; + u32 fdpll_lock_int_enb : 1; + u32 synce_clko_sel : 4; + u32 synce_clkodiv_m1 : 5; + u32 synce_clkodiv_load : 1; + u32 synce_dck_rst : 1; + u32 synce_ethclko_sel : 3; + u32 synce_ethdiv_m1 : 5; + u32 synce_ethdiv_load : 1; + u32 synce_dck2_rst : 1; + u32 synce_sel_gnd : 1; + u32 synce_s_ref_clk : 5; + } field; + u32 val; +}; + +#define NAC_CGU_DWORD11_E825C 0x2c +union nac_cgu_dword11_e825c { + struct { + u32 misc25 : 1; + u32 synce_s_byp_clk : 6; + u32 synce_hdov_mode : 1; + u32 synce_rat_sel : 2; + u32 synce_link_enable : 20; + u32 synce_misclk_en : 1; + u32 synce_misclk_rat_m1 : 1; + } field; + u32 val; +}; + #define NAC_CGU_DWORD19 0x4c union nac_cgu_dword19 { struct { @@ -68,6 +104,22 @@ union nac_cgu_dword22 { u32 val; }; +#define NAC_CGU_DWORD23_E825C 0x5C +union nac_cgu_dword23_e825c { + struct { + u32 cgupll_fbdiv_intgr : 10; + u32 ux56pll_fbdiv_intgr : 10; + u32 misc20 : 4; + u32 ts_pll_enable : 1; + u32 time_sync_tspll_align_sel : 1; + u32 ext_synce_sel : 1; + u32 ref1588_ck_div : 4; + u32 time_ref_sel : 1; + + } field; + u32 val; +}; + #define NAC_CGU_DWORD24 0x60 union 
nac_cgu_dword24 { struct { @@ -114,4 +166,42 @@ union tspll_ro_bwm_lf { u32 val; }; +#define TSPLL_RO_LOCK_E825C 0x3f0 +union tspll_ro_lock_e825c { + struct { + u32 bw_freqov_high_cri_7_0 : 8; + u32 bw_freqov_high_cri_9_8 : 2; + u32 reserved455 : 1; + u32 plllock_gain_tran_cri : 1; + u32 plllock_true_lock_cri : 1; + u32 pllunlock_flag_cri : 1; + u32 afcerr_cri : 1; + u32 afcdone_cri : 1; + u32 feedfwrdgain_cal_cri_7_0 : 8; + u32 reserved462 : 8; + } field; + u32 val; +}; + +#define TSPLL_BW_TDC_E825C 0x31c +union tspll_bw_tdc_e825c { + struct { + u32 i_tdc_offset_lock_1_0 : 2; + u32 i_bbthresh1_2_0 : 3; + u32 i_bbthresh2_2_0 : 3; + u32 i_tdcsel_1_0 : 2; + u32 i_tdcovccorr_en_h : 1; + u32 i_divretimeren : 1; + u32 i_bw_ampmeas_window : 1; + u32 i_bw_lowerbound_2_0 : 3; + u32 i_bw_upperbound_2_0 : 3; + u32 i_bw_mode_1_0 : 2; + u32 i_ft_mode_sel_2_0 : 3; + u32 i_bwphase_4_0 : 5; + u32 i_plllock_sel_1_0 : 2; + u32 i_afc_divratio : 1; + } field; + u32 val; +}; + #endif /* _ICE_CGU_REGS_H_ */ diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 8867279c28..050532d877 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -5,124 +5,127 @@ #include "ice_common.h" #include "ice_sched.h" #include "ice_adminq_cmd.h" - #include "ice_flow.h" +#include "ice_ptp_hw.h" #include "ice_switch.h" -#define ICE_PF_RESET_WAIT_COUNT 300 +#define ICE_PF_RESET_WAIT_COUNT 500 + +static const char * const ice_link_mode_str_low[] = { + ice_arr_elem_idx(0, "100BASE_TX"), + ice_arr_elem_idx(1, "100M_SGMII"), + ice_arr_elem_idx(2, "1000BASE_T"), + ice_arr_elem_idx(3, "1000BASE_SX"), + ice_arr_elem_idx(4, "1000BASE_LX"), + ice_arr_elem_idx(5, "1000BASE_KX"), + ice_arr_elem_idx(6, "1G_SGMII"), + ice_arr_elem_idx(7, "2500BASE_T"), + ice_arr_elem_idx(8, "2500BASE_X"), + ice_arr_elem_idx(9, "2500BASE_KX"), + ice_arr_elem_idx(10, "5GBASE_T"), + ice_arr_elem_idx(11, "5GBASE_KR"), + ice_arr_elem_idx(12, "10GBASE_T"), + 
ice_arr_elem_idx(13, "10G_SFI_DA"), + ice_arr_elem_idx(14, "10GBASE_SR"), + ice_arr_elem_idx(15, "10GBASE_LR"), + ice_arr_elem_idx(16, "10GBASE_KR_CR1"), + ice_arr_elem_idx(17, "10G_SFI_AOC_ACC"), + ice_arr_elem_idx(18, "10G_SFI_C2C"), + ice_arr_elem_idx(19, "25GBASE_T"), + ice_arr_elem_idx(20, "25GBASE_CR"), + ice_arr_elem_idx(21, "25GBASE_CR_S"), + ice_arr_elem_idx(22, "25GBASE_CR1"), + ice_arr_elem_idx(23, "25GBASE_SR"), + ice_arr_elem_idx(24, "25GBASE_LR"), + ice_arr_elem_idx(25, "25GBASE_KR"), + ice_arr_elem_idx(26, "25GBASE_KR_S"), + ice_arr_elem_idx(27, "25GBASE_KR1"), + ice_arr_elem_idx(28, "25G_AUI_AOC_ACC"), + ice_arr_elem_idx(29, "25G_AUI_C2C"), + ice_arr_elem_idx(30, "40GBASE_CR4"), + ice_arr_elem_idx(31, "40GBASE_SR4"), + ice_arr_elem_idx(32, "40GBASE_LR4"), + ice_arr_elem_idx(33, "40GBASE_KR4"), + ice_arr_elem_idx(34, "40G_XLAUI_AOC_ACC"), + ice_arr_elem_idx(35, "40G_XLAUI"), + ice_arr_elem_idx(36, "50GBASE_CR2"), + ice_arr_elem_idx(37, "50GBASE_SR2"), + ice_arr_elem_idx(38, "50GBASE_LR2"), + ice_arr_elem_idx(39, "50GBASE_KR2"), + ice_arr_elem_idx(40, "50G_LAUI2_AOC_ACC"), + ice_arr_elem_idx(41, "50G_LAUI2"), + ice_arr_elem_idx(42, "50G_AUI2_AOC_ACC"), + ice_arr_elem_idx(43, "50G_AUI2"), + ice_arr_elem_idx(44, "50GBASE_CP"), + ice_arr_elem_idx(45, "50GBASE_SR"), + ice_arr_elem_idx(46, "50GBASE_FR"), + ice_arr_elem_idx(47, "50GBASE_LR"), + ice_arr_elem_idx(48, "50GBASE_KR_PAM4"), + ice_arr_elem_idx(49, "50G_AUI1_AOC_ACC"), + ice_arr_elem_idx(50, "50G_AUI1"), + ice_arr_elem_idx(51, "100GBASE_CR4"), + ice_arr_elem_idx(52, "100GBASE_SR4"), + ice_arr_elem_idx(53, "100GBASE_LR4"), + ice_arr_elem_idx(54, "100GBASE_KR4"), + ice_arr_elem_idx(55, "100G_CAUI4_AOC_ACC"), + ice_arr_elem_idx(56, "100G_CAUI4"), + ice_arr_elem_idx(57, "100G_AUI4_AOC_ACC"), + ice_arr_elem_idx(58, "100G_AUI4"), + ice_arr_elem_idx(59, "100GBASE_CR_PAM4"), + ice_arr_elem_idx(60, "100GBASE_KR_PAM4"), + ice_arr_elem_idx(61, "100GBASE_CP2"), + ice_arr_elem_idx(62, "100GBASE_SR2"), + 
ice_arr_elem_idx(63, "100GBASE_DR"), +}; -/** - * dump_phy_type - helper function that prints PHY type strings - * @hw: pointer to the HW structure - * @phy: 64 bit PHY type to decipher - * @i: bit index within phy - * @phy_string: string corresponding to bit i in phy - * @prefix: prefix string to differentiate multiple dumps - */ -static void -dump_phy_type(struct ice_hw *hw, u64 phy, u8 i, const char *phy_string, - const char *prefix) -{ - if (phy & BIT_ULL(i)) - ice_debug(hw, ICE_DBG_PHY, "%s: bit(%d): %s\n", prefix, i, - phy_string); -} +static const char * const ice_link_mode_str_high[] = { + ice_arr_elem_idx(0, "100GBASE_KR2_PAM4"), + ice_arr_elem_idx(1, "100G_CAUI2_AOC_ACC"), + ice_arr_elem_idx(2, "100G_CAUI2"), + ice_arr_elem_idx(3, "100G_AUI2_AOC_ACC"), + ice_arr_elem_idx(4, "100G_AUI2"), + ice_arr_elem_idx(5, "200G_CR4_PAM4"), + ice_arr_elem_idx(6, "200G_SR4"), + ice_arr_elem_idx(7, "200G_FR4"), + ice_arr_elem_idx(8, "200G_LR4"), + ice_arr_elem_idx(9, "200G_DR4"), + ice_arr_elem_idx(10, "200G_KR4_PAM4"), + ice_arr_elem_idx(11, "200G_AUI4_AOC_ACC"), + ice_arr_elem_idx(12, "200G_AUI4"), + ice_arr_elem_idx(13, "200G_AUI8_AOC_ACC"), + ice_arr_elem_idx(14, "200G_AUI8"), + ice_arr_elem_idx(15, "400GBASE_FR8"), +}; /** - * ice_dump_phy_type_low - helper function to dump phy_type_low + * ice_dump_phy_type - helper function to dump phy_type * @hw: pointer to the HW structure * @low: 64 bit value for phy_type_low + * @high: 64 bit value for phy_type_high * @prefix: prefix string to differentiate multiple dumps */ static void -ice_dump_phy_type_low(struct ice_hw *hw, u64 low, const char *prefix) +ice_dump_phy_type(struct ice_hw *hw, u64 low, u64 high, const char *prefix) { + u32 i; + ice_debug(hw, ICE_DBG_PHY, "%s: phy_type_low: 0x%016llx\n", prefix, (unsigned long long)low); - dump_phy_type(hw, low, 0, "100BASE_TX", prefix); - dump_phy_type(hw, low, 1, "100M_SGMII", prefix); - dump_phy_type(hw, low, 2, "1000BASE_T", prefix); - dump_phy_type(hw, low, 3, 
"1000BASE_SX", prefix); - dump_phy_type(hw, low, 4, "1000BASE_LX", prefix); - dump_phy_type(hw, low, 5, "1000BASE_KX", prefix); - dump_phy_type(hw, low, 6, "1G_SGMII", prefix); - dump_phy_type(hw, low, 7, "2500BASE_T", prefix); - dump_phy_type(hw, low, 8, "2500BASE_X", prefix); - dump_phy_type(hw, low, 9, "2500BASE_KX", prefix); - dump_phy_type(hw, low, 10, "5GBASE_T", prefix); - dump_phy_type(hw, low, 11, "5GBASE_KR", prefix); - dump_phy_type(hw, low, 12, "10GBASE_T", prefix); - dump_phy_type(hw, low, 13, "10G_SFI_DA", prefix); - dump_phy_type(hw, low, 14, "10GBASE_SR", prefix); - dump_phy_type(hw, low, 15, "10GBASE_LR", prefix); - dump_phy_type(hw, low, 16, "10GBASE_KR_CR1", prefix); - dump_phy_type(hw, low, 17, "10G_SFI_AOC_ACC", prefix); - dump_phy_type(hw, low, 18, "10G_SFI_C2C", prefix); - dump_phy_type(hw, low, 19, "25GBASE_T", prefix); - dump_phy_type(hw, low, 20, "25GBASE_CR", prefix); - dump_phy_type(hw, low, 21, "25GBASE_CR_S", prefix); - dump_phy_type(hw, low, 22, "25GBASE_CR1", prefix); - dump_phy_type(hw, low, 23, "25GBASE_SR", prefix); - dump_phy_type(hw, low, 24, "25GBASE_LR", prefix); - dump_phy_type(hw, low, 25, "25GBASE_KR", prefix); - dump_phy_type(hw, low, 26, "25GBASE_KR_S", prefix); - dump_phy_type(hw, low, 27, "25GBASE_KR1", prefix); - dump_phy_type(hw, low, 28, "25G_AUI_AOC_ACC", prefix); - dump_phy_type(hw, low, 29, "25G_AUI_C2C", prefix); - dump_phy_type(hw, low, 30, "40GBASE_CR4", prefix); - dump_phy_type(hw, low, 31, "40GBASE_SR4", prefix); - dump_phy_type(hw, low, 32, "40GBASE_LR4", prefix); - dump_phy_type(hw, low, 33, "40GBASE_KR4", prefix); - dump_phy_type(hw, low, 34, "40G_XLAUI_AOC_ACC", prefix); - dump_phy_type(hw, low, 35, "40G_XLAUI", prefix); - dump_phy_type(hw, low, 36, "50GBASE_CR2", prefix); - dump_phy_type(hw, low, 37, "50GBASE_SR2", prefix); - dump_phy_type(hw, low, 38, "50GBASE_LR2", prefix); - dump_phy_type(hw, low, 39, "50GBASE_KR2", prefix); - dump_phy_type(hw, low, 40, "50G_LAUI2_AOC_ACC", prefix); - 
dump_phy_type(hw, low, 41, "50G_LAUI2", prefix); - dump_phy_type(hw, low, 42, "50G_AUI2_AOC_ACC", prefix); - dump_phy_type(hw, low, 43, "50G_AUI2", prefix); - dump_phy_type(hw, low, 44, "50GBASE_CP", prefix); - dump_phy_type(hw, low, 45, "50GBASE_SR", prefix); - dump_phy_type(hw, low, 46, "50GBASE_FR", prefix); - dump_phy_type(hw, low, 47, "50GBASE_LR", prefix); - dump_phy_type(hw, low, 48, "50GBASE_KR_PAM4", prefix); - dump_phy_type(hw, low, 49, "50G_AUI1_AOC_ACC", prefix); - dump_phy_type(hw, low, 50, "50G_AUI1", prefix); - dump_phy_type(hw, low, 51, "100GBASE_CR4", prefix); - dump_phy_type(hw, low, 52, "100GBASE_SR4", prefix); - dump_phy_type(hw, low, 53, "100GBASE_LR4", prefix); - dump_phy_type(hw, low, 54, "100GBASE_KR4", prefix); - dump_phy_type(hw, low, 55, "100G_CAUI4_AOC_ACC", prefix); - dump_phy_type(hw, low, 56, "100G_CAUI4", prefix); - dump_phy_type(hw, low, 57, "100G_AUI4_AOC_ACC", prefix); - dump_phy_type(hw, low, 58, "100G_AUI4", prefix); - dump_phy_type(hw, low, 59, "100GBASE_CR_PAM4", prefix); - dump_phy_type(hw, low, 60, "100GBASE_KR_PAM4", prefix); - dump_phy_type(hw, low, 61, "100GBASE_CP2", prefix); - dump_phy_type(hw, low, 62, "100GBASE_SR2", prefix); - dump_phy_type(hw, low, 63, "100GBASE_DR", prefix); -} - -/** - * ice_dump_phy_type_high - helper function to dump phy_type_high - * @hw: pointer to the HW structure - * @high: 64 bit value for phy_type_high - * @prefix: prefix string to differentiate multiple dumps - */ -static void -ice_dump_phy_type_high(struct ice_hw *hw, u64 high, const char *prefix) -{ + for (i = 0; i < ARRAY_SIZE(ice_link_mode_str_low); i++) { + if (low & BIT_ULL(i)) + ice_debug(hw, ICE_DBG_PHY, "%s: bit(%d): %s\n", + prefix, i, ice_link_mode_str_low[i]); + } + ice_debug(hw, ICE_DBG_PHY, "%s: phy_type_high: 0x%016llx\n", prefix, (unsigned long long)high); - dump_phy_type(hw, high, 0, "100GBASE_KR2_PAM4", prefix); - dump_phy_type(hw, high, 1, "100G_CAUI2_AOC_ACC", prefix); - dump_phy_type(hw, high, 2, "100G_CAUI2", 
prefix); - dump_phy_type(hw, high, 3, "100G_AUI2_AOC_ACC", prefix); - dump_phy_type(hw, high, 4, "100G_AUI2", prefix); + for (i = 0; i < ARRAY_SIZE(ice_link_mode_str_high); i++) { + if (high & BIT_ULL(i)) + ice_debug(hw, ICE_DBG_PHY, "%s: bit(%d): %s\n", + prefix, i, ice_link_mode_str_high[i]); + } } /** @@ -132,7 +135,7 @@ ice_dump_phy_type_high(struct ice_hw *hw, u64 high, const char *prefix) * This function sets the MAC type of the adapter based on the * vendor ID and device ID stored in the HW structure. */ -static enum ice_status ice_set_mac_type(struct ice_hw *hw) +static int ice_set_mac_type(struct ice_hw *hw) { ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -169,21 +172,30 @@ static enum ice_status ice_set_mac_type(struct ice_hw *hw) case ICE_DEV_ID_E823C_SGMII: hw->mac_type = ICE_MAC_GENERIC; break; - case ICE_DEV_ID_E824S: case ICE_DEV_ID_E825C_BACKPLANE: case ICE_DEV_ID_E825C_QSFP: case ICE_DEV_ID_E825C_SFP: - case ICE_DEV_ID_C825X: case ICE_DEV_ID_E825C_SGMII: hw->mac_type = ICE_MAC_GENERIC_3K_E825; break; + case ICE_DEV_ID_E830_BACKPLANE: + case ICE_DEV_ID_E830_QSFP56: + case ICE_DEV_ID_E830_SFP: + case ICE_DEV_ID_E830C_BACKPLANE: + case ICE_DEV_ID_E830_XXV_BACKPLANE: + case ICE_DEV_ID_E830C_QSFP: + case ICE_DEV_ID_E830_XXV_QSFP: + case ICE_DEV_ID_E830C_SFP: + case ICE_DEV_ID_E830_XXV_SFP: + hw->mac_type = ICE_MAC_E830; + break; default: hw->mac_type = ICE_MAC_UNKNOWN; break; } ice_debug(hw, ICE_DBG_INIT, "mac_type: %d\n", hw->mac_type); - return ICE_SUCCESS; + return 0; } /** @@ -225,7 +237,7 @@ bool ice_is_e810t(struct ice_hw *hw) case ICE_SUBDEV_ID_E810T2: case ICE_SUBDEV_ID_E810T3: case ICE_SUBDEV_ID_E810T4: - case ICE_SUBDEV_ID_E810T5: + case ICE_SUBDEV_ID_E810T6: case ICE_SUBDEV_ID_E810T7: return true; } @@ -233,8 +245,8 @@ bool ice_is_e810t(struct ice_hw *hw) case ICE_DEV_ID_E810C_QSFP: switch (hw->subsystem_device_id) { case ICE_SUBDEV_ID_E810T2: + case ICE_SUBDEV_ID_E810T3: case ICE_SUBDEV_ID_E810T5: - case ICE_SUBDEV_ID_E810T6: return 
true; } break; @@ -245,6 +257,17 @@ bool ice_is_e810t(struct ice_hw *hw) return false; } +/** + * ice_is_e830 + * @hw: pointer to the hardware structure + * + * returns true if the device is E830 based, false if not. + */ +bool ice_is_e830(struct ice_hw *hw) +{ + return hw->mac_type == ICE_MAC_E830; +} + /** * ice_is_e823 * @hw: pointer to the hardware structure @@ -270,6 +293,25 @@ bool ice_is_e823(struct ice_hw *hw) } } +/** + * ice_is_e825c + * @hw: pointer to the hardware structure + * + * returns true if the device is E825-C based, false if not. + */ +bool ice_is_e825c(struct ice_hw *hw) +{ + switch (hw->device_id) { + case ICE_DEV_ID_E825C_BACKPLANE: + case ICE_DEV_ID_E825C_QSFP: + case ICE_DEV_ID_E825C_SFP: + case ICE_DEV_ID_E825C_SGMII: + return true; + default: + return false; + } +} + /** * ice_clear_pf_cfg - Clear PF configuration * @hw: pointer to the hardware structure @@ -277,7 +319,7 @@ bool ice_is_e823(struct ice_hw *hw) * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port * configuration, flow director filters, etc.). */ -enum ice_status ice_clear_pf_cfg(struct ice_hw *hw) +int ice_clear_pf_cfg(struct ice_hw *hw) { struct ice_aq_desc desc; @@ -301,14 +343,14 @@ enum ice_status ice_clear_pf_cfg(struct ice_hw *hw) * ice_discover_dev_caps is expected to be called before this function is * called. 
*/ -static enum ice_status +static int ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size, struct ice_sq_cd *cd) { struct ice_aqc_manage_mac_read_resp *resp; struct ice_aqc_manage_mac_read *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; u16 flags; u8 i; @@ -336,13 +378,100 @@ ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size, if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) { ice_memcpy(hw->port_info->mac.lan_addr, resp[i].mac_addr, ETH_ALEN, - ICE_DMA_TO_NONDMA); + ICE_NONDMA_TO_NONDMA); ice_memcpy(hw->port_info->mac.perm_addr, resp[i].mac_addr, - ETH_ALEN, ICE_DMA_TO_NONDMA); + ETH_ALEN, ICE_NONDMA_TO_NONDMA); break; } - return ICE_SUCCESS; + return 0; +} + +/** + * ice_phy_maps_to_media + * @phy_type_low: PHY type low bits + * @phy_type_high: PHY type high bits + * @media_mask_low: media type PHY type low bitmask + * @media_mask_high: media type PHY type high bitmask + * + * Return true if PHY type [low|high] bits are only of media type PHY types + * [low|high] bitmask. + */ +static bool +ice_phy_maps_to_media(u64 phy_type_low, u64 phy_type_high, + u64 media_mask_low, u64 media_mask_high) +{ + /* check if a PHY type exist for media type */ + if (!(phy_type_low & media_mask_low || + phy_type_high & media_mask_high)) + return false; + + /* check that PHY types are only of media type */ + if (!(phy_type_low & ~media_mask_low) && + !(phy_type_high & ~media_mask_high)) + return true; + + return false; +} + +/** + * ice_set_media_type - Sets media type + * @pi: port information structure + * + * Set ice_port_info PHY media type based on PHY type. This should be called + * from Get PHY caps with media. 
+ */ +static void ice_set_media_type(struct ice_port_info *pi) +{ + enum ice_media_type *media_type; + u64 phy_type_high, phy_type_low; + + phy_type_high = pi->phy.phy_type_high; + phy_type_low = pi->phy.phy_type_low; + media_type = &pi->phy.media_type; + + /* if no media, then media type is NONE */ + if (!(pi->phy.link_info.link_info & ICE_AQ_MEDIA_AVAILABLE)) + *media_type = ICE_MEDIA_NONE; + /* else if PHY types are only BASE-T, then media type is BASET */ + else if (ice_phy_maps_to_media(phy_type_low, phy_type_high, + ICE_MEDIA_BASET_PHY_TYPE_LOW_M, 0)) + *media_type = ICE_MEDIA_BASET; + /* else if any PHY type is BACKPLANE, then media type is BACKPLANE */ + else if (phy_type_low & ICE_MEDIA_BP_PHY_TYPE_LOW_M || + phy_type_high & ICE_MEDIA_BP_PHY_TYPE_HIGH_M) + *media_type = ICE_MEDIA_BACKPLANE; + /* else if PHY types are only optical, or optical and C2M, then media + * type is FIBER + */ + else if (ice_phy_maps_to_media(phy_type_low, phy_type_high, + ICE_MEDIA_OPT_PHY_TYPE_LOW_M, + ICE_MEDIA_OPT_PHY_TYPE_HIGH_M) || + ((phy_type_low & ICE_MEDIA_OPT_PHY_TYPE_LOW_M || + phy_type_high & ICE_MEDIA_OPT_PHY_TYPE_HIGH_M) && + (phy_type_low & ICE_MEDIA_C2M_PHY_TYPE_LOW_M || + phy_type_high & ICE_MEDIA_C2C_PHY_TYPE_HIGH_M))) + *media_type = ICE_MEDIA_FIBER; + /* else if PHY types are only DA, or DA and C2C, then media type DA */ + else if (ice_phy_maps_to_media(phy_type_low, phy_type_high, + ICE_MEDIA_DAC_PHY_TYPE_LOW_M, + ICE_MEDIA_DAC_PHY_TYPE_HIGH_M) || + ((phy_type_low & ICE_MEDIA_DAC_PHY_TYPE_LOW_M || + phy_type_high & ICE_MEDIA_DAC_PHY_TYPE_HIGH_M) && + (phy_type_low & ICE_MEDIA_C2C_PHY_TYPE_LOW_M || + phy_type_high & ICE_MEDIA_C2C_PHY_TYPE_HIGH_M))) + *media_type = ICE_MEDIA_DA; + /* else if PHY types are only C2M or only C2C, then media is AUI */ + else if (ice_phy_maps_to_media(phy_type_low, phy_type_high, + ICE_MEDIA_C2M_PHY_TYPE_LOW_M, + ICE_MEDIA_C2M_PHY_TYPE_HIGH_M) || + ice_phy_maps_to_media(phy_type_low, phy_type_high, + ICE_MEDIA_C2C_PHY_TYPE_LOW_M, + 
ICE_MEDIA_C2C_PHY_TYPE_HIGH_M)) + *media_type = ICE_MEDIA_AUI; + + else + *media_type = ICE_MEDIA_UNKNOWN; } /** @@ -355,7 +484,7 @@ ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size, * * Returns the various PHY capabilities supported on the Port (0x0600) */ -enum ice_status +int ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode, struct ice_aqc_get_phy_caps_data *pcaps, struct ice_sq_cd *cd) @@ -363,9 +492,9 @@ ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode, struct ice_aqc_get_phy_caps *cmd; u16 pcaps_size = sizeof(*pcaps); struct ice_aq_desc desc; - enum ice_status status; const char *prefix; struct ice_hw *hw; + int status; cmd = &desc.params.get_phy; @@ -383,23 +512,30 @@ ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode, cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM); cmd->param0 |= CPU_TO_LE16(report_mode); + status = ice_aq_send_cmd(hw, &desc, pcaps, pcaps_size, cd); ice_debug(hw, ICE_DBG_LINK, "get phy caps dump\n"); - if (report_mode == ICE_AQC_REPORT_TOPO_CAP_MEDIA) + switch (report_mode) { + case ICE_AQC_REPORT_TOPO_CAP_MEDIA: prefix = "phy_caps_media"; - else if (report_mode == ICE_AQC_REPORT_TOPO_CAP_NO_MEDIA) + break; + case ICE_AQC_REPORT_TOPO_CAP_NO_MEDIA: prefix = "phy_caps_no_media"; - else if (report_mode == ICE_AQC_REPORT_ACTIVE_CFG) + break; + case ICE_AQC_REPORT_ACTIVE_CFG: prefix = "phy_caps_active"; - else if (report_mode == ICE_AQC_REPORT_DFLT_CFG) + break; + case ICE_AQC_REPORT_DFLT_CFG: prefix = "phy_caps_default"; - else + break; + default: prefix = "phy_caps_invalid"; + } - ice_dump_phy_type_low(hw, LE64_TO_CPU(pcaps->phy_type_low), prefix); - ice_dump_phy_type_high(hw, LE64_TO_CPU(pcaps->phy_type_high), prefix); + ice_dump_phy_type(hw, LE64_TO_CPU(pcaps->phy_type_low), + LE64_TO_CPU(pcaps->phy_type_high), prefix); ice_debug(hw, ICE_DBG_LINK, "%s: report_mode = 0x%x\n", prefix, report_mode); @@ -423,265 +559,34 @@ 
ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode, ice_debug(hw, ICE_DBG_LINK, "%s: module_type[2] = 0x%x\n", prefix, pcaps->module_type[2]); - if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP_MEDIA) { + if (!status && report_mode == ICE_AQC_REPORT_TOPO_CAP_MEDIA) { pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low); pi->phy.phy_type_high = LE64_TO_CPU(pcaps->phy_type_high); ice_memcpy(pi->phy.link_info.module_type, &pcaps->module_type, sizeof(pi->phy.link_info.module_type), ICE_NONDMA_TO_NONDMA); + ice_set_media_type(pi); + ice_debug(hw, ICE_DBG_LINK, "%s: media_type = 0x%x\n", prefix, + pi->phy.media_type); } return status; } -/** - * ice_aq_get_netlist_node_pin - * @hw: pointer to the hw struct - * @cmd: get_link_topo_pin AQ structure - * @node_handle: output node handle parameter if node found - */ -enum ice_status -ice_aq_get_netlist_node_pin(struct ice_hw *hw, - struct ice_aqc_get_link_topo_pin *cmd, - u16 *node_handle) -{ - struct ice_aq_desc desc; - - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo_pin); - desc.params.get_link_topo_pin = *cmd; - - if (ice_aq_send_cmd(hw, &desc, NULL, 0, NULL)) - return ICE_ERR_NOT_SUPPORTED; - - if (node_handle) - *node_handle = - LE16_TO_CPU(desc.params.get_link_topo_pin.addr.handle); - - return ICE_SUCCESS; -} - -/** - * ice_aq_get_netlist_node - * @hw: pointer to the hw struct - * @cmd: get_link_topo AQ structure - * @node_part_number: output node part number if node found - * @node_handle: output node handle parameter if node found - */ -enum ice_status -ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd, - u8 *node_part_number, u16 *node_handle) -{ - struct ice_aq_desc desc; - - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo); - desc.params.get_link_topo = *cmd; - - if (ice_aq_send_cmd(hw, &desc, NULL, 0, NULL)) - return ICE_ERR_NOT_SUPPORTED; - - if (node_handle) - *node_handle = - 
LE16_TO_CPU(desc.params.get_link_topo.addr.handle); - if (node_part_number) - *node_part_number = desc.params.get_link_topo.node_part_num; - - return ICE_SUCCESS; -} - -#define MAX_NETLIST_SIZE 10 -/** - * ice_find_netlist_node - * @hw: pointer to the hw struct - * @node_type_ctx: type of netlist node to look for - * @node_part_number: node part number to look for - * @node_handle: output parameter if node found - optional - * - * Find and return the node handle for a given node type and part number in the - * netlist. When found ICE_SUCCESS is returned, ICE_ERR_DOES_NOT_EXIST - * otherwise. If node_handle provided, it would be set to found node handle. - */ -enum ice_status -ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx, u8 node_part_number, - u16 *node_handle) -{ - struct ice_aqc_get_link_topo cmd; - u8 rec_node_part_number; - u16 rec_node_handle; - u8 idx; - - for (idx = 0; idx < MAX_NETLIST_SIZE; idx++) { - enum ice_status status; - - memset(&cmd, 0, sizeof(cmd)); - - cmd.addr.topo_params.node_type_ctx = - (node_type_ctx << ICE_AQC_LINK_TOPO_NODE_TYPE_S); - cmd.addr.topo_params.index = idx; - - status = ice_aq_get_netlist_node(hw, &cmd, - &rec_node_part_number, - &rec_node_handle); - if (status) - return status; - - if (rec_node_part_number == node_part_number) { - if (node_handle) - *node_handle = rec_node_handle; - return ICE_SUCCESS; - } - } - - return ICE_ERR_DOES_NOT_EXIST; -} +#define ice_get_link_status_data_ver(hw) ((hw)->mac_type == ICE_MAC_E830 ? \ + ICE_GET_LINK_STATUS_DATA_V2 : ICE_GET_LINK_STATUS_DATA_V1) /** - * ice_is_media_cage_present - * @pi: port information structure + * ice_get_link_status_datalen + * @hw: pointer to the HW struct * - * Returns true if media cage is present, else false. If no cage, then - * media type is backplane or BASE-T. 
- */ -static bool ice_is_media_cage_present(struct ice_port_info *pi) -{ - struct ice_aqc_get_link_topo *cmd; - struct ice_aq_desc desc; - - cmd = &desc.params.get_link_topo; - - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo); - - cmd->addr.topo_params.node_type_ctx = - (ICE_AQC_LINK_TOPO_NODE_CTX_PORT << - ICE_AQC_LINK_TOPO_NODE_CTX_S); - - /* set node type */ - cmd->addr.topo_params.node_type_ctx |= - (ICE_AQC_LINK_TOPO_NODE_TYPE_M & - ICE_AQC_LINK_TOPO_NODE_TYPE_CAGE); - - /* Node type cage can be used to determine if cage is present. If AQC - * returns error (ENOENT), then no cage present. If no cage present then - * connection type is backplane or BASE-T. - */ - return ice_aq_get_netlist_node(pi->hw, cmd, NULL, NULL); -} - -/** - * ice_get_media_type - Gets media type - * @pi: port information structure + * return Get Link Status datalen */ -static enum ice_media_type ice_get_media_type(struct ice_port_info *pi) +static u16 ice_get_link_status_datalen(struct ice_hw *hw) { - struct ice_link_status *hw_link_info; - - if (!pi) - return ICE_MEDIA_UNKNOWN; - - hw_link_info = &pi->phy.link_info; - if (hw_link_info->phy_type_low && hw_link_info->phy_type_high) - /* If more than one media type is selected, report unknown */ - return ICE_MEDIA_UNKNOWN; - - if (hw_link_info->phy_type_low) { - /* 1G SGMII is a special case where some DA cable PHYs - * may show this as an option when it really shouldn't - * be since SGMII is meant to be between a MAC and a PHY - * in a backplane. 
Try to detect this case and handle it - */ - if (hw_link_info->phy_type_low == ICE_PHY_TYPE_LOW_1G_SGMII && - (hw_link_info->module_type[ICE_AQC_MOD_TYPE_IDENT] == - ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE || - hw_link_info->module_type[ICE_AQC_MOD_TYPE_IDENT] == - ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE)) - return ICE_MEDIA_DA; - - switch (hw_link_info->phy_type_low) { - case ICE_PHY_TYPE_LOW_1000BASE_SX: - case ICE_PHY_TYPE_LOW_1000BASE_LX: - case ICE_PHY_TYPE_LOW_10GBASE_SR: - case ICE_PHY_TYPE_LOW_10GBASE_LR: - case ICE_PHY_TYPE_LOW_25GBASE_SR: - case ICE_PHY_TYPE_LOW_25GBASE_LR: - case ICE_PHY_TYPE_LOW_40GBASE_SR4: - case ICE_PHY_TYPE_LOW_40GBASE_LR4: - case ICE_PHY_TYPE_LOW_50GBASE_SR2: - case ICE_PHY_TYPE_LOW_50GBASE_LR2: - case ICE_PHY_TYPE_LOW_50GBASE_SR: - case ICE_PHY_TYPE_LOW_50GBASE_FR: - case ICE_PHY_TYPE_LOW_50GBASE_LR: - case ICE_PHY_TYPE_LOW_100GBASE_SR4: - case ICE_PHY_TYPE_LOW_100GBASE_LR4: - case ICE_PHY_TYPE_LOW_100GBASE_SR2: - case ICE_PHY_TYPE_LOW_100GBASE_DR: - return ICE_MEDIA_FIBER; - case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC: - case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC: - case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC: - case ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC: - case ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC: - case ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC: - case ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC: - case ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC: - return ICE_MEDIA_FIBER; - case ICE_PHY_TYPE_LOW_100BASE_TX: - case ICE_PHY_TYPE_LOW_1000BASE_T: - case ICE_PHY_TYPE_LOW_2500BASE_T: - case ICE_PHY_TYPE_LOW_5GBASE_T: - case ICE_PHY_TYPE_LOW_10GBASE_T: - case ICE_PHY_TYPE_LOW_25GBASE_T: - return ICE_MEDIA_BASET; - case ICE_PHY_TYPE_LOW_10G_SFI_DA: - case ICE_PHY_TYPE_LOW_25GBASE_CR: - case ICE_PHY_TYPE_LOW_25GBASE_CR_S: - case ICE_PHY_TYPE_LOW_25GBASE_CR1: - case ICE_PHY_TYPE_LOW_40GBASE_CR4: - case ICE_PHY_TYPE_LOW_50GBASE_CR2: - case ICE_PHY_TYPE_LOW_50GBASE_CP: - case ICE_PHY_TYPE_LOW_100GBASE_CR4: - case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4: - case 
ICE_PHY_TYPE_LOW_100GBASE_CP2: - return ICE_MEDIA_DA; - case ICE_PHY_TYPE_LOW_25G_AUI_C2C: - case ICE_PHY_TYPE_LOW_40G_XLAUI: - case ICE_PHY_TYPE_LOW_50G_LAUI2: - case ICE_PHY_TYPE_LOW_50G_AUI2: - case ICE_PHY_TYPE_LOW_50G_AUI1: - case ICE_PHY_TYPE_LOW_100G_AUI4: - case ICE_PHY_TYPE_LOW_100G_CAUI4: - if (ice_is_media_cage_present(pi)) - return ICE_MEDIA_AUI; - /* fall-through */ - case ICE_PHY_TYPE_LOW_1000BASE_KX: - case ICE_PHY_TYPE_LOW_2500BASE_KX: - case ICE_PHY_TYPE_LOW_2500BASE_X: - case ICE_PHY_TYPE_LOW_5GBASE_KR: - case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1: - case ICE_PHY_TYPE_LOW_10G_SFI_C2C: - case ICE_PHY_TYPE_LOW_25GBASE_KR: - case ICE_PHY_TYPE_LOW_25GBASE_KR1: - case ICE_PHY_TYPE_LOW_25GBASE_KR_S: - case ICE_PHY_TYPE_LOW_40GBASE_KR4: - case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4: - case ICE_PHY_TYPE_LOW_50GBASE_KR2: - case ICE_PHY_TYPE_LOW_100GBASE_KR4: - case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4: - return ICE_MEDIA_BACKPLANE; - } - } else { - switch (hw_link_info->phy_type_high) { - case ICE_PHY_TYPE_HIGH_100G_AUI2: - case ICE_PHY_TYPE_HIGH_100G_CAUI2: - if (ice_is_media_cage_present(pi)) - return ICE_MEDIA_AUI; - /* fall-through */ - case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4: - return ICE_MEDIA_BACKPLANE; - case ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC: - case ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC: - return ICE_MEDIA_FIBER; - } - } - return ICE_MEDIA_UNKNOWN; + return (ice_get_link_status_data_ver(hw) == + ICE_GET_LINK_STATUS_DATA_V1) ? ICE_GET_LINK_STATUS_DATALEN_V1 : + ICE_GET_LINK_STATUS_DATALEN_V2; } /** @@ -693,26 +598,25 @@ static enum ice_media_type ice_get_media_type(struct ice_port_info *pi) * * Get Link Status (0x607). Returns the link status of the adapter. 
*/ -enum ice_status +int ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse, struct ice_link_status *link, struct ice_sq_cd *cd) { struct ice_aqc_get_link_status_data link_data = { 0 }; struct ice_aqc_get_link_status *resp; struct ice_link_status *li_old, *li; - enum ice_media_type *hw_media_type; struct ice_fc_info *hw_fc_info; bool tx_pause, rx_pause; struct ice_aq_desc desc; - enum ice_status status; struct ice_hw *hw; u16 cmd_flags; + int status; if (!pi) return ICE_ERR_PARAM; hw = pi->hw; + li_old = &pi->phy.link_info_old; - hw_media_type = &pi->phy.media_type; li = &pi->phy.link_info; hw_fc_info = &pi->fc; @@ -722,9 +626,9 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse, resp->cmd_flags = CPU_TO_LE16(cmd_flags); resp->lport_num = pi->lport; - status = ice_aq_send_cmd(hw, &desc, &link_data, sizeof(link_data), cd); - - if (status != ICE_SUCCESS) + status = ice_aq_send_cmd(hw, &desc, &link_data, + ice_get_link_status_datalen(hw), cd); + if (status) return status; /* save off old link status information */ @@ -734,7 +638,6 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse, li->link_speed = LE16_TO_CPU(link_data.link_speed); li->phy_type_low = LE64_TO_CPU(link_data.phy_type_low); li->phy_type_high = LE64_TO_CPU(link_data.phy_type_high); - *hw_media_type = ice_get_media_type(pi); li->link_info = link_data.link_info; li->link_cfg_err = link_data.link_cfg_err; li->an_info = link_data.an_info; @@ -765,7 +668,6 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse, (unsigned long long)li->phy_type_low); ice_debug(hw, ICE_DBG_LINK, " phy_type_high = 0x%llx\n", (unsigned long long)li->phy_type_high); - ice_debug(hw, ICE_DBG_LINK, " media_type = 0x%x\n", *hw_media_type); ice_debug(hw, ICE_DBG_LINK, " link_info = 0x%x\n", li->link_info); ice_debug(hw, ICE_DBG_LINK, " link_cfg_err = 0x%x\n", li->link_cfg_err); ice_debug(hw, ICE_DBG_LINK, " an_info = 0x%x\n", li->an_info); @@ -783,7 +685,7 @@ ice_aq_get_link_info(struct 
ice_port_info *pi, bool ena_lse, /* flag cleared so calling functions don't call AQ again */ pi->phy.get_link_info = false; - return ICE_SUCCESS; + return 0; } /** @@ -808,17 +710,28 @@ ice_fill_tx_timer_and_fc_thresh(struct ice_hw *hw, * Also, because we are operating on transmit timer and fc * threshold of LFC, we don't turn on any bit in tx_tmr_priority */ -#define IDX_OF_LFC PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX +#define E800_IDX_OF_LFC E800_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX - /* Retrieve the transmit timer */ - val = rd32(hw, PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(IDX_OF_LFC)); - tx_timer_val = val & - PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M; - cmd->tx_tmr_value = CPU_TO_LE16(tx_timer_val); + if ((hw)->mac_type == ICE_MAC_E830) { + /* Retrieve the transmit timer */ + val = rd32(hw, E830_PRTMAC_CL01_PAUSE_QUANTA); + tx_timer_val = val & E830_PRTMAC_CL01_PAUSE_QUANTA_CL0_PAUSE_QUANTA_M; + cmd->tx_tmr_value = CPU_TO_LE16(tx_timer_val); - /* Retrieve the fc threshold */ - val = rd32(hw, PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(IDX_OF_LFC)); - fc_thres_val = val & PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M; + /* Retrieve the fc threshold */ + val = rd32(hw, E830_PRTMAC_CL01_QUANTA_THRESH); + fc_thres_val = val & E830_PRTMAC_CL01_QUANTA_THRESH_CL0_QUANTA_THRESH_M; + } else { + /* Retrieve the transmit timer */ + val = rd32(hw, E800_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(E800_IDX_OF_LFC)); + tx_timer_val = val & + E800_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M; + cmd->tx_tmr_value = CPU_TO_LE16(tx_timer_val); + + /* Retrieve the fc threshold */ + val = rd32(hw, E800_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(E800_IDX_OF_LFC)); + fc_thres_val = val & E800_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M; + } cmd->fc_refresh_threshold = CPU_TO_LE16(fc_thres_val); } @@ -832,7 +745,7 @@ ice_fill_tx_timer_and_fc_thresh(struct ice_hw *hw, * * Set MAC configuration (0x0603) */ -enum ice_status +int ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, 
bool auto_drop, struct ice_sq_cd *cd) { @@ -859,10 +772,10 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, bool auto_drop, * ice_init_fltr_mgmt_struct - initializes filter management list and locks * @hw: pointer to the HW struct */ -enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw) +int ice_init_fltr_mgmt_struct(struct ice_hw *hw) { struct ice_switch_info *sw; - enum ice_status status; + int status; hw->switch_info = (struct ice_switch_info *) ice_malloc(hw, sizeof(*hw->switch_info)); @@ -880,7 +793,7 @@ enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw) ice_free(hw, hw->switch_info); return status; } - return ICE_SUCCESS; + return 0; } /** @@ -998,7 +911,7 @@ void ice_print_rollback_msg(struct ice_hw *hw) orom = &hw->flash.orom; nvm = &hw->flash.nvm; - SNPRINTF(nvm_str, sizeof(nvm_str), "%x.%02x 0x%x %d.%d.%d", + (void)SNPRINTF(nvm_str, sizeof(nvm_str), "%x.%02x 0x%x %d.%d.%d", nvm->major, nvm->minor, nvm->eetrack, orom->major, orom->build, orom->patch); ice_warn(hw, @@ -1021,12 +934,12 @@ void ice_set_umac_shared(struct ice_hw *hw) * ice_init_hw - main hardware initialization routine * @hw: pointer to the hardware structure */ -enum ice_status ice_init_hw(struct ice_hw *hw) +int ice_init_hw(struct ice_hw *hw) { struct ice_aqc_get_phy_caps_data *pcaps; - enum ice_status status; u16 mac_buf_len; void *mac_buf; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -1042,9 +955,10 @@ enum ice_status ice_init_hw(struct ice_hw *hw) status = ice_reset(hw, ICE_RESET_PFR); if (status) return status; - ice_get_itr_intrl_gran(hw); + hw->fw_vsi_num = ICE_DFLT_VSI_INVAL; + status = ice_create_all_ctrlq(hw); if (status) goto err_unroll_cqinit; @@ -1056,9 +970,11 @@ enum ice_status ice_init_hw(struct ice_hw *hw) if (ice_get_fw_mode(hw) == ICE_FW_MODE_ROLLBACK) ice_print_rollback_msg(hw); - status = ice_clear_pf_cfg(hw); - if (status) - goto err_unroll_cqinit; + if (!hw->skip_clear_pf) { + status = ice_clear_pf_cfg(hw); + if (status) 
+ goto err_unroll_cqinit; + } /* Set bit to enable Flow Director filters */ wr32(hw, PFQF_FD_ENA, PFQF_FD_ENA_FD_ENA_M); @@ -1070,13 +986,16 @@ enum ice_status ice_init_hw(struct ice_hw *hw) if (status) goto err_unroll_cqinit; - hw->port_info = (struct ice_port_info *) + if (!hw->port_info) + hw->port_info = (struct ice_port_info *) ice_malloc(hw, sizeof(*hw->port_info)); if (!hw->port_info) { status = ICE_ERR_NO_MEMORY; goto err_unroll_cqinit; } + hw->port_info->loopback_mode = ICE_AQC_SET_P_PARAMS_LOOPBACK_MODE_NORMAL; + /* set the back pointer to HW */ hw->port_info->hw = hw; @@ -1132,6 +1051,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw) goto err_unroll_sched; /* Get MAC information */ + /* A single port can report up to two (LAN and WoL) addresses */ mac_buf = ice_calloc(hw, 2, sizeof(struct ice_aqc_manage_mac_read_resp)); @@ -1158,12 +1078,15 @@ enum ice_status ice_init_hw(struct ice_hw *hw) status = ice_alloc_fd_res_cntr(hw, &hw->fd_ctr_base); if (status) goto err_unroll_fltr_mgmt_struct; + status = ice_init_hw_tbls(hw); if (status) goto err_unroll_fltr_mgmt_struct; ice_init_lock(&hw->tnl_lock); - return ICE_SUCCESS; + ice_init_chk_subscribable_recipe_support(hw); + + return 0; err_unroll_fltr_mgmt_struct: ice_cleanup_fltr_mgmt_struct(hw); @@ -1211,9 +1134,9 @@ void ice_deinit_hw(struct ice_hw *hw) * ice_check_reset - Check to see if a global reset is complete * @hw: pointer to the hardware structure */ -enum ice_status ice_check_reset(struct ice_hw *hw) +int ice_check_reset(struct ice_hw *hw) { - u32 cnt, reg = 0, grst_timeout, uld_mask; + u32 cnt, reg = 0, grst_timeout, uld_mask, reset_wait_cnt; /* Poll for Device Active state in case a recent CORER, GLOBR, * or EMPR has occurred. The grst delay value is in 100ms units. 
@@ -1244,8 +1167,10 @@ enum ice_status ice_check_reset(struct ice_hw *hw) uld_mask = ICE_RESET_DONE_MASK; + reset_wait_cnt = ICE_PF_RESET_WAIT_COUNT; + /* Device is Active; check Global Reset processes are done */ - for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) { + for (cnt = 0; cnt < reset_wait_cnt; cnt++) { reg = rd32(hw, GLNVM_ULD) & uld_mask; if (reg == uld_mask) { ice_debug(hw, ICE_DBG_INIT, "Global reset processes done. %d\n", cnt); @@ -1254,13 +1179,13 @@ enum ice_status ice_check_reset(struct ice_hw *hw) ice_msec_delay(10, true); } - if (cnt == ICE_PF_RESET_WAIT_COUNT) { + if (cnt == reset_wait_cnt) { ice_debug(hw, ICE_DBG_INIT, "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n", reg); return ICE_ERR_RESET_FAILED; } - return ICE_SUCCESS; + return 0; } /** @@ -1270,9 +1195,9 @@ enum ice_status ice_check_reset(struct ice_hw *hw) * If a global reset has been triggered, this function checks * for its completion and then issues the PF reset */ -static enum ice_status ice_pf_reset(struct ice_hw *hw) +static int ice_pf_reset(struct ice_hw *hw) { - u32 cnt, reg; + u32 cnt, reg, reset_wait_cnt, cfg_lock_timeout; /* If at function entry a global reset was already in progress, i.e. * state is not 'device active' or any of the reset done bits are not @@ -1285,7 +1210,7 @@ static enum ice_status ice_pf_reset(struct ice_hw *hw) if (ice_check_reset(hw)) return ICE_ERR_RESET_FAILED; - return ICE_SUCCESS; + return 0; } /* Reset the PF */ @@ -1297,8 +1222,10 @@ static enum ice_status ice_pf_reset(struct ice_hw *hw) * timeout plus the PFR timeout which will account for a possible reset * that is occurring during a download package operation. 
*/ - for (cnt = 0; cnt < ICE_GLOBAL_CFG_LOCK_TIMEOUT + - ICE_PF_RESET_WAIT_COUNT; cnt++) { + reset_wait_cnt = ICE_PF_RESET_WAIT_COUNT; + cfg_lock_timeout = ICE_GLOBAL_CFG_LOCK_TIMEOUT; + + for (cnt = 0; cnt < cfg_lock_timeout + reset_wait_cnt; cnt++) { reg = rd32(hw, PFGEN_CTRL); if (!(reg & PFGEN_CTRL_PFSWR_M)) break; @@ -1306,12 +1233,12 @@ static enum ice_status ice_pf_reset(struct ice_hw *hw) ice_msec_delay(1, true); } - if (cnt == ICE_PF_RESET_WAIT_COUNT) { + if (cnt == cfg_lock_timeout + reset_wait_cnt) { ice_debug(hw, ICE_DBG_INIT, "PF reset polling failed to complete.\n"); return ICE_ERR_RESET_FAILED; } - return ICE_SUCCESS; + return 0; } /** @@ -1326,7 +1253,7 @@ static enum ice_status ice_pf_reset(struct ice_hw *hw) * This has to be cleared using ice_clear_pxe_mode again, once the AQ * interface has been restored in the rebuild flow. */ -enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req) +int ice_reset(struct ice_hw *hw, enum ice_reset_req req) { u32 val = 0; @@ -1361,7 +1288,7 @@ enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req) * * Copies rxq context from dense structure to HW register space */ -static enum ice_status +static int ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index) { u8 i; @@ -1381,7 +1308,7 @@ ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index) *((u32 *)(ice_rxq_ctx + (i * sizeof(u32))))); } - return ICE_SUCCESS; + return 0; } /** @@ -1392,7 +1319,7 @@ ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index) * * Copies rxq context from HW register space to dense structure */ -static enum ice_status +static int ice_copy_rxq_ctx_from_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index) { u8 i; @@ -1412,7 +1339,7 @@ ice_copy_rxq_ctx_from_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index) ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i, *ctx); } - return ICE_SUCCESS; + return 0; } /* LAN Rx Queue Context */ @@ -1451,7 +1378,7 @@ static 
const struct ice_ctx_ele ice_rlan_ctx_info[] = { * it to HW register space and enables the hardware to prefetch descriptors * instead of only fetching them on demand */ -enum ice_status +int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx, u32 rxq_index) { @@ -1475,12 +1402,12 @@ ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx, * Read rxq context from HW register space and then converts it from dense * structure to sparse */ -enum ice_status +int ice_read_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx, u32 rxq_index) { u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 }; - enum ice_status status; + int status; if (!rlan_ctx) return ICE_ERR_BAD_PTR; @@ -1499,7 +1426,7 @@ ice_read_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx, * * Clears rxq context in HW register space */ -enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index) +int ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index) { u8 i; @@ -1510,7 +1437,7 @@ enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index) for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) wr32(hw, QRX_CONTEXT(i, rxq_index), 0); - return ICE_SUCCESS; + return 0; } /* LAN Tx Queue Context used for set Tx config by ice_aqc_opc_add_txqs, @@ -1546,7 +1473,6 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = { ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx, 2, 166), ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx, 3, 168), ICE_CTX_STORE(ice_tlan_ctx, int_q_state, 122, 171), - ICE_CTX_STORE(ice_tlan_ctx, gsc_ena, 1, 172), { 0 } }; @@ -1558,7 +1484,7 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = { * * Copies Tx completion queue context from dense structure to HW register space */ -static enum ice_status +static int ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx, u32 tx_cmpltnq_index) { @@ -1579,7 +1505,7 @@ ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx, *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32))))); } - return ICE_SUCCESS; + 
return 0; } /* LAN Tx Completion Queue Context */ @@ -1607,7 +1533,7 @@ static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = { * Converts completion queue context from sparse to dense structure and then * writes it to HW register space */ -enum ice_status +int ice_write_tx_cmpltnq_ctx(struct ice_hw *hw, struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx, u32 tx_cmpltnq_index) @@ -1625,7 +1551,7 @@ ice_write_tx_cmpltnq_ctx(struct ice_hw *hw, * * Clears Tx completion queue context in HW register space */ -enum ice_status +int ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index) { u8 i; @@ -1637,7 +1563,7 @@ ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index) for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0); - return ICE_SUCCESS; + return 0; } /** @@ -1648,7 +1574,7 @@ ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index) * * Copies doorbell queue context from dense structure to HW register space */ -static enum ice_status +static int ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx, u32 tx_drbell_q_index) { @@ -1669,7 +1595,7 @@ ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx, *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32))))); } - return ICE_SUCCESS; + return 0; } /* LAN Tx Doorbell Queue Context info */ @@ -1698,7 +1624,7 @@ static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = { * Converts doorbell queue context from sparse to dense structure and then * writes it to HW register space */ -enum ice_status +int ice_write_tx_drbell_q_ctx(struct ice_hw *hw, struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx, u32 tx_drbell_q_index) @@ -1717,7 +1643,7 @@ ice_write_tx_drbell_q_ctx(struct ice_hw *hw, * * Clears doorbell queue context in HW register space */ -enum ice_status +int ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index) { u8 i; @@ -1729,7 +1655,7 @@ ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 
tx_drbell_q_index) for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0); - return ICE_SUCCESS; + return 0; } /* Sideband Queue command wrappers */ @@ -1753,7 +1679,7 @@ static struct ice_ctl_q_info *ice_get_sbq(struct ice_hw *hw) * @buf_size: size of buffer for indirect commands (0 for direct commands) * @cd: pointer to command details structure */ -static enum ice_status +static int ice_sbq_send_cmd(struct ice_hw *hw, struct ice_sbq_cmd_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd) { @@ -1770,7 +1696,7 @@ ice_sbq_send_cmd(struct ice_hw *hw, struct ice_sbq_cmd_desc *desc, * @buf_size: size of buffer for indirect commands (0 for direct commands) * @cd: pointer to command details structure */ -static enum ice_status +static int ice_sbq_send_cmd_nolock(struct ice_hw *hw, struct ice_sbq_cmd_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd) { @@ -1783,16 +1709,17 @@ ice_sbq_send_cmd_nolock(struct ice_hw *hw, struct ice_sbq_cmd_desc *desc, * ice_sbq_rw_reg_lp - Fill Sideband Queue command, with lock parameter * @hw: pointer to the HW struct * @in: message info to be filled in descriptor + * @flag: flag to fill desc structure * @lock: true to lock the sq_lock (the usual case); false if the sq_lock has * already been locked at a higher level */ -enum ice_status ice_sbq_rw_reg_lp(struct ice_hw *hw, - struct ice_sbq_msg_input *in, bool lock) +int ice_sbq_rw_reg_lp(struct ice_hw *hw, struct ice_sbq_msg_input *in, + u16 flag, bool lock) { struct ice_sbq_cmd_desc desc = {0}; struct ice_sbq_msg_req msg = {0}; - enum ice_status status; u16 msg_len; + int status; msg_len = sizeof(msg); @@ -1811,7 +1738,7 @@ enum ice_status ice_sbq_rw_reg_lp(struct ice_hw *hw, */ msg_len -= sizeof(msg.data); - desc.flags = CPU_TO_LE16(ICE_AQ_FLAG_RD); + desc.flags = CPU_TO_LE16(flag); desc.opcode = CPU_TO_LE16(ice_sbq_opc_neigh_dev_req); desc.param0.cmd_len = CPU_TO_LE16(msg_len); if (lock) @@ -1829,10 +1756,11 @@ enum 
ice_status ice_sbq_rw_reg_lp(struct ice_hw *hw, * ice_sbq_rw_reg - Fill Sideband Queue command * @hw: pointer to the HW struct * @in: message info to be filled in descriptor + * @flag: flag to fill desc structure */ -enum ice_status ice_sbq_rw_reg(struct ice_hw *hw, struct ice_sbq_msg_input *in) +int ice_sbq_rw_reg(struct ice_hw *hw, struct ice_sbq_msg_input *in, u16 flag) { - return ice_sbq_rw_reg_lp(hw, in, true); + return ice_sbq_rw_reg_lp(hw, in, flag, true); } /** @@ -1887,17 +1815,17 @@ static bool ice_should_retry_sq_send_cmd(u16 opcode) * Retry sending the FW Admin Queue command, multiple times, to the FW Admin * Queue if the EBUSY AQ error is returned. */ -static enum ice_status +static int ice_sq_send_cmd_retry(struct ice_hw *hw, struct ice_ctl_q_info *cq, struct ice_aq_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd) { struct ice_aq_desc desc_cpy; - enum ice_status status; bool is_cmd_for_retry; u8 *buf_cpy = NULL; u8 idx = 0; u16 opcode; + int status; opcode = LE16_TO_CPU(desc->opcode); is_cmd_for_retry = ice_should_retry_sq_send_cmd(opcode); @@ -1917,7 +1845,7 @@ ice_sq_send_cmd_retry(struct ice_hw *hw, struct ice_ctl_q_info *cq, do { status = ice_sq_send_cmd(hw, cq, desc, buf, buf_size, cd); - if (!is_cmd_for_retry || status == ICE_SUCCESS || + if (!is_cmd_for_retry || !status || hw->adminq.sq_last_status != ICE_AQ_RC_EBUSY) break; @@ -1948,13 +1876,13 @@ ice_sq_send_cmd_retry(struct ice_hw *hw, struct ice_ctl_q_info *cq, * * Helper function to send FW Admin Queue commands to the FW Admin Queue. 
*/ -enum ice_status +int ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd) { if (hw->aq_send_cmd_fn) { - enum ice_status status = ICE_ERR_NOT_READY; u16 retval = ICE_AQ_RC_OK; + int status = ICE_ERR_NOT_READY; ice_acquire_lock(&hw->adminq.sq_lock); if (!hw->aq_send_cmd_fn(hw->aq_send_cmd_param, desc, @@ -1964,7 +1892,7 @@ ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf, if (retval) retval &= 0xff; if (retval == ICE_AQ_RC_OK) - status = ICE_SUCCESS; + status = 0; else status = ICE_ERR_AQ_ERROR; } @@ -1984,11 +1912,11 @@ ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf, * * Get the firmware version (0x0001) from the admin queue commands */ -enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd) +int ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd) { struct ice_aqc_get_ver *resp; struct ice_aq_desc desc; - enum ice_status status; + int status; resp = &desc.params.get_ver; @@ -2019,7 +1947,7 @@ enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd) * * Send the driver version (0x0002) to the firmware */ -enum ice_status +int ice_aq_send_driver_ver(struct ice_hw *hw, struct ice_driver_ver *dv, struct ice_sq_cd *cd) { @@ -2056,7 +1984,7 @@ ice_aq_send_driver_ver(struct ice_hw *hw, struct ice_driver_ver *dv, * Tell the Firmware that we're shutting down the AdminQ and whether * or not the driver is unloading as well (0x0003). */ -enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading) +int ice_aq_q_shutdown(struct ice_hw *hw, bool unloading) { struct ice_aqc_q_shutdown *cmd; struct ice_aq_desc desc; @@ -2083,8 +2011,8 @@ enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading) * Requests common resource using the admin queue commands (0x0008). 
* When attempting to acquire the Global Config Lock, the driver can * learn of three states: - * 1) ICE_SUCCESS - acquired lock, and can perform download package - * 2) ICE_ERR_AQ_ERROR - did not get lock, driver should fail to load + * 1) 0 - acquired lock, and can perform download package + * 2) ICE_ERR_AQ_ERROR - did not get lock, driver should fail to load * 3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has * successfully downloaded the package; the driver does * not have to download the package and can continue @@ -2097,14 +2025,14 @@ enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading) * will likely get an error propagated back to it indicating the Download * Package, Update Package or the Release Resource AQ commands timed out. */ -static enum ice_status +static int ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res, enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout, struct ice_sq_cd *cd) { struct ice_aqc_req_res *cmd_resp; struct ice_aq_desc desc; - enum ice_status status; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -2134,7 +2062,7 @@ ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res, if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) { if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) { *timeout = LE32_TO_CPU(cmd_resp->timeout); - return ICE_SUCCESS; + return 0; } else if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_IN_PROG) { *timeout = LE32_TO_CPU(cmd_resp->timeout); @@ -2168,7 +2096,7 @@ ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res, * * release common resource using the admin queue commands (0x0009) */ -static enum ice_status +static int ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number, struct ice_sq_cd *cd) { @@ -2196,14 +2124,14 @@ ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number, * * This function will attempt to acquire the ownership of a resource. 
*/ -enum ice_status +int ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res, enum ice_aq_res_access_type access, u32 timeout) { #define ICE_RES_POLLING_DELAY_MS 10 u32 delay = ICE_RES_POLLING_DELAY_MS; u32 time_left = timeout; - enum ice_status status; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -2257,8 +2185,8 @@ ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res, */ void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res) { - enum ice_status status; u32 total_delay = 0; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -2286,7 +2214,7 @@ void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res) * * Helper function to allocate/free resources using the admin queue commands */ -enum ice_status +int ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries, struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size, enum ice_adminq_opc opc, struct ice_sq_cd *cd) @@ -2321,12 +2249,12 @@ ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries, * @btm: allocate from bottom * @res: pointer to array that will receive the resources */ -enum ice_status +int ice_alloc_hw_res(struct ice_hw *hw, u16 type, u16 num, bool btm, u16 *res) { struct ice_aqc_alloc_free_res_elem *buf; - enum ice_status status; u16 buf_len; + int status; buf_len = ice_struct_size(buf, elem, num); buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); @@ -2360,11 +2288,11 @@ ice_alloc_hw_res(struct ice_hw *hw, u16 type, u16 num, bool btm, u16 *res) * @num: number of resources * @res: pointer to array that contains the resources to free */ -enum ice_status ice_free_hw_res(struct ice_hw *hw, u16 type, u16 num, u16 *res) +int ice_free_hw_res(struct ice_hw *hw, u16 type, u16 num, u16 *res) { struct ice_aqc_alloc_free_res_elem *buf; - enum ice_status status; u16 buf_len; + int status; buf_len = ice_struct_size(buf, elem, num); buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); @@ -2495,6 +2423,11 @@ 
ice_parse_common_caps(struct ice_hw *hw, struct ice_hw_common_caps *caps, true : false; ice_debug(hw, ICE_DBG_INIT, "%s: nvm_unified_update = %d\n", prefix, caps->nvm_unified_update); + caps->netlist_auth = + (number & ICE_NVM_MGMT_NETLIST_AUTH_SUPPORT) ? + true : false; + ice_debug(hw, ICE_DBG_INIT, "%s: netlist_auth = %d\n", prefix, + caps->netlist_auth); break; case ICE_AQC_CAPS_MAX_MTU: caps->max_mtu = number; @@ -2529,6 +2462,8 @@ ice_parse_common_caps(struct ice_hw *hw, struct ice_hw_common_caps *caps, (phys_id & ICE_EXT_TOPO_DEV_IMG_LOAD_EN) != 0; caps->ext_topo_dev_img_prog_en[index] = (phys_id & ICE_EXT_TOPO_DEV_IMG_PROG_EN) != 0; + caps->ext_topo_dev_img_ver_schema[index] = + (phys_id & ICE_EXT_TOPO_DEV_IMG_VER_SCHEMA) != 0; ice_debug(hw, ICE_DBG_INIT, "%s: ext_topo_dev_img_ver_high[%d] = %d\n", prefix, index, @@ -2549,11 +2484,25 @@ ice_parse_common_caps(struct ice_hw *hw, struct ice_hw_common_caps *caps, "%s: ext_topo_dev_img_prog_en[%d] = %d\n", prefix, index, caps->ext_topo_dev_img_prog_en[index]); + ice_debug(hw, ICE_DBG_INIT, + "%s: ext_topo_dev_img_ver_schema[%d] = %d\n", + prefix, index, + caps->ext_topo_dev_img_ver_schema[index]); break; } case ICE_AQC_CAPS_TX_SCHED_TOPO_COMP_MODE: caps->tx_sched_topo_comp_mode_en = (number == 1); break; + case ICE_AQC_CAPS_OROM_RECOVERY_UPDATE: + caps->orom_recovery_update = (number == 1); + ice_debug(hw, ICE_DBG_INIT, "%s: orom_recovery_update = %d\n", + prefix, caps->orom_recovery_update); + break; + case ICE_AQC_CAPS_NEXT_CLUSTER_ID: + caps->next_cluster_id_support = (number == 1); + ice_debug(hw, ICE_DBG_INIT, "%s: next_cluster_id_support = %d\n", + prefix, caps->next_cluster_id_support); + break; default: /* Not one of the recognized common capabilities */ found = false; @@ -2620,7 +2569,7 @@ ice_parse_1588_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p, u32 number = LE32_TO_CPU(cap->number); u8 clk_freq; - ice_debug(hw, ICE_DBG_INIT, "1588 func caps: raw value %x\n", number); + ice_debug(hw, 
ICE_DBG_INIT, "1588 func caps: raw value %#x\n", number); info->ena = ((number & ICE_TS_FUNC_ENA_M) != 0); func_p->common_cap.ieee_1588 = info->ena; @@ -2630,6 +2579,8 @@ ice_parse_1588_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p, info->tmr_index_owned = ((number & ICE_TS_TMR_IDX_OWND_M) != 0); info->tmr_index_assoc = ((number & ICE_TS_TMR_IDX_ASSOC_M) != 0); + info->gpio_1pps = ((number & ICE_TS_GPIO_1PPS_ASSOC) != 0); + info->clk_src = ((number & ICE_TS_CLK_SRC_M) != 0); clk_freq = (number & ICE_TS_CLK_FREQ_M) >> ICE_TS_CLK_FREQ_S; if (clk_freq < NUM_ICE_TIME_REF_FREQ) { @@ -2660,6 +2611,7 @@ ice_parse_1588_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p, info->clk_src); } +static void /** * ice_parse_fdir_func_caps - Parse ICE_AQC_CAPS_FD function caps * @hw: pointer to the HW struct @@ -2667,7 +2619,6 @@ ice_parse_1588_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p, * * Extract function capabilities for ICE_AQC_CAPS_FD. */ -static void ice_parse_fdir_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p) { u32 reg_val, val; @@ -2675,11 +2626,11 @@ ice_parse_fdir_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p) if (hw->dcf_enabled) return; reg_val = rd32(hw, GLQF_FD_SIZE); - val = (reg_val & GLQF_FD_SIZE_FD_GSIZE_M) >> + val = (reg_val & GLQF_FD_SIZE_FD_GSIZE_M_BY_MAC(hw)) >> GLQF_FD_SIZE_FD_GSIZE_S; func_p->fd_fltr_guar = ice_get_num_per_func(hw, val); - val = (reg_val & GLQF_FD_SIZE_FD_BSIZE_M) >> + val = (reg_val & GLQF_FD_SIZE_FD_BSIZE_M_BY_MAC(hw)) >> GLQF_FD_SIZE_FD_BSIZE_S; func_p->fd_fltr_best_effort = val; @@ -2828,6 +2779,7 @@ ice_parse_1588_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p, info->tmr1_ena = ((number & ICE_TS_TMR1_ENA_M) != 0); info->ts_ll_read = ((number & ICE_TS_LL_TX_TS_READ_M) != 0); + info->ts_ll_int_read = ((number & ICE_TS_LL_TX_TS_INT_READ_M) != 0); info->tmr_own_map = phys_id; @@ -2847,6 +2799,8 @@ ice_parse_1588_dev_caps(struct ice_hw *hw, struct 
ice_hw_dev_caps *dev_p, info->tmr1_ena); ice_debug(hw, ICE_DBG_INIT, "dev caps: ts_ll_read = %u\n", info->ts_ll_read); + ice_debug(hw, ICE_DBG_INIT, "dev caps: ts_ll_int_read = %u\n", + info->ts_ll_int_read); ice_debug(hw, ICE_DBG_INIT, "dev caps: tmr_own_map = %u\n", info->tmr_own_map); } @@ -2885,6 +2839,10 @@ ice_parse_nac_topo_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p, dev_p->nac_topo.mode = LE32_TO_CPU(cap->number); dev_p->nac_topo.id = LE32_TO_CPU(cap->phys_id) & ICE_NAC_TOPO_ID_M; + ice_info(hw, "PF is configured in %s mode with IP instance ID %d\n", + (dev_p->nac_topo.mode & ICE_NAC_TOPO_PRIMARY_M) ? + "primary" : "secondary", dev_p->nac_topo.id); + ice_debug(hw, ICE_DBG_INIT, "dev caps: nac topology is_primary = %d\n", !!(dev_p->nac_topo.mode & ICE_NAC_TOPO_PRIMARY_M)); ice_debug(hw, ICE_DBG_INIT, "dev caps: nac topology is_dual = %d\n", @@ -2893,6 +2851,26 @@ ice_parse_nac_topo_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p, dev_p->nac_topo.id); } +/** + * ice_parse_sensor_reading_cap - Parse ICE_AQC_CAPS_SENSOR_READING cap + * @hw: pointer to the HW struct + * @dev_p: pointer to device capabilities structure + * @cap: capability element to parse + * + * Parse ICE_AQC_CAPS_SENSOR_READING for device capability for reading + * enabled sensors. 
+ */ +static void +ice_parse_sensor_reading_cap(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p, + struct ice_aqc_list_caps_elem *cap) +{ + dev_p->supported_sensors = LE32_TO_CPU(cap->number); + + ice_debug(hw, ICE_DBG_INIT, + "dev caps: supported sensors (bitmap) = 0x%x\n", + dev_p->supported_sensors); +} + /** * ice_parse_dev_caps - Parse device capabilities * @hw: pointer to the HW struct @@ -2941,6 +2919,9 @@ ice_parse_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p, case ICE_AQC_CAPS_NAC_TOPOLOGY: ice_parse_nac_topo_dev_caps(hw, dev_p, &cap_resp[i]); break; + case ICE_AQC_CAPS_SENSOR_READING: + ice_parse_sensor_reading_cap(hw, dev_p, &cap_resp[i]); + break; default: /* Don't list common capabilities as unknown */ if (!found) @@ -2953,6 +2934,125 @@ ice_parse_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p, ice_recalc_port_limited_caps(hw, &dev_p->common_cap); } +/** + * ice_aq_get_netlist_node_pin + * @hw: pointer to the hw struct + * @cmd: get_link_topo_pin AQ structure + * @node_handle: output node handle parameter if node found + */ +int +ice_aq_get_netlist_node_pin(struct ice_hw *hw, + struct ice_aqc_get_link_topo_pin *cmd, + u16 *node_handle) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo_pin); + desc.params.get_link_topo_pin = *cmd; + + if (ice_aq_send_cmd(hw, &desc, NULL, 0, NULL)) + return ICE_ERR_NOT_SUPPORTED; + + if (node_handle) + *node_handle = + LE16_TO_CPU(desc.params.get_link_topo_pin.addr.handle); + + cmd->output_io_params = desc.params.get_link_topo_pin.output_io_params; + cmd->output_io_flags = desc.params.get_link_topo_pin.output_io_flags; + + return 0; +} + +/** + * ice_aq_get_netlist_node + * @hw: pointer to the hw struct + * @cmd: get_link_topo AQ structure + * @node_part_number: output node part number if node found + * @node_handle: output node handle parameter if node found + */ +int +ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd, + 
u8 *node_part_number, u16 *node_handle) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo); + desc.params.get_link_topo = *cmd; + + if (ice_aq_send_cmd(hw, &desc, NULL, 0, NULL)) + return ICE_ERR_NOT_SUPPORTED; + + if (node_handle) + *node_handle = + LE16_TO_CPU(desc.params.get_link_topo.addr.handle); + if (node_part_number) + *node_part_number = desc.params.get_link_topo.node_part_num; + + return 0; +} + +#define MAX_NETLIST_SIZE 10 +/** + * ice_find_netlist_node + * @hw: pointer to the hw struct + * @node_type_ctx: type of netlist node to look for + * @node_part_number: node part number to look for + * @node_handle: output parameter if node found - optional + * + * Scan the netlist for a node handle of the given node type and part number. + * + * If node_handle is non-NULL it will be modified on function exit. It is only + * valid if the function returns zero, and should be ignored on any non-zero + * return value. + * + * Returns: 0 if the node is found, ICE_ERR_DOES_NOT_EXIST if no handle was + * found, and an error code on failure to access the AQ. 
+ */ +int +ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx, u8 node_part_number, + u16 *node_handle) +{ + u8 idx; + + for (idx = 0; idx < MAX_NETLIST_SIZE; idx++) { + struct ice_aqc_get_link_topo cmd; + u8 rec_node_part_number; + int status; + + memset(&cmd, 0, sizeof(cmd)); + + cmd.addr.topo_params.node_type_ctx = + (node_type_ctx << ICE_AQC_LINK_TOPO_NODE_TYPE_S); + cmd.addr.topo_params.index = idx; + + status = ice_aq_get_netlist_node(hw, &cmd, + &rec_node_part_number, + node_handle); + if (status) + return status; + + if (rec_node_part_number == node_part_number) + return 0; + } + + return ICE_ERR_DOES_NOT_EXIST; +} + +/** + * ice_is_gps_in_netlist + * @hw: pointer to the hw struct + * + * Check if the GPS generic device is present in the netlist + */ +bool ice_is_gps_in_netlist(struct ice_hw *hw) +{ + if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_GPS, + ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_GPS, NULL)) + return false; + + return true; +} + /** * ice_aq_list_caps - query function/device capabilities * @hw: pointer to the HW struct @@ -2972,13 +3072,13 @@ ice_parse_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p, * buffer size be set to ICE_AQ_MAX_BUF_LEN (the largest possible buffer that * firmware could return) to avoid this. */ -static enum ice_status +static int ice_aq_list_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count, enum ice_adminq_opc opc, struct ice_sq_cd *cd) { struct ice_aqc_list_caps *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.get_cap; @@ -3003,12 +3103,12 @@ ice_aq_list_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count, * Read the device capabilities and extract them into the dev_caps structure * for later use. 
*/ -static enum ice_status +static int ice_discover_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_caps) { - enum ice_status status; u32 cap_count = 0; void *cbuf; + int status; cbuf = ice_malloc(hw, ICE_AQ_MAX_BUF_LEN); if (!cbuf) @@ -3037,12 +3137,12 @@ ice_discover_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_caps) * Read the function capabilities and extract them into the func_caps structure * for later use. */ -static enum ice_status +static int ice_discover_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_caps) { - enum ice_status status; u32 cap_count = 0; void *cbuf; + int status; cbuf = ice_malloc(hw, ICE_AQ_MAX_BUF_LEN); if (!cbuf) @@ -3130,9 +3230,9 @@ void ice_set_safe_mode_caps(struct ice_hw *hw) * ice_get_caps - get info about the HW * @hw: pointer to the hardware structure */ -enum ice_status ice_get_caps(struct ice_hw *hw) +int ice_get_caps(struct ice_hw *hw) { - enum ice_status status; + int status; status = ice_discover_dev_caps(hw, &hw->dev_caps); if (status) @@ -3150,7 +3250,7 @@ enum ice_status ice_get_caps(struct ice_hw *hw) * * This function is used to write MAC address to the NVM (0x0108). */ -enum ice_status +int ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags, struct ice_sq_cd *cd) { @@ -3172,7 +3272,7 @@ ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags, * * Tell the firmware that the driver is taking over from PXE (0x0110). */ -static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw) +static int ice_aq_clear_pxe_mode(struct ice_hw *hw) { struct ice_aq_desc desc; @@ -3196,7 +3296,7 @@ void ice_clear_pxe_mode(struct ice_hw *hw) } /** - * ice_aq_set_port_params - set physical port parameters. 
+ * ice_aq_set_port_params - set physical port parameters * @pi: pointer to the port info struct * @bad_frame_vsi: defines the VSI to which bad frames are forwarded * @save_bad_pac: if set packets with errors are forwarded to the bad frames VSI @@ -3206,11 +3306,10 @@ void ice_clear_pxe_mode(struct ice_hw *hw) * * Set Physical port parameters (0x0203) */ -enum ice_status +int ice_aq_set_port_params(struct ice_port_info *pi, u16 bad_frame_vsi, bool save_bad_pac, bool pad_short_pac, bool double_vlan, struct ice_sq_cd *cd) - { struct ice_aqc_set_port_params *cmd; struct ice_hw *hw = pi->hw; @@ -3220,6 +3319,8 @@ ice_aq_set_port_params(struct ice_port_info *pi, u16 bad_frame_vsi, cmd = &desc.params.set_port_params; ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_params); + cmd->lb_mode = pi->loopback_mode | + ICE_AQC_SET_P_PARAMS_LOOPBACK_MODE_VALID; cmd->bad_frame_vsi = CPU_TO_LE16(bad_frame_vsi); if (save_bad_pac) cmd_flags |= ICE_AQC_SET_P_PARAMS_SAVE_BAD_PACKETS; @@ -3262,8 +3363,8 @@ bool ice_is_100m_speed_supported(struct ice_hw *hw) * Note: In the structure of [phy_type_low, phy_type_high], there should * be one bit set, as this function will convert one PHY type to its * speed. 
- * If no bit gets set, ICE_LINK_SPEED_UNKNOWN will be returned - * If more than one bit gets set, ICE_LINK_SPEED_UNKNOWN will be returned + * If no bit gets set, ICE_AQ_LINK_SPEED_UNKNOWN will be returned + * If more than one bit gets set, ICE_AQ_LINK_SPEED_UNKNOWN will be returned */ static u16 ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high) @@ -3367,6 +3468,18 @@ ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high) case ICE_PHY_TYPE_HIGH_100G_AUI2: speed_phy_type_high = ICE_AQ_LINK_SPEED_100GB; break; + case ICE_PHY_TYPE_HIGH_200G_CR4_PAM4: + case ICE_PHY_TYPE_HIGH_200G_SR4: + case ICE_PHY_TYPE_HIGH_200G_FR4: + case ICE_PHY_TYPE_HIGH_200G_LR4: + case ICE_PHY_TYPE_HIGH_200G_DR4: + case ICE_PHY_TYPE_HIGH_200G_KR4_PAM4: + case ICE_PHY_TYPE_HIGH_200G_AUI4_AOC_ACC: + case ICE_PHY_TYPE_HIGH_200G_AUI4: + case ICE_PHY_TYPE_HIGH_200G_AUI8_AOC_ACC: + case ICE_PHY_TYPE_HIGH_200G_AUI8: + speed_phy_type_high = ICE_AQ_LINK_SPEED_200GB; + break; default: speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN; break; @@ -3440,12 +3553,12 @@ ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high, * mode as the PF may not have the privilege to set some of the PHY Config * parameters. This status will be indicated by the command response (0x0601). 
*/ -enum ice_status +int ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd) { struct ice_aq_desc desc; - enum ice_status status; + int status; if (!cfg) return ICE_ERR_PARAM; @@ -3478,7 +3591,7 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi, status = ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd); if (hw->adminq.sq_last_status == ICE_AQ_RC_EMODE) - status = ICE_SUCCESS; + status = 0; if (!status) pi->phy.curr_user_phy_cfg = *cfg; @@ -3490,10 +3603,10 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi, * ice_update_link_info - update status of the HW network link * @pi: port info structure of the interested logical port */ -enum ice_status ice_update_link_info(struct ice_port_info *pi) +int ice_update_link_info(struct ice_port_info *pi) { struct ice_link_status *li; - enum ice_status status; + int status; if (!pi) return ICE_ERR_PARAM; @@ -3517,7 +3630,7 @@ enum ice_status ice_update_link_info(struct ice_port_info *pi) status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA, pcaps, NULL); - if (status == ICE_SUCCESS) + if (!status) ice_memcpy(li->module_type, &pcaps->module_type, sizeof(li->module_type), ICE_NONDMA_TO_NONDMA); @@ -3617,7 +3730,7 @@ enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options) * @cfg: PHY configuration data to set FC mode * @req_mode: FC mode to configure */ -static enum ice_status +static int ice_cfg_phy_fc(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fc_mode req_mode) { @@ -3626,18 +3739,16 @@ ice_cfg_phy_fc(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, if (!pi || !cfg) return ICE_ERR_BAD_PTR; - switch (req_mode) { case ICE_FC_AUTO: { struct ice_aqc_get_phy_caps_data *pcaps; - enum ice_status status; + int status; pcaps = (struct ice_aqc_get_phy_caps_data *) ice_malloc(pi->hw, sizeof(*pcaps)); if (!pcaps) return ICE_ERR_NO_MEMORY; - /* Query the value of FC that 
both the NIC and attached media * can do. */ @@ -3679,7 +3790,7 @@ ice_cfg_phy_fc(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, cache_data.data.curr_user_fc_req = req_mode; ice_cache_phy_user_req(pi, cache_data, ICE_FC_MODE); - return ICE_SUCCESS; + return 0; } /** @@ -3690,13 +3801,13 @@ ice_cfg_phy_fc(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, * * Set the requested flow control mode. */ -enum ice_status +int ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) { struct ice_aqc_set_phy_cfg_data cfg = { 0 }; struct ice_aqc_get_phy_caps_data *pcaps; - enum ice_status status; struct ice_hw *hw; + int status; if (!pi || !aq_failures) return ICE_ERR_BAD_PTR; @@ -3751,7 +3862,7 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) for (retry_count = 0; retry_count < retry_max; retry_count++) { status = ice_update_link_info(pi); - if (status == ICE_SUCCESS) + if (!status) break; ice_msec_delay(100, true); @@ -3837,13 +3948,13 @@ ice_copy_phy_caps_to_cfg(struct ice_port_info *pi, * @cfg: PHY configuration data to set FEC mode * @fec: FEC mode to configure */ -enum ice_status +int ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec) { struct ice_aqc_get_phy_caps_data *pcaps; - enum ice_status status = ICE_SUCCESS; struct ice_hw *hw; + int status = 0; if (!pi || !cfg) return ICE_ERR_BAD_PTR; @@ -3890,8 +4001,10 @@ ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, break; case ICE_FEC_DIS_AUTO: /* Set No FEC and auto FEC */ - if (!ice_fw_supports_fec_dis_auto(hw)) - return ICE_ERR_NOT_SUPPORTED; + if (!ice_fw_supports_fec_dis_auto(hw)) { + status = ICE_ERR_NOT_SUPPORTED; + goto out; + } cfg->link_fec_opt |= ICE_AQC_PHY_FEC_DIS; /* fall-through */ case ICE_FEC_AUTO: @@ -3931,10 +4044,10 @@ ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, * The variable link_up is invalid 
if status is non zero. As a * result of this call, link status reporting becomes enabled */ -enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up) +int ice_get_link_status(struct ice_port_info *pi, bool *link_up) { struct ice_phy_info *phy_info; - enum ice_status status = ICE_SUCCESS; + int status = 0; if (!pi || !link_up) return ICE_ERR_PARAM; @@ -3962,10 +4075,11 @@ enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up) * * Sets up the link and restarts the Auto-Negotiation over the link. */ -enum ice_status +int ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link, struct ice_sq_cd *cd) { + int status = ICE_ERR_AQ_ERROR; struct ice_aqc_restart_an *cmd; struct ice_aq_desc desc; @@ -3980,7 +4094,16 @@ ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link, else cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE; - return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd); + status = ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd); + if (status) + return status; + + if (ena_link) + pi->phy.curr_user_phy_cfg.caps |= ICE_AQC_PHY_EN_LINK; + else + pi->phy.curr_user_phy_cfg.caps &= ~ICE_AQC_PHY_EN_LINK; + + return 0; } /** @@ -3992,7 +4115,7 @@ ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link, * * Set event mask (0x0613) */ -enum ice_status +int ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask, struct ice_sq_cd *cd) { @@ -4017,7 +4140,7 @@ ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask, * * Enable/disable loopback on a given port */ -enum ice_status +int ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd) { struct ice_aqc_set_mac_lb *cmd; @@ -4040,7 +4163,7 @@ ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd) * * Set LED value for the given port (0x06e9) */ -enum ice_status +int ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode, struct ice_sq_cd *cd) { @@ -4075,14 +4198,14 @@ 
ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode, * * Read/Write SFF EEPROM (0x06EE) */ -enum ice_status +int ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr, u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length, bool write, struct ice_sq_cd *cd) { struct ice_aqc_sff_eeprom *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; if (!data || (mem_addr & 0xff00)) return ICE_ERR_PARAM; @@ -4115,7 +4238,7 @@ ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr, * Program Topology Device NVM (0x06F2) * */ -enum ice_status +int ice_aq_prog_topo_dev_nvm(struct ice_hw *hw, struct ice_aqc_link_topo_params *topo_params, struct ice_sq_cd *cd) @@ -4144,7 +4267,7 @@ ice_aq_prog_topo_dev_nvm(struct ice_hw *hw, * Read Topology Device NVM (0x06F3) * */ -enum ice_status +int ice_aq_read_topo_dev_nvm(struct ice_hw *hw, struct ice_aqc_link_topo_params *topo_params, u32 start_address, u8 *data, u8 data_size, @@ -4152,7 +4275,7 @@ ice_aq_read_topo_dev_nvm(struct ice_hw *hw, { struct ice_aqc_read_topo_dev_nvm *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; if (!data || data_size == 0 || data_size > ICE_AQC_READ_TOPO_DEV_NVM_DATA_READ_SIZE) @@ -4173,7 +4296,56 @@ ice_aq_read_topo_dev_nvm(struct ice_hw *hw, ice_memcpy(data, cmd->data_read, data_size, ICE_NONDMA_TO_NONDMA); - return ICE_SUCCESS; + return 0; +} + +static u16 ice_lut_type_to_size(u16 lut_type) +{ + switch (lut_type) { + case ICE_LUT_VSI: + return ICE_LUT_VSI_SIZE; + case ICE_LUT_GLOBAL: + return ICE_LUT_GLOBAL_SIZE; + case ICE_LUT_PF: + return ICE_LUT_PF_SIZE; + case ICE_LUT_PF_SMALL: + return ICE_LUT_PF_SMALL_SIZE; + default: + return 0; + } +} + +static u16 ice_lut_size_to_flag(u16 lut_size) +{ + u16 f = 0; + + switch (lut_size) { + case ICE_LUT_GLOBAL_SIZE: + f = ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG; + break; + case ICE_LUT_PF_SIZE: + f = ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG; + break; + default: + break; + } + return f << 
ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S; +} + +int ice_lut_size_to_type(int lut_size) +{ + switch (lut_size) { + case ICE_LUT_VSI_SIZE: + return ICE_LUT_VSI; + case ICE_LUT_GLOBAL_SIZE: + return ICE_LUT_GLOBAL; + case ICE_LUT_PF_SIZE: + return ICE_LUT_PF; + case ICE_LUT_PF_SMALL_SIZE: + return ICE_LUT_PF_SMALL; + default: + return -1; + } } /** @@ -4184,13 +4356,13 @@ ice_aq_read_topo_dev_nvm(struct ice_hw *hw, * * Internal function to get (0x0B05) or set (0x0B03) RSS look up table */ -static enum ice_status +static int __ice_aq_get_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *params, bool set) { - u16 flags = 0, vsi_id, lut_type, lut_size, glob_lut_idx, vsi_handle; + u16 flags, vsi_id, lut_type, lut_size, glob_lut_idx = 0, vsi_handle; struct ice_aqc_get_set_rss_lut *cmd_resp; struct ice_aq_desc desc; - enum ice_status status; + int status; u8 *lut; if (!params) @@ -4198,16 +4370,22 @@ __ice_aq_get_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params vsi_handle = params->vsi_handle; lut = params->lut; + lut_size = ice_lut_type_to_size(params->lut_type); + lut_type = params->lut_type & ICE_LUT_TYPE_MASK; + cmd_resp = &desc.params.get_set_rss_lut; + if (lut_type == ICE_LUT_GLOBAL) + glob_lut_idx = params->global_lut_id; - if (!ice_is_vsi_valid(hw, vsi_handle) || !lut) + if (!lut || !lut_size || !ice_is_vsi_valid(hw, vsi_handle)) return ICE_ERR_PARAM; - lut_size = params->lut_size; - lut_type = params->lut_type; - glob_lut_idx = params->global_lut_id; - vsi_id = ice_get_hw_vsi_num(hw, vsi_handle); + if (lut_size > params->lut_size) + return ICE_ERR_INVAL_SIZE; - cmd_resp = &desc.params.get_set_rss_lut; + if (set && lut_size != params->lut_size) + return ICE_ERR_PARAM; + + vsi_id = ice_get_hw_vsi_num(hw, vsi_handle); if (set) { ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut); @@ -4221,61 +4399,15 @@ __ice_aq_get_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params ICE_AQC_GSET_RSS_LUT_VSI_ID_M) | 
ICE_AQC_GSET_RSS_LUT_VSI_VALID); - switch (lut_type) { - case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI: - case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF: - case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL: - flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) & - ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M); - break; - default: - status = ICE_ERR_PARAM; - goto ice_aq_get_set_rss_lut_exit; - } - - if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) { - flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) & - ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M); - - if (!set) - goto ice_aq_get_set_rss_lut_send; - } else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) { - if (!set) - goto ice_aq_get_set_rss_lut_send; - } else { - goto ice_aq_get_set_rss_lut_send; - } - - /* LUT size is only valid for Global and PF table types */ - switch (lut_size) { - case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128: - flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG << - ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) & - ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M; - break; - case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512: - flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG << - ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) & - ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M; - break; - case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K: - if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) { - flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG << - ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) & - ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M; - break; - } - /* fall-through */ - default: - status = ICE_ERR_PARAM; - goto ice_aq_get_set_rss_lut_exit; - } + flags = ice_lut_size_to_flag(lut_size) | + ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) & + ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M) | + ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) & + ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M); -ice_aq_get_set_rss_lut_send: cmd_resp->flags = CPU_TO_LE16(flags); status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL); - -ice_aq_get_set_rss_lut_exit: + params->lut_size = LE16_TO_CPU(desc.datalen); return status; 
} @@ -4286,7 +4418,7 @@ __ice_aq_get_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params * * get the RSS lookup table, PF or VSI type */ -enum ice_status +int ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params) { return __ice_aq_get_set_rss_lut(hw, get_params, false); @@ -4299,7 +4431,7 @@ ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_ * * set the RSS lookup table, PF or VSI type */ -enum ice_status +int ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_params) { return __ice_aq_get_set_rss_lut(hw, set_params, true); @@ -4314,8 +4446,7 @@ ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_ * * get (0x0B04) or set (0x0B02) the RSS key per VSI */ -static enum -ice_status __ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id, +static int __ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id, struct ice_aqc_get_set_rss_keys *key, bool set) { @@ -4348,7 +4479,7 @@ ice_status __ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id, * * get the RSS key per VSI */ -enum ice_status +int ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle, struct ice_aqc_get_set_rss_keys *key) { @@ -4367,7 +4498,7 @@ ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle, * * set the RSS key per VSI */ -enum ice_status +int ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle, struct ice_aqc_get_set_rss_keys *keys) { @@ -4399,7 +4530,7 @@ ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle, * Association of Tx queue to Doorbell queue is not part of Add LAN Tx queue * flow. 
*/ -enum ice_status +int ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps, struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size, struct ice_sq_cd *cd) @@ -4449,7 +4580,7 @@ ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps, * * Disable LAN Tx queue (0x0C31) */ -static enum ice_status +static int ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps, struct ice_aqc_dis_txq_item *qg_list, u16 buf_size, enum ice_disq_rst_src rst_src, u16 vmvf_num, @@ -4458,7 +4589,7 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps, struct ice_aqc_dis_txq_item *item; struct ice_aqc_dis_txqs *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; u16 i, sz = 0; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -4545,7 +4676,7 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps, * * Move / Reconfigure Tx LAN queues (0x0C32) */ -enum ice_status +int ice_aq_move_recfg_lan_txq(struct ice_hw *hw, u8 num_qs, bool is_move, bool is_tc_change, bool subseq_call, bool flush_pipe, u8 timeout, u32 *blocked_cgds, @@ -4554,7 +4685,7 @@ ice_aq_move_recfg_lan_txq(struct ice_hw *hw, u8 num_qs, bool is_move, { struct ice_aqc_move_txqs *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.move_txqs; ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_move_recfg_txqs); @@ -4631,13 +4762,13 @@ ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) /* get the current bits from the target bit string */ dest = dest_ctx + (ce_info->lsb / 8); - ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA); + ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_NONDMA_TO_NONDMA); dest_byte &= ~mask; /* get the bits not changing */ dest_byte |= src_byte; /* add in the new bits */ /* put it all back */ - ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA); + ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_NONDMA); } /** @@ -4674,13 +4805,13 @@ ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele 
*ce_info) /* get the current bits from the target bit string */ dest = dest_ctx + (ce_info->lsb / 8); - ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA); + ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_NONDMA_TO_NONDMA); dest_word &= ~(CPU_TO_LE16(mask)); /* get the bits not changing */ dest_word |= CPU_TO_LE16(src_word); /* add in the new bits */ /* put it all back */ - ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA); + ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_NONDMA); } /** @@ -4725,13 +4856,13 @@ ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) /* get the current bits from the target bit string */ dest = dest_ctx + (ce_info->lsb / 8); - ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA); + ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_NONDMA_TO_NONDMA); dest_dword &= ~(CPU_TO_LE32(mask)); /* get the bits not changing */ dest_dword |= CPU_TO_LE32(src_dword); /* add in the new bits */ /* put it all back */ - ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA); + ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_NONDMA); } /** @@ -4776,13 +4907,13 @@ ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) /* get the current bits from the target bit string */ dest = dest_ctx + (ce_info->lsb / 8); - ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA); + ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_NONDMA_TO_NONDMA); dest_qword &= ~(CPU_TO_LE64(mask)); /* get the bits not changing */ dest_qword |= CPU_TO_LE64(src_qword); /* add in the new bits */ /* put it all back */ - ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA); + ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_NONDMA); } /** @@ -4792,7 +4923,7 @@ ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) * @dest_ctx: pointer to memory for the packed structure * 
@ce_info: a description of the structure to be transformed */ -enum ice_status +int ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) { @@ -4826,7 +4957,7 @@ ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx, } } - return ICE_SUCCESS; + return 0; } /** @@ -4838,21 +4969,22 @@ ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx, * @buf: dump buffer * @buf_size: dump buffer size * @ret_buf_size: return buffer size (returned by FW) + * @ret_next_cluster: next cluster to read (returned by FW) * @ret_next_table: next block to read (returned by FW) * @ret_next_index: next index to read (returned by FW) * @cd: pointer to command details structure * * Get internal FW/HW data (0xFF08) for debug purposes. */ -enum ice_status -ice_aq_get_internal_data(struct ice_hw *hw, u8 cluster_id, u16 table_id, +int +ice_aq_get_internal_data(struct ice_hw *hw, u16 cluster_id, u16 table_id, u32 start, void *buf, u16 buf_size, u16 *ret_buf_size, - u16 *ret_next_table, u32 *ret_next_index, - struct ice_sq_cd *cd) + u16 *ret_next_cluster, u16 *ret_next_table, + u32 *ret_next_index, struct ice_sq_cd *cd) { struct ice_aqc_debug_dump_internals *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.debug_dump; @@ -4861,7 +4993,7 @@ ice_aq_get_internal_data(struct ice_hw *hw, u8 cluster_id, u16 table_id, ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_debug_dump_internals); - cmd->cluster_id = cluster_id; + cmd->cluster_id = CPU_TO_LE16(cluster_id); cmd->table_id = CPU_TO_LE16(table_id); cmd->idx = CPU_TO_LE32(start); @@ -4870,6 +5002,8 @@ ice_aq_get_internal_data(struct ice_hw *hw, u8 cluster_id, u16 table_id, if (!status) { if (ret_buf_size) *ret_buf_size = LE16_TO_CPU(desc.datalen); + if (ret_next_cluster) + *ret_next_cluster = LE16_TO_CPU(cmd->cluster_id); if (ret_next_table) *ret_next_table = LE16_TO_CPU(cmd->table_id); if (ret_next_index) @@ -4902,9 +5036,9 @@ ice_read_byte(u8 *src_ctx, u8 *dest_ctx, 
const struct ice_ctx_ele *ce_info) /* get the current bits from the src bit string */ src = src_ctx + (ce_info->lsb / 8); - ice_memcpy(&dest_byte, src, sizeof(dest_byte), ICE_DMA_TO_NONDMA); + ice_memcpy(&dest_byte, src, sizeof(dest_byte), ICE_NONDMA_TO_NONDMA); - dest_byte &= ~(mask); + dest_byte &= mask; dest_byte >>= shift_width; @@ -4912,7 +5046,7 @@ ice_read_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) target = dest_ctx + ce_info->offset; /* put it back in the struct */ - ice_memcpy(target, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA); + ice_memcpy(target, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_NONDMA); } /** @@ -4939,12 +5073,12 @@ ice_read_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) /* get the current bits from the src bit string */ src = src_ctx + (ce_info->lsb / 8); - ice_memcpy(&src_word, src, sizeof(src_word), ICE_DMA_TO_NONDMA); + ice_memcpy(&src_word, src, sizeof(src_word), ICE_NONDMA_TO_NONDMA); /* the data in the memory is stored as little endian so mask it * correctly */ - src_word &= ~(CPU_TO_LE16(mask)); + src_word &= CPU_TO_LE16(mask); /* get the data back into host order before shifting */ dest_word = LE16_TO_CPU(src_word); @@ -4955,7 +5089,7 @@ ice_read_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) target = dest_ctx + ce_info->offset; /* put it back in the struct */ - ice_memcpy(target, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA); + ice_memcpy(target, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_NONDMA); } /** @@ -4990,12 +5124,12 @@ ice_read_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) /* get the current bits from the src bit string */ src = src_ctx + (ce_info->lsb / 8); - ice_memcpy(&src_dword, src, sizeof(src_dword), ICE_DMA_TO_NONDMA); + ice_memcpy(&src_dword, src, sizeof(src_dword), ICE_NONDMA_TO_NONDMA); /* the data in the memory is stored as little endian so mask it * correctly */ - src_dword &= ~(CPU_TO_LE32(mask)); + src_dword &= 
CPU_TO_LE32(mask); /* get the data back into host order before shifting */ dest_dword = LE32_TO_CPU(src_dword); @@ -5006,7 +5140,7 @@ ice_read_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) target = dest_ctx + ce_info->offset; /* put it back in the struct */ - ice_memcpy(target, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA); + ice_memcpy(target, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_NONDMA); } /** @@ -5041,12 +5175,12 @@ ice_read_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) /* get the current bits from the src bit string */ src = src_ctx + (ce_info->lsb / 8); - ice_memcpy(&src_qword, src, sizeof(src_qword), ICE_DMA_TO_NONDMA); + ice_memcpy(&src_qword, src, sizeof(src_qword), ICE_NONDMA_TO_NONDMA); /* the data in the memory is stored as little endian so mask it * correctly */ - src_qword &= ~(CPU_TO_LE64(mask)); + src_qword &= CPU_TO_LE64(mask); /* get the data back into host order before shifting */ dest_qword = LE64_TO_CPU(src_qword); @@ -5057,7 +5191,7 @@ ice_read_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) target = dest_ctx + ce_info->offset; /* put it back in the struct */ - ice_memcpy(target, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA); + ice_memcpy(target, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_NONDMA); } /** @@ -5066,7 +5200,7 @@ ice_read_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) * @dest_ctx: pointer to a generic non-packed context structure * @ce_info: a description of the structure to be read from */ -enum ice_status +int ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) { int f; @@ -5091,7 +5225,7 @@ ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info) } } - return ICE_SUCCESS; + return 0; } /** @@ -5131,7 +5265,7 @@ ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle) * * This function adds one LAN queue */ -enum ice_status +int ice_ena_vsi_txq(struct ice_port_info 
*pi, u16 vsi_handle, u8 tc, u16 q_handle, u8 num_qgrps, struct ice_aqc_add_tx_qgrp *buf, u16 buf_size, struct ice_sq_cd *cd) @@ -5139,8 +5273,8 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, struct ice_aqc_txsched_elem_data node = { 0 }; struct ice_sched_node *parent; struct ice_q_ctx *q_ctx; - enum ice_status status; struct ice_hw *hw; + int status; if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY) return ICE_ERR_CFG; @@ -5199,7 +5333,7 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, /* add the LAN queue */ status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd); - if (status != ICE_SUCCESS) { + if (status) { ice_debug(hw, ICE_DBG_SCHED, "enable queue %d failed %d\n", LE16_TO_CPU(buf->txqs[0].txq_id), hw->adminq.sq_last_status); @@ -5236,15 +5370,15 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, * * This function removes queues and their corresponding nodes in SW DB */ -enum ice_status +int ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues, u16 *q_handles, u16 *q_ids, u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num, struct ice_sq_cd *cd) { - enum ice_status status = ICE_ERR_DOES_NOT_EXIST; struct ice_aqc_dis_txq_item *qg_list; struct ice_q_ctx *q_ctx; + int status = ICE_ERR_DOES_NOT_EXIST; struct ice_hw *hw; u16 i, buf_size; @@ -5294,7 +5428,7 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues, status = ice_aq_dis_lan_txq(hw, 1, qg_list, buf_size, rst_src, vmvf_num, cd); - if (status != ICE_SUCCESS) + if (status) break; ice_free_sched_node(pi, node); q_ctx->q_handle = ICE_INVAL_Q_HANDLE; @@ -5314,11 +5448,11 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues, * * This function adds/updates the VSI queues per TC. 
*/ -static enum ice_status +static int ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, u16 *maxqs, u8 owner) { - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 i; if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY) @@ -5353,7 +5487,7 @@ ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, * * This function adds/updates the VSI LAN queues per TC. */ -enum ice_status +int ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, u16 *max_lanqs) { @@ -5361,6 +5495,35 @@ ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, ICE_SCHED_NODE_OWNER_LAN); } +/** + * ice_aq_cfg_cgu_err + * @hw: pointer to the HW struct + * @ena_event_report: enable or disable event reporting + * @ena_err_report: enable/re-enable or disable error reporting mechanism + * @cd: pointer to command details structure or NULL + * + * Configure CGU error reporting mechanism (0x0C60) + */ +int +ice_aq_cfg_cgu_err(struct ice_hw *hw, bool ena_event_report, + bool ena_err_report, struct ice_sq_cd *cd) +{ + struct ice_aqc_cfg_cgu_err *cmd; + struct ice_aq_desc desc; + + cmd = &desc.params.config_cgu_err; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_cfg_cgu_err); + + if (!ena_event_report) + cmd->cmd |= ICE_AQC_CFG_CGU_EVENT_DIS; + + if (!ena_err_report) + cmd->cmd |= ICE_AQC_CFG_CGU_ERR_DIS; + + return ice_aq_send_cmd(hw, &desc, NULL, 0, cd); +} + /** * ice_aq_get_sensor_reading * @hw: pointer to the HW struct @@ -5371,14 +5534,14 @@ ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, * * Get sensor reading (0x0632) */ -enum ice_status +int ice_aq_get_sensor_reading(struct ice_hw *hw, u8 sensor, u8 format, struct ice_aqc_get_sensor_reading_resp *data, struct ice_sq_cd *cd) { struct ice_aqc_get_sensor_reading *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; if (!data) return ICE_ERR_PARAM; @@ -5417,10 +5580,10 @@ static bool ice_is_main_vsi(struct ice_hw *hw, u16 
vsi_handle) * * Initializes required config data for VSI, FD, ACL, and RSS before replay. */ -enum ice_status +int ice_replay_pre_init(struct ice_hw *hw, struct ice_switch_info *sw) { - enum ice_status status; + int status; u8 i; /* Delete old entries from replay filter list head if there is any */ @@ -5449,11 +5612,11 @@ ice_replay_pre_init(struct ice_hw *hw, struct ice_switch_info *sw) * Restore all VSI configuration after reset. It is required to call this * function with main VSI first. */ -enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle) +int ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle) { struct ice_switch_info *sw = hw->switch_info; struct ice_port_info *pi = hw->port_info; - enum ice_status status; + int status; if (!ice_is_vsi_valid(hw, vsi_handle)) return ICE_ERR_PARAM; @@ -5623,19 +5786,19 @@ ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded, * * This function queries HW element information */ -enum ice_status +int ice_sched_query_elem(struct ice_hw *hw, u32 node_teid, struct ice_aqc_txsched_elem_data *buf) { u16 buf_size, num_elem_ret = 0; - enum ice_status status; + int status; buf_size = sizeof(*buf); ice_memset(buf, 0, buf_size, ICE_NONDMA_MEM); buf->node_teid = CPU_TO_LE32(node_teid); status = ice_aq_query_sched_elems(hw, 1, buf, buf_size, &num_elem_ret, NULL); - if (status != ICE_SUCCESS || num_elem_ret != 1) + if (status || num_elem_ret != 1) ice_debug(hw, ICE_DBG_SCHED, "query element failed\n"); return status; } @@ -5652,8 +5815,7 @@ enum ice_fw_modes ice_get_fw_mode(struct ice_hw *hw) u32 fw_mode; /* check the current FW mode */ - fw_mode = rd32(hw, GL_MNG_FWSM) & GL_MNG_FWSM_FW_MODES_M; - + fw_mode = rd32(hw, GL_MNG_FWSM) & GL_MNG_FWSM_FW_MODES_M_BY_MAC(hw); if (fw_mode & ICE_FW_MODE_DBG_M) return ICE_FW_MODE_DBG; else if (fw_mode & ICE_FW_MODE_REC_M) @@ -5677,15 +5839,15 @@ enum ice_fw_modes ice_get_fw_mode(struct ice_hw *hw) * * Read I2C (0x06E2) */ -enum ice_status +int ice_aq_read_i2c(struct 
ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, u16 bus_addr, __le16 addr, u8 params, u8 *data, struct ice_sq_cd *cd) { struct ice_aq_desc desc = { 0 }; struct ice_aqc_i2c *cmd; - enum ice_status status; u8 data_size; + int status; ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_read_i2c); cmd = &desc.params.read_write_i2c; @@ -5727,9 +5889,9 @@ ice_aq_read_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, * * Write I2C (0x06E3) */ -enum ice_status +int ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, - u16 bus_addr, __le16 addr, u8 params, u8 *data, + u16 bus_addr, __le16 addr, u8 params, const u8 *data, struct ice_sq_cd *cd) { struct ice_aq_desc desc = { 0 }; @@ -5773,7 +5935,7 @@ ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, * a single PF will write the parameter value, while all other PFs will only * read it. */ -enum ice_status +int ice_aq_set_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx, u32 value, struct ice_sq_cd *cd) { @@ -5806,13 +5968,13 @@ ice_aq_set_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx, * Note that firmware provides no synchronization or locking. It is expected * that only a single PF will write a given parameter. 
*/ -enum ice_status +int ice_aq_get_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx, u32 *value, struct ice_sq_cd *cd) { struct ice_aqc_driver_shared_params *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; if (idx >= ICE_AQC_DRIVER_PARAM_MAX) return ICE_ERR_OUT_OF_RANGE; @@ -5830,7 +5992,7 @@ ice_aq_get_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx, *value = LE32_TO_CPU(cmd->param_val); - return ICE_SUCCESS; + return 0; } /** @@ -5843,7 +6005,7 @@ ice_aq_get_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx, * * Sends 0x06EC AQ command to set the GPIO pin state that's part of the topology */ -enum ice_status +int ice_aq_set_gpio(struct ice_hw *hw, u16 gpio_ctrl_handle, u8 pin_idx, bool value, struct ice_sq_cd *cd) { @@ -5870,13 +6032,13 @@ ice_aq_set_gpio(struct ice_hw *hw, u16 gpio_ctrl_handle, u8 pin_idx, bool value, * Sends 0x06ED AQ command to get the value of a GPIO signal which is part of * the topology */ -enum ice_status +int ice_aq_get_gpio(struct ice_hw *hw, u16 gpio_ctrl_handle, u8 pin_idx, bool *value, struct ice_sq_cd *cd) { struct ice_aqc_gpio *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_gpio); cmd = &desc.params.read_write_gpio; @@ -5888,7 +6050,7 @@ ice_aq_get_gpio(struct ice_hw *hw, u16 gpio_ctrl_handle, u8 pin_idx, return status; *value = !!cmd->gpio_val; - return ICE_SUCCESS; + return 0; } /** @@ -5936,8 +6098,6 @@ static bool ice_is_fw_min_ver(struct ice_hw *hw, u8 branch, u8 maj, u8 min, if (hw->fw_min_ver == min && hw->fw_patch >= patch) return true; } - } else if (hw->fw_branch > branch) { - return true; } return false; @@ -5963,13 +6123,13 @@ bool ice_fw_supports_link_override(struct ice_hw *hw) * * Gets the link default override for a port */ -enum ice_status +int ice_get_link_default_override(struct ice_link_default_override_tlv *ldo, struct ice_port_info *pi) { u16 i, tlv, tlv_len, tlv_start, 
buf, offset; struct ice_hw *hw = pi->hw; - enum ice_status status; + int status; status = ice_get_pfa_module_tlv(hw, &tlv, &tlv_len, ICE_SR_LINK_DEFAULT_OVERRIDE_PTR); @@ -6044,6 +6204,115 @@ bool ice_is_phy_caps_an_enabled(struct ice_aqc_get_phy_caps_data *caps) return false; } +/** + * ice_aq_get_port_options + * @hw: pointer to the hw struct + * @options: buffer for the resultant port options + * @option_count: input - size of the buffer in port options structures, + * output - number of returned port options + * @lport: logical port to call the command with (optional) + * @lport_valid: when false, FW uses port owned by the PF instead of lport, + * when PF owns more than 1 port it must be true + * @active_option_idx: index of active port option in returned buffer + * @active_option_valid: active option in returned buffer is valid + * @pending_option_idx: index of pending port option in returned buffer + * @pending_option_valid: pending option in returned buffer is valid + * + * Calls Get Port Options AQC (0x06ea) and verifies result. 
+ */ +int +ice_aq_get_port_options(struct ice_hw *hw, + struct ice_aqc_get_port_options_elem *options, + u8 *option_count, u8 lport, bool lport_valid, + u8 *active_option_idx, bool *active_option_valid, + u8 *pending_option_idx, bool *pending_option_valid) +{ + struct ice_aqc_get_port_options *cmd; + struct ice_aq_desc desc; + int status; + u8 i; + + /* options buffer shall be able to hold max returned options */ + if (*option_count < ICE_AQC_PORT_OPT_COUNT_M) + return ICE_ERR_PARAM; + + cmd = &desc.params.get_port_options; + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_port_options); + + cmd->lport_num = lport; + cmd->lport_num_valid = lport_valid; + + status = ice_aq_send_cmd(hw, &desc, options, + *option_count * sizeof(*options), NULL); + if (status) + return status; + + /* verify direct FW response & set output parameters */ + *option_count = cmd->port_options_count & ICE_AQC_PORT_OPT_COUNT_M; + ice_debug(hw, ICE_DBG_PHY, "options: %x\n", *option_count); + *active_option_valid = cmd->port_options & ICE_AQC_PORT_OPT_VALID; + if (*active_option_valid) { + *active_option_idx = cmd->port_options & + ICE_AQC_PORT_OPT_ACTIVE_M; + if (*active_option_idx > (*option_count - 1)) + return ICE_ERR_OUT_OF_RANGE; + ice_debug(hw, ICE_DBG_PHY, "active idx: %x\n", + *active_option_idx); + } + + *pending_option_valid = cmd->pending_port_option_status & + ICE_AQC_PENDING_PORT_OPT_VALID; + if (*pending_option_valid) { + *pending_option_idx = cmd->pending_port_option_status & + ICE_AQC_PENDING_PORT_OPT_IDX_M; + if (*pending_option_idx > (*option_count - 1)) + return ICE_ERR_OUT_OF_RANGE; + ice_debug(hw, ICE_DBG_PHY, "pending idx: %x\n", + *pending_option_idx); + } + + /* mask output options fields */ + for (i = 0; i < *option_count; i++) { + options[i].pmd &= ICE_AQC_PORT_OPT_PMD_COUNT_M; + options[i].max_lane_speed &= ICE_AQC_PORT_OPT_MAX_LANE_M; + ice_debug(hw, ICE_DBG_PHY, "pmds: %x max speed: %x\n", + options[i].pmd, options[i].max_lane_speed); + } + + return 0; +} + 
+/** + * ice_aq_set_port_option + * @hw: pointer to the hw struct + * @lport: logical port to call the command with + * @lport_valid: when false, FW uses port owned by the PF instead of lport, + * when PF owns more than 1 port it must be true + * @new_option: new port option to be written + * + * Calls Set Port Options AQC (0x06eb). + */ +int +ice_aq_set_port_option(struct ice_hw *hw, u8 lport, u8 lport_valid, + u8 new_option) +{ + struct ice_aqc_set_port_option *cmd; + struct ice_aq_desc desc; + + if (new_option >= ICE_AQC_PORT_OPT_COUNT_M) + return ICE_ERR_PARAM; + + cmd = &desc.params.set_port_option; + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_option); + + cmd->lport_num = lport; + + cmd->lport_num_valid = lport_valid; + cmd->selected_port_option = new_option; + + return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL); +} + /** * ice_aq_set_lldp_mib - Set the LLDP MIB * @hw: pointer to the HW struct @@ -6054,7 +6323,7 @@ bool ice_is_phy_caps_an_enabled(struct ice_aqc_get_phy_caps_data *caps) * * Set the LLDP MIB. 
(0x0A08) */ -enum ice_status +int ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size, struct ice_sq_cd *cd) { @@ -6097,7 +6366,7 @@ bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw) * @vsi_num: absolute HW index for VSI * @add: boolean for if adding or removing a filter */ -enum ice_status +int ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add) { struct ice_aqc_lldp_filter_ctrl *cmd; @@ -6121,7 +6390,7 @@ ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add) * ice_lldp_execute_pending_mib - execute LLDP pending MIB request * @hw: pointer to HW struct */ -enum ice_status ice_lldp_execute_pending_mib(struct ice_hw *hw) +int ice_lldp_execute_pending_mib(struct ice_hw *hw) { struct ice_aq_desc desc; @@ -6143,6 +6412,42 @@ bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw) ICE_FW_API_REPORT_DFLT_CFG_PATCH); } +/* each of the indexes into the following array match the speed of a return + * value from the list of AQ returned speeds like the range: + * ICE_AQ_LINK_SPEED_10MB .. 
ICE_AQ_LINK_SPEED_100GB excluding + * ICE_AQ_LINK_SPEED_UNKNOWN which is BIT(15) The array is defined as 15 + * elements long because the link_speed returned by the firmware is a 16 bit + * value, but is indexed by [fls(speed) - 1] + */ +static const u32 ice_aq_to_link_speed[] = { + ICE_LINK_SPEED_10MBPS, /* BIT(0) */ + ICE_LINK_SPEED_100MBPS, + ICE_LINK_SPEED_1000MBPS, + ICE_LINK_SPEED_2500MBPS, + ICE_LINK_SPEED_5000MBPS, + ICE_LINK_SPEED_10000MBPS, + ICE_LINK_SPEED_20000MBPS, + ICE_LINK_SPEED_25000MBPS, + ICE_LINK_SPEED_40000MBPS, + ICE_LINK_SPEED_50000MBPS, + ICE_LINK_SPEED_100000MBPS, /* BIT(10) */ + ICE_LINK_SPEED_200000MBPS, +}; + +/** + * ice_get_link_speed - get integer speed from table + * @index: array index from fls(aq speed) - 1 + * + * Returns: u32 value containing integer speed + */ +u32 ice_get_link_speed(u16 index) +{ + if (index >= ARRAY_SIZE(ice_aq_to_link_speed)) + return ICE_LINK_SPEED_UNKNOWN; + + return ice_aq_to_link_speed[index]; +} + /** * ice_fw_supports_fec_dis_auto * @hw: pointer to the hardware structure @@ -6151,11 +6456,16 @@ bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw) */ bool ice_fw_supports_fec_dis_auto(struct ice_hw *hw) { - return ice_is_fw_min_ver(hw, ICE_FW_FEC_DIS_AUTO_BRANCH, + return ice_is_fw_min_ver(hw, ICE_FW_VER_BRANCH_E810, ICE_FW_FEC_DIS_AUTO_MAJ, ICE_FW_FEC_DIS_AUTO_MIN, - ICE_FW_FEC_DIS_AUTO_PATCH); + ICE_FW_FEC_DIS_AUTO_PATCH) || + ice_is_fw_min_ver(hw, ICE_FW_VER_BRANCH_E82X, + ICE_FW_FEC_DIS_AUTO_MAJ_E82X, + ICE_FW_FEC_DIS_AUTO_MIN_E82X, + ICE_FW_FEC_DIS_AUTO_PATCH_E82X); } + /** * ice_is_fw_auto_drop_supported * @hw: pointer to the hardware structure @@ -6169,3 +6479,4 @@ bool ice_is_fw_auto_drop_supported(struct ice_hw *hw) return true; return false; } + diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h index 0f1be917db..16cb0b12c1 100644 --- a/drivers/net/ice/base/ice_common.h +++ b/drivers/net/ice/base/ice_common.h @@ -15,6 +15,9 @@ #define ICE_SQ_SEND_DELAY_TIME_MS 
10 #define ICE_SQ_SEND_MAX_EXECUTE 3 +#define LOOPBACK_MODE_NO 0 +#define LOOPBACK_MODE_HIGH 2 + enum ice_fw_modes { ICE_FW_MODE_NORMAL, ICE_FW_MODE_DBG, @@ -22,55 +25,53 @@ enum ice_fw_modes { ICE_FW_MODE_ROLLBACK }; -enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw); +int ice_init_fltr_mgmt_struct(struct ice_hw *hw); void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw); void ice_set_umac_shared(struct ice_hw *hw); -enum ice_status ice_init_hw(struct ice_hw *hw); +int ice_init_hw(struct ice_hw *hw); void ice_deinit_hw(struct ice_hw *hw); -enum ice_status ice_check_reset(struct ice_hw *hw); -enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req); - -enum ice_status ice_create_all_ctrlq(struct ice_hw *hw); -enum ice_status ice_init_all_ctrlq(struct ice_hw *hw); +int ice_check_reset(struct ice_hw *hw); +int ice_reset(struct ice_hw *hw, enum ice_reset_req req); +int ice_create_all_ctrlq(struct ice_hw *hw); +int ice_init_all_ctrlq(struct ice_hw *hw); void ice_shutdown_all_ctrlq(struct ice_hw *hw, bool unloading); void ice_destroy_all_ctrlq(struct ice_hw *hw); -enum ice_status +int ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq, struct ice_rq_event_info *e, u16 *pending); -enum ice_status +int ice_get_link_status(struct ice_port_info *pi, bool *link_up); -enum ice_status ice_update_link_info(struct ice_port_info *pi); -enum ice_status +int ice_update_link_info(struct ice_port_info *pi); +int ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res, enum ice_aq_res_access_type access, u32 timeout); void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res); -enum ice_status +int ice_alloc_hw_res(struct ice_hw *hw, u16 type, u16 num, bool btm, u16 *res); -enum ice_status +int ice_free_hw_res(struct ice_hw *hw, u16 type, u16 num, u16 *res); -enum ice_status +int ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries, struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size, enum ice_adminq_opc opc, struct ice_sq_cd *cd); 
-enum ice_status +int ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq, struct ice_aq_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd); -enum ice_status +int ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq, struct ice_aq_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd); void ice_clear_pxe_mode(struct ice_hw *hw); - -enum ice_status ice_get_caps(struct ice_hw *hw); +int ice_get_caps(struct ice_hw *hw); void ice_set_safe_mode_caps(struct ice_hw *hw); -enum ice_status -ice_aq_get_internal_data(struct ice_hw *hw, u8 cluster_id, u16 table_id, +int +ice_aq_get_internal_data(struct ice_hw *hw, u16 cluster_id, u16 table_id, u32 start, void *buf, u16 buf_size, u16 *ret_buf_size, - u16 *ret_next_table, u32 *ret_next_index, - struct ice_sq_cd *cd); + u16 *ret_next_cluster, u16 *ret_next_table, + u32 *ret_next_index, struct ice_sq_cd *cd); /* Define a macro that will align a pointer to point to the next memory address * that falls on the given power of 2 (i.e., 2, 4, 8, 16, 32, 64...). For @@ -83,41 +84,50 @@ ice_aq_get_internal_data(struct ice_hw *hw, u8 cluster_id, u16 table_id, */ #define ICE_ALIGN(ptr, align) (((ptr) + ((align) - 1)) & ~((align) - 1)) -enum ice_status +/* Define a macro for initializing array using indexes. Due to limitation + * of MSVC compiler it is necessary to allow other projects to replace + * that macro and strip the index from initialization. + * Linux driver is using coccinelle to maintain source sync with upstream + * and is not requiring this macro. 
+ */ +#define ice_arr_elem_idx(idx, val) [(idx)] = (val) + +int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx, u32 rxq_index); -enum ice_status +int ice_read_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx, u32 rxq_index); -enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index); -enum ice_status +int ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index); +int ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index); -enum ice_status +int ice_write_tx_cmpltnq_ctx(struct ice_hw *hw, struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx, u32 tx_cmpltnq_index); -enum ice_status +int ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index); -enum ice_status +int ice_write_tx_drbell_q_ctx(struct ice_hw *hw, struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx, u32 tx_drbell_q_index); -enum ice_status +int ice_lut_size_to_type(int lut_size); +int ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params); -enum ice_status +int ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_params); -enum ice_status +int ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle, struct ice_aqc_get_set_rss_keys *keys); -enum ice_status +int ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle, struct ice_aqc_get_set_rss_keys *keys); -enum ice_status +int ice_aq_add_lan_txq(struct ice_hw *hw, u8 count, struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_move_recfg_lan_txq(struct ice_hw *hw, u8 num_qs, bool is_move, bool is_tc_change, bool subseq_call, bool flush_pipe, u8 timeout, u32 *blocked_cgds, @@ -125,62 +135,62 @@ ice_aq_move_recfg_lan_txq(struct ice_hw *hw, u8 num_qs, bool is_move, u8 *txqs_moved, struct ice_sq_cd *cd); bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq); -enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading); +int ice_aq_q_shutdown(struct ice_hw *hw, bool unloading); void 
ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode); extern const struct ice_ctx_ele ice_tlan_ctx_info[]; -enum ice_status +int ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info); -enum ice_status +int ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info); -enum ice_status +int ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd); -enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd); +int ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_send_driver_ver(struct ice_hw *hw, struct ice_driver_ver *dv, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_set_port_params(struct ice_port_info *pi, u16 bad_frame_vsi, bool save_bad_pac, bool pad_short_pac, bool double_vlan, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode, struct ice_aqc_get_phy_caps_data *caps, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_netlist_node_pin(struct ice_hw *hw, struct ice_aqc_get_link_topo_pin *cmd, u16 *node_handle); -enum ice_status +int ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd, u8 *node_part_number, u16 *node_handle); -enum ice_status +int ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx, u8 node_part_number, u16 *node_handle); +bool ice_is_gps_in_netlist(struct ice_hw *hw); void ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high, u16 link_speeds_bitmap); -enum ice_status +int ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags, struct ice_sq_cd *cd); -enum ice_status ice_clear_pf_cfg(struct ice_hw *hw); -enum ice_status +int ice_clear_pf_cfg(struct ice_hw *hw); +int ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd); bool ice_fw_supports_link_override(struct ice_hw *hw); 
bool ice_fw_supports_fec_dis_auto(struct ice_hw *hw); -enum ice_status +int ice_get_link_default_override(struct ice_link_default_override_tlv *ldo, struct ice_port_info *pi); bool ice_is_phy_caps_an_enabled(struct ice_aqc_get_phy_caps_data *caps); - enum ice_fc_mode ice_caps_to_fc_mode(u8 caps); enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options); -enum ice_status +int ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update); bool @@ -190,66 +200,79 @@ void ice_copy_phy_caps_to_cfg(struct ice_port_info *pi, struct ice_aqc_get_phy_caps_data *caps, struct ice_aqc_set_phy_cfg_data *cfg); -enum ice_status +int ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec); -enum ice_status +int ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, bool auto_drop, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse, struct ice_link_status *link, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr, u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length, bool write, struct ice_sq_cd *cd); +u32 ice_get_link_speed(u16 index); -enum ice_status +int ice_aq_prog_topo_dev_nvm(struct ice_hw *hw, struct ice_aqc_link_topo_params *topo_params, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_read_topo_dev_nvm(struct ice_hw *hw, struct ice_aqc_link_topo_params *topo_params, u32 start_address, u8 *buf, u8 buf_size, struct ice_sq_cd *cd); -enum ice_status +int 
+ice_aq_get_port_options(struct ice_hw *hw, + struct ice_aqc_get_port_options_elem *options, + u8 *option_count, u8 lport, bool lport_valid, + u8 *active_option_idx, bool *active_option_valid, + u8 *pending_option_idx, bool *pending_option_valid); +int +ice_aq_set_port_option(struct ice_hw *hw, u8 lport, u8 lport_valid, + u8 new_option); +int ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues, u16 *q_handle, u16 *q_ids, u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num, struct ice_sq_cd *cd); -enum ice_status +int ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, u16 *max_lanqs); -enum ice_status +int ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, u8 num_qgrps, struct ice_aqc_add_tx_qgrp *buf, u16 buf_size, struct ice_sq_cd *cd); -enum ice_status +int ice_replay_pre_init(struct ice_hw *hw, struct ice_switch_info *sw); -enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle); +int ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle); void ice_replay_post(struct ice_hw *hw); struct ice_q_ctx * ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle); -enum ice_status ice_sbq_rw_reg_lp(struct ice_hw *hw, - struct ice_sbq_msg_input *in, bool lock); +int ice_sbq_rw_reg_lp(struct ice_hw *hw, struct ice_sbq_msg_input *in, + u16 flag, bool lock); void ice_sbq_lock(struct ice_hw *hw); void ice_sbq_unlock(struct ice_hw *hw); -enum ice_status ice_sbq_rw_reg(struct ice_hw *hw, struct ice_sbq_msg_input *in); -enum ice_status +int ice_sbq_rw_reg(struct ice_hw *hw, struct ice_sbq_msg_input *in, u16 flag); +int +ice_aq_cfg_cgu_err(struct ice_hw *hw, bool ena_event_report, bool ena_err_report, + struct ice_sq_cd *cd); +int ice_aq_get_sensor_reading(struct ice_hw *hw, u8 sensor, u8 format, struct ice_aqc_get_sensor_reading_resp *data, struct ice_sq_cd *cd); @@ -267,37 +290,39 @@ void ice_print_rollback_msg(struct ice_hw *hw); bool ice_is_generic_mac(struct ice_hw *hw); 
bool ice_is_e810(struct ice_hw *hw); bool ice_is_e810t(struct ice_hw *hw); +bool ice_is_e830(struct ice_hw *hw); +bool ice_is_e825c(struct ice_hw *hw); bool ice_is_e823(struct ice_hw *hw); -enum ice_status +int ice_sched_query_elem(struct ice_hw *hw, u32 node_teid, struct ice_aqc_txsched_elem_data *buf); -enum ice_status +int ice_aq_set_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx, u32 value, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx, u32 *value, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_set_gpio(struct ice_hw *hw, u16 gpio_ctrl_handle, u8 pin_idx, bool value, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_gpio(struct ice_hw *hw, u16 gpio_ctrl_handle, u8 pin_idx, bool *value, struct ice_sq_cd *cd); bool ice_is_100m_speed_supported(struct ice_hw *hw); -enum ice_status +int ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size, struct ice_sq_cd *cd); bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw); -enum ice_status +int ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add); -enum ice_status ice_lldp_execute_pending_mib(struct ice_hw *hw); -enum ice_status +int ice_lldp_execute_pending_mib(struct ice_hw *hw); +int ice_aq_read_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, u16 bus_addr, __le16 addr, u8 params, u8 *data, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, - u16 bus_addr, __le16 addr, u8 params, u8 *data, + u16 bus_addr, __le16 addr, u8 params, const u8 *data, struct ice_sq_cd *cd); bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw); /* AQ API version for FW auto drop reports */ diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c index c34407b48c..65c7f7579a 100644 --- a/drivers/net/ice/base/ice_controlq.c +++ b/drivers/net/ice/base/ice_controlq.c @@ -92,7 +92,7 @@ bool 
ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq) * @hw: pointer to the hardware structure * @cq: pointer to the specific Control queue */ -static enum ice_status +static int ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq) { size_t size = cq->num_sq_entries * sizeof(struct ice_aq_desc); @@ -101,14 +101,7 @@ ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq) if (!cq->sq.desc_buf.va) return ICE_ERR_NO_MEMORY; - cq->sq.cmd_buf = ice_calloc(hw, cq->num_sq_entries, - sizeof(struct ice_sq_cd)); - if (!cq->sq.cmd_buf) { - ice_free_dma_mem(hw, &cq->sq.desc_buf); - return ICE_ERR_NO_MEMORY; - } - - return ICE_SUCCESS; + return 0; } /** @@ -116,7 +109,7 @@ ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq) * @hw: pointer to the hardware structure * @cq: pointer to the specific Control queue */ -static enum ice_status +static int ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq) { size_t size = cq->num_rq_entries * sizeof(struct ice_aq_desc); @@ -124,7 +117,7 @@ ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq) cq->rq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->rq.desc_buf, size); if (!cq->rq.desc_buf.va) return ICE_ERR_NO_MEMORY; - return ICE_SUCCESS; + return 0; } /** @@ -145,7 +138,7 @@ static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring) * @hw: pointer to the hardware structure * @cq: pointer to the specific Control queue */ -static enum ice_status +static int ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) { int i; @@ -176,7 +169,7 @@ ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) if (cq->rq_buf_size > ICE_AQ_LG_BUF) desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB); desc->opcode = 0; - /* This is in accordance with Admin queue design, there is no + /* This is in accordance with control queue design, there is no * register for buffer size configuration */ desc->datalen = CPU_TO_LE16(bi->size); @@ -190,7 +183,7 
@@ ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) desc->params.generic.param0 = 0; desc->params.generic.param1 = 0; } - return ICE_SUCCESS; + return 0; unwind_alloc_rq_bufs: /* don't try to free the one that failed... */ @@ -209,7 +202,7 @@ ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) * @hw: pointer to the hardware structure * @cq: pointer to the specific Control queue */ -static enum ice_status +static int ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) { int i; @@ -230,7 +223,7 @@ ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) if (!bi->va) goto unwind_alloc_sq_bufs; } - return ICE_SUCCESS; + return 0; unwind_alloc_sq_bufs: /* don't try to free the one that failed... */ @@ -244,7 +237,7 @@ ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq) return ICE_ERR_NO_MEMORY; } -static enum ice_status +static int ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries) { /* Clear Head and Tail */ @@ -260,7 +253,7 @@ ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries) if (rd32(hw, ring->bal) != ICE_LO_DWORD(ring->desc_buf.pa)) return ICE_ERR_AQ_ERROR; - return ICE_SUCCESS; + return 0; } /** @@ -270,7 +263,7 @@ ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries) * * Configure base address and length registers for the transmit queue */ -static enum ice_status +static int ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq) { return ice_cfg_cq_regs(hw, &cq->sq, cq->num_sq_entries); @@ -283,10 +276,10 @@ ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq) * * Configure base address and length registers for the receive (event queue) */ -static enum ice_status +static int ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq) { - enum ice_status status; + int status; status = ice_cfg_cq_regs(hw, &cq->rq, cq->num_rq_entries); if (status) @@ -295,7 +288,7 @@ ice_cfg_rq_regs(struct ice_hw *hw, 
struct ice_ctl_q_info *cq) /* Update tail in the HW to post pre-allocated buffers */ wr32(hw, cq->rq.tail, (u32)(cq->num_rq_entries - 1)); - return ICE_SUCCESS; + return 0; } #define ICE_FREE_CQ_BUFS(hw, qi, ring) \ @@ -309,9 +302,6 @@ do { \ ice_free_dma_mem((hw), \ &(qi)->ring.r.ring##_bi[i]); \ } \ - /* free the buffer info list */ \ - if ((qi)->ring.cmd_buf) \ - ice_free(hw, (qi)->ring.cmd_buf); \ /* free DMA head */ \ ice_free(hw, (qi)->ring.dma_head); \ } while (0) @@ -330,9 +320,9 @@ do { \ * Do *NOT* hold the lock when calling this as the memory allocation routines * called are not going to be atomic context safe */ -static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) +static int ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) { - enum ice_status ret_code; + int ret_code; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -379,11 +369,11 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) } /** - * ice_init_rq - initialize ARQ + * ice_init_rq - initialize receive side of a control queue * @hw: pointer to the hardware structure * @cq: pointer to the specific Control queue * - * The main initialization routine for the Admin Receive (Event) Queue. + * The main initialization routine for Receive side of a control queue. 
* Prior to calling this function, the driver *MUST* set the following fields * in the cq->structure: * - cq->num_rq_entries @@ -392,9 +382,9 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) * Do *NOT* hold the lock when calling this as the memory allocation routines * called are not going to be atomic context safe */ -static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq) +static int ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq) { - enum ice_status ret_code; + int ret_code; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -441,16 +431,16 @@ static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq) } /** - * ice_shutdown_sq - shutdown the Control ATQ + * ice_shutdown_sq - shutdown the transmit side of a control queue * @hw: pointer to the hardware structure * @cq: pointer to the specific Control queue * * The main shutdown routine for the Control Transmit Queue */ -static enum ice_status +static int ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) { - enum ice_status ret_code = ICE_SUCCESS; + int ret_code = 0; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -461,7 +451,7 @@ ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) goto shutdown_sq_out; } - /* Stop firmware AdminQ processing */ + /* Stop processing of the control queue */ wr32(hw, cq->sq.head, 0); wr32(hw, cq->sq.tail, 0); wr32(hw, cq->sq.len, 0); @@ -480,7 +470,7 @@ ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) } /** - * ice_aq_ver_check - Check the reported AQ API version. + * ice_aq_ver_check - Check the reported AQ API version * @hw: pointer to the hardware structure * * Checks if the driver should load on a given AQ API version. 
@@ -489,24 +479,27 @@ ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) */ static bool ice_aq_ver_check(struct ice_hw *hw) { - if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) { + u8 exp_fw_api_ver_major = EXP_FW_API_VER_MAJOR_BY_MAC(hw); + u8 exp_fw_api_ver_minor = EXP_FW_API_VER_MINOR_BY_MAC(hw); + +if (hw->api_maj_ver > exp_fw_api_ver_major) { /* Major API version is newer than expected, don't load */ ice_warn(hw, "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n"); return false; - } else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) { - if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2)) + } else if (hw->api_maj_ver == exp_fw_api_ver_major) { + if (hw->api_min_ver > (exp_fw_api_ver_minor + 2)) ice_info(hw, "The driver for the device detected a newer version (%u.%u) of the NVM image than expected (%u.%u). Please install the most recent version of the network driver.\n", hw->api_maj_ver, hw->api_min_ver, - EXP_FW_API_VER_MAJOR, EXP_FW_API_VER_MINOR); - else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR) + exp_fw_api_ver_major, exp_fw_api_ver_minor); + else if ((hw->api_min_ver + 2) < exp_fw_api_ver_minor) ice_info(hw, "The driver for the device detected an older version (%u.%u) of the NVM image than expected (%u.%u). Please update the NVM image.\n", hw->api_maj_ver, hw->api_min_ver, - EXP_FW_API_VER_MAJOR, EXP_FW_API_VER_MINOR); + exp_fw_api_ver_major, exp_fw_api_ver_minor); } else { /* Major API version is older than expected, log a warning */ ice_info(hw, "The driver for the device detected an older version (%u.%u) of the NVM image than expected (%u.%u). 
Please update the NVM image.\n", hw->api_maj_ver, hw->api_min_ver, - EXP_FW_API_VER_MAJOR, EXP_FW_API_VER_MINOR); + exp_fw_api_ver_major, exp_fw_api_ver_minor); } return true; } @@ -518,10 +511,10 @@ static bool ice_aq_ver_check(struct ice_hw *hw) * * The main shutdown routine for the Control Receive Queue */ -static enum ice_status +static int ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq) { - enum ice_status ret_code = ICE_SUCCESS; + int ret_code = 0; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -555,10 +548,10 @@ ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq) * ice_init_check_adminq - Check version for Admin Queue to know if its alive * @hw: pointer to the hardware structure */ -static enum ice_status ice_init_check_adminq(struct ice_hw *hw) +static int ice_init_check_adminq(struct ice_hw *hw) { struct ice_ctl_q_info *cq = &hw->adminq; - enum ice_status status; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -571,7 +564,7 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw) goto init_ctrlq_free_rq; } - return ICE_SUCCESS; + return 0; init_ctrlq_free_rq: ice_shutdown_rq(hw, cq); @@ -593,10 +586,10 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw) * * NOTE: this function does not initialize the controlq locks */ -static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type) +static int ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type) { struct ice_ctl_q_info *cq; - enum ice_status ret_code; + int ret_code; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -638,7 +631,7 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type) goto init_ctrlq_free_sq; /* success! */ - return ICE_SUCCESS; + return 0; init_ctrlq_free_sq: ice_shutdown_sq(hw, cq); @@ -665,8 +658,9 @@ static bool ice_is_sbq_supported(struct ice_hw *hw) * * NOTE: this function does not destroy the control queue locks. 
*/ -static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type, - bool unloading) +static void +ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type, + bool unloading) { struct ice_ctl_q_info *cq; @@ -726,10 +720,10 @@ void ice_shutdown_all_ctrlq(struct ice_hw *hw, bool unloading) * * NOTE: this function does not initialize the controlq locks. */ -enum ice_status ice_init_all_ctrlq(struct ice_hw *hw) +int ice_init_all_ctrlq(struct ice_hw *hw) { - enum ice_status status; u32 retry = 0; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -791,7 +785,7 @@ static void ice_init_ctrlq_locks(struct ice_ctl_q_info *cq) * driver needs to re-initialize control queues at run time it should call * ice_init_all_ctrlq instead. */ -enum ice_status ice_create_all_ctrlq(struct ice_hw *hw) +int ice_create_all_ctrlq(struct ice_hw *hw) { ice_init_ctrlq_locks(&hw->adminq); if (ice_is_sbq_supported(hw)) @@ -834,7 +828,7 @@ void ice_destroy_all_ctrlq(struct ice_hw *hw) } /** - * ice_clean_sq - cleans Admin send queue (ATQ) + * ice_clean_sq - cleans send side of a control queue * @hw: pointer to the hardware structure * @cq: pointer to the specific Control queue * @@ -844,21 +838,17 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) { struct ice_ctl_q_ring *sq = &cq->sq; u16 ntc = sq->next_to_clean; - struct ice_sq_cd *details; struct ice_aq_desc *desc; desc = ICE_CTL_Q_DESC(*sq, ntc); - details = ICE_CTL_Q_DETAILS(*sq, ntc); while (rd32(hw, cq->sq.head) != ntc) { ice_debug(hw, ICE_DBG_AQ_MSG, "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head)); ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM); - ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM); ntc++; if (ntc == sq->count) ntc = 0; desc = ICE_CTL_Q_DESC(*sq, ntc); - details = ICE_CTL_Q_DETAILS(*sq, ntc); } sq->next_to_clean = ntc; @@ -866,16 +856,42 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq) return ICE_CTL_Q_DESC_UNUSED(sq); } +/** + * ice_ctl_q_str 
- Convert control queue type to string + * @qtype: the control queue type + * + * Returns: A string name for the given control queue type. + */ +static const char *ice_ctl_q_str(enum ice_ctl_q qtype) +{ + switch (qtype) { + case ICE_CTL_Q_UNKNOWN: + return "Unknown CQ"; + case ICE_CTL_Q_ADMIN: + return "AQ"; + case ICE_CTL_Q_MAILBOX: + return "MBXQ"; + case ICE_CTL_Q_SB: + return "SBQ"; + default: + return "Unrecognized CQ"; + } +} + /** * ice_debug_cq * @hw: pointer to the hardware structure + * @cq: pointer to the specific Control queue * @desc: pointer to control queue descriptor * @buf: pointer to command buffer * @buf_len: max length of buf + * @response: true if this is the writeback response * * Dumps debug log about control command with descriptor contents. */ -static void ice_debug_cq(struct ice_hw *hw, void *desc, void *buf, u16 buf_len) +static void +ice_debug_cq(struct ice_hw *hw, struct ice_ctl_q_info *cq, + void *desc, void *buf, u16 buf_len, bool response) { struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc; u16 datalen, flags; @@ -889,7 +905,8 @@ static void ice_debug_cq(struct ice_hw *hw, void *desc, void *buf, u16 buf_len) datalen = LE16_TO_CPU(cq_desc->datalen); flags = LE16_TO_CPU(cq_desc->flags); - ice_debug(hw, ICE_DBG_AQ_DESC, "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n", + ice_debug(hw, ICE_DBG_AQ_DESC, "%s %s: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n", + ice_ctl_q_str(cq->qtype), response ? 
"Response" : "Command", LE16_TO_CPU(cq_desc->opcode), flags, datalen, LE16_TO_CPU(cq_desc->retval)); ice_debug(hw, ICE_DBG_AQ_DESC, "\tcookie (h,l) 0x%08X 0x%08X\n", @@ -914,23 +931,23 @@ static void ice_debug_cq(struct ice_hw *hw, void *desc, void *buf, u16 buf_len) } /** - * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ) + * ice_sq_done - check if the last send on a control queue has completed * @hw: pointer to the HW struct * @cq: pointer to the specific Control queue * - * Returns true if the firmware has processed all descriptors on the - * admin send queue. Returns false if there are still requests pending. + * Returns: true if all the descriptors on the send side of a control queue + * are finished processing, false otherwise. */ static bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq) { - /* AQ designers suggest use of head for better + /* control queue designers suggest use of head for better * timing reliability than DD bit */ return rd32(hw, cq->sq.head) == cq->sq.next_to_use; } /** - * ice_sq_send_cmd_nolock - send command to Control Queue (ATQ) + * ice_sq_send_cmd_nolock - send command to a control queue * @hw: pointer to the HW struct * @cq: pointer to the specific Control queue * @desc: prefilled descriptor describing the command (non DMA mem) @@ -938,10 +955,11 @@ static bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq) * @buf_size: size of buffer for indirect commands (or 0 for direct commands) * @cd: pointer to command details structure * - * This is the main send command routine for the ATQ. It runs the queue, - * cleans the queue, etc. + * This is the main send command routine for a control queue. It prepares the + * command into a descriptor, bumps the send queue tail, waits for the command + * to complete, captures status and data for the command, etc. 
*/ -enum ice_status +int ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq, struct ice_aq_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd) @@ -949,9 +967,8 @@ ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq, struct ice_dma_mem *dma_buf = NULL; struct ice_aq_desc *desc_on_ring; bool cmd_completed = false; - enum ice_status status = ICE_SUCCESS; - struct ice_sq_cd *details; u32 total_delay = 0; + int status = 0; u16 retval = 0; u32 val = 0; @@ -993,12 +1010,6 @@ ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq, goto sq_send_command_error; } - details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use); - if (cd) - *details = *cd; - else - ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM); - /* Call clean and check queue available function to reclaim the * descriptors that were processed by FW/MBX; the function returns the * number of desc available. The clean function called here could be @@ -1035,19 +1046,24 @@ ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq, /* Debug desc and buffer */ ice_debug(hw, ICE_DBG_AQ_DESC, "ATQ: Control Send queue desc and buffer:\n"); - - ice_debug_cq(hw, (void *)desc_on_ring, buf, buf_size); + ice_debug_cq(hw, cq, (void *)desc_on_ring, buf, buf_size, false); (cq->sq.next_to_use)++; if (cq->sq.next_to_use == cq->sq.count) cq->sq.next_to_use = 0; wr32(hw, cq->sq.tail, cq->sq.next_to_use); + ice_flush(hw); + + /* Wait a short time before initial ice_sq_done() check, to allow + * hardware time for completion. 
+ */ + ice_usec_delay(5, false); do { if (ice_sq_done(hw, cq)) break; - ice_usec_delay(ICE_CTL_Q_SQ_CMD_USEC, false); + ice_usec_delay(10, false); total_delay++; } while (total_delay < cq->sq_cmd_timeout); @@ -1084,13 +1100,12 @@ ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq, } ice_debug(hw, ICE_DBG_AQ_MSG, "ATQ: desc and buffer writeback:\n"); - - ice_debug_cq(hw, (void *)desc, buf, buf_size); + ice_debug_cq(hw, cq, (void *)desc, buf, buf_size, true); /* save writeback AQ if requested */ - if (details->wb_desc) - ice_memcpy(details->wb_desc, desc_on_ring, - sizeof(*details->wb_desc), ICE_DMA_TO_NONDMA); + if (cd && cd->wb_desc) + ice_memcpy(cd->wb_desc, desc_on_ring, + sizeof(*cd->wb_desc), ICE_DMA_TO_NONDMA); /* update the error if time out occurred */ if (!cmd_completed) { @@ -1109,7 +1124,7 @@ ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq, } /** - * ice_sq_send_cmd - send command to Control Queue (ATQ) + * ice_sq_send_cmd - send command to a control queue * @hw: pointer to the HW struct * @cq: pointer to the specific Control queue * @desc: prefilled descriptor describing the command @@ -1117,15 +1132,16 @@ ice_sq_send_cmd_nolock(struct ice_hw *hw, struct ice_ctl_q_info *cq, * @buf_size: size of buffer for indirect commands (or 0 for direct commands) * @cd: pointer to command details structure * - * This is the main send command routine for the ATQ. It runs the queue, - * cleans the queue, etc. + * Main command for the transmit side of a control queue. It puts the command + * on the queue, bumps the tail, waits for processing of the command, captures + * command status and results, etc. 
*/ -enum ice_status +int ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq, struct ice_aq_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd) { - enum ice_status status = ICE_SUCCESS; + int status = 0; /* if reset is in progress return a soft error */ if (hw->reset_ongoing) @@ -1160,19 +1176,19 @@ void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode) * @e: event info from the receive descriptor, includes any buffers * @pending: number of events that could be left to process * - * This function cleans one Admin Receive Queue element and returns - * the contents through e. It can also return how many events are - * left to process through 'pending'. + * Clean one element from the receive side of a control queue. On return 'e' + * contains contents of the message, and 'pending' contains the number of + * events left to process. */ -enum ice_status +int ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq, struct ice_rq_event_info *e, u16 *pending) { u16 ntc = cq->rq.next_to_clean; enum ice_aq_err rq_last_status; - enum ice_status ret_code = ICE_SUCCESS; struct ice_aq_desc *desc; struct ice_dma_mem *bi; + int ret_code = 0; u16 desc_idx; u16 datalen; u16 flags; @@ -1218,8 +1234,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq, e->msg_len, ICE_DMA_TO_NONDMA); ice_debug(hw, ICE_DBG_AQ_DESC, "ARQ: desc and buffer:\n"); - - ice_debug_cq(hw, (void *)desc, e->msg_buf, cq->rq_buf_size); + ice_debug_cq(hw, cq, (void *)desc, e->msg_buf, cq->rq_buf_size, true); /* Restore the original datalen and buffer address in the desc, * FW updates datalen to indicate the event message size diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h index 986604ec3c..27849c6ac1 100644 --- a/drivers/net/ice/base/ice_controlq.h +++ b/drivers/net/ice/base/ice_controlq.h @@ -22,9 +22,25 @@ /* Defines that help manage the driver vs FW API checks. 
* Take a look at ice_aq_ver_check in ice_controlq.c for actual usage. */ -#define EXP_FW_API_VER_BRANCH 0x00 -#define EXP_FW_API_VER_MAJOR 0x01 -#define EXP_FW_API_VER_MINOR 0x05 +#define EXP_FW_API_VER_BRANCH_E830 0x00 +#define EXP_FW_API_VER_MAJOR_E830 0x01 +#define EXP_FW_API_VER_MINOR_E830 0x05 + +#define EXP_FW_API_VER_BRANCH_E810 0x00 +#define EXP_FW_API_VER_MAJOR_E810 0x01 +#define EXP_FW_API_VER_MINOR_E810 0x05 + +#define EXP_FW_API_VER_BRANCH_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? \ + EXP_FW_API_VER_BRANCH_E830 : \ + EXP_FW_API_VER_BRANCH_E810) + +#define EXP_FW_API_VER_MAJOR_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? \ + EXP_FW_API_VER_MAJOR_E830 : \ + EXP_FW_API_VER_MAJOR_E810) + +#define EXP_FW_API_VER_MINOR_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? \ + EXP_FW_API_VER_MINOR_E830 : \ + EXP_FW_API_VER_MINOR_E810) /* Different control queue types: These are mainly for SW consumption. */ enum ice_ctl_q { @@ -35,15 +51,13 @@ enum ice_ctl_q { }; /* Control Queue timeout settings - max delay 1s */ -#define ICE_CTL_Q_SQ_CMD_TIMEOUT 10000 /* Count 10000 times */ -#define ICE_CTL_Q_SQ_CMD_USEC 100 /* Check every 100usec */ +#define ICE_CTL_Q_SQ_CMD_TIMEOUT 100000 /* Count 100000 times */ #define ICE_CTL_Q_ADMIN_INIT_TIMEOUT 10 /* Count 10 times */ #define ICE_CTL_Q_ADMIN_INIT_MSEC 100 /* Check every 100msec */ struct ice_ctl_q_ring { void *dma_head; /* Virtual address to DMA head */ struct ice_dma_mem desc_buf; /* descriptor ring memory */ - void *cmd_buf; /* command buffer memory */ union { struct ice_dma_mem *sq_bi; @@ -73,8 +87,6 @@ struct ice_sq_cd { struct ice_aq_desc *wb_desc; }; -#define ICE_CTL_Q_DETAILS(R, i) (&(((struct ice_sq_cd *)((R).cmd_buf))[i])) - /* rq event information */ struct ice_rq_event_info { struct ice_aq_desc desc; diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c index cc4e28a702..2623603f2f 100644 --- a/drivers/net/ice/base/ice_dcb.c +++ b/drivers/net/ice/base/ice_dcb.c @@ -19,14 +19,14 @@ * * 
Requests the complete LLDP MIB (entire packet). (0x0A00) */ -enum ice_status +int ice_aq_get_lldp_mib(struct ice_hw *hw, u8 bridge_type, u8 mib_type, void *buf, u16 buf_size, u16 *local_len, u16 *remote_len, struct ice_sq_cd *cd) { struct ice_aqc_lldp_get_mib *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.lldp_get_mib; @@ -61,7 +61,7 @@ ice_aq_get_lldp_mib(struct ice_hw *hw, u8 bridge_type, u8 mib_type, void *buf, * Enable or Disable posting of an event on ARQ when LLDP MIB * associated with the interface changes (0x0A01) */ -enum ice_status +int ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update, struct ice_sq_cd *cd) { @@ -92,7 +92,7 @@ ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update, * * Stop or Shutdown the embedded LLDP Agent (0x0A05) */ -enum ice_status +int ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist, struct ice_sq_cd *cd) { @@ -120,7 +120,7 @@ ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist, * * Start the embedded LLDP Agent on all ports. 
(0x0A06) */ -enum ice_status +int ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd) { struct ice_aqc_lldp_start *cmd; @@ -601,11 +601,11 @@ ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg) * * Parse DCB configuration from the LLDPDU */ -enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg) +int ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg) { struct ice_lldp_org_tlv *tlv; - enum ice_status ret = ICE_SUCCESS; u16 offset = 0; + int ret = 0; u16 typelen; u16 type; u16 len; @@ -651,12 +651,12 @@ enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg) * * Query DCB configuration from the firmware */ -enum ice_status +int ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype, struct ice_dcbx_cfg *dcbcfg) { - enum ice_status ret; u8 *lldpmib; + int ret; /* Allocate the LLDPDU */ lldpmib = (u8 *)ice_malloc(hw, ICE_LLDPDU_SIZE); @@ -666,7 +666,7 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype, ret = ice_aq_get_lldp_mib(hw, bridgetype, mib_type, (void *)lldpmib, ICE_LLDPDU_SIZE, NULL, NULL, NULL); - if (ret == ICE_SUCCESS) + if (!ret) /* Parse LLDP MIB to get DCB configuration */ ret = ice_lldp_to_dcb_cfg(lldpmib, dcbcfg); @@ -686,17 +686,17 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype, * @cd: pointer to command details structure or NULL * * Start/Stop the embedded dcbx Agent. In case that this wrapper function - * returns ICE_SUCCESS, caller will need to check if FW returns back the same + * returns 0, caller will need to check if FW returns back the same * value as stated in dcbx_agent_status, and react accordingly. 
(0x0A09) */ -enum ice_status +int ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent, bool *dcbx_agent_status, struct ice_sq_cd *cd) { struct ice_aqc_lldp_stop_start_specific_agent *cmd; enum ice_adminq_opc opcode; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.lldp_agent_ctrl; @@ -711,7 +711,7 @@ ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent, *dcbx_agent_status = false; - if (status == ICE_SUCCESS && + if (!status && cmd->command == ICE_AQC_START_STOP_AGENT_START_DCBX) *dcbx_agent_status = true; @@ -726,7 +726,7 @@ ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent, * * Get CEE DCBX mode operational configuration from firmware (0x0A07) */ -enum ice_status +int ice_aq_get_cee_dcb_cfg(struct ice_hw *hw, struct ice_aqc_get_cee_dcb_cfg_resp *buff, struct ice_sq_cd *cd) @@ -747,12 +747,12 @@ ice_aq_get_cee_dcb_cfg(struct ice_hw *hw, * This AQ call configures the PFC mode to DSCP-based PFC mode or VLAN * -based PFC (0x0303) */ -enum ice_status +int ice_aq_set_pfc_mode(struct ice_hw *hw, u8 pfc_mode, struct ice_sq_cd *cd) { struct ice_aqc_set_query_pfc_mode *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; if (pfc_mode > ICE_AQC_PFC_DSCP_BASED_PFC) return ICE_ERR_PARAM; @@ -775,7 +775,7 @@ ice_aq_set_pfc_mode(struct ice_hw *hw, u8 pfc_mode, struct ice_sq_cd *cd) if (cmd->pfc_mode != pfc_mode) return ICE_ERR_NOT_SUPPORTED; - return ICE_SUCCESS; + return 0; } /** @@ -906,11 +906,11 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg, * * Get IEEE or CEE mode DCB configuration from the Firmware */ -STATIC enum ice_status +STATIC int ice_get_ieee_or_cee_dcb_cfg(struct ice_port_info *pi, u8 dcbx_mode) { struct ice_dcbx_cfg *dcbx_cfg = NULL; - enum ice_status ret; + int ret; if (!pi) return ICE_ERR_PARAM; @@ -934,7 +934,7 @@ ice_get_ieee_or_cee_dcb_cfg(struct ice_port_info *pi, u8 dcbx_mode) ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, dcbx_cfg); /* Don't treat ENOENT as 
an error for Remote MIBs */ if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) - ret = ICE_SUCCESS; + ret = 0; out: return ret; @@ -946,17 +946,17 @@ ice_get_ieee_or_cee_dcb_cfg(struct ice_port_info *pi, u8 dcbx_mode) * * Get DCB configuration from the Firmware */ -enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi) +int ice_get_dcb_cfg(struct ice_port_info *pi) { struct ice_aqc_get_cee_dcb_cfg_resp cee_cfg; struct ice_dcbx_cfg *dcbx_cfg; - enum ice_status ret; + int ret; if (!pi) return ICE_ERR_PARAM; ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL); - if (ret == ICE_SUCCESS) { + if (!ret) { /* CEE mode */ ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE); ice_cee_to_dcb_cfg(&cee_cfg, pi); @@ -1014,10 +1014,10 @@ void ice_get_dcb_cfg_from_mib_change(struct ice_port_info *pi, * * Update DCB configuration from the Firmware */ -enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change) +int ice_init_dcb(struct ice_hw *hw, bool enable_mib_change) { struct ice_qos_cfg *qos_cfg = &hw->port_info->qos_cfg; - enum ice_status ret = ICE_SUCCESS; + int ret = 0; if (!hw->func_caps.common_cap.dcb) return ICE_ERR_NOT_SUPPORTED; @@ -1056,10 +1056,10 @@ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change) * * Configure (disable/enable) MIB */ -enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib) +int ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib) { struct ice_qos_cfg *qos_cfg = &hw->port_info->qos_cfg; - enum ice_status ret; + int ret; if (!hw->func_caps.common_cap.dcb) return ICE_ERR_NOT_SUPPORTED; @@ -1370,7 +1370,7 @@ ice_add_dscp_tc_bw_tlv(struct ice_lldp_org_tlv *tlv, ICE_DSCP_SUBTYPE_TCBW); tlv->ouisubtype = HTONL(ouisubtype); - /* First Octet after subtype + /* First Octect after subtype * ---------------------------- * | RSV | CBS | RSV | Max TCs | * | 1b | 1b | 3b | 3b | @@ -1508,13 +1508,13 @@ void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg) * * Set DCB 
configuration to the Firmware */ -enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi) +int ice_set_dcb_cfg(struct ice_port_info *pi) { u8 mib_type, *lldpmib = NULL; struct ice_dcbx_cfg *dcbcfg; - enum ice_status ret; struct ice_hw *hw; u16 miblen; + int ret; if (!pi) return ICE_ERR_PARAM; @@ -1550,21 +1550,20 @@ enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi) * * query current port ETS configuration */ -enum ice_status +int ice_aq_query_port_ets(struct ice_port_info *pi, struct ice_aqc_port_ets_elem *buf, u16 buf_size, struct ice_sq_cd *cd) { struct ice_aqc_query_port_ets *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; - if (!pi) + if (!pi || !pi->root) return ICE_ERR_PARAM; cmd = &desc.params.port_ets; ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_port_ets); - if (pi->root) - cmd->port_teid = pi->root->info.node_teid; + cmd->port_teid = pi->root->info.node_teid; status = ice_aq_send_cmd(pi->hw, &desc, buf, buf_size, cd); return status; @@ -1577,14 +1576,14 @@ ice_aq_query_port_ets(struct ice_port_info *pi, * * update the SW DB with the new TC changes */ -enum ice_status +int ice_update_port_tc_tree_cfg(struct ice_port_info *pi, struct ice_aqc_port_ets_elem *buf) { struct ice_sched_node *node, *tc_node; struct ice_aqc_txsched_elem_data elem; - enum ice_status status = ICE_SUCCESS; u32 teid1, teid2; + int status = 0; u8 i, j; if (!pi) @@ -1645,12 +1644,12 @@ ice_update_port_tc_tree_cfg(struct ice_port_info *pi, * query current port ETS configuration and update the * SW DB with the TC changes */ -enum ice_status +int ice_query_port_ets(struct ice_port_info *pi, struct ice_aqc_port_ets_elem *buf, u16 buf_size, struct ice_sq_cd *cd) { - enum ice_status status; + int status; ice_acquire_lock(&pi->sched_lock); status = ice_aq_query_port_ets(pi, buf, buf_size, cd); diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h index bae033a460..c2c48ae8bb 100644 --- a/drivers/net/ice/base/ice_dcb.h +++ 
b/drivers/net/ice/base/ice_dcb.h @@ -188,48 +188,48 @@ struct ice_dcbx_variables { u32 deftsaassignment; }; -enum ice_status +int ice_aq_get_lldp_mib(struct ice_hw *hw, u8 bridge_type, u8 mib_type, void *buf, u16 buf_size, u16 *local_len, u16 *remote_len, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_cee_dcb_cfg(struct ice_hw *hw, struct ice_aqc_get_cee_dcb_cfg_resp *buff, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_set_pfc_mode(struct ice_hw *hw, u8 pfc_mode, struct ice_sq_cd *cd); -enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg); +int ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg); u8 ice_get_dcbx_status(struct ice_hw *hw); -enum ice_status +int ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype, struct ice_dcbx_cfg *dcbcfg); -enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi); -enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi); +int ice_get_dcb_cfg(struct ice_port_info *pi); +int ice_set_dcb_cfg(struct ice_port_info *pi); void ice_get_dcb_cfg_from_mib_change(struct ice_port_info *pi, struct ice_rq_event_info *event); -enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change); +int ice_init_dcb(struct ice_hw *hw, bool enable_mib_change); void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg); -enum ice_status +int ice_query_port_ets(struct ice_port_info *pi, struct ice_aqc_port_ets_elem *buf, u16 buf_size, struct ice_sq_cd *cmd_details); -enum ice_status +int ice_aq_query_port_ets(struct ice_port_info *pi, struct ice_aqc_port_ets_elem *buf, u16 buf_size, struct ice_sq_cd *cd); -enum ice_status +int ice_update_port_tc_tree_cfg(struct ice_port_info *pi, struct ice_aqc_port_ets_elem *buf); -enum ice_status +int ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd); -enum ice_status +int 
ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent, bool *dcbx_agent_status, struct ice_sq_cd *cd); -enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib); -enum ice_status +int ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib); +int ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update, struct ice_sq_cd *cd); #endif /* _ICE_DCB_H_ */ diff --git a/drivers/net/ice/base/ice_ddp.c b/drivers/net/ice/base/ice_ddp.c index ffcd5a9394..5e7f154810 100644 --- a/drivers/net/ice/base/ice_ddp.c +++ b/drivers/net/ice/base/ice_ddp.c @@ -19,14 +19,14 @@ * * Download Package (0x0C40) */ -static enum ice_status +static int ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 buf_size, bool last_buf, u32 *error_offset, u32 *error_info, struct ice_sq_cd *cd) { struct ice_aqc_download_pkg *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; if (error_offset) *error_offset = 0; @@ -64,7 +64,7 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, * * Upload Section (0x0C41) */ -enum ice_status +int ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 buf_size, struct ice_sq_cd *cd) { @@ -88,14 +88,14 @@ ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, * * Update Package (0x0C42) */ -static enum ice_status +static int ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 buf_size, bool last_buf, u32 *error_offset, u32 *error_info, struct ice_sq_cd *cd) { struct ice_aqc_download_pkg *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; if (error_offset) *error_offset = 0; @@ -228,10 +228,10 @@ ice_is_signing_seg_type_at_idx(struct ice_pkg_hdr *pkg_hdr, u32 idx, * @bufs: pointer to an array of buffers * @count: the number of buffers in the array */ -enum ice_status +int ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count) { - enum ice_status status = ICE_SUCCESS; + int status = 0; u32 i; for (i = 
0; i < count; i++) { @@ -260,10 +260,10 @@ ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count) * * Obtains change lock and updates package. */ -enum ice_status +int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) { - enum ice_status status; + int status; status = ice_acquire_change_lock(hw, ICE_RES_WRITE); if (status) @@ -367,8 +367,8 @@ ice_dwnld_cfg_bufs_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 start, return ICE_DDP_PKG_SUCCESS; for (i = 0; i < count; i++) { - enum ice_status status; bool last = false; + int status; bh = (struct ice_buf_hdr *)(bufs + start + i); @@ -403,7 +403,7 @@ ice_dwnld_cfg_bufs_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 start, * * Get Package Info List (0x0C43) */ -static enum ice_status +static int ice_aq_get_pkg_info_list(struct ice_hw *hw, struct ice_aqc_get_pkg_info_resp *pkg_info, u16 buf_size, struct ice_sq_cd *cd) @@ -415,21 +415,6 @@ ice_aq_get_pkg_info_list(struct ice_hw *hw, return ice_aq_send_cmd(hw, &desc, pkg_info, buf_size, cd); } -/** - * ice_has_signing_seg - determine if package has a signing segment - * @hw: pointer to the hardware structure - * @pkg_hdr: pointer to the driver's package hdr - */ -static bool ice_has_signing_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) -{ - struct ice_generic_seg_hdr *seg_hdr; - - seg_hdr = (struct ice_generic_seg_hdr *) - ice_find_seg_in_pkg(hw, SEGMENT_TYPE_SIGNING, pkg_hdr); - - return seg_hdr ? 
true : false; -} - /** * ice_get_pkg_segment_id - get correct package segment id, based on device * @mac_type: MAC type of the device @@ -439,6 +424,9 @@ static u32 ice_get_pkg_segment_id(enum ice_mac_type mac_type) u32 seg_id; switch (mac_type) { + case ICE_MAC_E830: + seg_id = SEGMENT_TYPE_ICE_E830; + break; case ICE_MAC_GENERIC: case ICE_MAC_GENERIC_3K: case ICE_MAC_GENERIC_3K_E825: @@ -459,6 +447,9 @@ static u32 ice_get_pkg_sign_type(enum ice_mac_type mac_type) u32 sign_type; switch (mac_type) { + case ICE_MAC_E830: + sign_type = SEGMENT_SIGN_TYPE_RSA3K_SBB; + break; case ICE_MAC_GENERIC_3K: sign_type = SEGMENT_SIGN_TYPE_RSA3K; break; @@ -564,10 +555,21 @@ ice_dwnld_sign_and_cfg_segs(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr, start = LE32_TO_CPU(seg->signed_buf_start); count = LE32_TO_CPU(seg->signed_buf_count); + if (count == 0 && seg->seg_id == SEGMENT_TYPE_ICE_E830) + seg->buf_tbl.buf_count = 1; + state = ice_download_pkg_sig_seg(hw, seg); if (state) goto exit; + if (count == 0) { + /* this is a "Reference Signature Segment" and download should + * be only for the buffers in the signature segment (and not + * the hardware configuration segment) + */ + goto exit; + } + state = ice_download_pkg_config_seg(hw, pkg_hdr, conf_idx, start, count); @@ -606,7 +608,7 @@ static enum ice_ddp_state ice_post_dwnld_pkg_actions(struct ice_hw *hw) { enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS; - enum ice_status status; + int status; status = ice_set_vlan_mode(hw); if (status) { @@ -628,7 +630,7 @@ ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) { enum ice_aq_err aq_err = hw->adminq.sq_last_status; enum ice_ddp_state state = ICE_DDP_PKG_ERR; - enum ice_status status; + int status; u32 i; ice_debug(hw, ICE_DBG_INIT, "Segment ID %d\n", hw->pkg_seg_id); @@ -674,8 +676,8 @@ static enum ice_ddp_state ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) { enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS; - enum ice_status 
status; struct ice_buf_hdr *bh; + int status; if (!bufs || !count) return ICE_DDP_PKG_ERR; @@ -752,7 +754,7 @@ ice_download_pkg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr, { enum ice_ddp_state state; - if (hw->pkg_has_signing_seg) + if (ice_match_signing_seg(pkg_hdr, hw->pkg_seg_id, hw->pkg_sign_type)) state = ice_download_pkg_with_sig_seg(hw, pkg_hdr); else state = ice_download_pkg_without_sig_seg(hw, ice_seg); @@ -777,7 +779,6 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) if (!pkg_hdr) return ICE_DDP_PKG_ERR; - hw->pkg_has_signing_seg = ice_has_signing_seg(hw, pkg_hdr); ice_get_signing_req(hw); ice_debug(hw, ICE_DBG_INIT, "Pkg using segment id: 0x%08X\n", @@ -1036,7 +1037,6 @@ static enum ice_ddp_state ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver) (pkg_ver->major == ICE_PKG_SUPP_VER_MAJ && pkg_ver->minor < ICE_PKG_SUPP_VER_MNR)) return ICE_DDP_PKG_FILE_VERSION_TOO_LOW; - return ICE_DDP_PKG_SUCCESS; } @@ -1181,7 +1181,7 @@ static int ice_get_prof_index_max(struct ice_hw *hw) hw->switch_info->max_used_prof_index = max_prof_index; - return ICE_SUCCESS; + return 0; } /** @@ -1205,11 +1205,8 @@ ice_get_ddp_pkg_state(struct ice_hw *hw, bool already_loaded) } else if (hw->active_pkg_ver.major != ICE_PKG_SUPP_VER_MAJ || hw->active_pkg_ver.minor != ICE_PKG_SUPP_VER_MNR) { return ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED; - } else if (hw->active_pkg_ver.major == ICE_PKG_SUPP_VER_MAJ && - hw->active_pkg_ver.minor == ICE_PKG_SUPP_VER_MNR) { - return ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED; } else { - return ICE_DDP_PKG_ERR; + return ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED; } } @@ -1355,12 +1352,6 @@ enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len) if (state) return state; - /* For packages with signing segments, must be a matching segment */ - if (hw->pkg_has_signing_seg) - if (!ice_match_signing_seg(pkg, hw->pkg_seg_id, - hw->pkg_sign_type)) - return ICE_DDP_PKG_ERR; - /* before downloading the package, check package version for 
* compatibility with driver */ @@ -1489,7 +1480,7 @@ struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw) return bld; } -/** +/* * ice_get_sw_prof_type - determine switch profile type * @hw: pointer to the HW structure * @fv: pointer to the switch field vector @@ -1572,7 +1563,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs, * NOTE: The caller of the function is responsible for freeing the memory * allocated for every list entry. */ -enum ice_status +int ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list) { @@ -1584,7 +1575,7 @@ ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, u32 offset; if (!lkups->n_val_words) - return ICE_SUCCESS; + return 0; ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); @@ -1634,7 +1625,7 @@ ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, ice_warn(hw, "Required profiles not found in currently loaded DDP package"); return ICE_ERR_CFG; } - return ICE_SUCCESS; + return 0; err: LIST_FOR_EACH_ENTRY_SAFE(fvl, tmp, fv_list, ice_sw_fv_list_entry, @@ -1713,7 +1704,7 @@ void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) * result in some wasted space in the buffer. * Note: all package contents must be in Little Endian form. */ -enum ice_status +int ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count) { struct ice_buf_hdr *buf; @@ -1738,7 +1729,7 @@ ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count) FLEX_ARRAY_SIZE(buf, section_entry, count); buf->data_end = CPU_TO_LE16(data_end); - return ICE_SUCCESS; + return 0; } /** @@ -2129,7 +2120,7 @@ ice_boost_tcam_handler(u32 sect_type, void *section, u32 index, u32 *offset) * if it is found. The ice_seg parameter must not be NULL since the first call * to ice_pkg_enum_entry requires a pointer to an actual ice_segment structure. 
*/ -static enum ice_status +static int ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr, struct ice_boost_tcam_entry **entry) { @@ -2148,7 +2139,7 @@ ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr, ice_boost_tcam_handler); if (tcam && LE16_TO_CPU(tcam->addr) == addr) { *entry = tcam; - return ICE_SUCCESS; + return 0; } ice_seg = NULL; @@ -2224,18 +2215,18 @@ void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) * or writing of the package. When attempting to obtain write access, the * caller must check for the following two return values: * - * ICE_SUCCESS - Means the caller has acquired the global config lock + * 0 - Means the caller has acquired the global config lock * and can perform writing of the package. * ICE_ERR_AQ_NO_WORK - Indicates another driver has already written the * package or has found that no update was necessary; in * this case, the caller can just skip performing any * update of the package. */ -enum ice_status +int ice_acquire_global_cfg_lock(struct ice_hw *hw, enum ice_aq_res_access_type access) { - enum ice_status status; + int status; status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID, access, ICE_GLOBAL_CFG_LOCK_TIMEOUT); @@ -2264,7 +2255,7 @@ void ice_release_global_cfg_lock(struct ice_hw *hw) * * This function will request ownership of the change lock. 
*/ -enum ice_status +int ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access) { return ice_acquire_res(hw, ICE_CHANGE_LOCK_RES_ID, access, @@ -2293,13 +2284,13 @@ void ice_release_change_lock(struct ice_hw *hw) * * The function will get or set tx topology */ -static enum ice_status +static int ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size, struct ice_sq_cd *cd, u8 *flags, bool set) { struct ice_aqc_get_set_tx_topo *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.get_set_tx_topo; if (set) { @@ -2321,7 +2312,7 @@ ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size, if (!set && flags) *flags = desc.params.get_set_tx_topo.set_flags; - return ICE_SUCCESS; + return 0; } /** @@ -2333,7 +2324,7 @@ ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size, * The function will apply the new Tx topology from the package buffer * if available. */ -enum ice_status ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) +int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) { u8 *current_topo, *new_topo = NULL; struct ice_run_time_cfg_seg *seg; @@ -2341,8 +2332,8 @@ enum ice_status ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) struct ice_pkg_hdr *pkg_hdr; enum ice_ddp_state state; u16 i, size = 0, offset; - enum ice_status status; u32 reg = 0; + int status; u8 flags; if (!buf || !len) @@ -2463,7 +2454,7 @@ enum ice_status ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) /* Reset is in progress, re-init the hw again */ ice_debug(hw, ICE_DBG_INIT, "Reset is in progress. 
layer topology might be applied already\n"); ice_check_reset(hw); - return ICE_SUCCESS; + return 0; } /* set new topology */ @@ -2480,5 +2471,5 @@ enum ice_status ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) /* CORER will clear the global lock, so no explicit call * required for release */ - return ICE_SUCCESS; + return 0; } diff --git a/drivers/net/ice/base/ice_ddp.h b/drivers/net/ice/base/ice_ddp.h index 1e02adf0db..bedbc12879 100644 --- a/drivers/net/ice/base/ice_ddp.h +++ b/drivers/net/ice/base/ice_ddp.h @@ -107,6 +107,7 @@ struct ice_generic_seg_hdr { #define SEGMENT_TYPE_METADATA 0x00000001 #define SEGMENT_TYPE_ICE_E810 0x00000010 #define SEGMENT_TYPE_SIGNING 0x00001001 +#define SEGMENT_TYPE_ICE_E830 0x00000017 #define SEGMENT_TYPE_ICE_RUN_TIME_CFG 0x00000020 __le32 seg_type; struct ice_pkg_ver seg_format_ver; @@ -405,23 +406,26 @@ struct ice_marker_ptype_tcam_section { struct ice_hw; -enum ice_status +int ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access); void ice_release_change_lock(struct ice_hw *hw); struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw); void * ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size); -enum ice_status +int ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count); -enum ice_status +int ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list); +int +ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count); +u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld); u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld); -enum ice_status +int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count); -enum ice_status +int ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count); void ice_release_global_cfg_lock(struct ice_hw *hw); struct ice_generic_seg_hdr * @@ -433,7 +437,7 @@ enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw); void 
ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg); struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg); -enum ice_status +int ice_acquire_global_cfg_lock(struct ice_hw *hw, enum ice_aq_res_access_type access); @@ -462,6 +466,6 @@ ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld); void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld); -enum ice_status ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len); +int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len); #endif /* _ICE_DDP_H_ */ diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h index 19478d2db1..0b7a5406c2 100644 --- a/drivers/net/ice/base/ice_devids.h +++ b/drivers/net/ice/base/ice_devids.h @@ -6,6 +6,8 @@ #define _ICE_DEVIDS_H_ /* Device IDs */ +#define ICE_DEV_ID_E822_SI_DFLT 0x1888 +/* Intel(R) Ethernet Connection E823-L for backplane */ #define ICE_DEV_ID_E823L_BACKPLANE 0x124C /* Intel(R) Ethernet Connection E823-L for SFP */ #define ICE_DEV_ID_E823L_SFP 0x124D @@ -15,6 +17,24 @@ #define ICE_DEV_ID_E823L_1GBE 0x124F /* Intel(R) Ethernet Connection E823-L for QSFP */ #define ICE_DEV_ID_E823L_QSFP 0x151D +/* Intel(R) Ethernet Controller E830-CC for backplane */ +#define ICE_DEV_ID_E830_BACKPLANE 0x12D1 +/* Intel(R) Ethernet Controller E830-CC for QSFP */ +#define ICE_DEV_ID_E830_QSFP56 0x12D2 +/* Intel(R) Ethernet Controller E830-CC for SFP */ +#define ICE_DEV_ID_E830_SFP 0x12D3 +/* Intel(R) Ethernet Controller E830-C for backplane */ +#define ICE_DEV_ID_E830C_BACKPLANE 0x12D5 +/* Intel(R) Ethernet Controller E830-XXV for backplane */ +#define ICE_DEV_ID_E830_XXV_BACKPLANE 0x12DC +/* Intel(R) Ethernet Controller E830-C for QSFP */ +#define ICE_DEV_ID_E830C_QSFP 0x12D8 +/* Intel(R) Ethernet Controller E830-XXV for QSFP */ +#define ICE_DEV_ID_E830_XXV_QSFP 0x12DD +/* Intel(R) Ethernet Controller E830-C for SFP */ +#define ICE_DEV_ID_E830C_SFP 0x12DA +/* 
Intel(R) Ethernet Controller E830-XXV for SFP */ +#define ICE_DEV_ID_E830_XXV_SFP 0x12DE /* Intel(R) Ethernet Controller E810-C for backplane */ #define ICE_DEV_ID_E810C_BACKPLANE 0x1591 /* Intel(R) Ethernet Controller E810-C for QSFP */ @@ -23,11 +43,11 @@ #define ICE_DEV_ID_E810C_SFP 0x1593 #define ICE_SUBDEV_ID_E810T 0x000E #define ICE_SUBDEV_ID_E810T2 0x000F -#define ICE_SUBDEV_ID_E810T3 0x02E9 -#define ICE_SUBDEV_ID_E810T4 0x02EA -#define ICE_SUBDEV_ID_E810T5 0x0010 -#define ICE_SUBDEV_ID_E810T6 0x0012 -#define ICE_SUBDEV_ID_E810T7 0x0011 +#define ICE_SUBDEV_ID_E810T3 0x0010 +#define ICE_SUBDEV_ID_E810T4 0x0011 +#define ICE_SUBDEV_ID_E810T5 0x0012 +#define ICE_SUBDEV_ID_E810T6 0x02E9 +#define ICE_SUBDEV_ID_E810T7 0x02EA /* Intel(R) Ethernet Controller E810-XXV for backplane */ #define ICE_DEV_ID_E810_XXV_BACKPLANE 0x1599 /* Intel(R) Ethernet Controller E810-XXV for QSFP */ @@ -35,8 +55,6 @@ /* Intel(R) Ethernet Controller E810-XXV for SFP */ #define ICE_DEV_ID_E810_XXV_SFP 0x159B /* Intel(R) Ethernet Connection E823-C for backplane */ -#define ICE_DEV_ID_E822_SI_DFLT 0x1888 -/* Intel(R) Ethernet Connection E823-L for backplane */ #define ICE_DEV_ID_E823C_BACKPLANE 0x188A /* Intel(R) Ethernet Connection E823-C for QSFP */ #define ICE_DEV_ID_E823C_QSFP 0x188B @@ -64,15 +82,12 @@ #define ICE_DEV_ID_E822L_10G_BASE_T 0x1899 /* Intel(R) Ethernet Connection E822-L 1GbE */ #define ICE_DEV_ID_E822L_SGMII 0x189A -/* Intel(R) Ethernet Connection E824-S */ -#define ICE_DEV_ID_E824S 0x0DBD /* Intel(R) Ethernet Connection E825-C for backplane */ #define ICE_DEV_ID_E825C_BACKPLANE 0x579C /* Intel(R) Ethernet Connection E825-C for QSFP */ -#define ICE_DEV_ID_E825C_QSFP 0x579D +#define ICE_DEV_ID_E825C_QSFP 0x579D /* Intel(R) Ethernet Connection E825-C for SFP */ -#define ICE_DEV_ID_E825C_SFP 0x579E +#define ICE_DEV_ID_E825C_SFP 0x579E /* Intel(R) Ethernet Connection E825-C 1GbE */ #define ICE_DEV_ID_E825C_SGMII 0x579F -#define ICE_DEV_ID_C825X 0x0DCD #endif /* _ICE_DEVIDS_H_ 
*/ diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c index c742c77ac6..3a40a21941 100644 --- a/drivers/net/ice/base/ice_fdir.c +++ b/drivers/net/ice/base/ice_fdir.c @@ -6,6 +6,7 @@ #include "ice_fdir.h" /* These are training packet headers used to program flow director filters. */ + static const u8 ice_fdir_tcpv4_pkt[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00, @@ -2212,7 +2213,7 @@ static const u8 ice_fdir_ip4_tun_pkt[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x45, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x00, - 0x40, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, }; @@ -3502,7 +3503,7 @@ ice_fdir_get_prgm_desc(struct ice_hw *hw, struct ice_fdir_fltr *input, * @hw: pointer to the hardware structure * @cntr_id: returns counter index */ -enum ice_status ice_alloc_fd_res_cntr(struct ice_hw *hw, u16 *cntr_id) +int ice_alloc_fd_res_cntr(struct ice_hw *hw, u16 *cntr_id) { return ice_alloc_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_COUNTER_BLOCK, ICE_AQC_RES_TYPE_FLAG_DEDICATED, 1, cntr_id); @@ -3513,7 +3514,7 @@ enum ice_status ice_alloc_fd_res_cntr(struct ice_hw *hw, u16 *cntr_id) * @hw: pointer to the hardware structure * @cntr_id: counter index to be freed */ -enum ice_status ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id) +int ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id) { return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_COUNTER_BLOCK, ICE_AQC_RES_TYPE_FLAG_DEDICATED, 1, cntr_id); @@ -3525,7 +3526,7 @@ enum ice_status ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id) * @cntr_id: returns counter index * @num_fltr: number of filter entries to be allocated */ -enum ice_status +int ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr) { return ice_alloc_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_GUARANTEED_ENTRIES, @@ -3539,7 +3540,7 @@ 
ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr) * @cntr_id: counter index that needs to be freed * @num_fltr: number of filters to be freed */ -enum ice_status +int ice_free_fd_guar_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr) { return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_GUARANTEED_ENTRIES, @@ -3553,7 +3554,7 @@ ice_free_fd_guar_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr) * @cntr_id: returns counter index * @num_fltr: number of filter entries to be allocated */ -enum ice_status +int ice_alloc_fd_shrd_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr) { return ice_alloc_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_SHARED_ENTRIES, @@ -3567,7 +3568,7 @@ ice_alloc_fd_shrd_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr) * @cntr_id: counter index that needs to be freed * @num_fltr: number of filters to be freed */ -enum ice_status +int ice_free_fd_shrd_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr) { return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_FDIR_SHARED_ENTRIES, @@ -3587,7 +3588,7 @@ int ice_get_fdir_cnt_all(struct ice_hw *hw) } /** - * ice_pkt_insert_ipv6_addr - insert a be32 IPv6 address into a memory buffer. + * ice_pkt_insert_ipv6_addr - insert a be32 IPv6 address into a memory buffer * @pkt: packet buffer * @offset: offset into buffer * @addr: IPv6 address to convert and insert into pkt at offset @@ -3602,12 +3603,12 @@ static void ice_pkt_insert_ipv6_addr(u8 *pkt, int offset, __be32 *addr) } /** - * ice_pkt_insert_u6_qfi - insert a u6 value qfi into a memory buffer for gtpu + * ice_pkt_insert_u6_qfi - insert a u6 value QFI into a memory buffer for GTPU * @pkt: packet buffer * @offset: offset into buffer * @data: 8 bit value to convert and insert into pkt at offset * - * This function is designed for inserting qfi (6 bits) for gtpu. + * This function is designed for inserting QFI (6 bits) for GTPU. 
*/ static void ice_pkt_insert_u6_qfi(u8 *pkt, int offset, u8 data) { @@ -3618,7 +3619,7 @@ static void ice_pkt_insert_u6_qfi(u8 *pkt, int offset, u8 data) } /** - * ice_pkt_insert_u8 - insert a u8 value into a memory buffer. + * ice_pkt_insert_u8 - insert a u8 value into a memory buffer * @pkt: packet buffer * @offset: offset into buffer * @data: 8 bit value to convert and insert into pkt at offset @@ -3629,7 +3630,7 @@ static void ice_pkt_insert_u8(u8 *pkt, int offset, u8 data) } /** - * ice_pkt_insert_u8_tc - insert a u8 value into a memory buffer for TC ipv6. + * ice_pkt_insert_u8_tc - insert a u8 value into a memory buffer for TC IPv6 * @pkt: packet buffer * @offset: offset into buffer * @data: 8 bit value to convert and insert into pkt at offset @@ -3651,7 +3652,7 @@ static void ice_pkt_insert_u8_tc(u8 *pkt, int offset, u8 data) } /** - * ice_pkt_insert_u16 - insert a be16 value into a memory buffer. + * ice_pkt_insert_u16 - insert a be16 value into a memory buffer * @pkt: packet buffer * @offset: offset into buffer * @data: 16 bit value to convert and insert into pkt at offset @@ -3662,7 +3663,7 @@ static void ice_pkt_insert_u16(u8 *pkt, int offset, __be16 data) } /** - * ice_pkt_insert_u32 - insert a be32 value into a memory buffer. + * ice_pkt_insert_u32 - insert a be32 value into a memory buffer * @pkt: packet buffer * @offset: offset into buffer * @data: 32 bit value to convert and insert into pkt at offset @@ -3673,7 +3674,7 @@ static void ice_pkt_insert_u32(u8 *pkt, int offset, __be32 data) } /** - * ice_pkt_insert_mac_addr - insert a MAC addr into a memory buffer. 
+ * ice_pkt_insert_mac_addr - insert a MAC addr into a memory buffer * @pkt: packet buffer * @addr: MAC address to convert and insert into pkt at offset */ @@ -3690,7 +3691,7 @@ static void ice_pkt_insert_mac_addr(u8 *pkt, u8 *addr) * * returns an open tunnel port specified for this flow type */ -static enum ice_status +static int ice_fdir_get_open_tunnel_port(struct ice_hw *hw, enum ice_fltr_ptype flow, u16 *port) { @@ -3706,7 +3707,7 @@ ice_fdir_get_open_tunnel_port(struct ice_hw *hw, enum ice_fltr_ptype flow, return ICE_ERR_DOES_NOT_EXIST; } - return ICE_SUCCESS; + return 0; } /** @@ -3822,7 +3823,7 @@ ice_fdir_gen_l2tpv2_pkt(u8 *pkt, struct ice_fdir_l2tpv2 *l2tpv2_data, * @frag: generate a fragment packet * @tun: true implies generate a tunnel packet */ -enum ice_status +int ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input, u8 *pkt, bool frag, bool tun) { @@ -4101,7 +4102,8 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input, ice_pkt_insert_u8(loc, ICE_IPV4_TOS_OFFSET, input->ip.v4.tos); ice_pkt_insert_u8(loc, ICE_IPV4_TTL_OFFSET, input->ip.v4.ttl); ice_pkt_insert_mac_addr(loc, input->ext_data.dst_mac); - ice_pkt_insert_mac_addr(loc + ETH_ALEN, input->ext_data.src_mac); + ice_pkt_insert_mac_addr(loc + ETH_ALEN, + input->ext_data.src_mac); break; case ICE_FLTR_PTYPE_NONF_IPV4_SCTP: ice_pkt_insert_u32(loc, ICE_IPV4_DST_ADDR_OFFSET, @@ -4223,6 +4225,8 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input, input->ip.v4.dst_ip); ice_pkt_insert_u8(loc, ICE_IPV4_TOS_OFFSET, input->ip.v4.tos); ice_pkt_insert_u8(loc, ICE_IPV4_TTL_OFFSET, input->ip.v4.ttl); + ice_pkt_insert_u8(loc, ICE_IPV4_PROTO_OFFSET, + input->ip.v4.proto); ice_pkt_insert_mac_addr(loc, input->ext_data.dst_mac); ice_pkt_insert_mac_addr(loc + ETH_ALEN, input->ext_data.src_mac); @@ -4510,7 +4514,7 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input, input->ecpri_data.pc_id); break; case 
ICE_FLTR_PTYPE_NONF_IPV4_UDP_ECPRI_TP0: - /* Use pkt instead of loc, since PC_ID is in outter part */ + /* Use pkt instead of loc, since PC_ID is in outer part */ ice_pkt_insert_u16(pkt, ICE_IPV4_UDP_ECPRI_TP0_PC_ID_OFFSET, input->ecpri_data.pc_id); break; @@ -4801,7 +4805,7 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input, if (input->flex_fltr) ice_pkt_insert_u16(loc, input->flex_offset, input->flex_word); - return ICE_SUCCESS; + return 0; } /** @@ -4810,7 +4814,7 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input, * @pkt: pointer to return filter packet * @frag: generate a fragment packet */ -enum ice_status +int ice_fdir_get_prgm_pkt(struct ice_fdir_fltr *input, u8 *pkt, bool frag) { return ice_fdir_get_gen_prgm_pkt(NULL, input, pkt, frag, false); @@ -4995,7 +4999,7 @@ bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input) * * Clears FD table entries for a PF by issuing admin command (direct, 0x0B06) */ -enum ice_status ice_clear_pf_fd_table(struct ice_hw *hw) +int ice_clear_pf_fd_table(struct ice_hw *hw) { struct ice_aqc_clear_fd_table *cmd; struct ice_aq_desc desc; diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h index 81ba6008e4..1bb8a14a5d 100644 --- a/drivers/net/ice/base/ice_fdir.h +++ b/drivers/net/ice/base/ice_fdir.h @@ -318,24 +318,24 @@ ice_fdir_comp_rules_basic(struct ice_fdir_fltr *a, struct ice_fdir_fltr *b); bool ice_fdir_comp_rules_extended(struct ice_fdir_fltr *a, struct ice_fdir_fltr *b); -enum ice_status ice_alloc_fd_res_cntr(struct ice_hw *hw, u16 *cntr_id); -enum ice_status ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id); -enum ice_status +int ice_alloc_fd_res_cntr(struct ice_hw *hw, u16 *cntr_id); +int ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id); +int ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr); -enum ice_status +int ice_free_fd_guar_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr); -enum ice_status
+int ice_alloc_fd_shrd_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr); -enum ice_status +int ice_free_fd_shrd_item(struct ice_hw *hw, u16 cntr_id, u16 num_fltr); -enum ice_status ice_clear_pf_fd_table(struct ice_hw *hw); +int ice_clear_pf_fd_table(struct ice_hw *hw); void ice_fdir_get_prgm_desc(struct ice_hw *hw, struct ice_fdir_fltr *input, struct ice_fltr_desc *fdesc, bool add); -enum ice_status +int ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input, u8 *pkt, bool frag, bool tun); -enum ice_status +int ice_fdir_get_prgm_pkt(struct ice_fdir_fltr *input, u8 *pkt, bool frag); int ice_get_fdir_cnt_all(struct ice_hw *hw); bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input); diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index f9266447d9..f08e3b3917 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -182,7 +182,7 @@ void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable) * ------------------------------ * Result: key: b01 10 11 11 00 00 */ -static enum ice_status +static int ice_gen_key_word(u8 val, u8 valid, u8 dont_care, u8 nvr_mtch, u8 *key, u8 *key_inv) { @@ -226,7 +226,7 @@ ice_gen_key_word(u8 val, u8 valid, u8 dont_care, u8 nvr_mtch, u8 *key, in_key_inv >>= 1; } - return ICE_SUCCESS; + return 0; } /** @@ -284,7 +284,7 @@ static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max) * dc == NULL --> dc mask is all 0's (no don't care bits) * nm == NULL --> nm mask is all 0's (no never match bits) */ -enum ice_status +int ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off, u16 len) { @@ -313,7 +313,7 @@ ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off, key + off + i, key + half_size + off + i)) return ICE_ERR_CFG; - return ICE_SUCCESS; + return 0; } /** @@ -445,12 +445,12 @@ ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type, * @hw: pointer to the HW 
structure * @entry: pointer to double vlan boost entry info */ -static enum ice_status +static int ice_upd_dvm_boost_entry(struct ice_hw *hw, struct ice_dvm_entry *entry) { struct ice_boost_tcam_section *sect_rx, *sect_tx; - enum ice_status status = ICE_ERR_MAX_LIMIT; struct ice_buf_build *bld; + int status = ICE_ERR_MAX_LIMIT; u8 val, dc, nm; bld = ice_pkg_buf_alloc(hw); @@ -513,19 +513,19 @@ ice_upd_dvm_boost_entry(struct ice_hw *hw, struct ice_dvm_entry *entry) * * Enable double vlan by updating the appropriate boost tcam entries. */ -enum ice_status ice_set_dvm_boost_entries(struct ice_hw *hw) +int ice_set_dvm_boost_entries(struct ice_hw *hw) { u16 i; for (i = 0; i < hw->dvm_upd.count; i++) { - enum ice_status status; + int status; status = ice_upd_dvm_boost_entry(hw, &hw->dvm_upd.tbl[i]); if (status) return status; } - return ICE_SUCCESS; + return 0; } /** @@ -538,19 +538,19 @@ enum ice_status ice_set_dvm_boost_entries(struct ice_hw *hw) * creating a package buffer with the tunnel info and issuing an update package * command. */ -enum ice_status +int ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port) { struct ice_boost_tcam_section *sect_rx, *sect_tx; - enum ice_status status = ICE_ERR_MAX_LIMIT; struct ice_buf_build *bld; + int status = ICE_ERR_MAX_LIMIT; u16 index; ice_acquire_lock(&hw->tnl_lock); if (ice_tunnel_port_in_use_hlpr(hw, port, &index)) { hw->tnl.tbl[index].ref++; - status = ICE_SUCCESS; + status = 0; goto ice_create_tunnel_end; } @@ -625,11 +625,11 @@ ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port) * targeting the specific updates requested and then performing an update * package. 
*/ -enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all) +int ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all) { struct ice_boost_tcam_section *sect_rx, *sect_tx; - enum ice_status status = ICE_ERR_MAX_LIMIT; struct ice_buf_build *bld; + int status = ICE_ERR_MAX_LIMIT; u16 count = 0; u16 index; u16 size; @@ -640,7 +640,7 @@ enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all) if (!all && ice_tunnel_port_in_use_hlpr(hw, port, &index)) if (hw->tnl.tbl[index].ref > 1) { hw->tnl.tbl[index].ref--; - status = ICE_SUCCESS; + status = 0; goto ice_destroy_tunnel_end; } @@ -729,8 +729,8 @@ enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all) * @prot: variable to receive the protocol ID * @off: variable to receive the protocol offset */ -enum ice_status -ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx, +int +ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u16 fv_idx, u8 *prot, u16 *off) { struct ice_fv_word *fv_ext; @@ -746,7 +746,7 @@ ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx, *prot = fv_ext[fv_idx].prot_id; *off = fv_ext[fv_idx].off; - return ICE_SUCCESS; + return 0; } /* PTG Management */ @@ -762,14 +762,14 @@ ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx, * PTG ID that contains it through the PTG parameter, with the value of * ICE_DEFAULT_PTG (0) meaning it is part the default PTG. */ -static enum ice_status +static int ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg) { if (ptype >= ICE_XLT1_CNT || !ptg) return ICE_ERR_PARAM; *ptg = hw->blk[blk].xlt1.ptypes[ptype].ptg; - return ICE_SUCCESS; + return 0; } /** @@ -796,7 +796,7 @@ static void ice_ptg_alloc_val(struct ice_hw *hw, enum ice_block blk, u8 ptg) * This function will remove the ptype from the specific PTG, and move it to * the default PTG (ICE_DEFAULT_PTG). 
*/ -static enum ice_status +static int ice_ptg_remove_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg) { struct ice_ptg_ptype **ch; @@ -828,7 +828,7 @@ ice_ptg_remove_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg) hw->blk[blk].xlt1.ptypes[ptype].ptg = ICE_DEFAULT_PTG; hw->blk[blk].xlt1.ptypes[ptype].next_ptype = NULL; - return ICE_SUCCESS; + return 0; } /** @@ -843,11 +843,11 @@ ice_ptg_remove_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg) * a destination PTG ID of ICE_DEFAULT_PTG (0) will move the ptype to the * default PTG. */ -static enum ice_status +static int ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg) { - enum ice_status status; u8 original_ptg; + int status; if (ptype > ICE_XLT1_CNT - 1) return ICE_ERR_PARAM; @@ -861,7 +861,7 @@ ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg) /* Is ptype already in the correct PTG? */ if (original_ptg == ptg) - return ICE_SUCCESS; + return 0; /* Remove from original PTG and move back to the default PTG */ if (original_ptg != ICE_DEFAULT_PTG) @@ -869,7 +869,7 @@ ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg) /* Moving to default PTG? 
Then we're done with this request */ if (ptg == ICE_DEFAULT_PTG) - return ICE_SUCCESS; + return 0; /* Add ptype to PTG at beginning of list */ hw->blk[blk].xlt1.ptypes[ptype].next_ptype = @@ -880,7 +880,7 @@ ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg) hw->blk[blk].xlt1.ptypes[ptype].ptg = ptg; hw->blk[blk].xlt1.t[ptype] = ptg; - return ICE_SUCCESS; + return 0; } /* Block / table size info */ @@ -955,6 +955,9 @@ ice_match_prop_lst(struct LIST_HEAD_TYPE *list1, struct LIST_HEAD_TYPE *list2) count++; LIST_FOR_EACH_ENTRY(tmp2, list2, ice_vsig_prof, list) chk_count++; +#ifdef __CHECKER__ + /* cppcheck-suppress knownConditionTrueFalse */ +#endif /* __CHECKER__ */ if (!count || count != chk_count) return false; @@ -987,7 +990,7 @@ ice_match_prop_lst(struct LIST_HEAD_TYPE *list1, struct LIST_HEAD_TYPE *list2) * This function will lookup the VSI entry in the XLT2 list and return * the VSI group its associated with. */ -enum ice_status +int ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig) { if (!vsig || vsi >= ICE_MAX_VSI) @@ -999,7 +1002,7 @@ ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig) */ *vsig = hw->blk[blk].xlt2.vsis[vsi].vsig; - return ICE_SUCCESS; + return 0; } /** @@ -1056,7 +1059,7 @@ static u16 ice_vsig_alloc(struct ice_hw *hw, enum ice_block blk) * for, the list must match exactly, including the order in which the * characteristics are listed. 
*/ -static enum ice_status +static int ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk, struct LIST_HEAD_TYPE *chs, u16 *vsig) { @@ -1067,7 +1070,7 @@ ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk, if (xlt2->vsig_tbl[i].in_use && ice_match_prop_lst(chs, &xlt2->vsig_tbl[i].prop_lst)) { *vsig = ICE_VSIG_VALUE(i, hw->pf_id); - return ICE_SUCCESS; + return 0; } return ICE_ERR_DOES_NOT_EXIST; @@ -1082,7 +1085,7 @@ ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk, * The function will remove all VSIs associated with the input VSIG and move * them to the DEFAULT_VSIG and mark the VSIG available. */ -static enum ice_status +static int ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig) { struct ice_vsig_prof *dtmp, *del; @@ -1130,7 +1133,7 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig) */ INIT_LIST_HEAD(&hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst); - return ICE_SUCCESS; + return 0; } /** @@ -1143,7 +1146,7 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig) * The function will remove the input VSI from its VSI group and move it * to the DEFAULT_VSIG. */ -static enum ice_status +static int ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig) { struct ice_vsig_vsi **vsi_head, *vsi_cur, *vsi_tgt; @@ -1159,7 +1162,7 @@ ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig) /* entry already in default VSIG, don't have to remove */ if (idx == ICE_DEFAULT_VSIG) - return ICE_SUCCESS; + return 0; vsi_head = &hw->blk[blk].xlt2.vsig_tbl[idx].first_vsi; if (!(*vsi_head)) @@ -1186,7 +1189,7 @@ ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig) vsi_cur->changed = 1; vsi_cur->next_vsi = NULL; - return ICE_SUCCESS; + return 0; } /** @@ -1201,12 +1204,12 @@ ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig) * move the entry to the DEFAULT_VSIG, update the original VSIG and * then move entry to the new VSIG. 
*/ -static enum ice_status +static int ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig) { struct ice_vsig_vsi *tmp; - enum ice_status status; u16 orig_vsig, idx; + int status; idx = vsig & ICE_VSIG_IDX_M; @@ -1226,7 +1229,7 @@ ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig) /* no update required if vsigs match */ if (orig_vsig == vsig) - return ICE_SUCCESS; + return 0; if (orig_vsig != ICE_DEFAULT_VSIG) { /* remove entry from orig_vsig and add to default VSIG */ @@ -1236,7 +1239,7 @@ ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig) } if (idx == ICE_DEFAULT_VSIG) - return ICE_SUCCESS; + return 0; /* Create VSI entry and add VSIG and prop_mask values */ hw->blk[blk].xlt2.vsis[vsi].vsig = vsig; @@ -1249,7 +1252,7 @@ ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig) hw->blk[blk].xlt2.vsis[vsi].next_vsi = tmp; hw->blk[blk].xlt2.t[vsi] = vsig; - return ICE_SUCCESS; + return 0; } /** @@ -1324,14 +1327,14 @@ ice_prof_has_mask(struct ice_hw *hw, enum ice_block blk, u8 prof, u16 *masks) * @masks: masks for fv * @prof_id: receives the profile ID */ -static enum ice_status +static int ice_find_prof_id_with_mask(struct ice_hw *hw, enum ice_block blk, struct ice_fv_word *fv, u16 *masks, u8 *prof_id) { struct ice_es *es = &hw->blk[blk].es; u8 i; - /* For FD and RSS, we don't want to re-use an existed profile with the + /* For FD and RSS, we don't want to re-use an existing profile with the * same field vector and mask. This will cause rule interference. */ if (blk == ICE_BLK_FD || blk == ICE_BLK_RSS) @@ -1348,7 +1351,7 @@ ice_find_prof_id_with_mask(struct ice_hw *hw, enum ice_block blk, continue; *prof_id = i; - return ICE_SUCCESS; + return 0; } return ICE_ERR_DOES_NOT_EXIST; @@ -1422,7 +1425,7 @@ static bool ice_tcam_ent_rsrc_type(enum ice_block blk, u16 *rsrc_type) * This function allocates a new entry in a Profile ID TCAM for a specific * block.
*/ -static enum ice_status +static int ice_alloc_tcam_ent(struct ice_hw *hw, enum ice_block blk, bool btm, u16 *tcam_idx) { @@ -1442,7 +1445,7 @@ ice_alloc_tcam_ent(struct ice_hw *hw, enum ice_block blk, bool btm, * * This function frees an entry in a Profile ID TCAM for a specific block. */ -static enum ice_status +static int ice_free_tcam_ent(struct ice_hw *hw, enum ice_block blk, u16 tcam_idx) { u16 res_type; @@ -1462,12 +1465,12 @@ ice_free_tcam_ent(struct ice_hw *hw, enum ice_block blk, u16 tcam_idx) * This function allocates a new profile ID, which also corresponds to a Field * Vector (Extraction Sequence) entry. */ -static enum ice_status +static int ice_alloc_prof_id(struct ice_hw *hw, enum ice_block blk, u8 *prof_id) { - enum ice_status status; u16 res_type; u16 get_prof; + int status; if (!ice_prof_id_rsrc_type(blk, &res_type)) return ICE_ERR_PARAM; @@ -1487,7 +1490,7 @@ ice_alloc_prof_id(struct ice_hw *hw, enum ice_block blk, u8 *prof_id) * * This function frees a profile ID, which also corresponds to a Field Vector. 
*/ -static enum ice_status +static int ice_free_prof_id(struct ice_hw *hw, enum ice_block blk, u8 prof_id) { u16 tmp_prof_id = (u16)prof_id; @@ -1505,7 +1508,7 @@ ice_free_prof_id(struct ice_hw *hw, enum ice_block blk, u8 prof_id) * @blk: the block from which to free the profile ID * @prof_id: the profile ID for which to increment the reference count */ -static enum ice_status +static int ice_prof_inc_ref(struct ice_hw *hw, enum ice_block blk, u8 prof_id) { if (prof_id > hw->blk[blk].es.count) @@ -1513,7 +1516,7 @@ ice_prof_inc_ref(struct ice_hw *hw, enum ice_block blk, u8 prof_id) hw->blk[blk].es.ref_count[prof_id]++; - return ICE_SUCCESS; + return 0; } /** @@ -1534,16 +1537,14 @@ ice_write_prof_mask_reg(struct ice_hw *hw, enum ice_block blk, u16 mask_idx, switch (blk) { case ICE_BLK_RSS: offset = GLQF_HMASK(mask_idx); - val = (idx << GLQF_HMASK_MSK_INDEX_S) & - GLQF_HMASK_MSK_INDEX_M; - val |= (mask << GLQF_HMASK_MASK_S) & GLQF_HMASK_MASK_M; + val = (idx << GLQF_HMASK_MSK_INDEX_S) & GLQF_HMASK_MSK_INDEX_M; + val |= ((u32)mask << GLQF_HMASK_MASK_S) & GLQF_HMASK_MASK_M; break; case ICE_BLK_FD: offset = GLQF_FDMASK(mask_idx); val = (idx << GLQF_FDMASK_MSK_INDEX_S) & GLQF_FDMASK_MSK_INDEX_M; - val |= (mask << GLQF_FDMASK_MASK_S) & - GLQF_FDMASK_MASK_M; + val |= ((u32)mask << GLQF_FDMASK_MASK_S) & GLQF_FDMASK_MASK_M; break; default: ice_debug(hw, ICE_DBG_PKG, "No profile masks for block %d\n", @@ -1630,13 +1631,13 @@ void ice_init_all_prof_masks(struct ice_hw *hw) * @mask: the 16-bit mask * @mask_idx: variable to receive the mask index */ -static enum ice_status +static int ice_alloc_prof_mask(struct ice_hw *hw, enum ice_block blk, u16 idx, u16 mask, u16 *mask_idx) { bool found_unused = false, found_copy = false; - enum ice_status status = ICE_ERR_MAX_LIMIT; u16 unused_idx = 0, copy_idx = 0; + int status = ICE_ERR_MAX_LIMIT; u16 i; if (blk != ICE_BLK_RSS && blk != ICE_BLK_FD) @@ -1684,7 +1685,7 @@ ice_alloc_prof_mask(struct ice_hw *hw, enum ice_block blk, u16 idx, u16 
mask, hw->blk[blk].masks.masks[i].ref++; *mask_idx = i; - status = ICE_SUCCESS; + status = 0; err_ice_alloc_prof_mask: ice_release_lock(&hw->blk[blk].masks.lock); @@ -1698,7 +1699,7 @@ ice_alloc_prof_mask(struct ice_hw *hw, enum ice_block blk, u16 idx, u16 mask, * @blk: hardware block * @mask_idx: index of mask */ -static enum ice_status +static int ice_free_prof_mask(struct ice_hw *hw, enum ice_block blk, u16 mask_idx) { if (blk != ICE_BLK_RSS && blk != ICE_BLK_FD) @@ -1731,7 +1732,7 @@ ice_free_prof_mask(struct ice_hw *hw, enum ice_block blk, u16 mask_idx) exit_ice_free_prof_mask: ice_release_lock(&hw->blk[blk].masks.lock); - return ICE_SUCCESS; + return 0; } /** @@ -1740,7 +1741,7 @@ ice_free_prof_mask(struct ice_hw *hw, enum ice_block blk, u16 mask_idx) * @blk: hardware block * @prof_id: profile ID */ -static enum ice_status +static int ice_free_prof_masks(struct ice_hw *hw, enum ice_block blk, u16 prof_id) { u32 mask_bm; @@ -1754,7 +1755,7 @@ ice_free_prof_masks(struct ice_hw *hw, enum ice_block blk, u16 prof_id) if (mask_bm & BIT(i)) ice_free_prof_mask(hw, blk, i); - return ICE_SUCCESS; + return 0; } /** @@ -1802,7 +1803,7 @@ void ice_shutdown_all_prof_masks(struct ice_hw *hw) * @prof_id: profile ID * @masks: masks */ -static enum ice_status +static int ice_update_prof_masking(struct ice_hw *hw, enum ice_block blk, u16 prof_id, u16 *masks) { @@ -1813,7 +1814,7 @@ ice_update_prof_masking(struct ice_hw *hw, enum ice_block blk, u16 prof_id, /* Only support FD and RSS masking, otherwise nothing to be done */ if (blk != ICE_BLK_RSS && blk != ICE_BLK_FD) - return ICE_SUCCESS; + return 0; for (i = 0; i < hw->blk[blk].es.fvw; i++) if (masks[i] && masks[i] != 0xFFFF) { @@ -1841,7 +1842,7 @@ ice_update_prof_masking(struct ice_hw *hw, enum ice_block blk, u16 prof_id, /* store enabled masks with profile so that they can be freed later */ hw->blk[blk].es.mask_ena[prof_id] = ena_mask; - return ICE_SUCCESS; + return 0; } /** @@ -1874,7 +1875,7 @@ ice_write_es(struct ice_hw 
*hw, enum ice_block blk, u8 prof_id, * @blk: the block from which to free the profile ID * @prof_id: the profile ID for which to decrement the reference count */ -static enum ice_status +static int ice_prof_dec_ref(struct ice_hw *hw, enum ice_block blk, u8 prof_id) { if (prof_id > hw->blk[blk].es.count) @@ -1888,7 +1889,7 @@ ice_prof_dec_ref(struct ice_hw *hw, enum ice_block blk, u8 prof_id) } } - return ICE_SUCCESS; + return 0; } /* Block / table section IDs */ @@ -2138,7 +2139,7 @@ void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx) * ice_init_hw_tbls - init hardware table memory * @hw: pointer to the hardware structure */ -enum ice_status ice_init_hw_tbls(struct ice_hw *hw) +int ice_init_hw_tbls(struct ice_hw *hw) { u8 i; @@ -2250,7 +2251,7 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw) if (!es->mask_ena) goto err; } - return ICE_SUCCESS; + return 0; err: ice_free_hw_tbls(hw); @@ -2368,6 +2369,7 @@ void ice_free_hw_tbls(struct ice_hw *hw) ice_free_prof_map(hw, i); ice_destroy_lock(&es->prof_map_lock); + ice_free_flow_profs(hw, i); ice_destroy_lock(&hw->fl_profs_locks[i]); @@ -2493,7 +2495,7 @@ void ice_clear_hw_tbls(struct ice_hw *hw) * @nm_msk: never match mask * @key: output of profile ID key */ -static enum ice_status +static int ice_prof_gen_key(struct ice_hw *hw, enum ice_block blk, u8 ptg, u16 vsig, u8 cdid, u16 flags, u8 vl_msk[ICE_TCAM_KEY_VAL_SZ], u8 dc_msk[ICE_TCAM_KEY_VAL_SZ], u8 nm_msk[ICE_TCAM_KEY_VAL_SZ], @@ -2549,7 +2551,7 @@ ice_prof_gen_key(struct ice_hw *hw, enum ice_block blk, u8 ptg, u16 vsig, * @dc_msk: don't care mask * @nm_msk: never match mask */ -static enum ice_status +static int ice_tcam_write_entry(struct ice_hw *hw, enum ice_block blk, u16 idx, u8 prof_id, u8 ptg, u16 vsig, u8 cdid, u16 flags, u8 vl_msk[ICE_TCAM_KEY_VAL_SZ], @@ -2557,7 +2559,7 @@ ice_tcam_write_entry(struct ice_hw *hw, enum ice_block blk, u16 idx, u8 nm_msk[ICE_TCAM_KEY_VAL_SZ]) { struct ice_prof_tcam_entry; - enum ice_status status; + int status; status 
= ice_prof_gen_key(hw, blk, ptg, vsig, cdid, flags, vl_msk, dc_msk, nm_msk, hw->blk[blk].prof.t[idx].key); @@ -2576,7 +2578,7 @@ ice_tcam_write_entry(struct ice_hw *hw, enum ice_block blk, u16 idx, * @vsig: VSIG to query * @refs: pointer to variable to receive the reference count */ -static enum ice_status +static int ice_vsig_get_ref(struct ice_hw *hw, enum ice_block blk, u16 vsig, u16 *refs) { u16 idx = vsig & ICE_VSIG_IDX_M; @@ -2593,7 +2595,7 @@ ice_vsig_get_ref(struct ice_hw *hw, enum ice_block blk, u16 vsig, u16 *refs) ptr = ptr->next_vsi; } - return ICE_SUCCESS; + return 0; } /** @@ -2626,7 +2628,7 @@ ice_has_prof_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl) * @bld: the update package buffer build to add to * @chgs: the list of changes to make in hardware */ -static enum ice_status +static int ice_prof_bld_es(struct ice_hw *hw, enum ice_block blk, struct ice_buf_build *bld, struct LIST_HEAD_TYPE *chgs) { @@ -2657,7 +2659,7 @@ ice_prof_bld_es(struct ice_hw *hw, enum ice_block blk, ICE_NONDMA_TO_NONDMA); } - return ICE_SUCCESS; + return 0; } /** @@ -2667,7 +2669,7 @@ ice_prof_bld_es(struct ice_hw *hw, enum ice_block blk, * @bld: the update package buffer build to add to * @chgs: the list of changes to make in hardware */ -static enum ice_status +static int ice_prof_bld_tcam(struct ice_hw *hw, enum ice_block blk, struct ice_buf_build *bld, struct LIST_HEAD_TYPE *chgs) { @@ -2698,7 +2700,7 @@ ice_prof_bld_tcam(struct ice_hw *hw, enum ice_block blk, ICE_NONDMA_TO_NONDMA); } - return ICE_SUCCESS; + return 0; } /** @@ -2707,7 +2709,7 @@ ice_prof_bld_tcam(struct ice_hw *hw, enum ice_block blk, * @bld: the update package buffer build to add to * @chgs: the list of changes to make in hardware */ -static enum ice_status +static int ice_prof_bld_xlt1(enum ice_block blk, struct ice_buf_build *bld, struct LIST_HEAD_TYPE *chgs) { @@ -2733,7 +2735,7 @@ ice_prof_bld_xlt1(enum ice_block blk, struct ice_buf_build *bld, p->value[0] = tmp->ptg; } - return 
ICE_SUCCESS; + return 0; } /** @@ -2742,7 +2744,7 @@ ice_prof_bld_xlt1(enum ice_block blk, struct ice_buf_build *bld, * @bld: the update package buffer build to add to * @chgs: the list of changes to make in hardware */ -static enum ice_status +static int ice_prof_bld_xlt2(enum ice_block blk, struct ice_buf_build *bld, struct LIST_HEAD_TYPE *chgs) { @@ -2775,7 +2777,7 @@ ice_prof_bld_xlt2(enum ice_block blk, struct ice_buf_build *bld, } } - return ICE_SUCCESS; + return 0; } /** @@ -2784,18 +2786,18 @@ ice_prof_bld_xlt2(enum ice_block blk, struct ice_buf_build *bld, * @blk: hardware block * @chgs: the list of changes to make in hardware */ -static enum ice_status +static int ice_upd_prof_hw(struct ice_hw *hw, enum ice_block blk, struct LIST_HEAD_TYPE *chgs) { struct ice_buf_build *b; struct ice_chs_chg *tmp; - enum ice_status status; u16 pkg_sects; u16 xlt1 = 0; u16 xlt2 = 0; u16 tcam = 0; u16 es = 0; + int status; u16 sects; /* count number of sections we need */ @@ -2822,7 +2824,7 @@ ice_upd_prof_hw(struct ice_hw *hw, enum ice_block blk, sects = xlt1 + xlt2 + tcam + es; if (!sects) - return ICE_SUCCESS; + return 0; /* Build update package buffer */ b = ice_pkg_buf_alloc(hw); @@ -2942,7 +2944,7 @@ static const struct ice_fd_src_dst_pair ice_fd_pairs[] = { * @prof_id: profile ID * @es: extraction sequence (length of array is determined by the block) */ -static enum ice_status +static int ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es) { ice_declare_bitmap(pair_list, ICE_FD_SRC_DST_PAIR_COUNT); @@ -3087,7 +3089,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es) /* initially clear the mask select for this profile */ ice_update_fd_mask(hw, prof_id, 0); - return ICE_SUCCESS; + return 0; } /* The entries here needs to match the order of enum ice_ptype_attrib */ @@ -3118,7 +3120,7 @@ ice_get_ptype_attrib_info(enum ice_ptype_attrib_type type, * @attr: array of attributes that will be considered * @attr_cnt: number of 
elements in the attribute array */ -static enum ice_status +static int ice_add_prof_attrib(struct ice_prof_map *prof, u8 ptg, u16 ptype, const struct ice_ptype_attributes *attr, u16 attr_cnt) { @@ -3141,7 +3143,7 @@ ice_add_prof_attrib(struct ice_prof_map *prof, u8 ptg, u16 ptype, if (!found) return ICE_ERR_DOES_NOT_EXIST; - return ICE_SUCCESS; + return 0; } /** @@ -3171,17 +3173,17 @@ static void ice_disable_fd_swap(struct ice_hw *hw, u16 prof_id) wr32(hw, GLQF_FDSWAP(prof_id, i), raw_swap); ice_debug(hw, ICE_DBG_INIT, "swap wr(%d, %d): %x = %08x\n", - prof_id, i, GLQF_FDSWAP(prof_id, i), raw_swap); + prof_id, i, GLQF_FDSWAP(prof_id, i), raw_swap); /* write the FDIR inset register set */ wr32(hw, GLQF_FDINSET(prof_id, i), raw_in); ice_debug(hw, ICE_DBG_INIT, "inset wr(%d, %d): %x = %08x\n", - prof_id, i, GLQF_FDINSET(prof_id, i), raw_in); + prof_id, i, GLQF_FDINSET(prof_id, i), raw_in); } } -/** +/* * ice_add_prof - add profile * @hw: pointer to the HW struct * @blk: hardware block @@ -3198,14 +3200,14 @@ static void ice_disable_fd_swap(struct ice_hw *hw, u16 prof_id) * it will not be written until the first call to ice_add_flow that specifies * the ID value used here. 
 */
-enum ice_status
+int
 ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id,
	     ice_bitmap_t *ptypes, const struct ice_ptype_attributes *attr,
	     u16 attr_cnt, struct ice_fv_word *es, u16 *masks, bool fd_swap)
 {
 	ice_declare_bitmap(ptgs_used, ICE_XLT1_CNT);
 	struct ice_prof_map *prof;
-	enum ice_status status;
+	int status;
 	u8 prof_id;
 	u16 ptype;
@@ -3288,7 +3290,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id,
 	}
 
 	LIST_ADD(&prof->list, &hw->blk[blk].es.prof_map);
-	status = ICE_SUCCESS;
+	status = 0;
 
 err_ice_add_prof:
 	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
@@ -3344,14 +3346,14 @@ ice_vsig_prof_id_count(struct ice_hw *hw, enum ice_block blk, u16 vsig)
  * @blk: hardware block
  * @idx: the index to release
  */
-static enum ice_status
+static int
 ice_rel_tcam_idx(struct ice_hw *hw, enum ice_block blk, u16 idx)
 {
 	/* Masks to invoke a never match entry */
 	u8 vl_msk[ICE_TCAM_KEY_VAL_SZ] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
 	u8 dc_msk[ICE_TCAM_KEY_VAL_SZ] = { 0xFE, 0xFF, 0xFF, 0xFF, 0xFF };
 	u8 nm_msk[ICE_TCAM_KEY_VAL_SZ] = { 0x01, 0x00, 0x00, 0x00, 0x00 };
-	enum ice_status status;
+	int status;
 
 	/* write the TCAM entry */
 	status = ice_tcam_write_entry(hw, blk, idx, 0, 0, 0, 0, 0, vl_msk,
@@ -3371,11 +3373,11 @@ ice_rel_tcam_idx(struct ice_hw *hw, enum ice_block blk, u16 idx)
  * @blk: hardware block
  * @prof: pointer to profile structure to remove
  */
-static enum ice_status
+static int
 ice_rem_prof_id(struct ice_hw *hw, enum ice_block blk,
		struct ice_vsig_prof *prof)
 {
-	enum ice_status status;
+	int status;
 	u16 i;
 
 	for (i = 0; i < prof->tcam_count; i++)
@@ -3387,7 +3389,7 @@ ice_rem_prof_id(struct ice_hw *hw, enum ice_block blk,
 			return ICE_ERR_HW_TABLE;
 	}
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -3397,7 +3399,7 @@ ice_rem_prof_id(struct ice_hw *hw, enum ice_block blk,
  * @vsig: the VSIG to remove
  * @chg: the change list
 */
-static enum ice_status
+static int
 ice_rem_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig,
	     struct LIST_HEAD_TYPE *chg)
 {
@@ -3409,7 +3411,7 @@ ice_rem_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig,
 	LIST_FOR_EACH_ENTRY_SAFE(d, t,
				 &hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst,
				 ice_vsig_prof, list) {
-		enum ice_status status;
+		int status;
 
 		status = ice_rem_prof_id(hw, blk, d);
 		if (status)
@@ -3454,7 +3456,7 @@ ice_rem_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig,
  * @hdl: profile handle indicating which profile to remove
  * @chg: list to receive a record of changes
  */
-static enum ice_status
+static int
 ice_rem_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl,
		     struct LIST_HEAD_TYPE *chg)
 {
@@ -3465,7 +3467,7 @@ ice_rem_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl,
			    &hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst,
			    ice_vsig_prof, list)
 		if (p->profile_cookie == hdl) {
-			enum ice_status status;
+			int status;
 
 			if (ice_vsig_prof_id_count(hw, blk, vsig) == 1)
 				/* this is the last profile, remove the VSIG */
@@ -3488,12 +3490,12 @@ ice_rem_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl,
  * @blk: hardware block
  * @id: profile tracking ID
  */
-static enum ice_status
+static int
 ice_rem_flow_all(struct ice_hw *hw, enum ice_block blk, u64 id)
 {
 	struct ice_chs_chg *del, *tmp;
 	struct LIST_HEAD_TYPE chg;
-	enum ice_status status;
+	int status;
 	u16 i;
 
 	INIT_LIST_HEAD(&chg);
@@ -3529,10 +3531,10 @@ ice_rem_flow_all(struct ice_hw *hw, enum ice_block blk, u64 id)
 * previously created through ice_add_prof. If any existing entries
 * are associated with this profile, they will be removed as well.
 */
-enum ice_status ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id)
+int ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id)
 {
 	struct ice_prof_map *pmap;
-	enum ice_status status;
+	int status;
 
 	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
@@ -3565,13 +3567,13 @@ enum ice_status ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id)
  * @hdl: profile handle
  * @chg: change list
 */
-static enum ice_status
+static int
 ice_get_prof(struct ice_hw *hw, enum ice_block blk, u64 hdl,
	     struct LIST_HEAD_TYPE *chg)
 {
-	enum ice_status status = ICE_SUCCESS;
 	struct ice_prof_map *map;
 	struct ice_chs_chg *p;
+	int status = 0;
 	u16 i;
 
 	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
@@ -3620,7 +3622,7 @@ ice_get_prof(struct ice_hw *hw, enum ice_block blk, u64 hdl,
  *
  * This routine makes a copy of the list of profiles in the specified VSIG.
  */
-static enum ice_status
+static int
 ice_get_profs_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig,
		   struct LIST_HEAD_TYPE *lst)
 {
@@ -3640,7 +3642,7 @@ ice_get_profs_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig,
 		LIST_ADD_TAIL(&p->list, lst);
 	}
 
-	return ICE_SUCCESS;
+	return 0;
 
 err_ice_get_profs_vsig:
 	LIST_FOR_EACH_ENTRY_SAFE(ent1, ent2, lst, ice_vsig_prof, list) {
@@ -3658,13 +3660,13 @@ ice_get_profs_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig,
  * @lst: the list to be added to
  * @hdl: profile handle of entry to add
  */
-static enum ice_status
+static int
 ice_add_prof_to_lst(struct ice_hw *hw, enum ice_block blk,
		    struct LIST_HEAD_TYPE *lst, u64 hdl)
 {
-	enum ice_status status = ICE_SUCCESS;
 	struct ice_prof_map *map;
 	struct ice_vsig_prof *p;
+	int status = 0;
 	u16 i;
 
 	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
@@ -3706,13 +3708,13 @@ ice_add_prof_to_lst(struct ice_hw *hw, enum ice_block blk,
  * @vsig: the VSIG to move the VSI to
  * @chg: the change list
 */
-static enum ice_status
+static int
 ice_move_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig,
	     struct LIST_HEAD_TYPE *chg)
 {
-	enum ice_status status;
 	struct ice_chs_chg *p;
 	u16 orig_vsig;
+	int status;
 
 	p = (struct ice_chs_chg *)ice_malloc(hw, sizeof(*p));
 	if (!p)
@@ -3734,7 +3736,7 @@ ice_move_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig,
 
 	LIST_ADD(&p->list_entry, chg);
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -3780,13 +3782,13 @@ ice_rem_chg_tcam_ent(struct ice_hw *hw, u16 idx, struct LIST_HEAD_TYPE *chg)
  *
  * This function appends an enable or disable TCAM entry in the change log
 */
-static enum ice_status
+static int
 ice_prof_tcam_ena_dis(struct ice_hw *hw, enum ice_block blk, bool enable,
		      u16 vsig, struct ice_tcam_inf *tcam,
		      struct LIST_HEAD_TYPE *chg)
 {
-	enum ice_status status;
 	struct ice_chs_chg *p;
+	int status;
 
 	u8 vl_msk[ICE_TCAM_KEY_VAL_SZ] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
 	u8 dc_msk[ICE_TCAM_KEY_VAL_SZ] = { 0xFF, 0xFF, 0x00, 0x00, 0x00 };
@@ -3842,7 +3844,7 @@ ice_prof_tcam_ena_dis(struct ice_hw *hw, enum ice_block blk, bool enable,
 	/* log change */
 	LIST_ADD(&p->list_entry, chg);
 
-	return ICE_SUCCESS;
+	return 0;
 
 err_ice_prof_tcam_ena_dis:
 	ice_free(hw, p);
@@ -3882,15 +3884,15 @@ ice_ptg_attr_in_use(struct ice_tcam_inf *ptg_attr, ice_bitmap_t *ptgs_used,
  * @vsig: the VSIG for which to adjust profile priorities
  * @chg: the change list
 */
-static enum ice_status
+static int
 ice_adj_prof_priorities(struct ice_hw *hw, enum ice_block blk, u16 vsig,
			struct LIST_HEAD_TYPE *chg)
 {
 	ice_declare_bitmap(ptgs_used, ICE_XLT1_CNT);
 	struct ice_tcam_inf **attr_used;
-	enum ice_status status = ICE_SUCCESS;
 	struct ice_vsig_prof *t;
 	u16 attr_used_cnt = 0;
+	int status = 0;
 	u16 idx;
 
 #define ICE_MAX_PTG_ATTRS	1024
@@ -3970,7 +3972,7 @@ ice_adj_prof_priorities(struct ice_hw *hw, enum ice_block blk, u16 vsig,
  * @rev: true to add entries to the end of the list
  * @chg: the change list
 */
-static enum ice_status
+static int
 ice_add_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl,
		     bool rev, struct LIST_HEAD_TYPE *chg)
 {
@@ -3978,11 +3980,11 @@ ice_add_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl,
 	u8 vl_msk[ICE_TCAM_KEY_VAL_SZ] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
 	u8 dc_msk[ICE_TCAM_KEY_VAL_SZ] = { 0xFF, 0xFF, 0x00, 0x00, 0x00 };
 	u8 nm_msk[ICE_TCAM_KEY_VAL_SZ] = { 0x00, 0x00, 0x00, 0x00, 0x00 };
-	enum ice_status status = ICE_SUCCESS;
 	struct ice_prof_map *map;
 	struct ice_vsig_prof *t;
 	struct ice_chs_chg *p;
 	u16 vsig_idx, i;
+	int status = 0;
 
 	/* Error, if this VSIG already has this profile */
 	if (ice_has_prof_vsig(hw, blk, vsig, hdl))
@@ -4086,13 +4088,13 @@ ice_add_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl,
  * @hdl: the profile handle of the profile that will be added to the VSIG
  * @chg: the change list
 */
-static enum ice_status
+static int
 ice_create_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl,
			struct LIST_HEAD_TYPE *chg)
 {
-	enum ice_status status;
 	struct ice_chs_chg *p;
 	u16 new_vsig;
+	int status;
 
 	p = (struct ice_chs_chg *)ice_malloc(hw, sizeof(*p));
 	if (!p)
@@ -4119,7 +4121,7 @@ ice_create_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl,
 
 	LIST_ADD(&p->list_entry, chg);
 
-	return ICE_SUCCESS;
+	return 0;
 
 err_ice_create_prof_id_vsig:
 	/* let caller clean up the change list */
@@ -4136,13 +4138,13 @@ ice_create_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl,
  * @new_vsig: return of new VSIG
  * @chg: the change list
 */
-static enum ice_status
+static int
 ice_create_vsig_from_lst(struct ice_hw *hw, enum ice_block blk, u16 vsi,
			 struct LIST_HEAD_TYPE *lst, u16 *new_vsig,
			 struct LIST_HEAD_TYPE *chg)
 {
 	struct ice_vsig_prof *t;
-	enum ice_status status;
+	int status;
 	u16 vsig;
 
 	vsig = ice_vsig_alloc(hw, blk);
@@ -4163,7 +4165,7 @@ ice_create_vsig_from_lst(struct ice_hw *hw, enum ice_block blk, u16 vsi,
 
 	*new_vsig = vsig;
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -4178,7 +4180,7 @@ ice_find_prof_vsig(struct ice_hw *hw, enum ice_block blk, u64 hdl, u16 *vsig)
 {
 	struct ice_vsig_prof *t;
 	struct LIST_HEAD_TYPE lst;
-	enum ice_status status;
+	int status;
 
 	INIT_LIST_HEAD(&lst);
@@ -4194,7 +4196,7 @@ ice_find_prof_vsig(struct ice_hw *hw, enum ice_block blk, u64 hdl, u16 *vsig)
 	LIST_DEL(&t->list);
 	ice_free(hw, t);
 
-	return status == ICE_SUCCESS;
+	return !status;
 }
 
 /**
@@ -4211,12 +4213,12 @@ ice_find_prof_vsig(struct ice_hw *hw, enum ice_block blk, u64 hdl, u16 *vsig)
  * save time in generating a new VSIG and TCAMs till a match is
  * found and subsequent rollback when a matching VSIG is found.
 */
-enum ice_status
+int
 ice_add_vsi_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
 {
 	struct ice_chs_chg *tmp, *del;
 	struct LIST_HEAD_TYPE chg;
-	enum ice_status status;
+	int status;
 
 	/* if target VSIG is default the move is invalid */
 	if ((vsig & ICE_VSIG_IDX_M) == ICE_DEFAULT_VSIG)
@@ -4249,14 +4251,14 @@ ice_add_vsi_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
  * profile indicated by the ID parameter for the VSIs specified in the VSI
  * array. Once successfully called, the flow will be enabled.
 */
-enum ice_status
+int
 ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 {
 	struct ice_vsig_prof *tmp1, *del1;
-	struct LIST_HEAD_TYPE union_lst;
 	struct ice_chs_chg *tmp, *del;
+	struct LIST_HEAD_TYPE union_lst;
 	struct LIST_HEAD_TYPE chg;
-	enum ice_status status;
+	int status;
 	u16 vsig;
 
 	INIT_LIST_HEAD(&union_lst);
@@ -4384,13 +4386,62 @@ ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 	return status;
 }
 
+/**
+ * ice_flow_assoc_hw_prof - add profile id flow for main/ctrl VSI flow entry
+ * @hw: pointer to the HW struct
+ * @blk: HW block
+ * @dest_vsi_handle: dest VSI handle
+ * @fdir_vsi_handle: fdir programming VSI handle
+ * @id: profile id (handle)
+ *
+ * Calling this function will update the hardware tables to enable the
+ * profile indicated by the ID parameter for the VSIs specified in the VSI
+ * array. Once successfully called, the flow will be enabled.
+ */
+int
+ice_flow_assoc_hw_prof(struct ice_hw *hw, enum ice_block blk,
+		       u16 dest_vsi_handle, u16 fdir_vsi_handle, int id)
+{
+	int status = 0;
+	u16 vsi_num;
+
+	vsi_num = ice_get_hw_vsi_num(hw, dest_vsi_handle);
+	status = ice_add_prof_id_flow(hw, blk, vsi_num, id);
+	if (status) {
+		ice_debug(hw, ICE_DBG_FLOW, "HW profile add failed for main VSI flow entry, %d\n",
+			  status);
+		goto err_add_prof;
+	}
+
+	if (blk != ICE_BLK_FD)
+		return status;
+
+	vsi_num = ice_get_hw_vsi_num(hw, fdir_vsi_handle);
+	status = ice_add_prof_id_flow(hw, blk, vsi_num, id);
+	if (status) {
+		ice_debug(hw, ICE_DBG_FLOW, "HW profile add failed for ctrl VSI flow entry, %d\n",
+			  status);
+		goto err_add_entry;
+	}
+
+	return status;
+
+err_add_entry:
+	vsi_num = ice_get_hw_vsi_num(hw, dest_vsi_handle);
+	ice_rem_prof_id_flow(hw, blk, vsi_num, id);
+err_add_prof:
+	ice_flow_rem_prof(hw, blk, id);
+
+	return status;
+}
+
 /**
  * ice_rem_prof_from_list - remove a profile from list
  * @hw: pointer to the HW struct
  * @lst: list to remove the profile from
  * @hdl: the profile handle indicating the profile to remove
  */
-static enum ice_status
+static int
 ice_rem_prof_from_list(struct ice_hw *hw, struct LIST_HEAD_TYPE *lst, u64 hdl)
 {
 	struct ice_vsig_prof *ent, *tmp;
@@ -4399,7 +4450,7 @@ ice_rem_prof_from_list(struct ice_hw *hw, struct LIST_HEAD_TYPE *lst, u64 hdl)
 		if (ent->profile_cookie == hdl) {
 			LIST_DEL(&ent->list);
 			ice_free(hw, ent);
-			return ICE_SUCCESS;
+			return 0;
 		}
 
 	return ICE_ERR_DOES_NOT_EXIST;
@@ -4416,13 +4467,13 @@ ice_rem_prof_from_list(struct ice_hw *hw, struct LIST_HEAD_TYPE *lst, u64 hdl)
 * profile indicated by the ID parameter for the VSIs specified in the VSI
 * array. Once successfully called, the flow will be disabled.
 */
-enum ice_status
+int
 ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 {
 	struct ice_vsig_prof *tmp1, *del1;
-	struct LIST_HEAD_TYPE chg, copy;
 	struct ice_chs_chg *tmp, *del;
-	enum ice_status status;
+	struct LIST_HEAD_TYPE chg, copy;
+	int status;
 	u16 vsig;
 
 	INIT_LIST_HEAD(&copy);
@@ -4538,51 +4589,3 @@ ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 
 	return status;
 }
-
-/**
- * ice_flow_assoc_hw_prof - add profile id flow for main/ctrl VSI flow entry
- * @hw: pointer to the HW struct
- * @blk: HW block
- * @dest_vsi_handle: dest VSI handle
- * @fdir_vsi_handle: fdir programming VSI handle
- * @id: profile id (handle)
- *
- * Calling this function will update the hardware tables to enable the
- * profile indicated by the ID parameter for the VSIs specified in the VSI
- * array. Once successfully called, the flow will be enabled.
- */
-enum ice_status
-ice_flow_assoc_hw_prof(struct ice_hw *hw, enum ice_block blk,
-		       u16 dest_vsi_handle, u16 fdir_vsi_handle, int id)
-{
-	enum ice_status status = ICE_SUCCESS;
-	u16 vsi_num;
-
-	vsi_num = ice_get_hw_vsi_num(hw, dest_vsi_handle);
-	status = ice_add_prof_id_flow(hw, blk, vsi_num, id);
-	if (status) {
-		ice_debug(hw, ICE_DBG_FLOW, "HW profile add failed for main VSI flow entry, %d\n",
-			  status);
-		goto err_add_prof;
-	}
-
-	if (blk != ICE_BLK_FD)
-		return status;
-
-	vsi_num = ice_get_hw_vsi_num(hw, fdir_vsi_handle);
-	status = ice_add_prof_id_flow(hw, blk, vsi_num, id);
-	if (status) {
-		ice_debug(hw, ICE_DBG_FLOW, "HW profile add failed for ctrl VSI flow entry, %d\n",
-			  status);
-		goto err_add_entry;
-	}
-
-	return status;
-
-err_add_entry:
-	vsi_num = ice_get_hw_vsi_num(hw, dest_vsi_handle);
-	ice_rem_prof_id_flow(hw, blk, vsi_num, id);
-err_add_prof:
-	ice_flow_rem_prof(hw, blk, id);
-
-	return status;
-}
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 422d09becc..9d897887d0 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -7,10 +7,10 @@
 
 #include "ice_type.h"
 
-enum ice_status
-ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
+int
+ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u16 fv_idx,
		  u8 *prot, u16 *off);
-enum ice_status
+int
 ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
		     u16 *value);
 void
@@ -18,16 +18,16 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type type,
		     ice_bitmap_t *bm);
 void
 ice_init_prof_result_bm(struct ice_hw *hw);
-enum ice_status
+int
 ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
		      u16 buf_size, struct ice_sq_cd *cd);
 bool
 ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type,
			 u16 *port);
-enum ice_status
+int
 ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port);
-enum ice_status ice_set_dvm_boost_entries(struct ice_hw *hw);
-enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all);
+int ice_set_dvm_boost_entries(struct ice_hw *hw);
+int ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all);
 bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index);
 bool
 ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type);
@@ -36,9 +36,9 @@ ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type);
 bool ice_hw_ptype_ena(struct ice_hw *hw, u16 ptype);
 
 /* XLT2/VSI group functions */
-enum ice_status
+int
 ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig);
-enum ice_status
+int
 ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id,
	     ice_bitmap_t *ptypes, const struct ice_ptype_attributes *attr,
	     u16 attr_cnt, struct ice_fv_word *es, u16 *masks, bool fd_swap);
@@ -46,23 +46,23 @@ void ice_init_all_prof_masks(struct ice_hw *hw);
 void ice_shutdown_all_prof_masks(struct ice_hw *hw);
 struct ice_prof_map *
 ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id);
-enum ice_status
+int
 ice_add_vsi_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
-enum ice_status
+int
 ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
-enum ice_status
+int
 ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
-enum ice_status
+int
 ice_flow_assoc_hw_prof(struct ice_hw *hw, enum ice_block blk,
		       u16 dest_vsi_handle, u16 fdir_vsi_handle, int id);
-enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
+int ice_init_hw_tbls(struct ice_hw *hw);
 void ice_fill_blk_tbls(struct ice_hw *hw);
 void ice_clear_hw_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
-enum ice_status
+int
 ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
-enum ice_status
+int
 ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off,
	    u16 len);
 
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index c83479d6fa..a556db5054 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -25,16 +25,16 @@ struct ice_fv {
 };
 
 /* Packet Type (PTYPE) values */
-#define ICE_PTYPE_MAC_PAY 1
+#define ICE_PTYPE_MAC_PAY		1
 #define ICE_MAC_PTP			2
 #define ICE_MAC_LLDP			6
 #define ICE_MAC_ARP			11
-#define ICE_PTYPE_IPV4FRAG_PAY 22
-#define ICE_PTYPE_IPV4_PAY 23
-#define ICE_PTYPE_IPV4_UDP_PAY 24
-#define ICE_PTYPE_IPV4_TCP_PAY 26
-#define ICE_PTYPE_IPV4_SCTP_PAY 27
-#define ICE_PTYPE_IPV4_ICMP_PAY 28
+#define ICE_PTYPE_IPV4FRAG_PAY		22
+#define ICE_PTYPE_IPV4_PAY		23
+#define ICE_PTYPE_IPV4_UDP_PAY		24
+#define ICE_PTYPE_IPV4_TCP_PAY		26
+#define ICE_PTYPE_IPV4_SCTP_PAY		27
+#define ICE_PTYPE_IPV4_ICMP_PAY		28
 #define ICE_MAC_IPV4_IPV4_FRAG		29
 #define ICE_MAC_IPV4_IPV4_PAY		30
 #define ICE_MAC_IPV4_IPV4_UDP_PAY	31
@@ -73,12 +73,12 @@ struct ice_fv {
 #define ICE_MAC_IPV4_TUN_ICE_MAC_IPV6_TCP	70
 #define ICE_MAC_IPV4_TUN_ICE_MAC_IPV6_SCTP	71
 #define ICE_MAC_IPV4_TUN_ICE_MAC_IPV6_ICMPV6	72
-#define ICE_PTYPE_IPV6FRAG_PAY 88
-#define ICE_PTYPE_IPV6_PAY 89
-#define ICE_PTYPE_IPV6_UDP_PAY 90
-#define ICE_PTYPE_IPV6_TCP_PAY 92
-#define ICE_PTYPE_IPV6_SCTP_PAY 93
-#define ICE_PTYPE_IPV6_ICMP_PAY 94
+#define ICE_PTYPE_IPV6FRAG_PAY		88
+#define ICE_PTYPE_IPV6_PAY		89
+#define ICE_PTYPE_IPV6_UDP_PAY		90
+#define ICE_PTYPE_IPV6_TCP_PAY		92
+#define ICE_PTYPE_IPV6_SCTP_PAY		93
+#define ICE_PTYPE_IPV6_ICMP_PAY		94
 #define ICE_MAC_IPV6_IPV4_FRAG		95
 #define ICE_MAC_IPV6_IPV4_PAY		96
 #define ICE_MAC_IPV6_IPV4_UDP_PAY	97
@@ -380,10 +380,18 @@ struct ice_sw_fv_list_entry {
 * fields of the packet are now little endian.
 */
 struct ice_boost_key_value {
-#define ICE_BOOST_REMAINING_HV_KEY 15
+#define ICE_BOOST_REMAINING_HV_KEY	15
 	u8 remaining_hv_key[ICE_BOOST_REMAINING_HV_KEY];
-	__le16 hv_dst_port_key;
-	__le16 hv_src_port_key;
+	union {
+		struct {
+			__le16 hv_dst_port_key;
+			__le16 hv_src_port_key;
+		} /* udp_tunnel */;
+		struct {
+			__le16 hv_vlan_id_key;
+			__le16 hv_etype_key;
+		} vlan;
+	};
 	u8 tcam_search_key;
 };
 #pragma pack()
@@ -643,7 +651,6 @@ struct ice_prof_tcam_entry {
 	u8 key[ICE_TCAM_KEY_SZ];
 	u8 prof_id;
 };
-
 #pragma pack()
 
 struct ice_prof_id_section {
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 7f1490de50..aca25731aa 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -196,7 +196,7 @@ struct ice_flow_field_info ice_flds_info[ICE_FLOW_FIELD_IDX_MAX] = {
 	/* ICE_FLOW_FIELD_IDX_GTPU_DWN_QFI */
 	ICE_FLOW_FLD_INFO_MSK(ICE_FLOW_SEG_HDR_GTPU_DWN, 22,
			      ICE_FLOW_FLD_SZ_GTP_QFI, 0x3f00),
-	/* PPPOE */
+	/* PPPoE */
 	/* ICE_FLOW_FIELD_IDX_PPPOE_SESS_ID */
 	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_PPPOE, 2,
			  ICE_FLOW_FLD_SZ_PPPOE_SESS_ID),
@@ -798,7 +798,7 @@ static const u32 ice_ptypes_gtpu[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for pppoe */
+/* Packet types for PPPoE */
 static const u32 ice_ptypes_pppoe[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -834,7 +834,7 @@ static const u32 ice_ptypes_pfcp_session[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for l2tpv3 */
+/* Packet types for L2TPv3 */
 static const u32 ice_ptypes_l2tpv3[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -846,7 +846,7 @@ static const u32 ice_ptypes_l2tpv3[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for esp */
+/* Packet types for ESP */
 static const u32 ice_ptypes_esp[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000003, 0x00000000, 0x00000000,
@@ -858,7 +858,7 @@ static const u32 ice_ptypes_esp[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for ah */
+/* Packet types for AH */
 static const u32 ice_ptypes_ah[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x0000000C, 0x00000000, 0x00000000,
@@ -1002,12 +1002,11 @@ struct ice_flow_prof_params {
 #define ICE_FLOW_SEG_HDRS_L2_MASK	\
	(ICE_FLOW_SEG_HDR_ETH | ICE_FLOW_SEG_HDR_VLAN)
 #define ICE_FLOW_SEG_HDRS_L3_MASK	\
-	(ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV6 | \
-	 ICE_FLOW_SEG_HDR_ARP)
+	(ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_ARP)
 #define ICE_FLOW_SEG_HDRS_L4_MASK	\
	(ICE_FLOW_SEG_HDR_ICMP | ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_UDP | \
	 ICE_FLOW_SEG_HDR_SCTP)
-/* mask for L4 protocols that are NOT part of IPV4/6 OTHER PTYPE groups */
+/* mask for L4 protocols that are NOT part of IPv4/6 OTHER PTYPE groups */
 #define ICE_FLOW_SEG_HDRS_L4_MASK_NO_OTHER	\
	(ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_SCTP)
 
@@ -1016,8 +1015,7 @@ struct ice_flow_prof_params {
  * @segs: array of one or more packet segments that describe the flow
  * @segs_cnt: number of packet segments provided
 */
-static enum ice_status
-ice_flow_val_hdrs(struct ice_flow_seg_info *segs, u8 segs_cnt)
+static int ice_flow_val_hdrs(struct ice_flow_seg_info *segs, u8 segs_cnt)
 {
 	u8 i;
 
@@ -1033,7 +1031,7 @@ ice_flow_val_hdrs(struct ice_flow_seg_info *segs, u8 segs_cnt)
 			return ICE_ERR_PARAM;
 	}
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /* Sizes of fixed known protocol headers without header options */
@@ -1068,7 +1066,7 @@ static u16 ice_flow_calc_seg_sz(struct ice_flow_prof_params *params, u8 seg)
 	else if (params->prof->segs[seg].hdrs & ICE_FLOW_SEG_HDR_ARP)
 		sz += ICE_FLOW_PROT_HDR_SZ_ARP;
 	else if (params->prof->segs[seg].hdrs & ICE_FLOW_SEG_HDRS_L4_MASK)
-		/* A L3 header is required if L4 is specified */
+		/* An L3 header is required if L4 is specified */
 		return 0;
 
 	/* L4 headers */
@@ -1091,7 +1089,7 @@ static u16 ice_flow_calc_seg_sz(struct ice_flow_prof_params *params, u8 seg)
 * This function identifies the packet types associated with the protocol
 * headers being present in packet segments of the specified flow profile.
 */
-static enum ice_status
+static int
 ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 {
 	struct ice_flow_prof *prof;
@@ -1132,17 +1130,16 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
				       ICE_FLOW_PTYPE_MAX);
 		}
+
 		if ((hdrs & ICE_FLOW_SEG_HDR_IPV4) &&
		    (hdrs & ICE_FLOW_SEG_HDR_IPV_OTHER)) {
-			src = i ?
-				(const ice_bitmap_t *)ice_ptypes_ipv4_il :
+			src = i ? (const ice_bitmap_t *)ice_ptypes_ipv4_il :
				(const ice_bitmap_t *)ice_ptypes_ipv4_ofos_all;
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
				       ICE_FLOW_PTYPE_MAX);
 		} else if ((hdrs & ICE_FLOW_SEG_HDR_IPV6) &&
			   (hdrs & ICE_FLOW_SEG_HDR_IPV_OTHER)) {
-			src = i ?
-				(const ice_bitmap_t *)ice_ptypes_ipv6_il :
+			src = i ? (const ice_bitmap_t *)ice_ptypes_ipv6_il :
				(const ice_bitmap_t *)ice_ptypes_ipv6_ofos_all;
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
				       ICE_FLOW_PTYPE_MAX);
@@ -1299,11 +1296,9 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 		if (hdrs & ICE_FLOW_SEG_HDR_PFCP) {
 			if (hdrs & ICE_FLOW_SEG_HDR_PFCP_NODE)
-				src =
-				(const ice_bitmap_t *)ice_ptypes_pfcp_node;
+				src = (const ice_bitmap_t *)ice_ptypes_pfcp_node;
 			else
-				src =
-				(const ice_bitmap_t *)ice_ptypes_pfcp_session;
+				src = (const ice_bitmap_t *)ice_ptypes_pfcp_session;
 
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
				       ICE_FLOW_PTYPE_MAX);
@@ -1318,7 +1313,7 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 		}
 	}
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -1330,7 +1325,7 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 * This function will allocate an extraction sequence entries for a DWORD size
 * chunk of the packet flags.
 */
-static enum ice_status
+static int
 ice_flow_xtract_pkt_flags(struct ice_hw *hw,
			  struct ice_flow_prof_params *params,
			  enum ice_flex_mdid_pkt_flags flags)
@@ -1354,7 +1349,7 @@ ice_flow_xtract_pkt_flags(struct ice_hw *hw,
 	params->es[idx].off = (u16)flags;
 	params->es_cnt++;
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -1369,9 +1364,9 @@ ice_flow_xtract_pkt_flags(struct ice_hw *hw,
 * field. It then allocates one or more extraction sequence entries for the
 * given field, and fill the entries with protocol ID and offset information.
 */
-static enum ice_status
+static int
 ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
-		    u8 seg, enum ice_flow_field fld, u64 match)
+		    u8 seg, enum ice_flow_field fld, ice_bitmap_t *match)
 {
 	enum ice_flow_field sib = ICE_FLOW_FIELD_IDX_MAX;
 	u8 fv_words = (u8)hw->blk[params->blk].es.fvw;
@@ -1420,7 +1415,7 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 		/* If the sibling field is also included, that field's
 		 * mask needs to be included.
 		 */
-		if (match & BIT(sib))
+		if (ice_is_bit_set(match, sib))
 			sib_mask = ice_flds_info[sib].mask;
 		break;
 	case ICE_FLOW_FIELD_IDX_IPV6_TTL:
@@ -1451,7 +1446,7 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 		/* If the sibling field is also included, that field's
 		 * mask needs to be included.
 		 */
-		if (match & BIT(sib))
+		if (ice_is_bit_set(match, sib))
 			sib_mask = ice_flds_info[sib].mask;
 		break;
 	case ICE_FLOW_FIELD_IDX_IPV4_SA:
@@ -1547,8 +1542,7 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 	case ICE_FLOW_FIELD_IDX_ICMP_TYPE:
 	case ICE_FLOW_FIELD_IDX_ICMP_CODE:
 		/* ICMP type and code share the same extraction seq. entry */
-		prot_id = (params->prof->segs[seg].hdrs &
-			   ICE_FLOW_SEG_HDR_IPV4) ?
+		prot_id = (params->prof->segs[seg].hdrs & ICE_FLOW_SEG_HDR_IPV4) ?
			ICE_PROT_ICMP_IL : ICE_PROT_ICMPV6_IL;
 		sib = fld == ICE_FLOW_FIELD_IDX_ICMP_TYPE ?
			ICE_FLOW_FIELD_IDX_ICMP_CODE :
@@ -1617,7 +1611,7 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 		off += ICE_FLOW_FV_EXTRACT_SZ;
 	}
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -1626,7 +1620,7 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 * @params: information about the flow to be processed
 * @seg: index of packet segment whose raw fields are to be extracted
 */
-static enum ice_status
+static int
 ice_flow_xtract_raws(struct ice_hw *hw, struct ice_flow_prof_params *params,
		     u8 seg)
 {
@@ -1635,7 +1629,7 @@ ice_flow_xtract_raws(struct ice_hw *hw, struct ice_flow_prof_params *params,
 	u8 i;
 
 	if (!params->prof->segs[seg].raws_cnt)
-		return ICE_SUCCESS;
+		return 0;
 
 	if (params->prof->segs[seg].raws_cnt >
	    ARRAY_SIZE(params->prof->segs[seg].raws))
@@ -1693,7 +1687,7 @@ ice_flow_xtract_raws(struct ice_hw *hw, struct ice_flow_prof_params *params,
 		}
 	}
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -1704,11 +1698,11 @@ ice_flow_xtract_raws(struct ice_hw *hw, struct ice_flow_prof_params *params,
 * This function iterates through all matched fields in the given segments, and
 * creates an extraction sequence for the fields.
 */
-static enum ice_status
+static int
 ice_flow_create_xtrct_seq(struct ice_hw *hw,
			  struct ice_flow_prof_params *params)
 {
-	enum ice_status status = ICE_SUCCESS;
+	int status = 0;
 	u8 i;
 
 	/* For ACL, we also need to extract the direction bit (Rx,Tx) data from
@@ -1722,15 +1716,16 @@ ice_flow_create_xtrct_seq(struct ice_hw *hw,
 	}
 
 	for (i = 0; i < params->prof->segs_cnt; i++) {
-		u64 match = params->prof->segs[i].match;
+		ice_declare_bitmap(match, ICE_FLOW_FIELD_IDX_MAX);
 		enum ice_flow_field j;
 
-		ice_for_each_set_bit(j, (ice_bitmap_t *)&match,
-				     ICE_FLOW_FIELD_IDX_MAX) {
+		ice_cp_bitmap(match, params->prof->segs[i].match,
			      ICE_FLOW_FIELD_IDX_MAX);
+		ice_for_each_set_bit(j, match, ICE_FLOW_FIELD_IDX_MAX) {
 			status = ice_flow_xtract_fld(hw, params, i, j, match);
 			if (status)
 				return status;
-			ice_clear_bit(j, (ice_bitmap_t *)&match);
+			ice_clear_bit(j, match);
 		}
 
 		/* Process raw matching bytes */
@@ -1750,7 +1745,7 @@ ice_flow_create_xtrct_seq(struct ice_hw *hw,
 * This function will return the specific scenario based on the
 * params passed to it
 */
-static enum ice_status
+static int
 ice_flow_sel_acl_scen(struct ice_hw *hw, struct ice_flow_prof_params *params)
 {
 	/* Find the best-fit scenario for the provided match width */
@@ -1771,14 +1766,14 @@ ice_flow_sel_acl_scen(struct ice_hw *hw, struct ice_flow_prof_params *params)
 
 	params->prof->cfg.scen = cand_scen;
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
 * ice_flow_acl_def_entry_frmt - Determine the layout of flow entries
 * @params: information about the flow to be processed
 */
-static enum ice_status
+static int
 ice_flow_acl_def_entry_frmt(struct ice_flow_prof_params *params)
 {
 	u16 index, i, range_idx = 0;
@@ -1789,7 +1784,7 @@ ice_flow_acl_def_entry_frmt(struct ice_flow_prof_params *params)
 		struct ice_flow_seg_info *seg = &params->prof->segs[i];
 		u16 j;
 
-		ice_for_each_set_bit(j, (ice_bitmap_t *)&seg->match,
+		ice_for_each_set_bit(j, seg->match,
				     (u16)ICE_FLOW_FIELD_IDX_MAX) {
 			struct ice_flow_fld_info *fld = &seg->fields[j];
 
@@ -1852,7 +1847,7 @@ ice_flow_acl_def_entry_frmt(struct ice_flow_prof_params *params)
 	/* Store # bytes required for entry for later use */
 	params->entry_length = index - ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX;
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -1860,10 +1855,10 @@ ice_flow_acl_def_entry_frmt(struct ice_flow_prof_params *params)
 * @hw: pointer to the HW struct
 * @params: information about the flow to be processed
 */
-static enum ice_status
+static int
 ice_flow_proc_segs(struct ice_hw *hw, struct ice_flow_prof_params *params)
 {
-	enum ice_status status;
+	int status;
 
 	status = ice_flow_proc_seg_hdrs(params);
 	if (status)
@@ -1876,7 +1871,7 @@ ice_flow_proc_segs(struct ice_hw *hw, struct ice_flow_prof_params *params)
 	switch (params->blk) {
 	case ICE_BLK_FD:
 	case ICE_BLK_RSS:
-		status = ICE_SUCCESS;
+		status = 0;
 		break;
 	case ICE_BLK_ACL:
 		status = ice_flow_acl_def_entry_frmt(params);
@@ -1932,7 +1927,10 @@ ice_flow_find_prof_conds(struct ice_hw *hw, enum ice_block blk,
 			for (i = 0; i < segs_cnt; i++)
 				if (segs[i].hdrs != p->segs[i].hdrs ||
				    ((conds & ICE_FLOW_FIND_PROF_CHK_FLDS) &&
-				     segs[i].match != p->segs[i].match))
+				     (ice_cmp_bitmap(segs[i].match,
						     p->segs[i].match,
						     ICE_FLOW_FIELD_IDX_MAX) ==
				      false)))
 					break;
 
 			/* A match is found if all segments are matched */
@@ -2019,18 +2017,18 @@ ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry *entry)
 * @prof_id: the profile ID handle
 * @hw_prof_id: pointer to variable to receive the HW profile ID
 */
-enum ice_status
+int
 ice_flow_get_hw_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
		     u8 *hw_prof_id)
 {
-	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
 	struct ice_prof_map *map;
+	int status = ICE_ERR_DOES_NOT_EXIST;
 
 	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
 	map = ice_search_prof_id(hw, blk, prof_id);
 	if (map) {
 		*hw_prof_id = map->prof_id;
-		status = ICE_SUCCESS;
+		status = 0;
 	}
 	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
 	return status;
@@ -2044,16 +2042,16 @@ ice_flow_get_hw_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 * @prof: pointer to flow profile
 * @buf: destination buffer function writes partial extraction sequence to
 *
- * returns ICE_SUCCESS if no PF is associated to the given profile
+ * returns 0 if no PF is associated to the given profile
 * returns ICE_ERR_IN_USE if at least one PF is associated to the given profile
 * returns other error code for real error
 */
-static enum ice_status
+static int
 ice_flow_acl_is_prof_in_use(struct ice_hw *hw, struct ice_flow_prof *prof,
			    struct ice_aqc_acl_prof_generic_frmt *buf)
 {
-	enum ice_status status;
 	u8 prof_id = 0;
+	int status;
 
 	status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id);
 	if (status)
@@ -2071,7 +2069,7 @@ ice_flow_acl_is_prof_in_use(struct ice_hw *hw, struct ice_flow_prof *prof,
	    buf->pf_scenario_num[2] == 0 && buf->pf_scenario_num[3] == 0 &&
	    buf->pf_scenario_num[4] == 0 && buf->pf_scenario_num[5] == 0 &&
	    buf->pf_scenario_num[6] == 0 && buf->pf_scenario_num[7] == 0)
-		return ICE_SUCCESS;
+		return 0;
 
 	if (buf->pf_scenario_num[0] == ICE_ACL_INVALID_SCEN &&
	    buf->pf_scenario_num[1] == ICE_ACL_INVALID_SCEN &&
@@ -2081,7 +2079,7 @@ ice_flow_acl_is_prof_in_use(struct ice_hw *hw, struct ice_flow_prof *prof,
	    buf->pf_scenario_num[5] == ICE_ACL_INVALID_SCEN &&
	    buf->pf_scenario_num[6] == ICE_ACL_INVALID_SCEN &&
	    buf->pf_scenario_num[7] == ICE_ACL_INVALID_SCEN)
-		return ICE_SUCCESS;
+		return 0;
 
 	return ICE_ERR_IN_USE;
 }
@@ -2092,7 +2090,7 @@ ice_flow_acl_is_prof_in_use(struct ice_hw *hw, struct ice_flow_prof *prof,
 * @acts: array of actions to be performed on a match
 * @acts_cnt: number of actions
 */
-static enum ice_status
+static int
 ice_flow_acl_free_act_cntr(struct ice_hw *hw, struct ice_flow_action *acts,
			   u8 acts_cnt)
 {
@@ -2103,7 +2101,7 @@ ice_flow_acl_free_act_cntr(struct ice_hw *hw, struct ice_flow_action *acts,
		    acts[i].type == ICE_FLOW_ACT_CNTR_BYTES ||
		    acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES) {
 			struct ice_acl_cntrs cntrs = { 0 };
-			enum ice_status status;
+			int status;
 
 			/* amount is unused in the dealloc path but the common
			 * parameter check routine wants a value set, as zero
@@ -2126,7 +2124,7 @@ ice_flow_acl_free_act_cntr(struct ice_hw *hw, struct ice_flow_action *acts,
 				return status;
 		}
 	}
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -2136,11 +2134,11 @@ ice_flow_acl_free_act_cntr(struct ice_hw *hw, struct ice_flow_action *acts,
 *
 * Disassociate the scenario from the profile for the PF of the VSI.
 */
-static enum ice_status
+static int
 ice_flow_acl_disassoc_scen(struct ice_hw *hw, struct ice_flow_prof *prof)
 {
 	struct ice_aqc_acl_prof_generic_frmt buf;
-	enum ice_status status = ICE_SUCCESS;
+	int status = 0;
 	u8 prof_id = 0;
 
 	ice_memset(&buf, 0, sizeof(buf), ICE_NONDMA_MEM);
@@ -2166,7 +2164,7 @@ ice_flow_acl_disassoc_scen(struct ice_hw *hw, struct ice_flow_prof *prof)
 * @blk: classification stage
 * @entry: flow entry to be removed
 */
-static enum ice_status
+static int
 ice_flow_rem_entry_sync(struct ice_hw *hw, enum ice_block blk,
			struct ice_flow_entry *entry)
 {
@@ -2174,7 +2172,7 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, enum ice_block blk,
 		return ICE_ERR_BAD_PTR;
 
 	if (blk == ICE_BLK_ACL) {
-		enum ice_status status;
+		int status;
 
 		if (!entry->prof)
 			return ICE_ERR_BAD_PTR;
@@ -2194,7 +2192,7 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, enum ice_block blk,
 
 	ice_dealloc_flow_entry(hw, entry);
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -2211,7 +2209,7 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, enum ice_block blk,
 *
 * Assumption: the caller has acquired the lock to the profile list
 */
-static enum ice_status
+static int
 ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk,
		       enum ice_flow_dir dir, u64 prof_id,
		       struct ice_flow_seg_info *segs, u8 segs_cnt,
@@ -2219,7 +2217,7 @@ ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk,
		       struct ice_flow_prof **prof)
 {
 	struct ice_flow_prof_params *params;
-
enum ice_status status; + int status; u8 i; if (!prof || (acts_cnt && !acts)) @@ -2307,11 +2305,11 @@ ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk, * * Assumption: the caller has acquired the lock to the profile list */ -static enum ice_status +static int ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk, struct ice_flow_prof *prof) { - enum ice_status status; + int status; /* Remove all remaining flow entries before removing the flow profile */ if (!LIST_EMPTY(&prof->entries)) { @@ -2403,13 +2401,13 @@ ice_flow_acl_set_xtrct_seq_fld(struct ice_aqc_acl_prof_generic_frmt *buf, * @hw: pointer to the hardware structure * @prof: pointer to flow profile */ -static enum ice_status +static int ice_flow_acl_set_xtrct_seq(struct ice_hw *hw, struct ice_flow_prof *prof) { struct ice_aqc_acl_prof_generic_frmt buf; struct ice_flow_fld_info *info; - enum ice_status status; u8 prof_id = 0; + int status; u16 i; ice_memset(&buf, 0, sizeof(buf), ICE_NONDMA_MEM); @@ -2432,7 +2430,7 @@ ice_flow_acl_set_xtrct_seq(struct ice_hw *hw, struct ice_flow_prof *prof) struct ice_flow_seg_info *seg = &prof->segs[i]; u16 j; - ice_for_each_set_bit(j, (ice_bitmap_t *)&seg->match, + ice_for_each_set_bit(j, seg->match, ICE_FLOW_FIELD_IDX_MAX) { info = &seg->fields[j]; @@ -2473,11 +2471,11 @@ ice_flow_acl_set_xtrct_seq(struct ice_hw *hw, struct ice_flow_prof *prof) * be added has the same characteristics as the VSIG and will * thereby have access to all resources added to that VSIG. 
*/ -enum ice_status +int ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle, u16 vsig) { - enum ice_status status; + int status; if (!ice_is_vsi_valid(hw, vsi_handle) || blk >= ICE_BLK_COUNT) return ICE_ERR_PARAM; @@ -2500,11 +2498,11 @@ ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle, * Assumption: the caller has acquired the lock to the profile list * and the software VSI handle has been validated */ -enum ice_status +int ice_flow_assoc_prof(struct ice_hw *hw, enum ice_block blk, struct ice_flow_prof *prof, u16 vsi_handle) { - enum ice_status status = ICE_SUCCESS; + int status = 0; if (!ice_is_bit_set(prof->vsis, vsi_handle)) { if (blk == ICE_BLK_ACL) { @@ -2536,11 +2534,11 @@ ice_flow_assoc_prof(struct ice_hw *hw, enum ice_block blk, * Assumption: the caller has acquired the lock to the profile list * and the software VSI handle has been validated */ -static enum ice_status +static int ice_flow_disassoc_prof(struct ice_hw *hw, enum ice_block blk, struct ice_flow_prof *prof, u16 vsi_handle) { - enum ice_status status = ICE_SUCCESS; + int status = 0; if (ice_is_bit_set(prof->vsis, vsi_handle)) { status = ice_rem_prof_id_flow(hw, blk, @@ -2574,7 +2572,7 @@ ice_flow_disassoc_prof(struct ice_hw *hw, enum ice_block blk, * @prof: stores parsed profile info from raw flow * @blk: classification stage */ -enum ice_status +int ice_flow_set_hw_prof(struct ice_hw *hw, u16 dest_vsi_handle, u16 fdir_vsi_handle, struct ice_parser_profile *prof, enum ice_block blk) @@ -2582,7 +2580,7 @@ ice_flow_set_hw_prof(struct ice_hw *hw, u16 dest_vsi_handle, int id = ice_find_first_bit(prof->ptypes, ICE_FLOW_PTYPE_MAX); struct ice_flow_prof_params *params; u8 fv_words = hw->blk[blk].es.fvw; - enum ice_status status; + int status; int i, idx; params = (struct ice_flow_prof_params *)ice_malloc(hw, sizeof(*params)); @@ -2601,7 +2599,8 @@ ice_flow_set_hw_prof(struct ice_hw *hw, u16 dest_vsi_handle, idx = i; params->es[idx].prot_id = 
prof->fv[i].proto_id; params->es[idx].off = prof->fv[i].offset; - params->mask[idx] = CPU_TO_BE16(prof->fv[i].msk); + params->mask[idx] = (((prof->fv[i].msk) << 8) & 0xff00) | + (((prof->fv[i].msk) >> 8) & 0x00ff); } switch (prof->flags) { @@ -2632,7 +2631,7 @@ ice_flow_set_hw_prof(struct ice_hw *hw, u16 dest_vsi_handle, if (status) goto free_params; - return ICE_SUCCESS; + return 0; free_params: ice_free(hw, params); @@ -2652,13 +2651,13 @@ ice_flow_set_hw_prof(struct ice_hw *hw, u16 dest_vsi_handle, * @acts_cnt: number of default actions * @prof: stores the returned flow profile added */ -enum ice_status +int ice_flow_add_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir, u64 prof_id, struct ice_flow_seg_info *segs, u8 segs_cnt, struct ice_flow_action *acts, u8 acts_cnt, struct ice_flow_prof **prof) { - enum ice_status status; + int status; if (segs_cnt > ICE_FLOW_SEG_MAX) return ICE_ERR_MAX_LIMIT; @@ -2691,11 +2690,11 @@ ice_flow_add_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir, * @blk: the block for which the flow profile is to be removed * @prof_id: unique ID of the flow profile to be removed */ -enum ice_status +int ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id) { struct ice_flow_prof *prof; - enum ice_status status; + int status; ice_acquire_lock(&hw->fl_profs_locks[blk]); @@ -2759,7 +2758,7 @@ u64 ice_flow_find_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_id) * @acts_cnt: number of actions * @cnt_alloc: indicates if an ACL counter has been allocated. 
*/ -static enum ice_status +static int ice_flow_acl_check_actions(struct ice_hw *hw, struct ice_flow_action *acts, u8 acts_cnt, bool *cnt_alloc) { @@ -2792,7 +2791,7 @@ ice_flow_acl_check_actions(struct ice_hw *hw, struct ice_flow_action *acts, acts[i].type == ICE_FLOW_ACT_CNTR_BYTES || acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES) { struct ice_acl_cntrs cntrs = { 0 }; - enum ice_status status; + int status; cntrs.amount = 1; cntrs.bank = 0; /* Only bank0 for the moment */ @@ -2812,7 +2811,7 @@ ice_flow_acl_check_actions(struct ice_hw *hw, struct ice_flow_action *acts, } } - return ICE_SUCCESS; + return 0; } /** @@ -2941,17 +2940,17 @@ ice_flow_acl_frmt_entry_fld(u16 fld, struct ice_flow_fld_info *info, u8 *buf, * along with data from the flow profile. This key/key_inverse pair makes up * the 'entry' for an ACL flow entry. */ -static enum ice_status +static int ice_flow_acl_frmt_entry(struct ice_hw *hw, struct ice_flow_prof *prof, struct ice_flow_entry *e, u8 *data, struct ice_flow_action *acts, u8 acts_cnt) { u8 *buf = NULL, *dontcare = NULL, *key = NULL, range = 0, dir_flag_msk; struct ice_aqc_acl_profile_ranges *range_buf = NULL; - enum ice_status status; bool cnt_alloc; u8 prof_id = 0; u16 i, buf_sz; + int status; status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id); if (status) @@ -3002,7 +3001,7 @@ ice_flow_acl_frmt_entry(struct ice_hw *hw, struct ice_flow_prof *prof, struct ice_flow_seg_info *seg = &prof->segs[i]; u16 j; - ice_for_each_set_bit(j, (ice_bitmap_t *)&seg->match, + ice_for_each_set_bit(j, seg->match, (u16)ICE_FLOW_FIELD_IDX_MAX) { struct ice_flow_fld_info *info = &seg->fields[j]; @@ -3208,7 +3207,7 @@ ice_flow_acl_convert_to_acl_prio(enum ice_flow_priority p) * For this function, we do the union between dst_buf and src_buf * range checker buffer, and we will save the result back to dst_buf */ -static enum ice_status +static int ice_flow_acl_union_rng_chk(struct ice_aqc_acl_profile_ranges *dst_buf, struct ice_aqc_acl_profile_ranges 
*src_buf) { @@ -3247,7 +3246,7 @@ ice_flow_acl_union_rng_chk(struct ice_aqc_acl_profile_ranges *dst_buf, } } - return ICE_SUCCESS; + return 0; } /** @@ -3260,7 +3259,7 @@ ice_flow_acl_union_rng_chk(struct ice_aqc_acl_profile_ranges *dst_buf, * corresponding ACL scenario. Then, we will perform matching logic to * see if we want to add/modify/do nothing with this new entry. */ -static enum ice_status +static int ice_flow_acl_add_scen_entry_sync(struct ice_hw *hw, struct ice_flow_prof *prof, struct ice_flow_entry **entry) { @@ -3268,8 +3267,8 @@ ice_flow_acl_add_scen_entry_sync(struct ice_hw *hw, struct ice_flow_prof *prof, struct ice_aqc_acl_profile_ranges query_rng_buf, cfg_rng_buf; struct ice_acl_act_entry *acts = NULL; struct ice_flow_entry *exist; - enum ice_status status = ICE_SUCCESS; struct ice_flow_entry *e; + int status = 0; u8 i; if (!entry || !(*entry) || !prof) @@ -3406,11 +3405,11 @@ ice_flow_acl_add_scen_entry_sync(struct ice_hw *hw, struct ice_flow_prof *prof, * @prof: pointer to flow profile * @e: double pointer to the flow entry */ -static enum ice_status +static int ice_flow_acl_add_scen_entry(struct ice_hw *hw, struct ice_flow_prof *prof, struct ice_flow_entry **e) { - enum ice_status status; + int status; ice_acquire_lock(&prof->entries_lock); status = ice_flow_acl_add_scen_entry_sync(hw, prof, e); @@ -3432,7 +3431,7 @@ ice_flow_acl_add_scen_entry(struct ice_hw *hw, struct ice_flow_prof *prof, * @acts_cnt: number of actions * @entry_h: pointer to buffer that receives the new flow entry's handle */ -enum ice_status +int ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id, u64 entry_id, u16 vsi_handle, enum ice_flow_priority prio, void *data, struct ice_flow_action *acts, u8 acts_cnt, @@ -3440,7 +3439,7 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id, { struct ice_flow_entry *e = NULL; struct ice_flow_prof *prof; - enum ice_status status = ICE_SUCCESS; + int status = 0; /* ACL entries must indicate an 
action */ if (blk == ICE_BLK_ACL && (!acts || !acts_cnt)) @@ -3524,17 +3523,17 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id, * @blk: classification stage * @entry_h: handle to the flow entry to be removed */ -enum ice_status ice_flow_rem_entry(struct ice_hw *hw, enum ice_block blk, - u64 entry_h) +int ice_flow_rem_entry(struct ice_hw *hw, enum ice_block blk, + u64 entry_h) { struct ice_flow_entry *entry; struct ice_flow_prof *prof; - enum ice_status status = ICE_SUCCESS; + int status = 0; if (entry_h == ICE_FLOW_ENTRY_HANDLE_INVAL) return ICE_ERR_PARAM; - entry = ICE_FLOW_ENTRY_PTR((intptr_t)entry_h); + entry = ICE_FLOW_ENTRY_PTR(entry_h); /* Retain the pointer to the flow profile as the entry will be freed */ prof = entry->prof; @@ -3576,11 +3575,9 @@ ice_flow_set_fld_ext(struct ice_flow_seg_info *seg, enum ice_flow_field fld, enum ice_flow_fld_match_type field_type, u16 val_loc, u16 mask_loc, u16 last_loc) { - u64 bit = BIT_ULL(fld); - - seg->match |= bit; + ice_set_bit(fld, seg->match); if (field_type == ICE_FLOW_FLD_TYPE_RANGE) - seg->range |= bit; + ice_set_bit(fld, seg->range); seg->fields[fld].type = field_type; seg->fields[fld].src.val = val_loc; @@ -3695,11 +3692,11 @@ ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len, * This function removes the flow entries associated to the input * vsi handle and disassociates the vsi from the flow profile. 
*/ -enum ice_status ice_flow_rem_vsi_prof(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle, - u64 prof_id) +int ice_flow_rem_vsi_prof(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle, + u64 prof_id) { struct ice_flow_prof *prof = NULL; - enum ice_status status = ICE_SUCCESS; + int status = 0; if (blk >= ICE_BLK_COUNT || !ice_is_vsi_valid(hw, vsi_handle)) return ICE_ERR_PARAM; @@ -3708,7 +3705,7 @@ enum ice_status ice_flow_rem_vsi_prof(struct ice_hw *hw, enum ice_block blk, u16 prof = ice_flow_find_prof_id(hw, ICE_BLK_FD, prof_id); if (!prof) { ice_debug(hw, ICE_DBG_PKG, - "Cannot find flow profile id=%" PRIu64 "\n", prof_id); + "Cannot find flow profile id=%lu\n", (unsigned long)prof_id); return ICE_ERR_DOES_NOT_EXIST; } @@ -3741,7 +3738,7 @@ enum ice_status ice_flow_rem_vsi_prof(struct ice_hw *hw, enum ice_block blk, u16 } #define ICE_FLOW_RSS_SEG_HDR_L2_MASKS \ -(ICE_FLOW_SEG_HDR_ETH | ICE_FLOW_SEG_HDR_ETH_NON_IP | ICE_FLOW_SEG_HDR_VLAN) +(ICE_FLOW_SEG_HDR_ETH | ICE_FLOW_SEG_HDR_VLAN) #define ICE_FLOW_RSS_SEG_HDR_L3_MASKS \ (ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV6) @@ -3764,7 +3761,7 @@ enum ice_status ice_flow_rem_vsi_prof(struct ice_hw *hw, enum ice_block blk, u16 * header value to set flow field segment for further use in flow * profile entry or removal. 
*/ -static enum ice_status +static int ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u8 seg_cnt, const struct ice_rss_hash_cfg *cfg) { @@ -3786,20 +3783,20 @@ ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u8 seg_cnt, /* set outer most header */ if (cfg->hdr_type == ICE_RSS_INNER_HEADERS_W_OUTER_IPV4) segs[ICE_RSS_OUTER_HEADERS].hdrs |= ICE_FLOW_SEG_HDR_IPV4 | - ICE_FLOW_SEG_HDR_IPV_FRAG | - ICE_FLOW_SEG_HDR_IPV_OTHER; + ICE_FLOW_SEG_HDR_IPV_FRAG | + ICE_FLOW_SEG_HDR_IPV_OTHER; else if (cfg->hdr_type == ICE_RSS_INNER_HEADERS_W_OUTER_IPV6) segs[ICE_RSS_OUTER_HEADERS].hdrs |= ICE_FLOW_SEG_HDR_IPV6 | - ICE_FLOW_SEG_HDR_IPV_FRAG | - ICE_FLOW_SEG_HDR_IPV_OTHER; + ICE_FLOW_SEG_HDR_IPV_FRAG | + ICE_FLOW_SEG_HDR_IPV_OTHER; else if (cfg->hdr_type == ICE_RSS_INNER_HEADERS_W_OUTER_IPV4_GRE) segs[ICE_RSS_OUTER_HEADERS].hdrs |= ICE_FLOW_SEG_HDR_IPV4 | - ICE_FLOW_SEG_HDR_GRE | - ICE_FLOW_SEG_HDR_IPV_OTHER; + ICE_FLOW_SEG_HDR_GRE | + ICE_FLOW_SEG_HDR_IPV_OTHER; else if (cfg->hdr_type == ICE_RSS_INNER_HEADERS_W_OUTER_IPV6_GRE) segs[ICE_RSS_OUTER_HEADERS].hdrs |= ICE_FLOW_SEG_HDR_IPV6 | - ICE_FLOW_SEG_HDR_GRE | - ICE_FLOW_SEG_HDR_IPV_OTHER; + ICE_FLOW_SEG_HDR_GRE | + ICE_FLOW_SEG_HDR_IPV_OTHER; if (seg->hdrs & ~ICE_FLOW_RSS_SEG_HDR_VAL_MASKS & ~ICE_FLOW_RSS_HDRS_INNER_MASK & ~ICE_FLOW_SEG_HDR_IPV_OTHER & @@ -3814,7 +3811,7 @@ ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u8 seg_cnt, if (val && !ice_is_pow2(val)) return ICE_ERR_CFG; - return ICE_SUCCESS; + return 0; } /** @@ -3851,18 +3848,18 @@ void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle) * the VSI from that profile. If the flow profile has no VSIs it will * be removed. 
*/ -enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle) +int ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle) { const enum ice_block blk = ICE_BLK_RSS; struct ice_flow_prof *p, *t; - enum ice_status status = ICE_SUCCESS; + int status = 0; u16 vsig; if (!ice_is_vsi_valid(hw, vsi_handle)) return ICE_ERR_PARAM; if (LIST_EMPTY(&hw->fl_profs[blk])) - return ICE_SUCCESS; + return 0; ice_acquire_lock(&hw->rss_locks); LIST_FOR_EACH_ENTRY_SAFE(p, t, &hw->fl_profs[blk], ice_flow_prof, @@ -3905,11 +3902,14 @@ ice_get_rss_hdr_type(struct ice_flow_prof *prof) if (prof->segs_cnt == ICE_FLOW_SEG_SINGLE) { hdr_type = ICE_RSS_OUTER_HEADERS; } else if (prof->segs_cnt == ICE_FLOW_SEG_MAX) { - if (prof->segs[ICE_RSS_OUTER_HEADERS].hdrs == ICE_FLOW_SEG_HDR_NONE) + const struct ice_flow_seg_info *s; + + s = &prof->segs[ICE_RSS_OUTER_HEADERS]; + if (s->hdrs == ICE_FLOW_SEG_HDR_NONE) hdr_type = ICE_RSS_INNER_HEADERS; - if (prof->segs[ICE_RSS_OUTER_HEADERS].hdrs & ICE_FLOW_SEG_HDR_IPV4) + if (s->hdrs & ICE_FLOW_SEG_HDR_IPV4) hdr_type = ICE_RSS_INNER_HEADERS_W_OUTER_IPV4; - if (prof->segs[ICE_RSS_OUTER_HEADERS].hdrs & ICE_FLOW_SEG_HDR_IPV6) + if (s->hdrs & ICE_FLOW_SEG_HDR_IPV6) hdr_type = ICE_RSS_INNER_HEADERS_W_OUTER_IPV6; } @@ -3929,6 +3929,14 @@ ice_rem_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof) { enum ice_rss_cfg_hdr_type hdr_type; struct ice_rss_cfg *r, *tmp; + u64 seg_match = 0; + u16 i; + + /* convert match bitmap to u64 for hash field comparison */ + ice_for_each_set_bit(i, prof->segs[prof->segs_cnt - 1].match, + ICE_FLOW_FIELD_IDX_MAX) { + seg_match |= 1ULL << i; + } /* Search for RSS hash fields associated to the VSI that match the * hash configurations associated to the flow profile. 
If found @@ -3937,7 +3945,7 @@ ice_rem_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof) hdr_type = ice_get_rss_hdr_type(prof); LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head, ice_rss_cfg, l_entry) - if (r->hash.hash_flds == prof->segs[prof->segs_cnt - 1].match && + if (r->hash.hash_flds == seg_match && r->hash.addl_hdrs == prof->segs[prof->segs_cnt - 1].hdrs && r->hash.hdr_type == hdr_type) { ice_clear_bit(vsi_handle, r->vsis); @@ -3957,27 +3965,34 @@ ice_rem_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof) * * Assumption: lock has already been acquired for RSS list */ -static enum ice_status +static int ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof) { enum ice_rss_cfg_hdr_type hdr_type; struct ice_rss_cfg *r, *rss_cfg; + u64 seg_match = 0; + u16 i; + + ice_for_each_set_bit(i, prof->segs[prof->segs_cnt - 1].match, + ICE_FLOW_FIELD_IDX_MAX) { + seg_match |= 1ULL << i; + } hdr_type = ice_get_rss_hdr_type(prof); LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head, ice_rss_cfg, l_entry) - if (r->hash.hash_flds == prof->segs[prof->segs_cnt - 1].match && + if (r->hash.hash_flds == seg_match && r->hash.addl_hdrs == prof->segs[prof->segs_cnt - 1].hdrs && r->hash.hdr_type == hdr_type) { ice_set_bit(vsi_handle, r->vsis); - return ICE_SUCCESS; + return 0; } rss_cfg = (struct ice_rss_cfg *)ice_malloc(hw, sizeof(*rss_cfg)); if (!rss_cfg) return ICE_ERR_NO_MEMORY; - rss_cfg->hash.hash_flds = prof->segs[prof->segs_cnt - 1].match; + rss_cfg->hash.hash_flds = seg_match; rss_cfg->hash.addl_hdrs = prof->segs[prof->segs_cnt - 1].hdrs; rss_cfg->hash.hdr_type = hdr_type; rss_cfg->hash.symm = prof->cfg.symm; @@ -3985,7 +4000,7 @@ ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof) LIST_ADD_TAIL(&rss_cfg->l_entry, &hw->rss_list_head); - return ICE_SUCCESS; + return 0; } #define ICE_FLOW_PROF_HASH_S 0 @@ -4001,13 +4016,14 @@ ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct 
ice_flow_prof *prof) * [62:63] - Encapsulation flag: * 0 if non-tunneled * 1 if tunneled - * 2 for tunneled with outer ipv4 - * 3 for tunneled with outer ipv6 + * 2 for tunneled with outer IPv4 + * 3 for tunneled with outer IPv6 */ -#define ICE_FLOW_GEN_PROFID(hash, hdr, encap) \ - ((u64)(((u64)(hash) & ICE_FLOW_PROF_HASH_M) | \ +#define ICE_FLOW_GEN_PROFID(hash, hdr, encap) \ + ((u64)(((u64)(hash) & ICE_FLOW_PROF_HASH_M) | \ (((u64)(hdr) << ICE_FLOW_PROF_HDR_S) & ICE_FLOW_PROF_HDR_M) | \ - (((u64)(encap) << ICE_FLOW_PROF_ENCAP_S) & ICE_FLOW_PROF_ENCAP_M))) + (((u64)(encap) << ICE_FLOW_PROF_ENCAP_S) & \ + ICE_FLOW_PROF_ENCAP_M))) static void ice_rss_config_xor_word(struct ice_hw *hw, u8 prof_id, u8 src, u8 dst) @@ -4235,18 +4251,19 @@ ice_rss_update_raw_symm(struct ice_hw *hw, * * Assumption: lock has already been acquired for RSS list */ -static enum ice_status +static int ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, const struct ice_rss_hash_cfg *cfg) { const enum ice_block blk = ICE_BLK_RSS; struct ice_flow_prof *prof = NULL; struct ice_flow_seg_info *segs; - enum ice_status status; u8 segs_cnt; + int status; segs_cnt = (cfg->hdr_type == ICE_RSS_OUTER_HEADERS) ? - ICE_FLOW_SEG_SINGLE : ICE_FLOW_SEG_MAX; + ICE_FLOW_SEG_SINGLE : + ICE_FLOW_SEG_MAX; segs = (struct ice_flow_seg_info *)ice_calloc(hw, segs_cnt, sizeof(*segs)); @@ -4358,25 +4375,23 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, * the input fields to hash on, the flow type and use the VSI number to add * a flow entry to the profile. 
*/ -enum ice_status +int ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, const struct ice_rss_hash_cfg *cfg) { struct ice_rss_hash_cfg local_cfg; - enum ice_status status; + int status; - if (!ice_is_vsi_valid(hw, vsi_handle) || - !cfg || cfg->hdr_type > ICE_RSS_ANY_HEADERS || + if (!ice_is_vsi_valid(hw, vsi_handle) || !cfg || + cfg->hdr_type > ICE_RSS_ANY_HEADERS || cfg->hash_flds == ICE_HASH_INVALID) return ICE_ERR_PARAM; + ice_acquire_lock(&hw->rss_locks); local_cfg = *cfg; if (cfg->hdr_type < ICE_RSS_ANY_HEADERS) { - ice_acquire_lock(&hw->rss_locks); status = ice_add_rss_cfg_sync(hw, vsi_handle, &local_cfg); - ice_release_lock(&hw->rss_locks); } else { - ice_acquire_lock(&hw->rss_locks); local_cfg.hdr_type = ICE_RSS_OUTER_HEADERS; status = ice_add_rss_cfg_sync(hw, vsi_handle, &local_cfg); if (!status) { @@ -4384,8 +4399,8 @@ ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, status = ice_add_rss_cfg_sync(hw, vsi_handle, &local_cfg); } - ice_release_lock(&hw->rss_locks); } + ice_release_lock(&hw->rss_locks); return status; } @@ -4398,18 +4413,19 @@ ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, * * Assumption: lock has already been acquired for RSS list */ -static enum ice_status +static int ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, const struct ice_rss_hash_cfg *cfg) { const enum ice_block blk = ICE_BLK_RSS; struct ice_flow_seg_info *segs; struct ice_flow_prof *prof; - enum ice_status status; u8 segs_cnt; + int status; segs_cnt = (cfg->hdr_type == ICE_RSS_OUTER_HEADERS) ? - ICE_FLOW_SEG_SINGLE : ICE_FLOW_SEG_MAX; + ICE_FLOW_SEG_SINGLE : + ICE_FLOW_SEG_MAX; segs = (struct ice_flow_seg_info *)ice_calloc(hw, segs_cnt, sizeof(*segs)); if (!segs) @@ -4457,15 +4473,15 @@ ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, * removed. Calls are made to underlying flow apis which will in * turn build or update buffers for RSS XLT1 section. 
*/ -enum ice_status +int ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle, const struct ice_rss_hash_cfg *cfg) { struct ice_rss_hash_cfg local_cfg; - enum ice_status status; + int status; - if (!ice_is_vsi_valid(hw, vsi_handle) || - !cfg || cfg->hdr_type > ICE_RSS_ANY_HEADERS || + if (!ice_is_vsi_valid(hw, vsi_handle) || !cfg || + cfg->hdr_type > ICE_RSS_ANY_HEADERS || cfg->hash_flds == ICE_HASH_INVALID) return ICE_ERR_PARAM; @@ -4476,7 +4492,6 @@ ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle, } else { local_cfg.hdr_type = ICE_RSS_OUTER_HEADERS; status = ice_rem_rss_cfg_sync(hw, vsi_handle, &local_cfg); - if (!status) { local_cfg.hdr_type = ICE_RSS_INNER_HEADERS; status = ice_rem_rss_cfg_sync(hw, vsi_handle, @@ -4493,10 +4508,10 @@ ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle, * @hw: pointer to the hardware structure * @vsi_handle: software VSI handle */ -enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle) +int ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle) { - enum ice_status status = ICE_SUCCESS; struct ice_rss_cfg *r; + int status = 0; if (!ice_is_vsi_valid(hw, vsi_handle)) return ICE_ERR_PARAM; diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h index 57e8e1f1df..65b261beca 100644 --- a/drivers/net/ice/base/ice_flow.h +++ b/drivers/net/ice/base/ice_flow.h @@ -7,6 +7,7 @@ #include "ice_flex_type.h" #include "ice_acl.h" +#include "ice_parser.h" #define ICE_IPV4_MAKE_PREFIX_MASK(prefix) ((u32)(~0) << (32 - (prefix))) #define ICE_FLOW_PROF_ID_INVAL 0xfffffffffffffffful @@ -370,14 +371,14 @@ enum ice_flow_avf_hdr_field { enum ice_rss_cfg_hdr_type { ICE_RSS_OUTER_HEADERS, /* take outer headers as inputset. */ ICE_RSS_INNER_HEADERS, /* take inner headers as inputset. */ - /* take inner headers as inputset for packet with outer ipv4. */ + /* take inner headers as inputset for packet with outer IPv4. */ ICE_RSS_INNER_HEADERS_W_OUTER_IPV4, - /* take inner headers as inputset for packet with outer ipv6. 
*/ + /* take inner headers as inputset for packet with outer IPv6. */ ICE_RSS_INNER_HEADERS_W_OUTER_IPV6, /* take outer headers first then inner headers as inputset */ - /* take inner as inputset for GTPoGRE with outer ipv4 + gre. */ + /* take inner as inputset for GTPoGRE with outer IPv4 + GRE. */ ICE_RSS_INNER_HEADERS_W_OUTER_IPV4_GRE, - /* take inner as inputset for GTPoGRE with outer ipv6 + gre. */ + /* take inner as inputset for GTPoGRE with outer IPv6 + GRE. */ ICE_RSS_INNER_HEADERS_W_OUTER_IPV6_GRE, ICE_RSS_ANY_HEADERS }; @@ -452,8 +453,10 @@ struct ice_flow_seg_fld_raw { struct ice_flow_seg_info { u32 hdrs; /* Bitmask indicating protocol headers present */ - u64 match; /* Bitmask indicating header fields to be matched */ - u64 range; /* Bitmask indicating header fields matched as ranges */ + /* Bitmask indicating header fields to be matched */ + ice_declare_bitmap(match, ICE_FLOW_FIELD_IDX_MAX); + /* Bitmask indicating header fields matched as ranges */ + ice_declare_bitmap(range, ICE_FLOW_FIELD_IDX_MAX); struct ice_flow_fld_info fields[ICE_FLOW_FIELD_IDX_MAX]; @@ -482,7 +485,7 @@ struct ice_flow_entry { u8 acts_cnt; }; -#define ICE_FLOW_ENTRY_HNDL(e) ((intptr_t)e) +#define ICE_FLOW_ENTRY_HNDL(e) ((u64)e) #define ICE_FLOW_ENTRY_PTR(h) ((struct ice_flow_entry *)(h)) struct ice_flow_prof { @@ -562,34 +565,33 @@ struct ice_flow_action { u64 ice_flow_find_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir, struct ice_flow_seg_info *segs, u8 segs_cnt); -enum ice_status +int ice_flow_add_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir, u64 prof_id, struct ice_flow_seg_info *segs, u8 segs_cnt, struct ice_flow_action *acts, u8 acts_cnt, struct ice_flow_prof **prof); -enum ice_status +int ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id); -enum ice_status +int ice_flow_assoc_prof(struct ice_hw *hw, enum ice_block blk, struct ice_flow_prof *prof, u16 vsi_handle); -enum ice_status -ice_flow_assoc_vsig_vsi(struct 
ice_hw *hw, enum ice_block blk, u16 vsi_handle, - u16 vsig); -enum ice_status +int ice_flow_set_hw_prof(struct ice_hw *hw, u16 dest_vsi_handle, u16 fdir_vsi_handle, struct ice_parser_profile *prof, enum ice_block blk); -enum ice_status +int +ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle, + u16 vsig); +int ice_flow_get_hw_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id, u8 *hw_prof); - u64 ice_flow_find_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_id); -enum ice_status +int ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id, u64 entry_id, u16 vsi, enum ice_flow_priority prio, void *data, struct ice_flow_action *acts, u8 acts_cnt, u64 *entry_h); -enum ice_status +int ice_flow_rem_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_h); void ice_flow_set_fld(struct ice_flow_seg_info *seg, enum ice_flow_field fld, @@ -600,17 +602,17 @@ ice_flow_set_fld_prefix(struct ice_flow_seg_info *seg, enum ice_flow_field fld, void ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len, u16 val_loc, u16 mask_loc); -enum ice_status ice_flow_rem_vsi_prof(struct ice_hw *hw, enum ice_block blk, - u16 vsi_handle, u64 prof_id); +int ice_flow_rem_vsi_prof(struct ice_hw *hw, enum ice_block blk, + u16 vsi_handle, u64 prof_id); void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle); -enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle); -enum ice_status +int ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle); +int ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds); -enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle); -enum ice_status +int ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle); +int ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, const struct ice_rss_hash_cfg *cfg); -enum ice_status +int ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle, const struct ice_rss_hash_cfg *cfg); void ice_rss_update_raw_symm(struct ice_hw *hw, 
diff --git a/drivers/net/ice/base/ice_fwlog.c b/drivers/net/ice/base/ice_fwlog.c
new file mode 100644
index 0000000000..a285ff3c66
--- /dev/null
+++ b/drivers/net/ice/base/ice_fwlog.c
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2023 Intel Corporation
+ */
+
+#include "ice_osdep.h"
diff --git a/drivers/net/ice/base/ice_fwlog.h b/drivers/net/ice/base/ice_fwlog.h
new file mode 100644
index 0000000000..7faf002754
--- /dev/null
+++ b/drivers/net/ice/base/ice_fwlog.h
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2023 Intel Corporation
+ */
+
diff --git a/drivers/net/ice/base/ice_hw_autogen.h b/drivers/net/ice/base/ice_hw_autogen.h
index 4610cec6a7..9824f2eca4 100644
--- a/drivers/net/ice/base/ice_hw_autogen.h
+++ b/drivers/net/ice/base/ice_hw_autogen.h
@@ -7,6 +7,14 @@
 #ifndef _ICE_HW_AUTOGEN_H_
 #define _ICE_HW_AUTOGEN_H_
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE E800_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S E800_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M E800_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE E800_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S E800_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M E800_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M E800_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M E800_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M
 #define GL_HIDA(_i) (0x00082000 + ((_i) * 4))
 #define GL_HIBA(_i) (0x00081000 + ((_i) * 4))
 #define GL_HICR 0x00082040
@@ -30,9 +38,15 @@
 #define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_S 10
 #define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_M MAKEMASK(0x3F, 10)
 #define GL_RDPU_CNTRL_REQ_WB_PM_TH_S 16
-#define GL_RDPU_CNTRL_REQ_WB_PM_TH_M MAKEMASK(0x1F, 16)
-#define GL_RDPU_CNTRL_ECO_S 21
-#define GL_RDPU_CNTRL_ECO_M MAKEMASK(0x7FF, 21)
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_RDPU_CNTRL_REQ_WB_PM_TH_M : E800_GL_RDPU_CNTRL_REQ_WB_PM_TH_M)
+#define E800_GL_RDPU_CNTRL_REQ_WB_PM_TH_M MAKEMASK(0x1F, 16)
+#define E830_GL_RDPU_CNTRL_REQ_WB_PM_TH_M MAKEMASK(0x3F, 16)
+#define GL_RDPU_CNTRL_ECO_S_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_RDPU_CNTRL_ECO_S : E800_GL_RDPU_CNTRL_ECO_S)
+#define E800_GL_RDPU_CNTRL_ECO_S 21
+#define E830_GL_RDPU_CNTRL_ECO_S 22
+#define GL_RDPU_CNTRL_ECO_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_RDPU_CNTRL_ECO_M : E800_GL_RDPU_CNTRL_ECO_M)
+#define E800_GL_RDPU_CNTRL_ECO_M MAKEMASK(0x7FF, 21)
+#define E830_GL_RDPU_CNTRL_ECO_M MAKEMASK(0x3FF, 22)
 #define MSIX_PBA(_i) (0x00008000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: FLR */
 #define MSIX_PBA_MAX_INDEX 2
 #define MSIX_PBA_PENBIT_S 0
@@ -429,9 +443,11 @@
 #define PF0INT_OICR_CPM_PAGE_QUEUE_S 1
 #define PF0INT_OICR_CPM_PAGE_QUEUE_M BIT(1)
 #define PF0INT_OICR_CPM_PAGE_RSV1_S 2
-#define PF0INT_OICR_CPM_PAGE_RSV1_M MAKEMASK(0xFF, 2)
-#define PF0INT_OICR_CPM_PAGE_HH_COMP_S 10
-#define PF0INT_OICR_CPM_PAGE_HH_COMP_M BIT(10)
+#define PF0INT_OICR_CPM_PAGE_RSV1_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PF0INT_OICR_CPM_PAGE_RSV1_M : E800_PF0INT_OICR_CPM_PAGE_RSV1_M)
+#define E800_PF0INT_OICR_CPM_PAGE_RSV1_M MAKEMASK(0xFF, 2)
+#define E830_PF0INT_OICR_CPM_PAGE_RSV1_M MAKEMASK(0x3F, 2)
+#define E800_PF0INT_OICR_CPM_PAGE_HH_COMP_S 10
+#define E800_PF0INT_OICR_CPM_PAGE_HH_COMP_M BIT(10)
 #define PF0INT_OICR_CPM_PAGE_TSYN_TX_S 11
 #define PF0INT_OICR_CPM_PAGE_TSYN_TX_M BIT(11)
 #define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_S 12
@@ -493,9 +509,11 @@
 #define PF0INT_OICR_HLP_PAGE_QUEUE_S 1
 #define PF0INT_OICR_HLP_PAGE_QUEUE_M BIT(1)
 #define PF0INT_OICR_HLP_PAGE_RSV1_S 2
-#define PF0INT_OICR_HLP_PAGE_RSV1_M MAKEMASK(0xFF, 2)
-#define PF0INT_OICR_HLP_PAGE_HH_COMP_S 10
-#define PF0INT_OICR_HLP_PAGE_HH_COMP_M BIT(10)
+#define PF0INT_OICR_HLP_PAGE_RSV1_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PF0INT_OICR_HLP_PAGE_RSV1_M : E800_PF0INT_OICR_HLP_PAGE_RSV1_M)
+#define E800_PF0INT_OICR_HLP_PAGE_RSV1_M MAKEMASK(0xFF, 2)
+#define E830_PF0INT_OICR_HLP_PAGE_RSV1_M MAKEMASK(0x3F, 2)
+#define E800_PF0INT_OICR_HLP_PAGE_HH_COMP_S 10
+#define E800_PF0INT_OICR_HLP_PAGE_HH_COMP_M BIT(10)
 #define PF0INT_OICR_HLP_PAGE_TSYN_TX_S 11
 #define PF0INT_OICR_HLP_PAGE_TSYN_TX_M BIT(11)
 #define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_S 12
@@ -542,9 +560,11 @@
 #define PF0INT_OICR_PSM_PAGE_QUEUE_S 1
 #define PF0INT_OICR_PSM_PAGE_QUEUE_M BIT(1)
 #define PF0INT_OICR_PSM_PAGE_RSV1_S 2
-#define PF0INT_OICR_PSM_PAGE_RSV1_M MAKEMASK(0xFF, 2)
-#define PF0INT_OICR_PSM_PAGE_HH_COMP_S 10
-#define PF0INT_OICR_PSM_PAGE_HH_COMP_M BIT(10)
+#define PF0INT_OICR_PSM_PAGE_RSV1_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PF0INT_OICR_PSM_PAGE_RSV1_M : E800_PF0INT_OICR_PSM_PAGE_RSV1_M)
+#define E800_PF0INT_OICR_PSM_PAGE_RSV1_M MAKEMASK(0xFF, 2)
+#define E830_PF0INT_OICR_PSM_PAGE_RSV1_M MAKEMASK(0x3F, 2)
+#define E800_PF0INT_OICR_PSM_PAGE_HH_COMP_S 10
+#define E800_PF0INT_OICR_PSM_PAGE_HH_COMP_M BIT(10)
 #define PF0INT_OICR_PSM_PAGE_TSYN_TX_S 11
 #define PF0INT_OICR_PSM_PAGE_TSYN_TX_M BIT(11)
 #define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_S 12
@@ -593,10 +613,10 @@
 #define QTX_COMM_DBELL_PAGE_MAX_INDEX 16383
 #define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_S 0
 #define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
-#define QTX_COMM_DBLQ_DBELL_PAGE(_DBLQ) (0x02F00000 + ((_DBLQ) * 4096)) /* _i=0...255 */ /* Reset Source: CORER */
-#define QTX_COMM_DBLQ_DBELL_PAGE_MAX_INDEX 255
-#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_S 0
-#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_M MAKEMASK(0x1FFF, 0)
+#define E800_QTX_COMM_DBLQ_DBELL_PAGE(_DBLQ) (0x02F00000 + ((_DBLQ) * 4096)) /* _i=0...255 */ /* Reset Source: CORER */
+#define E800_QTX_COMM_DBLQ_DBELL_PAGE_MAX_INDEX 255
+#define E800_QTX_COMM_DBLQ_DBELL_PAGE_TAIL_S 0
+#define E800_QTX_COMM_DBLQ_DBELL_PAGE_TAIL_M MAKEMASK(0x1FFF, 0)
 #define VSI_MBX_ARQBAH(_VSI) (0x02000018 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
 #define VSI_MBX_ARQBAH_MAX_INDEX 767
 #define VSI_MBX_ARQBAH_ARQBAH_S 0
@@ -831,7 +851,9 @@
 #define GLCOMM_CQ_CTL_CMD_M MAKEMASK(0x7, 4)
 #define GLCOMM_CQ_CTL_ID_S 16
 #define GLCOMM_CQ_CTL_ID_M MAKEMASK(0x3FFF, 16)
-#define GLCOMM_MIN_MAX_PKT 0x000FC064 /* Reset Source: CORER */
+#define GLCOMM_MIN_MAX_PKT_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLCOMM_MIN_MAX_PKT : E800_GLCOMM_MIN_MAX_PKT)
+#define E800_GLCOMM_MIN_MAX_PKT 0x000FC064 /* Reset Source: CORER */
+#define E830_GLCOMM_MIN_MAX_PKT 0x000FCCD0 /* Reset Source: CORER */
 #define GLCOMM_MIN_MAX_PKT_MAHDL_S 0
 #define GLCOMM_MIN_MAX_PKT_MAHDL_M MAKEMASK(0x3FFF, 0)
 #define GLCOMM_MIN_MAX_PKT_MIHDL_S 16
@@ -864,7 +886,9 @@
 #define GLCOMM_QUANTA_PROF_MAX_CMD_M MAKEMASK(0xFF, 16)
 #define GLCOMM_QUANTA_PROF_MAX_DESC_S 24
 #define GLCOMM_QUANTA_PROF_MAX_DESC_M MAKEMASK(0x3F, 24)
-#define GLLAN_TCLAN_CACHE_CTL 0x000FC0B8 /* Reset Source: CORER */
+#define GLLAN_TCLAN_CACHE_CTL_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLLAN_TCLAN_CACHE_CTL : E800_GLLAN_TCLAN_CACHE_CTL)
+#define E800_GLLAN_TCLAN_CACHE_CTL 0x000FC0B8 /* Reset Source: CORER */
+#define E830_GLLAN_TCLAN_CACHE_CTL 0x000FCCBC /* Reset Source: CORER */
 #define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_S 0
 #define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_M MAKEMASK(0x3F, 0)
 #define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_S 6
@@ -1999,18 +2023,18 @@
 #define GLTPB_WB_RL_PERIOD_M MAKEMASK(0xFFFF, 0)
 #define GLTPB_WB_RL_EN_S 16
 #define GLTPB_WB_RL_EN_M BIT(16)
-#define PRTDCB_FCCFG 0x001E4640 /* Reset Source: GLOBR */
-#define PRTDCB_FCCFG_TFCE_S 3
-#define PRTDCB_FCCFG_TFCE_M MAKEMASK(0x3, 3)
-#define PRTDCB_FCRTV 0x001E4600 /* Reset Source: GLOBR */
-#define PRTDCB_FCRTV_FC_REFRESH_TH_S 0
-#define PRTDCB_FCRTV_FC_REFRESH_TH_M MAKEMASK(0xFFFF, 0)
-#define PRTDCB_FCTTVN(_i) (0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: GLOBR */
-#define PRTDCB_FCTTVN_MAX_INDEX 3
-#define PRTDCB_FCTTVN_TTV_2N_S 0
-#define PRTDCB_FCTTVN_TTV_2N_M MAKEMASK(0xFFFF, 0)
-#define PRTDCB_FCTTVN_TTV_2N_P1_S 16
-#define PRTDCB_FCTTVN_TTV_2N_P1_M MAKEMASK(0xFFFF, 16)
+#define E800_PRTDCB_FCCFG 0x001E4640 /* Reset Source: GLOBR */
+#define E800_PRTDCB_FCCFG_TFCE_S 3
+#define E800_PRTDCB_FCCFG_TFCE_M MAKEMASK(0x3, 3)
+#define E800_PRTDCB_FCRTV 0x001E4600 /* Reset Source: GLOBR */
+#define E800_PRTDCB_FCRTV_FC_REFRESH_TH_S 0
+#define E800_PRTDCB_FCRTV_FC_REFRESH_TH_M MAKEMASK(0xFFFF, 0)
+#define E800_PRTDCB_FCTTVN(_i) (0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: GLOBR */
+#define E800_PRTDCB_FCTTVN_MAX_INDEX 3
+#define E800_PRTDCB_FCTTVN_TTV_2N_S 0
+#define E800_PRTDCB_FCTTVN_TTV_2N_M MAKEMASK(0xFFFF, 0)
+#define E800_PRTDCB_FCTTVN_TTV_2N_P1_S 16
+#define E800_PRTDCB_FCTTVN_TTV_2N_P1_M MAKEMASK(0xFFFF, 16)
 #define PRTDCB_GENC 0x00083000 /* Reset Source: CORER */
 #define PRTDCB_GENC_NUMTC_S 2
 #define PRTDCB_GENC_NUMTC_M MAKEMASK(0xF, 2)
@@ -2376,214 +2400,222 @@
 #define TPB_WB_RL_TC_STAT_MAX_INDEX 31
 #define TPB_WB_RL_TC_STAT_BUCKET_S 0
 #define TPB_WB_RL_TC_STAT_BUCKET_M MAKEMASK(0x1FFFF, 0)
-#define GL_ACLEXT_CDMD_L1SEL(_i) (0x00210054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_CDMD_L1SEL_MAX_INDEX 2
-#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_S 0
-#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x1F, 0)
-#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_S 8
-#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x1F, 8)
-#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_S 16
-#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x1F, 16)
-#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_S 24
-#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x1F, 24)
-#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_S 30
-#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_M MAKEMASK(0x3, 30)
-#define GL_ACLEXT_CTLTBL_L2ADDR(_i) (0x00210084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_CTLTBL_L2ADDR_MAX_INDEX 2
-#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_S 0
-#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_M MAKEMASK(0x7, 0)
-#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_S 8
-#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_M MAKEMASK(0x7, 8)
-#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_S 31
-#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_M BIT(31)
-#define GL_ACLEXT_CTLTBL_L2DATA(_i) (0x00210090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_CTLTBL_L2DATA_MAX_INDEX 2
-#define GL_ACLEXT_CTLTBL_L2DATA_DATA_S 0
-#define GL_ACLEXT_CTLTBL_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
-#define GL_ACLEXT_DFLT_L2PRFL(_i) (0x00210138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_DFLT_L2PRFL_MAX_INDEX 2
-#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_S 0
-#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_M MAKEMASK(0xFFFF, 0)
+#define E800_GL_ACLEXT_CDMD_L1SEL(_i) (0x00210054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_CDMD_L1SEL_MAX_INDEX 2
+#define E800_GL_ACLEXT_CDMD_L1SEL_RX_SEL_S 0
+#define E800_GL_ACLEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x1F, 0)
+#define E800_GL_ACLEXT_CDMD_L1SEL_TX_SEL_S 8
+#define E800_GL_ACLEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x1F, 8)
+#define E800_GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_S 16
+#define E800_GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x1F, 16)
+#define E800_GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_S 24
+#define E800_GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x1F, 24)
+#define E800_GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_S 30
+#define E800_GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_M MAKEMASK(0x3, 30)
+#define E800_GL_ACLEXT_CTLTBL_L2ADDR(_i) (0x00210084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_CTLTBL_L2ADDR_MAX_INDEX 2
+#define E800_GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_S 0
+#define E800_GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_M MAKEMASK(0x7, 0)
+#define E800_GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_S 8
+#define E800_GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_M MAKEMASK(0x7, 8)
+#define E800_GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_S 31
+#define E800_GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_M BIT(31)
+#define E800_GL_ACLEXT_CTLTBL_L2DATA(_i) (0x00210090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_CTLTBL_L2DATA_MAX_INDEX 2
+#define E800_GL_ACLEXT_CTLTBL_L2DATA_DATA_S 0
+#define E800_GL_ACLEXT_CTLTBL_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GL_ACLEXT_DFLT_L2PRFL(_i) (0x00210138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_DFLT_L2PRFL_MAX_INDEX 2
+#define E800_GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_S 0
+#define E800_GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_M MAKEMASK(0xFFFF, 0)
 #define GL_ACLEXT_DFLT_L2PRFL_ACL(_i) (0x00393800 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
 #define GL_ACLEXT_DFLT_L2PRFL_ACL_MAX_INDEX 2
 #define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_S 0
 #define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_M MAKEMASK(0xFFFF, 0)
-#define GL_ACLEXT_FLGS_L1SEL0_1(_i) (0x0021006C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_FLGS_L1SEL0_1_MAX_INDEX 2
-#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_S 0
-#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x1FF, 0)
-#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_S 16
-#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x1FF, 16)
-#define GL_ACLEXT_FLGS_L1SEL2_3(_i) (0x00210078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_FLGS_L1SEL2_3_MAX_INDEX 2
-#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_S 0
-#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x1FF, 0)
-#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_S 16
-#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x1FF, 16)
-#define GL_ACLEXT_FLGS_L1TBL(_i) (0x00210060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_FLGS_L1TBL_MAX_INDEX 2
-#define GL_ACLEXT_FLGS_L1TBL_LSB_S 0
-#define GL_ACLEXT_FLGS_L1TBL_LSB_M MAKEMASK(0xFFFF, 0)
-#define GL_ACLEXT_FLGS_L1TBL_MSB_S 16
-#define GL_ACLEXT_FLGS_L1TBL_MSB_M MAKEMASK(0xFFFF, 16)
-#define GL_ACLEXT_FORCE_L1CDID(_i) (0x00210018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_FORCE_L1CDID_MAX_INDEX 2
-#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_S 0
-#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_M MAKEMASK(0xF, 0)
-#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
-#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
-#define GL_ACLEXT_FORCE_PID(_i) (0x00210000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_FORCE_PID_MAX_INDEX 2
-#define GL_ACLEXT_FORCE_PID_STATIC_PID_S 0
-#define GL_ACLEXT_FORCE_PID_STATIC_PID_M MAKEMASK(0xFFFF, 0)
-#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_S 31
-#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_M BIT(31)
-#define GL_ACLEXT_K2N_L2ADDR(_i) (0x00210144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_K2N_L2ADDR_MAX_INDEX 2
-#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_S 0
-#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_M MAKEMASK(0x7F, 0)
-#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_S 31
-#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_M BIT(31)
-#define GL_ACLEXT_K2N_L2DATA(_i) (0x00210150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_K2N_L2DATA_MAX_INDEX 2
-#define GL_ACLEXT_K2N_L2DATA_DATA0_S 0
-#define GL_ACLEXT_K2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
-#define GL_ACLEXT_K2N_L2DATA_DATA1_S 8
-#define GL_ACLEXT_K2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
-#define GL_ACLEXT_K2N_L2DATA_DATA2_S 16
-#define GL_ACLEXT_K2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
-#define GL_ACLEXT_K2N_L2DATA_DATA3_S 24
-#define GL_ACLEXT_K2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
-#define GL_ACLEXT_L2_PMASK0(_i) (0x002100FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_L2_PMASK0_MAX_INDEX 2
-#define GL_ACLEXT_L2_PMASK0_BITMASK_S 0
-#define GL_ACLEXT_L2_PMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
-#define GL_ACLEXT_L2_PMASK1(_i) (0x00210108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_L2_PMASK1_MAX_INDEX 2
-#define GL_ACLEXT_L2_PMASK1_BITMASK_S 0
-#define GL_ACLEXT_L2_PMASK1_BITMASK_M MAKEMASK(0xFFFF, 0)
-#define GL_ACLEXT_L2_TMASK0(_i) (0x00210498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_L2_TMASK0_MAX_INDEX 2
-#define GL_ACLEXT_L2_TMASK0_BITMASK_S 0
-#define GL_ACLEXT_L2_TMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
-#define GL_ACLEXT_L2_TMASK1(_i) (0x002104A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_L2_TMASK1_MAX_INDEX 2
-#define GL_ACLEXT_L2_TMASK1_BITMASK_S 0
-#define GL_ACLEXT_L2_TMASK1_BITMASK_M MAKEMASK(0xFF, 0)
-#define GL_ACLEXT_L2BMP0_3(_i) (0x002100A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_L2BMP0_3_MAX_INDEX 2
-#define GL_ACLEXT_L2BMP0_3_BMP0_S 0
-#define GL_ACLEXT_L2BMP0_3_BMP0_M MAKEMASK(0xFF, 0)
-#define GL_ACLEXT_L2BMP0_3_BMP1_S 8
-#define GL_ACLEXT_L2BMP0_3_BMP1_M MAKEMASK(0xFF, 8)
-#define GL_ACLEXT_L2BMP0_3_BMP2_S 16
-#define GL_ACLEXT_L2BMP0_3_BMP2_M MAKEMASK(0xFF, 16)
-#define GL_ACLEXT_L2BMP0_3_BMP3_S 24
-#define GL_ACLEXT_L2BMP0_3_BMP3_M MAKEMASK(0xFF, 24)
-#define GL_ACLEXT_L2BMP4_7(_i) (0x002100B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_L2BMP4_7_MAX_INDEX 2
-#define GL_ACLEXT_L2BMP4_7_BMP4_S 0
-#define GL_ACLEXT_L2BMP4_7_BMP4_M MAKEMASK(0xFF, 0)
-#define GL_ACLEXT_L2BMP4_7_BMP5_S 8
-#define GL_ACLEXT_L2BMP4_7_BMP5_M MAKEMASK(0xFF, 8)
-#define GL_ACLEXT_L2BMP4_7_BMP6_S 16
-#define GL_ACLEXT_L2BMP4_7_BMP6_M MAKEMASK(0xFF, 16)
-#define GL_ACLEXT_L2BMP4_7_BMP7_S 24
-#define GL_ACLEXT_L2BMP4_7_BMP7_M MAKEMASK(0xFF, 24)
-#define GL_ACLEXT_L2PRTMOD(_i) (0x0021009C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_L2PRTMOD_MAX_INDEX 2
-#define GL_ACLEXT_L2PRTMOD_XLT1_S 0
-#define GL_ACLEXT_L2PRTMOD_XLT1_M MAKEMASK(0x3, 0)
-#define GL_ACLEXT_L2PRTMOD_XLT2_S 8
-#define GL_ACLEXT_L2PRTMOD_XLT2_M MAKEMASK(0x3, 8)
-#define GL_ACLEXT_N2N_L2ADDR(_i) (0x0021015C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_N2N_L2ADDR_MAX_INDEX 2
-#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_S 0
-#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_M MAKEMASK(0x3F, 0)
-#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_S 31
-#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_M BIT(31)
-#define GL_ACLEXT_N2N_L2DATA(_i) (0x00210168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_N2N_L2DATA_MAX_INDEX 2
-#define GL_ACLEXT_N2N_L2DATA_DATA0_S 0
-#define GL_ACLEXT_N2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
-#define GL_ACLEXT_N2N_L2DATA_DATA1_S 8
-#define GL_ACLEXT_N2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
-#define GL_ACLEXT_N2N_L2DATA_DATA2_S 16
-#define GL_ACLEXT_N2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
-#define GL_ACLEXT_N2N_L2DATA_DATA3_S 24
-#define GL_ACLEXT_N2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
-#define GL_ACLEXT_P2P_L1ADDR(_i) (0x00210024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_P2P_L1ADDR_MAX_INDEX 2
-#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_S 0
-#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_M BIT(0)
-#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_S 31
-#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_M BIT(31)
-#define GL_ACLEXT_P2P_L1DATA(_i) (0x00210030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_P2P_L1DATA_MAX_INDEX 2
-#define GL_ACLEXT_P2P_L1DATA_DATA_S 0
-#define GL_ACLEXT_P2P_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
-#define GL_ACLEXT_PID_L2GKTYPE(_i) (0x002100F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_PID_L2GKTYPE_MAX_INDEX 2
-#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_S 0
-#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_M MAKEMASK(0x3, 0)
-#define GL_ACLEXT_PLVL_SEL(_i) (0x0021000C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_PLVL_SEL_MAX_INDEX 2
-#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_S 0
-#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_M BIT(0)
-#define GL_ACLEXT_TCAM_L2ADDR(_i) (0x00210114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_TCAM_L2ADDR_MAX_INDEX 2
-#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_S 0
-#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_M MAKEMASK(0x3FF, 0)
-#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_S 31
-#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_M BIT(31)
-#define GL_ACLEXT_TCAM_L2DATALSB(_i) (0x00210120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_TCAM_L2DATALSB_MAX_INDEX 2
-#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_S 0
-#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_M MAKEMASK(0xFFFFFFFF, 0)
-#define GL_ACLEXT_TCAM_L2DATAMSB(_i) (0x0021012C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_TCAM_L2DATAMSB_MAX_INDEX 2
-#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_S 0
-#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_M MAKEMASK(0xFF, 0)
-#define GL_ACLEXT_XLT0_L1ADDR(_i) (0x0021003C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_XLT0_L1ADDR_MAX_INDEX 2
-#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_S 0
-#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_M MAKEMASK(0xFF, 0)
-#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_S 31
-#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_M BIT(31)
-#define GL_ACLEXT_XLT0_L1DATA(_i) (0x00210048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_XLT0_L1DATA_MAX_INDEX 2
-#define GL_ACLEXT_XLT0_L1DATA_DATA_S 0
-#define GL_ACLEXT_XLT0_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
-#define GL_ACLEXT_XLT1_L2ADDR(_i) (0x002100C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_XLT1_L2ADDR_MAX_INDEX 2
-#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_S 0
-#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M MAKEMASK(0x7FF, 0)
-#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_S 31
-#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M BIT(31)
-#define GL_ACLEXT_XLT1_L2DATA(_i) (0x002100CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_XLT1_L2DATA_MAX_INDEX 2
-#define GL_ACLEXT_XLT1_L2DATA_DATA_S 0
-#define GL_ACLEXT_XLT1_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
-#define GL_ACLEXT_XLT2_L2ADDR(_i) (0x002100D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_XLT2_L2ADDR_MAX_INDEX 2
-#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S 0
-#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M MAKEMASK(0x1FF, 0)
-#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_S 31
-#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_M BIT(31)
-#define GL_ACLEXT_XLT2_L2DATA(_i) (0x002100E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
-#define GL_ACLEXT_XLT2_L2DATA_MAX_INDEX 2
-#define GL_ACLEXT_XLT2_L2DATA_DATA_S 0
-#define GL_ACLEXT_XLT2_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GL_ACLEXT_FLGS_L1SEL0_1(_i) (0x0021006C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_FLGS_L1SEL0_1_MAX_INDEX 2
+#define E800_GL_ACLEXT_FLGS_L1SEL0_1_FLS0_S 0
+#define E800_GL_ACLEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x1FF, 0)
+#define E800_GL_ACLEXT_FLGS_L1SEL0_1_FLS1_S 16
+#define E800_GL_ACLEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x1FF, 16)
+#define E800_GL_ACLEXT_FLGS_L1SEL2_3(_i) (0x00210078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_FLGS_L1SEL2_3_MAX_INDEX 2
+#define E800_GL_ACLEXT_FLGS_L1SEL2_3_FLS2_S 0
+#define E800_GL_ACLEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x1FF, 0)
+#define E800_GL_ACLEXT_FLGS_L1SEL2_3_FLS3_S 16
+#define E800_GL_ACLEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x1FF, 16)
+#define E800_GL_ACLEXT_FLGS_L1TBL(_i) (0x00210060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_FLGS_L1TBL_MAX_INDEX 2
+#define E800_GL_ACLEXT_FLGS_L1TBL_LSB_S 0
+#define E800_GL_ACLEXT_FLGS_L1TBL_LSB_M MAKEMASK(0xFFFF, 0)
+#define E800_GL_ACLEXT_FLGS_L1TBL_MSB_S 16
+#define E800_GL_ACLEXT_FLGS_L1TBL_MSB_M MAKEMASK(0xFFFF, 16)
+#define E800_GL_ACLEXT_FORCE_L1CDID(_i) (0x00210018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_FORCE_L1CDID_MAX_INDEX 2
+#define E800_GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_S 0
+#define E800_GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_M MAKEMASK(0xF, 0)
+#define E800_GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define E800_GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define E800_GL_ACLEXT_FORCE_PID(_i) (0x00210000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_FORCE_PID_MAX_INDEX 2
+#define E800_GL_ACLEXT_FORCE_PID_STATIC_PID_S 0
+#define E800_GL_ACLEXT_FORCE_PID_STATIC_PID_M MAKEMASK(0xFFFF, 0)
+#define E800_GL_ACLEXT_FORCE_PID_STATIC_PID_EN_S 31
+#define E800_GL_ACLEXT_FORCE_PID_STATIC_PID_EN_M BIT(31)
+#define E800_GL_ACLEXT_K2N_L2ADDR(_i) (0x00210144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_K2N_L2ADDR_MAX_INDEX 2
+#define E800_GL_ACLEXT_K2N_L2ADDR_LINE_IDX_S 0
+#define E800_GL_ACLEXT_K2N_L2ADDR_LINE_IDX_M MAKEMASK(0x7F, 0)
+#define E800_GL_ACLEXT_K2N_L2ADDR_AUTO_INC_S 31
+#define E800_GL_ACLEXT_K2N_L2ADDR_AUTO_INC_M BIT(31)
+#define E800_GL_ACLEXT_K2N_L2DATA(_i) (0x00210150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_K2N_L2DATA_MAX_INDEX 2
+#define E800_GL_ACLEXT_K2N_L2DATA_DATA0_S 0
+#define E800_GL_ACLEXT_K2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
+#define E800_GL_ACLEXT_K2N_L2DATA_DATA1_S 8
+#define E800_GL_ACLEXT_K2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
+#define E800_GL_ACLEXT_K2N_L2DATA_DATA2_S 16
+#define E800_GL_ACLEXT_K2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
+#define E800_GL_ACLEXT_K2N_L2DATA_DATA3_S 24
+#define E800_GL_ACLEXT_K2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
+#define E800_GL_ACLEXT_L2_PMASK0(_i) (0x002100FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_L2_PMASK0_MAX_INDEX 2
+#define E800_GL_ACLEXT_L2_PMASK0_BITMASK_S 0
+#define E800_GL_ACLEXT_L2_PMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GL_ACLEXT_L2_PMASK1(_i) (0x00210108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_L2_PMASK1_MAX_INDEX 2
+#define E800_GL_ACLEXT_L2_PMASK1_BITMASK_S 0
+#define E800_GL_ACLEXT_L2_PMASK1_BITMASK_M MAKEMASK(0xFFFF, 0)
+#define E800_GL_ACLEXT_L2_TMASK0(_i) (0x00210498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_L2_TMASK0_MAX_INDEX 2
+#define E800_GL_ACLEXT_L2_TMASK0_BITMASK_S 0
+#define E800_GL_ACLEXT_L2_TMASK0_BITMASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GL_ACLEXT_L2_TMASK1(_i) (0x002104A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_L2_TMASK1_MAX_INDEX 2
+#define E800_GL_ACLEXT_L2_TMASK1_BITMASK_S 0
+#define E800_GL_ACLEXT_L2_TMASK1_BITMASK_M MAKEMASK(0xFF, 0)
+#define E800_GL_ACLEXT_L2BMP0_3(_i) (0x002100A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_L2BMP0_3_MAX_INDEX 2
+#define E800_GL_ACLEXT_L2BMP0_3_BMP0_S 0
+#define E800_GL_ACLEXT_L2BMP0_3_BMP0_M MAKEMASK(0xFF, 0)
+#define E800_GL_ACLEXT_L2BMP0_3_BMP1_S 8
+#define E800_GL_ACLEXT_L2BMP0_3_BMP1_M MAKEMASK(0xFF, 8)
+#define E800_GL_ACLEXT_L2BMP0_3_BMP2_S 16
+#define E800_GL_ACLEXT_L2BMP0_3_BMP2_M MAKEMASK(0xFF, 16)
+#define E800_GL_ACLEXT_L2BMP0_3_BMP3_S 24
+#define E800_GL_ACLEXT_L2BMP0_3_BMP3_M MAKEMASK(0xFF, 24)
+#define E800_GL_ACLEXT_L2BMP4_7(_i) (0x002100B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_L2BMP4_7_MAX_INDEX 2
+#define E800_GL_ACLEXT_L2BMP4_7_BMP4_S 0
+#define E800_GL_ACLEXT_L2BMP4_7_BMP4_M MAKEMASK(0xFF, 0)
+#define E800_GL_ACLEXT_L2BMP4_7_BMP5_S 8
+#define E800_GL_ACLEXT_L2BMP4_7_BMP5_M MAKEMASK(0xFF, 8)
+#define E800_GL_ACLEXT_L2BMP4_7_BMP6_S 16
+#define E800_GL_ACLEXT_L2BMP4_7_BMP6_M MAKEMASK(0xFF, 16)
+#define E800_GL_ACLEXT_L2BMP4_7_BMP7_S 24
+#define E800_GL_ACLEXT_L2BMP4_7_BMP7_M MAKEMASK(0xFF, 24)
+#define E800_GL_ACLEXT_L2PRTMOD(_i) (0x0021009C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_L2PRTMOD_MAX_INDEX 2
+#define E800_GL_ACLEXT_L2PRTMOD_XLT1_S 0
+#define E800_GL_ACLEXT_L2PRTMOD_XLT1_M MAKEMASK(0x3, 0)
+#define E800_GL_ACLEXT_L2PRTMOD_XLT2_S 8
+#define E800_GL_ACLEXT_L2PRTMOD_XLT2_M MAKEMASK(0x3, 8)
+#define E800_GL_ACLEXT_N2N_L2ADDR(_i) (0x0021015C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_N2N_L2ADDR_MAX_INDEX 2
+#define E800_GL_ACLEXT_N2N_L2ADDR_LINE_IDX_S 0
+#define E800_GL_ACLEXT_N2N_L2ADDR_LINE_IDX_M MAKEMASK(0x3F, 0)
+#define E800_GL_ACLEXT_N2N_L2ADDR_AUTO_INC_S 31
+#define E800_GL_ACLEXT_N2N_L2ADDR_AUTO_INC_M BIT(31)
+#define E800_GL_ACLEXT_N2N_L2DATA(_i) (0x00210168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_N2N_L2DATA_MAX_INDEX 2
+#define E800_GL_ACLEXT_N2N_L2DATA_DATA0_S 0
+#define E800_GL_ACLEXT_N2N_L2DATA_DATA0_M MAKEMASK(0xFF, 0)
+#define E800_GL_ACLEXT_N2N_L2DATA_DATA1_S 8
+#define E800_GL_ACLEXT_N2N_L2DATA_DATA1_M MAKEMASK(0xFF, 8)
+#define E800_GL_ACLEXT_N2N_L2DATA_DATA2_S 16
+#define E800_GL_ACLEXT_N2N_L2DATA_DATA2_M MAKEMASK(0xFF, 16)
+#define E800_GL_ACLEXT_N2N_L2DATA_DATA3_S 24
+#define E800_GL_ACLEXT_N2N_L2DATA_DATA3_M MAKEMASK(0xFF, 24)
+#define E800_GL_ACLEXT_P2P_L1ADDR(_i) (0x00210024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_P2P_L1ADDR_MAX_INDEX 2
+#define E800_GL_ACLEXT_P2P_L1ADDR_LINE_IDX_S 0
+#define E800_GL_ACLEXT_P2P_L1ADDR_LINE_IDX_M BIT(0)
+#define E800_GL_ACLEXT_P2P_L1ADDR_AUTO_INC_S 31
+#define E800_GL_ACLEXT_P2P_L1ADDR_AUTO_INC_M BIT(31)
+#define E800_GL_ACLEXT_P2P_L1DATA(_i) (0x00210030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_P2P_L1DATA_MAX_INDEX 2
+#define E800_GL_ACLEXT_P2P_L1DATA_DATA_S 0
+#define E800_GL_ACLEXT_P2P_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GL_ACLEXT_PID_L2GKTYPE(_i) (0x002100F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_PID_L2GKTYPE_MAX_INDEX 2
+#define E800_GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_S 0
+#define E800_GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_M MAKEMASK(0x3, 0)
+#define E800_GL_ACLEXT_PLVL_SEL(_i) (0x0021000C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_PLVL_SEL_MAX_INDEX 2
+#define E800_GL_ACLEXT_PLVL_SEL_PLVL_SEL_S 0
+#define E800_GL_ACLEXT_PLVL_SEL_PLVL_SEL_M BIT(0)
+#define E800_GL_ACLEXT_TCAM_L2ADDR(_i) (0x00210114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_TCAM_L2ADDR_MAX_INDEX 2
+#define E800_GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_S 0
+#define E800_GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_M MAKEMASK(0x3FF, 0)
+#define E800_GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_S 31
+#define E800_GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_M BIT(31)
+#define E800_GL_ACLEXT_TCAM_L2DATALSB(_i) (0x00210120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_TCAM_L2DATALSB_MAX_INDEX 2
+#define E800_GL_ACLEXT_TCAM_L2DATALSB_DATALSB_S 0
+#define E800_GL_ACLEXT_TCAM_L2DATALSB_DATALSB_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GL_ACLEXT_TCAM_L2DATAMSB(_i) (0x0021012C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_TCAM_L2DATAMSB_MAX_INDEX 2
+#define E800_GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_S 0
+#define E800_GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_M MAKEMASK(0xFF, 0)
+#define E800_GL_ACLEXT_XLT0_L1ADDR(_i) (0x0021003C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_XLT0_L1ADDR_MAX_INDEX 2
+#define E800_GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_S 0
+#define E800_GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_M MAKEMASK(0xFF, 0)
+#define E800_GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_S 31
+#define E800_GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_M BIT(31)
+#define E800_GL_ACLEXT_XLT0_L1DATA(_i) (0x00210048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_XLT0_L1DATA_MAX_INDEX 2
+#define E800_GL_ACLEXT_XLT0_L1DATA_DATA_S 0
+#define E800_GL_ACLEXT_XLT0_L1DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GL_ACLEXT_XLT1_L2ADDR(_i) (0x002100C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_XLT1_L2ADDR_MAX_INDEX 2
+#define E800_GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_S 0
+#define E800_GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M MAKEMASK(0x7FF, 0)
+#define E800_GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_S 31
+#define E800_GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M BIT(31)
+#define E800_GL_ACLEXT_XLT1_L2DATA(_i) (0x002100CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_XLT1_L2DATA_MAX_INDEX 2
+#define E800_GL_ACLEXT_XLT1_L2DATA_DATA_S 0
+#define E800_GL_ACLEXT_XLT1_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GL_ACLEXT_XLT2_L2ADDR(_i) (0x002100D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_XLT2_L2ADDR_MAX_INDEX 2
+#define E800_GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S 0
+#define E800_GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M MAKEMASK(0x1FF, 0)
+#define E800_GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_S 31
+#define E800_GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_M BIT(31)
+#define E800_GL_ACLEXT_XLT2_L2DATA(_i) (0x002100E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define E800_GL_ACLEXT_XLT2_L2DATA_MAX_INDEX 2
+#define E800_GL_ACLEXT_XLT2_L2DATA_DATA_S 0
+#define E800_GL_ACLEXT_XLT2_L2DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
 #define GL_PREEXT_CDMD_L1SEL(_i) (0x0020F054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
 #define GL_PREEXT_CDMD_L1SEL_MAX_INDEX 2
 #define GL_PREEXT_CDMD_L1SEL_RX_SEL_S 0
-#define GL_PREEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x1F, 0)
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PREEXT_CDMD_L1SEL_RX_SEL_M : E800_GL_PREEXT_CDMD_L1SEL_RX_SEL_M)
+#define E800_GL_PREEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x1F, 0)
+#define E830_GL_PREEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x3F, 0)
 #define GL_PREEXT_CDMD_L1SEL_TX_SEL_S 8
-#define GL_PREEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x1F, 8)
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PREEXT_CDMD_L1SEL_TX_SEL_M : E800_GL_PREEXT_CDMD_L1SEL_TX_SEL_M)
+#define E800_GL_PREEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x1F, 8)
+#define E830_GL_PREEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x3F, 8)
 #define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_S 16
-#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x1F, 16)
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M : E800_GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M)
+#define E800_GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x1F, 16)
+#define E830_GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x3F, 16)
 #define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_S 24
-#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x1F, 24)
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M : E800_GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M)
+#define E800_GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x1F, 24)
+#define E830_GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x3F, 24)
 #define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_S 30
 #define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_M MAKEMASK(0x3, 30)
 #define GL_PREEXT_CTLTBL_L2ADDR(_i) (0x0020F084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
@@ -2605,15 +2637,23 @@
 #define GL_PREEXT_FLGS_L1SEL0_1(_i) (0x0020F06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
 #define GL_PREEXT_FLGS_L1SEL0_1_MAX_INDEX 2
 #define GL_PREEXT_FLGS_L1SEL0_1_FLS0_S 0
-#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PREEXT_FLGS_L1SEL0_1_FLS0_M : E800_GL_PREEXT_FLGS_L1SEL0_1_FLS0_M)
+#define E800_GL_PREEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x1FF, 0)
+#define E830_GL_PREEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x3FF, 0)
 #define GL_PREEXT_FLGS_L1SEL0_1_FLS1_S 16
-#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PREEXT_FLGS_L1SEL0_1_FLS1_M : E800_GL_PREEXT_FLGS_L1SEL0_1_FLS1_M)
+#define E800_GL_PREEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x1FF, 16)
+#define E830_GL_PREEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x3FF, 16)
 #define GL_PREEXT_FLGS_L1SEL2_3(_i) (0x0020F078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
 #define GL_PREEXT_FLGS_L1SEL2_3_MAX_INDEX 2
 #define GL_PREEXT_FLGS_L1SEL2_3_FLS2_S 0
-#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PREEXT_FLGS_L1SEL2_3_FLS2_M : E800_GL_PREEXT_FLGS_L1SEL2_3_FLS2_M)
+#define E800_GL_PREEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x1FF, 0)
+#define E830_GL_PREEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x3FF, 0)
 #define GL_PREEXT_FLGS_L1SEL2_3_FLS3_S 16
-#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PREEXT_FLGS_L1SEL2_3_FLS3_M : E800_GL_PREEXT_FLGS_L1SEL2_3_FLS3_M)
+#define E800_GL_PREEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x1FF, 16)
+#define E830_GL_PREEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x3FF, 16)
 #define GL_PREEXT_FLGS_L1TBL(_i) (0x0020F060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
 #define GL_PREEXT_FLGS_L1TBL_MAX_INDEX 2
 #define GL_PREEXT_FLGS_L1TBL_LSB_S 0
@@ -2771,13 +2811,21 @@
 #define GL_PSTEXT_CDMD_L1SEL(_i) (0x0020E054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
 #define GL_PSTEXT_CDMD_L1SEL_MAX_INDEX 2
 #define GL_PSTEXT_CDMD_L1SEL_RX_SEL_S 0
-#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x1F, 0)
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PSTEXT_CDMD_L1SEL_RX_SEL_M : E800_GL_PSTEXT_CDMD_L1SEL_RX_SEL_M)
+#define E800_GL_PSTEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x1F, 0)
+#define E830_GL_PSTEXT_CDMD_L1SEL_RX_SEL_M MAKEMASK(0x3F, 0)
 #define GL_PSTEXT_CDMD_L1SEL_TX_SEL_S 8
-#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x1F, 8)
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PSTEXT_CDMD_L1SEL_TX_SEL_M : E800_GL_PSTEXT_CDMD_L1SEL_TX_SEL_M)
+#define E800_GL_PSTEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x1F, 8)
+#define E830_GL_PSTEXT_CDMD_L1SEL_TX_SEL_M MAKEMASK(0x3F, 8)
 #define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_S 16
-#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x1F, 16)
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M : E800_GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M)
+#define E800_GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x1F, 16)
+#define E830_GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M MAKEMASK(0x3F, 16)
 #define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_S 24
-#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x1F, 24)
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M : E800_GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M)
+#define E800_GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x1F, 24)
+#define E830_GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M MAKEMASK(0x3F, 24)
 #define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_S 30
 #define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_M MAKEMASK(0x3, 30)
 #define GL_PSTEXT_CTLTBL_L2ADDR(_i) (0x0020E084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
@@ -2807,15 +2855,23 @@
 #define GL_PSTEXT_FLGS_L1SEL0_1(_i) (0x0020E06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
 #define GL_PSTEXT_FLGS_L1SEL0_1_MAX_INDEX 2
 #define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_S 0
-#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M : E800_GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M)
+#define E800_GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x1FF, 0)
+#define E830_GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M MAKEMASK(0x3FF, 0)
 #define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_S 16
-#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ?
E830_GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M : E800_GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M) +#define E800_GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x1FF, 16) +#define E830_GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M MAKEMASK(0x3FF, 16) #define GL_PSTEXT_FLGS_L1SEL2_3(_i) (0x0020E078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */ #define GL_PSTEXT_FLGS_L1SEL2_3_MAX_INDEX 2 #define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_S 0 -#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x1FF, 0) +#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M : E800_GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M) +#define E800_GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x1FF, 0) +#define E830_GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M MAKEMASK(0x3FF, 0) #define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_S 16 -#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x1FF, 16) +#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M : E800_GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M) +#define E800_GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x1FF, 16) +#define E830_GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M MAKEMASK(0x3FF, 16) #define GL_PSTEXT_FLGS_L1TBL(_i) (0x0020E060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */ #define GL_PSTEXT_FLGS_L1TBL_MAX_INDEX 2 #define GL_PSTEXT_FLGS_L1TBL_LSB_S 0 @@ -4292,7 +4348,9 @@ #define GL_DSI_REPC_NO_DESC_CNT_M MAKEMASK(0xFFFF, 0) #define GL_DSI_REPC_ERROR_CNT_S 16 #define GL_DSI_REPC_ERROR_CNT_M MAKEMASK(0xFFFF, 16) -#define GL_MDCK_TDAT_TCLAN 0x000FC0DC /* Reset Source: CORER */ +#define GL_MDCK_TDAT_TCLAN_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? 
E830_GL_MDCK_TDAT_TCLAN : E800_GL_MDCK_TDAT_TCLAN) +#define E800_GL_MDCK_TDAT_TCLAN 0x000FC0DC /* Reset Source: CORER */ +#define E830_GL_MDCK_TDAT_TCLAN 0x000FCCDC /* Reset Source: CORER */ #define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_S 0 #define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_M BIT(0) #define GL_MDCK_TDAT_TCLAN_UR_S 1 @@ -4397,11 +4455,11 @@ #define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0) #define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_S 16 #define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16) -#define GLTPB_100G_RPB_FC_THRESH 0x0009963C /* Reset Source: CORER */ -#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_S 0 -#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0) -#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_S 16 -#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16) +#define E800_GLTPB_100G_RPB_FC_THRESH 0x0009963C /* Reset Source: CORER */ +#define E800_GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_S 0 +#define E800_GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0) +#define E800_GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_S 16 +#define E800_GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16) #define GLTPB_PACING_10G 0x000994E4 /* Reset Source: CORER */ #define GLTPB_PACING_10G_N_S 0 #define GLTPB_PACING_10G_N_M MAKEMASK(0xFF, 0) @@ -4457,8 +4515,8 @@ #define GL_UFUSE_SOC_SOC_TYPE_M BIT(10) #define GL_UFUSE_SOC_BTS_MODE_S 11 #define GL_UFUSE_SOC_BTS_MODE_M BIT(11) -#define GL_UFUSE_SOC_SPARE_FUSES_S 12 -#define GL_UFUSE_SOC_SPARE_FUSES_M MAKEMASK(0xF, 12) +#define E800_GL_UFUSE_SOC_SPARE_FUSES_S 12 +#define E800_GL_UFUSE_SOC_SPARE_FUSES_M MAKEMASK(0xF, 12) #define EMPINT_GPIO_ENA 0x000880C0 /* Reset Source: POR */ #define EMPINT_GPIO_ENA_GPIO0_ENA_S 0 #define EMPINT_GPIO_ENA_GPIO0_ENA_M BIT(0) @@ -4545,7 +4603,9 @@ #define GLINT_TSYN_PFMSTR_PF_MASTER_M MAKEMASK(0x7, 0) #define GLINT_TSYN_PHY 0x0016CC50 /* Reset Source: CORER */ #define 
GLINT_TSYN_PHY_PHY_INDX_S 0 -#define GLINT_TSYN_PHY_PHY_INDX_M MAKEMASK(0x1F, 0) +#define GLINT_TSYN_PHY_PHY_INDX_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLINT_TSYN_PHY_PHY_INDX_M : E800_GLINT_TSYN_PHY_PHY_INDX_M) +#define E800_GLINT_TSYN_PHY_PHY_INDX_M MAKEMASK(0x1F, 0) +#define E830_GLINT_TSYN_PHY_PHY_INDX_M MAKEMASK(0xFF, 0) #define GLINT_VECT2FUNC(_INT) (0x00162000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */ #define GLINT_VECT2FUNC_MAX_INDEX 2047 #define GLINT_VECT2FUNC_VF_NUM_S 0 @@ -4605,9 +4665,11 @@ #define PF0INT_OICR_CPM_QUEUE_S 1 #define PF0INT_OICR_CPM_QUEUE_M BIT(1) #define PF0INT_OICR_CPM_RSV1_S 2 -#define PF0INT_OICR_CPM_RSV1_M MAKEMASK(0xFF, 2) -#define PF0INT_OICR_CPM_HH_COMP_S 10 -#define PF0INT_OICR_CPM_HH_COMP_M BIT(10) +#define PF0INT_OICR_CPM_RSV1_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PF0INT_OICR_CPM_RSV1_M : E800_PF0INT_OICR_CPM_RSV1_M) +#define E800_PF0INT_OICR_CPM_RSV1_M MAKEMASK(0xFF, 2) +#define E830_PF0INT_OICR_CPM_RSV1_M MAKEMASK(0x3F, 2) +#define E800_PF0INT_OICR_CPM_HH_COMP_S 10 +#define E800_PF0INT_OICR_CPM_HH_COMP_M BIT(10) #define PF0INT_OICR_CPM_TSYN_TX_S 11 #define PF0INT_OICR_CPM_TSYN_TX_M BIT(11) #define PF0INT_OICR_CPM_TSYN_EVNT_S 12 @@ -4696,9 +4758,11 @@ #define PF0INT_OICR_HLP_QUEUE_S 1 #define PF0INT_OICR_HLP_QUEUE_M BIT(1) #define PF0INT_OICR_HLP_RSV1_S 2 -#define PF0INT_OICR_HLP_RSV1_M MAKEMASK(0xFF, 2) -#define PF0INT_OICR_HLP_HH_COMP_S 10 -#define PF0INT_OICR_HLP_HH_COMP_M BIT(10) +#define PF0INT_OICR_HLP_RSV1_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? 
E830_PF0INT_OICR_HLP_RSV1_M : E800_PF0INT_OICR_HLP_RSV1_M) +#define E800_PF0INT_OICR_HLP_RSV1_M MAKEMASK(0xFF, 2) +#define E830_PF0INT_OICR_HLP_RSV1_M MAKEMASK(0x3F, 2) +#define E800_PF0INT_OICR_HLP_HH_COMP_S 10 +#define E800_PF0INT_OICR_HLP_HH_COMP_M BIT(10) #define PF0INT_OICR_HLP_TSYN_TX_S 11 #define PF0INT_OICR_HLP_TSYN_TX_M BIT(11) #define PF0INT_OICR_HLP_TSYN_EVNT_S 12 @@ -4745,9 +4809,11 @@ #define PF0INT_OICR_PSM_QUEUE_S 1 #define PF0INT_OICR_PSM_QUEUE_M BIT(1) #define PF0INT_OICR_PSM_RSV1_S 2 -#define PF0INT_OICR_PSM_RSV1_M MAKEMASK(0xFF, 2) -#define PF0INT_OICR_PSM_HH_COMP_S 10 -#define PF0INT_OICR_PSM_HH_COMP_M BIT(10) +#define PF0INT_OICR_PSM_RSV1_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PF0INT_OICR_PSM_RSV1_M : E800_PF0INT_OICR_PSM_RSV1_M) +#define E800_PF0INT_OICR_PSM_RSV1_M MAKEMASK(0xFF, 2) +#define E830_PF0INT_OICR_PSM_RSV1_M MAKEMASK(0x3F, 2) +#define E800_PF0INT_OICR_PSM_HH_COMP_S 10 +#define E800_PF0INT_OICR_PSM_HH_COMP_M BIT(10) #define PF0INT_OICR_PSM_TSYN_TX_S 11 #define PF0INT_OICR_PSM_TSYN_TX_M BIT(11) #define PF0INT_OICR_PSM_TSYN_EVNT_S 12 @@ -4868,9 +4934,11 @@ #define PFINT_OICR_QUEUE_S 1 #define PFINT_OICR_QUEUE_M BIT(1) #define PFINT_OICR_RSV1_S 2 -#define PFINT_OICR_RSV1_M MAKEMASK(0xFF, 2) -#define PFINT_OICR_HH_COMP_S 10 -#define PFINT_OICR_HH_COMP_M BIT(10) +#define PFINT_OICR_RSV1_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? 
E830_PFINT_OICR_RSV1_M : E800_PFINT_OICR_RSV1_M) +#define E800_PFINT_OICR_RSV1_M MAKEMASK(0xFF, 2) +#define E830_PFINT_OICR_RSV1_M MAKEMASK(0x3F, 2) +#define E800_PFINT_OICR_HH_COMP_S 10 +#define E800_PFINT_OICR_HH_COMP_M BIT(10) #define PFINT_OICR_TSYN_TX_S 11 #define PFINT_OICR_TSYN_TX_M BIT(11) #define PFINT_OICR_TSYN_EVNT_S 12 @@ -4936,7 +5004,9 @@ #define PFINT_SB_CTL_INTEVENT_M BIT(31) #define PFINT_TSYN_MSK 0x0016C980 /* Reset Source: CORER */ #define PFINT_TSYN_MSK_PHY_INDX_S 0 -#define PFINT_TSYN_MSK_PHY_INDX_M MAKEMASK(0x1F, 0) +#define PFINT_TSYN_MSK_PHY_INDX_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PFINT_TSYN_MSK_PHY_INDX_M : E800_PFINT_TSYN_MSK_PHY_INDX_M) +#define E800_PFINT_TSYN_MSK_PHY_INDX_M MAKEMASK(0x1F, 0) +#define E830_PFINT_TSYN_MSK_PHY_INDX_M MAKEMASK(0xFF, 0) #define QINT_RQCTL(_QRX) (0x00150000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */ #define QINT_RQCTL_MAX_INDEX 2047 #define QINT_RQCTL_MSIX_INDX_S 0 @@ -5203,76 +5273,92 @@ #define VSILAN_QTABLE_QINDEX_0_M MAKEMASK(0x7FF, 0) #define VSILAN_QTABLE_QINDEX_1_S 16 #define VSILAN_QTABLE_QINDEX_1_M MAKEMASK(0x7FF, 16) -#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP 0x001E31C0 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_S 0 -#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_M BIT(0) -#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP 0x001E34C0 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_S 0 -#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_M BIT(0) -#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP 0x001E35C0 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_S 0 -#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_M BIT(0) -#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL 0x001E36C0 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_S 0 -#define 
PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_M BIT(0) -#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3220 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_S 0 -#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_M MAKEMASK(0xFFFFFFFF, 0) -#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3240 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_S 0 -#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE 0x001E3180 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S 0 -#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0) -#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1 0x001E3280 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_S 0 -#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0) -#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2 0x001E32A0 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_S 0 -#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_HSEC_CTL_RX_QUANTA_S 0x001E3C40 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_S 0 -#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE 0x001E31A0 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S 0 -#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0) -#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i) (0x001E36E0 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */ -#define 
PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8 -#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_S 0 -#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8 -#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_S 0 -#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_HSEC_CTL_TX_SA_PART1 0x001E3960 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_S 0 -#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0) -#define PRTMAC_HSEC_CTL_TX_SA_PART2 0x001E3980 /* Reset Source: GLOBR */ -#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_S 0 -#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_LINK_DOWN_COUNTER 0x001E47C0 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_ENABLE_GCP 0x001E31C0 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_M BIT(0) +#define E800_PRTMAC_HSEC_CTL_RX_ENABLE_GPP 0x001E34C0 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_M BIT(0) +#define E800_PRTMAC_HSEC_CTL_RX_ENABLE_PPP 0x001E35C0 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_M BIT(0) +#define E800_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL 0x001E36C0 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_M BIT(0) +#define 
E800_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3220 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_M MAKEMASK(0xFFFFFFFF, 0) +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3240 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_M MAKEMASK(0xFFFF, 0) +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE 0x001E3180 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0) +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1 0x001E3280 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0) +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2 0x001E32A0 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_M MAKEMASK(0xFFFF, 0) +#define E800_PRTMAC_HSEC_CTL_RX_QUANTA_S 0x001E3C40 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_S 0 +#define E800_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_M MAKEMASK(0xFFFF, 0) +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE 0x001E31A0 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S 0 +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0) +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i) (0x001E36E0 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */ +#define 
E800_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8 +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_S 0 +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0) +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8 +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_S 0 +#define E800_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M MAKEMASK(0xFFFF, 0) +#define E800_PRTMAC_HSEC_CTL_TX_SA_PART1 0x001E3960 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_S 0 +#define E800_PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0) +#define E800_PRTMAC_HSEC_CTL_TX_SA_PART2 0x001E3980 /* Reset Source: GLOBR */ +#define E800_PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_S 0 +#define E800_PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_M MAKEMASK(0xFFFF, 0) +#define PRTMAC_LINK_DOWN_COUNTER_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PRTMAC_LINK_DOWN_COUNTER : E800_PRTMAC_LINK_DOWN_COUNTER) +#define E800_PRTMAC_LINK_DOWN_COUNTER 0x001E47C0 /* Reset Source: GLOBR */ +#define E830_PRTMAC_LINK_DOWN_COUNTER 0x001E2460 /* Reset Source: GLOBR */ #define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_S 0 #define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_MD_OVRRIDE_ENABLE(_i) (0x001E3C60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */ +#define PRTMAC_MD_OVRRIDE_ENABLE_BY_MAC(hw, _i) ((hw)->mac_type == ICE_MAC_E830 ? 
E830_PRTMAC_MD_OVRRIDE_ENABLE(_i) : E800_PRTMAC_MD_OVRRIDE_ENABLE(_i)) +#define E800_PRTMAC_MD_OVRRIDE_ENABLE(_i) (0x001E3C60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */ +#define E830_PRTMAC_MD_OVRRIDE_ENABLE(_i) (0x001E2500 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */ #define PRTMAC_MD_OVRRIDE_ENABLE_MAX_INDEX 7 #define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_S 0 #define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0) -#define PRTMAC_MD_OVRRIDE_VAL(_i) (0x001E3D60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */ +#define PRTMAC_MD_OVRRIDE_VAL_BY_MAC(hw, _i) ((hw)->mac_type == ICE_MAC_E830 ? E830_PRTMAC_MD_OVRRIDE_VAL(_i) : E800_PRTMAC_MD_OVRRIDE_VAL(_i)) +#define E800_PRTMAC_MD_OVRRIDE_VAL(_i) (0x001E3D60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */ +#define E830_PRTMAC_MD_OVRRIDE_VAL(_i) (0x001E2600 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */ #define PRTMAC_MD_OVRRIDE_VAL_MAX_INDEX 7 #define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_S 0 #define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0) #define PRTMAC_RX_CNT_MRKR 0x001E48E0 /* Reset Source: GLOBR */ #define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_S 0 #define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_RX_PKT_DRP_CNT 0x001E3C20 /* Reset Source: GLOBR */ +#define PRTMAC_RX_PKT_DRP_CNT_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PRTMAC_RX_PKT_DRP_CNT : E800_PRTMAC_RX_PKT_DRP_CNT) +#define E800_PRTMAC_RX_PKT_DRP_CNT 0x001E3C20 /* Reset Source: GLOBR */ +#define E830_PRTMAC_RX_PKT_DRP_CNT 0x001E2420 /* Reset Source: GLOBR */ #define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_S 0 -#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 16 -#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 16) +#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? 
E830_PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M : E800_PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M) +#define E800_PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 0) +#define E830_PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M MAKEMASK(0xFFF, 0) +#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S : E800_PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S) +#define E800_PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 16 +#define E830_PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 28 +#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M : E800_PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M) +#define E800_PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 16) +#define E830_PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xF, 28) #define PRTMAC_TX_CNT_MRKR 0x001E48C0 /* Reset Source: GLOBR */ #define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_S 0 #define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_M MAKEMASK(0xFFFF, 0) -#define PRTMAC_TX_LNK_UP_CNT 0x001E4840 /* Reset Source: GLOBR */ +#define PRTMAC_TX_LNK_UP_CNT_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? 
E830_PRTMAC_TX_LNK_UP_CNT : E800_PRTMAC_TX_LNK_UP_CNT) +#define E800_PRTMAC_TX_LNK_UP_CNT 0x001E4840 /* Reset Source: GLOBR */ +#define E830_PRTMAC_TX_LNK_UP_CNT 0x001E2480 /* Reset Source: GLOBR */ #define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_S 0 #define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_M MAKEMASK(0xFFFF, 0) #define GL_MDCK_CFG1_TX_PQM 0x002D2DF4 /* Reset Source: CORER */ @@ -5333,8 +5419,8 @@ #define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_M BIT(24) #define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_S 25 #define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_M BIT(25) -#define GL_MDCK_EN_TX_PQM_RSVD_S 26 -#define GL_MDCK_EN_TX_PQM_RSVD_M MAKEMASK(0x3F, 26) +#define E800_GL_MDCK_EN_TX_PQM_RSVD_S 26 +#define E800_GL_MDCK_EN_TX_PQM_RSVD_M MAKEMASK(0x3F, 26) #define GL_MDCK_RX 0x0029422C /* Reset Source: CORER */ #define GL_MDCK_RX_DESC_ADDR_S 0 #define GL_MDCK_RX_DESC_ADDR_M BIT(0) @@ -5383,7 +5469,9 @@ #define GL_MDET_TX_PQM_MAL_TYPE_M MAKEMASK(0x1F, 26) #define GL_MDET_TX_PQM_VALID_S 31 #define GL_MDET_TX_PQM_VALID_M BIT(31) -#define GL_MDET_TX_TCLAN 0x000FC068 /* Reset Source: CORER */ +#define GL_MDET_TX_TCLAN_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MDET_TX_TCLAN : E800_GL_MDET_TX_TCLAN) +#define E800_GL_MDET_TX_TCLAN 0x000FC068 /* Reset Source: CORER */ +#define E830_GL_MDET_TX_TCLAN 0x000FCCC0 /* Reset Source: CORER */ #define GL_MDET_TX_TCLAN_QNUM_S 0 #define GL_MDET_TX_TCLAN_QNUM_M MAKEMASK(0x7FFF, 0) #define GL_MDET_TX_TCLAN_VF_NUM_S 15 @@ -5414,7 +5502,9 @@ #define PF_MDET_TX_PQM 0x002D2C80 /* Reset Source: CORER */ #define PF_MDET_TX_PQM_VALID_S 0 #define PF_MDET_TX_PQM_VALID_M BIT(0) -#define PF_MDET_TX_TCLAN 0x000FC000 /* Reset Source: CORER */ +#define PF_MDET_TX_TCLAN_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? 
E830_PF_MDET_TX_TCLAN : E800_PF_MDET_TX_TCLAN) +#define E800_PF_MDET_TX_TCLAN 0x000FC000 /* Reset Source: CORER */ +#define E830_PF_MDET_TX_TCLAN 0x000FCC00 /* Reset Source: CORER */ #define PF_MDET_TX_TCLAN_VALID_S 0 #define PF_MDET_TX_TCLAN_VALID_M BIT(0) #define PF_MDET_TX_TDPU 0x00040800 /* Reset Source: CORER */ @@ -5443,16 +5533,25 @@ #define GL_FWRESETCNT 0x00083100 /* Reset Source: POR */ #define GL_FWRESETCNT_FWRESETCNT_S 0 #define GL_FWRESETCNT_FWRESETCNT_M MAKEMASK(0xFFFFFFFF, 0) -#define GL_MNG_FW_RAM_STAT 0x0008309C /* Reset Source: POR */ +#define GL_MNG_FW_RAM_STAT_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MNG_FW_RAM_STAT : E800_GL_MNG_FW_RAM_STAT) +#define E800_GL_MNG_FW_RAM_STAT 0x0008309C /* Reset Source: POR */ +#define E830_GL_MNG_FW_RAM_STAT 0x000830D4 /* Reset Source: POR */ #define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_S 0 #define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_M BIT(0) #define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_S 1 #define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_M BIT(1) #define GL_MNG_FWSM 0x000B6134 /* Reset Source: POR */ +#define GL_MNG_FWSM_FW_LOADING_M BIT(30) #define GL_MNG_FWSM_FW_MODES_S 0 -#define GL_MNG_FWSM_FW_MODES_M MAKEMASK(0x7, 0) -#define GL_MNG_FWSM_RSV0_S 3 -#define GL_MNG_FWSM_RSV0_M MAKEMASK(0x7F, 3) +#define GL_MNG_FWSM_FW_MODES_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MNG_FWSM_FW_MODES_M : E800_GL_MNG_FWSM_FW_MODES_M) +#define E800_GL_MNG_FWSM_FW_MODES_M MAKEMASK(0x7, 0) +#define E830_GL_MNG_FWSM_FW_MODES_M MAKEMASK(0x3, 0) +#define GL_MNG_FWSM_RSV0_S_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MNG_FWSM_RSV0_S : E800_GL_MNG_FWSM_RSV0_S) +#define E800_GL_MNG_FWSM_RSV0_S 3 +#define E830_GL_MNG_FWSM_RSV0_S 2 +#define GL_MNG_FWSM_RSV0_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? 
E830_GL_MNG_FWSM_RSV0_M : E800_GL_MNG_FWSM_RSV0_M) +#define E800_GL_MNG_FWSM_RSV0_M MAKEMASK(0x7F, 3) +#define E830_GL_MNG_FWSM_RSV0_M MAKEMASK(0xFF, 2) #define GL_MNG_FWSM_EEP_RELOAD_IND_S 10 #define GL_MNG_FWSM_EEP_RELOAD_IND_M BIT(10) #define GL_MNG_FWSM_RSV1_S 11 @@ -5476,12 +5575,20 @@ #define GL_MNG_HWARB_CTRL 0x000B6130 /* Reset Source: POR */ #define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_S 0 #define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_M BIT(0) -#define GL_MNG_SHA_EXTEND(_i) (0x00083120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */ -#define GL_MNG_SHA_EXTEND_MAX_INDEX 7 +#define GL_MNG_SHA_EXTEND_BY_MAC(hw, _i) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MNG_SHA_EXTEND(_i) : E800_GL_MNG_SHA_EXTEND(_i)) +#define E800_GL_MNG_SHA_EXTEND(_i) (0x00083120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */ +#define E830_GL_MNG_SHA_EXTEND(_i) (0x00083340 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: EMPR */ +#define GL_MNG_SHA_EXTEND_MAX_INDEX_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MNG_SHA_EXTEND_MAX_INDEX : E800_GL_MNG_SHA_EXTEND_MAX_INDEX) +#define E800_GL_MNG_SHA_EXTEND_MAX_INDEX 7 +#define E830_GL_MNG_SHA_EXTEND_MAX_INDEX 11 #define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_S 0 #define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_M MAKEMASK(0xFFFFFFFF, 0) -#define GL_MNG_SHA_EXTEND_ROM(_i) (0x00083160 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */ -#define GL_MNG_SHA_EXTEND_ROM_MAX_INDEX 7 +#define GL_MNG_SHA_EXTEND_ROM_BY_MAC(hw, _i) ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MNG_SHA_EXTEND_ROM(_i) : E800_GL_MNG_SHA_EXTEND_ROM(_i)) +#define E800_GL_MNG_SHA_EXTEND_ROM(_i) (0x00083160 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */ +#define E830_GL_MNG_SHA_EXTEND_ROM(_i) (0x000832C0 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: EMPR */ +#define GL_MNG_SHA_EXTEND_ROM_MAX_INDEX_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? 
E830_GL_MNG_SHA_EXTEND_ROM_MAX_INDEX : E800_GL_MNG_SHA_EXTEND_ROM_MAX_INDEX) +#define E800_GL_MNG_SHA_EXTEND_ROM_MAX_INDEX 7 +#define E830_GL_MNG_SHA_EXTEND_ROM_MAX_INDEX 11 #define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_S 0 #define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_M MAKEMASK(0xFFFFFFFF, 0) #define GL_MNG_SHA_EXTEND_STATUS 0x00083148 /* Reset Source: EMPR */ @@ -5880,8 +5987,8 @@ #define GLPCI_CAPSUP 0x0009DE8C /* Reset Source: PCIR */ #define GLPCI_CAPSUP_PCIE_VER_S 0 #define GLPCI_CAPSUP_PCIE_VER_M BIT(0) -#define GLPCI_CAPSUP_RESERVED_2_S 1 -#define GLPCI_CAPSUP_RESERVED_2_M BIT(1) +#define E800_GLPCI_CAPSUP_RESERVED_2_S 1 +#define E800_GLPCI_CAPSUP_RESERVED_2_M BIT(1) #define GLPCI_CAPSUP_LTR_EN_S 2 #define GLPCI_CAPSUP_LTR_EN_M BIT(2) #define GLPCI_CAPSUP_TPH_EN_S 3 @@ -6331,9 +6438,9 @@ #define PFPE_MRTEIDXMASK 0x0050A300 /* Reset Source: PFR */ #define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S 0 #define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M MAKEMASK(0x1F, 0) -#define PFPE_RCVUNEXPECTEDERROR 0x0050A380 /* Reset Source: PFR */ -#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0 -#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0) +#define E800_PFPE_RCVUNEXPECTEDERROR 0x0050A380 /* Reset Source: PFR */ +#define E800_PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0 +#define E800_PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0) #define PFPE_TCPNOWTIMER 0x0050A280 /* Reset Source: PFR */ #define PFPE_TCPNOWTIMER_TCP_NOW_S 0 #define PFPE_TCPNOWTIMER_TCP_NOW_M MAKEMASK(0xFFFFFFFF, 0) @@ -6402,10 +6509,10 @@ #define VFPE_IPCONFIG0_USEENTIREIDRANGE_M BIT(16) #define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S 17 #define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M BIT(17) -#define VFPE_RCVUNEXPECTEDERROR(_VF) (0x00509C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */ -#define VFPE_RCVUNEXPECTEDERROR_MAX_INDEX 255 -#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0 -#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M 
MAKEMASK(0xFFFFFF, 0)
+#define E800_VFPE_RCVUNEXPECTEDERROR(_VF) (0x00509C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define E800_VFPE_RCVUNEXPECTEDERROR_MAX_INDEX 255
+#define E800_VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define E800_VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
 #define VFPE_TCPNOWTIMER(_VF) (0x00509400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
 #define VFPE_TCPNOWTIMER_MAX_INDEX 255
 #define VFPE_TCPNOWTIMER_TCP_NOW_S 0
@@ -7109,15 +7216,21 @@
 #define GLRPB_DHW(_i) (0x000AC000 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
 #define GLRPB_DHW_MAX_INDEX 15
 #define GLRPB_DHW_DHW_TCN_S 0
-#define GLRPB_DHW_DHW_TCN_M MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DHW_DHW_TCN_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLRPB_DHW_DHW_TCN_M : E800_GLRPB_DHW_DHW_TCN_M)
+#define E800_GLRPB_DHW_DHW_TCN_M MAKEMASK(0xFFFFF, 0)
+#define E830_GLRPB_DHW_DHW_TCN_M MAKEMASK(0x3FFFFF, 0)
 #define GLRPB_DLW(_i) (0x000AC044 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
 #define GLRPB_DLW_MAX_INDEX 15
 #define GLRPB_DLW_DLW_TCN_S 0
-#define GLRPB_DLW_DLW_TCN_M MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DLW_DLW_TCN_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLRPB_DLW_DLW_TCN_M : E800_GLRPB_DLW_DLW_TCN_M)
+#define E800_GLRPB_DLW_DLW_TCN_M MAKEMASK(0xFFFFF, 0)
+#define E830_GLRPB_DLW_DLW_TCN_M MAKEMASK(0x3FFFFF, 0)
 #define GLRPB_DPS(_i) (0x000AC084 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
 #define GLRPB_DPS_MAX_INDEX 15
 #define GLRPB_DPS_DPS_TCN_S 0
-#define GLRPB_DPS_DPS_TCN_M MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DPS_DPS_TCN_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLRPB_DPS_DPS_TCN_M : E800_GLRPB_DPS_DPS_TCN_M)
+#define E800_GLRPB_DPS_DPS_TCN_M MAKEMASK(0xFFFFF, 0)
+#define E830_GLRPB_DPS_DPS_TCN_M MAKEMASK(0x3FFFFF, 0)
 #define GLRPB_DSI_EN 0x000AC324 /* Reset Source: CORER */
 #define GLRPB_DSI_EN_DSI_EN_S 0
 #define GLRPB_DSI_EN_DSI_EN_M BIT(0)
@@ -7126,15 +7239,21 @@
 #define GLRPB_SHW(_i) (0x000AC120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
 #define GLRPB_SHW_MAX_INDEX 7
 #define GLRPB_SHW_SHW_S 0
-#define GLRPB_SHW_SHW_M MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SHW_SHW_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLRPB_SHW_SHW_M : E800_GLRPB_SHW_SHW_M)
+#define E800_GLRPB_SHW_SHW_M MAKEMASK(0xFFFFF, 0)
+#define E830_GLRPB_SHW_SHW_M MAKEMASK(0x3FFFFF, 0)
 #define GLRPB_SLW(_i) (0x000AC140 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
 #define GLRPB_SLW_MAX_INDEX 7
 #define GLRPB_SLW_SLW_S 0
-#define GLRPB_SLW_SLW_M MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SLW_SLW_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLRPB_SLW_SLW_M : E800_GLRPB_SLW_SLW_M)
+#define E800_GLRPB_SLW_SLW_M MAKEMASK(0xFFFFF, 0)
+#define E830_GLRPB_SLW_SLW_M MAKEMASK(0x3FFFFF, 0)
 #define GLRPB_SPS(_i) (0x000AC0C4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
 #define GLRPB_SPS_MAX_INDEX 7
 #define GLRPB_SPS_SPS_TCN_S 0
-#define GLRPB_SPS_SPS_TCN_M MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SPS_SPS_TCN_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLRPB_SPS_SPS_TCN_M : E800_GLRPB_SPS_SPS_TCN_M)
+#define E800_GLRPB_SPS_SPS_TCN_M MAKEMASK(0xFFFFF, 0)
+#define E830_GLRPB_SPS_SPS_TCN_M MAKEMASK(0x3FFFFF, 0)
 #define GLRPB_TC_CFG(_i) (0x000AC2A4 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
 #define GLRPB_TC_CFG_MAX_INDEX 31
 #define GLRPB_TC_CFG_D_POOL_S 0
@@ -7144,11 +7263,15 @@
 #define GLRPB_TCHW(_i) (0x000AC330 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
 #define GLRPB_TCHW_MAX_INDEX 31
 #define GLRPB_TCHW_TCHW_S 0
-#define GLRPB_TCHW_TCHW_M MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TCHW_TCHW_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLRPB_TCHW_TCHW_M : E800_GLRPB_TCHW_TCHW_M)
+#define E800_GLRPB_TCHW_TCHW_M MAKEMASK(0xFFFFF, 0)
+#define E830_GLRPB_TCHW_TCHW_M MAKEMASK(0x3FFFFF, 0)
 #define GLRPB_TCLW(_i) (0x000AC3B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
 #define GLRPB_TCLW_MAX_INDEX 31
 #define GLRPB_TCLW_TCLW_S 0
-#define GLRPB_TCLW_TCLW_M MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TCLW_TCLW_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLRPB_TCLW_TCLW_M : E800_GLRPB_TCLW_TCLW_M)
+#define E800_GLRPB_TCLW_TCLW_M MAKEMASK(0xFFFFF, 0)
+#define E830_GLRPB_TCLW_TCLW_M MAKEMASK(0x3FFFFF, 0)
 #define GLQF_APBVT(_i) (0x00450000 + ((_i) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
 #define GLQF_APBVT_MAX_INDEX 2047
 #define GLQF_APBVT_APBVT_S 0
@@ -7161,9 +7284,13 @@
 #define GLQF_FD_CLSN1_HITLBCNT_M MAKEMASK(0xFFFFFFFF, 0)
 #define GLQF_FD_CNT 0x00460018 /* Reset Source: CORER */
 #define GLQF_FD_CNT_FD_GCNT_S 0
-#define GLQF_FD_CNT_FD_GCNT_M MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_CNT_FD_GCNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLQF_FD_CNT_FD_GCNT_M : E800_GLQF_FD_CNT_FD_GCNT_M)
+#define E800_GLQF_FD_CNT_FD_GCNT_M MAKEMASK(0x7FFF, 0)
+#define E830_GLQF_FD_CNT_FD_GCNT_M MAKEMASK(0xFFFF, 0)
 #define GLQF_FD_CNT_FD_BCNT_S 16
-#define GLQF_FD_CNT_FD_BCNT_M MAKEMASK(0x7FFF, 16)
+#define GLQF_FD_CNT_FD_BCNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLQF_FD_CNT_FD_BCNT_M : E800_GLQF_FD_CNT_FD_BCNT_M)
+#define E800_GLQF_FD_CNT_FD_BCNT_M MAKEMASK(0x7FFF, 16)
+#define E830_GLQF_FD_CNT_FD_BCNT_M MAKEMASK(0xFFFF, 16)
 #define GLQF_FD_CTL 0x00460000 /* Reset Source: CORER */
 #define GLQF_FD_CTL_FDLONG_S 0
 #define GLQF_FD_CTL_FDLONG_M MAKEMASK(0xF, 0)
@@ -7173,12 +7300,18 @@
 #define GLQF_FD_CTL_FLT_ADDR_REPORT_M BIT(5)
 #define GLQF_FD_SIZE 0x00460010 /* Reset Source: CORER */
 #define GLQF_FD_SIZE_FD_GSIZE_S 0
-#define GLQF_FD_SIZE_FD_GSIZE_M MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_SIZE_FD_GSIZE_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLQF_FD_SIZE_FD_GSIZE_M : E800_GLQF_FD_SIZE_FD_GSIZE_M)
+#define E800_GLQF_FD_SIZE_FD_GSIZE_M MAKEMASK(0x7FFF, 0)
+#define E830_GLQF_FD_SIZE_FD_GSIZE_M MAKEMASK(0xFFFF, 0)
 #define GLQF_FD_SIZE_FD_BSIZE_S 16
-#define GLQF_FD_SIZE_FD_BSIZE_M MAKEMASK(0x7FFF, 16)
+#define GLQF_FD_SIZE_FD_BSIZE_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLQF_FD_SIZE_FD_BSIZE_M : E800_GLQF_FD_SIZE_FD_BSIZE_M)
+#define E800_GLQF_FD_SIZE_FD_BSIZE_M MAKEMASK(0x7FFF, 16)
+#define E830_GLQF_FD_SIZE_FD_BSIZE_M MAKEMASK(0xFFFF, 16)
 #define GLQF_FDCNT_0 0x00460020 /* Reset Source: CORER */
 #define GLQF_FDCNT_0_BUCKETCNT_S 0
-#define GLQF_FDCNT_0_BUCKETCNT_M MAKEMASK(0x7FFF, 0)
+#define GLQF_FDCNT_0_BUCKETCNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_GLQF_FDCNT_0_BUCKETCNT_M : E800_GLQF_FDCNT_0_BUCKETCNT_M)
+#define E800_GLQF_FDCNT_0_BUCKETCNT_M MAKEMASK(0x7FFF, 0)
+#define E830_GLQF_FDCNT_0_BUCKETCNT_M MAKEMASK(0xFFFF, 0)
 #define GLQF_FDCNT_0_CNT_NOT_VLD_S 31
 #define GLQF_FDCNT_0_CNT_NOT_VLD_M BIT(31)
 #define GLQF_FDEVICTENA(_i) (0x00452000 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
@@ -7402,22 +7535,34 @@
 #define GLQF_PROF2TC_REGION_7_M MAKEMASK(0x7, 29)
 #define PFQF_FD_CNT 0x00460180 /* Reset Source: CORER */
 #define PFQF_FD_CNT_FD_GCNT_S 0
-#define PFQF_FD_CNT_FD_GCNT_M MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_CNT_FD_GCNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PFQF_FD_CNT_FD_GCNT_M : E800_PFQF_FD_CNT_FD_GCNT_M)
+#define E800_PFQF_FD_CNT_FD_GCNT_M MAKEMASK(0x7FFF, 0)
+#define E830_PFQF_FD_CNT_FD_GCNT_M MAKEMASK(0xFFFF, 0)
 #define PFQF_FD_CNT_FD_BCNT_S 16
-#define PFQF_FD_CNT_FD_BCNT_M MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_CNT_FD_BCNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PFQF_FD_CNT_FD_BCNT_M : E800_PFQF_FD_CNT_FD_BCNT_M)
+#define E800_PFQF_FD_CNT_FD_BCNT_M MAKEMASK(0x7FFF, 16)
+#define E830_PFQF_FD_CNT_FD_BCNT_M MAKEMASK(0xFFFF, 16)
 #define PFQF_FD_ENA 0x0043A000 /* Reset Source: CORER */
 #define PFQF_FD_ENA_FD_ENA_S 0
 #define PFQF_FD_ENA_FD_ENA_M BIT(0)
 #define PFQF_FD_SIZE 0x00460100 /* Reset Source: CORER */
 #define PFQF_FD_SIZE_FD_GSIZE_S 0
-#define PFQF_FD_SIZE_FD_GSIZE_M MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SIZE_FD_GSIZE_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PFQF_FD_SIZE_FD_GSIZE_M : E800_PFQF_FD_SIZE_FD_GSIZE_M)
+#define E800_PFQF_FD_SIZE_FD_GSIZE_M MAKEMASK(0x7FFF, 0)
+#define E830_PFQF_FD_SIZE_FD_GSIZE_M MAKEMASK(0xFFFF, 0)
 #define PFQF_FD_SIZE_FD_BSIZE_S 16
-#define PFQF_FD_SIZE_FD_BSIZE_M MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_SIZE_FD_BSIZE_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PFQF_FD_SIZE_FD_BSIZE_M : E800_PFQF_FD_SIZE_FD_BSIZE_M)
+#define E800_PFQF_FD_SIZE_FD_BSIZE_M MAKEMASK(0x7FFF, 16)
+#define E830_PFQF_FD_SIZE_FD_BSIZE_M MAKEMASK(0xFFFF, 16)
 #define PFQF_FD_SUBTRACT 0x00460200 /* Reset Source: CORER */
 #define PFQF_FD_SUBTRACT_FD_GCNT_S 0
-#define PFQF_FD_SUBTRACT_FD_GCNT_M MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SUBTRACT_FD_GCNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PFQF_FD_SUBTRACT_FD_GCNT_M : E800_PFQF_FD_SUBTRACT_FD_GCNT_M)
+#define E800_PFQF_FD_SUBTRACT_FD_GCNT_M MAKEMASK(0x7FFF, 0)
+#define E830_PFQF_FD_SUBTRACT_FD_GCNT_M MAKEMASK(0xFFFF, 0)
 #define PFQF_FD_SUBTRACT_FD_BCNT_S 16
-#define PFQF_FD_SUBTRACT_FD_BCNT_M MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_SUBTRACT_FD_BCNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_PFQF_FD_SUBTRACT_FD_BCNT_M : E800_PFQF_FD_SUBTRACT_FD_BCNT_M)
+#define E800_PFQF_FD_SUBTRACT_FD_BCNT_M MAKEMASK(0x7FFF, 16)
+#define E830_PFQF_FD_SUBTRACT_FD_BCNT_M MAKEMASK(0xFFFF, 16)
 #define PFQF_HLUT(_i) (0x00430000 + ((_i) * 64)) /* _i=0...511 */ /* Reset Source: CORER */
 #define PFQF_HLUT_MAX_INDEX 511
 #define PFQF_HLUT_LUT0_S 0
@@ -7645,20 +7790,20 @@
 #define GLPRT_AORCL_AORCL_M MAKEMASK(0xFFFFFFFF, 0)
 #define GLPRT_BPRCH(_i) (0x00381384 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
 #define GLPRT_BPRCH_MAX_INDEX 7
-#define GLPRT_BPRCH_UPRCH_S 0
-#define GLPRT_BPRCH_UPRCH_M MAKEMASK(0xFF, 0)
+#define E800_GLPRT_BPRCH_UPRCH_S 0
+#define E800_GLPRT_BPRCH_UPRCH_M MAKEMASK(0xFF, 0)
 #define GLPRT_BPRCL(_i) (0x00381380 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
 #define GLPRT_BPRCL_MAX_INDEX 7
-#define GLPRT_BPRCL_UPRCH_S 0
-#define GLPRT_BPRCL_UPRCH_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GLPRT_BPRCL_UPRCH_S 0
+#define E800_GLPRT_BPRCL_UPRCH_M MAKEMASK(0xFFFFFFFF, 0)
 #define GLPRT_BPTCH(_i) (0x00381244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
 #define GLPRT_BPTCH_MAX_INDEX 7
-#define GLPRT_BPTCH_UPRCH_S 0
-#define GLPRT_BPTCH_UPRCH_M MAKEMASK(0xFF, 0)
+#define E800_GLPRT_BPTCH_UPRCH_S 0
+#define E800_GLPRT_BPTCH_UPRCH_M MAKEMASK(0xFF, 0)
 #define GLPRT_BPTCL(_i) (0x00381240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
 #define GLPRT_BPTCL_MAX_INDEX 7
-#define GLPRT_BPTCL_UPRCH_S 0
-#define GLPRT_BPTCL_UPRCH_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GLPRT_BPTCL_UPRCH_S 0
+#define E800_GLPRT_BPTCL_UPRCH_M MAKEMASK(0xFFFFFFFF, 0)
 #define GLPRT_CRCERRS(_i) (0x00380100 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
 #define GLPRT_CRCERRS_MAX_INDEX 7
 #define GLPRT_CRCERRS_CRCERRS_S 0
@@ -7973,8 +8118,8 @@
 #define GLPRT_UPTCH_UPTCH_M MAKEMASK(0xFF, 0)
 #define GLPRT_UPTCL(_i) (0x003811C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
 #define GLPRT_UPTCL_MAX_INDEX 7
-#define GLPRT_UPTCL_VUPTCH_S 0
-#define GLPRT_UPTCL_VUPTCH_M MAKEMASK(0xFFFFFFFF, 0)
+#define E800_GLPRT_UPTCL_VUPTCH_S 0
+#define E800_GLPRT_UPTCL_VUPTCH_M MAKEMASK(0xFFFFFFFF, 0)
 #define GLSTAT_ACL_CNT_0_H(_i) (0x00388004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
 #define GLSTAT_ACL_CNT_0_H_MAX_INDEX 511
 #define GLSTAT_ACL_CNT_0_H_CNT_MSB_S 0
@@ -8869,9 +9014,13 @@
 #define VSIQF_FD_CNT(_VSI) (0x00464000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
 #define VSIQF_FD_CNT_MAX_INDEX 767
 #define VSIQF_FD_CNT_FD_GCNT_S 0
-#define VSIQF_FD_CNT_FD_GCNT_M MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_CNT_FD_GCNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_VSIQF_FD_CNT_FD_GCNT_M : E800_VSIQF_FD_CNT_FD_GCNT_M)
+#define E800_VSIQF_FD_CNT_FD_GCNT_M MAKEMASK(0x3FFF, 0)
+#define E830_VSIQF_FD_CNT_FD_GCNT_M MAKEMASK(0xFFFF, 0)
 #define VSIQF_FD_CNT_FD_BCNT_S 16
-#define VSIQF_FD_CNT_FD_BCNT_M MAKEMASK(0x3FFF, 16)
+#define VSIQF_FD_CNT_FD_BCNT_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_VSIQF_FD_CNT_FD_BCNT_M : E800_VSIQF_FD_CNT_FD_BCNT_M)
+#define E800_VSIQF_FD_CNT_FD_BCNT_M MAKEMASK(0x3FFF, 16)
+#define E830_VSIQF_FD_CNT_FD_BCNT_M MAKEMASK(0xFFFF, 16)
 #define VSIQF_FD_CTL1(_VSI) (0x00411000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
 #define VSIQF_FD_CTL1_MAX_INDEX 767
 #define VSIQF_FD_CTL1_FLT_ENA_S 0
@@ -8895,9 +9044,13 @@
 #define VSIQF_FD_SIZE(_VSI) (0x00462000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
 #define VSIQF_FD_SIZE_MAX_INDEX 767
 #define VSIQF_FD_SIZE_FD_GSIZE_S 0
-#define VSIQF_FD_SIZE_FD_GSIZE_M MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_SIZE_FD_GSIZE_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_VSIQF_FD_SIZE_FD_GSIZE_M : E800_VSIQF_FD_SIZE_FD_GSIZE_M)
+#define E800_VSIQF_FD_SIZE_FD_GSIZE_M MAKEMASK(0x3FFF, 0)
+#define E830_VSIQF_FD_SIZE_FD_GSIZE_M MAKEMASK(0xFFFF, 0)
 #define VSIQF_FD_SIZE_FD_BSIZE_S 16
-#define VSIQF_FD_SIZE_FD_BSIZE_M MAKEMASK(0x3FFF, 16)
+#define VSIQF_FD_SIZE_FD_BSIZE_M_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_VSIQF_FD_SIZE_FD_BSIZE_M : E800_VSIQF_FD_SIZE_FD_BSIZE_M)
+#define E800_VSIQF_FD_SIZE_FD_BSIZE_M MAKEMASK(0x3FFF, 16)
+#define E830_VSIQF_FD_SIZE_FD_BSIZE_M MAKEMASK(0xFFFF, 16)
 #define VSIQF_HASH_CTL(_VSI) (0x0040D000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
 #define VSIQF_HASH_CTL_MAX_INDEX 767
 #define VSIQF_HASH_CTL_HASH_LUT_SEL_S 0
@@ -9021,7 +9174,9 @@
 #define PFPM_WUS_FLX7_M BIT(23)
 #define PFPM_WUS_FW_RST_WK_S 31
 #define PFPM_WUS_FW_RST_WK_M BIT(31)
-#define PRTPM_SAH(_i) (0x001E3BA0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAH_BY_MAC(hw, _i) ((hw)->mac_type == ICE_MAC_E830 ? E830_PRTPM_SAH(_i) : E800_PRTPM_SAH(_i))
+#define E800_PRTPM_SAH(_i) (0x001E3BA0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define E830_PRTPM_SAH(_i) (0x001E2380 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
 #define PRTPM_SAH_MAX_INDEX 3
 #define PRTPM_SAH_PFPM_SAH_S 0
 #define PRTPM_SAH_PFPM_SAH_M MAKEMASK(0xFFFF, 0)
@@ -9031,7 +9186,9 @@
 #define PRTPM_SAH_MC_MAG_EN_M BIT(30)
 #define PRTPM_SAH_AV_S 31
 #define PRTPM_SAH_AV_M BIT(31)
-#define PRTPM_SAL(_i) (0x001E3B20 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAL_BY_MAC(hw, _i) ((hw)->mac_type == ICE_MAC_E830 ? E830_PRTPM_SAL(_i) : E800_PRTPM_SAL(_i))
+#define E800_PRTPM_SAL(_i) (0x001E3B20 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define E830_PRTPM_SAL(_i) (0x001E2300 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
 #define PRTPM_SAL_MAX_INDEX 3
 #define PRTPM_SAL_PFPM_SAL_S 0
 #define PRTPM_SAL_PFPM_SAL_M MAKEMASK(0xFFFFFFFF, 0)
@@ -9044,7 +9201,9 @@
 #define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_M MAKEMASK(0x3, 13)
 #define GLPE_CQM_FUNC_INVALIDATE_ENABLE_S 31
 #define GLPE_CQM_FUNC_INVALIDATE_ENABLE_M BIT(31)
-#define VFPE_MRTEIDXMASK 0x00009000 /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_VFPE_MRTEIDXMASK : E800_VFPE_MRTEIDXMASK)
+#define E800_VFPE_MRTEIDXMASK 0x00009000 /* Reset Source: PFR */
+#define E830_VFPE_MRTEIDXMASK(_VF) (0x00509800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
 #define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S 0
 #define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M MAKEMASK(0x1F, 0)
 #define GLTSYN_HH_DLAY 0x0008881C /* Reset Source: CORER */
@@ -9147,8 +9306,12 @@
 #define VFINT_ITR0_MAX_INDEX 2
 #define VFINT_ITR0_INTERVAL_S 0
 #define VFINT_ITR0_INTERVAL_M MAKEMASK(0xFFF, 0)
-#define VFINT_ITRN(_i, _j) (0x00002800 + ((_i) * 4 + (_j) * 12)) /* _i=0...2, _j=0...63 */ /* Reset Source: CORER */
-#define VFINT_ITRN_MAX_INDEX 2
+#define VFINT_ITRN_BY_MAC(hw, _i, _j) ((hw)->mac_type == ICE_MAC_E830 ? E830_VFINT_ITRN(_i, _j) : E800_VFINT_ITRN(_i, _j))
+#define E800_VFINT_ITRN(_i, _j) (0x00002800 + ((_i) * 4 + (_j) * 12)) /* _i=0...2, _j=0...63 */ /* Reset Source: CORER */
+#define E830_VFINT_ITRN(_i, _j) (0x00002800 + ((_i) * 4 + (_j) * 64)) /* _i=0...15, _j=0...2 */ /* Reset Source: CORER */
+#define VFINT_ITRN_MAX_INDEX_BY_MAC(hw) ((hw)->mac_type == ICE_MAC_E830 ? E830_VFINT_ITRN_MAX_INDEX : E800_VFINT_ITRN_MAX_INDEX)
+#define E800_VFINT_ITRN_MAX_INDEX 2
+#define E830_VFINT_ITRN_MAX_INDEX 15
 #define VFINT_ITRN_INTERVAL_S 0
 #define VFINT_ITRN_INTERVAL_M MAKEMASK(0xFFF, 0)
 #define QRX_TAIL1(_QRX) (0x00002000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
@@ -9443,13 +9606,13 @@
 #define VFPE_IPCONFIG01_USEENTIREIDRANGE_M BIT(16)
 #define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_S 17
 #define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_M BIT(17)
-#define VFPE_MRTEIDXMASK1(_VF) (0x00509800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
-#define VFPE_MRTEIDXMASK1_MAX_INDEX 255
-#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_S 0
-#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_M MAKEMASK(0x1F, 0)
-#define VFPE_RCVUNEXPECTEDERROR1 0x00009400 /* Reset Source: VFR */
-#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_S 0
-#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define E800_VFPE_MRTEIDXMASK1(_VF) (0x00509800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define E800_VFPE_MRTEIDXMASK1_MAX_INDEX 255
+#define E800_VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_S 0
+#define E800_VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_M MAKEMASK(0x1F, 0)
+#define E800_VFPE_RCVUNEXPECTEDERROR1 0x00009400 /* Reset Source: VFR */
+#define E800_VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_S 0
+#define E800_VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
 #define VFPE_TCPNOWTIMER1 0x0000A800 /* Reset Source: VFR */
 #define VFPE_TCPNOWTIMER1_TCP_NOW_S 0
 #define VFPE_TCPNOWTIMER1_TCP_NOW_M MAKEMASK(0xFFFFFFFF, 0)
@@ -9458,5 +9621,1645 @@
 #define VFPE_WQEALLOC1_PEQPID_M MAKEMASK(0x3FFFF, 0)
 #define VFPE_WQEALLOC1_WQE_DESC_INDEX_S 20
 #define VFPE_WQEALLOC1_WQE_DESC_INDEX_M MAKEMASK(0xFFF, 20)
+#define E830_GL_QRX_CONTEXT_CTL 0x00296640 /* Reset Source: CORER */
+#define E830_GL_QRX_CONTEXT_CTL_QUEUE_ID_S 0
+#define E830_GL_QRX_CONTEXT_CTL_QUEUE_ID_M MAKEMASK(0xFFF, 0)
+#define E830_GL_QRX_CONTEXT_CTL_CMD_S 16
+#define E830_GL_QRX_CONTEXT_CTL_CMD_M MAKEMASK(0x7, 16)
+#define E830_GL_QRX_CONTEXT_CTL_CMD_EXEC_S 19
+#define E830_GL_QRX_CONTEXT_CTL_CMD_EXEC_M BIT(19)
+#define E830_GL_QRX_CONTEXT_DATA(_i) (0x00296620 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GL_QRX_CONTEXT_DATA_MAX_INDEX 7
+#define E830_GL_QRX_CONTEXT_DATA_DATA_S 0
+#define E830_GL_QRX_CONTEXT_DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_QRX_CONTEXT_STAT 0x00296644 /* Reset Source: CORER */
+#define E830_GL_QRX_CONTEXT_STAT_CMD_IN_PROG_S 0
+#define E830_GL_QRX_CONTEXT_STAT_CMD_IN_PROG_M BIT(0)
+#define E830_GL_RCB_INTERNAL(_i) (0x00122600 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define E830_GL_RCB_INTERNAL_MAX_INDEX 63
+#define E830_GL_RCB_INTERNAL_INTERNAL_S 0
+#define E830_GL_RCB_INTERNAL_INTERNAL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_RLAN_INTERNAL(_i) (0x00296700 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define E830_GL_RLAN_INTERNAL_MAX_INDEX 63
+#define E830_GL_RLAN_INTERNAL_INTERNAL_S 0
+#define E830_GL_RLAN_INTERNAL_INTERNAL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_MAX_CREDITS 0x002D30F8 /* Reset Source: CORER */
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_MAX_CREDITS_DBLQ_FDBL_S 0
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_MAX_CREDITS_DBLQ_FDBL_M MAKEMASK(0xFF, 0)
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_MAX_CREDITS_TXT_S 8
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_MAX_CREDITS_TXT_M MAKEMASK(0xFF, 8)
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_WEIGHTS 0x002D30FC /* Reset Source: CORER */
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_WEIGHTS_DBLQ_FDBL_S 0
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_WEIGHTS_DBLQ_FDBL_M MAKEMASK(0x3F, 0)
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_WEIGHTS_TXT_S 6
+#define E830_GLPQMDBL_PQMDBL_OUT_WRR_WEIGHTS_TXT_M MAKEMASK(0x3F, 6)
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_MAX_CREDITS 0x002D30F0 /* Reset Source: CORER */
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_MAX_CREDITS_DBLQ_S 0
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_MAX_CREDITS_DBLQ_M MAKEMASK(0xFF, 0)
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_MAX_CREDITS_FDBL_S 8
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_MAX_CREDITS_FDBL_M MAKEMASK(0xFF, 8)
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_MAX_CREDITS_TXT_S 16
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_MAX_CREDITS_TXT_M MAKEMASK(0xFF, 16)
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_WEIGHTS 0x002D30F4 /* Reset Source: CORER */
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_WEIGHTS_DBLQ_S 0
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_WEIGHTS_DBLQ_M MAKEMASK(0x3F, 0)
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_WEIGHTS_FDBL_S 6
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_WEIGHTS_FDBL_M MAKEMASK(0x3F, 6)
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_WEIGHTS_TXT_S 12
+#define E830_GLPQMDBL_PQMMNG_IN_WRR_WEIGHTS_TXT_M MAKEMASK(0x3F, 12)
+#define E830_GLQTX_TXTIME_DBELL_LSB(_DBQM) (0x002E0000 + ((_DBQM) * 8)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define E830_GLQTX_TXTIME_DBELL_LSB_MAX_INDEX 16383
+#define E830_GLQTX_TXTIME_DBELL_LSB_QTX_TXTIME_DBELL_S 0
+#define E830_GLQTX_TXTIME_DBELL_LSB_QTX_TXTIME_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLQTX_TXTIME_DBELL_MSB(_DBQM) (0x002E0004 + ((_DBQM) * 8)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define E830_GLQTX_TXTIME_DBELL_MSB_MAX_INDEX 16383
+#define E830_GLQTX_TXTIME_DBELL_MSB_QTX_TXTIME_DBELL_S 0
+#define E830_GLQTX_TXTIME_DBELL_MSB_QTX_TXTIME_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTXTIME_DBL_COMP_WRR_MAX_CREDITS 0x002D320C /* Reset Source: CORER */
+#define E830_GLTXTIME_DBL_COMP_WRR_MAX_CREDITS_DBL_S 0
+#define E830_GLTXTIME_DBL_COMP_WRR_MAX_CREDITS_DBL_M MAKEMASK(0xFF, 0)
+#define E830_GLTXTIME_DBL_COMP_WRR_MAX_CREDITS_COMP_S 8
+#define E830_GLTXTIME_DBL_COMP_WRR_MAX_CREDITS_COMP_M MAKEMASK(0xFF, 8)
+#define E830_GLTXTIME_DBL_COMP_WRR_WEIGHTS 0x002D3210 /* Reset Source: CORER */
+#define E830_GLTXTIME_DBL_COMP_WRR_WEIGHTS_DBL_S 0
+#define E830_GLTXTIME_DBL_COMP_WRR_WEIGHTS_DBL_M MAKEMASK(0x3F, 0)
+#define E830_GLTXTIME_DBL_COMP_WRR_WEIGHTS_COMP_S 6
+#define E830_GLTXTIME_DBL_COMP_WRR_WEIGHTS_COMP_M MAKEMASK(0x3F, 6)
+#define E830_GLTXTIME_FETCH_PROFILE(_i, _j) (0x002D3500 + ((_i) * 4 + (_j) * 64)) /* _i=0...15, _j=0...15 */ /* Reset Source: CORER */
+#define E830_GLTXTIME_FETCH_PROFILE_MAX_INDEX 15
+#define E830_GLTXTIME_FETCH_PROFILE_FETCH_TS_DESC_S 0
+#define E830_GLTXTIME_FETCH_PROFILE_FETCH_TS_DESC_M MAKEMASK(0x1FF, 0)
+#define E830_GLTXTIME_FETCH_PROFILE_FETCH_FIFO_TRESH_S 9
+#define E830_GLTXTIME_FETCH_PROFILE_FETCH_FIFO_TRESH_M MAKEMASK(0x7F, 9)
+#define E830_GLTXTIME_OUTST_REQ_CNTL 0x002D3214 /* Reset Source: CORER */
+#define E830_GLTXTIME_OUTST_REQ_CNTL_THRESHOLD_S 0
+#define E830_GLTXTIME_OUTST_REQ_CNTL_THRESHOLD_M MAKEMASK(0x3FF, 0)
+#define E830_GLTXTIME_OUTST_REQ_CNTL_SNAPSHOT_S 10
+#define E830_GLTXTIME_OUTST_REQ_CNTL_SNAPSHOT_M MAKEMASK(0x3FF, 10)
+#define E830_GLTXTIME_QTX_CNTX_CTL 0x002D3204 /* Reset Source: CORER */
+#define E830_GLTXTIME_QTX_CNTX_CTL_QUEUE_ID_S 0
+#define E830_GLTXTIME_QTX_CNTX_CTL_QUEUE_ID_M MAKEMASK(0x7FF, 0)
+#define E830_GLTXTIME_QTX_CNTX_CTL_CMD_S 16
+#define E830_GLTXTIME_QTX_CNTX_CTL_CMD_M MAKEMASK(0x7, 16)
+#define E830_GLTXTIME_QTX_CNTX_CTL_CMD_EXEC_S 19
+#define E830_GLTXTIME_QTX_CNTX_CTL_CMD_EXEC_M BIT(19)
+#define E830_GLTXTIME_QTX_CNTX_DATA(_i) (0x002D3104 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define E830_GLTXTIME_QTX_CNTX_DATA_MAX_INDEX 6
+#define E830_GLTXTIME_QTX_CNTX_DATA_DATA_S 0
+#define E830_GLTXTIME_QTX_CNTX_DATA_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTXTIME_QTX_CNTX_STAT 0x002D3208 /* Reset Source: CORER */
+#define E830_GLTXTIME_QTX_CNTX_STAT_CMD_IN_PROG_S 0
+#define E830_GLTXTIME_QTX_CNTX_STAT_CMD_IN_PROG_M BIT(0)
+#define E830_GLTXTIME_TS_CFG 0x002D3100 /* Reset Source: CORER */
+#define E830_GLTXTIME_TS_CFG_TXTIME_ENABLE_S 0
+#define E830_GLTXTIME_TS_CFG_TXTIME_ENABLE_M BIT(0)
+#define E830_GLTXTIME_TS_CFG_STORAGE_MODE_S 2
+#define E830_GLTXTIME_TS_CFG_STORAGE_MODE_M MAKEMASK(0x7, 2)
+#define E830_GLTXTIME_TS_CFG_PIPE_LATENCY_STATIC_S 5
+#define E830_GLTXTIME_TS_CFG_PIPE_LATENCY_STATIC_M MAKEMASK(0x1FFF, 5)
+#define E830_MBX_PF_DEC_ERR 0x00234100 /* Reset Source: CORER */
+#define E830_MBX_PF_DEC_ERR_DEC_ERR_S 0
+#define E830_MBX_PF_DEC_ERR_DEC_ERR_M BIT(0)
+#define E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH 0x00234000 /* Reset Source: CORER */
+#define E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH_TRESH_S 0
+#define E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH_TRESH_M MAKEMASK(0x3FF, 0)
+#define E830_MBX_VF_DEC_TRIG(_VF) (0x00233800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define E830_MBX_VF_DEC_TRIG_MAX_INDEX 255
+#define E830_MBX_VF_DEC_TRIG_DEC_S 0
+#define E830_MBX_VF_DEC_TRIG_DEC_M MAKEMASK(0x3FF, 0)
+#define E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT(_VF) (0x00233000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT_MAX_INDEX 255
+#define E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT_MSGS_S 0
+#define E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT_MSGS_M MAKEMASK(0x3FF, 0)
+#define E830_GLRCB_AG_ARBITER_CONFIG 0x00122500 /* Reset Source: CORER */
+#define E830_GLRCB_AG_ARBITER_CONFIG_CREDIT_MAX_S 0
+#define E830_GLRCB_AG_ARBITER_CONFIG_CREDIT_MAX_M MAKEMASK(0xFFFFF, 0)
+#define E830_GLRCB_AG_DCB_ARBITER_CONFIG 0x00122518 /* Reset Source: CORER */
+#define E830_GLRCB_AG_DCB_ARBITER_CONFIG_CREDIT_MAX_S 0
+#define E830_GLRCB_AG_DCB_ARBITER_CONFIG_CREDIT_MAX_M MAKEMASK(0x7F, 0)
+#define E830_GLRCB_AG_DCB_ARBITER_CONFIG_STRICT_WRR_S 7
+#define E830_GLRCB_AG_DCB_ARBITER_CONFIG_STRICT_WRR_M BIT(7)
+#define E830_GLRCB_AG_DCB_NODE_CONFIG(_i) (0x00122510 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define E830_GLRCB_AG_DCB_NODE_CONFIG_MAX_INDEX 1
+#define E830_GLRCB_AG_DCB_NODE_CONFIG_BWSHARE_S 0
+#define E830_GLRCB_AG_DCB_NODE_CONFIG_BWSHARE_M MAKEMASK(0xF, 0)
+#define E830_GLRCB_AG_DCB_NODE_STATE(_i) (0x00122508 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define E830_GLRCB_AG_DCB_NODE_STATE_MAX_INDEX 1
+#define E830_GLRCB_AG_DCB_NODE_STATE_CREDITS_S 0
+#define E830_GLRCB_AG_DCB_NODE_STATE_CREDITS_M MAKEMASK(0xFF, 0)
+#define E830_GLRCB_AG_NODE_CONFIG(_i) (0x001224E0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GLRCB_AG_NODE_CONFIG_MAX_INDEX 7
+#define E830_GLRCB_AG_NODE_CONFIG_BWSHARE_S 0
+#define E830_GLRCB_AG_NODE_CONFIG_BWSHARE_M MAKEMASK(0x7F, 0)
+#define E830_GLRCB_AG_NODE_STATE(_i) (0x001224C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GLRCB_AG_NODE_STATE_MAX_INDEX 7
+#define E830_GLRCB_AG_NODE_STATE_CREDITS_S 0
+#define E830_GLRCB_AG_NODE_STATE_CREDITS_M MAKEMASK(0xFFFFF, 0)
+#define E830_PRT_AG_PORT_FC_MAP 0x00122520 /* Reset Source: CORER */
+#define E830_PRT_AG_PORT_FC_MAP_AG_BITMAP_S 0
+#define E830_PRT_AG_PORT_FC_MAP_AG_BITMAP_M MAKEMASK(0xFF, 0)
+#define E830_GL_FW_LOGS_CTL 0x000827F8 /* Reset Source: POR */
+#define E830_GL_FW_LOGS_CTL_PAGE_SELECT_S 0
+#define E830_GL_FW_LOGS_CTL_PAGE_SELECT_M MAKEMASK(0x3FF, 0)
+#define E830_GL_FW_LOGS_STS 0x000827FC /* Reset Source: POR */
+#define E830_GL_FW_LOGS_STS_MAX_PAGE_S 0
+#define E830_GL_FW_LOGS_STS_MAX_PAGE_M MAKEMASK(0x3FF, 0)
+#define E830_GL_FW_LOGS_STS_FW_LOGS_ENA_S 31
+#define E830_GL_FW_LOGS_STS_FW_LOGS_ENA_M BIT(31)
+#define E830_GLGEN_RTRIG_EMPR_WO_GLOBR_S 3
+#define E830_GLGEN_RTRIG_EMPR_WO_GLOBR_M BIT(3)
+#define E830_GLPE_TSCD_NUM_PQS 0x0051E2FC /* Reset Source: CORER */
+#define E830_GLPE_TSCD_NUM_PQS_NUM_PQS_S 0
+#define E830_GLPE_TSCD_NUM_PQS_NUM_PQS_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTPB_100G_RPB_FC_THRESH2 0x0009972C /* Reset Source: CORER */
+#define E830_GLTPB_100G_RPB_FC_THRESH2_PORT4_FC_THRESH_S 0
+#define E830_GLTPB_100G_RPB_FC_THRESH2_PORT4_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_GLTPB_100G_RPB_FC_THRESH2_PORT5_FC_THRESH_S 16
+#define E830_GLTPB_100G_RPB_FC_THRESH2_PORT5_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_GLTPB_100G_RPB_FC_THRESH3 0x00099730 /* Reset Source: CORER */
+#define E830_GLTPB_100G_RPB_FC_THRESH3_PORT6_FC_THRESH_S 0
+#define E830_GLTPB_100G_RPB_FC_THRESH3_PORT6_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_GLTPB_100G_RPB_FC_THRESH3_PORT7_FC_THRESH_S 16
+#define E830_GLTPB_100G_RPB_FC_THRESH3_PORT7_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_PORT_TIMER_SEL(_i) (0x00088BE0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_PORT_TIMER_SEL_MAX_INDEX 7
+#define E830_PORT_TIMER_SEL_TIMER_SEL_S 0
+#define E830_PORT_TIMER_SEL_TIMER_SEL_M BIT(0)
+#define E830_GLINT_FW_DCF_CTL(_i) (0x0016CFD4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GLINT_FW_DCF_CTL_MAX_INDEX 7
+#define E830_GLINT_FW_DCF_CTL_MSIX_INDX_S 0
+#define E830_GLINT_FW_DCF_CTL_MSIX_INDX_M MAKEMASK(0x7FF, 0)
+#define E830_GLINT_FW_DCF_CTL_ITR_INDX_S 11
+#define E830_GLINT_FW_DCF_CTL_ITR_INDX_M MAKEMASK(0x3, 11)
+#define E830_GLINT_FW_DCF_CTL_CAUSE_ENA_S 30
+#define E830_GLINT_FW_DCF_CTL_CAUSE_ENA_M BIT(30)
+#define E830_GLINT_FW_DCF_CTL_INTEVENT_S 31
+#define E830_GLINT_FW_DCF_CTL_INTEVENT_M BIT(31)
+#define E830_GL_MDET_RX_FIFO 0x00296840 /* Reset Source: CORER */
+#define E830_GL_MDET_RX_FIFO_FUNC_NUM_S 0
+#define E830_GL_MDET_RX_FIFO_FUNC_NUM_M MAKEMASK(0x3FF, 0)
+#define E830_GL_MDET_RX_FIFO_PF_NUM_S 10
+#define E830_GL_MDET_RX_FIFO_PF_NUM_M MAKEMASK(0x7, 10)
+#define E830_GL_MDET_RX_FIFO_FUNC_TYPE_S 13
+#define E830_GL_MDET_RX_FIFO_FUNC_TYPE_M MAKEMASK(0x3, 13)
+#define E830_GL_MDET_RX_FIFO_MAL_TYPE_S 15
+#define E830_GL_MDET_RX_FIFO_MAL_TYPE_M MAKEMASK(0x1F, 15)
+#define E830_GL_MDET_RX_FIFO_FIFO_FULL_S 20
+#define E830_GL_MDET_RX_FIFO_FIFO_FULL_M BIT(20)
+#define E830_GL_MDET_RX_FIFO_VALID_S 21
+#define E830_GL_MDET_RX_FIFO_VALID_M BIT(21)
+#define E830_GL_MDET_RX_PF_CNT(_i) (0x00296800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GL_MDET_RX_PF_CNT_MAX_INDEX 7
+#define E830_GL_MDET_RX_PF_CNT_CNT_S 0
+#define E830_GL_MDET_RX_PF_CNT_CNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_MDET_RX_VF(_i) (0x00296820 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GL_MDET_RX_VF_MAX_INDEX 7
+#define E830_GL_MDET_RX_VF_VF_MAL_EVENT_S 0
+#define E830_GL_MDET_RX_VF_VF_MAL_EVENT_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_MDET_TX_PQM_FIFO 0x002D4B00 /* Reset Source: CORER */
+#define E830_GL_MDET_TX_PQM_FIFO_FUNC_NUM_S 0
+#define E830_GL_MDET_TX_PQM_FIFO_FUNC_NUM_M MAKEMASK(0x3FF, 0)
+#define E830_GL_MDET_TX_PQM_FIFO_PF_NUM_S 10
+#define E830_GL_MDET_TX_PQM_FIFO_PF_NUM_M MAKEMASK(0x7, 10)
+#define E830_GL_MDET_TX_PQM_FIFO_FUNC_TYPE_S 13
+#define E830_GL_MDET_TX_PQM_FIFO_FUNC_TYPE_M MAKEMASK(0x3, 13)
+#define E830_GL_MDET_TX_PQM_FIFO_MAL_TYPE_S 15
+#define E830_GL_MDET_TX_PQM_FIFO_MAL_TYPE_M MAKEMASK(0x1F, 15)
+#define E830_GL_MDET_TX_PQM_FIFO_FIFO_FULL_S 20
+#define E830_GL_MDET_TX_PQM_FIFO_FIFO_FULL_M BIT(20)
+#define E830_GL_MDET_TX_PQM_FIFO_VALID_S 21
+#define E830_GL_MDET_TX_PQM_FIFO_VALID_M BIT(21)
+#define E830_GL_MDET_TX_PQM_PF_CNT(_i) (0x002D4AC0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GL_MDET_TX_PQM_PF_CNT_MAX_INDEX 7
+#define E830_GL_MDET_TX_PQM_PF_CNT_CNT_S 0
+#define E830_GL_MDET_TX_PQM_PF_CNT_CNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_MDET_TX_PQM_VF(_i) (0x002D4AE0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GL_MDET_TX_PQM_VF_MAX_INDEX 7
+#define E830_GL_MDET_TX_PQM_VF_VF_MAL_EVENT_S 0
+#define E830_GL_MDET_TX_PQM_VF_VF_MAL_EVENT_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_MDET_TX_TCLAN_FIFO 0x000FD000 /* Reset Source: CORER */
+#define E830_GL_MDET_TX_TCLAN_FIFO_FUNC_NUM_S 0
+#define E830_GL_MDET_TX_TCLAN_FIFO_FUNC_NUM_M MAKEMASK(0x3FF, 0)
+#define E830_GL_MDET_TX_TCLAN_FIFO_PF_NUM_S 10
+#define E830_GL_MDET_TX_TCLAN_FIFO_PF_NUM_M MAKEMASK(0x7, 10)
+#define E830_GL_MDET_TX_TCLAN_FIFO_FUNC_TYPE_S 13
+#define E830_GL_MDET_TX_TCLAN_FIFO_FUNC_TYPE_M MAKEMASK(0x3, 13)
+#define E830_GL_MDET_TX_TCLAN_FIFO_MAL_TYPE_S 15
+#define E830_GL_MDET_TX_TCLAN_FIFO_MAL_TYPE_M MAKEMASK(0x1F, 15)
+#define E830_GL_MDET_TX_TCLAN_FIFO_FIFO_FULL_S 20
+#define E830_GL_MDET_TX_TCLAN_FIFO_FIFO_FULL_M BIT(20)
+#define E830_GL_MDET_TX_TCLAN_FIFO_VALID_S 21
+#define E830_GL_MDET_TX_TCLAN_FIFO_VALID_M BIT(21)
+#define E830_GL_MDET_TX_TCLAN_PF_CNT(_i) (0x000FCFC0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GL_MDET_TX_TCLAN_PF_CNT_MAX_INDEX 7
+#define E830_GL_MDET_TX_TCLAN_PF_CNT_CNT_S 0
+#define E830_GL_MDET_TX_TCLAN_PF_CNT_CNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_MDET_TX_TCLAN_VF(_i) (0x000FCFE0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GL_MDET_TX_TCLAN_VF_MAX_INDEX 7
+#define E830_GL_MDET_TX_TCLAN_VF_VF_MAL_EVENT_S 0
+#define E830_GL_MDET_TX_TCLAN_VF_VF_MAL_EVENT_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_MDET_TX_TDPU_FIFO 0x00049D80 /* Reset Source: CORER */
+#define E830_GL_MDET_TX_TDPU_FIFO_FUNC_NUM_S 0
+#define E830_GL_MDET_TX_TDPU_FIFO_FUNC_NUM_M MAKEMASK(0x3FF, 0)
+#define E830_GL_MDET_TX_TDPU_FIFO_PF_NUM_S 10
+#define E830_GL_MDET_TX_TDPU_FIFO_PF_NUM_M MAKEMASK(0x7, 10)
+#define E830_GL_MDET_TX_TDPU_FIFO_FUNC_TYPE_S 13
+#define E830_GL_MDET_TX_TDPU_FIFO_FUNC_TYPE_M MAKEMASK(0x3, 13)
+#define E830_GL_MDET_TX_TDPU_FIFO_MAL_TYPE_S 15
+#define E830_GL_MDET_TX_TDPU_FIFO_MAL_TYPE_M MAKEMASK(0x1F, 15)
+#define E830_GL_MDET_TX_TDPU_FIFO_FIFO_FULL_S 20
+#define E830_GL_MDET_TX_TDPU_FIFO_FIFO_FULL_M BIT(20)
+#define E830_GL_MDET_TX_TDPU_FIFO_VALID_S 21
+#define E830_GL_MDET_TX_TDPU_FIFO_VALID_M BIT(21)
+#define E830_GL_MDET_TX_TDPU_PF_CNT(_i) (0x00049D40 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GL_MDET_TX_TDPU_PF_CNT_MAX_INDEX 7
+#define E830_GL_MDET_TX_TDPU_PF_CNT_CNT_S 0
+#define E830_GL_MDET_TX_TDPU_PF_CNT_CNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_MDET_TX_TDPU_VF(_i) (0x00049D60 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GL_MDET_TX_TDPU_VF_MAX_INDEX 7
+#define E830_GL_MDET_TX_TDPU_VF_VF_MAL_EVENT_S 0
+#define E830_GL_MDET_TX_TDPU_VF_VF_MAL_EVENT_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_MNG_ECDSA_PUBKEY(_i) (0x00083300 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: EMPR */
+#define E830_GL_MNG_ECDSA_PUBKEY_MAX_INDEX 11
+#define E830_GL_MNG_ECDSA_PUBKEY_GL_MNG_ECDSA_PUBKEY_S 0
+#define E830_GL_MNG_ECDSA_PUBKEY_GL_MNG_ECDSA_PUBKEY_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_PPRS_RX_SIZE_CTRL_0(_i) (0x00084900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define E830_GL_PPRS_RX_SIZE_CTRL_0_MAX_INDEX 1
+#define E830_GL_PPRS_RX_SIZE_CTRL_0_MAX_HEADER_SIZE_S 16
+#define E830_GL_PPRS_RX_SIZE_CTRL_0_MAX_HEADER_SIZE_M MAKEMASK(0x3FF, 16)
+#define E830_GL_PPRS_RX_SIZE_CTRL_1(_i) (0x00085900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define E830_GL_PPRS_RX_SIZE_CTRL_1_MAX_INDEX 1
+#define E830_GL_PPRS_RX_SIZE_CTRL_1_MAX_HEADER_SIZE_S 16
+#define E830_GL_PPRS_RX_SIZE_CTRL_1_MAX_HEADER_SIZE_M MAKEMASK(0x3FF, 16)
+#define E830_GL_PPRS_RX_SIZE_CTRL_2(_i) (0x00086900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define E830_GL_PPRS_RX_SIZE_CTRL_2_MAX_INDEX 1
+#define E830_GL_PPRS_RX_SIZE_CTRL_2_MAX_HEADER_SIZE_S 16
+#define E830_GL_PPRS_RX_SIZE_CTRL_2_MAX_HEADER_SIZE_M MAKEMASK(0x3FF, 16)
+#define E830_GL_PPRS_RX_SIZE_CTRL_3(_i) (0x00087900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define E830_GL_PPRS_RX_SIZE_CTRL_3_MAX_INDEX 1
+#define E830_GL_PPRS_RX_SIZE_CTRL_3_MAX_HEADER_SIZE_S 16
+#define E830_GL_PPRS_RX_SIZE_CTRL_3_MAX_HEADER_SIZE_M MAKEMASK(0x3FF, 16)
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_IP 0x00200740 /* Reset Source: CORER */
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_IP_IPV4_PROT_ID_0_S 0
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_IP_IPV4_PROT_ID_0_M MAKEMASK(0xFF, 0)
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_IP_IPV4_PROT_ID_1_S 8
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_IP_IPV4_PROT_ID_1_M MAKEMASK(0xFF, 8)
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_IP_IPV6_PROT_ID_0_S 16
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_IP_IPV6_PROT_ID_0_M MAKEMASK(0xFF, 16)
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_IP_IPV6_PROT_ID_1_S 24
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_IP_IPV6_PROT_ID_1_M MAKEMASK(0xFF, 24)
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_UDP_TCP 0x00200744 /* Reset Source: CORER */
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_UDP_TCP_TCP_PROT_ID_0_S 0
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_UDP_TCP_TCP_PROT_ID_0_M MAKEMASK(0xFF, 0)
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_UDP_TCP_TCP_PROT_ID_1_S 8
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_UDP_TCP_TCP_PROT_ID_1_M MAKEMASK(0xFF, 8)
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_UDP_TCP_UDP_PROT_ID_0_S 16
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_UDP_TCP_UDP_PROT_ID_0_M MAKEMASK(0xFF, 16)
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_UDP_TCP_UDP_PROT_ID_1_S 24
+#define E830_GL_RPRS_CSUM_PROT_ID_CFG_UDP_TCP_UDP_PROT_ID_1_M MAKEMASK(0xFF, 24)
+#define E830_GL_RPRS_PROT_ID_MAP(_i) (0x00200800 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define E830_GL_RPRS_PROT_ID_MAP_MAX_INDEX 255
+#define E830_GL_RPRS_PROT_ID_MAP_PROT_ID0_S 0
+#define E830_GL_RPRS_PROT_ID_MAP_PROT_ID0_M MAKEMASK(0xFF, 0)
+#define E830_GL_RPRS_PROT_ID_MAP_PROT_ID1_S 8
+#define E830_GL_RPRS_PROT_ID_MAP_PROT_ID1_M MAKEMASK(0xFF, 8)
+#define E830_GL_RPRS_PROT_ID_MAP_PROT_ID2_S 16
+#define E830_GL_RPRS_PROT_ID_MAP_PROT_ID2_M MAKEMASK(0xFF, 16)
+#define E830_GL_RPRS_PROT_ID_MAP_PROT_ID3_S 24
+#define E830_GL_RPRS_PROT_ID_MAP_PROT_ID3_M MAKEMASK(0xFF, 24)
+#define E830_GL_RPRS_PROT_ID_MAP_PRFL(_i) (0x00201000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define E830_GL_RPRS_PROT_ID_MAP_PRFL_MAX_INDEX 63
+#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_0_S 0
+#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_0_M MAKEMASK(0x3, 0)
+#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_1_S 2
+#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_1_M MAKEMASK(0x3, 2)
+#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_2_S 4
+#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_2_M MAKEMASK(0x3, 4)
+#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_3_S
6 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_3_M MAKEMASK(0x3, 6) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_4_S 8 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_4_M MAKEMASK(0x3, 8) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_5_S 10 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_5_M MAKEMASK(0x3, 10) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_6_S 12 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_6_M MAKEMASK(0x3, 12) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_7_S 14 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_7_M MAKEMASK(0x3, 14) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_8_S 16 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_8_M MAKEMASK(0x3, 16) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_9_S 18 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_9_M MAKEMASK(0x3, 18) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_10_S 20 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_10_M MAKEMASK(0x3, 20) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_11_S 22 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_11_M MAKEMASK(0x3, 22) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_12_S 24 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_12_M MAKEMASK(0x3, 24) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_13_S 26 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_13_M MAKEMASK(0x3, 26) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_14_S 28 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_14_M MAKEMASK(0x3, 28) +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_15_S 30 +#define E830_GL_RPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_15_M MAKEMASK(0x3, 30) +#define E830_GL_RPRS_VALIDATE_CHECKS_CTL 0x00200748 /* Reset Source: CORER */ +#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_UDP_LEN_0_EN_S 0 +#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_UDP_LEN_0_EN_M BIT(0) +#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_UDP_LEN_1_EN_S 1 +#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_UDP_LEN_1_EN_M BIT(1) 
+#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_LEN_0_S 2
+#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_LEN_0_M BIT(2)
+#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_LEN_1_S 3
+#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_LEN_1_M BIT(3)
+#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_L4_COHERENT_0_S 4
+#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_L4_COHERENT_0_M BIT(4)
+#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_L4_COHERENT_1_S 5
+#define E830_GL_RPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_L4_COHERENT_1_M BIT(5)
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_IP 0x00203A04 /* Reset Source: CORER */
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_IP_IPV4_PROT_ID_0_S 0
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_IP_IPV4_PROT_ID_0_M MAKEMASK(0xFF, 0)
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_IP_IPV4_PROT_ID_1_S 8
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_IP_IPV4_PROT_ID_1_M MAKEMASK(0xFF, 8)
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_IP_IPV6_PROT_ID_0_S 16
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_IP_IPV6_PROT_ID_0_M MAKEMASK(0xFF, 16)
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_IP_IPV6_PROT_ID_1_S 24
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_IP_IPV6_PROT_ID_1_M MAKEMASK(0xFF, 24)
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_UDP_TCP 0x00203A08 /* Reset Source: CORER */
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_UDP_TCP_TCP_PROT_ID_0_S 0
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_UDP_TCP_TCP_PROT_ID_0_M MAKEMASK(0xFF, 0)
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_UDP_TCP_TCP_PROT_ID_1_S 8
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_UDP_TCP_TCP_PROT_ID_1_M MAKEMASK(0xFF, 8)
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_UDP_TCP_UDP_PROT_ID_0_S 16
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_UDP_TCP_UDP_PROT_ID_0_M MAKEMASK(0xFF, 16)
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_UDP_TCP_UDP_PROT_ID_1_S 24
+#define E830_GL_TPRS_CSUM_PROT_ID_CFG_UDP_TCP_UDP_PROT_ID_1_M MAKEMASK(0xFF, 24)
+#define E830_GL_TPRS_PROT_ID_MAP(_i) (0x00202200 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define E830_GL_TPRS_PROT_ID_MAP_MAX_INDEX 255
+#define E830_GL_TPRS_PROT_ID_MAP_PROT_ID0_S 0
+#define E830_GL_TPRS_PROT_ID_MAP_PROT_ID0_M MAKEMASK(0xFF, 0)
+#define E830_GL_TPRS_PROT_ID_MAP_PROT_ID1_S 8
+#define E830_GL_TPRS_PROT_ID_MAP_PROT_ID1_M MAKEMASK(0xFF, 8)
+#define E830_GL_TPRS_PROT_ID_MAP_PROT_ID2_S 16
+#define E830_GL_TPRS_PROT_ID_MAP_PROT_ID2_M MAKEMASK(0xFF, 16)
+#define E830_GL_TPRS_PROT_ID_MAP_PROT_ID3_S 24
+#define E830_GL_TPRS_PROT_ID_MAP_PROT_ID3_M MAKEMASK(0xFF, 24)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL(_i) (0x00202A00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_MAX_INDEX 63
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_0_S 0
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_0_M MAKEMASK(0x3, 0)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_1_S 2
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_1_M MAKEMASK(0x3, 2)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_2_S 4
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_2_M MAKEMASK(0x3, 4)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_3_S 6
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_3_M MAKEMASK(0x3, 6)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_4_S 8
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_4_M MAKEMASK(0x3, 8)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_5_S 10
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_5_M MAKEMASK(0x3, 10)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_6_S 12
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_6_M MAKEMASK(0x3, 12)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_7_S 14
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_7_M MAKEMASK(0x3, 14)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_8_S 16
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_8_M MAKEMASK(0x3, 16)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_9_S 18
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_9_M MAKEMASK(0x3, 18)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_10_S 20
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_10_M MAKEMASK(0x3, 20)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_11_S 22
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_11_M MAKEMASK(0x3, 22)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_12_S 24
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_12_M MAKEMASK(0x3, 24)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_13_S 26
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_13_M MAKEMASK(0x3, 26)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_14_S 28
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_14_M MAKEMASK(0x3, 28)
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_15_S 30
+#define E830_GL_TPRS_PROT_ID_MAP_PRFL_PTYPE_PRFL_15_M MAKEMASK(0x3, 30)
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL 0x00203A00 /* Reset Source: CORER */
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_UDP_LEN_0_EN_S 0
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_UDP_LEN_0_EN_M BIT(0)
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_UDP_LEN_1_EN_S 1
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_UDP_LEN_1_EN_M BIT(1)
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_LEN_0_S 2
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_LEN_0_M BIT(2)
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_LEN_1_S 3
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_LEN_1_M BIT(3)
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_L4_COHERENT_0_S 4
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_L4_COHERENT_0_M BIT(4)
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_L4_COHERENT_1_S 5
+#define E830_GL_TPRS_VALIDATE_CHECKS_CTL_VALIDATE_L3_L4_COHERENT_1_M BIT(5)
+#define E830_PRT_TDPU_TX_SIZE_CTRL 0x00049D20 /* Reset Source: CORER */
+#define E830_PRT_TDPU_TX_SIZE_CTRL_MAX_HEADER_SIZE_S 16
+#define E830_PRT_TDPU_TX_SIZE_CTRL_MAX_HEADER_SIZE_M MAKEMASK(0x3FF, 16)
+#define E830_PRT_TPB_RX_LB_SIZE_CTRL 0x00099740 /* Reset Source: CORER */
+#define E830_PRT_TPB_RX_LB_SIZE_CTRL_MAX_HEADER_SIZE_S 16
+#define E830_PRT_TPB_RX_LB_SIZE_CTRL_MAX_HEADER_SIZE_M MAKEMASK(0x3FF, 16)
+#define E830_GLNVM_AL_DONE_HLP_PAGE 0x02D004B0 /* Reset Source: POR */
+#define E830_GLNVM_AL_DONE_HLP_PAGE_HLP_CORER_S 0
+#define E830_GLNVM_AL_DONE_HLP_PAGE_HLP_CORER_M BIT(0)
+#define E830_GLNVM_AL_DONE_HLP_PAGE_HLP_FULLR_S 1
+#define E830_GLNVM_AL_DONE_HLP_PAGE_HLP_FULLR_M BIT(1)
+#define E830_GLQTX_TXTIME_DBELL_LSB_PAGE(_DBQM) (0x04000008 + ((_DBQM) * 4096)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define E830_GLQTX_TXTIME_DBELL_LSB_PAGE_MAX_INDEX 16383
+#define E830_GLQTX_TXTIME_DBELL_LSB_PAGE_QTX_TXTIME_DBELL_S 0
+#define E830_GLQTX_TXTIME_DBELL_LSB_PAGE_QTX_TXTIME_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLQTX_TXTIME_DBELL_MSB_PAGE(_DBQM) (0x0400000C + ((_DBQM) * 4096)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define E830_GLQTX_TXTIME_DBELL_MSB_PAGE_MAX_INDEX 16383
+#define E830_GLQTX_TXTIME_DBELL_MSB_PAGE_QTX_TXTIME_DBELL_S 0
+#define E830_GLQTX_TXTIME_DBELL_MSB_PAGE_QTX_TXTIME_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_PF0INT_OICR_CPM_PAGE_PTM_COMP_S 8
+#define E830_PF0INT_OICR_CPM_PAGE_PTM_COMP_M BIT(8)
+#define E830_PF0INT_OICR_CPM_PAGE_RSV4_S 9
+#define E830_PF0INT_OICR_CPM_PAGE_RSV4_M BIT(9)
+#define E830_PF0INT_OICR_CPM_PAGE_RSV5_S 10
+#define E830_PF0INT_OICR_CPM_PAGE_RSV5_M BIT(10)
+#define E830_PF0INT_OICR_HLP_PAGE_PTM_COMP_S 8
+#define E830_PF0INT_OICR_HLP_PAGE_PTM_COMP_M BIT(8)
+#define E830_PF0INT_OICR_HLP_PAGE_RSV4_S 9
+#define E830_PF0INT_OICR_HLP_PAGE_RSV4_M BIT(9)
+#define E830_PF0INT_OICR_HLP_PAGE_RSV5_S 10
+#define E830_PF0INT_OICR_HLP_PAGE_RSV5_M BIT(10)
+#define E830_PF0INT_OICR_PSM_PAGE_PTM_COMP_S 8
+#define E830_PF0INT_OICR_PSM_PAGE_PTM_COMP_M BIT(8)
+#define E830_PF0INT_OICR_PSM_PAGE_RSV4_S 9
+#define E830_PF0INT_OICR_PSM_PAGE_RSV4_M BIT(9)
+#define E830_PF0INT_OICR_PSM_PAGE_RSV5_S 10
+#define E830_PF0INT_OICR_PSM_PAGE_RSV5_M BIT(10)
+#define E830_GL_HIBA(_i) (0x00081000 + ((_i) * 4)) /* _i=0...1023 */ /* Reset Source: EMPR */
+#define E830_GL_HIBA_MAX_INDEX 1023
+#define E830_GL_HIBA_GL_HIBA_S 0
+#define E830_GL_HIBA_GL_HIBA_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_HICR 0x00082040 /* Reset Source: EMPR */
+#define E830_GL_HICR_C_S 1
+#define E830_GL_HICR_C_M BIT(1)
+#define E830_GL_HICR_SV_S 2
+#define E830_GL_HICR_SV_M BIT(2)
+#define E830_GL_HICR_EV_S 3
+#define E830_GL_HICR_EV_M BIT(3)
+#define E830_GL_HICR_EN 0x00082044 /* Reset Source: EMPR */
+#define E830_GL_HICR_EN_EN_S 0
+#define E830_GL_HICR_EN_EN_M BIT(0)
+#define E830_GL_HIDA(_i) (0x00082000 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: EMPR */
+#define E830_GL_HIDA_MAX_INDEX 15
+#define E830_GL_HIDA_GL_HIDB_S 0
+#define E830_GL_HIDA_GL_HIDB_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLFLXP_RXDID_FLX_WRD_0_SPARE_S 18
+#define E830_GLFLXP_RXDID_FLX_WRD_0_SPARE_M MAKEMASK(0xF, 18)
+#define E830_GLFLXP_RXDID_FLX_WRD_1_SPARE_S 18
+#define E830_GLFLXP_RXDID_FLX_WRD_1_SPARE_M MAKEMASK(0xF, 18)
+#define E830_GLFLXP_RXDID_FLX_WRD_2_SPARE_S 18
+#define E830_GLFLXP_RXDID_FLX_WRD_2_SPARE_M MAKEMASK(0xF, 18)
+#define E830_GLFLXP_RXDID_FLX_WRD_3_SPARE_S 18
+#define E830_GLFLXP_RXDID_FLX_WRD_3_SPARE_M MAKEMASK(0xF, 18)
+#define E830_GLFLXP_RXDID_FLX_WRD_4_SPARE_S 18
+#define E830_GLFLXP_RXDID_FLX_WRD_4_SPARE_M MAKEMASK(0xF, 18)
+#define E830_GLFLXP_RXDID_FLX_WRD_5_SPARE_S 18
+#define E830_GLFLXP_RXDID_FLX_WRD_5_SPARE_M MAKEMASK(0xF, 18)
+#define E830_GLFLXP_RXDID_FLX_WRD_6(_i) (0x0045CE00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define E830_GLFLXP_RXDID_FLX_WRD_6_MAX_INDEX 63
+#define E830_GLFLXP_RXDID_FLX_WRD_6_PROT_MDID_S 0
+#define E830_GLFLXP_RXDID_FLX_WRD_6_PROT_MDID_M MAKEMASK(0xFF, 0)
+#define E830_GLFLXP_RXDID_FLX_WRD_6_EXTRACTION_OFFSET_S 8
+#define E830_GLFLXP_RXDID_FLX_WRD_6_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define E830_GLFLXP_RXDID_FLX_WRD_6_L2TAG_OVRD_EN_S 18
+#define E830_GLFLXP_RXDID_FLX_WRD_6_L2TAG_OVRD_EN_M BIT(18)
+#define E830_GLFLXP_RXDID_FLX_WRD_6_SPARE_S 19
+#define E830_GLFLXP_RXDID_FLX_WRD_6_SPARE_M MAKEMASK(0x7, 19)
+#define E830_GLFLXP_RXDID_FLX_WRD_6_RXDID_OPCODE_S 30
+#define E830_GLFLXP_RXDID_FLX_WRD_6_RXDID_OPCODE_M MAKEMASK(0x3, 30)
+#define E830_GLFLXP_RXDID_FLX_WRD_7(_i) (0x0045CF00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define E830_GLFLXP_RXDID_FLX_WRD_7_MAX_INDEX 63
+#define E830_GLFLXP_RXDID_FLX_WRD_7_PROT_MDID_S 0
+#define E830_GLFLXP_RXDID_FLX_WRD_7_PROT_MDID_M MAKEMASK(0xFF, 0)
+#define E830_GLFLXP_RXDID_FLX_WRD_7_EXTRACTION_OFFSET_S 8
+#define E830_GLFLXP_RXDID_FLX_WRD_7_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define E830_GLFLXP_RXDID_FLX_WRD_7_L2TAG_OVRD_EN_S 18
+#define E830_GLFLXP_RXDID_FLX_WRD_7_L2TAG_OVRD_EN_M BIT(18)
+#define E830_GLFLXP_RXDID_FLX_WRD_7_SPARE_S 19
+#define E830_GLFLXP_RXDID_FLX_WRD_7_SPARE_M MAKEMASK(0x7, 19)
+#define E830_GLFLXP_RXDID_FLX_WRD_7_RXDID_OPCODE_S 30
+#define E830_GLFLXP_RXDID_FLX_WRD_7_RXDID_OPCODE_M MAKEMASK(0x3, 30)
+#define E830_GLFLXP_RXDID_FLX_WRD_8(_i) (0x0045D500 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define E830_GLFLXP_RXDID_FLX_WRD_8_MAX_INDEX 63
+#define E830_GLFLXP_RXDID_FLX_WRD_8_PROT_MDID_S 0
+#define E830_GLFLXP_RXDID_FLX_WRD_8_PROT_MDID_M MAKEMASK(0xFF, 0)
+#define E830_GLFLXP_RXDID_FLX_WRD_8_EXTRACTION_OFFSET_S 8
+#define E830_GLFLXP_RXDID_FLX_WRD_8_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define E830_GLFLXP_RXDID_FLX_WRD_8_L2TAG_OVRD_EN_S 18
+#define E830_GLFLXP_RXDID_FLX_WRD_8_L2TAG_OVRD_EN_M BIT(18)
+#define E830_GLFLXP_RXDID_FLX_WRD_8_SPARE_S 19
+#define E830_GLFLXP_RXDID_FLX_WRD_8_SPARE_M MAKEMASK(0x7, 19)
+#define E830_GLFLXP_RXDID_FLX_WRD_8_RXDID_OPCODE_S 30
+#define E830_GLFLXP_RXDID_FLX_WRD_8_RXDID_OPCODE_M MAKEMASK(0x3, 30)
+#define E830_GL_FW_LOGS(_i) (0x00082800 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: POR */
+#define E830_GL_FW_LOGS_MAX_INDEX 255
+#define E830_GL_FW_LOGS_GL_FW_LOGS_S 0
+#define E830_GL_FW_LOGS_GL_FW_LOGS_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GL_FWSTS_FWABS_S 10
+#define E830_GL_FWSTS_FWABS_M MAKEMASK(0x3, 10)
+#define E830_GL_FWSTS_FW_FAILOVER_TRIG_S 12
+#define E830_GL_FWSTS_FW_FAILOVER_TRIG_M BIT(12)
+#define E830_GLGEN_RSTAT_EMPR_WO_GLOBR_CNT_S 19
+#define E830_GLGEN_RSTAT_EMPR_WO_GLOBR_CNT_M MAKEMASK(0x3, 19)
+#define E830_GLPCI_PLATFORM_INFO 0x0009DDC4 /* Reset Source: POR */
+#define E830_GLPCI_PLATFORM_INFO_PLATFORM_TYPE_S 0
+#define E830_GLPCI_PLATFORM_INFO_PLATFORM_TYPE_M MAKEMASK(0xFF, 0)
+#define E830_GL_MDCK_TDAT_TCLAN_DESC_TYPE_ACL_DTYPE_NOT_ALLOWED_S 21
+#define E830_GL_MDCK_TDAT_TCLAN_DESC_TYPE_ACL_DTYPE_NOT_ALLOWED_M BIT(21)
+#define E830_GL_TPB_LOCAL_TOPO 0x000996F4 /* Reset Source: CORER */
+#define E830_GL_TPB_LOCAL_TOPO_ALLOW_TOPO_OVERRIDE_S 0
+#define E830_GL_TPB_LOCAL_TOPO_ALLOW_TOPO_OVERRIDE_M BIT(0)
+#define E830_GL_TPB_LOCAL_TOPO_TOPO_VAL_S 1
+#define E830_GL_TPB_LOCAL_TOPO_TOPO_VAL_M MAKEMASK(0x3, 1)
+#define E830_GL_TPB_PM_RESET 0x000996F0 /* Reset Source: CORER */
+#define E830_GL_TPB_PM_RESET_MAC_PM_RESET_S 0
+#define E830_GL_TPB_PM_RESET_MAC_PM_RESET_M BIT(0)
+#define E830_GL_TPB_PM_RESET_RPB_PM_RESET_S 1
+#define E830_GL_TPB_PM_RESET_RPB_PM_RESET_M BIT(1)
+#define E830_GLTPB_100G_MAC_FC_THRESH1 0x00099724 /* Reset Source: CORER */
+#define E830_GLTPB_100G_MAC_FC_THRESH1_PORT2_FC_THRESH_S 0
+#define E830_GLTPB_100G_MAC_FC_THRESH1_PORT2_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_GLTPB_100G_MAC_FC_THRESH1_PORT3_FC_THRESH_S 16
+#define E830_GLTPB_100G_MAC_FC_THRESH1_PORT3_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_GLTPB_100G_RPB_FC_THRESH0 0x0009963C /* Reset Source: CORER */
+#define E830_GLTPB_100G_RPB_FC_THRESH0_PORT0_FC_THRESH_S 0
+#define E830_GLTPB_100G_RPB_FC_THRESH0_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_GLTPB_100G_RPB_FC_THRESH0_PORT1_FC_THRESH_S 16
+#define E830_GLTPB_100G_RPB_FC_THRESH0_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_GLTPB_100G_RPB_FC_THRESH1 0x00099728 /* Reset Source: CORER */
+#define E830_GLTPB_100G_RPB_FC_THRESH1_PORT2_FC_THRESH_S 0
+#define E830_GLTPB_100G_RPB_FC_THRESH1_PORT2_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_GLTPB_100G_RPB_FC_THRESH1_PORT3_FC_THRESH_S 16
+#define E830_GLTPB_100G_RPB_FC_THRESH1_PORT3_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_GL_UFUSE_SOC_MAX_PORT_SPEED_S 12
+#define E830_GL_UFUSE_SOC_MAX_PORT_SPEED_M MAKEMASK(0xFFFF, 12)
+#define E830_PF0INT_OICR_CPM_PTM_COMP_S 8
+#define E830_PF0INT_OICR_CPM_PTM_COMP_M BIT(8)
+#define E830_PF0INT_OICR_CPM_RSV4_S 9
+#define E830_PF0INT_OICR_CPM_RSV4_M BIT(9)
+#define E830_PF0INT_OICR_CPM_RSV5_S 10
+#define E830_PF0INT_OICR_CPM_RSV5_M BIT(10)
+#define E830_PF0INT_OICR_HLP_PTM_COMP_S 8
+#define E830_PF0INT_OICR_HLP_PTM_COMP_M BIT(8)
+#define E830_PF0INT_OICR_HLP_RSV4_S 9
+#define E830_PF0INT_OICR_HLP_RSV4_M BIT(9)
+#define E830_PF0INT_OICR_HLP_RSV5_S 10
+#define E830_PF0INT_OICR_HLP_RSV5_M BIT(10)
+#define E830_PF0INT_OICR_PSM_PTM_COMP_S 8
+#define E830_PF0INT_OICR_PSM_PTM_COMP_M BIT(8)
+#define E830_PF0INT_OICR_PSM_RSV4_S 9
+#define E830_PF0INT_OICR_PSM_RSV4_M BIT(9)
+#define E830_PF0INT_OICR_PSM_RSV5_S 10
+#define E830_PF0INT_OICR_PSM_RSV5_M BIT(10)
+#define E830_PFINT_OICR_PTM_COMP_S 8
+#define E830_PFINT_OICR_PTM_COMP_M BIT(8)
+#define E830_PFINT_OICR_RSV4_S 9
+#define E830_PFINT_OICR_RSV4_M BIT(9)
+#define E830_PFINT_OICR_RSV5_S 10
+#define E830_PFINT_OICR_RSV5_M BIT(10)
+#define E830_GLQF_FLAT_QTABLE(_i) (0x00488000 + ((_i) * 4)) /* _i=0...6143 */ /* Reset Source: CORER */
+#define E830_GLQF_FLAT_QTABLE_MAX_INDEX 6143
+#define E830_GLQF_FLAT_QTABLE_QINDEX_0_S 0
+#define E830_GLQF_FLAT_QTABLE_QINDEX_0_M MAKEMASK(0x7FF, 0)
+#define E830_GLQF_FLAT_QTABLE_QINDEX_1_S 16
+#define E830_GLQF_FLAT_QTABLE_QINDEX_1_M MAKEMASK(0x7FF, 16)
+#define E830_PRTMAC_200G_CL01_PAUSE_QUANTA 0x001E3854 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_CL01_PAUSE_QUANTA_CL0_PAUSE_QUANTA_S 0
+#define E830_PRTMAC_200G_CL01_PAUSE_QUANTA_CL0_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_CL01_PAUSE_QUANTA_CL1_PAUSE_QUANTA_S 16
+#define E830_PRTMAC_200G_CL01_PAUSE_QUANTA_CL1_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_CL01_QUANTA_THRESH 0x001E3864 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_CL01_QUANTA_THRESH_CL0_QUANTA_THRESH_S 0
+#define E830_PRTMAC_200G_CL01_QUANTA_THRESH_CL0_QUANTA_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_CL01_QUANTA_THRESH_CL1_QUANTA_THRESH_S 16
+#define E830_PRTMAC_200G_CL01_QUANTA_THRESH_CL1_QUANTA_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_CL23_PAUSE_QUANTA 0x001E3858 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_CL23_PAUSE_QUANTA_CL2_PAUSE_QUANTA_S 0
+#define E830_PRTMAC_200G_CL23_PAUSE_QUANTA_CL2_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_CL23_PAUSE_QUANTA_CL3_PAUSE_QUANTA_S 16
+#define E830_PRTMAC_200G_CL23_PAUSE_QUANTA_CL3_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_CL23_QUANTA_THRESH 0x001E3868 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_CL23_QUANTA_THRESH_CL2_QUANTA_THRESH_S 0
+#define E830_PRTMAC_200G_CL23_QUANTA_THRESH_CL2_QUANTA_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_CL23_QUANTA_THRESH_CL3_QUANTA_THRESH_S 16
+#define E830_PRTMAC_200G_CL23_QUANTA_THRESH_CL3_QUANTA_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_CL45_PAUSE_QUANTA 0x001E385C /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_CL45_PAUSE_QUANTA_CL4_PAUSE_QUANTA_S 0
+#define E830_PRTMAC_200G_CL45_PAUSE_QUANTA_CL4_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_CL45_PAUSE_QUANTA_CL5_PAUSE_QUANTA_S 16
+#define E830_PRTMAC_200G_CL45_PAUSE_QUANTA_CL5_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_CL45_QUANTA_THRESH 0x001E386C /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_CL45_QUANTA_THRESH_CL4_QUANTA_THRESH_S 0
+#define E830_PRTMAC_200G_CL45_QUANTA_THRESH_CL4_QUANTA_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_CL45_QUANTA_THRESH_CL5_QUANTA_THRESH_S 16
+#define E830_PRTMAC_200G_CL45_QUANTA_THRESH_CL5_QUANTA_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_CL67_PAUSE_QUANTA 0x001E3860 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_CL67_PAUSE_QUANTA_CL6_PAUSE_QUANTA_S 0
+#define E830_PRTMAC_200G_CL67_PAUSE_QUANTA_CL6_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_CL67_PAUSE_QUANTA_CL7_PAUSE_QUANTA_S 16
+#define E830_PRTMAC_200G_CL67_PAUSE_QUANTA_CL7_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_CL67_QUANTA_THRESH 0x001E3870 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_CL67_QUANTA_THRESH_CL6_QUANTA_THRESH_S 0
+#define E830_PRTMAC_200G_CL67_QUANTA_THRESH_CL6_QUANTA_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_CL67_QUANTA_THRESH_CL7_QUANTA_THRESH_S 16
+#define E830_PRTMAC_200G_CL67_QUANTA_THRESH_CL7_QUANTA_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_COMMAND_CONFIG 0x001E3808 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_ENA_S 0
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_ENA_M BIT(0)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RX_ENA_S 1
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RX_ENA_M BIT(1)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RESERVED1_S 3
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RESERVED1_M BIT(3)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PROMIS_EN_S 4
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PROMIS_EN_M BIT(4)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RESERVED2_S 5
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RESERVED2_M BIT(5)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_CRC_FWD_S 6
+#define E830_PRTMAC_200G_COMMAND_CONFIG_CRC_FWD_M BIT(6)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PAUSE_FWD_S 7
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PAUSE_FWD_M BIT(7)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PAUSE_IGNORE_S 8
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PAUSE_IGNORE_M BIT(8)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_ADDR_INS_S 9
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_ADDR_INS_M BIT(9)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_LOOP_ENA_S 10
+#define E830_PRTMAC_200G_COMMAND_CONFIG_LOOP_ENA_M BIT(10)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_PAD_EN_S 11
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_PAD_EN_M BIT(11)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_SW_RESET_S 12
+#define E830_PRTMAC_200G_COMMAND_CONFIG_SW_RESET_M BIT(12)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_CNTL_FRM_ENA_S 13
+#define E830_PRTMAC_200G_COMMAND_CONFIG_CNTL_FRM_ENA_M BIT(13)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RESERVED3_S 14
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RESERVED3_M BIT(14)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PHY_TXENA_S 15
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PHY_TXENA_M BIT(15)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_FORCE_SEND__S 16
+#define E830_PRTMAC_200G_COMMAND_CONFIG_FORCE_SEND__M BIT(16)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_NO_LGTH_CHECK_S 17
+#define E830_PRTMAC_200G_COMMAND_CONFIG_NO_LGTH_CHECK_M BIT(17)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RESERVED5_S 18
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RESERVED5_M BIT(18)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PFC_MODE_S 19
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PFC_MODE_M BIT(19)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PAUSE_PFC_COMP_S 20
+#define E830_PRTMAC_200G_COMMAND_CONFIG_PAUSE_PFC_COMP_M BIT(20)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RX_SFD_ANY_S 21
+#define E830_PRTMAC_200G_COMMAND_CONFIG_RX_SFD_ANY_M BIT(21)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_FLUSH_S 22
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_FLUSH_M BIT(22)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_FLT_TX_STOP_S 25
+#define E830_PRTMAC_200G_COMMAND_CONFIG_FLT_TX_STOP_M BIT(25)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_FIFO_RESET_S 26
+#define E830_PRTMAC_200G_COMMAND_CONFIG_TX_FIFO_RESET_M BIT(26)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_FLT_HDL_DIS_S 27
+#define E830_PRTMAC_200G_COMMAND_CONFIG_FLT_HDL_DIS_M BIT(27)
+#define E830_PRTMAC_200G_COMMAND_CONFIG_INV_LOOP_S 31
+#define E830_PRTMAC_200G_COMMAND_CONFIG_INV_LOOP_M BIT(31)
+#define E830_PRTMAC_200G_CRC_INV_M 0x001E384C /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_CRC_INV_MASK_CRC_INV_MASK_S 0
+#define E830_PRTMAC_200G_CRC_INV_MASK_CRC_INV_MASK_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_PRTMAC_200G_FRM_LENGTH 0x001E3814 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_FRM_LENGTH_FRM_LENGTH_S 0
+#define E830_PRTMAC_200G_FRM_LENGTH_FRM_LENGTH_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_FRM_LENGTH_TX_MTU_S 16
+#define E830_PRTMAC_200G_FRM_LENGTH_TX_MTU_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_HASHTABLE_LOAD 0x001E382C /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_HASHTABLE_LOAD_HASH_TABLE_ADDR_S 0
+#define E830_PRTMAC_200G_HASHTABLE_LOAD_HASH_TABLE_ADDR_M MAKEMASK(0x3F, 0)
+#define E830_PRTMAC_200G_HASHTABLE_LOAD_RESERVED_2_S 6
+#define E830_PRTMAC_200G_HASHTABLE_LOAD_RESERVED_2_M MAKEMASK(0x3, 6)
+#define E830_PRTMAC_200G_HASHTABLE_LOAD_MCAST_EN_S 8
+#define E830_PRTMAC_200G_HASHTABLE_LOAD_MCAST_EN_M BIT(8)
+#define E830_PRTMAC_200G_HASHTABLE_LOAD_RESERVED1_S 9
+#define E830_PRTMAC_200G_HASHTABLE_LOAD_RESERVED1_M MAKEMASK(0x7FFFFF, 9)
+#define E830_PRTMAC_200G_MAC_ADDR_0 0x001E380C /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_MAC_ADDR_0_MAC_ADDR_0_S 0
+#define E830_PRTMAC_200G_MAC_ADDR_0_MAC_ADDR_0_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_PRTMAC_200G_MAC_ADDR_1 0x001E3810 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_MAC_ADDR_1_MAC_ADDR_1_S 0
+#define E830_PRTMAC_200G_MAC_ADDR_1_MAC_ADDR_1_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS 0x001E3830 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_BUSY_S 0
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_BUSY_M BIT(0)
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_RD_ERR_S 1
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_RD_ERR_M BIT(1)
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_HOLD_TIME_S 2
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_HOLD_TIME_M MAKEMASK(0x7, 2)
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_DIS_PREAMBLE_S 5
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_DIS_PREAMBLE_M BIT(5)
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_CLS_45_EN_S 6
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_CLS_45_EN_M BIT(6)
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_CLK_DIVISOR_S 7
+#define E830_PRTMAC_200G_MDIO_CFG_STATUS_MDIO_CLK_DIVISOR_M MAKEMASK(0x1FF, 7)
+#define E830_PRTMAC_200G_MDIO_COMMAND 0x001E3834 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_MDIO_COMMAND_MDIO_COMMAND_S 0
+#define E830_PRTMAC_200G_MDIO_COMMAND_MDIO_COMMAND_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_PRTMAC_200G_MDIO_DATA 0x001E3838 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_MDIO_DATA_MDIO_DATA_S 0
+#define E830_PRTMAC_200G_MDIO_DATA_MDIO_DATA_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_PRTMAC_200G_MDIO_REGADDR 0x001E383C /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_MDIO_REGADDR_MDIO_REGADDR_S 0
+#define E830_PRTMAC_200G_MDIO_REGADDR_MDIO_REGADDR_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_PRTMAC_200G_REVISION 0x001E3800 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_REVISION_CORE_REVISION_S 0
+#define E830_PRTMAC_200G_REVISION_CORE_REVISION_M MAKEMASK(0xFF, 0)
+#define E830_PRTMAC_200G_REVISION_CORE_VERSION_S 8
+#define E830_PRTMAC_200G_REVISION_CORE_VERSION_M MAKEMASK(0xFF, 8)
+#define E830_PRTMAC_200G_REVISION_CUSTOMER_VERSION_S 16
+#define E830_PRTMAC_200G_REVISION_CUSTOMER_VERSION_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_RX_PAUSE_STATUS 0x001E3874 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_RX_PAUSE_STATUS_RX_PAUSE_STATUS_S 0
+#define E830_PRTMAC_200G_RX_PAUSE_STATUS_RX_PAUSE_STATUS_M MAKEMASK(0xFF, 0)
+#define E830_PRTMAC_200G_SCRATCH 0x001E3804 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_SCRATCH_SCRATCH_S 0
+#define E830_PRTMAC_200G_SCRATCH_SCRATCH_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_PRTMAC_200G_STATUS 0x001E3840 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_STATUS_RX_LOC_FAULT_S 0
+#define E830_PRTMAC_200G_STATUS_RX_LOC_FAULT_M BIT(0)
+#define E830_PRTMAC_200G_STATUS_RX_REM_FAULT_S 1
+#define E830_PRTMAC_200G_STATUS_RX_REM_FAULT_M BIT(1)
+#define E830_PRTMAC_200G_STATUS_PHY_LOS_S 2
+#define E830_PRTMAC_200G_STATUS_PHY_LOS_M BIT(2)
+#define E830_PRTMAC_200G_STATUS_TS_AVAIL_S 3
+#define E830_PRTMAC_200G_STATUS_TS_AVAIL_M BIT(3)
+#define E830_PRTMAC_200G_STATUS_RESERVED_5_S 4
+#define E830_PRTMAC_200G_STATUS_RESERVED_5_M BIT(4)
+#define E830_PRTMAC_200G_STATUS_TX_EMPTY_S 5
+#define E830_PRTMAC_200G_STATUS_TX_EMPTY_M BIT(5)
+#define E830_PRTMAC_200G_STATUS_RX_EMPTY_S 6
+#define E830_PRTMAC_200G_STATUS_RX_EMPTY_M BIT(6)
+#define E830_PRTMAC_200G_STATUS_RESERVED1_S 7
+#define E830_PRTMAC_200G_STATUS_RESERVED1_M BIT(7)
+#define E830_PRTMAC_200G_STATUS_TX_ISIDLE_S 8
+#define E830_PRTMAC_200G_STATUS_TX_ISIDLE_M BIT(8)
+#define E830_PRTMAC_200G_STATUS_RESERVED2_S 9
+#define E830_PRTMAC_200G_STATUS_RESERVED2_M MAKEMASK(0x7FFFFF, 9)
+#define E830_PRTMAC_200G_TS_TIMESTAMP 0x001E387C /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_TS_TIMESTAMP_TS_TIMESTAMP_S 0
+#define E830_PRTMAC_200G_TS_TIMESTAMP_TS_TIMESTAMP_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_PRTMAC_200G_TX_FIFO_SECTIONS 0x001E3820 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_TX_FIFO_SECTIONS_TX_SECTION_AVAIL_THRESHOLD_S 0
+#define E830_PRTMAC_200G_TX_FIFO_SECTIONS_TX_SECTION_AVAIL_THRESHOLD_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_200G_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_THRESHOLD_S 16
+#define E830_PRTMAC_200G_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_THRESHOLD_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_200G_TX_IPG_LENGTH 0x001E3844 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_TX_IPG_LENGTH_AVG_IPG_LEN_S 0
+#define E830_PRTMAC_200G_TX_IPG_LENGTH_AVG_IPG_LEN_M MAKEMASK(0x7F, 0)
+#define E830_PRTMAC_200G_TX_IPG_LENGTH_IPG_COMP_12_0_S 19
+#define E830_PRTMAC_200G_TX_IPG_LENGTH_IPG_COMP_12_0_M MAKEMASK(0x1FFF, 19)
+#define E830_PRTMAC_200G_XIF_MODE 0x001E3880 /* Reset Source: GLOBR */
+#define E830_PRTMAC_200G_XIF_MODE_RESERVED_1_S 0
+#define E830_PRTMAC_200G_XIF_MODE_RESERVED_1_M MAKEMASK(0x1F, 0)
+#define E830_PRTMAC_200G_XIF_MODE_ONE_STEP_ENA_S 5
+#define E830_PRTMAC_200G_XIF_MODE_ONE_STEP_ENA_M BIT(5)
+#define E830_PRTMAC_200G_XIF_MODE_PFC_PULSE_MODE_S 17
+#define E830_PRTMAC_200G_XIF_MODE_PFC_PULSE_MODE_M BIT(17)
+#define E830_PRTMAC_200G_XIF_MODE_PFC_LP_MODE_S 18
+#define E830_PRTMAC_200G_XIF_MODE_PFC_LP_MODE_M BIT(18)
+#define E830_PRTMAC_200G_XIF_MODE_PFC_LP_16PRI_S 19
+#define E830_PRTMAC_200G_XIF_MODE_PFC_LP_16PRI_M BIT(19)
+#define E830_PRTMAC_CF_GEN_STATUS 0x001E33C0 /* Reset Source: GLOBR */
+#define E830_PRTMAC_CF_GEN_STATUS_CF_GEN_SENT_S 0
+#define E830_PRTMAC_CF_GEN_STATUS_CF_GEN_SENT_M BIT(0)
+#define E830_PRTMAC_CL01_PAUSE_QUANTA 0x001E32A0 /* Reset Source: GLOBR */
+#define E830_PRTMAC_CL01_PAUSE_QUANTA_CL0_PAUSE_QUANTA_S 0
+#define E830_PRTMAC_CL01_PAUSE_QUANTA_CL0_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_CL01_PAUSE_QUANTA_CL1_PAUSE_QUANTA_S 16
+#define E830_PRTMAC_CL01_PAUSE_QUANTA_CL1_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_CL01_QUANTA_THRESH 0x001E3320 /* Reset Source: GLOBR */
+#define E830_PRTMAC_CL01_QUANTA_THRESH_CL0_QUANTA_THRESH_S 0
+#define E830_PRTMAC_CL01_QUANTA_THRESH_CL0_QUANTA_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_CL01_QUANTA_THRESH_CL1_QUANTA_THRESH_S 16
+#define E830_PRTMAC_CL01_QUANTA_THRESH_CL1_QUANTA_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_CL23_PAUSE_QUANTA 0x001E32C0 /* Reset Source: GLOBR */
+#define E830_PRTMAC_CL23_PAUSE_QUANTA_CL2_PAUSE_QUANTA_S 0
+#define E830_PRTMAC_CL23_PAUSE_QUANTA_CL2_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_CL23_PAUSE_QUANTA_CL3_PAUSE_QUANTA_S 16
+#define E830_PRTMAC_CL23_PAUSE_QUANTA_CL3_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_CL23_QUANTA_THRESH 0x001E3340 /* Reset Source: GLOBR */
+#define E830_PRTMAC_CL23_QUANTA_THRESH_CL2_QUANTA_THRESH_S 0
+#define E830_PRTMAC_CL23_QUANTA_THRESH_CL2_QUANTA_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_CL23_QUANTA_THRESH_CL3_QUANTA_THRESH_S 16
+#define E830_PRTMAC_CL23_QUANTA_THRESH_CL3_QUANTA_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_CL45_PAUSE_QUANTA 0x001E32E0 /* Reset Source: GLOBR */
+#define E830_PRTMAC_CL45_PAUSE_QUANTA_CL4_PAUSE_QUANTA_S 0
+#define E830_PRTMAC_CL45_PAUSE_QUANTA_CL4_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_CL45_PAUSE_QUANTA_CL5_PAUSE_QUANTA_S 16
+#define E830_PRTMAC_CL45_PAUSE_QUANTA_CL5_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_CL45_QUANTA_THRESH 0x001E3360 /* Reset Source: GLOBR */
+#define E830_PRTMAC_CL45_QUANTA_THRESH_CL4_QUANTA_THRESH_S 0
+#define E830_PRTMAC_CL45_QUANTA_THRESH_CL4_QUANTA_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_CL45_QUANTA_THRESH_CL5_QUANTA_THRESH_S 16
+#define E830_PRTMAC_CL45_QUANTA_THRESH_CL5_QUANTA_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_CL67_PAUSE_QUANTA 0x001E3300 /* Reset Source: GLOBR */
+#define E830_PRTMAC_CL67_PAUSE_QUANTA_CL6_PAUSE_QUANTA_S 0
+#define E830_PRTMAC_CL67_PAUSE_QUANTA_CL6_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_CL67_PAUSE_QUANTA_CL7_PAUSE_QUANTA_S 16
+#define E830_PRTMAC_CL67_PAUSE_QUANTA_CL7_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_CL67_QUANTA_THRESH 0x001E3380 /* Reset Source: GLOBR */
+#define E830_PRTMAC_CL67_QUANTA_THRESH_CL6_QUANTA_THRESH_S 0
+#define E830_PRTMAC_CL67_QUANTA_THRESH_CL6_QUANTA_THRESH_M MAKEMASK(0xFFFF, 0)
+#define E830_PRTMAC_CL67_QUANTA_THRESH_CL7_QUANTA_THRESH_S 16
+#define E830_PRTMAC_CL67_QUANTA_THRESH_CL7_QUANTA_THRESH_M MAKEMASK(0xFFFF, 16)
+#define E830_PRTMAC_COMMAND_CONFIG 0x001E3040 /* Reset Source: GLOBR */
+#define E830_PRTMAC_COMMAND_CONFIG_TX_ENA_S 0
+#define E830_PRTMAC_COMMAND_CONFIG_TX_ENA_M BIT(0)
+#define E830_PRTMAC_COMMAND_CONFIG_RX_ENA_S 1
+#define E830_PRTMAC_COMMAND_CONFIG_RX_ENA_M BIT(1)
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED1_S 3
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED1_M BIT(3)
+#define E830_PRTMAC_COMMAND_CONFIG_PROMIS_EN_S 4
+#define E830_PRTMAC_COMMAND_CONFIG_PROMIS_EN_M BIT(4)
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED2_S 5
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED2_M BIT(5)
+#define E830_PRTMAC_COMMAND_CONFIG_CRC_FWD_S 6
+#define E830_PRTMAC_COMMAND_CONFIG_CRC_FWD_M BIT(6)
+#define E830_PRTMAC_COMMAND_CONFIG_PAUSE_FWD_S 7
+#define E830_PRTMAC_COMMAND_CONFIG_PAUSE_FWD_M BIT(7)
+#define E830_PRTMAC_COMMAND_CONFIG_PAUSE_IGNORE_S 8
+#define E830_PRTMAC_COMMAND_CONFIG_PAUSE_IGNORE_M BIT(8)
+#define E830_PRTMAC_COMMAND_CONFIG_TX_ADDR_INS_S 9
+#define E830_PRTMAC_COMMAND_CONFIG_TX_ADDR_INS_M BIT(9)
+#define E830_PRTMAC_COMMAND_CONFIG_LOOP_ENA_S 10
+#define E830_PRTMAC_COMMAND_CONFIG_LOOP_ENA_M BIT(10)
+#define E830_PRTMAC_COMMAND_CONFIG_TX_PAD_EN_S 11
+#define E830_PRTMAC_COMMAND_CONFIG_TX_PAD_EN_M BIT(11)
+#define E830_PRTMAC_COMMAND_CONFIG_SW_RESET_S 12
+#define E830_PRTMAC_COMMAND_CONFIG_SW_RESET_M BIT(12)
+#define E830_PRTMAC_COMMAND_CONFIG_CNTL_FRM_ENA_S 13
+#define E830_PRTMAC_COMMAND_CONFIG_CNTL_FRM_ENA_M BIT(13)
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED3_S 14
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED3_M BIT(14)
+#define E830_PRTMAC_COMMAND_CONFIG_PHY_TXENA_S 15
+#define E830_PRTMAC_COMMAND_CONFIG_PHY_TXENA_M BIT(15)
+#define E830_PRTMAC_COMMAND_CONFIG_FORCE_SEND__S 16
+#define E830_PRTMAC_COMMAND_CONFIG_FORCE_SEND__M BIT(16)
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED4_S 17
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED4_M BIT(17)
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED5_S 18
+#define E830_PRTMAC_COMMAND_CONFIG_RESERVED5_M BIT(18)
+#define E830_PRTMAC_COMMAND_CONFIG_PFC_MODE_S 19
+#define E830_PRTMAC_COMMAND_CONFIG_PFC_MODE_M BIT(19)
+#define E830_PRTMAC_COMMAND_CONFIG_PAUSE_PFC_COMP_S 20
+#define E830_PRTMAC_COMMAND_CONFIG_PAUSE_PFC_COMP_M BIT(20)
+#define E830_PRTMAC_COMMAND_CONFIG_RX_SFD_ANY_S 21
+#define E830_PRTMAC_COMMAND_CONFIG_RX_SFD_ANY_M BIT(21)
+#define E830_PRTMAC_COMMAND_CONFIG_TX_FLUSH_S 22
+#define E830_PRTMAC_COMMAND_CONFIG_TX_FLUSH_M BIT(22)
+#define E830_PRTMAC_COMMAND_CONFIG_TX_LOWP_ENA_S 23
+#define
E830_PRTMAC_COMMAND_CONFIG_TX_LOWP_ENA_M BIT(23) +#define E830_PRTMAC_COMMAND_CONFIG_REG_LOWP_RXEMPTY_S 24 +#define E830_PRTMAC_COMMAND_CONFIG_REG_LOWP_RXEMPTY_M BIT(24) +#define E830_PRTMAC_COMMAND_CONFIG_FLT_TX_STOP_S 25 +#define E830_PRTMAC_COMMAND_CONFIG_FLT_TX_STOP_M BIT(25) +#define E830_PRTMAC_COMMAND_CONFIG_TX_FIFO_RESET_S 26 +#define E830_PRTMAC_COMMAND_CONFIG_TX_FIFO_RESET_M BIT(26) +#define E830_PRTMAC_COMMAND_CONFIG_FLT_HDL_DIS_S 27 +#define E830_PRTMAC_COMMAND_CONFIG_FLT_HDL_DIS_M BIT(27) +#define E830_PRTMAC_COMMAND_CONFIG_TX_PAUSE_DIS_S 28 +#define E830_PRTMAC_COMMAND_CONFIG_TX_PAUSE_DIS_M BIT(28) +#define E830_PRTMAC_COMMAND_CONFIG_RX_PAUSE_DIS_S 29 +#define E830_PRTMAC_COMMAND_CONFIG_RX_PAUSE_DIS_M BIT(29) +#define E830_PRTMAC_COMMAND_CONFIG_SHORT_PREAM_S 30 +#define E830_PRTMAC_COMMAND_CONFIG_SHORT_PREAM_M BIT(30) +#define E830_PRTMAC_COMMAND_CONFIG_NO_PREAM_S 31 +#define E830_PRTMAC_COMMAND_CONFIG_NO_PREAM_M BIT(31) +#define E830_PRTMAC_CRC_INV_M 0x001E3260 /* Reset Source: GLOBR */ +#define E830_PRTMAC_CRC_INV_MASK_CRC_INV_MASK_S 0 +#define E830_PRTMAC_CRC_INV_MASK_CRC_INV_MASK_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_PRTMAC_CRC_MODE 0x001E3240 /* Reset Source: GLOBR */ +#define E830_PRTMAC_CRC_MODE_RESERVED_1_S 0 +#define E830_PRTMAC_CRC_MODE_RESERVED_1_M MAKEMASK(0xFFFF, 0) +#define E830_PRTMAC_CRC_MODE_DISABLE_RX_CRC_CHECKING_S 16 +#define E830_PRTMAC_CRC_MODE_DISABLE_RX_CRC_CHECKING_M BIT(16) +#define E830_PRTMAC_CRC_MODE_RESERVED1_S 17 +#define E830_PRTMAC_CRC_MODE_RESERVED1_M BIT(17) +#define E830_PRTMAC_CRC_MODE_ONE_BYTE_CRC_S 18 +#define E830_PRTMAC_CRC_MODE_ONE_BYTE_CRC_M BIT(18) +#define E830_PRTMAC_CRC_MODE_TWO_BYTES_CRC_S 19 +#define E830_PRTMAC_CRC_MODE_TWO_BYTES_CRC_M BIT(19) +#define E830_PRTMAC_CRC_MODE_ZERO_BYTE_CRC_S 20 +#define E830_PRTMAC_CRC_MODE_ZERO_BYTE_CRC_M BIT(20) +#define E830_PRTMAC_CRC_MODE_RESERVED2_S 21 +#define E830_PRTMAC_CRC_MODE_RESERVED2_M MAKEMASK(0x7FF, 21) +#define E830_PRTMAC_CTL_RX_PAUSE_ENABLE 0x001E2180 
/* Reset Source: GLOBR */ +#define E830_PRTMAC_CTL_RX_PAUSE_ENABLE_RX_PAUSE_ENABLE_S 0 +#define E830_PRTMAC_CTL_RX_PAUSE_ENABLE_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0) +#define E830_PRTMAC_CTL_TX_PAUSE_ENABLE 0x001E21A0 /* Reset Source: GLOBR */ +#define E830_PRTMAC_CTL_TX_PAUSE_ENABLE_TX_PAUSE_ENABLE_S 0 +#define E830_PRTMAC_CTL_TX_PAUSE_ENABLE_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0) +#define E830_PRTMAC_FRM_LENGTH 0x001E30A0 /* Reset Source: GLOBR */ +#define E830_PRTMAC_FRM_LENGTH_FRM_LENGTH_S 0 +#define E830_PRTMAC_FRM_LENGTH_FRM_LENGTH_M MAKEMASK(0xFFFF, 0) +#define E830_PRTMAC_FRM_LENGTH_TX_MTU_S 16 +#define E830_PRTMAC_FRM_LENGTH_TX_MTU_M MAKEMASK(0xFFFF, 16) +#define E830_PRTMAC_MAC_ADDR_0 0x001E3060 /* Reset Source: GLOBR */ +#define E830_PRTMAC_MAC_ADDR_0_MAC_ADDR_0_S 0 +#define E830_PRTMAC_MAC_ADDR_0_MAC_ADDR_0_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_PRTMAC_MAC_ADDR_1 0x001E3080 /* Reset Source: GLOBR */ +#define E830_PRTMAC_MAC_ADDR_1_MAC_ADDR_1_S 0 +#define E830_PRTMAC_MAC_ADDR_1_MAC_ADDR_1_M MAKEMASK(0xFFFF, 0) +#define E830_PRTMAC_MDIO_CFG_STATUS 0x001E3180 /* Reset Source: GLOBR */ +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_BUSY_S 0 +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_BUSY_M BIT(0) +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_RD_ERR_S 1 +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_RD_ERR_M BIT(1) +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_HOLD_TIME_S 2 +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_HOLD_TIME_M MAKEMASK(0x7, 2) +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_DIS_PREAMBLE_S 5 +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_DIS_PREAMBLE_M BIT(5) +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_CLS_45_EN_S 6 +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_CLS_45_EN_M BIT(6) +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_CLK_DIVISOR_S 7 +#define E830_PRTMAC_MDIO_CFG_STATUS_MDIO_CLK_DIVISOR_M MAKEMASK(0x1FF, 7) +#define E830_PRTMAC_MDIO_COMMAND 0x001E31A0 /* Reset Source: GLOBR */ +#define E830_PRTMAC_MDIO_COMMAND_MDIO_COMMAND_S 0 +#define E830_PRTMAC_MDIO_COMMAND_MDIO_COMMAND_M 
MAKEMASK(0xFFFFFFFF, 0) +#define E830_PRTMAC_MDIO_DATA 0x001E31C0 /* Reset Source: GLOBR */ +#define E830_PRTMAC_MDIO_DATA_MDIO_DATA_S 0 +#define E830_PRTMAC_MDIO_DATA_MDIO_DATA_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_PRTMAC_MDIO_REGADDR 0x001E31E0 /* Reset Source: GLOBR */ +#define E830_PRTMAC_MDIO_REGADDR_MDIO_REGADDR_S 0 +#define E830_PRTMAC_MDIO_REGADDR_MDIO_REGADDR_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_PRTMAC_REVISION 0x001E3000 /* Reset Source: GLOBR */ +#define E830_PRTMAC_REVISION_CORE_REVISION_S 0 +#define E830_PRTMAC_REVISION_CORE_REVISION_M MAKEMASK(0xFF, 0) +#define E830_PRTMAC_REVISION_CORE_VERSION_S 8 +#define E830_PRTMAC_REVISION_CORE_VERSION_M MAKEMASK(0xFF, 8) +#define E830_PRTMAC_REVISION_CUSTOMER_VERSION_S 16 +#define E830_PRTMAC_REVISION_CUSTOMER_VERSION_M MAKEMASK(0xFFFF, 16) +#define E830_PRTMAC_RX_PAUSE_STATUS 0x001E33A0 /* Reset Source: GLOBR */ +#define E830_PRTMAC_RX_PAUSE_STATUS_RX_PAUSE_STATUS_S 0 +#define E830_PRTMAC_RX_PAUSE_STATUS_RX_PAUSE_STATUS_M MAKEMASK(0xFF, 0) +#define E830_PRTMAC_RX_PKT_DRP_CNT_RX_OFLOW_PKT_DRP_CNT_S 12 +#define E830_PRTMAC_RX_PKT_DRP_CNT_RX_OFLOW_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 12) +#define E830_PRTMAC_SCRATCH 0x001E3020 /* Reset Source: GLOBR */ +#define E830_PRTMAC_SCRATCH_SCRATCH_S 0 +#define E830_PRTMAC_SCRATCH_SCRATCH_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_PRTMAC_STATUS 0x001E3200 /* Reset Source: GLOBR */ +#define E830_PRTMAC_STATUS_RX_LOC_FAULT_S 0 +#define E830_PRTMAC_STATUS_RX_LOC_FAULT_M BIT(0) +#define E830_PRTMAC_STATUS_RX_REM_FAULT_S 1 +#define E830_PRTMAC_STATUS_RX_REM_FAULT_M BIT(1) +#define E830_PRTMAC_STATUS_PHY_LOS_S 2 +#define E830_PRTMAC_STATUS_PHY_LOS_M BIT(2) +#define E830_PRTMAC_STATUS_TS_AVAIL_S 3 +#define E830_PRTMAC_STATUS_TS_AVAIL_M BIT(3) +#define E830_PRTMAC_STATUS_RX_LOWP_S 4 +#define E830_PRTMAC_STATUS_RX_LOWP_M BIT(4) +#define E830_PRTMAC_STATUS_TX_EMPTY_S 5 +#define E830_PRTMAC_STATUS_TX_EMPTY_M BIT(5) +#define E830_PRTMAC_STATUS_RX_EMPTY_S 6 +#define 
E830_PRTMAC_STATUS_RX_EMPTY_M BIT(6) +#define E830_PRTMAC_STATUS_RX_LINT_FAULT_S 7 +#define E830_PRTMAC_STATUS_RX_LINT_FAULT_M BIT(7) +#define E830_PRTMAC_STATUS_TX_ISIDLE_S 8 +#define E830_PRTMAC_STATUS_TX_ISIDLE_M BIT(8) +#define E830_PRTMAC_STATUS_RESERVED_10_S 9 +#define E830_PRTMAC_STATUS_RESERVED_10_M MAKEMASK(0x7FFFFF, 9) +#define E830_PRTMAC_TS_RX_PCS_LATENCY 0x001E2220 /* Reset Source: GLOBR */ +#define E830_PRTMAC_TS_RX_PCS_LATENCY_TS_RX_PCS_LATENCY_S 0 +#define E830_PRTMAC_TS_RX_PCS_LATENCY_TS_RX_PCS_LATENCY_M MAKEMASK(0xFFFF, 0) +#define E830_PRTMAC_TS_TIMESTAMP 0x001E33E0 /* Reset Source: GLOBR */ +#define E830_PRTMAC_TS_TIMESTAMP_TS_TIMESTAMP_S 0 +#define E830_PRTMAC_TS_TIMESTAMP_TS_TIMESTAMP_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_PRTMAC_TS_TX_MEM_VALID_H 0x001E2020 /* Reset Source: GLOBR */ +#define E830_PRTMAC_TS_TX_MEM_VALID_H_TIMESTAMP_TX_VALID_ARR_H_S 0 +#define E830_PRTMAC_TS_TX_MEM_VALID_H_TIMESTAMP_TX_VALID_ARR_H_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_PRTMAC_TS_TX_MEM_VALID_L 0x001E2000 /* Reset Source: GLOBR */ +#define E830_PRTMAC_TS_TX_MEM_VALID_L_TIMESTAMP_TX_VALID_ARR_L_S 0 +#define E830_PRTMAC_TS_TX_MEM_VALID_L_TIMESTAMP_TX_VALID_ARR_L_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_PRTMAC_TS_TX_PCS_LATENCY 0x001E2200 /* Reset Source: GLOBR */ +#define E830_PRTMAC_TS_TX_PCS_LATENCY_TS_TX_PCS_LATENCY_S 0 +#define E830_PRTMAC_TS_TX_PCS_LATENCY_TS_TX_PCS_LATENCY_M MAKEMASK(0xFFFF, 0) +#define E830_PRTMAC_TX_FIFO_SECTIONS 0x001E3100 /* Reset Source: GLOBR */ +#define E830_PRTMAC_TX_FIFO_SECTIONS_TX_SECTION_AVAIL_THRESHOLD_S 0 +#define E830_PRTMAC_TX_FIFO_SECTIONS_TX_SECTION_AVAIL_THRESHOLD_M MAKEMASK(0xFFFF, 0) +#define E830_PRTMAC_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_THRESHOLD_S 16 +#define E830_PRTMAC_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_THRESHOLD_M MAKEMASK(0xFFFF, 16) +#define E830_PRTMAC_TX_IPG_LENGTH 0x001E3220 /* Reset Source: GLOBR */ +#define E830_PRTMAC_TX_IPG_LENGTH_AVG_IPG_LEN_S 0 +#define E830_PRTMAC_TX_IPG_LENGTH_AVG_IPG_LEN_M 
MAKEMASK(0x3F, 0) +#define E830_PRTMAC_TX_IPG_LENGTH_RESERVED1_S 6 +#define E830_PRTMAC_TX_IPG_LENGTH_RESERVED1_M MAKEMASK(0x3, 6) +#define E830_PRTMAC_TX_IPG_LENGTH_IPG_COMP_23_16_S 8 +#define E830_PRTMAC_TX_IPG_LENGTH_IPG_COMP_23_16_M MAKEMASK(0xFF, 8) +#define E830_PRTMAC_TX_IPG_LENGTH_IPG_COMP_15_0_S 16 +#define E830_PRTMAC_TX_IPG_LENGTH_IPG_COMP_15_0_M MAKEMASK(0xFFFF, 16) +#define E830_PRTMAC_XIF_MODE 0x001E3400 /* Reset Source: GLOBR */ +#define E830_PRTMAC_XIF_MODE_XGMII_ENA_S 0 +#define E830_PRTMAC_XIF_MODE_XGMII_ENA_M BIT(0) +#define E830_PRTMAC_XIF_MODE_RESERVED_2_S 1 +#define E830_PRTMAC_XIF_MODE_RESERVED_2_M MAKEMASK(0x7, 1) +#define E830_PRTMAC_XIF_MODE_PAUSETIMERX8_S 4 +#define E830_PRTMAC_XIF_MODE_PAUSETIMERX8_M BIT(4) +#define E830_PRTMAC_XIF_MODE_ONE_STEP_ENA_S 5 +#define E830_PRTMAC_XIF_MODE_ONE_STEP_ENA_M BIT(5) +#define E830_PRTMAC_XIF_MODE_RX_PAUSE_BYPASS_S 6 +#define E830_PRTMAC_XIF_MODE_RX_PAUSE_BYPASS_M BIT(6) +#define E830_PRTMAC_XIF_MODE_RESERVED1_S 7 +#define E830_PRTMAC_XIF_MODE_RESERVED1_M BIT(7) +#define E830_PRTMAC_XIF_MODE_TX_MAC_RS_ERR_S 8 +#define E830_PRTMAC_XIF_MODE_TX_MAC_RS_ERR_M BIT(8) +#define E830_PRTMAC_XIF_MODE_TS_DELTA_MODE_S 9 +#define E830_PRTMAC_XIF_MODE_TS_DELTA_MODE_M BIT(9) +#define E830_PRTMAC_XIF_MODE_TS_DELAY_MODE_S 10 +#define E830_PRTMAC_XIF_MODE_TS_DELAY_MODE_M BIT(10) +#define E830_PRTMAC_XIF_MODE_TS_BINARY_MODE_S 11 +#define E830_PRTMAC_XIF_MODE_TS_BINARY_MODE_M BIT(11) +#define E830_PRTMAC_XIF_MODE_TS_UPD64_MODE_S 12 +#define E830_PRTMAC_XIF_MODE_TS_UPD64_MODE_M BIT(12) +#define E830_PRTMAC_XIF_MODE_RESERVED2_S 13 +#define E830_PRTMAC_XIF_MODE_RESERVED2_M MAKEMASK(0x7, 13) +#define E830_PRTMAC_XIF_MODE_RX_CNT_MODE_S 16 +#define E830_PRTMAC_XIF_MODE_RX_CNT_MODE_M BIT(16) +#define E830_PRTMAC_XIF_MODE_PFC_PULSE_MODE_S 17 +#define E830_PRTMAC_XIF_MODE_PFC_PULSE_MODE_M BIT(17) +#define E830_PRTMAC_XIF_MODE_PFC_LP_MODE_S 18 +#define E830_PRTMAC_XIF_MODE_PFC_LP_MODE_M BIT(18) +#define 
E830_PRTMAC_XIF_MODE_PFC_LP_16PRI_S 19 +#define E830_PRTMAC_XIF_MODE_PFC_LP_16PRI_M BIT(19) +#define E830_PRTMAC_XIF_MODE_TS_SFD_ENA_S 20 +#define E830_PRTMAC_XIF_MODE_TS_SFD_ENA_M BIT(20) +#define E830_PRTMAC_XIF_MODE_RESERVED3_S 21 +#define E830_PRTMAC_XIF_MODE_RESERVED3_M MAKEMASK(0x7FF, 21) +#define E830_PRTTSYN_TXTIME_H(_i) (0x001E5004 + ((_i) * 64)) /* _i=0...63 */ /* Reset Source: GLOBR */ +#define E830_PRTTSYN_TXTIME_H_MAX_INDEX 63 +#define E830_PRTTSYN_TXTIME_H_TX_TIMESTAMP_HIGH_S 0 +#define E830_PRTTSYN_TXTIME_H_TX_TIMESTAMP_HIGH_M MAKEMASK(0xFF, 0) +#define E830_PRTTSYN_TXTIME_L(_i) (0x001E5000 + ((_i) * 64)) /* _i=0...63 */ /* Reset Source: GLOBR */ +#define E830_PRTTSYN_TXTIME_L_MAX_INDEX 63 +#define E830_PRTTSYN_TXTIME_L_TX_VALID_S 0 +#define E830_PRTTSYN_TXTIME_L_TX_VALID_M BIT(0) +#define E830_PRTTSYN_TXTIME_L_TX_TIMESTAMP_LOW_S 1 +#define E830_PRTTSYN_TXTIME_L_TX_TIMESTAMP_LOW_M MAKEMASK(0x7FFFFFFF, 1) +#define E830_GL_MDCK_EN_TX_PQM_TXT_MAL_SW_ABOVE_HW_TAIL_S 28 +#define E830_GL_MDCK_EN_TX_PQM_TXT_MAL_SW_ABOVE_HW_TAIL_M BIT(28) +#define E830_GL_MDCK_EN_TX_PQM_TXT_MAL_SAME_TAIL_S 29 +#define E830_GL_MDCK_EN_TX_PQM_TXT_MAL_SAME_TAIL_M BIT(29) +#define E830_GL_MDCK_EN_TX_PQM_TXT_MAL_TAIL_GE_QLEN_S 30 +#define E830_GL_MDCK_EN_TX_PQM_TXT_MAL_TAIL_GE_QLEN_M BIT(30) +#define E830_GL_MDCK_EN_TX_PQM_TXT_MAL_UR_S 31 +#define E830_GL_MDCK_EN_TX_PQM_TXT_MAL_UR_M BIT(31) +#define E830_GL_MDET_HIF_ERR_FIFO 0x00096844 /* Reset Source: CORER */ +#define E830_GL_MDET_HIF_ERR_FIFO_FUNC_NUM_S 0 +#define E830_GL_MDET_HIF_ERR_FIFO_FUNC_NUM_M MAKEMASK(0x3FF, 0) +#define E830_GL_MDET_HIF_ERR_FIFO_PF_NUM_S 10 +#define E830_GL_MDET_HIF_ERR_FIFO_PF_NUM_M MAKEMASK(0x7, 10) +#define E830_GL_MDET_HIF_ERR_FIFO_FUNC_TYPE_S 13 +#define E830_GL_MDET_HIF_ERR_FIFO_FUNC_TYPE_M MAKEMASK(0x3, 13) +#define E830_GL_MDET_HIF_ERR_FIFO_MAL_TYPE_S 15 +#define E830_GL_MDET_HIF_ERR_FIFO_MAL_TYPE_M MAKEMASK(0x1F, 15) +#define E830_GL_MDET_HIF_ERR_FIFO_FIFO_FULL_S 20 +#define 
E830_GL_MDET_HIF_ERR_FIFO_FIFO_FULL_M BIT(20) +#define E830_GL_MDET_HIF_ERR_FIFO_VALID_S 21 +#define E830_GL_MDET_HIF_ERR_FIFO_VALID_M BIT(21) +#define E830_GL_MDET_HIF_ERR_PF_CNT(_i) (0x00096804 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */ +#define E830_GL_MDET_HIF_ERR_PF_CNT_MAX_INDEX 7 +#define E830_GL_MDET_HIF_ERR_PF_CNT_CNT_S 0 +#define E830_GL_MDET_HIF_ERR_PF_CNT_CNT_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_GL_MDET_HIF_ERR_VF(_i) (0x00096824 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */ +#define E830_GL_MDET_HIF_ERR_VF_MAX_INDEX 7 +#define E830_GL_MDET_HIF_ERR_VF_VF_MAL_EVENT_S 0 +#define E830_GL_MDET_HIF_ERR_VF_VF_MAL_EVENT_M MAKEMASK(0xFFFFFFFF, 0) +#define E830_PF_MDET_HIF_ERR 0x00096880 /* Reset Source: CORER */ +#define E830_PF_MDET_HIF_ERR_VALID_S 0 +#define E830_PF_MDET_HIF_ERR_VALID_M BIT(0) +#define E830_VM_MDET_TX_TCLAN(_i) (0x000FC000 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */ +#define E830_VM_MDET_TX_TCLAN_MAX_INDEX 767 +#define E830_VM_MDET_TX_TCLAN_VALID_S 0 +#define E830_VM_MDET_TX_TCLAN_VALID_M BIT(0) +#define E830_VP_MDET_HIF_ERR(_VF) (0x00096C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */ +#define E830_VP_MDET_HIF_ERR_MAX_INDEX 255 +#define E830_VP_MDET_HIF_ERR_VALID_S 0 +#define E830_VP_MDET_HIF_ERR_VALID_M BIT(0) +#define E830_GLNVM_FLA_GLOBAL_LOCKED_S 7 +#define E830_GLNVM_FLA_GLOBAL_LOCKED_M BIT(7) +#define E830_DMA_AGENT_AT0 0x000BE268 /* Reset Source: PCIR */ +#define E830_DMA_AGENT_AT0_RLAN_PASID_SELECTED_S 0 +#define E830_DMA_AGENT_AT0_RLAN_PASID_SELECTED_M MAKEMASK(0x3, 0) +#define E830_DMA_AGENT_AT0_TCLAN_PASID_SELECTED_S 2 +#define E830_DMA_AGENT_AT0_TCLAN_PASID_SELECTED_M MAKEMASK(0x3, 2) +#define E830_DMA_AGENT_AT0_PQM_DBL_PASID_SELECTED_S 4 +#define E830_DMA_AGENT_AT0_PQM_DBL_PASID_SELECTED_M MAKEMASK(0x3, 4) +#define E830_DMA_AGENT_AT0_PQM_DESC_PASID_SELECTED_S 6 +#define E830_DMA_AGENT_AT0_PQM_DESC_PASID_SELECTED_M MAKEMASK(0x3, 6) +#define 
E830_DMA_AGENT_AT0_PQM_TS_DESC_PASID_SELECTED_S 8 +#define E830_DMA_AGENT_AT0_PQM_TS_DESC_PASID_SELECTED_M MAKEMASK(0x3, 8) +#define E830_DMA_AGENT_AT0_RDPU_PASID_SELECTED_S 10 +#define E830_DMA_AGENT_AT0_RDPU_PASID_SELECTED_M MAKEMASK(0x3, 10) +#define E830_DMA_AGENT_AT0_TDPU_PASID_SELECTED_S 12 +#define E830_DMA_AGENT_AT0_TDPU_PASID_SELECTED_M MAKEMASK(0x3, 12) +#define E830_DMA_AGENT_AT0_MBX_PASID_SELECTED_S 14 +#define E830_DMA_AGENT_AT0_MBX_PASID_SELECTED_M MAKEMASK(0x3, 14) +#define E830_DMA_AGENT_AT0_MNG_PASID_SELECTED_S 16 +#define E830_DMA_AGENT_AT0_MNG_PASID_SELECTED_M MAKEMASK(0x3, 16) +#define E830_DMA_AGENT_AT0_TEP_PMAT_PASID_SELECTED_S 18 +#define E830_DMA_AGENT_AT0_TEP_PMAT_PASID_SELECTED_M MAKEMASK(0x3, 18) +#define E830_DMA_AGENT_AT0_RX_PE_PASID_SELECTED_S 20 +#define E830_DMA_AGENT_AT0_RX_PE_PASID_SELECTED_M MAKEMASK(0x3, 20) +#define E830_DMA_AGENT_AT0_TX_PE_PASID_SELECTED_S 22 +#define E830_DMA_AGENT_AT0_TX_PE_PASID_SELECTED_M MAKEMASK(0x3, 22) +#define E830_DMA_AGENT_AT0_PEPMAT_PASID_SELECTED_S 24 +#define E830_DMA_AGENT_AT0_PEPMAT_PASID_SELECTED_M MAKEMASK(0x3, 24) +#define E830_DMA_AGENT_AT0_FPMAT_PASID_SELECTED_S 26 +#define E830_DMA_AGENT_AT0_FPMAT_PASID_SELECTED_M MAKEMASK(0x3, 26) +#define E830_DMA_AGENT_AT1 0x000BE26C /* Reset Source: PCIR */ +#define E830_DMA_AGENT_AT1_RLAN_PASID_SELECTED_S 0 +#define E830_DMA_AGENT_AT1_RLAN_PASID_SELECTED_M MAKEMASK(0x3, 0) +#define E830_DMA_AGENT_AT1_TCLAN_PASID_SELECTED_S 2 +#define E830_DMA_AGENT_AT1_TCLAN_PASID_SELECTED_M MAKEMASK(0x3, 2) +#define E830_DMA_AGENT_AT1_PQM_DBL_PASID_SELECTED_S 4 +#define E830_DMA_AGENT_AT1_PQM_DBL_PASID_SELECTED_M MAKEMASK(0x3, 4) +#define E830_DMA_AGENT_AT1_PQM_DESC_PASID_SELECTED_S 6 +#define E830_DMA_AGENT_AT1_PQM_DESC_PASID_SELECTED_M MAKEMASK(0x3, 6) +#define E830_DMA_AGENT_AT1_PQM_TS_DESC_PASID_SELECTED_S 8 +#define E830_DMA_AGENT_AT1_PQM_TS_DESC_PASID_SELECTED_M MAKEMASK(0x3, 8) +#define E830_DMA_AGENT_AT1_RDPU_PASID_SELECTED_S 10 +#define 
E830_DMA_AGENT_AT1_RDPU_PASID_SELECTED_M MAKEMASK(0x3, 10) +#define E830_DMA_AGENT_AT1_TDPU_PASID_SELECTED_S 12 +#define E830_DMA_AGENT_AT1_TDPU_PASID_SELECTED_M MAKEMASK(0x3, 12) +#define E830_DMA_AGENT_AT1_MBX_PASID_SELECTED_S 14 +#define E830_DMA_AGENT_AT1_MBX_PASID_SELECTED_M MAKEMASK(0x3, 14) +#define E830_DMA_AGENT_AT1_MNG_PASID_SELECTED_S 16 +#define E830_DMA_AGENT_AT1_MNG_PASID_SELECTED_M MAKEMASK(0x3, 16) +#define E830_DMA_AGENT_AT1_TEP_PMAT_PASID_SELECTED_S 18 +#define E830_DMA_AGENT_AT1_TEP_PMAT_PASID_SELECTED_M MAKEMASK(0x3, 18) +#define E830_DMA_AGENT_AT1_RX_PE_PASID_SELECTED_S 20 +#define E830_DMA_AGENT_AT1_RX_PE_PASID_SELECTED_M MAKEMASK(0x3, 20) +#define E830_DMA_AGENT_AT1_TX_PE_PASID_SELECTED_S 22 +#define E830_DMA_AGENT_AT1_TX_PE_PASID_SELECTED_M MAKEMASK(0x3, 22) +#define E830_DMA_AGENT_AT1_PEPMAT_PASID_SELECTED_S 24 +#define E830_DMA_AGENT_AT1_PEPMAT_PASID_SELECTED_M MAKEMASK(0x3, 24) +#define E830_DMA_AGENT_AT1_FPMAT_PASID_SELECTED_S 26 +#define E830_DMA_AGENT_AT1_FPMAT_PASID_SELECTED_M MAKEMASK(0x3, 26) +#define E830_GLPCI_CAPSUP_DOE_EN_S 1 +#define E830_GLPCI_CAPSUP_DOE_EN_M BIT(1) +#define E830_GLPCI_CAPSUP_GEN5_EXT_EN_S 12 +#define E830_GLPCI_CAPSUP_GEN5_EXT_EN_M BIT(12) +#define E830_GLPCI_CAPSUP_PTM_EN_S 13 +#define E830_GLPCI_CAPSUP_PTM_EN_M BIT(13) +#define E830_GLPCI_CAPSUP_SNPS_RAS_EN_S 14 +#define E830_GLPCI_CAPSUP_SNPS_RAS_EN_M BIT(14) +#define E830_GLPCI_CAPSUP_SIOV_EN_S 15 +#define E830_GLPCI_CAPSUP_SIOV_EN_M BIT(15) +#define E830_GLPCI_DOE_BUSY_STATUS 0x0009DF70 /* Reset Source: PCIR */ +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_REQ_S 0 +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_REQ_M BIT(0) +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_EMPR_S 1 +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_EMPR_M BIT(1) +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_PCIER_S 2 +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_PCIER_M BIT(2) +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_FLR_S 3 +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_FLR_M BIT(3) +#define 
E830_GLPCI_DOE_BUSY_STATUS_BUSY_CFG_ABORT_S 4 +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_CFG_ABORT_M BIT(4) +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_FW_S 5 +#define E830_GLPCI_DOE_BUSY_STATUS_BUSY_FW_M BIT(5) +#define E830_GLPCI_DOE_CFG 0x0009DF54 /* Reset Source: PCIR */ +#define E830_GLPCI_DOE_CFG_ENABLE_S 0 +#define E830_GLPCI_DOE_CFG_ENABLE_M BIT(0) +#define E830_GLPCI_DOE_CFG_ITR_SUPPORT_S 1 +#define E830_GLPCI_DOE_CFG_ITR_SUPPORT_M BIT(1) +#define E830_GLPCI_DOE_CFG_POISON_CFGWR_PIOSF_EP_BIT_S 2 +#define E830_GLPCI_DOE_CFG_POISON_CFGWR_PIOSF_EP_BIT_M BIT(2) +#define E830_GLPCI_DOE_CFG_POISON_CFGWR_SBIOSF_AER_MSG_S 3 +#define E830_GLPCI_DOE_CFG_POISON_CFGWR_SBIOSF_AER_MSG_M BIT(3) +#define E830_GLPCI_DOE_CFG_MSIX_VECTOR_S 8 +#define E830_GLPCI_DOE_CFG_MSIX_VECTOR_M MAKEMASK(0x7FF, 8) +#define E830_GLPCI_DOE_CTRL 0x0009DF60 /* Reset Source: PCIR */ +#define E830_GLPCI_DOE_CTRL_BUSY_FW_SET_S 0 +#define E830_GLPCI_DOE_CTRL_BUSY_FW_SET_M BIT(0) +#define E830_GLPCI_DOE_CTRL_DOE_CFG_ERR_SET_S 1 +#define E830_GLPCI_DOE_CTRL_DOE_CFG_ERR_SET_M BIT(1) +#define E830_GLPCI_DOE_DBG 0x0009DF6C /* Reset Source: PCIR */ +#define E830_GLPCI_DOE_DBG_CFG_BUSY_S 0 +#define E830_GLPCI_DOE_DBG_CFG_BUSY_M BIT(0) +#define E830_GLPCI_DOE_DBG_CFG_DATA_OBJECT_READY_S 1 +#define E830_GLPCI_DOE_DBG_CFG_DATA_OBJECT_READY_M BIT(1) +#define E830_GLPCI_DOE_DBG_CFG_ERROR_S 2 +#define E830_GLPCI_DOE_DBG_CFG_ERROR_M BIT(2) +#define E830_GLPCI_DOE_DBG_CFG_INTERRUPT_ENABLE_S 3 +#define E830_GLPCI_DOE_DBG_CFG_INTERRUPT_ENABLE_M BIT(3) +#define E830_GLPCI_DOE_DBG_CFG_INTERRUPT_STATUS_S 4 +#define E830_GLPCI_DOE_DBG_CFG_INTERRUPT_STATUS_M BIT(4) +#define E830_GLPCI_DOE_DBG_REQ_BUF_SW_WR_PTR_S 8 +#define E830_GLPCI_DOE_DBG_REQ_BUF_SW_WR_PTR_M MAKEMASK(0x1FF, 8) +#define E830_GLPCI_DOE_DBG_RESP_BUF_SW_RD_PTR_S 20 +#define E830_GLPCI_DOE_DBG_RESP_BUF_SW_RD_PTR_M MAKEMASK(0x1FF, 20) +#define E830_GLPCI_DOE_ERR_EN 0x0009DF64 /* Reset Source: PCIR */ +#define E830_GLPCI_DOE_ERR_EN_RD_REQ_BUF_ECC_ERR_EN_S 0 
+#define E830_GLPCI_DOE_ERR_EN_RD_REQ_BUF_ECC_ERR_EN_M BIT(0) +#define E830_GLPCI_DOE_ERR_EN_RD_RESP_BUF_ECC_ERR_EN_S 1 +#define E830_GLPCI_DOE_ERR_EN_RD_RESP_BUF_ECC_ERR_EN_M BIT(1) +#define E830_GLPCI_DOE_ERR_EN_SW_WR_CFG_POISONED_EN_S 2 +#define E830_GLPCI_DOE_ERR_EN_SW_WR_CFG_POISONED_EN_M BIT(2) +#define E830_GLPCI_DOE_ERR_EN_SW_WR_REQ_BUF_ON_BUSY_DUE_REQ_EN_S 3 +#define E830_GLPCI_DOE_ERR_EN_SW_WR_REQ_BUF_ON_BUSY_DUE_REQ_EN_M BIT(3) +#define E830_GLPCI_DOE_ERR_EN_SW_GO_ON_BUSY_DUE_REQ_EN_S 4 +#define E830_GLPCI_DOE_ERR_EN_SW_GO_ON_BUSY_DUE_REQ_EN_M BIT(4) +#define E830_GLPCI_DOE_ERR_EN_SW_WR_REQ_BUF_ON_BUSY_DUE_FW_EN_S 5 +#define E830_GLPCI_DOE_ERR_EN_SW_WR_REQ_BUF_ON_BUSY_DUE_FW_EN_M BIT(5) +#define E830_GLPCI_DOE_ERR_EN_SW_GO_ON_BUSY_DUE_FW_EN_S 6 +#define E830_GLPCI_DOE_ERR_EN_SW_GO_ON_BUSY_DUE_FW_EN_M BIT(6) +#define E830_GLPCI_DOE_ERR_EN_SW_WR_REQ_BUF_OVERFLOW_EN_S 7 +#define E830_GLPCI_DOE_ERR_EN_SW_WR_REQ_BUF_OVERFLOW_EN_M BIT(7) +#define E830_GLPCI_DOE_ERR_EN_SW_GO_REQ_BUF_EMPTY_EN_S 8 +#define E830_GLPCI_DOE_ERR_EN_SW_GO_REQ_BUF_EMPTY_EN_M BIT(8) +#define E830_GLPCI_DOE_ERR_EN_SW_RD_RESP_BUF_ON_READY_LOW_EN_S 9 +#define E830_GLPCI_DOE_ERR_EN_SW_RD_RESP_BUF_ON_READY_LOW_EN_M BIT(9) +#define E830_GLPCI_DOE_ERR_EN_SW_REQ_DURING_MNG_RST_EN_S 10 +#define E830_GLPCI_DOE_ERR_EN_SW_REQ_DURING_MNG_RST_EN_M BIT(10) +#define E830_GLPCI_DOE_ERR_EN_FW_SET_ERROR_EN_S 11 +#define E830_GLPCI_DOE_ERR_EN_FW_SET_ERROR_EN_M BIT(11) +#define E830_GLPCI_DOE_ERR_EN_SW_WR_REQ_BUF_ON_BUSY_DUE_ABORT_EN_S 12 +#define E830_GLPCI_DOE_ERR_EN_SW_WR_REQ_BUF_ON_BUSY_DUE_ABORT_EN_M BIT(12) +#define E830_GLPCI_DOE_ERR_EN_SW_GO_ON_BUSY_DUE_ABORT_EN_S 13 +#define E830_GLPCI_DOE_ERR_EN_SW_GO_ON_BUSY_DUE_ABORT_EN_M BIT(13) +#define E830_GLPCI_DOE_ERR_EN_SW_RD_RESP_BUF_ON_BUSY_DUE_ABORT_EN_S 14 +#define E830_GLPCI_DOE_ERR_EN_SW_RD_RESP_BUF_ON_BUSY_DUE_ABORT_EN_M BIT(14) +#define E830_GLPCI_DOE_ERR_STATUS 0x0009DF68 /* Reset Source: PCIR */ +#define 
E830_GLPCI_DOE_ERR_STATUS_RD_REQ_BUF_ECC_ERR_S 0 +#define E830_GLPCI_DOE_ERR_STATUS_RD_REQ_BUF_ECC_ERR_M BIT(0) +#define E830_GLPCI_DOE_ERR_STATUS_RD_RESP_BUF_ECC_ERR_S 1 +#define E830_GLPCI_DOE_ERR_STATUS_RD_RESP_BUF_ECC_ERR_M BIT(1) +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_CFG_POISONED_S 2 +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_CFG_POISONED_M BIT(2) +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_REQ_BUF_ON_BUSY_DUE_REQ_S 3 +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_REQ_BUF_ON_BUSY_DUE_REQ_M BIT(3) +#define E830_GLPCI_DOE_ERR_STATUS_SW_GO_ON_BUSY_DUE_REQ_S 4 +#define E830_GLPCI_DOE_ERR_STATUS_SW_GO_ON_BUSY_DUE_REQ_M BIT(4) +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_REQ_BUF_ON_BUSY_DUE_FW_S 5 +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_REQ_BUF_ON_BUSY_DUE_FW_M BIT(5) +#define E830_GLPCI_DOE_ERR_STATUS_SW_GO_ON_BUSY_DUE_FW_S 6 +#define E830_GLPCI_DOE_ERR_STATUS_SW_GO_ON_BUSY_DUE_FW_M BIT(6) +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_REQ_BUF_OVERFLOW_S 7 +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_REQ_BUF_OVERFLOW_M BIT(7) +#define E830_GLPCI_DOE_ERR_STATUS_SW_GO_REQ_BUF_EMPTY_S 8 +#define E830_GLPCI_DOE_ERR_STATUS_SW_GO_REQ_BUF_EMPTY_M BIT(8) +#define E830_GLPCI_DOE_ERR_STATUS_SW_RD_RESP_BUF_ON_READY_LOW_S 9 +#define E830_GLPCI_DOE_ERR_STATUS_SW_RD_RESP_BUF_ON_READY_LOW_M BIT(9) +#define E830_GLPCI_DOE_ERR_STATUS_SW_REQ_DURING_MNG_RST_S 10 +#define E830_GLPCI_DOE_ERR_STATUS_SW_REQ_DURING_MNG_RST_M BIT(10) +#define E830_GLPCI_DOE_ERR_STATUS_FW_SET_ERROR_S 11 +#define E830_GLPCI_DOE_ERR_STATUS_FW_SET_ERROR_M BIT(11) +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_REQ_BUF_ON_BUSY_DUE_ABORT_S 12 +#define E830_GLPCI_DOE_ERR_STATUS_SW_WR_REQ_BUF_ON_BUSY_DUE_ABORT_M BIT(12) +#define E830_GLPCI_DOE_ERR_STATUS_SW_GO_ON_BUSY_DUE_ABORT_S 13 +#define E830_GLPCI_DOE_ERR_STATUS_SW_GO_ON_BUSY_DUE_ABORT_M BIT(13) +#define E830_GLPCI_DOE_ERR_STATUS_SW_RD_RESP_BUF_ON_BUSY_DUE_ABORT_S 14 +#define E830_GLPCI_DOE_ERR_STATUS_SW_RD_RESP_BUF_ON_BUSY_DUE_ABORT_M BIT(14) +#define 
E830_GLPCI_DOE_ERR_STATUS_CFG_ERR_IDX_S 24 +#define E830_GLPCI_DOE_ERR_STATUS_CFG_ERR_IDX_M MAKEMASK(0x1F, 24) +#define E830_GLPCI_DOE_REQ_MSG_NUM_DWS 0x0009DF58 /* Reset Source: PCIR */ +#define E830_GLPCI_DOE_REQ_MSG_NUM_DWS_GLPCI_DOE_REQ_MSG_NUM_DWS_S 0 +#define E830_GLPCI_DOE_REQ_MSG_NUM_DWS_GLPCI_DOE_REQ_MSG_NUM_DWS_M MAKEMASK(0x1FF, 0) +#define E830_GLPCI_DOE_RESP 0x0009DF5C /* Reset Source: PCIR */ +#define E830_GLPCI_DOE_RESP_MSG_NUM_DWS_S 0 +#define E830_GLPCI_DOE_RESP_MSG_NUM_DWS_M MAKEMASK(0x1FF, 0) +#define E830_GLPCI_DOE_RESP_READY_SET_S 16 +#define E830_GLPCI_DOE_RESP_READY_SET_M BIT(16) +#define E830_GLPCI_ERR_DBG 0x0009DF84 /* Reset Source: PCIR */ +#define E830_GLPCI_ERR_DBG_ERR_MIFO_FULL_DROP_CTR_S 0 +#define E830_GLPCI_ERR_DBG_ERR_MIFO_FULL_DROP_CTR_M MAKEMASK(0x3, 0) +#define E830_GLPCI_ERR_DBG_PCIE2SB_AER_MSG_SM_S 2 +#define E830_GLPCI_ERR_DBG_PCIE2SB_AER_MSG_SM_M BIT(2) +#define E830_GLPCI_ERR_DBG_PCIE2SB_AER_MSG_FIFO_NUM_ENTRIES_S 3 +#define E830_GLPCI_ERR_DBG_PCIE2SB_AER_MSG_FIFO_NUM_ENTRIES_M MAKEMASK(0x7, 3) +#define E830_GLPCI_ERR_DBG_ERR_MIFO_NUM_ENTRIES_S 6 +#define E830_GLPCI_ERR_DBG_ERR_MIFO_NUM_ENTRIES_M MAKEMASK(0xF, 6) +#define E830_GLPCI_NPQ_CFG_HIGH_TO_S 20 +#define E830_GLPCI_NPQ_CFG_HIGH_TO_M BIT(20) +#define E830_GLPCI_NPQ_CFG_INC_150MS_TO_S 21 +#define E830_GLPCI_NPQ_CFG_INC_150MS_TO_M BIT(21) +#define E830_GLPCI_PUSH_PQM_CTRL 0x0009DF74 /* Reset Source: POR */ +#define E830_GLPCI_PUSH_PQM_CTRL_PF_LEGACY_RANGE_EN_S 0 +#define E830_GLPCI_PUSH_PQM_CTRL_PF_LEGACY_RANGE_EN_M BIT(0) +#define E830_GLPCI_PUSH_PQM_CTRL_PF_TXTIME_RANGE_EN_S 1 +#define E830_GLPCI_PUSH_PQM_CTRL_PF_TXTIME_RANGE_EN_M BIT(1) +#define E830_GLPCI_PUSH_PQM_CTRL_PF_4K_RANGE_EN_S 2 +#define E830_GLPCI_PUSH_PQM_CTRL_PF_4K_RANGE_EN_M BIT(2) +#define E830_GLPCI_PUSH_PQM_CTRL_VF_LEGACY_RANGE_EN_S 3 +#define E830_GLPCI_PUSH_PQM_CTRL_VF_LEGACY_RANGE_EN_M BIT(3) +#define E830_GLPCI_PUSH_PQM_CTRL_VF_TXTIME_RANGE_EN_S 4 +#define 
E830_GLPCI_PUSH_PQM_CTRL_VF_TXTIME_RANGE_EN_M BIT(4)
+#define E830_GLPCI_PUSH_PQM_CTRL_PUSH_PQM_IF_TO_VAL_S 8
+#define E830_GLPCI_PUSH_PQM_CTRL_PUSH_PQM_IF_TO_VAL_M MAKEMASK(0xF, 8)
+#define E830_GLPCI_PUSH_PQM_CTRL_PUSH_PQM_IF_TO_DIS_S 12
+#define E830_GLPCI_PUSH_PQM_CTRL_PUSH_PQM_IF_TO_DIS_M BIT(12)
+#define E830_GLPCI_PUSH_PQM_CTRL_RD_COMP_LEN_2DWS_ONE_CHUNK_EN_S 16
+#define E830_GLPCI_PUSH_PQM_CTRL_RD_COMP_LEN_2DWS_ONE_CHUNK_EN_M BIT(16)
+#define E830_GLPCI_PUSH_PQM_DBG 0x0009DF7C /* Reset Source: PCIR */
+#define E830_GLPCI_PUSH_PQM_DBG_EVENTS_CTR_S 0
+#define E830_GLPCI_PUSH_PQM_DBG_EVENTS_CTR_M MAKEMASK(0xFF, 0)
+#define E830_GLPCI_PUSH_PQM_DBG_DROP_CTR_S 8
+#define E830_GLPCI_PUSH_PQM_DBG_DROP_CTR_M MAKEMASK(0xFF, 8)
+#define E830_GLPCI_PUSH_PQM_DBG_ASYNC_FIFO_USED_SPACE_S 16
+#define E830_GLPCI_PUSH_PQM_DBG_ASYNC_FIFO_USED_SPACE_M MAKEMASK(0xF, 16)
+#define E830_GLPCI_PUSH_PQM_DBG_CDT_FIFO_USED_SPACE_S 20
+#define E830_GLPCI_PUSH_PQM_DBG_CDT_FIFO_USED_SPACE_M MAKEMASK(0x1F, 20)
+#define E830_GLPCI_PUSH_PQM_DBG_CDT_FIFO_PUSH_WHEN_FULL_ERR_S 25
+#define E830_GLPCI_PUSH_PQM_DBG_CDT_FIFO_PUSH_WHEN_FULL_ERR_M BIT(25)
+#define E830_GLPCI_PUSH_PQM_IF_TO_STATUS 0x0009DF78 /* Reset Source: PCIR */
+#define E830_GLPCI_PUSH_PQM_IF_TO_STATUS_GLPCI_PUSH_PQM_IF_TO_STATUS_S 0
+#define E830_GLPCI_PUSH_PQM_IF_TO_STATUS_GLPCI_PUSH_PQM_IF_TO_STATUS_M BIT(0)
+#define E830_GLPCI_RDPU_CMD_DBG 0x000BE264 /* Reset Source: PCIR */
+#define E830_GLPCI_RDPU_CMD_DBG_RDPU0_CMD_POP_CNT_S 0
+#define E830_GLPCI_RDPU_CMD_DBG_RDPU0_CMD_POP_CNT_M MAKEMASK(0xFF, 0)
+#define E830_GLPCI_RDPU_CMD_DBG_RDPU1_CMD_POP_CNT_S 8
+#define E830_GLPCI_RDPU_CMD_DBG_RDPU1_CMD_POP_CNT_M MAKEMASK(0xFF, 8)
+#define E830_GLPCI_RDPU_CMD_DBG_RDPU2_CMD_POP_CNT_S 16
+#define E830_GLPCI_RDPU_CMD_DBG_RDPU2_CMD_POP_CNT_M MAKEMASK(0xFF, 16)
+#define E830_GLPCI_RDPU_CMD_DBG_RDPU3_CMD_POP_CNT_S 24
+#define E830_GLPCI_RDPU_CMD_DBG_RDPU3_CMD_POP_CNT_M MAKEMASK(0xFF, 24)
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG0 0x000BE25C /* Reset Source: PCIR */
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG0_RDPU0_CMD_NUM_ENTRIES_S 0
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG0_RDPU0_CMD_NUM_ENTRIES_M MAKEMASK(0x1FF, 0)
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG0_RDPU1_CMD_NUM_ENTRIES_S 16
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG0_RDPU1_CMD_NUM_ENTRIES_M MAKEMASK(0x1FF, 16)
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG1 0x000BE260 /* Reset Source: PCIR */
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG1_RDPU2_CMD_NUM_ENTRIES_S 0
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG1_RDPU2_CMD_NUM_ENTRIES_M MAKEMASK(0x1FF, 0)
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG1_RDPU3_CMD_NUM_ENTRIES_S 16
+#define E830_GLPCI_RDPU_CMD_FIFO_DBG1_RDPU3_CMD_NUM_ENTRIES_M MAKEMASK(0x1FF, 16)
+#define E830_GLPCI_RDPU_TAG 0x000BE258 /* Reset Source: PCIR */
+#define E830_GLPCI_RDPU_TAG_OVERRIDE_DELAY_S 0
+#define E830_GLPCI_RDPU_TAG_OVERRIDE_DELAY_M MAKEMASK(0xFF, 0)
+#define E830_GLPCI_RDPU_TAG_EXPECTED_TAG_S 8
+#define E830_GLPCI_RDPU_TAG_EXPECTED_TAG_M MAKEMASK(0x3FF, 8)
+#define E830_GLPCI_SB_AER_MSG_OUT 0x0009DF80 /* Reset Source: PCIR */
+#define E830_GLPCI_SB_AER_MSG_OUT_EN_S 0
+#define E830_GLPCI_SB_AER_MSG_OUT_EN_M BIT(0)
+#define E830_GLPCI_SB_AER_MSG_OUT_ANF_SET_EN_S 1
+#define E830_GLPCI_SB_AER_MSG_OUT_ANF_SET_EN_M BIT(1)
+#define E830_PF_FUNC_RID_HOST_S 16
+#define E830_PF_FUNC_RID_HOST_M MAKEMASK(0x3, 16)
+#define E830_GLPES_PFRXNPECNMARKEDPKTSHI(_i) (0x00553004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define E830_GLPES_PFRXNPECNMARKEDPKTSHI_MAX_INDEX 127
+#define E830_GLPES_PFRXNPECNMARKEDPKTSHI_RXNPECNMARKEDPKTSHI_S 0
+#define E830_GLPES_PFRXNPECNMARKEDPKTSHI_RXNPECNMARKEDPKTSHI_M MAKEMASK(0xFFFFFF, 0)
+#define E830_GLPES_PFRXNPECNMARKEDPKTSLO(_i) (0x00553000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define E830_GLPES_PFRXNPECNMARKEDPKTSLO_MAX_INDEX 127
+#define E830_GLPES_PFRXNPECNMARKEDPKTSLO_RXNPECNMARKEDPKTSLO_S 0
+#define E830_GLPES_PFRXNPECNMARKEDPKTSLO_RXNPECNMARKEDPKTSLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLPES_PFRXRPCNPHANDLED(_i) (0x00552C00 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define E830_GLPES_PFRXRPCNPHANDLED_MAX_INDEX 127
+#define E830_GLPES_PFRXRPCNPHANDLED_RXRPCNPHANDLED_S 0
+#define E830_GLPES_PFRXRPCNPHANDLED_RXRPCNPHANDLED_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLPES_PFRXRPCNPIGNORED(_i) (0x00552800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define E830_GLPES_PFRXRPCNPIGNORED_MAX_INDEX 127
+#define E830_GLPES_PFRXRPCNPIGNORED_RXRPCNPIGNORED_S 0
+#define E830_GLPES_PFRXRPCNPIGNORED_RXRPCNPIGNORED_M MAKEMASK(0xFFFFFF, 0)
+#define E830_GLPES_PFTXNPCNPSENT(_i) (0x00553800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define E830_GLPES_PFTXNPCNPSENT_MAX_INDEX 127
+#define E830_GLPES_PFTXNPCNPSENT_TXNPCNPSENT_S 0
+#define E830_GLPES_PFTXNPCNPSENT_TXNPCNPSENT_M MAKEMASK(0xFFFFFF, 0)
+#define E830_GLRPB_GBL_CFG 0x000AD260 /* Reset Source: CORER */
+#define E830_GLRPB_GBL_CFG_RESERVED_1_S 0
+#define E830_GLRPB_GBL_CFG_RESERVED_1_M MAKEMASK(0x3, 0)
+#define E830_GLRPB_GBL_CFG_ALW_PE_RLS_S 2
+#define E830_GLRPB_GBL_CFG_ALW_PE_RLS_M BIT(2)
+#define E830_GLRPB_GBL_CFG_LFSR_SHFT_S 3
+#define E830_GLRPB_GBL_CFG_LFSR_SHFT_M MAKEMASK(0x7, 3)
+#define E830_GLQF_FLAT_HLUT(_i) (0x004C0000 + ((_i) * 4)) /* _i=0...8191 */ /* Reset Source: CORER */
+#define E830_GLQF_FLAT_HLUT_MAX_INDEX 8191
+#define E830_GLQF_FLAT_HLUT_LUT0_S 0
+#define E830_GLQF_FLAT_HLUT_LUT0_M MAKEMASK(0xFF, 0)
+#define E830_GLQF_FLAT_HLUT_LUT1_S 8
+#define E830_GLQF_FLAT_HLUT_LUT1_M MAKEMASK(0xFF, 8)
+#define E830_GLQF_FLAT_HLUT_LUT2_S 16
+#define E830_GLQF_FLAT_HLUT_LUT2_M MAKEMASK(0xFF, 16)
+#define E830_GLQF_FLAT_HLUT_LUT3_S 24
+#define E830_GLQF_FLAT_HLUT_LUT3_M MAKEMASK(0xFF, 24)
+#define E830_GLQF_QGRP_CNTX(_i) (0x00490000 + ((_i) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define E830_GLQF_QGRP_CNTX_MAX_INDEX 2047
+#define E830_GLQF_QGRP_CNTX_QG_LUT_BASE_S 0
+#define E830_GLQF_QGRP_CNTX_QG_LUT_BASE_M MAKEMASK(0x7FFF, 0)
+#define E830_GLQF_QGRP_CNTX_QG_LUT_SIZE_S 16
+#define E830_GLQF_QGRP_CNTX_QG_LUT_SIZE_M MAKEMASK(0xF, 16)
+#define E830_GLQF_QGRP_CNTX_VSI_S 20
+#define E830_GLQF_QGRP_CNTX_VSI_M MAKEMASK(0x3FF, 20)
+#define E830_GLQF_QGRP_PF_OWNER(_i) (0x00484000 + ((_i) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define E830_GLQF_QGRP_PF_OWNER_MAX_INDEX 2047
+#define E830_GLQF_QGRP_PF_OWNER_OWNER_PF_S 0
+#define E830_GLQF_QGRP_PF_OWNER_OWNER_PF_M MAKEMASK(0x7, 0)
+#define E830_GLQF_QGRP_VSI_MODE 0x0048E084 /* Reset Source: CORER */
+#define E830_GLQF_QGRP_VSI_MODE_QGRP_MODE_S 0
+#define E830_GLQF_QGRP_VSI_MODE_QGRP_MODE_M BIT(0)
+#define E830_GLQF_QTABLE_MODE 0x0048E080 /* Reset Source: CORER */
+#define E830_GLQF_QTABLE_MODE_SCT_MODE_S 0
+#define E830_GLQF_QTABLE_MODE_SCT_MODE_M BIT(0)
+#define E830_GLQF_QTABLE_MODE_SCT_MODE_SET_S 1
+#define E830_GLQF_QTABLE_MODE_SCT_MODE_SET_M BIT(1)
+#define E830_PFQF_LUT_ALLOC 0x0048E000 /* Reset Source: CORER */
+#define E830_PFQF_LUT_ALLOC_LUT_BASE_S 0
+#define E830_PFQF_LUT_ALLOC_LUT_BASE_M MAKEMASK(0x7FFF, 0)
+#define E830_PFQF_LUT_ALLOC_LUT_SIZE_S 16
+#define E830_PFQF_LUT_ALLOC_LUT_SIZE_M MAKEMASK(0xF, 16)
+#define E830_PFQF_QTABLE_ALLOC 0x0048E040 /* Reset Source: CORER */
+#define E830_PFQF_QTABLE_ALLOC_BASE_S 0
+#define E830_PFQF_QTABLE_ALLOC_BASE_M MAKEMASK(0x3FFF, 0)
+#define E830_PFQF_QTABLE_ALLOC_SIZE_S 16
+#define E830_PFQF_QTABLE_ALLOC_SIZE_M MAKEMASK(0x1FFF, 16)
+#define E830_VSILAN_FLAT_Q(_VSI) (0x00487000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define E830_VSILAN_FLAT_Q_MAX_INDEX 767
+#define E830_VSILAN_FLAT_Q_SCT_FLAT_BASE_S 0
+#define E830_VSILAN_FLAT_Q_SCT_FLAT_BASE_M MAKEMASK(0xFFF, 0)
+#define E830_VSILAN_FLAT_Q_SCT_FLAT_SIZE_S 16
+#define E830_VSILAN_FLAT_Q_SCT_FLAT_SIZE_M MAKEMASK(0xFF, 16)
+#define E830_VSIQF_DEF_QGRP(_VSI) (0x00486000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define E830_VSIQF_DEF_QGRP_MAX_INDEX 767
+#define E830_VSIQF_DEF_QGRP_DEF_QGRP_S 0
+#define E830_VSIQF_DEF_QGRP_DEF_QGRP_M MAKEMASK(0x7FF, 0)
+#define E830_GLPRT_BPRCH_BPRCH_S 0
+#define E830_GLPRT_BPRCH_BPRCH_M MAKEMASK(0xFF, 0)
+#define E830_GLPRT_BPRCL_BPRCL_S 0
+#define E830_GLPRT_BPRCL_BPRCL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLPRT_BPTCH_BPTCH_S 0
+#define E830_GLPRT_BPTCH_BPTCH_M MAKEMASK(0xFF, 0)
+#define E830_GLPRT_BPTCL_BPTCL_S 0
+#define E830_GLPRT_BPTCL_BPTCL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLPRT_UPTCL_UPTCL_S 0
+#define E830_GLPRT_UPTCL_UPTCL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLRPB_PEAK_DOC_LOG(_i) (0x000AD178 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define E830_GLRPB_PEAK_DOC_LOG_MAX_INDEX 15
+#define E830_GLRPB_PEAK_DOC_LOG_PEAK_OC_S 0
+#define E830_GLRPB_PEAK_DOC_LOG_PEAK_OC_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLRPB_PEAK_SOC_LOG(_i) (0x000AD1B8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GLRPB_PEAK_SOC_LOG_MAX_INDEX 7
+#define E830_GLRPB_PEAK_SOC_LOG_PEAK_OC_S 0
+#define E830_GLRPB_PEAK_SOC_LOG_PEAK_OC_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLPTM_ART_CTL 0x00088B50 /* Reset Source: POR */
+#define E830_GLPTM_ART_CTL_ACTIVE_S 0
+#define E830_GLPTM_ART_CTL_ACTIVE_M BIT(0)
+#define E830_GLPTM_ART_CTL_TIME_OUT_S 1
+#define E830_GLPTM_ART_CTL_TIME_OUT_M BIT(1)
+#define E830_GLPTM_ART_CTL_PTM_READY_S 2
+#define E830_GLPTM_ART_CTL_PTM_READY_M BIT(2)
+#define E830_GLPTM_ART_CTL_PTM_AUTO_S 3
+#define E830_GLPTM_ART_CTL_PTM_AUTO_M BIT(3)
+#define E830_GLPTM_ART_CTL_PTM_AUTO_LATCH_S 4
+#define E830_GLPTM_ART_CTL_PTM_AUTO_LATCH_M BIT(4)
+#define E830_GLPTM_ART_TIME_H 0x00088B54 /* Reset Source: POR */
+#define E830_GLPTM_ART_TIME_H_ART_TIME_H_S 0
+#define E830_GLPTM_ART_TIME_H_ART_TIME_H_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLPTM_ART_TIME_L 0x00088B58 /* Reset Source: POR */
+#define E830_GLPTM_ART_TIME_L_ART_TIME_L_S 0
+#define E830_GLPTM_ART_TIME_L_ART_TIME_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_PTMTIME_H(_i) (0x00088B48 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define E830_GLTSYN_PTMTIME_H_MAX_INDEX 1
+#define E830_GLTSYN_PTMTIME_H_TSYNEVNT_H_S 0
+#define E830_GLTSYN_PTMTIME_H_TSYNEVNT_H_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_PTMTIME_L(_i) (0x00088B40 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define E830_GLTSYN_PTMTIME_L_MAX_INDEX 1
+#define E830_GLTSYN_PTMTIME_L_TSYNEVNT_L_S 0
+#define E830_GLTSYN_PTMTIME_L_TSYNEVNT_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_TIME_H_0_AL 0x0008A004 /* Reset Source: CORER */
+#define E830_GLTSYN_TIME_H_0_AL_TSYNTIME_L_S 0
+#define E830_GLTSYN_TIME_H_0_AL_TSYNTIME_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_TIME_H_1_AL 0x0008B004 /* Reset Source: CORER */
+#define E830_GLTSYN_TIME_H_1_AL_TSYNTIME_L_S 0
+#define E830_GLTSYN_TIME_H_1_AL_TSYNTIME_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_TIME_L_0_AL 0x0008A000 /* Reset Source: CORER */
+#define E830_GLTSYN_TIME_L_0_AL_TSYNTIME_L_S 0
+#define E830_GLTSYN_TIME_L_0_AL_TSYNTIME_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_TIME_L_1_AL 0x0008B000 /* Reset Source: CORER */
+#define E830_GLTSYN_TIME_L_1_AL_TSYNTIME_L_S 0
+#define E830_GLTSYN_TIME_L_1_AL_TSYNTIME_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_PFPTM_SEM 0x00088B00 /* Reset Source: PFR */
+#define E830_PFPTM_SEM_BUSY_S 0
+#define E830_PFPTM_SEM_BUSY_M BIT(0)
+#define E830_PFPTM_SEM_PF_OWNER_S 4
+#define E830_PFPTM_SEM_PF_OWNER_M MAKEMASK(0x7, 4)
+#define E830_VSI_PASID_1(_VSI) (0x00094000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define E830_VSI_PASID_1_MAX_INDEX 767
+#define E830_VSI_PASID_1_PASID_S 0
+#define E830_VSI_PASID_1_PASID_M MAKEMASK(0xFFFFF, 0)
+#define E830_VSI_PASID_1_EN_S 31
+#define E830_VSI_PASID_1_EN_M BIT(31)
+#define E830_VSI_PASID_2(_VSI) (0x00095000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define E830_VSI_PASID_2_MAX_INDEX 767
+#define E830_VSI_PASID_2_PASID_S 0
+#define E830_VSI_PASID_2_PASID_M MAKEMASK(0xFFFFF, 0)
+#define E830_VSI_PASID_2_EN_S 31
+#define E830_VSI_PASID_2_EN_M BIT(31)
+#define E830_GLPE_CQM_FUNC_INVALIDATE_PMF_ID_S 15
+#define E830_GLPE_CQM_FUNC_INVALIDATE_PMF_ID_M MAKEMASK(0x3F, 15)
+#define E830_GLPE_CQM_FUNC_INVALIDATE_INVALIDATE_TYPE_S 29
+#define E830_GLPE_CQM_FUNC_INVALIDATE_INVALIDATE_TYPE_M MAKEMASK(0x3, 29)
+#define E830_VFPE_MRTEIDXMASK_MAX_INDEX 255
+#define E830_GLSWR_PMCFG_RPB_REP_DHW(_i) (0x0020A7A0 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define E830_GLSWR_PMCFG_RPB_REP_DHW_MAX_INDEX 15
+#define E830_GLSWR_PMCFG_RPB_REP_DHW_DHW_TCN_S 0
+#define E830_GLSWR_PMCFG_RPB_REP_DHW_DHW_TCN_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLSWR_PMCFG_RPB_REP_DLW(_i) (0x0020A7E0 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define E830_GLSWR_PMCFG_RPB_REP_DLW_MAX_INDEX 15
+#define E830_GLSWR_PMCFG_RPB_REP_DLW_DLW_TCN_S 0
+#define E830_GLSWR_PMCFG_RPB_REP_DLW_DLW_TCN_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLSWR_PMCFG_RPB_REP_DPS(_i) (0x0020A760 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define E830_GLSWR_PMCFG_RPB_REP_DPS_MAX_INDEX 15
+#define E830_GLSWR_PMCFG_RPB_REP_DPS_DPS_TCN_S 0
+#define E830_GLSWR_PMCFG_RPB_REP_DPS_DPS_TCN_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLSWR_PMCFG_RPB_REP_SHW(_i) (0x0020A720 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GLSWR_PMCFG_RPB_REP_SHW_MAX_INDEX 7
+#define E830_GLSWR_PMCFG_RPB_REP_SHW_SHW_S 0
+#define E830_GLSWR_PMCFG_RPB_REP_SHW_SHW_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLSWR_PMCFG_RPB_REP_SLW(_i) (0x0020A740 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GLSWR_PMCFG_RPB_REP_SLW_MAX_INDEX 7
+#define E830_GLSWR_PMCFG_RPB_REP_SLW_SLW_S 0
+#define E830_GLSWR_PMCFG_RPB_REP_SLW_SLW_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLSWR_PMCFG_RPB_REP_SPS(_i) (0x0020A700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GLSWR_PMCFG_RPB_REP_SPS_MAX_INDEX 7
+#define E830_GLSWR_PMCFG_RPB_REP_SPS_SPS_TCN_S 0
+#define E830_GLSWR_PMCFG_RPB_REP_SPS_SPS_TCN_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLSWR_PMCFG_RPB_REP_TC_CFG(_i) (0x0020A980 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define E830_GLSWR_PMCFG_RPB_REP_TC_CFG_MAX_INDEX 31
+#define E830_GLSWR_PMCFG_RPB_REP_TC_CFG_D_POOL_S 0
+#define E830_GLSWR_PMCFG_RPB_REP_TC_CFG_D_POOL_M MAKEMASK(0xFFFF, 0)
+#define E830_GLSWR_PMCFG_RPB_REP_TC_CFG_S_POOL_S 16
+#define E830_GLSWR_PMCFG_RPB_REP_TC_CFG_S_POOL_M MAKEMASK(0xFFFF, 16)
+#define E830_GLSWR_PMCFG_RPB_REP_TCHW(_i) (0x0020A880 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define E830_GLSWR_PMCFG_RPB_REP_TCHW_MAX_INDEX 31
+#define E830_GLSWR_PMCFG_RPB_REP_TCHW_TCHW_S 0
+#define E830_GLSWR_PMCFG_RPB_REP_TCHW_TCHW_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLSWR_PMCFG_RPB_REP_TCLW(_i) (0x0020A900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define E830_GLSWR_PMCFG_RPB_REP_TCLW_MAX_INDEX 31
+#define E830_GLSWR_PMCFG_RPB_REP_TCLW_TCLW_S 0
+#define E830_GLSWR_PMCFG_RPB_REP_TCLW_TCLW_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLQF_QGRP_CFG(_VSI) (0x00492000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define E830_GLQF_QGRP_CFG_MAX_INDEX 767
+#define E830_GLQF_QGRP_CFG_VSI_QGRP_ENABLE_S 0
+#define E830_GLQF_QGRP_CFG_VSI_QGRP_ENABLE_M BIT(0)
+#define E830_GLQF_QGRP_CFG_VSI_QGRP_GEN_INDEX_S 1
+#define E830_GLQF_QGRP_CFG_VSI_QGRP_GEN_INDEX_M MAKEMASK(0x7, 1)
+#define E830_GLDCB_RTCTI_PD 0x00122740 /* Reset Source: CORER */
+#define E830_GLDCB_RTCTI_PD_PFCTIMEOUT_TC_S 0
+#define E830_GLDCB_RTCTI_PD_PFCTIMEOUT_TC_M MAKEMASK(0xFF, 0)
+#define E830_GLDCB_RTCTQ_PD(_i) (0x00122700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GLDCB_RTCTQ_PD_MAX_INDEX 7
+#define E830_GLDCB_RTCTQ_PD_RXQNUM_S 0
+#define E830_GLDCB_RTCTQ_PD_RXQNUM_M MAKEMASK(0x7FF, 0)
+#define E830_GLDCB_RTCTQ_PD_IS_PF_Q_S 16
+#define E830_GLDCB_RTCTQ_PD_IS_PF_Q_M BIT(16)
+#define E830_GLDCB_RTCTS_PD(_i) (0x00122720 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define E830_GLDCB_RTCTS_PD_MAX_INDEX 7
+#define E830_GLDCB_RTCTS_PD_PFCTIMER_S 0
+#define E830_GLDCB_RTCTS_PD_PFCTIMER_M MAKEMASK(0x3FFF, 0)
+#define E830_GLRPB_PEAK_TC_OC_LOG(_i) (0x000AD1D8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define E830_GLRPB_PEAK_TC_OC_LOG_MAX_INDEX 31
+#define E830_GLRPB_PEAK_TC_OC_LOG_PEAK_OC_S 0
+#define E830_GLRPB_PEAK_TC_OC_LOG_PEAK_OC_M MAKEMASK(0x3FFFFF, 0)
+#define E830_GLRPB_TC_TOTAL_PC(_i) (0x000ACFE0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define E830_GLRPB_TC_TOTAL_PC_MAX_INDEX 31
+#define E830_GLRPB_TC_TOTAL_PC_BYTE_CNT_S 0
+#define E830_GLRPB_TC_TOTAL_PC_BYTE_CNT_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_VFINT_ITRN_64(_i, _j) (0x00002C00 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...2 */ /* Reset Source: CORER */
+#define E830_VFINT_ITRN_64_MAX_INDEX 63
+#define E830_VFINT_ITRN_64_INTERVAL_S 0
+#define E830_VFINT_ITRN_64_INTERVAL_M MAKEMASK(0xFFF, 0)
+#define E830_GLQTX_TXTIME_DBELL_LSB1(_DBQM) (0x0000D000 + ((_DBQM) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define E830_GLQTX_TXTIME_DBELL_LSB1_MAX_INDEX 255
+#define E830_GLQTX_TXTIME_DBELL_LSB1_QTX_TXTIME_DBELL_S 0
+#define E830_GLQTX_TXTIME_DBELL_LSB1_QTX_TXTIME_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLQTX_TXTIME_DBELL_MSB1(_DBQM) (0x0000D004 + ((_DBQM) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define E830_GLQTX_TXTIME_DBELL_MSB1_MAX_INDEX 255
+#define E830_GLQTX_TXTIME_DBELL_MSB1_QTX_TXTIME_DBELL_S 0
+#define E830_GLQTX_TXTIME_DBELL_MSB1_QTX_TXTIME_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLQTX_TXTIME_LARGE_DBELL_LSB(_DBQM) (0x00040000 + ((_DBQM) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define E830_GLQTX_TXTIME_LARGE_DBELL_LSB_MAX_INDEX 255
+#define E830_GLQTX_TXTIME_LARGE_DBELL_LSB_QTX_TXTIME_DBELL_S 0
+#define E830_GLQTX_TXTIME_LARGE_DBELL_LSB_QTX_TXTIME_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLQTX_TXTIME_LARGE_DBELL_MSB(_DBQM) (0x00040004 + ((_DBQM) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define E830_GLQTX_TXTIME_LARGE_DBELL_MSB_MAX_INDEX 255
+#define E830_GLQTX_TXTIME_LARGE_DBELL_MSB_QTX_TXTIME_DBELL_S 0
+#define E830_GLQTX_TXTIME_LARGE_DBELL_MSB_QTX_TXTIME_DBELL_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_TIME_H_0_AL1 0x00003004 /* Reset Source: CORER */
+#define E830_GLTSYN_TIME_H_0_AL1_TSYNTIME_L_S 0
+#define E830_GLTSYN_TIME_H_0_AL1_TSYNTIME_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_TIME_H_1_AL1 0x0000300C /* Reset Source: CORER */
+#define E830_GLTSYN_TIME_H_1_AL1_TSYNTIME_L_S 0
+#define E830_GLTSYN_TIME_H_1_AL1_TSYNTIME_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_TIME_L_0_AL1 0x00003000 /* Reset Source: CORER */
+#define E830_GLTSYN_TIME_L_0_AL1_TSYNTIME_L_S 0
+#define E830_GLTSYN_TIME_L_0_AL1_TSYNTIME_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_GLTSYN_TIME_L_1_AL1 0x00003008 /* Reset Source: CORER */
+#define E830_GLTSYN_TIME_L_1_AL1_TSYNTIME_L_S 0
+#define E830_GLTSYN_TIME_L_1_AL1_TSYNTIME_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define E830_VSI_VSI2F_LEM(_VSI) (0x006100A0 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define E830_VSI_VSI2F_LEM_MAX_INDEX 767
+#define E830_VSI_VSI2F_LEM_VFVMNUMBER_S 0
+#define E830_VSI_VSI2F_LEM_VFVMNUMBER_M MAKEMASK(0x3FF, 0)
+#define E830_VSI_VSI2F_LEM_FUNCTIONTYPE_S 10
+#define E830_VSI_VSI2F_LEM_FUNCTIONTYPE_M MAKEMASK(0x3, 10)
+#define E830_VSI_VSI2F_LEM_PFNUMBER_S 12
+#define E830_VSI_VSI2F_LEM_PFNUMBER_M MAKEMASK(0x7, 12)
+#define E830_VSI_VSI2F_LEM_BUFFERNUMBER_S 16
+#define E830_VSI_VSI2F_LEM_BUFFERNUMBER_M MAKEMASK(0x7, 16)
+#define E830_VSI_VSI2F_LEM_VSI_NUMBER_S 20
+#define E830_VSI_VSI2F_LEM_VSI_NUMBER_M MAKEMASK(0x3FF, 20)
+#define E830_VSI_VSI2F_LEM_VSI_ENABLE_S 31
+#define E830_VSI_VSI2F_LEM_VSI_ENABLE_M BIT(31)
 #endif /* !_ICE_HW_AUTOGEN_H_ */
diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
index d816df0ff6..3e6c706cb2 100644
--- a/drivers/net/ice/base/ice_lan_tx_rx.h
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -162,7 +162,6 @@ struct ice_fltr_desc {
 #define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
 #define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
-#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
 #define ICE_FXD_FLTR_QW1_FDID_PRI_ONE	0x1ULL
 #define ICE_FXD_FLTR_QW1_FDID_PRI_THREE	0x3ULL
@@ -284,6 +283,7 @@ enum ice_rx_desc_error_l3l4e_masks {
 enum ice_rx_l2_ptype {
 	ICE_RX_PTYPE_L2_RESERVED	= 0,
 	ICE_RX_PTYPE_L2_MAC_PAY2	= 1,
+	ICE_RX_PTYPE_L2_TIMESYNC_PAY2	= 2,
 	ICE_RX_PTYPE_L2_FIP_PAY2	= 3,
 	ICE_RX_PTYPE_L2_OUI_PAY2	= 4,
 	ICE_RX_PTYPE_L2_MACCNTRL_PAY2	= 5,
@@ -343,6 +343,7 @@ enum ice_rx_ptype_inner_prot {
 	ICE_RX_PTYPE_INNER_PROT_TCP	= 2,
 	ICE_RX_PTYPE_INNER_PROT_SCTP	= 3,
 	ICE_RX_PTYPE_INNER_PROT_ICMP	= 4,
+	ICE_RX_PTYPE_INNER_PROT_TIMESYNC	= 5,
 };
@@ -490,6 +491,7 @@ union ice_32b_rx_flex_desc {
  * Flex-field 2: Flow ID lower 16-bits
  * Flex-field 3: Flow ID higher 16-bits
  * Flex-field 4: reserved, VLAN ID taken from L2Tag
+ * Flex-field 7: Raw CSUM
  */
 struct ice_32b_rx_flex_desc_nic {
 	/* Qword 0 */
@@ -508,7 +510,7 @@ struct ice_32b_rx_flex_desc_nic {
 	__le16 status_error1;
 	u8 flexi_flags2;
 	u8 ts_low;
-	__le16 l2tag2_1st;
+	__le16 raw_csum;
 	__le16 l2tag2_2nd;
 
 	/* Qword 3 */
@@ -522,46 +524,6 @@ struct ice_32b_rx_flex_desc_nic {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor NIC Raw CSUM Profile
- * RxDID Profile ID 9
- * Flex-field 0: RSS hash lower 16-bits
- * Flex-field 1: RSS hash upper 16-bits
- * Flex-field 2: Flow ID lower 16-bits
- * Flex-field 3: Raw CSUM
- * Flex-field 4: reserved, VLAN ID taken from L2Tag
- */
-struct ice_32b_rx_flex_desc_nic_raw_csum {
-	/* Qword 0 */
-	u8 rxdid;
-	u8 mir_id_umb_cast;
-	__le16 ptype_flexi_flags0;
-	__le16 pkt_len;
-	__le16 hdr_len_sph_flex_flags1;
-
-	/* Qword 1 */
-	__le16 status_error0;
-	__le16 l2tag1;
-	__le32 rss_hash;
-
-	/* Qword 2 */
-	__le16 status_error1; /* bit 11 Raw CSUM present */
-	u8 flexi_flags2;
-	u8 ts_low;
-	__le16 l2tag2_1st;
-	__le16 l2tag2_2nd;
-
-	/* Qword 3 */
-	__le16 flow_id;
-	__le16 raw_csum;
-	union {
-		struct {
-			__le16 rsvd;
-			__le16 flow_id_ipv6;
-		} flex;
-		__le32 ts_high;
-	} flex_ts;
-};
-
 /* Rx Flex Descriptor Switch Profile
  * RxDID Profile ID 3
  * Flex-field 0: Source VSI
@@ -748,7 +710,6 @@ enum ice_rxdid {
 	ICE_RXDID_FLEX_NIC	= 2,
 	ICE_RXDID_FLEX_NIC_2	= 6,
 	ICE_RXDID_HW		= 7,
-	ICE_RXDID_GSC		= 9,
 	ICE_RXDID_COMMS_GENERIC	= 16,
 	ICE_RXDID_COMMS_AUX_VLAN	= 17,
 	ICE_RXDID_COMMS_AUX_IPV4	= 18,
@@ -920,7 +881,8 @@ enum ice_rx_flex_desc_exstat_bits {
 	ICE_RX_FLEX_DESC_EXSTAT_OVERSIZE_S = 3,
 };
 
-/* For ice_32b_rx_flex_desc.ts_low:
+/*
+ * For ice_32b_rx_flex_desc.ts_low:
  * [0]: Timestamp-low validity bit
  * [1:7]: Timestamp-low value
  */
@@ -930,6 +892,8 @@ enum ice_rx_flex_desc_exstat_bits {
 
 #define ICE_RXQ_CTX_SIZE_DWORDS		8
 #define ICE_RXQ_CTX_SZ			(ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32))
+#define ICE_TXQ_CTX_SIZE_DWORDS		10
+#define ICE_TXQ_CTX_SZ			(ICE_TXQ_CTX_SIZE_DWORDS * sizeof(u32))
 #define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS	22
 #define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS	5
 #define GLTCLAN_CQ_CNTX(i, CQ)		(GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800))
@@ -1070,14 +1034,13 @@ enum ice_tx_desc_len_fields {
 struct ice_tx_ctx_desc {
 	__le32 tunneling_params;
 	__le16 l2tag2;
-	__le16 gsc;
+	__le16 gcs;
 	__le64 qw1;
 };
 
-#define ICE_TX_GSC_DESC_START	0 /* 7 BITS */
-#define ICE_TX_GSC_DESC_OFFSET	7 /* 4 BITS */
-#define ICE_TX_GSC_DESC_TYPE	11 /* 2 BITS */
-#define ICE_TX_GSC_DESC_ENA	13 /* 1 BIT */
+#define ICE_TX_GCS_DESC_START	0 /* 8 BITS */
+#define ICE_TX_GCS_DESC_OFFSET	8 /* 4 BITS */
+#define ICE_TX_GCS_DESC_TYPE	12 /* 3 BITS */
 
 #define ICE_TXD_CTX_QW1_DTYPE_S	0
 #define ICE_TXD_CTX_QW1_DTYPE_M	(0xFUL << ICE_TXD_CTX_QW1_DTYPE_S)
@@ -1189,8 +1152,8 @@ struct ice_tlan_ctx {
 	u8 drop_ena;
 	u8 cache_prof_idx;
 	u8 pkt_shaper_prof_idx;
-	u8 gsc_ena;
 	u8 int_q_state;	/* width not needed - internal - DO NOT WRITE!!! */
+	u16 tail;
 };
 
 /* LAN Tx Completion Queue data */
@@ -1207,6 +1170,7 @@ struct ice_tx_cmpltnq {
 #pragma pack(1)
 struct ice_tx_cmpltnq_ctx {
 	u64 base;
+#define ICE_TX_CMPLTNQ_CTX_BASE_S	7
 	u32 q_len;
 #define ICE_TX_CMPLTNQ_CTX_Q_LEN_S	4
 	u8 generation;
@@ -1214,6 +1178,9 @@ struct ice_tx_cmpltnq_ctx {
 	u8 pf_num;
 	u16 vmvf_num;
 	u8 vmvf_type;
+#define ICE_TX_CMPLTNQ_CTX_VMVF_TYPE_VF		0
+#define ICE_TX_CMPLTNQ_CTX_VMVF_TYPE_VMQ	1
+#define ICE_TX_CMPLTNQ_CTX_VMVF_TYPE_PF		2
 	u8 tph_desc_wr;
 	u8 cpuid;
 	u32 cmpltn_cache[16];
@@ -1228,15 +1195,30 @@ struct ice_tx_drbell_fmt {
 	u32 db;
 };
 
+/* FIXME: move to a .c file that references this variable */
+/* LAN Tx Doorbell Descriptor format info */
+static const struct ice_ctx_ele ice_tx_drbell_fmt_info[] = {
+					/* Field		Width	LSB */
+	ICE_CTX_STORE(ice_tx_drbell_fmt, txq_id,	14,	0),
+	ICE_CTX_STORE(ice_tx_drbell_fmt, dd,		1,	14),
+	ICE_CTX_STORE(ice_tx_drbell_fmt, rs,		1,	15),
+	ICE_CTX_STORE(ice_tx_drbell_fmt, db,		32,	32),
+	{ 0 }
+};
 
 /* LAN Tx Doorbell Queue Context */
 #pragma pack(1)
 struct ice_tx_drbell_q_ctx {
 	u64 base;
+#define ICE_TX_DRBELL_Q_CTX_BASE_S	7
 	u16 ring_len;
+#define ICE_TX_DRBELL_Q_CTX_RING_LEN_S	4
 	u8 pf_num;
 	u16 vf_num;
 	u8 vmvf_type;
+#define ICE_TX_DRBELL_Q_CTX_VMVF_TYPE_VF	0
+#define ICE_TX_DRBELL_Q_CTX_VMVF_TYPE_VMQ	1
+#define ICE_TX_DRBELL_Q_CTX_VMVF_TYPE_PF	2
 	u8 cpuid;
 	u8 tph_desc_rd;
 	u8 tph_desc_wr;
@@ -1286,6 +1268,7 @@ struct ice_tx_drbell_q_ctx {
 /* shorter macros makes the table fit but are terse */
 #define ICE_RX_PTYPE_NOF		ICE_RX_PTYPE_NOT_FRAG
 #define ICE_RX_PTYPE_FRG		ICE_RX_PTYPE_FRAG
+#define ICE_RX_PTYPE_INNER_PROT_TS	ICE_RX_PTYPE_INNER_PROT_TIMESYNC
 
 /* Lookup table mapping the 10-bit HW PTYPE to the bit field for decoding */
 static const struct ice_rx_ptype_decoded ice_ptype_lkup[1024] = {
@@ -2450,7 +2433,7 @@ static const struct ice_rx_ptype_decoded ice_ptype_lkup[1024] = {
 	ICE_PTT_UNUSED_ENTRY(1020),
 	ICE_PTT_UNUSED_ENTRY(1021),
 	ICE_PTT_UNUSED_ENTRY(1022),
-	ICE_PTT_UNUSED_ENTRY(1023),
+	ICE_PTT_UNUSED_ENTRY(1023)
 };
 
 static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
@@ -2470,5 +2453,5 @@ static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
 #define ICE_LINK_SPEED_40000MBPS	40000
 #define ICE_LINK_SPEED_50000MBPS	50000
 #define ICE_LINK_SPEED_100000MBPS	100000
-
+#define ICE_LINK_SPEED_200000MBPS	200000
 #endif /* _ICE_LAN_TX_RX_H_ */
diff --git a/drivers/net/ice/base/ice_metainit.c b/drivers/net/ice/base/ice_metainit.c
index 1e990c9aa0..978545adc6 100644
--- a/drivers/net/ice/base/ice_metainit.c
+++ b/drivers/net/ice/base/ice_metainit.c
@@ -40,7 +40,7 @@ void ice_metainit_dump(struct ice_hw *hw, struct ice_metainit_item *item)
 	ice_info(hw, "gpr_d_data_start = %d\n", item->gpr_d_data_start);
 	ice_info(hw, "gpr_d_data_len = %d\n", item->gpr_d_data_len);
 	ice_info(hw, "gpr_d_id = %d\n", item->gpr_d_id);
-	ice_info(hw, "flags = 0x%016" PRIx64 "\n", item->flags);
+	ice_info(hw, "flags = 0x%llx\n", (unsigned long long)(item->flags));
 }
 
 /** The function parses a 192 bits Metadata Init entry with below format:
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index e46aded12a..52166ccde6 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -19,7 +19,7 @@
  *
  * Read the NVM using the admin queue commands (0x0701)
  */
-enum ice_status
+int
 ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
 		void *data, bool last_command, bool read_shadow_ram,
 		struct ice_sq_cd *cd)
@@ -65,14 +65,14 @@ ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
 * Returns a status code on failure. Note that the data pointer may be
 * partially updated if some reads succeed before a failure.
 */
-enum ice_status
+int
 ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data,
 		  bool read_shadow_ram)
 {
-	enum ice_status status;
 	u32 inlen = *length;
 	u32 bytes_read = 0;
 	bool last_cmd;
+	int status;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
@@ -125,12 +125,11 @@ ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data,
 *
 * Reads one 16 bit word from the Shadow RAM using ice_read_flat_nvm.
 */
-static enum ice_status
-ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
+static int ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 {
 	u32 bytes = sizeof(u16);
-	enum ice_status status;
 	__le16 data_local;
+	int status;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
@@ -144,7 +143,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 		return status;
 
 	*data = LE16_TO_CPU(data_local);
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -157,11 +156,11 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 * Reads 16 bit words (data buf) from the Shadow RAM. Ownership of the NVM is
 * taken before reading the buffer and later released.
 */
-static enum ice_status
+static int
 ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 {
 	u32 bytes = *words * 2, i;
-	enum ice_status status;
+	int status;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
@@ -187,13 +186,11 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 *
 * This function will request NVM ownership.
 */
-enum ice_status
-ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
+int ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
 {
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	if (hw->flash.blank_nvm_mode)
-		return ICE_SUCCESS;
+		return 0;
 
 	return ice_acquire_res(hw, ICE_NVM_RES_ID, access, ICE_NVM_TIMEOUT);
 }
@@ -299,11 +296,11 @@ static u32 ice_get_flash_bank_offset(struct ice_hw *hw, enum ice_bank_select ban
 * hw->flash.banks data being setup by ice_determine_active_flash_banks()
 * during initialization.
 */
-static enum ice_status
+static int
 ice_read_flash_module(struct ice_hw *hw, enum ice_bank_select bank, u16 module,
 		      u32 offset, u8 *data, u32 length)
 {
-	enum ice_status status;
+	int status;
 	u32 start;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
@@ -336,11 +333,11 @@ ice_read_flash_module(struct ice_hw *hw, enum ice_bank_select bank, u16 module,
 * Read the specified word from the active NVM module. This includes the CSS
 * header at the start of the NVM module.
 */
-static enum ice_status
+static int
 ice_read_nvm_module(struct ice_hw *hw, enum ice_bank_select bank, u32 offset, u16 *data)
 {
-	enum ice_status status;
 	__le16 data_local;
+	int status;
 
 	status = ice_read_flash_module(hw, bank, ICE_SR_1ST_NVM_BANK_PTR,
				       offset * sizeof(u16), (_FORCE_ u8 *)&data_local,
				       sizeof(u16));
@@ -359,13 +356,13 @@ ice_read_nvm_module(struct ice_hw *hw, enum ice_bank_select bank, u32 offset, u1
 * Read the CSS header length from the NVM CSS header and add the Authentication
 * header size, and then convert to words.
 */
-static enum ice_status
+static int
 ice_get_nvm_css_hdr_len(struct ice_hw *hw, enum ice_bank_select bank,
			u32 *hdr_len)
 {
 	u16 hdr_len_l, hdr_len_h;
-	enum ice_status status;
 	u32 hdr_len_dword;
+	int status;
 
 	status = ice_read_nvm_module(hw, bank, ICE_NVM_CSS_HDR_LEN_L,
				     &hdr_len_l);
@@ -383,7 +380,7 @@ ice_get_nvm_css_hdr_len(struct ice_hw *hw, enum ice_bank_select bank,
 	hdr_len_dword = hdr_len_h << 16 | hdr_len_l;
 	*hdr_len = (hdr_len_dword * 2) + ICE_NVM_AUTH_HEADER_LEN;
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -396,11 +393,11 @@ ice_get_nvm_css_hdr_len(struct ice_hw *hw, enum ice_bank_select bank,
 * Read the specified word from the copy of the Shadow RAM found in the
 * specified NVM module.
 */
-static enum ice_status
+static int
 ice_read_nvm_sr_copy(struct ice_hw *hw, enum ice_bank_select bank, u32 offset, u16 *data)
 {
-	enum ice_status status;
 	u32 hdr_len;
+	int status;
 
 	status = ice_get_nvm_css_hdr_len(hw, bank, &hdr_len);
 	if (status)
@@ -422,11 +419,11 @@ ice_read_nvm_sr_copy(struct ice_hw *hw, enum ice_bank_select bank, u32 offset, u
 * Note that unlike the NVM module, the CSS data is stored at the end of the
 * module instead of at the beginning.
 */
-static enum ice_status
+static int
 ice_read_orom_module(struct ice_hw *hw, enum ice_bank_select bank, u32 offset, u16 *data)
 {
-	enum ice_status status;
 	__le16 data_local;
+	int status;
 
 	status = ice_read_flash_module(hw, bank, ICE_SR_1ST_OROM_BANK_PTR,
				       offset * sizeof(u16), (_FORCE_ u8 *)&data_local,
				       sizeof(u16));
@@ -444,9 +441,9 @@ ice_read_orom_module(struct ice_hw *hw, enum ice_bank_select bank, u32 offset, u
 *
 * Reads one 16 bit word from the Shadow RAM using the ice_read_sr_word_aq.
 */
-enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
+int ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
 {
-	enum ice_status status;
+	int status;
 
 	status = ice_acquire_nvm(hw, ICE_RES_READ);
 	if (!status) {
@@ -468,21 +465,21 @@ enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
 * Area (PFA) and returns the TLV pointer and length. The caller can
 * use these to read the variable length TLV value.
 */
-enum ice_status
+int
 ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
		       u16 module_type)
 {
-	enum ice_status status;
 	u16 pfa_len, pfa_ptr;
-	u16 next_tlv;
+	u32 next_tlv;
+	int status;
 
 	status = ice_read_sr_word(hw, ICE_SR_PFA_PTR, &pfa_ptr);
-	if (status != ICE_SUCCESS) {
+	if (status) {
 		ice_debug(hw, ICE_DBG_INIT, "Preserved Field Array pointer.\n");
 		return status;
 	}
 	status = ice_read_sr_word(hw, pfa_ptr, &pfa_len);
-	if (status != ICE_SUCCESS) {
+	if (status) {
 		ice_debug(hw, ICE_DBG_INIT, "Failed to read PFA length.\n");
 		return status;
 	}
@@ -490,27 +487,32 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
 	 * of TLVs to find the requested one.
 	 */
 	next_tlv = pfa_ptr + 1;
-	while (next_tlv < pfa_ptr + pfa_len) {
+	while (next_tlv < ((u32)pfa_ptr + pfa_len)) {
 		u16 tlv_sub_module_type;
 		u16 tlv_len;
 
 		/* Read TLV type */
-		status = ice_read_sr_word(hw, next_tlv, &tlv_sub_module_type);
-		if (status != ICE_SUCCESS) {
+		status = ice_read_sr_word(hw, (u16)next_tlv,
+					  &tlv_sub_module_type);
+		if (status) {
 			ice_debug(hw, ICE_DBG_INIT, "Failed to read TLV type.\n");
 			break;
 		}
 		/* Read TLV length */
-		status = ice_read_sr_word(hw, next_tlv + 1, &tlv_len);
-		if (status != ICE_SUCCESS) {
+		status = ice_read_sr_word(hw, (u16)(next_tlv + 1), &tlv_len);
+		if (status) {
 			ice_debug(hw, ICE_DBG_INIT, "Failed to read TLV length.\n");
 			break;
 		}
+		if (tlv_len > pfa_len) {
+			ice_debug(hw, ICE_DBG_INIT, "Invalid TLV length.\n");
+			return ICE_ERR_INVAL_SIZE;
+		}
 		if (tlv_sub_module_type == module_type) {
 			if (tlv_len) {
-				*module_tlv = next_tlv;
+				*module_tlv = (u16)next_tlv;
 				*module_tlv_len = tlv_len;
-				return ICE_SUCCESS;
+				return 0;
 			}
 			return ICE_ERR_INVAL_SIZE;
 		}
@@ -531,24 +533,23 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
 *
 * Reads the part number string from the NVM.
 */
-enum ice_status
-ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size)
+int ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size)
 {
 	u16 pba_tlv, pba_tlv_len;
-	enum ice_status status;
 	u16 pba_word, pba_size;
+	int status;
 	u16 i;
 
 	status = ice_get_pfa_module_tlv(hw, &pba_tlv, &pba_tlv_len,
					ICE_SR_PBA_BLOCK_PTR);
-	if (status != ICE_SUCCESS) {
+	if (status) {
 		ice_debug(hw, ICE_DBG_INIT, "Failed to read PBA Block TLV.\n");
 		return status;
 	}
 
 	/* pba_size is the next word */
 	status = ice_read_sr_word(hw, (pba_tlv + 2), &pba_size);
-	if (status != ICE_SUCCESS) {
+	if (status) {
 		ice_debug(hw, ICE_DBG_INIT, "Failed to read PBA Section size.\n");
 		return status;
 	}
@@ -569,7 +570,7 @@ ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size)
 	for (i = 0; i < pba_size; i++) {
 		status = ice_read_sr_word(hw, (pba_tlv + 2 + 1) + i, &pba_word);
-		if (status != ICE_SUCCESS) {
+		if (status) {
 			ice_debug(hw, ICE_DBG_INIT, "Failed to read PBA Block word %d.\n", i);
 			return status;
 		}
@@ -591,10 +592,10 @@ ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size)
 * Read the security revision out of the CSS header of the active NVM module
 * bank.
 */
-static enum ice_status ice_get_nvm_srev(struct ice_hw *hw, enum ice_bank_select bank, u32 *srev)
+static int ice_get_nvm_srev(struct ice_hw *hw, enum ice_bank_select bank, u32 *srev)
 {
-	enum ice_status status;
 	u16 srev_l, srev_h;
+	int status;
 
 	status = ice_read_nvm_module(hw, bank, ICE_NVM_CSS_SREV_L, &srev_l);
 	if (status)
@@ -606,7 +607,7 @@ static enum ice_status ice_get_nvm_srev(struct ice_hw *hw, enum ice_bank_select
 	*srev = srev_h << 16 | srev_l;
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -616,13 +617,13 @@ static enum ice_status ice_get_nvm_srev(struct ice_hw *hw, enum ice_bank_select
 * @nvm: pointer to NVM info structure
 *
 * Read the NVM EETRACK ID and map version of the main NVM image bank, filling
-* in the nvm info structure.
+* in the NVM info structure.
 */
-static enum ice_status
+static int
 ice_get_nvm_ver_info(struct ice_hw *hw, enum ice_bank_select bank, struct ice_nvm_info *nvm)
 {
 	u16 eetrack_lo, eetrack_hi, ver;
-	enum ice_status status;
+	int status;
 
 	status = ice_read_nvm_sr_copy(hw, bank, ICE_SR_NVM_DEV_STARTER_VER, &ver);
 	if (status) {
@@ -650,7 +651,7 @@ ice_get_nvm_ver_info(struct ice_hw *hw, enum ice_bank_select bank, struct ice_nv
 	if (status)
 		ice_debug(hw, ICE_DBG_NVM, "Failed to read NVM security revision.\n");
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -662,7 +663,7 @@ ice_get_nvm_ver_info(struct ice_hw *hw, enum ice_bank_select bank, struct ice_nv
 * inactive NVM bank. Used to access version data for a pending update that
 * has not yet been activated.
 */
-enum ice_status ice_get_inactive_nvm_ver(struct ice_hw *hw, struct ice_nvm_info *nvm)
+int ice_get_inactive_nvm_ver(struct ice_hw *hw, struct ice_nvm_info *nvm)
 {
 	return ice_get_nvm_ver_info(hw, ICE_INACTIVE_FLASH_BANK, nvm);
 }
 
@@ -676,13 +677,13 @@ enum ice_status ice_get_inactive_nvm_ver(struct ice_hw *hw, struct ice_nvm_info
 * Read the security revision out of the CSS header of the active OROM module
 * bank.
 */
-static enum ice_status ice_get_orom_srev(struct ice_hw *hw, enum ice_bank_select bank, u32 *srev)
+static int ice_get_orom_srev(struct ice_hw *hw, enum ice_bank_select bank, u32 *srev)
 {
 	u32 orom_size_word = hw->flash.banks.orom_size / 2;
-	enum ice_status status;
 	u16 srev_l, srev_h;
 	u32 css_start;
 	u32 hdr_len;
+	int status;
 
 	status = ice_get_nvm_css_hdr_len(hw, bank, &hdr_len);
 	if (status)
@@ -708,7 +709,7 @@ static enum ice_status ice_get_orom_srev(struct ice_hw *hw, enum ice_bank_select
 	*srev = srev_h << 16 | srev_l;
 
-	return ICE_SUCCESS;
+	return 0;
}
 
 /**
@@ -720,13 +721,14 @@ static enum ice_status ice_get_orom_srev(struct ice_hw *hw, enum ice_bank_select
 * Searches through the Option ROM flash contents to locate the CIVD data for
 * the image.
 */
-static enum ice_status
+static int
 ice_get_orom_civd_data(struct ice_hw *hw, enum ice_bank_select bank,
		       struct ice_orom_civd_info *civd)
 {
-	u8 *orom_data;
-	enum ice_status status;
+	struct ice_orom_civd_info civd_data_section;
+	int status;
 	u32 offset;
+	u32 tmp;
 
 	/* The CIVD section is located in the Option ROM aligned to 512 bytes.
	 * The first 4 bytes must contain the ASCII characters "$CIV".
@@ -737,55 +739,55 @@ ice_get_orom_civd_data(struct ice_hw *hw, enum ice_bank_select bank,
	 * usually somewhere in the middle of the bank. We need to scan the
	 * Option ROM bank to locate it.
	 *
-	 * It's significantly faster to read the entire Option ROM up front
-	 * using the maximum page size, than to read each possible location
-	 * with a separate firmware command.
	 */
-	orom_data = (u8 *)ice_calloc(hw, hw->flash.banks.orom_size, sizeof(u8));
-	if (!orom_data)
-		return ICE_ERR_NO_MEMORY;
-
-	status = ice_read_flash_module(hw, bank, ICE_SR_1ST_OROM_BANK_PTR, 0,
-				       orom_data, hw->flash.banks.orom_size);
-	if (status) {
-		ice_debug(hw, ICE_DBG_NVM, "Unable to read Option ROM data\n");
-		return status;
-	}
 
	/* Scan the memory buffer to locate the CIVD data section */
	for (offset = 0; (offset + 512) <= hw->flash.banks.orom_size; offset += 512) {
-		struct ice_orom_civd_info *tmp;
		u8 sum = 0, i;
 
-		tmp = (struct ice_orom_civd_info *)&orom_data[offset];
+		status = ice_read_flash_module(hw, bank, ICE_SR_1ST_OROM_BANK_PTR,
+					       offset, (u8 *)&tmp, sizeof(tmp));
+		if (status) {
+			ice_debug(hw, ICE_DBG_NVM, "Unable to read Option ROM data\n");
+			return status;
+		}
 
		/* Skip forward until we find a matching signature */
-		if (memcmp("$CIV", tmp->signature, sizeof(tmp->signature)) != 0)
+		if (memcmp("$CIV", &tmp, sizeof(tmp)) != 0)
			continue;
 
		ice_debug(hw, ICE_DBG_NVM, "Found CIVD section at offset %u\n",
			  offset);
 
+		status = ice_read_flash_module(hw, bank, ICE_SR_1ST_OROM_BANK_PTR,
+					       offset, (u8 *)&civd_data_section,
+					       sizeof(civd_data_section));
+		if (status) {
+			ice_debug(hw, ICE_DBG_NVM,
"Unable to read CIVD data\n"); + goto exit_error; + } + /* Verify that the simple checksum is zero */ - for (i = 0; i < sizeof(*tmp); i++) - sum += ((u8 *)tmp)[i]; + for (i = 0; i < sizeof(civd_data_section); i++) + sum += ((u8 *)&civd_data_section)[i]; if (sum) { ice_debug(hw, ICE_DBG_NVM, "Found CIVD data with invalid checksum of %u\n", sum); - goto err_invalid_checksum; + status = ICE_ERR_NVM; + goto exit_error; } - *civd = *tmp; - ice_free(hw, orom_data); - return ICE_SUCCESS; + *civd = civd_data_section; + + return 0; } + status = ICE_ERR_NVM; ice_debug(hw, ICE_DBG_NVM, "Unable to locate CIVD data within the Option ROM\n"); -err_invalid_checksum: - ice_free(hw, orom_data); - return ICE_ERR_NVM; +exit_error: + return status; } /** @@ -797,12 +799,12 @@ ice_get_orom_civd_data(struct ice_hw *hw, enum ice_bank_select bank, * Read Option ROM version and security revision from the Option ROM flash * section. */ -static enum ice_status +static int ice_get_orom_ver_info(struct ice_hw *hw, enum ice_bank_select bank, struct ice_orom_info *orom) { struct ice_orom_civd_info civd; - enum ice_status status; u32 combo_ver; + int status; status = ice_get_orom_civd_data(hw, bank, &civd); if (status) { @@ -822,7 +824,7 @@ ice_get_orom_ver_info(struct ice_hw *hw, enum ice_bank_select bank, struct ice_o return status; } - return ICE_SUCCESS; + return 0; } /** @@ -834,23 +836,23 @@ ice_get_orom_ver_info(struct ice_hw *hw, enum ice_bank_select bank, struct ice_o * section of flash. Used to access version data for a pending update that has * not yet been activated. */ -enum ice_status ice_get_inactive_orom_ver(struct ice_hw *hw, struct ice_orom_info *orom) +int ice_get_inactive_orom_ver(struct ice_hw *hw, struct ice_orom_info *orom) { return ice_get_orom_ver_info(hw, ICE_INACTIVE_FLASH_BANK, orom); } /** - * ice_discover_flash_size - Discover the available flash size. 
+ * ice_discover_flash_size - Discover the available flash size * @hw: pointer to the HW struct * * The device flash could be up to 16MB in size. However, it is possible that * the actual size is smaller. Use bisection to determine the accessible size * of flash memory. */ -static enum ice_status ice_discover_flash_size(struct ice_hw *hw) +static int ice_discover_flash_size(struct ice_hw *hw) { u32 min_size = 0, max_size = ICE_AQC_NVM_MAX_OFFSET + 1; - enum ice_status status; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -868,7 +870,7 @@ static enum ice_status ice_discover_flash_size(struct ice_hw *hw) hw->adminq.sq_last_status == ICE_AQ_RC_EINVAL) { ice_debug(hw, ICE_DBG_NVM, "%s: New upper bound of %u bytes\n", __func__, offset); - status = ICE_SUCCESS; + status = 0; max_size = offset; } else if (!status) { ice_debug(hw, ICE_DBG_NVM, "%s: New lower bound of %u bytes\n", @@ -904,10 +906,9 @@ static enum ice_status ice_discover_flash_size(struct ice_hw *hw) * sector size by using the highest bit. The reported pointer value will be in * bytes, intended for flat NVM reads. */ -static enum ice_status -ice_read_sr_pointer(struct ice_hw *hw, u16 offset, u32 *pointer) +static int ice_read_sr_pointer(struct ice_hw *hw, u16 offset, u32 *pointer) { - enum ice_status status; + int status; u16 value; status = ice_read_sr_word(hw, offset, &value); @@ -920,7 +921,7 @@ ice_read_sr_pointer(struct ice_hw *hw, u16 offset, u32 *pointer) else *pointer = value * 2; - return ICE_SUCCESS; + return 0; } /** @@ -936,10 +937,9 @@ ice_read_sr_pointer(struct ice_hw *hw, u16 offset, u32 *pointer) * Each area size word is specified in 4KB sector units. This function reports * the size in bytes, intended for flat NVM reads. 
*/ -static enum ice_status -ice_read_sr_area_size(struct ice_hw *hw, u16 offset, u32 *size) +static int ice_read_sr_area_size(struct ice_hw *hw, u16 offset, u32 *size) { - enum ice_status status; + int status; u16 value; status = ice_read_sr_word(hw, offset, &value); @@ -949,7 +949,7 @@ ice_read_sr_area_size(struct ice_hw *hw, u16 offset, u32 *size) /* Area sizes are always specified in 4KB units */ *size = value * 4 * 1024; - return ICE_SUCCESS; + return 0; } /** @@ -962,12 +962,11 @@ ice_read_sr_area_size(struct ice_hw *hw, u16 offset, u32 *size) * structure for later use in order to calculate the correct offset to read * from the active module. */ -static enum ice_status -ice_determine_active_flash_banks(struct ice_hw *hw) +static int ice_determine_active_flash_banks(struct ice_hw *hw) { struct ice_bank_info *banks = &hw->flash.banks; - enum ice_status status; u16 ctrl_word; + int status; status = ice_read_sr_word(hw, ICE_SR_NVM_CTRL_WORD, &ctrl_word); if (status) { @@ -1032,7 +1031,7 @@ ice_determine_active_flash_banks(struct ice_hw *hw) return status; } - return ICE_SUCCESS; + return 0; } /** @@ -1042,12 +1041,12 @@ ice_determine_active_flash_banks(struct ice_hw *hw) * This function reads and populates NVM settings such as Shadow RAM size, * max_timeout, and blank_nvm_mode */ -enum ice_status ice_init_nvm(struct ice_hw *hw) +int ice_init_nvm(struct ice_hw *hw) { struct ice_flash_info *flash = &hw->flash; - enum ice_status status; u32 fla, gens_stat; u8 sr_size; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -1093,7 +1092,7 @@ enum ice_status ice_init_nvm(struct ice_hw *hw) if (status) ice_debug(hw, ICE_DBG_INIT, "Failed to read Option ROM info.\n"); - return ICE_SUCCESS; + return 0; } /** @@ -1107,10 +1106,10 @@ enum ice_status ice_init_nvm(struct ice_hw *hw) * method. The buf read is preceded by the NVM ownership take * and followed by the release. 
*/ -enum ice_status +int ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data) { - enum ice_status status; + int status; status = ice_acquire_nvm(hw, ICE_RES_READ); if (!status) { @@ -1127,11 +1126,11 @@ ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data) * * Verify NVM PFA checksum validity (0x0706) */ -enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw) +int ice_nvm_validate_checksum(struct ice_hw *hw) { struct ice_aqc_nvm_checksum *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; status = ice_acquire_nvm(hw, ICE_RES_READ); if (status) @@ -1158,11 +1157,11 @@ enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw) * * Recalculate NVM PFA checksum (0x0706) */ -enum ice_status ice_nvm_recalculate_checksum(struct ice_hw *hw) +int ice_nvm_recalculate_checksum(struct ice_hw *hw) { struct ice_aqc_nvm_checksum *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; status = ice_acquire_nvm(hw, ICE_RES_READ); if (status) @@ -1188,7 +1187,7 @@ enum ice_status ice_nvm_recalculate_checksum(struct ice_hw *hw) * Fill in the data section of the NVM access request with a copy of the NVM * features structure. */ -enum ice_status +int ice_nvm_access_get_features(struct ice_nvm_access_cmd *cmd, union ice_nvm_access_data *data) { @@ -1209,7 +1208,7 @@ ice_nvm_access_get_features(struct ice_nvm_access_cmd *cmd, data->drv_features.size = sizeof(struct ice_nvm_features); data->drv_features.features[0] = ICE_NVM_FEATURES_0_REG_ACCESS; - return ICE_SUCCESS; + return 0; } /** @@ -1254,7 +1253,7 @@ u32 ice_nvm_access_get_adapter(struct ice_nvm_access_cmd *cmd) * register offset. First validates that the module and flags are correct, and * then ensures that the register offset is one of the accepted registers. 
*/ -static enum ice_status +static int ice_validate_nvm_rw_reg(struct ice_nvm_access_cmd *cmd) { u32 module, flags, offset; @@ -1282,18 +1281,18 @@ ice_validate_nvm_rw_reg(struct ice_nvm_access_cmd *cmd) case GLNVM_GENS: case GLNVM_FLA: case PF_FUNC_RID: - return ICE_SUCCESS; + return 0; default: break; } for (i = 0; i <= GL_HIDA_MAX_INDEX; i++) if (offset == (u32)GL_HIDA(i)) - return ICE_SUCCESS; + return 0; for (i = 0; i <= GL_HIBA_MAX_INDEX; i++) if (offset == (u32)GL_HIBA(i)) - return ICE_SUCCESS; + return 0; /* All other register offsets are not valid */ return ICE_ERR_OUT_OF_RANGE; @@ -1307,11 +1306,11 @@ ice_validate_nvm_rw_reg(struct ice_nvm_access_cmd *cmd) * * Process an NVM access request to read a register. */ -enum ice_status +int ice_nvm_access_read(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, union ice_nvm_access_data *data) { - enum ice_status status; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -1329,7 +1328,7 @@ ice_nvm_access_read(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, /* Read the register and store the contents in the data field */ data->regval = rd32(hw, cmd->offset); - return ICE_SUCCESS; + return 0; } /** @@ -1340,11 +1339,11 @@ ice_nvm_access_read(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, * * Process an NVM access request to write a register. 
*/ -enum ice_status +int ice_nvm_access_write(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, union ice_nvm_access_data *data) { - enum ice_status status; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -1354,21 +1353,24 @@ ice_nvm_access_write(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, return status; /* Reject requests to write to read-only registers */ - switch (cmd->offset) { - case GL_HICR_EN: - case GLGEN_RSTAT: - return ICE_ERR_OUT_OF_RANGE; - default: - break; + if (hw->mac_type == ICE_MAC_E830) { + if (cmd->offset == E830_GL_HICR_EN) + return ICE_ERR_OUT_OF_RANGE; + } else { + if (cmd->offset == GL_HICR_EN) + return ICE_ERR_OUT_OF_RANGE; } + if (cmd->offset == GLGEN_RSTAT) + return ICE_ERR_OUT_OF_RANGE; + ice_debug(hw, ICE_DBG_NVM, "NVM access: writing register %08x with value %08x\n", cmd->offset, data->regval); /* Write the data field to the specified register */ wr32(hw, cmd->offset, data->regval); - return ICE_SUCCESS; + return 0; } /** @@ -1384,7 +1386,7 @@ ice_nvm_access_write(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, * For valid commands, perform the necessary function, copying the data into * the provided data buffer. */ -enum ice_status +int ice_handle_nvm_access(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, union ice_nvm_access_data *data) { @@ -1422,3 +1424,60 @@ ice_handle_nvm_access(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, return ICE_ERR_PARAM; } } + +/** + * ice_nvm_sanitize_operate - Clear the user data + * @hw: pointer to the HW struct + * + * Clear user data from NVM using AQ command (0x070C). + * + * Return: the exit code of the operation. 
+ */
+s32 ice_nvm_sanitize_operate(struct ice_hw *hw)
+{
+	s32 status;
+	u8 values;
+
+	u8 cmd_flags = ICE_AQ_NVM_SANITIZE_REQ_OPERATE |
+		       ICE_AQ_NVM_SANITIZE_OPERATE_SUBJECT_CLEAR;
+
+	status = ice_nvm_sanitize(hw, cmd_flags, &values);
+	if (status)
+		return status;
+	if ((!(values & ICE_AQ_NVM_SANITIZE_OPERATE_HOST_CLEAN_DONE) &&
+	     !(values & ICE_AQ_NVM_SANITIZE_OPERATE_BMC_CLEAN_DONE)) ||
+	    ((values & ICE_AQ_NVM_SANITIZE_OPERATE_HOST_CLEAN_DONE) &&
+	     !(values & ICE_AQ_NVM_SANITIZE_OPERATE_HOST_CLEAN_SUCCESS)) ||
+	    ((values & ICE_AQ_NVM_SANITIZE_OPERATE_BMC_CLEAN_DONE) &&
+	     !(values & ICE_AQ_NVM_SANITIZE_OPERATE_BMC_CLEAN_SUCCESS)))
+		return ICE_ERR_AQ_ERROR;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_nvm_sanitize - Sanitize NVM
+ * @hw: pointer to the HW struct
+ * @cmd_flags: flag to the ACI command
+ * @values: values returned from the command
+ *
+ * Sanitize NVM using AQ command (0x070C).
+ *
+ * Return: the exit code of the operation.
+ */
+s32 ice_nvm_sanitize(struct ice_hw *hw, u8 cmd_flags, u8 *values)
+{
+	struct ice_aqc_nvm_sanitization *cmd;
+	struct ice_aq_desc desc;
+	s32 status;
+
+	cmd = &desc.params.sanitization;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_sanitization);
+	cmd->cmd_flags = cmd_flags;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+	if (values)
+		*values = cmd->values;
+
+	return status;
+}
diff --git a/drivers/net/ice/base/ice_nvm.h b/drivers/net/ice/base/ice_nvm.h
index c3e61a301f..f47b4b6d5a 100644
--- a/drivers/net/ice/base/ice_nvm.h
+++ b/drivers/net/ice/base/ice_nvm.h
@@ -34,7 +34,6 @@ struct ice_orom_civd_info {
 	u8 combo_name_len;	/* Length of the unicode combo image version string, max of 32 */
 	__le16 combo_name[32];	/* Unicode string representing the Combo Image version */
 };
-
 #pragma pack()
 
 #define ICE_NVM_ACCESS_MAJOR_VER	0
@@ -70,41 +69,44 @@ union ice_nvm_access_data {
 u32 ice_nvm_access_get_module(struct ice_nvm_access_cmd *cmd);
 u32 ice_nvm_access_get_flags(struct ice_nvm_access_cmd *cmd);
 u32 ice_nvm_access_get_adapter(struct ice_nvm_access_cmd *cmd);
-enum ice_status
+int
 ice_nvm_access_read(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd,
 		    union ice_nvm_access_data *data);
-enum ice_status
+int
 ice_nvm_access_write(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd,
 		     union ice_nvm_access_data *data);
-enum ice_status
+int
 ice_nvm_access_get_features(struct ice_nvm_access_cmd *cmd,
 			    union ice_nvm_access_data *data);
-enum ice_status
+int
 ice_handle_nvm_access(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd,
 		      union ice_nvm_access_data *data);
-enum ice_status
+
+int
 ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access);
 void ice_release_nvm(struct ice_hw *hw);
-enum ice_status
+int
 ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
 		void *data, bool last_command, bool read_shadow_ram,
 		struct ice_sq_cd *cd);
-enum ice_status
+int
 ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data,
 		  bool read_shadow_ram);
-enum ice_status
+int
 ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
 		       u16 module_type);
-enum ice_status
+int
 ice_get_inactive_orom_ver(struct ice_hw *hw, struct ice_orom_info *orom);
-enum ice_status
+int
 ice_get_inactive_nvm_ver(struct ice_hw *hw, struct ice_nvm_info *nvm);
-enum ice_status
+int
 ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size);
-enum ice_status ice_init_nvm(struct ice_hw *hw);
-enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
-enum ice_status
+int ice_init_nvm(struct ice_hw *hw);
+int ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
+int
 ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
-enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
-enum ice_status ice_nvm_recalculate_checksum(struct ice_hw *hw);
+int ice_nvm_validate_checksum(struct ice_hw *hw);
+int ice_nvm_recalculate_checksum(struct ice_hw *hw);
+s32 ice_nvm_sanitize_operate(struct ice_hw *hw);
+s32 ice_nvm_sanitize(struct ice_hw *hw, u8 cmd_flags, u8 *values); #endif /* _ICE_NVM_H_ */ diff --git a/drivers/net/ice/base/ice_parser.c b/drivers/net/ice/base/ice_parser.c index 79c97f7903..1f155208c4 100644 --- a/drivers/net/ice/base/ice_parser.c +++ b/drivers/net/ice/base/ice_parser.c @@ -96,7 +96,7 @@ void *ice_parser_sect_item_get(u32 sect_type, void *section, if (index >= LE16_TO_CPU(hdr->count)) return NULL; - return (void *)((uintptr_t)section + data_off + index * size); + return (void *)((u64)section + data_off + index * size); } /** @@ -146,9 +146,10 @@ void *ice_parser_create_table(struct ice_hw *hw, u32 sect_type, if (no_offset) idx++; else - idx = hdr->offset + state.entry_idx; + idx = LE16_TO_CPU(hdr->offset) + + state.entry_idx; parse_item(hw, idx, - (void *)((uintptr_t)table + idx * item_size), + (void *)((u64)table + idx * item_size), data, item_size); } } while (data); @@ -161,10 +162,10 @@ void *ice_parser_create_table(struct ice_hw *hw, u32 sect_type, * @hw: pointer to the hardware structure * @psr: output parameter for a new parser instance be created */ -enum ice_status ice_parser_create(struct ice_hw *hw, struct ice_parser **psr) +int ice_parser_create(struct ice_hw *hw, struct ice_parser **psr) { - enum ice_status status; struct ice_parser *p; + int status; p = (struct ice_parser *)ice_malloc(hw, sizeof(struct ice_parser)); if (!p) @@ -270,7 +271,7 @@ enum ice_status ice_parser_create(struct ice_hw *hw, struct ice_parser **psr) } *psr = p; - return ICE_SUCCESS; + return 0; err: ice_parser_destroy(p); return status; @@ -309,8 +310,8 @@ void ice_parser_destroy(struct ice_parser *psr) * @pkt_len: packet length * @rslt: input/output parameter to save parser result. 
*/ -enum ice_status ice_parser_run(struct ice_parser *psr, const u8 *pkt_buf, - int pkt_len, struct ice_parser_result *rslt) +int ice_parser_run(struct ice_parser *psr, const u8 *pkt_buf, + int pkt_len, struct ice_parser_result *rslt) { ice_parser_rt_reset(&psr->rt); ice_parser_rt_pktbuf_set(&psr->rt, pkt_buf, pkt_len); @@ -332,8 +333,10 @@ void ice_parser_result_dump(struct ice_hw *hw, struct ice_parser_result *rslt) ice_info(hw, "proto = %d, offset = %d\n", rslt->po[i].proto_id, rslt->po[i].offset); - ice_info(hw, "flags_psr = 0x%016" PRIx64 "\n", rslt->flags_psr); - ice_info(hw, "flags_pkt = 0x%016" PRIx64 "\n", rslt->flags_pkt); + ice_info(hw, "flags_psr = 0x%016llx\n", + (unsigned long long)rslt->flags_psr); + ice_info(hw, "flags_pkt = 0x%016llx\n", + (unsigned long long)rslt->flags_pkt); ice_info(hw, "flags_sw = 0x%04x\n", rslt->flags_sw); ice_info(hw, "flags_fd = 0x%04x\n", rslt->flags_fd); ice_info(hw, "flags_rss = 0x%04x\n", rslt->flags_rss); @@ -341,10 +344,10 @@ void ice_parser_result_dump(struct ice_hw *hw, struct ice_parser_result *rslt) static void _bst_vm_set(struct ice_parser *psr, const char *prefix, bool on) { - struct ice_bst_tcam_item *item; u16 i = 0; while (true) { + struct ice_bst_tcam_item *item; item = ice_bst_tcam_search(psr->bst_tcam_table, psr->bst_lbl_table, prefix, &i); @@ -367,15 +370,15 @@ void ice_parser_dvm_set(struct ice_parser *psr, bool on) _bst_vm_set(psr, "BOOST_MAC_VLAN_SVM", !on); } -static enum ice_status +static int _tunnel_port_set(struct ice_parser *psr, const char *prefix, u16 udp_port, bool on) { u8 *buf = (u8 *)&udp_port; - struct ice_bst_tcam_item *item; u16 i = 0; while (true) { + struct ice_bst_tcam_item *item; item = ice_bst_tcam_search(psr->bst_tcam_table, psr->bst_lbl_table, prefix, &i); @@ -389,7 +392,7 @@ _tunnel_port_set(struct ice_parser *psr, const char *prefix, u16 udp_port, item->key[15] = (u8)(0xff - buf[0]); item->key[16] = (u8)(0xff - buf[1]); - return ICE_SUCCESS; + return 0; /* found a matched slot 
to delete */ } else if (!on && (item->key_inv[15] == buf[0] || item->key_inv[16] == buf[1])) { @@ -398,7 +401,7 @@ _tunnel_port_set(struct ice_parser *psr, const char *prefix, u16 udp_port, item->key[15] = 0xff; item->key[16] = 0xfe; - return ICE_SUCCESS; + return 0; } i++; } @@ -412,8 +415,8 @@ _tunnel_port_set(struct ice_parser *psr, const char *prefix, u16 udp_port, * @udp_port: vxlan tunnel port in UDP header * @on: true to turn on; false to turn off */ -enum ice_status ice_parser_vxlan_tunnel_set(struct ice_parser *psr, - u16 udp_port, bool on) +int ice_parser_vxlan_tunnel_set(struct ice_parser *psr, + u16 udp_port, bool on) { return _tunnel_port_set(psr, "TNL_VXLAN", udp_port, on); } @@ -424,8 +427,8 @@ enum ice_status ice_parser_vxlan_tunnel_set(struct ice_parser *psr, * @udp_port: geneve tunnel port in UDP header * @on: true to turn on; false to turn off */ -enum ice_status ice_parser_geneve_tunnel_set(struct ice_parser *psr, - u16 udp_port, bool on) +int ice_parser_geneve_tunnel_set(struct ice_parser *psr, + u16 udp_port, bool on) { return _tunnel_port_set(psr, "TNL_GENEVE", udp_port, on); } @@ -436,8 +439,8 @@ enum ice_status ice_parser_geneve_tunnel_set(struct ice_parser *psr, * @udp_port: ecpri tunnel port in UDP header * @on: true to turn on; false to turn off */ -enum ice_status ice_parser_ecpri_tunnel_set(struct ice_parser *psr, - u16 udp_port, bool on) +int ice_parser_ecpri_tunnel_set(struct ice_parser *psr, + u16 udp_port, bool on) { return _tunnel_port_set(psr, "TNL_UDP_ECPRI", udp_port, on); } @@ -485,11 +488,11 @@ static bool _nearest_proto_id(struct ice_parser_result *rslt, u16 offset, * @prefix_match: match protocol stack exactly or only prefix * @prof: input/output parameter to save the profile */ -enum ice_status ice_parser_profile_init(struct ice_parser_result *rslt, - const u8 *pkt_buf, const u8 *msk_buf, - int buf_len, enum ice_block blk, - bool prefix_match, - struct ice_parser_profile *prof) +int ice_parser_profile_init(struct 
ice_parser_result *rslt, + const u8 *pkt_buf, const u8 *msk_buf, + int buf_len, enum ice_block blk, + bool prefix_match, + struct ice_parser_profile *prof) { u8 proto_id = 0xff; u16 proto_off = 0; @@ -528,7 +531,7 @@ enum ice_status ice_parser_profile_init(struct ice_parser_result *rslt, prof->fv_num++; } - return ICE_SUCCESS; + return 0; } /** diff --git a/drivers/net/ice/base/ice_parser.h b/drivers/net/ice/base/ice_parser.h index 0f64584898..28c96af6da 100644 --- a/drivers/net/ice/base/ice_parser.h +++ b/drivers/net/ice/base/ice_parser.h @@ -55,15 +55,15 @@ struct ice_parser { struct ice_parser_rt rt; /* parser runtime */ }; -enum ice_status ice_parser_create(struct ice_hw *hw, struct ice_parser **psr); +int ice_parser_create(struct ice_hw *hw, struct ice_parser **psr); void ice_parser_destroy(struct ice_parser *psr); void ice_parser_dvm_set(struct ice_parser *psr, bool on); -enum ice_status ice_parser_vxlan_tunnel_set(struct ice_parser *psr, - u16 udp_port, bool on); -enum ice_status ice_parser_geneve_tunnel_set(struct ice_parser *psr, - u16 udp_port, bool on); -enum ice_status ice_parser_ecpri_tunnel_set(struct ice_parser *psr, - u16 udp_port, bool on); +int ice_parser_vxlan_tunnel_set(struct ice_parser *psr, + u16 udp_port, bool on); +int ice_parser_geneve_tunnel_set(struct ice_parser *psr, + u16 udp_port, bool on); +int ice_parser_ecpri_tunnel_set(struct ice_parser *psr, + u16 udp_port, bool on); struct ice_parser_proto_off { u8 proto_id; /* hardware protocol ID */ @@ -83,8 +83,8 @@ struct ice_parser_result { u16 flags_rss; /* 16 bits key builder flag for RSS */ }; -enum ice_status ice_parser_run(struct ice_parser *psr, const u8 *pkt_buf, - int pkt_len, struct ice_parser_result *rslt); +int ice_parser_run(struct ice_parser *psr, const u8 *pkt_buf, + int pkt_len, struct ice_parser_result *rslt); void ice_parser_result_dump(struct ice_hw *hw, struct ice_parser_result *rslt); struct ice_parser_fv { @@ -95,7 +95,7 @@ struct ice_parser_fv { }; struct 
ice_parser_profile { - struct ice_parser_fv fv[48]; /* field vector array */ + struct ice_parser_fv fv[48]; /* field vector array */ int fv_num; /* field vector number must <= 48 */ u16 flags; /* 16 bits key builder flag */ u16 flags_msk; /* key builder flag masker */ @@ -103,11 +103,11 @@ struct ice_parser_profile { ice_declare_bitmap(ptypes, ICE_FLOW_PTYPE_MAX); }; -enum ice_status ice_parser_profile_init(struct ice_parser_result *rslt, - const u8 *pkt_buf, const u8 *msk_buf, - int buf_len, enum ice_block blk, - bool prefix_match, - struct ice_parser_profile *prof); +int ice_parser_profile_init(struct ice_parser_result *rslt, + const u8 *pkt_buf, const u8 *msk_buf, + int buf_len, enum ice_block blk, + bool prefix_match, + struct ice_parser_profile *prof); void ice_parser_profile_dump(struct ice_hw *hw, struct ice_parser_profile *prof); bool ice_check_ddp_support_proto_id(struct ice_hw *hw, diff --git a/drivers/net/ice/base/ice_parser_rt.c b/drivers/net/ice/base/ice_parser_rt.c index 68c0f5d7fb..37e1e92125 100644 --- a/drivers/net/ice/base/ice_parser_rt.c +++ b/drivers/net/ice/base/ice_parser_rt.c @@ -110,29 +110,27 @@ void ice_parser_rt_pktbuf_set(struct ice_parser_rt *rt, const u8 *pkt_buf, ice_memcpy(rt->pkt_buf, pkt_buf, len, ICE_NONDMA_TO_NONDMA); rt->pkt_len = pkt_len; - ice_memcpy(&rt->gpr[GPR_HB_IDX], &rt->pkt_buf[ho], - ICE_PARSER_HDR_BUF_LEN, ICE_NONDMA_TO_NONDMA); + ice_memcpy(&rt->gpr[GPR_HB_IDX], &rt->pkt_buf[ho], 32, + ICE_NONDMA_TO_NONDMA); } static void _bst_key_init(struct ice_parser_rt *rt, struct ice_imem_item *imem) { - int second_last_key_idx = ICE_PARSER_BST_KEY_LEN - 2; - int last_key_idx = ICE_PARSER_BST_KEY_LEN - 1; u8 tsr = (u8)rt->gpr[GPR_TSR_IDX]; u16 ho = rt->gpr[GPR_HO_IDX]; u8 *key = rt->bst_key; - - int i, j; + int i; if (imem->b_kb.tsr_ctrl) - key[last_key_idx] = (u8)tsr; + key[19] = (u8)tsr; else - key[last_key_idx] = imem->b_kb.priority; + key[19] = imem->b_kb.priority; - for (i = second_last_key_idx; i >= 0; i--) { - j = ho + 
second_last_key_idx - i; + for (i = 18; i >= 0; i--) { + int j; + j = ho + 18 - i; if (j < ICE_PARSER_MAX_PKT_LEN) - key[i] = rt->pkt_buf[ho + second_last_key_idx - i]; + key[i] = rt->pkt_buf[ho + 18 - i]; else key[i] = 0; } @@ -187,23 +185,21 @@ static u32 _bit_rev_u32(u32 v, int len) static u32 _hv_bit_sel(struct ice_parser_rt *rt, int start, int len) { - u64 msk; - union { - u64 d64; - u8 b[8]; - } bit_sel; + u64 d64, msk; + u8 b[8]; int i; int offset = GPR_HB_IDX + start / 16; - ice_memcpy(bit_sel.b, &rt->gpr[offset], 8, ICE_NONDMA_TO_NONDMA); + ice_memcpy(b, &rt->gpr[offset], 8, ICE_NONDMA_TO_NONDMA); for (i = 0; i < 8; i++) - bit_sel.b[i] = _bit_rev_u8(bit_sel.b[i]); + b[i] = _bit_rev_u8(b[i]); + d64 = *(u64 *)b; msk = (1ul << len) - 1; - return _bit_rev_u32((u32)((bit_sel.d64 >> (start % 16)) & msk), len); + return _bit_rev_u32((u32)((d64 >> (start % 16)) & msk), len); } static u32 _pk_build(struct ice_parser_rt *rt, struct ice_np_keybuilder *kb) @@ -346,7 +342,7 @@ static void _bst_pgp_set(struct ice_parser_rt *rt, rt->pg, bst->address); } -static struct ice_pg_cam_item *_pg_cam_match(struct ice_parser_rt *rt) +static struct ice_pg_cam_item *__pg_cam_match(struct ice_parser_rt *rt) { struct ice_parser *psr = rt->psr; struct ice_pg_cam_item *item; @@ -361,7 +357,7 @@ static struct ice_pg_cam_item *_pg_cam_match(struct ice_parser_rt *rt) return item; } -static struct ice_pg_nm_cam_item *_pg_nm_cam_match(struct ice_parser_rt *rt) +static struct ice_pg_nm_cam_item *__pg_nm_cam_match(struct ice_parser_rt *rt) { struct ice_parser *psr = rt->psr; struct ice_pg_nm_cam_item *item; @@ -411,9 +407,8 @@ static void _flg_add(struct ice_parser_rt *rt, int idx, bool val) static void _flg_update(struct ice_parser_rt *rt, struct ice_alu *alu) { - int i; - if (alu->dedicate_flags_ena) { + int i; if (alu->flags_extr_imm) { for (i = 0; i < alu->dst_len; i++) _flg_add(rt, alu->dst_start + i, @@ -446,32 +441,30 @@ static void _po_update(struct ice_parser_rt *rt, struct ice_alu 
*alu) static u16 _reg_bit_sel(struct ice_parser_rt *rt, int reg_idx, int start, int len) { - u32 msk; - union { - u32 d32; - u8 b[4]; - } bit_sel; + u32 d32, msk; + u8 b[4]; + u8 v[4]; - ice_memcpy(bit_sel.b, &rt->gpr[reg_idx + start / 16], 4, - ICE_NONDMA_TO_NONDMA); + ice_memcpy(b, &rt->gpr[reg_idx + start / 16], 4, ICE_NONDMA_TO_NONDMA); - bit_sel.b[0] = _bit_rev_u8(bit_sel.b[0]); - bit_sel.b[1] = _bit_rev_u8(bit_sel.b[1]); - bit_sel.b[2] = _bit_rev_u8(bit_sel.b[2]); - bit_sel.b[3] = _bit_rev_u8(bit_sel.b[3]); + v[0] = _bit_rev_u8(b[0]); + v[1] = _bit_rev_u8(b[1]); + v[2] = _bit_rev_u8(b[2]); + v[3] = _bit_rev_u8(b[3]); + d32 = *(u32 *)v; msk = (1u << len) - 1; - return _bit_rev_u16((u16)((bit_sel.d32 >> (start % 16)) & msk), len); + return _bit_rev_u16((u16)((d32 >> (start % 16)) & msk), len); } static void _err_add(struct ice_parser_rt *rt, int idx, bool val) { rt->pu.err_msk |= (u16)(1 << idx); if (val) - rt->pu.flg_val |= (u16)(1 << idx); + rt->pu.flg_val |= (1ULL << idx); else - rt->pu.flg_val &= ~(u16)(1 << idx); + rt->pu.flg_val &= ~(1ULL << idx); ice_debug(rt->psr->hw, ICE_DBG_PARSER, "Pending update for error %d value %d\n", idx, val); @@ -595,7 +588,7 @@ static void _pu_exe(struct ice_parser_rt *rt) ice_debug(rt->psr->hw, ICE_DBG_PARSER, "Updating Registers ...\n"); - for (i = 0; i < ICE_PARSER_GPR_NUM; i++) { + for (i = 0; i < 128; i++) { if (pu->gpr_val_upd[i]) _rt_gpr_set(rt, i, pu->gpr_val[i]); } @@ -653,12 +646,12 @@ static void _alu_pg_exe(struct ice_parser_rt *rt) static void _proto_off_update(struct ice_parser_rt *rt) { struct ice_parser *psr = rt->psr; - int i; if (rt->action->is_pg) { struct ice_proto_grp_item *proto_grp = &psr->proto_grp_table[rt->action->proto_id]; u16 po; + int i; for (i = 0; i < 8; i++) { struct ice_proto_off *entry = &proto_grp->po[i]; @@ -696,11 +689,11 @@ static void _marker_set(struct ice_parser_rt *rt, int idx) static void _marker_update(struct ice_parser_rt *rt) { struct ice_parser *psr = rt->psr; - int i; if 
(rt->action->is_mg) { struct ice_mk_grp_item *mk_grp = &psr->mk_grp_table[rt->action->marker_id]; + int i; for (i = 0; i < 8; i++) { u8 marker = mk_grp->markers[i]; @@ -770,15 +763,13 @@ static void _result_resolve(struct ice_parser_rt *rt, * @rt: pointer to the parser runtime * @rslt: input/output parameter to save parser result */ -enum ice_status ice_parser_rt_execute(struct ice_parser_rt *rt, - struct ice_parser_result *rslt) +int ice_parser_rt_execute(struct ice_parser_rt *rt, + struct ice_parser_result *rslt) { - enum ice_status status = ICE_SUCCESS; struct ice_pg_nm_cam_item *pg_nm_cam; struct ice_parser *psr = rt->psr; struct ice_pg_cam_item *pg_cam; - struct ice_bst_tcam_item *bst; - struct ice_imem_item *imem; + int status = 0; u16 node; u16 pc; @@ -786,6 +777,9 @@ enum ice_status ice_parser_rt_execute(struct ice_parser_rt *rt, ice_debug(rt->psr->hw, ICE_DBG_PARSER, "Start with Node: %d\n", node); while (true) { + struct ice_bst_tcam_item *bst; + struct ice_imem_item *imem; + pc = rt->gpr[GPR_NP_IDX]; imem = &psr->imem_table[pc]; ice_debug(rt->psr->hw, ICE_DBG_PARSER, "Load imem at pc: %d\n", @@ -829,9 +823,9 @@ enum ice_status ice_parser_rt_execute(struct ice_parser_rt *rt, } rt->action = NULL; - pg_cam = _pg_cam_match(rt); + pg_cam = __pg_cam_match(rt); if (!pg_cam) { - pg_nm_cam = _pg_nm_cam_match(rt); + pg_nm_cam = __pg_nm_cam_match(rt); if (pg_nm_cam) { ice_debug(rt->psr->hw, ICE_DBG_PARSER, "Match ParseGraph Nomatch CAM Address %d\n", pg_nm_cam->idx); diff --git a/drivers/net/ice/base/ice_parser_rt.h b/drivers/net/ice/base/ice_parser_rt.h index de851643b4..d69e82e62c 100644 --- a/drivers/net/ice/base/ice_parser_rt.h +++ b/drivers/net/ice/base/ice_parser_rt.h @@ -9,15 +9,10 @@ struct ice_parser_ctx; #define ICE_PARSER_MAX_PKT_LEN 504 #define ICE_PARSER_GPR_NUM 128 -#define ICE_PARSER_HDR_BUF_LEN 32 -#define ICE_PARSER_BST_KEY_LEN 20 -#define ICE_PARSER_MARKER_NUM_IN_BYTES 9 /* 72 bits */ -#define ICE_PARSER_PROTO_NUM 256 struct ice_gpr_pu { - /* flag 
to indicate if GRP needs to be updated */ - bool gpr_val_upd[ICE_PARSER_GPR_NUM]; - u16 gpr_val[ICE_PARSER_GPR_NUM]; + bool gpr_val_upd[128]; /* flag to indicate if GRP needs to be updated */ + u16 gpr_val[128]; u64 flg_msk; u64 flg_val; u16 err_msk; @@ -27,10 +22,10 @@ struct ice_gpr_pu { struct ice_parser_rt { struct ice_parser *psr; u16 gpr[ICE_PARSER_GPR_NUM]; - u8 pkt_buf[ICE_PARSER_MAX_PKT_LEN + ICE_PARSER_HDR_BUF_LEN]; + u8 pkt_buf[ICE_PARSER_MAX_PKT_LEN + 32]; u16 pkt_len; u16 po; - u8 bst_key[ICE_PARSER_BST_KEY_LEN]; + u8 bst_key[20]; struct ice_pg_cam_key pg_key; struct ice_alu *alu0; struct ice_alu *alu1; @@ -38,9 +33,9 @@ struct ice_parser_rt { struct ice_pg_cam_action *action; u8 pg; struct ice_gpr_pu pu; - u8 markers[ICE_PARSER_MARKER_NUM_IN_BYTES]; - bool protocols[ICE_PARSER_PROTO_NUM]; - u16 offsets[ICE_PARSER_PROTO_NUM]; + u8 markers[9]; /* 8 * 9 = 72 bits*/ + bool protocols[256]; + u16 offsets[256]; }; void ice_parser_rt_reset(struct ice_parser_rt *rt); @@ -48,6 +43,6 @@ void ice_parser_rt_pktbuf_set(struct ice_parser_rt *rt, const u8 *pkt_buf, int pkt_len); struct ice_parser_result; -enum ice_status ice_parser_rt_execute(struct ice_parser_rt *rt, - struct ice_parser_result *rslt); +int ice_parser_rt_execute(struct ice_parser_rt *rt, + struct ice_parser_result *rslt); #endif /* _ICE_PARSER_RT_H_ */ diff --git a/drivers/net/ice/base/ice_phy_regs.h b/drivers/net/ice/base/ice_phy_regs.h new file mode 100644 index 0000000000..9b713300fe --- /dev/null +++ b/drivers/net/ice/base/ice_phy_regs.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2001-2023 Intel Corporation + */ + +#ifndef _ICE_PHY_REGS_H_ +#define _ICE_PHY_REGS_H_ + +#define CLKRX_CMN_CLK(i) (0x7E8000 + (i) * 0x5000) +#define CLKRX_CMN_CLK_NUM 5 + +#define CLKRX_CMN_REG_10(i) (CLKRX_CMN_CLK(i) + 0x28) +union clkrx_cmn_reg_10 { + struct { + u32 cmnntl_refck_pdmtchval : 19; + u32 cmnntl_refckm_charge_up_locovr : 1; + u32 cmnntl_refckm_pull_dn_locovr : 1; + u32 
cmnntl_refckm_sense_locovr : 1; + u32 cmnntl_refckp_charge_up_locovr : 1; + u32 cmnntl_refckp_pull_dn_locovr : 1; + u32 cmnntl_refckp_sense_locovr : 1; + u32 cmnpmu_h8_off_delay : 4; + u32 cmnref_locovren : 1; + u32 cmnref_pad2cmos_ana_en_locovr : 1; + u32 cmnref_pad2cmos_dig_en_locovr : 1; + } field; + u32 val; +}; + +#define CLKRX_CMN_REG_12(i) (CLKRX_CMN_CLK(i) + 0x30) +union clkrx_cmn_reg_12 { + struct { + u32 cmnpmu_restore_off_delay : 4; + u32 cmnpmu_rst_off_delay : 4; + u32 cmnref_cdrdivsel_locovr : 5; + u32 cmnref_refsel0_locovr : 4; + u32 cmnref_refsel1_locovr : 4; + u32 cmnref_refsel1_powersave_en_locovr : 1; + u32 cmnref_refsel2_locovr : 4; + u32 cmnref_refsel2_powersave_en_locovr : 1; + u32 cmnref_refsel3_locovr : 4; + u32 cmnref_refsel3_powersave_en_locovr : 1; + } field; + u32 val; +}; + +#define CLKRX_CMN_REG_46(i) (CLKRX_CMN_CLK(i) + 0x220) +union clkrx_cmn_reg_46 { + struct { + u32 cmnntl_refck_lkgcnt : 19; + u32 cmnref_refsel0_loc : 4; + u32 cmnref_refsel1_loc : 4; + u32 cmnref_refsel1_powersave_en_loc : 1; + u32 cmnref_refsel2_loc : 4; + } field; + u32 val; +}; + +#define SERDES_IP_IF_LN_FLXM_GENERAL(n, m) \ + (0x32B800 + (m) * 0x100000 + (n) * 0x8000) +union serdes_ip_if_ln_flxm_general { + struct { + u32 reserved0_1 : 2; + u32 ictl_pcs_mode_nt : 1; + u32 ictl_pcs_rcomp_slave_en_nt : 1; + u32 ictl_pcs_cmn_force_pup_a : 1; + u32 ictl_pcs_rcomp_slave_valid_a : 1; + u32 ictl_pcs_ref_sel_rx_nt : 4; + u32 idat_dfx_obs_dig_ : 2; + u32 irst_apb_mem_b : 1; + u32 ictl_pcs_disconnect_nt : 1; + u32 ictl_pcs_isolate_nt : 1; + u32 reserved15_15 : 1; + u32 irst_pcs_tstbus_b_a : 1; + u32 ictl_pcs_ref_term_hiz_en_nt : 1; + u32 reserved18_19 : 2; + u32 ictl_pcs_synthlcslow_force_pup_a : 1; + u32 ictl_pcs_synthlcfast_force_pup_a : 1; + u32 reserved22_24 : 3; + u32 ictl_pcs_ref_sel_tx_nt : 4; + u32 reserved29_31 : 3; + } field; + u32 val; +}; +#endif /* _ICE_PHY_REGS_H_ */ diff --git a/drivers/net/ice/base/ice_protocol_type.h 
b/drivers/net/ice/base/ice_protocol_type.h index d2d3f75fc2..cb2349086d 100644 --- a/drivers/net/ice/base/ice_protocol_type.h +++ b/drivers/net/ice/base/ice_protocol_type.h @@ -98,6 +98,10 @@ enum ice_sw_tunnel_type { ICE_SW_TUN_IPV6_GTPU_EH_IPV6_UDP, ICE_SW_TUN_IPV6_GTPU_IPV6_TCP, ICE_SW_TUN_IPV6_GTPU_EH_IPV6_TCP, + ICE_SW_TUN_IPV4_GTPU_IPV4, + ICE_SW_TUN_IPV4_GTPU_IPV6, + ICE_SW_TUN_IPV6_GTPU_IPV4, + ICE_SW_TUN_IPV6_GTPU_IPV6, ICE_SW_TUN_PPPOE, ICE_SW_TUN_PPPOE_PAY, ICE_SW_TUN_PPPOE_IPV4, @@ -128,12 +132,6 @@ enum ice_sw_tunnel_type { ICE_SW_TUN_PPPOE_PAY_QINQ, ICE_SW_TUN_PPPOE_IPV4_QINQ, ICE_SW_TUN_PPPOE_IPV6_QINQ, - ICE_SW_TUN_IPV4_GTPU_IPV4, - ICE_SW_TUN_IPV4_GTPU_IPV6, - ICE_SW_TUN_IPV6_GTPU_IPV4, - ICE_SW_TUN_IPV6_GTPU_IPV6, - ICE_SW_TUN_GTP_IPV4, - ICE_SW_TUN_GTP_IPV6, ICE_ALL_TUNNELS /* All tunnel types including NVGRE */ }; @@ -223,8 +221,10 @@ enum ice_prot_id { #define ICE_MDID_SIZE 2 #define ICE_TUN_FLAG_MDID 20 -#define ICE_TUN_FLAG_MDID_OFF(word) (ICE_MDID_SIZE * (ICE_TUN_FLAG_MDID + (word))) +#define ICE_TUN_FLAG_MDID_OFF(word) \ + (ICE_MDID_SIZE * (ICE_TUN_FLAG_MDID + (word))) #define ICE_TUN_FLAG_MASK 0xFF +#define ICE_FROM_NETWORK_FLAG_MASK 0x8 #define ICE_DIR_FLAG_MASK 0x10 #define ICE_TUN_FLAG_IN_VLAN_MASK 0x80 /* VLAN inside tunneled header */ #define ICE_TUN_FLAG_VLAN_MASK 0x01 @@ -328,7 +328,6 @@ struct ice_udp_gtp_hdr { u8 qfi; u8 rsvrd; }; - struct ice_pppoe_hdr { u8 rsrvd_ver_type; u8 rsrvd_code; @@ -431,7 +430,7 @@ struct ice_recp_grp_entry { #define ICE_INVAL_CHAIN_IND 0xFF u16 rid; u8 chain_idx; - u8 fv_idx[ICE_NUM_WORDS_RECIPE]; + u16 fv_idx[ICE_NUM_WORDS_RECIPE]; u16 fv_mask[ICE_NUM_WORDS_RECIPE]; struct ice_pref_recipe_group r_group; }; diff --git a/drivers/net/ice/base/ice_ptp_consts.h b/drivers/net/ice/base/ice_ptp_consts.h index 546bf8ba91..bd0258f437 100644 --- a/drivers/net/ice/base/ice_ptp_consts.h +++ b/drivers/net/ice/base/ice_ptp_consts.h @@ -157,7 +157,8 @@ const struct ice_cgu_pll_params_e822 
e822_cgu_params[NUM_ICE_TIME_REF_FREQ] = { }, }; -/* struct ice_vernier_info_e822 +/* + * struct ice_vernier_info_e822 * * E822 hardware calibrates the delay of the timestamp indication from the * actual packet transmission or reception during the initialization of the diff --git a/drivers/net/ice/base/ice_ptp_hw.c b/drivers/net/ice/base/ice_ptp_hw.c index 548ef5e820..c54eb9d241 100644 --- a/drivers/net/ice/base/ice_ptp_hw.c +++ b/drivers/net/ice/base/ice_ptp_hw.c @@ -7,6 +7,7 @@ #include "ice_ptp_hw.h" #include "ice_ptp_consts.h" #include "ice_cgu_regs.h" +#include "ice_phy_regs.h" /* Low level functions for interacting with and managing the device clock used * for the Precision Time Protocol. @@ -102,51 +103,51 @@ u64 ice_ptp_read_src_incval(struct ice_hw *hw) } /** - * ice_read_cgu_reg_e822 - Read a CGU register + * ice_read_cgu_reg_e82x - Read a CGU register * @hw: pointer to the HW struct * @addr: Register address to read * @val: storage for register value read * * Read the contents of a register of the Clock Generation Unit. Only - * applicable to E822 devices. + * applicable to E822/E823/E825 devices. 
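For reviewers: the CGU register accessors this hunk renames all follow the same fill-message / send / propagate-error shape, now returning a plain int instead of enum ice_status. A minimal sketch of that pattern, using a simulated sideband transport and illustrative struct/field names (not the driver's real ice_sbq_msg_input plumbing):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

typedef uint16_t u16;
typedef uint32_t u32;

/* Hypothetical stand-in for the sideband message; names only mirror the
 * patch, the transport below is a simulation. */
struct sbq_msg {
	int opcode;		/* 0 = read, 1 = write */
	u16 addr_low;
	u16 addr_high;
	u32 data;
};

/* Simulated CGU register file standing in for real hardware. */
static u32 fake_cgu_regs[16];

static int fake_sbq_rw(struct sbq_msg *msg)
{
	u32 addr = ((u32)msg->addr_high << 16) | msg->addr_low;

	if (addr / 4 >= 16)
		return -1;	/* out of range: propagate an error code */
	if (msg->opcode == 0)
		msg->data = fake_cgu_regs[addr / 4];
	else
		fake_cgu_regs[addr / 4] = msg->data;
	return 0;
}

/* Mirrors the ice_read_cgu_reg_e82x() shape after the enum ice_status ->
 * int migration: build the message, send it, copy out the data. */
static int read_cgu_reg(u16 addr, u32 *val)
{
	struct sbq_msg msg = { .opcode = 0, .addr_low = addr, .addr_high = 0 };
	int err = fake_sbq_rw(&msg);

	if (err)
		return err;
	*val = msg.data;
	return 0;
}

static int write_cgu_reg(u16 addr, u32 val)
{
	struct sbq_msg msg = { .opcode = 1, .addr_low = addr,
			       .addr_high = 0, .data = val };

	return fake_sbq_rw(&msg);
}
```

The caller-visible contract is unchanged by the rename: 0 on success, a negative/driver error code otherwise.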
*/ -static enum ice_status -ice_read_cgu_reg_e822(struct ice_hw *hw, u16 addr, u32 *val) +static int +ice_read_cgu_reg_e82x(struct ice_hw *hw, u16 addr, u32 *val) { struct ice_sbq_msg_input cgu_msg; - enum ice_status status; + int err; cgu_msg.opcode = ice_sbq_msg_rd; cgu_msg.dest_dev = cgu; cgu_msg.msg_addr_low = addr; cgu_msg.msg_addr_high = 0x0; - status = ice_sbq_rw_reg_lp(hw, &cgu_msg, true); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read CGU register 0x%04x, status %d\n", - addr, status); - return status; + err = ice_sbq_rw_reg_lp(hw, &cgu_msg, ICE_AQ_FLAG_RD, true); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read CGU register 0x%04x, err %d\n", + addr, err); + return err; } *val = cgu_msg.data; - return ICE_SUCCESS; + return 0; } /** - * ice_write_cgu_reg_e822 - Write a CGU register + * ice_write_cgu_reg_e82x - Write a CGU register * @hw: pointer to the HW struct * @addr: Register address to write * @val: value to write into the register * * Write the specified value to a register of the Clock Generation Unit. Only - * applicable to E822 devices. + * applicable to E822/E823/E825 devices. 
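The new ice_phy_regs.h unions added above overlay a named bitfield view on the raw 32-bit register word. A small illustrative overlay in the same style (field widths invented for the example; layout-independent round-trip shown):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t u32;

/* Illustrative overlay in the style of the clkrx_cmn_reg_* unions:
 * .field gives named access, .val is the raw word that is actually
 * read from / written to the register. */
union toy_reg {
	struct {
		u32 divsel   : 5;
		u32 refsel   : 4;
		u32 pwr_save : 1;
		u32 reserved : 22;
	} field;
	u32 val;
};
```

Note that C bit-field ordering is implementation-defined, which is acceptable here only because the base code targets a known ABI; a portable round-trip (pack fields, carry .val, unpack) still works regardless of layout.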
*/ -static enum ice_status -ice_write_cgu_reg_e822(struct ice_hw *hw, u16 addr, u32 val) +static int +ice_write_cgu_reg_e82x(struct ice_hw *hw, u16 addr, u32 val) { struct ice_sbq_msg_input cgu_msg; - enum ice_status status; + int err; cgu_msg.opcode = ice_sbq_msg_wr; cgu_msg.dest_dev = cgu; @@ -154,14 +155,14 @@ ice_write_cgu_reg_e822(struct ice_hw *hw, u16 addr, u32 val) cgu_msg.msg_addr_high = 0x0; cgu_msg.data = val; - status = ice_sbq_rw_reg_lp(hw, &cgu_msg, true); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write CGU register 0x%04x, status %d\n", - addr, status); - return status; + err = ice_sbq_rw_reg_lp(hw, &cgu_msg, ICE_AQ_FLAG_RD, true); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write CGU register 0x%04x, err %d\n", + addr, err); + return err; } - return ICE_SUCCESS; + return 0; } /** @@ -170,7 +171,7 @@ ice_write_cgu_reg_e822(struct ice_hw *hw, u16 addr, u32 val) * * Convert the specified TIME_REF clock frequency to a string. */ -static const char *ice_clk_freq_str(u8 clk_freq) +const char *ice_clk_freq_str(u8 clk_freq) { switch ((enum ice_time_ref_freq)clk_freq) { case ICE_TIME_REF_FREQ_25_000: @@ -196,7 +197,7 @@ static const char *ice_clk_freq_str(u8 clk_freq) * * Convert the specified clock source to its string name. */ -static const char *ice_clk_src_str(u8 clk_src) +const char *ice_clk_src_str(u8 clk_src) { switch ((enum ice_clk_src)clk_src) { case ICE_CLK_SRC_TCX0: @@ -217,44 +218,44 @@ static const char *ice_clk_src_str(u8 clk_src) * Configure the Clock Generation Unit with the desired clock frequency and * time reference, enabling the PLL which drives the PTP hardware clock. 
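The PLL configuration flow in this function is a fixed sequence: validate parameters, disable the running PLL, program dividers and clock source, re-enable, wait, and verify lock. A toy model of that sequence, with a simulated CGU and invented names (the real registers are the NAC_CGU_DWORD* words):

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

typedef uint32_t u32;

/* Toy model of the CGU: the PLL reports lock only after it has been
 * (re)enabled with a valid frequency selection. */
struct toy_cgu {
	bool pll_enable;
	u32 freq_sel;
	bool locked;
};

enum { TOY_FREQ_25_000 = 0, TOY_NUM_FREQ = 3 };

static void toy_cgu_tick(struct toy_cgu *cgu)
{
	/* hardware settles: lock follows enable + a valid setup */
	cgu->locked = cgu->pll_enable && cgu->freq_sel < TOY_NUM_FREQ;
}

/* Mirrors the ice_cfg_cgu_pll_e822() flow: never reprogram the clock
 * source or frequency while the PLL is running. */
static int toy_cfg_cgu_pll(struct toy_cgu *cgu, u32 freq)
{
	if (freq >= TOY_NUM_FREQ)
		return -1;		/* ICE_ERR_PARAM analogue */

	if (cgu->pll_enable) {		/* 1. disable before changing */
		cgu->pll_enable = false;
		toy_cgu_tick(cgu);
	}

	cgu->freq_sel = freq;		/* 2. program dividers/source */

	cgu->pll_enable = true;		/* 3. re-enable */
	toy_cgu_tick(cgu);		/* 4. settle (msec delay in hw) */

	return cgu->locked ? 0 : -2;	/* 5. verify lock, else NOT_READY */
}
```

The E822 and E825-C variants below differ mainly in which dword carries the enable/source bits, not in this ordering.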
*/ -enum ice_status -ice_cfg_cgu_pll_e822(struct ice_hw *hw, enum ice_time_ref_freq clk_freq, - enum ice_clk_src clk_src) +int +ice_cfg_cgu_pll_e822(struct ice_hw *hw, enum ice_time_ref_freq *clk_freq, + enum ice_clk_src *clk_src) { union tspll_ro_bwm_lf bwm_lf; union nac_cgu_dword19 dw19; union nac_cgu_dword22 dw22; union nac_cgu_dword24 dw24; union nac_cgu_dword9 dw9; - enum ice_status status; + int err; - if (clk_freq >= NUM_ICE_TIME_REF_FREQ) { - ice_warn(hw, "Invalid TIME_REF frequency %u\n", clk_freq); + if (*clk_freq >= NUM_ICE_TIME_REF_FREQ) { + ice_warn(hw, "Invalid TIME_REF frequency %u\n", *clk_freq); return ICE_ERR_PARAM; } - if (clk_src >= NUM_ICE_CLK_SRC) { - ice_warn(hw, "Invalid clock source %u\n", clk_src); + if (*clk_src >= NUM_ICE_CLK_SRC) { + ice_warn(hw, "Invalid clock source %u\n", *clk_src); return ICE_ERR_PARAM; } - if (clk_src == ICE_CLK_SRC_TCX0 && - clk_freq != ICE_TIME_REF_FREQ_25_000) { + if (*clk_src == ICE_CLK_SRC_TCX0 && + *clk_freq != ICE_TIME_REF_FREQ_25_000) { ice_warn(hw, "TCX0 only supports 25 MHz frequency\n"); return ICE_ERR_PARAM; } - status = ice_read_cgu_reg_e822(hw, NAC_CGU_DWORD9, &dw9.val); - if (status) - return status; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD9, &dw9.val); + if (err) + return err; - status = ice_read_cgu_reg_e822(hw, NAC_CGU_DWORD24, &dw24.val); - if (status) - return status; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD24, &dw24.val); + if (err) + return err; - status = ice_read_cgu_reg_e822(hw, TSPLL_RO_BWM_LF, &bwm_lf.val); - if (status) - return status; + err = ice_read_cgu_reg_e82x(hw, TSPLL_RO_BWM_LF, &bwm_lf.val); + if (err) + return err; /* Log the current clock configuration */ ice_debug(hw, ICE_DBG_PTP, "Current CGU configuration -- %s, clk_src %s, clk_freq %s, PLL %s\n", @@ -267,67 +268,67 @@ ice_cfg_cgu_pll_e822(struct ice_hw *hw, enum ice_time_ref_freq clk_freq, if (dw24.field.ts_pll_enable) { dw24.field.ts_pll_enable = 0; - status = ice_write_cgu_reg_e822(hw, NAC_CGU_DWORD24, 
dw24.val); - if (status) - return status; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD24, dw24.val); + if (err) + return err; } /* Set the frequency */ - dw9.field.time_ref_freq_sel = clk_freq; - status = ice_write_cgu_reg_e822(hw, NAC_CGU_DWORD9, dw9.val); - if (status) - return status; + dw9.field.time_ref_freq_sel = *clk_freq; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD9, dw9.val); + if (err) + return err; /* Configure the TS PLL feedback divisor */ - status = ice_read_cgu_reg_e822(hw, NAC_CGU_DWORD19, &dw19.val); - if (status) - return status; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD19, &dw19.val); + if (err) + return err; - dw19.field.tspll_fbdiv_intgr = e822_cgu_params[clk_freq].feedback_div; + dw19.field.tspll_fbdiv_intgr = e822_cgu_params[*clk_freq].feedback_div; dw19.field.tspll_ndivratio = 1; - status = ice_write_cgu_reg_e822(hw, NAC_CGU_DWORD19, dw19.val); - if (status) - return status; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD19, dw19.val); + if (err) + return err; /* Configure the TS PLL post divisor */ - status = ice_read_cgu_reg_e822(hw, NAC_CGU_DWORD22, &dw22.val); - if (status) - return status; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD22, &dw22.val); + if (err) + return err; - dw22.field.time1588clk_div = e822_cgu_params[clk_freq].post_pll_div; + dw22.field.time1588clk_div = e822_cgu_params[*clk_freq].post_pll_div; dw22.field.time1588clk_sel_div2 = 0; - status = ice_write_cgu_reg_e822(hw, NAC_CGU_DWORD22, dw22.val); - if (status) - return status; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD22, dw22.val); + if (err) + return err; /* Configure the TS PLL pre divisor and clock source */ - status = ice_read_cgu_reg_e822(hw, NAC_CGU_DWORD24, &dw24.val); - if (status) - return status; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD24, &dw24.val); + if (err) + return err; - dw24.field.ref1588_ck_div = e822_cgu_params[clk_freq].refclk_pre_div; - dw24.field.tspll_fbdiv_frac = e822_cgu_params[clk_freq].frac_n_div; - 
dw24.field.time_ref_sel = clk_src; + dw24.field.ref1588_ck_div = e822_cgu_params[*clk_freq].refclk_pre_div; + dw24.field.tspll_fbdiv_frac = e822_cgu_params[*clk_freq].frac_n_div; + dw24.field.time_ref_sel = *clk_src; - status = ice_write_cgu_reg_e822(hw, NAC_CGU_DWORD24, dw24.val); - if (status) - return status; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD24, dw24.val); + if (err) + return err; /* Finally, enable the PLL */ dw24.field.ts_pll_enable = 1; - status = ice_write_cgu_reg_e822(hw, NAC_CGU_DWORD24, dw24.val); - if (status) - return status; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD24, dw24.val); + if (err) + return err; /* Wait to verify if the PLL locks */ ice_msec_delay(1, true); - status = ice_read_cgu_reg_e822(hw, TSPLL_RO_BWM_LF, &bwm_lf.val); - if (status) - return status; + err = ice_read_cgu_reg_e82x(hw, TSPLL_RO_BWM_LF, &bwm_lf.val); + if (err) + return err; if (!bwm_lf.field.plllock_true_lock_cri) { ice_warn(hw, "CGU PLL failed to lock\n"); @@ -341,1485 +342,630 @@ ice_cfg_cgu_pll_e822(struct ice_hw *hw, enum ice_time_ref_freq clk_freq, ice_clk_freq_str(dw9.field.time_ref_freq_sel), bwm_lf.field.plllock_true_lock_cri ? "locked" : "unlocked"); - return ICE_SUCCESS; -} - -/** - * ice_init_cgu_e822 - Initialize CGU with settings from firmware - * @hw: pointer to the HW structure - * - * Initialize the Clock Generation Unit of the E822 device. 
- */ -static enum ice_status ice_init_cgu_e822(struct ice_hw *hw) -{ - struct ice_ts_func_info *ts_info = &hw->func_caps.ts_func_info; - union tspll_cntr_bist_settings cntr_bist; - enum ice_status status; - - status = ice_read_cgu_reg_e822(hw, TSPLL_CNTR_BIST_SETTINGS, - &cntr_bist.val); - if (status) - return status; - - /* Disable sticky lock detection so lock status reported is accurate */ - cntr_bist.field.i_plllock_sel_0 = 0; - cntr_bist.field.i_plllock_sel_1 = 0; - - status = ice_write_cgu_reg_e822(hw, TSPLL_CNTR_BIST_SETTINGS, - cntr_bist.val); - if (status) - return status; - - /* Configure the CGU PLL using the parameters from the function - * capabilities. - */ - status = ice_cfg_cgu_pll_e822(hw, ts_info->time_ref, - (enum ice_clk_src)ts_info->clk_src); - if (status) - return status; + *clk_freq = (enum ice_time_ref_freq)dw9.field.time_ref_freq_sel; + *clk_src = (enum ice_clk_src)dw24.field.time_ref_sel; - return ICE_SUCCESS; + return 0; } /** - * ice_ptp_src_cmd - Prepare source timer for a timer command - * @hw: pointer to HW structure - * @cmd: Timer command + * ice_cfg_cgu_pll_e825c - Configure the Clock Generation Unit for E825-C + * @hw: pointer to the HW struct + * @clk_freq: Clock frequency to program + * @clk_src: Clock source to select (TIME_REF, or TCX0) * - * Prepare the source timer for an upcoming timer sync command. + * Configure the Clock Generation Unit with the desired clock frequency and + * time reference, enabling the PLL which drives the PTP hardware clock. 
*/ -void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd) +int +ice_cfg_cgu_pll_e825c(struct ice_hw *hw, enum ice_time_ref_freq *clk_freq, + enum ice_clk_src *clk_src) { - u32 cmd_val; - u8 tmr_idx; - - tmr_idx = ice_get_ptp_src_clock_index(hw); - cmd_val = tmr_idx << SEL_CPK_SRC; + union tspll_ro_lock_e825c ro_lock; + union nac_cgu_dword23_e825c dw23; + union nac_cgu_dword19 dw19; + union nac_cgu_dword22 dw22; + union nac_cgu_dword24 dw24; + union nac_cgu_dword9 dw9; + int err; - switch (cmd) { - case ICE_PTP_INIT_TIME: - cmd_val |= GLTSYN_CMD_INIT_TIME; - break; - case ICE_PTP_INIT_INCVAL: - cmd_val |= GLTSYN_CMD_INIT_INCVAL; - break; - case ICE_PTP_ADJ_TIME: - cmd_val |= GLTSYN_CMD_ADJ_TIME; - break; - case ICE_PTP_ADJ_TIME_AT_TIME: - cmd_val |= GLTSYN_CMD_ADJ_INIT_TIME; - break; - case ICE_PTP_READ_TIME: - cmd_val |= GLTSYN_CMD_READ_TIME; - break; - case ICE_PTP_NOP: - break; - default: - ice_warn(hw, "Unknown timer command %u\n", cmd); - return; + if (*clk_freq >= NUM_ICE_TIME_REF_FREQ) { + ice_warn(hw, "Invalid TIME_REF frequency %u\n", *clk_freq); + return ICE_ERR_PARAM; } - wr32(hw, GLTSYN_CMD, cmd_val); -} - -/** - * ice_ptp_exec_tmr_cmd - Execute all prepared timer commands - * @hw: pointer to HW struct - * - * Write the SYNC_EXEC_CMD bit to the GLTSYN_CMD_SYNC register, and flush the - * write immediately. This triggers the hardware to begin executing all of the - * source and PHY timer commands synchronously. - */ -static void ice_ptp_exec_tmr_cmd(struct ice_hw *hw) -{ - wr32(hw, GLTSYN_CMD_SYNC, SYNC_EXEC_CMD); - ice_flush(hw); -} - -/** - * ice_ptp_clean_cmd - Clean the timer command register - * @hw: pointer to HW struct - * - * Zero out the GLTSYN_CMD to avoid any residual command execution. 
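The source-timer command word removed/relocated here is built the same way in all variants: the source timer index goes in the upper selector bits and the command opcode is OR'd in. A sketch with hypothetical bit values (the real SEL_CPK_SRC shift and GLTSYN_CMD_* opcodes live in the hardware headers):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t u32;
typedef uint8_t u8;

/* Hypothetical encoding for illustration only. */
#define TOY_SEL_SRC_S		4
#define TOY_CMD_INIT_TIME	0x01
#define TOY_CMD_ADJ_TIME	0x02
#define TOY_CMD_READ_TIME	0x08

enum toy_tmr_cmd { TOY_INIT_TIME, TOY_ADJ_TIME, TOY_READ_TIME, TOY_NOP };

/* Mirrors the shape of ice_ptp_src_cmd(): select the source timer in
 * the upper bits, OR in the opcode for the requested command. */
static int toy_src_cmd_val(u8 tmr_idx, enum toy_tmr_cmd cmd, u32 *cmd_val)
{
	u32 val = (u32)tmr_idx << TOY_SEL_SRC_S;

	switch (cmd) {
	case TOY_INIT_TIME:
		val |= TOY_CMD_INIT_TIME;
		break;
	case TOY_ADJ_TIME:
		val |= TOY_CMD_ADJ_TIME;
		break;
	case TOY_READ_TIME:
		val |= TOY_CMD_READ_TIME;
		break;
	case TOY_NOP:
		break;			/* selector only, no opcode bits */
	default:
		return -1;		/* unknown command: warn and bail */
	}

	*cmd_val = val;
	return 0;
}
```

The prepared value is only latched by hardware once SYNC_EXEC_CMD is written, which is why the clean/exec helpers exist separately.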
- */ -static void ice_ptp_clean_cmd(struct ice_hw *hw) -{ - wr32(hw, GLTSYN_CMD, 0); - ice_flush(hw); -} + if (*clk_src >= NUM_ICE_CLK_SRC) { + ice_warn(hw, "Invalid clock source %u\n", *clk_src); + return ICE_ERR_PARAM; + } -/* 56G PHY access functions */ -static const u32 eth56g_port_base[ICE_NUM_PHY_PORTS] = { - ICE_PHY0_BASE, - ICE_PHY1_BASE, - ICE_PHY2_BASE, - ICE_PHY3_BASE, - ICE_PHY4_BASE, -}; + if (*clk_src == ICE_CLK_SRC_TCX0 && + *clk_freq != ICE_TIME_REF_FREQ_25_000) { + ice_warn(hw, "TCX0 only supports 25 MHz frequency\n"); + return ICE_ERR_PARAM; + } -/** - * ice_write_phy_eth56g_raw_lp - Write a PHY port register with lock parameter - * @hw: pointer to the HW struct - * @reg_addr: PHY register address - * @val: Value to write - * @lock_sbq: true to lock the sideband queue - */ -static enum ice_status -ice_write_phy_eth56g_raw_lp(struct ice_hw *hw, u32 reg_addr, u32 val, - bool lock_sbq) -{ - struct ice_sbq_msg_input phy_msg; - enum ice_status status; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD9, &dw9.val); + if (err) + return err; - phy_msg.opcode = ice_sbq_msg_wr; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD24, &dw24.val); + if (err) + return err; - phy_msg.msg_addr_low = ICE_LO_WORD(reg_addr); - phy_msg.msg_addr_high = ICE_HI_WORD(reg_addr); + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD23_E825C, &dw23.val); + if (err) + return err; - phy_msg.data = val; - phy_msg.dest_dev = phy_56g; + err = ice_read_cgu_reg_e82x(hw, TSPLL_RO_LOCK_E825C, &ro_lock.val); + if (err) + return err; - status = ice_sbq_rw_reg_lp(hw, &phy_msg, lock_sbq); + /* Log the current clock configuration */ + ice_debug(hw, ICE_DBG_PTP, "Current CGU configuration -- %s, clk_src %s, clk_freq %s, PLL %s\n", + dw24.field.ts_pll_enable ? "enabled" : "disabled", + ice_clk_src_str(dw23.field.time_ref_sel), + ice_clk_freq_str(dw9.field.time_ref_freq_sel), + ro_lock.field.plllock_true_lock_cri ? 
"locked" : "unlocked"); - if (status) - ice_debug(hw, ICE_DBG_PTP, "PTP failed to send msg to phy %d\n", - status); + /* Disable the PLL before changing the clock source or frequency */ + if (dw23.field.ts_pll_enable) { + dw23.field.ts_pll_enable = 0; - return status; -} + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD23_E825C, + dw23.val); + if (err) + return err; + } -/** - * ice_read_phy_eth56g_raw_lp - Read a PHY port register with lock parameter - * @hw: pointer to the HW struct - * @reg_addr: PHY port register address - * @val: Pointer to the value to read (out param) - * @lock_sbq: true to lock the sideband queue - */ -static enum ice_status -ice_read_phy_eth56g_raw_lp(struct ice_hw *hw, u32 reg_addr, u32 *val, - bool lock_sbq) -{ - struct ice_sbq_msg_input phy_msg; - enum ice_status status; + /* Set the frequency */ + dw9.field.time_ref_freq_sel = *clk_freq; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD9, dw9.val); + if (err) + return err; - phy_msg.opcode = ice_sbq_msg_rd; + /* Configure the TS PLL feedback divisor */ + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD19, &dw19.val); + if (err) + return err; - phy_msg.msg_addr_low = ICE_LO_WORD(reg_addr); - phy_msg.msg_addr_high = ICE_HI_WORD(reg_addr); + dw19.field.tspll_fbdiv_intgr = e822_cgu_params[*clk_freq].feedback_div; + dw19.field.tspll_ndivratio = 1; - phy_msg.dest_dev = phy_56g; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD19, dw19.val); + if (err) + return err; - status = ice_sbq_rw_reg_lp(hw, &phy_msg, lock_sbq); + /* Configure the TS PLL post divisor */ + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD22, &dw22.val); + if (err) + return err; - if (status) - ice_debug(hw, ICE_DBG_PTP, "PTP failed to send msg to phy %d\n", - status); - else - *val = phy_msg.data; + dw22.field.time1588clk_div = e822_cgu_params[*clk_freq].post_pll_div; + dw22.field.time1588clk_sel_div2 = 0; - return status; -} + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD22, dw22.val); + if (err) + return err; -/** - * 
ice_phy_port_reg_address_eth56g - Calculate a PHY port register address - * @port: Port number to be written - * @offset: Offset from PHY port register base - * @address: The result address - */ -static enum ice_status -ice_phy_port_reg_address_eth56g(u8 port, u16 offset, u32 *address) -{ - u8 phy, lane; + /* Configure the TS PLL pre divisor and clock source */ + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD23_E825C, &dw23.val); + if (err) + return err; - if (port >= ICE_NUM_EXTERNAL_PORTS) - return ICE_ERR_OUT_OF_RANGE; + dw23.field.ref1588_ck_div = e822_cgu_params[*clk_freq].refclk_pre_div; + dw23.field.time_ref_sel = *clk_src; - phy = port / ICE_PORTS_PER_QUAD; - lane = port % ICE_PORTS_PER_QUAD; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD23_E825C, dw23.val); + if (err) + return err; - *address = offset + eth56g_port_base[phy] + - PHY_PTP_LANE_ADDR_STEP * lane; + dw24.field.tspll_fbdiv_frac = e822_cgu_params[*clk_freq].frac_n_div; - return ICE_SUCCESS; -} + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD24, dw24.val); + if (err) + return err; -/** - * ice_phy_port_mem_address_eth56g - Calculate a PHY port memory address - * @port: Port number to be written - * @offset: Offset from PHY port register base - * @address: The result address - */ -static enum ice_status -ice_phy_port_mem_address_eth56g(u8 port, u16 offset, u32 *address) -{ - u8 phy, lane; + /* Finally, enable the PLL */ + dw23.field.ts_pll_enable = 1; - if (port >= ICE_NUM_EXTERNAL_PORTS) - return ICE_ERR_OUT_OF_RANGE; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD23_E825C, dw23.val); + if (err) + return err; - phy = port / ICE_PORTS_PER_QUAD; - lane = port % ICE_PORTS_PER_QUAD; + /* Wait to verify if the PLL locks */ + ice_msec_delay(1, true); - *address = offset + eth56g_port_base[phy] + - PHY_PTP_MEM_START + PHY_PTP_MEM_LANE_STEP * lane; + err = ice_read_cgu_reg_e82x(hw, TSPLL_RO_LOCK_E825C, &ro_lock.val); + if (err) + return err; - return ICE_SUCCESS; -} + if 
(!ro_lock.field.plllock_true_lock_cri) { + ice_warn(hw, "CGU PLL failed to lock\n"); + return ICE_ERR_NOT_READY; + } -/** - * ice_write_phy_reg_eth56g_lp - Write a PHY port register with lock parameter - * @hw: pointer to the HW struct - * @port: Port number to be written - * @offset: Offset from PHY port register base - * @val: Value to write - * @lock_sbq: true to lock the sideband queue - */ -static enum ice_status -ice_write_phy_reg_eth56g_lp(struct ice_hw *hw, u8 port, u16 offset, u32 val, - bool lock_sbq) -{ - enum ice_status status; - u32 reg_addr; + /* Log the current clock configuration */ + ice_debug(hw, ICE_DBG_PTP, "New CGU configuration -- %s, clk_src %s, clk_freq %s, PLL %s\n", + dw24.field.ts_pll_enable ? "enabled" : "disabled", + ice_clk_src_str(dw23.field.time_ref_sel), + ice_clk_freq_str(dw9.field.time_ref_freq_sel), + ro_lock.field.plllock_true_lock_cri ? "locked" : "unlocked"); - status = ice_phy_port_reg_address_eth56g(port, offset, ®_addr); - if (status) - return status; + *clk_freq = (enum ice_time_ref_freq)dw9.field.time_ref_freq_sel; + *clk_src = (enum ice_clk_src)dw23.field.time_ref_sel; - return ice_write_phy_eth56g_raw_lp(hw, reg_addr, val, lock_sbq); + return 0; } /** - * ice_write_phy_reg_eth56g - Write a PHY port register with sbq locked + * ice_cfg_cgu_pll_dis_sticky_bits_e822 - disable TS PLL sticky bits * @hw: pointer to the HW struct - * @port: Port number to be written - * @offset: Offset from PHY port register base - * @val: Value to write + * + * Configure the Clock Generation Unit TS PLL sticky bits so they don't latch on + * losing TS PLL lock, but always show current state. 
*/ -enum ice_status -ice_write_phy_reg_eth56g(struct ice_hw *hw, u8 port, u16 offset, u32 val) +static int ice_cfg_cgu_pll_dis_sticky_bits_e822(struct ice_hw *hw) { - return ice_write_phy_reg_eth56g_lp(hw, port, offset, val, true); -} + union tspll_cntr_bist_settings cntr_bist; + int err; -/** - * ice_read_phy_reg_eth56g_lp - Read a PHY port register with - * lock parameter - * @hw: pointer to the HW struct - * @port: Port number to be read - * @offset: Offset from PHY port register base - * @val: Pointer to the value to read (out param) - * @lock_sbq: true to lock the sideband queue - */ -static enum ice_status -ice_read_phy_reg_eth56g_lp(struct ice_hw *hw, u8 port, u16 offset, u32 *val, - bool lock_sbq) -{ - enum ice_status status; - u32 reg_addr; + err = ice_read_cgu_reg_e82x(hw, TSPLL_CNTR_BIST_SETTINGS, + &cntr_bist.val); + if (err) + return err; - status = ice_phy_port_reg_address_eth56g(port, offset, ®_addr); - if (status) - return status; + cntr_bist.field.i_plllock_sel_0 = 0; + cntr_bist.field.i_plllock_sel_1 = 0; - return ice_read_phy_eth56g_raw_lp(hw, reg_addr, val, lock_sbq); + err = ice_write_cgu_reg_e82x(hw, TSPLL_CNTR_BIST_SETTINGS, + cntr_bist.val); + return err; } /** - * ice_read_phy_reg_eth56g - Read a PHY port register with sbq locked + * ice_cfg_cgu_pll_dis_sticky_bits_e825c - disable TS PLL sticky bits for E825-C * @hw: pointer to the HW struct - * @port: Port number to be read - * @offset: Offset from PHY port register base - * @val: Pointer to the value to read (out param) + * + * Configure the Clock Generation Unit TS PLL sticky bits so they don't latch on + * losing TS PLL lock, but always show current state. 
*/ -enum ice_status -ice_read_phy_reg_eth56g(struct ice_hw *hw, u8 port, u16 offset, u32 *val) +static int ice_cfg_cgu_pll_dis_sticky_bits_e825c(struct ice_hw *hw) { - return ice_read_phy_reg_eth56g_lp(hw, port, offset, val, true); -} + union tspll_bw_tdc_e825c bw_tdc; + int err; -/** - * ice_phy_port_mem_read_eth56g_lp - Read a PHY port memory location - * with lock parameter - * @hw: pointer to the HW struct - * @port: Port number to be read - * @offset: Offset from PHY port register base - * @val: Pointer to the value to read (out param) - * @lock_sbq: true to lock the sideband queue - */ -static enum ice_status -ice_phy_port_mem_read_eth56g_lp(struct ice_hw *hw, u8 port, u16 offset, - u32 *val, bool lock_sbq) -{ - enum ice_status status; - u32 mem_addr; + err = ice_read_cgu_reg_e82x(hw, TSPLL_BW_TDC_E825C, &bw_tdc.val); + if (err) + return err; - status = ice_phy_port_mem_address_eth56g(port, offset, &mem_addr); - if (status) - return status; + bw_tdc.field.i_plllock_sel_1_0 = 0; - return ice_read_phy_eth56g_raw_lp(hw, mem_addr, val, lock_sbq); + err = ice_write_cgu_reg_e82x(hw, TSPLL_BW_TDC_E825C, bw_tdc.val); + return err; } /** - * ice_phy_port_mem_read_eth56g - Read a PHY port memory location with - * sbq locked + * ice_cgu_ts_pll_lost_lock_e825c - check if TS PLL lost lock * @hw: pointer to the HW struct - * @port: Port number to be read - * @offset: Offset from PHY port register base - * @val: Pointer to the value to read (out param) + * @lost_lock: output flag for reporting lost lock */ -static enum ice_status -ice_phy_port_mem_read_eth56g(struct ice_hw *hw, u8 port, u16 offset, u32 *val) +int +ice_cgu_ts_pll_lost_lock_e825c(struct ice_hw *hw, bool *lost_lock) { - return ice_phy_port_mem_read_eth56g_lp(hw, port, offset, val, true); -} + union tspll_ro_lock_e825c ro_lock; + int err; -/** - * ice_phy_port_mem_write_eth56g_lp - Write a PHY port memory location with - * lock parameter - * @hw: pointer to the HW struct - * @port: Port number to be read - * 
@offset: Offset from PHY port register base - * @val: Pointer to the value to read (out param) - * @lock_sbq: true to lock the sideband queue - */ -static enum ice_status -ice_phy_port_mem_write_eth56g_lp(struct ice_hw *hw, u8 port, u16 offset, - u32 val, bool lock_sbq) -{ - enum ice_status status; - u32 mem_addr; + err = ice_read_cgu_reg_e82x(hw, TSPLL_RO_LOCK_E825C, &ro_lock.val); + if (err) + return err; - status = ice_phy_port_mem_address_eth56g(port, offset, &mem_addr); - if (status) - return status; + if (ro_lock.field.pllunlock_flag_cri && + !ro_lock.field.plllock_true_lock_cri) + *lost_lock = true; + else + *lost_lock = false; - return ice_write_phy_eth56g_raw_lp(hw, mem_addr, val, lock_sbq); + return 0; } /** - * ice_phy_port_mem_write_eth56g - Write a PHY port memory location with - * sbq locked + * ice_cgu_ts_pll_restart_e825c - trigger TS PLL restart * @hw: pointer to the HW struct - * @port: Port number to be read - * @offset: Offset from PHY port register base - * @val: Pointer to the value to read (out param) - */ -static enum ice_status -ice_phy_port_mem_write_eth56g(struct ice_hw *hw, u8 port, u16 offset, u32 val) -{ - return ice_phy_port_mem_write_eth56g_lp(hw, port, offset, val, true); -} - -/** - * ice_is_64b_phy_reg_eth56g - Check if this is a 64bit PHY register - * @low_addr: the low address to check - * - * Checks if the provided low address is one of the known 64bit PHY values - * represented as two 32bit registers. 
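As the comment above notes, a 64-bit PHY value is exposed as two adjacent 32-bit registers, with the high half at low_addr + sizeof(u32). The address and combine arithmetic, extracted as a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t u16;
typedef uint32_t u32;
typedef uint64_t u64;

/* High half of a 64-bit PHY value sits one 32-bit register above the
 * low half, as in ice_read_64b_phy_reg_eth56g(). */
static u16 high_addr_of(u16 low_addr)
{
	return low_addr + (u16)sizeof(u32);
}

/* Combine the two register reads into the full 64-bit value. */
static u64 combine_64b(u32 lo, u32 hi)
{
	return ((u64)hi << 32) | lo;
}
```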
*/ -static bool ice_is_64b_phy_reg_eth56g(u16 low_addr) +int ice_cgu_ts_pll_restart_e825c(struct ice_hw *hw) { - switch (low_addr) { - case PHY_REG_TX_TIMER_INC_PRE_L: - case PHY_REG_RX_TIMER_INC_PRE_L: - case PHY_REG_TX_CAPTURE_L: - case PHY_REG_RX_CAPTURE_L: - case PHY_REG_TOTAL_TX_OFFSET_L: - case PHY_REG_TOTAL_RX_OFFSET_L: - return true; - default: - return false; - } -} + union nac_cgu_dword23_e825c dw23; + int err; -/** - * ice_is_40b_phy_reg_eth56g - Check if this is a 40bit PHY register - * @low_addr: the low address to check - * - * Checks if the provided low address is one of the known 40bit PHY values - * split into two registers with the lower 8 bits in the low register and the - * upper 32 bits in the high register. - */ -static bool ice_is_40b_phy_reg_eth56g(u16 low_addr) -{ - switch (low_addr) { - case PHY_REG_TIMETUS_L: - return true; - default: - return false; - } -} + /* Read the initial values of DW23 */ + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD23_E825C, &dw23.val); + if (err) + return err; -/** - * ice_read_40b_phy_reg_eth56g - Read a 40bit value from PHY registers - * @hw: pointer to the HW struct - * @port: PHY port to read from - * @low_addr: offset of the lower register to read from - * @val: on return, the contents of the 40bit value from the PHY registers - * - * Reads the two registers associated with a 40bit value and returns it in the - * val pointer. 
- * This function checks that the caller has specified a known 40 bit register - * offset - */ -static enum ice_status -ice_read_40b_phy_reg_eth56g(struct ice_hw *hw, u8 port, u16 low_addr, u64 *val) -{ - u16 high_addr = low_addr + sizeof(u32); - enum ice_status status; - u32 lo, hi; + /* Disable the PLL */ + dw23.field.ts_pll_enable = 0; - if (!ice_is_40b_phy_reg_eth56g(low_addr)) - return ICE_ERR_PARAM; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD23_E825C, dw23.val); + if (err) + return err; - status = ice_read_phy_reg_eth56g(hw, port, low_addr, &lo); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read from low register %#08x\n, status %d", - (int)low_addr, status); - return status; - } + /* Wait 5us before reenabling PLL */ + ice_usec_delay(5, false); - status = ice_read_phy_reg_eth56g(hw, port, low_addr + sizeof(u32), &hi); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read from high register %08x\n, status %d", - high_addr, status); - return status; - } + /* Re-enable the PLL */ + dw23.field.ts_pll_enable = 1; - *val = ((u64)hi << P_REG_40B_HIGH_S) | (lo & P_REG_40B_LOW_M); + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD23_E825C, dw23.val); + if (err) + return err; - return ICE_SUCCESS; + return 0; } +#define E825C_CGU_BYPASS_MUX_OFFSET 3 /** - * ice_read_64b_phy_reg_eth56g - Read a 64bit value from PHY registers + * cgu_bypass_mux_port - calculate which output of the mux should be used * @hw: pointer to the HW struct - * @port: PHY port to read from - * @low_addr: offset of the lower register to read from - * @val: on return, the contents of the 64bit value from the PHY registers - * - * Reads the two registers associated with a 64bit value and returns it in the - * val pointer. 
- * This function checks that the caller has specified a known 64 bit register - * offset + * @port: number of the port */ -static enum ice_status -ice_read_64b_phy_reg_eth56g(struct ice_hw *hw, u8 port, u16 low_addr, u64 *val) -{ - u16 high_addr = low_addr + sizeof(u32); - enum ice_status status; - u32 lo, hi; - - if (!ice_is_64b_phy_reg_eth56g(low_addr)) - return ICE_ERR_PARAM; - - status = ice_read_phy_reg_eth56g(hw, port, low_addr, &lo); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read from low register %#08x\n, status %d", - low_addr, status); - return status; - } - - status = ice_read_phy_reg_eth56g(hw, port, high_addr, &hi); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read from high register %#08x\n, status %d", - high_addr, status); - return status; - } - - *val = ((u64)hi << 32) | lo; - - return ICE_SUCCESS; -} - -/** - * ice_write_40b_phy_reg_eth56g - Write a 40b value to the PHY - * @hw: pointer to the HW struct - * @port: port to write to - * @low_addr: offset of the low register - * @val: 40b value to write - * - * Write the provided 40b value to the two associated registers by splitting - * it up into two chunks, the lower 8 bits and the upper 32 bits. 
- * This function checks that the caller has specified a known 40 bit register - * offset - */ -static enum ice_status -ice_write_40b_phy_reg_eth56g(struct ice_hw *hw, u8 port, u16 low_addr, u64 val) -{ - u16 high_addr = low_addr + sizeof(u32); - enum ice_status status; - u32 lo, hi; - - if (!ice_is_40b_phy_reg_eth56g(low_addr)) - return ICE_ERR_PARAM; - - lo = (u32)(val & P_REG_40B_LOW_M); - hi = (u32)(val >> P_REG_40B_HIGH_S); - - status = ice_write_phy_reg_eth56g(hw, port, low_addr, lo); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write to low register 0x%08x\n, status %d", - low_addr, status); - return status; - } - - status = ice_write_phy_reg_eth56g(hw, port, high_addr, hi); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write to high register 0x%08x\n, status %d", - high_addr, status); - return status; - } - - return ICE_SUCCESS; -} - -/** - * ice_write_64b_phy_reg_eth56g - Write a 64bit value to PHY registers - * @hw: pointer to the HW struct - * @port: PHY port to read from - * @low_addr: offset of the lower register to read from - * @val: the contents of the 64bit value to write to PHY - * - * Write the 64bit value to the two associated 32bit PHY registers. 
- * This function checks that the caller has specified a known 64 bit register - * offset - */ -static enum ice_status -ice_write_64b_phy_reg_eth56g(struct ice_hw *hw, u8 port, u16 low_addr, u64 val) -{ - u16 high_addr = low_addr + sizeof(u32); - enum ice_status status; - u32 lo, hi; - - if (!ice_is_64b_phy_reg_eth56g(low_addr)) - return ICE_ERR_PARAM; - - lo = ICE_LO_DWORD(val); - hi = ICE_HI_DWORD(val); - - status = ice_write_phy_reg_eth56g(hw, port, low_addr, lo); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write to low register 0x%08x\n, status %d", - low_addr, status); - return status; - } - - status = ice_write_phy_reg_eth56g(hw, port, high_addr, hi); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write to high register 0x%08x\n, status %d", - high_addr, status); - return status; - } - - return ICE_SUCCESS; -} - -/** - * ice_read_phy_tstamp_eth56g - Read a PHY timestamp out of the port memory - * @hw: pointer to the HW struct - * @port: the port to read from - * @idx: the timestamp index to read - * @tstamp: on return, the 40bit timestamp value - * - * Read a 40bit timestamp value out of the two associated entries in the - * port memory block of the internal PHYs of the 56G devices. 
- */ -static enum ice_status -ice_read_phy_tstamp_eth56g(struct ice_hw *hw, u8 port, u8 idx, u64 *tstamp) -{ - enum ice_status status; - u16 lo_addr, hi_addr; - u32 lo, hi; - - lo_addr = (u16)PHY_TSTAMP_L(idx); - hi_addr = (u16)PHY_TSTAMP_U(idx); - - status = ice_phy_port_mem_read_eth56g(hw, port, lo_addr, &lo); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read low PTP timestamp register, status %d\n", - status); - return status; - } - - status = ice_phy_port_mem_read_eth56g(hw, port, hi_addr, &hi); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read high PTP timestamp register, status %d\n", - status); - return status; - } - - /* For 56G based internal PHYs, the timestamp is reported with the - * lower 8 bits in the low register, and the upper 32 bits in the high - * register. - */ - *tstamp = ((u64)hi) << TS_PHY_HIGH_S | ((u64)lo & TS_PHY_LOW_M); - - return ICE_SUCCESS; -} - -/** - * ice_clear_phy_tstamp_eth56g - Clear a timestamp from the quad block - * @hw: pointer to the HW struct - * @port: the quad to read from - * @idx: the timestamp index to reset - * - * Clear a timestamp, resetting its valid bit, in the PHY port memory of - * internal PHYs of the 56G devices. - */ -static enum ice_status -ice_clear_phy_tstamp_eth56g(struct ice_hw *hw, u8 port, u8 idx) -{ - enum ice_status status; - u16 lo_addr; - - lo_addr = (u16)PHY_TSTAMP_L(idx); - - status = ice_phy_port_mem_write_eth56g(hw, port, lo_addr, 0); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to clear low PTP timestamp register, status %d\n", - status); - return status; - } - - return ICE_SUCCESS; -} - -/** - * ice_ptp_prep_port_phy_time_eth56g - Prepare one PHY port with initial time - * @hw: pointer to the HW struct - * @port: port number - * @phy_time: time to initialize the PHY port clocks to - * - * Write a new initial time value into registers of a specific PHY port. 
- */ -static enum ice_status -ice_ptp_prep_port_phy_time_eth56g(struct ice_hw *hw, u8 port, u64 phy_time) -{ - enum ice_status status; - - /* Tx case */ - status = ice_write_64b_phy_reg_eth56g(hw, port, - PHY_REG_TX_TIMER_INC_PRE_L, - phy_time); - if (status) - return status; - - /* Rx case */ - return ice_write_64b_phy_reg_eth56g(hw, port, - PHY_REG_RX_TIMER_INC_PRE_L, - phy_time); -} - -/** - * ice_ptp_prep_phy_time_eth56g - Prepare PHY port with initial time - * @hw: pointer to the HW struct - * @time: Time to initialize the PHY port clocks to - * - * Program the PHY port registers with a new initial time value. The port - * clock will be initialized once the driver issues an ICE_PTP_INIT_TIME sync - * command. The time value is the upper 32 bits of the PHY timer, usually in - * units of nominal nanoseconds. - */ -static enum ice_status -ice_ptp_prep_phy_time_eth56g(struct ice_hw *hw, u32 time) -{ - enum ice_status status; - u64 phy_time; - u8 port; - - /* The time represents the upper 32 bits of the PHY timer, so we need - * to shift to account for this when programming. - */ - phy_time = (u64)time << 32; - - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { - if (!(hw->ena_lports & BIT(port))) - continue; - status = ice_ptp_prep_port_phy_time_eth56g(hw, port, - phy_time); - - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write init time for port %u, status %d\n", - port, status); - return status; - } - } - - return ICE_SUCCESS; -} - -/** - * ice_ptp_prep_port_adj_eth56g - Prepare a single port for time adjust - * @hw: pointer to HW struct - * @port: Port number to be programmed - * @time: time in cycles to adjust the port Tx and Rx clocks - * @lock_sbq: true to lock the sbq sq_lock (the usual case); false if the - * sq_lock has already been locked at a higher level - * - * Program the port for an atomic adjustment by writing the Tx and Rx timer - * registers. 
The atomic adjustment won't be completed until the driver issues - * an ICE_PTP_ADJ_TIME command. - * - * Note that time is not in units of nanoseconds. It is in clock time - * including the lower sub-nanosecond portion of the port timer. - * - * Negative adjustments are supported using 2s complement arithmetic. - */ -enum ice_status -ice_ptp_prep_port_adj_eth56g(struct ice_hw *hw, u8 port, s64 time, - bool lock_sbq) -{ - enum ice_status status; - u32 l_time, u_time; - - l_time = ICE_LO_DWORD(time); - u_time = ICE_HI_DWORD(time); - - /* Tx case */ - status = ice_write_phy_reg_eth56g_lp(hw, port, - PHY_REG_TX_TIMER_INC_PRE_L, - l_time, lock_sbq); - if (status) - goto exit_err; - - status = ice_write_phy_reg_eth56g_lp(hw, port, - PHY_REG_TX_TIMER_INC_PRE_U, - u_time, lock_sbq); - if (status) - goto exit_err; - - /* Rx case */ - status = ice_write_phy_reg_eth56g_lp(hw, port, - PHY_REG_RX_TIMER_INC_PRE_L, - l_time, lock_sbq); - if (status) - goto exit_err; - - status = ice_write_phy_reg_eth56g_lp(hw, port, - PHY_REG_RX_TIMER_INC_PRE_U, - u_time, lock_sbq); - if (status) - goto exit_err; - - return ICE_SUCCESS; - -exit_err: - ice_debug(hw, ICE_DBG_PTP, "Failed to write time adjust for port %u, status %d\n", - port, status); - return status; -} - -/** - * ice_ptp_prep_phy_adj_eth56g - Prep PHY ports for a time adjustment - * @hw: pointer to HW struct - * @adj: adjustment in nanoseconds - * @lock_sbq: true to lock the sbq sq_lock (the usual case); false if the - * sq_lock has already been locked at a higher level - * - * Prepare the PHY ports for an atomic time adjustment by programming the PHY - * Tx and Rx port registers. The actual adjustment is completed by issuing an - * ICE_PTP_ADJ_TIME or ICE_PTP_ADJ_TIME_AT_TIME sync command. 
- */ -static enum ice_status -ice_ptp_prep_phy_adj_eth56g(struct ice_hw *hw, s32 adj, bool lock_sbq) -{ - enum ice_status status = ICE_SUCCESS; - s64 cycles; - u8 port; - - /* The port clock supports adjustment of the sub-nanosecond portion of - * the clock. We shift the provided adjustment in nanoseconds to - * calculate the appropriate adjustment to program into the PHY ports. - */ - cycles = (s64)adj << 32; - - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { - if (!(hw->ena_lports & BIT(port))) - continue; - - status = ice_ptp_prep_port_adj_eth56g(hw, port, cycles, - lock_sbq); - if (status) - break; - } - - return status; -} - -/** - * ice_ptp_prep_phy_incval_eth56g - Prepare PHY ports for time adjustment - * @hw: pointer to HW struct - * @incval: new increment value to prepare - * - * Prepare each of the PHY ports for a new increment value by programming the - * port's TIMETUS registers. The new increment value will be updated after - * issuing an ICE_PTP_INIT_INCVAL command. - */ -static enum ice_status -ice_ptp_prep_phy_incval_eth56g(struct ice_hw *hw, u64 incval) -{ - enum ice_status status; - u8 port; - - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { - if (!(hw->ena_lports & BIT(port))) - continue; - status = ice_write_40b_phy_reg_eth56g(hw, port, - PHY_REG_TIMETUS_L, - incval); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write incval for port %u, status %d\n", - port, status); - return status; - } - } - - return ICE_SUCCESS; -} - -/** - * ice_ptp_read_phy_incval_eth56g - Read a PHY port's current incval - * @hw: pointer to the HW struct - * @port: the port to read - * @incval: on return, the time_clk_cyc incval for this port - * - * Read the time_clk_cyc increment value for a given PHY port. 
- */ -enum ice_status -ice_ptp_read_phy_incval_eth56g(struct ice_hw *hw, u8 port, u64 *incval) -{ - enum ice_status status; - - status = ice_read_40b_phy_reg_eth56g(hw, port, PHY_REG_TIMETUS_L, - incval); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read TIMETUS_L, status %d\n", - status); - return status; - } - - ice_debug(hw, ICE_DBG_PTP, "read INCVAL = 0x%016llx\n", - (unsigned long long)*incval); - - return ICE_SUCCESS; -} - -/** - * ice_ptp_prep_phy_adj_target_eth56g - Prepare PHY for adjust at target time - * @hw: pointer to HW struct - * @target_time: target time to program - * - * Program the PHY port Tx and Rx TIMER_CNT_ADJ registers used for the - * ICE_PTP_ADJ_TIME_AT_TIME command. This should be used in conjunction with - * ice_ptp_prep_phy_adj_eth56g to program an atomic adjustment that is - * delayed until a specified target time. - * - * Note that a target time adjustment is not currently supported on E810 - * devices. - */ -static enum ice_status -ice_ptp_prep_phy_adj_target_eth56g(struct ice_hw *hw, u32 target_time) -{ - enum ice_status status; - u8 port; - - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { - if (!(hw->ena_lports & BIT(port))) - continue; - - /* Tx case */ - /* No sub-nanoseconds data */ - status = ice_write_phy_reg_eth56g_lp(hw, port, - PHY_REG_TX_TIMER_CNT_ADJ_L, - 0, true); - if (status) - goto exit_err; - - status = ice_write_phy_reg_eth56g_lp(hw, port, - PHY_REG_TX_TIMER_CNT_ADJ_U, - target_time, true); - if (status) - goto exit_err; - - /* Rx case */ - /* No sub-nanoseconds data */ - status = ice_write_phy_reg_eth56g_lp(hw, port, - PHY_REG_RX_TIMER_CNT_ADJ_L, - 0, true); - if (status) - goto exit_err; - - status = ice_write_phy_reg_eth56g_lp(hw, port, - PHY_REG_RX_TIMER_CNT_ADJ_U, - target_time, true); - if (status) - goto exit_err; - } - - return ICE_SUCCESS; - -exit_err: - ice_debug(hw, ICE_DBG_PTP, "Failed to write target time for port %u, status %d\n", - port, status); - - return status; -} - -/** - * 
ice_ptp_read_port_capture_eth56g - Read a port's local time capture - * @hw: pointer to HW struct - * @port: Port number to read - * @tx_ts: on return, the Tx port time capture - * @rx_ts: on return, the Rx port time capture - * - * Read the port's Tx and Rx local time capture values. - */ -enum ice_status -ice_ptp_read_port_capture_eth56g(struct ice_hw *hw, u8 port, u64 *tx_ts, - u64 *rx_ts) -{ - enum ice_status status; - - /* Tx case */ - status = ice_read_64b_phy_reg_eth56g(hw, port, PHY_REG_TX_CAPTURE_L, - tx_ts); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read REG_TX_CAPTURE, status %d\n", - status); - return status; - } - - ice_debug(hw, ICE_DBG_PTP, "tx_init = %#016llx\n", - (unsigned long long)*tx_ts); - - /* Rx case */ - status = ice_read_64b_phy_reg_eth56g(hw, port, PHY_REG_RX_CAPTURE_L, - rx_ts); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_CAPTURE, status %d\n", - status); - return status; - } - - ice_debug(hw, ICE_DBG_PTP, "rx_init = %#016llx\n", - (unsigned long long)*rx_ts); - - return ICE_SUCCESS; -} - -/** - * ice_ptp_one_port_cmd_eth56g - Prepare a single PHY port for a timer command - * @hw: pointer to HW struct - * @port: Port to which cmd has to be sent - * @cmd: Command to be sent to the port - * @lock_sbq: true if the sideband queue lock must be acquired - * - * Prepare the requested port for an upcoming timer sync command. 
- */ -enum ice_status -ice_ptp_one_port_cmd_eth56g(struct ice_hw *hw, u8 port, - enum ice_ptp_tmr_cmd cmd, bool lock_sbq) -{ - enum ice_status status; - u32 cmd_val, val; - u8 tmr_idx; - - tmr_idx = ice_get_ptp_src_clock_index(hw); - cmd_val = tmr_idx << SEL_PHY_SRC; - switch (cmd) { - case ICE_PTP_INIT_TIME: - cmd_val |= PHY_CMD_INIT_TIME; - break; - case ICE_PTP_INIT_INCVAL: - cmd_val |= PHY_CMD_INIT_INCVAL; - break; - case ICE_PTP_ADJ_TIME: - cmd_val |= PHY_CMD_ADJ_TIME; - break; - case ICE_PTP_ADJ_TIME_AT_TIME: - cmd_val |= PHY_CMD_ADJ_TIME_AT_TIME; - break; - case ICE_PTP_READ_TIME: - cmd_val |= PHY_CMD_READ_TIME; - break; - default: - ice_warn(hw, "Unknown timer command %u\n", cmd); - return ICE_ERR_PARAM; - } - - /* Tx case */ - /* Read, modify, write */ - status = ice_read_phy_reg_eth56g_lp(hw, port, PHY_REG_TX_TMR_CMD, &val, - lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_TMR_CMD, status %d\n", - status); - return status; - } - - /* Modify necessary bits only and perform write */ - val &= ~TS_CMD_MASK; - val |= cmd_val; - - status = ice_write_phy_reg_eth56g_lp(hw, port, PHY_REG_TX_TMR_CMD, val, - lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write back TX_TMR_CMD, status %d\n", - status); - return status; - } - - /* Rx case */ - /* Read, modify, write */ - status = ice_read_phy_reg_eth56g_lp(hw, port, PHY_REG_RX_TMR_CMD, &val, - lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_TMR_CMD, status %d\n", - status); - return status; - } - - /* Modify necessary bits only and perform write */ - val &= ~TS_CMD_MASK; - val |= cmd_val; - - status = ice_write_phy_reg_eth56g_lp(hw, port, PHY_REG_RX_TMR_CMD, val, - lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write back RX_TMR_CMD, status %d\n", - status); - return status; - } - - return ICE_SUCCESS; -} - -/** - * ice_ptp_port_cmd_eth56g - Prepare all ports for a timer command - * @hw: pointer to the HW struct - * @cmd: 
timer command to prepare - * @lock_sbq: true if the sideband queue lock must be acquired - * - * Prepare all ports connected to this device for an upcoming timer sync - * command. - */ -static enum ice_status -ice_ptp_port_cmd_eth56g(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd, - bool lock_sbq) -{ - enum ice_status status; - u8 port; - - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { - if (!(hw->ena_lports & BIT(port))) - continue; - - status = ice_ptp_one_port_cmd_eth56g(hw, port, cmd, lock_sbq); - if (status) - return status; - } - - return ICE_SUCCESS; -} - -/** - * ice_calc_fixed_tx_offset_eth56g - Calculated Fixed Tx offset for a port - * @hw: pointer to the HW struct - * @link_spd: the Link speed to calculate for - * - * Calculate the fixed offset due to known static latency data. - */ -static u64 -ice_calc_fixed_tx_offset_eth56g(struct ice_hw *hw, - enum ice_ptp_link_spd link_spd) -{ - u64 fixed_offset = 0; - return fixed_offset; -} - -/** - * ice_phy_cfg_tx_offset_eth56g - Configure total Tx timestamp offset - * @hw: pointer to the HW struct - * @port: the PHY port to configure - * - * Program the PHY_REG_TOTAL_TX_OFFSET register with the total number of TUs to - * adjust Tx timestamps by. - * - * To avoid overflow, when calculating the offset based on the known static - * latency values, we use measurements in 1/100th of a nanosecond, and divide - * the TUs per second up front. This avoids overflow while allowing - * calculation of the adjustment using integer arithmetic. - */ -enum ice_status ice_phy_cfg_tx_offset_eth56g(struct ice_hw *hw, u8 port) -{ - enum ice_ptp_link_spd link_spd = ICE_PTP_LNK_SPD_10G; - enum ice_status status; - u64 total_offset; - - total_offset = ice_calc_fixed_tx_offset_eth56g(hw, link_spd); - - /* Now that the total offset has been calculated, program it to the - * PHY and indicate that the Tx offset is ready. After this, - * timestamps will be enabled. 
- */ - status = ice_write_64b_phy_reg_eth56g(hw, port, - PHY_REG_TOTAL_TX_OFFSET_L, - total_offset); - if (status) - return status; - - return ice_write_phy_reg_eth56g(hw, port, PHY_REG_TX_OFFSET_READY, 1); -} - -/** - * ice_calc_fixed_rx_offset_eth56g - Calculated the fixed Rx offset for a port - * @hw: pointer to HW struct - * @link_spd: The Link speed to calculate for - * - * Determine the fixed Rx latency for a given link speed. - */ -static u64 -ice_calc_fixed_rx_offset_eth56g(struct ice_hw *hw, - enum ice_ptp_link_spd link_spd) -{ - u64 fixed_offset = 0; - return fixed_offset; -} - -/** - * ice_phy_cfg_rx_offset_eth56g - Configure total Rx timestamp offset - * @hw: pointer to the HW struct - * @port: the PHY port to configure - * - * Program the PHY_REG_TOTAL_RX_OFFSET register with the number of Time Units to - * adjust Rx timestamps by. This combines calculations from the Vernier offset - * measurements taken in hardware with some data about known fixed delay as - * well as adjusting for multi-lane alignment delay. - * - * This function must be called only after the offset registers are valid, - * i.e. after the Vernier calibration wait has passed, to ensure that the PHY - * has measured the offset. - * - * To avoid overflow, when calculating the offset based on the known static - * latency values, we use measurements in 1/100th of a nanosecond, and divide - * the TUs per second up front. This avoids overflow while allowing - * calculation of the adjustment using integer arithmetic. - */ -enum ice_status ice_phy_cfg_rx_offset_eth56g(struct ice_hw *hw, u8 port) -{ - enum ice_status status; - u64 total_offset; - - total_offset = ice_calc_fixed_rx_offset_eth56g(hw, 0); - - /* Now that the total offset has been calculated, program it to the - * PHY and indicate that the Rx offset is ready. After this, - * timestamps will be enabled. 
- */ - status = ice_write_64b_phy_reg_eth56g(hw, port, - PHY_REG_TOTAL_RX_OFFSET_L, - total_offset); - if (status) - return status; - - return ice_write_phy_reg_eth56g(hw, port, PHY_REG_RX_OFFSET_READY, 1); -} - -/** - * ice_read_phy_and_phc_time_eth56g - Simultaneously capture PHC and PHY time - * @hw: pointer to the HW struct - * @port: the PHY port to read - * @phy_time: on return, the 64bit PHY timer value - * @phc_time: on return, the lower 64bits of PHC time - * - * Issue a ICE_PTP_READ_TIME timer command to simultaneously capture the PHY - * and PHC timer values. - */ -static enum ice_status -ice_read_phy_and_phc_time_eth56g(struct ice_hw *hw, u8 port, u64 *phy_time, - u64 *phc_time) -{ - enum ice_status status; - u64 tx_time, rx_time; - u32 zo, lo; - u8 tmr_idx; - - tmr_idx = ice_get_ptp_src_clock_index(hw); - - /* Prepare the PHC timer for a ICE_PTP_READ_TIME capture command */ - ice_ptp_src_cmd(hw, ICE_PTP_READ_TIME); - - /* Prepare the PHY timer for a ICE_PTP_READ_TIME capture command */ - status = ice_ptp_one_port_cmd_eth56g(hw, port, ICE_PTP_READ_TIME, true); - if (status) - return status; - - /* Issue the sync to start the ICE_PTP_READ_TIME capture */ - ice_ptp_exec_tmr_cmd(hw); - ice_ptp_clean_cmd(hw); - - /* Read the captured PHC time from the shadow time registers */ - zo = rd32(hw, GLTSYN_SHTIME_0(tmr_idx)); - lo = rd32(hw, GLTSYN_SHTIME_L(tmr_idx)); - *phc_time = (u64)lo << 32 | zo; - - /* Read the captured PHY time from the PHY shadow registers */ - status = ice_ptp_read_port_capture_eth56g(hw, port, &tx_time, &rx_time); - if (status) - return status; - - /* If the PHY Tx and Rx timers don't match, log a warning message. - * Note that this should not happen in normal circumstances since the - * driver always programs them together. 
- */ - if (tx_time != rx_time) - ice_warn(hw, "PHY port %u Tx and Rx timers do not match, tx_time 0x%016llX, rx_time 0x%016llX\n", - port, (unsigned long long)tx_time, - (unsigned long long)rx_time); - - *phy_time = tx_time; - - return ICE_SUCCESS; -} - -/** - * ice_sync_phy_timer_eth56g - Synchronize the PHY timer with PHC timer - * @hw: pointer to the HW struct - * @port: the PHY port to synchronize - * - * Perform an adjustment to ensure that the PHY and PHC timers are in sync. - * This is done by issuing a ICE_PTP_READ_TIME command which triggers a - * simultaneous read of the PHY timer and PHC timer. Then we use the - * difference to calculate an appropriate 2s complement addition to add - * to the PHY timer in order to ensure it reads the same value as the - * primary PHC timer. +static u32 cgu_bypass_mux_port(struct ice_hw *hw, u8 port) +{ + return (port % hw->phy_ports) + + E825C_CGU_BYPASS_MUX_OFFSET; +} + +/** + * ice_cgu_bypass_mux_port_active_e825c - check if the given port is set active + * @hw: pointer to the HW struct + * @port: number of the port + * @active: output flag showing if port is active */ -static enum ice_status ice_sync_phy_timer_eth56g(struct ice_hw *hw, u8 port) +int +ice_cgu_bypass_mux_port_active_e825c(struct ice_hw *hw, u8 port, bool *active) { - u64 phc_time, phy_time, difference; - enum ice_status status; + union nac_cgu_dword11_e825c dw11; + int err; - if (!ice_ptp_lock(hw)) { - ice_debug(hw, ICE_DBG_PTP, "Failed to acquire PTP semaphore\n"); - return ICE_ERR_NOT_READY; - } + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD11_E825C, &dw11.val); + if (err) + return err; - status = ice_read_phy_and_phc_time_eth56g(hw, port, &phy_time, - &phc_time); - if (status) - goto err_unlock; + if (dw11.field.synce_s_byp_clk == cgu_bypass_mux_port(hw, port)) + *active = true; + else + *active = false; - /* Calculate the amount required to add to the port time in order for - * it to match the PHC time. 
- * - * Note that the port adjustment is done using 2s complement - * arithmetic. This is convenient since it means that we can simply - * calculate the difference between the PHC time and the port time, - * and it will be interpreted correctly. - */ + return 0; +} - ice_ptp_src_cmd(hw, ICE_PTP_NOP); - difference = phc_time - phy_time; +/** + * ice_cfg_cgu_bypass_mux_e825c - configure the CGU bypass mux for a port + * @hw: pointer to the HW struct + * @port: number of the port + * @clock_1588: true to enable 1588 reference, false to recover from port + * @ena: true to enable the reference, false to disable + */ +int +ice_cfg_cgu_bypass_mux_e825c(struct ice_hw *hw, u8 port, bool clock_1588, + unsigned int ena) +{ + union nac_cgu_dword11_e825c dw11; + union nac_cgu_dword10_e825c dw10; + int err; - status = ice_ptp_prep_port_adj_eth56g(hw, port, (s64)difference, true); - if (status) - goto err_unlock; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD11_E825C, &dw11.val); + if (err) + return err; - status = ice_ptp_one_port_cmd_eth56g(hw, port, ICE_PTP_ADJ_TIME, true); - if (status) - goto err_unlock; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD10_E825C, &dw10.val); + if (err) + return err; - /* Issue the sync to activate the time adjustment */ - ice_ptp_exec_tmr_cmd(hw); - ice_ptp_clean_cmd(hw); + /* ref_clk_byp1_div */ + dw10.field.synce_ethclko_sel = 0x1; - /* Re-capture the timer values to flush the command registers and - * verify that the time was properly adjusted. 
- */ + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD10_E825C, dw10.val); + if (err) + return err; - status = ice_read_phy_and_phc_time_eth56g(hw, port, &phy_time, - &phc_time); - if (status) - goto err_unlock; + if (!ena) + /* net_ref_clk0 */ + dw11.field.synce_s_byp_clk = 0x0; + else + dw11.field.synce_s_byp_clk = cgu_bypass_mux_port(hw, port); - ice_info(hw, "Port %u PHY time synced to PHC: 0x%016llX, 0x%016llX\n", - port, (unsigned long long)phy_time, - (unsigned long long)phc_time); + return ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD11_E825C, dw11.val); +} -err_unlock: - ice_ptp_unlock(hw); - return status; +/** + * ice_get_div_e825c - get the divider for the given speed + * @link_speed: link speed of the port + * @divider: output value, calculated divider + */ +static int ice_get_div_e825c(u16 link_speed, u8 *divider) +{ + switch (link_speed) { + case ICE_AQ_LINK_SPEED_100GB: + case ICE_AQ_LINK_SPEED_50GB: + case ICE_AQ_LINK_SPEED_25GB: + *divider = 10; + break; + case ICE_AQ_LINK_SPEED_40GB: + case ICE_AQ_LINK_SPEED_10GB: + *divider = 4; + break; + case ICE_AQ_LINK_SPEED_5GB: + case ICE_AQ_LINK_SPEED_2500MB: + case ICE_AQ_LINK_SPEED_1000MB: + *divider = 2; + break; + case ICE_AQ_LINK_SPEED_100MB: + *divider = 1; + break; + default: + return ICE_ERR_NOT_SUPPORTED; + } + return 0; } /** - * ice_stop_phy_timer_eth56g - Stop the PHY clock timer + * ice_cfg_synce_ethdiv_e825c - set the divider on the mux * @hw: pointer to the HW struct - * @port: the PHY port to stop - * @soft_reset: if true, hold the SOFT_RESET bit of PHY_REG_PS - * - * Stop the clock of a PHY port. This must be done as part of the flow to - * re-calibrate Tx and Rx timestamping offsets whenever the clock time is - * initialized or when link speed changes. 
+ * @divider: output parameter, returns used divider value */ -enum ice_status -ice_stop_phy_timer_eth56g(struct ice_hw *hw, u8 port, bool soft_reset) +int ice_cfg_synce_ethdiv_e825c(struct ice_hw *hw, u8 *divider) { - enum ice_status status; + union nac_cgu_dword10_e825c dw10; + int err; + u16 link_speed; - status = ice_write_phy_reg_eth56g(hw, port, PHY_REG_TX_OFFSET_READY, 0); - if (status) - return status; + link_speed = hw->port_info->phy.link_info.link_speed; + err = ice_get_div_e825c(link_speed, divider); + if (err) + return err; - status = ice_write_phy_reg_eth56g(hw, port, PHY_REG_RX_OFFSET_READY, 0); - if (status) - return status; + err = ice_read_cgu_reg_e82x(hw, NAC_CGU_DWORD10_E825C, &dw10.val); + if (err) + return err; - ice_debug(hw, ICE_DBG_PTP, "Disabled clock on PHY port %u\n", port); + /* programmable divider value (from 2 to 16) minus 1 for ETHCLKOUT */ + dw10.field.synce_ethdiv_m1 = *divider + 1; - return ICE_SUCCESS; + err = ice_write_cgu_reg_e82x(hw, NAC_CGU_DWORD10_E825C, dw10.val); + return err; } /** - * ice_start_phy_timer_eth56g - Start the PHY clock timer - * @hw: pointer to the HW struct - * @port: the PHY port to start - * @bypass: unused, for compatibility - * - * Start the clock of a PHY port. This must be done as part of the flow to - * re-calibrate Tx and Rx timestamping offsets whenever the clock time is - * initialized or when link speed changes. + * ice_init_cgu_e82x - Initialize CGU with settings from firmware + * @hw: pointer to the HW structure * + * Initialize the Clock Generation Unit of the E822/E823/E825 device. 
*/ -enum ice_status -ice_start_phy_timer_eth56g(struct ice_hw *hw, u8 port, bool bypass) +static int ice_init_cgu_e82x(struct ice_hw *hw) { - enum ice_status status; - u32 lo, hi; - u64 incval; - u8 tmr_idx; - - tmr_idx = ice_get_ptp_src_clock_index(hw); - - status = ice_stop_phy_timer_eth56g(hw, port, false); - if (status) - return status; + struct ice_ts_func_info *ts_info = &hw->func_caps.ts_func_info; + enum ice_time_ref_freq time_ref_freq; + enum ice_clk_src clk_src; + int err; - ice_ptp_src_cmd(hw, ICE_PTP_NOP); + /* Disable sticky lock detection so lock status reported is accurate */ + if (ice_is_e825c(hw)) + err = ice_cfg_cgu_pll_dis_sticky_bits_e825c(hw); + else + err = ice_cfg_cgu_pll_dis_sticky_bits_e822(hw); + if (err) + return err; - lo = rd32(hw, GLTSYN_INCVAL_L(tmr_idx)); - hi = rd32(hw, GLTSYN_INCVAL_H(tmr_idx)); - incval = (u64)hi << 32 | lo; + /* Configure the CGU PLL using the parameters from the function + * capabilities. + */ + time_ref_freq = (enum ice_time_ref_freq)ts_info->time_ref; + clk_src = (enum ice_clk_src)ts_info->clk_src; + if (ice_is_e825c(hw)) + err = ice_cfg_cgu_pll_e825c(hw, &time_ref_freq, &clk_src); + else + err = ice_cfg_cgu_pll_e822(hw, &time_ref_freq, &clk_src); + if (err) { + ice_warn(hw, "Failed to lock TS PLL to predefined frequency. 
Retrying with fallback frequency.\n"); + + /* Try to lock to internal 25 MHz TCXO as a fallback */ + time_ref_freq = ICE_TIME_REF_FREQ_25_000; + clk_src = ICE_CLK_SRC_TCX0; + if (ice_is_e825c(hw)) + err = ice_cfg_cgu_pll_e825c(hw, &time_ref_freq, + &clk_src); + else + err = ice_cfg_cgu_pll_e822(hw, &time_ref_freq, + &clk_src); - status = ice_write_40b_phy_reg_eth56g(hw, port, PHY_REG_TIMETUS_L, - incval); - if (status) - return status; + if (err) + ice_warn(hw, "Failed to lock TS PLL to fallback frequency.\n"); + } - status = ice_ptp_one_port_cmd_eth56g(hw, port, ICE_PTP_INIT_INCVAL, - true); - if (status) - return status; + return err; +} - ice_ptp_exec_tmr_cmd(hw); +/** + * ice_ptp_cgu_err_reporting - Enable/disable error reporting for CGU + * @hw: pointer to HW struct + * @enable: true if reporting should be enabled + * + * Enable or disable error events to be reported through Admin Queue. + * + * Return: 0 on success, error code otherwise + */ +static int ice_ptp_cgu_err_reporting(struct ice_hw *hw, bool enable) +{ + int err; - status = ice_sync_phy_timer_eth56g(hw, port); - if (status) - return status; + err = ice_aq_cfg_cgu_err(hw, enable, enable, NULL); + if (err) { + ice_debug(hw, ICE_DBG_PTP, + "Failed to %s CGU error reporting, err %d\n", + enable ? 
"enable" : "disable", err); + return err; + } - /* Program the Tx offset */ - status = ice_phy_cfg_tx_offset_eth56g(hw, port); - if (status) - return status; + return 0; +} - /* Program the Rx offset */ - status = ice_phy_cfg_rx_offset_eth56g(hw, port); - if (status) - return status; +/** + * ice_ptp_process_cgu_err - Handle reported CGU error + * @hw: pointer to HW struct + * @event: reported CGU error descriptor + */ +void ice_ptp_process_cgu_err(struct ice_hw *hw, struct ice_rq_event_info *event) +{ + u8 err_type = event->desc.params.cgu_err.err_type; - ice_debug(hw, ICE_DBG_PTP, "Enabled clock on PHY port %u\n", port); + if (err_type & ICE_AQC_CGU_ERR_TIMESYNC_LOCK_LOSS) { + ice_warn(hw, "TimeSync PLL lock lost. Retrying to acquire lock with default PLL configuration.\n"); + ice_init_cgu_e82x(hw); + } - return ICE_SUCCESS; + /* Reenable CGU error reporting */ + ice_ptp_cgu_err_reporting(hw, true); } /** - * ice_ptp_init_phc_eth56g - Perform E822 specific PHC initialization + * ice_ptp_tmr_cmd_to_src_reg - Convert to source timer command value * @hw: pointer to HW struct + * @cmd: Timer command * - * Perform PHC initialization steps specific to E822 devices. + * Returns: the source timer command register value for the given PTP timer + * command. 
*/ -static enum ice_status ice_ptp_init_phc_eth56g(struct ice_hw *hw) +static u32 ice_ptp_tmr_cmd_to_src_reg(struct ice_hw *hw, + enum ice_ptp_tmr_cmd cmd) { - enum ice_status status = ICE_SUCCESS; - u32 regval; + u32 cmd_val, tmr_idx; - /* Enable reading switch and PHY registers over the sideband queue */ -#define PF_SB_REM_DEV_CTL_SWITCH_READ BIT(1) -#define PF_SB_REM_DEV_CTL_PHY0 BIT(2) - regval = rd32(hw, PF_SB_REM_DEV_CTL); - regval |= (PF_SB_REM_DEV_CTL_SWITCH_READ | - PF_SB_REM_DEV_CTL_PHY0); - wr32(hw, PF_SB_REM_DEV_CTL, regval); + switch (cmd) { + case ICE_PTP_INIT_TIME: + cmd_val = GLTSYN_CMD_INIT_TIME; + break; + case ICE_PTP_INIT_INCVAL: + cmd_val = GLTSYN_CMD_INIT_INCVAL; + break; + case ICE_PTP_ADJ_TIME: + cmd_val = GLTSYN_CMD_ADJ_TIME; + break; + case ICE_PTP_ADJ_TIME_AT_TIME: + cmd_val = GLTSYN_CMD_ADJ_INIT_TIME; + break; + case ICE_PTP_NOP: + case ICE_PTP_READ_TIME: + cmd_val = GLTSYN_CMD_READ_TIME; + break; + default: + ice_warn(hw, "Ignoring unrecognized timer command %u\n", cmd); + cmd_val = 0; + } - /* Initialize the Clock Generation Unit */ - status = ice_init_cgu_e822(hw); + tmr_idx = ice_get_ptp_src_clock_index(hw) << SEL_CPK_SRC; - return status; + return tmr_idx | cmd_val; } /** - * ice_ptp_read_tx_hwtstamp_status_eth56g - Get the current TX timestamp - * status mask. Returns the mask of ports where TX timestamps are available - * @hw: pointer to the HW struct - * @ts_status: the timestamp mask pointer + * ice_ptp_tmr_cmd_to_port_reg - Convert to port timer command value + * @hw: pointer to HW struct + * @cmd: Timer command + * + * Note that some hardware families use a different command register value for + * the PHY ports, while other hardware families use the same register values + * as the source timer. + * + * Returns: the PHY port timer command register value for the given PTP timer + * command. 
*/ -enum ice_status -ice_ptp_read_tx_hwtstamp_status_eth56g(struct ice_hw *hw, u32 *ts_status) +static u32 ice_ptp_tmr_cmd_to_port_reg(struct ice_hw *hw, + enum ice_ptp_tmr_cmd cmd) { - enum ice_status status; + u32 cmd_val, tmr_idx; - status = ice_read_phy_eth56g_raw_lp(hw, PHY_PTP_INT_STATUS, ts_status, - true); - if (status) - return status; + /* Certain hardware families share the same register values for the + * port register and source timer register. + */ + switch (hw->phy_model) { + case ICE_PHY_E810: + case ICE_PHY_E830: + return ice_ptp_tmr_cmd_to_src_reg(hw, cmd) & TS_CMD_MASK_E810; + default: + break; + } + + switch (cmd) { + case ICE_PTP_INIT_TIME: + cmd_val = PHY_CMD_INIT_TIME; + break; + case ICE_PTP_INIT_INCVAL: + cmd_val = PHY_CMD_INIT_INCVAL; + break; + case ICE_PTP_ADJ_TIME: + cmd_val = PHY_CMD_ADJ_TIME; + break; + case ICE_PTP_ADJ_TIME_AT_TIME: + cmd_val = PHY_CMD_ADJ_TIME_AT_TIME; + break; + case ICE_PTP_READ_TIME: + cmd_val = PHY_CMD_READ_TIME; + break; + case ICE_PTP_NOP: + cmd_val = 0; + break; + default: + ice_warn(hw, "Ignoring unrecognized timer command %u\n", cmd); + cmd_val = 0; + } - ice_debug(hw, ICE_DBG_PTP, "PHY interrupt status: %x\n", *ts_status); + tmr_idx = ice_get_ptp_src_clock_index(hw) << SEL_PHY_SRC; - return ICE_SUCCESS; + return tmr_idx | cmd_val; } /** - * ice_ptp_init_phy_cfg - Get the current TX timestamp status - * mask. Returns the mask of ports where TX timestamps are available - * @hw: pointer to the HW struct + * ice_ptp_src_cmd - Prepare source timer for a timer command + * @hw: pointer to HW structure + * @cmd: Timer command + * + * Prepare the source timer for an upcoming timer sync command. 
*/ -enum ice_status -ice_ptp_init_phy_cfg(struct ice_hw *hw) +void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd) { - enum ice_status status; - u32 phy_rev; + u32 cmd_val = ice_ptp_tmr_cmd_to_src_reg(hw, cmd); - status = ice_read_phy_eth56g_raw_lp(hw, PHY_REG_REVISION, &phy_rev, - true); - if (status) - return status; - - if (phy_rev == PHY_REVISION_ETH56G) { - hw->phy_cfg = ICE_PHY_ETH56G; - return ICE_SUCCESS; - } + wr32(hw, GLTSYN_CMD, cmd_val); +} - if (ice_is_e810(hw)) - hw->phy_cfg = ICE_PHY_E810; - else - hw->phy_cfg = ICE_PHY_E822; +/** + * ice_ptp_exec_tmr_cmd - Execute all prepared timer commands + * @hw: pointer to HW struct + * + * Write the SYNC_EXEC_CMD bit to the GLTSYN_CMD_SYNC register, and flush the + * write immediately. This triggers the hardware to begin executing all of the + * source and PHY timer commands synchronously. + */ +static void ice_ptp_exec_tmr_cmd(struct ice_hw *hw) +{ + wr32(hw, GLTSYN_CMD_SYNC, SYNC_EXEC_CMD); + ice_flush(hw); +} - return ICE_SUCCESS; +/** + * ice_ptp_zero_syn_dlay - Set synchronization delay to zero + * @hw: pointer to HW struct + * + * Zero E810 and E830 specific PTP hardware clock synchronization delay. + */ +static void ice_ptp_zero_syn_dlay(struct ice_hw *hw) +{ + wr32(hw, GLTSYN_SYNC_DLAY, 0); + ice_flush(hw); } -/* ---------------------------------------------------------------------------- - * E822 family functions +/* E822 family functions * * The following functions operate on the E822 family of devices. */ @@ -1963,29 +1109,29 @@ static bool ice_is_40b_phy_reg_e822(u16 low_addr, u16 *high_addr) * * Read a PHY register for the given port over the device sideband queue. 
*/ -static enum ice_status +static int ice_read_phy_reg_e822_lp(struct ice_hw *hw, u8 port, u16 offset, u32 *val, bool lock_sbq) { struct ice_sbq_msg_input msg = {0}; - enum ice_status status; + int err; ice_fill_phy_msg_e822(&msg, port, offset); msg.opcode = ice_sbq_msg_rd; - status = ice_sbq_rw_reg_lp(hw, &msg, lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to send message to phy, status %d\n", - status); - return status; + err = ice_sbq_rw_reg_lp(hw, &msg, ICE_AQ_FLAG_RD, lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to send message to PHY, err %d\n", + err); + return err; } *val = msg.data; - return ICE_SUCCESS; + return 0; } -enum ice_status +int ice_read_phy_reg_e822(struct ice_hw *hw, u8 port, u16 offset, u32 *val) { return ice_read_phy_reg_e822_lp(hw, port, offset, val, true); @@ -2003,12 +1149,12 @@ ice_read_phy_reg_e822(struct ice_hw *hw, u8 port, u16 offset, u32 *val) * The high offset is looked up. This function only operates on registers * known to be split into a lower 8 bit chunk and an upper 32 bit chunk. */ -static enum ice_status +static int ice_read_40b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 *val) { - enum ice_status status; u32 low, high; u16 high_addr; + int err; /* Only operate on registers known to be split into two 32bit * registers. 
@@ -2019,23 +1165,23 @@ ice_read_40b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 *val) return ICE_ERR_PARAM; } - status = ice_read_phy_reg_e822(hw, port, low_addr, &low); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read from low register 0x%08x\n, status %d", - low_addr, status); - return status; + err = ice_read_phy_reg_e822(hw, port, low_addr, &low); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read from low register 0x%08x\n, err %d", + low_addr, err); + return err; } - status = ice_read_phy_reg_e822(hw, port, high_addr, &high); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read from high register 0x%08x\n, status %d", - high_addr, status); - return status; + err = ice_read_phy_reg_e822(hw, port, high_addr, &high); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read from high register 0x%08x\n, err %d", + high_addr, err); + return err; } *val = (u64)high << P_REG_40B_HIGH_S | (low & P_REG_40B_LOW_M); - return ICE_SUCCESS; + return 0; } /** @@ -2050,12 +1196,12 @@ ice_read_40b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 *val) * The high offset is looked up. This function only operates on registers * known to be two parts of a 64bit value. */ -static enum ice_status +static int ice_read_64b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 *val) { - enum ice_status status; u32 low, high; u16 high_addr; + int err; /* Only operate on registers known to be split into two 32bit * registers. 
@@ -2066,23 +1212,23 @@ ice_read_64b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 *val) return ICE_ERR_PARAM; } - status = ice_read_phy_reg_e822(hw, port, low_addr, &low); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read from low register 0x%08x\n, status %d", - low_addr, status); - return status; + err = ice_read_phy_reg_e822(hw, port, low_addr, &low); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read from low register 0x%08x\n, err %d", + low_addr, err); + return err; } - status = ice_read_phy_reg_e822(hw, port, high_addr, &high); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read from high register 0x%08x\n, status %d", - high_addr, status); - return status; + err = ice_read_phy_reg_e822(hw, port, high_addr, &high); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read from high register 0x%08x\n, err %d", + high_addr, err); + return err; } *val = (u64)high << 32 | low; - return ICE_SUCCESS; + return 0; } /** @@ -2095,28 +1241,28 @@ ice_read_64b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 *val) * * Write a PHY register for the given port over the device sideband queue. 
*/ -static enum ice_status +static int ice_write_phy_reg_e822_lp(struct ice_hw *hw, u8 port, u16 offset, u32 val, bool lock_sbq) { struct ice_sbq_msg_input msg = {0}; - enum ice_status status; + int err; ice_fill_phy_msg_e822(&msg, port, offset); msg.opcode = ice_sbq_msg_wr; msg.data = val; - status = ice_sbq_rw_reg_lp(hw, &msg, lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to send message to phy, status %d\n", - status); - return status; + err = ice_sbq_rw_reg_lp(hw, &msg, ICE_AQ_FLAG_RD, lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to send message to PHY, err %d\n", + err); + return err; } - return ICE_SUCCESS; + return 0; } -enum ice_status +int ice_write_phy_reg_e822(struct ice_hw *hw, u8 port, u16 offset, u32 val) { return ice_write_phy_reg_e822_lp(hw, port, offset, val, true); @@ -2132,12 +1278,12 @@ ice_write_phy_reg_e822(struct ice_hw *hw, u8 port, u16 offset, u32 val) * Write the provided 40b value to the two associated registers by splitting * it up into two chunks, the lower 8 bits and the upper 32 bits. */ -static enum ice_status +static int ice_write_40b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 val) { - enum ice_status status; u32 low, high; u16 high_addr; + int err; /* Only operate on registers known to be split into a lower 8 bit * register and an upper 32 bit register. 
@@ -2151,21 +1297,21 @@ ice_write_40b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 val) low = (u32)(val & P_REG_40B_LOW_M); high = (u32)(val >> P_REG_40B_HIGH_S); - status = ice_write_phy_reg_e822(hw, port, low_addr, low); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write to low register 0x%08x\n, status %d", - low_addr, status); - return status; + err = ice_write_phy_reg_e822(hw, port, low_addr, low); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write to low register 0x%08x\n, err %d", + low_addr, err); + return err; } - status = ice_write_phy_reg_e822(hw, port, high_addr, high); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write to high register 0x%08x\n, status %d", - high_addr, status); - return status; + err = ice_write_phy_reg_e822(hw, port, high_addr, high); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write to high register 0x%08x\n, err %d", + high_addr, err); + return err; } - return ICE_SUCCESS; + return 0; } /** @@ -2180,12 +1326,12 @@ ice_write_40b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 val) * up. This function only operates on registers known to be two parts of * a 64bit value. */ -static enum ice_status +static int ice_write_64b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 val) { - enum ice_status status; u32 low, high; u16 high_addr; + int err; /* Only operate on registers known to be split into two 32bit * registers. 
@@ -2199,21 +1345,21 @@ ice_write_64b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 val) low = ICE_LO_DWORD(val); high = ICE_HI_DWORD(val); - status = ice_write_phy_reg_e822(hw, port, low_addr, low); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write to low register 0x%08x\n, status %d", - low_addr, status); - return status; + err = ice_write_phy_reg_e822(hw, port, low_addr, low); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write to low register 0x%08x\n, err %d", + low_addr, err); + return err; } - status = ice_write_phy_reg_e822(hw, port, high_addr, high); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write to high register 0x%08x\n, status %d", - high_addr, status); - return status; + err = ice_write_phy_reg_e822(hw, port, high_addr, high); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write to high register 0x%08x\n, err %d", + high_addr, err); + return err; } - return ICE_SUCCESS; + return 0; } /** @@ -2225,7 +1371,7 @@ ice_write_64b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 val) * Fill a message buffer for accessing a register in a quad shared between * multiple PHYs. */ -static enum ice_status +static int ice_fill_quad_msg_e822(struct ice_sbq_msg_input *msg, u8 quad, u16 offset) { u32 addr; @@ -2243,7 +1389,7 @@ ice_fill_quad_msg_e822(struct ice_sbq_msg_input *msg, u8 quad, u16 offset) msg->msg_addr_low = ICE_LO_WORD(addr); msg->msg_addr_high = ICE_HI_WORD(addr); - return ICE_SUCCESS; + return 0; } /** @@ -2257,31 +1403,32 @@ ice_fill_quad_msg_e822(struct ice_sbq_msg_input *msg, u8 quad, u16 offset) * Read a quad register over the device sideband queue. Quad registers are * shared between multiple PHYs. 
*/ -static enum ice_status +static int ice_read_quad_reg_e822_lp(struct ice_hw *hw, u8 quad, u16 offset, u32 *val, bool lock_sbq) { struct ice_sbq_msg_input msg = {0}; - enum ice_status status; + int err; - status = ice_fill_quad_msg_e822(&msg, quad, offset); - if (status) - goto exit_err; + err = ice_fill_quad_msg_e822(&msg, quad, offset); + if (err) + return err; msg.opcode = ice_sbq_msg_rd; - status = ice_sbq_rw_reg_lp(hw, &msg, lock_sbq); -exit_err: - if (status) - ice_debug(hw, ICE_DBG_PTP, "Failed to send message to phy, status %d\n", - status); - else - *val = msg.data; + err = ice_sbq_rw_reg_lp(hw, &msg, ICE_AQ_FLAG_RD, lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to send message to PHY, err %d\n", + err); + return err; + } - return status; + *val = msg.data; + + return 0; } -enum ice_status +int ice_read_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 *val) { return ice_read_quad_reg_e822_lp(hw, quad, offset, val, true); @@ -2298,30 +1445,31 @@ ice_read_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 *val) * Write a quad register over the device sideband queue. Quad registers are * shared between multiple PHYs. 
*/ -static enum ice_status +static int ice_write_quad_reg_e822_lp(struct ice_hw *hw, u8 quad, u16 offset, u32 val, bool lock_sbq) { struct ice_sbq_msg_input msg = {0}; - enum ice_status status; + int err; - status = ice_fill_quad_msg_e822(&msg, quad, offset); - if (status) - goto exit_err; + err = ice_fill_quad_msg_e822(&msg, quad, offset); + if (err) + return err; msg.opcode = ice_sbq_msg_wr; msg.data = val; - status = ice_sbq_rw_reg_lp(hw, &msg, lock_sbq); -exit_err: - if (status) - ice_debug(hw, ICE_DBG_PTP, "Failed to send message to phy, status %d\n", - status); + err = ice_sbq_rw_reg_lp(hw, &msg, ICE_AQ_FLAG_RD, lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to send message to PHY, err %d\n", + err); + return err; + } - return status; + return 0; } -enum ice_status +int ice_write_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 val) { return ice_write_quad_reg_e822_lp(hw, quad, offset, val, true); @@ -2338,28 +1486,28 @@ ice_write_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 val) * quad memory block that is shared between the internal PHYs of the E822 * family of devices. 
*/ -static enum ice_status +static int ice_read_phy_tstamp_e822(struct ice_hw *hw, u8 quad, u8 idx, u64 *tstamp) { - enum ice_status status; u16 lo_addr, hi_addr; u32 lo, hi; + int err; lo_addr = (u16)TS_L(Q_REG_TX_MEMORY_BANK_START, idx); hi_addr = (u16)TS_H(Q_REG_TX_MEMORY_BANK_START, idx); - status = ice_read_quad_reg_e822(hw, quad, lo_addr, &lo); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read low PTP timestamp register, status %d\n", - status); - return status; + err = ice_read_quad_reg_e822(hw, quad, lo_addr, &lo); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read low PTP timestamp register, err %d\n", + err); + return err; } - status = ice_read_quad_reg_e822(hw, quad, hi_addr, &hi); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read high PTP timestamp register, status %d\n", - status); - return status; + err = ice_read_quad_reg_e822(hw, quad, hi_addr, &hi); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read high PTP timestamp register, err %d\n", + err); + return err; } /* For E822 based internal PHYs, the timestamp is reported with the @@ -2368,7 +1516,7 @@ ice_read_phy_tstamp_e822(struct ice_hw *hw, u8 quad, u8 idx, u64 *tstamp) */ *tstamp = ((u64)hi) << TS_PHY_HIGH_S | ((u64)lo & TS_PHY_LOW_M); - return ICE_SUCCESS; + return 0; } /** @@ -2377,33 +1525,63 @@ ice_read_phy_tstamp_e822(struct ice_hw *hw, u8 quad, u8 idx, u64 *tstamp) * @quad: the quad to read from * @idx: the timestamp index to reset * - * Clear a timestamp, resetting its valid bit, from the PHY quad block that is - * shared between the internal PHYs on the E822 devices. + * Read the timestamp out of the quad to clear its timestamp status bit from + * the PHY quad block that is shared between the internal PHYs of the E822 + * devices. + * + * Note that unlike E810, software cannot directly write to the quad memory + * bank registers. E822 relies on the ice_get_phy_tx_tstamp_ready() function + * to determine which timestamps are valid. 
Reading a timestamp auto-clears + * the valid bit. + * + * To directly clear the contents of the timestamp block entirely, discarding + * all timestamp data at once, software should instead use + * ice_ptp_reset_ts_memory_quad_e822(). + * + * This function should only be called on an idx whose bit is set according to + * ice_get_phy_tx_tstamp_ready(). */ -static enum ice_status +static int ice_clear_phy_tstamp_e822(struct ice_hw *hw, u8 quad, u8 idx) { - enum ice_status status; - u16 lo_addr, hi_addr; - - lo_addr = (u16)TS_L(Q_REG_TX_MEMORY_BANK_START, idx); - hi_addr = (u16)TS_H(Q_REG_TX_MEMORY_BANK_START, idx); + u64 unused_tstamp; + int err; - status = ice_write_quad_reg_e822(hw, quad, lo_addr, 0); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to clear low PTP timestamp register, status %d\n", - status); - return status; + err = ice_read_phy_tstamp_e822(hw, quad, idx, &unused_tstamp); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read the timestamp register for quad %u, idx %u, err %d\n", + quad, idx, err); + return err; } - status = ice_write_quad_reg_e822(hw, quad, hi_addr, 0); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to clear high PTP timestamp register, status %d\n", - status); - return status; - } + return 0; +} - return ICE_SUCCESS; +/** + * ice_ptp_reset_ts_memory_quad_e822 - Clear all timestamps from the quad block + * @hw: pointer to the HW struct + * @quad: the quad to read from + * + * Clear all timestamps from the PHY quad block that is shared between the + * internal PHYs on the E822 devices. 
+ */ +void ice_ptp_reset_ts_memory_quad_e822(struct ice_hw *hw, u8 quad) +{ + ice_write_quad_reg_e822(hw, quad, Q_REG_TS_CTRL, Q_REG_TS_CTRL_M); + ice_write_quad_reg_e822(hw, quad, Q_REG_TS_CTRL, ~(u32)Q_REG_TS_CTRL_M); +} + +/** + * ice_ptp_reset_ts_memory_e822 - Clear all timestamps from all quad blocks + * @hw: pointer to the HW struct + */ +static void ice_ptp_reset_ts_memory_e822(struct ice_hw *hw) +{ + u8 quad; + + for (quad = 0; quad < ICE_MAX_QUAD; quad++) { + ice_ptp_reset_ts_memory_quad_e822(hw, quad); + } } /** @@ -2412,23 +1590,23 @@ ice_clear_phy_tstamp_e822(struct ice_hw *hw, u8 quad, u8 idx) * * Set the window length used for the vernier port calibration process. */ -enum ice_status ice_ptp_set_vernier_wl(struct ice_hw *hw) +int ice_ptp_set_vernier_wl(struct ice_hw *hw) { u8 port; - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { - enum ice_status status; + for (port = 0; port < hw->phy_ports; port++) { + int err; - status = ice_write_phy_reg_e822_lp(hw, port, P_REG_WL, - PTP_VERNIER_WL, true); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to set vernier window length for port %u, status %d\n", - port, status); - return status; + err = ice_write_phy_reg_e822_lp(hw, port, P_REG_WL, + PTP_VERNIER_WL, true); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to set vernier window length for port %u, err %d\n", + port, err); + return err; } } - return ICE_SUCCESS; + return 0; } /** @@ -2437,9 +1615,9 @@ enum ice_status ice_ptp_set_vernier_wl(struct ice_hw *hw) * * Perform PHC initialization steps specific to E822 devices. 
*/ -static enum ice_status ice_ptp_init_phc_e822(struct ice_hw *hw) +static int ice_ptp_init_phc_e822(struct ice_hw *hw) { - enum ice_status status; + int err; u32 regval; /* Enable reading switch and PHY registers over the sideband queue */ @@ -2451,9 +1629,14 @@ static enum ice_status ice_ptp_init_phc_e822(struct ice_hw *hw) wr32(hw, PF_SB_REM_DEV_CTL, regval); /* Initialize the Clock Generation Unit */ - status = ice_init_cgu_e822(hw); - if (status) - return status; + err = ice_init_cgu_e82x(hw); + if (err) + return err; + + /* Enable CGU error reporting */ + err = ice_ptp_cgu_err_reporting(hw, true); + if (err) + return err; /* Set window length for all the ports */ return ice_ptp_set_vernier_wl(hw); @@ -2469,11 +1652,11 @@ static enum ice_status ice_ptp_init_phc_e822(struct ice_hw *hw) * command. The time value is the upper 32 bits of the PHY timer, usually in * units of nominal nanoseconds. */ -static enum ice_status +static int ice_ptp_prep_phy_time_e822(struct ice_hw *hw, u32 time) { - enum ice_status status; u64 phy_time; + int err; u8 port; /* The time represents the upper 32 bits of the PHY timer, so we need @@ -2481,30 +1664,30 @@ ice_ptp_prep_phy_time_e822(struct ice_hw *hw, u32 time) */ phy_time = (u64)time << 32; - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { + for (port = 0; port < hw->phy_ports; port++) { /* Tx case */ - status = ice_write_64b_phy_reg_e822(hw, port, - P_REG_TX_TIMER_INC_PRE_L, - phy_time); - if (status) + err = ice_write_64b_phy_reg_e822(hw, port, + P_REG_TX_TIMER_INC_PRE_L, + phy_time); + if (err) goto exit_err; /* Rx case */ - status = ice_write_64b_phy_reg_e822(hw, port, - P_REG_RX_TIMER_INC_PRE_L, - phy_time); - if (status) + err = ice_write_64b_phy_reg_e822(hw, port, + P_REG_RX_TIMER_INC_PRE_L, + phy_time); + if (err) goto exit_err; } - return ICE_SUCCESS; + return 0; exit_err: - ice_debug(hw, ICE_DBG_PTP, "Failed to write init time for port %u, status %d\n", - port, status); + ice_debug(hw, ICE_DBG_PTP, "Failed to 
write init time for port %u, err %d\n", + port, err); - return status; + return err; } /** @@ -2524,44 +1707,44 @@ ice_ptp_prep_phy_time_e822(struct ice_hw *hw, u32 time) * * Negative adjustments are supported using 2s complement arithmetic. */ -enum ice_status +int ice_ptp_prep_port_adj_e822(struct ice_hw *hw, u8 port, s64 time, bool lock_sbq) { - enum ice_status status; u32 l_time, u_time; + int err; l_time = ICE_LO_DWORD(time); u_time = ICE_HI_DWORD(time); /* Tx case */ - status = ice_write_phy_reg_e822_lp(hw, port, P_REG_TX_TIMER_INC_PRE_L, - l_time, lock_sbq); - if (status) + err = ice_write_phy_reg_e822_lp(hw, port, P_REG_TX_TIMER_INC_PRE_L, + l_time, lock_sbq); + if (err) goto exit_err; - status = ice_write_phy_reg_e822_lp(hw, port, P_REG_TX_TIMER_INC_PRE_U, - u_time, lock_sbq); - if (status) + err = ice_write_phy_reg_e822_lp(hw, port, P_REG_TX_TIMER_INC_PRE_U, + u_time, lock_sbq); + if (err) goto exit_err; /* Rx case */ - status = ice_write_phy_reg_e822_lp(hw, port, P_REG_RX_TIMER_INC_PRE_L, - l_time, lock_sbq); - if (status) + err = ice_write_phy_reg_e822_lp(hw, port, P_REG_RX_TIMER_INC_PRE_L, + l_time, lock_sbq); + if (err) goto exit_err; - status = ice_write_phy_reg_e822_lp(hw, port, P_REG_RX_TIMER_INC_PRE_U, - u_time, lock_sbq); - if (status) + err = ice_write_phy_reg_e822_lp(hw, port, P_REG_RX_TIMER_INC_PRE_U, + u_time, lock_sbq); + if (err) goto exit_err; - return ICE_SUCCESS; + return 0; exit_err: - ice_debug(hw, ICE_DBG_PTP, "Failed to write time adjust for port %u, status %d\n", - port, status); - return status; + ice_debug(hw, ICE_DBG_PTP, "Failed to write time adjust for port %u, err %d\n", + port, err); + return err; } /** @@ -2575,7 +1758,7 @@ ice_ptp_prep_port_adj_e822(struct ice_hw *hw, u8 port, s64 time, * Tx and Rx port registers. The actual adjustment is completed by issuing an * ICE_PTP_ADJ_TIME or ICE_PTP_ADJ_TIME_AT_TIME sync command. 
*/ -static enum ice_status +static int ice_ptp_prep_phy_adj_e822(struct ice_hw *hw, s32 adj, bool lock_sbq) { s64 cycles; @@ -2590,16 +1773,15 @@ ice_ptp_prep_phy_adj_e822(struct ice_hw *hw, s32 adj, bool lock_sbq) else cycles = -(((s64)-adj) << 32); - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { - enum ice_status status; + for (port = 0; port < hw->phy_ports; port++) { + int err; - status = ice_ptp_prep_port_adj_e822(hw, port, cycles, - lock_sbq); - if (status) - return status; + err = ice_ptp_prep_port_adj_e822(hw, port, cycles, lock_sbq); + if (err) + return err; } - return ICE_SUCCESS; + return 0; } /** @@ -2611,26 +1793,26 @@ ice_ptp_prep_phy_adj_e822(struct ice_hw *hw, s32 adj, bool lock_sbq) * port's TIMETUS registers. The new increment value will be updated after * issuing an ICE_PTP_INIT_INCVAL command. */ -static enum ice_status +static int ice_ptp_prep_phy_incval_e822(struct ice_hw *hw, u64 incval) { - enum ice_status status; + int err; u8 port; - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { - status = ice_write_40b_phy_reg_e822(hw, port, P_REG_TIMETUS_L, - incval); - if (status) + for (port = 0; port < hw->phy_ports; port++) { + err = ice_write_40b_phy_reg_e822(hw, port, P_REG_TIMETUS_L, + incval); + if (err) goto exit_err; } - return ICE_SUCCESS; + return 0; exit_err: - ice_debug(hw, ICE_DBG_PTP, "Failed to write incval for port %u, status %d\n", - port, status); + ice_debug(hw, ICE_DBG_PTP, "Failed to write incval for port %u, err %d\n", + port, err); - return status; + return err; } /** @@ -2641,22 +1823,22 @@ ice_ptp_prep_phy_incval_e822(struct ice_hw *hw, u64 incval) * * Read the time_clk_cyc increment value for a given PHY port. 
*/ -enum ice_status +int ice_ptp_read_phy_incval_e822(struct ice_hw *hw, u8 port, u64 *incval) { - enum ice_status status; + int err; - status = ice_read_40b_phy_reg_e822(hw, port, P_REG_TIMETUS_L, incval); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read TIMETUS_L, status %d\n", - status); - return status; + err = ice_read_40b_phy_reg_e822(hw, port, P_REG_TIMETUS_L, incval); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read TIMETUS_L, err %d\n", + err); + return err; } ice_debug(hw, ICE_DBG_PTP, "read INCVAL = 0x%016llx\n", (unsigned long long)*incval); - return ICE_SUCCESS; + return 0; } /** @@ -2672,50 +1854,50 @@ ice_ptp_read_phy_incval_e822(struct ice_hw *hw, u8 port, u64 *incval) * Note that a target time adjustment is not currently supported on E810 * devices. */ -static enum ice_status +static int ice_ptp_prep_phy_adj_target_e822(struct ice_hw *hw, u32 target_time) { - enum ice_status status; + int err; u8 port; - for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { + for (port = 0; port < hw->phy_ports; port++) { /* Tx case */ /* No sub-nanoseconds data */ - status = ice_write_phy_reg_e822_lp(hw, port, - P_REG_TX_TIMER_CNT_ADJ_L, - 0, true); - if (status) + err = ice_write_phy_reg_e822_lp(hw, port, + P_REG_TX_TIMER_CNT_ADJ_L, + 0, true); + if (err) goto exit_err; - status = ice_write_phy_reg_e822_lp(hw, port, - P_REG_TX_TIMER_CNT_ADJ_U, - target_time, true); - if (status) + err = ice_write_phy_reg_e822_lp(hw, port, + P_REG_TX_TIMER_CNT_ADJ_U, + target_time, true); + if (err) goto exit_err; /* Rx case */ /* No sub-nanoseconds data */ - status = ice_write_phy_reg_e822_lp(hw, port, - P_REG_RX_TIMER_CNT_ADJ_L, - 0, true); - if (status) + err = ice_write_phy_reg_e822_lp(hw, port, + P_REG_RX_TIMER_CNT_ADJ_L, + 0, true); + if (err) goto exit_err; - status = ice_write_phy_reg_e822_lp(hw, port, - P_REG_RX_TIMER_CNT_ADJ_U, - target_time, true); - if (status) + err = ice_write_phy_reg_e822_lp(hw, port, + P_REG_RX_TIMER_CNT_ADJ_U, + target_time, 
 					    true);
+		if (err)
 			goto exit_err;
 	}
 
-	return ICE_SUCCESS;
+	return 0;
 
 exit_err:
-	ice_debug(hw, ICE_DBG_PTP, "Failed to write target time for port %u, status %d\n",
-		  port, status);
+	ice_debug(hw, ICE_DBG_PTP, "Failed to write target time for port %u, err %d\n",
+		  port, err);
 
-	return status;
+	return err;
 }
 
 /**
@@ -2729,39 +1911,39 @@ ice_ptp_prep_phy_adj_target_e822(struct ice_hw *hw, u32 target_time)
  *
  * Note this has no equivalent for the E810 devices.
  */
-enum ice_status
+int
 ice_ptp_read_port_capture_e822(struct ice_hw *hw, u8 port, u64 *tx_ts,
 			       u64 *rx_ts)
 {
-	enum ice_status status;
+	int err;
 
 	/* Tx case */
-	status = ice_read_64b_phy_reg_e822(hw, port, P_REG_TX_CAPTURE_L, tx_ts);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to read REG_TX_CAPTURE, status %d\n",
-			  status);
-		return status;
+	err = ice_read_64b_phy_reg_e822(hw, port, P_REG_TX_CAPTURE_L, tx_ts);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to read REG_TX_CAPTURE, err %d\n",
+			  err);
+		return err;
 	}
 
 	ice_debug(hw, ICE_DBG_PTP, "tx_init = 0x%016llx\n",
 		  (unsigned long long)*tx_ts);
 
 	/* Rx case */
-	status = ice_read_64b_phy_reg_e822(hw, port, P_REG_RX_CAPTURE_L, rx_ts);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_CAPTURE, status %d\n",
-			  status);
-		return status;
+	err = ice_read_64b_phy_reg_e822(hw, port, P_REG_RX_CAPTURE_L, rx_ts);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_CAPTURE, err %d\n",
+			  err);
+		return err;
 	}
 
 	ice_debug(hw, ICE_DBG_PTP, "rx_init = 0x%016llx\n",
 		  (unsigned long long)*rx_ts);
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
- * ice_ptp_one_port_cmd_e822 - Prepare a single PHY port for a timer command
+ * ice_ptp_write_port_cmd_e822 - Prepare a single PHY port for a timer command
  * @hw: pointer to HW struct
  * @port: Port to which cmd has to be sent
  * @cmd: Command to be sent to the port
@@ -2772,108 +1954,32 @@ ice_ptp_read_port_capture_e822(struct ice_hw *hw, u8 port, u64 *tx_ts,
  * Note there is no equivalent of this operation on E810, as that device
  * always handles all external PHYs internally.
  */
-enum ice_status
-ice_ptp_one_port_cmd_e822(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd,
-			  bool lock_sbq)
+static int
+ice_ptp_write_port_cmd_e822(struct ice_hw *hw, u8 port,
+			    enum ice_ptp_tmr_cmd cmd, bool lock_sbq)
 {
-	enum ice_status status;
-	u32 cmd_val, val;
-	u8 tmr_idx;
-
-	tmr_idx = ice_get_ptp_src_clock_index(hw);
-	cmd_val = tmr_idx << SEL_PHY_SRC;
-	switch (cmd) {
-	case ICE_PTP_INIT_TIME:
-		cmd_val |= PHY_CMD_INIT_TIME;
-		break;
-	case ICE_PTP_INIT_INCVAL:
-		cmd_val |= PHY_CMD_INIT_INCVAL;
-		break;
-	case ICE_PTP_ADJ_TIME:
-		cmd_val |= PHY_CMD_ADJ_TIME;
-		break;
-	case ICE_PTP_ADJ_TIME_AT_TIME:
-		cmd_val |= PHY_CMD_ADJ_TIME_AT_TIME;
-		break;
-	case ICE_PTP_READ_TIME:
-		cmd_val |= PHY_CMD_READ_TIME;
-		break;
-	default:
-		ice_warn(hw, "Unknown timer command %u\n", cmd);
-		return ICE_ERR_PARAM;
-	}
+	u32 val = ice_ptp_tmr_cmd_to_port_reg(hw, cmd);
+	int err;
 
 	/* Tx case */
-	/* Read, modify, write */
-	status = ice_read_phy_reg_e822_lp(hw, port, P_REG_TX_TMR_CMD, &val,
-					  lock_sbq);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_TMR_CMD, status %d\n",
-			  status);
-		return status;
-	}
-
-	/* Modify necessary bits only and perform write */
-	val &= ~TS_CMD_MASK;
-	val |= cmd_val;
-
-	status = ice_write_phy_reg_e822_lp(hw, port, P_REG_TX_TMR_CMD, val,
-					   lock_sbq);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to write back TX_TMR_CMD, status %d\n",
-			  status);
-		return status;
+	err = ice_write_phy_reg_e822_lp(hw, port, P_REG_TX_TMR_CMD, val,
+					lock_sbq);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to write back TX_TMR_CMD, err %d\n",
+			  err);
+		return err;
 	}
 
 	/* Rx case */
-	/* Read, modify, write */
-	status = ice_read_phy_reg_e822_lp(hw, port, P_REG_RX_TMR_CMD, &val,
-					  lock_sbq);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_TMR_CMD, status %d\n",
-			  status);
-		return status;
-	}
-
-	/* Modify necessary bits only and perform write */
-	val &= ~TS_CMD_MASK;
-	val |= cmd_val;
-
-	status = ice_write_phy_reg_e822_lp(hw, port, P_REG_RX_TMR_CMD, val,
-					   lock_sbq);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to write back RX_TMR_CMD, status %d\n",
-			  status);
-		return status;
-	}
-
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_ptp_port_cmd_e822 - Prepare all ports for a timer command
- * @hw: pointer to the HW struct
- * @cmd: timer command to prepare
- * @lock_sbq: true if the sideband queue lock must be acquired
- *
- * Prepare all ports connected to this device for an upcoming timer sync
- * command.
- */
-static enum ice_status
-ice_ptp_port_cmd_e822(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd,
-		      bool lock_sbq)
-{
-	u8 port;
-
-	for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) {
-		enum ice_status status;
-
-		status = ice_ptp_one_port_cmd_e822(hw, port, cmd, lock_sbq);
-		if (status)
-			return status;
+	err = ice_write_phy_reg_e822_lp(hw, port, P_REG_RX_TMR_CMD,
+					val | TS_CMD_RX_TYPE, lock_sbq);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to write back RX_TMR_CMD, err %d\n",
+			  err);
+		return err;
 	}
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /* E822 Vernier calibration functions
@@ -2893,20 +1999,20 @@ ice_ptp_port_cmd_e822(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd,
 * Read the serdes data for the PHY port and extract the link speed and FEC
 * algorithm.
 */
-enum ice_status
+int
 ice_phy_get_speed_and_fec_e822(struct ice_hw *hw, u8 port,
 			       enum ice_ptp_link_spd *link_out,
 			       enum ice_ptp_fec_mode *fec_out)
 {
 	enum ice_ptp_link_spd link;
 	enum ice_ptp_fec_mode fec;
-	enum ice_status status;
 	u32 serdes;
+	int err;
 
-	status = ice_read_phy_reg_e822(hw, port, P_REG_LINK_SPEED, &serdes);
-	if (status) {
+	err = ice_read_phy_reg_e822(hw, port, P_REG_LINK_SPEED, &serdes);
+	if (err) {
 		ice_debug(hw, ICE_DBG_PTP, "Failed to read serdes info\n");
-		return status;
+		return err;
 	}
 
 	/* Determine the FEC algorithm */
@@ -2956,7 +2062,7 @@ ice_phy_get_speed_and_fec_e822(struct ice_hw *hw, u8 port,
 	if (fec_out)
 		*fec_out = fec;
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -2967,23 +2073,23 @@ ice_phy_get_speed_and_fec_e822(struct ice_hw *hw, u8 port,
 void ice_phy_cfg_lane_e822(struct ice_hw *hw, u8 port)
 {
 	enum ice_ptp_link_spd link_spd;
-	enum ice_status status;
+	int err;
 	u32 val;
 	u8 quad;
 
-	status = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, NULL);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to get PHY link speed, status %d\n",
-			  status);
+	err = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, NULL);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to get PHY link speed, err %d\n",
+			  err);
 		return;
 	}
 
 	quad = port / ICE_PORTS_PER_QUAD;
 
-	status = ice_read_quad_reg_e822(hw, quad, Q_REG_TX_MEM_GBL_CFG, &val);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_MEM_GLB_CFG, status %d\n",
-			  status);
+	err = ice_read_quad_reg_e822(hw, quad, Q_REG_TX_MEM_GBL_CFG, &val);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_MEM_GLB_CFG, err %d\n",
+			  err);
 		return;
 	}
 
@@ -2992,10 +2098,10 @@ void ice_phy_cfg_lane_e822(struct ice_hw *hw, u8 port)
 	else
 		val |= Q_REG_TX_MEM_GBL_CFG_LANE_TYPE_M;
 
-	status = ice_write_quad_reg_e822(hw, quad, Q_REG_TX_MEM_GBL_CFG, val);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to write back TX_MEM_GBL_CFG, status %d\n",
-			  status);
+	err = ice_write_quad_reg_e822(hw, quad, Q_REG_TX_MEM_GBL_CFG, val);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to write back TX_MEM_GBL_CFG, err %d\n",
+			  err);
 		return;
 	}
 }
@@ -3046,10 +2152,10 @@ void ice_phy_cfg_lane_e822(struct ice_hw *hw, u8 port)
 * a divide by 390,625,000. This does lose some precision, but avoids
 * miscalculation due to arithmetic overflow.
 */
-static enum ice_status ice_phy_cfg_uix_e822(struct ice_hw *hw, u8 port)
+static int ice_phy_cfg_uix_e822(struct ice_hw *hw, u8 port)
 {
 	u64 cur_freq, clk_incval, tu_per_sec, uix;
-	enum ice_status status;
+	int err;
 
 	cur_freq = ice_e822_pll_freq(ice_e822_time_ref(hw));
 	clk_incval = ice_ptp_read_src_incval(hw);
@@ -3063,26 +2169,26 @@ static enum ice_status ice_phy_cfg_uix_e822(struct ice_hw *hw, u8 port)
 	/* Program the 10Gb/40Gb conversion ratio */
 	uix = DIV_U64(tu_per_sec * LINE_UI_10G_40G, 390625000);
 
-	status = ice_write_64b_phy_reg_e822(hw, port, P_REG_UIX66_10G_40G_L,
-					    uix);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to write UIX66_10G_40G, status %d\n",
-			  status);
-		return status;
+	err = ice_write_64b_phy_reg_e822(hw, port, P_REG_UIX66_10G_40G_L,
+					 uix);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to write UIX66_10G_40G, err %d\n",
+			  err);
+		return err;
 	}
 
 	/* Program the 25Gb/100Gb conversion ratio */
 	uix = DIV_U64(tu_per_sec * LINE_UI_25G_100G, 390625000);
 
-	status = ice_write_64b_phy_reg_e822(hw, port, P_REG_UIX66_25G_100G_L,
-					    uix);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to write UIX66_25G_100G, status %d\n",
-			  status);
-		return status;
+	err = ice_write_64b_phy_reg_e822(hw, port, P_REG_UIX66_25G_100G_L,
+					 uix);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to write UIX66_25G_100G, err %d\n",
+			  err);
+		return err;
 	}
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -3128,16 +2234,16 @@ static enum ice_status ice_phy_cfg_uix_e822(struct ice_hw *hw, u8 port)
 * frequency is ~29 bits, so multiplying them together should fit within the
 * 64 bit arithmetic.
 */
-static enum ice_status ice_phy_cfg_parpcs_e822(struct ice_hw *hw, u8 port)
+static int ice_phy_cfg_parpcs_e822(struct ice_hw *hw, u8 port)
 {
 	u64 cur_freq, clk_incval, tu_per_sec, phy_tus;
 	enum ice_ptp_link_spd link_spd;
 	enum ice_ptp_fec_mode fec_mode;
-	enum ice_status status;
+	int err;
 
-	status = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
-	if (status)
-		return status;
+	err = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
+	if (err)
+		return err;
 
 	cur_freq = ice_e822_pll_freq(ice_e822_time_ref(hw));
 	clk_incval = ice_ptp_read_src_incval(hw);
@@ -3159,10 +2265,10 @@ static enum ice_status ice_phy_cfg_parpcs_e822(struct ice_hw *hw, u8 port)
 	else
 		phy_tus = 0;
 
-	status = ice_write_40b_phy_reg_e822(hw, port, P_REG_PAR_TX_TUS_L,
-					    phy_tus);
-	if (status)
-		return status;
+	err = ice_write_40b_phy_reg_e822(hw, port, P_REG_PAR_TX_TUS_L,
+					 phy_tus);
+	if (err)
+		return err;
 
 	/* P_REG_PAR_RX_TUS */
 	if (e822_vernier[link_spd].rx_par_clk)
@@ -3171,10 +2277,10 @@ static enum ice_status ice_phy_cfg_parpcs_e822(struct ice_hw *hw, u8 port)
 	else
 		phy_tus = 0;
 
-	status = ice_write_40b_phy_reg_e822(hw, port, P_REG_PAR_RX_TUS_L,
-					    phy_tus);
-	if (status)
-		return status;
+	err = ice_write_40b_phy_reg_e822(hw, port, P_REG_PAR_RX_TUS_L,
+					 phy_tus);
+	if (err)
+		return err;
 
 	/* P_REG_PCS_TX_TUS */
 	if (e822_vernier[link_spd].tx_pcs_clk)
@@ -3183,10 +2289,10 @@ static enum ice_status ice_phy_cfg_parpcs_e822(struct ice_hw *hw, u8 port)
 	else
 		phy_tus = 0;
 
-	status = ice_write_40b_phy_reg_e822(hw, port, P_REG_PCS_TX_TUS_L,
-					    phy_tus);
-	if (status)
-		return status;
+	err = ice_write_40b_phy_reg_e822(hw, port, P_REG_PCS_TX_TUS_L,
+					 phy_tus);
+	if (err)
+		return err;
 
 	/* P_REG_PCS_RX_TUS */
 	if (e822_vernier[link_spd].rx_pcs_clk)
@@ -3195,10 +2301,10 @@ static enum ice_status ice_phy_cfg_parpcs_e822(struct ice_hw *hw, u8 port)
 	else
 		phy_tus = 0;
 
-	status = ice_write_40b_phy_reg_e822(hw, port, P_REG_PCS_RX_TUS_L,
-					    phy_tus);
-	if (status)
-		return status;
+	err = ice_write_40b_phy_reg_e822(hw, port, P_REG_PCS_RX_TUS_L,
+					 phy_tus);
+	if (err)
+		return err;
 
 	/* P_REG_DESK_PAR_TX_TUS */
 	if (e822_vernier[link_spd].tx_desk_rsgb_par)
@@ -3207,10 +2313,10 @@ static enum ice_status ice_phy_cfg_parpcs_e822(struct ice_hw *hw, u8 port)
 	else
 		phy_tus = 0;
 
-	status = ice_write_40b_phy_reg_e822(hw, port, P_REG_DESK_PAR_TX_TUS_L,
-					    phy_tus);
-	if (status)
-		return status;
+	err = ice_write_40b_phy_reg_e822(hw, port, P_REG_DESK_PAR_TX_TUS_L,
+					 phy_tus);
+	if (err)
+		return err;
 
 	/* P_REG_DESK_PAR_RX_TUS */
 	if (e822_vernier[link_spd].rx_desk_rsgb_par)
@@ -3219,10 +2325,10 @@ static enum ice_status ice_phy_cfg_parpcs_e822(struct ice_hw *hw, u8 port)
 	else
 		phy_tus = 0;
 
-	status = ice_write_40b_phy_reg_e822(hw, port, P_REG_DESK_PAR_RX_TUS_L,
-					    phy_tus);
-	if (status)
-		return status;
+	err = ice_write_40b_phy_reg_e822(hw, port, P_REG_DESK_PAR_RX_TUS_L,
+					 phy_tus);
+	if (err)
+		return err;
 
 	/* P_REG_DESK_PCS_TX_TUS */
 	if (e822_vernier[link_spd].tx_desk_rsgb_pcs)
@@ -3231,10 +2337,10 @@ static enum ice_status ice_phy_cfg_parpcs_e822(struct ice_hw *hw, u8 port)
 	else
 		phy_tus = 0;
 
-	status = ice_write_40b_phy_reg_e822(hw, port, P_REG_DESK_PCS_TX_TUS_L,
-					    phy_tus);
-	if (status)
-		return status;
+	err = ice_write_40b_phy_reg_e822(hw, port, P_REG_DESK_PCS_TX_TUS_L,
+					 phy_tus);
+	if (err)
+		return err;
 
 	/* P_REG_DESK_PCS_RX_TUS */
 	if (e822_vernier[link_spd].rx_desk_rsgb_pcs)
@@ -3287,25 +2393,52 @@ ice_calc_fixed_tx_offset_e822(struct ice_hw *hw, enum ice_ptp_link_spd link_spd)
 * adjust Tx timestamps by. This is calculated by combining some known static
 * latency along with the Vernier offset computations done by hardware.
 *
- * This function must be called only after the offset registers are valid,
- * i.e. after the Vernier calibration wait has passed, to ensure that the PHY
- * has measured the offset.
+ * This function will not return successfully until the Tx offset calculations
+ * have been completed, which requires waiting until at least one packet has
+ * been transmitted by the device. It is safe to call this function
+ * periodically until calibration succeeds, as it will only program the offset
+ * once.
 *
 * To avoid overflow, when calculating the offset based on the known static
 * latency values, we use measurements in 1/100th of a nanosecond, and divide
 * the TUs per second up front. This avoids overflow while allowing
 * calculation of the adjustment using integer arithmetic.
+ *
+ * Returns zero on success, ICE_ERR_NOT_READY if the hardware vernier offset
+ * calibration has not completed, or another error code on failure.
 */
-enum ice_status ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port)
+int ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port)
 {
 	enum ice_ptp_link_spd link_spd;
 	enum ice_ptp_fec_mode fec_mode;
-	enum ice_status status;
 	u64 total_offset, val;
+	int err;
+	u32 reg;
 
-	status = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
-	if (status)
-		return status;
+	/* Nothing to do if we've already programmed the offset */
+	err = ice_read_phy_reg_e822(hw, port, P_REG_TX_OR, &reg);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_OR for port %u, err %d\n",
+			  port, err);
+		return err;
+	}
+
+	if (reg)
+		return 0;
+
+	err = ice_read_phy_reg_e822(hw, port, P_REG_TX_OV_STATUS, &reg);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_OV_STATUS for port %u, err %d\n",
+			  port, err);
+		return err;
+	}
+
+	if (!(reg & P_REG_TX_OV_STATUS_OV_M))
+		return ICE_ERR_NOT_READY;
+
+	err = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
+	if (err)
+		return err;
 
 	total_offset = ice_calc_fixed_tx_offset_e822(hw, link_spd);
 
@@ -3318,11 +2451,11 @@ enum ice_status ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port)
 	    link_spd == ICE_PTP_LNK_SPD_25G_RS ||
 	    link_spd == ICE_PTP_LNK_SPD_40G ||
 	    link_spd == ICE_PTP_LNK_SPD_50G) {
-		status = ice_read_64b_phy_reg_e822(hw, port,
-						   P_REG_PAR_PCS_TX_OFFSET_L,
-						   &val);
-		if (status)
-			return status;
+		err = ice_read_64b_phy_reg_e822(hw, port,
+						P_REG_PAR_PCS_TX_OFFSET_L,
+						&val);
+		if (err)
+			return err;
 
 		total_offset += val;
 	}
@@ -3333,11 +2466,10 @@ enum ice_status ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port)
 	 */
 	if (link_spd == ICE_PTP_LNK_SPD_50G_RS ||
 	    link_spd == ICE_PTP_LNK_SPD_100G_RS) {
-		status = ice_read_64b_phy_reg_e822(hw, port,
-						   P_REG_PAR_TX_TIME_L,
-						   &val);
-		if (status)
-			return status;
+		err = ice_read_64b_phy_reg_e822(hw, port, P_REG_PAR_TX_TIME_L,
+						&val);
+		if (err)
+			return err;
 
 		total_offset += val;
 	}
@@ -3346,57 +2478,18 @@ enum ice_status ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port)
 	 * PHY and indicate that the Tx offset is ready. After this,
 	 * timestamps will be enabled.
 	 */
-	status = ice_write_64b_phy_reg_e822(hw, port, P_REG_TOTAL_TX_OFFSET_L,
-					    total_offset);
-	if (status)
-		return status;
-
-	status = ice_write_phy_reg_e822(hw, port, P_REG_TX_OR, 1);
-	if (status)
-		return status;
-
-	return ICE_SUCCESS;
-}
-
-/**
- * ice_phy_cfg_fixed_tx_offset_e822 - Configure Tx offset for bypass mode
- * @hw: pointer to the HW struct
- * @port: the PHY port to configure
- *
- * Calculate and program the fixed Tx offset, and indicate that the offset is
- * ready. This can be used when operating in bypass mode.
- */
-static enum ice_status
-ice_phy_cfg_fixed_tx_offset_e822(struct ice_hw *hw, u8 port)
-{
-	enum ice_ptp_link_spd link_spd;
-	enum ice_ptp_fec_mode fec_mode;
-	enum ice_status status;
-	u64 total_offset;
-
-	status = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
-	if (status)
-		return status;
+	err = ice_write_64b_phy_reg_e822(hw, port, P_REG_TOTAL_TX_OFFSET_L,
+					 total_offset);
+	if (err)
+		return err;
 
-	total_offset = ice_calc_fixed_tx_offset_e822(hw, link_spd);
-
-	/* Program the fixed Tx offset into the P_REG_TOTAL_TX_OFFSET_L
-	 * register, then indicate that the Tx offset is ready. After this,
-	 * timestamps will be enabled.
-	 *
-	 * Note that this skips including the more precise offsets generated
-	 * by the Vernier calibration.
-	 */
-	status = ice_write_64b_phy_reg_e822(hw, port, P_REG_TOTAL_TX_OFFSET_L,
-					    total_offset);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_TX_OR, 1);
+	if (err)
+		return err;
 
-	status = ice_write_phy_reg_e822(hw, port, P_REG_TX_OR, 1);
-	if (status)
-		return status;
+	ice_info(hw, "Port=%d Tx vernier offset calibration complete\n", port);
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -3411,21 +2504,21 @@ ice_phy_cfg_fixed_tx_offset_e822(struct ice_hw *hw, u8 port)
 * This varies by link speed and FEC mode. The value calculated accounts for
 * various delays caused when receiving a packet.
 */
-static enum ice_status
+static int
 ice_phy_calc_pmd_adj_e822(struct ice_hw *hw, u8 port,
 			  enum ice_ptp_link_spd link_spd,
 			  enum ice_ptp_fec_mode fec_mode, u64 *pmd_adj)
 {
 	u64 cur_freq, clk_incval, tu_per_sec, mult, adj;
-	u32 pmd_adj_divisor, val;
-	enum ice_status status;
 	u8 pmd_align;
+	u32 val;
+	int err;
 
-	status = ice_read_phy_reg_e822(hw, port, P_REG_PMD_ALIGNMENT, &val);
-	if (status) {
-		ice_debug(hw, ICE_DBG_PTP, "Failed to read PMD alignment, status %d\n",
-			  status);
-		return status;
+	err = ice_read_phy_reg_e822(hw, port, P_REG_PMD_ALIGNMENT, &val);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to read PMD alignment, err %d\n",
+			  err);
+		return err;
 	}
 
 	pmd_align = (u8)val;
@@ -3436,9 +2529,6 @@ ice_phy_calc_pmd_adj_e822(struct ice_hw *hw, u8 port,
 	/* Calculate TUs per second */
 	tu_per_sec = cur_freq * clk_incval;
 
-	/* Get the link speed dependent PMD adjustment divisor */
-	pmd_adj_divisor = e822_vernier[link_spd].pmd_adj_divisor;
-
 	/* The PMD alignment adjustment measurement depends on the link speed,
 	 * and whether FEC is enabled. For each link speed, the alignment
 	 * adjustment is calculated by dividing a value by the length of
@@ -3493,7 +2583,7 @@ ice_phy_calc_pmd_adj_e822(struct ice_hw *hw, u8 port,
 	/* In some cases, there's no need to adjust for the PMD alignment */
 	if (!mult) {
 		*pmd_adj = 0;
-		return ICE_SUCCESS;
+		return 0;
 	}
 
 	/* Calculate the adjustment by multiplying TUs per second by the
@@ -3503,7 +2593,7 @@ ice_phy_calc_pmd_adj_e822(struct ice_hw *hw, u8 port,
 	 */
 	adj = DIV_U64(tu_per_sec, 125);
 	adj *= mult;
-	adj = DIV_U64(adj, pmd_adj_divisor);
+	adj = DIV_U64(adj, e822_vernier[link_spd].pmd_adj_divisor);
 
 	/* Finally, for 25G-RS and 50G-RS, a further adjustment for the Rx
 	 * cycle count is necessary.
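[Editor's note: the divide-before-multiply pattern the comment above describes (divide TUs-per-second by 125 up front so the intermediate product stays within 64 bits, then apply the per-speed divisor) can be sketched in isolation. `div_u64` and the parameters here are illustrative stand-ins, not the driver's actual helpers:]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the driver's DIV_U64 helper. */
static uint64_t div_u64(uint64_t dividend, uint64_t divisor)
{
	return dividend / divisor;
}

/*
 * Sketch of the overflow-avoiding PMD adjustment arithmetic: compute
 * TUs per second, divide by 125 *before* multiplying by the alignment
 * factor, then apply the link-speed-dependent divisor. A zero mult
 * means no PMD alignment adjustment is needed.
 */
static uint64_t calc_pmd_adj(uint64_t cur_freq, uint64_t clk_incval,
			     uint64_t mult, uint64_t pmd_adj_divisor)
{
	uint64_t tu_per_sec = cur_freq * clk_incval;
	uint64_t adj;

	if (!mult)
		return 0;

	adj = div_u64(tu_per_sec, 125);
	adj *= mult;
	return div_u64(adj, pmd_adj_divisor);
}
```

Dividing first loses a little precision but keeps `adj * mult` well inside 64 bits even for the large TU-per-second values the E822 clock produces.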
@@ -3512,12 +2602,12 @@ ice_phy_calc_pmd_adj_e822(struct ice_hw *hw, u8 port,
 		u64 cycle_adj;
 		u8 rx_cycle;
 
-		status = ice_read_phy_reg_e822(hw, port, P_REG_RX_40_TO_160_CNT,
-					       &val);
-		if (status) {
-			ice_debug(hw, ICE_DBG_PTP, "Failed to read 25G-RS Rx cycle count, status %d\n",
-				  status);
-			return status;
+		err = ice_read_phy_reg_e822(hw, port, P_REG_RX_40_TO_160_CNT,
+					    &val);
+		if (err) {
+			ice_debug(hw, ICE_DBG_PTP, "Failed to read 25G-RS Rx cycle count, err %d\n",
+				  err);
+			return err;
 		}
 
 		rx_cycle = val & P_REG_RX_40_TO_160_CNT_RXCYC_M;
@@ -3526,7 +2616,7 @@ ice_phy_calc_pmd_adj_e822(struct ice_hw *hw, u8 port,
 
 			cycle_adj = DIV_U64(tu_per_sec, 125);
 			cycle_adj *= mult;
-			cycle_adj = DIV_U64(cycle_adj, pmd_adj_divisor);
+			cycle_adj = DIV_U64(cycle_adj, e822_vernier[link_spd].pmd_adj_divisor);
 
 			adj += cycle_adj;
 		}
@@ -3534,21 +2624,21 @@ ice_phy_calc_pmd_adj_e822(struct ice_hw *hw, u8 port,
 		u64 cycle_adj;
 		u8 rx_cycle;
 
-		status = ice_read_phy_reg_e822(hw, port, P_REG_RX_80_TO_160_CNT,
-					       &val);
-		if (status) {
-			ice_debug(hw, ICE_DBG_PTP, "Failed to read 50G-RS Rx cycle count, status %d\n",
-				  status);
-			return status;
+		err = ice_read_phy_reg_e822(hw, port, P_REG_RX_80_TO_160_CNT,
+					    &val);
+		if (err) {
+			ice_debug(hw, ICE_DBG_PTP, "Failed to read 50G-RS Rx cycle count, err %d\n",
+				  err);
+			return err;
 		}
 
-		rx_cycle = val & P_REG_RX_80_TO_160_CNT_RXCYC_M;
+		rx_cycle = (u8)(val & P_REG_RX_80_TO_160_CNT_RXCYC_M);
 		if (rx_cycle) {
 			mult = rx_cycle * 40;
 
 			cycle_adj = DIV_U64(tu_per_sec, 125);
 			cycle_adj *= mult;
-			cycle_adj = DIV_U64(cycle_adj, pmd_adj_divisor);
+			cycle_adj = DIV_U64(cycle_adj, e822_vernier[link_spd].pmd_adj_divisor);
 
 			adj += cycle_adj;
 		}
@@ -3557,7 +2647,7 @@ ice_phy_calc_pmd_adj_e822(struct ice_hw *hw, u8 port,
 	/* Return the calculated adjustment */
 	*pmd_adj = adj;
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -3601,6 +2691,11 @@ ice_calc_fixed_rx_offset_e822(struct ice_hw *hw, enum ice_ptp_link_spd link_spd)
 * measurements taken in hardware with some data about known fixed delay as
 * well as adjusting for multi-lane alignment delay.
 *
+ * This function will not return successfully until the Rx offset calculations
+ * have been completed, which requires waiting until at least one packet has
+ * been received by the device. It is safe to call this function periodically
+ * until calibration succeeds, as it will only program the offset once.
+ *
 * This function must be called only after the offset registers are valid,
 * i.e. after the Vernier calibration wait has passed, to ensure that the PHY
 * has measured the offset.
@@ -3609,28 +2704,53 @@ ice_calc_fixed_rx_offset_e822(struct ice_hw *hw, enum ice_ptp_link_spd link_spd)
 * latency values, we use measurements in 1/100th of a nanosecond, and divide
 * the TUs per second up front. This avoids overflow while allowing
 * calculation of the adjustment using integer arithmetic.
+ *
+ * Returns zero on success, ICE_ERR_NOT_READY if the hardware vernier offset
+ * calibration has not completed, or another error code on failure.
 */
-enum ice_status ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port)
+int ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port)
 {
 	enum ice_ptp_link_spd link_spd;
 	enum ice_ptp_fec_mode fec_mode;
 	u64 total_offset, pmd, val;
-	enum ice_status status;
+	int err;
+	u32 reg;
 
-	status = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
-	if (status)
-		return status;
+	/* Nothing to do if we've already programmed the offset */
+	err = ice_read_phy_reg_e822(hw, port, P_REG_RX_OR, &reg);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_OR for port %u, err %d\n",
+			  port, err);
+		return err;
+	}
+
+	if (reg)
+		return 0;
+
+	err = ice_read_phy_reg_e822(hw, port, P_REG_RX_OV_STATUS, &reg);
+	if (err) {
+		ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_OV_STATUS for port %u, err %d\n",
+			  port, err);
+		return err;
+	}
+
+	if (!(reg & P_REG_RX_OV_STATUS_OV_M))
+		return ICE_ERR_NOT_READY;
+
+	err = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
+	if (err)
+		return err;
 
 	total_offset = ice_calc_fixed_rx_offset_e822(hw, link_spd);
 
 	/* Read the first Vernier offset from the PHY register and add it to
 	 * the total offset.
 	 */
-	status = ice_read_64b_phy_reg_e822(hw, port,
-					   P_REG_PAR_PCS_RX_OFFSET_L,
-					   &val);
-	if (status)
-		return status;
+	err = ice_read_64b_phy_reg_e822(hw, port,
+					P_REG_PAR_PCS_RX_OFFSET_L,
+					&val);
+	if (err)
+		return err;
 
 	total_offset += val;
 
@@ -3641,19 +2761,19 @@ enum ice_status ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port)
 	    link_spd == ICE_PTP_LNK_SPD_50G ||
 	    link_spd == ICE_PTP_LNK_SPD_50G_RS ||
 	    link_spd == ICE_PTP_LNK_SPD_100G_RS) {
-		status = ice_read_64b_phy_reg_e822(hw, port,
-						   P_REG_PAR_RX_TIME_L,
-						   &val);
-		if (status)
-			return status;
+		err = ice_read_64b_phy_reg_e822(hw, port,
+						P_REG_PAR_RX_TIME_L,
+						&val);
+		if (err)
+			return err;
 
 		total_offset += val;
 	}
 
 	/* In addition, Rx must account for the PMD alignment */
-	status = ice_phy_calc_pmd_adj_e822(hw, port, link_spd, fec_mode, &pmd);
-	if (status)
-		return status;
+	err = ice_phy_calc_pmd_adj_e822(hw, port, link_spd, fec_mode, &pmd);
+	if (err)
+		return err;
 
 	/* For RS-FEC, this adjustment adds delay, but for other modes, it
 	 * subtracts delay.
@@ -3667,57 +2787,49 @@ enum ice_status ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port)
 	 * PHY and indicate that the Rx offset is ready. After this,
 	 * timestamps will be enabled.
 	 */
-	status = ice_write_64b_phy_reg_e822(hw, port, P_REG_TOTAL_RX_OFFSET_L,
-					    total_offset);
-	if (status)
-		return status;
+	err = ice_write_64b_phy_reg_e822(hw, port, P_REG_TOTAL_RX_OFFSET_L,
+					 total_offset);
+	if (err)
+		return err;
 
-	status = ice_write_phy_reg_e822(hw, port, P_REG_RX_OR, 1);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_RX_OR, 1);
+	if (err)
+		return err;
 
-	return ICE_SUCCESS;
+	ice_info(hw,
+		 "Port=%d Rx vernier offset calibration complete\n", port);
+
+	return 0;
 }
 
 /**
- * ice_phy_cfg_fixed_rx_offset_e822 - Configure fixed Rx offset for bypass mode
+ * ice_ptp_clear_phy_offset_ready_e822 - Clear PHY TX_/RX_OFFSET_READY registers
 * @hw: pointer to the HW struct
- * @port: the PHY port to configure
 *
- * Calculate and program the fixed Rx offset, and indicate that the offset is
- * ready. This can be used when operating in bypass mode.
+ * Clear PHY TX_/RX_OFFSET_READY registers, effectively marking all transmitted
+ * and received timestamps as invalid.
 */
-static enum ice_status
-ice_phy_cfg_fixed_rx_offset_e822(struct ice_hw *hw, u8 port)
+static int ice_ptp_clear_phy_offset_ready_e822(struct ice_hw *hw)
 {
-	enum ice_ptp_link_spd link_spd;
-	enum ice_ptp_fec_mode fec_mode;
-	enum ice_status status;
-	u64 total_offset;
-
-	status = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
-	if (status)
-		return status;
+	u8 port;
 
-	total_offset = ice_calc_fixed_rx_offset_e822(hw, link_spd);
+	for (port = 0; port < hw->phy_ports; port++) {
+		int err;
 
-	/* Program the fixed Rx offset into the P_REG_TOTAL_RX_OFFSET_L
-	 * register, then indicate that the Rx offset is ready. After this,
-	 * timestamps will be enabled.
-	 *
-	 * Note that this skips including the more precise offsets generated
-	 * by Vernier calibration.
-	 */
-	status = ice_write_64b_phy_reg_e822(hw, port, P_REG_TOTAL_RX_OFFSET_L,
-					    total_offset);
-	if (status)
-		return status;
+		err = ice_write_phy_reg_e822(hw, port, P_REG_TX_OR, 0);
+		if (err) {
+			ice_warn(hw, "Failed to clear PHY TX_OFFSET_READY register\n");
+			return err;
+		}
 
-	status = ice_write_phy_reg_e822(hw, port, P_REG_RX_OR, 1);
-	if (status)
-		return status;
+		err = ice_write_phy_reg_e822(hw, port, P_REG_RX_OR, 0);
+		if (err) {
+			ice_warn(hw, "Failed to clear PHY RX_OFFSET_READY register\n");
+			return err;
+		}
+	}
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -3730,14 +2842,14 @@ ice_phy_cfg_fixed_rx_offset_e822(struct ice_hw *hw, u8 port)
 * Issue a ICE_PTP_READ_TIME timer command to simultaneously capture the PHY
 * and PHC timer values.
 */
-static enum ice_status
+static int
 ice_read_phy_and_phc_time_e822(struct ice_hw *hw, u8 port, u64 *phy_time,
 			       u64 *phc_time)
 {
-	enum ice_status status;
 	u64 tx_time, rx_time;
 	u32 zo, lo;
 	u8 tmr_idx;
+	int err;
 
 	tmr_idx = ice_get_ptp_src_clock_index(hw);
 
@@ -3745,13 +2857,12 @@ ice_read_phy_and_phc_time_e822(struct ice_hw *hw, u8 port, u64 *phy_time,
 	ice_ptp_src_cmd(hw, ICE_PTP_READ_TIME);
 
 	/* Prepare the PHY timer for a ICE_PTP_READ_TIME capture command */
-	status = ice_ptp_one_port_cmd_e822(hw, port, ICE_PTP_READ_TIME, true);
-	if (status)
-		return status;
+	err = ice_ptp_one_port_cmd(hw, port, ICE_PTP_READ_TIME, true);
+	if (err)
+		return err;
 
 	/* Issue the sync to start the ICE_PTP_READ_TIME capture */
 	ice_ptp_exec_tmr_cmd(hw);
-	ice_ptp_clean_cmd(hw);
 
 	/* Read the captured PHC time from the shadow time registers */
 	zo = rd32(hw, GLTSYN_SHTIME_0(tmr_idx));
@@ -3759,9 +2870,9 @@ ice_read_phy_and_phc_time_e822(struct ice_hw *hw, u8 port, u64 *phy_time,
 	*phc_time = (u64)lo << 32 | zo;
 
 	/* Read the captured PHY time from the PHY shadow registers */
-	status = ice_ptp_read_port_capture_e822(hw, port, &tx_time, &rx_time);
-	if (status)
-		return status;
+	err = ice_ptp_read_port_capture_e822(hw, port, &tx_time, &rx_time);
+	if (err)
+		return err;
 
 	/* If the PHY Tx and Rx timers don't match, log a warning message.
 	 * Note that this should not happen in normal circumstances since the
@@ -3774,7 +2885,7 @@ ice_read_phy_and_phc_time_e822(struct ice_hw *hw, u8 port, u64 *phy_time,
 
 	*phy_time = tx_time;
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
@@ -3789,18 +2900,18 @@ ice_read_phy_and_phc_time_e822(struct ice_hw *hw, u8 port, u64 *phy_time,
 * to the PHY timer in order to ensure it reads the same value as the
 * primary PHC timer.
 */
-static enum ice_status ice_sync_phy_timer_e822(struct ice_hw *hw, u8 port)
+static int ice_sync_phy_timer_e822(struct ice_hw *hw, u8 port)
 {
 	u64 phc_time, phy_time, difference;
-	enum ice_status status;
+	int err;
 
 	if (!ice_ptp_lock(hw)) {
 		ice_debug(hw, ICE_DBG_PTP, "Failed to acquire PTP semaphore\n");
 		return ICE_ERR_NOT_READY;
 	}
 
-	status = ice_read_phy_and_phc_time_e822(hw, port, &phy_time, &phc_time);
-	if (status)
+	err = ice_read_phy_and_phc_time_e822(hw, port, &phy_time, &phc_time);
+	if (err)
 		goto err_unlock;
 
 	/* Calculate the amount required to add to the port time in order for
@@ -3813,26 +2924,25 @@ static enum ice_status ice_sync_phy_timer_e822(struct ice_hw *hw, u8 port)
 	 */
 	difference = phc_time - phy_time;
 
-	status = ice_ptp_prep_port_adj_e822(hw, port, (s64)difference, true);
-	if (status)
+	err = ice_ptp_prep_port_adj_e822(hw, port, (s64)difference, true);
+	if (err)
 		goto err_unlock;
 
-	status = ice_ptp_one_port_cmd_e822(hw, port, ICE_PTP_ADJ_TIME, true);
-	if (status)
+	err = ice_ptp_one_port_cmd(hw, port, ICE_PTP_ADJ_TIME, true);
+	if (err)
 		goto err_unlock;
 
-	/* Init PHC mstr/src cmd for exec during sync */
-	ice_ptp_src_cmd(hw, ICE_PTP_READ_TIME);
+	/* Do not perform any action on the main timer */
+	ice_ptp_src_cmd(hw, ICE_PTP_NOP);
 
 	/* Issue the sync to activate the time adjustment */
 	ice_ptp_exec_tmr_cmd(hw);
-	ice_ptp_clean_cmd(hw);
 
 	/* Re-capture the timer values to flush the command registers and
 	 * verify that the time was properly adjusted.
 	 */
-	status = ice_read_phy_and_phc_time_e822(hw, port, &phy_time, &phc_time);
-	if (status)
+	err = ice_read_phy_and_phc_time_e822(hw, port, &phy_time, &phc_time);
+	if (err)
 		goto err_unlock;
 
 	ice_info(hw, "Port %u PHY time synced to PHC: 0x%016llX, 0x%016llX\n",
@@ -3841,11 +2951,11 @@ static enum ice_status ice_sync_phy_timer_e822(struct ice_hw *hw, u8 port)
 
 	ice_ptp_unlock(hw);
 
-	return ICE_SUCCESS;
+	return 0;
 
 err_unlock:
 	ice_ptp_unlock(hw);
 
-	return status;
+	return err;
 }
 
 /**
@@ -3858,250 +2968,215 @@ static enum ice_status ice_sync_phy_timer_e822(struct ice_hw *hw, u8 port)
 * re-calibrate Tx and Rx timestamping offsets whenever the clock time is
 * initialized or when link speed changes.
 */
-enum ice_status
+int
 ice_stop_phy_timer_e822(struct ice_hw *hw, u8 port, bool soft_reset)
 {
-	enum ice_status status;
+	int err;
 	u32 val;
 
-	status = ice_write_phy_reg_e822(hw, port, P_REG_TX_OR, 0);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_TX_OR, 0);
+	if (err)
+		return err;
 
-	status = ice_write_phy_reg_e822(hw, port, P_REG_RX_OR, 0);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_RX_OR, 0);
+	if (err)
+		return err;
 
-	status = ice_read_phy_reg_e822(hw, port, P_REG_PS, &val);
-	if (status)
-		return status;
+	err = ice_read_phy_reg_e822(hw, port, P_REG_PS, &val);
+	if (err)
+		return err;
 
 	val &= ~P_REG_PS_START_M;
-	status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
+	if (err)
+		return err;
 
 	val &= ~P_REG_PS_ENA_CLK_M;
-	status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
+	if (err)
+		return err;
 
 	if (soft_reset) {
 		val |= P_REG_PS_SFT_RESET_M;
-		status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
-		if (status)
-			return status;
+		err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
+		if (err)
+			return err;
 	}
 
 	ice_debug(hw, ICE_DBG_PTP, "Disabled clock on PHY port %u\n", port);
 
-	return ICE_SUCCESS;
+	return 0;
 }
 
 /**
 * ice_start_phy_timer_e822 - Start the PHY clock timer
 * @hw: pointer to the HW struct
 * @port: the PHY port to start
- * @bypass: if true, start the PHY in bypass mode
 *
 * Start the clock of a PHY port. This must be done as part of the flow to
 * re-calibrate Tx and Rx timestamping offsets whenever the clock time is
 * initialized or when link speed changes.
 *
- * Bypass mode enables timestamps immediately without waiting for Vernier
- * calibration to complete. Hardware will still continue taking Vernier
- * measurements on Tx or Rx of packets, but they will not be applied to
- * timestamps. Use ice_phy_exit_bypass_e822 to exit bypass mode once hardware
- * has completed offset calculation.
+ * Hardware will take Vernier measurements on Tx or Rx of packets.
 */
-enum ice_status
-ice_start_phy_timer_e822(struct ice_hw *hw, u8 port, bool bypass)
+int ice_start_phy_timer_e822(struct ice_hw *hw, u8 port)
 {
-	enum ice_status status;
 	u32 lo, hi, val;
 	u64 incval;
 	u8 tmr_idx;
+	int err;
 
-	ice_ptp_clean_cmd(hw);
 	tmr_idx = ice_get_ptp_src_clock_index(hw);
 
-	status = ice_stop_phy_timer_e822(hw, port, false);
-	if (status)
-		return status;
+	err = ice_stop_phy_timer_e822(hw, port, false);
+	if (err)
+		return err;
 
 	ice_phy_cfg_lane_e822(hw, port);
 
-	status = ice_phy_cfg_uix_e822(hw, port);
-	if (status)
-		return status;
+	err = ice_phy_cfg_uix_e822(hw, port);
+	if (err)
+		return err;
 
-	status = ice_phy_cfg_parpcs_e822(hw, port);
-	if (status)
-		return status;
+	err = ice_phy_cfg_parpcs_e822(hw, port);
+	if (err)
+		return err;
 
 	lo = rd32(hw, GLTSYN_INCVAL_L(tmr_idx));
 	hi = rd32(hw, GLTSYN_INCVAL_H(tmr_idx));
 	incval = (u64)hi << 32 | lo;
 
-	status = ice_write_40b_phy_reg_e822(hw, port, P_REG_TIMETUS_L, incval);
-	if (status)
-		return status;
+	err = ice_write_40b_phy_reg_e822(hw, port, P_REG_TIMETUS_L, incval);
+	if (err)
+		return err;
 
-	status = ice_ptp_one_port_cmd_e822(hw, port, ICE_PTP_INIT_INCVAL, true);
-	if (status)
-		return status;
+	err = ice_ptp_one_port_cmd(hw, port, ICE_PTP_INIT_INCVAL, true);
+	if (err)
+		return err;
 
-	/* Init PHC mstr/src cmd for exec during sync */
-	ice_ptp_src_cmd(hw, ICE_PTP_READ_TIME);
+	/* Do not perform any action on the main timer */
+	ice_ptp_src_cmd(hw, ICE_PTP_NOP);
 
 	ice_ptp_exec_tmr_cmd(hw);
 
-	status = ice_read_phy_reg_e822(hw, port, P_REG_PS, &val);
-	if (status)
-		return status;
+	err = ice_read_phy_reg_e822(hw, port, P_REG_PS, &val);
+	if (err)
+		return err;
 
 	val |= P_REG_PS_SFT_RESET_M;
-	status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
+	if (err)
+		return err;
 
 	val |= P_REG_PS_START_M;
-	status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
+	if (err)
+		return err;
 
 	val &= ~P_REG_PS_SFT_RESET_M;
-	status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
+	if (err)
+		return err;
 
-	status = ice_ptp_one_port_cmd_e822(hw, port, ICE_PTP_INIT_INCVAL, true);
-	if (status)
-		return status;
+	err = ice_ptp_one_port_cmd(hw, port, ICE_PTP_INIT_INCVAL, true);
+	if (err)
+		return err;
 
 	ice_ptp_exec_tmr_cmd(hw);
 
 	val |= P_REG_PS_ENA_CLK_M;
-	status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
+	if (err)
+		return err;
 
 	val |= P_REG_PS_LOAD_OFFSET_M;
-	status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
-	if (status)
-		return status;
+	err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
+	if (err)
+		return err;
 
 	ice_ptp_exec_tmr_cmd(hw);
 
-	status = ice_sync_phy_timer_e822(hw, port);
-	if (status)
-		return status;
-
-	if (bypass) {
-		val |= P_REG_PS_BYPASS_MODE_M;
-		/* Enter BYPASS mode, enabling timestamps immediately.
*/ - status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val); - if (status) - return status; - - /* Program the fixed Tx offset */ - status = ice_phy_cfg_fixed_tx_offset_e822(hw, port); - if (status) - return status; + err = ice_sync_phy_timer_e822(hw, port); + if (err) + return err; - /* Program the fixed Rx offset */ - status = ice_phy_cfg_fixed_rx_offset_e822(hw, port); - if (status) - return status; - } ice_debug(hw, ICE_DBG_PTP, "Enabled clock on PHY port %u\n", port); - return ICE_SUCCESS; + return 0; } /** - * ice_phy_exit_bypass_e822 - Exit bypass mode, after vernier calculations + * ice_get_phy_tx_tstamp_ready_e822 - Read Tx memory status register * @hw: pointer to the HW struct - * @port: the PHY port to configure + * @quad: the timestamp quad to read from + * @tstamp_ready: contents of the Tx memory status register * - * After hardware finishes vernier calculations for the Tx and Rx offset, this - * function can be used to exit bypass mode by updating the total Tx and Rx - * offsets, and then disabling bypass. This will enable hardware to include - * the more precise offset calibrations, increasing precision of the generated - * timestamps. - * - * This cannot be done until hardware has measured the offsets, which requires - * waiting until at least one packet has been sent and received by the device. + * Read the Q_REG_TX_MEMORY_STATUS register indicating which timestamps in + * the PHY are ready. A set bit means the corresponding timestamp is valid and + * ready to be captured from the PHY timestamp block. 
*/ -enum ice_status ice_phy_exit_bypass_e822(struct ice_hw *hw, u8 port) +static int +ice_get_phy_tx_tstamp_ready_e822(struct ice_hw *hw, u8 quad, u64 *tstamp_ready) { - enum ice_status status; - u32 val; - - status = ice_read_phy_reg_e822(hw, port, P_REG_TX_OV_STATUS, &val); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_OV_STATUS for port %u, status %d\n", - port, status); - return status; - } - - if (!(val & P_REG_TX_OV_STATUS_OV_M)) { - ice_debug(hw, ICE_DBG_PTP, "Tx offset is not yet valid for port %u\n", - port); - return ICE_ERR_NOT_READY; - } + u32 hi, lo; + int err; - status = ice_read_phy_reg_e822(hw, port, P_REG_RX_OV_STATUS, &val); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_OV_STATUS for port %u, status %d\n", - port, status); - return status; + err = ice_read_quad_reg_e822(hw, quad, Q_REG_TX_MEMORY_STATUS_U, &hi); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_MEMORY_STATUS_U for quad %u, err %d\n", + quad, err); + return err; } - if (!(val & P_REG_TX_OV_STATUS_OV_M)) { - ice_debug(hw, ICE_DBG_PTP, "Rx offset is not yet valid for port %u\n", - port); - return ICE_ERR_NOT_READY; + err = ice_read_quad_reg_e822(hw, quad, Q_REG_TX_MEMORY_STATUS_L, &lo); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_MEMORY_STATUS_L for quad %u, err %d\n", + quad, err); + return err; } - status = ice_phy_cfg_tx_offset_e822(hw, port); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to program total Tx offset for port %u, status %d\n", - port, status); - return status; - } + *tstamp_ready = (u64)hi << 32 | (u64)lo; - status = ice_phy_cfg_rx_offset_e822(hw, port); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to program total Rx offset for port %u, status %d\n", - port, status); - return status; - } + return 0; +} - /* Exit bypass mode now that the offset has been updated */ - status = ice_read_phy_reg_e822(hw, port, P_REG_PS, &val); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read 
P_REG_PS for port %u, status %d\n", - port, status); - return status; - } +/** + * ice_phy_cfg_intr_e822 - Configure TX timestamp interrupt + * @hw: pointer to the HW struct + * @quad: the timestamp quad + * @ena: enable or disable interrupt + * @threshold: interrupt threshold + * + * Configure TX timestamp interrupt for the specified quad + */ - if (!(val & P_REG_PS_BYPASS_MODE_M)) - ice_debug(hw, ICE_DBG_PTP, "Port %u not in bypass mode\n", - port); +int +ice_phy_cfg_intr_e822(struct ice_hw *hw, u8 quad, bool ena, u8 threshold) +{ + int err; + u32 val; - val &= ~P_REG_PS_BYPASS_MODE_M; - status = ice_write_phy_reg_e822(hw, port, P_REG_PS, val); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to disable bypass for port %u, status %d\n", - port, status); - return status; + err = ice_read_quad_reg_e822(hw, quad, + Q_REG_TX_MEM_GBL_CFG, + &val); + if (err) + return err; + + if (ena) { + val |= Q_REG_TX_MEM_GBL_CFG_INTR_ENA_M; + val &= ~Q_REG_TX_MEM_GBL_CFG_INTR_THR_M; + val |= ((threshold << Q_REG_TX_MEM_GBL_CFG_INTR_THR_S) & + Q_REG_TX_MEM_GBL_CFG_INTR_THR_M); + } else { + val &= ~Q_REG_TX_MEM_GBL_CFG_INTR_ENA_M; } - ice_info(hw, "Exiting bypass mode on PHY port %u\n", port); + err = ice_write_quad_reg_e822(hw, quad, + Q_REG_TX_MEM_GBL_CFG, + val); - return ICE_SUCCESS; + return err; } /* E810 functions @@ -4119,31 +3194,30 @@ enum ice_status ice_phy_exit_bypass_e822(struct ice_hw *hw, u8 port) * * Read a register from the external PHY on the E810 device. 
*/ -static enum ice_status +static int ice_read_phy_reg_e810_lp(struct ice_hw *hw, u32 addr, u32 *val, bool lock_sbq) { struct ice_sbq_msg_input msg = {0}; - enum ice_status status; + int err; msg.msg_addr_low = ICE_LO_WORD(addr); msg.msg_addr_high = ICE_HI_WORD(addr); msg.opcode = ice_sbq_msg_rd; msg.dest_dev = rmn_0; - status = ice_sbq_rw_reg_lp(hw, &msg, lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to send message to phy, status %d\n", - status); - return status; + err = ice_sbq_rw_reg_lp(hw, &msg, ICE_AQ_FLAG_RD, lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to send message to PHY, err %d\n", + err); + return err; } *val = msg.data; - return ICE_SUCCESS; + return 0; } -static enum ice_status -ice_read_phy_reg_e810(struct ice_hw *hw, u32 addr, u32 *val) +static int ice_read_phy_reg_e810(struct ice_hw *hw, u32 addr, u32 *val) { return ice_read_phy_reg_e810_lp(hw, addr, val, true); } @@ -4157,11 +3231,11 @@ ice_read_phy_reg_e810(struct ice_hw *hw, u32 addr, u32 *val) * * Write a value to a register of the external PHY on the E810 device. 
*/ -static enum ice_status +static int ice_write_phy_reg_e810_lp(struct ice_hw *hw, u32 addr, u32 val, bool lock_sbq) { struct ice_sbq_msg_input msg = {0}; - enum ice_status status; + int err; msg.msg_addr_low = ICE_LO_WORD(addr); msg.msg_addr_high = ICE_HI_WORD(addr); @@ -4169,18 +3243,17 @@ ice_write_phy_reg_e810_lp(struct ice_hw *hw, u32 addr, u32 val, bool lock_sbq) msg.dest_dev = rmn_0; msg.data = val; - status = ice_sbq_rw_reg_lp(hw, &msg, lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to send message to phy, status %d\n", - status); - return status; + err = ice_sbq_rw_reg_lp(hw, &msg, ICE_AQ_FLAG_RD, lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to send message to PHY, err %d\n", + err); + return err; } - return ICE_SUCCESS; + return 0; } -static enum ice_status -ice_write_phy_reg_e810(struct ice_hw *hw, u32 addr, u32 val) +static int ice_write_phy_reg_e810(struct ice_hw *hw, u32 addr, u32 val) { return ice_write_phy_reg_e810_lp(hw, addr, val, true); } @@ -4196,10 +3269,10 @@ ice_write_phy_reg_e810(struct ice_hw *hw, u32 addr, u32 val) * timestamp block of the external PHY on the E810 device using the low latency * timestamp read. */ -static enum ice_status +static int ice_read_phy_tstamp_ll_e810(struct ice_hw *hw, u8 idx, u8 *hi, u32 *lo) { - u8 i; + unsigned int i; /* Write TS index to read to the PF register so the FW can read it */ wr32(hw, PF_SB_ATQBAL, TS_LL_READ_TS_IDX(idx)); @@ -4215,7 +3288,7 @@ ice_read_phy_tstamp_ll_e810(struct ice_hw *hw, u8 idx, u8 *hi, u32 *lo) /* Read the low 32 bit value and set the TS valid bit */ *lo = rd32(hw, PF_SB_ATQBAH) | TS_VALID; - return ICE_SUCCESS; + return 0; } ice_usec_delay(10, false); @@ -4237,33 +3310,33 @@ ice_read_phy_tstamp_ll_e810(struct ice_hw *hw, u8 idx, u8 *hi, u32 *lo) * Read a 8bit timestamp high value and 32 bit timestamp low value out of the * timestamp block of the external PHY on the E810 device using sideband queue. 
*/ -static enum ice_status +static int ice_read_phy_tstamp_sbq_e810(struct ice_hw *hw, u8 lport, u8 idx, u8 *hi, u32 *lo) { u32 hi_addr = TS_EXT(HIGH_TX_MEMORY_BANK_START, lport, idx); u32 lo_addr = TS_EXT(LOW_TX_MEMORY_BANK_START, lport, idx); - enum ice_status status; u32 lo_val, hi_val; + int err; - status = ice_read_phy_reg_e810(hw, lo_addr, &lo_val); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read low PTP timestamp register, status %d\n", - status); - return status; + err = ice_read_phy_reg_e810(hw, lo_addr, &lo_val); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read low PTP timestamp register, err %d\n", + err); + return err; } - status = ice_read_phy_reg_e810(hw, hi_addr, &hi_val); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read high PTP timestamp register, status %d\n", - status); - return status; + err = ice_read_phy_reg_e810(hw, hi_addr, &hi_val); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read high PTP timestamp register, err %d\n", + err); + return err; } *lo = lo_val; *hi = (u8)hi_val; - return ICE_SUCCESS; + return 0; } /** @@ -4276,27 +3349,27 @@ ice_read_phy_tstamp_sbq_e810(struct ice_hw *hw, u8 lport, u8 idx, u8 *hi, * Read a 40bit timestamp value out of the timestamp block of the external PHY * on the E810 device. */ -static enum ice_status +static int ice_read_phy_tstamp_e810(struct ice_hw *hw, u8 lport, u8 idx, u64 *tstamp) { - enum ice_status status; u32 lo = 0; u8 hi = 0; + int err; if (hw->dev_caps.ts_dev_info.ts_ll_read) - status = ice_read_phy_tstamp_ll_e810(hw, idx, &hi, &lo); + err = ice_read_phy_tstamp_ll_e810(hw, idx, &hi, &lo); else - status = ice_read_phy_tstamp_sbq_e810(hw, lport, idx, &hi, &lo); + err = ice_read_phy_tstamp_sbq_e810(hw, lport, idx, &hi, &lo); - if (status) - return status; + if (err) + return err; /* For E810 devices, the timestamp is reported with the lower 32 bits * in the low register, and the upper 8 bits in the high register. 
*/ *tstamp = ((u64)hi) << TS_HIGH_S | ((u64)lo & TS_LOW_M); - return ICE_SUCCESS; + return 0; } /** @@ -4305,33 +3378,43 @@ ice_read_phy_tstamp_e810(struct ice_hw *hw, u8 lport, u8 idx, u64 *tstamp) * @lport: the lport to read from * @idx: the timestamp index to reset * - * Clear a timestamp, resetting its valid bit, from the timestamp block of the - * external PHY on the E810 device. + * Read the timestamp and then forcibly overwrite its value to clear the valid + * bit from the timestamp block of the external PHY on the E810 device. + * + * This function should only be called on an idx whose bit is set according to + * ice_get_phy_tx_tstamp_ready(). */ -static enum ice_status -ice_clear_phy_tstamp_e810(struct ice_hw *hw, u8 lport, u8 idx) +static int ice_clear_phy_tstamp_e810(struct ice_hw *hw, u8 lport, u8 idx) { - enum ice_status status; u32 lo_addr, hi_addr; + u64 unused_tstamp; + int err; + + err = ice_read_phy_tstamp_e810(hw, lport, idx, &unused_tstamp); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to read the timestamp register for lport %u, idx %u, err %d\n", + lport, idx, err); + return err; + } lo_addr = TS_EXT(LOW_TX_MEMORY_BANK_START, lport, idx); hi_addr = TS_EXT(HIGH_TX_MEMORY_BANK_START, lport, idx); - status = ice_write_phy_reg_e810(hw, lo_addr, 0); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to clear low PTP timestamp register, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810(hw, lo_addr, 0); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to clear low PTP timestamp register for lport %u, idx %u, err %d\n", + lport, idx, err); + return err; } - status = ice_write_phy_reg_e810(hw, hi_addr, 0); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to clear high PTP timestamp register, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810(hw, hi_addr, 0); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to clear high PTP timestamp register for lport %u, idx %u, err %d\n", + lport, idx, err); 
+ return err; } - return ICE_SUCCESS; + return 0; } /** @@ -4340,22 +3423,20 @@ ice_clear_phy_tstamp_e810(struct ice_hw *hw, u8 lport, u8 idx) * * Enable the timesync PTP functionality for the external PHY connected to * this function. - * - * Note there is no equivalent function needed on E822 based devices. */ -enum ice_status ice_ptp_init_phy_e810(struct ice_hw *hw) +int ice_ptp_init_phy_e810(struct ice_hw *hw) { - enum ice_status status; u8 tmr_idx; + int err; tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; - status = ice_write_phy_reg_e810(hw, ETH_GLTSYN_ENA(tmr_idx), - GLTSYN_ENA_TSYN_ENA_M); - if (status) + err = ice_write_phy_reg_e810(hw, ETH_GLTSYN_ENA(tmr_idx), + GLTSYN_ENA_TSYN_ENA_M); + if (err) ice_debug(hw, ICE_DBG_PTP, "PTP failed in ena_phy_time_syn %d\n", - status); + err); - return status; + return err; } /** @@ -4364,10 +3445,9 @@ enum ice_status ice_ptp_init_phy_e810(struct ice_hw *hw) * * Perform E810-specific PTP hardware clock initialization steps. */ -static enum ice_status ice_ptp_init_phc_e810(struct ice_hw *hw) +static int ice_ptp_init_phc_e810(struct ice_hw *hw) { - /* Ensure synchronization delay is zero */ - wr32(hw, GLTSYN_SYNC_DLAY, 0); + ice_ptp_zero_syn_dlay(hw); /* Initialize the PHY */ return ice_ptp_init_phy_e810(hw); @@ -4385,27 +3465,27 @@ static enum ice_status ice_ptp_init_phc_e810(struct ice_hw *hw) * The time value is the upper 32 bits of the PHY timer, usually in units of * nominal nanoseconds. 
*/ -static enum ice_status ice_ptp_prep_phy_time_e810(struct ice_hw *hw, u32 time) +static int ice_ptp_prep_phy_time_e810(struct ice_hw *hw, u32 time) { - enum ice_status status; u8 tmr_idx; + int err; tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; - status = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHTIME_0(tmr_idx), 0); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write SHTIME_0, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHTIME_0(tmr_idx), 0); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write SHTIME_0, err %d\n", + err); + return err; } - status = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHTIME_L(tmr_idx), time); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write SHTIME_L, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHTIME_L(tmr_idx), time); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write SHTIME_L, err %d\n", + err); + return err; } - return ICE_SUCCESS; + return 0; } /** @@ -4422,34 +3502,33 @@ static enum ice_status ice_ptp_prep_phy_time_e810(struct ice_hw *hw, u32 time) * the PHY timer, usually in units of nominal nanoseconds. Negative * adjustments are supported using 2s complement arithmetic. */ -static enum ice_status -ice_ptp_prep_phy_adj_e810(struct ice_hw *hw, s32 adj, bool lock_sbq) +static int ice_ptp_prep_phy_adj_e810(struct ice_hw *hw, s32 adj, bool lock_sbq) { - enum ice_status status; u8 tmr_idx; + int err; tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; /* Adjustments are represented as signed 2's complement values in * nanoseconds. Sub-nanosecond adjustment is not supported. 
*/ - status = ice_write_phy_reg_e810_lp(hw, ETH_GLTSYN_SHADJ_L(tmr_idx), - 0, lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write adj to PHY SHADJ_L, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810_lp(hw, ETH_GLTSYN_SHADJ_L(tmr_idx), 0, + lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write adj to PHY SHADJ_L, err %d\n", + err); + return err; } - status = ice_write_phy_reg_e810_lp(hw, ETH_GLTSYN_SHADJ_H(tmr_idx), - adj, lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write adj to PHY SHADJ_H, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810_lp(hw, ETH_GLTSYN_SHADJ_H(tmr_idx), adj, + lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write adj to PHY SHADJ_H, err %d\n", + err); + return err; } - return ICE_SUCCESS; + return 0; } /** @@ -4461,32 +3540,31 @@ ice_ptp_prep_phy_adj_e810(struct ice_hw *hw, s32 adj, bool lock_sbq) * ETH_GLTSYN_SHADJ_L and ETH_GLTSYN_SHADJ_H registers. The actual change is * completed by issuing an ICE_PTP_INIT_INCVAL command. 
*/ -static enum ice_status -ice_ptp_prep_phy_incval_e810(struct ice_hw *hw, u64 incval) +static int ice_ptp_prep_phy_incval_e810(struct ice_hw *hw, u64 incval) { - enum ice_status status; u32 high, low; u8 tmr_idx; + int err; tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; low = ICE_LO_DWORD(incval); high = ICE_HI_DWORD(incval); - status = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHADJ_L(tmr_idx), low); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write incval to PHY SHADJ_L, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHADJ_L(tmr_idx), low); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write incval to PHY SHADJ_L, err %d\n", + err); + return err; } - status = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHADJ_H(tmr_idx), high); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write incval PHY SHADJ_H, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHADJ_H(tmr_idx), high); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write incval PHY SHADJ_H, err %d\n", + err); + return err; } - return ICE_SUCCESS; + return 0; } /** @@ -4503,29 +3581,29 @@ ice_ptp_prep_phy_incval_e810(struct ice_hw *hw, u64 incval) * The time value is the upper 32 bits of the PHY timer, usually in units of * nominal nanoseconds. 
*/ -static enum ice_status +static int ice_ptp_prep_phy_adj_target_e810(struct ice_hw *hw, u32 target_time) { - enum ice_status status; + int err; u8 tmr_idx; tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; - status = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHTIME_0(tmr_idx), 0); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write target time to SHTIME_0, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHTIME_0(tmr_idx), 0); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write target time to SHTIME_0, err %d\n", + err); + return err; } - status = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHTIME_L(tmr_idx), - target_time); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write target time to SHTIME_L, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810(hw, ETH_GLTSYN_SHTIME_L(tmr_idx), + target_time); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write target time to SHTIME_L, err %d\n", + err); + return err; } - return ICE_SUCCESS; + return 0; } /** @@ -4537,54 +3615,36 @@ ice_ptp_prep_phy_adj_target_e810(struct ice_hw *hw, u32 target_time) * Prepare the external PHYs connected to this device for a timer sync * command. 
*/ -static enum ice_status -ice_ptp_port_cmd_e810(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd, - bool lock_sbq) +static int ice_ptp_port_cmd_e810(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd, + bool lock_sbq) { - enum ice_status status; - u32 cmd_val, val; - - switch (cmd) { - case ICE_PTP_INIT_TIME: - cmd_val = GLTSYN_CMD_INIT_TIME; - break; - case ICE_PTP_INIT_INCVAL: - cmd_val = GLTSYN_CMD_INIT_INCVAL; - break; - case ICE_PTP_ADJ_TIME: - cmd_val = GLTSYN_CMD_ADJ_TIME; - break; - case ICE_PTP_ADJ_TIME_AT_TIME: - cmd_val = GLTSYN_CMD_ADJ_INIT_TIME; - break; - case ICE_PTP_READ_TIME: - cmd_val = GLTSYN_CMD_READ_TIME; - break; - default: - ice_warn(hw, "Unknown timer command %u\n", cmd); - return ICE_ERR_PARAM; - } + u32 val = ice_ptp_tmr_cmd_to_port_reg(hw, cmd); + int err; - /* Read, modify, write */ - status = ice_read_phy_reg_e810_lp(hw, ETH_GLTSYN_CMD, &val, lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to read GLTSYN_CMD, status %d\n", - status); - return status; + err = ice_write_phy_reg_e810_lp(hw, E810_ETH_GLTSYN_CMD, val, + lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to write back GLTSYN_CMD, err %d\n", err); + return err; } - /* Modify necessary bits only and perform write */ - val &= ~TS_CMD_MASK_E810; - val |= cmd_val; - - status = ice_write_phy_reg_e810_lp(hw, ETH_GLTSYN_CMD, val, lock_sbq); - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to write back GLTSYN_CMD, status %d\n", - status); - return status; - } + return 0; +} - return ICE_SUCCESS; +/** + * ice_get_phy_tx_tstamp_ready_e810 - Read Tx memory status register + * @hw: pointer to the HW struct + * @port: the PHY port to read + * @tstamp_ready: contents of the Tx memory status register + * + * E810 devices do not use a Tx memory status register. Instead simply + * indicate that all timestamps are currently ready. 
+ */ +static int +ice_get_phy_tx_tstamp_ready_e810(struct ice_hw *hw, u8 port, u64 *tstamp_ready) +{ + *tstamp_ready = 0xFFFFFFFFFFFFFFFF; + return 0; } /* E810T SMA functions @@ -4598,17 +3658,18 @@ ice_ptp_port_cmd_e810(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd, * @hw: pointer to the hw struct * @pca9575_handle: GPIO controller's handle * - * Find and return the GPIO controller's handle in the netlist. - * When found - the value will be cached in the hw structure and following calls - * will return cached value + * Find and return the GPIO controller's handle by checking what drives clock + * mux pin. When found - the value will be cached in the hw structure and + * following calls will return cached value. */ -static enum ice_status +static int ice_get_pca9575_handle(struct ice_hw *hw, u16 *pca9575_handle) { + u8 node_part_number, idx, node_type_ctx_clk_mux, node_part_num_clk_mux; + struct ice_aqc_get_link_topo_pin cmd_pin; + u16 node_handle, clock_mux_handle; struct ice_aqc_get_link_topo cmd; - u8 node_part_number, idx; - enum ice_status status; - u16 node_handle; + int status; if (!hw || !pca9575_handle) return ICE_ERR_PARAM; @@ -4616,15 +3677,50 @@ ice_get_pca9575_handle(struct ice_hw *hw, u16 *pca9575_handle) /* If handle was read previously return cached value */ if (hw->io_expander_handle) { *pca9575_handle = hw->io_expander_handle; - return ICE_SUCCESS; + return 0; } memset(&cmd, 0, sizeof(cmd)); + memset(&cmd_pin, 0, sizeof(cmd_pin)); + + node_type_ctx_clk_mux = (ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_MUX << + ICE_AQC_LINK_TOPO_NODE_TYPE_S); + node_type_ctx_clk_mux |= (ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL << + ICE_AQC_LINK_TOPO_NODE_CTX_S); + node_part_num_clk_mux = ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_CLK_MUX; + + /* Look for CLOCK MUX handle in the netlist */ + status = ice_find_netlist_node(hw, node_type_ctx_clk_mux, + node_part_num_clk_mux, + &clock_mux_handle); + if (status) + return ICE_ERR_NOT_SUPPORTED; + + /* Take CLOCK MUX GPIO pin */ + 
cmd_pin.input_io_params = (ICE_AQC_LINK_TOPO_INPUT_IO_TYPE_GPIO << + ICE_AQC_LINK_TOPO_INPUT_IO_TYPE_S); + cmd_pin.input_io_params |= (ICE_AQC_LINK_TOPO_IO_FUNC_CLK_IN << + ICE_AQC_LINK_TOPO_INPUT_IO_FUNC_S); + cmd_pin.addr.handle = CPU_TO_LE16(clock_mux_handle); + cmd_pin.addr.topo_params.node_type_ctx = + (ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_MUX << + ICE_AQC_LINK_TOPO_NODE_TYPE_S); + cmd_pin.addr.topo_params.node_type_ctx |= + (ICE_AQC_LINK_TOPO_NODE_CTX_PROVIDED << + ICE_AQC_LINK_TOPO_NODE_CTX_S); + + status = ice_aq_get_netlist_node_pin(hw, &cmd_pin, &node_handle); + if (status) + return ICE_ERR_NOT_SUPPORTED; - /* Set node type to GPIO controller */ + /* Check what is driving the pin */ cmd.addr.topo_params.node_type_ctx = - (ICE_AQC_LINK_TOPO_NODE_TYPE_M & - ICE_AQC_LINK_TOPO_NODE_TYPE_GPIO_CTRL); + (ICE_AQC_LINK_TOPO_NODE_TYPE_GPIO_CTRL << + ICE_AQC_LINK_TOPO_NODE_TYPE_S); + cmd.addr.topo_params.node_type_ctx |= + (ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL << + ICE_AQC_LINK_TOPO_NODE_CTX_S); + cmd.addr.handle = CPU_TO_LE16(node_handle); #define SW_PCA9575_SFP_TOPO_IDX 2 #define SW_PCA9575_QSFP_TOPO_IDX 1 @@ -4638,36 +3734,82 @@ ice_get_pca9575_handle(struct ice_hw *hw, u16 *pca9575_handle) return ICE_ERR_NOT_SUPPORTED; cmd.addr.topo_params.index = idx; - status = ice_aq_get_netlist_node(hw, &cmd, &node_part_number, &node_handle); if (status) return ICE_ERR_NOT_SUPPORTED; - /* Verify if we found the right IO expander type */ - if (node_part_number != ICE_ACQ_GET_LINK_TOPO_NODE_NR_PCA9575) + /* Verify if PCA9575 drives the pin */ + if (node_part_number != ICE_AQC_GET_LINK_TOPO_NODE_NR_PCA9575) return ICE_ERR_NOT_SUPPORTED; /* If present save the handle and return it */ hw->io_expander_handle = node_handle; *pca9575_handle = hw->io_expander_handle; - return ICE_SUCCESS; + return 0; +} + +/** + * ice_read_sma_ctrl_e810t + * @hw: pointer to the hw struct + * @data: pointer to data to be read from the GPIO controller + * + * Read the SMA controller state. 
Only bits 3-7 in data are valid. + */ +int ice_read_sma_ctrl_e810t(struct ice_hw *hw, u8 *data) +{ + int status; + u16 handle; + u8 i; + + status = ice_get_pca9575_handle(hw, &handle); + if (status) + return status; + + *data = 0; + + for (i = ICE_SMA_MIN_BIT_E810T; i <= ICE_SMA_MAX_BIT_E810T; i++) { + bool pin; + + status = ice_aq_get_gpio(hw, handle, i + ICE_PCA9575_P1_OFFSET, + &pin, NULL); + if (status) + break; + *data |= (u8)(!pin) << i; + } + + return status; } /** - * ice_is_gps_present_e810t + * ice_write_sma_ctrl_e810t * @hw: pointer to the hw struct + * @data: data to be written to the GPIO controller * - * Check if the GPS generic device is present in the netlist + * Write the data to the SMA controller. Only bits 3-7 in data are valid. */ -bool ice_is_gps_present_e810t(struct ice_hw *hw) +int ice_write_sma_ctrl_e810t(struct ice_hw *hw, u8 data) { - if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_GPS, - ICE_ACQ_GET_LINK_TOPO_NODE_NR_GEN_GPS, NULL)) - return false; + int status; + u16 handle; + u8 i; - return true; + status = ice_get_pca9575_handle(hw, &handle); + if (status) + return status; + + for (i = ICE_SMA_MIN_BIT_E810T; i <= ICE_SMA_MAX_BIT_E810T; i++) { + bool pin; + + pin = !(data & (1 << i)); + status = ice_aq_set_gpio(hw, handle, i + ICE_PCA9575_P1_OFFSET, + pin, NULL); + if (status) + break; + } + + return status; } /** @@ -4678,19 +3820,18 @@ bool ice_is_gps_present_e810t(struct ice_hw *hw) * * Read the register from the GPIO controller */ -enum ice_status -ice_read_pca9575_reg_e810t(struct ice_hw *hw, u8 offset, u8 *data) +int ice_read_pca9575_reg_e810t(struct ice_hw *hw, u8 offset, u8 *data) { struct ice_aqc_link_topo_addr link_topo; - enum ice_status status; __le16 addr; u16 handle; + int err; memset(&link_topo, 0, sizeof(link_topo)); - status = ice_get_pca9575_handle(hw, &handle); - if (status) - return status; + err = ice_get_pca9575_handle(hw, &handle); + if (err) + return err; link_topo.handle = CPU_TO_LE16(handle); 
link_topo.topo_params.node_type_ctx = @@ -4710,12 +3851,12 @@ ice_read_pca9575_reg_e810t(struct ice_hw *hw, u8 offset, u8 *data) * * Write the data to the GPIO controller register */ -enum ice_status +int ice_write_pca9575_reg_e810t(struct ice_hw *hw, u8 offset, u8 data) { struct ice_aqc_link_topo_addr link_topo; - enum ice_status status; __le16 addr; + int status; u16 handle; memset(&link_topo, 0, sizeof(link_topo)); @@ -4724,94 +3865,260 @@ ice_write_pca9575_reg_e810t(struct ice_hw *hw, u8 offset, u8 data) if (status) return status; - link_topo.handle = CPU_TO_LE16(handle); - link_topo.topo_params.node_type_ctx = - (ICE_AQC_LINK_TOPO_NODE_CTX_PROVIDED << - ICE_AQC_LINK_TOPO_NODE_CTX_S); + link_topo.handle = CPU_TO_LE16(handle); + link_topo.topo_params.node_type_ctx = + (ICE_AQC_LINK_TOPO_NODE_CTX_PROVIDED << + ICE_AQC_LINK_TOPO_NODE_CTX_S); + + addr = CPU_TO_LE16((u16)offset); + + return ice_aq_write_i2c(hw, link_topo, 0, addr, 1, &data, NULL); +} + +/** + * ice_is_pca9575_present + * @hw: pointer to the hw struct + * + * Check if the SW IO expander is present in the netlist + */ +bool ice_is_pca9575_present(struct ice_hw *hw) +{ + u16 handle = 0; + int status; + + status = ice_get_pca9575_handle(hw, &handle); + if (!status && handle) + return true; + + return false; +} + +/** + * ice_ptp_read_sdp_section_from_nvm - reads SDP section from NVM + * @hw: pointer to the HW struct + * @section_exist: on return, returns true if section exist + * @pin_desc_num: on return, returns the number of ice_ptp_pin_desc entries + * @pin_config_num: on return, returns the number of pin that should be + * exposed on pin_config I/F + * @sdp_entries: on return, returns the SDP connection section from NVM + * @nvm_entries: on return, returns the number of valid entries in sdp_entries + * + * Reads SDP connection section from NVM + * Returns -1 if NVM read failed or section corrupted, otherwise 0 + */ +int ice_ptp_read_sdp_section_from_nvm(struct ice_hw *hw, bool *section_exist, + u8 
*pin_desc_num, u8 *pin_config_num, + u16 *sdp_entries, u8 *nvm_entries) +{ + __le16 loc_raw_data, raw_nvm_entries; + u32 loc_data, i, all_pin_bitmap = 0; + int err; + + *section_exist = false; + *pin_desc_num = 0; + *pin_config_num = 0; + + err = ice_acquire_nvm(hw, ICE_RES_READ); + if (err) + goto exit; + + /* Read the offset of EMP_SR_PTR */ + err = ice_aq_read_nvm(hw, ICE_AQC_NVM_START_POINT, + ICE_AQC_NVM_SDP_CFG_PTR_OFFSET, + ICE_AQC_NVM_SDP_CFG_PTR_RD_LEN, + &loc_raw_data, false, true, NULL); + if (err) + goto exit; + + /* check if section exist */ + loc_data = LE16_TO_CPU(loc_raw_data); + if ((loc_data & ICE_AQC_NVM_SDP_CFG_PTR_M) == ICE_AQC_NVM_SDP_CFG_PTR_M) + goto exit; + + if (loc_data & ICE_AQC_NVM_SDP_CFG_PTR_TYPE_M) { + loc_data &= ICE_AQC_NVM_SDP_CFG_PTR_M; + loc_data *= ICE_AQC_NVM_SECTOR_UNIT; + } else { + loc_data *= ICE_AQC_NVM_WORD_UNIT; + } + + /* Skip SDP configuration section length (2 bytes) */ + loc_data += ICE_AQC_NVM_SDP_CFG_HEADER_LEN; + + /* read number of valid entries */ + err = ice_aq_read_nvm(hw, ICE_AQC_NVM_START_POINT, loc_data, + ICE_AQC_NVM_SDP_CFG_SEC_LEN_LEN, &raw_nvm_entries, + false, true, NULL); + if (err) + goto exit; + *nvm_entries = (u8)LE16_TO_CPU(raw_nvm_entries); + + /* Read entire SDP configuration section */ + loc_data += ICE_AQC_NVM_SDP_CFG_SEC_LEN_LEN; + err = ice_aq_read_nvm(hw, ICE_AQC_NVM_START_POINT, loc_data, + ICE_AQC_NVM_SDP_CFG_DATA_LEN, sdp_entries, + false, true, NULL); + if (err) + goto exit; + + /* get number of existing pin/connector */ + for (i = 0; i < *nvm_entries; i++) { + all_pin_bitmap |= (sdp_entries[i] & + ICE_AQC_NVM_SDP_CFG_PIN_MASK) >> + ICE_AQC_NVM_SDP_CFG_PIN_OFFSET; + if (sdp_entries[i] & ICE_AQC_NVM_SDP_CFG_NA_PIN_MASK) + *pin_desc_num += 1; + } + + for (i = 0; i < ICE_AQC_NVM_SDP_CFG_PIN_SIZE - 1; i++) + *pin_config_num += (all_pin_bitmap & (1 << i)) != 0; + *pin_desc_num += *pin_config_num; + + *section_exist = true; +exit: + ice_release_nvm(hw); + return err; +} + +/* E830 functions 
+ * + * The following functions operate on the E830 series devices. + * + */ + +/** + * ice_ptp_init_phc_e830 - Perform E830 specific PHC initialization + * @hw: pointer to HW struct + * + * Perform E830-specific PTP hardware clock initialization steps. + */ +static int ice_ptp_init_phc_e830(struct ice_hw *hw) +{ + ice_ptp_zero_syn_dlay(hw); + return 0; +} + +/** + * ice_ptp_write_direct_incval_e830 - Prep PHY port increment value change + * @hw: pointer to HW struct + * @incval: The new 40bit increment value to prepare + * + * Prepare the PHY port for a new increment value by programming the PHC + * GLTSYN_INCVAL_L and GLTSYN_INCVAL_H registers. The actual change is + * completed by FW automatically. + */ +static int +ice_ptp_write_direct_incval_e830(struct ice_hw *hw, u64 incval) +{ + u32 high, low; + u8 tmr_idx; + + tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; + low = ICE_LO_DWORD(incval); + high = ICE_HI_DWORD(incval); + + wr32(hw, GLTSYN_INCVAL_L(tmr_idx), low); + wr32(hw, GLTSYN_INCVAL_H(tmr_idx), high); + + return 0; +} + +/** + * ice_ptp_write_direct_phc_time_e830 - Prepare PHY port with initial time + * @hw: Board private structure + * @time: Time to initialize the PHY port clock to + * + * Program the PHY port ETH_GLTSYN_SHTIME registers in preparation setting the + * initial clock time. The time will not actually be programmed until the + * driver issues an ICE_PTP_INIT_TIME command. + * + * The time value is the upper 32 bits of the PHY timer, usually in units of + * nominal nanoseconds. 
+ */ +static int +ice_ptp_write_direct_phc_time_e830(struct ice_hw *hw, u64 time) +{ + u8 tmr_idx; + + tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; - addr = CPU_TO_LE16((u16)offset); + wr32(hw, GLTSYN_TIME_0(tmr_idx), 0); + wr32(hw, GLTSYN_TIME_L(tmr_idx), ICE_LO_DWORD(time)); + wr32(hw, GLTSYN_TIME_H(tmr_idx), ICE_HI_DWORD(time)); - return ice_aq_write_i2c(hw, link_topo, 0, addr, 1, &data, NULL); + return 0; } /** - * ice_read_sma_ctrl_e810t - * @hw: pointer to the hw struct - * @data: pointer to data to be read from the GPIO controller + * ice_ptp_port_cmd_e830 - Prepare all external PHYs for a timer command + * @hw: pointer to HW struct + * @cmd: Command to be sent to the port + * @lock_sbq: true if the sideband queue lock must be acquired * - * Read the SMA controller state. Only bits 3-7 in data are valid. + * Prepare the external PHYs connected to this device for a timer sync + * command. */ -enum ice_status ice_read_sma_ctrl_e810t(struct ice_hw *hw, u8 *data) +static int +ice_ptp_port_cmd_e830(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd, + bool lock_sbq) { - enum ice_status status; - u16 handle; - u8 i; - - status = ice_get_pca9575_handle(hw, &handle); - if (status) - return status; - - *data = 0; - - for (i = ICE_E810T_SMA_MIN_BIT; i <= ICE_E810T_SMA_MAX_BIT; i++) { - bool pin; - - status = ice_aq_get_gpio(hw, handle, i + ICE_E810T_P1_OFFSET, - &pin, NULL); - if (status) - break; - *data |= (u8)(!pin) << i; - } + u32 val = ice_ptp_tmr_cmd_to_port_reg(hw, cmd); - return status; + return ice_write_phy_reg_e810_lp(hw, E830_ETH_GLTSYN_CMD, val, + lock_sbq); } /** - * ice_write_sma_ctrl_e810t - * @hw: pointer to the hw struct - * @data: data to be written to the GPIO controller + * ice_read_phy_tstamp_e830 - Read a PHY timestamp out of the external PHY + * @hw: pointer to the HW struct + * @lport: the lport to read from + * @idx: the timestamp index to read + * @tstamp: on return, the 40bit timestamp value * - * Write the data to the SMA controller. 
Only bits 3-7 in data are valid. + * Read a 40bit timestamp value out of the timestamp block of the external PHY + * on the E830 device. */ -enum ice_status ice_write_sma_ctrl_e810t(struct ice_hw *hw, u8 data) +static int +ice_read_phy_tstamp_e830(struct ice_hw *hw, u8 lport, u8 idx, u64 *tstamp) { - enum ice_status status; - u16 handle; - u8 i; + u32 hi_addr = E830_HIGH_TX_MEMORY_BANK(idx, lport); + u32 lo_addr = E830_LOW_TX_MEMORY_BANK(idx, lport); + u32 lo_val, hi_val, lo; + u8 hi; - status = ice_get_pca9575_handle(hw, &handle); - if (status) - return status; + lo_val = rd32(hw, lo_addr); + hi_val = rd32(hw, hi_addr); - for (i = ICE_E810T_SMA_MIN_BIT; i <= ICE_E810T_SMA_MAX_BIT; i++) { - bool pin; + lo = lo_val; + hi = (u8)hi_val; - pin = !(data & (1 << i)); - status = ice_aq_set_gpio(hw, handle, i + ICE_E810T_P1_OFFSET, - pin, NULL); - if (status) - break; - } + /* For E830 devices, the timestamp is reported with the lower 32 bits + * in the low register, and the upper 8 bits in the high register. 
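The comment above describes how an E830 timestamp arrives split across two registers: the lower 32 bits in the low register and the upper 8 bits in the high register. A hedged sketch of the reassembly, assuming a shift of 32 and a 32-bit low mask (consistent with a 40-bit timestamp; the driver's TS_HIGH_S/TS_LOW_M constants are not reproduced here):

```c
#include <stdint.h>

/* Reassemble a 40-bit E830-style timestamp from the low register
 * (lower 32 bits) and the high register (upper 8 bits). Assumes the
 * high byte occupies bits 39:32 of the final value.
 */
static uint64_t assemble_ts40(uint32_t lo, uint8_t hi)
{
	return ((uint64_t)hi << 32) | (uint64_t)lo;
}
```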
+ */ + *tstamp = ((u64)hi) << TS_HIGH_S | ((u64)lo & TS_LOW_M); - return status; + return 0; } /** - * ice_is_pca9575_present - * @hw: pointer to the hw struct + * ice_get_phy_tx_tstamp_ready_e830 - Read Tx memory status register + * @hw: pointer to the HW struct + * @port: the PHY port to read + * @tstamp_ready: contents of the Tx memory status register * - * Check if the SW IO expander is present in the netlist */ -bool ice_is_pca9575_present(struct ice_hw *hw) +static int +ice_get_phy_tx_tstamp_ready_e830(struct ice_hw *hw, u8 port, u64 *tstamp_ready) { - enum ice_status status; - u16 handle = 0; + u64 hi; + u32 lo; - status = ice_get_pca9575_handle(hw, &handle); - if (!status && handle) - return true; + lo = rd32(hw, E830_PRTMAC_TS_TX_MEM_VALID_L); + hi = (u64)rd32(hw, E830_PRTMAC_TS_TX_MEM_VALID_H) << 32; - return false; + *tstamp_ready = hi | lo; + + return 0; } /* Device agnostic functions @@ -4870,6 +4177,122 @@ void ice_ptp_unlock(struct ice_hw *hw) wr32(hw, PFTSYN_SEM + (PFTSYN_SEM_BYTES * hw->pf_id), 0); } +#define ICE_DEVID_MASK 0xFFF8 + +/** + * ice_ptp_init_phy_model - Initialize hw->phy_model based on device type + * @hw: pointer to the HW structure + * + * Determine the PHY model for the device, and initialize hw->phy_model + * for use by other functions. + */ +void ice_ptp_init_phy_model(struct ice_hw *hw) +{ + + if (ice_is_e810(hw)) + hw->phy_model = ICE_PHY_E810; + else if (ice_is_e830(hw)) + hw->phy_model = ICE_PHY_E830; + else + hw->phy_model = ICE_PHY_E822; + hw->phy_ports = ICE_NUM_EXTERNAL_PORTS; + hw->max_phy_port = ICE_NUM_EXTERNAL_PORTS; + + return; +} + +/** + * ice_ptp_write_port_cmd - Prepare a single PHY port for a timer command + * @hw: pointer to HW struct + * @port: Port to which cmd has to be sent + * @cmd: Command to be sent to the port + * @lock_sbq: true if the sideband queue lock must be acquired + * + * Prepare one port for the upcoming timer sync command. 
Do not use this for + * programming only a single port, instead use ice_ptp_one_port_cmd() to + * ensure non-modified ports get properly initialized to ICE_PTP_NOP. + */ +static int ice_ptp_write_port_cmd(struct ice_hw *hw, u8 port, + enum ice_ptp_tmr_cmd cmd, bool lock_sbq) +{ + switch (hw->phy_model) { + case ICE_PHY_E822: + return ice_ptp_write_port_cmd_e822(hw, port, cmd, lock_sbq); + default: + return ICE_ERR_NOT_SUPPORTED; + } +} + +/** + * ice_ptp_one_port_cmd - Program one PHY port for a timer command + * @hw: pointer to HW struct + * @configured_port: the port that should execute the command + * @configured_cmd: the command to be executed on the configured port + * @lock_sbq: true if the sideband queue lock must be acquired + * + * Prepare one port for executing a timer command, while preparing all other + * ports to ICE_PTP_NOP. This allows executing a command on a single port + * while ensuring all other ports do not execute stale commands. + */ +int ice_ptp_one_port_cmd(struct ice_hw *hw, u8 configured_port, + enum ice_ptp_tmr_cmd configured_cmd, bool lock_sbq) +{ + u8 port; + + for (port = 0; port < hw->max_phy_port; port++) { + enum ice_ptp_tmr_cmd cmd; + int err; + + /* Program the configured port with the configured command, + * program all other ports with ICE_PTP_NOP. + */ + cmd = port == configured_port ? configured_cmd : ICE_PTP_NOP; + + err = ice_ptp_write_port_cmd(hw, port, cmd, lock_sbq); + if (err) + return err; + } + + return 0; +} + +/** + * ice_ptp_port_cmd - Prepare PHY ports for a timer sync command + * @hw: pointer to HW struct + * @cmd: the timer command to set up + * @lock_sbq: true if the sideband queue lock must be acquired + * + * Prepare all PHY ports on this device for the requested timer command. For + * some families this can be done in one shot, but for other families each + * port must be configured individually.
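The per-port selection done by ice_ptp_one_port_cmd() above is a simple rule: the configured port gets the requested command, every other port gets NOP so no stale command executes when the sync trigger fires. A standalone sketch of that rule (enum values are illustrative stand-ins for the driver's ice_ptp_tmr_cmd):

```c
/* Stand-ins for the driver's timer command enum */
enum tmr_cmd { CMD_NOP, CMD_INIT_TIME, CMD_ADJ_TIME };

/* Mirror of the selection in ice_ptp_one_port_cmd(): only the
 * configured port executes the real command; all others are armed
 * with NOP so the synchronous trigger cannot replay stale commands.
 */
static enum tmr_cmd port_cmd_for(int port, int configured_port,
				 enum tmr_cmd cmd)
{
	return (port == configured_port) ? cmd : CMD_NOP;
}
```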
+ */ +static int ice_ptp_port_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd, + bool lock_sbq) +{ + u8 port; + + /* PHY models which can program all ports simultaneously */ + switch (hw->phy_model) { + case ICE_PHY_E830: + return ice_ptp_port_cmd_e830(hw, cmd, lock_sbq); + case ICE_PHY_E810: + return ice_ptp_port_cmd_e810(hw, cmd, lock_sbq); + default: + break; + } + + /* PHY models which require programming each port separately */ + for (port = 0; port < hw->max_phy_port; port++) { + int err; + + err = ice_ptp_write_port_cmd(hw, port, cmd, lock_sbq); + if (err) + return err; + } + + return 0; +} + /** * ice_ptp_tmr_cmd - Prepare and trigger a timer sync command * @hw: pointer to HW struct @@ -4881,47 +4304,35 @@ void ice_ptp_unlock(struct ice_hw *hw) * for the command to be synchronously applied to both the source and PHY * timers. */ -static enum ice_status -ice_ptp_tmr_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd, bool lock_sbq) +static int ice_ptp_tmr_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd, + bool lock_sbq) { - enum ice_status status; + int err; /* First, prepare the source timer */ ice_ptp_src_cmd(hw, cmd); /* Next, prepare the ports */ - switch (hw->phy_cfg) { - case ICE_PHY_ETH56G: - status = ice_ptp_port_cmd_eth56g(hw, cmd, lock_sbq); - break; - case ICE_PHY_E810: - status = ice_ptp_port_cmd_e810(hw, cmd, lock_sbq); - break; - case ICE_PHY_E822: - status = ice_ptp_port_cmd_e822(hw, cmd, lock_sbq); - break; - default: - status = ICE_ERR_NOT_SUPPORTED; - } - if (status) { - ice_debug(hw, ICE_DBG_PTP, "Failed to prepare PHY ports for timer command %u, status %d\n", - cmd, status); - return status; + err = ice_ptp_port_cmd(hw, cmd, lock_sbq); + if (err) { + ice_debug(hw, ICE_DBG_PTP, "Failed to prepare PHY ports for timer command %u, err %d\n", + cmd, err); + return err; } /* Write the sync command register to drive both source and PHY timer * commands synchronously */ ice_ptp_exec_tmr_cmd(hw); - ice_ptp_clean_cmd(hw); - return ICE_SUCCESS; + 
return 0; } /** * ice_ptp_init_time - Initialize device time to provided value * @hw: pointer to HW struct * @time: 64bits of time (GLTSYN_TIME_L and GLTSYN_TIME_H) + * @wr_main_tmr: program the main timer * * Initialize the device to the specified time provided. This requires a three * step process: @@ -4931,36 +4342,39 @@ ice_ptp_tmr_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd, bool lock_sbq) * 3) issue an init_time timer command to synchronously switch both the source * and port timers to the new init time value at the next clock cycle. */ -enum ice_status ice_ptp_init_time(struct ice_hw *hw, u64 time) +int ice_ptp_init_time(struct ice_hw *hw, u64 time, bool wr_main_tmr) { - enum ice_status status; + int err; u8 tmr_idx; tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; /* Source timers */ - wr32(hw, GLTSYN_SHTIME_L(tmr_idx), ICE_LO_DWORD(time)); - wr32(hw, GLTSYN_SHTIME_H(tmr_idx), ICE_HI_DWORD(time)); - wr32(hw, GLTSYN_SHTIME_0(tmr_idx), 0); + /* For E830 we don't need to use shadow registers, it's automatic */ + if (hw->phy_model == ICE_PHY_E830) + return ice_ptp_write_direct_phc_time_e830(hw, time); + + if (wr_main_tmr) { + wr32(hw, GLTSYN_SHTIME_L(tmr_idx), ICE_LO_DWORD(time)); + wr32(hw, GLTSYN_SHTIME_H(tmr_idx), ICE_HI_DWORD(time)); + wr32(hw, GLTSYN_SHTIME_0(tmr_idx), 0); + } /* PHY Clks */ /* Fill Rx and Tx ports and send msg to PHY */ - switch (hw->phy_cfg) { - case ICE_PHY_ETH56G: - status = ice_ptp_prep_phy_time_eth56g(hw, time & 0xFFFFFFFF); - break; + switch (hw->phy_model) { case ICE_PHY_E810: - status = ice_ptp_prep_phy_time_e810(hw, time & 0xFFFFFFFF); + err = ice_ptp_prep_phy_time_e810(hw, (u32)(time & 0xFFFFFFFF)); break; case ICE_PHY_E822: - status = ice_ptp_prep_phy_time_e822(hw, time & 0xFFFFFFFF); + err = ice_ptp_prep_phy_time_e822(hw, (u32)(time & 0xFFFFFFFF)); break; default: - status = ICE_ERR_NOT_SUPPORTED; + err = ICE_ERR_NOT_SUPPORTED; } - if (status) - return status; + if (err) + return err; return ice_ptp_tmr_cmd(hw,
ICE_PTP_INIT_TIME, true); } @@ -4969,8 +4383,9 @@ enum ice_status ice_ptp_init_time(struct ice_hw *hw, u64 time) * ice_ptp_write_incval - Program PHC with new increment value * @hw: pointer to HW struct * @incval: Source timer increment value per clock cycle + * @wr_main_tmr: Program the main timer * - * Program the PHC with a new increment value. This requires a three-step + * Program the timers with a new increment value. This requires a three-step * process: * * 1) Write the increment value to the source timer shadow registers @@ -4979,33 +4394,37 @@ enum ice_status ice_ptp_init_time(struct ice_hw *hw, u64 time) * the source and port timers to the new increment value at the next clock * cycle. */ -enum ice_status ice_ptp_write_incval(struct ice_hw *hw, u64 incval) +int ice_ptp_write_incval(struct ice_hw *hw, u64 incval, + bool wr_main_tmr) { - enum ice_status status; + int err; u8 tmr_idx; tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; - /* Shadow Adjust */ - wr32(hw, GLTSYN_SHADJ_L(tmr_idx), ICE_LO_DWORD(incval)); - wr32(hw, GLTSYN_SHADJ_H(tmr_idx), ICE_HI_DWORD(incval)); + /* For E830 we don't need to use shadow registers, it's automatic */ + if (hw->phy_model == ICE_PHY_E830) + return ice_ptp_write_direct_incval_e830(hw, incval); - switch (hw->phy_cfg) { - case ICE_PHY_ETH56G: - status = ice_ptp_prep_phy_incval_eth56g(hw, incval); - break; + if (wr_main_tmr) { + /* Shadow Adjust */ + wr32(hw, GLTSYN_SHADJ_L(tmr_idx), ICE_LO_DWORD(incval)); + wr32(hw, GLTSYN_SHADJ_H(tmr_idx), ICE_HI_DWORD(incval)); + } + + switch (hw->phy_model) { case ICE_PHY_E810: - status = ice_ptp_prep_phy_incval_e810(hw, incval); + err = ice_ptp_prep_phy_incval_e810(hw, incval); break; case ICE_PHY_E822: - status = ice_ptp_prep_phy_incval_e822(hw, incval); + err = ice_ptp_prep_phy_incval_e822(hw, incval); break; default: - status = ICE_ERR_NOT_SUPPORTED; + err = ICE_ERR_NOT_SUPPORTED; } - if (status) - return status; + if (err) + return err; return ice_ptp_tmr_cmd(hw,
ICE_PTP_INIT_INCVAL, true); } @@ -5014,21 +4433,23 @@ enum ice_status ice_ptp_write_incval(struct ice_hw *hw, u64 incval) * ice_ptp_write_incval_locked - Program new incval while holding semaphore * @hw: pointer to HW struct * @incval: Source timer increment value per clock cycle + * @wr_main_tmr: Program the main timer * * Program a new PHC incval while holding the PTP semaphore. */ -enum ice_status ice_ptp_write_incval_locked(struct ice_hw *hw, u64 incval) +int ice_ptp_write_incval_locked(struct ice_hw *hw, u64 incval, + bool wr_main_tmr) { - enum ice_status status; + int err; if (!ice_ptp_lock(hw)) return ICE_ERR_NOT_READY; - status = ice_ptp_write_incval(hw, incval); + err = ice_ptp_write_incval(hw, incval, wr_main_tmr); ice_ptp_unlock(hw); - return status; + return err; } /** @@ -5046,37 +4467,37 @@ enum ice_status ice_ptp_write_incval_locked(struct ice_hw *hw, u64 incval) * 3) Issue an ICE_PTP_ADJ_TIME timer command to synchronously apply the * adjustment to both the source and port timers at the next clock cycle. */ -enum ice_status ice_ptp_adj_clock(struct ice_hw *hw, s32 adj, bool lock_sbq) +int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj, bool lock_sbq) { - enum ice_status status; + int err; u8 tmr_idx; tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; /* Write the desired clock adjustment into the GLTSYN_SHADJ register. - * For an ICE_PTP_ADJ_TIME command, this set of registers represents - * the value to add to the clock time. It supports subtraction by - * interpreting the value as a 2's complement integer. + * For an ICE_PTP_ADJ_TIME command, this set of registers represents the value + * to add to the clock time. It supports subtraction by interpreting + * the value as a 2's complement integer. 
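The GLTSYN_SHADJ comment above notes that subtraction is supported by interpreting the register value as a 2's complement integer. A small standalone sketch of that interpretation (helper names are illustrative, not driver APIs):

```c
#include <stdint.h>

/* Writing a signed adjustment into an unsigned 32-bit register works
 * because the hardware interprets the bit pattern as 2's complement:
 * the same pattern adds a negative delta to the clock.
 */
static uint32_t adj_to_reg(int32_t adj)
{
	return (uint32_t)adj;			/* bit pattern preserved */
}

/* Model of what the hardware does when the ADJ_TIME command fires */
static uint64_t apply_adj(uint64_t clock_ns, uint32_t reg)
{
	return clock_ns + (int64_t)(int32_t)reg;	/* sign-extended add */
}
```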
*/ wr32(hw, GLTSYN_SHADJ_L(tmr_idx), 0); wr32(hw, GLTSYN_SHADJ_H(tmr_idx), adj); - switch (hw->phy_cfg) { - case ICE_PHY_ETH56G: - status = ice_ptp_prep_phy_adj_eth56g(hw, adj, lock_sbq); - break; + switch (hw->phy_model) { + case ICE_PHY_E830: + /* E830 sync PHYs automatically after setting GLTSYN_SHADJ */ + return 0; case ICE_PHY_E810: - status = ice_ptp_prep_phy_adj_e810(hw, adj, lock_sbq); + err = ice_ptp_prep_phy_adj_e810(hw, adj, lock_sbq); break; case ICE_PHY_E822: - status = ice_ptp_prep_phy_adj_e822(hw, adj, lock_sbq); + err = ice_ptp_prep_phy_adj_e822(hw, adj, lock_sbq); break; default: - status = ICE_ERR_NOT_SUPPORTED; + err = ICE_ERR_NOT_SUPPORTED; } - if (status) - return status; + if (err) + return err; return ice_ptp_tmr_cmd(hw, ICE_PTP_ADJ_TIME, lock_sbq); } @@ -5097,12 +4518,12 @@ enum ice_status ice_ptp_adj_clock(struct ice_hw *hw, s32 adj, bool lock_sbq) * 5) Issue an ICE_PTP_ADJ_TIME_AT_TIME command to initiate the atomic * adjustment. */ -enum ice_status +int ice_ptp_adj_clock_at_time(struct ice_hw *hw, u64 at_time, s32 adj) { - enum ice_status status; u32 time_lo, time_hi; u8 tmr_idx; + int err; tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned; time_lo = ICE_LO_DWORD(at_time); @@ -5122,44 +4543,58 @@ ice_ptp_adj_clock_at_time(struct ice_hw *hw, u64 at_time, s32 adj) wr32(hw, GLTSYN_SHTIME_H(tmr_idx), time_hi); /* Prepare PHY port adjustments */ - switch (hw->phy_cfg) { - case ICE_PHY_ETH56G: - status = ice_ptp_prep_phy_adj_eth56g(hw, adj, true); - break; + switch (hw->phy_model) { case ICE_PHY_E810: - status = ice_ptp_prep_phy_adj_e810(hw, adj, true); + err = ice_ptp_prep_phy_adj_e810(hw, adj, true); break; case ICE_PHY_E822: - status = ice_ptp_prep_phy_adj_e822(hw, adj, true); + err = ice_ptp_prep_phy_adj_e822(hw, adj, true); break; default: - status = ICE_ERR_NOT_SUPPORTED; + err = ICE_ERR_NOT_SUPPORTED; } - if (status) - return status; + if (err) + return err; /* Set target time for each PHY port */ - switch (hw->phy_cfg) { - case 
ICE_PHY_ETH56G: - status = ice_ptp_prep_phy_adj_target_eth56g(hw, time_lo); - break; + switch (hw->phy_model) { case ICE_PHY_E810: - status = ice_ptp_prep_phy_adj_target_e810(hw, time_lo); + err = ice_ptp_prep_phy_adj_target_e810(hw, time_lo); break; case ICE_PHY_E822: - status = ice_ptp_prep_phy_adj_target_e822(hw, time_lo); + err = ice_ptp_prep_phy_adj_target_e822(hw, time_lo); break; default: - status = ICE_ERR_NOT_SUPPORTED; + err = ICE_ERR_NOT_SUPPORTED; } - if (status) - return status; + if (err) + return err; return ice_ptp_tmr_cmd(hw, ICE_PTP_ADJ_TIME_AT_TIME, true); } +/** + * ice_ptp_clear_phy_offset_ready - Clear PHY TX_/RX_OFFSET_READY registers + * @hw: pointer to the HW struct + * + * Clear PHY TX_/RX_OFFSET_READY registers, effectively marking all transmitted + * and received timestamps as invalid. + */ +int ice_ptp_clear_phy_offset_ready(struct ice_hw *hw) +{ + switch (hw->phy_model) { + case ICE_PHY_E830: + case ICE_PHY_E810: + return 0; + case ICE_PHY_E822: + return ice_ptp_clear_phy_offset_ready_e822(hw); + default: + return ICE_ERR_NOT_SUPPORTED; + } +} + /** * ice_read_phy_tstamp - Read a PHY timestamp from the timestamp block * @hw: pointer to the HW struct @@ -5171,26 +4606,18 @@ ice_ptp_adj_clock_at_time(struct ice_hw *hw, u64 at_time, s32 adj) * the block is the quad to read from. For E810 devices, the block is the * logical port to read from. 
*/ -enum ice_status -ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp) +int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp) { - enum ice_status status; - - switch (hw->phy_cfg) { - case ICE_PHY_ETH56G: - status = ice_read_phy_tstamp_eth56g(hw, block, idx, tstamp); - break; + switch (hw->phy_model) { + case ICE_PHY_E830: + return ice_read_phy_tstamp_e830(hw, block, idx, tstamp); case ICE_PHY_E810: - status = ice_read_phy_tstamp_e810(hw, block, idx, tstamp); - break; + return ice_read_phy_tstamp_e810(hw, block, idx, tstamp); case ICE_PHY_E822: - status = ice_read_phy_tstamp_e822(hw, block, idx, tstamp); - break; + return ice_read_phy_tstamp_e822(hw, block, idx, tstamp); default: - status = ICE_ERR_NOT_SUPPORTED; + return ICE_ERR_NOT_SUPPORTED; } - - return status; } /** @@ -5199,30 +4626,43 @@ ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp) * @block: the block to read from * @idx: the timestamp index to reset * - * Clear a timestamp, resetting its valid bit, from the timestamp block. For - * E822 devices, the block is the quad to clear from. For E810 devices, the - * block is the logical port to clear from. + * Clear a timestamp from the timestamp block, discarding its value without + * returning it. This resets the memory status bit for the timestamp index + * allowing it to be reused for another timestamp in the future. + * + * For E822 devices, the block number is the PHY quad to clear from. For E810 + * devices, the block number is the logical port to clear from. + * + * This function must only be called on a timestamp index whose valid bit is + * set according to ice_get_phy_tx_tstamp_ready(). 
*/ -enum ice_status -ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx) +int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx) { - enum ice_status status; - - switch (hw->phy_cfg) { - case ICE_PHY_ETH56G: - status = ice_clear_phy_tstamp_eth56g(hw, block, idx); - break; + switch (hw->phy_model) { case ICE_PHY_E810: - status = ice_clear_phy_tstamp_e810(hw, block, idx); - break; + return ice_clear_phy_tstamp_e810(hw, block, idx); case ICE_PHY_E822: - status = ice_clear_phy_tstamp_e822(hw, block, idx); - break; + return ice_clear_phy_tstamp_e822(hw, block, idx); default: - status = ICE_ERR_NOT_SUPPORTED; + return ICE_ERR_NOT_SUPPORTED; } +} - return status; +/** + * ice_ptp_reset_ts_memory - Reset timestamp memory for all blocks + * @hw: pointer to the HW struct + */ +void ice_ptp_reset_ts_memory(struct ice_hw *hw) +{ + switch (hw->phy_model) { + case ICE_PHY_E822: + ice_ptp_reset_ts_memory_e822(hw); + break; + case ICE_PHY_E810: + case ICE_PHY_E830: + default: + return; + } } /** @@ -5231,9 +4671,8 @@ ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx) * * Perform the steps required to initialize the PTP hardware clock. 
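ice_ptp_init_phc() and the other device-agnostic wrappers in this section all follow the same dispatch-by-phy_model pattern: switch on the detected PHY family, call the family-specific implementation, and fail cleanly for unknown models. A minimal standalone sketch (the enum values and error code are stand-ins, not the driver's definitions):

```c
/* Stand-ins for the driver's phy model enum and error code */
enum phy_model { PHY_UNKNOWN, PHY_E810, PHY_E822, PHY_E830 };

#define ERR_NOT_SUPPORTED (-1)	/* stand-in for ICE_ERR_NOT_SUPPORTED */

/* Dispatch-by-model wrapper: each supported family would invoke its
 * own handler; unknown models fail instead of touching hardware.
 */
static int init_phc(enum phy_model model)
{
	switch (model) {
	case PHY_E810:	/* ice_ptp_init_phc_e810() would run here */
	case PHY_E822:	/* ice_ptp_init_phc_e822() */
	case PHY_E830:	/* ice_ptp_init_phc_e830() */
		return 0;
	default:
		return ERR_NOT_SUPPORTED;
	}
}
```

Centralizing the switch in one wrapper per operation keeps callers family-agnostic; adding a new PHY family means adding one case per wrapper rather than touching every call site.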
*/ -enum ice_status ice_ptp_init_phc(struct ice_hw *hw) +int ice_ptp_init_phc(struct ice_hw *hw) { - enum ice_status status; u8 src_idx = hw->func_caps.ts_func_info.tmr_index_owned; /* Enable source clocks */ @@ -5242,19 +4681,101 @@ enum ice_status ice_ptp_init_phc(struct ice_hw *hw) /* Clear event status indications for auxiliary pins */ (void)rd32(hw, GLTSYN_STAT(src_idx)); - switch (hw->phy_cfg) { - case ICE_PHY_ETH56G: - status = ice_ptp_init_phc_eth56g(hw); - break; + switch (hw->phy_model) { case ICE_PHY_E810: - status = ice_ptp_init_phc_e810(hw); - break; + return ice_ptp_init_phc_e810(hw); + case ICE_PHY_E822: + return ice_ptp_init_phc_e822(hw); + case ICE_PHY_E830: + return ice_ptp_init_phc_e830(hw); + default: + return ICE_ERR_NOT_SUPPORTED; + } +} + +/** + * ice_get_phy_tx_tstamp_ready - Read PHY Tx memory status indication + * @hw: pointer to the HW struct + * @block: the timestamp block to check + * @tstamp_ready: storage for the PHY Tx memory status information + * + * Check the PHY for Tx timestamp memory status. This reports a 64 bit value + * which indicates which timestamps in the block may be captured. A set bit + * means the timestamp can be read. An unset bit means the timestamp is not + * ready and software should avoid reading the register. 
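The 64-bit Tx memory status value described above is a per-index bitmap: a set bit means that timestamp slot holds a capture that is safe to read. A sketch of the per-index check software would perform before reading a timestamp register:

```c
#include <stdint.h>

/* Returns nonzero if timestamp slot 'idx' in the 64-bit ready bitmap
 * is valid; a clear bit means the timestamp is not ready and the
 * register read should be skipped.
 */
static int tstamp_idx_ready(uint64_t tstamp_ready, uint8_t idx)
{
	return (int)((tstamp_ready >> idx) & 1ULL);
}
```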
+ */ +int ice_get_phy_tx_tstamp_ready(struct ice_hw *hw, u8 block, u64 *tstamp_ready) +{ + switch (hw->phy_model) { + case ICE_PHY_E830: + return ice_get_phy_tx_tstamp_ready_e830(hw, block, + tstamp_ready); + case ICE_PHY_E810: + return ice_get_phy_tx_tstamp_ready_e810(hw, block, + tstamp_ready); case ICE_PHY_E822: - status = ice_ptp_init_phc_e822(hw); + return ice_get_phy_tx_tstamp_ready_e822(hw, block, + tstamp_ready); break; default: - status = ICE_ERR_NOT_SUPPORTED; + return ICE_ERR_NOT_SUPPORTED; } +} - return status; +/** + * ice_ptp_read_port_capture - Read a port's local time capture + * @hw: pointer to HW struct + * @port: Port number to read + * @tx_ts: on return, the Tx port time capture + * @rx_ts: on return, the Rx port time capture + * + * Read the port's Tx and Rx local time capture values. + */ +int +ice_ptp_read_port_capture(struct ice_hw *hw, u8 port, u64 *tx_ts, + u64 *rx_ts) +{ + switch (hw->phy_model) { + case ICE_PHY_E822: + return ice_ptp_read_port_capture_e822(hw, port, + tx_ts, rx_ts); + default: + return ICE_ERR_NOT_SUPPORTED; + } +} + +/** + * ice_ptp_read_phy_incval - Read a PHY port's current incval + * @hw: pointer to the HW struct + * @port: the port to read + * @incval: on return, the time_clk_cyc incval for this port + * + * Read the time_clk_cyc increment value for a given PHY port. + */ +int +ice_ptp_read_phy_incval(struct ice_hw *hw, u8 port, u64 *incval) +{ + switch (hw->phy_model) { + case ICE_PHY_E822: + return ice_ptp_read_phy_incval_e822(hw, port, incval); + default: + return ICE_ERR_NOT_SUPPORTED; + } +} + +/** + * refsync_pin_id_valid + * @hw: pointer to the HW struct + * @id: pin index + * + * Checks whether DPLL's input pin can be configured to ref-sync pairing mode. 
+ */ +bool refsync_pin_id_valid(struct ice_hw *hw, u8 id) +{ + /* refsync is allowed only on pins 1 or 5 for E810T */ + if (ice_is_e810t(hw) && id != 1 && id != 5) + return false; + + return true; } + diff --git a/drivers/net/ice/base/ice_ptp_hw.h b/drivers/net/ice/base/ice_ptp_hw.h index 3667c9882d..02f8dc8ff2 100644 --- a/drivers/net/ice/base/ice_ptp_hw.h +++ b/drivers/net/ice/base/ice_ptp_hw.h @@ -41,6 +41,14 @@ enum ice_ptp_fec_mode { ICE_PTP_FEC_MODE_RS_FEC }; +/* Main timer mode */ +enum ice_src_tmr_mode { + ICE_SRC_TMR_MODE_NANOSECONDS, + ICE_SRC_TMR_MODE_LOCKED, + + NUM_ICE_SRC_TMR_MODE +}; + /** * struct ice_time_ref_info_e822 * @pll_freq: Frequency of PLL that drives timer ticks in Hz @@ -123,7 +131,10 @@ extern const struct ice_vernier_info_e822 e822_vernier[NUM_ICE_PTP_LNK_SPD]; /* Increment value to generate nanoseconds in the GLTSYN_TIME_L register for * the E810 devices. Based off of a PLL with an 812.5 MHz frequency. */ -#define ICE_PTP_NOMINAL_INCVAL_E810 0x13b13b13bULL + +#define ICE_E810_PLL_FREQ 812500000 +#define ICE_PTP_NOMINAL_INCVAL_E810 0x13b13b13bULL +#define E810_OUT_PROP_DELAY_NS 1 /* Device agnostic functions */ u8 ice_get_ptp_src_clock_index(struct ice_hw *hw); @@ -131,41 +142,70 @@ u64 ice_ptp_read_src_incval(struct ice_hw *hw); bool ice_ptp_lock(struct ice_hw *hw); void ice_ptp_unlock(struct ice_hw *hw); void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd); -enum ice_status ice_ptp_init_time(struct ice_hw *hw, u64 time); -enum ice_status ice_ptp_write_incval(struct ice_hw *hw, u64 incval); -enum ice_status ice_ptp_write_incval_locked(struct ice_hw *hw, u64 incval); -enum ice_status ice_ptp_adj_clock(struct ice_hw *hw, s32 adj, bool lock_sbq); -enum ice_status +int ice_ptp_init_time(struct ice_hw *hw, u64 time, + bool wr_main_tmr); +int ice_ptp_write_incval(struct ice_hw *hw, u64 incval, + bool wr_main_tmr); +int ice_ptp_write_incval_locked(struct ice_hw *hw, u64 incval, + bool wr_main_tmr); +int ice_ptp_adj_clock(struct 
ice_hw *hw, s32 adj, bool lock_sbq); +int ice_ptp_adj_clock_at_time(struct ice_hw *hw, u64 at_time, s32 adj); -enum ice_status +int ice_ptp_clear_phy_offset_ready(struct ice_hw *hw); +int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp); -enum ice_status +int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx); -enum ice_status ice_ptp_init_phc(struct ice_hw *hw); +void ice_ptp_reset_ts_memory(struct ice_hw *hw); +int ice_ptp_init_phc(struct ice_hw *hw); +bool refsync_pin_id_valid(struct ice_hw *hw, u8 id); +int +ice_get_phy_tx_tstamp_ready(struct ice_hw *hw, u8 block, u64 *tstamp_ready); +int +ice_ptp_one_port_cmd(struct ice_hw *hw, u8 configured_port, + enum ice_ptp_tmr_cmd configured_cmd, bool lock_sbq); +int +ice_ptp_read_port_capture(struct ice_hw *hw, u8 port, u64 *tx_ts, u64 *rx_ts); +int +ice_ptp_read_phy_incval(struct ice_hw *hw, u8 port, u64 *incval); /* E822 family functions */ -enum ice_status +#define LOCKED_INCVAL_E822 0x100000000ULL + +int ice_read_phy_reg_e822(struct ice_hw *hw, u8 port, u16 offset, u32 *val); -enum ice_status +int ice_write_phy_reg_e822(struct ice_hw *hw, u8 port, u16 offset, u32 val); -enum ice_status +int ice_read_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 *val); -enum ice_status +int ice_write_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 val); -enum ice_status +int ice_ptp_prep_port_adj_e822(struct ice_hw *hw, u8 port, s64 time, bool lock_sbq); -enum ice_status +int ice_ptp_read_phy_incval_e822(struct ice_hw *hw, u8 port, u64 *incval); -enum ice_status +int ice_ptp_read_port_capture_e822(struct ice_hw *hw, u8 port, u64 *tx_ts, u64 *rx_ts); -enum ice_status -ice_ptp_one_port_cmd_e822(struct ice_hw *hw, u8 port, - enum ice_ptp_tmr_cmd cmd, bool lock_sbq); -enum ice_status -ice_cfg_cgu_pll_e822(struct ice_hw *hw, enum ice_time_ref_freq clk_freq, - enum ice_clk_src clk_src); +const char *ice_clk_freq_str(u8 clk_freq); +const char *ice_clk_src_str(u8 clk_src); +void 
ice_ptp_reset_ts_memory_quad_e822(struct ice_hw *hw, u8 quad); +int +ice_cfg_cgu_pll_e822(struct ice_hw *hw, enum ice_time_ref_freq *clk_freq, + enum ice_clk_src *clk_src); +int +ice_cfg_cgu_pll_e825c(struct ice_hw *hw, enum ice_time_ref_freq *clk_freq, + enum ice_clk_src *clk_src); +int +ice_cgu_ts_pll_lost_lock_e825c(struct ice_hw *hw, bool *lost_lock); +int ice_cgu_ts_pll_restart_e825c(struct ice_hw *hw); +int +ice_cgu_bypass_mux_port_active_e825c(struct ice_hw *hw, u8 port, bool *active); +int +ice_cfg_cgu_bypass_mux_e825c(struct ice_hw *hw, u8 port_num, bool clock_1588, + unsigned int ena); +int ice_cfg_synce_ethdiv_e825c(struct ice_hw *hw, u8 *divider); /** * ice_e822_time_ref - Get the current TIME_REF from capabilities @@ -208,65 +248,99 @@ static inline u64 ice_e822_pps_delay(enum ice_time_ref_freq time_ref) } /* E822 Vernier calibration functions */ -enum ice_status ice_ptp_set_vernier_wl(struct ice_hw *hw); -enum ice_status +int ice_ptp_set_vernier_wl(struct ice_hw *hw); +int ice_phy_get_speed_and_fec_e822(struct ice_hw *hw, u8 port, enum ice_ptp_link_spd *link_out, enum ice_ptp_fec_mode *fec_out); void ice_phy_cfg_lane_e822(struct ice_hw *hw, u8 port); -enum ice_status +int ice_stop_phy_timer_e822(struct ice_hw *hw, u8 port, bool soft_reset); -enum ice_status -ice_start_phy_timer_e822(struct ice_hw *hw, u8 port, bool bypass); -enum ice_status ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port); -enum ice_status ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port); -enum ice_status ice_phy_exit_bypass_e822(struct ice_hw *hw, u8 port); +int ice_start_phy_timer_e822(struct ice_hw *hw, u8 port); +int ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port); +int ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port); +int +ice_phy_cfg_intr_e822(struct ice_hw *hw, u8 quad, bool ena, u8 threshold); /* E810 family functions */ -bool ice_is_gps_present_e810t(struct ice_hw *hw); -enum ice_status ice_ptp_init_phy_e810(struct ice_hw *hw); -enum ice_status +int 
ice_ptp_init_phy_e810(struct ice_hw *hw); +int ice_read_pca9575_reg_e810t(struct ice_hw *hw, u8 offset, u8 *data); -enum ice_status +int ice_write_pca9575_reg_e810t(struct ice_hw *hw, u8 offset, u8 data); -enum ice_status ice_read_sma_ctrl_e810t(struct ice_hw *hw, u8 *data); -enum ice_status ice_write_sma_ctrl_e810t(struct ice_hw *hw, u8 data); +int ice_read_sma_ctrl_e810t(struct ice_hw *hw, u8 *data); +int ice_write_sma_ctrl_e810t(struct ice_hw *hw, u8 data); bool ice_is_pca9575_present(struct ice_hw *hw); +int ice_ptp_read_sdp_section_from_nvm(struct ice_hw *hw, bool *section_exist, + u8 *pin_desc_num, u8 *pin_config_num, + u16 *sdp_entries, u8 *nvm_entries); void ice_ptp_process_cgu_err(struct ice_hw *hw, struct ice_rq_event_info *event); -/* ETH56G family functions */ -enum ice_status -ice_read_phy_reg_eth56g(struct ice_hw *hw, u8 port, u16 offset, u32 *val); -enum ice_status -ice_write_phy_reg_eth56g(struct ice_hw *hw, u8 port, u16 offset, u32 val); -enum ice_status -ice_read_phy_mem_eth56g(struct ice_hw *hw, u8 port, u16 offset, u32 *val); -enum ice_status -ice_write_phy_mem_eth56g(struct ice_hw *hw, u8 port, u16 offset, u32 val); - -enum ice_status -ice_ptp_prep_port_adj_eth56g(struct ice_hw *hw, u8 port, s64 time, - bool lock_sbq); - -enum ice_status -ice_ptp_read_phy_incval_eth56g(struct ice_hw *hw, u8 port, u64 *incval); -enum ice_status -ice_ptp_read_port_capture_eth56g(struct ice_hw *hw, u8 port, - u64 *tx_ts, u64 *rx_ts); -enum ice_status -ice_ptp_one_port_cmd_eth56g(struct ice_hw *hw, u8 port, - enum ice_ptp_tmr_cmd cmd, bool lock_sbq); -enum ice_status -ice_ptp_read_tx_hwtstamp_status_eth56g(struct ice_hw *hw, u32 *ts_status); -enum ice_status -ice_stop_phy_timer_eth56g(struct ice_hw *hw, u8 port, bool soft_reset); -enum ice_status -ice_start_phy_timer_eth56g(struct ice_hw *hw, u8 port, bool bypass); -enum ice_status ice_phy_cfg_tx_offset_eth56g(struct ice_hw *hw, u8 port); -enum ice_status ice_phy_cfg_rx_offset_eth56g(struct ice_hw *hw, u8 port); - 
-enum ice_status ice_ptp_init_phy_cfg(struct ice_hw *hw); + +void ice_ptp_init_phy_model(struct ice_hw *hw); + +/** + * ice_ptp_get_pll_freq - Get PLL frequency + * @hw: Board private structure + */ +static inline u64 +ice_ptp_get_pll_freq(struct ice_hw *hw) +{ + switch (hw->phy_model) { + case ICE_PHY_E810: + return ICE_E810_PLL_FREQ; + case ICE_PHY_E822: + return ice_e822_pll_freq(ice_e822_time_ref(hw)); + default: + return 0; + } +} + +static inline u64 +ice_prop_delay(struct ice_hw *hw) +{ + switch (hw->phy_model) { + case ICE_PHY_E810: + return E810_OUT_PROP_DELAY_NS; + case ICE_PHY_E822: + return ice_e822_pps_delay(ice_e822_time_ref(hw)); + default: + return 0; + } +} + +static inline enum ice_time_ref_freq +ice_time_ref(struct ice_hw *hw) +{ + switch (hw->phy_model) { + case ICE_PHY_E810: + case ICE_PHY_E822: + return ice_e822_time_ref(hw); + default: + return ICE_TIME_REF_FREQ_INVALID; + } +} + +static inline u64 +ice_get_base_incval(struct ice_hw *hw, enum ice_src_tmr_mode src_tmr_mode) +{ + switch (hw->phy_model) { + case ICE_PHY_E830: + case ICE_PHY_E810: + return ICE_PTP_NOMINAL_INCVAL_E810; + case ICE_PHY_E822: + if (src_tmr_mode == ICE_SRC_TMR_MODE_NANOSECONDS && + ice_e822_time_ref(hw) < NUM_ICE_TIME_REF_FREQ) + return ice_e822_nominal_incval(ice_e822_time_ref(hw)); + else + return LOCKED_INCVAL_E822; + + break; + default: + return 0; + } +} #define PFTSYN_SEM_BYTES 4 @@ -295,6 +369,9 @@ enum ice_status ice_ptp_init_phy_cfg(struct ice_hw *hw); #define TS_CMD_MASK_E810 0xFF #define TS_CMD_MASK 0xF #define SYNC_EXEC_CMD 0x3 +#define TS_CMD_RX_TYPE_S 0x4 +#define TS_CMD_RX_TYPE MAKEMASK(0x18, TS_CMD_RX_TYPE_S) + /* Macros to derive port low and high addresses on both quads */ #define P_Q0_L(a, p) ((((a) + (0x2000 * (p)))) & 0xFFFF) @@ -470,7 +547,9 @@ enum ice_status ice_ptp_init_phy_cfg(struct ice_hw *hw); #define ETH_GLTSYN_SHADJ_H(_i) (0x0300037C + ((_i) * 32)) /* E810 timer command register */ -#define ETH_GLTSYN_CMD 0x03000344 +#define 
E810_ETH_GLTSYN_CMD 0x03000344 +/* E830 timer command register */ +#define E830_ETH_GLTSYN_CMD 0x00088814 /* Source timer incval macros */ #define INCVAL_HIGH_M 0xFF @@ -488,7 +567,12 @@ enum ice_status ice_ptp_init_phy_cfg(struct ice_hw *hw); #define BYTES_PER_IDX_ADDR_L 4 /* Tx timestamp low latency read definitions */ -#define TS_LL_READ_RETRIES 200 +#define TS_LL_MAX_TIME_READ_PER_PORT 80 +#define TS_LL_MAX_PORT 8 +#define TS_LL_DELTA_TIME 360 +#define TS_LL_READ_RETRIES (TS_LL_MAX_TIME_READ_PER_PORT * \ + TS_LL_MAX_PORT) + TS_LL_DELTA_TIME +#define TS_LL_READ_TS_INTR BIT(30) #define TS_LL_READ_TS BIT(31) #define TS_LL_READ_TS_IDX_S 24 #define TS_LL_READ_TS_IDX_M MAKEMASK(0x3F, 0) @@ -509,6 +593,30 @@ enum ice_status ice_ptp_init_phy_cfg(struct ice_hw *hw); #define LOW_TX_MEMORY_BANK_START 0x03090000 #define HIGH_TX_MEMORY_BANK_START 0x03090004 +#define E830_LOW_TX_MEMORY_BANK(slot, port) \ + (E830_PRTTSYN_TXTIME_L(slot) + 0x8 * (port)) +#define E830_HIGH_TX_MEMORY_BANK(slot, port) \ + (E830_PRTTSYN_TXTIME_H(slot) + 0x8 * (port)) + +/* E810T SMA controller pin control */ +#define ICE_SMA1_DIR_EN_E810T BIT(4) +#define ICE_SMA1_TX_EN_E810T BIT(5) +#define ICE_SMA2_UFL2_RX_DIS_E810T BIT(3) +#define ICE_SMA2_DIR_EN_E810T BIT(6) +#define ICE_SMA2_TX_EN_E810T BIT(7) + +#define ICE_SMA1_MASK_E810T (ICE_SMA1_DIR_EN_E810T | \ + ICE_SMA1_TX_EN_E810T) +#define ICE_SMA2_MASK_E810T (ICE_SMA2_UFL2_RX_DIS_E810T | \ + ICE_SMA2_DIR_EN_E810T | \ + ICE_SMA2_TX_EN_E810T) +#define ICE_ALL_SMA_MASK_E810T (ICE_SMA1_MASK_E810T | \ + ICE_SMA2_MASK_E810T) + +#define ICE_SMA_MIN_BIT_E810T 3 +#define ICE_SMA_MAX_BIT_E810T 7 +#define ICE_PCA9575_P1_OFFSET 8 + /* E810T PCA9575 IO controller registers */ #define ICE_PCA9575_P0_IN 0x0 #define ICE_PCA9575_P1_IN 0x1 @@ -519,89 +627,5 @@ enum ice_status ice_ptp_init_phy_cfg(struct ice_hw *hw); /* E810T PCA9575 IO controller pin control */ #define ICE_E810T_P0_GNSS_PRSNT_N BIT(4) -#define ICE_E810T_P1_SMA1_DIR_EN BIT(4) -#define 
ICE_E810T_P1_SMA1_TX_EN BIT(5) -#define ICE_E810T_P1_SMA2_UFL2_RX_DIS BIT(3) -#define ICE_E810T_P1_SMA2_DIR_EN BIT(6) -#define ICE_E810T_P1_SMA2_TX_EN BIT(7) - -#define ICE_E810T_SMA_MIN_BIT 3 -#define ICE_E810T_SMA_MAX_BIT 7 -#define ICE_E810T_P1_OFFSET 8 -/* 56G PHY quad register base addresses */ -#define ICE_PHY0_BASE 0x092000 -#define ICE_PHY1_BASE 0x126000 -#define ICE_PHY2_BASE 0x1BA000 -#define ICE_PHY3_BASE 0x24E000 -#define ICE_PHY4_BASE 0x2E2000 - -/* Timestamp memory */ -#define PHY_PTP_LANE_ADDR_STEP 0x98 - -#define PHY_PTP_MEM_START 0x1000 -#define PHY_PTP_MEM_LANE_STEP 0x04A0 -#define PHY_PTP_MEM_LOCATIONS 0x40 - -/* Number of PHY ports */ -#define ICE_NUM_PHY_PORTS 5 -/* Timestamp PHY incval registers */ -#define PHY_REG_TIMETUS_L 0x8 -#define PHY_REG_TIMETUS_U 0xC - -/* Timestamp init registers */ -#define PHY_REG_RX_TIMER_INC_PRE_L 0x64 -#define PHY_REG_RX_TIMER_INC_PRE_U 0x68 - -#define PHY_REG_TX_TIMER_INC_PRE_L 0x44 -#define PHY_REG_TX_TIMER_INC_PRE_U 0x48 - -/* Timestamp match and adjust target registers */ -#define PHY_REG_RX_TIMER_CNT_ADJ_L 0x6C -#define PHY_REG_RX_TIMER_CNT_ADJ_U 0x70 - -#define PHY_REG_TX_TIMER_CNT_ADJ_L 0x4C -#define PHY_REG_TX_TIMER_CNT_ADJ_U 0x50 - -/* Timestamp command registers */ -#define PHY_REG_TX_TMR_CMD 0x40 -#define PHY_REG_RX_TMR_CMD 0x60 - -/* Phy offset ready registers */ -#define PHY_REG_TX_OFFSET_READY 0x54 -#define PHY_REG_RX_OFFSET_READY 0x74 -/* Phy total offset registers */ -#define PHY_REG_TOTAL_TX_OFFSET_L 0x38 -#define PHY_REG_TOTAL_TX_OFFSET_U 0x3C - -#define PHY_REG_TOTAL_RX_OFFSET_L 0x58 -#define PHY_REG_TOTAL_RX_OFFSET_U 0x5C - -/* Timestamp capture registers */ -#define PHY_REG_TX_CAPTURE_L 0x78 -#define PHY_REG_TX_CAPTURE_U 0x7C - -#define PHY_REG_RX_CAPTURE_L 0x8C -#define PHY_REG_RX_CAPTURE_U 0x90 - -/* Memory status registers */ -#define PHY_REG_TX_MEMORY_STATUS_L 0x80 -#define PHY_REG_TX_MEMORY_STATUS_U 0x84 - -/* Interrupt config register */ -#define PHY_REG_TS_INT_CONFIG 0x88 - -#define 
PHY_PTP_INT_STATUS 0x7FD140 - -#define PHY_TS_INT_CONFIG_THRESHOLD_S 0 -#define PHY_TS_INT_CONFIG_THRESHOLD_M MAKEMASK(0x3F, 0) -#define PHY_TS_INT_CONFIG_ENA_S 6 -#define PHY_TS_INT_CONFIG_ENA_M BIT(6) - -/* Macros to derive offsets for TimeStampLow and TimeStampHigh */ -#define PHY_TSTAMP_L(x) (((x) * 8) + 0) -#define PHY_TSTAMP_U(x) (((x) * 8) + 4) - -#define PHY_REG_REVISION 0x85000 -#define PHY_REVISION_ETH56G 0x10200 #endif /* _ICE_PTP_HW_H_ */ diff --git a/drivers/net/ice/base/ice_sbq_cmd.h b/drivers/net/ice/base/ice_sbq_cmd.h index 4da16caf70..a74c4b8e8e 100644 --- a/drivers/net/ice/base/ice_sbq_cmd.h +++ b/drivers/net/ice/base/ice_sbq_cmd.h @@ -48,7 +48,6 @@ struct ice_sbq_evt_desc { }; enum ice_sbq_msg_dev { - phy_56g = 0x02, rmn_0 = 0x02, rmn_1 = 0x03, rmn_2 = 0x04, diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c index a1dd0c6ace..c44bac95e6 100644 --- a/drivers/net/ice/base/ice_sched.c +++ b/drivers/net/ice/base/ice_sched.c @@ -12,7 +12,7 @@ * This function inserts the root node of the scheduling tree topology * to the SW DB. 
*/ -static enum ice_status +static int ice_sched_add_root_node(struct ice_port_info *pi, struct ice_aqc_txsched_elem_data *info) { @@ -28,17 +28,16 @@ ice_sched_add_root_node(struct ice_port_info *pi, if (!root) return ICE_ERR_NO_MEMORY; - /* coverity[suspicious_sizeof] */ root->children = (struct ice_sched_node **) - ice_calloc(hw, hw->max_children[0], sizeof(*root)); + ice_calloc(hw, hw->max_children[0], sizeof(*root->children)); if (!root->children) { ice_free(hw, root); return ICE_ERR_NO_MEMORY; } - ice_memcpy(&root->info, info, sizeof(*info), ICE_DMA_TO_NONDMA); + ice_memcpy(&root->info, info, sizeof(*info), ICE_NONDMA_TO_NONDMA); pi->root = root; - return ICE_SUCCESS; + return 0; } /** @@ -57,6 +56,9 @@ ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid) { u16 i; + if (!start_node) + return NULL; + /* The TEID is same as that of the start_node */ if (ICE_TXSCHED_GET_NODE_TEID(start_node) == teid) return start_node; @@ -97,14 +99,14 @@ ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid) * * This function sends a scheduling elements cmd (cmd_opc) */ -static enum ice_status +static int ice_aqc_send_sched_elem_cmd(struct ice_hw *hw, enum ice_adminq_opc cmd_opc, u16 elems_req, void *buf, u16 buf_size, u16 *elems_resp, struct ice_sq_cd *cd) { struct ice_aqc_sched_elem_cmd *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.sched_elem_cmd; ice_fill_dflt_direct_cmd_desc(&desc, cmd_opc); @@ -128,7 +130,7 @@ ice_aqc_send_sched_elem_cmd(struct ice_hw *hw, enum ice_adminq_opc cmd_opc, * * Query scheduling elements (0x0404) */ -enum ice_status +int ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req, struct ice_aqc_txsched_elem_data *buf, u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd) @@ -147,7 +149,7 @@ ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req, * * This function inserts a scheduler node to the SW DB. 
*/ -enum ice_status +int ice_sched_add_node(struct ice_port_info *pi, u8 layer, struct ice_aqc_txsched_elem_data *info, struct ice_sched_node *prealloc_node) @@ -155,8 +157,8 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer, struct ice_aqc_txsched_elem_data elem; struct ice_sched_node *parent; struct ice_sched_node *node; - enum ice_status status; struct ice_hw *hw; + int status; if (!pi) return ICE_ERR_PARAM; @@ -186,9 +188,9 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer, if (!node) return ICE_ERR_NO_MEMORY; if (hw->max_children[layer]) { - /* coverity[suspicious_sizeof] */ node->children = (struct ice_sched_node **) - ice_calloc(hw, hw->max_children[layer], sizeof(*node)); + ice_calloc(hw, hw->max_children[layer], + sizeof(*node->children)); if (!node->children) { ice_free(hw, node); return ICE_ERR_NO_MEMORY; @@ -200,7 +202,7 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer, node->tx_sched_layer = layer; parent->children[parent->num_children++] = node; node->info = elem; - return ICE_SUCCESS; + return 0; } /** @@ -214,7 +216,7 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer, * * Delete scheduling elements (0x040F) */ -static enum ice_status +static int ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req, struct ice_aqc_delete_elem *buf, u16 buf_size, u16 *grps_del, struct ice_sq_cd *cd) @@ -233,14 +235,14 @@ ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req, * * This function remove nodes from HW */ -static enum ice_status +static int ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent, u16 num_nodes, u32 *node_teids) { struct ice_aqc_delete_elem *buf; u16 i, num_groups_removed = 0; - enum ice_status status; u16 buf_size; + int status; buf_size = ice_struct_size(buf, teid, num_nodes); buf = (struct ice_aqc_delete_elem *)ice_malloc(hw, buf_size); @@ -254,7 +256,7 @@ ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent, status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size, 
&num_groups_removed, NULL); - if (status != ICE_SUCCESS || num_groups_removed != 1) + if (status || num_groups_removed != 1) ice_debug(hw, ICE_DBG_SCHED, "remove node failed FW error %d\n", hw->adminq.sq_last_status); @@ -374,14 +376,14 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node) * * Get default scheduler topology (0x400) */ -static enum ice_status +static int ice_aq_get_dflt_topo(struct ice_hw *hw, u8 lport, struct ice_aqc_get_topo_elem *buf, u16 buf_size, u8 *num_branches, struct ice_sq_cd *cd) { struct ice_aqc_get_topo *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.get_topo; ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_dflt_topo); @@ -404,7 +406,7 @@ ice_aq_get_dflt_topo(struct ice_hw *hw, u8 lport, * * Add scheduling elements (0x0401) */ -static enum ice_status +static int ice_aq_add_sched_elems(struct ice_hw *hw, u16 grps_req, struct ice_aqc_add_elem *buf, u16 buf_size, u16 *grps_added, struct ice_sq_cd *cd) @@ -425,7 +427,7 @@ ice_aq_add_sched_elems(struct ice_hw *hw, u16 grps_req, * * Configure scheduling elements (0x0403) */ -static enum ice_status +static int ice_aq_cfg_sched_elems(struct ice_hw *hw, u16 elems_req, struct ice_aqc_txsched_elem_data *buf, u16 buf_size, u16 *elems_cfgd, struct ice_sq_cd *cd) @@ -446,7 +448,7 @@ ice_aq_cfg_sched_elems(struct ice_hw *hw, u16 elems_req, * * Move scheduling elements (0x0408) */ -enum ice_status +int ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req, struct ice_aqc_move_elem *buf, u16 buf_size, u16 *grps_movd, struct ice_sq_cd *cd) @@ -467,7 +469,7 @@ ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req, * * Suspend scheduling elements (0x0409) */ -static enum ice_status +static int ice_aq_suspend_sched_elems(struct ice_hw *hw, u16 elems_req, __le32 *buf, u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd) { @@ -487,7 +489,7 @@ ice_aq_suspend_sched_elems(struct ice_hw *hw, u16 elems_req, __le32 *buf, * * resume 
scheduling elements (0x040A) */ -static enum ice_status +static int ice_aq_resume_sched_elems(struct ice_hw *hw, u16 elems_req, __le32 *buf, u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd) { @@ -505,7 +507,7 @@ ice_aq_resume_sched_elems(struct ice_hw *hw, u16 elems_req, __le32 *buf, * * Query scheduler resource allocation (0x0412) */ -static enum ice_status +static int ice_aq_query_sched_res(struct ice_hw *hw, u16 buf_size, struct ice_aqc_query_txsched_res_resp *buf, struct ice_sq_cd *cd) @@ -525,13 +527,13 @@ ice_aq_query_sched_res(struct ice_hw *hw, u16 buf_size, * * This function suspends or resumes HW nodes */ -static enum ice_status +static int ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids, bool suspend) { u16 i, buf_size, num_elem_ret = 0; - enum ice_status status; __le32 *buf; + int status; buf_size = sizeof(*buf) * num_nodes; buf = (__le32 *)ice_malloc(hw, buf_size); @@ -549,7 +551,7 @@ ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids, status = ice_aq_resume_sched_elems(hw, num_nodes, buf, buf_size, &num_elem_ret, NULL); - if (status != ICE_SUCCESS || num_elem_ret != num_nodes) + if (status || num_elem_ret != num_nodes) ice_debug(hw, ICE_DBG_SCHED, "suspend/resume failed\n"); ice_free(hw, buf); @@ -563,7 +565,7 @@ ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids, * @tc: TC number * @new_numqs: number of queues */ -static enum ice_status +static int ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs) { struct ice_vsi_ctx *vsi_ctx; @@ -579,7 +581,7 @@ ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs) if (!vsi_ctx->lan_q_ctx[tc]) return ICE_ERR_NO_MEMORY; vsi_ctx->num_lan_q_entries[tc] = new_numqs; - return ICE_SUCCESS; + return 0; } /* num queues are increased, update the queue contexts */ if (new_numqs > vsi_ctx->num_lan_q_entries[tc]) { @@ -595,7 +597,7 @@ ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 
vsi_handle, u8 tc, u16 new_numqs) vsi_ctx->lan_q_ctx[tc] = q_ctx; vsi_ctx->num_lan_q_entries[tc] = new_numqs; } - return ICE_SUCCESS; + return 0; } /** @@ -610,14 +612,14 @@ ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs) * * RL profile function to add, query, or remove profile(s) */ -static enum ice_status +static int ice_aq_rl_profile(struct ice_hw *hw, enum ice_adminq_opc opcode, u16 num_profiles, struct ice_aqc_rl_profile_elem *buf, u16 buf_size, u16 *num_processed, struct ice_sq_cd *cd) { struct ice_aqc_rl_profile *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.rl_profile; @@ -641,7 +643,7 @@ ice_aq_rl_profile(struct ice_hw *hw, enum ice_adminq_opc opcode, * * Add RL profile (0x0410) */ -static enum ice_status +static int ice_aq_add_rl_profile(struct ice_hw *hw, u16 num_profiles, struct ice_aqc_rl_profile_elem *buf, u16 buf_size, u16 *num_profiles_added, struct ice_sq_cd *cd) @@ -660,7 +662,7 @@ ice_aq_add_rl_profile(struct ice_hw *hw, u16 num_profiles, * * Query RL profile (0x0411) */ -enum ice_status +int ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles, struct ice_aqc_rl_profile_elem *buf, u16 buf_size, struct ice_sq_cd *cd) @@ -680,7 +682,7 @@ ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles, * * Remove RL profile (0x0415) */ -static enum ice_status +static int ice_aq_remove_rl_profile(struct ice_hw *hw, u16 num_profiles, struct ice_aqc_rl_profile_elem *buf, u16 buf_size, u16 *num_profiles_removed, struct ice_sq_cd *cd) @@ -699,14 +701,14 @@ ice_aq_remove_rl_profile(struct ice_hw *hw, u16 num_profiles, * its associated parameters from HW DB,and locally. The caller needs to * hold scheduler lock. 
*/ -static enum ice_status +static int ice_sched_del_rl_profile(struct ice_hw *hw, struct ice_aqc_rl_profile_info *rl_info) { struct ice_aqc_rl_profile_elem *buf; u16 num_profiles_removed; - enum ice_status status; u16 num_profiles = 1; + int status; if (rl_info->prof_id_ref != 0) return ICE_ERR_IN_USE; @@ -742,7 +744,7 @@ static void ice_sched_clear_rl_prof(struct ice_port_info *pi) LIST_FOR_EACH_ENTRY_SAFE(rl_prof_elem, rl_prof_tmp, &hw->rl_prof_list[ln], ice_aqc_rl_profile_info, list_entry) { - enum ice_status status; + int status; rl_prof_elem->prof_id_ref = 0; status = ice_sched_del_rl_profile(hw, rl_prof_elem); @@ -855,7 +857,7 @@ void ice_sched_cleanup_all(struct ice_hw *hw) * * Configure Node Attributes (0x0417) */ -enum ice_status +int ice_aq_cfg_node_attr(struct ice_hw *hw, u16 num_nodes, struct ice_aqc_node_attr_elem *buf, u16 buf_size, struct ice_sq_cd *cd) @@ -882,7 +884,7 @@ ice_aq_cfg_node_attr(struct ice_hw *hw, u16 num_nodes, * * Configure L2 Node CGD (0x0414) */ -enum ice_status +int ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_l2_nodes, struct ice_aqc_cfg_l2_node_cgd_elem *buf, u16 buf_size, struct ice_sq_cd *cd) @@ -911,7 +913,7 @@ ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_l2_nodes, * * This function add nodes to HW as well as to SW DB for a given layer */ -static enum ice_status +int ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node, struct ice_sched_node *parent, u8 layer, u16 num_nodes, u16 *num_nodes_added, u32 *first_node_teid, @@ -920,8 +922,8 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node, struct ice_sched_node *prev, *new_node; struct ice_aqc_add_elem *buf; u16 i, num_groups_added = 0; - enum ice_status status = ICE_SUCCESS; struct ice_hw *hw = pi->hw; + int status = 0; u16 buf_size; u32 teid; @@ -951,7 +953,7 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node, status = ice_aq_add_sched_elems(hw, 1, buf, buf_size, &num_groups_added, 
NULL); - if (status != ICE_SUCCESS || num_groups_added != 1) { + if (status || num_groups_added != 1) { ice_debug(hw, ICE_DBG_SCHED, "add node failed FW Error %d\n", hw->adminq.sq_last_status); ice_free(hw, buf); @@ -966,7 +968,7 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node, else status = ice_sched_add_node(pi, layer, &buf->generic[i], NULL); - if (status != ICE_SUCCESS) { + if (status) { ice_debug(hw, ICE_DBG_SCHED, "add nodes in SW DB failed status =%d\n", status); break; @@ -1015,7 +1017,7 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node, * * Add nodes into specific hw layer. */ -static enum ice_status +static int ice_sched_add_nodes_to_hw_layer(struct ice_port_info *pi, struct ice_sched_node *tc_node, struct ice_sched_node *parent, u8 layer, @@ -1027,7 +1029,7 @@ ice_sched_add_nodes_to_hw_layer(struct ice_port_info *pi, *num_nodes_added = 0; if (!num_nodes) - return ICE_SUCCESS; + return 0; if (!parent || layer < pi->hw->sw_entry_point_layer) return ICE_ERR_PARAM; @@ -1059,7 +1061,7 @@ ice_sched_add_nodes_to_hw_layer(struct ice_port_info *pi, * * This function add nodes to a given layer. */ -static enum ice_status +static int ice_sched_add_nodes_to_layer(struct ice_port_info *pi, struct ice_sched_node *tc_node, struct ice_sched_node *parent, u8 layer, @@ -1068,18 +1070,21 @@ ice_sched_add_nodes_to_layer(struct ice_port_info *pi, { u32 *first_teid_ptr = first_node_teid; u16 new_num_nodes = num_nodes; - enum ice_status status = ICE_SUCCESS; + int status = 0; +#ifdef __CHECKER__ + /* cppcheck-suppress unusedVariable */ +#endif /* __CHECKER__ */ + u32 temp; *num_nodes_added = 0; while (*num_nodes_added < num_nodes) { u16 max_child_nodes, num_added = 0; - u32 temp; status = ice_sched_add_nodes_to_hw_layer(pi, tc_node, parent, layer, new_num_nodes, first_teid_ptr, &num_added); - if (status == ICE_SUCCESS) + if (!status) *num_nodes_added += num_added; /* added more nodes than requested ? 
*/ if (*num_nodes_added > num_nodes) { @@ -1089,10 +1094,10 @@ ice_sched_add_nodes_to_layer(struct ice_port_info *pi, break; } /* break if all the nodes are added successfully */ - if (status == ICE_SUCCESS && (*num_nodes_added == num_nodes)) + if (!status && (*num_nodes_added == num_nodes)) break; /* break if the error is not max limit */ - if (status != ICE_SUCCESS && status != ICE_ERR_MAX_LIMIT) + if (status && status != ICE_ERR_MAX_LIMIT) break; /* Exceeded the max children */ max_child_nodes = pi->hw->max_children[parent->tx_sched_layer]; @@ -1187,7 +1192,7 @@ static void ice_rm_dflt_leaf_node(struct ice_port_info *pi) } if (node && node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) { u32 teid = LE32_TO_CPU(node->info.node_teid); - enum ice_status status; + int status; /* remove the default leaf node */ status = ice_sched_remove_elems(pi->hw, node->parent, 1, &teid); @@ -1233,13 +1238,13 @@ static void ice_sched_rm_dflt_nodes(struct ice_port_info *pi) * resources, default topology created by firmware and storing the information * in SW DB. 
*/ -enum ice_status ice_sched_init_port(struct ice_port_info *pi) +int ice_sched_init_port(struct ice_port_info *pi) { struct ice_aqc_get_topo_elem *buf; - enum ice_status status; struct ice_hw *hw; u8 num_branches; u16 num_elems; + int status; u8 i, j; if (!pi) @@ -1362,12 +1367,12 @@ struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid) * * query FW for allocated scheduler resources and store in HW struct */ -enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw) +int ice_sched_query_res_alloc(struct ice_hw *hw) { struct ice_aqc_query_txsched_res_resp *buf; - enum ice_status status = ICE_SUCCESS; __le16 max_sibl; - u8 i; + int status = 0; + u16 i; if (hw->layer_info) return status; @@ -1653,12 +1658,12 @@ ice_sched_get_agg_node(struct ice_port_info *pi, struct ice_sched_node *tc_node, static bool ice_sched_check_node(struct ice_hw *hw, struct ice_sched_node *node) { struct ice_aqc_txsched_elem_data buf; - enum ice_status status; u32 node_teid; + int status; node_teid = LE32_TO_CPU(node->info.node_teid); status = ice_sched_query_elem(hw, node_teid, &buf); - if (status != ICE_SUCCESS) + if (status) return false; if (memcmp(&buf, &node->info, sizeof(buf))) { @@ -1709,7 +1714,7 @@ ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes) * This function adds the VSI child nodes to tree. It gets called for * LAN and RDMA separately. 
*/ -static enum ice_status +static int ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle, struct ice_sched_node *tc_node, u16 *num_nodes, u8 owner) @@ -1724,7 +1729,7 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle, vsil = ice_sched_get_vsi_layer(hw); parent = ice_sched_get_vsi_node(pi, tc_node, vsi_handle); for (i = vsil + 1; i <= qgl; i++) { - enum ice_status status; + int status; if (!parent) return ICE_ERR_CFG; @@ -1733,7 +1738,7 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle, num_nodes[i], &first_node_teid, &num_added); - if (status != ICE_SUCCESS || num_nodes[i] != num_added) + if (status || num_nodes[i] != num_added) return ICE_ERR_CFG; /* The newly added node can be a new parent for the next @@ -1752,7 +1757,7 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle, } } - return ICE_SUCCESS; + return 0; } /** @@ -1814,7 +1819,7 @@ ice_sched_calc_vsi_support_nodes(struct ice_port_info *pi, * This function adds the VSI supported nodes into Tx tree including the * VSI, its parent and intermediate nodes in below layers */ -static enum ice_status +static int ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle, struct ice_sched_node *tc_node, u16 *num_nodes) { @@ -1828,13 +1833,13 @@ ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle, vsil = ice_sched_get_vsi_layer(pi->hw); for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) { - enum ice_status status; + int status; status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i, num_nodes[i], &first_node_teid, &num_added); - if (status != ICE_SUCCESS || num_nodes[i] != num_added) + if (status || num_nodes[i] != num_added) return ICE_ERR_CFG; /* The newly added node can be a new parent for the next @@ -1853,7 +1858,7 @@ ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle, parent->vsi_handle = vsi_handle; } - return ICE_SUCCESS; + return 0; } /** @@ -1864,7 
+1869,7 @@ ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle, * * This function adds a new VSI into scheduler tree */ -static enum ice_status +static int ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc) { u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 }; @@ -1892,7 +1897,7 @@ ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc) * * This function updates the VSI child nodes based on the number of queues */ -static enum ice_status +static int ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 new_numqs, u8 owner) { @@ -1900,8 +1905,8 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle, struct ice_sched_node *vsi_node; struct ice_sched_node *tc_node; struct ice_vsi_ctx *vsi_ctx; - enum ice_status status = ICE_SUCCESS; struct ice_hw *hw = pi->hw; + int status = 0; u16 prev_numqs; tc_node = ice_sched_get_tc_node(pi, tc); @@ -1939,7 +1944,7 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle, return status; vsi_ctx->sched.max_lanq[tc] = new_numqs; - return ICE_SUCCESS; + return 0; } /** @@ -1955,14 +1960,14 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle, * enabled and VSI is in suspended state then resume the VSI back. If TC is * disabled then suspend the VSI if it is not already. */ -enum ice_status +int ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs, u8 owner, bool enable) { struct ice_sched_node *vsi_node, *tc_node; struct ice_vsi_ctx *vsi_ctx; - enum ice_status status = ICE_SUCCESS; struct ice_hw *hw = pi->hw; + int status = 0; ice_debug(pi->hw, ICE_DBG_SCHED, "add/config VSI %d\n", vsi_handle); tc_node = ice_sched_get_tc_node(pi, tc); @@ -2079,11 +2084,11 @@ static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node) * This function removes the VSI and its LAN or RDMA children nodes from the * scheduler tree. 
*/ -static enum ice_status +static int ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner) { - enum ice_status status = ICE_ERR_PARAM; struct ice_vsi_ctx *vsi_ctx; + int status = ICE_ERR_PARAM; u8 i; ice_debug(pi->hw, ICE_DBG_SCHED, "removing VSI %d\n", vsi_handle); @@ -2134,7 +2139,7 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner) if (owner == ICE_SCHED_NODE_OWNER_LAN) vsi_ctx->sched.max_lanq[i] = 0; } - status = ICE_SUCCESS; + status = 0; exit_sched_rm_vsi_cfg: ice_release_lock(&pi->sched_lock); @@ -2149,7 +2154,7 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner) * This function clears the VSI and its LAN children nodes from scheduler tree * for all TCs. */ -enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle) +int ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle) { return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN); } @@ -2189,7 +2194,7 @@ bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node) * This function retrieves the tree topology from the firmware for a given * node TEID to the root node. */ -enum ice_status +int ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid, struct ice_aqc_txsched_elem_data *buf, u16 buf_size, struct ice_sq_cd *cd) @@ -2275,7 +2280,7 @@ ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node, * This function removes the child from the old parent and adds it to a new * parent */ -static void +void ice_sched_update_parent(struct ice_sched_node *new_parent, struct ice_sched_node *node) { @@ -2309,15 +2314,15 @@ ice_sched_update_parent(struct ice_sched_node *new_parent, * * This function move the child nodes to a given parent. 
*/ -static enum ice_status +int ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent, u16 num_items, u32 *list) { - enum ice_status status = ICE_SUCCESS; struct ice_aqc_move_elem *buf; struct ice_sched_node *node; u16 i, grps_movd = 0; struct ice_hw *hw; + int status = 0; u16 buf_len; hw = pi->hw; @@ -2372,16 +2377,16 @@ ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent, * This function moves a VSI to an aggregator node or its subtree. * Intermediate nodes may be created if required. */ -static enum ice_status +static int ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id, u8 tc) { struct ice_sched_node *vsi_node, *agg_node, *tc_node, *parent; u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 }; u32 first_node_teid, vsi_teid; - enum ice_status status; u16 num_nodes_added; u8 aggl, vsil, i; + int status; tc_node = ice_sched_get_tc_node(pi, tc); if (!tc_node) @@ -2397,7 +2402,7 @@ ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id, /* Is this VSI already part of given aggregator? */ if (ice_sched_find_node_in_subtree(pi->hw, agg_node, vsi_node)) - return ICE_SUCCESS; + return 0; aggl = ice_sched_get_agg_layer(pi->hw); vsil = ice_sched_get_vsi_layer(pi->hw); @@ -2422,7 +2427,7 @@ ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id, num_nodes[i], &first_node_teid, &num_nodes_added); - if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added) + if (status || num_nodes[i] != num_nodes_added) return ICE_ERR_CFG; /* The newly added node can be a new parent for the next @@ -2454,14 +2459,14 @@ ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id, * aggregator VSI info based on passed in boolean parameter rm_vsi_info. The * caller holds the scheduler lock. 
*/ -static enum ice_status +static int ice_move_all_vsi_to_dflt_agg(struct ice_port_info *pi, struct ice_sched_agg_info *agg_info, u8 tc, bool rm_vsi_info) { struct ice_sched_agg_vsi_info *agg_vsi_info; struct ice_sched_agg_vsi_info *tmp; - enum ice_status status = ICE_SUCCESS; + int status = 0; LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, tmp, &agg_info->agg_vsi_list, ice_sched_agg_vsi_info, list_entry) { @@ -2518,7 +2523,7 @@ ice_sched_is_agg_inuse(struct ice_port_info *pi, struct ice_sched_node *node) * This function removes the aggregator node and intermediate nodes if any * from the given TC */ -static enum ice_status +static int ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc) { struct ice_sched_node *tc_node, *agg_node; @@ -2552,7 +2557,7 @@ ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc) } ice_free_sched_node(pi, agg_node); - return ICE_SUCCESS; + return 0; } /** @@ -2566,11 +2571,11 @@ ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc) * the aggregator configuration completely for requested TC. The caller needs * to hold the scheduler lock. */ -static enum ice_status +static int ice_rm_agg_cfg_tc(struct ice_port_info *pi, struct ice_sched_agg_info *agg_info, u8 tc, bool rm_vsi_info) { - enum ice_status status = ICE_SUCCESS; + int status = 0; /* If nothing to remove - return success */ if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc)) @@ -2599,7 +2604,7 @@ ice_rm_agg_cfg_tc(struct ice_port_info *pi, struct ice_sched_agg_info *agg_info, * Save aggregator TC bitmap. This function needs to be called with scheduler * lock held. 
*/ -static enum ice_status +static int ice_save_agg_tc_bitmap(struct ice_port_info *pi, u32 agg_id, ice_bitmap_t *tc_bitmap) { @@ -2610,7 +2615,7 @@ ice_save_agg_tc_bitmap(struct ice_port_info *pi, u32 agg_id, return ICE_ERR_PARAM; ice_cp_bitmap(agg_info->replay_tc_bitmap, tc_bitmap, ICE_MAX_TRAFFIC_CLASS); - return ICE_SUCCESS; + return 0; } /** @@ -2622,15 +2627,15 @@ ice_save_agg_tc_bitmap(struct ice_port_info *pi, u32 agg_id, * This function creates an aggregator node and intermediate nodes if required * for the given TC */ -static enum ice_status +static int ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc) { struct ice_sched_node *parent, *agg_node, *tc_node; u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 }; - enum ice_status status = ICE_SUCCESS; struct ice_hw *hw = pi->hw; u32 first_node_teid; u16 num_nodes_added; + int status = 0; u8 i, aggl; tc_node = ice_sched_get_tc_node(pi, tc); @@ -2676,7 +2681,7 @@ ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc) num_nodes[i], &first_node_teid, &num_nodes_added); - if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added) + if (status || num_nodes[i] != num_nodes_added) return ICE_ERR_CFG; /* The newly added node can be a new parent for the next @@ -2693,7 +2698,7 @@ ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc) } } - return ICE_SUCCESS; + return 0; } /** @@ -2712,13 +2717,13 @@ ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc) * resources and remove aggregator ID. * This function needs to be called with scheduler lock held. 
*/ -static enum ice_status +static int ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id, enum ice_agg_type agg_type, ice_bitmap_t *tc_bitmap) { struct ice_sched_agg_info *agg_info; - enum ice_status status = ICE_SUCCESS; struct ice_hw *hw = pi->hw; + int status = 0; u8 tc; agg_info = ice_get_agg_info(hw, agg_id); @@ -2774,12 +2779,12 @@ ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id, * * This function configures aggregator node(s). */ -enum ice_status +int ice_cfg_agg(struct ice_port_info *pi, u32 agg_id, enum ice_agg_type agg_type, u8 tc_bitmap) { ice_bitmap_t bitmap = tc_bitmap; - enum ice_status status; + int status; ice_acquire_lock(&pi->sched_lock); status = ice_sched_cfg_agg(pi, agg_id, agg_type, @@ -2847,7 +2852,7 @@ ice_get_vsi_agg_info(struct ice_hw *hw, u16 vsi_handle) * Save VSI to aggregator TC bitmap. This function needs to call with scheduler * lock held. */ -static enum ice_status +static int ice_save_agg_vsi_tc_bitmap(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle, ice_bitmap_t *tc_bitmap) { @@ -2863,7 +2868,7 @@ ice_save_agg_vsi_tc_bitmap(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle, return ICE_ERR_PARAM; ice_cp_bitmap(agg_vsi_info->replay_tc_bitmap, tc_bitmap, ICE_MAX_TRAFFIC_CLASS); - return ICE_SUCCESS; + return 0; } /** @@ -2877,14 +2882,14 @@ ice_save_agg_vsi_tc_bitmap(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle, * already associated to the aggregator node then no operation is performed on * the tree. This function needs to be called with scheduler lock held. 
*/ -static enum ice_status +static int ice_sched_assoc_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle, ice_bitmap_t *tc_bitmap) { struct ice_sched_agg_vsi_info *agg_vsi_info, *old_agg_vsi_info = NULL; struct ice_sched_agg_info *agg_info, *old_agg_info; - enum ice_status status = ICE_SUCCESS; struct ice_hw *hw = pi->hw; + int status = 0; u8 tc; if (!ice_is_vsi_valid(pi->hw, vsi_handle)) @@ -2975,14 +2980,14 @@ static void ice_sched_rm_unused_rl_prof(struct ice_hw *hw) * returns success or error on config sched element failure. The caller * needs to hold scheduler lock. */ -static enum ice_status +static int ice_sched_update_elem(struct ice_hw *hw, struct ice_sched_node *node, struct ice_aqc_txsched_elem_data *info) { struct ice_aqc_txsched_elem_data buf; - enum ice_status status; u16 elem_cfgd = 0; u16 num_elems = 1; + int status; buf = *info; /* For TC nodes, CIR config is not supported */ @@ -3020,13 +3025,13 @@ ice_sched_update_elem(struct ice_hw *hw, struct ice_sched_node *node, * * This function configures node element's BW allocation. */ -enum ice_status +int ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node, enum ice_rl_type rl_type, u16 bw_alloc) { struct ice_aqc_txsched_elem_data buf; struct ice_aqc_txsched_elem *data; - enum ice_status status; + int status; buf = node->info; data = &buf.data; @@ -3054,12 +3059,12 @@ ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node, * * Move or associate VSI to a new or default aggregator node. */ -enum ice_status +int ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle, u8 tc_bitmap) { ice_bitmap_t bitmap = tc_bitmap; - enum ice_status status; + int status; ice_acquire_lock(&pi->sched_lock); status = ice_sched_assoc_vsi_to_agg(pi, agg_id, vsi_handle, @@ -3079,10 +3084,10 @@ ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle, * This function removes aggregator reference to VSI and delete aggregator ID * info. 
It removes the aggregator configuration completely. */ -enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id) +int ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id) { struct ice_sched_agg_info *agg_info; - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 tc; ice_acquire_lock(&pi->sched_lock); @@ -3161,7 +3166,7 @@ ice_set_clear_eir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc) * * Save BW alloc information of VSI type node for post replay use. */ -static enum ice_status +static int ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, enum ice_rl_type rl_type, u16 bw_alloc) { @@ -3184,7 +3189,7 @@ ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, default: return ICE_ERR_PARAM; } - return ICE_SUCCESS; + return 0; } /** @@ -3254,7 +3259,7 @@ static void ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw) * * Save BW information of VSI type node for post replay use. */ -static enum ice_status +static int ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc, enum ice_rl_type rl_type, u32 bw) { @@ -3278,7 +3283,7 @@ ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc, default: return ICE_ERR_PARAM; } - return ICE_SUCCESS; + return 0; } /** @@ -3306,7 +3311,7 @@ static void ice_set_clear_prio(struct ice_bw_type_info *bw_t_info, u8 prio) * * Save priority information of VSI type node for post replay use. */ -static enum ice_status +static int ice_sched_save_vsi_prio(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 prio) { @@ -3320,7 +3325,7 @@ ice_sched_save_vsi_prio(struct ice_port_info *pi, u16 vsi_handle, u8 tc, if (tc >= ICE_MAX_TRAFFIC_CLASS) return ICE_ERR_PARAM; ice_set_clear_prio(&vsi_ctx->sched.bw_t_info[tc], prio); - return ICE_SUCCESS; + return 0; } /** @@ -3333,7 +3338,7 @@ ice_sched_save_vsi_prio(struct ice_port_info *pi, u16 vsi_handle, u8 tc, * * Save BW alloc information of AGG type node for post replay use. 
*/ -static enum ice_status +static int ice_sched_save_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 tc, enum ice_rl_type rl_type, u16 bw_alloc) { @@ -3354,7 +3359,7 @@ ice_sched_save_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 tc, default: return ICE_ERR_PARAM; } - return ICE_SUCCESS; + return 0; } /** @@ -3367,7 +3372,7 @@ ice_sched_save_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 tc, * * Save BW information of AGG type node for post replay use. */ -static enum ice_status +static int ice_sched_save_agg_bw(struct ice_port_info *pi, u32 agg_id, u8 tc, enum ice_rl_type rl_type, u32 bw) { @@ -3391,7 +3396,7 @@ ice_sched_save_agg_bw(struct ice_port_info *pi, u32 agg_id, u8 tc, default: return ICE_ERR_PARAM; } - return ICE_SUCCESS; + return 0; } /** @@ -3405,11 +3410,11 @@ ice_sched_save_agg_bw(struct ice_port_info *pi, u32 agg_id, u8 tc, * This function configures BW limit of VSI scheduling node based on TC * information. */ -enum ice_status +int ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, enum ice_rl_type rl_type, u32 bw) { - enum ice_status status; + int status; status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle, ICE_AGG_TYPE_VSI, @@ -3432,11 +3437,11 @@ ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, * This function configures default BW limit of VSI scheduling node based on TC * information. */ -enum ice_status +int ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, enum ice_rl_type rl_type) { - enum ice_status status; + int status; status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle, ICE_AGG_TYPE_VSI, @@ -3462,11 +3467,11 @@ ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, * This function applies BW limit to aggregator scheduling node based on TC * information. 
*/ -enum ice_status +int ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, enum ice_rl_type rl_type, u32 bw) { - enum ice_status status; + int status; status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG, tc, rl_type, bw); @@ -3488,11 +3493,11 @@ ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, * This function applies default BW limit to aggregator scheduling node based * on TC information. */ -enum ice_status +int ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, enum ice_rl_type rl_type) { - enum ice_status status; + int status; status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG, tc, rl_type, @@ -3517,7 +3522,7 @@ ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, * Configure shared rate limiter(SRL) of all VSI type nodes across all traffic * classes for VSI matching handle. */ -enum ice_status +int ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw, u32 max_bw, u32 shared_bw) { @@ -3533,7 +3538,7 @@ ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw, * This function removes the shared rate limiter(SRL) of all VSI type nodes * across all traffic classes for VSI matching handle. */ -enum ice_status +int ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle) { return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, @@ -3553,7 +3558,7 @@ ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle) * This function configures the shared rate limiter(SRL) of all aggregator type * nodes across all traffic classes for aggregator matching agg_id. 
*/ -enum ice_status +int ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw, u32 max_bw, u32 shared_bw) { @@ -3569,7 +3574,7 @@ ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw, * This function removes the shared rate limiter(SRL) of all aggregator type * nodes across all traffic classes for aggregator matching agg_id. */ -enum ice_status +int ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id) { return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, ICE_SCHED_DFLT_BW, @@ -3589,7 +3594,7 @@ ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id) * This function configures the shared rate limiter(SRL) of all aggregator type * nodes across all traffic classes for aggregator matching agg_id. */ -enum ice_status +int ice_cfg_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, u32 min_bw, u32 max_bw, u32 shared_bw) { @@ -3606,7 +3611,7 @@ ice_cfg_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, * This function configures the shared rate limiter(SRL) of all aggregator type * nodes across all traffic classes for aggregator matching agg_id. */ -enum ice_status +int ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc) { return ice_sched_set_agg_bw_shared_lmt_per_tc(pi, agg_id, tc, @@ -3625,11 +3630,11 @@ ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc) * This function configures the queue node priority (Sibling Priority) of the * passed in VSI's queue(s) for a given traffic class (TC). */ -enum ice_status +int ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids, u8 *q_prio) { - enum ice_status status = ICE_ERR_PARAM; + int status = ICE_ERR_PARAM; u16 i; ice_acquire_lock(&pi->sched_lock); @@ -3661,12 +3666,12 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids, * * This function configures node element's sibling priority only. 
*/ -enum ice_status +int ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi, struct ice_sched_node *node, u8 priority) { - enum ice_status status; + int status; ice_acquire_lock(&pi->sched_lock); status = ice_sched_cfg_sibl_node_prio(pi, node, priority); @@ -3683,7 +3688,7 @@ ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi, * * Save BW information of queue type node for post replay use. */ -static enum ice_status +static int ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, u32 bw_alloc) { @@ -3697,7 +3702,7 @@ ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, default: return ICE_ERR_PARAM; } - return ICE_SUCCESS; + return 0; } /** @@ -3711,11 +3716,11 @@ ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, * * This function configures BW allocation of queue scheduling node. */ -enum ice_status +int ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc) { - enum ice_status status = ICE_ERR_PARAM; + int status = ICE_ERR_PARAM; struct ice_sched_node *node; struct ice_q_ctx *q_ctx; @@ -3751,17 +3756,17 @@ ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, * This function configures the node priority (Sibling Priority) of the * passed in VSI's for a given traffic class (TC) of an Aggregator ID. 
*/ -enum ice_status +int ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id, u16 num_vsis, u16 *vsi_handle_arr, u8 *node_prio, u8 tc) { struct ice_sched_agg_vsi_info *agg_vsi_info; struct ice_sched_node *tc_node, *agg_node; - enum ice_status status = ICE_ERR_PARAM; struct ice_sched_agg_info *agg_info; bool agg_id_present = false; struct ice_hw *hw = pi->hw; + int status = ICE_ERR_PARAM; u16 i; ice_acquire_lock(&pi->sched_lock); @@ -3798,6 +3803,9 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id, LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list, ice_sched_agg_vsi_info, list_entry) if (agg_vsi_info->vsi_handle == vsi_handle) { +#ifdef __CHECKER__ + /* cppcheck-suppress unreadVariable */ +#endif /* __CHECKER__ */ vsi_handle_valid = true; break; } @@ -3838,11 +3846,11 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id, * This function configures the BW allocation of the passed in VSI's * node(s) for enabled traffic class. */ -enum ice_status +int ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap, enum ice_rl_type rl_type, u8 *bw_alloc) { - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 tc; if (!ice_is_vsi_valid(pi->hw, vsi_handle)) @@ -3890,14 +3898,14 @@ ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap, * This function configures the BW allocation of passed in aggregator for * enabled traffic class(s). */ -enum ice_status +int ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap, enum ice_rl_type rl_type, u8 *bw_alloc) { struct ice_sched_agg_info *agg_info; bool agg_id_present = false; - enum ice_status status = ICE_SUCCESS; struct ice_hw *hw = pi->hw; + int status = 0; u8 tc; ice_acquire_lock(&pi->sched_lock); @@ -3992,12 +4000,12 @@ static u16 ice_sched_calc_wakeup(struct ice_hw *hw, s32 bw) * * This function converts the BW to profile structure format. 
*/ -static enum ice_status +static int ice_sched_bw_to_rl_profile(struct ice_hw *hw, u32 bw, struct ice_aqc_rl_profile_elem *profile) { - enum ice_status status = ICE_ERR_PARAM; s64 bytes_per_sec, ts_rate, mv_tmp; + int status = ICE_ERR_PARAM; bool found = false; s32 encode = 0; s64 mv = 0; @@ -4042,7 +4050,7 @@ ice_sched_bw_to_rl_profile(struct ice_hw *hw, u32 bw, profile->rl_multiply = CPU_TO_LE16(mv); profile->wake_up_calc = CPU_TO_LE16(wm); profile->rl_encode = CPU_TO_LE16(encode); - status = ICE_SUCCESS; + status = 0; } else { status = ICE_ERR_DOES_NOT_EXIST; } @@ -4070,8 +4078,8 @@ ice_sched_add_rl_profile(struct ice_hw *hw, enum ice_rl_type rl_type, struct ice_aqc_rl_profile_info *rl_prof_elem; u16 profiles_added = 0, num_profiles = 1; struct ice_aqc_rl_profile_elem *buf; - enum ice_status status; u8 profile_type; + int status; if (!hw || layer_num >= hw->num_tx_sched_layers) return NULL; @@ -4104,7 +4112,7 @@ ice_sched_add_rl_profile(struct ice_hw *hw, enum ice_rl_type rl_type, return NULL; status = ice_sched_bw_to_rl_profile(hw, bw, &rl_prof_elem->profile); - if (status != ICE_SUCCESS) + if (status) goto exit_add_rl_prof; rl_prof_elem->bw = bw; @@ -4139,7 +4147,7 @@ ice_sched_add_rl_profile(struct ice_hw *hw, enum ice_rl_type rl_type, * * This function configures node element's BW limit. */ -static enum ice_status +static int ice_sched_cfg_node_bw_lmt(struct ice_hw *hw, struct ice_sched_node *node, enum ice_rl_type rl_type, u16 rl_prof_id) { @@ -4283,12 +4291,12 @@ ice_sched_get_srl_node(struct ice_sched_node *node, u8 srl_layer) * 'profile_type' and profile ID as 'profile_id'. The caller needs to hold * scheduler lock. 
*/ -static enum ice_status +static int ice_sched_rm_rl_profile(struct ice_hw *hw, u8 layer_num, u8 profile_type, u16 profile_id) { struct ice_aqc_rl_profile_info *rl_prof_elem; - enum ice_status status = ICE_SUCCESS; + int status = 0; if (!hw || layer_num >= hw->num_tx_sched_layers) return ICE_ERR_PARAM; @@ -4309,7 +4317,7 @@ ice_sched_rm_rl_profile(struct ice_hw *hw, u8 layer_num, u8 profile_type, break; } if (status == ICE_ERR_IN_USE) - status = ICE_SUCCESS; + status = 0; return status; } @@ -4324,15 +4332,15 @@ ice_sched_rm_rl_profile(struct ice_hw *hw, u8 layer_num, u8 profile_type, * type CIR, EIR, or SRL to default. This function needs to be called * with the scheduler lock held. */ -static enum ice_status +static int ice_sched_set_node_bw_dflt(struct ice_port_info *pi, struct ice_sched_node *node, enum ice_rl_type rl_type, u8 layer_num) { - enum ice_status status; struct ice_hw *hw; u8 profile_type; u16 rl_prof_id; + int status; u16 old_id; hw = pi->hw; @@ -4363,7 +4371,7 @@ ice_sched_set_node_bw_dflt(struct ice_port_info *pi, /* Remove stale RL profile ID */ if (old_id == ICE_SCHED_DFLT_RL_PROF_ID || old_id == ICE_SCHED_INVAL_PROF_ID) - return ICE_SUCCESS; + return 0; return ice_sched_rm_rl_profile(hw, layer_num, profile_type, old_id); } @@ -4380,14 +4388,14 @@ ice_sched_set_node_bw_dflt(struct ice_port_info *pi, * node's RL profile ID of type CIR, EIR, or SRL, and removes old profile * ID from local database. The caller needs to hold scheduler lock. 
*/ -static enum ice_status +int ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node, enum ice_rl_type rl_type, u32 bw, u8 layer_num) { struct ice_aqc_rl_profile_info *rl_prof_info; - enum ice_status status = ICE_ERR_PARAM; struct ice_hw *hw = pi->hw; u16 old_id, rl_prof_id; + int status = ICE_ERR_PARAM; rl_prof_info = ice_sched_add_rl_profile(hw, rl_type, bw, layer_num); if (!rl_prof_info) @@ -4409,13 +4417,65 @@ ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node, /* Check for old ID removal */ if ((old_id == ICE_SCHED_DFLT_RL_PROF_ID && rl_type != ICE_SHARED_BW) || old_id == ICE_SCHED_INVAL_PROF_ID || old_id == rl_prof_id) - return ICE_SUCCESS; + return 0; return ice_sched_rm_rl_profile(hw, layer_num, rl_prof_info->profile.flags & ICE_AQC_RL_PROFILE_TYPE_M, old_id); } +/** + * ice_sched_set_node_priority - set node's priority + * @pi: port information structure + * @node: tree node + * @priority: number 0-7 representing priority among siblings + * + * This function sets priority of a node among its siblings. + */ +int +ice_sched_set_node_priority(struct ice_port_info *pi, struct ice_sched_node *node, + u16 priority) +{ + struct ice_aqc_txsched_elem_data buf; + struct ice_aqc_txsched_elem *data; + + buf = node->info; + data = &buf.data; + + data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC; + data->generic |= ICE_AQC_ELEM_GENERIC_PRIO_M & + (priority << ICE_AQC_ELEM_GENERIC_PRIO_S); + + return ice_sched_update_elem(pi->hw, node, &buf); +} + +/** + * ice_sched_set_node_weight - set node's weight + * @pi: port information structure + * @node: tree node + * @weight: number 1-200 representing weight for WFQ + * + * This function sets weight of the node for WFQ algorithm. 
+ */ +int +ice_sched_set_node_weight(struct ice_port_info *pi, struct ice_sched_node *node, u16 weight) +{ + struct ice_aqc_txsched_elem_data buf; + struct ice_aqc_txsched_elem *data; + + buf = node->info; + data = &buf.data; + + data->valid_sections = ICE_AQC_ELEM_VALID_CIR | ICE_AQC_ELEM_VALID_EIR | + ICE_AQC_ELEM_VALID_GENERIC; + data->cir_bw.bw_alloc = CPU_TO_LE16(weight); + data->eir_bw.bw_alloc = CPU_TO_LE16(weight); + data->generic |= ICE_AQC_ELEM_GENERIC_SP_M & + (0x0 << ICE_AQC_ELEM_GENERIC_SP_S); + + return ice_sched_update_elem(pi->hw, node, &buf); +} + /** * ice_sched_set_node_bw_lmt - set node's BW limit * @pi: port information structure @@ -4429,7 +4489,7 @@ ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node, * NOTE: Caller provides the correct SRL node in case of shared profile * settings. */ -enum ice_status +int ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node, enum ice_rl_type rl_type, u32 bw) { @@ -4462,7 +4522,7 @@ ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node, * type CIR, EIR, or SRL to default. This function needs to be called * with the scheduler lock held. */ -static enum ice_status +static int ice_sched_set_node_bw_dflt_lmt(struct ice_port_info *pi, struct ice_sched_node *node, enum ice_rl_type rl_type) @@ -4480,7 +4540,7 @@ ice_sched_set_node_bw_dflt_lmt(struct ice_port_info *pi, * behalf of the requested node (first argument). This function needs to be * called with scheduler lock held. */ -static enum ice_status +static int ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer) { /* SRL profiles are not available on all layers. 
Check if the @@ -4493,7 +4553,7 @@ ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer) node->num_children == 1) || ((sel_layer == node->tx_sched_layer - 1) && (node->parent && node->parent->num_children == 1))) - return ICE_SUCCESS; + return 0; return ICE_ERR_CFG; } @@ -4506,7 +4566,7 @@ ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer) * * Save BW information of queue type node for post replay use. */ -static enum ice_status +static int ice_sched_save_q_bw(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, u32 bw) { switch (rl_type) { @@ -4522,7 +4582,7 @@ ice_sched_save_q_bw(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, u32 bw) default: return ICE_ERR_PARAM; } - return ICE_SUCCESS; + return 0; } /** @@ -4536,13 +4596,13 @@ ice_sched_save_q_bw(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, u32 bw) * * This function sets BW limit of queue scheduling node. */ -static enum ice_status +static int ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, enum ice_rl_type rl_type, u32 bw) { - enum ice_status status = ICE_ERR_PARAM; struct ice_sched_node *node; struct ice_q_ctx *q_ctx; + int status = ICE_ERR_PARAM; if (!ice_is_vsi_valid(pi->hw, vsi_handle)) return ICE_ERR_PARAM; @@ -4599,7 +4659,7 @@ ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc, * * This function configures BW limit of queue scheduling node. */ -enum ice_status +int ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, enum ice_rl_type rl_type, u32 bw) { @@ -4617,7 +4677,7 @@ ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc, * * This function configures BW default limit of queue scheduling node. 
*/ -enum ice_status +int ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, enum ice_rl_type rl_type) { @@ -4635,7 +4695,7 @@ ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc, * This function saves the modified values of bandwidth settings for later * replay purpose (restore) after reset. */ -static enum ice_status +static int ice_sched_save_tc_node_bw(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type, u32 bw) { @@ -4654,9 +4714,12 @@ ice_sched_save_tc_node_bw(struct ice_port_info *pi, u8 tc, default: return ICE_ERR_PARAM; } - return ICE_SUCCESS; + return 0; } +#define ICE_SCHED_GENERIC_STRICT_MODE BIT(4) +#define ICE_SCHED_GENERIC_PRIO_S 1 + /** * ice_sched_set_tc_node_bw_lmt - sets TC node BW limit * @pi: port information structure @@ -4666,12 +4729,14 @@ ice_sched_save_tc_node_bw(struct ice_port_info *pi, u8 tc, * * This function configures bandwidth limit of TC node. */ -static enum ice_status +static int ice_sched_set_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type, u32 bw) { - enum ice_status status = ICE_ERR_PARAM; + struct ice_aqc_txsched_elem_data buf; + struct ice_aqc_txsched_elem *data; struct ice_sched_node *tc_node; + int status = ICE_ERR_PARAM; if (tc >= ICE_MAX_TRAFFIC_CLASS) return status; @@ -4679,6 +4744,17 @@ ice_sched_set_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc, tc_node = ice_sched_get_tc_node(pi, tc); if (!tc_node) goto exit_set_tc_node_bw; + + /* update node's generic field */ + buf = tc_node->info; + data = &buf.data; + data->valid_sections = ICE_AQC_ELEM_VALID_GENERIC; + data->generic = (tc << ICE_SCHED_GENERIC_PRIO_S) | + ICE_SCHED_GENERIC_STRICT_MODE; + status = ice_sched_update_elem(pi->hw, tc_node, &buf); + if (status) + goto exit_set_tc_node_bw; + if (bw == ICE_SCHED_DFLT_BW) status = ice_sched_set_node_bw_dflt_lmt(pi, tc_node, rl_type); else @@ -4701,7 +4777,7 @@ ice_sched_set_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc, * This function 
configures BW limit of TC node. * Note: The minimum guaranteed reservation is done via DCBX. */ -enum ice_status +int ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type, u32 bw) { @@ -4716,7 +4792,7 @@ ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc, * * This function configures BW default limit of TC node. */ -enum ice_status +int ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type) { @@ -4732,7 +4808,7 @@ ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc, * * Save BW alloc information of VSI type node for post replay use. */ -static enum ice_status +static int ice_sched_save_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type, u16 bw_alloc) { @@ -4750,7 +4826,7 @@ ice_sched_save_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc, default: return ICE_ERR_PARAM; } - return ICE_SUCCESS; + return 0; } /** @@ -4764,12 +4840,12 @@ ice_sched_save_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc, * changed settings for replay purpose, and return success if it succeeds * in modifying bandwidth alloc setting. */ -static enum ice_status +static int ice_sched_set_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type, u8 bw_alloc) { - enum ice_status status = ICE_ERR_PARAM; struct ice_sched_node *tc_node; + int status = ICE_ERR_PARAM; if (tc >= ICE_MAX_TRAFFIC_CLASS) return status; @@ -4798,7 +4874,7 @@ ice_sched_set_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc, * This function configures BW limit of TC node. * Note: The minimum guaranteed reservation is done via DCBX. */ -enum ice_status +int ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type, u8 bw_alloc) { @@ -4814,11 +4890,11 @@ ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc, * and sets node's BW limit to default. This function needs to be * called with the scheduler lock held. 
*/ -enum ice_status +int ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle) { struct ice_vsi_ctx *vsi_ctx; - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 tc; if (!ice_is_vsi_valid(pi->hw, vsi_handle)) @@ -4930,13 +5006,13 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id, * This function sets BW limit of VSI or Aggregator scheduling node * based on TC information from passed in argument BW. */ -enum ice_status +int ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id, enum ice_agg_type agg_type, u8 tc, enum ice_rl_type rl_type, u32 bw) { - enum ice_status status = ICE_ERR_PARAM; struct ice_sched_node *node; + int status = ICE_ERR_PARAM; if (!pi) return status; @@ -4969,7 +5045,7 @@ ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id, * different than the VSI node layer on all TC(s).This function needs to be * called with scheduler lock held. */ -static enum ice_status +static int ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle) { u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM; @@ -4982,7 +5058,7 @@ ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle) ice_for_each_traffic_class(tc) { struct ice_sched_node *tc_node, *vsi_node; enum ice_rl_type rl_type = ICE_SHARED_BW; - enum ice_status status; + int status; tc_node = ice_sched_get_tc_node(pi, tc); if (!tc_node) @@ -5008,7 +5084,7 @@ ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle) if (status) return status; } - return ICE_SUCCESS; + return 0; } /** @@ -5024,12 +5100,12 @@ ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle) * class, and saves those value for later use for replaying purposes. The * caller holds the scheduler lock. 
*/ -static enum ice_status +static int ice_sched_set_save_vsi_srl_node_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc, struct ice_sched_node *srl_node, enum ice_rl_type rl_type, u32 bw) { - enum ice_status status; + int status; if (bw == ICE_SCHED_DFLT_BW) { status = ice_sched_set_node_bw_dflt_lmt(pi, srl_node, rl_type); @@ -5056,13 +5132,13 @@ ice_sched_set_save_vsi_srl_node_bw(struct ice_port_info *pi, u16 vsi_handle, * is passed, it removes the corresponding bw from the node. The caller * holds scheduler lock. */ -static enum ice_status +static int ice_sched_set_vsi_node_srl_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u32 min_bw, u32 max_bw, u32 shared_bw) { struct ice_sched_node *tc_node, *vsi_node, *cfg_node; - enum ice_status status; u8 layer_num; + int status; tc_node = ice_sched_get_tc_node(pi, tc); if (!tc_node) @@ -5110,11 +5186,11 @@ ice_sched_set_vsi_node_srl_per_tc(struct ice_port_info *pi, u16 vsi_handle, * classes for VSI matching handle. When BW value of ICE_SCHED_DFLT_BW is * passed, it removes those value(s) from the node. */ -enum ice_status +int ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw, u32 max_bw, u32 shared_bw) { - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 tc; if (!pi) @@ -5160,13 +5236,13 @@ ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, * different than the AGG node layer on all TC(s).This function needs to be * called with scheduler lock held. */ -static enum ice_status +static int ice_sched_validate_agg_srl_node(struct ice_port_info *pi, u32 agg_id) { u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM; struct ice_sched_agg_info *agg_info; bool agg_id_present = false; - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 tc; LIST_FOR_EACH_ENTRY(agg_info, &pi->hw->agg_list, ice_sched_agg_info, @@ -5215,13 +5291,13 @@ ice_sched_validate_agg_srl_node(struct ice_port_info *pi, u32 agg_id) * * This function validates aggregator id. 
Caller holds the scheduler lock. */ -static enum ice_status +static int ice_sched_validate_agg_id(struct ice_port_info *pi, u32 agg_id) { struct ice_sched_agg_info *agg_info; struct ice_sched_agg_info *tmp; bool agg_id_present = false; - enum ice_status status; + int status; status = ice_sched_validate_agg_srl_node(pi, agg_id); if (status) @@ -5237,7 +5313,7 @@ ice_sched_validate_agg_id(struct ice_port_info *pi, u32 agg_id) if (!agg_id_present) return ICE_ERR_PARAM; - return ICE_SUCCESS; + return 0; } /** @@ -5253,12 +5329,12 @@ ice_sched_validate_agg_id(struct ice_port_info *pi, u32 agg_id) * requested traffic class, and saves those value for later use for * replaying purposes. The caller holds the scheduler lock. */ -static enum ice_status +static int ice_sched_set_save_agg_srl_node_bw(struct ice_port_info *pi, u32 agg_id, u8 tc, struct ice_sched_node *srl_node, enum ice_rl_type rl_type, u32 bw) { - enum ice_status status; + int status; if (bw == ICE_SCHED_DFLT_BW) { status = ice_sched_set_node_bw_dflt_lmt(pi, srl_node, rl_type); @@ -5285,13 +5361,13 @@ ice_sched_set_save_agg_srl_node_bw(struct ice_port_info *pi, u32 agg_id, u8 tc, * value of ICE_SCHED_DFLT_BW is passed, it removes SRL from the node. Caller * holds the scheduler lock. */ -static enum ice_status +static int ice_sched_set_agg_node_srl_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, u32 min_bw, u32 max_bw, u32 shared_bw) { struct ice_sched_node *tc_node, *agg_node, *cfg_node; enum ice_rl_type rl_type = ICE_SHARED_BW; - enum ice_status status = ICE_ERR_CFG; + int status = ICE_ERR_CFG; u8 layer_num; tc_node = ice_sched_get_tc_node(pi, tc); @@ -5340,11 +5416,11 @@ ice_sched_set_agg_node_srl_per_tc(struct ice_port_info *pi, u32 agg_id, * BW value of ICE_SCHED_DFLT_BW is passed, it removes SRL from the * node(s). 
*/ -enum ice_status +int ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw, u32 max_bw, u32 shared_bw) { - enum ice_status status; + int status; u8 tc; if (!pi) @@ -5392,12 +5468,12 @@ ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, * node for a given traffic class for aggregator matching agg_id. When BW * value of ICE_SCHED_DFLT_BW is passed, it removes SRL from the node. */ -enum ice_status +int ice_sched_set_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, u32 min_bw, u32 max_bw, u32 shared_bw) { - enum ice_status status; + int status; if (!pi) return ICE_ERR_PARAM; @@ -5423,14 +5499,14 @@ ice_sched_set_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, * This function configures node element's sibling priority only. This * function needs to be called with scheduler lock held. */ -enum ice_status +int ice_sched_cfg_sibl_node_prio(struct ice_port_info *pi, struct ice_sched_node *node, u8 priority) { struct ice_aqc_txsched_elem_data buf; struct ice_aqc_txsched_elem *data; struct ice_hw *hw = pi->hw; - enum ice_status status; + int status; if (!hw) return ICE_ERR_PARAM; @@ -5456,7 +5532,7 @@ ice_sched_cfg_sibl_node_prio(struct ice_port_info *pi, * burst size value is used for future rate limit calls. It doesn't change the * existing or previously created RL profiles. */ -enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes) +int ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes) { u16 burst_size_to_prog; @@ -5485,7 +5561,7 @@ enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes) burst_size_to_prog |= (u16)(bytes / 1024); } hw->max_burst_size = burst_size_to_prog; - return ICE_SUCCESS; + return 0; } /** @@ -5497,13 +5573,13 @@ enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes) * This function configures node element's priority value. It * needs to be called with scheduler lock held. 
*/ -static enum ice_status +static int ice_sched_replay_node_prio(struct ice_hw *hw, struct ice_sched_node *node, u8 priority) { struct ice_aqc_txsched_elem_data buf; struct ice_aqc_txsched_elem *data; - enum ice_status status; + int status; buf = node->info; data = &buf.data; @@ -5524,18 +5600,18 @@ ice_sched_replay_node_prio(struct ice_hw *hw, struct ice_sched_node *node, * This function restores node's BW from bw_t_info. The caller needs * to hold the scheduler lock. */ -static enum ice_status +static int ice_sched_replay_node_bw(struct ice_hw *hw, struct ice_sched_node *node, struct ice_bw_type_info *bw_t_info) { struct ice_port_info *pi = hw->port_info; - enum ice_status status = ICE_ERR_PARAM; + int status = ICE_ERR_PARAM; u16 bw_alloc; if (!node) return status; if (!ice_is_any_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CNT)) - return ICE_SUCCESS; + return 0; if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_PRIO)) { status = ice_sched_replay_node_prio(hw, node, bw_t_info->generic); @@ -5582,11 +5658,11 @@ ice_sched_replay_node_bw(struct ice_hw *hw, struct ice_sched_node *node, * This function re-creates aggregator type nodes. The caller needs to hold * the scheduler lock. */ -static enum ice_status +static int ice_sched_replay_agg_bw(struct ice_hw *hw, struct ice_sched_agg_info *agg_info) { struct ice_sched_node *tc_node, *agg_node; - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 tc; if (!agg_info) @@ -5659,7 +5735,7 @@ void ice_sched_replay_agg(struct ice_hw *hw) ICE_MAX_TRAFFIC_CLASS)) { ice_declare_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS); - enum ice_status status; + int status; ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS); ice_sched_get_ena_tc_bitmap(pi, @@ -5715,9 +5791,9 @@ void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw) * * Replay root node BW settings. 
*/ -enum ice_status ice_sched_replay_root_node_bw(struct ice_port_info *pi) +int ice_sched_replay_root_node_bw(struct ice_port_info *pi) { - enum ice_status status = ICE_SUCCESS; + int status = 0; if (!pi->hw) return ICE_ERR_PARAM; @@ -5735,9 +5811,9 @@ enum ice_status ice_sched_replay_root_node_bw(struct ice_port_info *pi) * * This function replay TC nodes. */ -enum ice_status ice_sched_replay_tc_node_bw(struct ice_port_info *pi) +int ice_sched_replay_tc_node_bw(struct ice_port_info *pi) { - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 tc; if (!pi->hw) @@ -5767,7 +5843,7 @@ enum ice_status ice_sched_replay_tc_node_bw(struct ice_port_info *pi) * This function replays VSI type nodes bandwidth. This function needs to be * called with scheduler lock held. */ -static enum ice_status +static int ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle, ice_bitmap_t *tc_bitmap) { @@ -5775,7 +5851,7 @@ ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle, struct ice_port_info *pi = hw->port_info; struct ice_bw_type_info *bw_t_info; struct ice_vsi_ctx *vsi_ctx; - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 tc; vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle); @@ -5807,24 +5883,24 @@ ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle, * their node bandwidth information. This function needs to be called with * scheduler lock held. 
*/ -static enum ice_status +static int ice_sched_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle) { ice_declare_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS); struct ice_sched_agg_vsi_info *agg_vsi_info; struct ice_port_info *pi = hw->port_info; struct ice_sched_agg_info *agg_info; - enum ice_status status; + int status; ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS); if (!ice_is_vsi_valid(hw, vsi_handle)) return ICE_ERR_PARAM; agg_info = ice_get_vsi_agg_info(hw, vsi_handle); if (!agg_info) - return ICE_SUCCESS; /* Not present in list - default Agg case */ + return 0; /* Not present in list - default Agg case */ agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle); if (!agg_vsi_info) - return ICE_SUCCESS; /* Not present in list - default Agg case */ + return 0; /* Not present in list - default Agg case */ ice_sched_get_ena_tc_bitmap(pi, agg_info->replay_tc_bitmap, replay_bitmap); /* Replay aggregator node associated to vsi_handle */ @@ -5858,10 +5934,10 @@ ice_sched_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle) * This function replays association of VSI to aggregator type nodes, and * node bandwidth information. */ -enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle) +int ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle) { struct ice_port_info *pi = hw->port_info; - enum ice_status status; + int status; ice_acquire_lock(&pi->sched_lock); status = ice_sched_replay_vsi_agg(hw, vsi_handle); @@ -5877,7 +5953,7 @@ enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle) * This function replays queue type node bandwidth. This function needs to be * called with scheduler lock held. 
*/ -enum ice_status +int ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx) { struct ice_sched_node *q_node; diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h index 5b35fd564e..456b54cfb8 100644 --- a/drivers/net/ice/base/ice_sched.h +++ b/drivers/net/ice/base/ice_sched.h @@ -7,6 +7,8 @@ #include "ice_common.h" +#define SCHED_NODE_NAME_MAX_LEN 32 + #define ICE_SCHED_5_LAYERS 5 #define ICE_SCHED_9_LAYERS 9 @@ -81,28 +83,54 @@ struct ice_sched_agg_info { }; /* FW AQ command calls */ -enum ice_status +int ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles, struct ice_aqc_rl_profile_elem *buf, u16 buf_size, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_cfg_node_attr(struct ice_hw *hw, u16 num_nodes, struct ice_aqc_node_attr_elem *buf, u16 buf_size, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_nodes, struct ice_aqc_cfg_l2_node_cgd_elem *buf, u16 buf_size, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req, struct ice_aqc_move_elem *buf, u16 buf_size, u16 *grps_movd, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req, struct ice_aqc_txsched_elem_data *buf, u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd); -enum ice_status ice_sched_init_port(struct ice_port_info *pi); -enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw); + +int +ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node, + enum ice_rl_type rl_type, u32 bw); + +int +ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node, + enum ice_rl_type rl_type, u32 bw, u8 layer_num); + +int +ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node, + struct ice_sched_node *parent, u8 layer, u16 num_nodes, + u16 *num_nodes_added, u32 *first_node_teid, + struct ice_sched_node **prealloc_node); + +int +ice_sched_move_nodes(struct 
ice_port_info *pi, struct ice_sched_node *parent, + u16 num_items, u32 *list); + +int +ice_sched_set_node_priority(struct ice_port_info *pi, struct ice_sched_node *node, + u16 priority); +int +ice_sched_set_node_weight(struct ice_port_info *pi, struct ice_sched_node *node, + u16 weight); + +int ice_sched_init_port(struct ice_port_info *pi); +int ice_sched_query_res_alloc(struct ice_hw *hw); void ice_sched_get_psm_clk_freq(struct ice_hw *hw); /* Functions to cleanup scheduler SW DB */ @@ -115,132 +143,132 @@ struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid); struct ice_sched_node * ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid); /* Add a scheduling node into SW DB for given info */ -enum ice_status +int ice_sched_add_node(struct ice_port_info *pi, u8 layer, struct ice_aqc_txsched_elem_data *info, struct ice_sched_node *prealloc_node); +void +ice_sched_update_parent(struct ice_sched_node *new_parent, + struct ice_sched_node *node); void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node); struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc); struct ice_sched_node * ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 owner); -enum ice_status +int ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs, u8 owner, bool enable); -enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle); +int ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle); struct ice_sched_node * ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node, u16 vsi_handle); bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node); -enum ice_status +int ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid, struct ice_aqc_txsched_elem_data *buf, u16 buf_size, struct ice_sq_cd *cd); /* Tx scheduler rate limiter functions */ -enum ice_status +int ice_cfg_agg(struct ice_port_info 
*pi, u32 agg_id, enum ice_agg_type agg_type, u8 tc_bitmap); -enum ice_status +int ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle, u8 tc_bitmap); -enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id); -enum ice_status +int ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id); +int ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, enum ice_rl_type rl_type, u32 bw); -enum ice_status +int ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, enum ice_rl_type rl_type); -enum ice_status +int ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type, u32 bw); -enum ice_status +int ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type); -enum ice_status +int ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, enum ice_rl_type rl_type, u32 bw); -enum ice_status +int ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, enum ice_rl_type rl_type); -enum ice_status +int ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, enum ice_rl_type rl_type, u32 bw); -enum ice_status +int ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, enum ice_rl_type rl_type); -enum ice_status +int ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw, u32 max_bw, u32 shared_bw); -enum ice_status +int ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle); -enum ice_status +int ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw, u32 max_bw, u32 shared_bw); -enum ice_status +int ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id); -enum ice_status +int ice_cfg_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, u32 min_bw, u32 max_bw, u32 shared_bw); -enum ice_status +int ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc); -enum 
ice_status +int ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids, u8 *q_prio); -enum ice_status +int ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi, struct ice_sched_node *node, u8 priority); -enum ice_status +int ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc); -enum ice_status +int ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap, enum ice_rl_type rl_type, u8 *bw_alloc); -enum ice_status +int ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id, u16 num_vsis, u16 *vsi_handle_arr, u8 *node_prio, u8 tc); -enum ice_status +int ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap, enum ice_rl_type rl_type, u8 *bw_alloc); bool ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base, struct ice_sched_node *node); -enum ice_status +int ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle); -enum ice_status +int ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id, enum ice_agg_type agg_type, u8 tc, enum ice_rl_type rl_type, u32 bw); -enum ice_status +int ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw, u32 max_bw, u32 shared_bw); -enum ice_status +int ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw, u32 max_bw, u32 shared_bw); -enum ice_status +int ice_sched_set_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc, u32 min_bw, u32 max_bw, u32 shared_bw); -enum ice_status +int ice_sched_cfg_sibl_node_prio(struct ice_port_info *pi, struct ice_sched_node *node, u8 priority); -enum ice_status +int ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc, enum ice_rl_type rl_type, u8 bw_alloc); -enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes); +int ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes); void ice_sched_replay_agg_vsi_preinit(struct ice_hw 
*hw); void ice_sched_replay_agg(struct ice_hw *hw); -enum ice_status ice_sched_replay_tc_node_bw(struct ice_port_info *pi); -enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle); -enum ice_status ice_sched_replay_root_node_bw(struct ice_port_info *pi); -enum ice_status -ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx); -enum ice_status -ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node, - enum ice_rl_type rl_type, u32 bw); -enum ice_status +int ice_sched_replay_tc_node_bw(struct ice_port_info *pi); +int ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle); +int ice_sched_replay_root_node_bw(struct ice_port_info *pi); +int ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx); +int ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node, enum ice_rl_type rl_type, u16 bw_alloc); + #endif /* _ICE_SCHED_H_ */ diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c index f7fcc3a8d4..70fa9c203f 100644 --- a/drivers/net/ice/base/ice_switch.c +++ b/drivers/net/ice/base/ice_switch.c @@ -12,14 +12,14 @@ #define ICE_ETH_VLAN_TCI_OFFSET 14 #define ICE_MAX_VLAN_ID 0xFFF #define ICE_IPV6_ETHER_ID 0x86DD -#define ICE_IPV4_NVGRE_PROTO_ID 0x002F #define ICE_PPP_IPV6_PROTO_ID 0x0057 +#define ICE_IPV4_NVGRE_PROTO_ID 0x002F #define ICE_TCP_PROTO_ID 0x06 #define ICE_GTPU_PROFILE 24 #define ICE_MPLS_ETHER_ID 0x8847 #define ICE_ETH_P_8021Q 0x8100 -/* Dummy ethernet header needed in the ice_aqc_sw_rules_elem +/* Dummy ethernet header needed in the ice_sw_rule_* * struct to configure any switch filter rules. 
* {DA (6 bytes), SA(6 bytes), * Ether type (2 bytes for header without VLAN tag) OR @@ -38,11 +38,6 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0, 0x2, 0, 0, 0, 0, 0, 0x81, 0, 0, 0}; -struct ice_dummy_pkt_offsets { - enum ice_protocol_type type; - u16 offset; /* ICE_PROTOCOL_LAST indicates end of list */ -}; - static const struct ice_dummy_pkt_offsets dummy_gre_tcp_packet_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_ETYPE_OL, 12 }, @@ -1141,15 +1136,6 @@ static const u8 dummy_ipv6_gtpu_ipv6_udp_packet[] = { 0x00, 0x00, /* 2 bytes for 4 byte alignment */ }; -static const struct ice_dummy_pkt_offsets dummy_ipv4_gtpu_ipv4_packet_offsets[] = { - { ICE_MAC_OFOS, 0 }, - { ICE_IPV4_OFOS, 14 }, - { ICE_UDP_OF, 34 }, - { ICE_GTP, 42 }, - { ICE_IPV4_IL, 62 }, - { ICE_PROTOCOL_LAST, 0 }, -}; - static const u8 dummy_ipv4_gtpu_ipv4_packet[] = { 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ 0x00, 0x00, 0x00, 0x00, @@ -1180,8 +1166,17 @@ static const u8 dummy_ipv4_gtpu_ipv4_packet[] = { 0x00, 0x00, }; -static const -struct ice_dummy_pkt_offsets dummy_ipv4_gtpu_ipv6_packet_offsets[] = { +static const struct +ice_dummy_pkt_offsets dummy_ipv4_gtpu_ipv4_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV4_OFOS, 14 }, + { ICE_UDP_OF, 34 }, + { ICE_GTP, 42 }, + { ICE_IPV4_IL, 62 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const struct ice_dummy_pkt_offsets dummy_ipv4_gtpu_ipv6_packet_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_IPV4_OFOS, 14 }, { ICE_UDP_OF, 34 }, @@ -1226,8 +1221,7 @@ static const u8 dummy_ipv4_gtpu_ipv6_packet[] = { 0x00, 0x00, }; -static const -struct ice_dummy_pkt_offsets dummy_ipv6_gtpu_ipv4_packet_offsets[] = { +static const struct ice_dummy_pkt_offsets dummy_ipv6_gtpu_ipv4_packet_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_IPV6_OFOS, 14 }, { ICE_UDP_OF, 54 }, @@ -1272,8 +1266,7 @@ static const u8 dummy_ipv6_gtpu_ipv4_packet[] = { 0x00, 0x00, }; -static const -struct ice_dummy_pkt_offsets dummy_ipv6_gtpu_ipv6_packet_offsets[] = { +static 
const struct ice_dummy_pkt_offsets dummy_ipv6_gtpu_ipv6_packet_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_IPV6_OFOS, 14 }, { ICE_UDP_OF, 54 }, @@ -1660,83 +1653,6 @@ static const u8 dummy_pppoe_ipv4_packet[] = { 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ }; -static const -struct ice_dummy_pkt_offsets dummy_pppoe_ipv4_tcp_packet_offsets[] = { - { ICE_MAC_OFOS, 0 }, - { ICE_VLAN_OFOS, 12 }, - { ICE_ETYPE_OL, 16 }, - { ICE_PPPOE, 18 }, - { ICE_IPV4_OFOS, 26 }, - { ICE_TCP_IL, 46 }, - { ICE_PROTOCOL_LAST, 0 }, -}; - -static const u8 dummy_pppoe_ipv4_tcp_packet[] = { - 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */ - - 0x88, 0x64, /* ICE_ETYPE_OL 16 */ - - 0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 18 */ - 0x00, 0x16, - - 0x00, 0x21, /* PPP Link Layer 24 */ - - 0x45, 0x00, 0x00, 0x28, /* ICE_IPV4_OFOS 26 */ - 0x00, 0x01, 0x00, 0x00, - 0x00, 0x06, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 46 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x50, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ -}; - -static const -struct ice_dummy_pkt_offsets dummy_pppoe_ipv4_udp_packet_offsets[] = { - { ICE_MAC_OFOS, 0 }, - { ICE_VLAN_OFOS, 12 }, - { ICE_ETYPE_OL, 16 }, - { ICE_PPPOE, 18 }, - { ICE_IPV4_OFOS, 26 }, - { ICE_UDP_ILOS, 46 }, - { ICE_PROTOCOL_LAST, 0 }, -}; - -static const u8 dummy_pppoe_ipv4_udp_packet[] = { - 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */ - - 0x88, 0x64, /* ICE_ETYPE_OL 16 */ - - 0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 18 */ - 0x00, 0x16, - - 0x00, 0x21, /* PPP Link Layer 24 */ - - 0x45, 0x00, 0x00, 0x1c, /* ICE_IPV4_OFOS 26 */ - 0x00, 0x01, 0x00, 0x00, - 0x00, 0x11, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x00, 0x00, 
0x00, 0x00, /* ICE_UDP_ILOS 46 */ - 0x00, 0x08, 0x00, 0x00, - - 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ -}; - static const struct ice_dummy_pkt_offsets dummy_pppoe_packet_ipv6_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_VLAN_OFOS, 12 }, @@ -1774,97 +1690,10 @@ static const u8 dummy_pppoe_ipv6_packet[] = { 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ }; -static const -struct ice_dummy_pkt_offsets dummy_pppoe_ipv6_tcp_packet_offsets[] = { - { ICE_MAC_OFOS, 0 }, - { ICE_VLAN_OFOS, 12 }, - { ICE_ETYPE_OL, 16 }, - { ICE_PPPOE, 18 }, - { ICE_IPV6_OFOS, 26 }, - { ICE_TCP_IL, 66 }, - { ICE_PROTOCOL_LAST, 0 }, -}; - -static const u8 dummy_pppoe_ipv6_tcp_packet[] = { - 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */ - - 0x88, 0x64, /* ICE_ETYPE_OL 16 */ - - 0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 18 */ - 0x00, 0x2a, - - 0x00, 0x57, /* PPP Link Layer 24 */ - - 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_OFOS 26 */ - 0x00, 0x14, 0x06, 0x00, /* Next header is TCP */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 66 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x50, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ -}; - -static const -struct ice_dummy_pkt_offsets dummy_pppoe_ipv6_udp_packet_offsets[] = { - { ICE_MAC_OFOS, 0 }, - { ICE_VLAN_OFOS, 12 }, - { ICE_ETYPE_OL, 16 }, - { ICE_PPPOE, 18 }, - { ICE_IPV6_OFOS, 26 }, - { ICE_UDP_ILOS, 66 }, - { ICE_PROTOCOL_LAST, 0 }, -}; - -static const u8 dummy_pppoe_ipv6_udp_packet[] = { - 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */ - - 0x88, 0x64, /* ICE_ETYPE_OL 16 */ - - 0x11, 0x00, 
0x00, 0x00, /* ICE_PPPOE 18 */ - 0x00, 0x2a, - - 0x00, 0x57, /* PPP Link Layer 24 */ - - 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_OFOS 26 */ - 0x00, 0x08, 0x11, 0x00, /* Next header UDP*/ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x00, 0x00, 0x00, 0x00, /* ICE_UDP_ILOS 66 */ - 0x00, 0x08, 0x00, 0x00, - - 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ -}; - static const struct ice_dummy_pkt_offsets dummy_ipv4_esp_packet_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_IPV4_OFOS, 14 }, - { ICE_ESP, 34 }, + { ICE_ESP, 34 }, { ICE_PROTOCOL_LAST, 0 }, }; @@ -1888,7 +1717,7 @@ static const u8 dummy_ipv4_esp_pkt[] = { static const struct ice_dummy_pkt_offsets dummy_ipv6_esp_packet_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_IPV6_OFOS, 14 }, - { ICE_ESP, 54 }, + { ICE_ESP, 54 }, { ICE_PROTOCOL_LAST, 0 }, }; @@ -1917,7 +1746,7 @@ static const u8 dummy_ipv6_esp_pkt[] = { static const struct ice_dummy_pkt_offsets dummy_ipv4_ah_packet_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_IPV4_OFOS, 14 }, - { ICE_AH, 34 }, + { ICE_AH, 34 }, { ICE_PROTOCOL_LAST, 0 }, }; @@ -1942,7 +1771,7 @@ static const u8 dummy_ipv4_ah_pkt[] = { static const struct ice_dummy_pkt_offsets dummy_ipv6_ah_packet_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_IPV6_OFOS, 14 }, - { ICE_AH, 54 }, + { ICE_AH, 54 }, { ICE_PROTOCOL_LAST, 0 }, }; @@ -2031,61 +1860,6 @@ static const u8 dummy_ipv6_nat_pkt[] = { }; -static const struct ice_dummy_pkt_offsets dummy_ipv4_l2tpv3_packet_offsets[] = { - { ICE_MAC_OFOS, 0 }, - { ICE_IPV4_OFOS, 14 }, - { ICE_L2TPV3, 34 }, - { ICE_PROTOCOL_LAST, 0 }, -}; - -static const u8 dummy_ipv4_l2tpv3_pkt[] = { - 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x08, 0x00, - - 0x45, 0x00, 0x00, 0x20, /* ICE_IPV4_IL 14 */ - 0x00, 0x00, 0x40, 0x00, - 0x40, 0x73, 0x00, 0x00, - 0x00, 0x00, 0x00, 
0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x00, 0x00, 0x00, 0x00, /* ICE_L2TPV3 34 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ -}; - -static const struct ice_dummy_pkt_offsets dummy_ipv6_l2tpv3_packet_offsets[] = { - { ICE_MAC_OFOS, 0 }, - { ICE_IPV6_OFOS, 14 }, - { ICE_L2TPV3, 54 }, - { ICE_PROTOCOL_LAST, 0 }, -}; - -static const u8 dummy_ipv6_l2tpv3_pkt[] = { - 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x86, 0xDD, - - 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_IL 14 */ - 0x00, 0x0c, 0x73, 0x40, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - - 0x00, 0x00, 0x00, 0x00, /* ICE_L2TPV3 54 */ - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ -}; - static const struct ice_dummy_pkt_offsets dummy_qinq_pppoe_packet_offsets[] = { { ICE_MAC_OFOS, 0 }, { ICE_VLAN_EX, 12 }, @@ -2168,13 +1942,236 @@ static const u8 dummy_qinq_pppoe_ipv6_packet[] = { 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ }; -/* this is a recipe to profile association bitmap */ -static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES], - ICE_MAX_NUM_PROFILES); - -/* this is a profile to recipe association bitmap */ -static ice_declare_bitmap(profile_to_recipe[ICE_MAX_NUM_PROFILES], - ICE_MAX_NUM_RECIPES); +static const struct ice_dummy_pkt_offsets dummy_ipv4_l2tpv3_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_ETYPE_OL, 12 }, + { ICE_IPV4_OFOS, 14 }, + { ICE_L2TPV3, 34 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv4_l2tpv3_pkt[] = { + 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x08, 0x00, /* ICE_ETYPE_OL 12 */ + + 0x45, 0x00, 0x00, 0x20, /* ICE_IPV4_IL 14 */ + 0x00, 0x00, 0x40, 0x00, + 0x40, 0x73, 
0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* ICE_L2TPV3 34 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ +}; + +static const struct ice_dummy_pkt_offsets dummy_ipv6_l2tpv3_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_ETYPE_OL, 12 }, + { ICE_IPV6_OFOS, 14 }, + { ICE_L2TPV3, 54 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv6_l2tpv3_pkt[] = { + 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x86, 0xDD, /* ICE_ETYPE_OL 12 */ + + 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_IL 14 */ + 0x00, 0x0c, 0x73, 0x40, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* ICE_L2TPV3 54 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ +}; + +static const +struct ice_dummy_pkt_offsets dummy_pppoe_ipv4_tcp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_VLAN_OFOS, 12 }, + { ICE_ETYPE_OL, 16 }, + { ICE_PPPOE, 18 }, + { ICE_IPV4_OFOS, 26 }, + { ICE_TCP_IL, 46 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_pppoe_ipv4_tcp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */ + + 0x88, 0x64, /* ICE_ETYPE_OL 16 */ + + 0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 18 */ + 0x00, 0x16, + + 0x00, 0x21, /* PPP Link Layer 24 */ + + 0x45, 0x00, 0x00, 0x28, /* ICE_IPV4_OFOS 26 */ + 0x00, 0x01, 0x00, 0x00, + 0x00, 0x06, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 46 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x50, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 bytes 
alignment */ +}; + +static const +struct ice_dummy_pkt_offsets dummy_pppoe_ipv4_udp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_VLAN_OFOS, 12 }, + { ICE_ETYPE_OL, 16 }, + { ICE_PPPOE, 18 }, + { ICE_IPV4_OFOS, 26 }, + { ICE_UDP_ILOS, 46 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_pppoe_ipv4_udp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */ + + 0x88, 0x64, /* ICE_ETYPE_OL 16 */ + + 0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 18 */ + 0x00, 0x16, + + 0x00, 0x21, /* PPP Link Layer 24 */ + + 0x45, 0x00, 0x00, 0x1c, /* ICE_IPV4_OFOS 26 */ + 0x00, 0x01, 0x00, 0x00, + 0x00, 0x11, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* ICE_UDP_ILOS 46 */ + 0x00, 0x08, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ +}; + +static const +struct ice_dummy_pkt_offsets dummy_pppoe_ipv6_tcp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_VLAN_OFOS, 12 }, + { ICE_ETYPE_OL, 16 }, + { ICE_PPPOE, 18 }, + { ICE_IPV6_OFOS, 26 }, + { ICE_TCP_IL, 66 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_pppoe_ipv6_tcp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */ + + 0x88, 0x64, /* ICE_ETYPE_OL 16 */ + + 0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 18 */ + 0x00, 0x2a, + + 0x00, 0x57, /* PPP Link Layer 24 */ + + 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_OFOS 26 */ + 0x00, 0x14, 0x06, 0x00, /* Next header is TCP */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 66 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x50, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes 
for 4 bytes alignment */ +}; + +static const +struct ice_dummy_pkt_offsets dummy_pppoe_ipv6_udp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_VLAN_OFOS, 12 }, + { ICE_ETYPE_OL, 16 }, + { ICE_PPPOE, 18 }, + { ICE_IPV6_OFOS, 26 }, + { ICE_UDP_ILOS, 66 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_pppoe_ipv6_udp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x81, 0x00, 0x00, 0x00, /* ICE_VLAN_OFOS 12 */ + + 0x88, 0x64, /* ICE_ETYPE_OL 16 */ + + 0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 18 */ + 0x00, 0x2a, + + 0x00, 0x57, /* PPP Link Layer 24 */ + + 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_OFOS 26 */ + 0x00, 0x08, 0x11, 0x00, /* Next header UDP*/ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* ICE_UDP_ILOS 66 */ + 0x00, 0x08, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ +}; + +/* this is a recipe to profile association bitmap */ +static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES], + ICE_MAX_NUM_PROFILES); + +/* this is a profile to recipe association bitmap */ +static ice_declare_bitmap(profile_to_recipe[ICE_MAX_NUM_PROFILES], + ICE_MAX_NUM_RECIPES); static void ice_get_recp_to_prof_map(struct ice_hw *hw); @@ -2221,18 +2218,21 @@ static struct ice_prof_type_entry ice_prof_type_tbl[ICE_GTPU_PROFILE] = { /** * ice_get_tun_type_for_recipe - get tunnel type for the recipe * @rid: recipe ID that we are populating + * @vlan: flag of vlan protocol */ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan) { - u8 vxlan_profile[12] = {10, 11, 12, 16, 17, 18, 22, 23, 24, 25, 26, 27}; + u8 udp_tun_profile[12] = {10, 11, 12, 16, 17, 18, 22, 23, 24, 25, 26, + 27}; u8 gre_profile[12] = {13, 14, 15, 19, 20, 21, 28, 29, 30, 31, 32, 33}; u8 pppoe_profile[7] = {34, 
35, 36, 37, 38, 39, 40}; u8 non_tun_profile[6] = {4, 5, 6, 7, 8, 9}; enum ice_sw_tunnel_type tun_type; - u16 i, j, k, profile_num = 0; + u16 i, j, profile_num = 0; + u16 k; + bool udp_tun_valid = false; bool non_tun_valid = false; bool pppoe_valid = false; - bool vxlan_valid = false; bool gre_valid = false; bool gtp_valid = false; bool flag_valid = false; @@ -2249,8 +2249,8 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan) } for (i = 0; i < 12; i++) { - if (vxlan_profile[i] == j) - vxlan_valid = true; + if (udp_tun_profile[i] == j) + udp_tun_valid = true; } for (i = 0; i < 7; i++) { @@ -2274,8 +2274,8 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan) flag_valid = true; } - if (!non_tun_valid && vxlan_valid) - tun_type = ICE_SW_TUN_VXLAN; + if (!non_tun_valid && udp_tun_valid) + tun_type = ICE_SW_TUN_UDP; else if (!non_tun_valid && gre_valid) tun_type = ICE_SW_TUN_NVGRE; else if (!non_tun_valid && pppoe_valid) @@ -2283,9 +2283,9 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan) else if (!non_tun_valid && gtp_valid) tun_type = ICE_SW_TUN_GTP; else if (non_tun_valid && - (vxlan_valid || gre_valid || gtp_valid || pppoe_valid)) + (udp_tun_valid || gre_valid || gtp_valid || pppoe_valid)) tun_type = ICE_SW_TUN_AND_NON_TUN; - else if (non_tun_valid && !vxlan_valid && !gre_valid && !gtp_valid && + else if (non_tun_valid && !udp_tun_valid && !gre_valid && !gtp_valid && !pppoe_valid) tun_type = ICE_NON_TUN; else @@ -2425,23 +2425,23 @@ static enum ice_sw_tunnel_type ice_get_tun_type_for_recipe(u8 rid, bool vlan) * @recps: struct that we need to populate * @rid: recipe ID that we are populating * @refresh_required: true if we should get recipe to profile mapping from FW + * @is_add: flag of adding recipe * - * This function is used to populate all the necessary entries into our - * bookkeeping so that we have a current list of all the recipes that are - * programmed in the firmware. 
+ * Populate all the necessary entries into SW bookkeeping so that we have a + * current list of all the recipes that are programmed in the firmware. */ -static enum ice_status +static int ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, - bool *refresh_required) + bool *refresh_required, bool *is_add) { ice_declare_bitmap(result_bm, ICE_MAX_FV_WORDS); struct ice_aqc_recipe_data_elem *tmp; u16 num_recps = ICE_MAX_NUM_RECIPES; struct ice_prot_lkup_ext *lkup_exts; - enum ice_status status; u8 fv_word_idx = 0; bool vlan = false; u16 sub_recps; + int status; ice_zero_bitmap(result_bm, ICE_MAX_FV_WORDS); @@ -2457,6 +2457,11 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, if (status) goto err_unroll; + if (!num_recps) { + status = ICE_ERR_PARAM; + goto err_unroll; + } + /* Get recipe to profile map so that we can get the fv from lkups that * we read for a recipe from FW. Since we want to minimize the number of * times we make this FW call, just make one call and cache the copy @@ -2540,7 +2545,7 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, LIST_ADD(&rg_entry->l_entry, &recps[rid].rg_list); /* Propagate some data to the recipe database */ - recps[idx].is_root = !!is_root; + recps[idx].is_root = is_root; recps[idx].priority = root_bufs.content.act_ctrl_fwd_priority; ice_zero_bitmap(recps[idx].res_idxs, ICE_MAX_FV_WORDS); if (root_bufs.content.result_indx & ICE_AQ_RECIPE_RESULT_EN) { @@ -2551,8 +2556,12 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, recps[idx].chain_idx = ICE_INVAL_CHAIN_IND; } - if (!is_root) + if (!is_root) { + if (hw->subscribable_recipes_supported && *is_add) + recps[idx].recp_created = true; + continue; + } /* Only do the following for root recipes entries */ ice_memcpy(recps[idx].r_bitmap, root_bufs.recipe_bitmap, @@ -2567,6 +2576,9 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, recps[rid].big_recp 
= (num_recps > 1); recps[rid].n_grp_count = (u8)num_recps; recps[rid].tun_type = ice_get_tun_type_for_recipe(rid, vlan); + if (recps[rid].root_buf) + ice_free(hw, recps[rid].root_buf); + recps[rid].root_buf = (struct ice_aqc_recipe_data_elem *) ice_memdup(hw, tmp, recps[rid].n_grp_count * sizeof(*recps[rid].root_buf), ICE_NONDMA_TO_NONDMA); @@ -2575,7 +2587,8 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, /* Copy result indexes */ ice_cp_bitmap(recps[rid].res_idxs, result_bm, ICE_MAX_FV_WORDS); - recps[rid].recp_created = true; + if (!hw->subscribable_recipes_supported || (hw->subscribable_recipes_supported && *is_add)) + recps[rid].recp_created = true; err_unroll: ice_free(hw, tmp); @@ -2620,7 +2633,7 @@ ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle); * Allocate memory for the entire recipe table and initialize the structures/ * entries corresponding to basic recipes. */ -enum ice_status +int ice_init_def_sw_recp(struct ice_hw *hw, struct ice_sw_recipe **recp_list) { struct ice_sw_recipe *recps; @@ -2641,7 +2654,7 @@ ice_init_def_sw_recp(struct ice_hw *hw, struct ice_sw_recipe **recp_list) *recp_list = recps; - return ICE_SUCCESS; + return 0; } /** @@ -2669,14 +2682,14 @@ ice_init_def_sw_recp(struct ice_hw *hw, struct ice_sw_recipe **recp_list) * in response buffer. The caller of this function to use *num_elems while * parsing the response buffer. 
*/ -static enum ice_status +static int ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp_elem *buf, u16 buf_size, u16 *req_desc, u16 *num_elems, struct ice_sq_cd *cd) { struct ice_aqc_get_sw_cfg *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_sw_cfg); cmd = &desc.params.get_sw_conf; @@ -2697,10 +2710,10 @@ ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp_elem *buf, * @shared_res: true to allocate as a shared resource and false to allocate as a dedicated resource * @global_lut_id: output parameter for the RSS global LUT's ID */ -enum ice_status ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id) +int ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id) { struct ice_aqc_alloc_free_res_elem *sw_buf; - enum ice_status status; + int status; u16 buf_len; buf_len = ice_struct_size(sw_buf, elem, 1); @@ -2733,12 +2746,12 @@ enum ice_status ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 * @marker_lg_id: ID of the marker large action to free * @sw_marker: sw marker to tag the Rx descriptor with */ -static enum ice_status +static int ice_free_sw_marker_lg(struct ice_hw *hw, u16 marker_lg_id, u32 sw_marker) { struct ice_aqc_alloc_free_res_elem *sw_buf; u16 buf_len, num_elems = 1; - enum ice_status status; + int status; buf_len = ice_struct_size(sw_buf, elem, num_elems); sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); @@ -2746,7 +2759,7 @@ ice_free_sw_marker_lg(struct ice_hw *hw, u16 marker_lg_id, u32 sw_marker) return ICE_ERR_NO_MEMORY; sw_buf->num_elems = CPU_TO_LE16(num_elems); - if (sw_marker == (sw_marker & 0xFFFF)) + if (sw_marker <= 0xFFFF) sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_WIDE_TABLE_1); else sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_WIDE_TABLE_2); @@ -2768,11 +2781,11 @@ ice_free_sw_marker_lg(struct ice_hw *hw, u16 marker_lg_id, u32 
sw_marker) * @hw: pointer to the HW struct * @global_lut_id: ID of the RSS global LUT to free */ -enum ice_status ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id) +int ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id) { struct ice_aqc_alloc_free_res_elem *sw_buf; u16 buf_len, num_elems = 1; - enum ice_status status; + int status; buf_len = ice_struct_size(sw_buf, elem, num_elems); sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); @@ -2802,14 +2815,14 @@ enum ice_status ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id) * * allocates switch resources (SWID and VEB counter) (0x0208) */ -enum ice_status +int ice_alloc_sw(struct ice_hw *hw, bool ena_stats, bool shared_res, u16 *sw_id, u16 *counter_id) { struct ice_aqc_alloc_free_res_elem *sw_buf; struct ice_aqc_res_elem *sw_ele; - enum ice_status status; u16 buf_len; + int status; buf_len = ice_struct_size(sw_buf, elem, 1); sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); @@ -2886,10 +2899,10 @@ ice_alloc_sw(struct ice_hw *hw, bool ena_stats, bool shared_res, u16 *sw_id, * releasing other resources even after it encounters error. * The error code returned is the last error it encountered. 
*/ -enum ice_status ice_free_sw(struct ice_hw *hw, u16 sw_id, u16 counter_id) +int ice_free_sw(struct ice_hw *hw, u16 sw_id, u16 counter_id) { struct ice_aqc_alloc_free_res_elem *sw_buf, *counter_buf; - enum ice_status status, ret_status; + int status, ret_status; u16 buf_len; buf_len = ice_struct_size(sw_buf, elem, 1); @@ -2948,14 +2961,14 @@ enum ice_status ice_free_sw(struct ice_hw *hw, u16 sw_id, u16 counter_id) * * Add a VSI context to the hardware (0x0210) */ -enum ice_status +int ice_aq_add_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd) { struct ice_aqc_add_update_free_vsi_resp *res; struct ice_aqc_add_get_update_free_vsi *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.vsi_cmd; res = &desc.params.add_update_free_vsi_res; @@ -2991,14 +3004,14 @@ ice_aq_add_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, * * Free VSI context info from hardware (0x0213) */ -enum ice_status +int ice_aq_free_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, bool keep_vsi_alloc, struct ice_sq_cd *cd) { struct ice_aqc_add_update_free_vsi_resp *resp; struct ice_aqc_add_get_update_free_vsi *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.vsi_cmd; resp = &desc.params.add_update_free_vsi_res; @@ -3026,14 +3039,14 @@ ice_aq_free_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, * * Update VSI context in the hardware (0x0211) */ -enum ice_status +int ice_aq_update_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd) { struct ice_aqc_add_update_free_vsi_resp *resp; struct ice_aqc_add_get_update_free_vsi *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.vsi_cmd; resp = &desc.params.add_update_free_vsi_res; @@ -3111,7 +3124,7 @@ ice_save_vsi_ctx(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi) * @hw: pointer to the HW struct * @vsi_handle: VSI handle */ -static void ice_clear_vsi_q_ctx(struct ice_hw *hw, 
u16 vsi_handle) +void ice_clear_vsi_q_ctx(struct ice_hw *hw, u16 vsi_handle) { struct ice_vsi_ctx *vsi; u8 i; @@ -3169,12 +3182,12 @@ void ice_clear_all_vsi_ctx(struct ice_hw *hw) * If this function gets called after reset for existing VSIs then update * with the new HW VSI number in the corresponding VSI handle list entry. */ -enum ice_status +int ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd) { struct ice_vsi_ctx *tmp_vsi_ctx; - enum ice_status status; + int status; if (vsi_handle >= ICE_MAX_VSI) return ICE_ERR_PARAM; @@ -3198,7 +3211,7 @@ ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, tmp_vsi_ctx->vsi_num = vsi_ctx->vsi_num; } - return ICE_SUCCESS; + return 0; } /** @@ -3211,11 +3224,11 @@ ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, * * Free VSI context info from hardware as well as from VSI handle list */ -enum ice_status +int ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, bool keep_vsi_alloc, struct ice_sq_cd *cd) { - enum ice_status status; + int status; if (!ice_is_vsi_valid(hw, vsi_handle)) return ICE_ERR_PARAM; @@ -3235,7 +3248,7 @@ ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, * * Update VSI context in the hardware */ -enum ice_status +int ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd) { @@ -3253,14 +3266,14 @@ ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, * * Get VSI context info from hardware (0x0212) */ -enum ice_status +int ice_aq_get_vsi_params(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd) { struct ice_aqc_add_get_update_free_vsi *cmd; struct ice_aqc_get_vsi_resp *resp; struct ice_aq_desc desc; - enum ice_status status; + int status; cmd = &desc.params.vsi_cmd; resp = &desc.params.get_vsi_resp; @@ -3293,16 +3306,16 @@ ice_aq_get_vsi_params(struct ice_hw *hw, struct 
ice_vsi_ctx *vsi_ctx, * * Add/Update Mirror Rule (0x260). */ -enum ice_status +int ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi, u16 count, struct ice_mir_rule_buf *mr_buf, struct ice_sq_cd *cd, u16 *rule_id) { struct ice_aqc_add_update_mir_rule *cmd; struct ice_aq_desc desc; - enum ice_status status; __le16 *mr_list = NULL; u16 buf_size = 0; + int status; switch (rule_type) { case ICE_AQC_RULE_TYPE_VPORT_INGRESS: @@ -3391,7 +3404,7 @@ ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi, * * Delete Mirror Rule (0x261). */ -enum ice_status +int ice_aq_delete_mir_rule(struct ice_hw *hw, u16 rule_id, bool keep_allocd, struct ice_sq_cd *cd) { @@ -3423,15 +3436,15 @@ ice_aq_delete_mir_rule(struct ice_hw *hw, u16 rule_id, bool keep_allocd, * * allocates or free a VSI list resource */ -static enum ice_status +static int ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id, enum ice_sw_lkup_type lkup_type, enum ice_adminq_opc opc) { struct ice_aqc_alloc_free_res_elem *sw_buf; struct ice_aqc_res_elem *vsi_ele; - enum ice_status status; u16 buf_len; + int status; buf_len = ice_struct_size(sw_buf, elem, 1); sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); @@ -3482,7 +3495,7 @@ ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id, * * Sets the storm control configuration (0x0280) */ -enum ice_status +int ice_aq_set_storm_ctrl(struct ice_hw *hw, u32 bcast_thresh, u32 mcast_thresh, u32 ctl_bitmask) { @@ -3509,12 +3522,12 @@ ice_aq_set_storm_ctrl(struct ice_hw *hw, u32 bcast_thresh, u32 mcast_thresh, * * Gets the storm control configuration (0x0281) */ -enum ice_status +int ice_aq_get_storm_ctrl(struct ice_hw *hw, u32 *bcast_thresh, u32 *mcast_thresh, u32 *ctl_bitmask) { - enum ice_status status; struct ice_aq_desc desc; + int status; ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_storm_cfg); @@ -3546,12 +3559,12 @@ ice_aq_get_storm_ctrl(struct ice_hw *hw, u32 *bcast_thresh, u32 
*mcast_thresh, * * Add(0x02a0)/Update(0x02a1)/Remove(0x02a2) switch rules commands to firmware */ -static enum ice_status +static int ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz, u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd) { struct ice_aq_desc desc; - enum ice_status status; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -3582,7 +3595,7 @@ ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz, * * Add(0x0290) */ -enum ice_status +int ice_aq_add_recipe(struct ice_hw *hw, struct ice_aqc_recipe_data_elem *s_recipe_list, u16 num_recipes, struct ice_sq_cd *cd) @@ -3620,15 +3633,15 @@ ice_aq_add_recipe(struct ice_hw *hw, * The caller must supply enough space in s_recipe_list to hold all possible * recipes and *num_recipes must equal ICE_MAX_NUM_RECIPES. */ -enum ice_status +int ice_aq_get_recipe(struct ice_hw *hw, struct ice_aqc_recipe_data_elem *s_recipe_list, u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd) { struct ice_aqc_add_get_recipe *cmd; struct ice_aq_desc desc; - enum ice_status status; u16 buf_size; + int status; if (*num_recipes != ICE_MAX_NUM_RECIPES) return ICE_ERR_PARAM; @@ -3661,13 +3674,13 @@ ice_aq_get_recipe(struct ice_hw *hw, * mask if it's valid at the lkup_idx. Finally, use the add recipe AQ to update * the pre-existing recipe with the modifications. 
*/ -enum ice_status +int ice_update_recipe_lkup_idx(struct ice_hw *hw, struct ice_update_recipe_lkup_idx_params *params) { struct ice_aqc_recipe_data_elem *rcp_list; u16 num_recps = ICE_MAX_NUM_RECIPES; - enum ice_status status; + int status; rcp_list = (struct ice_aqc_recipe_data_elem *)ice_malloc(hw, num_recps * sizeof(*rcp_list)); if (!rcp_list) @@ -3714,7 +3727,7 @@ ice_update_recipe_lkup_idx(struct ice_hw *hw, * @cd: pointer to command details structure or NULL * Recipe to profile association (0x0291) */ -enum ice_status +int ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, struct ice_sq_cd *cd) { @@ -3742,13 +3755,13 @@ ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, * @cd: pointer to command details structure or NULL * Associate profile ID with given recipe (0x0293) */ -enum ice_status +int ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, struct ice_sq_cd *cd) { struct ice_aqc_recipe_to_profile *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); cmd = &desc.params.recipe_to_profile; @@ -3764,31 +3777,215 @@ ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, } /** - * ice_alloc_recipe - add recipe resource + * ice_init_chk_subscribable_recipe_support - are subscribable recipes available + * @hw: pointer to the hardware structure + */ +void ice_init_chk_subscribable_recipe_support(struct ice_hw *hw) +{ + struct ice_nvm_info *nvm = &hw->flash.nvm; + + if (nvm->major >= 0x04 && nvm->minor >= 0x30) + hw->subscribable_recipes_supported = true; + else + hw->subscribable_recipes_supported = false; +} + +/** + * ice_alloc_legacy_shared_recipe - alloc legacy shared recipe + * @hw: pointer to the hardware structure + * @rid: recipe ID returned as response to AQ call + */ +static int +ice_alloc_legacy_shared_recipe(struct ice_hw *hw, u16 *rid) +{ + struct ice_aqc_alloc_free_res_elem *sw_buf; 
+ u16 buf_len; + int status; + + buf_len = ice_struct_size(sw_buf, elem, 1); + sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); + if (!sw_buf) + return ICE_ERR_NO_MEMORY; + + sw_buf->num_elems = CPU_TO_LE16(1); + sw_buf->res_type = CPU_TO_LE16((ICE_AQC_RES_TYPE_RECIPE << + ICE_AQC_RES_TYPE_S) | + ICE_AQC_RES_TYPE_FLAG_SHARED); + status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, + ice_aqc_opc_alloc_res, NULL); + if (!status) + *rid = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp); + ice_free(hw, sw_buf); + + return status; +} + +/** + * ice_alloc_subscribable_recipe - alloc shared recipe that can be subscribed to + * @hw: pointer to the hardware structure + * @rid: recipe ID returned as response to AQ call + */ +static int +ice_alloc_subscribable_recipe(struct ice_hw *hw, u16 *rid) +{ + struct ice_aqc_alloc_free_res_elem *buf; + int status; + u16 buf_len; + + buf_len = ice_struct_size(buf, elem, 1); + buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); + if (!buf) + return ICE_ERR_NO_MEMORY; + + /* Prepare buffer to allocate resource */ + buf->num_elems = CPU_TO_LE16(1); + buf->res_type = CPU_TO_LE16((ICE_AQC_RES_TYPE_RECIPE << + ICE_AQC_RES_TYPE_S) | + ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_SHARED); + + status = ice_aq_alloc_free_res(hw, 1, buf, buf_len, + ice_aqc_opc_alloc_res, NULL); + + if (status) + goto exit; + + ice_memcpy(rid, buf->elem, sizeof(*buf->elem) * 1, + ICE_NONDMA_TO_NONDMA); + +exit: + ice_free(hw, buf); + return status; +} + +/** + * ice_alloc_recipe - add recipe resource + * @hw: pointer to the hardware structure + * @rid: recipe ID returned as response to AQ call + */ +int ice_alloc_recipe(struct ice_hw *hw, u16 *rid) +{ + if (hw->subscribable_recipes_supported) + return ice_alloc_subscribable_recipe(hw, rid); + else + return ice_alloc_legacy_shared_recipe(hw, rid); +} + +/** + * ice_free_recipe_res - free recipe resource + * @hw: pointer to the hardware structure + * @rid: recipe ID to free + */ +static int 
ice_free_recipe_res(struct ice_hw *hw, u16 rid) +{ + return ice_free_hw_res(hw, ICE_AQC_RES_TYPE_RECIPE, 1, &rid); +} + +/* + * ice_subscribe_recipe - subscribe to an existing recipe + * @hw: pointer to the hardware structure + * @rid: recipe ID to subscribe to + */ +static int ice_subscribe_recipe(struct ice_hw *hw, u16 rid) +{ + struct ice_aqc_alloc_free_res_elem *buf; + int status; + u16 buf_len; + + buf_len = ice_struct_size(buf, elem, 1); + buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); + if (!buf) + return ICE_ERR_NO_MEMORY; + + /* Prepare buffer to allocate resource */ + buf->num_elems = CPU_TO_LE16(1); + buf->res_type = CPU_TO_LE16((ICE_AQC_RES_TYPE_RECIPE << + ICE_AQC_RES_TYPE_S) | + ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_SHARED | + ICE_AQC_RES_TYPE_FLAG_SUBSCRIBE_CTL); + + buf->elem[0].e.flu_resp = CPU_TO_LE16(rid); + + status = ice_aq_alloc_free_res(hw, 1, buf, buf_len, + ice_aqc_opc_alloc_res, NULL); + + ice_free(hw, buf); + return status; +} + +/** + * ice_subscribable_recp_shared - share an existing subscribable recipe * @hw: pointer to the hardware structure - * @rid: recipe ID returned as response to AQ call + * @rid: recipe ID to subscribe to */ -enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *rid) +static void ice_subscribable_recp_shared(struct ice_hw *hw, u16 rid) { - struct ice_aqc_alloc_free_res_elem *sw_buf; - enum ice_status status; - u16 buf_len; + ice_declare_bitmap(sub_bitmap, ICE_MAX_NUM_RECIPES); + struct ice_sw_recipe *recps; + u16 i, cnt; - buf_len = ice_struct_size(sw_buf, elem, 1); - sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len); - if (!sw_buf) - return ICE_ERR_NO_MEMORY; + recps = hw->switch_info->recp_list; + ice_cp_bitmap(sub_bitmap, recps[rid].r_bitmap, ICE_MAX_NUM_RECIPES); + cnt = ice_bitmap_hweight(sub_bitmap, ICE_MAX_NUM_RECIPES); + for (i = 0; i < cnt; i++) { + u8 sub_rid; - sw_buf->num_elems = CPU_TO_LE16(1); - sw_buf->res_type = CPU_TO_LE16((ICE_AQC_RES_TYPE_RECIPE << - 
ICE_AQC_RES_TYPE_S) | - ICE_AQC_RES_TYPE_FLAG_SHARED); - status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, - ice_aqc_opc_alloc_res, NULL); - if (!status) - *rid = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp); - ice_free(hw, sw_buf); + sub_rid = (u8)ice_find_first_bit(sub_bitmap, + ICE_MAX_NUM_RECIPES); + ice_subscribe_recipe(hw, sub_rid); + ice_clear_bit(sub_rid, sub_bitmap); + } +} + +/** + * ice_release_recipe_res - disassociate and free recipe resource + * @hw: pointer to the hardware structure + * @recp: the recipe struct resource to unassociate and free + */ +static int ice_release_recipe_res(struct ice_hw *hw, + struct ice_sw_recipe *recp) +{ + ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES); + struct ice_switch_info *sw = hw->switch_info; + int status = 0; + u16 num_recp, num_prof; + u8 rid, prof, i, j; + + num_recp = ice_bitmap_hweight(recp->r_bitmap, ICE_MAX_NUM_RECIPES); + for (i = 0; i < num_recp; i++) { + rid = (u8)ice_find_first_bit(recp->r_bitmap, + ICE_MAX_NUM_RECIPES); + num_prof = ice_bitmap_hweight(recipe_to_profile[rid], + ICE_MAX_NUM_PROFILES); + for (j = 0; j < num_prof; j++) { + prof = (u8)ice_find_first_bit(recipe_to_profile[rid], + ICE_MAX_NUM_PROFILES); + status = ice_aq_get_recipe_to_profile(hw, prof, + (u8 *)r_bitmap, + NULL); + if (status) + goto exit; + + ice_andnot_bitmap(r_bitmap, r_bitmap, + recp->r_bitmap, ICE_MAX_NUM_RECIPES); + ice_aq_map_recipe_to_profile(hw, prof, + (u8 *)r_bitmap, NULL); + + ice_clear_bit(rid, profile_to_recipe[prof]); + ice_clear_bit(prof, recipe_to_profile[rid]); + } + + status = ice_free_recipe_res(hw, rid); + if (status) + goto exit; + + sw->recp_list[rid].recp_created = false; + sw->recp_list[rid].adv_rule = false; + memset(&sw->recp_list[rid].lkup_exts, 0, + sizeof(struct ice_prot_lkup_ext)); + ice_clear_bit(rid, recp->r_bitmap); + } +exit: return status; } @@ -3820,13 +4017,13 @@ ice_init_port_info(struct ice_port_info *pi, u16 vsi_port_num, u8 type, /* ice_get_initial_sw_cfg - Get initial port and 
default VSI data * @hw: pointer to the hardware structure */ -enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw) +int ice_get_initial_sw_cfg(struct ice_hw *hw) { struct ice_aqc_get_sw_cfg_resp_elem *rbuf; - enum ice_status status; u8 num_total_ports; u16 req_desc = 0; u16 num_elems; + int status; u8 j = 0; u16 i; @@ -3874,6 +4071,10 @@ enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw) switch (res_type) { case ICE_AQC_GET_SW_CONF_RESP_VSI: + if (hw->fw_vsi_num != ICE_DFLT_VSI_INVAL) + ice_debug(hw, ICE_DBG_SW, "fw_vsi_num %d -> %d\n", + hw->fw_vsi_num, vsi_port_num); + hw->fw_vsi_num = vsi_port_num; if (hw->dcf_enabled && !is_vf) hw->pf_id = pf_vf_num; break; @@ -3945,6 +4146,7 @@ static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi) * * In all other cases, the LAN enable has to be set to false. */ + if (hw->evb_veb) { if (fi->lkup_type == ICE_SW_LKUP_ETHERTYPE || fi->lkup_type == ICE_SW_LKUP_PROMISC || @@ -3955,12 +4157,21 @@ static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi) (fi->lkup_type == ICE_SW_LKUP_MAC && !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)) || (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN && - !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr))) - fi->lan_en = true; + !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr))) { + if (!fi->fltVeb_en) + fi->lan_en = true; + } } else { fi->lan_en = true; } } + /* To be able to receive packets coming from the VF on the same PF, + * unicast filter needs to be added without LB_EN bit + */ + if (fi->flag & ICE_FLTR_RX_LB) { + fi->lb_en = false; + fi->lan_en = true; + } } /** @@ -3972,7 +4183,8 @@ static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi) */ static void ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, - struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc) + struct ice_sw_rule_lkup_rx_tx *s_rule, + enum ice_adminq_opc opc) { u16 vlan_id = ICE_MAX_VLAN_ID + 1; u16 vlan_tpid = ICE_ETH_P_8021Q; @@ -3984,15 
+4196,14 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, u8 q_rgn; if (opc == ice_aqc_opc_remove_sw_rules) { - s_rule->pdata.lkup_tx_rx.act = 0; - s_rule->pdata.lkup_tx_rx.index = - CPU_TO_LE16(f_info->fltr_rule_id); - s_rule->pdata.lkup_tx_rx.hdr_len = 0; + s_rule->act = 0; + s_rule->index = CPU_TO_LE16(f_info->fltr_rule_id); + s_rule->hdr_len = 0; return; } eth_hdr_sz = sizeof(dummy_eth_header); - eth_hdr = s_rule->pdata.lkup_tx_rx.hdr; + eth_hdr = s_rule->hdr_data; /* initialize the ether header with a dummy header */ ice_memcpy(eth_hdr, dummy_eth_header, eth_hdr_sz, ICE_NONDMA_TO_NONDMA); @@ -4077,14 +4288,14 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, break; } - s_rule->type = (f_info->flag & ICE_FLTR_RX) ? + s_rule->hdr.type = (f_info->flag & ICE_FLTR_RX) ? CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX) : CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX); /* Recipe set depending on lookup type */ - s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(f_info->lkup_type); - s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(f_info->src); - s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act); + s_rule->recipe_id = CPU_TO_LE16(f_info->lkup_type); + s_rule->src = CPU_TO_LE16(f_info->src); + s_rule->act = CPU_TO_LE32(act); if (daddr) ice_memcpy(eth_hdr + ICE_ETH_DA_OFFSET, daddr, ETH_ALEN, @@ -4099,7 +4310,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, /* Create the switch rule with the final dummy Ethernet header */ if (opc != ice_aqc_opc_update_sw_rules) - s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(eth_hdr_sz); + s_rule->hdr_len = CPU_TO_LE16(eth_hdr_sz); } /** @@ -4112,20 +4323,21 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, * Create a large action to hold software marker and update the switch rule * entry pointed by m_ent with newly created large action */ -static enum ice_status +static int ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, u16 sw_marker, u16 l_id) { - 
struct ice_aqc_sw_rules_elem *lg_act, *rx_tx; + struct ice_sw_rule_lkup_rx_tx *rx_tx; + struct ice_sw_rule_lg_act *lg_act; /* For software marker we need 3 large actions * 1. FWD action: FWD TO VSI or VSI LIST * 2. GENERIC VALUE action to hold the profile ID * 3. GENERIC VALUE action to hold the software marker ID */ const u16 num_lg_acts = 3; - enum ice_status status; u16 lg_act_size; u16 rules_size; + int status; u32 act; u16 id; @@ -4137,18 +4349,19 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, * 1. Large Action * 2. Look up Tx Rx */ - lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts); - rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE; - lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size); + lg_act_size = (u16)ice_struct_size(lg_act, act, num_lg_acts); + rules_size = lg_act_size + + ice_struct_size(rx_tx, hdr_data, DUMMY_ETH_HDR_LEN); + lg_act = (struct ice_sw_rule_lg_act *)ice_malloc(hw, rules_size); if (!lg_act) return ICE_ERR_NO_MEMORY; - rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size); + rx_tx = (struct ice_sw_rule_lkup_rx_tx *)((u8 *)lg_act + lg_act_size); /* Fill in the first switch rule i.e. 
large action */ - lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT); - lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id); - lg_act->pdata.lg_act.size = CPU_TO_LE16(num_lg_acts); + lg_act->hdr.type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT); + lg_act->index = CPU_TO_LE16(l_id); + lg_act->size = CPU_TO_LE16(num_lg_acts); /* First action VSI forwarding or VSI list forwarding depending on how * many VSIs @@ -4160,13 +4373,13 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) & ICE_LG_ACT_VSI_LIST_ID_M; if (m_ent->vsi_count > 1) act |= ICE_LG_ACT_VSI_LIST; - lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act); + lg_act->act[0] = CPU_TO_LE32(act); /* Second action descriptor type */ act = ICE_LG_ACT_GENERIC; act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M; - lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act); + lg_act->act[1] = CPU_TO_LE32(act); act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX << ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M; @@ -4176,24 +4389,22 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M; - lg_act->pdata.lg_act.act[2] = CPU_TO_LE32(act); + lg_act->act[2] = CPU_TO_LE32(act); /* call the fill switch rule to fill the lookup Tx Rx structure */ ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx, ice_aqc_opc_update_sw_rules); /* Update the action to point to the large action ID */ - rx_tx->pdata.lkup_tx_rx.act = - CPU_TO_LE32(ICE_SINGLE_ACT_PTR | - ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) & - ICE_SINGLE_ACT_PTR_VAL_M)); + rx_tx->act = CPU_TO_LE32(ICE_SINGLE_ACT_PTR | + ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) & + ICE_SINGLE_ACT_PTR_VAL_M)); /* Use the filter rule ID of the previously created rule with single * act. 
Once the update happens, hardware will treat this as large * action */ - rx_tx->pdata.lkup_tx_rx.index = - CPU_TO_LE16(m_ent->fltr_info.fltr_rule_id); + rx_tx->index = CPU_TO_LE16(m_ent->fltr_info.fltr_rule_id); status = ice_aq_sw_rules(hw, lg_act, rules_size, 2, ice_aqc_opc_update_sw_rules, NULL); @@ -4213,19 +4424,19 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, * @counter_id: VLAN counter ID returned as part of allocate resource * @l_id: large action resource ID */ -static enum ice_status +static int ice_add_counter_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, u16 counter_id, u16 l_id) { - struct ice_aqc_sw_rules_elem *lg_act; - struct ice_aqc_sw_rules_elem *rx_tx; - enum ice_status status; + struct ice_sw_rule_lkup_rx_tx *rx_tx; + struct ice_sw_rule_lg_act *lg_act; /* 2 actions will be added while adding a large action counter */ const int num_acts = 2; u16 lg_act_size; u16 rules_size; u16 f_rule_id; u32 act; + int status; u16 id; if (m_ent->fltr_info.lkup_type != ICE_SW_LKUP_MAC) @@ -4236,18 +4447,20 @@ ice_add_counter_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, * 1. Large Action * 2. Look up Tx Rx */ - lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_acts); - rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE; - lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size); + lg_act_size = (u16)ice_struct_size(lg_act, act, num_acts); + rules_size = lg_act_size + + ice_struct_size(rx_tx, hdr_data, DUMMY_ETH_HDR_LEN); + lg_act = (struct ice_sw_rule_lg_act *)ice_malloc(hw, rules_size); if (!lg_act) return ICE_ERR_NO_MEMORY; - rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size); + rx_tx = (struct ice_sw_rule_lkup_rx_tx *)((u8 *)lg_act + + lg_act_size); /* Fill in the first switch rule i.e. 
large action */ - lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT); - lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id); - lg_act->pdata.lg_act.size = CPU_TO_LE16(num_acts); + lg_act->hdr.type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT); + lg_act->index = CPU_TO_LE16(l_id); + lg_act->size = CPU_TO_LE16(num_acts); /* First action VSI forwarding or VSI list forwarding depending on how * many VSIs @@ -4260,13 +4473,13 @@ ice_add_counter_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, ICE_LG_ACT_VSI_LIST_ID_M; if (m_ent->vsi_count > 1) act |= ICE_LG_ACT_VSI_LIST; - lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act); + lg_act->act[0] = CPU_TO_LE32(act); /* Second action counter ID */ act = ICE_LG_ACT_STAT_COUNT; act |= (counter_id << ICE_LG_ACT_STAT_COUNT_S) & ICE_LG_ACT_STAT_COUNT_M; - lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act); + lg_act->act[1] = CPU_TO_LE32(act); /* call the fill switch rule to fill the lookup Tx Rx structure */ ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx, @@ -4274,14 +4487,14 @@ ice_add_counter_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, act = ICE_SINGLE_ACT_PTR; act |= (l_id << ICE_SINGLE_ACT_PTR_VAL_S) & ICE_SINGLE_ACT_PTR_VAL_M; - rx_tx->pdata.lkup_tx_rx.act = CPU_TO_LE32(act); + rx_tx->act = CPU_TO_LE32(act); /* Use the filter rule ID of the previously created rule with single * act. 
Once the update happens, hardware will treat this as large
 * action
 */
	f_rule_id = m_ent->fltr_info.fltr_rule_id;
-	rx_tx->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_rule_id);
+	rx_tx->index = CPU_TO_LE16(f_rule_id);

	status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
				 ice_aqc_opc_update_sw_rules, NULL);
@@ -4338,15 +4551,15 @@ ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
 * Call AQ command to add a new switch rule or update existing switch rule
 * using the given VSI list ID
 */
-static enum ice_status
+static int
 ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
			 u16 vsi_list_id, bool remove, enum ice_adminq_opc opc,
			 enum ice_sw_lkup_type lkup_type)
 {
-	struct ice_aqc_sw_rules_elem *s_rule;
-	enum ice_status status;
+	struct ice_sw_rule_vsi_list *s_rule;
	u16 s_rule_size;
	u16 rule_type;
+	int status;
	int i;

	if (!num_vsi)
@@ -4368,8 +4581,8 @@ ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
	else
		return ICE_ERR_PARAM;

-	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(num_vsi);
-	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	s_rule_size = (u16)ice_struct_size(s_rule, vsi, num_vsi);
+	s_rule = (struct ice_sw_rule_vsi_list *)ice_malloc(hw, s_rule_size);
	if (!s_rule)
		return ICE_ERR_NO_MEMORY;
	for (i = 0; i < num_vsi; i++) {
@@ -4378,13 +4591,13 @@ ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
			goto exit;
		}
		/* AQ call requires hw_vsi_id(s) */
-		s_rule->pdata.vsi_list.vsi[i] =
+		s_rule->vsi[i] =
			CPU_TO_LE16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i]));
	}

-	s_rule->type = CPU_TO_LE16(rule_type);
-	s_rule->pdata.vsi_list.number_vsi = CPU_TO_LE16(num_vsi);
-	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+	s_rule->hdr.type = CPU_TO_LE16(rule_type);
+	s_rule->number_vsi = CPU_TO_LE16(num_vsi);
+	s_rule->index = CPU_TO_LE16(vsi_list_id);

	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opc, NULL);

@@ -4401,11 +4614,11 @@
ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi, * @vsi_list_id: stores the ID of the VSI list to be created * @lkup_type: switch rule filter's lookup type */ -static enum ice_status +static int ice_create_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi, u16 *vsi_list_id, enum ice_sw_lkup_type lkup_type) { - enum ice_status status; + int status; status = ice_aq_alloc_free_vsi_list(hw, vsi_list_id, lkup_type, ice_aqc_opc_alloc_res); @@ -4428,16 +4641,17 @@ ice_create_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi, * to the corresponding filter management list to track this switch rule * and VSI mapping */ -static enum ice_status +static int ice_create_pkt_fwd_rule(struct ice_hw *hw, struct ice_sw_recipe *recp_list, struct ice_fltr_list_entry *f_entry) { struct ice_fltr_mgmt_list_entry *fm_entry; - struct ice_aqc_sw_rules_elem *s_rule; - enum ice_status status; + struct ice_sw_rule_lkup_rx_tx *s_rule; + int status; - s_rule = (struct ice_aqc_sw_rules_elem *) - ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE); + s_rule = (struct ice_sw_rule_lkup_rx_tx *) + ice_malloc(hw, ice_struct_size(s_rule, hdr_data, + DUMMY_ETH_HDR_LEN)); if (!s_rule) return ICE_ERR_NO_MEMORY; fm_entry = (struct ice_fltr_mgmt_list_entry *) @@ -4458,17 +4672,17 @@ ice_create_pkt_fwd_rule(struct ice_hw *hw, struct ice_sw_recipe *recp_list, ice_fill_sw_rule(hw, &fm_entry->fltr_info, s_rule, ice_aqc_opc_add_sw_rules); - status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1, - ice_aqc_opc_add_sw_rules, NULL); + status = ice_aq_sw_rules(hw, s_rule, + ice_struct_size(s_rule, hdr_data, + DUMMY_ETH_HDR_LEN), + 1, ice_aqc_opc_add_sw_rules, NULL); if (status) { ice_free(hw, fm_entry); goto ice_create_pkt_fwd_rule_exit; } - f_entry->fltr_info.fltr_rule_id = - LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index); - fm_entry->fltr_info.fltr_rule_id = - LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index); + f_entry->fltr_info.fltr_rule_id = 
LE16_TO_CPU(s_rule->index); + fm_entry->fltr_info.fltr_rule_id = LE16_TO_CPU(s_rule->index); /* The book keeping entries will get removed when base driver * calls remove filter AQ command @@ -4488,24 +4702,27 @@ ice_create_pkt_fwd_rule(struct ice_hw *hw, struct ice_sw_recipe *recp_list, * Call AQ command to update a previously created switch rule with a * VSI list ID */ -static enum ice_status +static int ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info) { - struct ice_aqc_sw_rules_elem *s_rule; - enum ice_status status; + struct ice_sw_rule_lkup_rx_tx *s_rule; + int status; - s_rule = (struct ice_aqc_sw_rules_elem *) - ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE); + s_rule = (struct ice_sw_rule_lkup_rx_tx *) + ice_malloc(hw, ice_struct_size(s_rule, hdr_data, + DUMMY_ETH_HDR_LEN)); if (!s_rule) return ICE_ERR_NO_MEMORY; ice_fill_sw_rule(hw, f_info, s_rule, ice_aqc_opc_update_sw_rules); - s_rule->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_info->fltr_rule_id); + s_rule->index = CPU_TO_LE16(f_info->fltr_rule_id); /* Update switch rule with new rule set to forward VSI list */ - status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1, - ice_aqc_opc_update_sw_rules, NULL); + status = ice_aq_sw_rules(hw, s_rule, + ice_struct_size(s_rule, hdr_data, + DUMMY_ETH_HDR_LEN), + 1, ice_aqc_opc_update_sw_rules, NULL); ice_free(hw, s_rule); return status; @@ -4517,13 +4734,14 @@ ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info) * * Updates unicast switch filter rules based on VEB/VEPA mode */ -enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw) +int ice_update_sw_rule_bridge_mode(struct ice_hw *hw) { struct ice_fltr_mgmt_list_entry *fm_entry; - enum ice_status status = ICE_SUCCESS; struct LIST_HEAD_TYPE *rule_head; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ struct ice_switch_info *sw; + int status = 0; + sw = hw->switch_info; rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock; 
@@ -4575,14 +4793,14 @@ enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw) * Add the new VSI to the previously created VSI list set * using the update switch rule command */ -static enum ice_status +static int ice_add_update_vsi_list(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_entry, struct ice_fltr_info *cur_fltr, struct ice_fltr_info *new_fltr) { - enum ice_status status = ICE_SUCCESS; u16 vsi_list_id = 0; + int status = 0; if ((cur_fltr->fltr_act == ICE_FWD_TO_Q || cur_fltr->fltr_act == ICE_FWD_TO_QGRP)) @@ -4603,7 +4821,7 @@ ice_add_update_vsi_list(struct ice_hw *hw, u16 vsi_handle_arr[2]; /* A rule already exists with the new VSI being added */ - if (cur_fltr->fwd_id.hw_vsi_id == new_fltr->fwd_id.hw_vsi_id) + if (cur_fltr->vsi_handle == new_fltr->vsi_handle) return ICE_ERR_ALREADY_EXISTS; vsi_handle_arr[0] = cur_fltr->vsi_handle; @@ -4651,7 +4869,7 @@ ice_add_update_vsi_list(struct ice_hw *hw, /* A rule already exists with the new VSI being added */ if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle)) - return ICE_SUCCESS; + return ICE_ERR_ALREADY_EXISTS; /* Update the previously created VSI list set with * the new VSI ID passed in @@ -4708,7 +4926,7 @@ ice_find_rule_entry(struct LIST_HEAD_TYPE *list_head, * handle element. This can be extended further to search VSI list with more * than 1 vsi_count. Returns pointer to VSI list entry if found. 
*/ -static struct ice_vsi_list_map_info * +struct ice_vsi_list_map_info * ice_find_vsi_list_entry(struct ice_sw_recipe *recp_list, u16 vsi_handle, u16 *vsi_list_id) { @@ -4760,14 +4978,14 @@ ice_find_vsi_list_entry(struct ice_sw_recipe *recp_list, u16 vsi_handle, * * Adds or updates the rule lists for a given recipe */ -static enum ice_status +static int ice_add_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, u8 lport, struct ice_fltr_list_entry *f_entry) { struct ice_fltr_info *new_fltr, *cur_fltr; struct ice_fltr_mgmt_list_entry *m_entry; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ - enum ice_status status = ICE_SUCCESS; + int status = 0; if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle)) return ICE_ERR_PARAM; @@ -4783,7 +5001,7 @@ ice_add_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, new_fltr = &f_entry->fltr_info; if (new_fltr->flag & ICE_FLTR_RX) new_fltr->src = lport; - else if (new_fltr->flag & ICE_FLTR_TX) + else if (new_fltr->flag & (ICE_FLTR_TX | ICE_FLTR_RX_LB)) new_fltr->src = ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle); @@ -4810,7 +5028,7 @@ ice_add_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, * The VSI list should be emptied before this function is called to remove the * VSI list. 
*/ -static enum ice_status +static int ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id, enum ice_sw_lkup_type lkup_type) { @@ -4826,15 +5044,15 @@ ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id, * @hw: pointer to the hardware structure * @vsi_handle: VSI handle of the VSI to remove * @fm_list: filter management entry for which the VSI list management needs to - * be done + * be done */ -static enum ice_status +static int ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle, struct ice_fltr_mgmt_list_entry *fm_list) { enum ice_sw_lkup_type lkup_type; - enum ice_status status = ICE_SUCCESS; u16 vsi_list_id; + int status = 0; if (fm_list->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST || fm_list->vsi_count == 0) @@ -4911,19 +5129,18 @@ ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle, /** * ice_remove_rule_internal - Remove a filter rule of a given type - * * @hw: pointer to the hardware structure * @recp_list: recipe list for which the rule needs to removed * @f_entry: rule entry containing filter information */ -static enum ice_status +static int ice_remove_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, struct ice_fltr_list_entry *f_entry) { struct ice_fltr_mgmt_list_entry *list_elem; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ - enum ice_status status = ICE_SUCCESS; bool remove_rule = false; + int status = 0; u16 vsi_handle; if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle)) @@ -4933,6 +5150,7 @@ ice_remove_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, rule_lock = &recp_list->filt_rule_lock; ice_acquire_lock(rule_lock); + list_elem = ice_find_rule_entry(&recp_list->filt_rules, &f_entry->fltr_info); if (!list_elem) { @@ -4970,10 +5188,10 @@ ice_remove_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, if (remove_rule) { /* Remove the lookup rule */ - struct ice_aqc_sw_rules_elem *s_rule; + struct ice_sw_rule_lkup_rx_tx *s_rule; - s_rule = (struct 
ice_aqc_sw_rules_elem *) - ice_malloc(hw, ICE_SW_RULE_RX_TX_NO_HDR_SIZE); + s_rule = (struct ice_sw_rule_lkup_rx_tx *) + ice_malloc(hw, ice_struct_size(s_rule, hdr_data, 0)); if (!s_rule) { status = ICE_ERR_NO_MEMORY; goto exit; @@ -4983,8 +5201,8 @@ ice_remove_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, ice_aqc_opc_remove_sw_rules); status = ice_aq_sw_rules(hw, s_rule, - ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 1, - ice_aqc_opc_remove_sw_rules, NULL); + ice_struct_size(s_rule, hdr_data, 0), + 1, ice_aqc_opc_remove_sw_rules, NULL); /* Remove a book keeping from the list */ ice_free(hw, s_rule); @@ -5012,14 +5230,14 @@ ice_remove_rule_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, * information for all resource types. Each resource type is an * ice_aqc_get_res_resp_elem structure. */ -enum ice_status +int ice_aq_get_res_alloc(struct ice_hw *hw, u16 *num_entries, struct ice_aqc_get_res_resp_elem *buf, u16 buf_size, struct ice_sq_cd *cd) { struct ice_aqc_get_res_alloc *resp; - enum ice_status status; struct ice_aq_desc desc; + int status; if (!buf) return ICE_ERR_BAD_PTR; @@ -5049,14 +5267,14 @@ ice_aq_get_res_alloc(struct ice_hw *hw, u16 *num_entries, * @desc_id: input - first desc ID to start; output - next desc ID * @cd: pointer to command details structure or NULL */ -enum ice_status +int ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries, struct ice_aqc_res_elem *buf, u16 buf_size, u16 res_type, bool res_shared, u16 *desc_id, struct ice_sq_cd *cd) { struct ice_aqc_get_allocd_res_desc *cmd; struct ice_aq_desc desc; - enum ice_status status; + int status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); @@ -5095,18 +5313,18 @@ ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries, * check for duplicates in this case, removing duplicates from a given * list should be taken care of in the caller of this function. 
*/ -static enum ice_status +static int ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, struct ice_switch_info *sw, u8 lport) { struct ice_sw_recipe *recp_list = &sw->recp_list[ICE_SW_LKUP_MAC]; - struct ice_aqc_sw_rules_elem *s_rule, *r_iter; + struct ice_sw_rule_lkup_rx_tx *s_rule, *r_iter; struct ice_fltr_list_entry *m_list_itr; struct LIST_HEAD_TYPE *rule_head; u16 total_elem_left, s_rule_size; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ - enum ice_status status = ICE_SUCCESS; u16 num_unicast = 0; + int status = 0; u8 elem_sent; s_rule = NULL; @@ -5156,13 +5374,13 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, ice_acquire_lock(rule_lock); /* Exit if no suitable entries were found for adding bulk switch rule */ if (!num_unicast) { - status = ICE_SUCCESS; + status = 0; goto ice_add_mac_exit; } /* Allocate switch rule buffer for the bulk update for unicast */ - s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE; - s_rule = (struct ice_aqc_sw_rules_elem *) + s_rule_size = ice_struct_size(s_rule, hdr_data, DUMMY_ETH_HDR_LEN); + s_rule = (struct ice_sw_rule_lkup_rx_tx *) ice_calloc(hw, num_unicast, s_rule_size); if (!s_rule) { status = ICE_ERR_NO_MEMORY; @@ -5178,7 +5396,7 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, if (IS_UNICAST_ETHER_ADDR(mac_addr)) { ice_fill_sw_rule(hw, &m_list_itr->fltr_info, r_iter, ice_aqc_opc_add_sw_rules); - r_iter = (struct ice_aqc_sw_rules_elem *) + r_iter = (struct ice_sw_rule_lkup_rx_tx *) ((u8 *)r_iter + s_rule_size); } } @@ -5188,7 +5406,7 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, /* Call AQ switch rule in AQ_MAX chunk */ for (total_elem_left = num_unicast; total_elem_left > 0; total_elem_left -= elem_sent) { - struct ice_aqc_sw_rules_elem *entry = r_iter; + struct ice_sw_rule_lkup_rx_tx *entry = r_iter; elem_sent = MIN_T(u8, total_elem_left, (ICE_AQ_MAX_BUF_LEN / s_rule_size)); @@ -5197,7 +5415,7 @@ ice_add_mac_rule(struct 
ice_hw *hw, struct LIST_HEAD_TYPE *m_list, NULL); if (status) goto ice_add_mac_exit; - r_iter = (struct ice_aqc_sw_rules_elem *) + r_iter = (struct ice_sw_rule_lkup_rx_tx *) ((u8 *)r_iter + (elem_sent * s_rule_size)); } @@ -5211,7 +5429,7 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, if (IS_UNICAST_ETHER_ADDR(mac_addr)) { f_info->fltr_rule_id = - LE16_TO_CPU(r_iter->pdata.lkup_tx_rx.index); + LE16_TO_CPU(r_iter->index); f_info->fltr_act = ICE_FWD_TO_VSI; /* Create an entry to track this MAC address */ fm_entry = (struct ice_fltr_mgmt_list_entry *) @@ -5227,7 +5445,7 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, */ LIST_ADD(&fm_entry->list_entry, rule_head); - r_iter = (struct ice_aqc_sw_rules_elem *) + r_iter = (struct ice_sw_rule_lkup_rx_tx *) ((u8 *)r_iter + s_rule_size); } } @@ -5246,7 +5464,7 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, * * Function add MAC rule for logical port from HW struct */ -enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list) +int ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list) { if (!m_list || !hw) return ICE_ERR_PARAM; @@ -5261,7 +5479,7 @@ enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list) * @recp_list: recipe list for which rule has to be added * @f_entry: filter entry containing one VLAN information */ -static enum ice_status +static int ice_add_vlan_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, struct ice_fltr_list_entry *f_entry) { @@ -5270,7 +5488,7 @@ ice_add_vlan_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list, enum ice_sw_lkup_type lkup_type; u16 vsi_list_id = 0, vsi_handle; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ - enum ice_status status = ICE_SUCCESS; + int status = 0; if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle)) return ICE_ERR_PARAM; @@ -5415,7 +5633,7 @@ ice_add_vlan_internal(struct ice_hw *hw, struct ice_sw_recipe 
*recp_list, * @v_list: list of VLAN entries and forwarding information * @sw: pointer to switch info struct for which function add rule */ -static enum ice_status +static int ice_add_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list, struct ice_switch_info *sw) { @@ -5433,7 +5651,7 @@ ice_add_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list, if (v_list_itr->status) return v_list_itr->status; } - return ICE_SUCCESS; + return 0; } /** @@ -5443,7 +5661,7 @@ ice_add_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list, * * Function add VLAN rule for logical port from HW struct */ -enum ice_status ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list) +int ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list) { if (!v_list || !hw) return ICE_ERR_PARAM; @@ -5463,7 +5681,7 @@ enum ice_status ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list) * sure to add a VLAN only filter on the same VSI. Packets belonging to that * VLAN won't be received on that VSI otherwise. */ -static enum ice_status +static int ice_add_mac_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list, struct ice_switch_info *sw, u8 lport) { @@ -5488,7 +5706,7 @@ ice_add_mac_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list, if (mv_list_itr->status) return mv_list_itr->status; } - return ICE_SUCCESS; + return 0; } /** @@ -5498,7 +5716,7 @@ ice_add_mac_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list, * * Function add MAC VLAN rule for logical port from HW struct */ -enum ice_status +int ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list) { if (!mv_list || !hw) @@ -5519,7 +5737,7 @@ ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list) * the filter list with the necessary fields (including flags to * indicate Tx or Rx rules). 
*/ -static enum ice_status +static int ice_add_eth_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list, struct ice_switch_info *sw, u8 lport) { @@ -5543,7 +5761,7 @@ ice_add_eth_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list, if (em_list_itr->status) return em_list_itr->status; } - return ICE_SUCCESS; + return 0; } /** @@ -5553,7 +5771,7 @@ ice_add_eth_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list, * * Function add ethertype rule for logical port from HW struct */ -enum ice_status +int ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list) { if (!em_list || !hw) @@ -5569,7 +5787,7 @@ ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list) * @em_list: list of ethertype or ethertype MAC entries * @sw: pointer to switch info struct for which function add rule */ -static enum ice_status +static int ice_remove_eth_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list, struct ice_switch_info *sw) { @@ -5592,7 +5810,7 @@ ice_remove_eth_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list, if (em_list_itr->status) return em_list_itr->status; } - return ICE_SUCCESS; + return 0; } /** @@ -5601,7 +5819,7 @@ ice_remove_eth_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list, * @em_list: list of ethertype and forwarding information * */ -enum ice_status +int ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list) { if (!em_list || !hw) @@ -5618,7 +5836,7 @@ ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list) * Get resource type for a large action depending on the number * of single actions that it contains. 
*/ -static enum ice_status +static int ice_get_lg_act_aqc_res_type(u16 *res_type, int num_acts) { if (!res_type) @@ -5645,7 +5863,7 @@ ice_get_lg_act_aqc_res_type(u16 *res_type, int num_acts) return ICE_ERR_PARAM; } - return ICE_SUCCESS; + return 0; } /** @@ -5654,12 +5872,12 @@ ice_get_lg_act_aqc_res_type(u16 *res_type, int num_acts) * @l_id: large action ID to fill it in * @num_acts: number of actions to hold with a large action entry */ -static enum ice_status +static int ice_alloc_res_lg_act(struct ice_hw *hw, u16 *l_id, u16 num_acts) { struct ice_aqc_alloc_free_res_elem *sw_buf; - enum ice_status status; u16 buf_len, res_type; + int status; if (!l_id) return ICE_ERR_BAD_PTR; @@ -5762,17 +5980,18 @@ void ice_rem_all_sw_rules_info(struct ice_hw *hw) * add filter rule to set/unset given VSI as default VSI for the switch * (represented by swid) */ -enum ice_status +int ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set, u8 direction) { struct ice_fltr_list_entry f_list_entry; - struct ice_sw_recipe *recp_list; + struct ice_sw_recipe *recp_list = NULL; struct ice_fltr_info f_info; struct ice_hw *hw = pi->hw; - enum ice_status status; u8 lport = pi->lport; u16 hw_vsi_id; + int status; + recp_list = &pi->hw->switch_info->recp_list[ICE_SW_LKUP_DFLT]; if (!ice_is_vsi_valid(hw, vsi_handle)) @@ -5888,7 +6107,7 @@ ice_find_ucast_rule_entry(struct LIST_HEAD_TYPE *list_head, * the entries passed into m_list were added previously. It will not attempt to * do a partial remove of entries that were found. 
*/ -static enum ice_status +static int ice_remove_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, struct ice_sw_recipe *recp_list) { @@ -5932,7 +6151,7 @@ ice_remove_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, if (list_itr->status) return list_itr->status; } - return ICE_SUCCESS; + return 0; } /** @@ -5941,7 +6160,7 @@ ice_remove_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, * @m_list: list of MAC addresses and forwarding information * */ -enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list) +int ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list) { struct ice_sw_recipe *recp_list; @@ -5955,7 +6174,7 @@ enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list) * @v_list: list of VLAN entries and forwarding information * @recp_list: list from which function remove VLAN */ -static enum ice_status +static int ice_remove_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list, struct ice_sw_recipe *recp_list) { @@ -5972,7 +6191,7 @@ ice_remove_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list, if (v_list_itr->status) return v_list_itr->status; } - return ICE_SUCCESS; + return 0; } /** @@ -5981,7 +6200,7 @@ ice_remove_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list, * @v_list: list of VLAN and forwarding information * */ -enum ice_status +int ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list) { struct ice_sw_recipe *recp_list; @@ -5999,7 +6218,7 @@ ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list) * @v_list: list of MAC VLAN entries and forwarding information * @recp_list: list from which function remove MAC VLAN */ -static enum ice_status +static int ice_remove_mac_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list, struct ice_sw_recipe *recp_list) { @@ -6018,7 +6237,7 @@ ice_remove_mac_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list, if (v_list_itr->status) return v_list_itr->status; } - return 
ICE_SUCCESS; + return 0; } /** @@ -6026,7 +6245,7 @@ ice_remove_mac_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list, * @hw: pointer to the hardware structure * @mv_list: list of MAC VLAN and forwarding information */ -enum ice_status +int ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list) { struct ice_sw_recipe *recp_list; @@ -6067,7 +6286,7 @@ ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle) * fltr_info.fwd_id fields. These are set such that later logic can * extract which VSI to remove the fltr from, and pass on that information. */ -static enum ice_status +static int ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle, struct LIST_HEAD_TYPE *vsi_list_head, struct ice_fltr_info *fi) @@ -6094,7 +6313,7 @@ ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle, LIST_ADD(&tmp->list_entry, vsi_list_head); - return ICE_SUCCESS; + return 0; } /** @@ -6110,13 +6329,13 @@ ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle, * Note that this means all entries in vsi_list_head must be explicitly * deallocated by the caller when done with list. */ -static enum ice_status +static int ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle, struct LIST_HEAD_TYPE *lkup_list_head, struct LIST_HEAD_TYPE *vsi_list_head) { struct ice_fltr_mgmt_list_entry *fm_entry; - enum ice_status status = ICE_SUCCESS; + int status = 0; /* check to make sure VSI ID is valid and within boundary */ if (!ice_is_vsi_valid(hw, vsi_handle)) @@ -6139,34 +6358,45 @@ ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle, /** * ice_determine_promisc_mask * @fi: filter info to parse + * @promisc_mask: pointer to mask to be filled in * * Helper function to determine which ICE_PROMISC_ mask corresponds * to given filter into. 
*/ -static u8 ice_determine_promisc_mask(struct ice_fltr_info *fi) +static void ice_determine_promisc_mask(struct ice_fltr_info *fi, + ice_bitmap_t *promisc_mask) { u16 vid = fi->l_data.mac_vlan.vlan_id; u8 *macaddr = fi->l_data.mac.mac_addr; + bool is_rx_lb_fltr = false; bool is_tx_fltr = false; - u8 promisc_mask = 0; + + ice_zero_bitmap(promisc_mask, ICE_PROMISC_MAX); if (fi->flag == ICE_FLTR_TX) is_tx_fltr = true; + if (fi->flag == ICE_FLTR_RX_LB) + is_rx_lb_fltr = true; + + if (IS_BROADCAST_ETHER_ADDR(macaddr)) { + ice_set_bit(is_tx_fltr ? ICE_PROMISC_BCAST_TX + : ICE_PROMISC_BCAST_RX, promisc_mask); + } else if (IS_MULTICAST_ETHER_ADDR(macaddr)) { + ice_set_bit(is_tx_fltr ? ICE_PROMISC_MCAST_TX + : ICE_PROMISC_MCAST_RX, promisc_mask); + } else if (IS_UNICAST_ETHER_ADDR(macaddr)) { + if (is_tx_fltr) + ice_set_bit(ICE_PROMISC_UCAST_TX, promisc_mask); + else if (is_rx_lb_fltr) + ice_set_bit(ICE_PROMISC_UCAST_RX_LB, promisc_mask); + else + ice_set_bit(ICE_PROMISC_UCAST_RX, promisc_mask); + } - if (IS_BROADCAST_ETHER_ADDR(macaddr)) - promisc_mask |= is_tx_fltr ? - ICE_PROMISC_BCAST_TX : ICE_PROMISC_BCAST_RX; - else if (IS_MULTICAST_ETHER_ADDR(macaddr)) - promisc_mask |= is_tx_fltr ? - ICE_PROMISC_MCAST_TX : ICE_PROMISC_MCAST_RX; - else if (IS_UNICAST_ETHER_ADDR(macaddr)) - promisc_mask |= is_tx_fltr ? - ICE_PROMISC_UCAST_TX : ICE_PROMISC_UCAST_RX; - if (vid) - promisc_mask |= is_tx_fltr ? - ICE_PROMISC_VLAN_TX : ICE_PROMISC_VLAN_RX; - - return promisc_mask; + if (vid) { + ice_set_bit(is_tx_fltr ? 
ICE_PROMISC_VLAN_TX + : ICE_PROMISC_VLAN_RX, promisc_mask); + } } /** @@ -6178,11 +6408,12 @@ static u8 ice_determine_promisc_mask(struct ice_fltr_info *fi) * @sw: pointer to switch info struct for which function add rule * @lkup: switch rule filter lookup type */ -static enum ice_status -_ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, - u16 *vid, struct ice_switch_info *sw, - enum ice_sw_lkup_type lkup) +static int +_ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, u16 *vid, + struct ice_switch_info *sw, enum ice_sw_lkup_type lkup) { + ice_declare_bitmap(fltr_promisc_mask, ICE_PROMISC_MAX); struct ice_fltr_mgmt_list_entry *itr; struct LIST_HEAD_TYPE *rule_head; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ @@ -6192,10 +6423,11 @@ _ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, return ICE_ERR_PARAM; *vid = 0; - *promisc_mask = 0; rule_head = &sw->recp_list[lkup].filt_rules; rule_lock = &sw->recp_list[lkup].filt_rule_lock; + ice_zero_bitmap(promisc_mask, ICE_PROMISC_MAX); + ice_acquire_lock(rule_lock); LIST_FOR_EACH_ENTRY(itr, rule_head, ice_fltr_mgmt_list_entry, list_entry) { @@ -6205,11 +6437,14 @@ _ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, if (!ice_vsi_uses_fltr(itr, vsi_handle)) continue; - *promisc_mask |= ice_determine_promisc_mask(&itr->fltr_info); + ice_determine_promisc_mask(&itr->fltr_info, fltr_promisc_mask); + ice_or_bitmap(promisc_mask, promisc_mask, fltr_promisc_mask, + ICE_PROMISC_MAX); + } ice_release_lock(rule_lock); - return ICE_SUCCESS; + return 0; } /** @@ -6219,10 +6454,13 @@ _ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, * @promisc_mask: pointer to mask to be filled in * @vid: VLAN ID of promisc VLAN VSI */ -enum ice_status -ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, - u16 *vid) +int +ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + 
ice_bitmap_t *promisc_mask, u16 *vid) { + if (!vid || !promisc_mask || !hw) + return ICE_ERR_PARAM; + return _ice_get_vsi_promisc(hw, vsi_handle, promisc_mask, vid, hw->switch_info, ICE_SW_LKUP_PROMISC); } @@ -6234,10 +6472,13 @@ ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, * @promisc_mask: pointer to mask to be filled in * @vid: VLAN ID of promisc VLAN VSI */ -enum ice_status -ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, - u16 *vid) +int +ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, u16 *vid) { + if (!hw || !promisc_mask || !vid) + return ICE_ERR_PARAM; + return _ice_get_vsi_promisc(hw, vsi_handle, promisc_mask, vid, hw->switch_info, ICE_SW_LKUP_PROMISC_VLAN); @@ -6249,7 +6490,7 @@ ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, * @recp_id: recipe ID for which the rule needs to removed * @v_list: list of promisc entries */ -static enum ice_status +static int ice_remove_promisc(struct ice_hw *hw, u8 recp_id, struct LIST_HEAD_TYPE *v_list) { @@ -6264,33 +6505,37 @@ ice_remove_promisc(struct ice_hw *hw, u8 recp_id, if (v_list_itr->status) return v_list_itr->status; } - return ICE_SUCCESS; + return 0; } /** * _ice_clear_vsi_promisc - clear specified promiscuous mode(s) * @hw: pointer to the hardware structure * @vsi_handle: VSI handle to clear mode - * @promisc_mask: mask of promiscuous config bits to clear + * @promisc_mask: pointer to mask of promiscuous config bits to clear * @vid: VLAN ID to clear VLAN promiscuous * @sw: pointer to switch info struct for which function add rule */ -static enum ice_status -_ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, - u16 vid, struct ice_switch_info *sw) +static int +_ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, u16 vid, + struct ice_switch_info *sw) { + ice_declare_bitmap(compl_promisc_mask, ICE_PROMISC_MAX); + 
ice_declare_bitmap(fltr_promisc_mask, ICE_PROMISC_MAX); struct ice_fltr_list_entry *fm_entry, *tmp; struct LIST_HEAD_TYPE remove_list_head; struct ice_fltr_mgmt_list_entry *itr; struct LIST_HEAD_TYPE *rule_head; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ - enum ice_status status = ICE_SUCCESS; + int status = 0; u8 recipe_id; if (!ice_is_vsi_valid(hw, vsi_handle)) return ICE_ERR_PARAM; - if (promisc_mask & (ICE_PROMISC_VLAN_RX | ICE_PROMISC_VLAN_TX)) + if (ice_is_bit_set(promisc_mask, ICE_PROMISC_VLAN_RX) && + ice_is_bit_set(promisc_mask, ICE_PROMISC_VLAN_TX)) recipe_id = ICE_SW_LKUP_PROMISC_VLAN; else recipe_id = ICE_SW_LKUP_PROMISC; @@ -6304,7 +6549,7 @@ _ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, LIST_FOR_EACH_ENTRY(itr, rule_head, ice_fltr_mgmt_list_entry, list_entry) { struct ice_fltr_info *fltr_info; - u8 fltr_promisc_mask = 0; + ice_zero_bitmap(compl_promisc_mask, ICE_PROMISC_MAX); if (!ice_vsi_uses_fltr(itr, vsi_handle)) continue; @@ -6314,10 +6559,12 @@ _ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, vid != fltr_info->l_data.mac_vlan.vlan_id) continue; - fltr_promisc_mask |= ice_determine_promisc_mask(fltr_info); + ice_determine_promisc_mask(fltr_info, fltr_promisc_mask); + ice_andnot_bitmap(compl_promisc_mask, fltr_promisc_mask, + promisc_mask, ICE_PROMISC_MAX); /* Skip if filter is not completely specified by given mask */ - if (fltr_promisc_mask & ~promisc_mask) + if (ice_is_any_bit_set(compl_promisc_mask, ICE_PROMISC_MAX)) continue; status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle, @@ -6346,13 +6593,16 @@ _ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, * ice_clear_vsi_promisc - clear specified promiscuous mode(s) for given VSI * @hw: pointer to the hardware structure * @vsi_handle: VSI handle to clear mode - * @promisc_mask: mask of promiscuous config bits to clear + * @promisc_mask: pointer to mask of promiscuous config bits to clear * 
@vid: VLAN ID to clear VLAN promiscuous */ -enum ice_status +int ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, - u8 promisc_mask, u16 vid) + ice_bitmap_t *promisc_mask, u16 vid) { + if (!hw || !promisc_mask) + return ICE_ERR_PARAM; + return _ice_clear_vsi_promisc(hw, vsi_handle, promisc_mask, vid, hw->switch_info); } @@ -6361,20 +6611,22 @@ ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, * _ice_set_vsi_promisc - set given VSI to given promiscuous mode(s) * @hw: pointer to the hardware structure * @vsi_handle: VSI handle to configure - * @promisc_mask: mask of promiscuous config bits + * @promisc_mask: pointer to mask of promiscuous config bits * @vid: VLAN ID to set VLAN promiscuous * @lport: logical port number to configure promisc mode * @sw: pointer to switch info struct for which function add rule */ -static enum ice_status -_ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, - u16 vid, u8 lport, struct ice_switch_info *sw) +static int +_ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, u16 vid, u8 lport, + struct ice_switch_info *sw) { enum { UCAST_FLTR = 1, MCAST_FLTR, BCAST_FLTR }; + ice_declare_bitmap(p_mask, ICE_PROMISC_MAX); struct ice_fltr_list_entry f_list_entry; + bool is_tx_fltr, is_rx_lb_fltr; struct ice_fltr_info new_fltr; - enum ice_status status = ICE_SUCCESS; - bool is_tx_fltr; + int status = 0; u16 hw_vsi_id; int pkt_type; u8 recipe_id; @@ -6387,7 +6639,11 @@ _ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, ice_memset(&new_fltr, 0, sizeof(new_fltr), ICE_NONDMA_MEM); - if (promisc_mask & (ICE_PROMISC_VLAN_RX | ICE_PROMISC_VLAN_TX)) { + /* Do not modify original bitmap */ + ice_cp_bitmap(p_mask, promisc_mask, ICE_PROMISC_MAX); + + if (ice_is_bit_set(p_mask, ICE_PROMISC_VLAN_RX) && + ice_is_bit_set(p_mask, ICE_PROMISC_VLAN_TX)) { new_fltr.lkup_type = ICE_SW_LKUP_PROMISC_VLAN; new_fltr.l_data.mac_vlan.vlan_id = vid; recipe_id = 
ICE_SW_LKUP_PROMISC_VLAN; @@ -6401,44 +6657,48 @@ _ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, * individual type, and clear it out in the input mask as it * is found. */ - while (promisc_mask) { + while (ice_is_any_bit_set(p_mask, ICE_PROMISC_MAX)) { struct ice_sw_recipe *recp_list; u8 *mac_addr; pkt_type = 0; is_tx_fltr = false; + is_rx_lb_fltr = false; - if (promisc_mask & ICE_PROMISC_UCAST_RX) { - promisc_mask &= ~ICE_PROMISC_UCAST_RX; + if (ice_test_and_clear_bit(ICE_PROMISC_UCAST_RX, + p_mask)) { pkt_type = UCAST_FLTR; - } else if (promisc_mask & ICE_PROMISC_UCAST_TX) { - promisc_mask &= ~ICE_PROMISC_UCAST_TX; + } else if (ice_test_and_clear_bit(ICE_PROMISC_UCAST_TX, + p_mask)) { pkt_type = UCAST_FLTR; is_tx_fltr = true; - } else if (promisc_mask & ICE_PROMISC_MCAST_RX) { - promisc_mask &= ~ICE_PROMISC_MCAST_RX; + } else if (ice_test_and_clear_bit(ICE_PROMISC_MCAST_RX, + p_mask)) { pkt_type = MCAST_FLTR; - } else if (promisc_mask & ICE_PROMISC_MCAST_TX) { - promisc_mask &= ~ICE_PROMISC_MCAST_TX; + } else if (ice_test_and_clear_bit(ICE_PROMISC_MCAST_TX, + p_mask)) { pkt_type = MCAST_FLTR; is_tx_fltr = true; - } else if (promisc_mask & ICE_PROMISC_BCAST_RX) { - promisc_mask &= ~ICE_PROMISC_BCAST_RX; + } else if (ice_test_and_clear_bit(ICE_PROMISC_BCAST_RX, + p_mask)) { pkt_type = BCAST_FLTR; - } else if (promisc_mask & ICE_PROMISC_BCAST_TX) { - promisc_mask &= ~ICE_PROMISC_BCAST_TX; + } else if (ice_test_and_clear_bit(ICE_PROMISC_BCAST_TX, + p_mask)) { pkt_type = BCAST_FLTR; is_tx_fltr = true; + } else if (ice_test_and_clear_bit(ICE_PROMISC_UCAST_RX_LB, + p_mask)) { + pkt_type = UCAST_FLTR; + is_rx_lb_fltr = true; } /* Check for VLAN promiscuous flag */ - if (promisc_mask & ICE_PROMISC_VLAN_RX) { - promisc_mask &= ~ICE_PROMISC_VLAN_RX; - } else if (promisc_mask & ICE_PROMISC_VLAN_TX) { - promisc_mask &= ~ICE_PROMISC_VLAN_TX; + if (ice_is_bit_set(p_mask, ICE_PROMISC_VLAN_RX)) { + ice_clear_bit(ICE_PROMISC_VLAN_RX, p_mask); + } else if 
(ice_test_and_clear_bit(ICE_PROMISC_VLAN_TX, + p_mask)) { is_tx_fltr = true; } - /* Set filter DA based on packet type */ mac_addr = new_fltr.l_data.mac.mac_addr; if (pkt_type == BCAST_FLTR) { @@ -6457,6 +6717,9 @@ _ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, if (is_tx_fltr) { new_fltr.flag |= ICE_FLTR_TX; new_fltr.src = hw_vsi_id; + } else if (is_rx_lb_fltr) { + new_fltr.flag |= ICE_FLTR_RX_LB; + new_fltr.src = hw_vsi_id; } else { new_fltr.flag |= ICE_FLTR_RX; new_fltr.src = lport; @@ -6470,7 +6733,7 @@ _ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, status = ice_add_rule_internal(hw, recp_list, lport, &f_list_entry); - if (status != ICE_SUCCESS) + if (status) goto set_promisc_exit; } @@ -6482,13 +6745,16 @@ _ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, * ice_set_vsi_promisc - set given VSI to given promiscuous mode(s) * @hw: pointer to the hardware structure * @vsi_handle: VSI handle to configure - * @promisc_mask: mask of promiscuous config bits + * @promisc_mask: pointer to mask of promiscuous config bits * @vid: VLAN ID to set VLAN promiscuous */ -enum ice_status -ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, - u16 vid) +int +ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, u16 vid) { + if (!hw || !promisc_mask) + return ICE_ERR_PARAM; + return _ice_set_vsi_promisc(hw, vsi_handle, promisc_mask, vid, hw->port_info->lport, hw->switch_info); @@ -6498,23 +6764,23 @@ ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, * _ice_set_vlan_vsi_promisc * @hw: pointer to the hardware structure * @vsi_handle: VSI handle to configure - * @promisc_mask: mask of promiscuous config bits + * @promisc_mask: pointer to mask of promiscuous config bits * @rm_vlan_promisc: Clear VLANs VSI promisc mode * @lport: logical port number to configure promisc mode * @sw: pointer to switch info struct for which function add rule * * 
Configure VSI with all associated VLANs to given promiscuous mode(s) */ -static enum ice_status -_ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, - bool rm_vlan_promisc, u8 lport, - struct ice_switch_info *sw) +static int +_ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, bool rm_vlan_promisc, + u8 lport, struct ice_switch_info *sw) { struct ice_fltr_list_entry *list_itr, *tmp; struct LIST_HEAD_TYPE vsi_list_head; struct LIST_HEAD_TYPE *vlan_head; struct ice_lock *vlan_lock; /* Lock to protect filter rule list */ - enum ice_status status; + int status; u16 vlan_id; INIT_LIST_HEAD(&vsi_list_head); @@ -6567,10 +6833,13 @@ _ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, * * Configure VSI with all associated VLANs to given promiscuous mode(s) */ -enum ice_status -ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, - bool rm_vlan_promisc) +int +ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, bool rm_vlan_promisc) { + if (!hw || !promisc_mask) + return ICE_ERR_PARAM; + return _ice_set_vlan_vsi_promisc(hw, vsi_handle, promisc_mask, rm_vlan_promisc, hw->port_info->lport, hw->switch_info); @@ -6593,7 +6862,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle, struct LIST_HEAD_TYPE *rule_head; struct ice_fltr_list_entry *tmp; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ - enum ice_status status; + int status; INIT_LIST_HEAD(&remove_list_head); rule_lock = &recp_list[lkup].filt_rule_lock; @@ -6687,13 +6956,13 @@ void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle) * @num_items: number of entries requested for FD resource type * @counter_id: counter index returned by AQ call */ -enum ice_status +int ice_alloc_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items, u16 *counter_id) { struct ice_aqc_alloc_free_res_elem *buf; - enum ice_status status; u16 buf_len; 
+ int status; /* Allocate resource */ buf_len = ice_struct_size(buf, elem, 1); @@ -6725,13 +6994,13 @@ ice_alloc_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items, * @num_items: number of entries to be freed for FD resource type * @counter_id: counter ID resource which needs to be freed */ -enum ice_status +int ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items, u16 counter_id) { struct ice_aqc_alloc_free_res_elem *buf; - enum ice_status status; u16 buf_len; + int status; /* Free resource */ buf_len = ice_struct_size(buf, elem, 1); @@ -6758,7 +7027,7 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items, * @hw: pointer to the hardware structure * @counter_id: returns counter index */ -enum ice_status ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id) +int ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id) { return ice_alloc_res_cntr(hw, ICE_AQC_RES_TYPE_VLAN_COUNTER, ICE_AQC_RES_TYPE_FLAG_DEDICATED, 1, @@ -6770,7 +7039,7 @@ enum ice_status ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id) * @hw: pointer to the hardware structure * @counter_id: counter index to be freed */ -enum ice_status ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id) +int ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id) { return ice_free_res_cntr(hw, ICE_AQC_RES_TYPE_VLAN_COUNTER, ICE_AQC_RES_TYPE_FLAG_DEDICATED, 1, @@ -6783,7 +7052,7 @@ enum ice_status ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id) * @f_info: filter info structure containing the MAC filter information * @sw_marker: sw marker to tag the Rx descriptor with */ -enum ice_status +int ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info, u16 sw_marker) { @@ -6792,9 +7061,9 @@ ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info, struct ice_sw_recipe *recp_list; struct LIST_HEAD_TYPE l_head; struct ice_lock *rule_lock; /* Lock to protect filter rule 
list */ - enum ice_status ret; bool entry_exists; u16 lg_act_id; + int ret; if (f_info->fltr_act != ICE_FWD_TO_VSI) return ICE_ERR_PARAM; @@ -6879,7 +7148,7 @@ ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info, * @f_info: pointer to filter info structure containing the MAC filter * information */ -enum ice_status +int ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info) { struct ice_fltr_mgmt_list_entry *m_entry; @@ -6887,10 +7156,10 @@ ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info) struct ice_sw_recipe *recp_list; struct LIST_HEAD_TYPE l_head; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ - enum ice_status ret; bool entry_exist; u16 counter_id; u16 lg_act_id; + int ret; if (f_info->fltr_act != ICE_FWD_TO_VSI) return ICE_ERR_PARAM; @@ -7050,15 +7319,19 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = { { ICE_FLG_DIR, ICE_META_DATA_ID_HW}, }; -/** +/* * ice_find_recp - find a recipe * @hw: pointer to the hardware structure * @lkup_exts: extension sequence to match + * @tun_type: tunnel type of switch filter + * @priority: priority of switch filter + * @is_add: flag of adding recipe * * Returns index of matching recipe, or ICE_MAX_NUM_RECIPES if not found. */ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts, - enum ice_sw_tunnel_type tun_type, u32 priority) + enum ice_sw_tunnel_type tun_type, u32 priority, + bool *is_add) { bool refresh_required = true; struct ice_sw_recipe *recp; @@ -7072,11 +7345,18 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts, * entry update it in our SW bookkeeping and continue with the * matching. 
*/ - if (!recp[i].recp_created) + if (hw->subscribable_recipes_supported) { if (ice_get_recp_frm_fw(hw, hw->switch_info->recp_list, i, - &refresh_required)) + &refresh_required, is_add)) continue; + } else { + if (!recp[i].recp_created) + if (ice_get_recp_frm_fw(hw, + hw->switch_info->recp_list, i, + &refresh_required, is_add)) + continue; + } /* Skip inverse action recipes */ if (recp[i].root_buf && recp[i].root_buf->content.act_ctrl & @@ -7120,9 +7400,16 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts, /* If for "i"th recipe the found was never set to false * then it means we found our match */ - if (tun_type == recp[i].tun_type && found && - priority == recp[i].priority) - return i; /* Return the recipe ID */ + if (found && priority == recp[i].priority) { + if (tun_type == recp[i].tun_type || + (recp[i].tun_type == ICE_SW_TUN_UDP && + (tun_type == ICE_SW_TUN_VXLAN_GPE || + tun_type == ICE_SW_TUN_VXLAN || + tun_type == ICE_SW_TUN_GENEVE || + tun_type == ICE_SW_TUN_GENEVE_VLAN || + tun_type == ICE_SW_TUN_VXLAN_VLAN))) + return i; /* Return the recipe ID */ + } } } return ICE_MAX_NUM_RECIPES; @@ -7184,7 +7471,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule, for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++) if (((u16 *)&rule->m_u)[j] && - (size_t)rule->type < ARRAY_SIZE(ice_prot_ext)) { + rule->type < ARRAY_SIZE(ice_prot_ext)) { /* No more space to accommodate */ if (word >= ICE_MAX_CHAIN_WORDS) return 0; @@ -7214,7 +7501,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule, * and start grouping them in 4-word groups. Each group makes up one * recipe. 
*/ -static enum ice_status +static int ice_create_first_fit_recp_def(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts, struct LIST_HEAD_TYPE *rg_list, @@ -7256,15 +7543,17 @@ ice_create_first_fit_recp_def(struct ice_hw *hw, (*recp_cnt)++; } - grp->pairs[grp->n_val_pairs].prot_id = - lkup_exts->fv_words[j].prot_id; - grp->pairs[grp->n_val_pairs].off = - lkup_exts->fv_words[j].off; - grp->mask[grp->n_val_pairs] = lkup_exts->field_mask[j]; - grp->n_val_pairs++; + if (grp->n_val_pairs < ICE_NUM_WORDS_RECIPE) { + grp->pairs[grp->n_val_pairs].prot_id = + lkup_exts->fv_words[j].prot_id; + grp->pairs[grp->n_val_pairs].off = + lkup_exts->fv_words[j].off; + grp->mask[grp->n_val_pairs] = lkup_exts->field_mask[j]; + grp->n_val_pairs++; + } } - return ICE_SUCCESS; + return 0; } /** @@ -7276,7 +7565,7 @@ ice_create_first_fit_recp_def(struct ice_hw *hw, * Helper function to fill in the field vector indices for protocol-offset * pairs. These indexes are then ultimately programmed into a recipe. */ -static enum ice_status +static int ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list, struct LIST_HEAD_TYPE *rg_list) { @@ -7285,7 +7574,7 @@ ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list, struct ice_fv_word *fv_ext; if (LIST_EMPTY(fv_list)) - return ICE_SUCCESS; + return 0; fv = LIST_FIRST_ENTRY(fv_list, struct ice_sw_fv_list_entry, list_entry); fv_ext = fv->fv_ptr->ew; @@ -7321,7 +7610,7 @@ ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list, } } - return ICE_SUCCESS; + return 0; } /** @@ -7390,7 +7679,7 @@ ice_find_free_recp_res_idx(struct ice_hw *hw, const ice_bitmap_t *profiles, ice_xor_bitmap(free_idx, used_idx, possible_idx, ICE_MAX_FV_WORDS); /* return number of free indexes */ - return (u16)ice_bitmap_hweight(free_idx, ICE_MAX_FV_WORDS); + return ice_bitmap_hweight(free_idx, ICE_MAX_FV_WORDS); } static void ice_set_recipe_index(unsigned long idx, u8 *bitmap) @@ -7410,7 +7699,7 @@ static void 
ice_set_recipe_index(unsigned long idx, u8 *bitmap) * @rm: recipe management list entry * @profiles: bitmap of profiles that will be associated. */ -static enum ice_status +static int ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm, ice_bitmap_t *profiles) { @@ -7418,11 +7707,11 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm, struct ice_aqc_recipe_data_elem *tmp; struct ice_aqc_recipe_data_elem *buf; struct ice_recp_grp_entry *entry; - enum ice_status status; u16 free_res_idx; u16 recipe_count; u8 chain_idx; u8 recps = 0; + int status; /* When more than one recipe are required, another recipe is needed to * chain them together. Matching a tunnel metadata ID takes up one of @@ -7505,7 +7794,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm, } for (i = 0; i < entry->r_group.n_val_pairs; i++) { - buf[recps].content.lkup_indx[i + 1] = entry->fv_idx[i]; + buf[recps].content.lkup_indx[i + 1] = + (u8)entry->fv_idx[i]; buf[recps].content.mask[i + 1] = CPU_TO_LE16(entry->fv_mask[i]); } @@ -7526,8 +7816,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm, ((chain_idx << ICE_AQ_RECIPE_RESULT_DATA_S) & ICE_AQ_RECIPE_RESULT_DATA_M); ice_clear_bit(chain_idx, result_idx_bm); - chain_idx = ice_find_first_bit(result_idx_bm, - ICE_MAX_FV_WORDS); + chain_idx = (u8)ice_find_first_bit(result_idx_bm, + ICE_MAX_FV_WORDS); } /* fill recipe dependencies */ @@ -7709,12 +7999,12 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm, * @rm: recipe management list entry * @lkup_exts: lookup elements */ -static enum ice_status +static int ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm, struct ice_prot_lkup_ext *lkup_exts) { - enum ice_status status; u8 recp_count = 0; + int status; rm->n_grp_count = 0; @@ -7791,13 +8081,24 @@ ice_tun_type_match_word(struct ice_adv_rule_info *rinfo, u16 *off, u16 *mask) * @lkup_exts: lookup word structure * @dvm_ena: is double VLAN mode enabled */ -static enum 
ice_status +static int ice_add_special_words(struct ice_adv_rule_info *rinfo, struct ice_prot_lkup_ext *lkup_exts, bool dvm_ena) { u16 mask; u16 off; + /* Always add direction flag */ + if (lkup_exts->n_val_words < ICE_MAX_CHAIN_WORDS) { + u8 word = lkup_exts->n_val_words++; + + lkup_exts->fv_words[word].prot_id = ICE_META_DATA_ID_HW; + lkup_exts->fv_words[word].off = ICE_TUN_FLAG_MDID_OFF(0); + lkup_exts->field_mask[word] = ICE_FROM_NETWORK_FLAG_MASK; + } else { + return ICE_ERR_MAX_LIMIT; + } + /* If this is a tunneled packet, then add recipe index to match the * tunnel bit in the packet metadata flags. If this is a tun_and_non_tun * packet, then add recipe index to match the direction bit in the flag. @@ -7827,7 +8128,7 @@ ice_add_special_words(struct ice_adv_rule_info *rinfo, } } - return ICE_SUCCESS; + return 0; } /* ice_get_compat_fv_bitmap - Get compatible field vector bitmap for rule @@ -7860,9 +8161,22 @@ ice_get_compat_fv_bitmap(struct ice_hw *hw, struct ice_adv_rule_info *rinfo, case ICE_SW_TUN_GTP: prof_type = ICE_PROF_TUN_UDP; break; + case ICE_SW_TUN_NVGRE: prof_type = ICE_PROF_TUN_GRE; break; + case ICE_SW_IPV4_TCP: + ice_set_bit(ICE_PROFID_IPV4_TCP, bm); + return; + case ICE_SW_IPV4_UDP: + ice_set_bit(ICE_PROFID_IPV4_UDP, bm); + return; + case ICE_SW_IPV6_TCP: + ice_set_bit(ICE_PROFID_IPV6_TCP, bm); + return; + case ICE_SW_IPV6_UDP: + ice_set_bit(ICE_PROFID_IPV6_UDP, bm); + return; case ICE_SW_TUN_PPPOE: case ICE_SW_TUN_PPPOE_QINQ: prof_type = ICE_PROF_TUN_PPPOE; @@ -7935,18 +8249,6 @@ ice_get_compat_fv_bitmap(struct ice_hw *hw, struct ice_adv_rule_info *rinfo, case ICE_SW_TUN_IPV4_AH: ice_set_bit(ICE_PROFID_IPV4_AH, bm); return; - case ICE_SW_IPV4_TCP: - ice_set_bit(ICE_PROFID_IPV4_TCP, bm); - return; - case ICE_SW_IPV4_UDP: - ice_set_bit(ICE_PROFID_IPV4_UDP, bm); - return; - case ICE_SW_IPV6_TCP: - ice_set_bit(ICE_PROFID_IPV6_TCP, bm); - return; - case ICE_SW_IPV6_UDP: - ice_set_bit(ICE_PROFID_IPV6_UDP, bm); - return; case 
ICE_SW_TUN_IPV4_GTPU_NO_PAY: ice_set_bit(ICE_PROFID_IPV4_GTPU_TEID, bm); return; @@ -8087,7 +8389,7 @@ bool ice_is_prof_rule(enum ice_sw_tunnel_type type) * @rinfo: other information regarding the rule e.g. priority and action info * @rid: return the recipe ID of the recipe created */ -static enum ice_status +int ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, struct ice_adv_rule_info *rinfo, u16 *rid) { @@ -8098,9 +8400,11 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, struct ice_sw_fv_list_entry *fvit; struct ice_recp_grp_entry *r_tmp; struct ice_sw_fv_list_entry *tmp; - enum ice_status status = ICE_SUCCESS; struct ice_sw_recipe *rm; - u8 i; + u8 i, rid_tmp; + bool is_add = true; + int status = ICE_SUCCESS; + u16 cnt; if (!ice_is_prof_rule(rinfo->tun_type) && !lkups_cnt) return ICE_ERR_PARAM; @@ -8202,10 +8506,15 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, } /* Look for a recipe which matches our requested fv / mask list */ - *rid = ice_find_recp(hw, lkup_exts, rinfo->tun_type, rinfo->priority); - if (*rid < ICE_MAX_NUM_RECIPES) + *rid = ice_find_recp(hw, lkup_exts, rinfo->tun_type, + rinfo->priority, &is_add); + if (*rid < ICE_MAX_NUM_RECIPES) { /* Success if found a recipe that match the existing criteria */ + if (hw->subscribable_recipes_supported) + ice_subscribable_recp_shared(hw, *rid); + goto err_unroll; + } rm->tun_type = rinfo->tun_type; /* Recipe we need does not exist, add a recipe */ @@ -8223,22 +8532,34 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, status = ice_aq_get_recipe_to_profile(hw, fvit->profile_id, (u8 *)r_bitmap, NULL); - if (status) - goto err_unroll; + if (status) { + if (hw->subscribable_recipes_supported) + goto err_free_recipe; + else + goto err_unroll; + } ice_or_bitmap(r_bitmap, r_bitmap, rm->r_bitmap, ICE_MAX_NUM_RECIPES); status = ice_acquire_change_lock(hw, ICE_RES_WRITE); - if (status) - goto err_unroll; + if 
(status) { + if (hw->subscribable_recipes_supported) + goto err_free_recipe; + else + goto err_unroll; + } status = ice_aq_map_recipe_to_profile(hw, fvit->profile_id, (u8 *)r_bitmap, NULL); ice_release_change_lock(hw); - if (status) - goto err_unroll; + if (status) { + if (hw->subscribable_recipes_supported) + goto err_free_recipe; + else + goto err_unroll; + } /* Update profile to recipe bitmap array */ ice_cp_bitmap(profile_to_recipe[fvit->profile_id], r_bitmap, @@ -8253,6 +8574,18 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, *rid = rm->root_rid; ice_memcpy(&hw->switch_info->recp_list[*rid].lkup_exts, lkup_exts, sizeof(*lkup_exts), ICE_NONDMA_TO_NONDMA); + goto err_unroll; + +err_free_recipe: + cnt = ice_bitmap_hweight(rm->r_bitmap, ICE_MAX_NUM_RECIPES); + for (i = 0; i < cnt; i++) { + rid_tmp = (u8)ice_find_first_bit(rm->r_bitmap, + ICE_MAX_NUM_RECIPES); + if (hw->subscribable_recipes_supported) { + if (!ice_free_recipe_res(hw, rid_tmp)) + ice_clear_bit(rid_tmp, rm->r_bitmap); + } + } err_unroll: LIST_FOR_EACH_ENTRY_SAFE(r_entry, r_tmp, &rm->rg_list, ice_recp_grp_entry, l_entry) { @@ -8288,7 +8621,7 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, * @pkt_len: packet length of dummy packet * @offsets: pointer to receive the pointer to the offsets for the packet */ -static void +void ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, enum ice_sw_tunnel_type tun_type, const u8 **pkt, u16 *pkt_len, @@ -8395,6 +8728,34 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, } } + if (tun_type == ICE_SW_IPV4_TCP) { + *pkt = dummy_tcp_packet; + *pkt_len = sizeof(dummy_tcp_packet); + *offsets = dummy_tcp_packet_offsets; + return; + } + + if (tun_type == ICE_SW_IPV4_UDP) { + *pkt = dummy_udp_packet; + *pkt_len = sizeof(dummy_udp_packet); + *offsets = dummy_udp_packet_offsets; + return; + } + + if (tun_type == ICE_SW_IPV6_TCP) { + *pkt = dummy_tcp_ipv6_packet; + *pkt_len = 
sizeof(dummy_tcp_ipv6_packet); + *offsets = dummy_tcp_ipv6_packet_offsets; + return; + } + + if (tun_type == ICE_SW_IPV6_UDP) { + *pkt = dummy_udp_ipv6_packet; + *pkt_len = sizeof(dummy_udp_ipv6_packet); + *offsets = dummy_udp_ipv6_packet_offsets; + return; + } + if (tun_type == ICE_SW_TUN_PPPOE_IPV6_QINQ) { *pkt = dummy_qinq_pppoe_ipv6_packet; *pkt_len = sizeof(dummy_qinq_pppoe_ipv6_packet); @@ -8644,34 +9005,6 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, return; } - if (tun_type == ICE_SW_IPV4_TCP) { - *pkt = dummy_tcp_packet; - *pkt_len = sizeof(dummy_tcp_packet); - *offsets = dummy_tcp_packet_offsets; - return; - } - - if (tun_type == ICE_SW_IPV4_UDP) { - *pkt = dummy_udp_packet; - *pkt_len = sizeof(dummy_udp_packet); - *offsets = dummy_udp_packet_offsets; - return; - } - - if (tun_type == ICE_SW_IPV6_TCP) { - *pkt = dummy_tcp_ipv6_packet; - *pkt_len = sizeof(dummy_tcp_ipv6_packet); - *offsets = dummy_tcp_ipv6_packet_offsets; - return; - } - - if (tun_type == ICE_SW_IPV6_UDP) { - *pkt = dummy_udp_ipv6_packet; - *pkt_len = sizeof(dummy_udp_ipv6_packet); - *offsets = dummy_udp_ipv6_packet_offsets; - return; - } - if (tun_type == ICE_ALL_TUNNELS) { *pkt = dummy_gre_udp_packet; *pkt_len = sizeof(dummy_gre_udp_packet); @@ -8819,9 +9152,9 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, * @pkt_len: packet length of dummy packet * @offsets: offset info for the dummy packet */ -static enum ice_status +int ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, - struct ice_aqc_sw_rules_elem *s_rule, + struct ice_sw_rule_lkup_rx_tx *s_rule, const u8 *dummy_pkt, u16 pkt_len, const struct ice_dummy_pkt_offsets *offsets) { @@ -8831,7 +9164,7 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, /* Start with a packet with a pre-defined/dummy content. Then, fill * in the header values to be looked up or matched. 
*/ - pkt = s_rule->pdata.lkup_tx_rx.hdr; + pkt = s_rule->hdr_data; ice_memcpy(pkt, dummy_pkt, pkt_len, ICE_NONDMA_TO_NONDMA); @@ -8894,9 +9227,6 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, len = sizeof(struct ice_udp_tnl_hdr); break; - case ICE_PPPOE: - len = sizeof(struct ice_pppoe_hdr); - break; case ICE_ESP: len = sizeof(struct ice_esp_hdr); break; @@ -8906,13 +9236,16 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, case ICE_AH: len = sizeof(struct ice_ah_hdr); break; - case ICE_L2TPV3: - len = sizeof(struct ice_l2tpv3_sess_hdr); - break; - case ICE_GTP: case ICE_GTP_NO_PAY: + case ICE_GTP: len = sizeof(struct ice_udp_gtp_hdr); break; + case ICE_PPPOE: + len = sizeof(struct ice_pppoe_hdr); + break; + case ICE_L2TPV3: + len = sizeof(struct ice_l2tpv3_sess_hdr); + break; default: return ICE_ERR_PARAM; } @@ -8929,17 +9262,29 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, * over any significant packet data. 
*/ for (j = 0; j < len / sizeof(u16); j++) +#ifdef __CHECKER__ + /* cppcheck-suppress objectIndex */ +#endif /* __CHECKER__ */ if (((u16 *)&lkups[i].m_u)[j]) ((u16 *)(pkt + offset))[j] = (((u16 *)(pkt + offset))[j] & +#ifdef __CHECKER__ + /* cppcheck-suppress objectIndex */ +#endif /* __CHECKER__ */ ~((u16 *)&lkups[i].m_u)[j]) | +#ifdef __CHECKER__ + /* cppcheck-suppress objectIndex */ +#endif /* __CHECKER__ */ (((u16 *)&lkups[i].h_u)[j] & +#ifdef __CHECKER__ + /* cppcheck-suppress objectIndex */ +#endif /* __CHECKER__ */ ((u16 *)&lkups[i].m_u)[j]); } - s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(pkt_len); + s_rule->hdr_len = CPU_TO_LE16(pkt_len); - return ICE_SUCCESS; + return 0; } /** @@ -8949,7 +9294,7 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, * @pkt: dummy packet to fill in * @offsets: offset info for the dummy packet */ -static enum ice_status +static int ice_fill_adv_packet_tun(struct ice_hw *hw, enum ice_sw_tunnel_type tun_type, u8 *pkt, const struct ice_dummy_pkt_offsets *offsets) { @@ -8964,16 +9309,14 @@ ice_fill_adv_packet_tun(struct ice_hw *hw, enum ice_sw_tunnel_type tun_type, if (!ice_get_open_tunnel_port(hw, TNL_VXLAN, &open_port)) return ICE_ERR_CFG; break; - case ICE_SW_TUN_GENEVE: case ICE_SW_TUN_GENEVE_VLAN: if (!ice_get_open_tunnel_port(hw, TNL_GENEVE, &open_port)) return ICE_ERR_CFG; break; - default: /* Nothing needs to be done for this tunnel type */ - return ICE_SUCCESS; + return 0; } /* Find the outer UDP protocol header and insert the port number */ @@ -8986,7 +9329,7 @@ ice_fill_adv_packet_tun(struct ice_hw *hw, enum ice_sw_tunnel_type tun_type, hdr = (struct ice_l4_hdr *)&pkt[offset]; hdr->dst_port = CPU_TO_BE16(open_port); - return ICE_SUCCESS; + return 0; } } @@ -8999,7 +9342,7 @@ ice_fill_adv_packet_tun(struct ice_hw *hw, enum ice_sw_tunnel_type tun_type, * @pkt: dummy packet to fill in * @offsets: offset info for the dummy packet */ -static enum ice_status +static int ice_fill_adv_packet_vlan(u16 
vlan_type, u8 *pkt, const struct ice_dummy_pkt_offsets *offsets) { @@ -9016,7 +9359,7 @@ ice_fill_adv_packet_vlan(u16 vlan_type, u8 *pkt, hdr = (struct ice_vlan_hdr *)&pkt[offset]; hdr->type = CPU_TO_BE16(vlan_type); - return ICE_SUCCESS; + return 0; } } @@ -9035,7 +9378,7 @@ ice_fill_adv_packet_vlan(u16 vlan_type, u8 *pkt, * Helper function to search for a given advance rule entry * Returns pointer to entry storing the rule if found */ -static struct ice_adv_fltr_mgmt_list_entry * +struct ice_adv_fltr_mgmt_list_entry * ice_find_adv_rule_entry(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, u16 recp_id, struct ice_adv_rule_info *rinfo) @@ -9086,14 +9429,14 @@ ice_find_adv_rule_entry(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, * Add the new VSI to the previously created VSI list set * using the update switch rule command */ -static enum ice_status +int ice_adv_add_update_vsi_list(struct ice_hw *hw, struct ice_adv_fltr_mgmt_list_entry *m_entry, struct ice_adv_rule_info *cur_fltr, struct ice_adv_rule_info *new_fltr) { - enum ice_status status; u16 vsi_list_id = 0; + int status; if (cur_fltr->sw_act.fltr_act == ICE_FWD_TO_Q || cur_fltr->sw_act.fltr_act == ICE_FWD_TO_QGRP || @@ -9254,15 +9597,15 @@ ice_set_lg_action_entry(u8 act_type, union lg_act_entry *lg_entry) * Fill a large action to hold software marker and link the lookup rule * with an action pointing to this larger action */ -static struct ice_aqc_sw_rules_elem * +static struct ice_sw_rule_lg_act * ice_fill_sw_marker_lg_act(struct ice_hw *hw, u32 sw_marker, u16 l_id, u16 lkup_rule_sz, u16 lg_act_size, u16 num_lg_acts, - struct ice_aqc_sw_rules_elem *s_rule) + struct ice_sw_rule_lkup_rx_tx *s_rule) { - struct ice_aqc_sw_rules_elem *rx_tx, *lg_act; + struct ice_sw_rule_lkup_rx_tx *rx_tx; const u16 offset_generic_md_word_0 = 0; const u16 offset_generic_md_word_1 = 1; - enum ice_status status = ICE_SUCCESS; + struct ice_sw_rule_lg_act *lg_act; union lg_act_entry lg_e_lo; union 
lg_act_entry lg_e_hi; const u8 priority = 0x3; @@ -9271,19 +9614,19 @@ ice_fill_sw_marker_lg_act(struct ice_hw *hw, u32 sw_marker, u16 l_id, /* For software marker we need 2 large actions for 32 bit mark id */ rules_size = lg_act_size + lkup_rule_sz; - lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size); + lg_act = (struct ice_sw_rule_lg_act *)ice_malloc(hw, rules_size); if (!lg_act) return NULL; - rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size); + rx_tx = (struct ice_sw_rule_lkup_rx_tx *)((u8 *)lg_act + lg_act_size); ice_memcpy(rx_tx, s_rule, lkup_rule_sz, ICE_NONDMA_TO_NONDMA); ice_free(hw, s_rule); s_rule = NULL; - lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT); - lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id); - lg_act->pdata.lg_act.size = CPU_TO_LE16(num_lg_acts); + lg_act->hdr.type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT); + lg_act->index = CPU_TO_LE16(l_id); + lg_act->size = CPU_TO_LE16(num_lg_acts); /* GENERIC VALUE action to hold the software marker ID low 16 bits */ /* and set in meta data index 4 by default. 
*/ @@ -9291,7 +9634,7 @@ ice_fill_sw_marker_lg_act(struct ice_hw *hw, u32 sw_marker, u16 l_id, lg_e_lo.generic_act.offset = offset_generic_md_word_0; lg_e_lo.generic_act.priority = priority; act = ice_set_lg_action_entry(ICE_LG_ACT_GENERIC, &lg_e_lo); - lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act); + lg_act->act[0] = CPU_TO_LE32(act); if (num_lg_acts == 1) return lg_act; @@ -9303,7 +9646,7 @@ ice_fill_sw_marker_lg_act(struct ice_hw *hw, u32 sw_marker, u16 l_id, lg_e_hi.generic_act.offset = offset_generic_md_word_1; lg_e_hi.generic_act.priority = priority; act = ice_set_lg_action_entry(ICE_LG_ACT_GENERIC, &lg_e_hi); - lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act); + lg_act->act[1] = CPU_TO_LE32(act); return lg_act; } @@ -9326,26 +9669,27 @@ ice_fill_sw_marker_lg_act(struct ice_hw *hw, u32 sw_marker, u16 l_id, * rinfo describes other information related to this rule such as forwarding * IDs, priority of this rule, etc. */ -enum ice_status +int ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, struct ice_adv_rule_info *rinfo, struct ice_rule_query_data *added_entry) { struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL; - u16 lg_act_size, lg_act_id = ICE_INVAL_LG_ACT_INDEX; + u16 lg_act_sz, lg_act_id = ICE_INVAL_LG_ACT_INDEX; u16 rid = 0, i, pkt_len, rule_buf_sz, vsi_handle; const struct ice_dummy_pkt_offsets *pkt_offsets; - struct ice_aqc_sw_rules_elem *s_rule = NULL; - struct ice_aqc_sw_rules_elem *rx_tx; + struct ice_sw_rule_lg_act *lg_rule = NULL; + struct ice_sw_rule_lkup_rx_tx *s_rule = NULL; + struct ice_sw_rule_lkup_rx_tx *rx_tx; struct LIST_HEAD_TYPE *rule_head; struct ice_switch_info *sw; u16 nb_lg_acts_mark = 1; - enum ice_status status; const u8 *pkt = NULL; - u16 num_rules = 1; + u8 num_rules = 1; bool prof_rule; u16 word_cnt; u32 act = 0; + int status; u8 q_rgn; /* Initialize profile to result index bitmap */ @@ -9365,6 +9709,9 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, ptr = 
(u16 *)&lkups[i].m_u; for (j = 0; j < sizeof(lkups->m_u) / sizeof(u16); j++) +#ifdef __CHECKER__ + /* cppcheck-suppress objectIndex */ +#endif /* __CHECKER__ */ if (ptr[j] != 0) word_cnt++; } @@ -9425,8 +9772,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, } return status; } - rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len; - s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rule_buf_sz); + rule_buf_sz = ice_struct_size(s_rule, hdr_data, 0) + pkt_len; + s_rule = (struct ice_sw_rule_lkup_rx_tx *)ice_malloc(hw, rule_buf_sz); if (!s_rule) return ICE_ERR_NO_MEMORY; if (!rinfo->flags_info.act_valid) @@ -9456,7 +9803,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, ICE_SINGLE_ACT_Q_REGION_M; break; case ICE_SET_MARK: - if (rinfo->sw_act.markid != (rinfo->sw_act.markid & 0xFFFF)) + if (rinfo->sw_act.markid > 0xFFFF) nb_lg_acts_mark += 1; /* Allocate a hardware table entry to hold large act. */ status = ice_alloc_res_lg_act(hw, &lg_act_id, nb_lg_acts_mark); @@ -9477,24 +9824,23 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, goto err_ice_add_adv_rule; } - /* set the rule LOOKUP type based on caller specified 'RX' + /* Set the rule LOOKUP type based on caller specified 'Rx' * instead of hardcoding it to be either LOOKUP_TX/RX * - * for 'RX' set the source to be the port number - * for 'TX' set the source to be the source HW VSI number (determined + * for 'Rx' set the source to be the port number + * for 'Tx' set the source to be the source HW VSI number (determined * by caller) */ if (rinfo->rx) { - s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX); - s_rule->pdata.lkup_tx_rx.src = - CPU_TO_LE16(hw->port_info->lport); + s_rule->hdr.type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX); + s_rule->src = CPU_TO_LE16(hw->port_info->lport); } else { - s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX); - s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(rinfo->sw_act.src); + s_rule->hdr.type = 
CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX); + s_rule->src = CPU_TO_LE16(rinfo->sw_act.src); } - s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid); - s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act); + s_rule->recipe_id = CPU_TO_LE16(rid); + s_rule->act = CPU_TO_LE32(act); status = ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, pkt, pkt_len, pkt_offsets); @@ -9504,7 +9850,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, if (rinfo->tun_type != ICE_NON_TUN && rinfo->tun_type != ICE_SW_TUN_AND_NON_TUN) { status = ice_fill_adv_packet_tun(hw, rinfo->tun_type, - s_rule->pdata.lkup_tx_rx.hdr, + s_rule->hdr_data, pkt_offsets); if (status) goto err_ice_add_adv_rule; @@ -9512,7 +9858,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, if (rinfo->vlan_type != 0 && ice_is_dvm_ena(hw)) { status = ice_fill_adv_packet_vlan(rinfo->vlan_type, - s_rule->pdata.lkup_tx_rx.hdr, + s_rule->hdr_data, pkt_offsets); if (status) goto err_ice_add_adv_rule; @@ -9520,18 +9866,19 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, rx_tx = s_rule; if (rinfo->sw_act.fltr_act == ICE_SET_MARK) { - lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(nb_lg_acts_mark); - s_rule = ice_fill_sw_marker_lg_act(hw, rinfo->sw_act.markid, - lg_act_id, rule_buf_sz, - lg_act_size, nb_lg_acts_mark, - s_rule); - if (!s_rule) + lg_act_sz = (u16)ice_struct_size(lg_rule, act, nb_lg_acts_mark); + lg_rule = ice_fill_sw_marker_lg_act(hw, rinfo->sw_act.markid, + lg_act_id, rule_buf_sz, + lg_act_sz, nb_lg_acts_mark, + s_rule); + if (!lg_rule) goto err_ice_add_adv_rule; - rule_buf_sz += lg_act_size; + s_rule = (struct ice_sw_rule_lkup_rx_tx *)lg_rule; + rule_buf_sz += lg_act_sz; num_rules += 1; - rx_tx = (struct ice_aqc_sw_rules_elem *) - ((u8 *)s_rule + lg_act_size); + rx_tx = (struct ice_sw_rule_lkup_rx_tx *) + ((u8 *)s_rule + lg_act_sz); } status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule, @@ -9553,6 +9900,7 @@ ice_add_adv_rule(struct ice_hw *hw, 
struct ice_adv_lkup_elem *lkups, } else { adv_fltr->lkups = NULL; } + if (!adv_fltr->lkups && !prof_rule) { status = ICE_ERR_NO_MEMORY; goto err_ice_add_adv_rule; @@ -9561,7 +9909,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, adv_fltr->lkups_cnt = lkups_cnt; adv_fltr->rule_info = *rinfo; adv_fltr->rule_info.fltr_rule_id = - LE16_TO_CPU(rx_tx->pdata.lkup_tx_rx.index); + LE16_TO_CPU(rx_tx->index); adv_fltr->rule_info.lg_id = LE16_TO_CPU(lg_act_id); sw = hw->switch_info; sw->recp_list[rid].adv_rule = true; @@ -9580,7 +9928,6 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, err_ice_add_adv_rule: if (status && rinfo->sw_act.fltr_act == ICE_SET_MARK) ice_free_sw_marker_lg(hw, lg_act_id, rinfo->sw_act.markid); - if (status && adv_fltr) { ice_free(hw, adv_fltr->lkups); ice_free(hw, adv_fltr); @@ -9598,14 +9945,14 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, * @fm_list: filter management entry for which the VSI list management needs to * be done */ -static enum ice_status +static int ice_adv_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle, struct ice_adv_fltr_mgmt_list_entry *fm_list) { struct ice_vsi_list_map_info *vsi_list_info; enum ice_sw_lkup_type lkup_type; - enum ice_status status; u16 vsi_list_id; + int status; if (fm_list->rule_info.sw_act.fltr_act != ICE_FWD_TO_VSI_LIST || fm_list->vsi_count == 0) @@ -9697,16 +10044,17 @@ ice_adv_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle, * header. rinfo describes other information related to this rule such as * forwarding IDs, priority of this rule, etc. 
*/ -enum ice_status +int ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, struct ice_adv_rule_info *rinfo) { struct ice_adv_fltr_mgmt_list_entry *list_elem; struct ice_prot_lkup_ext lkup_exts; - struct ice_lock *rule_lock; /* Lock to protect filter rule list */ - enum ice_status status = ICE_SUCCESS; bool remove_rule = false; + struct ice_lock *rule_lock; /* Lock to protect filter rule list */ u16 i, rid, vsi_handle; + bool is_add = false; + int status = ICE_SUCCESS; ice_memset(&lkup_exts, 0, sizeof(lkup_exts), ICE_NONDMA_MEM); for (i = 0; i < lkups_cnt; i++) { @@ -9727,7 +10075,8 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, if (status) return status; - rid = ice_find_recp(hw, &lkup_exts, rinfo->tun_type, rinfo->priority); + rid = ice_find_recp(hw, &lkup_exts, rinfo->tun_type, + rinfo->priority, &is_add); /* If did not find a recipe that match the existing criteria */ if (rid == ICE_MAX_NUM_RECIPES) return ICE_ERR_PARAM; @@ -9756,34 +10105,39 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, } ice_release_lock(rule_lock); if (remove_rule) { - struct ice_aqc_sw_rules_elem *s_rule; + struct ice_sw_rule_lkup_rx_tx *s_rule; u16 rule_buf_sz; if (rinfo->sw_act.fltr_act == ICE_SET_MARK) ice_free_sw_marker_lg(hw, list_elem->rule_info.lg_id, rinfo->sw_act.markid); - rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE; - s_rule = (struct ice_aqc_sw_rules_elem *) + rule_buf_sz = ice_struct_size(s_rule, hdr_data, 0); + s_rule = (struct ice_sw_rule_lkup_rx_tx *) ice_malloc(hw, rule_buf_sz); if (!s_rule) return ICE_ERR_NO_MEMORY; - s_rule->pdata.lkup_tx_rx.act = 0; - s_rule->pdata.lkup_tx_rx.index = - CPU_TO_LE16(list_elem->rule_info.fltr_rule_id); - s_rule->pdata.lkup_tx_rx.hdr_len = 0; - status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule, - rule_buf_sz, 1, + s_rule->act = 0; + s_rule->index = CPU_TO_LE16(list_elem->rule_info.fltr_rule_id); + s_rule->hdr_len = 0; + status = 
ice_aq_sw_rules(hw, s_rule, rule_buf_sz, 1, ice_aqc_opc_remove_sw_rules, NULL); if (status == ICE_SUCCESS || status == ICE_ERR_DOES_NOT_EXIST) { struct ice_switch_info *sw = hw->switch_info; + struct ice_sw_recipe *r_list = sw->recp_list; ice_acquire_lock(rule_lock); LIST_DEL(&list_elem->list_entry); ice_free(hw, list_elem->lkups); ice_free(hw, list_elem); ice_release_lock(rule_lock); - if (LIST_EMPTY(&sw->recp_list[rid].filt_rules)) - sw->recp_list[rid].adv_rule = false; + if (LIST_EMPTY(&r_list[rid].filt_rules)) { + r_list[rid].adv_rule = false; + + /* All rules for this recipe are now removed */ + if (hw->subscribable_recipes_supported) + ice_release_recipe_res(hw, + &r_list[rid]); + } } ice_free(hw, s_rule); } @@ -9799,7 +10153,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, * the remove_entry parameter. This function will remove rule for a given * vsi_handle with a given rule_id which is passed as parameter in remove_entry */ -enum ice_status +int ice_rem_adv_rule_by_id(struct ice_hw *hw, struct ice_rule_query_data *remove_entry) { @@ -9828,22 +10182,22 @@ ice_rem_adv_rule_by_id(struct ice_hw *hw, /** * ice_rem_adv_rule_for_vsi - removes existing advanced switch rules for a - * given VSI handle + * given VSI handle * @hw: pointer to the hardware structure * @vsi_handle: VSI handle for which we are supposed to remove all the rules. * * This function is used to remove all the rules for a given VSI and as soon * as removing a rule fails, it will return immediately with the error code, - * else it will return ICE_SUCCESS + * else it will return 0. 
*/ -enum ice_status ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle) +int ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle) { struct ice_adv_fltr_mgmt_list_entry *list_itr, *tmp_entry; struct ice_vsi_list_map_info *map_info; - struct LIST_HEAD_TYPE *list_head; struct ice_adv_rule_info rinfo; + struct LIST_HEAD_TYPE *list_head; struct ice_switch_info *sw; - enum ice_status status; + int status; u8 rid; sw = hw->switch_info; @@ -9864,8 +10218,7 @@ enum ice_status ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle) if (!map_info) continue; - if (!ice_is_bit_set(map_info->vsi_map, - vsi_handle)) + if (!ice_is_bit_set(map_info->vsi_map, vsi_handle)) continue; } else if (rinfo.sw_act.vsi_handle != vsi_handle) { continue; @@ -9874,7 +10227,6 @@ enum ice_status ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle) rinfo.sw_act.vsi_handle = vsi_handle; status = ice_rem_adv_rule(hw, list_itr->lkups, list_itr->lkups_cnt, &rinfo); - if (status) return status; } @@ -9888,14 +10240,14 @@ enum ice_status ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle) * @list_head: list for which filters needs to be replayed * @recp_id: Recipe ID for which rules need to be replayed */ -static enum ice_status +static int ice_replay_fltr(struct ice_hw *hw, u8 recp_id, struct LIST_HEAD_TYPE *list_head) { struct ice_fltr_mgmt_list_entry *itr; - enum ice_status status = ICE_SUCCESS; struct ice_sw_recipe *recp_list; u8 lport = hw->port_info->lport; struct LIST_HEAD_TYPE l_head; + int status = 0; if (LIST_EMPTY(list_head)) return status; @@ -9919,7 +10271,7 @@ ice_replay_fltr(struct ice_hw *hw, u8 recp_id, struct LIST_HEAD_TYPE *list_head) if (itr->vsi_count < 2 && recp_id != ICE_SW_LKUP_VLAN) { status = ice_add_rule_internal(hw, recp_list, lport, &f_entry); - if (status != ICE_SUCCESS) + if (status) goto end; continue; } @@ -9942,7 +10294,7 @@ ice_replay_fltr(struct ice_hw *hw, u8 recp_id, struct LIST_HEAD_TYPE *list_head) status = 
ice_add_rule_internal(hw, recp_list, lport, &f_entry); - if (status != ICE_SUCCESS) + if (status) goto end; } } @@ -9959,10 +10311,10 @@ ice_replay_fltr(struct ice_hw *hw, u8 recp_id, struct LIST_HEAD_TYPE *list_head) * NOTE: This function does not clean up partially added filters on error. * It is up to caller of the function to issue a reset or fail early. */ -enum ice_status ice_replay_all_fltr(struct ice_hw *hw) +int ice_replay_all_fltr(struct ice_hw *hw) { struct ice_switch_info *sw = hw->switch_info; - enum ice_status status = ICE_SUCCESS; + int status = ICE_SUCCESS; u8 i; for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) { @@ -9987,14 +10339,14 @@ enum ice_status ice_replay_all_fltr(struct ice_hw *hw) * Replays the filter of recipe recp_id for a VSI represented via vsi_handle. * It is required to pass valid VSI handle. */ -static enum ice_status +static int ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi, struct ice_switch_info *sw, u16 vsi_handle, u8 recp_id, struct LIST_HEAD_TYPE *list_head) { struct ice_fltr_mgmt_list_entry *itr; - enum ice_status status = ICE_SUCCESS; struct ice_sw_recipe *recp_list; + int status = 0; u16 hw_vsi_id; if (LIST_EMPTY(list_head)) @@ -10015,7 +10367,7 @@ ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi, status = ice_add_rule_internal(hw, recp_list, pi->lport, &f_entry); - if (status != ICE_SUCCESS) + if (status) goto end; continue; } @@ -10035,7 +10387,7 @@ ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi, status = ice_add_rule_internal(hw, recp_list, pi->lport, &f_entry); - if (status != ICE_SUCCESS) + if (status) goto end; } end: @@ -10050,13 +10402,13 @@ ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi, * * Replay the advanced rule for the given VSI. 
*/ -static enum ice_status +static int ice_replay_vsi_adv_rule(struct ice_hw *hw, u16 vsi_handle, struct LIST_HEAD_TYPE *list_head) { struct ice_rule_query_data added_entry = { 0 }; struct ice_adv_fltr_mgmt_list_entry *adv_fltr; - enum ice_status status = ICE_SUCCESS; + int status = 0; if (LIST_EMPTY(list_head)) return status; @@ -10083,12 +10435,12 @@ ice_replay_vsi_adv_rule(struct ice_hw *hw, u16 vsi_handle, * * Replays filters for requested VSI via vsi_handle. */ -enum ice_status +int ice_replay_vsi_all_fltr(struct ice_hw *hw, struct ice_port_info *pi, u16 vsi_handle) { - struct ice_switch_info *sw; - enum ice_status status; + struct ice_switch_info *sw = NULL; + int status; u8 i; sw = hw->switch_info; @@ -10103,11 +10455,11 @@ ice_replay_vsi_all_fltr(struct ice_hw *hw, struct ice_port_info *pi, head); else status = ice_replay_vsi_adv_rule(hw, vsi_handle, head); - if (status != ICE_SUCCESS) + if (status) return status; } - return ICE_SUCCESS; + return 0; } /** @@ -10147,3 +10499,4 @@ void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw) { ice_rm_sw_replay_rule_info(hw, hw->switch_info); } + diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h index 7a6944893d..49fb439347 100644 --- a/drivers/net/ice/base/ice_switch.h +++ b/drivers/net/ice/base/ice_switch.h @@ -11,9 +11,11 @@ #define ICE_SW_CFG_MAX_BUF_LEN 2048 #define ICE_MAX_SW 256 #define ICE_DFLT_VSI_INVAL 0xff -#define ICE_FLTR_RX BIT(0) -#define ICE_FLTR_TX BIT(1) -#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX) + +#define ICE_FLTR_RX BIT(0) +#define ICE_FLTR_TX BIT(1) +#define ICE_FLTR_RX_LB BIT(2) +#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX) /* Switch Profile IDs for Profile related switch rules */ #define ICE_PROFID_IPV4_TCP 4 @@ -71,18 +73,6 @@ #define ICE_PROFID_IPV6_PFCP_SESSION 82 #define DUMMY_ETH_HDR_LEN 16 -#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \ - (offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr) + \ - (DUMMY_ETH_HDR_LEN * \ - sizeof(((struct 
ice_sw_rule_lkup_rx_tx *)0)->hdr[0]))) -#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \ - (offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr)) -#define ICE_SW_RULE_LG_ACT_SIZE(n) \ - (offsetof(struct ice_aqc_sw_rules_elem, pdata.lg_act.act) + \ - ((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act[0]))) -#define ICE_SW_RULE_VSI_LIST_SIZE(n) \ - (offsetof(struct ice_aqc_sw_rules_elem, pdata.vsi_list.vsi) + \ - ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi[0]))) /* Worst case buffer length for ice_aqc_opc_get_res_alloc */ #define ICE_MAX_RES_TYPES 0x80 @@ -156,6 +146,7 @@ struct ice_fltr_info { union { struct { u8 mac_addr[ETH_ALEN]; + u16 sw_id; } mac; struct { u8 mac_addr[ETH_ALEN]; @@ -165,6 +156,7 @@ struct ice_fltr_info { u16 vlan_id; u16 tpid; u8 tpid_valid; + u16 sw_id; } vlan; /* Set lkup_type as ICE_SW_LKUP_ETHERTYPE * if just using ethertype as filter. Set lkup_type as @@ -202,11 +194,12 @@ struct ice_fltr_info { /* Rule creations populate these indicators basing on the switch type */ u8 lb_en; /* Indicate if packet can be looped back */ u8 lan_en; /* Indicate if packet can be forwarded to the uplink */ + u8 fltVeb_en; /* Indicate if VSI is connected to floating VEB */ }; struct ice_update_recipe_lkup_idx_params { u16 rid; - u8 fv_idx; + u16 fv_idx; bool ignore_valid; u16 mask; bool mask_valid; @@ -219,19 +212,19 @@ struct ice_adv_lkup_elem { union ice_prot_hdr m_u; /* Mask of header values to match */ }; -struct lg_entry_vsi_fwd { +struct entry_vsi_fwd { u16 vsi_list; u8 list; u8 valid; }; -struct lg_entry_to_q { +struct entry_to_q { u16 q_idx; u8 q_region_sz; u8 q_pri; }; -struct lg_entry_prune { +struct entry_prune { u16 vsi_list; u8 list; u8 egr; @@ -239,28 +232,29 @@ struct lg_entry_prune { u8 prune_t; }; -struct lg_entry_mirror { +struct entry_mirror { u16 mirror_vsi; }; -struct lg_entry_generic_act { +struct entry_generic_act { u16 generic_value; u8 offset; u8 priority; }; -struct lg_entry_statistics { +struct entry_statistics { u8 
counter_idx; }; union lg_act_entry { - struct lg_entry_vsi_fwd vsi_fwd; - struct lg_entry_to_q to_q; - struct lg_entry_prune prune; - struct lg_entry_mirror mirror; - struct lg_entry_generic_act generic_act; - struct lg_entry_statistics statistics; + struct entry_vsi_fwd vsi_fwd; + struct entry_to_q to_q; + struct entry_prune prune; + struct entry_mirror mirror; + struct entry_generic_act generic_act; + struct entry_statistics statistics; }; + struct ice_prof_type_entry { u16 prof_id; enum ice_sw_tunnel_type type; @@ -296,7 +290,8 @@ struct ice_rule_query_data { u16 vsi_handle; }; -/* This structure allows to pass info about lb_en and lan_en +/* + * This structure allows to pass info about lb_en and lan_en * flags to ice_add_adv_rule. Values in act would be used * only if act_valid was set to true, otherwise dflt * values would be used. @@ -395,7 +390,7 @@ struct ice_vsi_list_map_info { struct ice_fltr_list_entry { struct LIST_ENTRY_TYPE list_entry; - enum ice_status status; + int status; struct ice_fltr_info fltr_info; }; @@ -430,176 +425,216 @@ struct ice_adv_fltr_mgmt_list_entry { }; enum ice_promisc_flags { - ICE_PROMISC_UCAST_RX = 0x1, - ICE_PROMISC_UCAST_TX = 0x2, - ICE_PROMISC_MCAST_RX = 0x4, - ICE_PROMISC_MCAST_TX = 0x8, - ICE_PROMISC_BCAST_RX = 0x10, - ICE_PROMISC_BCAST_TX = 0x20, - ICE_PROMISC_VLAN_RX = 0x40, - ICE_PROMISC_VLAN_TX = 0x80, + ICE_PROMISC_UCAST_RX = 0, + ICE_PROMISC_UCAST_TX, + ICE_PROMISC_MCAST_RX, + ICE_PROMISC_MCAST_TX, + ICE_PROMISC_BCAST_RX, + ICE_PROMISC_BCAST_TX, + ICE_PROMISC_VLAN_RX, + ICE_PROMISC_VLAN_TX, + ICE_PROMISC_UCAST_RX_LB, + /* Max value */ + ICE_PROMISC_MAX, }; +struct ice_dummy_pkt_offsets { + enum ice_protocol_type type; + u16 offset; /* ICE_PROTOCOL_LAST indicates end of list */ +}; + +void +ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, + enum ice_sw_tunnel_type tun_type, const u8 **pkt, + u16 *pkt_len, + const struct ice_dummy_pkt_offsets **offsets); + +int +ice_fill_adv_dummy_packet(struct 
ice_adv_lkup_elem *lkups, u16 lkups_cnt, + struct ice_sw_rule_lkup_rx_tx *s_rule, + const u8 *dummy_pkt, u16 pkt_len, + const struct ice_dummy_pkt_offsets *offsets); + +int +ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, + u16 lkups_cnt, struct ice_adv_rule_info *rinfo, u16 *rid); + +struct ice_adv_fltr_mgmt_list_entry * +ice_find_adv_rule_entry(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, + u16 lkups_cnt, u16 recp_id, + struct ice_adv_rule_info *rinfo); + +int +ice_adv_add_update_vsi_list(struct ice_hw *hw, + struct ice_adv_fltr_mgmt_list_entry *m_entry, + struct ice_adv_rule_info *cur_fltr, + struct ice_adv_rule_info *new_fltr); + +struct ice_vsi_list_map_info * +ice_find_vsi_list_entry(struct ice_sw_recipe *recp_list, u16 vsi_handle, + u16 *vsi_list_id); + /* VSI related commands */ -enum ice_status +int ice_aq_add_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_free_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, bool keep_vsi_alloc, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_update_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd); -enum ice_status +int ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd); -enum ice_status +int ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, bool keep_vsi_alloc, struct ice_sq_cd *cd); -enum ice_status +int ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd); struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle); +void ice_clear_vsi_q_ctx(struct ice_hw *hw, u16 vsi_handle); void ice_clear_all_vsi_ctx(struct ice_hw *hw); -enum ice_status +int ice_aq_get_vsi_params(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi, u16 count, struct ice_mir_rule_buf 
*mr_buf, struct ice_sq_cd *cd, u16 *rule_id); -enum ice_status +int ice_aq_delete_mir_rule(struct ice_hw *hw, u16 rule_id, bool keep_allocd, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_storm_ctrl(struct ice_hw *hw, u32 *bcast_thresh, u32 *mcast_thresh, u32 *ctl_bitmask); -enum ice_status +int ice_aq_set_storm_ctrl(struct ice_hw *hw, u32 bcast_thresh, u32 mcast_thresh, u32 ctl_bitmask); /* Switch config */ -enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw); +int ice_get_initial_sw_cfg(struct ice_hw *hw); -enum ice_status +int ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id); -enum ice_status +int ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id); -enum ice_status +int ice_alloc_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items, u16 *counter_id); -enum ice_status +int ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items, u16 counter_id); -/* Switch/bridge related commands */ -enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw); -enum ice_status ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id); -enum ice_status ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id); -enum ice_status +int ice_update_sw_rule_bridge_mode(struct ice_hw *hw); +int ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id); +int ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id); +int ice_alloc_sw(struct ice_hw *hw, bool ena_stats, bool shared_res, u16 *sw_id, u16 *counter_id); -enum ice_status +int ice_free_sw(struct ice_hw *hw, u16 sw_id, u16 counter_id); -enum ice_status +int ice_aq_get_res_alloc(struct ice_hw *hw, u16 *num_entries, struct ice_aqc_get_res_resp_elem *buf, u16 buf_size, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries, struct ice_aqc_res_elem *buf, u16 buf_size, u16 res_type, bool res_shared, u16 *desc_id, struct ice_sq_cd *cd); -enum ice_status +int 
ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list); -enum ice_status -ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list); +int ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list); void ice_rem_all_sw_rules_info(struct ice_hw *hw); -enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst); -enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst); -enum ice_status +int ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst); +int ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst); +int ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list); -enum ice_status +int ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list); -enum ice_status +int ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list); -enum ice_status +int ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list); -enum ice_status +int ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info, u16 sw_marker); -enum ice_status +int ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info); void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle); /* Promisc/defport setup for VSIs */ -enum ice_status +int ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set, u8 direction); bool ice_check_if_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool *rule_exists); -enum ice_status -ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, - u16 vid); -enum ice_status -ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, - u16 vid); -enum ice_status -ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, - bool rm_vlan_promisc); +int +ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, u16 vid); +int +ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, u16 vid); +int 
+ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, bool rm_vlan_promisc); /* Get VSIs Promisc/defport settings */ -enum ice_status -ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, - u16 *vid); -enum ice_status -ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask, - u16 *vid); - -enum ice_status +int +ice_get_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, u16 *vid); +int +ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, + ice_bitmap_t *promisc_mask, u16 *vid); + +int ice_aq_add_recipe(struct ice_hw *hw, struct ice_aqc_recipe_data_elem *s_recipe_list, u16 num_recipes, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_recipe(struct ice_hw *hw, struct ice_aqc_recipe_data_elem *s_recipe_list, u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, struct ice_sq_cd *cd); -enum ice_status +int ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, struct ice_sq_cd *cd); -enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *recipe_id); -enum ice_status +void ice_init_chk_subscribable_recipe_support(struct ice_hw *hw); + +int ice_alloc_recipe(struct ice_hw *hw, u16 *recipe_id); +int ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, struct ice_adv_rule_info *rinfo, struct ice_rule_query_data *added_entry); -enum ice_status +int ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle); -enum ice_status +int ice_rem_adv_rule_by_id(struct ice_hw *hw, struct ice_rule_query_data *remove_entry); -enum ice_status +int ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, struct ice_adv_rule_info *rinfo); -enum ice_status ice_replay_all_fltr(struct ice_hw *hw); +int ice_replay_all_fltr(struct ice_hw *hw); -enum ice_status +int ice_init_def_sw_recp(struct 
ice_hw *hw, struct ice_sw_recipe **recp_list); u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle); bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle); -enum ice_status +int ice_replay_vsi_all_fltr(struct ice_hw *hw, struct ice_port_info *pi, u16 vsi_handle); void ice_rm_sw_replay_rule_info(struct ice_hw *hw, struct ice_switch_info *sw); void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw); bool ice_is_prof_rule(enum ice_sw_tunnel_type type); -enum ice_status +int ice_update_recipe_lkup_idx(struct ice_hw *hw, struct ice_update_recipe_lkup_idx_params *params); void ice_change_proto_id_to_dvm(void); diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index d13105070b..bdf31dae8d 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -43,6 +43,24 @@ static inline int ice_ilog2(u64 n) return -1; } +/** + * ice_fls - find the most significant bit set in a u64 + * @n: u64 value to scan for a bit + * + * Returns: 0 if no bits found, otherwise the index of the highest bit that was + * set, like ice_fls(0x20) == 6. This means this is returning a *1 based* + * count, and that the maximum largest value returned is 64! + */ +static inline unsigned int ice_fls(u64 n) +{ + int ret; + + ret = ice_ilog2(n); + + /* add one to turn to the ilog2 value into a 1 based index */ + return ret >= 0 ? 
+		ret + 1 : 0;
+}
+
 static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
 {
	return ice_is_bit_set(&bitmap, tc);
@@ -98,6 +116,8 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 #define ICE_LO_DWORD(x)	((u32)((x) & 0xFFFFFFFF))
 #define ICE_HI_WORD(x)	((u16)(((x) >> 16) & 0xFFFF))
 #define ICE_LO_WORD(x)	((u16)((x) & 0xFFFF))
+#define ICE_HI_BYTE(x)	((u8)(((x) >> 8) & 0xFF))
+#define ICE_LO_BYTE(x)	((u8)((x) & 0xFF))

 /* debug masks - set these bits in hw->debug_mask to control output */
 #define ICE_DBG_TRACE		BIT_ULL(0) /* for function-trace only */
@@ -204,6 +224,7 @@ enum ice_set_fc_aq_failures {
 enum ice_mac_type {
	ICE_MAC_UNKNOWN = 0,
	ICE_MAC_E810,
+	ICE_MAC_E830,
	ICE_MAC_GENERIC,
	ICE_MAC_GENERIC_3K,
	ICE_MAC_GENERIC_3K_E825,
@@ -211,7 +232,8 @@ enum ice_mac_type {

 /* Media Types */
 enum ice_media_type {
-	ICE_MEDIA_UNKNOWN = 0,
+	ICE_MEDIA_NONE = 0,
+	ICE_MEDIA_UNKNOWN,
	ICE_MEDIA_FIBER,
	ICE_MEDIA_BASET,
	ICE_MEDIA_BACKPLANE,
@@ -219,11 +241,106 @@ enum ice_media_type {
	ICE_MEDIA_AUI,
 };

+#define ICE_MEDIA_BASET_PHY_TYPE_LOW_M	(ICE_PHY_TYPE_LOW_100BASE_TX | \
+					 ICE_PHY_TYPE_LOW_1000BASE_T | \
+					 ICE_PHY_TYPE_LOW_2500BASE_T | \
+					 ICE_PHY_TYPE_LOW_5GBASE_T | \
+					 ICE_PHY_TYPE_LOW_10GBASE_T | \
+					 ICE_PHY_TYPE_LOW_25GBASE_T)
+
+#define ICE_MEDIA_C2M_PHY_TYPE_LOW_M	(ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC | \
+					 ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC | \
+					 ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC | \
+					 ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC | \
+					 ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC | \
+					 ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC | \
+					 ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC | \
+					 ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC)
+
+#define ICE_MEDIA_C2M_PHY_TYPE_HIGH_M	(ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC | \
+					 ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC | \
+					 ICE_PHY_TYPE_HIGH_200G_AUI4_AOC_ACC | \
+					 ICE_PHY_TYPE_HIGH_200G_AUI8_AOC_ACC)
+
+#define ICE_MEDIA_OPT_PHY_TYPE_LOW_M	(ICE_PHY_TYPE_LOW_1000BASE_SX | \
+					 ICE_PHY_TYPE_LOW_1000BASE_LX | \
+					 ICE_PHY_TYPE_LOW_10GBASE_SR | \
+					 ICE_PHY_TYPE_LOW_10GBASE_LR | \
+					 ICE_PHY_TYPE_LOW_25GBASE_SR | \
+					 ICE_PHY_TYPE_LOW_25GBASE_LR | \
+					 ICE_PHY_TYPE_LOW_40GBASE_SR4 | \
+					 ICE_PHY_TYPE_LOW_40GBASE_LR4 | \
+					 ICE_PHY_TYPE_LOW_50GBASE_SR2 | \
+					 ICE_PHY_TYPE_LOW_50GBASE_LR2 | \
+					 ICE_PHY_TYPE_LOW_50GBASE_SR | \
+					 ICE_PHY_TYPE_LOW_50GBASE_LR | \
+					 ICE_PHY_TYPE_LOW_100GBASE_SR4 | \
+					 ICE_PHY_TYPE_LOW_100GBASE_LR4 | \
+					 ICE_PHY_TYPE_LOW_100GBASE_SR2 | \
+					 ICE_PHY_TYPE_LOW_50GBASE_FR | \
+					 ICE_PHY_TYPE_LOW_100GBASE_DR)
+
+#define ICE_MEDIA_OPT_PHY_TYPE_HIGH_M	(ICE_PHY_TYPE_HIGH_200G_SR4 | \
+					 ICE_PHY_TYPE_HIGH_200G_LR4 | \
+					 ICE_PHY_TYPE_HIGH_200G_FR4 | \
+					 ICE_PHY_TYPE_HIGH_200G_DR4 | \
+					 ICE_PHY_TYPE_HIGH_400GBASE_FR8)
+
+#define ICE_MEDIA_BP_PHY_TYPE_LOW_M	(ICE_PHY_TYPE_LOW_1000BASE_KX | \
+					 ICE_PHY_TYPE_LOW_2500BASE_KX | \
+					 ICE_PHY_TYPE_LOW_5GBASE_KR | \
+					 ICE_PHY_TYPE_LOW_10GBASE_KR_CR1 | \
+					 ICE_PHY_TYPE_LOW_25GBASE_KR | \
+					 ICE_PHY_TYPE_LOW_25GBASE_KR_S | \
+					 ICE_PHY_TYPE_LOW_25GBASE_KR1 | \
+					 ICE_PHY_TYPE_LOW_40GBASE_KR4 | \
+					 ICE_PHY_TYPE_LOW_50GBASE_KR2 | \
+					 ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4 | \
+					 ICE_PHY_TYPE_LOW_100GBASE_KR4 | \
+					 ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4)
+
+#define ICE_MEDIA_BP_PHY_TYPE_HIGH_M	(ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4 | \
+					 ICE_PHY_TYPE_HIGH_200G_KR4_PAM4)
+
+#define ICE_MEDIA_DAC_PHY_TYPE_LOW_M	(ICE_PHY_TYPE_LOW_10G_SFI_DA | \
+					 ICE_PHY_TYPE_LOW_25GBASE_CR | \
+					 ICE_PHY_TYPE_LOW_25GBASE_CR_S | \
+					 ICE_PHY_TYPE_LOW_25GBASE_CR1 | \
+					 ICE_PHY_TYPE_LOW_40GBASE_CR4 | \
+					 ICE_PHY_TYPE_LOW_50GBASE_CR2 | \
+					 ICE_PHY_TYPE_LOW_100GBASE_CR4 | \
+					 ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4 | \
+					 ICE_PHY_TYPE_LOW_50GBASE_CP | \
+					 ICE_PHY_TYPE_LOW_100GBASE_CP2)
+
+#define ICE_MEDIA_DAC_PHY_TYPE_HIGH_M	ICE_PHY_TYPE_HIGH_200G_CR4_PAM4
+
+#define ICE_MEDIA_C2C_PHY_TYPE_LOW_M	(ICE_PHY_TYPE_LOW_100M_SGMII | \
+					 ICE_PHY_TYPE_LOW_1G_SGMII | \
+					 ICE_PHY_TYPE_LOW_2500BASE_X | \
+					 ICE_PHY_TYPE_LOW_10G_SFI_C2C | \
+					 ICE_PHY_TYPE_LOW_25G_AUI_C2C | \
+					 ICE_PHY_TYPE_LOW_40G_XLAUI | \
+					 ICE_PHY_TYPE_LOW_50G_LAUI2 | \
+					 ICE_PHY_TYPE_LOW_50G_AUI2 | \
+					 ICE_PHY_TYPE_LOW_50G_AUI1 | \
+					 ICE_PHY_TYPE_LOW_100G_CAUI4 | \
+					 ICE_PHY_TYPE_LOW_100G_AUI4)
+
+#define ICE_MEDIA_C2C_PHY_TYPE_HIGH_M	(ICE_PHY_TYPE_HIGH_100G_CAUI2 | \
+					 ICE_PHY_TYPE_HIGH_100G_AUI2 | \
+					 ICE_PHY_TYPE_HIGH_200G_AUI4 | \
+					 ICE_PHY_TYPE_HIGH_200G_AUI8)
+
 /* Software VSI types. */
 enum ice_vsi_type {
	ICE_VSI_PF = 0,
	ICE_VSI_CTRL = 3,	/* equates to ICE_VSI_PF with 1 queue pair */
	ICE_VSI_LB = 6,
+	ICE_VSI_ADI = 8,
+#ifdef SF_SUPPORT
+	ICE_VSI_SF = 9,
+#endif /* SF_SUPPORT */
 };

 struct ice_link_status {
@@ -602,9 +719,11 @@ struct ice_hw_common_caps {
	bool sec_rev_disabled;
	bool update_disabled;
	bool nvm_unified_update;
+	bool netlist_auth;
 #define ICE_NVM_MGMT_SEC_REV_DISABLED		BIT(0)
 #define ICE_NVM_MGMT_UPDATE_DISABLED		BIT(1)
 #define ICE_NVM_MGMT_UNIFIED_UPD_SUPPORT	BIT(3)
+#define ICE_NVM_MGMT_NETLIST_AUTH_SUPPORT	BIT(5)
	/* PCIe reset avoidance */
	bool pcie_reset_avoidance; /* false: not supported, true: supported */
	/* Post update reset restriction */
@@ -622,7 +741,12 @@ struct ice_hw_common_caps {
 #define ICE_EXT_TOPO_DEV_IMG_LOAD_EN	BIT(0)
	bool ext_topo_dev_img_prog_en[ICE_EXT_TOPO_DEV_IMG_COUNT];
 #define ICE_EXT_TOPO_DEV_IMG_PROG_EN	BIT(1)
+	bool ext_topo_dev_img_ver_schema[ICE_EXT_TOPO_DEV_IMG_COUNT];
+#define ICE_EXT_TOPO_DEV_IMG_VER_SCHEMA	BIT(2)
	bool tx_sched_topo_comp_mode_en;
+	/* Support for OROM update in Recovery Mode */
+	bool orom_recovery_update;
+	bool next_cluster_id_support;
 };

 /* IEEE 1588 TIME_SYNC specific info */
@@ -632,6 +756,7 @@ struct ice_hw_common_caps {
 #define ICE_TS_TMR_ENA_M		BIT(2)
 #define ICE_TS_TMR_IDX_OWND_S		4
 #define ICE_TS_TMR_IDX_OWND_M		BIT(4)
+#define ICE_TS_GPIO_1PPS_ASSOC		BIT(12)
 #define ICE_TS_CLK_FREQ_S		16
 #define ICE_TS_CLK_FREQ_M		MAKEMASK(0x7, ICE_TS_CLK_FREQ_S)
 #define ICE_TS_CLK_SRC_S		20
@@ -648,7 +773,9 @@ enum ice_time_ref_freq {
	ICE_TIME_REF_FREQ_156_250	= 4,
	ICE_TIME_REF_FREQ_245_760	= 5,

-	NUM_ICE_TIME_REF_FREQ
+	NUM_ICE_TIME_REF_FREQ,
+
+	ICE_TIME_REF_FREQ_INVALID	= -1,
 };

 /* Clock source specification */
@@ -668,6 +795,7 @@ struct ice_ts_func_info {
	u8 tmr_index_owned : 1;
	u8 src_tmr_owned : 1;
	u8 tmr_ena : 1;
+	u8 gpio_1pps : 1;
 };

 /* Device specific definitions */
@@ -680,6 +808,7 @@ struct ice_ts_func_info {
 #define ICE_TS_TMR0_ENA_M		BIT(25)
 #define ICE_TS_TMR1_ENA_M		BIT(26)
 #define ICE_TS_LL_TX_TS_READ_M		BIT(28)
+#define ICE_TS_LL_TX_TS_INT_READ_M	BIT(29)

 struct ice_ts_dev_info {
	/* Device specific info */
@@ -692,6 +821,7 @@ struct ice_ts_dev_info {
	u8 tmr0_ena : 1;
	u8 tmr1_ena : 1;
	u8 ts_ll_read : 1;
+	u8 ts_ll_int_read : 1;
 };

 #define ICE_NAC_TOPO_PRIMARY_M	BIT(0)
@@ -720,6 +850,9 @@ struct ice_hw_dev_caps {
	struct ice_ts_dev_info ts_dev_info;
	u32 num_funcs;
	struct ice_nac_topology nac_topo;
+	/* bitmap of supported sensors */
+	u32 supported_sensors;
+#define ICE_SENSOR_SUPPORT_E810_INT_TEMP	BIT(0)
 };

 /* Information about MAC such as address, etc... */
@@ -744,7 +877,8 @@ enum ice_pcie_bus_speed {
	ice_pcie_speed_2_5GT = 0x14,
	ice_pcie_speed_5_0GT = 0x15,
	ice_pcie_speed_8_0GT = 0x16,
-	ice_pcie_speed_16_0GT = 0x17
+	ice_pcie_speed_16_0GT = 0x17,
+	ice_pcie_speed_32_0GT = 0x18,
 };

 /* PCI bus widths */
@@ -807,6 +941,14 @@ struct ice_nvm_info {
	u8 minor;
 };

+/* Minimum Security Revision information */
+struct ice_minsrev_info {
+	u32 nvm;
+	u32 orom;
+	u8 nvm_valid : 1;
+	u8 orom_valid : 1;
+};
+
 /* Enumeration of possible flash banks for the NVM, OROM, and Netlist modules
  * of the flash image.
  */
@@ -957,6 +1099,8 @@ enum ice_rl_type {
 #define ICE_TXSCHED_GET_RL_WAKEUP_MV(p)	LE16_TO_CPU((p)->info.wake_up_calc)
 #define ICE_TXSCHED_GET_RL_ENCODE(p)	LE16_TO_CPU((p)->info.rl_encode)

+#define ICE_MAX_PORT_PER_PCI_DEV	8
+
 /* The following tree example shows the naming conventions followed under
  * ice_port_info struct for default scheduler tree topology.
 *
@@ -1103,6 +1247,7 @@ struct ice_port_info {
	u16 sw_id;			/* Initial switch ID belongs to port */
	u16 pf_vf_num;
	u8 port_state;
+	u8 loopback_mode;
 #define ICE_SCHED_PORT_STATE_INIT	0x0
 #define ICE_SCHED_PORT_STATE_READY	0x1
	u8 lport;
@@ -1117,6 +1262,7 @@ struct ice_port_info {
	struct ice_bw_type_info tc_node_bw_t_info[ICE_MAX_TRAFFIC_CLASS];
	struct ice_qos_cfg qos_cfg;
	u8 is_vf:1;
+	u8 is_custom_tx_enabled:1;
 };

 struct ice_switch_info {
@@ -1128,11 +1274,12 @@ struct ice_switch_info {
	ice_declare_bitmap(prof_res_bm[ICE_MAX_NUM_PROFILES], ICE_MAX_FV_WORDS);
 };

-/* PHY configuration */
-enum ice_phy_cfg {
-	ICE_PHY_E810 = 1,
+/* PHY model */
+enum ice_phy_model {
+	ICE_PHY_UNSUP = -1,
+	ICE_PHY_E810 = 1,
	ICE_PHY_E822,
-	ICE_PHY_ETH56G,
+	ICE_PHY_E830,
 };

 /* Port hardware description */
@@ -1151,6 +1298,7 @@ struct ice_hw {
	enum ice_mac_type mac_type;

	u16 fd_ctr_base;	/* FD counter base index */
+	u16 fw_vsi_num;
	/* pci info */
	u16 device_id;
	u16 vendor_id;
@@ -1159,7 +1307,9 @@ struct ice_hw {
	u8 revision_id;
	u8 pf_id;		/* device profile info */

-	enum ice_phy_cfg phy_cfg;
+	enum ice_phy_model phy_model;
+	u8 phy_ports;
+	u8 max_phy_port;
	u8 logical_pf_id;

	u16 max_burst_size;	/* driver sets this value */
@@ -1193,7 +1343,6 @@ struct ice_hw {
				void *buf, u16 buf_size);
	void *aq_send_cmd_param;
	u8 dcf_enabled;		/* Device Config Function */
-	u8 api_branch;		/* API branch version */
	u8 api_maj_ver;		/* API major version */
	u8 api_min_ver;		/* API minor version */
@@ -1234,20 +1383,14 @@ struct ice_hw {
 #define ICE_PORTS_PER_PHY_E810	4
 #define ICE_NUM_EXTERNAL_PORTS	(ICE_MAX_QUAD * ICE_PORTS_PER_QUAD)

-	/* bitmap of enabled logical ports */
-	u32 ena_lports;
-
	/* Active package version (currently active) */
	struct ice_pkg_ver active_pkg_ver;
	u32 pkg_seg_id;
	u32 pkg_sign_type;
	u32 active_track_id;
-	u8 pkg_has_signing_seg:1;
	u8 active_pkg_name[ICE_PKG_NAME_SIZE];
	u8 active_pkg_in_nvm;

-	enum ice_aq_err pkg_dwnld_status;
-
	/* Driver's package ver - (from the Ice Metadata section) */
	struct ice_pkg_ver pkg_ver;
	u8 pkg_name[ICE_PKG_NAME_SIZE];
@@ -1266,6 +1409,7 @@ struct ice_hw {
	/* tunneling info */
	struct ice_lock tnl_lock;
	struct ice_tunnel_table tnl;
+
	/* dvm boost update information */
	struct ice_dvm_table dvm_upd;
@@ -1292,9 +1436,13 @@ struct ice_hw {
	ice_declare_bitmap(fdir_perfect_fltr, ICE_FLTR_PTYPE_MAX);
	struct ice_lock rss_locks;	/* protect RSS configuration */
	struct LIST_HEAD_TYPE rss_list_head;
+	u16 vsi_owning_pf_lut;		/* SW IDX of VSI that acquired PF RSS LUT */
	ice_declare_bitmap(hw_ptype, ICE_FLOW_PTYPE_MAX);
	u8 dvm_ena;

	u16 io_expander_handle;
+
+	bool subscribable_recipes_supported;
+	bool skip_clear_pf;
 };

 /* Statistics collected by each port, VSI, VEB, and S-channel */
@@ -1377,6 +1525,7 @@ enum ice_sw_fwd_act_type {
	ICE_FWD_TO_QGRP,
	ICE_SET_MARK,
	ICE_DROP_PACKET,
+	ICE_LG_ACTION,
	ICE_INVAL_ACT
 };

@@ -1546,16 +1695,27 @@ struct ice_aq_get_set_rss_lut_params {
 /* AQ API version for report default configuration */
 #define ICE_FW_API_REPORT_DFLT_CFG_MAJ		1
 #define ICE_FW_API_REPORT_DFLT_CFG_MIN		7
-
 #define ICE_FW_API_REPORT_DFLT_CFG_PATCH	3

+/* FW branch number for hardware families */
+#define ICE_FW_VER_BRANCH_E82X			0
+#define ICE_FW_VER_BRANCH_E810			1
+
 /* FW version for FEC disable in Auto FEC mode */
-#define ICE_FW_FEC_DIS_AUTO_BRANCH		1
 #define ICE_FW_FEC_DIS_AUTO_MAJ			7
 #define ICE_FW_FEC_DIS_AUTO_MIN			0
 #define ICE_FW_FEC_DIS_AUTO_PATCH		5
+#define ICE_FW_FEC_DIS_AUTO_MAJ_E82X		7
+#define ICE_FW_FEC_DIS_AUTO_MIN_E82X		1
+#define ICE_FW_FEC_DIS_AUTO_PATCH_E82X		2

 /* AQ API version for FW auto drop reports */
 #define ICE_FW_API_AUTO_DROP_MAJ		1
 #define ICE_FW_API_AUTO_DROP_MIN		4
+
+static inline bool
+ice_is_nac_dual(struct ice_hw *hw)
+{
+	return !!(hw->dev_caps.nac_topo.mode & ICE_NAC_TOPO_DUAL_M);
+}
 #endif /* _ICE_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_vf_mbx.c b/drivers/net/ice/base/ice_vf_mbx.c
new file mode 100644
index 0000000000..7faf002754
--- /dev/null
+++ b/drivers/net/ice/base/ice_vf_mbx.c
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2023 Intel Corporation
+ */
+
diff --git a/drivers/net/ice/base/ice_vf_mbx.h b/drivers/net/ice/base/ice_vf_mbx.h
new file mode 100644
index 0000000000..7faf002754
--- /dev/null
+++ b/drivers/net/ice/base/ice_vf_mbx.h
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2023 Intel Corporation
+ */
+
diff --git a/drivers/net/ice/base/ice_vlan_mode.c b/drivers/net/ice/base/ice_vlan_mode.c
index 33759e4b8a..cac3842c9e 100644
--- a/drivers/net/ice/base/ice_vlan_mode.c
+++ b/drivers/net/ice/base/ice_vlan_mode.c
@@ -10,13 +10,13 @@
 * @hw: pointer to the HW struct
 * @dvm: output variable to determine if DDP supports DVM(true) or SVM(false)
 */
-static enum ice_status
+static int
 ice_pkg_get_supported_vlan_mode(struct ice_hw *hw, bool *dvm)
 {
	u16 meta_init_size = sizeof(struct ice_meta_init_section);
	struct ice_meta_init_section *sect;
	struct ice_buf_build *bld;
-	enum ice_status status;
+	int status;

	/* if anything fails, we assume there is no DVM support */
	*dvm = false;
@@ -61,7 +61,7 @@ ice_pkg_get_supported_vlan_mode(struct ice_hw *hw, bool *dvm)
 *
 * Get VLAN Mode Parameters (0x020D)
 */
-static enum ice_status
+static int
 ice_aq_get_vlan_mode(struct ice_hw *hw,
		     struct ice_aqc_get_vlan_mode *get_params)
 {
@@ -91,7 +91,7 @@ ice_aq_get_vlan_mode(struct ice_hw *hw,
 static bool ice_aq_is_dvm_ena(struct ice_hw *hw)
 {
	struct ice_aqc_get_vlan_mode get_params = { 0 };
-	enum ice_status status;
+	int status;

	status = ice_aq_get_vlan_mode(hw, &get_params);
	if (status) {
@@ -136,7 +136,7 @@ static void ice_cache_vlan_mode(struct ice_hw *hw)
 */
 static bool ice_pkg_supports_dvm(struct ice_hw *hw)
 {
-	enum ice_status status;
+	int status;
	bool pkg_supports_dvm;

	status = ice_pkg_get_supported_vlan_mode(hw, &pkg_supports_dvm);
@@ -156,7 +156,7 @@ static bool ice_pkg_supports_dvm(struct ice_hw *hw)
 static bool ice_fw_supports_dvm(struct ice_hw *hw)
 {
	struct ice_aqc_get_vlan_mode get_vlan_mode = { 0 };
-	enum ice_status status;
+	int status;

	/* If firmware returns success, then it supports DVM, else it only
	 * supports SVM
@@ -242,13 +242,13 @@ static struct ice_update_recipe_lkup_idx_params ice_dvm_dflt_recipes[] = {
 * ice_dvm_update_dflt_recipes - update default switch recipes in DVM
 * @hw: hardware structure used to update the recipes
 */
-static enum ice_status ice_dvm_update_dflt_recipes(struct ice_hw *hw)
+static int ice_dvm_update_dflt_recipes(struct ice_hw *hw)
 {
	unsigned long i;

	for (i = 0; i < ARRAY_SIZE(ice_dvm_dflt_recipes); i++) {
		struct ice_update_recipe_lkup_idx_params *params;
-		enum ice_status status;
+		int status;

		params = &ice_dvm_dflt_recipes[i];
@@ -262,7 +262,7 @@ static enum ice_status ice_dvm_update_dflt_recipes(struct ice_hw *hw)
		}
	}

-	return ICE_SUCCESS;
+	return 0;
 }

 /**
@@ -272,7 +272,7 @@ static enum ice_status ice_dvm_update_dflt_recipes(struct ice_hw *hw)
 *
 * Set VLAN Mode Parameters (0x020C)
 */
-static enum ice_status
+static int
 ice_aq_set_vlan_mode(struct ice_hw *hw,
		     struct ice_aqc_set_vlan_mode *set_params)
 {
@@ -307,10 +307,10 @@ ice_aq_set_vlan_mode(struct ice_hw *hw,
 * ice_set_dvm - sets up software and hardware for double VLAN mode
 * @hw: pointer to the hardware structure
 */
-static enum ice_status ice_set_dvm(struct ice_hw *hw)
+static int ice_set_dvm(struct ice_hw *hw)
 {
	struct ice_aqc_set_vlan_mode params = { 0 };
-	enum ice_status status;
+	int status;

	params.l2tag_prio_tagging = ICE_AQ_VLAN_PRIO_TAG_OUTER_CTAG;
	params.rdma_packet = ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING;
@@ -345,17 +345,17 @@ static enum ice_status ice_set_dvm(struct ice_hw *hw)
		return status;
	}

-	return ICE_SUCCESS;
+	return 0;
 }

 /**
 * ice_set_svm - set single VLAN mode
 * @hw: pointer to the HW structure
 */
-static enum ice_status ice_set_svm(struct ice_hw *hw)
+static int ice_set_svm(struct ice_hw *hw)
 {
	struct ice_aqc_set_vlan_mode *set_params;
-	enum ice_status status;
+	int status;

	status = ice_aq_set_port_params(hw->port_info, 0, false, false, false, NULL);
	if (status) {
@@ -385,19 +385,19 @@ static enum ice_status ice_set_svm(struct ice_hw *hw)
 * ice_set_vlan_mode
 * @hw: pointer to the HW structure
 */
-enum ice_status ice_set_vlan_mode(struct ice_hw *hw)
+int ice_set_vlan_mode(struct ice_hw *hw)
 {
	/* DCF only has the ability to query the VLAN mode. Setting the VLAN
	 * mode is done by the PF.
	 */
	if (hw->dcf_enabled)
-		return ICE_SUCCESS;
+		return 0;

	if (!ice_is_dvm_supported(hw))
-		return ICE_SUCCESS;
+		return 0;

	if (!ice_set_dvm(hw))
-		return ICE_SUCCESS;
+		return 0;

	return ice_set_svm(hw);
 }

@@ -416,14 +416,11 @@ static void ice_print_dvm_not_supported(struct ice_hw *hw)
	bool fw_supports_dvm = ice_fw_supports_dvm(hw);

	if (!fw_supports_dvm && !pkg_supports_dvm)
-		ice_info(hw, "QinQ functionality cannot be enabled on this device. "
-			 "Update your DDP package and NVM to versions that support QinQ.\n");
+		ice_info(hw, "QinQ functionality cannot be enabled on this device. Update your DDP package and NVM to versions that support QinQ.\n");
	else if (!pkg_supports_dvm)
-		ice_info(hw, "QinQ functionality cannot be enabled on this device. "
-			 "Update your DDP package to a version that supports QinQ.\n");
+		ice_info(hw, "QinQ functionality cannot be enabled on this device. Update your DDP package to a version that supports QinQ.\n");
	else if (!fw_supports_dvm)
-		ice_info(hw, "QinQ functionality cannot be enabled on this device. "
-			 "Update your NVM to a version that supports QinQ.\n");
+		ice_info(hw, "QinQ functionality cannot be enabled on this device. Update your NVM to a version that supports QinQ.\n");
 }

 /**
diff --git a/drivers/net/ice/base/ice_vlan_mode.h b/drivers/net/ice/base/ice_vlan_mode.h
index d2380eb94b..bcb591b308 100644
--- a/drivers/net/ice/base/ice_vlan_mode.h
+++ b/drivers/net/ice/base/ice_vlan_mode.h
@@ -10,7 +10,7 @@ struct ice_hw;

 bool ice_is_dvm_ena(struct ice_hw *hw);
-enum ice_status ice_set_vlan_mode(struct ice_hw *hw);
+int ice_set_vlan_mode(struct ice_hw *hw);
 void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw);

 #endif /* _ICE_VLAN_MODE_H */
diff --git a/drivers/net/ice/base/ice_xlt_kb.c b/drivers/net/ice/base/ice_xlt_kb.c
index b8240946b4..0d5a510384 100644
--- a/drivers/net/ice/base/ice_xlt_kb.c
+++ b/drivers/net/ice/base/ice_xlt_kb.c
@@ -24,7 +24,7 @@ static void _xlt_kb_entry_dump(struct ice_hw *hw,
 }

 /**
- * ice_imem_dump - dump a xlt key build info
+ * ice_xlt_kb_dump - dump a xlt key build info
 * @hw: pointer to the hardware structure
 * @kb: key build to dump
 */
@@ -180,7 +180,7 @@ struct ice_xlt_kb *ice_xlt_kb_get_fd(struct ice_hw *hw)
 }

 /**
- * ice_xlt_kb_get_fd - create rss xlt key build
+ * ice_xlt_kb_get_rss - create rss xlt key build
 * @hw: pointer to the hardware structure
 */
 struct ice_xlt_kb *ice_xlt_kb_get_rss(struct ice_hw *hw)
diff --git a/drivers/net/ice/base/meson.build b/drivers/net/ice/base/meson.build
index 41ed2d96c6..38ddde9e8c 100644
--- a/drivers/net/ice/base/meson.build
+++ b/drivers/net/ice/base/meson.build
@@ -27,6 +27,8 @@ sources = [
        'ice_xlt_kb.c',
        'ice_parser_rt.c',
        'ice_ddp.c',
+       'ice_fwlog.c',
+       'ice_vf_mbx.c',
 ]

 error_cflags = [
diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c
index 3be819d7f8..4006e9b35d 100644
--- a/drivers/net/ice/ice_diagnose.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -437,9 +437,10 @@ static int ice_dump_switch(struct rte_eth_dev *dev, uint8_t **buff2, uint32_t *size)
 {
	struct ice_hw *hw;
+	struct ice_sq_cd *cd = NULL;
	int i = 0;
	uint16_t tbl_id = 0;
-	uint32_t tbl_idx = 0;
+	uint16_t tbl_idx = 0;
	uint8_t *buffer = *buff2;

	hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -466,10 +467,10 @@ ice_dump_switch(struct rte_eth_dev *dev, uint8_t **buff2, uint32_t *size)
		}

		res = ice_aq_get_internal_data(hw,
-					       ICE_AQC_DBG_DUMP_CLUSTER_ID_SW,
+					       ICE_AQC_DBG_DUMP_CLUSTER_ID_SW_E810,
					       tbl_id, tbl_idx, buff,
					       ICE_PKG_BUF_SIZE,
-					       &buff_size, &tbl_id, &tbl_idx, NULL);
+					       &buff_size, &tbl_id, &tbl_idx, NULL, cd);
		if (res) {
			free(buff);
@@ -481,7 +482,7 @@ ice_dump_switch(struct rte_eth_dev *dev, uint8_t **buff2, uint32_t *size)

	free(buff);

-	if (tbl_idx == 0xffffffff) {
+	if (tbl_idx == 0xffff) {
		tbl_idx = 0;
		memset(buffer, '\n', sizeof(char));
		buffer++;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 87385d2649..df1104abff 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -211,11 +211,9 @@ static const struct rte_pci_id pci_id_ice_map[] = {
	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E822L_SFP) },
	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E822L_10G_BASE_T) },
	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E822L_SGMII) },
-	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E824S) },
	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E825C_BACKPLANE) },
	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E825C_QSFP) },
	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E825C_SFP) },
-	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_C825X) },
	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E825C_SGMII) },
	{ .vendor_id = 0, /* sentinel */ },
 };
@@ -1442,7 +1440,7 @@ ice_interrupt_handler(void *param)
			    event, queue, pf_num);
	}

-	reg = ICE_READ_REG(hw, GL_MDET_TX_TCLAN);
+	reg = ICE_READ_REG(hw, E800_GL_MDET_TX_TCLAN);
	if (reg & GL_MDET_TX_TCLAN_VALID_M) {
		pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
			 GL_MDET_TX_TCLAN_PF_NUM_S;
@@ -1609,9 +1607,7 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
	TAILQ_INIT(&vsi->vlan_list);

	/* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
-	pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
-			RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
-			hw->func_caps.common_cap.rss_table_size;
+	pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size;
	pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;

	/* Defines the type of outer tag expected */
@@ -2499,12 +2495,12 @@ ice_dev_init(struct rte_eth_dev *dev)
	ice_tm_conf_init(dev);

	if (ice_is_e810(hw))
-		hw->phy_cfg = ICE_PHY_E810;
+		hw->phy_model = ICE_PHY_E810;
	else
-		hw->phy_cfg = ICE_PHY_E822;
+		hw->phy_model = ICE_PHY_E822;

-	if (hw->phy_cfg == ICE_PHY_E822) {
-		ret = ice_start_phy_timer_e822(hw, hw->pf_id, true);
+	if (hw->phy_model == ICE_PHY_E822) {
+		ret = ice_start_phy_timer_e822(hw, hw->pf_id);
		if (ret)
			PMD_INIT_LOG(ERR, "Failed to start phy timer\n");
	}
@@ -3468,7 +3464,7 @@ static int ice_init_rss(struct ice_pf *pf)

	lut_params.vsi_handle = vsi->idx;
	lut_params.lut_size = vsi->rss_lut_size;
-	lut_params.lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF;
+	lut_params.lut_type = ICE_LUT_PF;
	lut_params.lut = vsi->rss_lut;
	lut_params.global_lut_id = 0;
	ret = ice_aq_set_rss_lut(hw, &lut_params);
@@ -3785,6 +3781,7 @@ ice_dev_start(struct rte_eth_dev *dev)
	uint8_t timer = hw->func_caps.ts_func_info.tmr_index_owned;
	uint32_t pin_idx = ad->devargs.pin_idx;
	struct rte_tm_error tm_err;
+	ice_bitmap_t pmask;

	/* program Tx queues' context in hardware */
	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
@@ -3827,21 +3824,21 @@ ice_dev_start(struct rte_eth_dev *dev)
	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
	       RTE_ETH_VLAN_EXTEND_MASK | RTE_ETH_QINQ_STRIP_MASK;
+
	ret = ice_vlan_offload_set(dev, mask);
	if (ret) {
		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
		goto rx_err;
	}

+	pmask = ICE_PROMISC_BCAST_RX | ICE_PROMISC_BCAST_TX |
+		ICE_PROMISC_UCAST_TX | ICE_PROMISC_MCAST_TX;
	/* enable Rx interrupt and mapping Rx queue to interrupt vector */
	if (ice_rxq_intr_setup(dev))
		return -EIO;

	/* Enable receiving broadcast packets and transmitting packets */
-	ret = ice_set_vsi_promisc(hw, vsi->idx,
-				  ICE_PROMISC_BCAST_RX | ICE_PROMISC_BCAST_TX |
-				  ICE_PROMISC_UCAST_TX | ICE_PROMISC_MCAST_TX,
-				  0);
+	ret = ice_set_vsi_promisc(hw, vsi->idx, &pmask, 0);
	if (ret != ICE_SUCCESS)
		PMD_DRV_LOG(INFO, "fail to set vsi broadcast");

@@ -4928,7 +4925,7 @@ ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
		lut_params.vsi_handle = vsi->idx;
		lut_params.lut_size = lut_size;
-		lut_params.lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF;
+		lut_params.lut_type = ICE_LUT_PF;
		lut_params.lut = lut;
		lut_params.global_lut_id = 0;
		ret = ice_aq_get_rss_lut(hw, &lut_params);
@@ -4964,7 +4961,7 @@ ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
		lut_params.vsi_handle = vsi->idx;
		lut_params.lut_size = lut_size;
-		lut_params.lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF;
+		lut_params.lut_type = ICE_LUT_PF;
		lut_params.lut = lut;
		lut_params.global_lut_id = 0;
		ret = ice_aq_set_rss_lut(hw, &lut_params);
@@ -4996,9 +4993,9 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
	uint8_t *lut;
	int ret;

-	if (reta_size != ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128 &&
-	    reta_size != ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512 &&
-	    reta_size != ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K) {
+	if (reta_size != ICE_LUT_PF_SMALL_SIZE &&
+	    reta_size != ICE_LUT_GLOBAL_SIZE &&
+	    reta_size != ICE_LUT_PF_SIZE) {
		PMD_DRV_LOG(ERR,
			    "The size of hash lookup table configured (%d)"
			    "doesn't match the number hardware can "
@@ -5173,13 +5170,13 @@ ice_promisc_enable(struct rte_eth_dev *dev)
	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
	struct ice_vsi *vsi = pf->main_vsi;
	enum ice_status status;
-	uint8_t pmask;
+	ice_bitmap_t pmask;
	int ret = 0;

	pmask = ICE_PROMISC_UCAST_RX | ICE_PROMISC_UCAST_TX |
		ICE_PROMISC_MCAST_RX | ICE_PROMISC_MCAST_TX;

-	status = ice_set_vsi_promisc(hw, vsi->idx, pmask, 0);
+	status = ice_set_vsi_promisc(hw, vsi->idx, &pmask, 0);
	switch (status) {
	case ICE_ERR_ALREADY_EXISTS:
		PMD_DRV_LOG(DEBUG, "Promisc mode has already been enabled");
@@ -5200,7 +5197,7 @@ ice_promisc_disable(struct rte_eth_dev *dev)
	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
	struct ice_vsi *vsi = pf->main_vsi;
	enum ice_status status;
-	uint8_t pmask;
+	ice_bitmap_t pmask;
	int ret = 0;

	if (dev->data->all_multicast == 1)
@@ -5209,7 +5206,7 @@ ice_promisc_disable(struct rte_eth_dev *dev)
	pmask = ICE_PROMISC_UCAST_RX | ICE_PROMISC_UCAST_TX |
		ICE_PROMISC_MCAST_RX | ICE_PROMISC_MCAST_TX;

-	status = ice_clear_vsi_promisc(hw, vsi->idx, pmask, 0);
+	status = ice_clear_vsi_promisc(hw, vsi->idx, &pmask, 0);
	if (status != ICE_SUCCESS) {
		PMD_DRV_LOG(ERR, "Failed to clear promisc, err=%d", status);
		ret = -EAGAIN;
@@ -5225,12 +5222,12 @@ ice_allmulti_enable(struct rte_eth_dev *dev)
	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
	struct ice_vsi *vsi = pf->main_vsi;
	enum ice_status status;
-	uint8_t pmask;
+	ice_bitmap_t pmask;
	int ret = 0;

	pmask = ICE_PROMISC_MCAST_RX | ICE_PROMISC_MCAST_TX;

-	status = ice_set_vsi_promisc(hw, vsi->idx, pmask, 0);
+	status = ice_set_vsi_promisc(hw, vsi->idx, &pmask, 0);

	switch (status) {
	case ICE_ERR_ALREADY_EXISTS:
@@ -5252,7 +5249,7 @@ ice_allmulti_disable(struct rte_eth_dev *dev)
	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
	struct ice_vsi *vsi = pf->main_vsi;
	enum ice_status status;
-	uint8_t pmask;
+	ice_bitmap_t pmask;
	int ret = 0;

	if (dev->data->promiscuous == 1)
@@ -5260,7 +5257,7 @@ ice_allmulti_disable(struct rte_eth_dev *dev)

	pmask = ICE_PROMISC_MCAST_RX | ICE_PROMISC_MCAST_TX;

-	status = ice_clear_vsi_promisc(hw, vsi->idx, pmask, 0);
+	status = ice_clear_vsi_promisc(hw, vsi->idx, &pmask, 0);
	if (status != ICE_SUCCESS) {
		PMD_DRV_LOG(ERR, "Failed to clear allmulti, err=%d", status);
		ret = -EAGAIN;
@@ -6438,7 +6435,7 @@ ice_timesync_enable(struct rte_eth_dev *dev)
		return -1;
	}

-	ret = ice_ptp_write_incval(hw, ICE_PTP_NOMINAL_INCVAL_E810);
+	ret = ice_ptp_write_incval(hw, ICE_PTP_NOMINAL_INCVAL_E810, true);
	if (ret) {
		PMD_DRV_LOG(ERR,
			    "Failed to write PHC increment time value");
-- 
2.34.1