From: Rasesh Mody <rmody@marvell.com>
Cc: Rasesh Mody, Igor Russkikh
Subject: [dpdk-dev] [PATCH 4/7] net/qede/base: update base driver to 8.62.4.0
Date: Fri, 19 Feb 2021 02:14:19 -0800
Message-ID: <20210219101422.19121-5-rmody@marvell.com>
In-Reply-To: <20210219101422.19121-1-rmody@marvell.com>
References: <20210219101422.19121-1-rmody@marvell.com>
List-Id: DPDK patches and discussions

This patch updates the base driver to version 8.62.4.0. The update is required to support new hardware and the new firmware it runs, and also covers the existing hardware. It additionally includes some whitespace changes adhering to the new style policy.
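The 8.62.4.0 number is carried by the four ecore version macros that this
patch bumps in ecore.h (ECORE_MAJOR_VERSION 8, ECORE_MINOR_VERSION 62,
ECORE_REVISION_VERSION 4, ECORE_ENGINEERING_VERSION 0). A sketch of how they
compose into the dotted string, assuming a simple "%d.%d.%d.%d" format (the
actual ECORE_VERSION macro body is not shown in the hunk below):

    snprintf(buf, sizeof(buf), "%d.%d.%d.%d",
             ECORE_MAJOR_VERSION, ECORE_MINOR_VERSION,
             ECORE_REVISION_VERSION, ECORE_ENGINEERING_VERSION);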
Signed-off-by: Rasesh Mody Signed-off-by: Igor Russkikh --- drivers/net/qede/base/ecore.h | 550 +- drivers/net/qede/base/ecore_attn_values.h | 3 +- drivers/net/qede/base/ecore_cxt.c | 1184 +++-- drivers/net/qede/base/ecore_cxt.h | 140 +- drivers/net/qede/base/ecore_cxt_api.h | 31 +- drivers/net/qede/base/ecore_dcbx.c | 524 +- drivers/net/qede/base/ecore_dcbx.h | 16 +- drivers/net/qede/base/ecore_dcbx_api.h | 41 +- drivers/net/qede/base/ecore_dev.c | 3865 ++++++++++---- drivers/net/qede/base/ecore_dev_api.h | 354 +- drivers/net/qede/base/ecore_gtt_reg_addr.h | 93 +- drivers/net/qede/base/ecore_gtt_values.h | 4 +- drivers/net/qede/base/ecore_hsi_common.h | 2 +- drivers/net/qede/base/ecore_hw.c | 381 +- drivers/net/qede/base/ecore_hw.h | 55 +- drivers/net/qede/base/ecore_hw_defs.h | 45 +- drivers/net/qede/base/ecore_init_fw_funcs.c | 1224 +++-- drivers/net/qede/base/ecore_init_fw_funcs.h | 457 +- drivers/net/qede/base/ecore_init_ops.c | 143 +- drivers/net/qede/base/ecore_init_ops.h | 19 +- drivers/net/qede/base/ecore_int.c | 1313 +++-- drivers/net/qede/base/ecore_int.h | 60 +- drivers/net/qede/base/ecore_int_api.h | 125 +- drivers/net/qede/base/ecore_iov_api.h | 115 +- drivers/net/qede/base/ecore_iro.h | 427 +- drivers/net/qede/base/ecore_iro_values.h | 463 +- drivers/net/qede/base/ecore_l2.c | 485 +- drivers/net/qede/base/ecore_l2.h | 18 +- drivers/net/qede/base/ecore_l2_api.h | 148 +- drivers/net/qede/base/ecore_mcp.c | 2610 +++++++--- drivers/net/qede/base/ecore_mcp.h | 124 +- drivers/net/qede/base/ecore_mcp_api.h | 467 +- drivers/net/qede/base/ecore_mng_tlv.c | 910 ++-- drivers/net/qede/base/ecore_proto_if.h | 69 +- drivers/net/qede/base/ecore_rt_defs.h | 895 ++-- drivers/net/qede/base/ecore_sp_api.h | 6 +- drivers/net/qede/base/ecore_sp_commands.c | 138 +- drivers/net/qede/base/ecore_sp_commands.h | 18 +- drivers/net/qede/base/ecore_spq.c | 330 +- drivers/net/qede/base/ecore_spq.h | 63 +- drivers/net/qede/base/ecore_sriov.c | 1660 ++++-- drivers/net/qede/base/ecore_sriov.h | 146 +- drivers/net/qede/base/ecore_status.h | 4 +- drivers/net/qede/base/ecore_utils.h | 18 +- drivers/net/qede/base/ecore_vf.c | 546 +- drivers/net/qede/base/ecore_vf.h | 54 +- drivers/net/qede/base/ecore_vf_api.h | 72 +- drivers/net/qede/base/ecore_vfpf_if.h | 117 +- drivers/net/qede/base/eth_common.h | 294 +- drivers/net/qede/base/mcp_public.h | 2341 ++++++--- drivers/net/qede/base/nvm_cfg.h | 5059 ++++++++++--------- drivers/net/qede/qede_debug.c | 31 +- drivers/net/qede/qede_ethdev.c | 2 +- drivers/net/qede/qede_rxtx.c | 2 +- 54 files changed, 18340 insertions(+), 9921 deletions(-) diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h index 3fb8fd2ce..9801d5348 100644 --- a/drivers/net/qede/base/ecore.h +++ b/drivers/net/qede/base/ecore.h @@ -1,19 +1,28 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. 
- * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_H #define __ECORE_H +#include + +#define ECORE_ETH_ALEN ETH_ALEN +#define ECORE_ETH_P_8021Q ETH_P_8021Q +#define ECORE_ETH_P_8021AD ETH_P_8021AD +#define UEFI + /* @DPDK */ +#define CONFIG_ECORE_BINARY_FW +#undef CONFIG_ECORE_ZIPPED_FW + +#ifdef CONFIG_ECORE_BINARY_FW #include #include #include - -#define CONFIG_ECORE_BINARY_FW -#undef CONFIG_ECORE_ZIPPED_FW +#endif #ifdef CONFIG_ECORE_ZIPPED_FW #include @@ -29,8 +38,8 @@ #include "mcp_public.h" #define ECORE_MAJOR_VERSION 8 -#define ECORE_MINOR_VERSION 40 -#define ECORE_REVISION_VERSION 26 +#define ECORE_MINOR_VERSION 62 +#define ECORE_REVISION_VERSION 4 #define ECORE_ENGINEERING_VERSION 0 #define ECORE_VERSION \ @@ -49,14 +58,23 @@ #define ECORE_WFQ_UNIT 100 #include "../qede_logs.h" /* @DPDK */ -#define ISCSI_BDQ_ID(_port_id) (_port_id) -#define FCOE_BDQ_ID(_port_id) (_port_id + 2) /* Constants */ #define ECORE_WID_SIZE (1024) #define ECORE_MIN_WIDS (4) /* Configurable */ #define ECORE_PF_DEMS_SIZE (4) +#define ECORE_VF_DEMS_SIZE (32) +#define ECORE_MIN_DPIS (4) /* The minimal number of DPIs required to + * load the driver. The number was + * arbitrarily set. + */ +/* Derived */ +#define ECORE_MIN_PWM_REGION (ECORE_WID_SIZE * ECORE_MIN_DPIS) + +#define ECORE_CXT_PF_CID (0xff) + +#define ECORE_HW_STOP_RETRY_LIMIT (10) /* cau states */ enum ecore_coalescing_mode { @@ -71,23 +89,29 @@ enum ecore_nvm_cmd { ECORE_NVM_WRITE_NVRAM = DRV_MSG_CODE_NVM_WRITE_NVRAM, ECORE_NVM_DEL_FILE = DRV_MSG_CODE_NVM_DEL_FILE, ECORE_EXT_PHY_FW_UPGRADE = DRV_MSG_CODE_EXT_PHY_FW_UPGRADE, - ECORE_NVM_SET_SECURE_MODE = DRV_MSG_CODE_SET_SECURE_MODE, ECORE_PHY_RAW_READ = DRV_MSG_CODE_PHY_RAW_READ, ECORE_PHY_RAW_WRITE = DRV_MSG_CODE_PHY_RAW_WRITE, ECORE_PHY_CORE_READ = DRV_MSG_CODE_PHY_CORE_READ, ECORE_PHY_CORE_WRITE = DRV_MSG_CODE_PHY_CORE_WRITE, + ECORE_ENCRYPT_PASSWORD = DRV_MSG_CODE_ENCRYPT_PASSWORD, ECORE_GET_MCP_NVM_RESP = 0xFFFFFF00 }; -#ifndef LINUX_REMOVE -#if !defined(CONFIG_ECORE_L2) +#if !defined(CONFIG_ECORE_L2) && !defined(CONFIG_ECORE_ROCE) && \ + !defined(CONFIG_ECORE_FCOE) && !defined(CONFIG_ECORE_ISCSI) && \ + !defined(CONFIG_ECORE_IWARP) && !defined(CONFIG_ECORE_OOO) #define CONFIG_ECORE_L2 #define CONFIG_ECORE_SRIOV -#endif +#define CONFIG_ECORE_ROCE +#define CONFIG_ECORE_IWARP +#define CONFIG_ECORE_FCOE +#define CONFIG_ECORE_ISCSI +#define CONFIG_ECORE_LL2 +#define CONFIG_ECORE_OOO #endif /* helpers */ -#ifndef __EXTRACT__LINUX__ +#ifndef __EXTRACT__LINUX__IF__ #define MASK_FIELD(_name, _value) \ ((_value) &= (_name##_MASK)) @@ -103,14 +127,16 @@ do { \ #define GET_FIELD(value, name) \ (((value) >> (name##_SHIFT)) & name##_MASK) -#define GET_MFW_FIELD(name, field) \ +#define GET_MFW_FIELD(name, field) \ (((name) & (field ## _MASK)) >> (field ## _OFFSET)) #define SET_MFW_FIELD(name, field, value) \ do { \ - (name) &= ~((field ## _MASK)); \ + (name) &= ~(field ## _MASK); \ (name) |= (((value) << (field ## _OFFSET)) & (field ## _MASK)); \ } while (0) + +#define DB_ADDR_SHIFT(addr) ((addr) << DB_PWM_ADDR_OFFSET_SHIFT) #endif static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS) @@ -121,7 +147,7 @@ static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS) return db_addr; } -static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS) +static OSAL_INLINE u32 DB_ADDR_VF_E4(u32 cid, u32 DEMS) { u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) | FIELD_VALUE(DB_LEGACY_ADDR_ICID, cid); @@ -129,6 +155,17 @@ static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS) return db_addr; } +static OSAL_INLINE u32 
DB_ADDR_VF_E5(u32 cid, u32 DEMS) +{ + u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) | + (cid * ECORE_VF_DEMS_SIZE); + + return db_addr; +} + +#define DB_ADDR_VF(dev, cid, DEMS) \ + (ECORE_IS_E4(dev) ? DB_ADDR_VF_E4(cid, DEMS) : DB_ADDR_VF_E5(cid, DEMS)) + #define ALIGNED_TYPE_SIZE(type_name, p_hwfn) \ ((sizeof(type_name) + (u32)(1 << (p_hwfn->p_dev->cache_shift)) - 1) & \ ~((1 << (p_hwfn->p_dev->cache_shift)) - 1)) @@ -143,7 +180,84 @@ static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS) #endif #endif -#ifndef __EXTRACT__LINUX__ +#ifndef __EXTRACT__LINUX__IF__ +#define ECORE_INT_DEBUG_SIZE_DEF _MB(2) +struct ecore_internal_trace { + char *buf; + u32 size; + u64 prod; + osal_spinlock_t lock; +}; + +#define ECORE_DP_INT_LOG_MAX_STR_SIZE 256 +#define ECORE_DP_INT_LOG_DEFAULT_MASK (0xffffc3ff) + +#ifndef UEFI +/* Debug print definitions */ +#define DP_INT_LOG(P_DEV, LEVEL, MODULE, fmt, ...) \ +do { \ + if (OSAL_UNLIKELY((P_DEV)->dp_int_level > (LEVEL))) \ + break; \ + if (OSAL_UNLIKELY((P_DEV)->dp_int_level == ECORE_LEVEL_VERBOSE) && \ + ((LEVEL) == ECORE_LEVEL_VERBOSE) && \ + ((P_DEV)->dp_int_module & (MODULE)) == 0) \ + break; \ + \ + OSAL_INT_DBG_STORE(P_DEV, fmt, \ + __func__, __LINE__, \ + (P_DEV)->name ? (P_DEV)->name : "", \ + ##__VA_ARGS__); \ +} while (0) + +#define DP_ERR(P_DEV, fmt, ...) \ +do { \ + DP_INT_LOG((P_DEV), ECORE_LEVEL_ERR, 0, \ + "ERR: [%s:%d(%s)]" fmt, ##__VA_ARGS__); \ + PRINT_ERR((P_DEV)->dp_ctx, "[%s:%d(%s)]" fmt, \ + __func__, __LINE__, \ + (P_DEV)->name ? (P_DEV)->name : "", \ + ##__VA_ARGS__); \ +} while (0) + +#define DP_NOTICE(P_DEV, is_assert, fmt, ...) \ +do { \ + DP_INT_LOG((P_DEV), ECORE_LEVEL_NOTICE, 0, \ + "NOTICE: [%s:%d(%s)]" fmt, ##__VA_ARGS__); \ + if (OSAL_UNLIKELY((P_DEV)->dp_level <= ECORE_LEVEL_NOTICE)) { \ + PRINT((P_DEV)->dp_ctx, "[%s:%d(%s)]" fmt, \ + __func__, __LINE__, \ + (P_DEV)->name ? (P_DEV)->name : "", \ + ##__VA_ARGS__); \ + OSAL_ASSERT(!(is_assert)); \ + } \ +} while (0) + +#define DP_INFO(P_DEV, fmt, ...) \ +do { \ + DP_INT_LOG((P_DEV), ECORE_LEVEL_INFO, 0, \ + "INFO: [%s:%d(%s)]" fmt, ##__VA_ARGS__); \ + if (OSAL_UNLIKELY((P_DEV)->dp_level <= ECORE_LEVEL_INFO)) { \ + PRINT((P_DEV)->dp_ctx, "[%s:%d(%s)]" fmt, \ + __func__, __LINE__, \ + (P_DEV)->name ? (P_DEV)->name : "", \ + ##__VA_ARGS__); \ + } \ +} while (0) + +#define DP_VERBOSE(P_DEV, module, fmt, ...) \ +do { \ + DP_INT_LOG((P_DEV), ECORE_LEVEL_VERBOSE, module, \ + "VERBOSE: [%s:%d(%s)]" fmt, ##__VA_ARGS__); \ + if (OSAL_UNLIKELY(((P_DEV)->dp_level <= ECORE_LEVEL_VERBOSE) && \ + ((P_DEV)->dp_module & module))) { \ + PRINT((P_DEV)->dp_ctx, "[%s:%d(%s)]" fmt, \ + __func__, __LINE__, \ + (P_DEV)->name ? (P_DEV)->name : "", \ + ##__VA_ARGS__); \ + } \ +} while (0) +#endif + enum DP_LEVEL { ECORE_LEVEL_VERBOSE = 0x0, ECORE_LEVEL_INFO = 0x1, @@ -181,6 +295,7 @@ enum DP_MODULE { ECORE_MSG_SP = 0x100000, ECORE_MSG_STORAGE = 0x200000, ECORE_MSG_OOO = 0x200000, + ECORE_MSG_FS = 0x400000, ECORE_MSG_CXT = 0x800000, ECORE_MSG_LL2 = 0x1000000, ECORE_MSG_ILT = 0x2000000, @@ -188,13 +303,49 @@ enum DP_MODULE { ECORE_MSG_DEBUG = 0x8000000, /* to be added...up to 0x8000000 */ }; + +/** + * @brief Convert from 32b debug param to two params of level and module + + * @param debug + * @param p_dp_module + * @param p_dp_level + * @return void + * + * @note Input 32b decoding: + * b31 - enable all NOTICE prints. NOTICE prints are for deviation from + * the 'happy' flow, e.g. memory allocation failed. + * b30 - enable all INFO prints. 
INFO prints are for major steps in the + * flow and provide important parameters. + * b29-b0 - per-module bitmap, where each bit enables VERBOSE prints of + * that module. VERBOSE prints are for tracking the specific flow in low + * level. + * + * Notice that the level should be that of the lowest required logs. + */ +static OSAL_INLINE void ecore_config_debug(u32 debug, u32 *p_dp_module, + u8 *p_dp_level) +{ + *p_dp_level = ECORE_LEVEL_NOTICE; + *p_dp_module = 0; + + if (debug & ECORE_LOG_VERBOSE_MASK) { + *p_dp_level = ECORE_LEVEL_VERBOSE; + *p_dp_module = (debug & 0x3FFFFFFF); + } else if (debug & ECORE_LOG_INFO_MASK) { + *p_dp_level = ECORE_LEVEL_INFO; + } else if (debug & ECORE_LOG_NOTICE_MASK) { + *p_dp_level = ECORE_LEVEL_NOTICE; + } +} + #endif #define for_each_hwfn(p_dev, i) for (i = 0; i < p_dev->num_hwfns; i++) #define D_TRINE(val, cond1, cond2, true1, true2, def) \ - (val == (cond1) ? true1 : \ - (val == (cond2) ? true2 : def)) + ((val) == (cond1) ? (true1) : \ + ((val) == (cond2) ? (true2) : (def))) /* forward */ struct ecore_ptt_pool; @@ -210,6 +361,8 @@ struct ecore_igu_info; struct ecore_mcp_info; struct ecore_dcbx_info; struct ecore_llh_info; +struct ecore_fs_info_e4; +struct ecore_fs_info_e5; struct ecore_rt_data { u32 *init_val; @@ -233,6 +386,13 @@ enum ecore_tunn_clss { MAX_ECORE_TUNN_CLSS, }; +#ifndef __EXTRACT__LINUX__IF__ +enum ecore_tcp_ip_version { + ECORE_TCP_IPV4, + ECORE_TCP_IPV6, +}; +#endif + struct ecore_tunn_update_type { bool b_update_mode; bool b_mode_enabled; @@ -256,6 +416,9 @@ struct ecore_tunnel_info { bool b_update_rx_cls; bool b_update_tx_cls; + + bool update_non_l2_vxlan; + bool non_l2_vxlan_enable; }; /* The PCI personality is not quite synonymous to protocol ID: @@ -279,6 +442,15 @@ struct ecore_qm_iids { u32 tids; }; +/* The PCI relax ordering is either taken care by management FW or can be + * enable/disable by ecore client. + */ +enum ecore_pci_rlx_odr { + ECORE_DEFAULT_RLX_ODR, + ECORE_ENABLE_RLX_ODR, + ECORE_DISABLE_RLX_ODR +}; + #define MAX_PF_PER_PORT 8 /* HW / FW resources, output of features supported below, most information @@ -292,12 +464,16 @@ enum ecore_resources { ECORE_RL, ECORE_MAC, ECORE_VLAN, + ECORE_VF_RDMA_CNQ_RAM, ECORE_RDMA_CNQ_RAM, ECORE_ILT, - ECORE_LL2_QUEUE, + ECORE_LL2_RAM_QUEUE, + ECORE_LL2_CTX_QUEUE, ECORE_CMDQS_CQS, ECORE_RDMA_STATS_QUEUE, ECORE_BDQ, + ECORE_VF_MAC_ADDR, + ECORE_GFS_PROFILE, /* This is needed only internally for matching against the IGU. * In case of legacy MFW, would be set to `0'. 
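/*
 * Usage sketch for ecore_config_debug() from the hunk above (hypothetical
 * caller, not part of the patch). The ECORE_LOG_*_MASK values are assumed
 * from the documented layout: b31 = NOTICE, b30 = INFO, b29-b0 = per-module
 * VERBOSE bits.
 */
static void example_config_debug(void)
{
	u32 dp_module;
	u8 dp_level;

	/* Any bit in b29-b0 selects VERBOSE and keeps those module bits. */
	ecore_config_debug(ECORE_MSG_ILT | ECORE_MSG_SP, &dp_module, &dp_level);
	/* dp_level == ECORE_LEVEL_VERBOSE,
	 * dp_module == (ECORE_MSG_ILT | ECORE_MSG_SP)
	 */

	/* b30 alone enables all INFO prints; no module bits are kept. */
	ecore_config_debug(1U << 30, &dp_module, &dp_level);
	/* dp_level == ECORE_LEVEL_INFO, dp_module == 0 */
}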
@@ -317,6 +493,7 @@ enum ecore_feature { ECORE_EXTRA_VF_QUE, ECORE_VMQ, ECORE_RDMA_CNQ, + ECORE_VF_RDMA_CNQ, ECORE_ISCSI_CQ, ECORE_FCOE_CQ, ECORE_VF_L2_QUE, @@ -335,6 +512,11 @@ enum ecore_port_mode { ECORE_PORT_MODE_DE_1X25G, ECORE_PORT_MODE_DE_4X25G, ECORE_PORT_MODE_DE_2X10G, + ECORE_PORT_MODE_DE_2X50G_R1, + ECORE_PORT_MODE_DE_4X50G_R1, + ECORE_PORT_MODE_DE_1X100G_R2, + ECORE_PORT_MODE_DE_2X100G_R2, + ECORE_PORT_MODE_DE_1X100G_R4, }; enum ecore_dev_cap { @@ -345,7 +527,7 @@ enum ecore_dev_cap { ECORE_DEV_CAP_IWARP }; -#ifndef __EXTRACT__LINUX__ +#ifndef __EXTRACT__LINUX__IF__ enum ecore_hw_err_type { ECORE_HW_ERR_FAN_FAIL, ECORE_HW_ERR_MFW_RESP_FAIL, @@ -356,10 +538,9 @@ enum ecore_hw_err_type { }; #endif -enum ecore_db_rec_exec { - DB_REC_DRY_RUN, - DB_REC_REAL_DEAL, - DB_REC_ONCE, +enum ecore_wol_support { + ECORE_WOL_SUPPORT_NONE, + ECORE_WOL_SUPPORT_PME, }; struct ecore_hw_info { @@ -382,7 +563,9 @@ struct ecore_hw_info { ((dev)->hw_info.personality == ECORE_PCI_FCOE) #define ECORE_IS_ISCSI_PERSONALITY(dev) \ ((dev)->hw_info.personality == ECORE_PCI_ISCSI) - +#define ECORE_IS_NVMETCP_PERSONALITY(dev) \ + ((dev)->hw_info.personality == ECORE_PCI_ISCSI && \ + (dev)->is_nvmetcp) /* Resource Allocation scheme results */ u32 resc_start[ECORE_MAX_RESC]; u32 resc_num[ECORE_MAX_RESC]; @@ -397,23 +580,22 @@ struct ecore_hw_info { /* Amount of traffic classes HW supports */ u8 num_hw_tc; -/* Amount of TCs which should be active according to DCBx or upper layer driver - * configuration - */ - +/* Amount of TCs which should be active according to DCBx or upper layer driver configuration */ u8 num_active_tc; /* The traffic class used by PF for it's offloaded protocol */ u8 offload_tc; + bool offload_tc_set; + + bool multi_tc_roce_en; +#define IS_ECORE_MULTI_TC_ROCE(p_hwfn) (!!((p_hwfn)->hw_info.multi_tc_roce_en)) u32 concrete_fid; u16 opaque_fid; u16 ovlan; u32 part_num[4]; - unsigned char hw_mac_addr[ETH_ALEN]; - u64 node_wwn; /* For FCoE only */ - u64 port_wwn; /* For FCoE only */ + unsigned char hw_mac_addr[ECORE_ETH_ALEN]; u16 num_iscsi_conns; u16 num_fcoe_conns; @@ -424,12 +606,16 @@ struct ecore_hw_info { u32 port_mode; u32 hw_mode; - u32 device_capabilities; + u32 device_capabilities; /* @DPDK */ +#ifndef __EXTRACT__LINUX__THROW__ /* Default DCBX mode */ u8 dcbx_mode; +#endif u16 mtu; + + enum ecore_wol_support b_wol_support; }; /* maximun size of read/write commands (HW limit) */ @@ -470,38 +656,71 @@ struct ecore_wfq_data { #define OFLD_GRP_SIZE 4 +struct ecore_offload_pq { + u8 port; + u8 tc; +}; + struct ecore_qm_info { struct init_qm_pq_params *qm_pq_params; struct init_qm_vport_params *qm_vport_params; struct init_qm_port_params *qm_port_params; u16 start_pq; - u8 start_vport; + u16 start_vport; + u16 start_rl; u16 pure_lb_pq; - u16 offload_pq; + u16 first_ofld_pq; + u16 first_llt_pq; u16 pure_ack_pq; u16 ooo_pq; + u16 single_vf_rdma_pq; u16 first_vf_pq; u16 first_mcos_pq; u16 first_rl_pq; + u16 first_ofld_grp_pq; u16 num_pqs; u16 num_vf_pqs; + u16 ilt_pf_pqs; u8 num_vports; + u8 num_rls; u8 max_phys_tcs_per_port; u8 ooo_tc; + bool pq_overflow; bool pf_rl_en; bool pf_wfq_en; bool vport_rl_en; bool vport_wfq_en; + bool vf_rdma_en; +#define IS_ECORE_QM_VF_RDMA(_p_hwfn) ((_p_hwfn)->qm_info.vf_rdma_en) u8 pf_wfq; u32 pf_rl; struct ecore_wfq_data *wfq_data; u8 num_pf_rls; + struct ecore_offload_pq offload_group[OFLD_GRP_SIZE]; + u8 offload_group_count; +#define IS_ECORE_OFLD_GRP(p_hwfn) ((p_hwfn)->qm_info.offload_group_count > 0) + + /* Locks PQ getters against QM info initialization */ + 
osal_spinlock_t qm_info_lock; }; +#define ECORE_OVERFLOW_BIT 1 + struct ecore_db_recovery_info { - osal_list_t list; - osal_spinlock_t lock; - u32 db_recovery_counter; + osal_list_t list; + osal_spinlock_t lock; + u32 count; + + /* PF doorbell overflow sticky indicator was cleared in the DORQ + * attention callback, but still needs to execute doorbell recovery. + * Full (REAL_DEAL) dorbell recovery is executed in the periodic + * handler. + * This value doesn't require a lock but must use atomic operations. + */ + u32 overflow; /* @DPDK*/ + + /* Indicates that DORQ attention was handled in ecore_int_deassertion */ + bool dorq_attn; }; struct storm_stats { @@ -553,18 +772,32 @@ enum ecore_mf_mode_bit { /* Use stag for steering */ ECORE_MF_8021AD_TAGGING, + /* Allow DSCP to TC mapping */ + ECORE_MF_DSCP_TO_TC_MAP, + /* Allow FIP discovery fallback */ ECORE_MF_FIP_SPECIAL, + + /* Do not insert a vlan tag with id 0 */ + ECORE_MF_DONT_ADD_VLAN0_TAG, + + /* Allow VF RDMA */ + ECORE_MF_VF_RDMA, + + /* Allow RoCE LAG */ + ECORE_MF_ROCE_LAG, }; enum ecore_ufp_mode { ECORE_UFP_MODE_ETS, ECORE_UFP_MODE_VNIC_BW, + ECORE_UFP_MODE_UNKNOWN }; enum ecore_ufp_pri_type { ECORE_UFP_PRI_OS, - ECORE_UFP_PRI_VNIC + ECORE_UFP_PRI_VNIC, + ECORE_UFP_PRI_UNKNOWN }; struct ecore_ufp_info { @@ -578,16 +811,85 @@ enum BAR_ID { BAR_ID_1 /* Used for doorbells */ }; +#ifndef __EXTRACT__LINUX__IF__ +enum ecore_lag_type { + ECORE_LAG_TYPE_NONE, + ECORE_LAG_TYPE_ACTIVEACTIVE, + ECORE_LAG_TYPE_ACTIVEBACKUP +}; +#endif + struct ecore_nvm_image_info { u32 num_images; struct bist_nvm_image_att *image_att; bool valid; }; +#define LAG_MAX_PORT_NUM 2 + +struct ecore_lag_info { + enum ecore_lag_type lag_type; + void (*link_change_cb)(void *cxt); + void *cxt; + u8 port_num; + u32 active_ports; /* @DPDK */ + u8 first_port; + u8 second_port; + bool is_master; + u8 master_pf; +}; + +/* PWM region specific data */ +struct ecore_dpi_info { + u16 wid_count; + u32 dpi_size; + u32 dpi_count; + u32 dpi_start_offset; /* this is used to calculate + * the doorbell address + */ + u32 dpi_bit_shift_addr; +}; + +struct ecore_common_dpm_info { + u8 db_bar_no_edpm; + u8 mfw_no_edpm; + bool vf_cfg; +}; + +enum ecore_hsi_def_type { + ECORE_HSI_DEF_MAX_NUM_VFS, + ECORE_HSI_DEF_MAX_NUM_L2_QUEUES, + ECORE_HSI_DEF_MAX_NUM_PORTS, + ECORE_HSI_DEF_MAX_SB_PER_PATH, + ECORE_HSI_DEF_MAX_NUM_PFS, + ECORE_HSI_DEF_MAX_NUM_VPORTS, + ECORE_HSI_DEF_NUM_ETH_RSS_ENGINE, + ECORE_HSI_DEF_MAX_QM_TX_QUEUES, + ECORE_HSI_DEF_NUM_PXP_ILT_RECORDS, + ECORE_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS, + ECORE_HSI_DEF_MAX_QM_GLOBAL_RLS, + ECORE_HSI_DEF_MAX_PBF_CMD_LINES, + ECORE_HSI_DEF_MAX_BTB_BLOCKS, + ECORE_NUM_HSI_DEFS +}; + +enum ecore_rx_config_flags { + ECORE_RX_CONF_SKIP_ACCEPT_FLAGS_UPDATE, + ECORE_RX_CONF_SKIP_UCAST_FILTER_UPDATE, + ECORE_RX_CONF_SET_LB_VPORT +}; + +struct ecore_rx_config { + u32 flags; /* @DPDK */ + u8 loopback_dst_vport_id; +}; + struct ecore_hwfn { struct ecore_dev *p_dev; u8 my_id; /* ID inside the PF */ -#define IS_LEAD_HWFN(edev) (!((edev)->my_id)) +#define IS_LEAD_HWFN(_p_hwfn) (!((_p_hwfn)->my_id)) +#define IS_AFFIN_HWFN(_p_hwfn) \ + ((_p_hwfn) == ECORE_AFFIN_HWFN((_p_hwfn)->p_dev)) u8 rel_pf_id; /* Relative to engine*/ u8 abs_pf_id; #define ECORE_PATH_ID(_p_hwfn) \ @@ -597,10 +899,11 @@ struct ecore_hwfn { u32 dp_module; u8 dp_level; + u32 dp_int_module; + u8 dp_int_level; char name[NAME_SIZE]; void *dp_ctx; - bool first_on_engine; bool hw_init_done; u8 num_funcs_on_engine; @@ -612,6 +915,8 @@ struct ecore_hwfn { void OSAL_IOMEM *doorbells; u64 
db_phys_addr; unsigned long db_size; + u64 reg_offset; + u64 db_offset; /* PTT pool */ struct ecore_ptt_pool *p_ptt_pool; @@ -638,6 +943,11 @@ struct ecore_hwfn { struct ecore_ptt *p_main_ptt; struct ecore_ptt *p_dpc_ptt; + /* PTP will be used only by the leading function. + * Usage of all PTP-apis should be synchronized as result. + */ + struct ecore_ptt *p_ptp_ptt; + struct ecore_sb_sp_info *p_sp_sb; struct ecore_sb_attn_info *p_sb_attn; @@ -649,6 +959,7 @@ struct ecore_hwfn { struct ecore_fcoe_info *p_fcoe_info; struct ecore_rdma_info *p_rdma_info; struct ecore_pf_params pf_params; + bool is_nvmetcp; bool b_rdma_enabled_in_prs; u32 rdma_prs_search_reg; @@ -673,8 +984,8 @@ struct ecore_hwfn { /* QM init */ struct ecore_qm_info qm_info; -#ifdef CONFIG_ECORE_ZIPPED_FW /* Buffer for unzipping firmware data */ +#ifdef CONFIG_ECORE_ZIPPED_FW void *unzip_buf; #endif @@ -682,19 +993,12 @@ struct ecore_hwfn { void *dbg_user_info; struct virt_mem_desc dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE]; - struct z_stream_s *stream; - - /* PWM region specific data */ - u32 dpi_size; - u32 dpi_count; - u32 dpi_start_offset; /* this is used to - * calculate th - * doorbell address - */ + struct ecore_dpi_info dpi_info; - /* If one of the following is set then EDPM shouldn't be used */ + struct ecore_common_dpm_info dpm_info; + u8 roce_edpm_mode; u8 dcbx_no_edpm; - u8 db_bar_no_edpm; + u8 num_vf_cnqs; /* L2-related */ struct ecore_l2_info *p_l2_info; @@ -708,11 +1012,26 @@ struct ecore_hwfn { * struct ecore_hw_prepare_params by ecore client. */ bool b_en_pacing; + struct ecore_lag_info lag_info; /* Nvm images number and attributes */ - struct ecore_nvm_image_info nvm_info; + struct ecore_nvm_image_info nvm_info; + + /* Flow steering info */ + union { + struct ecore_fs_info_e4 *e4; + struct ecore_fs_info_e5 *e5; + void *info; + } fs_info; - struct phys_mem_desc *fw_overlay_mem; + /* Flow steering statistics accuracy */ + u8 fs_accuracy; + + struct phys_mem_desc *fw_overlay_mem; + enum _ecore_status_t (*p_dummy_cb) + (struct ecore_hwfn *p_hwfn, void *cookie); + /* Rx configuration */ + struct ecore_rx_config rx_conf; /* @DPDK */ struct ecore_ptt *p_arfs_ptt; @@ -722,18 +1041,22 @@ struct ecore_hwfn { u32 iov_task_flags; }; +#ifndef __EXTRACT__LINUX__THROW__ enum ecore_mf_mode { ECORE_MF_DEFAULT, ECORE_MF_OVLAN, ECORE_MF_NPAR, ECORE_MF_UFP, }; +#endif +#ifndef __EXTRACT__LINUX__IF__ enum ecore_dev_type { ECORE_DEV_TYPE_BB, ECORE_DEV_TYPE_AH, ECORE_DEV_TYPE_E5, }; +#endif /* @DPDK */ enum ecore_dbg_features { @@ -765,7 +1088,12 @@ struct ecore_dev { u8 dp_level; char name[NAME_SIZE]; void *dp_ctx; + struct ecore_internal_trace internal_trace; + u8 dp_int_level; + u32 dp_int_module; +/* for work DP_* macros with cdev, hwfn, etc */ + struct ecore_dev *p_dev; enum ecore_dev_type type; /* Translate type/revision combo into the proper conditions */ #define ECORE_IS_BB(dev) ((dev)->type == ECORE_DEV_TYPE_BB) @@ -781,6 +1109,8 @@ struct ecore_dev { #define ECORE_IS_E4(dev) (ECORE_IS_BB(dev) || ECORE_IS_AH(dev)) #define ECORE_IS_E5(dev) ((dev)->type == ECORE_DEV_TYPE_E5) +#define ECORE_E5_MISSING_CODE OSAL_BUILD_BUG_ON(false) + u16 vendor_id; u16 device_id; #define ECORE_DEV_ID_MASK 0xff00 @@ -829,21 +1159,18 @@ struct ecore_dev { #define CHIP_BOND_ID_MASK 0xff #define CHIP_BOND_ID_SHIFT 0 - u8 num_engines; u8 num_ports; u8 num_ports_in_engine; - u8 num_funcs_in_port; u8 path_id; - u32 mf_bits; + u32 mf_bits; /* @DPDK */ +#ifndef __EXTRACT__LINUX__THROW__ enum ecore_mf_mode mf_mode; -#define IS_MF_DEFAULT(_p_hwfn) \ - 
(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_DEFAULT) -#define IS_MF_SI(_p_hwfn) \ - (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_NPAR) -#define IS_MF_SD(_p_hwfn) \ - (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_OVLAN) +#define IS_MF_DEFAULT(_p_hwfn) (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_DEFAULT) +#define IS_MF_SI(_p_hwfn) (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_NPAR) +#define IS_MF_SD(_p_hwfn) (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_OVLAN) +#endif int pcie_width; int pcie_speed; @@ -852,7 +1179,9 @@ struct ecore_dev { u8 mcp_rev; u8 boot_mode; - u8 wol; + /* WoL related configurations */ + u8 wol_config; + u8 wol_mac[ECORE_ETH_ALEN]; u32 int_mode; enum ecore_coalescing_mode int_coalescing_mode; @@ -907,6 +1236,7 @@ struct ecore_dev { u32 rdma_max_sge; u32 rdma_max_inline; u32 rdma_max_srq_sge; + u8 ilt_page_size; struct ecore_eth_stats *reset_stats; struct ecore_fw_data *fw_data; @@ -916,8 +1246,7 @@ struct ecore_dev { /* Recovery */ bool recov_in_prog; -/* Indicates whether should prevent attentions from being reasserted */ - + /* Indicates whether should prevent attentions from being reasserted */ bool attn_clr_en; /* Indicates whether allowing the MFW to collect a crash dump */ @@ -926,6 +1255,9 @@ struct ecore_dev { /* Indicates if the reg_fifo is checked after any register access */ bool chk_reg_fifo; + /* Indicates the monitored address by ecore_rd()/ecore_wr() */ + u32 monitored_hw_addr; + #ifndef ASIC_ONLY bool b_is_emul_full; bool b_is_emul_mac; @@ -937,6 +1269,9 @@ struct ecore_dev { /* Indicates whether this PF serves a storage target */ bool b_is_target; + /* Instruct driver to read statistics from the specified bin id */ + u16 stats_bin_id; + #ifdef CONFIG_ECORE_BINARY_FW /* @DPDK */ void *firmware; u64 fw_len; @@ -952,23 +1287,6 @@ struct ecore_dev { struct rte_pci_device *pci_dev; }; -enum ecore_hsi_def_type { - ECORE_HSI_DEF_MAX_NUM_VFS, - ECORE_HSI_DEF_MAX_NUM_L2_QUEUES, - ECORE_HSI_DEF_MAX_NUM_PORTS, - ECORE_HSI_DEF_MAX_SB_PER_PATH, - ECORE_HSI_DEF_MAX_NUM_PFS, - ECORE_HSI_DEF_MAX_NUM_VPORTS, - ECORE_HSI_DEF_NUM_ETH_RSS_ENGINE, - ECORE_HSI_DEF_MAX_QM_TX_QUEUES, - ECORE_HSI_DEF_NUM_PXP_ILT_RECORDS, - ECORE_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS, - ECORE_HSI_DEF_MAX_QM_GLOBAL_RLS, - ECORE_HSI_DEF_MAX_PBF_CMD_LINES, - ECORE_HSI_DEF_MAX_BTB_BLOCKS, - ECORE_NUM_HSI_DEFS -}; - u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev, enum ecore_hsi_def_type type); @@ -1039,6 +1357,11 @@ int ecore_device_num_ports(struct ecore_dev *p_dev); void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb, u8 *mac); +#define ECORE_TOS_ECN_SHIFT 0 +#define ECORE_TOS_ECN_MASK 0x3 +#define ECORE_TOS_DSCP_SHIFT 2 +#define ECORE_TOS_DSCP_MASK 0x3f + /* Flags for indication of required queues */ #define PQ_FLAGS_RLS (1 << 0) #define PQ_FLAGS_MCOS (1 << 1) @@ -1046,26 +1369,50 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb, #define PQ_FLAGS_OOO (1 << 3) #define PQ_FLAGS_ACK (1 << 4) #define PQ_FLAGS_OFLD (1 << 5) -#define PQ_FLAGS_VFS (1 << 6) -#define PQ_FLAGS_LLT (1 << 7) +#define PQ_FLAGS_GRP (1 << 6) +#define PQ_FLAGS_VFS (1 << 7) +#define PQ_FLAGS_LLT (1 << 8) +#define PQ_FLAGS_MTC (1 << 9) +#define PQ_FLAGS_VFR (1 << 10) +#define PQ_FLAGS_VSR (1 << 11) /* physical queue index for cm context intialization */ u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags); u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc); u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf); +u16 ecore_get_cm_pq_idx_vf_rdma(struct ecore_hwfn *p_hwfn, u16 
vf); + u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl); +u16 ecore_get_cm_pq_idx_grp(struct ecore_hwfn *p_hwfn, u8 idx); +u16 ecore_get_cm_pq_idx_ofld_mtc(struct ecore_hwfn *p_hwfn, u16 idx, u8 tc); +u16 ecore_get_cm_pq_idx_llt_mtc(struct ecore_hwfn *p_hwfn, u16 idx, u8 tc); +u16 ecore_get_cm_pq_idx_ll2(struct ecore_hwfn *p_hwfn, u8 tc); -/* qm vport for rate limit configuration */ -u16 ecore_get_qm_vport_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl); +/* qm vport/rl for rate limit configuration */ +u16 ecore_get_pq_vport_id_from_rl(struct ecore_hwfn *p_hwfn, u16 rl); +u16 ecore_get_pq_vport_id_from_vf(struct ecore_hwfn *p_hwfn, u16 vf); +u16 ecore_get_pq_rl_id_from_rl(struct ecore_hwfn *p_hwfn, u16 rl); +u16 ecore_get_pq_rl_id_from_vf(struct ecore_hwfn *p_hwfn, u16 vf); const char *ecore_hw_get_resc_name(enum ecore_resources res_id); /* doorbell recovery mechanism */ void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn); -void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn, - enum ecore_db_rec_exec); - -bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn); +void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn); +enum _ecore_status_t +ecore_db_rec_flush_queue(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u32 usage_cnt_reg, u32 *count); +#define ECORE_DB_REC_COUNT 1000 +#define ECORE_DB_REC_INTERVAL 100 + +bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn, + struct ecore_common_dpm_info *dpm_info); + +enum _ecore_status_t ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_dpi_info *dpi_info, + u32 pwm_region_size, + u32 n_cpus); /* amount of resources used in qm init */ u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn); @@ -1074,9 +1421,18 @@ u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn); u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn); u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn); +void ecore_hw_info_set_offload_tc(struct ecore_hw_info *p_info, u8 tc); +u8 ecore_get_offload_tc(struct ecore_hwfn *p_hwfn); + #define MFW_PORT(_p_hwfn) ((_p_hwfn)->abs_pf_id % \ ecore_device_num_ports((_p_hwfn)->p_dev)) +enum _ecore_status_t ecore_abs_ppfid(struct ecore_dev *p_dev, u8 rel_ppfid, + u8 *p_abs_ppfid); +enum _ecore_status_t ecore_llh_map_ppfid_to_pfid(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 ppfid, u8 pfid); + /* The PFID<->PPFID calculation is based on the relative index of a PF on its * port. In BB there is a bug in the LLH in which the PPFID is actually engine * based, and thus it equals the PFID. @@ -1110,6 +1466,12 @@ enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev); void ecore_set_platform_str(struct ecore_hwfn *p_hwfn, char *buf_str, u32 buf_size); +#define LNX_STATIC +#define IFDEF_HAS_IFLA_VF_RATE +#define ENDIF_HAS_IFLA_VF_RATE +#define IFDEF_DEFINE_IFLA_VF_SPOOFCHK +#define ENDIF_DEFINE_IFLA_VF_SPOOFCHK + #define TSTORM_QZONE_START PXP_VF_BAR0_START_SDM_ZONE_A #define TSTORM_QZONE_SIZE(dev) \ (ECORE_IS_E4(dev) ? TSTORM_QZONE_SIZE_E4 : TSTORM_QZONE_SIZE_E5) diff --git a/drivers/net/qede/base/ecore_attn_values.h b/drivers/net/qede/base/ecore_attn_values.h index ec773fbdd..ff4fc5c38 100644 --- a/drivers/net/qede/base/ecore_attn_values.h +++ b/drivers/net/qede/base/ecore_attn_values.h @@ -1,7 +1,8 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. 
- * www.cavium.com + * www.marvell.com */ #ifndef __ATTN_VALUES_H__ diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c index 221cec22b..c22f97f7d 100644 --- a/drivers/net/qede/base/ecore_cxt.c +++ b/drivers/net/qede/base/ecore_cxt.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "reg_addr.h" #include "common_hsi.h" @@ -37,13 +37,11 @@ #define TM_ELEM_SIZE 4 /* ILT constants */ -#define ILT_DEFAULT_HW_P_SIZE 4 - #define ILT_PAGE_IN_BYTES(hw_p_size) (1U << ((hw_p_size) + 12)) #define ILT_CFG_REG(cli, reg) PSWRQ2_REG_##cli##_##reg##_RT_OFFSET /* ILT entry structure */ -#define ILT_ENTRY_PHY_ADDR_MASK 0x000FFFFFFFFFFFULL +#define ILT_ENTRY_PHY_ADDR_MASK (~0ULL >> 12) #define ILT_ENTRY_PHY_ADDR_SHIFT 0 #define ILT_ENTRY_VALID_MASK 0x1ULL #define ILT_ENTRY_VALID_SHIFT 52 @@ -81,11 +79,11 @@ union e5_type1_task_context { }; struct src_ent { - u8 opaque[56]; + u8 opaque[56]; u64 next; }; -#define CDUT_SEG_ALIGNMET 3 /* in 4k chunks */ +#define CDUT_SEG_ALIGNMET 3 /* in 4k chunks */ #define CDUT_SEG_ALIGNMET_IN_BYTES (1 << (CDUT_SEG_ALIGNMET + 12)) #define CONN_CXT_SIZE(p_hwfn) \ @@ -93,8 +91,6 @@ struct src_ent { ALIGNED_TYPE_SIZE(union e4_conn_context, (p_hwfn)) : \ ALIGNED_TYPE_SIZE(union e5_conn_context, (p_hwfn))) -#define SRQ_CXT_SIZE (sizeof(struct regpair) * 8) /* @DPDK */ - #define TYPE0_TASK_CXT_SIZE(p_hwfn) \ (ECORE_IS_E4(((p_hwfn)->p_dev)) ? \ ALIGNED_TYPE_SIZE(union e4_type0_task_context, (p_hwfn)) : \ @@ -117,9 +113,12 @@ static bool src_proto(enum protocol_type type) type == PROTOCOLID_IWARP; } -static OSAL_INLINE bool tm_cid_proto(enum protocol_type type) +static bool tm_cid_proto(enum protocol_type type) { - return type == PROTOCOLID_TOE; + return type == PROTOCOLID_ISCSI || + type == PROTOCOLID_FCOE || + type == PROTOCOLID_ROCE || + type == PROTOCOLID_IWARP; } static bool tm_tid_proto(enum protocol_type type) @@ -133,8 +132,8 @@ struct ecore_cdu_iids { u32 per_vf_cids; }; -static void ecore_cxt_cdu_iids(struct ecore_cxt_mngr *p_mngr, - struct ecore_cdu_iids *iids) +static void ecore_cxt_cdu_iids(struct ecore_cxt_mngr *p_mngr, + struct ecore_cdu_iids *iids) { u32 type; @@ -146,8 +145,8 @@ static void ecore_cxt_cdu_iids(struct ecore_cxt_mngr *p_mngr, /* counts the iids for the Searcher block configuration */ struct ecore_src_iids { - u32 pf_cids; - u32 per_vf_cids; + u32 pf_cids; + u32 per_vf_cids; }; static void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr, @@ -156,6 +155,9 @@ static void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr, u32 i; for (i = 0; i < MAX_CONN_TYPES; i++) { + if (!src_proto(i)) + continue; + iids->pf_cids += p_mngr->conn_cfg[i].cid_count; iids->per_vf_cids += p_mngr->conn_cfg[i].cids_per_vf; } @@ -167,24 +169,39 @@ static void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr, /* counts the iids for the Timers block configuration */ struct ecore_tm_iids { u32 pf_cids; - u32 pf_tids[NUM_TASK_PF_SEGMENTS]; /* per segment */ + u32 pf_tids[NUM_TASK_PF_SEGMENTS]; /* per segment */ u32 pf_tids_total; u32 per_vf_cids; u32 per_vf_tids; }; static void ecore_cxt_tm_iids(struct ecore_hwfn *p_hwfn, - struct ecore_cxt_mngr *p_mngr, struct ecore_tm_iids *iids) { + struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; struct ecore_conn_type_cfg *p_cfg; bool tm_vf_required = false; bool tm_required = false; - u32 i, j; + int i, j; - for 
(i = 0; i < MAX_CONN_TYPES; i++) { + /* Timers is a special case -> we don't count how many cids require + * timers but what's the max cid that will be used by the timer block. + * therefore we traverse in reverse order, and once we hit a protocol + * that requires the timers memory, we'll sum all the protocols up + * to that one. + */ + for (i = MAX_CONN_TYPES - 1; i >= 0; i--) { p_cfg = &p_mngr->conn_cfg[i]; + /* In E5 the CORE CIDs are allocated first, and not according to + * its 'enum protocol_type' value. To not miss the count of the + * CORE CIDs by a protocol that requires the timers memory, but + * with a lower 'enum protocol_type' value - the addition of the + * CORE CIDs is done outside the loop. + */ + if (ECORE_IS_E5(p_hwfn->p_dev) && (i == PROTOCOLID_CORE)) + continue; + if (tm_cid_proto(i) || tm_required) { if (p_cfg->cid_count) tm_required = true; @@ -196,6 +213,7 @@ static void ecore_cxt_tm_iids(struct ecore_hwfn *p_hwfn, if (p_cfg->cids_per_vf) tm_vf_required = true; + iids->per_vf_cids += p_cfg->cids_per_vf; } if (tm_tid_proto(i)) { @@ -215,6 +233,15 @@ static void ecore_cxt_tm_iids(struct ecore_hwfn *p_hwfn, } } + if (ECORE_IS_E5(p_hwfn->p_dev)) { + p_cfg = &p_mngr->conn_cfg[PROTOCOLID_CORE]; + + if (tm_required) + iids->pf_cids += p_cfg->cid_count; + if (tm_vf_required) + iids->per_vf_cids += p_cfg->cids_per_vf; + } + iids->pf_cids = ROUNDUP(iids->pf_cids, TM_ALIGN); iids->per_vf_cids = ROUNDUP(iids->per_vf_cids, TM_ALIGN); iids->per_vf_tids = ROUNDUP(iids->per_vf_tids, TM_ALIGN); @@ -251,7 +278,7 @@ static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, vf_tids += segs[NUM_TASK_PF_SEGMENTS].count; } - iids->vf_cids += vf_cids * p_mngr->vf_count; + iids->vf_cids = vf_cids; iids->tids += vf_tids * p_mngr->vf_count; DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, @@ -259,8 +286,8 @@ static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, iids->cids, iids->vf_cids, iids->tids, vf_tids); } -static struct ecore_tid_seg *ecore_cxt_tid_seg_info(struct ecore_hwfn *p_hwfn, - u32 seg) +static struct ecore_tid_seg *ecore_cxt_tid_seg_info(struct ecore_hwfn *p_hwfn, + u32 seg) { struct ecore_cxt_mngr *p_cfg = p_hwfn->p_cxt_mngr; u32 i; @@ -275,24 +302,18 @@ static struct ecore_tid_seg *ecore_cxt_tid_seg_info(struct ecore_hwfn *p_hwfn, return OSAL_NULL; } -static void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs) -{ - struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr; - - p_mgr->srq_count = num_srqs; -} - -u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn) +/* This function was written under the assumption that all the ILT clients + * share the same ILT page size (although it is not required). 
+ */ +u32 ecore_cxt_get_ilt_page_size(struct ecore_hwfn *p_hwfn) { - struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr; - - return p_mgr->srq_count; + return ILT_PAGE_IN_BYTES(p_hwfn->p_dev->ilt_page_size); } /* set the iids (cid/tid) count per protocol */ static void ecore_cxt_set_proto_cid_count(struct ecore_hwfn *p_hwfn, - enum protocol_type type, - u32 cid_count, u32 vf_cid_cnt) + enum protocol_type type, + u32 cid_count, u32 vf_cid_cnt) { struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr; struct ecore_conn_type_cfg *p_conn = &p_mgr->conn_cfg[type]; @@ -301,8 +322,9 @@ static void ecore_cxt_set_proto_cid_count(struct ecore_hwfn *p_hwfn, p_conn->cids_per_vf = ROUNDUP(vf_cid_cnt, DQ_RANGE_ALIGN); } -u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn, - enum protocol_type type, u32 *vf_cid) +u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn, + enum protocol_type type, + u32 *vf_cid) { if (vf_cid) *vf_cid = p_hwfn->p_cxt_mngr->conn_cfg[type].cids_per_vf; @@ -310,28 +332,41 @@ u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn, return p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count; } -u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn, - enum protocol_type type) +u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn, + enum protocol_type type, + u8 vf_id) { - return p_hwfn->p_cxt_mngr->acquired[type].start_cid; + if (vf_id != ECORE_CXT_PF_CID) + return p_hwfn->p_cxt_mngr->acquired_vf[type][vf_id].start_cid; + else + return p_hwfn->p_cxt_mngr->acquired[type].start_cid; } u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn, - enum protocol_type type) + enum protocol_type type, + u8 vf_id) { + struct ecore_conn_type_cfg *p_conn_cfg; u32 cnt = 0; int i; - for (i = 0; i < TASK_SEGMENTS; i++) - cnt += p_hwfn->p_cxt_mngr->conn_cfg[type].tid_seg[i].count; + p_conn_cfg = &p_hwfn->p_cxt_mngr->conn_cfg[type]; + + if (vf_id != ECORE_CXT_PF_CID) + return p_conn_cfg->tid_seg[TASK_SEGMENT_VF].count; + + for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++) + cnt += p_conn_cfg->tid_seg[i].count; return cnt; } -static OSAL_INLINE void -ecore_cxt_set_proto_tid_count(struct ecore_hwfn *p_hwfn, - enum protocol_type proto, - u8 seg, u8 seg_type, u32 count, bool has_fl) +static void ecore_cxt_set_proto_tid_count(struct ecore_hwfn *p_hwfn, + enum protocol_type proto, + u8 seg, + u8 seg_type, + u32 count, + bool has_fl) { struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; struct ecore_tid_seg *p_seg = &p_mngr->conn_cfg[proto].tid_seg[seg]; @@ -342,12 +377,13 @@ ecore_cxt_set_proto_tid_count(struct ecore_hwfn *p_hwfn, } /* the *p_line parameter must be either 0 for the first invocation or the - * value returned in the previous invocation. + value returned in the previous invocation. 
*/ -static void ecore_ilt_cli_blk_fill(struct ecore_ilt_client_cfg *p_cli, - struct ecore_ilt_cli_blk *p_blk, - u32 start_line, - u32 total_size, u32 elem_size) +static void ecore_ilt_cli_blk_fill(struct ecore_ilt_client_cfg *p_cli, + struct ecore_ilt_cli_blk *p_blk, + u32 start_line, + u32 total_size, + u32 elem_size) { u32 ilt_size = ILT_PAGE_IN_BYTES(p_cli->p_size.val); @@ -362,10 +398,11 @@ static void ecore_ilt_cli_blk_fill(struct ecore_ilt_client_cfg *p_cli, p_blk->start_line = start_line; } -static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn, - struct ecore_ilt_client_cfg *p_cli, - struct ecore_ilt_cli_blk *p_blk, - u32 *p_line, enum ilt_clients client_id) +static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn, + struct ecore_ilt_client_cfg *p_cli, + struct ecore_ilt_cli_blk *p_blk, + u32 *p_line, + enum ilt_clients client_id) { if (!p_blk->total_size) return; @@ -378,8 +415,7 @@ static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn, p_cli->last.val = *p_line - 1; DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, - "ILT[Client %d] - Lines: [%08x - %08x]. Block - Size %08x" - " [Real %08x] Start line %d\n", + "ILT[Client %d] - Lines: [%08x - %08x]. Block - Size %08x [Real %08x] Start line %d\n", client_id, p_cli->first.val, p_cli->last.val, p_blk->total_size, p_blk->real_size_in_page, p_blk->start_line); @@ -388,7 +424,7 @@ static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn, static void ecore_ilt_get_dynamic_line_range(struct ecore_hwfn *p_hwfn, enum ilt_clients ilt_client, u32 *dynamic_line_offset, - u32 *dynamic_line_cnt) + u32 *dynamic_line_cnt, u8 is_vf) { struct ecore_ilt_client_cfg *p_cli; struct ecore_conn_type_cfg *p_cfg; @@ -404,9 +440,23 @@ static void ecore_ilt_get_dynamic_line_range(struct ecore_hwfn *p_hwfn, p_cfg = &p_hwfn->p_cxt_mngr->conn_cfg[PROTOCOLID_ROCE]; cxts_per_p = ILT_PAGE_IN_BYTES(p_cli->p_size.val) / - (u32)CONN_CXT_SIZE(p_hwfn); - - *dynamic_line_cnt = p_cfg->cid_count / cxts_per_p; + (u32)CONN_CXT_SIZE(p_hwfn); + + *dynamic_line_cnt = is_vf ? p_cfg->cids_per_vf / cxts_per_p : + p_cfg->cid_count / cxts_per_p; + + /* In E5 the CORE CIDs are allocated before the ROCE CIDs */ + if (*dynamic_line_cnt && ECORE_IS_E5(p_hwfn->p_dev)) { + u32 roce_cid_cnt = is_vf ? p_cfg->cids_per_vf : + p_cfg->cid_count; + u32 core_cid_cnt; + + p_cfg = &p_hwfn->p_cxt_mngr->conn_cfg[PROTOCOLID_CORE]; + core_cid_cnt = p_cfg->cid_count; + *dynamic_line_offset = 1 + (core_cid_cnt / cxts_per_p); + *dynamic_line_cnt = ((core_cid_cnt + roce_cid_cnt) / + cxts_per_p) - *dynamic_line_offset; + } } } @@ -424,7 +474,7 @@ ecore_cxt_set_blk(struct ecore_ilt_cli_blk *p_blk) { p_blk->total_size = 0; return p_blk; - } +} static u32 ecore_cxt_src_elements(struct ecore_cxt_mngr *p_mngr) @@ -435,9 +485,9 @@ ecore_cxt_src_elements(struct ecore_cxt_mngr *p_mngr) OSAL_MEM_ZERO(&src_iids, sizeof(src_iids)); ecore_cxt_src_iids(p_mngr, &src_iids); - /* Both the PF and VFs searcher connections are stored in the per PF - * database. Thus sum the PF searcher cids and all the VFs searcher - * cids. + /* Both the PF and VFs searcher connections are stored + * in the per PF database. Thus sum the PF searcher + * cids and all the VFs searcher cids. 
*/ elem_num = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count; @@ -450,16 +500,34 @@ ecore_cxt_src_elements(struct ecore_cxt_mngr *p_mngr) return elem_num; } -enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) +static void eocre_ilt_blk_reset(struct ecore_hwfn *p_hwfn) +{ + struct ecore_ilt_client_cfg *clients = p_hwfn->p_cxt_mngr->clients; + u32 cli_idx, blk_idx; + + for (cli_idx = 0; cli_idx < MAX_ILT_CLIENTS; cli_idx++) { + for (blk_idx = 0; blk_idx < ILT_CLI_PF_BLOCKS; blk_idx++) + clients[cli_idx].pf_blks[blk_idx].total_size = 0; + for (blk_idx = 0; blk_idx < ILT_CLI_VF_BLOCKS; blk_idx++) + clients[cli_idx].vf_blks[blk_idx].total_size = 0; + } +} + +enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn, + u32 *line_count) { - u32 curr_line, total, i, task_size, line, total_size, elem_size; + u32 total, i, task_size, line, total_size, elem_size; struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; + struct ecore_qm_info *qm_info = &p_hwfn->qm_info; + u32 curr_line, prev_line, total_lines; struct ecore_ilt_client_cfg *p_cli; struct ecore_ilt_cli_blk *p_blk; struct ecore_cdu_iids cdu_iids; struct ecore_qm_iids qm_iids; struct ecore_tm_iids tm_iids; struct ecore_tid_seg *p_seg; + u16 num_vf_pqs; + int ret; OSAL_MEM_ZERO(&qm_iids, sizeof(qm_iids)); OSAL_MEM_ZERO(&cdu_iids, sizeof(cdu_iids)); @@ -467,8 +535,14 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) p_mngr->pf_start_line = RESC_START(p_hwfn, ECORE_ILT); + /* Reset all the ILT blocks at the beginning of ILT compute - this + * is done in order to prevent memory allocation for irrelevant blocks + * afterwards (e.g. VF timer block after disabling VF-RDMA). + */ + eocre_ilt_blk_reset(p_hwfn); + DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, - "hwfn [%d] - Set context mngr starting line to be 0x%08x\n", + "hwfn [%d] - Set context manager starting line to be 0x%08x\n", p_hwfn->my_id, p_hwfn->p_cxt_mngr->pf_start_line); /* CDUC */ @@ -494,7 +568,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) ecore_ilt_get_dynamic_line_range(p_hwfn, ILT_CLI_CDUC, &p_blk->dynamic_line_offset, - &p_blk->dynamic_line_cnt); + &p_blk->dynamic_line_cnt, IOV_PF); /* CDUC VF */ p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[CDUC_BLK]); @@ -510,6 +584,10 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_CDUC); + ecore_ilt_get_dynamic_line_range(p_hwfn, ILT_CLI_CDUC, + &p_blk->dynamic_line_offset, + &p_blk->dynamic_line_cnt, IOV_VF); + /* CDUT PF */ p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_CDUT]); p_cli->first.val = curr_line; @@ -525,8 +603,39 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line, total, p_mngr->task_type_size[p_seg->type]); + prev_line = curr_line; ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_CDUT); + total_lines = curr_line - prev_line; + + switch (i) { + case ECORE_CXT_ISCSI_TID_SEG: + p_mngr->iscsi_task_pages = (u16)total_lines; + DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, + "CDUT ILT Info: iscsi_task_pages %hu\n", + p_mngr->iscsi_task_pages); + break; + case ECORE_CXT_FCOE_TID_SEG: + p_mngr->fcoe_task_pages = (u16)total_lines; + DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, + "CDUT ILT Info: fcoe_task_pages %hu\n", + p_mngr->fcoe_task_pages); + break; + case ECORE_CXT_ROCE_TID_SEG: + p_mngr->roce_task_pages = (u16)total_lines; + DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, + "CDUT ILT Info: 
roce_task_pages %hu\n", + p_mngr->roce_task_pages); + break; + case ECORE_CXT_ETH_TID_SEG: + p_mngr->eth_task_pages = (u16)total_lines; + DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, + "CDUT ILT Info: eth_task_pages %hu\n", + p_mngr->eth_task_pages); + break; + default: + break; + } } /* next the 'init' task memory (forced load memory) */ @@ -535,8 +644,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) if (!p_seg || p_seg->count == 0) continue; - p_blk = - ecore_cxt_set_blk(&p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)]); + p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)]); if (!p_seg->has_fl_mem) { /* The segment is active (total size pf 'working' @@ -590,8 +698,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) ILT_CLI_CDUT); /* 'init' memory */ - p_blk = - ecore_cxt_set_blk(&p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)]); + p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)]); if (!p_seg->has_fl_mem) { /* see comment above */ line = p_cli->vf_blks[CDUT_SEG_BLK(0)].start_line; @@ -599,7 +706,8 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) } else { task_size = p_mngr->task_type_size[p_seg->type]; ecore_ilt_cli_blk_fill(p_cli, p_blk, - curr_line, total, task_size); + curr_line, total, + task_size); ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_CDUT); } @@ -624,23 +732,29 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_QM]); p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]); + ecore_cxt_qm_iids(p_hwfn, &qm_iids); + /* At this stage, after the first QM configuration, the PF PQs amount * is the highest possible. Save this value at qm_info->ilt_pf_pqs to * detect overflows in the future. * Even though VF PQs amount can be larger than VF count, use vf_count * because each VF requires only the full amount of CIDs. 
*/ - ecore_cxt_qm_iids(p_hwfn, &qm_iids); + qm_info->ilt_pf_pqs = qm_info->num_pqs - qm_info->num_vf_pqs; + if (ECORE_IS_VF_RDMA(p_hwfn)) + num_vf_pqs = RESC_NUM(p_hwfn, ECORE_PQ) - qm_info->ilt_pf_pqs; + else + num_vf_pqs = (u16)p_mngr->vf_count; + total = ecore_qm_pf_mem_size(p_hwfn, qm_iids.cids, qm_iids.vf_cids, qm_iids.tids, - p_hwfn->qm_info.num_pqs + OFLD_GRP_SIZE, - p_hwfn->qm_info.num_vf_pqs); + qm_info->ilt_pf_pqs, + num_vf_pqs); DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, - "QM ILT Info, (cids=%d, vf_cids=%d, tids=%d, num_pqs=%d," - " num_vf_pqs=%d, memory_size=%d)\n", + "QM ILT Info, (cids=%d, vf_cids=%d, tids=%d, pf_pqs=%d, vf_pqs=%d, memory_size=%d)\n", qm_iids.cids, qm_iids.vf_cids, qm_iids.tids, - p_hwfn->qm_info.num_pqs, p_hwfn->qm_info.num_vf_pqs, total); + qm_info->ilt_pf_pqs, p_mngr->vf_count, total); ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line, total * 0x1000, QM_PQ_ELEMENT_SIZE); @@ -650,7 +764,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) /* TM PF */ p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_TM]); - ecore_cxt_tm_iids(p_hwfn, p_mngr, &tm_iids); + ecore_cxt_tm_iids(p_hwfn, &tm_iids); total = tm_iids.pf_cids + tm_iids.pf_tids_total; if (total) { p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]); @@ -668,12 +782,14 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) if (total) { p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[0]); ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line, - total * TM_ELEM_SIZE, TM_ELEM_SIZE); + total * TM_ELEM_SIZE, + TM_ELEM_SIZE); ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_TM); p_cli->vf_total_lines = curr_line - p_blk->start_line; + for (i = 1; i < p_mngr->vf_count; i++) { ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_TM); @@ -696,30 +812,54 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn) p_cli->pf_total_lines = curr_line - p_blk->start_line; } - /* TSDM (SRQ CONTEXT) */ - total = ecore_cxt_get_srq_count(p_hwfn); - - if (total) { - p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_TSDM]); - p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[SRQ_BLK]); - ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line, - total * SRQ_CXT_SIZE, SRQ_CXT_SIZE); - - ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, - ILT_CLI_TSDM); - p_cli->pf_total_lines = curr_line - p_blk->start_line; - } + *line_count = curr_line - p_hwfn->p_cxt_mngr->pf_start_line; if (curr_line - p_hwfn->p_cxt_mngr->pf_start_line > RESC_NUM(p_hwfn, ECORE_ILT)) { - DP_ERR(p_hwfn, "too many ilt lines...#lines=%d\n", - curr_line - p_hwfn->p_cxt_mngr->pf_start_line); return ECORE_INVAL; } return ECORE_SUCCESS; } +u32 ecore_cxt_cfg_ilt_compute_excess(struct ecore_hwfn *p_hwfn, u32 used_lines) +{ + struct ecore_ilt_client_cfg *p_cli; + u32 excess_lines, available_lines; + struct ecore_cxt_mngr *p_mngr; + u32 ilt_page_size, elem_size; + struct ecore_tid_seg *p_seg; + int i; + + available_lines = RESC_NUM(p_hwfn, ECORE_ILT); + excess_lines = used_lines - available_lines; + + if (!excess_lines) + return 0; + + if (!ECORE_IS_L2_PERSONALITY(p_hwfn)) + return 0; + + p_mngr = p_hwfn->p_cxt_mngr; + p_cli = &p_mngr->clients[ILT_CLI_CDUT]; + ilt_page_size = ILT_PAGE_IN_BYTES(p_cli->p_size.val); + + for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++) { + p_seg = ecore_cxt_tid_seg_info(p_hwfn, i); + if (!p_seg || p_seg->count == 0) + continue; + + elem_size = p_mngr->task_type_size[p_seg->type]; + if (!elem_size) + continue; + + return (ilt_page_size / elem_size) * excess_lines; + } + + DP_ERR(p_hwfn, 
"failed computing excess ILT lines\n"); + return 0; +} + static void ecore_cxt_src_t2_free(struct ecore_hwfn *p_hwfn) { struct ecore_src_t2 *p_t2 = &p_hwfn->p_cxt_mngr->src_t2; @@ -810,6 +950,7 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn) if (rc) goto t2_fail; + /* Set the t2 pointers */ /* entries per page - must be a power of two */ @@ -829,7 +970,8 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn) u32 j; for (j = 0; j < ent_num - 1; j++) { - val = p_ent_phys + (j + 1) * sizeof(struct src_ent); + val = p_ent_phys + + (j + 1) * sizeof(struct src_ent); entries[j].next = OSAL_CPU_TO_BE64(val); } @@ -849,11 +991,11 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn) return rc; } -#define for_each_ilt_valid_client(pos, clients) \ - for (pos = 0; pos < MAX_ILT_CLIENTS; pos++) \ - if (!clients[pos].active) { \ - continue; \ - } else \ +#define for_each_ilt_valid_client(pos, clients) \ + for (pos = 0; pos < MAX_ILT_CLIENTS; pos++) \ + if (!(clients)[pos].active) { \ + continue; \ + } else \ /* Total number of ILT lines used by this PF */ @@ -885,24 +1027,26 @@ static void ecore_ilt_shadow_free(struct ecore_hwfn *p_hwfn) if (p_dma->virt_addr) OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, - p_dma->p_virt, - p_dma->phys_addr, p_dma->size); + p_dma->virt_addr, + p_dma->phys_addr, + p_dma->size); p_dma->virt_addr = OSAL_NULL; } OSAL_FREE(p_hwfn->p_dev, p_mngr->ilt_shadow); p_mngr->ilt_shadow = OSAL_NULL; } -static enum _ecore_status_t -ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn, - struct ecore_ilt_cli_blk *p_blk, - enum ilt_clients ilt_client, u32 start_line_offset) +static enum _ecore_status_t ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn, + struct ecore_ilt_cli_blk *p_blk, + enum ilt_clients ilt_client, + u32 start_line_offset) { struct phys_mem_desc *ilt_shadow = p_hwfn->p_cxt_mngr->ilt_shadow; u32 lines, line, sz_left, lines_to_skip, first_skipped_line; /* Special handling for RoCE that supports dynamic allocation */ - if (ilt_client == ILT_CLI_CDUT || ilt_client == ILT_CLI_TSDM) + if (ECORE_IS_RDMA_PERSONALITY(p_hwfn) && + ((ilt_client == ILT_CLI_CDUT) || ilt_client == ILT_CLI_TSDM)) return ECORE_SUCCESS; if (!p_blk->total_size) @@ -910,7 +1054,8 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn, sz_left = p_blk->total_size; lines_to_skip = p_blk->dynamic_line_cnt; - lines = DIV_ROUND_UP(sz_left, p_blk->real_size_in_page) - lines_to_skip; + lines = DIV_ROUND_UP(sz_left, p_blk->real_size_in_page) - + lines_to_skip; line = p_blk->start_line + start_line_offset - p_hwfn->p_cxt_mngr->pf_start_line; first_skipped_line = line + p_blk->dynamic_line_offset; @@ -926,7 +1071,6 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn, } size = OSAL_MIN_T(u32, sz_left, p_blk->real_size_in_page); - /* @DPDK */ #define ILT_BLOCK_ALIGN_SIZE 0x1000 p_virt = OSAL_DMA_ALLOC_COHERENT_ALIGNED(p_hwfn->p_dev, @@ -941,9 +1085,8 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn, ilt_shadow[line].size = size; DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, - "ILT shadow: Line [%d] Physical 0x%lx" - " Virtual %p Size %d\n", - line, (unsigned long)p_phys, p_virt, size); + "ILT shadow: Line [%d] Physical 0x%" PRIx64 " Virtual %p Size %d\n", + line, (u64)p_phys, p_virt, size); sz_left -= size; line++; @@ -955,7 +1098,7 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn, static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn) { - struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; + struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; 
struct ecore_ilt_client_cfg *clients = p_mngr->clients; struct ecore_ilt_cli_blk *p_blk; u32 size, i, j, k; @@ -965,7 +1108,7 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn) p_mngr->ilt_shadow = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size * sizeof(struct phys_mem_desc)); - if (!p_mngr->ilt_shadow) { + if (p_mngr->ilt_shadow == OSAL_NULL) { DP_NOTICE(p_hwfn, false, "Failed to allocate ilt shadow table\n"); rc = ECORE_NOMEM; goto ilt_shadow_fail; @@ -995,6 +1138,8 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn) } } + p_mngr->ilt_shadow_size = size; + return ECORE_SUCCESS; ilt_shadow_fail: @@ -1013,6 +1158,9 @@ static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn) p_mngr->acquired[type].max_count = 0; p_mngr->acquired[type].start_cid = 0; + if (!p_mngr->acquired_vf[type]) + continue; + for (vf = 0; vf < max_num_vfs; vf++) { OSAL_FREE(p_hwfn->p_dev, p_mngr->acquired_vf[type][vf].cid_map); @@ -1025,8 +1173,8 @@ static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn) static enum _ecore_status_t __ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type, - u32 cid_start, u32 cid_count, - struct ecore_cid_acquired_map *p_map) + u32 cid_start, u32 cid_count, + struct ecore_cid_acquired_map *p_map) { u32 size; @@ -1060,8 +1208,8 @@ ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type, u32 start_cid, p_cfg = &p_mngr->conn_cfg[type]; - /* Handle PF maps */ - p_map = &p_mngr->acquired[type]; + /* Handle PF maps */ + p_map = &p_mngr->acquired[type]; rc = __ecore_cid_map_alloc_single(p_hwfn, type, start_cid, p_cfg->cid_count, p_map); if (rc != ECORE_SUCCESS) @@ -1086,7 +1234,23 @@ static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn) u32 type; enum _ecore_status_t rc; + /* Set the CORE CIDs to be first so it can have a global range ID */ + if (ECORE_IS_E5(p_hwfn->p_dev)) { + rc = ecore_cid_map_alloc_single(p_hwfn, PROTOCOLID_CORE, + start_cid, vf_start_cid); + if (rc != ECORE_SUCCESS) + goto cid_map_fail; + + start_cid = p_mngr->conn_cfg[PROTOCOLID_CORE].cid_count; + + /* Add to VFs the required offset to be after the CORE CIDs */ + vf_start_cid = start_cid; + } + for (type = 0; type < MAX_CONN_TYPES; type++) { + if (ECORE_IS_E5(p_hwfn->p_dev) && (type == PROTOCOLID_CORE)) + continue; + rc = ecore_cid_map_alloc_single(p_hwfn, type, start_cid, vf_start_cid); if (rc != ECORE_SUCCESS) @@ -1107,6 +1271,7 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn) { struct ecore_cid_acquired_map *acquired_vf; struct ecore_ilt_client_cfg *clients; + struct ecore_hw_sriov_info *p_iov; struct ecore_cxt_mngr *p_mngr; u32 i, max_num_vfs; @@ -1116,6 +1281,9 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn) return ECORE_NOMEM; } + /* Set the cxt mangr pointer priori to further allocations */ + p_hwfn->p_cxt_mngr = p_mngr; + /* Initialize ILT client registers */ clients = p_mngr->clients; clients[ILT_CLI_CDUC].first.reg = ILT_CFG_REG(CDUC, FIRST_ILT); @@ -1144,16 +1312,22 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn) /* default ILT page size for all clients is 64K */ for (i = 0; i < MAX_ILT_CLIENTS; i++) - p_mngr->clients[i].p_size.val = ILT_DEFAULT_HW_P_SIZE; + p_mngr->clients[i].p_size.val = p_hwfn->p_dev->ilt_page_size; + /* Initialize task sizes */ /* due to removal of ISCSI/FCoE files union type0_task_context * task_type_size will be 0. So hardcoded for now. 
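Earlier in this hunk, __ecore_cid_map_alloc_single() sizes each cid_map in whole map words (see MAP_WORD_SIZE/BITS_PER_MAP_WORD in the header changes below), and _ecore_cxt_acquire_cid() hands out rel_cid + start_cid. A toy model of that map in plain C may help when reading the PF/VF split; none of these names are the driver's.

    #include <limits.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

    struct cid_map {
            uint32_t start_cid;  /* first absolute CID of the range */
            uint32_t max_count;  /* number of CIDs in the range */
            unsigned long *bits; /* one bit per relative CID */
    };

    static int cid_map_init(struct cid_map *m, uint32_t start, uint32_t count)
    {
            size_t words = (count + BITS_PER_WORD - 1) / BITS_PER_WORD;

            m->bits = calloc(words, sizeof(unsigned long));
            if (m->bits == NULL)
                    return -1;
            m->start_cid = start;
            m->max_count = count;
            return 0;
    }

    /* Find a free relative CID, mark it taken, and return the absolute
     * CID (rel_cid + start_cid, as in the patch); -1 when exhausted.
     */
    static long cid_acquire(struct cid_map *m)
    {
            uint32_t rel;

            for (rel = 0; rel < m->max_count; rel++) {
                    unsigned long *w = &m->bits[rel / BITS_PER_WORD];
                    unsigned long bit = 1UL << (rel % BITS_PER_WORD);

                    if ((*w & bit) == 0) {
                            *w |= bit;
                            return (long)(rel + m->start_cid);
                    }
            }

            return -1;
    }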
*/ p_mngr->task_type_size[0] = 512; /* @DPDK */ p_mngr->task_type_size[1] = 128; /* @DPDK */ - if (p_hwfn->p_dev->p_iov_info) - p_mngr->vf_count = p_hwfn->p_dev->p_iov_info->total_vfs; + p_mngr->conn_ctx_size = CONN_CXT_SIZE(p_hwfn); + + p_iov = p_hwfn->p_dev->p_iov_info; + if (p_iov) { + p_mngr->vf_count = p_iov->total_vfs; + p_mngr->first_vf_in_pf = p_iov->first_vf_in_pf; + } /* Initialize the dynamic ILT allocation mutex */ #ifdef CONFIG_ECORE_LOCK_ALLOC @@ -1164,9 +1338,6 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn) #endif OSAL_MUTEX_INIT(&p_mngr->mutex); - /* Set the cxt mangr pointer prior to further allocations */ - p_hwfn->p_cxt_mngr = p_mngr; - max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev); for (i = 0; i < MAX_CONN_TYPES; i++) { acquired_vf = OSAL_CALLOC(p_hwfn->p_dev, GFP_KERNEL, @@ -1177,7 +1348,7 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn) return ECORE_NOMEM; } - p_mngr->acquired_vf[i] = acquired_vf; + p_hwfn->p_cxt_mngr->acquired_vf[i] = acquired_vf; } return ECORE_SUCCESS; @@ -1185,7 +1356,7 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn) enum _ecore_status_t ecore_cxt_tables_alloc(struct ecore_hwfn *p_hwfn) { - enum _ecore_status_t rc; + enum _ecore_status_t rc; /* Allocate the ILT shadow table */ rc = ecore_ilt_shadow_alloc(p_hwfn); @@ -1194,10 +1365,11 @@ enum _ecore_status_t ecore_cxt_tables_alloc(struct ecore_hwfn *p_hwfn) goto tables_alloc_fail; } - /* Allocate the T2 table */ + /* Allocate the T2 tables */ rc = ecore_cxt_src_t2_alloc(p_hwfn); if (rc) { - DP_NOTICE(p_hwfn, false, "Failed to allocate T2 memory\n"); + DP_NOTICE(p_hwfn, false, + "Failed to allocate src T2 memory\n"); goto tables_alloc_fail; } @@ -1334,7 +1506,7 @@ void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn) static void ecore_cdu_init_common(struct ecore_hwfn *p_hwfn) { - u32 page_sz, elems_per_page, block_waste, cxt_size, cdu_params = 0; + u32 page_sz, elems_per_page, block_waste, cxt_size, cdu_params = 0; /* CDUC - connection configuration */ page_sz = p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC].p_size.val; @@ -1390,7 +1562,8 @@ static void ecore_cdu_init_pf(struct ecore_hwfn *p_hwfn) CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET, CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET, CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET, - CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET + CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET, + CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET }; static const u32 rt_type_offset_fl_arr[] = { @@ -1404,7 +1577,6 @@ static void ecore_cdu_init_pf(struct ecore_hwfn *p_hwfn) /* There are initializations only for CDUT during pf Phase */ for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++) { - /* Segment 0 */ p_seg = ecore_cxt_tid_seg_info(p_hwfn, i); if (!p_seg) continue; @@ -1415,22 +1587,40 @@ static void ecore_cdu_init_pf(struct ecore_hwfn *p_hwfn) * Page size is larger than 32K! 
*/ offset = (ILT_PAGE_IN_BYTES(p_cli->p_size.val) * - (p_cli->pf_blks[CDUT_SEG_BLK(i)].start_line - - p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES; + (p_cli->pf_blks[CDUT_SEG_BLK(i)].start_line - + p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES; cdu_seg_params = 0; SET_FIELD(cdu_seg_params, CDU_SEG_REG_TYPE, p_seg->type); SET_FIELD(cdu_seg_params, CDU_SEG_REG_OFFSET, offset); - STORE_RT_REG(p_hwfn, rt_type_offset_arr[i], cdu_seg_params); + STORE_RT_REG(p_hwfn, rt_type_offset_arr[i], + cdu_seg_params); offset = (ILT_PAGE_IN_BYTES(p_cli->p_size.val) * - (p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)].start_line - - p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES; + (p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)].start_line - + p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES; + + cdu_seg_params = 0; + SET_FIELD(cdu_seg_params, CDU_SEG_REG_TYPE, p_seg->type); + SET_FIELD(cdu_seg_params, CDU_SEG_REG_OFFSET, offset); + STORE_RT_REG(p_hwfn, rt_type_offset_fl_arr[i], + cdu_seg_params); + } + /* Init VF (single) segment */ + p_seg = ecore_cxt_tid_seg_info(p_hwfn, TASK_SEGMENT_VF); + if (p_seg) { + /* VF has a single segment so the offset is 0 by definition. + * The offset expresses where this segment starts relative to + * the VF section in CDUT. Since there is a single segment it + * will be 0 by definition + */ + offset = 0; cdu_seg_params = 0; SET_FIELD(cdu_seg_params, CDU_SEG_REG_TYPE, p_seg->type); SET_FIELD(cdu_seg_params, CDU_SEG_REG_OFFSET, offset); - STORE_RT_REG(p_hwfn, rt_type_offset_fl_arr[i], cdu_seg_params); + STORE_RT_REG(p_hwfn, rt_type_offset_arr[TASK_SEGMENT_VF], + cdu_seg_params); } } @@ -1459,12 +1649,35 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, /* CM PF */ static void ecore_cm_init_pf(struct ecore_hwfn *p_hwfn) { - STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET, - ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB)); + STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET, ecore_get_cm_pq_idx(p_hwfn, + PQ_FLAGS_LB)); +} + +#define GLB_MAX_ICID_RT_OFFSET(id) \ + DORQ_REG_GLB_MAX_ICID_ ## id ## _RT_OFFSET +#define GLB_RANGE2CONN_TYPE_RT_OFFSET(id) \ + DORQ_REG_GLB_RANGE2CONN_TYPE_ ## id ## _RT_OFFSET + +static void ecore_dq_init_common(struct ecore_hwfn *p_hwfn) +{ + struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; + u32 dq_core_max_cid; + + if (!ECORE_IS_E5(p_hwfn->p_dev)) + return; + + dq_core_max_cid = p_mngr->conn_cfg[PROTOCOLID_CORE].cid_count >> + DQ_RANGE_SHIFT; + STORE_RT_REG(p_hwfn, GLB_MAX_ICID_RT_OFFSET(0), dq_core_max_cid); + + /* Range ID #1 is an empty range */ + STORE_RT_REG(p_hwfn, GLB_MAX_ICID_RT_OFFSET(1), dq_core_max_cid); + + STORE_RT_REG(p_hwfn, GLB_RANGE2CONN_TYPE_RT_OFFSET(0), PROTOCOLID_CORE); } /* DQ PF */ -static void ecore_dq_init_pf(struct ecore_hwfn *p_hwfn) +static void ecore_dq_init_pf_e4(struct ecore_hwfn *p_hwfn) { struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; u32 dq_pf_max_cid = 0, dq_vf_max_cid = 0; @@ -1505,11 +1718,10 @@ static void ecore_dq_init_pf(struct ecore_hwfn *p_hwfn) dq_vf_max_cid += (p_mngr->conn_cfg[5].cids_per_vf >> DQ_RANGE_SHIFT); STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_5_RT_OFFSET, dq_vf_max_cid); - /* Connection types 6 & 7 are not in use, yet they must be configured - * as the highest possible connection. Not configuring them means the - * defaults will be used, and with a large number of cids a bug may - * occur, if the defaults will be smaller than dq_pf_max_cid / - * dq_vf_max_cid. + /* Connection types 6 & 7 are not in use, but still must be configured + * as the highest possible connection. 
Not configuring them means that + * the defaults will be used, and with a large number of cids a bug may + * occur, if the defaults are smaller than dq_pf_max_cid/dq_vf_max_cid. */ STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_6_RT_OFFSET, dq_pf_max_cid); STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_6_RT_OFFSET, dq_vf_max_cid); @@ -1518,6 +1730,90 @@ static void ecore_dq_init_pf(struct ecore_hwfn *p_hwfn) STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_7_RT_OFFSET, dq_vf_max_cid); } +#define PRV_MAX_ICID_RT_OFFSET(pfvf, id) \ + DORQ_REG_PRV_ ## pfvf ## _MAX_ICID_ ## id ## _RT_OFFSET +#define PRV_RANGE2CONN_TYPE_RT_OFFSET(pfvf, id) \ + DORQ_REG_PRV_ ## pfvf ## _RANGE2CONN_TYPE_ ## id ## _RT_OFFSET + +static void ecore_dq_init_pf_e5(struct ecore_hwfn *p_hwfn) +{ + struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; + u32 dq_pf_max_cid, dq_vf_max_cid, type; + + /* The private ranges should start after the CORE's global range */ + dq_pf_max_cid = p_mngr->conn_cfg[PROTOCOLID_CORE].cid_count >> + DQ_RANGE_SHIFT; + dq_vf_max_cid = dq_pf_max_cid; + + /* Range ID #2 */ + if (ECORE_IS_ISCSI_PERSONALITY(p_hwfn)) + type = PROTOCOLID_ISCSI; + else if (ECORE_IS_FCOE_PERSONALITY(p_hwfn)) + type = PROTOCOLID_FCOE; + else if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) + type = PROTOCOLID_ROCE; + else /* ETH or ETH_IWARP */ + type = PROTOCOLID_ETH; + + dq_pf_max_cid += p_mngr->conn_cfg[type].cid_count >> DQ_RANGE_SHIFT; + dq_vf_max_cid += p_mngr->conn_cfg[type].cids_per_vf >> DQ_RANGE_SHIFT; + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 2), dq_pf_max_cid); + STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(PF, 2), type); + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 2), dq_vf_max_cid); + STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(VF, 2), type); + + /* Range ID #3 */ + if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) { + dq_pf_max_cid += p_mngr->conn_cfg[PROTOCOLID_ETH].cid_count >> + DQ_RANGE_SHIFT; + dq_vf_max_cid += + p_mngr->conn_cfg[PROTOCOLID_ETH].cids_per_vf >> + DQ_RANGE_SHIFT; + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 3), + dq_pf_max_cid); + STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(PF, 3), + PROTOCOLID_ETH); + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 3), + dq_vf_max_cid); + STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(VF, 3), + PROTOCOLID_ETH); + } else if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) { + dq_pf_max_cid += p_mngr->conn_cfg[PROTOCOLID_IWARP].cid_count >> + DQ_RANGE_SHIFT; + dq_vf_max_cid += + p_mngr->conn_cfg[PROTOCOLID_IWARP].cids_per_vf >> + DQ_RANGE_SHIFT; + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 3), + dq_pf_max_cid); + STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(PF, 3), + PROTOCOLID_IWARP); + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 3), + dq_vf_max_cid); + STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(VF, 3), + PROTOCOLID_IWARP); + } else { + /* Range ID #3 is an empty range */ + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 3), + dq_pf_max_cid); + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 3), + dq_vf_max_cid); + } + + /* Range IDs #4 and #5 are empty ranges */ + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 4), dq_pf_max_cid); + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 4), dq_vf_max_cid); + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 5), dq_pf_max_cid); + STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 5), dq_vf_max_cid); +} + +static void ecore_dq_init_pf(struct ecore_hwfn *p_hwfn) +{ + if (ECORE_IS_E4(p_hwfn->p_dev)) + ecore_dq_init_pf_e4(p_hwfn); + else + ecore_dq_init_pf_e5(p_hwfn); +} + static void 
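Both DORQ flavors above program running maximums: each range register holds the cumulative ICID count so far, which is why the unused trailing ranges are still written with the final maximum instead of being left at their reset defaults. A standalone sketch of that accumulation follows; the register array and the DQ_RANGE_SHIFT value are assumptions for illustration only.

    #include <stdint.h>

    #define DQ_RANGE_SHIFT 4 /* assumed granularity, not from the patch */
    #define NUM_RANGES     8

    /* Accumulate per-type connection counts into per-range limits.
     * Types with a zero count (the unused ranges) still get the running
     * maximum, matching the comment in the hunk above.
     */
    static void program_icid_ranges(const uint32_t cid_count[NUM_RANGES],
                                    uint32_t range_limit[NUM_RANGES])
    {
            uint32_t max_cid = 0;
            unsigned int i;

            for (i = 0; i < NUM_RANGES; i++) {
                    max_cid += cid_count[i] >> DQ_RANGE_SHIFT;
                    range_limit[i] = max_cid;
            }
    }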
ecore_ilt_bounds_init(struct ecore_hwfn *p_hwfn) { struct ecore_ilt_client_cfg *ilt_clients; @@ -1529,7 +1825,8 @@ static void ecore_ilt_bounds_init(struct ecore_hwfn *p_hwfn) ilt_clients[i].first.reg, ilt_clients[i].first.val); STORE_RT_REG(p_hwfn, - ilt_clients[i].last.reg, ilt_clients[i].last.val); + ilt_clients[i].last.reg, + ilt_clients[i].last.val); STORE_RT_REG(p_hwfn, ilt_clients[i].p_size.reg, ilt_clients[i].p_size.val); @@ -1585,7 +1882,8 @@ static void ecore_ilt_vf_bounds_init(struct ecore_hwfn *p_hwfn) blk_factor = OSAL_LOG2(ILT_PAGE_IN_BYTES(p_cli->p_size.val) >> 10); if (p_cli->active) { STORE_RT_REG(p_hwfn, - PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET, blk_factor); + PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET, + blk_factor); STORE_RT_REG(p_hwfn, PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET, p_cli->pf_total_lines); @@ -1606,8 +1904,8 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn) ecore_ilt_bounds_init(p_hwfn); ecore_ilt_vf_bounds_init(p_hwfn); - p_mngr = p_hwfn->p_cxt_mngr; - p_shdw = p_mngr->ilt_shadow; + p_mngr = p_hwfn->p_cxt_mngr; + p_shdw = p_mngr->ilt_shadow; clients = p_hwfn->p_cxt_mngr->clients; for_each_ilt_valid_client(i, clients) { @@ -1616,13 +1914,13 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn) */ line = clients[i].first.val - p_mngr->pf_start_line; rt_offst = PSWRQ2_REG_ILT_MEMORY_RT_OFFSET + - clients[i].first.val * ILT_ENTRY_IN_REGS; + clients[i].first.val * ILT_ENTRY_IN_REGS; for (; line <= clients[i].last.val - p_mngr->pf_start_line; line++, rt_offst += ILT_ENTRY_IN_REGS) { u64 ilt_hw_entry = 0; - /** p_virt could be OSAL_NULL incase of dynamic + /** virt_addr could be OSAL_NULL incase of dynamic * allocation */ if (p_shdw[line].virt_addr != OSAL_NULL) { @@ -1631,12 +1929,10 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn) (p_shdw[line].phys_addr >> 12)); DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, - "Setting RT[0x%08x] from" - " ILT[0x%08x] [Client is %d] to" - " Physical addr: 0x%lx\n", + "Setting RT[0x%08x] from ILT[0x%08x] [Client is %d] to " + "Physical addr: 0x%" PRIx64 "\n", rt_offst, line, i, - (unsigned long)(p_shdw[line]. 
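The ILT entries written in this function carry the shadow page's physical address shifted right by 12, i.e. the hardware stores 4K page numbers plus a valid flag. A sketch of building such an entry; only the >> 12 and the valid bit come from the hunk, the field positions are assumptions.

    #include <stdint.h>

    /* Assumed field layout for illustration; the real masks and shifts
     * live in the ecore register headers.
     */
    #define ILT_ENTRY_VALID_SHIFT 52

    static uint64_t ilt_entry_build(uint64_t page_phys_addr)
    {
            uint64_t entry = 0;

            entry |= (page_phys_addr >> 12) & ((1ULL << 52) - 1);
            entry |= 1ULL << ILT_ENTRY_VALID_SHIFT;
            return entry;
    }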
- phys_addr >> 12)); + (u64)(p_shdw[line].phys_addr >> 12)); } STORE_RT_REG_AGG(p_hwfn, rt_offst, ilt_hw_entry); @@ -1673,65 +1969,114 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn) conn_num); } -/* Timers PF */ -#define TM_CFG_NUM_IDS_SHIFT 0 -#define TM_CFG_NUM_IDS_MASK 0xFFFFULL -#define TM_CFG_PRE_SCAN_OFFSET_SHIFT 16 -#define TM_CFG_PRE_SCAN_OFFSET_MASK 0x1FFULL -#define TM_CFG_PARENT_PF_SHIFT 25 -#define TM_CFG_PARENT_PF_MASK 0x7ULL - -#define TM_CFG_CID_PRE_SCAN_ROWS_SHIFT 30 -#define TM_CFG_CID_PRE_SCAN_ROWS_MASK 0x1FFULL +/* Timers PF - configuration memory for the connections and the tasks */ + +/* Common parts to connections and tasks */ +#define TM_CFG_NUM_IDS_SHIFT 0 +#define TM_CFG_NUM_IDS_MASK 0xFFFFULL +/* BB */ +#define TM_CFG_PRE_SCAN_OFFSET_BB_SHIFT 16 +#define TM_CFG_PRE_SCAN_OFFSET_BB_MASK 0x1FFULL +#define TM_CFG_PARENT_PF_BB_SHIFT 26 +#define TM_CFG_PARENT_PF_BB_MASK 0x7ULL +/* AH */ +#define TM_CFG_PRE_SCAN_OFFSET_AH_SHIFT 16 +#define TM_CFG_PRE_SCAN_OFFSET_AH_MASK 0x3FFULL +#define TM_CFG_PARENT_PF_AH_SHIFT 26 +#define TM_CFG_PARENT_PF_AH_MASK 0xFULL +/* E5 */ +#define TM_CFG_PRE_SCAN_OFFSET_E5_SHIFT 16 +#define TM_CFG_PRE_SCAN_OFFSET_E5_MASK 0x3FFULL +#define TM_CFG_PARENT_PF_E5_SHIFT 26 +#define TM_CFG_PARENT_PF_E5_MASK 0xFULL + +/* Connections specific */ +#define TM_CFG_CID_PRE_SCAN_ROWS_SHIFT 30 +#define TM_CFG_CID_PRE_SCAN_ROWS_MASK 0x1FFULL + +/* Tasks specific */ +#define TM_CFG_TID_OFFSET_SHIFT 30 +#define TM_CFG_TID_OFFSET_MASK 0x7FFFFULL +#define TM_CFG_TID_PRE_SCAN_ROWS_SHIFT 49 +#define TM_CFG_TID_PRE_SCAN_ROWS_MASK 0x1FFULL + +static void ecore_tm_cfg_set_parent_pf(struct ecore_dev *p_dev, u64 *cfg_word, + u8 val) +{ + if (ECORE_IS_BB(p_dev)) + SET_FIELD(*cfg_word, TM_CFG_PARENT_PF_BB, val); + else if (ECORE_IS_AH(p_dev)) + SET_FIELD(*cfg_word, TM_CFG_PARENT_PF_AH, val); + else /* E5 */ + SET_FIELD(*cfg_word, TM_CFG_PARENT_PF_E5, val); +} -#define TM_CFG_TID_OFFSET_SHIFT 30 -#define TM_CFG_TID_OFFSET_MASK 0x7FFFFULL -#define TM_CFG_TID_PRE_SCAN_ROWS_SHIFT 49 -#define TM_CFG_TID_PRE_SCAN_ROWS_MASK 0x1FFULL +static void ecore_tm_cfg_set_pre_scan_offset(struct ecore_dev *p_dev, + u64 *cfg_word, u8 val) +{ + if (ECORE_IS_BB(p_dev)) + SET_FIELD(*cfg_word, TM_CFG_PRE_SCAN_OFFSET_BB, val); + else if (ECORE_IS_AH(p_dev)) + SET_FIELD(*cfg_word, TM_CFG_PRE_SCAN_OFFSET_AH, val); + else /* E5 */ + SET_FIELD(*cfg_word, TM_CFG_PRE_SCAN_OFFSET_E5, val); +} static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn) { struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; u32 active_seg_mask = 0, tm_offset, rt_reg; + u32 *p_cfg_word_32, cfg_word_size; struct ecore_tm_iids tm_iids; u64 cfg_word; u8 i; OSAL_MEM_ZERO(&tm_iids, sizeof(tm_iids)); - ecore_cxt_tm_iids(p_hwfn, p_mngr, &tm_iids); + ecore_cxt_tm_iids(p_hwfn, &tm_iids); /* @@@TBD No pre-scan for now */ - cfg_word = 0; - SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_cids); - SET_FIELD(cfg_word, TM_CFG_PARENT_PF, p_hwfn->rel_pf_id); - SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0); + cfg_word = 0; + SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_cids); + ecore_tm_cfg_set_parent_pf(p_hwfn->p_dev, &cfg_word, p_hwfn->rel_pf_id); + ecore_tm_cfg_set_pre_scan_offset(p_hwfn->p_dev, &cfg_word, 0); + if (ECORE_IS_E4(p_hwfn->p_dev)) SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all */ + /* Each CONFIG_CONN_MEM row in E5 is 32 bits and not 64 bits as in E4 */ + p_cfg_word_32 = (u32 *)&cfg_word; + cfg_word_size = ECORE_IS_E4(p_hwfn->p_dev) ? 
sizeof(cfg_word) + : sizeof(*p_cfg_word_32); + /* Note: We assume consecutive VFs for a PF */ for (i = 0; i < p_mngr->vf_count; i++) { rt_reg = TM_REG_CONFIG_CONN_MEM_RT_OFFSET + - (sizeof(cfg_word) / sizeof(u32)) * - (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i); - STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word); + (cfg_word_size / sizeof(u32)) * + (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i); + if (ECORE_IS_E4(p_hwfn->p_dev)) + STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word); + else + STORE_RT_REG_AGG(p_hwfn, rt_reg, *p_cfg_word_32); } cfg_word = 0; SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.pf_cids); - SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0); - SET_FIELD(cfg_word, TM_CFG_PARENT_PF, 0); /* n/a for PF */ - SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all */ + ecore_tm_cfg_set_parent_pf(p_hwfn->p_dev, &cfg_word, 0); /* n/a for PF */ + ecore_tm_cfg_set_pre_scan_offset(p_hwfn->p_dev, &cfg_word, 0); + if (ECORE_IS_E4(p_hwfn->p_dev)) + SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all */ rt_reg = TM_REG_CONFIG_CONN_MEM_RT_OFFSET + - (sizeof(cfg_word) / sizeof(u32)) * - (NUM_OF_VFS(p_hwfn->p_dev) + p_hwfn->rel_pf_id); - STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word); + (cfg_word_size / sizeof(u32)) * + (NUM_OF_VFS(p_hwfn->p_dev) + p_hwfn->rel_pf_id); + if (ECORE_IS_E4(p_hwfn->p_dev)) + STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word); + else + STORE_RT_REG_AGG(p_hwfn, rt_reg, *p_cfg_word_32); - /* enable scan */ + /* enable scan for PF */ STORE_RT_REG(p_hwfn, TM_REG_PF_ENABLE_CONN_RT_OFFSET, - tm_iids.pf_cids ? 0x1 : 0x0); - - /* @@@TBD how to enable the scan for the VFs */ + tm_iids.pf_cids ? 0x1 : 0x0); tm_offset = tm_iids.per_vf_cids; @@ -1739,14 +2084,15 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn) for (i = 0; i < p_mngr->vf_count; i++) { cfg_word = 0; SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_tids); - SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0); - SET_FIELD(cfg_word, TM_CFG_PARENT_PF, p_hwfn->rel_pf_id); + ecore_tm_cfg_set_pre_scan_offset(p_hwfn->p_dev, &cfg_word, 0); + ecore_tm_cfg_set_parent_pf(p_hwfn->p_dev, &cfg_word, + p_hwfn->rel_pf_id); SET_FIELD(cfg_word, TM_CFG_TID_OFFSET, tm_offset); SET_FIELD(cfg_word, TM_CFG_TID_PRE_SCAN_ROWS, (u64)0); rt_reg = TM_REG_CONFIG_TASK_MEM_RT_OFFSET + - (sizeof(cfg_word) / sizeof(u32)) * - (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i); + (sizeof(cfg_word) / sizeof(u32)) * + (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i); STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word); } @@ -1755,15 +2101,15 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn) for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++) { cfg_word = 0; SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.pf_tids[i]); - SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0); - SET_FIELD(cfg_word, TM_CFG_PARENT_PF, 0); + ecore_tm_cfg_set_pre_scan_offset(p_hwfn->p_dev, &cfg_word, 0); + ecore_tm_cfg_set_parent_pf(p_hwfn->p_dev, &cfg_word, 0); SET_FIELD(cfg_word, TM_CFG_TID_OFFSET, tm_offset); SET_FIELD(cfg_word, TM_CFG_TID_PRE_SCAN_ROWS, (u64)0); rt_reg = TM_REG_CONFIG_TASK_MEM_RT_OFFSET + - (sizeof(cfg_word) / sizeof(u32)) * - (NUM_OF_VFS(p_hwfn->p_dev) + - p_hwfn->rel_pf_id * NUM_TASK_PF_SEGMENTS + i); + (sizeof(cfg_word) / sizeof(u32)) * + (NUM_OF_VFS(p_hwfn->p_dev) + + p_hwfn->rel_pf_id * NUM_TASK_PF_SEGMENTS + i); STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word); active_seg_mask |= (tm_iids.pf_tids[i] ? 
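A recurring detail in this TM hunk: each CONFIG_CONN_MEM/CONFIG_TASK_MEM row is a 64-bit word on E4 but a single 32-bit word on E5, so the stride between per-VF entries, expressed in 32-bit runtime registers, halves. A minimal sketch of the offset computation with hypothetical names:

    #include <stdbool.h>
    #include <stdint.h>

    /* Stride between TM config rows in units of 32-bit RT registers:
     * two registers per row on E4, one on E5.
     */
    static uint32_t tm_cfg_rt_offset(uint32_t base, bool is_e4, uint32_t idx)
    {
            uint32_t row_bytes = is_e4 ? sizeof(uint64_t) : sizeof(uint32_t);

            return base + (row_bytes / sizeof(uint32_t)) * idx;
    }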
(1 << i) : 0); @@ -1771,9 +2117,43 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn) tm_offset += tm_iids.pf_tids[i]; } + if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) + active_seg_mask = 0; + STORE_RT_REG(p_hwfn, TM_REG_PF_ENABLE_TASK_RT_OFFSET, active_seg_mask); +} - /* @@@TBD how to enable the scan for the VFs */ +void ecore_tm_clear_vf_ilt(struct ecore_hwfn *p_hwfn, u16 vf_idx) +{ + struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; + struct ecore_ilt_client_cfg *p_cli; + struct phys_mem_desc *shadow_line; + struct ecore_ilt_cli_blk *p_blk; + u32 shadow_start_line, line; + u32 i; + + p_cli = &p_mngr->clients[ILT_CLI_TM]; + p_blk = &p_cli->vf_blks[0]; + line = p_blk->start_line + vf_idx * p_cli->vf_total_lines; + shadow_start_line = line - p_mngr->pf_start_line; + + for (i = 0; i < p_cli->vf_total_lines; i++) { + shadow_line = &p_mngr->ilt_shadow[shadow_start_line + i]; + + DP_VERBOSE(p_hwfn, ECORE_MSG_CXT, + "zeroing ILT for VF %d line %d address %p size %d\n", + vf_idx, i, shadow_line->virt_addr, shadow_line->size); + + if (shadow_line->virt_addr != OSAL_NULL) + OSAL_MEM_ZERO(shadow_line->virt_addr, shadow_line->size); + } +} + +static void ecore_prs_init_common(struct ecore_hwfn *p_hwfn) +{ + if ((p_hwfn->hw_info.personality == ECORE_PCI_FCOE) && + p_hwfn->pf_params.fcoe_pf_params.is_target) + STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET, 0); } static void ecore_prs_init_pf(struct ecore_hwfn *p_hwfn) @@ -1781,6 +2161,7 @@ static void ecore_prs_init_pf(struct ecore_hwfn *p_hwfn) struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; struct ecore_conn_type_cfg *p_fcoe; struct ecore_tid_seg *p_tid; + u32 max_tid; p_fcoe = &p_mngr->conn_cfg[PROTOCOLID_FCOE]; @@ -1789,15 +2170,23 @@ static void ecore_prs_init_pf(struct ecore_hwfn *p_hwfn) return; p_tid = &p_fcoe->tid_seg[ECORE_CXT_FCOE_TID_SEG]; - STORE_RT_REG_AGG(p_hwfn, - PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET, - p_tid->count); + max_tid = p_tid->count - 1; + if (p_hwfn->pf_params.fcoe_pf_params.is_target) { + STORE_RT_REG_AGG(p_hwfn, + PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET, + max_tid); + } else { + STORE_RT_REG_AGG(p_hwfn, + PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET, + max_tid); + } } void ecore_cxt_hw_init_common(struct ecore_hwfn *p_hwfn) { - /* CDU configuration */ ecore_cdu_init_common(p_hwfn); + ecore_prs_init_common(p_hwfn); + ecore_dq_init_common(p_hwfn); } void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) @@ -1807,7 +2196,10 @@ void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) ecore_dq_init_pf(p_hwfn); ecore_cdu_init_pf(p_hwfn); ecore_ilt_init_pf(p_hwfn); - ecore_src_init_pf(p_hwfn); + + if (!ECORE_IS_E5(p_hwfn->p_dev)) + ecore_src_init_pf(p_hwfn); + ecore_tm_init_pf(p_hwfn); ecore_prs_init_pf(p_hwfn); } @@ -1850,7 +2242,7 @@ enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn, return ECORE_NORESOURCES; } - OSAL_SET_BIT(rel_cid, p_map->cid_map); + OSAL_NON_ATOMIC_SET_BIT(rel_cid, p_map->cid_map); *p_cid = rel_cid + p_map->start_cid; @@ -1890,15 +2282,16 @@ static bool ecore_cxt_test_cid_acquired(struct ecore_hwfn *p_hwfn, break; } } + if (*p_type == MAX_CONN_TYPES) { - DP_NOTICE(p_hwfn, true, "Invalid CID %d vfid %02x", cid, vfid); + DP_NOTICE(p_hwfn, false, "Invalid CID %d vfid %02x\n", cid, vfid); goto fail; } rel_cid = cid - (*pp_map)->start_cid; - if (!OSAL_GET_BIT(rel_cid, (*pp_map)->cid_map)) { - DP_NOTICE(p_hwfn, true, - "CID %d [vifd %02x] not acquired", cid, vfid); + if (!OSAL_TEST_BIT(rel_cid, (*pp_map)->cid_map)) { + 
DP_NOTICE(p_hwfn, false, + "CID %d [vifd %02x] not acquired\n", cid, vfid); goto fail; } @@ -1975,29 +2368,38 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn, return ECORE_INVAL; p_info->p_cxt = (u8 *)p_mngr->ilt_shadow[line].virt_addr + - p_info->iid % cxts_per_p * conn_cxt_size; + p_info->iid % cxts_per_p * conn_cxt_size; DP_VERBOSE(p_hwfn, (ECORE_MSG_ILT | ECORE_MSG_CXT), - "Accessing ILT shadow[%d]: CXT pointer is at %p (for iid %d)\n", - (p_info->iid / cxts_per_p), p_info->p_cxt, p_info->iid); + "Accessing ILT shadow[%d]: CXT pointer is at %p (for iid %d)\n", + (p_info->iid / cxts_per_p), p_info->p_cxt, p_info->iid); return ECORE_SUCCESS; } -enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn) +enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn, + u32 rdma_tasks, u32 eth_tasks) { + struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; + /* Set the number of required CORE connections */ - u32 core_cids = 1; /* SPQ */ + enum _ecore_status_t rc = ECORE_SUCCESS; + struct ecore_ptt *p_ptt; + u32 core_cids = 1; /* SPQ */ + u32 vf_core_cids = 0; - ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_CORE, core_cids, 0); + ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_CORE, core_cids, + vf_core_cids); switch (p_hwfn->hw_info.personality) { case ECORE_PCI_ETH: - { + { u32 count = 0; struct ecore_eth_pf_params *p_params = - &p_hwfn->pf_params.eth_pf_params; + &p_hwfn->pf_params.eth_pf_params; + + p_mngr->task_ctx_size = TYPE1_TASK_CXT_SIZE(p_hwfn); if (!p_params->num_vf_cons) p_params->num_vf_cons = ETH_PF_PARAMS_VF_CONS_DEFAULT; @@ -2005,17 +2407,108 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn) p_params->num_cons, p_params->num_vf_cons); + ecore_cxt_set_proto_tid_count(p_hwfn, PROTOCOLID_ETH, + ECORE_CXT_ETH_TID_SEG, + ETH_CDU_TASK_SEG_TYPE, + eth_tasks, false); + +#ifdef CONFIG_ECORE_FS + if (ECORE_IS_E5(p_hwfn->p_dev)) + p_hwfn->fs_info.e5->tid_count = eth_tasks; +#endif + count = p_params->num_arfs_filters; - if (!OSAL_GET_BIT(ECORE_MF_DISABLE_ARFS, + if (!OSAL_TEST_BIT(ECORE_MF_DISABLE_ARFS, &p_hwfn->p_dev->mf_bits)) p_hwfn->p_cxt_mngr->arfs_count = count; break; - } + } default: + rc = ECORE_INVAL; + } + + return rc; +} + +enum _ecore_status_t ecore_cxt_get_tid_mem_info(struct ecore_hwfn *p_hwfn, + struct ecore_tid_mem *p_info) +{ + struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; + u32 proto, seg, total_lines, i, shadow_line; + struct ecore_ilt_client_cfg *p_cli; + struct ecore_ilt_cli_blk *p_fl_seg; + struct ecore_tid_seg *p_seg_info; + + /* Verify the personality */ + switch (p_hwfn->hw_info.personality) { + case ECORE_PCI_FCOE: + proto = PROTOCOLID_FCOE; + seg = ECORE_CXT_FCOE_TID_SEG; + break; + case ECORE_PCI_ISCSI: + proto = PROTOCOLID_ISCSI; + seg = ECORE_CXT_ISCSI_TID_SEG; + break; + default: + return ECORE_INVAL; + } + + p_cli = &p_mngr->clients[ILT_CLI_CDUT]; + if (!p_cli->active) + return ECORE_INVAL; + + p_seg_info = &p_mngr->conn_cfg[proto].tid_seg[seg]; + if (!p_seg_info->has_fl_mem) return ECORE_INVAL; + + p_fl_seg = &p_cli->pf_blks[CDUT_FL_SEG_BLK(seg, PF)]; + total_lines = DIV_ROUND_UP(p_fl_seg->total_size, + p_fl_seg->real_size_in_page); + + for (i = 0; i < total_lines; i++) { + shadow_line = i + p_fl_seg->start_line - + p_hwfn->p_cxt_mngr->pf_start_line; + p_info->blocks[i] = p_mngr->ilt_shadow[shadow_line].virt_addr; } + p_info->waste = ILT_PAGE_IN_BYTES(p_cli->p_size.val) - + p_fl_seg->real_size_in_page; + p_info->tid_size = p_mngr->task_type_size[p_seg_info->type]; + 
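ecore_cxt_get_tid_mem_info() above and ecore_cxt_get_task_ctx() later in this patch rely on the same geometry: an ILT page holds a whole number of fixed-size task elements, and the tail of the page is waste. A standalone sketch of the per-TID lookup under those rules; the names are illustrative, not the driver's.

    #include <stdint.h>

    /* per_page drops the wasted tail automatically because the division
     * truncates; waste = page_bytes - per_page * tid_size.
     */
    static void *task_ctx_ptr(void *const pages[], uint32_t page_bytes,
                              uint32_t tid_size, uint32_t tid)
    {
            uint32_t per_page = page_bytes / tid_size;
            uint8_t *page = pages[tid / per_page];

            return page + (tid % per_page) * tid_size;
    }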
p_info->num_tids_per_block = p_fl_seg->real_size_in_page / + p_info->tid_size; + + return ECORE_SUCCESS; +} + +static enum _ecore_status_t +ecore_cxt_get_iid_info(struct ecore_hwfn *p_hwfn, + enum ecore_cxt_elem_type elem_type, + struct ecore_ilt_client_cfg **pp_cli, + struct ecore_ilt_cli_blk **pp_blk, + u32 *elem_size, bool is_vf) +{ + struct ecore_ilt_client_cfg *p_cli; + + switch (elem_type) { + case ECORE_ELEM_CXT: + p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC]; + *elem_size = CONN_CXT_SIZE(p_hwfn); + *pp_blk = is_vf ? &p_cli->vf_blks[CDUC_BLK] : + &p_cli->pf_blks[CDUC_BLK]; + break; + case ECORE_ELEM_ETH_TASK: + p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT]; + *elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn); + *pp_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ETH_TID_SEG)]; + break; + default: + DP_NOTICE(p_hwfn, false, + "ECORE_INVALID elem type = %d", elem_type); + return ECORE_INVAL; + } + + *pp_cli = p_cli; return ECORE_SUCCESS; } @@ -2026,8 +2519,13 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn) enum _ecore_status_t ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn, enum ecore_cxt_elem_type elem_type, - u32 iid) + u32 iid, u8 vf_id) { + /* TODO + * Check to see if we need to do anything differeny if this is + * called on behalf of VF. + */ + u32 reg_offset, shadow_line, elem_size, hw_p_size, elems_per_p, line; struct ecore_ilt_client_cfg *p_cli; struct ecore_ilt_cli_blk *p_blk; @@ -2036,33 +2534,25 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn, u64 ilt_hw_entry; void *p_virt; enum _ecore_status_t rc = ECORE_SUCCESS; + bool is_vf = (vf_id != ECORE_CXT_PF_CID); - switch (elem_type) { - case ECORE_ELEM_CXT: - p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC]; - elem_size = CONN_CXT_SIZE(p_hwfn); - p_blk = &p_cli->pf_blks[CDUC_BLK]; - break; - case ECORE_ELEM_SRQ: - p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM]; - elem_size = SRQ_CXT_SIZE; - p_blk = &p_cli->pf_blks[SRQ_BLK]; - break; - case ECORE_ELEM_TASK: - p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT]; - elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn); - p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)]; - break; - default: - DP_NOTICE(p_hwfn, false, - "ECORE_INVALID elem type = %d", elem_type); - return ECORE_INVAL; - } + rc = ecore_cxt_get_iid_info(p_hwfn, elem_type, &p_cli, &p_blk, + &elem_size, is_vf); + if (rc) + return rc; /* Calculate line in ilt */ hw_p_size = p_cli->p_size.val; elems_per_p = ILT_PAGE_IN_BYTES(hw_p_size) / elem_size; - line = p_blk->start_line + (iid / elems_per_p); + if (is_vf) + /* start_line - where the VF sections starts (p_blk is VF's one) + * (vf_id * p_cli->vf_total_lines) - Where this VF starts + */ + line = p_blk->start_line + + (vf_id * p_cli->vf_total_lines) + (iid / elems_per_p); + else + line = p_blk->start_line + (iid / elems_per_p); + shadow_line = line - p_hwfn->p_cxt_mngr->pf_start_line; /* If line is already allocated, do nothing, otherwise allocate it and @@ -2070,7 +2560,9 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn, * This section can be run in parallel from different contexts and thus * a mutex protection is needed. 
*/ - +#ifdef _NTDDK_ +#pragma warning(suppress : 28121) +#endif OSAL_MUTEX_ACQUIRE(&p_hwfn->p_cxt_mngr->mutex); if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr) @@ -2106,14 +2598,13 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn, SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL); SET_FIELD(ilt_hw_entry, ILT_ENTRY_PHY_ADDR, - (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr >> 12)); - -/* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */ + (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr >> + 12)); + /* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */ ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&ilt_hw_entry, reg_offset, sizeof(ilt_hw_entry) / sizeof(u32), OSAL_NULL /* default parameters */); - out1: ecore_ptt_release(p_hwfn, p_ptt); out0: @@ -2125,77 +2616,119 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn, /* This function is very RoCE oriented, if another protocol in the future * will want this feature we'll need to modify the function to be more generic */ -static enum _ecore_status_t +enum _ecore_status_t ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn, enum ecore_cxt_elem_type elem_type, - u32 start_iid, u32 count) + u32 start_iid, u32 count, u8 vf_id) { - u32 start_line, end_line, shadow_start_line, shadow_end_line; + u32 end_iid, start_line, end_line, shadow_start_line, shadow_end_line; + u32 start_offset, end_offset, start_iid_offset, end_iid_offset; u32 reg_offset, elem_size, hw_p_size, elems_per_p; + bool b_skip_start = false, b_skip_end = false; + bool is_vf = (vf_id != ECORE_CXT_PF_CID); struct ecore_ilt_client_cfg *p_cli; struct ecore_ilt_cli_blk *p_blk; - u32 end_iid = start_iid + count; + struct phys_mem_desc *ilt_page; struct ecore_ptt *p_ptt; u64 ilt_hw_entry = 0; - u32 i; + u32 i, abs_line; + enum _ecore_status_t rc = ECORE_SUCCESS; - switch (elem_type) { - case ECORE_ELEM_CXT: - p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC]; - elem_size = CONN_CXT_SIZE(p_hwfn); - p_blk = &p_cli->pf_blks[CDUC_BLK]; - break; - case ECORE_ELEM_SRQ: - p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM]; - elem_size = SRQ_CXT_SIZE; - p_blk = &p_cli->pf_blks[SRQ_BLK]; - break; - case ECORE_ELEM_TASK: - p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT]; - elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn); - p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)]; - break; - default: - DP_NOTICE(p_hwfn, false, - "ECORE_INVALID elem type = %d", elem_type); - return ECORE_INVAL; - } + /* in case this client has no ILT lines, no need to free anything */ + if (count == 0) + return ECORE_SUCCESS; - /* Calculate line in ilt */ + rc = ecore_cxt_get_iid_info(p_hwfn, elem_type, &p_cli, &p_blk, + &elem_size, is_vf); + if (rc) + return rc; + + /* Calculate lines in ILT. + * Skip the start line if 'start_iid' is not the first element in page. + * Skip the end line if 'end_iid' is not the last element in page. 
+ */ hw_p_size = p_cli->p_size.val; elems_per_p = ILT_PAGE_IN_BYTES(hw_p_size) / elem_size; + end_iid = start_iid + count - 1; + start_line = p_blk->start_line + (start_iid / elems_per_p); end_line = p_blk->start_line + (end_iid / elems_per_p); - if (((end_iid + 1) / elems_per_p) != (end_iid / elems_per_p)) - end_line--; + + if (is_vf) { + start_line += (vf_id * p_cli->vf_total_lines); + end_line += (vf_id * p_cli->vf_total_lines); + } + + if (start_iid % elems_per_p) + b_skip_start = true; + + if ((end_iid % elems_per_p) != (elems_per_p - 1)) + b_skip_end = true; + + start_iid_offset = (start_iid % elems_per_p) * elem_size; + end_iid_offset = ((end_iid % elems_per_p) + 1) * elem_size; shadow_start_line = start_line - p_hwfn->p_cxt_mngr->pf_start_line; shadow_end_line = end_line - p_hwfn->p_cxt_mngr->pf_start_line; p_ptt = ecore_ptt_acquire(p_hwfn); if (!p_ptt) { - DP_NOTICE(p_hwfn, false, - "ECORE_TIME_OUT on ptt acquire - dynamic allocation"); + DP_NOTICE(p_hwfn, false, "ECORE_TIME_OUT on ptt acquire - dynamic allocation"); return ECORE_TIMEOUT; } - for (i = shadow_start_line; i < shadow_end_line; i++) { - if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr) + /* This piece of code takes care of freeing the ILT specific range, as + * well as setting it to zero. + * In case the start or the end lines of the range share other iids, + * they should not be freed, but only be set to zero. + * The reason for zeroing the lines is to prevent future access to the + * old iids. + * + * For example, lets assume the RDMA tids of VF0 occupy 3.5 lines. + * Now we run a test which uses all the tids and then perform VF FLR. + * During the FLR we will free the first 3 lines but not the 4th. + * If we won't zero the first half of the 4th line, a new VF0 might try + * to use the old tids which are stored there, and this will lead to an + * error. + */ + for (i = shadow_start_line; i <= shadow_end_line; i++) { + ilt_page = &p_hwfn->p_cxt_mngr->ilt_shadow[i]; + + if (!ilt_page->virt_addr) { + DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, + "Virtual address of ILT shadow line %u is NULL\n", i); + continue; + } + + start_offset = (i == shadow_start_line && b_skip_start) ? + start_iid_offset : 0; + end_offset = (i == shadow_end_line && b_skip_end) ? + end_iid_offset : ilt_page->size; + + OSAL_MEM_ZERO((u8 *)ilt_page->virt_addr + start_offset, + end_offset - start_offset); + + DP_VERBOSE(p_hwfn, ECORE_MSG_ILT, + "Zeroing shadow line %u start offset %x end offset %x\n", + i, start_offset, end_offset); + + if (end_offset - start_offset < ilt_page->size) continue; OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, - p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr, - p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr, - p_hwfn->p_cxt_mngr->ilt_shadow[i].size); + ilt_page->virt_addr, + ilt_page->phys_addr, + ilt_page->size); - p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr = OSAL_NULL; - p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr = 0; - p_hwfn->p_cxt_mngr->ilt_shadow[i].size = 0; + ilt_page->virt_addr = OSAL_NULL; + ilt_page->phys_addr = 0; + ilt_page->size = 0; /* compute absolute offset */ + abs_line = p_hwfn->p_cxt_mngr->pf_start_line + i; reg_offset = PSWRQ2_REG_ILT_MEMORY + - ((start_line++) * ILT_REG_SIZE_IN_BYTES * - ILT_ENTRY_IN_REGS); + (abs_line * ILT_REG_SIZE_IN_BYTES * + ILT_ENTRY_IN_REGS); /* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a * wide-bus. 
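The range-free logic above has one subtlety worth restating: pages shared with neighbouring iids are only zeroed over the freed sub-range, while fully covered pages are zeroed whole and then released back to the OS. A standalone sketch of the boundary offsets for the first and last page, using the same modulo arithmetic as the hunk (names are illustrative):

    #include <stdint.h>

    /* Byte offsets of the freed region inside its first and last ILT
     * pages. A non-zero start_off means the first page is shared; an
     * end_off below the page size means the last page is shared.
     */
    static void freed_range_offsets(uint32_t start_iid, uint32_t count,
                                    uint32_t elems_per_page,
                                    uint32_t elem_size,
                                    uint32_t *start_off, uint32_t *end_off)
    {
            uint32_t end_iid = start_iid + count - 1;

            *start_off = (start_iid % elems_per_page) * elem_size;
            *end_off = ((end_iid % elems_per_page) + 1) * elem_size;
    }

Zeroing the shared portions rather than skipping them is what prevents a re-created VF from seeing stale task contexts after FLR, per the RDMA example in the comment above.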
@@ -2212,6 +2745,75 @@ ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } +enum _ecore_status_t ecore_cxt_get_task_ctx(struct ecore_hwfn *p_hwfn, + u32 tid, + u8 ctx_type, + void **pp_task_ctx) +{ + struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; + struct ecore_ilt_client_cfg *p_cli; + struct ecore_tid_seg *p_seg_info; + struct ecore_ilt_cli_blk *p_seg; + u32 num_tids_per_block; + u32 tid_size, ilt_idx; + u32 total_lines; + u32 proto, seg; + + /* Verify the personality */ + switch (p_hwfn->hw_info.personality) { + case ECORE_PCI_FCOE: + proto = PROTOCOLID_FCOE; + seg = ECORE_CXT_FCOE_TID_SEG; + break; + case ECORE_PCI_ISCSI: + proto = PROTOCOLID_ISCSI; + seg = ECORE_CXT_ISCSI_TID_SEG; + break; + case ECORE_PCI_ETH_RDMA: + case ECORE_PCI_ETH_IWARP: + case ECORE_PCI_ETH_ROCE: + case ECORE_PCI_ETH: + /* All ETH personalities refer to Ethernet TIDs since RDMA does + * not use this API. + */ + proto = PROTOCOLID_ETH; + seg = ECORE_CXT_ETH_TID_SEG; + break; + default: + return ECORE_INVAL; + } + + p_cli = &p_mngr->clients[ILT_CLI_CDUT]; + if (!p_cli->active) + return ECORE_INVAL; + + p_seg_info = &p_mngr->conn_cfg[proto].tid_seg[seg]; + + if (ctx_type == ECORE_CTX_WORKING_MEM) { + p_seg = &p_cli->pf_blks[CDUT_SEG_BLK(seg)]; + } else if (ctx_type == ECORE_CTX_FL_MEM) { + if (!p_seg_info->has_fl_mem) + return ECORE_INVAL; + p_seg = &p_cli->pf_blks[CDUT_FL_SEG_BLK(seg, PF)]; + } else { + return ECORE_INVAL; + } + total_lines = DIV_ROUND_UP(p_seg->total_size, + p_seg->real_size_in_page); + tid_size = p_mngr->task_type_size[p_seg_info->type]; + num_tids_per_block = p_seg->real_size_in_page / tid_size; + + if (total_lines < tid / num_tids_per_block) + return ECORE_INVAL; + + ilt_idx = tid / num_tids_per_block + p_seg->start_line - + p_mngr->pf_start_line; + *pp_task_ctx = (u8 *)p_mngr->ilt_shadow[ilt_idx].virt_addr + + (tid % num_tids_per_block) * tid_size; + + return ECORE_SUCCESS; +} + static u16 ecore_blk_calculate_pages(struct ecore_ilt_cli_blk *p_blk) { if (p_blk->real_size_in_page == 0) diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h index cd0e32e3b..fe61a8b8f 100644 --- a/drivers/net/qede/base/ecore_cxt.h +++ b/drivers/net/qede/base/ecore_cxt.h @@ -1,25 +1,35 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef _ECORE_CID_ #define _ECORE_CID_ #include "ecore_hsi_common.h" +#include "ecore_hsi_eth.h" #include "ecore_proto_if.h" #include "ecore_cxt_api.h" -/* Tasks segments definitions */ -#define ECORE_CXT_ISCSI_TID_SEG PROTOCOLID_ISCSI /* 0 */ -#define ECORE_CXT_FCOE_TID_SEG PROTOCOLID_FCOE /* 1 */ -#define ECORE_CXT_ROCE_TID_SEG PROTOCOLID_ROCE /* 2 */ +/* Tasks segments definitions (keeping this numbering is necessary) */ +#define ECORE_CXT_ISCSI_TID_SEG 0 /* PROTOCOLID_ISCSI */ +#define ECORE_CXT_FCOE_TID_SEG 1 /* PROTOCOLID_FCOE */ +#define ECORE_CXT_ROCE_TID_SEG 2 /* PROTOCOLID_ROCE */ +#define ECORE_CXT_ETH_TID_SEG 3 enum ecore_cxt_elem_type { ECORE_ELEM_CXT, ECORE_ELEM_SRQ, - ECORE_ELEM_TASK + ECORE_ELEM_RDMA_TASK, + ECORE_ELEM_ETH_TASK, + ECORE_ELEM_XRC_SRQ, +}; + +/* @DPDK */ +enum ecore_iov_is_vf_or_pf { + IOV_PF = 0, /* This is a PF instance. */ + IOV_VF = 1 /* This is a VF instance. 
*/ }; u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn, @@ -27,11 +37,12 @@ u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn, u32 *vf_cid); u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn, - enum protocol_type type); + enum protocol_type type, + u8 vf_id); u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn, - enum protocol_type type); -u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn); + enum protocol_type type, + u8 vf_id); /** * @brief ecore_cxt_set_pf_params - Set the PF params for cxt init @@ -40,16 +51,27 @@ u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn); * * @return enum _ecore_status_t */ -enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn); +enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn, + u32 rdma_tasks, u32 eth_tasks); /** * @brief ecore_cxt_cfg_ilt_compute - compute ILT init parameters * * @param p_hwfn + * @param last_line * * @return enum _ecore_status_t */ -enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn); +enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn, + u32 *last_line); + +/** + * @brief ecore_cxt_cfg_ilt_compute_excess - how many lines can be decreased + * + * @param p_hwfn + * @param used_lines + */ +u32 ecore_cxt_cfg_ilt_compute_excess(struct ecore_hwfn *p_hwfn, u32 used_lines); /** * @brief ecore_cxt_mngr_alloc - Allocate and init the context manager struct @@ -68,8 +90,7 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn); void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn); /** - * @brief ecore_cxt_tables_alloc - Allocate ILT shadow, Searcher T2, acquired - * map + * @brief ecore_cxt_tables_alloc - Allocate ILT shadow, Searcher T2, acquired map * * @param p_hwfn * @@ -85,8 +106,7 @@ enum _ecore_status_t ecore_cxt_tables_alloc(struct ecore_hwfn *p_hwfn); void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn); /** - * @brief ecore_cxt_hw_init_common - Initailze ILT and DQ, common phase, per - * path. + * @brief ecore_cxt_hw_init_common - Initailze ILT and DQ, common phase, per path. * * @param p_hwfn */ @@ -121,7 +141,16 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); -#define ECORE_CXT_PF_CID (0xff) +/** + * @brief Reconfigures QM from a non-sleepable context. + * + * @param p_hwfn + * @param p_ptt + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t ecore_qm_reconf_intr(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); /** * @brief ecore_cxt_release - Release a cid @@ -177,28 +206,39 @@ enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn, * @param p_hwfn * @param elem_type * @param iid + * @param vf_id * * @return enum _ecore_status_t */ enum _ecore_status_t ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn, enum ecore_cxt_elem_type elem_type, - u32 iid); + u32 iid, u8 vf_id); /** - * @brief ecore_cxt_free_proto_ilt - function frees ilt pages - * associated with the protocol passed. + * @brief ecore_cxt_free_ilt_range - function frees ilt pages + * associated with the protocol and element type passed. 
* * @param p_hwfn * @param proto * * @return enum _ecore_status_t */ -enum _ecore_status_t ecore_cxt_free_proto_ilt(struct ecore_hwfn *p_hwfn, - enum protocol_type proto); +enum _ecore_status_t +ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn, + enum ecore_cxt_elem_type elem_type, + u32 start_iid, u32 count, u8 vf_id); #define ECORE_CTX_WORKING_MEM 0 #define ECORE_CTX_FL_MEM 1 +enum _ecore_status_t ecore_cxt_get_task_ctx(struct ecore_hwfn *p_hwfn, + u32 tid, + u8 ctx_type, + void **task_ctx); + +u32 ecore_cxt_get_ilt_page_size(struct ecore_hwfn *p_hwfn); + +u32 ecore_cxt_get_total_srq_count(struct ecore_hwfn *p_hwfn); /* Max number of connection types in HW (DQ/CDU etc.) */ #define MAX_CONN_TYPES PROTOCOLID_COMMON @@ -206,20 +246,20 @@ enum _ecore_status_t ecore_cxt_free_proto_ilt(struct ecore_hwfn *p_hwfn, #define NUM_TASK_PF_SEGMENTS 4 #define NUM_TASK_VF_SEGMENTS 1 -/* PF per protocol configuration object */ +/* PF per protocl configuration object */ #define TASK_SEGMENTS (NUM_TASK_PF_SEGMENTS + NUM_TASK_VF_SEGMENTS) #define TASK_SEGMENT_VF (NUM_TASK_PF_SEGMENTS) struct ecore_tid_seg { - u32 count; - u8 type; - bool has_fl_mem; + u32 count; + u8 type; + bool has_fl_mem; }; struct ecore_conn_type_cfg { - u32 cid_count; - u32 cids_per_vf; - struct ecore_tid_seg tid_seg[TASK_SEGMENTS]; + u32 cid_count; + u32 cids_per_vf; + struct ecore_tid_seg tid_seg[TASK_SEGMENTS]; }; /* ILT Client configuration, @@ -240,7 +280,7 @@ struct ilt_cfg_pair { }; struct ecore_ilt_cli_blk { - u32 total_size; /* 0 means not active */ + u32 total_size; /* 0 means not active */ u32 real_size_in_page; u32 start_line; u32 dynamic_line_offset; @@ -248,29 +288,29 @@ struct ecore_ilt_cli_blk { }; struct ecore_ilt_client_cfg { - bool active; + bool active; /* ILT boundaries */ - struct ilt_cfg_pair first; - struct ilt_cfg_pair last; - struct ilt_cfg_pair p_size; + struct ilt_cfg_pair first; + struct ilt_cfg_pair last; + struct ilt_cfg_pair p_size; /* ILT client blocks for PF */ - struct ecore_ilt_cli_blk pf_blks[ILT_CLI_PF_BLOCKS]; - u32 pf_total_lines; + struct ecore_ilt_cli_blk pf_blks[ILT_CLI_PF_BLOCKS]; + u32 pf_total_lines; /* ILT client blocks for VFs */ - struct ecore_ilt_cli_blk vf_blks[ILT_CLI_VF_BLOCKS]; - u32 vf_total_lines; + struct ecore_ilt_cli_blk vf_blks[ILT_CLI_VF_BLOCKS]; + u32 vf_total_lines; }; #define MAP_WORD_SIZE sizeof(unsigned long) #define BITS_PER_MAP_WORD (MAP_WORD_SIZE * 8) struct ecore_cid_acquired_map { - u32 start_cid; - u32 max_count; - u32 *cid_map; + u32 start_cid; + u32 max_count; + u32 *cid_map; /* @DPDK */ }; struct ecore_src_t2 { @@ -281,7 +321,7 @@ struct ecore_src_t2 { }; struct ecore_cxt_mngr { - /* Per protocol configuration */ + /* Per protocl configuration */ struct ecore_conn_type_cfg conn_cfg[MAX_CONN_TYPES]; /* computed ILT structure */ @@ -300,15 +340,14 @@ struct ecore_cxt_mngr { struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES]; struct ecore_cid_acquired_map *acquired_vf[MAX_CONN_TYPES]; - /* ILT shadow table */ + /* ILT shadow table */ struct phys_mem_desc *ilt_shadow; u32 ilt_shadow_size; u32 pf_start_line; /* Mutex for a dynamic ILT allocation */ - osal_mutex_t mutex; + osal_mutex_t mutex; - /* SRC T2 */ struct ecore_src_t2 src_t2; /* The infrastructure originally was very generic and context/task @@ -317,19 +356,24 @@ struct ecore_cxt_mngr { * needing for a given block we'd iterate over all the relevant * connection-types. 
* But since then we've had some additional resources, some of which - * require memory which is independent of the general context/task + * require memory which is indepent of the general context/task * scheme. We add those here explicitly per-feature. */ /* total number of SRQ's for this hwfn */ u32 srq_count; + u32 xrc_srq_count; + u32 vfs_srq_count; /* Maximal number of L2 steering filters */ u32 arfs_count; /* TODO - VF arfs filters ? */ - u8 task_type_id; + u16 iscsi_task_pages; + u16 fcoe_task_pages; + u16 roce_task_pages; + u16 eth_task_pages; u16 task_ctx_size; u16 conn_ctx_size; }; @@ -338,4 +382,6 @@ u16 ecore_get_cdut_num_pf_init_pages(struct ecore_hwfn *p_hwfn); u16 ecore_get_cdut_num_vf_init_pages(struct ecore_hwfn *p_hwfn); u16 ecore_get_cdut_num_pf_work_pages(struct ecore_hwfn *p_hwfn); u16 ecore_get_cdut_num_vf_work_pages(struct ecore_hwfn *p_hwfn); + +void ecore_tm_clear_vf_ilt(struct ecore_hwfn *p_hwfn, u16 vf_idx); #endif /* _ECORE_CID_ */ diff --git a/drivers/net/qede/base/ecore_cxt_api.h b/drivers/net/qede/base/ecore_cxt_api.h index 6c8b2831c..fa85a0dc8 100644 --- a/drivers/net/qede/base/ecore_cxt_api.h +++ b/drivers/net/qede/base/ecore_cxt_api.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_CXT_API_H__ #define __ECORE_CXT_API_H__ @@ -24,15 +24,26 @@ struct ecore_tid_mem { }; /** -* @brief ecoreo_cid_get_cxt_info - Returns the context info for a specific cid -* -* -* @param p_hwfn -* @param p_info in/out -* -* @return enum _ecore_status_t -*/ + * @brief ecoreo_cid_get_cxt_info - Returns the context info for a specific cid + * + * + * @param p_hwfn + * @param p_info in/out + * + * @return enum _ecore_status_t + */ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn, struct ecore_cxt_info *p_info); +/** + * @brief ecore_cxt_get_tid_mem_info + * + * @param p_hwfn + * @param p_info + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t ecore_cxt_get_tid_mem_info(struct ecore_hwfn *p_hwfn, + struct ecore_tid_mem *p_info); + #endif diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c index f94efce72..627975369 100644 --- a/drivers/net/qede/base/ecore_dcbx.c +++ b/drivers/net/qede/base/ecore_dcbx.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "ecore.h" #include "ecore_sp_commands.h" @@ -15,6 +15,10 @@ #define ECORE_DCBX_MAX_MIB_READ_TRY (100) #define ECORE_ETH_TYPE_DEFAULT (0) +#define ECORE_ETH_TYPE_ROCE (0x8915) +#define ECORE_UDP_PORT_TYPE_ROCE_V2 (0x12B7) +#define ECORE_ETH_TYPE_FCOE (0x8906) +#define ECORE_TCP_PORT_ISCSI (0xCBC) #define ECORE_DCBX_INVALID_PRIORITY 0xFF @@ -22,7 +26,7 @@ * the traffic class corresponding to the priority. 
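The ECORE_DCBX_PRIO2TC() macro defined just below packs eight traffic classes into one u32, one nibble per priority with priority 0 in the most significant nibble; note the patch also widens the extraction mask from 0x7 to 0xF so all four bits of a TC survive. Written as a plain function, the same lookup reads:

    #include <stdint.h>

    /* Equivalent of the PRIO2TC extraction: priority 0 occupies the
     * most significant nibble, hence the (7 - prio) * 4 shift.
     */
    static uint8_t prio_to_tc(uint32_t prio_tc_tbl, uint8_t prio)
    {
            return (uint8_t)((prio_tc_tbl >> ((7 - prio) * 4)) & 0xF);
    }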
*/ #define ECORE_DCBX_PRIO2TC(prio_tc_tbl, prio) \ - ((u32)(prio_tc_tbl >> ((7 - prio) * 4)) & 0x7) + ((u32)(prio_tc_tbl >> ((7 - prio) * 4)) & 0xF) static bool ecore_dcbx_app_ethtype(u32 app_info_bitmap) { @@ -70,6 +74,56 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee) return !!(ethtype && (proto_id == ECORE_ETH_TYPE_DEFAULT)); } +static bool ecore_dcbx_iscsi_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee) +{ + bool port; + + if (ieee) + port = ecore_dcbx_ieee_app_port(app_info_bitmap, + DCBX_APP_SF_IEEE_TCP_PORT); + else + port = ecore_dcbx_app_port(app_info_bitmap); + + return !!(port && (proto_id == ECORE_TCP_PORT_ISCSI)); +} + +static bool ecore_dcbx_fcoe_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee) +{ + bool ethtype; + + if (ieee) + ethtype = ecore_dcbx_ieee_app_ethtype(app_info_bitmap); + else + ethtype = ecore_dcbx_app_ethtype(app_info_bitmap); + + return !!(ethtype && (proto_id == ECORE_ETH_TYPE_FCOE)); +} + +static bool ecore_dcbx_roce_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee) +{ + bool ethtype; + + if (ieee) + ethtype = ecore_dcbx_ieee_app_ethtype(app_info_bitmap); + else + ethtype = ecore_dcbx_app_ethtype(app_info_bitmap); + + return !!(ethtype && (proto_id == ECORE_ETH_TYPE_ROCE)); +} + +static bool ecore_dcbx_roce_v2_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee) +{ + bool port; + + if (ieee) + port = ecore_dcbx_ieee_app_port(app_info_bitmap, + DCBX_APP_SF_IEEE_UDP_PORT); + else + port = ecore_dcbx_app_port(app_info_bitmap); + + return !!(port && (proto_id == ECORE_UDP_PORT_TYPE_ROCE_V2)); +} + static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap, u16 proto_id, bool ieee) { @@ -92,7 +146,7 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn, struct ecore_dcbx_results *p_data) { enum dcbx_protocol_type id; - int i; + u32 i; DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "DCBX negotiated: %d\n", p_data->dcbx_enabled); @@ -101,10 +155,8 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn, id = ecore_dcbx_app_update[i].id; DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, - "%s info: update %d, enable %d, prio %d, tc %d," - " num_active_tc %d dscp_enable = %d dscp_val = %d\n", - ecore_dcbx_app_update[i].name, - p_data->arr[id].update, + "%s info: update %d, enable %d, prio %d, tc %d, num_active_tc %d dscp_enable = %d dscp_val = %d\n", + ecore_dcbx_app_update[i].name, p_data->arr[id].update, p_data->arr[id].enable, p_data->arr[id].priority, p_data->arr[id].tc, p_hwfn->hw_info.num_active_tc, p_data->arr[id].dscp_enable, @@ -130,7 +182,7 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri) static void ecore_dcbx_set_params(struct ecore_dcbx_results *p_data, struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, - bool enable, u8 prio, u8 tc, + bool app_tlv, bool enable, u8 prio, u8 tc, enum dcbx_protocol_type type, enum ecore_pci_personality personality) { @@ -143,21 +195,21 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data, p_data->arr[type].dscp_enable = false; p_data->arr[type].dscp_val = 0; } else { - p_data->arr[type].dscp_enable = true; + p_data->arr[type].dscp_enable = enable; } + p_data->arr[type].update = UPDATE_DCB_DSCP; - /* Do not add valn tag 0 when DCB is enabled and port is in UFP mode */ - if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) + if (OSAL_TEST_BIT(ECORE_MF_DONT_ADD_VLAN0_TAG, &p_hwfn->p_dev->mf_bits)) p_data->arr[type].dont_add_vlan0 = true; /* QM reconf data */ - if (p_hwfn->hw_info.personality == personality) - p_hwfn->hw_info.offload_tc = tc; + if (app_tlv && 
p_hwfn->hw_info.personality == personality) + ecore_hw_info_set_offload_tc(&p_hwfn->hw_info, tc); /* Configure dcbx vlan priority in doorbell block for roce EDPM */ - if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits) && - type == DCBX_PROTOCOL_ROCE) { + if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits) && + (type == DCBX_PROTOCOL_ROCE)) { ecore_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1); ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_PCP_BB_K2, prio << 1); } @@ -167,12 +219,12 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data, static void ecore_dcbx_update_app_info(struct ecore_dcbx_results *p_data, struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, - bool enable, u8 prio, u8 tc, + bool app_tlv, bool enable, u8 prio, u8 tc, enum dcbx_protocol_type type) { enum ecore_pci_personality personality; enum dcbx_protocol_type id; - int i; + u32 i; for (i = 0; i < OSAL_ARRAY_SIZE(ecore_dcbx_app_update); i++) { id = ecore_dcbx_app_update[i].id; @@ -182,7 +234,7 @@ ecore_dcbx_update_app_info(struct ecore_dcbx_results *p_data, personality = ecore_dcbx_app_update[i].personality; - ecore_dcbx_set_params(p_data, p_hwfn, p_ptt, enable, + ecore_dcbx_set_params(p_data, p_hwfn, p_ptt, app_tlv, enable, prio, tc, type, personality); } } @@ -221,12 +273,22 @@ ecore_dcbx_get_app_protocol_type(struct ecore_hwfn *p_hwfn, u32 app_prio_bitmap, u16 id, enum dcbx_protocol_type *type, bool ieee) { - if (ecore_dcbx_default_tlv(app_prio_bitmap, id, ieee)) { + if (ecore_dcbx_fcoe_tlv(app_prio_bitmap, id, ieee)) { + *type = DCBX_PROTOCOL_FCOE; + } else if (ecore_dcbx_roce_tlv(app_prio_bitmap, id, ieee)) { + *type = DCBX_PROTOCOL_ROCE; + } else if (ecore_dcbx_iscsi_tlv(app_prio_bitmap, id, ieee)) { + *type = DCBX_PROTOCOL_ISCSI; + } else if (ecore_dcbx_default_tlv(app_prio_bitmap, id, ieee)) { *type = DCBX_PROTOCOL_ETH; + } else if (ecore_dcbx_roce_v2_tlv(app_prio_bitmap, id, ieee)) { + *type = DCBX_PROTOCOL_ROCE_V2; + } else if (ecore_dcbx_iwarp_tlv(p_hwfn, app_prio_bitmap, id, ieee)) { + *type = DCBX_PROTOCOL_IWARP; } else { *type = DCBX_MAX_PROTOCOL_TYPE; DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, - "No action required, App TLV entry = 0x%x\n", + "No action required, App TLV entry = 0x%x\n", app_prio_bitmap); return false; } @@ -287,13 +349,13 @@ ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, enable = true; } - ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt, + ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt, true, enable, priority, tc, type); } } /* If Eth TLV is not detected, use UFP TC as default TC */ - if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, + if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits) && !eth_tlv) p_data->arr[DCBX_PROTOCOL_ETH].tc = p_hwfn->ufp_info.tc; @@ -310,9 +372,9 @@ ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, continue; /* if no app tlv was present, don't override in FW */ - ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt, - p_data->arr[DCBX_PROTOCOL_ETH].enable, - priority, tc, type); + ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false, + p_data->arr[DCBX_PROTOCOL_ETH].enable, + priority, tc, type); } return ECORE_SUCCESS; @@ -399,16 +461,14 @@ ecore_dcbx_copy_mib(struct ecore_hwfn *p_hwfn, read_count++; DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, - "mib type = %d, try count = %d prefix seq num =" - " %d suffix seq num = %d\n", + "mib type = %d, try count = %d prefix seq num = %d suffix seq num = %d\n", type, read_count, prefix_seq_num, suffix_seq_num); } while ((prefix_seq_num != suffix_seq_num) && 
(read_count < ECORE_DCBX_MAX_MIB_READ_TRY)); if (read_count >= ECORE_DCBX_MAX_MIB_READ_TRY) { DP_ERR(p_hwfn, - "MIB read err, mib type = %d, try count =" - " %d prefix seq num = %d suffix seq num = %d\n", + "MIB read err, mib type = %d, try count = %d prefix seq num = %d suffix seq num = %d\n", type, read_count, prefix_seq_num, suffix_seq_num); rc = ECORE_IO; } @@ -423,12 +483,36 @@ ecore_dcbx_get_priority_info(struct ecore_hwfn *p_hwfn, { u8 val; + p_prio->roce = ECORE_DCBX_INVALID_PRIORITY; + p_prio->roce_v2 = ECORE_DCBX_INVALID_PRIORITY; + p_prio->iscsi = ECORE_DCBX_INVALID_PRIORITY; + p_prio->fcoe = ECORE_DCBX_INVALID_PRIORITY; + + if (p_results->arr[DCBX_PROTOCOL_ROCE].update && + p_results->arr[DCBX_PROTOCOL_ROCE].enable) + p_prio->roce = p_results->arr[DCBX_PROTOCOL_ROCE].priority; + + if (p_results->arr[DCBX_PROTOCOL_ROCE_V2].update && + p_results->arr[DCBX_PROTOCOL_ROCE_V2].enable) { + val = p_results->arr[DCBX_PROTOCOL_ROCE_V2].priority; + p_prio->roce_v2 = val; + } + + if (p_results->arr[DCBX_PROTOCOL_ISCSI].update && + p_results->arr[DCBX_PROTOCOL_ISCSI].enable) + p_prio->iscsi = p_results->arr[DCBX_PROTOCOL_ISCSI].priority; + + if (p_results->arr[DCBX_PROTOCOL_FCOE].update && + p_results->arr[DCBX_PROTOCOL_FCOE].enable) + p_prio->fcoe = p_results->arr[DCBX_PROTOCOL_FCOE].priority; + if (p_results->arr[DCBX_PROTOCOL_ETH].update && p_results->arr[DCBX_PROTOCOL_ETH].enable) p_prio->eth = p_results->arr[DCBX_PROTOCOL_ETH].priority; DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, - "Priorities: eth %d\n", + "Priorities: iscsi %d, roce %d, roce v2 %d, fcoe %d, eth %d\n", + p_prio->iscsi, p_prio->roce, p_prio->roce_v2, p_prio->fcoe, p_prio->eth); } @@ -474,8 +558,7 @@ ecore_dcbx_get_app_data(struct ecore_hwfn *p_hwfn, entry->sf_ieee = ECORE_DCBX_SF_IEEE_UDP_PORT; break; case DCBX_APP_SF_IEEE_TCP_UDP_PORT: - entry->sf_ieee = - ECORE_DCBX_SF_IEEE_TCP_UDP_PORT; + entry->sf_ieee = ECORE_DCBX_SF_IEEE_TCP_UDP_PORT; break; } } else { @@ -506,6 +589,7 @@ ecore_dcbx_get_pfc_data(struct ecore_hwfn *p_hwfn, p_params->pfc.willing = GET_MFW_FIELD(pfc, DCBX_PFC_WILLING); p_params->pfc.max_tc = GET_MFW_FIELD(pfc, DCBX_PFC_CAPS); + p_params->pfc.mbc = GET_MFW_FIELD(pfc, DCBX_PFC_MBC); p_params->pfc.enabled = GET_MFW_FIELD(pfc, DCBX_PFC_ENABLED); pfc_map = GET_MFW_FIELD(pfc, DCBX_PFC_PRI_EN_BITMAP); p_params->pfc.prio[0] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_0); @@ -518,9 +602,9 @@ ecore_dcbx_get_pfc_data(struct ecore_hwfn *p_hwfn, p_params->pfc.prio[7] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_7); DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, - "PFC params: willing %d, pfc_bitmap %u max_tc = %u enabled = %d\n", + "PFC params: willing %d, pfc_bitmap %u max_tc = %u enabled = %d mbc = %d\n", p_params->pfc.willing, pfc_map, p_params->pfc.max_tc, - p_params->pfc.enabled); + p_params->pfc.enabled, p_params->pfc.mbc); } static void @@ -540,7 +624,12 @@ ecore_dcbx_get_ets_data(struct ecore_hwfn *p_hwfn, p_params->ets_willing, p_params->ets_enabled, p_params->ets_cbs, p_ets->pri_tc_tbl[0], p_params->max_ets_tc); - + if (p_params->ets_enabled && !p_params->max_ets_tc) { + p_params->max_ets_tc = ECORE_MAX_PFC_PRIORITIES; + DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, + "ETS params: max_ets_tc is forced to %d\n", + p_params->max_ets_tc); + } /* 8 bit tsa and bw data corresponding to each of the 8 TC's are * encoded in a type u32 array of size 2. 
*/ @@ -600,8 +689,8 @@ ecore_dcbx_get_remote_params(struct ecore_hwfn *p_hwfn, params->remote.valid = true; } -static void ecore_dcbx_get_dscp_params(struct ecore_hwfn *p_hwfn, - struct ecore_dcbx_get *params) +static void ecore_dcbx_get_dscp_params(struct ecore_hwfn *p_hwfn, + struct ecore_dcbx_get *params) { struct ecore_dcbx_dscp_params *p_dscp; struct dcb_dscp_map *p_dscp_map; @@ -616,7 +705,7 @@ static void ecore_dcbx_get_dscp_params(struct ecore_hwfn *p_hwfn, * where each entry holds the 4bit priority map for 8 dscp entries. */ for (i = 0, entry = 0; i < ECORE_DCBX_DSCP_SIZE / 8; i++) { - pri_map = OSAL_BE32_TO_CPU(p_dscp_map->dscp_pri_map[i]); + pri_map = p_dscp_map->dscp_pri_map[i]; DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "elem %d pri_map 0x%x\n", entry, pri_map); for (j = 0; j < ECORE_DCBX_DSCP_SIZE / 8; j++, entry++) @@ -785,7 +874,7 @@ ecore_dcbx_read_operational_mib(struct ecore_hwfn *p_hwfn, OSAL_MEM_ZERO(&data, sizeof(data)); data.addr = p_hwfn->mcp_info->port_addr + - offsetof(struct public_port, operational_dcbx_mib); + offsetof(struct public_port, operational_dcbx_mib); data.mib = &p_hwfn->p_dcbx_info->operational; data.size = sizeof(struct dcbx_mib); rc = ecore_dcbx_copy_mib(p_hwfn, p_ptt, &data, type); @@ -803,7 +892,7 @@ ecore_dcbx_read_remote_mib(struct ecore_hwfn *p_hwfn, OSAL_MEM_ZERO(&data, sizeof(data)); data.addr = p_hwfn->mcp_info->port_addr + - offsetof(struct public_port, remote_dcbx_mib); + offsetof(struct public_port, remote_dcbx_mib); data.mib = &p_hwfn->p_dcbx_info->remote; data.size = sizeof(struct dcbx_mib); rc = ecore_dcbx_copy_mib(p_hwfn, p_ptt, &data, type); @@ -819,7 +908,7 @@ ecore_dcbx_read_local_mib(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) OSAL_MEM_ZERO(&data, sizeof(data)); data.addr = p_hwfn->mcp_info->port_addr + - offsetof(struct public_port, local_admin_dcbx_mib); + offsetof(struct public_port, local_admin_dcbx_mib); data.local_admin = &p_hwfn->p_dcbx_info->local_admin; data.size = sizeof(struct dcbx_local_params); ecore_memcpy_from(p_hwfn, p_ptt, data.local_admin, @@ -867,6 +956,61 @@ static enum _ecore_status_t ecore_dcbx_read_mib(struct ecore_hwfn *p_hwfn, DP_ERR(p_hwfn, "MIB read err, unknown mib type %d\n", type); } + return rc; +} + +static enum _ecore_status_t +ecore_dcbx_dscp_map_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + bool b_en) +{ + struct ecore_dev *p_dev = p_hwfn->p_dev; + u8 ppfid, abs_ppfid, pfid; + u32 addr, val; + u16 fid; + enum _ecore_status_t rc; + + if (!OSAL_TEST_BIT(ECORE_MF_DSCP_TO_TC_MAP, &p_dev->mf_bits)) + return ECORE_INVAL; + + if (ECORE_IS_E4(p_hwfn->p_dev)) { + addr = NIG_REG_DSCP_TO_TC_MAP_ENABLE_BB_K2; + val = b_en ? 0x1 : 0x0; + } else { /* E5 */ + addr = NIG_REG_LLH_TC_CLS_DSCP_MODE_E5; + val = b_en ? 0x2 /* L2-PRI if exists, else L3-DSCP */ + : 0x0; /* L2-PRI only */ + } + + if (!ECORE_IS_AH(p_dev)) + return ecore_all_ppfids_wr(p_hwfn, p_ptt, addr, val); + + /* Workaround for a HW bug in E4 (only AH is affected): + * Instead of writing to "NIG_REG_DSCP_TO_TC_MAP_ENABLE[ppfid]", write + * to "NIG_REG_DSCP_TO_TC_MAP_ENABLE[n]", where "n" is the "pfid" which + * is read from "NIG_REG_LLH_PPFID2PFID_TBL[ppfid]". + */ + for (ppfid = 0; ppfid < ecore_llh_get_num_ppfid(p_dev); ppfid++) { + rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid); + if (rc != ECORE_SUCCESS) + return rc; + + /* Cannot just take "rel_pf_id" since a ppfid could have been + * loaned to another pf (e.g. RDMA bonding). 
+ */ + pfid = (u8)ecore_rd(p_hwfn, p_ptt, + NIG_REG_LLH_PPFID2PFID_TBL_0 + + abs_ppfid * 0x4); + + fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, pfid); + ecore_fid_pretend(p_hwfn, p_ptt, fid); + + ecore_wr(p_hwfn, p_ptt, addr, val); + + fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, + p_hwfn->rel_pf_id); + ecore_fid_pretend(p_hwfn, p_ptt, fid); + } + return ECORE_SUCCESS; } @@ -893,7 +1037,7 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, /* reconfigure tcs of QM queues according * to negotiation results */ - ecore_qm_reconf(p_hwfn, p_ptt); + ecore_qm_reconf_intr(p_hwfn, p_ptt); /* update storm FW with negotiation results */ ecore_sp_pf_update_dcbx(p_hwfn); @@ -902,20 +1046,33 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, ecore_dcbx_get_params(p_hwfn, &p_hwfn->p_dcbx_info->get, type); - /* Update the DSCP to TC mapping enable bit if required */ - if ((type == ECORE_DCBX_OPERATIONAL_MIB) && - p_hwfn->p_dcbx_info->dscp_nig_update) { - u8 val = !!p_hwfn->p_dcbx_info->get.dscp.enabled; - u32 addr = NIG_REG_DSCP_TO_TC_MAP_ENABLE_BB_K2; + if (type == ECORE_DCBX_OPERATIONAL_MIB) { + struct ecore_dcbx_results *p_data; + u16 val; + + /* Enable/disable the DSCP to TC mapping if required */ + if (p_hwfn->p_dcbx_info->dscp_nig_update) { + bool b_en = p_hwfn->p_dcbx_info->get.dscp.enabled; + + rc = ecore_dcbx_dscp_map_enable(p_hwfn, p_ptt, b_en); + if (rc != ECORE_SUCCESS) { + DP_NOTICE(p_hwfn, false, + "Failed to update the DSCP to TC mapping enable bit\n"); + return rc; + } - rc = ecore_all_ppfids_wr(p_hwfn, p_ptt, addr, val); - if (rc != ECORE_SUCCESS) { - DP_NOTICE(p_hwfn, false, - "Failed to update the DSCP to TC mapping enable bit\n"); - return rc; + p_hwfn->p_dcbx_info->dscp_nig_update = false; } - p_hwfn->p_dcbx_info->dscp_nig_update = false; + /* Configure in NIG which protocols support EDPM and should + * honor PFC. 
+ */ + p_data = &p_hwfn->p_dcbx_info->results; + val = (0x1 << p_data->arr[DCBX_PROTOCOL_ROCE].tc) | + (0x1 << p_data->arr[DCBX_PROTOCOL_ROCE_V2].tc); + val <<= NIG_REG_TX_EDPM_CTRL_TX_EDPM_TC_EN_SHIFT; + val |= NIG_REG_TX_EDPM_CTRL_TX_EDPM_EN; + ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_EDPM_CTRL, val); } OSAL_DCBX_AEN(p_hwfn, type); @@ -925,6 +1082,15 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn) { +#ifndef __EXTRACT__LINUX__ + OSAL_BUILD_BUG_ON(ECORE_LLDP_CHASSIS_ID_STAT_LEN != + LLDP_CHASSIS_ID_STAT_LEN); + OSAL_BUILD_BUG_ON(ECORE_LLDP_PORT_ID_STAT_LEN != + LLDP_PORT_ID_STAT_LEN); + OSAL_BUILD_BUG_ON(ECORE_DCBX_MAX_APP_PROTOCOL != + DCBX_MAX_APP_PROTOCOL); +#endif + p_hwfn->p_dcbx_info = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_hwfn->p_dcbx_info)); if (!p_hwfn->p_dcbx_info) { @@ -942,6 +1108,7 @@ enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn) void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn) { OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_dcbx_info); + p_hwfn->p_dcbx_info = OSAL_NULL; } static void ecore_dcbx_update_protocol_data(struct protocol_dcb_data *p_data, @@ -963,17 +1130,62 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src, struct protocol_dcb_data *p_dcb_data; u8 update_flag; + update_flag = p_src->arr[DCBX_PROTOCOL_FCOE].update; + p_dest->update_fcoe_dcb_data_mode = update_flag; + + update_flag = p_src->arr[DCBX_PROTOCOL_ROCE].update; + p_dest->update_roce_dcb_data_mode = update_flag; + + update_flag = p_src->arr[DCBX_PROTOCOL_ROCE_V2].update; + p_dest->update_rroce_dcb_data_mode = update_flag; + + update_flag = p_src->arr[DCBX_PROTOCOL_ISCSI].update; + p_dest->update_iscsi_dcb_data_mode = update_flag; update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update; p_dest->update_eth_dcb_data_mode = update_flag; update_flag = p_src->arr[DCBX_PROTOCOL_IWARP].update; p_dest->update_iwarp_dcb_data_mode = update_flag; + p_dcb_data = &p_dest->fcoe_dcb_data; + ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_FCOE); + p_dcb_data = &p_dest->roce_dcb_data; + ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ROCE); + p_dcb_data = &p_dest->rroce_dcb_data; + ecore_dcbx_update_protocol_data(p_dcb_data, p_src, + DCBX_PROTOCOL_ROCE_V2); + p_dcb_data = &p_dest->iscsi_dcb_data; + ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ISCSI); p_dcb_data = &p_dest->eth_dcb_data; ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH); p_dcb_data = &p_dest->iwarp_dcb_data; ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_IWARP); } +bool ecore_dcbx_get_dscp_state(struct ecore_hwfn *p_hwfn) +{ + struct ecore_dcbx_get *p_dcbx_info = &p_hwfn->p_dcbx_info->get; + + return p_dcbx_info->dscp.enabled; +} + +u8 ecore_dcbx_get_priority_tc(struct ecore_hwfn *p_hwfn, u8 pri) +{ + struct ecore_dcbx_get *dcbx_info = &p_hwfn->p_dcbx_info->get; + + if (pri >= ECORE_MAX_PFC_PRIORITIES) { + DP_ERR(p_hwfn, "Invalid priority %d\n", pri); + return ECORE_DCBX_DEFAULT_TC; + } + + if (!dcbx_info->operational.valid) { + DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, + "Dcbx parameters not available\n"); + return ECORE_DCBX_DEFAULT_TC; + } + + return dcbx_info->operational.params.ets_pri_tc_tbl[pri]; +} + enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn, struct ecore_dcbx_get *p_get, enum ecore_mib_read_type type) @@ -981,6 +1193,11 @@ enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn, 
struct ecore_ptt *p_ptt; enum _ecore_status_t rc; +#ifndef ASIC_ONLY + if (!ecore_mcp_is_init(p_hwfn)) + return ECORE_INVAL; +#endif + if (IS_VF(p_hwfn->p_dev)) return ECORE_INVAL; @@ -1008,24 +1225,16 @@ ecore_dcbx_set_pfc_data(struct ecore_hwfn *p_hwfn, u8 pfc_map = 0; int i; - if (p_params->pfc.willing) - *pfc |= DCBX_PFC_WILLING_MASK; - else - *pfc &= ~DCBX_PFC_WILLING_MASK; - - if (p_params->pfc.enabled) - *pfc |= DCBX_PFC_ENABLED_MASK; - else - *pfc &= ~DCBX_PFC_ENABLED_MASK; - - *pfc &= ~DCBX_PFC_CAPS_MASK; - *pfc |= (u32)p_params->pfc.max_tc << DCBX_PFC_CAPS_OFFSET; + SET_MFW_FIELD(*pfc, DCBX_PFC_ERROR, 0); + SET_MFW_FIELD(*pfc, DCBX_PFC_WILLING, p_params->pfc.willing ? 1 : 0); + SET_MFW_FIELD(*pfc, DCBX_PFC_ENABLED, p_params->pfc.enabled ? 1 : 0); + SET_MFW_FIELD(*pfc, DCBX_PFC_CAPS, (u32)p_params->pfc.max_tc); + SET_MFW_FIELD(*pfc, DCBX_PFC_MBC, p_params->pfc.mbc ? 1 : 0); for (i = 0; i < ECORE_MAX_PFC_PRIORITIES; i++) if (p_params->pfc.prio[i]) pfc_map |= (1 << i); - *pfc &= ~DCBX_PFC_PRI_EN_BITMAP_MASK; - *pfc |= (pfc_map << DCBX_PFC_PRI_EN_BITMAP_OFFSET); + SET_MFW_FIELD(*pfc, DCBX_PFC_PRI_EN_BITMAP, pfc_map); DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "pfc = 0x%x\n", *pfc); } @@ -1039,23 +1248,14 @@ ecore_dcbx_set_ets_data(struct ecore_hwfn *p_hwfn, u32 val; int i; - if (p_params->ets_willing) - p_ets->flags |= DCBX_ETS_WILLING_MASK; - else - p_ets->flags &= ~DCBX_ETS_WILLING_MASK; - - if (p_params->ets_cbs) - p_ets->flags |= DCBX_ETS_CBS_MASK; - else - p_ets->flags &= ~DCBX_ETS_CBS_MASK; - - if (p_params->ets_enabled) - p_ets->flags |= DCBX_ETS_ENABLED_MASK; - else - p_ets->flags &= ~DCBX_ETS_ENABLED_MASK; - - p_ets->flags &= ~DCBX_ETS_MAX_TCS_MASK; - p_ets->flags |= (u32)p_params->max_ets_tc << DCBX_ETS_MAX_TCS_OFFSET; + SET_MFW_FIELD(p_ets->flags, DCBX_ETS_WILLING, + p_params->ets_willing ? 1 : 0); + SET_MFW_FIELD(p_ets->flags, DCBX_ETS_CBS, + p_params->ets_cbs ? 1 : 0); + SET_MFW_FIELD(p_ets->flags, DCBX_ETS_ENABLED, + p_params->ets_enabled ? 1 : 0); + SET_MFW_FIELD(p_ets->flags, DCBX_ETS_MAX_TCS, + (u32)p_params->max_ets_tc); bw_map = (u8 *)&p_ets->tc_bw_tbl[0]; tsa_map = (u8 *)&p_ets->tc_tsa_tbl[0]; @@ -1089,66 +1289,55 @@ ecore_dcbx_set_app_data(struct ecore_hwfn *p_hwfn, u32 *entry; int i; - if (p_params->app_willing) - p_app->flags |= DCBX_APP_WILLING_MASK; - else - p_app->flags &= ~DCBX_APP_WILLING_MASK; - - if (p_params->app_valid) - p_app->flags |= DCBX_APP_ENABLED_MASK; - else - p_app->flags &= ~DCBX_APP_ENABLED_MASK; - - p_app->flags &= ~DCBX_APP_NUM_ENTRIES_MASK; - p_app->flags |= (u32)p_params->num_app_entries << - DCBX_APP_NUM_ENTRIES_OFFSET; + SET_MFW_FIELD(p_app->flags, DCBX_APP_WILLING, + p_params->app_willing ? 1 : 0); + SET_MFW_FIELD(p_app->flags, DCBX_APP_ENABLED, + p_params->app_valid ? 
1 : 0); + SET_MFW_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES, + (u32)p_params->num_app_entries); for (i = 0; i < p_params->num_app_entries; i++) { entry = &p_app->app_pri_tbl[i].entry; *entry = 0; if (ieee) { - *entry &= ~(DCBX_APP_SF_IEEE_MASK | DCBX_APP_SF_MASK); + SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE, 0); + SET_MFW_FIELD(*entry, DCBX_APP_SF, 0); switch (p_params->app_entry[i].sf_ieee) { case ECORE_DCBX_SF_IEEE_ETHTYPE: - *entry |= ((u32)DCBX_APP_SF_IEEE_ETHTYPE << - DCBX_APP_SF_IEEE_OFFSET); - *entry |= ((u32)DCBX_APP_SF_ETHTYPE << - DCBX_APP_SF_OFFSET); + SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE, + (u32)DCBX_APP_SF_IEEE_ETHTYPE); + SET_MFW_FIELD(*entry, DCBX_APP_SF, + (u32)DCBX_APP_SF_ETHTYPE); break; case ECORE_DCBX_SF_IEEE_TCP_PORT: - *entry |= ((u32)DCBX_APP_SF_IEEE_TCP_PORT << - DCBX_APP_SF_IEEE_OFFSET); - *entry |= ((u32)DCBX_APP_SF_PORT << - DCBX_APP_SF_OFFSET); + SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE, + (u32)DCBX_APP_SF_IEEE_TCP_PORT); + SET_MFW_FIELD(*entry, DCBX_APP_SF, + (u32)DCBX_APP_SF_PORT); break; case ECORE_DCBX_SF_IEEE_UDP_PORT: - *entry |= ((u32)DCBX_APP_SF_IEEE_UDP_PORT << - DCBX_APP_SF_IEEE_OFFSET); - *entry |= ((u32)DCBX_APP_SF_PORT << - DCBX_APP_SF_OFFSET); + SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE, + (u32)DCBX_APP_SF_IEEE_UDP_PORT); + SET_MFW_FIELD(*entry, DCBX_APP_SF, + (u32)DCBX_APP_SF_PORT); break; case ECORE_DCBX_SF_IEEE_TCP_UDP_PORT: - *entry |= (u32)DCBX_APP_SF_IEEE_TCP_UDP_PORT << - DCBX_APP_SF_IEEE_OFFSET; - *entry |= ((u32)DCBX_APP_SF_PORT << - DCBX_APP_SF_OFFSET); + SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE, + (u32)DCBX_APP_SF_IEEE_TCP_UDP_PORT); + SET_MFW_FIELD(*entry, DCBX_APP_SF, + (u32)DCBX_APP_SF_PORT); break; } } else { - *entry &= ~DCBX_APP_SF_MASK; - if (p_params->app_entry[i].ethtype) - *entry |= ((u32)DCBX_APP_SF_ETHTYPE << - DCBX_APP_SF_OFFSET); - else - *entry |= ((u32)DCBX_APP_SF_PORT << - DCBX_APP_SF_OFFSET); + SET_MFW_FIELD(*entry, DCBX_APP_SF, + p_params->app_entry[i].ethtype ? + (u32)DCBX_APP_SF_ETHTYPE : + (u32)DCBX_APP_SF_PORT); } - *entry &= ~DCBX_APP_PROTOCOL_ID_MASK; - *entry |= ((u32)p_params->app_entry[i].proto_id << - DCBX_APP_PROTOCOL_ID_OFFSET); - *entry &= ~DCBX_APP_PRI_MAP_MASK; - *entry |= ((u32)(p_params->app_entry[i].prio) << - DCBX_APP_PRI_MAP_OFFSET); + SET_MFW_FIELD(*entry, DCBX_APP_PROTOCOL_ID, + (u32)p_params->app_entry[i].proto_id); + SET_MFW_FIELD(*entry, DCBX_APP_PRI_MAP, + (u32)(1 << p_params->app_entry[i].prio)); } DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "flags = 0x%x\n", p_app->flags); @@ -1200,9 +1389,8 @@ ecore_dcbx_set_dscp_params(struct ecore_hwfn *p_hwfn, OSAL_MEMCPY(p_dscp_map, &p_hwfn->p_dcbx_info->dscp_map, sizeof(*p_dscp_map)); - p_dscp_map->flags &= ~DCB_DSCP_ENABLE_MASK; - if (p_params->dscp.enabled) - p_dscp_map->flags |= DCB_DSCP_ENABLE_MASK; + SET_MFW_FIELD(p_dscp_map->flags, DCB_DSCP_ENABLE, + p_params->dscp.enabled ? 
1 : 0); for (i = 0, entry = 0; i < 8; i++) { val = 0; @@ -1210,7 +1398,7 @@ ecore_dcbx_set_dscp_params(struct ecore_hwfn *p_hwfn, val |= (((u32)p_params->dscp.dscp_pri_map[entry]) << (j * 4)); - p_dscp_map->dscp_pri_map[i] = OSAL_CPU_TO_BE32(val); + p_dscp_map->dscp_pri_map[i] = val; } p_hwfn->p_dcbx_info->dscp_nig_update = true; @@ -1231,10 +1419,10 @@ enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *p_hwfn, struct ecore_dcbx_set *params, bool hw_commit) { + u32 resp = 0, param = 0, drv_mb_param = 0; struct dcbx_local_params local_admin; struct ecore_dcbx_mib_meta_data data; struct dcb_dscp_map dscp_map; - u32 resp = 0, param = 0; enum _ecore_status_t rc = ECORE_SUCCESS; OSAL_MEMCPY(&p_hwfn->p_dcbx_info->set, params, @@ -1263,8 +1451,9 @@ enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *p_hwfn, data.size); } + SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_LLDP_SEND, 1); rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_DCBX, - 1 << DRV_MB_PARAM_LLDP_SEND_OFFSET, &resp, ¶m); + drv_mb_param, &resp, ¶m); if (rc != ECORE_SUCCESS) DP_NOTICE(p_hwfn, false, "Failed to send DCBX update request\n"); @@ -1276,7 +1465,7 @@ enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *p_hwfn, struct ecore_dcbx_set *params) { struct ecore_dcbx_get *dcbx_info; - int rc; + enum _ecore_status_t rc; if (p_hwfn->p_dcbx_info->set.config.valid) { OSAL_MEMCPY(params, &p_hwfn->p_dcbx_info->set, @@ -1557,6 +1746,11 @@ ecore_dcbx_get_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_dcbx_get *p_dcbx_info; enum _ecore_status_t rc; + if (IS_VF(p_hwfn->p_dev)) { + DP_ERR(p_hwfn->p_dev, "ecore rdma get dscp priority not supported for VF.\n"); + return ECORE_INVAL; + } + if (dscp_index >= ECORE_DCBX_DSCP_SIZE) { DP_ERR(p_hwfn, "Invalid dscp index %d\n", dscp_index); return ECORE_INVAL; @@ -1588,6 +1782,11 @@ ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_dcbx_set dcbx_set; enum _ecore_status_t rc; + if (IS_VF(p_hwfn->p_dev)) { + DP_ERR(p_hwfn->p_dev, "ecore rdma set dscp priority not supported for VF.\n"); + return ECORE_INVAL; + } + if (dscp_index >= ECORE_DCBX_DSCP_SIZE || pri_val >= ECORE_MAX_PFC_PRIORITIES) { DP_ERR(p_hwfn, "Invalid dscp params: index = %d pri = %d\n", @@ -1605,3 +1804,58 @@ ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, return ecore_dcbx_config_params(p_hwfn, p_ptt, &dcbx_set, 1); } + +enum _ecore_status_t +ecore_lldp_get_stats(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + struct ecore_lldp_stats *p_params) +{ + u32 mcp_resp = 0, mcp_param = 0, drv_mb_param = 0, addr, val; + struct lldp_stats_stc lldp_stats; + enum _ecore_status_t rc; + + switch (p_params->agent) { + case ECORE_LLDP_NEAREST_BRIDGE: + val = LLDP_NEAREST_BRIDGE; + break; + case ECORE_LLDP_NEAREST_NON_TPMR_BRIDGE: + val = LLDP_NEAREST_NON_TPMR_BRIDGE; + break; + case ECORE_LLDP_NEAREST_CUSTOMER_BRIDGE: + val = LLDP_NEAREST_CUSTOMER_BRIDGE; + break; + default: + DP_ERR(p_hwfn, "Invalid agent type %d\n", p_params->agent); + return ECORE_INVAL; + } + + SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_LLDP_STATS_AGENT, val); + rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_LLDP_STATS, + drv_mb_param, &mcp_resp, &mcp_param); + if (rc != ECORE_SUCCESS) { + DP_ERR(p_hwfn, "GET_LLDP_STATS failed, error = %d\n", rc); + return rc; + } + + addr = p_hwfn->mcp_info->drv_mb_addr + + OFFSETOF(struct public_drv_mb, union_data); + + ecore_memcpy_from(p_hwfn, p_ptt, &lldp_stats, addr, sizeof(lldp_stats)); + + 
p_params->tx_frames = lldp_stats.tx_frames_total; + p_params->rx_frames = lldp_stats.rx_frames_total; + p_params->rx_discards = lldp_stats.rx_frames_discarded; + p_params->rx_age_outs = lldp_stats.rx_age_outs; + + return ECORE_SUCCESS; +} + +bool ecore_dcbx_is_enabled(struct ecore_hwfn *p_hwfn) +{ + struct ecore_dcbx_operational_params *op_params = + &p_hwfn->p_dcbx_info->get.operational; + + if (op_params->valid && op_params->enabled) + return true; + + return false; +} diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h index 519e6ceaa..57c07c59f 100644 --- a/drivers/net/qede/base/ecore_dcbx.h +++ b/drivers/net/qede/base/ecore_dcbx.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_DCBX_H__ #define __ECORE_DCBX_H__ @@ -45,18 +45,26 @@ struct ecore_dcbx_mib_meta_data { /* ECORE local interface routines */ enum _ecore_status_t -ecore_dcbx_mib_update_event(struct ecore_hwfn *, struct ecore_ptt *, - enum ecore_mib_read_type); +ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + enum ecore_mib_read_type type); enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn); void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn); void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src, struct pf_update_ramrod_data *p_dest); +#define ECORE_DCBX_DEFAULT_TC 0 + +u8 ecore_dcbx_get_priority_tc(struct ecore_hwfn *p_hwfn, u8 pri); + +bool ecore_dcbx_get_dscp_state(struct ecore_hwfn *p_hwfn); + /* Returns TOS value for a given priority */ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri); enum _ecore_status_t ecore_lldp_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); +bool ecore_dcbx_is_enabled(struct ecore_hwfn *p_hwfn); + #endif /* __ECORE_DCBX_H__ */ diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h index 6fad2ecc2..26616f0b9 100644 --- a/drivers/net/qede/base/ecore_dcbx_api.h +++ b/drivers/net/qede/base/ecore_dcbx_api.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. 
- * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_DCBX_API_H__ #define __ECORE_DCBX_API_H__ @@ -30,7 +30,7 @@ struct ecore_dcbx_app_data { bool dont_add_vlan0; /* Do not insert a vlan tag with id 0 */ }; -#ifndef __EXTRACT__LINUX__ +#ifndef __EXTRACT__LINUX__IF__ enum dcbx_protocol_type { DCBX_PROTOCOL_ISCSI, DCBX_PROTOCOL_FCOE, @@ -72,6 +72,7 @@ struct ecore_dcbx_app_prio { struct ecore_dbcx_pfc_params { bool willing; bool enabled; + bool mbc; u8 prio[ECORE_MAX_PFC_PRIORITIES]; u8 max_tc; }; @@ -86,7 +87,6 @@ enum ecore_dcbx_sf_ieee_type { struct ecore_app_entry { bool ethtype; enum ecore_dcbx_sf_ieee_type sf_ieee; - bool enabled; u8 prio; u16 proto_id; enum dcbx_protocol_type proto_type; @@ -199,17 +199,26 @@ struct ecore_lldp_sys_tlvs { u16 buf_size; }; -enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *, - struct ecore_dcbx_get *, - enum ecore_mib_read_type); +struct ecore_lldp_stats { + enum ecore_lldp_agent agent; + u32 tx_frames; + u32 rx_frames; + u32 rx_discards; + u32 rx_age_outs; +}; + +enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn, + struct ecore_dcbx_get *p_get, + enum ecore_mib_read_type type); -enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *, - struct ecore_dcbx_set *); +enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *p_hwfn, + struct ecore_dcbx_set + *params); -enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *, - struct ecore_ptt *, - struct ecore_dcbx_set *, - bool); +enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_dcbx_set *params, + bool hw_commit); enum _ecore_status_t ecore_lldp_register_tlv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -238,6 +247,11 @@ enum _ecore_status_t ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 dscp_index, u8 pri_val); +enum _ecore_status_t +ecore_lldp_get_stats(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + struct ecore_lldp_stats *p_params); + +#ifndef __EXTRACT__LINUX__C__ static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = { {DCBX_PROTOCOL_ISCSI, "ISCSI", ECORE_PCI_ISCSI}, {DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE}, @@ -246,5 +260,6 @@ static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = { {DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH}, {DCBX_PROTOCOL_IWARP, "IWARP", ECORE_PCI_ETH_IWARP} }; +#endif #endif /* __ECORE_DCBX_API_H__ */ diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c index 5db02d0c4..63e5d6860 100644 --- a/drivers/net/qede/base/ecore_dev.c +++ b/drivers/net/qede/base/ecore_dev.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "reg_addr.h" #include "ecore_gtt_reg_addr.h" @@ -27,8 +27,15 @@ #include "ecore_iro.h" #include "nvm_cfg.h" #include "ecore_dcbx.h" +#include /* @DPDK */ #include "ecore_l2.h" +#ifdef _NTDDK_ +#pragma warning(push) +#pragma warning(disable : 28167) +#pragma warning(disable : 28123) +#endif + /* TODO - there's a bug in DCBx re-configuration flows in MF, as the QM * registers involved are not split and thus configuration is a race where * some of the PFs configuration might be lost. 
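The DCBX hunks above convert open-coded mask/shift arithmetic into the GET_MFW_FIELD()/SET_MFW_FIELD() accessors. As a minimal sketch of the token-pasting convention those macros rely on, assuming every field FOO is accompanied by FOO_MASK and FOO_OFFSET definitions as the base-driver headers conventionally provide (illustrative, not the verbatim ecore.h definitions):

	/* illustrative only: relies on field##_MASK / field##_OFFSET pairs */
	#define GET_MFW_FIELD(name, field) \
		(((name) & (field##_MASK)) >> (field##_OFFSET))

	#define SET_MFW_FIELD(name, field, value)			\
	do {								\
		(name) &= ~(field##_MASK);				\
		(name) |= (((u32)(value) << (field##_OFFSET)) &		\
			   (field##_MASK));				\
	} while (0)

Under this convention a call such as SET_MFW_FIELD(*pfc, DCBX_PFC_CAPS, max_tc) clears the CAPS bits and ORs in the shifted value in a single step, which is what makes the bulk conversion in these hunks mechanical.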
@@ -43,6 +50,11 @@ static u32 qm_lock_ref_cnt; static bool b_ptt_gtt_init; #endif +void ecore_set_ilt_page_size(struct ecore_dev *p_dev, u8 ilt_page_size) +{ + p_dev->ilt_page_size = ilt_page_size; +} + /******************** Doorbell Recovery *******************/ /* The doorbell recovery mechanism consists of a list of entries which represent * doorbelling entities (l2 queues, roce sq/rq/cqs, the slowpath spq, etc). Each @@ -60,10 +72,11 @@ struct ecore_db_recovery_entry { u8 hwfn_idx; }; +/* @DPDK */ /* display a single doorbell recovery entry */ -void ecore_db_recovery_dp_entry(struct ecore_hwfn *p_hwfn, - struct ecore_db_recovery_entry *db_entry, - const char *action) +static void ecore_db_recovery_dp_entry(struct ecore_hwfn *p_hwfn, + struct ecore_db_recovery_entry *db_entry, + const char *action) { DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "(%s: db_entry %p, addr %p, data %p, width %s, %s space, hwfn %d)\n", action, db_entry, db_entry->db_addr, db_entry->db_data, @@ -72,17 +85,40 @@ void ecore_db_recovery_dp_entry(struct ecore_hwfn *p_hwfn, db_entry->hwfn_idx); } -/* doorbell address sanity (address within doorbell bar range) */ -bool ecore_db_rec_sanity(struct ecore_dev *p_dev, void OSAL_IOMEM *db_addr, - void *db_data) +/* find hwfn according to the doorbell address */ +static struct ecore_hwfn *ecore_db_rec_find_hwfn(struct ecore_dev *p_dev, + void OSAL_IOMEM *db_addr) { - /* make sure doorbell address is within the doorbell bar */ - if (db_addr < p_dev->doorbells || (u8 *)db_addr > - (u8 *)p_dev->doorbells + p_dev->db_size) { + struct ecore_hwfn *p_hwfn; + + /* in CMT doorbell bar is split down the middle between engine 0 and engine 1 */ + if (ECORE_IS_CMT(p_dev)) + p_hwfn = db_addr < p_dev->hwfns[1].doorbells ? 
- &p_dev->hwfns[0] : &p_dev->hwfns[1]; - else - p_hwfn = ECORE_LEADING_HWFN(p_dev); - - return p_hwfn; -} - /* add a new entry to the doorbell recovery mechanism */ enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev, void OSAL_IOMEM *db_addr, @@ -123,14 +141,8 @@ enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev, struct ecore_db_recovery_entry *db_entry; struct ecore_hwfn *p_hwfn; - /* shortcircuit VFs, for now */ - if (IS_VF(p_dev)) { - DP_VERBOSE(p_dev, ECORE_MSG_IOV, "db recovery - skipping VF doorbell\n"); - return ECORE_SUCCESS; - } - /* sanitize doorbell address */ - if (!ecore_db_rec_sanity(p_dev, db_addr, db_data)) + if (!ecore_db_rec_sanity(p_dev, db_addr, db_width, db_data)) return ECORE_INVAL; /* obtain hwfn from doorbell address */ @@ -171,16 +183,6 @@ enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev, enum _ecore_status_t rc = ECORE_INVAL; struct ecore_hwfn *p_hwfn; - /* shortcircuit VFs, for now */ - if (IS_VF(p_dev)) { - DP_VERBOSE(p_dev, ECORE_MSG_IOV, "db recovery - skipping VF doorbell\n"); - return ECORE_SUCCESS; - } - - /* sanitize doorbell address */ - if (!ecore_db_rec_sanity(p_dev, db_addr, db_data)) - return ECORE_INVAL; - /* obtain hwfn from doorbell address */ p_hwfn = ecore_db_rec_find_hwfn(p_dev, db_addr); @@ -190,12 +192,9 @@ enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev, &p_hwfn->db_recovery_info.list, list_entry, struct ecore_db_recovery_entry) { - /* search according to db_data addr since db_addr is not unique - * (roce) - */ + /* search according to db_data addr since db_addr is not unique (roce) */ if (db_entry->db_data == db_data) { - ecore_db_recovery_dp_entry(p_hwfn, db_entry, - "Deleting"); + ecore_db_recovery_dp_entry(p_hwfn, db_entry, "Deleting"); OSAL_LIST_REMOVE_ENTRY(&db_entry->list_entry, &p_hwfn->db_recovery_info.list); rc = ECORE_SUCCESS; @@ -217,40 +216,40 @@ enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev, } /* initialize the doorbell recovery mechanism */ -enum _ecore_status_t ecore_db_recovery_setup(struct ecore_hwfn *p_hwfn) +static enum _ecore_status_t ecore_db_recovery_setup(struct ecore_hwfn *p_hwfn) { DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Setting up db recovery\n"); /* make sure db_size was set in p_dev */ - if (!p_hwfn->p_dev->db_size) { + if (!p_hwfn->db_size) { DP_ERR(p_hwfn->p_dev, "db_size not set\n"); return ECORE_INVAL; } OSAL_LIST_INIT(&p_hwfn->db_recovery_info.list); #ifdef CONFIG_ECORE_LOCK_ALLOC - if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_hwfn->db_recovery_info.lock)) + if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_hwfn->db_recovery_info.lock, + "db_recov_lock")) return ECORE_NOMEM; #endif OSAL_SPIN_LOCK_INIT(&p_hwfn->db_recovery_info.lock); - p_hwfn->db_recovery_info.db_recovery_counter = 0; + p_hwfn->db_recovery_info.count = 0; return ECORE_SUCCESS; } /* destroy the doorbell recovery mechanism */ -void ecore_db_recovery_teardown(struct ecore_hwfn *p_hwfn) +static void ecore_db_recovery_teardown(struct ecore_hwfn *p_hwfn) { struct ecore_db_recovery_entry *db_entry = OSAL_NULL; DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Tearing down db recovery\n"); if (!OSAL_LIST_IS_EMPTY(&p_hwfn->db_recovery_info.list)) { - DP_VERBOSE(p_hwfn, false, "Doorbell Recovery teardown found the doorbell recovery list was not empty (Expected in disorderly driver unload (e.g. recovery) otherwise this probably means some flow forgot to db_recovery_del). 
Prepare to purge doorbell recovery list...\n"); + DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Doorbell Recovery teardown found the doorbell recovery list was not empty (Expected in disorderly driver unload (e.g. recovery) otherwise this probably means some flow forgot to db_recovery_del). Prepare to purge doorbell recovery list...\n"); while (!OSAL_LIST_IS_EMPTY(&p_hwfn->db_recovery_info.list)) { - db_entry = OSAL_LIST_FIRST_ENTRY( - &p_hwfn->db_recovery_info.list, - struct ecore_db_recovery_entry, - list_entry); + db_entry = OSAL_LIST_FIRST_ENTRY(&p_hwfn->db_recovery_info.list, + struct ecore_db_recovery_entry, + list_entry); ecore_db_recovery_dp_entry(p_hwfn, db_entry, "Purging"); OSAL_LIST_REMOVE_ENTRY(&db_entry->list_entry, &p_hwfn->db_recovery_info.list); @@ -260,17 +259,34 @@ void ecore_db_recovery_teardown(struct ecore_hwfn *p_hwfn) #ifdef CONFIG_ECORE_LOCK_ALLOC OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->db_recovery_info.lock); #endif - p_hwfn->db_recovery_info.db_recovery_counter = 0; + p_hwfn->db_recovery_info.count = 0; } /* print the content of the doorbell recovery mechanism */ void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn) { struct ecore_db_recovery_entry *db_entry = OSAL_NULL; + u32 dp_module; + u8 dp_level; DP_NOTICE(p_hwfn, false, - "Dispalying doorbell recovery database. Counter was %d\n", - p_hwfn->db_recovery_info.db_recovery_counter); + "Displaying doorbell recovery database. Counter is %d\n", + p_hwfn->db_recovery_info.count); + + if (IS_PF(p_hwfn->p_dev)) + if (p_hwfn->pf_iov_info->max_db_rec_count > 0) + DP_NOTICE(p_hwfn, false, + "Max VF counter is %u (VF %u)\n", + p_hwfn->pf_iov_info->max_db_rec_count, + p_hwfn->pf_iov_info->max_db_rec_vfid); + + /* Save dp_module/dp_level values and enable ECORE_MSG_SPQ verbosity + * to force print the db entries. + */ + dp_module = p_hwfn->dp_module; + p_hwfn->dp_module |= ECORE_MSG_SPQ; + dp_level = p_hwfn->dp_level; + p_hwfn->dp_level = ECORE_LEVEL_VERBOSE; /* protect the list */ OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock); @@ -282,28 +298,27 @@ void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn) } OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock); + + /* Get back to saved dp_module/dp_level values */ + p_hwfn->dp_module = dp_module; + p_hwfn->dp_level = dp_level; } /* ring the doorbell of a single doorbell recovery entry */ -void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn, - struct ecore_db_recovery_entry *db_entry, - enum ecore_db_rec_exec db_exec) +static void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn, + struct ecore_db_recovery_entry *db_entry) { /* Print according to width */ if (db_entry->db_width == DB_REC_WIDTH_32B) - DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "%s doorbell address %p data %x\n", - db_exec == DB_REC_DRY_RUN ? "would have rung" : "ringing", - db_entry->db_addr, *(u32 *)db_entry->db_data); + DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, + "ringing doorbell address %p data %x\n", + db_entry->db_addr, + *(u32 *)db_entry->db_data); else - DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "%s doorbell address %p data %lx\n", - db_exec == DB_REC_DRY_RUN ? "would have rung" : "ringing", + DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, + "ringing doorbell address %p data %" PRIx64 "\n", db_entry->db_addr, - *(unsigned long *)(db_entry->db_data)); - - /* Sanity */ - if (!ecore_db_rec_sanity(p_hwfn->p_dev, db_entry->db_addr, - db_entry->db_data)) - return; + *(u64 *)(db_entry->db_data)); /* Flush the write combined buffer. 
Since there are multiple doorbelling * entities using the same address, if we don't flush, a transaction @@ -312,14 +327,12 @@ void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn, OSAL_WMB(p_hwfn->p_dev); /* Ring the doorbell */ - if (db_exec == DB_REC_REAL_DEAL || db_exec == DB_REC_ONCE) { - if (db_entry->db_width == DB_REC_WIDTH_32B) - DIRECT_REG_WR(p_hwfn, db_entry->db_addr, - *(u32 *)(db_entry->db_data)); - else - DIRECT_REG_WR64(p_hwfn, db_entry->db_addr, - *(u64 *)(db_entry->db_data)); - } + if (db_entry->db_width == DB_REC_WIDTH_32B) + DIRECT_REG_WR(p_hwfn, db_entry->db_addr, + *(u32 *)(db_entry->db_data)); + else + DIRECT_REG_WR64(p_hwfn, db_entry->db_addr, + *(u64 *)(db_entry->db_data)); /* Flush the write combined buffer. Next doorbell may come from a * different entity to the same address... @@ -328,30 +341,21 @@ void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn, } /* traverse the doorbell recovery entry list and ring all the doorbells */ -void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn, - enum ecore_db_rec_exec db_exec) +void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn) { struct ecore_db_recovery_entry *db_entry = OSAL_NULL; - if (db_exec != DB_REC_ONCE) { - DP_NOTICE(p_hwfn, false, "Executing doorbell recovery. Counter was %d\n", - p_hwfn->db_recovery_info.db_recovery_counter); - - /* track amount of times recovery was executed */ - p_hwfn->db_recovery_info.db_recovery_counter++; - } + DP_NOTICE(p_hwfn, false, + "Executing doorbell recovery. Counter is %d\n", + ++p_hwfn->db_recovery_info.count); /* protect the list */ OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock); OSAL_LIST_FOR_EACH_ENTRY(db_entry, &p_hwfn->db_recovery_info.list, list_entry, - struct ecore_db_recovery_entry) { - ecore_db_recovery_ring(p_hwfn, db_entry, db_exec); - if (db_exec == DB_REC_ONCE) - break; - } - + struct ecore_db_recovery_entry) + ecore_db_recovery_ring(p_hwfn, db_entry); OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock); } /******************** Doorbell Recovery end ****************/ @@ -364,7 +368,7 @@ enum ecore_llh_filter_type { }; struct ecore_llh_mac_filter { - u8 addr[ETH_ALEN]; + u8 addr[ECORE_ETH_ALEN]; }; struct ecore_llh_protocol_filter { @@ -661,16 +665,15 @@ ecore_llh_shadow_remove_all_filters(struct ecore_dev *p_dev, u8 ppfid) return ECORE_SUCCESS; } -static enum _ecore_status_t ecore_abs_ppfid(struct ecore_dev *p_dev, - u8 rel_ppfid, u8 *p_abs_ppfid) +enum _ecore_status_t ecore_abs_ppfid(struct ecore_dev *p_dev, u8 rel_ppfid, + u8 *p_abs_ppfid) { struct ecore_llh_info *p_llh_info = p_dev->p_llh_info; - u8 ppfids = p_llh_info->num_ppfid - 1; if (rel_ppfid >= p_llh_info->num_ppfid) { DP_NOTICE(p_dev, false, - "rel_ppfid %d is not valid, available indices are 0..%hhd\n", - rel_ppfid, ppfids); + "rel_ppfid %d is not valid, available indices are 0..%hhu\n", + rel_ppfid, p_llh_info->num_ppfid - 1); return ECORE_INVAL; } @@ -679,6 +682,24 @@ static enum _ecore_status_t ecore_abs_ppfid(struct ecore_dev *p_dev, return ECORE_SUCCESS; } +enum _ecore_status_t ecore_llh_map_ppfid_to_pfid(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 ppfid, u8 pfid) +{ + u8 abs_ppfid; + u32 addr; + enum _ecore_status_t rc; + + rc = ecore_abs_ppfid(p_hwfn->p_dev, ppfid, &abs_ppfid); + if (rc != ECORE_SUCCESS) + return rc; + + addr = NIG_REG_LLH_PPFID2PFID_TBL_0 + abs_ppfid * 0x4; + ecore_wr(p_hwfn, p_ptt, addr, pfid); + + return ECORE_SUCCESS; +} + static enum _ecore_status_t __ecore_llh_set_engine_affin(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { @@ -791,21 +812,21 @@ 
static enum _ecore_status_t ecore_llh_hw_init_pf(struct ecore_hwfn *p_hwfn, bool avoid_eng_affin) { struct ecore_dev *p_dev = p_hwfn->p_dev; - u8 ppfid, abs_ppfid; + u8 ppfid; enum _ecore_status_t rc; for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) { - u32 addr; - - rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid); - if (rc != ECORE_SUCCESS) + rc = ecore_llh_map_ppfid_to_pfid(p_hwfn, p_ptt, ppfid, + p_hwfn->rel_pf_id); + if (rc != ECORE_SUCCESS) { + DP_NOTICE(p_dev, false, + "Failed to map ppfid %d to pfid %d\n", + ppfid, p_hwfn->rel_pf_id); return rc; - - addr = NIG_REG_LLH_PPFID2PFID_TBL_0 + abs_ppfid * 0x4; - ecore_wr(p_hwfn, p_ptt, addr, p_hwfn->rel_pf_id); + } } - if (OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) && + if (OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) && !ECORE_IS_FCOE_PERSONALITY(p_hwfn)) { rc = ecore_llh_add_mac_filter(p_dev, 0, p_hwfn->hw_info.hw_mac_addr); @@ -833,7 +854,10 @@ enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev) return p_dev->l2_affin_hint ? ECORE_ENG1 : ECORE_ENG0; } -/* TBD - should be removed when these definitions are available in reg_addr.h */ +/* TBD - + * When the relevant definitions are available in reg_addr.h, the SHIFT + * definitions should be removed, and the MASK definitions should be revised. + */ #define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_MASK 0x3 #define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_SHIFT 0 #define NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE_MASK 0x3 @@ -939,7 +963,7 @@ enum _ecore_status_t ecore_llh_set_roce_affinity(struct ecore_dev *p_dev, return rc; } -struct ecore_llh_filter_details { +struct ecore_llh_filter_e4_details { u64 value; u32 mode; u32 protocol_type; @@ -948,10 +972,10 @@ struct ecore_llh_filter_details { }; static enum _ecore_status_t -ecore_llh_access_filter(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx, - struct ecore_llh_filter_details *p_details, - bool b_write_access) +ecore_llh_access_filter_e4(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx, + struct ecore_llh_filter_e4_details *p_details, + bool b_write_access) { u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid); struct dmae_params params; @@ -1035,16 +1059,16 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn, } static enum _ecore_status_t -ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, +ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type, u32 high, u32 low) { - struct ecore_llh_filter_details filter_details; + struct ecore_llh_filter_e4_details filter_details; filter_details.enable = 1; filter_details.value = ((u64)high << 32) | low; filter_details.hdr_sel = - OSAL_GET_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits) ? + OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits) ? 
1 : /* inner/encapsulated header */ 0; /* outer/tunnel header */ filter_details.protocol_type = filter_prot_type; @@ -1052,42 +1076,104 @@ ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, 1 : /* protocol-based classification */ 0; /* MAC-address based classification */ - return ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx, - &filter_details, - true /* write access */); + return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx, + &filter_details, + true /* write access */); } static enum _ecore_status_t -ecore_llh_remove_filter(struct ecore_hwfn *p_hwfn, +ecore_llh_remove_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx) { - struct ecore_llh_filter_details filter_details; + struct ecore_llh_filter_e4_details filter_details; OSAL_MEMSET(&filter_details, 0, sizeof(filter_details)); - return ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx, - &filter_details, - true /* write access */); + return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx, + &filter_details, + true /* write access */); +} + +/* OSAL_UNUSED is temporarily used to avoid unused-parameter compilation warnings. + * Should be removed when the function is implemented. + */ +static enum _ecore_status_t +ecore_llh_add_filter_e5(struct ecore_hwfn OSAL_UNUSED * p_hwfn, + struct ecore_ptt OSAL_UNUSED * p_ptt, + u8 OSAL_UNUSED abs_ppfid, u8 OSAL_UNUSED filter_idx, + u8 OSAL_UNUSED filter_prot_type, u32 OSAL_UNUSED high, + u32 OSAL_UNUSED low) +{ + ECORE_E5_MISSING_CODE; + + return ECORE_SUCCESS; +} + +/* OSAL_UNUSED is temporarily used to avoid unused-parameter compilation warnings. + * Should be removed when the function is implemented. + */ +static enum _ecore_status_t +ecore_llh_remove_filter_e5(struct ecore_hwfn OSAL_UNUSED * p_hwfn, + struct ecore_ptt OSAL_UNUSED * p_ptt, + u8 OSAL_UNUSED abs_ppfid, + u8 OSAL_UNUSED filter_idx) +{ + ECORE_E5_MISSING_CODE; + + return ECORE_SUCCESS; +} + +static enum _ecore_status_t +ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type, u32 high, + u32 low) +{ + if (ECORE_IS_E4(p_hwfn->p_dev)) + return ecore_llh_add_filter_e4(p_hwfn, p_ptt, abs_ppfid, + filter_idx, filter_prot_type, + high, low); + else /* E5 */ + return ecore_llh_add_filter_e5(p_hwfn, p_ptt, abs_ppfid, + filter_idx, filter_prot_type, + high, low); +} + +static enum _ecore_status_t +ecore_llh_remove_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u8 abs_ppfid, u8 filter_idx) +{ + if (ECORE_IS_E4(p_hwfn->p_dev)) + return ecore_llh_remove_filter_e4(p_hwfn, p_ptt, abs_ppfid, + filter_idx); + else /* E5 */ + return ecore_llh_remove_filter_e5(p_hwfn, p_ptt, abs_ppfid, + filter_idx); } enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid, - u8 mac_addr[ETH_ALEN]) + u8 mac_addr[ECORE_ETH_ALEN]) { struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); - struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); union ecore_llh_filter filter; u8 filter_idx, abs_ppfid; + struct ecore_ptt *p_ptt; u32 high, low, ref_cnt; enum _ecore_status_t rc = ECORE_SUCCESS; + if (!OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits)) + return rc; + + if (IS_VF(p_hwfn->p_dev)) { + DP_NOTICE(p_dev, false, "Setting MAC to LLH is not supported to VF\n"); + return ECORE_NOTIMPL; + } + + p_ptt = ecore_ptt_acquire(p_hwfn); if (p_ptt == OSAL_NULL) return ECORE_AGAIN; - if (!OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits)) - goto out; 
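For reference, the {high, low} pair consumed by the MAC filter path below ends up in the 64-bit filter value as "((u64)high << 32) | low" (see filter_details.value above). A hedged sketch of how a 6-byte MAC would be packed into that pair, with a hypothetical helper name and the big-endian byte order assumed, since the actual conversion routine is outside these hunks:

	/* illustrative only: pack a MAC into the LLH {high, low} pair */
	static void llh_mac_to_hilo(const u8 mac[6], u32 *p_high, u32 *p_low)
	{
		*p_high = ((u32)mac[0] << 8) | mac[1];
		*p_low = ((u32)mac[2] << 24) | ((u32)mac[3] << 16) |
			 ((u32)mac[4] << 8) | mac[5];
	}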
OSAL_MEM_ZERO(&filter, sizeof(filter)); - OSAL_MEMCPY(filter.mac.addr, mac_addr, ETH_ALEN); + OSAL_MEMCPY(filter.mac.addr, mac_addr, ECORE_ETH_ALEN); rc = ecore_llh_shadow_add_filter(p_dev, ppfid, ECORE_LLH_FILTER_TYPE_MAC, &filter, &filter_idx, &ref_cnt); @@ -1204,6 +1290,22 @@ ecore_llh_protocol_filter_to_hilo(struct ecore_dev *p_dev, return ECORE_SUCCESS; } +enum _ecore_status_t +ecore_llh_add_dst_tcp_port_filter(struct ecore_dev *p_dev, u16 dest_port) +{ + return ecore_llh_add_protocol_filter(p_dev, 0, + ECORE_LLH_FILTER_TCP_DEST_PORT, + ECORE_LLH_DONT_CARE, dest_port); +} + +enum _ecore_status_t +ecore_llh_add_src_tcp_port_filter(struct ecore_dev *p_dev, u16 src_port) +{ + return ecore_llh_add_protocol_filter(p_dev, 0, + ECORE_LLH_FILTER_TCP_SRC_PORT, + src_port, ECORE_LLH_DONT_CARE); +} + enum _ecore_status_t ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid, enum ecore_llh_prot_filter_type_t type, @@ -1212,15 +1314,15 @@ ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid, struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); u8 filter_idx, abs_ppfid, type_bitmap; - char str[32]; union ecore_llh_filter filter; u32 high, low, ref_cnt; + char str[32]; enum _ecore_status_t rc = ECORE_SUCCESS; if (p_ptt == OSAL_NULL) return ECORE_AGAIN; - if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits)) + if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits)) goto out; rc = ecore_llh_protocol_filter_stringify(p_dev, type, @@ -1275,23 +1377,33 @@ ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid, } void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid, - u8 mac_addr[ETH_ALEN]) + u8 mac_addr[ECORE_ETH_ALEN]) { struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); - struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); union ecore_llh_filter filter; u8 filter_idx, abs_ppfid; + struct ecore_ptt *p_ptt; enum _ecore_status_t rc = ECORE_SUCCESS; u32 ref_cnt; - if (p_ptt == OSAL_NULL) + if (!OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits)) return; - if (!OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits)) - goto out; + if (ECORE_IS_NVMETCP_PERSONALITY(p_hwfn)) + return; + + if (IS_VF(p_hwfn->p_dev)) { + DP_NOTICE(p_dev, false, "Removing MAC from LLH is not supported to VF\n"); + return; + } + + p_ptt = ecore_ptt_acquire(p_hwfn); + + if (p_ptt == OSAL_NULL) + return; OSAL_MEM_ZERO(&filter, sizeof(filter)); - OSAL_MEMCPY(filter.mac.addr, mac_addr, ETH_ALEN); + OSAL_MEMCPY(filter.mac.addr, mac_addr, ECORE_ETH_ALEN); rc = ecore_llh_shadow_remove_filter(p_dev, ppfid, &filter, &filter_idx, &ref_cnt); if (rc != ECORE_SUCCESS) @@ -1326,6 +1438,22 @@ void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid, ecore_ptt_release(p_hwfn, p_ptt); } +void ecore_llh_remove_dst_tcp_port_filter(struct ecore_dev *p_dev, + u16 dest_port) +{ + ecore_llh_remove_protocol_filter(p_dev, 0, + ECORE_LLH_FILTER_TCP_DEST_PORT, + ECORE_LLH_DONT_CARE, dest_port); +} + +void ecore_llh_remove_src_tcp_port_filter(struct ecore_dev *p_dev, + u16 src_port) +{ + ecore_llh_remove_protocol_filter(p_dev, 0, + ECORE_LLH_FILTER_TCP_SRC_PORT, + src_port, ECORE_LLH_DONT_CARE); +} + void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid, enum ecore_llh_prot_filter_type_t type, u16 source_port_or_eth_type, @@ -1334,15 +1462,15 @@ void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid, struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); struct ecore_ptt *p_ptt = 
ecore_ptt_acquire(p_hwfn); u8 filter_idx, abs_ppfid; - char str[32]; union ecore_llh_filter filter; enum _ecore_status_t rc = ECORE_SUCCESS; + char str[32]; u32 ref_cnt; if (p_ptt == OSAL_NULL) return; - if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits)) + if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits)) goto out; rc = ecore_llh_protocol_filter_stringify(p_dev, type, @@ -1396,8 +1524,8 @@ void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid) if (p_ptt == OSAL_NULL) return; - if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) && - !OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits)) + if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) && + !OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits)) goto out; rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid); @@ -1411,7 +1539,7 @@ void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid) for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; filter_idx++) { rc = ecore_llh_remove_filter(p_hwfn, p_ptt, - abs_ppfid, filter_idx); + abs_ppfid, filter_idx); if (rc != ECORE_SUCCESS) goto out; } @@ -1423,8 +1551,8 @@ void ecore_llh_clear_all_filters(struct ecore_dev *p_dev) { u8 ppfid; - if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) && - !OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits)) + if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) && + !OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits)) return; for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) @@ -1450,22 +1578,18 @@ enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -enum _ecore_status_t -ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid) +static enum _ecore_status_t +ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u8 ppfid) { - struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); - struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); - struct ecore_llh_filter_details filter_details; + struct ecore_llh_filter_e4_details filter_details; u8 abs_ppfid, filter_idx; u32 addr; enum _ecore_status_t rc; - if (!p_ptt) - return ECORE_AGAIN; - rc = ecore_abs_ppfid(p_hwfn->p_dev, ppfid, &abs_ppfid); if (rc != ECORE_SUCCESS) - goto out; + return rc; addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4; DP_NOTICE(p_hwfn, false, @@ -1476,22 +1600,46 @@ ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid) for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; filter_idx++) { OSAL_MEMSET(&filter_details, 0, sizeof(filter_details)); - rc = ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, - filter_idx, &filter_details, - false /* read access */); + rc = ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, + filter_idx, &filter_details, + false /* read access */); if (rc != ECORE_SUCCESS) - goto out; + return rc; DP_NOTICE(p_hwfn, false, - "filter %2hhd: enable %d, value 0x%016lx, mode %d, protocol_type 0x%x, hdr_sel 0x%x\n", + "filter %2hhd: enable %d, value 0x%016" PRIx64 ", mode %d, protocol_type 0x%x, hdr_sel 0x%x\n", filter_idx, filter_details.enable, - (unsigned long)filter_details.value, - filter_details.mode, + filter_details.value, filter_details.mode, filter_details.protocol_type, filter_details.hdr_sel); } + return ECORE_SUCCESS; +} + +static enum _ecore_status_t +ecore_llh_dump_ppfid_e5(struct ecore_hwfn OSAL_UNUSED * p_hwfn, + struct ecore_ptt OSAL_UNUSED * p_ptt, + u8 OSAL_UNUSED ppfid) +{ + ECORE_E5_MISSING_CODE; + + return ECORE_NOTIMPL; +} + +enum _ecore_status_t 
ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid) +{ + struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); + struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); + enum _ecore_status_t rc; + + if (p_ptt == OSAL_NULL) + return ECORE_AGAIN; + + if (ECORE_IS_E4(p_dev)) + rc = ecore_llh_dump_ppfid_e4(p_hwfn, p_ptt, ppfid); + else /* E5 */ + rc = ecore_llh_dump_ppfid_e5(p_hwfn, p_ptt, ppfid); -out: ecore_ptt_release(p_hwfn, p_ptt); return rc; @@ -1513,15 +1661,6 @@ enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev) /******************************* NIG LLH - End ********************************/ -/* Configurable */ -#define ECORE_MIN_DPIS (4) /* The minimal num of DPIs required to - * load the driver. The number was - * arbitrarily set. - */ - -/* Derived */ -#define ECORE_MIN_PWM_REGION (ECORE_WID_SIZE * ECORE_MIN_DPIS) - static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, enum BAR_ID bar_id) @@ -1544,18 +1683,18 @@ static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn, if (ECORE_IS_CMT(p_hwfn->p_dev)) { DP_INFO(p_hwfn, "BAR size not configured. Assuming BAR size of 256kB for GRC and 512kB for DB\n"); - val = BAR_ID_0 ? 256 * 1024 : 512 * 1024; + return BAR_ID_0 ? 256 * 1024 : 512 * 1024; } else { DP_INFO(p_hwfn, "BAR size not configured. Assuming BAR size of 512kB for GRC and 512kB for DB\n"); - val = 512 * 1024; + return 512 * 1024; } - - return val; } -void ecore_init_dp(struct ecore_dev *p_dev, - u32 dp_module, u8 dp_level, void *dp_ctx) +void ecore_init_dp(struct ecore_dev *p_dev, + u32 dp_module, + u8 dp_level, + void *dp_ctx) { u32 i; @@ -1571,6 +1710,70 @@ void ecore_init_dp(struct ecore_dev *p_dev, } } +void ecore_init_int_dp(struct ecore_dev *p_dev, u32 dp_module, u8 dp_level) +{ + u32 i; + + p_dev->dp_int_level = dp_level; + p_dev->dp_int_module = dp_module; + for (i = 0; i < MAX_HWFNS_PER_DEVICE; i++) { + struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i]; + + p_hwfn->dp_int_level = dp_level; + p_hwfn->dp_int_module = dp_module; + } +} + +#define ECORE_DP_INT_LOG_MAX_STR_SIZE 256 /* @DPDK */ + +void ecore_dp_internal_log(struct ecore_dev *p_dev, char *fmt, ...) 
+{ + char buff[ECORE_DP_INT_LOG_MAX_STR_SIZE]; + struct ecore_internal_trace *p_int_log; + u32 len, partial_len; + unsigned long flags; + osal_va_list args; + char *buf = buff; + u32 prod; + + if (!p_dev) + return; + + p_int_log = &p_dev->internal_trace; + + if (!p_int_log->buf) + return; + + OSAL_VA_START(args, fmt); + len = OSAL_VSNPRINTF(buf, ECORE_DP_INT_LOG_MAX_STR_SIZE, fmt, args); + OSAL_VA_END(args); + + if (len > ECORE_DP_INT_LOG_MAX_STR_SIZE) { + len = ECORE_DP_INT_LOG_MAX_STR_SIZE; + buf[len - 1] = '\n'; + } + + partial_len = len; + + OSAL_SPIN_LOCK_IRQSAVE(&p_int_log->lock, flags); + prod = p_int_log->prod % p_int_log->size; + + if (p_int_log->size - prod <= len) { + partial_len = p_int_log->size - prod; + OSAL_MEMCPY(p_int_log->buf + prod, buf, partial_len); + p_int_log->prod += partial_len; + prod = p_int_log->prod % p_int_log->size; + buf += partial_len; + partial_len = len - partial_len; + } + + OSAL_MEMCPY(p_int_log->buf + prod, buf, partial_len); + + p_int_log->prod += partial_len; + + OSAL_SPIN_UNLOCK_IRQSAVE(&p_int_log->lock, flags); +} + enum _ecore_status_t ecore_init_struct(struct ecore_dev *p_dev) { u8 i; @@ -1581,19 +1784,32 @@ enum _ecore_status_t ecore_init_struct(struct ecore_dev *p_dev) p_hwfn->p_dev = p_dev; p_hwfn->my_id = i; p_hwfn->b_active = false; + p_hwfn->p_dummy_cb = ecore_int_dummy_comp_cb; #ifdef CONFIG_ECORE_LOCK_ALLOC - if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_hwfn->dmae_info.lock)) + if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_hwfn->dmae_info.lock, + "dma_info_lock")) goto handle_err; #endif OSAL_SPIN_LOCK_INIT(&p_hwfn->dmae_info.lock); } +#ifdef CONFIG_ECORE_LOCK_ALLOC + if (OSAL_SPIN_LOCK_ALLOC(&p_dev->hwfns[0], &p_dev->internal_trace.lock, + "internal_trace_lock")) + goto handle_err; +#endif + OSAL_SPIN_LOCK_INIT(&p_dev->internal_trace.lock); + + p_dev->p_dev = p_dev; /* hwfn 0 is always active */ p_dev->hwfns[0].b_active = true; /* set the default cache alignment to 128 (may be overridden later) */ p_dev->cache_shift = 7; + + p_dev->ilt_page_size = ECORE_DEFAULT_ILT_PAGE_SIZE; + return ECORE_SUCCESS; #ifdef CONFIG_ECORE_LOCK_ALLOC handle_err: @@ -1612,9 +1828,16 @@ static void ecore_qm_info_free(struct ecore_hwfn *p_hwfn) struct ecore_qm_info *qm_info = &p_hwfn->qm_info; OSAL_FREE(p_hwfn->p_dev, qm_info->qm_pq_params); + qm_info->qm_pq_params = OSAL_NULL; OSAL_FREE(p_hwfn->p_dev, qm_info->qm_vport_params); + qm_info->qm_vport_params = OSAL_NULL; OSAL_FREE(p_hwfn->p_dev, qm_info->qm_port_params); + qm_info->qm_port_params = OSAL_NULL; OSAL_FREE(p_hwfn->p_dev, qm_info->wfq_data); + qm_info->wfq_data = OSAL_NULL; +#ifdef CONFIG_ECORE_LOCK_ALLOC + OSAL_SPIN_LOCK_DEALLOC(&qm_info->qm_info_lock); +#endif } static void ecore_dbg_user_data_free(struct ecore_hwfn *p_hwfn) @@ -1627,49 +1850,66 @@ void ecore_resc_free(struct ecore_dev *p_dev) { int i; - if (IS_VF(p_dev)) { - for_each_hwfn(p_dev, i) - ecore_l2_free(&p_dev->hwfns[i]); - return; - } - - OSAL_FREE(p_dev, p_dev->fw_data); - OSAL_FREE(p_dev, p_dev->reset_stats); + p_dev->reset_stats = OSAL_NULL; - ecore_llh_free(p_dev); + if (IS_PF(p_dev)) { + OSAL_FREE(p_dev, p_dev->fw_data); + p_dev->fw_data = OSAL_NULL; + + ecore_llh_free(p_dev); + } for_each_hwfn(p_dev, i) { struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i]; + ecore_spq_free(p_hwfn); + ecore_l2_free(p_hwfn); + + if (IS_VF(p_dev)) { + ecore_db_recovery_teardown(p_hwfn); + continue; + } + ecore_cxt_mngr_free(p_hwfn); ecore_qm_info_free(p_hwfn); - ecore_spq_free(p_hwfn); ecore_eq_free(p_hwfn); ecore_consq_free(p_hwfn); ecore_int_free(p_hwfn); + 
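The ecore_dp_internal_log() writer added above treats the trace buffer as a simple byte ring: prod only ever grows, prod % size gives the write offset, and a message that would run past the end is split into two copies, the tail of the buffer first and then the beginning. A standalone sketch of the same wrap-around write, assuming the caller caps len below the ring size just as the ECORE_DP_INT_LOG_MAX_STR_SIZE check does:

	#include <string.h>

	/* illustrative ring write mirroring the two-part copy above */
	static void ring_write(char *ring, unsigned int size,
			       unsigned int *p_prod,
			       const char *msg, unsigned int len)
	{
		unsigned int prod = *p_prod % size;
		unsigned int part = len;

		if (size - prod <= len) { /* wraps: copy the tail first */
			part = size - prod;
			memcpy(ring + prod, msg, part);
			*p_prod += part;
			prod = *p_prod % size;
			msg += part;
			part = len - part;
		}
		memcpy(ring + prod, msg, part);
		*p_prod += part;
	}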
ecore_iov_free(p_hwfn); - ecore_l2_free(p_hwfn); ecore_dmae_info_free(p_hwfn); ecore_dcbx_info_free(p_hwfn); ecore_dbg_user_data_free(p_hwfn); - ecore_fw_overlay_mem_free(p_hwfn, p_hwfn->fw_overlay_mem); - /* @@@TBD Flush work-queue ? */ + ecore_fw_overlay_mem_free(p_hwfn, &p_hwfn->fw_overlay_mem); + /* @@@TBD Flush work-queue ? */ /* destroy doorbell recovery mechanism */ ecore_db_recovery_teardown(p_hwfn); } + + if (IS_PF(p_dev)) { + OSAL_FREE(p_dev, p_dev->fw_data); + p_dev->fw_data = OSAL_NULL; + } } /******************** QM initialization *******************/ -/* bitmaps for indicating active traffic classes. Special case for Arrowhead 4 port */ +/* bitmaps for indicating active traffic classes. Special case for Arrowhead 4 port */ /* 0..3 actually used, 4 serves OOO, 7 serves high priority stuff (e.g. DCQCN) */ -#define ACTIVE_TCS_BMAP 0x9f -/* 0..3 actually used, OOO and high priority stuff all use 3 */ -#define ACTIVE_TCS_BMAP_4PORT_K2 0xf +#define ACTIVE_TCS_BMAP_E4 0x9f +#define ACTIVE_TCS_BMAP_E5 0x1f /* 0..3 actually used, 4 serves OOO */ +#define ACTIVE_TCS_BMAP_4PORT_K2 0xf /* 0..3 actually used, OOO and high priority stuff all use 3 */ + +#define ACTIVE_TCS_BMAP(_p_hwfn) \ + (ECORE_IS_E4((_p_hwfn)->p_dev) ? \ + ACTIVE_TCS_BMAP_E4 : ACTIVE_TCS_BMAP_E5) + +static u16 ecore_init_qm_get_num_active_vfs(struct ecore_hwfn *p_hwfn) +{ + return IS_ECORE_SRIOV(p_hwfn->p_dev) ? + p_hwfn->pf_iov_info->max_active_vfs : 0; +} /* determines the physical queue flags for a given PF. */ static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn) @@ -1680,8 +1920,10 @@ static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn) flags = PQ_FLAGS_LB; /* feature flags */ - if (IS_ECORE_SRIOV(p_hwfn->p_dev)) + if (ecore_init_qm_get_num_active_vfs(p_hwfn)) flags |= PQ_FLAGS_VFS; + + /* @DPDK */ if (IS_ECORE_PACING(p_hwfn)) flags |= PQ_FLAGS_RLS; @@ -1691,27 +1933,11 @@ static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn) if (!IS_ECORE_PACING(p_hwfn)) flags |= PQ_FLAGS_MCOS; break; - case ECORE_PCI_FCOE: - flags |= PQ_FLAGS_OFLD; - break; - case ECORE_PCI_ISCSI: - flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD; - break; - case ECORE_PCI_ETH_ROCE: - flags |= PQ_FLAGS_OFLD | PQ_FLAGS_LLT; - if (!IS_ECORE_PACING(p_hwfn)) - flags |= PQ_FLAGS_MCOS; - break; - case ECORE_PCI_ETH_IWARP: - flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD; - if (!IS_ECORE_PACING(p_hwfn)) - flags |= PQ_FLAGS_MCOS; - break; default: - DP_ERR(p_hwfn, "unknown personality %d\n", - p_hwfn->hw_info.personality); + DP_ERR(p_hwfn, "unknown personality %d\n", p_hwfn->hw_info.personality); return 0; } + return flags; } @@ -1723,8 +1949,53 @@ u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn) u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn) { - return IS_ECORE_SRIOV(p_hwfn->p_dev) ? - p_hwfn->p_dev->p_iov_info->total_vfs : 0; + return IS_ECORE_SRIOV(p_hwfn->p_dev) ?
p_hwfn->p_dev->p_iov_info->total_vfs : 0; +} + +static u16 ecore_init_qm_get_num_vfs_pqs(struct ecore_hwfn *p_hwfn) +{ + u16 num_pqs, num_vfs = ecore_init_qm_get_num_active_vfs(p_hwfn); + u32 pq_flags = ecore_get_pq_flags(p_hwfn); + + /* One L2 PQ per VF */ + num_pqs = num_vfs; + + /* Separate RDMA PQ per VF */ + if ((PQ_FLAGS_VFR & pq_flags)) + num_pqs += num_vfs; + + /* Separate RDMA PQ for all VFs */ + if ((PQ_FLAGS_VSR & pq_flags)) + num_pqs += 1; + + return num_pqs; +} + +static bool ecore_lag_support(struct ecore_hwfn *p_hwfn) +{ + return (ECORE_IS_AH(p_hwfn->p_dev) && + ECORE_IS_ROCE_PERSONALITY(p_hwfn) && + OSAL_TEST_BIT(ECORE_MF_ROCE_LAG, &p_hwfn->p_dev->mf_bits)); +} + +static u8 ecore_init_qm_get_num_mtc_tcs(struct ecore_hwfn *p_hwfn) +{ + u32 pq_flags = ecore_get_pq_flags(p_hwfn); + + if (!(PQ_FLAGS_MTC & pq_flags)) + return 1; + + return ecore_init_qm_get_num_tcs(p_hwfn); +} + +static u8 ecore_init_qm_get_num_mtc_pqs(struct ecore_hwfn *p_hwfn) +{ + u32 num_ports, num_tcs; + + num_ports = ecore_lag_support(p_hwfn) ? LAG_MAX_PORT_NUM : 1; + num_tcs = ecore_init_qm_get_num_mtc_tcs(p_hwfn); + + return num_ports * num_tcs; } #define NUM_DEFAULT_RLS 1 @@ -1733,21 +2004,13 @@ u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn) { u16 num_pf_rls, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn); - /* num RLs can't exceed resource amount of rls or vports or the - * dcqcn qps - */ - num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL), - RESC_NUM(p_hwfn, ECORE_VPORT)); + /* num RLs can't exceed resource amount of rls or vports or the dcqcn qps */ + num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL), RESC_NUM(p_hwfn, + ECORE_VPORT)); - /* make sure after we reserve the default and VF rls we'll have - * something left - */ - if (num_pf_rls < num_vfs + NUM_DEFAULT_RLS) { - DP_NOTICE(p_hwfn, false, - "no rate limiters left for PF rate limiting" - " [num_pf_rls %d num_vfs %d]\n", num_pf_rls, num_vfs); + /* make sure after we reserve the default and VF rls we'll have something left */ + if (num_pf_rls < num_vfs + NUM_DEFAULT_RLS) return 0; - } /* subtract rls necessary for VFs and one default one for the PF */ num_pf_rls -= num_vfs + NUM_DEFAULT_RLS; @@ -1755,17 +2018,40 @@ u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn) return num_pf_rls; } +static u16 ecore_init_qm_get_num_rls(struct ecore_hwfn *p_hwfn) +{ + u32 pq_flags = ecore_get_pq_flags(p_hwfn); + u16 num_rls = 0; + + num_rls += (!!(PQ_FLAGS_RLS & pq_flags)) * + ecore_init_qm_get_num_pf_rls(p_hwfn); + + /* RL for each VF L2 PQ */ + num_rls += (!!(PQ_FLAGS_VFS & pq_flags)) * + ecore_init_qm_get_num_active_vfs(p_hwfn); + + /* RL for each VF RDMA PQ */ + num_rls += (!!(PQ_FLAGS_VFR & pq_flags)) * + ecore_init_qm_get_num_active_vfs(p_hwfn); + + /* RL for VF RDMA single PQ */ + num_rls += (!!(PQ_FLAGS_VSR & pq_flags)); + + return num_rls; +} + u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn) { u32 pq_flags = ecore_get_pq_flags(p_hwfn); - /* all pqs share the same vport (hence the 1 below), except for vfs - * and pf_rl pqs - */ - return (!!(PQ_FLAGS_RLS & pq_flags)) * - ecore_init_qm_get_num_pf_rls(p_hwfn) + - (!!(PQ_FLAGS_VFS & pq_flags)) * - ecore_init_qm_get_num_vfs(p_hwfn) + 1; + /* all pqs share the same vport (hence the 1 below), except for vfs and pf_rl pqs */ + return (!!(PQ_FLAGS_RLS & pq_flags)) * ecore_init_qm_get_num_pf_rls(p_hwfn) + + (!!(PQ_FLAGS_VFS & pq_flags)) * ecore_init_qm_get_num_vfs(p_hwfn) + 1; +} + +static u8 ecore_init_qm_get_group_count(struct ecore_hwfn *p_hwfn) +{ + 
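/* maintained by the QM/LAG flows; e.g. ecore_lag_destroy_master() clears it before reconfiguring the QM */ +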
return p_hwfn->qm_info.offload_group_count; } /* calc amount of PQs according to the requested flags */ @@ -1773,16 +2059,15 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn) { u32 pq_flags = ecore_get_pq_flags(p_hwfn); - return (!!(PQ_FLAGS_RLS & pq_flags)) * - ecore_init_qm_get_num_pf_rls(p_hwfn) + - (!!(PQ_FLAGS_MCOS & pq_flags)) * - ecore_init_qm_get_num_tcs(p_hwfn) + + return (!!(PQ_FLAGS_RLS & pq_flags)) * ecore_init_qm_get_num_pf_rls(p_hwfn) + + (!!(PQ_FLAGS_MCOS & pq_flags)) * ecore_init_qm_get_num_tcs(p_hwfn) + (!!(PQ_FLAGS_LB & pq_flags)) + (!!(PQ_FLAGS_OOO & pq_flags)) + (!!(PQ_FLAGS_ACK & pq_flags)) + - (!!(PQ_FLAGS_OFLD & pq_flags)) + - (!!(PQ_FLAGS_VFS & pq_flags)) * - ecore_init_qm_get_num_vfs(p_hwfn); + (!!(PQ_FLAGS_OFLD & pq_flags)) * ecore_init_qm_get_num_mtc_pqs(p_hwfn) + + (!!(PQ_FLAGS_GRP & pq_flags)) * OFLD_GRP_SIZE + + (!!(PQ_FLAGS_LLT & pq_flags)) * ecore_init_qm_get_num_mtc_pqs(p_hwfn) + + (!!(PQ_FLAGS_VFS & pq_flags)) * ecore_init_qm_get_num_vfs_pqs(p_hwfn); } /* initialize the top level QM params */ @@ -1793,7 +2078,8 @@ static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn) /* pq and vport bases for this PF */ qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ); - qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT); + qm_info->start_vport = (u16)RESC_START(p_hwfn, ECORE_VPORT); + qm_info->start_rl = (u16)RESC_START(p_hwfn, ECORE_RL); /* rate limiting and weighted fair queueing are always enabled */ qm_info->vport_rl_en = 1; @@ -1803,15 +2089,11 @@ static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn) four_port = p_hwfn->p_dev->num_ports_in_engine == MAX_NUM_PORTS_K2; /* in AH 4 port we have fewer TCs per port */ - qm_info->max_phys_tcs_per_port = four_port ? NUM_PHYS_TCS_4PORT_K2 : - NUM_OF_PHYS_TCS; + qm_info->max_phys_tcs_per_port = four_port ? NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS; - /* unless MFW indicated otherwise, ooo_tc should be 3 for AH 4 port and - * 4 otherwise - */ + /* unless MFW indicated otherwise, ooo_tc should be 3 for AH 4 port and 4 otherwise */ if (!qm_info->ooo_tc) - qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC : - DCBX_TCP_OOO_TC; + qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC : DCBX_TCP_OOO_TC; } /* initialize qm vport params */ @@ -1834,7 +2116,7 @@ static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn) /* indicate how ooo and high pri traffic is dealt with */ active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ? - ACTIVE_TCS_BMAP_4PORT_K2 : ACTIVE_TCS_BMAP; + ACTIVE_TCS_BMAP_4PORT_K2 : ACTIVE_TCS_BMAP(p_hwfn); for (i = 0; i < num_ports; i++) { struct init_qm_port_params *p_qm_port = @@ -1862,11 +2144,14 @@ static void ecore_init_qm_reset_params(struct ecore_hwfn *p_hwfn) qm_info->num_pqs = 0; qm_info->num_vports = 0; + qm_info->num_rls = 0; qm_info->num_pf_rls = 0; qm_info->num_vf_pqs = 0; qm_info->first_vf_pq = 0; qm_info->first_mcos_pq = 0; qm_info->first_rl_pq = 0; + qm_info->single_vf_rdma_pq = 0; + qm_info->pq_overflow = false; } static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn) @@ -1876,18 +2161,13 @@ static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn) qm_info->num_vports++; if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn)) - DP_ERR(p_hwfn, - "vport overflow! qm_info->num_vports %d," - " qm_init_get_num_vports() %d\n", - qm_info->num_vports, - ecore_init_qm_get_num_vports(p_hwfn)); + DP_ERR(p_hwfn, "vport overflow! 
qm_info->num_vports %d, qm_init_get_num_vports() %d\n", + qm_info->num_vports, ecore_init_qm_get_num_vports(p_hwfn)); } /* initialize a single pq and manage qm_info resources accounting. - * The pq_init_flags param determines whether the PQ is rate limited - * (for VF or PF) - * and whether a new vport is allocated to the pq or not (i.e. vport will be - * shared) + * The pq_init_flags param determines whether the PQ is rate limited (for VF or PF) + * and whether a new vport is allocated to the pq or not (i.e. vport will be shared) */ /* flags for pq init */ @@ -1898,65 +2178,108 @@ static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn) /* defines for pq init */ #define PQ_INIT_DEFAULT_WRR_GROUP 1 #define PQ_INIT_DEFAULT_TC 0 -#define PQ_INIT_OFLD_TC (p_hwfn->hw_info.offload_tc) -static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn, - struct ecore_qm_info *qm_info, - u8 tc, u32 pq_init_flags) +void ecore_hw_info_set_offload_tc(struct ecore_hw_info *p_info, u8 tc) { - u16 pq_idx = qm_info->num_pqs, max_pq = - ecore_init_qm_get_num_pqs(p_hwfn); + p_info->offload_tc = tc; + p_info->offload_tc_set = true; +} - if (pq_idx > max_pq) - DP_ERR(p_hwfn, - "pq overflow! pq %d, max pq %d\n", pq_idx, max_pq); +static bool ecore_is_offload_tc_set(struct ecore_hwfn *p_hwfn) +{ + return p_hwfn->hw_info.offload_tc_set; +} + +u8 ecore_get_offload_tc(struct ecore_hwfn *p_hwfn) +{ + if (ecore_is_offload_tc_set(p_hwfn)) + return p_hwfn->hw_info.offload_tc; + + return PQ_INIT_DEFAULT_TC; +} + +static void ecore_init_qm_pq_port(struct ecore_hwfn *p_hwfn, + struct ecore_qm_info *qm_info, + u8 tc, u32 pq_init_flags, u8 port) +{ + u16 pq_idx = qm_info->num_pqs, max_pq = ecore_init_qm_get_num_pqs(p_hwfn); + u16 num_pf_pqs; + + if (pq_idx > max_pq) { + qm_info->pq_overflow = true; + DP_ERR(p_hwfn, "pq overflow! pq %d, max pq %d\n", pq_idx, max_pq); + } /* init pq params */ - qm_info->qm_pq_params[pq_idx].port_id = p_hwfn->port_id; - qm_info->qm_pq_params[pq_idx].vport_id = qm_info->start_vport + - qm_info->num_vports; + qm_info->qm_pq_params[pq_idx].port_id = port; + qm_info->qm_pq_params[pq_idx].vport_id = qm_info->start_vport + qm_info->num_vports; qm_info->qm_pq_params[pq_idx].tc_id = tc; qm_info->qm_pq_params[pq_idx].wrr_group = PQ_INIT_DEFAULT_WRR_GROUP; - qm_info->qm_pq_params[pq_idx].rl_valid = - (pq_init_flags & PQ_INIT_PF_RL || - pq_init_flags & PQ_INIT_VF_RL); - /* The "rl_id" is set as the "vport_id" */ - qm_info->qm_pq_params[pq_idx].rl_id = - qm_info->qm_pq_params[pq_idx].vport_id; + if (pq_init_flags & (PQ_INIT_PF_RL | PQ_INIT_VF_RL)) { + qm_info->qm_pq_params[pq_idx].rl_valid = 1; + qm_info->qm_pq_params[pq_idx].rl_id = + qm_info->start_rl + qm_info->num_rls++; + } /* qm params accounting */ qm_info->num_pqs++; + if (pq_init_flags & PQ_INIT_VF_RL) { + qm_info->num_vf_pqs++; + } else { + num_pf_pqs = qm_info->num_pqs - qm_info->num_vf_pqs; + if (qm_info->ilt_pf_pqs && num_pf_pqs > qm_info->ilt_pf_pqs) { + qm_info->pq_overflow = true; + DP_ERR(p_hwfn, + "ilt overflow! num_pf_pqs %d, qm_info->ilt_pf_pqs %d\n", + num_pf_pqs, qm_info->ilt_pf_pqs); + } + } + if (!(pq_init_flags & PQ_INIT_SHARE_VPORT)) qm_info->num_vports++; if (pq_init_flags & PQ_INIT_PF_RL) qm_info->num_pf_rls++; - if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn)) - DP_ERR(p_hwfn, - "vport overflow! 
qm_info->num_vports %d," - " qm_init_get_num_vports() %d\n", - qm_info->num_vports, - ecore_init_qm_get_num_vports(p_hwfn)); + if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn)) { + qm_info->pq_overflow = true; + DP_ERR(p_hwfn, "vport overflow! qm_info->num_vports %d, qm_init_get_num_vports() %d\n", + qm_info->num_vports, ecore_init_qm_get_num_vports(p_hwfn)); + } - if (qm_info->num_pf_rls > ecore_init_qm_get_num_pf_rls(p_hwfn)) - DP_ERR(p_hwfn, "rl overflow! qm_info->num_pf_rls %d," - " qm_init_get_num_pf_rls() %d\n", - qm_info->num_pf_rls, - ecore_init_qm_get_num_pf_rls(p_hwfn)); + if (qm_info->num_pf_rls > ecore_init_qm_get_num_pf_rls(p_hwfn)) { + qm_info->pq_overflow = true; + DP_ERR(p_hwfn, "rl overflow! qm_info->num_pf_rls %d, qm_init_get_num_pf_rls() %d\n", + qm_info->num_pf_rls, ecore_init_qm_get_num_pf_rls(p_hwfn)); + } +} + +/* init one qm pq, assume port of the PF */ +static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn, + struct ecore_qm_info *qm_info, + u8 tc, u32 pq_init_flags) +{ + ecore_init_qm_pq_port(p_hwfn, qm_info, tc, pq_init_flags, p_hwfn->port_id); } /* get pq index according to PQ_FLAGS */ static u16 *ecore_init_qm_get_idx_from_flags(struct ecore_hwfn *p_hwfn, - u32 pq_flags) + unsigned long pq_flags) { struct ecore_qm_info *qm_info = &p_hwfn->qm_info; /* Can't have multiple flags set here */ - if (OSAL_BITMAP_WEIGHT((unsigned long *)&pq_flags, - sizeof(pq_flags)) > 1) + if (OSAL_BITMAP_WEIGHT(&pq_flags, + sizeof(pq_flags) * BITS_PER_BYTE) > 1) { + DP_ERR(p_hwfn, "requested multiple pq flags 0x%lx\n", pq_flags); + goto err; + } + + if (!(ecore_get_pq_flags(p_hwfn) & pq_flags)) { + DP_ERR(p_hwfn, "pq flag 0x%lx is not set\n", pq_flags); goto err; + } switch (pq_flags) { case PQ_FLAGS_RLS: @@ -1970,16 +2293,20 @@ static u16 *ecore_init_qm_get_idx_from_flags(struct ecore_hwfn *p_hwfn, case PQ_FLAGS_ACK: return &qm_info->pure_ack_pq; case PQ_FLAGS_OFLD: - return &qm_info->offload_pq; + return &qm_info->first_ofld_pq; + case PQ_FLAGS_LLT: + return &qm_info->first_llt_pq; case PQ_FLAGS_VFS: return &qm_info->first_vf_pq; + case PQ_FLAGS_GRP: + return &qm_info->first_ofld_grp_pq; + case PQ_FLAGS_VSR: + return &qm_info->single_vf_rdma_pq; default: goto err; } - err: - DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags); - return OSAL_NULL; + return &qm_info->start_pq; } /* save pq index in qm info */ @@ -1991,59 +2318,285 @@ static void ecore_init_qm_set_idx(struct ecore_hwfn *p_hwfn, *base_pq_idx = p_hwfn->qm_info.start_pq + pq_val; } +static u16 ecore_qm_get_start_pq(struct ecore_hwfn *p_hwfn) +{ + u16 start_pq; + + OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock); + start_pq = p_hwfn->qm_info.start_pq; + OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock); + + return start_pq; +} + /* get tx pq index, with the PQ TX base already set (ready for context init) */ u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags) { - u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags); + u16 *base_pq_idx; + u16 pq_idx; + + OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock); + base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags); + pq_idx = *base_pq_idx + CM_TX_PQ_BASE; + OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock); + + return pq_idx; +} + +u16 ecore_get_cm_pq_idx_grp(struct ecore_hwfn *p_hwfn, u8 idx) +{ + u16 pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_GRP); + u8 max_idx = ecore_init_qm_get_group_count(p_hwfn); - return *base_pq_idx + CM_TX_PQ_BASE; + if (max_idx == 0) { + DP_ERR(p_hwfn, "pq with flag 0x%x does not exist\n", + PQ_FLAGS_GRP); +
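/* no offload-group PQ was allocated - fall back to the PF's first PQ so the caller still gets a usable index */ +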
return ecore_qm_get_start_pq(p_hwfn); + } + + if (idx > max_idx) + DP_ERR(p_hwfn, "idx %d must be smaller than %d\n", idx, max_idx); + + return pq_idx + (idx % max_idx); } u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc) { + u16 pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS); u8 max_tc = ecore_init_qm_get_num_tcs(p_hwfn); + if (max_tc == 0) { + DP_ERR(p_hwfn, "pq with flag 0x%x does not exist\n", + PQ_FLAGS_MCOS); + return ecore_qm_get_start_pq(p_hwfn); + } + if (tc > max_tc) DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc); - return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc); + return pq_idx + (tc % max_tc); +} + +static u8 ecore_qm_get_pqs_per_vf(struct ecore_hwfn *p_hwfn) +{ + u8 pqs_per_vf; + u32 pq_flags; + + /* When VFR is set, there is a pair of PQs per VF. If VSR is set, + * no additional action required in computing the per VF PQ. + */ + OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock); + pq_flags = ecore_get_pq_flags(p_hwfn); + pqs_per_vf = (PQ_FLAGS_VFR & pq_flags) ? 2 : 1; + OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock); + + return pqs_per_vf; } u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf) { - u16 max_vf = ecore_init_qm_get_num_vfs(p_hwfn); + u16 pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS); + u16 max_vf = ecore_init_qm_get_num_active_vfs(p_hwfn); + u8 pqs_per_vf; + + if (max_vf == 0) { + DP_ERR(p_hwfn, "pq with flag 0x%x does not exist\n", + PQ_FLAGS_VFS); + return ecore_qm_get_start_pq(p_hwfn); + } if (vf > max_vf) DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf); - return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + (vf % max_vf); + pqs_per_vf = ecore_qm_get_pqs_per_vf(p_hwfn); + + return pq_idx + ((vf % max_vf) * pqs_per_vf); +} + +u16 ecore_get_cm_pq_idx_vf_rdma(struct ecore_hwfn *p_hwfn, u16 vf) +{ + u32 pq_flags; + u16 pq_idx; + + OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock); + pq_flags = ecore_get_pq_flags(p_hwfn); + OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock); + + /* If VSR is set, dedicated single PQ for VFs RDMA */ + if (PQ_FLAGS_VSR & pq_flags) + pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VSR); + else + pq_idx = ecore_get_cm_pq_idx_vf(p_hwfn, vf); + + /* If VFR is set, VF's 2nd PQ is for RDMA */ + if ((PQ_FLAGS_VFR & pq_flags)) + pq_idx++; + + return pq_idx; } u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl) { + u16 pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS); u16 max_rl = ecore_init_qm_get_num_pf_rls(p_hwfn); - /* for rate limiters, it is okay to use the modulo behavior - no - * DP_ERR + if (max_rl == 0) { + DP_ERR(p_hwfn, "pq with flag 0x%x does not exist\n", + PQ_FLAGS_RLS); + return ecore_qm_get_start_pq(p_hwfn); + } + + /* When an invalid RL index is requested, return the highest + * available RL PQ. "max_rl - 1" is the relative index of the + * last PQ reserved for RLs.
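+ * The DP_ERR below keeps such invalid requests visible in the log.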
*/ - return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS) + (rl % max_rl); + if (rl >= max_rl) { + DP_ERR(p_hwfn, + "rl %hu is not a valid rate limiter, returning rl %hu\n", + rl, max_rl - 1); + return pq_idx + max_rl - 1; + } + + return pq_idx + rl; } -u16 ecore_get_qm_vport_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl) +static u16 ecore_get_qm_pq_from_cm_pq(struct ecore_hwfn *p_hwfn, u16 cm_pq_id) { - u16 start_pq, pq, qm_pq_idx; + u16 start_pq = ecore_qm_get_start_pq(p_hwfn); - pq = ecore_get_cm_pq_idx_rl(p_hwfn, rl); - start_pq = p_hwfn->qm_info.start_pq; - qm_pq_idx = pq - start_pq - CM_TX_PQ_BASE; + return cm_pq_id - CM_TX_PQ_BASE - start_pq; +} + +static u16 ecore_get_vport_id_from_pq(struct ecore_hwfn *p_hwfn, u16 pq_id) +{ + u16 vport_id; + + OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock); + vport_id = p_hwfn->qm_info.qm_pq_params[pq_id].vport_id; + OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock); + + return vport_id; +} + +static u16 ecore_get_rl_id_from_pq(struct ecore_hwfn *p_hwfn, u16 pq_id) +{ + u16 rl_id; + + OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock); + rl_id = p_hwfn->qm_info.qm_pq_params[pq_id].rl_id; + OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock); - if (qm_pq_idx > p_hwfn->qm_info.num_pqs) { + return rl_id; +} + +u16 ecore_get_pq_vport_id_from_rl(struct ecore_hwfn *p_hwfn, u16 rl) +{ + u16 cm_pq_id = ecore_get_cm_pq_idx_rl(p_hwfn, rl); + u16 qm_pq_id = ecore_get_qm_pq_from_cm_pq(p_hwfn, cm_pq_id); + + return ecore_get_vport_id_from_pq(p_hwfn, qm_pq_id); +} + +u16 ecore_get_pq_vport_id_from_vf(struct ecore_hwfn *p_hwfn, u16 vf) +{ + u16 cm_pq_id = ecore_get_cm_pq_idx_vf(p_hwfn, vf); + u16 qm_pq_id = ecore_get_qm_pq_from_cm_pq(p_hwfn, cm_pq_id); + + return ecore_get_vport_id_from_pq(p_hwfn, qm_pq_id); +} + +u16 ecore_get_pq_rl_id_from_rl(struct ecore_hwfn *p_hwfn, u16 rl) +{ + u16 cm_pq_id = ecore_get_cm_pq_idx_rl(p_hwfn, rl); + u16 qm_pq_id = ecore_get_qm_pq_from_cm_pq(p_hwfn, cm_pq_id); + + return ecore_get_rl_id_from_pq(p_hwfn, qm_pq_id); +} + +u16 ecore_get_pq_rl_id_from_vf(struct ecore_hwfn *p_hwfn, u16 vf) +{ + u16 cm_pq_id = ecore_get_cm_pq_idx_vf(p_hwfn, vf); + u16 qm_pq_id = ecore_get_qm_pq_from_cm_pq(p_hwfn, cm_pq_id); + + return ecore_get_rl_id_from_pq(p_hwfn, qm_pq_id); +} + +static u16 ecore_get_cm_pq_offset_mtc(struct ecore_hwfn *p_hwfn, + u16 idx, u8 tc) +{ + u16 pq_offset = 0, max_pqs; + u8 num_ports, num_tcs; + + num_ports = ecore_lag_support(p_hwfn) ? 
LAG_MAX_PORT_NUM : 1; + num_tcs = ecore_init_qm_get_num_mtc_tcs(p_hwfn); + + /* add the port offset */ + pq_offset += (idx % num_ports) * num_tcs; + /* add the tc offset */ + pq_offset += tc % num_tcs; + + /* Verify that the pq returned is within pqs range */ + max_pqs = ecore_init_qm_get_num_mtc_pqs(p_hwfn); + if (pq_offset >= max_pqs) { DP_ERR(p_hwfn, - "qm_pq_idx %d must be smaller than %d\n", - qm_pq_idx, p_hwfn->qm_info.num_pqs); + "pq_offset %d must be smaller than %d (idx %d tc %d)\n", + pq_offset, max_pqs, idx, tc); + return 0; } - return p_hwfn->qm_info.qm_pq_params[qm_pq_idx].vport_id; + return pq_offset; +} + +u16 ecore_get_cm_pq_idx_ofld_mtc(struct ecore_hwfn *p_hwfn, + u16 idx, u8 tc) +{ + u16 first_ofld_pq, pq_offset; + +#ifdef CONFIG_DCQCN + if (p_hwfn->p_rdma_info->roce.dcqcn_enabled) + return ecore_get_cm_pq_idx_rl(p_hwfn, idx); +#endif + + first_ofld_pq = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_OFLD); + pq_offset = ecore_get_cm_pq_offset_mtc(p_hwfn, idx, tc); + + return first_ofld_pq + pq_offset; +} + +u16 ecore_get_cm_pq_idx_llt_mtc(struct ecore_hwfn *p_hwfn, + u16 idx, u8 tc) +{ + u16 first_llt_pq, pq_offset; + +#ifdef CONFIG_DCQCN + if (p_hwfn->p_rdma_info->roce.dcqcn_enabled) + return ecore_get_cm_pq_idx_rl(p_hwfn, idx); +#endif + + first_llt_pq = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LLT); + pq_offset = ecore_get_cm_pq_offset_mtc(p_hwfn, idx, tc); + + return first_llt_pq + pq_offset; +} + +u16 ecore_get_cm_pq_idx_ll2(struct ecore_hwfn *p_hwfn, u8 tc) +{ + switch (tc) { + case PURE_LB_TC: + return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB); + case PKT_LB_TC: + return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_OOO); + default: +#ifdef CONFIG_DCQCN + /* In RoCE, when DCQCN is enabled, there are no OFLD pqs, + * get the first RL pq. + */ + if (ECORE_IS_ROCE_PERSONALITY(p_hwfn) && + p_hwfn->p_rdma_info->roce.dcqcn_enabled) + return ecore_get_cm_pq_idx_rl(p_hwfn, 0); +#endif + return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_OFLD); + } } /* Functions for creating specific types of pqs */ @@ -2077,7 +2630,39 @@ static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn) return; ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_ACK, qm_info->num_pqs); - ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT); + ecore_init_qm_pq(p_hwfn, qm_info, ecore_get_offload_tc(p_hwfn), + PQ_INIT_SHARE_VPORT); +} + +static void ecore_init_qm_mtc_pqs(struct ecore_hwfn *p_hwfn) +{ + u8 num_tcs = ecore_init_qm_get_num_mtc_tcs(p_hwfn); + struct ecore_qm_info *qm_info = &p_hwfn->qm_info; + u8 second_port = p_hwfn->port_id; + u8 first_port = p_hwfn->port_id; + u8 tc; + + /* if lag is not active, init all pqs with p_hwfn's default port */ + if (ecore_lag_is_active(p_hwfn)) { + first_port = p_hwfn->lag_info.first_port; + second_port = p_hwfn->lag_info.second_port; + } + + /* override pq's TC if offload TC is set */ + for (tc = 0; tc < num_tcs; tc++) + ecore_init_qm_pq_port(p_hwfn, qm_info, + ecore_is_offload_tc_set(p_hwfn) ? + p_hwfn->hw_info.offload_tc : tc, + PQ_INIT_SHARE_VPORT, + first_port); + if (ecore_lag_support(p_hwfn)) + /* initialize second port's pqs even if lag is not active */ + for (tc = 0; tc < num_tcs; tc++) + ecore_init_qm_pq_port(p_hwfn, qm_info, + ecore_is_offload_tc_set(p_hwfn) ? 
+ p_hwfn->hw_info.offload_tc : tc, + PQ_INIT_SHARE_VPORT, + second_port); } static void ecore_init_qm_offload_pq(struct ecore_hwfn *p_hwfn) @@ -2088,7 +2673,36 @@ static void ecore_init_qm_offload_pq(struct ecore_hwfn *p_hwfn) return; ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OFLD, qm_info->num_pqs); - ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT); + ecore_init_qm_mtc_pqs(p_hwfn); +} + +static void ecore_init_qm_low_latency_pq(struct ecore_hwfn *p_hwfn) +{ + struct ecore_qm_info *qm_info = &p_hwfn->qm_info; + + if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_LLT)) + return; + + ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_LLT, qm_info->num_pqs); + ecore_init_qm_mtc_pqs(p_hwfn); +} + +static void ecore_init_qm_offload_pq_group(struct ecore_hwfn *p_hwfn) +{ + struct ecore_qm_info *qm_info = &p_hwfn->qm_info; + u8 idx; + + if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_GRP)) + return; + + ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_GRP, qm_info->num_pqs); + + /* iterate over offload pqs */ + for (idx = 0; idx < ecore_init_qm_get_group_count(p_hwfn); idx++) { + ecore_init_qm_pq_port(p_hwfn, qm_info, qm_info->offload_group[idx].tc, + PQ_INIT_SHARE_VPORT, + qm_info->offload_group[idx].port); + } } static void ecore_init_qm_mcos_pqs(struct ecore_hwfn *p_hwfn) @@ -2104,34 +2718,76 @@ static void ecore_init_qm_mcos_pqs(struct ecore_hwfn *p_hwfn) ecore_init_qm_pq(p_hwfn, qm_info, tc_idx, PQ_INIT_SHARE_VPORT); } +static void ecore_init_qm_vf_single_rdma_pq(struct ecore_hwfn *p_hwfn) +{ + struct ecore_qm_info *qm_info = &p_hwfn->qm_info; + u32 pq_flags = ecore_get_pq_flags(p_hwfn); + + if (!(pq_flags & PQ_FLAGS_VSR)) + return; + + /* ecore_init_qm_pq_params() is going to increment vport ID anyway, + * so keep it shared here so we don't waste a vport. + */ + ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_VSR, qm_info->num_pqs); + ecore_init_qm_pq(p_hwfn, qm_info, ecore_get_offload_tc(p_hwfn), + PQ_INIT_VF_RL | PQ_INIT_SHARE_VPORT); +} + static void ecore_init_qm_vf_pqs(struct ecore_hwfn *p_hwfn) { + u16 vf_idx, num_vfs = ecore_init_qm_get_num_active_vfs(p_hwfn); struct ecore_qm_info *qm_info = &p_hwfn->qm_info; - u16 vf_idx, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn); + u32 pq_flags = ecore_get_pq_flags(p_hwfn); + u32 l2_pq_init_flags = PQ_INIT_VF_RL; - if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_VFS)) + if (!(pq_flags & PQ_FLAGS_VFS)) return; + /* Mark PQ starting VF range */ ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_VFS, qm_info->num_pqs); - qm_info->num_vf_pqs = num_vfs; - for (vf_idx = 0; vf_idx < num_vfs; vf_idx++) + /* If VFR is set, the L2 PQ will share the rate limiter with the rdma PQ */ + if (pq_flags & PQ_FLAGS_VFR) + l2_pq_init_flags |= PQ_INIT_SHARE_VPORT; + + /* Init the per PF PQs */ + for (vf_idx = 0; vf_idx < num_vfs; vf_idx++) { + /* Per VF L2 PQ */ ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_DEFAULT_TC, - PQ_INIT_VF_RL); + l2_pq_init_flags); + + /* Per VF Rdma PQ */ + if (pq_flags & PQ_FLAGS_VFR) + ecore_init_qm_pq(p_hwfn, qm_info, + ecore_get_offload_tc(p_hwfn), + PQ_INIT_VF_RL); + } } static void ecore_init_qm_rl_pqs(struct ecore_hwfn *p_hwfn) { u16 pf_rls_idx, num_pf_rls = ecore_init_qm_get_num_pf_rls(p_hwfn); + struct ecore_lag_info *lag_info = &p_hwfn->lag_info; struct ecore_qm_info *qm_info = &p_hwfn->qm_info; + u8 port = p_hwfn->port_id, tc; if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_RLS)) return; ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_RLS, qm_info->num_pqs); - for (pf_rls_idx = 0; pf_rls_idx < num_pf_rls; pf_rls_idx++) - ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, - 
PQ_INIT_PF_RL); + tc = ecore_get_offload_tc(p_hwfn); + for (pf_rls_idx = 0; pf_rls_idx < num_pf_rls; pf_rls_idx++) { + /* if lag is present, set these pqs per port according to parity */ + if (lag_info->is_master && + lag_info->lag_type != ECORE_LAG_TYPE_NONE && + lag_info->port_num > 0) + port = (pf_rls_idx % lag_info->port_num == 0) ? + lag_info->first_port : lag_info->second_port; + + ecore_init_qm_pq_port(p_hwfn, qm_info, tc, PQ_INIT_PF_RL, + port); + } } static void ecore_init_qm_pq_params(struct ecore_hwfn *p_hwfn) @@ -2154,24 +2810,68 @@ static void ecore_init_qm_pq_params(struct ecore_hwfn *p_hwfn) /* pq for offloaded protocol */ ecore_init_qm_offload_pq(p_hwfn); - /* done sharing vports */ + /* low latency pq */ + ecore_init_qm_low_latency_pq(p_hwfn); + + /* per offload group pqs */ + ecore_init_qm_offload_pq_group(p_hwfn); + + /* Single VF-RDMA PQ, in case there weren't enough for each VF */ + ecore_init_qm_vf_single_rdma_pq(p_hwfn); + + /* PF done sharing vports, advance vport for first VF. + * Vport ID is incremented in a separate function because we can't + * rely on the last PF PQ to not use PQ_INIT_SHARE_VPORT, which can + * be different in every QM reconfiguration. + */ ecore_init_qm_advance_vport(p_hwfn); /* pqs for vfs */ ecore_init_qm_vf_pqs(p_hwfn); } -/* compare values of getters against resources amounts */ -static enum _ecore_status_t ecore_init_qm_sanity(struct ecore_hwfn *p_hwfn) +/* Finds the optimal features configuration to maximize PQs utilization */ +static enum _ecore_status_t ecore_init_qm_features(struct ecore_hwfn *p_hwfn) { - if (ecore_init_qm_get_num_vports(p_hwfn) > - RESC_NUM(p_hwfn, ECORE_VPORT)) { + if (ecore_init_qm_get_num_vports(p_hwfn) > RESC_NUM(p_hwfn, ECORE_VPORT)) { DP_ERR(p_hwfn, "requested amount of vports exceeds resource\n"); return ECORE_INVAL; } - if (ecore_init_qm_get_num_pqs(p_hwfn) > RESC_NUM(p_hwfn, ECORE_PQ)) { - DP_ERR(p_hwfn, "requested amount of pqs exceeds resource\n"); + if (ecore_init_qm_get_num_pf_rls(p_hwfn) == 0) { + if (IS_ECORE_PACING(p_hwfn)) { + DP_ERR(p_hwfn, "No rate limiters available for PF\n"); + return ECORE_INVAL; + } + } + + /* For VF RDMA try to provide 2 PQs (separate PQ for RDMA) per VF */ + if (ECORE_IS_RDMA_PERSONALITY(p_hwfn) && ECORE_IS_VF_RDMA(p_hwfn) && + ecore_init_qm_get_num_active_vfs(p_hwfn)) + p_hwfn->qm_info.vf_rdma_en = true; + + while (ecore_init_qm_get_num_pqs(p_hwfn) > RESC_NUM(p_hwfn, ECORE_PQ) || + ecore_init_qm_get_num_rls(p_hwfn) > RESC_NUM(p_hwfn, ECORE_RL)) { + if (IS_ECORE_QM_VF_RDMA(p_hwfn)) { + p_hwfn->qm_info.vf_rdma_en = false; + DP_NOTICE(p_hwfn, false, + "PQ per rdma vf was disabled to reduce requested amount of pqs/rls. 
A single PQ for all rdma VFs will be used\n"); + continue; + } + + if (IS_ECORE_MULTI_TC_ROCE(p_hwfn)) { + p_hwfn->hw_info.multi_tc_roce_en = false; + DP_NOTICE(p_hwfn, false, + "multi-tc roce was disabled to reduce requested amount of pqs/rls\n"); + continue; + } + + DP_ERR(p_hwfn, + "Requested amount: %d pqs %d rls, Actual amount: %d pqs %d rls\n", + ecore_init_qm_get_num_pqs(p_hwfn), + ecore_init_qm_get_num_rls(p_hwfn), + RESC_NUM(p_hwfn, ECORE_PQ), + RESC_NUM(p_hwfn, ECORE_RL)); return ECORE_INVAL; } @@ -2189,35 +2889,38 @@ static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn) struct init_qm_pq_params *pq; int i, tc; + if (qm_info->pq_overflow) + return; + /* top level params */ - DP_VERBOSE(p_hwfn, ECORE_MSG_HW, - "qm init top level params: start_pq %d, start_vport %d," - " pure_lb_pq %d, offload_pq %d, pure_ack_pq %d\n", - qm_info->start_pq, qm_info->start_vport, qm_info->pure_lb_pq, - qm_info->offload_pq, qm_info->pure_ack_pq); - DP_VERBOSE(p_hwfn, ECORE_MSG_HW, - "ooo_pq %d, first_vf_pq %d, num_pqs %d, num_vf_pqs %d," - " num_vports %d, max_phys_tcs_per_port %d\n", - qm_info->ooo_pq, qm_info->first_vf_pq, qm_info->num_pqs, - qm_info->num_vf_pqs, qm_info->num_vports, - qm_info->max_phys_tcs_per_port); - DP_VERBOSE(p_hwfn, ECORE_MSG_HW, - "pf_rl_en %d, pf_wfq_en %d, vport_rl_en %d, vport_wfq_en %d," - " pf_wfq %d, pf_rl %d, num_pf_rls %d, pq_flags %x\n", - qm_info->pf_rl_en, qm_info->pf_wfq_en, qm_info->vport_rl_en, - qm_info->vport_wfq_en, qm_info->pf_wfq, qm_info->pf_rl, - qm_info->num_pf_rls, ecore_get_pq_flags(p_hwfn)); + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "qm init params: pq_flags 0x%x, num_pqs %d, num_vf_pqs %d, start_pq %d\n", + ecore_get_pq_flags(p_hwfn), qm_info->num_pqs, + qm_info->num_vf_pqs, qm_info->start_pq); + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "qm init params: pf_rl_en %d, pf_wfq_en %d, vport_rl_en %d, vport_wfq_en %d\n", + qm_info->pf_rl_en, qm_info->pf_wfq_en, + qm_info->vport_rl_en, qm_info->vport_wfq_en); + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "qm init params: num_vports %d, start_vport %d, num_rls %d, num_pf_rls %d, start_rl %d, pf_rl %d\n", + qm_info->num_vports, qm_info->start_vport, + qm_info->num_rls, qm_info->num_pf_rls, + qm_info->start_rl, qm_info->pf_rl); + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "qm init params: pure_lb_pq %d, ooo_pq %d, pure_ack_pq %d, first_ofld_pq %d, first_llt_pq %d\n", + qm_info->pure_lb_pq, qm_info->ooo_pq, qm_info->pure_ack_pq, + qm_info->first_ofld_pq, qm_info->first_llt_pq); + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "qm init params: single_vf_rdma_pq %d, first_vf_pq %d, max_phys_tcs_per_port %d, pf_wfq %d\n", + qm_info->single_vf_rdma_pq, qm_info->first_vf_pq, + qm_info->max_phys_tcs_per_port, qm_info->pf_wfq); /* port table */ for (i = 0; i < p_hwfn->p_dev->num_ports_in_engine; i++) { port = &qm_info->qm_port_params[i]; - DP_VERBOSE(p_hwfn, ECORE_MSG_HW, - "port idx %d, active %d, active_phys_tcs %d," - " num_pbf_cmd_lines %d, num_btb_blocks %d," - " reserved %d\n", - i, port->active, port->active_phys_tcs, - port->num_pbf_cmd_lines, port->num_btb_blocks, - port->reserved); + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "port idx %d, active %d, active_phys_tcs %d, num_pbf_cmd_lines %d, num_btb_blocks %d, reserved %d\n", + i, port->active, port->active_phys_tcs, port->num_pbf_cmd_lines, + port->num_btb_blocks, port->reserved); } /* vport table */ @@ -2226,8 +2929,7 @@ static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn) DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "vport idx %d, wfq %d, first_tx_pq_id [ ", qm_info->start_vport + i, vport->wfq); 
for (tc = 0; tc < NUM_OF_TCS; tc++) - DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ", - vport->first_tx_pq_id[tc]); + DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ", vport->first_tx_pq_id[tc]); DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "]\n"); } @@ -2270,19 +2972,66 @@ static void ecore_init_qm_info(struct ecore_hwfn *p_hwfn) * 4. activate init tool in QM_PF stage * 5. send an sdm_qm_cmd through rbc interface to release the QM */ -enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +static enum _ecore_status_t __ecore_qm_reconf(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + bool b_can_sleep) { - struct ecore_qm_info *qm_info = &p_hwfn->qm_info; - bool b_rc; + struct ecore_resc_unlock_params resc_unlock_params; + struct ecore_resc_lock_params resc_lock_params; + bool b_rc, b_mfw_unlock = true; + struct ecore_qm_info *qm_info; enum _ecore_status_t rc = ECORE_SUCCESS; - /* multiple flows can issue qm reconf. Need to lock */ + qm_info = &p_hwfn->qm_info; + + /* Obtain MFW resource lock to sync with PFs with driver instances not + * covered by the static global qm_lock (monolithic, dpdk, PDA). + */ + ecore_mcp_resc_lock_default_init(&resc_lock_params, &resc_unlock_params, + ECORE_RESC_LOCK_QM_RECONF, false); + resc_lock_params.sleep_b4_retry = b_can_sleep; + rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params); + + /* If lock is taken we must abort. If MFW does not support the feature + * or took too long to acquire the lock we soldier on. + */ + if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL && rc != ECORE_TIMEOUT) { + DP_ERR(p_hwfn, + "QM reconf MFW lock is stuck. Failing reconf flow\n"); + return ECORE_INVAL; + } + + /* if MFW doesn't support, no need to unlock. There is no harm in + * trying, but we would need to tweak the rc value in case of + * ECORE_NOTIMPL, so seems nicer to avoid. + */ + if (rc == ECORE_NOTIMPL) + b_mfw_unlock = false; + + /* Multiple hwfn flows can issue qm reconf. Need to lock between hwfn + * flows. 
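+ * The static qm_lock serializes hwfns within this driver instance, while + * the MFW resource lock taken above covers other driver instances.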
+ */ OSAL_SPIN_LOCK(&qm_lock); + /* qm_info is invalid while this lock is taken */ + OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock); + + rc = ecore_init_qm_features(p_hwfn); + if (rc != ECORE_SUCCESS) { + OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock); + goto unlock; + } + /* initialize ecore's qm data structure */ ecore_init_qm_info(p_hwfn); + OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock); + + if (qm_info->pq_overflow) { + rc = ECORE_INVAL; + goto unlock; + } + /* stop PF's qm queues */ b_rc = ecore_send_qm_stop_cmd(p_hwfn, p_ptt, false, true, qm_info->start_pq, qm_info->num_pqs); @@ -2291,9 +3040,6 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn, goto unlock; } - /* clear the QM_PF runtime phase leftovers from previous init */ - ecore_init_clear_rt_data(p_hwfn); - /* prepare QM portion of runtime array */ ecore_qm_init_pf(p_hwfn, p_ptt, false); @@ -2310,39 +3056,66 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn, unlock: OSAL_SPIN_UNLOCK(&qm_lock); + if (b_mfw_unlock) + rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt, &resc_unlock_params); + return rc; } +enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + return __ecore_qm_reconf(p_hwfn, p_ptt, true); +} + +enum _ecore_status_t ecore_qm_reconf_intr(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + return __ecore_qm_reconf(p_hwfn, p_ptt, false); +} + static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn) { struct ecore_qm_info *qm_info = &p_hwfn->qm_info; + u16 max_pqs_num, max_vports_num; enum _ecore_status_t rc; - rc = ecore_init_qm_sanity(p_hwfn); +#ifdef CONFIG_ECORE_LOCK_ALLOC + rc = OSAL_SPIN_LOCK_ALLOC(p_hwfn, &qm_info->qm_info_lock, + "qm_info_lock"); + if (rc) + goto alloc_err; +#endif + OSAL_SPIN_LOCK_INIT(&qm_info->qm_info_lock); + + rc = ecore_init_qm_features(p_hwfn); if (rc != ECORE_SUCCESS) goto alloc_err; + max_pqs_num = (u16)RESC_NUM(p_hwfn, ECORE_PQ); + max_vports_num = (u16)RESC_NUM(p_hwfn, ECORE_VPORT); + qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(struct init_qm_pq_params) * - ecore_init_qm_get_num_pqs(p_hwfn)); + max_pqs_num); if (!qm_info->qm_pq_params) goto alloc_err; qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, - sizeof(struct init_qm_vport_params) * - ecore_init_qm_get_num_vports(p_hwfn)); + sizeof(struct init_qm_vport_params) * + max_vports_num); if (!qm_info->qm_vport_params) goto alloc_err; qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, - sizeof(struct init_qm_port_params) * - p_hwfn->p_dev->num_ports_in_engine); + sizeof(struct init_qm_port_params) * + p_hwfn->p_dev->num_ports_in_engine); if (!qm_info->qm_port_params) goto alloc_err; qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(struct ecore_wfq_data) * - ecore_init_qm_get_num_vports(p_hwfn)); + max_vports_num); if (!qm_info->wfq_data) goto alloc_err; @@ -2355,25 +3128,256 @@ static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn) } /******************** End QM initialization ***************/ -enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev) +static enum _ecore_status_t ecore_lag_create_slave(struct ecore_hwfn *p_hwfn, + u8 master_pfid) +{ + struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); + u8 slave_ppfid = 1; /* TODO: Need some sort of resource management function + * to return a free entry + */ + enum _ecore_status_t rc; + + if (!p_ptt) + return ECORE_AGAIN; + + rc = ecore_llh_map_ppfid_to_pfid(p_hwfn, p_ptt, slave_ppfid, + 
master_pfid); + ecore_ptt_release(p_hwfn, p_ptt); + if (rc != ECORE_SUCCESS) + return rc; + + /* Protocol filter for RoCE v1 */ + rc = ecore_llh_add_protocol_filter(p_hwfn->p_dev, slave_ppfid, + ECORE_LLH_FILTER_ETHERTYPE, 0x8915, + ECORE_LLH_DONT_CARE); + if (rc != ECORE_SUCCESS) + return rc; + + /* Protocol filter for RoCE v2 */ + return ecore_llh_add_protocol_filter(p_hwfn->p_dev, slave_ppfid, + ECORE_LLH_FILTER_UDP_DEST_PORT, + ECORE_LLH_DONT_CARE, 4791); +} + +static void ecore_lag_destroy_slave(struct ecore_hwfn *p_hwfn) +{ + struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); + u8 slave_ppfid = 1; /* Need some sort of resource management function + * to return a free entry + */ + + /* Protocol filter for RoCE v1 */ + ecore_llh_remove_protocol_filter(p_hwfn->p_dev, slave_ppfid, + ECORE_LLH_FILTER_ETHERTYPE, + 0x8915, ECORE_LLH_DONT_CARE); + + /* Protocol filter for RoCE v2 */ + ecore_llh_remove_protocol_filter(p_hwfn->p_dev, slave_ppfid, + ECORE_LLH_FILTER_UDP_DEST_PORT, + ECORE_LLH_DONT_CARE, 4791); + + if (p_ptt) { + ecore_llh_map_ppfid_to_pfid(p_hwfn, p_ptt, slave_ppfid, + p_hwfn->rel_pf_id); + ecore_ptt_release(p_hwfn, p_ptt); + } +} + +/* Map ports: + * port 0/2 - 0/2 + * port 1/3 - 1/3 + * If port 0/2 is down, map both to port 1/3, if port 1/3 is down, map both to + * port 0/2, and if both are down, it doesn't really matter. + */ +static void ecore_lag_map_ports(struct ecore_hwfn *p_hwfn) +{ + struct ecore_lag_info *lag_info = &p_hwfn->lag_info; + + /* for now support only 2 ports in the bond */ + if (lag_info->master_pf == 0) { + lag_info->first_port = (lag_info->active_ports & (1 << 0)) ? 0 : 1; + lag_info->second_port = (lag_info->active_ports & (1 << 1)) ? 1 : 0; + } else if (lag_info->master_pf == 2) { + lag_info->first_port = (lag_info->active_ports & (1 << 2)) ? 2 : 3; + lag_info->second_port = (lag_info->active_ports & (1 << 3)) ? 3 : 2; + } + lag_info->port_num = LAG_MAX_PORT_NUM; +} + +/* The following function strongly assumes two ports only */ +static enum _ecore_status_t ecore_lag_create_master(struct ecore_hwfn *p_hwfn) +{ + struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); + enum _ecore_status_t rc; + + if (!p_ptt) + return ECORE_AGAIN; + + ecore_lag_map_ports(p_hwfn); + rc = ecore_qm_reconf_intr(p_hwfn, p_ptt); + ecore_ptt_release(p_hwfn, p_ptt); + + return rc; +} + +/* The following function strongly assumes two ports only */ +static enum _ecore_status_t ecore_lag_destroy_master(struct ecore_hwfn *p_hwfn) { + struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); + enum _ecore_status_t rc; + + if (!p_ptt) + return ECORE_AGAIN; + + p_hwfn->qm_info.offload_group_count = 0; + + rc = ecore_qm_reconf_intr(p_hwfn, p_ptt); + ecore_ptt_release(p_hwfn, p_ptt); + + return rc; +} + +enum _ecore_status_t ecore_lag_create(struct ecore_dev *dev, + enum ecore_lag_type lag_type, + void (*link_change_cb)(void *cxt), + void *cxt, + u8 active_ports) +{ + struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(dev); + u8 master_pfid = p_hwfn->abs_pf_id < 2 ? 
0 : 2; + + if (!ecore_lag_support(p_hwfn)) { + DP_NOTICE(p_hwfn, false, "RDMA bonding will not be configured - only supported on AH devices on default mode\n"); + return ECORE_INVAL; + } + + /* TODO: Check Supported MFW */ + p_hwfn->lag_info.lag_type = lag_type; + p_hwfn->lag_info.link_change_cb = link_change_cb; + p_hwfn->lag_info.cxt = cxt; + p_hwfn->lag_info.active_ports = active_ports; + p_hwfn->lag_info.is_master = p_hwfn->abs_pf_id == master_pfid; + p_hwfn->lag_info.master_pf = master_pfid; + + /* Configure RX for LAG */ + if (p_hwfn->lag_info.is_master) + return ecore_lag_create_master(p_hwfn); + + return ecore_lag_create_slave(p_hwfn, master_pfid); +} + +/* Modify the link state of a given port */ +enum _ecore_status_t ecore_lag_modify(struct ecore_dev *dev, + u8 port_id, + u8 link_active) +{ + struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(dev); + struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); + struct ecore_lag_info *lag_info = &p_hwfn->lag_info; enum _ecore_status_t rc = ECORE_SUCCESS; - enum dbg_status debug_status = DBG_STATUS_OK; - int i; + unsigned long active_ports; + u8 curr_active; - if (IS_VF(p_dev)) { - for_each_hwfn(p_dev, i) { - rc = ecore_l2_alloc(&p_dev->hwfns[i]); - if (rc != ECORE_SUCCESS) - return rc; + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Active ports changed before %x link active %x port_id=%d\n", + lag_info->active_ports, link_active, port_id); + + if (!p_ptt) + return ECORE_AGAIN; + + active_ports = lag_info->active_ports; + curr_active = !!OSAL_TEST_BIT(port_id, &lag_info->active_ports); + if (curr_active != link_active) { + OSAL_TEST_AND_FLIP_BIT(port_id, &lag_info->active_ports); + + /* Reconfigure QM according to active_ports */ + if (lag_info->is_master) { + ecore_lag_map_ports(p_hwfn); + rc = ecore_qm_reconf_intr(p_hwfn, p_ptt); } - return rc; + + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Active ports changed before %lx after %x\n", + active_ports, lag_info->active_ports); + } else { + /* No change in active ports, triggered from port event */ + /* call dcbx related code */ + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Nothing changed\n"); } - p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL, - sizeof(*p_dev->fw_data)); - if (!p_dev->fw_data) - return ECORE_NOMEM; + ecore_ptt_release(p_hwfn, p_ptt); + return rc; +} + +enum _ecore_status_t ecore_lag_destroy(struct ecore_dev *dev) +{ + struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(dev); + struct ecore_lag_info *lag_info = &p_hwfn->lag_info; + + lag_info->lag_type = ECORE_LAG_TYPE_NONE; + lag_info->link_change_cb = OSAL_NULL; + lag_info->cxt = OSAL_NULL; + + if (!lag_info->is_master) { + ecore_lag_destroy_slave(p_hwfn); + return ECORE_SUCCESS; + } + + return ecore_lag_destroy_master(p_hwfn); +} + +bool ecore_lag_is_active(struct ecore_hwfn *p_hwfn) +{ + return !(p_hwfn->lag_info.lag_type == ECORE_LAG_TYPE_NONE); +} + +static enum _ecore_status_t ecore_cxt_calculate_tasks(struct ecore_hwfn *p_hwfn, + u32 *rdma_tasks, + u32 *eth_tasks, + u32 excess_tasks) +{ + u32 eth_tasks_tmp, rdma_tasks_tmp, reduced_tasks = 0; + +/* @DPDK */ +#define ECORE_ETH_MAX_TIDS 0 /* !CONFIG_ECORE_FS */ + /* ETH tasks are used for GFS stats counters in AHP */ + *eth_tasks = ECORE_IS_E5(p_hwfn->p_dev) ? ECORE_ETH_MAX_TIDS : 0; + + /* No tasks requested. If there are no excess tasks, return. + * If there are excess tasks and ILT lines needs to be reduced, + * it can't be done by reducing number of tasks, return failure. + */ + /* DPDK */ + if (*eth_tasks == 0) + return excess_tasks ? 
ECORE_INVAL : ECORE_SUCCESS; + + /* Can not reduce enough tasks */ + if (excess_tasks > *eth_tasks) + return ECORE_INVAL; + + eth_tasks_tmp = *eth_tasks; + + while (reduced_tasks < excess_tasks) { + eth_tasks_tmp >>= 1; + /* DPDK */ + reduced_tasks = (*eth_tasks) - (eth_tasks_tmp); + } + + *eth_tasks = eth_tasks_tmp; + + return ECORE_SUCCESS; +} + +enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev) +{ + u32 rdma_tasks = 0, eth_tasks = 0, excess_tasks = 0, line_count; + enum _ecore_status_t rc = ECORE_SUCCESS; + int i; + + if (IS_PF(p_dev)) { + p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL, + sizeof(*p_dev->fw_data)); + if (!p_dev->fw_data) + return ECORE_NOMEM; + } for_each_hwfn(p_dev, i) { struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i]; @@ -2384,6 +3388,26 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev) if (rc) goto alloc_err; + rc = ecore_l2_alloc(p_hwfn); + if (rc != ECORE_SUCCESS) + goto alloc_err; + + if (IS_VF(p_dev)) { + /* Allocating the entire spq struct although only the + * async_comp callbacks are used by VF. + */ + rc = ecore_spq_alloc(p_hwfn); + if (rc != ECORE_SUCCESS) + return rc; + + continue; + } + + /* ecore_iov_alloc must be called before ecore_cxt_set_pf_params()*/ + rc = ecore_iov_alloc(p_hwfn); + if (rc) + goto alloc_err; + /* First allocate the context manager structure */ rc = ecore_cxt_mngr_alloc(p_hwfn); if (rc) @@ -2392,7 +3416,8 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev) /* Set the HW cid/tid numbers (in the context manager) * Must be done prior to any further computations. */ - rc = ecore_cxt_set_pf_params(p_hwfn); + ecore_cxt_calculate_tasks(p_hwfn, &rdma_tasks, ð_tasks, 0); + rc = ecore_cxt_set_pf_params(p_hwfn, rdma_tasks, eth_tasks); if (rc) goto alloc_err; @@ -2404,9 +3429,70 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev) ecore_init_qm_info(p_hwfn); /* Compute the ILT client partition */ - rc = ecore_cxt_cfg_ilt_compute(p_hwfn); - if (rc) - goto alloc_err; + rc = ecore_cxt_cfg_ilt_compute(p_hwfn, &line_count); + if (rc) { + u32 ilt_page_size_kb = + ecore_cxt_get_ilt_page_size(p_hwfn) >> 10; + + DP_NOTICE(p_hwfn, false, + "Requested %u lines but only %u are available. line size is %u KB; 0x%x rdma tasks 0x%x eth tasks; VF RDMA %s\n", + line_count, RESC_NUM(p_hwfn, ECORE_ILT), + ilt_page_size_kb, rdma_tasks, eth_tasks, + ECORE_IS_VF_RDMA(p_hwfn) ? + "enabled" : "disabled"); + + /* Calculate the number of tasks need to be reduced + * in order to have a successful ILT computation. + */ + excess_tasks = ecore_cxt_cfg_ilt_compute_excess(p_hwfn, line_count); + if (!excess_tasks) + goto alloc_err; + + rc = ecore_cxt_calculate_tasks(p_hwfn, &rdma_tasks, + ð_tasks, + excess_tasks); + if (!rc) { + DP_NOTICE(p_hwfn, false, + "Re-computing after reducing tasks (0x%x rdma tasks, 0x%x eth tasks)\n", + rdma_tasks, eth_tasks); + rc = ecore_cxt_set_pf_params(p_hwfn, rdma_tasks, + eth_tasks); + if (rc) + goto alloc_err; + + rc = ecore_cxt_cfg_ilt_compute(p_hwfn, + &line_count); + if (rc) + DP_NOTICE(p_hwfn, false, + "Requested %u lines but only %u are available.\n", + line_count, + RESC_NUM(p_hwfn, ECORE_ILT)); + } + + if (rc && ECORE_IS_VF_RDMA(p_hwfn)) { + DP_NOTICE(p_hwfn, false, + "Re-computing after disabling VF RDMA\n"); + p_hwfn->pf_iov_info->rdma_enable = false; + + /* After disabling VF RDMA, we must call + * ecore_cxt_set_pf_params(), in order to + * recalculate the VF cids/tids amount. 
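+ * The ILT is then recomputed against the reduced footprint.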
+ */ + rc = ecore_cxt_set_pf_params(p_hwfn, rdma_tasks, + eth_tasks); + if (rc) + goto alloc_err; + + rc = ecore_cxt_cfg_ilt_compute(p_hwfn, &line_count); + } + + if (rc) { + DP_ERR(p_hwfn, + "ILT compute failed - requested %u lines but only %u are available. Need to increase ILT line size (current size is %u KB)\n", + line_count, RESC_NUM(p_hwfn, ECORE_ILT), ilt_page_size_kb); + goto alloc_err; + } + } /* CID map / ILT shadow table / T2 * The table sizes are determined by the computations above @@ -2428,62 +3514,12 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev) if (rc) goto alloc_err; - rc = ecore_iov_alloc(p_hwfn); - if (rc) - goto alloc_err; - /* EQ */ n_eqes = ecore_chain_get_capacity(&p_hwfn->p_spq->chain); - if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) { - /* Calculate the EQ size - * --------------------- - * Each ICID may generate up to one event at a time i.e. - * the event must be handled/cleared before a new one - * can be generated. We calculate the sum of events per - * protocol and create an EQ deep enough to handle the - * worst case: - * - Core - according to SPQ. - * - RoCE - per QP there are a couple of ICIDs, one - * responder and one requester, each can - * generate an EQE => n_eqes_qp = 2 * n_qp. - * Each CQ can generate an EQE. There are 2 CQs - * per QP => n_eqes_cq = 2 * n_qp. - * Hence the RoCE total is 4 * n_qp or - * 2 * num_cons. - * - ENet - There can be up to two events per VF. One - * for VF-PF channel and another for VF FLR - * initial cleanup. The number of VFs is - * bounded by MAX_NUM_VFS_BB, and is much - * smaller than RoCE's so we avoid exact - * calculation. - */ - if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) { - num_cons = - ecore_cxt_get_proto_cid_count( - p_hwfn, - PROTOCOLID_ROCE, - OSAL_NULL); - num_cons *= 2; - } else { - num_cons = ecore_cxt_get_proto_cid_count( - p_hwfn, - PROTOCOLID_IWARP, - OSAL_NULL); - } - n_eqes += num_cons + 2 * MAX_NUM_VFS_BB; - } else if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) { - num_cons = - ecore_cxt_get_proto_cid_count(p_hwfn, - PROTOCOLID_ISCSI, - OSAL_NULL); - n_eqes += 2 * num_cons; - } - - if (n_eqes > 0xFFFF) { - DP_ERR(p_hwfn, "Cannot allocate 0x%x EQ elements."
- "The maximum of a u16 chain is 0x%x\n", - n_eqes, 0xFFFF); - goto alloc_no_mem; + if (n_eqes > ECORE_EQ_MAX_ELEMENTS) { + DP_INFO(p_hwfn, "EQs maxing out at 0x%x elements\n", + ECORE_EQ_MAX_ELEMENTS); + n_eqes = ECORE_EQ_MAX_ELEMENTS; } rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes); @@ -2494,14 +3530,11 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev) if (rc) goto alloc_err; - rc = ecore_l2_alloc(p_hwfn); - if (rc != ECORE_SUCCESS) - goto alloc_err; - /* DMA info initialization */ rc = ecore_dmae_info_alloc(p_hwfn); if (rc) { - DP_NOTICE(p_hwfn, false, "Failed to allocate memory for dmae_info structure\n"); + DP_NOTICE(p_hwfn, false, + "Failed to allocate memory for dmae_info structure\n"); goto alloc_err; } @@ -2513,36 +3546,28 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev) goto alloc_err; } - debug_status = OSAL_DBG_ALLOC_USER_DATA(p_hwfn, - &p_hwfn->dbg_user_info); - if (debug_status) { + rc = OSAL_DBG_ALLOC_USER_DATA(p_hwfn, &p_hwfn->dbg_user_info); + if (rc) { DP_NOTICE(p_hwfn, false, "Failed to allocate dbg user info structure\n"); - rc = (enum _ecore_status_t)debug_status; goto alloc_err; } + } /* hwfn loop */ - debug_status = OSAL_DBG_ALLOC_USER_DATA(p_hwfn, - &p_hwfn->dbg_user_info); - if (debug_status) { - DP_NOTICE(p_hwfn, false, - "Failed to allocate dbg user info structure\n"); - rc = (enum _ecore_status_t)debug_status; + if (IS_PF(p_dev)) { + rc = ecore_llh_alloc(p_dev); + if (rc != ECORE_SUCCESS) { + DP_NOTICE(p_dev, false, + "Failed to allocate memory for the llh_info structure\n"); goto alloc_err; } - } /* hwfn loop */ - - rc = ecore_llh_alloc(p_dev); - if (rc != ECORE_SUCCESS) { - DP_NOTICE(p_dev, true, - "Failed to allocate memory for the llh_info structure\n"); - goto alloc_err; } p_dev->reset_stats = OSAL_ZALLOC(p_dev, GFP_KERNEL, sizeof(*p_dev->reset_stats)); if (!p_dev->reset_stats) { - DP_NOTICE(p_dev, false, "Failed to allocate reset statistics\n"); + DP_NOTICE(p_dev, false, + "Failed to allocate reset statistics\n"); goto alloc_no_mem; } @@ -2560,8 +3585,10 @@ void ecore_resc_setup(struct ecore_dev *p_dev) int i; if (IS_VF(p_dev)) { - for_each_hwfn(p_dev, i) + for_each_hwfn(p_dev, i) { ecore_l2_setup(&p_dev->hwfns[i]); + } + return; } @@ -2592,36 +3619,37 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 id, bool is_vf) { - u32 command = 0, addr, count = FINAL_CLEANUP_POLL_CNT; + u32 count = FINAL_CLEANUP_POLL_CNT, poll_time = FINAL_CLEANUP_POLL_TIME; + u32 command = 0, addr; enum _ecore_status_t rc = ECORE_TIMEOUT; #ifndef ASIC_ONLY if (CHIP_REV_IS_TEDIBEAR(p_hwfn->p_dev) || - CHIP_REV_IS_SLOW(p_hwfn->p_dev)) { + (ECORE_IS_E4(p_hwfn->p_dev) && CHIP_REV_IS_SLOW(p_hwfn->p_dev))) { DP_INFO(p_hwfn, "Skipping final cleanup for non-ASIC\n"); return ECORE_SUCCESS; } + + if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) + poll_time *= 10; #endif addr = GTT_BAR0_MAP_REG_USDM_RAM + - USTORM_FLR_FINAL_ACK_OFFSET(p_hwfn->rel_pf_id); + USTORM_FLR_FINAL_ACK_OFFSET(p_hwfn->rel_pf_id); if (is_vf) id += 0x10; - command |= X_FINAL_CLEANUP_AGG_INT << - SDM_AGG_INT_COMP_PARAMS_AGG_INT_INDEX_SHIFT; - command |= 1 << SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE_SHIFT; - command |= id << SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT_SHIFT; - command |= SDM_COMP_TYPE_AGG_INT << SDM_OP_GEN_COMP_TYPE_SHIFT; - -/* Make sure notification is not set before initiating final cleanup */ + SET_FIELD(command, SDM_AGG_INT_COMP_PARAMS_AGG_INT_INDEX, + X_FINAL_CLEANUP_AGG_INT); + SET_FIELD(command, SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE, 
1); + SET_FIELD(command, SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT, id); + SET_FIELD(command, SDM_OP_GEN_COMP_TYPE, SDM_COMP_TYPE_AGG_INT); + /* Make sure notification is not set before initiating final cleanup */ if (REG_RD(p_hwfn, addr)) { DP_NOTICE(p_hwfn, false, - "Unexpected; Found final cleanup notification"); - DP_NOTICE(p_hwfn, false, - " before initiating final cleanup\n"); + "Unexpected; Found final cleanup notification before initiating final cleanup\n"); REG_WR(p_hwfn, addr, 0); } @@ -2633,13 +3661,12 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn, /* Poll until completion */ while (!REG_RD(p_hwfn, addr) && count--) - OSAL_MSLEEP(FINAL_CLEANUP_POLL_TIME); + OSAL_MSLEEP(poll_time); if (REG_RD(p_hwfn, addr)) rc = ECORE_SUCCESS; else - DP_NOTICE(p_hwfn, true, - "Failed to receive FW final cleanup notification\n"); + DP_NOTICE(p_hwfn, true, "Failed to receive FW final cleanup notification\n"); /* Cleanup afterwards */ REG_WR(p_hwfn, addr, 0); @@ -2655,13 +3682,15 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn) hw_mode |= 1 << MODE_BB; } else if (ECORE_IS_AH(p_hwfn->p_dev)) { hw_mode |= 1 << MODE_K2; + } else if (ECORE_IS_E5(p_hwfn->p_dev)) { + hw_mode |= 1 << MODE_E5; } else { DP_NOTICE(p_hwfn, true, "Unknown chip type %#x\n", p_hwfn->p_dev->type); return ECORE_INVAL; } - /* Ports per engine is based on the values in CNIG_REG_NW_PORT_MODE */ + /* Ports per engine is based on the values in CNIG_REG_NW_PORT_MODE*/ switch (p_hwfn->p_dev->num_ports_in_engine) { case 1: hw_mode |= 1 << MODE_PORTS_PER_ENG_1; @@ -2673,13 +3702,12 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn) hw_mode |= 1 << MODE_PORTS_PER_ENG_4; break; default: - DP_NOTICE(p_hwfn, true, - "num_ports_in_engine = %d not supported\n", + DP_NOTICE(p_hwfn, true, "num_ports_in_engine = %d not supported\n", p_hwfn->p_dev->num_ports_in_engine); return ECORE_INVAL; } - if (OSAL_GET_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits)) + if (OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits)) hw_mode |= 1 << MODE_MF_SD; else hw_mode |= 1 << MODE_MF_SI; @@ -2711,8 +3739,6 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn) } #ifndef ASIC_ONLY -#define PCI_EXP_DEVCTL_PAYLOAD 0x00e0 - /* MFW-replacement initializations for emulation */ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev, struct ecore_ptt *p_ptt) @@ -2728,17 +3754,23 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev, return ECORE_INVAL; } - pl_hv = ECORE_IS_BB(p_dev) ? 
0x1 : 0x401; + if (ECORE_IS_BB(p_dev)) + pl_hv = 0x1; + else if (ECORE_IS_AH(p_dev)) + pl_hv = 0x401; + else /* E5 */ + pl_hv = 0x4601; ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv); - if (ECORE_IS_AH(p_dev)) - ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5, 0x3ffffff); + if (ECORE_IS_AH(p_dev) || ECORE_IS_E5(p_dev)) + ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5, + 0x3ffffff); /* Initialize port mode to 4x10G_E (10G with 4x10 SERDES) */ if (ECORE_IS_BB(p_dev)) ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4); - if (ECORE_IS_AH(p_dev)) { + if (ECORE_IS_AH(p_dev) || ECORE_IS_E5(p_dev)) { /* 2 for 4-port, 1 for 2-port, 0 for 1-port */ ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE, p_dev->num_ports_in_engine >> 1); @@ -2778,6 +3810,22 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev, */ ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_PRTY_STS_WR_H_0, 0x8); + if (ECORE_IS_E5(p_dev)) { + /* Clock enable for CSCLK and PCE_CLK */ + ecore_wr(p_hwfn, p_ptt, + PGL2PEM_REG_PEM_NATIVE_E5 + PEM_REG_PEM_CLK_EN_E5, + 0x0); + + /* Allow the traffic to be sent out the PCIe link */ + ecore_wr(p_hwfn, p_ptt, + PGL2PEM_REG_PEM_NATIVE_E5 + PEM_REG_PEM_DIS_PORT_E5, + 0x1); + + /* Enable zone_b access of VFs to SDM */ + ecore_wr(p_hwfn, p_ptt, + PGLUE_B_REG_DISABLE_ZONE_B_ACCESS_OF_VF_E5, 0x0); + } + /* Configure PSWRQ2_REG_WR_MBS0 according to the MaxPayloadSize field in * the PCI configuration space. The value is common for all PFs, so it * is okay to do it according to the first loading PF. @@ -2796,6 +3844,18 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev, /* Configure the PGLUE_B to discard mode */ ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_DISCARD_NBLOCK, 0x3f); + /* Workaround for a HW bug in E5 to allow VFs of PFs > 0 */ + if (ECORE_IS_E5(p_dev)) { + u32 addr, timer_ctl; + + addr = PGLCS_REG_PGL_CS + PCIEIP_REG_PCIEEP_TIMER_CTL_E5; + timer_ctl = ecore_rd(p_hwfn, p_ptt, addr); + timer_ctl = (timer_ctl & + ~PCIEIP_REG_PCIEEP_TIMER_CTL_MFUNCN_E5) | + (0xf & PCIEIP_REG_PCIEEP_TIMER_CTL_MFUNCN_E5); + ecore_wr(p_hwfn, p_ptt, addr, timer_ctl); + } + return ECORE_SUCCESS; } #endif @@ -2827,7 +3887,8 @@ static void ecore_init_cau_rt_data(struct ecore_dev *p_dev) continue; ecore_init_cau_sb_entry(p_hwfn, &sb_entry, - p_block->function_id, 0, 0); + p_block->function_id, + 0, 0); STORE_RT_REG_AGG(p_hwfn, offset + igu_sb_id * 2, sb_entry); } @@ -2898,7 +3959,7 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn, u8 vf_id, max_num_vfs; u16 num_pfs, pf_id; u32 concrete_fid; - enum _ecore_status_t rc = ECORE_SUCCESS; + enum _ecore_status_t rc = ECORE_SUCCESS; ecore_init_cau_rt_data(p_dev); @@ -2975,6 +4036,26 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn, /* pretend to original PF */ ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id); +#ifndef ASIC_ONLY + /* Clear the FIRST_VF register for all PFs */ + if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && + ECORE_IS_E5(p_hwfn->p_dev) && + IS_PF_SRIOV(p_hwfn)) { + u8 pf_id; + + for (pf_id = 0; pf_id < MAX_NUM_PFS_E5; pf_id++) { + /* pretend to the relevant PF */ + ecore_fid_pretend(p_hwfn, p_ptt, + FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, pf_id)); + ecore_wr(p_hwfn, p_ptt, PGLCS_REG_FIRST_VF_K2_E5, 0); + } + + /* pretend to the original PF */ + ecore_fid_pretend(p_hwfn, p_ptt, + FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, p_hwfn->rel_pf_id)); + } +#endif + return rc; } @@ -2984,9 +4065,12 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn 
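
The E5 timer_ctl workaround above is a plain read-modify-write: read the register, clear the MFUNCN field, then write 0xf into it. The same idiom in isolation, with the register access stubbed as a variable and the mask value invented (the real mask comes from the PCIEIP register definitions):

#include <stdint.h>
#include <assert.h>

#define DEMO_TIMER_CTL_MFUNCN_MASK	0x0000000fu /* hypothetical mask */

static uint32_t demo_rmw(uint32_t reg_val, uint32_t mask, uint32_t field_val)
{
	/* Keep every bit outside the field, replace the field itself. */
	return (reg_val & ~mask) | (field_val & mask);
}

int main(void)
{
	uint32_t timer_ctl = 0xdeadbe07u;

	timer_ctl = demo_rmw(timer_ctl, DEMO_TIMER_CTL_MFUNCN_MASK, 0xf);
	assert(timer_ctl == 0xdeadbe0fu);
	return 0;
}
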
*p_hwfn, #define PMEG_IF_BYTE_COUNT 8 -static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u32 addr, u64 data, u8 reg_type, u8 port) +static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u32 addr, + u64 data, + u8 reg_type, + u8 port) { DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "CMD: %08x, ADDR: 0x%08x, DATA: %08x:%08x\n", @@ -2998,7 +4082,8 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn, ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB, (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) & - 0xffff00fe) | (8 << PMEG_IF_BYTE_COUNT)); + 0xffff00fe) | + (8 << PMEG_IF_BYTE_COUNT)); ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB, (reg_type << 25) | (addr << 8) | port); ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB, data & 0xffffffff); @@ -3023,36 +4108,34 @@ static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn, { u8 loopback = 0, port = p_hwfn->port_id * 2; - /* XLPORT MAC MODE *//* 0 Quad, 4 Single... */ - ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1, - port); + ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, + (0x4 << 4) | 0x4, 1, port); /* XLPORT MAC MODE */ /* 0 Quad, 4 Single... */ ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MAC_CONTROL, 0, 1, port); - /* XLMAC: SOFT RESET */ - ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x40, 0, port); - /* XLMAC: Port Speed >= 10Gbps */ - ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_MODE, 0x40, 0, port); - /* XLMAC: Max Size */ - ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_RX_MAX_SIZE, 0x3fff, 0, port); + ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, + 0x40, 0, port); /*XLMAC: SOFT RESET */ + ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_MODE, + 0x40, 0, port); /*XLMAC: Port Speed >= 10Gbps */ + ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_RX_MAX_SIZE, + 0x3fff, 0, port); /* XLMAC: Max Size */ ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_TX_CTRL, 0x01000000800ULL | (0xa << 12) | ((u64)1 << 38), 0, port); - ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_PAUSE_CTRL, 0x7c000, 0, port); + ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_PAUSE_CTRL, + 0x7c000, 0, port); ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_PFC_CTRL, 0x30ffffc000ULL, 0, port); - ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x3 | (loopback << 2), 0, - port); /* XLMAC: TX_EN, RX_EN */ - /* XLMAC: TX_EN, RX_EN, SW_LINK_STATUS */ - ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, - 0x1003 | (loopback << 2), 0, port); - /* Enabled Parallel PFC interface */ - ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_FLOW_CONTROL_CONFIG, 1, 0, port); - - /* XLPORT port enable */ - ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port); + ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x3 | (loopback << 2), + 0, port); /* XLMAC: TX_EN, RX_EN */ + ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x1003 | (loopback << 2), + 0, port); /* XLMAC: TX_EN, RX_EN, SW_LINK_STATUS */ + ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_FLOW_CONTROL_CONFIG, + 1, 0, port); /* Enabled Parallel PFC interface */ + ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, + 0xf, 1, port); /* XLPORT port enable */ } static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) + struct ecore_ptt *p_ptt) { u32 mac_base, mac_config_val = 0xa853; u8 port = p_hwfn->port_id; @@ -3089,6 +4172,187 @@ static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn, mac_config_val); } +static struct { + u32 offset; + u32 value; +} ecore_e5_emul_mac_init[] = { + {RCE_4CH_REG_MAC_RS_TX_0_MAC_TX_MAX_PKT_LEN_E5, 0x3fff}, + {RCE_4CH_REG_MAC_RS_TX_1_MAC_TX_MAX_PKT_LEN_E5, 0x3fff}, + 
{RCE_4CH_REG_MAC_RS_TX_2_MAC_TX_MAX_PKT_LEN_E5, 0x3fff}, + {RCE_4CH_REG_MAC_RS_TX_3_MAC_TX_MAX_PKT_LEN_E5, 0x3fff}, + {RCE_4CH_REG_MAC_RS_RX_0_RX_MAX_PKT_LEN_E5, 0x3fff}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_MAX_PKT_LEN_E5, 0x3fff}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_MAX_PKT_LEN_E5, 0x3fff}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_MAX_PKT_LEN_E5, 0x3fff}, + {RCE_4CH_REG_APP_FIFO_0_TX_SOF_THRESH_E5, 0x3}, + {RCE_4CH_REG_MAC_RS_TX_0_TX_MAC_CFG_E5, 0x68094e5}, + {RCE_4CH_REG_MAC_RS_TX_0_STN_ADD_47_32_E5, 0xa868}, + {RCE_4CH_REG_MAC_RS_TX_0_STN_ADD_31_0_E5, 0xaf88776f}, + {RCE_4CH_REG_MAC_RS_TX_0_TX_MAC_FC_CONFIG0_E5, 0x7f97}, + {RCE_4CH_REG_MAC_RS_TX_0_TX_MAC_FC_CONFIG2_E5, 0x11}, + {RCE_4CH_REG_MAC_RS_TX_0_TX_START_PRE_SMD_E5, 0xb37f4cd5}, + {RCE_4CH_REG_MAC_RS_TX_0_RS_CTRL_E5, 0x8}, + {RCE_4CH_REG_MAC_RS_TX_0_TX_IS_FIFO_THRESH_E5, 0x83}, + {RCE_4CH_REG_MAC_RS_RX_0_RX_MAC_CFG_E5, 0x15d5}, + {RCE_4CH_REG_MAC_RS_RX_0_RX_MAC_FC_CONFIG_E5, 0xff08}, + {RCE_4CH_REG_MAC_RS_RX_0_RS_CTRL_E5, 0x4}, + {RCE_4CH_REG_PCS_RX_0_RX_PCS_CFG_E5, 0x1808a03}, + {RCE_4CH_REG_PCS_RX_0_RX_PCS_CFG2_E5, 0x2060c202}, + {RCE_4CH_REG_PCS_RX_0_RX_PCS_ALIGN_CFG_E5, 0xa00}, + {RCE_4CH_REG_PCS_TX_0_TX_PCS_CFG_E5, 0x8a03}, + {RCE_4CH_REG_PCS_TX_0_TX_PCS_CFG2_E5, 0x5a0610}, + {RCE_4CH_REG_PCS_TX_0_TX_PCS_CTRL_ALIGN_PERIOD_E5, 0xa00}, + {RCE_4CH_REG_PCS_TX_0_TX_PCS_CTRL_ENCODER_E5, 0xc00000}, + {RCE_4CH_REG_MAC_RS_RX_0_RX_EXPRESS_SMD0_3_E5, 0xffffffd5}, + {RCE_4CH_REG_APP_FIFO_1_TX_SOF_THRESH_E5, 0x3}, + {RCE_4CH_REG_MAC_RS_TX_1_TX_MAC_CFG_E5, 0x68494e5}, + {RCE_4CH_REG_MAC_RS_TX_1_STN_ADD_47_32_E5, 0x44b2}, + {RCE_4CH_REG_MAC_RS_TX_1_STN_ADD_31_0_E5, 0x20123eee}, + {RCE_4CH_REG_MAC_RS_TX_1_TX_MAC_FC_CONFIG0_E5, 0x7f97}, + {RCE_4CH_REG_MAC_RS_TX_1_TX_MAC_FC_CONFIG2_E5, 0x11}, + {RCE_4CH_REG_MAC_RS_TX_1_TX_START_PRE_SMD_E5, 0xb37f4cd5}, + {RCE_4CH_REG_MAC_RS_TX_1_RS_CTRL_E5, 0x8}, + {RCE_4CH_REG_MAC_RS_TX_1_TX_IS_FIFO_THRESH_E5, 0x83}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_MAC_CFG_E5, 0x415d5}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_MAC_FC_CONFIG_E5, 0xff08}, + {RCE_4CH_REG_MAC_RS_RX_1_RS_CTRL_E5, 0x4}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_START_PRE_SMD_VC0_E5, 0xffffffff}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_START_PRE_SMD_VC1_E5, 0xb37f4ce6}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_CONT_FRAG_SMD_VC0_E5, 0xffffffff}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_CONT_FRAG_SMD_VC1_E5, 0x2a9e5261}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_EXP_VER_SMD_VC0_3_E5, 0xffff07ff}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_EXP_RES_SMD_VC0_3_E5, 0xffff19ff}, + {RCE_4CH_REG_PCS_RX_1_RX_PCS_CFG_E5, 0x1808a03}, + {RCE_4CH_REG_PCS_RX_1_RX_PCS_CFG2_E5, 0x2060c202}, + {RCE_4CH_REG_PCS_RX_1_RX_PCS_ALIGN_CFG_E5, 0xa00}, + {RCE_4CH_REG_PCS_TX_1_TX_PCS_CFG_E5, 0x8a03}, + {RCE_4CH_REG_PCS_TX_1_TX_PCS_CFG2_E5, 0x5a0610}, + {RCE_4CH_REG_PCS_TX_1_TX_PCS_CTRL_ALIGN_PERIOD_E5, 0xa00}, + {RCE_4CH_REG_PCS_TX_1_TX_PCS_CTRL_ENCODER_E5, 0xc00000}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_EXPRESS_SMD0_3_E5, 0xffffd5ff}, + {RCE_4CH_REG_APP_FIFO_2_TX_SOF_THRESH_E5, 0x3}, + {RCE_4CH_REG_MAC_RS_TX_2_TX_MAC_CFG_E5, 0x68894e5}, + {RCE_4CH_REG_MAC_RS_TX_2_STN_ADD_47_32_E5, 0xf1c0}, + {RCE_4CH_REG_MAC_RS_TX_2_STN_ADD_31_0_E5, 0xabc40c51}, + {RCE_4CH_REG_MAC_RS_TX_2_TX_MAC_FC_CONFIG0_E5, 0x7f97}, + {RCE_4CH_REG_MAC_RS_TX_2_TX_MAC_FC_CONFIG2_E5, 0x15}, + {RCE_4CH_REG_MAC_RS_TX_2_TX_START_PRE_SMD_E5, 0xb37f4cd5}, + {RCE_4CH_REG_MAC_RS_TX_2_RS_CTRL_E5, 0x8}, + {RCE_4CH_REG_MAC_RS_TX_2_TX_IS_FIFO_THRESH_E5, 0x83}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_MAC_CFG_E5, 0x815d5}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_MAC_FC_CONFIG_E5, 0xff08}, + {RCE_4CH_REG_MAC_RS_RX_2_RS_CTRL_E5, 0x4}, + 
{RCE_4CH_REG_MAC_RS_RX_2_RX_START_PRE_SMD_VC0_E5, 0xffffffff}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_START_PRE_SMD_VC2_E5, 0xb37f4ce6}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_CONT_FRAG_SMD_VC0_E5, 0xffffffff}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_CONT_FRAG_SMD_VC2_E5, 0x2a9e5261}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_EXP_VER_SMD_VC0_3_E5, 0xff07ffff}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_EXP_RES_SMD_VC0_3_E5, 0xff19ffff}, + {RCE_4CH_REG_PCS_RX_2_RX_PCS_CFG_E5, 0x1808a03}, + {RCE_4CH_REG_PCS_RX_2_RX_PCS_CFG2_E5, 0x2060c202}, + {RCE_4CH_REG_PCS_RX_2_RX_PCS_ALIGN_CFG_E5, 0xa00}, + {RCE_4CH_REG_PCS_TX_2_TX_PCS_CFG_E5, 0x8a03}, + {RCE_4CH_REG_PCS_TX_2_TX_PCS_CFG2_E5, 0x5a0610}, + {RCE_4CH_REG_PCS_TX_2_TX_PCS_CTRL_ALIGN_PERIOD_E5, 0xa00}, + {RCE_4CH_REG_PCS_TX_2_TX_PCS_CTRL_ENCODER_E5, 0xc00000}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_EXPRESS_SMD0_3_E5, 0xffd5ffff}, + {RCE_4CH_REG_APP_FIFO_3_TX_SOF_THRESH_E5, 0x3}, + {RCE_4CH_REG_MAC_RS_TX_3_TX_MAC_CFG_E5, 0x68c94e5}, + {RCE_4CH_REG_MAC_RS_TX_3_STN_ADD_47_32_E5, 0xa235}, + {RCE_4CH_REG_MAC_RS_TX_3_STN_ADD_31_0_E5, 0x23650a9d}, + {RCE_4CH_REG_MAC_RS_TX_3_TX_MAC_FC_CONFIG0_E5, 0x7f97}, + {RCE_4CH_REG_MAC_RS_TX_3_TX_MAC_FC_CONFIG2_E5, 0x1f}, + {RCE_4CH_REG_MAC_RS_TX_3_TX_START_PRE_SMD_E5, 0xb37f4cd5}, + {RCE_4CH_REG_MAC_RS_TX_3_RS_CTRL_E5, 0x8}, + {RCE_4CH_REG_MAC_RS_TX_3_TX_IS_FIFO_THRESH_E5, 0x83}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_MAC_CFG_E5, 0xc15d5}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_MAC_FC_CONFIG_E5, 0xff08}, + {RCE_4CH_REG_MAC_RS_RX_3_RS_CTRL_E5, 0x4}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_START_PRE_SMD_VC0_E5, 0xffffffff}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_START_PRE_SMD_VC3_E5, 0xb37f4ce6}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_CONT_FRAG_SMD_VC0_E5, 0xffffffff}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_CONT_FRAG_SMD_VC3_E5, 0x2a9e5261}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_EXP_VER_SMD_VC0_3_E5, 0x7ffffff}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_EXP_RES_SMD_VC0_3_E5, 0x19ffffff}, + {RCE_4CH_REG_PCS_RX_3_RX_PCS_CFG_E5, 0x1808a03}, + {RCE_4CH_REG_PCS_RX_3_RX_PCS_CFG2_E5, 0x2060c202}, + {RCE_4CH_REG_PCS_RX_3_RX_PCS_ALIGN_CFG_E5, 0xa00}, + {RCE_4CH_REG_PCS_TX_3_TX_PCS_CFG_E5, 0x8a03}, + {RCE_4CH_REG_PCS_TX_3_TX_PCS_CFG2_E5, 0x5a0610}, + {RCE_4CH_REG_PCS_TX_3_TX_PCS_CTRL_ALIGN_PERIOD_E5, 0xa00}, + {RCE_4CH_REG_PCS_TX_3_TX_PCS_CTRL_ENCODER_E5, 0xc00000}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_EXPRESS_SMD0_3_E5, 0xd5ffffff}, + {RCE_4CH_REG_MAC_TOP_TS_TO_PC_MAP_E5, 0x650}, + {RCE_4CH_REG_PCS_TOP_PCS_SEL_MAP_E5, 0xb6d249}, + {RCE_4CH_REG_PCS_TOP_PMA_TX_FIFO_CFG3_E5, 0x5294a}, + {RCE_4CH_REG_PCS_TOP_RX_PMA_SEL_MAP1_E5, 0x18022}, + {RCE_4CH_REG_PCS_TOP_TX_PMA_SEL_MAP1_E5, 0x443}, + {RCE_4CH_REG_MAC_RS_TX_0_TX_MAC_CFG_E5, 0x68094e6}, + {RCE_4CH_REG_MAC_RS_RX_0_RX_MAC_CFG_E5, 0x15d6}, + {RCE_4CH_REG_PCS_RX_0_RX_PCS_CFG_E5, 0x1808a02}, + {RCE_4CH_REG_PCS_TX_0_TX_PCS_CFG_E5, 0x8a02}, + {RCE_4CH_REG_MAC_RS_TX_1_TX_MAC_CFG_E5, 0x68494e6}, + {RCE_4CH_REG_MAC_RS_RX_1_RX_MAC_CFG_E5, 0x415d6}, + {RCE_4CH_REG_PCS_RX_1_RX_PCS_CFG_E5, 0x1808a02}, + {RCE_4CH_REG_PCS_TX_1_TX_PCS_CFG_E5, 0x8a02}, + {RCE_4CH_REG_MAC_RS_TX_2_TX_MAC_CFG_E5, 0x68894e6}, + {RCE_4CH_REG_MAC_RS_RX_2_RX_MAC_CFG_E5, 0x815d6}, + {RCE_4CH_REG_PCS_RX_2_RX_PCS_CFG_E5, 0x1808a02}, + {RCE_4CH_REG_PCS_TX_2_TX_PCS_CFG_E5, 0x8a02}, + {RCE_4CH_REG_MAC_RS_TX_3_TX_MAC_CFG_E5, 0x68c94e6}, + {RCE_4CH_REG_MAC_RS_RX_3_RX_MAC_CFG_E5, 0xc15d6}, + {RCE_4CH_REG_PCS_RX_3_RX_PCS_CFG_E5, 0x1808a02}, + {RCE_4CH_REG_PCS_TX_3_TX_PCS_CFG_E5, 0x8a02}, + {RCE_4CH_REG_PCS_TOP_PCS_RESET_E5, 0xd}, + {RCE_4CH_REG_PCS_TOP_PCS_RESET_E5, 0x5}, + {RCE_4CH_REG_PCS_TOP_PCS_RESET_E5, 0x4}, + {RCE_4CH_REG_PCS_TOP_PCS_RESET_E5, 0x0}, + 
{RCE_4CH_REG_PCS_TOP_PMA_RX_FIFO_CFG3_E5, 0xf0}, + {RCE_4CH_REG_PCS_TOP_PMA_TX_FIFO_CFG5_E5, 0xf0}, + {RCE_4CH_REG_APP_FIFO_0_APP_FIFO_CFG_E5, 0x43c}, + {RCE_4CH_REG_APP_FIFO_1_APP_FIFO_CFG_E5, 0x43c}, + {RCE_4CH_REG_APP_FIFO_2_APP_FIFO_CFG_E5, 0x43c}, + {RCE_4CH_REG_APP_FIFO_3_APP_FIFO_CFG_E5, 0x43c}, +}; + +static void ecore_emul_link_init_e5(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + u32 val; + + /* Init the MAC if exists and disable external loopback. + * Without MAC - configure a loopback in the NIG. + */ + if (p_hwfn->p_dev->b_is_emul_mac) { + u32 i, base = NWM_REG_MAC_E5; + osal_size_t size; + + DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, + "E5 emulation: Initializing the MAC\n"); + + size = sizeof(ecore_e5_emul_mac_init) / + sizeof(ecore_e5_emul_mac_init[0]); + for (i = 0; i < size; i++) + ecore_wr(p_hwfn, p_ptt, + base + ecore_e5_emul_mac_init[i].offset, + ecore_e5_emul_mac_init[i].value); + + /* MISCS_REG_ECO_RESERVED[31:30]: enable/disable ext loopback */ + val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED); + val &= ~0xc0000000; + ecore_wr(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED, val); + } else { + u8 port_mode = 0x2; + + /* NIG_REG_ECO_RESERVED[1:0]: NIG loopback. + * '2' - ports "0 <-> 1" and "2 <-> 3". + */ + val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ECO_RESERVED); + val = (val & ~0x3) | port_mode; + ecore_wr(p_hwfn, p_ptt, NIG_REG_ECO_RESERVED, val); + } +} + static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { @@ -3100,8 +4364,10 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn, if (ECORE_IS_BB(p_hwfn->p_dev)) ecore_emul_link_init_bb(p_hwfn, p_ptt); - else + else if (ECORE_IS_AH(p_hwfn->p_dev)) ecore_emul_link_init_ah(p_hwfn, p_ptt); + else /* E5 */ + ecore_emul_link_init_e5(p_hwfn, p_ptt); return; } @@ -3110,15 +4376,15 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 port) { int port_offset = port ? 0x800 : 0; - u32 xmac_rxctrl = 0; + u32 xmac_rxctrl = 0; /* Reset of XMAC */ /* FIXME: move to common start */ ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + 2 * sizeof(u32), - MISC_REG_RESET_REG_2_XMAC_BIT); /* Clear */ + MISC_REG_RESET_REG_2_XMAC_BIT); /* Clear */ OSAL_MSLEEP(1); ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + sizeof(u32), - MISC_REG_RESET_REG_2_XMAC_BIT); /* Set */ + MISC_REG_RESET_REG_2_XMAC_BIT); /* Set */ ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE_BB, 1); @@ -3152,28 +4418,45 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn, } #endif -static u32 ecore_hw_norm_region_conn(struct ecore_hwfn *p_hwfn) +static u32 ecore_hw_get_norm_region_conn(struct ecore_hwfn *p_hwfn) { u32 norm_region_conn; - /* The order of CIDs allocation is according to the order of + /* In E4, the order of CIDs allocation is according to the order of * 'enum protocol_type'. Therefore, the number of CIDs for the normal * region is calculated based on the CORE CIDs, in case of non-ETH * personality, and otherwise - based on the ETH CIDs. + * In E5 there is an exception - the CORE CIDs are allocated first. + * Therefore, the calculation should consider every possible + * personality. 
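
The ecore_e5_emul_mac_init table plus the loop in ecore_emul_link_init_e5() below it is a table-driven register bring-up: the data describes the sequence, one loop applies it. A minimal sketch of the pattern, with demo_wr() standing in for ecore_wr() and all offsets, values and the base address invented:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct reg_init {
	uint32_t offset;
	uint32_t value;
};

/* Hypothetical stand-in for ecore_wr(): just logs the access. */
static void demo_wr(uint32_t base, uint32_t offset, uint32_t value)
{
	printf("wr 0x%08x <- 0x%08x\n", (unsigned)(base + offset),
	       (unsigned)value);
}

int main(void)
{
	static const struct reg_init table[] = {
		{ 0x0000, 0x3fff },
		{ 0x0004, 0x0003 },
	};
	uint32_t base = 0x00800000; /* hypothetical MAC block base */
	size_t i;

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		demo_wr(base, table[i].offset, table[i].value);
	return 0;
}

Keeping the sequence as data makes a 100+ register init auditable against the hardware documentation without tracing control flow.
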
*/ norm_region_conn = - ecore_cxt_get_proto_cid_start(p_hwfn, PROTOCOLID_CORE) + + ecore_cxt_get_proto_cid_start(p_hwfn, PROTOCOLID_CORE, + ECORE_CXT_PF_CID) + ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_CORE, OSAL_NULL) + ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, OSAL_NULL); + if (ECORE_IS_E5(p_hwfn->p_dev)) { + norm_region_conn += + ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ISCSI, + OSAL_NULL) + + ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_FCOE, + OSAL_NULL) + + ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ROCE, + OSAL_NULL); + } + return norm_region_conn; } -static enum _ecore_status_t +enum _ecore_status_t ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus) + struct ecore_ptt *p_ptt, + struct ecore_dpi_info *dpi_info, + u32 pwm_region_size, + u32 n_cpus) { u32 dpi_bit_shift, dpi_count, dpi_page_size; u32 min_dpis; @@ -3202,20 +4485,16 @@ ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn, */ n_wids = OSAL_MAX_T(u32, ECORE_MIN_WIDS, n_cpus); dpi_page_size = ECORE_WID_SIZE * OSAL_ROUNDUP_POW_OF_TWO(n_wids); - dpi_page_size = (dpi_page_size + OSAL_PAGE_SIZE - 1) & - ~(OSAL_PAGE_SIZE - 1); + dpi_page_size = (dpi_page_size + OSAL_PAGE_SIZE - 1) & ~(OSAL_PAGE_SIZE - 1); dpi_bit_shift = OSAL_LOG2(dpi_page_size / 4096); dpi_count = pwm_region_size / dpi_page_size; min_dpis = p_hwfn->pf_params.rdma_pf_params.min_dpis; min_dpis = OSAL_MAX_T(u32, ECORE_MIN_DPIS, min_dpis); - /* Update hwfn */ - p_hwfn->dpi_size = dpi_page_size; - p_hwfn->dpi_count = dpi_count; - - /* Update registers */ - ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DPI_BIT_SHIFT, dpi_bit_shift); + dpi_info->dpi_size = dpi_page_size; + dpi_info->dpi_count = dpi_count; + ecore_wr(p_hwfn, p_ptt, dpi_info->dpi_bit_shift_addr, dpi_bit_shift); if (dpi_count < min_dpis) return ECORE_NORESOURCES; @@ -3223,15 +4502,11 @@ ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -enum ECORE_ROCE_EDPM_MODE { - ECORE_ROCE_EDPM_MODE_ENABLE = 0, - ECORE_ROCE_EDPM_MODE_FORCE_ON = 1, - ECORE_ROCE_EDPM_MODE_DISABLE = 2, -}; - -bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn) +bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn, + struct ecore_common_dpm_info *dpm_info) { - if (p_hwfn->dcbx_no_edpm || p_hwfn->db_bar_no_edpm) + if (p_hwfn->dcbx_no_edpm || dpm_info->db_bar_no_edpm || + dpm_info->mfw_no_edpm) return false; return true; @@ -3241,20 +4516,24 @@ static enum _ecore_status_t ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { + struct ecore_common_dpm_info *dpm_info; u32 norm_region_conn, min_addr_reg1; + struct ecore_dpi_info *dpi_info; u32 pwm_regsize, norm_regsize; - u32 db_bar_size, n_cpus; + u32 db_bar_size, n_cpus = 1; u32 roce_edpm_mode; - u32 pf_dems_shift; + u32 dems_shift; enum _ecore_status_t rc = ECORE_SUCCESS; u8 cond; db_bar_size = ecore_hw_bar_size(p_hwfn, p_ptt, BAR_ID_1); + + /* In CMT, doorbell bar should be split over both engines */ if (ECORE_IS_CMT(p_hwfn->p_dev)) db_bar_size /= 2; /* Calculate doorbell regions - * ----------------------------------- + * -------------------------- * The doorbell BAR is made of two regions. The first is called normal * region and the second is called PWM region. In the normal region * each ICID has its own set of addresses so that writing to that @@ -3267,7 +4546,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn, * connections. The DORQ_REG_PF_MIN_ADDR_REG1 register is * in units of 4,096 bytes. 
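
For readers following ecore_hw_init_dpi_size(): the DPI page size is the WID size times the CPU count rounded up to a power of two, then aligned up to the OS page, and the bit shift programmed to the DORQ is log2 of that size in 4 KiB units. The arithmetic alone, under assumed values (256 B WID, 4 KiB page, a minimum of 4 WIDs; the real constants live in the ecore headers):

#include <stdint.h>
#include <stdio.h>

static uint32_t roundup_pow2(uint32_t v)
{
	uint32_t r = 1;

	while (r < v)
		r <<= 1;
	return r;
}

static uint32_t ilog2_u32(uint32_t v)
{
	uint32_t r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	uint32_t wid_size = 256, page_size = 4096;	/* assumed */
	uint32_t n_cpus = 12, pwm_region_size = 0x100000;
	uint32_t n_wids = n_cpus < 4 ? 4 : n_cpus;	/* min WIDs assumed 4 */
	uint32_t dpi_page_size, dpi_bit_shift, dpi_count;

	dpi_page_size = wid_size * roundup_pow2(n_wids);
	dpi_page_size = (dpi_page_size + page_size - 1) & ~(page_size - 1);
	dpi_bit_shift = ilog2_u32(dpi_page_size / 4096);
	dpi_count = pwm_region_size / dpi_page_size;
	/* prints dpi_page_size=4096 shift=0 count=256 */
	printf("dpi_page_size=%u shift=%u count=%u\n",
	       (unsigned)dpi_page_size, (unsigned)dpi_bit_shift,
	       (unsigned)dpi_count);
	return 0;
}
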
*/ - norm_region_conn = ecore_hw_norm_region_conn(p_hwfn); + norm_region_conn = ecore_hw_get_norm_region_conn(p_hwfn); norm_regsize = ROUNDUP(ECORE_PF_DEMS_SIZE * norm_region_conn, OSAL_PAGE_SIZE); min_addr_reg1 = norm_regsize / 4096; @@ -3288,15 +4567,32 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn, return ECORE_NORESOURCES; } - /* Calculate number of DPIs */ - roce_edpm_mode = p_hwfn->pf_params.rdma_pf_params.roce_edpm_mode; + dpi_info = &p_hwfn->dpi_info; + dpm_info = &p_hwfn->dpm_info; + + dpi_info->dpi_bit_shift_addr = DORQ_REG_PF_DPI_BIT_SHIFT; + dpm_info->vf_cfg = false; + + if (p_hwfn->roce_edpm_mode <= ECORE_ROCE_EDPM_MODE_DISABLE) { + roce_edpm_mode = p_hwfn->roce_edpm_mode; + } else { + DP_ERR(p_hwfn->p_dev, + "roce edpm mode was configured to an illegal value of %u. Resetting it to 0-Enable EDPM if BAR size is adequate\n", + p_hwfn->roce_edpm_mode); + + /* Reset this field in order to save this check for a VF */ + p_hwfn->roce_edpm_mode = 0; + roce_edpm_mode = 0; + } + if ((roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE) || ((roce_edpm_mode == ECORE_ROCE_EDPM_MODE_FORCE_ON))) { /* Either EDPM is mandatory, or we are attempting to allocate a * WID per CPU. */ n_cpus = OSAL_NUM_CPUS(); - rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus); + rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, dpi_info, + pwm_regsize, n_cpus); } cond = ((rc != ECORE_SUCCESS) && @@ -3308,38 +4604,55 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn, * allocated a WID per CPU. */ n_cpus = 1; - rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus); - - /* If we entered this flow due to DCBX then the DPM register is - * already configured. - */ + rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, dpi_info, + pwm_regsize, n_cpus); } - DP_INFO(p_hwfn, - "doorbell bar: normal_region_size=%d, pwm_region_size=%d", - norm_regsize, pwm_regsize); - DP_INFO(p_hwfn, - " dpi_size=%d, dpi_count=%d, roce_edpm=%s\n", - p_hwfn->dpi_size, p_hwfn->dpi_count, - (!ecore_edpm_enabled(p_hwfn)) ? - "disabled" : "enabled"); + dpi_info->wid_count = (u16)n_cpus; /* Check return codes from above calls */ if (rc != ECORE_SUCCESS) { DP_ERR(p_hwfn, - "Failed to allocate enough DPIs\n"); + "Failed to allocate enough DPIs. Allocated %d but the current minimum is set to %d. You can reduce this minimum down to %d via user configuration min_dpis or by disabling EDPM via user configuration roce_edpm_mode\n", + dpi_info->dpi_count, + p_hwfn->pf_params.rdma_pf_params.min_dpis, + ECORE_MIN_DPIS); + DP_ERR(p_hwfn, + "PF doorbell bar: normal_region_size=0x%x, pwm_region_size=0x%x, dpi_size=0x%x, dpi_count=%d, roce_edpm=%s, page_size=%u\n", + norm_regsize, pwm_regsize, dpi_info->dpi_size, + dpi_info->dpi_count, + (!ecore_edpm_enabled(p_hwfn, dpm_info)) ? + "disabled" : "enabled", OSAL_PAGE_SIZE); + return ECORE_NORESOURCES; } + DP_INFO(p_hwfn, + "PF doorbell bar: db_bar_size=0x%x normal_region_size=0x%x, pwm_region_size=0x%x, dpi_size=0x%x, dpi_count=%d, roce_edpm=%s, page_size=%u\n", + db_bar_size, norm_regsize, pwm_regsize, dpi_info->dpi_size, + dpi_info->dpi_count, (!ecore_edpm_enabled(p_hwfn, dpm_info)) ? 
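
The normal/PWM split computed above works out to: normal region size = per-connection DEMS stride times the number of normal-region connections, rounded up to a page; DORQ_REG_PF_MIN_ADDR_REG1 then takes that size in 4 KiB units, and the PWM region is whatever remains of the BAR. A numeric sketch with invented sizes (the real ECORE_PF_DEMS_SIZE is a small per-doorbell stride):

#include <stdint.h>
#include <stdio.h>

#define DEMO_ROUNDUP(x, a)	(((x) + (a) - 1) / (a) * (a))

int main(void)
{
	uint32_t dems_size = 4, norm_region_conn = 20000;	/* assumed */
	uint32_t page_size = 4096, db_bar_size = 0x200000;	/* assumed */
	uint32_t norm_regsize, pwm_regsize, min_addr_reg1;

	norm_regsize = DEMO_ROUNDUP(dems_size * norm_region_conn, page_size);
	min_addr_reg1 = norm_regsize / 4096;	/* register unit is 4 KiB */
	pwm_regsize = db_bar_size - norm_regsize;
	/* prints norm=0x14000 pwm=0x1ec000 min_addr_reg1=20 */
	printf("norm=0x%x pwm=0x%x min_addr_reg1=%u\n",
	       (unsigned)norm_regsize, (unsigned)pwm_regsize,
	       (unsigned)min_addr_reg1);
	return 0;
}
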
+ "disabled" : "enabled", OSAL_PAGE_SIZE); + /* Update hwfn */ - p_hwfn->dpi_start_offset = norm_regsize; + dpi_info->dpi_start_offset = norm_regsize; /* this is later used to + * calculate the doorbell + * address + */ - /* Update registers */ - /* DEMS size is configured log2 of DWORDs, hence the division by 4 */ - pf_dems_shift = OSAL_LOG2(ECORE_PF_DEMS_SIZE / 4); - ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_ICID_BIT_SHIFT_NORM, pf_dems_shift); + /* Update the DORQ registers. + * DEMS size is configured as log2 of DWORDs, hence the division by 4. + */ + + dems_shift = OSAL_LOG2(ECORE_PF_DEMS_SIZE / 4); + ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_ICID_BIT_SHIFT_NORM, dems_shift); ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_MIN_ADDR_REG1, min_addr_reg1); + if (ECORE_IS_E5(p_hwfn->p_dev)) { + dems_shift = OSAL_LOG2(ECORE_VF_DEMS_SIZE / 4); + ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_ICID_BIT_SHIFT_NORM, + dems_shift); + } + return ECORE_SUCCESS; } @@ -3389,7 +4702,7 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn, * The ppfid should be set in the vector, except in BB which has * a bug in the LLH where the ppfid is actually engine based. */ - if (OSAL_GET_BIT(ECORE_MF_NEED_DEF_PF, &p_dev->mf_bits)) { + if (OSAL_TEST_BIT(ECORE_MF_NEED_DEF_PF, &p_dev->mf_bits)) { u8 pf_id = p_hwfn->rel_pf_id; if (!ECORE_IS_BB(p_dev)) @@ -3411,8 +4724,8 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, { u8 rel_pf_id = p_hwfn->rel_pf_id; u32 prs_reg; - enum _ecore_status_t rc = ECORE_SUCCESS; - u16 ctrl; + enum _ecore_status_t rc = ECORE_SUCCESS; + u16 ctrl = 0; int pos; if (p_hwfn->mcp_info) { @@ -3444,13 +4757,12 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, /* Enable classification by MAC if needed */ if (hw_mode & (1 << MODE_MF_SI)) { - DP_VERBOSE(p_hwfn, ECORE_MSG_HW, - "Configuring TAGMAC_CLS_TYPE\n"); - STORE_RT_REG(p_hwfn, NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET, - 1); + DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "Configuring TAGMAC_CLS_TYPE\n"); + STORE_RT_REG(p_hwfn, + NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET, 1); } - /* Protocl Configuration - @@@TBD - should we set 0 otherwise? */ + /* Protocl Configuration - @@@TBD - should we set 0 otherwise?*/ STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_TCP_RT_OFFSET, (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) ? 1 : 0); STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_FCOE_RT_OFFSET, @@ -3480,30 +4792,50 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, /* Pure runtime initializations - directly to the HW */ ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, true, true); - /* PCI relaxed ordering causes a decrease in the performance on some - * systems. Till a root cause is found, disable this attribute in the - * PCI config space. + #if 0 /* @DPDK */ + /* PCI relaxed ordering is generally beneficial for performance, + * but can hurt performance or lead to instability on some setups. + * If management FW is taking care of it go with that, otherwise + * disable to be on the safe side. 
*/ - /* Not in use @DPDK - * pos = OSAL_PCI_FIND_CAPABILITY(p_hwfn->p_dev, PCI_CAP_ID_EXP); - * if (!pos) { - * DP_NOTICE(p_hwfn, true, - * "Failed to find the PCIe Cap\n"); - * return ECORE_IO; - * } - * OSAL_PCI_READ_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, &ctrl); - * ctrl &= ~PCI_EXP_DEVCTL_RELAX_EN; - * OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, ctrl); - */ + pos = OSAL_PCI_FIND_CAPABILITY(p_hwfn->p_dev, PCI_CAP_ID_EXP); + if (!pos) { + DP_NOTICE(p_hwfn, true, + "Failed to find the PCI Express Capability structure in the PCI config space\n"); + return ECORE_IO; + } + + OSAL_PCI_READ_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, &ctrl); + + if (p_params->pci_rlx_odr_mode == ECORE_ENABLE_RLX_ODR) { + ctrl |= PCI_EXP_DEVCTL_RELAX_EN; + OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev, + pos + PCI_EXP_DEVCTL, ctrl); + } else if (p_params->pci_rlx_odr_mode == ECORE_DISABLE_RLX_ODR) { + ctrl &= ~PCI_EXP_DEVCTL_RELAX_EN; + OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev, + pos + PCI_EXP_DEVCTL, ctrl); + } else if (ecore_mcp_rlx_odr_supported(p_hwfn)) { + DP_INFO(p_hwfn, "PCI relax ordering configured by MFW\n"); + } else { + ctrl &= ~PCI_EXP_DEVCTL_RELAX_EN; + OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev, + pos + PCI_EXP_DEVCTL, ctrl); + } + #endif /* @DPDK */ rc = ecore_hw_init_pf_doorbell_bar(p_hwfn, p_ptt); if (rc != ECORE_SUCCESS) return rc; + rc = ecore_iov_init_vf_doorbell_bar(p_hwfn, p_ptt); + if (rc != ECORE_SUCCESS && ECORE_IS_VF_RDMA(p_hwfn)) + p_hwfn->pf_iov_info->rdma_enable = false; + /* Use the leading hwfn since in CMT only NIG #0 is operational */ if (IS_LEAD_HWFN(p_hwfn)) { rc = ecore_llh_hw_init_pf(p_hwfn, p_ptt, - p_params->avoid_eng_affin); + p_params->avoid_eng_affin); if (rc != ECORE_SUCCESS) return rc; } @@ -3518,44 +4850,69 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, rc = ecore_sp_pf_start(p_hwfn, p_ptt, p_params->p_tunn, p_params->allow_npar_tx_switch); if (rc) { - DP_NOTICE(p_hwfn, true, - "Function start ramrod failed\n"); + DP_NOTICE(p_hwfn, true, "Function start ramrod failed\n"); return rc; } + prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_TAG1); DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE, - "PRS_REG_SEARCH_TAG1: %x\n", prs_reg); + "PRS_REG_SEARCH_TAG1: %x\n", prs_reg); if (p_hwfn->hw_info.personality == ECORE_PCI_FCOE) { - ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TAG1, - (1 << 2)); + ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TAG1, (1 << 2)); ecore_wr(p_hwfn, p_ptt, PRS_REG_PKT_LEN_STAT_TAGS_NOT_COUNTED_FIRST, 0x100); } + DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE, - "PRS_REG_SEARCH registers after start PFn\n"); + "PRS_REG_SEARCH registers after start PFn\n"); prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_TCP); DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE, - "PRS_REG_SEARCH_TCP: %x\n", prs_reg); + "PRS_REG_SEARCH_TCP: %x\n", prs_reg); prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_UDP); DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE, - "PRS_REG_SEARCH_UDP: %x\n", prs_reg); + "PRS_REG_SEARCH_UDP: %x\n", prs_reg); prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_FCOE); DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE, - "PRS_REG_SEARCH_FCOE: %x\n", prs_reg); + "PRS_REG_SEARCH_FCOE: %x\n", prs_reg); prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE); DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE, - "PRS_REG_SEARCH_ROCE: %x\n", prs_reg); + "PRS_REG_SEARCH_ROCE: %x\n", prs_reg); prs_reg = ecore_rd(p_hwfn, p_ptt, - PRS_REG_SEARCH_TCP_FIRST_FRAG); + PRS_REG_SEARCH_TCP_FIRST_FRAG); DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE, - "PRS_REG_SEARCH_TCP_FIRST_FRAG: %x\n", - prs_reg); 
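
The relaxed-ordering handling above (kept under #if 0 for DPDK) is a standard Device Control register update. PCI_EXP_DEVCTL_RELAX_EN is bit 4 (0x0010) of the PCIe Device Control register; the bit manipulation on its own, with the config-space read replaced by a pretend value:

#include <stdint.h>
#include <stdio.h>

#define PCI_EXP_DEVCTL_RELAX_EN	0x0010 /* bit 4 of Device Control */

int main(void)
{
	uint16_t ctrl = 0x2810;	/* pretend value read from config space */
	int enable_rlx_odr = 0;	/* policy, e.g. from user configuration */

	if (enable_rlx_odr)
		ctrl |= PCI_EXP_DEVCTL_RELAX_EN;
	else
		ctrl &= (uint16_t)~PCI_EXP_DEVCTL_RELAX_EN;
	printf("devctl = 0x%04x\n", (unsigned)ctrl); /* 0x2800 here */
	return 0;
}
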
+ "PRS_REG_SEARCH_TCP_FIRST_FRAG: %x\n", prs_reg); prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_TAG1); DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE, - "PRS_REG_SEARCH_TAG1: %x\n", prs_reg); + "PRS_REG_SEARCH_TAG1: %x\n", prs_reg); + } + +#ifndef ASIC_ONLY + /* Configure the first VF number for the current PF and the following PF */ + if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && + ECORE_IS_E5(p_hwfn->p_dev) && + IS_PF_SRIOV(p_hwfn)) { + struct ecore_hw_sriov_info *p_iov_info = p_hwfn->p_dev->p_iov_info; + + ecore_wr(p_hwfn, p_ptt, PGLCS_REG_FIRST_VF_K2_E5, + p_iov_info->first_vf_in_pf); + + /* If this is not the last PF, pretend to the next PF and set first VF register */ + if ((rel_pf_id + 1) < MAX_NUM_PFS_E5) { + /* pretend to the following PF */ + ecore_fid_pretend(p_hwfn, p_ptt, + FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, + (rel_pf_id + 1))); + ecore_wr(p_hwfn, p_ptt, PGLCS_REG_FIRST_VF_K2_E5, + p_iov_info->first_vf_in_pf + p_iov_info->total_vfs); + /* pretend to the original PF */ + ecore_fid_pretend(p_hwfn, p_ptt, + FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, rel_pf_id)); + } } +#endif + return ECORE_SUCCESS; } @@ -3582,14 +4939,14 @@ enum _ecore_status_t ecore_pglueb_set_pfid_enable(struct ecore_hwfn *p_hwfn, if (val != set_val) { DP_NOTICE(p_hwfn, true, "PFID_ENABLE_MASTER wasn't changed after a second\n"); - return ECORE_UNKNOWN_ERROR; + return ECORE_AGAIN; } return ECORE_SUCCESS; } static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_main_ptt) + struct ecore_ptt *p_main_ptt) { /* Read shadow of current MFW mailbox */ ecore_mcp_read_mb(p_hwfn, p_main_ptt); @@ -3598,13 +4955,6 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn, p_hwfn->mcp_info->mfw_mb_length); } -static void ecore_pglueb_clear_err(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) -{ - ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_WAS_ERROR_PF_31_0_CLR, - 1 << p_hwfn->abs_pf_id); -} - static enum _ecore_status_t ecore_fill_load_req_params(struct ecore_hwfn *p_hwfn, struct ecore_load_req_params *p_load_req, @@ -3666,8 +5016,8 @@ ecore_fill_load_req_params(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn, - struct ecore_hw_init_params *p_params) +static enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn, + struct ecore_hw_init_params *p_params) { if (p_params->p_tunn) { ecore_vf_set_vf_start_tunn_update_param(p_params->p_tunn); @@ -3679,16 +5029,23 @@ enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } +static void ecore_pglueb_clear_err(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_WAS_ERROR_PF_31_0_CLR, + 1 << p_hwfn->abs_pf_id); +} + enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, struct ecore_hw_init_params *p_params) { struct ecore_load_req_params load_req_params; u32 load_code, resp, param, drv_mb_param; + enum _ecore_status_t rc = ECORE_SUCCESS, cancel_load_rc; bool b_default_mtu = true; struct ecore_hwfn *p_hwfn; const u32 *fw_overlays; u32 fw_overlays_len; - enum _ecore_status_t rc = ECORE_SUCCESS; u16 ether_type; int i; @@ -3718,19 +5075,22 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, continue; } + /* Some flows may keep variable set */ + p_hwfn->mcp_info->mcp_handling_status = 0; + rc = ecore_calc_hw_mode(p_hwfn); if (rc != ECORE_SUCCESS) return rc; - if (IS_PF(p_dev) && (OSAL_GET_BIT(ECORE_MF_8021Q_TAGGING, + if (IS_PF(p_dev) && (OSAL_TEST_BIT(ECORE_MF_8021Q_TAGGING, &p_dev->mf_bits) || - 
OSAL_GET_BIT(ECORE_MF_8021AD_TAGGING, + OSAL_TEST_BIT(ECORE_MF_8021AD_TAGGING, &p_dev->mf_bits))) { - if (OSAL_GET_BIT(ECORE_MF_8021Q_TAGGING, + if (OSAL_TEST_BIT(ECORE_MF_8021Q_TAGGING, &p_dev->mf_bits)) - ether_type = ETHER_TYPE_VLAN; + ether_type = ECORE_ETH_P_8021Q; else - ether_type = ETHER_TYPE_QINQ; + ether_type = ECORE_ETH_P_8021AD; STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, ether_type); STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, @@ -3761,10 +5121,15 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, "Load request was sent. Load code: 0x%x\n", load_code); + /* Only relevant for recovery: + * Clear the indication after LOAD_REQ is responded by the MFW. + */ + p_dev->recov_in_prog = false; + ecore_mcp_set_capabilities(p_hwfn, p_hwfn->p_main_ptt); /* CQ75580: - * When coming back from hiberbate state, the registers from + * When coming back from hibernate state, the registers from * which shadow is read initially are not initialized. It turns * out that these registers get initialized during the call to * ecore_mcp_load_req request. So we need to reread them here @@ -3775,18 +5140,9 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, */ ecore_reset_mb_shadow(p_hwfn, p_hwfn->p_main_ptt); - /* Only relevant for recovery: - * Clear the indication after the LOAD_REQ command is responded - * by the MFW. - */ - p_dev->recov_in_prog = false; - - p_hwfn->first_on_engine = (load_code == - FW_MSG_CODE_DRV_LOAD_ENGINE); - if (!qm_lock_ref_cnt) { #ifdef CONFIG_ECORE_LOCK_ALLOC - rc = OSAL_SPIN_LOCK_ALLOC(p_hwfn, &qm_lock); + rc = OSAL_SPIN_LOCK_ALLOC(p_hwfn, &qm_lock, "qm_lock"); if (rc) { DP_ERR(p_hwfn, "qm_lock allocation failed\n"); goto qm_lock_fail; @@ -3804,8 +5160,16 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, rc = ecore_final_cleanup(p_hwfn, p_hwfn->p_main_ptt, p_hwfn->rel_pf_id, false); if (rc != ECORE_SUCCESS) { - ecore_hw_err_notify(p_hwfn, - ECORE_HW_ERR_RAMROD_FAIL); + u8 str[ECORE_HW_ERR_MAX_STR_SIZE]; + + OSAL_SNPRINTF((char *)str, + ECORE_HW_ERR_MAX_STR_SIZE, + "Final cleanup failed\n"); + DP_NOTICE(p_hwfn, false, "%s", str); + ecore_hw_err_notify(p_hwfn, p_hwfn->p_main_ptt, + ECORE_HW_ERR_RAMROD_FAIL, + str, + OSAL_STRLEN((char *)str) + 1); goto load_err; } } @@ -3839,6 +5203,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, if (!p_hwfn->fw_overlay_mem) { DP_NOTICE(p_hwfn, false, "Failed to allocate fw overlay memory\n"); + rc = ECORE_NOMEM; goto load_err; } @@ -3848,13 +5213,13 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, p_hwfn->hw_info.hw_mode); if (rc != ECORE_SUCCESS) break; - /* Fall into */ + /* Fall through */ case FW_MSG_CODE_DRV_LOAD_PORT: rc = ecore_hw_init_port(p_hwfn, p_hwfn->p_main_ptt, p_hwfn->hw_info.hw_mode); if (rc != ECORE_SUCCESS) break; - /* Fall into */ + /* Fall through */ case FW_MSG_CODE_DRV_LOAD_FUNCTION: rc = ecore_hw_init_pf(p_hwfn, p_hwfn->p_main_ptt, p_hwfn->hw_info.hw_mode, @@ -3876,8 +5241,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, rc = ecore_mcp_load_done(p_hwfn, p_hwfn->p_main_ptt); if (rc != ECORE_SUCCESS) { - DP_NOTICE(p_hwfn, false, - "Sending load done failed, rc = %d\n", rc); + DP_NOTICE(p_hwfn, false, "Sending load done failed, rc = %d\n", rc); if (rc == ECORE_NOMEM) { DP_NOTICE(p_hwfn, false, "Sending load done was failed due to memory allocation failure\n"); @@ -3889,10 +5253,11 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, /* send DCBX attention request command */ DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, 
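
The ECORE_ETH_P_* selection above picks the outer-tag ethertype for the multi-function tagging modes; the standard IEEE values are 0x8100 for 802.1Q (C-TAG) and 0x88a8 for 802.1ad (S-TAG). A trivial sketch of the selection:

#include <stdint.h>
#include <stdio.h>

#define DEMO_ETH_P_8021Q	0x8100	/* customer VLAN tag (C-TAG) */
#define DEMO_ETH_P_8021AD	0x88a8	/* service VLAN tag (S-TAG) */

int main(void)
{
	int use_8021q = 0;	/* e.g. the ECORE_MF_8021Q_TAGGING bit */
	uint16_t ether_type;

	ether_type = use_8021q ? DEMO_ETH_P_8021Q : DEMO_ETH_P_8021AD;
	printf("outer tag ethertype = 0x%04x\n", (unsigned)ether_type);
	return 0;
}
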
"sending phony dcbx set command to trigger DCBx attention handling\n"); + drv_mb_param = 0; + SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_DCBX_NOTIFY, 1); rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt, DRV_MSG_CODE_SET_DCBX, - 1 << DRV_MB_PARAM_DCBX_NOTIFY_OFFSET, &resp, - ¶m); + drv_mb_param, &resp, ¶m); if (rc != ECORE_SUCCESS) { DP_NOTICE(p_hwfn, false, "Failed to send DCBX attention request\n"); @@ -3907,24 +5272,11 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, p_hwfn = ECORE_LEADING_HWFN(p_dev); DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n"); + drv_mb_param = 0; + SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_DUMMY_OEM_UPDATES, 1); rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt, DRV_MSG_CODE_GET_OEM_UPDATES, - 1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET, - &resp, ¶m); - if (rc != ECORE_SUCCESS) - DP_NOTICE(p_hwfn, false, - "Failed to send GET_OEM_UPDATES attention request\n"); - } - - if (IS_PF(p_dev)) { - /* Get pre-negotiated values for stag, bandwidth etc. */ - p_hwfn = ECORE_LEADING_HWFN(p_dev); - DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, - "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n"); - rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt, - DRV_MSG_CODE_GET_OEM_UPDATES, - 1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET, - &resp, ¶m); + drv_mb_param, &resp, ¶m); if (rc != ECORE_SUCCESS) DP_NOTICE(p_hwfn, false, "Failed to send GET_OEM_UPDATES attention request\n"); @@ -3948,7 +5300,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, rc = ecore_mcp_ov_update_driver_state(p_hwfn, p_hwfn->p_main_ptt, - ECORE_OV_DRIVER_STATE_DISABLED); + ECORE_OV_DRIVER_STATE_DISABLED); if (rc != ECORE_SUCCESS) DP_INFO(p_hwfn, "Failed to update driver state\n"); @@ -3967,11 +5319,16 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, OSAL_SPIN_LOCK_DEALLOC(&qm_lock); qm_lock_fail: #endif - /* The MFW load lock should be released regardless of success or failure - * of initialization. - * TODO: replace this with an attempt to send cancel_load. + /* The MFW load lock should be released also when initialization fails. + * If supported, use a cancel_load request to update the MFW with the + * load failure. */ - ecore_mcp_load_done(p_hwfn, p_hwfn->p_main_ptt); + cancel_load_rc = ecore_mcp_cancel_load_req(p_hwfn, p_hwfn->p_main_ptt); + if (cancel_load_rc == ECORE_NOTIMPL) { + DP_INFO(p_hwfn, + "Send a load done request instead of cancel load\n"); + ecore_mcp_load_done(p_hwfn, p_hwfn->p_main_ptt); + } return rc; } @@ -3985,11 +5342,15 @@ static void ecore_hw_timers_stop(struct ecore_dev *p_dev, /* close timers */ ecore_wr(p_hwfn, p_ptt, TM_REG_PF_ENABLE_CONN, 0x0); ecore_wr(p_hwfn, p_ptt, TM_REG_PF_ENABLE_TASK, 0x0); - for (i = 0; i < ECORE_HW_STOP_RETRY_LIMIT && !p_dev->recov_in_prog; - i++) { + + if (p_dev->recov_in_prog) + return; + + for (i = 0; i < ECORE_HW_STOP_RETRY_LIMIT; i++) { if ((!ecore_rd(p_hwfn, p_ptt, TM_REG_PF_SCAN_ACTIVE_CONN)) && - (!ecore_rd(p_hwfn, p_ptt, TM_REG_PF_SCAN_ACTIVE_TASK))) + (!ecore_rd(p_hwfn, p_ptt, + TM_REG_PF_SCAN_ACTIVE_TASK))) break; /* Dependent on number of connection/tasks, possibly @@ -4052,10 +5413,10 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev) ecore_vf_pf_int_cleanup(p_hwfn); rc = ecore_vf_pf_reset(p_hwfn); if (rc != ECORE_SUCCESS) { - DP_NOTICE(p_hwfn, true, + DP_NOTICE(p_hwfn, false, "ecore_vf_pf_reset failed. 
rc = %d.\n", rc); - rc2 = ECORE_UNKNOWN_ERROR; + rc2 = rc; } continue; } @@ -4130,12 +5491,6 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev) /* Need to wait 1ms to guarantee SBs are cleared */ OSAL_MSLEEP(1); - if (IS_LEAD_HWFN(p_hwfn) && - OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) && - !ECORE_IS_FCOE_PERSONALITY(p_hwfn)) - ecore_llh_remove_mac_filter(p_dev, 0, - p_hwfn->hw_info.hw_mac_addr); - if (!p_dev->recov_in_prog) { ecore_verify_reg_val(p_hwfn, p_ptt, QM_REG_USG_CNT_PF_TX, 0); @@ -4148,6 +5503,12 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev) ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DB_ENABLE, 0); ecore_wr(p_hwfn, p_ptt, QM_REG_PF_EN, 0); + if (IS_LEAD_HWFN(p_hwfn) && + OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) && + !ECORE_IS_FCOE_PERSONALITY(p_hwfn)) + ecore_llh_remove_mac_filter(p_dev, 0, + p_hwfn->hw_info.hw_mac_addr); + --qm_lock_ref_cnt; #ifdef CONFIG_ECORE_LOCK_ALLOC if (!qm_lock_ref_cnt) @@ -4179,8 +5540,7 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev) * only after all transactions have stopped for all active * hw-functions. */ - rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_hwfn->p_main_ptt, - false); + rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_ptt, false); if (rc != ECORE_SUCCESS) { DP_NOTICE(p_hwfn, true, "ecore_pglueb_set_pfid_enable() failed. rc = %d.\n", @@ -4208,18 +5568,11 @@ enum _ecore_status_t ecore_hw_stop_fastpath(struct ecore_dev *p_dev) if (!p_ptt) return ECORE_AGAIN; - DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, - "Shutting down the fastpath\n"); + DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Shutting down the fastpath\n"); ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x1); - ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TCP, 0x0); - ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_UDP, 0x0); - ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_FCOE, 0x0); - ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE, 0x0); - ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_OPENFLOW, 0x0); - /* @@@TBD - clean transmission queues (5.b) */ /* @@@TBD - clean BTB (5.c) */ @@ -4245,20 +5598,172 @@ enum _ecore_status_t ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn) if (!p_ptt) return ECORE_AGAIN; - /* If roce info is allocated it means roce is initialized and should - * be enabled in searcher. - */ - if (p_hwfn->p_rdma_info) { - if (p_hwfn->b_rdma_enabled_in_prs) - ecore_wr(p_hwfn, p_ptt, - p_hwfn->rdma_prs_search_reg, 0x1); - ecore_wr(p_hwfn, p_ptt, TM_REG_PF_ENABLE_CONN, 0x1); + /* Re-open incoming traffic */ + ecore_wr(p_hwfn, p_ptt, + NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0); + ecore_ptt_release(p_hwfn, p_ptt); + + return ECORE_SUCCESS; +} + +static void ecore_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u32 hw_addr, u32 val) +{ + if (ECORE_IS_BB(p_hwfn->p_dev)) + ecore_wr(p_hwfn, p_ptt, hw_addr, val); + else + ecore_mcp_wol_wr(p_hwfn, p_ptt, hw_addr, val); +} + +enum _ecore_status_t ecore_set_nwuf_reg(struct ecore_dev *p_dev, u32 reg_idx, + u32 pattern_size, u32 crc) +{ + struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); + enum _ecore_status_t rc = ECORE_SUCCESS; + struct ecore_ptt *p_ptt; + u32 reg_len = 0; + u32 reg_crc = 0; + + p_ptt = ecore_ptt_acquire(p_hwfn); + if (!p_ptt) + return ECORE_AGAIN; + + /* Get length and CRC register offsets */ + switch (reg_idx) { + case 0: + reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_0_LEN_BB : + WOL_REG_ACPI_PAT_0_LEN_K2_E5; + reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_0_CRC_BB : + WOL_REG_ACPI_PAT_0_CRC_K2_E5; + break; + case 1: + reg_len = ECORE_IS_BB(p_dev) ? 
NIG_REG_ACPI_PAT_1_LEN_BB : + WOL_REG_ACPI_PAT_1_LEN_K2_E5; + reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_1_CRC_BB : + WOL_REG_ACPI_PAT_1_CRC_K2_E5; + break; + case 2: + reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_2_LEN_BB : + WOL_REG_ACPI_PAT_2_LEN_K2_E5; + reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_2_CRC_BB : + WOL_REG_ACPI_PAT_2_CRC_K2_E5; + break; + case 3: + reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_3_LEN_BB : + WOL_REG_ACPI_PAT_3_LEN_K2_E5; + reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_3_CRC_BB : + WOL_REG_ACPI_PAT_3_CRC_K2_E5; + break; + case 4: + reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_4_LEN_BB : + WOL_REG_ACPI_PAT_4_LEN_K2_E5; + reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_4_CRC_BB : + WOL_REG_ACPI_PAT_4_CRC_K2_E5; + break; + case 5: + reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_5_LEN_BB : + WOL_REG_ACPI_PAT_5_LEN_K2_E5; + reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_5_CRC_BB : + WOL_REG_ACPI_PAT_5_CRC_K2_E5; + break; + case 6: + reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_6_LEN_BB : + WOL_REG_ACPI_PAT_6_LEN_K2_E5; + reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_6_CRC_BB : + WOL_REG_ACPI_PAT_6_CRC_K2_E5; + break; + case 7: + reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_7_LEN_BB : + WOL_REG_ACPI_PAT_7_LEN_K2_E5; + reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_7_CRC_BB : + WOL_REG_ACPI_PAT_7_CRC_K2_E5; + break; + default: + rc = ECORE_UNKNOWN_ERROR; + goto out; + } + + /* Align pattern size to 4 */ + while (pattern_size % 4) + pattern_size++; + + /* Write pattern length */ + ecore_wol_wr(p_hwfn, p_ptt, reg_len, pattern_size); + + /* Write crc value*/ + ecore_wol_wr(p_hwfn, p_ptt, reg_crc, crc); + + DP_INFO(p_dev, + "idx[%d] reg_crc[0x%x=0x%08x] " + "reg_len[0x%x=0x%x]\n", + reg_idx, reg_crc, crc, reg_len, pattern_size); +out: + ecore_ptt_release(p_hwfn, p_ptt); + + return rc; +} + +void ecore_wol_buffer_clear(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + const u32 wake_buffer_clear_offset = + ECORE_IS_BB(p_hwfn->p_dev) ? + NIG_REG_WAKE_BUFFER_CLEAR_BB : WOL_REG_WAKE_BUFFER_CLEAR_K2_E5; + + DP_INFO(p_hwfn->p_dev, + "reset " + "REG_WAKE_BUFFER_CLEAR offset=0x%08x\n", + wake_buffer_clear_offset); + + ecore_wol_wr(p_hwfn, p_ptt, wake_buffer_clear_offset, 1); + ecore_wol_wr(p_hwfn, p_ptt, wake_buffer_clear_offset, 0); +} + +enum _ecore_status_t ecore_get_wake_info(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_wake_info *wake_info) +{ + struct ecore_dev *p_dev = p_hwfn->p_dev; + u32 *buf = OSAL_NULL; + u32 i = 0; + const u32 reg_wake_buffer_offest = + ECORE_IS_BB(p_dev) ? NIG_REG_WAKE_BUFFER_BB : + WOL_REG_WAKE_BUFFER_K2_E5; + + wake_info->wk_info = ecore_rd(p_hwfn, p_ptt, + ECORE_IS_BB(p_dev) ? NIG_REG_WAKE_INFO_BB : + WOL_REG_WAKE_INFO_K2_E5); + wake_info->wk_details = ecore_rd(p_hwfn, p_ptt, + ECORE_IS_BB(p_dev) ? NIG_REG_WAKE_DETAILS_BB : + WOL_REG_WAKE_DETAILS_K2_E5); + wake_info->wk_pkt_len = ecore_rd(p_hwfn, p_ptt, + ECORE_IS_BB(p_dev) ? 
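
The "while (pattern_size % 4) pattern_size++;" loop above aligns the WoL pattern length up to a multiple of 4; the closed-form equivalent is the usual round-up mask. A self-checking sketch:

#include <stdint.h>
#include <assert.h>

static uint32_t align4(uint32_t x)
{
	/* Same result as the while-loop in the patch, without iterating. */
	return (x + 3u) & ~3u;
}

int main(void)
{
	assert(align4(0) == 0);
	assert(align4(1) == 4);
	assert(align4(4) == 4);
	assert(align4(13) == 16);
	return 0;
}
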
NIG_REG_WAKE_PKT_LEN_BB : + WOL_REG_WAKE_PKT_LEN_K2_E5); + + DP_INFO(p_dev, + "REG_WAKE_INFO=0x%08x " + "REG_WAKE_DETAILS=0x%08x " + "REG_WAKE_PKT_LEN=0x%08x\n", + wake_info->wk_info, + wake_info->wk_details, + wake_info->wk_pkt_len); + + buf = (u32 *)wake_info->wk_buffer; + + for (i = 0; i < (wake_info->wk_pkt_len / sizeof(u32)); i++) { + if ((i * sizeof(u32)) >= sizeof(wake_info->wk_buffer)) { + DP_INFO(p_dev, + "i index to 0 high=%d\n", + i); + break; + } + buf[i] = ecore_rd(p_hwfn, p_ptt, + reg_wake_buffer_offest + (i * sizeof(u32))); + DP_INFO(p_dev, "wk_buffer[%u]: 0x%08x\n", + i, buf[i]); } - /* Re-open incoming traffic */ - ecore_wr(p_hwfn, p_ptt, - NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0); - ecore_ptt_release(p_hwfn, p_ptt); + ecore_wol_buffer_clear(p_hwfn, p_ptt); return ECORE_SUCCESS; } @@ -4268,13 +5773,14 @@ static void ecore_hw_hwfn_free(struct ecore_hwfn *p_hwfn) { ecore_ptt_pool_free(p_hwfn); OSAL_FREE(p_hwfn->p_dev, p_hwfn->hw_info.p_igu_info); + p_hwfn->hw_info.p_igu_info = OSAL_NULL; } /* Setup bar access */ static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn) { /* clear indirect access */ - if (ECORE_IS_AH(p_hwfn->p_dev)) { + if (ECORE_IS_AH(p_hwfn->p_dev) || ECORE_IS_E5(p_hwfn->p_dev)) { ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0); ecore_wr(p_hwfn, p_hwfn->p_main_ptt, @@ -4306,7 +5812,7 @@ static void get_function_id(struct ecore_hwfn *p_hwfn) { /* ME Register */ p_hwfn->hw_info.opaque_fid = (u16)REG_RD(p_hwfn, - PXP_PF_ME_OPAQUE_ADDR); + PXP_PF_ME_OPAQUE_ADDR); p_hwfn->hw_info.concrete_fid = REG_RD(p_hwfn, PXP_PF_ME_CONCRETE_ADDR); @@ -4322,22 +5828,65 @@ static void get_function_id(struct ecore_hwfn *p_hwfn) p_hwfn->hw_info.concrete_fid, p_hwfn->hw_info.opaque_fid); } -static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn) +void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn) { u32 *feat_num = p_hwfn->hw_info.feat_num; struct ecore_sb_cnt_info sb_cnt; - u32 non_l2_sbs = 0; + u32 non_l2_sbs = 0, non_l2_iov_sbs = 0; OSAL_MEM_ZERO(&sb_cnt, sizeof(sb_cnt)); ecore_int_get_num_sbs(p_hwfn, &sb_cnt); +#ifdef CONFIG_ECORE_ROCE + /* Roce CNQ require each: 1 status block. 1 CNQ, we divide the + * status blocks equally between L2 / RoCE but with consideration as + * to how many l2 queues / cnqs we have + */ + if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) { +#ifndef __EXTRACT__LINUX__THROW__ + u32 max_cnqs; +#endif + + feat_num[ECORE_RDMA_CNQ] = + OSAL_MIN_T(u32, + sb_cnt.cnt / 2, + RESC_NUM(p_hwfn, ECORE_RDMA_CNQ_RAM)); + + feat_num[ECORE_VF_RDMA_CNQ] = + OSAL_MIN_T(u32, + sb_cnt.iov_cnt / 2, + RESC_NUM(p_hwfn, ECORE_VF_RDMA_CNQ_RAM)); + +#ifndef __EXTRACT__LINUX__THROW__ + /* Upper layer might require less */ + max_cnqs = (u32)p_hwfn->pf_params.rdma_pf_params.max_cnqs; + if (max_cnqs) { + if (max_cnqs == ECORE_RDMA_PF_PARAMS_CNQS_NONE) + max_cnqs = 0; + feat_num[ECORE_RDMA_CNQ] = + OSAL_MIN_T(u32, + feat_num[ECORE_RDMA_CNQ], + max_cnqs); + feat_num[ECORE_VF_RDMA_CNQ] = + OSAL_MIN_T(u32, + feat_num[ECORE_VF_RDMA_CNQ], + max_cnqs); + } +#endif + + non_l2_sbs = feat_num[ECORE_RDMA_CNQ]; + non_l2_iov_sbs = feat_num[ECORE_VF_RDMA_CNQ]; + } +#endif + /* L2 Queues require each: 1 status block. 
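
In ecore_hw_set_feat() above, RDMA CNQs are given at most half of the fastpath status blocks, capped by the CNQ RAM resource, and L2 queues get the remainder. The division with assumed counts (64 SBs, 16 CNQ RAM lines; the real numbers come from RESC_NUM() and the IGU):

#include <stdint.h>
#include <stdio.h>

static uint32_t min_u32(uint32_t a, uint32_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	uint32_t sb_cnt = 64, cnq_ram = 16;	/* assumed counts */
	uint32_t cnqs, l2_queues;

	/* Give RDMA at most half of the SBs, capped by the CNQ resource. */
	cnqs = min_u32(sb_cnt / 2, cnq_ram);
	/* Whatever SBs are not used by CNQs remain for L2 queues. */
	l2_queues = sb_cnt - cnqs;
	printf("cnqs=%u l2_queues=%u\n", (unsigned)cnqs,
	       (unsigned)l2_queues); /* 16 and 48 */
	return 0;
}
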
1 L2 queue */ if (ECORE_IS_L2_PERSONALITY(p_hwfn)) { /* Start by allocating VF queues, then PF's */ feat_num[ECORE_VF_L2_QUE] = OSAL_MIN_T(u32, - RESC_NUM(p_hwfn, ECORE_L2_QUEUE), - sb_cnt.iov_cnt); + sb_cnt.iov_cnt - non_l2_iov_sbs, + RESC_NUM(p_hwfn, ECORE_L2_QUEUE)); + feat_num[ECORE_PF_L2_QUE] = OSAL_MIN_T(u32, sb_cnt.cnt - non_l2_sbs, @@ -4345,36 +5894,12 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn) FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE)); } - if (ECORE_IS_FCOE_PERSONALITY(p_hwfn) || - ECORE_IS_ISCSI_PERSONALITY(p_hwfn)) { - u32 *p_storage_feat = ECORE_IS_FCOE_PERSONALITY(p_hwfn) ? - &feat_num[ECORE_FCOE_CQ] : - &feat_num[ECORE_ISCSI_CQ]; - u32 limit = sb_cnt.cnt; - - /* The number of queues should not exceed the number of FP SBs. - * In storage target, the queues are divided into pairs of a CQ - * and a CmdQ, and each pair uses a single SB. The limit in - * this case should allow a max ratio of 2:1 instead of 1:1. - */ - if (p_hwfn->p_dev->b_is_target) - limit *= 2; - *p_storage_feat = OSAL_MIN_T(u32, limit, - RESC_NUM(p_hwfn, ECORE_CMDQS_CQS)); - - /* @DPDK */ - /* The size of "cq_cmdq_sb_num_arr" in the fcoe/iscsi init - * ramrod is limited to "NUM_OF_GLOBAL_QUEUES / 2". - */ - *p_storage_feat = OSAL_MIN_T(u32, *p_storage_feat, - (NUM_OF_GLOBAL_QUEUES / 2)); - } - DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE, - "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n", + "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #PF_ROCE_CNQ=%d #VF_ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n", (int)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE), (int)FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE), (int)FEAT_NUM(p_hwfn, ECORE_RDMA_CNQ), + (int)FEAT_NUM(p_hwfn, ECORE_VF_RDMA_CNQ), (int)FEAT_NUM(p_hwfn, ECORE_FCOE_CQ), (int)FEAT_NUM(p_hwfn, ECORE_ISCSI_CQ), (int)sb_cnt.cnt); @@ -4397,18 +5922,26 @@ const char *ecore_hw_get_resc_name(enum ecore_resources res_id) return "MAC"; case ECORE_VLAN: return "VLAN"; + case ECORE_VF_RDMA_CNQ_RAM: + return "VF_RDMA_CNQ_RAM"; case ECORE_RDMA_CNQ_RAM: return "RDMA_CNQ_RAM"; case ECORE_ILT: return "ILT"; - case ECORE_LL2_QUEUE: - return "LL2_QUEUE"; + case ECORE_LL2_RAM_QUEUE: + return "LL2_RAM_QUEUE"; + case ECORE_LL2_CTX_QUEUE: + return "LL2_CTX_QUEUE"; case ECORE_CMDQS_CQS: return "CMDQS_CQS"; case ECORE_RDMA_STATS_QUEUE: return "RDMA_STATS_QUEUE"; case ECORE_BDQ: return "BDQ"; + case ECORE_VF_MAC_ADDR: + return "VF_MAC_ADDR"; + case ECORE_GFS_PROFILE: + return "GFS_PROFILE"; case ECORE_SB: return "SB"; default: @@ -4467,7 +6000,8 @@ static u32 ecore_hsi_def_val[][MAX_CHIP_IDS] = { u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev, enum ecore_hsi_def_type type) { - enum chip_ids chip_id = ECORE_IS_BB(p_dev) ? CHIP_BB : CHIP_K2; + enum chip_ids chip_id = ECORE_IS_BB(p_dev) ? CHIP_BB : + ECORE_IS_AH(p_dev) ? 
CHIP_K2 : CHIP_E5; if (type >= ECORE_NUM_HSI_DEFS) { DP_ERR(p_dev, "Unexpected HSI definition type [%d]\n", type); @@ -4482,18 +6016,13 @@ ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { u32 resc_max_val, mcp_resp; - u8 res_id; + u8 num_vf_cnqs, res_id; enum _ecore_status_t rc; + num_vf_cnqs = p_hwfn->num_vf_cnqs; + for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) { - /* @DPDK */ switch (res_id) { - case ECORE_LL2_QUEUE: - case ECORE_RDMA_CNQ_RAM: - case ECORE_RDMA_STATS_QUEUE: - case ECORE_BDQ: - resc_max_val = 0; - break; default: continue; } @@ -4523,6 +6052,7 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn, { u8 num_funcs = p_hwfn->num_funcs_on_engine; struct ecore_dev *p_dev = p_hwfn->p_dev; + u8 num_vf_cnqs = p_hwfn->num_vf_cnqs; switch (res_id) { case ECORE_L2_QUEUE: @@ -4549,37 +6079,57 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn, case ECORE_ILT: *p_resc_num = NUM_OF_PXP_ILT_RECORDS(p_dev) / num_funcs; break; - case ECORE_LL2_QUEUE: + case ECORE_LL2_RAM_QUEUE: *p_resc_num = MAX_NUM_LL2_RX_RAM_QUEUES / num_funcs; break; + case ECORE_LL2_CTX_QUEUE: + *p_resc_num = MAX_NUM_LL2_RX_CTX_QUEUES / num_funcs; + break; + case ECORE_VF_RDMA_CNQ_RAM: + *p_resc_num = num_vf_cnqs / num_funcs; + break; case ECORE_RDMA_CNQ_RAM: case ECORE_CMDQS_CQS: /* CNQ/CMDQS are the same resource */ - /* @DPDK */ - *p_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs; + *p_resc_num = (NUM_OF_GLOBAL_QUEUES - num_vf_cnqs) / num_funcs; break; case ECORE_RDMA_STATS_QUEUE: *p_resc_num = NUM_OF_RDMA_STATISTIC_COUNTERS(p_dev) / num_funcs; break; case ECORE_BDQ: - /* @DPDK */ + if (p_hwfn->hw_info.personality != ECORE_PCI_ISCSI && + p_hwfn->hw_info.personality != ECORE_PCI_FCOE) + *p_resc_num = 0; + else + *p_resc_num = 1; + break; + case ECORE_VF_MAC_ADDR: *p_resc_num = 0; break; - default: + case ECORE_GFS_PROFILE: + *p_resc_num = ECORE_IS_E5(p_hwfn->p_dev) ? + GFS_PROFILE_MAX_ENTRIES / num_funcs : 0; + break; + case ECORE_SB: + /* Since we want its value to reflect whether MFW supports + * the new scheme, have a default of 0. + */ + *p_resc_num = 0; break; + default: + return ECORE_INVAL; } - switch (res_id) { case ECORE_BDQ: if (!*p_resc_num) *p_resc_start = 0; - break; - case ECORE_SB: - /* Since we want its value to reflect whether MFW supports - * the new scheme, have a default of 0. 
- */ - *p_resc_num = 0; + else if (p_hwfn->p_dev->num_ports_in_engine == 4) + *p_resc_start = p_hwfn->port_id; + else if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) + *p_resc_start = p_hwfn->port_id; + else if (p_hwfn->hw_info.personality == ECORE_PCI_FCOE) + *p_resc_start = p_hwfn->port_id + 2; break; default: *p_resc_start = *p_resc_num * p_hwfn->enabled_func_idx; @@ -4609,8 +6159,19 @@ __ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id, return rc; } + /* TODO: Add MFW support for GFS profiles resource */ + if (res_id == ECORE_GFS_PROFILE) { + DP_INFO(p_hwfn, + "Resource %d [%s]: Applying default values [%d,%d]\n", + res_id, ecore_hw_get_resc_name(res_id), + dflt_resc_num, dflt_resc_start); + *p_resc_num = dflt_resc_num; + *p_resc_start = dflt_resc_start; + return ECORE_SUCCESS; + } + #ifndef ASIC_ONLY - if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) { + if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && !ECORE_IS_E5(p_hwfn->p_dev)) { *p_resc_num = dflt_resc_num; *p_resc_start = dflt_resc_start; goto out; @@ -4620,9 +6181,8 @@ __ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id, rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, res_id, &mcp_resp, p_resc_num, p_resc_start); if (rc != ECORE_SUCCESS) { - DP_NOTICE(p_hwfn, true, - "MFW response failure for an allocation request for" - " resource %d [%s]\n", + DP_NOTICE(p_hwfn, false, + "MFW response failure for an allocation request for resource %d [%s]\n", res_id, ecore_hw_get_resc_name(res_id)); return rc; } @@ -4634,12 +6194,9 @@ __ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id, */ if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK) { DP_INFO(p_hwfn, - "Failed to receive allocation info for resource %d [%s]." - " mcp_resp = 0x%x. Applying default values" - " [%d,%d].\n", + "Failed to receive allocation info for resource %d [%s]. mcp_resp = 0x%x. Applying default values [%d,%d].\n", res_id, ecore_hw_get_resc_name(res_id), mcp_resp, dflt_resc_num, dflt_resc_start); - *p_resc_num = dflt_resc_num; *p_resc_start = dflt_resc_start; goto out; @@ -4659,6 +6216,37 @@ __ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id, } } out: + /* PQs have to divide by 8 [that's the HW granularity]. + * Reduce number so it would fit. + */ + if ((res_id == ECORE_PQ) && + ((*p_resc_num % 8) || (*p_resc_start % 8))) { + DP_INFO(p_hwfn, + "PQs need to align by 8; Number %08x --> %08x, Start %08x --> %08x\n", + *p_resc_num, (*p_resc_num) & ~0x7, + *p_resc_start, (*p_resc_start) & ~0x7); + *p_resc_num &= ~0x7; + *p_resc_start &= ~0x7; + } + + /* The last RSS engine is used by the FW for TPA hash calculation. + * Old MFW versions allocate it to the drivers, so it needs to be + * truncated. + */ + if (res_id == ECORE_RSS_ENG && + *p_resc_num && + (*p_resc_start + *p_resc_num - 1) == + NUM_OF_RSS_ENGINES(p_hwfn->p_dev)) + --*p_resc_num; + + /* In case personality is not RDMA or there is no separate doorbell bar + * for VFs, VF-RDMA will not be supported, thus no need CNQs for VFs. + */ + if ((res_id == ECORE_VF_RDMA_CNQ_RAM) && + (!ECORE_IS_RDMA_PERSONALITY(p_hwfn) || + !ecore_iov_vf_db_bar_size(p_hwfn, p_hwfn->p_main_ptt))) + *p_resc_num = 0; + return ECORE_SUCCESS; } @@ -4697,7 +6285,7 @@ static enum _ecore_status_t ecore_hw_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn, /* 4-ports mode has limitations that should be enforced: * - BB: the MFW can access only PPFIDs which their corresponding PFIDs * belong to this certain port. 
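
The PQ fixup above trims both the count and the start of the MFW-provided range down to the hardware granularity of 8 by masking with ~0x7. In numbers:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Assumed raw allocation from the MFW, not 8-aligned. */
	uint32_t resc_num = 0x3e, resc_start = 0x0b;

	/* The HW hands out PQs in blocks of 8, so trim both values down. */
	resc_num &= ~0x7u;
	resc_start &= ~0x7u;
	/* prints num=0x38 start=0x08 */
	printf("num=0x%02x start=0x%02x\n", (unsigned)resc_num,
	       (unsigned)resc_start);
	return 0;
}
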
- * - AH: only 4 PPFIDs per port are available. + * - AH/E5: only 4 PPFIDs per port are available. */ if (ecore_device_num_ports(p_dev) == 4) { u8 mask; @@ -4762,9 +6350,17 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn, * Old drivers that don't acquire the lock can run in parallel, and * their allocation values won't be affected by the updated max values. */ + ecore_mcp_resc_lock_default_init(&resc_lock_params, &resc_unlock_params, ECORE_RESC_LOCK_RESC_ALLOC, false); + /* Changes on top of the default values to accommodate parallel attempts + * of several PFs. + * [10 x 10 msec by default ==> 20 x 50 msec] + */ + resc_lock_params.retry_num *= 2; + resc_lock_params.retry_interval *= 5; + rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params); if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) { return rc; @@ -4774,8 +6370,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn, } else if (rc == ECORE_SUCCESS && !resc_lock_params.b_granted) { DP_NOTICE(p_hwfn, false, "Failed to acquire the resource lock for the resource allocation commands\n"); - rc = ECORE_BUSY; - goto unlock_and_exit; + return ECORE_BUSY; } else { rc = ecore_hw_set_soft_resc_size(p_hwfn, p_ptt); if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) { @@ -4818,7 +6413,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn, if (!(p_dev->b_is_emul_full)) { resc_num[ECORE_PQ] = 32; resc_start[ECORE_PQ] = resc_num[ECORE_PQ] * - p_hwfn->enabled_func_idx; + p_hwfn->enabled_func_idx; } /* For AH emulation, since we have a possible maximal number of @@ -4832,23 +6427,38 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn, if (!p_hwfn->rel_pf_id) { resc_num[ECORE_ILT] = OSAL_MAX_T(u32, resc_num[ECORE_ILT], - roce_min_ilt_lines); + roce_min_ilt_lines); } else if (resc_num[ECORE_ILT] < roce_min_ilt_lines) { resc_start[ECORE_ILT] += roce_min_ilt_lines - resc_num[ECORE_ILT]; } } + + /* [E5 emulation] MF + SRIOV: + * The 14 low PFs have 16 VFs while the 2 high PFs have 8 VFs. + * Need to adjust the vport values so that each PF/VF will have + * a vport: + * 256 vports = 14 x (PF + 16 VFs) + 2 x (PF + 8 VFs) + */ + if (ECORE_IS_E5(p_dev) && + p_hwfn->num_funcs_on_engine == 16 && + IS_ECORE_SRIOV(p_dev)) { + u8 id = p_hwfn->abs_pf_id; + + resc_num[ECORE_VPORT] = (id < 14) ? 17 : 9; + resc_start[ECORE_VPORT] = (id < 14) ? + (id * 17) : + (14 * 17 + (id - 14) * 9); + } } #endif /* Sanity for ILT */ max_ilt_lines = NUM_OF_PXP_ILT_RECORDS(p_dev); if (RESC_END(p_hwfn, ECORE_ILT) > max_ilt_lines) { - DP_NOTICE(p_hwfn, true, - "Can't assign ILT pages [%08x,...,%08x]\n", - RESC_START(p_hwfn, ECORE_ILT), RESC_END(p_hwfn, - ECORE_ILT) - - 1); + DP_NOTICE(p_hwfn, true, "Can't assign ILT pages [%08x,...,%08x]\n", + RESC_START(p_hwfn, ECORE_ILT), + RESC_END(p_hwfn, ECORE_ILT) - 1); return ECORE_INVAL; } @@ -4875,6 +6485,26 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn, return rc; } +static bool ecore_is_dscp_mapping_allowed(struct ecore_hwfn *p_hwfn, + u32 mf_mode) +{ + /* HW bug in E4: + * The NIG accesses the "NIG_REG_DSCP_TO_TC_MAP_ENABLE" PORT_PF register + * with the PFID, as set in "NIG_REG_LLH_PPFID2PFID_TBL", instead of + * with the PPFID. AH is being affected from this bug, and thus DSCP to + * TC mapping is only allowed on the following configurations: + * - QPAR, 2-ports + * - QPAR, 4-ports, single PF on a port + * There is no impact on BB since it has another bug in which the PPFID + * is actually engine based. 
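/*
 * A minimal sketch, not applied by this patch: the E5 emulation vport
 * split above. The 14 low PFs get 1 + 16 = 17 vports each and the 2
 * high PFs get 1 + 8 = 9, covering all 256 vports exactly:
 * 14 * 17 + 2 * 9 = 256.
 */
#include <stdio.h>

int main(void)
{
	unsigned int id, num, start;

	for (id = 0; id < 16; id++) {
		num = (id < 14) ? 17 : 9;
		start = (id < 14) ? id * 17 : 14 * 17 + (id - 14) * 9;
		printf("PF %2u: vports [%3u..%3u]\n",
		       id, start, start + num - 1);
	}
	return 0;
}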
+ */ + return !ECORE_IS_AH(p_hwfn->p_dev) || + (mf_mode == NVM_CFG1_GLOB_MF_MODE_DEFAULT && + (ecore_device_num_ports(p_hwfn->p_dev) == 2 || + (ecore_device_num_ports(p_hwfn->p_dev) == 4 && + p_hwfn->num_funcs_on_port == 1))); +} + #ifndef ASIC_ONLY static enum _ecore_status_t ecore_emul_hw_get_nvm_info(struct ecore_hwfn *p_hwfn) @@ -4885,7 +6515,8 @@ ecore_emul_hw_get_nvm_info(struct ecore_hwfn *p_hwfn) /* The MF mode on emulation is either default or NPAR 1.0 */ p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS | 1 << ECORE_MF_LLH_PROTO_CLSS | - 1 << ECORE_MF_LL2_NON_UNICAST; + 1 << ECORE_MF_LL2_NON_UNICAST | + 1 << ECORE_MF_DSCP_TO_TC_MAP; if (p_hwfn->num_funcs_on_port > 1) p_dev->mf_bits |= 1 << ECORE_MF_INTER_PF_SWITCH | 1 << ECORE_MF_DISABLE_ARFS; @@ -4902,14 +6533,16 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_hw_prepare_params *p_params) { - u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg, dcbx_mode; u32 port_cfg_addr, link_temp, nvm_cfg_addr, device_capabilities; + u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg; struct ecore_mcp_link_capabilities *p_caps; struct ecore_mcp_link_params *link; enum _ecore_status_t rc; + u32 dcbx_mode; /* __LINUX__THROW__ */ #ifndef ASIC_ONLY - if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) + if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) || + p_hwfn->mcp_info->recovery_mode) return ecore_emul_hw_get_nvm_info(p_hwfn); #endif @@ -4924,18 +6557,16 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn, return ECORE_INVAL; } -/* Read nvm_cfg1 (Notice this is just offset, and not offsize (TBD) */ - + /* Read nvm_cfg1 (Notice this is just offset, and not offsize (TBD) */ nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt, nvm_cfg_addr + 4); - addr = MCP_REG_SCRATCH + nvm_cfg1_offset + + addr = MCP_REG_SCRATCH + nvm_cfg1_offset + OFFSETOF(struct nvm_cfg1, glob) + OFFSETOF(struct nvm_cfg1_glob, core_cfg); core_cfg = ecore_rd(p_hwfn, p_ptt, addr); - switch ((core_cfg & NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK) >> - NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET) { + switch (GET_MFW_FIELD(core_cfg, NVM_CFG1_GLOB_NETWORK_PORT_MODE)) { case NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_2X40G: p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_2X40G; break; @@ -4969,20 +6600,35 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn, case NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G: p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_4X25G; break; + case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_2X50G_R1: + p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_2X50G_R1; + break; + case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_4X50G_R1: + p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_4X50G_R1; + break; + case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_1X100G_R2: + p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_1X100G_R2; + break; + case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_2X100G_R2: + p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_2X100G_R2; + break; + case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_1X100G_R4: + p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_1X100G_R4; + break; default: DP_NOTICE(p_hwfn, true, "Unknown port mode in 0x%08x\n", core_cfg); break; } +#ifndef __EXTRACT__LINUX__THROW__ /* Read DCBX configuration */ port_cfg_addr = MCP_REG_SCRATCH + nvm_cfg1_offset + OFFSETOF(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]); dcbx_mode = ecore_rd(p_hwfn, p_ptt, port_cfg_addr + OFFSETOF(struct nvm_cfg1_port, generic_cont0)); - dcbx_mode = (dcbx_mode & NVM_CFG1_PORT_DCBX_MODE_MASK) - >> NVM_CFG1_PORT_DCBX_MODE_OFFSET; + dcbx_mode = GET_MFW_FIELD(dcbx_mode, NVM_CFG1_PORT_DCBX_MODE); switch (dcbx_mode) { case 
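/*
 * A minimal sketch, not applied by this patch: the GET_MFW_FIELD()
 * conversions above replace open-coded "(val & FOO_MASK) >> FOO_OFFSET"
 * pairs, assuming the macro is defined along these lines in ecore.h.
 * The field layout below is a placeholder, not the real NVM map.
 */
#include <stdio.h>

#define GET_MFW_FIELD(name, field) \
	(((name) & (field##_MASK)) >> (field##_OFFSET))

#define DEMO_PORT_MODE_MASK   0x00000f00
#define DEMO_PORT_MODE_OFFSET 8

int main(void)
{
	unsigned int core_cfg = 0x00000300;

	printf("port mode = %u\n",
	       (unsigned int)GET_MFW_FIELD(core_cfg, DEMO_PORT_MODE));
	/* Prints: port mode = 3 */
	return 0;
}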
NVM_CFG1_PORT_DCBX_MODE_DYNAMIC: p_hwfn->hw_info.dcbx_mode = ECORE_DCBX_VERSION_DYNAMIC; @@ -4996,12 +6642,13 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn, default: p_hwfn->hw_info.dcbx_mode = ECORE_DCBX_VERSION_DISABLED; } +#endif /* Read default link configuration */ link = &p_hwfn->mcp_info->link_input; p_caps = &p_hwfn->mcp_info->link_capabilities; port_cfg_addr = MCP_REG_SCRATCH + nvm_cfg1_offset + - OFFSETOF(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]); + OFFSETOF(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]); link_temp = ecore_rd(p_hwfn, p_ptt, port_cfg_addr + OFFSETOF(struct nvm_cfg1_port, speed_cap_mask)); @@ -5012,8 +6659,7 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn, link_temp = ecore_rd(p_hwfn, p_ptt, port_cfg_addr + OFFSETOF(struct nvm_cfg1_port, link_settings)); - switch ((link_temp & NVM_CFG1_PORT_DRV_LINK_SPEED_MASK) >> - NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET) { + switch (GET_MFW_FIELD(link_temp, NVM_CFG1_PORT_DRV_LINK_SPEED)) { case NVM_CFG1_PORT_DRV_LINK_SPEED_AUTONEG: link->speed.autoneg = true; break; @@ -5023,6 +6669,9 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn, case NVM_CFG1_PORT_DRV_LINK_SPEED_10G: link->speed.forced_speed = 10000; break; + case NVM_CFG1_PORT_DRV_LINK_SPEED_20G: + link->speed.forced_speed = 20000; + break; case NVM_CFG1_PORT_DRV_LINK_SPEED_25G: link->speed.forced_speed = 25000; break; @@ -5036,27 +6685,56 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn, link->speed.forced_speed = 100000; break; default: - DP_NOTICE(p_hwfn, true, "Unknown Speed in 0x%08x\n", link_temp); + DP_NOTICE(p_hwfn, true, "Unknown Speed in 0x%08x\n", + link_temp); } - p_caps->default_speed = link->speed.forced_speed; + p_caps->default_speed = link->speed.forced_speed; /* __LINUX__THROW__ */ p_caps->default_speed_autoneg = link->speed.autoneg; - link_temp &= NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK; - link_temp >>= NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET; + link_temp = GET_MFW_FIELD(link_temp, NVM_CFG1_PORT_DRV_FLOW_CONTROL); link->pause.autoneg = !!(link_temp & - NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG); + NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG); link->pause.forced_rx = !!(link_temp & - NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX); + NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX); link->pause.forced_tx = !!(link_temp & - NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX); + NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX); link->loopback_mode = 0; + if (p_hwfn->mcp_info->capabilities & + FW_MB_PARAM_FEATURE_SUPPORT_FEC_CONTROL) { + link_temp = ecore_rd(p_hwfn, p_ptt, port_cfg_addr + + OFFSETOF(struct nvm_cfg1_port, + link_settings)); + switch (GET_MFW_FIELD(link_temp, + NVM_CFG1_PORT_FEC_FORCE_MODE)) { + case NVM_CFG1_PORT_FEC_FORCE_MODE_NONE: + p_caps->fec_default |= ECORE_MCP_FEC_NONE; + break; + case NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE: + p_caps->fec_default |= ECORE_MCP_FEC_FIRECODE; + break; + case NVM_CFG1_PORT_FEC_FORCE_MODE_RS: + p_caps->fec_default |= ECORE_MCP_FEC_RS; + break; + case NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO: + p_caps->fec_default |= ECORE_MCP_FEC_AUTO; + break; + default: + DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, + "unknown FEC mode in 0x%08x\n", link_temp); + } + } else { + p_caps->fec_default = ECORE_MCP_FEC_UNSUPPORTED; + } + + link->fec = p_caps->fec_default; + if (p_hwfn->mcp_info->capabilities & FW_MB_PARAM_FEATURE_SUPPORT_EEE) { link_temp = ecore_rd(p_hwfn, p_ptt, port_cfg_addr + OFFSETOF(struct nvm_cfg1_port, ext_phy)); - link_temp &= NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_MASK; - link_temp >>= NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_OFFSET; + link_temp = GET_MFW_FIELD(link_temp, + 
NVM_CFG1_PORT_EEE_POWER_SAVING_MODE); p_caps->default_eee = ECORE_MCP_EEE_ENABLED; link->eee.enable = true; switch (link_temp) { @@ -5083,100 +6761,202 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn, p_caps->default_eee = ECORE_MCP_EEE_UNSUPPORTED; } + if (p_hwfn->mcp_info->capabilities & + FW_MB_PARAM_FEATURE_SUPPORT_EXT_SPEED_FEC_CONTROL) { + u32 mask; + link_temp = ecore_rd(p_hwfn, p_ptt, + port_cfg_addr + + OFFSETOF(struct nvm_cfg1_port, extended_speed)); + mask = GET_MFW_FIELD(link_temp, NVM_CFG1_PORT_EXTENDED_SPEED); + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_AN) + link->ext_speed.autoneg = true; + link->ext_speed.forced_speed = 0; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_1G) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_1G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_10G) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_10G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_20G) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_20G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_25G) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_25G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_40G) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_40G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_50G_R; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R2) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_50G_R2; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R2) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_100G_R2; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R4) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_100G_R4; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_P4) + link->ext_speed.forced_speed |= ECORE_EXT_SPEED_100G_P4; + + mask = GET_MFW_FIELD(link_temp, + NVM_CFG1_PORT_EXTENDED_SPEED_CAP); + link->ext_speed.advertised_speeds = 0; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_RESERVED) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_RES; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_1G) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_1G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_10G) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_10G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_20G) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_20G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_25G) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_25G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_40G) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_40G; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_50G_R; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R2) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_50G_R2; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R2) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_100G_R2; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R4) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_100G_R4; + if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_P4) + link->ext_speed.advertised_speeds |= + ECORE_EXT_SPEED_MASK_100G_P4; + + link->ext_fec_mode = ecore_rd(p_hwfn, p_ptt, port_cfg_addr + + OFFSETOF(struct nvm_cfg1_port, + extended_fec_mode)); + p_caps->default_ext_speed_caps = + 
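/*
 * A minimal sketch, not applied by this patch: the if-chains above are
 * a 1:1 bit translation from the NVM extended-speed fields to the
 * ECORE_EXT_SPEED_* masks; the same mapping can be written as a lookup
 * table. All bit values below are placeholders.
 */
#include <stdint.h>
#include <stdio.h>

struct demo_speed_map {
	uint32_t nvm_bit; /* bit in the NVM extended-speed field  */
	uint32_t eco_bit; /* corresponding ECORE_EXT_SPEED_* flag */
};

static const struct demo_speed_map demo_map[] = {
	{ 0x1, 0x1 }, /* autoneg */
	{ 0x2, 0x2 }, /* 1G      */
	{ 0x4, 0x4 }, /* 10G     */
	/* ... one row per supported speed ... */
};

static uint32_t demo_map_ext_speeds(uint32_t nvm_mask)
{
	uint32_t out = 0;
	size_t i;

	for (i = 0; i < sizeof(demo_map) / sizeof(demo_map[0]); i++)
		if (nvm_mask & demo_map[i].nvm_bit)
			out |= demo_map[i].eco_bit;
	return out;
}

int main(void)
{
	printf("0x%x\n", demo_map_ext_speeds(0x6)); /* 1G + 10G -> 0x6 */
	return 0;
}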
link->ext_speed.advertised_speeds; + p_caps->default_ext_speed = link->ext_speed.forced_speed; + p_caps->default_ext_autoneg = link->ext_speed.autoneg; + p_caps->default_ext_fec = link->ext_fec_mode; + DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, + "Read default extended link config: An 0x%08x Speed 0x%08x, Adv. Speed 0x%08x, AN 0x%02x, FEC: 0x%2x\n", + link->ext_speed.autoneg, + link->ext_speed.forced_speed, + link->ext_speed.advertised_speeds, + link->ext_speed.autoneg, link->ext_fec_mode); + } + DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, - "Read default link: Speed 0x%08x, Adv. Speed 0x%08x, AN: 0x%02x, PAUSE AN: 0x%02x\n EEE: %02x [%08x usec]", + "Read default link: Speed 0x%08x, Adv. Speed 0x%08x, AN: 0x%02x, PAUSE AN: 0x%02x EEE: %02x [%08x usec], FEC: 0x%2x\n", link->speed.forced_speed, link->speed.advertised_speeds, link->speed.autoneg, link->pause.autoneg, - p_caps->default_eee, p_caps->eee_lpi_timer); - - /* Read Multi-function information from shmem */ - addr = MCP_REG_SCRATCH + nvm_cfg1_offset + - OFFSETOF(struct nvm_cfg1, glob) + - OFFSETOF(struct nvm_cfg1_glob, generic_cont0); + p_caps->default_eee, p_caps->eee_lpi_timer, + p_caps->fec_default); - generic_cont0 = ecore_rd(p_hwfn, p_ptt, addr); + if (IS_LEAD_HWFN(p_hwfn)) { + struct ecore_dev *p_dev = p_hwfn->p_dev; - mf_mode = (generic_cont0 & NVM_CFG1_GLOB_MF_MODE_MASK) >> - NVM_CFG1_GLOB_MF_MODE_OFFSET; + /* Read Multi-function information from shmem */ + addr = MCP_REG_SCRATCH + nvm_cfg1_offset + + OFFSETOF(struct nvm_cfg1, glob) + + OFFSETOF(struct nvm_cfg1_glob, generic_cont0); + generic_cont0 = ecore_rd(p_hwfn, p_ptt, addr); + mf_mode = GET_MFW_FIELD(generic_cont0, NVM_CFG1_GLOB_MF_MODE); - switch (mf_mode) { - case NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED: - p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS; - break; - case NVM_CFG1_GLOB_MF_MODE_UFP: - p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS | + switch (mf_mode) { + case NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED: + p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS; + break; + case NVM_CFG1_GLOB_MF_MODE_UFP: + p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS | + 1 << ECORE_MF_LLH_PROTO_CLSS | 1 << ECORE_MF_UFP_SPECIFIC | - 1 << ECORE_MF_8021Q_TAGGING; - break; - case NVM_CFG1_GLOB_MF_MODE_BD: - p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS | + 1 << ECORE_MF_8021Q_TAGGING | + 1 << ECORE_MF_DONT_ADD_VLAN0_TAG; + break; + case NVM_CFG1_GLOB_MF_MODE_BD: + p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS | 1 << ECORE_MF_LLH_PROTO_CLSS | 1 << ECORE_MF_8021AD_TAGGING | - 1 << ECORE_MF_FIP_SPECIAL; - break; - case NVM_CFG1_GLOB_MF_MODE_NPAR1_0: - p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS | + 1 << ECORE_MF_FIP_SPECIAL | + 1 << ECORE_MF_DONT_ADD_VLAN0_TAG; + break; + case NVM_CFG1_GLOB_MF_MODE_NPAR1_0: + p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS | 1 << ECORE_MF_LLH_PROTO_CLSS | 1 << ECORE_MF_LL2_NON_UNICAST | 1 << ECORE_MF_INTER_PF_SWITCH | 1 << ECORE_MF_DISABLE_ARFS; - break; - case NVM_CFG1_GLOB_MF_MODE_DEFAULT: - p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS | + break; + case NVM_CFG1_GLOB_MF_MODE_DEFAULT: + p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS | 1 << ECORE_MF_LLH_PROTO_CLSS | - 1 << ECORE_MF_LL2_NON_UNICAST; - if (ECORE_IS_BB(p_hwfn->p_dev)) - p_hwfn->p_dev->mf_bits |= 1 << ECORE_MF_NEED_DEF_PF; - break; - } - DP_INFO(p_hwfn, "Multi function mode is 0x%x\n", - p_hwfn->p_dev->mf_bits); + 1 << ECORE_MF_LL2_NON_UNICAST | + 1 << ECORE_MF_VF_RDMA | + 1 << ECORE_MF_ROCE_LAG; + if (ECORE_IS_BB(p_dev)) + p_dev->mf_bits |= 1 << ECORE_MF_NEED_DEF_PF; + break; + } - if (ECORE_IS_CMT(p_hwfn->p_dev)) - 
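/*
 * A minimal sketch, not applied by this patch: mf_bits is consumed as a
 * plain bitmap, e.g. the ECORE_MF_VF_RDMA bit set for the default MF
 * mode above later gates the VF CNQ count in ecore_get_hw_info(). The
 * bit index below is a placeholder.
 */
#include <stdio.h>

#define DEMO_MF_VF_RDMA 10 /* placeholder bit index */

static int demo_test_bit(int nr, const unsigned long *addr)
{
	return (int)((*addr >> nr) & 1UL);
}

int main(void)
{
	unsigned long mf_bits = 1UL << DEMO_MF_VF_RDMA;
	unsigned int num_vf_cnqs = 16;

	/* Mirrors: if (!OSAL_TEST_BIT(ECORE_MF_VF_RDMA, ..)) num_vf_cnqs = 0 */
	if (!demo_test_bit(DEMO_MF_VF_RDMA, &mf_bits))
		num_vf_cnqs = 0;
	printf("VF CNQs: %u\n", num_vf_cnqs); /* Prints: VF CNQs: 16 */
	return 0;
}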
p_hwfn->p_dev->mf_bits |= (1 << ECORE_MF_DISABLE_ARFS); + DP_INFO(p_dev, "Multi function mode is 0x%x\n", + p_dev->mf_bits); - /* It's funny since we have another switch, but it's easier - * to throw this away in linux this way. Long term, it might be - * better to have have getters for needed ECORE_MF_* fields, - * convert client code and eliminate this. - */ - switch (mf_mode) { - case NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED: - case NVM_CFG1_GLOB_MF_MODE_BD: - p_hwfn->p_dev->mf_mode = ECORE_MF_OVLAN; - break; - case NVM_CFG1_GLOB_MF_MODE_NPAR1_0: - p_hwfn->p_dev->mf_mode = ECORE_MF_NPAR; - break; - case NVM_CFG1_GLOB_MF_MODE_DEFAULT: - p_hwfn->p_dev->mf_mode = ECORE_MF_DEFAULT; - break; - case NVM_CFG1_GLOB_MF_MODE_UFP: - p_hwfn->p_dev->mf_mode = ECORE_MF_UFP; - break; + /* In CMT the PF is unknown when the GFS block processes the + * packet. Therefore cannot use searcher as it has a per PF + * database, and thus ARFS must be disabled. + * + */ + if (ECORE_IS_CMT(p_dev)) + p_dev->mf_bits |= 1 << ECORE_MF_DISABLE_ARFS; + + if (ecore_is_dscp_mapping_allowed(p_hwfn, mf_mode)) + p_dev->mf_bits |= 1 << ECORE_MF_DSCP_TO_TC_MAP; + +#ifndef __EXTRACT__LINUX__THROW__ + /* It's funny since we have another switch, but it's easier + * to throw this away in linux this way. Long term, it might be + * better to have getters for needed ECORE_MF_* fields, + * convert client code and eliminate this. + */ + switch (mf_mode) { + case NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED: + case NVM_CFG1_GLOB_MF_MODE_BD: + p_dev->mf_mode = ECORE_MF_OVLAN; + break; + case NVM_CFG1_GLOB_MF_MODE_NPAR1_0: + p_dev->mf_mode = ECORE_MF_NPAR; + break; + case NVM_CFG1_GLOB_MF_MODE_DEFAULT: + p_dev->mf_mode = ECORE_MF_DEFAULT; + break; + case NVM_CFG1_GLOB_MF_MODE_UFP: + p_dev->mf_mode = ECORE_MF_UFP; + break; + } +#endif } - /* Read Multi-function information from shmem */ + /* Read device capabilities information from shmem */ addr = MCP_REG_SCRATCH + nvm_cfg1_offset + OFFSETOF(struct nvm_cfg1, glob) + OFFSETOF(struct nvm_cfg1_glob, device_capabilities); device_capabilities = ecore_rd(p_hwfn, p_ptt, addr); if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET) - OSAL_SET_BIT(ECORE_DEV_CAP_ETH, - &p_hwfn->hw_info.device_capabilities); + OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_ETH, + &p_hwfn->hw_info.device_capabilities); if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE) - OSAL_SET_BIT(ECORE_DEV_CAP_FCOE, - &p_hwfn->hw_info.device_capabilities); + OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_FCOE, + &p_hwfn->hw_info.device_capabilities); if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ISCSI) - OSAL_SET_BIT(ECORE_DEV_CAP_ISCSI, - &p_hwfn->hw_info.device_capabilities); + OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_ISCSI, + &p_hwfn->hw_info.device_capabilities); if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ROCE) - OSAL_SET_BIT(ECORE_DEV_CAP_ROCE, - &p_hwfn->hw_info.device_capabilities); + OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_ROCE, + &p_hwfn->hw_info.device_capabilities); if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_IWARP) - OSAL_SET_BIT(ECORE_DEV_CAP_IWARP, - &p_hwfn->hw_info.device_capabilities); + OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_IWARP, + &p_hwfn->hw_info.device_capabilities); rc = ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt); if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) { @@ -5190,11 +6970,14 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn, static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { - u8 num_funcs, enabled_func_idx = 
p_hwfn->rel_pf_id; - u32 reg_function_hide, tmp, eng_mask, low_pfs_mask; + u8 num_funcs, enabled_func_idx, num_funcs_on_port; struct ecore_dev *p_dev = p_hwfn->p_dev; + u32 reg_function_hide; + /* Default "worst-case" values */ num_funcs = ECORE_IS_AH(p_dev) ? MAX_NUM_PFS_K2 : MAX_NUM_PFS_BB; + enabled_func_idx = p_hwfn->rel_pf_id; + num_funcs_on_port = MAX_PF_PER_PORT; /* Bit 0 of MISCS_REG_FUNCTION_HIDE indicates whether the bypass values * in the other bits are selected. @@ -5207,21 +6990,23 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn, reg_function_hide = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE); if (reg_function_hide & 0x1) { + u32 enabled_funcs, eng_mask, low_pfs_mask, port_mask, tmp; + + /* Get the number of the enabled functions on the device */ + enabled_funcs = (reg_function_hide & 0xffff) ^ 0xfffe; + + /* Get the number of the enabled functions on the engine */ if (ECORE_IS_BB(p_dev)) { - if (ECORE_PATH_ID(p_hwfn) && !ECORE_IS_CMT(p_dev)) { - num_funcs = 0; + if (ECORE_PATH_ID(p_hwfn) && !ECORE_IS_CMT(p_dev)) eng_mask = 0xaaaa; - } else { - num_funcs = 1; - eng_mask = 0x5554; - } + else + eng_mask = 0x5555; } else { - num_funcs = 1; - eng_mask = 0xfffe; + eng_mask = 0xffff; } - /* Get the number of the enabled functions on the engine */ - tmp = (reg_function_hide ^ 0xffffffff) & eng_mask; + num_funcs = 0; + tmp = enabled_funcs & eng_mask; while (tmp) { if (tmp & 0x1) num_funcs++; @@ -5229,17 +7014,35 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn, } /* Get the PF index within the enabled functions */ - low_pfs_mask = (0x1 << p_hwfn->abs_pf_id) - 1; - tmp = reg_function_hide & eng_mask & low_pfs_mask; + low_pfs_mask = (1 << ECORE_LEADING_HWFN(p_dev)->abs_pf_id) - 1; + enabled_func_idx = 0; + tmp = enabled_funcs & eng_mask & low_pfs_mask; + while (tmp) { + if (tmp & 0x1) + enabled_func_idx++; + tmp >>= 0x1; + } + + /* Get the number of functions on the port */ + if (ecore_device_num_ports(p_hwfn->p_dev) == 4) + port_mask = 0x1111 << (p_hwfn->abs_pf_id % 4); + else if (ecore_device_num_ports(p_hwfn->p_dev) == 2) + port_mask = 0x5555 << (p_hwfn->abs_pf_id % 2); + else /* single port */ + port_mask = 0xffff; + + num_funcs_on_port = 0; + tmp = enabled_funcs & port_mask; while (tmp) { if (tmp & 0x1) - enabled_func_idx--; + num_funcs_on_port++; tmp >>= 0x1; } } p_hwfn->num_funcs_on_engine = num_funcs; p_hwfn->enabled_func_idx = enabled_func_idx; + p_hwfn->num_funcs_on_port = num_funcs_on_port; #ifndef ASIC_ONLY if (CHIP_REV_IS_FPGA(p_dev)) { @@ -5250,14 +7053,15 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn, #endif DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE, - "PF [rel_id %d, abs_id %d] occupies index %d within the %d enabled functions on the engine\n", - p_hwfn->rel_pf_id, p_hwfn->abs_pf_id, - p_hwfn->enabled_func_idx, p_hwfn->num_funcs_on_engine); + "PF {abs %d, rel %d}: enabled func %d, %d funcs on engine, %d funcs on port [reg_function_hide 0x%x]\n", + p_hwfn->abs_pf_id, p_hwfn->rel_pf_id, + p_hwfn->enabled_func_idx, p_hwfn->num_funcs_on_engine, + p_hwfn->num_funcs_on_port, reg_function_hide); } #ifndef ASIC_ONLY static void ecore_emul_hw_info_port_num(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) + struct ecore_ptt *p_ptt) { struct ecore_dev *p_dev = p_hwfn->p_dev; u32 eco_reserved; @@ -5265,22 +7069,22 @@ static void ecore_emul_hw_info_port_num(struct ecore_hwfn *p_hwfn, /* MISCS_REG_ECO_RESERVED[15:12]: num of ports in an engine */ eco_reserved = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED); switch ((eco_reserved & 0xf000) 
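/*
 * A minimal sketch, not applied by this patch: worked example of the
 * reg_function_hide arithmetic above. 0xfff1 (bypass valid, PFs 4..15
 * hidden) on a 2-port AH with an even abs_pf_id yields 4 functions on
 * the engine and 2 on the port.
 */
#include <stdio.h>

static unsigned int demo_popcount(unsigned int v)
{
	unsigned int n = 0;

	for (; v; v >>= 1)
		n += v & 1;
	return n;
}

int main(void)
{
	unsigned int reg_function_hide = 0xfff1;
	unsigned int enabled_funcs = (reg_function_hide & 0xffff) ^ 0xfffe;
	unsigned int eng_mask = 0xffff;  /* AH: all functions, one engine */
	unsigned int port_mask = 0x5555; /* 2 ports, abs_pf_id % 2 == 0   */

	printf("enabled_funcs   = 0x%04x\n", enabled_funcs); /* 0x000f */
	printf("funcs on engine = %u\n",
	       demo_popcount(enabled_funcs & eng_mask));     /* 4 */
	printf("funcs on port   = %u\n",
	       demo_popcount(enabled_funcs & port_mask));    /* 2 */
	return 0;
}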
>> 12) { - case 1: - p_dev->num_ports_in_engine = 1; - break; - case 3: - p_dev->num_ports_in_engine = 2; - break; - case 0xf: - p_dev->num_ports_in_engine = 4; - break; - default: - DP_NOTICE(p_hwfn, false, + case 1: + p_dev->num_ports_in_engine = 1; + break; + case 3: + p_dev->num_ports_in_engine = 2; + break; + case 0xf: + p_dev->num_ports_in_engine = 4; + break; + default: + DP_NOTICE(p_hwfn, false, "Emulation: Unknown port mode [ECO_RESERVED 0x%08x]\n", eco_reserved); p_dev->num_ports_in_engine = 1; /* Default to something */ break; - } + } p_dev->num_ports = p_dev->num_ports_in_engine * ecore_device_num_engines(p_dev); @@ -5307,7 +7111,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn, } #endif - /* In CMT there is always only one port */ + /* In CMT there is always only one port */ if (ECORE_IS_CMT(p_dev)) { p_dev->num_ports_in_engine = 1; p_dev->num_ports = 1; @@ -5355,8 +7159,7 @@ static void ecore_mcp_get_eee_caps(struct ecore_hwfn *p_hwfn, p_caps->eee_speed_caps = 0; eee_status = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr + OFFSETOF(struct public_port, eee_status)); - eee_status = (eee_status & EEE_SUPPORTED_SPEED_MASK) >> - EEE_SUPPORTED_SPEED_OFFSET; + eee_status = GET_MFW_FIELD(eee_status, EEE_SUPPORTED_SPEED); if (eee_status & EEE_1G_SUPPORTED) p_caps->eee_speed_caps |= ECORE_EEE_1G_ADV; if (eee_status & EEE_10G_ADV) @@ -5377,26 +7180,41 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, } /* Since all information is common, only first hwfns should do this */ - if (IS_LEAD_HWFN(p_hwfn) && !IS_ECORE_PACING(p_hwfn)) { + if (IS_LEAD_HWFN(p_hwfn) && !IS_ECORE_PACING(p_hwfn) && + !p_params->b_sriov_disable) { rc = ecore_iov_hw_info(p_hwfn); + if (rc != ECORE_SUCCESS) { if (p_params->b_relaxed_probe) p_params->p_relaxed_res = - ECORE_HW_PREPARE_BAD_IOV; + ECORE_HW_PREPARE_BAD_IOV; else return rc; } } + /* The following order should be kept: + * (1) ecore_hw_info_port_num(), + * (2) ecore_get_num_funcs(), + * (3) ecore_mcp_get_capabilities, and + * (4) ecore_hw_get_nvm_info(), + * since (2) depends on (1) and (4) depends on both. + * In addition, can send the MFW a MB command only after (1) is called. 
+ */ if (IS_LEAD_HWFN(p_hwfn)) ecore_hw_info_port_num(p_hwfn, p_ptt); + ecore_get_num_funcs(p_hwfn, p_ptt); + ecore_mcp_get_capabilities(p_hwfn, p_ptt); rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt, p_params); if (rc != ECORE_SUCCESS) return rc; + if (p_hwfn->mcp_info->recovery_mode) + return ECORE_SUCCESS; + rc = ecore_int_igu_read_cam(p_hwfn, p_ptt); if (rc != ECORE_SUCCESS) { if (p_params->b_relaxed_probe) @@ -5405,16 +7223,18 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, return rc; } + OSAL_NUM_FUNCS_IS_SET(p_hwfn); + #ifndef ASIC_ONLY if (CHIP_REV_IS_ASIC(p_hwfn->p_dev) && ecore_mcp_is_init(p_hwfn)) { #endif OSAL_MEMCPY(p_hwfn->hw_info.hw_mac_addr, - p_hwfn->mcp_info->func_info.mac, ETH_ALEN); + p_hwfn->mcp_info->func_info.mac, ECORE_ETH_ALEN); #ifndef ASIC_ONLY } else { - static u8 mcp_hw_mac[6] = { 0, 2, 3, 4, 5, 6 }; + static u8 mcp_hw_mac[6] = {0, 2, 3, 4, 5, 6}; - OSAL_MEMCPY(p_hwfn->hw_info.hw_mac_addr, mcp_hw_mac, ETH_ALEN); + OSAL_MEMCPY(p_hwfn->hw_info.hw_mac_addr, mcp_hw_mac, ECORE_ETH_ALEN); p_hwfn->hw_info.hw_mac_addr[5] = p_hwfn->abs_pf_id; } #endif @@ -5422,7 +7242,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, if (ecore_mcp_is_init(p_hwfn)) { if (p_hwfn->mcp_info->func_info.ovlan != ECORE_MCP_VLAN_UNSET) p_hwfn->hw_info.ovlan = - p_hwfn->mcp_info->func_info.ovlan; + p_hwfn->mcp_info->func_info.ovlan; ecore_mcp_cmd_port_init(p_hwfn, p_ptt); @@ -5443,7 +7263,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) { /* AH emulation: * Allow only PF0 to be RoCE to overcome a lack of ILT lines. - */ + */ if (ECORE_IS_AH(p_hwfn->p_dev) && p_hwfn->rel_pf_id) p_hwfn->hw_info.personality = ECORE_PCI_ETH; else @@ -5451,6 +7271,9 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, } #endif + if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) + p_hwfn->hw_info.multi_tc_roce_en = 1; + /* although in BB some constellations may support more than 4 tcs, * that can result in performance penalty in some cases. 4 * represents a good tradeoff between performance and flexibility. @@ -5465,16 +7288,18 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, */ p_hwfn->hw_info.num_active_tc = 1; - ecore_get_num_funcs(p_hwfn, p_ptt); - if (ecore_mcp_is_init(p_hwfn)) p_hwfn->hw_info.mtu = p_hwfn->mcp_info->func_info.mtu; - /* In case of forcing the driver's default resource allocation, calling - * ecore_hw_get_resc() should come after initializing the personality - * and after getting the number of functions, since the calculation of - * the resources/features depends on them. - * This order is not harmful if not forcing. + /* In case the current MF mode doesn't support VF-RDMA, there is no + * need in VF CNQs. + */ + if (!OSAL_TEST_BIT(ECORE_MF_VF_RDMA, &p_hwfn->p_dev->mf_bits)) + p_hwfn->num_vf_cnqs = 0; + + /* Due to a dependency, ecore_hw_get_resc() should be called after + * getting the number of functions on an engine, and after initializing + * the personality. 
*/ rc = ecore_hw_get_resc(p_hwfn, p_ptt, drv_resc_alloc); if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) { @@ -5485,7 +7310,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, return rc; } -#define ECORE_MAX_DEVICE_NAME_LEN (8) +#define ECORE_MAX_DEVICE_NAME_LEN (8) void ecore_get_dev_name(struct ecore_dev *p_dev, u8 *name, u8 max_chars) { @@ -5493,7 +7318,8 @@ void ecore_get_dev_name(struct ecore_dev *p_dev, u8 *name, u8 max_chars) n = OSAL_MIN_T(u8, max_chars, ECORE_MAX_DEVICE_NAME_LEN); OSAL_SNPRINTF((char *)name, n, "%s %c%d", - ECORE_IS_BB(p_dev) ? "BB" : "AH", + ECORE_IS_BB(p_dev) ? "BB" + : ECORE_IS_AH(p_dev) ? "AH" : "E5", 'A' + p_dev->chip_rev, (int)p_dev->chip_metal); } @@ -5519,6 +7345,9 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn, case ECORE_DEV_ID_MASK_AH: p_dev->type = ECORE_DEV_TYPE_AH; break; + case ECORE_DEV_ID_MASK_E5: + p_dev->type = ECORE_DEV_TYPE_E5; + break; default: DP_NOTICE(p_hwfn, true, "Unknown device id 0x%x\n", p_dev->device_id); @@ -5545,8 +7374,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn, /* For some reason we have problems with this register * in BB B0 emulation; Simply assume no CMT */ - DP_NOTICE(p_dev->hwfns, false, - "device on emul - assume no CMT\n"); + DP_NOTICE(p_dev->hwfns, false, "device on emul - assume no CMT\n"); p_dev->num_hwfns = 1; } #endif @@ -5558,7 +7386,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn, DP_INFO(p_dev->hwfns, "Chip details - %s %c%d, Num: %04x Rev: %02x Bond id: %02x Metal: %02x\n", - ECORE_IS_BB(p_dev) ? "BB" : "AH", + ECORE_IS_BB(p_dev) ? "BB" : ECORE_IS_AH(p_dev) ? "AH" : "E5", 'A' + p_dev->chip_rev, (int)p_dev->chip_metal, p_dev->chip_num, p_dev->chip_rev, p_dev->chip_bond_id, p_dev->chip_metal); @@ -5568,6 +7396,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn, "The chip type/rev (BB A0) is not supported!\n"); return ECORE_ABORTED; } + #ifndef ASIC_ONLY if (CHIP_REV_IS_EMUL(p_dev) && ECORE_IS_AH(p_dev)) ecore_wr(p_hwfn, p_ptt, MISCS_REG_PLL_MAIN_CTRL_4, 0x1); @@ -5581,7 +7410,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn, /* MISCS_REG_ECO_RESERVED[28]: emulation build w/ or w/o MAC */ p_dev->b_is_emul_mac = !!(tmp & (1 << 28)); - DP_NOTICE(p_hwfn, false, + DP_NOTICE(p_hwfn, false, "Emulation: Running on a %s build %s MAC\n", p_dev->b_is_emul_full ? "full" : "reduced", p_dev->b_is_emul_mac ? 
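/*
 * A minimal sketch, not applied by this patch: the name format produced
 * by ecore_get_dev_name() above, e.g. "AH B0" for chip_rev 1, metal 0,
 * with "E5" now a possible family string.
 */
#include <stdio.h>

int main(void)
{
	const char *family = "AH"; /* "BB", "AH" or "E5" per device id */
	unsigned int chip_rev = 1, chip_metal = 0;
	char name[8];

	snprintf(name, sizeof(name), "%s %c%d",
		 family, 'A' + chip_rev, (int)chip_metal);
	printf("%s\n", name); /* Prints: AH B0 */
	return 0;
}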
"with" : "without"); @@ -5591,8 +7420,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -#ifndef LINUX_REMOVE -void ecore_prepare_hibernate(struct ecore_dev *p_dev) +void ecore_hw_hibernate_prepare(struct ecore_dev *p_dev) { int j; @@ -5602,19 +7430,41 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev) for_each_hwfn(p_dev, j) { struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j]; - DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, - "Mark hw/fw uninitialized\n"); + DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Mark hw/fw uninitialized\n"); p_hwfn->hw_init_done = false; ecore_ptt_invalidate(p_hwfn); } } -#endif + +void ecore_hw_hibernate_resume(struct ecore_dev *p_dev) +{ + int j = 0; + + if (IS_VF(p_dev)) + return; + + for_each_hwfn(p_dev, j) { + struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j]; + struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); + + ecore_hw_hwfn_prepare(p_hwfn); + + if (!p_ptt) { + DP_NOTICE(p_hwfn, false, "ptt acquire failed\n"); + } else { + ecore_load_mcp_offsets(p_hwfn, p_ptt); + ecore_ptt_release(p_hwfn, p_ptt); + } + DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP, "Reinitialized hw after low power state\n"); + } +} static enum _ecore_status_t ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview, void OSAL_IOMEM *p_doorbells, u64 db_phys_addr, + unsigned long db_size, struct ecore_hw_prepare_params *p_params) { struct ecore_mdump_retain_data mdump_retain; @@ -5626,14 +7476,18 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview, p_hwfn->regview = p_regview; p_hwfn->doorbells = p_doorbells; p_hwfn->db_phys_addr = db_phys_addr; + p_hwfn->db_size = db_size; + p_hwfn->reg_offset = (u8 *)p_hwfn->regview - + (u8 *)p_hwfn->p_dev->regview; + p_hwfn->db_offset = (u8 *)p_hwfn->doorbells - + (u8 *)p_hwfn->p_dev->doorbells; if (IS_VF(p_dev)) return ecore_vf_hw_prepare(p_hwfn, p_params); /* Validate that chip access is feasible */ if (REG_RD(p_hwfn, PXP_PF_ME_OPAQUE_ADDR) == 0xffffffff) { - DP_ERR(p_hwfn, - "Reading the ME register returns all Fs; Preventing further chip access\n"); + DP_ERR(p_hwfn, "Reading the ME register returns all Fs; Preventing further chip access\n"); if (p_params->b_relaxed_probe) p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_ME; return ECORE_INVAL; @@ -5677,6 +7531,7 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview, if (val != 1) { DP_ERR(p_hwfn, "PTT and GTT init in PGLUE_B didn't complete\n"); + rc = ECORE_BUSY; goto err1; } @@ -5710,21 +7565,19 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview, goto err2; } - /* Sending a mailbox to the MFW should be after ecore_get_hw_info() is - * called, since among others it sets the ports number in an engine. 
- */ if (p_params->initiate_pf_flr && IS_LEAD_HWFN(p_hwfn) && !p_dev->recov_in_prog) { rc = ecore_mcp_initiate_pf_flr(p_hwfn, p_hwfn->p_main_ptt); if (rc != ECORE_SUCCESS) DP_NOTICE(p_hwfn, false, "Failed to initiate PF FLR\n"); + } - /* Workaround for MFW issue where PF FLR does not cleanup - * IGU block - */ - if (!(p_hwfn->mcp_info->capabilities & - FW_MB_PARAM_FEATURE_SUPPORT_IGU_CLEANUP)) - ecore_pf_flr_igu_cleanup(p_hwfn); + /* NVRAM info initialization and population */ + rc = ecore_mcp_nvm_info_populate(p_hwfn); + if (rc) { + DP_NOTICE(p_hwfn, false, + "Failed to populate nvm info shadow\n"); + goto err2; } /* Check if mdump logs/data are present and update the epoch value */ @@ -5733,7 +7586,7 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview, &mdump_info); if (rc == ECORE_SUCCESS && mdump_info.num_of_logs) DP_NOTICE(p_hwfn, false, - "* * * IMPORTANT - HW ERROR register dump captured by device * * *\n"); + "mdump data available.\n"); rc = ecore_mcp_mdump_get_retain(p_hwfn, p_hwfn->p_main_ptt, &mdump_retain); @@ -5753,8 +7606,9 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview, DP_NOTICE(p_hwfn, false, "Failed to allocate the init array\n"); if (p_params->b_relaxed_probe) p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM; - goto err2; + goto err3; } + #ifndef ASIC_ONLY if (CHIP_REV_IS_FPGA(p_dev)) { if (ECORE_IS_AH(p_dev)) { @@ -5772,6 +7626,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview, #endif return rc; +err3: + ecore_mcp_nvm_info_free(p_hwfn); err2: if (IS_LEAD_HWFN(p_hwfn)) ecore_iov_free_hw_info(p_dev); @@ -5789,9 +7645,13 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, enum _ecore_status_t rc; p_dev->chk_reg_fifo = p_params->chk_reg_fifo; + p_dev->monitored_hw_addr = p_params->monitored_hw_addr; p_dev->allow_mdump = p_params->allow_mdump; p_hwfn->b_en_pacing = p_params->b_en_pacing; p_dev->b_is_target = p_params->b_is_target; + p_hwfn->roce_edpm_mode = p_params->roce_edpm_mode; + p_hwfn->num_vf_cnqs = p_params->num_vf_cnqs; + p_hwfn->fs_accuracy = p_params->fs_accuracy; if (p_params->b_relaxed_probe) p_params->p_relaxed_res = ECORE_HW_PREPARE_SUCCESS; @@ -5799,7 +7659,7 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, /* Initialize the first hwfn - will learn number of hwfns */ rc = ecore_hw_prepare_single(p_hwfn, p_dev->regview, p_dev->doorbells, p_dev->db_phys_addr, - p_params); + p_dev->db_size, p_params); if (rc != ECORE_SUCCESS) return rc; @@ -5825,13 +7685,14 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, db_phys_addr = p_dev->db_phys_addr + offset; p_dev->hwfns[1].b_en_pacing = p_params->b_en_pacing; + p_dev->hwfns[1].num_vf_cnqs = p_params->num_vf_cnqs; /* prepare second hw function */ rc = ecore_hw_prepare_single(&p_dev->hwfns[1], p_regview, p_doorbell, db_phys_addr, - p_params); + p_dev->db_size, p_params); /* in case of error, need to free the previously - * initiliazed hwfn 0. + * initialized hwfn 0. 
*/ if (rc != ECORE_SUCCESS) { if (p_params->b_relaxed_probe) @@ -5840,6 +7701,7 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, if (IS_PF(p_dev)) { ecore_init_free(p_hwfn); + ecore_mcp_nvm_info_free(p_hwfn); ecore_mcp_free(p_hwfn); ecore_hw_hwfn_free(p_hwfn); } else { @@ -5854,12 +7716,13 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, void ecore_hw_remove(struct ecore_dev *p_dev) { - struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); int i; - if (IS_PF(p_dev)) + if (IS_PF(p_dev)) { + struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); ecore_mcp_ov_update_driver_state(p_hwfn, p_hwfn->p_main_ptt, - ECORE_OV_DRIVER_STATE_NOT_LOADED); + ECORE_OV_DRIVER_STATE_NOT_LOADED); + } for_each_hwfn(p_dev, i) { struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i]; @@ -5870,6 +7733,7 @@ void ecore_hw_remove(struct ecore_dev *p_dev) } ecore_init_free(p_hwfn); + ecore_mcp_nvm_info_free(p_hwfn); ecore_hw_hwfn_free(p_hwfn); ecore_mcp_free(p_hwfn); @@ -5878,6 +7742,9 @@ void ecore_hw_remove(struct ecore_dev *p_dev) #endif } +#ifdef CONFIG_ECORE_LOCK_ALLOC + OSAL_SPIN_LOCK_DEALLOC(&p_dev->internal_trace.lock); +#endif ecore_iov_free_hw_info(p_dev); } @@ -5903,7 +7770,7 @@ static void ecore_chain_free_next_ptr(struct ecore_dev *p_dev, p_phys_next = HILO_DMA_REGPAIR(p_next->next_phys); OSAL_DMA_FREE_COHERENT(p_dev, p_virt, p_phys, - ECORE_CHAIN_PAGE_SIZE); + p_chain->page_size); p_virt = p_virt_next; p_phys = p_phys_next; @@ -6168,11 +8035,16 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev, enum _ecore_status_t ecore_fw_l2_queue(struct ecore_hwfn *p_hwfn, u16 src_id, u16 *dst_id) { + if (!RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) { + DP_NOTICE(p_hwfn, false, "No L2 queue is available\n"); + return ECORE_INVAL; + } + if (src_id >= RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) { u16 min, max; min = (u16)RESC_START(p_hwfn, ECORE_L2_QUEUE); - max = min + RESC_NUM(p_hwfn, ECORE_L2_QUEUE); + max = min + RESC_NUM(p_hwfn, ECORE_L2_QUEUE) - 1; DP_NOTICE(p_hwfn, true, "l2_queue id [%d] is not valid, available indices [%d - %d]\n", src_id, min, max); @@ -6188,11 +8060,16 @@ enum _ecore_status_t ecore_fw_l2_queue(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_fw_vport(struct ecore_hwfn *p_hwfn, u8 src_id, u8 *dst_id) { + if (!RESC_NUM(p_hwfn, ECORE_VPORT)) { + DP_NOTICE(p_hwfn, false, "No vport is available\n"); + return ECORE_INVAL; + } + if (src_id >= RESC_NUM(p_hwfn, ECORE_VPORT)) { u8 min, max; min = (u8)RESC_START(p_hwfn, ECORE_VPORT); - max = min + RESC_NUM(p_hwfn, ECORE_VPORT); + max = min + RESC_NUM(p_hwfn, ECORE_VPORT) - 1; DP_NOTICE(p_hwfn, true, "vport id [%d] is not valid, available indices [%d - %d]\n", src_id, min, max); @@ -6208,11 +8085,16 @@ enum _ecore_status_t ecore_fw_vport(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn, u8 src_id, u8 *dst_id) { + if (!RESC_NUM(p_hwfn, ECORE_RSS_ENG)) { + DP_NOTICE(p_hwfn, false, "No RSS engine is available\n"); + return ECORE_INVAL; + } + if (src_id >= RESC_NUM(p_hwfn, ECORE_RSS_ENG)) { u8 min, max; min = (u8)RESC_START(p_hwfn, ECORE_RSS_ENG); - max = min + RESC_NUM(p_hwfn, ECORE_RSS_ENG); + max = min + RESC_NUM(p_hwfn, ECORE_RSS_ENG) - 1; DP_NOTICE(p_hwfn, true, "rss_eng id [%d] is not valid, available indices [%d - %d]\n", src_id, min, max); @@ -6229,17 +8111,17 @@ enum _ecore_status_t ecore_llh_set_function_as_default(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { - if (OSAL_GET_BIT(ECORE_MF_NEED_DEF_PF, &p_hwfn->p_dev->mf_bits)) { + if (OSAL_TEST_BIT(ECORE_MF_NEED_DEF_PF, 
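/*
 * A minimal sketch, not applied by this patch: the "- 1" fixes above
 * make the error messages report an inclusive range. With RESC_START 16
 * and RESC_NUM 8 the valid absolute ids are [16..23]; the old text
 * claimed [16..24].
 */
#include <stdio.h>

int main(void)
{
	unsigned int start = 16, num = 8;

	printf("valid indices [%u - %u]\n", start, start + num - 1);
	/* Prints: valid indices [16 - 23] */
	return 0;
}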
&p_hwfn->p_dev->mf_bits)) { ecore_wr(p_hwfn, p_ptt, NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR, 1 << p_hwfn->abs_pf_id / 2); ecore_wr(p_hwfn, p_ptt, PRS_REG_MSG_INFO, 0); return ECORE_SUCCESS; + } else { + DP_NOTICE(p_hwfn, false, + "This function can't be set as default\n"); + return ECORE_INVAL; } - - DP_NOTICE(p_hwfn, false, - "This function can't be set as default\n"); - return ECORE_INVAL; } static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn, @@ -6366,7 +8248,6 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn, DP_ERR(p_hwfn, "Invalid coalesce value - %d\n", coalesce); return ECORE_INVAL; } - timeset = (u8)(coalesce >> timer_res); rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, @@ -6385,7 +8266,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn, /* Calculate final WFQ values for all vports and configure it. * After this configuration each vport must have - * approx min rate = wfq * min_pf_rate / ECORE_WFQ_UNIT + * approx min rate = vport_wfq * min_pf_rate / ECORE_WFQ_UNIT */ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -6400,7 +8281,7 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn, u32 wfq_speed = p_hwfn->qm_info.wfq_data[i].min_speed; vport_params[i].wfq = (wfq_speed * ECORE_WFQ_UNIT) / - min_pf_rate; + min_pf_rate; ecore_init_vport_wfq(p_hwfn, p_ptt, vport_params[i].first_tx_pq_id, vport_params[i].wfq); @@ -6408,6 +8289,7 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn, } static void ecore_init_wfq_default_param(struct ecore_hwfn *p_hwfn) + { int i; @@ -6447,8 +8329,7 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn, num_vports = p_hwfn->qm_info.num_vports; -/* Accounting for the vports which are configured for WFQ explicitly */ - + /* Accounting for the vports which are configured for WFQ explicitly */ for (i = 0; i < num_vports; i++) { u32 tmp_speed; @@ -6466,24 +8347,23 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn, /* validate possible error cases */ if (req_rate < min_pf_rate / ECORE_WFQ_UNIT) { - DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, - "Vport [%d] - Requested rate[%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n", - vport_id, req_rate, min_pf_rate); + DP_ERR(p_hwfn, + "Vport [%d] - Requested rate[%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n", + vport_id, req_rate, min_pf_rate); return ECORE_INVAL; } /* TBD - for number of vports greater than 100 */ if (num_vports > ECORE_WFQ_UNIT) { - DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, - "Number of vports is greater than %d\n", - ECORE_WFQ_UNIT); + DP_ERR(p_hwfn, "Number of vports is greater than %d\n", + ECORE_WFQ_UNIT); return ECORE_INVAL; } if (total_req_min_rate > min_pf_rate) { - DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, - "Total requested min rate for all vports[%d Mbps] is greater than configured PF min rate[%d Mbps]\n", - total_req_min_rate, min_pf_rate); + DP_ERR(p_hwfn, + "Total requested min rate for all vports[%d Mbps] is greater than configured PF min rate[%d Mbps]\n", + total_req_min_rate, min_pf_rate); return ECORE_INVAL; } @@ -6493,9 +8373,9 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn, /* validate if non requested get < 1% of min bw */ if (left_rate_per_vp < min_pf_rate / ECORE_WFQ_UNIT) { - DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, - "Non WFQ configured vports rate [%d Mbps] is less than one percent of configured PF min 
rate[%d Mbps]\n", - left_rate_per_vp, min_pf_rate); + DP_ERR(p_hwfn, + "Non WFQ configured vports rate [%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n", + left_rate_per_vp, min_pf_rate); return ECORE_INVAL; } @@ -6621,8 +8501,8 @@ void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev, /* TBD - for multiple hardware functions - that is 100 gig */ if (ECORE_IS_CMT(p_dev)) { - DP_VERBOSE(p_dev, ECORE_MSG_LINK, - "WFQ configuration is not supported for this device\n"); + DP_ERR(p_dev, + "WFQ configuration is not supported for this device\n"); return; } @@ -6778,7 +8658,7 @@ void ecore_clean_wfq_db(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) OSAL_MEMSET(p_hwfn->qm_info.wfq_data, 0, sizeof(*p_hwfn->qm_info.wfq_data) * - p_hwfn->qm_info.num_vports); + p_hwfn->qm_info.num_vports); } int ecore_device_num_engines(struct ecore_dev *p_dev) @@ -6804,6 +8684,15 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, ((u8 *)fw_lsb)[1] = mac[4]; } +void ecore_set_dev_access_enable(struct ecore_dev *p_dev, bool b_enable) +{ + if (p_dev->recov_in_prog != !b_enable) { + DP_INFO(p_dev, "%s access to the device\n", + b_enable ? "Enable" : "Disable"); + p_dev->recov_in_prog = !b_enable; + } +} + void ecore_set_platform_str(struct ecore_hwfn *p_hwfn, char *buf_str, u32 buf_size) { @@ -6819,5 +8708,19 @@ void ecore_set_platform_str(struct ecore_hwfn *p_hwfn, bool ecore_is_mf_fip_special(struct ecore_dev *p_dev) { - return !!OSAL_GET_BIT(ECORE_MF_FIP_SPECIAL, &p_dev->mf_bits); + return !!OSAL_TEST_BIT(ECORE_MF_FIP_SPECIAL, &p_dev->mf_bits); } + +bool ecore_is_dscp_to_tc_capable(struct ecore_dev *p_dev) +{ + return !!OSAL_TEST_BIT(ECORE_MF_DSCP_TO_TC_MAP, &p_dev->mf_bits); +} + +u8 ecore_get_num_funcs_on_engine(struct ecore_hwfn *p_hwfn) +{ + return p_hwfn->num_funcs_on_engine; +} + +#ifdef _NTDDK_ +#pragma warning(pop) +#endif diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h index 9ddf502eb..1ffe286d7 100644 --- a/drivers/net/qede/base/ecore_dev_api.h +++ b/drivers/net/qede/base/ecore_dev_api.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_DEV_API_H__ #define __ECORE_DEV_API_H__ @@ -11,6 +11,15 @@ #include "ecore_chain.h" #include "ecore_int_api.h" +#define ECORE_DEFAULT_ILT_PAGE_SIZE 4 + +struct ecore_wake_info { + u32 wk_info; + u32 wk_details; + u32 wk_pkt_len; + u8 wk_buffer[256]; +}; + /** * @brief ecore_init_dp - initialize the debug level * @@ -24,6 +33,26 @@ void ecore_init_dp(struct ecore_dev *p_dev, u8 dp_level, void *dp_ctx); +/** + * @brief ecore_init_int_dp - initialize the internal debug level + * + * @param p_dev + * @param dp_module + * @param dp_level + */ +void ecore_init_int_dp(struct ecore_dev *p_dev, + u32 dp_module, + u8 dp_level); + +/** + * @brief ecore_dp_internal_log - store into internal log + * + * @param p_dev + * @param buf + * @param len + */ +void ecore_dp_internal_log(struct ecore_dev *cdev, char *fmt, ...); + /** * @brief ecore_init_struct - initialize the device structure to * its defaults @@ -84,7 +113,7 @@ struct ecore_drv_load_params { #define ECORE_LOAD_REQ_LOCK_TO_NONE 255 /* Action to take in case the MFW doesn't support timeout values other - * than default and none. + * then default and none. 
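/*
 * A minimal sketch, not applied by this patch: the WFQ weight computed
 * above is wfq = req_rate * ECORE_WFQ_UNIT / min_pf_rate, so each vport
 * gets back approximately req_rate when the PF min rate is shared out.
 * The unit value below is a placeholder for ECORE_WFQ_UNIT.
 */
#include <stdio.h>

#define DEMO_WFQ_UNIT 100 /* placeholder */

int main(void)
{
	unsigned int min_pf_rate = 10000; /* Mbps */
	unsigned int req_rate = 2500;     /* Mbps */
	unsigned int wfq = req_rate * DEMO_WFQ_UNIT / min_pf_rate;

	printf("wfq %u -> approx %u Mbps\n",
	       wfq, wfq * min_pf_rate / DEMO_WFQ_UNIT);
	/* Prints: wfq 25 -> approx 2500 Mbps */
	return 0;
}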
*/ enum ecore_mfw_timeout_fallback mfw_timeout_fallback; @@ -104,10 +133,12 @@ struct ecore_hw_init_params { /* Interrupt mode [msix, inta, etc.] to use */ enum ecore_int_mode int_mode; - /* NPAR tx switching to be used for vports configured for tx-switching - */ + /* NPAR tx switching to be used for vports configured for tx-switching */ bool allow_npar_tx_switch; + /* PCI relax ordering to be configured by MFW or ecore client */ + enum ecore_pci_rlx_odr pci_rlx_odr_mode; + /* Binary fw data pointer in binary fw file */ const u8 *bin_fw_data; @@ -161,61 +192,23 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev); */ enum _ecore_status_t ecore_hw_stop_fastpath(struct ecore_dev *p_dev); -#ifndef LINUX_REMOVE /** - * @brief ecore_prepare_hibernate -should be called when + * @brief ecore_hw_hibernate_prepare -should be called when * the system is going into the hibernate state * * @param p_dev * */ -void ecore_prepare_hibernate(struct ecore_dev *p_dev); - -enum ecore_db_rec_width { - DB_REC_WIDTH_32B, - DB_REC_WIDTH_64B, -}; - -enum ecore_db_rec_space { - DB_REC_KERNEL, - DB_REC_USER, -}; +void ecore_hw_hibernate_prepare(struct ecore_dev *p_dev); /** - * @brief db_recovery_add - add doorbell information to the doorbell - * recovery mechanism. + * @brief ecore_hw_hibernate_resume -should be called when the system is + resuming from D3 power state and before calling ecore_hw_init. * - * @param p_dev - * @param db_addr - doorbell address - * @param db_data - address of where db_data is stored - * @param db_width - doorbell is 32b pr 64b - * @param db_space - doorbell recovery addresses are user or kernel space - */ -enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev, - void OSAL_IOMEM *db_addr, - void *db_data, - enum ecore_db_rec_width db_width, - enum ecore_db_rec_space db_space); - -/** - * @brief db_recovery_del - remove doorbell information from the doorbell - * recovery mechanism. db_data serves as key (db_addr is not unique). + * @param p_hwfn * - * @param cdev - * @param db_addr - doorbell address - * @param db_data - address where db_data is stored. Serves as key for the - * entry to delete. 
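/*
 * A minimal sketch, not applied by this patch: filling the prepare
 * params added above before probing. Assumes ecore_dev_api.h is on the
 * include path; all values are illustrative only.
 */
#include "ecore_dev_api.h"

static enum _ecore_status_t demo_probe(struct ecore_dev *p_dev)
{
	struct ecore_hw_prepare_params params = { 0 };

	params.initiate_pf_flr = true;
	params.roce_edpm_mode = ECORE_ROCE_EDPM_MODE_ENABLE;
	params.num_vf_cnqs = 16;  /* illustrative VF CNQ request */
	params.fs_accuracy = 0;   /* illustrative accuracy       */
	params.b_sriov_disable = false;

	return ecore_hw_prepare(p_dev, &params);
}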
*/ -enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev, - void OSAL_IOMEM *db_addr, - void *db_data); - -static OSAL_INLINE bool ecore_is_mf_ufp(struct ecore_hwfn *p_hwfn) -{ - return !!OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits); -} - -#endif +void ecore_hw_hibernate_resume(struct ecore_dev *p_dev); /** * @brief ecore_hw_start_fastpath -restart fastpath traffic, @@ -246,6 +239,12 @@ enum ecore_hw_prepare_result { ECORE_HW_PREPARE_BAD_IGU, }; +enum ECORE_ROCE_EDPM_MODE { + ECORE_ROCE_EDPM_MODE_ENABLE = 0, + ECORE_ROCE_EDPM_MODE_FORCE_ON = 1, + ECORE_ROCE_EDPM_MODE_DISABLE = 2, +}; + struct ecore_hw_prepare_params { /* Personality to initialize */ int personality; @@ -256,6 +255,9 @@ struct ecore_hw_prepare_params { /* Check the reg_fifo after any register access */ bool chk_reg_fifo; + /* Monitored address by ecore_rd()/ecore_wr() */ + u32 monitored_hw_addr; + /* Request the MFW to initiate PF FLR */ bool initiate_pf_flr; @@ -278,8 +280,20 @@ struct ecore_hw_prepare_params { /* Indicates whether this PF serves a storage target */ bool b_is_target; + /* EDPM can be enabled/forced_on/disabled */ + u8 roce_edpm_mode; + /* retry count for VF acquire on channel timeout */ u8 acquire_retry_cnt; + + /* Num of VF CNQs resources that will be requested */ + u8 num_vf_cnqs; + + /* Flow steering statistics accuracy */ + u8 fs_accuracy; + + /* Disable SRIOV */ + bool b_sriov_disable; }; /** @@ -300,6 +314,43 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, */ void ecore_hw_remove(struct ecore_dev *p_dev); +/** + * @brief ecore_set_nwuf_reg - + * + * @param p_dev + * @param reg_idx - Index of the pattern register + * @param pattern_size - size of pattern + * @param crc - CRC value of patter & mask + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t ecore_set_nwuf_reg(struct ecore_dev *p_dev, + u32 reg_idx, u32 pattern_size, u32 crc); + +/** + * @brief ecore_get_wake_info - get magic packet buffer + * + * @param p_hwfn + * @param p_ppt + * @param wake_info - pointer to ecore_wake_info buffer + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t ecore_get_wake_info(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_wake_info *wake_info); + +/** + * @brief ecore_wol_buffer_clear - Clear magic package buffer + * + * @param p_hwfn + * @param p_ptt + * + * @return void + */ +void ecore_wol_buffer_clear(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); + /** * @brief ecore_ptt_acquire - Allocate a PTT window * @@ -325,6 +376,18 @@ struct ecore_ptt *ecore_ptt_acquire(struct ecore_hwfn *p_hwfn); void ecore_ptt_release(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); +/** + * @brief ecore_get_dev_name - get device name, e.g., "BB B0" + * + * @param p_hwfn + * @param name - this is where the name will be written to + * @param max_chars - maximum chars that can be written to name including '\0' + */ +void ecore_get_dev_name(struct ecore_dev *p_dev, + u8 *name, + u8 max_chars); + +#ifndef __EXTRACT__LINUX__IF__ struct ecore_eth_stats_common { u64 no_buff_discards; u64 packet_too_big_discard; @@ -417,6 +480,7 @@ struct ecore_eth_stats { struct ecore_eth_stats_ah ah; }; }; +#endif /** * @brief ecore_chain_alloc - Allocate and initialize a chain @@ -488,6 +552,7 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn, u8 src_id, u8 *dst_id); +#define ECORE_LLH_DONT_CARE 0 /** * @brief ecore_llh_get_num_ppfid - Return the allocated number of LLH filter * banks that are allocated to the PF. 
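/*
 * A minimal sketch, not applied by this patch: the usual pairing for
 * the PTT window API declared above. Assumes ecore_dev_api.h is on the
 * include path.
 */
#include "ecore_dev_api.h"

static void demo_with_ptt(struct ecore_hwfn *p_hwfn)
{
	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);

	if (!p_ptt)
		return; /* no free window; the caller may retry */

	/* ... register access through p_ptt ... */

	ecore_ptt_release(p_hwfn, p_ptt);
}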
@@ -548,7 +613,7 @@ enum _ecore_status_t ecore_llh_set_roce_affinity(struct ecore_dev *p_dev, * @return enum _ecore_status_t */ enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid, - u8 mac_addr[ETH_ALEN]); + u8 mac_addr[ECORE_ETH_ALEN]); /** * @brief ecore_llh_remove_mac_filter - Remove a LLH MAC filter from the given @@ -559,7 +624,7 @@ enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid, * @param mac_addr - MAC to remove */ void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid, - u8 mac_addr[ETH_ALEN]); + u8 mac_addr[ECORE_ETH_ALEN]); enum ecore_llh_prot_filter_type_t { ECORE_LLH_FILTER_ETHERTYPE, @@ -571,6 +636,30 @@ enum ecore_llh_prot_filter_type_t { ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT }; +/** + * @brief ecore_llh_add_dst_tcp_port_filter - Add a destination tcp port + * LLH filter into the given filter bank. + * + * @param p_dev + * @param dest_port - destination port to add + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t +ecore_llh_add_dst_tcp_port_filter(struct ecore_dev *p_dev, u16 dest_port); + +/** + * @brief ecore_llh_add_src_tcp_port_filter - Add a source tcp port filter + * into the given filter bank. + * + * @param p_dev + * @param src_port - source port to add + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t +ecore_llh_add_src_tcp_port_filter(struct ecore_dev *p_dev, u16 src_port); + /** * @brief ecore_llh_add_protocol_filter - Add a LLH protocol filter into the * given filter bank. @@ -588,6 +677,30 @@ ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid, enum ecore_llh_prot_filter_type_t type, u16 source_port_or_eth_type, u16 dest_port); +/** + * @brief ecore_llh_remove_dst_tcp_port_filter - Remove a destination tcp + * port LLH filter from the given filter bank. + * + * @param p_dev + * @param dest_port - destination port + * + * @return enum _ecore_status_t + */ +void ecore_llh_remove_dst_tcp_port_filter(struct ecore_dev *p_dev, + u16 dest_port); + +/** + * @brief ecore_llh_remove_src_tcp_port_filter - Remove a source tcp + * port LLH filter from the given filter bank. + * + * @param p_dev + * @param src_port - source port + * + * @return enum _ecore_status_t + */ +void ecore_llh_remove_src_tcp_port_filter(struct ecore_dev *p_dev, + u16 src_port); + /** * @brief ecore_llh_remove_protocol_filter - Remove a LLH protocol filter from * the given filter bank. @@ -677,6 +790,16 @@ enum _ecore_status_t ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal, u16 tx_coal, void *p_handle); +/** + * @brief - Recalculate feature distributions based on HW resources and + * user inputs. Currently this affects RDMA_CNQ, PF_L2_QUE and VF_L2_QUE. + * As a result, this must not be called while RDMA is active or while VFs + * are enabled. + * + * @param p_hwfn + */ +void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn); + /** * @brief ecore_pglueb_set_pfid_enable - Enable or disable PCI BUS MASTER * @@ -690,6 +813,116 @@ enum _ecore_status_t ecore_pglueb_set_pfid_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, bool b_enable); +#ifndef __EXTRACT__LINUX__IF__ +enum ecore_db_rec_width { + DB_REC_WIDTH_32B, + DB_REC_WIDTH_64B, +}; + +enum ecore_db_rec_space { + DB_REC_KERNEL, + DB_REC_USER, +}; +#endif + +/** + * @brief db_recovery_add - add doorbell information to the doorbell + * recovery mechanism. 
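/*
 * A minimal sketch, not applied by this patch: add/remove pairing for
 * the LLH MAC filter API above, which now takes ECORE_ETH_ALEN-sized
 * addresses. Assumes ecore_dev_api.h is on the include path.
 */
#include "ecore_dev_api.h"

static enum _ecore_status_t
demo_llh_mac(struct ecore_dev *p_dev, u8 ppfid, u8 mac[ECORE_ETH_ALEN])
{
	enum _ecore_status_t rc;

	rc = ecore_llh_add_mac_filter(p_dev, ppfid, mac);
	if (rc != ECORE_SUCCESS)
		return rc;

	/* ... traffic classified to this PF by MAC ... */

	ecore_llh_remove_mac_filter(p_dev, ppfid, mac);
	return ECORE_SUCCESS;
}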
+ * + * @param p_dev + * @param db_addr - doorbell address + * @param db_data - address of where db_data is stored + * @param db_width - doorbell is 32b or 64b + * @param db_space - doorbell recovery addresses are user or kernel space + */ +enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev, + void OSAL_IOMEM *db_addr, + void *db_data, + enum ecore_db_rec_width db_width, + enum ecore_db_rec_space db_space); + +/** + * @brief db_recovery_del - remove doorbell information from the doorbell + * recovery mechanism. db_data serves as key (db_addr is not unique). + * + * @param p_dev + * @param db_addr - doorbell address + * @param db_data - address where db_data is stored. Serves as key for the + * entry to delete. + */ +enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev, + void OSAL_IOMEM *db_addr, + void *db_data); + +#ifndef __EXTRACT__LINUX__THROW__ +static OSAL_INLINE bool ecore_is_mf_ufp(struct ecore_hwfn *p_hwfn) +{ + return !!OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits); +} +#endif + +/** + * @brief ecore_set_dev_access_enable - Enable or disable access to the device + * + * @param p_dev + * @param b_enable - true/false + */ +void ecore_set_dev_access_enable(struct ecore_dev *p_dev, bool b_enable); + +/** + * @brief ecore_set_ilt_page_size - Set ILT page size + * + * @param p_dev + * @param ilt_size + * + * @return void + */ +void ecore_set_ilt_page_size(struct ecore_dev *p_dev, u8 ilt_size); + +/** + * @brief Create LAG + * + * Called when two ports of the same device are bonded or unbonded, + * or when link status changes. + * + * @param lag_type: LAG_TYPE_NONE: Disable lag + * LAG_TYPE_ACTIVEACTIVE: Utilize all ports + * LAG_TYPE_ACTIVEBACKUP: Configure all queues to + * active port + * @param active_ports: Bitmap, each bit represents whether the + * port is active or not (1 - active) + * @param link_change_cb: Callback function to call if port + * settings change, such as dcbx. + * @param cxt: Parameter that will be passed to the + * link_change_cb function + * + * @param dev + * @return enum _ecore_status_t + */ +enum _ecore_status_t ecore_lag_create(struct ecore_dev *dev, + enum ecore_lag_type lag_type, + void (*link_change_cb)(void *cxt), + void *cxt, + u8 active_ports); +/** + * @brief Modify lag link status of a given port + * + * @param port_id: the port id that changed + * @param link_active: current link state + */ +enum _ecore_status_t ecore_lag_modify(struct ecore_dev *dev, + u8 port_id, + u8 link_active); + +/** + * @brief Exit lag mode + * + * @param dev + */ +enum _ecore_status_t ecore_lag_destroy(struct ecore_dev *dev); + +bool ecore_lag_is_active(struct ecore_hwfn *p_hwfn); + /** * @brief Whether FIP discovery fallback special mode is enabled or not. * * @@ -698,4 +931,23 @@ enum _ecore_status_t ecore_pglueb_set_pfid_enable(struct ecore_hwfn *p_hwfn, * @return true if device is in FIP special mode, false otherwise. */ bool ecore_is_mf_fip_special(struct ecore_dev *p_dev); + +/** + * @brief Whether device allows DSCP to TC mapping or not. + * + * @param p_dev + * + * @return true if device allows dscp to tc mapping. + */ +bool ecore_is_dscp_to_tc_capable(struct ecore_dev *p_dev); + +/** + * @brief Returns the number of PFs. + * + * @param p_hwfn + * + * @return u8 - Number of PFs.
+ */ +u8 ecore_get_num_funcs_on_engine(struct ecore_hwfn *p_hwfn); + #endif diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h index f5b11eb28..e21e5a5df 100644 --- a/drivers/net/qede/base/ecore_gtt_reg_addr.h +++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h @@ -1,60 +1,75 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef GTT_REG_ADDR_H #define GTT_REG_ADDR_H -/* Win 2 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_IGU_CMD 0x00f000UL +/** + * Win 2 + */ +#define GTT_BAR0_MAP_REG_IGU_CMD 0x00f000UL /* Access:RW DataWidth:0x20 */ -/* Win 3 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_TSDM_RAM 0x010000UL +/** + * Win 3 + */ +#define GTT_BAR0_MAP_REG_TSDM_RAM 0x010000UL /* Access:RW DataWidth:0x20 */ -/* Win 4 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_MSDM_RAM 0x011000UL +/** + * Win 4 + */ +#define GTT_BAR0_MAP_REG_MSDM_RAM 0x011000UL /* Access:RW DataWidth:0x20 */ -/* Win 5 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_MSDM_RAM_1024 0x012000UL +/** + * Win 5 + */ +#define GTT_BAR0_MAP_REG_MSDM_RAM_1024 0x012000UL /* Access:RW DataWidth:0x20 */ -/* Win 6 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_MSDM_RAM_2048 0x013000UL +/** + * Win 6 + */ +#define GTT_BAR0_MAP_REG_MSDM_RAM_2048 0x013000UL /* Access:RW DataWidth:0x20 */ -/* Win 7 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_USDM_RAM 0x014000UL +/** + * Win 7 + */ +#define GTT_BAR0_MAP_REG_USDM_RAM 0x014000UL /* Access:RW DataWidth:0x20 */ -/* Win 8 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_USDM_RAM_1024 0x015000UL +/** + * Win 8 + */ +#define GTT_BAR0_MAP_REG_USDM_RAM_1024 0x015000UL /* Access:RW DataWidth:0x20 */ -/* Win 9 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_USDM_RAM_2048 0x016000UL +/** + * Win 9 + */ +#define GTT_BAR0_MAP_REG_USDM_RAM_2048 0x016000UL /* Access:RW DataWidth:0x20 */ -/* Win 10 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_XSDM_RAM 0x017000UL +/** + * Win 10 + */ +#define GTT_BAR0_MAP_REG_XSDM_RAM 0x017000UL /* Access:RW DataWidth:0x20 */ -/* Win 11 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_XSDM_RAM_1024 0x018000UL +/** + * Win 11 + */ +#define GTT_BAR0_MAP_REG_XSDM_RAM_1024 0x018000UL /* Access:RW DataWidth:0x20 */ -/* Win 12 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_YSDM_RAM 0x019000UL +/** + * Win 12 + */ +#define GTT_BAR0_MAP_REG_YSDM_RAM 0x019000UL /* Access:RW DataWidth:0x20 */ -/* Win 13 */ -//Access:RW DataWidth:0x20 // -#define GTT_BAR0_MAP_REG_PSDM_RAM 0x01a000UL +/** + * Win 13 + */ +#define GTT_BAR0_MAP_REG_PSDM_RAM 0x01a000UL /* Access:RW DataWidth:0x20 */ -/* Win 14 */ +/** + * Win 14 + */ +#define GTT_BAR0_MAP_REG_IGU_CMD_EXT_E5 0x01b000UL /* Access:RW DataWidth:0x20 */ #endif diff --git a/drivers/net/qede/base/ecore_gtt_values.h b/drivers/net/qede/base/ecore_gtt_values.h index 2035bed5c..6752178b6 100644 --- a/drivers/net/qede/base/ecore_gtt_values.h +++ b/drivers/net/qede/base/ecore_gtt_values.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. 
- * www.cavium.com + * www.marvell.com */ - #ifndef __PREVENT_PXP_GLOBAL_WIN__ static u32 pxp_global_win[] = { diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h index 638178c30..80812d4ce 100644 --- a/drivers/net/qede/base/ecore_hsi_common.h +++ b/drivers/net/qede/base/ecore_hsi_common.h @@ -3191,7 +3191,7 @@ enum nvm_image_type { NVM_TYPE_INIT_HW = 0x19, NVM_TYPE_DEFAULT_CFG = 0x1a, NVM_TYPE_MDUMP = 0x1b, - NVM_TYPE_META = 0x1c, + NVM_TYPE_NVM_META = 0x1c, /* @DPDK */ NVM_TYPE_ISCSI_CFG = 0x1d, NVM_TYPE_FCOE_CFG = 0x1f, NVM_TYPE_ETH_PHY_FW1 = 0x20, diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c index 1db39d6a3..e4f9b5e85 100644 --- a/drivers/net/qede/base/ecore_hw.c +++ b/drivers/net/qede/base/ecore_hw.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "ecore_hsi_common.h" #include "ecore_status.h" @@ -14,6 +14,14 @@ #include "ecore_iov_api.h" #include "ecore_gtt_values.h" #include "ecore_dev_api.h" +#include "ecore_mcp.h" + +#ifdef _NTDDK_ +#pragma warning(push) +#pragma warning(disable : 28167) +#pragma warning(disable : 28123) +#pragma warning(disable : 28121) +#endif #ifndef ASIC_ONLY #define ECORE_EMUL_FACTOR 2000 @@ -26,19 +34,19 @@ #define ECORE_BAR_INVALID_OFFSET (OSAL_CPU_TO_LE32(-1)) struct ecore_ptt { - osal_list_entry_t list_entry; - unsigned int idx; - struct pxp_ptt_entry pxp; - u8 hwfn_id; + osal_list_entry_t list_entry; + unsigned int idx; + struct pxp_ptt_entry pxp; + u8 hwfn_id; }; struct ecore_ptt_pool { - osal_list_t free_list; - osal_spinlock_t lock; /* ptt synchronized access */ - struct ecore_ptt ptts[PXP_EXTERNAL_BAR_PF_WINDOW_NUM]; + osal_list_t free_list; + osal_spinlock_t lock; /* ptt synchronized access */ + struct ecore_ptt ptts[PXP_EXTERNAL_BAR_PF_WINDOW_NUM]; }; -void __ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn) +static void __ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn) { OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_ptt_pool); p_hwfn->p_ptt_pool = OSAL_NULL; @@ -71,12 +79,13 @@ enum _ecore_status_t ecore_ptt_pool_alloc(struct ecore_hwfn *p_hwfn) p_hwfn->p_ptt_pool = p_pool; #ifdef CONFIG_ECORE_LOCK_ALLOC - if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_pool->lock)) { + if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_pool->lock, "ptt_lock")) { __ecore_ptt_pool_free(p_hwfn); return ECORE_NOMEM; } #endif OSAL_SPIN_LOCK_INIT(&p_pool->lock); + return ECORE_SUCCESS; } @@ -122,10 +131,10 @@ struct ecore_ptt *ecore_ptt_acquire(struct ecore_hwfn *p_hwfn) /* Take the free PTT from the list */ for (i = 0; i < ECORE_BAR_ACQUIRE_TIMEOUT; i++) { OSAL_SPIN_LOCK(&p_hwfn->p_ptt_pool->lock); + if (!OSAL_LIST_IS_EMPTY(&p_hwfn->p_ptt_pool->free_list)) { - p_ptt = OSAL_LIST_FIRST_ENTRY( - &p_hwfn->p_ptt_pool->free_list, - struct ecore_ptt, list_entry); + p_ptt = OSAL_LIST_FIRST_ENTRY(&p_hwfn->p_ptt_pool->free_list, + struct ecore_ptt, list_entry); OSAL_LIST_REMOVE_ENTRY(&p_ptt->list_entry, &p_hwfn->p_ptt_pool->free_list); @@ -141,17 +150,15 @@ struct ecore_ptt *ecore_ptt_acquire(struct ecore_hwfn *p_hwfn) OSAL_MSLEEP(1); } - DP_NOTICE(p_hwfn, true, - "PTT acquire timeout - failed to allocate PTT\n"); + DP_NOTICE(p_hwfn, true, "PTT acquire timeout - failed to allocate PTT\n"); return OSAL_NULL; } -void ecore_ptt_release(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) +void ecore_ptt_release(struct ecore_hwfn *p_hwfn, + 
struct ecore_ptt *p_ptt) { /* This PTT should not be set to pretend if it is being released */ - /* TODO - add some pretend sanity checks, to make sure pretend - * isn't set on this ptt - */ + /* TODO - add some pretend sanity checks, to make sure pretend isn't set on this ptt */ OSAL_SPIN_LOCK(&p_hwfn->p_ptt_pool->lock); OSAL_LIST_PUSH_HEAD(&p_ptt->list_entry, &p_hwfn->p_ptt_pool->free_list); @@ -167,17 +174,18 @@ static u32 ecore_ptt_get_hw_addr(struct ecore_ptt *p_ptt) static u32 ecore_ptt_config_addr(struct ecore_ptt *p_ptt) { return PXP_PF_WINDOW_ADMIN_PER_PF_START + - p_ptt->idx * sizeof(struct pxp_ptt_entry); + p_ptt->idx * sizeof(struct pxp_ptt_entry); } u32 ecore_ptt_get_bar_addr(struct ecore_ptt *p_ptt) { return PXP_EXTERNAL_BAR_PF_WINDOW_START + - p_ptt->idx * PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE; + p_ptt->idx * PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE; } void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u32 new_hw_addr) + struct ecore_ptt *p_ptt, + u32 new_hw_addr) { u32 prev_hw_addr; @@ -201,7 +209,8 @@ void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn, } static u32 ecore_set_ptt(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u32 hw_addr) + struct ecore_ptt *p_ptt, + u32 hw_addr) { u32 win_hw_addr = ecore_ptt_get_hw_addr(p_ptt); u32 offset; @@ -257,8 +266,8 @@ static bool ecore_is_reg_fifo_empty(struct ecore_hwfn *p_hwfn, return is_empty; } -void ecore_wr(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u32 hw_addr, u32 val) +void ecore_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr, + u32 val) { bool prev_fifo_err; u32 bar_addr; @@ -271,13 +280,16 @@ void ecore_wr(struct ecore_hwfn *p_hwfn, "bar_addr 0x%x, hw_addr 0x%x, val 0x%x\n", bar_addr, hw_addr, val); + OSAL_WARN((hw_addr == p_hwfn->p_dev->monitored_hw_addr), + "accessed hw_addr 0x%x\n", hw_addr); + #ifndef ASIC_ONLY if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) OSAL_UDELAY(100); #endif OSAL_WARN(!prev_fifo_err && !ecore_is_reg_fifo_empty(p_hwfn, p_ptt), - "reg_fifo err was caused by a call to ecore_wr(0x%x, 0x%x)\n", + "reg_fifo error was caused hw_addr 0x%x, val 0x%x\n", hw_addr, val); } @@ -295,6 +307,9 @@ u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr) "bar_addr 0x%x, hw_addr 0x%x, val 0x%x\n", bar_addr, hw_addr, val); + OSAL_WARN((hw_addr == p_hwfn->p_dev->monitored_hw_addr), + "accessed hw_addr 0x%x\n", hw_addr); + #ifndef ASIC_ONLY if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) OSAL_UDELAY(100); @@ -307,12 +322,20 @@ u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr) return val; } +enum memcpy_action { + TO_DEVICE, + FROM_DEVICE, + MEMZERO_DEVICE +}; + static void ecore_memcpy_hw(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, void *addr, - u32 hw_addr, osal_size_t n, bool to_device) + u32 hw_addr, + osal_size_t n, + enum memcpy_action action) { - u32 dw_count, *host_addr, hw_offset; + u32 dw_count, *host_addr = OSAL_NULL, hw_offset; osal_size_t quota, done = 0; u32 OSAL_IOMEM *reg_addr; @@ -328,16 +351,30 @@ static void ecore_memcpy_hw(struct ecore_hwfn *p_hwfn, } dw_count = quota / 4; - host_addr = (u32 *)((u8 *)addr + done); + if (addr) + host_addr = (u32 *)((u8 *)addr + done); reg_addr = (u32 OSAL_IOMEM *)OSAL_REG_ADDR(p_hwfn, hw_offset); - if (to_device) + switch (action) { + case TO_DEVICE: while (dw_count--) - DIRECT_REG_WR(p_hwfn, reg_addr++, *host_addr++); - else + DIRECT_REG_WR(p_hwfn, reg_addr++, + *host_addr++); + break; + case FROM_DEVICE: while (dw_count--) - *host_addr++ = DIRECT_REG_RD(p_hwfn, - 
reg_addr++); + *host_addr++ = + DIRECT_REG_RD(p_hwfn, reg_addr++); + break; + case MEMZERO_DEVICE: + while (dw_count--) + DIRECT_REG_WR(p_hwfn, reg_addr++, 0); + break; + default: + DP_NOTICE(p_hwfn, true, + "Invalid memcpy_action %d\n", action); + return; + } done += quota; } @@ -351,7 +388,7 @@ void ecore_memcpy_from(struct ecore_hwfn *p_hwfn, "hw_addr 0x%x, dest %p hw_addr 0x%x, size %lu\n", hw_addr, dest, hw_addr, (unsigned long)n); - ecore_memcpy_hw(p_hwfn, p_ptt, dest, hw_addr, n, false); + ecore_memcpy_hw(p_hwfn, p_ptt, dest, hw_addr, n, FROM_DEVICE); } void ecore_memcpy_to(struct ecore_hwfn *p_hwfn, @@ -362,7 +399,19 @@ void ecore_memcpy_to(struct ecore_hwfn *p_hwfn, "hw_addr 0x%x, hw_addr 0x%x, src %p size %lu\n", hw_addr, hw_addr, src, (unsigned long)n); - ecore_memcpy_hw(p_hwfn, p_ptt, src, hw_addr, n, true); + ecore_memcpy_hw(p_hwfn, p_ptt, src, hw_addr, n, TO_DEVICE); +} + +void ecore_memzero_hw(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u32 hw_addr, + osal_size_t n) +{ + DP_VERBOSE(p_hwfn, ECORE_MSG_HW, + "hw_addr 0x%x, hw_addr 0x%x, size %lu\n", + hw_addr, hw_addr, (unsigned long)n); + + ecore_memcpy_hw(p_hwfn, p_ptt, OSAL_NULL, hw_addr, n, MEMZERO_DEVICE); } void ecore_fid_pretend(struct ecore_hwfn *p_hwfn, @@ -373,8 +422,9 @@ void ecore_fid_pretend(struct ecore_hwfn *p_hwfn, SET_FIELD(control, PXP_PRETEND_CMD_IS_CONCRETE, 1); SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_FUNCTION, 1); -/* Every pretend undos prev pretends, including previous port pretend */ - + /* Every pretend undos previous pretends, including + * previous port pretend. + */ SET_FIELD(control, PXP_PRETEND_CMD_PORT, 0); SET_FIELD(control, PXP_PRETEND_CMD_USE_PORT, 0); SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_PORT, 1); @@ -388,7 +438,7 @@ void ecore_fid_pretend(struct ecore_hwfn *p_hwfn, REG_WR(p_hwfn, ecore_ptt_config_addr(p_ptt) + OFFSETOF(struct pxp_ptt_entry, pretend), - *(u32 *)&p_ptt->pxp.pretend); + *(u32 *)&p_ptt->pxp.pretend); } void ecore_port_pretend(struct ecore_hwfn *p_hwfn, @@ -404,10 +454,11 @@ void ecore_port_pretend(struct ecore_hwfn *p_hwfn, REG_WR(p_hwfn, ecore_ptt_config_addr(p_ptt) + OFFSETOF(struct pxp_ptt_entry, pretend), - *(u32 *)&p_ptt->pxp.pretend); + *(u32 *)&p_ptt->pxp.pretend); } -void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) +void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { u16 control = 0; @@ -420,7 +471,7 @@ void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) REG_WR(p_hwfn, ecore_ptt_config_addr(p_ptt) + OFFSETOF(struct pxp_ptt_entry, pretend), - *(u32 *)&p_ptt->pxp.pretend); + *(u32 *)&p_ptt->pxp.pretend); } void ecore_port_fid_pretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -458,23 +509,15 @@ u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid) return concrete_fid; } -/* Not in use @DPDK - * Ecore HW lock - * ============= - * Although the implementation is ready, today we don't have any flow that - * utliizes said locks - and we want to keep it this way. - * If this changes, this needs to be revisted. 
- */ - /* DMAE */ #define ECORE_DMAE_FLAGS_IS_SET(params, flag) \ ((params) != OSAL_NULL && \ GET_FIELD((params)->flags, DMAE_PARAMS_##flag)) -static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn, - const u8 is_src_type_grc, - const u8 is_dst_type_grc, +static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn, + const u8 is_src_type_grc, + const u8 is_dst_type_grc, struct dmae_params *p_params) { u8 src_pf_id, dst_pf_id, port_id; @@ -485,62 +528,61 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn, * 0- The source is the PCIe * 1- The source is the GRC. */ - opcode |= (is_src_type_grc ? dmae_cmd_src_grc : dmae_cmd_src_pcie) << - DMAE_CMD_SRC_SHIFT; + SET_FIELD(opcode, DMAE_CMD_SRC, + (is_src_type_grc ? dmae_cmd_src_grc + : dmae_cmd_src_pcie)); src_pf_id = ECORE_DMAE_FLAGS_IS_SET(p_params, SRC_PF_VALID) ? p_params->src_pf_id : p_hwfn->rel_pf_id; - opcode |= (src_pf_id & DMAE_CMD_SRC_PF_ID_MASK) << - DMAE_CMD_SRC_PF_ID_SHIFT; + SET_FIELD(opcode, DMAE_CMD_SRC_PF_ID, src_pf_id); /* The destination of the DMA can be: 0-None 1-PCIe 2-GRC 3-None */ - opcode |= (is_dst_type_grc ? dmae_cmd_dst_grc : dmae_cmd_dst_pcie) << - DMAE_CMD_DST_SHIFT; + SET_FIELD(opcode, DMAE_CMD_DST, + (is_dst_type_grc ? dmae_cmd_dst_grc + : dmae_cmd_dst_pcie)); dst_pf_id = ECORE_DMAE_FLAGS_IS_SET(p_params, DST_PF_VALID) ? p_params->dst_pf_id : p_hwfn->rel_pf_id; - opcode |= (dst_pf_id & DMAE_CMD_DST_PF_ID_MASK) << - DMAE_CMD_DST_PF_ID_SHIFT; + SET_FIELD(opcode, DMAE_CMD_DST_PF_ID, dst_pf_id); /* DMAE_E4_TODO need to check which value to specify here. */ - /* opcode |= (!b_complete_to_host)<< DMAE_CMD_C_DST_SHIFT; */ + /* SET_FIELD(opcode, DMAE_CMD_C_DST, !b_complete_to_host);*/ /* Whether to write a completion word to the completion destination: * 0-Do not write a completion word * 1-Write the completion word */ - opcode |= DMAE_CMD_COMP_WORD_EN_MASK << DMAE_CMD_COMP_WORD_EN_SHIFT; - opcode |= DMAE_CMD_SRC_ADDR_RESET_MASK << DMAE_CMD_SRC_ADDR_RESET_SHIFT; + SET_FIELD(opcode, DMAE_CMD_COMP_WORD_EN, 1); + SET_FIELD(opcode, DMAE_CMD_SRC_ADDR_RESET, 1); if (ECORE_DMAE_FLAGS_IS_SET(p_params, COMPLETION_DST)) - opcode |= 1 << DMAE_CMD_COMP_FUNC_SHIFT; + SET_FIELD(opcode, DMAE_CMD_COMP_FUNC, 1); /* swapping mode 3 - big endian there should be a define ifdefed in * the HSI somewhere. Since it is currently */ - opcode |= DMAE_CMD_ENDIANITY << DMAE_CMD_ENDIANITY_MODE_SHIFT; + SET_FIELD(opcode, DMAE_CMD_ENDIANITY_MODE, DMAE_CMD_ENDIANITY); port_id = (ECORE_DMAE_FLAGS_IS_SET(p_params, PORT_VALID)) ? 
p_params->port_id : p_hwfn->port_id; - opcode |= port_id << DMAE_CMD_PORT_ID_SHIFT; + SET_FIELD(opcode, DMAE_CMD_PORT_ID, port_id); /* reset source address in next go */ - opcode |= DMAE_CMD_SRC_ADDR_RESET_MASK << DMAE_CMD_SRC_ADDR_RESET_SHIFT; + SET_FIELD(opcode, DMAE_CMD_SRC_ADDR_RESET, 1); /* reset dest address in next go */ - opcode |= DMAE_CMD_DST_ADDR_RESET_MASK << DMAE_CMD_DST_ADDR_RESET_SHIFT; + SET_FIELD(opcode, DMAE_CMD_DST_ADDR_RESET, 1); /* SRC/DST VFID: all 1's - pf, otherwise VF id */ if (ECORE_DMAE_FLAGS_IS_SET(p_params, SRC_VF_VALID)) { - opcode |= (1 << DMAE_CMD_SRC_VF_ID_VALID_SHIFT); - opcode_b |= (p_params->src_vf_id << DMAE_CMD_SRC_VF_ID_SHIFT); + SET_FIELD(opcode, DMAE_CMD_SRC_VF_ID_VALID, 1); + SET_FIELD(opcode_b, DMAE_CMD_SRC_VF_ID, p_params->src_vf_id); } else { - opcode_b |= (DMAE_CMD_SRC_VF_ID_MASK << - DMAE_CMD_SRC_VF_ID_SHIFT); + SET_FIELD(opcode_b, DMAE_CMD_SRC_VF_ID, 0xFF); } if (ECORE_DMAE_FLAGS_IS_SET(p_params, DST_VF_VALID)) { - opcode |= 1 << DMAE_CMD_DST_VF_ID_VALID_SHIFT; - opcode_b |= p_params->dst_vf_id << DMAE_CMD_DST_VF_ID_SHIFT; + SET_FIELD(opcode, DMAE_CMD_DST_VF_ID_VALID, 1); + SET_FIELD(opcode_b, DMAE_CMD_DST_VF_ID, p_params->dst_vf_id); } else { - opcode_b |= DMAE_CMD_DST_VF_ID_MASK << DMAE_CMD_DST_VF_ID_SHIFT; + SET_FIELD(opcode_b, DMAE_CMD_DST_VF_ID, 0xFF); } p_hwfn->dmae_info.p_dmae_cmd->opcode = OSAL_CPU_TO_LE32(opcode); @@ -549,7 +591,8 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn, static u32 ecore_dmae_idx_to_go_cmd(u8 idx) { - OSAL_BUILD_BUG_ON((DMAE_REG_GO_C31 - DMAE_REG_GO_C0) != 31 * 4); + OSAL_BUILD_BUG_ON((DMAE_REG_GO_C31 - DMAE_REG_GO_C0) != + 31 * 4); /* All the DMAE 'go' registers form an array in internal memory */ return DMAE_REG_GO_C0 + (idx << 2); @@ -567,8 +610,7 @@ static enum _ecore_status_t ecore_dmae_post_command(struct ecore_hwfn *p_hwfn, ((!p_command->src_addr_lo) && (!p_command->src_addr_hi)))) { DP_NOTICE(p_hwfn, true, "source or destination address 0 idx_cmd=%d\n" - "opcode = [0x%08x,0x%04x] len=0x%x" - " src=0x%x:%x dst=0x%x:%x\n", + "opcode = [0x%08x,0x%04x] len=0x%x src=0x%x:%x dst=0x%x:%x\n", idx_cmd, OSAL_LE32_TO_CPU(p_command->opcode), OSAL_LE16_TO_CPU(p_command->opcode_b), @@ -582,8 +624,7 @@ static enum _ecore_status_t ecore_dmae_post_command(struct ecore_hwfn *p_hwfn, } DP_VERBOSE(p_hwfn, ECORE_MSG_HW, - "Posting DMAE command [idx %d]: opcode = [0x%08x,0x%04x]" - "len=0x%x src=0x%x:%x dst=0x%x:%x\n", + "Posting DMAE command [idx %d]: opcode = [0x%08x,0x%04x] len=0x%x src=0x%x:%x dst=0x%x:%x\n", idx_cmd, OSAL_LE32_TO_CPU(p_command->opcode), OSAL_LE16_TO_CPU(p_command->opcode_b), @@ -602,7 +643,7 @@ static enum _ecore_status_t ecore_dmae_post_command(struct ecore_hwfn *p_hwfn, */ for (i = 0; i < DMAE_CMD_SIZE; i++) { u32 data = (i < DMAE_CMD_SIZE_TO_FILL) ? 
- *(((u32 *)p_command) + i) : 0; + *(((u32 *)p_command) + i) : 0; ecore_wr(p_hwfn, p_ptt, DMAE_REG_CMD_MEM + @@ -611,7 +652,8 @@ static enum _ecore_status_t ecore_dmae_post_command(struct ecore_hwfn *p_hwfn, } ecore_wr(p_hwfn, p_ptt, - ecore_dmae_idx_to_go_cmd(idx_cmd), DMAE_GO_VALUE); + ecore_dmae_idx_to_go_cmd(idx_cmd), + DMAE_GO_VALUE); return ecore_status; } @@ -630,7 +672,7 @@ enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn) goto err; } - p_addr = &p_hwfn->dmae_info.dmae_cmd_phys_addr; + p_addr = &p_hwfn->dmae_info.dmae_cmd_phys_addr; *p_cmd = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev, p_addr, sizeof(struct dmae_cmd)); if (*p_cmd == OSAL_NULL) { @@ -690,7 +732,8 @@ void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn) } } -static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn) +static enum _ecore_status_t +ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn) { u32 wait_cnt_limit = 10000, wait_cnt = 0; enum _ecore_status_t ecore_status = ECORE_SUCCESS; @@ -713,14 +756,14 @@ static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn) OSAL_UDELAY(DMAE_MIN_WAIT_TIME); if (++wait_cnt > wait_cnt_limit) { DP_NOTICE(p_hwfn->p_dev, false, - "Timed-out waiting for operation to" - " complete. Completion word is 0x%08x" - " expected 0x%08x.\n", + "Timed-out waiting for operation to complete. Completion word is 0x%08x expected 0x%08x.\n", *p_hwfn->dmae_info.p_completion_word, DMAE_COMPLETION_VAL); + OSAL_WARN(true, "Dumping stack"); ecore_status = ECORE_TIMEOUT; break; } + /* to sync the completion_word since we are not * using the volatile keyword for p_completion_word */ @@ -739,12 +782,13 @@ enum ecore_dmae_address_type { ECORE_DMAE_ADDRESS_GRC }; -static enum _ecore_status_t -ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u64 src_addr, - u64 dst_addr, - u8 src_type, u8 dst_type, u32 length_dw) +static enum _ecore_status_t ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u64 src_addr, + u64 dst_addr, + u8 src_type, + u8 dst_type, + u32 length_dw) { dma_addr_t phys = p_hwfn->dmae_info.intermediate_buffer_phys_addr; struct dmae_cmd *cmd = p_hwfn->dmae_info.p_dmae_cmd; @@ -756,7 +800,7 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn, cmd->src_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(src_addr)); cmd->src_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(src_addr)); break; - /* for virt source addresses we use the intermediate buffer. */ + /* for virtual source addresses we use the intermediate buffer. */ case ECORE_DMAE_ADDRESS_HOST_VIRT: cmd->src_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(phys)); cmd->src_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(phys)); @@ -774,7 +818,7 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn, cmd->dst_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(dst_addr)); cmd->dst_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(dst_addr)); break; - /* for virt destination address we use the intermediate buff. */ + /* for virtual destination addresses we use the intermediate buffer. 
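+ * Callers are not expected to stage this themselves; each sub-operation
+ * bounces through dmae_info.intermediate_buffer_phys_addr. A hedged usage
+ * sketch of the public wrapper (cast is illustrative; OSAL_NULL params keep
+ * the default flags):
+ *   rc = ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)p_buf,
+ *                            grc_addr, size_in_bytes / 4, OSAL_NULL);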
*/ case ECORE_DMAE_ADDRESS_HOST_VIRT: cmd->dst_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(phys)); cmd->dst_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(phys)); @@ -784,18 +828,20 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn, } cmd->length_dw = OSAL_CPU_TO_LE16((u16)length_dw); - +#ifndef __EXTRACT__LINUX__ if (src_type == ECORE_DMAE_ADDRESS_HOST_VIRT || src_type == ECORE_DMAE_ADDRESS_HOST_PHYS) OSAL_DMA_SYNC(p_hwfn->p_dev, (void *)HILO_U64(cmd->src_addr_hi, cmd->src_addr_lo), length_dw * sizeof(u32), false); +#endif ecore_dmae_post_command(p_hwfn, p_ptt); ecore_status = ecore_dmae_operation_wait(p_hwfn); +#ifndef __EXTRACT__LINUX__ /* TODO - is it true ? */ if (src_type == ECORE_DMAE_ADDRESS_HOST_VIRT || src_type == ECORE_DMAE_ADDRESS_HOST_PHYS) @@ -803,13 +849,14 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn, (void *)HILO_U64(cmd->src_addr_hi, cmd->src_addr_lo), length_dw * sizeof(u32), true); +#endif if (ecore_status != ECORE_SUCCESS) { DP_NOTICE(p_hwfn, false, - "Wait Failed. source_addr 0x%lx, grc_addr 0x%lx, size_in_dwords 0x%x, intermediate buffer 0x%lx.\n", - (unsigned long)src_addr, (unsigned long)dst_addr, - length_dw, - (unsigned long)p_hwfn->dmae_info.intermediate_buffer_phys_addr); + "Wait Failed. source_addr 0x%" PRIx64 ", grc_addr 0x%" PRIx64 ", " + "size_in_dwords 0x%x, intermediate buffer 0x%" PRIx64 ".\n", + src_addr, dst_addr, length_dw, + (u64)p_hwfn->dmae_info.intermediate_buffer_phys_addr); return ecore_status; } @@ -821,15 +868,12 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -static enum _ecore_status_t -ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u64 src_addr, - u64 dst_addr, - u8 src_type, - u8 dst_type, - u32 size_in_dwords, - struct dmae_params *p_params) +static enum _ecore_status_t ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u64 src_addr, u64 dst_addr, + u8 src_type, u8 dst_type, + u32 size_in_dwords, + struct dmae_params *p_params) { dma_addr_t phys = p_hwfn->dmae_info.completion_word_phys_addr; u16 length_cur = 0, i = 0, cnt_split = 0, length_mod = 0; @@ -841,37 +885,39 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn, if (!p_hwfn->dmae_info.b_mem_ready) { DP_VERBOSE(p_hwfn, ECORE_MSG_HW, - "No buffers allocated. Avoid DMAE transaction [{src: addr 0x%lx, type %d}, {dst: addr 0x%lx, type %d}, size %d].\n", - (unsigned long)src_addr, src_type, - (unsigned long)dst_addr, dst_type, + "No buffers allocated. Avoid DMAE transaction " + "[{src: addr 0x%" PRIx64 ", type %d}, " + "{dst: addr 0x%" PRIx64 ", type %d}, size %d].\n", + src_addr, src_type, dst_addr, dst_type, size_in_dwords); return ECORE_NOMEM; } if (p_hwfn->p_dev->recov_in_prog) { DP_VERBOSE(p_hwfn, ECORE_MSG_HW, - "Recovery is in progress. Avoid DMAE transaction [{src: addr 0x%lx, type %d}, {dst: addr 0x%lx, type %d}, size %d].\n", - (unsigned long)src_addr, src_type, - (unsigned long)dst_addr, dst_type, + "Recovery is in progress. Avoid DMAE transaction " + "[{src: addr 0x%" PRIx64 ", type %d}, " + "{dst: addr 0x%" PRIx64 ", type %d}, size %d].\n", + src_addr, src_type, dst_addr, dst_type, size_in_dwords); - /* Return success to let the flow to be completed successfully - * w/o any error handling. - */ + + /* Let the flow complete w/o any error handling */ return ECORE_SUCCESS; } if (!cmd) { DP_NOTICE(p_hwfn, true, - "ecore_dmae_execute_sub_operation failed. Invalid state. 
source_addr 0x%lx, destination addr 0x%lx, size_in_dwords 0x%x\n", - (unsigned long)src_addr, - (unsigned long)dst_addr, - length_cur); + "ecore_dmae_execute_sub_operation failed. Invalid state. " + "source_addr 0x%" PRIx64 ", destination addr 0x%" PRIx64 ", " + "size_in_dwords 0x%x\n", + src_addr, dst_addr, length_cur); return ECORE_INVAL; } ecore_dmae_opcode(p_hwfn, (src_type == ECORE_DMAE_ADDRESS_GRC), - (dst_type == ECORE_DMAE_ADDRESS_GRC), p_params); + (dst_type == ECORE_DMAE_ADDRESS_GRC), + p_params); cmd->comp_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(phys)); cmd->comp_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(phys)); @@ -913,14 +959,18 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn, dst_type, length_cur); if (ecore_status != ECORE_SUCCESS) { - DP_NOTICE(p_hwfn, false, - "ecore_dmae_execute_sub_operation Failed" - " with error 0x%x. source_addr 0x%lx," - " dest addr 0x%lx, size_in_dwords 0x%x\n", - ecore_status, (unsigned long)src_addr, - (unsigned long)dst_addr, length_cur); - - ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_DMAE_FAIL); + u8 str[ECORE_HW_ERR_MAX_STR_SIZE]; + + OSAL_SNPRINTF((char *)str, ECORE_HW_ERR_MAX_STR_SIZE, + "ecore_dmae_execute_sub_operation Failed with error 0x%x. " + "source_addr 0x%" PRIx64 ", " + "destination addr 0x%" PRIx64 ", size_in_dwords 0x%x\n", + ecore_status, src_addr, dst_addr, + length_cur); + DP_NOTICE(p_hwfn, false, "%s", str); + ecore_hw_err_notify(p_hwfn, p_ptt, + ECORE_HW_ERR_DMAE_FAIL, str, + OSAL_STRLEN((char *)str) + 1); break; } } @@ -973,12 +1023,11 @@ enum _ecore_status_t ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn, return rc; } -enum _ecore_status_t -ecore_dmae_host2host(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - dma_addr_t source_addr, - dma_addr_t dest_addr, - u32 size_in_dwords, +enum _ecore_status_t ecore_dmae_host2host(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + dma_addr_t source_addr, + dma_addr_t dest_addr, + u32 size_in_dwords, struct dmae_params *p_params) { enum _ecore_status_t rc; @@ -989,26 +1038,31 @@ ecore_dmae_host2host(struct ecore_hwfn *p_hwfn, dest_addr, ECORE_DMAE_ADDRESS_HOST_PHYS, ECORE_DMAE_ADDRESS_HOST_PHYS, - size_in_dwords, p_params); + size_in_dwords, + p_params); OSAL_SPIN_UNLOCK(&p_hwfn->dmae_info.lock); return rc; } -void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn, - enum ecore_hw_err_type err_type) +void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + enum ecore_hw_err_type err_type, u8 *p_buf, u32 size) { /* Fan failure cannot be masked by handling of another HW error */ if (p_hwfn->p_dev->recov_in_prog && err_type != ECORE_HW_ERR_FAN_FAIL) { DP_VERBOSE(p_hwfn, ECORE_MSG_DRV, - "Recovery is in progress." - "Avoid notifying about HW error %d.\n", + "Recovery is in progress. 
Avoid notifying about HW error %d.\n", err_type); return; } OSAL_HW_ERROR_OCCURRED(p_hwfn, err_type); + + if (p_buf != OSAL_NULL) + ecore_mcp_send_raw_debug_data(p_hwfn, p_ptt, p_buf, size); + + ecore_mcp_gen_mdump_idlechk(p_hwfn, p_ptt); } enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn, @@ -1042,9 +1096,9 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn, OSAL_MEM_ZERO((u8 *)p_virt + size, size); DP_VERBOSE(p_hwfn, ECORE_MSG_SP, - "DMAE sanity [%s]: src_addr={phys 0x%lx, virt %p}, dst_addr={phys 0x%lx, virt %p}, size 0x%x\n", - phase, (unsigned long)p_phys, p_virt, - (unsigned long)(p_phys + size), + "DMAE sanity [%s]: src_addr={phys 0x%" PRIx64 ", virt %p}, " + "dst_addr={phys 0x%" PRIx64 ", virt %p}, size 0x%x\n", + phase, (u64)p_phys, p_virt, (u64)(p_phys + size), (u8 *)p_virt + size, size); rc = ecore_dmae_host2host(p_hwfn, p_ptt, p_phys, p_phys + size, @@ -1066,10 +1120,9 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn, if (*p_tmp != val) { DP_NOTICE(p_hwfn, false, - "DMAE sanity [%s]: addr={phys 0x%lx, virt %p}, read_val 0x%08x, expected_val 0x%08x\n", + "DMAE sanity [%s]: addr={phys 0x%" PRIx64 ", virt %p}, read_val 0x%08x, expected_val 0x%08x\n", phase, - (unsigned long)p_phys + - ((u8 *)p_tmp - (u8 *)p_virt), + (u64)p_phys + ((u8 *)p_tmp - (u8 *)p_virt), p_tmp, *p_tmp, val); rc = ECORE_UNKNOWN_ERROR; goto out; @@ -1085,13 +1138,15 @@ void ecore_ppfid_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 abs_ppfid, u32 hw_addr, u32 val) { u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid); + u16 fid; + + fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, pfid); + ecore_fid_pretend(p_hwfn, p_ptt, fid); - ecore_fid_pretend(p_hwfn, p_ptt, - pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT); ecore_wr(p_hwfn, p_ptt, hw_addr, val); - ecore_fid_pretend(p_hwfn, p_ptt, - p_hwfn->rel_pf_id << - PXP_PRETEND_CONCRETE_FID_PFID_SHIFT); + + fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, p_hwfn->rel_pf_id); + ecore_fid_pretend(p_hwfn, p_ptt, fid); } u32 ecore_ppfid_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -1099,13 +1154,19 @@ u32 ecore_ppfid_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, { u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid); u32 val; + u16 fid; + + fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, pfid); + ecore_fid_pretend(p_hwfn, p_ptt, fid); - ecore_fid_pretend(p_hwfn, p_ptt, - pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT); val = ecore_rd(p_hwfn, p_ptt, hw_addr); - ecore_fid_pretend(p_hwfn, p_ptt, - p_hwfn->rel_pf_id << - PXP_PRETEND_CONCRETE_FID_PFID_SHIFT); + + fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, p_hwfn->rel_pf_id); + ecore_fid_pretend(p_hwfn, p_ptt, fid); return val; } + +#ifdef _NTDDK_ +#pragma warning(pop) +#endif diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h index 238bdb9db..cbf65d737 100644 --- a/drivers/net/qede/base/ecore_hw.h +++ b/drivers/net/qede/base/ecore_hw.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_HW_H__ #define __ECORE_HW_H__ @@ -42,17 +42,19 @@ enum reserved_ptts { #endif #define DMAE_CMD_SIZE 14 -/* size of DMAE command structure to fill.. DMAE_CMD_SIZE-5 */ + +/* Size of DMAE command structure to fill.. 
DMAE_CMD_SIZE-5 */ #define DMAE_CMD_SIZE_TO_FILL (DMAE_CMD_SIZE - 5) -/* Minimum wait for dmae opertaion to complete 2 milliseconds */ + +/* Minimum wait for dmae operation to complete 2 milliseconds */ #define DMAE_MIN_WAIT_TIME 0x2 #define DMAE_MAX_CLIENTS 32 /** -* @brief ecore_gtt_init - Initialize GTT windows -* -* @param p_hwfn -*/ + * @brief ecore_gtt_init - Initialize GTT windows + * + * @param p_hwfn + */ void ecore_gtt_init(struct ecore_hwfn *p_hwfn); /** @@ -85,7 +87,7 @@ void ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn); * * @return u32 */ -u32 ecore_ptt_get_bar_addr(struct ecore_ptt *p_ptt); +u32 ecore_ptt_get_bar_addr(struct ecore_ptt *p_ptt); /** * @brief ecore_ptt_set_win - Set PTT Window's GRC BAR address @@ -133,6 +135,21 @@ u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr); +/** + * @brief ecore_memzero_hw - zero (n / 4) DWORDs of BAR memory using the + * given ptt + * + * @param p_hwfn + * @param p_ptt + * @param hw_addr + * @param n + */ +void ecore_memzero_hw(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u32 hw_addr, + osal_size_t n); + /** * @brief ecore_memcpy_from - copy n bytes from BAR using the given * ptt @@ -228,7 +245,7 @@ u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid); * which is part of p_hwfn. * @param p_hwfn */ -enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn); +enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn); /** * @brief ecore_dmae_info_free - Free the dmae_info structure * * @param p_hwfn */ -void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn); +void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn); /** * @brief ecore_dmae_host2grc - copy data from source address to @@ -307,8 +324,20 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev, const u8 *fw_data); -void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn, - enum ecore_hw_err_type err_type); +#define ECORE_HW_ERR_MAX_STR_SIZE 256 + +/** + * @brief ecore_hw_err_notify - Notify upper layer driver and management FW + * about a HW error. + * + * @param p_hwfn + * @param p_ptt + * @param err_type + * @param p_buf - debug data buffer to send to the MFW + * @param size - buffer size + */ +void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + enum ecore_hw_err_type err_type, u8 *p_buf, u32 size); /** * @brief ecore_ppfid_wr - Write value to BAR using the given ptt while diff --git a/drivers/net/qede/base/ecore_hw_defs.h b/drivers/net/qede/base/ecore_hw_defs.h index 92361e79c..e47dbd7e5 100644 --- a/drivers/net/qede/base/ecore_hw_defs.h +++ b/drivers/net/qede/base/ecore_hw_defs.h @@ -1,37 +1,26 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved.
- * www.cavium.com + * www.marvell.com */ - #ifndef _ECORE_IGU_DEF_H_ #define _ECORE_IGU_DEF_H_ /* Fields of IGU PF CONFIGRATION REGISTER */ -/* function enable */ -#define IGU_PF_CONF_FUNC_EN (0x1 << 0) -/* MSI/MSIX enable */ -#define IGU_PF_CONF_MSI_MSIX_EN (0x1 << 1) -/* INT enable */ -#define IGU_PF_CONF_INT_LINE_EN (0x1 << 2) -/* attention enable */ -#define IGU_PF_CONF_ATTN_BIT_EN (0x1 << 3) -/* single ISR mode enable */ -#define IGU_PF_CONF_SINGLE_ISR_EN (0x1 << 4) -/* simd all ones mode */ -#define IGU_PF_CONF_SIMD_MODE (0x1 << 5) +#define IGU_PF_CONF_FUNC_EN (0x1 << 0) /* function enable */ +#define IGU_PF_CONF_MSI_MSIX_EN (0x1 << 1) /* MSI/MSIX enable */ +#define IGU_PF_CONF_INT_LINE_EN (0x1 << 2) /* INT enable */ +#define IGU_PF_CONF_ATTN_BIT_EN (0x1 << 3) /* attention enable */ +#define IGU_PF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */ +#define IGU_PF_CONF_SIMD_MODE (0x1 << 5) /* simd all ones mode */ /* Fields of IGU VF CONFIGRATION REGISTER */ -/* function enable */ -#define IGU_VF_CONF_FUNC_EN (0x1 << 0) -/* MSI/MSIX enable */ -#define IGU_VF_CONF_MSI_MSIX_EN (0x1 << 1) -/* single ISR mode enable */ -#define IGU_VF_CONF_SINGLE_ISR_EN (0x1 << 4) -/* Parent PF */ -#define IGU_VF_CONF_PARENT_MASK (0xF) -/* Parent PF */ -#define IGU_VF_CONF_PARENT_SHIFT 5 +#define IGU_VF_CONF_FUNC_EN (0x1 << 0) /* function enable */ +#define IGU_VF_CONF_MSI_MSIX_EN (0x1 << 1) /* MSI/MSIX enable */ +#define IGU_VF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */ +#define IGU_VF_CONF_PARENT_MASK (0xF) /* Parent PF */ +#define IGU_VF_CONF_PARENT_SHIFT 5 /* Parent PF */ /* Igu control commands */ @@ -47,11 +36,11 @@ struct igu_ctrl_reg { u32 ctrl_data; #define IGU_CTRL_REG_FID_MASK 0xFFFF /* Opaque_FID */ #define IGU_CTRL_REG_FID_SHIFT 0 -#define IGU_CTRL_REG_PXP_ADDR_MASK 0xFFF /* Command address */ +#define IGU_CTRL_REG_PXP_ADDR_MASK 0x1FFF /* Command address */ #define IGU_CTRL_REG_PXP_ADDR_SHIFT 16 -#define IGU_CTRL_REG_RESERVED_MASK 0x1 -#define IGU_CTRL_REG_RESERVED_SHIFT 28 -#define IGU_CTRL_REG_TYPE_MASK 0x1U /* use enum igu_ctrl_cmd */ +#define IGU_CTRL_REG_RESERVED_MASK 0x3 +#define IGU_CTRL_REG_RESERVED_SHIFT 29 +#define IGU_CTRL_REG_TYPE_MASK 0x1 /* use enum igu_ctrl_cmd */ #define IGU_CTRL_REG_TYPE_SHIFT 31 }; diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c index 1b751d837..575ba8070 100644 --- a/drivers/net/qede/base/ecore_init_fw_funcs.c +++ b/drivers/net/qede/base/ecore_init_fw_funcs.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "ecore_hw.h" #include "ecore_init_ops.h" @@ -38,8 +38,7 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = { /* General constants */ #define QM_PQ_MEM_4KB(pq_size) \ (pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0) -#define QM_PQ_SIZE_256B(pq_size) \ - (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0) +#define QM_PQ_SIZE_256B(pq_size) (pq_size ? 
DIV_ROUND_UP(pq_size, 0x100) - 1 : 0) #define QM_INVALID_PQ_ID 0xffff /* Max link speed (in Mbps) */ @@ -50,11 +49,12 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = { #define QM_BYTE_CRD_EN 1 /* Other PQ constants */ -#define QM_OTHER_PQS_PER_PF 4 +#define QM_OTHER_PQS_PER_PF_E4 4 +#define QM_OTHER_PQS_PER_PF_E5 4 /* VOQ constants */ -#define MAX_NUM_VOQS (MAX_NUM_PORTS_K2 * NUM_TCS_4PORT_K2) -#define VOQS_BIT_MASK ((1 << MAX_NUM_VOQS) - 1) +#define MAX_NUM_VOQS_E4 (MAX_NUM_PORTS_K2 * NUM_TCS_4PORT_K2) +#define QM_E5_NUM_EXT_VOQ (MAX_NUM_PORTS_E5 * NUM_OF_TCS) /* WFQ constants: */ @@ -65,7 +65,8 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = { #define QM_WFQ_VP_PQ_VOQ_SHIFT 0 /* Bit of PF in WFQ VP PQ map */ -#define QM_WFQ_VP_PQ_PF_SHIFT 5 +#define QM_WFQ_VP_PQ_PF_E4_SHIFT 5 +#define QM_WFQ_VP_PQ_PF_E5_SHIFT 6 /* 0x9000 = 4*9*1024 */ #define QM_WFQ_INC_VAL(weight) ((weight) * 0x9000) @@ -73,6 +74,9 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = { /* Max WFQ increment value is 0.7 * upper bound */ #define QM_WFQ_MAX_INC_VAL ((QM_WFQ_UPPER_BOUND * 7) / 10) +/* Number of VOQs in E5 QmWfqCrd register */ +#define QM_WFQ_CRD_E5_NUM_VOQS 16 + /* RL constants: */ /* Period in us */ @@ -89,8 +93,7 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = { * this point. */ #define QM_RL_INC_VAL(rate) \ - OSAL_MAX_T(u32, (u32)(((rate ? rate : 100000) * QM_RL_PERIOD * 101) / \ - (8 * 100)), 1) + OSAL_MAX_T(u32, (u32)(((rate ? rate : 100000) * QM_RL_PERIOD * 101) / (8 * 100)), 1) /* PF RL Upper bound is set to 10 * burst size of 1ms in 50Gbps */ #define QM_PF_RL_UPPER_BOUND 62500000 @@ -99,8 +102,7 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = { #define QM_PF_RL_MAX_INC_VAL ((QM_PF_RL_UPPER_BOUND * 7) / 10) /* Vport RL Upper bound, link speed is in Mpbs */ -#define QM_VP_RL_UPPER_BOUND(speed) \ - ((u32)OSAL_MAX_T(u32, QM_RL_INC_VAL(speed), 9700 + 1000)) +#define QM_VP_RL_UPPER_BOUND(speed) ((u32)OSAL_MAX_T(u32, QM_RL_INC_VAL(speed), 9700 + 1000)) /* Max Vport RL increment value is the Vport RL upper bound */ #define QM_VP_RL_MAX_INC_VAL(speed) QM_VP_RL_UPPER_BOUND(speed) @@ -116,22 +118,24 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = { /* Command Queue constants: */ /* Pure LB CmdQ lines (+spare) */ -#define PBF_CMDQ_PURE_LB_LINES 150 +#define PBF_CMDQ_PURE_LB_LINES_E4 150 +#define PBF_CMDQ_PURE_LB_LINES_E5 75 + +#define PBF_CMDQ_LINES_E5_RSVD_RATIO 8 #define PBF_CMDQ_LINES_RT_OFFSET(ext_voq) \ - (PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \ - ext_voq * \ - (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - \ - PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET)) + (PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + (ext_voq) * \ + (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET)) #define PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq) \ - (PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + \ - ext_voq * \ - (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - \ - PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET)) + (PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + (ext_voq) * \ + (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET)) -#define QM_VOQ_LINE_CRD(pbf_cmd_lines) \ -((((pbf_cmd_lines) - 4) * 2) | QM_LINE_CRD_REG_SIGN_BIT) +/* Returns the VOQ line credit for the specified number of PBF command lines. + * PBF lines are specified in 256b units in E4, and in 512b units in E5. + */ +#define QM_VOQ_LINE_CRD(pbf_cmd_lines, is_e5) \ + ((((pbf_cmd_lines) - 4) * (is_e5 ? 
4 : 2)) | QM_LINE_CRD_REG_SIGN_BIT) /* BTB: blocks constants (block size = 256B) */ @@ -151,7 +155,7 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = { #define QM_STOP_CMD_STRUCT_SIZE 2 #define QM_STOP_CMD_PAUSE_MASK_OFFSET 0 #define QM_STOP_CMD_PAUSE_MASK_SHIFT 0 -#define QM_STOP_CMD_PAUSE_MASK_MASK 0xffffffff /* @DPDK */ +#define QM_STOP_CMD_PAUSE_MASK_MASK ((u64)-1) #define QM_STOP_CMD_GROUP_ID_OFFSET 1 #define QM_STOP_CMD_GROUP_ID_SHIFT 16 #define QM_STOP_CMD_GROUP_ID_MASK 15 @@ -185,25 +189,31 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = { ((rl_valid ? 1 : 0) << 22) | (((rl_id) & 255) << 24) | \ (((rl_id) >> 8) << 9)) -#define PQ_INFO_RAM_GRC_ADDRESS(pq_id) (XSEM_REG_FAST_MEMORY + \ - SEM_FAST_REG_INT_RAM + XSTORM_PQ_INFO_OFFSET(pq_id)) +#define PQ_INFO_RAM_GRC_ADDRESS(pq_id) \ + (XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + XSTORM_PQ_INFO_OFFSET(pq_id)) /******************** INTERNAL IMPLEMENTATION *********************/ /* Prepare PF RL enable/disable runtime init values */ -static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en) +static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, + bool pf_rl_en) { STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0); if (pf_rl_en) { + u8 num_ext_voqs = ECORE_IS_E5(p_hwfn->p_dev) ? QM_E5_NUM_EXT_VOQ : MAX_NUM_VOQS_E4; + u64 voq_bit_mask = ((u64)1 << num_ext_voqs) - 1; + /* Enable RLs for all VOQs */ - STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET, - VOQS_BIT_MASK); + STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET, (u32)voq_bit_mask); +#ifdef QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET + if (num_ext_voqs >= 32) + STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET, + (u32)(voq_bit_mask >> 32)); +#endif /* Write RL period */ - STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET, - QM_RL_PERIOD_CLK_25M); - STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET, - QM_RL_PERIOD_CLK_25M); + STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET, QM_RL_PERIOD_CLK_25M); + STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET, QM_RL_PERIOD_CLK_25M); /* Set credit threshold for QM bypass flow */ if (QM_BYPASS_EN) @@ -213,184 +223,193 @@ static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en) } /* Prepare PF WFQ enable/disable runtime init values */ -static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en) +static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, + bool pf_wfq_en) { STORE_RT_REG(p_hwfn, QM_REG_WFQPFENABLE_RT_OFFSET, pf_wfq_en ? 1 : 0); /* Set credit threshold for QM bypass flow */ if (pf_wfq_en && QM_BYPASS_EN) - STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET, - QM_WFQ_UPPER_BOUND); + STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET, QM_WFQ_UPPER_BOUND); } /* Prepare global RL enable/disable runtime init values */ static void ecore_enable_global_rl(struct ecore_hwfn *p_hwfn, bool global_rl_en) { - STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET, - global_rl_en ? 1 : 0); + STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET, global_rl_en ? 
1 : 0); if (global_rl_en) { /* Write RL period (use timer 0 only) */ - STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET, - QM_RL_PERIOD_CLK_25M); - STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET, - QM_RL_PERIOD_CLK_25M); + STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET, QM_RL_PERIOD_CLK_25M); + STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET, QM_RL_PERIOD_CLK_25M); /* Set credit threshold for QM bypass flow */ if (QM_BYPASS_EN) - STORE_RT_REG(p_hwfn, - QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET, + STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET, QM_VP_RL_BYPASS_THRESH_SPEED); } } /* Prepare VPORT WFQ enable/disable runtime init values */ -static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en) +static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, + bool vport_wfq_en) { - STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET, - vport_wfq_en ? 1 : 0); + STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET, vport_wfq_en ? 1 : 0); /* Set credit threshold for QM bypass flow */ if (vport_wfq_en && QM_BYPASS_EN) - STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET, - QM_WFQ_UPPER_BOUND); + STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET, QM_WFQ_UPPER_BOUND); } /* Prepare runtime init values to allocate PBF command queue lines for - * the specified VOQ + * the specified VOQ. */ static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn, - u8 voq, + u8 ext_voq, u16 cmdq_lines) { - u32 qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines); + u32 qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines, ECORE_IS_E5(p_hwfn->p_dev)); - OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), - (u32)cmdq_lines); - STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd); - STORE_RT_REG(p_hwfn, QM_REG_VOQINITCRDLINE_RT_OFFSET + voq, - qm_line_crd); + OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(ext_voq), (u32)cmdq_lines); + STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + ext_voq, qm_line_crd); + STORE_RT_REG(p_hwfn, QM_REG_VOQINITCRDLINE_RT_OFFSET + ext_voq, qm_line_crd); } /* Prepare runtime init values to allocate PBF command queue lines. */ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn, u8 max_ports_per_engine, u8 max_phys_tcs_per_port, - struct init_qm_port_params - port_params[MAX_NUM_PORTS]) + struct init_qm_port_params port_params[MAX_NUM_PORTS]) { - u8 tc, voq, port_id, num_tcs_in_port; + u8 tc, ext_voq, port_id, num_tcs_in_port; + u8 num_ext_voqs = ECORE_IS_E5(p_hwfn->p_dev) ? QM_E5_NUM_EXT_VOQ : MAX_NUM_VOQS_E4; /* Clear PBF lines of all VOQs */ - for (voq = 0; voq < MAX_NUM_VOQS; voq++) - STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0); + for (ext_voq = 0; ext_voq < num_ext_voqs; ext_voq++) + STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(ext_voq), 0); for (port_id = 0; port_id < max_ports_per_engine; port_id++) { - u16 phys_lines, phys_lines_per_tc; + u16 phys_lines, phys_lines_per_tc, pure_lb_lines; if (!port_params[port_id].active) continue; /* Find number of command queue lines to divide between the - * active physical TCs. + * active physical TCs. In E5, 1/8 of the lines are reserved. + * the lines for pure LB TC are subtracted. 
*/ phys_lines = port_params[port_id].num_pbf_cmd_lines; - phys_lines -= PBF_CMDQ_PURE_LB_LINES; + if (ECORE_IS_E5(p_hwfn->p_dev)) { + phys_lines -= DIV_ROUND_UP(phys_lines, PBF_CMDQ_LINES_E5_RSVD_RATIO); + pure_lb_lines = PBF_CMDQ_PURE_LB_LINES_E5; + /* E5 TGFS workaround uses 2 TCs for PURE_LB */ + if (((port_params[port_id].active_phys_tcs >> + PBF_TGFS_PURE_LB_TC_E5) & 0x1) == 0) { + phys_lines -= PBF_CMDQ_PURE_LB_LINES_E5; + + /* Init registers for PBF_TGFS_PURE_LB_TC_E5 */ + ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PBF_TGFS_PURE_LB_TC_E5, + max_phys_tcs_per_port); + ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq, pure_lb_lines); + } else { + DP_NOTICE(p_hwfn, true, "Error: TC %d is used for the TGFS ramrods workaround and must not be used by the driver.\n", + PBF_TGFS_PURE_LB_TC_E5); + } + } else { + pure_lb_lines = PBF_CMDQ_PURE_LB_LINES_E4; + } + phys_lines -= pure_lb_lines; /* Find #lines per active physical TC */ num_tcs_in_port = 0; for (tc = 0; tc < max_phys_tcs_per_port; tc++) - if (((port_params[port_id].active_phys_tcs >> tc) & - 0x1) == 1) + if (((port_params[port_id].active_phys_tcs >> tc) & 0x1) == 1) num_tcs_in_port++; phys_lines_per_tc = phys_lines / num_tcs_in_port; /* Init registers per active TC */ for (tc = 0; tc < max_phys_tcs_per_port; tc++) { - voq = VOQ(port_id, tc, max_phys_tcs_per_port); - if (((port_params[port_id].active_phys_tcs >> - tc) & 0x1) == 1) - ecore_cmdq_lines_voq_rt_init(p_hwfn, voq, - phys_lines_per_tc); + ext_voq = ecore_get_ext_voq(p_hwfn, port_id, tc, max_phys_tcs_per_port); + if (((port_params[port_id].active_phys_tcs >> tc) & 0x1) == 1) + ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq, phys_lines_per_tc); } /* Init registers for pure LB TC */ - voq = VOQ(port_id, PURE_LB_TC, max_phys_tcs_per_port); - ecore_cmdq_lines_voq_rt_init(p_hwfn, voq, - PBF_CMDQ_PURE_LB_LINES); + ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PURE_LB_TC, max_phys_tcs_per_port); + ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq, pure_lb_lines); } } -/* - * Prepare runtime init values to allocate guaranteed BTB blocks for the +/* Prepare runtime init values to allocate guaranteed BTB blocks for the * specified port. The guaranteed BTB space is divided between the TCs as * follows (shared space is currently not used): * 1. Parameters: - * B BTB blocks for this port - * C Number of physical TCs for this port + * B - BTB blocks for this port + * C - Number of physical TCs for this port * 2. Calculation: - * a. 38 blocks (9700B jumbo frame) are allocated for global per port - * headroom - * b. B = B 38 (remainder after global headroom allocation) - * c. MAX(38,B/(C+0.7)) blocks are allocated for the pure LB VOQ. - * d. B = B MAX(38, B/(C+0.7)) (remainder after pure LB allocation). - * e. B/C blocks are allocated for each physical TC. + * a. 38 blocks (9700B jumbo frame) are allocated for global per port + * headroom. + * b. B = B - 38 (remainder after global headroom allocation). + * c. MAX(38,B/(C + 0.7)) blocks are allocated for the pure LB VOQ. + * d. B = B - MAX(38, B/(C + 0.7)) (remainder after pure LB allocation). + * e. B/C blocks are allocated for each physical TC. * Assumptions: * - MTU is up to 9700 bytes (38 blocks) * - All TCs are considered symmetrical (same rate and packet size) - * - No optimization for lossy TC (all are considered lossless). Shared space is - * not enabled and allocated for each TC. + * - No optimization for lossy TC (all are considered lossless). Shared space + * is not enabled and allocated for each TC.
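+ * Worked example with illustrative numbers, B = 1000 blocks, C = 4 TCs:
+ *   a-b. headroom: B = 1000 - 38 = 962
+ *   c-d. pure LB:  MAX(38, 962 / 4.7) = 204 (integer math), B = 962 - 204 = 758
+ *   e.   per TC:   758 / 4 = 189 guaranteed blocks for each physical TC.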
*/ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn, u8 max_ports_per_engine, u8 max_phys_tcs_per_port, - struct init_qm_port_params - port_params[MAX_NUM_PORTS]) + struct init_qm_port_params port_params[MAX_NUM_PORTS]) { u32 usable_blocks, pure_lb_blocks, phys_blocks; - u8 tc, voq, port_id, num_tcs_in_port; + u8 tc, ext_voq, port_id, num_tcs_in_port; for (port_id = 0; port_id < max_ports_per_engine; port_id++) { if (!port_params[port_id].active) continue; /* Subtract headroom blocks */ - usable_blocks = port_params[port_id].num_btb_blocks - - BTB_HEADROOM_BLOCKS; + usable_blocks = port_params[port_id].num_btb_blocks - BTB_HEADROOM_BLOCKS; /* Find blocks per physical TC. use factor to avoid floating * arithmethic. */ num_tcs_in_port = 0; for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) - if (((port_params[port_id].active_phys_tcs >> tc) & - 0x1) == 1) + if (((port_params[port_id].active_phys_tcs >> tc) & 0x1) == 1) num_tcs_in_port++; pure_lb_blocks = (usable_blocks * BTB_PURE_LB_FACTOR) / - (num_tcs_in_port * BTB_PURE_LB_FACTOR + - BTB_PURE_LB_RATIO); + (num_tcs_in_port * BTB_PURE_LB_FACTOR + BTB_PURE_LB_RATIO); pure_lb_blocks = OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS, - pure_lb_blocks / - BTB_PURE_LB_FACTOR); - phys_blocks = (usable_blocks - pure_lb_blocks) / - num_tcs_in_port; + pure_lb_blocks / BTB_PURE_LB_FACTOR); + phys_blocks = (usable_blocks - pure_lb_blocks) / num_tcs_in_port; /* Init physical TCs */ for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) { - if (((port_params[port_id].active_phys_tcs >> tc) & - 0x1) == 1) { - voq = VOQ(port_id, tc, max_phys_tcs_per_port); - STORE_RT_REG(p_hwfn, - PBF_BTB_GUARANTEED_RT_OFFSET(voq), - phys_blocks); + if (((port_params[port_id].active_phys_tcs >> tc) & 0x1) == 1) { + ext_voq = ecore_get_ext_voq(p_hwfn, port_id, tc, + max_phys_tcs_per_port); + STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq), + phys_blocks); } } /* Init pure LB TC */ - voq = VOQ(port_id, PURE_LB_TC, max_phys_tcs_per_port); - STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(voq), - pure_lb_blocks); + ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PURE_LB_TC, max_phys_tcs_per_port); + STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq), pure_lb_blocks); + + if (ECORE_IS_E5(p_hwfn->p_dev) && + (((port_params[port_id].active_phys_tcs >> + PBF_TGFS_PURE_LB_TC_E5) & 0x1) == 0)) { + /* Init registers for PBF_TGFS_PURE_LB_TC_E5 */ + ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PBF_TGFS_PURE_LB_TC_E5, + max_phys_tcs_per_port); + STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq), pure_lb_blocks); + } } } @@ -418,12 +437,18 @@ static int ecore_global_rl_rt_init(struct ecore_hwfn *p_hwfn, return -1; } - STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + rl_id, - (u32)QM_RL_CRD_REG_SIGN_BIT); - STORE_RT_REG(p_hwfn, QM_REG_RLGLBLUPPERBOUND_RT_OFFSET + rl_id, - upper_bound); - STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + rl_id, - inc_val); + STORE_RT_REG(p_hwfn, rl_id < QM_REG_RLGLBLCRD_RT_SIZE ? + QM_REG_RLGLBLCRD_RT_OFFSET + rl_id : + QM_REG_RLGLBLCRD_MSB_RT_OFFSET + rl_id - QM_REG_RLGLBLCRD_RT_SIZE, + (u32)QM_RL_CRD_REG_SIGN_BIT); + STORE_RT_REG(p_hwfn, rl_id < QM_REG_RLGLBLUPPERBOUND_RT_SIZE ? + QM_REG_RLGLBLUPPERBOUND_RT_OFFSET + rl_id : + QM_REG_RLGLBLUPPERBOUND_MSB_RT_OFFSET + rl_id - + QM_REG_RLGLBLUPPERBOUND_RT_SIZE, upper_bound); + STORE_RT_REG(p_hwfn, rl_id < QM_REG_RLGLBLINCVAL_RT_SIZE ? 
+ QM_REG_RLGLBLINCVAL_RT_OFFSET + rl_id : + QM_REG_RLGLBLINCVAL_MSB_RT_OFFSET + rl_id - QM_REG_RLGLBLINCVAL_RT_SIZE, + inc_val); } return 0; @@ -431,19 +456,19 @@ static int ecore_global_rl_rt_init(struct ecore_hwfn *p_hwfn, /* Prepare Tx PQ mapping runtime init values for the specified PF */ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u8 pf_id, - u8 max_phys_tcs_per_port, - bool is_pf_loading, - u32 num_pf_cids, - u32 num_vf_cids, - u16 start_pq, - u16 num_pf_pqs, - u16 num_vf_pqs, + struct ecore_ptt *p_ptt, + u8 pf_id, + u8 max_phys_tcs_per_port, + bool is_pf_loading, + u32 num_pf_cids, + u32 num_vf_cids, + u16 start_pq, + u16 num_pf_pqs, + u16 num_vf_pqs, u16 start_vport, - u32 base_mem_addr_4kb, - struct init_qm_pq_params *pq_params, - struct init_qm_vport_params *vport_params) + u32 base_mem_addr_4kb, + struct init_qm_pq_params *pq_params, + struct init_qm_vport_params *vport_params) { /* A bit per Tx PQ indicating if the PQ is associated with a VF */ u32 tx_pq_vf_mask[MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE] = { 0 }; @@ -465,14 +490,11 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn, /* Set mapping from PQ group to PF */ for (pq_group = first_pq_group; pq_group <= last_pq_group; pq_group++) - STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group, - (u32)(pf_id)); + STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group, (u32)(pf_id)); /* Set PQ sizes */ - STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET, - QM_PQ_SIZE_256B(num_pf_cids)); - STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET, - QM_PQ_SIZE_256B(num_vf_cids)); + STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET, QM_PQ_SIZE_256B(num_pf_cids)); + STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET, QM_PQ_SIZE_256B(num_vf_cids)); /* Go over all Tx PQs */ for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) { @@ -480,26 +502,24 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn, bool is_vf_pq; u8 ext_voq; - ext_voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id, - max_phys_tcs_per_port); + ext_voq = ecore_get_ext_voq(p_hwfn, pq_params[i].port_id, pq_params[i].tc_id, + max_phys_tcs_per_port); is_vf_pq = (i >= num_pf_pqs); /* Update first Tx PQ of VPORT/TC */ vport_id_in_pf = pq_params[i].vport_id - start_vport; - first_tx_pq_id = - vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id]; + first_tx_pq_id = vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id]; if (first_tx_pq_id == QM_INVALID_PQ_ID) { u32 map_val = (ext_voq << QM_WFQ_VP_PQ_VOQ_SHIFT) | - (pf_id << QM_WFQ_VP_PQ_PF_SHIFT); + (pf_id << (ECORE_IS_E5(p_hwfn->p_dev) ? + QM_WFQ_VP_PQ_PF_E5_SHIFT : QM_WFQ_VP_PQ_PF_E4_SHIFT)); /* Create new VP PQ */ - vport_params[vport_id_in_pf]. 
- first_tx_pq_id[pq_params[i].tc_id] = pq_id; + vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id] = pq_id; first_tx_pq_id = pq_id; /* Map VP PQ to VOQ and PF */ - STORE_RT_REG(p_hwfn, QM_REG_WFQVPMAP_RT_OFFSET + - first_tx_pq_id, map_val); + STORE_RT_REG(p_hwfn, QM_REG_WFQVPMAP_RT_OFFSET + first_tx_pq_id, map_val); } /* Prepare PQ map entry */ @@ -518,8 +538,7 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn, } /* Set PQ base address */ - STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id, - mem_addr_4kb); + STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id, mem_addr_4kb); /* Clear PQ pointer table entry (64 bit) */ if (is_pf_loading) @@ -529,19 +548,16 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn, /* Write PQ info to RAM */ #if (WRITE_PQ_INFO_TO_RAM != 0) - pq_info = PQ_INFO_ELEMENT(first_tx_pq_id, pf_id, - pq_params[i].tc_id, - pq_params[i].port_id, - pq_params[i].rl_valid, + pq_info = PQ_INFO_ELEMENT(first_tx_pq_id, pf_id, pq_params[i].tc_id, + pq_params[i].port_id, pq_params[i].rl_valid, pq_params[i].rl_id); - ecore_wr(p_hwfn, p_ptt, PQ_INFO_RAM_GRC_ADDRESS(pq_id), - pq_info); + ecore_wr(p_hwfn, p_ptt, PQ_INFO_RAM_GRC_ADDRESS(pq_id), pq_info); #endif /* If VF PQ, add indication to PQ VF mask */ if (is_vf_pq) { tx_pq_vf_mask[pq_id / QM_PF_QUEUE_GROUP_SIZE] |= - (1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE)); + (1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE)); mem_addr_4kb += vport_pq_mem_4kb; } else { mem_addr_4kb += pq_mem_4kb; @@ -551,8 +567,8 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn, /* Store Tx PQ VF mask to size select register */ for (i = 0; i < num_tx_pq_vf_masks; i++) if (tx_pq_vf_mask[i]) - STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET + - i, tx_pq_vf_mask[i]); + STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET + i, + tx_pq_vf_mask[i]); return 0; } @@ -566,7 +582,7 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn, u32 base_mem_addr_4kb) { u32 pq_size, pq_mem_4kb, mem_addr_4kb; - u16 i, j, pq_id, pq_group; + u16 i, j, pq_id, pq_group, qm_other_pqs_per_pf; /* A single other PQ group is used in each PF, where PQ group i is used * in PF i. @@ -575,26 +591,23 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn, pq_size = num_pf_cids + num_tids; pq_mem_4kb = QM_PQ_MEM_4KB(pq_size); mem_addr_4kb = base_mem_addr_4kb; + qm_other_pqs_per_pf = ECORE_IS_E5(p_hwfn->p_dev) ? 
QM_OTHER_PQS_PER_PF_E5 : + QM_OTHER_PQS_PER_PF_E4; /* Map PQ group to PF */ - STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group, - (u32)(pf_id)); + STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group, (u32)(pf_id)); /* Set PQ sizes */ - STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET, - QM_PQ_SIZE_256B(pq_size)); + STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET, QM_PQ_SIZE_256B(pq_size)); - for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE; - i < QM_OTHER_PQS_PER_PF; i++, pq_id++) { + for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE; i < qm_other_pqs_per_pf; i++, pq_id++) { /* Set PQ base address */ - STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id, - mem_addr_4kb); + STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id, mem_addr_4kb); /* Clear PQ pointer table entry */ if (is_pf_loading) for (j = 0; j < 2; j++) - STORE_RT_REG(p_hwfn, - QM_REG_PTRTBLOTHER_RT_OFFSET + + STORE_RT_REG(p_hwfn, QM_REG_PTRTBLOTHER_RT_OFFSET + (pq_id * 2) + j, 0); mem_addr_4kb += pq_mem_4kb; @@ -612,30 +625,30 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn, struct init_qm_pq_params *pq_params) { u32 inc_val, crd_reg_offset; - u8 voq; + u8 ext_voq; u16 i; inc_val = QM_WFQ_INC_VAL(pf_wfq); if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, - "Invalid PF WFQ weight configuration\n"); + DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration\n"); return -1; } for (i = 0; i < num_tx_pqs; i++) { - voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id, - max_phys_tcs_per_port); - crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ? - QM_REG_WFQPFCRD_RT_OFFSET : - QM_REG_WFQPFCRD_MSB_RT_OFFSET) + - voq * MAX_NUM_PFS_BB + - (pf_id % MAX_NUM_PFS_BB); - OVERWRITE_RT_REG(p_hwfn, crd_reg_offset, - (u32)QM_WFQ_CRD_REG_SIGN_BIT); + ext_voq = ecore_get_ext_voq(p_hwfn, pq_params[i].port_id, pq_params[i].tc_id, + max_phys_tcs_per_port); + crd_reg_offset = ECORE_IS_E5(p_hwfn->p_dev) ? + (ext_voq < QM_WFQ_CRD_E5_NUM_VOQS ? QM_REG_WFQPFCRD_RT_OFFSET : + QM_REG_WFQPFCRD_MSB_RT_OFFSET) + + (ext_voq % QM_WFQ_CRD_E5_NUM_VOQS) * MAX_NUM_PFS_E5 + pf_id : + (pf_id < MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET : + QM_REG_WFQPFCRD_MSB_RT_OFFSET) + + ext_voq * MAX_NUM_PFS_BB + (pf_id % MAX_NUM_PFS_BB); + OVERWRITE_RT_REG(p_hwfn, crd_reg_offset, (u32)QM_WFQ_CRD_REG_SIGN_BIT); } - STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + - pf_id, QM_WFQ_UPPER_BOUND | (u32)QM_WFQ_CRD_REG_SIGN_BIT); + STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + pf_id, QM_WFQ_UPPER_BOUND | + (u32)QM_WFQ_CRD_REG_SIGN_BIT); STORE_RT_REG(p_hwfn, QM_REG_WFQPFWEIGHT_RT_OFFSET + pf_id, inc_val); return 0; @@ -644,21 +657,21 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn, /* Prepare PF RL runtime init values for the specified PF. * Return -1 on error. 
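 * Illustrative call (hypothetical values; the rate units are assumed to be
 * those expected by QM_RL_INC_VAL(), i.e. Mb/s):
 *   if (ecore_pf_rl_rt_init(p_hwfn, pf_id, 25000))
 *           return -1; /* invalid or out-of-range rate */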
*/ -static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl) +static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, + u8 pf_id, + u32 pf_rl) { u32 inc_val; inc_val = QM_RL_INC_VAL(pf_rl); if (inc_val > QM_PF_RL_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, - "Invalid PF rate limit configuration\n"); + DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration\n"); return -1; } - STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id, + STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id, (u32)QM_RL_CRD_REG_SIGN_BIT); + STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id, QM_PF_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT); - STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id, - QM_PF_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT); STORE_RT_REG(p_hwfn, QM_REG_RLPFINCVAL_RT_OFFSET + pf_id, inc_val); return 0; @@ -682,8 +695,7 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn, inc_val = QM_WFQ_INC_VAL(vport_params[vport_id].wfq); if (inc_val > QM_WFQ_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, - "Invalid VPORT WFQ weight configuration\n"); + DP_NOTICE(p_hwfn, true, "Invalid VPORT WFQ weight configuration\n"); return -1; } @@ -693,10 +705,9 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn, if (vp_pq_id == QM_INVALID_PQ_ID) continue; - STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET + - vp_pq_id, (u32)QM_WFQ_CRD_REG_SIGN_BIT); - STORE_RT_REG(p_hwfn, QM_REG_WFQVPWEIGHT_RT_OFFSET + - vp_pq_id, inc_val); + STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET + vp_pq_id, + (u32)QM_WFQ_CRD_REG_SIGN_BIT); + STORE_RT_REG(p_hwfn, QM_REG_WFQVPWEIGHT_RT_OFFSET + vp_pq_id, inc_val); } } @@ -708,16 +719,14 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn, { u32 reg_val, i; - for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val; - i++) { + for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val; i++) { OSAL_UDELAY(QM_STOP_CMD_POLL_PERIOD_US); reg_val = ecore_rd(p_hwfn, p_ptt, QM_REG_SDMCMDREADY); } /* Check if timeout while waiting for SDM command ready */ if (i == QM_STOP_CMD_MAX_POLL_COUNT) { - DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG, - "Timeout waiting for QM SDM cmd ready signal\n"); + DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG, "Timeout when waiting for QM SDM command ready signal\n"); return false; } @@ -726,9 +735,9 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn, static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, - u32 cmd_addr, - u32 cmd_data_lsb, - u32 cmd_data_msb) + u32 cmd_addr, + u32 cmd_data_lsb, + u32 cmd_data_msb) { if (!ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt)) return false; @@ -744,16 +753,30 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn, /******************** INTERFACE IMPLEMENTATION *********************/ +u8 ecore_get_ext_voq(struct ecore_hwfn *p_hwfn, + u8 port_id, + u8 tc, + u8 max_phys_tcs_per_port) +{ + if (tc == PURE_LB_TC) + return NUM_OF_PHYS_TCS * (ECORE_IS_E5(p_hwfn->p_dev) ? MAX_NUM_PORTS_E5 : + MAX_NUM_PORTS_BB) + port_id; + else + return port_id * (ECORE_IS_E5(p_hwfn->p_dev) ? 
NUM_OF_PHYS_TCS : + max_phys_tcs_per_port) + tc; +} + u32 ecore_qm_pf_mem_size(struct ecore_hwfn *p_hwfn, u32 num_pf_cids, - u32 num_vf_cids, - u32 num_tids, - u16 num_pf_pqs, - u16 num_vf_pqs) + u32 num_vf_cids, + u32 num_tids, + u16 num_pf_pqs, + u16 num_vf_pqs) { return QM_PQ_MEM_4KB(num_pf_cids) * num_pf_pqs + - QM_PQ_MEM_4KB(num_vf_cids) * num_vf_pqs + - QM_PQ_MEM_4KB(num_pf_cids + num_tids) * QM_OTHER_PQS_PER_PF; + QM_PQ_MEM_4KB(num_vf_cids) * num_vf_pqs + + QM_PQ_MEM_4KB(num_pf_cids + num_tids) * (ECORE_IS_E5(p_hwfn->p_dev) ? + QM_OTHER_PQS_PER_PF_E5 : QM_OTHER_PQS_PER_PF_E4); } int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn, @@ -763,24 +786,21 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn, bool pf_wfq_en, bool global_rl_en, bool vport_wfq_en, - struct init_qm_port_params - port_params[MAX_NUM_PORTS], + struct init_qm_port_params port_params[MAX_NUM_PORTS], struct init_qm_global_rl_params - global_rl_params[COMMON_MAX_QM_GLOBAL_RLS]) + global_rl_params[COMMON_MAX_QM_GLOBAL_RLS]) { u32 mask = 0; /* Init AFullOprtnstcCrdMask */ - SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_LINEVOQ, - QM_OPPOR_LINE_VOQ_DEF); + SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_LINEVOQ, QM_OPPOR_LINE_VOQ_DEF); SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ, QM_BYTE_CRD_EN); - SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFWFQ, pf_wfq_en); - SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPWFQ, vport_wfq_en); - SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFRL, pf_rl_en); - SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPQCNRL, global_rl_en); + SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFWFQ, pf_wfq_en ? 1 : 0); + SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPWFQ, vport_wfq_en ? 1 : 0); + SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFRL, pf_rl_en ? 1 : 0); + SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPQCNRL, global_rl_en ? 1 : 0); SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_FWPAUSE, QM_OPPOR_FW_STOP_DEF); - SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY, - QM_OPPOR_PQ_EMPTY_DEF); + SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY, QM_OPPOR_PQ_EMPTY_DEF); STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask); /* Enable/disable PF RL */ @@ -796,12 +816,10 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn, ecore_enable_vport_wfq(p_hwfn, vport_wfq_en); /* Init PBF CMDQ line credit */ - ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine, - max_phys_tcs_per_port, port_params); + ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine, max_phys_tcs_per_port, port_params); /* Init BTB blocks in PBF */ - ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine, - max_phys_tcs_per_port, port_params); + ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine, max_phys_tcs_per_port, port_params); ecore_global_rl_rt_init(p_hwfn, global_rl_params); @@ -830,33 +848,27 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn, u16 vport_id; u8 tc; - other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) * - QM_OTHER_PQS_PER_PF; + other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) * (ECORE_IS_E5(p_hwfn->p_dev) ? + QM_OTHER_PQS_PER_PF_E5 : QM_OTHER_PQS_PER_PF_E4); /* Clear first Tx PQ ID array for each VPORT */ for (vport_id = 0; vport_id < num_vports; vport_id++) for (tc = 0; tc < NUM_OF_TCS; tc++) - vport_params[vport_id].first_tx_pq_id[tc] = - QM_INVALID_PQ_ID; + vport_params[vport_id].first_tx_pq_id[tc] = QM_INVALID_PQ_ID; /* Map Other PQs (if any) */ -#if QM_OTHER_PQS_PER_PF > 0 - ecore_other_pq_map_rt_init(p_hwfn, pf_id, is_pf_loading, num_pf_cids, - num_tids, 0); -#endif + if ((ECORE_IS_E5(p_hwfn->p_dev) ? 
QM_OTHER_PQS_PER_PF_E5 : QM_OTHER_PQS_PER_PF_E4) > 0) + ecore_other_pq_map_rt_init(p_hwfn, pf_id, is_pf_loading, num_pf_cids, num_tids, 0); /* Map Tx PQs */ - if (ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, pf_id, max_phys_tcs_per_port, - is_pf_loading, num_pf_cids, num_vf_cids, - start_pq, num_pf_pqs, num_vf_pqs, - start_vport, other_mem_size_4kb, pq_params, - vport_params)) + if (ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, pf_id, max_phys_tcs_per_port, is_pf_loading, + num_pf_cids, num_vf_cids, start_pq, num_pf_pqs, num_vf_pqs, + start_vport, other_mem_size_4kb, pq_params, vport_params)) return -1; /* Init PF WFQ */ if (pf_wfq) - if (ecore_pf_wfq_rt_init(p_hwfn, pf_id, pf_wfq, - max_phys_tcs_per_port, + if (ecore_pf_wfq_rt_init(p_hwfn, pf_id, pf_wfq, max_phys_tcs_per_port, num_pf_pqs + num_vf_pqs, pq_params)) return -1; @@ -872,14 +884,15 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn, } int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq) + struct ecore_ptt *p_ptt, + u8 pf_id, + u16 pf_wfq) { u32 inc_val; inc_val = QM_WFQ_INC_VAL(pf_wfq); if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, - "Invalid PF WFQ weight configuration\n"); + DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration\n"); return -1; } @@ -889,19 +902,19 @@ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn, } int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl) + struct ecore_ptt *p_ptt, + u8 pf_id, + u32 pf_rl) { u32 inc_val; inc_val = QM_RL_INC_VAL(pf_rl); if (inc_val > QM_PF_RL_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, - "Invalid PF rate limit configuration\n"); + DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration\n"); return -1; } - ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4, - (u32)QM_RL_CRD_REG_SIGN_BIT); + ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4, (u32)QM_RL_CRD_REG_SIGN_BIT); ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFINCVAL + pf_id * 4, inc_val); return 0; @@ -918,22 +931,20 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn, inc_val = QM_WFQ_INC_VAL(wfq); if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, - "Invalid VPORT WFQ weight configuration\n"); + DP_NOTICE(p_hwfn, true, "Invalid VPORT WFQ configuration.\n"); return -1; } /* A VPORT can have several VPORT PQ IDs for various TCs */ for (tc = 0; tc < NUM_OF_TCS; tc++) { vp_pq_id = first_tx_pq_id[tc]; - if (vp_pq_id != QM_INVALID_PQ_ID) { - ecore_wr(p_hwfn, p_ptt, - QM_REG_WFQVPWEIGHT + vp_pq_id * 4, inc_val); - } + if (vp_pq_id == QM_INVALID_PQ_ID) + continue; + ecore_wr(p_hwfn, p_ptt, QM_REG_WFQVPWEIGHT + vp_pq_id * 4, inc_val); } return 0; - } +} int ecore_init_global_rl(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -948,37 +959,13 @@ int ecore_init_global_rl(struct ecore_hwfn *p_hwfn, return -1; } - ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + rl_id * 4, - (u32)QM_RL_CRD_REG_SIGN_BIT); - ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + rl_id * 4, inc_val); - - return 0; -} - -int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u8 vport_id, - u32 vport_rl, - u32 link_speed) -{ - u32 inc_val, max_qm_global_rls = ECORE_IS_E5(p_hwfn->p_dev) ? MAX_QM_GLOBAL_RLS_E5 - : MAX_QM_GLOBAL_RLS_E4; - - if (vport_id >= max_qm_global_rls) { - DP_NOTICE(p_hwfn, true, - "Invalid VPORT ID for rate limiter configuration\n"); - return -1; - } - - inc_val = QM_RL_INC_VAL(vport_rl ? 
			      vport_rl : link_speed);
-	if (inc_val > QM_VP_RL_MAX_INC_VAL(link_speed)) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT rate-limit configuration\n");
-		return -1;
-	}
-
-	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
-		 (u32)QM_RL_CRD_REG_SIGN_BIT);
-	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
+	ecore_wr(p_hwfn, p_ptt, rl_id < QM_REG_RLGLBLCRD_SIZE ?
+		 QM_REG_RLGLBLCRD + rl_id * 4 :
+		 QM_REG_RLGLBLCRD_MSB_E5 + (rl_id - QM_REG_RLGLBLCRD_SIZE) * 4,
+		 (u32)QM_RL_CRD_REG_SIGN_BIT);
+	ecore_wr(p_hwfn, p_ptt, rl_id < QM_REG_RLGLBLINCVAL_SIZE ?
+		 QM_REG_RLGLBLINCVAL + rl_id * 4 :
+		 QM_REG_RLGLBLINCVAL_MSB_E5 + (rl_id - QM_REG_RLGLBLINCVAL_SIZE) * 4, inc_val);

	return 0;
}

@@ -986,9 +973,11 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
			    struct ecore_ptt *p_ptt,
			    bool is_release_cmd,
-			    bool is_tx_pq, u16 start_pq, u16 num_pqs)
+			    bool is_tx_pq,
+			    u16 start_pq,
+			    u16 num_pqs)
{
-	u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = { 0 };
+	u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = {0};
	u32 pq_mask = 0, last_pq, pq_id;

	last_pq = start_pq + num_pqs - 1;

@@ -1003,16 +992,13 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
			pq_mask |= (1 << (pq_id % QM_STOP_PQ_MASK_WIDTH));

		/* If last PQ or end of PQ mask, write command */
-		if ((pq_id == last_pq) ||
-		    (pq_id % QM_STOP_PQ_MASK_WIDTH ==
-		    (QM_STOP_PQ_MASK_WIDTH - 1))) {
-			QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PAUSE_MASK,
-					 pq_mask);
-			QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, GROUP_ID,
-					 pq_id / QM_STOP_PQ_MASK_WIDTH);
-			if (!ecore_send_qm_cmd
-			    (p_hwfn, p_ptt, QM_STOP_CMD_ADDR, cmd_arr[0],
-			     cmd_arr[1]))
+		if ((pq_id == last_pq) || (pq_id % QM_STOP_PQ_MASK_WIDTH ==
+					   (QM_STOP_PQ_MASK_WIDTH - 1))) {
+			QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PAUSE_MASK, pq_mask);
+			QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD,
+					 GROUP_ID, pq_id / QM_STOP_PQ_MASK_WIDTH);
+			if (!ecore_send_qm_cmd(p_hwfn, p_ptt, QM_STOP_CMD_ADDR,
+					       cmd_arr[0], cmd_arr[1]))
				return false;
			pq_mask = 0;
		}

@@ -1029,8 +1015,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
#define NIG_ETS_MIN_WFQ_BYTES 1600

/* NIG: ETS constants */
-#define NIG_ETS_UP_BOUND(weight, mtu) \
-	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+#define NIG_ETS_UP_BOUND(weight, mtu) (2 * ((weight) > (mtu) ? (weight) : (mtu)))

/* NIG: RL constants */

@@ -1046,8 +1031,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
/* Rate in mbps */
#define NIG_RL_INC_VAL(rate) (((rate) * NIG_RL_PERIOD) / 8)

-#define NIG_RL_MAX_VAL(inc_val, mtu) \
-	(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+#define NIG_RL_MAX_VAL(inc_val, mtu) (2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))

/* NIG: packet priority configuration constants */
#define NIG_PRIORITY_MAP_TC_BITS 4

@@ -1055,7 +1039,8 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,

void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
			struct ecore_ptt *p_ptt,
-			struct init_ets_req *req, bool is_lb)
+			struct init_ets_req *req,
+			bool is_lb)
{
	u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff;
	u32 tc_bound_base_addr, tc_bound_addr_diff;
@@ -1063,8 +1048,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
	u8 tc, num_tc, tc_client_offset;

	num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
-	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET :
-				   NIG_TX_ETS_CLIENT_OFFSET;
+	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET : NIG_TX_ETS_CLIENT_OFFSET;
	min_weight = 0xffffffff;
	tc_weight_base_addr = is_lb ?
NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : NIG_REG_TX_ARB_CREDIT_WEIGHT_0; @@ -1098,17 +1082,14 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn, } /* Write SP map */ - ecore_wr(p_hwfn, p_ptt, - is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT : - NIG_REG_TX_ARB_CLIENT_IS_STRICT, - (sp_tc_map << tc_client_offset)); + ecore_wr(p_hwfn, p_ptt, is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT : + NIG_REG_TX_ARB_CLIENT_IS_STRICT, (sp_tc_map << tc_client_offset)); /* Write WFQ map */ - ecore_wr(p_hwfn, p_ptt, - is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ : - NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ, - (wfq_tc_map << tc_client_offset)); - /* write WFQ weights */ + ecore_wr(p_hwfn, p_ptt, is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ : + NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ, (wfq_tc_map << tc_client_offset)); + + /* Write WFQ weights */ for (tc = 0; tc < num_tc; tc++, tc_client_offset++) { struct init_ets_tc_req *tc_req = &req->tc_req[tc]; u32 byte_weight; @@ -1117,8 +1098,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn, continue; /* Translate weight to bytes */ - byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) / - min_weight; + byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) / min_weight; /* Write WFQ weight */ ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr + @@ -1138,62 +1118,50 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn, u32 ctrl, inc_val, reg_offset; u8 tc; - /* Disable global MAC+LB RL */ - ctrl = - NIG_RL_BASE_TYPE << - NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT; + /* Disable global MAC + LB RL */ + ctrl = NIG_RL_BASE_TYPE << + NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT; ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl); - /* Configure and enable global MAC+LB RL */ + /* Configure and enable global MAC + LB RL */ if (req->lb_mac_rate) { /* Configure */ ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD, NIG_RL_PERIOD_CLK_25M); inc_val = NIG_RL_INC_VAL(req->lb_mac_rate); - ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_VALUE, - inc_val); + ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_VALUE, inc_val); ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE, NIG_RL_MAX_VAL(inc_val, req->mtu)); /* Enable */ - ctrl |= - 1 << - NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT; + ctrl |= 1 << NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT; ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl); } /* Disable global LB-only RL */ - ctrl = - NIG_RL_BASE_TYPE << - NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT; + ctrl = NIG_RL_BASE_TYPE << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT; ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl); /* Configure and enable global LB-only RL */ if (req->lb_rate) { /* Configure */ - ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD, - NIG_RL_PERIOD_CLK_25M); + ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD, NIG_RL_PERIOD_CLK_25M); inc_val = NIG_RL_INC_VAL(req->lb_rate); - ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_VALUE, - inc_val); + ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_VALUE, inc_val); ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE, NIG_RL_MAX_VAL(inc_val, req->mtu)); /* Enable */ - ctrl |= - 1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT; + ctrl |= 1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT; ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl); } /* Per-TC RLs */ - for (tc = 0, reg_offset = 0; tc < 
NUM_OF_PHYS_TCS; - tc++, reg_offset += 4) { + for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS; tc++, reg_offset += 4) { /* Disable TC RL */ - ctrl = - NIG_RL_BASE_TYPE << - NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT; - ecore_wr(p_hwfn, p_ptt, - NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl); + ctrl = NIG_RL_BASE_TYPE << + NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT; + ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl); /* Configure and enable TC RL */ if (!req->tc_rate[tc]) @@ -1203,16 +1171,13 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn, ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 + reg_offset, NIG_RL_PERIOD_CLK_25M); inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]); - ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 + - reg_offset, inc_val); + ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 + reg_offset, inc_val); ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 + reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu)); /* Enable */ - ctrl |= 1 << - NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT; - ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 + - reg_offset, ctrl); + ctrl |= 1 << NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT; + ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl); } } @@ -1228,8 +1193,7 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn, if (!req->pri[pri].valid) continue; - pri_tc_mask |= (req->pri[pri].tc_id << - (pri * NIG_PRIORITY_MAP_TC_BITS)); + pri_tc_mask |= (req->pri[pri].tc_id << (pri * NIG_PRIORITY_MAP_TC_BITS)); tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri); } @@ -1238,10 +1202,8 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn, /* Write TC -> priority mask */ for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) { - ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4, - tc_pri_mask[tc]); - ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_TC0_PRIORITY_MASK + tc * 4, - tc_pri_mask[tc]); + ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4, tc_pri_mask[tc]); + ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_TC0_PRIORITY_MASK + tc * 4, tc_pri_mask[tc]); } } @@ -1251,18 +1213,17 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn, /* PRS: ETS configuration constants */ #define PRS_ETS_MIN_WFQ_BYTES 1600 -#define PRS_ETS_UP_BOUND(weight, mtu) \ - (2 * ((weight) > (mtu) ? (weight) : (mtu))) +#define PRS_ETS_UP_BOUND(weight, mtu) (2 * ((weight) > (mtu) ? 
(weight) : (mtu))) void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, struct init_ets_req *req) + struct ecore_ptt *p_ptt, + struct init_ets_req *req) { u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff; u8 tc, sp_tc_map = 0, wfq_tc_map = 0; - tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0; + tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0; tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 - PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0; @@ -1284,14 +1245,13 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn, min_weight = tc_req->weight; } - /* write SP map */ + /* Write SP map */ ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map); - /* write WFQ map */ - ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ, - wfq_tc_map); + /* Write WFQ map */ + ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ, wfq_tc_map); - /* write WFQ weights */ + /* Write WFQ weights */ for (tc = 0; tc < NUM_OF_TCS; tc++) { struct init_ets_tc_req *tc_req = &req->tc_req[tc]; u32 byte_weight; @@ -1300,17 +1260,15 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn, continue; /* Translate weight to bytes */ - byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) / - min_weight; + byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) / min_weight; /* Write WFQ weight */ - ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc * - tc_weight_addr_diff, byte_weight); + ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + + tc * tc_weight_addr_diff, byte_weight); /* Write WFQ upper bound */ ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 + - tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight, - req->mtu)); + tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight, req->mtu)); } } @@ -1327,16 +1285,15 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn, /* Temporary big RAM allocation - should be updated */ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, struct init_brb_ram_req *req) + struct ecore_ptt *p_ptt, + struct init_brb_ram_req *req) { u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks; u32 active_port_blocks, reg_offset = 0; u8 port, active_ports = 0; - tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc, - BRB_BLOCK_SIZE); - min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size, - BRB_BLOCK_SIZE); + tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc, BRB_BLOCK_SIZE); + min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size, BRB_BLOCK_SIZE); total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 : BRB_TOTAL_RAM_BLOCKS_BB; @@ -1354,26 +1311,20 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, u8 tc; /* Calculate per-port sizes */ - tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc, - BRB_BLOCK_SIZE); - port_blocks = req->num_active_tcs[port] ? active_port_blocks : - 0; - port_guaranteed_blocks = req->num_active_tcs[port] * - tc_guaranteed_blocks; + tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc, BRB_BLOCK_SIZE); + port_blocks = req->num_active_tcs[port] ? 
active_port_blocks : 0; + port_guaranteed_blocks = req->num_active_tcs[port] * tc_guaranteed_blocks; port_shared_blocks = port_blocks - port_guaranteed_blocks; - full_xoff_th = req->num_active_tcs[port] * - BRB_MIN_BLOCKS_PER_TC; + full_xoff_th = req->num_active_tcs[port] * BRB_MIN_BLOCKS_PER_TC; full_xon_th = full_xoff_th + min_pkt_size_blocks; pause_xoff_th = tc_headroom_blocks; pause_xon_th = pause_xoff_th + min_pkt_size_blocks; /* Init total size per port */ - ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4, - port_blocks); + ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4, port_blocks); /* Init shared size per port */ - ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4, - port_shared_blocks); + ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4, port_shared_blocks); for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) { /* Clear init values for non-active TCs */ @@ -1386,43 +1337,33 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, } /* Init guaranteed size per TC */ - ecore_wr(p_hwfn, p_ptt, - BRB_REG_TC_GUARANTIED_0 + reg_offset, + ecore_wr(p_hwfn, p_ptt, BRB_REG_TC_GUARANTIED_0 + reg_offset, tc_guaranteed_blocks); - ecore_wr(p_hwfn, p_ptt, - BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset, - BRB_HYST_BLOCKS); + ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + + reg_offset, BRB_HYST_BLOCKS); /* Init pause/full thresholds per physical TC - for * loopback traffic. */ - ecore_wr(p_hwfn, p_ptt, - BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 + + ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 + reg_offset, full_xoff_th); - ecore_wr(p_hwfn, p_ptt, - BRB_REG_LB_TC_FULL_XON_THRESHOLD_0 + + ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_FULL_XON_THRESHOLD_0 + reg_offset, full_xon_th); - ecore_wr(p_hwfn, p_ptt, - BRB_REG_LB_TC_PAUSE_XOFF_THRESHOLD_0 + + ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_PAUSE_XOFF_THRESHOLD_0 + reg_offset, pause_xoff_th); - ecore_wr(p_hwfn, p_ptt, - BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 + + ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 + reg_offset, pause_xon_th); /* Init pause/full thresholds per physical TC - for * main traffic. 
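 * The four thresholds reused here were computed above: full XOFF/XON from
 * the port's active TC count, pause XOFF/XON from the per-TC headroom,
 * with each XON level trailing its XOFF level by min_pkt_size_blocks.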
 */
-		ecore_wr(p_hwfn, p_ptt,
-			 BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
			 reg_offset, full_xoff_th);
-		ecore_wr(p_hwfn, p_ptt,
-			 BRB_REG_MAIN_TC_FULL_XON_THRESHOLD_0 +
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_FULL_XON_THRESHOLD_0 +
			 reg_offset, full_xon_th);
-		ecore_wr(p_hwfn, p_ptt,
-			 BRB_REG_MAIN_TC_PAUSE_XOFF_THRESHOLD_0 +
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_PAUSE_XOFF_THRESHOLD_0 +
			 reg_offset, pause_xoff_th);
-		ecore_wr(p_hwfn, p_ptt,
-			 BRB_REG_MAIN_TC_PAUSE_XON_THRESHOLD_0 +
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_PAUSE_XON_THRESHOLD_0 +
			 reg_offset, pause_xon_th);
		}
	}
@@ -1431,12 +1372,14 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
#endif /* UNUSED_HSI_FUNC */

#ifndef UNUSED_HSI_FUNC
-#define ARR_REG_WR(dev, ptt, addr, arr, arr_size) \
-	do { \
-		u32 i; \
-		for (i = 0; i < (arr_size); i++) \
-			ecore_wr(dev, ptt, ((addr) + (4 * i)), \
-				 ((u32 *)&(arr))[i]); \
+/* @DPDK */
+#define ARR_REG_WR(dev, ptt, addr, arr, arr_size) \
+	do { \
+		u32 i; \
+		\
+		for (i = 0; i < (arr_size); i++) \
+			ecore_wr(dev, ptt, ((addr) + (4 * i)), \
+				 ((u32 *)&(arr))[i]); \
	} while (0)

#ifndef DWORDS_TO_BYTES
@@ -1445,25 +1388,27 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,

/**
- * @brief ecore_dmae_to_grc - is an internal function - writes from host to
- *        wide-bus registers (split registers are not supported yet)
+ * @brief ecore_dmae_to_grc - is an internal function - writes from host to wide-bus registers
+ *        (split registers are not supported yet)
 *
 * @param p_hwfn - HW device data
 * @param p_ptt - ptt window used for writing the registers.
- * @param pData - pointer to source data.
+ * @param p_data - pointer to source data.
 * @param addr - Destination register address.
 * @param len_in_dwords - data length in DWORDS (u32)
 */
+
static int ecore_dmae_to_grc(struct ecore_hwfn *p_hwfn,
-			     struct ecore_ptt *p_ptt,
-			     u32 *pData,
-			     u32 addr,
-			     u32 len_in_dwords)
+			     struct ecore_ptt *p_ptt,
+			     u32 *p_data,
+			     u32 addr,
+			     u32 len_in_dwords)
+
{
	struct dmae_params params;
	bool read_using_dmae = false;

-	if (!pData)
+	if (!p_data)
		return -1;

	/* Set DMAE params */
@@ -1472,17 +1417,17 @@ static int ecore_dmae_to_grc(struct ecore_hwfn *p_hwfn,
	SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 1);

	/* Execute DMAE command */
-	read_using_dmae = !ecore_dmae_host2grc(p_hwfn, p_ptt,
-					       (u64)(osal_uintptr_t)(pData),
+	read_using_dmae = !ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)(p_data),
					       addr, len_in_dwords, &params);
	if (!read_using_dmae)
-		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
-			   "Failed writing to chip using DMAE, using GRC instead\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG, "Failed writing to chip using DMAE, using GRC instead\n");
+
	/* If not read using DMAE, read using GRC */
-	if (!read_using_dmae)
+	if (!read_using_dmae) {
		/* write to registers using GRC */
-		ARR_REG_WR(p_hwfn, p_ptt, addr, pData, len_in_dwords);
+		ARR_REG_WR(p_hwfn, p_ptt, addr, p_data, len_in_dwords);
+	}

	return len_in_dwords;
}
@@ -1497,12 +1442,13 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, u32 ethType)
#endif /* UNUSED_HSI_FUNC */

#define SET_TUNNEL_TYPE_ENABLE_BIT(var, offset, enable) \
-(var = ((var) & ~(1 << (offset))) | ((enable) ? (1 << (offset)) : 0))
-#define PRS_ETH_TUNN_OUTPUT_FORMAT -188897008
-#define PRS_ETH_OUTPUT_FORMAT -46832
+	(var = ((var) & ~(1 << (offset))) | ((enable) ?
(1 << (offset)) : 0)) +#define PRS_ETH_TUNN_OUTPUT_FORMAT_E4 0xF4DAB910 +#define PRS_ETH_OUTPUT_FORMAT_E4 0xFFFF4910 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u16 dest_port) + struct ecore_ptt *p_ptt, + u16 dest_port) { /* Update PRS register */ ecore_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port); @@ -1515,7 +1461,8 @@ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn, } void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, bool vxlan_enable) + struct ecore_ptt *p_ptt, + bool vxlan_enable) { u32 reg_val; @@ -1524,31 +1471,38 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn, SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, PRS_ENCAPSULATION_TYPE_EN_FLAGS_VXLAN_ENABLE_SHIFT, vxlan_enable); ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val); - if (reg_val) { /* TODO: handle E5 init */ - reg_val = ecore_rd(p_hwfn, p_ptt, - PRS_REG_OUTPUT_FORMAT_4_0_BB_K2); + if ((reg_val != 0) && (!ECORE_IS_E5(p_hwfn->p_dev))) { + reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2); - /* Update output only if tunnel blocks not included. */ - if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT) + /* @DPDK */ + /* Update output Format only if tunnel blocks not included. */ + if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT_E4) ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2, - (u32)PRS_ETH_TUNN_OUTPUT_FORMAT); + (u32)PRS_ETH_TUNN_OUTPUT_FORMAT_E4); } /* Update NIG register */ reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE); SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, - NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT, - vxlan_enable); + NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT, vxlan_enable); ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val); - /* Update DORQ register */ - ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN_BB_K2, - vxlan_enable ? 1 : 0); + /* Update PBF register */ + if (ECORE_IS_E5(p_hwfn->p_dev)) { + reg_val = ecore_rd(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5); + SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, + PBF_REG_TUNNEL_ENABLES_TUNNEL_VXLAN_EN_E5_SHIFT, vxlan_enable); + ecore_wr(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5, reg_val); + } else { /* Update DORQ register */ + ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN_BB_K2, + vxlan_enable ? 1 : 0); + } } void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - bool eth_gre_enable, bool ip_gre_enable) + struct ecore_ptt *p_ptt, + bool eth_gre_enable, + bool ip_gre_enable) { u32 reg_val; @@ -1561,35 +1515,44 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GRE_ENABLE_SHIFT, ip_gre_enable); ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val); - if (reg_val) { /* TODO: handle E5 init */ - reg_val = ecore_rd(p_hwfn, p_ptt, - PRS_REG_OUTPUT_FORMAT_4_0_BB_K2); + if ((reg_val != 0) && (!ECORE_IS_E5(p_hwfn->p_dev))) { + reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2); - /* Update output only if tunnel blocks not included. */ - if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT) + /* @DPDK */ + /* Update output Format only if tunnel blocks not included. 
*/ + if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT_E4) ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2, - (u32)PRS_ETH_TUNN_OUTPUT_FORMAT); + (u32)PRS_ETH_TUNN_OUTPUT_FORMAT_E4); } /* Update NIG register */ reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE); - SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, - NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT, - eth_gre_enable); - SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, - NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT, - ip_gre_enable); + SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT, + eth_gre_enable); + SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT, + ip_gre_enable); ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val); - /* Update DORQ registers */ - ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN_BB_K2, - eth_gre_enable ? 1 : 0); - ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN_BB_K2, - ip_gre_enable ? 1 : 0); + /* Update PBF register */ + if (ECORE_IS_E5(p_hwfn->p_dev)) { + reg_val = ecore_rd(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5); + SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, + PBF_REG_TUNNEL_ENABLES_TUNNEL_GRE_ETH_EN_E5_SHIFT, eth_gre_enable); + SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, + PBF_REG_TUNNEL_ENABLES_TUNNEL_GRE_IP_EN_E5_SHIFT, ip_gre_enable); + ecore_wr(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5, reg_val); + } else { /* Update DORQ registers */ + ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN_BB_K2, + eth_gre_enable ? 1 : 0); + ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN_BB_K2, + ip_gre_enable ? 1 : 0); + } } void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u16 dest_port) + struct ecore_ptt *p_ptt, + u16 dest_port) + { /* Update PRS register */ ecore_wr(p_hwfn, p_ptt, PRS_REG_NGE_PORT, dest_port); @@ -1603,7 +1566,8 @@ void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn, void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, - bool eth_geneve_enable, bool ip_geneve_enable) + bool eth_geneve_enable, + bool ip_geneve_enable) { u32 reg_val; @@ -1616,35 +1580,42 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn, PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GENEVE_ENABLE_SHIFT, ip_geneve_enable); ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val); - if (reg_val) { /* TODO: handle E5 init */ - reg_val = ecore_rd(p_hwfn, p_ptt, - PRS_REG_OUTPUT_FORMAT_4_0_BB_K2); + if ((reg_val != 0) && (!ECORE_IS_E5(p_hwfn->p_dev))) { + reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2); - /* Update output only if tunnel blocks not included. */ - if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT) + /* @DPDK */ + /* Update output Format only if tunnel blocks not included. */ + if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT_E4) ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2, - (u32)PRS_ETH_TUNN_OUTPUT_FORMAT); + (u32)PRS_ETH_TUNN_OUTPUT_FORMAT_E4); } /* Update NIG register */ - ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE, - eth_geneve_enable ? 1 : 0); - ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE, - ip_geneve_enable ? 1 : 0); + ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE, eth_geneve_enable ? 1 : 0); + ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE, ip_geneve_enable ? 1 : 0); /* EDPM with geneve tunnel not supported in BB */ if (ECORE_IS_BB_B0(p_hwfn->p_dev)) return; - /* Update DORQ registers */ - ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2, - eth_geneve_enable ? 
			 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2,
-		 ip_geneve_enable ? 1 : 0);
+	/* Update PBF register */
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		reg_val = ecore_rd(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5);
+		SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
+			PBF_REG_TUNNEL_ENABLES_TUNNEL_NGE_ETH_EN_E5_SHIFT, eth_geneve_enable);
+		SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
+			PBF_REG_TUNNEL_ENABLES_TUNNEL_NGE_IP_EN_E5_SHIFT, ip_geneve_enable);
+		ecore_wr(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5, reg_val);
+	} else { /* Update DORQ registers */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2,
+			 eth_geneve_enable ? 1 : 0);
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2,
+			 ip_geneve_enable ? 1 : 0);
+	}
}

#define PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET 3
-#define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT -925189872
+#define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT_E4 0xC8DAB910

void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
				  struct ecore_ptt *p_ptt,
@@ -1658,13 +1629,14 @@ void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
	/* set VXLAN_NO_L2_ENABLE mask */
	cfg_mask = (1 << PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET);

-	if (enable) {
+	if ((enable) && (!ECORE_IS_E5(p_hwfn->p_dev))) {
		/* set VXLAN_NO_L2_ENABLE flag */
		reg_val |= cfg_mask;

		/* update PRS FIC Format register */
		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
-			 (u32)PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT);
+			 (u32)PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT_E4);
+
	} else {
		/* clear VXLAN_NO_L2_ENABLE flag */
		reg_val &= ~cfg_mask;
	}
@@ -1683,9 +1655,10 @@ void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
#define RAM_LINE_SIZE sizeof(u64)
#define REG_SIZE sizeof(u32)

+
void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
-		       struct ecore_ptt *p_ptt,
-		       u16 pf_id)
+		       struct ecore_ptt *p_ptt,
+		       u16 pf_id)
{
	struct regpair ram_line;

	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
@@ -1699,35 +1672,32 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id, 0);

	/* Zero ramline */
-	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
-			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
-			  sizeof(ram_line) / REG_SIZE);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line, PRS_REG_GFT_PROFILE_MASK_RAM +
+			  RAM_LINE_SIZE * pf_id, sizeof(ram_line) / REG_SIZE);
}

-
+/* @DPDK */
void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
-				   struct ecore_ptt *p_ptt)
+				   struct ecore_ptt *p_ptt)
{
	u32 rfs_cm_hdr_event_id;

	/* Set RFS event ID to be awakened in Tstorm by PRS */
	rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
-	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
-			       PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
-	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
-			       PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
+	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID << PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
+	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR << PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
}

void ecore_gft_config(struct ecore_hwfn *p_hwfn,
-		      struct ecore_ptt *p_ptt,
-		      u16 pf_id,
-		      bool tcp,
-		      bool udp,
-		      bool ipv4,
-		      bool ipv6,
-		      enum gft_profile_type profile_type)
+		      struct ecore_ptt *p_ptt,
+		      u16 pf_id,
+		      bool tcp,
+		      bool udp,
+		      bool ipv4,
+		      bool ipv6,
+		      enum gft_profile_type profile_type)
{
	u32 reg_val, cam_line, search_non_ip_as_gft;
	struct regpair ram_line = { 0 };

@@ -1740,8 +1710,7 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
		DP_NOTICE(p_hwfn, true, "gft_config: unsupported gft_profile_type\n");

	/* Set RFS event ID to be awakened in Tstorm by PRS */
-	reg_val = T_ETH_PACKET_MATCH_RFS_EVENTID <<
-		  PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
+	reg_val = T_ETH_PACKET_MATCH_RFS_EVENTID << PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
	reg_val |= PARSER_ETH_CONN_CM_HDR << PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, reg_val);

@@ -1756,13 +1725,11 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
	SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_VALID, 1);

	/* Filters are per PF!! */
-	SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_PF_ID_MASK,
-		  GFT_CAM_LINE_MAPPED_PF_ID_MASK_MASK);
+	SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_PF_ID_MASK, GFT_CAM_LINE_MAPPED_PF_ID_MASK_MASK);
	SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_PF_ID, pf_id);

	if (!(tcp && udp)) {
-		SET_FIELD(cam_line,
-			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK,
+		SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK,
			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK);
		if (tcp)
			SET_FIELD(cam_line,
@@ -1785,10 +1752,8 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
	}

	/* Write characteristics to cam */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
-		 cam_line);
-	cam_line = ecore_rd(p_hwfn, p_ptt,
-			    PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id, cam_line);
+	cam_line = ecore_rd(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);

	/* Write line to RAM - compare to filter 4 tuple */

@@ -1823,19 +1788,15 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
		search_non_ip_as_gft = 1;
	}

-	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_NON_IP_AS_GFT,
-		 search_non_ip_as_gft);
-	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
-			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
-			  sizeof(ram_line) / REG_SIZE);
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_NON_IP_AS_GFT, search_non_ip_as_gft);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line, PRS_REG_GFT_PROFILE_MASK_RAM +
+			  RAM_LINE_SIZE * pf_id, sizeof(ram_line) / REG_SIZE);

	/* Set default profile so that no filter match will happen */
	ram_line.lo = 0xffffffff;
	ram_line.hi = 0x3ff;
-	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
-			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE *
-			  PRS_GFT_CAM_LINES_NO_MATCH,
-			  sizeof(ram_line) / REG_SIZE);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line, PRS_REG_GFT_PROFILE_MASK_RAM +
+			  RAM_LINE_SIZE * PRS_GFT_CAM_LINES_NO_MATCH, sizeof(ram_line) / REG_SIZE);

	/* Enable gft search */
	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
@@ -1844,14 +1805,16 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,

#endif /* UNUSED_HSI_FUNC */

/* Configure VF zone size mode */
-void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
-				    struct ecore_ptt *p_ptt, u16 mode,
+void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 mode,
				    bool runtime_init)
{
	u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
	u32 msdm_vf_offset_mask;

+	if (ECORE_IS_E5(p_hwfn->p_dev))
+		DP_NOTICE(p_hwfn, true, "config_vf_zone_size_mode: Not supported in E5\n");
+
	if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
		msdm_vf_size_log += 1;
	else if (mode == VF_ZONE_SIZE_MODE_QUAD)
@@ -1860,55 +1823,51 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
	msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;

	if (runtime_init) {
-		STORE_RT_REG(p_hwfn,
-			     PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
-			     msdm_vf_size_log);
-		STORE_RT_REG(p_hwfn,
-			     PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET,
-
msdm_vf_offset_mask); + STORE_RT_REG(p_hwfn, PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET, msdm_vf_size_log); + STORE_RT_REG(p_hwfn, PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET, msdm_vf_offset_mask); } else { - ecore_wr(p_hwfn, p_ptt, - PGLUE_B_REG_MSDM_VF_SHIFT_B, msdm_vf_size_log); - ecore_wr(p_hwfn, p_ptt, - PGLUE_B_REG_MSDM_OFFSET_MASK_B, msdm_vf_offset_mask); + ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MSDM_VF_SHIFT_B, msdm_vf_size_log); + ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MSDM_OFFSET_MASK_B, msdm_vf_offset_mask); } } /* Get mstorm statistics for offset by VF zone size mode */ -u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn, - u16 stat_cnt_id, +u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn, u16 stat_cnt_id, u16 vf_zone_size_mode) { u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id); - if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) && - (stat_cnt_id > MAX_NUM_PFS)) { + if (ECORE_IS_E5(p_hwfn->p_dev)) + return offset; + + if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) && (stat_cnt_id > MAX_NUM_PFS)) { if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE) offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) * - (stat_cnt_id - MAX_NUM_PFS); + (stat_cnt_id - MAX_NUM_PFS); else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD) offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) * - (stat_cnt_id - MAX_NUM_PFS); + (stat_cnt_id - MAX_NUM_PFS); } return offset; } /* Get mstorm VF producer offset by VF zone size mode */ -u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, - u8 vf_id, - u8 vf_queue_id, +u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8 vf_queue_id, u16 vf_zone_size_mode) { u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id); + if (ECORE_IS_E5(p_hwfn->p_dev)) + DP_NOTICE(p_hwfn, true, "get_mstorm_eth_vf_prods_offset: Not supported in E5. 
Use queue zone instead.\n");
+
	if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) * vf_id;
		else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD)
			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
-				  vf_id;
+				  vf_id;
	}

	return offset;
@@ -1919,12 +1878,10 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
#endif
static u8 cdu_crc8_table[CRC8_TABLE_SIZE];

-/* Calculate and return CDU validation byte per connection type / region /
- * cid
- */
+/* Calculate and return CDU validation byte per connection type/region/cid */
static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
{
-	static u8 crc8_table_valid; /*automatically initialized to 0*/
+	static u8 crc8_table_valid; /* automatically initialized to 0 */
	u8 crc, validation_byte = 0;
	u32 validation_string = 0;
	u32 data_to_crc;

@@ -1934,62 +1891,55 @@ static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
		crc8_table_valid = 1;
	}

-	/*
-	 * The CRC is calculated on the String-to-compress:
-	 * [31:8] = {CID[31:20],CID[11:0]}
-	 * [7:4]  = Region
-	 * [3:0]  = Type
+	/* The CRC is calculated on the String-to-compress:
+	 * [31:8] = {CID[31:20],CID[11:0]}
+	 * [ 7:4] = Region
+	 * [ 3:0] = Type
	 */
-#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
-	CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
-	validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
+	validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
#endif

-#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
-	CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
-	validation_string |= ((region & 0xF) << 4);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
+	validation_string |= ((region & 0xF) << 4);
#endif

-#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
-	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
-	validation_string |= (conn_type & 0xF);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
+	validation_string |= (conn_type & 0xF);
#endif

	/* Convert to big-endian and calculate CRC8 */
	data_to_crc = OSAL_BE32_TO_CPU(validation_string);
-	crc = OSAL_CRC8(cdu_crc8_table, (u8 *)&data_to_crc, sizeof(data_to_crc),
-			CRC8_INIT_VALUE);
+	crc = OSAL_CRC8(cdu_crc8_table, (u8 *)&data_to_crc, sizeof(data_to_crc), CRC8_INIT_VALUE);

	/* The validation byte [7:0] is composed:
	 * for type A validation
-	 * [7]   = active configuration bit
-	 * [6:0] = crc[6:0]
+	 * [7]	 = active configuration bit
+	 * [6:0] = crc[6:0]
	 *
	 * for type B validation
-	 * [7]   = active configuration bit
-	 * [6:3] = connection_type[3:0]
-	 * [2:0] = crc[2:0]
+	 * [7]	 = active configuration bit
+	 * [6:3] = connection_type[3:0]
+	 * [2:0] = crc[2:0]
	 */
	validation_byte |= ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >>
			     CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7;

-#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
-	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
-	validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
+	validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
#else
-	validation_byte |= crc & 0x7F;
+	validation_byte |= crc & 0x7F;
#endif

	return validation_byte;
}

/* Calculate and set validation bytes for session context */
-void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
-				       void *p_ctx_mem, u16 ctx_size,
+void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, u16 ctx_size,
				       u8 ctx_type, u32 cid)
{
	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;

-	p_ctx = (u8 * const)p_ctx_mem;
+	p_ctx = (u8 *const)p_ctx_mem;

	if (ECORE_IS_E5(p_hwfn->p_dev)) {
		x_val_ptr = &p_ctx[con_region_offsets_e5[0][ctx_type]];
@@ -2009,12 +1959,12 @@ void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
}

/* Calculate and set validation bytes for task context */
-void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-				    u16 ctx_size, u8 ctx_type, u32 tid)
+void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid)
{
	u8 *p_ctx, *region1_val_ptr;

-	p_ctx = (u8 * const)p_ctx_mem;
+	p_ctx = (u8 *const)p_ctx_mem;

	region1_val_ptr = ECORE_IS_E5(p_hwfn->p_dev) ?
			  &p_ctx[task_region_offsets_e5[0][ctx_type]] :
@@ -2026,13 +1976,12 @@ void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
}

/* Memset session context to 0 while preserving validation bytes */
-void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-			      u32 ctx_size, u8 ctx_type)
+void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
{
	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
	u8 x_val, t_val, u_val;

-	p_ctx = (u8 * const)p_ctx_mem;
+	p_ctx = (u8 *const)p_ctx_mem;

	if (ECORE_IS_E5(p_hwfn->p_dev)) {
		x_val_ptr = &p_ctx[con_region_offsets_e5[0][ctx_type]];
@@ -2056,13 +2005,12 @@ void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
}

/* Memset task context to 0 while preserving validation bytes */
-void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-			   u32 ctx_size, u8 ctx_type)
+void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
{
	u8 *p_ctx, *region1_val_ptr;
	u8 region1_val;

-	p_ctx = (u8 * const)p_ctx_mem;
+	p_ctx = (u8 *const)p_ctx_mem;

	region1_val_ptr = ECORE_IS_E5(p_hwfn->p_dev) ?
&p_ctx[task_region_offsets_e5[0][ctx_type]] : @@ -2076,8 +2024,7 @@ void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, } /* Enable and configure context validation */ -void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { u32 ctx_validation; @@ -2094,33 +2041,77 @@ void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn, ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation); } -#define PHYS_ADDR_DWORDS DIV_ROUND_UP(sizeof(dma_addr_t), 4) -#define OVERLAY_HDR_SIZE_DWORDS (sizeof(struct fw_overlay_buf_hdr) / 4) +/******************************************************************************* + * File name : rdma_init.c + * Author : Michael Shteinbok + ******************************************************************************* + ******************************************************************************* + * Description: + * RDMA HSI functions + * + ******************************************************************************* + * Notes: This is the input to the auto generated file drv_init_fw_funcs.c + * + *******************************************************************************/ -static u32 ecore_get_overlay_addr_ram_addr(struct ecore_hwfn *p_hwfn, - u8 storm_id) +static u32 ecore_get_rdma_assert_ram_addr(struct ecore_hwfn *p_hwfn, + u8 storm_id) { switch (storm_id) { case 0: return TSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + - TSTORM_OVERLAY_BUF_ADDR_OFFSET; + TSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id); case 1: return MSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + - MSTORM_OVERLAY_BUF_ADDR_OFFSET; + MSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id); case 2: return USEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + - USTORM_OVERLAY_BUF_ADDR_OFFSET; + USTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id); case 3: return XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + - XSTORM_OVERLAY_BUF_ADDR_OFFSET; + XSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id); case 4: return YSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + - YSTORM_OVERLAY_BUF_ADDR_OFFSET; + YSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id); case 5: return PSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + - PSTORM_OVERLAY_BUF_ADDR_OFFSET; + PSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id); + + default: return 0; + } +} + +void ecore_set_rdma_error_level(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 assert_level[NUM_STORMS]) +{ + u8 storm_id; + for (storm_id = 0; storm_id < NUM_STORMS; storm_id++) { + u32 ram_addr = ecore_get_rdma_assert_ram_addr(p_hwfn, storm_id); + + ecore_wr(p_hwfn, p_ptt, ram_addr, assert_level[storm_id]); + } +} + + + + +#define PHYS_ADDR_DWORDS DIV_ROUND_UP(sizeof(dma_addr_t), 4) +#define OVERLAY_HDR_SIZE_DWORDS (sizeof(struct fw_overlay_buf_hdr) / 4) + + +static u32 ecore_get_overlay_addr_ram_addr(struct ecore_hwfn *p_hwfn, + u8 storm_id) +{ + switch (storm_id) { + case 0: return TSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + TSTORM_OVERLAY_BUF_ADDR_OFFSET; + case 1: return MSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + MSTORM_OVERLAY_BUF_ADDR_OFFSET; + case 2: return USEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + USTORM_OVERLAY_BUF_ADDR_OFFSET; + case 3: return XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + XSTORM_OVERLAY_BUF_ADDR_OFFSET; + case 4: return YSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + YSTORM_OVERLAY_BUF_ADDR_OFFSET; + case 5: return PSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + PSTORM_OVERLAY_BUF_ADDR_OFFSET; 
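Both per-storm switches here share one shape: a storm-indexed SEM fast-memory base plus a feature-specific offset. A table-driven sketch of that pattern for comparison; the base values are invented, the real ones being the xSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM sums above:

#include <stdint.h>

#define NUM_STORMS 6

/* Invented bases, in T/M/U/X/Y/P storm order. */
static const uint32_t sem_int_ram_base[NUM_STORMS] = {
    0x1700000, 0x1200000, 0x1300000, 0x1400000, 0x1500000, 0x1600000,
};

static uint32_t storm_ram_addr(uint8_t storm_id, uint32_t offset)
{
    return storm_id < NUM_STORMS ? sem_int_ram_base[storm_id] + offset : 0;
}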
default: return 0; } } struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn, - const u32 *const fw_overlay_in_buf, - u32 buf_size_in_bytes) + const u32 *const fw_overlay_in_buf, + u32 buf_size_in_bytes) { u32 buf_size = buf_size_in_bytes / sizeof(u32), buf_offset = 0; struct phys_mem_desc *allocated_mem; @@ -2128,15 +2119,12 @@ struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn, if (!buf_size) return OSAL_NULL; - allocated_mem = (struct phys_mem_desc *)OSAL_ZALLOC(p_hwfn->p_dev, - GFP_KERNEL, - NUM_STORMS * - sizeof(struct phys_mem_desc)); + allocated_mem = (struct phys_mem_desc *)OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, + NUM_STORMS * sizeof(struct phys_mem_desc)); if (!allocated_mem) return OSAL_NULL; - OSAL_MEMSET(allocated_mem, 0, NUM_STORMS * - sizeof(struct phys_mem_desc)); + OSAL_MEMSET(allocated_mem, 0, NUM_STORMS * sizeof(struct phys_mem_desc)); /* For each Storm, set physical address in RAM */ while (buf_offset < buf_size) { @@ -2145,19 +2133,16 @@ struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn, u32 storm_buf_size; u8 storm_id; - hdr = - (struct fw_overlay_buf_hdr *)&fw_overlay_in_buf[buf_offset]; - storm_buf_size = GET_FIELD(hdr->data, - FW_OVERLAY_BUF_HDR_BUF_SIZE); + hdr = (struct fw_overlay_buf_hdr *)&fw_overlay_in_buf[buf_offset]; + storm_buf_size = GET_FIELD(hdr->data, FW_OVERLAY_BUF_HDR_BUF_SIZE); storm_id = GET_FIELD(hdr->data, FW_OVERLAY_BUF_HDR_STORM_ID); storm_mem_desc = allocated_mem + storm_id; storm_mem_desc->size = storm_buf_size * sizeof(u32); /* Allocate physical memory for Storm's overlays buffer */ - storm_mem_desc->virt_addr = - OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev, - &storm_mem_desc->phys_addr, - storm_mem_desc->size); + storm_mem_desc->virt_addr = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev, + &storm_mem_desc->phys_addr, + storm_mem_desc->size); if (!storm_mem_desc->virt_addr) break; @@ -2165,8 +2150,7 @@ struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn, buf_offset += OVERLAY_HDR_SIZE_DWORDS; /* Copy Storm's overlays buffer to allocated memory */ - OSAL_MEMCPY(storm_mem_desc->virt_addr, - &fw_overlay_in_buf[buf_offset], + OSAL_MEMCPY(storm_mem_desc->virt_addr, &fw_overlay_in_buf[buf_offset], storm_mem_desc->size); /* Advance to next Storm */ @@ -2175,7 +2159,7 @@ struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn, /* If memory allocation has failed, free all allocated memory */ if (buf_offset < buf_size) { - ecore_fw_overlay_mem_free(p_hwfn, allocated_mem); + ecore_fw_overlay_mem_free(p_hwfn, &allocated_mem); return OSAL_NULL; } @@ -2189,8 +2173,8 @@ void ecore_fw_overlay_init_ram(struct ecore_hwfn *p_hwfn, u8 storm_id; for (storm_id = 0; storm_id < NUM_STORMS; storm_id++) { - struct phys_mem_desc *storm_mem_desc = - (struct phys_mem_desc *)fw_overlay_mem + storm_id; + struct phys_mem_desc *storm_mem_desc = (struct phys_mem_desc *)fw_overlay_mem + + storm_id; u32 ram_addr, i; /* Skip Storms with no FW overlays */ @@ -2203,31 +2187,29 @@ void ecore_fw_overlay_init_ram(struct ecore_hwfn *p_hwfn, /* Write Storm's overlay physical address to RAM */ for (i = 0; i < PHYS_ADDR_DWORDS; i++, ram_addr += sizeof(u32)) - ecore_wr(p_hwfn, p_ptt, ram_addr, - ((u32 *)&storm_mem_desc->phys_addr)[i]); + ecore_wr(p_hwfn, p_ptt, ram_addr, ((u32 *)&storm_mem_desc->phys_addr)[i]); } } void ecore_fw_overlay_mem_free(struct ecore_hwfn *p_hwfn, - struct phys_mem_desc *fw_overlay_mem) + struct phys_mem_desc **fw_overlay_mem) { u8 storm_id; - if (!fw_overlay_mem) + if 
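To make the allocation loop above easier to follow, here is a standalone model of the input it walks: a one-dword header carrying a storm id and a payload size in dwords, followed by the payload, repeated until the buffer is consumed. The header field split below is invented; the driver extracts the real fields from struct fw_overlay_buf_hdr with GET_FIELD().

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define N_STORMS 6

struct mem_desc {
    void *virt;
    uint32_t size;
};

/* Returns 0, or -1 on a malformed record or allocation failure; the
 * caller then frees whatever was already allocated, as the driver does
 * through ecore_fw_overlay_mem_free(). */
static int parse_overlays(const uint32_t *in, uint32_t n_dw,
                          struct mem_desc out[N_STORMS])
{
    uint32_t off = 0;

    while (off < n_dw) {
        uint32_t hdr = in[off++];  /* one-dword record header */
        uint8_t storm = hdr & 0x7; /* invented field layout */
        uint32_t sz_dw = hdr >> 8;

        if (storm >= N_STORMS || off + sz_dw > n_dw)
            return -1;
        out[storm].size = sz_dw * sizeof(uint32_t);
        out[storm].virt = malloc(out[storm].size);
        if (!out[storm].virt)
            return -1;
        memcpy(out[storm].virt, &in[off], out[storm].size);
        off += sz_dw;              /* advance to the next record */
    }
    return 0;
}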
(!fw_overlay_mem || !(*fw_overlay_mem)) return; for (storm_id = 0; storm_id < NUM_STORMS; storm_id++) { - struct phys_mem_desc *storm_mem_desc = - (struct phys_mem_desc *)fw_overlay_mem + storm_id; + struct phys_mem_desc *storm_mem_desc = (struct phys_mem_desc *)*fw_overlay_mem + + storm_id; /* Free Storm's physical memory */ if (storm_mem_desc->virt_addr) - OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, - storm_mem_desc->virt_addr, - storm_mem_desc->phys_addr, - storm_mem_desc->size); + OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, storm_mem_desc->virt_addr, + storm_mem_desc->phys_addr, storm_mem_desc->size); } /* Free allocated virtual memory */ - OSAL_FREE(p_hwfn->p_dev, fw_overlay_mem); + OSAL_FREE(p_hwfn->p_dev, *fw_overlay_mem); + *fw_overlay_mem = OSAL_NULL; } diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h index a393d088f..7d71ef951 100644 --- a/drivers/net/qede/base/ecore_init_fw_funcs.h +++ b/drivers/net/qede/base/ecore_init_fw_funcs.h @@ -1,21 +1,35 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef _INIT_FW_FUNCS_H #define _INIT_FW_FUNCS_H #include "ecore_hsi_common.h" #include "ecore_hsi_eth.h" +#include "ecore_hsi_func_common.h" +#define NUM_STORMS 6 -/* Returns the VOQ based on port and TC */ -#define VOQ(port, tc, max_phys_tcs_per_port) \ - ((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + (port) : \ - (port) * (max_phys_tcs_per_port) + (tc)) +/* Forward declarations */ struct init_qm_pq_params; +/** + * @brief ecore_get_ext_voq - Return the external VOQ ID + * + * @param p_hwfn - HW device data + * @param port_id - port ID + * @param tc_id - traffic class ID + * @param max_phys_tcs_per_port - max number of physical TCs per port in HW + * + * @return The external VOQ ID. + */ +u8 ecore_get_ext_voq(struct ecore_hwfn *p_hwfn, + u8 port_id, + u8 tc, + u8 max_phys_tcs_per_port); + /** * @brief ecore_qm_pf_mem_size - Prepare QM ILT sizes * @@ -33,22 +47,22 @@ struct init_qm_pq_params; */ u32 ecore_qm_pf_mem_size(struct ecore_hwfn *p_hwfn, u32 num_pf_cids, - u32 num_vf_cids, - u32 num_tids, - u16 num_pf_pqs, - u16 num_vf_pqs); + u32 num_vf_cids, + u32 num_tids, + u16 num_pf_pqs, + u16 num_vf_pqs); /** - * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine - * phase + * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for the + * engine phase. * - * @param p_hwfn - * @param max_ports_per_engine - max number of ports per engine in HW + * @param p_hwfn - HW device data + * @param max_ports_per_engine - max number of ports per engine in HW * @param max_phys_tcs_per_port - max number of physical TCs per port in HW - * @param pf_rl_en - enable per-PF rate limiters - * @param pf_wfq_en - enable per-PF WFQ + * @param pf_rl_en - enable per-PF rate limiters + * @param pf_wfq_en - enable per-PF WFQ * @param global_rl_en - enable global rate limiters - * @param vport_wfq_en - enable per-VPORT WFQ + * @param vport_wfq_en - enable per-VPORT WFQ * @param port_params - array with parameters for each port. * @param global_rl_params - array with parameters for each global RL. * If OSAL_NULL, global RLs are not configured. 
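Stepping back to the ecore_fw_overlay_mem_free() change a few hunks up: taking a pointer-to-pointer lets the function clear the caller's handle, so an accidental second call degrades to a no-op instead of a double free. The idiom in miniature:

#include <stdlib.h>

static void mem_free(void **p)
{
    if (!p || !*p)
        return;    /* no handle, or already freed */
    free(*p);
    *p = NULL;     /* the caller's pointer cannot dangle now */
}

Calling mem_free(&h) twice is safe; calling free(h) twice is undefined behavior, which is exactly the class of bug the new signature rules out.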
@@ -62,36 +76,39 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn, bool pf_wfq_en, bool global_rl_en, bool vport_wfq_en, - struct init_qm_port_params port_params[MAX_NUM_PORTS], - struct init_qm_global_rl_params - global_rl_params[COMMON_MAX_QM_GLOBAL_RLS]); + struct init_qm_port_params port_params[MAX_NUM_PORTS], + struct init_qm_global_rl_params + global_rl_params[COMMON_MAX_QM_GLOBAL_RLS]); /** - * @brief ecore_qm_pf_rt_init Prepare QM runtime init values for the PF phase + * @brief ecore_qm_pf_rt_init - Prepare QM runtime init values for the PF phase * - * @param p_hwfn - * @param p_ptt - ptt window used for writing the registers - * @param pf_id - PF ID + * @param p_hwfn - HW device data + * @param p_ptt - ptt window used for writing the registers + * @param pf_id - PF ID * @param max_phys_tcs_per_port - max number of physical TCs per port in HW * @param is_pf_loading - indicates if the PF is currently loading, * i.e. it has no allocated QM resources. - * @param num_pf_cids - number of connections used by this PF - * @param num_vf_cids - number of connections used by VFs of this PF - * @param num_tids - number of tasks used by this PF - * @param start_pq - first Tx PQ ID associated with this PF - * @param num_pf_pqs - number of Tx PQs associated with this PF - * (non-VF) - * @param num_vf_pqs - number of Tx PQs associated with a VF - * @param start_vport - first VPORT ID associated with this PF - * @param num_vports - number of VPORTs associated with this PF - * @param pf_wfq - WFQ weight. if PF WFQ is globally disabled, the weight must - * be 0. otherwise, the weight must be non-zero. - * @param pf_rl - rate limit in Mb/sec units. a value of 0 means don't - * configure. ignored if PF RL is globally disabled. - * @param pq_params - array of size (num_pf_pqs+num_vf_pqs) with parameters for - * each Tx PQ associated with the specified PF. - * @param vport_params - array of size num_vports with parameters for each - * associated VPORT. + * @param num_pf_cids - number of connections used by this PF + * @param num_vf_cids - number of connections used by VFs of this PF + * @param num_tids - number of tasks used by this PF + * @param start_pq - first Tx PQ ID associated with this PF + * @param num_pf_pqs - number of Tx PQs associated with this PF + * (non-VF) + * @param num_vf_pqs - number of Tx PQs associated with a VF + * @param start_vport - first VPORT ID associated with this PF + * @param num_vports - number of VPORTs associated with this PF + * @param pf_wfq - WFQ weight. if PF WFQ is globally disabled, + * the weight must be 0. otherwise, the weight + * must be non-zero. + * @param pf_rl - rate limit in Mb/sec units. a value of 0 + * means don't configure. ignored if PF RL is + * globally disabled. + * @param pq_params - array of size (num_pf_pqs + num_vf_pqs) with + * parameters for each Tx PQ associated with the + * specified PF. + * @param vport_params - array of size num_vports with parameters for + * each associated VPORT. * * @return 0 on success, -1 on error. */ @@ -114,50 +131,50 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn, struct init_qm_vport_params *vport_params); /** - * @brief ecore_init_pf_wfq Initializes the WFQ weight of the specified PF + * @brief ecore_init_pf_wfq - Initializes the WFQ weight of the specified PF * - * @param p_hwfn - * @param p_ptt - ptt window used for writing the registers - * @param pf_id - PF ID - * @param pf_wfq - WFQ weight. Must be non-zero. 
+ * @param p_hwfn - HW device data + * @param p_ptt - ptt window used for writing the registers + * @param pf_id - PF ID + * @param pf_wfq - WFQ weight. Must be non-zero. * * @return 0 on success, -1 on error. */ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u8 pf_id, - u16 pf_wfq); + struct ecore_ptt *p_ptt, + u8 pf_id, + u16 pf_wfq); /** * @brief ecore_init_pf_rl - Initializes the rate limit of the specified PF * * @param p_hwfn - * @param p_ptt - ptt window used for writing the registers + * @param p_ptt - ptt window used for writing the registers * @param pf_id - PF ID * @param pf_rl - rate limit in Mb/sec units * * @return 0 on success, -1 on error. */ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u8 pf_id, - u32 pf_rl); + struct ecore_ptt *p_ptt, + u8 pf_id, + u32 pf_rl); /** - * @brief ecore_init_vport_wfq Initializes the WFQ weight of specified VPORT + * @brief ecore_init_vport_wfq - Initializes the WFQ weight of the specified VPORT * - * @param p_hwfn - * @param p_ptt - ptt window used for writing the registers - * @param first_tx_pq_id- An array containing the first Tx PQ ID associated - * with the VPORT for each TC. This array is filled by - * ecore_qm_pf_rt_init + * @param p_hwfn - HW device data + * @param p_ptt - ptt window used for writing the registers + * @param first_tx_pq_id - An array containing the first Tx PQ ID associated + * with the VPORT for each TC. This array is filled by + * ecore_qm_pf_rt_init * @param wfq - WFQ weight. Must be non-zero. * * @return 0 on success, -1 on error. */ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u16 first_tx_pq_id[NUM_OF_TCS], + struct ecore_ptt *p_ptt, + u16 first_tx_pq_id[NUM_OF_TCS], u16 wfq); /** @@ -177,142 +194,124 @@ int ecore_init_global_rl(struct ecore_hwfn *p_hwfn, u32 rate_limit); /** - * @brief ecore_init_vport_rl - Initializes the rate limit of the specified - * VPORT. - * - * @param p_hwfn - HW device data - * @param p_ptt - ptt window used for writing the registers - * @param vport_id - VPORT ID - * @param vport_rl - rate limit in Mb/sec units - * @param link_speed - link speed in Mbps. - * - * @return 0 on success, -1 on error. - */ -int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u8 vport_id, - u32 vport_rl, - u32 link_speed); - -/** - * @brief ecore_send_qm_stop_cmd Sends a stop command to the QM + * @brief ecore_send_qm_stop_cmd - Sends a stop command to the QM * - * @param p_hwfn - * @param p_ptt - ptt window used for writing the registers + * @param p_hwfn - HW device data + * @param p_ptt - ptt window used for writing the registers * @param is_release_cmd - true for release, false for stop. - * @param is_tx_pq - true for Tx PQs, false for Other PQs. - * @param start_pq - first PQ ID to stop - * @param num_pqs - Number of PQs to stop, starting from start_pq. + * @param is_tx_pq - true for Tx PQs, false for Other PQs. + * @param start_pq - first PQ ID to stop + * @param num_pqs - Number of PQs to stop, starting from start_pq. * - * @return bool, true if successful, false if timeout occurred while waiting - * for QM command done. + * @return bool, true if successful, false if timeout occurred while waiting for + * QM command done. 
*/ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - bool is_release_cmd, - bool is_tx_pq, - u16 start_pq, - u16 num_pqs); + struct ecore_ptt *p_ptt, + bool is_release_cmd, + bool is_tx_pq, + u16 start_pq, + u16 num_pqs); + #ifndef UNUSED_HSI_FUNC /** - * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter + * @brief ecore_init_nig_ets - Initializes the NIG ETS arbiter * * Based on weight/priority requirements per-TC. * - * @param p_ptt - ptt window used for writing the registers. - * @param req - the NIG ETS initialization requirements. + * @param p_hwfn - HW device data + * @param p_ptt - ptt window used for writing the registers. + * @param req - the NIG ETS initialization requirements. * @param is_lb - if set, the loopback port arbiter is initialized, otherwise * the physical port arbiter is initialized. The pure-LB TC * requirements are ignored when is_lb is cleared. */ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct init_ets_req *req, - bool is_lb); + struct ecore_ptt *p_ptt, + struct init_ets_req *req, + bool is_lb); /** - * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs + * @brief ecore_init_nig_lb_rl - Initializes the NIG LB RLs * * Based on global and per-TC rate requirements * - * @param p_ptt - ptt window used for writing the registers. - * @param req - the NIG LB RLs initialization requirements. + * @param p_hwfn - HW device data + * @param p_ptt - ptt window used for writing the registers. + * @param req - the NIG LB RLs initialization requirements. */ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct init_nig_lb_rl_req *req); + struct ecore_ptt *p_ptt, + struct init_nig_lb_rl_req *req); + #endif /* UNUSED_HSI_FUNC */ /** - * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map. + * @brief ecore_init_nig_pri_tc_map - Initializes the NIG priority to TC map. * * Assumes valid arguments. * - * @param p_ptt - ptt window used for writing the registers. - * @param req - required mapping from prioirties to TCs. + * @param p_hwfn - HW device data + * @param p_ptt - ptt window used for writing the registers. + * @param req - required mapping from priorities to TCs. */ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct init_nig_pri_tc_map_req *req); + struct ecore_ptt *p_ptt, + struct init_nig_pri_tc_map_req *req); #ifndef UNUSED_HSI_FUNC + /** - * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter + * @brief ecore_init_prs_ets - Initializes the PRS Rx ETS arbiter * * Based on weight/priority requirements per-TC. * - * @param p_ptt - ptt window used for writing the registers. - * @param req - the PRS ETS initialization requirements. + * @param p_hwfn - HW device data + * @param p_ptt - ptt window used for writing the registers. + * @param req - the PRS ETS initialization requirements. */ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct init_ets_req *req); -#endif /* UNUSED_HSI_FUNC */ + struct ecore_ptt *p_ptt, + struct init_ets_req *req); +#endif /* UNUSED_HSI_FUNC */ #ifndef UNUSED_HSI_FUNC + /** - * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC + * @brief ecore_init_brb_ram - Initializes BRB RAM sizes per TC. * * Based on weight/priority requirements per-TC. * + * @param p_hwfn - HW device data * @param p_ptt - ptt window used for writing the registers. - * @param req - the BRB RAM initialization requirements.
+ * @param req - the BRB RAM initialization requirements. */ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct init_brb_ram_req *req); -#endif /* UNUSED_HSI_FUNC */ - -/** - * @brief ecore_set_vxlan_no_l2_enable - enable or disable VXLAN no L2 parsing - * - * @param p_ptt - ptt window used for writing the registers. - * @param enable - VXLAN no L2 enable flag. - */ -void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - bool enable); + struct ecore_ptt *p_ptt, + struct init_brb_ram_req *req); +#endif /* UNUSED_HSI_FUNC */ #ifndef UNUSED_HSI_FUNC + /** * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to - * input ethType should Be called - * once per port. + * input ethType. Should be called once per port. * - * @param p_hwfn - HW device data + * @param p_hwfn - HW device data * @param ethType - etherType to configure */ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, u32 ethType); + #endif /* UNUSED_HSI_FUNC */ /** - * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp + * @brief ecore_set_vxlan_dest_port - Initializes vxlan tunnel destination udp * port. * * @param p_hwfn - HW device data - * @param p_ptt - ptt window used for writing the registers. + * @param p_ptt - ptt window used for writing the registers. * @param dest_port - vxlan destination udp port. */ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn, @@ -320,23 +319,23 @@ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn, u16 dest_port); /** - * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW + * @brief ecore_set_vxlan_enable - Enable or disable VXLAN tunnel in HW * * @param p_hwfn - HW device data - * @param p_ptt - ptt window used for writing the registers. - * @param vxlan_enable - vxlan enable flag. + * @param p_ptt - ptt window used for writing the registers. + * @param vxlan_enable - vxlan enable flag. */ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, bool vxlan_enable); /** - * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW + * @brief ecore_set_gre_enable - Enable or disable GRE tunnel in HW * * @param p_hwfn - HW device data - * @param p_ptt - ptt window used for writing the registers. - * @param eth_gre_enable - eth GRE enable enable flag. - * @param ip_gre_enable - IP GRE enable enable flag. + * @param p_ptt - ptt window used for writing the registers. + * @param eth_gre_enable - eth GRE enable flag. + * @param ip_gre_enable - IP GRE enable flag. */ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -344,11 +343,11 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, bool ip_gre_enable); /** - * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination - * udp port + * @brief ecore_set_geneve_dest_port - Initializes geneve tunnel destination + * udp port. * * @param p_hwfn - HW device data - * @param p_ptt - ptt window used for writing the registers. + * @param p_ptt - ptt window used for writing the registers. * @param dest_port - geneve destination udp port.
*/ void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn, @@ -356,24 +355,36 @@ void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn, u16 dest_port); /** - * @brief ecore_set_geneve_enable - enable or disable GRE tunnel in HW + * @brief ecore_set_geneve_enable - Enable or disable GRE tunnel in HW * * @param p_hwfn - HW device data - * @param p_ptt - ptt window used for writing the registers. - * @param eth_geneve_enable - eth GENEVE enable enable flag. - * @param ip_geneve_enable - IP GENEVE enable enable flag. + * @param p_ptt - ptt window used for writing the registers. + * @param eth_geneve_enable - eth GENEVE enable flag. + * @param ip_geneve_enable - IP GENEVE enable flag. */ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, bool eth_geneve_enable, bool ip_geneve_enable); + +/** + * @brief ecore_set_vxlan_no_l2_enable - enable or disable VXLAN no L2 parsing + * + * @param p_ptt - ptt window used for writing the registers. + * @param enable - VXLAN no L2 enable flag. + */ +void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + bool enable); + #ifndef UNUSED_HSI_FUNC /** -* @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header -* -* @param p_ptt - ptt window used for writing the registers. -*/ + * @brief ecore_set_gft_event_id_cm_hdr - Configure GFT event id and cm header + * + * @param p_hwfn - HW device data + * @param p_ptt - ptt window used for writing the registers. + */ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); @@ -385,21 +396,21 @@ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn, * @param pf_id - pf on which to disable GFT. */ void ecore_gft_disable(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u16 pf_id); + struct ecore_ptt *p_ptt, + u16 pf_id); /** * @brief ecore_gft_config - Enable and configure HW for GFT -* + * * @param p_hwfn - HW device data -* @param p_ptt - ptt window used for writing the registers. + * @param p_ptt - ptt window used for writing the registers. * @param pf_id - pf on which to enable GFT. -* @param tcp - set profile tcp packets. -* @param udp - set profile udp packet. -* @param ipv4 - set profile ipv4 packet. -* @param ipv6 - set profile ipv6 packet. + * @param tcp - set profile tcp packets. + * @param udp - set profile udp packet. + * @param ipv4 - set profile ipv4 packet. + * @param ipv6 - set profile ipv6 packet. * @param profile_type - define packet same fields. Use enum gft_profile_type. -*/ + */ void ecore_gft_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 pf_id, @@ -408,54 +419,60 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn, bool ipv4, bool ipv6, enum gft_profile_type profile_type); + #endif /* UNUSED_HSI_FUNC */ /** -* @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be -* used before first ETH queue started. -* + * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be + * used before first ETH queue started. + * * @param p_hwfn - HW device data -* @param p_ptt - ptt window used for writing the registers. Don't care + * @param p_ptt - ptt window used for writing the registers. Don't care * if runtime_init used. -* @param mode - VF zone size mode. Use enum vf_zone_size_mode. + * @param mode - VF zone size mode. Use enum vf_zone_size_mode. * @param runtime_init - Set 1 to init runtime registers in engine phase. * Set 0 if VF zone size mode configured after engine * phase. 
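The tunnel setters above are naturally used in pairs per protocol: program the destination UDP port, then flip the enable. A hypothetical bring-up helper (error handling and PTT acquisition elided; 4789 and 6081 are the IANA-assigned VXLAN and GENEVE ports):

/* Hypothetical helper built on the declarations above. */
static void tunnel_bringup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
    ecore_set_vxlan_dest_port(p_hwfn, p_ptt, 4789);
    ecore_set_vxlan_enable(p_hwfn, p_ptt, true);
    ecore_set_geneve_dest_port(p_hwfn, p_ptt, 6081);
    ecore_set_geneve_enable(p_hwfn, p_ptt, true, true); /* eth + ip */
}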
-*/ -void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt - *p_ptt, u16 mode, bool runtime_init); + */ +void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u16 mode, + bool runtime_init); /** * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by * VF zone size mode. -* + * * @param p_hwfn - HW device data -* @param stat_cnt_id - statistic counter id -* @param vf_zone_size_mode - VF zone size mode. Use enum vf_zone_size_mode. -*/ + * @param stat_cnt_id - statistic counter id + * @param vf_zone_size_mode - VF zone size mode. Use enum vf_zone_size_mode. + */ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn, - u16 stat_cnt_id, u16 vf_zone_size_mode); + u16 stat_cnt_id, + u16 vf_zone_size_mode); /** * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone * size mode. -* + * * @param p_hwfn - HW device data -* @param vf_id - vf id. -* @param vf_queue_id - per VF rx queue id. -* @param vf_zone_size_mode - vf zone size mode. Use enum vf_zone_size_mode. -*/ -u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8 - vf_queue_id, u16 vf_zone_size_mode); + * @param vf_id - vf id. + * @param vf_queue_id - per VF rx queue id. + * @param vf_zone_size_mode - vf zone size mode. Use enum vf_zone_size_mode. + */ +u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, + u8 vf_id, + u8 vf_queue_id, + u16 vf_zone_size_mode); + /** * @brief ecore_enable_context_validation - Enable and configure context - * validation. + * validation. * * @param p_hwfn - HW device data - * @param p_ptt - ptt window used for writing the registers. */ -void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt); +void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); + /** * @brief ecore_calc_session_ctx_validation - Calcualte validation byte for * session context. @@ -480,7 +497,7 @@ void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn, * @param p_ctx_mem - pointer to context memory. * @param ctx_size - context size. * @param ctx_type - context type. - * @param tid - context tid. + * @param tid - context tid. */ void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, @@ -492,10 +509,10 @@ void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, * @brief ecore_memset_session_ctx - Memset session context to 0 while * preserving validation bytes. * - * @param p_hwfn - HW device data - * @param p_ctx_mem - pointer to context memory. - * @param ctx_size - size to initialzie. - * @param ctx_type - context type. + * @param p_hwfn - HW device data + * @param p_ctx_mem - pointer to context memory. + * @param ctx_size - size to initialize. + * @param ctx_type - context type. */ void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, @@ -507,9 +524,9 @@ void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, * validation bytes. * * @param p_hwfn - HW device data - * @param p_ctx_mem - pointer to context memory. - * @param ctx_size - size to initialzie. - * @param ctx_type - context type. + * @param p_ctx_mem - pointer to context memory. + * @param ctx_size - size to initialize. + * @param ctx_type - context type.
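A standalone sketch of the zone-size arithmetic behind ecore_get_mstorm_eth_vf_prods_offset(), shown in full in this patch's first hunk: doubling the VF zone adds one default-zone stride per VF id, quadrupling adds three. The stride log is a placeholder for MSTORM_VF_ZONE_DEFAULT_SIZE_LOG.

#include <stdint.h>

enum zone_mode { ZONE_DEFAULT, ZONE_DOUBLE, ZONE_QUAD };

#define DEFAULT_SIZE_LOG 7 /* placeholder: 128-byte default zone */

static uint32_t vf_zone_offset(uint32_t base, uint8_t vf_id, enum zone_mode m)
{
    uint32_t stride = 1u << DEFAULT_SIZE_LOG;

    if (m == ZONE_DOUBLE)
        return base + stride * vf_id;      /* one extra stride per VF */
    if (m == ZONE_QUAD)
        return base + 3u * stride * vf_id; /* three extra strides per VF */
    return base;
}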
*/ void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, @@ -517,57 +534,41 @@ void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, u8 ctx_type); -/******************************************************************************* - * File name : rdma_init.h - * Author : Michael Shteinbok - ******************************************************************************* - ******************************************************************************* - * Description: - * RDMA HSI functions header - * - ******************************************************************************* - * Notes: This is the input to the auto generated file drv_init_fw_funcs.h - * - ******************************************************************************* - */ -#define NUM_STORMS 6 - - - -/** + /** * @brief ecore_set_rdma_error_level - Sets the RDMA assert level. * If the severity of the error will be - * above the level, the FW will assert. + * above the level, the FW will assert. * @param p_hwfn - HW device data * @param p_ptt - ptt window used for writing the registers * @param assert_level - An array of assert levels for each storm. */ void ecore_set_rdma_error_level(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u8 assert_level[NUM_STORMS]); + struct ecore_ptt *p_ptt, + u8 assert_level[NUM_STORMS]); + + /** - * @brief ecore_fw_overlay_mem_alloc - Allocates and fills the FW overlay memory + * @brief ecore_fw_overlay_mem_alloc - Allocates and fills the FW overlay memory. * - * @param p_hwfn - HW device data + * @param p_hwfn - HW device data * @param fw_overlay_in_buf - the input FW overlay buffer. - * @param buf_size - the size of the input FW overlay buffer in bytes. - * must be aligned to dwords. + * @param buf_size - the size of the input FW overlay buffer in bytes. + * must be aligned to dwords. * @param fw_overlay_out_mem - OUT: a pointer to the allocated overlays memory. * - * @return a pointer to the allocated overlays memory, or OSAL_NULL in case of - * failures. + * @return a pointer to the allocated overlays memory, or OSAL_NULL in case of failures. */ struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn, - const u32 *const fw_overlay_in_buf, - u32 buf_size_in_bytes); + const u32 *const fw_overlay_in_buf, + u32 buf_size_in_bytes); /** * @brief ecore_fw_overlay_init_ram - Initializes the FW overlay RAM. * - * @param p_hwfn - HW device data. - * @param p_ptt - ptt window used for writing the registers. - * @param fw_overlay_mem - the allocated FW overlay memory. + * @param p_hwfn - HW device data. + * @param p_ptt - ptt window used for writing the registers. + * @param fw_overlay_mem - the allocated FW overlay memory. */ void ecore_fw_overlay_init_ram(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -576,10 +577,10 @@ void ecore_fw_overlay_init_ram(struct ecore_hwfn *p_hwfn, /** * @brief ecore_fw_overlay_mem_free - Frees the FW overlay memory. * - * @param p_hwfn - HW device data. - * @param fw_overlay_mem - the allocated FW overlay memory to free. + * @param p_hwfn - HW device data. + * @param fw_overlay_mem - pointer to the allocated FW overlay memory to free. 
*/ void ecore_fw_overlay_mem_free(struct ecore_hwfn *p_hwfn, - struct phys_mem_desc *fw_overlay_mem); + struct phys_mem_desc **fw_overlay_mem); #endif diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c index 42885a401..a01f90dff 100644 --- a/drivers/net/qede/base/ecore_init_ops.c +++ b/drivers/net/qede/base/ecore_init_ops.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - /* include the precompiled configuration values - only once */ #include "bcm_osal.h" #include "ecore_hsi_common.h" @@ -13,6 +13,14 @@ #include "ecore_rt_defs.h" #include "ecore_init_fw_funcs.h" +#ifndef CONFIG_ECORE_BINARY_FW +#ifdef CONFIG_ECORE_ZIPPED_FW +#include "ecore_init_values_zipped.h" +#else +#include "ecore_init_values.h" +#endif +#endif + #include "ecore_iro_values.h" #include "ecore_sriov.h" #include "reg_addr.h" @@ -23,19 +31,13 @@ void ecore_init_iro_array(struct ecore_dev *p_dev) { - p_dev->iro_arr = iro_arr + E4_IRO_ARR_OFFSET; -} - -/* Runtime configuration helpers */ -void ecore_init_clear_rt_data(struct ecore_hwfn *p_hwfn) -{ - int i; + u32 offset = ECORE_IS_E5(p_dev) ? E5_IRO_ARR_OFFSET : E4_IRO_ARR_OFFSET; - for (i = 0; i < RUNTIME_ARRAY_SIZE; i++) - p_hwfn->rt_data.b_valid[i] = false; + p_dev->iro_arr = iro_arr + offset; } -void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn, u32 rt_offset, u32 val) +void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn, + u32 rt_offset, u32 val) { if (rt_offset >= RUNTIME_ARRAY_SIZE) { DP_ERR(p_hwfn, @@ -49,7 +51,8 @@ void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn, u32 rt_offset, u32 val) } void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn, - u32 rt_offset, u32 *p_val, osal_size_t size) + u32 rt_offset, u32 *p_val, + osal_size_t size) { osal_size_t i; @@ -64,6 +67,7 @@ void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn, for (i = 0; i < size / sizeof(u32); i++) { p_hwfn->rt_data.init_val[rt_offset + i] = p_val[i]; p_hwfn->rt_data.b_valid[rt_offset + i] = true; + } } @@ -71,11 +75,12 @@ static enum _ecore_status_t ecore_init_rt(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 addr, u16 rt_offset, - u16 size, bool b_must_dmae) + u16 size, + bool b_must_dmae) { u32 *p_init_val = &p_hwfn->rt_data.init_val[rt_offset]; bool *p_valid = &p_hwfn->rt_data.b_valid[rt_offset]; - u16 i, segment; + u16 i, j, segment; enum _ecore_status_t rc = ECORE_SUCCESS; /* Since not all RT entries are initialized, go over the RT and @@ -89,7 +94,9 @@ static enum _ecore_status_t ecore_init_rt(struct ecore_hwfn *p_hwfn, * simply write the data instead of using dmae. 
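For readers new to this file: the runtime-init array is a shadow of register writes accumulated during configuration and flushed to hardware later, each slot carrying a value plus a valid bit. A minimal model (RT_ARRAY_SIZE stands in for RUNTIME_ARRAY_SIZE):

#include <stdint.h>
#include <stdbool.h>

#define RT_ARRAY_SIZE 4096 /* placeholder for RUNTIME_ARRAY_SIZE */

static uint32_t rt_val[RT_ARRAY_SIZE];
static bool rt_valid[RT_ARRAY_SIZE];

static int rt_store(uint32_t off, uint32_t val)
{
    if (off >= RT_ARRAY_SIZE)
        return -1; /* ecore_init_store_rt_reg() logs and drops these */
    rt_val[off] = val;
    rt_valid[off] = true;
    return 0;
}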
*/ if (!b_must_dmae) { - ecore_wr(p_hwfn, p_ptt, addr + (i << 2), p_init_val[i]); + ecore_wr(p_hwfn, p_ptt, addr + (i << 2), + p_init_val[i]); + p_valid[i] = false; continue; } @@ -105,6 +112,10 @@ static enum _ecore_status_t ecore_init_rt(struct ecore_hwfn *p_hwfn, if (rc != ECORE_SUCCESS) return rc; + /* invalidate after writing */ + for (j = i; j < i + segment; j++) + p_valid[j] = false; + /* Jump over the entire segment, including invalid entry */ i += segment; } @@ -128,6 +139,7 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn) sizeof(u32) * RUNTIME_ARRAY_SIZE); if (!rt_data->init_val) { OSAL_FREE(p_hwfn->p_dev, rt_data->b_valid); + rt_data->b_valid = OSAL_NULL; return ECORE_NOMEM; } @@ -137,18 +149,18 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn) void ecore_init_free(struct ecore_hwfn *p_hwfn) { OSAL_FREE(p_hwfn->p_dev, p_hwfn->rt_data.init_val); + p_hwfn->rt_data.init_val = OSAL_NULL; OSAL_FREE(p_hwfn->p_dev, p_hwfn->rt_data.b_valid); + p_hwfn->rt_data.b_valid = OSAL_NULL; } static enum _ecore_status_t ecore_init_array_dmae(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u32 addr, - u32 dmae_data_offset, - u32 size, const u32 *p_buf, - bool b_must_dmae, - bool b_can_dmae) + struct ecore_ptt *p_ptt, + u32 addr, u32 dmae_data_offset, + u32 size, const u32 *p_buf, + bool b_must_dmae, bool b_can_dmae) { - enum _ecore_status_t rc = ECORE_SUCCESS; + enum _ecore_status_t rc = ECORE_SUCCESS; /* Perform DMAE only for lengthy enough sections or for wide-bus */ #ifndef ASIC_ONLY @@ -185,7 +197,7 @@ static enum _ecore_status_t ecore_init_fill_dmae(struct ecore_hwfn *p_hwfn, OSAL_MEMSET(¶ms, 0, sizeof(params)); SET_FIELD(params.flags, DMAE_PARAMS_RW_REPL_SRC, 0x1); return ecore_dmae_host2grc(p_hwfn, p_ptt, - (osal_uintptr_t)&zero_buffer[0], + (osal_uintptr_t)(&zero_buffer[0]), addr, fill_count, ¶ms); } @@ -199,6 +211,7 @@ static void ecore_init_fill(struct ecore_hwfn *p_hwfn, ecore_wr(p_hwfn, p_ptt, addr, fill); } + static enum _ecore_status_t ecore_init_cmd_array(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_write_op *cmd, @@ -219,28 +232,29 @@ static enum _ecore_status_t ecore_init_cmd_array(struct ecore_hwfn *p_hwfn, array_data = p_dev->fw_data->arr_data; - hdr = (union init_array_hdr *) - (uintptr_t)(array_data + dmae_array_offset); + hdr = (union init_array_hdr *)(array_data + + dmae_array_offset); data = OSAL_LE32_TO_CPU(hdr->raw.data); switch (GET_FIELD(data, INIT_ARRAY_RAW_HDR_TYPE)) { case INIT_ARR_ZIPPED: #ifdef CONFIG_ECORE_ZIPPED_FW offset = dmae_array_offset + 1; - input_len = GET_FIELD(data, INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE); + input_len = GET_FIELD(data, + INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE); max_size = MAX_ZIPPED_SIZE * 4; OSAL_MEMSET(p_hwfn->unzip_buf, 0, max_size); output_len = OSAL_UNZIP_DATA(p_hwfn, input_len, - (u8 *)(uintptr_t)&array_data[offset], - max_size, - (u8 *)p_hwfn->unzip_buf); + (u8 *)&array_data[offset], + max_size, (u8 *)p_hwfn->unzip_buf); if (output_len) { rc = ecore_init_array_dmae(p_hwfn, p_ptt, addr, 0, output_len, p_hwfn->unzip_buf, b_must_dmae, b_can_dmae); } else { - DP_NOTICE(p_hwfn, true, "Failed to unzip dmae data\n"); + DP_NOTICE(p_hwfn, true, + "Failed to unzip dmae data\n"); rc = ECORE_INVAL; } #else @@ -250,27 +264,27 @@ static enum _ecore_status_t ecore_init_cmd_array(struct ecore_hwfn *p_hwfn, #endif break; case INIT_ARR_PATTERN: - { - u32 repeats = GET_FIELD(data, + { + u32 repeats = GET_FIELD(data, INIT_ARRAY_PATTERN_HDR_REPETITIONS); - u32 i; - - size = GET_FIELD(data, - 
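The behavioral change in ecore_init_rt() above, restated on the model from the previous sketch (reusing its rt_val[]/rt_valid[]): every slot that gets flushed, whether written directly or swept up in a DMA segment, has its valid bit cleared so a later flush pass cannot replay stale values. write_reg() is a stand-in for ecore_wr().

#include <stdio.h>

static void write_reg(uint32_t addr, uint32_t val)
{
    printf("wr 0x%08x <- 0x%08x\n", (unsigned)addr, (unsigned)val);
}

static void rt_flush(uint32_t base_addr, uint32_t n)
{
    uint32_t i;

    for (i = 0; i < n; i++) {
        if (!rt_valid[i])
            continue;            /* sparse array: skip unset slots */
        write_reg(base_addr + (i << 2), rt_val[i]);
        rt_valid[i] = false;     /* invalidate after writing */
    }
}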
INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE); - - for (i = 0; i < repeats; i++, addr += size << 2) { - rc = ecore_init_array_dmae(p_hwfn, p_ptt, addr, - dmae_array_offset + - 1, size, array_data, - b_must_dmae, - b_can_dmae); - if (rc) - break; + u32 i; + + size = GET_FIELD(data, + INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE); + + for (i = 0; i < repeats; i++, addr += size << 2) { + rc = ecore_init_array_dmae(p_hwfn, p_ptt, addr, + dmae_array_offset + 1, + size, array_data, + b_must_dmae, b_can_dmae); + if (rc) + break; } break; } case INIT_ARR_STANDARD: - size = GET_FIELD(data, INIT_ARRAY_STANDARD_HDR_SIZE); + size = GET_FIELD(data, + INIT_ARRAY_STANDARD_HDR_SIZE); rc = ecore_init_array_dmae(p_hwfn, p_ptt, addr, dmae_array_offset + 1, size, array_data, @@ -290,13 +304,12 @@ static enum _ecore_status_t ecore_init_cmd_wr(struct ecore_hwfn *p_hwfn, u32 data = OSAL_LE32_TO_CPU(p_cmd->data); bool b_must_dmae = GET_FIELD(data, INIT_WRITE_OP_WIDE_BUS); u32 addr = GET_FIELD(data, INIT_WRITE_OP_ADDRESS) << 2; - enum _ecore_status_t rc = ECORE_SUCCESS; + enum _ecore_status_t rc = ECORE_SUCCESS; /* Sanitize */ if (b_must_dmae && !b_can_dmae) { DP_NOTICE(p_hwfn, true, - "Need to write to %08x for Wide-bus but DMAE isn't" - " allowed\n", + "Need to write to %08x for Wide-bus but DMAE isn't allowed\n", addr); return ECORE_INVAL; } @@ -345,7 +358,8 @@ static OSAL_INLINE bool comp_or(u32 val, u32 expected_val) /* init_ops read/poll commands */ static void ecore_init_cmd_rd(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, struct init_read_op *cmd) + struct ecore_ptt *p_ptt, + struct init_read_op *cmd) { bool (*comp_check)(u32 val, u32 expected_val); u32 delay = ECORE_INIT_POLL_PERIOD_US, val; @@ -384,14 +398,16 @@ static void ecore_init_cmd_rd(struct ecore_hwfn *p_hwfn, data = OSAL_LE32_TO_CPU(cmd->expected_val); for (i = 0; - i < ECORE_INIT_MAX_POLL_COUNT && !comp_check(val, data); i++) { + i < ECORE_INIT_MAX_POLL_COUNT && !comp_check(val, data); + i++) { OSAL_UDELAY(delay); val = ecore_rd(p_hwfn, p_ptt, addr); } if (i == ECORE_INIT_MAX_POLL_COUNT) DP_ERR(p_hwfn, "Timeout when polling reg: 0x%08x [ Waiting-for: %08x Got: %08x (comparison %08x)]\n", - addr, OSAL_LE32_TO_CPU(cmd->expected_val), val, + addr, + OSAL_LE32_TO_CPU(cmd->expected_val), val, OSAL_LE32_TO_CPU(cmd->op_data)); } @@ -469,7 +485,9 @@ static u32 ecore_init_cmd_phase(struct init_if_phase_op *p_cmd, enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, - int phase, int phase_id, int modes) + int phase, + int phase_id, + int modes) { struct ecore_dev *p_dev = p_hwfn->p_dev; bool b_dmae = (phase != PHASE_ENGINE); @@ -531,6 +549,7 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn, } #ifdef CONFIG_ECORE_ZIPPED_FW OSAL_FREE(p_hwfn->p_dev, p_hwfn->unzip_buf); + p_hwfn->unzip_buf = OSAL_NULL; #endif return rc; } @@ -553,19 +572,19 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev, return ECORE_INVAL; } - buf_hdr = (struct bin_buffer_hdr *)(uintptr_t)fw_data; + buf_hdr = (struct bin_buffer_hdr *)fw_data; offset = buf_hdr[BIN_BUF_INIT_FW_VER_INFO].offset; - fw->fw_ver_info = (struct fw_ver_info *)((uintptr_t)(fw_data + offset)); + fw->fw_ver_info = (struct fw_ver_info *)(fw_data + offset); offset = buf_hdr[BIN_BUF_INIT_CMD].offset; - fw->init_ops = (union init_op *)((uintptr_t)(fw_data + offset)); + fw->init_ops = (union init_op *)(fw_data + offset); offset = buf_hdr[BIN_BUF_INIT_VAL].offset; - fw->arr_data = (u32 *)((uintptr_t)(fw_data + offset)); + fw->arr_data = (u32 *)(fw_data + offset); 
offset = buf_hdr[BIN_BUF_INIT_MODE_TREE].offset; - fw->modes_tree_buf = (u8 *)((uintptr_t)(fw_data + offset)); + fw->modes_tree_buf = (u8 *)(fw_data + offset); len = buf_hdr[BIN_BUF_INIT_CMD].length; fw->init_ops_size = len / sizeof(struct init_raw_op); @@ -585,8 +604,10 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev, fw->arr_data = (u32 *)init_val; fw->modes_tree_buf = (u8 *)modes_tree_buf; fw->init_ops_size = init_ops_size; - fw->fw_overlays = fw_overlays; - fw->fw_overlays_len = sizeof(fw_overlays); + fw->fw_overlays_e4 = e4_overlays; + fw->fw_overlays_e4_len = sizeof(e4_overlays); + fw->fw_overlays_e5 = e5_overlays; + fw->fw_overlays_e5_len = sizeof(e5_overlays); #endif return ECORE_SUCCESS; diff --git a/drivers/net/qede/base/ecore_init_ops.h b/drivers/net/qede/base/ecore_init_ops.h index 0cbf293b3..fb305e531 100644 --- a/drivers/net/qede/base/ecore_init_ops.h +++ b/drivers/net/qede/base/ecore_init_ops.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_INIT_OPS__ #define __ECORE_INIT_OPS__ @@ -29,7 +29,7 @@ void ecore_init_iro_array(struct ecore_dev *p_dev); * @return _ecore_status_t */ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, + struct ecore_ptt *p_ptt, int phase, int phase_id, int modes); @@ -52,15 +52,6 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn); */ void ecore_init_free(struct ecore_hwfn *p_hwfn); - -/** - * @brief ecore_init_clear_rt_data - Clears the runtime init array. - * - * - * @param p_hwfn - */ -void ecore_init_clear_rt_data(struct ecore_hwfn *p_hwfn); - /** * @brief ecore_init_store_rt_reg - Store a configuration value in the RT array. * @@ -70,8 +61,8 @@ void ecore_init_clear_rt_data(struct ecore_hwfn *p_hwfn); * @param val */ void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn, - u32 rt_offset, - u32 val); + u32 rt_offset, + u32 val); #define STORE_RT_REG(hwfn, offset, val) \ ecore_init_store_rt_reg(hwfn, offset, val) diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c index e0d61e375..e56d5dba7 100644 --- a/drivers/net/qede/base/ecore_int.c +++ b/drivers/net/qede/base/ecore_int.c @@ -1,14 +1,13 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - -#include - #include "bcm_osal.h" #include "ecore.h" #include "ecore_spq.h" +#include "reg_addr.h" #include "ecore_gtt_reg_addr.h" #include "ecore_init_ops.h" #include "ecore_rt_defs.h" @@ -20,10 +19,32 @@ #include "ecore_hw_defs.h" #include "ecore_hsi_common.h" #include "ecore_mcp.h" +#include "../qede_debug.h" /* @DPDK */ + +#ifdef DIAG +/* This is nasty, but diag is using the drv_dbg_fw_funcs.c [non-ecore flavor], + * and so the functions are lacking ecore prefix. + * If there would be other clients needing this [or if the content that isn't + * really optional there would increase], we'll need to re-think this. 
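The ecore_init_fw_data() hunk above leans on the firmware blob starting with a table of per-section headers: each fw->* pointer is just blob base plus that section's offset, with no copying. A sketch of that layout with assumed field names (the real header is struct bin_buffer_hdr):

#include <stdint.h>

struct bin_hdr {
    uint32_t offset;
    uint32_t length;
};

enum { BUF_FW_VER, BUF_INIT_CMD, BUF_INIT_VAL, BUF_MODE_TREE };

static const void *bin_section(const uint8_t *fw, int id, uint32_t *len)
{
    const struct bin_hdr *h = (const struct bin_hdr *)fw;

    *len = h[id].length;
    return fw + h[id].offset; /* points into the blob, no copy */
}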
+ */ +enum dbg_status dbg_read_attn(struct ecore_hwfn *dev, + struct ecore_ptt *ptt, + enum block_id block, + enum dbg_attn_type attn_type, + bool clear_status, + struct dbg_attn_block_result *results); + +const char *dbg_get_status_str(enum dbg_status status); + +#define ecore_dbg_read_attn(hwfn, ptt, id, type, clear, results) \ + dbg_read_attn(hwfn, ptt, id, type, clear, results) +#define ecore_dbg_get_status_str(status) \ + dbg_get_status_str(status) +#endif struct ecore_pi_info { ecore_int_comp_cb_t comp_cb; - void *cookie; /* Will be sent to the compl cb function */ + void *cookie; /* Will be sent to the completion callback function */ }; struct ecore_sb_sp_info { @@ -47,17 +68,16 @@ struct aeu_invert_reg_bit { #define ATTENTION_PARITY (1 << 0) -#define ATTENTION_LENGTH_MASK (0x00000ff0) +#define ATTENTION_LENGTH_MASK (0xff) #define ATTENTION_LENGTH_SHIFT (4) -#define ATTENTION_LENGTH(flags) (((flags) & ATTENTION_LENGTH_MASK) >> \ - ATTENTION_LENGTH_SHIFT) +#define ATTENTION_LENGTH(flags) (GET_FIELD((flags), ATTENTION_LENGTH)) #define ATTENTION_SINGLE (1 << ATTENTION_LENGTH_SHIFT) #define ATTENTION_PAR (ATTENTION_SINGLE | ATTENTION_PARITY) #define ATTENTION_PAR_INT ((2 << ATTENTION_LENGTH_SHIFT) | \ ATTENTION_PARITY) /* Multiple bits start with this offset */ -#define ATTENTION_OFFSET_MASK (0x000ff000) +#define ATTENTION_OFFSET_MASK (0xff) #define ATTENTION_OFFSET_SHIFT (12) #define ATTENTION_BB_MASK (0xf) @@ -78,79 +98,83 @@ struct aeu_invert_reg { struct aeu_invert_reg_bit bits[32]; }; -#define MAX_ATTN_GRPS (8) -#define NUM_ATTN_REGS (9) +#define MAX_ATTN_GRPS 8 +#define NUM_ATTN_REGS_E4 9 +#define NUM_ATTN_REGS_E5 10 +#define MAX_NUM_ATTN_REGS NUM_ATTN_REGS_E5 +#define NUM_ATTN_REGS(p_hwfn) \ + (ECORE_IS_E4((p_hwfn)->p_dev) ? NUM_ATTN_REGS_E4 : NUM_ATTN_REGS_E5) static enum _ecore_status_t ecore_mcp_attn_cb(struct ecore_hwfn *p_hwfn) { u32 tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, MCP_REG_CPU_STATE); - DP_INFO(p_hwfn->p_dev, "MCP_REG_CPU_STATE: %08x - Masking...\n", tmp); - ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, MCP_REG_CPU_EVENT_MASK, 0xffffffff); + DP_INFO(p_hwfn->p_dev, "MCP_REG_CPU_STATE: %08x - Masking...\n", + tmp); + ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, MCP_REG_CPU_EVENT_MASK, + 0xffffffff); return ECORE_SUCCESS; } -#define ECORE_PSWHST_ATTENTION_DISABLED_PF_MASK (0x3c000) -#define ECORE_PSWHST_ATTENTION_DISABLED_PF_SHIFT (14) -#define ECORE_PSWHST_ATTENTION_DISABLED_VF_MASK (0x03fc0) -#define ECORE_PSWHST_ATTENTION_DISABLED_VF_SHIFT (6) -#define ECORE_PSWHST_ATTENTION_DISABLED_VALID_MASK (0x00020) -#define ECORE_PSWHST_ATTENTION_DISABLED_VALID_SHIFT (5) -#define ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_MASK (0x0001e) -#define ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_SHIFT (1) -#define ECORE_PSWHST_ATTENTION_DISABLED_WRITE_MASK (0x1) -#define ECORE_PSWHST_ATTNETION_DISABLED_WRITE_SHIFT (0) -#define ECORE_PSWHST_ATTENTION_VF_DISABLED (0x1) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS (0x1) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_MASK (0x1) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_SHIFT (0) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_MASK (0x1e) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_SHIFT (1) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_MASK (0x20) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_SHIFT (5) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_MASK (0x3fc0) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_SHIFT (6) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_MASK (0x3c000) 
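The mask rewrites in this block track the GET_FIELD() convention used throughout the ecore headers: the shift is applied first, and the mask is the field-width mask after shifting (0xff at shift 6, not 0x3fc0). A compilable illustration with an invented DEMO_VF field:

#include <stdint.h>

/* Same shape as the common-HSI GET_FIELD() macro. */
#define GET_FIELD(value, name) (((value) >> name##_SHIFT) & name##_MASK)

#define DEMO_VF_MASK  (0xff) /* post-shift width mask */
#define DEMO_VF_SHIFT (6)

static uint8_t demo_vf_id(uint32_t reg)
{
    return (uint8_t)GET_FIELD(reg, DEMO_VF);
}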
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_SHIFT (14) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_MASK (0x3fc0000) -#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_SHIFT (18) +/* Register PSWHST_REG_VF_DISABLED_ERROR_DATA */ +#define ECORE_PSWHST_ATTN_DISABLED_PF_MASK (0xf) +#define ECORE_PSWHST_ATTN_DISABLED_PF_SHIFT (14) +#define ECORE_PSWHST_ATTN_DISABLED_VF_MASK (0xff) +#define ECORE_PSWHST_ATTN_DISABLED_VF_SHIFT (6) +#define ECORE_PSWHST_ATTN_DISABLED_VALID_MASK (0x1) +#define ECORE_PSWHST_ATTN_DISABLED_VALID_SHIFT (5) +#define ECORE_PSWHST_ATTN_DISABLED_CLIENT_MASK (0xf) +#define ECORE_PSWHST_ATTN_DISABLED_CLIENT_SHIFT (1) +#define ECORE_PSWHST_ATTN_DISABLED_WRITE_MASK (0x1) +#define ECORE_PSWHST_ATTN_DISABLED_WRITE_SHIFT (0) + +/* Register PSWHST_REG_VF_DISABLED_ERROR_VALID */ +#define ECORE_PSWHST_ATTN_VF_DISABLED_MASK (0x1) +#define ECORE_PSWHST_ATTN_VF_DISABLED_SHIFT (0) + +/* Register PSWHST_REG_INCORRECT_ACCESS_VALID */ +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_MASK (0x1) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_SHIFT (0) + +/* Register PSWHST_REG_INCORRECT_ACCESS_DATA */ +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_WR_MASK (0x1) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_WR_SHIFT (0) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_CLIENT_MASK (0xf) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_CLIENT_SHIFT (1) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_VALID_MASK (0x1) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_VALID_SHIFT (5) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_ID_MASK (0xff) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_ID_SHIFT (6) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_PF_ID_MASK (0xf) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_PF_ID_SHIFT (14) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_BYTE_EN_MASK (0xff) +#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_BYTE_EN_SHIFT (18) + static enum _ecore_status_t ecore_pswhst_attn_cb(struct ecore_hwfn *p_hwfn) { - u32 tmp = - ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, - PSWHST_REG_VF_DISABLED_ERROR_VALID); + u32 tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, PSWHST_REG_VF_DISABLED_ERROR_VALID); /* Disabled VF access */ - if (tmp & ECORE_PSWHST_ATTENTION_VF_DISABLED) { + if (GET_FIELD(tmp, ECORE_PSWHST_ATTN_VF_DISABLED)) { u32 addr, data; addr = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, PSWHST_REG_VF_DISABLED_ERROR_ADDRESS); data = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, PSWHST_REG_VF_DISABLED_ERROR_DATA); - DP_INFO(p_hwfn->p_dev, - "PF[0x%02x] VF [0x%02x] [Valid 0x%02x] Client [0x%02x]" - " Write [0x%02x] Addr [0x%08x]\n", - (u8)((data & ECORE_PSWHST_ATTENTION_DISABLED_PF_MASK) - >> ECORE_PSWHST_ATTENTION_DISABLED_PF_SHIFT), - (u8)((data & ECORE_PSWHST_ATTENTION_DISABLED_VF_MASK) - >> ECORE_PSWHST_ATTENTION_DISABLED_VF_SHIFT), - (u8)((data & - ECORE_PSWHST_ATTENTION_DISABLED_VALID_MASK) >> - ECORE_PSWHST_ATTENTION_DISABLED_VALID_SHIFT), - (u8)((data & - ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_MASK) >> - ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_SHIFT), - (u8)((data & - ECORE_PSWHST_ATTENTION_DISABLED_WRITE_MASK) >> - ECORE_PSWHST_ATTNETION_DISABLED_WRITE_SHIFT), + DP_INFO(p_hwfn->p_dev, "PF[0x%02x] VF [0x%02x] [Valid 0x%02x] Client [0x%02x] Write [0x%02x] Addr [0x%08x]\n", + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_PF)), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_VF)), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_VALID)), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_CLIENT)), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_WRITE)), addr); } tmp = ecore_rd(p_hwfn, 
p_hwfn->p_dpc_ptt, PSWHST_REG_INCORRECT_ACCESS_VALID); - if (tmp & ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS) { + if (GET_FIELD(tmp, ECORE_PSWHST_ATTN_INCORRECT_ACCESS)) { u32 addr, data, length; addr = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, @@ -160,29 +184,14 @@ static enum _ecore_status_t ecore_pswhst_attn_cb(struct ecore_hwfn *p_hwfn) length = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, PSWHST_REG_INCORRECT_ACCESS_LENGTH); - DP_INFO(p_hwfn->p_dev, - "Incorrect access to %08x of length %08x - PF [%02x]" - " VF [%04x] [valid %02x] client [%02x] write [%02x]" - " Byte-Enable [%04x] [%08x]\n", + DP_INFO(p_hwfn->p_dev, "Incorrect access to %08x of length %08x - PF [%02x] VF [%04x] [valid %02x] client [%02x] write [%02x] Byte-Enable [%04x] [%08x]\n", addr, length, - (u8)((data & - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_MASK) >> - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_SHIFT), - (u8)((data & - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_MASK) >> - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_SHIFT), - (u8)((data & - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_MASK) >> - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_SHIFT), - (u8)((data & - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_MASK) >> - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_SHIFT), - (u8)((data & - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_MASK) >> - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_SHIFT), - (u8)((data & - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_MASK) >> - ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_SHIFT), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_PF_ID)), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_ID)), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_VALID)), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_CLIENT)), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_WR)), + (u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_BYTE_EN)), data); } @@ -193,42 +202,41 @@ static enum _ecore_status_t ecore_pswhst_attn_cb(struct ecore_hwfn *p_hwfn) } /* Register GRC_REG_TIMEOUT_ATTN_ACCESS_VALID */ -#define ECORE_GRC_ATTENTION_VALID_BIT_MASK (0x1) -#define ECORE_GRC_ATTENTION_VALID_BIT_SHIFT (0) - -#define ECORE_GRC_ATTENTION_ADDRESS_MASK (0x7fffff << 0) -#define ECORE_GRC_ATTENTION_RDWR_BIT (1 << 23) -#define ECORE_GRC_ATTENTION_MASTER_MASK (0xf << 24) +#define ECORE_GRC_ATTENTION_VALID_BIT_MASK (0x1) +#define ECORE_GRC_ATTENTION_VALID_BIT_SHIFT (0) + +/* Register GRC_REG_TIMEOUT_ATTN_ACCESS_DATA_0 */ +#define ECORE_GRC_ATTENTION_ADDRESS_MASK (0x7fffff) +#define ECORE_GRC_ATTENTION_ADDRESS_SHIFT (0) +#define ECORE_GRC_ATTENTION_RDWR_BIT_MASK (0x1) +#define ECORE_GRC_ATTENTION_RDWR_BIT_SHIFT (23) +#define ECORE_GRC_ATTENTION_MASTER_MASK (0xf) #define ECORE_GRC_ATTENTION_MASTER_SHIFT (24) + +/* Register GRC_REG_TIMEOUT_ATTN_ACCESS_DATA_1 */ #define ECORE_GRC_ATTENTION_PF_MASK (0xf) -#define ECORE_GRC_ATTENTION_VF_MASK (0xff << 4) +#define ECORE_GRC_ATTENTION_PF_SHIFT (0) +#define ECORE_GRC_ATTENTION_VF_MASK (0xff) #define ECORE_GRC_ATTENTION_VF_SHIFT (4) -#define ECORE_GRC_ATTENTION_PRIV_MASK (0x3 << 14) +#define ECORE_GRC_ATTENTION_PRIV_MASK (0x3) #define ECORE_GRC_ATTENTION_PRIV_SHIFT (14) + +/* Constant value for ECORE_GRC_ATTENTION_PRIV field */ #define ECORE_GRC_ATTENTION_PRIV_VF (0) + static const char *grc_timeout_attn_master_to_str(u8 master) { switch (master) { - case 1: - return "PXP"; - case 2: - return "MCP"; - case 3: - return "MSDM"; - case 4: - return "PSDM"; - case 5: - return "YSDM"; - case 6: - return "USDM"; - 
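The compacted grc_timeout_attn_master_to_str() switch is equivalent to a bounds-checked table lookup; for comparison, a sketch of that alternative:

#include <stddef.h>

static const char *const grc_masters[] = {
    NULL, "PXP", "MCP", "MSDM", "PSDM", "YSDM",
    "USDM", "TSDM", "XSDM", "DBU", "DMAE",
};

static const char *master_str(unsigned int m)
{
    if (m < sizeof(grc_masters) / sizeof(grc_masters[0]) && grc_masters[m])
        return grc_masters[m];
    return "Unknown";
}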
case 7: - return "TSDM"; - case 8: - return "XSDM"; - case 9: - return "DBU"; - case 10: - return "DMAE"; + case 1: return "PXP"; + case 2: return "MCP"; + case 3: return "MSDM"; + case 4: return "PSDM"; + case 5: return "YSDM"; + case 6: return "USDM"; + case 7: return "TSDM"; + case 8: return "XSDM"; + case 9: return "DBU"; + case 10: return "DMAE"; default: return "Unknown"; } @@ -255,21 +263,19 @@ static enum _ecore_status_t ecore_grc_attn_cb(struct ecore_hwfn *p_hwfn) tmp2 = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, GRC_REG_TIMEOUT_ATTN_ACCESS_DATA_1); + /* ECORE_GRC_ATTENTION_ADDRESS: register to bytes address format */ DP_NOTICE(p_hwfn->p_dev, false, "GRC timeout [%08x:%08x] - %s Address [%08x] [Master %s] [PF: %02x %s %02x]\n", tmp2, tmp, - (tmp & ECORE_GRC_ATTENTION_RDWR_BIT) ? "Write to" + GET_FIELD(tmp, ECORE_GRC_ATTENTION_RDWR_BIT) ? "Write to" : "Read from", - (tmp & ECORE_GRC_ATTENTION_ADDRESS_MASK) << 2, + GET_FIELD(tmp, ECORE_GRC_ATTENTION_ADDRESS) << 2, grc_timeout_attn_master_to_str( - (tmp & ECORE_GRC_ATTENTION_MASTER_MASK) >> - ECORE_GRC_ATTENTION_MASTER_SHIFT), - (tmp2 & ECORE_GRC_ATTENTION_PF_MASK), - (((tmp2 & ECORE_GRC_ATTENTION_PRIV_MASK) >> - ECORE_GRC_ATTENTION_PRIV_SHIFT) == + GET_FIELD(tmp, ECORE_GRC_ATTENTION_MASTER)), + GET_FIELD(tmp2, ECORE_GRC_ATTENTION_PF), + (GET_FIELD(tmp2, ECORE_GRC_ATTENTION_PRIV) == ECORE_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Irrelevant:)", - (tmp2 & ECORE_GRC_ATTENTION_VF_MASK) >> - ECORE_GRC_ATTENTION_VF_SHIFT); + GET_FIELD(tmp2, ECORE_GRC_ATTENTION_VF)); /* Clean the validity bit */ ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, @@ -278,19 +284,41 @@ static enum _ecore_status_t ecore_grc_attn_cb(struct ecore_hwfn *p_hwfn) return rc; } -#define ECORE_PGLUE_ATTENTION_VALID (1 << 29) -#define ECORE_PGLUE_ATTENTION_RD_VALID (1 << 26) -#define ECORE_PGLUE_ATTENTION_DETAILS_PFID_MASK (0xf << 20) -#define ECORE_PGLUE_ATTENTION_DETAILS_PFID_SHIFT (20) -#define ECORE_PGLUE_ATTENTION_DETAILS_VF_VALID (1 << 19) -#define ECORE_PGLUE_ATTENTION_DETAILS_VFID_MASK (0xff << 24) -#define ECORE_PGLUE_ATTENTION_DETAILS_VFID_SHIFT (24) -#define ECORE_PGLUE_ATTENTION_DETAILS2_WAS_ERR (1 << 21) -#define ECORE_PGLUE_ATTENTION_DETAILS2_BME (1 << 22) -#define ECORE_PGLUE_ATTENTION_DETAILS2_FID_EN (1 << 23) -#define ECORE_PGLUE_ATTENTION_ICPL_VALID (1 << 23) -#define ECORE_PGLUE_ATTENTION_ZLR_VALID (1 << 25) -#define ECORE_PGLUE_ATTENTION_ILT_VALID (1 << 23) +/* Register PGLUE_B_REG_TX_ERR_RD_DETAILS and + * Register PGLUE_B_REG_TX_ERR_WR_DETAILS + */ +#define ECORE_PGLUE_ATTN_DETAILS_VF_VALID_MASK (0x1) +#define ECORE_PGLUE_ATTN_DETAILS_VF_VALID_SHIFT (19) +#define ECORE_PGLUE_ATTN_DETAILS_PFID_MASK (0xf) +#define ECORE_PGLUE_ATTN_DETAILS_PFID_SHIFT (20) +#define ECORE_PGLUE_ATTN_DETAILS_VFID_MASK (0xff) +#define ECORE_PGLUE_ATTN_DETAILS_VFID_SHIFT (24) + +/* Register PGLUE_B_REG_TX_ERR_RD_DETAILS2 and + * Register PGLUE_B_REG_TX_ERR_WR_DETAILS2 + */ +#define ECORE_PGLUE_ATTN_DETAILS2_WAS_ERR_MASK (0x1) +#define ECORE_PGLUE_ATTN_DETAILS2_WAS_ERR_SHIFT (21) +#define ECORE_PGLUE_ATTN_DETAILS2_BME_MASK (0x1) +#define ECORE_PGLUE_ATTN_DETAILS2_BME_SHIFT (22) +#define ECORE_PGLUE_ATTN_DETAILS2_FID_EN_MASK (0x1) +#define ECORE_PGLUE_ATTN_DETAILS2_FID_EN_SHIFT (23) +#define ECORE_PGLUE_ATTN_DETAILS2_RD_VALID_MASK (0x1) +#define ECORE_PGLUE_ATTN_DETAILS2_RD_VALID_SHIFT (26) +#define ECORE_PGLUE_ATTN_DETAILS2_WR_VALID_MASK (0x1) +#define ECORE_PGLUE_ATTN_DETAILS2_WR_VALID_SHIFT (29) + +/* Register PGLUE_B_REG_TX_ERR_WR_DETAILS_ICPL */ +#define ECORE_PGLUE_ATTN_ICPL_VALID_MASK (0x1) 
+#define ECORE_PGLUE_ATTN_ICPL_VALID_SHIFT (23) + +/* Register PGLUE_B_REG_MASTER_ZLR_ERR_DETAILS */ +#define ECORE_PGLUE_ATTN_ZLR_VALID_MASK (0x1) +#define ECORE_PGLUE_ATTN_ZLR_VALID_SHIFT (25) + +/* Register PGLUE_B_REG_VF_ILT_ERR_DETAILS2 */ +#define ECORE_PGLUE_ATTN_ILT_VALID_MASK (0x1) +#define ECORE_PGLUE_ATTN_ILT_VALID_SHIFT (23) enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -300,7 +328,7 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn, char str[512] = {0}; tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_WR_DETAILS2); - if (tmp & ECORE_PGLUE_ATTENTION_VALID) { + if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_WR_VALID)) { u32 addr_lo, addr_hi, details; addr_lo = ecore_rd(p_hwfn, p_ptt, @@ -310,23 +338,19 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn, details = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_WR_DETAILS); OSAL_SNPRINTF(str, 512, - "Illegal write by chip to [%08x:%08x] blocked. Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x] Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]\n", - addr_hi, addr_lo, details, - (u8)((details & - ECORE_PGLUE_ATTENTION_DETAILS_PFID_MASK) >> - ECORE_PGLUE_ATTENTION_DETAILS_PFID_SHIFT), - (u8)((details & - ECORE_PGLUE_ATTENTION_DETAILS_VFID_MASK) >> - ECORE_PGLUE_ATTENTION_DETAILS_VFID_SHIFT), - (u8)((details & - ECORE_PGLUE_ATTENTION_DETAILS_VF_VALID) ? 1 : 0), - tmp, - (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_WAS_ERR) ? - 1 : 0), - (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_BME) ? - 1 : 0), - (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_FID_EN) ? - 1 : 0)); + "Illegal write by chip to [%08x:%08x] blocked. Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x] Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]\n", + addr_hi, addr_lo, details, + (u8)(GET_FIELD(details, + ECORE_PGLUE_ATTN_DETAILS_PFID)), + (u8)(GET_FIELD(details, + ECORE_PGLUE_ATTN_DETAILS_VFID)), + (u8)(GET_FIELD(details, + ECORE_PGLUE_ATTN_DETAILS_VF_VALID) ? 1 : 0), + tmp, + (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_WAS_ERR) ? 1 : 0), + (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_BME) ? 1 : 0), + (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_FID_EN) ? 1 : 0)); + if (is_hw_init) DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "%s", str); else @@ -334,7 +358,7 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn, } tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_RD_DETAILS2); - if (tmp & ECORE_PGLUE_ATTENTION_RD_VALID) { + if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_RD_VALID)) { u32 addr_lo, addr_hi, details; addr_lo = ecore_rd(p_hwfn, p_ptt, @@ -347,29 +371,23 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn, DP_NOTICE(p_hwfn, false, "Illegal read by chip from [%08x:%08x] blocked. Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x] Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]\n", addr_hi, addr_lo, details, - (u8)((details & - ECORE_PGLUE_ATTENTION_DETAILS_PFID_MASK) >> - ECORE_PGLUE_ATTENTION_DETAILS_PFID_SHIFT), - (u8)((details & - ECORE_PGLUE_ATTENTION_DETAILS_VFID_MASK) >> - ECORE_PGLUE_ATTENTION_DETAILS_VFID_SHIFT), - (u8)((details & - ECORE_PGLUE_ATTENTION_DETAILS_VF_VALID) ? 1 : 0), + (u8)(GET_FIELD(details, + ECORE_PGLUE_ATTN_DETAILS_PFID)), + (u8)(GET_FIELD(details, + ECORE_PGLUE_ATTN_DETAILS_VFID)), + (u8)(GET_FIELD(details, ECORE_PGLUE_ATTN_DETAILS_VF_VALID) ? 
1 : 0), tmp, - (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_WAS_ERR) ? - 1 : 0), - (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_BME) ? - 1 : 0), - (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_FID_EN) ? - 1 : 0)); + (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_WAS_ERR) ? 1 : 0), + (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_BME) ? 1 : 0), + (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_FID_EN) ? 1 : 0)); } tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_WR_DETAILS_ICPL); - if (tmp & ECORE_PGLUE_ATTENTION_ICPL_VALID) - DP_NOTICE(p_hwfn, false, "ICPL erorr - %08x\n", tmp); + if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_ICPL_VALID)) + DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "ICPL error - %08x\n", tmp); tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_ZLR_ERR_DETAILS); - if (tmp & ECORE_PGLUE_ATTENTION_ZLR_VALID) { + if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_ZLR_VALID)) { u32 addr_hi, addr_lo; addr_lo = ecore_rd(p_hwfn, p_ptt, @@ -378,12 +396,12 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn, PGLUE_B_REG_MASTER_ZLR_ERR_ADD_63_32); DP_NOTICE(p_hwfn, false, - "ICPL erorr - %08x [Address %08x:%08x]\n", + "ZLR error - %08x [Address %08x:%08x]\n", tmp, addr_hi, addr_lo); } tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_VF_ILT_ERR_DETAILS2); - if (tmp & ECORE_PGLUE_ATTENTION_ILT_VALID) { + if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_ILT_VALID)) { u32 addr_hi, addr_lo, details; addr_lo = ecore_rd(p_hwfn, p_ptt, @@ -411,9 +429,12 @@ static enum _ecore_status_t ecore_pglueb_rbc_attn_cb(struct ecore_hwfn *p_hwfn) static enum _ecore_status_t ecore_fw_assertion(struct ecore_hwfn *p_hwfn) { - DP_NOTICE(p_hwfn, false, "FW assertion!\n"); + u8 str[ECORE_HW_ERR_MAX_STR_SIZE]; - ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FW_ASSERT); + OSAL_SNPRINTF((char *)str, ECORE_HW_ERR_MAX_STR_SIZE, "FW assertion!\n"); + DP_NOTICE(p_hwfn, false, "%s", str); + ecore_hw_err_notify(p_hwfn, p_hwfn->p_dpc_ptt, ECORE_HW_ERR_FW_ASSERT, + str, OSAL_STRLEN((char *)str) + 1); return ECORE_INVAL; } @@ -432,91 +453,212 @@ ecore_general_attention_35(struct ecore_hwfn *p_hwfn) #define ECORE_DORQ_ATTENTION_SIZE_MASK (0x7f) #define ECORE_DORQ_ATTENTION_SIZE_SHIFT (16) -#define ECORE_DB_REC_COUNT 1000 -#define ECORE_DB_REC_INTERVAL 100 - -static enum _ecore_status_t ecore_db_rec_flush_queue(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +/* Wait for usage to zero or count to run out. This is necessary since EDPM + * doorbell transactions can take multiple 64b cycles, and as such can "split" + * over the pci. Possibly, the doorbell drop can happen with half an EDPM in + * the queue and other half dropped. Another EDPM doorbell to the same address + * (from doorbell recovery mechanism or from the doorbelling entity) could have + * first half dropped and second half interpreted as continuation of the first. + * To prevent such malformed doorbells from reaching the device, flush the + * queue before releasing the overflow sticky indication. + * For PFs/VFs which do not edpm, flushing is not necessary. + */ +enum _ecore_status_t +ecore_db_rec_flush_queue(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u32 usage_cnt_reg, u32 *count) { - u32 count = ECORE_DB_REC_COUNT; - u32 usage = 1; + u32 usage = ecore_rd(p_hwfn, p_ptt, usage_cnt_reg); - /* wait for usage to zero or count to run out. This is necessary since - * EDPM doorbell transactions can take multiple 64b cycles, and as such - * can "split" over the pci. Possibly, the doorbell drop can happen with - * half an EDPM in the queue and other half dropped.
Another EDPM - doorbell to the same address (from doorbell recovery mechanism or - from the doorbelling entity) could have first half dropped and second - half interperted as continuation of the first. To prevent such - malformed doorbells from reaching the device, flush the queue before - releaseing the overflow sticky indication. + /* Flush any pending (e)dpm as they may never arrive. + * This command will flush pending dpms for all customer PFs and VFs + * of this DORQ block, and will force them to use a regular doorbell. */ - while (count-- && usage) { - usage = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_USAGE_CNT); + ecore_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1); + + while (*count && usage) { + (*count)--; OSAL_UDELAY(ECORE_DB_REC_INTERVAL); + usage = ecore_rd(p_hwfn, p_ptt, usage_cnt_reg); } /* should have been depleted by now */ if (usage) { DP_NOTICE(p_hwfn->p_dev, false, - "DB recovery: doorbell usage failed to zero after %d usec. usage was %x\n", - ECORE_DB_REC_INTERVAL * ECORE_DB_REC_COUNT, usage); + "Flushing DORQ failed, usage was %u after %d usec\n", + usage, ECORE_DB_REC_INTERVAL * ECORE_DB_REC_COUNT); + return ECORE_TIMEOUT; } return ECORE_SUCCESS; } -enum _ecore_status_t ecore_db_rec_handler(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +void ecore_db_rec_handler(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { - u32 overflow; + u32 attn_ovfl, cur_ovfl, flush_count = ECORE_DB_REC_COUNT; enum _ecore_status_t rc; - overflow = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY); - DP_NOTICE(p_hwfn, false, "PF Overflow sticky 0x%x\n", overflow); - if (!overflow) { - ecore_db_recovery_execute(p_hwfn, DB_REC_ONCE); - return ECORE_SUCCESS; - } - - if (ecore_edpm_enabled(p_hwfn)) { - rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt); - if (rc != ECORE_SUCCESS) - return rc; - } + /* If PF overflow was caught by DORQ attn callback, PF must execute + * doorbell recovery. Reading the bit is atomic because the dorq + * attention handler is setting it in interrupt context. + */ + attn_ovfl = OSAL_TEST_AND_CLEAR_BIT(ECORE_OVERFLOW_BIT, + &p_hwfn->db_recovery_info.overflow); - /* flush any pedning (e)dpm as they may never arrive */ - ecore_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1); + cur_ovfl = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY); - /* release overflow sticky indication (stop silently dropping - * everything + /* Check if sticky overflow indication is set, or it was caught set + * during the DORQ attention callback. */ - ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0); + if (cur_ovfl || attn_ovfl) { + DP_NOTICE(p_hwfn, false, + "PF Overflow sticky: attn %u current %u\n", + attn_ovfl ? 1 : 0, cur_ovfl ? 1 : 0); + + /* In case the PF is capable of DPM, i.e. it has sufficient + * doorbell BAR, the DB queue must be flushed. + * Note that DCBX events which can modify the DPM register + * state aren't taken into account because they can change + * dynamically. + */ + if (cur_ovfl && !p_hwfn->dpm_info.db_bar_no_edpm) { + rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt, + DORQ_REG_PF_USAGE_CNT, + &flush_count); + if (rc != ECORE_SUCCESS) + return; + } - /* repeat all last doorbells (doorbell drop recovery) */ - ecore_db_recovery_execute(p_hwfn, DB_REC_REAL_DEAL); + /* release overflow sticky indication (stop silently dropping + * everything). + */ + ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0); - return ECORE_SUCCESS; + /* VF doorbell recovery in next iov handler for all VFs.
+ * Setting the bit is atomic because the iov handler is reading + * it in a parallel flow. + */ + if (cur_ovfl && IS_PF_SRIOV_ALLOC(p_hwfn)) + OSAL_SET_BIT(ECORE_OVERFLOW_BIT, + &p_hwfn->pf_iov_info->overflow); + } + + /* Even if the PF didn't overflow, some of its child VFs may have. + * Either way, schedule VFs handler for doorbell recovery. + */ + if (IS_PF_SRIOV_ALLOC(p_hwfn)) + OSAL_IOV_DB_REC_HANDLER(p_hwfn); + + if (cur_ovfl || attn_ovfl) + ecore_db_recovery_execute(p_hwfn); } -static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn) +static void ecore_dorq_attn_overflow(struct ecore_hwfn *p_hwfn) { - u32 int_sts, first_drop_reason, details, address, all_drops_reason; + u32 overflow, i, flush_delay_count = ECORE_DB_REC_COUNT; struct ecore_ptt *p_ptt = p_hwfn->p_dpc_ptt; + struct ecore_vf_info *p_vf; enum _ecore_status_t rc; - int_sts = ecore_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS); - DP_NOTICE(p_hwfn->p_dev, false, "DORQ attention. int_sts was %x\n", - int_sts); + overflow = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY); + if (overflow) { + /* PF doorbell recovery in next periodic handler. + * Setting the bit is atomic because the db_rec handler is + * reading it in the periodic handler flow. + */ + OSAL_SET_BIT(ECORE_OVERFLOW_BIT, + &p_hwfn->db_recovery_info.overflow); + + /* VF doorbell recovery in next iov handler for all VFs. + * Setting the bit is atomic because the iov handler is reading + * it in a parallel flow. + */ + if (IS_PF_SRIOV_ALLOC(p_hwfn)) + OSAL_SET_BIT(ECORE_OVERFLOW_BIT, + &p_hwfn->pf_iov_info->overflow); + + /* In case the PF is capable of DPM, i.e. it has sufficient + * doorbell BAR, the DB queue must be flushed. + * Note that DCBX events which can modify the DPM register + * state aren't taken into account because they can change + * dynamically. + */ + if (!p_hwfn->dpm_info.db_bar_no_edpm) { + rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt, + DORQ_REG_PF_USAGE_CNT, + &flush_delay_count); + if (rc != ECORE_SUCCESS) + goto out; + } + + /* Allow DORQ to process doorbells from this PF again */ + ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0); + } else { + /* Checking for VF overflows is only needed if the PF didn't + * overflow. If the PF did overflow, ecore_iov_db_rec_handler() + * will be scheduled by the periodic doorbell recovery handler + * anyway. + */ + ecore_for_each_vf(p_hwfn, i) { + p_vf = &p_hwfn->pf_iov_info->vfs_array[i]; + ecore_fid_pretend(p_hwfn, p_ptt, + (u16)p_vf->concrete_fid); + overflow = ecore_rd(p_hwfn, p_ptt, + DORQ_REG_VF_OVFL_STICKY); + if (overflow) { + /* VF doorbell recovery in next iov handler. + * Setting the bit is atomic because the iov + * handler is reading it in a parallel flow. + */ + OSAL_SET_BIT(ECORE_OVERFLOW_BIT, + &p_vf->db_recovery_info.overflow); + + if (!p_vf->db_recovery_info.edpm_disabled) { + rc = ecore_db_rec_flush_queue(p_hwfn, + p_ptt, DORQ_REG_VF_USAGE_CNT, + &flush_delay_count); + /* Do not clear VF sticky for this VF + * if flush failed. + */ + if (rc != ECORE_SUCCESS) + continue; + } + + /* Allow DORQ to process doorbells from this + * VF again. + */ + ecore_wr(p_hwfn, p_ptt, + DORQ_REG_VF_OVFL_STICKY, 0x0); + } + } + + ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id); + } +out: + /* Schedule the handler even if overflow was not detected. If an + * overflow occurred after reading the sticky values, there won't be + * another attention.
+ */ + OSAL_DB_REC_OCCURRED(p_hwfn); +} + +static enum _ecore_status_t ecore_dorq_attn_int_sts(struct ecore_hwfn *p_hwfn) +{ + u32 int_sts, first_drop_reason, details, address, all_drops_reason; + struct ecore_ptt *p_ptt = p_hwfn->p_dpc_ptt; /* int_sts may be zero since all PFs were interrupted for doorbell * overflow but another one already handled it. Can abort here. If - * This PF also requires overflow recovery we will be interrupted again + * this PF also requires overflow recovery, we will be interrupted again. + * The masked almost full indication may also be set. Ignoring. */ - if (!int_sts) + int_sts = ecore_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS); + if (!(int_sts & ~DORQ_REG_INT_STS_DORQ_FIFO_AFULL)) return ECORE_SUCCESS; + DP_NOTICE(p_hwfn->p_dev, false, "DORQ attention. int_sts was %x\n", + int_sts); + /* check if db_drop or overflow happened */ if (int_sts & (DORQ_REG_INT_STS_DB_DROP | DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR)) { @@ -539,29 +681,20 @@ static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn) "Size\t\t0x%04x\t\t(in bytes)\n" "1st drop reason\t0x%08x\t(details on first drop since last handling)\n" "Sticky reasons\t0x%08x\t(all drop reasons since last handling)\n", - address, - GET_FIELD(details, ECORE_DORQ_ATTENTION_OPAQUE), + address, GET_FIELD(details, ECORE_DORQ_ATTENTION_OPAQUE), GET_FIELD(details, ECORE_DORQ_ATTENTION_SIZE) * 4, first_drop_reason, all_drops_reason); - rc = ecore_db_rec_handler(p_hwfn, p_ptt); - OSAL_DB_REC_OCCURRED(p_hwfn); - if (rc != ECORE_SUCCESS) - return rc; - /* clear the doorbell drop details and prepare for next drop */ ecore_wr(p_hwfn, p_ptt, DORQ_REG_DB_DROP_DETAILS_REL, 0); - /* mark interrupt as handeld (note: even if drop was due to a - * different reason than overflow we mark as handled) + /* mark interrupt as handled (note: even if drop was due to a different + * reason than overflow we mark as handled) */ ecore_wr(p_hwfn, p_ptt, DORQ_REG_INT_STS_WR, - DORQ_REG_INT_STS_DB_DROP | - DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR); + DORQ_REG_INT_STS_DB_DROP | DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR); - /* if there are no indications otherthan drop indications, - * success - */ + /* if there are no indications other than drop indications, success */ if ((int_sts & ~(DORQ_REG_INT_STS_DB_DROP | DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR | DORQ_REG_INT_STS_DORQ_FIFO_AFULL)) == 0) @@ -574,6 +707,31 @@ static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn) return ECORE_INVAL; } +static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn) +{ + p_hwfn->db_recovery_info.dorq_attn = true; + ecore_dorq_attn_overflow(p_hwfn); + + return ecore_dorq_attn_int_sts(p_hwfn); +} + +/* Handle a corner case race condition where another PF clears the DORQ's + * attention bit in MISCS bitmap before this PF got to read it. This is needed + * only for DORQ attentions because it doesn't send another attention later if + * the original issue was not resolved. If DORQ attention was already handled, + * its callback already set a flag to indicate it.
+ */ +static void ecore_dorq_attn_handler(struct ecore_hwfn *p_hwfn) +{ + if (p_hwfn->db_recovery_info.dorq_attn) + goto out; + + /* Call DORQ callback if the attention was missed */ + ecore_dorq_attn_cb(p_hwfn); +out: + p_hwfn->db_recovery_info.dorq_attn = false; +} + static enum _ecore_status_t ecore_tm_attn_cb(struct ecore_hwfn *p_hwfn) { #ifndef ASIC_ONLY @@ -587,12 +745,10 @@ static enum _ecore_status_t ecore_tm_attn_cb(struct ecore_hwfn *p_hwfn) if (val & (TM_REG_INT_STS_1_PEND_TASK_SCAN | TM_REG_INT_STS_1_PEND_CONN_SCAN)) - DP_INFO(p_hwfn, - "TM attention on emulation - most likely" - " results of clock-ratios\n"); + DP_INFO(p_hwfn, "TM attention on emulation - most likely results of clock-ratios\n"); val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, TM_REG_INT_MASK_1); val |= TM_REG_INT_MASK_1_PEND_CONN_SCAN | - TM_REG_INT_MASK_1_PEND_TASK_SCAN; + TM_REG_INT_MASK_1_PEND_TASK_SCAN; ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, TM_REG_INT_MASK_1, val); return ECORE_SUCCESS; @@ -626,191 +782,213 @@ aeu_descs_special[AEU_INVERT_REG_SPECIAL_MAX] = { }; /* Notice aeu_invert_reg must be defined in the same order of bits as HW; */ -static struct aeu_invert_reg aeu_descs[NUM_ATTN_REGS] = { +static struct aeu_invert_reg aeu_descs[MAX_NUM_ATTN_REGS] = { { - { /* After Invert 1 */ - {"GPIO0 function%d", (32 << ATTENTION_LENGTH_SHIFT), OSAL_NULL, - MAX_BLOCK_ID}, - } - }, + { /* After Invert 1 */ + {"GPIO0 function%d", FIELD_VALUE(ATTENTION_LENGTH, 32), OSAL_NULL, + MAX_BLOCK_ID}, + } + }, { - { /* After Invert 2 */ - {"PGLUE config_space", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"PGLUE misc_flr", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"PGLUE B RBC", ATTENTION_PAR_INT, ecore_pglueb_rbc_attn_cb, - BLOCK_PGLUE_B}, - {"PGLUE misc_mctp", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"Flash event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"SMB event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"Main Power", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"SW timers #%d", - (8 << ATTENTION_LENGTH_SHIFT) | (1 << ATTENTION_OFFSET_SHIFT), - OSAL_NULL, MAX_BLOCK_ID}, - {"PCIE glue/PXP VPD %d", (16 << ATTENTION_LENGTH_SHIFT), OSAL_NULL, - BLOCK_PGLCS}, - } - }, + { /* After Invert 2 */ + {"PGLUE config_space", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"PGLUE misc_flr", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"PGLUE B RBC", ATTENTION_PAR_INT, ecore_pglueb_rbc_attn_cb, BLOCK_PGLUE_B}, + {"PGLUE misc_mctp", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"Flash event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"SMB event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"Main Power", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"SW timers #%d", FIELD_VALUE(ATTENTION_LENGTH, 8) | + FIELD_VALUE(ATTENTION_OFFSET, 1), OSAL_NULL, MAX_BLOCK_ID}, + {"PCIE glue/PXP VPD %d", FIELD_VALUE(ATTENTION_LENGTH, 16), OSAL_NULL, + BLOCK_PGLCS}, + } + }, { - { /* After Invert 3 */ - {"General Attention %d", (32 << ATTENTION_LENGTH_SHIFT), OSAL_NULL, - MAX_BLOCK_ID}, - } - }, + { /* After Invert 3 */ + {"General Attention %d", FIELD_VALUE(ATTENTION_LENGTH, 32), OSAL_NULL, + MAX_BLOCK_ID}, + } + }, { - { /* After Invert 4 */ - {"General Attention 32", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE, - ecore_fw_assertion, MAX_BLOCK_ID}, - {"General Attention %d", - (2 << ATTENTION_LENGTH_SHIFT) | (33 << ATTENTION_OFFSET_SHIFT), - OSAL_NULL, MAX_BLOCK_ID}, - {"General Attention 35", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE, - ecore_general_attention_35, MAX_BLOCK_ID}, - {"NWS Parity", ATTENTION_PAR | 
ATTENTION_BB_DIFFERENT | - ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_0), - OSAL_NULL, BLOCK_NWS}, - {"NWS Interrupt", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT | - ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_1), - OSAL_NULL, BLOCK_NWS}, - {"NWM Parity", ATTENTION_PAR | ATTENTION_BB_DIFFERENT | - ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_2), - OSAL_NULL, BLOCK_NWM}, - {"NWM Interrupt", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT | - ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_3), - OSAL_NULL, BLOCK_NWM}, - {"MCP CPU", ATTENTION_SINGLE, ecore_mcp_attn_cb, MAX_BLOCK_ID}, - {"MCP Watchdog timer", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"MCP M2P", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"AVS stop status ready", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"MSTAT", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID}, - {"MSTAT per-path", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID}, + { /* After Invert 4 */ + {"General Attention 32", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE, + ecore_fw_assertion, MAX_BLOCK_ID}, + {"General Attention %d", FIELD_VALUE(ATTENTION_LENGTH, 2) | + FIELD_VALUE(ATTENTION_OFFSET, 33), OSAL_NULL, MAX_BLOCK_ID}, + {"General Attention 35", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE, + ecore_general_attention_35, MAX_BLOCK_ID}, + {"NWS Parity", ATTENTION_PAR | ATTENTION_BB_DIFFERENT | + ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_0), OSAL_NULL, + BLOCK_NWS}, + {"NWS Interrupt", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT | + ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_1), OSAL_NULL, + BLOCK_NWS}, + {"NWM Parity", ATTENTION_PAR | ATTENTION_BB_DIFFERENT | + ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_2), OSAL_NULL, + BLOCK_NWM}, + {"NWM Interrupt", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT | + ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_3), OSAL_NULL, + BLOCK_NWM}, + {"MCP CPU", ATTENTION_SINGLE, ecore_mcp_attn_cb, MAX_BLOCK_ID}, + {"MCP Watchdog timer", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"MCP M2P", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"AVS stop status ready", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"MSTAT", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID}, + {"MSTAT per-path", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID}, {"OPTE", ATTENTION_PAR, OSAL_NULL, BLOCK_OPTE}, {"MCP", ATTENTION_PAR, OSAL_NULL, BLOCK_MCP}, {"MS", ATTENTION_SINGLE, OSAL_NULL, BLOCK_MS}, {"UMAC", ATTENTION_SINGLE, OSAL_NULL, BLOCK_UMAC}, {"LED", ATTENTION_SINGLE, OSAL_NULL, BLOCK_LED}, {"BMBN", ATTENTION_SINGLE, OSAL_NULL, BLOCK_BMBN}, - {"NIG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NIG}, - {"BMB/OPTE/MCP", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB}, + {"NIG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NIG}, {"BMB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB}, - {"BTB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BTB}, - {"BRB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BRB}, - {"PRS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRS}, - } - }, + {"BTB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BTB}, + {"BRB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BRB}, + {"PRS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRS}, + } + }, { - { /* After Invert 5 */ - {"SRC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_SRC}, - {"PB Client1", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB1}, - {"PB Client2", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB2}, - {"RPB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RPB}, - {"PBF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF}, - {"QM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_QM}, - {"TM", ATTENTION_PAR_INT, ecore_tm_attn_cb, BLOCK_TM}, - {"MCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MCM}, - {"MSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSDM}, - {"MSEM", 
ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSEM}, - {"PCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PCM}, - {"PSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSDM}, - {"PSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSEM}, - {"TCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TCM}, - {"TSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TSDM}, - {"TSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TSEM}, - } - }, + { /* After Invert 5 */ + {"SRC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_SRC}, + {"PB Client1", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB1}, + {"PB Client2", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB2}, + {"RPB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RPB}, + {"PBF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF}, + {"QM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_QM}, + {"TM", ATTENTION_PAR_INT, ecore_tm_attn_cb, BLOCK_TM}, + {"MCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MCM}, + {"MSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSDM}, + {"MSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSEM}, + {"PCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PCM}, + {"PSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSDM}, + {"PSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSEM}, + {"TCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TCM}, + {"TSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TSDM}, + {"TSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TSEM}, + } + }, { - { /* After Invert 6 */ - {"UCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_UCM}, - {"USDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USDM}, - {"USEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USEM}, - {"XCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XCM}, - {"XSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSDM}, - {"XSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSEM}, - {"YCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YCM}, - {"YSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSDM}, - {"YSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSEM}, - {"XYLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XYLD}, - {"TMLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TMLD}, - {"MYLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MULD}, - {"YULD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YULD}, - {"DORQ", ATTENTION_PAR_INT, ecore_dorq_attn_cb, BLOCK_DORQ}, - {"DBG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DBG}, - {"IPC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IPC}, - } - }, + { /* After Invert 6 */ + {"UCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_UCM}, + {"USDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USDM}, + {"USEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USEM}, + {"XCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XCM}, + {"XSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSDM}, + {"XSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSEM}, + {"YCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YCM}, + {"YSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSDM}, + {"YSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSEM}, + {"XYLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XYLD}, + {"TMLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TMLD}, + {"MYLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MULD}, + {"YULD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YULD}, + {"DORQ", ATTENTION_PAR_INT, ecore_dorq_attn_cb, BLOCK_DORQ}, + {"DBG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DBG}, + {"IPC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IPC}, + } + }, { - { /* After Invert 7 */ - {"CCFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CCFC}, - {"CDU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CDU}, - {"DMAE", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DMAE}, - {"IGU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IGU}, - {"ATC", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID}, - {"CAU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CAU}, - {"PTU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PTU}, - {"PRM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRM}, - {"TCFC", ATTENTION_PAR_INT, 
OSAL_NULL, BLOCK_TCFC}, - {"RDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RDIF}, - {"TDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TDIF}, - {"RSS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RSS}, - {"MISC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISC}, - {"MISCS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISCS}, - {"PCIE", ATTENTION_PAR, OSAL_NULL, BLOCK_PCIE}, - {"Vaux PCI core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, - {"PSWRQ", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ}, - } - }, + { /* After Invert 7 */ + {"CCFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CCFC}, + {"CDU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CDU}, + {"DMAE", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DMAE}, + {"IGU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IGU}, + {"ATC", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID}, + {"CAU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CAU}, + {"PTU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PTU}, + {"PRM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRM}, + {"TCFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TCFC}, + {"RDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RDIF}, + {"TDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TDIF}, + {"RSS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RSS}, + {"MISC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISC}, + {"MISCS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISCS}, + {"PCIE", ATTENTION_PAR, OSAL_NULL, BLOCK_PCIE}, + {"Vaux PCI core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, + {"PSWRQ", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ}, + } + }, { - { /* After Invert 8 */ - {"PSWRQ (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ2}, - {"PSWWR", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR}, - {"PSWWR (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR2}, - {"PSWRD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD}, - {"PSWRD (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD2}, - {"PSWHST", ATTENTION_PAR_INT, ecore_pswhst_attn_cb, BLOCK_PSWHST}, - {"PSWHST (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWHST2}, - {"GRC", ATTENTION_PAR_INT, ecore_grc_attn_cb, BLOCK_GRC}, - {"CPMU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CPMU}, - {"NCSI", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NCSI}, - {"MSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, - {"PSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, - {"TSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, - {"USEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, - {"XSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, - {"YSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, - {"pxp_misc_mps", ATTENTION_PAR, OSAL_NULL, BLOCK_PGLCS}, - {"PCIE glue/PXP Exp. 
ROM", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, - {"PERST_B assertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"PERST_B deassertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, - {"Reserved %d", (2 << ATTENTION_LENGTH_SHIFT), OSAL_NULL, - MAX_BLOCK_ID}, - } - }, + { /* After Invert 8 */ + {"PSWRQ (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ2}, + {"PSWWR", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR}, + {"PSWWR (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR2}, + {"PSWRD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD}, + {"PSWRD (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD2}, + {"PSWHST", ATTENTION_PAR_INT, ecore_pswhst_attn_cb, BLOCK_PSWHST}, + {"PSWHST (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWHST2}, + {"GRC", ATTENTION_PAR_INT, ecore_grc_attn_cb, BLOCK_GRC}, + {"CPMU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CPMU}, + {"NCSI", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NCSI}, + {"MSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, + {"PSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, + {"TSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, + {"USEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, + {"XSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, + {"YSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, + {"pxp_misc_mps", ATTENTION_PAR, OSAL_NULL, BLOCK_PGLCS}, + {"PCIE glue/PXP Exp. ROM", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, + {"PERST_B assertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"PERST_B deassertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"Reserved %d", FIELD_VALUE(ATTENTION_LENGTH, 2), OSAL_NULL, MAX_BLOCK_ID }, + } + }, { - { /* After Invert 9 */ - {"MCP Latched memory", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, - {"MCP Latched scratchpad cache", ATTENTION_SINGLE, OSAL_NULL, - MAX_BLOCK_ID}, - {"AVS", ATTENTION_PAR | ATTENTION_BB_DIFFERENT | - ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_UMP_TX), OSAL_NULL, - BLOCK_AVS_WRAP}, - {"AVS", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT | - ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_SCPAD), OSAL_NULL, - BLOCK_AVS_WRAP}, - {"PCIe core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, - {"PCIe link up", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, - {"PCIe hot reset", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, - {"Reserved %d", (9 << ATTENTION_LENGTH_SHIFT), OSAL_NULL, - MAX_BLOCK_ID}, - } - }, + { /* After Invert 9 */ + {"MCP Latched memory", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID}, + {"MCP Latched scratchpad cache", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID}, + {"AVS", ATTENTION_PAR | ATTENTION_BB_DIFFERENT | + ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_UMP_TX), OSAL_NULL, + BLOCK_AVS_WRAP}, + {"AVS", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT | + ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_SCPAD), OSAL_NULL, + BLOCK_AVS_WRAP}, + {"PCIe core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, + {"PCIe link up", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, + {"PCIe hot reset", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS}, + {"YPLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YPLD}, + {"PTLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PTLD}, + {"RGFS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RGFS}, + {"TGFS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TGFS}, + {"RGSRC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RGSRC}, + {"TGSRC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TGSRC}, + {"RGFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RGFC}, + {"TGFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TGFC}, + {"Reserved %d", FIELD_VALUE(ATTENTION_LENGTH, 9), OSAL_NULL, MAX_BLOCK_ID}, + } + }, + { + { /* After Invert 10 */ + {"BMB_PD", ATTENTION_SINGLE, OSAL_NULL, 
BLOCK_BMB_PD}, + {"CFC_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_CFC_PD}, + {"MS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_MS_PD}, + {"HOST_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_HOST_PD}, + {"NMC_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_NMC_PD}, + {"NM_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_NM_PD}, + {"NW_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_NW_PD}, + {"PS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PS_PD}, + {"PXP_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PXP_PD}, + {"QM_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_QM_PD}, + {"TS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_TS_PD}, + {"TX_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_TX_PD}, + {"US_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_US_PD}, + {"XS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_XS_PD}, + {"YS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_YS_PD}, + {"RX_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_RX_PD}, + {"Reserved %d", FIELD_VALUE(ATTENTION_LENGTH, 16), OSAL_NULL, MAX_BLOCK_ID}, + } + }, }; static struct aeu_invert_reg_bit * @@ -823,8 +1001,7 @@ ecore_int_aeu_translate(struct ecore_hwfn *p_hwfn, if (!(p_bit->flags & ATTENTION_BB_DIFFERENT)) return p_bit; - return &aeu_descs_special[(p_bit->flags & ATTENTION_BB_MASK) >> - ATTENTION_BB_SHIFT]; + return &aeu_descs_special[GET_FIELD(p_bit->flags, ATTENTION_BB)]; } static bool ecore_int_is_parity_flag(struct ecore_hwfn *p_hwfn, @@ -838,23 +1015,23 @@ static bool ecore_int_is_parity_flag(struct ecore_hwfn *p_hwfn, #define ATTN_BITS_MASKABLE (0x3ff) struct ecore_sb_attn_info { /* Virtual & Physical address of the SB */ - struct atten_status_block *sb_attn; - dma_addr_t sb_phys; + struct atten_status_block *sb_attn; + dma_addr_t sb_phys; /* Last seen running index */ - u16 index; + u16 index; /* A mask of the AEU bits resulting in a parity error */ - u32 parity_mask[NUM_ATTN_REGS]; + u32 parity_mask[MAX_NUM_ATTN_REGS]; /* A pointer to the attention description structure */ - struct aeu_invert_reg *p_aeu_desc; + struct aeu_invert_reg *p_aeu_desc; /* Previously asserted attentions, which are still unasserted */ - u16 known_attn; + u16 known_attn; /* Cleanup address for the link's general hw attention */ - u32 mfw_attn_addr; + u32 mfw_attn_addr; }; static u16 ecore_attn_update_idx(struct ecore_hwfn *p_hwfn, @@ -912,10 +1089,11 @@ static enum _ecore_status_t ecore_int_assertion(struct ecore_hwfn *p_hwfn, /* FIXME - this will change once we'll have GOOD gtt definitions */ DIRECT_REG_WR(p_hwfn, - (u8 OSAL_IOMEM *) p_hwfn->regview + + (u8 OSAL_IOMEM *)p_hwfn->regview + GTT_BAR0_MAP_REG_IGU_CMD + ((IGU_CMD_ATTN_BIT_SET_UPPER - - IGU_CMD_INT_ACK_BASE) << 3), (u32)asserted_bits); + IGU_CMD_INT_ACK_BASE) << 3), + (u32)asserted_bits); DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "set cmd IGU: 0x%04x\n", asserted_bits); @@ -923,12 +1101,24 @@ static enum _ecore_status_t ecore_int_assertion(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } +/* @DPDK */ static void ecore_int_attn_print(struct ecore_hwfn *p_hwfn, enum block_id id, enum dbg_attn_type type, bool b_clear) { - /* @DPDK */ - DP_NOTICE(p_hwfn->p_dev, false, "[block_id %d type %d]\n", id, type); + struct dbg_attn_block_result attn_results; + enum dbg_status status; + + OSAL_MEMSET(&attn_results, 0, sizeof(attn_results)); + + status = qed_dbg_read_attn(p_hwfn, p_hwfn->p_dpc_ptt, id, type, + b_clear, &attn_results); + if (status != DBG_STATUS_OK) + DP_NOTICE(p_hwfn, true, + "Failed to parse attention information [status: %s]\n", + qed_dbg_get_status_str(status)); + else + qed_dbg_parse_attn(p_hwfn, &attn_results); } /** @@ -967,23 +1157,24 @@ 
ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn, b_fatal = true; /* Print HW block interrupt registers */ - if (p_aeu->block_index != MAX_BLOCK_ID) { + if (p_aeu->block_index != MAX_BLOCK_ID) ecore_int_attn_print(p_hwfn, p_aeu->block_index, ATTN_TYPE_INTERRUPT, !b_fatal); -} - /* @DPDK */ /* Reach assertion if attention is fatal */ - if (b_fatal || (strcmp(p_bit_name, "PGLUE B RBC") == 0)) { + if (b_fatal) { + u8 str[ECORE_HW_ERR_MAX_STR_SIZE]; + + OSAL_SNPRINTF((char *)str, ECORE_HW_ERR_MAX_STR_SIZE, + "`%s': Fatal attention\n", p_bit_name); #ifndef ASIC_ONLY - DP_NOTICE(p_hwfn, !CHIP_REV_IS_EMUL(p_hwfn->p_dev), - "`%s': Fatal attention\n", p_bit_name); + DP_NOTICE(p_hwfn, !CHIP_REV_IS_EMUL(p_hwfn->p_dev), "%s", str); #else - DP_NOTICE(p_hwfn, true, "`%s': Fatal attention\n", - p_bit_name); + DP_NOTICE(p_hwfn, true, "%s", str); #endif - - ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN); + ecore_hw_err_notify(p_hwfn, p_hwfn->p_dpc_ptt, + ECORE_HW_ERR_HW_ATTN, str, + OSAL_STRLEN((char *)str) + 1); } /* Prevent this Attention from being asserted in the future */ @@ -996,7 +1187,7 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn, u32 mask = ~bitmask; val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg); ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg, (val & mask)); - DP_ERR(p_hwfn, "`%s' - Disabled future attentions\n", + DP_INFO(p_hwfn, "`%s' - Disabled future attentions\n", p_bit_name); } @@ -1042,11 +1233,15 @@ static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn, } #define MISC_REG_AEU_AFTER_INVERT_IGU(n) \ - (MISC_REG_AEU_AFTER_INVERT_1_IGU + (n) * 0x4) + ((n) < NUM_ATTN_REGS_E4 ? \ + MISC_REG_AEU_AFTER_INVERT_1_IGU + (n) * 0x4 : \ + MISC_REG_AEU_AFTER_INVERT_10_IGU_E5) #define MISC_REG_AEU_ENABLE_IGU_OUT(n, group) \ - (MISC_REG_AEU_ENABLE1_IGU_OUT_0 + (n) * 0x4 + \ - (group) * 0x4 * NUM_ATTN_REGS) + ((n) < NUM_ATTN_REGS_E4 ? \ + (MISC_REG_AEU_ENABLE1_IGU_OUT_0 + (n) * 0x4 + \ + (group) * 0x4 * NUM_ATTN_REGS_E4) : \ + (MISC_REG_AEU_ENABLE10_IGU_OUT_0_E5 + (group) * 0x4)) /** * @brief - handles deassertion of previously asserted attentions. 
@@ -1059,21 +1254,22 @@ static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn, static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn, u16 deasserted_bits) { + u8 i, j, k, bit_idx, num_attn_regs = NUM_ATTN_REGS(p_hwfn); struct ecore_sb_attn_info *sb_attn_sw = p_hwfn->p_sb_attn; - u32 aeu_inv_arr[NUM_ATTN_REGS], aeu_mask, aeu_en, en; - u8 i, j, k, bit_idx; + u32 aeu_inv_arr[MAX_NUM_ATTN_REGS], aeu_mask, aeu_en, en; enum _ecore_status_t rc = ECORE_SUCCESS; /* Read the attention registers in the AEU */ - for (i = 0; i < NUM_ATTN_REGS; i++) { + for (i = 0; i < num_attn_regs; i++) { aeu_inv_arr[i] = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, MISC_REG_AEU_AFTER_INVERT_IGU(i)); DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, - "Deasserted bits [%d]: %08x\n", i, aeu_inv_arr[i]); + "Deasserted bits [%d]: %08x\n", + i, aeu_inv_arr[i]); } /* Handle parity attentions first */ - for (i = 0; i < NUM_ATTN_REGS; i++) { + for (i = 0; i < num_attn_regs; i++) { struct aeu_invert_reg *p_aeu = &sb_attn_sw->p_aeu_desc[i]; u32 parities; @@ -1105,7 +1301,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn, if (!(deasserted_bits & (1 << k))) continue; - for (i = 0; i < NUM_ATTN_REGS; i++) { + for (i = 0; i < num_attn_regs; i++) { u32 bits; aeu_en = MISC_REG_AEU_ENABLE_IGU_OUT(i, k); @@ -1121,11 +1317,12 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn, * previous assertion. */ for (j = 0, bit_idx = 0; bit_idx < 32; j++) { - unsigned long int bitmask; + unsigned long bitmask; u8 bit, bit_len; /* Need to account bits with changed meaning */ p_aeu = &sb_attn_sw->p_aeu_desc[i].bits[j]; + p_aeu = ecore_int_aeu_translate(p_hwfn, p_aeu); bit = bit_idx; bit_len = ATTENTION_LENGTH(p_aeu->flags); @@ -1160,9 +1357,9 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn, p_aeu->bit_name, num); else - strlcpy(bit_name, - p_aeu->bit_name, - sizeof(bit_name)); + OSAL_STRLCPY(bit_name, + p_aeu->bit_name, + 30); /* We now need to pass bitmask in its * correct position. 
@@ -1182,13 +1379,17 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn, } } + /* Handle missed DORQ attention */ + ecore_dorq_attn_handler(p_hwfn); + /* Clear IGU indication for the deasserted bits */ /* FIXME - this will change once we'll have GOOD gtt definitions */ DIRECT_REG_WR(p_hwfn, - (u8 OSAL_IOMEM *) p_hwfn->regview + + (u8 OSAL_IOMEM *)p_hwfn->regview + GTT_BAR0_MAP_REG_IGU_CMD + ((IGU_CMD_ATTN_BIT_CLR_UPPER - - IGU_CMD_INT_ACK_BASE) << 3), ~((u32)deasserted_bits)); + IGU_CMD_INT_ACK_BASE) << 3), + ~((u32)deasserted_bits)); /* Unmask deasserted attentions in IGU */ aeu_mask = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, @@ -1215,6 +1416,8 @@ static enum _ecore_status_t ecore_int_attentions(struct ecore_hwfn *p_hwfn) */ do { index = OSAL_LE16_TO_CPU(p_sb_attn->sb_index); + /* Make sure reading index is not optimized out */ + OSAL_RMB(p_hwfn->p_dev); attn_bits = OSAL_LE32_TO_CPU(p_sb_attn->atten_bits); attn_acks = OSAL_LE32_TO_CPU(p_sb_attn->atten_ack); } while (index != OSAL_LE16_TO_CPU(p_sb_attn->sb_index)); @@ -1226,9 +1429,9 @@ static enum _ecore_status_t ecore_int_attentions(struct ecore_hwfn *p_hwfn) * attention with no previous attention */ asserted_bits = (attn_bits & ~attn_acks & ATTN_STATE_BITS) & - ~p_sb_attn_sw->known_attn; + ~p_sb_attn_sw->known_attn; deasserted_bits = (~attn_bits & attn_acks & ATTN_STATE_BITS) & - p_sb_attn_sw->known_attn; + p_sb_attn_sw->known_attn; if ((asserted_bits & ~0x100) || (deasserted_bits & ~0x100)) DP_INFO(p_hwfn, @@ -1236,7 +1439,8 @@ static enum _ecore_status_t ecore_int_attentions(struct ecore_hwfn *p_hwfn) index, attn_bits, attn_acks, asserted_bits, deasserted_bits, p_sb_attn_sw->known_attn); else if (asserted_bits == 0x100) - DP_INFO(p_hwfn, "MFW indication via attention\n"); + DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, + "MFW indication via attention\n"); else DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "MFW indication [deassertion]\n"); @@ -1259,12 +1463,14 @@ static void ecore_sb_ack_attn(struct ecore_hwfn *p_hwfn, struct igu_prod_cons_update igu_ack; OSAL_MEMSET(&igu_ack, 0, sizeof(struct igu_prod_cons_update)); - igu_ack.sb_id_and_flags = - ((ack_cons << IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT) | - (1 << IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT) | - (IGU_INT_NOP << IGU_PROD_CONS_UPDATE_ENABLE_INT_SHIFT) | - (IGU_SEG_ACCESS_ATTN << - IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_SHIFT)); + SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_SB_INDEX, + ack_cons); + SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_UPDATE_FLAG, + 1); + SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_ENABLE_INT, + IGU_INT_NOP); + SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS, + IGU_SEG_ACCESS_ATTN); DIRECT_REG_WR(p_hwfn, igu_addr, igu_ack.sb_id_and_flags); @@ -1293,8 +1499,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie) sb_info = &p_hwfn->p_sp_sb->sb_info; if (!sb_info) { - DP_ERR(p_hwfn->p_dev, - "Status block is NULL - cannot ack interrupts\n"); + DP_ERR(p_hwfn->p_dev, "Status block is NULL - cannot ack interrupts\n"); return; } @@ -1302,7 +1507,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie) DP_ERR(p_hwfn->p_dev, "DPC called - no p_sb_attn"); return; } - sb_attn = p_hwfn->p_sb_attn; + sb_attn = p_hwfn->p_sb_attn; DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "DPC Called! 
(hwfn %p %d)\n", p_hwfn, p_hwfn->my_id); @@ -1315,8 +1520,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie) /* Gather Interrupts/Attentions information */ if (!sb_info->sb_virt) { DP_ERR(p_hwfn->p_dev, - "Interrupt Status block is NULL -" - " cannot check for new interrupts!\n"); + "Interrupt Status block is NULL - cannot check for new interrupts!\n"); } else { u32 tmp_index = sb_info->sb_ack; rc = ecore_sb_update_sb_idx(sb_info); @@ -1326,9 +1530,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie) } if (!sb_attn || !sb_attn->sb_attn) { - DP_ERR(p_hwfn->p_dev, - "Attentions Status block is NULL -" - " cannot check for new attentions!\n"); + DP_ERR(p_hwfn->p_dev, "Attentions Status block is NULL - cannot check for new attentions!\n"); } else { u16 tmp_index = sb_attn->index; @@ -1344,8 +1546,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie) return; } -/* Check the validity of the DPC ptt. If not ack interrupts and fail */ - + /* Check the validity of the DPC ptt. If not ack interrupts and fail */ if (!p_hwfn->p_dpc_ptt) { DP_NOTICE(p_hwfn->p_dev, true, "Failed to allocate PTT\n"); ecore_sb_ack(sb_info, IGU_INT_ENABLE, 1); @@ -1392,7 +1593,9 @@ static void ecore_int_sb_attn_free(struct ecore_hwfn *p_hwfn) p_sb->sb_phys, SB_ATTN_ALIGNED_SIZE(p_hwfn)); } + OSAL_FREE(p_hwfn->p_dev, p_sb); + p_hwfn->p_sb_attn = OSAL_NULL; } static void ecore_int_sb_attn_setup(struct ecore_hwfn *p_hwfn, @@ -1414,10 +1617,11 @@ static void ecore_int_sb_attn_setup(struct ecore_hwfn *p_hwfn, static void ecore_int_sb_attn_init(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, - void *sb_virt_addr, dma_addr_t sb_phy_addr) + void *sb_virt_addr, + dma_addr_t sb_phy_addr) { struct ecore_sb_attn_info *sb_info = p_hwfn->p_sb_attn; - int i, j, k; + int i, j, k, num_attn_regs = NUM_ATTN_REGS(p_hwfn); sb_info->sb_attn = sb_virt_addr; sb_info->sb_phys = sb_phy_addr; @@ -1426,8 +1630,8 @@ static void ecore_int_sb_attn_init(struct ecore_hwfn *p_hwfn, sb_info->p_aeu_desc = aeu_descs; /* Calculate Parity Masks */ - OSAL_MEMSET(sb_info->parity_mask, 0, sizeof(u32) * NUM_ATTN_REGS); - for (i = 0; i < NUM_ATTN_REGS; i++) { + OSAL_MEMSET(sb_info->parity_mask, 0, sizeof(u32) * MAX_NUM_ATTN_REGS); + for (i = 0; i < num_attn_regs; i++) { /* j is array index, k is bit index */ for (j = 0, k = 0; k < 32; j++) { struct aeu_invert_reg_bit *p_aeu; @@ -1445,7 +1649,7 @@ static void ecore_int_sb_attn_init(struct ecore_hwfn *p_hwfn, /* Set the address of cleanup for the mcp attention */ sb_info->mfw_attn_addr = (p_hwfn->rel_pf_id << 3) + - MISC_REG_AEU_GENERAL_ATTN_0; + MISC_REG_AEU_GENERAL_ATTN_0; ecore_int_sb_attn_setup(p_hwfn, p_ptt); } @@ -1590,25 +1794,23 @@ void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn, /* Wide-bus, initialize via DMAE */ u64 phys_addr = (u64)sb_phys; - ecore_dmae_host2grc(p_hwfn, p_ptt, - (u64)(osal_uintptr_t)&phys_addr, + ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&phys_addr, CAU_REG_SB_ADDR_MEMORY + igu_sb_id * sizeof(u64), 2, OSAL_NULL /* default parameters */); - ecore_dmae_host2grc(p_hwfn, p_ptt, - (u64)(osal_uintptr_t)&sb_entry, + ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&sb_entry, CAU_REG_SB_VAR_MEMORY + igu_sb_id * sizeof(u64), 2, OSAL_NULL /* default parameters */); } else { /* Initialize Status Block Address */ STORE_RT_REG_AGG(p_hwfn, - CAU_REG_SB_ADDR_MEMORY_RT_OFFSET + - igu_sb_id * 2, sb_phys); + CAU_REG_SB_ADDR_MEMORY_RT_OFFSET + igu_sb_id * 2, + sb_phys); STORE_RT_REG_AGG(p_hwfn, - CAU_REG_SB_VAR_MEMORY_RT_OFFSET + - igu_sb_id * 2, sb_entry); + 
CAU_REG_SB_VAR_MEMORY_RT_OFFSET + igu_sb_id * 2, + sb_entry); } /* Configure pi coalescing if set */ @@ -1650,7 +1852,8 @@ void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn, } void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, struct ecore_sb_info *sb_info) + struct ecore_ptt *p_ptt, + struct ecore_sb_info *sb_info) { /* zero status block and ack counter */ sb_info->sb_ack = 0; @@ -1734,7 +1937,8 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_sb_info *sb_info, void *sb_virt_addr, - dma_addr_t sb_phy_addr, u16 sb_id) + dma_addr_t sb_phy_addr, + u16 sb_id) { struct ecore_dev *p_dev = p_hwfn->p_dev; @@ -1764,7 +1968,7 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn, /* Let the igu info reference the client's SB info */ if (sb_id != ECORE_SP_SB_ID) { - if (IS_PF(p_hwfn->p_dev)) { + if (IS_PF(p_dev)) { struct ecore_igu_info *p_info; struct ecore_igu_block *p_block; @@ -1778,24 +1982,51 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn, ecore_vf_set_sb_info(p_hwfn, sb_id, sb_info); } } + #ifdef ECORE_CONFIG_DIRECT_HWFN sb_info->p_hwfn = p_hwfn; #endif - sb_info->p_dev = p_hwfn->p_dev; + sb_info->p_dev = p_dev; /* The igu address will hold the absolute address that needs to be * written to for a specific status block */ - if (IS_PF(p_hwfn->p_dev)) + if (IS_PF(p_dev)) { + u32 igu_cmd_gtt_win = ECORE_IS_E4(p_dev) ? + GTT_BAR0_MAP_REG_IGU_CMD : + GTT_BAR0_MAP_REG_IGU_CMD_EXT_E5; + sb_info->igu_addr = (u8 OSAL_IOMEM *)p_hwfn->regview + - GTT_BAR0_MAP_REG_IGU_CMD + - (sb_info->igu_sb_id << 3); + igu_cmd_gtt_win + + (sb_info->igu_sb_id << 3); + } else { + u32 igu_bar_offset, int_ack_base; + + if (ECORE_IS_E4(p_dev)) { + igu_bar_offset = PXP_VF_BAR0_START_IGU; + int_ack_base = IGU_CMD_INT_ACK_BASE; + } else { + igu_bar_offset = PXP_VF_BAR0_START_IGU2; + int_ack_base = IGU_CMD_INT_ACK_E5_BASE - + (IGU_CMD_INT_ACK_RESERVED_UPPER + 1); + + /* Force emulator to use legacy IGU for lower SBs */ + if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) { + int_ack_base = IGU_CMD_INT_ACK_BASE; + if (sb_info->igu_sb_id < + MAX_SB_PER_PATH_E5_BC_MODE) + igu_bar_offset = PXP_VF_BAR0_START_IGU; + else + igu_bar_offset = + PXP_VF_BAR0_START_IGU2 - + PXP_VF_BAR0_IGU_LENGTH; + } + } - else sb_info->igu_addr = (u8 OSAL_IOMEM *)p_hwfn->regview + - PXP_VF_BAR0_START_IGU + - ((IGU_CMD_INT_ACK_BASE + - sb_info->igu_sb_id) << 3); + igu_bar_offset + + ((int_ack_base + sb_info->igu_sb_id) << 3); + } sb_info->flags |= ECORE_SB_INFO_INIT; @@ -1855,6 +2086,7 @@ static void ecore_int_sp_sb_free(struct ecore_hwfn *p_hwfn) } OSAL_FREE(p_hwfn->p_dev, p_sb); + p_hwfn->p_sp_sb = OSAL_NULL; } static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn, @@ -1874,7 +2106,8 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn, /* SB ring */ p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev, - &p_phys, SB_ALIGNED_SIZE(p_hwfn)); + &p_phys, + SB_ALIGNED_SIZE(p_hwfn)); if (!p_virt) { DP_NOTICE(p_hwfn, false, "Failed to allocate status block\n"); OSAL_FREE(p_hwfn->p_dev, p_sb); @@ -1892,24 +2125,42 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } +enum _ecore_status_t +ecore_int_dummy_comp_cb(struct ecore_hwfn OSAL_UNUSED * p_hwfn, + void OSAL_UNUSED * cookie) +{ + /* Empty completion callback to avoid a race */ + return ECORE_SUCCESS; +} + enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn, ecore_int_comp_cb_t comp_cb, void
*cookie, - u8 *sb_idx, __le16 **p_fw_cons) + u8 *sb_idx, + __le16 **p_fw_cons) { - struct ecore_sb_sp_info *p_sp_sb = p_hwfn->p_sp_sb; + struct ecore_sb_sp_info *p_sp_sb = p_hwfn->p_sp_sb; enum _ecore_status_t rc = ECORE_NOMEM; u8 pi; /* Look for a free index */ for (pi = 0; pi < p_sp_sb->pi_info_arr_size; pi++) { - if (p_sp_sb->pi_info_arr[pi].comp_cb != OSAL_NULL) + if ((p_sp_sb->pi_info_arr[pi].comp_cb != OSAL_NULL) && + (p_sp_sb->pi_info_arr[pi].comp_cb != p_hwfn->p_dummy_cb)) continue; - p_sp_sb->pi_info_arr[pi].comp_cb = comp_cb; p_sp_sb->pi_info_arr[pi].cookie = cookie; *sb_idx = pi; *p_fw_cons = &p_sp_sb->sb_info.sb_pi_array[pi]; + **p_fw_cons = 0; + + /* The callback might be called in parallel from + * ecore_int_sp_dpc(), so make sure that the cookie is set and + * that the PI producer is zeroed before setting the callback. + */ + OSAL_BARRIER(p_hwfn->p_dev); + p_sp_sb->pi_info_arr[pi].comp_cb = comp_cb; + rc = ECORE_SUCCESS; break; } @@ -1917,15 +2168,19 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn, return rc; } -enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, u8 pi) +enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, + u8 pi) { struct ecore_sb_sp_info *p_sp_sb = p_hwfn->p_sp_sb; if (p_sp_sb->pi_info_arr[pi].comp_cb == OSAL_NULL) return ECORE_NOMEM; - p_sp_sb->pi_info_arr[pi].comp_cb = OSAL_NULL; - p_sp_sb->pi_info_arr[pi].cookie = OSAL_NULL; + /* In order to prevent damage in a possible race, set a dummy callback + * and don't clear the cookie, instead of setting the table entry to NULL + */ + p_sp_sb->pi_info_arr[pi].comp_cb = p_hwfn->p_dummy_cb; + return ECORE_SUCCESS; } @@ -1935,7 +2190,7 @@ u16 ecore_int_get_sp_sb_id(struct ecore_hwfn *p_hwfn) { } void ecore_int_igu_enable_int(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, + struct ecore_ptt *p_ptt, enum ecore_int_mode int_mode) { u32 igu_pf_conf = IGU_PF_CONF_FUNC_EN | IGU_PF_CONF_ATTN_BIT_EN; @@ -1974,8 +2229,7 @@ static void ecore_int_igu_enable_attn(struct ecore_hwfn *p_hwfn, { #ifndef ASIC_ONLY if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) { - DP_INFO(p_hwfn, - "FPGA - Don't enable Attentions in IGU and MISC\n"); + DP_INFO(p_hwfn, "FPGA - Don't enable Attentions in IGU and MISC\n"); return; } #endif @@ -2004,8 +2258,7 @@ ecore_int_igu_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, if ((int_mode != ECORE_INT_MODE_INTA) || IS_LEAD_HWFN(p_hwfn)) { rc = OSAL_SLOWPATH_IRQ_REQ(p_hwfn); if (rc != ECORE_SUCCESS) { - DP_NOTICE(p_hwfn, true, - "Slowpath IRQ request failed\n"); + DP_NOTICE(p_hwfn, true, "Slowpath IRQ request failed\n"); return ECORE_NORESOURCES; } p_hwfn->b_int_requested = true; @@ -2019,8 +2272,8 @@ ecore_int_igu_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, return rc; } -void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { p_hwfn->b_int_enabled = 0; @@ -2033,12 +2286,13 @@ void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn, #define IGU_CLEANUP_SLEEP_LENGTH (1000) static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, - u32 igu_sb_id, + u16 igu_sb_id, bool cleanup_set, u16 opaque_fid) { - u32 data = 0, cmd_ctrl = 0, sb_bit, sb_bit_addr, pxp_addr; + u32 data = 0, cmd_ctrl = 0, sb_bit, sb_bit_addr; u32 sleep_cnt = IGU_CLEANUP_SLEEP_LENGTH, val; + u32 pxp_addr, int_ack_base; u8 type = 0; OSAL_BUILD_BUG_ON((IGU_REG_CLEANUP_STATUS_4 - @@ -2054,7 +2308,9 @@ static void
ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn, SET_FIELD(data, IGU_CLEANUP_COMMAND_TYPE, IGU_COMMAND_TYPE_SET); /* Set the control register */ - pxp_addr = IGU_CMD_INT_ACK_BASE + igu_sb_id; + int_ack_base = ECORE_IS_E4(p_hwfn->p_dev) ? IGU_CMD_INT_ACK_BASE + : IGU_CMD_INT_ACK_E5_BASE; + pxp_addr = int_ack_base + igu_sb_id; SET_FIELD(cmd_ctrl, IGU_CTRL_REG_PXP_ADDR, pxp_addr); SET_FIELD(cmd_ctrl, IGU_CTRL_REG_FID, opaque_fid); SET_FIELD(cmd_ctrl, IGU_CTRL_REG_TYPE, IGU_CTRL_CMD_TYPE_WR);
@@ -2136,8 +2392,9 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn, } void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - bool b_set, bool b_slowpath) + struct ecore_ptt *p_ptt, + bool b_set, + bool b_slowpath) { struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info; struct ecore_igu_block *p_block;
@@ -2344,10 +2601,11 @@ static void ecore_int_igu_read_cam_block(struct ecore_hwfn *p_hwfn, p_block = &p_hwfn->hw_info.p_igu_info->entry[igu_sb_id]; /* Fill the block information */ - p_block->function_id = GET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER); + p_block->function_id = GET_FIELD(val, + IGU_MAPPING_LINE_FUNCTION_NUMBER); p_block->is_pf = GET_FIELD(val, IGU_MAPPING_LINE_PF_VALID); - p_block->vector_number = GET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER); - + p_block->vector_number = GET_FIELD(val, + IGU_MAPPING_LINE_VECTOR_NUMBER); p_block->igu_sb_id = igu_sb_id; }
@@ -2507,6 +2765,12 @@ ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, return ECORE_INVAL; } + if (p_block == OSAL_NULL) { + DP_VERBOSE(p_hwfn, (ECORE_MSG_INTR | ECORE_MSG_IOV), + "SB address (p_block) is NULL\n"); + return ECORE_INVAL; + } + /* At this point, p_block points to the SB we want to relocate */ if (b_to_vf) { p_block->status &= ~ECORE_IGU_STATUS_PF;
@@ -2546,6 +2810,11 @@ ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, p_hwfn->hw_info.feat_num[ECORE_VF_L2_QUE]--; } + /* clean up the PF's SB before assigning it to the VF */ + if (b_to_vf) + ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 1, + p_hwfn->hw_info.opaque_fid); + /* Update the IGU and CAU with the new configuration */ SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER, p_block->function_id);
@@ -2567,6 +2836,11 @@ ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, igu_sb_id, p_block->function_id, p_block->is_pf, p_block->vector_number); + /* clean up the newly assigned PF's SB */ + if (p_block->is_pf) + ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 1, + p_hwfn->hw_info.opaque_fid); + return ECORE_SUCCESS; }
@@ -2620,6 +2894,7 @@ static enum _ecore_status_t ecore_int_sp_dpc_alloc(struct ecore_hwfn *p_hwfn) static void ecore_int_sp_dpc_free(struct ecore_hwfn *p_hwfn) { OSAL_FREE(p_hwfn->p_dev, p_hwfn->sp_dpc); + p_hwfn->sp_dpc = OSAL_NULL; } enum _ecore_status_t ecore_int_alloc(struct ecore_hwfn *p_hwfn,
@@ -2754,35 +3029,3 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } - -void ecore_pf_flr_igu_cleanup(struct ecore_hwfn *p_hwfn) - { - struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt; - struct ecore_ptt *p_dpc_ptt = ecore_get_reserved_ptt(p_hwfn, - RESERVED_PTT_DPC); - int i; - - /* Do not reorder the following cleanup sequence */ - /* Ack all attentions */ - ecore_wr(p_hwfn, p_ptt, IGU_REG_ATTENTION_ACK_BITS, 0xfff); - - /* Clear driver attention */ - ecore_wr(p_hwfn, p_dpc_ptt, - ((p_hwfn->rel_pf_id << 3) + MISC_REG_AEU_GENERAL_ATTN_0), 0); - - /* Clear per-PF IGU registers to restore
them as if the IGU - * was reset for this PF - */ - ecore_wr(p_hwfn, p_ptt, IGU_REG_LEADING_EDGE_LATCH, 0); - ecore_wr(p_hwfn, p_ptt, IGU_REG_TRAILING_EDGE_LATCH, 0); - ecore_wr(p_hwfn, p_ptt, IGU_REG_PF_CONFIGURATION, 0); - - /* Execute IGU clean up*/ - ecore_wr(p_hwfn, p_ptt, IGU_REG_PF_FUNCTIONAL_CLEANUP, 1); - - /* Clear Stats */ - ecore_wr(p_hwfn, p_ptt, IGU_REG_STATISTIC_NUM_OF_INTA_ASSERTED, 0); - - for (i = 0; i < IGU_REG_PBA_STS_PF_SIZE; i++) - ecore_wr(p_hwfn, p_ptt, IGU_REG_PBA_STS_PF + i * 4, 0); -} diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h index 924c31429..6d72d5ced 100644 --- a/drivers/net/qede/base/ecore_int.h +++ b/drivers/net/qede/base/ecore_int.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_INT_H__ #define __ECORE_INT_H__ @@ -24,15 +24,15 @@ #define ECORE_SB_INVALID_IDX 0xffff struct ecore_igu_block { - u8 status; + u8 status; #define ECORE_IGU_STATUS_FREE 0x01 #define ECORE_IGU_STATUS_VALID 0x02 #define ECORE_IGU_STATUS_PF 0x04 #define ECORE_IGU_STATUS_DSB 0x08 - u8 vector_number; - u8 function_id; - u8 is_pf; + u8 vector_number; + u8 function_id; + u8 is_pf; /* Index inside IGU [meant for back reference] */ u16 igu_sb_id; @@ -91,12 +91,14 @@ u16 ecore_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id); */ struct ecore_igu_block * ecore_get_igu_free_sb(struct ecore_hwfn *p_hwfn, bool b_is_pf); + /* TODO Names of function may change... */ -void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - bool b_set, bool b_slowpath); +void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + bool b_set, + bool b_slowpath); -void ecore_int_igu_init_rt(struct ecore_hwfn *p_hwfn); +void ecore_int_igu_init_rt(struct ecore_hwfn *p_hwfn); /** * @brief ecore_int_igu_read_cam - Reads the IGU CAM. @@ -109,11 +111,11 @@ void ecore_int_igu_init_rt(struct ecore_hwfn *p_hwfn); * * @return enum _ecore_status_t */ -enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt); +enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); -typedef enum _ecore_status_t (*ecore_int_comp_cb_t) (struct ecore_hwfn *p_hwfn, - void *cookie); +typedef enum _ecore_status_t(*ecore_int_comp_cb_t)(struct ecore_hwfn *p_hwfn, + void *cookie); /** * @brief ecore_int_register_cb - Register callback func for * slowhwfn statusblock. @@ -134,10 +136,11 @@ typedef enum _ecore_status_t (*ecore_int_comp_cb_t) (struct ecore_hwfn *p_hwfn, * * @return enum _ecore_status_t */ -enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn, - ecore_int_comp_cb_t comp_cb, - void *cookie, - u8 *sb_idx, __le16 **p_fw_cons); +enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn, + ecore_int_comp_cb_t comp_cb, + void *cookie, + u8 *sb_idx, + __le16 **p_fw_cons); /** * @brief ecore_int_unregister_cb - Unregisters callback * function from sp sb. @@ -149,7 +152,8 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn, * * @return enum _ecore_status_t */ -enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, u8 pi); +enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, + u8 pi); /** * @brief ecore_int_get_sp_sb_id - Get the slowhwfn sb id. 
@@ -187,10 +191,12 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn, * @param vf_number * @param vf_valid */ -void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - dma_addr_t sb_phys, - u16 igu_sb_id, u16 vf_number, u8 vf_valid); +void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + dma_addr_t sb_phys, + u16 igu_sb_id, + u16 vf_number, + u8 vf_valid); /** * @brief ecore_int_alloc @@ -216,7 +222,8 @@ void ecore_int_free(struct ecore_hwfn *p_hwfn); * @param p_hwfn * @param p_ptt */ -void ecore_int_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); +void ecore_int_setup(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); /** * @brief - Enable Interrupt & Attention for hw function @@ -258,6 +265,9 @@ enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, bool is_hw_init); -void ecore_pf_flr_igu_cleanup(struct ecore_hwfn *p_hwfn); + +enum _ecore_status_t +ecore_int_dummy_comp_cb(struct ecore_hwfn OSAL_UNUSED * p_hwfn, + void OSAL_UNUSED * cookie); #endif /* __ECORE_INT_H__ */ diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h index 553d7e6b3..8f82d74e0 100644 --- a/drivers/net/qede/base/ecore_int_api.h +++ b/drivers/net/qede/base/ecore_int_api.h @@ -1,18 +1,33 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_INT_API_H__ #define __ECORE_INT_API_H__ +#include "common_hsi.h" + #ifndef __EXTRACT__LINUX__ #define ECORE_SB_IDX 0x0002 #define RX_PI 0 #define TX_PI(tc) (RX_PI + 1 + tc) +#define LL2_VF_RX_PI_E4 9 +#define LL2_VF_TX_PI_E4 10 +#define LL2_VF_RX_PI_E5 6 +#define LL2_VF_TX_PI_E5 7 + +#define LL2_VF_RX_PI(_p_hwfn) \ + (ECORE_IS_E4((_p_hwfn)->p_dev) ? \ + LL2_VF_RX_PI_E4 : LL2_VF_RX_PI_E5) + +#define LL2_VF_TX_PI(_p_hwfn) \ + (ECORE_IS_E4((_p_hwfn)->p_dev) ? \ + LL2_VF_TX_PI_E4 : LL2_VF_TX_PI_E5) + #ifndef ECORE_INT_MODE #define ECORE_INT_MODE enum ecore_int_mode { @@ -31,7 +46,7 @@ struct ecore_sb_info { #define STATUS_BLOCK_PROD_INDEX_MASK 0xFFFFFF dma_addr_t sb_phys; - u32 sb_ack; /* Last given ack */ + u32 sb_ack; /* Last given ack */ u16 igu_sb_id; void OSAL_IOMEM *igu_addr; u8 flags; @@ -52,22 +67,22 @@ struct ecore_sb_info_dbg { struct ecore_sb_cnt_info { /* Original, current, and free SBs for PF */ - int orig; - int cnt; - int free_cnt; + u32 orig; + u32 cnt; + u32 free_cnt; /* Original, current and free SBS for child VFs */ - int iov_orig; - int iov_cnt; - int free_cnt_iov; + u32 iov_orig; + u32 iov_cnt; + u32 free_cnt_iov; }; static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info) { u32 prod = 0; - u16 rc = 0; + u16 rc = 0; - /* barrier(); status block is written to by the chip */ + /* barrier(); */ /* status block is written to by the chip */ /* FIXME: need some sort of barrier. */ prod = OSAL_LE32_TO_CPU(*sb_info->sb_prod_index) & STATUS_BLOCK_PROD_INDEX_MASK; @@ -81,7 +96,6 @@ static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info) } /** - * * @brief This function creates an update command for interrupts that is * written to the IGU. 
* @@ -98,19 +112,29 @@ static OSAL_INLINE void ecore_sb_ack(struct ecore_sb_info *sb_info, enum igu_int_cmd int_cmd, u8 upd_flg) { struct igu_prod_cons_update igu_ack; +#ifndef ECORE_CONFIG_DIRECT_HWFN + u32 val; +#endif + if (sb_info->p_dev->int_mode == ECORE_INT_MODE_POLL) + return; OSAL_MEMSET(&igu_ack, 0, sizeof(struct igu_prod_cons_update)); - igu_ack.sb_id_and_flags = - ((sb_info->sb_ack << IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT) | - (upd_flg << IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT) | - (int_cmd << IGU_PROD_CONS_UPDATE_ENABLE_INT_SHIFT) | - (IGU_SEG_ACCESS_REG << IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_SHIFT)); + SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_SB_INDEX, + sb_info->sb_ack); + SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_UPDATE_FLAG, + upd_flg); + SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_ENABLE_INT, + int_cmd); + SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS, + IGU_SEG_ACCESS_REG); + igu_ack.sb_id_and_flags = OSAL_CPU_TO_LE32(igu_ack.sb_id_and_flags); #ifdef ECORE_CONFIG_DIRECT_HWFN DIRECT_REG_WR(sb_info->p_hwfn, sb_info->igu_addr, igu_ack.sb_id_and_flags); #else - DIRECT_REG_WR(OSAL_NULL, sb_info->igu_addr, igu_ack.sb_id_and_flags); + val = OSAL_LE32_TO_CPU(igu_ack.sb_id_and_flags); + DIRECT_REG_WR(OSAL_NULL, sb_info->igu_addr, val); #endif /* Both segments (interrupts & acks) are written to same place address; * Need to guarantee all commands will be received (in-order) by HW. @@ -127,46 +151,29 @@ static OSAL_INLINE void __internal_ram_wr(struct ecore_hwfn *p_hwfn, static OSAL_INLINE void __internal_ram_wr(__rte_unused void *p_hwfn, void OSAL_IOMEM *addr, int size, u32 *data) -#endif -{ - unsigned int i; - for (i = 0; i < size / sizeof(*data); i++) - DIRECT_REG_WR(p_hwfn, &((u32 OSAL_IOMEM *)addr)[i], data[i]); -} - -#ifdef ECORE_CONFIG_DIRECT_HWFN -static OSAL_INLINE void __internal_ram_wr_relaxed(struct ecore_hwfn *p_hwfn, - void OSAL_IOMEM * addr, - int size, u32 *data) -#else -static OSAL_INLINE void __internal_ram_wr_relaxed(__rte_unused void *p_hwfn, - void OSAL_IOMEM * addr, - int size, u32 *data) #endif { unsigned int i; for (i = 0; i < size / sizeof(*data); i++) - DIRECT_REG_WR_RELAXED(p_hwfn, &((u32 OSAL_IOMEM *)addr)[i], - data[i]); + DIRECT_REG_WR(p_hwfn, &((u32 OSAL_IOMEM *)addr)[i], data[i]); } #ifdef ECORE_CONFIG_DIRECT_HWFN static OSAL_INLINE void internal_ram_wr(struct ecore_hwfn *p_hwfn, - void OSAL_IOMEM * addr, - int size, u32 *data) + void OSAL_IOMEM *addr, + int size, u32 *data) { - __internal_ram_wr_relaxed(p_hwfn, addr, size, data); + __internal_ram_wr(p_hwfn, addr, size, data); } #else static OSAL_INLINE void internal_ram_wr(void OSAL_IOMEM *addr, - int size, u32 *data) + int size, u32 *data) { - __internal_ram_wr_relaxed(OSAL_NULL, addr, size, data); + __internal_ram_wr(OSAL_NULL, addr, size, data); } #endif - #endif struct ecore_hwfn; @@ -196,7 +203,6 @@ void ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn, u8 timeset); /** - * * @brief ecore_int_igu_enable_int - enable device interrupts * * @param p_hwfn @@ -208,17 +214,15 @@ void ecore_int_igu_enable_int(struct ecore_hwfn *p_hwfn, enum ecore_int_mode int_mode); /** - * * @brief ecore_int_igu_disable_int - disable device interrupts * * @param p_hwfn * @param p_ptt */ void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt); + struct ecore_ptt *p_ptt); /** - * * @brief ecore_int_igu_read_sisr_reg - Reads the single isr multiple dpc * register from igu. 
* @@ -246,11 +250,12 @@ u64 ecore_int_igu_read_sisr_reg(struct ecore_hwfn *p_hwfn); * * @return enum _ecore_status_t */ -enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct ecore_sb_info *sb_info, - void *sb_virt_addr, - dma_addr_t sb_phy_addr, u16 sb_id); +enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_sb_info *sb_info, + void *sb_virt_addr, + dma_addr_t sb_phy_addr, + u16 sb_id); /** * @brief ecore_int_sb_setup - Setup the sb. * @@ -258,8 +263,9 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn, * @param p_ptt * @param sb_info initialized sb_info structure */ -void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, struct ecore_sb_info *sb_info); +void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_sb_info *sb_info); /** * @brief ecore_int_sb_release - releases the sb_info structure. @@ -274,9 +280,9 @@ void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn, * * @return enum _ecore_status_t */ -enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn, - struct ecore_sb_info *sb_info, - u16 sb_id); +enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn, + struct ecore_sb_info *sb_info, + u16 sb_id); /** * @brief ecore_int_sp_dpc - To be called when an interrupt is received on the @@ -296,7 +302,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie); * * @return */ -void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn, +void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn, struct ecore_sb_cnt_info *p_sb_cnt_info); /** @@ -352,12 +358,11 @@ ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, /** * @brief - Doorbell Recovery handler. - * Run DB_REAL_DEAL doorbell recovery in case of PF overflow - * (and flush DORQ if needed), otherwise run DB_REC_ONCE. + * Run doorbell recovery in case of PF overflow (and flush DORQ if + * needed). * * @param p_hwfn * @param p_ptt */ -enum _ecore_status_t ecore_db_rec_handler(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt); +void ecore_db_rec_handler(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); #endif diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h index bd7c5703f..329b30b7b 100644 --- a/drivers/net/qede/base/ecore_iov_api.h +++ b/drivers/net/qede/base/ecore_iov_api.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. 
- * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_SRIOV_API_H__ #define __ECORE_SRIOV_API_H__ @@ -12,7 +12,17 @@ #define ECORE_ETH_VF_NUM_MAC_FILTERS 1 #define ECORE_ETH_VF_NUM_VLAN_FILTERS 2 -#define ECORE_VF_ARRAY_LENGTH (3) + +#define ECORE_VF_ARRAY_LENGTH (DIV_ROUND_UP(MAX_NUM_VFS, 64)) + +#define ECORE_VF_ARRAY_GET_VFID(arr, vfid) \ + (((arr)[(vfid) / 64]) & (1ULL << ((vfid) % 64))) + +#define ECORE_VF_ARRAY_SET_VFID(arr, vfid) \ + (((arr)[(vfid) / 64]) |= (1ULL << ((vfid) % 64))) + +#define ECORE_VF_ARRAY_CLEAR_VFID(arr, vfid) \ + (((arr)[(vfid) / 64]) &= ~(1ULL << ((vfid) % 64))) #define ECORE_VF_ARRAY_GET_VFID(arr, vfid) \ (((arr)[(vfid) / 64]) & (1ULL << ((vfid) % 64))) @@ -28,7 +38,10 @@ #define IS_PF_PDA(p_hwfn) 0 /* @@TBD Michalk */ /* @@@ TBD MichalK - what should this number be*/ -#define ECORE_MAX_VF_CHAINS_PER_PF 16 +#define ECORE_MAX_QUEUE_VF_CHAINS_PER_PF 16 +#define ECORE_MAX_CNQ_VF_CHAINS_PER_PF 16 +#define ECORE_MAX_VF_CHAINS_PER_PF \ + (ECORE_MAX_QUEUE_VF_CHAINS_PER_PF + ECORE_MAX_CNQ_VF_CHAINS_PER_PF) /* vport update extended feature tlvs flags */ enum ecore_iov_vport_update_flag { @@ -45,7 +58,7 @@ enum ecore_iov_vport_update_flag { /* PF to VF STATUS is part of vfpf-channel API * and must be forward compatible -*/ + */ enum ecore_iov_pf_to_vf_status { PFVF_STATUS_WAITING = 0, PFVF_STATUS_SUCCESS, @@ -67,26 +80,18 @@ struct ecore_mcp_link_capabilities; #define VFPF_ACQUIRE_OS_ESX (2) #define VFPF_ACQUIRE_OS_SOLARIS (3) #define VFPF_ACQUIRE_OS_LINUX_USERSPACE (4) +#define VFPF_ACQUIRE_OS_FREEBSD (5) struct ecore_vf_acquire_sw_info { u32 driver_version; u8 os_type; - - /* We have several close releases that all use ~same FW with different - * versions [making it incompatible as the versioning scheme is still - * tied directly to FW version], allow to override the checking. Only - * those versions would actually support this feature [so it would not - * break forward compatibility with newer HV drivers that are no longer - * suited]. - */ - bool override_fw_version; }; struct ecore_public_vf_info { /* These copies will later be reflected in the bulletin board, * but this copy should be newer. */ - u8 forced_mac[ETH_ALEN]; + u8 forced_mac[ECORE_ETH_ALEN]; u16 forced_vlan; /* Trusted VFs can configure promiscuous mode and @@ -104,14 +109,17 @@ struct ecore_iov_vf_init_params { * number of Rx/Tx queues. */ /* TODO - remove this limitation */ - u16 num_queues; + u8 num_queues; + u8 num_cnqs; + + u8 cnq_offset; /* Allow the client to choose which qzones to use for Rx/Tx, * and which queue_base to use for Tx queues on a per-queue basis. * Notice values should be relative to the PF resources. */ - u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF]; - u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF]; + u16 req_rx_queue[ECORE_MAX_QUEUE_VF_CHAINS_PER_PF]; + u16 req_tx_queue[ECORE_MAX_QUEUE_VF_CHAINS_PER_PF]; u8 vport_id; @@ -175,6 +183,33 @@ struct ecore_hw_sriov_info { }; #ifdef CONFIG_ECORE_SRIOV +/** + * @brief ecore_iov_pci_enable_prolog - Called before enabling sriov on pci. + * Reconfigure QM to initialize PQs for the + * max_active_vfs. + * + * @param p_hwfn + * @param p_ptt + * @param max_active_vfs + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t ecore_iov_pci_enable_prolog(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 max_active_vfs); + +/** + * @brief ecore_iov_pci_disable_epilog - Called after disabling sriov on pci. + * Reconfigure QM to delete VF PQs. 
+ * + * @param p_hwfn + * @param p_ptt + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t ecore_iov_pci_disable_epilog(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); + #ifndef LINUX_REMOVE /** * @brief mark/clear all VFs before/after an incoming PCIe sriov
@@ -210,10 +245,10 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev, * * @return enum _ecore_status_t */ -enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct ecore_iov_vf_init_params - *p_params); +enum _ecore_status_t +ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_iov_vf_init_params *p_params); /** * @brief ecore_iov_process_mbx_req - process a request received
@@ -322,6 +357,7 @@ void ecore_iov_get_link(struct ecore_hwfn *p_hwfn, */ bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id); +#endif /** * @brief Check if given VF ID @vfid is valid
@@ -704,7 +740,7 @@ bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn, * * @return - rate in Mbps */ -int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid); +u32 ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid); /** * @brief - Configure min rate for VF's vport.
@@ -716,11 +752,21 @@ int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid); */ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev, int vfid, u32 rate); -#endif /** - * @brief ecore_pf_configure_vf_queue_coalesce - PF configure coalesce - * parameters of VFs for Rx and Tx queue. + * @brief Set default VLAN in VF info without configuring FW/HW. + * + * @param p_hwfn + * @param vlan + * @param vfid + */ +enum _ecore_status_t ecore_iov_set_default_vlan(struct ecore_hwfn *p_hwfn, + u16 vlan, int vfid); + + +/** + * @brief ecore_pf_configure_vf_queue_coalesce - PF configures coalesce parameters + * of VFs for Rx and Tx queues. * While the API allows setting coalescing per-qid, all queues sharing a SB * should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff] * otherwise configuration would break.
@@ -744,10 +790,9 @@ ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn, * @param p_hwfn * @param rel_vf_id * - * @return MAX_NUM_VFS_K2 in case no further active VFs, otherwise index. + * @return MAX_NUM_VFS in case there are no further active VFs, otherwise the index. */ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id); - void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid, u16 vxlan_port, u16 geneve_port);
@@ -764,11 +809,21 @@ void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid, void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid, bool b_is_hw); #endif -#endif /* CONFIG_ECORE_SRIOV */ +
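The ECORE_VF_ARRAY_* macros earlier in this header pack one bit per VF into u64 words, with ECORE_VF_ARRAY_LENGTH sized by DIV_ROUND_UP(MAX_NUM_VFS, 64). A standalone sketch of the same word/bit arithmetic; the 240-VF maximum below is an arbitrary example value, not a device limit:

#include <stdint.h>
#include <stdio.h>

#define VF_ARRAY_LENGTH(max_vfs) (((max_vfs) + 63) / 64) /* DIV_ROUND_UP */

int main(void)
{
	uint64_t vf_array[VF_ARRAY_LENGTH(240)] = { 0 }; /* 240 VFs -> 4 words */
	int vfid = 70; /* lands in word 70 / 64 = 1, bit 70 % 64 = 6 */

	vf_array[vfid / 64] |= 1ULL << (vfid % 64);    /* SET_VFID */
	printf("set: %d\n", !!(vf_array[vfid / 64] & (1ULL << (vfid % 64))));

	vf_array[vfid / 64] &= ~(1ULL << (vfid % 64)); /* CLEAR_VFID */
	printf("set: %d\n", !!(vf_array[vfid / 64] & (1ULL << (vfid % 64))));
	return 0;
}

+/** + * @brief Run DORQ flush for VFs that may use EDPM, and notify VFs that need + * to perform doorbell overflow recovery. A VF needs to perform recovery + * if it or its PF overflowed.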
+ * + * @param p_hwfn + */ +enum _ecore_status_t ecore_iov_db_rec_handler(struct ecore_hwfn *p_hwfn); + +#endif #define ecore_for_each_vf(_p_hwfn, _i) \ for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0); \ - _i < MAX_NUM_VFS_K2; \ + _i < MAX_NUM_VFS; \ _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1)) #endif
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h index b146faff9..87f6ecacb 100644 --- a/drivers/net/qede/base/ecore_iro.h +++ b/drivers/net/qede/base/ecore_iro.h
@@ -1,273 +1,304 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __IRO_H__ #define __IRO_H__ /* Ystorm flow control mode. Use enum fw_flow_ctrl_mode */ -#define YSTORM_FLOW_CONTROL_MODE_OFFSET (IRO[0].base) -#define YSTORM_FLOW_CONTROL_MODE_SIZE (IRO[0].size) +#define YSTORM_FLOW_CONTROL_MODE_OFFSET (IRO[0].base) +#define YSTORM_FLOW_CONTROL_MODE_SIZE (IRO[0].size) +/* Pstorm LL2 packet duplication configuration. Use pstorm_pkt_dup_cfg data type. */ +#define PSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id) \ + (IRO[1].base + ((pf_id) * IRO[1].m1)) +#define PSTORM_PKT_DUPLICATION_CFG_SIZE (IRO[1].size) /* Tstorm port statistics */ -#define TSTORM_PORT_STAT_OFFSET(port_id) (IRO[1].base + ((port_id) * IRO[1].m1)) -#define TSTORM_PORT_STAT_SIZE (IRO[1].size) +#define TSTORM_PORT_STAT_OFFSET(port_id) \ + (IRO[2].base + ((port_id) * IRO[2].m1)) +#define TSTORM_PORT_STAT_SIZE (IRO[2].size) /* Tstorm ll2 port statistics */ -#define TSTORM_LL2_PORT_STAT_OFFSET(port_id) (IRO[2].base + \ - ((port_id) * IRO[2].m1)) -#define TSTORM_LL2_PORT_STAT_SIZE (IRO[2].size) +#define TSTORM_LL2_PORT_STAT_OFFSET(port_id) \ + (IRO[3].base + ((port_id) * IRO[3].m1)) +#define TSTORM_LL2_PORT_STAT_SIZE (IRO[3].size)
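Every accessor in this header follows one shape: IRO[n].base plus an id-scaled stride m1, sometimes a second stride m2, with size giving the entry's extent. A standalone sketch of that computation; the 0x3288/0x88 sample values are the ones quoted above for TSTORM_PORT_STAT_OFFSET() and are used purely for illustration:

#include <stdint.h>
#include <stdio.h>

struct iro { uint32_t base, m1, m2, size; };

/* Mirror of the macro pattern: base + (id1 * m1) + (id2 * m2) */
static uint32_t iro_offset(const struct iro *e, uint32_t id1, uint32_t id2)
{
	return e->base + id1 * e->m1 + id2 * e->m2;
}

int main(void)
{
	/* e.g. per-port Tstorm stats: base 0x3288, stride 0x88 per port */
	struct iro tstorm_port_stat = { 0x3288, 0x88, 0, 0x88 };

	printf("port 2 stats at 0x%x\n",
	       (unsigned)iro_offset(&tstorm_port_stat, 2, 0));
	return 0;
}

+/* Tstorm LL2 packet duplication configuration. Use tstorm_pkt_dup_cfg data type.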
*/ +#define TSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id) \ + (IRO[4].base + ((pf_id) * IRO[4].m1)) +#define TSTORM_PKT_DUPLICATION_CFG_SIZE (IRO[4].size) /* Ustorm VF-PF Channel ready flag */ -#define USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) (IRO[3].base + \ - ((vf_id) * IRO[3].m1)) -#define USTORM_VF_PF_CHANNEL_READY_SIZE (IRO[3].size) +#define USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) \ + (IRO[5].base + ((vf_id) * IRO[5].m1)) +#define USTORM_VF_PF_CHANNEL_READY_SIZE (IRO[5].size) /* Ustorm Final flr cleanup ack */ -#define USTORM_FLR_FINAL_ACK_OFFSET(pf_id) (IRO[4].base + ((pf_id) * IRO[4].m1)) -#define USTORM_FLR_FINAL_ACK_SIZE (IRO[4].size) +#define USTORM_FLR_FINAL_ACK_OFFSET(pf_id) \ + (IRO[6].base + ((pf_id) * IRO[6].m1)) +#define USTORM_FLR_FINAL_ACK_SIZE (IRO[6].size) /* Ustorm Event ring consumer */ -#define USTORM_EQE_CONS_OFFSET(pf_id) (IRO[5].base + ((pf_id) * IRO[5].m1)) -#define USTORM_EQE_CONS_SIZE (IRO[5].size) +#define USTORM_EQE_CONS_OFFSET(pf_id) \ + (IRO[7].base + ((pf_id) * IRO[7].m1)) +#define USTORM_EQE_CONS_SIZE (IRO[7].size) /* Ustorm eth queue zone */ -#define USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id) (IRO[6].base + \ - ((queue_zone_id) * IRO[6].m1)) -#define USTORM_ETH_QUEUE_ZONE_SIZE (IRO[6].size) +#define USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id) \ + (IRO[8].base + ((queue_zone_id) * IRO[8].m1)) +#define USTORM_ETH_QUEUE_ZONE_SIZE (IRO[8].size) /* Ustorm Common Queue ring consumer */ -#define USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id) (IRO[7].base + \ - ((queue_zone_id) * IRO[7].m1)) -#define USTORM_COMMON_QUEUE_CONS_SIZE (IRO[7].size) +#define USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id) \ + (IRO[9].base + ((queue_zone_id) * IRO[9].m1)) +#define USTORM_COMMON_QUEUE_CONS_SIZE (IRO[9].size) /* Xstorm common PQ info */ -#define XSTORM_PQ_INFO_OFFSET(pq_id) (IRO[8].base + ((pq_id) * IRO[8].m1)) -#define XSTORM_PQ_INFO_SIZE (IRO[8].size) +#define XSTORM_PQ_INFO_OFFSET(pq_id) \ + (IRO[10].base + ((pq_id) * IRO[10].m1)) +#define XSTORM_PQ_INFO_SIZE (IRO[10].size) /* Xstorm Integration Test Data */ -#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base) -#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size) +#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base) +#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size) /* Ystorm Integration Test Data */ -#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base) -#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size) +#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base) +#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size) /* Pstorm Integration Test Data */ -#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base) -#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size) +#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[13].base) +#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[13].size) /* Tstorm Integration Test Data */ -#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base) -#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size) +#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[14].base) +#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[14].size) /* Mstorm Integration Test Data */ -#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[13].base) -#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[13].size) +#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[15].base) +#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[15].size) /* Ustorm Integration Test Data */ -#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[14].base) -#define USTORM_INTEG_TEST_DATA_SIZE (IRO[14].size) +#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[16].base) +#define USTORM_INTEG_TEST_DATA_SIZE (IRO[16].size) /* Xstorm overlay buffer host address */ 
-#define XSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[15].base) -#define XSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[15].size) +#define XSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[17].base) +#define XSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[17].size) /* Ystorm overlay buffer host address */ -#define YSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[16].base) -#define YSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[16].size) +#define YSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[18].base) +#define YSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[18].size) /* Pstorm overlay buffer host address */ -#define PSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[17].base) -#define PSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[17].size) +#define PSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[19].base) +#define PSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[19].size) /* Tstorm overlay buffer host address */ -#define TSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[18].base) -#define TSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[18].size) +#define TSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[20].base) +#define TSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[20].size) /* Mstorm overlay buffer host address */ -#define MSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[19].base) -#define MSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[19].size) +#define MSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[21].base) +#define MSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[21].size) /* Ustorm overlay buffer host address */ -#define USTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[20].base) -#define USTORM_OVERLAY_BUF_ADDR_SIZE (IRO[20].size) +#define USTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[22].base) +#define USTORM_OVERLAY_BUF_ADDR_SIZE (IRO[22].size) /* Tstorm producers */ -#define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) (IRO[21].base + \ - ((core_rx_queue_id) * IRO[21].m1)) -#define TSTORM_LL2_RX_PRODS_SIZE (IRO[21].size) +#define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) \ + (IRO[23].base + ((core_rx_queue_id) * IRO[23].m1)) +#define TSTORM_LL2_RX_PRODS_SIZE (IRO[23].size) /* Tstorm LightL2 queue statistics */ -#define CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \ - (IRO[22].base + ((core_rx_queue_id) * IRO[22].m1)) -#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[22].size) +#define CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \ + (IRO[24].base + ((core_rx_queue_id) * IRO[24].m1)) +#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[24].size) /* Ustorm LiteL2 queue statistics */ -#define CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \ - (IRO[23].base + ((core_rx_queue_id) * IRO[23].m1)) -#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[23].size) +#define CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \ + (IRO[25].base + ((core_rx_queue_id) * IRO[25].m1)) +#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[25].size) /* Pstorm LiteL2 queue statistics */ -#define CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) \ - (IRO[24].base + ((core_tx_stats_id) * IRO[24].m1)) -#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[24].size) +#define CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) \ + (IRO[26].base + ((core_tx_stats_id) * IRO[26].m1)) +#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[26].size) /* Mstorm queue statistics */ -#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[25].base + \ - ((stat_counter_id) * IRO[25].m1)) -#define MSTORM_QUEUE_STAT_SIZE (IRO[25].size) -/* TPA agregation timeout in us resolution (on ASIC) */ -#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[26].base) -#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[26].size) -/* Mstorm ETH VF queues producers offset in RAM. Used in default VF zone size - * mode. 
- */ -#define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id) (IRO[27].base + \ - ((vf_id) * IRO[27].m1) + ((vf_queue_id) * IRO[27].m2)) -#define MSTORM_ETH_VF_PRODS_SIZE (IRO[27].size) +#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) \ + (IRO[27].base + ((stat_counter_id) * IRO[27].m1)) +#define MSTORM_QUEUE_STAT_SIZE (IRO[27].size) +/* TPA aggregation timeout in us resolution (on ASIC) */ +#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[28].base) +#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[28].size) +/* Mstorm ETH VF queues producers offset in RAM. Used in default VF zone size mode. */ +#define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id) \ + (IRO[29].base + ((vf_id) * IRO[29].m1) + ((vf_queue_id) * IRO[29].m2)) +#define MSTORM_ETH_VF_PRODS_SIZE (IRO[29].size) /* Mstorm ETH PF queues producers */ -#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) (IRO[28].base + \ - ((queue_id) * IRO[28].m1)) -#define MSTORM_ETH_PF_PRODS_SIZE (IRO[28].size) +#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) \ + (IRO[30].base + ((queue_id) * IRO[30].m1)) +#define MSTORM_ETH_PF_PRODS_SIZE (IRO[30].size) +/* Mstorm ETH PF BD queues producers */ +#define MSTORM_ETH_PF_BD_PROD_OFFSET(queue_id) \ + (IRO[31].base + ((queue_id) * IRO[31].m1)) +#define MSTORM_ETH_PF_BD_PROD_SIZE (IRO[31].size) +/* Mstorm ETH PF CQ queues producers */ +#define MSTORM_ETH_PF_CQE_PROD_OFFSET(queue_id) \ + (IRO[32].base + ((queue_id) * IRO[32].m1)) +#define MSTORM_ETH_PF_CQE_PROD_SIZE (IRO[32].size) /* Mstorm pf statistics */ -#define MSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[29].base + ((pf_id) * IRO[29].m1)) -#define MSTORM_ETH_PF_STAT_SIZE (IRO[29].size) +#define MSTORM_ETH_PF_STAT_OFFSET(pf_id) \ + (IRO[33].base + ((pf_id) * IRO[33].m1)) +#define MSTORM_ETH_PF_STAT_SIZE (IRO[33].size) /* Ustorm queue statistics */ -#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[30].base + \ - ((stat_counter_id) * IRO[30].m1)) -#define USTORM_QUEUE_STAT_SIZE (IRO[30].size) +#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) \ + (IRO[34].base + ((stat_counter_id) * IRO[34].m1)) +#define USTORM_QUEUE_STAT_SIZE (IRO[34].size) /* Ustorm pf statistics */ -#define USTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[31].base + ((pf_id) * IRO[31].m1)) -#define USTORM_ETH_PF_STAT_SIZE (IRO[31].size) +#define USTORM_ETH_PF_STAT_OFFSET(pf_id) \ + (IRO[35].base + ((pf_id) * IRO[35].m1)) +#define USTORM_ETH_PF_STAT_SIZE (IRO[35].size) /* Pstorm queue statistics */ -#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[32].base + \ - ((stat_counter_id) * IRO[32].m1)) -#define PSTORM_QUEUE_STAT_SIZE (IRO[32].size) +#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) \ + (IRO[36].base + ((stat_counter_id) * IRO[36].m1)) +#define PSTORM_QUEUE_STAT_SIZE (IRO[36].size) /* Pstorm pf statistics */ -#define PSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[33].base + ((pf_id) * IRO[33].m1)) -#define PSTORM_ETH_PF_STAT_SIZE (IRO[33].size) +#define PSTORM_ETH_PF_STAT_OFFSET(pf_id) \ + (IRO[37].base + ((pf_id) * IRO[37].m1)) +#define PSTORM_ETH_PF_STAT_SIZE (IRO[37].size) /* Control frame's EthType configuration for TX control frame security */ -#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) (IRO[34].base + \ - ((ethType_id) * IRO[34].m1)) -#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[34].size) +#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(eth_type_id) \ + (IRO[38].base + ((eth_type_id) * IRO[38].m1)) +#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[38].size) /* Tstorm last parser message */ -#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[35].base) -#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[35].size) +#define 
TSTORM_ETH_PRS_INPUT_OFFSET (IRO[39].base) +#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[39].size) /* Tstorm Eth limit Rx rate */ -#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) (IRO[36].base + ((pf_id) * IRO[36].m1)) -#define ETH_RX_RATE_LIMIT_SIZE (IRO[36].size) -/* RSS indirection table entry update command per PF offset in TSTORM PF BAR0. - * Use eth_tstorm_rss_update_data for update. +#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) \ + (IRO[40].base + ((pf_id) * IRO[40].m1)) +#define ETH_RX_RATE_LIMIT_SIZE (IRO[40].size) +/* RSS indirection table entry update command per PF offset in TSTORM PF BAR0. Use + * eth_tstorm_rss_update_data for update. */ -#define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) (IRO[37].base + \ - ((pf_id) * IRO[37].m1)) -#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[37].size) +#define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) \ + (IRO[41].base + ((pf_id) * IRO[41].m1)) +#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[41].size) /* Xstorm queue zone */ -#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) (IRO[38].base + \ - ((queue_id) * IRO[38].m1)) -#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[38].size) +#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) \ + (IRO[42].base + ((queue_id) * IRO[42].m1)) +#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[42].size) +/* Hairpin Allocations Table with Per-PF Entry of Hairpin base CID, number of allocated CIDs and + * Default Vport + */ +#define XSTORM_ETH_HAIRPIN_CID_ALLOCATION_OFFSET(pf_id) \ + (IRO[43].base + ((pf_id) * IRO[43].m1)) +#define XSTORM_ETH_HAIRPIN_CID_ALLOCATION_SIZE (IRO[43].size) /* Ystorm cqe producer */ -#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[39].base + \ - ((rss_id) * IRO[39].m1)) -#define YSTORM_TOE_CQ_PROD_SIZE (IRO[39].size) +#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) \ + (IRO[44].base + ((rss_id) * IRO[44].m1)) +#define YSTORM_TOE_CQ_PROD_SIZE (IRO[44].size) /* Ustorm cqe producer */ -#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[40].base + \ - ((rss_id) * IRO[40].m1)) -#define USTORM_TOE_CQ_PROD_SIZE (IRO[40].size) +#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) \ + (IRO[45].base + ((rss_id) * IRO[45].m1)) +#define USTORM_TOE_CQ_PROD_SIZE (IRO[45].size) /* Ustorm grq producer */ -#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) (IRO[41].base + \ - ((pf_id) * IRO[41].m1)) -#define USTORM_TOE_GRQ_PROD_SIZE (IRO[41].size) +#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) \ + (IRO[46].base + ((pf_id) * IRO[46].m1)) +#define USTORM_TOE_GRQ_PROD_SIZE (IRO[46].size) /* Tstorm cmdq-cons of given command queue-id */ -#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) (IRO[42].base + \ - ((cmdq_queue_id) * IRO[42].m1)) -#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[42].size) -/* Tstorm (reflects M-Storm) bdq-external-producer of given function ID, - * BDqueue-id - */ -#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \ - (IRO[43].base + ((storage_func_id) * IRO[43].m1) + \ - ((bdq_id) * IRO[43].m2)) -#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[43].size) +#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) \ + (IRO[47].base + ((cmdq_queue_id) * IRO[47].m1)) +#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[47].size) +/* Tstorm (reflects M-Storm) bdq-external-producer of given function ID, BDqueue-id */ +#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \ + (IRO[48].base + ((storage_func_id) * IRO[48].m1) + ((bdq_id) * IRO[48].m2)) +#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[48].size) /* Mstorm bdq-external-producer of given BDQ resource ID, BDqueue-id */ -#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \ - (IRO[44].base + ((storage_func_id) * IRO[44].m1) 
+ \ - ((bdq_id) * IRO[44].m2)) -#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[44].size) +#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \ + (IRO[49].base + ((storage_func_id) * IRO[49].m1) + ((bdq_id) * IRO[49].m2)) +#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[49].size) /* Tstorm iSCSI RX stats */ -#define TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[45].base + \ - ((storage_func_id) * IRO[45].m1)) -#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[45].size) +#define TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) \ + (IRO[50].base + ((storage_func_id) * IRO[50].m1)) +#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[50].size) /* Mstorm iSCSI RX stats */ -#define MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[46].base + \ - ((storage_func_id) * IRO[46].m1)) -#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[46].size) +#define MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) \ + (IRO[51].base + ((storage_func_id) * IRO[51].m1)) +#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[51].size) /* Ustorm iSCSI RX stats */ -#define USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[47].base + \ - ((storage_func_id) * IRO[47].m1)) -#define USTORM_ISCSI_RX_STATS_SIZE (IRO[47].size) +#define USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) \ + (IRO[52].base + ((storage_func_id) * IRO[52].m1)) +#define USTORM_ISCSI_RX_STATS_SIZE (IRO[52].size) /* Xstorm iSCSI TX stats */ -#define XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[48].base + \ - ((storage_func_id) * IRO[48].m1)) -#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[48].size) +#define XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) \ + (IRO[53].base + ((storage_func_id) * IRO[53].m1)) +#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[53].size) /* Ystorm iSCSI TX stats */ -#define YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[49].base + \ - ((storage_func_id) * IRO[49].m1)) -#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[49].size) +#define YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) \ + (IRO[54].base + ((storage_func_id) * IRO[54].m1)) +#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[54].size) /* Pstorm iSCSI TX stats */ -#define PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[50].base + \ - ((storage_func_id) * IRO[50].m1)) -#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[50].size) +#define PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) \ + (IRO[55].base + ((storage_func_id) * IRO[55].m1)) +#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[55].size) /* Tstorm FCoE RX stats */ -#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) (IRO[51].base + \ - ((pf_id) * IRO[51].m1)) -#define TSTORM_FCOE_RX_STATS_SIZE (IRO[51].size) +#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) \ + (IRO[56].base + ((pf_id) * IRO[56].m1)) +#define TSTORM_FCOE_RX_STATS_SIZE (IRO[56].size) /* Pstorm FCoE TX stats */ -#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) (IRO[52].base + \ - ((pf_id) * IRO[52].m1)) -#define PSTORM_FCOE_TX_STATS_SIZE (IRO[52].size) +#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) \ + (IRO[57].base + ((pf_id) * IRO[57].m1)) +#define PSTORM_FCOE_TX_STATS_SIZE (IRO[57].size) /* Pstorm RDMA queue statistics */ -#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[53].base + \ - ((rdma_stat_counter_id) * IRO[53].m1)) -#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[53].size) +#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) \ + (IRO[58].base + ((rdma_stat_counter_id) * IRO[58].m1)) +#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[58].size) /* Tstorm RDMA queue statistics */ -#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[54].base + \ - ((rdma_stat_counter_id) * IRO[54].m1)) -#define TSTORM_RDMA_QUEUE_STAT_SIZE 
(IRO[54].size) +#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) \ + (IRO[59].base + ((rdma_stat_counter_id) * IRO[59].m1)) +#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[59].size) /* Xstorm error level for assert */ -#define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[55].base + \ - ((pf_id) * IRO[55].m1)) -#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[55].size) +#define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ + (IRO[60].base + ((pf_id) * IRO[60].m1)) +#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[60].size) /* Ystorm error level for assert */ -#define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[56].base + \ - ((pf_id) * IRO[56].m1)) -#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[56].size) +#define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ + (IRO[61].base + ((pf_id) * IRO[61].m1)) +#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[61].size) /* Pstorm error level for assert */ -#define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[57].base + \ - ((pf_id) * IRO[57].m1)) -#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[57].size) +#define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ + (IRO[62].base + ((pf_id) * IRO[62].m1)) +#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[62].size) /* Tstorm error level for assert */ -#define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[58].base + \ - ((pf_id) * IRO[58].m1)) -#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[58].size) +#define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ + (IRO[63].base + ((pf_id) * IRO[63].m1)) +#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[63].size) /* Mstorm error level for assert */ -#define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[59].base + \ - ((pf_id) * IRO[59].m1)) -#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[59].size) +#define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ + (IRO[64].base + ((pf_id) * IRO[64].m1)) +#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[64].size) /* Ustorm error level for assert */ -#define USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[60].base + \ - ((pf_id) * IRO[60].m1)) -#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[60].size) +#define USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ + (IRO[65].base + ((pf_id) * IRO[65].m1)) +#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[65].size) /* Xstorm iWARP rxmit stats */ -#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[61].base + \ - ((pf_id) * IRO[61].m1)) -#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[61].size) +#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) \ + (IRO[66].base + ((pf_id) * IRO[66].m1)) +#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[66].size) /* Tstorm RoCE Event Statistics */ -#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[62].base + \ - ((roce_pf_id) * IRO[62].m1)) -#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[62].size) +#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) \ + (IRO[67].base + ((roce_pf_id) * IRO[67].m1)) +#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[67].size) /* DCQCN Received Statistics */ -#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) (IRO[63].base + \ - ((roce_pf_id) * IRO[63].m1)) -#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[63].size) +#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) \ + (IRO[68].base + ((roce_pf_id) * IRO[68].m1)) +#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[68].size) /* RoCE Error Statistics */ -#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) (IRO[64].base + \ - ((roce_pf_id) * IRO[64].m1)) -#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[64].size) +#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) \ + (IRO[69].base + ((roce_pf_id) * IRO[69].m1)) +#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[69].size) /* DCQCN Sent 
Statistics */ -#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) (IRO[65].base + \ - ((roce_pf_id) * IRO[65].m1)) -#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[65].size) +#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) \ + (IRO[70].base + ((roce_pf_id) * IRO[70].m1)) +#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[70].size) /* RoCE CQEs Statistics */ -#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) (IRO[66].base + \ - ((roce_pf_id) * IRO[66].m1)) -#define USTORM_ROCE_CQE_STATS_SIZE (IRO[66].size) +#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) \ + (IRO[71].base + ((roce_pf_id) * IRO[71].m1)) +#define USTORM_ROCE_CQE_STATS_SIZE (IRO[71].size) /* Tstorm NVMf per port per producer consumer data */ -#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id, \ - taskpool_index) (IRO[67].base + ((port_num_id) * IRO[67].m1) + \ - ((taskpool_index) * IRO[67].m2)) -#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_SIZE (IRO[67].size) +#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id, taskpool_index) \ + (IRO[72].base + ((port_num_id) * IRO[72].m1) + ((taskpool_index) * IRO[72].m2)) +#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_SIZE (IRO[72].size) /* Ustorm NVMf per port counters */ -#define USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id) (IRO[68].base + \ - ((port_num_id) * IRO[68].m1)) -#define USTORM_NVMF_PORT_COUNTERS_SIZE (IRO[68].size) +#define USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id) \ + (IRO[73].base + ((port_num_id) * IRO[73].m1)) +#define USTORM_NVMF_PORT_COUNTERS_SIZE (IRO[73].size) +/* Xstorm RoCE non EDPM ACK sent */ +#define XSTORM_ROCE_NON_EDPM_ACK_PKT_OFFSET(counter_id) \ + (IRO[74].base + ((counter_id) * IRO[74].m1)) +#define XSTORM_ROCE_NON_EDPM_ACK_PKT_SIZE (IRO[74].size) +/* Mstorm RoCE ACK sent */ +#define MSTORM_ROCE_ACK_PKT_OFFSET(counter_id) \ + (IRO[75].base + ((counter_id) * IRO[75].m1)) +#define MSTORM_ROCE_ACK_PKT_SIZE (IRO[75].size) #endif diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h index dd7349778..562146dd0 100644 --- a/drivers/net/qede/base/ecore_iro_values.h +++ b/drivers/net/qede/base/ecore_iro_values.h @@ -1,227 +1,336 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. 
- * www.cavium.com + * www.marvell.com */ - #ifndef __IRO_VALUES_H__ #define __IRO_VALUES_H__ /* Per-chip offsets in iro_arr in dwords */ #define E4_IRO_ARR_OFFSET 0 +#define E5_IRO_ARR_OFFSET 228 /* IRO Array */ -static const u32 iro_arr[] = { +static const u32 iro_arr[] = { /* @DPDK */ /* E4 */ - /* YSTORM_FLOW_CONTROL_MODE_OFFSET */ - /* offset=0x0, size=0x8 */ +/* YSTORM_FLOW_CONTROL_MODE_OFFSET, offset=0x0, size=0x8 */ 0x00000000, 0x00000000, 0x00080000, - /* TSTORM_PORT_STAT_OFFSET(port_id), */ - /* offset=0x3288, mult1=0x88, size=0x88 */ +/* PSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id), offset=0x4478, mult1=0x8, size=0x8 */ + 0x00004478, 0x00000008, 0x00080000, +/* TSTORM_PORT_STAT_OFFSET(port_id), offset=0x3288, mult1=0x88, size=0x88 */ 0x00003288, 0x00000088, 0x00880000, - /* TSTORM_LL2_PORT_STAT_OFFSET(port_id), */ - /* offset=0x58f0, mult1=0x20, size=0x20 */ - 0x000058f0, 0x00000020, 0x00200000, - /* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id), */ - /* offset=0xb00, mult1=0x8, size=0x4 */ +/* TSTORM_LL2_PORT_STAT_OFFSET(port_id), offset=0x5868, mult1=0x20, size=0x20 */ + 0x00005868, 0x00000020, 0x00200000, +/* TSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id), offset=0x3188, mult1=0x8, size=0x8 */ + 0x00003188, 0x00000008, 0x00080000, +/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id), offset=0xb00, mult1=0x8, size=0x4 */ 0x00000b00, 0x00000008, 0x00040000, - /* USTORM_FLR_FINAL_ACK_OFFSET(pf_id), */ - /* offset=0xa80, mult1=0x8, size=0x4 */ +/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id), offset=0xa80, mult1=0x8, size=0x4 */ 0x00000a80, 0x00000008, 0x00040000, - /* USTORM_EQE_CONS_OFFSET(pf_id), */ - /* offset=0x0, mult1=0x8, size=0x2 */ +/* USTORM_EQE_CONS_OFFSET(pf_id), offset=0x0, mult1=0x8, size=0x2 */ 0x00000000, 0x00000008, 0x00020000, - /* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id), */ - /* offset=0x80, mult1=0x8, size=0x4 */ +/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id), offset=0x80, mult1=0x8, size=0x4 */ 0x00000080, 0x00000008, 0x00040000, - /* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id), */ - /* offset=0x84, mult1=0x8, size=0x2 */ +/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id), offset=0x84, mult1=0x8, size=0x2 */ 0x00000084, 0x00000008, 0x00020000, - /* XSTORM_PQ_INFO_OFFSET(pq_id), */ - /* offset=0x5718, mult1=0x4, size=0x4 */ - 0x00005718, 0x00000004, 0x00040000, - /* XSTORM_INTEG_TEST_DATA_OFFSET, */ - /* offset=0x4dd0, size=0x78 */ - 0x00004dd0, 0x00000000, 0x00780000, - /* YSTORM_INTEG_TEST_DATA_OFFSET */ - /* offset=0x3e40, size=0x78 */ +/* XSTORM_PQ_INFO_OFFSET(pq_id), offset=0x5798, mult1=0x4, size=0x4 */ + 0x00005798, 0x00000004, 0x00040000, +/* XSTORM_INTEG_TEST_DATA_OFFSET, offset=0x4e50, size=0x78 */ + 0x00004e50, 0x00000000, 0x00780000, +/* YSTORM_INTEG_TEST_DATA_OFFSET, offset=0x3e40, size=0x78 */ 0x00003e40, 0x00000000, 0x00780000, - /* PSTORM_INTEG_TEST_DATA_OFFSET, */ - /* offset=0x4480, size=0x78 */ - 0x00004480, 0x00000000, 0x00780000, - /* TSTORM_INTEG_TEST_DATA_OFFSET, */ - /* offset=0x3210, size=0x78 */ +/* PSTORM_INTEG_TEST_DATA_OFFSET, offset=0x4500, size=0x78 */ + 0x00004500, 0x00000000, 0x00780000, +/* TSTORM_INTEG_TEST_DATA_OFFSET, offset=0x3210, size=0x78 */ 0x00003210, 0x00000000, 0x00780000, - /* MSTORM_INTEG_TEST_DATA_OFFSET */ - /* offset=0x3b50, size=0x78 */ +/* MSTORM_INTEG_TEST_DATA_OFFSET, offset=0x3b50, size=0x78 */ 0x00003b50, 0x00000000, 0x00780000, - /* USTORM_INTEG_TEST_DATA_OFFSET */ - /* offset=0x7f58, size=0x78 */ +/* USTORM_INTEG_TEST_DATA_OFFSET, offset=0x7f58, size=0x78 */ 0x00007f58, 0x00000000, 0x00780000, - /* 
XSTORM_OVERLAY_BUF_ADDR_OFFSET, */ - /* offset=0x5f58, size=0x8 */ - 0x00005f58, 0x00000000, 0x00080000, - /* YSTORM_OVERLAY_BUF_ADDR_OFFSET */ - /* offset=0x7100, size=0x8 */ +/* XSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x5fd8, size=0x8 */ + 0x00005fd8, 0x00000000, 0x00080000, +/* YSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x7100, size=0x8 */ 0x00007100, 0x00000000, 0x00080000, - /* PSTORM_OVERLAY_BUF_ADDR_OFFSET, */ - /* offset=0xaea0, size=0x8 */ - 0x0000aea0, 0x00000000, 0x00080000, - /* TSTORM_OVERLAY_BUF_ADDR_OFFSET, */ - /* offset=0x4398, size=0x8 */ +/* PSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xaf20, size=0x8 */ + 0x0000af20, 0x00000000, 0x00080000, +/* TSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x4398, size=0x8 */ 0x00004398, 0x00000000, 0x00080000, - /* MSTORM_OVERLAY_BUF_ADDR_OFFSET */ - /* offset=0xa5a0, size=0x8 */ +/* MSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xa5a0, size=0x8 */ 0x0000a5a0, 0x00000000, 0x00080000, - /* USTORM_OVERLAY_BUF_ADDR_OFFSET */ - /* offset=0xbde8, size=0x8 */ - 0x0000bde8, 0x00000000, 0x00080000, - /* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id), */ - /* offset=0x20, mult1=0x4, size=0x4 */ - 0x00000020, 0x00000004, 0x00040000, - /* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), */ - /* offset=0x56d0, mult1=0x10, size=0x10 */ - 0x000056d0, 0x00000010, 0x00100000, - /* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), */ - /* offset=0xc210, mult1=0x30, size=0x30 */ - 0x0000c210, 0x00000030, 0x00300000, - /* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id), */ - /* offset=0xb088, mult1=0x38, size=0x38 */ - 0x0000b088, 0x00000038, 0x00380000, - /* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id), */ - /* offset=0x3d20, mult1=0x80, size=0x40 */ +/* USTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xc1e8, size=0x8 */ + 0x0000c1e8, 0x00000000, 0x00080000, +/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id), offset=0x20, mult1=0x8, size=0x8 */ + 0x00000020, 0x00000008, 0x00080000, +/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), offset=0x5548, mult1=0x10, size=0x10 */ + 0x00005548, 0x00000010, 0x00100000, +/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), offset=0xc610, mult1=0x30, size=0x30 */ + 0x0000c610, 0x00000030, 0x00300000, +/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id), offset=0xb108, mult1=0x38, size=0x38 */ + 0x0000b108, 0x00000038, 0x00380000, +/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x3d20, mult1=0x80, size=0x40 */ 0x00003d20, 0x00000080, 0x00400000, - /* MSTORM_TPA_TIMEOUT_US_OFFSET */ - /* offset=0xbf60, size=0x4 */ +/* MSTORM_TPA_TIMEOUT_US_OFFSET, offset=0xbf60, size=0x4 */ 0x0000bf60, 0x00000000, 0x00040000, - /* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id), */ - /* offset=0x4560, mult1=0x80, mult2=0x4, size=0x4 */ +/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id), offset=0x4560, mult1=0x80, mult2=0x4, size=0x4 */ 0x00004560, 0x00040080, 0x00040000, - /* MSTORM_ETH_PF_PRODS_OFFSET(queue_id), */ - /* offset=0x1f8, mult1=0x4, size=0x4 */ +/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id), offset=0x1f8, mult1=0x4, size=0x4 */ 0x000001f8, 0x00000004, 0x00040000, - /* MSTORM_ETH_PF_STAT_OFFSET(pf_id), */ - /* offset=0x3d60, mult1=0x80, size=0x20 */ +/* MSTORM_ETH_PF_BD_PROD_OFFSET(queue_id), offset=0x1f8, mult1=0x4, size=0x2 */ + 0x000001f8, 0x00000004, 0x00020000, +/* MSTORM_ETH_PF_CQE_PROD_OFFSET(queue_id), offset=0x1fa, mult1=0x4, size=0x2 */ + 0x000001fa, 0x00000004, 0x00020000, +/* MSTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x3d60, mult1=0x80, size=0x20 */ 0x00003d60, 0x00000080, 0x00200000, - /* 
USTORM_QUEUE_STAT_OFFSET(stat_counter_id), */ - /* offset=0x8960, mult1=0x40, size=0x30 */ - 0x00008960, 0x00000040, 0x00300000, - /* USTORM_ETH_PF_STAT_OFFSET(pf_id), */ - /* offset=0xe840, mult1=0x60, size=0x60 */ - 0x0000e840, 0x00000060, 0x00600000, - /* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id), */ - /* offset=0x4618, mult1=0x80, size=0x38 */ - 0x00004618, 0x00000080, 0x00380000, - /* PSTORM_ETH_PF_STAT_OFFSET(pf_id), */ - /* offset=0x10738, mult1=0xc0, size=0xc0 */ - 0x00010738, 0x000000c0, 0x00c00000, - /* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id), */ - /* offset=0x1f8, mult1=0x2, size=0x2 */ +/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x8d60, mult1=0x40, size=0x30 */ + 0x00008d60, 0x00000040, 0x00300000, +/* USTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0xef40, mult1=0x60, size=0x60 */ + 0x0000ef40, 0x00000060, 0x00600000, +/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x4698, mult1=0x80, size=0x38 */ + 0x00004698, 0x00000080, 0x00380000, +/* PSTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x107b0, mult1=0xc0, size=0xc0 */ + 0x000107b0, 0x000000c0, 0x00c00000, +/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id), offset=0x1f8, mult1=0x2, size=0x2 */ 0x000001f8, 0x00000002, 0x00020000, - /* TSTORM_ETH_PRS_INPUT_OFFSET, */ - /* offset=0xa2a8, size=0x108 */ - 0x0000a2a8, 0x00000000, 0x01080000, - /* ETH_RX_RATE_LIMIT_OFFSET(pf_id), */ - /* offset=0xa3b0, mult1=0x8, size=0x8 */ - 0x0000a3b0, 0x00000008, 0x00080000, - /* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id), */ - /* offset=0x1c0, mult1=0x8, size=0x8 */ +/* TSTORM_ETH_PRS_INPUT_OFFSET, offset=0xa220, size=0x108 */ + 0x0000a220, 0x00000000, 0x01080000, +/* ETH_RX_RATE_LIMIT_OFFSET(pf_id), offset=0xa328, mult1=0x8, size=0x8 */ + 0x0000a328, 0x00000008, 0x00080000, +/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id), offset=0x1c0, mult1=0x8, size=0x8 */ 0x000001c0, 0x00000008, 0x00080000, - /* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id), */ - /* offset=0x1f8, mult1=0x8, size=0x8 */ +/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id), offset=0x1f8, mult1=0x8, size=0x8 */ 0x000001f8, 0x00000008, 0x00080000, - /* YSTORM_TOE_CQ_PROD_OFFSET(rss_id), */ - /* offset=0xac0, mult1=0x8, size=0x8 */ +/* XSTORM_ETH_HAIRPIN_CID_ALLOCATION_OFFSET(pf_id), offset=0x8ce0, mult1=0x8, size=0x8 */ + 0x00008ce0, 0x00000008, 0x00080000, +/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id), offset=0xac0, mult1=0x8, size=0x8 */ 0x00000ac0, 0x00000008, 0x00080000, - /* USTORM_TOE_CQ_PROD_OFFSET(rss_id), */ - /* offset=0x2578, mult1=0x8, size=0x8 */ +/* USTORM_TOE_CQ_PROD_OFFSET(rss_id), offset=0x2578, mult1=0x8, size=0x8 */ 0x00002578, 0x00000008, 0x00080000, - /* USTORM_TOE_GRQ_PROD_OFFSET(pf_id), */ - /* offset=0x24f8, mult1=0x8, size=0x8 */ +/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id), offset=0x24f8, mult1=0x8, size=0x8 */ 0x000024f8, 0x00000008, 0x00080000, - /* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id), */ - /* offset=0x280, mult1=0x8, size=0x8 */ +/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id), offset=0x280, mult1=0x8, size=0x8 */ 0x00000280, 0x00000008, 0x00080000, - /* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), */ - /* offset=0x680, mult1=0x18, mult2=0x8, size=0x8 */ +/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), offset=0x680, mult1=0x18, mult2=0x8, + * size=0x8 + */ 0x00000680, 0x00080018, 0x00080000, - /* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), */ - /* offset=0xb78, mult1=0x18, mult2=0x8, size=0x2 */ +/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), offset=0xb78, mult1=0x18, mult2=0x8, + * size=0x2 + */ 0x00000b78, 0x00080018, 0x00020000, - /* 
TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */ - /* offset=0xc648, mult1=0x50, size=0x3c */ - 0x0000c648, 0x00000050, 0x003c0000, - /* MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */ - /* offset=0x12038, mult1=0x18, size=0x10 */ +/* TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0xc5c0, mult1=0x50, size=0x3c */ + 0x0000c5c0, 0x00000050, 0x003c0000, +/* MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x12038, mult1=0x18, size=0x10 */ 0x00012038, 0x00000018, 0x00100000, - /* USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */ - /* offset=0x11b00, mult1=0x40, size=0x18 */ - 0x00011b00, 0x00000040, 0x00180000, - /* XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */ - /* offset=0x95d0, mult1=0x50, size=0x20 */ - 0x000095d0, 0x00000050, 0x00200000, - /* YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */ - /* offset=0x8b10, mult1=0x40, size=0x28 */ +/* USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x12200, mult1=0x40, size=0x18 */ + 0x00012200, 0x00000040, 0x00180000, +/* XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x96c8, mult1=0x50, size=0x20 */ + 0x000096c8, 0x00000050, 0x00200000, +/* YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x8b10, mult1=0x40, size=0x28 */ 0x00008b10, 0x00000040, 0x00280000, - /* PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */ - /* offset=0x11640, mult1=0x18, size=0x10 */ - 0x00011640, 0x00000018, 0x00100000, - /* TSTORM_FCOE_RX_STATS_OFFSET(pf_id), */ - /* offset=0xc830, mult1=0x48, size=0x38 */ +/* PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x116b8, mult1=0x18, size=0x10 */ + 0x000116b8, 0x00000018, 0x00100000, +/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id), offset=0xc830, mult1=0x48, size=0x38 */ 0x0000c830, 0x00000048, 0x00380000, - /* PSTORM_FCOE_TX_STATS_OFFSET(pf_id), */ - /* offset=0x11710, mult1=0x20, size=0x20 */ - 0x00011710, 0x00000020, 0x00200000, - /* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), */ - /* offset=0x4650, mult1=0x80, size=0x10 */ - 0x00004650, 0x00000080, 0x00100000, - /* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), */ - /* offset=0x3618, mult1=0x10, size=0x10 */ +/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id), offset=0x11788, mult1=0x20, size=0x20 */ + 0x00011788, 0x00000020, 0x00200000, +/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), offset=0x46d0, mult1=0x80, size=0x10 */ + 0x000046d0, 0x00000080, 0x00100000, +/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), offset=0x3618, mult1=0x10, size=0x10 */ 0x00003618, 0x00000010, 0x00100000, - /* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */ - /* offset=0xa968, mult1=0x8, size=0x1 */ - 0x0000a968, 0x00000008, 0x00010000, - /* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */ - /* offset=0x97a0, mult1=0x8, size=0x1 */ +/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0xaa60, mult1=0x8, size=0x1 */ + 0x0000aa60, 0x00000008, 0x00010000, +/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x97a0, mult1=0x8, size=0x1 */ 0x000097a0, 0x00000008, 0x00010000, - /* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */ - /* offset=0x11990, mult1=0x8, size=0x1 */ - 0x00011990, 0x00000008, 0x00010000, - /* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */ - /* offset=0xf020, mult1=0x8, size=0x1 */ - 0x0000f020, 0x00000008, 0x00010000, - /* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */ - /* offset=0x12628, mult1=0x8, size=0x1 */ +/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x11a08, mult1=0x8, size=0x1 */ + 0x00011a08, 0x00000008, 0x00010000, +/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0xea20, mult1=0x8, size=0x1 */ + 0x0000ea20, 0x00000008, 0x00010000, +/* 
MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x12628, mult1=0x8, size=0x1 */ 0x00012628, 0x00000008, 0x00010000, - /* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */ - /* offset=0x11da8, mult1=0x8, size=0x1 */ - 0x00011da8, 0x00000008, 0x00010000, - /* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id), */ - /* offset=0xaa78, mult1=0x30, size=0x10 */ - 0x0000aa78, 0x00000030, 0x00100000, - /* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id), */ - /* offset=0xd770, mult1=0x28, size=0x28 */ +/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x12930, mult1=0x8, size=0x1 */ + 0x00012930, 0x00000008, 0x00010000, +/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id), offset=0xaf80, mult1=0x30, size=0x10 */ + 0x0000af80, 0x00000030, 0x00100000, +/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id), offset=0xd770, mult1=0x28, size=0x28 */ 0x0000d770, 0x00000028, 0x00280000, - /* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id), */ - /* offset=0x9a58, mult1=0x18, size=0x18 */ - 0x00009a58, 0x00000018, 0x00180000, - /* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id), */ - /* offset=0x9bd8, mult1=0x8, size=0x8 */ - 0x00009bd8, 0x00000008, 0x00080000, - /* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id), */ - /* offset=0x13a18, mult1=0x8, size=0x8 */ - 0x00013a18, 0x00000008, 0x00080000, - /* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id), */ - /* offset=0x126e8, mult1=0x18, size=0x18 */ - 0x000126e8, 0x00000018, 0x00180000, - /* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index), */ - /* offset=0xe610, mult1=0x288, mult2=0x50, size=0x10 */ - 0x0000e610, 0x00500288, 0x00100000, - /* USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id), */ - /* offset=0x12970, mult1=0x138, size=0x28 */ - 0x00012970, 0x00000138, 0x00280000, +/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id), offset=0x9e68, mult1=0x18, size=0x18 */ + 0x00009e68, 0x00000018, 0x00180000, +/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id), offset=0x9fe8, mult1=0x8, size=0x8 */ + 0x00009fe8, 0x00000008, 0x00080000, +/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id), offset=0x13ea0, mult1=0x8, size=0x8 */ + 0x00013ea0, 0x00000008, 0x00080000, +/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id), offset=0x13680, mult1=0x18, size=0x18 */ + 0x00013680, 0x00000018, 0x00180000, +/* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index), offset=0xe010, + * mult1=0x288, mult2=0x50, size=0x10 + */ + 0x0000e010, 0x00500288, 0x00100000, +/* USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id), offset=0x13908, mult1=0x138, size=0x28 */ + 0x00013908, 0x00000138, 0x00280000, + 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, + /* E5 */ +/* YSTORM_FLOW_CONTROL_MODE_OFFSET, offset=0x0, size=0x8 */ + 0x00000000, 0x00000000, 0x00080000, +/* PSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id), offset=0x7ff8, mult1=0x8, size=0x8 */ + 0x00007ff8, 0x00000008, 0x00080000, +/* TSTORM_PORT_STAT_OFFSET(port_id), offset=0x6fc8, mult1=0x88, size=0x88 */ + 0x00006fc8, 0x00000088, 0x00880000, +/* TSTORM_LL2_PORT_STAT_OFFSET(port_id), offset=0x9b30, mult1=0x20, size=0x20 */ + 0x00009b30, 0x00000020, 0x00200000, +/* TSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id), offset=0x6ec8, mult1=0x8, size=0x8 */ + 0x00006ec8, 0x00000008, 0x00080000, +/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id), offset=0x1100, mult1=0x8, size=0x4 */ + 0x00001100, 0x00000008, 0x00040000, +/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id), offset=0x1080, mult1=0x8, size=0x4 */ + 0x00001080, 0x00000008, 0x00040000, +/* USTORM_EQE_CONS_OFFSET(pf_id), offset=0x0, mult1=0x8, size=0x2 */ + 0x00000000, 0x00000008, 0x00020000, 
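Each entry in this table is three packed 32-bit words. Judging from the offset/mult1/mult2/size annotations, word 0 is the base offset into the storm RAM, word 1 packs mult2 into its upper 16 bits and mult1 into its lower 16 bits, and word 2 carries the size in its upper 16 bits; e.g. 0x00500288 above encodes mult2=0x50, mult1=0x288. A minimal decoding sketch, using illustrative names (and the driver's fixed-width typedefs) rather than anything defined by this patch:

	struct iro_entry {
		u32 base;	/* byte offset of the field within the storm RAM */
		u16 mult1;	/* stride per first index, e.g. pf_id */
		u16 mult2;	/* stride per second index, e.g. bdq_id */
		u16 size;	/* field size in bytes */
	};

	static void iro_entry_unpack(const u32 w[3], struct iro_entry *e)
	{
		e->base = w[0];
		e->mult1 = (u16)(w[1] & 0xffff);
		e->mult2 = (u16)(w[1] >> 16);
		e->size = (u16)(w[2] >> 16);
	}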
+/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id), offset=0x80, mult1=0x8, size=0x4 */ + 0x00000080, 0x00000008, 0x00040000, +/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id), offset=0x84, mult1=0x8, size=0x2 */ + 0x00000084, 0x00000008, 0x00020000, +/* XSTORM_PQ_INFO_OFFSET(pq_id), offset=0x8b98, mult1=0x4, size=0x4 */ + 0x00008b98, 0x00000004, 0x00040000, +/* XSTORM_INTEG_TEST_DATA_OFFSET, offset=0x80d0, size=0x78 */ + 0x000080d0, 0x00000000, 0x00780000, +/* YSTORM_INTEG_TEST_DATA_OFFSET, offset=0x6b10, size=0x78 */ + 0x00006b10, 0x00000000, 0x00780000, +/* PSTORM_INTEG_TEST_DATA_OFFSET, offset=0x8080, size=0x78 */ + 0x00008080, 0x00000000, 0x00780000, +/* TSTORM_INTEG_TEST_DATA_OFFSET, offset=0x6f50, size=0x78 */ + 0x00006f50, 0x00000000, 0x00780000, +/* MSTORM_INTEG_TEST_DATA_OFFSET, offset=0x7b10, size=0x78 */ + 0x00007b10, 0x00000000, 0x00780000, +/* USTORM_INTEG_TEST_DATA_OFFSET, offset=0xaa20, size=0x78 */ + 0x0000aa20, 0x00000000, 0x00780000, +/* XSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x93d8, size=0x8 */ + 0x000093d8, 0x00000000, 0x00080000, +/* YSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xc950, size=0x8 */ + 0x0000c950, 0x00000000, 0x00080000, +/* PSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x102a8, size=0x8 */ + 0x000102a8, 0x00000000, 0x00080000, +/* TSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x83d8, size=0x8 */ + 0x000083d8, 0x00000000, 0x00080000, +/* MSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xfd68, size=0x8 */ + 0x0000fd68, 0x00000000, 0x00080000, +/* USTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x116b0, size=0x8 */ + 0x000116b0, 0x00000000, 0x00080000, +/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id), offset=0x40, mult1=0x8, size=0x8 */ + 0x00000040, 0x00000008, 0x00080000, +/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), offset=0x9810, mult1=0x10, size=0x10 */ + 0x00009810, 0x00000010, 0x00100000, +/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), offset=0x11d60, mult1=0x30, size=0x30 */ + 0x00011d60, 0x00000030, 0x00300000, +/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id), offset=0x10958, mult1=0x38, size=0x38 */ + 0x00010958, 0x00000038, 0x00380000, +/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x7ce8, mult1=0x80, size=0x40 */ + 0x00007ce8, 0x00000080, 0x00400000, +/* MSTORM_TPA_TIMEOUT_US_OFFSET, offset=0x12830, size=0x4 */ + 0x00012830, 0x00000000, 0x00040000, +/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id), offset=0x1f8, mult1=0x0, mult2=0x0, size=0x4 */ + 0x000001f8, 0x00000000, 0x00040000, +/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id), offset=0x1f8, mult1=0x10, size=0x4 */ + 0x000001f8, 0x00000010, 0x00040000, +/* MSTORM_ETH_PF_BD_PROD_OFFSET(queue_id), offset=0x1f8, mult1=0x10, size=0x2 */ + 0x000001f8, 0x00000010, 0x00020000, +/* MSTORM_ETH_PF_CQE_PROD_OFFSET(queue_id), offset=0x1fa, mult1=0x10, size=0x2 */ + 0x000001fa, 0x00000010, 0x00020000, +/* MSTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x17310, mult1=0x20, size=0x20 */ + 0x00017310, 0x00000020, 0x00200000, +/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0xd628, mult1=0x40, size=0x30 */ + 0x0000d628, 0x00000040, 0x00300000, +/* USTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x15500, mult1=0x60, size=0x60 */ + 0x00015500, 0x00000060, 0x00600000, +/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x8218, mult1=0x80, size=0x38 */ + 0x00008218, 0x00000080, 0x00380000, +/* PSTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x14a68, mult1=0xc0, size=0xc0 */ + 0x00014a68, 0x000000c0, 0x00c00000, +/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id), offset=0x1f8, mult1=0x2, size=0x2 */ + 0x000001f8, 0x00000002, 
0x00020000, +/* TSTORM_ETH_PRS_INPUT_OFFSET, offset=0xeeb8, size=0x128 */ + 0x0000eeb8, 0x00000000, 0x01280000, +/* ETH_RX_RATE_LIMIT_OFFSET(pf_id), offset=0xefe0, mult1=0x8, size=0x8 */ + 0x0000efe0, 0x00000008, 0x00080000, +/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id), offset=0x200, mult1=0x8, size=0x8 */ + 0x00000200, 0x00000008, 0x00080000, +/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id), offset=0x1f8, mult1=0x8, size=0x8 */ + 0x000001f8, 0x00000008, 0x00080000, +/* XSTORM_ETH_HAIRPIN_CID_ALLOCATION_OFFSET(pf_id), offset=0x12370, mult1=0x8, size=0x8 */ + 0x00012370, 0x00000008, 0x00080000, +/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id), offset=0xac0, mult1=0x8, size=0x8 */ + 0x00000ac0, 0x00000008, 0x00080000, +/* USTORM_TOE_CQ_PROD_OFFSET(rss_id), offset=0x2880, mult1=0x8, size=0x8 */ + 0x00002880, 0x00000008, 0x00080000, +/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id), offset=0x2800, mult1=0x8, size=0x8 */ + 0x00002800, 0x00000008, 0x00080000, +/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id), offset=0x340, mult1=0x8, size=0x8 */ + 0x00000340, 0x00000008, 0x00080000, +/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), offset=0x740, mult1=0x18, mult2=0x8, + * size=0x8 + */ + 0x00000740, 0x00080018, 0x00080000, +/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), offset=0x2508, mult1=0x18, mult2=0x8, + * size=0x2 + */ + 0x00002508, 0x00080018, 0x00020000, +/* TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x11298, mult1=0x50, size=0x3c */ + 0x00011298, 0x00000050, 0x003c0000, +/* MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x19888, mult1=0x18, size=0x10 */ + 0x00019888, 0x00000018, 0x00100000, +/* USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x187c0, mult1=0x40, size=0x18 */ + 0x000187c0, 0x00000040, 0x00180000, +/* XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x12d58, mult1=0x50, size=0x20 */ + 0x00012d58, 0x00000050, 0x00200000, +/* YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0xe928, mult1=0x40, size=0x28 */ + 0x0000e928, 0x00000040, 0x00280000, +/* PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x17968, mult1=0x18, size=0x10 */ + 0x00017968, 0x00000018, 0x00100000, +/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id), offset=0x11508, mult1=0x48, size=0x38 */ + 0x00011508, 0x00000048, 0x00380000, +/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id), offset=0x17a38, mult1=0x20, size=0x20 */ + 0x00017a38, 0x00000020, 0x00200000, +/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), offset=0x8250, mult1=0x80, size=0x10 */ + 0x00008250, 0x00000080, 0x00100000, +/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), offset=0x7358, mult1=0x10, size=0x10 */ + 0x00007358, 0x00000010, 0x00100000, +/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x14270, mult1=0x8, size=0x1 */ + 0x00014270, 0x00000008, 0x00010000, +/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0xf7b8, mult1=0x8, size=0x1 */ + 0x0000f7b8, 0x00000008, 0x00010000, +/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x17cb8, mult1=0x8, size=0x1 */ + 0x00017cb8, 0x00000008, 0x00010000, +/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x13978, mult1=0x8, size=0x1 */ + 0x00013978, 0x00000008, 0x00010000, +/* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x19e78, mult1=0x8, size=0x1 */ + 0x00019e78, 0x00000008, 0x00010000, +/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x18ef0, mult1=0x8, size=0x1 */ + 0x00018ef0, 0x00000008, 0x00010000, +/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id), offset=0x14790, mult1=0x30, size=0x10 */ + 0x00014790, 0x00000030, 0x00100000, +/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id), 
offset=0x125c8, mult1=0x28, size=0x28 */ + 0x000125c8, 0x00000028, 0x00280000, +/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id), offset=0xfe80, mult1=0x18, size=0x18 */ + 0x0000fe80, 0x00000018, 0x00180000, +/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id), offset=0x10000, mult1=0x8, size=0x8 */ + 0x00010000, 0x00000008, 0x00080000, +/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id), offset=0x1a150, mult1=0x8, size=0x8 */ + 0x0001a150, 0x00000008, 0x00080000, +/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id), offset=0x19c40, mult1=0x18, size=0x18 */ + 0x00019c40, 0x00000018, 0x00180000, +/* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index), offset=0x12e70, + * mult1=0x2c8, mult2=0x58, size=0x10 + */ + 0x00012e70, 0x005802c8, 0x00100000, +/* USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id), offset=0x19ec8, mult1=0x138, size=0x28 */ + 0x00019ec8, 0x00000138, 0x00280000, +/* XSTORM_ROCE_NON_EDPM_ACK_PKT_OFFSET(counter_id), offset=0x82e8, mult1=0x8, size=0x8 */ + 0x000082e8, 0x00000008, 0x00080000, +/* MSTORM_ROCE_ACK_PKT_OFFSET(counter_id), offset=0x7d28, mult1=0x80, size=0x8 */ + 0x00007d28, 0x00000080, 0x00080000, }; -/* Data size: 828 bytes */ +/* Data size: 1824 bytes */ -#endif /* __IRO_VALUES_H__ */ +#endif diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c index 01eb69c0e..646cf07cb 100644 --- a/drivers/net/qede/base/ecore_l2.c +++ b/drivers/net/qede/base/ecore_l2.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "ecore.h" @@ -20,16 +20,23 @@ #include "reg_addr.h" #include "ecore_int.h" #include "ecore_hw.h" -#include "ecore_vf.h" #include "ecore_sriov.h" +#include "ecore_vf.h" #include "ecore_mcp.h" #define ECORE_MAX_SGES_NUM 16 #define CRC32_POLY 0x1edc6f41 +#ifdef _NTDDK_ +#pragma warning(push) +#pragma warning(disable : 28167) +#pragma warning(disable : 28123) +#pragma warning(disable : 28121) +#endif + struct ecore_l2_info { u32 queues; - u32 **pp_qid_usage; + u32 **pp_qid_usage; /* @DPDK */ /* The lock is meant to synchronize access to the qid usage */ osal_mutex_t lock; @@ -38,7 +45,7 @@ struct ecore_l2_info { enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn) { struct ecore_l2_info *p_l2_info; - u32 **pp_qids; + u32 **pp_qids; /* @DPDK */ u32 i; if (!ECORE_IS_L2_PERSONALITY(p_hwfn)) @@ -152,7 +159,7 @@ static bool ecore_eth_queue_qid_usage_add(struct ecore_hwfn *p_hwfn, goto out; } - OSAL_SET_BIT(first, p_l2_info->pp_qid_usage[queue_id]); + OSAL_NON_ATOMIC_SET_BIT(first, p_l2_info->pp_qid_usage[queue_id]); p_cid->qid_usage_idx = first; out: @@ -190,7 +197,7 @@ void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn, OSAL_VFREE(p_hwfn->p_dev, p_cid); } -/* The internal is only meant to be directly called by PFs initializeing CIDs +/* The internal is only meant to be directly called by PFs initializing CIDs * for their VFs. */ static struct ecore_queue_cid * @@ -293,23 +300,26 @@ ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid, bool b_is_rx, struct ecore_queue_cid_vf_params *p_vf_params) { + u8 vfid = p_vf_params ? 
p_vf_params->vfid : ECORE_CXT_PF_CID; struct ecore_queue_cid *p_cid; - u8 vfid = ECORE_CXT_PF_CID; bool b_legacy_vf = false; u32 cid = 0; - /* In case of legacy VFs, The CID can be derived from the additional - * VF parameters - the VF assumes queue X uses CID X, so we can simply + /* In case of legacy VFs, The ICID can be derived from the additional + * VF parameters - the VF assumes queue X uses ICID X, so we can simply * use the vf_qid for this purpose as well. + * In E5 the ICID should be after the CORE's global range, so an offset + * is required. */ - if (p_vf_params) { - vfid = p_vf_params->vfid; - - if (p_vf_params->vf_legacy & - ECORE_QCID_LEGACY_VF_CID) { - b_legacy_vf = true; - cid = p_vf_params->vf_qid; - } + if (p_vf_params && + (p_vf_params->vf_legacy & ECORE_QCID_LEGACY_VF_CID)) { + b_legacy_vf = true; + cid = p_vf_params->vf_qid; + + if (ECORE_IS_E5(p_hwfn->p_dev)) + cid += ecore_cxt_get_proto_cid_count(p_hwfn, + PROTOCOLID_CORE, + OSAL_NULL); } /* Get a unique firmware CID for this queue, in case it's a PF. @@ -341,9 +351,8 @@ ecore_eth_queue_to_cid_pf(struct ecore_hwfn *p_hwfn, u16 opaque_fid, OSAL_NULL); } -enum _ecore_status_t -ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn, - struct ecore_sp_vport_start_params *p_params) +enum _ecore_status_t ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn, + struct ecore_sp_vport_start_params *p_params) { struct vport_start_ramrod_data *p_ramrod = OSAL_NULL; struct ecore_spq_entry *p_ent = OSAL_NULL; @@ -375,14 +384,14 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn, p_ramrod->mtu = OSAL_CPU_TO_LE16(p_params->mtu); p_ramrod->handle_ptp_pkts = p_params->handle_ptp_pkts; p_ramrod->inner_vlan_removal_en = p_params->remove_inner_vlan; - p_ramrod->drop_ttl0_en = p_params->drop_ttl0; + p_ramrod->drop_ttl0_en = p_params->drop_ttl0; p_ramrod->untagged = p_params->only_untagged; p_ramrod->zero_placement_offset = p_params->zero_placement_offset; SET_FIELD(rx_mode, ETH_VPORT_RX_MODE_UCAST_DROP_ALL, 1); SET_FIELD(rx_mode, ETH_VPORT_RX_MODE_MCAST_DROP_ALL, 1); - p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(rx_mode); + p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(rx_mode); /* Handle requests for strict behavior on transmission errors */ SET_FIELD(tx_err, ETH_TX_ERR_VALS_ILLEGAL_VLAN_MODE, @@ -431,8 +440,13 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn, } p_ramrod->tx_switching_en = p_params->tx_switching; + p_ramrod->tx_dst_port_mode_config = p_params->tx_dst_port_mode_config; + p_ramrod->tx_dst_port_mode = p_params->tx_dst_port_mode; + p_ramrod->dst_vport_id = p_params->dst_vport_id; + p_ramrod->dst_vport_id_valid = p_params->dst_vport_id_valid; + #ifndef ASIC_ONLY - if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) + if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && ECORE_IS_E4(p_hwfn->p_dev)) p_ramrod->tx_switching_en = 0; #endif @@ -445,9 +459,8 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn, return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL); } -enum _ecore_status_t -ecore_sp_vport_start(struct ecore_hwfn *p_hwfn, - struct ecore_sp_vport_start_params *p_params) +enum _ecore_status_t ecore_sp_vport_start(struct ecore_hwfn *p_hwfn, + struct ecore_sp_vport_start_params *p_params) { if (IS_VF(p_hwfn->p_dev)) return ecore_vf_pf_vport_start(p_hwfn, p_params->vport_id, @@ -455,7 +468,8 @@ ecore_sp_vport_start(struct ecore_hwfn *p_hwfn, p_params->remove_inner_vlan, p_params->tpa_mode, p_params->max_buffers_per_cqe, - p_params->only_untagged); + p_params->only_untagged, + p_params->zero_placement_offset); return ecore_sp_eth_vport_start(p_hwfn, p_params); 
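/* Note on the wrapper above: a VF caller is routed through the PF
 * mailbox (ecore_vf_pf_vport_start), while a PF posts the vport-start
 * ramrod itself. A minimal caller sketch, with illustrative values:
 *
 *	struct ecore_sp_vport_start_params start = { 0 };
 *
 *	start.vport_id = 0;
 *	start.mtu = 1500;
 *	start.max_buffers_per_cqe = 16;
 *	if (ecore_sp_vport_start(p_hwfn, &start) != ECORE_SUCCESS)
 *		goto err;
 */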
} @@ -477,9 +491,10 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn, p_config = &p_ramrod->rss_config; OSAL_BUILD_BUG_ON(ECORE_RSS_IND_TABLE_SIZE != - ETH_RSS_IND_TABLE_ENTRIES_NUM); + ETH_RSS_IND_TABLE_ENTRIES_NUM); - rc = ecore_fw_rss_eng(p_hwfn, p_rss->rss_eng_id, &p_config->rss_id); + rc = ecore_fw_rss_eng(p_hwfn, p_rss->rss_eng_id, + &p_config->rss_id); if (rc != ECORE_SUCCESS) return rc; @@ -489,7 +504,8 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn, p_config->update_rss_key = p_rss->update_rss_key; p_config->rss_mode = p_rss->rss_enable ? - ETH_VPORT_RSS_MODE_REGULAR : ETH_VPORT_RSS_MODE_DISABLED; + ETH_VPORT_RSS_MODE_REGULAR : + ETH_VPORT_RSS_MODE_DISABLED; p_config->capabilities = 0; @@ -520,7 +536,8 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn, p_config->rss_mode, p_config->update_rss_capabilities, p_config->capabilities, - p_config->update_rss_ind_table, p_config->update_rss_key); + p_config->update_rss_ind_table, + p_config->update_rss_key); table_size = OSAL_MIN_T(int, ECORE_RSS_IND_TABLE_SIZE, 1 << p_config->tbl_size); @@ -550,15 +567,15 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn, OSAL_LE16_TO_CPU(p_config->indirection_table[i + 7]), OSAL_LE16_TO_CPU(p_config->indirection_table[i + 8]), OSAL_LE16_TO_CPU(p_config->indirection_table[i + 9]), - OSAL_LE16_TO_CPU(p_config->indirection_table[i + 10]), - OSAL_LE16_TO_CPU(p_config->indirection_table[i + 11]), - OSAL_LE16_TO_CPU(p_config->indirection_table[i + 12]), - OSAL_LE16_TO_CPU(p_config->indirection_table[i + 13]), - OSAL_LE16_TO_CPU(p_config->indirection_table[i + 14]), - OSAL_LE16_TO_CPU(p_config->indirection_table[i + 15])); + OSAL_LE16_TO_CPU(p_config->indirection_table[i + 10]), + OSAL_LE16_TO_CPU(p_config->indirection_table[i + 11]), + OSAL_LE16_TO_CPU(p_config->indirection_table[i + 12]), + OSAL_LE16_TO_CPU(p_config->indirection_table[i + 13]), + OSAL_LE16_TO_CPU(p_config->indirection_table[i + 14]), + OSAL_LE16_TO_CPU(p_config->indirection_table[i + 15])); } - for (i = 0; i < 10; i++) + for (i = 0; i < 10; i++) p_config->rss_key[i] = OSAL_CPU_TO_LE32(p_rss->rss_key[i]); return rc; @@ -578,7 +595,7 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn, /* On B0 emulation we cannot enable Tx, since this would cause writes * to PVFC HW block which isn't implemented in emulation. 
*/ - if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) { + if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && ECORE_IS_E4(p_hwfn->p_dev)) { DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Non-Asic - prevent Tx mode in vport update\n"); p_ramrod->common.update_tx_mode_flg = 0; @@ -599,7 +616,7 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn, SET_FIELD(state, ETH_VPORT_RX_MODE_MCAST_DROP_ALL, !(!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) || - !!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED))); + !!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED))); SET_FIELD(state, ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL, (!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) && @@ -632,6 +649,10 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn, (!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) && !!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED))); + SET_FIELD(state, ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL, + (!!(accept_filter & ECORE_ACCEPT_UCAST_MATCHED) && + !!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED))); + SET_FIELD(state, ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL, !!(accept_filter & ECORE_ACCEPT_BCAST)); @@ -697,11 +718,10 @@ ecore_sp_update_mcast_bin(struct vport_update_ramrod_data *p_ramrod, } } -enum _ecore_status_t -ecore_sp_vport_update(struct ecore_hwfn *p_hwfn, - struct ecore_sp_vport_update_params *p_params, - enum spq_mode comp_mode, - struct ecore_spq_comp_cb *p_comp_data) +enum _ecore_status_t ecore_sp_vport_update(struct ecore_hwfn *p_hwfn, + struct ecore_sp_vport_update_params *p_params, + enum spq_mode comp_mode, + struct ecore_spq_comp_cb *p_comp_data) { struct ecore_rss_params *p_rss_params = p_params->rss_params; struct vport_update_ramrod_data_cmn *p_cmn; @@ -762,13 +782,11 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn, p_cmn->silent_vlan_removal_en = p_params->silent_vlan_removal_flg; p_ramrod->common.tx_switching_en = p_params->tx_switching_flg; - #ifndef ASIC_ONLY if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) if (p_ramrod->common.tx_switching_en || p_ramrod->common.update_tx_switching_en_flg) { - DP_NOTICE(p_hwfn, false, - "FPGA - why are we seeing tx-switching? Overriding it\n"); + DP_NOTICE(p_hwfn, false, "FPGA - why are we seeing tx-switching? 
Overriding it\n"); p_ramrod->common.tx_switching_en = 0; p_ramrod->common.update_tx_switching_en_flg = 1; } @@ -781,8 +799,7 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn, rc = ecore_sp_vport_update_rss(p_hwfn, p_ramrod, p_rss_params); if (rc != ECORE_SUCCESS) { - /* Return spq entry which is taken in ecore_sp_init_request()*/ - ecore_spq_return_entry(p_hwfn, p_ent); + ecore_sp_destroy_request(p_hwfn, p_ent); return rc; } @@ -791,11 +808,19 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn, p_cmn->ctl_frame_ethtype_check_en = p_params->ethtype_chk_en; } + p_cmn->update_tx_dst_port_mode_flg = + p_params->update_tx_dst_port_mode_flag; + p_cmn->tx_dst_port_mode_config = p_params->tx_dst_port_mode_config; + p_cmn->tx_dst_port_mode = p_params->tx_dst_port_mode; + p_cmn->dst_vport_id = p_params->dst_vport_id; + p_cmn->dst_vport_id_valid = p_params->dst_vport_id_valid; + /* Update mcast bins for VFs, PF doesn't use this functionality */ ecore_sp_update_mcast_bin(p_ramrod, p_params); ecore_sp_update_accept_mode(p_hwfn, p_ramrod, p_params->accept_flags); ecore_sp_vport_update_sge_tpa(p_ramrod, p_params->sge_tpa_params); + if (p_params->mtu) { p_ramrod->common.update_mtu_flg = 1; p_ramrod->common.mtu = OSAL_CPU_TO_LE16(p_params->mtu); @@ -805,7 +830,8 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn, } enum _ecore_status_t ecore_sp_vport_stop(struct ecore_hwfn *p_hwfn, - u16 opaque_fid, u8 vport_id) + u16 opaque_fid, + u8 vport_id) { struct vport_stop_ramrod_data *p_ramrod; struct ecore_sp_init_data init_data; @@ -851,14 +877,13 @@ ecore_vf_pf_accept_flags(struct ecore_hwfn *p_hwfn, return ecore_vf_pf_vport_update(p_hwfn, &s_params); } -enum _ecore_status_t -ecore_filter_accept_cmd(struct ecore_dev *p_dev, - u8 vport, - struct ecore_filter_accept_flags accept_flags, - u8 update_accept_any_vlan, - u8 accept_any_vlan, - enum spq_mode comp_mode, - struct ecore_spq_comp_cb *p_comp_data) +enum _ecore_status_t ecore_filter_accept_cmd(struct ecore_dev *p_dev, + u8 vport, + struct ecore_filter_accept_flags accept_flags, + u8 update_accept_any_vlan, + u8 accept_any_vlan, + enum spq_mode comp_mode, + struct ecore_spq_comp_cb *p_comp_data) { struct ecore_sp_vport_update_params vport_update_params; int i, rc; @@ -882,6 +907,12 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev, continue; } + /* Check whether skipping accept flags update is required, + * e.g. when Switchdev is enabled. + */ + if (ecore_get_skip_accept_flags_update(p_hwfn)) + continue; + rc = ecore_sp_vport_update(p_hwfn, &vport_update_params, comp_mode, p_comp_data); if (rc != ECORE_SUCCESS) { @@ -916,8 +947,7 @@ ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn, struct ecore_sp_init_data init_data; enum _ecore_status_t rc = ECORE_NOTIMPL; - DP_VERBOSE(p_hwfn, ECORE_MSG_SP, - "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n", + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n", p_cid->opaque_fid, p_cid->cid, p_cid->abs.queue_id, p_cid->abs.vport_id, p_cid->sb_igu_id); @@ -954,8 +984,7 @@ ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn, ECORE_QCID_LEGACY_VF_RX_PROD); p_ramrod->vf_rx_prod_index = p_cid->vf_qid; - DP_VERBOSE(p_hwfn, ECORE_MSG_SP, - "Queue%s is meant for VF rxq[%02x]\n", + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Queue%s is meant for VF rxq[%02x]\n", b_legacy_vf ? 
" [legacy]" : "", p_cid->vf_qid); p_ramrod->vf_rx_prod_use_zone_a = b_legacy_vf; @@ -971,7 +1000,7 @@ ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn, dma_addr_t bd_chain_phys_addr, dma_addr_t cqe_pbl_addr, u16 cqe_pbl_size, - void OSAL_IOMEM * *pp_prod) + void OSAL_IOMEM **pp_prod) { u32 init_prod_val = 0; @@ -1031,14 +1060,13 @@ ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn, return rc; } -enum _ecore_status_t -ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn, - void **pp_rxq_handles, - u8 num_rxqs, - u8 complete_cqe_flg, - u8 complete_event_flg, - enum spq_mode comp_mode, - struct ecore_spq_comp_cb *p_comp_data) +enum _ecore_status_t ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn, + void **pp_rxq_handles, + u8 num_rxqs, + u8 complete_cqe_flg, + u8 complete_event_flg, + enum spq_mode comp_mode, + struct ecore_spq_comp_cb *p_comp_data) { struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL; struct ecore_spq_entry *p_ent = OSAL_NULL; @@ -1087,6 +1115,50 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn, return rc; } +enum _ecore_status_t +ecore_sp_eth_rx_queues_set_default(struct ecore_hwfn *p_hwfn, + void *p_rxq_handler, + enum spq_mode comp_mode, + struct ecore_spq_comp_cb *p_comp_data) +{ + struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL; + struct ecore_spq_entry *p_ent = OSAL_NULL; + struct ecore_sp_init_data init_data; + struct ecore_queue_cid *p_cid; + enum _ecore_status_t rc = ECORE_SUCCESS; + + if (IS_VF(p_hwfn->p_dev)) + return ECORE_NOTIMPL; + + OSAL_MEMSET(&init_data, 0, sizeof(init_data)); + init_data.comp_mode = comp_mode; + init_data.p_comp_data = p_comp_data; + + p_cid = (struct ecore_queue_cid *)p_rxq_handler; + + /* Get SPQ entry */ + init_data.cid = p_cid->cid; + init_data.opaque_fid = p_cid->opaque_fid; + + rc = ecore_sp_init_request(p_hwfn, &p_ent, + ETH_RAMROD_RX_QUEUE_UPDATE, + PROTOCOLID_ETH, &init_data); + if (rc != ECORE_SUCCESS) + return rc; + + p_ramrod = &p_ent->ramrod.rx_queue_update; + p_ramrod->vport_id = p_cid->abs.vport_id; + + p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id); + p_ramrod->complete_cqe_flg = 0; + p_ramrod->complete_event_flg = 1; + p_ramrod->set_default_rss_queue = 1; + + rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL); + + return rc; +} + static enum _ecore_status_t ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn, struct ecore_queue_cid *p_cid, @@ -1191,7 +1263,7 @@ ecore_eth_pf_tx_queue_start(struct ecore_hwfn *p_hwfn, struct ecore_queue_cid *p_cid, u8 tc, dma_addr_t pbl_addr, u16 pbl_size, - void OSAL_IOMEM * *pp_doorbell) + void OSAL_IOMEM **pp_doorbell) { enum _ecore_status_t rc; u16 pq_id; @@ -1288,8 +1360,7 @@ enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn, return rc; } -static enum eth_filter_action -ecore_filter_action(enum ecore_filter_opcode opcode) +static enum eth_filter_action ecore_filter_action(enum ecore_filter_opcode opcode) { enum eth_filter_action action = MAX_ETH_FILTER_ACTION; @@ -1356,21 +1427,20 @@ ecore_filter_ucast_common(struct ecore_hwfn *p_hwfn, p_ramrod->filter_cmd_hdr.tx = p_filter_cmd->is_tx_filter ? 
1 : 0; #ifndef ASIC_ONLY - if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) { + if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && ECORE_IS_E4(p_hwfn->p_dev)) { DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Non-Asic - prevent Tx filters\n"); p_ramrod->filter_cmd_hdr.tx = 0; } + #endif switch (p_filter_cmd->opcode) { case ECORE_FILTER_REPLACE: case ECORE_FILTER_MOVE: - p_ramrod->filter_cmd_hdr.cmd_cnt = 2; - break; + p_ramrod->filter_cmd_hdr.cmd_cnt = 2; break; default: - p_ramrod->filter_cmd_hdr.cmd_cnt = 1; - break; + p_ramrod->filter_cmd_hdr.cmd_cnt = 1; break; } p_first_filter = &p_ramrod->filter_cmds[0]; @@ -1378,35 +1448,26 @@ ecore_filter_ucast_common(struct ecore_hwfn *p_hwfn, switch (p_filter_cmd->type) { case ECORE_FILTER_MAC: - p_first_filter->type = ETH_FILTER_TYPE_MAC; - break; + p_first_filter->type = ETH_FILTER_TYPE_MAC; break; case ECORE_FILTER_VLAN: - p_first_filter->type = ETH_FILTER_TYPE_VLAN; - break; + p_first_filter->type = ETH_FILTER_TYPE_VLAN; break; case ECORE_FILTER_MAC_VLAN: - p_first_filter->type = ETH_FILTER_TYPE_PAIR; - break; + p_first_filter->type = ETH_FILTER_TYPE_PAIR; break; case ECORE_FILTER_INNER_MAC: - p_first_filter->type = ETH_FILTER_TYPE_INNER_MAC; - break; + p_first_filter->type = ETH_FILTER_TYPE_INNER_MAC; break; case ECORE_FILTER_INNER_VLAN: - p_first_filter->type = ETH_FILTER_TYPE_INNER_VLAN; - break; + p_first_filter->type = ETH_FILTER_TYPE_INNER_VLAN; break; case ECORE_FILTER_INNER_PAIR: - p_first_filter->type = ETH_FILTER_TYPE_INNER_PAIR; - break; + p_first_filter->type = ETH_FILTER_TYPE_INNER_PAIR; break; case ECORE_FILTER_INNER_MAC_VNI_PAIR: p_first_filter->type = ETH_FILTER_TYPE_INNER_MAC_VNI_PAIR; break; case ECORE_FILTER_MAC_VNI_PAIR: - p_first_filter->type = ETH_FILTER_TYPE_MAC_VNI_PAIR; - break; + p_first_filter->type = ETH_FILTER_TYPE_MAC_VNI_PAIR; break; case ECORE_FILTER_VNI: - p_first_filter->type = ETH_FILTER_TYPE_VNI; - break; + p_first_filter->type = ETH_FILTER_TYPE_VNI; break; case ECORE_FILTER_UNUSED: /* @DPDK */ - p_first_filter->type = MAX_ETH_FILTER_TYPE; - break; + p_first_filter->type = MAX_ETH_FILTER_TYPE; break; } if ((p_first_filter->type == ETH_FILTER_TYPE_MAC) || @@ -1458,24 +1519,24 @@ ecore_filter_ucast_common(struct ecore_hwfn *p_hwfn, DP_NOTICE(p_hwfn, true, "%d is not supported yet\n", p_filter_cmd->opcode); + ecore_sp_destroy_request(p_hwfn, *pp_ent); return ECORE_NOTIMPL; } p_first_filter->action = action; p_first_filter->vport_id = - (p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ? - vport_to_remove_from : vport_to_add_to; + (p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ? 
+ vport_to_remove_from : vport_to_add_to; } return ECORE_SUCCESS; } -enum _ecore_status_t -ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn, - u16 opaque_fid, - struct ecore_filter_ucast *p_filter_cmd, - enum spq_mode comp_mode, - struct ecore_spq_comp_cb *p_comp_data) +enum _ecore_status_t ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn, + u16 opaque_fid, + struct ecore_filter_ucast *p_filter_cmd, + enum spq_mode comp_mode, + struct ecore_spq_comp_cb *p_comp_data) { struct vport_filter_update_ramrod_data *p_ramrod = OSAL_NULL; struct ecore_spq_entry *p_ent = OSAL_NULL; @@ -1494,22 +1555,25 @@ ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn, rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL); if (rc != ECORE_SUCCESS) { - DP_ERR(p_hwfn, "Unicast filter ADD command failed %d\n", rc); + DP_ERR(p_hwfn, + "Unicast filter ADD command failed %d\n", + rc); return rc; } DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Unicast filter configured, opcode = %s, type = %s, cmd_cnt = %d, is_rx_filter = %d, is_tx_filter = %d\n", (p_filter_cmd->opcode == ECORE_FILTER_ADD) ? "ADD" : - ((p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ? - "REMOVE" : - ((p_filter_cmd->opcode == ECORE_FILTER_MOVE) ? - "MOVE" : "REPLACE")), + ((p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ? + "REMOVE" : + ((p_filter_cmd->opcode == ECORE_FILTER_MOVE) ? + "MOVE" : "REPLACE")), (p_filter_cmd->type == ECORE_FILTER_MAC) ? "MAC" : - ((p_filter_cmd->type == ECORE_FILTER_VLAN) ? - "VLAN" : "MAC & VLAN"), + ((p_filter_cmd->type == ECORE_FILTER_VLAN) ? + "VLAN" : "MAC & VLAN"), p_ramrod->filter_cmd_hdr.cmd_cnt, - p_filter_cmd->is_rx_filter, p_filter_cmd->is_tx_filter); + p_filter_cmd->is_rx_filter, + p_filter_cmd->is_tx_filter); DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "vport_to_add_to = %d, vport_to_remove_from = %d, mac = %2x:%2x:%2x:%2x:%2x:%2x, vlan = %d\n", p_filter_cmd->vport_to_add_to, @@ -1531,10 +1595,11 @@ ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn, static u32 ecore_calc_crc32c(u8 *crc32_packet, u32 crc32_length, u32 crc32_seed) { u32 byte = 0, bit = 0, crc32_result = crc32_seed; - u8 msb = 0, current_byte = 0; + u8 msb = 0, current_byte = 0; if ((crc32_packet == OSAL_NULL) || - (crc32_length == 0) || ((crc32_length % 8) != 0)) { + (crc32_length == 0) || + ((crc32_length % 8) != 0)) { return crc32_result; } @@ -1545,7 +1610,7 @@ static u32 ecore_calc_crc32c(u8 *crc32_packet, u32 crc32_length, u32 crc32_seed) crc32_result = crc32_result << 1; if (msb != (0x1 & (current_byte >> bit))) { crc32_result = crc32_result ^ CRC32_POLY; - crc32_result |= 1; + crc32_result |= 1; /*crc32_result[0] = 1;*/ } } } @@ -1555,7 +1620,7 @@ static u32 ecore_calc_crc32c(u8 *crc32_packet, u32 crc32_length, u32 crc32_seed) static u32 ecore_crc32c_le(u32 seed, u8 *mac) { - u32 packet_buf[2] = { 0 }; + u32 packet_buf[2] = {0}; OSAL_MEMCPY((u8 *)(&packet_buf[0]), &mac[0], 6); return ecore_calc_crc32c((u8 *)packet_buf, 8, seed); @@ -1583,12 +1648,10 @@ ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn, int i; if (p_filter_cmd->opcode == ECORE_FILTER_ADD) - rc = ecore_fw_vport(p_hwfn, - p_filter_cmd->vport_to_add_to, + rc = ecore_fw_vport(p_hwfn, p_filter_cmd->vport_to_add_to, &abs_vport_id); else - rc = ecore_fw_vport(p_hwfn, - p_filter_cmd->vport_to_remove_from, + rc = ecore_fw_vport(p_hwfn, p_filter_cmd->vport_to_remove_from, &abs_vport_id); if (rc != ECORE_SUCCESS) return rc; @@ -1644,19 +1707,18 @@ ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn, return rc; } -enum _ecore_status_t -ecore_filter_mcast_cmd(struct ecore_dev *p_dev, - struct ecore_filter_mcast 
*p_filter_cmd, - enum spq_mode comp_mode, - struct ecore_spq_comp_cb *p_comp_data) +enum _ecore_status_t ecore_filter_mcast_cmd(struct ecore_dev *p_dev, + struct ecore_filter_mcast *p_filter_cmd, + enum spq_mode comp_mode, + struct ecore_spq_comp_cb *p_comp_data) { enum _ecore_status_t rc = ECORE_SUCCESS; int i; /* only ADD and REMOVE operations are supported for multi-cast */ - if ((p_filter_cmd->opcode != ECORE_FILTER_ADD && + if ((p_filter_cmd->opcode != ECORE_FILTER_ADD && (p_filter_cmd->opcode != ECORE_FILTER_REMOVE)) || - (p_filter_cmd->num_mc_addrs > ECORE_MAX_MC_ADDRS)) { + (p_filter_cmd->num_mc_addrs > ECORE_MAX_MC_ADDRS)) { return ECORE_INVAL; } @@ -1670,7 +1732,8 @@ ecore_filter_mcast_cmd(struct ecore_dev *p_dev, rc = ecore_sp_eth_filter_mcast(p_hwfn, p_filter_cmd, - comp_mode, p_comp_data); + comp_mode, + p_comp_data); if (rc != ECORE_SUCCESS) break; } @@ -1678,11 +1741,10 @@ ecore_filter_mcast_cmd(struct ecore_dev *p_dev, return rc; } -enum _ecore_status_t -ecore_filter_ucast_cmd(struct ecore_dev *p_dev, - struct ecore_filter_ucast *p_filter_cmd, - enum spq_mode comp_mode, - struct ecore_spq_comp_cb *p_comp_data) +enum _ecore_status_t ecore_filter_ucast_cmd(struct ecore_dev *p_dev, + struct ecore_filter_ucast *p_filter_cmd, + enum spq_mode comp_mode, + struct ecore_spq_comp_cb *p_comp_data) { enum _ecore_status_t rc = ECORE_SUCCESS; int i; @@ -1696,11 +1758,18 @@ ecore_filter_ucast_cmd(struct ecore_dev *p_dev, continue; } + /* Check whether skipping unicast filter update is required, + * e.g. when Switchdev mode is enabled. + */ + if (ecore_get_skip_ucast_filter_update(p_hwfn)) + continue; + opaque_fid = p_hwfn->hw_info.opaque_fid; rc = ecore_sp_eth_filter_ucast(p_hwfn, opaque_fid, p_filter_cmd, - comp_mode, p_comp_data); + comp_mode, + p_comp_data); if (rc != ECORE_SUCCESS) break; } @@ -1715,7 +1784,7 @@ static void __ecore_get_vport_pstats_addrlen(struct ecore_hwfn *p_hwfn, { if (IS_PF(p_hwfn->p_dev)) { *p_addr = BAR0_MAP_REG_PSDM_RAM + - PSTORM_QUEUE_STAT_OFFSET(statistics_bin); + PSTORM_QUEUE_STAT_OFFSET(statistics_bin); *p_len = sizeof(struct eth_pstorm_per_queue_stat); } else { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; @@ -1738,7 +1807,8 @@ static void __ecore_get_vport_pstats(struct ecore_hwfn *p_hwfn, statistics_bin); OSAL_MEMSET(&pstats, 0, sizeof(pstats)); - ecore_memcpy_from(p_hwfn, p_ptt, &pstats, pstats_addr, pstats_len); + ecore_memcpy_from(p_hwfn, p_ptt, &pstats, + pstats_addr, pstats_len); p_stats->common.tx_ucast_bytes += HILO_64_REGPAIR(pstats.sent_ucast_bytes); @@ -1765,7 +1835,7 @@ static void __ecore_get_vport_tstats(struct ecore_hwfn *p_hwfn, if (IS_PF(p_hwfn->p_dev)) { tstats_addr = BAR0_MAP_REG_TSDM_RAM + - TSTORM_PORT_STAT_OFFSET(MFW_PORT(p_hwfn)); + TSTORM_PORT_STAT_OFFSET(MFW_PORT(p_hwfn)); tstats_len = sizeof(struct tstorm_per_port_stat); } else { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; @@ -1776,7 +1846,8 @@ static void __ecore_get_vport_tstats(struct ecore_hwfn *p_hwfn, } OSAL_MEMSET(&tstats, 0, sizeof(tstats)); - ecore_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, tstats_len); + ecore_memcpy_from(p_hwfn, p_ptt, &tstats, + tstats_addr, tstats_len); p_stats->common.mftag_filter_discards += HILO_64_REGPAIR(tstats.mftag_filter_discard); @@ -1792,7 +1863,7 @@ static void __ecore_get_vport_ustats_addrlen(struct ecore_hwfn *p_hwfn, { if (IS_PF(p_hwfn->p_dev)) { *p_addr = BAR0_MAP_REG_USDM_RAM + - USTORM_QUEUE_STAT_OFFSET(statistics_bin); + USTORM_QUEUE_STAT_OFFSET(statistics_bin); *p_len = sizeof(struct eth_ustorm_per_queue_stat); } 
else { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; @@ -1815,7 +1886,8 @@ static void __ecore_get_vport_ustats(struct ecore_hwfn *p_hwfn, statistics_bin); OSAL_MEMSET(&ustats, 0, sizeof(ustats)); - ecore_memcpy_from(p_hwfn, p_ptt, &ustats, ustats_addr, ustats_len); + ecore_memcpy_from(p_hwfn, p_ptt, &ustats, + ustats_addr, ustats_len); p_stats->common.rx_ucast_bytes += HILO_64_REGPAIR(ustats.rcv_ucast_bytes); @@ -1837,7 +1909,7 @@ static void __ecore_get_vport_mstats_addrlen(struct ecore_hwfn *p_hwfn, { if (IS_PF(p_hwfn->p_dev)) { *p_addr = BAR0_MAP_REG_MSDM_RAM + - MSTORM_QUEUE_STAT_OFFSET(statistics_bin); + MSTORM_QUEUE_STAT_OFFSET(statistics_bin); *p_len = sizeof(struct eth_mstorm_per_queue_stat); } else { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; @@ -1860,7 +1932,8 @@ static void __ecore_get_vport_mstats(struct ecore_hwfn *p_hwfn, statistics_bin); OSAL_MEMSET(&mstats, 0, sizeof(mstats)); - ecore_memcpy_from(p_hwfn, p_ptt, &mstats, mstats_addr, mstats_len); + ecore_memcpy_from(p_hwfn, p_ptt, &mstats, + mstats_addr, mstats_len); p_stats->common.no_buff_discards += HILO_64_REGPAIR(mstats.no_buff_discard); @@ -1981,7 +2054,7 @@ void __ecore_get_vport_stats(struct ecore_hwfn *p_hwfn, __ecore_get_vport_pstats(p_hwfn, p_ptt, stats, statistics_bin); #ifndef ASIC_ONLY - /* Avoid getting PORT stats for emulation. */ + /* Avoid getting PORT stats for emulation.*/ if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) return; #endif @@ -2001,12 +2074,14 @@ static void _ecore_get_vport_stats(struct ecore_dev *p_dev, for_each_hwfn(p_dev, i) { struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i]; struct ecore_ptt *p_ptt = IS_PF(p_dev) ? - ecore_ptt_acquire(p_hwfn) : OSAL_NULL; + ecore_ptt_acquire(p_hwfn) : OSAL_NULL; bool b_get_port_stats; if (IS_PF(p_dev)) { + if (p_dev->stats_bin_id) + fw_vport = (u8)p_dev->stats_bin_id; /* The main vport index is relative first */ - if (ecore_fw_vport(p_hwfn, 0, &fw_vport)) { + else if (ecore_fw_vport(p_hwfn, 0, &fw_vport)) { DP_ERR(p_hwfn, "No vport available!\n"); goto out; } @@ -2017,7 +2092,8 @@ static void _ecore_get_vport_stats(struct ecore_dev *p_dev, continue; } - b_get_port_stats = IS_PF(p_dev) && IS_LEAD_HWFN(p_hwfn); + b_get_port_stats = p_dev->stats_bin_id ? 0 : + IS_PF(p_dev) && IS_LEAD_HWFN(p_hwfn); __ecore_get_vport_stats(p_hwfn, p_ptt, stats, fw_vport, b_get_port_stats); @@ -2025,6 +2101,8 @@ static void _ecore_get_vport_stats(struct ecore_dev *p_dev, if (IS_PF(p_dev) && p_ptt) ecore_ptt_release(p_hwfn, p_ptt); } + + p_dev->stats_bin_id = 0; } void ecore_get_vport_stats(struct ecore_dev *p_dev, @@ -2058,7 +2136,7 @@ void ecore_reset_vport_stats(struct ecore_dev *p_dev) struct eth_ustorm_per_queue_stat ustats; struct eth_pstorm_per_queue_stat pstats; struct ecore_ptt *p_ptt = IS_PF(p_dev) ? 
- ecore_ptt_acquire(p_hwfn) : OSAL_NULL; + ecore_ptt_acquire(p_hwfn) : OSAL_NULL; u32 addr = 0, len = 0; if (IS_PF(p_dev) && !p_ptt) { @@ -2099,13 +2177,10 @@ ecore_arfs_mode_to_hsi(enum ecore_filter_config_mode mode) { if (mode == ECORE_FILTER_CONFIG_MODE_5_TUPLE) return GFT_PROFILE_TYPE_4_TUPLE; - if (mode == ECORE_FILTER_CONFIG_MODE_IP_DEST) return GFT_PROFILE_TYPE_IP_DST_ADDR; - if (mode == ECORE_FILTER_CONFIG_MODE_TUNN_TYPE) return GFT_PROFILE_TYPE_TUNNEL_TYPE; - if (mode == ECORE_FILTER_CONFIG_MODE_IP_SRC) return GFT_PROFILE_TYPE_IP_SRC_ADDR; @@ -2116,7 +2191,7 @@ void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_arfs_config_params *p_cfg_params) { - if (OSAL_GET_BIT(ECORE_MF_DISABLE_ARFS, &p_hwfn->p_dev->mf_bits)) + if (OSAL_TEST_BIT(ECORE_MF_DISABLE_ARFS, &p_hwfn->p_dev->mf_bits)) return; if (p_cfg_params->mode != ECORE_FILTER_CONFIG_MODE_DISABLE) { @@ -2126,17 +2201,18 @@ void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn, p_cfg_params->ipv4, p_cfg_params->ipv6, ecore_arfs_mode_to_hsi(p_cfg_params->mode)); + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, - "tcp = %s, udp = %s, ipv4 = %s, ipv6 =%s\n", + "Configured Filtering: tcp = %s, udp = %s, ipv4 = %s, ipv6 =%s mode=%08x\n", p_cfg_params->tcp ? "Enable" : "Disable", p_cfg_params->udp ? "Enable" : "Disable", p_cfg_params->ipv4 ? "Enable" : "Disable", - p_cfg_params->ipv6 ? "Enable" : "Disable"); + p_cfg_params->ipv6 ? "Enable" : "Disable", + (u32)p_cfg_params->mode); } else { + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Disabled Filtering\n"); ecore_gft_disable(p_hwfn, p_ptt, p_hwfn->rel_pf_id); } - DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Configured ARFS mode : %d\n", - (int)p_cfg_params->mode); } enum _ecore_status_t @@ -2181,13 +2257,13 @@ ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn, rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id); if (rc) - return rc; + goto err; if (p_params->qid != ECORE_RFS_NTUPLE_QID_RSS) { rc = ecore_fw_l2_queue(p_hwfn, p_params->qid, &abs_rx_q_id); if (rc) - return rc; + goto err; p_ramrod->rx_qid_valid = 1; p_ramrod->rx_qid = OSAL_CPU_TO_LE16(abs_rx_q_id); @@ -2198,17 +2274,20 @@ ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn, p_ramrod->flow_id_valid = 0; p_ramrod->flow_id = 0; - p_ramrod->filter_action = p_params->b_is_add ? GFT_ADD_FILTER : GFT_DELETE_FILTER; DP_VERBOSE(p_hwfn, ECORE_MSG_SP, - "V[%0x], Q[%04x] - %s filter from 0x%lx [length %04xb]\n", + "V[%0x], Q[%04x] - %s filter from 0x%" PRIx64 " [length %04xb]\n", abs_vport_id, abs_rx_q_id, p_params->b_is_add ? 
"Adding" : "Removing", - (unsigned long)p_params->addr, p_params->length); + (u64)p_params->addr, p_params->length); return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL); + +err: + ecore_sp_destroy_request(p_hwfn, p_ent); + return rc; } enum _ecore_status_t ecore_get_rxq_coalesce(struct ecore_hwfn *p_hwfn, @@ -2323,16 +2402,22 @@ ecore_eth_tx_queue_maxrate(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_queue_cid *p_cid, u32 rate) { - u16 rl_id; - u8 vport; + u16 qid, vport_id, rl_id; - vport = (u8)ecore_get_qm_vport_idx_rl(p_hwfn, p_cid->rel.queue_id); + if (!IS_ECORE_PACING(p_hwfn)) { + DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG, + "Packet pacing is not enabled\n"); + return ECORE_INVAL; + } + + qid = p_cid->rel.queue_id; + vport_id = ecore_get_pq_vport_id_from_rl(p_hwfn, qid); + rl_id = ecore_get_pq_rl_id_from_rl(p_hwfn, qid); DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, - "About to rate limit qm vport %d for queue %d with rate %d\n", - vport, p_cid->rel.queue_id, rate); + "About to rate limit qm vport %d (rl_id %d) for queue %d with rate %d\n", + vport_id, rl_id, qid, rate); - rl_id = vport; /* The "rl_id" is set as the "vport_id" */ return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, rate); } @@ -2357,7 +2442,8 @@ ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn, if (rc != ECORE_SUCCESS) return rc; - addr = (u8 *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM + + addr = (u8 OSAL_IOMEM *)p_hwfn->regview + + GTT_BAR0_MAP_REG_TSDM_RAM + TSTORM_ETH_RSS_UPDATE_OFFSET(p_hwfn->rel_pf_id); *(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr); @@ -2386,3 +2472,62 @@ ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } + +enum _ecore_status_t +ecore_set_rx_config(struct ecore_hwfn *p_hwfn, struct ecore_rx_config *p_conf) +{ + struct ecore_rx_config *rx_conf = &p_hwfn->rx_conf; + u8 abs_dst_vport_id = 0; + enum _ecore_status_t rc; + + if (OSAL_TEST_BIT(ECORE_RX_CONF_SET_LB_VPORT, &p_conf->flags)) { + rc = ecore_fw_vport(p_hwfn, p_conf->loopback_dst_vport_id, + &abs_dst_vport_id); + if (rc) + return rc; + } + + rx_conf->flags = p_conf->flags; + rx_conf->loopback_dst_vport_id = abs_dst_vport_id; + + return ECORE_SUCCESS; +} + +int ecore_get_skip_accept_flags_update(struct ecore_hwfn *p_hwfn) +{ + return OSAL_TEST_BIT(ECORE_RX_CONF_SKIP_ACCEPT_FLAGS_UPDATE, + &p_hwfn->rx_conf.flags); +} + +int ecore_get_skip_ucast_filter_update(struct ecore_hwfn *p_hwfn) +{ + return OSAL_TEST_BIT(ECORE_RX_CONF_SKIP_UCAST_FILTER_UPDATE, + &p_hwfn->rx_conf.flags); +} + +int ecore_get_loopback_vport(struct ecore_hwfn *p_hwfn, u8 *abs_vport_id) +{ + *abs_vport_id = p_hwfn->rx_conf.loopback_dst_vport_id; + + return OSAL_TEST_BIT(ECORE_RX_CONF_SET_LB_VPORT, + &p_hwfn->rx_conf.flags); +} + +enum _ecore_status_t ecore_set_vf_stats_bin_id(struct ecore_dev *p_dev, + u16 vf_id) +{ + struct ecore_vf_info *p_vf; + struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev); + + p_vf = ecore_iov_get_vf_info(p_hwfn, vf_id, true); + if (!p_vf) + return ECORE_INVAL; + + p_dev->stats_bin_id = p_vf->abs_vf_id + ECORE_VF_START_BIN_ID; + + return ECORE_SUCCESS; +} + +#ifdef _NTDDK_ +#pragma warning(pop) +#endif diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h index 8fa403029..3e72f1539 100644 --- a/drivers/net/qede/base/ecore_l2.h +++ b/drivers/net/qede/base/ecore_l2.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. 
- * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_L2_H__ #define __ECORE_L2_H__ @@ -162,4 +162,18 @@ enum _ecore_status_t ecore_get_txq_coalesce(struct ecore_hwfn *p_hwfn, struct ecore_queue_cid *p_cid, u16 *p_hw_coal); +/** + * @brief - Sets a Tx rate limit on a specific queue + * + * @params p_hwfn + * @params p_ptt + * @params p_cid + * @params rate + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t +ecore_eth_tx_queue_maxrate(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_queue_cid *p_cid, u32 rate); #endif diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h index bebf412ed..13f559fc7 100644 --- a/drivers/net/qede/base/ecore_l2_api.h +++ b/drivers/net/qede/base/ecore_l2_api.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_L2_API_H__ #define __ECORE_L2_API_H__ @@ -24,8 +24,30 @@ enum ecore_rss_caps { /* Should be the same as ETH_RSS_IND_TABLE_ENTRIES_NUM */ #define ECORE_RSS_IND_TABLE_SIZE 128 #define ECORE_RSS_KEY_SIZE 10 /* size in 32b chunks */ + +#define ECORE_MAX_PHC_DRIFT_PPB 291666666 +#define ECORE_VF_START_BIN_ID 16 + +enum ecore_ptp_filter_type { + ECORE_PTP_FILTER_NONE, + ECORE_PTP_FILTER_ALL, + ECORE_PTP_FILTER_V1_L4_EVENT, + ECORE_PTP_FILTER_V1_L4_GEN, + ECORE_PTP_FILTER_V2_L4_EVENT, + ECORE_PTP_FILTER_V2_L4_GEN, + ECORE_PTP_FILTER_V2_L2_EVENT, + ECORE_PTP_FILTER_V2_L2_GEN, + ECORE_PTP_FILTER_V2_EVENT, + ECORE_PTP_FILTER_V2_GEN }; + +enum ecore_ptp_hwtstamp_tx_type { + ECORE_PTP_HWTSTAMP_TX_OFF, + ECORE_PTP_HWTSTAMP_TX_ON, }; #endif +#ifndef __EXTRACT__LINUX__ struct ecore_queue_start_common_params { /* Should always be relative to entity sending this. */ u8 vport_id; @@ -36,6 +58,8 @@ struct ecore_queue_start_common_params { struct ecore_sb_info *p_sb; u8 sb_idx; + + u8 tc; }; struct ecore_rxq_start_ret_params { @@ -47,6 +71,7 @@ struct ecore_txq_start_ret_params { void OSAL_IOMEM *p_doorbell; void *p_handle; }; +#endif struct ecore_rss_params { u8 update_rss_config; @@ -110,7 +135,7 @@ struct ecore_filter_ucast { u8 is_tx_filter; u8 vport_to_add_to; u8 vport_to_remove_from; - unsigned char mac[ETH_ALEN]; + unsigned char mac[ECORE_ETH_ALEN]; u8 assert_on_error; u16 vlan; u32 vni; @@ -123,7 +148,7 @@ struct ecore_filter_mcast { u8 vport_to_remove_from; u8 num_mc_addrs; #define ECORE_MAX_MC_ADDRS 64 - unsigned char mac[ECORE_MAX_MC_ADDRS][ETH_ALEN]; + unsigned char mac[ECORE_MAX_MC_ADDRS][ECORE_ETH_ALEN]; }; struct ecore_filter_accept_flags { @@ -140,6 +165,7 @@ struct ecore_filter_accept_flags { #define ECORE_ACCEPT_ANY_VNI 0x40 }; +#ifndef __EXTRACT__LINUX__ enum ecore_filter_config_mode { ECORE_FILTER_CONFIG_MODE_DISABLE, ECORE_FILTER_CONFIG_MODE_5_TUPLE, @@ -148,6 +174,7 @@ enum ecore_filter_config_mode { ECORE_FILTER_CONFIG_MODE_TUNN_TYPE, ECORE_FILTER_CONFIG_MODE_IP_SRC, }; +#endif struct ecore_arfs_config_params { bool tcp; @@ -247,7 +274,7 @@ ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn, * @param tc traffic class to use with this L2 txq * @param pbl_addr address of the pbl array * @param pbl_size number of entries in pbl - * @param p_ret_params Pointer to fill the return parameters in. + * @param p_ret_params Pointer to fill the return parameters in. 
* * @return enum _ecore_status_t */ @@ -293,6 +320,10 @@ struct ecore_sp_vport_start_params { bool zero_placement_offset; bool check_mac; bool check_ethtype; + u8 tx_dst_port_mode_config; /* Configurations for tx forwarding */ + u8 dst_vport_id; /* Destination Vport ID to forward the packet */ + u8 tx_dst_port_mode; /* Destination tx to forward the packet */ + u8 dst_vport_id_valid; /* If set, dst_vport_id has valid value */ /* Strict behavior on transmission errors */ bool b_err_illegal_vlan_mode; @@ -346,6 +377,7 @@ struct ecore_sp_vport_update_params { struct ecore_rss_params *rss_params; struct ecore_filter_accept_flags accept_flags; struct ecore_sge_tpa_params *sge_tpa_params; + /* MTU change - notice this requires the vport to be disabled. * If non-zero, value would be used. */ @@ -353,6 +385,20 @@ struct ecore_sp_vport_update_params { u8 update_ctl_frame_check; u8 mac_chk_en; u8 ethtype_chk_en; + + u8 update_tx_dst_port_mode_flag; + + /* Configurations for tx forwarding */ + u8 tx_dst_port_mode_config; + + /* Destination Vport ID to forward the packet */ + u8 dst_vport_id; + + /* Destination tx to forward the packet */ + u8 tx_dst_port_mode; + + /* If set, dst_vport_id has valid value */ + u8 dst_vport_id_valid; }; /** @@ -425,6 +471,27 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn, enum spq_mode comp_mode, struct ecore_spq_comp_cb *p_comp_data); +/** + * @brief ecore_sp_eth_rx_queues_set_default - + * + * This ramrod sets an RX queue as the default RSS queue. + * + * @note Final phase API. + * + * @param p_hwfn + * @param p_rxq_handler queue handler to be updated. + * @param comp_mode + * @param p_comp_data + * + * @return enum _ecore_status_t + */ + +enum _ecore_status_t +ecore_sp_eth_rx_queues_set_default(struct ecore_hwfn *p_hwfn, + void *p_rxq_handler, + enum spq_mode comp_mode, + struct ecore_spq_comp_cb *p_comp_data); + void __ecore_get_vport_stats(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_eth_stats *stats, @@ -450,6 +517,7 @@ void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_arfs_config_params *p_cfg_params); +#ifndef __EXTRACT__LINUX__ struct ecore_ntuple_filter_params { /* Physically mapped address containing header of buffer to be used * as filter. */ u16 length; /* Relative queue-id to receive classified packet */ - #define ECORE_RFS_NTUPLE_QID_RSS ((u16)-1) +#define ECORE_RFS_NTUPLE_QID_RSS ((u16)-1) u16 qid; /* Identifier can either be according to vport-id or vfid */ bool b_is_vf; u8 vport_id; u8 vf_id; - /* true if this filter is to be added. Else to be removed */ + /* true iff this filter is to be added. Else to be removed */ bool b_is_add; - /* If packet needs to be dropped */ + /* If packet needs to be dropped */ bool b_is_drop; }; +#endif /** * @brief - ecore_configure_rfs_ntuple_filter @@ -514,4 +583,67 @@ ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn, u8 vport_id, u8 ind_table_index, u16 ind_table_value); + +/** + * @brief - ecore_set_rx_config + * + * Set a new Rx configuration. The current options are: + * - Skip updating new accept flags. + * - Skip updating unicast filter (mac/vlan/mac-vlan pair). + * - Set the PF/VFs vports to loopback to a specific vport ID. 
+
+/**
+ * @brief - ecore_get_skip_accept_flags_update
+ *
+ * Returns whether accept flags update should be skipped.
+ *
+ * @param p_hwfn
+ *
+ * @return int
+ */
+int ecore_get_skip_accept_flags_update(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief - ecore_get_skip_ucast_filter_update
+ *
+ * Returns whether unicast filter update should be skipped.
+ *
+ * @param p_hwfn
+ *
+ * @return int
+ */
+int ecore_get_skip_ucast_filter_update(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief - ecore_get_loopback_vport
+ *
+ * Returns whether loopback to a specific vport is required.
+ *
+ * @param p_hwfn
+ * @param abs_vport_id	The destination vport ID to loopback to.
+ *
+ * @return int
+ */
+int ecore_get_loopback_vport(struct ecore_hwfn *p_hwfn, u8 *abs_vport_id);
+
+/**
+ * @brief - ecore_set_vf_stats_bin_id
+ *
+ * Set the stats bin ID of a given VF
+ *
+ * @param p_dev
+ * @param vf_id	VF number
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_set_vf_stats_bin_id(struct ecore_dev *p_dev,
+					       u16 vf_id);
 #endif
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6f092dd37..2e945556c 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "ecore_status.h"
@@ -25,18 +25,18 @@
 #define GRCBASE_MCP 0xe00000
 
 #define ECORE_MCP_RESP_ITER_US 10
-#define ECORE_DRV_MB_MAX_RETRIES (500 * 1000) /* Account for 5 sec */
-#define ECORE_MCP_RESET_RETRIES (50 * 1000) /* Account for 500 msec */
+#define ECORE_DRV_MB_MAX_RETRIES	(500 * 1000)	/* Account for 5 sec */
+#define ECORE_MCP_RESET_RETRIES		(50 * 1000)	/* Account for 500 msec */
 
 #ifndef ASIC_ONLY
 /* Non-ASIC:
  * The waiting interval is multiplied by 100 to reduce the impact of the
  * built-in delay of 100usec in each ecore_rd().
- * In addition, a factor of 4 comparing to ASIC is applied.
+ * In addition, a factor of 6 compared to ASIC is applied.
*/ #define ECORE_EMUL_MCP_RESP_ITER_US (ECORE_MCP_RESP_ITER_US * 100) -#define ECORE_EMUL_DRV_MB_MAX_RETRIES ((ECORE_DRV_MB_MAX_RETRIES / 100) * 4) -#define ECORE_EMUL_MCP_RESET_RETRIES ((ECORE_MCP_RESET_RETRIES / 100) * 4) +#define ECORE_EMUL_DRV_MB_MAX_RETRIES ((ECORE_DRV_MB_MAX_RETRIES / 100) * 6) +#define ECORE_EMUL_MCP_RESET_RETRIES ((ECORE_MCP_RESET_RETRIES / 100) * 6) #endif #define DRV_INNER_WR(_p_hwfn, _p_ptt, _ptr, _offset, _val) \ @@ -54,11 +54,16 @@ DRV_INNER_RD(_p_hwfn, _p_ptt, drv_mb_addr, \ OFFSETOF(struct public_drv_mb, _field)) -#define PDA_COMP (((FW_MAJOR_VERSION) + (FW_MINOR_VERSION << 8)) << \ - DRV_ID_PDA_COMP_VER_OFFSET) +#define PDA_COMP ((FW_MAJOR_VERSION) + (FW_MINOR_VERSION << 8)) #define MCP_BYTES_PER_MBIT_OFFSET 17 +#ifdef _NTDDK_ +#pragma warning(push) +#pragma warning(disable : 28167) +#pragma warning(disable : 28123) +#endif + #ifndef ASIC_ONLY static int loaded; static int loaded_port[MAX_NUM_PORTS] = { 0 }; @@ -66,12 +71,13 @@ static int loaded_port[MAX_NUM_PORTS] = { 0 }; bool ecore_mcp_is_init(struct ecore_hwfn *p_hwfn) { - if (!p_hwfn->mcp_info || !p_hwfn->mcp_info->public_base) + if (!p_hwfn->mcp_info || !p_hwfn->mcp_info->b_mfw_running) return false; return true; } -void ecore_mcp_cmd_port_init(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) +void ecore_mcp_cmd_port_init(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base, PUBLIC_PORT); @@ -84,7 +90,8 @@ void ecore_mcp_cmd_port_init(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) p_hwfn->mcp_info->port_addr, MFW_PORT(p_hwfn)); } -void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) +void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { u32 length = MFW_DRV_MSG_MAX_DWORDS(p_hwfn->mcp_info->mfw_mb_length); OSAL_BE32 tmp; @@ -104,7 +111,7 @@ void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) (i << 2) + sizeof(u32)); ((u32 *)p_hwfn->mcp_info->mfw_mb_cur)[i] = - OSAL_BE32_TO_CPU(tmp); + OSAL_BE32_TO_CPU(tmp); } } @@ -180,10 +187,13 @@ enum _ecore_status_t ecore_mcp_free(struct ecore_hwfn *p_hwfn) #ifdef CONFIG_ECORE_LOCK_ALLOC OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->cmd_lock); OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->link_lock); + OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->dbg_data_lock); + OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->unload_lock); #endif } OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info); + p_hwfn->mcp_info = OSAL_NULL; return ECORE_SUCCESS; } @@ -192,28 +202,52 @@ enum _ecore_status_t ecore_mcp_free(struct ecore_hwfn *p_hwfn) #define ECORE_MCP_SHMEM_RDY_MAX_RETRIES 20 #define ECORE_MCP_SHMEM_RDY_ITER_MS 50 -static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +#define MCP_REG_CACHE_PAGING_ENABLE_ENABLE_MASK \ + MCP_REG_CACHE_PAGING_ENABLE_ENABLE + +enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { u32 drv_mb_offsize, mfw_mb_offsize, cache_paging_en_addr, val; struct ecore_mcp_info *p_info = p_hwfn->mcp_info; u8 cnt = ECORE_MCP_SHMEM_RDY_MAX_RETRIES; u8 msec = ECORE_MCP_SHMEM_RDY_ITER_MS; u32 mcp_pf_id = MCP_PF_ID(p_hwfn); + u32 recovery_mode; /* Disabled cache paging in the MCP implies that no MFW is running */ - cache_paging_en_addr = MCP_REG_CACHE_PAGING_ENABLE_BB_K2; + cache_paging_en_addr = ECORE_IS_E4(p_hwfn->p_dev) ? 
+ MCP_REG_CACHE_PAGING_ENABLE_BB_K2 : + MCP_REG_CACHE_PAGING_ENABLE_E5; val = ecore_rd(p_hwfn, p_ptt, cache_paging_en_addr); + /* MISCS_REG_GENERIC_HW_0[31] is indication that the MCP/ROM is in + * recovery mode, serving NVM upgrade mailbox commands. + */ + recovery_mode = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_HW_0) & + 0x80000000; + p_info->recovery_mode = recovery_mode ? true : false; + p_info->b_mfw_running = + GET_FIELD(val, MCP_REG_CACHE_PAGING_ENABLE_ENABLE) || + p_info->recovery_mode; + if (!p_info->b_mfw_running) { +#ifndef ASIC_ONLY + if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) + DP_INFO(p_hwfn, + "Emulation: Assume that no MFW is running [MCP_REG_CACHE_PAGING_ENABLE 0x%x]\n", + val); + else +#endif + DP_NOTICE(p_hwfn, false, + "Assume that no MFW is running [MCP_REG_CACHE_PAGING_ENABLE 0x%x]\n", + val); + return ECORE_INVAL; + } + p_info->public_base = ecore_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR); if (!p_info->public_base) { DP_NOTICE(p_hwfn, false, "The address of the MCP scratch-pad is not configured\n"); -#ifndef ASIC_ONLY - /* Zeroed "public_base" implies no MFW */ - if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) - DP_INFO(p_hwfn, "Emulation: Assume no MFW\n"); -#endif return ECORE_INVAL; } @@ -225,27 +259,34 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn, PUBLIC_MFW_MB)); p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id); p_info->mfw_mb_length = (u16)ecore_rd(p_hwfn, p_ptt, - p_info->mfw_mb_addr); + p_info->mfw_mb_addr + + OFFSETOF(struct public_mfw_mb, + sup_msgs)); /* @@@TBD: - * The driver can notify that there was an MCP reset, and read the SHMEM - * values before the MFW has completed initializing them. - * As a temporary solution, the "sup_msgs" field is used as a data ready - * indication. + * The driver can notify that there was an MCP reset, and might read the + * SHMEM values before the MFW has completed initializing them. + * As a temporary solution, the "sup_msgs" field in the MFW mailbox is + * used as a data ready indication. * This should be replaced with an actual indication when it is provided * by the MFW. 
*/ - while (!p_info->mfw_mb_length && cnt--) { - OSAL_MSLEEP(msec); - p_info->mfw_mb_length = (u16)ecore_rd(p_hwfn, p_ptt, - p_info->mfw_mb_addr); - } + if (!p_info->recovery_mode) { + while (!p_info->mfw_mb_length && --cnt) { + OSAL_MSLEEP(msec); + p_info->mfw_mb_length = + (u16)ecore_rd(p_hwfn, p_ptt, + p_info->mfw_mb_addr + + OFFSETOF(struct public_mfw_mb, + sup_msgs)); + } - if (!cnt) { - DP_NOTICE(p_hwfn, false, - "Failed to get the SHMEM ready notification after %d msec\n", - ECORE_MCP_SHMEM_RDY_MAX_RETRIES * msec); - return ECORE_TIMEOUT; + if (!cnt) { + DP_NOTICE(p_hwfn, false, + "Failed to get the SHMEM ready notification after %d msec\n", + ECORE_MCP_SHMEM_RDY_MAX_RETRIES * msec); + return ECORE_TIMEOUT; + } } /* Calculate the driver and MFW mailbox address */ @@ -254,19 +295,18 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn, PUBLIC_DRV_MB)); p_info->drv_mb_addr = SECTION_ADDR(drv_mb_offsize, mcp_pf_id); DP_VERBOSE(p_hwfn, ECORE_MSG_SP, - "drv_mb_offsiz = 0x%x, drv_mb_addr = 0x%x" - " mcp_pf_id = 0x%x\n", + "drv_mb_offsiz = 0x%x, drv_mb_addr = 0x%x mcp_pf_id = 0x%x\n", drv_mb_offsize, p_info->drv_mb_addr, mcp_pf_id); /* Get the current driver mailbox sequence before sending * the first command */ p_info->drv_mb_seq = DRV_MB_RD(p_hwfn, p_ptt, drv_mb_header) & - DRV_MSG_SEQ_NUMBER_MASK; + DRV_MSG_SEQ_NUMBER_MASK; /* Get current FW pulse sequence */ p_info->drv_pulse_seq = DRV_MB_RD(p_hwfn, p_ptt, drv_pulse_mb) & - DRV_PULSE_SEQ_MASK; + DRV_PULSE_SEQ_MASK; p_info->mcp_hist = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0); @@ -290,18 +330,34 @@ enum _ecore_status_t ecore_mcp_cmd_init(struct ecore_hwfn *p_hwfn, /* Initialize the MFW spinlocks */ #ifdef CONFIG_ECORE_LOCK_ALLOC - if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->cmd_lock)) { + if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->cmd_lock, "cmd_lock")) { + OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info); + return ECORE_NOMEM; + } + if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->link_lock, "link_lock")) { + OSAL_SPIN_LOCK_DEALLOC(&p_info->cmd_lock); + OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info); + return ECORE_NOMEM; + } + if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->dbg_data_lock, + "dbg_data_lock")) { + OSAL_SPIN_LOCK_DEALLOC(&p_info->cmd_lock); + OSAL_SPIN_LOCK_DEALLOC(&p_info->link_lock); OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info); return ECORE_NOMEM; } - if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->link_lock)) { + if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->unload_lock, "unload_lock")) { OSAL_SPIN_LOCK_DEALLOC(&p_info->cmd_lock); + OSAL_SPIN_LOCK_DEALLOC(&p_info->link_lock); + OSAL_SPIN_LOCK_DEALLOC(&p_info->dbg_data_lock); OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info); return ECORE_NOMEM; } #endif OSAL_SPIN_LOCK_INIT(&p_info->cmd_lock); OSAL_SPIN_LOCK_INIT(&p_info->link_lock); + OSAL_SPIN_LOCK_INIT(&p_info->dbg_data_lock); + OSAL_SPIN_LOCK_INIT(&p_info->unload_lock); OSAL_LIST_INIT(&p_info->cmd_list); @@ -316,7 +372,7 @@ enum _ecore_status_t ecore_mcp_cmd_init(struct ecore_hwfn *p_hwfn, size = MFW_DRV_MSG_MAX_DWORDS(p_info->mfw_mb_length) * sizeof(u32); p_info->mfw_mb_cur = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size); p_info->mfw_mb_shadow = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size); - if (!p_info->mfw_mb_shadow || !p_info->mfw_mb_addr) + if (p_info->mfw_mb_cur == OSAL_NULL || p_info->mfw_mb_shadow == OSAL_NULL) goto err; return ECORE_SUCCESS; @@ -352,7 +408,7 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn, u32 retries = ECORE_MCP_RESET_RETRIES; enum _ecore_status_t rc = ECORE_SUCCESS; -#ifndef 
ASIC_ONLY +#if (!defined ASIC_ONLY) && (!defined ROM_TEST) if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) { delay = ECORE_EMUL_MCP_RESP_ITER_US; retries = ECORE_EMUL_MCP_RESET_RETRIES; @@ -407,6 +463,7 @@ static void ecore_emul_mcp_load_req(struct ecore_hwfn *p_hwfn, return; } + /* @DPDK */ if (!loaded) p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_ENGINE; else if (!loaded_port[p_hwfn->port_id]) @@ -566,8 +623,16 @@ static void ecore_mcp_cmd_set_blocking(struct ecore_hwfn *p_hwfn, block_cmd ? "Block" : "Unblock"); } -void ecore_mcp_print_cpu_info(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +static void ecore_mcp_cmd_set_halt(struct ecore_hwfn *p_hwfn, bool halt) +{ + p_hwfn->mcp_info->b_halted = halt; + + DP_INFO(p_hwfn, "%s sending of mailbox commands to the MFW\n", + halt ? "Halt" : "Unhalt"); +} + +static void ecore_mcp_print_cpu_info(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { u32 cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2; u32 delay = ECORE_MCP_RESP_ITER_US; @@ -592,13 +657,21 @@ void ecore_mcp_print_cpu_info(struct ecore_hwfn *p_hwfn, static enum _ecore_status_t _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_mcp_mb_params *p_mb_params, - u32 max_retries, u32 delay) + u32 max_retries, u32 usecs) { + u32 cnt = 0, msecs = DIV_ROUND_UP(usecs, 1000); struct ecore_mcp_cmd_elem *p_cmd_elem; - u32 cnt = 0; + u32 ud_time = 10, ud_iter = 100; u16 seq_num; enum _ecore_status_t rc = ECORE_SUCCESS; + if (!p_mb_params) + return ECORE_INVAL; + + /* First 100 udelay wait iterations equates to 1ms wait time */ + if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) + max_retries += ud_iter - 1; + /* Wait until the mailbox is non-occupied */ do { /* Exit the loop if there is no pending command, or if the @@ -618,7 +691,15 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, goto err; OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock); - OSAL_UDELAY(delay); + + if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) { + if (cnt < ud_iter) + OSAL_UDELAY(ud_time); + else + OSAL_MSLEEP(msecs); + } else { + OSAL_UDELAY(usecs); + } OSAL_MFW_CMD_PREEMPT(p_hwfn); } while (++cnt < max_retries); @@ -648,7 +729,15 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, * The spinlock stays locked until the list element is removed. 
*/ - OSAL_UDELAY(delay); + if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) { + if (cnt < ud_iter) + OSAL_UDELAY(ud_time); + else + OSAL_MSLEEP(msecs); + } else { + OSAL_UDELAY(usecs); + } + OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock); if (p_cmd_elem->b_is_completed) @@ -674,8 +763,10 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, ecore_mcp_cmd_del_elem(p_hwfn, p_cmd_elem); OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock); - ecore_mcp_cmd_set_blocking(p_hwfn, true); - ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_MFW_RESP_FAIL); + if (!ECORE_MB_FLAGS_IS_SET(p_mb_params, AVOID_BLOCK)) + ecore_mcp_cmd_set_blocking(p_hwfn, true); + ecore_hw_err_notify(p_hwfn, p_ptt, ECORE_HW_ERR_MFW_RESP_FAIL, + OSAL_NULL, 0); return ECORE_AGAIN; } @@ -685,7 +776,7 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "MFW mailbox: response 0x%08x param 0x%08x [after %d.%03d ms]\n", p_mb_params->mcp_resp, p_mb_params->mcp_param, - (cnt * delay) / 1000, (cnt * delay) % 1000); + (cnt * usecs) / 1000, (cnt * usecs) % 1000); /* Clear the sequence number from the MFW response */ p_mb_params->mcp_resp &= FW_MSG_CODE_MASK; @@ -697,15 +788,17 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, return rc; } -static enum _ecore_status_t -ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct ecore_mcp_mb_params *p_mb_params) +static enum _ecore_status_t ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_mcp_mb_params *p_mb_params) { osal_size_t union_data_size = sizeof(union drv_union_data); u32 max_retries = ECORE_DRV_MB_MAX_RETRIES; u32 usecs = ECORE_MCP_RESP_ITER_US; + if (!p_mb_params) + return ECORE_INVAL; + #ifndef ASIC_ONLY if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) return ecore_emul_mcp_cmd(p_hwfn, p_mb_params); @@ -715,17 +808,29 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, usecs = ECORE_EMUL_MCP_RESP_ITER_US; } #endif - if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) { - max_retries = DIV_ROUND_UP(max_retries, 1000); - usecs *= 1000; - } - /* MCP not initialized */ if (!ecore_mcp_is_init(p_hwfn)) { DP_NOTICE(p_hwfn, true, "MFW is not initialized!\n"); return ECORE_BUSY; } + /* If MFW was halted by another process, let the mailbox sender know + * they can retry sending the mailbox later. + */ + if (p_hwfn->mcp_info->b_halted) { + DP_VERBOSE(p_hwfn, ECORE_MSG_HW, + "The MFW is halted. Avoid sending mailbox command 0x%08x [param 0x%08x].\n", + p_mb_params->cmd, p_mb_params->param); + return ECORE_AGAIN; + } + + if (p_hwfn->mcp_info->b_block_cmd) { + DP_NOTICE(p_hwfn, false, + "The MFW is not responsive. Avoid sending mailbox command 0x%08x [param 0x%08x].\n", + p_mb_params->cmd, p_mb_params->param); + return ECORE_ABORTED; + } + if (p_mb_params->data_src_size > union_data_size || p_mb_params->data_dst_size > union_data_size) { DP_ERR(p_hwfn, @@ -735,11 +840,9 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, return ECORE_INVAL; } - if (p_hwfn->mcp_info->b_block_cmd) { - DP_NOTICE(p_hwfn, false, - "The MFW is not responsive. 
Avoid sending mailbox command 0x%08x [param 0x%08x].\n", - p_mb_params->cmd, p_mb_params->param); - return ECORE_ABORTED; + if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) { + max_retries = DIV_ROUND_UP(max_retries, 1000); + usecs *= 1000; } return _ecore_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries, @@ -772,7 +875,9 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn, u32 param, u32 *o_mcp_resp, u32 *o_mcp_param, - u32 i_txn_size, u32 *i_buf) + u32 i_txn_size, + u32 *i_buf, + bool b_can_sleep) { struct ecore_mcp_mb_params mb_params; enum _ecore_status_t rc; @@ -782,6 +887,8 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn, mb_params.param = param; mb_params.p_data_src = i_buf; mb_params.data_src_size = (u8)i_txn_size; + if (b_can_sleep) + mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP; rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); if (rc != ECORE_SUCCESS) return rc; @@ -789,6 +896,9 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn, *o_mcp_resp = mb_params.mcp_resp; *o_mcp_param = mb_params.mcp_param; + /* nvm_info needs to be updated */ + p_hwfn->nvm_info.valid = false; + return ECORE_SUCCESS; } @@ -798,7 +908,9 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn, u32 param, u32 *o_mcp_resp, u32 *o_mcp_param, - u32 *o_txn_size, u32 *o_buf) + u32 *o_txn_size, + u32 *o_buf, + bool b_can_sleep) { struct ecore_mcp_mb_params mb_params; u8 raw_data[MCP_DRV_NVM_BUF_LEN]; @@ -811,6 +923,8 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn, /* Use the maximal value since the actual one is part of the response */ mb_params.data_dst_size = MCP_DRV_NVM_BUF_LEN; + if (b_can_sleep) + mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP; rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); if (rc != ECORE_SUCCESS) @@ -820,8 +934,7 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn, *o_mcp_param = mb_params.mcp_param; *o_txn_size = *o_mcp_param; - /* @DPDK */ - OSAL_MEMCPY(o_buf, raw_data, RTE_MIN(*o_txn_size, MCP_DRV_NVM_BUF_LEN)); + OSAL_MEMCPY(o_buf, raw_data, *o_txn_size); return ECORE_SUCCESS; } @@ -850,19 +963,27 @@ ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role, return can_force_load; } -static enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { u32 resp = 0, param = 0; enum _ecore_status_t rc; rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CANCEL_LOAD_REQ, 0, &resp, ¶m); - if (rc != ECORE_SUCCESS) + if (rc != ECORE_SUCCESS) { DP_NOTICE(p_hwfn, false, "Failed to send cancel load request, rc = %d\n", rc); + return rc; + } - return rc; + if (resp == FW_MSG_CODE_UNSUPPORTED) { + DP_INFO(p_hwfn, + "The cancel load command is unsupported by the MFW\n"); + return ECORE_NOTIMPL; + } + + return ECORE_SUCCESS; } #define CONFIG_ECORE_L2_BITMAP_IDX (0x1 << 0) @@ -933,7 +1054,6 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_mcp_mb_params mb_params; struct load_req_stc load_req; struct load_rsp_stc load_rsp; - u32 hsi_ver; enum _ecore_status_t rc; OSAL_MEM_ZERO(&load_req, sizeof(load_req)); @@ -947,17 +1067,18 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, SET_MFW_FIELD(load_req.misc0, LOAD_REQ_FLAGS0, p_in_params->avoid_eng_reset); - hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ? 
- DRV_ID_MCP_HSI_VER_CURRENT : - (p_in_params->hsi_ver << DRV_ID_MCP_HSI_VER_OFFSET); - OSAL_MEM_ZERO(&mb_params, sizeof(mb_params)); mb_params.cmd = DRV_MSG_CODE_LOAD_REQ; - mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type; + mb_params.param = p_hwfn->p_dev->drv_type; + SET_MFW_FIELD(mb_params.param, DRV_ID_PDA_COMP_VER, PDA_COMP); + SET_MFW_FIELD(mb_params.param, DRV_ID_MCP_HSI_VER, + p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT ? + LOAD_REQ_HSI_VERSION : p_in_params->hsi_ver); mb_params.p_data_src = &load_req; mb_params.data_src_size = sizeof(load_req); mb_params.p_data_dst = &load_rsp; mb_params.data_dst_size = sizeof(load_rsp); + mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP | ECORE_MB_FLAG_AVOID_BLOCK; DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n", @@ -1113,7 +1234,7 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, return rc; } else { DP_NOTICE(p_hwfn, false, - "A force load is required [{role, fw_ver, drv_ver}: loading={%d, 0x%08x, x%08x_0x%08x}, existing={%d, 0x%08x, 0x%08x_0x%08x}] - Avoid\n", + "A force load is required [{role, fw_ver, drv_ver}: loading={%d, 0x%08x, 0x%08x_%08x}, existing={%d, 0x%08x, 0x%08x_%08x}] - Avoid\n", in_params.drv_role, in_params.fw_ver, in_params.drv_ver_0, in_params.drv_ver_1, out_params.exist_drv_role, @@ -1172,6 +1293,12 @@ enum _ecore_status_t ecore_mcp_load_done(struct ecore_hwfn *p_hwfn, return rc; } + if (resp == FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT) { + DP_NOTICE(p_hwfn, false, + "Received a LOAD_REFUSED_REJECT response from the mfw\n"); + return ECORE_ABORTED; + } + /* Check if there is a DID mismatch between nvm-cfg/efuse */ if (param & FW_MB_PARAM_LOAD_DONE_DID_EFUSE_ERROR) DP_NOTICE(p_hwfn, false, @@ -1180,16 +1307,56 @@ enum _ecore_status_t ecore_mcp_load_done(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } +#define MFW_COMPLETION_MAX_ITER 5000 +#define MFW_COMPLETION_INTERVAL_MS 1 + enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { - u32 wol_param, mcp_resp, mcp_param; + struct ecore_mcp_mb_params mb_params; + u32 cnt = MFW_COMPLETION_MAX_ITER; + u32 wol_param; + enum _ecore_status_t rc; - /* @DPDK */ - wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP; + switch (p_hwfn->p_dev->wol_config) { + case ECORE_OV_WOL_DISABLED: + wol_param = DRV_MB_PARAM_UNLOAD_WOL_DISABLED; + break; + case ECORE_OV_WOL_ENABLED: + wol_param = DRV_MB_PARAM_UNLOAD_WOL_ENABLED; + break; + default: + DP_NOTICE(p_hwfn, true, + "Unknown WoL configuration %02x\n", + p_hwfn->p_dev->wol_config); + /* Fallthrough */ + case ECORE_OV_WOL_DEFAULT: + wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP; + } - return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param, - &mcp_resp, &mcp_param); + OSAL_MEM_ZERO(&mb_params, sizeof(mb_params)); + mb_params.cmd = DRV_MSG_CODE_UNLOAD_REQ; + mb_params.param = wol_param; + mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP | ECORE_MB_FLAG_AVOID_BLOCK; + + OSAL_SPIN_LOCK(&p_hwfn->mcp_info->unload_lock); + OSAL_SET_BIT(ECORE_MCP_BYPASS_PROC_BIT, + &p_hwfn->mcp_info->mcp_handling_status); + OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->unload_lock); + + rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); + + while (OSAL_TEST_BIT(ECORE_MCP_IN_PROCESSING_BIT, + &p_hwfn->mcp_info->mcp_handling_status) && --cnt) + OSAL_MSLEEP(MFW_COMPLETION_INTERVAL_MS); + + if (!cnt) + DP_NOTICE(p_hwfn, true, + "Failed to wait MFW event completion after %d msec\n", + MFW_COMPLETION_MAX_ITER * + MFW_COMPLETION_INTERVAL_MS); + + 
return rc; } enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn, @@ -1201,6 +1368,24 @@ enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn, OSAL_MEM_ZERO(&mb_params, sizeof(mb_params)); mb_params.cmd = DRV_MSG_CODE_UNLOAD_DONE; + /* Set the primary MAC if WoL is enabled */ + if (p_hwfn->p_dev->wol_config == ECORE_OV_WOL_ENABLED) { + u8 *p_mac = p_hwfn->p_dev->wol_mac; + + OSAL_MEM_ZERO(&wol_mac, sizeof(wol_mac)); + wol_mac.mac_upper = p_mac[0] << 8 | p_mac[1]; + wol_mac.mac_lower = p_mac[2] << 24 | p_mac[3] << 16 | + p_mac[4] << 8 | p_mac[5]; + + DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFDOWN), + "Setting WoL MAC: %02x:%02x:%02x:%02x:%02x:%02x --> [%08x,%08x]\n", + p_mac[0], p_mac[1], p_mac[2], p_mac[3], p_mac[4], + p_mac[5], wol_mac.mac_upper, wol_mac.mac_lower); + + mb_params.p_data_src = &wol_mac; + mb_params.data_src_size = sizeof(wol_mac); + } + return ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); } @@ -1218,8 +1403,7 @@ static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn, OSAL_MEM_ZERO(disabled_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES); DP_VERBOSE(p_hwfn, ECORE_MSG_SP, - "Reading Disabled VF information from [offset %08x]," - " path_addr %08x\n", + "Reading Disabled VF information from [offset %08x], path_addr %08x\n", mfw_path_offsize, path_addr); for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++) { @@ -1241,11 +1425,19 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 *vfs_to_ack) { + u16 i, vf_bitmap_size, vf_bitmap_size_in_bytes; struct ecore_mcp_mb_params mb_params; enum _ecore_status_t rc; - u16 i; - for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++) + if (ECORE_IS_E4(p_hwfn->p_dev)) { + vf_bitmap_size = VF_BITMAP_SIZE_IN_DWORDS; + vf_bitmap_size_in_bytes = VF_BITMAP_SIZE_IN_BYTES; + } else { + vf_bitmap_size = EXT_VF_BITMAP_SIZE_IN_DWORDS; + vf_bitmap_size_in_bytes = EXT_VF_BITMAP_SIZE_IN_BYTES; + } + + for (i = 0; i < vf_bitmap_size; i++) DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IOV), "Acking VFs [%08x,...,%08x] - %08x\n", i * 32, (i + 1) * 32 - 1, vfs_to_ack[i]); @@ -1253,9 +1445,8 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn, OSAL_MEM_ZERO(&mb_params, sizeof(mb_params)); mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE; mb_params.p_data_src = vfs_to_ack; - mb_params.data_src_size = (u8)VF_BITMAP_SIZE_IN_BYTES; - rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, - &mb_params); + mb_params.data_src_size = (u8)vf_bitmap_size_in_bytes; + rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); if (rc != ECORE_SUCCESS) { DP_NOTICE(p_hwfn, false, "Failed to pass ACK for VF flr to MFW\n"); @@ -1276,8 +1467,7 @@ static void ecore_mcp_handle_transceiver_change(struct ecore_hwfn *p_hwfn, transceiver_data)); DP_VERBOSE(p_hwfn, (ECORE_MSG_HW | ECORE_MSG_SP), - "Received transceiver state update [0x%08x] from mfw" - " [Addr 0x%x]\n", + "Received transceiver state update [0x%08x] from mfw [Addr 0x%x]\n", transceiver_state, (u32)(p_hwfn->mcp_info->port_addr + OFFSETOF(struct public_port, transceiver_data))); @@ -1293,6 +1483,22 @@ static void ecore_mcp_handle_transceiver_change(struct ecore_hwfn *p_hwfn, OSAL_TRANSCEIVER_UPDATE(p_hwfn); } +static void ecore_mcp_handle_transceiver_tx_fault(struct ecore_hwfn *p_hwfn) +{ + DP_VERBOSE(p_hwfn, (ECORE_MSG_HW | ECORE_MSG_SP), + "Received TX_FAULT event from mfw\n"); + + OSAL_TRANSCEIVER_TX_FAULT(p_hwfn); +} + +static void ecore_mcp_handle_transceiver_rx_los(struct ecore_hwfn *p_hwfn) +{ + DP_VERBOSE(p_hwfn, (ECORE_MSG_HW | 
ECORE_MSG_SP), + "Received RX_LOS event from mfw\n"); + + OSAL_TRANSCEIVER_RX_LOS(p_hwfn); +} + static void ecore_mcp_read_eee_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_mcp_link_state *p_link) @@ -1304,12 +1510,12 @@ static void ecore_mcp_read_eee_config(struct ecore_hwfn *p_hwfn, eee_status = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr + OFFSETOF(struct public_port, eee_status)); p_link->eee_active = !!(eee_status & EEE_ACTIVE_BIT); - val = (eee_status & EEE_LD_ADV_STATUS_MASK) >> EEE_LD_ADV_STATUS_OFFSET; + val = GET_MFW_FIELD(eee_status, EEE_LD_ADV_STATUS); if (val & EEE_1G_ADV) p_link->eee_adv_caps |= ECORE_EEE_1G_ADV; if (val & EEE_10G_ADV) p_link->eee_adv_caps |= ECORE_EEE_10G_ADV; - val = (eee_status & EEE_LP_ADV_STATUS_MASK) >> EEE_LP_ADV_STATUS_OFFSET; + val = GET_MFW_FIELD(eee_status, EEE_LP_ADV_STATUS); if (val & EEE_1G_ADV) p_link->eee_lp_adv_caps |= ECORE_EEE_1G_ADV; if (val & EEE_10G_ADV) @@ -1338,6 +1544,38 @@ static u32 ecore_mcp_get_shmem_func(struct ecore_hwfn *p_hwfn, return size; } +static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn, + struct public_func *p_shmem_info) +{ + struct ecore_mcp_function_info *p_info; + + p_info = &p_hwfn->mcp_info->func_info; + + /* TODO - bandwidth min/max should have valid values of 1-100, + * as well as some indication that the feature is disabled. + * Until MFW/qlediag enforce those limitations, Assume THERE IS ALWAYS + * limit and correct value to min `1' and max `100' if limit isn't in + * range. + */ + p_info->bandwidth_min = GET_MFW_FIELD(p_shmem_info->config, + FUNC_MF_CFG_MIN_BW); + if (p_info->bandwidth_min < 1 || p_info->bandwidth_min > 100) { + DP_INFO(p_hwfn, + "bandwidth minimum out of bounds [%02x]. Set to 1\n", + p_info->bandwidth_min); + p_info->bandwidth_min = 1; + } + + p_info->bandwidth_max = GET_MFW_FIELD(p_shmem_info->config, + FUNC_MF_CFG_MAX_BW); + if (p_info->bandwidth_max < 1 || p_info->bandwidth_max > 100) { + DP_INFO(p_hwfn, + "bandwidth maximum out of bounds [%02x]. 
Set to 100\n",
+			p_info->bandwidth_max);
+		p_info->bandwidth_max = 100;
+	}
+}
+
 static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 bool b_reset)
@@ -1356,11 +1594,9 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 				   p_hwfn->mcp_info->port_addr +
 				   OFFSETOF(struct public_port, link_status));
 		DP_VERBOSE(p_hwfn, (ECORE_MSG_LINK | ECORE_MSG_SP),
-			   "Received link update [0x%08x] from mfw"
-			   " [Addr 0x%x]\n",
+			   "Received link update [0x%08x] from mfw [Addr 0x%x]\n",
 			   status, (u32)(p_hwfn->mcp_info->port_addr +
-					 OFFSETOF(struct public_port,
-						  link_status)));
+					 OFFSETOF(struct public_port, link_status)));
 	} else {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 			   "Resetting link indications\n");
@@ -1379,11 +1615,20 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 							 MCP_PF_ID(p_hwfn));
 			p_link->link_up = !!(shmem_info.status &
 					     FUNC_STATUS_VIRTUAL_LINK_UP);
+			ecore_read_pf_bandwidth(p_hwfn, &shmem_info);
+			DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+				   "Virtual link_up = %d\n", p_link->link_up);
 		} else {
 			p_link->link_up = !!(status & LINK_STATUS_LINK_UP);
+			DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+				   "Physical link_up = %d\n", p_link->link_up);
 		}
 	} else {
 		p_link->link_up = false;
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+			   "Link is down - disable dcbx\n");
+		p_hwfn->p_dcbx_info->get.operational.enabled = false;
 	}
 
 	p_link->full_duplex = true;
@@ -1414,6 +1659,7 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 		break;
 	default:
 		p_link->speed = 0;
+		p_link->link_up = 0;
 	}
 
 	/* We never store total line speed as p_link->speed is
@@ -1428,50 +1674,49 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 	min_bw = p_hwfn->mcp_info->func_info.bandwidth_min;
 
 	/* Max bandwidth configuration */
-	__ecore_configure_pf_max_bandwidth(p_hwfn, p_ptt,
-					   p_link, max_bw);
+	__ecore_configure_pf_max_bandwidth(p_hwfn, p_ptt, p_link, max_bw);
 
 	/* Min bandwidth configuration */
-	__ecore_configure_pf_min_bandwidth(p_hwfn, p_ptt,
-					   p_link, min_bw);
+	__ecore_configure_pf_min_bandwidth(p_hwfn, p_ptt, p_link, min_bw);
 	ecore_configure_vp_wfq_on_link_change(p_hwfn->p_dev, p_ptt,
 					      p_link->min_pf_rate);
 
 	p_link->an = !!(status & LINK_STATUS_AUTO_NEGOTIATE_ENABLED);
-	p_link->an_complete = !!(status & LINK_STATUS_AUTO_NEGOTIATE_COMPLETE);
+	p_link->an_complete = !!(status &
+				 LINK_STATUS_AUTO_NEGOTIATE_COMPLETE);
 	p_link->parallel_detection = !!(status &
-					LINK_STATUS_PARALLEL_DETECTION_USED);
+				LINK_STATUS_PARALLEL_DETECTION_USED);
 	p_link->pfc_enabled = !!(status & LINK_STATUS_PFC_ENABLED);
 
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_1G_FD : 0;
+		(status & LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_1G_FD : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_1G_HD : 0;
+		(status & LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_1G_HD : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_10G_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_10G : 0;
+		(status & LINK_STATUS_LINK_PARTNER_10G_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_10G : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_20G_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_20G : 0;
+		(status & LINK_STATUS_LINK_PARTNER_20G_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_20G : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_25G_CAPABLE) ?
- ECORE_LINK_PARTNER_SPEED_25G : 0; + (status & LINK_STATUS_LINK_PARTNER_25G_CAPABLE) ? + ECORE_LINK_PARTNER_SPEED_25G : 0; p_link->partner_adv_speed |= - (status & LINK_STATUS_LINK_PARTNER_40G_CAPABLE) ? - ECORE_LINK_PARTNER_SPEED_40G : 0; + (status & LINK_STATUS_LINK_PARTNER_40G_CAPABLE) ? + ECORE_LINK_PARTNER_SPEED_40G : 0; p_link->partner_adv_speed |= - (status & LINK_STATUS_LINK_PARTNER_50G_CAPABLE) ? - ECORE_LINK_PARTNER_SPEED_50G : 0; + (status & LINK_STATUS_LINK_PARTNER_50G_CAPABLE) ? + ECORE_LINK_PARTNER_SPEED_50G : 0; p_link->partner_adv_speed |= - (status & LINK_STATUS_LINK_PARTNER_100G_CAPABLE) ? - ECORE_LINK_PARTNER_SPEED_100G : 0; + (status & LINK_STATUS_LINK_PARTNER_100G_CAPABLE) ? + ECORE_LINK_PARTNER_SPEED_100G : 0; p_link->partner_tx_flow_ctrl_en = - !!(status & LINK_STATUS_TX_FLOW_CONTROL_ENABLED); + !!(status & LINK_STATUS_TX_FLOW_CONTROL_ENABLED); p_link->partner_rx_flow_ctrl_en = - !!(status & LINK_STATUS_RX_FLOW_CONTROL_ENABLED); + !!(status & LINK_STATUS_RX_FLOW_CONTROL_ENABLED); switch (status & LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK) { case LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE: @@ -1492,24 +1737,44 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn, if (p_hwfn->mcp_info->capabilities & FW_MB_PARAM_FEATURE_SUPPORT_EEE) ecore_mcp_read_eee_config(p_hwfn, p_ptt, p_link); - OSAL_LINK_UPDATE(p_hwfn); + if (p_hwfn->mcp_info->capabilities & + FW_MB_PARAM_FEATURE_SUPPORT_FEC_CONTROL) { + switch (status & LINK_STATUS_FEC_MODE_MASK) { + case LINK_STATUS_FEC_MODE_NONE: + p_link->fec_active = ECORE_MCP_FEC_NONE; + break; + case LINK_STATUS_FEC_MODE_FIRECODE_CL74: + p_link->fec_active = ECORE_MCP_FEC_FIRECODE; + break; + case LINK_STATUS_FEC_MODE_RS_CL91: + p_link->fec_active = ECORE_MCP_FEC_RS; + break; + default: + p_link->fec_active = ECORE_MCP_FEC_AUTO; + } + } else { + p_link->fec_active = ECORE_MCP_FEC_UNSUPPORTED; + } + + OSAL_LINK_UPDATE(p_hwfn); /* @DPDK */ out: OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->link_lock); } enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, bool b_up) + struct ecore_ptt *p_ptt, + bool b_up) { struct ecore_mcp_link_params *params = &p_hwfn->mcp_info->link_input; struct ecore_mcp_mb_params mb_params; struct eth_phy_cfg phy_cfg; enum _ecore_status_t rc = ECORE_SUCCESS; - u32 cmd; + u32 cmd, fec_bit = 0; #ifndef ASIC_ONLY if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) { if (b_up) - OSAL_LINK_UPDATE(p_hwfn); + OSAL_LINK_UPDATE(p_hwfn); /* @DPDK */ return ECORE_SUCCESS; } #endif @@ -1540,18 +1805,87 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn, phy_cfg.eee_cfg |= EEE_CFG_ADV_SPEED_1G; if (params->eee.adv_caps & ECORE_EEE_10G_ADV) phy_cfg.eee_cfg |= EEE_CFG_ADV_SPEED_10G; - phy_cfg.eee_cfg |= (params->eee.tx_lpi_timer << - EEE_TX_TIMER_USEC_OFFSET) & - EEE_TX_TIMER_USEC_MASK; + SET_MFW_FIELD(phy_cfg.eee_cfg, EEE_TX_TIMER_USEC, + params->eee.tx_lpi_timer); + } + + if (p_hwfn->mcp_info->capabilities & + FW_MB_PARAM_FEATURE_SUPPORT_FEC_CONTROL) { + if (params->fec & ECORE_MCP_FEC_NONE) + fec_bit |= FEC_FORCE_MODE_NONE; + else if (params->fec & ECORE_MCP_FEC_FIRECODE) + fec_bit |= FEC_FORCE_MODE_FIRECODE; + else if (params->fec & ECORE_MCP_FEC_RS) + fec_bit |= FEC_FORCE_MODE_RS; + else if (params->fec & ECORE_MCP_FEC_AUTO) + fec_bit |= FEC_FORCE_MODE_AUTO; + SET_MFW_FIELD(phy_cfg.fec_mode, FEC_FORCE_MODE, fec_bit); + } + + if (p_hwfn->mcp_info->capabilities & + FW_MB_PARAM_FEATURE_SUPPORT_EXT_SPEED_FEC_CONTROL) { + u32 spd_mask = 0, val = params->ext_speed.forced_speed; + + if 
(params->ext_speed.autoneg) + spd_mask |= ETH_EXT_SPEED_AN; + if (val & ECORE_EXT_SPEED_1G) + spd_mask |= ETH_EXT_SPEED_1G; + if (val & ECORE_EXT_SPEED_10G) + spd_mask |= ETH_EXT_SPEED_10G; + if (val & ECORE_EXT_SPEED_20G) + spd_mask |= ETH_EXT_SPEED_20G; + if (val & ECORE_EXT_SPEED_25G) + spd_mask |= ETH_EXT_SPEED_25G; + if (val & ECORE_EXT_SPEED_40G) + spd_mask |= ETH_EXT_SPEED_40G; + if (val & ECORE_EXT_SPEED_50G_R) + spd_mask |= ETH_EXT_SPEED_50G_BASE_R; + if (val & ECORE_EXT_SPEED_50G_R2) + spd_mask |= ETH_EXT_SPEED_50G_BASE_R2; + if (val & ECORE_EXT_SPEED_100G_R2) + spd_mask |= ETH_EXT_SPEED_100G_BASE_R2; + if (val & ECORE_EXT_SPEED_100G_R4) + spd_mask |= ETH_EXT_SPEED_100G_BASE_R4; + if (val & ECORE_EXT_SPEED_100G_P4) + spd_mask |= ETH_EXT_SPEED_100G_BASE_P4; + SET_MFW_FIELD(phy_cfg.extended_speed, ETH_EXT_SPEED, spd_mask); + + spd_mask = 0; + val = params->ext_speed.advertised_speeds; + if (val & ECORE_EXT_SPEED_MASK_1G) + spd_mask |= ETH_EXT_ADV_SPEED_1G; + if (val & ECORE_EXT_SPEED_MASK_10G) + spd_mask |= ETH_EXT_ADV_SPEED_10G; + if (val & ECORE_EXT_SPEED_MASK_20G) + spd_mask |= ETH_EXT_ADV_SPEED_20G; + if (val & ECORE_EXT_SPEED_MASK_25G) + spd_mask |= ETH_EXT_ADV_SPEED_25G; + if (val & ECORE_EXT_SPEED_MASK_40G) + spd_mask |= ETH_EXT_ADV_SPEED_40G; + if (val & ECORE_EXT_SPEED_MASK_50G_R) + spd_mask |= ETH_EXT_ADV_SPEED_50G_BASE_R; + if (val & ECORE_EXT_SPEED_MASK_50G_R2) + spd_mask |= ETH_EXT_ADV_SPEED_50G_BASE_R2; + if (val & ECORE_EXT_SPEED_MASK_100G_R2) + spd_mask |= ETH_EXT_ADV_SPEED_100G_BASE_R2; + if (val & ECORE_EXT_SPEED_MASK_100G_R4) + spd_mask |= ETH_EXT_ADV_SPEED_100G_BASE_R4; + if (val & ECORE_EXT_SPEED_MASK_100G_P4) + spd_mask |= ETH_EXT_ADV_SPEED_100G_BASE_P4; + phy_cfg.extended_speed |= spd_mask; + + SET_MFW_FIELD(phy_cfg.fec_mode, FEC_EXTENDED_MODE, + params->ext_fec_mode); } p_hwfn->b_drv_link_init = b_up; if (b_up) DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, - "Configuring Link: Speed 0x%08x, Pause 0x%08x, adv_speed 0x%08x, loopback 0x%08x\n", + "Configuring Link: Speed 0x%08x, Pause 0x%08x, adv_speed 0x%08x, loopback 0x%08x FEC 0x%08x, ext_speed 0x%08x\n", phy_cfg.speed, phy_cfg.pause, phy_cfg.adv_speed, - phy_cfg.loopback_mode); + phy_cfg.loopback_mode, phy_cfg.fec_mode, + phy_cfg.extended_speed); else DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "Resetting link\n"); @@ -1559,6 +1893,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn, mb_params.cmd = cmd; mb_params.p_data_src = &phy_cfg; mb_params.data_src_size = sizeof(phy_cfg); + mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP; rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); /* if mcp fails to respond we must abort */ @@ -1595,7 +1930,7 @@ u32 ecore_get_process_kill_counter(struct ecore_hwfn *p_hwfn, proc_kill_cnt = ecore_rd(p_hwfn, p_ptt, path_addr + OFFSETOF(struct public_path, process_kill)) & - PROCESS_KILL_COUNTER_MASK; + PROCESS_KILL_COUNTER_MASK; return proc_kill_cnt; } @@ -1621,8 +1956,7 @@ static void ecore_mcp_handle_process_kill(struct ecore_hwfn *p_hwfn, if (p_dev->recov_in_prog) { DP_NOTICE(p_hwfn, false, - "Ignoring the indication since a recovery" - " process is already in progress\n"); + "Ignoring the indication since a recovery process is already in progress\n"); return; } @@ -1649,6 +1983,18 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn, stats_type = ECORE_MCP_LAN_STATS; hsi_param = DRV_MSG_CODE_STATS_TYPE_LAN; break; + case MFW_DRV_MSG_GET_FCOE_STATS: + stats_type = ECORE_MCP_FCOE_STATS; + hsi_param = DRV_MSG_CODE_STATS_TYPE_FCOE; + break; + case 
MFW_DRV_MSG_GET_ISCSI_STATS: + stats_type = ECORE_MCP_ISCSI_STATS; + hsi_param = DRV_MSG_CODE_STATS_TYPE_ISCSI; + break; + case MFW_DRV_MSG_GET_RDMA_STATS: + stats_type = ECORE_MCP_RDMA_STATS; + hsi_param = DRV_MSG_CODE_STATS_TYPE_RDMA; + break; default: DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Invalid protocol type %d\n", type); @@ -1667,40 +2013,6 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn, DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc); } -static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn, - struct public_func *p_shmem_info) -{ - struct ecore_mcp_function_info *p_info; - - p_info = &p_hwfn->mcp_info->func_info; - - /* TODO - bandwidth min/max should have valid values of 1-100, - * as well as some indication that the feature is disabled. - * Until MFW/qlediag enforce those limitations, Assume THERE IS ALWAYS - * limit and correct value to min `1' and max `100' if limit isn't in - * range. - */ - p_info->bandwidth_min = (p_shmem_info->config & - FUNC_MF_CFG_MIN_BW_MASK) >> - FUNC_MF_CFG_MIN_BW_OFFSET; - if (p_info->bandwidth_min < 1 || p_info->bandwidth_min > 100) { - DP_INFO(p_hwfn, - "bandwidth minimum out of bounds [%02x]. Set to 1\n", - p_info->bandwidth_min); - p_info->bandwidth_min = 1; - } - - p_info->bandwidth_max = (p_shmem_info->config & - FUNC_MF_CFG_MAX_BW_MASK) >> - FUNC_MF_CFG_MAX_BW_OFFSET; - if (p_info->bandwidth_max < 1 || p_info->bandwidth_max > 100) { - DP_INFO(p_hwfn, - "bandwidth maximum out of bounds [%02x]. Set to 100\n", - p_info->bandwidth_max); - p_info->bandwidth_max = 100; - } -} - static void ecore_mcp_update_bw(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { @@ -1708,7 +2020,10 @@ ecore_mcp_update_bw(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) struct public_func shmem_info; u32 resp = 0, param = 0; - ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info, MCP_PF_ID(p_hwfn)); + OSAL_SPIN_LOCK(&p_hwfn->mcp_info->link_lock); + + ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info, + MCP_PF_ID(p_hwfn)); ecore_read_pf_bandwidth(p_hwfn, &shmem_info); @@ -1718,6 +2033,10 @@ ecore_mcp_update_bw(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) ecore_configure_pf_max_bandwidth(p_hwfn->p_dev, p_info->bandwidth_max); + OSAL_BW_UPDATE(p_hwfn, p_ptt); + + OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->link_lock); + /* Acknowledge the MFW */ ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BW_UPDATE_ACK, 0, &resp, ¶m); @@ -1735,7 +2054,7 @@ static void ecore_mcp_update_stag(struct ecore_hwfn *p_hwfn, p_hwfn->mcp_info->func_info.ovlan = (u16)shmem_info.ovlan_stag & FUNC_MF_CFG_OV_STAG_MASK; p_hwfn->hw_info.ovlan = p_hwfn->mcp_info->func_info.ovlan; - if (OSAL_GET_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits)) { + if (OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits)) { if (p_hwfn->hw_info.ovlan != ECORE_MCP_VLAN_UNSET) { ecore_wr(p_hwfn, p_ptt, NIG_REG_LLH_FUNC_TAG_VALUE, p_hwfn->hw_info.ovlan); @@ -1766,29 +2085,64 @@ static void ecore_mcp_update_stag(struct ecore_hwfn *p_hwfn, &resp, ¶m); } -static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn) +static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { + u32 addr, global_offsize, global_addr; + u32 nvm_cfg_addr, nvm_cfg1_offset; + u32 phy_mod_temp, gpio_value; + u32 fan_fail_enforcement; + u32 int_temp, ext_temp; + u32 gpio0; + /* A single notification should be sent to upper driver in CMT mode */ if (p_hwfn != ECORE_LEADING_HWFN(p_hwfn->p_dev)) return; + addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base, 
+				    PUBLIC_GLOBAL);
+	global_offsize = ecore_rd(p_hwfn, p_ptt, addr);
+	global_addr = SECTION_ADDR(global_offsize, 0);
+	int_temp = ecore_rd(p_hwfn, p_ptt, global_addr +
+			    offsetof(struct public_global,
+				     internal_temperature));
+
+	addr = global_addr + offsetof(struct public_global,
+				      external_temperature);
+	ext_temp = ecore_rd(p_hwfn, p_ptt, addr);
+
+	addr = p_hwfn->mcp_info->port_addr + OFFSETOF(struct public_port,
+						      phy_module_temperature);
+	phy_mod_temp = ecore_rd(p_hwfn, p_ptt, addr);
+
+	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
+	nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt, nvm_cfg_addr + 4);
+	addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
+	       offsetof(struct nvm_cfg1, glob) +
+	       offsetof(struct nvm_cfg1_glob, generic_cont0);
+	fan_fail_enforcement = ecore_rd(p_hwfn, p_ptt, addr);
+	fan_fail_enforcement &= NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK;
+
+	if (NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_ENABLED ==
+	    fan_fail_enforcement) {
+		ecore_mcp_gpio_read(p_hwfn, p_ptt, MISCS_REG_GPIO_VAL,
+				    &gpio_value);
+		gpio0 = gpio_value & 1;
+		DP_NOTICE(p_hwfn, false, "Fan failure (Status = %d)\n",
+			  gpio0);
+	}
+
 	DP_NOTICE(p_hwfn, false,
-		  "Fan failure was detected on the network interface card"
-		  " and it's going to be shut down.\n");
+		  "Fan failure: Internal Temp=%dC External Temp=%dC Phy Module Temp=%dC\n",
+		  int_temp, ext_temp, phy_mod_temp);
+	DP_NOTICE(p_hwfn, false,
+		  "Fan failure was detected on the network interface card and it's going to be shut down.\n");
 
-	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FAN_FAIL);
+	ecore_hw_err_notify(p_hwfn, p_ptt, ECORE_HW_ERR_FAN_FAIL, OSAL_NULL, 0);
 }
 
-struct ecore_mdump_cmd_params {
-	u32 cmd;
-	void *p_data_src;
-	u8 data_src_size;
-	void *p_data_dst;
-	u8 data_dst_size;
-	u32 mcp_resp;
-};
-static enum _ecore_status_t
+enum _ecore_status_t
 ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		    struct ecore_mdump_cmd_params *p_mdump_cmd_params)
 {
@@ -1807,6 +2161,7 @@ ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		return rc;
 
 	p_mdump_cmd_params->mcp_resp = mb_params.mcp_resp;
+	p_mdump_cmd_params->mcp_param = mb_params.mcp_param;
 
 	if (p_mdump_cmd_params->mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
 		DP_INFO(p_hwfn,
@@ -1851,11 +2206,22 @@ enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt)
 {
 	struct ecore_mdump_cmd_params mdump_cmd_params;
+	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
 	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_TRIGGER;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND) {
+		DP_INFO(p_hwfn,
+			"The mdump command failed because there is no HW_DUMP image\n");
+		rc = ECORE_INVAL;
+	} else if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_INFO(p_hwfn,
+			"The mdump command failed because of MFW error\n");
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+	return rc;
 }
 
 static enum _ecore_status_t
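A caller-side sketch of the trigger path above (illustrative only, not part of the diff; the sketch_ helper is hypothetical, the caller is assumed to already hold a PTT, and the branches rely on the ECORE_INVAL / ECORE_UNKNOWN_ERROR mapping the hunk above introduces):

static void sketch_request_mdump(struct ecore_hwfn *p_hwfn,
				 struct ecore_ptt *p_ptt)
{
	enum _ecore_status_t rc;

	rc = ecore_mcp_mdump_trigger(p_hwfn, p_ptt);
	if (rc == ECORE_INVAL)
		/* distinguishable now: no HW_DUMP image in NVM at all */
		DP_INFO(p_hwfn, "No HW_DUMP image; nothing to trigger\n");
	else if (rc != ECORE_SUCCESS)
		DP_INFO(p_hwfn, "mdump trigger failed, rc = %d\n", rc);
}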
@@ -1988,17 +2354,260 @@ enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
-static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt)
+enum _ecore_status_t ecore_mcp_hw_dump_trigger(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt)
 {
-	struct ecore_mdump_retain_data mdump_retain;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 	enum _ecore_status_t rc;
 
-	/* In CMT mode - no need for more than a single acknowledgment to the
-	 * MFW, and no more than a single notification to the upper driver.
-	 */
-	if (p_hwfn != ECORE_LEADING_HWFN(p_hwfn->p_dev))
-		return;
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_HW_DUMP_TRIGGER;
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND) {
+		DP_INFO(p_hwfn,
+			"The hw_dump command failed because there is no HW_DUMP image\n");
+		rc = ECORE_INVAL;
+	} else if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_INFO(p_hwfn,
+			"The hw_dump command failed because of MFW error\n");
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+	return rc;
+}
+
+enum _ecore_status_t ecore_mdump2_req_offsize(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
+					      u32 *buff_byte_size,
+					      u32 *buff_byte_addr)
+{
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GEN_LINK_DUMP;
+
+	SET_MFW_FIELD(mdump_cmd_params.cmd,
+		      DRV_MSG_CODE_MDUMP_USE_DRIVER_BUF, 1);
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_INFO(p_hwfn,
+			"The mdump2 command failed because of MFW error\n");
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+
+	*buff_byte_size = SECTION_SIZE(mdump_cmd_params.mcp_param);
+	*buff_byte_addr = SECTION_ADDR(mdump_cmd_params.mcp_param, 0);
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mdump2_req_free(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt)
+{
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_FREE_DRIVER_BUF;
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_INFO(p_hwfn,
+			"The mdump2 command failed because of MFW error\n");
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_mcp_get_disabled_attns(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     u32 *disabled_attns)
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct get_att_ctrl_stc get_att_ctrl;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&get_att_ctrl, sizeof(get_att_ctrl));
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_GET_ATTN_CONTROL;
+	mb_params.p_data_dst = &get_att_ctrl;
+	mb_params.data_dst_size = sizeof(get_att_ctrl);
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false, "GET_ATTN_CONTROL failed, rc = %d\n",
+			  rc);
+		return rc;
+	}
+
+	if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"GET_ATTN_CONTROL command is not supported\n");
+		return ECORE_NOTIMPL;
+	}
+
+	*disabled_attns = get_att_ctrl.disabled_attns;
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_mcp_get_attn_control(struct ecore_hwfn *p_hwfn,
+			   struct ecore_ptt *p_ptt,
+			   enum MFW_DRV_MSG_TYPE type, u8 *enabled)
+{
+	u32 disabled_attns = 0;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_get_disabled_attns(p_hwfn, p_ptt, &disabled_attns);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*enabled = !(disabled_attns & (1 << type));
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_mcp_set_attn_control(struct ecore_hwfn *p_hwfn,
+			   struct ecore_ptt *p_ptt,
+			   enum MFW_DRV_MSG_TYPE type, u8 enable)
+{
+	struct 
ecore_mcp_mb_params mb_params; + u32 disabled_attns = 0; + u8 is_enabled; + enum _ecore_status_t rc; + + rc = ecore_mcp_get_disabled_attns(p_hwfn, p_ptt, &disabled_attns); + if (rc != ECORE_SUCCESS) + return rc; + + is_enabled = (~disabled_attns >> type) & 0x1; + if (is_enabled == enable) + return ECORE_SUCCESS; + + if (enable) + disabled_attns &= ~(1 << type); + else + disabled_attns |= (1 << type); + + OSAL_MEM_ZERO(&mb_params, sizeof(mb_params)); + mb_params.cmd = DRV_MSG_CODE_SET_ATTN_CONTROL; + mb_params.p_data_src = &disabled_attns; + mb_params.data_src_size = sizeof(disabled_attns); + + rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); + if (rc != ECORE_SUCCESS) { + DP_NOTICE(p_hwfn, false, "SET_ATTN_CONTROL failed, rc = %d\n", + rc); + return rc; + } + + return rc; +} + +enum _ecore_status_t +ecore_mcp_is_tx_flt_attn_enabled(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 *enabled) +{ + return ecore_mcp_get_attn_control(p_hwfn, p_ptt, + MFW_DRV_MSG_XCVR_TX_FAULT, + enabled); +} + +enum _ecore_status_t +ecore_mcp_is_rx_los_attn_enabled(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 *enabled) +{ + return ecore_mcp_get_attn_control(p_hwfn, p_ptt, + MFW_DRV_MSG_XCVR_RX_LOS, + enabled); +} + +enum _ecore_status_t +ecore_mcp_enable_tx_flt_attn(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 enable) +{ + return ecore_mcp_set_attn_control(p_hwfn, p_ptt, + MFW_DRV_MSG_XCVR_TX_FAULT, + enable); +} + +enum _ecore_status_t +ecore_mcp_enable_rx_los_attn(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 enable) +{ + return ecore_mcp_set_attn_control(p_hwfn, p_ptt, + MFW_DRV_MSG_XCVR_RX_LOS, + enable); +} + +enum _ecore_status_t +ecore_mcp_set_trace_filter(struct ecore_hwfn *p_hwfn, + u32 *dbg_level, u32 *dbg_modules) +{ + struct trace_filter_stc trace_filter; + struct ecore_ptt *p_ptt; + u32 resp = 0, param; + enum _ecore_status_t rc; + + trace_filter.level = *dbg_level; + trace_filter.modules = *dbg_modules; + + p_ptt = ecore_ptt_acquire(p_hwfn); + if (!p_ptt) + return ECORE_BUSY; + + rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_TRACE_FILTER, + sizeof(trace_filter), &resp, + ¶m, sizeof(trace_filter), + (u32 *)&trace_filter, false); + if (rc != ECORE_SUCCESS) + DP_NOTICE(p_hwfn, false, "MCP command rc = %d\n", rc); + ecore_ptt_release(p_hwfn, p_ptt); + + return rc; +} + +enum _ecore_status_t +ecore_mcp_restore_trace_filter(struct ecore_hwfn *p_hwfn) +{ + struct ecore_ptt *p_ptt; + u32 resp = 0, param; + enum _ecore_status_t rc; + + p_ptt = ecore_ptt_acquire(p_hwfn); + if (!p_ptt) + return ECORE_BUSY; + + rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, + DRV_MSG_CODE_RESTORE_TRACE_FILTER, + 0, &resp, ¶m, 0, OSAL_NULL, false); + if (rc != ECORE_SUCCESS) + DP_NOTICE(p_hwfn, false, "MCP command rc = %d\n", rc); + ecore_ptt_release(p_hwfn, p_ptt); + + return rc; +} + +static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + struct ecore_mdump_retain_data mdump_retain; + enum _ecore_status_t rc; + + /* In CMT mode - no need for more than a single acknowledgment to the + * MFW, and no more than a single notification to the upper driver. 
+ */ + if (p_hwfn != ECORE_LEADING_HWFN(p_hwfn->p_dev)) + return; rc = ecore_mcp_mdump_get_retain(p_hwfn, p_ptt, &mdump_retain); if (rc == ECORE_SUCCESS && mdump_retain.valid) { @@ -2020,7 +2629,7 @@ static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn, DP_NOTICE(p_hwfn, false, "Acknowledging the notification to not allow the MFW crash dump [driver debug data collection is preferable]\n"); ecore_mcp_mdump_ack(p_hwfn, p_ptt); - ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN); + ecore_hw_err_notify(p_hwfn, p_ptt, ECORE_HW_ERR_HW_ATTN, OSAL_NULL, 0); } void @@ -2029,7 +2638,7 @@ ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) struct public_func shmem_info; u32 port_cfg, val; - if (!OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) + if (!OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) return; OSAL_MEMSET(&p_hwfn->ufp_info, 0, sizeof(p_hwfn->ufp_info)); @@ -2037,17 +2646,19 @@ ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) OFFSETOF(struct public_port, oem_cfg_port)); val = GET_MFW_FIELD(port_cfg, OEM_CFG_CHANNEL_TYPE); if (val != OEM_CFG_CHANNEL_TYPE_STAGGED) - DP_NOTICE(p_hwfn, false, "Incorrect UFP Channel type %d\n", - val); + DP_NOTICE(p_hwfn, false, "Incorrect UFP Channel type %d port_id 0x%02x\n", + val, MFW_PORT(p_hwfn)); val = GET_MFW_FIELD(port_cfg, OEM_CFG_SCHED_TYPE); - if (val == OEM_CFG_SCHED_TYPE_ETS) + if (val == OEM_CFG_SCHED_TYPE_ETS) { p_hwfn->ufp_info.mode = ECORE_UFP_MODE_ETS; - else if (val == OEM_CFG_SCHED_TYPE_VNIC_BW) + } else if (val == OEM_CFG_SCHED_TYPE_VNIC_BW) { p_hwfn->ufp_info.mode = ECORE_UFP_MODE_VNIC_BW; - else - DP_NOTICE(p_hwfn, false, "Unknown UFP scheduling mode %d\n", - val); + } else { + p_hwfn->ufp_info.mode = ECORE_UFP_MODE_UNKNOWN; + DP_NOTICE(p_hwfn, false, "Unknown UFP scheduling mode %d port_id 0x%02x\n", + val, MFW_PORT(p_hwfn)); + } ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info, MCP_PF_ID(p_hwfn)); @@ -2055,18 +2666,20 @@ ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) p_hwfn->ufp_info.tc = (u8)val; val = GET_MFW_FIELD(shmem_info.oem_cfg_func, OEM_CFG_FUNC_HOST_PRI_CTRL); - if (val == OEM_CFG_FUNC_HOST_PRI_CTRL_VNIC) + if (val == OEM_CFG_FUNC_HOST_PRI_CTRL_VNIC) { p_hwfn->ufp_info.pri_type = ECORE_UFP_PRI_VNIC; - else if (val == OEM_CFG_FUNC_HOST_PRI_CTRL_OS) + } else if (val == OEM_CFG_FUNC_HOST_PRI_CTRL_OS) { p_hwfn->ufp_info.pri_type = ECORE_UFP_PRI_OS; - else - DP_NOTICE(p_hwfn, false, "Unknown Host priority control %d\n", - val); + } else { + p_hwfn->ufp_info.pri_type = ECORE_UFP_PRI_UNKNOWN; + DP_NOTICE(p_hwfn, false, "Unknown Host priority control %d port_id 0x%02x\n", + val, MFW_PORT(p_hwfn)); + } - DP_VERBOSE(p_hwfn, ECORE_MSG_SP, - "UFP shmem config: mode = %d tc = %d pri_type = %d\n", - p_hwfn->ufp_info.mode, p_hwfn->ufp_info.tc, - p_hwfn->ufp_info.pri_type); + DP_NOTICE(p_hwfn, false, + "UFP shmem config: mode = %d tc = %d pri_type = %d port_id 0x%02x\n", + p_hwfn->ufp_info.mode, p_hwfn->ufp_info.tc, + p_hwfn->ufp_info.pri_type, MFW_PORT(p_hwfn)); } static enum _ecore_status_t @@ -2076,13 +2689,16 @@ ecore_mcp_handle_ufp_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) if (p_hwfn->ufp_info.mode == ECORE_UFP_MODE_VNIC_BW) { p_hwfn->qm_info.ooo_tc = p_hwfn->ufp_info.tc; - p_hwfn->hw_info.offload_tc = p_hwfn->ufp_info.tc; - - ecore_qm_reconf(p_hwfn, p_ptt); - } else { + ecore_hw_info_set_offload_tc(&p_hwfn->hw_info, + p_hwfn->ufp_info.tc); + ecore_qm_reconf_intr(p_hwfn, p_ptt); + } else 
@@ -2076,13 +2689,16 @@ ecore_mcp_handle_ufp_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 
 	if (p_hwfn->ufp_info.mode == ECORE_UFP_MODE_VNIC_BW) {
 		p_hwfn->qm_info.ooo_tc = p_hwfn->ufp_info.tc;
-		p_hwfn->hw_info.offload_tc = p_hwfn->ufp_info.tc;
-
-		ecore_qm_reconf(p_hwfn, p_ptt);
-	} else {
+		ecore_hw_info_set_offload_tc(&p_hwfn->hw_info,
+					     p_hwfn->ufp_info.tc);
+		ecore_qm_reconf_intr(p_hwfn, p_ptt);
+	} else if (p_hwfn->ufp_info.mode == ECORE_UFP_MODE_ETS) {
 		/* Merge UFP TC with the dcbx TC data */
 		ecore_dcbx_mib_update_event(p_hwfn, p_ptt,
 					    ECORE_DCBX_OPERATIONAL_MIB);
+	} else {
+		DP_ERR(p_hwfn, "Invalid sched type, discard the UFP config\n");
+		return ECORE_INVAL;
 	}
 
 	/* update storm FW with negotiation results */
@@ -2118,6 +2734,19 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 			   "Msg [%d] - old CMD 0x%02x, new CMD 0x%02x\n",
 			   i, info->mfw_mb_shadow[i], info->mfw_mb_cur[i]);
 
+		OSAL_SPIN_LOCK(&p_hwfn->mcp_info->unload_lock);
+		if (OSAL_TEST_BIT(ECORE_MCP_BYPASS_PROC_BIT,
+				  &p_hwfn->mcp_info->mcp_handling_status)) {
+			OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->unload_lock);
+			DP_INFO(p_hwfn,
+				"Msg [%d] is bypassed on unload flow\n", i);
+			continue;
+		}
+
+		OSAL_SET_BIT(ECORE_MCP_IN_PROCESSING_BIT,
+			     &p_hwfn->mcp_info->mcp_handling_status);
+		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->unload_lock);
+
 		switch (i) {
 		case MFW_DRV_MSG_LINK_CHANGE:
 			ecore_mcp_handle_link_change(p_hwfn, p_ptt, false);
@@ -2149,6 +2778,12 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 		case MFW_DRV_MSG_TRANSCEIVER_STATE_CHANGE:
 			ecore_mcp_handle_transceiver_change(p_hwfn, p_ptt);
 			break;
+		case MFW_DRV_MSG_XCVR_TX_FAULT:
+			ecore_mcp_handle_transceiver_tx_fault(p_hwfn);
+			break;
+		case MFW_DRV_MSG_XCVR_RX_LOS:
+			ecore_mcp_handle_transceiver_rx_los(p_hwfn);
+			break;
 		case MFW_DRV_MSG_ERROR_RECOVERY:
 			ecore_mcp_handle_process_kill(p_hwfn, p_ptt);
 			break;
@@ -2165,15 +2800,21 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 			ecore_mcp_update_stag(p_hwfn, p_ptt);
 			break;
 		case MFW_DRV_MSG_FAILURE_DETECTED:
-			ecore_mcp_handle_fan_failure(p_hwfn);
+			ecore_mcp_handle_fan_failure(p_hwfn, p_ptt);
 			break;
 		case MFW_DRV_MSG_CRITICAL_ERROR_OCCURRED:
 			ecore_mcp_handle_critical_error(p_hwfn, p_ptt);
 			break;
+		case MFW_DRV_MSG_GET_TLV_REQ:
+			OSAL_MFW_TLV_REQ(p_hwfn);
+			break;
 		default:
 			DP_INFO(p_hwfn, "Unimplemented MFW message %d\n", i);
 			rc = ECORE_INVAL;
 		}
+
+		OSAL_CLEAR_BIT(ECORE_MCP_IN_PROCESSING_BIT,
+			       &p_hwfn->mcp_info->mcp_handling_status);
 	}
 
 	/* ACK everything */
@@ -2188,9 +2829,8 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (!found) {
-		DP_NOTICE(p_hwfn, false,
-			  "Received an MFW message indication but no"
-			  " new message!\n");
+		DP_INFO(p_hwfn,
+			"Received an MFW message indication but no new message!\n");
 		rc = ECORE_INVAL;
 	}
 
@@ -2229,60 +2869,57 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
 	}
 
 	global_offsize = ecore_rd(p_hwfn, p_ptt,
-				  SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->
-						       public_base,
-						       PUBLIC_GLOBAL));
-	*p_mfw_ver =
-	    ecore_rd(p_hwfn, p_ptt,
-		     SECTION_ADDR(global_offsize,
-				  0) + OFFSETOF(struct public_global, mfw_ver));
+				  SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+						       PUBLIC_GLOBAL));
+	*p_mfw_ver = ecore_rd(p_hwfn, p_ptt,
+			      SECTION_ADDR(global_offsize, 0) +
+			      OFFSETOF(struct public_global, mfw_ver));
 
 	if (p_running_bundle_id != OSAL_NULL) {
 		*p_running_bundle_id = ecore_rd(p_hwfn, p_ptt,
-						SECTION_ADDR(global_offsize,
-							     0) +
-						OFFSETOF(struct public_global,
-							 running_bundle_id));
+				SECTION_ADDR(global_offsize, 0) +
+				OFFSETOF(struct public_global,
					 running_bundle_id));
 	}
 
 	return ECORE_SUCCESS;
 }
 
-int ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
-			  struct ecore_ptt *p_ptt, u32 *p_mbi_ver)
+enum _ecore_status_t ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u32 *p_mbi_ver)
 {
 	u32 nvm_cfg_addr, nvm_cfg1_offset, mbi_ver_addr;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
 		DP_INFO(p_hwfn, "Emulation: Can't get MBI version\n");
-		return -EOPNOTSUPP;
+		return ECORE_NOTIMPL;
 	}
 #endif
 
 	if (IS_VF(p_hwfn->p_dev))
-		return -EINVAL;
+		return ECORE_INVAL;
 
 	/* Read the address of the nvm_cfg */
 	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
 	if (!nvm_cfg_addr) {
 		DP_NOTICE(p_hwfn, false, "Shared memory not initialized\n");
-		return -EINVAL;
+		return ECORE_INVAL;
 	}
 
 	/* Read the offset of nvm_cfg1 */
 	nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt, nvm_cfg_addr + 4);
 
 	mbi_ver_addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
-	    offsetof(struct nvm_cfg1, glob) + offsetof(struct nvm_cfg1_glob,
-						       mbi_version);
-	*p_mbi_ver =
-	    ecore_rd(p_hwfn, p_ptt,
-		     mbi_ver_addr) & (NVM_CFG1_GLOB_MBI_VERSION_0_MASK |
-				      NVM_CFG1_GLOB_MBI_VERSION_1_MASK |
-				      NVM_CFG1_GLOB_MBI_VERSION_2_MASK);
+		       OFFSETOF(struct nvm_cfg1, glob) +
+		       OFFSETOF(struct nvm_cfg1_glob, mbi_version);
+	*p_mbi_ver = ecore_rd(p_hwfn, p_ptt, mbi_ver_addr) &
+		     (NVM_CFG1_GLOB_MBI_VERSION_0_MASK |
+		      NVM_CFG1_GLOB_MBI_VERSION_1_MASK |
+		      NVM_CFG1_GLOB_MBI_VERSION_2_MASK);
 
-	return 0;
+	return ECORE_SUCCESS;
 }
 
 enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
@@ -2322,44 +2959,55 @@ enum _ecore_status_t ecore_mcp_get_transceiver_data(struct ecore_hwfn *p_hwfn,
 						    u32 *p_transceiver_type)
 {
 	u32 transceiver_info;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
+	*p_transceiver_type = ETH_TRANSCEIVER_TYPE_NONE;
+	*p_transceiver_state = ETH_TRANSCEIVER_STATE_UPDATING;
 
 	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
 	if (!ecore_mcp_is_init(p_hwfn)) {
+#ifndef ASIC_ONLY
+		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+			DP_INFO(p_hwfn,
+				"Emulation: Can't get transceiver data\n");
+			return ECORE_NOTIMPL;
+		}
+#endif
 		DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
 		return ECORE_BUSY;
 	}
 
-	*p_transceiver_type = ETH_TRANSCEIVER_TYPE_NONE;
-	*p_transceiver_state = ETH_TRANSCEIVER_STATE_UPDATING;
-
 	transceiver_info = ecore_rd(p_hwfn, p_ptt,
 				    p_hwfn->mcp_info->port_addr +
 				    offsetof(struct public_port,
 					     transceiver_data));
-
 	*p_transceiver_state = GET_MFW_FIELD(transceiver_info,
 					     ETH_TRANSCEIVER_STATE);
 
 	if (*p_transceiver_state == ETH_TRANSCEIVER_STATE_PRESENT) {
 		*p_transceiver_type = GET_MFW_FIELD(transceiver_info,
-					    ETH_TRANSCEIVER_TYPE);
+						    ETH_TRANSCEIVER_TYPE);
 	} else {
 		*p_transceiver_type = ETH_TRANSCEIVER_TYPE_UNKNOWN;
 	}
 
-	return rc;
+	return 0;
 }
 
 static int is_transceiver_ready(u32 transceiver_state, u32 transceiver_type)
{
 	if ((transceiver_state & ETH_TRANSCEIVER_STATE_PRESENT) &&
 	    ((transceiver_state & ETH_TRANSCEIVER_STATE_UPDATING) == 0x0) &&
-	    (transceiver_type != ETH_TRANSCEIVER_TYPE_NONE))
+	    (transceiver_type != ETH_TRANSCEIVER_TYPE_NONE)) {
 		return 1;
+	}
 
 	return 0;
 }
@@ -2368,14 +3016,17 @@ enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u32 *p_speed_mask)
 {
-	u32 transceiver_type = ETH_TRANSCEIVER_TYPE_NONE, transceiver_state;
-
-	ecore_mcp_get_transceiver_data(p_hwfn, p_ptt, &transceiver_state,
-				       &transceiver_type);
+	u32 transceiver_type, transceiver_state;
+	enum _ecore_status_t ret;
 
+	ret = ecore_mcp_get_transceiver_data(p_hwfn, p_ptt, &transceiver_state,
+					     &transceiver_type);
+	if (ret)
+		return ret;
 
-	if (is_transceiver_ready(transceiver_state, transceiver_type) == 0)
+	if (is_transceiver_ready(transceiver_state, transceiver_type) == 0) {
 		return ECORE_INVAL;
+	}
 
 	switch (transceiver_type) {
 	case ETH_TRANSCEIVER_TYPE_1G_LX:
@@ -2431,6 +3082,11 @@ enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
 			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
 			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
 		break;
+	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_25G_SR:
+	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_25G_LR:
+		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G |
+				NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G;
+		break;
 
 	case ETH_TRANSCEIVER_TYPE_40G_CR4:
 	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_CR:
@@ -2466,12 +3122,14 @@ enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
 		break;
 
 	case ETH_TRANSCEIVER_TYPE_10G_BASET:
+	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_1G_10G_SR:
+	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_1G_10G_LR:
 		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
 				NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
 		break;
 
 	default:
-		DP_INFO(p_hwfn, "Unknown transcevier type 0x%x\n",
+		DP_INFO(p_hwfn, "Unknown transceiver type 0x%x\n",
 			transceiver_type);
 		*p_speed_mask = 0xff;
 		break;
@@ -2485,24 +3143,29 @@ enum _ecore_status_t ecore_mcp_get_board_config(struct ecore_hwfn *p_hwfn,
 						u32 *p_board_config)
 {
 	u32 nvm_cfg_addr, nvm_cfg1_offset, port_cfg_addr;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
 	if (!ecore_mcp_is_init(p_hwfn)) {
+#ifndef ASIC_ONLY
+		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+			DP_INFO(p_hwfn, "Emulation: Can't get board config\n");
+			return ECORE_NOTIMPL;
+		}
+#endif
 		DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
 		return ECORE_BUSY;
 	}
 
 	if (!p_ptt) {
 		*p_board_config = NVM_CFG1_PORT_PORT_TYPE_UNDEFINED;
-		rc = ECORE_INVAL;
+		return ECORE_INVAL;
 	} else {
 		nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt,
-				MISC_REG_GEN_PURP_CR0);
+					MISC_REG_GEN_PURP_CR0);
 		nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt,
-				nvm_cfg_addr + 4);
+					   nvm_cfg_addr + 4);
 		port_cfg_addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
 			offsetof(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]);
 		*p_board_config = ecore_rd(p_hwfn, p_ptt,
@@ -2511,23 +3174,28 @@ enum _ecore_status_t ecore_mcp_get_board_config(struct ecore_hwfn *p_hwfn,
 						    board_cfg));
 	}
 
-	return rc;
+	return ECORE_SUCCESS;
 }
 
-/* @DPDK */
 /* Old MFW has a global configuration for all PFs regarding RDMA support */
 static void ecore_mcp_get_shmem_proto_legacy(struct ecore_hwfn *p_hwfn,
 					     enum ecore_pci_personality *p_proto)
 {
-	*p_proto = ECORE_PCI_ETH;
+	/* There wasn't ever a legacy MFW that published iwarp.
+	 * So at this point, this is either plain l2 or RoCE.
+	 */
+	if (OSAL_TEST_BIT(ECORE_DEV_CAP_ROCE,
+			  &p_hwfn->hw_info.device_capabilities))
+		*p_proto = ECORE_PCI_ETH_ROCE;
+	else
+		*p_proto = ECORE_PCI_ETH;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
 		   "According to Legacy capabilities, L2 personality is %08x\n",
 		   (u32)*p_proto);
 }
 
-/* @DPDK */
 static enum _ecore_status_t
 ecore_mcp_get_shmem_proto_mfw(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt,
@@ -2536,6 +3204,37 @@ ecore_mcp_get_shmem_proto_mfw(struct ecore_hwfn *p_hwfn,
 	u32 resp = 0, param = 0;
 	enum _ecore_status_t rc;
 
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt,
+			   DRV_MSG_CODE_GET_PF_RDMA_PROTOCOL, 0, &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+	if (resp != FW_MSG_CODE_OK) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+			   "MFW lacks support for command; Returns %08x\n",
+			   resp);
+		return ECORE_INVAL;
+	}
+
+	switch (param) {
+	case FW_MB_PARAM_GET_PF_RDMA_NONE:
+		*p_proto = ECORE_PCI_ETH;
+		break;
+	case FW_MB_PARAM_GET_PF_RDMA_ROCE:
+		*p_proto = ECORE_PCI_ETH_ROCE;
+		break;
+	case FW_MB_PARAM_GET_PF_RDMA_IWARP:
+		*p_proto = ECORE_PCI_ETH_IWARP;
+		break;
+	case FW_MB_PARAM_GET_PF_RDMA_BOTH:
+		*p_proto = ECORE_PCI_ETH_RDMA;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, true,
+			  "MFW answers GET_PF_RDMA_PROTOCOL but param is %08x\n",
+			  param);
+		return ECORE_INVAL;
+	}
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
 		   "According to capabilities, L2 personality is %08x [resp %08x param %08x]\n",
 		   (u32)*p_proto, resp, param);
@@ -2556,6 +3255,16 @@ ecore_mcp_get_shmem_proto(struct ecore_hwfn *p_hwfn,
 		    ECORE_SUCCESS)
 			ecore_mcp_get_shmem_proto_legacy(p_hwfn, p_proto);
 		break;
+	case FUNC_MF_CFG_PROTOCOL_NVMETCP:
+	case FUNC_MF_CFG_PROTOCOL_ISCSI:
+		*p_proto = ECORE_PCI_ISCSI;
+		break;
+	case FUNC_MF_CFG_PROTOCOL_FCOE:
+		*p_proto = ECORE_PCI_FCOE;
+		break;
+	case FUNC_MF_CFG_PROTOCOL_ROCE:
+		DP_NOTICE(p_hwfn, true, "RoCE personality is not a valid value!\n");
+		/* Fallthrough */
 	default:
 		rc = ECORE_INVAL;
 	}
@@ -2569,7 +3278,8 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_function_info *info;
 	struct public_func shmem_info;
 
-	ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info, MCP_PF_ID(p_hwfn));
+	ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info,
+				 MCP_PF_ID(p_hwfn));
 	info = &p_hwfn->mcp_info->func_info;
 
 	info->pause_on_host = (shmem_info.config &
@@ -2591,37 +3301,49 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 		info->mac[3] = (u8)(shmem_info.mac_lower >> 16);
 		info->mac[4] = (u8)(shmem_info.mac_lower >> 8);
 		info->mac[5] = (u8)(shmem_info.mac_lower);
+
+		/* Store primary MAC for later possible WoL */
+		OSAL_MEMCPY(p_hwfn->p_dev->wol_mac, info->mac, ECORE_ETH_ALEN);
+
 	} else {
 		/* TODO - are there protocols for which there's no MAC? */
 		DP_NOTICE(p_hwfn, false, "MAC is 0 in shmem\n");
 	}
 
 	/* TODO - are these calculations true for BE machine?
 	 */
-	info->wwn_port = (u64)shmem_info.fcoe_wwn_port_name_upper |
-			 (((u64)shmem_info.fcoe_wwn_port_name_lower) << 32);
-	info->wwn_node = (u64)shmem_info.fcoe_wwn_node_name_upper |
-			 (((u64)shmem_info.fcoe_wwn_node_name_lower) << 32);
+	info->wwn_port = (u64)shmem_info.fcoe_wwn_port_name_lower |
+			 (((u64)shmem_info.fcoe_wwn_port_name_upper) << 32);
+	info->wwn_node = (u64)shmem_info.fcoe_wwn_node_name_lower |
+			 (((u64)shmem_info.fcoe_wwn_node_name_upper) << 32);
 
 	info->ovlan = (u16)(shmem_info.ovlan_stag & FUNC_MF_CFG_OV_STAG_MASK);
 
 	info->mtu = (u16)shmem_info.mtu_size;
-	if (info->mtu == 0)
-		info->mtu = 1500;
+	p_hwfn->hw_info.b_wol_support = ECORE_WOL_SUPPORT_NONE;
+	p_hwfn->p_dev->wol_config = (u8)ECORE_OV_WOL_DEFAULT;
+	if (ecore_mcp_is_init(p_hwfn)) {
+		u32 resp = 0, param = 0;
+		enum _ecore_status_t rc;
 
-	info->mtu = (u16)shmem_info.mtu_size;
+		rc = ecore_mcp_cmd(p_hwfn, p_ptt,
+				   DRV_MSG_CODE_OS_WOL, 0, &resp, &param);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+		if (resp == FW_MSG_CODE_OS_WOL_SUPPORTED)
+			p_hwfn->hw_info.b_wol_support = ECORE_WOL_SUPPORT_PME;
+	}
 
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
-		   "Read configuration from shmem: pause_on_host %02x"
-		   " protocol %02x BW [%02x - %02x]"
-		   " MAC %02x:%02x:%02x:%02x:%02x:%02x wwn port %lx"
-		   " node %lx ovlan %04x\n",
+		   "Read configuration from shmem: pause_on_host %02x protocol %02x "
+		   "BW [%02x - %02x] MAC %02x:%02x:%02x:%02x:%02x:%02x wwn port %" PRIx64 " "
+		   "node %" PRIx64 " ovlan %04x wol %02x\n",
 		   info->pause_on_host, info->protocol,
 		   info->bandwidth_min, info->bandwidth_max,
 		   info->mac[0], info->mac[1], info->mac[2],
 		   info->mac[3], info->mac[4], info->mac[5],
-		   (unsigned long)info->wwn_port,
-		   (unsigned long)info->wwn_node, info->ovlan);
+		   info->wwn_port, info->wwn_node, info->ovlan,
+		   (u8)p_hwfn->hw_info.b_wol_support);
 
 	return ECORE_SUCCESS;
 }
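The wwn_port/wwn_node hunk above is a 32-bit word-order fix: the shmem "lower" word holds the least-significant half of the 64-bit WWN, so it must not be shifted into the high half. A tiny self-contained illustration with made-up values:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint32_t wwn_lower = 0x01020304;	/* least-significant word */
	uint32_t wwn_upper = 0x0a0b0c0d;	/* most-significant word */

	/* Corrected assembly, matching the '+' lines of the patch */
	uint64_t wwn = (uint64_t)wwn_lower | ((uint64_t)wwn_upper << 32);

	printf("wwn = 0x%016" PRIx64 "\n", wwn);	/* 0x0a0b0c0d01020304 */
	return 0;
}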
@@ -2665,7 +3387,8 @@ enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt,
-			   DRV_MSG_CODE_NIG_DRAIN, 1000, &resp, &param);
+			   DRV_MSG_CODE_NIG_DRAIN, 1000,
+			   &resp, &param);
 
 	/* Wait for the drain to complete before returning */
 	OSAL_MSLEEP(1020);
@@ -2682,7 +3405,8 @@ const struct ecore_mcp_function_info
 }
 
 int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt, u32 personalities)
+				  struct ecore_ptt *p_ptt,
+				  u32 personalities)
 {
 	enum ecore_pci_personality protocol = ECORE_PCI_DEFAULT;
 	struct public_func shmem_info;
@@ -2741,8 +3465,7 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 
 	if (p_dev->recov_in_prog) {
 		DP_NOTICE(p_hwfn, false,
-			  "Avoid triggering a recovery since such a process"
-			  " is already in progress\n");
+			  "Avoid triggering a recovery since such a process is already in progress\n");
 		return ECORE_AGAIN;
 	}
 
@@ -2752,29 +3475,47 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t
-ecore_mcp_config_vf_msix_bb(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt,
-			    u8 vf_id, u8 num)
-{
-	u32 resp = 0, param = 0, rc_param = 0;
+#define ECORE_RECOVERY_PROLOG_SLEEP_MS 100
+
+enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev)
+{
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
 	enum _ecore_status_t rc;
 
-/* Only Leader can configure MSIX, and need to take CMT into account */
+	/* Allow ongoing PCIe transactions to complete */
+	OSAL_MSLEEP(ECORE_RECOVERY_PROLOG_SLEEP_MS);
+
+	/* Clear the PF's internal FID_enable in the PXP */
+	rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_ptt, false);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, false,
+			  "ecore_pglueb_set_pfid_enable() failed. rc = %d.\n",
+			  rc);
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_mcp_config_vf_msix_bb(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt,
+			    u8 vf_id, u8 num)
+{
+	u32 resp = 0, param = 0, rc_param = 0;
+	enum _ecore_status_t rc;
+
+	/* Only Leader can configure MSIX, and need to take CMT into account */
 	if (!IS_LEAD_HWFN(p_hwfn))
 		return ECORE_SUCCESS;
 
 	num *= p_hwfn->p_dev->num_hwfns;
 
-	param |= (vf_id << DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_OFFSET) &
-	    DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK;
-	param |= (num << DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_OFFSET) &
-	    DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_MASK;
+	SET_MFW_FIELD(param, DRV_MB_PARAM_CFG_VF_MSIX_VF_ID, vf_id);
+	SET_MFW_FIELD(param, DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM, num);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CFG_VF_MSIX, param,
 			   &resp, &rc_param);
 
-	if (resp != FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE) {
+	if (resp != (u32)FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE) {
 		DP_NOTICE(p_hwfn, true, "VF[%d]: MFW failed to set MSI-X\n",
 			  vf_id);
 		rc = ECORE_INVAL;
@@ -2788,9 +3529,8 @@ ecore_mcp_config_vf_msix_bb(struct ecore_hwfn *p_hwfn,
 }
 
 static enum _ecore_status_t
-ecore_mcp_config_vf_msix_ah(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt,
-			    u8 num)
+ecore_mcp_config_vf_msix_ah_e5(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt, u8 num)
 {
 	u32 resp = 0, param = num, rc_param = 0;
 	enum _ecore_status_t rc;
@@ -2803,8 +3543,7 @@ ecore_mcp_config_vf_msix_ah_e5(struct ecore_hwfn *p_hwfn,
 		rc = ECORE_INVAL;
 	} else {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Requested 0x%02x MSI-x interrupts for VFs\n",
-			   num);
+			   "Requested 0x%02x MSI-x interrupts for VFs\n", num);
 	}
 
 	return rc;
@@ -2827,7 +3566,7 @@ enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		return ecore_mcp_config_vf_msix_bb(p_hwfn, p_ptt, vf_id, num);
 	else
-		return ecore_mcp_config_vf_msix_ah(p_hwfn, p_ptt, num);
+		return ecore_mcp_config_vf_msix_ah_e5(p_hwfn, p_ptt, num);
 }
 
 enum _ecore_status_t
@@ -2898,7 +3637,7 @@ enum _ecore_status_t ecore_mcp_halt(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
-	ecore_mcp_cmd_set_blocking(p_hwfn, true);
+	ecore_mcp_cmd_set_halt(p_hwfn, true);
 
 	return ECORE_SUCCESS;
 }
@@ -2915,7 +3654,6 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 	cpu_mode = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
 	cpu_mode &= ~MCP_REG_CPU_MODE_SOFT_HALT;
 	ecore_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, cpu_mode);
-	OSAL_MSLEEP(ECORE_MCP_RESUME_SLEEP_MS);
 	cpu_state = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
@@ -2926,7 +3664,7 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
-	ecore_mcp_cmd_set_blocking(p_hwfn, false);
+	ecore_mcp_cmd_set_halt(p_hwfn, false);
 
 	return ECORE_SUCCESS;
 }
@@ -2951,7 +3689,8 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", client);
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid client type %d\n", client);
 		return ECORE_INVAL;
 	}
 
@@ -2983,7 +3722,8 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
 		drv_mb_param = DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_ACTIVE;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Invalid driver state %d\n", drv_state);
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid driver state %d\n", drv_state);
 		return ECORE_INVAL;
 	}
 
@@ -2999,7 +3739,78 @@ enum _ecore_status_t ecore_mcp_ov_get_fc_npiv(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
 					      struct ecore_fc_npiv_tbl *p_table)
 {
-	return 0;
+	struct dci_fc_npiv_tbl *p_npiv_table;
+	u8 *p_buf = OSAL_NULL;
+	u32 addr, size, i;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct dci_fc_npiv_cfg npiv_cfg;
+
+	p_table->num_wwpn = 0;
+	p_table->num_wwnn = 0;
+	addr = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr +
+			OFFSETOF(struct public_port, fc_npiv_nvram_tbl_addr));
+	if (addr == NPIV_TBL_INVALID_ADDR) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "NPIV table doesn't exist\n");
+		return ECORE_INVAL;
+	}
+
+	/* First read table header. Then use number of entries field to
+	 * know actual number of npiv entries.
+	 */
+	size = sizeof(struct dci_fc_npiv_cfg);
+	OSAL_MEMSET(&npiv_cfg, 0, size);
+
+	rc = ecore_mcp_nvm_read(p_hwfn->p_dev, addr, (u8 *)&npiv_cfg, size);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(p_hwfn, "NPIV table configuration read failed.\n");
+		return rc;
+	}
+
+	if (npiv_cfg.num_of_npiv < 1) {
+		DP_ERR(p_hwfn, "No entries in NPIV table\n");
+		return ECORE_INVAL;
+	}
+
+	size += sizeof(struct dci_npiv_settings) * npiv_cfg.num_of_npiv;
+
+	p_buf = OSAL_VZALLOC(p_hwfn->p_dev, size);
+	if (!p_buf) {
+		DP_ERR(p_hwfn, "Buffer allocation failed\n");
+		return ECORE_NOMEM;
+	}
+
+	rc = ecore_mcp_nvm_read(p_hwfn->p_dev, addr, p_buf, size);
+	if (rc != ECORE_SUCCESS) {
+		OSAL_VFREE(p_hwfn->p_dev, p_buf);
+		return rc;
+	}
+
+	p_npiv_table = (struct dci_fc_npiv_tbl *)p_buf;
+	p_table->num_wwpn = (u16)p_npiv_table->fc_npiv_cfg.num_of_npiv;
+	p_table->num_wwnn = (u16)p_npiv_table->fc_npiv_cfg.num_of_npiv;
+	for (i = 0; i < p_npiv_table->fc_npiv_cfg.num_of_npiv; i++) {
+		p_table->wwpn[i][0] = p_npiv_table->settings[i].npiv_wwpn[3];
+		p_table->wwpn[i][1] = p_npiv_table->settings[i].npiv_wwpn[2];
+		p_table->wwpn[i][2] = p_npiv_table->settings[i].npiv_wwpn[1];
+		p_table->wwpn[i][3] = p_npiv_table->settings[i].npiv_wwpn[0];
+		p_table->wwpn[i][4] = p_npiv_table->settings[i].npiv_wwpn[7];
+		p_table->wwpn[i][5] = p_npiv_table->settings[i].npiv_wwpn[6];
+		p_table->wwpn[i][6] = p_npiv_table->settings[i].npiv_wwpn[5];
+		p_table->wwpn[i][7] = p_npiv_table->settings[i].npiv_wwpn[4];
+
+		p_table->wwnn[i][0] = p_npiv_table->settings[i].npiv_wwnn[3];
+		p_table->wwnn[i][1] = p_npiv_table->settings[i].npiv_wwnn[2];
+		p_table->wwnn[i][2] = p_npiv_table->settings[i].npiv_wwnn[1];
+		p_table->wwnn[i][3] = p_npiv_table->settings[i].npiv_wwnn[0];
+		p_table->wwnn[i][4] = p_npiv_table->settings[i].npiv_wwnn[7];
+		p_table->wwnn[i][5] = p_npiv_table->settings[i].npiv_wwnn[6];
+		p_table->wwnn[i][6] = p_npiv_table->settings[i].npiv_wwnn[5];
+		p_table->wwnn[i][7] = p_npiv_table->settings[i].npiv_wwnn[4];
+	}
+
+	OSAL_VFREE(p_hwfn->p_dev, p_buf);
+
+	return ECORE_SUCCESS;
 }
 
 enum _ecore_status_t
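The [3..0]/[7..4] indexing in the wwpn/wwnn copy loop above swaps each 4-byte half of the 8-byte WWN, i.e. it converts two little-endian 32-bit words into wire (big-endian) byte order. A stand-alone sketch of that swizzle, assuming the NVRAM layout just described:

#include <stdint.h>
#include <stdio.h>

/* Swap each 32-bit half of an 8-byte WWN from little-endian storage
 * order into big-endian wire order, as in the copy loop above.
 */
static void wwn_to_wire(const uint8_t in[8], uint8_t out[8])
{
	out[0] = in[3]; out[1] = in[2]; out[2] = in[1]; out[3] = in[0];
	out[4] = in[7]; out[5] = in[6]; out[6] = in[5]; out[7] = in[4];
}

int main(void)
{
	const uint8_t raw[8] = { 0x44, 0x33, 0x22, 0x11,
				 0x88, 0x77, 0x66, 0x55 };
	uint8_t wire[8];
	int i;

	wwn_to_wire(raw, wire);
	for (i = 0; i < 8; i++)
		printf("%02x", wire[i]);
	printf("\n");	/* prints: 1122334455667788 */
	return 0;
}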
@@ -3023,7 +3834,7 @@ ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 *mac)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	u32 mfw_mac[2];
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
@@ -3031,12 +3842,64 @@ ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_VMAC_TYPE,
 		      DRV_MSG_CODE_VMAC_TYPE_MAC);
 	mb_params.param |= MCP_PF_ID(p_hwfn);
-	OSAL_MEMCPY(&union_data.raw_data, mac, ETH_ALEN);
-	mb_params.p_data_src = &union_data;
+
+	/* MCP is BE, and on LE platforms PCI would swap access to SHMEM
+	 * in 32-bit granularity.
+	 * So the MAC has to be set in native order [and not byte order],
+	 * otherwise it would be read incorrectly by MFW after swap.
+	 */
+	mfw_mac[0] = mac[0] << 24 | mac[1] << 16 | mac[2] << 8 | mac[3];
+	mfw_mac[1] = mac[4] << 24 | mac[5] << 16;
+
+	mb_params.p_data_src = (u8 *)mfw_mac;
+	mb_params.data_src_size = 8;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "Failed to send mac address, rc = %d\n", rc);
 
+	/* Store primary MAC for later possible WoL */
+	OSAL_MEMCPY(p_hwfn->p_dev->wol_mac, mac, ECORE_ETH_ALEN);
+
 	return rc;
 }
 
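The comment in the hunk above carries the key detail: the management CPU (MCP) is big-endian and PCI access to SHMEM swaps in 32-bit units, so the 6-byte MAC is packed MSB-first into two 32-bit words rather than memcpy'd. A self-contained sketch of that packing (illustrative only):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint8_t mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	uint32_t mfw_mac[2];

	/* MSB-first packing, as the big-endian MFW expects */
	mfw_mac[0] = (uint32_t)mac[0] << 24 | (uint32_t)mac[1] << 16 |
		     (uint32_t)mac[2] << 8 | mac[3];
	mfw_mac[1] = (uint32_t)mac[4] << 24 | (uint32_t)mac[5] << 16;

	printf("0x%08x 0x%08x\n", mfw_mac[0], mfw_mac[1]);
	/* prints: 0x00112233 0x44550000 */
	return 0;
}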
+enum _ecore_status_t
+ecore_mcp_ov_update_wol(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_ov_wol wol)
+{
+	u32 resp = 0, param = 0;
+	u32 drv_mb_param;
+	enum _ecore_status_t rc;
+
+	if (p_hwfn->hw_info.b_wol_support == ECORE_WOL_SUPPORT_NONE) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Can't change WoL configuration when WoL isn't supported\n");
+		return ECORE_INVAL;
+	}
+
+	switch (wol) {
+	case ECORE_OV_WOL_DEFAULT:
+		drv_mb_param = DRV_MB_PARAM_WOL_DEFAULT;
+		break;
+	case ECORE_OV_WOL_DISABLED:
+		drv_mb_param = DRV_MB_PARAM_WOL_DISABLED;
+		break;
+	case ECORE_OV_WOL_ENABLED:
+		drv_mb_param = DRV_MB_PARAM_WOL_ENABLED;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Invalid wol state %d\n", wol);
+		return ECORE_INVAL;
+	}
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_WOL,
+			   drv_mb_param, &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to send wol mode, rc = %d\n", rc);
+
+	/* Store the WoL update for a future unload */
+	p_hwfn->p_dev->wol_config = (u8)wol;
+
+	return rc;
+}
+
 enum _ecore_status_t ecore_mcp_ov_update_eswitch(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 enum ecore_ov_eswitch eswitch)
 {
-	enum _ecore_status_t rc;
 	u32 resp = 0, param = 0;
 	u32 drv_mb_param;
-
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 	switch (eswitch) {
 	case ECORE_OV_ESWITCH_NONE:
 		drv_mb_param = DRV_MB_PARAM_ESWITCH_MODE_NONE;
@@ -3112,24 +3974,24 @@ enum _ecore_status_t ecore_mcp_mask_parities(struct ecore_hwfn *p_hwfn,
 			   mask_parities, &resp, &param);
 
 	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn,
-		       "MCP response failure for mask parities, aborting\n");
+		DP_ERR(p_hwfn, "MCP response failure for mask parities, aborting\n");
 	} else if (resp != FW_MSG_CODE_OK) {
-		DP_ERR(p_hwfn,
-		       "MCP did not ack mask parity request. Old MFW?\n");
+		DP_ERR(p_hwfn, "MCP did not acknowledge mask parity request. Old MFW?\n");
 		rc = ECORE_INVAL;
 	}
 
 	return rc;
 }
 
+#define FLASH_PAGE_SIZE 0x1000 /* @DPDK */
+
 enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
-					u8 *p_buf, u32 len)
+					u8 *p_buf, u32 len)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	u32 bytes_left, offset, bytes_to_copy, buf_size;
-	u32 nvm_offset, resp, param;
-	struct ecore_ptt *p_ptt;
+	u32 nvm_offset = 0, resp = 0, param;
+	struct ecore_ptt *p_ptt;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
@@ -3141,12 +4003,13 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 	while (bytes_left > 0) {
 		bytes_to_copy = OSAL_MIN_T(u32, bytes_left,
 					   MCP_DRV_NVM_BUF_LEN);
-		nvm_offset = (addr + offset) | (bytes_to_copy <<
-						DRV_MB_PARAM_NVM_LEN_OFFSET);
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_NVM_OFFSET,
+			      addr + offset);
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_NVM_LEN, bytes_to_copy);
 		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
 					  DRV_MSG_CODE_NVM_READ_NVRAM,
 					  nvm_offset, &resp, &param, &buf_size,
-					  (u32 *)(p_buf + offset));
+					  (u32 *)(p_buf + offset), false);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_dev, false,
 				  "ecore_mcp_nvm_rd_cmd() failed, rc = %d\n",
@@ -3165,8 +4028,8 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 		/* This can be a lengthy process, and it's possible scheduler
 		 * isn't preemptible. Sleep a bit to prevent CPU hogging.
 		 */
-		if (bytes_left % 0x1000 <
-		    (bytes_left - buf_size) % 0x1000)
+		if (bytes_left % FLASH_PAGE_SIZE <
+		    (bytes_left - buf_size) % FLASH_PAGE_SIZE)
 			OSAL_MSLEEP(1);
 
 		offset += buf_size;
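The FLASH_PAGE_SIZE test above is a compact "did this chunk cross a 4 KB page boundary?" check: the remainder modulo the page size can only grow from one iteration to the next if the subtraction wrapped past a page edge, and only then does the loop sleep. The same test in isolation (hypothetical chunk sizes):

#include <stdio.h>

#define FLASH_PAGE_SIZE 0x1000

/* Returns 1 when consuming chunk_len bytes makes bytes_left cross a
 * flash-page boundary, mirroring the condition in ecore_mcp_nvm_read().
 */
static int crosses_page(unsigned int bytes_left, unsigned int chunk_len)
{
	return bytes_left % FLASH_PAGE_SIZE <
	       (bytes_left - chunk_len) % FLASH_PAGE_SIZE;
}

int main(void)
{
	/* 0x1010 -> 0xff0 crosses a page edge; 0xff0 -> 0xfd0 does not */
	printf("%d %d\n", crosses_page(0x1010, 0x20),
	       crosses_page(0xff0, 0x20));	/* prints: 1 0 */
	return 0;
}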
@@ -3183,10 +4046,15 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
 					u32 addr, u8 *p_buf, u32 *p_len)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt  *p_ptt;
+	struct ecore_ptt *p_ptt;
 	u32 resp = 0, param;
 	enum _ecore_status_t rc;
 
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
@@ -3194,8 +4062,8 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
 	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
 				  (cmd == ECORE_PHY_CORE_READ) ?
 				  DRV_MSG_CODE_PHY_CORE_READ :
-				  DRV_MSG_CODE_PHY_RAW_READ,
-				  addr, &resp, &param, p_len, (u32 *)p_buf);
+				  DRV_MSG_CODE_PHY_RAW_READ, addr, &resp,
+				  &param, p_len, (u32 *)p_buf, false);
 	if (rc != ECORE_SUCCESS)
 		DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
 
@@ -3207,23 +4075,16 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
 
 enum _ecore_status_t ecore_mcp_nvm_resp(struct ecore_dev *p_dev, u8 *p_buf)
 {
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-
 	OSAL_MEMCPY(p_buf, &p_dev->mcp_nvm_resp, sizeof(p_dev->mcp_nvm_resp));
-	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev, u32 addr)
+enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev,
+					    u32 addr)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt  *p_ptt;
+	struct ecore_ptt *p_ptt;
 	u32 resp = 0, param;
 	enum _ecore_status_t rc;
 
@@ -3235,24 +4096,8 @@ enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev, u32 addr)
 	p_dev->mcp_nvm_resp = resp;
 	ecore_ptt_release(p_hwfn, p_ptt);
 
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
-						  u32 addr)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-	u32 resp = 0, param;
-	enum _ecore_status_t rc;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_PUT_FILE_BEGIN, addr,
-			   &resp, &param);
-	p_dev->mcp_nvm_resp = resp;
-	ecore_ptt_release(p_hwfn, p_ptt);
+	/* nvm_info needs to be updated */
+	p_hwfn->nvm_info.valid = false;
 
 	return rc;
 }
@@ -3263,17 +4108,19 @@ enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
 enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 					 u32 addr, u8 *p_buf, u32 len)
 {
-	u32 buf_idx, buf_size, nvm_cmd, nvm_offset;
-	u32 resp = FW_MSG_CODE_ERROR, param;
+	u32 buf_idx, buf_size, nvm_cmd, nvm_offset, resp = 0, param;
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	enum _ecore_status_t rc = ECORE_INVAL;
-	struct ecore_ptt  *p_ptt;
+	struct ecore_ptt *p_ptt;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
 
 	switch (cmd) {
+	case ECORE_PUT_FILE_BEGIN:
+		nvm_cmd = DRV_MSG_CODE_NVM_PUT_FILE_BEGIN;
+		break;
 	case ECORE_PUT_FILE_DATA:
 		nvm_cmd = DRV_MSG_CODE_NVM_PUT_FILE_DATA;
 		break;
@@ -3283,6 +4130,9 @@ enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 	case ECORE_EXT_PHY_FW_UPGRADE:
 		nvm_cmd = DRV_MSG_CODE_EXT_PHY_FW_UPGRADE;
 		break;
+	case ECORE_ENCRYPT_PASSWORD:
+		nvm_cmd = DRV_MSG_CODE_ENCRYPT_PASSWORD;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Invalid nvm write command 0x%x\n",
 			  cmd);
@@ -3291,15 +4141,16 @@ enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 	}
 
 	buf_idx = 0;
+	buf_size = OSAL_MIN_T(u32, (len - buf_idx), MCP_DRV_NVM_BUF_LEN);
 	while (buf_idx < len) {
-		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
-				      MCP_DRV_NVM_BUF_LEN);
-		nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_OFFSET) |
-			      addr) +
-			     buf_idx;
+		if (cmd == ECORE_PUT_FILE_BEGIN)
+			nvm_offset = addr;
+		else
+			nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_OFFSET) |
+				      addr) + buf_idx;
 		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, nvm_offset,
 					  &resp, &param, buf_size,
					  (u32 *)&p_buf[buf_idx], true);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_dev, false,
 				  "ecore_mcp_nvm_write() failed, rc = %d\n",
@@ -3320,11 +4171,22 @@ enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 		/* This can be a lengthy process, and it's possible scheduler
 		 * isn't preemptible. Sleep a bit to prevent CPU hogging.
 		 */
-		if (buf_idx % 0x1000 >
-		    (buf_idx + buf_size) % 0x1000)
+		if (buf_idx % 0x1000 > (buf_idx + buf_size) % 0x1000)
 			OSAL_MSLEEP(1);
 
-		buf_idx += buf_size;
+		/* For MBI upgrade, MFW response includes the next buffer offset
+		 * to be delivered to MFW.
+		 */
+		if (param && cmd == ECORE_PUT_FILE_DATA) {
+			buf_idx = GET_MFW_FIELD(param,
+					FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET);
+			buf_size = GET_MFW_FIELD(param,
+					FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE);
+		} else {
+			buf_idx += buf_size;
+			buf_size = OSAL_MIN_T(u32, (len - buf_idx),
+					      MCP_DRV_NVM_BUF_LEN);
+		}
 	}
 
 	p_dev->mcp_nvm_resp = resp;
@@ -3338,10 +4200,15 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
 					 u32 addr, u8 *p_buf, u32 len)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	struct ecore_ptt *p_ptt;
 	u32 resp = 0, param, nvm_cmd;
-	struct ecore_ptt *p_ptt;
 	enum _ecore_status_t rc;
 
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
@@ -3349,7 +4216,7 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
 	nvm_cmd = (cmd == ECORE_PHY_CORE_WRITE) ? DRV_MSG_CODE_PHY_CORE_WRITE :
 			DRV_MSG_CODE_PHY_RAW_WRITE;
 	rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, addr,
-				  &resp, &param, len, (u32 *)p_buf);
+				  &resp, &param, len, (u32 *)p_buf, false);
 	if (rc != ECORE_SUCCESS)
 		DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
 	p_dev->mcp_nvm_resp = resp;
@@ -3358,37 +4225,22 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_nvm_set_secure_mode(struct ecore_dev *p_dev,
-						   u32 addr)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-	u32 resp = 0, param;
-	enum _ecore_status_t rc;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_SECURE_MODE, addr,
-			   &resp, &param);
-	p_dev->mcp_nvm_resp = resp;
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
 enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    u32 port, u32 addr, u32 offset,
 					    u32 len, u8 *p_buf)
 {
-	u32 bytes_left, bytes_to_copy, buf_size, nvm_offset;
+	u32 bytes_left, bytes_to_copy, buf_size, nvm_offset = 0;
 	u32 resp, param;
 	enum _ecore_status_t rc;
 
-	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
-			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
+	SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_PORT, port);
+	SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS, addr);
+
 	addr = offset;
 	offset = 0;
 	bytes_left = len;
@@ -3397,14 +4249,14 @@ enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
 					   MAX_I2C_TRANSACTION_SIZE);
 		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
 			       DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
-		nvm_offset |= ((addr + offset) <<
-				DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
-		nvm_offset |= (bytes_to_copy <<
-			       DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_OFFSET,
+			      (addr + offset));
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_SIZE,
+			      bytes_to_copy);
 		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
 					  DRV_MSG_CODE_TRANSCEIVER_READ,
 					  nvm_offset, &resp, &param, &buf_size,
-					  (u32 *)(p_buf + offset));
+					  (u32 *)(p_buf + offset), true);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, false,
 				  "Failed to send a transceiver read command to the MFW. rc = %d.\n",
@@ -3419,6 +4271,11 @@ enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
 
 		offset += buf_size;
 		bytes_left -= buf_size;
+
+		/* The MFW reads from the SFP using the slow I2C bus, so a short
+		 * sleep is needed to prevent CPU hogging.
+		 */
+		OSAL_MSLEEP(1);
 	}
 
 	return ECORE_SUCCESS;
@@ -3429,25 +4286,30 @@ enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
 					     u32 port, u32 addr, u32 offset,
 					     u32 len, u8 *p_buf)
 {
-	u32 buf_idx, buf_size, nvm_offset, resp, param;
+	u32 buf_idx, buf_size, nvm_offset = 0, resp, param;
 	enum _ecore_status_t rc;
 
-	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
-			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
+	SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_PORT, port);
+	SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS, addr);
+
 	buf_idx = 0;
 	while (buf_idx < len) {
 		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
 				      MAX_I2C_TRANSACTION_SIZE);
 		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
 			       DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
-		nvm_offset |= ((offset + buf_idx) <<
-				 DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
-		nvm_offset |= (buf_size <<
-			       DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_OFFSET,
+			      (offset + buf_idx));
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_SIZE,
+			      buf_size);
 		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt,
 					  DRV_MSG_CODE_TRANSCEIVER_WRITE,
 					  nvm_offset, &resp, &param, buf_size,
-					  (u32 *)&p_buf[buf_idx]);
+					  (u32 *)&p_buf[buf_idx], false);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, false,
 				  "Failed to send a transceiver write command to the MFW. rc = %d.\n",
@@ -3461,6 +4323,11 @@ enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
 			return ECORE_UNKNOWN_ERROR;
 
 		buf_idx += buf_size;
+
+		/* The MFW writes to the SFP using the slow I2C bus, so a short
+		 * sleep is needed to prevent CPU hogging.
+		 */
+		OSAL_MSLEEP(1);
 	}
 
 	return ECORE_SUCCESS;
@@ -3471,9 +4338,14 @@ enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
 					 u16 gpio, u32 *gpio_val)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 drv_mb_param = 0, rsp = 0;
+	u32 drv_mb_param = 0, rsp;
+
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
 
-	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_NUMBER, gpio);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_READ,
 			   drv_mb_param, &rsp, gpio_val);
@@ -3492,10 +4364,15 @@ enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
 					  u16 gpio, u16 gpio_val)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 drv_mb_param = 0, param, rsp = 0;
+	u32 drv_mb_param = 0, param, rsp;
+
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
 
-	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET) |
-		       (gpio_val << DRV_MB_PARAM_GPIO_VALUE_OFFSET);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_NUMBER, gpio);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_VALUE, gpio_val);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_WRITE,
 			   drv_mb_param, &rsp, &param);
@@ -3517,17 +4394,20 @@ enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
 	u32 drv_mb_param = 0, rsp, val = 0;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	drv_mb_param = gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET;
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_NUMBER, gpio);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_INFO,
 			   drv_mb_param, &rsp, &val);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*gpio_direction = (val & DRV_MB_PARAM_GPIO_DIRECTION_MASK) >>
-			   DRV_MB_PARAM_GPIO_DIRECTION_OFFSET;
-	*gpio_ctrl = (val & DRV_MB_PARAM_GPIO_CTRL_MASK) >>
-		      DRV_MB_PARAM_GPIO_CTRL_OFFSET;
+	*gpio_direction = GET_MFW_FIELD(val, DRV_MB_PARAM_GPIO_DIRECTION);
+	*gpio_ctrl = GET_MFW_FIELD(val, DRV_MB_PARAM_GPIO_CTRL);
 
 	if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
 		return ECORE_UNKNOWN_ERROR;
@@ -3541,8 +4421,8 @@ enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
 	u32 drv_mb_param = 0, rsp, param;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	drv_mb_param = (DRV_MB_PARAM_BIST_REGISTER_TEST <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_BIST_TEST_INDEX,
+		      DRV_MB_PARAM_BIST_REGISTER_TEST);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 			   drv_mb_param, &rsp, &param);
@@ -3560,11 +4440,11 @@ enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
-	u32 drv_mb_param, rsp, param;
+	u32 drv_mb_param = 0, rsp, param;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	drv_mb_param = (DRV_MB_PARAM_BIST_CLOCK_TEST <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_BIST_TEST_INDEX,
+		      DRV_MB_PARAM_BIST_CLOCK_TEST);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 			   drv_mb_param, &rsp, &param);
@@ -3579,54 +4459,8 @@ enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
-	struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 *num_images)
-{
-	u32 drv_mb_param = 0, rsp = 0;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	drv_mb_param = (DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
-			   drv_mb_param, &rsp, num_images);
-
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK))
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
-	struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-	struct bist_nvm_image_att *p_image_att, u32 image_index)
-{
-	u32 buf_size, nvm_offset, resp, param;
-	enum _ecore_status_t rc;
-
-	nvm_offset = (DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX <<
-		      DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-	nvm_offset |= (image_index <<
-		       DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_OFFSET);
-	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
-				  nvm_offset, &resp, &param, &buf_size,
-				  (u32 *)p_image_att);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (((resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
-	    (p_image_att->return_code != 1))
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t
-ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt, u32 *num_images)
+enum _ecore_status_t ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
+						       struct ecore_ptt *p_ptt, u32 *num_images)
 {
 	u32 drv_mb_param = 0, rsp;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
@@ -3647,11 +4481,9 @@ ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt,
-				 struct bist_nvm_image_att *p_image_att,
-				 u32 image_index)
+enum _ecore_status_t ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
+						      struct ecore_ptt *p_ptt,
+						      struct bist_nvm_image_att *p_image_att, u32 image_index)
 {
 	u32 buf_size, nvm_offset = 0, resp, param;
 	enum _ecore_status_t rc;
@@ -3662,7 +4494,7 @@ ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
 		      image_index);
 	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 				  nvm_offset, &resp, &param, &buf_size,
-				  (u32 *)p_image_att);
+				  (u32 *)p_image_att, false);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -3686,7 +4518,8 @@ enum _ecore_status_t ecore_mcp_nvm_info_populate(struct ecore_hwfn *p_hwfn)
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) ||
-	    CHIP_REV_IS_TEDIBEAR(p_hwfn->p_dev))
+	    CHIP_REV_IS_TEDIBEAR(p_hwfn->p_dev) ||
+	    p_hwfn->mcp_info->recovery_mode)
 		return ECORE_SUCCESS;
 #endif
 
@@ -3751,6 +4584,18 @@ enum _ecore_status_t ecore_mcp_nvm_info_populate(struct ecore_hwfn *p_hwfn)
 	return rc;
 }
 
+void ecore_mcp_nvm_info_free(struct ecore_hwfn *p_hwfn)
+{
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) ||
+	    CHIP_REV_IS_TEDIBEAR(p_hwfn->p_dev))
+		return;
+#endif
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->nvm_info.image_att);
+	p_hwfn->nvm_info.image_att = OSAL_NULL;
+	p_hwfn->nvm_info.valid = false;
+}
+
 enum _ecore_status_t
 ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn,
 			    enum ecore_nvm_images image_id,
@@ -3777,7 +4622,7 @@ ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn,
 		type = NVM_TYPE_DEFAULT_CFG;
 		break;
 	case ECORE_NVM_IMAGE_NVM_META:
-		type = NVM_TYPE_META;
+		type = NVM_TYPE_NVM_META;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false, "Unknown request of image_id %08x\n",
@@ -3832,7 +4677,7 @@ enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
 	}
 
 	return ecore_mcp_nvm_read(p_hwfn->p_dev, image_att.start_addr,
-				  (u8 *)p_buffer, image_att.length);
+				  p_buffer, image_att.length);
 }
 
 enum _ecore_status_t
@@ -3861,14 +4706,13 @@ ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < p_temp_info->num_sensors; i++) {
 		val = mfw_temp_info.sensor[i];
 		p_temp_sensor = &p_temp_info->sensors[i];
-		p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
-						 SENSOR_LOCATION_OFFSET;
-		p_temp_sensor->threshold_high = (val & THRESHOLD_HIGH_MASK) >>
-						THRESHOLD_HIGH_OFFSET;
-		p_temp_sensor->critical = (val & CRITICAL_TEMPERATURE_MASK) >>
-					  CRITICAL_TEMPERATURE_OFFSET;
-		p_temp_sensor->current_temp = (val & CURRENT_TEMP_MASK) >>
-					      CURRENT_TEMP_OFFSET;
+		p_temp_sensor->sensor_location =
+			GET_MFW_FIELD(val, SENSOR_LOCATION);
+		p_temp_sensor->threshold_high =
+			GET_MFW_FIELD(val, THRESHOLD_HIGH);
+		p_temp_sensor->critical =
+			GET_MFW_FIELD(val, CRITICAL_TEMPERATURE);
+		p_temp_sensor->current_temp = GET_MFW_FIELD(val, CURRENT_TEMP);
 	}
 
 	return ECORE_SUCCESS;
@@ -3884,7 +4728,7 @@ enum _ecore_status_t ecore_mcp_get_mba_versions(
 
 	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_MBA_VERSION,
 				  0, &resp, &param, &buf_size,
-				  &p_mba_vers->mba_vers[0]);
+				  &p_mba_vers->mba_vers[0], false);
 
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -3902,10 +4746,24 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
 					      u64 *num_events)
 {
-	u32 rsp;
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_val64 val64;
+	enum _ecore_status_t rc;
+
+	OSAL_MEMSET(&mb_params, 0, sizeof(struct ecore_mcp_mb_params));
+	mb_params.cmd = DRV_MSG_CODE_MEM_ECC_EVENTS;
+	mb_params.p_data_dst = &val64;
+	mb_params.data_dst_size = sizeof(val64);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*num_events = ((u64)val64.hi << 32) | val64.lo;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Number of memory ECC events: %" PRId64 "\n", *num_events);
 
-	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MEM_ECC_EVENTS,
-			     0, &rsp, (u32 *)num_events);
+	return ECORE_SUCCESS;
 }
 
 static enum resource_id_enum
@@ -3940,9 +4798,15 @@ ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
 	case ECORE_ILT:
 		mfw_res_id = RESOURCE_ILT_E;
 		break;
-	case ECORE_LL2_QUEUE:
+	case ECORE_LL2_RAM_QUEUE:
 		mfw_res_id = RESOURCE_LL2_QUEUE_E;
 		break;
+	case ECORE_LL2_CTX_QUEUE:
+		mfw_res_id = RESOURCE_LL2_CTX_E;
+		break;
+	case ECORE_VF_RDMA_CNQ_RAM:
+		mfw_res_id = RESOURCE_VF_CNQS;
+		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
 		/* CNQ/CMDQS are the same resource */
@@ -3954,6 +4818,9 @@ ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
 	case ECORE_BDQ:
 		mfw_res_id = RESOURCE_BDQ_E;
 		break;
+	case ECORE_VF_MAC_ADDR:
+		mfw_res_id = RESOURCE_VF_MAC_ADDR;
+		break;
 	default:
 		break;
 	}
@@ -3963,11 +4830,6 @@ ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
 
 #define ECORE_RESC_ALLOC_VERSION_MAJOR 2
 #define ECORE_RESC_ALLOC_VERSION_MINOR 0
-#define ECORE_RESC_ALLOC_VERSION				\
-	((ECORE_RESC_ALLOC_VERSION_MAJOR <<			\
-	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_OFFSET) |	\
-	 (ECORE_RESC_ALLOC_VERSION_MINOR <<			\
-	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_OFFSET))
 
 struct ecore_resc_alloc_in_params {
 	u32 cmd;
@@ -3985,27 +4847,6 @@ struct ecore_resc_alloc_out_params {
 	u32 flags;
 };
 
-#define ECORE_RECOVERY_PROLOG_SLEEP_MS 100
-
-enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
-	enum _ecore_status_t rc;
-
-	/* Allow ongoing PCIe transactions to complete */
-	OSAL_MSLEEP(ECORE_RECOVERY_PROLOG_SLEEP_MS);
-
-	/* Clear the PF's internal FID_enable in the PXP */
-	rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_ptt, false);
-	if (rc != ECORE_SUCCESS)
-		DP_NOTICE(p_hwfn, false,
-			  "ecore_pglueb_set_pfid_enable() failed. rc = %d.\n",
-			  rc);
-
-	return rc;
-}
-
 static enum _ecore_status_t
 ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt,
@@ -4041,7 +4882,12 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = p_in_params->cmd;
-	mb_params.param = ECORE_RESC_ALLOC_VERSION;
+	SET_MFW_FIELD(mb_params.param,
+		      DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR,
+		      ECORE_RESC_ALLOC_VERSION_MAJOR);
+	SET_MFW_FIELD(mb_params.param,
+		      DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR,
+		      ECORE_RESC_ALLOC_VERSION_MINOR);
 	mb_params.p_data_src = &mfw_resc_info;
 	mb_params.data_src_size = sizeof(mfw_resc_info);
 	mb_params.p_data_dst = mb_params.p_data_src;
@@ -4143,14 +4989,162 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 			     &mcp_resp, &mcp_param);
 }
 
-static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
-						   struct ecore_ptt *p_ptt,
-						   u32 param, u32 *p_mcp_resp,
-						   u32 *p_mcp_param)
+static enum _ecore_status_t
+ecore_mcp_initiate_vf_flr(struct ecore_hwfn *p_hwfn,
+			  struct ecore_ptt *p_ptt, u32 *vfs_to_flr)
 {
+	u8 i, vf_bitmap_size, vf_bitmap_size_in_bytes;
+	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
 
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_RESOURCE_CMD, param,
+	if (ECORE_IS_E4(p_hwfn->p_dev)) {
+		vf_bitmap_size = VF_BITMAP_SIZE_IN_DWORDS;
+		vf_bitmap_size_in_bytes = VF_BITMAP_SIZE_IN_BYTES;
+	} else {
+		vf_bitmap_size = EXT_VF_BITMAP_SIZE_IN_DWORDS;
+		vf_bitmap_size_in_bytes = EXT_VF_BITMAP_SIZE_IN_BYTES;
+	}
+
+	for (i = 0; i < vf_bitmap_size; i++)
+		DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IOV),
+			   "FLR-ing VFs [%08x,...,%08x] - %08x\n",
+			   i * 32, (i + 1) * 32 - 1, vfs_to_flr[i]);
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_INITIATE_VF_FLR;
+	mb_params.p_data_src = vfs_to_flr;
+	mb_params.data_src_size = vf_bitmap_size_in_bytes;
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send an initiate VF FLR request\n");
+		return rc;
+	}
+
+	if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The VF FLR command is unsupported by the MFW\n");
+		rc = ECORE_NOTIMPL;
+	} else if (mb_params.mcp_resp != (u32)FW_MSG_CODE_INITIATE_VF_FLR_OK) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to initiate VF FLR [resp 0x%08x]\n",
+			  mb_params.mcp_resp);
+		rc = ECORE_INVAL;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mcp_vf_flr(struct ecore_hwfn *p_hwfn,
+				      struct ecore_ptt *p_ptt, u8 rel_vf_id)
+{
+	u32 vfs_to_flr[EXT_VF_BITMAP_SIZE_IN_DWORDS];
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u8 abs_vf_id, index, bit;
+	enum _ecore_status_t rc;
+
+	if (!IS_PF_SRIOV(p_hwfn)) {
+		DP_ERR(p_hwfn,
+		       "Failed to initiate VF FLR: SRIOV is disabled\n");
+		return ECORE_INVAL;
+	}
+
+	if (rel_vf_id >= p_dev->p_iov_info->total_vfs) {
+		DP_ERR(p_hwfn,
+		       "Failed to initiate VF FLR: VF ID is too high [total_vfs %d]\n",
+		       p_dev->p_iov_info->total_vfs);
+		return ECORE_INVAL;
+	}
+
+	abs_vf_id = (u8)p_dev->p_iov_info->first_vf_in_pf + rel_vf_id;
+	index = abs_vf_id / 32;
+	bit = abs_vf_id % 32;
+	OSAL_MEMSET(vfs_to_flr, 0, EXT_VF_BITMAP_SIZE_IN_BYTES);
+	vfs_to_flr[index] |= (1 << bit);
+
+	rc = ecore_mcp_initiate_vf_flr(p_hwfn, p_ptt, vfs_to_flr);
+	return rc;
+}
+
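The index/bit arithmetic in ecore_mcp_vf_flr() above places one VF per bit in an array of 32-bit words before handing the bitmap to the MFW. A minimal sketch of the same mapping (the bitmap size here is illustrative, not the HSI value):

#include <stdint.h>
#include <stdio.h>

#define DEMO_BITMAP_DWORDS 8	/* illustrative, not EXT_VF_BITMAP_SIZE_IN_DWORDS */

int main(void)
{
	uint32_t vfs_to_flr[DEMO_BITMAP_DWORDS] = { 0 };
	unsigned int abs_vf_id = 37;	/* e.g. first_vf_in_pf + rel_vf_id */
	unsigned int index = abs_vf_id / 32;
	unsigned int bit = abs_vf_id % 32;

	vfs_to_flr[index] |= 1u << bit;
	printf("dword %u = 0x%08x\n", index, vfs_to_flr[index]);
	/* prints: dword 1 = 0x00000020 */
	return 0;
}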
+enum _ecore_status_t ecore_mcp_get_lldp_mac(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u8 lldp_mac_addr[ECORE_ETH_ALEN])
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac lldp_mac;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_GET_LLDP_MAC;
+	mb_params.p_data_dst = &lldp_mac;
+	mb_params.data_dst_size = sizeof(lldp_mac);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	if (mb_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_NOTICE(p_hwfn, false,
+			  "MFW lacks support for the GET_LLDP_MAC command [resp 0x%08x]\n",
+			  mb_params.mcp_resp);
+		return ECORE_INVAL;
+	}
+
+	*(u16 *)lldp_mac_addr = OSAL_BE16_TO_CPU(*(u16 *)&lldp_mac.mac_upper);
+	*(u32 *)(lldp_mac_addr + sizeof(u16)) =
+		OSAL_BE32_TO_CPU(lldp_mac.mac_lower);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "LLDP MAC address is %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx\n",
+		   lldp_mac_addr[0], lldp_mac_addr[1], lldp_mac_addr[2],
+		   lldp_mac_addr[3], lldp_mac_addr[4], lldp_mac_addr[5]);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_set_lldp_mac(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u8 lldp_mac_addr[ECORE_ETH_ALEN])
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac lldp_mac;
+	enum _ecore_status_t rc;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Configuring LLDP MAC address to %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx\n",
+		   lldp_mac_addr[0], lldp_mac_addr[1], lldp_mac_addr[2],
+		   lldp_mac_addr[3], lldp_mac_addr[4], lldp_mac_addr[5]);
+
+	OSAL_MEM_ZERO(&lldp_mac, sizeof(lldp_mac));
+	lldp_mac.mac_upper = OSAL_CPU_TO_BE16(*(u16 *)lldp_mac_addr);
+	lldp_mac.mac_lower =
+		OSAL_CPU_TO_BE32(*(u32 *)(lldp_mac_addr + sizeof(u16)));
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_SET_LLDP_MAC;
+	mb_params.p_data_src = &lldp_mac;
+	mb_params.data_src_size = sizeof(lldp_mac);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	if (mb_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_NOTICE(p_hwfn, false,
+			  "MFW lacks support for the SET_LLDP_MAC command [resp 0x%08x]\n",
+			  mb_params.mcp_resp);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
+						   struct ecore_ptt *p_ptt,
+						   u32 param, u32 *p_mcp_resp,
+						   u32 *p_mcp_param)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_RESOURCE_CMD, param,
 			   p_mcp_resp, p_mcp_param);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -4173,35 +5167,36 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
+static enum _ecore_status_t
 __ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		      struct ecore_resc_lock_params *p_params)
 {
-	u32 param = 0, mcp_resp = 0, mcp_param = 0;
-	u8 opcode;
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode, timeout;
 	enum _ecore_status_t rc;
 
 	switch (p_params->timeout) {
 	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
 		opcode = RESOURCE_OPCODE_REQ;
-		p_params->timeout = 0;
+		timeout = 0;
 		break;
 	case ECORE_MCP_RESC_LOCK_TO_NONE:
 		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
-		p_params->timeout = 0;
+		timeout = 0;
 		break;
 	default:
 		opcode = RESOURCE_OPCODE_REQ_W_AGING;
+		timeout = p_params->timeout;
 		break;
 	}
 
 	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
-	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_AGE, p_params->timeout);
+	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource lock request: param 0x%08x [age %d, opcode %d, resource %d]\n",
-		   param, p_params->timeout, opcode, p_params->resource);
+		   param, timeout, opcode, p_params->resource);
 
 	/* Attempt to acquire the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -4245,7 +5240,7 @@ ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		/* No need for an interval before the first iteration */
 		if (retry_cnt) {
 			if (p_params->sleep_b4_retry) {
-				u16 retry_interval_in_ms =
+				u32 retry_interval_in_ms =
 					DIV_ROUND_UP(p_params->retry_interval,
 						     1000);
 
@@ -4256,44 +5251,16 @@ ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		}
 
 		rc = __ecore_mcp_resc_lock(p_hwfn, p_ptt, p_params);
+		if (rc == ECORE_AGAIN)
+			continue;
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
 		if (p_params->b_granted)
-			break;
+			return ECORE_SUCCESS;
 	} while (retry_cnt++ < p_params->retry_num);
 
-	return ECORE_SUCCESS;
-}
-
-void ecore_mcp_resc_lock_default_init(struct ecore_resc_lock_params *p_lock,
-				      struct ecore_resc_unlock_params *p_unlock,
-				      enum ecore_resc_lock resource,
-				      bool b_is_permanent)
-{
-	if (p_lock != OSAL_NULL) {
-		OSAL_MEM_ZERO(p_lock, sizeof(*p_lock));
-
-		/* Permanent resources don't require aging, and there's no
-		 * point in trying to acquire them more than once since it's
-		 * unexpected another entity would release them.
-		 */
-		if (b_is_permanent) {
-			p_lock->timeout = ECORE_MCP_RESC_LOCK_TO_NONE;
-		} else {
-			p_lock->retry_num = ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT;
-			p_lock->retry_interval =
-				ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT;
-			p_lock->sleep_b4_retry = true;
-		}
-
-		p_lock->resource = resource;
-	}
-
-	if (p_unlock != OSAL_NULL) {
-		OSAL_MEM_ZERO(p_unlock, sizeof(*p_unlock));
-		p_unlock->resource = resource;
-	}
+	return ECORE_TIMEOUT;
 }
 
 enum _ecore_status_t
@@ -4348,12 +5315,114 @@ ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return ECORE_SUCCESS;
 }
 
+void ecore_mcp_resc_lock_default_init(struct ecore_resc_lock_params *p_lock,
+				      struct ecore_resc_unlock_params *p_unlock,
+				      enum ecore_resc_lock resource,
+				      bool b_is_permanent)
+{
+	if (p_lock != OSAL_NULL) {
+		OSAL_MEM_ZERO(p_lock, sizeof(*p_lock));
+
+		/* Permanent resources don't require aging, and there's no
+		 * point in trying to acquire them more than once since it's
+		 * unexpected another entity would release them.
+ */ + if (b_is_permanent) { + p_lock->timeout = ECORE_MCP_RESC_LOCK_TO_NONE; + } else { + p_lock->retry_num = ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT; + p_lock->retry_interval = + ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT; + p_lock->sleep_b4_retry = true; + } + + p_lock->resource = resource; + } + + if (p_unlock != OSAL_NULL) { + OSAL_MEM_ZERO(p_unlock, sizeof(*p_unlock)); + p_unlock->resource = resource; + } +} + +enum _ecore_status_t +ecore_mcp_update_fcoe_cvid(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u16 vlan) +{ + u32 resp = 0, param = 0, mcp_param = 0; + enum _ecore_status_t rc; + + SET_MFW_FIELD(param, DRV_MB_PARAM_FCOE_CVID, vlan); + rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OEM_UPDATE_FCOE_CVID, + param, &resp, &mcp_param); + if (rc != ECORE_SUCCESS) + DP_ERR(p_hwfn, "Failed to update fcoe vlan, rc = %d\n", rc); + + return rc; +} + +enum _ecore_status_t +ecore_mcp_update_fcoe_fabric_name(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, u8 *wwn) +{ + struct ecore_mcp_mb_params mb_params; + struct mcp_wwn fabric_name; + enum _ecore_status_t rc; + + OSAL_MEM_ZERO(&fabric_name, sizeof(fabric_name)); + fabric_name.wwn_upper = *(u32 *)wwn; + fabric_name.wwn_lower = *(u32 *)(wwn + 4); + + OSAL_MEM_ZERO(&mb_params, sizeof(mb_params)); + mb_params.cmd = DRV_MSG_CODE_OEM_UPDATE_FCOE_FABRIC_NAME; + mb_params.p_data_src = &fabric_name; + mb_params.data_src_size = sizeof(fabric_name); + rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); + if (rc != ECORE_SUCCESS) + DP_ERR(p_hwfn, "Failed to update fcoe wwn, rc = %d\n", rc); + + return rc; +} + +void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u32 offset, u32 val) +{ + enum _ecore_status_t rc = ECORE_SUCCESS; + u32 dword = val; + struct ecore_mcp_mb_params mb_params; + + OSAL_MEMSET(&mb_params, 0, sizeof(struct ecore_mcp_mb_params)); + mb_params.cmd = DRV_MSG_CODE_WRITE_WOL_REG; + mb_params.param = offset; + mb_params.p_data_src = &dword; + mb_params.data_src_size = sizeof(dword); + + rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); + if (rc != ECORE_SUCCESS) { + DP_NOTICE(p_hwfn, false, + "WOL write request failed, rc = %d\n", rc); + } + + if (mb_params.mcp_resp != FW_MSG_CODE_WOL_READ_WRITE_OK) { + DP_NOTICE(p_hwfn, false, + "Failed to write value 0x%x to offset 0x%x [mcp_resp 0x%x]\n", + val, offset, mb_params.mcp_resp); + rc = ECORE_UNKNOWN_ERROR; + } +} + bool ecore_mcp_is_smart_an_supported(struct ecore_hwfn *p_hwfn) { return !!(p_hwfn->mcp_info->capabilities & FW_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ); } +bool ecore_mcp_rlx_odr_supported(struct ecore_hwfn *p_hwfn) +{ + return !!(p_hwfn->mcp_info->capabilities & + FW_MB_PARAM_FEATURE_SUPPORT_RELAXED_ORD); +} + enum _ecore_status_t ecore_mcp_get_capabilities(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { @@ -4377,7 +5446,12 @@ enum _ecore_status_t ecore_mcp_set_capabilities(struct ecore_hwfn *p_hwfn, features = DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SMARTLINQ | DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EEE | - DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_VLINK; + DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_VLINK | + DRV_MB_PARAM_FEATURE_SUPPORT_PORT_FEC_CONTROL; + + if (ECORE_IS_E5(p_hwfn->p_dev)) + features |= + DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EXT_SPEED_FEC_CONTROL; return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_FEATURE_SUPPORT, features, &mcp_resp, &mcp_param); @@ -4525,29 +5599,403 @@ enum _ecore_status_t ecore_mcp_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, -
u32 offset, u32 val) +enum _ecore_status_t +ecore_mcp_ind_table_lock(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 retry_num, + u32 retry_interval) +{ + struct ecore_resc_lock_params resc_lock_params; + enum _ecore_status_t rc; + + OSAL_MEM_ZERO(&resc_lock_params, + sizeof(struct ecore_resc_lock_params)); + resc_lock_params.resource = ECORE_RESC_LOCK_IND_TABLE; + if (!retry_num) + retry_num = ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT; + resc_lock_params.retry_num = retry_num; + + if (!retry_interval) + retry_interval = ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT; + resc_lock_params.retry_interval = retry_interval; + + rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params); + if (rc == ECORE_SUCCESS && !resc_lock_params.b_granted) { + DP_NOTICE(p_hwfn, false, + "Failed to acquire the resource lock for IDT access\n"); + return ECORE_BUSY; + } + return rc; +} + +enum _ecore_status_t +ecore_mcp_ind_table_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) +{ + struct ecore_resc_unlock_params resc_unlock_params; + enum _ecore_status_t rc; + + OSAL_MEM_ZERO(&resc_unlock_params, + sizeof(struct ecore_resc_unlock_params)); + resc_unlock_params.resource = ECORE_RESC_LOCK_IND_TABLE; + rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt, + &resc_unlock_params); + return rc; +} + +enum _ecore_status_t +ecore_mcp_nvm_get_cfg(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u16 option_id, u8 entity_id, u16 flags, u8 *p_buf, + u32 *p_len) +{ + u32 mb_param = 0, resp, param; + enum _ecore_status_t rc; + + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ID, option_id); + if (flags & ECORE_NVM_CFG_OPTION_INIT) + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_INIT, 1); + if (flags & ECORE_NVM_CFG_OPTION_FREE) + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_FREE, 1); + if (flags & ECORE_NVM_CFG_OPTION_ENTITY_SEL) { + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_SEL, + 1); + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_ID, + entity_id); + } + + rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, + DRV_MSG_CODE_GET_NVM_CFG_OPTION, mb_param, + &resp, ¶m, p_len, (u32 *)p_buf, false); + + return rc; +} + +enum _ecore_status_t +ecore_mcp_nvm_set_cfg(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u16 option_id, u8 entity_id, u16 flags, u8 *p_buf, + u32 len) +{ + u32 mb_param = 0, resp, param; + enum _ecore_status_t rc; + + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ID, option_id); + if (flags & ECORE_NVM_CFG_OPTION_ALL) + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ALL, 1); + if (flags & ECORE_NVM_CFG_OPTION_INIT) + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_INIT, 1); + if (flags & ECORE_NVM_CFG_OPTION_COMMIT) + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT, 1); + if (flags & ECORE_NVM_CFG_OPTION_FREE) + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_FREE, 1); + if (flags & ECORE_NVM_CFG_OPTION_ENTITY_SEL) { + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_SEL, + 1); + SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_ID, + entity_id); + } + if (flags & ECORE_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL) { + if (!ecore_mcp_is_restore_default_cfg_supported(p_hwfn)) { + DP_ERR(p_hwfn, "Restoring nvm config to default value is unsupported by the MFW\n"); + return ECORE_NOTIMPL; + } + SET_MFW_FIELD(mb_param, + DRV_MB_PARAM_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL, + 1); + } + + rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, + DRV_MSG_CODE_SET_NVM_CFG_OPTION, mb_param, + &resp, ¶m, len, (u32 *)p_buf, false); + + return rc; +} + +enum 
ecore_perm_mac_type { + ECORE_PERM_MAC_TYPE_PF, + ECORE_PERM_MAC_TYPE_BMC, + ECORE_PERM_MAC_TYPE_VF, + ECORE_PERM_MAC_TYPE_LLDP, +}; + +static enum _ecore_status_t +ecore_mcp_get_permanent_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + enum ecore_perm_mac_type type, u32 index, + u8 mac_addr[ECORE_ETH_ALEN]) { - enum _ecore_status_t rc = ECORE_SUCCESS; - u32 dword = val; struct ecore_mcp_mb_params mb_params; + struct mcp_mac perm_mac; + u32 mfw_type; + enum _ecore_status_t rc; - OSAL_MEMSET(&mb_params, 0, sizeof(struct ecore_mcp_mb_params)); - mb_params.cmd = DRV_MSG_CODE_WRITE_WOL_REG; - mb_params.param = offset; - mb_params.p_data_src = &dword; - mb_params.data_src_size = sizeof(dword); + OSAL_MEM_ZERO(&mb_params, sizeof(mb_params)); + mb_params.cmd = DRV_MSG_CODE_GET_PERM_MAC; + switch (type) { + case ECORE_PERM_MAC_TYPE_PF: + mfw_type = DRV_MSG_CODE_GET_PERM_MAC_TYPE_PF; + break; + case ECORE_PERM_MAC_TYPE_BMC: + mfw_type = DRV_MSG_CODE_GET_PERM_MAC_TYPE_BMC; + break; + case ECORE_PERM_MAC_TYPE_VF: + mfw_type = DRV_MSG_CODE_GET_PERM_MAC_TYPE_VF; + break; + case ECORE_PERM_MAC_TYPE_LLDP: + mfw_type = DRV_MSG_CODE_GET_PERM_MAC_TYPE_LLDP; + break; + default: + DP_ERR(p_hwfn, "Unexpected type of permanent MAC [%d]\n", type); + return ECORE_INVAL; + } + SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_GET_PERM_MAC_TYPE, + mfw_type); + SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_GET_PERM_MAC_INDEX, index); + mb_params.p_data_dst = &perm_mac; + mb_params.data_dst_size = sizeof(perm_mac); + rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); + if (rc != ECORE_SUCCESS) + return rc; + + if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) { + DP_INFO(p_hwfn, + "The GET_PERM_MAC command is unsupported by the MFW\n"); + return ECORE_NOTIMPL; + } else if (mb_params.mcp_resp != (u32)FW_MSG_CODE_GET_PERM_MAC_OK) { + DP_NOTICE(p_hwfn, false, + "Failed to get a permanent MAC address [type %d, index %d, resp 0x%08x]\n", + type, index, mb_params.mcp_resp); + return ECORE_INVAL; + } + + *(u16 *)mac_addr = OSAL_BE16_TO_CPU(*(u16 *)&perm_mac.mac_upper); + *(u32 *)(mac_addr + sizeof(u16)) = OSAL_BE32_TO_CPU(perm_mac.mac_lower); + + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "Permanent MAC address [type %d, index %d] is %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx\n", + type, index, mac_addr[0], mac_addr[1], mac_addr[2], + mac_addr[3], mac_addr[4], mac_addr[5]); + + return ECORE_SUCCESS; +} + +enum _ecore_status_t ecore_mcp_get_perm_vf_mac(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u32 rel_vf_id, + u8 mac_addr[ECORE_ETH_ALEN]) +{ + u32 index; + + if (!RESC_NUM(p_hwfn, ECORE_VF_MAC_ADDR)) { + DP_INFO(p_hwfn, "There are no permanent VF MAC addresses\n"); + return ECORE_NORESOURCES; + } + + if (rel_vf_id >= RESC_NUM(p_hwfn, ECORE_VF_MAC_ADDR)) { + DP_INFO(p_hwfn, + "There are permanent VF MAC addresses for only the first %d VFs\n", + RESC_NUM(p_hwfn, ECORE_VF_MAC_ADDR)); + return ECORE_NORESOURCES; + } + + index = RESC_START(p_hwfn, ECORE_VF_MAC_ADDR) + rel_vf_id; + return ecore_mcp_get_permanent_mac(p_hwfn, p_ptt, + ECORE_PERM_MAC_TYPE_VF, index, + mac_addr); +} +#define ECORE_MCP_DBG_DATA_MAX_SIZE MCP_DRV_NVM_BUF_LEN +#define ECORE_MCP_DBG_DATA_MAX_HEADER_SIZE sizeof(u32) +#define ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE \ + (ECORE_MCP_DBG_DATA_MAX_SIZE - ECORE_MCP_DBG_DATA_MAX_HEADER_SIZE) + +static enum _ecore_status_t +__ecore_mcp_send_debug_data(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u8 *p_buf, u8 size) +{ + struct ecore_mcp_mb_params mb_params; + enum _ecore_status_t rc; + + if (size > 
ECORE_MCP_DBG_DATA_MAX_SIZE) { + DP_ERR(p_hwfn, + "Debug data size is %d while it should not exceed %d\n", + size, ECORE_MCP_DBG_DATA_MAX_SIZE); + return ECORE_INVAL; + } + + OSAL_MEM_ZERO(&mb_params, sizeof(mb_params)); + mb_params.cmd = DRV_MSG_CODE_DEBUG_DATA_SEND; + SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_DEBUG_DATA_SEND_SIZE, size); + mb_params.p_data_src = p_buf; + mb_params.data_src_size = size; rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params); - if (rc != ECORE_SUCCESS) { + if (rc != ECORE_SUCCESS) + return rc; + + if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) { + DP_INFO(p_hwfn, + "The DEBUG_DATA_SEND command is unsupported by the MFW\n"); + return ECORE_NOTIMPL; + } else if (mb_params.mcp_resp == (u32)FW_MSG_CODE_DEBUG_NOT_ENABLED) { + DP_INFO(p_hwfn, + "The DEBUG_DATA_SEND command is not enabled\n"); + return ECORE_ABORTED; + } else if (mb_params.mcp_resp != (u32)FW_MSG_CODE_DEBUG_DATA_SEND_OK) { DP_NOTICE(p_hwfn, false, - "Failed to wol write request, rc = %d\n", rc); + "Failed to send debug data to the MFW [resp 0x%08x]\n", + mb_params.mcp_resp); + return ECORE_INVAL; } - if (mb_params.mcp_resp != FW_MSG_CODE_WOL_READ_WRITE_OK) { + return ECORE_SUCCESS; +} + +enum ecore_mcp_dbg_data_type { + ECORE_MCP_DBG_DATA_TYPE_RAW, +}; + +/* Header format: [31:28] PFID, [27:20] flags, [19:12] type, [11:0] S/N */ +#define ECORE_MCP_DBG_DATA_HDR_SN_OFFSET 0 +#define ECORE_MCP_DBG_DATA_HDR_SN_MASK 0x00000fff +#define ECORE_MCP_DBG_DATA_HDR_TYPE_OFFSET 12 +#define ECORE_MCP_DBG_DATA_HDR_TYPE_MASK 0x000ff000 +#define ECORE_MCP_DBG_DATA_HDR_FLAGS_OFFSET 20 +#define ECORE_MCP_DBG_DATA_HDR_FLAGS_MASK 0x0ff00000 +#define ECORE_MCP_DBG_DATA_HDR_PF_OFFSET 28 +#define ECORE_MCP_DBG_DATA_HDR_PF_MASK 0xf0000000 + +#define ECORE_MCP_DBG_DATA_HDR_FLAGS_FIRST 0x1 +#define ECORE_MCP_DBG_DATA_HDR_FLAGS_LAST 0x2 + +static enum _ecore_status_t +ecore_mcp_send_debug_data(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + enum ecore_mcp_dbg_data_type type, u8 *p_buf, + u32 size) +{ + u8 raw_data[ECORE_MCP_DBG_DATA_MAX_SIZE], *p_tmp_buf = p_buf; + u32 tmp_size = size, *p_header, *p_payload; + u8 flags = 0; + u16 seq; + enum _ecore_status_t rc; + + p_header = (u32 *)raw_data; + p_payload = (u32 *)(raw_data + ECORE_MCP_DBG_DATA_MAX_HEADER_SIZE); + + OSAL_SPIN_LOCK(&p_hwfn->mcp_info->dbg_data_lock); + seq = p_hwfn->mcp_info->dbg_data_seq++; + OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->dbg_data_lock); + + /* First chunk is marked as 'first' */ + flags |= ECORE_MCP_DBG_DATA_HDR_FLAGS_FIRST; + + *p_header = 0; + SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_SN, seq); + SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_TYPE, type); + SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_FLAGS, flags); + SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_PF, p_hwfn->abs_pf_id); + + while (tmp_size > ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE) { + OSAL_MEMCPY(p_payload, p_tmp_buf, + ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE); + rc = __ecore_mcp_send_debug_data(p_hwfn, p_ptt, raw_data, + ECORE_MCP_DBG_DATA_MAX_SIZE); + if (rc != ECORE_SUCCESS) + return rc; + + /* Clear the 'first' marking after sending the first chunk */ + if (p_tmp_buf == p_buf) { + flags &= ~ECORE_MCP_DBG_DATA_HDR_FLAGS_FIRST; + SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_FLAGS, + flags); + } + + p_tmp_buf += ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE; + tmp_size -= ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE; + } + + /* Last chunk is marked as 'last' */ + flags |= ECORE_MCP_DBG_DATA_HDR_FLAGS_LAST; + SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_FLAGS, flags); + 
OSAL_MEMCPY(p_payload, p_tmp_buf, tmp_size); + + /* Casting the left size to u8 is ok since at this point it is <= 32 */ + return __ecore_mcp_send_debug_data(p_hwfn, p_ptt, raw_data, + (u8)(ECORE_MCP_DBG_DATA_MAX_HEADER_SIZE + + tmp_size)); +} + +enum _ecore_status_t +ecore_mcp_send_raw_debug_data(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, u8 *p_buf, u32 size) +{ + return ecore_mcp_send_debug_data(p_hwfn, p_ptt, + ECORE_MCP_DBG_DATA_TYPE_RAW, p_buf, + size); +} + +enum _ecore_status_t +ecore_mcp_set_bandwidth(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u8 bw_min, u8 bw_max) +{ + u32 resp = 0, param = 0, drv_mb_param = 0; + enum _ecore_status_t rc; + + SET_MFW_FIELD(drv_mb_param, BW_MIN, bw_min); + SET_MFW_FIELD(drv_mb_param, BW_MAX, bw_max); + rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_BW, + drv_mb_param, &resp, ¶m); + if (rc != ECORE_SUCCESS) + DP_ERR(p_hwfn, "Failed to send BW update, rc = %d\n", rc); + + return rc; +} + +bool ecore_mcp_is_esl_supported(struct ecore_hwfn *p_hwfn) +{ + return !!(p_hwfn->mcp_info->capabilities & + FW_MB_PARAM_FEATURE_SUPPORT_ENHANCED_SYS_LCK); +} + +enum _ecore_status_t +ecore_mcp_get_esl_status(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + bool *active) + +{ + u32 resp = 0, param = 0; + enum _ecore_status_t rc; + + rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_MANAGEMENT_STATUS, 0, + &resp, ¶m); + if (rc != ECORE_SUCCESS) { DP_NOTICE(p_hwfn, false, - "Failed to write value 0x%x to offset 0x%x [mcp_resp 0x%x]\n", - val, offset, mb_params.mcp_resp); - rc = ECORE_UNKNOWN_ERROR; + "Failed to send ESL command, rc = %d\n", rc); + return rc; } + + *active = !!(param & FW_MB_PARAM_MANAGEMENT_STATUS_LOCKDOWN_ENABLED); + + return ECORE_SUCCESS; +} + +bool ecore_mcp_is_ext_speed_supported(struct ecore_hwfn *p_hwfn) +{ + return !!(p_hwfn->mcp_info->capabilities & + FW_MB_PARAM_FEATURE_SUPPORT_EXT_SPEED_FEC_CONTROL); +} + +enum _ecore_status_t +ecore_mcp_gen_mdump_idlechk(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) +{ + struct ecore_mdump_cmd_params mdump_cmd_params; + + OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params)); + mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GEN_IDLE_CHK; + + return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params); +} + +bool ecore_mcp_is_restore_default_cfg_supported(struct ecore_hwfn *p_hwfn) +{ + return !!(p_hwfn->mcp_info->capabilities & + FW_MB_PARAM_FEATURE_SUPPORT_RESTORE_DEFAULT_CFG); } +#ifdef _NTDDK_ +#pragma warning(pop) +#endif diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h index 185cc2339..81d3dc04a 100644 --- a/drivers/net/qede/base/ecore_mcp.h +++ b/drivers/net/qede/base/ecore_mcp.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_MCP_H__ #define __ECORE_MCP_H__ @@ -26,6 +26,9 @@ #define MCP_PF_ID(p_hwfn) MCP_PF_ID_BY_REL(p_hwfn, (p_hwfn)->rel_pf_id) struct ecore_mcp_info { + /* Flag to indicate whether the MFW is running */ + bool b_mfw_running; + /* List for mailbox commands which were sent and wait for a response */ osal_list_t cmd_list; @@ -37,6 +40,14 @@ struct ecore_mcp_info { /* Flag to indicate whether sending a MFW mailbox command is blocked */ bool b_block_cmd; + /* Flag to indicate whether MFW was halted */ + bool b_halted; + + /* Flag to indicate that driver discovered that mcp is running in + * recovery mode. 
+ */ + bool recovery_mode; + /* Spinlock used for syncing SW link-changes and link-changes * originating from attention context. */ @@ -67,8 +78,16 @@ struct ecore_mcp_info { u16 mfw_mb_length; u32 mcp_hist; - /* Capabilties negotiated with the MFW */ + /* Capabilities negotiated with the MFW */ u32 capabilities; + + /* S/N for debug data mailbox commands and a spinlock to protect it */ + u16 dbg_data_seq; + osal_spinlock_t dbg_data_lock; + osal_spinlock_t unload_lock; + u32 mcp_handling_status; /* @DPDK */ +#define ECORE_MCP_BYPASS_PROC_BIT 0 +#define ECORE_MCP_IN_PROCESSING_BIT 1 }; struct ecore_mcp_mb_params { @@ -76,14 +95,14 @@ struct ecore_mcp_mb_params { u32 param; void *p_data_src; void *p_data_dst; - u32 mcp_resp; - u32 mcp_param; u8 data_src_size; u8 data_dst_size; + u32 mcp_resp; + u32 mcp_param; u32 flags; -#define ECORE_MB_FLAG_CAN_SLEEP (0x1 << 0) -#define ECORE_MB_FLAG_AVOID_BLOCK (0x1 << 1) -#define ECORE_MB_FLAGS_IS_SET(params, flag) \ +#define ECORE_MB_FLAG_CAN_SLEEP (0x1 << 0) +#define ECORE_MB_FLAG_AVOID_BLOCK (0x1 << 1) +#define ECORE_MB_FLAGS_IS_SET(params, flag) \ ((params) != OSAL_NULL && ((params)->flags & ECORE_MB_FLAG_##flag)) }; @@ -199,6 +218,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_mcp_load_done(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); +/** + * @brief Sends a CANCEL_LOAD_REQ message to the MFW + * + * @param p_hwfn + * @param p_ptt + * + * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful. + */ +enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); + /** * @brief Sends a UNLOAD_REQ message to the MFW * @@ -242,6 +272,8 @@ void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 *vfs_to_ack); +enum _ecore_status_t ecore_mcp_vf_flr(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, u8 rel_vf_id); /** * @brief - calls during init to read shmem of all function-related info. @@ -319,11 +351,13 @@ int __ecore_configure_pf_min_bandwidth(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_mcp_mask_parities(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 mask_parities); + /** * @brief - Sends crash mdump related info to the MFW. * * @param p_hwfn * @param p_ptt + * @param epoch * * @param return ECORE_SUCCESS upon success. */ @@ -336,7 +370,6 @@ enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn, * * @param p_hwfn * @param p_ptt - * @param epoch * * @param return ECORE_SUCCESS upon success. */ @@ -399,10 +432,10 @@ ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, /** * @brief - Initiates PF FLR * - * @param p_hwfn - * @param p_ptt + * @param p_hwfn + * @param p_ptt * - * @param return ECORE_SUCCESS upon success. + * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful. 
*/ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); @@ -415,6 +448,12 @@ enum ecore_resc_lock { /* Locks that the MFW is aware of should be added here downwards */ /* Ecore only locks should be added here upwards */ + ECORE_RESC_LOCK_QM_RECONF = 25, + ECORE_RESC_LOCK_IND_TABLE = 26, + ECORE_RESC_LOCK_PTP_PORT0 = 27, + ECORE_RESC_LOCK_PTP_PORT1 = 28, + ECORE_RESC_LOCK_PTP_PORT2 = 29, + ECORE_RESC_LOCK_PTP_PORT3 = 30, ECORE_RESC_LOCK_RESC_ALLOC = ECORE_MCP_RESC_LOCK_MAX_VAL, /* A dummy value to be used for auxiliary functions in need of @@ -437,7 +476,7 @@ struct ecore_resc_lock_params { #define ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT 10 /* The interval in usec between retries */ - u16 retry_interval; + u32 retry_interval; #define ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT 10000 /* Use sleep or delay between retries */ @@ -502,6 +541,9 @@ void ecore_mcp_resc_lock_default_init(struct ecore_resc_lock_params *p_lock, enum ecore_resc_lock resource, bool b_is_permanent); +void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u32 offset, u32 val); + /** * @brief Learn of supported MFW features; To be done during early init * @@ -521,6 +563,15 @@ enum _ecore_status_t ecore_mcp_get_capabilities(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_mcp_set_capabilities(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); +/** + * @brief Initialize MFW mailbox and sequence values for driver interaction. + * + * @param p_hwfn + * @param p_ptt + */ +enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); + enum ecore_mcp_drv_attr_cmd { ECORE_MCP_DRV_ATTR_CMD_READ, ECORE_MCP_DRV_ATTR_CMD_WRITE, @@ -565,9 +616,6 @@ ecore_mcp_drv_attribute(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, void ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); -void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, - u32 offset, u32 val); - /** * @brief Get the engine affinity configuration. 
* @@ -586,4 +634,48 @@ enum _ecore_status_t ecore_mcp_get_engine_config(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_mcp_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); +/** + * @brief Acquire MCP lock to access to HW indirection table entries + * + * @param p_hwfn + * @param p_ptt + * @param retry_num + * @param retry_interval + */ +enum _ecore_status_t +ecore_mcp_ind_table_lock(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 retry_num, + u32 retry_interval); + +/** + * @brief Release MCP lock of access to HW indirection table entries + * + * @param p_hwfn + * @param p_ptt + */ +enum _ecore_status_t +ecore_mcp_ind_table_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); + +/** + * @brief Populate the nvm info shadow in the given hardware function + * + * @param p_hwfn + */ +enum _ecore_status_t +ecore_mcp_nvm_info_populate(struct ecore_hwfn *p_hwfn); + +/** + * @brief Delete nvm info shadow in the given hardware function + * + * @param p_hwfn + */ +void ecore_mcp_nvm_info_free(struct ecore_hwfn *p_hwfn); + +enum _ecore_status_t +ecore_mcp_set_trace_filter(struct ecore_hwfn *p_hwfn, + u32 *dbg_level, u32 *dbg_modules); + +enum _ecore_status_t +ecore_mcp_restore_trace_filter(struct ecore_hwfn *p_hwfn); #endif /* __ECORE_MCP_H__ */ diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h index c3922ba43..800a23fb1 100644 --- a/drivers/net/qede/base/ecore_mcp_api.h +++ b/drivers/net/qede/base/ecore_mcp_api.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_MCP_API_H__ #define __ECORE_MCP_API_H__ @@ -12,7 +12,28 @@ struct ecore_mcp_link_speed_params { bool autoneg; u32 advertised_speeds; /* bitmask of DRV_SPEED_CAPABILITY */ +#define ECORE_EXT_SPEED_MASK_RES 0x1 +#define ECORE_EXT_SPEED_MASK_1G 0x2 +#define ECORE_EXT_SPEED_MASK_10G 0x4 +#define ECORE_EXT_SPEED_MASK_20G 0x8 +#define ECORE_EXT_SPEED_MASK_25G 0x10 +#define ECORE_EXT_SPEED_MASK_40G 0x20 +#define ECORE_EXT_SPEED_MASK_50G_R 0x40 +#define ECORE_EXT_SPEED_MASK_50G_R2 0x80 +#define ECORE_EXT_SPEED_MASK_100G_R2 0x100 +#define ECORE_EXT_SPEED_MASK_100G_R4 0x200 +#define ECORE_EXT_SPEED_MASK_100G_P4 0x400 u32 forced_speed; /* In Mb/s */ +#define ECORE_EXT_SPEED_1G 0x1 +#define ECORE_EXT_SPEED_10G 0x2 +#define ECORE_EXT_SPEED_20G 0x4 +#define ECORE_EXT_SPEED_25G 0x8 +#define ECORE_EXT_SPEED_40G 0x10 +#define ECORE_EXT_SPEED_50G_R 0x20 +#define ECORE_EXT_SPEED_50G_R2 0x40 +#define ECORE_EXT_SPEED_100G_R2 0x80 +#define ECORE_EXT_SPEED_100G_R4 0x100 +#define ECORE_EXT_SPEED_100G_P4 0x200 }; struct ecore_mcp_link_pause_params { @@ -27,6 +48,13 @@ enum ecore_mcp_eee_mode { ECORE_MCP_EEE_UNSUPPORTED }; +#ifndef __EXTRACT__LINUX__ +#define ECORE_MCP_FEC_NONE (1 << 0) +#define ECORE_MCP_FEC_FIRECODE (1 << 1) +#define ECORE_MCP_FEC_RS (1 << 2) +#define ECORE_MCP_FEC_AUTO (1 << 3) +#define ECORE_MCP_FEC_UNSUPPORTED (1 << 4) + struct ecore_link_eee_params { u32 tx_lpi_timer; #define ECORE_EEE_1G_ADV (1 << 0) @@ -37,21 +65,40 @@ struct ecore_link_eee_params { bool enable; bool tx_lpi_enable; }; +#endif + +struct ecore_mdump_cmd_params { + u32 cmd; + void *p_data_src; + u8 data_src_size; + void *p_data_dst; + u8 data_dst_size; + u32 mcp_resp; + u32 mcp_param; +}; struct ecore_mcp_link_params { struct ecore_mcp_link_speed_params speed; struct ecore_mcp_link_pause_params pause; u32 
loopback_mode; /* in PMM_LOOPBACK values */ struct ecore_link_eee_params eee; + u32 fec; + struct ecore_mcp_link_speed_params ext_speed; + u32 ext_fec_mode; }; struct ecore_mcp_link_capabilities { u32 speed_capabilities; bool default_speed_autoneg; /* In Mb/s */ - u32 default_speed; /* In Mb/s */ + u32 default_speed; /* In Mb/s */ /* __LINUX__THROW__ */ + u32 fec_default; enum ecore_mcp_eee_mode default_eee; u32 eee_lpi_timer; u8 eee_speed_caps; + u32 default_ext_speed_caps; + u32 default_ext_autoneg; + u32 default_ext_speed; + u32 default_ext_fec; }; struct ecore_mcp_link_state { @@ -96,6 +143,8 @@ struct ecore_mcp_link_state { bool eee_active; u8 eee_adv_caps; u8 eee_lp_adv_caps; + + u32 fec_active; }; struct ecore_mcp_function_info { @@ -106,7 +155,7 @@ struct ecore_mcp_function_info { u8 bandwidth_min; u8 bandwidth_max; - u8 mac[ETH_ALEN]; + u8 mac[ECORE_ETH_ALEN]; u64 wwn_port; u64 wwn_node; @@ -189,21 +238,28 @@ enum ecore_ov_driver_state { ECORE_OV_DRIVER_STATE_ACTIVE }; +enum ecore_ov_wol { + ECORE_OV_WOL_DEFAULT, + ECORE_OV_WOL_DISABLED, + ECORE_OV_WOL_ENABLED +}; + enum ecore_ov_eswitch { ECORE_OV_ESWITCH_NONE, ECORE_OV_ESWITCH_VEB, ECORE_OV_ESWITCH_VEPA }; +#ifndef __EXTRACT__LINUX__ #define ECORE_MAX_NPIV_ENTRIES 128 #define ECORE_WWN_SIZE 8 struct ecore_fc_npiv_tbl { - u32 count; + u16 num_wwpn; + u16 num_wwnn; u8 wwpn[ECORE_MAX_NPIV_ENTRIES][ECORE_WWN_SIZE]; u8 wwnn[ECORE_MAX_NPIV_ENTRIES][ECORE_WWN_SIZE]; }; -#ifndef __EXTRACT__LINUX__ enum ecore_led_mode { ECORE_LED_MODE_OFF, ECORE_LED_MODE_ON, @@ -249,18 +305,17 @@ enum ecore_mfw_tlv_type { }; struct ecore_mfw_tlv_generic { - u16 feat_flags; - bool feat_flags_set; - u64 local_mac; - bool local_mac_set; - u64 additional_mac1; - bool additional_mac1_set; - u64 additional_mac2; - bool additional_mac2_set; - u8 drv_state; - bool drv_state_set; - u8 pxe_progress; - bool pxe_progress_set; + struct { + u8 ipv4_csum_offload; + u8 lso_supported; + bool b_set; + } flags; + +#define ECORE_MFW_TLV_MAC_COUNT 3 + /* First entry for primary MAC, 2 secondary MACs possible */ + u8 mac[ECORE_MFW_TLV_MAC_COUNT][6]; + bool mac_set[ECORE_MFW_TLV_MAC_COUNT]; + u64 rx_frames; bool rx_frames_set; u64 rx_bytes; @@ -271,6 +326,7 @@ struct ecore_mfw_tlv_generic { bool tx_bytes_set; }; +#ifndef __EXTRACT__LINUX__ struct ecore_mfw_tlv_eth { u16 lso_maxoff_size; bool lso_maxoff_size_set; @@ -293,6 +349,10 @@ struct ecore_mfw_tlv_eth { u16 rx_descr_qdepth; bool rx_descr_qdepth_set; u8 iov_offload; +#define ECORE_MFW_TLV_IOV_OFFLOAD_NONE (0) +#define ECORE_MFW_TLV_IOV_OFFLOAD_MULTIQUEUE (1) +#define ECORE_MFW_TLV_IOV_OFFLOAD_VEB (2) +#define ECORE_MFW_TLV_IOV_OFFLOAD_VEPA (3) bool iov_offload_set; u8 txqs_empty; bool txqs_empty_set; @@ -304,6 +364,16 @@ struct ecore_mfw_tlv_eth { bool num_rxqs_full_set; }; +struct ecore_mfw_tlv_time { + bool b_set; + u8 month; + u8 day; + u8 hour; + u8 min; + u16 msec; + u16 usec; +}; + struct ecore_mfw_tlv_fcoe { u8 scsi_timeout; bool scsi_timeout_set; @@ -338,6 +408,10 @@ struct ecore_mfw_tlv_fcoe { u8 port_alias[3]; bool port_alias_set; u8 port_state; +#define ECORE_MFW_TLV_PORT_STATE_OFFLINE (0) +#define ECORE_MFW_TLV_PORT_STATE_LOOP (1) +#define ECORE_MFW_TLV_PORT_STATE_P2P (2) +#define ECORE_MFW_TLV_PORT_STATE_FABRIC (3) bool port_state_set; u16 fip_tx_descr_size; bool fip_tx_descr_size_set; @@ -367,8 +441,7 @@ struct ecore_mfw_tlv_fcoe { bool crc_count_set; u32 crc_err_src_fcid[5]; bool crc_err_src_fcid_set[5]; - u8 crc_err_tstamp[5][14]; - bool crc_err_tstamp_set[5]; + struct ecore_mfw_tlv_time crc_err[5]; u16 
losync_err; bool losync_err_set; u16 losig_err; @@ -381,16 +454,13 @@ struct ecore_mfw_tlv_fcoe { bool code_violation_err_set; u32 flogi_param[4]; bool flogi_param_set[4]; - u8 flogi_tstamp[14]; - bool flogi_tstamp_set; + struct ecore_mfw_tlv_time flogi_tstamp; u32 flogi_acc_param[4]; bool flogi_acc_param_set[4]; - u8 flogi_acc_tstamp[14]; - bool flogi_acc_tstamp_set; + struct ecore_mfw_tlv_time flogi_acc_tstamp; u32 flogi_rjt; bool flogi_rjt_set; - u8 flogi_rjt_tstamp[14]; - bool flogi_rjt_tstamp_set; + struct ecore_mfw_tlv_time flogi_rjt_tstamp; u32 fdiscs; bool fdiscs_set; u8 fdisc_acc; @@ -405,12 +475,10 @@ struct ecore_mfw_tlv_fcoe { bool plogi_rjt_set; u32 plogi_dst_fcid[5]; bool plogi_dst_fcid_set[5]; - u8 plogi_tstamp[5][14]; - bool plogi_tstamp_set[5]; + struct ecore_mfw_tlv_time plogi_tstamp[5]; u32 plogi_acc_src_fcid[5]; bool plogi_acc_src_fcid_set[5]; - u8 plogi_acc_tstamp[5][14]; - bool plogi_acc_tstamp_set[5]; + struct ecore_mfw_tlv_time plogi_acc_tstamp[5]; u8 tx_plogos; bool tx_plogos_set; u8 plogo_acc; @@ -419,8 +487,7 @@ struct ecore_mfw_tlv_fcoe { bool plogo_rjt_set; u32 plogo_src_fcid[5]; bool plogo_src_fcid_set[5]; - u8 plogo_tstamp[5][14]; - bool plogo_tstamp_set[5]; + struct ecore_mfw_tlv_time plogo_tstamp[5]; u8 rx_logos; bool rx_logos_set; u8 tx_accs; @@ -437,8 +504,7 @@ struct ecore_mfw_tlv_fcoe { bool rx_abts_rjt_set; u32 abts_dst_fcid[5]; bool abts_dst_fcid_set[5]; - u8 abts_tstamp[5][14]; - bool abts_tstamp_set[5]; + struct ecore_mfw_tlv_time abts_tstamp[5]; u8 rx_rscn; bool rx_rscn_set; u32 rx_rscn_nport[4]; @@ -487,8 +553,7 @@ struct ecore_mfw_tlv_fcoe { bool scsi_tsk_abort_set; u32 scsi_rx_chk[5]; bool scsi_rx_chk_set[5]; - u8 scsi_chk_tstamp[5][14]; - bool scsi_chk_tstamp_set[5]; + struct ecore_mfw_tlv_time scsi_chk_tstamp[5]; }; struct ecore_mfw_tlv_iscsi { @@ -499,6 +564,9 @@ struct ecore_mfw_tlv_iscsi { u8 data_digest; bool data_digest_set; u8 auth_method; +#define ECORE_MFW_TLV_AUTH_METHOD_NONE (1) +#define ECORE_MFW_TLV_AUTH_METHOD_CHAP (2) +#define ECORE_MFW_TLV_AUTH_METHOD_MUTUAL_CHAP (3) bool auth_method_set; u16 boot_taget_portal; bool boot_taget_portal_set; @@ -523,6 +591,7 @@ struct ecore_mfw_tlv_iscsi { u64 tx_bytes; bool tx_bytes_set; }; +#endif union ecore_mfw_tlv_data { struct ecore_mfw_tlv_generic generic; @@ -531,9 +600,30 @@ union ecore_mfw_tlv_data { struct ecore_mfw_tlv_iscsi iscsi; }; +#ifndef __EXTRACT__LINUX__ enum ecore_hw_info_change { ECORE_HW_INFO_CHANGE_OVLAN, }; +#endif + +#define ECORE_NVM_CFG_OPTION_ALL (1 << 0) +#define ECORE_NVM_CFG_OPTION_INIT (1 << 1) +#define ECORE_NVM_CFG_OPTION_COMMIT (1 << 2) +#define ECORE_NVM_CFG_OPTION_FREE (1 << 3) +#define ECORE_NVM_CFG_OPTION_ENTITY_SEL (1 << 4) +#define ECORE_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL (1 << 5) +#define ECORE_NVM_CFG_GET_FLAGS 0xA +#define ECORE_NVM_CFG_GET_PF_FLAGS 0x1A +#define ECORE_NVM_CFG_MAX_ATTRS 50 + +enum ecore_nvm_flash_cmd { + ECORE_NVM_FLASH_CMD_FILE_DATA = 0x2, + ECORE_NVM_FLASH_CMD_FILE_START = 0x3, + ECORE_NVM_FLASH_CMD_NVM_CHANGE = 0x4, + ECORE_NVM_FLASH_CMD_NVM_CFG_ID = 0x5, + ECORE_NVM_FLASH_CMD_NVM_CFG_RESTORE_DEFAULT = 0x6, + ECORE_NVM_FLASH_CMD_NVM_MAX, +}; /** * @brief - returns the link params of the hw function @@ -542,7 +632,8 @@ enum ecore_hw_info_change { * * @returns pointer to link params */ -struct ecore_mcp_link_params *ecore_mcp_get_link_params(struct ecore_hwfn *); +struct ecore_mcp_link_params *ecore_mcp_get_link_params(struct ecore_hwfn + *p_hwfn); /** * @brief - return the link state of the hw function @@ -551,7 +642,8 @@ struct 
ecore_mcp_link_params *ecore_mcp_get_link_params(struct ecore_hwfn *); * * @returns pointer to link state */ -struct ecore_mcp_link_state *ecore_mcp_get_link_state(struct ecore_hwfn *); +struct ecore_mcp_link_state *ecore_mcp_get_link_state(struct ecore_hwfn + *p_hwfn); /** * @brief - return the link capabilities of the hw function @@ -598,10 +690,11 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn, * @param p_ptt * @param p_mbi_ver - A pointer to a variable to be filled with the MBI version. * - * @return int - 0 - operation was successful. + * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful. */ -int ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u32 *p_mbi_ver); +enum _ecore_status_t ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u32 *p_mbi_ver); /** * @brief Get media type value of the port. @@ -850,6 +943,19 @@ enum _ecore_status_t ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 *mac); +/** + * @brief Send WOL mode to MFW + * + * @param p_hwfn + * @param p_ptt + * @param wol - WOL mode + * + * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful. + */ +enum _ecore_status_t +ecore_mcp_ov_update_wol(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + enum ecore_ov_wol wol); + /** * @brief Send eswitch mode to MFW * @@ -876,17 +982,6 @@ enum _ecore_status_t ecore_mcp_set_led(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, enum ecore_led_mode mode); -/** - * @brief Set secure mode - * - * @param p_dev - * @param addr - nvm offset - * - * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful. - */ -enum _ecore_status_t ecore_mcp_nvm_set_secure_mode(struct ecore_dev *p_dev, - u32 addr); - /** * @brief Write to phy * @@ -915,17 +1010,6 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd, enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd, u32 addr, u8 *p_buf, u32 len); -/** - * @brief Put file begin - * - * @param p_dev - * @param addr - nvm offset - * - * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful. - */ -enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev, - u32 addr); - /** * @brief Delete file * @@ -1001,7 +1085,7 @@ ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn, * @param p_buffer - allocated buffer into which to fill data * @param buffer_len - length of the allocated buffer. * - * @return ECORE_SUCCESS if p_buffer now contains the nvram image. + * @return ECORE_SUCCESS iff p_buffer now contains the nvram image. */ enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn, enum ecore_nvm_images image_id, @@ -1020,6 +1104,7 @@ enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn, * @param o_mcp_param - MCP response param * @param i_txn_size - Buffer size * @param i_buf - Pointer to the buffer + * @param b_can_sleep - Whether sleep is allowed in this execution path. * * @param return ECORE_SUCCESS upon success. 
*/ @@ -1030,7 +1115,8 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn, u32 *o_mcp_resp, u32 *o_mcp_param, u32 i_txn_size, - u32 *i_buf); + u32 *i_buf, + bool b_can_sleep); /** * @brief - Sends an NVM read command request to the MFW to get @@ -1045,6 +1131,7 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn, * @param o_mcp_param - MCP response param * @param o_txn_size - Buffer size output * @param o_buf - Pointer to the buffer returned by the MFW. + * @param b_can_sleep - Whether sleep is allowed in this execution path. * * @param return ECORE_SUCCESS upon success. */ @@ -1055,7 +1142,8 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn, u32 *o_mcp_resp, u32 *o_mcp_param, u32 *o_txn_size, - u32 *o_buf); + u32 *o_buf, + bool b_can_sleep); /** * @brief Read from sfp @@ -1170,10 +1258,9 @@ enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn, * * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful. */ -enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images( - struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u32 *num_images); +enum _ecore_status_t ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u32 *num_images); /** * @brief Bist nvm test - get image attributes by index @@ -1185,11 +1272,10 @@ enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images( * * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful. */ -enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att( - struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct bist_nvm_image_att *p_image_att, - u32 image_index); +enum _ecore_status_t ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct bist_nvm_image_att *p_image_att, + u32 image_index); /** * @brief ecore_mcp_get_temperature_info - get the status of the temperature @@ -1278,6 +1364,56 @@ enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); +/** + * @brief - Get mdump2 offset and size. + * + * @param p_hwfn + * @param p_ptt + * @param buff_byte_size - pointer to get mdump2 buffer size in bytes + * @param buff_byte_addr - pointer to get mdump2 buffer address. + * @param return ECORE_SUCCESS upon success. + */ +enum _ecore_status_t ecore_mdump2_req_offsize(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u32 *buff_byte_size, + u32 *buff_byte_addr); + +/** + * @brief - Request to free mdump2 buffer. + * + * @param p_hwfn + * @param p_ptt + * @param return ECORE_SUCCESS upon success. + */ +enum _ecore_status_t ecore_mdump2_req_free(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); + +/** + * @brief - Gets the LLDP MAC address. + * + * @param p_hwfn + * @param p_ptt + * @param lldp_mac_addr - a buffer to be filled with the read LLDP MAC address. + * + * @param return ECORE_SUCCESS upon success. + */ +enum _ecore_status_t ecore_mcp_get_lldp_mac(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 lldp_mac_addr[ECORE_ETH_ALEN]); + +/** + * @brief - Sets the LLDP MAC address. + * + * @param p_hwfn + * @param p_ptt + * @param lldp_mac_addr - a buffer with the LLDP MAC address to be written. + * + * @param return ECORE_SUCCESS upon success. 
+ */ +enum _ecore_status_t ecore_mcp_set_lldp_mac(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 lldp_mac_addr[ECORE_ETH_ALEN]); + /** * @brief - Processes the TLV request from MFW i.e., get the required TLV info * from the ecore client and send it to the MFW. @@ -1290,13 +1426,192 @@ enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); +/** + * @brief - Update fcoe vlan id value to the MFW. + * + * @param p_hwfn + * @param p_ptt + * @param vlan - fcoe vlan + * + * @param return ECORE_SUCCESS upon success. + */ +enum _ecore_status_t +ecore_mcp_update_fcoe_cvid(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u16 vlan); + +/** + * @brief - Update fabric name (wwn) value to the MFW. + * + * @param p_hwfn + * @param p_ptt + * @param wwn - world wide name + * + * @param return ECORE_SUCCESS upon success. + */ +enum _ecore_status_t +ecore_mcp_update_fcoe_fabric_name(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, u8 *wwn); /** * @brief - Return whether management firmware support smart AN * * @param p_hwfn * - * @return bool - true iff feature is supported. + * @return bool - true if feature is supported. */ bool ecore_mcp_is_smart_an_supported(struct ecore_hwfn *p_hwfn); + +/** + * @brief - Return whether management firmware support setting of + * PCI relaxed ordering. + * + * @param p_hwfn + * + * @return bool - true if feature is supported. + */ +bool ecore_mcp_rlx_odr_supported(struct ecore_hwfn *p_hwfn); + +/** + * @brief - Triggers a HW dump procedure. + * + * @param p_hwfn + * @param p_ptt + * + * @param return ECORE_SUCCESS upon success. + */ +enum _ecore_status_t ecore_mcp_hw_dump_trigger(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); + +enum _ecore_status_t +ecore_mcp_nvm_get_cfg(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u16 option_id, u8 entity_id, u16 flags, u8 *p_buf, + u32 *p_len); + +enum _ecore_status_t +ecore_mcp_nvm_set_cfg(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u16 option_id, u8 entity_id, u16 flags, u8 *p_buf, + u32 len); + +enum _ecore_status_t +ecore_mcp_is_tx_flt_attn_enabled(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 *enabled); + +enum _ecore_status_t +ecore_mcp_is_rx_los_attn_enabled(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 *enabled); + +enum _ecore_status_t +ecore_mcp_enable_tx_flt_attn(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 enable); + +enum _ecore_status_t +ecore_mcp_enable_rx_los_attn(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 enable); + +/** + * @brief - Gets the permanent VF MAC address of the given relative VF ID. + * + * @param p_hwfn + * @param p_ptt + * @param rel_vf_id + * @param mac_addr - a buffer to be filled with the read VF MAC address. + * + * @param return ECORE_SUCCESS upon success. + */ +enum _ecore_status_t ecore_mcp_get_perm_vf_mac(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u32 rel_vf_id, + u8 mac_addr[ECORE_ETH_ALEN]); + +/** + * @brief Send raw debug data to the MFW + * + * @param p_hwfn + * @param p_ptt + * @param p_buf - raw debug data buffer + * @param size - buffer size + */ +enum _ecore_status_t +ecore_mcp_send_raw_debug_data(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 *p_buf, u32 size); + +/** + * @brief Configure min/max bandwidths. 
+ * + * @param p_hwfn + * @param p_ptt + * @param bw_min - minimum bandwidth + * @param bw_max - maximum bandwidth + * + * @return enum _ecore_status_t + */ +enum _ecore_status_t +ecore_mcp_set_bandwidth(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + u8 bw_min, u8 bw_max); + +/** + * @brief - Return whether management firmware supports ESL or not. + * + * @param p_hwfn + * + * @return bool - true if feature is supported. + */ +bool ecore_mcp_is_esl_supported(struct ecore_hwfn *p_hwfn); + +/** + * @brief Get enhanced system lockdown status + * + * @param p_hwfn + * @param p_ptt + * @param active - ESL active status + */ +enum _ecore_status_t +ecore_mcp_get_esl_status(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + bool *active); + +/** + * @brief - Return whether management firmware supports Extended speeds or not. + * + * @param p_hwfn + * + * @return bool - true if feature is supported. + */ +bool ecore_mcp_is_ext_speed_supported(struct ecore_hwfn *p_hwfn); + +/** + * @brief - Instruct the MFW to collect idlechk and fw asserts. + * + * @param p_hwfn + * @param p_ptt + * + * @param return ECORE_SUCCESS upon success. + */ +enum _ecore_status_t +ecore_mcp_gen_mdump_idlechk(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); + +/** + * @brief - Return whether management firmware supports restoring nvm cfg to + * default values or not. + * + * @param p_hwfn + * + * @return bool - true if feature is supported. + */ +bool ecore_mcp_is_restore_default_cfg_supported(struct ecore_hwfn *p_hwfn); + +/** + * @brief Send mdump command + * + * @param p_hwfn + * @param p_ptt + * @param p_mdump_cmd_params + * @return enum _ecore_status_t - ECORE_SUCCESS - operation was + * successful. + */ +enum _ecore_status_t +ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + struct ecore_mdump_cmd_params *p_mdump_cmd_params); + #endif diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c index f7666472d..35f866223 100644 --- a/drivers/net/qede/base/ecore_mng_tlv.c +++ b/drivers/net/qede/base/ecore_mng_tlv.c @@ -1,20 +1,32 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved.
- * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "ecore.h" #include "ecore_status.h" #include "ecore_mcp.h" #include "ecore_hw.h" +#include "ecore_hsi_roce.h" +#include "ecore_hsi_iwarp.h" +#include "ecore_rdma.h" #include "reg_addr.h" #define TLV_TYPE(p) (p[0]) #define TLV_LENGTH(p) (p[1]) #define TLV_FLAGS(p) (p[3]) +#define ECORE_TLV_DATA_MAX (14) +struct ecore_tlv_parsed_buf { + /* To be filled with the address to set in Value field */ + u8 *p_val; + + /* To be used internally in case the value has to be modified */ + u8 data[ECORE_TLV_DATA_MAX]; +}; + static enum _ecore_status_t ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group) { @@ -29,6 +41,14 @@ ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group) case DRV_TLV_RX_BYTES_RECEIVED: case DRV_TLV_TX_FRAMES_SENT: case DRV_TLV_TX_BYTES_SENT: + case DRV_TLV_NPIV_ENABLED: + case DRV_TLV_PCIE_BUS_RX_UTILIZATION: + case DRV_TLV_PCIE_BUS_TX_UTILIZATION: + case DRV_TLV_DEVICE_CPU_CORES_UTILIZATION: + case DRV_TLV_LAST_VALID_DCC_TLV_RECEIVED: + case DRV_TLV_NCSI_RX_BYTES_RECEIVED: + case DRV_TLV_NCSI_TX_BYTES_SENT: + case DRV_TLV_RDMA_DRV_VERSION: *tlv_group |= ECORE_MFW_TLV_GENERIC; break; case DRV_TLV_LSO_MAX_OFFLOAD_SIZE: @@ -224,71 +244,70 @@ ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group) } static int -ecore_mfw_get_gen_tlv_value(struct ecore_drv_tlv_hdr *p_tlv, +ecore_mfw_get_gen_tlv_value(struct ecore_hwfn *p_hwfn, + struct ecore_drv_tlv_hdr *p_tlv, struct ecore_mfw_tlv_generic *p_drv_buf, - u8 **p_tlv_buf) + struct ecore_tlv_parsed_buf *p_buf) { switch (p_tlv->tlv_type) { case DRV_TLV_FEATURE_FLAGS: - if (p_drv_buf->feat_flags_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->feat_flags; - return sizeof(p_drv_buf->feat_flags); + if (p_drv_buf->flags.b_set) { + OSAL_MEM_ZERO(p_buf->data, + sizeof(u8) * ECORE_TLV_DATA_MAX); + p_buf->data[0] = p_drv_buf->flags.ipv4_csum_offload ? + 1 : 0; + p_buf->data[0] |= (p_drv_buf->flags.lso_supported ? 
+ 1 : 0) << 1; + p_buf->p_val = p_buf->data; + return 2; } break; + case DRV_TLV_LOCAL_ADMIN_ADDR: - if (p_drv_buf->local_mac_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->local_mac; - return sizeof(p_drv_buf->local_mac); - } - break; case DRV_TLV_ADDITIONAL_MAC_ADDR_1: - if (p_drv_buf->additional_mac1_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->additional_mac1; - return sizeof(p_drv_buf->additional_mac1); - } - break; case DRV_TLV_ADDITIONAL_MAC_ADDR_2: - if (p_drv_buf->additional_mac2_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->additional_mac2; - return sizeof(p_drv_buf->additional_mac2); - } - break; - case DRV_TLV_OS_DRIVER_STATES: - if (p_drv_buf->drv_state_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->drv_state; - return sizeof(p_drv_buf->drv_state); - } - break; - case DRV_TLV_PXE_BOOT_PROGRESS: - if (p_drv_buf->pxe_progress_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->pxe_progress; - return sizeof(p_drv_buf->pxe_progress); + { + int idx = p_tlv->tlv_type - DRV_TLV_LOCAL_ADMIN_ADDR; + + if (p_drv_buf->mac_set[idx]) { + p_buf->p_val = p_drv_buf->mac[idx]; + return 6; } break; + } + case DRV_TLV_RX_FRAMES_RECEIVED: if (p_drv_buf->rx_frames_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_frames; + p_buf->p_val = (u8 *)&p_drv_buf->rx_frames; return sizeof(p_drv_buf->rx_frames); } break; case DRV_TLV_RX_BYTES_RECEIVED: if (p_drv_buf->rx_bytes_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes; + p_buf->p_val = (u8 *)&p_drv_buf->rx_bytes; return sizeof(p_drv_buf->rx_bytes); } break; case DRV_TLV_TX_FRAMES_SENT: if (p_drv_buf->tx_frames_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_frames; + p_buf->p_val = (u8 *)&p_drv_buf->tx_frames; return sizeof(p_drv_buf->tx_frames); } break; case DRV_TLV_TX_BYTES_SENT: if (p_drv_buf->tx_bytes_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes; + p_buf->p_val = (u8 *)&p_drv_buf->tx_bytes; return sizeof(p_drv_buf->tx_bytes); } break; + case DRV_TLV_RDMA_DRV_VERSION: + if (p_hwfn->p_rdma_info && p_hwfn->p_rdma_info->drv_ver) { + OSAL_MEM_ZERO(p_buf->data, sizeof(u8) * ECORE_TLV_DATA_MAX); + OSAL_MEMCPY(p_buf->data, &p_hwfn->p_rdma_info->drv_ver, + sizeof(p_hwfn->p_rdma_info->drv_ver)); + p_buf->p_val = p_buf->data; + return sizeof(p_hwfn->p_rdma_info->drv_ver); + } default: break; } @@ -299,96 +318,96 @@ ecore_mfw_get_gen_tlv_value(struct ecore_drv_tlv_hdr *p_tlv, static int ecore_mfw_get_eth_tlv_value(struct ecore_drv_tlv_hdr *p_tlv, struct ecore_mfw_tlv_eth *p_drv_buf, - u8 **p_tlv_buf) + struct ecore_tlv_parsed_buf *p_buf) { switch (p_tlv->tlv_type) { case DRV_TLV_LSO_MAX_OFFLOAD_SIZE: if (p_drv_buf->lso_maxoff_size_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->lso_maxoff_size; + p_buf->p_val = (u8 *)&p_drv_buf->lso_maxoff_size; return sizeof(p_drv_buf->lso_maxoff_size); } break; case DRV_TLV_LSO_MIN_SEGMENT_COUNT: if (p_drv_buf->lso_minseg_size_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->lso_minseg_size; + p_buf->p_val = (u8 *)&p_drv_buf->lso_minseg_size; return sizeof(p_drv_buf->lso_minseg_size); } break; case DRV_TLV_PROMISCUOUS_MODE: if (p_drv_buf->prom_mode_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->prom_mode; + p_buf->p_val = (u8 *)&p_drv_buf->prom_mode; return sizeof(p_drv_buf->prom_mode); } break; case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE: if (p_drv_buf->tx_descr_size_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_size; + p_buf->p_val = (u8 *)&p_drv_buf->tx_descr_size; return sizeof(p_drv_buf->tx_descr_size); } break; case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE: if (p_drv_buf->rx_descr_size_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_size; + p_buf->p_val = (u8 *)&p_drv_buf->rx_descr_size; 
return sizeof(p_drv_buf->rx_descr_size); } break; case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG: if (p_drv_buf->netq_count_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->netq_count; + p_buf->p_val = (u8 *)&p_drv_buf->netq_count; return sizeof(p_drv_buf->netq_count); } break; case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4: if (p_drv_buf->tcp4_offloads_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tcp4_offloads; + p_buf->p_val = (u8 *)&p_drv_buf->tcp4_offloads; return sizeof(p_drv_buf->tcp4_offloads); } break; case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6: if (p_drv_buf->tcp6_offloads_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tcp6_offloads; + p_buf->p_val = (u8 *)&p_drv_buf->tcp6_offloads; return sizeof(p_drv_buf->tcp6_offloads); } break; case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH: if (p_drv_buf->tx_descr_qdepth_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_qdepth; + p_buf->p_val = (u8 *)&p_drv_buf->tx_descr_qdepth; return sizeof(p_drv_buf->tx_descr_qdepth); } break; case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH: if (p_drv_buf->rx_descr_qdepth_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_qdepth; + p_buf->p_val = (u8 *)&p_drv_buf->rx_descr_qdepth; return sizeof(p_drv_buf->rx_descr_qdepth); } break; case DRV_TLV_IOV_OFFLOAD: if (p_drv_buf->iov_offload_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->iov_offload; + p_buf->p_val = (u8 *)&p_drv_buf->iov_offload; return sizeof(p_drv_buf->iov_offload); } break; case DRV_TLV_TX_QUEUES_EMPTY: if (p_drv_buf->txqs_empty_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->txqs_empty; + p_buf->p_val = (u8 *)&p_drv_buf->txqs_empty; return sizeof(p_drv_buf->txqs_empty); } break; case DRV_TLV_RX_QUEUES_EMPTY: if (p_drv_buf->rxqs_empty_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rxqs_empty; + p_buf->p_val = (u8 *)&p_drv_buf->rxqs_empty; return sizeof(p_drv_buf->rxqs_empty); } break; case DRV_TLV_TX_QUEUES_FULL: if (p_drv_buf->num_txqs_full_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->num_txqs_full; + p_buf->p_val = (u8 *)&p_drv_buf->num_txqs_full; return sizeof(p_drv_buf->num_txqs_full); } break; case DRV_TLV_RX_QUEUES_FULL: if (p_drv_buf->num_rxqs_full_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->num_rxqs_full; + p_buf->p_val = (u8 *)&p_drv_buf->num_rxqs_full; return sizeof(p_drv_buf->num_rxqs_full); } break; @@ -399,906 +418,704 @@ ecore_mfw_get_eth_tlv_value(struct ecore_drv_tlv_hdr *p_tlv, return -1; } +static int +ecore_mfw_get_tlv_time_value(struct ecore_mfw_tlv_time *p_time, + struct ecore_tlv_parsed_buf *p_buf) +{ + if (!p_time->b_set) + return -1; + + /* Validate numbers */ + if (p_time->month > 12) + p_time->month = 0; + if (p_time->day > 31) + p_time->day = 0; + if (p_time->hour > 23) + p_time->hour = 0; + if (p_time->min > 59) + p_time->hour = 0; + if (p_time->msec > 999) + p_time->msec = 0; + if (p_time->usec > 999) + p_time->usec = 0; + + OSAL_MEM_ZERO(p_buf->data, sizeof(u8) * ECORE_TLV_DATA_MAX); + OSAL_SNPRINTF((char *)p_buf->data, 14, "%d%d%d%d%d%d", + p_time->month, p_time->day, + p_time->hour, p_time->min, + p_time->msec, p_time->usec); + + p_buf->p_val = p_buf->data; + return 14; +} + static int ecore_mfw_get_fcoe_tlv_value(struct ecore_drv_tlv_hdr *p_tlv, struct ecore_mfw_tlv_fcoe *p_drv_buf, - u8 **p_tlv_buf) + struct ecore_tlv_parsed_buf *p_buf) { switch (p_tlv->tlv_type) { case DRV_TLV_SCSI_TO: if (p_drv_buf->scsi_timeout_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_timeout; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_timeout; return sizeof(p_drv_buf->scsi_timeout); } break; case DRV_TLV_R_T_TOV: if (p_drv_buf->rt_tov_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rt_tov; + p_buf->p_val = (u8 
*)&p_drv_buf->rt_tov; return sizeof(p_drv_buf->rt_tov); } break; case DRV_TLV_R_A_TOV: if (p_drv_buf->ra_tov_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->ra_tov; + p_buf->p_val = (u8 *)&p_drv_buf->ra_tov; return sizeof(p_drv_buf->ra_tov); } break; case DRV_TLV_E_D_TOV: if (p_drv_buf->ed_tov_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->ed_tov; + p_buf->p_val = (u8 *)&p_drv_buf->ed_tov; return sizeof(p_drv_buf->ed_tov); } break; case DRV_TLV_CR_TOV: if (p_drv_buf->cr_tov_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->cr_tov; + p_buf->p_val = (u8 *)&p_drv_buf->cr_tov; return sizeof(p_drv_buf->cr_tov); } break; case DRV_TLV_BOOT_TYPE: if (p_drv_buf->boot_type_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->boot_type; + p_buf->p_val = (u8 *)&p_drv_buf->boot_type; return sizeof(p_drv_buf->boot_type); } break; case DRV_TLV_NPIV_STATE: if (p_drv_buf->npiv_state_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->npiv_state; + p_buf->p_val = (u8 *)&p_drv_buf->npiv_state; return sizeof(p_drv_buf->npiv_state); } break; case DRV_TLV_NUM_OF_NPIV_IDS: if (p_drv_buf->num_npiv_ids_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->num_npiv_ids; + p_buf->p_val = (u8 *)&p_drv_buf->num_npiv_ids; return sizeof(p_drv_buf->num_npiv_ids); } break; case DRV_TLV_SWITCH_NAME: if (p_drv_buf->switch_name_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->switch_name; + p_buf->p_val = (u8 *)&p_drv_buf->switch_name; return sizeof(p_drv_buf->switch_name); } break; case DRV_TLV_SWITCH_PORT_NUM: if (p_drv_buf->switch_portnum_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->switch_portnum; + p_buf->p_val = (u8 *)&p_drv_buf->switch_portnum; return sizeof(p_drv_buf->switch_portnum); } break; case DRV_TLV_SWITCH_PORT_ID: if (p_drv_buf->switch_portid_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->switch_portid; + p_buf->p_val = (u8 *)&p_drv_buf->switch_portid; return sizeof(p_drv_buf->switch_portid); } break; case DRV_TLV_VENDOR_NAME: if (p_drv_buf->vendor_name_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->vendor_name; + p_buf->p_val = (u8 *)&p_drv_buf->vendor_name; return sizeof(p_drv_buf->vendor_name); } break; case DRV_TLV_SWITCH_MODEL: if (p_drv_buf->switch_model_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->switch_model; + p_buf->p_val = (u8 *)&p_drv_buf->switch_model; return sizeof(p_drv_buf->switch_model); } break; case DRV_TLV_SWITCH_FW_VER: if (p_drv_buf->switch_fw_version_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->switch_fw_version; + p_buf->p_val = (u8 *)&p_drv_buf->switch_fw_version; return sizeof(p_drv_buf->switch_fw_version); } break; case DRV_TLV_QOS_PRIORITY_PER_802_1P: if (p_drv_buf->qos_pri_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->qos_pri; + p_buf->p_val = (u8 *)&p_drv_buf->qos_pri; return sizeof(p_drv_buf->qos_pri); } break; case DRV_TLV_PORT_ALIAS: if (p_drv_buf->port_alias_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->port_alias; + p_buf->p_val = (u8 *)&p_drv_buf->port_alias; return sizeof(p_drv_buf->port_alias); } break; case DRV_TLV_PORT_STATE: if (p_drv_buf->port_state_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->port_state; + p_buf->p_val = (u8 *)&p_drv_buf->port_state; return sizeof(p_drv_buf->port_state); } break; case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE: if (p_drv_buf->fip_tx_descr_size_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fip_tx_descr_size; + p_buf->p_val = (u8 *)&p_drv_buf->fip_tx_descr_size; return sizeof(p_drv_buf->fip_tx_descr_size); } break; case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE: if (p_drv_buf->fip_rx_descr_size_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fip_rx_descr_size; + p_buf->p_val = (u8 *)&p_drv_buf->fip_rx_descr_size; return sizeof(p_drv_buf->fip_rx_descr_size); } break; case 
DRV_TLV_LINK_FAILURE_COUNT: if (p_drv_buf->link_failures_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->link_failures; + p_buf->p_val = (u8 *)&p_drv_buf->link_failures; return sizeof(p_drv_buf->link_failures); } break; case DRV_TLV_FCOE_BOOT_PROGRESS: if (p_drv_buf->fcoe_boot_progress_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fcoe_boot_progress; + p_buf->p_val = (u8 *)&p_drv_buf->fcoe_boot_progress; return sizeof(p_drv_buf->fcoe_boot_progress); } break; case DRV_TLV_RX_BROADCAST_PACKETS: if (p_drv_buf->rx_bcast_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_bcast; + p_buf->p_val = (u8 *)&p_drv_buf->rx_bcast; return sizeof(p_drv_buf->rx_bcast); } break; case DRV_TLV_TX_BROADCAST_PACKETS: if (p_drv_buf->tx_bcast_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_bcast; + p_buf->p_val = (u8 *)&p_drv_buf->tx_bcast; return sizeof(p_drv_buf->tx_bcast); } break; case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH: if (p_drv_buf->fcoe_txq_depth_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fcoe_txq_depth; + p_buf->p_val = (u8 *)&p_drv_buf->fcoe_txq_depth; return sizeof(p_drv_buf->fcoe_txq_depth); } break; case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH: if (p_drv_buf->fcoe_rxq_depth_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rxq_depth; + p_buf->p_val = (u8 *)&p_drv_buf->fcoe_rxq_depth; return sizeof(p_drv_buf->fcoe_rxq_depth); } break; case DRV_TLV_FCOE_RX_FRAMES_RECEIVED: if (p_drv_buf->fcoe_rx_frames_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_frames; + p_buf->p_val = (u8 *)&p_drv_buf->fcoe_rx_frames; return sizeof(p_drv_buf->fcoe_rx_frames); } break; case DRV_TLV_FCOE_RX_BYTES_RECEIVED: if (p_drv_buf->fcoe_rx_bytes_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_bytes; + p_buf->p_val = (u8 *)&p_drv_buf->fcoe_rx_bytes; return sizeof(p_drv_buf->fcoe_rx_bytes); } break; case DRV_TLV_FCOE_TX_FRAMES_SENT: if (p_drv_buf->fcoe_tx_frames_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_frames; + p_buf->p_val = (u8 *)&p_drv_buf->fcoe_tx_frames; return sizeof(p_drv_buf->fcoe_tx_frames); } break; case DRV_TLV_FCOE_TX_BYTES_SENT: if (p_drv_buf->fcoe_tx_bytes_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_bytes; + p_buf->p_val = (u8 *)&p_drv_buf->fcoe_tx_bytes; return sizeof(p_drv_buf->fcoe_tx_bytes); } break; case DRV_TLV_CRC_ERROR_COUNT: if (p_drv_buf->crc_count_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_count; + p_buf->p_val = (u8 *)&p_drv_buf->crc_count; return sizeof(p_drv_buf->crc_count); } break; case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->crc_err_src_fcid_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[0]; - return sizeof(p_drv_buf->crc_err_src_fcid[0]); - } - break; case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->crc_err_src_fcid_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[1]; - return sizeof(p_drv_buf->crc_err_src_fcid[1]); - } - break; case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->crc_err_src_fcid_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[2]; - return sizeof(p_drv_buf->crc_err_src_fcid[2]); - } - break; case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->crc_err_src_fcid_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[3]; - return sizeof(p_drv_buf->crc_err_src_fcid[3]); - } - break; case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->crc_err_src_fcid_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[4]; - return sizeof(p_drv_buf->crc_err_src_fcid[4]); + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID) / 2; + + if 
(p_drv_buf->crc_err_src_fcid_set[idx]) { + p_buf->p_val = (u8 *)&p_drv_buf->crc_err_src_fcid[idx]; + return sizeof(p_drv_buf->crc_err_src_fcid[idx]); } break; + } + case DRV_TLV_CRC_ERROR_1_TIMESTAMP: - if (p_drv_buf->crc_err_tstamp_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[0]; - return sizeof(p_drv_buf->crc_err_tstamp[0]); - } - break; case DRV_TLV_CRC_ERROR_2_TIMESTAMP: - if (p_drv_buf->crc_err_tstamp_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[1]; - return sizeof(p_drv_buf->crc_err_tstamp[1]); - } - break; case DRV_TLV_CRC_ERROR_3_TIMESTAMP: - if (p_drv_buf->crc_err_tstamp_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[2]; - return sizeof(p_drv_buf->crc_err_tstamp[2]); - } - break; case DRV_TLV_CRC_ERROR_4_TIMESTAMP: - if (p_drv_buf->crc_err_tstamp_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[3]; - return sizeof(p_drv_buf->crc_err_tstamp[3]); - } - break; case DRV_TLV_CRC_ERROR_5_TIMESTAMP: - if (p_drv_buf->crc_err_tstamp_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[4]; - return sizeof(p_drv_buf->crc_err_tstamp[4]); - } - break; + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_CRC_ERROR_1_TIMESTAMP) / 2; + + return ecore_mfw_get_tlv_time_value(&p_drv_buf->crc_err[idx], + p_buf); + } + case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT: if (p_drv_buf->losync_err_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->losync_err; + p_buf->p_val = (u8 *)&p_drv_buf->losync_err; return sizeof(p_drv_buf->losync_err); } break; case DRV_TLV_LOSS_OF_SIGNAL_ERRORS: if (p_drv_buf->losig_err_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->losig_err; + p_buf->p_val = (u8 *)&p_drv_buf->losig_err; return sizeof(p_drv_buf->losig_err); } break; case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT: if (p_drv_buf->primtive_err_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->primtive_err; + p_buf->p_val = (u8 *)&p_drv_buf->primtive_err; return sizeof(p_drv_buf->primtive_err); } break; case DRV_TLV_DISPARITY_ERROR_COUNT: if (p_drv_buf->disparity_err_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->disparity_err; + p_buf->p_val = (u8 *)&p_drv_buf->disparity_err; return sizeof(p_drv_buf->disparity_err); } break; case DRV_TLV_CODE_VIOLATION_ERROR_COUNT: if (p_drv_buf->code_violation_err_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->code_violation_err; + p_buf->p_val = (u8 *)&p_drv_buf->code_violation_err; return sizeof(p_drv_buf->code_violation_err); } break; case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1: - if (p_drv_buf->flogi_param_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[0]; - return sizeof(p_drv_buf->flogi_param[0]); - } - break; case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2: - if (p_drv_buf->flogi_param_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[1]; - return sizeof(p_drv_buf->flogi_param[1]); - } - break; case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3: - if (p_drv_buf->flogi_param_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[2]; - return sizeof(p_drv_buf->flogi_param[2]); - } - break; case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4: - if (p_drv_buf->flogi_param_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[3]; - return sizeof(p_drv_buf->flogi_param[3]); + { + u8 idx = p_tlv->tlv_type - + DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1; + if (p_drv_buf->flogi_param_set[idx]) { + p_buf->p_val = (u8 *)&p_drv_buf->flogi_param[idx]; + return sizeof(p_drv_buf->flogi_param[idx]); } break; + } case DRV_TLV_LAST_FLOGI_TIMESTAMP: - if (p_drv_buf->flogi_tstamp_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_tstamp; - return 
sizeof(p_drv_buf->flogi_tstamp); - } - break; + return ecore_mfw_get_tlv_time_value(&p_drv_buf->flogi_tstamp, + p_buf); case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1: - if (p_drv_buf->flogi_acc_param_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[0]; - return sizeof(p_drv_buf->flogi_acc_param[0]); - } - break; case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2: - if (p_drv_buf->flogi_acc_param_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[1]; - return sizeof(p_drv_buf->flogi_acc_param[1]); - } - break; case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3: - if (p_drv_buf->flogi_acc_param_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[2]; - return sizeof(p_drv_buf->flogi_acc_param[2]); - } - break; case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4: - if (p_drv_buf->flogi_acc_param_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[3]; - return sizeof(p_drv_buf->flogi_acc_param[3]); + { + u8 idx = p_tlv->tlv_type - + DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1; + + if (p_drv_buf->flogi_acc_param_set[idx]) { + p_buf->p_val = (u8 *)&p_drv_buf->flogi_acc_param[idx]; + return sizeof(p_drv_buf->flogi_acc_param[idx]); } break; + } case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP: - if (p_drv_buf->flogi_acc_tstamp_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_tstamp; - return sizeof(p_drv_buf->flogi_acc_tstamp); - } - break; + return ecore_mfw_get_tlv_time_value(&p_drv_buf->flogi_acc_tstamp, + p_buf); case DRV_TLV_LAST_FLOGI_RJT: if (p_drv_buf->flogi_rjt_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt; + p_buf->p_val = (u8 *)&p_drv_buf->flogi_rjt; return sizeof(p_drv_buf->flogi_rjt); } break; case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP: - if (p_drv_buf->flogi_rjt_tstamp_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt_tstamp; - return sizeof(p_drv_buf->flogi_rjt_tstamp); - } - break; + return ecore_mfw_get_tlv_time_value(&p_drv_buf->flogi_rjt_tstamp, + p_buf); case DRV_TLV_FDISCS_SENT_COUNT: if (p_drv_buf->fdiscs_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fdiscs; + p_buf->p_val = (u8 *)&p_drv_buf->fdiscs; return sizeof(p_drv_buf->fdiscs); } break; case DRV_TLV_FDISC_ACCS_RECEIVED: if (p_drv_buf->fdisc_acc_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fdisc_acc; + p_buf->p_val = (u8 *)&p_drv_buf->fdisc_acc; return sizeof(p_drv_buf->fdisc_acc); } break; case DRV_TLV_FDISC_RJTS_RECEIVED: if (p_drv_buf->fdisc_rjt_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->fdisc_rjt; + p_buf->p_val = (u8 *)&p_drv_buf->fdisc_rjt; return sizeof(p_drv_buf->fdisc_rjt); } break; case DRV_TLV_PLOGI_SENT_COUNT: if (p_drv_buf->plogi_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi; + p_buf->p_val = (u8 *)&p_drv_buf->plogi; return sizeof(p_drv_buf->plogi); } break; case DRV_TLV_PLOGI_ACCS_RECEIVED: if (p_drv_buf->plogi_acc_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc; + p_buf->p_val = (u8 *)&p_drv_buf->plogi_acc; return sizeof(p_drv_buf->plogi_acc); } break; case DRV_TLV_PLOGI_RJTS_RECEIVED: if (p_drv_buf->plogi_rjt_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_rjt; + p_buf->p_val = (u8 *)&p_drv_buf->plogi_rjt; return sizeof(p_drv_buf->plogi_rjt); } break; case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID: - if (p_drv_buf->plogi_dst_fcid_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[0]; - return sizeof(p_drv_buf->plogi_dst_fcid[0]); - } - break; case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID: - if (p_drv_buf->plogi_dst_fcid_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[1]; - return sizeof(p_drv_buf->plogi_dst_fcid[1]); - } - break; case 
DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID: - if (p_drv_buf->plogi_dst_fcid_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[2]; - return sizeof(p_drv_buf->plogi_dst_fcid[2]); - } - break; case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID: - if (p_drv_buf->plogi_dst_fcid_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[3]; - return sizeof(p_drv_buf->plogi_dst_fcid[3]); - } - break; case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID: - if (p_drv_buf->plogi_dst_fcid_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[4]; - return sizeof(p_drv_buf->plogi_dst_fcid[4]); + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID) / 2; + + if (p_drv_buf->plogi_dst_fcid_set[idx]) { + p_buf->p_val = (u8 *)&p_drv_buf->plogi_dst_fcid[idx]; + return sizeof(p_drv_buf->plogi_dst_fcid[idx]); } break; + } case DRV_TLV_PLOGI_1_TIMESTAMP: - if (p_drv_buf->plogi_tstamp_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[0]; - return sizeof(p_drv_buf->plogi_tstamp[0]); - } - break; case DRV_TLV_PLOGI_2_TIMESTAMP: - if (p_drv_buf->plogi_tstamp_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[1]; - return sizeof(p_drv_buf->plogi_tstamp[1]); - } - break; case DRV_TLV_PLOGI_3_TIMESTAMP: - if (p_drv_buf->plogi_tstamp_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[2]; - return sizeof(p_drv_buf->plogi_tstamp[2]); - } - break; case DRV_TLV_PLOGI_4_TIMESTAMP: - if (p_drv_buf->plogi_tstamp_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[3]; - return sizeof(p_drv_buf->plogi_tstamp[3]); - } - break; case DRV_TLV_PLOGI_5_TIMESTAMP: - if (p_drv_buf->plogi_tstamp_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[4]; - return sizeof(p_drv_buf->plogi_tstamp[4]); - } - break; + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_PLOGI_1_TIMESTAMP) / 2; + + return ecore_mfw_get_tlv_time_value(&p_drv_buf->plogi_tstamp[idx], + p_buf); + } + case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogi_acc_src_fcid_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[0]; - return sizeof(p_drv_buf->plogi_acc_src_fcid[0]); - } - break; case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogi_acc_src_fcid_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[1]; - return sizeof(p_drv_buf->plogi_acc_src_fcid[1]); - } - break; case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogi_acc_src_fcid_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[2]; - return sizeof(p_drv_buf->plogi_acc_src_fcid[2]); - } - break; case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogi_acc_src_fcid_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[3]; - return sizeof(p_drv_buf->plogi_acc_src_fcid[3]); - } - break; case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogi_acc_src_fcid_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[4]; - return sizeof(p_drv_buf->plogi_acc_src_fcid[4]); + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID) / 2; + + if (p_drv_buf->plogi_acc_src_fcid_set[idx]) { + p_buf->p_val = (u8 *)&p_drv_buf->plogi_acc_src_fcid[idx]; + return sizeof(p_drv_buf->plogi_acc_src_fcid[idx]); } break; + } case DRV_TLV_PLOGI_1_ACC_TIMESTAMP: - if (p_drv_buf->plogi_acc_tstamp_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[0]; - return sizeof(p_drv_buf->plogi_acc_tstamp[0]); - } - break; case DRV_TLV_PLOGI_2_ACC_TIMESTAMP: - if (p_drv_buf->plogi_acc_tstamp_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[1]; - return 
sizeof(p_drv_buf->plogi_acc_tstamp[1]); - } - break; case DRV_TLV_PLOGI_3_ACC_TIMESTAMP: - if (p_drv_buf->plogi_acc_tstamp_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[2]; - return sizeof(p_drv_buf->plogi_acc_tstamp[2]); - } - break; case DRV_TLV_PLOGI_4_ACC_TIMESTAMP: - if (p_drv_buf->plogi_acc_tstamp_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[3]; - return sizeof(p_drv_buf->plogi_acc_tstamp[3]); - } - break; case DRV_TLV_PLOGI_5_ACC_TIMESTAMP: - if (p_drv_buf->plogi_acc_tstamp_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[4]; - return sizeof(p_drv_buf->plogi_acc_tstamp[4]); - } - break; + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_PLOGI_1_ACC_TIMESTAMP) / 2; + + return ecore_mfw_get_tlv_time_value(&p_drv_buf->plogi_acc_tstamp[idx], + p_buf); + } + case DRV_TLV_LOGOS_ISSUED: if (p_drv_buf->tx_plogos_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_plogos; + p_buf->p_val = (u8 *)&p_drv_buf->tx_plogos; return sizeof(p_drv_buf->tx_plogos); } break; case DRV_TLV_LOGO_ACCS_RECEIVED: if (p_drv_buf->plogo_acc_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_acc; + p_buf->p_val = (u8 *)&p_drv_buf->plogo_acc; return sizeof(p_drv_buf->plogo_acc); } break; case DRV_TLV_LOGO_RJTS_RECEIVED: if (p_drv_buf->plogo_rjt_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_rjt; + p_buf->p_val = (u8 *)&p_drv_buf->plogo_rjt; return sizeof(p_drv_buf->plogo_rjt); } break; case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogo_src_fcid_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[0]; - return sizeof(p_drv_buf->plogo_src_fcid[0]); - } - break; case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogo_src_fcid_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[1]; - return sizeof(p_drv_buf->plogo_src_fcid[1]); - } - break; case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogo_src_fcid_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[2]; - return sizeof(p_drv_buf->plogo_src_fcid[2]); - } - break; case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogo_src_fcid_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[3]; - return sizeof(p_drv_buf->plogo_src_fcid[3]); - } - break; case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID: - if (p_drv_buf->plogo_src_fcid_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[4]; - return sizeof(p_drv_buf->plogo_src_fcid[4]); + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID) / 2; + + if (p_drv_buf->plogo_src_fcid_set[idx]) { + p_buf->p_val = (u8 *)&p_drv_buf->plogo_src_fcid[idx]; + return sizeof(p_drv_buf->plogo_src_fcid[idx]); } break; + } case DRV_TLV_LOGO_1_TIMESTAMP: - if (p_drv_buf->plogo_tstamp_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[0]; - return sizeof(p_drv_buf->plogo_tstamp[0]); - } - break; case DRV_TLV_LOGO_2_TIMESTAMP: - if (p_drv_buf->plogo_tstamp_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[1]; - return sizeof(p_drv_buf->plogo_tstamp[1]); - } - break; case DRV_TLV_LOGO_3_TIMESTAMP: - if (p_drv_buf->plogo_tstamp_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[2]; - return sizeof(p_drv_buf->plogo_tstamp[2]); - } - break; case DRV_TLV_LOGO_4_TIMESTAMP: - if (p_drv_buf->plogo_tstamp_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[3]; - return sizeof(p_drv_buf->plogo_tstamp[3]); - } - break; case DRV_TLV_LOGO_5_TIMESTAMP: - if (p_drv_buf->plogo_tstamp_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[4]; - return sizeof(p_drv_buf->plogo_tstamp[4]); - } - break; + { + u8 idx = (p_tlv->tlv_type - + 
DRV_TLV_LOGO_1_TIMESTAMP) / 2; + + return ecore_mfw_get_tlv_time_value(&p_drv_buf->plogo_tstamp[idx], + p_buf); + } case DRV_TLV_LOGOS_RECEIVED: if (p_drv_buf->rx_logos_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_logos; + p_buf->p_val = (u8 *)&p_drv_buf->rx_logos; return sizeof(p_drv_buf->rx_logos); } break; case DRV_TLV_ACCS_ISSUED: if (p_drv_buf->tx_accs_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_accs; + p_buf->p_val = (u8 *)&p_drv_buf->tx_accs; return sizeof(p_drv_buf->tx_accs); } break; case DRV_TLV_PRLIS_ISSUED: if (p_drv_buf->tx_prlis_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_prlis; + p_buf->p_val = (u8 *)&p_drv_buf->tx_prlis; return sizeof(p_drv_buf->tx_prlis); } break; case DRV_TLV_ACCS_RECEIVED: if (p_drv_buf->rx_accs_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_accs; + p_buf->p_val = (u8 *)&p_drv_buf->rx_accs; return sizeof(p_drv_buf->rx_accs); } break; case DRV_TLV_ABTS_SENT_COUNT: if (p_drv_buf->tx_abts_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_abts; + p_buf->p_val = (u8 *)&p_drv_buf->tx_abts; return sizeof(p_drv_buf->tx_abts); } break; case DRV_TLV_ABTS_ACCS_RECEIVED: if (p_drv_buf->rx_abts_acc_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_acc; + p_buf->p_val = (u8 *)&p_drv_buf->rx_abts_acc; return sizeof(p_drv_buf->rx_abts_acc); } break; case DRV_TLV_ABTS_RJTS_RECEIVED: if (p_drv_buf->rx_abts_rjt_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_rjt; + p_buf->p_val = (u8 *)&p_drv_buf->rx_abts_rjt; return sizeof(p_drv_buf->rx_abts_rjt); } break; case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID: - if (p_drv_buf->abts_dst_fcid_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[0]; - return sizeof(p_drv_buf->abts_dst_fcid[0]); - } - break; case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID: - if (p_drv_buf->abts_dst_fcid_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[1]; - return sizeof(p_drv_buf->abts_dst_fcid[1]); - } - break; case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID: - if (p_drv_buf->abts_dst_fcid_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[2]; - return sizeof(p_drv_buf->abts_dst_fcid[2]); - } - break; case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID: - if (p_drv_buf->abts_dst_fcid_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[3]; - return sizeof(p_drv_buf->abts_dst_fcid[3]); - } - break; case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID: - if (p_drv_buf->abts_dst_fcid_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[4]; - return sizeof(p_drv_buf->abts_dst_fcid[4]); + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID) / 2; + + if (p_drv_buf->abts_dst_fcid_set[idx]) { + p_buf->p_val = (u8 *)&p_drv_buf->abts_dst_fcid[idx]; + return sizeof(p_drv_buf->abts_dst_fcid[idx]); } break; + } case DRV_TLV_ABTS_1_TIMESTAMP: - if (p_drv_buf->abts_tstamp_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[0]; - return sizeof(p_drv_buf->abts_tstamp[0]); - } - break; case DRV_TLV_ABTS_2_TIMESTAMP: - if (p_drv_buf->abts_tstamp_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[1]; - return sizeof(p_drv_buf->abts_tstamp[1]); - } - break; case DRV_TLV_ABTS_3_TIMESTAMP: - if (p_drv_buf->abts_tstamp_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[2]; - return sizeof(p_drv_buf->abts_tstamp[2]); - } - break; case DRV_TLV_ABTS_4_TIMESTAMP: - if (p_drv_buf->abts_tstamp_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[3]; - return sizeof(p_drv_buf->abts_tstamp[3]); - } - break; case DRV_TLV_ABTS_5_TIMESTAMP: - if (p_drv_buf->abts_tstamp_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[4]; - return sizeof(p_drv_buf->abts_tstamp[4]); - 
} - break; + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_ABTS_1_TIMESTAMP) / 2; + + return ecore_mfw_get_tlv_time_value(&p_drv_buf->abts_tstamp[idx], + p_buf); + } + case DRV_TLV_RSCNS_RECEIVED: if (p_drv_buf->rx_rscn_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn; + p_buf->p_val = (u8 *)&p_drv_buf->rx_rscn; return sizeof(p_drv_buf->rx_rscn); } break; case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1: - if (p_drv_buf->rx_rscn_nport_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[0]; - return sizeof(p_drv_buf->rx_rscn_nport[0]); - } - break; case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2: - if (p_drv_buf->rx_rscn_nport_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[1]; - return sizeof(p_drv_buf->rx_rscn_nport[1]); - } - break; case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3: - if (p_drv_buf->rx_rscn_nport_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[2]; - return sizeof(p_drv_buf->rx_rscn_nport[2]); - } - break; case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4: - if (p_drv_buf->rx_rscn_nport_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[3]; - return sizeof(p_drv_buf->rx_rscn_nport[3]); + { + u8 idx = p_tlv->tlv_type - DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1; + + if (p_drv_buf->rx_rscn_nport_set[idx]) { + p_buf->p_val = (u8 *)&p_drv_buf->rx_rscn_nport[idx]; + return sizeof(p_drv_buf->rx_rscn_nport[idx]); } break; + } case DRV_TLV_LUN_RESETS_ISSUED: if (p_drv_buf->tx_lun_rst_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_lun_rst; + p_buf->p_val = (u8 *)&p_drv_buf->tx_lun_rst; return sizeof(p_drv_buf->tx_lun_rst); } break; case DRV_TLV_ABORT_TASK_SETS_ISSUED: if (p_drv_buf->abort_task_sets_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->abort_task_sets; + p_buf->p_val = (u8 *)&p_drv_buf->abort_task_sets; return sizeof(p_drv_buf->abort_task_sets); } break; case DRV_TLV_TPRLOS_SENT: if (p_drv_buf->tx_tprlos_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_tprlos; + p_buf->p_val = (u8 *)&p_drv_buf->tx_tprlos; return sizeof(p_drv_buf->tx_tprlos); } break; case DRV_TLV_NOS_SENT_COUNT: if (p_drv_buf->tx_nos_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_nos; + p_buf->p_val = (u8 *)&p_drv_buf->tx_nos; return sizeof(p_drv_buf->tx_nos); } break; case DRV_TLV_NOS_RECEIVED_COUNT: if (p_drv_buf->rx_nos_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_nos; + p_buf->p_val = (u8 *)&p_drv_buf->rx_nos; return sizeof(p_drv_buf->rx_nos); } break; case DRV_TLV_OLS_COUNT: if (p_drv_buf->ols_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->ols; + p_buf->p_val = (u8 *)&p_drv_buf->ols; return sizeof(p_drv_buf->ols); } break; case DRV_TLV_LR_COUNT: if (p_drv_buf->lr_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->lr; + p_buf->p_val = (u8 *)&p_drv_buf->lr; return sizeof(p_drv_buf->lr); } break; case DRV_TLV_LRR_COUNT: if (p_drv_buf->lrr_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->lrr; + p_buf->p_val = (u8 *)&p_drv_buf->lrr; return sizeof(p_drv_buf->lrr); } break; case DRV_TLV_LIP_SENT_COUNT: if (p_drv_buf->tx_lip_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_lip; + p_buf->p_val = (u8 *)&p_drv_buf->tx_lip; return sizeof(p_drv_buf->tx_lip); } break; case DRV_TLV_LIP_RECEIVED_COUNT: if (p_drv_buf->rx_lip_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_lip; + p_buf->p_val = (u8 *)&p_drv_buf->rx_lip; return sizeof(p_drv_buf->rx_lip); } break; case DRV_TLV_EOFA_COUNT: if (p_drv_buf->eofa_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->eofa; + p_buf->p_val = (u8 *)&p_drv_buf->eofa; return sizeof(p_drv_buf->eofa); } break; case DRV_TLV_EOFNI_COUNT: if (p_drv_buf->eofni_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->eofni; + p_buf->p_val = (u8 *)&p_drv_buf->eofni; return 
sizeof(p_drv_buf->eofni); } break; case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT: if (p_drv_buf->scsi_chks_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_chks; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_chks; return sizeof(p_drv_buf->scsi_chks); } break; case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT: if (p_drv_buf->scsi_cond_met_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_cond_met; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_cond_met; return sizeof(p_drv_buf->scsi_cond_met); } break; case DRV_TLV_SCSI_STATUS_BUSY_COUNT: if (p_drv_buf->scsi_busy_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_busy; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_busy; return sizeof(p_drv_buf->scsi_busy); } break; case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT: if (p_drv_buf->scsi_inter_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_inter; return sizeof(p_drv_buf->scsi_inter); } break; case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT: if (p_drv_buf->scsi_inter_cond_met_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter_cond_met; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_inter_cond_met; return sizeof(p_drv_buf->scsi_inter_cond_met); } break; case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT: if (p_drv_buf->scsi_rsv_conflicts_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_rsv_conflicts; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_rsv_conflicts; return sizeof(p_drv_buf->scsi_rsv_conflicts); } break; case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT: if (p_drv_buf->scsi_tsk_full_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_full; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_tsk_full; return sizeof(p_drv_buf->scsi_tsk_full); } break; case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT: if (p_drv_buf->scsi_aca_active_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_aca_active; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_aca_active; return sizeof(p_drv_buf->scsi_aca_active); } break; case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT: if (p_drv_buf->scsi_tsk_abort_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_abort; + p_buf->p_val = (u8 *)&p_drv_buf->scsi_tsk_abort; return sizeof(p_drv_buf->scsi_tsk_abort); } break; case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ: - if (p_drv_buf->scsi_rx_chk_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[0]; - return sizeof(p_drv_buf->scsi_rx_chk[0]); - } - break; case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ: - if (p_drv_buf->scsi_rx_chk_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[1]; - return sizeof(p_drv_buf->scsi_rx_chk[1]); - } - break; case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ: - if (p_drv_buf->scsi_rx_chk_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[2]; - return sizeof(p_drv_buf->scsi_rx_chk[2]); - } - break; case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ: - if (p_drv_buf->scsi_rx_chk_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[3]; - return sizeof(p_drv_buf->scsi_rx_chk[4]); - } - break; case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ: - if (p_drv_buf->scsi_rx_chk_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[4]; - return sizeof(p_drv_buf->scsi_rx_chk[4]); + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ) / 2; + + if (p_drv_buf->scsi_rx_chk_set[idx]) { + p_buf->p_val = (u8 *)&p_drv_buf->scsi_rx_chk[idx]; + return sizeof(p_drv_buf->scsi_rx_chk[idx]); } break; + } case DRV_TLV_SCSI_CHECK_1_TIMESTAMP: - if (p_drv_buf->scsi_chk_tstamp_set[0]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[0]; - return 
sizeof(p_drv_buf->scsi_chk_tstamp[0]); - } - break; case DRV_TLV_SCSI_CHECK_2_TIMESTAMP: - if (p_drv_buf->scsi_chk_tstamp_set[1]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[1]; - return sizeof(p_drv_buf->scsi_chk_tstamp[1]); - } - break; case DRV_TLV_SCSI_CHECK_3_TIMESTAMP: - if (p_drv_buf->scsi_chk_tstamp_set[2]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[2]; - return sizeof(p_drv_buf->scsi_chk_tstamp[2]); - } - break; case DRV_TLV_SCSI_CHECK_4_TIMESTAMP: - if (p_drv_buf->scsi_chk_tstamp_set[3]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[3]; - return sizeof(p_drv_buf->scsi_chk_tstamp[3]); - } - break; case DRV_TLV_SCSI_CHECK_5_TIMESTAMP: - if (p_drv_buf->scsi_chk_tstamp_set[4]) { - *p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[4]; - return sizeof(p_drv_buf->scsi_chk_tstamp[4]); - } - break; + { + u8 idx = (p_tlv->tlv_type - + DRV_TLV_SCSI_CHECK_1_TIMESTAMP) / 2; + + return ecore_mfw_get_tlv_time_value(&p_drv_buf->scsi_chk_tstamp[idx], + p_buf); + } + default: break; } @@ -1309,96 +1126,96 @@ ecore_mfw_get_fcoe_tlv_value(struct ecore_drv_tlv_hdr *p_tlv, static int ecore_mfw_get_iscsi_tlv_value(struct ecore_drv_tlv_hdr *p_tlv, struct ecore_mfw_tlv_iscsi *p_drv_buf, - u8 **p_tlv_buf) + struct ecore_tlv_parsed_buf *p_buf) { switch (p_tlv->tlv_type) { case DRV_TLV_TARGET_LLMNR_ENABLED: if (p_drv_buf->target_llmnr_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->target_llmnr; + p_buf->p_val = (u8 *)&p_drv_buf->target_llmnr; return sizeof(p_drv_buf->target_llmnr); } break; case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED: if (p_drv_buf->header_digest_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->header_digest; + p_buf->p_val = (u8 *)&p_drv_buf->header_digest; return sizeof(p_drv_buf->header_digest); } break; case DRV_TLV_DATA_DIGEST_FLAG_ENABLED: if (p_drv_buf->data_digest_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->data_digest; + p_buf->p_val = (u8 *)&p_drv_buf->data_digest; return sizeof(p_drv_buf->data_digest); } break; case DRV_TLV_AUTHENTICATION_METHOD: if (p_drv_buf->auth_method_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->auth_method; + p_buf->p_val = (u8 *)&p_drv_buf->auth_method; return sizeof(p_drv_buf->auth_method); } break; case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL: if (p_drv_buf->boot_taget_portal_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->boot_taget_portal; + p_buf->p_val = (u8 *)&p_drv_buf->boot_taget_portal; return sizeof(p_drv_buf->boot_taget_portal); } break; case DRV_TLV_MAX_FRAME_SIZE: if (p_drv_buf->frame_size_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->frame_size; + p_buf->p_val = (u8 *)&p_drv_buf->frame_size; return sizeof(p_drv_buf->frame_size); } break; case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE: if (p_drv_buf->tx_desc_size_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_size; + p_buf->p_val = (u8 *)&p_drv_buf->tx_desc_size; return sizeof(p_drv_buf->tx_desc_size); } break; case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE: if (p_drv_buf->rx_desc_size_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_size; + p_buf->p_val = (u8 *)&p_drv_buf->rx_desc_size; return sizeof(p_drv_buf->rx_desc_size); } break; case DRV_TLV_ISCSI_BOOT_PROGRESS: if (p_drv_buf->boot_progress_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->boot_progress; + p_buf->p_val = (u8 *)&p_drv_buf->boot_progress; return sizeof(p_drv_buf->boot_progress); } break; case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH: if (p_drv_buf->tx_desc_qdepth_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_qdepth; + p_buf->p_val = (u8 *)&p_drv_buf->tx_desc_qdepth; return sizeof(p_drv_buf->tx_desc_qdepth); } break; case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH: if 
(p_drv_buf->rx_desc_qdepth_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_qdepth; + p_buf->p_val = (u8 *)&p_drv_buf->rx_desc_qdepth; return sizeof(p_drv_buf->rx_desc_qdepth); } break; case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED: if (p_drv_buf->rx_frames_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_frames; + p_buf->p_val = (u8 *)&p_drv_buf->rx_frames; return sizeof(p_drv_buf->rx_frames); } break; case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED: if (p_drv_buf->rx_bytes_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes; + p_buf->p_val = (u8 *)&p_drv_buf->rx_bytes; return sizeof(p_drv_buf->rx_bytes); } break; case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT: if (p_drv_buf->tx_frames_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_frames; + p_buf->p_val = (u8 *)&p_drv_buf->tx_frames; return sizeof(p_drv_buf->tx_frames); } break; case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT: if (p_drv_buf->tx_bytes_set) { - *p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes; + p_buf->p_val = (u8 *)&p_drv_buf->tx_bytes; return sizeof(p_drv_buf->tx_bytes); } break; @@ -1414,10 +1231,11 @@ static enum _ecore_status_t ecore_mfw_update_tlvs(struct ecore_hwfn *p_hwfn, u32 size) { union ecore_mfw_tlv_data *p_tlv_data; + struct ecore_tlv_parsed_buf buffer; struct ecore_drv_tlv_hdr tlv; - u8 *p_tlv_ptr = OSAL_NULL, *p_temp; u32 offset; int len; + u8 *p_tlv; p_tlv_data = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data)); if (!p_tlv_data) @@ -1428,41 +1246,39 @@ static enum _ecore_status_t ecore_mfw_update_tlvs(struct ecore_hwfn *p_hwfn, return ECORE_INVAL; } - offset = 0; OSAL_MEMSET(&tlv, 0, sizeof(tlv)); - while (offset < size) { - p_temp = &p_mfw_buf[offset]; - tlv.tlv_type = TLV_TYPE(p_temp); - tlv.tlv_length = TLV_LENGTH(p_temp); - tlv.tlv_flags = TLV_FLAGS(p_temp); - DP_INFO(p_hwfn, "Type %d length = %d flags = 0x%x\n", - tlv.tlv_type, tlv.tlv_length, tlv.tlv_flags); + for (offset = 0; offset < size; + offset += sizeof(tlv) + sizeof(u32) * tlv.tlv_length) { + p_tlv = &p_mfw_buf[offset]; + tlv.tlv_type = TLV_TYPE(p_tlv); + tlv.tlv_length = TLV_LENGTH(p_tlv); + tlv.tlv_flags = TLV_FLAGS(p_tlv); + + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "Type %d length = %d flags = 0x%x\n", tlv.tlv_type, + tlv.tlv_length, tlv.tlv_flags); - offset += sizeof(tlv); if (tlv_group == ECORE_MFW_TLV_GENERIC) - len = ecore_mfw_get_gen_tlv_value(&tlv, - &p_tlv_data->generic, &p_tlv_ptr); + len = ecore_mfw_get_gen_tlv_value(p_hwfn, &tlv, + &p_tlv_data->generic, + &buffer); else if (tlv_group == ECORE_MFW_TLV_ETH) - len = ecore_mfw_get_eth_tlv_value(&tlv, - &p_tlv_data->eth, &p_tlv_ptr); + len = ecore_mfw_get_eth_tlv_value(&tlv, &p_tlv_data->eth, &buffer); else if (tlv_group == ECORE_MFW_TLV_FCOE) - len = ecore_mfw_get_fcoe_tlv_value(&tlv, - &p_tlv_data->fcoe, &p_tlv_ptr); + len = ecore_mfw_get_fcoe_tlv_value(&tlv, &p_tlv_data->fcoe, &buffer); else - len = ecore_mfw_get_iscsi_tlv_value(&tlv, - &p_tlv_data->iscsi, &p_tlv_ptr); + len = ecore_mfw_get_iscsi_tlv_value(&tlv, &p_tlv_data->iscsi, &buffer); if (len > 0) { OSAL_WARN(len > 4 * tlv.tlv_length, - "Incorrect MFW TLV length"); + "Incorrect MFW TLV length %d, it shouldn't be greater than %d\n", + len, 4 * tlv.tlv_length); len = OSAL_MIN_T(int, len, 4 * tlv.tlv_length); tlv.tlv_flags |= ECORE_DRV_TLV_FLAGS_CHANGED; - /* TODO: Endianness handling? 
*/ - OSAL_MEMCPY(p_mfw_buf, &tlv, sizeof(tlv)); - OSAL_MEMCPY(p_mfw_buf + offset, p_tlv_ptr, len); + TLV_FLAGS(p_tlv) = tlv.tlv_flags; + OSAL_MEMCPY(p_mfw_buf + offset + sizeof(tlv), + buffer.p_val, len); } - - offset += sizeof(u32) * tlv.tlv_length; } OSAL_VFREE(p_hwfn->p_dev, p_tlv_data); @@ -1484,6 +1300,7 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) global_offsize = ecore_rd(p_hwfn, p_ptt, addr); global_addr = SECTION_ADDR(global_offsize, 0); addr = global_addr + OFFSETOF(struct public_global, data_ptr); + addr = ecore_rd(p_hwfn, p_ptt, addr); size = ecore_rd(p_hwfn, p_ptt, global_addr + OFFSETOF(struct public_global, data_size)); @@ -1494,14 +1311,19 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) p_mfw_buf = (void *)OSAL_VZALLOC(p_hwfn->p_dev, size); if (!p_mfw_buf) { - DP_NOTICE(p_hwfn, false, - "Failed allocate memory for p_mfw_buf\n"); + DP_NOTICE(p_hwfn, false, "Failed to allocate memory for p_mfw_buf\n"); goto drv_done; } - /* Read the TLV request to local buffer */ + /* Read the TLV request to local buffer. MFW represents the TLV in + * little endian format and the mcp returns it in big endian format. + * Hence the driver needs to convert the data to little endian first and + * then do the memcpy (casting) to preserve the MFW TLV format in the + * driver buffer. + */ for (offset = 0; offset < size; offset += sizeof(u32)) { val = ecore_rd(p_hwfn, p_ptt, addr + offset); + val = OSAL_BE32_TO_CPU(val); OSAL_MEMCPY(&p_mfw_buf[offset], &val, sizeof(u32)); } @@ -1512,7 +1334,30 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) tlv.tlv_type = TLV_TYPE(p_temp); tlv.tlv_length = TLV_LENGTH(p_temp); if (ecore_mfw_get_tlv_group(tlv.tlv_type, &tlv_group)) - goto drv_done; + DP_VERBOSE(p_hwfn, ECORE_MSG_DRV, + "Unrecognized TLV %d\n", tlv.tlv_type); } + + /* Sanitize the TLV groups according to personality */ + if ((tlv_group & ECORE_MFW_TLV_FCOE) && + p_hwfn->hw_info.personality != ECORE_PCI_FCOE) { + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "Skipping FCoE TLVs for non-FCoE function\n"); + tlv_group &= ~ECORE_MFW_TLV_FCOE; + } + + if ((tlv_group & ECORE_MFW_TLV_ISCSI) && + p_hwfn->hw_info.personality != ECORE_PCI_ISCSI) { + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "Skipping iSCSI TLVs for non-iSCSI function\n"); + tlv_group &= ~ECORE_MFW_TLV_ISCSI; + } + + if ((tlv_group & ECORE_MFW_TLV_ETH) && + !ECORE_IS_L2_PERSONALITY(p_hwfn)) { + DP_VERBOSE(p_hwfn, ECORE_MSG_SP, + "Skipping L2 TLVs for non-L2 function\n"); + tlv_group &= ~ECORE_MFW_TLV_ETH; } /* Update the TLV values in the local buffer */ @@ -1523,11 +1368,14 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) } } - /* Write the TLV data to shared memory */ + /* Write the TLV data to shared memory. Each 4-byte stream first needs + * to be mem-copied to a u32 element to put it in LSB format, and then + * converted to big endian as required by the mcp write. + */ for (offset = 0; offset < size; offset += sizeof(u32)) { - val = (u32)p_mfw_buf[offset]; + OSAL_MEMCPY(&val, &p_mfw_buf[offset], sizeof(u32)); + val = OSAL_CPU_TO_BE32(val); ecore_wr(p_hwfn, p_ptt, addr + offset, val); - offset += sizeof(u32); } drv_done: diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h index 64509f7cc..0e15c51b3 100644 --- a/drivers/net/qede/base/ecore_proto_if.h +++ b/drivers/net/qede/base/ecore_proto_if.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_PROTO_IF_H__ #define __ECORE_PROTO_IF_H__ @@ -34,10 +34,43 @@ struct ecore_eth_pf_params { bool allow_vf_mac_change; }; +/* Most of the parameters below are described in the FW FCoE HSI */ +struct ecore_fcoe_pf_params { + /* The following parameters are used during protocol-init */ + u64 glbl_q_params_addr; + u64 bdq_pbl_base_addr[2]; + + /* The following parameters are used during HW-init + * and these parameters need to be passed as arguments + * to the update_pf_params routine invoked before slowpath start + */ + u16 num_cons; + u16 num_tasks; + + /* The following parameters are used during protocol-init */ + u16 sq_num_pbl_pages; + + u16 cq_num_entries; + u16 cmdq_num_entries; + u16 rq_buffer_log_size; + u16 mtu; + u16 dummy_icid; + u16 bdq_xoff_threshold[2]; + u16 bdq_xon_threshold[2]; + u16 rq_buffer_size; + u8 num_cqs; /* num of global CQs */ + u8 log_page_size; + u8 gl_rq_pi; + u8 gl_cmd_pi; + u8 debug_mode; + u8 is_target; + u8 bdq_pbl_num_entries[2]; +}; + /* Most of the parameters below are described in the FW iSCSI / TCP HSI */ struct ecore_iscsi_pf_params { u64 glbl_q_params_addr; - u64 bdq_pbl_base_addr[2]; + u64 bdq_pbl_base_addr[3]; u16 cq_num_entries; u16 cmdq_num_entries; u32 two_msl_timer; @@ -51,33 +84,36 @@ struct ecore_iscsi_pf_params { /* The following parameters are used during protocol-init */ u16 half_way_close_timeout; - u16 bdq_xoff_threshold[2]; - u16 bdq_xon_threshold[2]; + u16 bdq_xoff_threshold[3]; + u16 bdq_xon_threshold[3]; u16 cmdq_xoff_threshold; u16 cmdq_xon_threshold; + u16 in_capsule_max_size; u16 rq_buffer_size; u8 num_sq_pages_in_ring; - u8 num_r2tq_pages_in_ring; u8 num_uhq_pages_in_ring; u8 num_queues; u8 log_page_size; u8 log_page_size_conn; u8 rqe_log_size; + u8 max_syn_rt; u8 max_fin_rt; u8 gl_rq_pi; u8 gl_cmd_pi; u8 debug_mode; u8 ll2_ooo_queue_id; - u8 ooo_enable; - u8 is_target; - u8 bdq_pbl_num_entries[2]; + u8 is_soc_en; + u8 soc_num_of_blocks_log; + u8 bdq_pbl_num_entries[3]; u8 disable_stats_collection; + u8 nvmetcp_en; }; enum ecore_rdma_protocol { ECORE_RDMA_PROTOCOL_DEFAULT, + ECORE_RDMA_PROTOCOL_NONE, ECORE_RDMA_PROTOCOL_ROCE, ECORE_RDMA_PROTOCOL_IWARP, }; @@ -87,15 +123,23 @@ struct ecore_rdma_pf_params { * the doorbell BAR). */ u32 min_dpis; /* number of requested DPIs */ - u32 num_mrs; /* number of requested memory regions*/ u32 num_qps; /* number of requested Queue Pairs */ - u32 num_srqs; /* number of requested SRQ */ - u8 roce_edpm_mode; /* see QED_ROCE_EDPM_MODE_ENABLE */ + u32 num_vf_qps; /* number of requested Queue Pairs for VF */ + u32 num_vf_tasks; /* number of requested MRs for VF */ + u32 num_srqs; /* number of requested SRQs */ + u32 num_vf_srqs; /* number of requested SRQs for VF */ u8 gl_pi; /* protocol index */ /* Will allocate rate limiters to be used with QPs */ u8 enable_dcqcn; + /* Max number of CNQs - limits the number of the ECORE_RDMA_CNQ feature, + * allowing an increment in ECORE_PF_L2_QUE. + * To disable CNQs, use the dedicated value instead of `0'.
+ */ +#define ECORE_RDMA_PF_PARAMS_CNQS_NONE (0xffff) + u16 max_cnqs; + /* TCP port number used for the iwarp traffic */ u16 iwarp_port; enum ecore_rdma_protocol rdma_protocol; @@ -103,6 +147,7 @@ struct ecore_rdma_pf_params { struct ecore_pf_params { struct ecore_eth_pf_params eth_pf_params; + struct ecore_fcoe_pf_params fcoe_pf_params; struct ecore_iscsi_pf_params iscsi_pf_params; struct ecore_rdma_pf_params rdma_pf_params; }; diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h index 08b1f4700..7badacd6d 100644 --- a/drivers/net/qede/base/ecore_rt_defs.h +++ b/drivers/net/qede/base/ecore_rt_defs.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __RT_DEFS_H__ #define __RT_DEFS_H__ @@ -27,425 +27,534 @@ #define DORQ_REG_VF_ICID_BIT_SHIFT_NORM_RT_OFFSET 16 #define DORQ_REG_PF_WAKE_ALL_RT_OFFSET 17 #define DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET 18 -#define IGU_REG_PF_CONFIGURATION_RT_OFFSET 19 -#define IGU_REG_VF_CONFIGURATION_RT_OFFSET 20 -#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET 21 -#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET 22 -#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET 23 -#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET 24 -#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET 25 -#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET 26 -#define CAU_REG_SB_VAR_MEMORY_RT_SIZE 736 -#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET 762 -#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE 736 -#define CAU_REG_PI_MEMORY_RT_OFFSET 1498 -#define CAU_REG_PI_MEMORY_RT_SIZE 4416 -#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET 5914 -#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET 5915 -#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET 5916 -#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET 5917 -#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET 5918 -#define PRS_REG_SEARCH_TCP_RT_OFFSET 5919 -#define PRS_REG_SEARCH_FCOE_RT_OFFSET 5920 -#define PRS_REG_SEARCH_ROCE_RT_OFFSET 5921 -#define PRS_REG_ROCE_DEST_QP_MAX_VF_RT_OFFSET 5922 -#define PRS_REG_ROCE_DEST_QP_MAX_PF_RT_OFFSET 5923 -#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET 5924 -#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET 5925 -#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET 5926 -#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET 5927 -#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET 5928 -#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET 5929 -#define SRC_REG_FIRSTFREE_RT_OFFSET 5930 +#define DORQ_REG_GLB_MAX_ICID_0_RT_OFFSET 19 +#define DORQ_REG_GLB_MAX_ICID_1_RT_OFFSET 20 +#define DORQ_REG_GLB_RANGE2CONN_TYPE_0_RT_OFFSET 21 +#define DORQ_REG_GLB_RANGE2CONN_TYPE_1_RT_OFFSET 22 +#define DORQ_REG_PRV_PF_MAX_ICID_2_RT_OFFSET 23 +#define DORQ_REG_PRV_PF_MAX_ICID_3_RT_OFFSET 24 +#define DORQ_REG_PRV_PF_MAX_ICID_4_RT_OFFSET 25 +#define DORQ_REG_PRV_PF_MAX_ICID_5_RT_OFFSET 26 +#define DORQ_REG_PRV_VF_MAX_ICID_2_RT_OFFSET 27 +#define DORQ_REG_PRV_VF_MAX_ICID_3_RT_OFFSET 28 +#define DORQ_REG_PRV_VF_MAX_ICID_4_RT_OFFSET 29 +#define DORQ_REG_PRV_VF_MAX_ICID_5_RT_OFFSET 30 +#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_2_RT_OFFSET 31 +#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_3_RT_OFFSET 32 +#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_4_RT_OFFSET 33 +#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_5_RT_OFFSET 34 +#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_2_RT_OFFSET 35 +#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_3_RT_OFFSET 36 +#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_4_RT_OFFSET 37 +#define 
DORQ_REG_PRV_VF_RANGE2CONN_TYPE_5_RT_OFFSET 38 +#define IGU_REG_PF_CONFIGURATION_RT_OFFSET 39 +#define IGU_REG_VF_CONFIGURATION_RT_OFFSET 40 +#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET 41 +#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET 42 +#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET 43 +#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET 44 +#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET 45 +#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET 46 +#define CAU_REG_SB_VAR_MEMORY_RT_SIZE 1024 +#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET 1070 +#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE 1024 +#define CAU_REG_PI_MEMORY_RT_OFFSET 2094 +#define CAU_REG_PI_MEMORY_RT_SIZE 5120 +#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET 7214 +#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET 7215 +#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET 7216 +#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET 7217 +#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET 7218 +#define PRS_REG_SEARCH_TCP_RT_OFFSET 7219 +#define PRS_REG_SEARCH_FCOE_RT_OFFSET 7220 +#define PRS_REG_SEARCH_ROCE_RT_OFFSET 7221 +#define PRS_REG_ROCE_DEST_QP_MAX_VF_RT_OFFSET 7222 +#define PRS_REG_ROCE_DEST_QP_MAX_PF_RT_OFFSET 7223 +#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET 7224 +#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET 7225 +#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET 7226 +#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET 7227 +#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET 7228 +#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET 7229 +#define SRC_REG_TABLE_T1_ENTRY_SIZE_RT_OFFSET 7230 +#define SRC_REG_TABLE_T2_ENTRY_SIZE_RT_OFFSET 7231 +#define SRC_REG_FIRSTFREE_RT_OFFSET 7232 #define SRC_REG_FIRSTFREE_RT_SIZE 2 -#define SRC_REG_LASTFREE_RT_OFFSET 5932 +#define SRC_REG_LASTFREE_RT_OFFSET 7234 #define SRC_REG_LASTFREE_RT_SIZE 2 -#define SRC_REG_COUNTFREE_RT_OFFSET 5934 -#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET 5935 -#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET 5936 -#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET 5937 -#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET 5938 -#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET 5939 -#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET 5940 -#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET 5941 -#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET 5942 -#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET 5943 -#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET 5944 -#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET 5945 -#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET 5946 -#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET 5947 -#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET 5948 -#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET 5949 -#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET 5950 -#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET 5951 -#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET 5952 -#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET 5953 -#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET 5954 -#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET 5955 -#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET 5956 -#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET 5957 -#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET 5958 -#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET 5959 -#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET 5960 -#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET 5961 -#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET 5962 -#define PSWRQ2_REG_VF_BASE_RT_OFFSET 5963 -#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET 5964 -#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET 5965 -#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET 5966 -#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET 5967 -#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE 22000 -#define PGLUE_REG_B_VF_BASE_RT_OFFSET 27967 
-#define PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET 27968 -#define PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET 27969 -#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET 27970 -#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET 27971 -#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET 27972 -#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET 27973 -#define TM_REG_VF_ENABLE_CONN_RT_OFFSET 27974 -#define TM_REG_PF_ENABLE_CONN_RT_OFFSET 27975 -#define TM_REG_PF_ENABLE_TASK_RT_OFFSET 27976 -#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET 27977 -#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET 27978 -#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET 27979 +#define SRC_REG_COUNTFREE_RT_OFFSET 7236 +#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET 7237 +#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET 7238 +#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET 7239 +#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET 7240 +#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET 7241 +#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET 7242 +#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET 7243 +#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET 7244 +#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET 7245 +#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET 7246 +#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET 7247 +#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET 7248 +#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET 7249 +#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET 7250 +#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET 7251 +#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET 7252 +#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET 7253 +#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET 7254 +#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET 7255 +#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET 7256 +#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET 7257 +#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET 7258 +#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET 7259 +#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET 7260 +#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET 7261 +#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET 7262 +#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET 7263 +#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET 7264 +#define PSWRQ2_REG_VF_BASE_RT_OFFSET 7265 +#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET 7266 +#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET 7267 +#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET 7268 +#define PSWRQ2_REG_TGSRC_P_SIZE_RT_OFFSET 7269 +#define PSWRQ2_REG_RGSRC_P_SIZE_RT_OFFSET 7270 +#define PSWRQ2_REG_TGSRC_FIRST_ILT_RT_OFFSET 7271 +#define PSWRQ2_REG_RGSRC_FIRST_ILT_RT_OFFSET 7272 +#define PSWRQ2_REG_TGSRC_LAST_ILT_RT_OFFSET 7273 +#define PSWRQ2_REG_RGSRC_LAST_ILT_RT_OFFSET 7274 +#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET 7275 +#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE 27328 +#define PGLUE_REG_B_VF_BASE_RT_OFFSET 34603 +#define PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET 34604 +#define PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET 34605 +#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET 34606 +#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET 34607 +#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET 34608 +#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET 34609 +#define TM_REG_VF_ENABLE_CONN_RT_OFFSET 34610 +#define TM_REG_PF_ENABLE_CONN_RT_OFFSET 34611 +#define TM_REG_PF_ENABLE_TASK_RT_OFFSET 34612 +#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET 34613 +#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET 34614 +#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET 34615 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE 416 -#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET 28395 -#define TM_REG_CONFIG_TASK_MEM_RT_SIZE 512 -#define QM_REG_MAXPQSIZE_0_RT_OFFSET 28907 -#define QM_REG_MAXPQSIZE_1_RT_OFFSET 28908 -#define 
QM_REG_MAXPQSIZE_2_RT_OFFSET 28909 -#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET 28910 -#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET 28911 -#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET 28912 -#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET 28913 -#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET 28914 -#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET 28915 -#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET 28916 -#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET 28917 -#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET 28918 -#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET 28919 -#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET 28920 -#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET 28921 -#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET 28922 -#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET 28923 -#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET 28924 -#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET 28925 -#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET 28926 -#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET 28927 -#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET 28928 -#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET 28929 -#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET 28930 -#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET 28931 -#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET 28932 -#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET 28933 -#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET 28934 -#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET 28935 -#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET 28936 -#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET 28937 -#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET 28938 -#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET 28939 -#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET 28940 -#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET 28941 -#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET 28942 -#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET 28943 -#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET 28944 -#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET 28945 -#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET 28946 -#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET 28947 -#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET 28948 -#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET 28949 -#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET 28950 -#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET 28951 -#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET 28952 -#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET 28953 -#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET 28954 -#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET 28955 -#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET 28956 -#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET 28957 -#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET 28958 -#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET 28959 -#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET 28960 -#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET 28961 -#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET 28962 -#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET 28963 -#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET 28964 -#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET 28965 -#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET 28966 -#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET 28967 -#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET 28968 -#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET 28969 -#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET 28970 -#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET 28971 -#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET 28972 -#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET 28973 -#define QM_REG_BASEADDROTHERPQ_RT_OFFSET 28974 +#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET 35031 +#define TM_REG_CONFIG_TASK_MEM_RT_SIZE 608 +#define QM_REG_MAXPQSIZE_0_RT_OFFSET 35639 +#define QM_REG_MAXPQSIZE_1_RT_OFFSET 35640 +#define QM_REG_MAXPQSIZE_2_RT_OFFSET 35641 +#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET 35642 +#define 
QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET 35643 +#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET 35644 +#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET 35645 +#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET 35646 +#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET 35647 +#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET 35648 +#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET 35649 +#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET 35650 +#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET 35651 +#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET 35652 +#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET 35653 +#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET 35654 +#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET 35655 +#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET 35656 +#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET 35657 +#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET 35658 +#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET 35659 +#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET 35660 +#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET 35661 +#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET 35662 +#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET 35663 +#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET 35664 +#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET 35665 +#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET 35666 +#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET 35667 +#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET 35668 +#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET 35669 +#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET 35670 +#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET 35671 +#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET 35672 +#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET 35673 +#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET 35674 +#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET 35675 +#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET 35676 +#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET 35677 +#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET 35678 +#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET 35679 +#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET 35680 +#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET 35681 +#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET 35682 +#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET 35683 +#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET 35684 +#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET 35685 +#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET 35686 +#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET 35687 +#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET 35688 +#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET 35689 +#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET 35690 +#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET 35691 +#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET 35692 +#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET 35693 +#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET 35694 +#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET 35695 +#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET 35696 +#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET 35697 +#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET 35698 +#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET 35699 +#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET 35700 +#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET 35701 +#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET 35702 +#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET 35703 +#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET 35704 +#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET 35705 +#define QM_REG_BASEADDROTHERPQ_RT_OFFSET 35706 #define QM_REG_BASEADDROTHERPQ_RT_SIZE 128 -#define QM_REG_PTRTBLOTHER_RT_OFFSET 29102 +#define QM_REG_PTRTBLOTHER_RT_OFFSET 35834 #define QM_REG_PTRTBLOTHER_RT_SIZE 256 -#define QM_REG_VOQCRDLINE_RT_OFFSET 29358 -#define QM_REG_VOQCRDLINE_RT_SIZE 20 -#define QM_REG_VOQINITCRDLINE_RT_OFFSET 29378 -#define QM_REG_VOQINITCRDLINE_RT_SIZE 20 -#define 
QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET 29398 -#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET 29399 -#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET 29400 -#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET 29401 -#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET 29402 -#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET 29403 -#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET 29404 -#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET 29405 -#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET 29406 -#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET 29407 -#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET 29408 -#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET 29409 -#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET 29410 -#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET 29411 -#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET 29412 -#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET 29413 -#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET 29414 -#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET 29415 -#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET 29416 -#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET 29417 -#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET 29418 -#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET 29419 -#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET 29420 -#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET 29421 -#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET 29422 -#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET 29423 -#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET 29424 -#define QM_REG_PQTX2PF_0_RT_OFFSET 29425 -#define QM_REG_PQTX2PF_1_RT_OFFSET 29426 -#define QM_REG_PQTX2PF_2_RT_OFFSET 29427 -#define QM_REG_PQTX2PF_3_RT_OFFSET 29428 -#define QM_REG_PQTX2PF_4_RT_OFFSET 29429 -#define QM_REG_PQTX2PF_5_RT_OFFSET 29430 -#define QM_REG_PQTX2PF_6_RT_OFFSET 29431 -#define QM_REG_PQTX2PF_7_RT_OFFSET 29432 -#define QM_REG_PQTX2PF_8_RT_OFFSET 29433 -#define QM_REG_PQTX2PF_9_RT_OFFSET 29434 -#define QM_REG_PQTX2PF_10_RT_OFFSET 29435 -#define QM_REG_PQTX2PF_11_RT_OFFSET 29436 -#define QM_REG_PQTX2PF_12_RT_OFFSET 29437 -#define QM_REG_PQTX2PF_13_RT_OFFSET 29438 -#define QM_REG_PQTX2PF_14_RT_OFFSET 29439 -#define QM_REG_PQTX2PF_15_RT_OFFSET 29440 -#define QM_REG_PQTX2PF_16_RT_OFFSET 29441 -#define QM_REG_PQTX2PF_17_RT_OFFSET 29442 -#define QM_REG_PQTX2PF_18_RT_OFFSET 29443 -#define QM_REG_PQTX2PF_19_RT_OFFSET 29444 -#define QM_REG_PQTX2PF_20_RT_OFFSET 29445 -#define QM_REG_PQTX2PF_21_RT_OFFSET 29446 -#define QM_REG_PQTX2PF_22_RT_OFFSET 29447 -#define QM_REG_PQTX2PF_23_RT_OFFSET 29448 -#define QM_REG_PQTX2PF_24_RT_OFFSET 29449 -#define QM_REG_PQTX2PF_25_RT_OFFSET 29450 -#define QM_REG_PQTX2PF_26_RT_OFFSET 29451 -#define QM_REG_PQTX2PF_27_RT_OFFSET 29452 -#define QM_REG_PQTX2PF_28_RT_OFFSET 29453 -#define QM_REG_PQTX2PF_29_RT_OFFSET 29454 -#define QM_REG_PQTX2PF_30_RT_OFFSET 29455 -#define QM_REG_PQTX2PF_31_RT_OFFSET 29456 -#define QM_REG_PQTX2PF_32_RT_OFFSET 29457 -#define QM_REG_PQTX2PF_33_RT_OFFSET 29458 -#define QM_REG_PQTX2PF_34_RT_OFFSET 29459 -#define QM_REG_PQTX2PF_35_RT_OFFSET 29460 -#define QM_REG_PQTX2PF_36_RT_OFFSET 29461 -#define QM_REG_PQTX2PF_37_RT_OFFSET 29462 -#define QM_REG_PQTX2PF_38_RT_OFFSET 29463 -#define QM_REG_PQTX2PF_39_RT_OFFSET 29464 -#define QM_REG_PQTX2PF_40_RT_OFFSET 29465 -#define QM_REG_PQTX2PF_41_RT_OFFSET 29466 -#define QM_REG_PQTX2PF_42_RT_OFFSET 29467 -#define QM_REG_PQTX2PF_43_RT_OFFSET 29468 -#define QM_REG_PQTX2PF_44_RT_OFFSET 29469 -#define QM_REG_PQTX2PF_45_RT_OFFSET 29470 -#define QM_REG_PQTX2PF_46_RT_OFFSET 29471 -#define QM_REG_PQTX2PF_47_RT_OFFSET 29472 -#define QM_REG_PQTX2PF_48_RT_OFFSET 29473 -#define QM_REG_PQTX2PF_49_RT_OFFSET 29474 -#define QM_REG_PQTX2PF_50_RT_OFFSET 29475 -#define QM_REG_PQTX2PF_51_RT_OFFSET 29476 
-#define QM_REG_PQTX2PF_52_RT_OFFSET 29477 -#define QM_REG_PQTX2PF_53_RT_OFFSET 29478 -#define QM_REG_PQTX2PF_54_RT_OFFSET 29479 -#define QM_REG_PQTX2PF_55_RT_OFFSET 29480 -#define QM_REG_PQTX2PF_56_RT_OFFSET 29481 -#define QM_REG_PQTX2PF_57_RT_OFFSET 29482 -#define QM_REG_PQTX2PF_58_RT_OFFSET 29483 -#define QM_REG_PQTX2PF_59_RT_OFFSET 29484 -#define QM_REG_PQTX2PF_60_RT_OFFSET 29485 -#define QM_REG_PQTX2PF_61_RT_OFFSET 29486 -#define QM_REG_PQTX2PF_62_RT_OFFSET 29487 -#define QM_REG_PQTX2PF_63_RT_OFFSET 29488 -#define QM_REG_PQOTHER2PF_0_RT_OFFSET 29489 -#define QM_REG_PQOTHER2PF_1_RT_OFFSET 29490 -#define QM_REG_PQOTHER2PF_2_RT_OFFSET 29491 -#define QM_REG_PQOTHER2PF_3_RT_OFFSET 29492 -#define QM_REG_PQOTHER2PF_4_RT_OFFSET 29493 -#define QM_REG_PQOTHER2PF_5_RT_OFFSET 29494 -#define QM_REG_PQOTHER2PF_6_RT_OFFSET 29495 -#define QM_REG_PQOTHER2PF_7_RT_OFFSET 29496 -#define QM_REG_PQOTHER2PF_8_RT_OFFSET 29497 -#define QM_REG_PQOTHER2PF_9_RT_OFFSET 29498 -#define QM_REG_PQOTHER2PF_10_RT_OFFSET 29499 -#define QM_REG_PQOTHER2PF_11_RT_OFFSET 29500 -#define QM_REG_PQOTHER2PF_12_RT_OFFSET 29501 -#define QM_REG_PQOTHER2PF_13_RT_OFFSET 29502 -#define QM_REG_PQOTHER2PF_14_RT_OFFSET 29503 -#define QM_REG_PQOTHER2PF_15_RT_OFFSET 29504 -#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET 29505 -#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET 29506 -#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET 29507 -#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET 29508 -#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET 29509 -#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET 29510 -#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET 29511 -#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET 29512 -#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET 29513 -#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET 29514 -#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET 29515 -#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET 29516 -#define QM_REG_RLGLBLINCVAL_RT_OFFSET 29517 +#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET 36090 +#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET 36091 +#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET 36092 +#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET 36093 +#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET 36094 +#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET 36095 +#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET 36096 +#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET 36097 +#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET 36098 +#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET 36099 +#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET 36100 +#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET 36101 +#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET 36102 +#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET 36103 +#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET 36104 +#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET 36105 +#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET 36106 +#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET 36107 +#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET 36108 +#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET 36109 +#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET 36110 +#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET 36111 +#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET 36112 +#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET 36113 +#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET 36114 +#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET 36115 +#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET 36116 +#define QM_REG_PQTX2PF_0_RT_OFFSET 36117 +#define QM_REG_PQTX2PF_1_RT_OFFSET 36118 +#define QM_REG_PQTX2PF_2_RT_OFFSET 36119 +#define QM_REG_PQTX2PF_3_RT_OFFSET 36120 +#define QM_REG_PQTX2PF_4_RT_OFFSET 36121 +#define QM_REG_PQTX2PF_5_RT_OFFSET 36122 +#define QM_REG_PQTX2PF_6_RT_OFFSET 36123 +#define QM_REG_PQTX2PF_7_RT_OFFSET 
36124 +#define QM_REG_PQTX2PF_8_RT_OFFSET 36125 +#define QM_REG_PQTX2PF_9_RT_OFFSET 36126 +#define QM_REG_PQTX2PF_10_RT_OFFSET 36127 +#define QM_REG_PQTX2PF_11_RT_OFFSET 36128 +#define QM_REG_PQTX2PF_12_RT_OFFSET 36129 +#define QM_REG_PQTX2PF_13_RT_OFFSET 36130 +#define QM_REG_PQTX2PF_14_RT_OFFSET 36131 +#define QM_REG_PQTX2PF_15_RT_OFFSET 36132 +#define QM_REG_PQTX2PF_16_RT_OFFSET 36133 +#define QM_REG_PQTX2PF_17_RT_OFFSET 36134 +#define QM_REG_PQTX2PF_18_RT_OFFSET 36135 +#define QM_REG_PQTX2PF_19_RT_OFFSET 36136 +#define QM_REG_PQTX2PF_20_RT_OFFSET 36137 +#define QM_REG_PQTX2PF_21_RT_OFFSET 36138 +#define QM_REG_PQTX2PF_22_RT_OFFSET 36139 +#define QM_REG_PQTX2PF_23_RT_OFFSET 36140 +#define QM_REG_PQTX2PF_24_RT_OFFSET 36141 +#define QM_REG_PQTX2PF_25_RT_OFFSET 36142 +#define QM_REG_PQTX2PF_26_RT_OFFSET 36143 +#define QM_REG_PQTX2PF_27_RT_OFFSET 36144 +#define QM_REG_PQTX2PF_28_RT_OFFSET 36145 +#define QM_REG_PQTX2PF_29_RT_OFFSET 36146 +#define QM_REG_PQTX2PF_30_RT_OFFSET 36147 +#define QM_REG_PQTX2PF_31_RT_OFFSET 36148 +#define QM_REG_PQTX2PF_32_RT_OFFSET 36149 +#define QM_REG_PQTX2PF_33_RT_OFFSET 36150 +#define QM_REG_PQTX2PF_34_RT_OFFSET 36151 +#define QM_REG_PQTX2PF_35_RT_OFFSET 36152 +#define QM_REG_PQTX2PF_36_RT_OFFSET 36153 +#define QM_REG_PQTX2PF_37_RT_OFFSET 36154 +#define QM_REG_PQTX2PF_38_RT_OFFSET 36155 +#define QM_REG_PQTX2PF_39_RT_OFFSET 36156 +#define QM_REG_PQTX2PF_40_RT_OFFSET 36157 +#define QM_REG_PQTX2PF_41_RT_OFFSET 36158 +#define QM_REG_PQTX2PF_42_RT_OFFSET 36159 +#define QM_REG_PQTX2PF_43_RT_OFFSET 36160 +#define QM_REG_PQTX2PF_44_RT_OFFSET 36161 +#define QM_REG_PQTX2PF_45_RT_OFFSET 36162 +#define QM_REG_PQTX2PF_46_RT_OFFSET 36163 +#define QM_REG_PQTX2PF_47_RT_OFFSET 36164 +#define QM_REG_PQTX2PF_48_RT_OFFSET 36165 +#define QM_REG_PQTX2PF_49_RT_OFFSET 36166 +#define QM_REG_PQTX2PF_50_RT_OFFSET 36167 +#define QM_REG_PQTX2PF_51_RT_OFFSET 36168 +#define QM_REG_PQTX2PF_52_RT_OFFSET 36169 +#define QM_REG_PQTX2PF_53_RT_OFFSET 36170 +#define QM_REG_PQTX2PF_54_RT_OFFSET 36171 +#define QM_REG_PQTX2PF_55_RT_OFFSET 36172 +#define QM_REG_PQTX2PF_56_RT_OFFSET 36173 +#define QM_REG_PQTX2PF_57_RT_OFFSET 36174 +#define QM_REG_PQTX2PF_58_RT_OFFSET 36175 +#define QM_REG_PQTX2PF_59_RT_OFFSET 36176 +#define QM_REG_PQTX2PF_60_RT_OFFSET 36177 +#define QM_REG_PQTX2PF_61_RT_OFFSET 36178 +#define QM_REG_PQTX2PF_62_RT_OFFSET 36179 +#define QM_REG_PQTX2PF_63_RT_OFFSET 36180 +#define QM_REG_PQOTHER2PF_0_RT_OFFSET 36181 +#define QM_REG_PQOTHER2PF_1_RT_OFFSET 36182 +#define QM_REG_PQOTHER2PF_2_RT_OFFSET 36183 +#define QM_REG_PQOTHER2PF_3_RT_OFFSET 36184 +#define QM_REG_PQOTHER2PF_4_RT_OFFSET 36185 +#define QM_REG_PQOTHER2PF_5_RT_OFFSET 36186 +#define QM_REG_PQOTHER2PF_6_RT_OFFSET 36187 +#define QM_REG_PQOTHER2PF_7_RT_OFFSET 36188 +#define QM_REG_PQOTHER2PF_8_RT_OFFSET 36189 +#define QM_REG_PQOTHER2PF_9_RT_OFFSET 36190 +#define QM_REG_PQOTHER2PF_10_RT_OFFSET 36191 +#define QM_REG_PQOTHER2PF_11_RT_OFFSET 36192 +#define QM_REG_PQOTHER2PF_12_RT_OFFSET 36193 +#define QM_REG_PQOTHER2PF_13_RT_OFFSET 36194 +#define QM_REG_PQOTHER2PF_14_RT_OFFSET 36195 +#define QM_REG_PQOTHER2PF_15_RT_OFFSET 36196 +#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET 36197 +#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET 36198 +#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET 36199 +#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET 36200 +#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET 36201 +#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET 36202 +#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET 36203 +#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET 36204 +#define 
QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET 36205 +#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET 36206 +#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET 36207 +#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET 36208 +#define QM_REG_RLGLBLPERIODSEL_8_RT_OFFSET 36209 +#define QM_REG_RLGLBLPERIODSEL_9_RT_OFFSET 36210 +#define QM_REG_RLGLBLPERIODSEL_10_RT_OFFSET 36211 +#define QM_REG_RLGLBLPERIODSEL_11_RT_OFFSET 36212 +#define QM_REG_RLGLBLPERIODSEL_12_RT_OFFSET 36213 +#define QM_REG_RLGLBLPERIODSEL_13_RT_OFFSET 36214 +#define QM_REG_RLGLBLPERIODSEL_14_RT_OFFSET 36215 +#define QM_REG_RLGLBLPERIODSEL_15_RT_OFFSET 36216 +#define QM_REG_RLGLBLINCVAL_RT_OFFSET 36217 #define QM_REG_RLGLBLINCVAL_RT_SIZE 256 -#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET 29773 +#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET 36473 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE 256 -#define QM_REG_RLGLBLCRD_RT_OFFSET 30029 +#define QM_REG_RLGLBLCRD_RT_OFFSET 36729 #define QM_REG_RLGLBLCRD_RT_SIZE 256 -#define QM_REG_RLGLBLENABLE_RT_OFFSET 30285 -#define QM_REG_RLPFPERIOD_RT_OFFSET 30286 -#define QM_REG_RLPFPERIODTIMER_RT_OFFSET 30287 -#define QM_REG_RLPFINCVAL_RT_OFFSET 30288 +#define QM_REG_RLGLBLENABLE_RT_OFFSET 36985 +#define QM_REG_RLPFPERIOD_RT_OFFSET 36986 +#define QM_REG_RLPFPERIODTIMER_RT_OFFSET 36987 +#define QM_REG_RLPFINCVAL_RT_OFFSET 36988 #define QM_REG_RLPFINCVAL_RT_SIZE 16 -#define QM_REG_RLPFUPPERBOUND_RT_OFFSET 30304 +#define QM_REG_RLPFUPPERBOUND_RT_OFFSET 37004 #define QM_REG_RLPFUPPERBOUND_RT_SIZE 16 -#define QM_REG_RLPFCRD_RT_OFFSET 30320 +#define QM_REG_RLPFCRD_RT_OFFSET 37020 #define QM_REG_RLPFCRD_RT_SIZE 16 -#define QM_REG_RLPFENABLE_RT_OFFSET 30336 -#define QM_REG_RLPFVOQENABLE_RT_OFFSET 30337 -#define QM_REG_WFQPFWEIGHT_RT_OFFSET 30338 +#define QM_REG_RLPFENABLE_RT_OFFSET 37036 +#define QM_REG_RLPFVOQENABLE_RT_OFFSET 37037 +#define QM_REG_WFQPFWEIGHT_RT_OFFSET 37038 #define QM_REG_WFQPFWEIGHT_RT_SIZE 16 -#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET 30354 +#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET 37054 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE 16 -#define QM_REG_WFQPFCRD_RT_OFFSET 30370 -#define QM_REG_WFQPFCRD_RT_SIZE 160 -#define QM_REG_WFQPFENABLE_RT_OFFSET 30530 -#define QM_REG_WFQVPENABLE_RT_OFFSET 30531 -#define QM_REG_BASEADDRTXPQ_RT_OFFSET 30532 +#define QM_REG_WFQPFCRD_RT_OFFSET 37070 +#define QM_REG_WFQPFCRD_RT_SIZE 256 +#define QM_REG_WFQPFENABLE_RT_OFFSET 37326 +#define QM_REG_WFQVPENABLE_RT_OFFSET 37327 +#define QM_REG_BASEADDRTXPQ_RT_OFFSET 37328 #define QM_REG_BASEADDRTXPQ_RT_SIZE 512 -#define QM_REG_TXPQMAP_RT_OFFSET 31044 +#define QM_REG_TXPQMAP_RT_OFFSET 37840 #define QM_REG_TXPQMAP_RT_SIZE 512 -#define QM_REG_WFQVPWEIGHT_RT_OFFSET 31556 +#define QM_REG_WFQVPWEIGHT_RT_OFFSET 38352 #define QM_REG_WFQVPWEIGHT_RT_SIZE 512 -#define QM_REG_WFQVPCRD_RT_OFFSET 32068 +#define QM_REG_RLGLBLINCVAL_MSB_RT_OFFSET 38864 +#define QM_REG_RLGLBLINCVAL_MSB_RT_SIZE 256 +#define QM_REG_RLGLBLUPPERBOUND_MSB_RT_OFFSET 39120 +#define QM_REG_RLGLBLUPPERBOUND_MSB_RT_SIZE 256 +#define QM_REG_WFQVPCRD_RT_OFFSET 39376 #define QM_REG_WFQVPCRD_RT_SIZE 512 -#define QM_REG_WFQVPMAP_RT_OFFSET 32580 +#define QM_REG_RLGLBLCRD_MSB_RT_OFFSET 39888 +#define QM_REG_RLGLBLCRD_MSB_RT_SIZE 256 +#define QM_REG_WFQVPMAP_RT_OFFSET 40144 #define QM_REG_WFQVPMAP_RT_SIZE 512 -#define QM_REG_PTRTBLTX_RT_OFFSET 33092 +#define QM_REG_PTRTBLTX_RT_OFFSET 40656 #define QM_REG_PTRTBLTX_RT_SIZE 1024 -#define QM_REG_WFQPFCRD_MSB_RT_OFFSET 34116 -#define QM_REG_WFQPFCRD_MSB_RT_SIZE 160 -#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET 34276 -#define 
NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET 34277 -#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET 34278 -#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET 34279 -#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET 34280 -#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET 34281 -#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET 34282 -#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET 34283 +#define QM_REG_WFQPFCRD_MSB_RT_OFFSET 41680 +#define QM_REG_WFQPFCRD_MSB_RT_SIZE 320 +#define QM_REG_VOQCRDLINE_RT_OFFSET 42000 +#define QM_REG_VOQCRDLINE_RT_SIZE 36 +#define QM_REG_VOQINITCRDLINE_RT_OFFSET 42036 +#define QM_REG_VOQINITCRDLINE_RT_SIZE 36 +#define QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET 42072 +#define RGSRC_REG_TABLE_T1_ENTRY_SIZE_RT_OFFSET 42073 +#define RGSRC_REG_TABLE_T2_ENTRY_SIZE_RT_OFFSET 42074 +#define RGSRC_REG_NUMBER_HASH_BITS_RT_OFFSET 42075 +#define TGSRC_REG_TABLE_T1_ENTRY_SIZE_RT_OFFSET 42076 +#define TGSRC_REG_TABLE_T2_ENTRY_SIZE_RT_OFFSET 42077 +#define TGSRC_REG_NUMBER_HASH_BITS_RT_OFFSET 42078 +#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET 42079 +#define NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET 42080 +#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET 42081 +#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET 42082 +#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET 42083 +#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET 42084 +#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET 42085 +#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET 42086 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE 4 -#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET 34287 +#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET 42090 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE 4 -#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET 34291 +#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET 42094 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE 32 -#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET 34323 +#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET 42126 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE 16 -#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET 34339 +#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET 42142 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE 16 -#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 34355 +#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 42158 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE 16 -#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET 34371 +#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET 42174 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE 16 -#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET 34387 -#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET 34388 +#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET 42190 +#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET 42191 #define NIG_REG_PPF_TO_ENGINE_SEL_RT_SIZE 8 -#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET 34396 -#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET 34397 -#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET 34398 -#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET 34399 -#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET 34400 -#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET 34401 -#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET 34402 -#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET 34403 -#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET 34404 -#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET 34405 -#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET 34406 -#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET 34407 -#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET 34408 -#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET 34409 -#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET 34410 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET 34411 -#define 
PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET 34412 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET 34413 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET 34414 -#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET 34415 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET 34416 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET 34417 -#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET 34418 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET 34419 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET 34420 -#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET 34421 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET 34422 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET 34423 -#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET 34424 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET 34425 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET 34426 -#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET 34427 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET 34428 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET 34429 -#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET 34430 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET 34431 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET 34432 -#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET 34433 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET 34434 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET 34435 -#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET 34436 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET 34437 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET 34438 -#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET 34439 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET 34440 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET 34441 -#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET 34442 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET 34443 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET 34444 -#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET 34445 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET 34446 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET 34447 -#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET 34448 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET 34449 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET 34450 -#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET 34451 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET 34452 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET 34453 -#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET 34454 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET 34455 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET 34456 -#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET 34457 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET 34458 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET 34459 -#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET 34460 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET 34461 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET 34462 -#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET 34463 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET 34464 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET 34465 -#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET 34466 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET 34467 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET 34468 -#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET 34469 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET 34470 -#define XCM_REG_CON_PHY_Q3_RT_OFFSET 34471 +#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_OFFSET 42199 +#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_SIZE 1024 +#define 
NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_OFFSET 43223 +#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_SIZE 512 +#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_OFFSET 43735 +#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_SIZE 512 +#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 44247 +#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE 512 +#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_OFFSET 44759 +#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_SIZE 512 +#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_OFFSET 45271 +#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_SIZE 32 +#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET 45303 +#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET 45304 +#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET 45305 +#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET 45306 +#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET 45307 +#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET 45308 +#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET 45309 +#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET 45310 +#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET 45311 +#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET 45312 +#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET 45313 +#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET 45314 +#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET 45315 +#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET 45316 +#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET 45317 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET 45318 +#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET 45319 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET 45320 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET 45321 +#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET 45322 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET 45323 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET 45324 +#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET 45325 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET 45326 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET 45327 +#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET 45328 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET 45329 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET 45330 +#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET 45331 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET 45332 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET 45333 +#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET 45334 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET 45335 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET 45336 +#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET 45337 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET 45338 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET 45339 +#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET 45340 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET 45341 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET 45342 +#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET 45343 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET 45344 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET 45345 +#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET 45346 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET 45347 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET 45348 +#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET 45349 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET 45350 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET 45351 +#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET 45352 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET 45353 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET 45354 +#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET 45355 +#define 
PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET 45356 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET 45357 +#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET 45358 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET 45359 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET 45360 +#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET 45361 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET 45362 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET 45363 +#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET 45364 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET 45365 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET 45366 +#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET 45367 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET 45368 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET 45369 +#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET 45370 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET 45371 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET 45372 +#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET 45373 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET 45374 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET 45375 +#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET 45376 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET 45377 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ20_RT_OFFSET 45378 +#define PBF_REG_BTB_GUARANTEED_VOQ20_RT_OFFSET 45379 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ20_RT_OFFSET 45380 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ21_RT_OFFSET 45381 +#define PBF_REG_BTB_GUARANTEED_VOQ21_RT_OFFSET 45382 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ21_RT_OFFSET 45383 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ22_RT_OFFSET 45384 +#define PBF_REG_BTB_GUARANTEED_VOQ22_RT_OFFSET 45385 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ22_RT_OFFSET 45386 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ23_RT_OFFSET 45387 +#define PBF_REG_BTB_GUARANTEED_VOQ23_RT_OFFSET 45388 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ23_RT_OFFSET 45389 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ24_RT_OFFSET 45390 +#define PBF_REG_BTB_GUARANTEED_VOQ24_RT_OFFSET 45391 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ24_RT_OFFSET 45392 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ25_RT_OFFSET 45393 +#define PBF_REG_BTB_GUARANTEED_VOQ25_RT_OFFSET 45394 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ25_RT_OFFSET 45395 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ26_RT_OFFSET 45396 +#define PBF_REG_BTB_GUARANTEED_VOQ26_RT_OFFSET 45397 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ26_RT_OFFSET 45398 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ27_RT_OFFSET 45399 +#define PBF_REG_BTB_GUARANTEED_VOQ27_RT_OFFSET 45400 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ27_RT_OFFSET 45401 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ28_RT_OFFSET 45402 +#define PBF_REG_BTB_GUARANTEED_VOQ28_RT_OFFSET 45403 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ28_RT_OFFSET 45404 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ29_RT_OFFSET 45405 +#define PBF_REG_BTB_GUARANTEED_VOQ29_RT_OFFSET 45406 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ29_RT_OFFSET 45407 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ30_RT_OFFSET 45408 +#define PBF_REG_BTB_GUARANTEED_VOQ30_RT_OFFSET 45409 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ30_RT_OFFSET 45410 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ31_RT_OFFSET 45411 +#define PBF_REG_BTB_GUARANTEED_VOQ31_RT_OFFSET 45412 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ31_RT_OFFSET 45413 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ32_RT_OFFSET 45414 +#define PBF_REG_BTB_GUARANTEED_VOQ32_RT_OFFSET 45415 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ32_RT_OFFSET 45416 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ33_RT_OFFSET 45417 
+#define PBF_REG_BTB_GUARANTEED_VOQ33_RT_OFFSET 45418 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ33_RT_OFFSET 45419 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ34_RT_OFFSET 45420 +#define PBF_REG_BTB_GUARANTEED_VOQ34_RT_OFFSET 45421 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ34_RT_OFFSET 45422 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ35_RT_OFFSET 45423 +#define PBF_REG_BTB_GUARANTEED_VOQ35_RT_OFFSET 45424 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ35_RT_OFFSET 45425 +#define XCM_REG_CON_PHY_Q3_RT_OFFSET 45426 -#define RUNTIME_ARRAY_SIZE 34472 +#define RUNTIME_ARRAY_SIZE 45427 /* Init Callbacks */ #define DMAE_READY_CB 0 diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h index 4633dbebe..5d326bef9 100644 --- a/drivers/net/qede/base/ecore_sp_api.h +++ b/drivers/net/qede/base/ecore_sp_api.h @@ -1,16 +1,16 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_SP_API_H__ #define __ECORE_SP_API_H__ #include "ecore_status.h" enum spq_mode { - ECORE_SPQ_MODE_BLOCK, /* Client will poll a designated mem. address */ + ECORE_SPQ_MODE_BLOCK, /* Client will poll a designated mem. address */ ECORE_SPQ_MODE_CB, /* Client supplies a callback */ ECORE_SPQ_MODE_EBLOCK, /* ECORE should block until completion */ }; diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c index 44ced135d..3a507843a 100644 --- a/drivers/net/qede/base/ecore_sp_commands.c +++ b/drivers/net/qede/base/ecore_sp_commands.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. 
- * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "ecore.h" @@ -22,6 +22,15 @@ #include "ecore_sriov.h" #include "ecore_vf.h" +void ecore_sp_destroy_request(struct ecore_hwfn *p_hwfn, + struct ecore_spq_entry *p_ent) +{ + if (p_ent->queue == &p_hwfn->p_spq->unlimited_pending) + OSAL_FREE(p_hwfn->p_dev, p_ent); + else + ecore_spq_return_entry(p_hwfn, p_ent); +} + enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent, u8 cmd, @@ -56,7 +65,7 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn, case ECORE_SPQ_MODE_BLOCK: if (!p_data->p_comp_data) - return ECORE_INVAL; + goto err; p_ent->comp_cb.cookie = p_data->p_comp_data->cookie; break; @@ -71,13 +80,13 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn, default: DP_NOTICE(p_hwfn, true, "Unknown SPQE completion mode %d\n", p_ent->comp_mode); - return ECORE_INVAL; + goto err; } DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, - "Initialized: CID %08x cmd %02x protocol %02x data_addr %lu comp_mode [%s]\n", + "Initialized: CID %08x cmd %02x protocol %02x data_addr %llx comp_mode [%s]\n", opaque_cid, cmd, protocol, - (unsigned long)&p_ent->ramrod, + (unsigned long long)(osal_uintptr_t)&p_ent->ramrod, D_TRINE(p_ent->comp_mode, ECORE_SPQ_MODE_EBLOCK, ECORE_SPQ_MODE_BLOCK, "MODE_EBLOCK", "MODE_BLOCK", "MODE_CB")); @@ -85,6 +94,11 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn, OSAL_MEMSET(&p_ent->ramrod, 0, sizeof(p_ent->ramrod)); return ECORE_SUCCESS; + +err: + ecore_sp_destroy_request(p_hwfn, p_ent); + + return ECORE_INVAL; } static enum tunnel_clss ecore_tunn_clss_to_fw_clss(u8 type) @@ -131,22 +145,14 @@ ecore_set_pf_update_tunn_mode(struct ecore_tunnel_info *p_tun, static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun, struct ecore_tunnel_info *p_src) { - enum tunnel_clss type; - - p_tun->b_update_rx_cls = p_src->b_update_rx_cls; - p_tun->b_update_tx_cls = p_src->b_update_tx_cls; - - /* @DPDK - typecast tunnul class */ - type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls); - p_tun->vxlan.tun_cls = (enum ecore_tunn_clss)type; - type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls); - p_tun->l2_gre.tun_cls = (enum ecore_tunn_clss)type; - type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls); - p_tun->ip_gre.tun_cls = (enum ecore_tunn_clss)type; - type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls); - p_tun->l2_geneve.tun_cls = (enum ecore_tunn_clss)type; - type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls); - p_tun->ip_geneve.tun_cls = (enum ecore_tunn_clss)type; + p_tun->b_update_rx_cls = p_src->b_update_rx_cls; + p_tun->b_update_tx_cls = p_src->b_update_tx_cls; + + p_tun->vxlan.tun_cls = p_src->vxlan.tun_cls; + p_tun->l2_gre.tun_cls = p_src->l2_gre.tun_cls; + p_tun->ip_gre.tun_cls = p_src->ip_gre.tun_cls; + p_tun->l2_geneve.tun_cls = p_src->l2_geneve.tun_cls; + p_tun->ip_geneve.tun_cls = p_src->ip_geneve.tun_cls; } static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun, @@ -166,7 +172,7 @@ static void __ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls, struct ecore_tunn_update_type *tun_type) { - *p_tunn_cls = tun_type->tun_cls; + *p_tunn_cls = ecore_tunn_clss_to_fw_clss(tun_type->tun_cls); } static void @@ -218,7 +224,7 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn *p_hwfn, } static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, + struct ecore_ptt *p_ptt, struct ecore_tunnel_info *p_tun) { ecore_set_gre_enable(p_hwfn, p_ptt, 
p_tun->l2_gre.b_mode_enabled, @@ -251,9 +257,9 @@ static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn, } static void -ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn, +ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn, struct ecore_tunnel_info *p_src, - struct pf_start_tunnel_config *p_tunn_cfg) + struct pf_start_tunnel_config *p_tunn_cfg) { struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel; @@ -292,9 +298,6 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn, &p_tun->ip_gre); } -#define ETH_P_8021Q 0x8100 -#define ETH_P_8021AD 0x88A8 /* 802.1ad Service VLAN */ - enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_tunnel_info *p_tunn, @@ -321,7 +324,8 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn, rc = ecore_sp_init_request(p_hwfn, &p_ent, COMMON_RAMROD_PF_START, - PROTOCOLID_COMMON, &init_data); + PROTOCOLID_COMMON, + &init_data); if (rc != ECORE_SUCCESS) return rc; @@ -335,18 +339,18 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn, p_ramrod->dont_log_ramrods = 0; p_ramrod->log_type_mask = OSAL_CPU_TO_LE16(0x8f); - if (OSAL_GET_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits)) + if (OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits)) p_ramrod->mf_mode = MF_OVLAN; else p_ramrod->mf_mode = MF_NPAR; p_ramrod->outer_tag_config.outer_tag.tci = OSAL_CPU_TO_LE16(p_hwfn->hw_info.ovlan); - if (OSAL_GET_BIT(ECORE_MF_8021Q_TAGGING, &p_hwfn->p_dev->mf_bits)) { - p_ramrod->outer_tag_config.outer_tag.tpid = ETH_P_8021Q; - } else if (OSAL_GET_BIT(ECORE_MF_8021AD_TAGGING, + if (OSAL_TEST_BIT(ECORE_MF_8021Q_TAGGING, &p_hwfn->p_dev->mf_bits)) { + p_ramrod->outer_tag_config.outer_tag.tpid = ECORE_ETH_P_8021Q; + } else if (OSAL_TEST_BIT(ECORE_MF_8021AD_TAGGING, &p_hwfn->p_dev->mf_bits)) { - p_ramrod->outer_tag_config.outer_tag.tpid = ETH_P_8021AD; + p_ramrod->outer_tag_config.outer_tag.tpid = ECORE_ETH_P_8021AD; p_ramrod->outer_tag_config.enable_stag_pri_change = 1; } @@ -357,7 +361,7 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn, /* enable_stag_pri_change should be set if port is in BD mode or, * UFP with Host Control mode. 
*/ - if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) { + if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) { if (p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS) p_ramrod->outer_tag_config.enable_stag_pri_change = 1; else @@ -378,7 +382,7 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn, ecore_tunn_set_pf_start_params(p_hwfn, p_tunn, &p_ramrod->tunnel_config); - if (OSAL_GET_BIT(ECORE_MF_INTER_PF_SWITCH, + if (OSAL_TEST_BIT(ECORE_MF_INTER_PF_SWITCH, &p_hwfn->p_dev->mf_bits)) p_ramrod->allow_npar_tx_switching = allow_npar_tx_switch; @@ -386,9 +390,20 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn, case ECORE_PCI_ETH: p_ramrod->personality = PERSONALITY_ETH; break; + case ECORE_PCI_FCOE: + p_ramrod->personality = PERSONALITY_FCOE; + break; + case ECORE_PCI_ISCSI: + p_ramrod->personality = PERSONALITY_ISCSI; + break; + case ECORE_PCI_ETH_IWARP: + case ECORE_PCI_ETH_ROCE: + case ECORE_PCI_ETH_RDMA: + p_ramrod->personality = PERSONALITY_RDMA_AND_ETH; + break; default: DP_NOTICE(p_hwfn, true, "Unknown personality %d\n", - p_hwfn->hw_info.personality); + p_hwfn->hw_info.personality); p_ramrod->personality = PERSONALITY_ETH; } @@ -448,6 +463,12 @@ enum _ecore_status_t ecore_sp_pf_update_ufp(struct ecore_hwfn *p_hwfn) struct ecore_sp_init_data init_data; enum _ecore_status_t rc = ECORE_NOTIMPL; + if (p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_UNKNOWN) { + DP_INFO(p_hwfn, "Invalid priority type %d\n", + p_hwfn->ufp_info.pri_type); + return ECORE_INVAL; + } + /* Get SPQ entry */ OSAL_MEMSET(&init_data, 0, sizeof(init_data)); init_data.cid = ecore_spq_get_cid(p_hwfn); @@ -476,12 +497,12 @@ enum _ecore_status_t ecore_sp_pf_update_ufp(struct ecore_hwfn *p_hwfn) /* FW uses 1/64k to express gd */ #define FW_GD_RESOLUTION(gd) (64 * 1024 / (gd)) -u16 ecore_sp_rl_mb_to_qm(u32 mb_val) +static u16 ecore_sp_rl_mb_to_qm(u32 mb_val) { return (u16)OSAL_MIN_T(u32, (u16)(~0U), QM_RL_RESOLUTION(mb_val)); } -u16 ecore_sp_rl_gd_denom(u32 gd) +static u16 ecore_sp_rl_gd_denom(u32 gd) { return gd ? 
(u16)OSAL_MIN_T(u32, (u16)(~0U), FW_GD_RESOLUTION(gd)) : 0; } @@ -516,32 +537,20 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn, rl_update->rl_id_first = params->rl_id_first; rl_update->rl_id_last = params->rl_id_last; rl_update->rl_dc_qcn_flg = params->rl_dc_qcn_flg; - rl_update->dcqcn_reset_alpha_on_idle = - params->dcqcn_reset_alpha_on_idle; - rl_update->rl_bc_stage_th = params->rl_bc_stage_th; - rl_update->rl_timer_stage_th = params->rl_timer_stage_th; rl_update->rl_bc_rate = OSAL_CPU_TO_LE32(params->rl_bc_rate); - rl_update->rl_max_rate = - OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_max_rate)); - rl_update->rl_r_ai = - OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_ai)); - rl_update->rl_r_hai = - OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_hai)); - rl_update->dcqcn_g = - OSAL_CPU_TO_LE16(ecore_sp_rl_gd_denom(params->dcqcn_gd)); + rl_update->rl_max_rate = OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_max_rate)); + rl_update->rl_r_ai = OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_ai)); + rl_update->rl_r_hai = OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_hai)); + rl_update->dcqcn_g = OSAL_CPU_TO_LE16(ecore_sp_rl_gd_denom(params->dcqcn_gd)); rl_update->dcqcn_k_us = OSAL_CPU_TO_LE32(params->dcqcn_k_us); - rl_update->dcqcn_timeuot_us = - OSAL_CPU_TO_LE32(params->dcqcn_timeuot_us); + rl_update->dcqcn_timeuot_us = OSAL_CPU_TO_LE32(params->dcqcn_timeuot_us); rl_update->qcn_timeuot_us = OSAL_CPU_TO_LE32(params->qcn_timeuot_us); - DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "rl_params: qcn_update_param_flg %x, dcqcn_update_param_flg %x, rl_init_flg %x, rl_start_flg %x, rl_stop_flg %x, rl_id_first %x, rl_id_last %x, rl_dc_qcn_flg %x,dcqcn_reset_alpha_on_idle %x, rl_bc_stage_th %x, rl_timer_stage_th %x, rl_bc_rate %x, rl_max_rate %x, rl_r_ai %x, rl_r_hai %x, dcqcn_g %x, dcqcn_k_us %x, dcqcn_timeuot_us %x, qcn_timeuot_us %x\n", - rl_update->qcn_update_param_flg, - rl_update->dcqcn_update_param_flg, + DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "rl_params: qcn_update_param_flg %x, dcqcn_update_param_flg %x, rl_init_flg %x, rl_start_flg %x, rl_stop_flg %x, rl_id_first %x, rl_id_last %x, rl_dc_qcn_flg %x, rl_bc_rate %x, rl_max_rate %x, rl_r_ai %x, rl_r_hai %x, dcqcn_g %x, dcqcn_k_us %x, dcqcn_timeuot_us %x, qcn_timeuot_us %x\n", + rl_update->qcn_update_param_flg, rl_update->dcqcn_update_param_flg, rl_update->rl_init_flg, rl_update->rl_start_flg, rl_update->rl_stop_flg, rl_update->rl_id_first, rl_update->rl_id_last, rl_update->rl_dc_qcn_flg, - rl_update->dcqcn_reset_alpha_on_idle, - rl_update->rl_bc_stage_th, rl_update->rl_timer_stage_th, rl_update->rl_bc_rate, rl_update->rl_max_rate, rl_update->rl_r_ai, rl_update->rl_r_hai, rl_update->dcqcn_g, rl_update->dcqcn_k_us, @@ -638,10 +647,6 @@ enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn) if (rc != ECORE_SUCCESS) return rc; - if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) - p_ent->ramrod.pf_update.mf_vlan |= - OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13)); - return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL); } @@ -664,8 +669,11 @@ enum _ecore_status_t ecore_sp_pf_update_stag(struct ecore_hwfn *p_hwfn) return rc; p_ent->ramrod.pf_update.update_mf_vlan_flag = true; - p_ent->ramrod.pf_update.mf_vlan = - OSAL_CPU_TO_LE16(p_hwfn->hw_info.ovlan); + p_ent->ramrod.pf_update.mf_vlan = OSAL_CPU_TO_LE16(p_hwfn->hw_info.ovlan); + + if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) + p_ent->ramrod.pf_update.mf_vlan |= + OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13)); 
return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL); } diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h index 524fe57a1..4850a60f1 100644 --- a/drivers/net/qede/base/ecore_sp_commands.h +++ b/drivers/net/qede/base/ecore_sp_commands.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_SP_COMMANDS_H__ #define __ECORE_SP_COMMANDS_H__ @@ -28,6 +28,17 @@ struct ecore_sp_init_data { }; +/** + * @brief Returns a SPQ entry to the pool / frees the entry if allocated + * Should be called on in error flows after initializing the SPQ entry + * and before posting it. + * + * @param p_hwfn + * @param p_ent + */ +void ecore_sp_destroy_request(struct ecore_hwfn *p_hwfn, + struct ecore_spq_entry *p_ent); + /** * @brief Acquire and initialize and SPQ entry for a given ramrod. * @@ -119,9 +130,6 @@ struct ecore_rl_update_params { u8 rl_stop_flg; u8 rl_id_first; u8 rl_id_last; - u8 dcqcn_reset_alpha_on_idle; - u8 rl_bc_stage_th; - u8 rl_timer_stage_th; u8 rl_dc_qcn_flg; /* If set, RL will used for DCQCN */ u32 rl_bc_rate; /* Byte Counter Limit */ u32 rl_max_rate; /* Maximum rate in Mbps resolution */ diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c index 93e984bf8..47c8a4e90 100644 --- a/drivers/net/qede/base/ecore_spq.c +++ b/drivers/net/qede/base/ecore_spq.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "reg_addr.h" #include "ecore_gtt_reg_addr.h" @@ -20,6 +20,12 @@ #include "ecore_hw.h" #include "ecore_sriov.h" +#ifdef _NTDDK_ +#pragma warning(push) +#pragma warning(disable : 28167) +#pragma warning(disable : 28123) +#endif + /*************************************************************************** * Structures & Definitions ***************************************************************************/ @@ -88,6 +94,7 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn, u8 *p_fw_ret, bool skip_quick_poll) { struct ecore_spq_comp_done *comp_done; + u8 str[ECORE_HW_ERR_MAX_STR_SIZE]; struct ecore_ptt *p_ptt; enum _ecore_status_t rc; @@ -97,19 +104,20 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn, if (!skip_quick_poll) { rc = __ecore_spq_block(p_hwfn, p_ent, p_fw_ret, false); if (rc == ECORE_SUCCESS) - return ECORE_SUCCESS; + return rc; } /* Move to polling with a sleeping period between iterations */ rc = __ecore_spq_block(p_hwfn, p_ent, p_fw_ret, true); if (rc == ECORE_SUCCESS) - return ECORE_SUCCESS; + return rc; + + DP_INFO(p_hwfn, "Ramrod is stuck, requesting MCP drain\n"); p_ptt = ecore_ptt_acquire(p_hwfn); if (!p_ptt) - return ECORE_AGAIN; + goto err; - DP_INFO(p_hwfn, "Ramrod is stuck, requesting MCP drain\n"); rc = ecore_mcp_drain(p_hwfn, p_ptt); ecore_ptt_release(p_hwfn, p_ptt); if (rc != ECORE_SUCCESS) { @@ -120,24 +128,34 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn, /* Retry after drain */ rc = __ecore_spq_block(p_hwfn, p_ent, p_fw_ret, true); if (rc == ECORE_SUCCESS) - return ECORE_SUCCESS; + return rc; comp_done = (struct ecore_spq_comp_done *)p_ent->comp_cb.cookie; if (comp_done->done == 1) { if (p_fw_ret) *p_fw_ret = comp_done->fw_return_code; + return 
ECORE_SUCCESS; } + + rc = ECORE_BUSY; err: - DP_NOTICE(p_hwfn, true, - "Ramrod is stuck [CID %08x cmd %02x proto %02x echo %04x]\n", - OSAL_LE32_TO_CPU(p_ent->elem.hdr.cid), - p_ent->elem.hdr.cmd_id, p_ent->elem.hdr.protocol_id, - OSAL_LE16_TO_CPU(p_ent->elem.hdr.echo)); + OSAL_SNPRINTF((char *)str, ECORE_HW_ERR_MAX_STR_SIZE, + "Ramrod is stuck [CID %08x cmd %02x protocol %02x echo %04x]\n", + OSAL_LE32_TO_CPU(p_ent->elem.hdr.cid), + p_ent->elem.hdr.cmd_id, p_ent->elem.hdr.protocol_id, + OSAL_LE16_TO_CPU(p_ent->elem.hdr.echo)); + DP_NOTICE(p_hwfn, true, "%s", str); + + p_ptt = ecore_ptt_acquire(p_hwfn); + if (!p_ptt) + return rc; - ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_RAMROD_FAIL); + ecore_hw_err_notify(p_hwfn, p_ptt, ECORE_HW_ERR_RAMROD_FAIL, str, + OSAL_STRLEN((char *)str) + 1); + ecore_ptt_release(p_hwfn, p_ptt); - return ECORE_BUSY; + return rc; } void ecore_set_spq_block_timeout(struct ecore_hwfn *p_hwfn, @@ -151,8 +169,8 @@ void ecore_set_spq_block_timeout(struct ecore_hwfn *p_hwfn, /*************************************************************************** * SPQ entries inner API ***************************************************************************/ -static enum _ecore_status_t -ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent) +static enum _ecore_status_t ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, + struct ecore_spq_entry *p_ent) { p_ent->flags = 0; @@ -170,8 +188,7 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent) } DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, - "Ramrod header: [CID 0x%08x CMD 0x%02x protocol 0x%02x]" - " Data pointer: [%08x:%08x] Completion Mode: %s\n", + "Ramrod header: [CID 0x%08x CMD 0x%02x protocol 0x%02x] Data pointer: [%08x:%08x] Completion Mode: %s\n", p_ent->elem.hdr.cid, p_ent->elem.hdr.cmd_id, p_ent->elem.hdr.protocol_id, p_ent->elem.data_ptr.hi, p_ent->elem.data_ptr.lo, @@ -196,7 +213,7 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent) #define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn, - struct ecore_spq *p_spq) + struct ecore_spq *p_spq) { __le32 *p_spq_base_lo, *p_spq_base_hi; struct regpair *p_consolid_base_addr; @@ -265,9 +282,9 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn, p_hwfn->p_consq->chain.p_phys_addr); } -static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn, - struct ecore_spq *p_spq, - struct ecore_spq_entry *p_ent) +static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn, + struct ecore_spq *p_spq, + struct ecore_spq_entry *p_ent) { struct ecore_chain *p_chain = &p_hwfn->p_spq->chain; struct core_db_data *p_db_data = &p_spq->db_data; @@ -281,7 +298,7 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn, return ECORE_INVAL; } - *elem = p_ent->elem; /* Struct assignment */ + *elem = p_ent->elem; /* Struct assignment */ p_db_data->spq_prod = OSAL_CPU_TO_LE16(ecore_chain_get_prod_idx(p_chain)); @@ -291,12 +308,11 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn, DOORBELL(p_hwfn, p_spq->db_addr_offset, *(u32 *)p_db_data); - /* Make sure doorbell is rang */ + /* Make sure doorbell was rung */ OSAL_WMB(p_hwfn->p_dev); DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, - "Doorbelled [0x%08x, CID 0x%08x] with Flags: %02x" - " agg_params: %02x, prod: %04x\n", + "Doorbelled [0x%08x, CID 0x%08x] with Flags: %02x agg_params: %02x, prod: %04x\n", p_spq->db_addr_offset, p_spq->cid, p_db_data->params, 
p_db_data->agg_flags, ecore_chain_get_prod_idx(p_chain)); @@ -307,7 +323,7 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn, * Asynchronous events ***************************************************************************/ -static enum _ecore_status_t +enum _ecore_status_t ecore_async_event_completion(struct ecore_hwfn *p_hwfn, struct event_ring_entry *p_eqe) { @@ -319,6 +335,15 @@ ecore_async_event_completion(struct ecore_hwfn *p_hwfn, return ECORE_INVAL; } + /* This function is called by PF only, and only for VF completions. + * The exceptions which are handled by the PF are PROTOCOLID_COMMON + * and ROCE_ASYNC_EVENT_DESTROY_QP_DONE events. + */ + /* @DPDK */ + if (IS_PF(p_hwfn->p_dev) && p_eqe->vf_id != ECORE_CXT_PF_CID && + p_eqe->protocol_id != PROTOCOLID_COMMON) + return ecore_iov_async_event_completion(p_hwfn, p_eqe); + cb = p_hwfn->p_spq->async_comp_cb[p_eqe->protocol_id]; if (!cb) { DP_NOTICE(p_hwfn, @@ -328,7 +353,7 @@ ecore_async_event_completion(struct ecore_hwfn *p_hwfn, } rc = cb(p_hwfn, p_eqe->opcode, p_eqe->echo, - &p_eqe->data, p_eqe->fw_return_code); + &p_eqe->data, p_eqe->fw_return_code, p_eqe->vf_id); if (rc != ECORE_SUCCESS) DP_NOTICE(p_hwfn, true, "Async completion callback failed, rc = %d [opcode %x, echo %x, fw_return_code %x]", @@ -363,10 +388,11 @@ ecore_spq_unregister_async_cb(struct ecore_hwfn *p_hwfn, /*************************************************************************** * EQ API ***************************************************************************/ -void ecore_eq_prod_update(struct ecore_hwfn *p_hwfn, u16 prod) +void ecore_eq_prod_update(struct ecore_hwfn *p_hwfn, + u16 prod) { u32 addr = GTT_BAR0_MAP_REG_USDM_RAM + - USTORM_EQE_CONS_OFFSET(p_hwfn->rel_pf_id); + USTORM_EQE_CONS_OFFSET(p_hwfn->rel_pf_id); REG_WR16(p_hwfn, addr, prod); @@ -374,13 +400,24 @@ void ecore_eq_prod_update(struct ecore_hwfn *p_hwfn, u16 prod) OSAL_MMIOWB(p_hwfn->p_dev); } -enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn, - void *cookie) +void ecore_event_ring_entry_dp(struct ecore_hwfn *p_hwfn, + struct event_ring_entry *p_eqe) { - struct ecore_eq *p_eq = cookie; + DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, + "op %x prot %x res0 %x vfid %x echo %x fwret %x flags %x\n", + p_eqe->opcode, p_eqe->protocol_id, p_eqe->reserved0, + p_eqe->vf_id, OSAL_LE16_TO_CPU(p_eqe->echo), + p_eqe->fw_return_code, p_eqe->flags); +} + +enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn, + void *cookie) + +{ + struct ecore_eq *p_eq = cookie; struct ecore_chain *p_chain = &p_eq->chain; u16 fw_cons_idx = 0; - enum _ecore_status_t rc = ECORE_SUCCESS; + enum _ecore_status_t rc; if (!p_hwfn->p_spq) { DP_ERR(p_hwfn, "Unexpected NULL p_spq\n"); @@ -409,17 +446,7 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn, break; } - DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, - "op %x prot %x res0 %x echo %x fwret %x flags %x\n", - p_eqe->opcode, /* Event Opcode */ - p_eqe->protocol_id, /* Event Protocol ID */ - p_eqe->reserved0, /* Reserved */ - /* Echo value from ramrod data on the host */ - OSAL_LE16_TO_CPU(p_eqe->echo), - p_eqe->fw_return_code, /* FW return code for SP - * ramrods - */ - p_eqe->flags); + ecore_event_ring_entry_dp(p_hwfn, p_eqe); if (GET_FIELD(p_eqe->flags, EVENT_RING_ENTRY_ASYNC)) ecore_async_event_completion(p_hwfn, p_eqe); @@ -434,6 +461,11 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn, ecore_eq_prod_update(p_hwfn, ecore_chain_get_prod_idx(p_chain)); + /* Attempt to post pending requests */ + 
OSAL_SPIN_LOCK(&p_hwfn->p_spq->lock); + rc = ecore_spq_pend_post(p_hwfn); + OSAL_SPIN_UNLOCK(&p_hwfn->p_spq->lock); + return rc; } @@ -493,8 +525,7 @@ void ecore_eq_free(struct ecore_hwfn *p_hwfn) * CQE API - manipulate EQ functionality ***************************************************************************/ static enum _ecore_status_t ecore_cqe_completion(struct ecore_hwfn *p_hwfn, - struct eth_slow_path_rx_cqe - *cqe, + struct eth_slow_path_rx_cqe *cqe, enum protocol_type protocol) { if (IS_VF(p_hwfn->p_dev)) @@ -556,10 +587,10 @@ void ecore_spq_setup(struct ecore_hwfn *p_hwfn) } /* Statistics */ - p_spq->normal_count = 0; - p_spq->comp_count = 0; - p_spq->comp_sent_count = 0; - p_spq->unlimited_pending_count = 0; + p_spq->normal_count = 0; + p_spq->comp_count = 0; + p_spq->comp_sent_count = 0; + p_spq->unlimited_pending_count = 0; OSAL_MEM_ZERO(p_spq->p_comp_bitmap, SPQ_COMP_BMAP_SIZE * sizeof(unsigned long)); @@ -583,7 +614,8 @@ void ecore_spq_setup(struct ecore_hwfn *p_hwfn) p_db_data->agg_flags = DQ_XCM_CORE_DQ_CF_CMD; /* Register the SPQ doorbell with the doorbell recovery mechanism */ - db_addr = (void *)((u8 *)p_hwfn->doorbells + p_spq->db_addr_offset); + db_addr = (void OSAL_IOMEM *)((u8 OSAL_IOMEM *)p_hwfn->doorbells + + p_spq->db_addr_offset); rc = ecore_db_recovery_add(p_hwfn->p_dev, db_addr, &p_spq->db_data, DB_REC_WIDTH_32B, DB_REC_KERNEL); if (rc != ECORE_SUCCESS) @@ -630,7 +662,7 @@ enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn) p_spq->p_phys = p_phys; #ifdef CONFIG_ECORE_LOCK_ALLOC - if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_spq->lock)) + if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_spq->lock, "spq_lock")) goto spq_allocate_fail; #endif @@ -652,8 +684,12 @@ void ecore_spq_free(struct ecore_hwfn *p_hwfn) if (!p_spq) return; + if (IS_VF(p_hwfn->p_dev)) + goto out; + /* Delete the SPQ doorbell from the doorbell recovery mechanism */ - db_addr = (void *)((u8 *)p_hwfn->doorbells + p_spq->db_addr_offset); + db_addr = (void OSAL_IOMEM *)((u8 OSAL_IOMEM *)p_hwfn->doorbells + + p_spq->db_addr_offset); ecore_db_recovery_del(p_hwfn->p_dev, db_addr, &p_spq->db_data); if (p_spq->p_virt) { @@ -669,12 +705,13 @@ void ecore_spq_free(struct ecore_hwfn *p_hwfn) #ifdef CONFIG_ECORE_LOCK_ALLOC OSAL_SPIN_LOCK_DEALLOC(&p_spq->lock); #endif - +out: OSAL_FREE(p_hwfn->p_dev, p_spq); + p_hwfn->p_spq = OSAL_NULL; } -enum _ecore_status_t -ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent) +enum _ecore_status_t ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, + struct ecore_spq_entry **pp_ent) { struct ecore_spq *p_spq = p_hwfn->p_spq; struct ecore_spq_entry *p_ent = OSAL_NULL; @@ -692,7 +729,8 @@ ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent) p_ent->queue = &p_spq->unlimited_pending; } else { p_ent = OSAL_LIST_FIRST_ENTRY(&p_spq->free_pool, - struct ecore_spq_entry, list); + struct ecore_spq_entry, + list); OSAL_LIST_REMOVE_ENTRY(&p_ent->list, &p_spq->free_pool); p_ent->queue = &p_spq->pending; } @@ -706,7 +744,7 @@ ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent) /* Locked variant; Should be called while the SPQ lock is taken */ static void __ecore_spq_return_entry(struct ecore_hwfn *p_hwfn, - struct ecore_spq_entry *p_ent) + struct ecore_spq_entry *p_ent) { OSAL_LIST_PUSH_TAIL(&p_ent->list, &p_hwfn->p_spq->free_pool); } @@ -733,11 +771,11 @@ void ecore_spq_return_entry(struct ecore_hwfn *p_hwfn, * * @return enum _ecore_status_t */ -static enum _ecore_status_t -ecore_spq_add_entry(struct ecore_hwfn 
*p_hwfn, - struct ecore_spq_entry *p_ent, enum spq_priority priority) +static enum _ecore_status_t ecore_spq_add_entry(struct ecore_hwfn *p_hwfn, + struct ecore_spq_entry *p_ent, + enum spq_priority priority) { - struct ecore_spq *p_spq = p_hwfn->p_spq; + struct ecore_spq *p_spq = p_hwfn->p_spq; if (p_ent->queue == &p_spq->unlimited_pending) { if (OSAL_LIST_IS_EMPTY(&p_spq->free_pool)) { @@ -766,6 +804,8 @@ ecore_spq_add_entry(struct ecore_hwfn *p_hwfn, /* EBLOCK responsible to free the allocated p_ent */ if (p_ent->comp_mode != ECORE_SPQ_MODE_EBLOCK) OSAL_FREE(p_hwfn->p_dev, p_ent); + else + p_ent->post_ent = p_en2; p_ent = p_en2; } @@ -804,11 +844,11 @@ u32 ecore_spq_get_cid(struct ecore_hwfn *p_hwfn) ***************************************************************************/ static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn, - osal_list_t *head, - u32 keep_reserve) + osal_list_t *head, + u32 keep_reserve) { - struct ecore_spq *p_spq = p_hwfn->p_spq; - enum _ecore_status_t rc; + struct ecore_spq *p_spq = p_hwfn->p_spq; + enum _ecore_status_t rc; /* TODO - implementation might be wasteful; will always keep room * for an additional high priority ramrod (even if one is already @@ -816,21 +856,20 @@ static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn, */ while (ecore_chain_get_elem_left(&p_spq->chain) > keep_reserve && !OSAL_LIST_IS_EMPTY(head)) { - struct ecore_spq_entry *p_ent = + struct ecore_spq_entry *p_ent = OSAL_LIST_FIRST_ENTRY(head, struct ecore_spq_entry, list); if (p_ent != OSAL_NULL) { #if defined(_NTDDK_) #pragma warning(suppress : 6011 28182) #endif OSAL_LIST_REMOVE_ENTRY(&p_ent->list, head); - OSAL_LIST_PUSH_TAIL(&p_ent->list, - &p_spq->completion_pending); + OSAL_LIST_PUSH_TAIL(&p_ent->list, &p_spq->completion_pending); p_spq->comp_sent_count++; rc = ecore_spq_hw_post(p_hwfn, p_spq, p_ent); if (rc) { OSAL_LIST_REMOVE_ENTRY(&p_ent->list, - &p_spq->completion_pending); + &p_spq->completion_pending); __ecore_spq_return_entry(p_hwfn, p_ent); return rc; } @@ -840,7 +879,7 @@ static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -static enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn) +enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn) { struct ecore_spq *p_spq = p_hwfn->p_spq; struct ecore_spq_entry *p_ent = OSAL_NULL; @@ -850,7 +889,8 @@ static enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn) break; p_ent = OSAL_LIST_FIRST_ENTRY(&p_spq->unlimited_pending, - struct ecore_spq_entry, list); + struct ecore_spq_entry, + list); if (!p_ent) return ECORE_INVAL; @@ -862,17 +902,43 @@ static enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn) ecore_spq_add_entry(p_hwfn, p_ent, p_ent->priority); } - return ecore_spq_post_list(p_hwfn, - &p_spq->pending, SPQ_HIGH_PRI_RESERVE_DEFAULT); + return ecore_spq_post_list(p_hwfn, &p_spq->pending, + SPQ_HIGH_PRI_RESERVE_DEFAULT); +} + +static void ecore_spq_recov_set_ret_code(struct ecore_spq_entry *p_ent, + u8 *fw_return_code) +{ + if (fw_return_code == OSAL_NULL) + return; } -enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn, - struct ecore_spq_entry *p_ent, - u8 *fw_return_code) +/* Avoid overriding of SPQ entries when getting out-of-order completions, by + * marking the completions in a bitmap and increasing the chain consumer only + * for the first successive completed entries. 
+ */ +static void ecore_spq_comp_bmap_update(struct ecore_hwfn *p_hwfn, __le16 echo) +{ + struct ecore_spq *p_spq = p_hwfn->p_spq; + + SPQ_COMP_BMAP_SET_BIT(p_spq, echo); + while (SPQ_COMP_BMAP_TEST_BIT(p_spq, + p_spq->comp_bitmap_idx)) { + SPQ_COMP_BMAP_CLEAR_BIT(p_spq, + p_spq->comp_bitmap_idx); + p_spq->comp_bitmap_idx++; + ecore_chain_return_produced(&p_spq->chain); + } +} + +enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn, + struct ecore_spq_entry *p_ent, + u8 *fw_return_code) { enum _ecore_status_t rc = ECORE_SUCCESS; struct ecore_spq *p_spq = p_hwfn ? p_hwfn->p_spq : OSAL_NULL; bool b_ret_ent = true; + bool eblock; if (!p_hwfn) return ECORE_INVAL; @@ -884,12 +950,11 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn, if (p_hwfn->p_dev->recov_in_prog) { DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, - "Recovery is in progress -> skip spq post" - " [cmd %02x protocol %02x]\n", + "Recovery is in progress. Skip spq post [cmd %02x protocol %02x]\n", p_ent->elem.hdr.cmd_id, p_ent->elem.hdr.protocol_id); + + /* Let the flow complete w/o any error handling */ + ecore_spq_recov_set_ret_code(p_ent, fw_return_code); return ECORE_SUCCESS; } @@ -902,6 +967,11 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn, if (rc) goto spq_post_fail; + /* Check if entry is in block mode before ecore_spq_add_entry, + * which might OSAL_FREE p_ent. + */ + eblock = (p_ent->comp_mode == ECORE_SPQ_MODE_EBLOCK); + /* Add the request to the pending queue */ rc = ecore_spq_add_entry(p_hwfn, p_ent, p_ent->priority); if (rc) @@ -919,7 +989,7 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn, OSAL_SPIN_UNLOCK(&p_spq->lock); - if (p_ent->comp_mode == ECORE_SPQ_MODE_EBLOCK) { + if (eblock) { /* For entries in ECORE BLOCK mode, the completion code cannot * perform the necessary cleanup - if it did, we couldn't * access p_ent here to see whether it's successful or not. @@ -929,15 +999,11 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn, p_ent->queue == &p_spq->unlimited_pending); if (p_ent->queue == &p_spq->unlimited_pending) {
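The bitmap bookkeeping performed by ecore_spq_comp_bmap_update() above is easiest to see in isolation: completions may arrive out of order, each one is marked in a per-slot bitmap, and ring credits are returned only for the contiguous run of completed slots starting at the consumer index. Below is a minimal, self-contained model of that technique; comp_track, comp_bmap_update() and RING_SIZE are illustrative names for this sketch only, not driver symbols.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 64			/* ring slots, power of two */
#define BMAP_WORDS (RING_SIZE / 32)

struct comp_track {
	uint32_t bmap[BMAP_WORDS];	/* one bit per ring slot */
	unsigned int cons_idx;		/* next slot expected to complete */
	unsigned int credits;		/* slots handed back to the producer */
};

static void bmap_set(struct comp_track *t, unsigned int idx)
{
	idx %= RING_SIZE;
	t->bmap[idx / 32] |= 1u << (idx % 32);
}

static int bmap_test(struct comp_track *t, unsigned int idx)
{
	idx %= RING_SIZE;
	return !!(t->bmap[idx / 32] & (1u << (idx % 32)));
}

static void bmap_clear(struct comp_track *t, unsigned int idx)
{
	idx %= RING_SIZE;
	t->bmap[idx / 32] &= ~(1u << (idx % 32));
}

/* Mark 'echo' as completed, then advance the consumer over the
 * contiguous run of completed slots only; a completion that arrives
 * ahead of a gap stays marked until the gap before it completes, so
 * ring entries are never recycled early.
 */
static void comp_bmap_update(struct comp_track *t, unsigned int echo)
{
	bmap_set(t, echo);
	while (bmap_test(t, t->cons_idx)) {
		bmap_clear(t, t->cons_idx);
		t->cons_idx++;
		t->credits++;	/* ecore_chain_return_produced() analogue */
	}
}

int main(void)
{
	struct comp_track t = {{0}, 0, 0};

	comp_bmap_update(&t, 1);	/* out of order: slot 0 still pending */
	printf("after echo 1: credits=%u\n", t.credits);	/* 0 */
	comp_bmap_update(&t, 0);	/* fills the gap, releases slots 0 and 1 */
	printf("after echo 0: credits=%u\n", t.credits);	/* 2 */
	return 0;
}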
- /* This is an allocated p_ent which does not need to - * return to pool. - */ + struct ecore_spq_entry *p_post_ent = p_ent->post_ent; OSAL_FREE(p_hwfn->p_dev, p_ent); - /* TBD: handle error flow and remove p_ent from - * completion pending - */ - return rc; + /* Return the entry which was actually posted */ + p_ent = p_post_ent; } if (rc) @@ -951,7 +1017,7 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn, spq_post_fail2: OSAL_SPIN_LOCK(&p_spq->lock); OSAL_LIST_REMOVE_ENTRY(&p_ent->list, &p_spq->completion_pending); - ecore_chain_return_produced(&p_spq->chain); + ecore_spq_comp_bmap_update(p_hwfn, p_ent->elem.hdr.echo); spq_post_fail: /* return to the free pool */ @@ -965,13 +1031,12 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn, __le16 echo, u8 fw_return_code, - union event_ring_data *p_data) + union event_ring_data *p_data) { - struct ecore_spq *p_spq; - struct ecore_spq_entry *p_ent = OSAL_NULL; - struct ecore_spq_entry *tmp; - struct ecore_spq_entry *found = OSAL_NULL; - enum _ecore_status_t rc; + struct ecore_spq *p_spq; + struct ecore_spq_entry *p_ent = OSAL_NULL; + struct ecore_spq_entry *tmp; + struct ecore_spq_entry *found = OSAL_NULL; p_spq = p_hwfn->p_spq; if (!p_spq) { @@ -983,25 +1048,26 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn, OSAL_LIST_FOR_EACH_ENTRY_SAFE(p_ent, tmp, &p_spq->completion_pending, - list, struct ecore_spq_entry) { + list, + struct ecore_spq_entry) { if (p_ent->elem.hdr.echo == echo) { +#ifndef ECORE_UPSTREAM + /* Ignoring completions is only supported for EBLOCK */ + if (p_spq->drop_next_completion && + p_ent->comp_mode == ECORE_SPQ_MODE_EBLOCK) { + DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, + "Got completion for echo %04x - ignoring\n", + echo); + p_spq->drop_next_completion = false; + OSAL_SPIN_UNLOCK(&p_spq->lock); + return ECORE_SUCCESS; + } +#endif + OSAL_LIST_REMOVE_ENTRY(&p_ent->list, &p_spq->completion_pending); - /* Avoid overriding of SPQ entries when getting - * out-of-order completions, by marking the completions - * in a bitmap and increasing the chain consumer only - * for the first successive completed entries. - */ - SPQ_COMP_BMAP_SET_BIT(p_spq, echo); - while (SPQ_COMP_BMAP_GET_BIT(p_spq, - p_spq->comp_bitmap_idx)) { - SPQ_COMP_BMAP_CLEAR_BIT(p_spq, - p_spq->comp_bitmap_idx); - p_spq->comp_bitmap_idx++; - ecore_chain_return_produced(&p_spq->chain); - } - + ecore_spq_comp_bmap_update(p_hwfn, echo); p_spq->comp_count++; found = p_ent; break; @@ -1011,8 +1077,7 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn, * on scenarios which have multiple per-PF sent ramrods.
*/ DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, - "Got completion for echo %04x - doesn't match" - " echo %04x in completion pending list\n", + "Got completion for echo %04x - doesn't match echo %04x in completion pending list\n", OSAL_LE16_TO_CPU(echo), OSAL_LE16_TO_CPU(p_ent->elem.hdr.echo)); } @@ -1024,8 +1089,7 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn, if (!found) { DP_NOTICE(p_hwfn, true, - "Failed to find an entry this" - " EQE [echo %04x] completes\n", + "Failed to find an entry this EQE [echo %04x] completes\n", OSAL_LE16_TO_CPU(echo)); return ECORE_EXISTS; } @@ -1038,32 +1102,28 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn, found->comp_cb.function(p_hwfn, found->comp_cb.cookie, p_data, fw_return_code); else - DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, - "Got a completion without a callback function\n"); + DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Got a completion without a callback function\n"); - if ((found->comp_mode != ECORE_SPQ_MODE_EBLOCK) || - (found->queue == &p_spq->unlimited_pending)) - /* EBLOCK is responsible for returning its own entry into the - * free list, unless it originally added the entry into the - * unlimited pending list. - */ + /* EBLOCK is responsible for returning its own entry */ + if (found->comp_mode != ECORE_SPQ_MODE_EBLOCK) ecore_spq_return_entry(p_hwfn, found); - /* Attempt to post pending requests */ - OSAL_SPIN_LOCK(&p_spq->lock); - rc = ecore_spq_pend_post(p_hwfn); - OSAL_SPIN_UNLOCK(&p_spq->lock); + return ECORE_SUCCESS; +} - return rc; +#ifndef ECORE_UPSTREAM +void ecore_spq_drop_next_completion(struct ecore_hwfn *p_hwfn) +{ + p_hwfn->p_spq->drop_next_completion = true; } +#endif enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn) { struct ecore_consq *p_consq; /* Allocate ConsQ struct */ - p_consq = - OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_consq)); + p_consq = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_consq)); if (!p_consq) { DP_NOTICE(p_hwfn, false, "Failed to allocate `struct ecore_consq'\n"); @@ -1101,5 +1161,11 @@ void ecore_consq_free(struct ecore_hwfn *p_hwfn) return; ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_consq->chain); + OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_consq); + p_hwfn->p_consq = OSAL_NULL; } + +#ifdef _NTDDK_ +#pragma warning(pop) +#endif diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h index 80dbdeaa0..72a0e1512 100644 --- a/drivers/net/qede/base/ecore_spq.h +++ b/drivers/net/qede/base/ecore_spq.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. 
- * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_SPQ_H__ #define __ECORE_SPQ_H__ @@ -72,8 +72,13 @@ struct ecore_spq_entry { enum spq_mode comp_mode; struct ecore_spq_comp_cb comp_cb; struct ecore_spq_comp_done comp_done; /* SPQ_MODE_EBLOCK */ + + /* Posted entry for unlimited list entry in EBLOCK mode */ + struct ecore_spq_entry *post_ent; }; +#define ECORE_EQ_MAX_ELEMENTS 0xFF00 + struct ecore_eq { struct ecore_chain chain; u8 eq_sb_index; /* index within the SB */ @@ -89,7 +94,8 @@ typedef enum _ecore_status_t u8 opcode, u16 echo, union event_ring_data *data, - u8 fw_return_code); + u8 fw_return_code, + u8 vf_id); enum _ecore_status_t ecore_spq_register_async_cb(struct ecore_hwfn *p_hwfn, @@ -120,18 +126,19 @@ struct ecore_spq { /* Bitmap for handling out-of-order completions */ #define SPQ_RING_SIZE \ (CORE_SPQE_PAGE_SIZE_BYTES / sizeof(struct slow_path_element)) -/* BITS_PER_LONG */ -#define SPQ_COMP_BMAP_SIZE (SPQ_RING_SIZE / (sizeof(u32) * 8)) - u32 p_comp_bitmap[SPQ_COMP_BMAP_SIZE]; +#define SPQ_COMP_BMAP_SIZE (SPQ_RING_SIZE / (sizeof(u32) * 8 /* BITS_PER_LONG */)) /* @DPDK */ + u32 p_comp_bitmap[SPQ_COMP_BMAP_SIZE]; /* @DPDK */ u8 comp_bitmap_idx; #define SPQ_COMP_BMAP_SET_BIT(p_spq, idx) \ - (OSAL_SET_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap)) - + do { \ + OSAL_NON_ATOMIC_SET_BIT(((idx) % SPQ_RING_SIZE), \ + (p_spq)->p_comp_bitmap); \ + } while (0) +/* @DPDK */ #define SPQ_COMP_BMAP_CLEAR_BIT(p_spq, idx) \ (OSAL_CLEAR_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap)) - -#define SPQ_COMP_BMAP_GET_BIT(p_spq, idx) \ - (OSAL_GET_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap)) +#define SPQ_COMP_BMAP_TEST_BIT(p_spq, idx) \ + (OSAL_TEST_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap)) /* Statistics */ u32 unlimited_pending_count; @@ -145,6 +152,10 @@ struct ecore_spq { u32 db_addr_offset; struct core_db_data db_data; ecore_spq_async_comp_cb async_comp_cb[MAX_PROTOCOL_TYPE]; + +#ifndef ECORE_UPSTREAM + bool drop_next_completion; +#endif }; struct ecore_port; @@ -152,7 +163,7 @@ struct ecore_hwfn; /** * @brief ecore_set_spq_block_timeout - calculates the maximum sleep - * iterations used in __ecore_spq_block(); + iterations used in __ecore_spq_block(); * * @param p_hwfn * @param spq_timeout_ms @@ -278,6 +289,15 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn, u8 fw_return_code, union event_ring_data *p_data); +#ifndef ECORE_UPSTREAM +/** + * @brief ecore_spq_drop_next_completion - Next SPQ completion will be ignored + * + * @param p_hwfn + */ +void ecore_spq_drop_next_completion(struct ecore_hwfn *p_hwfn); +#endif + /** * @brief ecore_spq_get_cid - Given p_hwfn, return cid for the hwfn's SPQ * @@ -309,5 +329,24 @@ void ecore_consq_setup(struct ecore_hwfn *p_hwfn); * @param p_hwfn */ void ecore_consq_free(struct ecore_hwfn *p_hwfn); +enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn); +/** + * @brief ecore_event_ring_entry_dp - print ring entry info + * + * @param p_hwfn + * @param p_eqe + */ +void ecore_event_ring_entry_dp(struct ecore_hwfn *p_hwfn, + struct event_ring_entry *p_eqe); + +/** + * @brief ecore_async_event_completion - handle EQ completions + * + * @param p_hwfn + * @param p_eqe + */ +enum _ecore_status_t +ecore_async_event_completion(struct ecore_hwfn *p_hwfn, + struct event_ring_entry *p_eqe); #endif /* __ECORE_SPQ_H__ */ diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c index b49465b88..0d768c525 100644 --- a/drivers/net/qede/base/ecore_sriov.c +++ 
b/drivers/net/qede/base/ecore_sriov.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "ecore.h" #include "reg_addr.h" @@ -29,10 +29,11 @@ static enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn, u8 opcode, __le16 echo, union event_ring_data *data, - u8 fw_return_code); + u8 OSAL_UNUSED fw_return_code, + u8 OSAL_UNUSED vf_id); -const char *qede_ecore_channel_tlvs_string[] = { - "CHANNEL_TLV_NONE", /* ends tlv sequence */ +const char *ecore_channel_tlvs_string[] = { + "CHANNEL_TLV_NONE", /* ends tlv sequence */ "CHANNEL_TLV_ACQUIRE", "CHANNEL_TLV_VPORT_START", "CHANNEL_TLV_VPORT_UPDATE", @@ -78,9 +79,6 @@ const char *qede_ecore_channel_tlvs_string[] = { "CHANNEL_TLV_RDMA_MODIFY_QP", "CHANNEL_TLV_RDMA_QUERY_QP", "CHANNEL_TLV_RDMA_DESTROY_QP", - "CHANNEL_TLV_RDMA_CREATE_SRQ", - "CHANNEL_TLV_RDMA_MODIFY_SRQ", - "CHANNEL_TLV_RDMA_DESTROY_SRQ", "CHANNEL_TLV_RDMA_QUERY_PORT", "CHANNEL_TLV_RDMA_QUERY_DEVICE", "CHANNEL_TLV_RDMA_IWARP_CONNECT", @@ -93,10 +91,52 @@ const char *qede_ecore_channel_tlvs_string[] = { "CHANNEL_TLV_ESTABLISH_LL2_CONN", "CHANNEL_TLV_TERMINATE_LL2_CONN", "CHANNEL_TLV_ASYNC_EVENT", + "CHANNEL_TLV_RDMA_CREATE_SRQ", + "CHANNEL_TLV_RDMA_MODIFY_SRQ", + "CHANNEL_TLV_RDMA_DESTROY_SRQ", "CHANNEL_TLV_SOFT_FLR", "CHANNEL_TLV_MAX" }; +enum _ecore_status_t ecore_iov_pci_enable_prolog(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u8 max_active_vfs) +{ + if (IS_LEAD_HWFN(p_hwfn) && + max_active_vfs > p_hwfn->p_dev->p_iov_info->total_vfs) { + DP_ERR(p_hwfn, + "max_active_vfs %u is greater than the total VFs supported by the PF (%u)\n", + max_active_vfs, p_hwfn->p_dev->p_iov_info->total_vfs); + return ECORE_INVAL; + } + + if (p_hwfn->pf_iov_info->max_active_vfs) { + DP_ERR(p_hwfn, + "max_active_vfs has already been set to %u.
In order to set a new value, it is required to unload all the VFs and call disable_epilog\n", + p_hwfn->pf_iov_info->max_active_vfs); + return ECORE_INVAL; + } + + p_hwfn->pf_iov_info->max_active_vfs = max_active_vfs; + + return ecore_qm_reconf(p_hwfn, p_ptt); +} + +enum _ecore_status_t ecore_iov_pci_disable_epilog(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + if (IS_LEAD_HWFN(p_hwfn) && p_hwfn->p_dev->p_iov_info->num_vfs) { + DP_ERR(p_hwfn, + "There are still loaded VFs (%u) - can't re-configure the QM\n", + p_hwfn->p_dev->p_iov_info->num_vfs); + return ECORE_INVAL; + } + + p_hwfn->pf_iov_info->max_active_vfs = 0; + + return ecore_qm_reconf(p_hwfn, p_ptt); +} + static u8 ecore_vf_calculate_legacy(struct ecore_vf_info *p_vf) { u8 legacy = 0; @@ -150,6 +190,7 @@ static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn, default: DP_NOTICE(p_hwfn, true, "Unknown VF personality %d\n", p_hwfn->hw_info.personality); + ecore_sp_destroy_request(p_hwfn, p_ent); return ECORE_INVAL; } @@ -157,9 +198,7 @@ static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn, if (fp_minor > ETH_HSI_VER_MINOR && fp_minor != ETH_HSI_VER_NO_PKT_LEN_TUNN) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF [%d] - Requested fp hsi %02x.%02x which is" - " slightly newer than PF's %02x.%02x; Configuring" - " PFs version\n", + "VF [%d] - Requested fp hsi %02x.%02x which is slightly newer than PF's %02x.%02x; Configuring PFs version\n", p_vf->abs_vf_id, ETH_HSI_VER_MAJOR, fp_minor, ETH_HSI_VER_MAJOR, ETH_HSI_VER_MINOR); @@ -304,8 +343,7 @@ static bool ecore_iov_validate_rxq(struct ecore_hwfn *p_hwfn, { if (rx_qid >= p_vf->num_rxqs) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF[0x%02x] - can't touch Rx queue[%04x];" - " Only 0x%04x are allocated\n", + "VF[0x%02x] - can't touch Rx queue[%04x]; Only 0x%04x are allocated\n", p_vf->abs_vf_id, rx_qid, p_vf->num_rxqs); return false; } @@ -320,8 +358,7 @@ static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn, { if (tx_qid >= p_vf->num_txqs) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF[0x%02x] - can't touch Tx queue[%04x];" - " Only 0x%04x are allocated\n", + "VF[0x%02x] - can't touch Tx queue[%04x]; Only 0x%04x are allocated\n", p_vf->abs_vf_id, tx_qid, p_vf->num_txqs); return false; } @@ -340,8 +377,7 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn, return true; DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF[0%02x] - tried using sb_idx %04x which doesn't exist as" - " one of its 0x%02x SBs\n", + "VF[0%02x] - tried using sb_idx %04x which doesn't exist as one of its 0x%02x SBs\n", p_vf->abs_vf_id, sb_idx, p_vf->num_sbs); return false; @@ -374,14 +410,23 @@ static bool ecore_iov_validate_active_txq(struct ecore_vf_info *p_vf) return false; } -enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn, - int vfid, - struct ecore_ptt *p_ptt) +static u32 ecore_iov_vf_bulletin_crc(struct ecore_vf_info *p_vf) { - struct ecore_bulletin_content *p_bulletin; + struct ecore_bulletin_content *p_bulletin = p_vf->bulletin.p_virt; int crc_size = sizeof(p_bulletin->crc); + + return OSAL_CRC32(0, (u8 *)p_bulletin + crc_size, + p_vf->bulletin.size - crc_size); +} + +LNX_STATIC enum _ecore_status_t +ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn, int vfid, + struct ecore_ptt *p_ptt) +{ + struct ecore_bulletin_content *p_bulletin; struct dmae_params params; struct ecore_vf_info *p_vf; + u32 crc; p_vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true); if (!p_vf) @@ -393,14 +438,19 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct 
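The two new entry points encode an ordering contract: ecore_iov_pci_enable_prolog() must set max_active_vfs (and reconfigure the QM) before any VF is loaded, and ecore_iov_pci_disable_epilog() may clear it only once all VFs are unloaded. A standalone model of just that guard logic follows; pf_state and the function names are hypothetical names for this sketch, and the ecore_qm_reconf() step is reduced to a comment.

#include <stdio.h>

struct pf_state {
	unsigned int total_vfs;		/* PCI SR-IOV capability limit */
	unsigned int num_vfs;		/* VFs currently loaded */
	unsigned int max_active_vfs;	/* 0 means "not configured" */
};

static int enable_prolog(struct pf_state *pf, unsigned int max_active)
{
	if (max_active > pf->total_vfs)
		return -1;	/* beyond what the capability allows */
	if (pf->max_active_vfs)
		return -1;	/* already set; epilog must run first */
	pf->max_active_vfs = max_active;
	/* the driver would reconfigure the QM here */
	return 0;
}

static int disable_epilog(struct pf_state *pf)
{
	if (pf->num_vfs)
		return -1;	/* VFs still loaded; QM can't be reconfigured */
	pf->max_active_vfs = 0;
	/* QM reconfiguration happens here as well */
	return 0;
}

int main(void)
{
	struct pf_state pf = { 16, 0, 0 };

	printf("prolog(8): %d\n", enable_prolog(&pf, 8));	/* 0 */
	printf("prolog(4): %d\n", enable_prolog(&pf, 4));	/* -1, already set */
	printf("epilog():  %d\n", disable_epilog(&pf));		/* 0, no VFs loaded */
	return 0;
}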
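The new ecore_iov_vf_bulletin_crc() helper also enables a cheap "post only when changed" test: the checksum is computed over every byte after the crc field itself, so if a fresh computation matches the stored value nothing changed and the DMA can be skipped. A compilable sketch of the pattern, using a trivial byte hash as a stand-in for OSAL_CRC32 (struct bulletin is a toy layout, not the real ecore_bulletin_content):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct bulletin {
	uint32_t crc;		/* must be first: it is excluded from the sum */
	uint32_t version;
	uint8_t mac[6];
};

/* Stand-in for OSAL_CRC32: hashes every byte after the crc field */
static uint32_t bulletin_sum(const struct bulletin *b)
{
	const uint8_t *p = (const uint8_t *)b + sizeof(b->crc);
	size_t len = sizeof(*b) - sizeof(b->crc);
	uint32_t sum = 0;

	while (len--)
		sum = sum * 31 + *p++;
	return sum;
}

/* Returns 1 if a DMA post would be issued, 0 if it is skipped */
static int bulletin_post(struct bulletin *b)
{
	if (bulletin_sum(b) == b->crc)
		return 0;	/* unchanged since last post - skip the DMA */

	b->version++;
	b->crc = bulletin_sum(b);	/* version bump is part of the sum */
	return 1;
}

int main(void)
{
	struct bulletin b = {0, 0, {0}};

	b.mac[0] = 0x02;
	printf("first post:  %d\n", bulletin_post(&b));	/* 1 */
	printf("second post: %d\n", bulletin_post(&b));	/* 0, nothing changed */
	b.mac[5] = 0x01;
	printf("third post:  %d\n", bulletin_post(&b));	/* 1 */
	return 0;
}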
ecore_hwfn *p_hwfn, p_bulletin = p_vf->bulletin.p_virt; + /* Do not post if bulletin was not changed */ + crc = ecore_iov_vf_bulletin_crc(p_vf); + if (crc == p_bulletin->crc) + return ECORE_SUCCESS; + /* Increment bulletin board version and compute crc */ p_bulletin->version++; - p_bulletin->crc = OSAL_CRC32(0, (u8 *)p_bulletin + crc_size, - p_vf->bulletin.size - crc_size); + p_bulletin->crc = ecore_iov_vf_bulletin_crc(p_vf); DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Posting Bulletin 0x%08x to VF[%d] (CRC 0x%08x)\n", - p_bulletin->version, p_vf->relative_vf_id, p_bulletin->crc); + p_bulletin->version, p_vf->relative_vf_id, + p_bulletin->crc); /* propagate bulletin board via dmae to vm memory */ OSAL_MEMSET(&params, 0, sizeof(params)); params.flags = ECORE_DMAE_FLAG_VF_DST; params.dst_vf_id = p_vf->abs_vf_id; return ecore_dmae_host2host(p_hwfn, p_ptt, p_vf->bulletin.phys, p_vf->vf_bulletin, p_vf->bulletin.size / 4, &params); } +/* @DPDK */ static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev) { struct ecore_hw_sriov_info *iov = p_dev->p_iov_info; @@ -419,44 +470,33 @@ static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev) DP_VERBOSE(p_dev, ECORE_MSG_IOV, "sriov ext pos %d\n", pos); OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_CTRL, &iov->ctrl); - OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_TOTAL_VF, - &iov->total_vfs); - OSAL_PCI_READ_CONFIG_WORD(p_dev, - pos + RTE_PCI_SRIOV_INITIAL_VF, - &iov->initial_vfs); + OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_TOTAL_VF, &iov->total_vfs); + OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_INITIAL_VF, &iov->initial_vfs); - OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_NUM_VF, - &iov->num_vfs); + OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_NUM_VF, &iov->num_vfs); if (iov->num_vfs) { /* @@@TODO - in future we might want to add an OSAL here to * allow each OS to decide on its own how to act. */ DP_VERBOSE(p_dev, ECORE_MSG_IOV, - "Number of VFs are already set to non-zero value." - " Ignoring PCI configuration value\n"); + "Number of VFs is already set to a non-zero value.
Ignoring PCI configuration value\n"); iov->num_vfs = 0; } - OSAL_PCI_READ_CONFIG_WORD(p_dev, - pos + RTE_PCI_SRIOV_VF_OFFSET, &iov->offset); + OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_VF_OFFSET, &iov->offset); - OSAL_PCI_READ_CONFIG_WORD(p_dev, - pos + RTE_PCI_SRIOV_VF_STRIDE, &iov->stride); + OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_VF_STRIDE, &iov->stride); - OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_VF_DID, - &iov->vf_device_id); + OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_VF_DID, &iov->vf_device_id); - OSAL_PCI_READ_CONFIG_DWORD(p_dev, - pos + RTE_PCI_SRIOV_SUP_PGSIZE, &iov->pgsz); + OSAL_PCI_READ_CONFIG_DWORD(p_dev, pos + RTE_PCI_SRIOV_SUP_PGSIZE, &iov->pgsz); OSAL_PCI_READ_CONFIG_DWORD(p_dev, pos + RTE_PCI_SRIOV_CAP, &iov->cap); - OSAL_PCI_READ_CONFIG_BYTE(p_dev, pos + RTE_PCI_SRIOV_FUNC_LINK, - &iov->link); + OSAL_PCI_READ_CONFIG_BYTE(p_dev, pos + RTE_PCI_SRIOV_FUNC_LINK, &iov->link); - DP_VERBOSE(p_dev, ECORE_MSG_IOV, "IOV info: nres %d, cap 0x%x," - "ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d," - " stride %d, page size 0x%x\n", + DP_VERBOSE(p_dev, ECORE_MSG_IOV, + "IOV info: nres %d, cap 0x%x, ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d, stride %d, page size 0x%x\n", iov->nres, iov->cap, iov->ctrl, iov->total_vfs, iov->initial_vfs, iov->nr_virtfn, iov->offset, iov->stride, iov->pgsz); @@ -468,9 +508,7 @@ static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev) * num_vfs to zero to avoid memory corruption in the code that * assumes max number of vfs */ - DP_NOTICE(p_dev, false, - "IOV: Unexpected number of vfs set: %d" - " setting num_vf to zero\n", + DP_NOTICE(p_dev, false, "IOV: Unexpected number of vfs set: %d setting num_vf to zero\n", iov->num_vfs); iov->num_vfs = 0; @@ -490,17 +528,20 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn) union vfpf_tlvs *p_req_virt_addr; u8 idx = 0; - OSAL_MEMSET(p_iov_info->vfs_array, 0, sizeof(p_iov_info->vfs_array)); - p_req_virt_addr = p_iov_info->mbx_msg_virt_addr; req_p = p_iov_info->mbx_msg_phys_addr; p_reply_virt_addr = p_iov_info->mbx_reply_virt_addr; rply_p = p_iov_info->mbx_reply_phys_addr; p_bulletin_virt = p_iov_info->p_bulletins; bulletin_p = p_iov_info->bulletins_phys; + if (!p_req_virt_addr || !p_reply_virt_addr || !p_bulletin_virt) { - DP_ERR(p_hwfn, - "ecore_iov_setup_vfdb called without alloc mem first\n"); + DP_ERR(p_hwfn, "called without allocating mem first\n"); + return; + } + + if (ECORE_IS_VF_RDMA(p_hwfn) && !p_iov_info->p_rdma_info) { + DP_ERR(p_hwfn, "called without allocating mem first for rdma_info\n"); return; } @@ -521,7 +562,8 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn) vf->b_init = false; vf->bulletin.phys = idx * - sizeof(struct ecore_bulletin_content) + bulletin_p; + sizeof(struct ecore_bulletin_content) + + bulletin_p; vf->bulletin.p_virt = p_bulletin_virt + idx; vf->bulletin.size = sizeof(struct ecore_bulletin_content); @@ -531,10 +573,15 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn) vf->concrete_fid = concrete; /* TODO - need to devise a better way of getting opaque */ vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) | - (vf->abs_vf_id << 8); + (vf->abs_vf_id << 8); vf->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS; vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS; + + /* Pending EQs list for this VF and its lock */ + OSAL_LIST_INIT(&vf->vf_eq_info.eq_list); + OSAL_SPIN_LOCK_INIT(&vf->vf_eq_info.eq_list_lock); + vf->vf_eq_info.eq_active = true; } } @@ -568,7 +615,7 
@@ static enum _ecore_status_t ecore_iov_allocate_vfdb(struct ecore_hwfn *p_hwfn) return ECORE_NOMEM; p_iov_info->bulletins_size = sizeof(struct ecore_bulletin_content) * - num_vfs; + num_vfs; p_v_addr = &p_iov_info->p_bulletins; *p_v_addr = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev, &p_iov_info->bulletins_phys, @@ -577,15 +624,15 @@ static enum _ecore_status_t ecore_iov_allocate_vfdb(struct ecore_hwfn *p_hwfn) return ECORE_NOMEM; DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "PF's Requests mailbox [%p virt 0x%lx phys], " - "Response mailbox [%p virt 0x%lx phys] Bulletinsi" - " [%p virt 0x%lx phys]\n", + "PF's Requests mailbox [%p virt 0x%" PRIx64 " phys], " + "Response mailbox [%p virt 0x%" PRIx64 " phys] " + "Bulletins [%p virt 0x%" PRIx64 " phys]\n", p_iov_info->mbx_msg_virt_addr, - (unsigned long)p_iov_info->mbx_msg_phys_addr, + (u64)p_iov_info->mbx_msg_phys_addr, p_iov_info->mbx_reply_virt_addr, - (unsigned long)p_iov_info->mbx_reply_phys_addr, + (u64)p_iov_info->mbx_reply_phys_addr, p_iov_info->p_bulletins, - (unsigned long)p_iov_info->bulletins_phys); + (u64)p_iov_info->bulletins_phys); return ECORE_SUCCESS; } @@ -593,6 +640,7 @@ static enum _ecore_status_t ecore_iov_allocate_vfdb(struct ecore_hwfn *p_hwfn) static void ecore_iov_free_vfdb(struct ecore_hwfn *p_hwfn) { struct ecore_pf_iov *p_iov_info = p_hwfn->pf_iov_info; + u8 idx; if (p_hwfn->pf_iov_info->mbx_msg_virt_addr) OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, @@ -611,11 +659,39 @@ static void ecore_iov_free_vfdb(struct ecore_hwfn *p_hwfn) p_iov_info->p_bulletins, p_iov_info->bulletins_phys, p_iov_info->bulletins_size); + + /* Free pending EQ entries */ + for (idx = 0; idx < p_hwfn->p_dev->p_iov_info->total_vfs; idx++) { + struct ecore_vf_info *p_vf = &p_iov_info->vfs_array[idx]; + + if (!p_vf->vf_eq_info.eq_active) + continue; + + p_vf->vf_eq_info.eq_active = false; + while (!OSAL_LIST_IS_EMPTY(&p_vf->vf_eq_info.eq_list)) { + struct event_ring_list_entry *p_eqe; + + p_eqe = OSAL_LIST_FIRST_ENTRY(&p_vf->vf_eq_info.eq_list, + struct event_ring_list_entry, + list_entry); + if (p_eqe) { + OSAL_LIST_REMOVE_ENTRY(&p_eqe->list_entry, + &p_vf->vf_eq_info.eq_list); + OSAL_FREE(p_hwfn->p_dev, p_eqe); + } + } +#ifdef CONFIG_ECORE_LOCK_ALLOC + OSAL_SPIN_LOCK_DEALLOC(&p_vf->vf_eq_info.eq_list_lock); +#endif + } } enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn *p_hwfn) { struct ecore_pf_iov *p_sriov; +#ifdef CONFIG_ECORE_LOCK_ALLOC + u8 idx; +#endif if (!IS_PF_SRIOV(p_hwfn)) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, @@ -629,20 +705,34 @@ enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn *p_hwfn) return ECORE_NOMEM; } - p_hwfn->pf_iov_info = p_sriov; +#ifdef CONFIG_ECORE_LOCK_ALLOC + for (idx = 0; idx < p_hwfn->p_dev->p_iov_info->total_vfs; idx++) { + struct ecore_vf_info *p_vf = &p_sriov->vfs_array[idx]; - ecore_spq_register_async_cb(p_hwfn, PROTOCOLID_COMMON, - ecore_sriov_eqe_event); + if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_vf->vf_eq_info.eq_list_lock, + "vf_eq_list_lock")) { + DP_NOTICE(p_hwfn, false, + "Failed to allocate `eq_list_lock'\n"); + OSAL_FREE(p_hwfn->p_dev, p_sriov); + return ECORE_NOMEM; + } + } +#endif + + p_hwfn->pf_iov_info = p_sriov; return ecore_iov_allocate_vfdb(p_hwfn); } -void ecore_iov_setup(struct ecore_hwfn *p_hwfn) +void ecore_iov_setup(struct ecore_hwfn *p_hwfn) { if (!IS_PF_SRIOV(p_hwfn) || !IS_PF_SRIOV_ALLOC(p_hwfn)) return; ecore_iov_setup_vfdb(p_hwfn); + + ecore_spq_register_async_cb(p_hwfn, PROTOCOLID_COMMON, + ecore_sriov_eqe_event); } void ecore_iov_free(struct ecore_hwfn *p_hwfn) @@ -652,12 +742,14 @@ void 
ecore_iov_free(struct ecore_hwfn *p_hwfn) if (IS_PF_SRIOV_ALLOC(p_hwfn)) { ecore_iov_free_vfdb(p_hwfn); OSAL_FREE(p_hwfn->p_dev, p_hwfn->pf_iov_info); + p_hwfn->pf_iov_info = OSAL_NULL; } } void ecore_iov_free_hw_info(struct ecore_dev *p_dev) { OSAL_FREE(p_dev, p_dev->p_iov_info); + p_dev->p_iov_info = OSAL_NULL; } enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn) @@ -700,6 +792,7 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn) DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "IOV capabilities, but no VFs are published\n"); OSAL_FREE(p_dev, p_dev->p_iov_info); + p_dev->p_iov_info = OSAL_NULL; return ECORE_SUCCESS; } @@ -750,13 +843,14 @@ static bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid, return true; } -bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid) +LNX_STATIC bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid) { return _ecore_iov_pf_sanity_check(p_hwfn, vfid, true); } -void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev, - u16 rel_vf_id, u8 to_disable) +LNX_STATIC void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev, + u16 rel_vf_id, + u8 to_disable) { struct ecore_vf_info *vf; int i; @@ -772,8 +866,8 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev, } } -void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev, - u8 to_disable) +LNX_STATIC void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev, + u8 to_disable) { u16 i; @@ -835,9 +929,10 @@ static void ecore_iov_vf_igu_reset(struct ecore_hwfn *p_hwfn, vf->opaque_fid, true); } -static void ecore_iov_vf_igu_set_int(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct ecore_vf_info *vf, bool enable) +static void ecore_iov_vf_igu_set_int(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_vf_info *vf, + bool enable) { u32 igu_vf_conf; @@ -892,11 +987,12 @@ ecore_iov_enable_vf_access_msix(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -static enum _ecore_status_t -ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, struct ecore_vf_info *vf) +static enum _ecore_status_t ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_vf_info *vf) { u32 igu_vf_conf = IGU_VF_CONF_FUNC_EN; + bool enable_scan = false; enum _ecore_status_t rc = ECORE_SUCCESS; /* It's possible VF was previously considered malicious - @@ -907,9 +1003,8 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn, if (vf->to_disable) return ECORE_SUCCESS; - DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "Enable internal access for vf %x [abs %x]\n", vf->abs_vf_id, - ECORE_VF_ABS_ID(p_hwfn, vf)); + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Enable internal access for vf %x [abs %x]\n", + vf->abs_vf_id, ECORE_VF_ABS_ID(p_hwfn, vf)); ecore_iov_vf_pglue_clear_err(p_hwfn, p_ptt, ECORE_VF_ABS_ID(p_hwfn, vf)); @@ -921,6 +1016,15 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn, if (rc != ECORE_SUCCESS) return rc; + /* enable scan for VF only if it supports RDMA */ + enable_scan = ECORE_IS_VF_RDMA(p_hwfn); + + if (enable_scan) + ecore_tm_clear_vf_ilt(p_hwfn, vf->relative_vf_id); + + STORE_RT_REG(p_hwfn, TM_REG_VF_ENABLE_CONN_RT_OFFSET, + enable_scan ? 0x1 : 0x0); + ecore_fid_pretend(p_hwfn, p_ptt, (u16)vf->concrete_fid); SET_FIELD(igu_vf_conf, IGU_VF_CONF_PARENT, p_hwfn->rel_pf_id); @@ -938,10 +1042,9 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn, } /** - * * @brief ecore_iov_config_perm_table - configure the permission * zone table. - * The queue zone permission table size is 320x9. 
There + * In E4, queue zone permission table size is 320x9. There * are 320 VF queues for single engine device (256 for dual * engine device), and each entry has the following format: * {Valid, VF[7:0]} @@ -950,9 +1053,10 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn, * @param vf * @param enable */ -static void ecore_iov_config_perm_table(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct ecore_vf_info *vf, u8 enable) +static void ecore_iov_config_perm_table(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_vf_info *vf, + u8 enable) { u32 reg_addr, val; u16 qzone_id = 0; @@ -984,23 +1088,23 @@ static void ecore_iov_enable_vf_traffic(struct ecore_hwfn *p_hwfn, static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_vf_info *vf, - u16 num_rx_queues) + u16 num_irqs) { struct ecore_igu_block *p_block; struct cau_sb_entry sb_entry; int qid = 0; u32 val = 0; - if (num_rx_queues > p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov) - num_rx_queues = - (u16)p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov; - p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov -= num_rx_queues; + if (num_irqs > p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov) + num_irqs = (u16)p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov; + + p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov -= num_irqs; SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER, vf->abs_vf_id); SET_FIELD(val, IGU_MAPPING_LINE_VALID, 1); SET_FIELD(val, IGU_MAPPING_LINE_PF_VALID, 0); - for (qid = 0; qid < num_rx_queues; qid++) { + for (qid = 0; qid < num_irqs; qid++) { p_block = ecore_get_igu_free_sb(p_hwfn, false); if (!p_block) continue; @@ -1025,7 +1129,7 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn, OSAL_NULL /* default parameters */); } - vf->num_sbs = (u8)num_rx_queues; + vf->num_sbs = (u8)num_irqs; return vf->num_sbs; } @@ -1044,6 +1148,7 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn, static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_vf_info *vf) + { struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info; int idx, igu_id; @@ -1052,7 +1157,8 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn, /* Invalidate igu CAM lines and mark them as free */ for (idx = 0; idx < vf->num_sbs; idx++) { igu_id = vf->igu_sbs[idx]; - addr = IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_id; + addr = IGU_REG_MAPPING_MEMORY + + sizeof(u32) * igu_id; val = ecore_rd(p_hwfn, p_ptt, addr); SET_FIELD(val, IGU_MAPPING_LINE_VALID, 0); @@ -1065,11 +1171,11 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn, vf->num_sbs = 0; } -void ecore_iov_set_link(struct ecore_hwfn *p_hwfn, - u16 vfid, - struct ecore_mcp_link_params *params, - struct ecore_mcp_link_state *link, - struct ecore_mcp_link_capabilities *p_caps) +LNX_STATIC void ecore_iov_set_link(struct ecore_hwfn *p_hwfn, + u16 vfid, + struct ecore_mcp_link_params *params, + struct ecore_mcp_link_state *link, + struct ecore_mcp_link_capabilities *p_caps) { struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false); struct ecore_bulletin_content *p_bulletin; @@ -1111,7 +1217,7 @@ static void ecore_emul_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, } #endif -enum _ecore_status_t +LNX_STATIC enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_iov_vf_init_params *p_params) @@ -1126,6 +1232,16 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, u32 cids; u8 i; + if 
(IS_LEAD_HWFN(p_hwfn) && + p_hwfn->p_dev->p_iov_info->num_vfs == + p_hwfn->pf_iov_info->max_active_vfs) { + DP_NOTICE(p_hwfn, false, + "Can't add this VF - num_vfs (%u) has already reached max_active_vfs (%u)\n", + p_hwfn->p_dev->p_iov_info->num_vfs, + p_hwfn->pf_iov_info->max_active_vfs); + return ECORE_NORESOURCES; + } + vf = ecore_iov_get_vf_info(p_hwfn, p_params->rel_vf_id, false); if (!vf) { DP_ERR(p_hwfn, "ecore_iov_init_hw_for_vf : vf is OSAL_NULL\n"); return ECORE_INVAL; } @@ -1140,14 +1256,14 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, /* Perform sanity checking on the requested vport/rss */ if (p_params->vport_id >= RESC_NUM(p_hwfn, ECORE_VPORT)) { - DP_NOTICE(p_hwfn, true, "VF[%d] - can't use VPORT %02x\n", + DP_NOTICE(p_hwfn, true, "VF[%d] - can't use VPORT 0x%02x\n", p_params->rel_vf_id, p_params->vport_id); return ECORE_INVAL; } if ((p_params->num_queues > 1) && (p_params->rss_eng_id >= RESC_NUM(p_hwfn, ECORE_RSS_ENG))) { - DP_NOTICE(p_hwfn, true, "VF[%d] - can't use RSS_ENG %02x\n", + DP_NOTICE(p_hwfn, true, "VF[%d] - can't use RSS_ENG 0x%02x\n", p_params->rel_vf_id, p_params->rss_eng_id); return ECORE_INVAL; } @@ -1193,10 +1309,14 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, /* Limit number of queues according to number of CIDs */ ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids); DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF[%d] - requesting to initialize for 0x%04x queues" - " [0x%04x CIDs available]\n", - vf->relative_vf_id, p_params->num_queues, (u16)cids); - num_irqs = OSAL_MIN_T(u16, p_params->num_queues, ((u16)cids)); + "VF[%d] - requesting to initialize for 0x%04x queues [0x%04x CIDs available] and 0x%04x cnqs\n", + vf->relative_vf_id, p_params->num_queues, (u16)cids, + p_params->num_cnqs); + + p_params->num_queues = OSAL_MIN_T(u16, (p_params->num_queues), + ((u16)cids)); + + num_irqs = p_params->num_queues + p_params->num_cnqs; num_of_vf_available_chains = ecore_iov_alloc_vf_igu_sbs(p_hwfn, p_ptt, @@ -1207,9 +1327,21 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, return ECORE_NOMEM; } - /* Choose queue number and index ranges */ - vf->num_rxqs = num_of_vf_available_chains; - vf->num_txqs = num_of_vf_available_chains; + /* L2 queues will get as many chains as they need; the rest will + * serve the cnqs.
+ */ + if (num_of_vf_available_chains > p_params->num_queues) { + vf->num_rxqs = p_params->num_queues; + vf->num_txqs = p_params->num_queues; + vf->num_cnqs = num_of_vf_available_chains - p_params->num_queues; + vf->cnq_offset = p_params->cnq_offset; + } else { + vf->num_rxqs = num_of_vf_available_chains; + vf->num_txqs = num_of_vf_available_chains; + vf->num_cnqs = 0; + } + + vf->cnq_sb_start_id = vf->num_rxqs; for (i = 0; i < vf->num_rxqs; i++) { struct ecore_vf_queue *p_queue = &vf->vf_queues[i]; @@ -1240,8 +1372,8 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, vf->b_init = true; #ifndef REMOVE_DBG - p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] |= - (1ULL << (vf->relative_vf_id % 64)); + ECORE_VF_ARRAY_SET_VFID(p_hwfn->pf_iov_info->active_vfs, + vf->relative_vf_id); #endif if (IS_LEAD_HWFN(p_hwfn)) @@ -1253,7 +1385,7 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn, #endif return ECORE_SUCCESS; - } +} #ifndef ASIC_ONLY static void ecore_emul_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn, @@ -1265,18 +1397,56 @@ static void ecore_emul_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn, ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_SR_IOV_DISABLED_REQUEST_CLR, sriov_dis); -} + } } #endif -enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u16 rel_vf_id) +static enum _ecore_status_t +ecore_iov_timers_stop(struct ecore_hwfn *p_hwfn, + struct ecore_vf_info *p_vf, + struct ecore_ptt *p_ptt) +{ + int i; + + /* pretend to the specific VF for the split */ + ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_vf->concrete_fid); + + /* close timers */ + ecore_wr(p_hwfn, p_ptt, TM_REG_VF_ENABLE_CONN, 0x0); + + for (i = 0; i < ECORE_HW_STOP_RETRY_LIMIT; i++) { + if (!ecore_rd(p_hwfn, p_ptt, TM_REG_VF_SCAN_ACTIVE_CONN)) + break; + + /* Dependent on number of connection/tasks, possibly + * 1ms sleep is required between polls + */ + OSAL_MSLEEP(1); + } + + /* pretend back to the PF */ + ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid); + + if (i < ECORE_HW_STOP_RETRY_LIMIT) + return ECORE_SUCCESS; + + DP_NOTICE(p_hwfn, false, + "Timer linear scan connection is not over [%02x]\n", + (u8)ecore_rd(p_hwfn, p_ptt, TM_REG_VF_SCAN_ACTIVE_CONN)); + + return ECORE_TIMEOUT; +} + +LNX_STATIC enum _ecore_status_t +ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + u16 rel_vf_id) { struct ecore_mcp_link_capabilities caps; struct ecore_mcp_link_params params; struct ecore_mcp_link_state link; struct ecore_vf_info *vf = OSAL_NULL; + enum _ecore_status_t rc = ECORE_SUCCESS; vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true); if (!vf) { @@ -1318,8 +1488,14 @@ enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn, if (vf->b_init) { vf->b_init = false; - p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] &= - ~(1ULL << (vf->relative_vf_id / 64)); + +#ifdef CONFIG_ECORE_SW_CHANNEL + vf->b_hw_channel = 0; +#endif +#ifndef REMOVE_DBG + ECORE_VF_ARRAY_CLEAR_VFID(p_hwfn->pf_iov_info->active_vfs, + vf->relative_vf_id); +#endif if (IS_LEAD_HWFN(p_hwfn)) p_hwfn->p_dev->p_iov_info->num_vfs--; @@ -1330,7 +1506,10 @@ enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn, ecore_emul_iov_release_hw_for_vf(p_hwfn, p_ptt); #endif - return ECORE_SUCCESS; + /* disable timer scan for VF */ + rc = ecore_iov_timers_stop(p_hwfn, vf, p_ptt); + + return rc; } static bool ecore_iov_tlv_supported(u16 tlvtype) @@ -1339,7 +1518,8 @@ static bool ecore_iov_tlv_supported(u16 tlvtype) } static 
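The ECORE_VF_ARRAY_SET_VFID()/ECORE_VF_ARRAY_CLEAR_VFID() macros above replace open-coded bit manipulation on an array of u64 words; notably, the removed clear shifted by relative_vf_id / 64 where the bit index should have been relative_vf_id % 64, which is exactly the kind of slip a shared macro prevents. A self-contained sketch of such helpers, with illustrative macro and constant names:

#include <stdint.h>
#include <stdio.h>

#define MAX_VFS 240
#define VF_WORDS ((MAX_VFS + 63) / 64)

/* word index is id / 64, bit index within the word is id % 64 */
#define VF_ARRAY_SET(arr, id)   ((arr)[(id) / 64] |=  (1ULL << ((id) % 64)))
#define VF_ARRAY_CLEAR(arr, id) ((arr)[(id) / 64] &= ~(1ULL << ((id) % 64)))
#define VF_ARRAY_TEST(arr, id)  (!!((arr)[(id) / 64] & (1ULL << ((id) % 64))))

int main(void)
{
	uint64_t active[VF_WORDS] = {0};

	VF_ARRAY_SET(active, 70);		/* word 1, bit 6 */
	printf("vf70 set:   %d\n", VF_ARRAY_TEST(active, 70));	/* 1 */
	VF_ARRAY_CLEAR(active, 70);
	printf("vf70 clear: %d\n", VF_ARRAY_TEST(active, 70));	/* 0 */
	return 0;
}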
void ecore_iov_lock_vf_pf_channel(struct ecore_hwfn *p_hwfn, - struct ecore_vf_info *vf, u16 tlv) + struct ecore_vf_info *vf, + u16 tlv) { /* lock the channel */ /* mutex_lock(&vf->op_mutex); @@@TBD MichalK - add lock... */ @@ -1349,14 +1529,12 @@ static void ecore_iov_lock_vf_pf_channel(struct ecore_hwfn *p_hwfn, /* log the lock */ if (ecore_iov_tlv_supported(tlv)) - DP_VERBOSE(p_hwfn, - ECORE_MSG_IOV, + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "VF[%d]: vf pf channel locked by %s\n", vf->abs_vf_id, - qede_ecore_channel_tlvs_string[tlv]); + ecore_channel_tlvs_string[tlv]); else - DP_VERBOSE(p_hwfn, - ECORE_MSG_IOV, + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "VF[%d]: vf pf channel locked by %04x\n", vf->abs_vf_id, tlv); } @@ -1365,13 +1543,23 @@ static void ecore_iov_unlock_vf_pf_channel(struct ecore_hwfn *p_hwfn, struct ecore_vf_info *vf, u16 expected_tlv) { + /* WARN(expected_tlv != vf->op_current, + * "lock mismatch: expected %s found %s", + * channel_tlvs_string[expected_tlv], + * channel_tlvs_string[vf->op_current]); + * @@@TBD MichalK + */ + + /* lock the channel */ + /* mutex_unlock(&vf->op_mutex); @@@TBD MichalK add the lock */ + /* log the unlock */ if (ecore_iov_tlv_supported(expected_tlv)) DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "VF[%d]: vf pf channel unlocked by %s\n", vf->abs_vf_id, - qede_ecore_channel_tlvs_string[expected_tlv]); + ecore_channel_tlvs_string[expected_tlv]); else DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, @@ -1379,7 +1567,7 @@ static void ecore_iov_unlock_vf_pf_channel(struct ecore_hwfn *p_hwfn, vf->abs_vf_id, expected_tlv); /* record the locking op */ - /* vf->op_current = CHANNEL_TLV_NONE; */ + /* vf->op_current = CHANNEL_TLV_NONE;*/ } /* place a given tlv on the tlv buffer, continuing current tlv list */ @@ -1404,14 +1592,14 @@ void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn, void *tlvs_list) struct channel_tlv *tlv; do { - /* cast current tlv list entry to channel tlv header */ + /* cast current tlv list entry to channel tlv header*/ tlv = (struct channel_tlv *)((u8 *)tlvs_list + total_length); /* output tlv */ if (ecore_iov_tlv_supported(tlv->type)) DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "TLV number %d: type %s, length %d\n", - i, qede_ecore_channel_tlvs_string[tlv->type], + i, ecore_channel_tlvs_string[tlv->type], tlv->length); else DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, @@ -1426,7 +1614,9 @@ void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn, void *tlvs_list) DP_NOTICE(p_hwfn, false, "TLV of length 0 found\n"); return; } + total_length += tlv->length; + if (total_length >= sizeof(struct tlv_buffer_size)) { DP_NOTICE(p_hwfn, false, "TLV ==> Buffer overflow\n"); return; @@ -1456,7 +1646,7 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn, #ifdef CONFIG_ECORE_SW_CHANNEL mbx->sw_mbx.response_size = - length + sizeof(struct channel_list_end_tlv); + length + sizeof(struct channel_list_end_tlv); if (!p_vf->b_hw_channel) return; @@ -1480,7 +1670,8 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn, */ REG_WR(p_hwfn, GTT_BAR0_MAP_REG_USDM_RAM + - USTORM_VF_PF_CHANNEL_READY_OFFSET(eng_vf_id), 1); + USTORM_VF_PF_CHANNEL_READY_OFFSET(eng_vf_id), + 1); ecore_dmae_host2host(p_hwfn, p_ptt, mbx->reply_phys, mbx->req_virt->first_tlv.reply_address, @@ -1543,7 +1734,7 @@ static u16 ecore_iov_prep_vp_update_resp_tlvs(struct ecore_hwfn *p_hwfn, resp->hdr.status = PFVF_STATUS_NOT_SUPPORTED; DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF[%d] - vport_update resp: TLV %d, status %02x\n", + "VF[%d] - vport_update response: TLV %d, status %02x\n", p_vf->relative_vf_id, ecore_iov_vport_to_tlv(i), 
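ecore_dp_tlv_list() walks the channel TLV chain by hopping over each entry's length field until the list-end type, bailing out on zero-length entries or buffer overflow. A standalone model of that walk; tlv_hdr mirrors the type/length layout of struct channel_tlv, while the rest of the names are illustrative:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* matches the layout of struct channel_tlv: type, then total length */
struct tlv_hdr {
	uint16_t type;
	uint16_t length;	/* includes this header */
};

#define TLV_NONE 0		/* terminates the sequence */

/* Hop entry by entry until the list-end type, guarding against
 * zero/short lengths and against running off the end of the buffer.
 */
static void dump_tlvs(const uint8_t *buf, size_t buf_size)
{
	size_t off = 0;

	for (;;) {
		struct tlv_hdr tlv;

		if (off + sizeof(tlv) > buf_size)
			break;			/* would overflow the buffer */
		memcpy(&tlv, buf + off, sizeof(tlv));
		if (tlv.type == TLV_NONE)
			break;			/* end of list */
		if (tlv.length < sizeof(tlv))
			break;			/* malformed: length too small */
		printf("tlv type %u, length %u\n", tlv.type, tlv.length);
		off += tlv.length;
	}
}

int main(void)
{
	uint8_t buf[32] = {0};
	struct tlv_hdr t1 = {1, 8};
	struct tlv_hdr t2 = {2, 12};

	memcpy(buf, &t1, sizeof(t1));
	memcpy(buf + 8, &t2, sizeof(t2));
	dump_tlvs(buf, sizeof(buf));	/* prints the two entries, then stops */
	return 0;
}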
resp->hdr.status); @@ -1573,10 +1764,10 @@ static void ecore_iov_prepare_resp(struct ecore_hwfn *p_hwfn, ecore_iov_send_response(p_hwfn, p_ptt, vf_info, length, status); } -struct ecore_public_vf_info -*ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn, - u16 relative_vf_id, - bool b_enabled_only) +LNX_STATIC struct ecore_public_vf_info * +ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn, + u16 relative_vf_id, + bool b_enabled_only) { struct ecore_vf_info *vf = OSAL_NULL; @@ -1590,18 +1781,19 @@ struct ecore_public_vf_info static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn, struct ecore_vf_info *p_vf) { + struct ecore_bulletin_content *p_bulletin; u32 i, j; + + p_bulletin = p_vf->bulletin.p_virt; + p_bulletin->version = 0; + p_vf->vf_bulletin = 0; p_vf->vport_instance = 0; p_vf->configured_features = 0; - /* If VF previously requested less resources, go back to default */ - p_vf->num_rxqs = p_vf->num_sbs; - p_vf->num_txqs = p_vf->num_sbs; - p_vf->num_active_rxqs = 0; - for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) { + for (i = 0; i < ECORE_MAX_QUEUE_VF_CHAINS_PER_PF; i++) { struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i]; for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) { @@ -1620,16 +1812,29 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn, } /* Returns either 0, or log(size) */ -static u32 ecore_iov_vf_db_bar_size(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +#define PGLUE_B_REG_VF_BAR1_SIZE_LOG_OFFSET 11 +u32 ecore_iov_vf_db_bar_size(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { u32 val = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_VF_BAR1_SIZE); + /* The register holds the log of the bar size minus 11, e.g. a value of 1 + * indicates 4K bar size (log(4K)-11 = 1). + * Therefore, we need to add 11 to the register value. + */ if (val) - return val + 11; + return val + PGLUE_B_REG_VF_BAR1_SIZE_LOG_OFFSET; + return 0; } +u8 ecore_iov_abs_to_rel_id(struct ecore_hwfn *p_hwfn, u8 abs_id) +{ + struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info; + + return abs_id - p_iov->first_vf_in_pf; +} + static void ecore_iov_vf_mbx_acquire_resc_cids(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -1638,8 +1843,8 @@ ecore_iov_vf_mbx_acquire_resc_cids(struct ecore_hwfn *p_hwfn, struct pf_vf_resc *p_resp) { u8 num_vf_cons = p_hwfn->pf_params.eth_pf_params.num_vf_cons; - u8 db_size = DB_ADDR_VF(1, DQ_DEMS_LEGACY) - - DB_ADDR_VF(0, DQ_DEMS_LEGACY); + u8 db_size = DB_ADDR_VF(p_hwfn->p_dev, 1, DQ_DEMS_LEGACY) - + DB_ADDR_VF(p_hwfn->p_dev, 0, DQ_DEMS_LEGACY); u32 bar_size; p_resp->num_cids = OSAL_MIN_T(u8, p_req->num_cids, num_vf_cons); @@ -1662,6 +1867,7 @@ ecore_iov_vf_mbx_acquire_resc_cids(struct ecore_hwfn *p_hwfn, if (bar_size) bar_size = 1 << bar_size; + /* In CMT, doorbell bar should be split over both engines */ if (ECORE_IS_CMT(p_hwfn->p_dev)) bar_size /= 2; } else { @@ -1677,16 +1883,18 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_vf_info *p_vf, struct vf_pf_resc_request *p_req, - struct pf_vf_resc *p_resp) + struct pf_vf_resc *p_resp, + bool is_rdma_supported) { u8 i; /* Queue related information */ - p_resp->num_rxqs = p_vf->num_rxqs; - p_resp->num_txqs = p_vf->num_txqs; - p_resp->num_sbs = p_vf->num_sbs; + p_resp->num_rxqs = OSAL_MIN_T(u8, p_vf->num_rxqs, p_req->num_rxqs); + p_resp->num_txqs = OSAL_MIN_T(u8, p_vf->num_txqs, p_req->num_txqs); + /* Legacy drivers expect num SB and num rxqs to match */ + p_resp->num_sbs = is_rdma_supported ?
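The PGLUE_B_REG_VF_BAR1_SIZE encoding described in the comment above works out as follows: a nonzero register value V means the VF doorbell BAR is 2^(V+11) bytes, and on 100G (CMT) devices the BAR is split between the two engines. A tiny sketch of the decode; the function and macro names here are illustrative, not driver symbols:

#include <stdint.h>
#include <stdio.h>

#define VF_BAR1_SIZE_LOG_OFFSET 11	/* register stores log2(size) - 11 */

/* Decode the BAR1 size register: 0 means "not configured",
 * otherwise size = 2^(val + 11), e.g. val 1 -> 2^12 = 4K.
 */
static uint32_t vf_db_bar_bytes(uint32_t reg_val, int is_cmt)
{
	uint32_t bytes;

	if (!reg_val)
		return 0;

	bytes = 1u << (reg_val + VF_BAR1_SIZE_LOG_OFFSET);
	if (is_cmt)		/* 100G: bar is split across both engines */
		bytes /= 2;
	return bytes;
}

int main(void)
{
	printf("val 1 -> %u bytes\n", vf_db_bar_bytes(1, 0));	/* 4096 */
	printf("val 3 -> %u bytes\n", vf_db_bar_bytes(3, 0));	/* 16384 */
	printf("val 3, CMT -> %u bytes\n", vf_db_bar_bytes(3, 1)); /* 8192 */
	return 0;
}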
p_vf->num_sbs : p_vf->num_rxqs; - for (i = 0; i < p_resp->num_sbs; i++) { + for (i = 0; i < p_resp->num_rxqs; i++) { p_resp->hw_sbs[i].hw_sb_id = p_vf->igu_sbs[i]; /* TODO - what's this sb_qid field? Is it deprecated? * or is there an ecore_client that looks at this? @@ -1725,7 +1933,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn, p_resp->num_mc_filters < p_req->num_mc_filters || p_resp->num_cids < p_req->num_cids) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF[%d] - Insufficient resources: rxq [%02x/%02x] txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x] vlan [%02x/%02x] mc [%02x/%02x] cids [%02x/%02x]\n", + "VF[%d] - Insufficient resources: rxq [%u/%u] txq [%u/%u] sbs [%u/%u] mac [%u/%u] vlan [%u/%u] mc [%u/%u] cids [%u/%u]\n", p_vf->abs_vf_id, p_req->num_rxqs, p_resp->num_rxqs, p_req->num_rxqs, p_resp->num_txqs, @@ -1747,6 +1955,10 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn, return PFVF_STATUS_NO_RESOURCE; } + /* Save the actual numbers from the response */ + p_vf->actual_num_rxqs = p_resp->num_rxqs; + p_vf->actual_num_txqs = p_resp->num_txqs; + return PFVF_STATUS_SUCCESS; } @@ -1798,6 +2010,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn, DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "VF[%d] sent ACQUIRE but is already in state %d - fail request\n", vf->abs_vf_id, vf->state); + vfpf_status = PFVF_STATUS_ACQUIRED; goto out; } @@ -1819,9 +2032,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn, p_vfdev->eth_fp_hsi_minor = ETH_HSI_VER_NO_PKT_LEN_TUNN; } else { DP_INFO(p_hwfn, - "VF[%d] needs fastpath HSI %02x.%02x, which is" - " incompatible with loaded FW's faspath" - " HSI %02x.%02x\n", + "VF[%d] needs fastpath HSI %02x.%02x, which is incompatible with loaded FW's fastpath HSI %02x.%02x\n", vf->abs_vf_id, req->vfdev_info.eth_fp_hsi_major, req->vfdev_info.eth_fp_hsi_minor, @@ -1834,9 +2045,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn, /* On 100g PFs, prevent old VFs from loading */ if (ECORE_IS_CMT(p_hwfn->p_dev) && !(req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_100G)) { - DP_INFO(p_hwfn, - "VF[%d] is running an old driver that doesn't support" - " 100g\n", + DP_INFO(p_hwfn, "VF[%d] is running an old driver that doesn't support 100g\n", vf->abs_vf_id); goto out; } @@ -1855,7 +2064,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn, vf->vf_bulletin = req->bulletin_addr; vf->bulletin.size = (vf->bulletin.size < req->bulletin_size) ? - vf->bulletin.size : req->bulletin_size; + vf->bulletin.size : req->bulletin_size; /* fill in pfdev info */ pfdev_info->chip_num = p_hwfn->p_dev->chip_num; @@ -1874,6 +2083,22 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn, if (req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_QUEUE_QIDS) pfdev_info->capabilities |= PFVF_ACQUIRE_CAP_QUEUE_QIDS; + /* EDPM is disabled for VF when the capability is missing, or the VF's + * doorbell bar size is insufficient, or it's not supported by the MFW. + * Otherwise, let the VF know that it supports DPM.
+ */ + if (!(req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_EDPM) || + p_hwfn->pf_iov_info->dpm_info.db_bar_no_edpm || + p_hwfn->pf_iov_info->dpm_info.mfw_no_edpm) { + vf->db_recovery_info.edpm_disabled = true; + } else { + pfdev_info->capabilities |= PFVF_ACQUIRE_CAP_EDPM; + } + + /* This VF supports requesting, receiving and processing async events */ + if (req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_EQ) + vf->vf_eq_info.eq_support = true; + /* Share the sizes of the bars with VF */ resp->pfdev_info.bar_size = (u8)ecore_iov_vf_db_bar_size(p_hwfn, p_ptt); @@ -1881,7 +2106,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn, ecore_iov_vf_mbx_acquire_stats(&pfdev_info->stats_info); OSAL_MEMCPY(pfdev_info->port_mac, p_hwfn->hw_info.hw_mac_addr, - ETH_ALEN); + ECORE_ETH_ALEN); pfdev_info->fw_major = FW_MAJOR_VERSION; pfdev_info->fw_minor = FW_MINOR_VERSION; @@ -1900,14 +2125,6 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn, pfdev_info->dev_type = p_hwfn->p_dev->type; pfdev_info->chip_rev = p_hwfn->p_dev->chip_rev; - /* Fill resources available to VF; Make sure there are enough to - * satisfy the VF's request. - */ - vfpf_status = ecore_iov_vf_mbx_acquire_resc(p_hwfn, p_ptt, vf, - &req->resc_request, resc); - if (vfpf_status != PFVF_STATUS_SUCCESS) - goto out; - /* Start the VF in FW */ rc = ecore_sp_vf_start(p_hwfn, vf); if (rc != ECORE_SUCCESS) { @@ -1924,18 +2141,28 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn, ecore_iov_post_vf_bulletin(p_hwfn, vf->relative_vf_id, p_ptt); DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF[%d] ACQUIRE_RESPONSE: pfdev_info- chip_num=0x%x," - " db_size=%d, idx_per_sb=%d, pf_cap=0x%lx\n" - "resources- n_rxq-%d, n_txq-%d, n_sbs-%d, n_macs-%d," - " n_vlans-%d\n", + "VF[%d] ACQUIRE_RESPONSE: pfdev_info- chip_num=0x%x, db_size=%d, idx_per_sb=%d, " + "pf_cap=0x%" PRIx64 "\n" + "resources- n_rxq-%d, n_txq-%d, n_sbs-%d, n_macs-%d, n_vlans-%d\n", vf->abs_vf_id, resp->pfdev_info.chip_num, resp->pfdev_info.db_size, resp->pfdev_info.indices_per_sb, - (unsigned long)resp->pfdev_info.capabilities, resc->num_rxqs, + resp->pfdev_info.capabilities, resc->num_rxqs, resc->num_txqs, resc->num_sbs, resc->num_mac_filters, resc->num_vlan_filters); vf->state = VF_ACQUIRED; + /* Enable load requests for this VF - pretend to the specific VF + * for the split. 
+ */ + ecore_fid_pretend(p_hwfn, p_ptt, (u16)vf->concrete_fid); + + ecore_wr(p_hwfn, p_ptt, CCFC_REG_WEAK_ENABLE_VF, 0x1); + ecore_wr(p_hwfn, p_ptt, TCFC_REG_WEAK_ENABLE_VF, 0x1); + + /* unpretend */ + ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid); + out: /* Prepare Response */ ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_ACQUIRE, @@ -1943,16 +2170,16 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn, vfpf_status); } -static enum _ecore_status_t -__ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn, - struct ecore_vf_info *p_vf, bool val) +static enum _ecore_status_t __ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn, + struct ecore_vf_info *p_vf, bool val) { struct ecore_sp_vport_update_params params; enum _ecore_status_t rc; if (val == p_vf->spoof_chk) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "Spoofchk value[%d] is already configured\n", val); + "Spoofchk value[%d] is already configured\n", + val); return ECORE_SUCCESS; } @@ -1978,9 +2205,8 @@ __ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn, return rc; } -static enum _ecore_status_t -ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn, - struct ecore_vf_info *p_vf) +static enum _ecore_status_t ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn, + struct ecore_vf_info *p_vf) { struct ecore_filter_ucast filter; enum _ecore_status_t rc = ECORE_SUCCESS; @@ -2003,13 +2229,11 @@ ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn, "Reconfiguring VLAN [0x%04x] for VF [%04x]\n", filter.vlan, p_vf->relative_vf_id); rc = ecore_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid, - &filter, ECORE_SPQ_MODE_CB, - OSAL_NULL); + &filter, ECORE_SPQ_MODE_CB, OSAL_NULL); if (rc) { - DP_NOTICE(p_hwfn, true, - "Failed to configure VLAN [%04x]" - " to VF [%04x]\n", - filter.vlan, p_vf->relative_vf_id); + DP_NOTICE(p_hwfn, true, "Failed to configure VLAN [%04x] to VF [%04x]\n", + filter.vlan, + p_vf->relative_vf_id); break; } } @@ -2019,11 +2243,12 @@ ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn, static enum _ecore_status_t ecore_iov_reconfigure_unicast_shadow(struct ecore_hwfn *p_hwfn, - struct ecore_vf_info *p_vf, u64 events) + struct ecore_vf_info *p_vf, + u64 events) { enum _ecore_status_t rc = ECORE_SUCCESS; - /*TODO - what about MACs? */ + /* TODO - what about MACs? */ if ((events & (1 << VLAN_ADDR_FORCED)) && !(p_vf->configured_features & (1 << VLAN_ADDR_FORCED))) @@ -2043,6 +2268,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn, if (!p_vf->vport_instance) return ECORE_INVAL; + /* Check whether skipping unicast filter update is required, e.g. when + * Switchdev mode is enabled. + */ + if (ecore_get_skip_ucast_filter_update(p_hwfn)) + return ECORE_INVAL; + if ((events & (1 << MAC_ADDR_FORCED)) || p_hwfn->pf_params.eth_pf_params.allow_vf_mac_change || p_vf->p_vf_info.is_trusted_configured) { @@ -2055,7 +2286,9 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn, filter.is_rx_filter = 1; filter.is_tx_filter = 1; filter.vport_to_add_to = p_vf->vport_id; - OSAL_MEMCPY(filter.mac, p_vf->bulletin.p_virt->mac, ETH_ALEN); + OSAL_MEMCPY(filter.mac, + p_vf->bulletin.p_virt->mac, + ECORE_ETH_ALEN); rc = ecore_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid, &filter, @@ -2086,7 +2319,7 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn, filter.vport_to_add_to = p_vf->vport_id; filter.vlan = p_vf->bulletin.p_virt->pvid; filter.opcode = filter.vlan ? 
ECORE_FILTER_REPLACE : - ECORE_FILTER_FLUSH; + ECORE_FILTER_FLUSH; /* Send the ramrod */ rc = ecore_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid, @@ -2109,11 +2342,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn, vport_update.update_inner_vlan_removal_flg = 1; removal = filter.vlan ? - 1 : p_vf->shadow_config.inner_vlan_removal; + 1 : p_vf->shadow_config.inner_vlan_removal; vport_update.inner_vlan_removal_flg = removal; vport_update.silent_vlan_removal_flg = filter.vlan ? 1 : 0; rc = ecore_sp_vport_update(p_hwfn, &vport_update, - ECORE_SPQ_MODE_EBLOCK, OSAL_NULL); + ECORE_SPQ_MODE_EBLOCK, + OSAL_NULL); if (rc) { DP_NOTICE(p_hwfn, true, "PF failed to configure VF vport for vlan\n"); @@ -2121,7 +2355,7 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn, } /* Update all the Rx queues */ - for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) { + for (i = 0; i < ECORE_MAX_QUEUE_VF_CHAINS_PER_PF; i++) { struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i]; struct ecore_queue_cid *p_cid = OSAL_NULL; @@ -2132,13 +2366,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn, rc = ecore_sp_eth_rx_queues_update(p_hwfn, (void **)&p_cid, - 1, 0, 1, - ECORE_SPQ_MODE_EBLOCK, - OSAL_NULL); + 1, 0, 1, + ECORE_SPQ_MODE_EBLOCK, + OSAL_NULL); if (rc) { DP_NOTICE(p_hwfn, true, - "Failed to send Rx update" - " fo queue[0x%04x]\n", + "Failed to send Rx update for queue[0x%04x]\n", p_cid->rel.queue_id); return rc; } @@ -2186,7 +2419,7 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn, ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf); /* Initialize Status block in CAU */ - for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) { + for (sb_id = 0; sb_id < vf->num_rxqs; sb_id++) { if (!start->sb_addr[sb_id]) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "VF[%d] did not fill the address of SB %d\n", @@ -2216,14 +2449,14 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn, } OSAL_MEMSET(&params, 0, sizeof(struct ecore_sp_vport_start_params)); - params.tpa_mode = start->tpa_mode; + params.tpa_mode = start->tpa_mode; params.remove_inner_vlan = start->inner_vlan_removal; params.tx_switching = true; + params.zero_placement_offset = start->zero_placement_offset; #ifndef ASIC_ONLY if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) { - DP_NOTICE(p_hwfn, false, - "FPGA: Don't config VF for Tx-switching [no pVFC]\n"); + DP_NOTICE(p_hwfn, false, "FPGA: Don't configure VF for Tx-switching [no pVFC]\n"); params.tx_switching = false; } #endif @@ -2241,8 +2474,7 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn, rc = ecore_sp_eth_vport_start(p_hwfn, &params); if (rc != ECORE_SUCCESS) { - DP_ERR(p_hwfn, - "ecore_iov_vf_mbx_start_vport returned error %d\n", rc); + DP_ERR(p_hwfn, "returned error %d\n", rc); status = PFVF_STATUS_FAILURE; } else { vf->vport_instance++; @@ -2253,7 +2485,6 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn, vf->vport_id, vf->opaque_fid); __ecore_iov_spoofchk_set(p_hwfn, vf, vf->req_spoofchk_val); } - ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_VPORT_START, sizeof(struct pfvf_def_resp_tlv), status); } @@ -2265,6 +2496,14 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn, u8 status = PFVF_STATUS_SUCCESS; enum _ecore_status_t rc; + if (!vf->vport_instance) { + DP_NOTICE(p_hwfn, false, + "VF [%02x] requested vport stop, but no vport active\n", + vf->abs_vf_id); + status = PFVF_STATUS_FAILURE; + goto out; + } + OSAL_IOV_VF_VPORT_STOP(p_hwfn, vf); vf->vport_instance--; vf->spoof_chk = false; @@ -2272,9 +2511,8 @@
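The new early check in ecore_iov_vf_mbx_stop_vport() treats vport start/stop as a refcount that must never underflow: a stop request with no active vport instance is now failed instead of decrementing past zero. A minimal model of the guard; struct vport and the function names are illustrative only:

#include <stdio.h>

struct vport {
	unsigned int instances;	/* active start/stop balance */
};

static int vport_start(struct vport *v)
{
	v->instances++;
	return 0;
}

static int vport_stop(struct vport *v)
{
	if (!v->instances)
		return -1;	/* nothing to stop - would underflow */
	v->instances--;
	return 0;
}

int main(void)
{
	struct vport v = {0};

	printf("stop first: %d\n", vport_stop(&v));	/* -1, rejected */
	vport_start(&v);
	printf("stop after: %d\n", vport_stop(&v));	/* 0 */
	return 0;
}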
static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn, if ((ecore_iov_validate_active_rxq(vf)) || (ecore_iov_validate_active_txq(vf))) { vf->b_malicious = true; - DP_NOTICE(p_hwfn, false, - "VF [%02x] - considered malicious;" - " Unable to stop RX/TX queuess\n", + DP_NOTICE(p_hwfn, + false, " VF [%02x] - considered malicious; Unable to stop RX/TX queuess\n", vf->abs_vf_id); status = PFVF_STATUS_MALICIOUS; goto out; @@ -2282,8 +2520,8 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn, rc = ecore_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id); if (rc != ECORE_SUCCESS) { - DP_ERR(p_hwfn, - "ecore_iov_vf_mbx_stop_vport returned error %d\n", rc); + DP_ERR(p_hwfn, "returned error %d\n", + rc); status = PFVF_STATUS_FAILURE; } @@ -2303,6 +2541,7 @@ static void ecore_iov_vf_mbx_start_rxq_resp(struct ecore_hwfn *p_hwfn, { struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx; struct pfvf_start_queue_resp_tlv *p_tlv; + struct ecore_dev *p_dev = p_hwfn->p_dev; struct vfpf_start_rxq_tlv *req; u16 length; @@ -2322,16 +2561,32 @@ static void ecore_iov_vf_mbx_start_rxq_resp(struct ecore_hwfn *p_hwfn, sizeof(struct channel_list_end_tlv)); /* Update the TLV with the response. - * The VF Rx producers are located in the vf zone. + * The VF Rx producers are located in the vf zone in E4, and in the + * queue zone in E5. */ if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) { req = &mbx->req_virt->start_rxq; - p_tlv->offset = - PXP_VF_BAR0_START_MSDM_ZONE_B + + if (ECORE_IS_E4(p_dev)) { + p_tlv->offset = + PXP_VF_BAR0_START_MSDM_ZONE_B + OFFSETOF(struct mstorm_vf_zone, non_trigger.eth_rx_queue_producers) + sizeof(struct eth_rx_prod_data) * req->rx_qid; + } else { + struct ecore_vf_queue *p_queue; + u16 qzone_id; + + p_queue = &vf->vf_queues[req->rx_qid]; + ecore_fw_l2_queue(p_hwfn, p_queue->fw_rx_qid, + &qzone_id); + + p_tlv->offset = + MSTORM_QZONE_START(p_dev) + + qzone_id * MSTORM_QZONE_SIZE(p_dev) + + OFFSETOF(struct mstorm_eth_queue_zone_e5, + rx_producers); + } } ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status); @@ -2409,7 +2664,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn, OSAL_MEMSET(¶ms, 0, sizeof(params)); params.queue_id = (u8)p_queue->fw_rx_qid; params.vport_id = vf->vport_id; - params.stats_id = vf->abs_vf_id + 0x10; + params.stats_id = vf->abs_vf_id + ECORE_VF_START_BIN_ID; /* Since IGU index is passed via sb_info, construct a dummy one */ OSAL_MEM_ZERO(&sb_dummy, sizeof(sb_dummy)); @@ -2428,16 +2683,31 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn, if (p_cid == OSAL_NULL) goto out; - /* The VF Rx producers are located in the vf zone. - * Legacy VFs have their producers in the queue zone, but they + /* The VF Rx producers are located in the vf zone in E4, and in the + * queue zone in E5. + * Legacy E4 VFs have their producers in the queue zone, but they * calculate the location by their own and clean them prior to this. 
*/ - if (!(vf_legacy & ECORE_QCID_LEGACY_VF_RX_PROD)) - REG_WR(p_hwfn, - GTT_BAR0_MAP_REG_MSDM_RAM + - MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, - req->rx_qid), - 0); + if (!(vf_legacy & ECORE_QCID_LEGACY_VF_RX_PROD)) { + if (ECORE_IS_E4(p_hwfn->p_dev)) { + REG_WR(p_hwfn, + GTT_BAR0_MAP_REG_MSDM_RAM + + MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, + req->rx_qid), + 0); + } else { + u16 qzone_id; + + if (ecore_fw_l2_queue(p_hwfn, p_queue->fw_rx_qid, + &qzone_id)) + goto out; + + REG_WR(p_hwfn, + GTT_BAR0_MAP_REG_MSDM_RAM + + MSTORM_ETH_PF_PRODS_OFFSET(qzone_id), + 0); + } + } rc = ecore_eth_rxq_start_ramrod(p_hwfn, p_cid, req->bd_max_bytes, @@ -2516,7 +2786,8 @@ ecore_iov_pf_validate_tunn_param(struct vfpf_update_tunn_param_tlv *p_req) bool b_update_requested = false; if (p_req->tun_mode_update_mask || p_req->update_tun_cls || - p_req->update_geneve_port || p_req->update_vxlan_port) + p_req->update_geneve_port || p_req->update_vxlan_port || + p_req->update_non_l2_vxlan) b_update_requested = true; return b_update_requested; @@ -2549,6 +2820,16 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn, goto send_resp; } + if (p_req->update_non_l2_vxlan) { + if (p_req->non_l2_vxlan_enable) { + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, + "non_l2_vxlan mode requested by VF\n"); + ecore_set_vxlan_no_l2_enable(p_hwfn, p_ptt, true); + } else { + ecore_set_vxlan_no_l2_enable(p_hwfn, p_ptt, false); + } + } + tunn.b_update_rx_cls = p_req->update_tun_cls; tunn.b_update_tx_cls = p_req->update_tun_cls; @@ -2642,7 +2923,7 @@ static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn, /* Update the TLV with the response */ if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) - p_tlv->offset = DB_ADDR_VF(cid, DQ_DEMS_LEGACY); + p_tlv->offset = DB_ADDR_VF(p_hwfn->p_dev, cid, DQ_DEMS_LEGACY); ecore_iov_send_response(p_hwfn, p_ptt, p_vf, length, status); } @@ -2685,7 +2966,7 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn, /* Acquire a new queue-cid */ params.queue_id = p_queue->fw_tx_qid; params.vport_id = vf->vport_id; - params.stats_id = vf->abs_vf_id + 0x10; + params.stats_id = vf->abs_vf_id + ECORE_VF_START_BIN_ID; /* Since IGU index is passed via sb_info, construct a dummy one */ OSAL_MEM_ZERO(&sb_dummy, sizeof(sb_dummy)); @@ -2877,7 +3158,7 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_vf_info *vf) { - struct ecore_queue_cid *handlers[ECORE_MAX_VF_CHAINS_PER_PF]; + struct ecore_queue_cid *handlers[ECORE_MAX_QUEUE_VF_CHAINS_PER_PF]; u16 length = sizeof(struct pfvf_def_resp_tlv); struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx; struct vfpf_update_rxq_tlv *req; @@ -2947,8 +3228,8 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn, static enum _ecore_status_t ecore_iov_vf_pf_update_mtu(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - struct ecore_vf_info *p_vf) + struct ecore_ptt *p_ptt, + struct ecore_vf_info *p_vf) { struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx; struct ecore_sp_vport_update_params params; @@ -2984,22 +3265,187 @@ ecore_iov_vf_pf_update_mtu(struct ecore_hwfn *p_hwfn, return rc; } +static u32 +ecore_iov_get_norm_region_conn(struct ecore_hwfn *p_hwfn) +{ + u32 roce_cids, core_cids, eth_cids; + + ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ROCE, &roce_cids); + ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_CORE, &core_cids); + ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, ð_cids); + + return roce_cids + core_cids + eth_cids; +} + +#define ECORE_DPM_TIMEOUT 1200 
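The function that follows splits the VF doorbell BAR into a "normal" region (one DEMS slot per L2/core/RoCE connection, rounded up to a page) and a PWM region that takes whatever is left. A worked example of the arithmetic, assuming a 128 KiB BAR and illustrative constants (the real values come from ecore_iov_vf_db_bar_size() and the per-protocol cid counts):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SZ  4096u
#define DEMS_SZ  4u      /* stand-in for ECORE_VF_DEMS_SIZE */
#define ROUNDUP(x, a) ((((x) + (a) - 1) / (a)) * (a))

int main(void)
{
	uint32_t db_bar_size = 1u << 17;   /* 128 KiB BAR, example value */
	uint32_t norm_conn   = 4096;       /* roce + core + eth cids     */
	uint32_t norm_regsize, pwm_regsize, min_addr_reg1;

	norm_regsize  = ROUNDUP(DEMS_SZ * norm_conn, PAGE_SZ);
	min_addr_reg1 = norm_regsize / 4096;   /* expressed in 4 KiB units */
	pwm_regsize   = db_bar_size - norm_regsize;

	/* Prints norm=0x4000 reg1=4 pwm=0x1c000 for these inputs */
	printf("norm=0x%x reg1=%u pwm=0x%x\n",
	       norm_regsize, min_addr_reg1, pwm_regsize);
	return 0;
}

If norm_regsize exceeded the BAR, or pwm_regsize fell below ECORE_MIN_PWM_REGION, the function below fails with ECORE_NORESOURCES and RDMA is disabled for VFs.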
+ +enum _ecore_status_t +ecore_iov_init_vf_doorbell_bar(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + enum _ecore_status_t rc = ECORE_SUCCESS; + struct ecore_common_dpm_info *dpm_info; + u32 norm_region_conn, min_addr_reg1; + struct ecore_dpi_info *dpi_info; + u32 pwm_regsize, norm_regsize; + u32 db_bar_size, n_cpus = 1; + u32 roce_edpm_mode; + u32 dems_shift; + u8 cond, cond2; + + if (!IS_PF_SRIOV(p_hwfn)) + return ECORE_SUCCESS; + + db_bar_size = ecore_iov_vf_db_bar_size(p_hwfn, p_ptt); + + if (db_bar_size) { + db_bar_size = 1 << db_bar_size; + + /* In CMT, doorbell bar should be split over both engines */ + if (ECORE_IS_CMT(p_hwfn->p_dev)) + db_bar_size /= 2; + } else { + db_bar_size = PXP_VF_BAR0_DQ_LENGTH; + } + + norm_region_conn = ecore_iov_get_norm_region_conn(p_hwfn); + norm_regsize = ROUNDUP(ECORE_VF_DEMS_SIZE * norm_region_conn, + OSAL_PAGE_SIZE); + min_addr_reg1 = norm_regsize / 4096; + pwm_regsize = db_bar_size - norm_regsize; + + /* Check that the normal and PWM sizes are valid */ + if (db_bar_size < norm_regsize) { + DP_ERR(p_hwfn->p_dev, + "Disabling RDMA for VFs: VF Doorbell BAR size is 0x%x but normal region required to map the L2 cids would be 0x%0x\n", + db_bar_size, norm_regsize); + return ECORE_NORESOURCES; + } + + if (pwm_regsize < ECORE_MIN_PWM_REGION) { + DP_ERR(p_hwfn->p_dev, + "Disabling RDMA for VFs: PWM region size 0x%0x is too small. Should be at least 0x%0x (Doorbell BAR size is 0x%x and normal region size is 0x%0x)\n", + pwm_regsize, ECORE_MIN_PWM_REGION, db_bar_size, + norm_regsize); + return ECORE_NORESOURCES; + } + + dpi_info = &p_hwfn->pf_iov_info->dpi_info; + dpm_info = &p_hwfn->pf_iov_info->dpm_info; + + dpi_info->dpi_bit_shift_addr = DORQ_REG_VF_DPI_BIT_SHIFT; + dpm_info->vf_cfg = true; + + dpi_info->wid_count = (u16)n_cpus; + + /* Check return codes from above calls */ + if (rc != ECORE_SUCCESS) { + DP_ERR(p_hwfn, + "Failed to allocate enough DPIs. Allocated %d but the current minimum is set to %d\n", + dpi_info->dpi_count, + p_hwfn->pf_params.rdma_pf_params.min_dpis); + + DP_ERR(p_hwfn, + "VF doorbell bar: normal_region_size=0x%x, pwm_region_size=0x%x, dpi_size=0x%x, dpi_count=%d, roce_edpm=%s, page_size=%u\n", + norm_regsize, pwm_regsize, dpi_info->dpi_size, + dpi_info->dpi_count, + ecore_edpm_enabled(p_hwfn, dpm_info) ? + "enabled" : "disabled", OSAL_PAGE_SIZE); + + return ECORE_NORESOURCES; + } + + DP_INFO(p_hwfn, + "VF doorbell bar: db_bar_size=0x%x, normal_region_size=0x%x, pwm_region_size=0x%x, dpi_size=0x%x, dpi_count=%d, roce_edpm=%s, page_size=%u\n", + db_bar_size, norm_regsize, pwm_regsize, dpi_info->dpi_size, + dpi_info->dpi_count, ecore_edpm_enabled(p_hwfn, dpm_info) ? + "enabled" : "disabled", OSAL_PAGE_SIZE); + + /* In E4 chip, in case DPM is enabled for VF, the DPM timeout needs to + * be increased. + */ + if (ECORE_IS_E4(p_hwfn->p_dev) && ecore_edpm_enabled(p_hwfn, dpm_info)) + ecore_wr(p_hwfn, p_ptt, DORQ_REG_DPM_TIMEOUT, + ECORE_DPM_TIMEOUT); + + dpi_info->dpi_start_offset = norm_regsize; + + /* Update the DORQ registers. + * DEMS size is configured as log2 of DWORDs, hence the division by 4. 
+ */ + dems_shift = OSAL_LOG2(ECORE_VF_DEMS_SIZE / 4); + ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_ICID_BIT_SHIFT_NORM, dems_shift); + ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_MIN_ADDR_REG1, min_addr_reg1); + + return ECORE_SUCCESS; +} + +static void +ecore_iov_vf_pf_async_event_resp(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_vf_info *p_vf) +{ + struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx; + struct pfvf_async_event_resp_tlv *p_resp; + struct event_ring_list_entry *p_eqe_ent; + u8 status = PFVF_STATUS_SUCCESS; + u8 i; + + mbx->offset = (u8 *)mbx->reply_virt; + p_resp = ecore_add_tlv(&mbx->offset, CHANNEL_TLV_ASYNC_EVENT, + sizeof(*p_resp)); + + OSAL_SPIN_LOCK(&p_vf->vf_eq_info.eq_list_lock); + for (i = 0; i < ECORE_PFVF_EQ_COUNT; i++) { + if (OSAL_LIST_IS_EMPTY(&p_vf->vf_eq_info.eq_list)) + break; + + p_eqe_ent = OSAL_LIST_FIRST_ENTRY(&p_vf->vf_eq_info.eq_list, + struct event_ring_list_entry, + list_entry); + OSAL_LIST_REMOVE_ENTRY(&p_eqe_ent->list_entry, + &p_vf->vf_eq_info.eq_list); + p_vf->vf_eq_info.eq_list_size--; + p_resp->eqs[i] = p_eqe_ent->eqe; + DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, + "vf %u: Send eq entry - op %x prot %x echo %x\n", + p_eqe_ent->eqe.vf_id, p_eqe_ent->eqe.opcode, + p_eqe_ent->eqe.protocol_id, + OSAL_LE16_TO_CPU(p_eqe_ent->eqe.echo)); + OSAL_FREE(p_hwfn->p_dev, p_eqe_ent); + } + + /* Notify VF about more pending completions */ + if (!OSAL_LIST_IS_EMPTY(&p_vf->vf_eq_info.eq_list)) { + p_vf->bulletin.p_virt->eq_completion++; + OSAL_IOV_BULLETIN_UPDATE(p_hwfn); + } + + OSAL_SPIN_UNLOCK(&p_vf->vf_eq_info.eq_list_lock); + + p_resp->eq_count = i; + ecore_add_tlv(&mbx->offset, CHANNEL_TLV_LIST_END, + sizeof(struct channel_list_end_tlv)); + ecore_iov_send_response(p_hwfn, p_ptt, p_vf, sizeof(*p_resp), status); +} + void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn, - void *p_tlvs_list, u16 req_type) + void *p_tlvs_list, u16 req_type) { struct channel_tlv *p_tlv = (struct channel_tlv *)p_tlvs_list; int len = 0; do { if (!p_tlv->length) { - DP_NOTICE(p_hwfn, true, "Zero length TLV found\n"); + DP_NOTICE(p_hwfn, true, + "Zero length TLV found\n"); return OSAL_NULL; } if (p_tlv->type == req_type) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Extended tlv type %s, length %d found\n", - qede_ecore_channel_tlvs_string[p_tlv->type], + ecore_channel_tlvs_string[p_tlv->type], p_tlv->length); return p_tlv; } @@ -3026,7 +3472,8 @@ ecore_iov_vp_update_act_param(struct ecore_hwfn *p_hwfn, u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE; p_act_tlv = (struct vfpf_vport_update_activate_tlv *) - ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv); + ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, + tlv); if (!p_act_tlv) return; @@ -3047,7 +3494,8 @@ ecore_iov_vp_update_vlan_param(struct ecore_hwfn *p_hwfn, u16 tlv = CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP; p_vlan_tlv = (struct vfpf_vport_update_vlan_strip_tlv *) - ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv); + ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, + tlv); if (!p_vlan_tlv) return; @@ -3071,15 +3519,14 @@ ecore_iov_vp_update_tx_switch(struct ecore_hwfn *p_hwfn, u16 tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH; p_tx_switch_tlv = (struct vfpf_vport_update_tx_switch_tlv *) - ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv); + ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, + tlv); if (!p_tx_switch_tlv) return; #ifndef ASIC_ONLY if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) { - DP_NOTICE(p_hwfn, false, - "FPGA: Ignore tx-switching configuration originating" - " from VFs\n"); + DP_NOTICE(p_hwfn, false, "FPGA: 
Ignore tx-switching configuration originating from VFs\n"); return; } #endif @@ -3099,7 +3546,8 @@ ecore_iov_vp_update_mcast_bin_param(struct ecore_hwfn *p_hwfn, u16 tlv = CHANNEL_TLV_VPORT_UPDATE_MCAST; p_mcast_tlv = (struct vfpf_vport_update_mcast_bin_tlv *) - ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv); + ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, + tlv); if (!p_mcast_tlv) return; @@ -3119,7 +3567,8 @@ ecore_iov_vp_update_accept_flag(struct ecore_hwfn *p_hwfn, u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM; p_accept_tlv = (struct vfpf_vport_update_accept_param_tlv *) - ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv); + ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, + tlv); if (!p_accept_tlv) return; @@ -3140,7 +3589,8 @@ ecore_iov_vp_update_accept_any_vlan(struct ecore_hwfn *p_hwfn, u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN; p_accept_any_vlan = (struct vfpf_vport_update_accept_any_vlan_tlv *) - ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv); + ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, + tlv); if (!p_accept_any_vlan) return; @@ -3165,7 +3615,8 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn, u16 i, q_idx; p_rss_tlv = (struct vfpf_vport_update_rss_tlv *) - ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv); + ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, + tlv); if (!p_rss_tlv) { p_data->rss_params = OSAL_NULL; return; @@ -3173,18 +3624,14 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn, OSAL_MEMSET(p_rss, 0, sizeof(struct ecore_rss_params)); - p_rss->update_rss_config = - !!(p_rss_tlv->update_rss_flags & - VFPF_UPDATE_RSS_CONFIG_FLAG); - p_rss->update_rss_capabilities = - !!(p_rss_tlv->update_rss_flags & - VFPF_UPDATE_RSS_CAPS_FLAG); - p_rss->update_rss_ind_table = - !!(p_rss_tlv->update_rss_flags & - VFPF_UPDATE_RSS_IND_TABLE_FLAG); - p_rss->update_rss_key = - !!(p_rss_tlv->update_rss_flags & - VFPF_UPDATE_RSS_KEY_FLAG); + p_rss->update_rss_config = !!(p_rss_tlv->update_rss_flags & + VFPF_UPDATE_RSS_CONFIG_FLAG); + p_rss->update_rss_capabilities = !!(p_rss_tlv->update_rss_flags & + VFPF_UPDATE_RSS_CAPS_FLAG); + p_rss->update_rss_ind_table = !!(p_rss_tlv->update_rss_flags & + VFPF_UPDATE_RSS_IND_TABLE_FLAG); + p_rss->update_rss_key = !!(p_rss_tlv->update_rss_flags & + VFPF_UPDATE_RSS_KEY_FLAG); p_rss->rss_enable = p_rss_tlv->rss_enable; p_rss->rss_eng_id = vf->rss_eng_id; @@ -3231,7 +3678,8 @@ ecore_iov_vp_update_sge_tpa_param(struct ecore_hwfn *p_hwfn, u16 tlv = CHANNEL_TLV_VPORT_UPDATE_SGE_TPA; p_sge_tpa_tlv = (struct vfpf_vport_update_sge_tpa_tlv *) - ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv); + ecore_iov_search_list_tlvs(p_hwfn, + p_mbx->req_virt, tlv); if (!p_sge_tpa_tlv) { p_data->sge_tpa_params = OSAL_NULL; @@ -3241,27 +3689,42 @@ ecore_iov_vp_update_sge_tpa_param(struct ecore_hwfn *p_hwfn, OSAL_MEMSET(p_sge_tpa, 0, sizeof(struct ecore_sge_tpa_params)); p_sge_tpa->update_tpa_en_flg = - !!(p_sge_tpa_tlv->update_sge_tpa_flags & VFPF_UPDATE_TPA_EN_FLAG); + !!(p_sge_tpa_tlv->update_sge_tpa_flags & + VFPF_UPDATE_TPA_EN_FLAG); p_sge_tpa->update_tpa_param_flg = - !!(p_sge_tpa_tlv->update_sge_tpa_flags & - VFPF_UPDATE_TPA_PARAM_FLAG); + !!(p_sge_tpa_tlv->update_sge_tpa_flags & + VFPF_UPDATE_TPA_PARAM_FLAG); p_sge_tpa->tpa_ipv4_en_flg = - !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_IPV4_EN_FLAG); + !!(p_sge_tpa_tlv->sge_tpa_flags & + VFPF_TPA_IPV4_EN_FLAG); p_sge_tpa->tpa_ipv6_en_flg = - !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_IPV6_EN_FLAG); + !!(p_sge_tpa_tlv->sge_tpa_flags 
& + VFPF_TPA_IPV6_EN_FLAG); p_sge_tpa->tpa_pkt_split_flg = - !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_PKT_SPLIT_FLAG); + !!(p_sge_tpa_tlv->sge_tpa_flags & + VFPF_TPA_PKT_SPLIT_FLAG); p_sge_tpa->tpa_hdr_data_split_flg = - !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_HDR_DATA_SPLIT_FLAG); + !!(p_sge_tpa_tlv->sge_tpa_flags & + VFPF_TPA_HDR_DATA_SPLIT_FLAG); p_sge_tpa->tpa_gro_consistent_flg = - !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_GRO_CONSIST_FLAG); + !!(p_sge_tpa_tlv->sge_tpa_flags & + VFPF_TPA_GRO_CONSIST_FLAG); + p_sge_tpa->tpa_ipv4_tunn_en_flg = + !!(p_sge_tpa_tlv->sge_tpa_flags & + VFPF_TPA_TUNN_IPV4_EN_FLAG); + p_sge_tpa->tpa_ipv6_tunn_en_flg = + !!(p_sge_tpa_tlv->sge_tpa_flags & + VFPF_TPA_TUNN_IPV6_EN_FLAG); p_sge_tpa->tpa_max_aggs_num = p_sge_tpa_tlv->tpa_max_aggs_num; p_sge_tpa->tpa_max_size = p_sge_tpa_tlv->tpa_max_size; - p_sge_tpa->tpa_min_size_to_start = p_sge_tpa_tlv->tpa_min_size_to_start; - p_sge_tpa->tpa_min_size_to_cont = p_sge_tpa_tlv->tpa_min_size_to_cont; - p_sge_tpa->max_buffers_per_cqe = p_sge_tpa_tlv->max_buffers_per_cqe; + p_sge_tpa->tpa_min_size_to_start = + p_sge_tpa_tlv->tpa_min_size_to_start; + p_sge_tpa->tpa_min_size_to_cont = + p_sge_tpa_tlv->tpa_min_size_to_cont; + p_sge_tpa->max_buffers_per_cqe = + p_sge_tpa_tlv->max_buffers_per_cqe; p_data->sge_tpa_params = p_sge_tpa; @@ -3284,8 +3747,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn, /* Valiate PF can send such a request */ if (!vf->vport_instance) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "No VPORT instance available for VF[%d]," - " failing vport update\n", + "No VPORT instance available for VF[%d], failing vport update\n", vf->abs_vf_id); status = PFVF_STATUS_FAILURE; goto out; @@ -3298,7 +3760,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn, } OSAL_MEMSET(¶ms, 0, sizeof(params)); - params.opaque_fid = vf->opaque_fid; + params.opaque_fid = vf->opaque_fid; params.vport_id = vf->vport_id; params.rss_params = OSAL_NULL; @@ -3323,6 +3785,24 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn, ecore_iov_vp_update_rss_param(p_hwfn, vf, ¶ms, p_rss_params, mbx, &tlvs_mask, &tlvs_accepted); + /* Check whether skipping accept flags update is required, e.g. when + * Switchdev mode is enabled. If this is the case and the VF requested + * it, the accept flags update will be skipped and the VF will be + * responded with PFVF_STATUS_SUCCESS. + */ + if (ecore_get_skip_accept_flags_update(p_hwfn)) { + u16 accept_param = 1 << ECORE_IOV_VP_UPDATE_ACCEPT_PARAM; + + if (tlvs_accepted & accept_param) { + if (tlvs_accepted == accept_param) { + goto out; + } else { + params.accept_flags.update_rx_mode_config = 0; + params.accept_flags.update_tx_mode_config = 0; + } + } + } + /* Just log a message if there is no single extended tlv in buffer. 
* When all features of vport update ramrod would be requested by VF * as extended TLVs in buffer then an error can be returned in response @@ -3339,8 +3819,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn, if (!tlvs_accepted) { if (tlvs_mask) DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "Upper-layer prevents said VF" - " configuration\n"); + "Upper-layer prevents said VF configuration\n"); else DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "No feature tlvs found for vport update\n"); @@ -3361,10 +3840,9 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn, ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status); } -static enum _ecore_status_t -ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn, - struct ecore_vf_info *p_vf, - struct ecore_filter_ucast *p_params) +static enum _ecore_status_t ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn, + struct ecore_vf_info *p_vf, + struct ecore_filter_ucast *p_params) { int i; @@ -3379,9 +3857,8 @@ ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn, } if (i == ECORE_ETH_VF_NUM_VLAN_FILTERS + 1) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF [%d] - Tries to remove a non-existing" - " vlan\n", - p_vf->relative_vf_id); + "VF [%d] - Tries to remove a non-existing vlan\n", + p_vf->relative_vf_id); return ECORE_INVAL; } } else if (p_params->opcode == ECORE_FILTER_REPLACE || @@ -3409,8 +3886,7 @@ ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn, if (i == ECORE_ETH_VF_NUM_VLAN_FILTERS + 1) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF [%d] - Tries to configure more than %d" - " vlan filters\n", + "VF [%d] - Tries to configure more than %d vlan filters\n", p_vf->relative_vf_id, ECORE_ETH_VF_NUM_VLAN_FILTERS + 1); return ECORE_INVAL; @@ -3420,15 +3896,14 @@ ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -static enum _ecore_status_t -ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn, - struct ecore_vf_info *p_vf, - struct ecore_filter_ucast *p_params) +static enum _ecore_status_t ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn, + struct ecore_vf_info *p_vf, + struct ecore_filter_ucast *p_params) { - char empty_mac[ETH_ALEN]; + char empty_mac[ECORE_ETH_ALEN]; int i; - OSAL_MEM_ZERO(empty_mac, ETH_ALEN); + OSAL_MEM_ZERO(empty_mac, ECORE_ETH_ALEN); /* If we're in forced-mode, we don't allow any change */ /* TODO - this would change if we were ever to implement logic for @@ -3451,9 +3926,9 @@ ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn, if (p_params->opcode == ECORE_FILTER_REMOVE) { for (i = 0; i < ECORE_ETH_VF_NUM_MAC_FILTERS; i++) { if (!OSAL_MEMCMP(p_vf->shadow_config.macs[i], - p_params->mac, ETH_ALEN)) { + p_params->mac, ECORE_ETH_ALEN)) { OSAL_MEM_ZERO(p_vf->shadow_config.macs[i], - ETH_ALEN); + ECORE_ETH_ALEN); break; } } @@ -3466,7 +3941,7 @@ ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn, } else if (p_params->opcode == ECORE_FILTER_REPLACE || p_params->opcode == ECORE_FILTER_FLUSH) { for (i = 0; i < ECORE_ETH_VF_NUM_MAC_FILTERS; i++) - OSAL_MEM_ZERO(p_vf->shadow_config.macs[i], ETH_ALEN); + OSAL_MEM_ZERO(p_vf->shadow_config.macs[i], ECORE_ETH_ALEN); } /* List the new MAC address */ @@ -3476,9 +3951,9 @@ ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn, for (i = 0; i < ECORE_ETH_VF_NUM_MAC_FILTERS; i++) { if (!OSAL_MEMCMP(p_vf->shadow_config.macs[i], - empty_mac, ETH_ALEN)) { + empty_mac, ECORE_ETH_ALEN)) { OSAL_MEMCPY(p_vf->shadow_config.macs[i], - p_params->mac, ETH_ALEN); + p_params->mac, ECORE_ETH_ALEN); DP_VERBOSE(p_hwfn, 
ECORE_MSG_IOV, "Added MAC at %d entry in shadow\n", i); break; @@ -3524,6 +3999,13 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn, struct ecore_filter_ucast params; enum _ecore_status_t rc; + /* Check whether skipping unicast filter update is required, e.g. when + * Switchdev mode is enabled. If this is the case, the VF will be + * responded with PFVF_STATUS_SUCCESS. + */ + if (ecore_get_skip_ucast_filter_update(p_hwfn)) + goto out; + /* Prepare the unicast filter params */ OSAL_MEMSET(¶ms, 0, sizeof(struct ecore_filter_ucast)); req = &mbx->req_virt->ucast_filter; @@ -3535,12 +4017,11 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn, params.is_tx_filter = 1; params.vport_to_remove_from = vf->vport_id; params.vport_to_add_to = vf->vport_id; - OSAL_MEMCPY(params.mac, req->mac, ETH_ALEN); + OSAL_MEMCPY(params.mac, req->mac, ECORE_ETH_ALEN); params.vlan = req->vlan; DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF[%d]: opcode 0x%02x type 0x%02x [%s %s] [vport 0x%02x]" - " MAC %02x:%02x:%02x:%02x:%02x:%02x, vlan 0x%04x\n", + "VF[%d]: opcode 0x%02x type 0x%02x [%s %s] [vport 0x%02x] MAC %02x:%02x:%02x:%02x:%02x:%02x, vlan 0x%04x\n", vf->abs_vf_id, params.opcode, params.type, params.is_rx_filter ? "RX" : "", params.is_tx_filter ? "TX" : "", @@ -3550,8 +4031,7 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn, if (!vf->vport_instance) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "No VPORT instance available for VF[%d]," - " failing ucast MAC configuration\n", + "No VPORT instance available for VF[%d], failing ucast MAC configuration\n", vf->abs_vf_id); status = PFVF_STATUS_FAILURE; goto out; @@ -3581,7 +4061,7 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn, if ((p_bulletin->valid_bitmap & (1 << MAC_ADDR_FORCED)) && (params.type == ECORE_FILTER_MAC || params.type == ECORE_FILTER_MAC_VLAN)) { - if (OSAL_MEMCMP(p_bulletin->mac, params.mac, ETH_ALEN) || + if (OSAL_MEMCMP(p_bulletin->mac, params.mac, ECORE_ETH_ALEN) || (params.opcode != ECORE_FILTER_ADD && params.opcode != ECORE_FILTER_REPLACE)) status = PFVF_STATUS_FORCED; @@ -3606,6 +4086,43 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn, sizeof(struct pfvf_def_resp_tlv), status); } +static enum _ecore_status_t +ecore_iov_vf_pf_bulletin_update_mac(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_vf_info *p_vf) +{ + struct ecore_bulletin_content *p_bulletin = p_vf->bulletin.p_virt; + struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx; + enum _ecore_status_t rc = ECORE_SUCCESS; + struct vfpf_bulletin_update_mac_tlv *p_req; + u8 status = PFVF_STATUS_SUCCESS; + u8 *p_mac; + + if (!p_vf->p_vf_info.is_trusted_configured) { + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, + "Blocking bulletin update request from untrusted VF[%d]\n", + p_vf->abs_vf_id); + status = PFVF_STATUS_NOT_SUPPORTED; + rc = ECORE_INVAL; + goto send_status; + } + + p_req = &mbx->req_virt->bulletin_update_mac; + p_mac = p_req->mac; + OSAL_MEMCPY(p_bulletin->mac, p_mac, ECORE_ETH_ALEN); + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, + "Updated bulletin of VF[%d] with requested MAC[%02x:%02x:%02x:%02x:%02x:%02x]\n", + p_vf->abs_vf_id, + p_mac[0], p_mac[1], p_mac[2], p_mac[3], p_mac[4], p_mac[5]); + +send_status: + ecore_iov_prepare_resp(p_hwfn, p_ptt, p_vf, + CHANNEL_TLV_BULLETIN_UPDATE_MAC, + sizeof(struct pfvf_def_resp_tlv), + status); + return rc; +} + static void ecore_iov_vf_mbx_int_cleanup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct ecore_vf_info *vf) @@ -3625,10 +4142,10 @@ 
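The shadow-MAC bookkeeping above is a fixed-size table keyed by all-zero slots: an add takes the first empty slot, a remove zeroes the matching one, and replace/flush clears the whole table first. A self-contained sketch of the same policy (the slot count and MAC length are stand-ins for the real ECORE_ETH_VF_NUM_MAC_FILTERS and ECORE_ETH_ALEN):

#include <stdint.h>
#include <string.h>

#define NUM_MAC_SLOTS 4
#define MAC_LEN       6

static uint8_t shadow_macs[NUM_MAC_SLOTS][MAC_LEN];

/* Add: take the first all-zero slot; -1 if the shadow is full. */
static int shadow_mac_add(const uint8_t *mac)
{
	static const uint8_t empty[MAC_LEN];
	int i;

	for (i = 0; i < NUM_MAC_SLOTS; i++) {
		if (!memcmp(shadow_macs[i], empty, MAC_LEN)) {
			memcpy(shadow_macs[i], mac, MAC_LEN);
			return i;
		}
	}
	return -1;
}

/* Remove: zero the matching slot; -1 if the MAC was never shadowed. */
static int shadow_mac_remove(const uint8_t *mac)
{
	int i;

	for (i = 0; i < NUM_MAC_SLOTS; i++) {
		if (!memcmp(shadow_macs[i], mac, MAC_LEN)) {
			memset(shadow_macs[i], 0, MAC_LEN);
			return i;
		}
	}
	return -1;
}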
static void ecore_iov_vf_mbx_int_cleanup(struct ecore_hwfn *p_hwfn, static void ecore_iov_vf_mbx_close(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, - struct ecore_vf_info *vf) + struct ecore_vf_info *vf) { - u16 length = sizeof(struct pfvf_def_resp_tlv); - u8 status = PFVF_STATUS_SUCCESS; + u16 length = sizeof(struct pfvf_def_resp_tlv); + u8 status = PFVF_STATUS_SUCCESS; /* Disable Interrupts for VF */ ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0); @@ -3661,6 +4178,18 @@ static void ecore_iov_vf_mbx_release(struct ecore_hwfn *p_hwfn, status = PFVF_STATUS_FAILURE; } + /* Load requests for this VF will result with load-cancel + * response and an execution error - pretend to the specific VF + * for the split. + */ + ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_vf->concrete_fid); + + ecore_wr(p_hwfn, p_ptt, CCFC_REG_WEAK_ENABLE_VF, 0x0); + ecore_wr(p_hwfn, p_ptt, TCFC_REG_WEAK_ENABLE_VF, 0x0); + + /* unpretend */ + ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid); + p_vf->state = VF_STOPPED; } @@ -3910,7 +4439,8 @@ ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn, static enum _ecore_status_t ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn, - struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt) + struct ecore_vf_info *p_vf, + struct ecore_ptt *p_ptt) { int cnt; u32 val; @@ -3923,11 +4453,11 @@ ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn, break; OSAL_MSLEEP(20); } + ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid); if (cnt == 50) { - DP_ERR(p_hwfn, - "VF[%d] - dorq failed to cleanup [usage 0x%08x]\n", + DP_ERR(p_hwfn, "VF[%d] - dorq failed to cleanup [usage 0x%08x]\n", p_vf->abs_vf_id, val); return ECORE_TIMEOUT; } @@ -3939,7 +4469,8 @@ ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn, static enum _ecore_status_t ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn, - struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt) + struct ecore_vf_info *p_vf, + struct ecore_ptt *p_ptt) { u32 prod, cons[MAX_NUM_EXT_VOQS], distance[MAX_NUM_EXT_VOQS], tmp; u8 max_phys_tcs_per_port = p_hwfn->qm_info.max_phys_tcs_per_port; @@ -3949,6 +4480,9 @@ ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn, u8 port_id, tc, tc_id = 0, voq = 0; int cnt; + OSAL_MEM_ZERO(cons, MAX_NUM_EXT_VOQS * sizeof(u32)); + OSAL_MEM_ZERO(distance, MAX_NUM_EXT_VOQS * sizeof(u32)); + /* Read initial consumers & producers */ for (port_id = 0; port_id < max_ports_per_engine; port_id++) { /* "max_phys_tcs_per_port" active TCs + 1 pure LB TC */ @@ -3956,10 +4490,11 @@ ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn, tc_id = (tc < max_phys_tcs_per_port) ? tc : PURE_LB_TC; - voq = VOQ(port_id, tc_id, max_phys_tcs_per_port); + voq = ecore_get_ext_voq(p_hwfn, port_id, tc_id, + max_phys_tcs_per_port); cons[voq] = ecore_rd(p_hwfn, p_ptt, cons_voq0_addr + voq * 0x40); - prod = ecore_rd(p_hwfn, p_ptt, + prod = ecore_rd(p_hwfn, p_ptt, prod_voq0_addr + voq * 0x40); distance[voq] = prod - cons[voq]; } @@ -3975,13 +4510,13 @@ ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn, tc_id = (tc < max_phys_tcs_per_port) ? 
tc : PURE_LB_TC; - voq = VOQ(port_id, tc_id, - max_phys_tcs_per_port); - tmp = ecore_rd(p_hwfn, p_ptt, + voq = ecore_get_ext_voq(p_hwfn, port_id, tc_id, + max_phys_tcs_per_port); + tmp = ecore_rd(p_hwfn, p_ptt, cons_voq0_addr + voq * 0x40); - if (distance[voq] > tmp - cons[voq]) - break; - } + if (distance[voq] > tmp - cons[voq]) + break; + } if (tc == max_phys_tcs_per_port + 1) tc = 0; @@ -4011,7 +4546,10 @@ static enum _ecore_status_t ecore_iov_vf_flr_poll(struct ecore_hwfn *p_hwfn, { enum _ecore_status_t rc; - /* TODO - add SRC and TM polling once we add storage IOV */ + /* TODO - add SRC polling and Tm task once we add storage/iwarp IOV */ + rc = ecore_iov_timers_stop(p_hwfn, p_vf, p_ptt); + if (rc) + return rc; rc = ecore_iov_vf_flr_poll_dorq(p_hwfn, p_vf, p_ptt); if (rc) @@ -4026,8 +4564,9 @@ static enum _ecore_status_t ecore_iov_vf_flr_poll(struct ecore_hwfn *p_hwfn, static enum _ecore_status_t ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - u16 rel_vf_id, u32 *ack_vfs) + struct ecore_ptt *p_ptt, + u16 rel_vf_id, + u32 *ack_vfs) { struct ecore_vf_info *p_vf; enum _ecore_status_t rc = ECORE_SUCCESS; @@ -4036,9 +4575,10 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, if (!p_vf) return ECORE_SUCCESS; - if (p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] & - (1ULL << (rel_vf_id % 64))) { + if (ECORE_VF_ARRAY_GET_VFID(p_hwfn->pf_iov_info->pending_flr, + rel_vf_id)) { u16 vfid = p_vf->abs_vf_id; + int i; /* TODO - should we lock channel? */ @@ -4059,7 +4599,8 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, rc = ecore_final_cleanup(p_hwfn, p_ptt, vfid, true); if (rc) { /* TODO - what's now? What a mess.... */ - DP_ERR(p_hwfn, "Failed handle FLR of VF[%d]\n", vfid); + DP_ERR(p_hwfn, "Failed handle FLR of VF[%d]\n", + vfid); return rc; } @@ -4087,18 +4628,20 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, if (p_vf->state == VF_RESET) p_vf->state = VF_STOPPED; ack_vfs[vfid / 32] |= (1 << (vfid % 32)); - p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &= - ~(1ULL << (rel_vf_id % 64)); + ECORE_VF_ARRAY_CLEAR_VFID(p_hwfn->pf_iov_info->pending_flr, + rel_vf_id); p_vf->vf_mbx.b_pending_msg = false; } return rc; } -enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +LNX_STATIC enum _ecore_status_t +ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) + { u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS]; + u32 err_data, err_valid, int_sts; enum _ecore_status_t rc = ECORE_SUCCESS; u16 i; @@ -4110,6 +4653,15 @@ enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, */ OSAL_MSLEEP(100); + /* Clear expected VF_DISABLED_ACCESS error after pci passthrough */ + err_data = ecore_rd(p_hwfn, p_ptt, PSWHST_REG_VF_DISABLED_ERROR_DATA); + err_valid = ecore_rd(p_hwfn, p_ptt, PSWHST_REG_VF_DISABLED_ERROR_VALID); + int_sts = ecore_rd(p_hwfn, p_ptt, PSWHST_REG_INT_STS_CLR); + if (int_sts && int_sts != PSWHST_REG_INT_STS_CLR_HST_VF_DISABLED_ACCESS) + DP_NOTICE(p_hwfn, false, + "PSWHST int_sts was 0x%x, err_data = 0x%x, err_valid = 0x%x\n", + int_sts, err_data, err_valid); + for (i = 0; i < p_hwfn->p_dev->p_iov_info->total_vfs; i++) ecore_iov_execute_vf_flr_cleanup(p_hwfn, p_ptt, i, ack_vfs); @@ -4119,7 +4671,9 @@ enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u16 rel_vf_id) + struct ecore_ptt *p_ptt, + u16 rel_vf_id) 
+ { u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS]; enum _ecore_status_t rc = ECORE_SUCCESS; @@ -4135,14 +4689,18 @@ ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, return rc; } -bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs) +bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, + u32 *p_disabled_vfs) { + u16 i, vf_bitmap_size; bool found = false; - u16 i; DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n"); - for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++) + vf_bitmap_size = ECORE_IS_E4(p_hwfn->p_dev) ? + VF_BITMAP_SIZE_IN_DWORDS : + EXT_VF_BITMAP_SIZE_IN_DWORDS; + for (i = 0; i < vf_bitmap_size; i++) DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "[%08x,...,%08x]: %08x\n", i * 32, (i + 1) * 32 - 1, p_disabled_vfs[i]); @@ -4163,7 +4721,7 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs) vfid = p_vf->abs_vf_id; if ((1 << (vfid % 32)) & p_disabled_vfs[vfid / 32]) { - u64 *p_flr = p_hwfn->pf_iov_info->pending_flr; + u64 *p_flr = p_hwfn->pf_iov_info->pending_flr; u16 rel_vf_id = p_vf->relative_vf_id; DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, @@ -4177,7 +4735,7 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs) * MFW will not trigger an additional attention for * VF flr until ACKs, we're safe. */ - p_flr[rel_vf_id / 64] |= 1ULL << (rel_vf_id % 64); + ECORE_VF_ARRAY_SET_VFID(p_flr, rel_vf_id); found = true; } } @@ -4185,11 +4743,11 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs) return found; } -void ecore_iov_get_link(struct ecore_hwfn *p_hwfn, - u16 vfid, - struct ecore_mcp_link_params *p_params, - struct ecore_mcp_link_state *p_link, - struct ecore_mcp_link_capabilities *p_caps) +LNX_STATIC void ecore_iov_get_link(struct ecore_hwfn *p_hwfn, + u16 vfid, + struct ecore_mcp_link_params *p_params, + struct ecore_mcp_link_state *p_link, + struct ecore_mcp_link_capabilities *p_caps) { struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false); struct ecore_bulletin_content *p_bulletin; @@ -4207,8 +4765,24 @@ void ecore_iov_get_link(struct ecore_hwfn *p_hwfn, __ecore_vf_get_link_caps(p_caps, p_bulletin); } -void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, int vfid) +static bool +ecore_iov_vf_has_error(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + struct ecore_vf_info *p_vf) +{ + u16 abs_vf_id = ECORE_VF_ABS_ID(p_hwfn, p_vf); + u32 hw_addr; + + hw_addr = PGLUE_B_REG_WAS_ERROR_VF_31_0 + (abs_vf_id >> 5) * 4; + + if (ecore_rd(p_hwfn, p_ptt, hw_addr) & (1 << (abs_vf_id & 0x1f))) + return true; + + return false; +} + +LNX_STATIC void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + int vfid) { struct ecore_iov_vf_mbx *mbx; struct ecore_vf_info *p_vf; @@ -4243,8 +4817,8 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn, /* Lock the per vf op mutex and note the locker's identity. * The unlock will take place in mbx response. 
*/ - ecore_iov_lock_vf_pf_channel(p_hwfn, - p_vf, mbx->first_tlv.tl.type); + ecore_iov_lock_vf_pf_channel(p_hwfn, p_vf, + mbx->first_tlv.tl.type); /* check if tlv type is known */ if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type) && @@ -4299,9 +4873,19 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn, case CHANNEL_TLV_COALESCE_READ: ecore_iov_vf_pf_get_coalesce(p_hwfn, p_ptt, p_vf); break; + case CHANNEL_TLV_BULLETIN_UPDATE_MAC: + ecore_iov_vf_pf_bulletin_update_mac(p_hwfn, p_ptt, + p_vf); + break; case CHANNEL_TLV_UPDATE_MTU: ecore_iov_vf_pf_update_mtu(p_hwfn, p_ptt, p_vf); break; + case CHANNEL_TLV_ASYNC_EVENT: + ecore_iov_vf_pf_async_event_resp(p_hwfn, p_ptt, p_vf); + break; + case CHANNEL_TLV_SOFT_FLR: + ecore_mcp_vf_flr(p_hwfn, p_ptt, p_vf->relative_vf_id); + break; } } else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) { /* If we've received a message from a VF we consider malicious @@ -4332,13 +4916,23 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn, * VF driver and is sending garbage over the channel. */ DP_NOTICE(p_hwfn, false, - "VF[%02x]: unknown TLV. type %04x length %04x" - " padding %08x reply address %lu\n", + "VF[%02x]: unknown TLV. type %04x length %04x padding %08x " + "reply address %" PRIu64 "\n", p_vf->abs_vf_id, mbx->first_tlv.tl.type, mbx->first_tlv.tl.length, mbx->first_tlv.padding, - (unsigned long)mbx->first_tlv.reply_address); + mbx->first_tlv.reply_address); + + if (ecore_iov_vf_has_error(p_hwfn, p_ptt, p_vf)) { + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, + "VF abs[%d]:rel[%d] has error, issue FLR\n", + p_vf->abs_vf_id, p_vf->relative_vf_id); + ecore_mcp_vf_flr(p_hwfn, p_ptt, p_vf->relative_vf_id); + ecore_iov_unlock_vf_pf_channel(p_hwfn, p_vf, + mbx->first_tlv.tl.type); + return; + } /* Try replying in case reply address matches the acquisition's * posted address. @@ -4352,8 +4946,7 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn, PFVF_STATUS_NOT_SUPPORTED); else DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF[%02x]: Can't respond to TLV -" - " no valid reply address\n", + "VF[%02x]: Can't respond to TLV - no valid reply address\n", p_vf->abs_vf_id); } @@ -4366,8 +4959,8 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn, #endif } -void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn, - u64 *events) +LNX_STATIC void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn, + u64 *events) { int i; @@ -4378,7 +4971,7 @@ void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn, p_vf = &p_hwfn->pf_iov_info->vfs_array[i]; if (p_vf->vf_mbx.b_pending_msg) - events[i / 64] |= 1ULL << (i % 64); + ECORE_VF_ARRAY_SET_VFID(events, i); } } @@ -4389,8 +4982,7 @@ ecore_sriov_get_vf_from_absid(struct ecore_hwfn *p_hwfn, u16 abs_vfid) if (!_ecore_iov_pf_sanity_check(p_hwfn, (int)abs_vfid - min, false)) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "Got indication for VF [abs 0x%08x] that cannot be" - " handled by PF\n", + "Got indication for VF [abs 0x%08x] that cannot be handled by PF\n", abs_vfid); return OSAL_NULL; } @@ -4411,7 +5003,8 @@ static enum _ecore_status_t ecore_sriov_vfpf_msg(struct ecore_hwfn *p_hwfn, /* List the physical address of the request so that handler * could later on copy the message from it. 
*/ - p_vf->vf_mbx.pending_req = (((u64)vf_msg->hi) << 32) | vf_msg->lo; + p_vf->vf_mbx.pending_req = (((u64)vf_msg->hi) << 32) | + vf_msg->lo; p_vf->vf_mbx.b_pending_msg = true; @@ -4447,7 +5040,8 @@ static enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn, u8 opcode, __le16 echo, union event_ring_data *data, - u8 OSAL_UNUSED fw_return_code) + u8 OSAL_UNUSED fw_return_code, + u8 OSAL_UNUSED vf_id) { switch (opcode) { case COMMON_EVENT_VF_PF_CHANNEL: @@ -4467,10 +5061,11 @@ static enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn, } } -bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) +bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { - return !!(p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] & - (1ULL << (rel_vf_id % 64))); + return !!(ECORE_VF_ARRAY_GET_VFID(p_hwfn->pf_iov_info->pending_flr, + rel_vf_id)); } u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) @@ -4482,15 +5077,17 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) goto out; for (i = rel_vf_id; i < p_iov->total_vfs; i++) - if (ecore_iov_is_valid_vfid(p_hwfn, rel_vf_id, true, false)) + if (ecore_iov_is_valid_vfid(p_hwfn, i, true, false)) return i; out: - return MAX_NUM_VFS_K2; + return MAX_NUM_VFS; } -enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *ptt, int vfid) +LNX_STATIC enum _ecore_status_t +ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *ptt, + int vfid) { struct dmae_params params; struct ecore_vf_info *vf_info; @@ -4507,9 +5104,11 @@ enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn, if (ecore_dmae_host2host(p_hwfn, ptt, vf_info->vf_mbx.pending_req, vf_info->vf_mbx.req_phys, - sizeof(union vfpf_tlvs) / 4, ¶ms)) { + sizeof(union vfpf_tlvs) / 4, + ¶ms)) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "Failed to copy message from VF 0x%02x\n", vfid); + "Failed to copy message from VF 0x%02x\n", + vfid); return ECORE_IO; } @@ -4517,21 +5116,20 @@ enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn, - u8 *mac, int vfid) +LNX_STATIC void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn, + u8 *mac, int vfid) { struct ecore_vf_info *vf_info; u64 feature; vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true); if (!vf_info) { - DP_NOTICE(p_hwfn->p_dev, true, - "Can not set forced MAC, invalid vfid [%d]\n", vfid); + DP_NOTICE(p_hwfn->p_dev, true, "Can not set forced MAC, invalid vfid [%d]\n", + vfid); return; } if (vf_info->b_malicious) { - DP_NOTICE(p_hwfn->p_dev, false, - "Can't set forced MAC to malicious VF [%d]\n", + DP_NOTICE(p_hwfn->p_dev, false, "Can't set forced MAC to malicious VF [%d]\n", vfid); return; } @@ -4550,40 +5148,39 @@ void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn, } OSAL_MEMCPY(vf_info->bulletin.p_virt->mac, - mac, ETH_ALEN); + mac, ECORE_ETH_ALEN); vf_info->bulletin.p_virt->valid_bitmap |= feature; ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature); } -enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn, - u8 *mac, int vfid) +LNX_STATIC enum _ecore_status_t +ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn, u8 *mac, int vfid) { struct ecore_vf_info *vf_info; u64 feature; vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true); if (!vf_info) { - DP_NOTICE(p_hwfn->p_dev, true, - "Can not set MAC, invalid vfid [%d]\n", 
vfid); + DP_NOTICE(p_hwfn->p_dev, true, "Can not set MAC, invalid vfid [%d]\n", + vfid); return ECORE_INVAL; } if (vf_info->b_malicious) { - DP_NOTICE(p_hwfn->p_dev, false, - "Can't set MAC to malicious VF [%d]\n", + DP_NOTICE(p_hwfn->p_dev, false, "Can't set MAC to malicious VF [%d]\n", vfid); return ECORE_INVAL; } if (vf_info->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED)) { - DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "Can not set MAC, Forced MAC is configured\n"); + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Can not set MAC, Forced MAC is configured\n"); return ECORE_INVAL; } feature = 1 << VFPF_BULLETIN_MAC_ADDR; - OSAL_MEMCPY(vf_info->bulletin.p_virt->mac, mac, ETH_ALEN); + OSAL_MEMCPY(vf_info->bulletin.p_virt->mac, + mac, ECORE_ETH_ALEN); vf_info->bulletin.p_virt->valid_bitmap |= feature; @@ -4597,7 +5194,8 @@ enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn, #ifndef LINUX_REMOVE enum _ecore_status_t ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn, - bool b_untagged_only, int vfid) + bool b_untagged_only, + int vfid) { struct ecore_vf_info *vf_info; u64 feature; @@ -4621,8 +5219,7 @@ ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn, */ if (vf_info->state == VF_ENABLED) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "Can't support untagged change for vfid[%d] -" - " VF is already active\n", + "Can't support untagged change for vfid[%d] - VF is already active\n", vfid); return ECORE_INVAL; } @@ -4631,11 +5228,11 @@ ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn, * VF initialization. */ feature = (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT) | - (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED); + (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED); vf_info->bulletin.p_virt->valid_bitmap |= feature; vf_info->bulletin.p_virt->default_only_untagged = b_untagged_only ? 
1 - : 0; + : 0; return ECORE_SUCCESS; } @@ -4653,16 +5250,15 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid, } #endif -void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn, - u16 pvid, int vfid) +LNX_STATIC void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn, + u16 pvid, int vfid) { struct ecore_vf_info *vf_info; u64 feature; vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true); if (!vf_info) { - DP_NOTICE(p_hwfn->p_dev, true, - "Can not set forced MAC, invalid vfid [%d]\n", + DP_NOTICE(p_hwfn->p_dev, true, "Can not set forced MAC, invalid vfid [%d]\n", vfid); return; } @@ -4706,7 +5302,8 @@ void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, vf_info->bulletin.p_virt->geneve_udp_port = geneve_port; } -bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid) +LNX_STATIC bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, + int vfid) { struct ecore_vf_info *p_vf_info; @@ -4717,7 +5314,7 @@ bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid) return !!p_vf_info->vport_instance; } -bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid) +LNX_STATIC bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid) { struct ecore_vf_info *p_vf_info; @@ -4728,7 +5325,8 @@ bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid) return p_vf_info->state == VF_STOPPED; } -bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid) +IFDEF_DEFINE_IFLA_VF_SPOOFCHK +LNX_STATIC bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid) { struct ecore_vf_info *vf_info; @@ -4738,9 +5336,10 @@ bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid) return vf_info->spoof_chk; } +ENDIF_DEFINE_IFLA_VF_SPOOFCHK -enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn, - int vfid, bool val) +LNX_STATIC enum _ecore_status_t +ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn, int vfid, bool val) { struct ecore_vf_info *vf; enum _ecore_status_t rc = ECORE_INVAL; @@ -4772,8 +5371,8 @@ u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn) { u8 max_chains_per_vf = p_hwfn->hw_info.max_chains_per_vf; - max_chains_per_vf = (max_chains_per_vf) ? max_chains_per_vf - : ECORE_MAX_VF_CHAINS_PER_PF; + max_chains_per_vf = (max_chains_per_vf) ? 
+ max_chains_per_vf : ECORE_MAX_QUEUE_VF_CHAINS_PER_PF; return max_chains_per_vf; } @@ -4784,7 +5383,7 @@ void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn, u16 *p_req_virt_size) { struct ecore_vf_info *vf_info = - ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true); + ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true); if (!vf_info) return; @@ -4797,12 +5396,12 @@ void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn, } void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn, - u16 rel_vf_id, + u16 rel_vf_id, void **pp_reply_virt_addr, - u16 *p_reply_virt_size) + u16 *p_reply_virt_size) { struct ecore_vf_info *vf_info = - ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true); + ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true); if (!vf_info) return; @@ -4815,11 +5414,12 @@ void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn, } #ifdef CONFIG_ECORE_SW_CHANNEL -struct ecore_iov_sw_mbx *ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn, - u16 rel_vf_id) +struct ecore_iov_sw_mbx* +ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *vf_info = - ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true); + ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true); if (!vf_info) return OSAL_NULL; @@ -4839,8 +5439,22 @@ u32 ecore_iov_pfvf_msg_length(void) return sizeof(union pfvf_tlvs); } -u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn, - u16 rel_vf_id) +enum _ecore_status_t +ecore_iov_set_default_vlan(struct ecore_hwfn *p_hwfn, u16 vlan, int vfid) +{ + struct ecore_vf_info *vf_info; + + vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true); + if (!vf_info) + return ECORE_INVAL; + + vf_info->p_vf_info.forced_vlan = vlan; + + return ECORE_SUCCESS; +} + +LNX_STATIC u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *p_vf; @@ -4855,7 +5469,8 @@ u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn, return p_vf->bulletin.p_virt->mac; } -u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) +LNX_STATIC u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *p_vf; @@ -4869,8 +5484,8 @@ u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) return p_vf->bulletin.p_virt->mac; } -u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn, - u16 rel_vf_id) +LNX_STATIC u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *p_vf; @@ -4884,36 +5499,24 @@ u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn, return p_vf->bulletin.p_virt->pvid; } -enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, - int vfid, int val) +LNX_STATIC enum _ecore_status_t +ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, + int vfid, int val) { - struct ecore_vf_info *vf; - u8 abs_vp_id = 0; - u16 rl_id; - enum _ecore_status_t rc; - - vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true); - - if (!vf) - return ECORE_INVAL; - - rc = ecore_fw_vport(p_hwfn, vf->vport_id, &abs_vp_id); - if (rc != ECORE_SUCCESS) - return rc; + u16 rl_id = ecore_get_pq_rl_id_from_vf(p_hwfn, (u16)vfid); - rl_id = abs_vp_id; /* The "rl_id" is set as the "vport_id" */ return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, (u32)val); } -enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev, - int vfid, u32 rate) +LNX_STATIC enum _ecore_status_t 
+ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev, int vfid, u32 rate) { - struct ecore_vf_info *vf; + struct ecore_hwfn *p_hwfn; + u16 vport_id; int i; for_each_hwfn(p_dev, i) { - struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i]; + p_hwfn = &p_dev->hwfns[i]; if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) { DP_NOTICE(p_hwfn, true, @@ -4922,14 +5525,25 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev, } } - vf = ecore_iov_get_vf_info(ECORE_LEADING_HWFN(p_dev), (u16)vfid, true); - if (!vf) { - DP_NOTICE(p_dev, true, - "Getting vf info failed, can't set min rate\n"); + /* Use the leading hwfn - the qm configuration should be symmetric + * between the hwfns. + */ + p_hwfn = ECORE_LEADING_HWFN(p_dev); + vport_id = ecore_get_pq_vport_id_from_vf(p_hwfn, (u16)vfid); + + return ecore_configure_vport_wfq(p_dev, vport_id, rate); +} + +enum _ecore_status_t ecore_iov_get_vf_vport(struct ecore_hwfn *p_hwfn, + int rel_vf_id, u8 *abs_vport_id) +{ + struct ecore_vf_info *vf; + + vf = ecore_iov_get_vf_info(p_hwfn, (u16)rel_vf_id, true); + if (!vf || vf->state != VF_ENABLED) return ECORE_INVAL; - } - return ecore_configure_vport_wfq(p_dev, vf->vport_id, rate); + return ecore_fw_vport(p_hwfn, vf->vport_id, abs_vport_id); } enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn, @@ -4952,7 +5566,8 @@ enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn, return ECORE_SUCCESS; } -u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) +u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *p_vf; @@ -4963,7 +5578,8 @@ u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) return p_vf->num_rxqs; } -u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) +u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *p_vf; @@ -4974,7 +5590,8 @@ u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) return p_vf->num_active_rxqs; } -void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) +void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *p_vf; @@ -4985,7 +5602,8 @@ void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) return p_vf->ctx; } -u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) +u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *p_vf; @@ -4996,7 +5614,8 @@ u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) return p_vf->num_sbs; } -bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) +bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *p_vf; @@ -5019,7 +5638,8 @@ bool ecore_iov_is_vf_acquired_not_initialized(struct ecore_hwfn *p_hwfn, return (p_vf->state == VF_ACQUIRED); } -bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn, u16 rel_vf_id) +bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn, + u16 rel_vf_id) { struct ecore_vf_info *p_vf; @@ -5042,8 +5662,8 @@ bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn, return (p_vf->state != VF_FREE && p_vf->state != VF_STOPPED); } -int -ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid) +IFDEF_HAS_IFLA_VF_RATE +LNX_STATIC u32 ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid) { struct ecore_wfq_data *vf_vp_wfq; struct ecore_vf_info *vf_info; @@ -5059,6 +5679,7 @@ 
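Throughout this patch the open-coded 64-bit bit math on pending_flr and the events array (p_flr[rel_vf_id / 64] |= 1ULL << (rel_vf_id % 64)) is replaced by ECORE_VF_ARRAY_SET/CLEAR/GET_VFID() macros whose definitions live in a header outside this file. Assuming they expand to the same bit math, a plausible model:

#include <stdbool.h>
#include <stdint.h>

#define VF_BMAP_SET(bmap, id)   ((bmap)[(id) / 64] |=  (1ULL << ((id) % 64)))
#define VF_BMAP_CLEAR(bmap, id) ((bmap)[(id) / 64] &= ~(1ULL << ((id) % 64)))
#define VF_BMAP_GET(bmap, id)   (!!((bmap)[(id) / 64] & (1ULL << ((id) % 64))))

int main(void)
{
	uint64_t pending_flr[4] = {0};   /* room for 256 VFs */

	VF_BMAP_SET(pending_flr, 130);
	bool pending = VF_BMAP_GET(pending_flr, 130);   /* true */
	VF_BMAP_CLEAR(pending_flr, 130);

	return pending ? 0 : 1;
}

Hiding the bit math behind macros keeps the call sites unchanged as the VF count grows past one u64 word on the newer hardware.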
ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid) else return 0; } +ENDIF_HAS_IFLA_VF_RATE #ifdef CONFIG_ECORE_SW_CHANNEL void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid, @@ -5073,3 +5694,172 @@ void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid, vf_info->b_hw_channel = b_is_hw; } #endif + +static void ecore_iov_db_rec_stats_update(struct ecore_hwfn *p_hwfn, + struct ecore_vf_info *p_vf) +{ + u16 vf_id = GET_FIELD(p_vf->concrete_fid, PXP_CONCRETE_FID_VFID); + struct ecore_pf_iov *p_iov_info = p_hwfn->pf_iov_info; + + if (++p_vf->db_recovery_info.count > p_iov_info->max_db_rec_count) { + p_iov_info->max_db_rec_count = p_vf->db_recovery_info.count; + p_hwfn->pf_iov_info->max_db_rec_vfid = vf_id; + } +} + +static void ecore_iov_db_rec_handler_vf(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, + struct ecore_vf_info *p_vf, + u32 pf_attn_ovfl, + u32 *flush_delay_count) +{ + struct ecore_bulletin_content *p_bulletin; + u32 vf_cur_ovfl, vf_attn_ovfl; + u16 vf_id; + enum _ecore_status_t rc; + + /* If VF sticky was caught by the DORQ attn callback, VF must execute + * doorbell recovery. Reading the bit is atomic because the dorq + * attention handler is setting it in interrupt context. + */ + vf_attn_ovfl = OSAL_TEST_AND_CLEAR_BIT(ECORE_OVERFLOW_BIT, + &p_vf->db_recovery_info.overflow); + + vf_id = GET_FIELD(p_vf->concrete_fid, PXP_CONCRETE_FID_VFID); + vf_cur_ovfl = ecore_rd(p_hwfn, p_ptt, DORQ_REG_VF_OVFL_STICKY); + if (!vf_cur_ovfl && !vf_attn_ovfl && !pf_attn_ovfl) + return; + + DP_NOTICE(p_hwfn, false, + "VF %u Overflow sticky: attn %u current %u pf %u\n", + vf_id, vf_attn_ovfl ? 1 : 0, vf_cur_ovfl ? 1 : 0, + pf_attn_ovfl ? 1 : 0); + + /* Check if sticky overflow indication is set, or it was caught set + * during the DORQ attention callback. + */ + if (vf_cur_ovfl || vf_attn_ovfl) { + if (vf_cur_ovfl && !p_vf->db_recovery_info.edpm_disabled) { + rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt, + DORQ_REG_VF_USAGE_CNT, + flush_delay_count); + if (rc != ECORE_SUCCESS) { + DP_NOTICE(p_hwfn, false, + "Doorbell Recovery flush failed for VF %u, unable to post any more doorbells.\n", + vf_id); + return; + } + } + + ecore_iov_db_rec_stats_update(p_hwfn, p_vf); + } + + /* Make doorbell recovery possible for this VF, by resetting + * VF_STICKY to allow the DORQ receiving doorbells from this VF. + */ + ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_OVFL_STICKY, 0x0); + + p_bulletin = p_vf->bulletin.p_virt; + p_bulletin->db_recovery_execute++; +} + +enum _ecore_status_t +ecore_iov_db_rec_handler(struct ecore_hwfn *p_hwfn) +{ + u32 pf_cur_ovfl, pf_attn_ovfl, flush_delay_count = ECORE_DB_REC_COUNT; + struct ecore_pf_iov *p_iov_info = p_hwfn->pf_iov_info; + struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn); + struct ecore_vf_info *p_vf; + u16 i; + + if (!p_ptt) + return ECORE_AGAIN; + + /* If PF sticky was caught by the DORQ attn callback, all VFs must + * execute doorbell recovery. Reading the bit is atomic because the + * dorq attention handler is setting it in interrupt context (or in + * the periodic db_rec handler in a parallel flow). 
+ */ + pf_attn_ovfl = OSAL_TEST_AND_CLEAR_BIT(ECORE_OVERFLOW_BIT, + &p_hwfn->pf_iov_info->overflow); + + ecore_for_each_vf(p_hwfn, i) { + /* PF sticky might change during this loop */ + pf_cur_ovfl = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY); + if (pf_cur_ovfl) { + DP_NOTICE(p_hwfn, false, + "PF Overflow sticky is set, VF doorbell recovery will be rescheduled.\n"); + goto out; + } + + /* Check DORQ drops (sticky) for VF */ + p_vf = &p_iov_info->vfs_array[i]; + ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_vf->concrete_fid); + ecore_iov_db_rec_handler_vf(p_hwfn, p_ptt, p_vf, pf_attn_ovfl, + &flush_delay_count); + ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id); + } +out: + ecore_ptt_release(p_hwfn, p_ptt); + OSAL_IOV_BULLETIN_UPDATE(p_hwfn); + + return ECORE_SUCCESS; +} + +enum _ecore_status_t +ecore_iov_async_event_completion(struct ecore_hwfn *p_hwfn, + struct event_ring_entry *p_eqe) +{ + struct event_ring_list_entry *p_eqe_ent, *p_eqe_tmp; + struct ecore_pf_iov *p_iov_info; + struct ecore_vf_info *p_vf; + + p_iov_info = p_hwfn->pf_iov_info; + if (!p_iov_info) + return ECORE_INVAL; + + p_vf = ecore_sriov_get_vf_from_absid(p_hwfn, p_eqe->vf_id); + if (!p_vf || p_vf->state != VF_ENABLED) + return ECORE_INVAL; + + /* Old version VF, doesn't support async events */ + if (!p_vf->vf_eq_info.eq_support) + return ECORE_SUCCESS; + + p_eqe_ent = OSAL_ZALLOC(p_hwfn->p_dev, GFP_ATOMIC, sizeof(*p_eqe_ent)); + if (!p_eqe_ent) { + DP_NOTICE(p_hwfn, false, + "vf %u: Failed to allocate eq entry - op %x prot %x echo %x\n", + p_eqe->vf_id, p_eqe->opcode, p_eqe->protocol_id, + OSAL_LE16_TO_CPU(p_eqe->echo)); + return ECORE_NOMEM; + } + + OSAL_SPIN_LOCK(&p_vf->vf_eq_info.eq_list_lock); + p_eqe_ent->eqe = *p_eqe; + DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, + "vf %u: Store eq entry - op %x prot %x echo %x\n", + p_eqe->vf_id, p_eqe->opcode, p_eqe->protocol_id, + OSAL_LE16_TO_CPU(p_eqe->echo)); + OSAL_LIST_PUSH_TAIL(&p_eqe_ent->list_entry, &p_vf->vf_eq_info.eq_list); + if (++p_vf->vf_eq_info.eq_list_size == ECORE_EQ_MAX_ELEMENTS) { + DP_NOTICE(p_hwfn, false, + "vf %u: EQ list reached maximum size 0x%x\n", + p_eqe->vf_id, ECORE_EQ_MAX_ELEMENTS); + + /* Delete oldest EQ entry */ + p_eqe_tmp = OSAL_LIST_FIRST_ENTRY(&p_vf->vf_eq_info.eq_list, + struct event_ring_list_entry, + list_entry); + OSAL_LIST_REMOVE_ENTRY(&p_eqe_tmp->list_entry, + &p_vf->vf_eq_info.eq_list); + p_vf->vf_eq_info.eq_list_size--; + OSAL_FREE(p_hwfn->p_dev, p_eqe_tmp); + } + + p_vf->bulletin.p_virt->eq_completion++; + OSAL_SPIN_UNLOCK(&p_vf->vf_eq_info.eq_list_lock); + OSAL_IOV_BULLETIN_UPDATE(p_hwfn); + + return ECORE_SUCCESS; +} diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h index e748e67d7..8d8a12b92 100644 --- a/drivers/net/qede/base/ecore_sriov.h +++ b/drivers/net/qede/base/ecore_sriov.h @@ -1,20 +1,20 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_SRIOV_H__ #define __ECORE_SRIOV_H__ #include "ecore_status.h" -#include "ecore_vfpf_if.h" #include "ecore_iov_api.h" #include "ecore_hsi_common.h" #include "ecore_l2.h" +#include "ecore_vfpf_if.h" #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \ - (MAX_NUM_VFS_K2 * ECORE_ETH_VF_NUM_VLAN_FILTERS) + (MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS) /* Represents a full message. Both the request filled by VF * and the response filled by the PF. 
The VF needs one copy @@ -100,10 +100,43 @@ struct ecore_vf_shadow_config { struct ecore_vf_vlan_shadow vlans[ECORE_ETH_VF_NUM_VLAN_FILTERS + 1]; /* Shadow copy of all configured MACs; Empty if forcing MACs */ - u8 macs[ECORE_ETH_VF_NUM_MAC_FILTERS][ETH_ALEN]; + u8 macs[ECORE_ETH_VF_NUM_MAC_FILTERS][ECORE_ETH_ALEN]; u8 inner_vlan_removal; }; +struct ecore_vf_ll2_queue { + u32 cid; + u8 qid; /* abs queue id */ + bool used; +}; + +struct ecore_vf_db_rec_info { + /* VF doorbell overflow counter */ + u32 count; + + /* VF doesn't need to flush DORQ if it doesn't use EDPM */ + bool edpm_disabled; + + /* VF doorbell overflow sticky indicator was cleared in the DORQ + * attention callback, but still needs to execute doorbell recovery. + * This value doesn't require a lock but must use atomic operations. + */ + u32 overflow; /* @DPDK */ +}; + +struct event_ring_list_entry { + osal_list_entry_t list_entry; + struct event_ring_entry eqe; +}; + +struct ecore_vf_eq_info { + bool eq_support; + bool eq_active; + osal_list_t eq_list; + osal_spinlock_t eq_list_lock; + u16 eq_list_size; +}; + /* PFs maintain an array of this structure, per VF */ struct ecore_vf_info { struct ecore_iov_vf_mbx vf_mbx; @@ -138,6 +171,14 @@ struct ecore_vf_info { u8 vport_instance; /* Number of active vports */ u8 num_rxqs; u8 num_txqs; + u8 num_cnqs; + u8 cnq_offset; + + /* The numbers which are communicated to the VF */ + u8 actual_num_rxqs; + u8 actual_num_txqs; + + u16 cnq_sb_start_id; u16 rx_coal; u16 tx_coal; @@ -147,7 +188,8 @@ struct ecore_vf_info { u8 num_mac_filters; u8 num_vlan_filters; - struct ecore_vf_queue vf_queues[ECORE_MAX_VF_CHAINS_PER_PF]; + struct ecore_vf_queue vf_queues[ECORE_MAX_QUEUE_VF_CHAINS_PER_PF]; + u16 igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF]; /* TODO - Only windows is using it - should be removed */ @@ -167,13 +209,19 @@ struct ecore_vf_info { u64 configured_features; #define ECORE_IOV_CONFIGURED_FEATURES_MASK ((1 << MAC_ADDR_FORCED) | \ (1 << VLAN_ADDR_FORCED)) + struct ecore_rdma_info *rdma_info; + + struct ecore_vf_db_rec_info db_recovery_info; + + /* Pending EQs list to send to VF */ + struct ecore_vf_eq_info vf_eq_info; }; /* This structure is part of ecore_hwfn and used only for PFs that have sriov * capability enabled. */ struct ecore_pf_iov { - struct ecore_vf_info vfs_array[MAX_NUM_VFS_K2]; + struct ecore_vf_info vfs_array[MAX_NUM_VFS]; u64 pending_flr[ECORE_VF_ARRAY_LENGTH]; #ifndef REMOVE_DBG @@ -193,6 +241,39 @@ struct ecore_pf_iov { void *p_bulletins; dma_addr_t bulletins_phys; u32 bulletins_size; + +#define ECORE_IS_VF_RDMA(p_hwfn) (((p_hwfn)->pf_iov_info) && \ + ((p_hwfn)->pf_iov_info->rdma_enable)) + + /* Indicates whether vf-rdma is supported */ + bool rdma_enable; + + /* Hold the original numbers of l2_queue and cnq features */ + u8 num_l2_queue_feat; + u8 num_cnq_feat; + + struct ecore_dpi_info dpi_info; + + struct ecore_common_dpm_info dpm_info; + + void *p_rdma_info; + + /* Keep doorbell recovery statistics on corresponding VFs: + * Which VF overflowed the most, and how many times. + */ + u32 max_db_rec_count; + u16 max_db_rec_vfid; + + /* PF doorbell overflow sticky indicator was cleared in the DORQ + * attention callback, but all its child VFs still need to execute + * doorbell recovery, because when PF overflows, VF doorbells are also + * silently dropped without raising the VF sticky. + * This value doesn't require a lock but must use atomic operations. 
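
For context, ecore_iov_async_event_completion() earlier in this patch keeps vf_eq_info.eq_list bounded: once eq_list_size reaches ECORE_EQ_MAX_ELEMENTS, the oldest entry is deleted before the new event is queued. A minimal standalone model of that drop-oldest policy (the array-based buffer here is illustrative; the driver uses an osal linked list under eq_list_lock):

	#include <stddef.h>
	#include <stdio.h>

	#define EQ_MAX 4 /* stands in for ECORE_EQ_MAX_ELEMENTS */

	struct eq_fifo {
		int entries[EQ_MAX];
		size_t head, count;
	};

	/* Push a new event; if the queue is full, drop the oldest entry,
	 * matching the "delete oldest EQ entry" policy in the patch.
	 */
	static void eq_push(struct eq_fifo *q, int eqe)
	{
		if (q->count == EQ_MAX) {           /* full: evict oldest */
			q->head = (q->head + 1) % EQ_MAX;
			q->count--;
		}
		q->entries[(q->head + q->count) % EQ_MAX] = eqe;
		q->count++;
	}

	int main(void)
	{
		struct eq_fifo q = { {0}, 0, 0 };

		for (int i = 0; i < 6; i++)
			eq_push(&q, i);
		/* events 0 and 1 were evicted; prints: 2 3 4 5 */
		for (size_t i = 0; i < q.count; i++)
			printf("%d ", q.entries[(q.head + i) % EQ_MAX]);
		printf("\n");
		return 0;
	}
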
+	 */
+	u32 overflow; /* @DPDK */
+
+	/* Holds the max number of VFs which can be loaded */
+	u8 max_active_vfs;
 };
 
 #ifdef CONFIG_ECORE_SRIOV
@@ -265,7 +346,7 @@ void ecore_iov_free_hw_info(struct ecore_dev *p_dev);
  * @return true iff one of the PF's vfs got FLRed. false otherwise.
  */
 bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn,
-			   u32 *disabled_vfs);
+			   u32 *disabled_vfs);
 
 /**
  * @brief Search extended TLVs in request/reply buffer.
@@ -293,5 +374,54 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
					     u16 relative_vf_id,
					     bool b_enabled_only);
+
+/**
+ * @brief get the VF doorbell bar size.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+u32 ecore_iov_vf_db_bar_size(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt);
+
+/**
+ * @brief get the relative id from absolute id.
+ *
+ * @param p_hwfn
+ * @param abs_id
+ */
+u8 ecore_iov_abs_to_rel_id(struct ecore_hwfn *p_hwfn, u8 abs_id);
+
+/**
+ * @brief store the EQ and notify the VF
+ *
+ * @param p_hwfn
+ * @param p_eqe
+ */
+enum _ecore_status_t
+ecore_iov_async_event_completion(struct ecore_hwfn *p_hwfn,
+				 struct event_ring_entry *p_eqe);
+
+/**
+ * @brief initialize the VF doorbell bar
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+enum _ecore_status_t
+ecore_iov_init_vf_doorbell_bar(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt);
+
+/**
+ * @brief ecore_iov_get_vf_vport - return the vport_id of a specific VF
+ *
+ * @param p_hwfn
+ * @param rel_vf_id - relative id of the VF for which info is requested
+ * @param abs_vport_id - result vport_id
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_get_vf_vport(struct ecore_hwfn *p_hwfn,
+					    int rel_vf_id, u8 *abs_vport_id);
 #endif
 #endif /* __ECORE_SRIOV_H__ */
diff --git a/drivers/net/qede/base/ecore_status.h b/drivers/net/qede/base/ecore_status.h
index 01cf9fd31..0c1dc52ea 100644
--- a/drivers/net/qede/base/ecore_status.h
+++ b/drivers/net/qede/base/ecore_status.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_STATUS_H__
 #define __ECORE_STATUS_H__
 
diff --git a/drivers/net/qede/base/ecore_utils.h b/drivers/net/qede/base/ecore_utils.h
index 249136b08..c040d1411 100644
--- a/drivers/net/qede/base/ecore_utils.h
+++ b/drivers/net/qede/base/ecore_utils.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_UTILS_H__ #define __ECORE_UTILS_H__ @@ -32,4 +32,18 @@ #define HILO_DMA_REGPAIR(regpair) (HILO_DMA(regpair.hi, regpair.lo)) #define HILO_64_REGPAIR(regpair) (HILO_64(regpair.hi, regpair.lo)) +#ifndef USHRT_MAX +#define USHRT_MAX ((u16)(~0U)) +#endif + +#define ether_addr_copy(_dst_mac, _src_mac) OSAL_MEMCPY(_dst_mac, _src_mac, ECORE_ETH_ALEN) +#define ether_addr_equal(_mac1, _mac2) !OSAL_MEMCMP(_mac1, _mac2, ECORE_ETH_ALEN) +#ifndef BITS_PER_BYTE +#define BITS_PER_BYTE 8 +#endif + +#ifndef ALIGN +#define __ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask)) +#define ALIGN(x, a) __ALIGN_MASK(x, (a) - 1) +#endif #endif diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c index e00caf646..9b99e9281 100644 --- a/drivers/net/qede/base/ecore_vf.c +++ b/drivers/net/qede/base/ecore_vf.c @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #include "bcm_osal.h" #include "ecore.h" #include "ecore_hsi_eth.h" @@ -18,7 +18,15 @@ #include "ecore_mcp_api.h" #include "ecore_vf_api.h" -static void *ecore_vf_pf_prep(struct ecore_hwfn *p_hwfn, u16 type, u16 length) +#ifdef _NTDDK_ +#pragma warning(push) +#pragma warning(disable : 28167) +#pragma warning(disable : 28123) +#pragma warning(disable : 28121) +#endif + +static void *ecore_vf_pf_prep(struct ecore_hwfn *p_hwfn, + u16 type, u16 length) { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; void *p_tlv; @@ -30,23 +38,24 @@ static void *ecore_vf_pf_prep(struct ecore_hwfn *p_hwfn, u16 type, u16 length) */ OSAL_MUTEX_ACQUIRE(&p_iov->mutex); - DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "preparing to send %s tlv over vf pf channel\n", - qede_ecore_channel_tlvs_string[type]); + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "preparing to send %s tlv over vf pf channel\n", + ecore_channel_tlvs_string[type]); /* Reset Request offset */ - p_iov->offset = (u8 *)(p_iov->vf2pf_request); + p_iov->offset = (u8 *)p_iov->vf2pf_request; /* Clear mailbox - both request and reply */ - OSAL_MEMSET(p_iov->vf2pf_request, 0, sizeof(union vfpf_tlvs)); - OSAL_MEMSET(p_iov->pf2vf_reply, 0, sizeof(union pfvf_tlvs)); + OSAL_MEMSET(p_iov->vf2pf_request, 0, + sizeof(union vfpf_tlvs)); + OSAL_MEMSET(p_iov->pf2vf_reply, 0, + sizeof(union pfvf_tlvs)); /* Init type and length */ p_tlv = ecore_add_tlv(&p_iov->offset, type, length); /* Init first tlv header */ ((struct vfpf_first_tlv *)p_tlv)->reply_address = - (u64)p_iov->pf2vf_reply_phys; + (u64)p_iov->pf2vf_reply_phys; return p_tlv; } @@ -71,6 +80,13 @@ static void ecore_vf_pf_req_end(struct ecore_hwfn *p_hwfn, * We'd need to handshake in acquire capabilities for any such. 
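
The request/reply mailboxes above are composed as a TLV chain: ecore_vf_pf_prep() clears both buffers under the per-VF mutex and writes the first TLV (carrying the reply DMA address), further TLVs are appended through a running offset, and a CHANNEL_TLV_LIST_END entry terminates the chain. A rough standalone model of that append pattern (buffer handling here is illustrative only):

	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	struct channel_tlv {            /* mirrors the TLV header in the patch */
		uint16_t type;
		uint16_t length;
	};

	/* Append one TLV at *offset and advance the running cursor - the
	 * same pattern ecore_add_tlv() implements for the vf2pf mailbox.
	 */
	static void *add_tlv(uint8_t **offset, uint16_t type, uint16_t length)
	{
		struct channel_tlv *tl = (struct channel_tlv *)*offset;

		tl->type = type;
		tl->length = length;
		*offset += length;
		return tl;
	}

	int main(void)
	{
		_Alignas(uint32_t) uint8_t mailbox[1024]; /* stands in for vf2pf_request */
		uint8_t *offset = mailbox;

		memset(mailbox, 0, sizeof(mailbox)); /* cleared, as vf_pf_prep does */
		add_tlv(&offset, 1, 64);             /* e.g. an acquire-request TLV */
		add_tlv(&offset, 2, sizeof(struct channel_tlv)); /* list-end TLV */
		printf("message size: %td bytes\n", offset - mailbox);
		return 0;
	}
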
*/ #endif + +#define ECORE_VF_CHANNEL_USLEEP_ITERATIONS 90 +#define ECORE_VF_CHANNEL_USLEEP_DELAY 100 +#define ECORE_VF_CHANNEL_MSLEEP_ITERATIONS 10 +#define ECORE_VF_CHANNEL_MSLEEP_DELAY 25 +#define ECORE_VF_CHANNEL_MSLEEP_ITER_EMUL 90 + static enum _ecore_status_t ecore_send_msg2pf(struct ecore_hwfn *p_hwfn, u8 *done, u32 resp_size) @@ -79,7 +95,7 @@ ecore_send_msg2pf(struct ecore_hwfn *p_hwfn, struct ustorm_trigger_vf_zone trigger; struct ustorm_vf_zone *zone_data; enum _ecore_status_t rc = ECORE_SUCCESS; - int time = 100; + int iter; zone_data = (struct ustorm_vf_zone *)PXP_VF_BAR0_START_USDM_ZONE_B; @@ -89,19 +105,32 @@ ecore_send_msg2pf(struct ecore_hwfn *p_hwfn, /* need to add the END TLV to the message size */ resp_size += sizeof(struct channel_list_end_tlv); +#ifdef CONFIG_ECORE_SW_CHANNEL + if (!p_hwfn->vf_iov_info->b_hw_channel) { + rc = OSAL_VF_SEND_MSG2PF(p_hwfn->p_dev, + done, + p_req, + p_hwfn->vf_iov_info->pf2vf_reply, + sizeof(union vfpf_tlvs), + resp_size); + /* TODO - no prints about message ? */ + return rc; + } +#endif + /* Send TLVs over HW channel */ OSAL_MEMSET(&trigger, 0, sizeof(struct ustorm_trigger_vf_zone)); trigger.vf_pf_msg_valid = 1; DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF -> PF [%02x] message: [%08x, %08x] --> %p," - " %08x --> %p\n", + "VF -> PF [%02x] message: [%08x, %08x] --> %p, %08x --> %p\n", GET_FIELD(p_hwfn->hw_info.concrete_fid, PXP_CONCRETE_FID_PFID), U64_HI(p_hwfn->vf_iov_info->vf2pf_request_phys), U64_LO(p_hwfn->vf_iov_info->vf2pf_request_phys), &zone_data->non_trigger.vf_pf_msg_addr, - *((u32 *)&trigger), &zone_data->trigger); + *((u32 *)&trigger), + &zone_data->trigger); REG_WR(p_hwfn, (osal_uintptr_t)&zone_data->non_trigger.vf_pf_msg_addr.lo, @@ -116,34 +145,39 @@ ecore_send_msg2pf(struct ecore_hwfn *p_hwfn, */ OSAL_WMB(p_hwfn->p_dev); - REG_WR(p_hwfn, (osal_uintptr_t)&zone_data->trigger, - *((u32 *)&trigger)); + REG_WR(p_hwfn, (osal_uintptr_t)&zone_data->trigger, *((u32 *)&trigger)); /* When PF would be done with the response, it would write back to the * `done' address. Poll until then. 
 */
-	while ((!*done) && time) {
-		OSAL_MSLEEP(25);
-		time--;
-	}
+	iter = ECORE_VF_CHANNEL_USLEEP_ITERATIONS;
+	while (!*done && iter--)
+		OSAL_UDELAY(ECORE_VF_CHANNEL_USLEEP_DELAY);
+
+	iter = ECORE_VF_CHANNEL_MSLEEP_ITERATIONS;
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		iter += ECORE_VF_CHANNEL_MSLEEP_ITER_EMUL;
+
+	while (!*done && iter--)
+		OSAL_MSLEEP(ECORE_VF_CHANNEL_MSLEEP_DELAY);
 
 	if (!*done) {
 		DP_NOTICE(p_hwfn, true, "VF <-- PF Timeout [Type %d]\n",
 			  p_req->first_tlv.tl.type);
-		rc = ECORE_TIMEOUT;
-	} else {
-		if ((*done != PFVF_STATUS_SUCCESS) &&
-		    (*done != PFVF_STATUS_NO_RESOURCE))
-			DP_NOTICE(p_hwfn, false,
-				  "PF response: %d [Type %d]\n",
-				  *done, p_req->first_tlv.tl.type);
-		else
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "PF response: %d [Type %d]\n",
-				   *done, p_req->first_tlv.tl.type);
+		return ECORE_TIMEOUT;
 	}
 
+	if ((*done != PFVF_STATUS_SUCCESS) &&
+	    (*done != PFVF_STATUS_NO_RESOURCE))
+		DP_NOTICE(p_hwfn, false,
+			  "PF response: %d [Type %d]\n",
+			  *done, p_req->first_tlv.tl.type);
+	else
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "PF response: %d [Type %d]\n",
+			   *done, p_req->first_tlv.tl.type);
+
 	return rc;
 }
 
@@ -163,8 +197,8 @@ static void ecore_vf_pf_add_qid(struct ecore_hwfn *p_hwfn,
 	p_qid_tlv->qid = p_cid->qid_usage_idx;
 }
 
-enum _ecore_status_t _ecore_vf_pf_release(struct ecore_hwfn *p_hwfn,
-					  bool b_final)
+static enum _ecore_status_t _ecore_vf_pf_release(struct ecore_hwfn *p_hwfn,
+						 bool b_final)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp;
@@ -231,7 +265,7 @@ static void ecore_vf_pf_acquire_reduce_resc(struct ecore_hwfn *p_hwfn,
 					    struct pf_vf_resc *p_resp)
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "PF unwilling to fullill resource request: rxq [%02x/%02x] txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x] vlan [%02x/%02x] mc [%02x/%02x] cids [%02x/%02x]. Try PF recommended amount\n",
+		   "PF unwilling to fulfill resource request: rxq [%u/%u] txq [%u/%u] sbs [%u/%u] mac [%u/%u] vlan [%u/%u] mc [%u/%u] cids [%u/%u]. Try PF recommended amount\n",
 		   p_req->num_rxqs, p_resp->num_rxqs,
 		   p_req->num_rxqs, p_resp->num_txqs,
 		   p_req->num_sbs, p_resp->num_sbs,
@@ -304,8 +338,8 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	/* @@@ TBD: PF may not be ready bnx2x_get_vf_id... */
 	req->vfdev_info.opaque_fid = p_hwfn->hw_info.opaque_fid;
 
-	p_resc->num_rxqs = ECORE_MAX_VF_CHAINS_PER_PF;
-	p_resc->num_txqs = ECORE_MAX_VF_CHAINS_PER_PF;
+	p_resc->num_rxqs = ECORE_MAX_QUEUE_VF_CHAINS_PER_PF;
+	p_resc->num_txqs = ECORE_MAX_QUEUE_VF_CHAINS_PER_PF;
 	p_resc->num_sbs = ECORE_MAX_VF_CHAINS_PER_PF;
 	p_resc->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
 	p_resc->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
@@ -326,10 +360,25 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	/* Fill capability field with any non-deprecated config we support */
 	req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_100G;
 
-	/* If we've mapped the doorbell bar, try using queue qids */
-	if (p_iov->b_doorbell_bar)
+	/* If doorbell bar size is sufficient, try using queue qids */
+	if (p_hwfn->db_size > PXP_VF_BAR0_DQ_LENGTH) {
 		req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_PHYSICAL_BAR |
 						VFPF_ACQUIRE_CAP_QUEUE_QIDS;
+		p_resc->num_cids = ECORE_ETH_VF_MAX_NUM_CIDS;
+	}
+
+	/* Currently, fill ROCE by default.
*/ + req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_ROCE; + + /* Check if VF uses EDPM and needs DORQ flush when overflowed */ + if (p_hwfn->roce_edpm_mode != ECORE_ROCE_EDPM_MODE_DISABLE) + req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_EDPM; + + /* This VF version supports async events */ + req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_EQ; + + /* Set when IWARP is supported. */ + /* req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_IWARP; */ /* pf 2 vf bulletin board address */ req->bulletin_addr = p_iov->bulletin.phys; @@ -341,8 +390,7 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) sizeof(struct channel_list_end_tlv)); while (!resources_acquired) { - DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "attempting to acquire resources\n"); + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "attempting to acquire resources\n"); /* Clear response buffer, as this might be a re-send */ OSAL_MEMSET(p_iov->pf2vf_reply, 0, @@ -350,8 +398,17 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) /* send acquire request */ rc = ecore_send_msg2pf(p_hwfn, - &resp->hdr.status, sizeof(*resp)); - + &resp->hdr.status, + sizeof(*resp)); + /* In general, PF shouldn't take long time to respond for a + * successful message sent by the VF over VPC. If for + * sufficiently long time (~2.5 sec) there is no response, + * which is mainly possible due to VF being FLRed which could + * cause message send failure by VF to the PF over VPC as being + * disallowed due to FLR and cause such timeout, in such cases + * retry for the acquire message which can later be successful + * after VF FLR handling is completed. + */ if (retry_cnt && rc == ECORE_TIMEOUT) { DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "VF retrying to acquire due to VPC timeout\n"); @@ -364,7 +421,8 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) /* copy acquire response from buffer to p_hwfn */ OSAL_MEMCPY(&p_iov->acquire_resp, - resp, sizeof(p_iov->acquire_resp)); + resp, + sizeof(p_iov->acquire_resp)); attempts++; @@ -379,8 +437,26 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_PRE_FP_HSI; } - DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "resources acquired\n"); + + DP_VERBOSE(p_hwfn, (ECORE_MSG_IOV | ECORE_MSG_RDMA), + "VF AQUIRE-RDMA capabilities is:%" PRIx64 "\n", + resp->pfdev_info.capabilities); + + if (resp->pfdev_info.capabilities & + PFVF_ACQUIRE_CAP_ROCE) + p_hwfn->hw_info.personality = + ECORE_PCI_ETH_ROCE; + + if (resp->pfdev_info.capabilities & + PFVF_ACQUIRE_CAP_IWARP) + p_hwfn->hw_info.personality = + ECORE_PCI_ETH_IWARP; + + if (resp->pfdev_info.capabilities & + PFVF_ACQUIRE_CAP_EDPM) + p_hwfn->vf_iov_info->dpm_enabled = true; + + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "resources acquired\n"); resources_acquired = true; } /* PF refuses to allocate our resources */ else if (resp->hdr.status == PFVF_STATUS_NO_RESOURCE && @@ -392,10 +468,7 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) if (pfdev_info->major_fp_hsi && (pfdev_info->major_fp_hsi != ETH_HSI_VER_MAJOR)) { DP_NOTICE(p_hwfn, false, - "PF uses an incompatible fastpath HSI" - " %02x.%02x [VF requires %02x.%02x]." - " Please change to a VF driver using" - " %02x.xx.\n", + "PF uses an incompatible fastpath HSI %02x.%02x [VF requires %02x.%02x]. 
Please change to a VF driver using %02x.xx.\n", pfdev_info->major_fp_hsi, pfdev_info->minor_fp_hsi, ETH_HSI_VER_MAJOR, ETH_HSI_VER_MINOR, @@ -408,17 +481,12 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) if (req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_PRE_FP_HSI) { DP_NOTICE(p_hwfn, false, - "PF uses very old drivers." - " Please change to a VF" - " driver using no later than" - " 8.8.x.x.\n"); + "PF uses very old drivers. Please change to a VF driver using no later than 8.8.x.x.\n"); rc = ECORE_INVAL; goto exit; } else { DP_INFO(p_hwfn, - "PF is old - try re-acquire to" - " see if it supports FW-version" - " override\n"); + "PF is old - try re-acquire to see if it supports FW-version override\n"); req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_PRE_FP_HSI; continue; @@ -436,14 +504,19 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) ecore_vf_pf_req_end(p_hwfn, ECORE_AGAIN); return ecore_vf_pf_soft_flr_acquire(p_hwfn); } else { - DP_ERR(p_hwfn, - "PF returned err %d to VF acquisition request\n", + DP_ERR(p_hwfn, "PF returned error %d to VF acquisition request\n", resp->hdr.status); rc = ECORE_AGAIN; goto exit; } } + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, + "resc acquired : rxq [%u] txq [%u] sbs [%u] mac [%u] vlan [%u] mc [%u] cids [%u].\n", + resp->resc.num_rxqs, resp->resc.num_txqs, resp->resc.num_sbs, + resp->resc.num_mac_filters, resp->resc.num_vlan_filters, + resp->resc.num_mc_filters, resp->resc.num_cids); + /* Mark the PF as legacy, if needed */ if (req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_PRE_FP_HSI) @@ -461,8 +534,7 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) rc = OSAL_VF_UPDATE_ACQUIRE_RESC_RESP(p_hwfn, &resp->resc); if (rc) { DP_NOTICE(p_hwfn, true, - "VF_UPDATE_ACQUIRE_RESC_RESP Failed:" - " status = 0x%x.\n", + "VF_UPDATE_ACQUIRE_RESC_RESP Failed: status = 0x%x.\n", rc); rc = ECORE_AGAIN; goto exit; @@ -475,8 +547,8 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) p_dev->type = resp->pfdev_info.dev_type; p_dev->chip_rev = (u8)resp->pfdev_info.chip_rev; - DP_INFO(p_hwfn, "Chip details - %s%d\n", - ECORE_IS_BB(p_dev) ? "BB" : "AH", + DP_INFO(p_hwfn, "Chip details - %s %d\n", + ECORE_IS_BB(p_dev) ? "BB" : ECORE_IS_AH(p_dev) ? "AH" : "E5", CHIP_REV_IS_A0(p_hwfn->p_dev) ? 
0 : 1); p_dev->chip_num = pfdev_info->chip_num & 0xffff; @@ -484,18 +556,15 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn) /* Learn of the possibility of CMT */ if (IS_LEAD_HWFN(p_hwfn)) { if (resp->pfdev_info.capabilities & PFVF_ACQUIRE_CAP_100G) { - DP_INFO(p_hwfn, "100g VF\n"); + DP_NOTICE(p_hwfn, false, "100g VF\n"); p_dev->num_hwfns = 2; } } - /* @DPDK */ - if (((p_iov->b_pre_fp_hsi == true) & - ETH_HSI_VER_MINOR) && + if (!p_iov->b_pre_fp_hsi && (resp->pfdev_info.minor_fp_hsi < ETH_HSI_VER_MINOR)) DP_INFO(p_hwfn, - "PF is using older fastpath HSI;" - " %02x.%02x is configured\n", + "PF is using older fastpath HSI; %02x.%02x is configured\n", ETH_HSI_VER_MAJOR, resp->pfdev_info.minor_fp_hsi); @@ -545,8 +614,7 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn, /* Allocate vf sriov info */ p_iov = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_iov)); if (!p_iov) { - DP_NOTICE(p_hwfn, true, - "Failed to allocate `struct ecore_sriov'\n"); + DP_NOTICE(p_hwfn, true, "Failed to allocate `struct ecore_sriov'\n"); return ECORE_NOMEM; } @@ -556,7 +624,10 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn, */ if (p_hwfn->doorbells == OSAL_NULL) { p_hwfn->doorbells = (u8 OSAL_IOMEM *)p_hwfn->regview + - PXP_VF_BAR0_START_DQ; + PXP_VF_BAR0_START_DQ; + p_hwfn->db_size = PXP_VF_BAR0_DQ_LENGTH; + p_hwfn->db_offset = (u8 *)p_hwfn->doorbells - + (u8 *)p_hwfn->p_dev->doorbells; } else if (p_hwfn == p_lead) { /* For leading hw-function, value is always correct, but need * to handle scenario where legacy PF would not support 100g @@ -569,56 +640,52 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn, */ if (p_lead->vf_iov_info->b_doorbell_bar) p_iov->b_doorbell_bar = true; - else - p_hwfn->doorbells = (u8 OSAL_IOMEM *) - p_hwfn->regview + + else { + p_hwfn->doorbells = (u8 OSAL_IOMEM *)p_hwfn->regview + PXP_VF_BAR0_START_DQ; + p_hwfn->db_offset = (u8 *)p_hwfn->doorbells - + (u8 *)p_hwfn->p_dev->doorbells; + } } /* Allocate vf2pf msg */ p_iov->vf2pf_request = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev, - &p_iov-> - vf2pf_request_phys, - sizeof(union - vfpf_tlvs)); + &p_iov->vf2pf_request_phys, + sizeof(union vfpf_tlvs)); if (!p_iov->vf2pf_request) { - DP_NOTICE(p_hwfn, true, - "Failed to allocate `vf2pf_request' DMA memory\n"); + DP_NOTICE(p_hwfn, true, "Failed to allocate `vf2pf_request' DMA memory\n"); goto free_p_iov; } p_iov->pf2vf_reply = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev, - &p_iov-> - pf2vf_reply_phys, + &p_iov->pf2vf_reply_phys, sizeof(union pfvf_tlvs)); if (!p_iov->pf2vf_reply) { - DP_NOTICE(p_hwfn, true, - "Failed to allocate `pf2vf_reply' DMA memory\n"); + DP_NOTICE(p_hwfn, true, "Failed to allocate `pf2vf_reply' DMA memory\n"); goto free_vf2pf_request; } DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF's Request mailbox [%p virt 0x%lx phys], " - "Response mailbox [%p virt 0x%lx phys]\n", + "VF's Request mailbox [%p virt 0x%" PRIx64 " phys], " + "Response mailbox [%p virt 0x%" PRIx64 " phys]\n", p_iov->vf2pf_request, - (unsigned long)p_iov->vf2pf_request_phys, + (u64)p_iov->vf2pf_request_phys, p_iov->pf2vf_reply, - (unsigned long)p_iov->pf2vf_reply_phys); + (u64)p_iov->pf2vf_reply_phys); /* Allocate Bulletin board */ p_iov->bulletin.size = sizeof(struct ecore_bulletin_content); p_iov->bulletin.p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev, - &p_iov->bulletin. - phys, - p_iov->bulletin. 
- size); + &p_iov->bulletin.phys, + p_iov->bulletin.size); if (!p_iov->bulletin.p_virt) { DP_NOTICE(p_hwfn, false, "Failed to alloc bulletin memory\n"); goto free_pf2vf_reply; } DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "VF's bulletin Board [%p virt 0x%lx phys 0x%08x bytes]\n", - p_iov->bulletin.p_virt, (unsigned long)p_iov->bulletin.phys, + "VF's bulletin Board [%p virt 0x%" PRIx64 " phys 0x%08x bytes]\n", + p_iov->bulletin.p_virt, + (u64)p_iov->bulletin.phys, p_iov->bulletin.size); #ifdef CONFIG_ECORE_LOCK_ALLOC @@ -635,6 +702,8 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn, p_hwfn->hw_info.personality = ECORE_PCI_ETH; rc = ecore_vf_pf_acquire(p_hwfn); + if (rc != ECORE_SUCCESS) + return rc; /* If VF is 100g using a mapped bar and PF is too old to support that, * acquisition would succeed - but the VF would have no way knowing @@ -653,7 +722,9 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn, p_iov->b_doorbell_bar = false; p_hwfn->doorbells = (u8 OSAL_IOMEM *)p_hwfn->regview + - PXP_VF_BAR0_START_DQ; + PXP_VF_BAR0_START_DQ; + p_hwfn->db_offset = (u8 *)p_hwfn->doorbells - + (u8 *)p_hwfn->p_dev->doorbells; rc = ecore_vf_pf_acquire(p_hwfn); } @@ -684,7 +755,6 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn, return ECORE_NOMEM; } -/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */ static void __ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req, struct ecore_tunn_update_type *p_src, @@ -700,7 +770,6 @@ __ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req, *p_cls = p_src->tun_cls; } -/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */ static void ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req, struct ecore_tunn_update_type *p_src, @@ -802,6 +871,8 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn, &p_req->vxlan_clss, &p_src->vxlan_port, &p_req->update_vxlan_port, &p_req->vxlan_port); + p_req->update_non_l2_vxlan = p_src->update_non_l2_vxlan; + p_req->non_l2_vxlan_enable = p_src->non_l2_vxlan_enable; ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_geneve, ECORE_MODE_L2GENEVE_TUNN, &p_req->l2geneve_clss, &p_src->geneve_port, @@ -1005,18 +1076,17 @@ ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn, * provided only the queue id. 
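
Either way the VF ends up with a doorbell pointer into its BAR: a new PF hands back an explicit byte offset in the queue-start response, while a legacy PF returns only a CID, from which the VF derives the legacy DEMS-based offset itself. A schematic of that selection (the macro below is a stand-in for the real DB_ADDR_VF(); its exact shift and DEMS values live in the ecore headers):

	#include <stdint.h>
	#include <stdio.h>

	/* Placeholder for DB_ADDR_VF(p_dev, cid, DQ_DEMS_LEGACY); the
	 * real shift/stride come from the hardware definitions.
	 */
	#define DEMS_SHIFT 2
	#define LEGACY_DB_ADDR(cid, dems) \
		(((uint32_t)(dems) << DEMS_SHIFT) + (uint32_t)(cid) * 8)

	static uint32_t txq_db_offset(int legacy_pf, uint32_t resp_offset,
				      uint8_t cid)
	{
		if (!legacy_pf)
			return resp_offset;        /* PF chose the offset */
		return LEGACY_DB_ADDR(cid, 1);     /* VF derives it from the CID */
	}

	int main(void)
	{
		printf("new PF:    0x%x\n", txq_db_offset(0, 0x1000, 7));
		printf("legacy PF: 0x%x\n", txq_db_offset(1, 0, 7));
		return 0;
	}
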
*/ if (!p_iov->b_pre_fp_hsi) { - *pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells + - resp->offset; + *pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells + resp->offset; } else { u8 cid = p_iov->acquire_resp.resc.cid[qid]; *pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells + - DB_ADDR_VF(cid, DQ_DEMS_LEGACY); + DB_ADDR_VF(p_hwfn->p_dev, cid, DQ_DEMS_LEGACY); } DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, - "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n", - qid, *pp_doorbell, resp->offset); + "Txq[0x%02x.%02x]: doorbell at %p [offset 0x%08x]\n", + qid, p_cid->qid_usage_idx, *pp_doorbell, resp->offset); exit: ecore_vf_pf_req_end(p_hwfn, rc); @@ -1114,11 +1184,14 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn, return rc; } -enum _ecore_status_t -ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, u8 vport_id, - u16 mtu, u8 inner_vlan_removal, - enum ecore_tpa_mode tpa_mode, u8 max_buffers_per_cqe, - u8 only_untagged) +enum _ecore_status_t ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, + u8 vport_id, + u16 mtu, + u8 inner_vlan_removal, + enum ecore_tpa_mode tpa_mode, + u8 max_buffers_per_cqe, + u8 only_untagged, + u8 zero_placement_offset) { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; struct vfpf_vport_start_tlv *req; @@ -1135,9 +1208,10 @@ ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, u8 vport_id, req->tpa_mode = tpa_mode; req->max_buffers_per_cqe = max_buffers_per_cqe; req->only_untagged = only_untagged; + req->zero_placement_offset = zero_placement_offset; /* status blocks */ - for (i = 0; i < p_hwfn->vf_iov_info->acquire_resp.resc.num_sbs; i++) { + for (i = 0; i < p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs; i++) { struct ecore_sb_info *p_sb = p_hwfn->vf_iov_info->sbs_info[i]; if (p_sb) @@ -1168,7 +1242,7 @@ ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, u8 vport_id, enum _ecore_status_t ecore_vf_pf_vport_stop(struct ecore_hwfn *p_hwfn) { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; - struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp; + struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp; enum _ecore_status_t rc; /* clear mailbox and prep first tlv */ @@ -1227,7 +1301,7 @@ ecore_vf_handle_vp_update_is_needed(struct ecore_hwfn *p_hwfn, return !!p_data->sge_tpa_params; default: DP_INFO(p_hwfn, "Unexpected vport-update TLV[%d] %s\n", - tlv, qede_ecore_channel_tlvs_string[tlv]); + tlv, ecore_channel_tlvs_string[tlv]); return false; } } @@ -1247,19 +1321,19 @@ ecore_vf_handle_vp_update_tlvs_resp(struct ecore_hwfn *p_hwfn, continue; p_resp = (struct pfvf_def_resp_tlv *) - ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv); + ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, + tlv); if (p_resp && p_resp->hdr.status) DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "TLV[%d] type %s Configuration %s\n", - tlv, qede_ecore_channel_tlvs_string[tlv], + tlv, ecore_channel_tlvs_string[tlv], (p_resp && p_resp->hdr.status) ? 
"succeeded" : "failed"); } } -enum _ecore_status_t -ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, - struct ecore_sp_vport_update_params *p_params) +enum _ecore_status_t ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, + struct ecore_sp_vport_update_params *p_params) { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; struct vfpf_vport_update_tlv *req; @@ -1299,6 +1373,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, } } +#ifndef ECORE_UPSTREAM if (p_params->update_inner_vlan_removal_flg) { struct vfpf_vport_update_vlan_strip_tlv *p_vlan_tlv; @@ -1310,6 +1385,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, p_vlan_tlv->remove_vlan = p_params->inner_vlan_removal_flg; } +#endif if (p_params->update_tx_switching_flg) { struct vfpf_vport_update_tx_switch_tlv *p_tx_switch_tlv; @@ -1350,13 +1426,13 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, if (update_rx) { p_accept_tlv->update_rx_mode = update_rx; p_accept_tlv->rx_accept_filter = - p_params->accept_flags.rx_accept_filter; + p_params->accept_flags.rx_accept_filter; } if (update_tx) { p_accept_tlv->update_tx_mode = update_tx; p_accept_tlv->tx_accept_filter = - p_params->accept_flags.tx_accept_filter; + p_params->accept_flags.tx_accept_filter; } } @@ -1372,15 +1448,16 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, if (rss_params->update_rss_config) p_rss_tlv->update_rss_flags |= - VFPF_UPDATE_RSS_CONFIG_FLAG; + VFPF_UPDATE_RSS_CONFIG_FLAG; if (rss_params->update_rss_capabilities) p_rss_tlv->update_rss_flags |= - VFPF_UPDATE_RSS_CAPS_FLAG; + VFPF_UPDATE_RSS_CAPS_FLAG; if (rss_params->update_rss_ind_table) p_rss_tlv->update_rss_flags |= - VFPF_UPDATE_RSS_IND_TABLE_FLAG; + VFPF_UPDATE_RSS_IND_TABLE_FLAG; if (rss_params->update_rss_key) - p_rss_tlv->update_rss_flags |= VFPF_UPDATE_RSS_KEY_FLAG; + p_rss_tlv->update_rss_flags |= + VFPF_UPDATE_RSS_KEY_FLAG; p_rss_tlv->rss_enable = rss_params->rss_enable; p_rss_tlv->rss_caps = rss_params->rss_caps; @@ -1409,7 +1486,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, resp_size += sizeof(struct pfvf_def_resp_tlv); p_any_vlan_tlv->accept_any_vlan = p_params->accept_any_vlan; p_any_vlan_tlv->update_accept_any_vlan_flg = - p_params->update_accept_any_vlan_flg; + p_params->update_accept_any_vlan_flg; } if (p_params->sge_tpa_params) { @@ -1425,40 +1502,43 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, if (sge_tpa_params->update_tpa_en_flg) p_sge_tpa_tlv->update_sge_tpa_flags |= - VFPF_UPDATE_TPA_EN_FLAG; + VFPF_UPDATE_TPA_EN_FLAG; if (sge_tpa_params->update_tpa_param_flg) p_sge_tpa_tlv->update_sge_tpa_flags |= - VFPF_UPDATE_TPA_PARAM_FLAG; + VFPF_UPDATE_TPA_PARAM_FLAG; if (sge_tpa_params->tpa_ipv4_en_flg) - p_sge_tpa_tlv->sge_tpa_flags |= VFPF_TPA_IPV4_EN_FLAG; + p_sge_tpa_tlv->sge_tpa_flags |= + VFPF_TPA_IPV4_EN_FLAG; if (sge_tpa_params->tpa_ipv6_en_flg) - p_sge_tpa_tlv->sge_tpa_flags |= VFPF_TPA_IPV6_EN_FLAG; + p_sge_tpa_tlv->sge_tpa_flags |= + VFPF_TPA_IPV6_EN_FLAG; if (sge_tpa_params->tpa_pkt_split_flg) - p_sge_tpa_tlv->sge_tpa_flags |= VFPF_TPA_PKT_SPLIT_FLAG; + p_sge_tpa_tlv->sge_tpa_flags |= + VFPF_TPA_PKT_SPLIT_FLAG; if (sge_tpa_params->tpa_hdr_data_split_flg) p_sge_tpa_tlv->sge_tpa_flags |= - VFPF_TPA_HDR_DATA_SPLIT_FLAG; + VFPF_TPA_HDR_DATA_SPLIT_FLAG; if (sge_tpa_params->tpa_gro_consistent_flg) p_sge_tpa_tlv->sge_tpa_flags |= - VFPF_TPA_GRO_CONSIST_FLAG; + VFPF_TPA_GRO_CONSIST_FLAG; if (sge_tpa_params->tpa_ipv4_tunn_en_flg) p_sge_tpa_tlv->sge_tpa_flags |= - VFPF_TPA_TUNN_IPV4_EN_FLAG; + VFPF_TPA_TUNN_IPV4_EN_FLAG; if 
(sge_tpa_params->tpa_ipv6_tunn_en_flg) p_sge_tpa_tlv->sge_tpa_flags |= - VFPF_TPA_TUNN_IPV6_EN_FLAG; + VFPF_TPA_TUNN_IPV6_EN_FLAG; p_sge_tpa_tlv->tpa_max_aggs_num = - sge_tpa_params->tpa_max_aggs_num; + sge_tpa_params->tpa_max_aggs_num; p_sge_tpa_tlv->tpa_max_size = sge_tpa_params->tpa_max_size; p_sge_tpa_tlv->tpa_min_size_to_start = - sge_tpa_params->tpa_min_size_to_start; + sge_tpa_params->tpa_min_size_to_start; p_sge_tpa_tlv->tpa_min_size_to_cont = - sge_tpa_params->tpa_min_size_to_cont; + sge_tpa_params->tpa_min_size_to_cont; p_sge_tpa_tlv->max_buffers_per_cqe = - sge_tpa_params->max_buffers_per_cqe; + sge_tpa_params->max_buffers_per_cqe; } /* add list termination tlv */ @@ -1538,8 +1618,7 @@ void ecore_vf_pf_filter_mcast(struct ecore_hwfn *p_hwfn, } enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn, - struct ecore_filter_ucast - *p_ucast) + struct ecore_filter_ucast *p_ucast) { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; struct vfpf_ucast_filter_tlv *req; @@ -1548,8 +1627,7 @@ enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn, /* Sanitize */ if (p_ucast->opcode == ECORE_FILTER_MOVE) { - DP_NOTICE(p_hwfn, true, - "VFs don't support Moving of filters\n"); + DP_NOTICE(p_hwfn, true, "VFs don't support Moving of filters\n"); return ECORE_INVAL; } @@ -1557,7 +1635,7 @@ enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn, req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UCAST_FILTER, sizeof(*req)); req->opcode = (u8)p_ucast->opcode; req->type = (u8)p_ucast->type; - OSAL_MEMCPY(req->mac, p_ucast->mac, ETH_ALEN); + OSAL_MEMCPY(req->mac, p_ucast->mac, ECORE_ETH_ALEN); req->vlan = p_ucast->vlan; /* add list termination tlv */ @@ -1584,7 +1662,7 @@ enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn, enum _ecore_status_t ecore_vf_pf_int_cleanup(struct ecore_hwfn *p_hwfn) { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; - struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp; + struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp; enum _ecore_status_t rc; /* clear mailbox and prep first tlv */ @@ -1644,6 +1722,37 @@ enum _ecore_status_t ecore_vf_pf_get_coalesce(struct ecore_hwfn *p_hwfn, return rc; } +enum _ecore_status_t +ecore_vf_pf_bulletin_update_mac(struct ecore_hwfn *p_hwfn, u8 *p_mac) +{ + struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; + struct vfpf_bulletin_update_mac_tlv *p_req; + struct pfvf_def_resp_tlv *p_resp; + enum _ecore_status_t rc; + + if (!p_mac) + return ECORE_INVAL; + + /* clear mailbox and prep header tlv */ + p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_BULLETIN_UPDATE_MAC, + sizeof(*p_req)); + OSAL_MEMCPY(p_req->mac, p_mac, ECORE_ETH_ALEN); + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, + "Requesting bulletin update for MAC[%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx]\n", + p_mac[0], p_mac[1], p_mac[2], p_mac[3], p_mac[4], p_mac[5]); + + /* add list termination tlv */ + ecore_add_tlv(&p_iov->offset, + CHANNEL_TLV_LIST_END, + sizeof(struct channel_list_end_tlv)); + + p_resp = &p_iov->pf2vf_reply->default_resp; + rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp)); + ecore_vf_pf_req_end(p_hwfn, rc); + + return rc; +} + enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal, u16 tx_coal, struct ecore_queue_cid *p_cid) @@ -1723,12 +1832,16 @@ u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id) { struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; + u8 num_rxqs; if (!p_iov) { DP_NOTICE(p_hwfn, true, "vf_sriov_info isn't 
initialized\n"); return 0; } + ecore_vf_get_num_rxqs(p_hwfn, &num_rxqs); + + /* @DPDK */ return p_iov->acquire_resp.resc.hw_sbs[sb_id].hw_sb_id; } @@ -1746,10 +1859,44 @@ void ecore_vf_set_sb_info(struct ecore_hwfn *p_hwfn, DP_NOTICE(p_hwfn, true, "Can't configure SB %04x\n", sb_id); return; } - p_iov->sbs_info[sb_id] = p_sb; } +void ecore_vf_init_admin_mac(struct ecore_hwfn *p_hwfn, u8 *p_mac) +{ + struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; + struct ecore_bulletin_content temp; + u32 crc, crc_size; + + crc_size = sizeof(p_iov->bulletin.p_virt->crc); + OSAL_MEMCPY(&temp, p_iov->bulletin.p_virt, p_iov->bulletin.size); + + /* If version did not update, no need to do anything */ + if (temp.version == p_iov->bulletin_shadow.version) { + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, + "Bulletin version didn't update\n"); + return; + } + + /* Verify the bulletin we see is valid */ + crc = OSAL_CRC32(0, (u8 *)&temp + crc_size, + p_iov->bulletin.size - crc_size); + if (crc != temp.crc) { + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, + "Bulletin CRC validation failed\n"); + return; + } + + if ((temp.valid_bitmap & (1 << MAC_ADDR_FORCED)) || + (temp.valid_bitmap & (1 << VFPF_BULLETIN_MAC_ADDR))) { + OSAL_MEMCPY(p_mac, temp.mac, ECORE_ETH_ALEN); + DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, + "Admin MAC read from bulletin:[%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx]\n", + p_mac[0], p_mac[1], p_mac[2], p_mac[3], + p_mac[4], p_mac[5]); + } +} + enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn *p_hwfn, u8 *p_change) { @@ -1845,7 +1992,14 @@ void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn, &p_hwfn->vf_iov_info->bulletin_shadow); } -void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs) +void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn, + u8 *num_sbs) +{ + *num_sbs = p_hwfn->vf_iov_info->acquire_resp.resc.num_sbs; +} + +void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, + u8 *num_rxqs) { *num_rxqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs; } @@ -1856,11 +2010,18 @@ void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn, *num_txqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_txqs; } -void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac) +void ecore_vf_get_num_cids(struct ecore_hwfn *p_hwfn, + u8 *num_cids) +{ + *num_cids = p_hwfn->vf_iov_info->acquire_resp.resc.num_cids; +} + +void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, + u8 *port_mac) { OSAL_MEMCPY(port_mac, p_hwfn->vf_iov_info->acquire_resp.pfdev_info.port_mac, - ETH_ALEN); + ECORE_ETH_ALEN); } void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn, @@ -1872,17 +2033,8 @@ void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn, *num_vlan_filters = p_vf->acquire_resp.resc.num_vlan_filters; } -void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn, - u32 *num_sbs) -{ - struct ecore_vf_iov *p_vf; - - p_vf = p_hwfn->vf_iov_info; - *num_sbs = (u32)p_vf->acquire_resp.resc.num_sbs; -} - void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn, - u32 *num_mac_filters) + u8 *num_mac_filters) { struct ecore_vf_iov *p_vf = p_hwfn->vf_iov_info; @@ -1898,14 +2050,14 @@ bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac) return true; /* Forbid VF from changing a MAC enforced by PF */ - if (OSAL_MEMCMP(bulletin->mac, mac, ETH_ALEN)) + if (OSAL_MEMCMP(bulletin->mac, mac, ECORE_ETH_ALEN)) return false; return false; } -bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac, - u8 *p_is_forced) +LNX_STATIC bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, + u8 *dst_mac, u8 
*p_is_forced) { struct ecore_bulletin_content *bulletin; @@ -1921,7 +2073,7 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac, return false; } - OSAL_MEMCPY(dst_mac, bulletin->mac, ETH_ALEN); + OSAL_MEMCPY(dst_mac, bulletin->mac, ECORE_ETH_ALEN); return true; } @@ -1972,9 +2124,85 @@ void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn, *fw_eng = info->fw_eng; } +void ecore_vf_update_mac(struct ecore_hwfn *p_hwfn, u8 *mac) +{ + if (p_hwfn->vf_iov_info) + OSAL_MEMCPY(p_hwfn->vf_iov_info->mac_addr, mac, ECORE_ETH_ALEN); +} + #ifdef CONFIG_ECORE_SW_CHANNEL void ecore_vf_set_hw_channel(struct ecore_hwfn *p_hwfn, bool b_is_hw) { p_hwfn->vf_iov_info->b_hw_channel = b_is_hw; } #endif + +LNX_STATIC void ecore_vf_db_recovery_execute(struct ecore_hwfn *p_hwfn) +{ + struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; + struct ecore_bulletin_content *p_bulletin; + + p_bulletin = &p_iov->bulletin_shadow; + + /* Got a new indication from PF to execute doorbell overflow recovery */ + if (p_iov->db_recovery_execute_prev != p_bulletin->db_recovery_execute) + ecore_db_recovery_execute(p_hwfn); + + p_iov->db_recovery_execute_prev = p_bulletin->db_recovery_execute; +} + +LNX_STATIC enum _ecore_status_t +ecore_vf_async_event_req(struct ecore_hwfn *p_hwfn) +{ + struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info; + struct pfvf_async_event_resp_tlv *p_resp = OSAL_NULL; + struct vfpf_async_event_req_tlv *p_req = OSAL_NULL; + struct ecore_bulletin_content *p_bulletin; + enum _ecore_status_t rc; + u8 i; + + p_bulletin = &p_iov->bulletin_shadow; + if (p_iov->eq_completion_prev == p_bulletin->eq_completion) + return ECORE_SUCCESS; + + /* Update prev value for future comparison */ + p_iov->eq_completion_prev = p_bulletin->eq_completion; + + p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_ASYNC_EVENT, + sizeof(*p_req)); + ecore_add_tlv(&p_iov->offset, + CHANNEL_TLV_LIST_END, + sizeof(struct channel_list_end_tlv)); + + p_resp = &p_iov->pf2vf_reply->async_event_resp; + rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp)); + if (rc != ECORE_SUCCESS) + goto exit; + + if (p_resp->hdr.status != PFVF_STATUS_SUCCESS) { + rc = ECORE_INVAL; + goto exit; + } + + for (i = 0; i < p_resp->eq_count; i++) { + struct event_ring_entry *p_eqe = &p_resp->eqs[i]; + + ecore_event_ring_entry_dp(p_hwfn, p_eqe); + rc = ecore_async_event_completion(p_hwfn, p_eqe); + if (rc != ECORE_SUCCESS) + DP_NOTICE(p_hwfn, false, + "Processing async event completion failed - op %x prot %x echo %x\n", + p_eqe->opcode, p_eqe->protocol_id, + OSAL_LE16_TO_CPU(p_eqe->echo)); + } + + rc = ECORE_SUCCESS; +exit: + ecore_vf_pf_req_end(p_hwfn, rc); + + return rc; +} + +#ifdef _NTDDK_ +#pragma warning(pop) +#endif diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h index f027eba3e..098cbe6b3 100644 --- a/drivers/net/qede/base/ecore_vf.h +++ b/drivers/net/qede/base/ecore_vf.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_VF_H__ #define __ECORE_VF_H__ @@ -14,9 +14,11 @@ #include "ecore_dev_api.h" /* Default number of CIDs [total of both Rx and Tx] to be requested - * by default. + * by default, and maximum possible number. */ #define ECORE_ETH_VF_DEFAULT_NUM_CIDS (32) +#define ECORE_ETH_VF_MAX_NUM_CIDS (255) +#define ECORE_VF_RDMA_DEFAULT_CNQS 4 /* This data is held in the ecore_hwfn structure for VFs only. 
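
Both of the _prev shadow fields introduced below implement the same idea: the PF bumps a counter in the bulletin board (db_recovery_execute, eq_completion), and the VF acts only on the edge between the value it last saw and the new one. A compact model of that edge detection (struct trimmed to a single field):

	#include <stdint.h>
	#include <stdio.h>

	struct vf_shadow {
		uint8_t db_recovery_execute_prev;
	};

	/* Returns nonzero exactly once per PF-side increment of the
	 * bulletin counter, no matter how often the VF polls.
	 */
	static int db_recovery_needed(struct vf_shadow *s, uint8_t bulletin_val)
	{
		int changed = s->db_recovery_execute_prev != bulletin_val;

		s->db_recovery_execute_prev = bulletin_val;
		return changed;
	}

	int main(void)
	{
		struct vf_shadow s = { 0 };

		printf("%d\n", db_recovery_needed(&s, 1)); /* 1: PF bumped counter */
		printf("%d\n", db_recovery_needed(&s, 1)); /* 0: already handled */
		return 0;
	}
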
*/ struct ecore_vf_iov { @@ -63,8 +65,23 @@ struct ecore_vf_iov { /* retry count for VF acquire on channel timeout */ u8 acquire_retry_cnt; + + /* Compare this value with bulletin board's db_recovery_execute to know + * whether doorbell overflow recovery is needed. + */ + u8 db_recovery_execute_prev; + + /* Compare this value with bulletin board's eq_completion to know + * whether new async events completions are pending. + */ + u16 eq_completion_prev; + + u8 mac_addr[ECORE_ETH_ALEN]; + + bool dpm_enabled; }; +#ifdef CONFIG_ECORE_SRIOV /** * @brief VF - Get coalesce per VF's relative queue. * @@ -77,7 +94,6 @@ enum _ecore_status_t ecore_vf_pf_get_coalesce(struct ecore_hwfn *p_hwfn, u16 *p_coal, struct ecore_queue_cid *p_cid); -enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn); /** * @brief VF - Set Rx/Tx coalesce per VF's relative queue. * Coalesce value '0' will omit the configuration. @@ -92,7 +108,6 @@ enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal, u16 tx_coal, struct ecore_queue_cid *p_cid); -#ifdef CONFIG_ECORE_SRIOV /** * @brief hw preparation for VF * sends ACQUIRE message @@ -136,7 +151,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn, * @param p_cid * @param bd_chain_phys_addr - physical address of tx chain * @param pp_doorbell - pointer to address to which to - * write the doorbell too.. + * write the doorbell too.. * * @return enum _ecore_status_t */ @@ -200,9 +215,8 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn, * * @return enum _ecore_status_t */ -enum _ecore_status_t -ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, - struct ecore_sp_vport_update_params *p_params); +enum _ecore_status_t ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn, + struct ecore_sp_vport_update_params *p_params); /** * @brief VF - send a close message to PF @@ -254,6 +268,7 @@ void ecore_vf_set_sb_info(struct ecore_hwfn *p_hwfn, * @param tpa_mode * @param max_buffers_per_cqe, * @param only_untagged - default behavior regarding vlan acceptance + * @param zero_placement_offset - if set, zero padding will be inserted * * @return enum _ecore_status */ @@ -264,7 +279,8 @@ enum _ecore_status_t ecore_vf_pf_vport_start( u8 inner_vlan_removal, enum ecore_tpa_mode tpa_mode, u8 max_buffers_per_cqe, - u8 only_untagged); + u8 only_untagged, + u8 zero_placement_offset); /** * @brief ecore_vf_pf_vport_stop - stop the VF's vport @@ -317,23 +333,29 @@ void __ecore_vf_get_link_state(struct ecore_mcp_link_state *p_link, */ void __ecore_vf_get_link_caps(struct ecore_mcp_link_capabilities *p_link_caps, struct ecore_bulletin_content *p_bulletin); - enum _ecore_status_t ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn, struct ecore_tunnel_info *p_tunn); - void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun); -u32 ecore_vf_hw_bar_size(struct ecore_hwfn *p_hwfn, - enum BAR_ID bar_id); - +u32 ecore_vf_hw_bar_size(struct ecore_hwfn *p_hwfn, enum BAR_ID bar_id); +/** + * @brief - Ask PF to update the MAC address in it's bulletin board + * + * @param p_mac - mac address to be updated in bulletin board + */ +enum _ecore_status_t +ecore_vf_pf_bulletin_update_mac(struct ecore_hwfn *p_hwfn, u8 *p_mac); /** * @brief - ecore_vf_pf_update_mtu Update MTU for VF. + * vport should not be active before issuing this function. 
* - * @param p_hwfn * @param - mtu */ enum _ecore_status_t ecore_vf_pf_update_mtu(struct ecore_hwfn *p_hwfn, u16 mtu); + +enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn); #endif + #endif /* __ECORE_VF_H__ */ diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h index 43951a9a3..b664dafbd 100644 --- a/drivers/net/qede/base/ecore_vf_api.h +++ b/drivers/net/qede/base/ecore_vf_api.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_VF_API_H__ #define __ECORE_VF_API_H__ @@ -52,6 +52,15 @@ void ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn, void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn, struct ecore_mcp_link_capabilities *p_link_caps); +/** + * @brief Get number of status blocks allocated for VF by ecore + * + * @param p_hwfn + * @param num_sbs - allocated status blocks + */ +void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn, + u8 *num_sbs); + /** * @brief Get number of Rx queues allocated for VF by ecore * @@ -70,6 +79,29 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn, u8 *num_txqs); +/** + * @brief Get number of available connections [both Rx and Tx] for VF + * + * @param p_hwfn + * @param num_cids - allocated number of connections + */ +void ecore_vf_get_num_cids(struct ecore_hwfn *p_hwfn, u8 *num_cids); + +/** + * @brief Get number of cnqs allocated for VF by ecore + * + * @param p_hwfn + * @param num_cnqs - allocated cnqs + */ +void ecore_vf_get_num_cnqs(struct ecore_hwfn *p_hwfn, u8 *num_cnqs); + +/** + * @brief Returns the relative sb_id for the first CNQ + * + * @param p_hwfn + */ +u8 ecore_vf_rdma_cnq_sb_start_id(struct ecore_hwfn *p_hwfn); + /** * @brief Get port mac address for VF * @@ -88,9 +120,6 @@ void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn, u8 *num_vlan_filters); -void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn, - u32 *num_sbs); - /** * @brief Get number of MAC filters allocated for VF by ecore * @@ -98,7 +127,7 @@ void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn, * @param num_rxqs - allocated MAC filters */ void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn, - u32 *num_mac_filters); + u8 *num_mac_filters); /** * @brief Check if VF can set a MAC address @@ -110,6 +139,17 @@ void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn, */ bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac); +/** + * @brief Init the admin mac for the VF from the bulletin + * without updating VF shadow copy of bulletin + * + * @param p_hwfn + * @param p_mac + * + * @return void + */ +void ecore_vf_init_admin_mac(struct ecore_hwfn *p_hwfn, u8 *p_mac); + #ifndef LINUX_REMOVE /** * @brief Copy forced MAC address from bulletin board @@ -117,7 +157,7 @@ bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac); * @param hwfn * @param dst_mac * @param p_is_forced - out param which indicate in case mac - * exist if it forced or not. + * exist if it forced or not. * * @return bool - return true if mac exist and false if * not. 
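
The bulletin lookup this doc comment describes reduces to a pair of bit tests on valid_bitmap: MAC_ADDR_FORCED marks a PF-enforced address, VFPF_BULLETIN_MAC_ADDR one the PF merely suggests. A sketch of that decision (bit positions and the address length are illustrative; the authoritative values are enum ecore_bulletin_bit and ECORE_ETH_ALEN):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define MAC_ADDR_FORCED		0 /* placeholder bit position */
	#define VFPF_BULLETIN_MAC_ADDR	5
	#define MAC_LEN			6

	static bool bulletin_get_mac(uint64_t valid_bitmap,
				     const uint8_t *bulletin_mac,
				     uint8_t *dst_mac, uint8_t *p_is_forced)
	{
		if (valid_bitmap & (1ull << MAC_ADDR_FORCED))
			*p_is_forced = 1;          /* PF enforces this MAC */
		else if (valid_bitmap & (1ull << VFPF_BULLETIN_MAC_ADDR))
			*p_is_forced = 0;          /* PF only suggests it */
		else
			return false;              /* no MAC published */

		memcpy(dst_mac, bulletin_mac, MAC_LEN);
		return true;
	}

	int main(void)
	{
		const uint8_t pf_mac[MAC_LEN] = { 0x02, 0, 0, 0, 0, 1 };
		uint8_t mac[MAC_LEN], forced;

		if (bulletin_get_mac(1ull << MAC_ADDR_FORCED, pf_mac, mac, &forced))
			printf("got MAC, forced=%u\n", forced); /* forced=1 */
		return 0;
	}
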
@@ -145,11 +185,17 @@ bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid); */ bool ecore_vf_get_pre_fp_hsi(struct ecore_hwfn *p_hwfn); +/** + * @brief Execute doorbell overflow recovery if needed + * + * @param p_hwfn + */ +void ecore_vf_db_recovery_execute(struct ecore_hwfn *p_hwfn); + #endif /** - * @brief Set firmware version information in dev_info from VFs acquire - * response tlv + * @brief Set firmware version information in dev_info from VFs acquire response tlv * * @param p_hwfn * @param fw_major @@ -164,6 +210,7 @@ void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn, u16 *fw_eng); void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn, u16 *p_vxlan_port, u16 *p_geneve_port); +void ecore_vf_update_mac(struct ecore_hwfn *p_hwfn, u8 *mac); #ifdef CONFIG_ECORE_SW_CHANNEL /** @@ -177,5 +224,12 @@ void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn, */ void ecore_vf_set_hw_channel(struct ecore_hwfn *p_hwfn, bool b_is_hw); #endif + +/** + * @brief request and process completed async events + * + * @param p_hwfn + */ +enum _ecore_status_t ecore_vf_async_event_req(struct ecore_hwfn *p_hwfn); #endif #endif diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h index f92dc428a..e03db7c2b 100644 --- a/drivers/net/qede/base/ecore_vfpf_if.h +++ b/drivers/net/qede/base/ecore_vfpf_if.h @@ -1,14 +1,13 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ECORE_VF_PF_IF_H__ #define __ECORE_VF_PF_IF_H__ -/* @@@ TBD MichalK this should be HSI? */ -#define T_ETH_INDIRECTION_TABLE_SIZE 128 +#define T_ETH_INDIRECTION_TABLE_SIZE 128 /* @@@ TBD MichalK this should be HSI? */ #define T_ETH_RSS_KEY_SIZE 10 /* @@@ TBD this should be HSI? */ /*********************************************** @@ -33,6 +32,15 @@ struct hw_sb_info { u8 padding[5]; }; +/* ECORE GID can be used as IPv4/6 address in RoCE v2 */ +union vfpf_roce_gid { + u8 bytes[16]; + u16 words[8]; + u32 dwords[4]; + u64 qwords[2]; + u32 ipv4_addr; +}; + /*********************************************** * * HW VF-PF channel definitions @@ -42,6 +50,11 @@ struct hw_sb_info { **/ #define TLV_BUFFER_SIZE 1024 +#define PFVF_MAX_QUEUES_PER_VF 16 +#define PFVF_MAX_CNQS_PER_VF 16 +#define PFVF_MAX_SBS_PER_VF \ + (PFVF_MAX_QUEUES_PER_VF + PFVF_MAX_CNQS_PER_VF) + /* vf pf channel tlvs */ /* general tlv header (used for both vf->pf request and pf->vf response) */ struct channel_tlv { @@ -88,8 +101,7 @@ struct vfpf_acquire_tlv { * and allow them to interact. */ #endif -/* VF pre-FP hsi version */ -#define VFPF_ACQUIRE_CAP_PRE_FP_HSI (1 << 0) +#define VFPF_ACQUIRE_CAP_PRE_FP_HSI (1 << 0) /* VF pre-FP hsi version */ #define VFPF_ACQUIRE_CAP_100G (1 << 1) /* VF can support 100g */ /* A requirement for supporting multi-Tx queues on a single queue-zone, @@ -105,6 +117,21 @@ struct vfpf_acquire_tlv { * QUEUE_QIDS is set. */ #define VFPF_ACQUIRE_CAP_PHYSICAL_BAR (1 << 3) + + /* Let the PF know this VF needs DORQ flush in case of overflow */ +#define VFPF_ACQUIRE_CAP_EDPM (1 << 4) + + /* Below capabilities to let PF know that VF supports RDMA. 
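
These request bits and the PFVF_ACQUIRE_CAP_* bits in the response form two independent bit spaces; a feature is only used when the VF asked for it and the PF acknowledged it, as the acquire loop earlier in this patch does for RoCE and EDPM. A generic standalone sketch of negotiating down to the common subset (the flag names and positions here are placeholders, not the driver's values):

	#include <stdint.h>
	#include <stdio.h>

	#define CAP_QUEUE_QIDS  (1ull << 0)
	#define CAP_EDPM        (1ull << 1)
	#define CAP_ROCE        (1ull << 2)
	#define CAP_EQ          (1ull << 3)

	int main(void)
	{
		uint64_t vf_req = CAP_QUEUE_QIDS | CAP_EDPM | CAP_ROCE | CAP_EQ;
		uint64_t pf_ack = CAP_QUEUE_QIDS | CAP_ROCE; /* older PF: no EDPM/EQ */
		uint64_t usable = vf_req & pf_ack;

		/* Only features both sides advertised may be used afterwards. */
		printf("EDPM: %d\n", !!(usable & CAP_EDPM)); /* 0 */
		printf("RoCE: %d\n", !!(usable & CAP_ROCE)); /* 1 */
		return 0;
	}
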
*/ +#define VFPF_ACQUIRE_CAP_ROCE (1 << 5) +#define VFPF_ACQUIRE_CAP_IWARP (1 << 6) + + + /* Requesting and receiving EQs from PF */ +#define VFPF_ACQUIRE_CAP_EQ (1 << 7) + + /* Let the PF know this VF supports XRC mode */ +#define VFPF_ACQUIRE_CAP_XRC (1 << 8) + u64 capabilities; u8 fw_major; u8 fw_minor; @@ -186,6 +213,13 @@ struct pfvf_acquire_resp_tlv { /* PF expects queues to be received with additional qids */ #define PFVF_ACQUIRE_CAP_QUEUE_QIDS (1 << 3) + /* Below capabilities to let VF know what PF/chip supports. */ +#define PFVF_ACQUIRE_CAP_ROCE (1 << 4) +#define PFVF_ACQUIRE_CAP_IWARP (1 << 5) + + /* Let VF know whether it supports EDPM */ +#define PFVF_ACQUIRE_CAP_EDPM (1 << 6) + u16 db_size; u8 indices_per_sb; u8 os_type; @@ -199,7 +233,7 @@ struct pfvf_acquire_resp_tlv { struct pfvf_stats_info stats_info; - u8 port_mac[ETH_ALEN]; + u8 port_mac[ECORE_ETH_ALEN]; /* It's possible PF had to configure an older fastpath HSI * [in case VF is newer than PF]. This is communicated back @@ -215,9 +249,7 @@ struct pfvf_acquire_resp_tlv { * this struct with suggested amount of resources for next * acquire request */ - #define PFVF_MAX_QUEUES_PER_VF 16 - #define PFVF_MAX_SBS_PER_VF 16 - struct hw_sb_info hw_sbs[PFVF_MAX_SBS_PER_VF]; + struct hw_sb_info hw_sbs[PFVF_MAX_QUEUES_PER_VF]; u8 hw_qid[PFVF_MAX_QUEUES_PER_VF]; u8 cid[PFVF_MAX_QUEUES_PER_VF]; @@ -253,7 +285,7 @@ struct vfpf_qid_tlv { /* Soft FLR req */ struct vfpf_soft_flr_tlv { - struct vfpf_first_tlv first_tlv; + struct vfpf_first_tlv first_tlv; u32 reserved1; u32 reserved2; }; @@ -349,9 +381,9 @@ struct vfpf_q_mac_vlan_filter { u32 flags; #define VFPF_Q_FILTER_DEST_MAC_VALID 0x01 #define VFPF_Q_FILTER_VLAN_TAG_VALID 0x02 - #define VFPF_Q_FILTER_SET_MAC 0x100 /* set/clear */ + #define VFPF_Q_FILTER_SET_MAC 0x100 /* set/clear */ - u8 mac[ETH_ALEN]; + u8 mac[ECORE_ETH_ALEN]; u16 vlan_tag; u8 padding[4]; @@ -361,7 +393,7 @@ struct vfpf_q_mac_vlan_filter { struct vfpf_vport_start_tlv { struct vfpf_first_tlv first_tlv; - u64 sb_addr[PFVF_MAX_SBS_PER_VF]; + u64 sb_addr[PFVF_MAX_QUEUES_PER_VF]; u32 tpa_mode; u16 dep1; @@ -373,7 +405,8 @@ struct vfpf_vport_start_tlv { u8 only_untagged; u8 max_buffers_per_cqe; - u8 padding[4]; + u8 zero_placement_offset; + u8 padding[3]; }; /* Extended tlvs - need to add rss, mcast, accept mode tlvs */ @@ -468,7 +501,7 @@ struct vfpf_ucast_filter_tlv { u8 opcode; u8 type; - u8 mac[ETH_ALEN]; + u8 mac[ECORE_ETH_ALEN]; u16 vlan; u16 padding[3]; @@ -490,7 +523,8 @@ struct vfpf_update_tunn_param_tlv { u8 update_vxlan_port; u16 geneve_port; u16 vxlan_port; - u8 padding[2]; + u8 update_non_l2_vxlan; + u8 non_l2_vxlan_enable; }; struct pfvf_update_tunn_param_tlv { @@ -538,7 +572,7 @@ struct pfvf_read_coal_resp_tlv { struct vfpf_bulletin_update_mac_tlv { struct vfpf_first_tlv first_tlv; - u8 mac[ETH_ALEN]; + u8 mac[ECORE_ETH_ALEN]; u8 padding[2]; }; @@ -548,6 +582,22 @@ struct vfpf_update_mtu_tlv { u8 padding[6]; }; +struct vfpf_async_event_req_tlv { + struct vfpf_first_tlv first_tlv; + u32 reserved1; + u32 reserved2; +}; + +#define ECORE_PFVF_EQ_COUNT 32 + +struct pfvf_async_event_resp_tlv { + struct pfvf_tlv hdr; + + struct event_ring_entry eqs[ECORE_PFVF_EQ_COUNT]; + u8 eq_count; + u8 padding[7]; +}; + union vfpf_tlvs { struct vfpf_first_tlv first_tlv; struct vfpf_acquire_tlv acquire; @@ -564,17 +614,19 @@ union vfpf_tlvs { struct vfpf_read_coal_req_tlv read_coal_req; struct vfpf_bulletin_update_mac_tlv bulletin_update_mac; struct vfpf_update_mtu_tlv update_mtu; - struct vfpf_soft_flr_tlv soft_flr; + struct 
vfpf_async_event_req_tlv async_event_req; + struct vfpf_soft_flr_tlv vf_soft_flr; struct tlv_buffer_size tlv_buf_size; }; union pfvf_tlvs { - struct pfvf_def_resp_tlv default_resp; - struct pfvf_acquire_resp_tlv acquire_resp; - struct tlv_buffer_size tlv_buf_size; - struct pfvf_start_queue_resp_tlv queue_start; - struct pfvf_update_tunn_param_tlv tunn_param_resp; - struct pfvf_read_coal_resp_tlv read_coal_resp; + struct pfvf_def_resp_tlv default_resp; + struct pfvf_acquire_resp_tlv acquire_resp; + struct tlv_buffer_size tlv_buf_size; + struct pfvf_start_queue_resp_tlv queue_start; + struct pfvf_update_tunn_param_tlv tunn_param_resp; + struct pfvf_read_coal_resp_tlv read_coal_resp; + struct pfvf_async_event_resp_tlv async_event_resp; }; /* This is a structure which is allocated in the VF, which the PF may update @@ -599,7 +651,7 @@ enum ecore_bulletin_bit { /* Alert the VF that suggested mac was sent by the PF. * MAC_ADDR will be disabled in case MAC_ADDR_FORCED is set */ - VFPF_BULLETIN_MAC_ADDR = 5 + VFPF_BULLETIN_MAC_ADDR = 5, }; struct ecore_bulletin_content { @@ -612,7 +664,7 @@ struct ecore_bulletin_content { u64 valid_bitmap; /* used for MAC_ADDR or MAC_ADDR_FORCED */ - u8 mac[ETH_ALEN]; + u8 mac[ECORE_ETH_ALEN]; /* If valid, 1 => only untagged Rx if no vlan is configured */ u8 default_only_untagged; @@ -647,7 +699,9 @@ struct ecore_bulletin_content { u8 sfp_tx_fault; u16 vxlan_udp_port; u16 geneve_udp_port; - u8 padding4[2]; + + /* Pending async events completions */ + u16 eq_completion; u32 speed; u32 partner_adv_speed; @@ -656,7 +710,9 @@ struct ecore_bulletin_content { /* Forced vlan */ u16 pvid; - u16 padding5; + + u8 db_recovery_execute; + u8 padding4; }; struct ecore_bulletin { @@ -667,6 +723,7 @@ struct ecore_bulletin { enum { /*!!!!! Make sure to update STRINGS structure accordingly !!!!!*/ +/*!!!!! Also make sure that new TLVs will be inserted last (before CHANNEL_TLV_MAX) !!!!!*/ CHANNEL_TLV_NONE, /* ends tlv sequence */ CHANNEL_TLV_ACQUIRE, @@ -739,6 +796,6 @@ enum { /*!!!!! Make sure to update STRINGS structure accordingly !!!!!*/ }; -extern const char *qede_ecore_channel_tlvs_string[]; +extern const char *ecore_channel_tlvs_string[]; #endif /* __ECORE_VF_PF_IF_H__ */ diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h index 1ca11c473..706443838 100644 --- a/drivers/net/qede/base/eth_common.h +++ b/drivers/net/qede/base/eth_common.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - #ifndef __ETH_COMMON__ #define __ETH_COMMON__ /********************/ @@ -13,9 +13,7 @@ /* FP HSI version. FP HSI is compatible if (fwVer.major == drvVer.major && * fwVer.minor >= drvVer.minor) */ -/* ETH FP HSI Major version */ -#define ETH_HSI_VER_MAJOR 3 -/* ETH FP HSI Minor version */ +#define ETH_HSI_VER_MAJOR 3 /* ETH FP HSI Major version */ #define ETH_HSI_VER_MINOR 11 /* ETH FP HSI Minor version */ /* Alias for 8.7.x.x/8.8.x.x ETH FP HSI MINOR version. In this version driver @@ -35,12 +33,12 @@ #define ETH_RX_CQE_PAGE_SIZE_BYTES 4096 #define ETH_RX_NUM_NEXT_PAGE_BDS 2 -/* Limitation for Tunneled LSO Packets on the offset (in bytes) of the inner IP - * header (relevant to LSO for tunneled packet): +/* Limitation for Tunneled LSO Packets on the offset (in bytes) of the inner IP header (relevant to + * LSO for tunneled packet): */ -/* Offset is limited to 253 bytes (inclusive). 
*/ +/* E4 Only: Offset is limited to 253 bytes (inclusive). */ #define ETH_MAX_TUNN_LSO_INNER_IPV4_OFFSET 253 -/* Offset is limited to 251 bytes (inclusive). */ +/* E4 Only: Offset is limited to 251 bytes (inclusive). */ #define ETH_MAX_TUNN_LSO_INNER_IPV6_OFFSET 251 #define ETH_TX_MIN_BDS_PER_NON_LSO_PKT 1 @@ -70,14 +68,16 @@ /* Value for a connection for which same as last feature is disabled */ #define ETH_TX_INACTIVE_SAME_AS_LAST 0xFFFF -/* Maximum number of statistics counters */ -#define ETH_NUM_STATISTIC_COUNTERS MAX_NUM_VPORTS -/* Maximum number of statistics counters when doubled VF zone used */ -#define ETH_NUM_STATISTIC_COUNTERS_DOUBLE_VF_ZONE \ - (ETH_NUM_STATISTIC_COUNTERS - MAX_NUM_VFS / 2) -/* Maximum number of statistics counters when quad VF zone used */ -#define ETH_NUM_STATISTIC_COUNTERS_QUAD_VF_ZONE \ - (ETH_NUM_STATISTIC_COUNTERS - 3 * MAX_NUM_VFS / 4) +/* Maximum number of statistics counters for E4 */ +#define ETH_NUM_STATISTIC_COUNTERS_E4 MAX_NUM_VPORTS_E4 +/* Maximum number of statistics counters for E5 */ +#define ETH_NUM_STATISTIC_COUNTERS_E5 MAX_NUM_VPORTS_E5 +/* E4 Only: Maximum number of statistics counters when doubled VF zone used */ +#define ETH_NUM_STATISTIC_COUNTERS_DOUBLE_VF_ZONE \ + (ETH_NUM_STATISTIC_COUNTERS_E4 - MAX_NUM_VFS_E4 / 2) +/* E4 Only: Maximum number of statistics counters when quad VF zone used */ +#define ETH_NUM_STATISTIC_COUNTERS_QUAD_VF_ZONE \ + (ETH_NUM_STATISTIC_COUNTERS_E4 - 3 * MAX_NUM_VFS_E4 / 4) /* Maximum number of buffers, used for RX packet placement */ #define ETH_RX_MAX_BUFF_PER_PKT 5 @@ -122,13 +122,15 @@ /* Control frame check constants */ /* Number of etherType values configured by driver for control frame check */ -#define ETH_CTL_FRAME_ETH_TYPE_NUM 4 +#define ETH_CTL_FRAME_ETH_TYPE_NUM 4 /* GFS constants */ #define ETH_GFT_TRASHCAN_VPORT 0x1FF /* GFT drop flow vport number */ #define ETH_GFS_NUM_OF_ACTIONS 10 /* Maximum number of GFS actions supported */ +#define ETH_TGSRC_CTX_SIZE 6 /*Size in QREGS*/ +#define ETH_RGSRC_CTX_SIZE 6 /*Size in QREGS*/ /* * Destination port mode @@ -156,39 +158,34 @@ enum eth_addr_type { struct eth_tx_1st_bd_flags { u8 bitfields; -/* Set to 1 in the first BD. (for debug) */ -#define ETH_TX_1ST_BD_FLAGS_START_BD_MASK 0x1 +#define ETH_TX_1ST_BD_FLAGS_START_BD_MASK 0x1 /* Set to 1 in the first BD. (for debug) */ #define ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT 0 /* Do not allow additional VLAN manipulations on this packet. */ #define ETH_TX_1ST_BD_FLAGS_FORCE_VLAN_MODE_MASK 0x1 #define ETH_TX_1ST_BD_FLAGS_FORCE_VLAN_MODE_SHIFT 1 -/* Recalculate IP checksum. For tunneled packet - relevant to inner header. */ +/* Recalculate IP checksum. For tunneled packet - relevant to inner header. Note: For LSO packets, + * must be set. + */ #define ETH_TX_1ST_BD_FLAGS_IP_CSUM_MASK 0x1 #define ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT 2 -/* Recalculate TCP/UDP checksum. - * For tunneled packet - relevant to inner header. - */ +/* Recalculate TCP/UDP checksum. For tunneled packet - relevant to inner header. */ #define ETH_TX_1ST_BD_FLAGS_L4_CSUM_MASK 0x1 #define ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT 3 -/* If set, insert VLAN tag from vlan field to the packet. - * For tunneled packet - relevant to outer header. +/* If set, insert VLAN tag from vlan field to the packet. For tunneled packet - relevant to outer + * header. */ #define ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_MASK 0x1 #define ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT 4 -/* If set, this is an LSO packet. 
Note: For Tunneled LSO packets, the offset of - * the inner IPV4 (and IPV6) header is limited to 253 (and 251 respectively) - * bytes, inclusive. +/* If set, this is an LSO packet. Note: For Tunneled LSO packets, the offset of the inner IPV4 (and + * IPV6) header is limited to 253 (and 251 respectively) bytes, inclusive. */ #define ETH_TX_1ST_BD_FLAGS_LSO_MASK 0x1 #define ETH_TX_1ST_BD_FLAGS_LSO_SHIFT 5 /* Recalculate Tunnel IP Checksum (if Tunnel IP Header is IPv4) */ #define ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK 0x1 #define ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT 6 -/* Recalculate Tunnel UDP/GRE Checksum (Depending on Tunnel Type) */ -#define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK 0x1 -/* Recalculate Tunnel UDP/GRE Checksum (Depending on Tunnel Type). In case of - * GRE tunnel, this flag means GRE CSO, and in this case GRE checksum field - * Must be present. +/* Recalculate Tunnel UDP/GRE Checksum (Depending on Tunnel Type). In case of GRE tunnel, this flag + * means GRE CSO, and in this case GRE checksum field Must be present. */ #define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK 0x1 #define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_SHIFT 7 @@ -198,10 +195,9 @@ struct eth_tx_1st_bd_flags { * The parsing information data for the first tx bd of a given packet. */ struct eth_tx_data_1st_bd { -/* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */ - __le16 vlan; -/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least - * 3 in LSO (or Tunnel with IPv6+ext) packet. + __le16 vlan /* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */; +/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least 3 in LSO (or Tunnel + * with IPv6+ext) packet. */ u8 nbds; struct eth_tx_1st_bd_flags bd_flags; @@ -226,47 +222,47 @@ struct eth_tx_data_2nd_bd { /* For Tunnel header with IPv6 ext. - Inner L2 Header Size (in 2-byte WORDs) */ #define ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_MASK 0xF #define ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_SHIFT 0 -/* For Tunnel header with IPv6 ext. - Inner L2 Header MAC DA Type - * (use enum eth_addr_type) - */ +/* For Tunnel header with IPv6 ext. - Inner L2 Header MAC DA Type (use enum eth_addr_type) */ #define ETH_TX_DATA_2ND_BD_TUNN_INNER_ETH_TYPE_MASK 0x3 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_ETH_TYPE_SHIFT 4 -/* Destination port mode. (use enum dest_port_mode) */ -#define ETH_TX_DATA_2ND_BD_DEST_PORT_MODE_MASK 0x3 -#define ETH_TX_DATA_2ND_BD_DEST_PORT_MODE_SHIFT 6 +/* Destination port mode, applicable only when tx_dst_port_mode_config == + * ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD. (use enum dst_port_mode) + */ +#define ETH_TX_DATA_2ND_BD_DST_PORT_MODE_MASK 0x3 +#define ETH_TX_DATA_2ND_BD_DST_PORT_MODE_SHIFT 6 /* Should be 0 in all the BDs, except the first one. (for debug) */ #define ETH_TX_DATA_2ND_BD_START_BD_MASK 0x1 #define ETH_TX_DATA_2ND_BD_START_BD_SHIFT 8 /* For Tunnel header with IPv6 ext. - Tunnel Type (use enum eth_tx_tunn_type) */ #define ETH_TX_DATA_2ND_BD_TUNN_TYPE_MASK 0x3 #define ETH_TX_DATA_2ND_BD_TUNN_TYPE_SHIFT 9 -/* For LSO / Tunnel header with IPv6+ext - Set if inner header is IPv6 */ +/* Relevant only for LSO. When Tunnel is with IPv6+ext - Set if inner header is IPv6 */ #define ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_MASK 0x1 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_SHIFT 11 -/* In tunneling mode - Set to 1 when the Inner header is IPv6 with extension. - * Otherwise set to 1 if the header is IPv6 with extension. 
+/* In tunneling mode - Set to 1 when the Inner header is IPv6 with extension. Otherwise set to 1 if + * the header is IPv6 with extension. */ #define ETH_TX_DATA_2ND_BD_IPV6_EXT_MASK 0x1 #define ETH_TX_DATA_2ND_BD_IPV6_EXT_SHIFT 12 -/* Set to 1 if Tunnel (outer = encapsulating) header has IPv6 ext. (Note: 3rd BD - * is required, hence EDPM does not support Tunnel [outer] header with Ipv6Ext) +/* Set to 1 if Tunnel (outer = encapsulating) header has IPv6 ext. (Note: 3rd BD is required, hence + * EDPM does not support Tunnel [outer] header with Ipv6Ext) */ #define ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_MASK 0x1 #define ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_SHIFT 13 -/* Set if (inner) L4 protocol is UDP. (Required when IPv6+ext (or tunnel with - * inner or outer Ipv6+ext) and l4_csum is set) +/* Set if (inner) L4 protocol is UDP. (Required when IPv6+ext (or tunnel with inner or outer + * Ipv6+ext) and l4_csum is set) */ #define ETH_TX_DATA_2ND_BD_L4_UDP_MASK 0x1 #define ETH_TX_DATA_2ND_BD_L4_UDP_SHIFT 14 -/* The pseudo header checksum type in the L4 checksum field. Required when - * IPv6+ext and l4_csum is set. (use enum eth_l4_pseudo_checksum_mode) +/* The pseudo header checksum type in the L4 checksum field. Required when IPv6+ext and l4_csum is + * set. (use enum eth_l4_pseudo_checksum_mode) */ #define ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_MASK 0x1 #define ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_SHIFT 15 __le16 bitfields2; -/* For inner/outer header IPv6+ext - (inner) L4 header offset (in 2-byte WORDs). - * For regular packet - offset from the beginning of the packet. For tunneled - * packet - offset from the beginning of the inner header +/* For inner/outer header IPv6+ext - (inner) L4 header offset (in 2-byte WORDs). For regular packet + * - offset from the beginning of the packet. For tunneled packet - offset from the beginning of the + * inner header */ #define ETH_TX_DATA_2ND_BD_L4_HDR_START_OFFSET_W_MASK 0x1FFF #define ETH_TX_DATA_2ND_BD_L4_HDR_START_OFFSET_W_SHIFT 0 @@ -278,40 +274,27 @@ struct eth_tx_data_2nd_bd { * Firmware data for L2-EDPM packet. */ struct eth_edpm_fw_data { -/* Parsing information data from the 1st BD. */ - struct eth_tx_data_1st_bd data_1st_bd; -/* Parsing information data from the 2nd BD. */ - struct eth_tx_data_2nd_bd data_2nd_bd; + struct eth_tx_data_1st_bd data_1st_bd /* Parsing information data from the 1st BD. */; + struct eth_tx_data_2nd_bd data_2nd_bd /* Parsing information data from the 2nd BD. */; __le32 reserved; }; -/* - * FW debug. - */ -struct eth_fast_path_cqe_fw_debug { - __le16 reserved2 /* FW reserved. */; -}; - - /* * tunneling parsing flags */ struct eth_tunnel_parsing_flags { u8 flags; -/* 0 - no tunneling, 1 - GENEVE, 2 - GRE, 3 - VXLAN - * (use enum eth_rx_tunn_type) - */ +/* 0 - no tunneling, 1 - GENEVE, 2 - GRE, 3 - VXLAN (use enum eth_rx_tunn_type) */ #define ETH_TUNNEL_PARSING_FLAGS_TYPE_MASK 0x3 #define ETH_TUNNEL_PARSING_FLAGS_TYPE_SHIFT 0 -/* If it s not an encapsulated packet then put 0x0. If it s an encapsulated - * packet but the tenant-id doesn t exist then put 0x0. Else put 0x1 - * +/* If it s not an encapsulated packet then put 0x0. If it s an encapsulated packet but the + * tenant-id doesn t exist then put 0x0. 
Else put 0x1 */ #define ETH_TUNNEL_PARSING_FLAGS_TENNANT_ID_EXIST_MASK 0x1 #define ETH_TUNNEL_PARSING_FLAGS_TENNANT_ID_EXIST_SHIFT 2 -/* Type of the next header above the tunneling: 0 - unknown, 1 - L2, 2 - Ipv4, - * 3 - IPv6 (use enum tunnel_next_protocol) +/* Type of the next header above the tunneling: 0 - unknown, 1 - L2, 2 - Ipv4, 3 - IPv6 (use enum + * tunnel_next_protocol) */ #define ETH_TUNNEL_PARSING_FLAGS_NEXT_PROTOCOL_MASK 0x3 #define ETH_TUNNEL_PARSING_FLAGS_NEXT_PROTOCOL_SHIFT 3 @@ -341,30 +324,27 @@ struct eth_pmd_flow_flags { * Regular ETH Rx FP CQE. */ struct eth_fast_path_rx_reg_cqe { - u8 type /* CQE type */; + u8 type /* CQE type (use enum eth_rx_cqe_type) */; u8 bitfields; /* Type of calculated RSS hash (use enum rss_hash_type) */ #define ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE_MASK 0x7 #define ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE_SHIFT 0 -/* Traffic Class */ -#define ETH_FAST_PATH_RX_REG_CQE_TC_MASK 0xF +#define ETH_FAST_PATH_RX_REG_CQE_TC_MASK 0xF /* Traffic Class */ #define ETH_FAST_PATH_RX_REG_CQE_TC_SHIFT 3 #define ETH_FAST_PATH_RX_REG_CQE_RESERVED0_MASK 0x1 #define ETH_FAST_PATH_RX_REG_CQE_RESERVED0_SHIFT 7 __le16 pkt_len /* Total packet length (from the parser) */; -/* Parsing and error flags from the parser */ - struct parsing_and_err_flags pars_flags; + struct parsing_and_err_flags pars_flags /* Parsing and error flags from the parser */; __le16 vlan_tag /* 802.1q VLAN tag */; __le32 rss_hash /* RSS hash result */; __le16 len_on_first_bd /* Number of bytes placed on first BD */; u8 placement_offset /* Offset of placement from BD start */; -/* Tunnel Parsing Flags */ - struct eth_tunnel_parsing_flags tunnel_pars_flags; + struct eth_tunnel_parsing_flags tunnel_pars_flags /* Tunnel Parsing Flags */; u8 bd_num /* Number of BDs, used for packet */; u8 reserved; __le16 reserved2; -/* aRFS flow ID or Resource ID - Indicates a Vport ID from which packet was - * sent, used when sending from VF to VF Representor. + /* aRFS flow ID or Resource ID - Indicates a Vport ID from which packet was + * sent, used when sending from VF to VF Representor. */ __le32 flow_id_or_resource_id; u8 reserved1[7]; @@ -376,10 +356,9 @@ struct eth_fast_path_rx_reg_cqe { * TPA-continue ETH Rx FP CQE. */ struct eth_fast_path_rx_tpa_cont_cqe { - u8 type /* CQE type */; + u8 type /* CQE type (use enum eth_rx_cqe_type) */; u8 tpa_agg_index /* TPA aggregation index */; -/* List of the segment sizes */ - __le16 len_list[ETH_TPA_CQE_CONT_LEN_LIST_SIZE]; + __le16 len_list[ETH_TPA_CQE_CONT_LEN_LIST_SIZE] /* List of the segment sizes */; u8 reserved; u8 reserved1 /* FW reserved. */; __le16 reserved2[ETH_TPA_CQE_CONT_LEN_LIST_SIZE] /* FW reserved. */; @@ -392,16 +371,14 @@ struct eth_fast_path_rx_tpa_cont_cqe { * TPA-end ETH Rx FP CQE . */ struct eth_fast_path_rx_tpa_end_cqe { - u8 type /* CQE type */; + u8 type /* CQE type (use enum eth_rx_cqe_type) */; u8 tpa_agg_index /* TPA aggregation index */; __le16 total_packet_len /* Total aggregated packet length */; u8 num_of_bds /* Total number of BDs comprising the packet */; -/* Aggregation end reason. Use enum eth_tpa_end_reason */ - u8 end_reason; + u8 end_reason /* Aggregation end reason. Use enum eth_tpa_end_reason */; __le16 num_of_coalesced_segs /* Number of coalesced TCP segments */; __le32 ts_delta /* TCP timestamp delta */; -/* List of the segment sizes */ - __le16 len_list[ETH_TPA_CQE_END_LEN_LIST_SIZE]; + __le16 len_list[ETH_TPA_CQE_END_LEN_LIST_SIZE] /* List of the segment sizes */; __le16 reserved3[ETH_TPA_CQE_END_LEN_LIST_SIZE] /* FW reserved. 
*/; __le16 reserved1; u8 reserved2 /* FW reserved. */; @@ -413,32 +390,29 @@ struct eth_fast_path_rx_tpa_end_cqe { * TPA-start ETH Rx FP CQE. */ struct eth_fast_path_rx_tpa_start_cqe { - u8 type /* CQE type */; + u8 type /* CQE type (use enum eth_rx_cqe_type) */; u8 bitfields; /* Type of calculated RSS hash (use enum rss_hash_type) */ #define ETH_FAST_PATH_RX_TPA_START_CQE_RSS_HASH_TYPE_MASK 0x7 #define ETH_FAST_PATH_RX_TPA_START_CQE_RSS_HASH_TYPE_SHIFT 0 -/* Traffic Class */ -#define ETH_FAST_PATH_RX_TPA_START_CQE_TC_MASK 0xF +#define ETH_FAST_PATH_RX_TPA_START_CQE_TC_MASK 0xF /* Traffic Class */ #define ETH_FAST_PATH_RX_TPA_START_CQE_TC_SHIFT 3 #define ETH_FAST_PATH_RX_TPA_START_CQE_RESERVED0_MASK 0x1 #define ETH_FAST_PATH_RX_TPA_START_CQE_RESERVED0_SHIFT 7 __le16 seg_len /* Segment length (packetLen from the parser) */; -/* Parsing and error flags from the parser */ - struct parsing_and_err_flags pars_flags; + struct parsing_and_err_flags pars_flags /* Parsing and error flags from the parser */; __le16 vlan_tag /* 802.1q VLAN tag */; __le32 rss_hash /* RSS hash result */; __le16 len_on_first_bd /* Number of bytes placed on first BD */; u8 placement_offset /* Offset of placement from BD start */; -/* Tunnel Parsing Flags */ - struct eth_tunnel_parsing_flags tunnel_pars_flags; + struct eth_tunnel_parsing_flags tunnel_pars_flags /* Tunnel Parsing Flags */; u8 tpa_agg_index /* TPA aggregation index */; u8 header_len /* Packet L2+L3+L4 header length */; /* Additional BDs length list. Used for backward compatible. */ __le16 bw_ext_bd_len_list[ETH_TPA_CQE_START_BW_LEN_LIST_SIZE]; __le16 reserved2; -/* aRFS or GFS flow ID or Resource ID - Indicates a Vport ID from which packet - * was sent, used when sending from VF to VF Representor + /* aRFS or GFS flow ID or Resource ID - Indicates a Vport ID from which packet + * was sent, used when sending from VF to VF Representor */ __le32 flow_id_or_resource_id; u8 reserved[3]; @@ -446,15 +420,18 @@ struct eth_fast_path_rx_tpa_start_cqe { }; + /* * The L4 pseudo checksum mode for Ethernet */ enum eth_l4_pseudo_checksum_mode { -/* Pseudo Header checksum on packet is calculated with the correct packet length +/* Pseudo Header checksum on packet is calculated with the correct (as defined in RFC) packet length * field. */ ETH_L4_PSEUDO_CSUM_CORRECT_LENGTH, -/* Pseudo Header checksum on packet is calculated with zero length field. */ +/* (Supported only for LSO) Pseudo Header checksum on packet is calculated with zero length field. 
 	ETH_L4_PSEUDO_CSUM_ZERO_LENGTH,
 	MAX_ETH_L4_PSEUDO_CHECKSUM_MODE
 };
@@ -470,7 +447,7 @@ struct eth_rx_bd {
 * regular ETH Rx SP CQE
 */
 struct eth_slow_path_rx_cqe {
-	u8 type /* CQE type */;
+	u8 type /* CQE type (use enum eth_rx_cqe_type) */;
 	u8 ramrod_cmd_id;
 	u8 error_flag;
 	u8 reserved[25];
@@ -483,14 +460,10 @@ struct eth_slow_path_rx_cqe {
 * union for all ETH Rx CQE types
 */
 union eth_rx_cqe {
-/* Regular FP CQE */
-	struct eth_fast_path_rx_reg_cqe fast_path_regular;
-/* TPA-start CQE */
-	struct eth_fast_path_rx_tpa_start_cqe fast_path_tpa_start;
-/* TPA-continue CQE */
-	struct eth_fast_path_rx_tpa_cont_cqe fast_path_tpa_cont;
-/* TPA-end CQE */
-	struct eth_fast_path_rx_tpa_end_cqe fast_path_tpa_end;
+	struct eth_fast_path_rx_reg_cqe fast_path_regular /* Regular FP CQE */;
+	struct eth_fast_path_rx_tpa_start_cqe fast_path_tpa_start /* TPA-start CQE */;
+	struct eth_fast_path_rx_tpa_cont_cqe fast_path_tpa_cont /* TPA-continue CQE */;
+	struct eth_fast_path_rx_tpa_end_cqe fast_path_tpa_end /* TPA-end CQE */;
 	struct eth_slow_path_rx_cqe slow_path /* SP CQE */;
 };
@@ -510,8 +483,7 @@ enum eth_rx_cqe_type {
 
 /*
- * Wrapper for PD RX CQE - used in order to cover full cache line when writing
- * CQE
+ * Wrapper for PD RX CQE - used in order to cover full cache line when writing CQE
 */
 struct eth_rx_pmd_cqe {
 	union eth_rx_cqe cqe /* CQE data itself */;
@@ -538,23 +510,21 @@ enum eth_rx_tunn_type {
 enum eth_tpa_end_reason {
 	ETH_AGG_END_UNUSED,
 	ETH_AGG_END_SP_UPDATE /* SP configuration update */,
-/* Maximum aggregation length or maximum buffer number used. */
-	ETH_AGG_END_MAX_LEN,
-/* TCP PSH flag or TCP payload length below continue threshold. */
-	ETH_AGG_END_LAST_SEG,
+	ETH_AGG_END_MAX_LEN /* Maximum aggregation length or maximum buffer number used. */,
+	ETH_AGG_END_LAST_SEG /* TCP PSH flag or TCP payload length below continue threshold. */,
 	ETH_AGG_END_TIMEOUT /* Timeout expiration. */,
-/* Packet header not consistency: different IPv4 TOS, TTL or flags, IPv6 TC,
- * Hop limit or Flow label, TCP header length or TS options. In GRO different
- * TS value, SMAC, DMAC, ackNum, windowSize or VLAN
+/* Packet header not consistency: different IPv4 TOS, TTL or flags, IPv6 TC, Hop limit or Flow
+ * label, TCP header length or TS options. In GRO different TS value, SMAC, DMAC, ackNum, windowSize
+ * or VLAN
 */
 	ETH_AGG_END_NOT_CONSISTENT,
-/* Out of order or retransmission packet: sequence, ack or timestamp not
- * consistent with previous segment.
+/* Out of order or retransmission packet: sequence, ack or timestamp not consistent with previous
+ * segment.
 */
 	ETH_AGG_END_OUT_OF_ORDER,
-/* Next segment cant be aggregated due to LLC/SNAP, IP error, IP fragment, IPv4
- * options, IPv6 extension, IP ECN = CE, TCP errors, TCP options, zero TCP
- * payload length , TCP flags or not supported tunnel header options.
+/* Next segment cant be aggregated due to LLC/SNAP, IP error, IP fragment, IPv4 options, IPv6
+ * extension, IP ECN = CE, TCP errors, TCP options, zero TCP payload length , TCP flags or not
+ * supported tunnel header options.
 */
 	ETH_AGG_END_NON_TPA_SEG,
 	MAX_ETH_TPA_END_REASON
@@ -568,7 +538,7 @@ enum eth_tpa_end_reason {
 struct eth_tx_1st_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_1st_bd data /* Parsing information data. */;
+	struct eth_tx_data_1st_bd data /* 1st BD Data */;
 };
 
 
@@ -579,7 +549,7 @@ struct eth_tx_1st_bd {
 struct eth_tx_2nd_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_2nd_bd data /* Parsing information data. */;
+	struct eth_tx_data_2nd_bd data /* 2nd BD Data */;
 };
 
 
@@ -589,11 +559,14 @@ struct eth_tx_2nd_bd {
 struct eth_tx_data_3rd_bd {
 	__le16 lso_mss /* For LSO packet - the MSS in bytes. */;
 	__le16 bitfields;
-/* For LSO with inner/outer IPv6+ext - TCP header length (in 4-byte WORDs) */
+/* Inner (for tunneled packets, and single for non-tunneled packets) TCP header length (in 4 byte
+ * words). This value is required in LSO mode, when the inner/outer header is Ipv6 with extension
+ * headers.
+ */
 #define ETH_TX_DATA_3RD_BD_TCP_HDR_LEN_DW_MASK 0xF
 #define ETH_TX_DATA_3RD_BD_TCP_HDR_LEN_DW_SHIFT 0
-/* LSO - number of BDs which contain headers. value should be in range
- * (1..ETH_TX_MAX_LSO_HDR_NBD).
+/* LSO - number of BDs which contain headers. value should be in range (1..ETH_TX_MAX_LSO_HDR_NBD).
+ *
 */
 #define ETH_TX_DATA_3RD_BD_HDR_NBD_MASK 0xF
 #define ETH_TX_DATA_3RD_BD_HDR_NBD_SHIFT 4
@@ -602,9 +575,9 @@ struct eth_tx_data_3rd_bd {
 #define ETH_TX_DATA_3RD_BD_START_BD_SHIFT 8
 #define ETH_TX_DATA_3RD_BD_RESERVED0_MASK 0x7F
 #define ETH_TX_DATA_3RD_BD_RESERVED0_SHIFT 9
-/* For tunnel with IPv6+ext - Pointer to tunnel L4 Header (in 2-byte WORDs) */
+/* For tunnel with IPv6+ext - Pointer to the tunnel L4 Header (in 2-byte WORDs) */
 	u8 tunn_l4_hdr_start_offset_w;
-/* For tunnel with IPv6+ext - Total size of Tunnel Header (in 2-byte WORDs) */
+/* For tunnel with IPv6+ext - Total size of the Tunnel Header (in 2-byte WORDs) */
 	u8 tunn_hdr_size_w;
 };
 
@@ -614,7 +587,7 @@ struct eth_tx_data_3rd_bd {
 struct eth_tx_3rd_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_3rd_bd data /* Parsing information data. */;
+	struct eth_tx_data_3rd_bd data /* 3rd BD Data */;
 };
 
 
@@ -622,10 +595,9 @@ struct eth_tx_3rd_bd {
 * The parsing information data for the forth tx bd of a given packet.
 */
 struct eth_tx_data_4th_bd {
-/* Destination Vport ID to forward the packet, applicable only when
- * tx_dst_port_mode_config == ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD and
- * dst_port_mode == DST_PORT_LOOPBACK, used to route the packet from VF
- * Representor to VF
+/* Destination Vport ID to forward the packet, applicable only when tx_dst_port_mode_config ==
+ * ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD and dst_port_mode== DST_PORT_LOOPBACK, used to route
+ * the packet from VF Representor to VF
 */
 	u8 dst_vport_id;
 	u8 reserved4;
@@ -644,12 +616,12 @@ struct eth_tx_data_4th_bd {
 };
 
 /*
- * The forth tx bd of a given packet
+ * The fourth tx bd of a given packet
 */
 struct eth_tx_4th_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_4th_bd data /* Parsing information data. */;
+	struct eth_tx_data_4th_bd data /* 4th BD Data */;
 };
 
 
@@ -670,19 +642,18 @@ struct eth_tx_data_bd {
 };
 
 /*
- * The common regular TX BD ring element
+ * The common regular bd
 */
 struct eth_tx_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_bd data /* Complementary information. */;
+	struct eth_tx_data_bd data /* Common BD Data */;
 };
 
 
 union eth_tx_bd_types {
 	struct eth_tx_1st_bd first_bd /* The first tx bd of a given packet */;
-/* The second tx bd of a given packet */
-	struct eth_tx_2nd_bd second_bd;
+	struct eth_tx_2nd_bd second_bd /* The second tx bd of a given packet */;
 	struct eth_tx_3rd_bd third_bd /* The third tx bd of a given packet */;
 	struct eth_tx_4th_bd fourth_bd /* The fourth tx bd of a given packet */;
 	struct eth_tx_bd reg_bd /* The common regular bd */;
@@ -693,6 +664,7 @@ union eth_tx_bd_types {
 
 
 
+
 /*
 * Eth Tx Tunnel Type
 */
@@ -708,7 +680,7 @@ enum eth_tx_tunn_type {
 /*
 * Mstorm Queue Zone
 */
-struct mstorm_eth_queue_zone {
+struct mstorm_eth_queue_zone_e5 {
 	struct eth_rx_prod_data rx_producers /* ETH Rx producers data */;
 	__le32 reserved[3];
 };
@@ -718,8 +690,7 @@ struct mstorm_eth_queue_zone {
 * Ystorm Queue Zone
 */
 struct xstorm_eth_queue_zone {
-/* Tx interrupt coalescing TimeSet */
-	struct coalescing_timeset int_coalescing_timeset;
+	struct coalescing_timeset int_coalescing_timeset /* Tx interrupt coalescing TimeSet */;
 	u8 reserved[7];
 };
 
@@ -729,22 +700,33 @@ struct xstorm_eth_queue_zone {
 */
 struct eth_db_data {
 	u8 params;
-/* destination of doorbell (use enum db_dest) */
-#define ETH_DB_DATA_DEST_MASK 0x3
+#define ETH_DB_DATA_DEST_MASK 0x3 /* destination of doorbell (use enum db_dest) */
 #define ETH_DB_DATA_DEST_SHIFT 0
-/* aggregative command to CM (use enum db_agg_cmd_sel) */
-#define ETH_DB_DATA_AGG_CMD_MASK 0x3
+#define ETH_DB_DATA_AGG_CMD_MASK 0x3 /* aggregative command to CM (use enum db_agg_cmd_sel) */
 #define ETH_DB_DATA_AGG_CMD_SHIFT 2
 #define ETH_DB_DATA_BYPASS_EN_MASK 0x1 /* enable QM bypass */
 #define ETH_DB_DATA_BYPASS_EN_SHIFT 4
 #define ETH_DB_DATA_RESERVED_MASK 0x1
 #define ETH_DB_DATA_RESERVED_SHIFT 5
-/* aggregative value selection */
-#define ETH_DB_DATA_AGG_VAL_SEL_MASK 0x3
+#define ETH_DB_DATA_AGG_VAL_SEL_MASK 0x3 /* aggregative value selection */
 #define ETH_DB_DATA_AGG_VAL_SEL_SHIFT 6
-/* bit for every DQ counter flags in CM context that DQ can increment */
-	u8 agg_flags;
+	u8 agg_flags /* bit for every DQ counter flags in CM context that DQ can increment */;
 	__le16 bd_prod;
 };
 
+
+/*
+ * RSS hash type
+ */
+enum rss_hash_type {
+	RSS_HASH_TYPE_DEFAULT = 0,
+	RSS_HASH_TYPE_IPV4 = 1,
+	RSS_HASH_TYPE_TCP_IPV4 = 2,
+	RSS_HASH_TYPE_IPV6 = 3,
+	RSS_HASH_TYPE_TCP_IPV6 = 4,
+	RSS_HASH_TYPE_UDP_IPV4 = 5,
+	RSS_HASH_TYPE_UDP_IPV6 = 6,
+	MAX_RSS_HASH_TYPE
+};
+
 #endif /* __ETH_COMMON__ */
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7a..437482422 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1,30 +1,32 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com + * www.marvell.com */ - /**************************************************************************** * - * Name: mcp_public.h + * Name: mcp_public.h * * Description: MCP public data * - * Created: 13/01/2013 yanivr + * Created: 13/01/2013 yanivr * ****************************************************************************/ #ifndef MCP_PUBLIC_H #define MCP_PUBLIC_H -#define VF_MAX_STATIC 192 /* In case of AH */ -#define VF_BITMAP_SIZE_IN_DWORDS (VF_MAX_STATIC / 32) -#define VF_BITMAP_SIZE_IN_BYTES (VF_BITMAP_SIZE_IN_DWORDS * sizeof(u32)) +#define UEFI + +#define VF_MAX_STATIC 192 /* In case of AH */ +#define VF_BITMAP_SIZE_IN_DWORDS (VF_MAX_STATIC / 32) +#define VF_BITMAP_SIZE_IN_BYTES (VF_BITMAP_SIZE_IN_DWORDS * sizeof(u32)) /* Extended array size to support for 240 VFs 8 dwords */ -#define EXT_VF_MAX_STATIC 240 -#define EXT_VF_BITMAP_SIZE_IN_DWORDS (((EXT_VF_MAX_STATIC - 1) / 32) + 1) -#define EXT_VF_BITMAP_SIZE_IN_BYTES (EXT_VF_BITMAP_SIZE_IN_DWORDS * \ +#define EXT_VF_MAX_STATIC 240 +#define EXT_VF_BITMAP_SIZE_IN_DWORDS (((EXT_VF_MAX_STATIC - 1) / 32) + 1) +#define EXT_VF_BITMAP_SIZE_IN_BYTES (EXT_VF_BITMAP_SIZE_IN_DWORDS * \ sizeof(u32)) #define ADDED_VF_BITMAP_SIZE 2 @@ -33,7 +35,7 @@ #define MCP_GLOB_PORT_MAX 4 /* Global */ #define MCP_GLOB_FUNC_MAX 16 /* Global */ -typedef u32 offsize_t; /* In DWORDS !!! */ +typedef u32 offsize_t; /* In DWORDS !!! */ /* Offset from the beginning of the MCP scratchpad */ #define OFFSIZE_OFFSET_OFFSET 0 #define OFFSIZE_OFFSET_MASK 0x0000ffff @@ -43,58 +45,48 @@ typedef u32 offsize_t; /* In DWORDS !!! */ /* SECTION_OFFSET is calculating the offset in bytes out of offsize */ #define SECTION_OFFSET(_offsize) \ - ((((_offsize & OFFSIZE_OFFSET_MASK) >> OFFSIZE_OFFSET_OFFSET) << 2)) + (((((_offsize) & OFFSIZE_OFFSET_MASK) >> OFFSIZE_OFFSET_OFFSET) << 2)) /* SECTION_SIZE is calculating the size in bytes out of offsize */ #define SECTION_SIZE(_offsize) \ - (((_offsize & OFFSIZE_SIZE_MASK) >> OFFSIZE_SIZE_OFFSET) << 2) + ((((_offsize) & OFFSIZE_SIZE_MASK) >> OFFSIZE_SIZE_OFFSET) << 2) -/* SECTION_ADDR returns the GRC addr of a section, given offsize and index - * within section - */ +/* SECTION_ADDR returns the GRC addr of a section, given offsize and index within section */ #define SECTION_ADDR(_offsize, idx) \ - (MCP_REG_SCRATCH + \ - SECTION_OFFSET(_offsize) + (SECTION_SIZE(_offsize) * idx)) + (MCP_REG_SCRATCH + SECTION_OFFSET(_offsize) + (SECTION_SIZE(_offsize) * (idx))) -/* SECTION_OFFSIZE_ADDR returns the GRC addr to the offsize address. Use - * offsetof, since the OFFSETUP collide with the firmware definition +/* SECTION_OFFSIZE_ADDR returns the GRC addr to the offsize address. 
Use offsetof, since the + * OFFSETUP collide with the firmware definition */ #define SECTION_OFFSIZE_ADDR(_pub_base, _section) \ - (_pub_base + offsetof(struct mcp_public_data, sections[_section])) + ((_pub_base) + offsetof(struct mcp_public_data, sections[_section])) /* PHY configuration */ struct eth_phy_cfg { -/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */ - u32 speed; + u32 speed; /* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */ #define ETH_SPEED_AUTONEG 0 #define ETH_SPEED_SMARTLINQ 0x8 /* deprecated - use link_modes field instead */ - u32 pause; /* bitmask */ + u32 pause; /* bitmask */ #define ETH_PAUSE_NONE 0x0 #define ETH_PAUSE_AUTONEG 0x1 #define ETH_PAUSE_RX 0x2 #define ETH_PAUSE_TX 0x4 - u32 adv_speed; /* Default should be the speed_cap_mask */ + u32 adv_speed; /* Default should be the speed_cap_mask */ u32 loopback_mode; -#define ETH_LOOPBACK_NONE (0) -/* Serdes loopback. In AH, it refers to Near End */ -#define ETH_LOOPBACK_INT_PHY (1) +#define ETH_LOOPBACK_NONE (0) +#define ETH_LOOPBACK_INT_PHY (1) /* Serdes loopback. In AH, it refers to Near End */ #define ETH_LOOPBACK_EXT_PHY (2) /* External PHY Loopback */ -/* External Loopback (Require loopback plug) */ -#define ETH_LOOPBACK_EXT (3) +#define ETH_LOOPBACK_EXT (3) /* External Loopback (Require loopback plug) */ #define ETH_LOOPBACK_MAC (4) /* MAC Loopback - not supported */ -#define ETH_LOOPBACK_CNIG_AH_ONLY_0123 (5) /* Port to itself */ -#define ETH_LOOPBACK_CNIG_AH_ONLY_2301 (6) /* Port to Port */ +#define ETH_LOOPBACK_CNIG_AH_ONLY_0123 (5) /* Port to itself */ +#define ETH_LOOPBACK_CNIG_AH_ONLY_2301 (6) /* Port to Port */ #define ETH_LOOPBACK_PCS_AH_ONLY (7) /* PCS loopback (TX to RX) */ -/* Loop RX packet from PCS to TX */ -#define ETH_LOOPBACK_REVERSE_MAC_AH_ONLY (8) -/* Remote Serdes Loopback (RX to TX) */ -#define ETH_LOOPBACK_INT_PHY_FEA_AH_ONLY (9) +#define ETH_LOOPBACK_REVERSE_MAC_AH_ONLY (8) /* Loop RX packet from PCS to TX */ +#define ETH_LOOPBACK_INT_PHY_FEA_AH_ONLY (9) /* Remote Serdes Loopback (RX to TX) */ u32 eee_cfg; -/* EEE is enabled (configuration). Refer to eee_status->active for negotiated - * status - */ +/* EEE is enabled (configuration). 
Refer to eee_status->active for negotiated status */ #define EEE_CFG_EEE_ENABLED (1 << 0) #define EEE_CFG_TX_LPI (1 << 1) #define EEE_CFG_ADV_SPEED_1G (1 << 2) @@ -107,28 +99,84 @@ struct eth_phy_cfg { u32 link_modes; /* Additional link modes */ #define LINK_MODE_SMARTLINQ_ENABLE 0x1 /* XXX Deprecate */ + + u32 fec_mode; /* Values similar to nvm cfg */ +#define FEC_FORCE_MODE_MASK 0x000000FF +#define FEC_FORCE_MODE_OFFSET 0 +#define FEC_FORCE_MODE_NONE 0x00 +#define FEC_FORCE_MODE_FIRECODE 0x01 +#define FEC_FORCE_MODE_RS 0x02 +#define FEC_FORCE_MODE_AUTO 0x07 +#define FEC_EXTENDED_MODE_MASK 0xFFFFFF00 +#define FEC_EXTENDED_MODE_OFFSET 8 +#define ETH_EXT_FEC_NONE 0x00000100 +#define ETH_EXT_FEC_10G_NONE 0x00000200 +#define ETH_EXT_FEC_10G_BASE_R 0x00000400 +#define ETH_EXT_FEC_20G_NONE 0x00000800 +#define ETH_EXT_FEC_20G_BASE_R 0x00001000 +#define ETH_EXT_FEC_25G_NONE 0x00002000 +#define ETH_EXT_FEC_25G_BASE_R 0x00004000 +#define ETH_EXT_FEC_25G_RS528 0x00008000 +#define ETH_EXT_FEC_40G_NONE 0x00010000 +#define ETH_EXT_FEC_40G_BASE_R 0x00020000 +#define ETH_EXT_FEC_50G_NONE 0x00040000 +#define ETH_EXT_FEC_50G_BASE_R 0x00080000 +#define ETH_EXT_FEC_50G_RS528 0x00100000 +#define ETH_EXT_FEC_50G_RS544 0x00200000 +#define ETH_EXT_FEC_100G_NONE 0x00400000 +#define ETH_EXT_FEC_100G_BASE_R 0x00800000 +#define ETH_EXT_FEC_100G_RS528 0x01000000 +#define ETH_EXT_FEC_100G_RS544 0x02000000 +u32 extended_speed; /* Values similar to nvm cfg */ +#define ETH_EXT_SPEED_MASK 0x0000FFFF +#define ETH_EXT_SPEED_OFFSET 0 +#define ETH_EXT_SPEED_AN 0x00000001 +#define ETH_EXT_SPEED_1G 0x00000002 +#define ETH_EXT_SPEED_10G 0x00000004 +#define ETH_EXT_SPEED_20G 0x00000008 +#define ETH_EXT_SPEED_25G 0x00000010 +#define ETH_EXT_SPEED_40G 0x00000020 +#define ETH_EXT_SPEED_50G_BASE_R 0x00000040 +#define ETH_EXT_SPEED_50G_BASE_R2 0x00000080 +#define ETH_EXT_SPEED_100G_BASE_R2 0x00000100 +#define ETH_EXT_SPEED_100G_BASE_R4 0x00000200 +#define ETH_EXT_SPEED_100G_BASE_P4 0x00000400 +#define ETH_EXT_ADV_SPEED_MASK 0xFFFF0000 +#define ETH_EXT_ADV_SPEED_OFFSET 16 +#define ETH_EXT_ADV_SPEED_RESERVED 0x00010000 +#define ETH_EXT_ADV_SPEED_1G 0x00020000 +#define ETH_EXT_ADV_SPEED_10G 0x00040000 +#define ETH_EXT_ADV_SPEED_20G 0x00080000 +#define ETH_EXT_ADV_SPEED_25G 0x00100000 +#define ETH_EXT_ADV_SPEED_40G 0x00200000 +#define ETH_EXT_ADV_SPEED_50G_BASE_R 0x00400000 +#define ETH_EXT_ADV_SPEED_50G_BASE_R2 0x00800000 +#define ETH_EXT_ADV_SPEED_100G_BASE_R2 0x01000000 +#define ETH_EXT_ADV_SPEED_100G_BASE_R4 0x02000000 +#define ETH_EXT_ADV_SPEED_100G_BASE_P4 0x04000000 }; struct port_mf_cfg { - u32 dynamic_cfg; /* device control channel */ -#define PORT_MF_CFG_OV_TAG_MASK 0x0000ffff -#define PORT_MF_CFG_OV_TAG_OFFSET 0 -#define PORT_MF_CFG_OV_TAG_DEFAULT PORT_MF_CFG_OV_TAG_MASK - - u32 reserved[1]; + u32 dynamic_cfg; /* device control channel */ +#define PORT_MF_CFG_OV_TAG_MASK 0x0000ffff +#define PORT_MF_CFG_OV_TAG_OFFSET 0 +#define PORT_MF_CFG_OV_TAG_DEFAULT PORT_MF_CFG_OV_TAG_MASK + + u32 reserved[2]; /* This is to make sure next field "stats" is alignment to 64-bit. 
+ * It doesn't change existing alignment in MFW + */ }; /* DO NOT add new fields in the middle * MUST be synced with struct pmm_stats_map */ struct eth_stats { - u64 r64; /* 0x00 (Offset 0x00 ) RX 64-byte frame counter*/ - u64 r127; /* 0x01 (Offset 0x08 ) RX 65 to 127 byte frame counter*/ - u64 r255; /* 0x02 (Offset 0x10 ) RX 128 to 255 byte frame counter*/ - u64 r511; /* 0x03 (Offset 0x18 ) RX 256 to 511 byte frame counter*/ - u64 r1023; /* 0x04 (Offset 0x20 ) RX 512 to 1023 byte frame counter*/ -/* 0x05 (Offset 0x28 ) RX 1024 to 1518 byte frame counter */ - u64 r1518; + u64 r64; /* 0x00 (Offset 0x00 ) RX 64-byte frame counter*/ + u64 r127; /* 0x01 (Offset 0x08 ) RX 65 to 127 byte frame counter*/ + u64 r255; /* 0x02 (Offset 0x10 ) RX 128 to 255 byte frame counter*/ + u64 r511; /* 0x03 (Offset 0x18 ) RX 256 to 511 byte frame counter*/ + u64 r1023; /* 0x04 (Offset 0x20 ) RX 512 to 1023 byte frame counter*/ + u64 r1518; /* 0x05 (Offset 0x28 ) RX 1024 to 1518 byte frame counter */ union { struct { /* bb */ /* 0x06 (Offset 0x30 ) RX 1519 to 1522 byte VLAN-tagged frame counter */ @@ -141,33 +189,40 @@ struct eth_stats { u64 r9216; /* 0x0A (Offset 0x50 ) RX 9217 to 16383 byte frame counter */ u64 r16383; + } bb0; struct { /* ah */ u64 unused1; -/* 0x07 (Offset 0x38 ) RX 1519 to max byte frame counter*/ - u64 r1519_to_max; + u64 r1519_to_max; /* 0x07 (Offset 0x38 ) RX 1519 to max byte frame counter*/ u64 unused2; u64 unused3; u64 unused4; } ah0; + struct { /* E5 */ +/* 0x06 (Offset 0x30 ) RX 1519 to 2047 byte frame counter*/ + u64 r2047; + u64 r2048_to_max; /* 0x07 (Offset 0x38 ) RX 2048 to max byte frame counter*/ + u64 unused11; + u64 unused12; + u64 unused13; + } e50; } u0; - u64 rfcs; /* 0x0F (Offset 0x58 ) RX FCS error frame counter*/ - u64 rxcf; /* 0x10 (Offset 0x60 ) RX control frame counter*/ - u64 rxpf; /* 0x11 (Offset 0x68 ) RX pause frame counter*/ - u64 rxpp; /* 0x12 (Offset 0x70 ) RX PFC frame counter*/ - u64 raln; /* 0x16 (Offset 0x78 ) RX alignment error counter*/ - u64 rfcr; /* 0x19 (Offset 0x80 ) RX false carrier counter */ - u64 rovr; /* 0x1A (Offset 0x88 ) RX oversized frame counter*/ - u64 rjbr; /* 0x1B (Offset 0x90 ) RX jabber frame counter */ - u64 rund; /* 0x34 (Offset 0x98 ) RX undersized frame counter */ - u64 rfrg; /* 0x35 (Offset 0xa0 ) RX fragment counter */ - u64 t64; /* 0x40 (Offset 0xa8 ) TX 64-byte frame counter */ - u64 t127; /* 0x41 (Offset 0xb0 ) TX 65 to 127 byte frame counter */ - u64 t255; /* 0x42 (Offset 0xb8 ) TX 128 to 255 byte frame counter*/ - u64 t511; /* 0x43 (Offset 0xc0 ) TX 256 to 511 byte frame counter*/ - u64 t1023; /* 0x44 (Offset 0xc8 ) TX 512 to 1023 byte frame counter*/ -/* 0x45 (Offset 0xd0 ) TX 1024 to 1518 byte frame counter */ - u64 t1518; + u64 rfcs; /* 0x0F (Offset 0x58 ) RX FCS error frame counter*/ + u64 rxcf; /* 0x10 (Offset 0x60 ) RX control frame counter*/ + u64 rxpf; /* 0x11 (Offset 0x68 ) RX pause frame counter*/ + u64 rxpp; /* 0x12 (Offset 0x70 ) RX PFC frame counter*/ + u64 raln; /* 0x16 (Offset 0x78 ) RX alignment error counter*/ + u64 rfcr; /* 0x19 (Offset 0x80 ) RX false carrier counter */ + u64 rovr; /* 0x1A (Offset 0x88 ) RX oversized frame counter*/ + u64 rjbr; /* 0x1B (Offset 0x90 ) RX jabber frame counter */ + u64 rund; /* 0x34 (Offset 0x98 ) RX undersized frame counter */ + u64 rfrg; /* 0x35 (Offset 0xa0 ) RX fragment counter */ + u64 t64; /* 0x40 (Offset 0xa8 ) TX 64-byte frame counter */ + u64 t127; /* 0x41 (Offset 0xb0 ) TX 65 to 127 byte frame counter */ + u64 t255; /* 0x42 (Offset 0xb8 ) TX 128 to 255 byte frame 
counter*/ + u64 t511; /* 0x43 (Offset 0xc0 ) TX 256 to 511 byte frame counter*/ + u64 t1023; /* 0x44 (Offset 0xc8 ) TX 512 to 1023 byte frame counter*/ + u64 t1518; /* 0x45 (Offset 0xd0 ) TX 1024 to 1518 byte frame counter */ union { struct { /* bb */ /* 0x47 (Offset 0xd8 ) TX 1519 to 2047 byte frame counter */ @@ -186,37 +241,43 @@ struct eth_stats { u64 unused7; u64 unused8; } ah1; + struct { /* E5 */ +/* 0x06 (Offset 0x30 ) TX 1519 to 2047 byte frame counter*/ + u64 t2047; + u64 t2048_to_max; /* 0x07 (Offset 0x38 ) TX 2048 to max byte frame counter*/ + u64 unused14; + u64 unused15; + } e5_1; } u1; - u64 txpf; /* 0x50 (Offset 0xf8 ) TX pause frame counter */ - u64 txpp; /* 0x51 (Offset 0x100) TX PFC frame counter */ -/* 0x6C (Offset 0x108) Transmit Logical Type LLFC message counter */ + u64 txpf; /* 0x50 (Offset 0xf8 ) TX pause frame counter */ + u64 txpp; /* 0x51 (Offset 0x100) TX PFC frame counter */ union { struct { /* bb */ /* 0x6C (Offset 0x108) Transmit Logical Type LLFC message counter */ u64 tlpiec; -/* 0x6E (Offset 0x110) Transmit Total Collision Counter */ - u64 tncl; + u64 tncl; /* 0x6E (Offset 0x110) Transmit Total Collision Counter */ } bb2; struct { /* ah */ u64 unused9; u64 unused10; } ah2; + struct { /* E5 */ + u64 unused16; + u64 unused17; + } e5_2; } u2; - u64 rbyte; /* 0x3d (Offset 0x118) RX byte counter */ - u64 rxuca; /* 0x0c (Offset 0x120) RX UC frame counter */ - u64 rxmca; /* 0x0d (Offset 0x128) RX MC frame counter */ - u64 rxbca; /* 0x0e (Offset 0x130) RX BC frame counter */ + u64 rbyte; /* 0x3d (Offset 0x118) RX byte counter */ + u64 rxuca; /* 0x0c (Offset 0x120) RX UC frame counter */ + u64 rxmca; /* 0x0d (Offset 0x128) RX MC frame counter */ + u64 rxbca; /* 0x0e (Offset 0x130) RX BC frame counter */ /* 0x22 (Offset 0x138) RX good frame (good CRC, not oversized, no ERROR) */ u64 rxpok; - u64 tbyte; /* 0x6f (Offset 0x140) TX byte counter */ - u64 txuca; /* 0x4d (Offset 0x148) TX UC frame counter */ - u64 txmca; /* 0x4e (Offset 0x150) TX MC frame counter */ - u64 txbca; /* 0x4f (Offset 0x158) TX BC frame counter */ - u64 txcf; /* 0x54 (Offset 0x160) TX control frame counter */ -/* HSI - Cannot add more stats to this struct. If needed, then need to open new - * struct - */ - + u64 tbyte; /* 0x6f (Offset 0x140) TX byte counter */ + u64 txuca; /* 0x4d (Offset 0x148) TX UC frame counter */ + u64 txmca; /* 0x4e (Offset 0x150) TX MC frame counter */ + u64 txbca; /* 0x4f (Offset 0x158) TX BC frame counter */ + u64 txcf; /* 0x54 (Offset 0x160) TX control frame counter */ + /* HSI - Cannot add more stats to this struct. 
If needed, then need to open new struct */ }; struct brb_stats { @@ -229,26 +290,25 @@ struct port_stats { struct eth_stats eth; }; -/*----+------------------------------------------------------------------------ - * C | Number and | Ports in| Ports in|2 PHY-s |# of ports|# of engines - * h | rate of | team #1 | team #2 |are used|per path | (paths) - * i | physical | | | | | enabled - * p | ports | | | | | - *====+============+=========+=========+========+==========+=================== - * BB | 1x100G | This is special mode, where there are actually 2 HW func - * BB | 2x10/20Gbps| 0,1 | NA | No | 1 | 1 - * BB | 2x40 Gbps | 0,1 | NA | Yes | 1 | 1 - * BB | 2x50Gbps | 0,1 | NA | No | 1 | 1 - * BB | 4x10Gbps | 0,2 | 1,3 | No | 1/2 | 1,2 (2 is optional) - * BB | 4x10Gbps | 0,1 | 2,3 | No | 1/2 | 1,2 (2 is optional) - * BB | 4x10Gbps | 0,3 | 1,2 | No | 1/2 | 1,2 (2 is optional) - * BB | 4x10Gbps | 0,1,2,3 | NA | No | 1 | 1 - * AH | 2x10/20Gbps| 0,1 | NA | NA | 1 | NA - * AH | 4x10Gbps | 0,1 | 2,3 | NA | 2 | NA - * AH | 4x10Gbps | 0,2 | 1,3 | NA | 2 | NA - * AH | 4x10Gbps | 0,3 | 1,2 | NA | 2 | NA - * AH | 4x10Gbps | 0,1,2,3 | NA | NA | 1 | NA - *====+============+=========+=========+========+==========+=================== +/*-----+----------------------------------------------------------------------------- + * Chip | Number and | Ports in| Ports in|2 PHY-s |# of ports|# of engines + * | rate of physical | team #1 | team #2 |are used|per path | (paths) enabled + * | ports | | | | | + *======+==================+=========+=========+========+==========+================= + * BB | 1x100G | This is special mode, where there are actually 2 HW func + * BB | 2x10/20Gbps | 0,1 | NA | No | 1 | 1 + * BB | 2x40 Gbps | 0,1 | NA | Yes | 1 | 1 + * BB | 2x50Gbps | 0,1 | NA | No | 1 | 1 + * BB | 4x10Gbps | 0,2 | 1,3 | No | 1/2 | 1,2 (2 is optional) + * BB | 4x10Gbps | 0,1 | 2,3 | No | 1/2 | 1,2 (2 is optional) + * BB | 4x10Gbps | 0,3 | 1,2 | No | 1/2 | 1,2 (2 is optional) + * BB | 4x10Gbps | 0,1,2,3 | NA | No | 1 | 1 + * AH | 2x10/20Gbps | 0,1 | NA | NA | 1 | NA + * AH | 4x10Gbps | 0,1 | 2,3 | NA | 2 | NA + * AH | 4x10Gbps | 0,2 | 1,3 | NA | 2 | NA + * AH | 4x10Gbps | 0,3 | 1,2 | NA | 2 | NA + * AH | 4x10Gbps | 0,1,2,3 | NA | NA | 1 | NA + *======+==================+=========+=========+========+==========+=================== */ #define CMT_TEAM0 0 @@ -257,24 +317,24 @@ struct port_stats { struct couple_mode_teaming { u8 port_cmt[MCP_GLOB_PORT_MAX]; -#define PORT_CMT_IN_TEAM (1 << 0) +#define PORT_CMT_IN_TEAM (1 << 0) -#define PORT_CMT_PORT_ROLE (1 << 1) -#define PORT_CMT_PORT_INACTIVE (0 << 1) -#define PORT_CMT_PORT_ACTIVE (1 << 1) +#define PORT_CMT_PORT_ROLE (1 << 1) +#define PORT_CMT_PORT_INACTIVE (0 << 1) +#define PORT_CMT_PORT_ACTIVE (1 << 1) -#define PORT_CMT_TEAM_MASK (1 << 2) -#define PORT_CMT_TEAM0 (0 << 2) -#define PORT_CMT_TEAM1 (1 << 2) +#define PORT_CMT_TEAM_MASK (1 << 2) +#define PORT_CMT_TEAM0 (0 << 2) +#define PORT_CMT_TEAM1 (1 << 2) }; /************************************** * LLDP and DCBX HSI structures **************************************/ #define LLDP_CHASSIS_ID_STAT_LEN 4 -#define LLDP_PORT_ID_STAT_LEN 4 +#define LLDP_PORT_ID_STAT_LEN 4 #define DCBX_MAX_APP_PROTOCOL 32 -#define MAX_SYSTEM_LLDP_TLV_DATA 32 /* In dwords. 128 in bytes*/ +#define MAX_SYSTEM_LLDP_TLV_DATA 32 /* In dwords. 128 in bytes*/ #define MAX_TLV_BUFFER 128 /* In dwords. 
512 in bytes*/ typedef enum _lldp_agent_e { LLDP_NEAREST_BRIDGE = 0, @@ -285,23 +345,23 @@ typedef enum _lldp_agent_e { struct lldp_config_params_s { u32 config; -#define LLDP_CONFIG_TX_INTERVAL_MASK 0x000000ff -#define LLDP_CONFIG_TX_INTERVAL_OFFSET 0 -#define LLDP_CONFIG_HOLD_MASK 0x00000f00 -#define LLDP_CONFIG_HOLD_OFFSET 8 -#define LLDP_CONFIG_MAX_CREDIT_MASK 0x0000f000 -#define LLDP_CONFIG_MAX_CREDIT_OFFSET 12 -#define LLDP_CONFIG_ENABLE_RX_MASK 0x40000000 -#define LLDP_CONFIG_ENABLE_RX_OFFSET 30 -#define LLDP_CONFIG_ENABLE_TX_MASK 0x80000000 -#define LLDP_CONFIG_ENABLE_TX_OFFSET 31 +#define LLDP_CONFIG_TX_INTERVAL_MASK 0x000000ff +#define LLDP_CONFIG_TX_INTERVAL_OFFSET 0 +#define LLDP_CONFIG_HOLD_MASK 0x00000f00 +#define LLDP_CONFIG_HOLD_OFFSET 8 +#define LLDP_CONFIG_MAX_CREDIT_MASK 0x0000f000 +#define LLDP_CONFIG_MAX_CREDIT_OFFSET 12 +#define LLDP_CONFIG_ENABLE_RX_MASK 0x40000000 +#define LLDP_CONFIG_ENABLE_RX_OFFSET 30 +#define LLDP_CONFIG_ENABLE_TX_MASK 0x80000000 +#define LLDP_CONFIG_ENABLE_TX_OFFSET 31 /* Holds local Chassis ID TLV header, subtype and 9B of payload. * If firtst byte is 0, then we will use default chassis ID */ u32 local_chassis_id[LLDP_CHASSIS_ID_STAT_LEN]; /* Holds local Port ID TLV header, subtype and 9B of payload. * If firtst byte is 0, then we will use default port ID - */ + */ u32 local_port_id[LLDP_PORT_ID_STAT_LEN]; }; @@ -317,40 +377,29 @@ struct lldp_status_params_s { struct dcbx_ets_feature { u32 flags; -#define DCBX_ETS_ENABLED_MASK 0x00000001 -#define DCBX_ETS_ENABLED_OFFSET 0 -#define DCBX_ETS_WILLING_MASK 0x00000002 -#define DCBX_ETS_WILLING_OFFSET 1 -#define DCBX_ETS_ERROR_MASK 0x00000004 -#define DCBX_ETS_ERROR_OFFSET 2 -#define DCBX_ETS_CBS_MASK 0x00000008 -#define DCBX_ETS_CBS_OFFSET 3 -#define DCBX_ETS_MAX_TCS_MASK 0x000000f0 -#define DCBX_ETS_MAX_TCS_OFFSET 4 -#define DCBX_OOO_TC_MASK 0x00000f00 -#define DCBX_OOO_TC_OFFSET 8 -/* Entries in tc table are orginized that the left most is pri 0, right most is - * prio 7 - */ - +#define DCBX_ETS_ENABLED_MASK 0x00000001 +#define DCBX_ETS_ENABLED_OFFSET 0 +#define DCBX_ETS_WILLING_MASK 0x00000002 +#define DCBX_ETS_WILLING_OFFSET 1 +#define DCBX_ETS_ERROR_MASK 0x00000004 +#define DCBX_ETS_ERROR_OFFSET 2 +#define DCBX_ETS_CBS_MASK 0x00000008 +#define DCBX_ETS_CBS_OFFSET 3 +#define DCBX_ETS_MAX_TCS_MASK 0x000000f0 +#define DCBX_ETS_MAX_TCS_OFFSET 4 +#define DCBX_OOO_TC_MASK 0x00000f00 +#define DCBX_OOO_TC_OFFSET 8 + /* Entries in tc table are orginized that the left most is pri 0, right most is prio 7 */ u32 pri_tc_tbl[1]; -/* Fixed TCP OOO TC usage is deprecated and used only for driver backward - * compatibility - */ +/* Fixed TCP OOO TC usage is deprecated and used only for driver backward compatibility */ #define DCBX_TCP_OOO_TC (4) #define DCBX_TCP_OOO_K2_4PORT_TC (3) #define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET (DCBX_TCP_OOO_TC + 1) #define DCBX_CEE_STRICT_PRIORITY 0xf -/* Entries in tc table are orginized that the left most is pri 0, right most is - * prio 7 - */ - + /* Entries in tc table are orginized that the left most is pri 0, right most is prio 7 */ u32 tc_bw_tbl[2]; -/* Entries in tc table are orginized that the left most is pri 0, right most is - * prio 7 - */ - + /* Entries in tc table are orginized that the left most is pri 0, right most is prio 7 */ u32 tc_tsa_tbl[2]; #define DCBX_ETS_TSA_STRICT 0 #define DCBX_ETS_TSA_CBS 1 @@ -359,24 +408,24 @@ struct dcbx_ets_feature { struct dcbx_app_priority_entry { u32 entry; -#define DCBX_APP_PRI_MAP_MASK 0x000000ff -#define DCBX_APP_PRI_MAP_OFFSET 
0 -#define DCBX_APP_PRI_0 0x01 -#define DCBX_APP_PRI_1 0x02 -#define DCBX_APP_PRI_2 0x04 -#define DCBX_APP_PRI_3 0x08 -#define DCBX_APP_PRI_4 0x10 -#define DCBX_APP_PRI_5 0x20 -#define DCBX_APP_PRI_6 0x40 -#define DCBX_APP_PRI_7 0x80 -#define DCBX_APP_SF_MASK 0x00000300 -#define DCBX_APP_SF_OFFSET 8 -#define DCBX_APP_SF_ETHTYPE 0 -#define DCBX_APP_SF_PORT 1 -#define DCBX_APP_SF_IEEE_MASK 0x0000f000 -#define DCBX_APP_SF_IEEE_OFFSET 12 +#define DCBX_APP_PRI_MAP_MASK 0x000000ff +#define DCBX_APP_PRI_MAP_OFFSET 0 +#define DCBX_APP_PRI_0 0x01 +#define DCBX_APP_PRI_1 0x02 +#define DCBX_APP_PRI_2 0x04 +#define DCBX_APP_PRI_3 0x08 +#define DCBX_APP_PRI_4 0x10 +#define DCBX_APP_PRI_5 0x20 +#define DCBX_APP_PRI_6 0x40 +#define DCBX_APP_PRI_7 0x80 +#define DCBX_APP_SF_MASK 0x00000300 +#define DCBX_APP_SF_OFFSET 8 +#define DCBX_APP_SF_ETHTYPE 0 +#define DCBX_APP_SF_PORT 1 +#define DCBX_APP_SF_IEEE_MASK 0x0000f000 +#define DCBX_APP_SF_IEEE_OFFSET 12 #define DCBX_APP_SF_IEEE_RESERVED 0 -#define DCBX_APP_SF_IEEE_ETHTYPE 1 +#define DCBX_APP_SF_IEEE_ETHTYPE 1 #define DCBX_APP_SF_IEEE_TCP_PORT 2 #define DCBX_APP_SF_IEEE_UDP_PORT 3 #define DCBX_APP_SF_IEEE_TCP_UDP_PORT 4 @@ -389,20 +438,20 @@ struct dcbx_app_priority_entry { /* FW structure in BE */ struct dcbx_app_priority_feature { u32 flags; -#define DCBX_APP_ENABLED_MASK 0x00000001 -#define DCBX_APP_ENABLED_OFFSET 0 -#define DCBX_APP_WILLING_MASK 0x00000002 -#define DCBX_APP_WILLING_OFFSET 1 -#define DCBX_APP_ERROR_MASK 0x00000004 -#define DCBX_APP_ERROR_OFFSET 2 +#define DCBX_APP_ENABLED_MASK 0x00000001 +#define DCBX_APP_ENABLED_OFFSET 0 +#define DCBX_APP_WILLING_MASK 0x00000002 +#define DCBX_APP_WILLING_OFFSET 1 +#define DCBX_APP_ERROR_MASK 0x00000004 +#define DCBX_APP_ERROR_OFFSET 2 /* Not in use - #define DCBX_APP_DEFAULT_PRI_MASK 0x00000f00 - #define DCBX_APP_DEFAULT_PRI_OFFSET 8 +#define DCBX_APP_DEFAULT_PRI_MASK 0x00000f00 +#define DCBX_APP_DEFAULT_PRI_OFFSET 8 */ -#define DCBX_APP_MAX_TCS_MASK 0x0000f000 -#define DCBX_APP_MAX_TCS_OFFSET 12 -#define DCBX_APP_NUM_ENTRIES_MASK 0x00ff0000 -#define DCBX_APP_NUM_ENTRIES_OFFSET 16 +#define DCBX_APP_MAX_TCS_MASK 0x0000f000 +#define DCBX_APP_MAX_TCS_OFFSET 12 +#define DCBX_APP_NUM_ENTRIES_MASK 0x00ff0000 +#define DCBX_APP_NUM_ENTRIES_OFFSET 16 struct dcbx_app_priority_entry app_pri_tbl[DCBX_MAX_APP_PROTOCOL]; }; @@ -412,29 +461,29 @@ struct dcbx_features { struct dcbx_ets_feature ets; /* PFC feature */ u32 pfc; -#define DCBX_PFC_PRI_EN_BITMAP_MASK 0x000000ff -#define DCBX_PFC_PRI_EN_BITMAP_OFFSET 0 -#define DCBX_PFC_PRI_EN_BITMAP_PRI_0 0x01 -#define DCBX_PFC_PRI_EN_BITMAP_PRI_1 0x02 -#define DCBX_PFC_PRI_EN_BITMAP_PRI_2 0x04 -#define DCBX_PFC_PRI_EN_BITMAP_PRI_3 0x08 -#define DCBX_PFC_PRI_EN_BITMAP_PRI_4 0x10 -#define DCBX_PFC_PRI_EN_BITMAP_PRI_5 0x20 -#define DCBX_PFC_PRI_EN_BITMAP_PRI_6 0x40 -#define DCBX_PFC_PRI_EN_BITMAP_PRI_7 0x80 - -#define DCBX_PFC_FLAGS_MASK 0x0000ff00 -#define DCBX_PFC_FLAGS_OFFSET 8 -#define DCBX_PFC_CAPS_MASK 0x00000f00 -#define DCBX_PFC_CAPS_OFFSET 8 -#define DCBX_PFC_MBC_MASK 0x00004000 -#define DCBX_PFC_MBC_OFFSET 14 -#define DCBX_PFC_WILLING_MASK 0x00008000 -#define DCBX_PFC_WILLING_OFFSET 15 -#define DCBX_PFC_ENABLED_MASK 0x00010000 -#define DCBX_PFC_ENABLED_OFFSET 16 -#define DCBX_PFC_ERROR_MASK 0x00020000 -#define DCBX_PFC_ERROR_OFFSET 17 +#define DCBX_PFC_PRI_EN_BITMAP_MASK 0x000000ff +#define DCBX_PFC_PRI_EN_BITMAP_OFFSET 0 +#define DCBX_PFC_PRI_EN_BITMAP_PRI_0 0x01 +#define DCBX_PFC_PRI_EN_BITMAP_PRI_1 0x02 +#define DCBX_PFC_PRI_EN_BITMAP_PRI_2 0x04 +#define 
DCBX_PFC_PRI_EN_BITMAP_PRI_3 0x08 +#define DCBX_PFC_PRI_EN_BITMAP_PRI_4 0x10 +#define DCBX_PFC_PRI_EN_BITMAP_PRI_5 0x20 +#define DCBX_PFC_PRI_EN_BITMAP_PRI_6 0x40 +#define DCBX_PFC_PRI_EN_BITMAP_PRI_7 0x80 + +#define DCBX_PFC_FLAGS_MASK 0x0000ff00 +#define DCBX_PFC_FLAGS_OFFSET 8 +#define DCBX_PFC_CAPS_MASK 0x00000f00 +#define DCBX_PFC_CAPS_OFFSET 8 +#define DCBX_PFC_MBC_MASK 0x00004000 +#define DCBX_PFC_MBC_OFFSET 14 +#define DCBX_PFC_WILLING_MASK 0x00008000 +#define DCBX_PFC_WILLING_OFFSET 15 +#define DCBX_PFC_ENABLED_MASK 0x00010000 +#define DCBX_PFC_ENABLED_OFFSET 16 +#define DCBX_PFC_ERROR_MASK 0x00020000 +#define DCBX_PFC_ERROR_OFFSET 17 /* APP feature */ struct dcbx_app_priority_feature app; @@ -442,14 +491,13 @@ struct dcbx_features { struct dcbx_local_params { u32 config; -#define DCBX_CONFIG_VERSION_MASK 0x00000007 -#define DCBX_CONFIG_VERSION_OFFSET 0 -#define DCBX_CONFIG_VERSION_DISABLED 0 -#define DCBX_CONFIG_VERSION_IEEE 1 -#define DCBX_CONFIG_VERSION_CEE 2 -#define DCBX_CONFIG_VERSION_DYNAMIC \ - (DCBX_CONFIG_VERSION_IEEE | DCBX_CONFIG_VERSION_CEE) -#define DCBX_CONFIG_VERSION_STATIC 4 +#define DCBX_CONFIG_VERSION_MASK 0x00000007 +#define DCBX_CONFIG_VERSION_OFFSET 0 +#define DCBX_CONFIG_VERSION_DISABLED 0 +#define DCBX_CONFIG_VERSION_IEEE 1 +#define DCBX_CONFIG_VERSION_CEE 2 +#define DCBX_CONFIG_VERSION_DYNAMIC (DCBX_CONFIG_VERSION_IEEE | DCBX_CONFIG_VERSION_CEE) +#define DCBX_CONFIG_VERSION_STATIC 4 u32 flags; struct dcbx_features features; @@ -459,12 +507,12 @@ struct dcbx_mib { u32 prefix_seq_num; u32 flags; /* - #define DCBX_CONFIG_VERSION_MASK 0x00000007 - #define DCBX_CONFIG_VERSION_OFFSET 0 - #define DCBX_CONFIG_VERSION_DISABLED 0 - #define DCBX_CONFIG_VERSION_IEEE 1 - #define DCBX_CONFIG_VERSION_CEE 2 - #define DCBX_CONFIG_VERSION_STATIC 4 +#define DCBX_CONFIG_VERSION_MASK 0x00000007 +#define DCBX_CONFIG_VERSION_OFFSET 0 +#define DCBX_CONFIG_VERSION_DISABLED 0 +#define DCBX_CONFIG_VERSION_IEEE 1 +#define DCBX_CONFIG_VERSION_CEE 2 +#define DCBX_CONFIG_VERSION_STATIC 4 */ struct dcbx_features features; u32 suffix_seq_num; @@ -500,6 +548,43 @@ struct dcb_dscp_map { #define DCB_DSCP_ENABLE_OFFSET 0 #define DCB_DSCP_ENABLE 1 u32 dscp_pri_map[8]; + /* the map structure is the following: + * each u32 is split into 4 bits chunks, each chunk holds priority for respective dscp + * Lowest dscp is at lsb + * 31 28 24 20 16 + * 12 8 4 0 + * dscp_pri_map[0]: | dscp7 pri | dscp6 pri | dscp5 pri | dscp4 pri | + * dscp3 pri | dscp2 pri | dscp1 pri | dscp0 pri | + * dscp_pri_map[1]: | dscp15 pri| dscp14 pri| dscp13 pri| dscp12 pri| + * dscp11 pri| dscp10 pri| dscp9 pri | dscp8 pri | + * etc. + */ +}; + +struct mcp_val64 { + u32 lo; + u32 hi; +}; + +/* generic_idc_msg_t to be used for inter driver communication. 
+ * source_pf specifies the originating PF that sent messages to all target PFs + * msg contains 64 bit value of the message - opaque to the MFW + */ +struct generic_idc_msg_s { + u32 source_pf; + struct mcp_val64 msg; +}; + +/************************************** + * PCIE Statistics structures + **************************************/ +struct pcie_stats_stc { + u32 sr_cnt_wr_byte_msb; + u32 sr_cnt_wr_byte_lsb; + u32 sr_cnt_wr_cnt; + u32 sr_cnt_rd_byte_msb; + u32 sr_cnt_rd_byte_lsb; + u32 sr_cnt_rd_cnt; }; /************************************** @@ -515,14 +600,11 @@ enum _attribute_commands_e { }; /**************************************/ -/* */ -/* P U B L I C G L O B A L */ -/* */ +/* P U B L I C G L O B A L */ /**************************************/ struct public_global { - u32 max_path; /* 32bit is wasty, but this will be used often */ -/* (Global) 32bit is wasty, but this will be used often */ - u32 max_ports; + u32 max_path; /* 32bit is wasty, but this will be used often */ + u32 max_ports; /* (Global) 32bit is wasty, but this will be used often */ #define MODE_1P 1 /* TBD - NEED TO THINK OF A BETTER NAME */ #define MODE_2P 2 #define MODE_3P 3 @@ -530,7 +612,7 @@ struct public_global { u32 debug_mb_offset; u32 phymod_dbg_mb_offset; struct couple_mode_teaming cmt; -/* Temperature in Celcius (-255C / +255C), measured every second. */ + /* Temperature in Celcius (-255C / +255C), measured every second. */ s32 internal_temperature; u32 mfw_ver; u32 running_bundle_id; @@ -547,33 +629,55 @@ struct public_global { #define EXT_PHY_FW_UPGRADE_STATUS_SUCCESS (3) #define EXT_PHY_FW_UPGRADE_TYPE_MASK (0xffff0000) #define EXT_PHY_FW_UPGRADE_TYPE_OFFSET (16) + + u8 runtime_port_swap_map[MODE_4P]; + u32 data_ptr; + u32 data_size; + u32 bmb_error_status_cnt; + u32 bmb_jumbo_frame_cnt; + u32 sent_to_bmc_cnt; + u32 handled_by_mfw; + u32 sent_to_nw_cnt; /* BMC BC/MC are handled by MFW */ +#define LEAKY_BUCKET_SAMPLES 10 + u32 to_bmc_kb_per_second; /* Used to be sent_to_nw_cnt */ + u32 bcast_dropped_to_bmc_cnt; + u32 mcast_dropped_to_bmc_cnt; + u32 ucast_dropped_to_bmc_cnt; + u32 ncsi_response_failure_cnt; + u32 device_attr; +#define DEVICE_ATTR_NVMETCP_HOST (1 << 0) /* NVMeTCP host exists in the device */ +#define DEVICE_ATTR_NVMETCP_TARGET (1 << 1) /* NVMeTCP target exists in the device */ +#define DEVICE_ATTR_L2_VFS (1 << 2) /* At least one VF assigned for ETH PF */ +#define DEVICE_ATTR_RDMA (1 << 3) /* RMDA personality exists in device */ +/* RMDA with VFs personality exists in device */ +#define DEVICE_ATTR_RDMA_VFS (1 << 4) +#define DEVICE_ATTR_ISCSI (1 << 5) /* ISCSI personality exists in device */ +#define DEVICE_ATTR_FCOE (1 << 7) /* FCOE personality exists in device */ }; /**************************************/ -/* */ -/* P U B L I C P A T H */ -/* */ +/* P U B L I C P A T H */ /**************************************/ /**************************************************************************** - * Shared Memory 2 Region * + * Shared Memory 2 Region * ****************************************************************************/ -/* The fw_flr_ack is actually built in the following way: */ -/* 8 bit: PF ack */ -/* 128 bit: VF ack */ -/* 8 bit: ios_dis_ack */ +/* The fw_flr_ack is actually built in the following way: */ +/* 8 bit: PF ack */ +/* 128 bit: VF ack */ +/* 8 bit: ios_dis_ack */ /* In order to maintain endianity in the mailbox hsi, we want to keep using */ -/* u32. 
/**************************************/ -/* */ -/* P U B L I C P A T H */ -/* */ +/* P U B L I C P A T H */ /**************************************/ /**************************************************************************** * Shared Memory 2 Region * ****************************************************************************/ /* The fw_flr_ack is actually built in the following way: */ /* 8 bit: PF ack */ /* 128 bit: VF ack */ /* 8 bit: ios_dis_ack */ /* In order to maintain endianness in the mailbox hsi, we want to keep using */ -/* u32. The fw must have the VF right after the PF since this is how it */ -/* access arrays(it expects always the VF to reside after the PF, and that */ -/* makes the calculation much easier for it. ) */ +/* u32. The fw must have the VF right after the PF since this is how it */ +/* accesses arrays (it always expects the VF to reside after the PF, and that */ +/* makes the calculation much easier for it.) */ /* In order to answer both limitations, and keep the struct small, the code */ -/* will abuse the structure defined here to achieve the actual partition */ -/* above */ +/* will abuse the structure defined here to achieve the actual partition */ +/* above */ /****************************************************************************/ struct fw_flr_mb { u32 aggint; u32 opgen_addr; - u32 accum_ack; /* 0..15:PF, 16..207:VF, 256..271:IOV_DIS */ + u32 accum_ack; /* 0..15:PF, 16..207:VF, 256..271:IOV_DIS */ #define ACCUM_ACK_PF_BASE 0 #define ACCUM_ACK_PF_SHIFT 0 @@ -591,23 +695,20 @@ struct public_path { * mcp_vf_disabled is set by the MCP to indicate the driver about VFs * which were disabled/flred */ - u32 mcp_vf_disabled[VF_MAX_STATIC / 32]; /* 0x003c */ + u32 mcp_vf_disabled[VF_MAX_STATIC / 32]; /* 0x003c */ -/* Reset on mcp reset, and incremented for eveny process kill event. */ - u32 process_kill; + u32 process_kill; /* Reset on mcp reset, and incremented for every process kill event. */ #define PROCESS_KILL_COUNTER_MASK 0x0000ffff #define PROCESS_KILL_COUNTER_OFFSET 0 #define PROCESS_KILL_GLOB_AEU_BIT_MASK 0xffff0000 -#define PROCESS_KILL_GLOB_AEU_BIT_OFFSET 16 +#define PROCESS_KILL_GLOB_AEU_BIT_OFFSET 16 #define GLOBAL_AEU_BIT(aeu_reg_id, aeu_bit) (aeu_reg_id * 32 + aeu_bit) /*Added to support E5 240 VFs*/ u32 mcp_vf_disabled2[ADDED_VF_BITMAP_SIZE]; }; /**************************************/ -/* */ -/* P U B L I C P O R T */ -/* */ +/* P U B L I C P O R T */ /**************************************/ #define FC_NPIV_WWPN_SIZE 8 #define FC_NPIV_WWNN_SIZE 8 @@ -629,32 +730,31 @@ struct dci_fc_npiv_tbl { }; /**************************************************************************** - * Driver <-> FW Mailbox * + * Driver <-> FW Mailbox * ****************************************************************************/ struct public_port { - u32 validity_map; /* 0x0 (4*2 = 0x8) */ + u32 validity_map; /* 0x0 (0x4) */ /* validity bits */ -#define MCP_VALIDITY_PCI_CFG 0x00100000 -#define MCP_VALIDITY_MB 0x00200000 -#define MCP_VALIDITY_DEV_INFO 0x00400000 -#define MCP_VALIDITY_RESERVED 0x00000007 +#define MCP_VALIDITY_PCI_CFG 0x00100000 +#define MCP_VALIDITY_MB 0x00200000 +#define MCP_VALIDITY_DEV_INFO 0x00400000 +#define MCP_VALIDITY_RESERVED 0x00000007 /* One licensing bit should be set */ -/* yaniv - tbd ? license */ -#define MCP_VALIDITY_LIC_KEY_IN_EFFECT_MASK 0x00000038 -#define MCP_VALIDITY_LIC_MANUF_KEY_IN_EFFECT 0x00000008 +#define MCP_VALIDITY_LIC_KEY_IN_EFFECT_MASK 0x00000038 /* yaniv - tbd ?
license */ +#define MCP_VALIDITY_LIC_MANUF_KEY_IN_EFFECT 0x00000008 #define MCP_VALIDITY_LIC_UPGRADE_KEY_IN_EFFECT 0x00000010 -#define MCP_VALIDITY_LIC_NO_KEY_IN_EFFECT 0x00000020 +#define MCP_VALIDITY_LIC_NO_KEY_IN_EFFECT 0x00000020 /* Active MFW */ -#define MCP_VALIDITY_ACTIVE_MFW_UNKNOWN 0x00000000 -#define MCP_VALIDITY_ACTIVE_MFW_MASK 0x000001c0 -#define MCP_VALIDITY_ACTIVE_MFW_NCSI 0x00000040 -#define MCP_VALIDITY_ACTIVE_MFW_NONE 0x000001c0 +#define MCP_VALIDITY_ACTIVE_MFW_UNKNOWN 0x00000000 +#define MCP_VALIDITY_ACTIVE_MFW_MASK 0x000001c0 +#define MCP_VALIDITY_ACTIVE_MFW_NCSI 0x00000040 +#define MCP_VALIDITY_ACTIVE_MFW_NONE 0x000001c0 - u32 link_status; + u32 link_status; /* 0x4 (0x4)*/ #define LINK_STATUS_LINK_UP 0x00000001 #define LINK_STATUS_SPEED_AND_DUPLEX_MASK 0x0000001e #define LINK_STATUS_SPEED_AND_DUPLEX_1000THD (1 << 1) @@ -669,6 +769,7 @@ struct public_port { #define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE 0x00000040 #define LINK_STATUS_PARALLEL_DETECTION_USED 0x00000080 #define LINK_STATUS_PFC_ENABLED 0x00000100 +#define LINK_STATUS_LINK_PARTNER_SPEED_MASK 0x0001fe00 #define LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE 0x00000200 #define LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE 0x00000400 #define LINK_STATUS_LINK_PARTNER_10G_CAPABLE 0x00000800 @@ -695,30 +796,46 @@ struct public_port { #define LINK_STATUS_FEC_MODE_RS_CL91 (2 << 27) #define LINK_STATUS_EXT_PHY_LINK_UP 0x40000000 - u32 link_status1; - u32 ext_phy_fw_version; + u32 link_status1; /* 0x8 (0x4) */ +#define LP_PRESENCE_STATUS_OFFSET 0 +#define LP_PRESENCE_STATUS_MASK 0x3 +#define LP_PRESENCE_UNKNOWN 0x0 +#define LP_PRESENCE_PROBING 0x1 +#define LP_PRESENT 0x2 +#define LP_NOT_PRESENT 0x3 +#define TXFAULT_STATUS_OFFSET 2 +#define TXFAULT_STATUS_MASK (0x1 << TXFAULT_STATUS_OFFSET) +#define TXFAULT_NOT_PRESENT (0x0 << TXFAULT_STATUS_OFFSET) +#define TXFAULT_PRESENT (0x1 << TXFAULT_STATUS_OFFSET) +#define RXLOS_STATUS_OFFSET 3 +#define RXLOS_STATUS_MASK (0x1 << RXLOS_STATUS_OFFSET) +#define RXLOS_NOT_PRESENT (0x0 << RXLOS_STATUS_OFFSET) +#define RXLOS_PRESENT (0x1 << RXLOS_STATUS_OFFSET) +#define PORT_NOT_READY (1 << 4) /* Port init is not completed or failed */ + u32 ext_phy_fw_version; /* 0xc (0x4) */ /* Points to struct eth_phy_cfg (For READ-ONLY) */ u32 drv_phy_cfg_addr; - u32 port_stx; + u32 port_stx; /* 0x14 (0x4) */ - u32 stat_nig_timer; + u32 stat_nig_timer; /* 0x18 (0x4) */ - struct port_mf_cfg port_mf_config; - struct port_stats stats; + struct port_mf_cfg port_mf_config; /* 0x1c (0xc) */ + struct port_stats stats; /* 0x28 (0x1e8) 64-bit aligned*/ - u32 media_type; -#define MEDIA_UNSPECIFIED 0x0 + u32 media_type; /* 0x210 (0x4) */ +#define MEDIA_UNSPECIFIED 0x0 #define MEDIA_SFPP_10G_FIBER 0x1 /* Use MEDIA_MODULE_FIBER instead */ -#define MEDIA_XFP_FIBER 0x2 /* Use MEDIA_MODULE_FIBER instead */ -#define MEDIA_DA_TWINAX 0x3 -#define MEDIA_BASE_T 0x4 -#define MEDIA_SFP_1G_FIBER 0x5 /* Use MEDIA_MODULE_FIBER instead */ -#define MEDIA_MODULE_FIBER 0x6 -#define MEDIA_KR 0xf0 -#define MEDIA_NOT_PRESENT 0xff - - u32 lfa_status; +/* Use MEDIA_MODULE_FIBER instead */ +#define MEDIA_XFP_FIBER 0x2 +#define MEDIA_DA_TWINAX 0x3 +#define MEDIA_BASE_T 0x4 +#define MEDIA_SFP_1G_FIBER 0x5 /* Use MEDIA_MODULE_FIBER instead */ +#define MEDIA_MODULE_FIBER 0x6 +#define MEDIA_KR 0xf0 +#define MEDIA_NOT_PRESENT 0xff + + u32 lfa_status; /* 0x214 (0x4) */ #define LFA_LINK_FLAP_REASON_OFFSET 0 #define LFA_LINK_FLAP_REASON_MASK 0x000000ff #define LFA_NO_REASON (0 << 0) @@ -734,11 +851,15 @@ struct public_port { #define 
LINK_FLAP_AVOIDANCE_COUNT_MASK 0x0000ff00 #define LINK_FLAP_COUNT_OFFSET 16 #define LINK_FLAP_COUNT_MASK 0x00ff0000 - - u32 link_change_count; +#define LFA_LINK_FLAP_EXT_REASON_OFFSET 24 +#define LFA_LINK_FLAP_EXT_REASON_MASK 0xff000000 +#define LFA_EXT_SPEED_MISMATCH (1 << 24) +#define LFA_EXT_ADV_SPEED_MISMATCH (1 << 25) +#define LFA_EXT_FEC_MISMATCH (1 << 26) + u32 link_change_count; /* 0x218 (0x4) */ /* LLDP params */ -/* offset: 536 bytes? */ + /* offset: 536 bytes? */ struct lldp_config_params_s lldp_config_params[LLDP_MAX_LLDP_AGENTS]; struct lldp_status_params_s lldp_status_params[LLDP_MAX_LLDP_AGENTS]; struct lldp_system_tlvs_buffer_s system_lldp_tlvs_buf; @@ -748,46 +869,42 @@ struct public_port { struct dcbx_mib remote_dcbx_mib; struct dcbx_mib operational_dcbx_mib; -/* FC_NPIV table offset & size in NVRAM value of 0 means not present */ - + /* FC_NPIV table offset & size in NVRAM value of 0 means not present */ u32 fc_npiv_nvram_tbl_addr; +#define NPIV_TBL_INVALID_ADDR 0xFFFFFFFF + u32 fc_npiv_nvram_tbl_size; u32 transceiver_data; -#define ETH_TRANSCEIVER_STATE_MASK 0x000000FF -#define ETH_TRANSCEIVER_STATE_OFFSET 0x00000000 -#define ETH_TRANSCEIVER_STATE_UNPLUGGED 0x00000000 -#define ETH_TRANSCEIVER_STATE_PRESENT 0x00000001 -#define ETH_TRANSCEIVER_STATE_VALID 0x00000003 -#define ETH_TRANSCEIVER_STATE_UPDATING 0x00000008 -#define ETH_TRANSCEIVER_TYPE_MASK 0x0000FF00 -#define ETH_TRANSCEIVER_TYPE_OFFSET 0x00000008 -#define ETH_TRANSCEIVER_TYPE_NONE 0x00000000 -#define ETH_TRANSCEIVER_TYPE_UNKNOWN 0x000000FF -/* 1G Passive copper cable */ -#define ETH_TRANSCEIVER_TYPE_1G_PCC 0x01 -/* 1G Active copper cable */ -#define ETH_TRANSCEIVER_TYPE_1G_ACC 0x02 +#define ETH_TRANSCEIVER_STATE_MASK 0x000000FF +#define ETH_TRANSCEIVER_STATE_OFFSET 0x0 +#define ETH_TRANSCEIVER_STATE_UNPLUGGED 0x00 +#define ETH_TRANSCEIVER_STATE_PRESENT 0x01 +#define ETH_TRANSCEIVER_STATE_VALID 0x03 +#define ETH_TRANSCEIVER_STATE_UPDATING 0x08 +#define ETH_TRANSCEIVER_STATE_IN_SETUP 0x10 +#define ETH_TRANSCEIVER_TYPE_MASK 0x0000FF00 +#define ETH_TRANSCEIVER_TYPE_OFFSET 0x8 +#define ETH_TRANSCEIVER_TYPE_NONE 0x00 +#define ETH_TRANSCEIVER_TYPE_UNKNOWN 0xFF +#define ETH_TRANSCEIVER_TYPE_1G_PCC 0x01 /* 1G Passive copper cable */ +#define ETH_TRANSCEIVER_TYPE_1G_ACC 0x02 /* 1G Active copper cable */ #define ETH_TRANSCEIVER_TYPE_1G_LX 0x03 #define ETH_TRANSCEIVER_TYPE_1G_SX 0x04 #define ETH_TRANSCEIVER_TYPE_10G_SR 0x05 #define ETH_TRANSCEIVER_TYPE_10G_LR 0x06 #define ETH_TRANSCEIVER_TYPE_10G_LRM 0x07 #define ETH_TRANSCEIVER_TYPE_10G_ER 0x08 -/* 10G Passive copper cable */ -#define ETH_TRANSCEIVER_TYPE_10G_PCC 0x09 -/* 10G Active copper cable */ -#define ETH_TRANSCEIVER_TYPE_10G_ACC 0x0a +#define ETH_TRANSCEIVER_TYPE_10G_PCC 0x09 /* 10G Passive copper cable */ +#define ETH_TRANSCEIVER_TYPE_10G_ACC 0x0a /* 10G Active copper cable */ #define ETH_TRANSCEIVER_TYPE_XLPPI 0x0b #define ETH_TRANSCEIVER_TYPE_40G_LR4 0x0c #define ETH_TRANSCEIVER_TYPE_40G_SR4 0x0d #define ETH_TRANSCEIVER_TYPE_40G_CR4 0x0e -/* Active optical cable */ -#define ETH_TRANSCEIVER_TYPE_100G_AOC 0x0f +#define ETH_TRANSCEIVER_TYPE_100G_AOC 0x0f /* Active optical cable */ #define ETH_TRANSCEIVER_TYPE_100G_SR4 0x10 #define ETH_TRANSCEIVER_TYPE_100G_LR4 0x11 #define ETH_TRANSCEIVER_TYPE_100G_ER4 0x12 -/* Active copper cable */ -#define ETH_TRANSCEIVER_TYPE_100G_ACC 0x13 +#define ETH_TRANSCEIVER_TYPE_100G_ACC 0x13 /* Active copper cable */ #define ETH_TRANSCEIVER_TYPE_100G_CR4 0x14 #define ETH_TRANSCEIVER_TYPE_4x10G_SR 0x15 /* 25G Passive copper cable - 
short */ @@ -805,7 +922,6 @@ struct public_port { #define ETH_TRANSCEIVER_TYPE_25G_SR 0x1c #define ETH_TRANSCEIVER_TYPE_25G_LR 0x1d #define ETH_TRANSCEIVER_TYPE_25G_AOC 0x1e - #define ETH_TRANSCEIVER_TYPE_4x10G 0x1f #define ETH_TRANSCEIVER_TYPE_4x25G_CR 0x20 #define ETH_TRANSCEIVER_TYPE_1000BASET 0x21 @@ -817,6 +933,62 @@ struct public_port { #define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_CR 0x34 #define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_LR 0x35 #define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_AOC 0x36 +#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_25G_SR 0x37 +#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_25G_LR 0x38 +#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_1G_10G_SR 0x39 +#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_1G_10G_LR 0x3a +#define ETH_TRANSCEIVER_TYPE_25G_ER 0x3b +#define ETH_TRANSCEIVER_TYPE_50G_CR 0x3c +#define ETH_TRANSCEIVER_TYPE_50G_SR 0x3d +#define ETH_TRANSCEIVER_TYPE_50G_LR 0x3e +#define ETH_TRANSCEIVER_TYPE_50G_FR 0x3f +#define ETH_TRANSCEIVER_TYPE_50G_R1_AOC 0x40 +#define ETH_TRANSCEIVER_TYPE_50G_R1_ACC_BER_1E6 0x41 +#define ETH_TRANSCEIVER_TYPE_50G_R1_ACC_BER_1E4 0x42 +#define ETH_TRANSCEIVER_TYPE_50G_R1_AOC_BER_1E6 0x43 +#define ETH_TRANSCEIVER_TYPE_50G_R1_AOC_BER_1E4 0x44 +#define ETH_TRANSCEIVER_TYPE_100G_CR2 0x45 +#define ETH_TRANSCEIVER_TYPE_100G_SR2 0x46 +#define ETH_TRANSCEIVER_TYPE_100G_R2_AOC 0x47 +#define ETH_TRANSCEIVER_TYPE_100G_R2_ACC_BER_1E6 0x48 +#define ETH_TRANSCEIVER_TYPE_100G_R2_ACC_BER_1E4 0x49 +#define ETH_TRANSCEIVER_TYPE_100G_R2_AOC_BER_1E6 0x4a +#define ETH_TRANSCEIVER_TYPE_100G_R2_AOC_BER_1E4 0x4b +#define ETH_TRANSCEIVER_TYPE_200G_CR4 0x4c +#define ETH_TRANSCEIVER_TYPE_200G_SR4 0x4d +#define ETH_TRANSCEIVER_TYPE_200G_LR4 0x4e +#define ETH_TRANSCEIVER_TYPE_200G_FR4 0x4f +#define ETH_TRANSCEIVER_TYPE_200G_DR4 0x50 +#define ETH_TRANSCEIVER_TYPE_200G_R4_AOC 0x51 +#define ETH_TRANSCEIVER_TYPE_200G_R4_ACC_BER_1E6 0x52 +#define ETH_TRANSCEIVER_TYPE_200G_R4_ACC_BER_1E4 0x53 +#define ETH_TRANSCEIVER_TYPE_200G_R4_AOC_BER_1E6 0x54 +#define ETH_TRANSCEIVER_TYPE_200G_R4_AOC_BER_1E4 0x55 +/* COULD ALSO ADD... 
BUT FOR NOW UNSUPPORTED */ +#define ETH_TRANSCEIVER_TYPE_5G_BASET 0x56 +#define ETH_TRANSCEIVER_TYPE_2p5G_BASET 0x57 +#define ETH_TRANSCEIVER_TYPE_10G_BASET_30M 0x58 +#define ETH_TRANSCEIVER_TYPE_25G_BASET 0x59 +#define ETH_TRANSCEIVER_TYPE_40G_BASET 0x5a +#define ETH_TRANSCEIVER_TYPE_40G_ER4 0x5b +#define ETH_TRANSCEIVER_TYPE_40G_PSM4 0x5c +#define ETH_TRANSCEIVER_TYPE_40G_SWDM4 0x5d +#define ETH_TRANSCEIVER_TYPE_50G_BASET 0x5e +#define ETH_TRANSCEIVER_TYPE_100G_SR10 0x5f +#define ETH_TRANSCEIVER_TYPE_100G_CDWM4 0x60 +#define ETH_TRANSCEIVER_TYPE_100G_DWDM2 0x61 +#define ETH_TRANSCEIVER_TYPE_100G_PSM4 0x62 +#define ETH_TRANSCEIVER_TYPE_100G_CLR4 0x63 +#define ETH_TRANSCEIVER_TYPE_100G_WDM 0x64 +#define ETH_TRANSCEIVER_TYPE_100G_SWDM4 0x65 /* Supported */ +#define ETH_TRANSCEIVER_TYPE_100G_4WDM_10 0x66 +#define ETH_TRANSCEIVER_TYPE_100G_4WDM_20 0x67 +#define ETH_TRANSCEIVER_TYPE_100G_4WDM_40 0x68 +#define ETH_TRANSCEIVER_TYPE_100G_DR_CAUI4 0x69 +#define ETH_TRANSCEIVER_TYPE_100G_FR_CAUI4 0x6a +#define ETH_TRANSCEIVER_TYPE_100G_LR_CAUI4 0x6b +#define ETH_TRANSCEIVER_TYPE_200G_PSM4 0x6c + u32 wol_info; u32 wol_pkt_len; u32 wol_pkt_details; @@ -829,17 +1001,16 @@ struct public_port { /* Shows the Local Device EEE capabilities */ #define EEE_LD_ADV_STATUS_MASK 0x000000f0 #define EEE_LD_ADV_STATUS_OFFSET 4 - #define EEE_1G_ADV (1 << 1) - #define EEE_10G_ADV (1 << 2) +#define EEE_1G_ADV (1 << 1) +#define EEE_10G_ADV (1 << 2) /* Same values as in EEE_LD_ADV, but for Link Partner */ #define EEE_LP_ADV_STATUS_MASK 0x00000f00 #define EEE_LP_ADV_STATUS_OFFSET 8 -/* Supported speeds for EEE */ -#define EEE_SUPPORTED_SPEED_MASK 0x0000f000 +#define EEE_SUPPORTED_SPEED_MASK 0x0000f000 /* Supported speeds for EEE */ #define EEE_SUPPORTED_SPEED_OFFSET 12 - #define EEE_1G_SUPPORTED (1 << 1) - #define EEE_10G_SUPPORTED (1 << 2) +#define EEE_1G_SUPPORTED (1 << 1) +#define EEE_10G_SUPPORTED (1 << 2) u32 eee_remote; /* Used for EEE in LLDP */ #define EEE_REMOTE_TW_TX_MASK 0x0000ffff @@ -857,6 +1028,10 @@ struct public_port { #define ETH_TRANSCEIVER_HAS_DIAGNOSTIC (1 << 6) #define ETH_TRANSCEIVER_IDENT_MASK 0x0000ff00 #define ETH_TRANSCEIVER_IDENT_OFFSET 8 +#define ETH_TRANSCEIVER_ENHANCED_OPTIONS_MASK 0x00ff0000 +#define ETH_TRANSCEIVER_ENHANCED_OPTIONS_OFFSET 16 +#define ETH_TRANSCEIVER_HAS_TXFAULT (1 << 5) +#define ETH_TRANSCEIVER_HAS_RXLOS (1 << 4) u32 oem_cfg_port; #define OEM_CFG_CHANNEL_TYPE_MASK 0x00000003 @@ -871,12 +1046,15 @@ struct public_port { struct lldp_received_tlvs_s lldp_received_tlvs[LLDP_MAX_LLDP_AGENTS]; u32 system_lldp_tlvs_buf2[MAX_SYSTEM_LLDP_TLV_DATA]; + u32 phy_module_temperature; /* Temperature of transceiver / external PHY - 0xc80 (0x4)*/ + u32 nig_reg_stat_rx_bmb_packet; + u32 nig_reg_rx_llh_ncsi_mcp_mask; + u32 nig_reg_rx_llh_ncsi_mcp_mask_2; + }; /**************************************/ -/* */ -/* P U B L I C F U N C */ -/* */ +/* P U B L I C F U N C */ /**************************************/ struct public_func { @@ -885,8 +1063,7 @@ struct public_func { /* MTU size per function is needed for the OV feature */ u32 mtu_size; -/* 9 entires for the C2S PCP map for each inner VLAN PCP + 1 default */ - + /* 9 entries for the C2S PCP map for each inner VLAN PCP + 1 default */ /* For PCP values 0-3 use the map lower */ /* 0xFF000000 - PCP 0, 0x00FF0000 - PCP 1, * 0x0000FF00 - PCP 2, 0x000000FF PCP 3 */ @@ -901,45 +1078,66 @@ struct public_func { /* For PCP default value get the MSB byte of the map default */ u32 c2s_pcp_map_default; - u32 reserved[4];
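Following the byte layout described in the C2S PCP comments above (PCP 0 in the most significant byte of the lower map word, PCP 4 in the most significant byte of the upper one), a lookup helper might look like this. A sketch only; the map_lower/map_upper parameter names are illustrative and not taken from this header:

	/* Sketch: fetch the C2S outer PCP for inner-VLAN PCP 0..7, one byte
	 * per PCP, PCP 0/4 in the most significant byte of each map word.
	 */
	static inline u8 c2s_pcp_get(u32 map_lower, u32 map_upper, u8 inner_pcp)
	{
		u32 map = (inner_pcp < 4) ? map_lower : map_upper;

		return (map >> ((3 - (inner_pcp & 0x3)) * 8)) & 0xff;
	}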
+ /* For generic inter-driver communication channel messages between PFs via the MFW */ + struct generic_idc_msg_s generic_idc_msg; + + u32 num_of_msix; /* replace old mf_cfg */ u32 config; /* E/R/I/D */ /* function 0 of each port cannot be hidden */ -#define FUNC_MF_CFG_FUNC_HIDE 0x00000001 -#define FUNC_MF_CFG_PAUSE_ON_HOST_RING 0x00000002 +#define FUNC_MF_CFG_FUNC_HIDE 0x00000001 +#define FUNC_MF_CFG_PAUSE_ON_HOST_RING 0x00000002 #define FUNC_MF_CFG_PAUSE_ON_HOST_RING_OFFSET 0x00000001 -#define FUNC_MF_CFG_PROTOCOL_MASK 0x000000f0 -#define FUNC_MF_CFG_PROTOCOL_OFFSET 4 -#define FUNC_MF_CFG_PROTOCOL_ETHERNET 0x00000000 -#define FUNC_MF_CFG_PROTOCOL_ISCSI 0x00000010 -#define FUNC_MF_CFG_PROTOCOL_FCOE 0x00000020 -#define FUNC_MF_CFG_PROTOCOL_ROCE 0x00000030 -#define FUNC_MF_CFG_PROTOCOL_MAX 0x00000030 +#define FUNC_MF_CFG_PROTOCOL_MASK 0x000000f0 +#define FUNC_MF_CFG_PROTOCOL_OFFSET 4 +#define FUNC_MF_CFG_PROTOCOL_ETHERNET 0x00000000 +#define FUNC_MF_CFG_PROTOCOL_ISCSI 0x00000010 +#define FUNC_MF_CFG_PROTOCOL_FCOE 0x00000020 +#define FUNC_MF_CFG_PROTOCOL_ROCE 0x00000030 +#define FUNC_MF_CFG_PROTOCOL_NVMETCP 0x00000040 + + +#define FUNC_MF_CFG_PROTOCOL_MAX 0x00000050 /* MINBW, MAXBW */ /* value range - 0..100, increments in 1 % */ -#define FUNC_MF_CFG_MIN_BW_MASK 0x0000ff00 -#define FUNC_MF_CFG_MIN_BW_OFFSET 8 -#define FUNC_MF_CFG_MIN_BW_DEFAULT 0x00000000 -#define FUNC_MF_CFG_MAX_BW_MASK 0x00ff0000 -#define FUNC_MF_CFG_MAX_BW_OFFSET 16 -#define FUNC_MF_CFG_MAX_BW_DEFAULT 0x00640000 +#define FUNC_MF_CFG_MIN_BW_MASK 0x0000ff00 +#define FUNC_MF_CFG_MIN_BW_OFFSET 8 +#define FUNC_MF_CFG_MIN_BW_DEFAULT 0x00000000 +#define FUNC_MF_CFG_MAX_BW_MASK 0x00ff0000 +#define FUNC_MF_CFG_MAX_BW_OFFSET 16 +#define FUNC_MF_CFG_MAX_BW_DEFAULT 0x00640000 + + /* RDMA PROTOCOL */ +#define FUNC_MF_CFG_RDMA_PROTOCOL_MASK 0x03000000 +#define FUNC_MF_CFG_RDMA_PROTOCOL_OFFSET 24 +#define FUNC_MF_CFG_RDMA_PROTOCOL_NONE 0x00000000 +#define FUNC_MF_CFG_RDMA_PROTOCOL_ROCE 0x01000000 +#define FUNC_MF_CFG_RDMA_PROTOCOL_IWARP 0x02000000 + /* for future support */ +#define FUNC_MF_CFG_RDMA_PROTOCOL_BOTH 0x03000000 + +#define FUNC_MF_CFG_BOOT_MODE_MASK 0x0C000000 +#define FUNC_MF_CFG_BOOT_MODE_OFFSET 26 +#define FUNC_MF_CFG_BOOT_MODE_BIOS_CTRL 0x00000000 +#define FUNC_MF_CFG_BOOT_MODE_DISABLED 0x04000000 +#define FUNC_MF_CFG_BOOT_MODE_ENABLED 0x08000000 u32 status; #define FUNC_STATUS_VIRTUAL_LINK_UP 0x00000001 #define FUNC_STATUS_LOGICAL_LINK_UP 0x00000002 -#define FUNC_STATUS_FORCED_LINK 0x00000004 +#define FUNC_STATUS_FORCED_LINK 0x00000004 - u32 mac_upper; /* MAC */ -#define FUNC_MF_CFG_UPPERMAC_MASK 0x0000ffff -#define FUNC_MF_CFG_UPPERMAC_OFFSET 0 -#define FUNC_MF_CFG_UPPERMAC_DEFAULT FUNC_MF_CFG_UPPERMAC_MASK + u32 mac_upper; /* MAC */ +#define FUNC_MF_CFG_UPPERMAC_MASK 0x0000ffff +#define FUNC_MF_CFG_UPPERMAC_OFFSET 0 +#define FUNC_MF_CFG_UPPERMAC_DEFAULT FUNC_MF_CFG_UPPERMAC_MASK u32 mac_lower; -#define FUNC_MF_CFG_LOWERMAC_DEFAULT 0xffffffff +#define FUNC_MF_CFG_LOWERMAC_DEFAULT 0xffffffff u32 fcoe_wwn_port_name_upper; u32 fcoe_wwn_port_name_lower; @@ -947,10 +1145,10 @@ struct public_func { u32 fcoe_wwn_node_name_upper; u32 fcoe_wwn_node_name_lower; - u32 ovlan_stag; /* tags */ -#define FUNC_MF_CFG_OV_STAG_MASK 0x0000ffff -#define FUNC_MF_CFG_OV_STAG_OFFSET 0 -#define FUNC_MF_CFG_OV_STAG_DEFAULT FUNC_MF_CFG_OV_STAG_MASK + u32 ovlan_stag; /* tags */ +#define FUNC_MF_CFG_OV_STAG_MASK 0x0000ffff +#define FUNC_MF_CFG_OV_STAG_OFFSET 0 +#define FUNC_MF_CFG_OV_STAG_DEFAULT FUNC_MF_CFG_OV_STAG_MASK u32 pf_allocation; /* vf per pf */ @@ -962,7 +1160,8 @@ struct
public_func { * drv_ack_vf_disabled is set by the PF driver to ack handled disabled * VFs */ - u32 drv_ack_vf_disabled[VF_MAX_STATIC / 32]; /* 0x0044 */ + /*Not in use anymore*/ + u32 drv_ack_vf_disabled[VF_MAX_STATIC / 32]; /* 0x0044 */ u32 drv_id; #define DRV_ID_PDA_COMP_VER_MASK 0x0000ffff @@ -971,8 +1170,7 @@ struct public_func { #define LOAD_REQ_HSI_VERSION 2 #define DRV_ID_MCP_HSI_VER_MASK 0x00ff0000 #define DRV_ID_MCP_HSI_VER_OFFSET 16 -#define DRV_ID_MCP_HSI_VER_CURRENT (LOAD_REQ_HSI_VERSION << \ - DRV_ID_MCP_HSI_VER_OFFSET) +#define DRV_ID_MCP_HSI_VER_CURRENT (LOAD_REQ_HSI_VERSION << DRV_ID_MCP_HSI_VER_OFFSET) #define DRV_ID_DRV_TYPE_MASK 0x7f000000 #define DRV_ID_DRV_TYPE_OFFSET 24 @@ -986,6 +1184,10 @@ struct public_func { #define DRV_ID_DRV_TYPE_FREEBSD (7 << DRV_ID_DRV_TYPE_OFFSET) #define DRV_ID_DRV_TYPE_AIX (8 << DRV_ID_DRV_TYPE_OFFSET) +#define DRV_ID_DRV_TYPE_OS (DRV_ID_DRV_TYPE_LINUX | DRV_ID_DRV_TYPE_WINDOWS | \ + DRV_ID_DRV_TYPE_SOLARIS | DRV_ID_DRV_TYPE_VMWARE | \ + DRV_ID_DRV_TYPE_FREEBSD | DRV_ID_DRV_TYPE_AIX) + #define DRV_ID_DRV_INIT_HW_MASK 0x80000000 #define DRV_ID_DRV_INIT_HW_OFFSET 31 #define DRV_ID_DRV_INIT_HW_FLAG (1 << DRV_ID_DRV_INIT_HW_OFFSET) @@ -1009,9 +1211,7 @@ struct public_func { }; /**************************************/ -/* */ -/* P U B L I C M B */ -/* */ +/* P U B L I C M B */ /**************************************/ /* This is the only section that the driver can write to, and each */ /* Basically each driver request to set feature parameters, @@ -1021,15 +1221,10 @@ struct public_func { */ struct mcp_mac { - u32 mac_upper; /* Upper 16 bits are always zeroes */ + u32 mac_upper; /* Upper 16 bits are always zeroes */ u32 mac_lower; }; -struct mcp_val64 { - u32 lo; - u32 hi; -}; - struct mcp_file_att { u32 nvm_start_addr; u32 len; @@ -1085,6 +1280,27 @@ struct ocbb_data_stc { u32 ocsd_req_update_interval; }; +struct fcoe_cap_stc { +#define FCOE_CAP_UNDEFINED_VALUE 0xffff + u32 max_ios;/*Maximum number of I/Os per connection*/ +#define FCOE_CAP_IOS_MASK 0x0000ffff +#define FCOE_CAP_IOS_OFFSET 0 + u32 max_log;/*Maximum number of Logins per port*/ +#define FCOE_CAP_LOG_MASK 0x0000ffff +#define FCOE_CAP_LOG_OFFSET 0 + u32 max_exch;/*Maximum number of exchanges*/ +#define FCOE_CAP_EXCH_MASK 0x0000ffff +#define FCOE_CAP_EXCH_OFFSET 0 + u32 max_npiv;/*Maximum NPIV WWN per port*/ +#define FCOE_CAP_NPIV_MASK 0x0000ffff +#define FCOE_CAP_NPIV_OFFSET 0 + u32 max_tgt;/*Maximum number of targets supported*/ +#define FCOE_CAP_TGT_MASK 0x0000ffff +#define FCOE_CAP_TGT_OFFSET 0 + u32 max_outstnd;/*Maximum number of outstanding commands across all connections*/ +#define FCOE_CAP_OUTSTND_MASK 0x0000ffff +#define FCOE_CAP_OUTSTND_OFFSET 0 +}; #define MAX_NUM_OF_SENSORS 7 #define MFW_SENSOR_LOCATION_INTERNAL 1 #define MFW_SENSOR_LOCATION_EXTERNAL 2 @@ -1133,6 +1349,11 @@ enum resource_id_enum { RESOURCE_LL2_QUEUE_E = 15, RESOURCE_RDMA_STATS_QUEUE_E = 16, RESOURCE_BDQ_E = 17, + RESOURCE_QCN_E = 18, + RESOURCE_LLH_FILTER_E = 19, + RESOURCE_VF_MAC_ADDR = 20, + RESOURCE_LL2_CTX_E = 21, + RESOURCE_VF_CNQS = 22, RESOURCE_MAX_NUM, RESOURCE_NUM_INVALID = 0xFFFFFFFF }; @@ -1150,6 +1371,11 @@ struct resource_info { #define RESOURCE_ELEMENT_STRICT (1 << 0) }; +struct mcp_wwn { + u32 wwn_upper; + u32 wwn_lower; +}; + #define DRV_ROLE_NONE 0 #define DRV_ROLE_PREBOOT 1 #define DRV_ROLE_OS 2 @@ -1203,11 +1429,26 @@ struct attribute_cmd_write_stc { u32 offset; }; +struct lldp_stats_stc { + u32 tx_frames_total; + u32 rx_frames_total; + u32 rx_frames_discarded; + u32 rx_age_outs; +}; 
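To illustrate the drv_id encoding defined earlier in public_func (PDA compatibility version in the low 16 bits, MCP HSI version, driver type, and the init-HW flag above it), a composition sketch for a Linux driver follows. The combination logic is an assumption for illustration; only the field macros come from this header:

	/* Sketch: compose public_func.drv_id for a Linux driver that will
	 * initialize the hardware; 'pda_comp_ver' is the 16-bit PDA
	 * compatibility version.
	 */
	static inline u32 qede_make_drv_id(u16 pda_comp_ver)
	{
		return ((u32)pda_comp_ver & DRV_ID_PDA_COMP_VER_MASK) |
		       DRV_ID_MCP_HSI_VER_CURRENT |
		       DRV_ID_DRV_TYPE_LINUX |
		       DRV_ID_DRV_INIT_HW_FLAG;
	}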
+ +struct get_att_ctrl_stc { + u32 disabled_attns; + u32 controllable_attns; +}; + +struct trace_filter_stc { + u32 level; + u32 modules; +}; union drv_union_data { struct mcp_mac wol_mac; /* UNLOAD_DONE */ -/* This configuration should be set by the driver for the LINK_SET command. */ - + /* This configuration should be set by the driver for the LINK_SET command. */ struct eth_phy_cfg drv_phy_cfg; struct mcp_val64 val64; /* For PHY / AVS commands */ @@ -1215,8 +1456,8 @@ union drv_union_data { u8 raw_data[MCP_DRV_NVM_BUF_LEN]; struct mcp_file_att file_att; - - u32 ack_vf_disabled[VF_MAX_STATIC / 32]; + /* extended from VF_MAX_STATIC / 32 (6 dwords) to support 240 VFs (8 dwords) */ + u32 ack_vf_disabled[EXT_VF_BITMAP_SIZE_IN_DWORDS]; struct drv_version_stc drv_version; @@ -1229,287 +1470,385 @@ union drv_union_data { struct resource_info resource; struct bist_nvm_image_att nvm_image_att; struct mdump_config_stc mdump_config; + struct mcp_mac lldp_mac; + struct mcp_wwn fcoe_fabric_name; u32 dword; struct load_req_stc load_req; struct load_rsp_stc load_rsp; struct mdump_retain_data_stc mdump_retain; struct attribute_cmd_write_stc attribute_cmd_write; + struct lldp_stats_stc lldp_stats; + struct pcie_stats_stc pcie_stats; + + struct get_att_ctrl_stc get_att_ctrl; + struct fcoe_cap_stc fcoe_cap; + struct trace_filter_stc trace_filter; /* ... */ };
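Since drv_union_data overlays every request payload on the same scratchpad area, a driver fills exactly one member per mailbox command. As a sketch (the surrounding send/ack logic is assumed, not part of this header), staging the payload for the SET_TRACE_FILTER command defined further down would be:

	/* Sketch: stage the payload for DRV_MSG_CODE_SET_TRACE_FILTER in the
	 * shared mailbox union before issuing the command.
	 */
	static void stage_trace_filter(union drv_union_data *data,
				       u32 level, u32 modules)
	{
		data->trace_filter.level = level;
		data->trace_filter.modules = modules;
	}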
struct public_drv_mb { u32 drv_mb_header; -#define DRV_MSG_CODE_MASK 0xffff0000 -#define DRV_MSG_CODE_LOAD_REQ 0x10000000 -#define DRV_MSG_CODE_LOAD_DONE 0x11000000 -#define DRV_MSG_CODE_INIT_HW 0x12000000 -#define DRV_MSG_CODE_CANCEL_LOAD_REQ 0x13000000 -#define DRV_MSG_CODE_UNLOAD_REQ 0x20000000 -#define DRV_MSG_CODE_UNLOAD_DONE 0x21000000 -#define DRV_MSG_CODE_INIT_PHY 0x22000000 - /* Params - FORCE - Reinitialize the link regardless of LFA */ - /* - DONT_CARE - Don't flap the link if up */ -#define DRV_MSG_CODE_LINK_RESET 0x23000000 - -#define DRV_MSG_CODE_SET_LLDP 0x24000000 -#define DRV_MSG_CODE_REGISTER_LLDP_TLVS_RX 0x24100000 -#define DRV_MSG_CODE_SET_DCBX 0x25000000 - /* OneView feature driver HSI*/ -#define DRV_MSG_CODE_OV_UPDATE_CURR_CFG 0x26000000 -#define DRV_MSG_CODE_OV_UPDATE_BUS_NUM 0x27000000 -#define DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS 0x28000000 -#define DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER 0x29000000 -#define DRV_MSG_CODE_NIG_DRAIN 0x30000000 -#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE 0x31000000 -#define DRV_MSG_CODE_BW_UPDATE_ACK 0x32000000 -#define DRV_MSG_CODE_OV_UPDATE_MTU 0x33000000 -/* DRV_MB Param: driver version supp, FW_MB param: MFW version supp, - * data: struct resource_info - */ -#define DRV_MSG_GET_RESOURCE_ALLOC_MSG 0x34000000 -#define DRV_MSG_SET_RESOURCE_VALUE_MSG 0x35000000 -#define DRV_MSG_CODE_OV_UPDATE_WOL 0x38000000 -#define DRV_MSG_CODE_OV_UPDATE_ESWITCH_MODE 0x39000000 -#define DRV_MSG_CODE_S_TAG_UPDATE_ACK 0x3b000000 -#define DRV_MSG_CODE_OEM_UPDATE_FCOE_CVID 0x3c000000 -#define DRV_MSG_CODE_OEM_UPDATE_FCOE_FABRIC_NAME 0x3d000000 -#define DRV_MSG_CODE_OEM_UPDATE_BOOT_CFG 0x3e000000 -#define DRV_MSG_CODE_OEM_RESET_TO_DEFAULT 0x3f000000 -#define DRV_MSG_CODE_OV_GET_CURR_CFG 0x40000000 -#define DRV_MSG_CODE_GET_OEM_UPDATES 0x41000000 -/* params [31:8] - reserved, [7:0] - bitmap */ -#define DRV_MSG_CODE_GET_PPFID_BITMAP 0x43000000 +#define DRV_MSG_SEQ_NUMBER_MASK 0x0000ffff +#define DRV_MSG_SEQ_NUMBER_OFFSET 0 +#define DRV_MSG_CODE_MASK 0xffff0000 +#define DRV_MSG_CODE_OFFSET 16 + u32 drv_mb_param; -/* Param: [0:15] Option ID, [16] - All, [17] - Init, [18] - Commit, - * [19] - Free - */ -#define DRV_MSG_CODE_GET_NVM_CFG_OPTION 0x003e0000 -/* Param: [0:15] Option ID, [17] - Init, [18] , [19] - Free */ -#define DRV_MSG_CODE_SET_NVM_CFG_OPTION 0x003f0000 -/*deprecated don't use*/ -#define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED 0x02000000 -#define DRV_MSG_CODE_INITIATE_PF_FLR 0x02010000 -#define DRV_MSG_CODE_INITIATE_VF_FLR 0x02020000 -#define DRV_MSG_CODE_VF_DISABLED_DONE 0xc0000000 -#define DRV_MSG_CODE_CFG_VF_MSIX 0xc0010000 -#define DRV_MSG_CODE_CFG_PF_VFS_MSIX 0xc0020000 + u32 fw_mb_header; +#define FW_MSG_SEQ_NUMBER_MASK 0x0000ffff +#define FW_MSG_SEQ_NUMBER_OFFSET 0 +#define FW_MSG_CODE_MASK 0xffff0000 +#define FW_MSG_CODE_OFFSET 16 + u32 fw_mb_param; + + u32 drv_pulse_mb; +#define DRV_PULSE_SEQ_MASK 0x00007fff +#define DRV_PULSE_SYSTEM_TIME_MASK 0xffff0000 + /* + * The system time is in the format of + * (year-2001)*12*32 + month*32 + day. + */ +#define DRV_PULSE_ALWAYS_ALIVE 0x00008000 + /* + * Indicate to the firmware not to go into the + * OS-absent mode when it is not getting a driver pulse. + * This is used for debugging as well as for PXE(MBA). + */ + + u32 mcp_pulse_mb; +#define MCP_PULSE_SEQ_MASK 0x00007fff +#define MCP_PULSE_ALWAYS_ALIVE 0x00008000 + /* Indicates to the driver not to assert due to lack + * of MCP response + */ +#define MCP_EVENT_MASK 0xffff0000 +#define MCP_EVENT_OTHER_DRIVER_RESET_REQ 0x00010000 + + /* The union data is used by the driver to pass parameters to the scratchpad. */ + union drv_union_data union_data; + +};
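The drv_mb_header layout above (sequence number in bits [15:0], command code in bits [31:16]) implies the usual request pattern. A sketch, with the ready-polling and locking a real driver needs left out:

	/* Sketch: build the next request header.  The driver bumps the
	 * sequence number for each command and ors it into the code; the MFW
	 * echoes the same sequence back in fw_mb_header, which is how a
	 * response is matched to its request.
	 */
	static inline u32 next_drv_mb_header(u32 prev_drv_mb_header, u32 cmd)
	{
		u32 seq = (prev_drv_mb_header + 1) & DRV_MSG_SEQ_NUMBER_MASK;

		return (cmd & DRV_MSG_CODE_MASK) | seq;
	}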
+/***************************************************************/ +/* Driver Message Code (Request) */ +/***************************************************************/ +#define DRV_MSG_CODE(_code_) ((_code_) << DRV_MSG_CODE_OFFSET) +enum drv_msg_code_enum { /* Param is either DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MFW/IMAGE */ + DRV_MSG_CODE_NVM_PUT_FILE_BEGIN = DRV_MSG_CODE(0x0001), /* Param should be set to the transaction size (up to 64 bytes) */ + DRV_MSG_CODE_NVM_PUT_FILE_DATA = DRV_MSG_CODE(0x0002), /* MFW will place the file offset and len in file_att struct */ + DRV_MSG_CODE_NVM_GET_FILE_ATT = DRV_MSG_CODE(0x0003), +/* Read 32bytes of nvram data. Param is [0:23] - Offset, [24:31] - Len in Bytes */ + DRV_MSG_CODE_NVM_READ_NVRAM = DRV_MSG_CODE(0x0005), +/* Writes up to 32Bytes to nvram. Param is [0:23] - Offset, [24:31] - Len in Bytes. In case this + * address is in the range of secured file in secured mode, the operation will fail */ + DRV_MSG_CODE_NVM_WRITE_NVRAM = DRV_MSG_CODE(0x0006), /* Delete a file from nvram. Param is image_type. */ + DRV_MSG_CODE_NVM_DEL_FILE = DRV_MSG_CODE(0x0008), +/* Reset MCP when no NVM operation is going on, and no drivers are loaded. In case operation + * succeeds, MCP will not ack back. */ + DRV_MSG_CODE_MCP_RESET = DRV_MSG_CODE(0x0009), +/* Param: [0:15] - Address, [16:18] - lane# (0/1/2/3 - for single lane, 4/5 - for dual lanes, 6 - + * for all lanes, [28] - PMD reg, [29] - select port, [30:31] - port */ + DRV_MSG_CODE_PHY_RAW_READ = DRV_MSG_CODE(0x000b), +/* Param: [0:15] - Address, [16:18] - lane# (0/1/2/3 - for single lane, 4/5 - for dual lanes, 6 - + * for all lanes, [28] - PMD reg, [29] - select port, [30:31] - port */ + DRV_MSG_CODE_PHY_RAW_WRITE = DRV_MSG_CODE(0x000c), + /* Param: [0:15] - Address, [30:31] - port */ + DRV_MSG_CODE_PHY_CORE_READ = DRV_MSG_CODE(0x000d), + /* Param: [0:15] - Address, [30:31] - port */ + DRV_MSG_CODE_PHY_CORE_WRITE = DRV_MSG_CODE(0x000e), /* Param: [0:3] - version, [4:15] - name (null terminated) */ + DRV_MSG_CODE_SET_VERSION = DRV_MSG_CODE(0x000f), +/* Halts the MCP. To resume MCP, user will need to use MCP_REG_CPU_STATE/MCP_REG_CPU_MODE registers.
+ * */ -#define DRV_MSG_CODE_MCP_HALT 0x00100000 -/* Set virtual mac address, params [31:6] - reserved, [5:4] - type, - * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN + DRV_MSG_CODE_MCP_HALT = DRV_MSG_CODE(0x0010), +/* Set virtual mac address, params [31:6] - reserved, [5:4] - type, [3:0] - func, drv_data[7:0] - + * MAC/WWNN/WWPN */ -#define DRV_MSG_CODE_SET_VMAC 0x00110000 -/* Set virtual mac address, params [31:6] - reserved, [5:4] - type, - * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN + DRV_MSG_CODE_SET_VMAC = DRV_MSG_CODE(0x0011), +/* Set virtual mac address, params [31:6] - reserved, [5:4] - type, [3:0] - func, drv_data[7:0] - + * MAC/WWNN/WWPN */ -#define DRV_MSG_CODE_GET_VMAC 0x00120000 -#define DRV_MSG_CODE_VMAC_TYPE_OFFSET 4 -#define DRV_MSG_CODE_VMAC_TYPE_MASK 0x30 -#define DRV_MSG_CODE_VMAC_TYPE_MAC 1 -#define DRV_MSG_CODE_VMAC_TYPE_WWNN 2 -#define DRV_MSG_CODE_VMAC_TYPE_WWPN 3 + DRV_MSG_CODE_GET_VMAC = DRV_MSG_CODE(0x0012), /* Get statistics from pf, params [31:4] - reserved, [3:0] - stats type */ -#define DRV_MSG_CODE_GET_STATS 0x00130000 -#define DRV_MSG_CODE_STATS_TYPE_LAN 1 -#define DRV_MSG_CODE_STATS_TYPE_FCOE 2 -#define DRV_MSG_CODE_STATS_TYPE_ISCSI 3 -#define DRV_MSG_CODE_STATS_TYPE_RDMA 4 -/* Host shall provide buffer and size for MFW */ -#define DRV_MSG_CODE_PMD_DIAG_DUMP 0x00140000 -/* Host shall provide buffer and size for MFW */ -#define DRV_MSG_CODE_PMD_DIAG_EYE 0x00150000 -/* Param: [0:1] - Port, [2:7] - read size, [8:15] - I2C address, - * [16:31] - offset - */ -#define DRV_MSG_CODE_TRANSCEIVER_READ 0x00160000 -/* Param: [0:1] - Port, [2:7] - write size, [8:15] - I2C address, - * [16:31] - offset - */ -#define DRV_MSG_CODE_TRANSCEIVER_WRITE 0x00170000 + DRV_MSG_CODE_GET_STATS = DRV_MSG_CODE(0x0013), +/* Host shall provide buffer and size for MFW */ + DRV_MSG_CODE_PMD_DIAG_DUMP = DRV_MSG_CODE(0x0014), +/* Host shall provide buffer and size for MFW */ + DRV_MSG_CODE_PMD_DIAG_EYE = DRV_MSG_CODE(0x0015), +/* Param: [0:1] - Port, [2:7] - read size, [8:15] - I2C address, [16:31] - offset */ + DRV_MSG_CODE_TRANSCEIVER_READ = DRV_MSG_CODE(0x0016), +/* Param: [0:1] - Port, [2:7] - write size, [8:15] - I2C address, [16:31] - offset */ + DRV_MSG_CODE_TRANSCEIVER_WRITE = DRV_MSG_CODE(0x0017), /* indicate OCBB related information */ -#define DRV_MSG_CODE_OCBB_DATA 0x00180000 + DRV_MSG_CODE_OCBB_DATA = DRV_MSG_CODE(0x0018), /* Set function BW, params[15:8] - min, params[7:0] - max */ -#define DRV_MSG_CODE_SET_BW 0x00190000 + DRV_MSG_CODE_SET_BW = DRV_MSG_CODE(0x0019), +/* When param is set to 1, all parities will be masked(disabled). When params are set to 0, parities + * will be unmasked again. + */ + DRV_MSG_CODE_MASK_PARITIES = DRV_MSG_CODE(0x001a), +/* param[0] - Simulate fan failure, param[1] - simulate over temp. 
*/ + DRV_MSG_CODE_INDUCE_FAILURE = DRV_MSG_CODE(0x001b), +/* Param: [0:15] - gpio number */ + DRV_MSG_CODE_GPIO_READ = DRV_MSG_CODE(0x001c), +/* Param: [0:15] - gpio number, [16:31] - gpio value */ + DRV_MSG_CODE_GPIO_WRITE = DRV_MSG_CODE(0x001d), +/* Param: [0:7] - test enum, [8:15] - image index, [16:31] - reserved */ + DRV_MSG_CODE_BIST_TEST = DRV_MSG_CODE(0x001e), + DRV_MSG_CODE_GET_TEMPERATURE = DRV_MSG_CODE(0x001f), +/* Set LED mode params :0 operational, 1 LED turn ON, 2 LED turn OFF */ + DRV_MSG_CODE_SET_LED_MODE = DRV_MSG_CODE(0x0020), +/* drv_data[7:0] - EPOC in seconds, drv_data[15:8] - driver version (MAJ MIN BUILD SUB) */ + DRV_MSG_CODE_TIMESTAMP = DRV_MSG_CODE(0x0021), +/* This is an empty mailbox just return OK*/ + DRV_MSG_CODE_EMPTY_MB = DRV_MSG_CODE(0x0022), +/* Param[0:4] - resource number (0-31), Param[5:7] - opcode, param[15:8] - age */ + DRV_MSG_CODE_RESOURCE_CMD = DRV_MSG_CODE(0x0023), + DRV_MSG_CODE_GET_MBA_VERSION = DRV_MSG_CODE(0x0024), /* Get MBA version */ +/* Send crash dump commands with param[3:0] - opcode */ + DRV_MSG_CODE_MDUMP_CMD = DRV_MSG_CODE(0x0025), + DRV_MSG_CODE_MEM_ECC_EVENTS = DRV_MSG_CODE(0x0026), /* Param: None */ +/* Param: [0:15] - gpio number */ + DRV_MSG_CODE_GPIO_INFO = DRV_MSG_CODE(0x0027), +/* Value will be placed in union */ + DRV_MSG_CODE_EXT_PHY_READ = DRV_MSG_CODE(0x0028), +/* Value should be placed in union */ + DRV_MSG_CODE_EXT_PHY_WRITE = DRV_MSG_CODE(0x0029), + DRV_MSG_CODE_EXT_PHY_FW_UPGRADE = DRV_MSG_CODE(0x002a), + DRV_MSG_CODE_GET_PF_RDMA_PROTOCOL = DRV_MSG_CODE(0x002b), + DRV_MSG_CODE_SET_LLDP_MAC = DRV_MSG_CODE(0x002c), + DRV_MSG_CODE_GET_LLDP_MAC = DRV_MSG_CODE(0x002d), + DRV_MSG_CODE_OS_WOL = DRV_MSG_CODE(0x002e), + DRV_MSG_CODE_GET_TLV_DONE = DRV_MSG_CODE(0x002f), /* Param: None */ +/* Param: Set DRV_MB_PARAM_FEATURE_SUPPORT_* */ + DRV_MSG_CODE_FEATURE_SUPPORT = DRV_MSG_CODE(0x0030), +/* return FW_MB_PARAM_FEATURE_SUPPORT_* */ + DRV_MSG_CODE_GET_MFW_FEATURE_SUPPORT = DRV_MSG_CODE(0x0031), + DRV_MSG_CODE_READ_WOL_REG = DRV_MSG_CODE(0x0032), + DRV_MSG_CODE_WRITE_WOL_REG = DRV_MSG_CODE(0x0033), + DRV_MSG_CODE_GET_WOL_BUFFER = DRV_MSG_CODE(0x0034), +/* Param: [0:23] Attribute key, [24:31] Attribute sub command */ + DRV_MSG_CODE_ATTRIBUTE = DRV_MSG_CODE(0x0035), +/* Param: Password len. Union: Plain Password */ + DRV_MSG_CODE_ENCRYPT_PASSWORD = DRV_MSG_CODE(0x0036), + DRV_MSG_CODE_GET_ENGINE_CONFIG = DRV_MSG_CODE(0x0037), /* Param: None */ +/* Param: [0:7] - Cmd, [8:9] - len */ + DRV_MSG_CODE_PMBUS_READ = DRV_MSG_CODE(0x0038), +/* Param: [0:7] - Cmd, [8:9] - len, [16:31] -data */ + DRV_MSG_CODE_PMBUS_WRITE = DRV_MSG_CODE(0x0039), + DRV_MSG_CODE_GENERIC_IDC = DRV_MSG_CODE(0x003a), +/* If param = 0, it triggers engine reset. 
If param = 1, it triggers soft_reset */ + DRV_MSG_CODE_RESET_CHIP = DRV_MSG_CODE(0x003b), + DRV_MSG_CODE_SET_RETAIN_VMAC = DRV_MSG_CODE(0x003c), + DRV_MSG_CODE_GET_RETAIN_VMAC = DRV_MSG_CODE(0x003d), +/* Param: [0:15] Option ID, [16] - All, [17] - Init, [18] - Commit, [19] - Free, [20] - + * SelectEntity, [24:27] - EntityID + */ + DRV_MSG_CODE_GET_NVM_CFG_OPTION = DRV_MSG_CODE(0x003e), +/* Param: [0:15] Option ID, [17] - Init, [18] , [19] - Free, [20] - SelectEntity, [24:27] - EntityID + * + */ + DRV_MSG_CODE_SET_NVM_CFG_OPTION = DRV_MSG_CODE(0x003f), + /* param determine the timeout in usec */ + DRV_MSG_CODE_PCIE_STATS_START = DRV_MSG_CODE(0x0040), +/* MFW returns the stats (struct pcie_stats_stc pcie_stats) gathered in fw_param */ + DRV_MSG_CODE_PCIE_STATS_GET = DRV_MSG_CODE(0x0041), + DRV_MSG_CODE_GET_ATTN_CONTROL = DRV_MSG_CODE(0x0042), + DRV_MSG_CODE_SET_ATTN_CONTROL = DRV_MSG_CODE(0x0043), +/* Param is the struct size. Arguments are in trace_filter_stc (raw data) */ + DRV_MSG_CODE_SET_TRACE_FILTER = DRV_MSG_CODE(0x0044), +/* No params. Restores the trace level to the nvm cfg */ + DRV_MSG_CODE_RESTORE_TRACE_FILTER = DRV_MSG_CODE(0x0045), + DRV_MSG_CODE_INITIATE_FLR_DEPRECATED = DRV_MSG_CODE(0x0200), /*deprecated don't use*/ + DRV_MSG_CODE_INITIATE_PF_FLR = DRV_MSG_CODE(0x0201), + DRV_MSG_CODE_INITIATE_VF_FLR = DRV_MSG_CODE(0x0202), + DRV_MSG_CODE_LOAD_REQ = DRV_MSG_CODE(0x1000), + DRV_MSG_CODE_LOAD_DONE = DRV_MSG_CODE(0x1100), + DRV_MSG_CODE_INIT_HW = DRV_MSG_CODE(0x1200), + DRV_MSG_CODE_CANCEL_LOAD_REQ = DRV_MSG_CODE(0x1300), + DRV_MSG_CODE_UNLOAD_REQ = DRV_MSG_CODE(0x2000), + DRV_MSG_CODE_UNLOAD_DONE = DRV_MSG_CODE(0x2100), +/* Params - FORCE - Reinitialize the link regardless of LFA , - DONT_CARE - Don't flap the link if + * up + */ + DRV_MSG_CODE_INIT_PHY = DRV_MSG_CODE(0x2200), + DRV_MSG_CODE_LINK_RESET = DRV_MSG_CODE(0x2300), + DRV_MSG_CODE_SET_LLDP = DRV_MSG_CODE(0x2400), + DRV_MSG_CODE_REGISTER_LLDP_TLVS_RX = DRV_MSG_CODE(0x2410), +/* OneView feature driver HSI*/ + DRV_MSG_CODE_SET_DCBX = DRV_MSG_CODE(0x2500), + DRV_MSG_CODE_OV_UPDATE_CURR_CFG = DRV_MSG_CODE(0x2600), + DRV_MSG_CODE_OV_UPDATE_BUS_NUM = DRV_MSG_CODE(0x2700), + DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS = DRV_MSG_CODE(0x2800), + DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER = DRV_MSG_CODE(0x2900), + DRV_MSG_CODE_NIG_DRAIN = DRV_MSG_CODE(0x3000), + DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE = DRV_MSG_CODE(0x3100), + DRV_MSG_CODE_BW_UPDATE_ACK = DRV_MSG_CODE(0x3200), + DRV_MSG_CODE_OV_UPDATE_MTU = DRV_MSG_CODE(0x3300), +/* DRV_MB Param: driver version supp, FW_MB param: MFW version supp, data: struct resource_info */ + DRV_MSG_GET_RESOURCE_ALLOC_MSG = DRV_MSG_CODE(0x3400), + DRV_MSG_SET_RESOURCE_VALUE_MSG = DRV_MSG_CODE(0x3500), + DRV_MSG_CODE_OV_UPDATE_WOL = DRV_MSG_CODE(0x3800), + DRV_MSG_CODE_OV_UPDATE_ESWITCH_MODE = DRV_MSG_CODE(0x3900), + DRV_MSG_CODE_S_TAG_UPDATE_ACK = DRV_MSG_CODE(0x3b00), + DRV_MSG_CODE_OEM_UPDATE_FCOE_CVID = DRV_MSG_CODE(0x3c00), + DRV_MSG_CODE_OEM_UPDATE_FCOE_FABRIC_NAME = DRV_MSG_CODE(0x3d00), + DRV_MSG_CODE_OEM_UPDATE_BOOT_CFG = DRV_MSG_CODE(0x3e00), + DRV_MSG_CODE_OEM_RESET_TO_DEFAULT = DRV_MSG_CODE(0x3f00), + DRV_MSG_CODE_OV_GET_CURR_CFG = DRV_MSG_CODE(0x4000), + DRV_MSG_CODE_GET_OEM_UPDATES = DRV_MSG_CODE(0x4100), + DRV_MSG_CODE_GET_LLDP_STATS = DRV_MSG_CODE(0x4200), +/* params [31:8] - reserved, [7:0] - bitmap */ + DRV_MSG_CODE_GET_PPFID_BITMAP = DRV_MSG_CODE(0x4300), + DRV_MSG_CODE_VF_DISABLED_DONE = DRV_MSG_CODE(0xc000), + DRV_MSG_CODE_CFG_VF_MSIX = DRV_MSG_CODE(0xc001), + DRV_MSG_CODE_CFG_PF_VFS_MSIX = 
DRV_MSG_CODE(0xc002), +/* get permanent mac - params: [31:24] reserved, [23:8] index, [7:4] reserved, [3:0] type, + * driver_data[7:0] - mac + */ + DRV_MSG_CODE_GET_PERM_MAC = DRV_MSG_CODE(0xc003), +/* send driver debug data, up to 32 bytes; params [7:0] - size (< 32), data in the union part */ + DRV_MSG_CODE_DEBUG_DATA_SEND = DRV_MSG_CODE(0xc004), +/* get FCoE capabilities from the driver */ + DRV_MSG_CODE_GET_FCOE_CAP = DRV_MSG_CODE(0xc005), + /* params [7:0] - VFID, [8] - set/clear */ + DRV_MSG_CODE_VF_WITH_MORE_16SB = DRV_MSG_CODE(0xc006), + /* return FW_MB_MANAGEMENT_STATUS */ + DRV_MSG_CODE_GET_MANAGEMENT_STATUS = DRV_MSG_CODE(0xc007), +}; + +/* DRV_MSG_CODE_VMAC_TYPE parameters */ +#define DRV_MSG_CODE_VMAC_TYPE_OFFSET 4 +#define DRV_MSG_CODE_VMAC_TYPE_MASK 0x30 +#define DRV_MSG_CODE_VMAC_TYPE_MAC 1 +#define DRV_MSG_CODE_VMAC_TYPE_WWNN 2 +#define DRV_MSG_CODE_VMAC_TYPE_WWPN 3 + +/* DRV_MSG_CODE_RETAIN_VMAC parameters */ +#define DRV_MSG_CODE_RETAIN_VMAC_FUNC_OFFSET 0 +#define DRV_MSG_CODE_RETAIN_VMAC_FUNC_MASK 0xf + +#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_OFFSET 4 +#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_MASK 0x70 +#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_L2 0 +#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_ISCSI 1 +#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_FCOE 2 +#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_WWNN 3 +#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_WWPN 4 + +#define DRV_MSG_CODE_MCP_RESET_FORCE 0xf04ce +/* DRV_MSG_CODE_STATS_TYPE parameters */ +#define DRV_MSG_CODE_STATS_TYPE_LAN 1 +#define DRV_MSG_CODE_STATS_TYPE_FCOE 2 +#define DRV_MSG_CODE_STATS_TYPE_ISCSI 3 +#define DRV_MSG_CODE_STATS_TYPE_RDMA 4 + +/* DRV_MSG_CODE_SET_BW parameters */ #define BW_MAX_MASK 0x000000ff #define BW_MAX_OFFSET 0 #define BW_MIN_MASK 0x0000ff00 #define BW_MIN_OFFSET 8 -/* When param is set to 1, all parities will be masked(disabled). When params - * are set to 0, parities will be unmasked again. - */ -#define DRV_MSG_CODE_MASK_PARITIES 0x001a0000 -/* param[0] - Simulate fan failure, param[1] - simulate over temp.
*/ -#define DRV_MSG_CODE_INDUCE_FAILURE 0x001b0000 +/* DRV_MSG_CODE_INDUCE_FAILURE parameters */ #define DRV_MSG_FAN_FAILURE_TYPE (1 << 0) #define DRV_MSG_TEMPERATURE_FAILURE_TYPE (1 << 1) -/* Param: [0:15] - gpio number */ -#define DRV_MSG_CODE_GPIO_READ 0x001c0000 -/* Param: [0:15] - gpio number, [16:31] - gpio value */ -#define DRV_MSG_CODE_GPIO_WRITE 0x001d0000 -/* Param: [0:7] - test enum, [8:15] - image index, [16:31] - reserved */ -#define DRV_MSG_CODE_BIST_TEST 0x001e0000 -#define DRV_MSG_CODE_GET_TEMPERATURE 0x001f0000 - -/* Set LED mode params :0 operational, 1 LED turn ON, 2 LED turn OFF */ -#define DRV_MSG_CODE_SET_LED_MODE 0x00200000 -/* drv_data[7:0] - EPOC in seconds, drv_data[15:8] - - * driver version (MAJ MIN BUILD SUB) - */ -#define DRV_MSG_CODE_TIMESTAMP 0x00210000 -/* This is an empty mailbox just return OK*/ -#define DRV_MSG_CODE_EMPTY_MB 0x00220000 - -/* Param[0:4] - resource number (0-31), Param[5:7] - opcode, - * param[15:8] - age - */ -#define DRV_MSG_CODE_RESOURCE_CMD 0x00230000 +/* DRV_MSG_CODE_RESOURCE_CMD parameters */ #define RESOURCE_CMD_REQ_RESC_MASK 0x0000001F #define RESOURCE_CMD_REQ_RESC_OFFSET 0 #define RESOURCE_CMD_REQ_OPCODE_MASK 0x000000E0 #define RESOURCE_CMD_REQ_OPCODE_OFFSET 5 /* request resource ownership with default aging */ #define RESOURCE_OPCODE_REQ 1 -/* request resource ownership without aging */ -#define RESOURCE_OPCODE_REQ_WO_AGING 2 +#define RESOURCE_OPCODE_REQ_WO_AGING 2 /* request resource ownership without aging */ /* request resource ownership with specific aging timer (in seconds) */ #define RESOURCE_OPCODE_REQ_W_AGING 3 #define RESOURCE_OPCODE_RELEASE 4 /* release resource */ -/* force resource release */ -#define RESOURCE_OPCODE_FORCE_RELEASE 5 +#define RESOURCE_OPCODE_FORCE_RELEASE 5 /* force resource release */ #define RESOURCE_CMD_REQ_AGE_MASK 0x0000FF00 #define RESOURCE_CMD_REQ_AGE_OFFSET 8 - #define RESOURCE_CMD_RSP_OWNER_MASK 0x000000FF #define RESOURCE_CMD_RSP_OWNER_OFFSET 0 #define RESOURCE_CMD_RSP_OPCODE_MASK 0x00000700 #define RESOURCE_CMD_RSP_OPCODE_OFFSET 8 -/* resource is free and granted to requester */ -#define RESOURCE_OPCODE_GNT 1 -/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15, - * 16 = MFW, 17 = diag over serial +#define RESOURCE_OPCODE_GNT 1 /* resource is free and granted to requester */ +/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15, 16 = MFW, 17 = diag over + * serial */ #define RESOURCE_OPCODE_BUSY 2 -/* indicate release request was acknowledged */ -#define RESOURCE_OPCODE_RELEASED 3 +#define RESOURCE_OPCODE_RELEASED 3 /* indicate release request was acknowledged */ /* indicate release request was previously received by other owner */ #define RESOURCE_OPCODE_RELEASED_PREVIOUS 4 -/* indicate wrong owner during release */ -#define RESOURCE_OPCODE_WRONG_OWNER 5 +#define RESOURCE_OPCODE_WRONG_OWNER 5 /* indicate wrong owner during release */ #define RESOURCE_OPCODE_UNKNOWN_CMD 255 +#define RESOURCE_DUMP 0 /* dedicate resource 0 for dump */ -/* dedicate resource 0 for dump */ -#define RESOURCE_DUMP 0 - -#define DRV_MSG_CODE_GET_MBA_VERSION 0x00240000 /* Get MBA version */ -/* Send crash dump commands with param[3:0] - opcode */ -#define DRV_MSG_CODE_MDUMP_CMD 0x00250000 -#define MDUMP_DRV_PARAM_OPCODE_MASK 0x0000000f +/* DRV_MSG_CODE_MDUMP_CMD parameters */ +#define MDUMP_DRV_PARAM_OPCODE_MASK 0x000000ff /* acknowledge reception of error indication */ -#define DRV_MSG_CODE_MDUMP_ACK 0x01 -/* set epoc and personality as follow: drv_data[3:0] - epoch, - * 
drv_data[7:4] - personality - */ +#define DRV_MSG_CODE_MDUMP_SET_VALUES 0x02 -/* trigger crash dump procedure */ -#define DRV_MSG_CODE_MDUMP_TRIGGER 0x03 -/* Request valid logs and config words */ -#define DRV_MSG_CODE_MDUMP_GET_CONFIG 0x04 -/* Set triggers mask. drv_mb_param should indicate (bitwise) which - * trigger enabled - */ +#define DRV_MSG_CODE_MDUMP_TRIGGER 0x03 /* trigger crash dump procedure */ +#define DRV_MSG_CODE_MDUMP_GET_CONFIG 0x04 /* Request valid logs and config words */ +/* Set triggers mask. drv_mb_param should indicate (bitwise) which trigger enabled */ #define DRV_MSG_CODE_MDUMP_SET_ENABLE 0x05 -/* Clear all logs */ -#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS 0x06 +#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS 0x06 /* Clear all logs */ #define DRV_MSG_CODE_MDUMP_GET_RETAIN 0x07 /* Get retained data */ #define DRV_MSG_CODE_MDUMP_CLR_RETAIN 0x08 /* Clear retain data */ -#define DRV_MSG_CODE_MEM_ECC_EVENTS 0x00260000 /* Param: None */ -/* Param: [0:15] - gpio number */ -#define DRV_MSG_CODE_GPIO_INFO 0x00270000 -/* Value will be placed in union */ -#define DRV_MSG_CODE_EXT_PHY_READ 0x00280000 -/* Value should be placed in union */ -#define DRV_MSG_CODE_EXT_PHY_WRITE 0x00290000 -#define DRV_MB_PARAM_ADDR_OFFSET 0 +/* hw_dump parameters */ +#define DRV_MSG_CODE_HW_DUMP_TRIGGER 0x0a /* Trigger hw_dump procedure */ +/* Free Link_Dump / Idle_check buffer after the driver consumed it */ +#define DRV_MSG_CODE_MDUMP_FREE_DRIVER_BUF 0x0b +/* Generate debug data for scripts as wol_dump, link_dump and phy_dump. Data will be available at + * fw_param (offsize in fw_param) for 6 seconds + */ +#define DRV_MSG_CODE_MDUMP_GEN_LINK_DUMP 0x0c +/* Generate debug data for scripts as idle_chk */ +#define DRV_MSG_CODE_MDUMP_GEN_IDLE_CHK 0x0d + + +/* DRV_MSG_CODE_MDUMP_CMD options */ +#define MDUMP_DRV_PARAM_OPTION_MASK 0x00000f00 +/* Driver will not use debug data (GEN_LINK_DUMP, GEN_IDLE_CHK), free Buffer immediately */ +#define DRV_MSG_CODE_MDUMP_USE_DRIVER_BUF_OFFSET 8 +/* Driver will not use debug data (GEN_LINK_DUMP, GEN_IDLE_CHK), free Buffer immediately */ +#define DRV_MSG_CODE_MDUMP_USE_DRIVER_BUF_MASK 0x100 + +
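Putting the opcode and option fields together, composing drv_mb_param for DRV_MSG_CODE_MDUMP_CMD works out as below. A sketch only, derived from the masks above; the surrounding mailbox call is assumed:

	/* Sketch: drv_mb_param for an MDUMP sub-command -- sub-opcode in
	 * bits [7:0], options in bits [11:8].  Passing use_driver_buf = 1
	 * tells the MFW the driver will not consume the debug data, so the
	 * buffer can be freed immediately.
	 */
	static inline u32 mdump_cmd_param(u32 opcode, u32 use_driver_buf)
	{
		return (opcode & MDUMP_DRV_PARAM_OPCODE_MASK) |
		       ((use_driver_buf << DRV_MSG_CODE_MDUMP_USE_DRIVER_BUF_OFFSET) &
			DRV_MSG_CODE_MDUMP_USE_DRIVER_BUF_MASK);
	}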
+/* DRV_MSG_CODE_EXT_PHY_READ/DRV_MSG_CODE_EXT_PHY_WRITE parameters */ +#define DRV_MB_PARAM_ADDR_OFFSET 0 #define DRV_MB_PARAM_ADDR_MASK 0x0000FFFF #define DRV_MB_PARAM_DEVAD_OFFSET 16 #define DRV_MB_PARAM_DEVAD_MASK 0x001F0000 -#define DRV_MB_PARAM_PORT_OFFSET 21 +#define DRV_MB_PARAM_PORT_OFFSET 21 #define DRV_MB_PARAM_PORT_MASK 0x00600000 -#define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE 0x002a0000 - -#define DRV_MSG_CODE_GET_TLV_DONE 0x002f0000 /* Param: None */ -/* Param: Set DRV_MB_PARAM_FEATURE_SUPPORT_* */ -#define DRV_MSG_CODE_FEATURE_SUPPORT 0x00300000 -/* return FW_MB_PARAM_FEATURE_SUPPORT_* */ -#define DRV_MSG_CODE_GET_MFW_FEATURE_SUPPORT 0x00310000 -#define DRV_MSG_CODE_READ_WOL_REG 0X00320000 -#define DRV_MSG_CODE_WRITE_WOL_REG 0X00330000 -#define DRV_MSG_CODE_GET_WOL_BUFFER 0X00340000 -/* Param: [0:23] Attribute key, [24:31] Attribute sub command */ -#define DRV_MSG_CODE_ATTRIBUTE 0x00350000 -/* Param: Password len. Union: Plain Password */ -#define DRV_MSG_CODE_ENCRYPT_PASSWORD 0x00360000 -#define DRV_MSG_CODE_GET_ENGINE_CONFIG 0x00370000 /* Param: None */ - -#define DRV_MSG_SEQ_NUMBER_MASK 0x0000ffff +/* DRV_MSG_CODE_PMBUS_READ/DRV_MSG_CODE_PMBUS_WRITE parameters */ +#define DRV_MB_PARAM_PMBUS_CMD_OFFSET 0 +#define DRV_MB_PARAM_PMBUS_CMD_MASK 0xFF +#define DRV_MB_PARAM_PMBUS_LEN_OFFSET 8 +#define DRV_MB_PARAM_PMBUS_LEN_MASK 0x300 +#define DRV_MB_PARAM_PMBUS_DATA_OFFSET 16 +#define DRV_MB_PARAM_PMBUS_DATA_MASK 0xFFFF0000 - u32 drv_mb_param; /* UNLOAD_REQ params */ -#define DRV_MB_PARAM_UNLOAD_WOL_UNKNOWN 0x00000000 #define DRV_MB_PARAM_UNLOAD_WOL_MCP 0x00000001 -#define DRV_MB_PARAM_UNLOAD_WOL_DISABLED 0x00000002 -#define DRV_MB_PARAM_UNLOAD_WOL_ENABLED 0x00000003 +#define DRV_MB_PARAM_UNLOAD_WOL_UNKNOWN 0x00000000 +#define DRV_MB_PARAM_UNLOAD_WOL_DISABLED 0x00000002 +#define DRV_MB_PARAM_UNLOAD_WOL_ENABLED 0x00000003 /* UNLOAD_DONE_params */ -#define DRV_MB_PARAM_UNLOAD_NON_D3_POWER 0x00000001 +#define DRV_MB_PARAM_UNLOAD_NON_D3_POWER 0x00000001 /* INIT_PHY params */ #define DRV_MB_PARAM_INIT_PHY_FORCE 0x00000001 @@ -1534,8 +1873,11 @@ struct public_drv_mb { #define DRV_MB_PARAM_NIG_DRAIN_PERIOD_MS_MASK 0x000000FF #define DRV_MB_PARAM_NIG_DRAIN_PERIOD_MS_OFFSET 0 +#define DRV_MB_PARAM_NVM_PUT_FILE_TYPE_MASK 0x000000ff +#define DRV_MB_PARAM_NVM_PUT_FILE_TYPE_OFFSET 0 #define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MFW 0x1 #define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_IMAGE 0x2 +#define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MBI 0x3 #define DRV_MB_PARAM_NVM_OFFSET_OFFSET 0 #define DRV_MB_PARAM_NVM_OFFSET_MASK 0x00FFFFFF @@ -1557,13 +1899,18 @@ struct public_drv_mb { #define DRV_MB_PARAM_PHYMOD_SIZE_MASK 0x000FFF00 /* configure vf MSIX params BB */ #define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_OFFSET 0 -#define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK 0x000000FF +#define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK 0x000000FF #define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_OFFSET 8 #define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_MASK 0x0000FF00 /* configure vf MSIX for PF params AH*/ #define DRV_MB_PARAM_CFG_PF_VFS_MSIX_SB_NUM_OFFSET 0 #define DRV_MB_PARAM_CFG_PF_VFS_MSIX_SB_NUM_MASK 0x000000FF +#define DRV_MSG_CODE_VF_WITH_MORE_16SB_VF_ID_OFFSET 0 +#define DRV_MSG_CODE_VF_WITH_MORE_16SB_VF_ID_MASK 0x000000FF +#define DRV_MSG_CODE_VF_WITH_MORE_16SB_SET_OFFSET 8 +#define DRV_MSG_CODE_VF_WITH_MORE_16SB_SET_MASK 0x00000100 +
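The VF MSI-X configuration parameter above is another straightforward field composition; a sketch of building drv_mb_param for DRV_MSG_CODE_CFG_VF_MSIX on BB (not part of the diff):

	/* Sketch: drv_mb_param for DRV_MSG_CODE_CFG_VF_MSIX -- VF ID in
	 * bits [7:0], number of status blocks in bits [15:8].
	 */
	static inline u32 cfg_vf_msix_param(u8 vf_id, u8 num_sbs)
	{
		return ((u32)vf_id << DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_OFFSET) |
		       ((u32)num_sbs << DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_OFFSET);
	}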
 /* OneView configuration parameters */ #define DRV_MB_PARAM_OV_CURR_CFG_OFFSET 0 #define DRV_MB_PARAM_OV_CURR_CFG_MASK 0x0000000F @@ -1576,7 +1923,7 @@ struct public_drv_mb { #define DRV_MB_PARAM_OV_CURR_CFG_DCI 6 #define DRV_MB_PARAM_OV_CURR_CFG_HII 7 -#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OFFSET 0 +#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OFFSET 0 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_MASK 0x000000FF #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_NONE (1 << 0) #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_ISCSI_IP_ACQUIRED (1 << 1) @@ -1587,9 +1934,9 @@ struct public_drv_mb { #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_LOGGED_INTO_TGT (1 << 4) #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_IMG_DOWNLOADED (1 << 5) #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OS_HANDOFF (1 << 6) -#define DRV_MB_PARAM_OV_UPDATE_BOOT_COMPLETED 0 +#define DRV_MB_PARAM_OV_UPDATE_BOOT_COMPLETED 0 -#define DRV_MB_PARAM_OV_PCI_BUS_NUM_OFFSET 0 +#define DRV_MB_PARAM_OV_PCI_BUS_NUM_OFFSET 0 #define DRV_MB_PARAM_OV_PCI_BUS_NUM_MASK 0x000000FF #define DRV_MB_PARAM_OV_STORM_FW_VER_OFFSET 0 @@ -1602,26 +1949,42 @@ struct public_drv_mb { #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_OFFSET 0 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_MASK 0xF #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_UNKNOWN 0x1 -/* Not Installed*/ -#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_NOT_LOADED 0x2 +#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_NOT_LOADED 0x2 /* Not installed */ #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_LOADING 0x3 /* installed but disabled by user/admin/OS */ #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_DISABLED 0x4 -/* installed and active */ -#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_ACTIVE 0x5 +#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_ACTIVE 0x5 /* installed and active */ -#define DRV_MB_PARAM_OV_MTU_SIZE_OFFSET 0 +#define DRV_MB_PARAM_OV_MTU_SIZE_OFFSET 0 #define DRV_MB_PARAM_OV_MTU_SIZE_MASK 0xFFFFFFFF -#define DRV_MB_PARAM_ESWITCH_MODE_MASK (DRV_MB_PARAM_ESWITCH_MODE_NONE | \ +#define DRV_MB_PARAM_WOL_MASK (DRV_MB_PARAM_WOL_DEFAULT | \ + DRV_MB_PARAM_WOL_DISABLED | \ + DRV_MB_PARAM_WOL_ENABLED) +#define DRV_MB_PARAM_WOL_DEFAULT DRV_MB_PARAM_UNLOAD_WOL_MCP +#define DRV_MB_PARAM_WOL_DISABLED DRV_MB_PARAM_UNLOAD_WOL_DISABLED +#define DRV_MB_PARAM_WOL_ENABLED DRV_MB_PARAM_UNLOAD_WOL_ENABLED + +#define DRV_MB_PARAM_ESWITCH_MODE_MASK (DRV_MB_PARAM_ESWITCH_MODE_NONE | \ DRV_MB_PARAM_ESWITCH_MODE_VEB | \ DRV_MB_PARAM_ESWITCH_MODE_VEPA) -#define DRV_MB_PARAM_ESWITCH_MODE_NONE 0x0 -#define DRV_MB_PARAM_ESWITCH_MODE_VEB 0x1 -#define DRV_MB_PARAM_ESWITCH_MODE_VEPA 0x2 +#define DRV_MB_PARAM_ESWITCH_MODE_NONE 0x0 +#define DRV_MB_PARAM_ESWITCH_MODE_VEB 0x1 +#define DRV_MB_PARAM_ESWITCH_MODE_VEPA 0x2 + +#define DRV_MB_PARAM_FCOE_CVID_MASK 0xFFF +#define DRV_MB_PARAM_FCOE_CVID_OFFSET 0 + +#define DRV_MB_PARAM_BOOT_CFG_UPDATE_MASK 0xFF +#define DRV_MB_PARAM_BOOT_CFG_UPDATE_OFFSET 0 +#define DRV_MB_PARAM_BOOT_CFG_UPDATE_FCOE 0x01 +#define DRV_MB_PARAM_BOOT_CFG_UPDATE_ISCSI 0x02 -#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_MASK 0x1 -#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET 0 +#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_MASK 0x1 +#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET 0 + +#define DRV_MB_PARAM_LLDP_STATS_AGENT_MASK 0xFF +#define DRV_MB_PARAM_LLDP_STATS_AGENT_OFFSET 0 #define DRV_MB_PARAM_SET_LED_MODE_OPER 0x0 #define DRV_MB_PARAM_SET_LED_MODE_ON 0x1 @@ -1665,39 +2028,43 @@ struct public_drv_mb { #define DRV_MB_PARAM_BIST_RC_FAILED 2 #define DRV_MB_PARAM_BIST_RC_INVALID_PARAMETER 3 -#define DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET 0 -#define DRV_MB_PARAM_BIST_TEST_INDEX_MASK 0x000000FF +#define DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET 0 +#define DRV_MB_PARAM_BIST_TEST_INDEX_MASK 0x000000FF #define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_OFFSET 8 -#define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_MASK 0x0000FF00 +#define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_MASK 0x0000FF00 -#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_MASK 0x0000FFFF -#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_OFFSET 0 +#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_MASK 0x0000FFFF +#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_OFFSET 0 /* driver supports SmartLinQ parameter */ #define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SMARTLINQ 0x00000001 -/* driver supports EEE parameter */ -#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EEE 0x00000002 -#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_MASK 0xFFFF0000 -#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_OFFSET 16 +#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EEE 0x00000002 /* driver supports EEE parameter */ +/* driver supports FEC Control parameter */ +#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_FEC_CONTROL 0x00000004 +/* driver supports extended speed and FEC Control parameters */ +#define
DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EXT_SPEED_FEC_CONTROL 0x00000008 +#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_MASK 0xFFFF0000 +#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_OFFSET 16 /* driver supports virtual link parameter */ -#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_VLINK 0x00010000 +#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_VLINK 0x00010000 + +/* DRV_MSG_CODE_DEBUG_DATA_SEND parameters */ +#define DRV_MSG_CODE_DEBUG_DATA_SEND_SIZE_OFFSET 0 +#define DRV_MSG_CODE_DEBUG_DATA_SEND_SIZE_MASK 0xFF + /* Driver attributes params */ -#define DRV_MB_PARAM_ATTRIBUTE_KEY_OFFSET 0 +#define DRV_MB_PARAM_ATTRIBUTE_KEY_OFFSET 0 #define DRV_MB_PARAM_ATTRIBUTE_KEY_MASK 0x00FFFFFF #define DRV_MB_PARAM_ATTRIBUTE_CMD_OFFSET 24 #define DRV_MB_PARAM_ATTRIBUTE_CMD_MASK 0xFF000000 #define DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET 0 -/* Option# */ -#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK 0x0000FFFF +#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK 0x0000FFFF /* Option# */ +#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_IGNORE 0x0000FFFF #define DRV_MB_PARAM_NVM_CFG_OPTION_ALL_OFFSET 16 -/* (Only for Set) Applies option's value to all entities (port/func) - * depending on the option type - */ +/* (Only for Set) Applies option's value to all entities (port/func) depending on the option type */ #define DRV_MB_PARAM_NVM_CFG_OPTION_ALL_MASK 0x00010000 #define DRV_MB_PARAM_NVM_CFG_OPTION_INIT_OFFSET 17 -/* When set, and state is IDLE, MFW will allocate resources and load - * configuration from NVM - */ +/* When set, and state is IDLE, MFW will allocate resources and load configuration from NVM */ #define DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK 0x00020000 #define DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_OFFSET 18 /* (Only for Set) - When set submit changed nvm_cfg1 to flash */ @@ -1705,162 +2072,247 @@ struct public_drv_mb { #define DRV_MB_PARAM_NVM_CFG_OPTION_FREE_OFFSET 19 /* Free - When set, free allocated resources, and return to IDLE state.
*/ #define DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK 0x00080000 -#define SINGLE_NVM_WR_OP(optionId) \ - ((((optionId) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \ - DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | \ - (DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK | \ - DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | \ +#define DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_SEL_OFFSET 20 +/* When set, command applies to the entity (func/port) in the ENTITY_ID field */ +#define DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_SEL_MASK 0x00100000 +#define DRV_MB_PARAM_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL_OFFSET 21 +/* When set, the NVRAM is restored to its defaults */ +#define DRV_MB_PARAM_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL_MASK 0x00200000 +#define DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_ID_OFFSET 24 +/* When ENTITY_SEL is set, the command applies to this entity ID (func/port) selection */ +#define DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_ID_MASK 0x0f000000 + +#define SINGLE_NVM_WR_OP(option_id) \ + ((((option_id) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \ + DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | (DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK | \ + DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK)) +#define BEGIN_NVM_WR_OP(option_id) \ + ((((option_id) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \ + DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | (DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK)) +#define CONT_NVM_WR_OP(option_id) \ + ((((option_id) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \ + DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET)) +#define END_NVM_WR_OP(option_id) \ + ((((option_id) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \ + DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | (DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | \ DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK))
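/* Write-op staging implied by the flag bits above: SINGLE_NVM_WR_OP()
 * sets INIT, COMMIT and FREE together, so a single mailbox command
 * allocates resources, flashes the option and returns the state machine
 * to IDLE, while a multi-buffer update brackets CONT_NVM_WR_OP()
 * commands between BEGIN_NVM_WR_OP() (INIT only) and END_NVM_WR_OP()
 * (COMMIT | FREE).
 */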
- u32 fw_mb_header; -#define FW_MSG_CODE_UNSUPPORTED 0x00000000 -#define FW_MSG_CODE_DRV_LOAD_ENGINE 0x10100000 -#define FW_MSG_CODE_DRV_LOAD_PORT 0x10110000 -#define FW_MSG_CODE_DRV_LOAD_FUNCTION 0x10120000 -#define FW_MSG_CODE_DRV_LOAD_REFUSED_PDA 0x10200000 -#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1 0x10210000 -#define FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG 0x10220000 -#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI 0x10230000 -#define FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE 0x10300000 -#define FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT 0x10310000 -#define FW_MSG_CODE_DRV_LOAD_DONE 0x11100000 -#define FW_MSG_CODE_DRV_UNLOAD_ENGINE 0x20110000 -#define FW_MSG_CODE_DRV_UNLOAD_PORT 0x20120000 -#define FW_MSG_CODE_DRV_UNLOAD_FUNCTION 0x20130000 -#define FW_MSG_CODE_DRV_UNLOAD_DONE 0x21100000 -#define FW_MSG_CODE_INIT_PHY_DONE 0x21200000 -#define FW_MSG_CODE_INIT_PHY_ERR_INVALID_ARGS 0x21300000 -#define FW_MSG_CODE_LINK_RESET_DONE 0x23000000 -#define FW_MSG_CODE_SET_LLDP_DONE 0x24000000 -#define FW_MSG_CODE_SET_LLDP_UNSUPPORTED_AGENT 0x24010000 -#define FW_MSG_CODE_REGISTER_LLDP_TLVS_RX_DONE 0x24100000 -#define FW_MSG_CODE_SET_DCBX_DONE 0x25000000 -#define FW_MSG_CODE_UPDATE_CURR_CFG_DONE 0x26000000 -#define FW_MSG_CODE_UPDATE_BUS_NUM_DONE 0x27000000 -#define FW_MSG_CODE_UPDATE_BOOT_PROGRESS_DONE 0x28000000 -#define FW_MSG_CODE_UPDATE_STORM_FW_VER_DONE 0x29000000 -#define FW_MSG_CODE_UPDATE_DRIVER_STATE_DONE 0x31000000 -#define FW_MSG_CODE_DRV_MSG_CODE_BW_UPDATE_DONE 0x32000000 -#define FW_MSG_CODE_DRV_MSG_CODE_MTU_SIZE_DONE 0x33000000 -#define FW_MSG_CODE_RESOURCE_ALLOC_OK 0x34000000 -#define FW_MSG_CODE_RESOURCE_ALLOC_UNKNOWN 0x35000000 -#define FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED 0x36000000 -#define FW_MSG_CODE_RESOURCE_ALLOC_GEN_ERR 0x37000000 -#define FW_MSG_CODE_GET_OEM_UPDATES_DONE 0x41000000 - -#define FW_MSG_CODE_NIG_DRAIN_DONE 0x30000000 -#define FW_MSG_CODE_VF_DISABLED_DONE 0xb0000000 -#define FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE 0xb0010000 -#define FW_MSG_CODE_INITIATE_VF_FLR_OK 0xb0030000 -#define FW_MSG_CODE_ERR_RESOURCE_TEMPORARY_UNAVAILABLE 0x008b0000 -#define FW_MSG_CODE_ERR_RESOURCE_ALREADY_ALLOCATED 0x008c0000 -#define FW_MSG_CODE_ERR_RESOURCE_NOT_ALLOCATED 0x008d0000 -#define FW_MSG_CODE_ERR_NON_USER_OPTION 0x008e0000 -#define FW_MSG_CODE_ERR_UNKNOWN_OPTION 0x008f0000 -#define FW_MSG_CODE_WAIT 0x00900000 -#define FW_MSG_CODE_FLR_ACK 0x02000000 -#define FW_MSG_CODE_FLR_NACK 0x02100000 -#define FW_MSG_CODE_SET_DRIVER_DONE 0x02200000 -#define FW_MSG_CODE_SET_VMAC_SUCCESS 0x02300000 -#define FW_MSG_CODE_SET_VMAC_FAIL 0x02400000 - -#define FW_MSG_CODE_NVM_OK 0x00010000 -#define FW_MSG_CODE_NVM_INVALID_MODE 0x00020000 -#define FW_MSG_CODE_NVM_PREV_CMD_WAS_NOT_FINISHED 0x00030000 -#define FW_MSG_CODE_NVM_FAILED_TO_ALLOCATE_PAGE 0x00040000 -#define FW_MSG_CODE_NVM_INVALID_DIR_FOUND 0x00050000 -#define FW_MSG_CODE_NVM_PAGE_NOT_FOUND 0x00060000 -#define FW_MSG_CODE_NVM_FAILED_PARSING_BNDLE_HEADER 0x00070000 -#define FW_MSG_CODE_NVM_FAILED_PARSING_IMAGE_HEADER 0x00080000 -#define FW_MSG_CODE_NVM_PARSING_OUT_OF_SYNC 0x00090000 -#define FW_MSG_CODE_NVM_FAILED_UPDATING_DIR 0x000a0000 -#define FW_MSG_CODE_NVM_FAILED_TO_FREE_PAGE 0x000b0000 -#define FW_MSG_CODE_NVM_FILE_NOT_FOUND 0x000c0000 -#define FW_MSG_CODE_NVM_OPERATION_FAILED 0x000d0000 -#define FW_MSG_CODE_NVM_FAILED_UNALIGNED 0x000e0000 -#define FW_MSG_CODE_NVM_BAD_OFFSET 0x000f0000 -#define FW_MSG_CODE_NVM_BAD_SIGNATURE 0x00100000 -#define FW_MSG_CODE_NVM_FILE_READ_ONLY 0x00200000 -#define FW_MSG_CODE_NVM_UNKNOWN_FILE 0x00300000 -#define FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK 0x00400000 +#define DEFAULT_RESTORE_NVM_OP() \ + (DRV_MB_PARAM_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL_MASK | \ + DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK | DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | \ + DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK) + + +/* DRV_MSG_CODE_GET_PERM_MAC parameters */ +#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_OFFSET 0 +#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_MASK 0xF +#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_PF 0 +#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_BMC 1 +#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_VF 2 +#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_LLDP 3 +#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_MAX 4 +#define DRV_MSG_CODE_GET_PERM_MAC_INDEX_OFFSET 8 +#define DRV_MSG_CODE_GET_PERM_MAC_INDEX_MASK 0xFFFF00 + +/***************************************************************/ +/* Firmware Message Code (Response) */ +/***************************************************************/ + +#define FW_MSG_CODE(_code_) ((_code_) << FW_MSG_CODE_OFFSET)
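/* Response layout implied by the FW_MSG_CODE_MASK (0xffff0000) and
 * FW_MSG_SEQ_NUMBER_MASK (0x0000ffff) defines this patch drops further
 * down: the response code occupies the upper 16 bits of fw_mb_header and
 * the low 16 bits echo the command sequence number. FW_MSG_CODE()
 * therefore shifts a 16-bit code by FW_MSG_CODE_OFFSET (evidently still
 * defined elsewhere) so the enum values below can be compared against a
 * masked fw_mb_header.
 */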
+enum fw_msg_code_enum { + FW_MSG_CODE_UNSUPPORTED = FW_MSG_CODE(0x0000), + FW_MSG_CODE_NVM_OK = FW_MSG_CODE(0x0001), + FW_MSG_CODE_NVM_INVALID_MODE = FW_MSG_CODE(0x0002), + FW_MSG_CODE_NVM_PREV_CMD_WAS_NOT_FINISHED = FW_MSG_CODE(0x0003), + FW_MSG_CODE_ATTRIBUTE_INVALID_KEY = FW_MSG_CODE(0x0002), + FW_MSG_CODE_ATTRIBUTE_INVALID_CMD = FW_MSG_CODE(0x0003), + FW_MSG_CODE_NVM_FAILED_TO_ALLOCATE_PAGE = FW_MSG_CODE(0x0004), + FW_MSG_CODE_NVM_INVALID_DIR_FOUND = FW_MSG_CODE(0x0005), + FW_MSG_CODE_NVM_PAGE_NOT_FOUND = FW_MSG_CODE(0x0006), + FW_MSG_CODE_NVM_FAILED_PARSING_BNDLE_HEADER = FW_MSG_CODE(0x0007), + FW_MSG_CODE_NVM_FAILED_PARSING_IMAGE_HEADER = FW_MSG_CODE(0x0008), + FW_MSG_CODE_NVM_PARSING_OUT_OF_SYNC = FW_MSG_CODE(0x0009), + FW_MSG_CODE_NVM_FAILED_UPDATING_DIR = FW_MSG_CODE(0x000a), + FW_MSG_CODE_NVM_FAILED_TO_FREE_PAGE = FW_MSG_CODE(0x000b), + FW_MSG_CODE_NVM_FILE_NOT_FOUND = FW_MSG_CODE(0x000c), + FW_MSG_CODE_NVM_OPERATION_FAILED = FW_MSG_CODE(0x000d), + FW_MSG_CODE_NVM_FAILED_UNALIGNED = FW_MSG_CODE(0x000e), + FW_MSG_CODE_NVM_BAD_OFFSET = FW_MSG_CODE(0x000f), + FW_MSG_CODE_NVM_BAD_SIGNATURE = FW_MSG_CODE(0x0010), + FW_MSG_CODE_NVM_OPERATION_NOT_STARTED = FW_MSG_CODE(0x0011), + FW_MSG_CODE_NVM_FILE_READ_ONLY = FW_MSG_CODE(0x0020), + FW_MSG_CODE_NVM_UNKNOWN_FILE = FW_MSG_CODE(0x0030), + FW_MSG_CODE_NVM_FAILED_CALC_HASH = FW_MSG_CODE(0x0031), + FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING = FW_MSG_CODE(0x0032), + FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY = FW_MSG_CODE(0x0033), + FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK = FW_MSG_CODE(0x0040), + FW_MSG_CODE_UNVERIFIED_KEY_CHAIN = FW_MSG_CODE(0x0050), /* MFW rejects the "mcp reset" command if one of the drivers is up */ -#define FW_MSG_CODE_MCP_RESET_REJECT 0x00600000 -#define FW_MSG_CODE_NVM_FAILED_CALC_HASH 0x00310000 -#define FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING 0x00320000 -#define FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY 0x00330000 - -#define FW_MSG_CODE_PHY_OK 0x00110000 -#define FW_MSG_CODE_PHY_ERROR 0x00120000 -#define FW_MSG_CODE_SET_SECURE_MODE_ERROR 0x00130000 -#define FW_MSG_CODE_SET_SECURE_MODE_OK 0x00140000 -#define FW_MSG_MODE_PHY_PRIVILEGE_ERROR 0x00150000 -#define FW_MSG_CODE_OK 0x00160000 -#define FW_MSG_CODE_ERROR 0x00170000 -#define FW_MSG_CODE_LED_MODE_INVALID 0x00170000 -#define FW_MSG_CODE_PHY_DIAG_OK 0x00160000 -#define FW_MSG_CODE_PHY_DIAG_ERROR 0x00170000 -#define FW_MSG_CODE_INIT_HW_FAILED_TO_ALLOCATE_PAGE 0x00040000 -#define FW_MSG_CODE_INIT_HW_FAILED_BAD_STATE 0x00170000 -#define FW_MSG_CODE_INIT_HW_FAILED_TO_SET_WINDOW 0x000d0000 -#define FW_MSG_CODE_INIT_HW_FAILED_NO_IMAGE 0x000c0000 -#define FW_MSG_CODE_INIT_HW_FAILED_VERSION_MISMATCH 0x00100000 -#define FW_MSG_CODE_TRANSCEIVER_DIAG_OK 0x00160000 -#define FW_MSG_CODE_TRANSCEIVER_DIAG_ERROR 0x00170000 -#define FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT 0x00020000 -#define FW_MSG_CODE_TRANSCEIVER_BAD_BUFFER_SIZE 0x000f0000 -#define FW_MSG_CODE_GPIO_OK 0x00160000 -#define FW_MSG_CODE_GPIO_DIRECTION_ERR 0x00170000 -#define FW_MSG_CODE_GPIO_CTRL_ERR 0x00020000 -#define FW_MSG_CODE_GPIO_INVALID 0x000f0000 -#define FW_MSG_CODE_GPIO_INVALID_VALUE 0x00050000 -#define FW_MSG_CODE_BIST_TEST_INVALID 0x000f0000 -#define FW_MSG_CODE_EXTPHY_INVALID_IMAGE_HEADER 0x00700000 -#define FW_MSG_CODE_EXTPHY_INVALID_PHY_TYPE 0x00710000 -#define FW_MSG_CODE_EXTPHY_OPERATION_FAILED 0x00720000 -#define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED 0x00730000 -#define FW_MSG_CODE_RECOVERY_MODE 0x00740000 - - /* mdump related response codes */ -#define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND 0x00010000 -#define FW_MSG_CODE_MDUMP_ALLOC_FAILED 0x00020000 -#define FW_MSG_CODE_MDUMP_INVALID_CMD 0x00030000 -#define FW_MSG_CODE_MDUMP_IN_PROGRESS 0x00040000 -#define FW_MSG_CODE_MDUMP_WRITE_FAILED 0x00050000 - - -#define FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_DONE 0x00870000 -#define FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_BAD_ASIC 0x00880000 - -#define FW_MSG_CODE_WOL_READ_WRITE_OK 0x00820000 -#define FW_MSG_CODE_WOL_READ_WRITE_INVALID_VAL 0x00830000 -#define FW_MSG_CODE_WOL_READ_WRITE_INVALID_ADDR 0x00840000 -#define FW_MSG_CODE_WOL_READ_BUFFER_OK 0x00850000 -#define FW_MSG_CODE_WOL_READ_BUFFER_INVALID_VAL 0x00860000 - -#define FW_MSG_CODE_ATTRIBUTE_INVALID_KEY 0x00020000 -#define FW_MSG_CODE_ATTRIBUTE_INVALID_CMD 0x00030000 + FW_MSG_CODE_MCP_RESET_REJECT = FW_MSG_CODE(0x0060), + FW_MSG_CODE_PHY_OK = FW_MSG_CODE(0x0011), + FW_MSG_CODE_PHY_ERROR = FW_MSG_CODE(0x0012), +
FW_MSG_CODE_SET_SECURE_MODE_ERROR = FW_MSG_CODE(0x0013), + FW_MSG_CODE_SET_SECURE_MODE_OK = FW_MSG_CODE(0x0014), + FW_MSG_MODE_PHY_PRIVILEGE_ERROR = FW_MSG_CODE(0x0015), + FW_MSG_CODE_OK = FW_MSG_CODE(0x0016), + FW_MSG_CODE_ERROR = FW_MSG_CODE(0x0017), + FW_MSG_CODE_LED_MODE_INVALID = FW_MSG_CODE(0x0017), + FW_MSG_CODE_PHY_DIAG_OK = FW_MSG_CODE(0x0016), + FW_MSG_CODE_PHY_DIAG_ERROR = FW_MSG_CODE(0x0017), + FW_MSG_CODE_INIT_HW_FAILED_TO_ALLOCATE_PAGE = FW_MSG_CODE(0x0004), + FW_MSG_CODE_INIT_HW_FAILED_BAD_STATE = FW_MSG_CODE(0x0017), + FW_MSG_CODE_INIT_HW_FAILED_TO_SET_WINDOW = FW_MSG_CODE(0x000d), + FW_MSG_CODE_INIT_HW_FAILED_NO_IMAGE = FW_MSG_CODE(0x000c), + FW_MSG_CODE_INIT_HW_FAILED_VERSION_MISMATCH = FW_MSG_CODE(0x0010), + FW_MSG_CODE_INIT_HW_FAILED_UNALLOWED_ADDR = FW_MSG_CODE(0x0092), + FW_MSG_CODE_TRANSCEIVER_DIAG_OK = FW_MSG_CODE(0x0016), + FW_MSG_CODE_TRANSCEIVER_DIAG_ERROR = FW_MSG_CODE(0x0017), + FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT = FW_MSG_CODE(0x0002), + FW_MSG_CODE_TRANSCEIVER_BAD_BUFFER_SIZE = FW_MSG_CODE(0x000f), + FW_MSG_CODE_GPIO_OK = FW_MSG_CODE(0x0016), + FW_MSG_CODE_GPIO_DIRECTION_ERR = FW_MSG_CODE(0x0017), + FW_MSG_CODE_GPIO_CTRL_ERR = FW_MSG_CODE(0x0002), + FW_MSG_CODE_GPIO_INVALID = FW_MSG_CODE(0x000f), + FW_MSG_CODE_GPIO_INVALID_VALUE = FW_MSG_CODE(0x0005), + FW_MSG_CODE_BIST_TEST_INVALID = FW_MSG_CODE(0x000f), + FW_MSG_CODE_EXTPHY_INVALID_IMAGE_HEADER = FW_MSG_CODE(0x0070), + FW_MSG_CODE_EXTPHY_INVALID_PHY_TYPE = FW_MSG_CODE(0x0071), + FW_MSG_CODE_EXTPHY_OPERATION_FAILED = FW_MSG_CODE(0x0072), + FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED = FW_MSG_CODE(0x0073), + FW_MSG_CODE_RECOVERY_MODE = FW_MSG_CODE(0x0074), + FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND = FW_MSG_CODE(0x0001), + FW_MSG_CODE_MDUMP_ALLOC_FAILED = FW_MSG_CODE(0x0002), + FW_MSG_CODE_MDUMP_INVALID_CMD = FW_MSG_CODE(0x0003), + FW_MSG_CODE_MDUMP_IN_PROGRESS = FW_MSG_CODE(0x0004), + FW_MSG_CODE_MDUMP_WRITE_FAILED = FW_MSG_CODE(0x0005), + FW_MSG_CODE_OS_WOL_SUPPORTED = FW_MSG_CODE(0x0080), + FW_MSG_CODE_OS_WOL_NOT_SUPPORTED = FW_MSG_CODE(0x0081), + FW_MSG_CODE_WOL_READ_WRITE_OK = FW_MSG_CODE(0x0082), + FW_MSG_CODE_WOL_READ_WRITE_INVALID_VAL = FW_MSG_CODE(0x0083), + FW_MSG_CODE_WOL_READ_WRITE_INVALID_ADDR = FW_MSG_CODE(0x0084), + FW_MSG_CODE_WOL_READ_BUFFER_OK = FW_MSG_CODE(0x0085), + FW_MSG_CODE_WOL_READ_BUFFER_INVALID_VAL = FW_MSG_CODE(0x0086), + FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_DONE = FW_MSG_CODE(0x0087), + FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_BAD_ASIC = FW_MSG_CODE(0x0088), + FW_MSG_CODE_RETAIN_VMAC_SUCCESS = FW_MSG_CODE(0x0089), + FW_MSG_CODE_RETAIN_VMAC_FAIL = FW_MSG_CODE(0x008a), + FW_MSG_CODE_ERR_RESOURCE_TEMPORARY_UNAVAILABLE = FW_MSG_CODE(0x008b), + FW_MSG_CODE_ERR_RESOURCE_ALREADY_ALLOCATED = FW_MSG_CODE(0x008c), + FW_MSG_CODE_ERR_RESOURCE_NOT_ALLOCATED = FW_MSG_CODE(0x008d), + FW_MSG_CODE_ERR_NON_USER_OPTION = FW_MSG_CODE(0x008e), + FW_MSG_CODE_ERR_UNKNOWN_OPTION = FW_MSG_CODE(0x008f), + FW_MSG_CODE_WAIT = FW_MSG_CODE(0x0090), + FW_MSG_CODE_BUSY = FW_MSG_CODE(0x0091), + FW_MSG_CODE_FLR_ACK = FW_MSG_CODE(0x0200), + FW_MSG_CODE_FLR_NACK = FW_MSG_CODE(0x0210), + FW_MSG_CODE_SET_DRIVER_DONE = FW_MSG_CODE(0x0220), + FW_MSG_CODE_SET_VMAC_SUCCESS = FW_MSG_CODE(0x0230), + FW_MSG_CODE_SET_VMAC_FAIL = FW_MSG_CODE(0x0240), + FW_MSG_CODE_DRV_LOAD_ENGINE = FW_MSG_CODE(0x1010), + FW_MSG_CODE_DRV_LOAD_PORT = FW_MSG_CODE(0x1011), + FW_MSG_CODE_DRV_LOAD_FUNCTION = FW_MSG_CODE(0x1012), + FW_MSG_CODE_DRV_LOAD_REFUSED_PDA = FW_MSG_CODE(0x1020), + FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1 = FW_MSG_CODE(0x1021), + FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG 
= FW_MSG_CODE(0x1022), + FW_MSG_CODE_DRV_LOAD_REFUSED_HSI = FW_MSG_CODE(0x1023), + FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE = FW_MSG_CODE(0x1030), + FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT = FW_MSG_CODE(0x1031), + FW_MSG_CODE_DRV_LOAD_DONE = FW_MSG_CODE(0x1110), + FW_MSG_CODE_DRV_UNLOAD_ENGINE = FW_MSG_CODE(0x2011), + FW_MSG_CODE_DRV_UNLOAD_PORT = FW_MSG_CODE(0x2012), + FW_MSG_CODE_DRV_UNLOAD_FUNCTION = FW_MSG_CODE(0x2013), + FW_MSG_CODE_DRV_UNLOAD_DONE = FW_MSG_CODE(0x2110), + FW_MSG_CODE_INIT_PHY_DONE = FW_MSG_CODE(0x2120), + FW_MSG_CODE_INIT_PHY_ERR_INVALID_ARGS = FW_MSG_CODE(0x2130), + FW_MSG_CODE_LINK_RESET_DONE = FW_MSG_CODE(0x2300), + FW_MSG_CODE_SET_LLDP_DONE = FW_MSG_CODE(0x2400), + FW_MSG_CODE_SET_LLDP_UNSUPPORTED_AGENT = FW_MSG_CODE(0x2401), + FW_MSG_CODE_ERROR_EMBEDDED_LLDP_IS_DISABLED = FW_MSG_CODE(0x2402), + FW_MSG_CODE_REGISTER_LLDP_TLVS_RX_DONE = FW_MSG_CODE(0x2410), + FW_MSG_CODE_SET_DCBX_DONE = FW_MSG_CODE(0x2500), + FW_MSG_CODE_UPDATE_CURR_CFG_DONE = FW_MSG_CODE(0x2600), + FW_MSG_CODE_UPDATE_BUS_NUM_DONE = FW_MSG_CODE(0x2700), + FW_MSG_CODE_UPDATE_BOOT_PROGRESS_DONE = FW_MSG_CODE(0x2800), + FW_MSG_CODE_UPDATE_STORM_FW_VER_DONE = FW_MSG_CODE(0x2900), + FW_MSG_CODE_UPDATE_DRIVER_STATE_DONE = FW_MSG_CODE(0x3100), + FW_MSG_CODE_DRV_MSG_CODE_BW_UPDATE_DONE = FW_MSG_CODE(0x3200), + FW_MSG_CODE_DRV_MSG_CODE_MTU_SIZE_DONE = FW_MSG_CODE(0x3300), + FW_MSG_CODE_RESOURCE_ALLOC_OK = FW_MSG_CODE(0x3400), + FW_MSG_CODE_RESOURCE_ALLOC_UNKNOWN = FW_MSG_CODE(0x3500), + FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED = FW_MSG_CODE(0x3600), + FW_MSG_CODE_RESOURCE_ALLOC_GEN_ERR = FW_MSG_CODE(0x3700), + FW_MSG_CODE_UPDATE_WOL_DONE = FW_MSG_CODE(0x3800), + FW_MSG_CODE_UPDATE_ESWITCH_MODE_DONE = FW_MSG_CODE(0x3900), + FW_MSG_CODE_UPDATE_ERR = FW_MSG_CODE(0x3a01), + FW_MSG_CODE_UPDATE_PARAM_ERR = FW_MSG_CODE(0x3a02), + FW_MSG_CODE_UPDATE_NOT_ALLOWED = FW_MSG_CODE(0x3a03), + FW_MSG_CODE_S_TAG_UPDATE_ACK_DONE = FW_MSG_CODE(0x3b00), + FW_MSG_CODE_UPDATE_FCOE_CVID_DONE = FW_MSG_CODE(0x3c00), + FW_MSG_CODE_UPDATE_FCOE_FABRIC_NAME_DONE = FW_MSG_CODE(0x3d00), + FW_MSG_CODE_UPDATE_BOOT_CFG_DONE = FW_MSG_CODE(0x3e00), + FW_MSG_CODE_RESET_TO_DEFAULT_ACK = FW_MSG_CODE(0x3f00), + FW_MSG_CODE_OV_GET_CURR_CFG_DONE = FW_MSG_CODE(0x4000), + FW_MSG_CODE_GET_OEM_UPDATES_DONE = FW_MSG_CODE(0x4100), + FW_MSG_CODE_GET_LLDP_STATS_DONE = FW_MSG_CODE(0x4200), + FW_MSG_CODE_GET_LLDP_STATS_ERROR = FW_MSG_CODE(0x4201), + FW_MSG_CODE_NIG_DRAIN_DONE = FW_MSG_CODE(0x3000), + FW_MSG_CODE_VF_DISABLED_DONE = FW_MSG_CODE(0xb000), + FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE = FW_MSG_CODE(0xb001), + FW_MSG_CODE_INITIATE_VF_FLR_BAD_PARAM = FW_MSG_CODE(0xb002), + FW_MSG_CODE_INITIATE_VF_FLR_OK = FW_MSG_CODE(0xb003), + FW_MSG_CODE_IDC_BUSY = FW_MSG_CODE(0x0001), + FW_MSG_CODE_GET_PERM_MAC_WRONG_TYPE = FW_MSG_CODE(0xb004), + FW_MSG_CODE_GET_PERM_MAC_WRONG_INDEX = FW_MSG_CODE(0xb005), + FW_MSG_CODE_GET_PERM_MAC_OK = FW_MSG_CODE(0xb006), + FW_MSG_CODE_DEBUG_DATA_SEND_INV_ARG = FW_MSG_CODE(0xb007), + FW_MSG_CODE_DEBUG_DATA_SEND_BUF_FULL = FW_MSG_CODE(0xb008), + FW_MSG_CODE_DEBUG_DATA_SEND_NO_BUF = FW_MSG_CODE(0xb009), + FW_MSG_CODE_DEBUG_NOT_ENABLED = FW_MSG_CODE(0xb00a), + FW_MSG_CODE_DEBUG_DATA_SEND_OK = FW_MSG_CODE(0xb00b), +}; -#define FW_MSG_SEQ_NUMBER_MASK 0x0000ffff -#define FW_MSG_SEQ_NUMBER_OFFSET 0 -#define FW_MSG_CODE_MASK 0xffff0000 -#define FW_MSG_CODE_OFFSET 16 - u32 fw_mb_param; -/* Resource Allocation params - MFW version support */ +/* Resource Allocation params - MFW version support */ #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK 
0xFFFF0000 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_OFFSET 16 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK 0x0000FFFF #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_OFFSET 0 +/* get pf rdma protocol command response */ +#define FW_MB_PARAM_GET_PF_RDMA_NONE 0x0 +#define FW_MB_PARAM_GET_PF_RDMA_ROCE 0x1 +#define FW_MB_PARAM_GET_PF_RDMA_IWARP 0x2 +#define FW_MB_PARAM_GET_PF_RDMA_BOTH 0x3 + /* get MFW feature support response */ /* MFW supports SmartLinQ */ -#define FW_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ 0x00000001 -/* MFW supports EEE */ -#define FW_MB_PARAM_FEATURE_SUPPORT_EEE 0x00000002 +#define FW_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ (1 << 0) +#define FW_MB_PARAM_FEATURE_SUPPORT_EEE (1 << 1) /* MFW supports EEE */ /* MFW supports DRV_LOAD Timeout */ -#define FW_MB_PARAM_FEATURE_SUPPORT_DRV_LOAD_TO 0x00000004 -/* MFW support complete IGU cleanup upon FLR */ -#define FW_MB_PARAM_FEATURE_SUPPORT_IGU_CLEANUP 0x00000080 +#define FW_MB_PARAM_FEATURE_SUPPORT_DRV_LOAD_TO (1 << 2) +/* MFW supports early detection of LP Presence */ +#define FW_MB_PARAM_FEATURE_SUPPORT_LP_PRES_DET (1 << 3) +/* MFW supports relaxed ordering setting */ +#define FW_MB_PARAM_FEATURE_SUPPORT_RELAXED_ORD (1 << 4) +/* MFW supports FEC control by driver */ +#define FW_MB_PARAM_FEATURE_SUPPORT_FEC_CONTROL (1 << 5) +/* MFW supports extended speed and FEC Control parameters */ +#define FW_MB_PARAM_FEATURE_SUPPORT_EXT_SPEED_FEC_CONTROL (1 << 6) +/* MFW supports complete IGU cleanup upon FLR */ +#define FW_MB_PARAM_FEATURE_SUPPORT_IGU_CLEANUP (1 << 7) +/* MFW supports VF DPM (bug fixes) */ +#define FW_MB_PARAM_FEATURE_SUPPORT_VF_DPM (1 << 8) +/* MFW supports idleChk collection */ +#define FW_MB_PARAM_FEATURE_SUPPORT_IDLE_CHK (1 << 9) /* MFW supports virtual link */ -#define FW_MB_PARAM_FEATURE_SUPPORT_VLINK 0x00010000 +#define FW_MB_PARAM_FEATURE_SUPPORT_VLINK (1 << 16) +/* MFW supports disabling embedded LLDP */ +#define FW_MB_PARAM_FEATURE_SUPPORT_DISABLE_LLDP (1 << 17) +/* MFW supports ESL : Enhanced System Lockdown */ +#define FW_MB_PARAM_FEATURE_SUPPORT_ENHANCED_SYS_LCK (1 << 18) +/* MFW supports restoring default values of NVM_CFG image */ +#define FW_MB_PARAM_FEATURE_SUPPORT_RESTORE_DEFAULT_CFG (1 << 19) + +/* get MFW management status response */ +#define FW_MB_PARAM_MANAGEMENT_STATUS_LOCKDOWN_ENABLED 0x00000001 /* MFW ESL is active */ #define FW_MB_PARAM_LOAD_DONE_DID_EFUSE_ERROR (1 << 0) @@ -1882,34 +2334,10 @@ struct public_drv_mb { #define FW_MB_PARAM_PPFID_BITMAP_MASK 0xFF #define FW_MB_PARAM_PPFID_BITMAP_OFFSET 0 - u32 drv_pulse_mb; -#define DRV_PULSE_SEQ_MASK 0x00007fff -#define DRV_PULSE_SYSTEM_TIME_MASK 0xffff0000 - /* - * The system time is in the format of - * (year-2001)*12*32 + month*32 + day. - */ -#define DRV_PULSE_ALWAYS_ALIVE 0x00008000 - /* - * Indicate to the firmware not to go into the - * OS-absent when it is not getting driver pulse. - * This is used for debugging as well for PXE(MBA). - */ - - u32 mcp_pulse_mb; -#define MCP_PULSE_SEQ_MASK 0x00007fff -#define MCP_PULSE_ALWAYS_ALIVE 0x00008000 - /* Indicates to the driver not to assert due to lack - * of MCP response - */ -#define MCP_EVENT_MASK 0xffff0000 -#define MCP_EVENT_OTHER_DRIVER_RESET_REQ 0x00010000 - -/* The union data is used by the driver to pass parameters to the scratchpad. 
*/ - - union drv_union_data union_data; - -}; +#define FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET_MASK 0x00ffffff +#define FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET_OFFSET 0 +#define FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE_MASK 0xff000000 +#define FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE_OFFSET 24 /* MFW - DRV MB */ /********************************************************************** @@ -1948,6 +2376,12 @@ enum MFW_DRV_MSG_TYPE { MFW_DRV_MSG_GET_TLV_REQ, MFW_DRV_MSG_OEM_CFG_UPDATE, MFW_DRV_MSG_LLDP_RECEIVED_TLVS_UPDATED, + MFW_DRV_MSG_GENERIC_IDC, /* Generic Inter Driver Communication message */ + MFW_DRV_MSG_XCVR_TX_FAULT, + MFW_DRV_MSG_XCVR_RX_LOS, + MFW_DRV_MSG_GET_FCOE_CAP, + MFW_DRV_MSG_GEN_LINK_DUMP, + MFW_DRV_MSG_GEN_IDLE_CHK, MFW_DRV_MSG_MAX }; @@ -1958,31 +2392,33 @@ enum MFW_DRV_MSG_TYPE { #ifdef BIG_ENDIAN /* Like MFW */ #define DRV_ACK_MSG(msg_p, msg_id) \ -((u8)((u8 *)msg_p)[msg_id]++;) + do { \ + (u8)((u8 *)msg_p)[msg_id]++; \ + } while (0) #else #define DRV_ACK_MSG(msg_p, msg_id) \ -((u8)((u8 *)msg_p)[((msg_id & ~3) | ((~msg_id) & 3))]++;) + do { \ + (u8)((u8 *)msg_p)[((msg_id & ~3) | ((~msg_id) & 3))]++; \ + } while (0) #endif #define MFW_DRV_UPDATE(shmem_func, msg_id) \ -((u8)((u8 *)(MFW_MB_P(shmem_func)->msg))[msg_id]++;) + do { \ + (u8)((u8 *)(MFW_MB_P(shmem_func)->msg))[msg_id]++; \ + } while (0) struct public_mfw_mb { - u32 sup_msgs; /* Assigend with MFW_DRV_MSG_MAX */ -/* Incremented by the MFW */ - u32 msg[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)]; -/* Incremented by the driver */ - u32 ack[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)]; + u32 sup_msgs; /* Assigned with MFW_DRV_MSG_MAX */ + u32 msg[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)]; /* Incremented by the MFW */ + u32 ack[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)]; /* Incremented by the driver */ }; /**************************************/ -/* */ -/* P U B L I C D A T A */ -/* */ +/* P U B L I C D A T A */ /**************************************/ enum public_sections { - PUBLIC_DRV_MB, /* Points to the first drv_mb of path0 */ - PUBLIC_MFW_MB, /* Points to the first mfw_mb of path0 */ + PUBLIC_DRV_MB, /* Points to the first drv_mb of path0 */ + PUBLIC_MFW_MB, /* Points to the first mfw_mb of path0 */ PUBLIC_GLOBAL, PUBLIC_PATH, PUBLIC_PORT, @@ -2020,4 +2456,325 @@ struct mcp_public_data { #define MAX_I2C_TRANSACTION_SIZE 16 #define MAX_I2C_TRANSCEIVER_PAGE_SIZE 256 +/* OCBB definitions */ +enum tlvs { + /* Category 1: Device Properties */ + DRV_TLV_CLP_STR, + DRV_TLV_CLP_STR_CTD, + /* Category 6: Device Configuration */ + DRV_TLV_SCSI_TO, + DRV_TLV_R_T_TOV, + DRV_TLV_R_A_TOV, + DRV_TLV_E_D_TOV, + DRV_TLV_CR_TOV, + DRV_TLV_BOOT_TYPE, + /* Category 8: Port Configuration */ + DRV_TLV_NPIV_ENABLED, + /* Category 10: Function Configuration */ + DRV_TLV_FEATURE_FLAGS, + DRV_TLV_LOCAL_ADMIN_ADDR, + DRV_TLV_ADDITIONAL_MAC_ADDR_1, + DRV_TLV_ADDITIONAL_MAC_ADDR_2, + DRV_TLV_LSO_MAX_OFFLOAD_SIZE, + DRV_TLV_LSO_MIN_SEGMENT_COUNT, + DRV_TLV_PROMISCUOUS_MODE, + DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE, + DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE, + DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG, + DRV_TLV_FLEX_NIC_OUTER_VLAN_ID, + DRV_TLV_OS_DRIVER_STATES, + DRV_TLV_PXE_BOOT_PROGRESS, + /* Category 12: FC/FCoE Configuration */ + DRV_TLV_NPIV_STATE, + DRV_TLV_NUM_OF_NPIV_IDS, + DRV_TLV_SWITCH_NAME, + DRV_TLV_SWITCH_PORT_NUM, + DRV_TLV_SWITCH_PORT_ID, + DRV_TLV_VENDOR_NAME, + DRV_TLV_SWITCH_MODEL, + DRV_TLV_SWITCH_FW_VER, + DRV_TLV_QOS_PRIORITY_PER_802_1P, + DRV_TLV_PORT_ALIAS, + DRV_TLV_PORT_STATE, + DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE, + DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE, + 
DRV_TLV_LINK_FAILURE_COUNT, + DRV_TLV_FCOE_BOOT_PROGRESS, + /* Category 13: iSCSI Configuration */ + DRV_TLV_TARGET_LLMNR_ENABLED, + DRV_TLV_HEADER_DIGEST_FLAG_ENABLED, + DRV_TLV_DATA_DIGEST_FLAG_ENABLED, + DRV_TLV_AUTHENTICATION_METHOD, + DRV_TLV_ISCSI_BOOT_TARGET_PORTAL, + DRV_TLV_MAX_FRAME_SIZE, + DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE, + DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE, + DRV_TLV_ISCSI_BOOT_PROGRESS, + /* Category 20: Device Data */ + DRV_TLV_PCIE_BUS_RX_UTILIZATION, + DRV_TLV_PCIE_BUS_TX_UTILIZATION, + DRV_TLV_DEVICE_CPU_CORES_UTILIZATION, + DRV_TLV_LAST_VALID_DCC_TLV_RECEIVED, + DRV_TLV_NCSI_RX_BYTES_RECEIVED, + DRV_TLV_NCSI_TX_BYTES_SENT, + /* Category 22: Base Port Data */ + DRV_TLV_RX_DISCARDS, + DRV_TLV_RX_ERRORS, + DRV_TLV_TX_ERRORS, + DRV_TLV_TX_DISCARDS, + DRV_TLV_RX_FRAMES_RECEIVED, + DRV_TLV_TX_FRAMES_SENT, + /* Category 23: FC/FCoE Port Data */ + DRV_TLV_RX_BROADCAST_PACKETS, + DRV_TLV_TX_BROADCAST_PACKETS, + /* Category 28: Base Function Data */ + DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4, + DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6, + DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH, + DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH, + DRV_TLV_PF_RX_FRAMES_RECEIVED, + DRV_TLV_RX_BYTES_RECEIVED, + DRV_TLV_PF_TX_FRAMES_SENT, + DRV_TLV_TX_BYTES_SENT, + DRV_TLV_IOV_OFFLOAD, + DRV_TLV_PCI_ERRORS_CAP_ID, + DRV_TLV_UNCORRECTABLE_ERROR_STATUS, + DRV_TLV_UNCORRECTABLE_ERROR_MASK, + DRV_TLV_CORRECTABLE_ERROR_STATUS, + DRV_TLV_CORRECTABLE_ERROR_MASK, + DRV_TLV_PCI_ERRORS_AECC_REGISTER, + DRV_TLV_TX_QUEUES_EMPTY, + DRV_TLV_RX_QUEUES_EMPTY, + DRV_TLV_TX_QUEUES_FULL, + DRV_TLV_RX_QUEUES_FULL, + /* Category 29: FC/FCoE Function Data */ + DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH, + DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH, + DRV_TLV_FCOE_RX_FRAMES_RECEIVED, + DRV_TLV_FCOE_RX_BYTES_RECEIVED, + DRV_TLV_FCOE_TX_FRAMES_SENT, + DRV_TLV_FCOE_TX_BYTES_SENT, + DRV_TLV_CRC_ERROR_COUNT, + DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID, + DRV_TLV_CRC_ERROR_1_TIMESTAMP, + DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID, + DRV_TLV_CRC_ERROR_2_TIMESTAMP, + DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID, + DRV_TLV_CRC_ERROR_3_TIMESTAMP, + DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID, + DRV_TLV_CRC_ERROR_4_TIMESTAMP, + DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID, + DRV_TLV_CRC_ERROR_5_TIMESTAMP, + DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT, + DRV_TLV_LOSS_OF_SIGNAL_ERRORS, + DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT, + DRV_TLV_DISPARITY_ERROR_COUNT, + DRV_TLV_CODE_VIOLATION_ERROR_COUNT, + DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1, + DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2, + DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3, + DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4, + DRV_TLV_LAST_FLOGI_TIMESTAMP, + DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1, + DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2, + DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3, + DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4, + DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP, + DRV_TLV_LAST_FLOGI_RJT, + DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP, + DRV_TLV_FDISCS_SENT_COUNT, + DRV_TLV_FDISC_ACCS_RECEIVED, + DRV_TLV_FDISC_RJTS_RECEIVED, + DRV_TLV_PLOGI_SENT_COUNT, + DRV_TLV_PLOGI_ACCS_RECEIVED, + DRV_TLV_PLOGI_RJTS_RECEIVED, + DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID, + DRV_TLV_PLOGI_1_TIMESTAMP, + DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID, + DRV_TLV_PLOGI_2_TIMESTAMP, + DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID, + DRV_TLV_PLOGI_3_TIMESTAMP, + DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID, + DRV_TLV_PLOGI_4_TIMESTAMP, + DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID, + 
DRV_TLV_PLOGI_5_TIMESTAMP, + DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID, + DRV_TLV_PLOGI_1_ACC_TIMESTAMP, + DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID, + DRV_TLV_PLOGI_2_ACC_TIMESTAMP, + DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID, + DRV_TLV_PLOGI_3_ACC_TIMESTAMP, + DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID, + DRV_TLV_PLOGI_4_ACC_TIMESTAMP, + DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID, + DRV_TLV_PLOGI_5_ACC_TIMESTAMP, + DRV_TLV_LOGOS_ISSUED, + DRV_TLV_LOGO_ACCS_RECEIVED, + DRV_TLV_LOGO_RJTS_RECEIVED, + DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID, + DRV_TLV_LOGO_1_TIMESTAMP, + DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID, + DRV_TLV_LOGO_2_TIMESTAMP, + DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID, + DRV_TLV_LOGO_3_TIMESTAMP, + DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID, + DRV_TLV_LOGO_4_TIMESTAMP, + DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID, + DRV_TLV_LOGO_5_TIMESTAMP, + DRV_TLV_LOGOS_RECEIVED, + DRV_TLV_ACCS_ISSUED, + DRV_TLV_PRLIS_ISSUED, + DRV_TLV_ACCS_RECEIVED, + DRV_TLV_ABTS_SENT_COUNT, + DRV_TLV_ABTS_ACCS_RECEIVED, + DRV_TLV_ABTS_RJTS_RECEIVED, + DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID, + DRV_TLV_ABTS_1_TIMESTAMP, + DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID, + DRV_TLV_ABTS_2_TIMESTAMP, + DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID, + DRV_TLV_ABTS_3_TIMESTAMP, + DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID, + DRV_TLV_ABTS_4_TIMESTAMP, + DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID, + DRV_TLV_ABTS_5_TIMESTAMP, + DRV_TLV_RSCNS_RECEIVED, + DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1, + DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2, + DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3, + DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4, + DRV_TLV_LUN_RESETS_ISSUED, + DRV_TLV_ABORT_TASK_SETS_ISSUED, + DRV_TLV_TPRLOS_SENT, + DRV_TLV_NOS_SENT_COUNT, + DRV_TLV_NOS_RECEIVED_COUNT, + DRV_TLV_OLS_COUNT, + DRV_TLV_LR_COUNT, + DRV_TLV_LRR_COUNT, + DRV_TLV_LIP_SENT_COUNT, + DRV_TLV_LIP_RECEIVED_COUNT, + DRV_TLV_EOFA_COUNT, + DRV_TLV_EOFNI_COUNT, + DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT, + DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT, + DRV_TLV_SCSI_STATUS_BUSY_COUNT, + DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT, + DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT, + DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT, + DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT, + DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT, + DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT, + DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ, + DRV_TLV_SCSI_CHECK_1_TIMESTAMP, + DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ, + DRV_TLV_SCSI_CHECK_2_TIMESTAMP, + DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ, + DRV_TLV_SCSI_CHECK_3_TIMESTAMP, + DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ, + DRV_TLV_SCSI_CHECK_4_TIMESTAMP, + DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ, + DRV_TLV_SCSI_CHECK_5_TIMESTAMP, + /* Category 30: iSCSI Function Data */ + DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH, + DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH, + DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED, + DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED, + DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT, + DRV_TLV_ISCSI_PDU_TX_BYTES_SENT, + DRV_TLV_RDMA_DRV_VERSION +}; + +#define I2C_DEV_ADDR_A2 0xa2 +#define SFP_EEPROM_A2_TEMPERATURE_ADDR 0x60 +#define SFP_EEPROM_A2_TEMPERATURE_SIZE 2 +#define SFP_EEPROM_A2_VCC_ADDR 0x62 +#define SFP_EEPROM_A2_VCC_SIZE 2 +#define SFP_EEPROM_A2_TX_BIAS_ADDR 0x64 +#define SFP_EEPROM_A2_TX_BIAS_SIZE 2 +#define SFP_EEPROM_A2_TX_POWER_ADDR 0x66 +#define SFP_EEPROM_A2_TX_POWER_SIZE 2 +#define SFP_EEPROM_A2_RX_POWER_ADDR 0x68 +#define SFP_EEPROM_A2_RX_POWER_SIZE 2 + +#define I2C_DEV_ADDR_A0 0xa0 +#define QSFP_EEPROM_A0_TEMPERATURE_ADDR 0x16 +#define 
QSFP_EEPROM_A0_TEMPERATURE_SIZE 2 +#define QSFP_EEPROM_A0_VCC_ADDR 0x1a +#define QSFP_EEPROM_A0_VCC_SIZE 2 +#define QSFP_EEPROM_A0_TX1_BIAS_ADDR 0x2a +#define QSFP_EEPROM_A0_TX1_BIAS_SIZE 2 +#define QSFP_EEPROM_A0_TX1_POWER_ADDR 0x32 +#define QSFP_EEPROM_A0_TX1_POWER_SIZE 2 +#define QSFP_EEPROM_A0_RX1_POWER_ADDR 0x22 +#define QSFP_EEPROM_A0_RX1_POWER_SIZE 2 + +/************************************** + * eDiag NETWORK Mode (DON) + **************************************/ + +#define ETH_DON_TYPE 0x0911 /* NETWORK Mode for QeDiag */ +#define ETH_DON_TRACE_TYPE 0x0912 /* NETWORK Mode Continuous Trace */ + +#define DON_RESP_UNKNOWN_CMD_ID 0x10 /* Response Error */ + +/* Op Codes, Response is Op Code+1 */ + +#define DON_REG_READ_REQ_CMD_ID 0x11 +#define DON_REG_WRITE_REQ_CMD_ID 0x22 +#define DON_CHALLENGE_REQ_CMD_ID 0x33 +#define DON_NVM_READ_REQ_CMD_ID 0x44 +#define DON_BLOCK_READ_REQ_CMD_ID 0x55 + +#define DON_MFW_MODE_TRACE_CONTINUOUS_ID 0x70 + +#if defined(MFW) || defined(DIAG) || defined(WINEDIAG) + +#ifndef UEFI +#if defined(_MSC_VER) +#pragma pack(push, 1) +#else +#pragma pack(1) +#endif +#endif + +typedef struct { + u8 dst_addr[6]; + u8 src_addr[6]; + u16 ether_type; + + /* DON Message data starts here, after L2 header */ + /* Do not change alignment to keep backward compatibility */ + u16 cmd_id; /* Op code and response code */ + + union { + struct { /* DON Commands */ + u32 address; + u32 val; + u32 resp_status; + }; + struct { /* DON Traces */ + u16 mcp_clock; /* MCP Clock in MHz */ + u16 trace_size; /* Trace size in bytes */ + + u32 seconds; /* Seconds since last reset */ + u32 ticks; /* Timestamp (NOW) */ + }; + }; + union { + u8 digest[32]; /* SHA256 */ + u8 data[32]; + /* u32 dword[8]; */ + }; +} don_packet_t; + +#ifndef UEFI +#if defined(_MSC_VER) +#pragma pack(pop) +#else +#pragma pack(0) +#endif +#endif /* #ifndef UEFI */ + +#endif /* #if defined(MFW) || defined(DIAG) || defined(WINEDIAG) */ + #endif /* MCP_PUBLIC_H */ diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h index daa5437dd..806b8e9a4 100644 --- a/drivers/net/qede/base/nvm_cfg.h +++ b/drivers/net/qede/base/nvm_cfg.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2016 - 2018 Cavium Inc. + * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc. * All rights reserved. - * www.cavium.com + * www.marvell.com */ - /**************************************************************************** * * Name: nvm_cfg.h @@ -11,7 +11,7 @@ * Description: NVM config file - Generated file from nvm cfg excel. * DO NOT MODIFY !!! 
* - * Created: 1/6/2019 + * Created: 8/4/2020 * ****************************************************************************/ @@ -19,18 +19,18 @@ #define NVM_CFG_H -#define NVM_CFG_version 0x84500 +#define NVM_CFG_version 0x86009 -#define NVM_CFG_new_option_seq 45 +#define NVM_CFG_new_option_seq 60 #define NVM_CFG_removed_option_seq 4 -#define NVM_CFG_updated_value_seq 13 +#define NVM_CFG_updated_value_seq 22 struct nvm_cfg_mac_address { u32 mac_addr_hi; - #define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF - #define NVM_CFG_MAC_ADDRESS_HI_OFFSET 0 + #define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF + #define NVM_CFG_MAC_ADDRESS_HI_OFFSET 0 u32 mac_addr_lo; }; @@ -38,2555 +38,2636 @@ struct nvm_cfg_mac_address { * nvm_cfg1 structs ******************************************/ struct nvm_cfg1_glob { - u32 generic_cont0; /* 0x0 */ - #define NVM_CFG1_GLOB_BOARD_SWAP_MASK 0x0000000F - #define NVM_CFG1_GLOB_BOARD_SWAP_OFFSET 0 - #define NVM_CFG1_GLOB_BOARD_SWAP_NONE 0x0 - #define NVM_CFG1_GLOB_BOARD_SWAP_PATH 0x1 - #define NVM_CFG1_GLOB_BOARD_SWAP_PORT 0x2 - #define NVM_CFG1_GLOB_BOARD_SWAP_BOTH 0x3 - #define NVM_CFG1_GLOB_MF_MODE_MASK 0x00000FF0 - #define NVM_CFG1_GLOB_MF_MODE_OFFSET 4 - #define NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED 0x0 - #define NVM_CFG1_GLOB_MF_MODE_DEFAULT 0x1 - #define NVM_CFG1_GLOB_MF_MODE_SPIO4 0x2 - #define NVM_CFG1_GLOB_MF_MODE_NPAR1_0 0x3 - #define NVM_CFG1_GLOB_MF_MODE_NPAR1_5 0x4 - #define NVM_CFG1_GLOB_MF_MODE_NPAR2_0 0x5 - #define NVM_CFG1_GLOB_MF_MODE_BD 0x6 - #define NVM_CFG1_GLOB_MF_MODE_UFP 0x7 - #define NVM_CFG1_GLOB_MF_MODE_DCI_NPAR 0x8 - #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK 0x00001000 - #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_OFFSET 12 - #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_DISABLED 0x0 - #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_ENABLED 0x1 - #define NVM_CFG1_GLOB_AVS_MARGIN_LOW_MASK 0x001FE000 - #define NVM_CFG1_GLOB_AVS_MARGIN_LOW_OFFSET 13 - #define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_MASK 0x1FE00000 - #define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_OFFSET 21 - #define NVM_CFG1_GLOB_ENABLE_SRIOV_MASK 0x20000000 - #define NVM_CFG1_GLOB_ENABLE_SRIOV_OFFSET 29 - #define NVM_CFG1_GLOB_ENABLE_SRIOV_DISABLED 0x0 - #define NVM_CFG1_GLOB_ENABLE_SRIOV_ENABLED 0x1 - #define NVM_CFG1_GLOB_ENABLE_ATC_MASK 0x40000000 - #define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30 - #define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0 - #define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1 - #define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_MASK \ - 0x80000000 - #define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_OFFSET 31 - #define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_DISABLED \ - 0x0 - #define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_ENABLED 0x1 - u32 engineering_change[3]; /* 0x4 */ - u32 manufacturing_id; /* 0x10 */ - u32 serial_number[4]; /* 0x14 */ - u32 pcie_cfg; /* 0x24 */ - #define NVM_CFG1_GLOB_PCI_GEN_MASK 0x00000003 - #define NVM_CFG1_GLOB_PCI_GEN_OFFSET 0 - #define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN1 0x0 - #define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN2 0x1 - #define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN3 0x2 - #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_MASK 0x00000004 - #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_OFFSET 2 - #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_DISABLED 0x0 - #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_ENABLED 0x1 - #define NVM_CFG1_GLOB_ASPM_SUPPORT_MASK 0x00000018 - #define NVM_CFG1_GLOB_ASPM_SUPPORT_OFFSET 3 - #define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_ENABLED 0x0 - #define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_DISABLED 0x1 - #define NVM_CFG1_GLOB_ASPM_SUPPORT_L1_DISABLED 0x2 - 
#define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_DISABLED 0x3 - #define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_MASK \ - 0x00000020 - #define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_OFFSET 5 - #define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_MASK 0x000003C0 - #define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_OFFSET 6 - #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_MASK 0x00001C00 - #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_OFFSET 10 - #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_HW 0x0 - #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_0DB 0x1 - #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_3_5DB 0x2 - #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_6_0DB 0x3 - #define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_MASK 0x001FE000 - #define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_OFFSET 13 - #define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_MASK 0x1FE00000 - #define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_OFFSET 21 - #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_MASK 0x60000000 - #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_OFFSET 29 - /* Set the duration, in sec, fan failure signal should be sampled */ - #define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_MASK \ - 0x80000000 - #define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_OFFSET 31 - u32 mgmt_traffic; /* 0x28 */ - #define NVM_CFG1_GLOB_RESERVED60_MASK 0x00000001 - #define NVM_CFG1_GLOB_RESERVED60_OFFSET 0 - #define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_MASK 0x000001FE - #define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_OFFSET 1 - #define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_MASK 0x0001FE00 - #define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_OFFSET 9 - #define NVM_CFG1_GLOB_SMBUS_ADDRESS_MASK 0x01FE0000 - #define NVM_CFG1_GLOB_SMBUS_ADDRESS_OFFSET 17 - #define NVM_CFG1_GLOB_SIDEBAND_MODE_MASK 0x06000000 - #define NVM_CFG1_GLOB_SIDEBAND_MODE_OFFSET 25 - #define NVM_CFG1_GLOB_SIDEBAND_MODE_DISABLED 0x0 - #define NVM_CFG1_GLOB_SIDEBAND_MODE_RMII 0x1 - #define NVM_CFG1_GLOB_SIDEBAND_MODE_SGMII 0x2 - #define NVM_CFG1_GLOB_AUX_MODE_MASK 0x78000000 - #define NVM_CFG1_GLOB_AUX_MODE_OFFSET 27 - #define NVM_CFG1_GLOB_AUX_MODE_DEFAULT 0x0 - #define NVM_CFG1_GLOB_AUX_MODE_SMBUS_ONLY 0x1 + u32 generic_cont0; /* 0x0 */ + #define NVM_CFG1_GLOB_BOARD_SWAP_MASK 0x0000000F + #define NVM_CFG1_GLOB_BOARD_SWAP_OFFSET 0 + #define NVM_CFG1_GLOB_BOARD_SWAP_NONE 0x0 + #define NVM_CFG1_GLOB_BOARD_SWAP_PATH 0x1 + #define NVM_CFG1_GLOB_BOARD_SWAP_PORT 0x2 + #define NVM_CFG1_GLOB_BOARD_SWAP_BOTH 0x3 + #define NVM_CFG1_GLOB_MF_MODE_MASK 0x00000FF0 + #define NVM_CFG1_GLOB_MF_MODE_OFFSET 4 + #define NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED 0x0 + #define NVM_CFG1_GLOB_MF_MODE_DEFAULT 0x1 + #define NVM_CFG1_GLOB_MF_MODE_SPIO4 0x2 + #define NVM_CFG1_GLOB_MF_MODE_NPAR1_0 0x3 + #define NVM_CFG1_GLOB_MF_MODE_NPAR1_5 0x4 + #define NVM_CFG1_GLOB_MF_MODE_NPAR2_0 0x5 + #define NVM_CFG1_GLOB_MF_MODE_BD 0x6 + #define NVM_CFG1_GLOB_MF_MODE_UFP 0x7 + #define NVM_CFG1_GLOB_MF_MODE_DCI_NPAR 0x8 + #define NVM_CFG1_GLOB_MF_MODE_QINQ 0x9 + #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK 0x00001000 + #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_OFFSET 12 + #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_DISABLED 0x0 + #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_ENABLED 0x1 + #define NVM_CFG1_GLOB_AVS_MARGIN_LOW_MASK 0x001FE000 + #define NVM_CFG1_GLOB_AVS_MARGIN_LOW_OFFSET 13 + #define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_MASK 0x1FE00000 + #define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_OFFSET 21 + #define NVM_CFG1_GLOB_ENABLE_SRIOV_MASK 0x20000000 + #define NVM_CFG1_GLOB_ENABLE_SRIOV_OFFSET 29 + #define NVM_CFG1_GLOB_ENABLE_SRIOV_DISABLED 0x0 + #define NVM_CFG1_GLOB_ENABLE_SRIOV_ENABLED 0x1 + #define NVM_CFG1_GLOB_ENABLE_ATC_MASK 0x40000000 + 
#define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30 + #define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0 + #define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1 + #define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_MASK 0x80000000 + #define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_OFFSET 31 + #define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_DISABLED 0x0 + #define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_ENABLED 0x1 + u32 engineering_change[3]; /* 0x4 */ + u32 manufacturing_id; /* 0x10 */ + u32 serial_number[4]; /* 0x14 */ + u32 pcie_cfg; /* 0x24 */ + #define NVM_CFG1_GLOB_PCI_GEN_MASK 0x00000003 + #define NVM_CFG1_GLOB_PCI_GEN_OFFSET 0 + #define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN1 0x0 + #define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN2 0x1 + #define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN3 0x2 + #define NVM_CFG1_GLOB_PCI_GEN_AHP_PCI_GEN4 0x3 + #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_MASK 0x00000004 + #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_OFFSET 2 + #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_DISABLED 0x0 + #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_ENABLED 0x1 + #define NVM_CFG1_GLOB_ASPM_SUPPORT_MASK 0x00000018 + #define NVM_CFG1_GLOB_ASPM_SUPPORT_OFFSET 3 + #define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_ENABLED 0x0 + #define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_DISABLED 0x1 + #define NVM_CFG1_GLOB_ASPM_SUPPORT_L1_DISABLED 0x2 + #define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_DISABLED 0x3 + #define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_MASK 0x00000020 + #define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_OFFSET 5 + #define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_MASK 0x000003C0 + #define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_OFFSET 6 + #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_MASK 0x00001C00 + #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_OFFSET 10 + #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_HW 0x0 + #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_0DB 0x1 + #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_3_5DB 0x2 + #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_6_0DB 0x3 + #define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_MASK 0x001FE000 + #define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_OFFSET 13 + #define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_MASK 0x1FE00000 + #define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_OFFSET 21 + #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_MASK 0x60000000 + #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_OFFSET 29 + /* Set the duration, in seconds, fan failure signal should be + * sampled + */ + #define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_MASK 0x80000000 + #define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_OFFSET 31 + u32 mgmt_traffic; /* 0x28 */ + #define NVM_CFG1_GLOB_RESERVED60_MASK 0x00000001 + #define NVM_CFG1_GLOB_RESERVED60_OFFSET 0 + #define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_MASK 0x000001FE + #define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_OFFSET 1 + #define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_MASK 0x0001FE00 + #define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_OFFSET 9 + #define NVM_CFG1_GLOB_SMBUS_ADDRESS_MASK 0x01FE0000 + #define NVM_CFG1_GLOB_SMBUS_ADDRESS_OFFSET 17 + #define NVM_CFG1_GLOB_SIDEBAND_MODE_MASK 0x06000000 + #define NVM_CFG1_GLOB_SIDEBAND_MODE_OFFSET 25 + #define NVM_CFG1_GLOB_SIDEBAND_MODE_DISABLED 0x0 + #define NVM_CFG1_GLOB_SIDEBAND_MODE_RMII 0x1 + #define NVM_CFG1_GLOB_SIDEBAND_MODE_SGMII 0x2 + #define NVM_CFG1_GLOB_AUX_MODE_MASK 0x78000000 + #define NVM_CFG1_GLOB_AUX_MODE_OFFSET 27 + #define NVM_CFG1_GLOB_AUX_MODE_DEFAULT 0x0 + #define NVM_CFG1_GLOB_AUX_MODE_SMBUS_ONLY 0x1 /* Indicates whether external thermal sensor is available */ - #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_MASK 0x80000000 - #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_OFFSET 31 - #define 
NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_DISABLED 0x0 - #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ENABLED 0x1 - u32 core_cfg; /* 0x2C */ - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK 0x000000FF - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET 0 - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_2X40G 0x0 - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X50G 0x1 - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_1X100G 0x2 - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X10G_F 0x3 - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X10G_E 0x4 - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X20G 0x5 - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X40G 0xB - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G 0xC - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X25G 0xD - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G 0xE - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X10G 0xF - #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G_LIO2 0x10 - #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_MASK 0x00000100 - #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_OFFSET 8 - #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_DISABLED 0x0 - #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_ENABLED 0x1 - #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_MASK 0x00000200 - #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_OFFSET 9 - #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_DISABLED 0x0 - #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_ENABLED 0x1 - #define NVM_CFG1_GLOB_MPS10_CORE_ADDR_MASK 0x0003FC00 - #define NVM_CFG1_GLOB_MPS10_CORE_ADDR_OFFSET 10 - #define NVM_CFG1_GLOB_MPS25_CORE_ADDR_MASK 0x03FC0000 - #define NVM_CFG1_GLOB_MPS25_CORE_ADDR_OFFSET 18 - #define NVM_CFG1_GLOB_AVS_MODE_MASK 0x1C000000 - #define NVM_CFG1_GLOB_AVS_MODE_OFFSET 26 - #define NVM_CFG1_GLOB_AVS_MODE_CLOSE_LOOP 0x0 - #define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_CFG 0x1 - #define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_OTP 0x2 - #define NVM_CFG1_GLOB_AVS_MODE_DISABLED 0x3 - #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_MASK 0x60000000 - #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_OFFSET 29 - #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_DISABLED 0x0 - #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_ENABLED 0x1 - u32 e_lane_cfg1; /* 0x30 */ - #define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F - #define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0 - #define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0 - #define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4 - #define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00 - #define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8 - #define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000 - #define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12 - #define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000 - #define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16 - #define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000 - #define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20 - #define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000 - #define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24 - #define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000 - #define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28 - u32 e_lane_cfg2; /* 0x34 */ - #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001 - #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0 - #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002 - #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1 - #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004 - #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2 - #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008 - #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3 - #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010 - #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4 - #define 
NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020 - #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5 - #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040 - #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6 - #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080 - #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7 - #define NVM_CFG1_GLOB_SMBUS_MODE_MASK 0x00000F00 - #define NVM_CFG1_GLOB_SMBUS_MODE_OFFSET 8 - #define NVM_CFG1_GLOB_SMBUS_MODE_DISABLED 0x0 - #define NVM_CFG1_GLOB_SMBUS_MODE_100KHZ 0x1 - #define NVM_CFG1_GLOB_SMBUS_MODE_400KHZ 0x2 - #define NVM_CFG1_GLOB_NCSI_MASK 0x0000F000 - #define NVM_CFG1_GLOB_NCSI_OFFSET 12 - #define NVM_CFG1_GLOB_NCSI_DISABLED 0x0 - #define NVM_CFG1_GLOB_NCSI_ENABLED 0x1 + #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_MASK 0x80000000 + #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_OFFSET 31 + #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_DISABLED 0x0 + #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ENABLED 0x1 + u32 core_cfg; /* 0x2C */ + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK 0x000000FF + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET 0 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_2X40G 0x0 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X50G 0x1 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_1X100G 0x2 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X10G_F 0x3 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X10G_E 0x4 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X20G 0x5 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X40G 0xB + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G 0xC + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X25G 0xD + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G 0xE + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X10G 0xF + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G_LIO2 0x10 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_2X50G_R1 0x11 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_4X50G_R1 0x12 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_1X100G_R2 0x13 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_2X100G_R2 0x14 + #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_1X100G_R4 0x15 + #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_MASK 0x00000100 + #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_OFFSET 8 + #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_DISABLED 0x0 + #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_ENABLED 0x1 + #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_MASK 0x00000200 + #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_OFFSET 9 + #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_DISABLED 0x0 + #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_ENABLED 0x1 + #define NVM_CFG1_GLOB_MPS10_CORE_ADDR_MASK 0x0003FC00 + #define NVM_CFG1_GLOB_MPS10_CORE_ADDR_OFFSET 10 + #define NVM_CFG1_GLOB_MPS25_CORE_ADDR_MASK 0x03FC0000 + #define NVM_CFG1_GLOB_MPS25_CORE_ADDR_OFFSET 18 + #define NVM_CFG1_GLOB_AVS_MODE_MASK 0x1C000000 + #define NVM_CFG1_GLOB_AVS_MODE_OFFSET 26 + #define NVM_CFG1_GLOB_AVS_MODE_CLOSE_LOOP 0x0 + #define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_CFG 0x1 + #define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_OTP 0x2 + #define NVM_CFG1_GLOB_AVS_MODE_DISABLED 0x3 + #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_MASK 0x60000000 + #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_OFFSET 29 + #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_DISABLED 0x0 + #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_ENABLED 0x1 + #define NVM_CFG1_GLOB_DCI_SUPPORT_MASK 0x80000000 + #define NVM_CFG1_GLOB_DCI_SUPPORT_OFFSET 31 + #define NVM_CFG1_GLOB_DCI_SUPPORT_DISABLED 0x0 + #define NVM_CFG1_GLOB_DCI_SUPPORT_ENABLED 0x1 + u32 e_lane_cfg1; /* 0x30 */ + #define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F + 
#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0 + #define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0 + #define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4 + #define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00 + #define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8 + #define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000 + #define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12 + #define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000 + #define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16 + #define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000 + #define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20 + #define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000 + #define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24 + #define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000 + #define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28 + u32 e_lane_cfg2; /* 0x34 */ + #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001 + #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0 + #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002 + #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1 + #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004 + #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2 + #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008 + #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3 + #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010 + #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4 + #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020 + #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5 + #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040 + #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6 + #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080 + #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7 + #define NVM_CFG1_GLOB_SMBUS_MODE_MASK 0x00000F00 + #define NVM_CFG1_GLOB_SMBUS_MODE_OFFSET 8 + #define NVM_CFG1_GLOB_SMBUS_MODE_DISABLED 0x0 + #define NVM_CFG1_GLOB_SMBUS_MODE_100KHZ 0x1 + #define NVM_CFG1_GLOB_SMBUS_MODE_400KHZ 0x2 + #define NVM_CFG1_GLOB_NCSI_MASK 0x0000F000 + #define NVM_CFG1_GLOB_NCSI_OFFSET 12 + #define NVM_CFG1_GLOB_NCSI_DISABLED 0x0 + #define NVM_CFG1_GLOB_NCSI_ENABLED 0x1 /* Maximum advertised pcie link width */ - #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_MASK 0x000F0000 - #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_OFFSET 16 - #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_BB_16_LANES 0x0 - #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_1_LANE 0x1 - #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_2_LANES 0x2 - #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_4_LANES 0x3 - #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_8_LANES 0x4 + #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_MASK 0x000F0000 + #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_OFFSET 16 + #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_BB_16_AHP_16_LANES 0x0 + #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_1_LANE 0x1 + #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_2_LANES 0x2 + #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_4_LANES 0x3 + #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_8_LANES 0x4 /* ASPM L1 mode */ - #define NVM_CFG1_GLOB_ASPM_L1_MODE_MASK 0x00300000 - #define NVM_CFG1_GLOB_ASPM_L1_MODE_OFFSET 20 - #define NVM_CFG1_GLOB_ASPM_L1_MODE_FORCED 0x0 - #define NVM_CFG1_GLOB_ASPM_L1_MODE_DYNAMIC_LOW_LATENCY 0x1 - #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_MASK 0x01C00000 - #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_OFFSET 22 - #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_DISABLED 0x0 - #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_I2C 0x1 - #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_ONLY 0x2 - #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_SMBUS 0x3 - #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_MASK \ - 0x06000000 - #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_OFFSET 25 - #define 
NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_DISABLE 0x0 - #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_INTERNAL 0x1 - #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_EXTERNAL 0x2 - #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_BOTH 0x3 + #define NVM_CFG1_GLOB_ASPM_L1_MODE_MASK 0x00300000 + #define NVM_CFG1_GLOB_ASPM_L1_MODE_OFFSET 20 + #define NVM_CFG1_GLOB_ASPM_L1_MODE_FORCED 0x0 + #define NVM_CFG1_GLOB_ASPM_L1_MODE_DYNAMIC_LOW_LATENCY 0x1 + #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_MASK 0x01C00000 + #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_OFFSET 22 + #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_DISABLED 0x0 + #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_I2C 0x1 + #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_ONLY 0x2 + #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_SMBUS 0x3 + #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_MASK 0x06000000 + #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_OFFSET 25 + #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_DISABLE 0x0 + #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_INTERNAL 0x1 + #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_EXTERNAL 0x2 + #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_BOTH 0x3 /* Set the PLDM sensor modes */ - #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_MASK 0x38000000 - #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_OFFSET 27 - #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0 - #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1 - #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2 + #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_MASK 0x38000000 + #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_OFFSET 27 + #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0 + #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1 + #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2 + /* Enable VDM interface */ + #define NVM_CFG1_GLOB_PCIE_VDM_ENABLED_MASK 0x40000000 + #define NVM_CFG1_GLOB_PCIE_VDM_ENABLED_OFFSET 30 + #define NVM_CFG1_GLOB_PCIE_VDM_ENABLED_DISABLED 0x0 + #define NVM_CFG1_GLOB_PCIE_VDM_ENABLED_ENABLED 0x1 /* ROL enable */ - #define NVM_CFG1_GLOB_RESET_ON_LAN_MASK 0x80000000 - #define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET 31 - #define NVM_CFG1_GLOB_RESET_ON_LAN_DISABLED 0x0 - #define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED 0x1 - u32 f_lane_cfg1; /* 0x38 */ - #define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F - #define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0 - #define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0 - #define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4 - #define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00 - #define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8 - #define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000 - #define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12 - #define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000 - #define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16 - #define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000 - #define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20 - #define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000 - #define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24 - #define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000 - #define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28 - u32 f_lane_cfg2; /* 0x3C */ - #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001 - #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0 - #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002 - #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1 - #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004 - #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2 - #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008 - #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3 - #define 
NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010 - #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4 - #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020 - #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5 - #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040 - #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6 - #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080 - #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7 + #define NVM_CFG1_GLOB_RESET_ON_LAN_MASK 0x80000000 + #define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET 31 + #define NVM_CFG1_GLOB_RESET_ON_LAN_DISABLED 0x0 + #define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED 0x1 + u32 f_lane_cfg1; /* 0x38 */ + #define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F + #define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0 + #define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0 + #define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4 + #define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00 + #define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8 + #define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000 + #define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12 + #define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000 + #define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16 + #define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000 + #define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20 + #define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000 + #define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24 + #define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000 + #define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28 + u32 f_lane_cfg2; /* 0x3C */ + #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001 + #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0 + #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002 + #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1 + #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004 + #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2 + #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008 + #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3 + #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010 + #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4 + #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020 + #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5 + #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040 + #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6 + #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080 + #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7 /* Control the period between two successive checks */ - #define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_MASK \ - 0x0000FF00 - #define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_OFFSET 8 + #define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_OFFSET 8 /* Set shutdown temperature */ - #define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_MASK \ - 0x00FF0000 - #define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_OFFSET 16 + #define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_OFFSET 16 /* Set max. 
count for over operational temperature */ - #define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_MASK 0xFF000000 - #define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_OFFSET 24 - u32 mps10_preemphasis; /* 0x40 */ - #define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24 - u32 mps10_driver_current; /* 0x44 */ - #define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24 - u32 mps25_preemphasis; /* 0x48 */ - #define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24 - u32 mps25_driver_current; /* 0x4C */ - #define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24 - u32 pci_id; /* 0x50 */ - #define NVM_CFG1_GLOB_VENDOR_ID_MASK 0x0000FFFF - #define NVM_CFG1_GLOB_VENDOR_ID_OFFSET 0 + #define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_MASK 0xFF000000 + #define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_OFFSET 24 + u32 mps10_preemphasis; /* 0x40 */ + #define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF + #define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0 + #define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8 + #define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16 + #define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000 + #define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24 + u32 mps10_driver_current; /* 0x44 */ + #define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF + #define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0 + #define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8 + #define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16 + #define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000 + #define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24 + u32 mps25_preemphasis; /* 0x48 */ + #define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF + #define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0 + #define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8 + #define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16 + #define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000 + #define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24 + u32 mps25_driver_current; /* 0x4C */ + #define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF + #define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0 + #define NVM_CFG1_GLOB_LANE1_AMP_MASK 
0x0000FF00 + #define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8 + #define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16 + #define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000 + #define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24 + u32 pci_id; /* 0x50 */ + #define NVM_CFG1_GLOB_VENDOR_ID_MASK 0x0000FFFF + #define NVM_CFG1_GLOB_VENDOR_ID_OFFSET 0 /* Set caution temperature */ - #define NVM_CFG1_GLOB_DEAD_TEMP_TH_TEMPERATURE_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_DEAD_TEMP_TH_TEMPERATURE_OFFSET 16 + #define NVM_CFG1_GLOB_DEAD_TEMP_TH_TEMPERATURE_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_DEAD_TEMP_TH_TEMPERATURE_OFFSET 16 /* Set external thermal sensor I2C address */ - #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_MASK \ - 0xFF000000 - #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_OFFSET 24 - u32 pci_subsys_id; /* 0x54 */ - #define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_MASK 0x0000FFFF - #define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_OFFSET 0 - #define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_MASK 0xFFFF0000 - #define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_OFFSET 16 - u32 bar; /* 0x58 */ - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_MASK 0x0000000F - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_OFFSET 0 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_DISABLED 0x0 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2K 0x1 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4K 0x2 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8K 0x3 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16K 0x4 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32K 0x5 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_64K 0x6 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_128K 0x7 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_256K 0x8 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_512K 0x9 - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_1M 0xA - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2M 0xB - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4M 0xC - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8M 0xD - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16M 0xE - #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32M 0xF + #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_MASK 0xFF000000 + #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_OFFSET 24 + u32 pci_subsys_id; /* 0x54 */ + #define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_MASK 0x0000FFFF + #define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_OFFSET 0 + #define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_MASK 0xFFFF0000 + #define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_OFFSET 16 + u32 bar; /* 0x58 */ + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_MASK 0x0000000F + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_OFFSET 0 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_DISABLED 0x0 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2K 0x1 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4K 0x2 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8K 0x3 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16K 0x4 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32K 0x5 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_64K 0x6 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_128K 0x7 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_256K 0x8 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_512K 0x9 + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_1M 0xA + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2M 0xB + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4M 0xC + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8M 0xD + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16M 0xE + #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32M 0xF /* BB VF BAR2 size */ - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_MASK 0x000000F0 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_OFFSET 4 - #define 
NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_DISABLED 0x0 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4K 0x1 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8K 0x2 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16K 0x3 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32K 0x4 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64K 0x5 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_128K 0x6 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_256K 0x7 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_512K 0x8 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_1M 0x9 - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_2M 0xA - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4M 0xB - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8M 0xC - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16M 0xD - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32M 0xE - #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64M 0xF + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_MASK 0x000000F0 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_OFFSET 4 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_DISABLED 0x0 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4K 0x1 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8K 0x2 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16K 0x3 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32K 0x4 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64K 0x5 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_128K 0x6 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_256K 0x7 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_512K 0x8 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_1M 0x9 + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_2M 0xA + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4M 0xB + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8M 0xC + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16M 0xD + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32M 0xE + #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64M 0xF /* BB BAR2 size (global) */ - #define NVM_CFG1_GLOB_BAR2_SIZE_MASK 0x00000F00 - #define NVM_CFG1_GLOB_BAR2_SIZE_OFFSET 8 - #define NVM_CFG1_GLOB_BAR2_SIZE_DISABLED 0x0 - #define NVM_CFG1_GLOB_BAR2_SIZE_64K 0x1 - #define NVM_CFG1_GLOB_BAR2_SIZE_128K 0x2 - #define NVM_CFG1_GLOB_BAR2_SIZE_256K 0x3 - #define NVM_CFG1_GLOB_BAR2_SIZE_512K 0x4 - #define NVM_CFG1_GLOB_BAR2_SIZE_1M 0x5 - #define NVM_CFG1_GLOB_BAR2_SIZE_2M 0x6 - #define NVM_CFG1_GLOB_BAR2_SIZE_4M 0x7 - #define NVM_CFG1_GLOB_BAR2_SIZE_8M 0x8 - #define NVM_CFG1_GLOB_BAR2_SIZE_16M 0x9 - #define NVM_CFG1_GLOB_BAR2_SIZE_32M 0xA - #define NVM_CFG1_GLOB_BAR2_SIZE_64M 0xB - #define NVM_CFG1_GLOB_BAR2_SIZE_128M 0xC - #define NVM_CFG1_GLOB_BAR2_SIZE_256M 0xD - #define NVM_CFG1_GLOB_BAR2_SIZE_512M 0xE - #define NVM_CFG1_GLOB_BAR2_SIZE_1G 0xF - /* Set the duration, in secs, fan failure signal should be sampled */ - #define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_MASK 0x0000F000 - #define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_OFFSET 12 - /* This field defines the board total budget for bar2 when disabled + #define NVM_CFG1_GLOB_BAR2_SIZE_MASK 0x00000F00 + #define NVM_CFG1_GLOB_BAR2_SIZE_OFFSET 8 + #define NVM_CFG1_GLOB_BAR2_SIZE_DISABLED 0x0 + #define NVM_CFG1_GLOB_BAR2_SIZE_64K 0x1 + #define NVM_CFG1_GLOB_BAR2_SIZE_128K 0x2 + #define NVM_CFG1_GLOB_BAR2_SIZE_256K 0x3 + #define NVM_CFG1_GLOB_BAR2_SIZE_512K 0x4 + #define NVM_CFG1_GLOB_BAR2_SIZE_1M 0x5 + #define NVM_CFG1_GLOB_BAR2_SIZE_2M 0x6 + #define NVM_CFG1_GLOB_BAR2_SIZE_4M 0x7 + #define NVM_CFG1_GLOB_BAR2_SIZE_8M 0x8 + #define NVM_CFG1_GLOB_BAR2_SIZE_16M 0x9 + #define NVM_CFG1_GLOB_BAR2_SIZE_32M 0xA + #define NVM_CFG1_GLOB_BAR2_SIZE_64M 0xB + #define NVM_CFG1_GLOB_BAR2_SIZE_128M 0xC + #define NVM_CFG1_GLOB_BAR2_SIZE_256M 0xD + #define NVM_CFG1_GLOB_BAR2_SIZE_512M 0xE + #define NVM_CFG1_GLOB_BAR2_SIZE_1G 0xF + /* Set the duration, in seconds, fan failure signal 
should be + * sampled + */ + #define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_MASK 0x0000F000 + #define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_OFFSET 12 + /* This field defines the board total budget for bar2 when disabled * the regular bar size is used. */ - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_OFFSET 16 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_DISABLED 0x0 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64K 0x1 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128K 0x2 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256K 0x3 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512K 0x4 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1M 0x5 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_2M 0x6 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_4M 0x7 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_8M 0x8 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_16M 0x9 - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_32M 0xA - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64M 0xB - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128M 0xC - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256M 0xD - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512M 0xE - #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1G 0xF + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_OFFSET 16 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_DISABLED 0x0 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64K 0x1 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128K 0x2 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256K 0x3 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512K 0x4 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1M 0x5 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_2M 0x6 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_4M 0x7 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_8M 0x8 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_16M 0x9 + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_32M 0xA + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64M 0xB + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128M 0xC + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256M 0xD + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512M 0xE + #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1G 0xF /* Enable/Disable Crash dump triggers */ - #define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_MASK 0xFF000000 - #define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_OFFSET 24 - u32 mps10_txfir_main; /* 0x5C */ - #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24 - u32 mps10_txfir_post; /* 0x60 */ - #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24 - u32 mps25_txfir_main; /* 0x64 */ - #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16 - #define 
NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24 - u32 mps25_txfir_post; /* 0x68 */ - #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24 - u32 manufacture_ver; /* 0x6C */ - #define NVM_CFG1_GLOB_MANUF0_VER_MASK 0x0000003F - #define NVM_CFG1_GLOB_MANUF0_VER_OFFSET 0 - #define NVM_CFG1_GLOB_MANUF1_VER_MASK 0x00000FC0 - #define NVM_CFG1_GLOB_MANUF1_VER_OFFSET 6 - #define NVM_CFG1_GLOB_MANUF2_VER_MASK 0x0003F000 - #define NVM_CFG1_GLOB_MANUF2_VER_OFFSET 12 - #define NVM_CFG1_GLOB_MANUF3_VER_MASK 0x00FC0000 - #define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18 - #define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000 - #define NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24 + #define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_MASK 0xFF000000 + #define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_OFFSET 24 + u32 mps10_txfir_main; /* 0x5C */ + #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF + #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0 + #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8 + #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16 + #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000 + #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24 + u32 mps10_txfir_post; /* 0x60 */ + #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF + #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0 + #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8 + #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16 + #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000 + #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24 + u32 mps25_txfir_main; /* 0x64 */ + #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF + #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0 + #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8 + #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16 + #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000 + #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24 + u32 mps25_txfir_post; /* 0x68 */ + #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF + #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0 + #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8 + #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16 + #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000 + #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24 + u32 manufacture_ver; /* 0x6C */ + #define NVM_CFG1_GLOB_MANUF0_VER_MASK 0x0000003F + #define NVM_CFG1_GLOB_MANUF0_VER_OFFSET 0 + #define NVM_CFG1_GLOB_MANUF1_VER_MASK 0x00000FC0 + #define NVM_CFG1_GLOB_MANUF1_VER_OFFSET 6 + #define NVM_CFG1_GLOB_MANUF2_VER_MASK 0x0003F000 + #define NVM_CFG1_GLOB_MANUF2_VER_OFFSET 12 + #define NVM_CFG1_GLOB_MANUF3_VER_MASK 0x00FC0000 + #define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18 + #define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000 + #define 
NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24 /* Select package id method */ - #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_MASK 0x40000000 - #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_OFFSET 30 - #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_NVRAM 0x0 - #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_IO_PINS 0x1 - #define NVM_CFG1_GLOB_RECOVERY_MODE_MASK 0x80000000 - #define NVM_CFG1_GLOB_RECOVERY_MODE_OFFSET 31 - #define NVM_CFG1_GLOB_RECOVERY_MODE_DISABLED 0x0 - #define NVM_CFG1_GLOB_RECOVERY_MODE_ENABLED 0x1 - u32 manufacture_time; /* 0x70 */ - #define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F - #define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0 - #define NVM_CFG1_GLOB_MANUF1_TIME_MASK 0x00000FC0 - #define NVM_CFG1_GLOB_MANUF1_TIME_OFFSET 6 - #define NVM_CFG1_GLOB_MANUF2_TIME_MASK 0x0003F000 - #define NVM_CFG1_GLOB_MANUF2_TIME_OFFSET 12 + #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_MASK 0x40000000 + #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_OFFSET 30 + #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_NVRAM 0x0 + #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_IO_PINS 0x1 + #define NVM_CFG1_GLOB_RECOVERY_MODE_MASK 0x80000000 + #define NVM_CFG1_GLOB_RECOVERY_MODE_OFFSET 31 + #define NVM_CFG1_GLOB_RECOVERY_MODE_DISABLED 0x0 + #define NVM_CFG1_GLOB_RECOVERY_MODE_ENABLED 0x1 + u32 manufacture_time; /* 0x70 */ + #define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F + #define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0 + #define NVM_CFG1_GLOB_MANUF1_TIME_MASK 0x00000FC0 + #define NVM_CFG1_GLOB_MANUF1_TIME_OFFSET 6 + #define NVM_CFG1_GLOB_MANUF2_TIME_MASK 0x0003F000 + #define NVM_CFG1_GLOB_MANUF2_TIME_OFFSET 12 /* Max MSIX for Ethernet in default mode */ - #define NVM_CFG1_GLOB_MAX_MSIX_MASK 0x03FC0000 - #define NVM_CFG1_GLOB_MAX_MSIX_OFFSET 18 + #define NVM_CFG1_GLOB_MAX_MSIX_MASK 0x03FC0000 + #define NVM_CFG1_GLOB_MAX_MSIX_OFFSET 18 /* PF Mapping */ - #define NVM_CFG1_GLOB_PF_MAPPING_MASK 0x0C000000 - #define NVM_CFG1_GLOB_PF_MAPPING_OFFSET 26 - #define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS 0x0 - #define NVM_CFG1_GLOB_PF_MAPPING_FIXED 0x1 - #define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_MASK 0x30000000 - #define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_OFFSET 28 - #define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_DISABLED 0x0 - #define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_TI 0x1 + #define NVM_CFG1_GLOB_PF_MAPPING_MASK 0x0C000000 + #define NVM_CFG1_GLOB_PF_MAPPING_OFFSET 26 + #define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS 0x0 + #define NVM_CFG1_GLOB_PF_MAPPING_FIXED 0x1 + #define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_MASK 0x30000000 + #define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_OFFSET 28 + #define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_DISABLED 0x0 + #define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_TI 0x1 /* Enable/Disable PCIE Relaxed Ordering */ - #define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_MASK 0x40000000 - #define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_OFFSET 30 - #define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_DISABLED 0x0 - #define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_ENABLED 0x1 - /* Reset the chip using iPOR to release PCIe due to short PERST - * issues + #define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_MASK 0x40000000 + #define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_OFFSET 30 + #define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_DISABLED 0x0 + #define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_ENABLED 0x1 + /* Reset the chip using iPOR to release PCIe due to short PERST + * issues */ - #define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_MASK 0x80000000 - #define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_OFFSET 31 - #define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_DISABLED 0x0 - #define 
NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_ENABLED 0x1 - u32 led_global_settings; /* 0x74 */ - #define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F - #define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0 - #define NVM_CFG1_GLOB_LED_SWAP_1_MASK 0x000000F0 - #define NVM_CFG1_GLOB_LED_SWAP_1_OFFSET 4 - #define NVM_CFG1_GLOB_LED_SWAP_2_MASK 0x00000F00 - #define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8 - #define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000 - #define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12 + #define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_MASK 0x80000000 + #define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_OFFSET 31 + #define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_DISABLED 0x0 + #define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_ENABLED 0x1 + u32 led_global_settings; /* 0x74 */ + #define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F + #define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0 + #define NVM_CFG1_GLOB_LED_SWAP_1_MASK 0x000000F0 + #define NVM_CFG1_GLOB_LED_SWAP_1_OFFSET 4 + #define NVM_CFG1_GLOB_LED_SWAP_2_MASK 0x00000F00 + #define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8 + #define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000 + #define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12 /* Max. continuous operating temperature */ - #define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_OFFSET 16 - /* GPIO which triggers run-time port swap according to the map - * specified in option 205 + #define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_OFFSET 16 + /* GPIO which triggers run-time port swap according to the map + * specified in option 205 */ - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_MASK 0xFF000000 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_OFFSET 24 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA 0x0 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO0 0x1 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO1 0x2 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO2 0x3 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO3 0x4 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO4 0x5 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO5 0x6 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO6 0x7 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO7 0x8 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO8 0x9 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO9 0xA - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO10 0xB - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO11 0xC - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO12 0xD - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO13 0xE - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO14 0xF - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO15 0x10 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO16 0x11 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO17 0x12 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO18 0x13 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO19 0x14 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO20 0x15 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO21 0x16 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO22 0x17 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO23 0x18 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO24 0x19 - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO25 0x1A - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO26 0x1B - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO27 0x1C - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO28 0x1D - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO29 0x1E - #define
NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO30 0x1F - #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO31 0x20 - u32 generic_cont1; /* 0x78 */ - #define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF - #define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0 - #define NVM_CFG1_GLOB_LANE0_SWAP_MASK 0x00000C00 - #define NVM_CFG1_GLOB_LANE0_SWAP_OFFSET 10 - #define NVM_CFG1_GLOB_LANE1_SWAP_MASK 0x00003000 - #define NVM_CFG1_GLOB_LANE1_SWAP_OFFSET 12 - #define NVM_CFG1_GLOB_LANE2_SWAP_MASK 0x0000C000 - #define NVM_CFG1_GLOB_LANE2_SWAP_OFFSET 14 - #define NVM_CFG1_GLOB_LANE3_SWAP_MASK 0x00030000 - #define NVM_CFG1_GLOB_LANE3_SWAP_OFFSET 16 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_MASK 0xFF000000 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_OFFSET 24 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA 0x0 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO0 0x1 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO1 0x2 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO2 0x3 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO3 0x4 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO4 0x5 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO5 0x6 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO6 0x7 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO7 0x8 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO8 0x9 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO9 0xA + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO10 0xB + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO11 0xC + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO12 0xD + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO13 0xE + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO14 0xF + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO15 0x10 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO16 0x11 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO17 0x12 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO18 0x13 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO19 0x14 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO20 0x15 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO21 0x16 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO22 0x17 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO23 0x18 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO24 0x19 + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO25 0x1A + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO26 0x1B + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO27 0x1C + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO28 0x1D + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO29 0x1E + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO30 0x1F + #define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO31 0x20 + u32 generic_cont1; /* 0x78 */ + #define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF + #define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0 + #define NVM_CFG1_GLOB_LANE0_SWAP_MASK 0x00000C00 + #define NVM_CFG1_GLOB_LANE0_SWAP_OFFSET 10 + #define NVM_CFG1_GLOB_LANE1_SWAP_MASK 0x00003000 + #define NVM_CFG1_GLOB_LANE1_SWAP_OFFSET 12 + #define NVM_CFG1_GLOB_LANE2_SWAP_MASK 0x0000C000 + #define NVM_CFG1_GLOB_LANE2_SWAP_OFFSET 14 + #define NVM_CFG1_GLOB_LANE3_SWAP_MASK 0x00030000 + #define NVM_CFG1_GLOB_LANE3_SWAP_OFFSET 16 /* Enable option 195 - Overriding the PCIe Preset value */ - #define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_MASK 0x00040000 - #define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_OFFSET 18 - #define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_DISABLED 0x0 - #define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_ENABLED 0x1 + #define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_MASK 
0x00040000 + #define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_OFFSET 18 + #define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_DISABLED 0x0 + #define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_ENABLED 0x1 /* PCIe Preset value - applies only if option 194 is enabled */ - #define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_MASK 0x00780000 - #define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_OFFSET 19 - /* Port mapping to be used when the run-time GPIO for port-swap is - * defined and set. + #define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_MASK 0x00780000 + #define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_OFFSET 19 + /* Port mapping to be used when the run-time GPIO for port-swap is + * defined and set. */ - #define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_MASK 0x01800000 - #define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_OFFSET 23 - #define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_MASK 0x06000000 - #define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_OFFSET 25 - #define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_MASK 0x18000000 - #define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27 - #define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000 - #define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29 + #define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_MASK 0x01800000 + #define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_OFFSET 23 + #define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_MASK 0x06000000 + #define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_OFFSET 25 + #define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_MASK 0x18000000 + #define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27 + #define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000 + #define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29 /* Option to Disable embedded LLDP, 0 - Off, 1 - On */ - #define NVM_CFG1_GLOB_LLDP_DISABLE_MASK 0x80000000 - #define NVM_CFG1_GLOB_LLDP_DISABLE_OFFSET 31 - #define NVM_CFG1_GLOB_LLDP_DISABLE_OFF 0x0 - #define NVM_CFG1_GLOB_LLDP_DISABLE_ON 0x1 - u32 mbi_version; /* 0x7C */ - #define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF - #define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0 - #define NVM_CFG1_GLOB_MBI_VERSION_1_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8 - #define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16 - /* If set to other than NA, 0 - Normal operation, 1 - Thermal event - * occurred + #define NVM_CFG1_GLOB_LLDP_DISABLE_MASK 0x80000000 + #define NVM_CFG1_GLOB_LLDP_DISABLE_OFFSET 31 + #define NVM_CFG1_GLOB_LLDP_DISABLE_OFF 0x0 + #define NVM_CFG1_GLOB_LLDP_DISABLE_ON 0x1 + u32 mbi_version; /* 0x7C */ + #define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF + #define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0 + #define NVM_CFG1_GLOB_MBI_VERSION_1_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8 + #define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16 + /* If set to other than NA, 0 - Normal operation, 1 - Thermal event + * occurred */ - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_MASK 0xFF000000 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_OFFSET 24 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_NA 0x0 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO0 0x1 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO1 0x2 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO2 0x3 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO3 0x4 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO4 0x5 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO5 0x6 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO6 0x7 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO7 0x8 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO8 0x9 - #define 
NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO9 0xA - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO10 0xB - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO11 0xC - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO12 0xD - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO13 0xE - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO14 0xF - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO15 0x10 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO16 0x11 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO17 0x12 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO18 0x13 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO19 0x14 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO20 0x15 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO21 0x16 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO22 0x17 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO23 0x18 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO24 0x19 - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO25 0x1A - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO26 0x1B - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO27 0x1C - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO28 0x1D - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO29 0x1E - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO30 0x1F - #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO31 0x20 - u32 mbi_date; /* 0x80 */ - u32 misc_sig; /* 0x84 */ + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_MASK 0xFF000000 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_OFFSET 24 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_NA 0x0 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO0 0x1 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO1 0x2 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO2 0x3 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO3 0x4 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO4 0x5 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO5 0x6 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO6 0x7 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO7 0x8 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO8 0x9 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO9 0xA + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO10 0xB + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO11 0xC + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO12 0xD + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO13 0xE + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO14 0xF + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO15 0x10 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO16 0x11 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO17 0x12 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO18 0x13 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO19 0x14 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO20 0x15 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO21 0x16 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO22 0x17 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO23 0x18 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO24 0x19 + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO25 0x1A + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO26 0x1B + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO27 0x1C + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO28 0x1D + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO29 0x1E + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO30 0x1F + #define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO31 0x20 + u32 mbi_date; /* 0x80 */ + u32 misc_sig; /* 0x84 */ /* Define the GPIO mapping to switch i2c mux */ - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_MASK 0x000000FF - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_OFFSET 0 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_OFFSET 8 - #define 
NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__NA 0x0 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO0 0x1 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO1 0x2 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO2 0x3 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO3 0x4 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO4 0x5 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO5 0x6 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO6 0x7 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO7 0x8 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO8 0x9 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO9 0xA - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO10 0xB - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO11 0xC - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO12 0xD - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO13 0xE - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO14 0xF - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO15 0x10 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO16 0x11 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO17 0x12 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO18 0x13 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO19 0x14 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO20 0x15 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO21 0x16 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO22 0x17 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO23 0x18 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO24 0x19 - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO25 0x1A - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO26 0x1B - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO27 0x1C - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO28 0x1D - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F - #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_MASK 0x000000FF + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_OFFSET 0 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_OFFSET 8 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__NA 0x0 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO0 0x1 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO1 0x2 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO2 0x3 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO3 0x4 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO4 0x5 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO5 0x6 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO6 0x7 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO7 0x8 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO8 0x9 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO9 0xA + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO10 0xB + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO11 0xC + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO12 0xD + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO13 0xE + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO14 0xF + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO15 0x10 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO16 0x11 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO17 0x12 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO18 0x13 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO19 0x14 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO20 0x15 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO21 0x16 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO22 0x17 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO23 0x18 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO24 0x19 + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO25 0x1A + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO26 0x1B + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO27 0x1C + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO28 0x1D 
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F + #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20 /* Interrupt signal used for SMBus/I2C management interface - * 0 = Interrupt event occurred - * 1 = Normal - */ - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_OFFSET 16 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_NA 0x0 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO0 0x1 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO1 0x2 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO2 0x3 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO3 0x4 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO4 0x5 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO5 0x6 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO6 0x7 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO7 0x8 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO8 0x9 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO9 0xA - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO10 0xB - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO11 0xC - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO12 0xD - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO13 0xE - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO14 0xF - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO15 0x10 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO16 0x11 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO17 0x12 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO18 0x13 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO19 0x14 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO20 0x15 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO21 0x16 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO22 0x17 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO23 0x18 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO24 0x19 - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO25 0x1A - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO26 0x1B - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO27 0x1C - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO28 0x1D - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO29 0x1E - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO30 0x1F - #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO31 0x20 + 0 = Interrupt event occurred + 1 = Normal + */ + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_OFFSET 16 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_NA 0x0 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO0 0x1 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO1 0x2 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO2 0x3 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO3 0x4 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO4 0x5 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO5 0x6 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO6 0x7 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO7 0x8 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO8 0x9 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO9 0xA + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO10 0xB + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO11 0xC + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO12 0xD + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO13 0xE + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO14 0xF + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO15 0x10 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO16 0x11 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO17 0x12 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO18 0x13 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO19 0x14 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO20 0x15 + #define
NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO21 0x16 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO22 0x17 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO23 0x18 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO24 0x19 + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO25 0x1A + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO26 0x1B + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO27 0x1C + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO28 0x1D + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO29 0x1E + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO30 0x1F + #define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO31 0x20 /* Set aLOM FAN on GPIO */ - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_MASK 0xFF000000 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_OFFSET 24 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_NA 0x0 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO0 0x1 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO1 0x2 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO2 0x3 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO3 0x4 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO4 0x5 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO5 0x6 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO6 0x7 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO7 0x8 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO8 0x9 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO9 0xA - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO10 0xB - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO11 0xC - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO12 0xD - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO13 0xE - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO14 0xF - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO15 0x10 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO16 0x11 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO17 0x12 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO18 0x13 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO19 0x14 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO20 0x15 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO21 0x16 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO22 0x17 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO23 0x18 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO24 0x19 - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO25 0x1A - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO26 0x1B - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO27 0x1C - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO28 0x1D - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO29 0x1E - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO30 0x1F - #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO31 0x20 - u32 device_capabilities; /* 0x88 */ - #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1 - #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE 0x2 - #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ISCSI 0x4 - #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ROCE 0x8 - #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_IWARP 0x10 - u32 power_dissipated; /* 0x8C */ - #define NVM_CFG1_GLOB_POWER_DIS_D0_MASK 0x000000FF - #define NVM_CFG1_GLOB_POWER_DIS_D0_OFFSET 0 - #define NVM_CFG1_GLOB_POWER_DIS_D1_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_POWER_DIS_D1_OFFSET 8 - #define NVM_CFG1_GLOB_POWER_DIS_D2_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_POWER_DIS_D2_OFFSET 16 - #define NVM_CFG1_GLOB_POWER_DIS_D3_MASK 0xFF000000 - #define NVM_CFG1_GLOB_POWER_DIS_D3_OFFSET 24 - u32 power_consumed; /* 0x90 */ - #define NVM_CFG1_GLOB_POWER_CONS_D0_MASK 0x000000FF - #define NVM_CFG1_GLOB_POWER_CONS_D0_OFFSET 0 - #define NVM_CFG1_GLOB_POWER_CONS_D1_MASK 0x0000FF00 - #define 
NVM_CFG1_GLOB_POWER_CONS_D1_OFFSET 8 - #define NVM_CFG1_GLOB_POWER_CONS_D2_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_POWER_CONS_D2_OFFSET 16 - #define NVM_CFG1_GLOB_POWER_CONS_D3_MASK 0xFF000000 - #define NVM_CFG1_GLOB_POWER_CONS_D3_OFFSET 24 - u32 efi_version; /* 0x94 */ - u32 multi_network_modes_capability; /* 0x98 */ - #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X10G 0x1 - #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X25G 0x2 - #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X25G 0x4 - #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X25G 0x8 - #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X40G 0x10 - #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X40G 0x20 - #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X50G 0x40 - #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G \ - 0x80 - #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X10G 0x100 - /* @DPDK */ - u32 reserved1[12]; /* 0x9C */ - u32 oem1_number[8]; /* 0xCC */ - u32 oem2_number[8]; /* 0xEC */ - u32 mps25_active_txfir_pre; /* 0x10C */ - #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_OFFSET 24 - u32 mps25_active_txfir_main; /* 0x110 */ - #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_OFFSET 24 - u32 mps25_active_txfir_post; /* 0x114 */ - #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_MASK 0x000000FF - #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_OFFSET 0 - #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_MASK 0x0000FF00 - #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_OFFSET 8 - #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_MASK 0x00FF0000 - #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_OFFSET 16 - #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_MASK 0xFF000000 - #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_OFFSET 24 - u32 features; /* 0x118 */ + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_MASK 0xFF000000 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_OFFSET 24 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_NA 0x0 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO0 0x1 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO1 0x2 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO2 0x3 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO3 0x4 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO4 0x5 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO5 0x6 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO6 0x7 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO7 0x8 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO8 0x9 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO9 0xA + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO10 0xB + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO11 0xC + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO12 0xD + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO13 0xE + #define 
NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO14 0xF + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO15 0x10 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO16 0x11 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO17 0x12 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO18 0x13 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO19 0x14 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO20 0x15 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO21 0x16 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO22 0x17 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO23 0x18 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO24 0x19 + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO25 0x1A + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO26 0x1B + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO27 0x1C + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO28 0x1D + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO29 0x1E + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO30 0x1F + #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO31 0x20 + u32 device_capabilities; /* 0x88 */ + #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1 + #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE 0x2 + #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ISCSI 0x4 + #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ROCE 0x8 + #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_IWARP 0x10 + #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_NVMETCP 0x20 + u32 power_dissipated; /* 0x8C */ + #define NVM_CFG1_GLOB_POWER_DIS_D0_MASK 0x000000FF + #define NVM_CFG1_GLOB_POWER_DIS_D0_OFFSET 0 + #define NVM_CFG1_GLOB_POWER_DIS_D1_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_POWER_DIS_D1_OFFSET 8 + #define NVM_CFG1_GLOB_POWER_DIS_D2_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_POWER_DIS_D2_OFFSET 16 + #define NVM_CFG1_GLOB_POWER_DIS_D3_MASK 0xFF000000 + #define NVM_CFG1_GLOB_POWER_DIS_D3_OFFSET 24 + u32 power_consumed; /* 0x90 */ + #define NVM_CFG1_GLOB_POWER_CONS_D0_MASK 0x000000FF + #define NVM_CFG1_GLOB_POWER_CONS_D0_OFFSET 0 + #define NVM_CFG1_GLOB_POWER_CONS_D1_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_POWER_CONS_D1_OFFSET 8 + #define NVM_CFG1_GLOB_POWER_CONS_D2_MASK 0x00FF0000 + #define NVM_CFG1_GLOB_POWER_CONS_D2_OFFSET 16 + #define NVM_CFG1_GLOB_POWER_CONS_D3_MASK 0xFF000000 + #define NVM_CFG1_GLOB_POWER_CONS_D3_OFFSET 24 + u32 efi_version; /* 0x94 */ + u32 multi_network_modes_capability; /* 0x98 */ + #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X10G 0x1 + #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X25G 0x2 + #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X25G 0x4 + #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X25G 0x8 + #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X40G 0x10 + #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X40G 0x20 + #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X50G 0x40 + #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G 0x80 + #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X10G 0x100 + u32 nvm_cfg_version; /* 0x9C */ + u32 nvm_cfg_new_option_seq; /* 0xA0 */ + u32 nvm_cfg_removed_option_seq; /* 0xA4 */ + u32 nvm_cfg_updated_value_seq; /* 0xA8 */ + u32 extended_serial_number[8]; /* 0xAC */ + u32 option_kit_pn[8]; /* 0xCC */ + u32 spare_pn[8]; /* 0xEC */ + u32 mps25_active_txfir_pre; /* 0x10C */ + #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_MASK 0x000000FF + #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_OFFSET 0 + #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_MASK 0x0000FF00 + #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_OFFSET 8 + #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_MASK 0x00FF0000 + #define 
NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_OFFSET 24
+ u32 mps25_active_txfir_main; /* 0x110 */
+ #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_OFFSET 24
+ u32 mps25_active_txfir_post; /* 0x114 */
+ #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_OFFSET 24
+ u32 features; /* 0x118 */
 /* Set the Aux Fan on temperature */
- #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_MASK 0x000000FF
- #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_OFFSET 0
+ #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_OFFSET 0
 /* Set NC-SI package ID */
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_OFFSET 8
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_NA 0x0
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO0 0x1
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO1 0x2
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO2 0x3
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO3 0x4
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO4 0x5
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO5 0x6
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO6 0x7
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO7 0x8
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO8 0x9
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO9 0xA
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO10 0xB
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO11 0xC
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO12 0xD
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO13 0xE
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO14 0xF
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO15 0x10
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO16 0x11
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO17 0x12
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO18 0x13
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO19 0x14
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO20 0x15
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO21 0x16
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO22 0x17
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO23 0x18
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO24 0x19
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO25 0x1A
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO26 0x1B
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO27 0x1C
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO28 0x1D
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO29 0x1E
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO30 0x1F
- #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO31 0x20
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_OFFSET 8
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_NA 0x0
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO0 0x1
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO1 0x2
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO2 0x3
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO3 0x4
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO4 0x5
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO5 0x6
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO6 0x7
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO7 0x8
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO8 0x9
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO9 0xA
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO10 0xB
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO11 0xC
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO12 0xD
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO13 0xE
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO14 0xF
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO15 0x10
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO16 0x11
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO17 0x12
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO18 0x13
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO19 0x14
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO20 0x15
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO21 0x16
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO22 0x17
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO23 0x18
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO24 0x19
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO25 0x1A
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO26 0x1B
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO27 0x1C
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO28 0x1D
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO29 0x1E
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO30 0x1F
+ #define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO31 0x20
 /* PMBUS Clock GPIO */
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_OFFSET 16
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_NA 0x0
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO0 0x1
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO1 0x2
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO2 0x3
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO3 0x4
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO4 0x5
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO5 0x6
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO6 0x7
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO7 0x8
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO8 0x9
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO9 0xA
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO10 0xB
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO11 0xC
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO12 0xD
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO13 0xE
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO14 0xF
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO15 0x10
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO16 0x11
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO17 0x12
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO18 0x13
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO19 0x14
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO20 0x15
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO21 0x16
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO22 0x17
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO23 0x18
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO24 0x19
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO25 0x1A
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO26 0x1B
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO27 0x1C
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO28 0x1D
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO29 0x1E
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO30 0x1F
- #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO31 0x20
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_OFFSET 16
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_NA 0x0
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO0 0x1
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO1 0x2
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO2 0x3
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO3 0x4
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO4 0x5
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO5 0x6
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO6 0x7
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO7 0x8
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO8 0x9
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO9 0xA
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO10 0xB
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO11 0xC
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO12 0xD
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO13 0xE
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO14 0xF
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO15 0x10
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO16 0x11
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO17 0x12
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO18 0x13
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO19 0x14
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO20 0x15
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO21 0x16
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO22 0x17
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO23 0x18
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO24 0x19
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO25 0x1A
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO26 0x1B
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO27 0x1C
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO28 0x1D
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO29 0x1E
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO30 0x1F
+ #define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO31 0x20
 /* PMBUS Data GPIO */
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_MASK 0xFF000000
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_OFFSET 24
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_NA 0x0
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO0 0x1
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO1 0x2
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO2 0x3
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO3 0x4
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO4 0x5
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO5 0x6
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO6 0x7
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO7 0x8
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO8 0x9
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO9 0xA
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO10 0xB
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO11 0xC
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO12 0xD
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO13 0xE
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO14 0xF
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO15 0x10
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO16 0x11
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO17 0x12
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO18 0x13
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO19 0x14
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO20 0x15
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO21 0x16
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO22 0x17
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO23 0x18
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO24 0x19
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO25 0x1A
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO26 0x1B
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO27 0x1C
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO28 0x1D
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO29 0x1E
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO30 0x1F
- #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO31 0x20
- u32 tx_rx_eq_25g_hlpc; /* 0x11C */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_MASK 0xFF000000
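/* Illustration only (not part of the patch): every option word in this
 * header is decoded the same way -- AND the u32 with a field's _MASK,
 * then shift right by the matching _OFFSET. A minimal sketch in C,
 * assuming a pointer `glob` to the glob section of nvm_cfg1; the helper
 * name nvm_cfg_read_field is hypothetical:
 *
 *	static u32 nvm_cfg_read_field(u32 word, u32 mask, u32 offset)
 *	{
 *		return (word & mask) >> offset;
 *	}
 *
 *	u32 lane0_main = nvm_cfg_read_field(glob->mps25_active_txfir_main,
 *			NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK,
 *			NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET);
 */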
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_OFFSET 24
- u32 tx_rx_eq_25g_llpc; /* 0x120 */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_OFFSET 24
- u32 tx_rx_eq_25g_ac; /* 0x124 */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_OFFSET 24
- u32 tx_rx_eq_10g_pc; /* 0x128 */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_OFFSET 24
- u32 tx_rx_eq_10g_ac; /* 0x12C */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_OFFSET 24
- u32 tx_rx_eq_1g; /* 0x130 */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_OFFSET 24
- u32 tx_rx_eq_25g_bt; /* 0x134 */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_OFFSET 24
- u32 tx_rx_eq_10g_bt; /* 0x138 */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_OFFSET 24
- u32 generic_cont4; /* 0x13C */
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_MASK 0x000000FF
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_OFFSET 0
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_NA 0x0
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO0 0x1
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO1 0x2
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2 0x3
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO3 0x4
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO4 0x5
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO5 0x6
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO6 0x7
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO7 0x8
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO8 0x9
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO9 0xA
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO10 0xB
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO11 0xC
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO12 0xD
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO13 0xE
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO14 0xF
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO15 0x10
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO16 0x11
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO17 0x12
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO18 0x13
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO19 0x14
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO20 0x15
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO21 0x16
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO22 0x17
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO23 0x18
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO24 0x19
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO25 0x1A
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO26 0x1B
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO27 0x1C
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO28 0x1D
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
- #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_OFFSET 24
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_NA 0x0
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO0 0x1
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO1 0x2
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO2 0x3
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO3 0x4
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO4 0x5
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO5 0x6
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO6 0x7
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO7 0x8
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO8 0x9
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO9 0xA
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO10 0xB
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO11 0xC
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO12 0xD
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO13 0xE
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO14 0xF
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO15 0x10
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO16 0x11
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO17 0x12
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO18 0x13
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO19 0x14
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO20 0x15
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO21 0x16
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO22 0x17
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO23 0x18
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO24 0x19
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO25 0x1A
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO26 0x1B
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO27 0x1C
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO28 0x1D
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO29 0x1E
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO30 0x1F
+ #define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO31 0x20
+ u32 tx_rx_eq_25g_hlpc; /* 0x11C */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_OFFSET 24
+ u32 tx_rx_eq_25g_llpc; /* 0x120 */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_OFFSET 24
+ u32 tx_rx_eq_25g_ac; /* 0x124 */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_OFFSET 24
+ u32 tx_rx_eq_10g_pc; /* 0x128 */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_OFFSET 24
+ u32 tx_rx_eq_10g_ac; /* 0x12C */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_OFFSET 24
+ u32 tx_rx_eq_1g; /* 0x130 */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_OFFSET 24
+ u32 tx_rx_eq_25g_bt; /* 0x134 */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_OFFSET 24
+ u32 tx_rx_eq_10g_bt; /* 0x138 */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_OFFSET 24
+ u32 generic_cont4; /* 0x13C */
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_OFFSET 0
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_NA 0x0
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO0 0x1
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO1 0x2
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2 0x3
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO3 0x4
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO4 0x5
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO5 0x6
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO6 0x7
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO7 0x8
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO8 0x9
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO9 0xA
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO10 0xB
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO11 0xC
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO12 0xD
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO13 0xE
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO14 0xF
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO15 0x10
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO16 0x11
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO17 0x12
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO18 0x13
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO19 0x14
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO20 0x15
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO21 0x16
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO22 0x17
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO23 0x18
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO24 0x19
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO25 0x1A
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO26 0x1B
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO27 0x1C
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO28 0x1D
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
+ #define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
 /* Select the number of allowed port link in aux power */
- #define NVM_CFG1_GLOB_NCSI_AUX_LINK_MASK 0x00000300
- #define NVM_CFG1_GLOB_NCSI_AUX_LINK_OFFSET 8
- #define NVM_CFG1_GLOB_NCSI_AUX_LINK_DEFAULT 0x0
- #define NVM_CFG1_GLOB_NCSI_AUX_LINK_1_PORT 0x1
- #define NVM_CFG1_GLOB_NCSI_AUX_LINK_2_PORTS 0x2
- #define NVM_CFG1_GLOB_NCSI_AUX_LINK_3_PORTS 0x3
+ #define NVM_CFG1_GLOB_NCSI_AUX_LINK_MASK 0x00000300
+ #define NVM_CFG1_GLOB_NCSI_AUX_LINK_OFFSET 8
+ #define NVM_CFG1_GLOB_NCSI_AUX_LINK_DEFAULT 0x0
+ #define NVM_CFG1_GLOB_NCSI_AUX_LINK_1_PORT 0x1
+ #define NVM_CFG1_GLOB_NCSI_AUX_LINK_2_PORTS 0x2
+ #define NVM_CFG1_GLOB_NCSI_AUX_LINK_3_PORTS 0x3
 /* Set Trace Filter Log Level */
- #define NVM_CFG1_GLOB_TRACE_LEVEL_MASK 0x00000C00
- #define NVM_CFG1_GLOB_TRACE_LEVEL_OFFSET 10
- #define NVM_CFG1_GLOB_TRACE_LEVEL_ALL 0x0
- #define NVM_CFG1_GLOB_TRACE_LEVEL_DEBUG 0x1
- #define NVM_CFG1_GLOB_TRACE_LEVEL_TRACE 0x2
- #define NVM_CFG1_GLOB_TRACE_LEVEL_ERROR 0x3
- /* For OCP2.0, MFW listens on SMBUS slave address 0x3e, and return
- * temperature reading
+ #define NVM_CFG1_GLOB_TRACE_LEVEL_MASK 0x00000C00
+ #define NVM_CFG1_GLOB_TRACE_LEVEL_OFFSET 10
+ #define NVM_CFG1_GLOB_TRACE_LEVEL_DEFAULT 0x0
+ #define NVM_CFG1_GLOB_TRACE_LEVEL_DEBUG 0x1
+ #define NVM_CFG1_GLOB_TRACE_LEVEL_TRACE 0x2
+ #define NVM_CFG1_GLOB_TRACE_LEVEL_ERROR 0x3
+ /* For OCP2.0, MFW listens on SMBUS slave address 0x3e, and return
+ * temperature reading
 */
- #define NVM_CFG1_GLOB_EMULATED_TMP421_MASK 0x00001000
- #define NVM_CFG1_GLOB_EMULATED_TMP421_OFFSET 12
- #define NVM_CFG1_GLOB_EMULATED_TMP421_DISABLED 0x0
- #define NVM_CFG1_GLOB_EMULATED_TMP421_ENABLED 0x1
- /* GPIO which triggers when ASIC temperature reaches nvm option 286
- * value
+ #define NVM_CFG1_GLOB_EMULATED_TMP421_MASK 0x00001000
+ #define NVM_CFG1_GLOB_EMULATED_TMP421_OFFSET 12
+ #define NVM_CFG1_GLOB_EMULATED_TMP421_DISABLED 0x0
+ #define NVM_CFG1_GLOB_EMULATED_TMP421_ENABLED 0x1
+ /* GPIO which triggers when ASIC temperature reaches nvm option 286
+ * value
 */
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_MASK 0x001FE000
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_OFFSET 13
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_NA 0x0
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO0 0x1
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO1 0x2
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO2 0x3
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO3 0x4
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO4 0x5
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO5 0x6
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO6 0x7
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO7 0x8
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO8 0x9
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO9 0xA
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO10 0xB
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO11 0xC
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO12 0xD
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO13 0xE
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO14 0xF
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO15 0x10
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO16 0x11
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO17 0x12
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO18 0x13
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO19 0x14
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO20 0x15
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO21 0x16
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO22 0x17
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO23 0x18
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO24 0x19
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO25 0x1A
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO26 0x1B
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO27 0x1C
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO28 0x1D
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO29 0x1E
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO30 0x1F
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO31 0x20
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_MASK 0x001FE000
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_OFFSET 13
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_NA 0x0
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO0 0x1
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO1 0x2
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO2 0x3
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO3 0x4
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO4 0x5
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO5 0x6
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO6 0x7
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO7 0x8
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO8 0x9
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO9 0xA
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO10 0xB
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO11 0xC
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO12 0xD
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO13 0xE
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO14 0xF
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO15 0x10
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO16 0x11
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO17 0x12
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO18 0x13
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO19 0x14
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO20 0x15
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO21 0x16
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO22 0x17
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO23 0x18
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO24 0x19
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO25 0x1A
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO26 0x1B
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO27 0x1C
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO28 0x1D
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO29 0x1E
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO30 0x1F
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO31 0x20
 /* Warning temperature threshold used with nvm option 286 */
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_MASK 0x1FE00000
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_OFFSET 21
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_MASK 0x1FE00000
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_OFFSET 21
 /* Disable PLDM protocol */
- #define NVM_CFG1_GLOB_DISABLE_PLDM_MASK 0x20000000
- #define NVM_CFG1_GLOB_DISABLE_PLDM_OFFSET 29
- #define NVM_CFG1_GLOB_DISABLE_PLDM_DISABLED 0x0
- #define NVM_CFG1_GLOB_DISABLE_PLDM_ENABLED 0x1
+ #define NVM_CFG1_GLOB_DISABLE_PLDM_MASK 0x20000000
+ #define NVM_CFG1_GLOB_DISABLE_PLDM_OFFSET 29
+ #define NVM_CFG1_GLOB_DISABLE_PLDM_DISABLED 0x0
+ #define NVM_CFG1_GLOB_DISABLE_PLDM_ENABLED 0x1
 /* Disable OCBB protocol */
- #define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_MASK 0x40000000
- #define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_OFFSET 30
- #define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_DISABLED 0x0
- #define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_ENABLED 0x1
- u32 preboot_debug_mode_std; /* 0x140 */
- u32 preboot_debug_mode_ext; /* 0x144 */
- u32 ext_phy_cfg1; /* 0x148 */
+ #define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_MASK 0x40000000
+ #define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_OFFSET 30
+ #define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_DISABLED 0x0
+ #define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_ENABLED 0x1
+ /* MCTP Virtual Link OEM3 */
+ #define NVM_CFG1_GLOB_MCTP_VIRTUAL_LINK_OEM3_MASK 0x80000000
+ #define NVM_CFG1_GLOB_MCTP_VIRTUAL_LINK_OEM3_OFFSET 31
+ #define NVM_CFG1_GLOB_MCTP_VIRTUAL_LINK_OEM3_DISABLED 0x0
+ #define NVM_CFG1_GLOB_MCTP_VIRTUAL_LINK_OEM3_ENABLED 0x1
+ u32 preboot_debug_mode_std; /* 0x140 */
+ u32 preboot_debug_mode_ext; /* 0x144 */
+ u32 ext_phy_cfg1; /* 0x148 */
 /* Ext PHY MDI pair swap value */
- #define NVM_CFG1_GLOB_RESERVED_244_MASK 0x0000FFFF
- #define NVM_CFG1_GLOB_RESERVED_244_OFFSET 0
+ #define NVM_CFG1_GLOB_RESERVED_244_MASK 0x0000FFFF
+ #define NVM_CFG1_GLOB_RESERVED_244_OFFSET 0
 /* Define for PGOOD signal Mapping for EXT PHY */
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_OFFSET 16
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_NA 0x0
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO0 0x1
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO1 0x2
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO2 0x3
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO3 0x4
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO4 0x5
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO5 0x6
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO6 0x7
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO7 0x8
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO8 0x9
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO9 0xA
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO10 0xB
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO11 0xC
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO12 0xD
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO13 0xE
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO14 0xF
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO15 0x10
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO16 0x11
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO17 0x12
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO18 0x13
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO19 0x14
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO20 0x15
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO21 0x16
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO22 0x17
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO23 0x18
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO24 0x19
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO25 0x1A
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO26 0x1B
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO27 0x1C
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO28 0x1D
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO29 0x1E
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO30 0x1F
- #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO31 0x20
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_OFFSET 16
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_NA 0x0
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO0 0x1
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO1 0x2
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO2 0x3
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO3 0x4
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO4 0x5
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO5 0x6
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO6 0x7
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO7 0x8
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO8 0x9
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO9 0xA
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO10 0xB
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO11 0xC
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO12 0xD
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO13 0xE
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO14 0xF
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO15 0x10
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO16 0x11
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO17 0x12
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO18 0x13
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO19 0x14
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO20 0x15
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO21 0x16
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO22 0x17
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO23 0x18
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO24 0x19
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO25 0x1A
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO26 0x1B
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO27 0x1C
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO28 0x1D
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO29 0x1E
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO30 0x1F
+ #define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO31 0x20
 /* GPIO which trigger when PERST asserted */
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_MASK 0xFF000000
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_OFFSET 24
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_NA 0x0
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO0 0x1
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO1 0x2
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO2 0x3
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO3 0x4
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO4 0x5
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO5 0x6
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO6 0x7
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO7 0x8
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO8 0x9
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO9 0xA
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO10 0xB
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO11 0xC
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO12 0xD
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO13 0xE
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO14 0xF
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO15 0x10
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO16 0x11
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO17 0x12
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO18 0x13
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO19 0x14
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO20 0x15
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO21 0x16
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO22 0x17
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO23 0x18
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO24 0x19
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO25 0x1A
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO26 0x1B
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO27 0x1C
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO28 0x1D
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO29 0x1E
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO30 0x1F
- #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO31 0x20
- u32 clocks; /* 0x14C */
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_OFFSET 24
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_NA 0x0
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO0 0x1
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO1 0x2
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO2 0x3
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO3 0x4
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO4 0x5
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO5 0x6
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO6 0x7
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO7 0x8
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO8 0x9
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO9 0xA
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO10 0xB
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO11 0xC
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO12 0xD
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO13 0xE
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO14 0xF
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO15 0x10
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO16 0x11
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO17 0x12
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO18 0x13
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO19 0x14
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO20 0x15
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO21 0x16
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO22 0x17
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO23 0x18
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO24 0x19
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO25 0x1A
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO26 0x1B
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO27 0x1C
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO28 0x1D
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO29 0x1E
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO30 0x1F
+ #define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO31 0x20
+ u32 clocks; /* 0x14C */
 /* Sets core clock frequency */
- #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MASK 0x000000FF
- #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_OFFSET 0
- #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_DEFAULT 0x0
- #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_375 0x1
- #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_350 0x2
- #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_325 0x3
- #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_300 0x4
- #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_280 0x5
+ #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_OFFSET 0
+ #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_DEFAULT 0x0
+ #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_375 0x1
+ #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_350 0x2
+ #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_325 0x3
+ #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_300 0x4
+ #define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_280 0x5
 /* Sets MAC clock frequency */
- #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_OFFSET 8
- #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_DEFAULT 0x0
- #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_782 0x1
- #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_516 0x2
+ #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_OFFSET 8
+ #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_DEFAULT 0x0
+ #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_782 0x1
+ #define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_516 0x2
 /* Sets storm clock frequency */
- #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_OFFSET 16
- #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_DEFAULT 0x0
- #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1200 0x1
- #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1000 0x2
- #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_900 0x3
- #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1100 0x4
- /* Non zero value will override PCIe AGC threshold to improve
- * receiver
+ #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_OFFSET 16
+ #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_DEFAULT 0x0
+ #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1200 0x1
+ #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1000 0x2
+ #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_900 0x3
+ #define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1100 0x4
+ /* Non zero value will override PCIe AGC threshold to improve
+ * receiver
 */
- #define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_MASK 0xFF000000
- #define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_OFFSET 24
- u32 pre2_generic_cont_1; /* 0x150 */
- #define NVM_CFG1_GLOB_50G_HLPC_PRE2_MASK 0x000000FF
- #define NVM_CFG1_GLOB_50G_HLPC_PRE2_OFFSET 0
- #define NVM_CFG1_GLOB_50G_MLPC_PRE2_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_50G_MLPC_PRE2_OFFSET 8
- #define NVM_CFG1_GLOB_50G_LLPC_PRE2_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_50G_LLPC_PRE2_OFFSET 16
- #define NVM_CFG1_GLOB_25G_HLPC_PRE2_MASK 0xFF000000
- #define NVM_CFG1_GLOB_25G_HLPC_PRE2_OFFSET 24
- u32 pre2_generic_cont_2; /* 0x154 */
- #define NVM_CFG1_GLOB_25G_LLPC_PRE2_MASK 0x000000FF
- #define NVM_CFG1_GLOB_25G_LLPC_PRE2_OFFSET 0
- #define NVM_CFG1_GLOB_25G_AC_PRE2_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_25G_AC_PRE2_OFFSET 8
- #define NVM_CFG1_GLOB_10G_PC_PRE2_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_10G_PC_PRE2_OFFSET 16
- #define NVM_CFG1_GLOB_PRE2_10G_AC_MASK 0xFF000000
- #define NVM_CFG1_GLOB_PRE2_10G_AC_OFFSET 24
- u32 pre2_generic_cont_3; /* 0x158 */
- #define NVM_CFG1_GLOB_1G_PRE2_MASK 0x000000FF
- #define NVM_CFG1_GLOB_1G_PRE2_OFFSET 0
- #define NVM_CFG1_GLOB_5G_BT_PRE2_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_5G_BT_PRE2_OFFSET 8
- #define NVM_CFG1_GLOB_10G_BT_PRE2_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_10G_BT_PRE2_OFFSET 16
- /* When temperature goes below (warning temperature - delta) warning
- * gpio is unset
+ #define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_OFFSET 24
+ u32 pre2_generic_cont_1; /* 0x150 */
+ #define NVM_CFG1_GLOB_50G_HLPC_PRE2_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_50G_HLPC_PRE2_OFFSET 0
+ #define NVM_CFG1_GLOB_50G_MLPC_PRE2_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_50G_MLPC_PRE2_OFFSET 8
+ #define NVM_CFG1_GLOB_50G_LLPC_PRE2_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_50G_LLPC_PRE2_OFFSET 16
+ #define NVM_CFG1_GLOB_25G_HLPC_PRE2_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_25G_HLPC_PRE2_OFFSET 24
+ u32 pre2_generic_cont_2; /* 0x154 */
+ #define NVM_CFG1_GLOB_25G_LLPC_PRE2_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_25G_LLPC_PRE2_OFFSET 0
+ #define NVM_CFG1_GLOB_25G_AC_PRE2_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_25G_AC_PRE2_OFFSET 8
+ #define NVM_CFG1_GLOB_10G_PC_PRE2_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_10G_PC_PRE2_OFFSET 16
+ #define NVM_CFG1_GLOB_PRE2_10G_AC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_PRE2_10G_AC_OFFSET 24
+ u32 pre2_generic_cont_3; /* 0x158 */
+ #define NVM_CFG1_GLOB_1G_PRE2_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_1G_PRE2_OFFSET 0
+ #define NVM_CFG1_GLOB_5G_BT_PRE2_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_5G_BT_PRE2_OFFSET 8
+ #define NVM_CFG1_GLOB_10G_BT_PRE2_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_10G_BT_PRE2_OFFSET 16
+ /* When temperature goes below (warning temperature - delta) warning
+ * gpio is unset
 */
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_MASK 0xFF000000
- #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_OFFSET 24
- u32 tx_rx_eq_50g_hlpc; /* 0x15C */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_OFFSET 24
- u32 tx_rx_eq_50g_mlpc; /* 0x160 */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_OFFSET 24
- u32 tx_rx_eq_50g_llpc; /* 0x164 */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_OFFSET 24
- u32 tx_rx_eq_50g_ac; /* 0x168 */
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_MASK 0x000000FF
- #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_OFFSET 0
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_MASK 0x0000FF00
- #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_OFFSET 8
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_OFFSET 16
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_MASK 0xFF000000
- #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_OFFSET 24
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_OFFSET 24
+ u32 tx_rx_eq_50g_hlpc; /* 0x15C */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_OFFSET 24
+ u32 tx_rx_eq_50g_mlpc; /* 0x160 */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_OFFSET 24
+ u32 tx_rx_eq_50g_llpc; /* 0x164 */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_OFFSET 24
+ u32 tx_rx_eq_50g_ac; /* 0x168 */
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_OFFSET 0
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_OFFSET 8
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_OFFSET 16
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_OFFSET 24
 /* Set Trace Filter Modules Log Bit Mask */
- u32 trace_modules; /* 0x16C */
- #define NVM_CFG1_GLOB_TRACE_MODULES_ERROR 0x1
- #define NVM_CFG1_GLOB_TRACE_MODULES_DBG 0x2
- #define NVM_CFG1_GLOB_TRACE_MODULES_DRV_HSI 0x4
- #define NVM_CFG1_GLOB_TRACE_MODULES_INTERRUPT 0x8
- #define NVM_CFG1_GLOB_TRACE_MODULES_VPD 0x10
- #define NVM_CFG1_GLOB_TRACE_MODULES_FLR 0x20
- #define NVM_CFG1_GLOB_TRACE_MODULES_INIT 0x40
- #define NVM_CFG1_GLOB_TRACE_MODULES_NVM 0x80
- #define NVM_CFG1_GLOB_TRACE_MODULES_PIM 0x100
- #define NVM_CFG1_GLOB_TRACE_MODULES_NET 0x200
- #define NVM_CFG1_GLOB_TRACE_MODULES_POWER 0x400
- #define NVM_CFG1_GLOB_TRACE_MODULES_UTILS 0x800
- #define NVM_CFG1_GLOB_TRACE_MODULES_RESOURCES 0x1000
- #define NVM_CFG1_GLOB_TRACE_MODULES_SCHEDULER 0x2000
- #define NVM_CFG1_GLOB_TRACE_MODULES_PHYMOD 0x4000
- #define NVM_CFG1_GLOB_TRACE_MODULES_EVENTS 0x8000
- #define NVM_CFG1_GLOB_TRACE_MODULES_PMM 0x10000
- #define NVM_CFG1_GLOB_TRACE_MODULES_DBG_DRV 0x20000
- #define NVM_CFG1_GLOB_TRACE_MODULES_ETH 0x40000
- #define NVM_CFG1_GLOB_TRACE_MODULES_SECURITY 0x80000
- #define NVM_CFG1_GLOB_TRACE_MODULES_PCIE 0x100000
- #define NVM_CFG1_GLOB_TRACE_MODULES_TRACE 0x200000
- #define NVM_CFG1_GLOB_TRACE_MODULES_MANAGEMENT 0x400000
- #define NVM_CFG1_GLOB_TRACE_MODULES_SIM 0x800000
- u32 pcie_class_code_fcoe; /* 0x170 */
+ u32 trace_modules; /* 0x16C */
+ #define NVM_CFG1_GLOB_TRACE_MODULES_ERROR 0x1
+ #define NVM_CFG1_GLOB_TRACE_MODULES_DBG 0x2
+ #define NVM_CFG1_GLOB_TRACE_MODULES_DRV_HSI 0x4
+ #define NVM_CFG1_GLOB_TRACE_MODULES_INTERRUPT 0x8
+ #define NVM_CFG1_GLOB_TRACE_MODULES_TEMPERATURE 0x10
+ #define NVM_CFG1_GLOB_TRACE_MODULES_FLR 0x20
+ #define NVM_CFG1_GLOB_TRACE_MODULES_INIT 0x40
+ #define NVM_CFG1_GLOB_TRACE_MODULES_NVM 0x80
+ #define NVM_CFG1_GLOB_TRACE_MODULES_PIM 0x100
+ #define NVM_CFG1_GLOB_TRACE_MODULES_NET 0x200
+ #define NVM_CFG1_GLOB_TRACE_MODULES_POWER 0x400
+ #define NVM_CFG1_GLOB_TRACE_MODULES_UTILS 0x800
+ #define NVM_CFG1_GLOB_TRACE_MODULES_RESOURCES 0x1000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_SCHEDULER 0x2000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_PHYMOD 0x4000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_EVENTS 0x8000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_PMM 0x10000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_DBG_DRV 0x20000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_ETH 0x40000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_SECURITY 0x80000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_PCIE 0x100000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_TRACE 0x200000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_MANAGEMENT 0x400000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_SIM 0x800000
+ #define NVM_CFG1_GLOB_TRACE_MODULES_BUF_MGR 0x1000000
+ u32 pcie_class_code_fcoe; /* 0x170 */
 /* Set PCIe FCoE Class Code */
- #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_MASK 0x00FFFFFF
- #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_OFFSET 0
- /* When temperature goes below (ALOM FAN ON AUX value - delta) ALOM
- * FAN ON AUX gpio is unset
+ #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_MASK 0x00FFFFFF
+ #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_OFFSET 0
+ /* When temperature goes below (ALOM FAN ON AUX value - delta) ALOM
+ * FAN ON AUX gpio is unset
 */
- #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_MASK 0xFF000000
- #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_OFFSET 24
- u32 pcie_class_code_iscsi; /* 0x174 */
+ #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_OFFSET 24
+ u32 pcie_class_code_iscsi; /* 0x174 */
 /* Set PCIe iSCSI Class Code */
- #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_MASK 0x00FFFFFF
- #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_OFFSET 0
- /* When temperature goes below (Dead Temp TH - delta)Thermal Event
- * gpio is unset
+ #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_MASK 0x00FFFFFF
+ #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_OFFSET 0
+ /* When temperature goes below (Dead Temp TH - delta)Thermal Event
+ * gpio is unset
 */
- #define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_MASK 0xFF000000
- #define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_OFFSET 24
- u32 no_provisioned_mac; /* 0x178 */
+ #define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_OFFSET 24
+ u32 no_provisioned_mac; /* 0x178 */
 /* Set number of provisioned MAC addresses */
- #define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_MASK 0x0000FFFF
- #define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_OFFSET 0
+ #define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_MASK 0x0000FFFF
+ #define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_OFFSET 0
 /* Set number of provisioned VF MAC addresses */
- #define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_MASK 0x00FF0000
- #define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_OFFSET 16
+ #define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_OFFSET 16
 /* Enable/Disable BMC MAC */
- #define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_MASK 0x01000000
- #define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_OFFSET 24
- #define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_DISABLED 0x0
- #define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_ENABLED 0x1
- u32 reserved[43]; /* 0x17C */
+ #define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_MASK 0x01000000
+ #define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_OFFSET 24
+ #define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_DISABLED 0x0
+ #define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_ENABLED 0x1
+ /* Select the number of ports NCSI reports */
+ #define NVM_CFG1_GLOB_NCSI_GET_CAPAS_CH_COUNT_MASK 0x06000000
+ #define NVM_CFG1_GLOB_NCSI_GET_CAPAS_CH_COUNT_OFFSET 25
+ u32 lowest_mbi_version; /* 0x17C */
+ #define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_0_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_0_OFFSET 0
+ #define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_1_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_1_OFFSET 8
+ #define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_2_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_2_OFFSET 16
+ u32 generic_cont5; /* 0x180 */
+ /* Activate options 305-306 */
+ #define NVM_CFG1_GLOB_OVERRIDE_TX_DRIVER_REGULATOR_MASK 0x00000001
+ #define NVM_CFG1_GLOB_OVERRIDE_TX_DRIVER_REGULATOR_OFFSET 0
+ #define NVM_CFG1_GLOB_OVERRIDE_TX_DRIVER_REGULATOR_DISABLED 0x0
+ #define NVM_CFG1_GLOB_OVERRIDE_TX_DRIVER_REGULATOR_ENABLED 0x1
+ /* A value of 0x00 gives 720mV, A setting of 0x05 gives 800mV; A
+ * setting of 0x15 give a swing of 1060mV
+ */
+ #define NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN1_2_MASK 0x0000003E
+ #define NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN1_2_OFFSET 1
+ /* A value of 0x00 gives 720mV, A setting of 0x05 gives 800mV; A
+ * setting of 0x15 give a swing of 1060mV
+ */
+ #define NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN3_MASK 0x000007C0
+ #define NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN3_OFFSET 6
+ /* QinQ Device Capability */
+ #define NVM_CFG1_GLOB_QINQ_SUPPORT_MASK 0x00000800
+ #define NVM_CFG1_GLOB_QINQ_SUPPORT_OFFSET 11
+ #define NVM_CFG1_GLOB_QINQ_SUPPORT_DISABLED 0x0
+ #define NVM_CFG1_GLOB_QINQ_SUPPORT_ENABLED 0x1
+ u32 pre2_generic_cont_4; /* 0x184 */
+ #define NVM_CFG1_GLOB_50G_AC_PRE2_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_50G_AC_PRE2_OFFSET 0
+ /* Set PCIe NVMeTCP Class Code */
+ #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_NVMETCP_MASK 0xFFFFFF00
+ #define NVM_CFG1_GLOB_PCIE_CLASS_CODE_NVMETCP_OFFSET 8
+ u32 reserved[40]; /* 0x188 */
 };
 struct nvm_cfg1_path {
- u32 reserved[1]; /* 0x0 */
+ u32 reserved[1]; /* 0x0 */
 };
 struct nvm_cfg1_port {
- u32 reserved__m_relocated_to_option_123; /* 0x0 */
- u32 reserved__m_relocated_to_option_124; /* 0x4 */
- u32 generic_cont0; /* 0x8 */
- #define NVM_CFG1_PORT_LED_MODE_MASK 0x000000FF
- #define NVM_CFG1_PORT_LED_MODE_OFFSET 0
- #define NVM_CFG1_PORT_LED_MODE_MAC1 0x0
- #define NVM_CFG1_PORT_LED_MODE_PHY1 0x1
- #define NVM_CFG1_PORT_LED_MODE_PHY2 0x2
- #define NVM_CFG1_PORT_LED_MODE_PHY3 0x3
- #define NVM_CFG1_PORT_LED_MODE_MAC2 0x4
- #define NVM_CFG1_PORT_LED_MODE_PHY4 0x5
- #define NVM_CFG1_PORT_LED_MODE_PHY5 0x6
- #define NVM_CFG1_PORT_LED_MODE_PHY6 0x7
- #define NVM_CFG1_PORT_LED_MODE_MAC3 0x8
- #define NVM_CFG1_PORT_LED_MODE_PHY7 0x9
- #define NVM_CFG1_PORT_LED_MODE_PHY8 0xA
- #define NVM_CFG1_PORT_LED_MODE_PHY9 0xB
- #define NVM_CFG1_PORT_LED_MODE_MAC4 0xC
- #define NVM_CFG1_PORT_LED_MODE_PHY10 0xD
- #define NVM_CFG1_PORT_LED_MODE_PHY11 0xE
- #define NVM_CFG1_PORT_LED_MODE_PHY12 0xF
- #define NVM_CFG1_PORT_LED_MODE_BREAKOUT 0x10
- #define NVM_CFG1_PORT_LED_MODE_OCP_3_0 0x11
- #define NVM_CFG1_PORT_LED_MODE_OCP_3_0_MAC2 0x12
- #define NVM_CFG1_PORT_LED_MODE_SW_DEF1 0x13
- #define NVM_CFG1_PORT_LED_MODE_SW_DEF1_MAC2 0x14
- #define NVM_CFG1_PORT_ROCE_PRIORITY_MASK 0x0000FF00
- #define NVM_CFG1_PORT_ROCE_PRIORITY_OFFSET 8
- #define NVM_CFG1_PORT_DCBX_MODE_MASK 0x000F0000
- #define NVM_CFG1_PORT_DCBX_MODE_OFFSET 16
- #define NVM_CFG1_PORT_DCBX_MODE_DISABLED 0x0
- #define NVM_CFG1_PORT_DCBX_MODE_IEEE 0x1
- #define NVM_CFG1_PORT_DCBX_MODE_CEE 0x2
- #define NVM_CFG1_PORT_DCBX_MODE_DYNAMIC 0x3
- #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_MASK 0x00F00000
- #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_OFFSET 20
- #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ETHERNET 0x1
- #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_FCOE 0x2
- #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ISCSI 0x4
+ u32 reserved__m_relocated_to_option_123; /* 0x0 */
+ u32 reserved__m_relocated_to_option_124; /* 0x4 */
+ u32 generic_cont0; /* 0x8 */
+ #define NVM_CFG1_PORT_LED_MODE_MASK 0x000000FF
+ #define NVM_CFG1_PORT_LED_MODE_OFFSET 0
+ #define NVM_CFG1_PORT_LED_MODE_MAC1 0x0
+ #define NVM_CFG1_PORT_LED_MODE_PHY1 0x1
+ #define NVM_CFG1_PORT_LED_MODE_PHY2 0x2
+ #define NVM_CFG1_PORT_LED_MODE_PHY3 0x3
+ #define NVM_CFG1_PORT_LED_MODE_MAC2 0x4
+ #define NVM_CFG1_PORT_LED_MODE_PHY4 0x5
+ #define NVM_CFG1_PORT_LED_MODE_PHY5 0x6
+ #define NVM_CFG1_PORT_LED_MODE_PHY6 0x7
+ #define NVM_CFG1_PORT_LED_MODE_MAC3 0x8
+ #define NVM_CFG1_PORT_LED_MODE_PHY7 0x9
+ #define NVM_CFG1_PORT_LED_MODE_PHY8 0xA
+ #define NVM_CFG1_PORT_LED_MODE_PHY9 0xB
+ #define NVM_CFG1_PORT_LED_MODE_MAC4 0xC
+ #define NVM_CFG1_PORT_LED_MODE_PHY10 0xD
+ #define NVM_CFG1_PORT_LED_MODE_PHY11 0xE
+ #define NVM_CFG1_PORT_LED_MODE_PHY12 0xF
+ #define NVM_CFG1_PORT_LED_MODE_BREAKOUT 0x10
+ #define NVM_CFG1_PORT_LED_MODE_OCP_3_0 0x11
+ #define NVM_CFG1_PORT_LED_MODE_OCP_3_0_MAC2 0x12
+ #define NVM_CFG1_PORT_LED_MODE_SW_DEF1 0x13
+ #define NVM_CFG1_PORT_LED_MODE_SW_DEF1_MAC2 0x14
+ #define NVM_CFG1_PORT_ROCE_PRIORITY_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_ROCE_PRIORITY_OFFSET 8
+ #define NVM_CFG1_PORT_DCBX_MODE_MASK 0x000F0000
+ #define NVM_CFG1_PORT_DCBX_MODE_OFFSET 16
+ #define NVM_CFG1_PORT_DCBX_MODE_DISABLED 0x0
+ #define NVM_CFG1_PORT_DCBX_MODE_IEEE 0x1
+ #define NVM_CFG1_PORT_DCBX_MODE_CEE 0x2
+ #define NVM_CFG1_PORT_DCBX_MODE_DYNAMIC 0x3
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_MASK 0x00F00000
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_OFFSET 20
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ETHERNET 0x1
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_FCOE 0x2
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ISCSI 0x4
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_NVMETCP 0x8
 /* GPIO for HW reset the PHY. In case it is the same for all ports,
 * need to set same value for all ports
 */
- #define NVM_CFG1_PORT_EXT_PHY_RESET_MASK 0xFF000000
- #define NVM_CFG1_PORT_EXT_PHY_RESET_OFFSET 24
- #define NVM_CFG1_PORT_EXT_PHY_RESET_NA 0x0
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO0 0x1
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO1 0x2
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO2 0x3
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO3 0x4
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO4 0x5
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO5 0x6
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO6 0x7
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO7 0x8
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO8 0x9
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO9 0xA
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO10 0xB
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO11 0xC
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO12 0xD
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO13 0xE
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO14 0xF
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO15 0x10
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO16 0x11
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO17 0x12
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO18 0x13
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO19 0x14
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO20 0x15
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO21 0x16
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO22 0x17
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO23 0x18
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO24 0x19
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO25 0x1A
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO26 0x1B
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO27 0x1C
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO28 0x1D
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO29 0x1E
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO30 0x1F
- #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO31 0x20
- u32 pcie_cfg; /* 0xC */
- #define NVM_CFG1_PORT_RESERVED15_MASK 0x00000007
- #define NVM_CFG1_PORT_RESERVED15_OFFSET 0
- u32 features; /* 0x10 */
- #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_MASK 0x00000001
- #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_OFFSET 0
- #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_DISABLED 0x0
- #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_ENABLED 0x1
- #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_MASK 0x00000002
- #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_OFFSET 1
- #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_DISABLED 0x0
- #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_ENABLED 0x1
- u32 speed_cap_mask; /* 0x14 */
- #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_MASK 0x0000FFFF
- #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
- #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G 0x1
- #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G 0x2
- #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_20G 0x4
- #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G 0x8
- #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G 0x10
- #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G 0x20
- #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
- #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000
- #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_OFFSET 16
- #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_1G 0x1
- #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_10G 0x2
- #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_20G 0x4
- #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_25G 0x8
- #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_40G 0x10
- #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_50G 0x20
- #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
- u32 link_settings; /* 0x18 */
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_MASK 0x0000000F
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET 0
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_AUTONEG 0x0
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_1G 0x1
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_10G 0x2
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_20G 0x3
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_25G 0x4
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_40G 0x5
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_50G 0x6
- #define NVM_CFG1_PORT_DRV_LINK_SPEED_BB_100G 0x7
- #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK 0x00000070
- #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET 4
- #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG 0x1
- #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX 0x2
- #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX 0x4
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_MASK 0x00000780
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_OFFSET 7
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_AUTONEG 0x0
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_1G 0x1
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_10G 0x2
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_20G 0x3
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_25G 0x4
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_40G 0x5
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_50G 0x6
- #define NVM_CFG1_PORT_MFW_LINK_SPEED_BB_100G 0x7
- #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_MASK 0x00003800
- #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_OFFSET 11
- #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_AUTONEG 0x1
- #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_RX 0x2
- #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_TX 0x4
- #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_MASK \
- 0x00004000
- #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_OFFSET 14
- #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_DISABLED \
- 0x0
- #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_ENABLED \
- 0x1
- #define NVM_CFG1_PORT_AN_25G_50G_OUI_MASK 0x00018000
- #define NVM_CFG1_PORT_AN_25G_50G_OUI_OFFSET 15
- #define NVM_CFG1_PORT_AN_25G_50G_OUI_CONSORTIUM 0x0
- #define NVM_CFG1_PORT_AN_25G_50G_OUI_BAM 0x1
- #define NVM_CFG1_PORT_FEC_FORCE_MODE_MASK 0x000E0000
- #define NVM_CFG1_PORT_FEC_FORCE_MODE_OFFSET 17
- #define NVM_CFG1_PORT_FEC_FORCE_MODE_NONE 0x0
- #define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE 0x1
- #define NVM_CFG1_PORT_FEC_FORCE_MODE_RS 0x2
- #define NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO 0x7
- #define NVM_CFG1_PORT_FEC_AN_MODE_MASK 0x00700000
- #define NVM_CFG1_PORT_FEC_AN_MODE_OFFSET 20
- #define NVM_CFG1_PORT_FEC_AN_MODE_NONE 0x0
- #define NVM_CFG1_PORT_FEC_AN_MODE_10G_FIRECODE 0x1
- #define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE 0x2
- #define NVM_CFG1_PORT_FEC_AN_MODE_10G_AND_25G_FIRECODE 0x3
- #define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS 0x4
- #define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS 0x5
- #define NVM_CFG1_PORT_FEC_AN_MODE_ALL 0x6
- #define NVM_CFG1_PORT_SMARTLINQ_MODE_MASK 0x00800000
- #define NVM_CFG1_PORT_SMARTLINQ_MODE_OFFSET 23
- #define NVM_CFG1_PORT_SMARTLINQ_MODE_DISABLED 0x0
- #define NVM_CFG1_PORT_SMARTLINQ_MODE_ENABLED 0x1
- #define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_MASK 0x01000000
- #define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_OFFSET 24
- #define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_DISABLED 0x0
- #define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_ENABLED 0x1
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_MASK 0xFF000000
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_OFFSET 24
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_NA 0x0
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO0 0x1
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO1 0x2
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO2 0x3
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO3 0x4
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO4 0x5
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO5 0x6
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO6 0x7
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO7 0x8
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO8 0x9
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO9 0xA
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO10 0xB
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO11 0xC
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO12 0xD
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO13 0xE
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO14 0xF
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO15 0x10
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO16 0x11
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO17 0x12
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO18 0x13
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO19 0x14
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO20 0x15
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO21 0x16
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO22 0x17
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO23 0x18
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO24 0x19
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO25 0x1A
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO26 0x1B
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO27 0x1C
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO28 0x1D
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO29 0x1E
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO30 0x1F
+ #define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO31 0x20
+ u32 pcie_cfg; /* 0xC */
+ #define NVM_CFG1_PORT_RESERVED15_MASK 0x00000007
+ #define NVM_CFG1_PORT_RESERVED15_OFFSET 0
+ #define NVM_CFG1_PORT_NVMETCP_DID_SUFFIX_MASK 0x000007F8
+ #define NVM_CFG1_PORT_NVMETCP_DID_SUFFIX_OFFSET 3
+ #define NVM_CFG1_PORT_NVMETCP_MODE_MASK 0x00007800
+ #define NVM_CFG1_PORT_NVMETCP_MODE_OFFSET 11
+ #define NVM_CFG1_PORT_NVMETCP_MODE_HOST_MODE_ONLY 0x0
+ #define NVM_CFG1_PORT_NVMETCP_MODE_TARGET_MODE_ONLY 0x1
+ #define NVM_CFG1_PORT_NVMETCP_MODE_DUAL_MODE 0x2
+ u32 features; /* 0x10 */
+ #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_MASK 0x00000001
+ #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_OFFSET 0
+ #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_DISABLED 0x0
+ #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_ENABLED 0x1
+ #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_MASK 0x00000002
+ #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_OFFSET 1
+ #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_DISABLED 0x0
+ #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_ENABLED 0x1
+ /* Enable Permit port shutdown feature */
+ #define NVM_CFG1_PORT_PERMIT_TOTAL_PORT_SHUTDOWN_MASK 0x00000004
+ #define NVM_CFG1_PORT_PERMIT_TOTAL_PORT_SHUTDOWN_OFFSET 2
+ #define NVM_CFG1_PORT_PERMIT_TOTAL_PORT_SHUTDOWN_DISABLED 0x0
+ #define NVM_CFG1_PORT_PERMIT_TOTAL_PORT_SHUTDOWN_ENABLED 0x1
+ u32 speed_cap_mask; /* 0x14 */
+ #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_MASK 0x0000FFFF
+ #define
NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_OFFSET 0 + #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40 + #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000 + #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_OFFSET 16 + #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40 + u32 link_settings; /* 0x18 */ + #define NVM_CFG1_PORT_DRV_LINK_SPEED_MASK 0x0000000F + #define NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET 0 + #define NVM_CFG1_PORT_DRV_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_DRV_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_DRV_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_DRV_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_DRV_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_DRV_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_DRV_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_DRV_LINK_SPEED_BB_100G 0x7 + #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK 0x00000070 + #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET 4 + #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG 0x1 + #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX 0x2 + #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX 0x4 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_MASK 0x00000780 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_OFFSET 7 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MFW_LINK_SPEED_BB_100G 0x7 + #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_MASK 0x00003800 + #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_OFFSET 11 + #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_AUTONEG 0x1 + #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_RX 0x2 + #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_TX 0x4 + #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_MASK 0x00004000 + #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_OFFSET 14 + #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_DISABLED 0x0 + #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_ENABLED 0x1 + #define NVM_CFG1_PORT_AN_25G_50G_OUI_MASK 0x00018000 + #define NVM_CFG1_PORT_AN_25G_50G_OUI_OFFSET 15 + #define NVM_CFG1_PORT_AN_25G_50G_OUI_CONSORTIUM 0x0 + #define NVM_CFG1_PORT_AN_25G_50G_OUI_BAM 0x1 + #define NVM_CFG1_PORT_FEC_FORCE_MODE_MASK 0x000E0000 + #define NVM_CFG1_PORT_FEC_FORCE_MODE_OFFSET 17 + #define NVM_CFG1_PORT_FEC_FORCE_MODE_NONE 0x0 + #define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE 0x1 + #define NVM_CFG1_PORT_FEC_FORCE_MODE_RS 0x2 + #define NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO 0x7 + #define NVM_CFG1_PORT_FEC_AN_MODE_MASK 0x00700000 + #define NVM_CFG1_PORT_FEC_AN_MODE_OFFSET 20 + #define NVM_CFG1_PORT_FEC_AN_MODE_NONE 0x0 + #define NVM_CFG1_PORT_FEC_AN_MODE_10G_FIRECODE 0x1 + #define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE 0x2 + #define 
NVM_CFG1_PORT_FEC_AN_MODE_10G_AND_25G_FIRECODE 0x3 + #define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS 0x4 + #define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS 0x5 + #define NVM_CFG1_PORT_FEC_AN_MODE_ALL 0x6 + #define NVM_CFG1_PORT_SMARTLINQ_MODE_MASK 0x00800000 + #define NVM_CFG1_PORT_SMARTLINQ_MODE_OFFSET 23 + #define NVM_CFG1_PORT_SMARTLINQ_MODE_DISABLED 0x0 + #define NVM_CFG1_PORT_SMARTLINQ_MODE_ENABLED 0x1 + #define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_MASK 0x01000000 + #define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_OFFSET 24 + #define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_DISABLED 0x0 + #define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_ENABLED 0x1 /* Enable/Disable RX PAM-4 precoding */ - #define NVM_CFG1_PORT_RX_PRECODE_MASK 0x02000000 - #define NVM_CFG1_PORT_RX_PRECODE_OFFSET 25 - #define NVM_CFG1_PORT_RX_PRECODE_DISABLED 0x0 - #define NVM_CFG1_PORT_RX_PRECODE_ENABLED 0x1 + #define NVM_CFG1_PORT_RX_PRECODE_MASK 0x02000000 + #define NVM_CFG1_PORT_RX_PRECODE_OFFSET 25 + #define NVM_CFG1_PORT_RX_PRECODE_DISABLED 0x0 + #define NVM_CFG1_PORT_RX_PRECODE_ENABLED 0x1 /* Enable/Disable TX PAM-4 precoding */ - #define NVM_CFG1_PORT_TX_PRECODE_MASK 0x04000000 - #define NVM_CFG1_PORT_TX_PRECODE_OFFSET 26 - #define NVM_CFG1_PORT_TX_PRECODE_DISABLED 0x0 - #define NVM_CFG1_PORT_TX_PRECODE_ENABLED 0x1 - u32 phy_cfg; /* 0x1C */ - #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF - #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0 - #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_HIGIG 0x1 - #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_SCRAMBLER 0x2 - #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_FIBER 0x4 - #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_CL72_AN 0x8 - #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_FEC_AN 0x10 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_MASK 0x00FF0000 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_OFFSET 16 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_BYPASS 0x0 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR 0x2 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR2 0x3 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR4 0x4 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XFI 0x8 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SFI 0x9 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_1000X 0xB - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SGMII 0xC - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLAUI 0x11 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLPPI 0x12 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CAUI 0x21 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CPPI 0x22 - #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_25GAUI 0x31 - #define NVM_CFG1_PORT_AN_MODE_MASK 0xFF000000 - #define NVM_CFG1_PORT_AN_MODE_OFFSET 24 - #define NVM_CFG1_PORT_AN_MODE_NONE 0x0 - #define NVM_CFG1_PORT_AN_MODE_CL73 0x1 - #define NVM_CFG1_PORT_AN_MODE_CL37 0x2 - #define NVM_CFG1_PORT_AN_MODE_CL73_BAM 0x3 - #define NVM_CFG1_PORT_AN_MODE_BB_CL37_BAM 0x4 - #define NVM_CFG1_PORT_AN_MODE_BB_HPAM 0x5 - #define NVM_CFG1_PORT_AN_MODE_BB_SGMII 0x6 - u32 mgmt_traffic; /* 0x20 */ - #define NVM_CFG1_PORT_RESERVED61_MASK 0x0000000F - #define NVM_CFG1_PORT_RESERVED61_OFFSET 0 - u32 ext_phy; /* 0x24 */ - #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_MASK 0x000000FF - #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_OFFSET 0 - #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_NONE 0x0 - #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM8485X 0x1 - #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM5422X 0x2 - #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_88X33X0 0x3 - #define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_MASK 0x0000FF00 - #define 
NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_OFFSET 8 + #define NVM_CFG1_PORT_TX_PRECODE_MASK 0x04000000 + #define NVM_CFG1_PORT_TX_PRECODE_OFFSET 26 + #define NVM_CFG1_PORT_TX_PRECODE_DISABLED 0x0 + #define NVM_CFG1_PORT_TX_PRECODE_ENABLED 0x1 + u32 phy_cfg; /* 0x1C */ + #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF + #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0 + #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_HIGIG 0x1 + #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_SCRAMBLER 0x2 + #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_FIBER 0x4 + #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_CL72_AN 0x8 + #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_FEC_AN 0x10 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_MASK 0x00FF0000 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_OFFSET 16 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_BYPASS 0x0 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR 0x2 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR2 0x3 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR4 0x4 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XFI 0x8 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SFI 0x9 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_1000X 0xB + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SGMII 0xC + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLAUI 0x11 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLPPI 0x12 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CAUI 0x21 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CPPI 0x22 + #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_25GAUI 0x31 + #define NVM_CFG1_PORT_AN_MODE_MASK 0xFF000000 + #define NVM_CFG1_PORT_AN_MODE_OFFSET 24 + #define NVM_CFG1_PORT_AN_MODE_NONE 0x0 + #define NVM_CFG1_PORT_AN_MODE_CL73 0x1 + #define NVM_CFG1_PORT_AN_MODE_CL37 0x2 + #define NVM_CFG1_PORT_AN_MODE_CL73_BAM 0x3 + #define NVM_CFG1_PORT_AN_MODE_BB_CL37_BAM 0x4 + #define NVM_CFG1_PORT_AN_MODE_BB_HPAM 0x5 + #define NVM_CFG1_PORT_AN_MODE_BB_SGMII 0x6 + u32 mgmt_traffic; /* 0x20 */ + #define NVM_CFG1_PORT_RESERVED61_MASK 0x0000000F + #define NVM_CFG1_PORT_RESERVED61_OFFSET 0 + u32 ext_phy; /* 0x24 */ + #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_MASK 0x000000FF + #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_OFFSET 0 + #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_NONE 0x0 + #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM8485X 0x1 + #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM5422X 0x2 + #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_88X33X0 0x3 + #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_AQR11X 0x4 + #define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_MASK 0x0000FF00 + #define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_OFFSET 8 /* EEE power saving mode */ - #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_MASK 0x00FF0000 - #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_OFFSET 16 - #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_DISABLED 0x0 - #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_BALANCED 0x1 - #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_AGGRESSIVE 0x2 - #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_LOW_LATENCY 0x3 - u32 mba_cfg1; /* 0x28 */ - #define NVM_CFG1_PORT_PREBOOT_OPROM_MASK 0x00000001 - #define NVM_CFG1_PORT_PREBOOT_OPROM_OFFSET 0 - #define NVM_CFG1_PORT_PREBOOT_OPROM_DISABLED 0x0 - #define NVM_CFG1_PORT_PREBOOT_OPROM_ENABLED 0x1 - #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_MASK 0x00000006 - #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_OFFSET 1 - #define NVM_CFG1_PORT_MBA_DELAY_TIME_MASK 0x00000078 - #define NVM_CFG1_PORT_MBA_DELAY_TIME_OFFSET 3 - #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_MASK 0x00000080 - #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_OFFSET 7 - #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_S 0x0 - #define 
NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_B 0x1 - #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_MASK 0x00000100 - #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_OFFSET 8 - #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_DISABLED 0x0 - #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_ENABLED 0x1 - #define NVM_CFG1_PORT_RESERVED5_MASK 0x0001FE00 - #define NVM_CFG1_PORT_RESERVED5_OFFSET 9 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_MASK 0x001E0000 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_OFFSET 17 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_BB_100G 0x7 - #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_MASK \ - 0x00E00000 - #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_OFFSET 21 - #define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_MASK \ - 0x01000000 - #define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_OFFSET 24 - #define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_DISABLED \ - 0x0 - #define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_ENABLED 0x1 - u32 mba_cfg2; /* 0x2C */ - #define NVM_CFG1_PORT_RESERVED65_MASK 0x0000FFFF - #define NVM_CFG1_PORT_RESERVED65_OFFSET 0 - #define NVM_CFG1_PORT_RESERVED66_MASK 0x00010000 - #define NVM_CFG1_PORT_RESERVED66_OFFSET 16 - #define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_MASK 0x01FE0000 - #define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_OFFSET 17 - u32 vf_cfg; /* 0x30 */ - #define NVM_CFG1_PORT_RESERVED8_MASK 0x0000FFFF - #define NVM_CFG1_PORT_RESERVED8_OFFSET 0 - #define NVM_CFG1_PORT_RESERVED6_MASK 0x000F0000 - #define NVM_CFG1_PORT_RESERVED6_OFFSET 16 - struct nvm_cfg_mac_address lldp_mac_address; /* 0x34 */ - u32 led_port_settings; /* 0x3C */ - #define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_MASK 0x000000FF - #define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_OFFSET 0 - #define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_MASK 0x0000FF00 - #define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_OFFSET 8 - #define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_MASK 0x00FF0000 - #define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_OFFSET 16 - #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_1G 0x1 - #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_10G 0x2 - #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_25G 0x4 - #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_25G 0x8 - #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_40G 0x8 - #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_40G 0x10 - #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_50G 0x10 - #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_50G 0x20 - #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_100G 0x40 + #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_MASK 0x00FF0000 + #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_OFFSET 16 + #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_DISABLED 0x0 + #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_BALANCED 0x1 + #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_AGGRESSIVE 0x2 + #define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_LOW_LATENCY 0x3 + /* Ext PHY AVS Enable */ + #define NVM_CFG1_PORT_EXT_PHY_AVS_ENABLE_MASK 0x01000000 + #define NVM_CFG1_PORT_EXT_PHY_AVS_ENABLE_OFFSET 24 + #define NVM_CFG1_PORT_EXT_PHY_AVS_ENABLE_DISABLED 0x0 + #define NVM_CFG1_PORT_EXT_PHY_AVS_ENABLE_ENABLED 0x1 + u32 mba_cfg1; /* 0x28 */ + #define NVM_CFG1_PORT_PREBOOT_OPROM_MASK 0x00000001 + #define NVM_CFG1_PORT_PREBOOT_OPROM_OFFSET 0 + #define NVM_CFG1_PORT_PREBOOT_OPROM_DISABLED 
0x0 + #define NVM_CFG1_PORT_PREBOOT_OPROM_ENABLED 0x1 + #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_MASK 0x00000006 + #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_OFFSET 1 + #define NVM_CFG1_PORT_MBA_DELAY_TIME_MASK 0x00000078 + #define NVM_CFG1_PORT_MBA_DELAY_TIME_OFFSET 3 + #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_MASK 0x00000080 + #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_OFFSET 7 + #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_S 0x0 + #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_B 0x1 + #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_MASK 0x00000100 + #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_OFFSET 8 + #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_DISABLED 0x0 + #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_ENABLED 0x1 + #define NVM_CFG1_PORT_RESERVED5_MASK 0x0001FE00 + #define NVM_CFG1_PORT_RESERVED5_OFFSET 9 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_MASK 0x001E0000 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_OFFSET 17 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_BB_100G 0x7 + #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_MASK 0x00E00000 + #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_OFFSET 21 + #define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_MASK 0x01000000 + #define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_OFFSET 24 + #define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_DISABLED 0x0 + #define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_ENABLED 0x1 + u32 mba_cfg2; /* 0x2C */ + #define NVM_CFG1_PORT_RESERVED65_MASK 0x0000FFFF + #define NVM_CFG1_PORT_RESERVED65_OFFSET 0 + #define NVM_CFG1_PORT_RESERVED66_MASK 0x00010000 + #define NVM_CFG1_PORT_RESERVED66_OFFSET 16 + #define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_MASK 0x01FE0000 + #define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_OFFSET 17 + u32 vf_cfg; /* 0x30 */ + #define NVM_CFG1_PORT_RESERVED8_MASK 0x0000FFFF + #define NVM_CFG1_PORT_RESERVED8_OFFSET 0 + #define NVM_CFG1_PORT_RESERVED6_MASK 0x000F0000 + #define NVM_CFG1_PORT_RESERVED6_OFFSET 16 + struct nvm_cfg_mac_address lldp_mac_address; /* 0x34 */ + u32 led_port_settings; /* 0x3C */ + #define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_MASK 0x000000FF + #define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_OFFSET 0 + #define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_MASK 0x0000FF00 + #define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_OFFSET 8 + #define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_MASK 0x00FF0000 + #define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_OFFSET 16 + #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_1G 0x1 + #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_10G 0x2 + #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_25G_AHP_25G 0x4 + #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_25G_AH_40G_AHP_40G 0x8 + #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_40G_AH_50G_AHP_50G 0x10 + #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_50G_AHP_100G 0x20 + #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_100G 0x40 /* UID LED Blink Mode Settings */ - #define NVM_CFG1_PORT_UID_LED_MODE_MASK_MASK 0x0F000000 - #define NVM_CFG1_PORT_UID_LED_MODE_MASK_OFFSET 24 - #define NVM_CFG1_PORT_UID_LED_MODE_MASK_ACTIVITY_LED 0x1 - #define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED0 0x2 - #define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED1 0x4 - #define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED2 0x8 - u32 transceiver_00; /* 0x40 
*/ + #define NVM_CFG1_PORT_UID_LED_MODE_MASK_MASK 0x0F000000 + #define NVM_CFG1_PORT_UID_LED_MODE_MASK_OFFSET 24 + #define NVM_CFG1_PORT_UID_LED_MODE_MASK_ACTIVITY_LED 0x1 + #define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED0 0x2 + #define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED1 0x4 + #define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED2 0x8 + /* Activity LED Blink Rate in Hz */ + #define NVM_CFG1_PORT_ACTIVITY_LED_BLINK_RATE_HZ_MASK 0xF0000000 + #define NVM_CFG1_PORT_ACTIVITY_LED_BLINK_RATE_HZ_OFFSET 28 + u32 transceiver_00; /* 0x40 */ /* Define for mapping of transceiver signal module absent */ - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_MASK 0x000000FF - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_OFFSET 0 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_NA 0x0 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO0 0x1 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO1 0x2 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO2 0x3 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO3 0x4 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO4 0x5 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO5 0x6 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO6 0x7 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO7 0x8 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO8 0x9 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO9 0xA - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO10 0xB - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO11 0xC - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO12 0xD - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO13 0xE - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO14 0xF - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO15 0x10 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO16 0x11 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO17 0x12 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO18 0x13 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO19 0x14 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO20 0x15 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO21 0x16 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO22 0x17 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO23 0x18 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO24 0x19 - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO25 0x1A - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO26 0x1B - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO27 0x1C - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO28 0x1D - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO29 0x1E - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO30 0x1F - #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO31 0x20 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_MASK 0x000000FF + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_OFFSET 0 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_NA 0x0 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO0 0x1 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO1 0x2 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO2 0x3 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO3 0x4 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO4 0x5 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO5 0x6 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO6 0x7 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO7 0x8 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO8 0x9 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO9 0xA + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO10 0xB + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO11 0xC + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO12 0xD + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO13 0xE + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO14 0xF + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO15 0x10 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO16 0x11 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO17 0x12 + #define 
NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO18 0x13 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO19 0x14 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO20 0x15 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO21 0x16 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO22 0x17 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO23 0x18 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO24 0x19 + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO25 0x1A + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO26 0x1B + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO27 0x1C + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO28 0x1D + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO29 0x1E + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO30 0x1F + #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO31 0x20 /* Define the GPIO mux settings to switch i2c mux to this port */ - #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_MASK 0x00000F00 - #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_OFFSET 8 - #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_MASK 0x0000F000 - #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_OFFSET 12 + #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_MASK 0x00000F00 + #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_OFFSET 8 + #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_MASK 0x0000F000 + #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_OFFSET 12 /* Option to override SmartAN FEC requirements */ - #define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_MASK 0x00010000 - #define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_OFFSET 16 - #define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_DISABLED 0x0 - #define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_ENABLED 0x1 - u32 device_ids; /* 0x44 */ - #define NVM_CFG1_PORT_ETH_DID_SUFFIX_MASK 0x000000FF - #define NVM_CFG1_PORT_ETH_DID_SUFFIX_OFFSET 0 - #define NVM_CFG1_PORT_FCOE_DID_SUFFIX_MASK 0x0000FF00 - #define NVM_CFG1_PORT_FCOE_DID_SUFFIX_OFFSET 8 - #define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_MASK 0x00FF0000 - #define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_OFFSET 16 - #define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_MASK 0xFF000000 - #define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_OFFSET 24 - u32 board_cfg; /* 0x48 */ - /* This field defines the board technology + #define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_MASK 0x00010000 + #define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_OFFSET 16 + #define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_DISABLED 0x0 + #define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_ENABLED 0x1 + u32 device_ids; /* 0x44 */ + #define NVM_CFG1_PORT_ETH_DID_SUFFIX_MASK 0x000000FF + #define NVM_CFG1_PORT_ETH_DID_SUFFIX_OFFSET 0 + #define NVM_CFG1_PORT_FCOE_DID_SUFFIX_MASK 0x0000FF00 + #define NVM_CFG1_PORT_FCOE_DID_SUFFIX_OFFSET 8 + #define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_MASK 0x00FF0000 + #define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_OFFSET 16 + #define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_MASK 0xFF000000 + #define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_OFFSET 24 + u32 board_cfg; /* 0x48 */ + /* This field defines the board technology * (backpane,transceiver,external PHY) */ - #define NVM_CFG1_PORT_PORT_TYPE_MASK 0x000000FF - #define NVM_CFG1_PORT_PORT_TYPE_OFFSET 0 - #define NVM_CFG1_PORT_PORT_TYPE_UNDEFINED 0x0 - #define NVM_CFG1_PORT_PORT_TYPE_MODULE 0x1 - #define NVM_CFG1_PORT_PORT_TYPE_BACKPLANE 0x2 - #define NVM_CFG1_PORT_PORT_TYPE_EXT_PHY 0x3 - #define NVM_CFG1_PORT_PORT_TYPE_MODULE_SLAVE 0x4 + #define NVM_CFG1_PORT_PORT_TYPE_MASK 0x000000FF + #define NVM_CFG1_PORT_PORT_TYPE_OFFSET 0 + #define NVM_CFG1_PORT_PORT_TYPE_UNDEFINED 0x0 + #define NVM_CFG1_PORT_PORT_TYPE_MODULE 0x1 + #define NVM_CFG1_PORT_PORT_TYPE_BACKPLANE 0x2 + #define NVM_CFG1_PORT_PORT_TYPE_EXT_PHY 0x3 + #define NVM_CFG1_PORT_PORT_TYPE_MODULE_SLAVE 0x4 /* This field defines 
the GPIO mapped to tx_disable signal in SFP */ - #define NVM_CFG1_PORT_TX_DISABLE_MASK 0x0000FF00 - #define NVM_CFG1_PORT_TX_DISABLE_OFFSET 8 - #define NVM_CFG1_PORT_TX_DISABLE_NA 0x0 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO0 0x1 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO1 0x2 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO2 0x3 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO3 0x4 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO4 0x5 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO5 0x6 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO6 0x7 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO7 0x8 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO8 0x9 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO9 0xA - #define NVM_CFG1_PORT_TX_DISABLE_GPIO10 0xB - #define NVM_CFG1_PORT_TX_DISABLE_GPIO11 0xC - #define NVM_CFG1_PORT_TX_DISABLE_GPIO12 0xD - #define NVM_CFG1_PORT_TX_DISABLE_GPIO13 0xE - #define NVM_CFG1_PORT_TX_DISABLE_GPIO14 0xF - #define NVM_CFG1_PORT_TX_DISABLE_GPIO15 0x10 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO16 0x11 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO17 0x12 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO18 0x13 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO19 0x14 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO20 0x15 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO21 0x16 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO22 0x17 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO23 0x18 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO24 0x19 - #define NVM_CFG1_PORT_TX_DISABLE_GPIO25 0x1A - #define NVM_CFG1_PORT_TX_DISABLE_GPIO26 0x1B - #define NVM_CFG1_PORT_TX_DISABLE_GPIO27 0x1C - #define NVM_CFG1_PORT_TX_DISABLE_GPIO28 0x1D - #define NVM_CFG1_PORT_TX_DISABLE_GPIO29 0x1E - #define NVM_CFG1_PORT_TX_DISABLE_GPIO30 0x1F - #define NVM_CFG1_PORT_TX_DISABLE_GPIO31 0x20 - u32 mnm_10g_cap; /* 0x4C */ - #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_MASK \ - 0x0000FFFF - #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0 - #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_10G 0x2 - #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_40G 0x10 - #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_50G 0x20 - #define \ - NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40 - #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_MASK \ - 0xFFFF0000 - #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_OFFSET \ - 16 - #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_10G 0x2 - #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_40G 0x10 - #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_50G 0x20 - #define \ - NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40 - u32 mnm_10g_ctrl; /* 0x50 */ - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_MASK 0x0000000F - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_OFFSET 0 - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_BB_100G 0x7 - #define 
NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_MASK 0x000000F0 - #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_OFFSET 4 - #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_BB_100G 0x7 - /* This field defines the board technology + #define NVM_CFG1_PORT_TX_DISABLE_MASK 0x0000FF00 + #define NVM_CFG1_PORT_TX_DISABLE_OFFSET 8 + #define NVM_CFG1_PORT_TX_DISABLE_NA 0x0 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO0 0x1 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO1 0x2 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO2 0x3 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO3 0x4 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO4 0x5 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO5 0x6 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO6 0x7 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO7 0x8 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO8 0x9 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO9 0xA + #define NVM_CFG1_PORT_TX_DISABLE_GPIO10 0xB + #define NVM_CFG1_PORT_TX_DISABLE_GPIO11 0xC + #define NVM_CFG1_PORT_TX_DISABLE_GPIO12 0xD + #define NVM_CFG1_PORT_TX_DISABLE_GPIO13 0xE + #define NVM_CFG1_PORT_TX_DISABLE_GPIO14 0xF + #define NVM_CFG1_PORT_TX_DISABLE_GPIO15 0x10 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO16 0x11 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO17 0x12 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO18 0x13 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO19 0x14 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO20 0x15 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO21 0x16 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO22 0x17 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO23 0x18 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO24 0x19 + #define NVM_CFG1_PORT_TX_DISABLE_GPIO25 0x1A + #define NVM_CFG1_PORT_TX_DISABLE_GPIO26 0x1B + #define NVM_CFG1_PORT_TX_DISABLE_GPIO27 0x1C + #define NVM_CFG1_PORT_TX_DISABLE_GPIO28 0x1D + #define NVM_CFG1_PORT_TX_DISABLE_GPIO29 0x1E + #define NVM_CFG1_PORT_TX_DISABLE_GPIO30 0x1F + #define NVM_CFG1_PORT_TX_DISABLE_GPIO31 0x20 + u32 mnm_10g_cap; /* 0x4C */ + #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_MASK 0x0000FFFF + #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0 + #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40 + #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000 + #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_OFFSET 16 + #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40 + u32 mnm_10g_ctrl; /* 0x50 */ + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_MASK 
0x0000000F + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_OFFSET 0 + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_BB_100G 0x7 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_MASK 0x000000F0 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_OFFSET 4 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_BB_100G 0x7 + /* This field defines the board technology * (backpane,transceiver,external PHY) - */ - #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MASK 0x0000FF00 - #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_OFFSET 8 - #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_UNDEFINED 0x0 - #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE 0x1 - #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_BACKPLANE 0x2 - #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_EXT_PHY 0x3 - #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE_SLAVE 0x4 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_MASK \ - 0x00FF0000 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_OFFSET 16 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_BYPASS 0x0 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR 0x2 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR2 0x3 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR4 0x4 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XFI 0x8 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SFI 0x9 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_1000X 0xB - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SGMII 0xC - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLAUI 0x11 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLPPI 0x12 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CAUI 0x21 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CPPI 0x22 - #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_25GAUI 0x31 - #define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_MASK 0xFF000000 - #define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_OFFSET 24 - u32 mnm_10g_misc; /* 0x54 */ - #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_MASK 0x00000007 - #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_OFFSET 0 - #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_NONE 0x0 - #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_FIRECODE 0x1 - #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_RS 0x2 - #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_AUTO 0x7 - u32 mnm_25g_cap; /* 0x58 */ - #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_MASK \ - 0x0000FFFF - #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0 - #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_10G 0x2 - #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_40G 0x10 - #define 
NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_50G 0x20 - #define \ - NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40 - #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_MASK \ - 0xFFFF0000 - #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_OFFSET \ - 16 - #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_10G 0x2 - #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_40G 0x10 - #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_50G 0x20 - #define \ - NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40 - u32 mnm_25g_ctrl; /* 0x5C */ - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_MASK 0x0000000F - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_OFFSET 0 - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_BB_100G 0x7 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_MASK 0x000000F0 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_OFFSET 4 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_BB_100G 0x7 - /* This field defines the board technology + */ + #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MASK 0x0000FF00 + #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_OFFSET 8 + #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_UNDEFINED 0x0 + #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE 0x1 + #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_BACKPLANE 0x2 + #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_EXT_PHY 0x3 + #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE_SLAVE 0x4 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_MASK 0x00FF0000 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_OFFSET 16 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_BYPASS 0x0 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR 0x2 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR2 0x3 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR4 0x4 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XFI 0x8 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SFI 0x9 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_1000X 0xB + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SGMII 0xC + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLAUI 0x11 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLPPI 0x12 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CAUI 0x21 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CPPI 0x22 + #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_25GAUI 0x31 + #define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_MASK 0xFF000000 + #define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_OFFSET 24 + u32 mnm_10g_misc; /* 0x54 */ + #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_MASK 0x00000007 + #define 
NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_OFFSET 0 + #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_NONE 0x0 + #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_FIRECODE 0x1 + #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_RS 0x2 + #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_AUTO 0x7 + u32 mnm_25g_cap; /* 0x58 */ + #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_MASK 0x0000FFFF + #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0 + #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40 + #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000 + #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_OFFSET 16 + #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40 + u32 mnm_25g_ctrl; /* 0x5C */ + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_MASK 0x0000000F + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_OFFSET 0 + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_BB_100G 0x7 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_MASK 0x000000F0 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_OFFSET 4 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_BB_100G 0x7 + /* This field defines the board technology * (backpane,transceiver,external PHY) - */ - #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MASK 0x0000FF00 - #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_OFFSET 8 - #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_UNDEFINED 0x0 - #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE 0x1 - #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_BACKPLANE 0x2 - #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_EXT_PHY 0x3 - #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE_SLAVE 0x4 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_MASK \ - 0x00FF0000 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_OFFSET 16 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_BYPASS 0x0 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR 0x2 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR2 0x3 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR4 0x4 - #define 
NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XFI 0x8 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SFI 0x9 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_1000X 0xB - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SGMII 0xC - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLAUI 0x11 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLPPI 0x12 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CAUI 0x21 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CPPI 0x22 - #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_25GAUI 0x31 - #define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_MASK 0xFF000000 - #define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_OFFSET 24 - u32 mnm_25g_misc; /* 0x60 */ - #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_MASK 0x00000007 - #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_OFFSET 0 - #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_NONE 0x0 - #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_FIRECODE 0x1 - #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_RS 0x2 - #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_AUTO 0x7 - u32 mnm_40g_cap; /* 0x64 */ - #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_MASK \ - 0x0000FFFF - #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0 - #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_10G 0x2 - #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_40G 0x10 - #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_50G 0x20 - #define \ - NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40 - #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_MASK \ - 0xFFFF0000 - #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_OFFSET \ - 16 - #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_10G 0x2 - #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_40G 0x10 - #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_50G 0x20 - #define \ - NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40 - u32 mnm_40g_ctrl; /* 0x68 */ - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_MASK 0x0000000F - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_OFFSET 0 - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_BB_100G 0x7 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_MASK 0x000000F0 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_OFFSET 4 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_BB_100G 0x7 - /* This field defines the board technology + */ + #define 
NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MASK 0x0000FF00 + #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_OFFSET 8 + #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_UNDEFINED 0x0 + #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE 0x1 + #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_BACKPLANE 0x2 + #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_EXT_PHY 0x3 + #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE_SLAVE 0x4 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_MASK 0x00FF0000 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_OFFSET 16 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_BYPASS 0x0 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR 0x2 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR2 0x3 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR4 0x4 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XFI 0x8 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SFI 0x9 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_1000X 0xB + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SGMII 0xC + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLAUI 0x11 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLPPI 0x12 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CAUI 0x21 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CPPI 0x22 + #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_25GAUI 0x31 + #define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_MASK 0xFF000000 + #define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_OFFSET 24 + u32 mnm_25g_misc; /* 0x60 */ + #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_MASK 0x00000007 + #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_OFFSET 0 + #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_NONE 0x0 + #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_FIRECODE 0x1 + #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_RS 0x2 + #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_AUTO 0x7 + u32 mnm_40g_cap; /* 0x64 */ + #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_MASK 0x0000FFFF + #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0 + #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40 + #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000 + #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_OFFSET 16 + #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40 + u32 mnm_40g_ctrl; /* 0x68 */ + #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_MASK 0x0000000F + #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_OFFSET 0 + #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_40G 0x5 + 
#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_BB_100G 0x7 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_MASK 0x000000F0 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_OFFSET 4 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_BB_100G 0x7 + /* This field defines the board technology * (backpane,transceiver,external PHY) - */ - #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MASK 0x0000FF00 - #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_OFFSET 8 - #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_UNDEFINED 0x0 - #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE 0x1 - #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_BACKPLANE 0x2 - #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_EXT_PHY 0x3 - #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE_SLAVE 0x4 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_MASK \ - 0x00FF0000 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_OFFSET 16 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_BYPASS 0x0 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR 0x2 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR2 0x3 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR4 0x4 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XFI 0x8 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SFI 0x9 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_1000X 0xB - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SGMII 0xC - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLAUI 0x11 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLPPI 0x12 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CAUI 0x21 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CPPI 0x22 - #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_25GAUI 0x31 - #define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_MASK 0xFF000000 - #define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_OFFSET 24 - u32 mnm_40g_misc; /* 0x6C */ - #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_MASK 0x00000007 - #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_OFFSET 0 - #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_NONE 0x0 - #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_FIRECODE 0x1 - #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_RS 0x2 - #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_AUTO 0x7 - u32 mnm_50g_cap; /* 0x70 */ - #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_MASK \ - 0x0000FFFF - #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0 - #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_10G 0x2 - #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_40G 0x10 - #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_50G 0x20 - #define \ - NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_BB_100G \ - 0x40 - #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_MASK \ - 0xFFFF0000 - #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_OFFSET \ - 16 - #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_10G 0x2 - #define 
NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_40G 0x10 - #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_50G 0x20 - #define \ - NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_BB_100G \ - 0x40 - u32 mnm_50g_ctrl; /* 0x74 */ - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_MASK 0x0000000F - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_OFFSET 0 - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_BB_100G 0x7 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_MASK 0x000000F0 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_OFFSET 4 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_BB_100G 0x7 - /* This field defines the board technology + */ + #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MASK 0x0000FF00 + #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_OFFSET 8 + #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_UNDEFINED 0x0 + #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE 0x1 + #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_BACKPLANE 0x2 + #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_EXT_PHY 0x3 + #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE_SLAVE 0x4 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_MASK 0x00FF0000 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_OFFSET 16 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_BYPASS 0x0 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR 0x2 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR2 0x3 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR4 0x4 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XFI 0x8 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SFI 0x9 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_1000X 0xB + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SGMII 0xC + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLAUI 0x11 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLPPI 0x12 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CAUI 0x21 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CPPI 0x22 + #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_25GAUI 0x31 + #define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_MASK 0xFF000000 + #define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_OFFSET 24 + u32 mnm_40g_misc; /* 0x6C */ + #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_MASK 0x00000007 + #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_OFFSET 0 + #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_NONE 0x0 + #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_FIRECODE 0x1 + #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_RS 0x2 + #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_AUTO 0x7 + u32 mnm_50g_cap; /* 0x70 */ + #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_MASK 0x0000FFFF + #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0 + 
#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40 + #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000 + #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_OFFSET 16 + #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40 + u32 mnm_50g_ctrl; /* 0x74 */ + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_MASK 0x0000000F + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_OFFSET 0 + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_BB_100G 0x7 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_MASK 0x000000F0 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_OFFSET 4 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_BB_100G 0x7 + /* This field defines the board technology * (backpane,transceiver,external PHY) - */ - #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MASK 0x0000FF00 - #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_OFFSET 8 - #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_UNDEFINED 0x0 - #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE 0x1 - #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_BACKPLANE 0x2 - #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_EXT_PHY 0x3 - #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE_SLAVE 0x4 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_MASK \ - 0x00FF0000 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_OFFSET 16 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_BYPASS 0x0 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR 0x2 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR2 0x3 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR4 0x4 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XFI 0x8 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SFI 0x9 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_1000X 0xB - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SGMII 0xC - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLAUI 0x11 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLPPI 0x12 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CAUI 0x21 - #define 
NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CPPI 0x22 - #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_25GAUI 0x31 - #define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_MASK 0xFF000000 - #define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_OFFSET 24 - u32 mnm_50g_misc; /* 0x78 */ - #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_MASK 0x00000007 - #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_OFFSET 0 - #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_NONE 0x0 - #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_FIRECODE 0x1 - #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_RS 0x2 - #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_AUTO 0x7 - u32 mnm_100g_cap; /* 0x7C */ - #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_MASK \ - 0x0000FFFF - #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_OFFSET 0 - #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_10G 0x2 - #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_40G 0x10 - #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_50G 0x20 - #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_BB_100G 0x40 - #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_MASK \ - 0xFFFF0000 - #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_OFFSET 16 - #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_1G 0x1 - #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_10G 0x2 - #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_20G 0x4 - #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_25G 0x8 - #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_40G 0x10 - #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_50G 0x20 - #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_BB_100G 0x40 - u32 mnm_100g_ctrl; /* 0x80 */ - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_MASK 0x0000000F - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_OFFSET 0 - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_BB_100G 0x7 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_MASK 0x000000F0 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_OFFSET 4 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_AUTONEG 0x0 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_1G 0x1 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_10G 0x2 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_20G 0x3 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_25G 0x4 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_40G 0x5 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_50G 0x6 - #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_BB_100G 0x7 - /* This field defines the board technology + */ + #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MASK 0x0000FF00 + #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_OFFSET 8 + #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_UNDEFINED 0x0 + #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE 0x1 + #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_BACKPLANE 0x2 + #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_EXT_PHY 0x3 + #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE_SLAVE 0x4 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_MASK 0x00FF0000 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_OFFSET 16 + #define 
NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_BYPASS 0x0 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR 0x2 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR2 0x3 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR4 0x4 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XFI 0x8 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SFI 0x9 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_1000X 0xB + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SGMII 0xC + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLAUI 0x11 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLPPI 0x12 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CAUI 0x21 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CPPI 0x22 + #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_25GAUI 0x31 + #define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_MASK 0xFF000000 + #define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_OFFSET 24 + u32 mnm_50g_misc; /* 0x78 */ + #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_MASK 0x00000007 + #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_OFFSET 0 + #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_NONE 0x0 + #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_FIRECODE 0x1 + #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_RS 0x2 + #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_AUTO 0x7 + u32 mnm_100g_cap; /* 0x7C */ + #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_MASK 0x0000FFFF + #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_OFFSET 0 + #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_BB_100G 0x40 + #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_MASK 0xFFFF0000 + #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_OFFSET 16 + #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_1G 0x1 + #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_10G 0x2 + #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_20G 0x4 + #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_25G 0x8 + #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_40G 0x10 + #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_50G 0x20 + #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_BB_100G 0x40 + u32 mnm_100g_ctrl; /* 0x80 */ + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_MASK 0x0000000F + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_OFFSET 0 + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_40G 0x5 + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_BB_100G 0x7 + #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_MASK 0x000000F0 + #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_OFFSET 4 + #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_AUTONEG 0x0 + #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_1G 0x1 + #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_10G 0x2 + #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_20G 0x3 + #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_25G 0x4 + #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_40G 0x5 + #define 
NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_50G 0x6 + #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_BB_100G 0x7 + /* This field defines the board technology * (backpane,transceiver,external PHY) - */ - #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MASK 0x0000FF00 - #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_OFFSET 8 - #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_UNDEFINED 0x0 - #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE 0x1 - #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_BACKPLANE 0x2 - #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_EXT_PHY 0x3 - #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE_SLAVE 0x4 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_MASK \ - 0x00FF0000 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_OFFSET 16 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_BYPASS 0x0 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR 0x2 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR2 0x3 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR4 0x4 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XFI 0x8 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SFI 0x9 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_1000X 0xB - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SGMII 0xC - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLAUI 0x11 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLPPI 0x12 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CAUI 0x21 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CPPI 0x22 - #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_25GAUI 0x31 - #define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_MASK 0xFF000000 - #define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_OFFSET 24 - u32 mnm_100g_misc; /* 0x84 */ - #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_MASK 0x00000007 - #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_OFFSET 0 - #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_NONE 0x0 - #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_FIRECODE 0x1 - #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_RS 0x2 - #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_AUTO 0x7 - u32 temperature; /* 0x88 */ - #define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_MASK 0x000000FF - #define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_OFFSET 0 - #define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_MASK \ - 0x0000FF00 - #define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_OFFSET 8 + */ + #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MASK 0x0000FF00 + #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_OFFSET 8 + #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_UNDEFINED 0x0 + #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE 0x1 + #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_BACKPLANE 0x2 + #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_EXT_PHY 0x3 + #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE_SLAVE 0x4 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_MASK 0x00FF0000 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_OFFSET 16 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_BYPASS 0x0 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR 0x2 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR2 0x3 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR4 0x4 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XFI 0x8 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SFI 0x9 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_1000X 0xB + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SGMII 0xC + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLAUI 0x11 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLPPI 0x12 + #define 
NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CAUI 0x21 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CPPI 0x22 + #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_25GAUI 0x31 + #define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_MASK 0xFF000000 + #define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_OFFSET 24 + u32 mnm_100g_misc; /* 0x84 */ + #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_MASK 0x00000007 + #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_OFFSET 0 + #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_NONE 0x0 + #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_FIRECODE 0x1 + #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_RS 0x2 + #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_AUTO 0x7 + u32 temperature; /* 0x88 */ + #define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_MASK 0x000000FF + #define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_OFFSET 0 + #define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_MASK 0x0000FF00 + #define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_OFFSET 8 /* Warning temperature threshold used with nvm option 235 */ - #define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_MASK 0x00FF0000 - #define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_OFFSET 16 - u32 ext_phy_cfg1; /* 0x8C */ + #define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_MASK 0x00FF0000 + #define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_OFFSET 16 + u32 ext_phy_cfg1; /* 0x8C */ /* Ext PHY MDI pair swap value */ - #define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_MASK 0x0000FFFF - #define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_OFFSET 0 - u32 extended_speed; /* 0x90 */ + #define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_MASK 0x0000FFFF + #define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_OFFSET 0 + u32 extended_speed; /* 0x90 */ /* Sets speed in conjunction with legacy speed field */ - #define NVM_CFG1_PORT_EXTENDED_SPEED_MASK 0x0000FFFF - #define NVM_CFG1_PORT_EXTENDED_SPEED_OFFSET 0 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_NONE 0x1 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_1G 0x2 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_10G 0x4 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_25G 0x8 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_40G 0x10 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R 0x20 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R2 0x40 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R2 0x80 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R4 0x100 - #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_P4 0x200 - /* Sets speed capabilities in conjunction with legacy capabilities - * field + #define NVM_CFG1_PORT_EXTENDED_SPEED_MASK 0x0000FFFF + #define NVM_CFG1_PORT_EXTENDED_SPEED_OFFSET 0 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_AN 0x1 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_1G 0x2 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_10G 0x4 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_20G 0x8 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_25G 0x10 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_40G 0x20 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R 0x40 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R2 0x80 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R2 0x100 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R4 0x200 + #define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_P4 0x400 + /* Sets speed capabilities in conjunction with legacy capabilities + * field */ - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_MASK 0xFFFF0000 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_OFFSET 16 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_NONE 0x1 - #define 
NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_1G 0x2 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_10G 0x4 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_25G 0x8 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_40G 0x10 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R 0x20 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R2 0x40 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R2 0x80 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R4 0x100 - #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_P4 0x200 - /* Set speed specific FEC setting in conjunction with legacy FEC - * mode + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_MASK 0xFFFF0000 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_OFFSET 16 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_RESERVED 0x1 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_1G 0x2 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_10G 0x4 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_20G 0x8 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_25G 0x10 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_40G 0x20 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R 0x40 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R2 0x80 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R2 0x100 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R4 0x200 + #define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_P4 0x400 + /* Set speed specific FEC setting in conjunction with legacy FEC + * mode */ - u32 extended_fec_mode; /* 0x94 */ - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_NONE 0x1 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_NONE 0x2 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_BASE_R 0x4 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_NONE 0x8 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_BASE_R 0x10 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_RS528 0x20 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_NONE 0x40 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_BASE_R 0x80 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_NONE 0x100 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_BASE_R 0x200 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS528 0x400 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS544 0x800 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_NONE 0x1000 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_BASE_R 0x2000 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS528 0x4000 - #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS544 0x8000 - u32 port_generic_cont_01; /* 0x98 */ + u32 extended_fec_mode; /* 0x94 */ + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_NONE 0x1 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_NONE 0x2 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_BASE_R 0x4 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_20G_NONE 0x8 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_20G_BASE_R 0x10 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_NONE 0x20 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_BASE_R 0x40 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_RS528 0x80 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_NONE 0x100 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_BASE_R 0x200 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_NONE 0x400 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_BASE_R 0x800 + #define 
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS528 0x1000 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS544 0x2000 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_NONE 0x4000 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_BASE_R 0x8000 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS528 0x10000 + #define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS544 0x20000 + u32 port_generic_cont_01; /* 0x98 */ /* Define for GPIO mapping of SFP Rate Select 0 */ - #define NVM_CFG1_PORT_MODULE_RS0_MASK 0x000000FF - #define NVM_CFG1_PORT_MODULE_RS0_OFFSET 0 - #define NVM_CFG1_PORT_MODULE_RS0_NA 0x0 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO0 0x1 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO1 0x2 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO2 0x3 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO3 0x4 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO4 0x5 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO5 0x6 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO6 0x7 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO7 0x8 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO8 0x9 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO9 0xA - #define NVM_CFG1_PORT_MODULE_RS0_GPIO10 0xB - #define NVM_CFG1_PORT_MODULE_RS0_GPIO11 0xC - #define NVM_CFG1_PORT_MODULE_RS0_GPIO12 0xD - #define NVM_CFG1_PORT_MODULE_RS0_GPIO13 0xE - #define NVM_CFG1_PORT_MODULE_RS0_GPIO14 0xF - #define NVM_CFG1_PORT_MODULE_RS0_GPIO15 0x10 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO16 0x11 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO17 0x12 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO18 0x13 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO19 0x14 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO20 0x15 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO21 0x16 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO22 0x17 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO23 0x18 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO24 0x19 - #define NVM_CFG1_PORT_MODULE_RS0_GPIO25 0x1A - #define NVM_CFG1_PORT_MODULE_RS0_GPIO26 0x1B - #define NVM_CFG1_PORT_MODULE_RS0_GPIO27 0x1C - #define NVM_CFG1_PORT_MODULE_RS0_GPIO28 0x1D - #define NVM_CFG1_PORT_MODULE_RS0_GPIO29 0x1E - #define NVM_CFG1_PORT_MODULE_RS0_GPIO30 0x1F - #define NVM_CFG1_PORT_MODULE_RS0_GPIO31 0x20 + #define NVM_CFG1_PORT_MODULE_RS0_MASK 0x000000FF + #define NVM_CFG1_PORT_MODULE_RS0_OFFSET 0 + #define NVM_CFG1_PORT_MODULE_RS0_NA 0x0 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO0 0x1 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO1 0x2 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO2 0x3 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO3 0x4 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO4 0x5 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO5 0x6 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO6 0x7 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO7 0x8 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO8 0x9 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO9 0xA + #define NVM_CFG1_PORT_MODULE_RS0_GPIO10 0xB + #define NVM_CFG1_PORT_MODULE_RS0_GPIO11 0xC + #define NVM_CFG1_PORT_MODULE_RS0_GPIO12 0xD + #define NVM_CFG1_PORT_MODULE_RS0_GPIO13 0xE + #define NVM_CFG1_PORT_MODULE_RS0_GPIO14 0xF + #define NVM_CFG1_PORT_MODULE_RS0_GPIO15 0x10 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO16 0x11 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO17 0x12 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO18 0x13 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO19 0x14 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO20 0x15 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO21 0x16 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO22 0x17 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO23 0x18 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO24 0x19 + #define NVM_CFG1_PORT_MODULE_RS0_GPIO25 0x1A + #define NVM_CFG1_PORT_MODULE_RS0_GPIO26 0x1B + #define NVM_CFG1_PORT_MODULE_RS0_GPIO27 0x1C + #define 
NVM_CFG1_PORT_MODULE_RS0_GPIO28 0x1D + #define NVM_CFG1_PORT_MODULE_RS0_GPIO29 0x1E + #define NVM_CFG1_PORT_MODULE_RS0_GPIO30 0x1F + #define NVM_CFG1_PORT_MODULE_RS0_GPIO31 0x20 /* Define for GPIO mapping of SFP Rate Select 1 */ - #define NVM_CFG1_PORT_MODULE_RS1_MASK 0x0000FF00 - #define NVM_CFG1_PORT_MODULE_RS1_OFFSET 8 - #define NVM_CFG1_PORT_MODULE_RS1_NA 0x0 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO0 0x1 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO1 0x2 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO2 0x3 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO3 0x4 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO4 0x5 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO5 0x6 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO6 0x7 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO7 0x8 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO8 0x9 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO9 0xA - #define NVM_CFG1_PORT_MODULE_RS1_GPIO10 0xB - #define NVM_CFG1_PORT_MODULE_RS1_GPIO11 0xC - #define NVM_CFG1_PORT_MODULE_RS1_GPIO12 0xD - #define NVM_CFG1_PORT_MODULE_RS1_GPIO13 0xE - #define NVM_CFG1_PORT_MODULE_RS1_GPIO14 0xF - #define NVM_CFG1_PORT_MODULE_RS1_GPIO15 0x10 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO16 0x11 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO17 0x12 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO18 0x13 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO19 0x14 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO20 0x15 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO21 0x16 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO22 0x17 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO23 0x18 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO24 0x19 - #define NVM_CFG1_PORT_MODULE_RS1_GPIO25 0x1A - #define NVM_CFG1_PORT_MODULE_RS1_GPIO26 0x1B - #define NVM_CFG1_PORT_MODULE_RS1_GPIO27 0x1C - #define NVM_CFG1_PORT_MODULE_RS1_GPIO28 0x1D - #define NVM_CFG1_PORT_MODULE_RS1_GPIO29 0x1E - #define NVM_CFG1_PORT_MODULE_RS1_GPIO30 0x1F - #define NVM_CFG1_PORT_MODULE_RS1_GPIO31 0x20 + #define NVM_CFG1_PORT_MODULE_RS1_MASK 0x0000FF00 + #define NVM_CFG1_PORT_MODULE_RS1_OFFSET 8 + #define NVM_CFG1_PORT_MODULE_RS1_NA 0x0 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO0 0x1 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO1 0x2 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO2 0x3 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO3 0x4 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO4 0x5 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO5 0x6 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO6 0x7 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO7 0x8 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO8 0x9 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO9 0xA + #define NVM_CFG1_PORT_MODULE_RS1_GPIO10 0xB + #define NVM_CFG1_PORT_MODULE_RS1_GPIO11 0xC + #define NVM_CFG1_PORT_MODULE_RS1_GPIO12 0xD + #define NVM_CFG1_PORT_MODULE_RS1_GPIO13 0xE + #define NVM_CFG1_PORT_MODULE_RS1_GPIO14 0xF + #define NVM_CFG1_PORT_MODULE_RS1_GPIO15 0x10 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO16 0x11 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO17 0x12 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO18 0x13 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO19 0x14 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO20 0x15 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO21 0x16 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO22 0x17 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO23 0x18 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO24 0x19 + #define NVM_CFG1_PORT_MODULE_RS1_GPIO25 0x1A + #define NVM_CFG1_PORT_MODULE_RS1_GPIO26 0x1B + #define NVM_CFG1_PORT_MODULE_RS1_GPIO27 0x1C + #define NVM_CFG1_PORT_MODULE_RS1_GPIO28 0x1D + #define NVM_CFG1_PORT_MODULE_RS1_GPIO29 0x1E + #define NVM_CFG1_PORT_MODULE_RS1_GPIO30 0x1F + #define NVM_CFG1_PORT_MODULE_RS1_GPIO31 0x20 /* Define for GPIO mapping of SFP Module TX Fault */ - #define 
NVM_CFG1_PORT_MODULE_TX_FAULT_MASK 0x00FF0000 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_OFFSET 16 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_NA 0x0 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO0 0x1 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO1 0x2 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO2 0x3 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO3 0x4 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO4 0x5 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO5 0x6 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO6 0x7 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO7 0x8 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO8 0x9 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO9 0xA - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO10 0xB - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO11 0xC - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO12 0xD - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO13 0xE - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO14 0xF - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO15 0x10 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO16 0x11 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO17 0x12 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO18 0x13 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO19 0x14 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO20 0x15 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO21 0x16 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO22 0x17 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO23 0x18 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO24 0x19 - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO25 0x1A - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO26 0x1B - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO27 0x1C - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO28 0x1D - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO29 0x1E - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO30 0x1F - #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO31 0x20 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_MASK 0x00FF0000 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_OFFSET 16 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_NA 0x0 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO0 0x1 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO1 0x2 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO2 0x3 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO3 0x4 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO4 0x5 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO5 0x6 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO6 0x7 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO7 0x8 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO8 0x9 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO9 0xA + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO10 0xB + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO11 0xC + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO12 0xD + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO13 0xE + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO14 0xF + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO15 0x10 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO16 0x11 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO17 0x12 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO18 0x13 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO19 0x14 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO20 0x15 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO21 0x16 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO22 0x17 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO23 0x18 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO24 0x19 + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO25 0x1A + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO26 0x1B + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO27 0x1C + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO28 0x1D + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO29 0x1E + #define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO30 0x1F + #define 
NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO31 0x20 /* Define for GPIO mapping of QSFP Reset signal */ - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_MASK 0xFF000000 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_OFFSET 24 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_NA 0x0 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO0 0x1 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO1 0x2 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO2 0x3 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO3 0x4 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO4 0x5 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO5 0x6 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO6 0x7 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO7 0x8 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO8 0x9 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO9 0xA - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO10 0xB - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO11 0xC - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO12 0xD - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO13 0xE - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO14 0xF - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO15 0x10 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO16 0x11 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO17 0x12 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO18 0x13 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO19 0x14 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO20 0x15 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO21 0x16 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO22 0x17 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO23 0x18 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO24 0x19 - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO25 0x1A - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO26 0x1B - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO27 0x1C - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO28 0x1D - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO29 0x1E - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO30 0x1F - #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO31 0x20 - u32 port_generic_cont_02; /* 0x9C */ + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_MASK 0xFF000000 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_OFFSET 24 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_NA 0x0 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO0 0x1 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO1 0x2 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO2 0x3 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO3 0x4 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO4 0x5 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO5 0x6 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO6 0x7 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO7 0x8 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO8 0x9 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO9 0xA + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO10 0xB + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO11 0xC + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO12 0xD + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO13 0xE + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO14 0xF + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO15 0x10 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO16 0x11 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO17 0x12 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO18 0x13 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO19 0x14 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO20 0x15 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO21 0x16 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO22 0x17 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO23 0x18 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO24 0x19 + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO25 0x1A + #define 
NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO26 0x1B + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO27 0x1C + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO28 0x1D + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO29 0x1E + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO30 0x1F + #define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO31 0x20 + u32 port_generic_cont_02; /* 0x9C */ /* Define for GPIO mapping of QSFP Transceiver LP mode */ - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_MASK 0x000000FF - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_OFFSET 0 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_NA 0x0 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO0 0x1 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO1 0x2 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO2 0x3 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO3 0x4 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO4 0x5 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO5 0x6 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO6 0x7 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO7 0x8 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO8 0x9 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO9 0xA - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO10 0xB - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO11 0xC - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO12 0xD - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO13 0xE - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO14 0xF - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO15 0x10 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO16 0x11 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO17 0x12 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO18 0x13 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO19 0x14 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO20 0x15 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO21 0x16 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO22 0x17 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO23 0x18 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO24 0x19 - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO25 0x1A - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO26 0x1B - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO27 0x1C - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO28 0x1D - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO29 0x1E - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO30 0x1F - #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO31 0x20 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_MASK 0x000000FF + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_OFFSET 0 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_NA 0x0 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO0 0x1 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO1 0x2 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO2 0x3 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO3 0x4 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO4 0x5 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO5 0x6 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO6 0x7 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO7 0x8 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO8 0x9 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO9 0xA + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO10 0xB + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO11 0xC + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO12 0xD + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO13 0xE + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO14 0xF + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO15 0x10 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO16 0x11 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO17 0x12 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO18 0x13 + #define 
NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO19 0x14 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO20 0x15 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO21 0x16 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO22 0x17 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO23 0x18 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO24 0x19 + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO25 0x1A + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO26 0x1B + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO27 0x1C + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO28 0x1D + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO29 0x1E + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO30 0x1F + #define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO31 0x20 /* Define for GPIO mapping of Transceiver Power Enable */ - #define NVM_CFG1_PORT_MODULE_POWER_MASK 0x0000FF00 - #define NVM_CFG1_PORT_MODULE_POWER_OFFSET 8 - #define NVM_CFG1_PORT_MODULE_POWER_NA 0x0 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO0 0x1 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO1 0x2 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO2 0x3 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO3 0x4 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO4 0x5 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO5 0x6 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO6 0x7 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO7 0x8 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO8 0x9 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO9 0xA - #define NVM_CFG1_PORT_MODULE_POWER_GPIO10 0xB - #define NVM_CFG1_PORT_MODULE_POWER_GPIO11 0xC - #define NVM_CFG1_PORT_MODULE_POWER_GPIO12 0xD - #define NVM_CFG1_PORT_MODULE_POWER_GPIO13 0xE - #define NVM_CFG1_PORT_MODULE_POWER_GPIO14 0xF - #define NVM_CFG1_PORT_MODULE_POWER_GPIO15 0x10 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO16 0x11 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO17 0x12 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO18 0x13 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO19 0x14 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO20 0x15 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO21 0x16 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO22 0x17 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO23 0x18 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO24 0x19 - #define NVM_CFG1_PORT_MODULE_POWER_GPIO25 0x1A - #define NVM_CFG1_PORT_MODULE_POWER_GPIO26 0x1B - #define NVM_CFG1_PORT_MODULE_POWER_GPIO27 0x1C - #define NVM_CFG1_PORT_MODULE_POWER_GPIO28 0x1D - #define NVM_CFG1_PORT_MODULE_POWER_GPIO29 0x1E - #define NVM_CFG1_PORT_MODULE_POWER_GPIO30 0x1F - #define NVM_CFG1_PORT_MODULE_POWER_GPIO31 0x20 + #define NVM_CFG1_PORT_MODULE_POWER_MASK 0x0000FF00 + #define NVM_CFG1_PORT_MODULE_POWER_OFFSET 8 + #define NVM_CFG1_PORT_MODULE_POWER_NA 0x0 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO0 0x1 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO1 0x2 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO2 0x3 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO3 0x4 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO4 0x5 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO5 0x6 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO6 0x7 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO7 0x8 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO8 0x9 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO9 0xA + #define NVM_CFG1_PORT_MODULE_POWER_GPIO10 0xB + #define NVM_CFG1_PORT_MODULE_POWER_GPIO11 0xC + #define NVM_CFG1_PORT_MODULE_POWER_GPIO12 0xD + #define NVM_CFG1_PORT_MODULE_POWER_GPIO13 0xE + #define NVM_CFG1_PORT_MODULE_POWER_GPIO14 0xF + #define NVM_CFG1_PORT_MODULE_POWER_GPIO15 0x10 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO16 0x11 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO17 0x12 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO18 0x13 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO19 
0x14 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO20 0x15 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO21 0x16 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO22 0x17 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO23 0x18 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO24 0x19 + #define NVM_CFG1_PORT_MODULE_POWER_GPIO25 0x1A + #define NVM_CFG1_PORT_MODULE_POWER_GPIO26 0x1B + #define NVM_CFG1_PORT_MODULE_POWER_GPIO27 0x1C + #define NVM_CFG1_PORT_MODULE_POWER_GPIO28 0x1D + #define NVM_CFG1_PORT_MODULE_POWER_GPIO29 0x1E + #define NVM_CFG1_PORT_MODULE_POWER_GPIO30 0x1F + #define NVM_CFG1_PORT_MODULE_POWER_GPIO31 0x20 /* Define for LASI Mapping of Interrupt from module or PHY */ - #define NVM_CFG1_PORT_LASI_INTR_IN_MASK 0x000F0000 - #define NVM_CFG1_PORT_LASI_INTR_IN_OFFSET 16 - #define NVM_CFG1_PORT_LASI_INTR_IN_NA 0x0 - #define NVM_CFG1_PORT_LASI_INTR_IN_LASI0 0x1 - #define NVM_CFG1_PORT_LASI_INTR_IN_LASI1 0x2 - #define NVM_CFG1_PORT_LASI_INTR_IN_LASI2 0x3 - #define NVM_CFG1_PORT_LASI_INTR_IN_LASI3 0x4 - u32 reserved[110]; /* 0xA0 */ + #define NVM_CFG1_PORT_LASI_INTR_IN_MASK 0x000F0000 + #define NVM_CFG1_PORT_LASI_INTR_IN_OFFSET 16 + #define NVM_CFG1_PORT_LASI_INTR_IN_NA 0x0 + #define NVM_CFG1_PORT_LASI_INTR_IN_LASI0 0x1 + #define NVM_CFG1_PORT_LASI_INTR_IN_LASI1 0x2 + #define NVM_CFG1_PORT_LASI_INTR_IN_LASI2 0x3 + #define NVM_CFG1_PORT_LASI_INTR_IN_LASI3 0x4 + u32 phy_temp_monitor; /* 0xA0 */ + /* Enables PHY temperature monitoring. In case the PHY temperature rises + * above option 308 (Temperature monitoring PHY temp) for option 309 + * (#samples) times, MFW will shut down the chip. + */ + #define NVM_CFG1_PORT_SHUTDOWN__M_PHY_MONITOR_ENABLED_MASK 0x00000001 + #define NVM_CFG1_PORT_SHUTDOWN__M_PHY_MONITOR_ENABLED_OFFSET 0 + #define NVM_CFG1_PORT_SHUTDOWN__M_PHY_MONITOR_ENABLED_DISABLED 0x0 + #define NVM_CFG1_PORT_SHUTDOWN__M_PHY_MONITOR_ENABLED_ENABLED 0x1 + /* In case option 307 is enabled, this option determines the + * temperature threshold to shut down the chip if sampled more than + * [option 309 (#samples)] times in a row. + */ + #define NVM_CFG1_PORT_SHUTDOWN__M_PHY_THRESHOLD_TEMP_MASK 0x000001FE + #define NVM_CFG1_PORT_SHUTDOWN__M_PHY_THRESHOLD_TEMP_OFFSET 1 + /* Number of PHY temperature samples above option 308 required in a + * row for MFW to shut down the chip.
+ */ + #define NVM_CFG1_PORT_SHUTDOWN__M_PHY_SAMPLES_MASK 0x0001FE00 + #define NVM_CFG1_PORT_SHUTDOWN__M_PHY_SAMPLES_OFFSET 9 + u32 reserved[109]; /* 0xA4 */ }; struct nvm_cfg1_func { - struct nvm_cfg_mac_address mac_address; /* 0x0 */ - u32 rsrv1; /* 0x8 */ - #define NVM_CFG1_FUNC_RESERVED1_MASK 0x0000FFFF - #define NVM_CFG1_FUNC_RESERVED1_OFFSET 0 - #define NVM_CFG1_FUNC_RESERVED2_MASK 0xFFFF0000 - #define NVM_CFG1_FUNC_RESERVED2_OFFSET 16 - u32 rsrv2; /* 0xC */ - #define NVM_CFG1_FUNC_RESERVED3_MASK 0x0000FFFF - #define NVM_CFG1_FUNC_RESERVED3_OFFSET 0 - #define NVM_CFG1_FUNC_RESERVED4_MASK 0xFFFF0000 - #define NVM_CFG1_FUNC_RESERVED4_OFFSET 16 - u32 device_id; /* 0x10 */ - #define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_MASK 0x0000FFFF - #define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_OFFSET 0 - #define NVM_CFG1_FUNC_RESERVED77_MASK 0xFFFF0000 - #define NVM_CFG1_FUNC_RESERVED77_OFFSET 16 - u32 cmn_cfg; /* 0x14 */ - #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_MASK 0x00000007 - #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_OFFSET 0 - #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_PXE 0x0 - #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_ISCSI_BOOT 0x3 - #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_FCOE_BOOT 0x4 - #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_NONE 0x7 - #define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_MASK 0x0007FFF8 - #define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_OFFSET 3 - #define NVM_CFG1_FUNC_PERSONALITY_MASK 0x00780000 - #define NVM_CFG1_FUNC_PERSONALITY_OFFSET 19 - #define NVM_CFG1_FUNC_PERSONALITY_ETHERNET 0x0 - #define NVM_CFG1_FUNC_PERSONALITY_ISCSI 0x1 - #define NVM_CFG1_FUNC_PERSONALITY_FCOE 0x2 - #define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK 0x7F800000 - #define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET 23 - #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_MASK 0x80000000 - #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_OFFSET 31 - #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_DISABLED 0x0 - #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_ENABLED 0x1 - u32 pci_cfg; /* 0x18 */ - #define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_MASK 0x0000007F - #define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_OFFSET 0 + struct nvm_cfg_mac_address mac_address; /* 0x0 */ + u32 rsrv1; /* 0x8 */ + #define NVM_CFG1_FUNC_RESERVED1_MASK 0x0000FFFF + #define NVM_CFG1_FUNC_RESERVED1_OFFSET 0 + #define NVM_CFG1_FUNC_RESERVED2_MASK 0xFFFF0000 + #define NVM_CFG1_FUNC_RESERVED2_OFFSET 16 + u32 rsrv2; /* 0xC */ + #define NVM_CFG1_FUNC_RESERVED3_MASK 0x0000FFFF + #define NVM_CFG1_FUNC_RESERVED3_OFFSET 0 + #define NVM_CFG1_FUNC_RESERVED4_MASK 0xFFFF0000 + #define NVM_CFG1_FUNC_RESERVED4_OFFSET 16 + u32 device_id; /* 0x10 */ + #define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_MASK 0x0000FFFF + #define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_OFFSET 0 + #define NVM_CFG1_FUNC_RESERVED77_MASK 0xFFFF0000 + #define NVM_CFG1_FUNC_RESERVED77_OFFSET 16 + u32 cmn_cfg; /* 0x14 */ + #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_MASK 0x00000007 + #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_OFFSET 0 + #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_PXE 0x0 + #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_ISCSI_BOOT 0x3 + #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_FCOE_BOOT 0x4 + #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_NONE 0x7 + #define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_MASK 0x0007FFF8 + #define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_OFFSET 3 + #define NVM_CFG1_FUNC_PERSONALITY_MASK 0x00780000 + #define NVM_CFG1_FUNC_PERSONALITY_OFFSET 19 + #define NVM_CFG1_FUNC_PERSONALITY_ETHERNET 0x0 + #define NVM_CFG1_FUNC_PERSONALITY_ISCSI 0x1 + #define NVM_CFG1_FUNC_PERSONALITY_FCOE 0x2 + #define NVM_CFG1_FUNC_PERSONALITY_NVMETCP 0x4
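For reference, every field added in this file follows one decoding convention: mask the 32-bit configuration word with the field's _MASK and shift right by its _OFFSET. Below is a minimal sketch of that convention applied to the new phy_temp_monitor word above; the NVM_CFG_GET_FIELD macro and the phy_shutdown_policy() helper are illustrative only, not part of this patch or of the ecore API, and u32/u8 are assumed to be the base driver's fixed-width typedefs.

/* Illustrative helper, not part of this patch: extract a bitfield
 * declared with the header's <NAME>_MASK / <NAME>_OFFSET convention.
 */
#define NVM_CFG_GET_FIELD(val, name) \
	(((val) & name##_MASK) >> name##_OFFSET)

/* Sketch: decode the PHY over-temperature shutdown policy (nvm options
 * 307-309) from a raw phy_temp_monitor word. Returns 0 when monitoring
 * is disabled, otherwise fills the temperature threshold and the
 * consecutive-sample count the MFW uses before shutting down the chip.
 */
static int phy_shutdown_policy(u32 phy_temp_monitor, u8 *temp_th, u8 *samples)
{
	if (!NVM_CFG_GET_FIELD(phy_temp_monitor,
			       NVM_CFG1_PORT_SHUTDOWN__M_PHY_MONITOR_ENABLED))
		return 0; /* option 307 disabled: no thermal shutdown */

	*temp_th = NVM_CFG_GET_FIELD(phy_temp_monitor,
				     NVM_CFG1_PORT_SHUTDOWN__M_PHY_THRESHOLD_TEMP);
	*samples = NVM_CFG_GET_FIELD(phy_temp_monitor,
				     NVM_CFG1_PORT_SHUTDOWN__M_PHY_SAMPLES);
	return 1;
}

The same mask-and-shift pattern applies to every _MASK/_OFFSET pair in the structures above and below.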
+ #define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK 0x7F800000 + #define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET 23 + #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_MASK 0x80000000 + #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_OFFSET 31 + #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_DISABLED 0x0 + #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_ENABLED 0x1 + u32 pci_cfg; /* 0x18 */ + #define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_MASK 0x0000007F + #define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_OFFSET 0 /* AH VF BAR2 size */ - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_MASK 0x00003F80 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_OFFSET 7 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_DISABLED 0x0 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4K 0x1 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8K 0x2 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16K 0x3 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32K 0x4 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64K 0x5 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_128K 0x6 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_256K 0x7 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_512K 0x8 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_1M 0x9 - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_2M 0xA - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4M 0xB - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8M 0xC - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16M 0xD - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32M 0xE - #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64M 0xF - #define NVM_CFG1_FUNC_BAR1_SIZE_MASK 0x0003C000 - #define NVM_CFG1_FUNC_BAR1_SIZE_OFFSET 14 - #define NVM_CFG1_FUNC_BAR1_SIZE_DISABLED 0x0 - #define NVM_CFG1_FUNC_BAR1_SIZE_64K 0x1 - #define NVM_CFG1_FUNC_BAR1_SIZE_128K 0x2 - #define NVM_CFG1_FUNC_BAR1_SIZE_256K 0x3 - #define NVM_CFG1_FUNC_BAR1_SIZE_512K 0x4 - #define NVM_CFG1_FUNC_BAR1_SIZE_1M 0x5 - #define NVM_CFG1_FUNC_BAR1_SIZE_2M 0x6 - #define NVM_CFG1_FUNC_BAR1_SIZE_4M 0x7 - #define NVM_CFG1_FUNC_BAR1_SIZE_8M 0x8 - #define NVM_CFG1_FUNC_BAR1_SIZE_16M 0x9 - #define NVM_CFG1_FUNC_BAR1_SIZE_32M 0xA - #define NVM_CFG1_FUNC_BAR1_SIZE_64M 0xB - #define NVM_CFG1_FUNC_BAR1_SIZE_128M 0xC - #define NVM_CFG1_FUNC_BAR1_SIZE_256M 0xD - #define NVM_CFG1_FUNC_BAR1_SIZE_512M 0xE - #define NVM_CFG1_FUNC_BAR1_SIZE_1G 0xF - #define NVM_CFG1_FUNC_MAX_BANDWIDTH_MASK 0x03FC0000 - #define NVM_CFG1_FUNC_MAX_BANDWIDTH_OFFSET 18 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_MASK 0x00003F80 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_OFFSET 7 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_DISABLED 0x0 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4K 0x1 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8K 0x2 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16K 0x3 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32K 0x4 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64K 0x5 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_128K 0x6 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_256K 0x7 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_512K 0x8 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_1M 0x9 + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_2M 0xA + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4M 0xB + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8M 0xC + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16M 0xD + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32M 0xE + #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64M 0xF + #define NVM_CFG1_FUNC_BAR1_SIZE_MASK 0x0003C000 + #define NVM_CFG1_FUNC_BAR1_SIZE_OFFSET 14 + #define NVM_CFG1_FUNC_BAR1_SIZE_DISABLED 0x0 + #define NVM_CFG1_FUNC_BAR1_SIZE_64K 0x1 + #define NVM_CFG1_FUNC_BAR1_SIZE_128K 0x2 + #define NVM_CFG1_FUNC_BAR1_SIZE_256K 0x3 + #define NVM_CFG1_FUNC_BAR1_SIZE_512K 0x4 + #define NVM_CFG1_FUNC_BAR1_SIZE_1M 0x5 + #define NVM_CFG1_FUNC_BAR1_SIZE_2M 0x6 +
#define NVM_CFG1_FUNC_BAR1_SIZE_4M 0x7 + #define NVM_CFG1_FUNC_BAR1_SIZE_8M 0x8 + #define NVM_CFG1_FUNC_BAR1_SIZE_16M 0x9 + #define NVM_CFG1_FUNC_BAR1_SIZE_32M 0xA + #define NVM_CFG1_FUNC_BAR1_SIZE_64M 0xB + #define NVM_CFG1_FUNC_BAR1_SIZE_128M 0xC + #define NVM_CFG1_FUNC_BAR1_SIZE_256M 0xD + #define NVM_CFG1_FUNC_BAR1_SIZE_512M 0xE + #define NVM_CFG1_FUNC_BAR1_SIZE_1G 0xF + #define NVM_CFG1_FUNC_MAX_BANDWIDTH_MASK 0x03FC0000 + #define NVM_CFG1_FUNC_MAX_BANDWIDTH_OFFSET 18 /* Hide function in npar mode */ - #define NVM_CFG1_FUNC_FUNCTION_HIDE_MASK 0x04000000 - #define NVM_CFG1_FUNC_FUNCTION_HIDE_OFFSET 26 - #define NVM_CFG1_FUNC_FUNCTION_HIDE_DISABLED 0x0 - #define NVM_CFG1_FUNC_FUNCTION_HIDE_ENABLED 0x1 + #define NVM_CFG1_FUNC_FUNCTION_HIDE_MASK 0x04000000 + #define NVM_CFG1_FUNC_FUNCTION_HIDE_OFFSET 26 + #define NVM_CFG1_FUNC_FUNCTION_HIDE_DISABLED 0x0 + #define NVM_CFG1_FUNC_FUNCTION_HIDE_ENABLED 0x1 /* AH BAR2 size (per function) */ - #define NVM_CFG1_FUNC_BAR2_SIZE_MASK 0x78000000 - #define NVM_CFG1_FUNC_BAR2_SIZE_OFFSET 27 - #define NVM_CFG1_FUNC_BAR2_SIZE_DISABLED 0x0 - #define NVM_CFG1_FUNC_BAR2_SIZE_1M 0x5 - #define NVM_CFG1_FUNC_BAR2_SIZE_2M 0x6 - #define NVM_CFG1_FUNC_BAR2_SIZE_4M 0x7 - #define NVM_CFG1_FUNC_BAR2_SIZE_8M 0x8 - #define NVM_CFG1_FUNC_BAR2_SIZE_16M 0x9 - #define NVM_CFG1_FUNC_BAR2_SIZE_32M 0xA - #define NVM_CFG1_FUNC_BAR2_SIZE_64M 0xB - #define NVM_CFG1_FUNC_BAR2_SIZE_128M 0xC - #define NVM_CFG1_FUNC_BAR2_SIZE_256M 0xD - #define NVM_CFG1_FUNC_BAR2_SIZE_512M 0xE - #define NVM_CFG1_FUNC_BAR2_SIZE_1G 0xF - struct nvm_cfg_mac_address fcoe_node_wwn_mac_addr; /* 0x1C */ - struct nvm_cfg_mac_address fcoe_port_wwn_mac_addr; /* 0x24 */ - u32 preboot_generic_cfg; /* 0x2C */ - #define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_MASK 0x0000FFFF - #define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0 - #define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000 - #define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16 - #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_MASK 0x001E0000 - #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_OFFSET 17 - #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET 0x1 - #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE 0x2 - #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI 0x4 - #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_RDMA 0x8 - u32 reserved[8]; /* 0x30 */ + #define NVM_CFG1_FUNC_BAR2_SIZE_MASK 0x78000000 + #define NVM_CFG1_FUNC_BAR2_SIZE_OFFSET 27 + #define NVM_CFG1_FUNC_BAR2_SIZE_DISABLED 0x0 + #define NVM_CFG1_FUNC_BAR2_SIZE_1M 0x5 + #define NVM_CFG1_FUNC_BAR2_SIZE_2M 0x6 + #define NVM_CFG1_FUNC_BAR2_SIZE_4M 0x7 + #define NVM_CFG1_FUNC_BAR2_SIZE_8M 0x8 + #define NVM_CFG1_FUNC_BAR2_SIZE_16M 0x9 + #define NVM_CFG1_FUNC_BAR2_SIZE_32M 0xA + #define NVM_CFG1_FUNC_BAR2_SIZE_64M 0xB + #define NVM_CFG1_FUNC_BAR2_SIZE_128M 0xC + #define NVM_CFG1_FUNC_BAR2_SIZE_256M 0xD + #define NVM_CFG1_FUNC_BAR2_SIZE_512M 0xE + #define NVM_CFG1_FUNC_BAR2_SIZE_1G 0xF + struct nvm_cfg_mac_address fcoe_node_wwn_mac_addr; /* 0x1C */ + struct nvm_cfg_mac_address fcoe_port_wwn_mac_addr; /* 0x24 */ + u32 preboot_generic_cfg; /* 0x2C */ + #define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_MASK 0x0000FFFF + #define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0 + #define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000 + #define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16 + #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_MASK 0x01FE0000 + #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_OFFSET 17 + #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET 0x1 + #define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE 0x2 + #define 
+	#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI 0x4
+	#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_RDMA 0x8
+	#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_NVMETCP 0x10
+	u32 features; /* 0x30 */
+	/* RDMA protocol enablement */
+	#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_MASK 0x00000003
+	#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_OFFSET 0
+	#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_NONE 0x0
+	#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_ROCE 0x1
+	#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_IWARP 0x2
+	#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_BOTH 0x3
+	u32 mf_mode_feature; /* 0x34 */
+	/* QinQ Outer VLAN */
+	#define NVM_CFG1_FUNC_QINQ_OUTER_VLAN_MASK 0x0000FFFF
+	#define NVM_CFG1_FUNC_QINQ_OUTER_VLAN_OFFSET 0
+	u32 reserved[6]; /* 0x38 */
 };
 
 struct nvm_cfg1 {
-	struct nvm_cfg1_glob glob; /* 0x0 */
-	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x228 */
-	struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
-	struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
+	struct nvm_cfg1_glob glob; /* 0x0 */
+	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x228 */
+	struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
+	struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
 };
 
 /******************************************
@@ -2602,6 +2683,11 @@ struct board_info {
 	char *friendly_name;
 };
 
+struct trace_module_info {
+	char *module_name;
+};
+#define NUM_TRACE_MODULES 25
+
 enum nvm_cfg_sections {
 	NVM_CFG_SECTION_NVM_CFG1,
 	NVM_CFG_SECTION_MAX
@@ -2642,7 +2728,6 @@ struct nvm_cfg {
 #define NVM_CFG_ID_MFW_FLOW_CONTROL 32
 #define NVM_CFG_ID_OPTIC_MODULE_VENDOR_ENFORCEMENT 33
 #define NVM_CFG_ID_OPTIONAL_LINK_MODES_BB 34
-#define NVM_CFG_ID_MF_VENDOR_DEVICE_ID 37
 #define NVM_CFG_ID_NETWORK_PORT_MODE 38
 #define NVM_CFG_ID_MPS10_RX_LANE_SWAP_BB 39
 #define NVM_CFG_ID_MPS10_TX_LANE_SWAP_BB 40
@@ -2655,8 +2740,8 @@ struct nvm_cfg {
 #define NVM_CFG_ID_MPS10_PREEMPHASIS_BB 47
 #define NVM_CFG_ID_MPS10_DRIVER_CURRENT_BB 48
 #define NVM_CFG_ID_MPS10_ENFORCE_TX_FIR_CFG_BB 49
-#define NVM_CFG_ID_MPS25_PREEMPHASIS 50
-#define NVM_CFG_ID_MPS25_DRIVER_CURRENT 51
+#define NVM_CFG_ID_MPS25_PREEMPHASIS_BB_K2 50
+#define NVM_CFG_ID_MPS25_DRIVER_CURRENT_BB_K2 51
 #define NVM_CFG_ID_MPS25_ENFORCE_TX_FIR_CFG 52
 #define NVM_CFG_ID_MPS10_CORE_ADDR_BB 53
 #define NVM_CFG_ID_MPS25_CORE_ADDR_BB 54
@@ -2668,7 +2753,7 @@ struct nvm_cfg {
 #define NVM_CFG_ID_MBA_DELAY_TIME 61
 #define NVM_CFG_ID_MBA_SETUP_HOT_KEY 62
 #define NVM_CFG_ID_MBA_HIDE_SETUP_PROMPT 63
-#define NVM_CFG_ID_PREBOOT_LINK_SPEED 67
+#define NVM_CFG_ID_PREBOOT_LINK_SPEED_BB_K2 67
 #define NVM_CFG_ID_PREBOOT_BOOT_PROTOCOL 69
 #define NVM_CFG_ID_ENABLE_SRIOV 70
 #define NVM_CFG_ID_ENABLE_ATC 71
@@ -2677,14 +2762,15 @@ struct nvm_cfg {
 #define NVM_CFG_ID_VENDOR_ID 76
 #define NVM_CFG_ID_SUBSYSTEM_VENDOR_ID 78
 #define NVM_CFG_ID_SUBSYSTEM_DEVICE_ID 79
+#define NVM_CFG_ID_EXPANSION_ROM_SIZE 80
 #define NVM_CFG_ID_VF_PCI_BAR2_SIZE_BB 81
 #define NVM_CFG_ID_BAR1_SIZE 82
 #define NVM_CFG_ID_BAR2_SIZE_BB 83
 #define NVM_CFG_ID_VF_PCI_DEVICE_ID 84
 #define NVM_CFG_ID_MPS10_TXFIR_MAIN_BB 85
 #define NVM_CFG_ID_MPS10_TXFIR_POST_BB 86
-#define NVM_CFG_ID_MPS25_TXFIR_MAIN 87
-#define NVM_CFG_ID_MPS25_TXFIR_POST 88
+#define NVM_CFG_ID_MPS25_TXFIR_MAIN_BB_K2 87
+#define NVM_CFG_ID_MPS25_TXFIR_POST_BB_K2 88
 #define NVM_CFG_ID_MANUFACTURE_KIT_VERSION 89
 #define NVM_CFG_ID_MANUFACTURE_TIMESTAMP 90
 #define NVM_CFG_ID_PERSONALITY 92
@@ -2795,20 +2881,19 @@ struct nvm_cfg {
 #define NVM_CFG_ID_NVM_CFG_UPDATED_VALUE_SEQ 200
 #define NVM_CFG_ID_EXTENDED_SERIAL_NUMBER 201
 #define NVM_CFG_ID_RDMA_ENABLEMENT 202
-#define NVM_CFG_ID_MAX_CONT_OPERATING_TEMP 203
 #define NVM_CFG_ID_RUNTIME_PORT_SWAP_GPIO 204
 #define NVM_CFG_ID_RUNTIME_PORT_SWAP_MAP 205
 #define NVM_CFG_ID_THERMAL_EVENT_GPIO 206
 #define NVM_CFG_ID_I2C_INTERRUPT_GPIO 207
 #define NVM_CFG_ID_DCI_SUPPORT 208
 #define NVM_CFG_ID_PCIE_VDM_ENABLED 209
-#define NVM_CFG_ID_OEM1_NUMBER 210
-#define NVM_CFG_ID_OEM2_NUMBER 211
+#define NVM_CFG_ID_OPTION_KIT_PN 210
+#define NVM_CFG_ID_SPARE_PN 211
 #define NVM_CFG_ID_FEC_AN_MODE_K2_E5 212
 #define NVM_CFG_ID_NPAR_ENABLED_PROTOCOL 213
-#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_PRE 214
-#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_MAIN 215
-#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_POST 216
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_PRE_BB_K2 214
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_MAIN_BB_K2 215
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_POST_BB_K2 216
 #define NVM_CFG_ID_ALOM_FAN_ON_AUX_GPIO 217
 #define NVM_CFG_ID_ALOM_FAN_ON_AUX_VALUE 218
 #define NVM_CFG_ID_SLOT_ID_GPIO 219
@@ -2840,9 +2925,29 @@ struct nvm_cfg {
 #define NVM_CFG_ID_EXT_PHY_MDI_PAIR_SWAP 249
 #define NVM_CFG_ID_UID_LED_MODE_MASK 250
 #define NVM_CFG_ID_NCSI_AUX_LINK 251
+#define NVM_CFG_ID_EXTENDED_SPEED_E5 252
+#define NVM_CFG_ID_EXTENDED_SPEED_CAP_E5 253
+#define NVM_CFG_ID_EXTENDED_FEC_MODE_E5 254
+#define NVM_CFG_ID_RX_PRECODE_E5 255
+#define NVM_CFG_ID_TX_PRECODE_E5 256
+#define NVM_CFG_ID_50G_HLPC_PRE2_E5 257
+#define NVM_CFG_ID_50G_MLPC_PRE2_E5 258
+#define NVM_CFG_ID_50G_LLPC_PRE2_E5 259
+#define NVM_CFG_ID_25G_HLPC_PRE2_E5 260
+#define NVM_CFG_ID_25G_LLPC_PRE2_E5 261
+#define NVM_CFG_ID_25G_AC_PRE2_E5 262
+#define NVM_CFG_ID_10G_PC_PRE2_E5 263
+#define NVM_CFG_ID_10G_AC_PRE2_E5 264
+#define NVM_CFG_ID_1G_PRE2_E5 265
+#define NVM_CFG_ID_25G_BT_PRE2_E5 266
+#define NVM_CFG_ID_10G_BT_PRE2_E5 267
+#define NVM_CFG_ID_TX_RX_EQ_50G_HLPC_E5 268
+#define NVM_CFG_ID_TX_RX_EQ_50G_MLPC_E5 269
+#define NVM_CFG_ID_TX_RX_EQ_50G_LLPC_E5 270
+#define NVM_CFG_ID_TX_RX_EQ_50G_AC_E5 271
 #define NVM_CFG_ID_SMARTAN_FEC_OVERRIDE 272
 #define NVM_CFG_ID_LLDP_DISABLE 273
-#define NVM_CFG_ID_SHORT_PERST_PROTECTION_K2_E5 274
+#define NVM_CFG_ID_SHORT_PERST_PROTECTION_K2 274
 #define NVM_CFG_ID_TRANSCEIVER_RATE_SELECT_0 275
 #define NVM_CFG_ID_TRANSCEIVER_RATE_SELECT_1 276
 #define NVM_CFG_ID_TRANSCEIVER_MODULE_TX_FAULT 277
@@ -2869,4 +2974,22 @@ struct nvm_cfg {
 #define NVM_CFG_ID_PHY_MODULE_WARNING_TEMP_TH 298
 #define NVM_CFG_ID_DISABLE_PLDM 299
 #define NVM_CFG_ID_DISABLE_MCTP_OEM 300
+#define NVM_CFG_ID_LOWEST_MBI_VERSION 301
+#define NVM_CFG_ID_ACTIVITY_LED_BLINK_RATE_HZ 302
+#define NVM_CFG_ID_EXT_PHY_AVS_ENABLE 303
+#define NVM_CFG_ID_OVERRIDE_TX_DRIVER_REGULATOR_K2 304
+#define NVM_CFG_ID_TX_REGULATOR_VOLTAGE_GEN1_2_K2 305
+#define NVM_CFG_ID_TX_REGULATOR_VOLTAGE_GEN3_K2 306
+#define NVM_CFG_ID_SHUTDOWN___PHY_MONITOR_ENABLED 307
+#define NVM_CFG_ID_SHUTDOWN___PHY_THRESHOLD_TEMP 308
+#define NVM_CFG_ID_SHUTDOWN___PHY_SAMPLES 309
+#define NVM_CFG_ID_NCSI_GET_CAPAS_CH_COUNT 310
+#define NVM_CFG_ID_PERMIT_TOTAL_PORT_SHUTDOWN 311
+#define NVM_CFG_ID_MCTP_VIRTUAL_LINK_OEM3 312
+#define NVM_CFG_ID_50G_AC_PRE2_E5 313
+#define NVM_CFG_ID_QINQ_SUPPORT 314
+#define NVM_CFG_ID_QINQ_OUTER_VLAN 315
+#define NVM_CFG_ID_NVMETCP_DID_SUFFIX_K2_E5 316
+#define NVM_CFG_ID_NVMETCP_MODE_K2_E5 317
+#define NVM_CFG_ID_PCIE_CLASS_CODE_NVMETCP_K2_E5 318
 #endif /* NVM_CFG_H */
diff --git a/drivers/net/qede/qede_debug.c b/drivers/net/qede/qede_debug.c
index d46727c45..085d0532b 100644
--- a/drivers/net/qede/qede_debug.c
+++ b/drivers/net/qede/qede_debug.c
@@ -3937,16 +3937,17 @@ static enum dbg_status qed_find_nvram_image(struct ecore_hwfn *p_hwfn,
 	/* Call NVRAM get file command */
 	nvm_result = ecore_mcp_nvm_rd_cmd(p_hwfn,
-					  p_ptt,
-					  DRV_MSG_CODE_NVM_GET_FILE_ATT,
-					  image_type,
-					  &ret_mcp_resp,
-					  &ret_mcp_param,
-					  &ret_txn_size, (u32 *)&file_att);
+					  p_ptt,
+					  DRV_MSG_CODE_NVM_GET_FILE_ATT,
+					  image_type,
+					  &ret_mcp_resp,
+					  &ret_mcp_param,
+					  &ret_txn_size,
+					  (u32 *)&file_att, false);
 
 	/* Check response */
-	if (nvm_result ||
-	    (ret_mcp_resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_NVM_OK)
+	if (nvm_result || (ret_mcp_resp & FW_MSG_CODE_MASK) !=
+	    FW_MSG_CODE_NVM_OK)
 		return DBG_STATUS_NVRAM_GET_IMAGE_FAILED;
 
 	/* Update return values */
@@ -3995,8 +3996,8 @@ static enum dbg_status qed_nvram_read(struct ecore_hwfn *p_hwfn,
 					DRV_MSG_CODE_NVM_READ_NVRAM,
 					param,
 					&ret_mcp_resp,
 					&ret_mcp_param, &ret_read_size,
-					(u32 *)((u8 *)ret_buf +
-						read_offset))) {
+					(u32 *)((u8 *)ret_buf + read_offset),
+					false)) {
 			DP_NOTICE(p_hwfn->p_dev, false, "rc = DBG_STATUS_NVRAM_READ_FAILED\n");
 			return DBG_STATUS_NVRAM_READ_FAILED;
 		}
@@ -4685,9 +4686,9 @@ static u32 qed_ilt_dump(struct ecore_hwfn *p_hwfn,
 	num_cids_per_page = (int)(cduc_page_size / conn_ctx_size);
 	ilt_pages = p_hwfn->p_cxt_mngr->ilt_shadow;
 
-	/* Dump global params - 22 must match number of params below */
+	/* Dump global params - 21 must match number of params below */
 	offset += qed_dump_common_global_params(p_hwfn, p_ptt,
-						dump_buf + offset, dump, 22);
+						dump_buf + offset, dump, 21);
 	offset += qed_dump_str_param(dump_buf + offset,
 				     dump, "dump-type", "ilt-dump");
 	offset += qed_dump_num_param(dump_buf + offset,
@@ -4746,15 +4747,11 @@ static u32 qed_ilt_dump(struct ecore_hwfn *p_hwfn,
 				     dump,
 				     "max-task-ctx-size",
 				     p_hwfn->p_cxt_mngr->task_ctx_size);
-	offset += qed_dump_num_param(dump_buf + offset,
-				     dump,
-				     "task-type-id",
-				     p_hwfn->p_cxt_mngr->task_type_id);
 	offset += qed_dump_num_param(dump_buf + offset,
 				     dump,
 				     "first-vf-id-in-pf",
 				     p_hwfn->p_cxt_mngr->first_vf_in_pf);
-	offset += /* 18 */ qed_dump_num_param(dump_buf + offset,
+	offset += /* 17 */ qed_dump_num_param(dump_buf + offset,
 					      dump,
 					      "num-vfs-in-pf",
 					      p_hwfn->p_cxt_mngr->vf_count);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index ab5f5b106..668821dcb 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2656,7 +2656,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 			   ECORE_MAC);
 	else
 		ecore_vf_get_num_mac_filters(ECORE_LEADING_HWFN(edev),
-				(uint32_t *)&adapter->dev_info.num_mac_filters);
+				(uint8_t *)&adapter->dev_info.num_mac_filters);
 
 	/* Allocate memory for storing MAC addr */
 	eth_dev->data->mac_addrs = rte_zmalloc(edev->name,
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 6fc67d878..c91fcc1fa 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -596,7 +596,7 @@ int qede_alloc_fp_resc(struct qede_dev *qdev)
 {
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	struct qede_fastpath *fp;
-	uint32_t num_sbs;
+	uint8_t num_sbs;
 	uint16_t sb_idx;
 	int i;
 
-- 
2.18.0
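
Note for readers unfamiliar with the nvm_cfg.h conventions in the hunks above: every field of an NVM config dword is described by a _MASK/_OFFSET pair, and a value is read by masking the raw dword and shifting it down. Below is a minimal standalone C sketch of that idiom, in the style of the driver's GET_MFW_FIELD helper in ecore.h; the local variable names and sample values are invented for illustration and are not part of the patch.

#include <stdint.h>
#include <stdio.h>

/* Mask/offset pairs copied from the nvm_cfg.h hunks above. */
#define NVM_CFG1_FUNC_QINQ_OUTER_VLAN_MASK    0x0000FFFF
#define NVM_CFG1_FUNC_QINQ_OUTER_VLAN_OFFSET  0
#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK   0x7F800000
#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET 23

/* Generic extractor: mask out the field, then shift it to bit 0. */
#define NVM_GET_FIELD(val, name) \
	(((val) & name##_MASK) >> name##_OFFSET)

int main(void)
{
	/* Hypothetical raw dwords, as if read from the NVM config image. */
	uint32_t mf_mode_feature = 0x00000FFE; /* outer VLAN in bits 0..15 */
	uint32_t func_cfg = 0x0A800000;        /* weight in bits 23..30 */

	printf("qinq outer vlan: 0x%x\n",
	       (unsigned int)NVM_GET_FIELD(mf_mode_feature,
					   NVM_CFG1_FUNC_QINQ_OUTER_VLAN));
	printf("bandwidth weight: %u\n",
	       (unsigned int)NVM_GET_FIELD(func_cfg,
					   NVM_CFG1_FUNC_BANDWIDTH_WEIGHT));
	return 0;
}

This also shows why the NPAR protocol mask widens from 0x001E0000 to 0x01FE0000 in the hunk above: the field keeps its offset of 17 but grows so that new protocol bits such as NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_NVMETCP (0x10) fit inside it.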