From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mx0b-0016ce01.pphosted.com (mx0a-0016ce01.pphosted.com [67.231.148.157]) by dpdk.org (Postfix) with ESMTP id 98D7F6A71 for ; Fri, 24 Mar 2017 08:30:26 +0100 (CET)
Received: from pps.filterd (m0095336.ppops.net [127.0.0.1]) by mx0a-0016ce01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v2O7Q6xA009391; Fri, 24 Mar 2017 00:30:23 -0700
Received: from avcashub1.qlogic.com ([198.186.0.115]) by mx0a-0016ce01.pphosted.com with ESMTP id 29b9xy1bkh-1 (version=TLSv1 cipher=ECDHE-RSA-AES256-SHA bits=256 verify=NOT); Fri, 24 Mar 2017 00:30:22 -0700
Received: from avluser05.qlc.com (10.1.113.115) by avcashub1.qlogic.org (10.1.4.190) with Microsoft SMTP Server (TLS) id 14.3.235.1; Fri, 24 Mar 2017 00:30:22 -0700
Received: (from rmody@localhost) by avluser05.qlc.com (8.14.4/8.14.4/Submit) id v2O7ULIK011619; Fri, 24 Mar 2017 00:30:21 -0700
X-Authentication-Warning: avluser05.qlc.com: rmody set sender to rasesh.mody@cavium.com using -f
From: Rasesh Mody
To: ,
CC: Rasesh Mody ,
Date: Fri, 24 Mar 2017 00:27:56 -0700
Message-ID: <1490340531-11403-7-git-send-email-rasesh.mody@cavium.com>
X-Mailer: git-send-email 1.7.10.3
In-Reply-To:
References:
MIME-Version: 1.0
Content-Type: text/plain
disclaimer: bypass
X-Proofpoint-Virus-Version: vendor=nai engine=5800 definitions=8476 signatures=668449
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 priorityscore=1501 malwarescore=0 suspectscore=0 phishscore=0 bulkscore=0 spamscore=0 clxscore=1015 lowpriorityscore=0 impostorscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1702020001 definitions=main-1703240067
Subject: [dpdk-dev] [PATCH v3 06/61] net/qede: upgrade the FW to 8.18.9.0
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 24 Mar 2017 07:30:27 -0000

This patchset adds changes to upgrade to 8.18.9.0 FW.

Signed-off-by: Rasesh Mody
---
 doc/guides/nics/qede.rst                      |    8 +-
 drivers/net/qede/base/bcm_osal.h              |    1 +
 drivers/net/qede/base/common_hsi.h            |  176 +++-
 drivers/net/qede/base/ecore_dcbx.c            |    4 +-
 drivers/net/qede/base/ecore_dev.c             |  204 ++--
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   20 +-
 drivers/net/qede/base/ecore_hsi_common.h      |   46 +-
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  203 ++--
 drivers/net/qede/base/ecore_hsi_eth.h         |   17 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   78 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   | 1378 ++++++++++++++++---------
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  161 ++-
 drivers/net/qede/base/ecore_iro.h             |    8 +
 drivers/net/qede/base/ecore_iro_values.h      |   28 +-
 drivers/net/qede/base/ecore_rt_defs.h         |  623 ++++++-----
 drivers/net/qede/base/eth_common.h            |    2 +-
 drivers/net/qede/base/reg_addr.h              |   53 +
 drivers/net/qede/qede_main.c                  |    2 +-
 18 files changed, 1886 insertions(+), 1126 deletions(-)

diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 4694ec0..36b26b3 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -77,10 +77,10 @@ Supported QLogic Adapters
 Prerequisites
 -------------
 
-- Requires firmware version **8.14.x.** and management firmware
-  version **8.14.x or higher**. Firmware may be available
+- Requires firmware version **8.18.x.** and management firmware
+  version **8.18.x or higher**. Firmware may be available
   inbox in certain newer Linux distros under the standard directory
-  ``E.g.
/lib/firmware/qed/qed_init_values-8.14.6.0.bin`` + ``E.g. /lib/firmware/qed/qed_init_values-8.18.9.0.bin`` - If the required firmware files are not available then visit `QLogic Driver Download Center `_. @@ -119,7 +119,7 @@ enabling debugging options may affect system performance. - ``CONFIG_RTE_LIBRTE_QEDE_FW`` (default **""**) Gives absolute path of firmware file. - ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.14.6.0.bin"`` + ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.18.9.0.bin"`` Empty string indicates driver will pick up the firmware file from the default location. diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h index 88246b7..0d239c9 100644 --- a/drivers/net/qede/base/bcm_osal.h +++ b/drivers/net/qede/base/bcm_osal.h @@ -398,6 +398,7 @@ u32 qede_osal_log2(u32); #define OSAL_STRCPY(dst, string) strcpy(dst, string) #define OSAL_STRNCPY(dst, string, len) strncpy(dst, string, len) #define OSAL_STRCMP(str1, str2) strcmp(str1, str2) +#define OSAL_STRTOUL(str, base, res) 0 #define OSAL_INLINE inline #define OSAL_REG_ADDR(_p_hwfn, _offset) \ diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h index 59e751f..cbcde22 100644 --- a/drivers/net/qede/base/common_hsi.h +++ b/drivers/net/qede/base/common_hsi.h @@ -78,8 +78,16 @@ #define CORE_SPQE_PAGE_SIZE_BYTES 4096 -#define MAX_NUM_LL2_RX_QUEUES 32 -#define MAX_NUM_LL2_TX_STATS_COUNTERS 32 +/* + * Usually LL2 queues are opened in pairs TX-RX. + * There is a hard restriction on number of RX queues (limited by Tstorm RAM) + * and TX counters (Pstorm RAM). + * Number of TX queues is almost unlimited. + * The constants are different so as to allow asymmetric LL2 connections + */ + +#define MAX_NUM_LL2_RX_QUEUES 48 +#define MAX_NUM_LL2_TX_STATS_COUNTERS 48 /****************************************************************************/ @@ -89,8 +97,8 @@ #define FW_MAJOR_VERSION 8 -#define FW_MINOR_VERSION 14 -#define FW_REVISION_VERSION 6 +#define FW_MINOR_VERSION 18 +#define FW_REVISION_VERSION 9 #define FW_ENGINEERING_VERSION 0 /***********************/ @@ -110,6 +118,7 @@ #define MAX_NUM_VFS_BB (120) #define MAX_NUM_VFS_K2 (192) #define E4_MAX_NUM_VFS (MAX_NUM_VFS_K2) +#define COMMON_MAX_NUM_VFS (240) #define MAX_NUM_FUNCTIONS_BB (MAX_NUM_PFS_BB + MAX_NUM_VFS_BB) #define MAX_NUM_FUNCTIONS_K2 (MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2) @@ -177,6 +186,13 @@ #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT (12) #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK (0xfff) +#define CDU_CONTEXT_VALIDATION_CFG_ENABLE_SHIFT (0) +#define CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT (1) +#define CDU_CONTEXT_VALIDATION_CFG_USE_TYPE (2) +#define CDU_CONTEXT_VALIDATION_CFG_USE_REGION (3) +#define CDU_CONTEXT_VALIDATION_CFG_USE_CID (4) +#define CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE (5) + /*****************/ /* DQ CONSTANTS */ @@ -472,7 +488,6 @@ #define PXP_BAR_DQ 1 /* PTT and GTT */ -#define PXP_NUM_PF_WINDOWS 12 #define PXP_PER_PF_ENTRY_SIZE 8 #define PXP_NUM_GLOBAL_WINDOWS 243 #define PXP_GLOBAL_ENTRY_SIZE 4 @@ -497,6 +512,8 @@ #define PXP_PF_ME_OPAQUE_ADDR 0x1f8 #define PXP_PF_ME_CONCRETE_ADDR 0x1fc +#define PXP_NUM_PF_WINDOWS 12 + #define PXP_EXTERNAL_BAR_PF_WINDOW_START 0x1000 #define PXP_EXTERNAL_BAR_PF_WINDOW_NUM PXP_NUM_PF_WINDOWS #define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE 0x1000 @@ -519,8 +536,6 @@ PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1) /* PF BAR */ -/*#define PXP_BAR0_START_GRC 0x1000 */ -/*#define PXP_BAR0_GRC_LENGTH 0xBFF000 */ #define PXP_BAR0_START_GRC 0x0000 #define 
PXP_BAR0_GRC_LENGTH 0x1C00000 #define PXP_BAR0_END_GRC \ @@ -589,7 +604,7 @@ #define SDM_OP_GEN_TRIG_AGG_INT 2 #define SDM_OP_GEN_TRIG_LOADER 4 #define SDM_OP_GEN_TRIG_INDICATE_ERROR 6 -#define SDM_OP_GEN_TRIG_RELEASE_THREAD 7 +#define SDM_OP_GEN_TRIG_INC_ORDER_CNT 9 /***********************************************************/ /* Completion types */ @@ -612,6 +627,7 @@ #define SDM_COMP_TYPE_RELEASE_THREAD 7 /* Write to local RAM as a completion */ #define SDM_COMP_TYPE_RAM 8 +#define SDM_COMP_TYPE_INC_ORDER_CNT 9 /* Applicable only for E4 */ /******************/ @@ -881,7 +897,7 @@ enum db_dest { */ enum db_dpm_type { DPM_LEGACY /* Legacy DPM- to Xstorm RAM */, - DPM_ROCE /* RoCE DPM- to NIG */, + DPM_RDMA /* RDMA DPM (only RoCE in E4) - to NIG */, /* L2 DPM inline- to PBF, with packet data on doorbell */ DPM_L2_INLINE, DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */, @@ -968,42 +984,42 @@ struct db_pwm_addr { }; /* - * Parameters to RoCE firmware, passed in EDPM doorbell + * Parameters to RDMA firmware, passed in EDPM doorbell */ -struct db_roce_dpm_params { +struct db_rdma_dpm_params { __le32 params; /* Size in QWORD-s of the DPM burst */ -#define DB_ROCE_DPM_PARAMS_SIZE_MASK 0x3F -#define DB_ROCE_DPM_PARAMS_SIZE_SHIFT 0 -/* Type of DPM transacation (DPM_ROCE) (use enum db_dpm_type) */ -#define DB_ROCE_DPM_PARAMS_DPM_TYPE_MASK 0x3 -#define DB_ROCE_DPM_PARAMS_DPM_TYPE_SHIFT 6 -/* opcode for ROCE operation */ -#define DB_ROCE_DPM_PARAMS_OPCODE_MASK 0xFF -#define DB_ROCE_DPM_PARAMS_OPCODE_SHIFT 8 +#define DB_RDMA_DPM_PARAMS_SIZE_MASK 0x3F +#define DB_RDMA_DPM_PARAMS_SIZE_SHIFT 0 +/* Type of DPM transacation (DPM_RDMA) (use enum db_dpm_type) */ +#define DB_RDMA_DPM_PARAMS_DPM_TYPE_MASK 0x3 +#define DB_RDMA_DPM_PARAMS_DPM_TYPE_SHIFT 6 +/* opcode for RDMA operation */ +#define DB_RDMA_DPM_PARAMS_OPCODE_MASK 0xFF +#define DB_RDMA_DPM_PARAMS_OPCODE_SHIFT 8 /* the size of the WQE payload in bytes */ -#define DB_ROCE_DPM_PARAMS_WQE_SIZE_MASK 0x7FF -#define DB_ROCE_DPM_PARAMS_WQE_SIZE_SHIFT 16 -#define DB_ROCE_DPM_PARAMS_RESERVED0_MASK 0x1 -#define DB_ROCE_DPM_PARAMS_RESERVED0_SHIFT 27 +#define DB_RDMA_DPM_PARAMS_WQE_SIZE_MASK 0x7FF +#define DB_RDMA_DPM_PARAMS_WQE_SIZE_SHIFT 16 +#define DB_RDMA_DPM_PARAMS_RESERVED0_MASK 0x1 +#define DB_RDMA_DPM_PARAMS_RESERVED0_SHIFT 27 /* RoCE completion flag */ -#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_MASK 0x1 -#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_SHIFT 28 -#define DB_ROCE_DPM_PARAMS_S_FLG_MASK 0x1 /* RoCE S flag */ -#define DB_ROCE_DPM_PARAMS_S_FLG_SHIFT 29 -#define DB_ROCE_DPM_PARAMS_RESERVED1_MASK 0x3 -#define DB_ROCE_DPM_PARAMS_RESERVED1_SHIFT 30 +#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK 0x1 +#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT 28 +#define DB_RDMA_DPM_PARAMS_S_FLG_MASK 0x1 /* RoCE S flag */ +#define DB_RDMA_DPM_PARAMS_S_FLG_SHIFT 29 +#define DB_RDMA_DPM_PARAMS_RESERVED1_MASK 0x3 +#define DB_RDMA_DPM_PARAMS_RESERVED1_SHIFT 30 }; /* - * Structure for doorbell data, in ROCE DPM mode, for the first doorbell in a + * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a * DPM burst */ -struct db_roce_dpm_data { +struct db_rdma_dpm_data { __le16 icid /* internal CID */; __le16 prod_val /* aggregated value to update */; -/* parameters passed to RoCE firmware */ - struct db_roce_dpm_params params; +/* parameters passed to RDMA firmware */ + struct db_rdma_dpm_params params; }; /* Igu interrupt command */ @@ -1136,6 +1152,68 @@ struct parsing_and_err_flags { /* + * Parsing error flags bitmap. 
+ */ +struct parsing_err_flags { + __le16 flags; +/* MAC error indication */ +#define PARSING_ERR_FLAGS_MAC_ERROR_MASK 0x1 +#define PARSING_ERR_FLAGS_MAC_ERROR_SHIFT 0 +/* truncation error indication */ +#define PARSING_ERR_FLAGS_TRUNC_ERROR_MASK 0x1 +#define PARSING_ERR_FLAGS_TRUNC_ERROR_SHIFT 1 +/* packet too small indication */ +#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_MASK 0x1 +#define PARSING_ERR_FLAGS_PKT_TOO_SMALL_SHIFT 2 +/* Header Missing Tag */ +#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_MASK 0x1 +#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_SHIFT 3 +/* from frame cracker output */ +#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_MASK 0x1 +#define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_SHIFT 4 +/* from frame cracker output */ +#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_MASK 0x1 +#define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_SHIFT 5 +/* set this error if: 1. total-len is smaller than hdr-len 2. total-ip-len + * indicates number that is bigger than real packet length 3. tunneling: + * total-ip-length of the outer header points to offset that is smaller than + * the one pointed to by the total-ip-len of the inner hdr. + */ +#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_MASK 0x1 +#define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_SHIFT 6 +/* from frame cracker output */ +#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_MASK 0x1 +#define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_SHIFT 7 +/* from frame cracker output. for either TCP or UDP */ +#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_MASK 0x1 +#define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_SHIFT 8 +/* from frame cracker output */ +#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_MASK 0x1 +#define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_SHIFT 9 +/* cksm calculated and value isn't 0xffff or L4-cksm-wasnt-calculated for any + * reason, like: udp/ipv4 checksum is 0 etc. 
+ */ +#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_MASK 0x1 +#define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_SHIFT 10 +/* from frame cracker output */ +#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_MASK 0x1 +#define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_SHIFT 11 +/* from frame cracker output */ +#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_MASK 0x1 +#define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_SHIFT 12 +/* set if geneve option size was over 32 byte */ +#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_MASK 0x1 +#define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_SHIFT 13 +/* from frame cracker output */ +#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_MASK 0x1 +#define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_SHIFT 14 +/* from frame cracker output */ +#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_MASK 0x1 +#define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_SHIFT 15 +}; + + +/* * Pb context */ struct pb_context { @@ -1492,49 +1570,57 @@ struct tdif_task_context { struct timers_context { __le32 logical_client_0; /* Expiration time of logical client 0 */ -#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK 0xFFFFFFF +#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK 0x7FFFFFF #define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT 0 +#define TIMERS_CONTEXT_RESERVED0_MASK 0x1 +#define TIMERS_CONTEXT_RESERVED0_SHIFT 27 /* Valid bit of logical client 0 */ #define TIMERS_CONTEXT_VALIDLC0_MASK 0x1 #define TIMERS_CONTEXT_VALIDLC0_SHIFT 28 /* Active bit of logical client 0 */ #define TIMERS_CONTEXT_ACTIVELC0_MASK 0x1 #define TIMERS_CONTEXT_ACTIVELC0_SHIFT 29 -#define TIMERS_CONTEXT_RESERVED0_MASK 0x3 -#define TIMERS_CONTEXT_RESERVED0_SHIFT 30 +#define TIMERS_CONTEXT_RESERVED1_MASK 0x3 +#define TIMERS_CONTEXT_RESERVED1_SHIFT 30 __le32 logical_client_1; /* Expiration time of logical client 1 */ -#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK 0xFFFFFFF +#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK 0x7FFFFFF #define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT 0 +#define TIMERS_CONTEXT_RESERVED2_MASK 0x1 +#define TIMERS_CONTEXT_RESERVED2_SHIFT 27 /* Valid bit of logical client 1 */ #define TIMERS_CONTEXT_VALIDLC1_MASK 0x1 #define TIMERS_CONTEXT_VALIDLC1_SHIFT 28 /* Active bit of logical client 1 */ #define TIMERS_CONTEXT_ACTIVELC1_MASK 0x1 #define TIMERS_CONTEXT_ACTIVELC1_SHIFT 29 -#define TIMERS_CONTEXT_RESERVED1_MASK 0x3 -#define TIMERS_CONTEXT_RESERVED1_SHIFT 30 +#define TIMERS_CONTEXT_RESERVED3_MASK 0x3 +#define TIMERS_CONTEXT_RESERVED3_SHIFT 30 __le32 logical_client_2; /* Expiration time of logical client 2 */ -#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK 0xFFFFFFF +#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK 0x7FFFFFF #define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT 0 +#define TIMERS_CONTEXT_RESERVED4_MASK 0x1 +#define TIMERS_CONTEXT_RESERVED4_SHIFT 27 /* Valid bit of logical client 2 */ #define TIMERS_CONTEXT_VALIDLC2_MASK 0x1 #define TIMERS_CONTEXT_VALIDLC2_SHIFT 28 /* Active bit of logical client 2 */ #define TIMERS_CONTEXT_ACTIVELC2_MASK 0x1 #define TIMERS_CONTEXT_ACTIVELC2_SHIFT 29 -#define TIMERS_CONTEXT_RESERVED2_MASK 0x3 -#define TIMERS_CONTEXT_RESERVED2_SHIFT 30 +#define TIMERS_CONTEXT_RESERVED5_MASK 0x3 +#define TIMERS_CONTEXT_RESERVED5_SHIFT 30 __le32 host_expiration_fields; /* Expiration time on host (closest one) */ -#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK 0xFFFFFFF +#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK 0x7FFFFFF #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0 +#define TIMERS_CONTEXT_RESERVED6_MASK 0x1 +#define 
TIMERS_CONTEXT_RESERVED6_SHIFT 27 /* Valid bit of host expiration */ #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK 0x1 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28 -#define TIMERS_CONTEXT_RESERVED3_MASK 0x7 -#define TIMERS_CONTEXT_RESERVED3_SHIFT 29 +#define TIMERS_CONTEXT_RESERVED7_MASK 0x7 +#define TIMERS_CONTEXT_RESERVED7_SHIFT 29 }; diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c index 7380fd8..102774d 100644 --- a/drivers/net/qede/base/ecore_dcbx.c +++ b/drivers/net/qede/base/ecore_dcbx.c @@ -126,7 +126,7 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data, else if (enable) p_data->arr[type].update = UPDATE_DCB; else - p_data->arr[type].update = DONT_UPDATE_DCB_DHCP; + p_data->arr[type].update = DONT_UPDATE_DCB_DSCP; /* QM reconf data */ if (p_hwfn->hw_info.personality == personality) { @@ -938,7 +938,7 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src, p_dest->pf_id = p_src->pf_id; update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update; - p_dest->update_eth_dcb_data_flag = update_flag; + p_dest->update_eth_dcb_data_mode = update_flag; p_dcb_data = &p_dest->eth_dcb_data; ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH); diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c index eef24cd..f82f5e6 100644 --- a/drivers/net/qede/base/ecore_dev.c +++ b/drivers/net/qede/base/ecore_dev.c @@ -814,7 +814,7 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn) int hw_mode = 0; if (ECORE_IS_BB_B0(p_hwfn->p_dev)) { - hw_mode |= 1 << MODE_BB_B0; + hw_mode |= 1 << MODE_BB; } else if (ECORE_IS_AH(p_hwfn->p_dev)) { hw_mode |= 1 << MODE_K2; } else { @@ -886,29 +886,36 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn) static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { + struct ecore_dev *p_dev = p_hwfn->p_dev; u32 pl_hv = 1; int i; - if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) - pl_hv |= 0x600; + if (CHIP_REV_IS_EMUL(p_dev)) { + if (ECORE_IS_AH(p_dev)) + pl_hv |= 0x600; + } ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv); - if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) - ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2, 0x3ffffff); + if (CHIP_REV_IS_EMUL(p_dev) && + (ECORE_IS_AH(p_dev))) + ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5, + 0x3ffffff); /* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */ /* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */ - if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev)) - ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB_B0, 4); + if (!CHIP_REV_IS_EMUL(p_dev) || ECORE_IS_BB(p_dev)) + ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4); - if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) { - /* 2 for 4-port, 1 for 2-port, 0 for 1-port */ - ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE, - (p_hwfn->p_dev->num_ports_in_engines >> 1)); + if (CHIP_REV_IS_EMUL(p_dev)) { + if (ECORE_IS_AH(p_dev)) { + /* 2 for 4-port, 1 for 2-port, 0 for 1-port */ + ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE, + (p_dev->num_ports_in_engines >> 1)); - ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN, - p_hwfn->p_dev->num_ports_in_engines == 4 ? 0 : 3); + ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN, + p_dev->num_ports_in_engines == 4 ? 
0 : 3); + } } /* Poll on RBC */ @@ -1051,12 +1058,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn, /* pretend to original PF */ ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id); - /* @@@TMP: - * CQ89456 - Mask the BRB "RC0_EOP_OUT_SYNC_FIFO_PUSH_ERROR" attention. - */ - if (ECORE_IS_AH(p_dev)) - ecore_wr(p_hwfn, p_ptt, BRB_REG_INT_MASK_10, 0x4000000); - return rc; } @@ -1072,20 +1073,19 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn, { DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "CMD: %08x, ADDR: 0x%08x, DATA: %08x:%08x\n", - ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) | + ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) | (8 << PMEG_IF_BYTE_COUNT), (reg_type << 25) | (addr << 8) | port, (u32)((data >> 32) & 0xffffffff), (u32)(data & 0xffffffff)); - ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0, - (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB_B0) & + ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB, + (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) & 0xffff00fe) | (8 << PMEG_IF_BYTE_COUNT)); - ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB_B0, + ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB, (reg_type << 25) | (addr << 8) | port); - ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0, - data & 0xffffffff); - ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB_B0, + ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB, data & 0xffffffff); + ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB, (data >> 32) & 0xffffffff); } @@ -1101,48 +1101,13 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn, #define XLMAC_PAUSE_CTRL (0x60d) #define XLMAC_PFC_CTRL (0x60e) -static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn, +static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { - u8 port = p_hwfn->port_id; - u32 mac_base = NWM_REG_MAC0 + (port << 2) * NWM_REG_MAC0_SIZE; - - ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2 + (port << 2), - (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_SHIFT) | - (port << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_SHIFT) - | (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_SHIFT)); - - ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE, - 1 << ETH_MAC_REG_XIF_MODE_XGMII_SHIFT); - - ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH, - 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_SHIFT); - - ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH, - 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_SHIFT); - - ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS, - 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_SHIFT); - - ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS, - (0xA << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_SHIFT) | - (8 << ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_SHIFT)); - - ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG, 0xa853); -} - -static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) -{ u8 loopback = 0, port = p_hwfn->port_id * 2; DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port); - if (ECORE_IS_AH(p_hwfn->p_dev)) { - ecore_emul_link_init_ah(p_hwfn, p_ptt); - return; - } - /* XLPORT MAC MODE *//* 0 Quad, 4 Single... 
*/ ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1, port); @@ -1171,8 +1136,53 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn, ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port); } -static void ecore_link_init(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt, u8 port) +static void ecore_emul_link_init_ah_e5(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + u8 port = p_hwfn->port_id; + u32 mac_base = NWM_REG_MAC0_K2_E5 + (port << 2) * NWM_REG_MAC0_SIZE; + + DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port); + + ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2_E5 + (port << 2), + (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT) | + (port << + CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT) | + (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT)); + + ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2_E5, + 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT); + + ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2_E5, + 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT); + + ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2_E5, + 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT); + + ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5, + 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT); + + ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5, + (0xA << + ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT) | + (8 << + ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT)); + + ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2_E5, + 0xa853); +} + +static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + if (ECORE_IS_AH(p_hwfn->p_dev)) + ecore_emul_link_init_ah_e5(p_hwfn, p_ptt); + else /* BB */ + ecore_emul_link_init_bb(p_hwfn, p_ptt); +} + +static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt, u8 port) { int port_offset = port ? 
0x800 : 0; u32 xmac_rxctrl = 0; @@ -1185,10 +1195,10 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn, ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + sizeof(u32), MISC_REG_RESET_REG_2_XMAC_BIT); /* Set */ - ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE, 1); + ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE_BB, 1); /* Set the number of ports on the Warp Core to 10G */ - ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE, 3); + ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_PHY_PORT_MODE_BB, 3); /* Soft reset of XMAC */ ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + 2 * sizeof(u32), @@ -1199,20 +1209,21 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn, /* FIXME: move to common end */ if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) - ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE + port_offset, 0x20); + ecore_wr(p_hwfn, p_ptt, XMAC_REG_MODE_BB + port_offset, 0x20); /* Set Max packet size: initialize XMAC block register for port 0 */ - ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE + port_offset, 0x2710); + ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_MAX_SIZE_BB + port_offset, 0x2710); /* CRC append for Tx packets: init XMAC block register for port 1 */ - ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO + port_offset, 0xC800); + ecore_wr(p_hwfn, p_ptt, XMAC_REG_TX_CTRL_LO_BB + port_offset, 0xC800); /* Enable TX and RX: initialize XMAC block register for port 1 */ - ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL + port_offset, - XMAC_REG_CTRL_TX_EN | XMAC_REG_CTRL_RX_EN); - xmac_rxctrl = ecore_rd(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset); - xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE; - ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL + port_offset, xmac_rxctrl); + ecore_wr(p_hwfn, p_ptt, XMAC_REG_CTRL_BB + port_offset, + XMAC_REG_CTRL_TX_EN_BB | XMAC_REG_CTRL_RX_EN_BB); + xmac_rxctrl = ecore_rd(p_hwfn, p_ptt, + XMAC_REG_RX_CTRL_BB + port_offset); + xmac_rxctrl |= XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB; + ecore_wr(p_hwfn, p_ptt, XMAC_REG_RX_CTRL_BB + port_offset, xmac_rxctrl); } #endif @@ -1233,7 +1244,8 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn, if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) { if (ECORE_IS_AH(p_hwfn->p_dev)) return ECORE_SUCCESS; - ecore_link_init(p_hwfn, p_ptt, p_hwfn->port_id); + else if (ECORE_IS_BB(p_hwfn->p_dev)) + ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id); } else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) { if (p_hwfn->p_dev->num_hwfns > 1) { /* Activate OPTE in CMT */ @@ -1667,7 +1679,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev, * out that these registers get initialized during the call to * ecore_mcp_load_req request. So we need to reread them here * to get the proper shadow register value. - * Note: This is a workaround for the missinginig MFW + * Note: This is a workaround for the missing MFW * initialization. It may be removed once the implementation * is done. 
*/ @@ -2033,22 +2045,22 @@ static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn) /* clear indirect access */ if (ECORE_IS_AH(p_hwfn->p_dev)) { ecore_wr(p_hwfn, p_hwfn->p_main_ptt, - PGLUE_B_REG_PGL_ADDR_E8_F0, 0); + PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0); ecore_wr(p_hwfn, p_hwfn->p_main_ptt, - PGLUE_B_REG_PGL_ADDR_EC_F0, 0); + PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5, 0); ecore_wr(p_hwfn, p_hwfn->p_main_ptt, - PGLUE_B_REG_PGL_ADDR_F0_F0, 0); + PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5, 0); ecore_wr(p_hwfn, p_hwfn->p_main_ptt, - PGLUE_B_REG_PGL_ADDR_F4_F0, 0); + PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5, 0); } else { ecore_wr(p_hwfn, p_hwfn->p_main_ptt, - PGLUE_B_REG_PGL_ADDR_88_F0, 0); + PGLUE_B_REG_PGL_ADDR_88_F0_BB, 0); ecore_wr(p_hwfn, p_hwfn->p_main_ptt, - PGLUE_B_REG_PGL_ADDR_8C_F0, 0); + PGLUE_B_REG_PGL_ADDR_8C_F0_BB, 0); ecore_wr(p_hwfn, p_hwfn->p_main_ptt, - PGLUE_B_REG_PGL_ADDR_90_F0, 0); + PGLUE_B_REG_PGL_ADDR_90_F0_BB, 0); ecore_wr(p_hwfn, p_hwfn->p_main_ptt, - PGLUE_B_REG_PGL_ADDR_94_F0, 0); + PGLUE_B_REG_PGL_ADDR_94_F0_BB, 0); } /* Clean Previous errors if such exist */ @@ -2643,7 +2655,12 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn, * In case of CMT in BB, only the "even" functions are enabled, and thus * the number of functions for both hwfns is learnt from the same bits. */ - reg_function_hide = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE); + if (ECORE_IS_BB(p_dev) || ECORE_IS_AH(p_dev)) { + reg_function_hide = ecore_rd(p_hwfn, p_ptt, + MISCS_REG_FUNCTION_HIDE_BB_K2); + } else { /* E5 */ + reg_function_hide = 0; + } if (reg_function_hide & 0x1) { if (ECORE_IS_BB(p_dev)) { @@ -2709,8 +2726,7 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn, port_mode = 1; else #endif - port_mode = ecore_rd(p_hwfn, p_ptt, - CNIG_REG_NW_PORT_MODE_BB_B0); + port_mode = ecore_rd(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB); if (port_mode < 3) { p_hwfn->p_dev->num_ports_in_engines = 1; @@ -2725,8 +2741,8 @@ static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn, } } -static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn, - struct ecore_ptt *p_ptt) +static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) { u32 port; int i; @@ -2755,7 +2771,8 @@ static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn, #endif for (i = 0; i < MAX_NUM_PORTS_K2; i++) { port = ecore_rd(p_hwfn, p_ptt, - CNIG_REG_NIG_PORT0_CONF_K2 + (i * 4)); + CNIG_REG_NIG_PORT0_CONF_K2_E5 + + (i * 4)); if (port & 1) p_hwfn->p_dev->num_ports_in_engines++; } @@ -2767,7 +2784,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn, if (ECORE_IS_BB(p_hwfn->p_dev)) ecore_hw_info_port_num_bb(p_hwfn, p_ptt); else - ecore_hw_info_port_num_ah(p_hwfn, p_ptt); + ecore_hw_info_port_num_ah_e5(p_hwfn, p_ptt); } static enum _ecore_status_t @@ -3076,12 +3093,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, if (CHIP_REV_IS_FPGA(p_dev)) { DP_NOTICE(p_hwfn, false, "FPGA: workaround; Prevent DMAE parities\n"); - ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK, 7); + ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2_E5, + 7); DP_NOTICE(p_hwfn, false, "FPGA: workaround: Set VF bar0 size\n"); ecore_wr(p_hwfn, p_hwfn->p_main_ptt, - PGLUE_B_REG_VF_BAR0_SIZE, 4); + PGLUE_B_REG_VF_BAR0_SIZE_K2_E5, 4); } #endif diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h index 070588d..2acd864 100644 --- a/drivers/net/qede/base/ecore_gtt_reg_addr.h +++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h @@ -10,43 
+10,43 @@ #define GTT_REG_ADDR_H /* Win 2 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_IGU_CMD 0x00f000UL /* Win 3 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_TSDM_RAM 0x010000UL /* Win 4 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_MSDM_RAM 0x011000UL /* Win 5 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_MSDM_RAM_1024 0x012000UL /* Win 6 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_USDM_RAM 0x013000UL /* Win 7 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_USDM_RAM_1024 0x014000UL /* Win 8 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_USDM_RAM_2048 0x015000UL /* Win 9 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_XSDM_RAM 0x016000UL /* Win 10 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_YSDM_RAM 0x017000UL /* Win 11 */ -/* Access:RW DataWidth:0x20 Chips: BB_B0 K2 E5 */ +/* Access:RW DataWidth:0x20 */ #define GTT_BAR0_MAP_REG_PSDM_RAM 0x018000UL #endif diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h index f934e68..3042ed5 100644 --- a/drivers/net/qede/base/ecore_hsi_common.h +++ b/drivers/net/qede/base/ecore_hsi_common.h @@ -836,7 +836,12 @@ struct core_rx_fast_path_cqe { __le16 packet_length /* Total packet length (from the parser) */; __le16 vlan /* 802.1q VLAN tag */; struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */; - __le32 reserved[4]; +/* bit- map: each bit represents a specific error. errors indications are + * provided by the cracker. 
see spec for detailed description + */ + struct parsing_err_flags err_flags; + __le16 reserved0; + __le32 reserved1[3]; }; /* @@ -1042,13 +1047,13 @@ struct core_tx_stop_ramrod_data { /* * Enum flag for what type of dcb data to update */ -enum dcb_dhcp_update_flag { +enum dcb_dscp_update_mode { /* use when no change should be done to dcb data */ - DONT_UPDATE_DCB_DHCP, + DONT_UPDATE_DCB_DSCP, UPDATE_DCB /* use to update only l2 (vlan) priority */, - UPDATE_DSCP /* use to update only l3 dhcp */, - UPDATE_DCB_DSCP /* update vlan pri and dhcp */, - MAX_DCB_DHCP_UPDATE_FLAG + UPDATE_DSCP /* use to update only l3 dscp */, + UPDATE_DCB_DSCP /* update vlan pri and dscp */, + MAX_DCB_DSCP_UPDATE_FLAG }; @@ -1232,6 +1237,10 @@ enum iwarp_ll2_tx_queues { IWARP_LL2_IN_ORDER_TX_QUEUE = 1, /* LL2 queue for unaligned packets sent aligned by the driver */ IWARP_LL2_ALIGNED_TX_QUEUE, +/* LL2 queue for unaligned packets sent aligned and was right-trimmed by the + * driver + */ + IWARP_LL2_ALIGNED_RIGHT_TRIMMED_TX_QUEUE, IWARP_LL2_ERROR /* Error indication */, MAX_IWARP_LL2_TX_QUEUES }; @@ -1446,13 +1455,13 @@ struct pf_update_tunnel_config { */ struct pf_update_ramrod_data { u8 pf_id; - u8 update_eth_dcb_data_flag /* Update Eth DCB data indication */; - u8 update_fcoe_dcb_data_flag /* Update FCOE DCB data indication */; - u8 update_iscsi_dcb_data_flag /* Update iSCSI DCB data indication */; - u8 update_roce_dcb_data_flag /* Update ROCE DCB data indication */; + u8 update_eth_dcb_data_mode /* Update Eth DCB data indication */; + u8 update_fcoe_dcb_data_mode /* Update FCOE DCB data indication */; + u8 update_iscsi_dcb_data_mode /* Update iSCSI DCB data indication */; + u8 update_roce_dcb_data_mode /* Update ROCE DCB data indication */; /* Update RROCE (RoceV2) DCB data indication */ - u8 update_rroce_dcb_data_flag; - u8 update_iwarp_dcb_data_flag /* Update IWARP DCB data indication */; + u8 update_rroce_dcb_data_mode; + u8 update_iwarp_dcb_data_mode /* Update IWARP DCB data indication */; u8 update_mf_vlan_flag /* Update MF outer vlan Id */; struct protocol_dcb_data eth_dcb_data /* core eth related fields */; struct protocol_dcb_data fcoe_dcb_data /* core fcoe related fields */; @@ -1611,6 +1620,8 @@ struct tstorm_per_port_stat { struct regpair fcoe_irregular_pkt; /* packet is an ROCE irregular packet */ struct regpair roce_irregular_pkt; +/* packet is an IWARP irregular packet */ + struct regpair iwarp_irregular_pkt; /* packet is an ETH irregular packet */ struct regpair eth_irregular_pkt; /* packet is an TOE irregular packet */ @@ -1861,8 +1872,11 @@ struct dmae_cmd { #define DMAE_CMD_SRC_VF_ID_SHIFT 0 #define DMAE_CMD_DST_VF_ID_MASK 0xFF /* Destination VF id */ #define DMAE_CMD_DST_VF_ID_SHIFT 8 - __le32 comp_addr_lo /* PCIe completion address low or grc address */; -/* PCIe completion address high or reserved (if completion address is in GRC) */ +/* PCIe completion address low in bytes or GRC completion address in DW */ + __le32 comp_addr_lo; +/* PCIe completion address high in bytes or reserved (if completion address is + * GRC) + */ __le32 comp_addr_hi; __le32 comp_val /* Value to write to completion address */; __le32 crc32 /* crc16 result */; @@ -2250,10 +2264,6 @@ struct sdm_op_gen { #define SDM_OP_GEN_RESERVED_SHIFT 20 }; - - - - struct ystorm_core_conn_ag_ctx { u8 byte0 /* cdu_validation */; u8 byte1 /* state */; diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h index effb6ed..917e8f4 100644 --- 
a/drivers/net/qede/base/ecore_hsi_debug_tools.h +++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h @@ -93,10 +93,12 @@ enum block_addr { GRCBASE_PHY_PCIE = 0x620000, GRCBASE_LED = 0x6b8000, GRCBASE_AVS_WRAP = 0x6b0000, - GRCBASE_RGFS = 0x19d0000, - GRCBASE_TGFS = 0x19e0000, - GRCBASE_PTLD = 0x19f0000, - GRCBASE_YPLD = 0x1a10000, + GRCBASE_RGFS = 0x1fa0000, + GRCBASE_RGSRC = 0x1fa8000, + GRCBASE_TGFS = 0x1fb0000, + GRCBASE_TGSRC = 0x1fb8000, + GRCBASE_PTLD = 0x1fc0000, + GRCBASE_YPLD = 0x1fe0000, GRCBASE_MISC_AEU = 0x8000, GRCBASE_BAR0_MAP = 0x1c00000, MAX_BLOCK_ADDR @@ -184,7 +186,9 @@ enum block_id { BLOCK_LED, BLOCK_AVS_WRAP, BLOCK_RGFS, + BLOCK_RGSRC, BLOCK_TGFS, + BLOCK_TGSRC, BLOCK_PTLD, BLOCK_YPLD, BLOCK_MISC_AEU, @@ -208,6 +212,10 @@ enum bin_dbg_buffer_type { BIN_BUF_DBG_ATTN_REGS /* Attention registers */, BIN_BUF_DBG_ATTN_INDEXES /* Attention indexes */, BIN_BUF_DBG_ATTN_NAME_OFFSETS /* Attention name offsets */, + BIN_BUF_DBG_BUS_BLOCKS /* Debug Bus blocks */, + BIN_BUF_DBG_BUS_LINES /* Debug Bus lines */, + BIN_BUF_DBG_BUS_BLOCKS_USER_DATA /* Debug Bus blocks user data */, + BIN_BUF_DBG_BUS_LINE_NAME_OFFSETS /* Debug Bus line name offsets */, BIN_BUF_DBG_PARSING_STRINGS /* Debug Tools parsing strings */, MAX_BIN_DBG_BUFFER_TYPE }; @@ -219,8 +227,8 @@ enum bin_dbg_buffer_type { struct dbg_attn_bit_mapping { __le16 data; /* The index of an attention in the blocks attentions list - * (if is_unused_idx_cnt=0), or a number of consecutive unused attention bits - * (if is_unused_idx_cnt=1) + * (if is_unused_bit_cnt=0), or a number of consecutive unused attention bits + * (if is_unused_bit_cnt=1) */ #define DBG_ATTN_BIT_MAPPING_VAL_MASK 0x7FFF #define DBG_ATTN_BIT_MAPPING_VAL_SHIFT 0 @@ -269,10 +277,10 @@ struct dbg_attn_reg_result { #define DBG_ATTN_REG_RESULT_STS_ADDRESS_MASK 0xFFFFFF #define DBG_ATTN_REG_RESULT_STS_ADDRESS_SHIFT 0 /* Number of attention indexes in this register */ -#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_MASK 0xFF -#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_SHIFT 24 -/* Offset of this registers block attention indexes (values in the range - * 0..number of block attentions) +#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_MASK 0xFF +#define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_SHIFT 24 +/* The offset of this registers attentions within the blocks attentions + * list (a value in the range 0..number of block attentions-1) */ __le16 attn_idx_offset; __le16 reserved; @@ -289,7 +297,7 @@ struct dbg_attn_block_result { /* Value from dbg_attn_type enum */ #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_MASK 0x3 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_SHIFT 0 -/* Number of registers in the blok in which at least one attention bit is set */ +/* Number of registers in block in which at least one attention bit is set */ #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_MASK 0x3F #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_SHIFT 2 /* Offset of this registers block attention names in the attention name offsets @@ -324,17 +332,17 @@ struct dbg_mode_hdr { */ struct dbg_attn_reg { struct dbg_mode_hdr mode /* Mode header */; -/* Offset of this registers block attention indexes (values in the range - * 0..number of block attentions) +/* The offset of this registers attentions within the blocks attentions + * list (a value in the range 0..number of block attentions-1) */ __le16 attn_idx_offset; __le32 data; /* STS attention register GRC address (in dwords) */ #define DBG_ATTN_REG_STS_ADDRESS_MASK 0xFFFFFF #define DBG_ATTN_REG_STS_ADDRESS_SHIFT 0 -/* Number of attention indexes in this register */ -#define 
DBG_ATTN_REG_NUM_ATTN_IDX_MASK 0xFF -#define DBG_ATTN_REG_NUM_ATTN_IDX_SHIFT 24 +/* Number of attention in this register */ +#define DBG_ATTN_REG_NUM_REG_ATTN_MASK 0xFF +#define DBG_ATTN_REG_NUM_REG_ATTN_SHIFT 24 /* STS_CLR attention register GRC address (in dwords) */ __le32 sts_clr_address; /* MASK attention register GRC address (in dwords) */ @@ -354,6 +362,53 @@ enum dbg_attn_type { /* + * Debug Bus block data + */ +struct dbg_bus_block { +/* Number of debug lines in this block (excluding signature & latency events) */ + u8 num_of_lines; +/* Indicates if this block has a latency events debug line (0/1). */ + u8 has_latency_events; +/* Offset of this blocks lines in the Debug Bus lines array. */ + __le16 lines_offset; +}; + + +/* + * Debug Bus block user data + */ +struct dbg_bus_block_user_data { +/* Number of debug lines in this block (excluding signature & latency events) */ + u8 num_of_lines; +/* Indicates if this block has a latency events debug line (0/1). */ + u8 has_latency_events; +/* Offset of this blocks lines in the debug bus line name offsets array. */ + __le16 names_offset; +}; + + +/* + * Block Debug line data + */ +struct dbg_bus_line { + u8 data; +/* Number of groups in the line (0-3) */ +#define DBG_BUS_LINE_NUM_OF_GROUPS_MASK 0xF +#define DBG_BUS_LINE_NUM_OF_GROUPS_SHIFT 0 +/* Indicates if this is a 128b line (0) or a 256b line (1). */ +#define DBG_BUS_LINE_IS_256B_MASK 0x1 +#define DBG_BUS_LINE_IS_256B_SHIFT 4 +#define DBG_BUS_LINE_RESERVED_MASK 0x7 +#define DBG_BUS_LINE_RESERVED_SHIFT 5 +/* Four 2-bit values, indicating the size of each group minus 1 (i.e. + * value=0 means size=1, value=1 means size=2, etc), starting from lsb. + * The sizes are in dwords (if is_256b=0) or in qwords (if is_256b=1). + */ + u8 group_sizes; +}; + + +/* * condition header for registers dump */ struct dbg_dump_cond_hdr { @@ -377,8 +432,11 @@ struct dbg_dump_mem { /* register size (in dwords) */ #define DBG_DUMP_MEM_LENGTH_MASK 0xFFFFFF #define DBG_DUMP_MEM_LENGTH_SHIFT 0 -#define DBG_DUMP_MEM_RESERVED_MASK 0xFF -#define DBG_DUMP_MEM_RESERVED_SHIFT 24 +/* indicates if the register is wide-bus */ +#define DBG_DUMP_MEM_WIDE_BUS_MASK 0x1 +#define DBG_DUMP_MEM_WIDE_BUS_SHIFT 24 +#define DBG_DUMP_MEM_RESERVED_MASK 0x7F +#define DBG_DUMP_MEM_RESERVED_SHIFT 25 }; @@ -388,10 +446,13 @@ struct dbg_dump_mem { struct dbg_dump_reg { __le32 data; /* register address (in dwords) */ -#define DBG_DUMP_REG_ADDRESS_MASK 0xFFFFFF -#define DBG_DUMP_REG_ADDRESS_SHIFT 0 -#define DBG_DUMP_REG_LENGTH_MASK 0xFF /* register size (in dwords) */ -#define DBG_DUMP_REG_LENGTH_SHIFT 24 +#define DBG_DUMP_REG_ADDRESS_MASK 0x7FFFFF /* register address (in dwords) */ +#define DBG_DUMP_REG_ADDRESS_SHIFT 0 +/* indicates if the register is wide-bus */ +#define DBG_DUMP_REG_WIDE_BUS_MASK 0x1 +#define DBG_DUMP_REG_WIDE_BUS_SHIFT 23 +#define DBG_DUMP_REG_LENGTH_MASK 0xFF /* register size (in dwords) */ +#define DBG_DUMP_REG_LENGTH_SHIFT 24 }; @@ -424,8 +485,11 @@ struct dbg_idle_chk_cond_hdr { struct dbg_idle_chk_cond_reg { __le32 data; /* Register GRC address (in dwords) */ -#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK 0xFFFFFF +#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK 0x7FFFFF #define DBG_IDLE_CHK_COND_REG_ADDRESS_SHIFT 0 +/* indicates if the register is wide-bus */ +#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_MASK 0x1 +#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_SHIFT 23 /* value from block_id enum */ #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_MASK 0xFF #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_SHIFT 24 @@ -441,8 +505,11 @@ struct 
dbg_idle_chk_cond_reg { struct dbg_idle_chk_info_reg { __le32 data; /* Register GRC address (in dwords) */ -#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK 0xFFFFFF +#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK 0x7FFFFF #define DBG_IDLE_CHK_INFO_REG_ADDRESS_SHIFT 0 +/* indicates if the register is wide-bus */ +#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_MASK 0x1 +#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_SHIFT 23 /* value from block_id enum */ #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_MASK 0xFF #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_SHIFT 24 @@ -544,17 +611,21 @@ enum dbg_idle_chk_severity_types { * Debug Bus block data */ struct dbg_bus_block_data { -/* Indicates if the block is enabled for recording (0/1) */ - u8 enabled; - u8 hw_id /* HW ID associated with the block */; + __le16 data; +/* 4-bit value: bit i set -> dword/qword i is enabled. */ +#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_MASK 0xF +#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_SHIFT 0 +/* Number of dwords/qwords to shift right the debug data (0-3) */ +#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_MASK 0xF +#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_SHIFT 4 +/* 4-bit value: bit i set -> dword/qword i is forced valid. */ +#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_MASK 0xF +#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_SHIFT 8 +/* 4-bit value: bit i set -> dword/qword i frame bit is forced. */ +#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_MASK 0xF +#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_SHIFT 12 u8 line_num /* Debug line number to select */; - u8 right_shift /* Number of units to right the debug data (0-3) */; - u8 cycle_en /* 4-bit value: bit i set -> unit i is enabled. */; -/* 4-bit value: bit i set -> unit i is forced valid. */ - u8 force_valid; -/* 4-bit value: bit i set -> unit i frame bit is forced. */ - u8 force_frame; - u8 reserved; + u8 hw_id /* HW ID associated with the block */; }; @@ -604,6 +675,21 @@ enum dbg_bus_constraint_ops { /* + * Debug Bus trigger state data + */ +struct dbg_bus_trigger_state_data { + u8 data; +/* 4-bit value: bit i set -> dword i of the trigger state block + * (after right shift) is enabled. + */ +#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_MASK 0xF +#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_SHIFT 0 +/* 4-bit value: bit i set -> dword i is compared by a constraint */ +#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_MASK 0xF +#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_SHIFT 4 +}; + +/* * Debug Bus memory address */ struct dbg_bus_mem_addr { @@ -650,14 +736,8 @@ union dbg_bus_storm_eid_params { * Debug Bus Storm data */ struct dbg_bus_storm_data { -/* Indicates if the Storm is enabled for fast debug recording (0/1) */ - u8 fast_enabled; -/* Fast debug Storm mode, valid only if fast_enabled is set */ - u8 fast_mode; -/* Indicates if the Storm is enabled for slow debug recording (0/1) */ - u8 slow_enabled; -/* Slow debug Storm mode, valid only if slow_enabled is set */ - u8 slow_mode; + u8 enabled /* indicates if the Storm is enabled for recording */; + u8 mode /* Storm debug mode, valid only if the Storm is enabled */; u8 hw_id /* HW ID associated with the Storm */; u8 eid_filter_en /* Indicates if EID filtering is performed (0/1) */; /* 1 = EID range filter, 0 = EID mask filter. Valid only if eid_filter_en is @@ -667,7 +747,6 @@ struct dbg_bus_storm_data { u8 cid_filter_en /* Indicates if CID filtering is performed (0/1) */; /* EID filter params to filter on. Valid only if eid_filter_en is set. 
*/ union dbg_bus_storm_eid_params eid_filter_params; - __le16 reserved; /* CID to filter on. Valid only if cid_filter_en is set. */ __le32 cid; }; @@ -679,20 +758,18 @@ struct dbg_bus_data { __le32 app_version /* The tools version number of the application */; u8 state /* The current debug bus state */; u8 hw_dwords /* HW dwords per cycle */; - u8 next_hw_id /* Next HW ID to be associated with an input */; +/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the + * HW ID of dword/qword i + */ + __le16 hw_id_mask; u8 num_enabled_blocks /* Number of blocks enabled for recording */; u8 num_enabled_storms /* Number of Storms enabled for recording */; u8 target /* Output target */; - u8 next_trigger_state /* ID of next trigger state to be added */; -/* ID of next filter/trigger constraint to be added */ - u8 next_constraint_id; u8 one_shot_en /* Indicates if one-shot mode is enabled (0/1) */; u8 grc_input_en /* Indicates if GRC recording is enabled (0/1) */; /* Indicates if timestamp recording is enabled (0/1) */ u8 timestamp_input_en; u8 filter_en /* Indicates if the recording filter is enabled (0/1) */; -/* Indicates if the recording trigger is enabled (0/1) */ - u8 trigger_en; /* If true, the next added constraint belong to the filter. Otherwise, * it belongs to the last added trigger state. Valid only if either filter or * triggers are enabled. @@ -706,6 +783,14 @@ struct dbg_bus_data { * Valid only if both filter and trigger are enabled (0/1) */ u8 filter_post_trigger; + __le16 reserved; +/* Indicates if the recording trigger is enabled (0/1) */ + u8 trigger_en; +/* trigger states data */ + struct dbg_bus_trigger_state_data trigger_states[3]; + u8 next_trigger_state /* ID of next trigger state to be added */; +/* ID of next filter/trigger constraint to be added */ + u8 next_constraint_id; /* If true, all inputs are associated with HW ID 0. Otherwise, each input is * assigned a different HW ID (0/1) */ @@ -716,7 +801,6 @@ struct dbg_bus_data { * DBG_BUS_TARGET_ID_PCI. 
*/ struct dbg_bus_pci_buf_data pci_buf; - __le16 reserved; /* Debug Bus data for each block */ struct dbg_bus_block_data blocks[88]; /* Debug Bus data for each block */ @@ -748,17 +832,6 @@ enum dbg_bus_frame_modes { /* - * Debug bus input types - */ -enum dbg_bus_input_types { - DBG_BUS_INPUT_TYPE_STORM, - DBG_BUS_INPUT_TYPE_BLOCK, - MAX_DBG_BUS_INPUT_TYPES -}; - - - -/* * Debug bus other engine mode */ enum dbg_bus_other_engine_modes { @@ -852,6 +925,7 @@ enum dbg_bus_targets { }; + /* * GRC Dump data */ @@ -987,7 +1061,10 @@ enum dbg_status { DBG_STATUS_REG_FIFO_BAD_DATA, DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA, DBG_STATUS_DBG_ARRAY_NOT_SET, - DBG_STATUS_MULTI_BLOCKS_WITH_FILTER, + DBG_STATUS_FILTER_BUG, + DBG_STATUS_NON_MATCHING_LINES, + DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET, + DBG_STATUS_DBG_BUS_IN_USE, MAX_DBG_STATUS }; @@ -1028,7 +1105,7 @@ struct dbg_tools_data { /* Indicates if a block is in reset state (0/1) */ u8 block_in_reset[88]; u8 chip_id /* Chip ID (from enum chip_ids) */; - u8 platform_id /* Platform ID (from enum platform_ids) */; + u8 platform_id /* Platform ID */; u8 initialized /* Indicates if the data was initialized */; u8 reserved; }; diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h index 9d2a118..397c408 100644 --- a/drivers/net/qede/base/ecore_hsi_eth.h +++ b/drivers/net/qede/base/ecore_hsi_eth.h @@ -739,6 +739,7 @@ enum eth_error_code { ETH_FILTERS_VNI_ADD_FAIL_FULL, /* vni add filters command failed due to duplicate VNI filter */ ETH_FILTERS_VNI_ADD_FAIL_DUP, + ETH_FILTERS_GFT_UPDATE_FAIL /* Fail update GFT filter. */, MAX_ETH_ERROR_CODE }; @@ -982,8 +983,10 @@ struct eth_vport_rss_config { u8 rss_id; u8 rss_mode /* The RSS mode for this function */; u8 update_rss_key /* if set update the rss key */; - u8 update_rss_ind_table /* if set update the indirection table */; - u8 update_rss_capabilities /* if set update the capabilities */; +/* if set update the indirection table values */ + u8 update_rss_ind_table; +/* if set update the capabilities and indirection table size. */ + u8 update_rss_capabilities; u8 tbl_size /* rss mask (Tbl size) */; __le32 reserved2[2]; /* RSS indirection table */ @@ -1267,7 +1270,10 @@ struct rx_update_gft_filter_data { /* Use enum to set type of flow using gft HW logic blocks */ u8 filter_type; u8 filter_action /* Use to set type of action on filter */; - u8 reserved; +/* 0 - dont assert in case of error. Just return an error code. 1 - assert in + * case of error. 
+ */ + u8 assert_on_error; }; @@ -2290,8 +2296,7 @@ enum gft_profile_upper_protocol_type { * GFT RAM line struct */ struct gft_ram_line { - __le32 low32bits; -/* (use enum gft_vlan_select) */ + __le32 lo; #define GFT_RAM_LINE_VLAN_SELECT_MASK 0x3 #define GFT_RAM_LINE_VLAN_SELECT_SHIFT 0 #define GFT_RAM_LINE_TUNNEL_ENTROPHY_MASK 0x1 @@ -2354,7 +2359,7 @@ struct gft_ram_line { #define GFT_RAM_LINE_DST_PORT_SHIFT 30 #define GFT_RAM_LINE_SRC_PORT_MASK 0x1 #define GFT_RAM_LINE_SRC_PORT_SHIFT 31 - __le32 high32bits; + __le32 hi; #define GFT_RAM_LINE_DSCP_MASK 0x1 #define GFT_RAM_LINE_DSCP_SHIFT 0 #define GFT_RAM_LINE_OVER_IP_PROTOCOL_MASK 0x1 diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h index d07549c..1f57e9b 100644 --- a/drivers/net/qede/base/ecore_hsi_init_tool.h +++ b/drivers/net/qede/base/ecore_hsi_init_tool.h @@ -22,43 +22,13 @@ /* Max size in dwords of a zipped array */ #define MAX_ZIPPED_SIZE 8192 -enum init_modes { - MODE_BB_A0_DEPRECATED, - MODE_BB_B0, - MODE_K2, - MODE_ASIC, - MODE_EMUL_REDUCED, - MODE_EMUL_FULL, - MODE_FPGA, - MODE_CHIPSIM, - MODE_SF, - MODE_MF_SD, - MODE_MF_SI, - MODE_PORTS_PER_ENG_1, - MODE_PORTS_PER_ENG_2, - MODE_PORTS_PER_ENG_4, - MODE_100G, - MODE_E5, - MAX_INIT_MODES -}; - -enum init_phases { - PHASE_ENGINE, - PHASE_PORT, - PHASE_PF, - PHASE_VF, - PHASE_QM_PF, - MAX_INIT_PHASES +enum chip_ids { + CHIP_BB, + CHIP_K2, + CHIP_E5, + MAX_CHIP_IDS }; -enum init_split_types { - SPLIT_TYPE_NONE, - SPLIT_TYPE_PORT, - SPLIT_TYPE_PF, - SPLIT_TYPE_PORT_PF, - SPLIT_TYPE_VF, - MAX_INIT_SPLIT_TYPES -}; struct fw_asserts_ram_section { /* The offset of the section in the RAM in RAM lines (64-bit units) */ @@ -196,8 +166,46 @@ union init_array_hdr { }; +enum init_modes { + MODE_BB_A0_DEPRECATED, + MODE_BB, + MODE_K2, + MODE_ASIC, + MODE_EMUL_REDUCED, + MODE_EMUL_FULL, + MODE_FPGA, + MODE_CHIPSIM, + MODE_SF, + MODE_MF_SD, + MODE_MF_SI, + MODE_PORTS_PER_ENG_1, + MODE_PORTS_PER_ENG_2, + MODE_PORTS_PER_ENG_4, + MODE_100G, + MODE_E5, + MAX_INIT_MODES +}; +enum init_phases { + PHASE_ENGINE, + PHASE_PORT, + PHASE_PF, + PHASE_VF, + PHASE_QM_PF, + MAX_INIT_PHASES +}; + + +enum init_split_types { + SPLIT_TYPE_NONE, + SPLIT_TYPE_PORT, + SPLIT_TYPE_PF, + SPLIT_TYPE_PORT_PF, + SPLIT_TYPE_VF, + MAX_INIT_SPLIT_TYPES +}; + /* * init array types diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c index 77f9152..af0deaa 100644 --- a/drivers/net/qede/base/ecore_init_fw_funcs.c +++ b/drivers/net/qede/base/ecore_init_fw_funcs.c @@ -17,112 +17,156 @@ #include "ecore_hsi_init_tool.h" #include "ecore_iro.h" #include "ecore_init_fw_funcs.h" -enum CmInterfaceEnum { - MCM_SEC, - MCM_PRI, - UCM_SEC, - UCM_PRI, - TCM_SEC, - TCM_PRI, - YCM_SEC, - YCM_PRI, - XCM_SEC, - XCM_PRI, - NUM_OF_CM_INTERFACES + +#define CDU_VALIDATION_DEFAULT_CFG 61 + +static u16 con_region_offsets[3][E4_NUM_OF_CONNECTION_TYPES] = { + { 400, 336, 352, 304, 304, 384, 416, 352}, /* region 3 offsets */ + { 528, 496, 416, 448, 448, 512, 544, 480}, /* region 4 offsets */ + { 608, 544, 496, 512, 576, 592, 624, 560} /* region 5 offsets */ +}; +static u16 task_region_offsets[1][E4_NUM_OF_CONNECTION_TYPES] = { + { 240, 240, 112, 0, 0, 0, 0, 96} /* region 1 offsets */ }; -/* general constants */ -#define QM_PQ_MEM_4KB(pq_size) \ -(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0) -#define QM_PQ_SIZE_256B(pq_size) \ -(pq_size ? 
DIV_ROUND_UP(pq_size, 0x100) - 1 : 0) -#define QM_INVALID_PQ_ID 0xffff -/* feature enable */ -#define QM_BYPASS_EN 1 -#define QM_BYTE_CRD_EN 1 -/* other PQ constants */ -#define QM_OTHER_PQS_PER_PF 4 -/* WFQ constants */ -#define QM_WFQ_UPPER_BOUND 62500000 + +/* General constants */ +#define QM_PQ_MEM_4KB(pq_size) (pq_size ? DIV_ROUND_UP((pq_size + 1) * \ + QM_PQ_ELEMENT_SIZE, 0x1000) : 0) +#define QM_PQ_SIZE_256B(pq_size) (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : \ + 0) +#define QM_INVALID_PQ_ID 0xffff + +/* Feature enable */ +#define QM_BYPASS_EN 1 +#define QM_BYTE_CRD_EN 1 + +/* Other PQ constants */ +#define QM_OTHER_PQS_PER_PF 4 + +/* WFQ constants: */ + +/* Upper bound in MB, 10 * burst size of 1ms in 50Gbps */ +#define QM_WFQ_UPPER_BOUND 62500000 + +/* Bit of VOQ in WFQ VP PQ map */ #define QM_WFQ_VP_PQ_VOQ_SHIFT 0 + +/* Bit of PF in WFQ VP PQ map */ #define QM_WFQ_VP_PQ_PF_SHIFT 5 + +/* 0x9000 = 4*9*1024 */ #define QM_WFQ_INC_VAL(weight) ((weight) * 0x9000) -#define QM_WFQ_MAX_INC_VAL 43750000 -/* RL constants */ -#define QM_RL_UPPER_BOUND 62500000 -#define QM_RL_PERIOD 5 + +/* 0.7 * upper bound (62500000) */ +#define QM_WFQ_MAX_INC_VAL 43750000 + +/* RL constants: */ + +/* Upper bound is set to 10 * burst size of 1ms in 50Gbps */ +#define QM_RL_UPPER_BOUND 62500000 + +/* Period in us */ +#define QM_RL_PERIOD 5 + +/* Period in 25MHz cycles */ #define QM_RL_PERIOD_CLK_25M (25 * QM_RL_PERIOD) -#define QM_RL_MAX_INC_VAL 43750000 -/* RL increment value - the factor of 1.01 was added after seeing only - * 99% factor reached in a 25Gbps port with DPDK RFC 2544 test. - * In this scenario the PF RL was reducing the line rate to 99% although - * the credit increment value was the correct one and FW calculated - * correct packet sizes. The reason for the inaccuracy of the RL is - * unknown at this point. + +/* 0.7 * upper bound (62500000) */ +#define QM_RL_MAX_INC_VAL 43750000 + +/* RL increment value - rate is specified in mbps. the factor of 1.01 was + * added after seeing only 99% factor reached in a 25Gbps port with DPDK RFC + * 2544 test. In this scenario the PF RL was reducing the line rate to 99% + * although the credit increment value was the correct one and FW calculated + * correct packet sizes. The reason for the inaccuracy of the RL is unknown at + * this point. */ -/* rate in mbps */ #define QM_RL_INC_VAL(rate) OSAL_MAX_T(u32, (u32)(((rate ? 
rate : 1000000) * \ - QM_RL_PERIOD * 101) / (8 * 100)), 1) + QM_RL_PERIOD * 101) / (8 * 100)), 1) + /* AFullOprtnstcCrdMask constants */ #define QM_OPPOR_LINE_VOQ_DEF 1 #define QM_OPPOR_FW_STOP_DEF 0 #define QM_OPPOR_PQ_EMPTY_DEF 1 -/* Command Queue constants */ -#define PBF_CMDQ_PURE_LB_LINES 150 + +/* Command Queue constants: */ + +/* Pure LB CmdQ lines (+spare) */ +#define PBF_CMDQ_PURE_LB_LINES 150 + #define PBF_CMDQ_LINES_RT_OFFSET(voq) \ -(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \ -voq * (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET \ -- PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET)) + (PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + voq * \ + (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - \ + PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET)) + #define PBF_BTB_GUARANTEED_RT_OFFSET(voq) \ -(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \ -(PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET)) + (PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + voq * \ + (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - \ + PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET)) + #define QM_VOQ_LINE_CRD(pbf_cmd_lines) \ ((((pbf_cmd_lines) - 4) * 2) | QM_LINE_CRD_REG_SIGN_BIT) + /* BTB: blocks constants (block size = 256B) */ -#define BTB_JUMBO_PKT_BLOCKS 38 /* 256B blocks in 9700B packet */ -/* headroom per-port */ -#define BTB_HEADROOM_BLOCKS BTB_JUMBO_PKT_BLOCKS + +/* 256B blocks in 9700B packet */ +#define BTB_JUMBO_PKT_BLOCKS 38 + +/* Headroom per-port */ +#define BTB_HEADROOM_BLOCKS BTB_JUMBO_PKT_BLOCKS #define BTB_PURE_LB_FACTOR 10 -#define BTB_PURE_LB_RATIO 7 /* factored (hence really 0.7) */ + +/* Factored (hence really 0.7) */ +#define BTB_PURE_LB_RATIO 7 + /* QM stop command constants */ -#define QM_STOP_PQ_MASK_WIDTH 32 -#define QM_STOP_CMD_ADDR 0x2 -#define QM_STOP_CMD_STRUCT_SIZE 2 +#define QM_STOP_PQ_MASK_WIDTH 32 +#define QM_STOP_CMD_ADDR 2 +#define QM_STOP_CMD_STRUCT_SIZE 2 #define QM_STOP_CMD_PAUSE_MASK_OFFSET 0 #define QM_STOP_CMD_PAUSE_MASK_SHIFT 0 -#define QM_STOP_CMD_PAUSE_MASK_MASK 0xffffffff /* @DPDK */ -#define QM_STOP_CMD_GROUP_ID_OFFSET 1 -#define QM_STOP_CMD_GROUP_ID_SHIFT 16 -#define QM_STOP_CMD_GROUP_ID_MASK 15 -#define QM_STOP_CMD_PQ_TYPE_OFFSET 1 -#define QM_STOP_CMD_PQ_TYPE_SHIFT 24 -#define QM_STOP_CMD_PQ_TYPE_MASK 1 -#define QM_STOP_CMD_MAX_POLL_COUNT 100 -#define QM_STOP_CMD_POLL_PERIOD_US 500 +#define QM_STOP_CMD_PAUSE_MASK_MASK 0xffffffff /* @DPDK */ +#define QM_STOP_CMD_GROUP_ID_OFFSET 1 +#define QM_STOP_CMD_GROUP_ID_SHIFT 16 +#define QM_STOP_CMD_GROUP_ID_MASK 15 +#define QM_STOP_CMD_PQ_TYPE_OFFSET 1 +#define QM_STOP_CMD_PQ_TYPE_SHIFT 24 +#define QM_STOP_CMD_PQ_TYPE_MASK 1 +#define QM_STOP_CMD_MAX_POLL_COUNT 100 +#define QM_STOP_CMD_POLL_PERIOD_US 500 + /* QM command macros */ -#define QM_CMD_STRUCT_SIZE(cmd) cmd##_STRUCT_SIZE +#define QM_CMD_STRUCT_SIZE(cmd) cmd##_STRUCT_SIZE #define QM_CMD_SET_FIELD(var, cmd, field, value) \ -SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value) + SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value) + /* QM: VOQ macros */ #define PHYS_VOQ(port, tc, max_phys_tcs_per_port) \ -((port) * (max_phys_tcs_per_port) + (tc)) -#define LB_VOQ(port) (MAX_PHYS_VOQS + (port)) + ((port) * (max_phys_tcs_per_port) + (tc)) +#define LB_VOQ(port) (MAX_PHYS_VOQS + (port)) #define VOQ(port, tc, max_phys_tcs_per_port) \ -((tc) < LB_TC ? PHYS_VOQ(port, tc, max_phys_tcs_per_port) : LB_VOQ(port)) + ((tc) < LB_TC ? 
PHYS_VOQ(port, tc, max_phys_tcs_per_port) : \ + LB_VOQ(port)) + + /******************** INTERNAL IMPLEMENTATION *********************/ + /* Prepare PF RL enable/disable runtime init values */ static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en) { STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0); if (pf_rl_en) { - /* enable RLs for all VOQs */ + /* Enable RLs for all VOQs */ STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET, (1 << MAX_NUM_VOQS) - 1); - /* write RL period */ + + /* Write RL period */ STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET, QM_RL_PERIOD_CLK_25M); STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET, QM_RL_PERIOD_CLK_25M); - /* set credit threshold for QM bypass flow */ + + /* Set credit threshold for QM bypass flow */ if (QM_BYPASS_EN) STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET, QM_RL_UPPER_BOUND); @@ -133,7 +177,8 @@ static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en) static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en) { STORE_RT_REG(p_hwfn, QM_REG_WFQPFENABLE_RT_OFFSET, pf_wfq_en ? 1 : 0); - /* set credit threshold for QM bypass flow */ + + /* Set credit threshold for QM bypass flow */ if (pf_wfq_en && QM_BYPASS_EN) STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET, QM_WFQ_UPPER_BOUND); @@ -145,12 +190,13 @@ static void ecore_enable_vport_rl(struct ecore_hwfn *p_hwfn, bool vport_rl_en) STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET, vport_rl_en ? 1 : 0); if (vport_rl_en) { - /* write RL period (use timer 0 only) */ + /* Write RL period (use timer 0 only) */ STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET, QM_RL_PERIOD_CLK_25M); STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET, QM_RL_PERIOD_CLK_25M); - /* set credit threshold for QM bypass flow */ + + /* Set credit threshold for QM bypass flow */ if (QM_BYPASS_EN) STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET, @@ -163,7 +209,8 @@ static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en) { STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET, vport_wfq_en ? 
1 : 0); - /* set credit threshold for QM bypass flow */ + + /* Set credit threshold for QM bypass flow */ if (vport_wfq_en && QM_BYPASS_EN) STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET, QM_WFQ_UPPER_BOUND); @@ -176,7 +223,9 @@ static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn, u8 voq, u16 cmdq_lines) { u32 qm_line_crd; + qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines); + OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), (u32)cmdq_lines); STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd); @@ -192,38 +241,43 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn, port_params[MAX_NUM_PORTS]) { u8 tc, voq, port_id, num_tcs_in_port; - /* clear PBF lines for all VOQs */ + + /* Clear PBF lines for all VOQs */ for (voq = 0; voq < MAX_NUM_VOQS; voq++) STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0); + for (port_id = 0; port_id < max_ports_per_engine; port_id++) { - if (port_params[port_id].active) { - u16 phys_lines, phys_lines_per_tc; - /* find #lines to divide between active physical TCs */ - phys_lines = - port_params[port_id].num_pbf_cmd_lines - - PBF_CMDQ_PURE_LB_LINES; - /* find #lines per active physical TC */ - num_tcs_in_port = 0; - for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) { - if (((port_params[port_id].active_phys_tcs >> - tc) & 0x1) == 1) - num_tcs_in_port++; - } - phys_lines_per_tc = phys_lines / num_tcs_in_port; - /* init registers per active TC */ - for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) { - if (((port_params[port_id].active_phys_tcs >> - tc) & 0x1) == 1) { - voq = PHYS_VOQ(port_id, tc, - max_phys_tcs_per_port); - ecore_cmdq_lines_voq_rt_init(p_hwfn, - voq, phys_lines_per_tc); - } + u16 phys_lines, phys_lines_per_tc; + + if (!port_params[port_id].active) + continue; + + /* Find #lines to divide between the active physical TCs */ + phys_lines = port_params[port_id].num_pbf_cmd_lines - + PBF_CMDQ_PURE_LB_LINES; + + /* Find #lines per active physical TC */ + num_tcs_in_port = 0; + for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) + if (((port_params[port_id].active_phys_tcs >> tc) & + 0x1) == 1) + num_tcs_in_port++; + phys_lines_per_tc = phys_lines / num_tcs_in_port; + + /* Init registers per active TC */ + for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) { + if (((port_params[port_id].active_phys_tcs >> tc) & + 0x1) == 1) { + voq = PHYS_VOQ(port_id, tc, + max_phys_tcs_per_port); + ecore_cmdq_lines_voq_rt_init(p_hwfn, voq, + phys_lines_per_tc); } - /* init registers for pure LB TC */ - ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id), - PBF_CMDQ_PURE_LB_LINES); } + + /* Init registers for pure LB TC */ + ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id), + PBF_CMDQ_PURE_LB_LINES); } } @@ -253,50 +307,51 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn, struct init_qm_port_params port_params[MAX_NUM_PORTS]) { - u8 tc, voq, port_id, num_tcs_in_port; u32 usable_blocks, pure_lb_blocks, phys_blocks; + u8 tc, voq, port_id, num_tcs_in_port; + for (port_id = 0; port_id < max_ports_per_engine; port_id++) { - if (port_params[port_id].active) { - /* subtract headroom blocks */ - usable_blocks = - port_params[port_id].num_btb_blocks - - BTB_HEADROOM_BLOCKS; -/* find blocks per physical TC. 
use factor to avoid floating arithmethic */ - - num_tcs_in_port = 0; - for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) - if (((port_params[port_id].active_phys_tcs >> - tc) & 0x1) == 1) - num_tcs_in_port++; - pure_lb_blocks = - (usable_blocks * BTB_PURE_LB_FACTOR) / - (num_tcs_in_port * - BTB_PURE_LB_FACTOR + BTB_PURE_LB_RATIO); - pure_lb_blocks = - OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS, - pure_lb_blocks / BTB_PURE_LB_FACTOR); - phys_blocks = - (usable_blocks - - pure_lb_blocks) / - num_tcs_in_port; - /* init physical TCs */ - for (tc = 0; - tc < NUM_OF_PHYS_TCS; - tc++) { - if (((port_params[port_id].active_phys_tcs >> - tc) & 0x1) == 1) { - voq = PHYS_VOQ(port_id, tc, - max_phys_tcs_per_port); - STORE_RT_REG(p_hwfn, + if (!port_params[port_id].active) + continue; + + /* Subtract headroom blocks */ + usable_blocks = port_params[port_id].num_btb_blocks - + BTB_HEADROOM_BLOCKS; + + /* Find blocks per physical TC. use factor to avoid floating + * arithmethic. + */ + num_tcs_in_port = 0; + for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) + if (((port_params[port_id].active_phys_tcs >> tc) & + 0x1) == 1) + num_tcs_in_port++; + + pure_lb_blocks = (usable_blocks * BTB_PURE_LB_FACTOR) / + (num_tcs_in_port * BTB_PURE_LB_FACTOR + + BTB_PURE_LB_RATIO); + pure_lb_blocks = OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS, + pure_lb_blocks / + BTB_PURE_LB_FACTOR); + phys_blocks = (usable_blocks - pure_lb_blocks) / + num_tcs_in_port; + + /* Init physical TCs */ + for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) { + if (((port_params[port_id].active_phys_tcs >> tc) & + 0x1) == 1) { + voq = PHYS_VOQ(port_id, tc, + max_phys_tcs_per_port); + STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(voq), phys_blocks); - } } - /* init pure LB TC */ - STORE_RT_REG(p_hwfn, - PBF_BTB_GUARANTEED_RT_OFFSET( - LB_VOQ(port_id)), pure_lb_blocks); } + + /* Init pure LB TC */ + STORE_RT_REG(p_hwfn, + PBF_BTB_GUARANTEED_RT_OFFSET(LB_VOQ(port_id)), + pure_lb_blocks); } } @@ -317,57 +372,69 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn, struct init_qm_pq_params *pq_params, struct init_qm_vport_params *vport_params) { - u16 i, pq_id, pq_group; - u16 num_pqs = num_pf_pqs + num_vf_pqs; - u16 first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE; - u16 last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE; - /* a bit per Tx PQ indicating if the PQ is associated with a VF */ + /* A bit per Tx PQ indicating if the PQ is associated with a VF */ u32 tx_pq_vf_mask[MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE] = { 0 }; u32 num_tx_pq_vf_masks = MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE; - u32 pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids); - u32 vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids); - u32 mem_addr_4kb = base_mem_addr_4kb; - /* set mapping from PQ group to PF */ + u16 num_pqs, first_pq_group, last_pq_group, i, pq_id, pq_group; + u32 pq_mem_4kb, vport_pq_mem_4kb, mem_addr_4kb; + + num_pqs = num_pf_pqs + num_vf_pqs; + + first_pq_group = start_pq / QM_PF_QUEUE_GROUP_SIZE; + last_pq_group = (start_pq + num_pqs - 1) / QM_PF_QUEUE_GROUP_SIZE; + + pq_mem_4kb = QM_PQ_MEM_4KB(num_pf_cids); + vport_pq_mem_4kb = QM_PQ_MEM_4KB(num_vf_cids); + mem_addr_4kb = base_mem_addr_4kb; + + /* Set mapping from PQ group to PF */ for (pq_group = first_pq_group; pq_group <= last_pq_group; pq_group++) STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group, (u32)(pf_id)); - /* set PQ sizes */ + + /* Set PQ sizes */ STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET, QM_PQ_SIZE_256B(num_pf_cids)); STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET, 
QM_PQ_SIZE_256B(num_vf_cids)); - /* go over all Tx PQs */ + + /* Go over all Tx PQs */ for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) { - struct qm_rf_pq_map tx_pq_map; - u8 voq = - VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port); - bool is_vf_pq = (i >= num_pf_pqs); - /* added to avoid compilation warning */ u32 max_qm_global_rls = MAX_QM_GLOBAL_RLS; - bool rl_valid = pq_params[i].rl_valid && - pq_params[i].vport_id < max_qm_global_rls; - /* update first Tx PQ of VPORT/TC */ - u8 vport_id_in_pf = pq_params[i].vport_id - start_vport; - u16 first_tx_pq_id = - vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i]. - tc_id]; + struct qm_rf_pq_map tx_pq_map; + bool is_vf_pq, rl_valid; + u8 voq, vport_id_in_pf; + u16 first_tx_pq_id; + + voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port); + is_vf_pq = (i >= num_pf_pqs); + rl_valid = pq_params[i].rl_valid && pq_params[i].vport_id < + max_qm_global_rls; + + /* Update first Tx PQ of VPORT/TC */ + vport_id_in_pf = pq_params[i].vport_id - start_vport; + first_tx_pq_id = + vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id]; if (first_tx_pq_id == QM_INVALID_PQ_ID) { - /* create new VP PQ */ + /* Create new VP PQ */ vport_params[vport_id_in_pf]. first_tx_pq_id[pq_params[i].tc_id] = pq_id; first_tx_pq_id = pq_id; - /* map VP PQ to VOQ and PF */ + + /* Map VP PQ to VOQ and PF */ STORE_RT_REG(p_hwfn, QM_REG_WFQVPMAP_RT_OFFSET + first_tx_pq_id, (voq << QM_WFQ_VP_PQ_VOQ_SHIFT) | (pf_id << QM_WFQ_VP_PQ_PF_SHIFT)); } - /* check RL ID */ + + /* Check RL ID */ if (pq_params[i].rl_valid && pq_params[i].vport_id >= max_qm_global_rls) DP_NOTICE(p_hwfn, true, - "Invalid VPORT ID for rate limiter config"); - /* fill PQ map entry */ + "Invalid VPORT ID for rate limiter config\n"); + + /* Fill PQ map entry */ OSAL_MEMSET(&tx_pq_map, 0, sizeof(tx_pq_map)); SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_RL_VALID, @@ -378,17 +445,17 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn, SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_VOQ, voq); SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, pq_params[i].wrr_group); - /* write PQ map entry to CAM */ + + /* Write PQ map entry to CAM */ STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id, *((u32 *)&tx_pq_map)); - /* set base address */ + + /* Set base address */ STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id, mem_addr_4kb); - /* check if VF PQ */ + + /* If VF PQ, add indication to PQ VF mask */ if (is_vf_pq) { - /* if PQ is associated with a VF, add indication to PQ - * VF mask - */ tx_pq_vf_mask[pq_id / QM_PF_QUEUE_GROUP_SIZE] |= (1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE)); mem_addr_4kb += vport_pq_mem_4kb; @@ -396,12 +463,12 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn, mem_addr_4kb += pq_mem_4kb; } } - /* store Tx PQ VF mask to size select register */ - for (i = 0; i < num_tx_pq_vf_masks; i++) { + + /* Store Tx PQ VF mask to size select register */ + for (i = 0; i < num_tx_pq_vf_masks; i++) if (tx_pq_vf_mask[i]) STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET + i, tx_pq_vf_mask[i]); - } } /* Prepare Other PQ mapping runtime init values for the specified PF */ @@ -411,20 +478,26 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn, u32 num_pf_cids, u32 num_tids, u32 base_mem_addr_4kb) { - u16 i, pq_id; -/* a single other PQ grp is used in each PF, where PQ group i is used in PF i */ - - u16 pq_group = pf_id; - u32 pq_size = num_pf_cids + num_tids; - u32 pq_mem_4kb = 
QM_PQ_MEM_4KB(pq_size); - u32 mem_addr_4kb = base_mem_addr_4kb; - /* map PQ group to PF */ + u32 pq_size, pq_mem_4kb, mem_addr_4kb; + u16 i, pq_id, pq_group; + + /* A single other PQ group is used in each PF, where PQ group i is used + * in PF i. + */ + pq_group = pf_id; + pq_size = num_pf_cids + num_tids; + pq_mem_4kb = QM_PQ_MEM_4KB(pq_size); + mem_addr_4kb = base_mem_addr_4kb; + + /* Map PQ group to PF */ STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group, (u32)(pf_id)); - /* set PQ sizes */ + + /* Set PQ sizes */ STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET, QM_PQ_SIZE_256B(pq_size)); - /* set base address */ + + /* Set base address */ for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE; i < QM_OTHER_PQS_PER_PF; i++, pq_id++) { STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id, @@ -432,7 +505,10 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn, mem_addr_4kb += pq_mem_4kb; } } -/* Prepare PF WFQ runtime init values for specified PF. Return -1 on error. */ + +/* Prepare PF WFQ runtime init values for the specified PF. + * Return -1 on error. + */ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn, u8 port_id, u8 pf_id, @@ -441,76 +517,89 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn, u16 num_tx_pqs, struct init_qm_pq_params *pq_params) { + u32 inc_val, crd_reg_offset; + u8 voq; u16 i; - u32 inc_val; - u32 crd_reg_offset = - (pf_id < - MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET : - QM_REG_WFQPFCRD_MSB_RT_OFFSET) + (pf_id % MAX_NUM_PFS_BB); + + crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET : + QM_REG_WFQPFCRD_MSB_RT_OFFSET) + + (pf_id % MAX_NUM_PFS_BB); + inc_val = QM_WFQ_INC_VAL(pf_wfq); - if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration"); + if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) { + DP_NOTICE(p_hwfn, true, + "Invalid PF WFQ weight configuration\n"); return -1; } + for (i = 0; i < num_tx_pqs; i++) { - u8 voq = - VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port); + voq = VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port); OVERWRITE_RT_REG(p_hwfn, crd_reg_offset + voq * MAX_NUM_PFS_BB, (u32)QM_WFQ_CRD_REG_SIGN_BIT); } + STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + pf_id, QM_WFQ_UPPER_BOUND | (u32)QM_WFQ_CRD_REG_SIGN_BIT); STORE_RT_REG(p_hwfn, QM_REG_WFQPFWEIGHT_RT_OFFSET + pf_id, inc_val); return 0; } -/* Prepare PF RL runtime init values for specified PF. Return -1 on error. */ + +/* Prepare PF RL runtime init values for the specified PF. + * Return -1 on error. + */ static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl) { - u32 inc_val = QM_RL_INC_VAL(pf_rl); + u32 inc_val; + + inc_val = QM_RL_INC_VAL(pf_rl); if (inc_val > QM_RL_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration"); + DP_NOTICE(p_hwfn, true, + "Invalid PF rate limit configuration\n"); return -1; } + STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id, (u32)QM_RL_CRD_REG_SIGN_BIT); STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id, QM_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT); STORE_RT_REG(p_hwfn, QM_REG_RLPFINCVAL_RT_OFFSET + pf_id, inc_val); + return 0; } -/* Prepare VPORT WFQ runtime init values for the specified VPORTs. Return -1 on - * error. + +/* Prepare VPORT WFQ runtime init values for the specified VPORTs. + * Return -1 on error. 
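 *
 * Editor's note, not part of the patch: a quick bound check implied by the
 * constants above (QM_WFQ_INC_VAL() multiplies the weight by 0x9000, i.e.
 * 36864, and QM_WFQ_MAX_INC_VAL is 43750000):
 *
 *   u32 inc_val    = QM_WFQ_INC_VAL(vport_wfq);   // vport_wfq * 36864
 *   u32 max_weight = QM_WFQ_MAX_INC_VAL / 0x9000; // 43750000 / 36864 = 1186
 *   // weights in [1, 1186] pass the inc_val validation performed below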
*/ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn, u8 num_vports, struct init_qm_vport_params *vport_params) { - u8 tc, i; + u16 vport_pq_id; u32 inc_val; - /* go over all PF VPORTs */ + u8 tc, i; + + /* Go over all PF VPORTs */ for (i = 0; i < num_vports; i++) { - if (vport_params[i].vport_wfq) { - inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq); - if (inc_val > QM_WFQ_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, - "Invalid VPORT WFQ weight config"); - return -1; - } - /* each VPORT can have several VPORT PQ IDs for - * different TCs - */ - for (tc = 0; tc < NUM_OF_TCS; tc++) { - u16 vport_pq_id = - vport_params[i].first_tx_pq_id[tc]; - if (vport_pq_id != QM_INVALID_PQ_ID) { - STORE_RT_REG(p_hwfn, - QM_REG_WFQVPCRD_RT_OFFSET + - vport_pq_id, - (u32)QM_WFQ_CRD_REG_SIGN_BIT); - STORE_RT_REG(p_hwfn, - QM_REG_WFQVPWEIGHT_RT_OFFSET - + vport_pq_id, inc_val); - } + if (!vport_params[i].vport_wfq) + continue; + + inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq); + if (inc_val > QM_WFQ_MAX_INC_VAL) { + DP_NOTICE(p_hwfn, true, + "Invalid VPORT WFQ weight configuration\n"); + return -1; + } + + /* Each VPORT can have several VPORT PQ IDs for various TCs */ + for (tc = 0; tc < NUM_OF_TCS; tc++) { + vport_pq_id = vport_params[i].first_tx_pq_id[tc]; + if (vport_pq_id != QM_INVALID_PQ_ID) { + STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET + + vport_pq_id, + (u32)QM_WFQ_CRD_REG_SIGN_BIT); + STORE_RT_REG(p_hwfn, + QM_REG_WFQVPWEIGHT_RT_OFFSET + + vport_pq_id, inc_val); } } } @@ -526,19 +615,23 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn, struct init_qm_vport_params *vport_params) { u8 i, vport_id; + u32 inc_val; + if (start_vport + num_vports >= MAX_QM_GLOBAL_RLS) { DP_NOTICE(p_hwfn, true, - "Invalid VPORT ID for rate limiter configuration"); + "Invalid VPORT ID for rate limiter configuration\n"); return -1; } - /* go over all PF VPORTs */ + + /* Go over all PF VPORTs */ for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) { u32 inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl); if (inc_val > QM_RL_MAX_INC_VAL) { DP_NOTICE(p_hwfn, true, - "Invalid VPORT rate-limit configuration"); + "Invalid VPORT rate-limit configuration\n"); return -1; } + STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + vport_id, (u32)QM_RL_CRD_REG_SIGN_BIT); STORE_RT_REG(p_hwfn, @@ -547,6 +640,7 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn, STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + vport_id, inc_val); } + return 0; } @@ -554,17 +648,20 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { u32 reg_val, i; - for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && reg_val == 0; + + for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val; i++) { OSAL_UDELAY(QM_STOP_CMD_POLL_PERIOD_US); reg_val = ecore_rd(p_hwfn, p_ptt, QM_REG_SDMCMDREADY); } - /* check if timeout while waiting for SDM command ready */ + + /* Check if timeout while waiting for SDM command ready */ if (i == QM_STOP_CMD_MAX_POLL_COUNT) { DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG, "Timeout waiting for QM SDM cmd ready signal\n"); return false; } + return true; } @@ -574,15 +671,19 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn, { if (!ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt)) return false; + ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDADDR, cmd_addr); ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATALSB, cmd_data_lsb); ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDDATAMSB, cmd_data_msb); ecore_wr(p_hwfn, p_ptt, QM_REG_SDMCMDGO, 1); ecore_wr(p_hwfn, p_ptt, 
QM_REG_SDMCMDGO, 0); + return ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt); } + /******************** INTERFACE IMPLEMENTATION *********************/ + u32 ecore_qm_pf_mem_size(u8 pf_id, u32 num_pf_cids, u32 num_vf_cids, @@ -603,32 +704,42 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn, struct init_qm_port_params port_params[MAX_NUM_PORTS]) { - /* init AFullOprtnstcCrdMask */ - u32 mask = - (QM_OPPOR_LINE_VOQ_DEF << QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) | - (QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) | - (pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) | - (vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) | - (pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) | - (vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) | - (QM_OPPOR_FW_STOP_DEF << QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) | - (QM_OPPOR_PQ_EMPTY_DEF << - QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT); + u32 mask; + + /* Init AFullOprtnstcCrdMask */ + mask = (QM_OPPOR_LINE_VOQ_DEF << + QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) | + (QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) | + (pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) | + (vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) | + (pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) | + (vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) | + (QM_OPPOR_FW_STOP_DEF << + QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) | + (QM_OPPOR_PQ_EMPTY_DEF << + QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT); STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask); - /* enable/disable PF RL */ + + /* Enable/disable PF RL */ ecore_enable_pf_rl(p_hwfn, pf_rl_en); - /* enable/disable PF WFQ */ + + /* Enable/disable PF WFQ */ ecore_enable_pf_wfq(p_hwfn, pf_wfq_en); - /* enable/disable VPORT RL */ + + /* Enable/disable VPORT RL */ ecore_enable_vport_rl(p_hwfn, vport_rl_en); - /* enable/disable VPORT WFQ */ + + /* Enable/disable VPORT WFQ */ ecore_enable_vport_wfq(p_hwfn, vport_wfq_en); - /* init PBF CMDQ line credit */ + + /* Init PBF CMDQ line credit */ ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine, max_phys_tcs_per_port, port_params); - /* init BTB blocks in PBF */ + + /* Init BTB blocks in PBF */ ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine, max_phys_tcs_per_port, port_params); + return 0; } @@ -651,66 +762,86 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn, struct init_qm_pq_params *pq_params, struct init_qm_vport_params *vport_params) { + u32 other_mem_size_4kb; u8 tc, i; - u32 other_mem_size_4kb = - QM_PQ_MEM_4KB(num_pf_cids + num_tids) * QM_OTHER_PQS_PER_PF; - /* clear first Tx PQ ID array for each VPORT */ + + other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) * + QM_OTHER_PQS_PER_PF; + + /* Clear first Tx PQ ID array for each VPORT */ for (i = 0; i < num_vports; i++) for (tc = 0; tc < NUM_OF_TCS; tc++) vport_params[i].first_tx_pq_id[tc] = QM_INVALID_PQ_ID; - /* map Other PQs (if any) */ + + /* Map Other PQs (if any) */ #if QM_OTHER_PQS_PER_PF > 0 ecore_other_pq_map_rt_init(p_hwfn, port_id, pf_id, num_pf_cids, num_tids, 0); #endif - /* map Tx PQs */ + + /* Map Tx PQs */ ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, port_id, pf_id, max_phys_tcs_per_port, is_first_pf, num_pf_cids, num_vf_cids, start_pq, num_pf_pqs, num_vf_pqs, start_vport, other_mem_size_4kb, pq_params, vport_params); - /* init PF WFQ */ + + /* Init PF WFQ */ if (pf_wfq) if (ecore_pf_wfq_rt_init (p_hwfn, port_id, pf_id, pf_wfq, max_phys_tcs_per_port, - num_pf_pqs + num_vf_pqs, pq_params) != 0) + num_pf_pqs + num_vf_pqs, pq_params)) return -1; - /* init PF 
RL */ - if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl) != 0) + + /* Init PF RL */ + if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl)) return -1; - /* set VPORT WFQ */ - if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params) != 0) + + /* Set VPORT WFQ */ + if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params)) return -1; - /* set VPORT RL */ + + /* Set VPORT RL */ if (ecore_vport_rl_rt_init - (p_hwfn, start_vport, num_vports, vport_params) != 0) + (p_hwfn, start_vport, num_vports, vport_params)) return -1; + return 0; } int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq) { - u32 inc_val = QM_WFQ_INC_VAL(pf_wfq); - if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration"); + u32 inc_val; + + inc_val = QM_WFQ_INC_VAL(pf_wfq); + if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) { + DP_NOTICE(p_hwfn, true, + "Invalid PF WFQ weight configuration\n"); return -1; } + ecore_wr(p_hwfn, p_ptt, QM_REG_WFQPFWEIGHT + pf_id * 4, inc_val); + return 0; } int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl) { - u32 inc_val = QM_RL_INC_VAL(pf_rl); + u32 inc_val; + + inc_val = QM_RL_INC_VAL(pf_rl); if (inc_val > QM_RL_MAX_INC_VAL) { - DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration"); + DP_NOTICE(p_hwfn, true, + "Invalid PF rate limit configuration\n"); return -1; } + ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4, (u32)QM_RL_CRD_REG_SIGN_BIT); ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFINCVAL + pf_id * 4, inc_val); + return 0; } @@ -718,20 +849,25 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq) { + u16 vport_pq_id; + u32 inc_val; u8 tc; - u32 inc_val = QM_WFQ_INC_VAL(vport_wfq); - if (inc_val == 0 || inc_val > QM_WFQ_MAX_INC_VAL) { + + inc_val = QM_WFQ_INC_VAL(vport_wfq); + if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) { DP_NOTICE(p_hwfn, true, - "Invalid VPORT WFQ weight configuration"); + "Invalid VPORT WFQ weight configuration\n"); return -1; } + for (tc = 0; tc < NUM_OF_TCS; tc++) { - u16 vport_pq_id = first_tx_pq_id[tc]; + vport_pq_id = first_tx_pq_id[tc]; if (vport_pq_id != QM_INVALID_PQ_ID) { ecore_wr(p_hwfn, p_ptt, QM_REG_WFQVPWEIGHT + vport_pq_id * 4, inc_val); } } + return 0; } @@ -739,20 +875,24 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 vport_id, u32 vport_rl) { u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS; + if (vport_id >= max_qm_global_rls) { DP_NOTICE(p_hwfn, true, - "Invalid VPORT ID for rate limiter configuration"); + "Invalid VPORT ID for rate limiter configuration\n"); return -1; } + inc_val = QM_RL_INC_VAL(vport_rl); if (inc_val > QM_RL_MAX_INC_VAL) { DP_NOTICE(p_hwfn, true, - "Invalid VPORT rate-limit configuration"); + "Invalid VPORT rate-limit configuration\n"); return -1; } + ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4, (u32)QM_RL_CRD_REG_SIGN_BIT); ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val); + return 0; } @@ -762,15 +902,20 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn, bool is_tx_pq, u16 start_pq, u16 num_pqs) { u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = { 0 }; - u32 pq_mask = 0, last_pq = start_pq + num_pqs - 1, pq_id; - /* set command's PQ type */ + u32 pq_mask = 0, last_pq, pq_id; + + last_pq = start_pq + num_pqs - 1; + + /* Set command's PQ type */ QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PQ_TYPE, is_tx_pq ? 
0 : 1); - /* go over requested PQs */ + + /* Go over requested PQs */ for (pq_id = start_pq; pq_id <= last_pq; pq_id++) { - /* set PQ bit in mask (stop command only) */ + /* Set PQ bit in mask (stop command only) */ if (!is_release_cmd) pq_mask |= (1 << (pq_id % QM_STOP_PQ_MASK_WIDTH)); - /* if last PQ or end of PQ mask, write command */ + + /* If last PQ or end of PQ mask, write command */ if ((pq_id == last_pq) || (pq_id % QM_STOP_PQ_MASK_WIDTH == (QM_STOP_PQ_MASK_WIDTH - 1))) { @@ -785,68 +930,92 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn, pq_mask = 0; } } + return true; } + /* NIG: ETS configuration constants */ #define NIG_TX_ETS_CLIENT_OFFSET 4 #define NIG_LB_ETS_CLIENT_OFFSET 1 #define NIG_ETS_MIN_WFQ_BYTES 1600 + /* NIG: ETS constants */ #define NIG_ETS_UP_BOUND(weight, mtu) \ -(2 * ((weight) > (mtu) ? (weight) : (mtu))) + (2 * ((weight) > (mtu) ? (weight) : (mtu))) + /* NIG: RL constants */ -#define NIG_RL_BASE_TYPE 1 /* byte base type */ -#define NIG_RL_PERIOD 1 /* in us */ + +/* Byte base type value */ +#define NIG_RL_BASE_TYPE 1 + +/* Period in us */ +#define NIG_RL_PERIOD 1 + +/* Period in 25MHz cycles */ #define NIG_RL_PERIOD_CLK_25M (25 * NIG_RL_PERIOD) + +/* Rate in mbps */ #define NIG_RL_INC_VAL(rate) (((rate) * NIG_RL_PERIOD) / 8) + #define NIG_RL_MAX_VAL(inc_val, mtu) \ -(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu))) + (2 * ((inc_val) > (mtu) ? (inc_val) : (mtu))) + /* NIG: packet prioritry configuration constants */ -#define NIG_PRIORITY_MAP_TC_BITS 4 +#define NIG_PRIORITY_MAP_TC_BITS 4 + + void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_ets_req *req, bool is_lb) { - u8 tc, sp_tc_map = 0, wfq_tc_map = 0; - u8 num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS; - u8 tc_client_offset = - is_lb ? NIG_LB_ETS_CLIENT_OFFSET : NIG_TX_ETS_CLIENT_OFFSET; - u32 min_weight = 0xffffffff; - u32 tc_weight_base_addr = - is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : - NIG_REG_TX_ARB_CREDIT_WEIGHT_0; - u32 tc_weight_addr_diff = - is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 - - NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : NIG_REG_TX_ARB_CREDIT_WEIGHT_1 - - NIG_REG_TX_ARB_CREDIT_WEIGHT_0; - u32 tc_bound_base_addr = - is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 : - NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0; - u32 tc_bound_addr_diff = - is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 - - NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 : - NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 - - NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0; + u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff; + u32 tc_bound_base_addr, tc_bound_addr_diff; + u8 sp_tc_map = 0, wfq_tc_map = 0; + u8 tc, num_tc, tc_client_offset; + + num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS; + tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET : + NIG_TX_ETS_CLIENT_OFFSET; + min_weight = 0xffffffff; + tc_weight_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : + NIG_REG_TX_ARB_CREDIT_WEIGHT_0; + tc_weight_addr_diff = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_1 - + NIG_REG_LB_ARB_CREDIT_WEIGHT_0 : + NIG_REG_TX_ARB_CREDIT_WEIGHT_1 - + NIG_REG_TX_ARB_CREDIT_WEIGHT_0; + tc_bound_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 : + NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0; + tc_bound_addr_diff = is_lb ? 
NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_1 - + NIG_REG_LB_ARB_CREDIT_UPPER_BOUND_0 : + NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_1 - + NIG_REG_TX_ARB_CREDIT_UPPER_BOUND_0; + for (tc = 0; tc < num_tc; tc++) { struct init_ets_tc_req *tc_req = &req->tc_req[tc]; - /* update SP map */ + + /* Update SP map */ if (tc_req->use_sp) sp_tc_map |= (1 << tc); - if (tc_req->use_wfq) { - /* update WFQ map */ - wfq_tc_map |= (1 << tc); - /* find minimal weight */ - if (tc_req->weight < min_weight) - min_weight = tc_req->weight; - } + + if (!tc_req->use_wfq) + continue; + + /* Update WFQ map */ + wfq_tc_map |= (1 << tc); + + /* Find minimal weight */ + if (tc_req->weight < min_weight) + min_weight = tc_req->weight; } - /* write SP map */ + + /* Write SP map */ ecore_wr(p_hwfn, p_ptt, is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT : NIG_REG_TX_ARB_CLIENT_IS_STRICT, (sp_tc_map << tc_client_offset)); - /* write WFQ map */ + + /* Write WFQ map */ ecore_wr(p_hwfn, p_ptt, is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ : NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ, @@ -854,22 +1023,23 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn, /* write WFQ weights */ for (tc = 0; tc < num_tc; tc++, tc_client_offset++) { struct init_ets_tc_req *tc_req = &req->tc_req[tc]; - if (tc_req->use_wfq) { - /* translate weight to bytes */ - u32 byte_weight = - (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) / - min_weight; - /* write WFQ weight */ - ecore_wr(p_hwfn, p_ptt, - tc_weight_base_addr + - tc_weight_addr_diff * tc_client_offset, - byte_weight); - /* write WFQ upper bound */ - ecore_wr(p_hwfn, p_ptt, - tc_bound_base_addr + - tc_bound_addr_diff * tc_client_offset, - NIG_ETS_UP_BOUND(byte_weight, req->mtu)); - } + u32 byte_weight; + + if (!tc_req->use_wfq) + continue; + + /* Translate weight to bytes */ + byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) / + min_weight; + + /* Write WFQ weight */ + ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr + + tc_weight_addr_diff * tc_client_offset, byte_weight); + + /* Write WFQ upper bound */ + ecore_wr(p_hwfn, p_ptt, tc_bound_base_addr + + tc_bound_addr_diff * tc_client_offset, + NIG_ETS_UP_BOUND(byte_weight, req->mtu)); } } @@ -877,16 +1047,18 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_nig_lb_rl_req *req) { - u8 tc; u32 ctrl, inc_val, reg_offset; - /* disable global MAC+LB RL */ + u8 tc; + + /* Disable global MAC+LB RL */ ctrl = NIG_RL_BASE_TYPE << NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT; ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl); - /* configure and enable global MAC+LB RL */ + + /* Configure and enable global MAC+LB RL */ if (req->lb_mac_rate) { - /* configure */ + /* Configure */ ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD, NIG_RL_PERIOD_CLK_25M); inc_val = NIG_RL_INC_VAL(req->lb_mac_rate); @@ -894,20 +1066,23 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn, inc_val); ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE, NIG_RL_MAX_VAL(inc_val, req->mtu)); - /* enable */ + + /* Enable */ ctrl |= 1 << NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT; ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl); } - /* disable global LB-only RL */ + + /* Disable global LB-only RL */ ctrl = NIG_RL_BASE_TYPE << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT; ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl); - /* configure and enable global LB-only RL */ + + /* Configure and enable global LB-only RL */ if (req->lb_rate) { - /* configure */ + /* Configure 
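 *
 * Editor's note, not part of the patch: with NIG_RL_PERIOD = 1 (us),
 * NIG_RL_INC_VAL(rate) reduces to rate / 8, i.e. the byte budget per
 * period for a rate given in mbps:
 *
 *   u32 lb_inc = NIG_RL_INC_VAL(50000); // hypothetical 50 Gbps -> 6250 bytes/us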
*/ ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD, NIG_RL_PERIOD_CLK_25M); inc_val = NIG_RL_INC_VAL(req->lb_rate); @@ -915,41 +1090,41 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn, inc_val); ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE, NIG_RL_MAX_VAL(inc_val, req->mtu)); - /* enable */ + + /* Enable */ ctrl |= 1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT; ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl); } - /* per-TC RLs */ + + /* Per-TC RLs */ for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS; tc++, reg_offset += 4) { - /* disable TC RL */ + /* Disable TC RL */ ctrl = NIG_RL_BASE_TYPE << NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT; ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl); - /* configure and enable TC RL */ - if (req->tc_rate[tc]) { - /* configure */ - ecore_wr(p_hwfn, p_ptt, - NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 + - reg_offset, NIG_RL_PERIOD_CLK_25M); - inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]); - ecore_wr(p_hwfn, p_ptt, - NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 + - reg_offset, inc_val); - ecore_wr(p_hwfn, p_ptt, - NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 + - reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu)); - /* enable */ - ctrl |= - 1 << - NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT; - ecore_wr(p_hwfn, p_ptt, - NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, - ctrl); - } + + /* Configure and enable TC RL */ + if (!req->tc_rate[tc]) + continue; + + /* Configure */ + ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 + + reg_offset, NIG_RL_PERIOD_CLK_25M); + inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]); + ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 + + reg_offset, inc_val); + ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 + + reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu)); + + /* Enable */ + ctrl |= 1 << + NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT; + ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 + + reg_offset, ctrl); } } @@ -957,20 +1132,23 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_nig_pri_tc_map_req *req) { - u8 pri, tc; - u32 pri_tc_mask = 0; u8 tc_pri_mask[NUM_OF_PHYS_TCS] = { 0 }; + u32 pri_tc_mask = 0; + u8 pri, tc; + for (pri = 0; pri < NUM_OF_VLAN_PRIORITIES; pri++) { - if (req->pri[pri].valid) { - pri_tc_mask |= - (req->pri[pri]. - tc_id << (pri * NIG_PRIORITY_MAP_TC_BITS)); - tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri); - } + if (!req->pri[pri].valid) + continue; + + pri_tc_mask |= (req->pri[pri].tc_id << + (pri * NIG_PRIORITY_MAP_TC_BITS)); + tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri); } - /* write priority -> TC mask */ + + /* Write priority -> TC mask */ ecore_wr(p_hwfn, p_ptt, NIG_REG_PKT_PRIORITY_TO_TC, pri_tc_mask); - /* write TC -> priority mask */ + + /* Write TC -> priority mask */ for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) { ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4, tc_pri_mask[tc]); @@ -979,110 +1157,133 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn, } } + /* PRS: ETS configuration constants */ -#define PRS_ETS_MIN_WFQ_BYTES 1600 +#define PRS_ETS_MIN_WFQ_BYTES 1600 #define PRS_ETS_UP_BOUND(weight, mtu) \ -(2 * ((weight) > (mtu) ? (weight) : (mtu))) + (2 * ((weight) > (mtu) ? 
(weight) : (mtu))) + + void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_ets_req *req) { + u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff; u8 tc, sp_tc_map = 0, wfq_tc_map = 0; - u32 min_weight = 0xffffffff; - u32 tc_weight_addr_diff = - PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0; - u32 tc_bound_addr_diff = - PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 - - PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0; + + tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - + PRS_REG_ETS_ARB_CREDIT_WEIGHT_0; + tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 - + PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0; + for (tc = 0; tc < NUM_OF_TCS; tc++) { struct init_ets_tc_req *tc_req = &req->tc_req[tc]; - /* update SP map */ + + /* Update SP map */ if (tc_req->use_sp) sp_tc_map |= (1 << tc); - if (tc_req->use_wfq) { - /* update WFQ map */ - wfq_tc_map |= (1 << tc); - /* find minimal weight */ - if (tc_req->weight < min_weight) - min_weight = tc_req->weight; - } + + if (!tc_req->use_wfq) + continue; + + /* Update WFQ map */ + wfq_tc_map |= (1 << tc); + + /* Find minimal weight */ + if (tc_req->weight < min_weight) + min_weight = tc_req->weight; } + /* write SP map */ ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map); + /* write WFQ map */ ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ, wfq_tc_map); + /* write WFQ weights */ for (tc = 0; tc < NUM_OF_TCS; tc++) { struct init_ets_tc_req *tc_req = &req->tc_req[tc]; - if (tc_req->use_wfq) { - /* translate weight to bytes */ - u32 byte_weight = - (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) / - min_weight; - /* write WFQ weight */ - ecore_wr(p_hwfn, p_ptt, - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + - tc * tc_weight_addr_diff, byte_weight); - /* write WFQ upper bound */ - ecore_wr(p_hwfn, p_ptt, - PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 + - tc * tc_bound_addr_diff, - PRS_ETS_UP_BOUND(byte_weight, req->mtu)); - } + u32 byte_weight; + + if (!tc_req->use_wfq) + continue; + + /* Translate weight to bytes */ + byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) / + min_weight; + + /* Write WFQ weight */ + ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc * + tc_weight_addr_diff, byte_weight); + + /* Write WFQ upper bound */ + ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 + + tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight, + req->mtu)); } } + /* BRB: RAM configuration constants */ #define BRB_TOTAL_RAM_BLOCKS_BB 4800 #define BRB_TOTAL_RAM_BLOCKS_K2 5632 -#define BRB_BLOCK_SIZE 128 /* in bytes */ +#define BRB_BLOCK_SIZE 128 #define BRB_MIN_BLOCKS_PER_TC 9 -#define BRB_HYST_BYTES 10240 -#define BRB_HYST_BLOCKS (BRB_HYST_BYTES / BRB_BLOCK_SIZE) -/* - * temporary big RAM allocation - should be updated - */ +#define BRB_HYST_BYTES 10240 +#define BRB_HYST_BLOCKS (BRB_HYST_BYTES / BRB_BLOCK_SIZE) + +/* Temporary big RAM allocation - should be updated */ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_brb_ram_req *req) { - u8 port, active_ports = 0; + u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks; u32 active_port_blocks, reg_offset = 0; - u32 tc_headroom_blocks = - (u32)DIV_ROUND_UP(req->headroom_per_tc, BRB_BLOCK_SIZE); - u32 min_pkt_size_blocks = - (u32)DIV_ROUND_UP(req->min_pkt_size, BRB_BLOCK_SIZE); - u32 total_blocks = - ECORE_IS_K2(p_hwfn-> - p_dev) ? 
BRB_TOTAL_RAM_BLOCKS_K2 : - BRB_TOTAL_RAM_BLOCKS_BB; - /* find number of active ports */ + u8 port, active_ports = 0; + + tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc, + BRB_BLOCK_SIZE); + min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size, + BRB_BLOCK_SIZE); + total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 : + BRB_TOTAL_RAM_BLOCKS_BB; + + /* Find number of active ports */ for (port = 0; port < MAX_NUM_PORTS; port++) if (req->num_active_tcs[port]) active_ports++; + active_port_blocks = (u32)(total_blocks / active_ports); + for (port = 0; port < req->max_ports_per_engine; port++) { - /* calculate per-port sizes */ - u32 tc_guaranteed_blocks = - (u32)DIV_ROUND_UP(req->guranteed_per_tc, BRB_BLOCK_SIZE); - u32 port_blocks = - req->num_active_tcs[port] ? active_port_blocks : 0; - u32 port_guaranteed_blocks = - req->num_active_tcs[port] * tc_guaranteed_blocks; - u32 port_shared_blocks = port_blocks - port_guaranteed_blocks; - u32 full_xoff_th = - req->num_active_tcs[port] * BRB_MIN_BLOCKS_PER_TC; - u32 full_xon_th = full_xoff_th + min_pkt_size_blocks; - u32 pause_xoff_th = tc_headroom_blocks; - u32 pause_xon_th = pause_xoff_th + min_pkt_size_blocks; + u32 port_blocks, port_shared_blocks, port_guaranteed_blocks; + u32 full_xoff_th, full_xon_th, pause_xoff_th, pause_xon_th; + u32 tc_guaranteed_blocks; u8 tc; - /* init total size per port */ + + /* Calculate per-port sizes */ + tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc, + BRB_BLOCK_SIZE); + port_blocks = req->num_active_tcs[port] ? active_port_blocks : + 0; + port_guaranteed_blocks = req->num_active_tcs[port] * + tc_guaranteed_blocks; + port_shared_blocks = port_blocks - port_guaranteed_blocks; + full_xoff_th = req->num_active_tcs[port] * + BRB_MIN_BLOCKS_PER_TC; + full_xon_th = full_xoff_th + min_pkt_size_blocks; + pause_xoff_th = tc_headroom_blocks; + pause_xon_th = pause_xoff_th + min_pkt_size_blocks; + + /* Init total size per port */ ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4, port_blocks); - /* init shared size per port */ + + /* Init shared size per port */ ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4, port_shared_blocks); + for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) { - /* clear init values for non-active TCs */ + /* Clear init values for non-active TCs */ if (tc == req->num_active_tcs[port]) { tc_guaranteed_blocks = 0; full_xoff_th = 0; @@ -1090,15 +1291,18 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, pause_xoff_th = 0; pause_xon_th = 0; } - /* init guaranteed size per TC */ + + /* Init guaranteed size per TC */ ecore_wr(p_hwfn, p_ptt, BRB_REG_TC_GUARANTIED_0 + reg_offset, tc_guaranteed_blocks); ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset, BRB_HYST_BLOCKS); -/* init pause/full thresholds per physical TC - for loopback traffic */ + /* Init pause/full thresholds per physical TC - for + * loopback traffic. + */ ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 + reg_offset, full_xoff_th); @@ -1111,7 +1315,10 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 + reg_offset, pause_xon_th); -/* init pause/full thresholds per physical TC - for main traffic */ + + /* Init pause/full thresholds per physical TC - for + * main traffic. 
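 *
 * Editor's note, not part of the patch: a worked example of the
 * threshold math above with hypothetical inputs of 4 active TCs and
 * headroom_per_tc = 10240 bytes (BRB_BLOCK_SIZE = 128 bytes):
 *
 *   tc_headroom_blocks = DIV_ROUND_UP(10240, 128);     // 80 blocks
 *   full_xoff_th  = 4 * BRB_MIN_BLOCKS_PER_TC;         // 4 * 9 = 36 blocks
 *   full_xon_th   = full_xoff_th + min_pkt_size_blocks;
 *   pause_xoff_th = tc_headroom_blocks;                // 80 blocks
 *   pause_xon_th  = pause_xoff_th + min_pkt_size_blocks;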
+ */ ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 + reg_offset, full_xoff_th); @@ -1128,23 +1335,25 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, } } -/*In MF should be called once per engine to set EtherType of OuterTag*/ +/* In MF should be called once per engine to set EtherType of OuterTag */ void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 ethType) { - /* update PRS register */ + /* Update PRS register */ STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType); - /* update NIG register */ + + /* Update NIG register */ STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType); - /* update PBF register */ + + /* Update PBF register */ STORE_RT_REG(p_hwfn, PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType); } -/*In MF should be called once per port to set EtherType of OuterTag*/ +/* In MF should be called once per port to set EtherType of OuterTag */ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 ethType) { - /* update DORQ register */ + /* Update DORQ register */ STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType); } @@ -1154,11 +1363,13 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 dest_port) { - /* update PRS register */ + /* Update PRS register */ ecore_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port); - /* update NIG register */ + + /* Update NIG register */ ecore_wr(p_hwfn, p_ptt, NIG_REG_VXLAN_CTRL, dest_port); - /* update PBF register */ + + /* Update PBF register */ ecore_wr(p_hwfn, p_ptt, PBF_REG_VXLAN_PORT, dest_port); } @@ -1166,23 +1377,26 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, bool vxlan_enable) { u32 reg_val; - /* update PRS register */ + + /* Update PRS register */ reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN); SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT, vxlan_enable); ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val); if (reg_val) { - ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0, + ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2, (u32)PRS_ETH_TUNN_FIC_FORMAT); } - /* update NIG register */ + + /* Update NIG register */ reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE); SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT, vxlan_enable); ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val); - /* update DORQ register */ + + /* Update DORQ register */ ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN, vxlan_enable ? 
1 : 0); } @@ -1192,7 +1406,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, bool eth_gre_enable, bool ip_gre_enable) { u32 reg_val; - /* update PRS register */ + + /* Update PRS register */ reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN); SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT, @@ -1202,10 +1417,11 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, ip_gre_enable); ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val); if (reg_val) { - ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0, + ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2, (u32)PRS_ETH_TUNN_FIC_FORMAT); } - /* update NIG register */ + + /* Update NIG register */ reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE); SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT, @@ -1214,7 +1430,8 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT, ip_gre_enable); ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val); - /* update DORQ registers */ + + /* Update DORQ registers */ ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN, eth_gre_enable ? 1 : 0); ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN, @@ -1224,11 +1441,13 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 dest_port) { - /* update PRS register */ + /* Update PRS register */ ecore_wr(p_hwfn, p_ptt, PRS_REG_NGE_PORT, dest_port); - /* update NIG register */ + + /* Update NIG register */ ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_PORT, dest_port); - /* update PBF register */ + + /* Update PBF register */ ecore_wr(p_hwfn, p_ptt, PBF_REG_NGE_PORT, dest_port); } @@ -1237,7 +1456,8 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn, bool eth_geneve_enable, bool ip_geneve_enable) { u32 reg_val; - /* update PRS register */ + + /* Update PRS register */ reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN); SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT, @@ -1247,37 +1467,44 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn, ip_geneve_enable); ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val); if (reg_val) { - ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0, + ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2, (u32)PRS_ETH_TUNN_FIC_FORMAT); } - /* update NIG register */ + + /* Update NIG register */ ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE, eth_geneve_enable ? 1 : 0); ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE, ip_geneve_enable ? 1 : 0); - /* EDPM with geneve tunnel not supported in BB_B0 */ + + /* EDPM with geneve tunnel not supported in BB */ if (ECORE_IS_BB_B0(p_hwfn->p_dev)) return; - /* update DORQ registers */ - ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN, + + /* Update DORQ registers */ + ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5, eth_geneve_enable ? 1 : 0); - ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN, + ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5, ip_geneve_enable ? 
1 : 0); } + #define T_ETH_PACKET_ACTION_GFT_EVENTID 23 #define PARSER_ETH_CONN_GFT_ACTION_CM_HDR 272 #define T_ETH_PACKET_MATCH_RFS_EVENTID 25 -#define PARSER_ETH_CONN_CM_HDR (0x0) +#define PARSER_ETH_CONN_CM_HDR 0 #define CAM_LINE_SIZE sizeof(u32) #define RAM_LINE_SIZE sizeof(u64) #define REG_SIZE sizeof(u32) + void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt) { - /* set RFS event ID to be awakened i Tstorm By Prs */ - u32 rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT); + u32 rfs_cm_hdr_event_id; + + /* Set RFS event ID to be awakened i Tstorm By Prs */ + rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT); rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID << PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT; rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR << @@ -1298,39 +1525,48 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn, struct gft_ram_line ramLine; u32 *ramLinePointer = (u32 *)&ramLine; int i; + if (!ipv6 && !ipv4) DP_NOTICE(p_hwfn, true, "set_rfs_mode_enable: must accept at " "least on of - ipv4 or ipv6"); + if (!tcp && !udp) DP_NOTICE(p_hwfn, true, "set_rfs_mode_enable: must accept at " "least on of - udp or tcp"); - /* set RFS event ID to be awakened i Tstorm By Prs */ + + /* Set RFS event ID to be awakened i Tstorm By Prs */ rfs_cm_hdr_event_id |= T_ETH_PACKET_MATCH_RFS_EVENTID << PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT; rfs_cm_hdr_event_id |= PARSER_ETH_CONN_CM_HDR << PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT; ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id); + /* Configure Registers for RFS mode */ -/* enable gft search */ + + /* Enable gft search */ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1); ecore_wr(p_hwfn, p_ptt, PRS_REG_LOAD_L2_FILTER, 0); /* do not load * context only cid * in PRS on match */ camLine.cam_line_mapped.camline = 0; - /* cam line is now valid!! */ + + /* Cam line is now valid!! */ SET_FIELD(camLine.cam_line_mapped.camline, GFT_CAM_LINE_MAPPED_VALID, 1); - /* filters are per PF!! */ + + /* Filters are per PF!! 
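 *
 * Editor's note, not part of the patch: each PF owns one CAM line and one
 * profile-mask RAM line, addressed by pf_id with the sizes defined above
 * (CAM_LINE_SIZE = sizeof(u32), RAM_LINE_SIZE = sizeof(u64)):
 *
 *   cam_addr = PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id;              // 4-byte stride
 *   ram_addr = PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id; // 8-byte stride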
*/ SET_FIELD(camLine.cam_line_mapped.camline, GFT_CAM_LINE_MAPPED_PF_ID_MASK, 1); SET_FIELD(camLine.cam_line_mapped.camline, GFT_CAM_LINE_MAPPED_PF_ID, pf_id); + if (!(tcp && udp)) { SET_FIELD(camLine.cam_line_mapped.camline, - GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK, 1); + GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK, + GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK); if (tcp) SET_FIELD(camLine.cam_line_mapped.camline, GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE, @@ -1340,6 +1576,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn, GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE, GFT_PROFILE_UDP_PROTOCOL); } + if (!(ipv4 && ipv6)) { SET_FIELD(camLine.cam_line_mapped.camline, GFT_CAM_LINE_MAPPED_IP_VERSION_MASK, 1); @@ -1352,44 +1589,53 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn, GFT_CAM_LINE_MAPPED_IP_VERSION, GFT_PROFILE_IPV6); } - /* write characteristics to cam */ + + /* Write characteristics to cam */ ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id, camLine.cam_line_mapped.camline); camLine.cam_line_mapped.camline = ecore_rd(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id); - /* write line to RAM - compare to filter 4 tuple */ - ramLine.low32bits = 0; - ramLine.high32bits = 0; - SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_DST_IP, 1); - SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_SRC_IP, 1); - SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_SRC_PORT, 1); - SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_DST_PORT, 1); - /* each iteration write to reg */ + + /* Write line to RAM - compare to filter 4 tuple */ + ramLine.lo = 0; + ramLine.hi = 0; + SET_FIELD(ramLine.hi, GFT_RAM_LINE_DST_IP, 1); + SET_FIELD(ramLine.hi, GFT_RAM_LINE_SRC_IP, 1); + SET_FIELD(ramLine.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1); + SET_FIELD(ramLine.lo, GFT_RAM_LINE_ETHERTYPE, 1); + SET_FIELD(ramLine.lo, GFT_RAM_LINE_SRC_PORT, 1); + SET_FIELD(ramLine.lo, GFT_RAM_LINE_DST_PORT, 1); + + /* Each iteration write to reg */ for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++) ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id + i * REG_SIZE, *(ramLinePointer + i)); - /* set default profile so that no filter match will happen */ - ramLine.low32bits = 0xffff; - ramLine.high32bits = 0xffff; + + /* Set default profile so that no filter match will happen */ + ramLine.lo = 0xffff; + ramLine.hi = 0xffff; for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++) ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * PRS_GFT_CAM_LINES_NO_MATCH + i * REG_SIZE, *(ramLinePointer + i)); } -/* Configure VF zone size mode*/ +/* Configure VF zone size mode */ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 mode, bool runtime_init) { u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG; u32 msdm_vf_offset_mask; + if (mode == VF_ZONE_SIZE_MODE_DOUBLE) msdm_vf_size_log += 1; else if (mode == VF_ZONE_SIZE_MODE_QUAD) msdm_vf_size_log += 2; + msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1; + if (runtime_init) { STORE_RT_REG(p_hwfn, PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET, @@ -1405,12 +1651,13 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, } } -/* get mstorm statistics for offset by VF zone size mode*/ +/* Get mstorm statistics for offset by VF zone size mode */ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn, u16 stat_cnt_id, u16 vf_zone_size_mode) { u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id); + if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) && (stat_cnt_id > MAX_NUM_PFS)) { if 
(vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE) @@ -1420,16 +1667,18 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn, offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) * (stat_cnt_id - MAX_NUM_PFS); } + return offset; } -/* get mstorm VF producer offset by VF zone size mode*/ +/* Get mstorm VF producer offset by VF zone size mode */ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8 vf_queue_id, u16 vf_zone_size_mode) { u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id); + if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) { if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE) offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) * @@ -1438,5 +1687,166 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) * vf_id; } + return offset; } + +/* Calculate CRC8 of first 4 bytes in buf */ +static u8 ecore_calc_crc8(const u8 *buf) +{ + u32 i, j, crc = 0xff << 8; + + /* CRC-8 polynomial */ + #define POLY 0x1070 + + for (j = 0; j < 4; j++, buf++) { + crc ^= (*buf << 8); + for (i = 0; i < 8; i++) { + if (crc & 0x8000) + crc ^= (POLY << 3); + + crc <<= 1; + } + } + + return (u8)(crc >> 8); +} + +/* Calculate and return CDU validation byte per conneciton type / region / + * cid + */ +static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, + u32 cid) +{ + const u8 validation_cfg = CDU_VALIDATION_DEFAULT_CFG; + u8 crc, validation_byte = 0; + u32 validation_string = 0; + const u8 *data_to_crc_rev; + u8 data_to_crc[4]; + + data_to_crc_rev = (const u8 *)&validation_string; + + /* + * The CRC is calculated on the String-to-compress: + * [31:8] = {CID[31:20],CID[11:0]} + * [7:4] = Region + * [3:0] = Type + */ + if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1) + validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8); + + if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1) + validation_string |= ((region & 0xF) << 4); + + if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1) + validation_string |= (conn_type & 0xF); + + /* Convert to big-endian (ntoh())*/ + data_to_crc[0] = data_to_crc_rev[3]; + data_to_crc[1] = data_to_crc_rev[2]; + data_to_crc[2] = data_to_crc_rev[1]; + data_to_crc[3] = data_to_crc_rev[0]; + + crc = ecore_calc_crc8(data_to_crc); + + validation_byte |= ((validation_cfg >> + CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7; + + if ((validation_cfg >> + CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1) + validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7); + else + validation_byte |= crc & 0x7F; + + return validation_byte; +} + +/* Calcualte and set validation bytes for session context */ +void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size, + u8 ctx_type, u32 cid) +{ + u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx; + + p_ctx = (u8 *)p_ctx_mem; + x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]]; + t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]]; + u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]]; + + OSAL_MEMSET(p_ctx, 0, ctx_size); + + *x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid); + *t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid); + *u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid); +} + +/* Calcualte and set validation bytes for task context */ +void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size, + u8 ctx_type, u32 tid) +{ + u8 *p_ctx, *region1_val_ptr; + + p_ctx = (u8 *)p_ctx_mem; + region1_val_ptr = 
&p_ctx[task_region_offsets[0][ctx_type]]; + + OSAL_MEMSET(p_ctx, 0, ctx_size); + + *region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid); +} + +/* Memset session context to 0 while preserving validation bytes */ +void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type) +{ + u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx; + u8 x_val, t_val, u_val; + + p_ctx = (u8 *)p_ctx_mem; + x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]]; + t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]]; + u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]]; + + x_val = *x_val_ptr; + t_val = *t_val_ptr; + u_val = *u_val_ptr; + + OSAL_MEMSET(p_ctx, 0, ctx_size); + + *x_val_ptr = x_val; + *t_val_ptr = t_val; + *u_val_ptr = u_val; +} + +/* Memset task context to 0 while preserving validation bytes */ +void ecore_memset_task_ctx(void *p_ctx_mem, const u32 ctx_size, + const u8 ctx_type) +{ + u8 *p_ctx, *region1_val_ptr; + u8 region1_val; + + p_ctx = (u8 *)p_ctx_mem; + region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]]; + + region1_val = *region1_val_ptr; + + OSAL_MEMSET(p_ctx, 0, ctx_size); + + *region1_val_ptr = region1_val; +} + +/* Enable and configure context validation */ +void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt) +{ + u32 ctx_validation; + + /* Enable validation for connection region 3 - bits [31:24] */ + ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 24; + ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation); + + /* Enable validation for connection region 5 - bits [15: 8] */ + ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8; + ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation); + + /* Enable validation for connection region 1 - bits [15: 8] */ + ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8; + ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation); +} diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h index 9df0e7d..2d1ab7c 100644 --- a/drivers/net/qede/base/ecore_init_fw_funcs.h +++ b/drivers/net/qede/base/ecore_init_fw_funcs.h @@ -8,20 +8,22 @@ #ifndef _INIT_FW_FUNCS_H #define _INIT_FW_FUNCS_H -/* forward declarations */ +/* Forward declarations */ + struct init_qm_pq_params; + /** - * @brief ecore_qm_pf_mem_size - prepare QM ILT sizes + * @brief ecore_qm_pf_mem_size - Prepare QM ILT sizes * * Returns the required host memory size in 4KB units. * Must be called before all QM init HSI functions. * - * @param pf_id - physical function ID - * @param num_pf_cids - number of connections used by this PF - * @param num_vf_cids - number of connections used by VFs of this PF - * @param num_tids - number of tasks used by this PF - * @param num_pf_pqs - number of PQs used by this PF - * @param num_vf_pqs - number of PQs used by VFs of this PF + * @param pf_id - physical function ID + * @param num_pf_cids - number of connections used by this PF + * @param num_vf_cids - number of connections used by VFs of this PF + * @param num_tids - number of tasks used by this PF + * @param num_pf_pqs - number of PQs used by this PF + * @param num_vf_pqs - number of PQs used by VFs of this PF * * @return The required host memory size in 4KB units. 
*/ @@ -31,6 +33,7 @@ u32 ecore_qm_pf_mem_size(u8 pf_id, u32 num_tids, u16 num_pf_pqs, u16 num_vf_pqs); + /** * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine * phase @@ -38,10 +41,10 @@ u32 ecore_qm_pf_mem_size(u8 pf_id, * @param p_hwfn * @param max_ports_per_engine - max number of ports per engine in HW * @param max_phys_tcs_per_port - max number of physical TCs per port in HW - * @param pf_rl_en - enable per-PF rate limiters - * @param pf_wfq_en - enable per-PF WFQ - * @param vport_rl_en - enable per-VPORT rate limiters - * @param vport_wfq_en - enable per-VPORT WFQ + * @param pf_rl_en - enable per-PF rate limiters + * @param pf_wfq_en - enable per-PF WFQ + * @param vport_rl_en - enable per-VPORT rate limiters + * @param vport_wfq_en - enable per-VPORT WFQ * @param port_params - array of size MAX_NUM_PORTS with params for each port * * @return 0 on success, -1 on error. @@ -54,22 +57,24 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn, bool vport_rl_en, bool vport_wfq_en, struct init_qm_port_params port_params[MAX_NUM_PORTS]); + /** * @brief ecore_qm_pf_rt_init Prepare QM runtime init values for the PF phase * * @param p_hwfn * @param p_ptt - ptt window used for writing the registers - * @param port_id - port ID - * @param pf_id - PF ID + * @param port_id - port ID + * @param pf_id - PF ID * @param max_phys_tcs_per_port - max number of physical TCs per port in HW - * @param is_first_pf - 1 = first PF in engine, 0 = othwerwise - * @param num_pf_cids - number of connections used by this PF + * @param is_first_pf - 1 = first PF in engine, 0 = othwerwise + * @param num_pf_cids - number of connections used by this PF * @param num_vf_cids - number of connections used by VFs of this PF - * @param num_tids - number of tasks used by this PF - * @param start_pq - first Tx PQ ID associated with this PF - * @param num_pf_pqs - number of Tx PQs associated with this PF (non-VF) - * @param num_vf_pqs - number of Tx PQs associated with a VF - * @param start_vport - first VPORT ID associated with this PF + * @param num_tids - number of tasks used by this PF + * @param start_pq - first Tx PQ ID associated with this PF + * @param num_pf_pqs - number of Tx PQs associated with this PF + * (non-VF) + * @param num_vf_pqs - number of Tx PQs associated with a VF + * @param start_vport - first VPORT ID associated with this PF * @param num_vports - number of VPORTs associated with this PF * @param pf_wfq - WFQ weight. if PF WFQ is globally disabled, the weight must * be 0. otherwise, the weight must be non-zero. 
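For reviewers, here is a minimal host-side sketch (not part of the patch) of the CDU context validation byte computation added above in ecore_init_fw_funcs.c. It assumes a configuration in which the type, region and CID fields are all enabled, the active bit is set and CRC-style validation is selected; the real behaviour is driven by CDU_VALIDATION_DEFAULT_CFG, whose value is defined in the FW headers and not spelled out here. calc_crc8(), the sample cid/region/conn_type values and the printf() are illustration only.

#include <stdint.h>
#include <stdio.h>

/* Same seed and polynomial as ecore_calc_crc8() above. */
static uint8_t calc_crc8(const uint8_t *buf)
{
	uint32_t i, j, crc = 0xff << 8;

	for (j = 0; j < 4; j++, buf++) {
		crc ^= (uint32_t)*buf << 8;
		for (i = 0; i < 8; i++) {
			if (crc & 0x8000)
				crc ^= (0x1070 << 3); /* CRC-8 polynomial */
			crc <<= 1;
		}
	}

	return (uint8_t)(crc >> 8);
}

int main(void)
{
	uint32_t cid = 0x1234, region = 3, conn_type = 1;

	/* String-to-compress: [31:8] = {CID[31:20], CID[11:0]},
	 * [7:4] = region, [3:0] = type; the CRC consumes it MSB first.
	 */
	uint32_t s = (cid & 0xFFF00000) | ((cid & 0xFFF) << 8) |
		     ((region & 0xF) << 4) | (conn_type & 0xF);
	uint8_t buf[4] = {
		(uint8_t)(s >> 24), (uint8_t)(s >> 16),
		(uint8_t)(s >> 8), (uint8_t)s
	};
	uint8_t crc = calc_crc8(buf);

	/* Assuming the active bit and CRC validation type are enabled,
	 * the byte is laid out as {active, type[3:0], crc[2:0]}.
	 */
	uint8_t val = 0x80 | (uint8_t)((conn_type & 0xF) << 3) | (crc & 0x7);

	printf("validation byte for cid 0x%x: 0x%02x\n", cid, val);
	return 0;
}

In the patch itself, ecore_calc_session_ctx_validation() stores bytes computed this way for regions 3, 4 and 5 into the X/T/U parts of the session context, and the memset helpers preserve exactly those bytes when zeroing a context.
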
@@ -100,6 +105,7 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn, u32 pf_rl, struct init_qm_pq_params *pq_params, struct init_qm_vport_params *vport_params); + /** * @brief ecore_init_pf_wfq Initializes the WFQ weight of the specified PF * @@ -114,11 +120,12 @@ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq); + /** - * @brief ecore_init_pf_rl Initializes the rate limit of the specified PF + * @brief ecore_init_pf_rl - Initializes the rate limit of the specified PF * * @param p_hwfn - * @param p_ptt - ptt window used for writing the registers + * @param p_ptt - ptt window used for writing the registers * @param pf_id - PF ID * @param pf_rl - rate limit in Mb/sec units * @@ -128,6 +135,7 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl); + /** * @brief ecore_init_vport_wfq Initializes the WFQ weight of specified VPORT * @@ -144,10 +152,12 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq); + /** - * @brief ecore_init_vport_rl Initializes the rate limit of the specified VPORT + * @brief ecore_init_vport_rl - Initializes the rate limit of the specified + * VPORT. * - * @param p_hwfn + * @param p_hwfn - HW device data * @param p_ptt - ptt window used for writing the registers * @param vport_id - VPORT ID * @param vport_rl - rate limit in Mb/sec units @@ -158,6 +168,7 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u8 vport_id, u32 vport_rl); + /** * @brief ecore_send_qm_stop_cmd Sends a stop command to the QM * @@ -178,6 +189,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn, u16 start_pq, u16 num_pqs); #ifndef UNUSED_HSI_FUNC + /** * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter * @@ -193,6 +205,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_ets_req *req, bool is_lb); + /** * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs * @@ -205,6 +218,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_nig_lb_rl_req *req); #endif /* UNUSED_HSI_FUNC */ + /** * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map. * @@ -216,6 +230,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn, void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_nig_pri_tc_map_req *req); + #ifndef UNUSED_HSI_FUNC /** * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter @@ -229,6 +244,7 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_ets_req *req); #endif /* UNUSED_HSI_FUNC */ + #ifndef UNUSED_HSI_FUNC /** * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC @@ -242,6 +258,7 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, struct init_brb_ram_req *req); #endif /* UNUSED_HSI_FUNC */ + #ifndef UNUSED_HSI_FUNC /** * @brief ecore_set_engine_mf_ovlan_eth_type - initializes Nig,Prs,Pbf and llh @@ -250,22 +267,24 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn, * if engine * is in BD mode. * - * @param p_ptt - ptt window used for writing the registers. + * @param p_ptt - ptt window used for writing the registers. 
* @param ethType - etherType to configure */ void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 ethType); + /** * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to * input ethType should Be called * once per port. * - * @param p_ptt - ptt window used for writing the registers. + * @param p_ptt - ptt window used for writing the registers. * @param ethType - etherType to configure */ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 ethType); #endif /* UNUSED_HSI_FUNC */ + /** * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp * port @@ -276,15 +295,17 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 dest_port); + /** * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW * - * @param p_ptt - ptt window used for writing the registers. - * @param vxlan_enable - vxlan enable flag. + * @param p_ptt - ptt window used for writing the registers. + * @param vxlan_enable - vxlan enable flag. */ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, bool vxlan_enable); + /** * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW * @@ -296,6 +317,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, bool eth_gre_enable, bool ip_gre_enable); + /** * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination * udp port @@ -306,6 +328,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn, void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 dest_port); + /** * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW * @@ -318,6 +341,7 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn, bool eth_geneve_enable, bool ip_geneve_enable); #ifndef UNUSED_HSI_FUNC + /** * @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header * @@ -325,16 +349,16 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn, */ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt); + /** * @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS * -* -* @param p_ptt - ptt window used for writing the registers. -* @param pf_id - pf on which to enable RFS. -* @param tcp - set profile tcp packets. -* @param udp - set profile udp packet. -* @param ipv4 - set profile ipv4 packet. -* @param ipv6 - set profile ipv6 packet. +* @param p_ptt - ptt window used for writing the registers. +* @param pf_id - pf on which to enable RFS. +* @param tcp - set profile tcp packets. +* @param udp - set profile udp packet. +* @param ipv4 - set profile ipv4 packet. +* @param ipv6 - set profile ipv6 packet. */ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, @@ -344,6 +368,7 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn, bool ipv4, bool ipv6); #endif /* UNUSED_HSI_FUNC */ + /** * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be * used before first ETH queue started. @@ -357,18 +382,20 @@ void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn, */ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 mode, bool runtime_init); + /** -* @brief ecore_get_mstorm_queue_stat_offset - get mstorm statistics offset by VF -* zone size mode. 
+ * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by + * VF zone size mode. * * @param stat_cnt_id - statistic counter id * @param vf_zone_size_mode - VF zone size mode. Use enum vf_zone_size_mode. */ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn, u16 stat_cnt_id, u16 vf_zone_size_mode); + /** -* @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone -* size mode. + * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone + * size mode. * * @param vf_id - vf id. * @param vf_queue_id - per VF rx queue id. @@ -376,4 +403,58 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn, */ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8 vf_queue_id, u16 vf_zone_size_mode); +/** + * @brief ecore_enable_context_validation - Enable and configure context + * validation. + * + * @param p_ptt - ptt window used for writing the registers. + */ +void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn, + struct ecore_ptt *p_ptt); +/** + * @brief ecore_calc_session_ctx_validation - Calcualte validation byte for + * session context. + * + * + * @param p_ctx_mem - pointer to context memory. + * @param ctx_size - context size. + * @param ctx_type - context type. + * @param cid - context cid. + */ +void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size, + u8 ctx_type, u32 cid); +/** + * @brief ecore_calc_task_ctx_validation - Calcualte validation byte for task + * context. + * + * + * @param p_ctx_mem - pointer to context memory. + * @param ctx_size - context size. + * @param ctx_type - context type. + * @param tid - context tid. + */ +void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size, + u8 ctx_type, u32 tid); +/** + * @brief ecore_memset_session_ctx - Memset session context to 0 while + * preserving validation bytes. + * + * + * @param p_ctx_mem - pointer to context memory. + * @param ctx_size - size to initialzie. + * @param ctx_type - context type. + */ +void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, + u8 ctx_type); +/** + * @brief ecore_memset_task_ctx - Memset session context to 0 while preserving + * validation bytes. + * + * + * @param p_ctx_mem - pointer to context memory. + * @param ctx_size - size to initialzie. + * @param ctx_type - context type. 
+ */ +void ecore_memset_task_ctx(void *p_ctx_mem, u32 ctx_size, + u8 ctx_type); #endif diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h index aad9012..b4bfe89 100644 --- a/drivers/net/qede/base/ecore_iro.h +++ b/drivers/net/qede/base/ecore_iro.h @@ -185,5 +185,13 @@ #define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \ ((rdma_stat_counter_id) * IRO[46].m1)) #define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size) +/* Xstorm iWARP rxmit stats */ +#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[47].base + \ + ((pf_id) * IRO[47].m1)) +#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[47].size) +/* Tstorm RoCE Event Statistics */ +#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[48].base + \ + ((roce_pf_id) * IRO[48].m1)) +#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[48].size) #endif /* __IRO_H__ */ diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h index 4ff7e95..6764bfa 100644 --- a/drivers/net/qede/base/ecore_iro_values.h +++ b/drivers/net/qede/base/ecore_iro_values.h @@ -9,13 +9,13 @@ #ifndef __IRO_VALUES_H__ #define __IRO_VALUES_H__ -static const struct iro iro_arr[47] = { +static const struct iro iro_arr[49] = { /* YSTORM_FLOW_CONTROL_MODE_OFFSET */ { 0x0, 0x0, 0x0, 0x0, 0x8}, /* TSTORM_PORT_STAT_OFFSET(port_id) */ - { 0x4cb0, 0x78, 0x0, 0x0, 0x78}, + { 0x4cb0, 0x80, 0x0, 0x0, 0x80}, /* TSTORM_LL2_PORT_STAT_OFFSET(port_id) */ - { 0x6318, 0x20, 0x0, 0x0, 0x20}, + { 0x6518, 0x20, 0x0, 0x0, 0x20}, /* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) */ { 0xb00, 0x8, 0x0, 0x0, 0x4}, /* USTORM_FLR_FINAL_ACK_OFFSET(pf_id) */ @@ -41,7 +41,7 @@ static const struct iro iro_arr[47] = { /* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) */ { 0xa28, 0x8, 0x0, 0x0, 0x8}, /* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */ - { 0x60f8, 0x10, 0x0, 0x0, 0x10}, + { 0x61f8, 0x10, 0x0, 0x0, 0x10}, /* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */ { 0xb820, 0x30, 0x0, 0x0, 0x30}, /* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */ @@ -53,7 +53,7 @@ static const struct iro iro_arr[47] = { /* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id) */ { 0x53a0, 0x80, 0x4, 0x0, 0x4}, /* MSTORM_TPA_TIMEOUT_US_OFFSET */ - { 0xc8f0, 0x0, 0x0, 0x0, 0x4}, + { 0xc7c8, 0x0, 0x0, 0x0, 0x4}, /* MSTORM_ETH_PF_STAT_OFFSET(pf_id) */ { 0x4ba0, 0x80, 0x0, 0x0, 0x20}, /* USTORM_QUEUE_STAT_OFFSET(stat_counter_id) */ @@ -63,13 +63,13 @@ static const struct iro iro_arr[47] = { /* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */ { 0x2b48, 0x80, 0x0, 0x0, 0x38}, /* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */ - { 0xf188, 0x78, 0x0, 0x0, 0x78}, + { 0xf1b0, 0x78, 0x0, 0x0, 0x78}, /* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */ { 0x1f8, 0x4, 0x0, 0x0, 0x4}, /* TSTORM_ETH_PRS_INPUT_OFFSET */ - { 0xacf0, 0x0, 0x0, 0x0, 0xf0}, + { 0xaef8, 0x0, 0x0, 0x0, 0xf0}, /* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */ - { 0xade0, 0x8, 0x0, 0x0, 0x8}, + { 0xafe8, 0x8, 0x0, 0x0, 0x8}, /* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */ { 0x1f8, 0x8, 0x0, 0x0, 0x8}, /* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */ @@ -85,9 +85,9 @@ static const struct iro iro_arr[47] = { /* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */ { 0xb78, 0x10, 0x8, 0x0, 0x2}, /* TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */ - { 0xd888, 0x38, 0x0, 0x0, 0x24}, + { 0xd9a8, 0x38, 0x0, 0x0, 0x24}, /* MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */ - { 0x12c38, 0x10, 0x0, 0x0, 0x8}, + { 0x12988, 0x10, 0x0, 0x0, 0x8}, /* USTORM_ISCSI_RX_STATS_OFFSET(pf_id) */ { 0x11aa0, 0x38, 0x0, 0x0, 0x18}, /* 
XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */ @@ -97,13 +97,17 @@ static const struct iro iro_arr[47] = { /* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */ { 0x101f8, 0x10, 0x0, 0x0, 0x10}, /* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */ - { 0xdd08, 0x48, 0x0, 0x0, 0x38}, + { 0xde28, 0x48, 0x0, 0x0, 0x38}, /* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */ { 0x10660, 0x20, 0x0, 0x0, 0x20}, /* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */ { 0x2b80, 0x80, 0x0, 0x0, 0x10}, /* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */ - { 0x5000, 0x10, 0x0, 0x0, 0x10}, + { 0x5020, 0x10, 0x0, 0x0, 0x10}, +/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */ + { 0xc9b0, 0x30, 0x0, 0x0, 0x10}, +/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */ + { 0xeec0, 0x10, 0x0, 0x0, 0x10}, }; #endif /* __IRO_VALUES_H__ */ diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h index 01a29e3..846dc6d 100644 --- a/drivers/net/qede/base/ecore_rt_defs.h +++ b/drivers/net/qede/base/ecore_rt_defs.h @@ -115,339 +115,338 @@ #define TM_REG_CONFIG_CONN_MEM_RT_OFFSET 28716 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE 416 #define TM_REG_CONFIG_TASK_MEM_RT_OFFSET 29132 -#define TM_REG_CONFIG_TASK_MEM_RT_SIZE 512 -#define QM_REG_MAXPQSIZE_0_RT_OFFSET 29644 -#define QM_REG_MAXPQSIZE_1_RT_OFFSET 29645 -#define QM_REG_MAXPQSIZE_2_RT_OFFSET 29646 -#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET 29647 -#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET 29648 -#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET 29649 -#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET 29650 -#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET 29651 -#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET 29652 -#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET 29653 -#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET 29654 -#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET 29655 -#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET 29656 -#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET 29657 -#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET 29658 -#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET 29659 -#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET 29660 -#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET 29661 -#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET 29662 -#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET 29663 -#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET 29664 -#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET 29665 -#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET 29666 -#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET 29667 -#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET 29668 -#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET 29669 -#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET 29670 -#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET 29671 -#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET 29672 -#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET 29673 -#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET 29674 -#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET 29675 -#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET 29676 -#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET 29677 -#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET 29678 -#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET 29679 -#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET 29680 -#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET 29681 -#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET 29682 -#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET 29683 -#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET 29684 -#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET 29685 -#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET 29686 -#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET 29687 -#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET 29688 -#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET 29689 -#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET 29690 -#define 
QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET 29691 -#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET 29692 -#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET 29693 -#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET 29694 -#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET 29695 -#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET 29696 -#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET 29697 -#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET 29698 -#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET 29699 -#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET 29700 -#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET 29701 -#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET 29702 -#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET 29703 -#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET 29704 -#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET 29705 -#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET 29706 -#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET 29707 -#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET 29708 -#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET 29709 -#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET 29710 -#define QM_REG_BASEADDROTHERPQ_RT_OFFSET 29711 -#define QM_REG_BASEADDROTHERPQ_RT_SIZE 128 -#define QM_REG_VOQCRDLINE_RT_OFFSET 29839 -#define QM_REG_VOQCRDLINE_RT_SIZE 20 -#define QM_REG_VOQINITCRDLINE_RT_OFFSET 29859 -#define QM_REG_VOQINITCRDLINE_RT_SIZE 20 -#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET 29879 -#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET 29880 -#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET 29881 -#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET 29882 -#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET 29883 -#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET 29884 -#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET 29885 -#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET 29886 -#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET 29887 -#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET 29888 -#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET 29889 -#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET 29890 -#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET 29891 -#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET 29892 -#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET 29893 -#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET 29894 -#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET 29895 -#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET 29896 -#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET 29897 -#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET 29898 -#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET 29899 -#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET 29900 -#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET 29901 -#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET 29902 -#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET 29903 -#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET 29904 -#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET 29905 -#define QM_REG_PQTX2PF_0_RT_OFFSET 29906 -#define QM_REG_PQTX2PF_1_RT_OFFSET 29907 -#define QM_REG_PQTX2PF_2_RT_OFFSET 29908 -#define QM_REG_PQTX2PF_3_RT_OFFSET 29909 -#define QM_REG_PQTX2PF_4_RT_OFFSET 29910 -#define QM_REG_PQTX2PF_5_RT_OFFSET 29911 -#define QM_REG_PQTX2PF_6_RT_OFFSET 29912 -#define QM_REG_PQTX2PF_7_RT_OFFSET 29913 -#define QM_REG_PQTX2PF_8_RT_OFFSET 29914 -#define QM_REG_PQTX2PF_9_RT_OFFSET 29915 -#define QM_REG_PQTX2PF_10_RT_OFFSET 29916 -#define QM_REG_PQTX2PF_11_RT_OFFSET 29917 -#define QM_REG_PQTX2PF_12_RT_OFFSET 29918 -#define QM_REG_PQTX2PF_13_RT_OFFSET 29919 -#define QM_REG_PQTX2PF_14_RT_OFFSET 29920 -#define QM_REG_PQTX2PF_15_RT_OFFSET 29921 -#define QM_REG_PQTX2PF_16_RT_OFFSET 29922 -#define QM_REG_PQTX2PF_17_RT_OFFSET 29923 -#define QM_REG_PQTX2PF_18_RT_OFFSET 29924 -#define QM_REG_PQTX2PF_19_RT_OFFSET 29925 -#define QM_REG_PQTX2PF_20_RT_OFFSET 29926 -#define QM_REG_PQTX2PF_21_RT_OFFSET 29927 -#define 
QM_REG_PQTX2PF_22_RT_OFFSET 29928 -#define QM_REG_PQTX2PF_23_RT_OFFSET 29929 -#define QM_REG_PQTX2PF_24_RT_OFFSET 29930 -#define QM_REG_PQTX2PF_25_RT_OFFSET 29931 -#define QM_REG_PQTX2PF_26_RT_OFFSET 29932 -#define QM_REG_PQTX2PF_27_RT_OFFSET 29933 -#define QM_REG_PQTX2PF_28_RT_OFFSET 29934 -#define QM_REG_PQTX2PF_29_RT_OFFSET 29935 -#define QM_REG_PQTX2PF_30_RT_OFFSET 29936 -#define QM_REG_PQTX2PF_31_RT_OFFSET 29937 -#define QM_REG_PQTX2PF_32_RT_OFFSET 29938 -#define QM_REG_PQTX2PF_33_RT_OFFSET 29939 -#define QM_REG_PQTX2PF_34_RT_OFFSET 29940 -#define QM_REG_PQTX2PF_35_RT_OFFSET 29941 -#define QM_REG_PQTX2PF_36_RT_OFFSET 29942 -#define QM_REG_PQTX2PF_37_RT_OFFSET 29943 -#define QM_REG_PQTX2PF_38_RT_OFFSET 29944 -#define QM_REG_PQTX2PF_39_RT_OFFSET 29945 -#define QM_REG_PQTX2PF_40_RT_OFFSET 29946 -#define QM_REG_PQTX2PF_41_RT_OFFSET 29947 -#define QM_REG_PQTX2PF_42_RT_OFFSET 29948 -#define QM_REG_PQTX2PF_43_RT_OFFSET 29949 -#define QM_REG_PQTX2PF_44_RT_OFFSET 29950 -#define QM_REG_PQTX2PF_45_RT_OFFSET 29951 -#define QM_REG_PQTX2PF_46_RT_OFFSET 29952 -#define QM_REG_PQTX2PF_47_RT_OFFSET 29953 -#define QM_REG_PQTX2PF_48_RT_OFFSET 29954 -#define QM_REG_PQTX2PF_49_RT_OFFSET 29955 -#define QM_REG_PQTX2PF_50_RT_OFFSET 29956 -#define QM_REG_PQTX2PF_51_RT_OFFSET 29957 -#define QM_REG_PQTX2PF_52_RT_OFFSET 29958 -#define QM_REG_PQTX2PF_53_RT_OFFSET 29959 -#define QM_REG_PQTX2PF_54_RT_OFFSET 29960 -#define QM_REG_PQTX2PF_55_RT_OFFSET 29961 -#define QM_REG_PQTX2PF_56_RT_OFFSET 29962 -#define QM_REG_PQTX2PF_57_RT_OFFSET 29963 -#define QM_REG_PQTX2PF_58_RT_OFFSET 29964 -#define QM_REG_PQTX2PF_59_RT_OFFSET 29965 -#define QM_REG_PQTX2PF_60_RT_OFFSET 29966 -#define QM_REG_PQTX2PF_61_RT_OFFSET 29967 -#define QM_REG_PQTX2PF_62_RT_OFFSET 29968 -#define QM_REG_PQTX2PF_63_RT_OFFSET 29969 -#define QM_REG_PQOTHER2PF_0_RT_OFFSET 29970 -#define QM_REG_PQOTHER2PF_1_RT_OFFSET 29971 -#define QM_REG_PQOTHER2PF_2_RT_OFFSET 29972 -#define QM_REG_PQOTHER2PF_3_RT_OFFSET 29973 -#define QM_REG_PQOTHER2PF_4_RT_OFFSET 29974 -#define QM_REG_PQOTHER2PF_5_RT_OFFSET 29975 -#define QM_REG_PQOTHER2PF_6_RT_OFFSET 29976 -#define QM_REG_PQOTHER2PF_7_RT_OFFSET 29977 -#define QM_REG_PQOTHER2PF_8_RT_OFFSET 29978 -#define QM_REG_PQOTHER2PF_9_RT_OFFSET 29979 -#define QM_REG_PQOTHER2PF_10_RT_OFFSET 29980 -#define QM_REG_PQOTHER2PF_11_RT_OFFSET 29981 -#define QM_REG_PQOTHER2PF_12_RT_OFFSET 29982 -#define QM_REG_PQOTHER2PF_13_RT_OFFSET 29983 -#define QM_REG_PQOTHER2PF_14_RT_OFFSET 29984 -#define QM_REG_PQOTHER2PF_15_RT_OFFSET 29985 -#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET 29986 -#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET 29987 -#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET 29988 -#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET 29989 -#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET 29990 -#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET 29991 -#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET 29992 -#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET 29993 -#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET 29994 -#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET 29995 -#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET 29996 -#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET 29997 -#define QM_REG_RLGLBLINCVAL_RT_OFFSET 29998 +#define TM_REG_CONFIG_TASK_MEM_RT_SIZE 608 +#define QM_REG_MAXPQSIZE_0_RT_OFFSET 29740 +#define QM_REG_MAXPQSIZE_1_RT_OFFSET 29741 +#define QM_REG_MAXPQSIZE_2_RT_OFFSET 29742 +#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET 29743 +#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET 29744 +#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET 29745 +#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET 29746 +#define 
QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET 29747 +#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET 29748 +#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET 29749 +#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET 29750 +#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET 29751 +#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET 29752 +#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET 29753 +#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET 29754 +#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET 29755 +#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET 29756 +#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET 29757 +#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET 29758 +#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET 29759 +#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET 29760 +#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET 29761 +#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET 29762 +#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET 29763 +#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET 29764 +#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET 29765 +#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET 29766 +#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET 29767 +#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET 29768 +#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET 29769 +#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET 29770 +#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET 29771 +#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET 29772 +#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET 29773 +#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET 29774 +#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET 29775 +#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET 29776 +#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET 29777 +#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET 29778 +#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET 29779 +#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET 29780 +#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET 29781 +#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET 29782 +#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET 29783 +#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET 29784 +#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET 29785 +#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET 29786 +#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET 29787 +#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET 29788 +#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET 29789 +#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET 29790 +#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET 29791 +#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET 29792 +#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET 29793 +#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET 29794 +#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET 29795 +#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET 29796 +#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET 29797 +#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET 29798 +#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET 29799 +#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET 29800 +#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET 29801 +#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET 29802 +#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET 29803 +#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET 29804 +#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET 29805 +#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET 29806 +#define QM_REG_BASEADDROTHERPQ_RT_OFFSET 29807 +#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET 29935 +#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET 29936 +#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET 29937 +#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET 29938 +#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET 29939 +#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET 29940 +#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET 29941 +#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET 29942 +#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET 29943 +#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET 29944 +#define 
QM_REG_WRROTHERPQGRP_5_RT_OFFSET 29945 +#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET 29946 +#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET 29947 +#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET 29948 +#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET 29949 +#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET 29950 +#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET 29951 +#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET 29952 +#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET 29953 +#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET 29954 +#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET 29955 +#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET 29956 +#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET 29957 +#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET 29958 +#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET 29959 +#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET 29960 +#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET 29961 +#define QM_REG_PQTX2PF_0_RT_OFFSET 29962 +#define QM_REG_PQTX2PF_1_RT_OFFSET 29963 +#define QM_REG_PQTX2PF_2_RT_OFFSET 29964 +#define QM_REG_PQTX2PF_3_RT_OFFSET 29965 +#define QM_REG_PQTX2PF_4_RT_OFFSET 29966 +#define QM_REG_PQTX2PF_5_RT_OFFSET 29967 +#define QM_REG_PQTX2PF_6_RT_OFFSET 29968 +#define QM_REG_PQTX2PF_7_RT_OFFSET 29969 +#define QM_REG_PQTX2PF_8_RT_OFFSET 29970 +#define QM_REG_PQTX2PF_9_RT_OFFSET 29971 +#define QM_REG_PQTX2PF_10_RT_OFFSET 29972 +#define QM_REG_PQTX2PF_11_RT_OFFSET 29973 +#define QM_REG_PQTX2PF_12_RT_OFFSET 29974 +#define QM_REG_PQTX2PF_13_RT_OFFSET 29975 +#define QM_REG_PQTX2PF_14_RT_OFFSET 29976 +#define QM_REG_PQTX2PF_15_RT_OFFSET 29977 +#define QM_REG_PQTX2PF_16_RT_OFFSET 29978 +#define QM_REG_PQTX2PF_17_RT_OFFSET 29979 +#define QM_REG_PQTX2PF_18_RT_OFFSET 29980 +#define QM_REG_PQTX2PF_19_RT_OFFSET 29981 +#define QM_REG_PQTX2PF_20_RT_OFFSET 29982 +#define QM_REG_PQTX2PF_21_RT_OFFSET 29983 +#define QM_REG_PQTX2PF_22_RT_OFFSET 29984 +#define QM_REG_PQTX2PF_23_RT_OFFSET 29985 +#define QM_REG_PQTX2PF_24_RT_OFFSET 29986 +#define QM_REG_PQTX2PF_25_RT_OFFSET 29987 +#define QM_REG_PQTX2PF_26_RT_OFFSET 29988 +#define QM_REG_PQTX2PF_27_RT_OFFSET 29989 +#define QM_REG_PQTX2PF_28_RT_OFFSET 29990 +#define QM_REG_PQTX2PF_29_RT_OFFSET 29991 +#define QM_REG_PQTX2PF_30_RT_OFFSET 29992 +#define QM_REG_PQTX2PF_31_RT_OFFSET 29993 +#define QM_REG_PQTX2PF_32_RT_OFFSET 29994 +#define QM_REG_PQTX2PF_33_RT_OFFSET 29995 +#define QM_REG_PQTX2PF_34_RT_OFFSET 29996 +#define QM_REG_PQTX2PF_35_RT_OFFSET 29997 +#define QM_REG_PQTX2PF_36_RT_OFFSET 29998 +#define QM_REG_PQTX2PF_37_RT_OFFSET 29999 +#define QM_REG_PQTX2PF_38_RT_OFFSET 30000 +#define QM_REG_PQTX2PF_39_RT_OFFSET 30001 +#define QM_REG_PQTX2PF_40_RT_OFFSET 30002 +#define QM_REG_PQTX2PF_41_RT_OFFSET 30003 +#define QM_REG_PQTX2PF_42_RT_OFFSET 30004 +#define QM_REG_PQTX2PF_43_RT_OFFSET 30005 +#define QM_REG_PQTX2PF_44_RT_OFFSET 30006 +#define QM_REG_PQTX2PF_45_RT_OFFSET 30007 +#define QM_REG_PQTX2PF_46_RT_OFFSET 30008 +#define QM_REG_PQTX2PF_47_RT_OFFSET 30009 +#define QM_REG_PQTX2PF_48_RT_OFFSET 30010 +#define QM_REG_PQTX2PF_49_RT_OFFSET 30011 +#define QM_REG_PQTX2PF_50_RT_OFFSET 30012 +#define QM_REG_PQTX2PF_51_RT_OFFSET 30013 +#define QM_REG_PQTX2PF_52_RT_OFFSET 30014 +#define QM_REG_PQTX2PF_53_RT_OFFSET 30015 +#define QM_REG_PQTX2PF_54_RT_OFFSET 30016 +#define QM_REG_PQTX2PF_55_RT_OFFSET 30017 +#define QM_REG_PQTX2PF_56_RT_OFFSET 30018 +#define QM_REG_PQTX2PF_57_RT_OFFSET 30019 +#define QM_REG_PQTX2PF_58_RT_OFFSET 30020 +#define QM_REG_PQTX2PF_59_RT_OFFSET 30021 +#define QM_REG_PQTX2PF_60_RT_OFFSET 30022 +#define QM_REG_PQTX2PF_61_RT_OFFSET 30023 +#define QM_REG_PQTX2PF_62_RT_OFFSET 30024 +#define 
QM_REG_PQTX2PF_63_RT_OFFSET 30025 +#define QM_REG_PQOTHER2PF_0_RT_OFFSET 30026 +#define QM_REG_PQOTHER2PF_1_RT_OFFSET 30027 +#define QM_REG_PQOTHER2PF_2_RT_OFFSET 30028 +#define QM_REG_PQOTHER2PF_3_RT_OFFSET 30029 +#define QM_REG_PQOTHER2PF_4_RT_OFFSET 30030 +#define QM_REG_PQOTHER2PF_5_RT_OFFSET 30031 +#define QM_REG_PQOTHER2PF_6_RT_OFFSET 30032 +#define QM_REG_PQOTHER2PF_7_RT_OFFSET 30033 +#define QM_REG_PQOTHER2PF_8_RT_OFFSET 30034 +#define QM_REG_PQOTHER2PF_9_RT_OFFSET 30035 +#define QM_REG_PQOTHER2PF_10_RT_OFFSET 30036 +#define QM_REG_PQOTHER2PF_11_RT_OFFSET 30037 +#define QM_REG_PQOTHER2PF_12_RT_OFFSET 30038 +#define QM_REG_PQOTHER2PF_13_RT_OFFSET 30039 +#define QM_REG_PQOTHER2PF_14_RT_OFFSET 30040 +#define QM_REG_PQOTHER2PF_15_RT_OFFSET 30041 +#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET 30042 +#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET 30043 +#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET 30044 +#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET 30045 +#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET 30046 +#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET 30047 +#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET 30048 +#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET 30049 +#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET 30050 +#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET 30051 +#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET 30052 +#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET 30053 +#define QM_REG_RLGLBLINCVAL_RT_OFFSET 30054 #define QM_REG_RLGLBLINCVAL_RT_SIZE 256 -#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET 30254 +#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET 30310 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE 256 -#define QM_REG_RLGLBLCRD_RT_OFFSET 30510 +#define QM_REG_RLGLBLCRD_RT_OFFSET 30566 #define QM_REG_RLGLBLCRD_RT_SIZE 256 -#define QM_REG_RLGLBLENABLE_RT_OFFSET 30766 -#define QM_REG_RLPFPERIOD_RT_OFFSET 30767 -#define QM_REG_RLPFPERIODTIMER_RT_OFFSET 30768 -#define QM_REG_RLPFINCVAL_RT_OFFSET 30769 +#define QM_REG_RLGLBLENABLE_RT_OFFSET 30822 +#define QM_REG_RLPFPERIOD_RT_OFFSET 30823 +#define QM_REG_RLPFPERIODTIMER_RT_OFFSET 30824 +#define QM_REG_RLPFINCVAL_RT_OFFSET 30825 #define QM_REG_RLPFINCVAL_RT_SIZE 16 -#define QM_REG_RLPFUPPERBOUND_RT_OFFSET 30785 +#define QM_REG_RLPFUPPERBOUND_RT_OFFSET 30841 #define QM_REG_RLPFUPPERBOUND_RT_SIZE 16 -#define QM_REG_RLPFCRD_RT_OFFSET 30801 +#define QM_REG_RLPFCRD_RT_OFFSET 30857 #define QM_REG_RLPFCRD_RT_SIZE 16 -#define QM_REG_RLPFENABLE_RT_OFFSET 30817 -#define QM_REG_RLPFVOQENABLE_RT_OFFSET 30818 -#define QM_REG_WFQPFWEIGHT_RT_OFFSET 30819 +#define QM_REG_RLPFENABLE_RT_OFFSET 30873 +#define QM_REG_RLPFVOQENABLE_RT_OFFSET 30874 +#define QM_REG_WFQPFWEIGHT_RT_OFFSET 30875 #define QM_REG_WFQPFWEIGHT_RT_SIZE 16 -#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET 30835 +#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET 30891 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE 16 -#define QM_REG_WFQPFCRD_RT_OFFSET 30851 -#define QM_REG_WFQPFCRD_RT_SIZE 160 -#define QM_REG_WFQPFENABLE_RT_OFFSET 31011 -#define QM_REG_WFQVPENABLE_RT_OFFSET 31012 -#define QM_REG_BASEADDRTXPQ_RT_OFFSET 31013 +#define QM_REG_WFQPFCRD_RT_OFFSET 30907 +#define QM_REG_WFQPFCRD_RT_SIZE 256 +#define QM_REG_WFQPFENABLE_RT_OFFSET 31163 +#define QM_REG_WFQVPENABLE_RT_OFFSET 31164 +#define QM_REG_BASEADDRTXPQ_RT_OFFSET 31165 #define QM_REG_BASEADDRTXPQ_RT_SIZE 512 -#define QM_REG_TXPQMAP_RT_OFFSET 31525 +#define QM_REG_TXPQMAP_RT_OFFSET 31677 #define QM_REG_TXPQMAP_RT_SIZE 512 -#define QM_REG_WFQVPWEIGHT_RT_OFFSET 32037 +#define QM_REG_WFQVPWEIGHT_RT_OFFSET 32189 #define QM_REG_WFQVPWEIGHT_RT_SIZE 512 -#define QM_REG_WFQVPCRD_RT_OFFSET 32549 +#define 
QM_REG_WFQVPCRD_RT_OFFSET 32701 #define QM_REG_WFQVPCRD_RT_SIZE 512 -#define QM_REG_WFQVPMAP_RT_OFFSET 33061 +#define QM_REG_WFQVPMAP_RT_OFFSET 33213 #define QM_REG_WFQVPMAP_RT_SIZE 512 -#define QM_REG_WFQPFCRD_MSB_RT_OFFSET 33573 -#define QM_REG_WFQPFCRD_MSB_RT_SIZE 160 -#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET 33733 -#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET 33734 -#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET 33735 -#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET 33736 -#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET 33737 -#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET 33738 -#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET 33739 -#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET 33740 +#define QM_REG_WFQPFCRD_MSB_RT_OFFSET 33725 +#define QM_REG_WFQPFCRD_MSB_RT_SIZE 320 +#define QM_REG_VOQCRDLINE_RT_OFFSET 34045 +#define QM_REG_VOQCRDLINE_RT_SIZE 36 +#define QM_REG_VOQINITCRDLINE_RT_OFFSET 34081 +#define QM_REG_VOQINITCRDLINE_RT_SIZE 36 +#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET 34117 +#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET 34118 +#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET 34119 +#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET 34120 +#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET 34121 +#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET 34122 +#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET 34123 +#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET 34124 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE 4 -#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET 33744 +#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET 34128 #define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_SIZE 4 -#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET 33748 +#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET 34132 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE 4 -#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET 33752 -#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET 33753 +#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET 34136 +#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET 34137 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE 32 -#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET 33785 +#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET 34169 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE 16 -#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET 33801 +#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET 34185 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE 16 -#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 33817 +#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 34201 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE 16 -#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET 33833 +#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET 34217 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE 16 -#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET 33849 -#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET 33850 -#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET 33851 -#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET 33852 -#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET 33853 -#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET 33854 -#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET 33855 -#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET 33856 -#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET 33857 -#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET 33858 -#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET 33859 -#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET 33860 -#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET 33861 -#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET 33862 -#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET 33863 -#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET 33864 -#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET 33865 -#define 
PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET 33866 -#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET 33867 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET 33868 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET 33869 -#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET 33870 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET 33871 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET 33872 -#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET 33873 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET 33874 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET 33875 -#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET 33876 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET 33877 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET 33878 -#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET 33879 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET 33880 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET 33881 -#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET 33882 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET 33883 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET 33884 -#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET 33885 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET 33886 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET 33887 -#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET 33888 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET 33889 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET 33890 -#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET 33891 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET 33892 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET 33893 -#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET 33894 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET 33895 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET 33896 -#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET 33897 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET 33898 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET 33899 -#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET 33900 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET 33901 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET 33902 -#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET 33903 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET 33904 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET 33905 -#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET 33906 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET 33907 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET 33908 -#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET 33909 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET 33910 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET 33911 -#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET 33912 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET 33913 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET 33914 -#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET 33915 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET 33916 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET 33917 -#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET 33918 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET 33919 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET 33920 -#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET 33921 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET 33922 -#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET 33923 -#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET 33924 -#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET 33925 -#define XCM_REG_CON_PHY_Q3_RT_OFFSET 33926 +#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET 34233 +#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET 34234 
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET 34235 +#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET 34236 +#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET 34237 +#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET 34238 +#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET 34239 +#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET 34240 +#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET 34241 +#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET 34242 +#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET 34243 +#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET 34244 +#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET 34245 +#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET 34246 +#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET 34247 +#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET 34248 +#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET 34249 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET 34250 +#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET 34251 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET 34252 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET 34253 +#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET 34254 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET 34255 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET 34256 +#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET 34257 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET 34258 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET 34259 +#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET 34260 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET 34261 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET 34262 +#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET 34263 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET 34264 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET 34265 +#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET 34266 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET 34267 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET 34268 +#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET 34269 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET 34270 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET 34271 +#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET 34272 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET 34273 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET 34274 +#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET 34275 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET 34276 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET 34277 +#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET 34278 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET 34279 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET 34280 +#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET 34281 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET 34282 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET 34283 +#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET 34284 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET 34285 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET 34286 +#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET 34287 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET 34288 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET 34289 +#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET 34290 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET 34291 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET 34292 +#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET 34293 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET 34294 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET 34295 +#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET 34296 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET 34297 +#define 
PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET 34298 +#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET 34299 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET 34300 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET 34301 +#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET 34302 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET 34303 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET 34304 +#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET 34305 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET 34306 +#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET 34307 +#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET 34308 +#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET 34309 +#define XCM_REG_CON_PHY_Q3_RT_OFFSET 34310 -#define RUNTIME_ARRAY_SIZE 33927 +#define RUNTIME_ARRAY_SIZE 34311 #endif /* __RT_DEFS_H__ */ diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h index d2ebce8..6dc969b 100644 --- a/drivers/net/qede/base/eth_common.h +++ b/drivers/net/qede/base/eth_common.h @@ -182,7 +182,7 @@ struct eth_tx_1st_bd_flags { struct eth_tx_data_1st_bd { /* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */ __le16 vlan; -/* Number of BDs in packet. Should be at least 2 in non-LSO packet and at least +/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least * 3 in LSO (or Tunnel with IPv6+ext) packet. */ u8 nbds; diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h index 3cc7fd4..f9920f3 100644 --- a/drivers/net/qede/base/reg_addr.h +++ b/drivers/net/qede/base/reg_addr.h @@ -1147,3 +1147,56 @@ #define IGU_REG_PRODUCER_MEMORY 0x182000UL #define IGU_REG_CONSUMER_MEM 0x183000UL + +#define CDU_REG_CCFC_CTX_VALID0 0x580400UL +#define CDU_REG_CCFC_CTX_VALID1 0x580404UL +#define CDU_REG_TCFC_CTX_VALID0 0x580408UL + +#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5 0x10092cUL +#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5 0x100930UL +#define MISCS_REG_RESET_PL_HV_2_K2_E5 0x009150UL +#define CNIG_REG_NW_PORT_MODE_BB 0x218200UL +#define CNIG_REG_PMEG_IF_CMD_BB 0x21821cUL +#define CNIG_REG_PMEG_IF_ADDR_BB 0x218224UL +#define CNIG_REG_PMEG_IF_WRDATA_BB 0x218228UL +#define NWM_REG_MAC0_K2_E5 0x800400UL +#define CNIG_REG_NIG_PORT0_CONF_K2_E5 0x218200UL +#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT 0 +#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT 1 +#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT 3 +#define ETH_MAC_REG_XIF_MODE_K2_E5 0x000080UL +#define ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT 0 +#define ETH_MAC_REG_FRM_LENGTH_K2_E5 0x000014UL +#define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT 0 +#define ETH_MAC_REG_TX_IPG_LENGTH_K2_E5 0x000044UL +#define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT 0 +#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5 0x00001cUL +#define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT 0 +#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5 0x000020UL +#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT 16 +#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT 0 +#define ETH_MAC_REG_COMMAND_CONFIG_K2_E5 0x000008UL +#define MISC_REG_XMAC_CORE_PORT_MODE_BB 0x008c08UL +#define MISC_REG_XMAC_PHY_PORT_MODE_BB 0x008c04UL +#define XMAC_REG_MODE_BB 0x210008UL +#define XMAC_REG_RX_MAX_SIZE_BB 0x210040UL +#define XMAC_REG_TX_CTRL_LO_BB 0x210020UL +#define XMAC_REG_CTRL_BB 0x210000UL +#define XMAC_REG_CTRL_TX_EN_BB (0x1 << 0) +#define XMAC_REG_CTRL_RX_EN_BB (0x1 << 1) +#define XMAC_REG_RX_CTRL_BB 0x210030UL +#define 
XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1 << 12) + +#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5 0x2aaf98UL +#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5 0x2aaf9cUL +#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5 0x2aafa0UL +#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5 0x2aafa4UL +#define PGLUE_B_REG_PGL_ADDR_88_F0_BB 0x2aa404UL +#define PGLUE_B_REG_PGL_ADDR_8C_F0_BB 0x2aa408UL +#define PGLUE_B_REG_PGL_ADDR_90_F0_BB 0x2aa40cUL +#define PGLUE_B_REG_PGL_ADDR_94_F0_BB 0x2aa410UL +#define MISCS_REG_FUNCTION_HIDE_BB_K2 0x0096f0UL +#define PCIE_REG_PRTY_MASK_K2_E5 0x0547b4UL +#define PGLUE_B_REG_VF_BAR0_SIZE_K2_E5 0x2aaeb4UL + +#define PRS_REG_OUTPUT_FORMAT_4_0_BB_K2 0x1f099cUL diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c index a604a5b..332b1f8 100644 --- a/drivers/net/qede/qede_main.c +++ b/drivers/net/qede/qede_main.c @@ -21,7 +21,7 @@ static uint8_t npar_tx_switching = 1; char fw_file[PATH_MAX]; const char *QEDE_DEFAULT_FIRMWARE = - "/lib/firmware/qed/qed_init_values-8.14.6.0.bin"; + "/lib/firmware/qed/qed_init_values-8.18.9.0.bin"; static void qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params) -- 1.7.10.3
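As a quick sanity check of the two IRO entries appended to iro_arr[] in ecore_iro_values.h, the sketch below (not part of the patch) expands the corresponding new macros from ecore_iro.h by hand. The struct iro field names base/m1/size follow the existing macros; m2/m3 are an assumption used only to keep the five-element initializers shaped like the original table, and the pf_id value is arbitrary.

#include <stdint.h>
#include <stdio.h>

/* Assumed field layout matching the five-element initializers in iro_arr[]. */
struct iro { uint32_t base, m1, m2, m3, size; };

static const struct iro iro_tail[2] = {
	/* IRO[47]: XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */
	{ 0xc9b0, 0x30, 0x0, 0x0, 0x10 },
	/* IRO[48]: TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */
	{ 0xeec0, 0x10, 0x0, 0x0, 0x10 },
};

int main(void)
{
	uint32_t pf_id = 2;

	/* Both macros expand to base + id * m1, with 'size' bytes per entry. */
	printf("iWARP rxmit stats, PF %u: offset 0x%x, size 0x%x\n", pf_id,
	       iro_tail[0].base + pf_id * iro_tail[0].m1, iro_tail[0].size);
	printf("RoCE events stats, PF %u: offset 0x%x, size 0x%x\n", pf_id,
	       iro_tail[1].base + pf_id * iro_tail[1].m1, iro_tail[1].size);

	return 0;
}

For pf_id 2 this prints offsets 0xca10 and 0xeee0 respectively.
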