DPDK patches and discussions
* [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW
@ 2021-02-19 10:14 Rasesh Mody
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 2/7] net/qede/base: changes for HSI to support " Rasesh Mody
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Rasesh Mody @ 2021-02-19 10:14 UTC (permalink / raw)
  To: jerinj, ferruh.yigit; +Cc: Rasesh Mody, dev, GR-Everest-DPDK-Dev

Hi,

This patch series adds support for the new HW while modifying the
existing driver so that it continues to support previous HW generations.
Highlights of changes:
 - Register, HW-specific and initialization updates for the new HW
 - FW upgrade
 - Base driver upgrade, other optimizations and cleanup

The new 50xxx family of Marvell QLogic FastLinQ adapters brings support
for higher speeds and significantly increases maximum PPS rates. This
family will eventually support flexible flow steering and various
in-device switching modes.

At the same time, the architecture and design are the same as in the
previous QEDE driver, so a lot of the fast path and slow path code is
expected to be shared.

Please note that checkpatch was run with a max_line_length of 100 characters.

Thanks,
Rasesh

Rasesh Mody (7):
  net/qede/base: update and add register definitions
  net/qede/base: changes for HSI to support new HW
  net/qede/base: add OS abstracted changes
  net/qede/base: update base driver to 8.62.4.0
  net/qede: changes for DMA page chain allocation and free
  net/qede: add support for new HW
  net/qede/base: clean unnecessary ifdef and comments

 drivers/net/qede/base/bcm_osal.c              |      1 -
 drivers/net/qede/base/bcm_osal.h              |     42 +-
 drivers/net/qede/base/common_hsi.h            |   1752 +-
 drivers/net/qede/base/ecore.h                 |    575 +-
 drivers/net/qede/base/ecore_attn_values.h     |      3 +-
 drivers/net/qede/base/ecore_chain.h           |    242 +-
 drivers/net/qede/base/ecore_cxt.c             |   1234 +-
 drivers/net/qede/base/ecore_cxt.h             |    149 +-
 drivers/net/qede/base/ecore_cxt_api.h         |     31 +-
 drivers/net/qede/base/ecore_dcbx.c            |    526 +-
 drivers/net/qede/base/ecore_dcbx.h            |     16 +-
 drivers/net/qede/base/ecore_dcbx_api.h        |     41 +-
 drivers/net/qede/base/ecore_dev.c             |   4083 +-
 drivers/net/qede/base/ecore_dev_api.h         |    367 +-
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |     93 +-
 drivers/net/qede/base/ecore_gtt_values.h      |      4 +-
 drivers/net/qede/base/ecore_hsi_common.h      |   2722 +-
 drivers/net/qede/base/ecore_hsi_debug_tools.h |    426 +-
 drivers/net/qede/base/ecore_hsi_eth.h         |   4541 +-
 drivers/net/qede/base/ecore_hsi_func_common.h |      5 +-
 drivers/net/qede/base/ecore_hsi_init_func.h   |    707 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |    254 +-
 drivers/net/qede/base/ecore_hw.c              |    386 +-
 drivers/net/qede/base/ecore_hw.h              |     55 +-
 drivers/net/qede/base/ecore_hw_defs.h         |     45 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   |   1365 +-
 drivers/net/qede/base/ecore_init_fw_funcs.h   |    457 +-
 drivers/net/qede/base/ecore_init_ops.c        |    159 +-
 drivers/net/qede/base/ecore_init_ops.h        |     19 +-
 drivers/net/qede/base/ecore_int.c             |   1363 +-
 drivers/net/qede/base/ecore_int.h             |     65 +-
 drivers/net/qede/base/ecore_int_api.h         |    127 +-
 drivers/net/qede/base/ecore_iov_api.h         |    118 +-
 drivers/net/qede/base/ecore_iro.h             |    427 +-
 drivers/net/qede/base/ecore_iro_values.h      |    463 +-
 drivers/net/qede/base/ecore_l2.c              |    497 +-
 drivers/net/qede/base/ecore_l2.h              |     18 +-
 drivers/net/qede/base/ecore_l2_api.h          |    148 +-
 drivers/net/qede/base/ecore_mcp.c             |   2631 +-
 drivers/net/qede/base/ecore_mcp.h             |    125 +-
 drivers/net/qede/base/ecore_mcp_api.h         |    471 +-
 drivers/net/qede/base/ecore_mng_tlv.c         |    910 +-
 drivers/net/qede/base/ecore_proto_if.h        |     69 +-
 drivers/net/qede/base/ecore_rt_defs.h         |    895 +-
 drivers/net/qede/base/ecore_sp_api.h          |      6 +-
 drivers/net/qede/base/ecore_sp_commands.c     |    141 +-
 drivers/net/qede/base/ecore_sp_commands.h     |     18 +-
 drivers/net/qede/base/ecore_spq.c             |    431 +-
 drivers/net/qede/base/ecore_spq.h             |     65 +-
 drivers/net/qede/base/ecore_sriov.c           |   1700 +-
 drivers/net/qede/base/ecore_sriov.h           |    147 +-
 drivers/net/qede/base/ecore_status.h          |      4 +-
 drivers/net/qede/base/ecore_utils.h           |     18 +-
 drivers/net/qede/base/ecore_vf.c              |    550 +-
 drivers/net/qede/base/ecore_vf.h              |     57 +-
 drivers/net/qede/base/ecore_vf_api.h          |     74 +-
 drivers/net/qede/base/ecore_vfpf_if.h         |    122 +-
 drivers/net/qede/base/eth_common.h            |    300 +-
 drivers/net/qede/base/mcp_public.h            |   2343 +-
 drivers/net/qede/base/nvm_cfg.h               |   5059 +-
 drivers/net/qede/base/reg_addr.h              | 190590 ++++++++++++++-
 drivers/net/qede/qede_debug.c                 |    117 +-
 drivers/net/qede/qede_ethdev.c                |     11 +-
 drivers/net/qede/qede_ethdev.h                |     11 +-
 drivers/net/qede/qede_if.h                    |     20 +-
 drivers/net/qede/qede_main.c                  |      4 +-
 drivers/net/qede/qede_rxtx.c                  |     89 +-
 drivers/net/qede/qede_sriov.c                 |      4 -
 lib/librte_eal/include/rte_bitops.h           |     54 +-
 69 files changed, 215373 insertions(+), 15189 deletions(-)

-- 
2.18.0



* [dpdk-dev] [PATCH 2/7] net/qede/base: changes for HSI to support new HW
  2021-02-19 10:14 [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
@ 2021-02-19 10:14 ` Rasesh Mody
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 3/7] net/qede/base: add OS abstracted changes Rasesh Mody
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Rasesh Mody @ 2021-02-19 10:14 UTC (permalink / raw)
  To: jerinj, ferruh.yigit; +Cc: Rasesh Mody, dev, GR-Everest-DPDK-Dev, Igor Russkikh

The changes introduced in this patch relate to the hardware-software
interface (HSI), primarily to support the new hardware while still
maintaining support for the existing hardware.

The code architecture and design are the same as in the previous QEDE
driver, so a lot of the fast path and slow path code is expected to be
shared. The registers, structures, functions, etc. for the 4xxxx family
of Marvell FastLinQ adapters and the newly supported 50xxx family are
differentiated mainly by the E4 and E5 suffixes, respectively.

This patch adds PMD support for the new firmware 8.62.0.0.
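
To illustrate the suffix convention, here is a minimal sketch (not code
from this series; the enum and helper names are assumptions made for the
example): per-generation HSI constants carry E4/E5 suffixes, and a
caller selects one based on the detected chip generation.

  /* Hypothetical sketch; the constant values match the common_hsi.h
   * definitions added by this patch.
   */
  #define MAX_NUM_VFS_E4  192  /* same as MAX_NUM_VFS_K2 */
  #define MAX_NUM_VFS_E5  240

  enum chip_rev { CHIP_E4, CHIP_E5 };

  static inline unsigned int max_num_vfs(enum chip_rev rev)
  {
          return rev == CHIP_E5 ? MAX_NUM_VFS_E5 : MAX_NUM_VFS_E4;
  }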

Signed-off-by: Rasesh Mody <rmody@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
---
 drivers/net/qede/base/common_hsi.h            | 1752 ++++---
 drivers/net/qede/base/ecore.h                 |   22 +-
 drivers/net/qede/base/ecore_cxt.c             |   53 +-
 drivers/net/qede/base/ecore_cxt.h             |   12 -
 drivers/net/qede/base/ecore_dev.c             |   49 +-
 drivers/net/qede/base/ecore_hsi_common.h      | 2719 ++++++----
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  426 +-
 drivers/net/qede/base/ecore_hsi_eth.h         | 4541 ++++++++++++-----
 drivers/net/qede/base/ecore_hsi_func_common.h |    5 +-
 drivers/net/qede/base/ecore_hsi_init_func.h   |  705 ++-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |  254 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   |  153 +-
 drivers/net/qede/base/ecore_init_ops.c        |   16 +-
 drivers/net/qede/base/ecore_int.c             |   39 +-
 drivers/net/qede/base/ecore_int.h             |    4 +-
 drivers/net/qede/base/ecore_int_api.h         |    2 +-
 drivers/net/qede/base/ecore_l2.c              |    6 +-
 drivers/net/qede/base/ecore_spq.c             |   39 +-
 drivers/net/qede/base/ecore_spq.h             |    2 +-
 drivers/net/qede/base/ecore_sriov.c           |    5 +-
 drivers/net/qede/base/ecore_vf.c              |    2 +-
 drivers/net/qede/base/eth_common.h            |    6 +-
 drivers/net/qede/qede_debug.c                 |   24 +-
 drivers/net/qede/qede_main.c                  |    2 +-
 drivers/net/qede/qede_rxtx.c                  |   15 +-
 25 files changed, 7358 insertions(+), 3495 deletions(-)

diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 1a02d460b..523876014 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -1,67 +1,81 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __COMMON_HSI__
 #define __COMMON_HSI__
 /********************************/
 /* PROTOCOL COMMON FW CONSTANTS */
 /********************************/
 
-/* Temporarily here should be added to HSI automatically by resource allocation
- * tool.
- */
-#define T_TEST_AGG_INT_TEMP  6
-#define M_TEST_AGG_INT_TEMP  8
-#define U_TEST_AGG_INT_TEMP  6
-#define X_TEST_AGG_INT_TEMP  14
-#define Y_TEST_AGG_INT_TEMP  4
-#define P_TEST_AGG_INT_TEMP  4
+/* Temporarily here should be added to HSI automatically by resource allocation tool. */
+#define T_TEST_AGG_INT_TEMP_E4  6
+#define M_TEST_AGG_INT_TEMP_E4  8
+#define U_TEST_AGG_INT_TEMP_E4  6
+#define X_TEST_AGG_INT_TEMP_E4  14
+#define Y_TEST_AGG_INT_TEMP_E4  4
+#define P_TEST_AGG_INT_TEMP_E4  4
+
+
+#define T_TEST_AGG_INT_TEMP_E5  3
+#define M_TEST_AGG_INT_TEMP_E5  3
+#define U_TEST_AGG_INT_TEMP_E5  3
+#define X_TEST_AGG_INT_TEMP_E5  7
+#define Y_TEST_AGG_INT_TEMP_E5  3
+#define P_TEST_AGG_INT_TEMP_E5  3
 
 #define X_FINAL_CLEANUP_AGG_INT  1
 
 #define EVENT_RING_PAGE_SIZE_BYTES          4096
 
 #define NUM_OF_GLOBAL_QUEUES				128
-#define COMMON_QUEUE_ENTRY_MAX_BYTE_SIZE	64
+#define COMMON_QUEUE_ENTRY_MAX_BYTE_SIZE	128
 
 #define ISCSI_CDU_TASK_SEG_TYPE       0
 #define FCOE_CDU_TASK_SEG_TYPE        0
 #define RDMA_CDU_TASK_SEG_TYPE        1
-#define ETH_CDU_TASK_SEG_TYPE         2
+#define ETH_CDU_TASK_SEG_TYPE         1
+
+#define ISCSI_CDU_TASK_SEG_ID           0
+#define FCOE_CDU_TASK_SEG_ID            1
+#define ROCE_CDU_TASK_SEG_ID            2
+#define PREROCE_CDU_TASK_SEG_ID         2
+#define ETH_CDU_TASK_SEG_ID             3
 
 #define FW_ASSERT_GENERAL_ATTN_IDX    32
 
-#define EAGLE_ENG1_WORKAROUND_NIG_FLOWCTRL_MODE	3
+#define EAGLE_ENG1_WORKAROUND_NIG_FLOWCTRL_MODE 3
 
 /* Queue Zone sizes in bytes */
-#define TSTORM_QZONE_SIZE    8   /*tstorm_queue_zone*/
-/*mstorm_eth_queue_zone. Used only for RX producer of VFs in backward
- * compatibility mode.
- */
-#define MSTORM_QZONE_SIZE    16
-#define USTORM_QZONE_SIZE    8   /*ustorm_queue_zone*/
-#define XSTORM_QZONE_SIZE    8   /*xstorm_eth_queue_zone*/
-#define YSTORM_QZONE_SIZE    0
-#define PSTORM_QZONE_SIZE    0
-
-/*Log of mstorm default VF zone size.*/
-#define MSTORM_VF_ZONE_DEFAULT_SIZE_LOG       7
-/*Maximum number of RX queues that can be allocated to VF by default*/
+#define TSTORM_QZONE_SIZE_E4    8   /*tstorm_queue_zone*/
+/* mstorm_eth_queue_zone. Used only for RX producer of VFs in backward compatibility mode. */
+#define MSTORM_QZONE_SIZE_E4    16
+#define USTORM_QZONE_SIZE_E4    8   /*ustorm_queue_zone*/
+#define XSTORM_QZONE_SIZE_E4    8   /*xstorm_eth_queue_zone*/
+#define YSTORM_QZONE_SIZE_E4    0
+#define PSTORM_QZONE_SIZE_E4    0
+
+#define TSTORM_QZONE_SIZE_E5    0
+/* mstorm_eth_queue_zone. Used only for RX producer of VFs in backward compatibility mode. */
+#define MSTORM_QZONE_SIZE_E5    16
+#define USTORM_QZONE_SIZE_E5    8   /*ustorm_queue_zone*/
+#define XSTORM_QZONE_SIZE_E5    8   /*xstorm_eth_queue_zone*/
+#define YSTORM_QZONE_SIZE_E5    0
+#define PSTORM_QZONE_SIZE_E5    0
+
+#define MSTORM_VF_ZONE_DEFAULT_SIZE_LOG       7     /*Log of mstorm default VF zone size.*/
+/* Maximum number of RX queues that can be allocated to VF by default */
 #define ETH_MAX_NUM_RX_QUEUES_PER_VF_DEFAULT  16
-/*Maximum number of RX queues that can be allocated to VF with doubled VF zone
- * size. Up to 96 VF supported in this mode
+/* Maximum number of RX queues that can be allocated to VF with doubled VF zone size. Up to 96 VF
+ * supported in this mode
  */
 #define ETH_MAX_NUM_RX_QUEUES_PER_VF_DOUBLE   48
-/*Maximum number of RX queues that can be allocated to VF with 4 VF zone size.
- * Up to 48 VF supported in this mode
+/* Maximum number of RX queues that can be allocated to VF with 4 VF zone size. Up to 48 VF
+ * supported in this mode
  */
 #define ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD     112
-
-#define ETH_RGSRC_CTX_SIZE                6 /*Size in QREGS*/
-#define ETH_TGSRC_CTX_SIZE                6 /*Size in QREGS*/
 /********************************/
 /* CORE (LIGHT L2) FW CONSTANTS */
 /********************************/
@@ -74,27 +88,29 @@
 
 #define CORE_LL2_TX_MAX_BDS_PER_PACKET				12
 
-#define CORE_SPQE_PAGE_SIZE_BYTES                       4096
+#define CORE_SPQE_PAGE_SIZE_BYTES			4096
 
 /* Number of LL2 RAM based (RX producers and statistics) queues */
-#define MAX_NUM_LL2_RX_RAM_QUEUES               32
+#define MAX_NUM_LL2_RX_QUEUES                   48
+/* Number of LL2 RAM based (RX producers and statistics) queues */
+#define MAX_NUM_LL2_RX_RAM_QUEUES               MAX_NUM_LL2_RX_QUEUES
 /* Number of LL2 context based (RX producers and statistics) queues */
-#define MAX_NUM_LL2_RX_CTX_QUEUES               208
-#define MAX_NUM_LL2_RX_QUEUES (MAX_NUM_LL2_RX_RAM_QUEUES + \
-			       MAX_NUM_LL2_RX_CTX_QUEUES)
+#define MAX_NUM_LL2_RX_CTX_QUEUES               64
+/*Total number of LL2 queue. rename to MAX_NUM_LL2_RX_QUEUES*/
+#define MAX_NUM_LL2_RX_QUEUES_TOTAL             \
+	(MAX_NUM_LL2_RX_RAM_QUEUES + MAX_NUM_LL2_RX_CTX_QUEUES)
 
 #define MAX_NUM_LL2_TX_STATS_COUNTERS			48
 
 
-/****************************************************************************/
-/* Include firmware version number only- do not add constants here to avoid */
-/* redundunt compilations                                                   */
-/****************************************************************************/
+/*///////////////////////////////////////////////////////////////////////////////////////////////*/
+/*Include firmware version number only- do not add constants here to avoid redundant compilations*/
+/*///////////////////////////////////////////////////////////////////////////////////////////////*/
 
 
 #define FW_MAJOR_VERSION		8
-#define FW_MINOR_VERSION		40
-#define FW_REVISION_VERSION		33
+#define FW_MINOR_VERSION		62
+#define FW_REVISION_VERSION		0
 #define FW_ENGINEERING_VERSION	0
 
 /***********************/
@@ -104,48 +120,68 @@
 /* PCI functions */
 #define MAX_NUM_PORTS_BB        (2)
 #define MAX_NUM_PORTS_K2        (4)
-#define MAX_NUM_PORTS           (MAX_NUM_PORTS_K2)
+#define MAX_NUM_PORTS_E5        (4)
+#define MAX_NUM_PORTS			(MAX_NUM_PORTS_E5)
 
 #define MAX_NUM_PFS_BB          (8)
 #define MAX_NUM_PFS_K2          (16)
-#define MAX_NUM_PFS             (MAX_NUM_PFS_K2)
+#define MAX_NUM_PFS_E5          (16)
+#define MAX_NUM_PFS             (MAX_NUM_PFS_E5)
 #define MAX_NUM_OF_PFS_IN_CHIP  (16) /* On both engines */
 
 #define MAX_NUM_VFS_BB          (120)
 #define MAX_NUM_VFS_K2          (192)
-#define COMMON_MAX_NUM_VFS      (MAX_NUM_VFS_K2)
+#define MAX_NUM_VFS_E4          (MAX_NUM_VFS_K2)
+#define MAX_NUM_VFS_E5          (240)
+#define MAX_NUM_VFS		(MAX_NUM_VFS_E5) /* @DPDK */
 
 #define MAX_NUM_FUNCTIONS_BB    (MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2    (MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
 
-/* in both BB and K2, the VF number starts from 16. so for arrays containing all
- * possible PFs and VFs - we need a constant for this size
- */
+/* in both BB and K2, the VF number starts from 16. so for arrays containing all */
+/* possible PFs and VFs - we need a constant for this size */
 #define MAX_FUNCTION_NUMBER_BB      (MAX_NUM_PFS + MAX_NUM_VFS_BB)
 #define MAX_FUNCTION_NUMBER_K2      (MAX_NUM_PFS + MAX_NUM_VFS_K2)
-#define COMMON_MAX_FUNCTION_NUMBER  (MAX_NUM_PFS + MAX_NUM_VFS_K2)
+#define MAX_FUNCTION_NUMBER_E4      (MAX_NUM_PFS + MAX_NUM_VFS_E4)
+#define MAX_FUNCTION_NUMBER_E5      (MAX_NUM_PFS + MAX_NUM_VFS_E5)
+#define COMMON_MAX_FUNCTION_NUMBER  (MAX_NUM_PFS + MAX_NUM_VFS_E5)
 
 #define MAX_NUM_VPORTS_K2       (208)
 #define MAX_NUM_VPORTS_BB       (160)
-#define COMMON_MAX_NUM_VPORTS   (MAX_NUM_VPORTS_K2)
+#define MAX_NUM_VPORTS_E4       (MAX_NUM_VPORTS_K2)
+#define MAX_NUM_VPORTS_E5       (256)
+#define COMMON_MAX_NUM_VPORTS   (MAX_NUM_VPORTS_E5)
 
-#define MAX_NUM_L2_QUEUES_BB	(256)
-#define MAX_NUM_L2_QUEUES_K2    (320)
+#define MAX_NUM_L2_QUEUES_BB			(256)
+#define MAX_NUM_L2_QUEUES_K2			(320)
+#define MAX_NUM_L2_QUEUES_E5			(512)
+#define COMMON_MAX_NUM_L2_QUEUES		(MAX_NUM_L2_QUEUES_E5)
 
-#define FW_LOWEST_CONSUMEDDMAE_CHANNEL   (26)
+#define FW_LOWEST_CONSUMEDDMAE_CHANNEL_E4   (26)
+#define FW_LOWEST_CONSUMEDDMAE_CHANNEL_E5   (32) /* Not used in E5 */
 
 /* Traffic classes in network-facing blocks (PBF, BTB, NIG, BRB, PRS and QM) */
 #define NUM_PHYS_TCS_4PORT_K2     4
+#define NUM_PHYS_TCS_4PORT_TX_E5  6
+#define NUM_PHYS_TCS_4PORT_RX_E5  4
 #define NUM_OF_PHYS_TCS           8
 #define PURE_LB_TC                NUM_OF_PHYS_TCS
 #define NUM_TCS_4PORT_K2          (NUM_PHYS_TCS_4PORT_K2 + 1)
+#define NUM_TCS_4PORT_TX_E5       (NUM_PHYS_TCS_4PORT_TX_E5 + 1)
+#define NUM_TCS_4PORT_RX_E5       (NUM_PHYS_TCS_4PORT_RX_E5 + 1)
 #define NUM_OF_TCS                (NUM_OF_PHYS_TCS + 1)
+#define PBF_TGFS_PURE_LB_TC_E5    5
 
 /* CIDs */
-#define NUM_OF_CONNECTION_TYPES (8)
+#define NUM_OF_CONNECTION_TYPES_E4			(8)
+#define NUM_OF_CONNECTION_TYPES_E5			(16)
+#define COMMON_NUM_OF_CONNECTION_TYPES		(NUM_OF_CONNECTION_TYPES_E5)
+
 #define NUM_OF_TASK_TYPES       (8)
 #define NUM_OF_LCIDS            (320)
-#define NUM_OF_LTIDS            (320)
+#define NUM_OF_LTIDS_E4         (320)
+#define NUM_OF_LTIDS_E5         (384)
+#define COMMON_NUM_OF_LTIDS		(NUM_OF_LTIDS_E5)
 
 /* Global PXP windows (GTT) */
 #define NUM_OF_GTT          19
@@ -154,33 +190,33 @@
 #define GTT_DWORD_SIZE      (1 << GTT_DWORD_SIZE_BITS)
 
 /* Tools Version */
-#define TOOLS_VERSION 10
+#define TOOLS_VERSION 11
 /*****************/
 /* CDU CONSTANTS */
 /*****************/
 
-#define CDU_SEG_TYPE_OFFSET_REG_TYPE_SHIFT              (17)
-#define CDU_SEG_TYPE_OFFSET_REG_OFFSET_MASK             (0x1ffff)
+#define CDU_SEG_TYPE_OFFSET_REG_TYPE_SHIFT		(17)
+#define CDU_SEG_TYPE_OFFSET_REG_OFFSET_MASK		(0x1ffff)
 
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT	(12)
 #define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK	(0xfff)
 
 #define	CDU_CONTEXT_VALIDATION_CFG_ENABLE_SHIFT				(0)
 #define	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT	(1)
-#define	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE				(2)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE					(2)
 #define	CDU_CONTEXT_VALIDATION_CFG_USE_REGION				(3)
-#define	CDU_CONTEXT_VALIDATION_CFG_USE_CID				(4)
+#define	CDU_CONTEXT_VALIDATION_CFG_USE_CID					(4)
 #define	CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE				(5)
 
 /*enabled, type A, use all */
-#define	CDU_CONTEXT_VALIDATION_DEFAULT_CFG				(0x3D)
+#define	CDU_CONTEXT_VALIDATION_DEFAULT_CFG					(0x3D)
 
 /*****************/
 /* DQ CONSTANTS  */
 /*****************/
 
 /* DEMS */
-#define DQ_DEMS_LEGACY			0
+#define	DQ_DEMS_LEGACY						0
 #define DQ_DEMS_TOE_MORE_TO_SEND			3
 #define DQ_DEMS_TOE_LOCAL_ADV_WND			4
 #define DQ_DEMS_ROCE_CQ_CONS				7
@@ -196,19 +232,13 @@
 #define DQ_XCM_AGG_VAL_SEL_REG6   7
 
 /* XCM agg val selection (FW) */
-#define DQ_XCM_ETH_EDPM_NUM_BDS_CMD \
-	DQ_XCM_AGG_VAL_SEL_WORD2
-#define DQ_XCM_ETH_TX_BD_CONS_CMD \
-	DQ_XCM_AGG_VAL_SEL_WORD3
-#define DQ_XCM_CORE_TX_BD_CONS_CMD \
-	DQ_XCM_AGG_VAL_SEL_WORD3
-#define DQ_XCM_ETH_TX_BD_PROD_CMD \
-	DQ_XCM_AGG_VAL_SEL_WORD4
-#define DQ_XCM_CORE_TX_BD_PROD_CMD \
-	DQ_XCM_AGG_VAL_SEL_WORD4
-#define DQ_XCM_CORE_SPQ_PROD_CMD \
-	DQ_XCM_AGG_VAL_SEL_WORD4
-#define DQ_XCM_ETH_GO_TO_BD_CONS_CMD            DQ_XCM_AGG_VAL_SEL_WORD5
+#define DQ_XCM_CORE_TX_BD_CONS_CMD          DQ_XCM_AGG_VAL_SEL_WORD3
+#define DQ_XCM_CORE_TX_BD_PROD_CMD          DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_CORE_SPQ_PROD_CMD            DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_ETH_EDPM_NUM_BDS_CMD         DQ_XCM_AGG_VAL_SEL_WORD2
+#define DQ_XCM_ETH_TX_BD_CONS_CMD           DQ_XCM_AGG_VAL_SEL_WORD3
+#define DQ_XCM_ETH_TX_BD_PROD_CMD           DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_ETH_GO_TO_BD_CONS_CMD        DQ_XCM_AGG_VAL_SEL_WORD5
 #define DQ_XCM_FCOE_SQ_CONS_CMD             DQ_XCM_AGG_VAL_SEL_WORD3
 #define DQ_XCM_FCOE_SQ_PROD_CMD             DQ_XCM_AGG_VAL_SEL_WORD4
 #define DQ_XCM_FCOE_X_FERQ_PROD_CMD         DQ_XCM_AGG_VAL_SEL_WORD5
@@ -263,20 +293,13 @@
 #define DQ_XCM_AGG_FLG_SHIFT_CF23   7
 
 /* XCM agg counter flag selection (FW) */
-#define DQ_XCM_ETH_DQ_CF_CMD		(1 << \
-					DQ_XCM_AGG_FLG_SHIFT_CF18)
-#define DQ_XCM_CORE_DQ_CF_CMD		(1 << \
-					DQ_XCM_AGG_FLG_SHIFT_CF18)
-#define DQ_XCM_ETH_TERMINATE_CMD	(1 << \
-					DQ_XCM_AGG_FLG_SHIFT_CF19)
-#define DQ_XCM_CORE_TERMINATE_CMD	(1 << \
-					DQ_XCM_AGG_FLG_SHIFT_CF19)
-#define DQ_XCM_ETH_SLOW_PATH_CMD	(1 << \
-					DQ_XCM_AGG_FLG_SHIFT_CF22)
-#define DQ_XCM_CORE_SLOW_PATH_CMD	(1 << \
-					DQ_XCM_AGG_FLG_SHIFT_CF22)
-#define DQ_XCM_ETH_TPH_EN_CMD		(1 << \
-					DQ_XCM_AGG_FLG_SHIFT_CF23)
+#define DQ_XCM_CORE_DQ_CF_CMD               (1 << DQ_XCM_AGG_FLG_SHIFT_CF18)
+#define DQ_XCM_CORE_TERMINATE_CMD           (1 << DQ_XCM_AGG_FLG_SHIFT_CF19)
+#define DQ_XCM_CORE_SLOW_PATH_CMD           (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+#define DQ_XCM_ETH_DQ_CF_CMD                (1 << DQ_XCM_AGG_FLG_SHIFT_CF18)
+#define DQ_XCM_ETH_TERMINATE_CMD            (1 << DQ_XCM_AGG_FLG_SHIFT_CF19)
+#define DQ_XCM_ETH_SLOW_PATH_CMD            (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+#define DQ_XCM_ETH_TPH_EN_CMD               (1 << DQ_XCM_AGG_FLG_SHIFT_CF23)
 #define DQ_XCM_FCOE_SLOW_PATH_CMD           (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
 #define DQ_XCM_ISCSI_DQ_FLUSH_CMD           (1 << DQ_XCM_AGG_FLG_SHIFT_CF19)
 #define DQ_XCM_ISCSI_SLOW_PATH_CMD          (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
@@ -341,19 +364,21 @@
 #define DQ_PWM_OFFSET_UCM_FLAGS				0x69
 #define DQ_PWM_OFFSET_TCM_FLAGS				0x6B
 
-#define DQ_PWM_OFFSET_XCM_RDMA_SQ_PROD		(DQ_PWM_OFFSET_XCM16_BASE + 2)
+#define DQ_PWM_OFFSET_XCM_RDMA_SQ_PROD			(DQ_PWM_OFFSET_XCM16_BASE + 2)
 #define DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_32BIT	(DQ_PWM_OFFSET_UCM32_BASE)
 #define DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_16BIT	(DQ_PWM_OFFSET_UCM16_4)
-#define DQ_PWM_OFFSET_UCM_RDMA_INT_TIMEOUT	(DQ_PWM_OFFSET_UCM16_BASE + 2)
-#define DQ_PWM_OFFSET_UCM_RDMA_ARM_FLAGS	(DQ_PWM_OFFSET_UCM_FLAGS)
-#define DQ_PWM_OFFSET_TCM_ROCE_RQ_PROD		(DQ_PWM_OFFSET_TCM16_BASE + 1)
-#define DQ_PWM_OFFSET_TCM_IWARP_RQ_PROD		(DQ_PWM_OFFSET_TCM16_BASE + 3)
-
-#define DQ_PWM_OFFSET_XCM_RDMA_24B_ICID_SQ_PROD \
+#define DQ_PWM_OFFSET_UCM_RDMA_INT_TIMEOUT		(DQ_PWM_OFFSET_UCM16_BASE + 2)
+#define DQ_PWM_OFFSET_UCM_RDMA_ARM_FLAGS		(DQ_PWM_OFFSET_UCM_FLAGS)
+#define DQ_PWM_OFFSET_TCM_ROCE_RQ_PROD			(DQ_PWM_OFFSET_TCM16_BASE + 1)
+#define DQ_PWM_OFFSET_TCM_IWARP_RQ_PROD			(DQ_PWM_OFFSET_TCM16_BASE + 3)
+/* DQ_DEMS_AGG_VAL_BASE**/
+#define DQ_PWM_OFFSET_TCM_LL2_PROD_UPDATE	\
+	(DQ_PWM_OFFSET_TCM32_BASE + DQ_TCM_AGG_VAL_SEL_REG9 - 4)
+
+#define DQ_PWM_OFFSET_XCM_RDMA_24B_ICID_SQ_PROD			\
 	(DQ_PWM_OFFSET_XCM32_24ICID_BASE + 2)
-#define DQ_PWM_OFFSET_UCM_RDMA_24B_ICID_CQ_CONS_32BIT \
-	(DQ_PWM_OFFSET_UCM32_24ICID_BASE + 4)
-#define DQ_PWM_OFFSET_TCM_ROCE_24B_ICID_RQ_PROD	\
+#define DQ_PWM_OFFSET_UCM_RDMA_24B_ICID_CQ_CONS_32BIT	(DQ_PWM_OFFSET_UCM32_24ICID_BASE + 4)
+#define DQ_PWM_OFFSET_TCM_ROCE_24B_ICID_RQ_PROD			\
 	(DQ_PWM_OFFSET_TCM32_24ICID_BASE + 1)
 
 #define DQ_REGION_SHIFT				        (12)
@@ -369,17 +394,19 @@
 /*****************/
 
 /* number of TX queues in the QM */
-#define MAX_QM_TX_QUEUES_K2	512
-#define MAX_QM_TX_QUEUES_BB	448
-#define MAX_QM_TX_QUEUES	MAX_QM_TX_QUEUES_K2
+#define MAX_QM_TX_QUEUES_K2			512
+#define MAX_QM_TX_QUEUES_BB			448
+#define MAX_QM_TX_QUEUES_E5			MAX_QM_TX_QUEUES_K2
+#define MAX_QM_TX_QUEUES			MAX_QM_TX_QUEUES_K2
 
 /* number of Other queues in the QM */
-#define MAX_QM_OTHER_QUEUES_BB	64
-#define MAX_QM_OTHER_QUEUES_K2	128
-#define MAX_QM_OTHER_QUEUES	MAX_QM_OTHER_QUEUES_K2
+#define MAX_QM_OTHER_QUEUES_BB		64
+#define MAX_QM_OTHER_QUEUES_K2		128
+#define MAX_QM_OTHER_QUEUES_E5		MAX_QM_OTHER_QUEUES_K2
+#define MAX_QM_OTHER_QUEUES			MAX_QM_OTHER_QUEUES_K2
 
 /* number of queues in a PF queue group */
-#define QM_PF_QUEUE_GROUP_SIZE	8
+#define QM_PF_QUEUE_GROUP_SIZE		8
 
 /* the size of a single queue element in bytes */
 #define QM_PQ_ELEMENT_SIZE			4
@@ -387,14 +414,12 @@
 /* base number of Tx PQs in the CM PQ representation.
  * should be used when storing PQ IDs in CM PQ registers and context
  */
-#define CM_TX_PQ_BASE	0x200
-
-/* number of global Vport/QCN rate limiters */
-#define MAX_QM_GLOBAL_RLS			256
+#define CM_TX_PQ_BASE               0x200
 
 /* number of global rate limiters */
-#define MAX_QM_GLOBAL_RLS		256
-#define COMMON_MAX_QM_GLOBAL_RLS	(MAX_QM_GLOBAL_RLS)
+#define MAX_QM_GLOBAL_RLS_E4		256
+#define MAX_QM_GLOBAL_RLS_E5		512
+#define COMMON_MAX_QM_GLOBAL_RLS	(MAX_QM_GLOBAL_RLS_E5)
 
 /* QM registers data */
 #define QM_LINE_CRD_REG_WIDTH		16
@@ -402,9 +427,9 @@
 #define QM_BYTE_CRD_REG_WIDTH		24
 #define QM_BYTE_CRD_REG_SIGN_BIT	(1 << (QM_BYTE_CRD_REG_WIDTH - 1))
 #define QM_WFQ_CRD_REG_WIDTH		32
-#define QM_WFQ_CRD_REG_SIGN_BIT		(1U << (QM_WFQ_CRD_REG_WIDTH - 1))
-#define QM_RL_CRD_REG_WIDTH		32
-#define QM_RL_CRD_REG_SIGN_BIT		(1U << (QM_RL_CRD_REG_WIDTH - 1))
+#define QM_WFQ_CRD_REG_SIGN_BIT		(1U << (QM_WFQ_CRD_REG_WIDTH - 1)) /* @DPDK */
+#define QM_RL_CRD_REG_WIDTH			32
+#define QM_RL_CRD_REG_SIGN_BIT		(1U << (QM_RL_CRD_REG_WIDTH - 1)) /* @DPDK */
 
 /*****************/
 /* CAU CONSTANTS */
@@ -414,14 +439,18 @@
 #define CAU_FSM_ETH_TX  1
 
 /* Number of Protocol Indices per Status Block */
-#define PIS_PER_SB    12
-#define MAX_PIS_PER_SB	 PIS_PER_SB
+#define PIS_PER_SB_E4			12
+#define PIS_PER_SB_E5			10
+
+#define PIS_PER_SB_PADDING_E5	2
+#define MAX_PIS_PER_SB	 PIS_PER_SB_E4
+
 
 /* fsm is stopped or not valid for this sb */
 #define CAU_HC_STOPPED_STATE		3
-/* fsm is working without interrupt coalescing for this sb*/
+/* fsm is working without interrupt coalescing for this sb */
 #define CAU_HC_DISABLE_STATE		4
-/* fsm is working with interrupt coalescing for this sb*/
+/* fsm is working with interrupt coalescing for this sb */
 #define CAU_HC_ENABLE_STATE			0
 
 
@@ -429,40 +458,47 @@
 /* IGU CONSTANTS */
 /*****************/
 
-#define MAX_SB_PER_PATH_K2			(368)
-#define MAX_SB_PER_PATH_BB			(288)
-#define MAX_TOT_SB_PER_PATH			MAX_SB_PER_PATH_K2
+#define MAX_SB_PER_PATH_K2					(368)
+#define MAX_SB_PER_PATH_BB					(288)
+#define MAX_SB_PER_PATH_E5_BC_MODE			(496)
+#define MAX_SB_PER_PATH_E5					(512)
+#define MAX_TOT_SB_PER_PATH					MAX_SB_PER_PATH_E5
 
-#define MAX_SB_PER_PF_MIMD			129
-#define MAX_SB_PER_PF_SIMD			64
-#define MAX_SB_PER_VF				64
+#define MAX_SB_PER_PF_MIMD					129
+#define MAX_SB_PER_PF_SIMD					64
+#define MAX_SB_PER_VF						64
 
 /* Memory addresses on the BAR for the IGU Sub Block */
-#define IGU_MEM_BASE				0x0000
+#define IGU_MEM_BASE						0x0000
 
-#define IGU_MEM_MSIX_BASE			0x0000
-#define IGU_MEM_MSIX_UPPER			0x0101
-#define IGU_MEM_MSIX_RESERVED_UPPER		0x01ff
+#define IGU_MEM_MSIX_BASE					0x0000
+#define IGU_MEM_MSIX_UPPER					0x0101
+#define IGU_MEM_MSIX_RESERVED_UPPER			0x01ff
 
-#define IGU_MEM_PBA_MSIX_BASE			0x0200
-#define IGU_MEM_PBA_MSIX_UPPER			0x0202
+#define IGU_MEM_PBA_MSIX_BASE				0x0200
+#define IGU_MEM_PBA_MSIX_UPPER				0x0202
 #define IGU_MEM_PBA_MSIX_RESERVED_UPPER		0x03ff
 
-#define IGU_CMD_INT_ACK_BASE			0x0400
+#define IGU_CMD_INT_ACK_BASE				0x0400
 #define IGU_CMD_INT_ACK_RESERVED_UPPER		0x05ff
 
-#define IGU_CMD_ATTN_BIT_UPD_UPPER		0x05f0
-#define IGU_CMD_ATTN_BIT_SET_UPPER		0x05f1
-#define IGU_CMD_ATTN_BIT_CLR_UPPER		0x05f2
+#define IGU_CMD_ATTN_BIT_UPD_UPPER			0x05f0
+#define IGU_CMD_ATTN_BIT_SET_UPPER			0x05f1
+#define IGU_CMD_ATTN_BIT_CLR_UPPER			0x05f2
 
 #define IGU_REG_SISR_MDPC_WMASK_UPPER		0x05f3
 #define IGU_REG_SISR_MDPC_WMASK_LSB_UPPER	0x05f4
 #define IGU_REG_SISR_MDPC_WMASK_MSB_UPPER	0x05f5
 #define IGU_REG_SISR_MDPC_WOMASK_UPPER		0x05f6
 
-#define IGU_CMD_PROD_UPD_BASE			0x0600
+#define IGU_CMD_PROD_UPD_BASE				0x0600
 #define IGU_CMD_PROD_UPD_RESERVED_UPPER		0x07ff
 
+#define IGU_CMD_INT_ACK_E5_BASE				0x1000
+#define IGU_CMD_INT_ACK_RESERVED_E5_UPPER	0x17FF
+
+#define IGU_CMD_PROD_UPD_E5_BASE			0x1800
+#define IGU_CMD_PROD_UPD_RESERVED_E5_UPPER	0x1FFF
 /*****************/
 /* PXP CONSTANTS */
 /*****************/
@@ -479,110 +515,92 @@
 #define PXP_BAR_DQ                                          1
 
 /* PTT and GTT */
-#define PXP_PER_PF_ENTRY_SIZE		8
-#define PXP_NUM_GLOBAL_WINDOWS		243
-#define PXP_GLOBAL_ENTRY_SIZE		4
-#define PXP_ADMIN_WINDOW_ALLOWED_LENGTH		4
-#define PXP_PF_WINDOW_ADMIN_START	0
-#define PXP_PF_WINDOW_ADMIN_LENGTH	0x1000
-#define PXP_PF_WINDOW_ADMIN_END		(PXP_PF_WINDOW_ADMIN_START + \
-				PXP_PF_WINDOW_ADMIN_LENGTH - 1)
-#define PXP_PF_WINDOW_ADMIN_PER_PF_START	0
-#define PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH	(PXP_NUM_PF_WINDOWS * \
-						 PXP_PER_PF_ENTRY_SIZE)
-#define PXP_PF_WINDOW_ADMIN_PER_PF_END (PXP_PF_WINDOW_ADMIN_PER_PF_START + \
-					PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH - 1)
-#define PXP_PF_WINDOW_ADMIN_GLOBAL_START	0x200
-#define PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH	(PXP_NUM_GLOBAL_WINDOWS * \
-						 PXP_GLOBAL_ENTRY_SIZE)
-#define PXP_PF_WINDOW_ADMIN_GLOBAL_END \
-		(PXP_PF_WINDOW_ADMIN_GLOBAL_START + \
-		 PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH - 1)
-#define PXP_PF_GLOBAL_PRETEND_ADDR	0x1f0
-#define PXP_PF_ME_OPAQUE_MASK_ADDR	0xf4
-#define PXP_PF_ME_OPAQUE_ADDR		0x1f8
-#define PXP_PF_ME_CONCRETE_ADDR		0x1fc
-
-#define PXP_NUM_PF_WINDOWS		12
-
-#define PXP_EXTERNAL_BAR_PF_WINDOW_START	0x1000
-#define PXP_EXTERNAL_BAR_PF_WINDOW_NUM		PXP_NUM_PF_WINDOWS
-#define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE	0x1000
-#define PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH \
-	(PXP_EXTERNAL_BAR_PF_WINDOW_NUM * \
-	 PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE)
-#define PXP_EXTERNAL_BAR_PF_WINDOW_END \
-	(PXP_EXTERNAL_BAR_PF_WINDOW_START + \
-	 PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH - 1)
-
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START \
-	(PXP_EXTERNAL_BAR_PF_WINDOW_END + 1)
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM		PXP_NUM_GLOBAL_WINDOWS
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE	0x1000
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH \
-	(PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM * \
-	 PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE)
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_END \
-	(PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START + \
-	 PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
+#define PXP_PER_PF_ENTRY_SIZE                               8
+#define PXP_NUM_GLOBAL_WINDOWS                              243
+#define PXP_GLOBAL_ENTRY_SIZE                               4
+#define PXP_ADMIN_WINDOW_ALLOWED_LENGTH                     4
+#define PXP_PF_WINDOW_ADMIN_START                           0
+#define PXP_PF_WINDOW_ADMIN_LENGTH                          0x1000
+#define PXP_PF_WINDOW_ADMIN_END                             \
+	(PXP_PF_WINDOW_ADMIN_START + PXP_PF_WINDOW_ADMIN_LENGTH - 1)
+#define PXP_PF_WINDOW_ADMIN_PER_PF_START                    0
+#define PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH                   \
+	(PXP_NUM_PF_WINDOWS * PXP_PER_PF_ENTRY_SIZE)
+#define PXP_PF_WINDOW_ADMIN_PER_PF_END                      \
+	(PXP_PF_WINDOW_ADMIN_PER_PF_START + PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH - 1)
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_START                    0x200
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH                   \
+	(PXP_NUM_GLOBAL_WINDOWS * PXP_GLOBAL_ENTRY_SIZE)
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_END                      \
+	(PXP_PF_WINDOW_ADMIN_GLOBAL_START + PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH - 1)
+#define PXP_PF_GLOBAL_PRETEND_ADDR                          0x1f0
+#define PXP_PF_ME_OPAQUE_MASK_ADDR                          0xf4
+#define PXP_PF_ME_OPAQUE_ADDR                               0x1f8
+#define PXP_PF_ME_CONCRETE_ADDR                             0x1fc
+
+#define PXP_NUM_PF_WINDOWS                                  12
+
+#define PXP_EXTERNAL_BAR_PF_WINDOW_START                    0x1000
+#define PXP_EXTERNAL_BAR_PF_WINDOW_NUM                      PXP_NUM_PF_WINDOWS
+#define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE              0x1000
+#define PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH                   \
+	(PXP_EXTERNAL_BAR_PF_WINDOW_NUM * PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE)
+#define PXP_EXTERNAL_BAR_PF_WINDOW_END                      \
+	(PXP_EXTERNAL_BAR_PF_WINDOW_START + PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH - 1)
+
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START                (PXP_EXTERNAL_BAR_PF_WINDOW_END + 1)
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM                  PXP_NUM_GLOBAL_WINDOWS
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE          0x1000
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH               \
+	(PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM * PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE)
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_END                  \
+	(PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START + PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
 
 /* PF BAR */
 #define PXP_BAR0_START_GRC                      0x0000
 #define PXP_BAR0_GRC_LENGTH                     0x1C00000
-#define PXP_BAR0_END_GRC                        \
-	(PXP_BAR0_START_GRC + PXP_BAR0_GRC_LENGTH - 1)
+#define PXP_BAR0_END_GRC                        (PXP_BAR0_START_GRC + PXP_BAR0_GRC_LENGTH - 1)
 
 #define PXP_BAR0_START_IGU                      0x1C00000
 #define PXP_BAR0_IGU_LENGTH                     0x10000
-#define PXP_BAR0_END_IGU                        \
-	(PXP_BAR0_START_IGU + PXP_BAR0_IGU_LENGTH - 1)
+#define PXP_BAR0_END_IGU                        (PXP_BAR0_START_IGU + PXP_BAR0_IGU_LENGTH - 1)
 
 #define PXP_BAR0_START_TSDM                     0x1C80000
 #define PXP_BAR0_SDM_LENGTH                     0x40000
 #define PXP_BAR0_SDM_RESERVED_LENGTH            0x40000
-#define PXP_BAR0_END_TSDM                       \
-	(PXP_BAR0_START_TSDM + PXP_BAR0_SDM_LENGTH - 1)
+#define PXP_BAR0_END_TSDM                       (PXP_BAR0_START_TSDM + PXP_BAR0_SDM_LENGTH - 1)
 
 #define PXP_BAR0_START_MSDM                     0x1D00000
-#define PXP_BAR0_END_MSDM                       \
-	(PXP_BAR0_START_MSDM + PXP_BAR0_SDM_LENGTH - 1)
+#define PXP_BAR0_END_MSDM                       (PXP_BAR0_START_MSDM + PXP_BAR0_SDM_LENGTH - 1)
 
 #define PXP_BAR0_START_USDM                     0x1D80000
-#define PXP_BAR0_END_USDM                       \
-	(PXP_BAR0_START_USDM + PXP_BAR0_SDM_LENGTH - 1)
+#define PXP_BAR0_END_USDM                       (PXP_BAR0_START_USDM + PXP_BAR0_SDM_LENGTH - 1)
 
 #define PXP_BAR0_START_XSDM                     0x1E00000
-#define PXP_BAR0_END_XSDM                       \
-	(PXP_BAR0_START_XSDM + PXP_BAR0_SDM_LENGTH - 1)
+#define PXP_BAR0_END_XSDM                       (PXP_BAR0_START_XSDM + PXP_BAR0_SDM_LENGTH - 1)
 
 #define PXP_BAR0_START_YSDM                     0x1E80000
-#define PXP_BAR0_END_YSDM                       \
-	(PXP_BAR0_START_YSDM + PXP_BAR0_SDM_LENGTH - 1)
+#define PXP_BAR0_END_YSDM                       (PXP_BAR0_START_YSDM + PXP_BAR0_SDM_LENGTH - 1)
 
 #define PXP_BAR0_START_PSDM                     0x1F00000
-#define PXP_BAR0_END_PSDM                       \
-	(PXP_BAR0_START_PSDM + PXP_BAR0_SDM_LENGTH - 1)
+#define PXP_BAR0_END_PSDM                       (PXP_BAR0_START_PSDM + PXP_BAR0_SDM_LENGTH - 1)
 
-#define PXP_BAR0_FIRST_INVALID_ADDRESS          \
-	(PXP_BAR0_END_PSDM + 1)
+#define PXP_BAR0_FIRST_INVALID_ADDRESS          (PXP_BAR0_END_PSDM + 1)
 
 /* VF BAR */
 #define PXP_VF_BAR0                             0
 
 #define PXP_VF_BAR0_START_IGU                   0
 #define PXP_VF_BAR0_IGU_LENGTH                  0x3000
-#define PXP_VF_BAR0_END_IGU                     \
-	(PXP_VF_BAR0_START_IGU + PXP_VF_BAR0_IGU_LENGTH - 1)
+#define PXP_VF_BAR0_END_IGU                     (PXP_VF_BAR0_START_IGU + PXP_VF_BAR0_IGU_LENGTH - 1)
 
 #define PXP_VF_BAR0_START_DQ                    0x3000
 #define PXP_VF_BAR0_DQ_LENGTH                   0x200
 #define PXP_VF_BAR0_DQ_OPAQUE_OFFSET            0
 #define PXP_VF_BAR0_ME_OPAQUE_ADDRESS           \
 	(PXP_VF_BAR0_START_DQ + PXP_VF_BAR0_DQ_OPAQUE_OFFSET)
-#define PXP_VF_BAR0_ME_CONCRETE_ADDRESS         \
-	(PXP_VF_BAR0_ME_OPAQUE_ADDRESS + 4)
-#define PXP_VF_BAR0_END_DQ                      \
-	(PXP_VF_BAR0_START_DQ + PXP_VF_BAR0_DQ_LENGTH - 1)
+#define PXP_VF_BAR0_ME_CONCRETE_ADDRESS         (PXP_VF_BAR0_ME_OPAQUE_ADDRESS + 4)
+#define PXP_VF_BAR0_END_DQ                      (PXP_VF_BAR0_START_DQ + PXP_VF_BAR0_DQ_LENGTH - 1)
 
 #define PXP_VF_BAR0_START_TSDM_ZONE_B           0x3200
 #define PXP_VF_BAR0_SDM_LENGTH_ZONE_B           0x200
@@ -611,8 +629,7 @@
 
 #define PXP_VF_BAR0_START_GRC                   0x3E00
 #define PXP_VF_BAR0_GRC_LENGTH                  0x200
-#define PXP_VF_BAR0_END_GRC                     \
-	(PXP_VF_BAR0_START_GRC + PXP_VF_BAR0_GRC_LENGTH - 1)
+#define PXP_VF_BAR0_END_GRC                     (PXP_VF_BAR0_START_GRC + PXP_VF_BAR0_GRC_LENGTH - 1)
 
 #define PXP_VF_BAR0_START_SDM_ZONE_A            0x4000
 #define PXP_VF_BAR0_END_SDM_ZONE_A              0x10000
@@ -627,14 +644,18 @@
 #define PXP_ILT_PAGE_SIZE_NUM_BITS_MIN          12
 #define PXP_ILT_BLOCK_FACTOR_MULTIPLIER         1024
 
-// ILT Records
+/* ILT Records */
 #define PXP_NUM_ILT_RECORDS_BB 7600
 #define PXP_NUM_ILT_RECORDS_K2 11000
-#define MAX_NUM_ILT_RECORDS \
-	OSAL_MAX_T(PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2)
+#define MAX_NUM_ILT_RECORDS_E4 OSAL_MAX_T(PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2)
 
-// Host Interface
-#define PXP_QUEUES_ZONE_MAX_NUM	320
+#define PXP_NUM_ILT_RECORDS_E5 13664
+#define MAX_NUM_ILT_RECORDS_E5 PXP_NUM_ILT_RECORDS_E5
+
+
+/* Host Interface */
+#define PXP_QUEUES_ZONE_MAX_NUM_E4	320
+#define PXP_QUEUES_ZONE_MAX_NUM_E5	512
 
 
 /*****************/
@@ -653,47 +674,56 @@
 #define SDM_OP_GEN_TRIG_INDICATE_ERROR	6
 #define SDM_OP_GEN_TRIG_INC_ORDER_CNT	9
 
-/***********************************************************/
-/* Completion types                                        */
-/***********************************************************/
-
-#define SDM_COMP_TYPE_NONE		0
-#define SDM_COMP_TYPE_WAKE_THREAD	1
-#define SDM_COMP_TYPE_AGG_INT		2
-/* Send direct message to local CM and/or remote CMs. Destinations are defined
- * by vector in CompParams.
- */
-#define SDM_COMP_TYPE_CM		3
-#define SDM_COMP_TYPE_LOADER		4
-/* Send direct message to PXP (like "internal write" command) to write to remote
- * Storm RAM via remote SDM
- */
-#define SDM_COMP_TYPE_PXP		5
-/* Indicate error per thread */
-#define SDM_COMP_TYPE_INDICATE_ERROR	6
-#define SDM_COMP_TYPE_RELEASE_THREAD	7
-/* Write to local RAM as a completion */
-#define SDM_COMP_TYPE_RAM		8
-#define SDM_COMP_TYPE_INC_ORDER_CNT	9 /* Applicable only for E4 */
+/*////////////////////////////////////////////////////////////*/
+/* Completion types */
+/*////////////////////////////////////////////////////////////*/
 
+#define SDM_COMP_TYPE_NONE				0
+#define SDM_COMP_TYPE_WAKE_THREAD		1
+#define SDM_COMP_TYPE_AGG_INT			2
+/* Send direct message to local CM and/or remote CMs. Destinations are defined by vector in
+ * CompParams.
+ */
+#define SDM_COMP_TYPE_CM				3
+#define SDM_COMP_TYPE_LOADER			4
+/* Send direct message to PXP (like "internal write" command) to write to remote Storm RAM via
+ * remote SDM
+ */
+#define SDM_COMP_TYPE_PXP				5
+#define SDM_COMP_TYPE_INDICATE_ERROR	6		/* Indicate error per thread */
+#define SDM_COMP_TYPE_RELEASE_THREAD	7		/* Obsolete in E5 */
+/* Write to local RAM as a completion */
+#define SDM_COMP_TYPE_RAM				8
+#define SDM_COMP_TYPE_INC_ORDER_CNT		9		/* Applicable only for E5 */
 
 /******************/
 /* PBF CONSTANTS  */
 /******************/
 
 /* Number of PBF command queue lines. */
-#define PBF_MAX_CMD_LINES 3328 /* Each line is 256b */
+#define PBF_MAX_CMD_LINES_E4 3328 /* Each line is 256b */
+#define PBF_MAX_CMD_LINES_E5 5280 /* Each line is 512b */
 
 /* Number of BTB blocks. Each block is 256B. */
 #define BTB_MAX_BLOCKS_BB 1440 /* 2880 blocks of 128B */
 #define BTB_MAX_BLOCKS_K2 1840 /* 3680 blocks of 128B */
-#define BTB_MAX_BLOCKS 1440
+#define BTB_MAX_BLOCKS_E5 2640
 
 /*****************/
 /* PRS CONSTANTS */
 /*****************/
 
 #define PRS_GFT_CAM_LINES_NO_MATCH  31
+/*****************/
+/* GFS CONSTANTS */
+/*****************/
+
+
+#define SRC_HEADER_MAGIC_NUMBER		        0x600DFEED
+#define GFS_HASH_SIZE_IN_BYTES          16
+#define GFS_PROFILE_MAX_ENTRIES         256
+
+
 
 /*
  * Interrupt coalescing TimeSet
@@ -708,16 +738,12 @@ struct coalescing_timeset {
 #define COALESCING_TIMESET_VALID_SHIFT   7
 };
 
+
 struct common_queue_zone {
 	__le16 ring_drv_data_consumer;
 	__le16 reserved;
 };
 
-struct nvmf_eqe_data {
-	__le16 icid /* The connection ID for which the EQE is written. */;
-	u8 reserved0[6] /* Alignment to line */;
-};
-
 
 /*
  * ETH Rx producers data
@@ -741,8 +767,7 @@ struct tcp_ulp_connect_done_params {
 struct iscsi_connect_done_results {
 	__le16 icid /* Context ID of the connection */;
 	__le16 conn_id /* Driver connection ID */;
-/* decided tcp params after connect done */
-	struct tcp_ulp_connect_done_params params;
+	struct tcp_ulp_connect_done_params params /* decided tcp params after connect done */;
 };
 
 
@@ -750,11 +775,10 @@ struct iscsi_eqe_data {
 	__le16 icid /* Context ID of the connection */;
 	__le16 conn_id /* Driver connection ID */;
 	__le16 reserved;
-/* error code - relevant only if the opcode indicates its an error */
-	u8 error_code;
+	u8 error_code /* error code - relevant only if the opcode indicates its an error */;
 	u8 error_pdu_opcode_reserved;
-/* The processed PDUs opcode on which happened the error - updated for specific
- * error codes, by default=0xFF
+/* The processed PDUs opcode on which happened the error - updated for specific error codes, by
+ * default=0xFF
  */
 #define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_MASK        0x3F
 #define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_SHIFT       0
@@ -776,7 +800,38 @@ enum mf_mode {
 	MAX_MF_MODE
 };
 
-/* Per-protocol connection types */
+
+struct nvmf_eqe_data {
+	__le16 icid /* The connection ID for which the EQE is written. */;
+	u8 reserved0[6] /* Alignment to line */;
+};
+
+
+/* Per protocol packet duplication enable bit vector. If set, duplicate offloaded traffic to LL2
+ * debug queueu.
+ */
+struct offload_pkt_dup_enable {
+	__le16 enable_vector;
+#define OFFLOAD_PKT_DUP_ENABLE_ISCSI_MASK      0x1 /* iSCSI */
+#define OFFLOAD_PKT_DUP_ENABLE_ISCSI_SHIFT     0
+#define OFFLOAD_PKT_DUP_ENABLE_FCOE_MASK       0x1 /* FCoE */
+#define OFFLOAD_PKT_DUP_ENABLE_FCOE_SHIFT      1
+#define OFFLOAD_PKT_DUP_ENABLE_ROCE_MASK       0x1 /* RoCE */
+#define OFFLOAD_PKT_DUP_ENABLE_ROCE_SHIFT      2
+#define OFFLOAD_PKT_DUP_ENABLE_LL2_MASK        0x1 /* LL2 */
+#define OFFLOAD_PKT_DUP_ENABLE_LL2_SHIFT       3
+#define OFFLOAD_PKT_DUP_ENABLE_RESERVED_MASK   0x1
+#define OFFLOAD_PKT_DUP_ENABLE_RESERVED_SHIFT  4
+#define OFFLOAD_PKT_DUP_ENABLE_IWARP_MASK      0x1 /* iWARP */
+#define OFFLOAD_PKT_DUP_ENABLE_IWARP_SHIFT     5
+#define OFFLOAD_PKT_DUP_ENABLE_RESERVED1_MASK  0x3FF
+#define OFFLOAD_PKT_DUP_ENABLE_RESERVED1_SHIFT 6
+};
+
+
+/*
+ * Per-protocol connection types
+ */
 enum protocol_type {
 	PROTOCOLID_ISCSI /* iSCSI */,
 	PROTOCOLID_FCOE /* FCoE */,
@@ -794,6 +849,15 @@ enum protocol_type {
 };
 
 
+/*
+ * Pstorm packet duplication config
+ */
+struct pstorm_pkt_dup_cfg {
+	struct offload_pkt_dup_enable enable /* TX packet duplication per protocol enable */;
+	__le16 reserved[3];
+};
+
+
 struct regpair {
 	__le32 lo /* low word for reg-pair */;
 	__le32 hi /* high word for reg-pair */;
@@ -820,12 +884,25 @@ struct rdma_eqe_suspend_qp {
  */
 union rdma_eqe_data {
 	struct regpair async_handle /* Host handle for the Async Completions */;
-	/* RoCE Destroy Event Data */
-	struct rdma_eqe_destroy_qp rdma_destroy_qp_data;
-	/* RoCE Suspend QP Event Data */
-	struct rdma_eqe_suspend_qp rdma_suspend_qp_data;
+	struct rdma_eqe_destroy_qp rdma_destroy_qp_data /* RoCE Destroy Event Data */;
+	struct rdma_eqe_suspend_qp rdma_suspend_qp_data /* RoCE Suspend QP Event Data */;
 };
 
+
+
+
+
+/*
+ * Tstorm packet duplication config
+ */
+struct tstorm_pkt_dup_cfg {
+/* RX and loopback packet duplication per protocol enable */
+	struct offload_pkt_dup_enable enable;
+	__le16 reserved;
+	__le32 cid /* Light L2 debug queue CID. */;
+};
+
+
 struct tstorm_queue_zone {
 	__le32 reserved[2];
 };
@@ -835,8 +912,7 @@ struct tstorm_queue_zone {
  * Ustorm Queue Zone
  */
 struct ustorm_eth_queue_zone {
-/* Rx interrupt coalescing TimeSet */
-	struct coalescing_timeset int_coalescing_timeset;
+	struct coalescing_timeset int_coalescing_timeset /* Rx interrupt coalescing TimeSet */;
 	u8 reserved[3];
 };
 
@@ -846,28 +922,30 @@ struct ustorm_queue_zone {
 	struct common_queue_zone common;
 };
 
-/* status block structure */
+
+/*
+ * status block structure
+ */
 struct cau_pi_entry {
 	__le32 prod;
-/* A per protocol indexPROD value. */
-#define CAU_PI_ENTRY_PROD_VAL_MASK    0xFFFF
+#define CAU_PI_ENTRY_PROD_VAL_MASK    0xFFFF /* A per protocol indexPROD value. */
 #define CAU_PI_ENTRY_PROD_VAL_SHIFT   0
-/* This value determines the TimeSet that the PI is associated with */
+/* This value determines the TimeSet that the PI is associated with  */
 #define CAU_PI_ENTRY_PI_TIMESET_MASK  0x7F
 #define CAU_PI_ENTRY_PI_TIMESET_SHIFT 16
-/* Select the FSM within the SB */
-#define CAU_PI_ENTRY_FSM_SEL_MASK     0x1
+#define CAU_PI_ENTRY_FSM_SEL_MASK     0x1 /* Select the FSM within the SB */
 #define CAU_PI_ENTRY_FSM_SEL_SHIFT    23
-/* Select the FSM within the SB */
-#define CAU_PI_ENTRY_RESERVED_MASK    0xFF
+#define CAU_PI_ENTRY_RESERVED_MASK    0xFF /* Select the FSM within the SB */
 #define CAU_PI_ENTRY_RESERVED_SHIFT   24
 };
 
-/* status block structure */
+
+/*
+ * status block structure
+ */
 struct cau_sb_entry {
 	__le32 data;
-/* The SB PROD index which is sent to the IGU. */
-#define CAU_SB_ENTRY_SB_PROD_MASK      0xFFFFFF
+#define CAU_SB_ENTRY_SB_PROD_MASK      0xFFFFFF /* The SB PROD index which is sent to the IGU. */
 #define CAU_SB_ENTRY_SB_PROD_SHIFT     0
 #define CAU_SB_ENTRY_STATE0_MASK       0xF /* RX state */
 #define CAU_SB_ENTRY_STATE0_SHIFT      24
@@ -880,10 +958,10 @@ struct cau_sb_entry {
 /* Indicates the TX TimeSet that this SB is associated with. */
 #define CAU_SB_ENTRY_SB_TIMESET1_MASK  0x7F
 #define CAU_SB_ENTRY_SB_TIMESET1_SHIFT 7
-/* This value will determine the RX FSM timer resolution in ticks */
+/* This value will determine the RX FSM timer resolution in ticks  */
 #define CAU_SB_ENTRY_TIMER_RES0_MASK   0x3
 #define CAU_SB_ENTRY_TIMER_RES0_SHIFT  14
-/* This value will determine the TX FSM timer resolution in ticks */
+/* This value will determine the TX FSM timer resolution in ticks  */
 #define CAU_SB_ENTRY_TIMER_RES1_MASK   0x3
 #define CAU_SB_ENTRY_TIMER_RES1_SHIFT  16
 #define CAU_SB_ENTRY_VF_NUMBER_MASK    0xFF
@@ -892,8 +970,8 @@ struct cau_sb_entry {
 #define CAU_SB_ENTRY_VF_VALID_SHIFT    26
 #define CAU_SB_ENTRY_PF_NUMBER_MASK    0xF
 #define CAU_SB_ENTRY_PF_NUMBER_SHIFT   27
-/* If set then indicates that the TPH STAG is equal to the SB number. Otherwise
- * the STAG will be equal to all ones.
+/* If set then indicates that the TPH STAG is equal to the SB number. Otherwise the STAG will be
+ * equal to all ones.
  */
 #define CAU_SB_ENTRY_TPH_MASK          0x1
 #define CAU_SB_ENTRY_TPH_SHIFT         31
@@ -901,8 +979,7 @@ struct cau_sb_entry {
 
 
 /*
- * Igu cleanup bit values to distinguish between clean or producer consumer
- * update.
+ * Igu cleanup bit values to distinguish between clean or producer consumer update.
  */
 enum command_type_bit {
 	IGU_COMMAND_TYPE_NOP = 0,
@@ -911,28 +988,29 @@ enum command_type_bit {
 };
 
 
-/* core doorbell data */
+/*
+ * core doorbell data
+ */
 struct core_db_data {
 	u8 params;
-/* destination of doorbell (use enum db_dest) */
-#define CORE_DB_DATA_DEST_MASK         0x3
+#define CORE_DB_DATA_DEST_MASK         0x3 /* destination of doorbell (use enum db_dest) */
 #define CORE_DB_DATA_DEST_SHIFT        0
-/* aggregative command to CM (use enum db_agg_cmd_sel) */
-#define CORE_DB_DATA_AGG_CMD_MASK      0x3
+#define CORE_DB_DATA_AGG_CMD_MASK      0x3 /* aggregative command to CM (use enum db_agg_cmd_sel) */
 #define CORE_DB_DATA_AGG_CMD_SHIFT     2
 #define CORE_DB_DATA_BYPASS_EN_MASK    0x1 /* enable QM bypass */
 #define CORE_DB_DATA_BYPASS_EN_SHIFT   4
 #define CORE_DB_DATA_RESERVED_MASK     0x1
 #define CORE_DB_DATA_RESERVED_SHIFT    5
-/* aggregative value selection */
-#define CORE_DB_DATA_AGG_VAL_SEL_MASK  0x3
+#define CORE_DB_DATA_AGG_VAL_SEL_MASK  0x3 /* aggregative value selection */
 #define CORE_DB_DATA_AGG_VAL_SEL_SHIFT 6
-/* bit for every DQ counter flags in CM context that DQ can increment */
-	u8	agg_flags;
-	__le16	spq_prod;
+	u8 agg_flags /* bit for every DQ counter flags in CM context that DQ can increment */;
+	__le16 spq_prod;
 };
 
-/* Enum of doorbell aggregative command selection */
+
+/*
+ * Enum of doorbell aggregative command selection
+ */
 enum db_agg_cmd_sel {
 	DB_AGG_CMD_NOP /* No operation */,
 	DB_AGG_CMD_SET /* Set the value */,
@@ -941,7 +1019,10 @@ enum db_agg_cmd_sel {
 	MAX_DB_AGG_CMD_SEL
 };
 
-/* Enum of doorbell destination */
+
+/*
+ * Enum of doorbell destination
+ */
 enum db_dest {
 	DB_DEST_XCM /* TX doorbell to XCM */,
 	DB_DEST_UCM /* RX doorbell to UCM */,
@@ -957,42 +1038,37 @@ enum db_dest {
 enum db_dpm_type {
 	DPM_LEGACY /* Legacy DPM- to Xstorm RAM */,
 	DPM_RDMA /* RDMA DPM (only RoCE in E4) - to NIG */,
-/* L2 DPM inline- to PBF, with packet data on doorbell */
-	DPM_L2_INLINE,
+	DPM_L2_INLINE /* L2 DPM inline- to PBF, with packet data on doorbell */,
 	DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */,
 	MAX_DB_DPM_TYPE
 };
 
+
 /*
- * Structure for doorbell data, in L2 DPM mode, for the first doorbell in a DPM
- * burst
+ * Structure for doorbell data, in L2 DPM mode, for the first doorbell in a DPM burst
  */
 struct db_l2_dpm_data {
 	__le16 icid /* internal CID */;
 	__le16 bd_prod /* bd producer value to update */;
 	__le32 params;
-/* Size in QWORD-s of the DPM burst */
-#define DB_L2_DPM_DATA_SIZE_MASK        0x3F
-#define DB_L2_DPM_DATA_SIZE_SHIFT       0
-/* Type of DPM transaction (DPM_L2_INLINE or DPM_L2_BD) (use enum db_dpm_type)
- */
-#define DB_L2_DPM_DATA_DPM_TYPE_MASK    0x3
-#define DB_L2_DPM_DATA_DPM_TYPE_SHIFT   6
-#define DB_L2_DPM_DATA_NUM_BDS_MASK     0xFF /* number of BD-s */
-#define DB_L2_DPM_DATA_NUM_BDS_SHIFT    8
-/* size of the packet to be transmitted in bytes */
-#define DB_L2_DPM_DATA_PKT_SIZE_MASK    0x7FF
-#define DB_L2_DPM_DATA_PKT_SIZE_SHIFT   16
-#define DB_L2_DPM_DATA_RESERVED0_MASK   0x1
-#define DB_L2_DPM_DATA_RESERVED0_SHIFT  27
-/* In DPM_L2_BD mode: the number of SGE-s */
-#define DB_L2_DPM_DATA_SGE_NUM_MASK     0x7
-#define DB_L2_DPM_DATA_SGE_NUM_SHIFT    28
-/* Flag indicating whether to enable GFS search */
-#define DB_L2_DPM_DATA_RESERVED1_MASK   0x1
-#define DB_L2_DPM_DATA_RESERVED1_SHIFT  31
+#define DB_L2_DPM_DATA_SIZE_MASK         0x3F /* Size in QWORD-s of the DPM burst */
+#define DB_L2_DPM_DATA_SIZE_SHIFT        0
+/* Type of DPM transaction (DPM_L2_INLINE or DPM_L2_BD) (use enum db_dpm_type) */
+#define DB_L2_DPM_DATA_DPM_TYPE_MASK     0x3
+#define DB_L2_DPM_DATA_DPM_TYPE_SHIFT    6
+#define DB_L2_DPM_DATA_NUM_BDS_MASK      0xFF /* number of BD-s */
+#define DB_L2_DPM_DATA_NUM_BDS_SHIFT     8
+#define DB_L2_DPM_DATA_PKT_SIZE_MASK     0x7FF /* size of the packet to be transmitted in bytes */
+#define DB_L2_DPM_DATA_PKT_SIZE_SHIFT    16
+#define DB_L2_DPM_DATA_RESERVED0_MASK    0x1
+#define DB_L2_DPM_DATA_RESERVED0_SHIFT   27
+#define DB_L2_DPM_DATA_SGE_NUM_MASK      0x7 /* In DPM_L2_BD mode: the number of SGE-s */
+#define DB_L2_DPM_DATA_SGE_NUM_SHIFT     28
+#define DB_L2_DPM_DATA_TGFS_SRC_EN_MASK  0x1 /* Flag indicating whether to enable TGFS search */
+#define DB_L2_DPM_DATA_TGFS_SRC_EN_SHIFT 31
 };
 
+
 /*
  * Structure for SGE in a DPM doorbell of type DPM_L2_BD
  */
@@ -1000,48 +1076,31 @@ struct db_l2_dpm_sge {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
 	__le16 bitfields;
-/* The TPH STAG index value */
-#define DB_L2_DPM_SGE_TPH_ST_INDEX_MASK  0x1FF
+#define DB_L2_DPM_SGE_TPH_ST_INDEX_MASK  0x1FF /* The TPH STAG index value */
 #define DB_L2_DPM_SGE_TPH_ST_INDEX_SHIFT 0
 #define DB_L2_DPM_SGE_RESERVED0_MASK     0x3
 #define DB_L2_DPM_SGE_RESERVED0_SHIFT    9
-/* Indicate if ST hint is requested or not */
-#define DB_L2_DPM_SGE_ST_VALID_MASK      0x1
+#define DB_L2_DPM_SGE_ST_VALID_MASK      0x1 /* Indicate if ST hint is requested or not */
 #define DB_L2_DPM_SGE_ST_VALID_SHIFT     11
 #define DB_L2_DPM_SGE_RESERVED1_MASK     0xF
 #define DB_L2_DPM_SGE_RESERVED1_SHIFT    12
 	__le32 reserved2;
 };
 
-/* Structure for doorbell address, in legacy mode */
+
+/*
+ * Structure for doorbell address, in legacy mode
+ */
 struct db_legacy_addr {
 	__le32 addr;
 #define DB_LEGACY_ADDR_RESERVED0_MASK  0x3
 #define DB_LEGACY_ADDR_RESERVED0_SHIFT 0
-/* doorbell extraction mode specifier- 0 if not used */
-#define DB_LEGACY_ADDR_DEMS_MASK       0x7
+#define DB_LEGACY_ADDR_DEMS_MASK       0x7 /* doorbell extraction mode specifier- 0 if not used */
 #define DB_LEGACY_ADDR_DEMS_SHIFT      2
 #define DB_LEGACY_ADDR_ICID_MASK       0x7FFFFFF /* internal CID */
 #define DB_LEGACY_ADDR_ICID_SHIFT      5
 };
 
-/*
- * Structure for doorbell address, in PWM mode
- */
-struct db_pwm_addr {
-	__le32 addr;
-#define DB_PWM_ADDR_RESERVED0_MASK  0x7
-#define DB_PWM_ADDR_RESERVED0_SHIFT 0
-/* Offset in PWM address space */
-#define DB_PWM_ADDR_OFFSET_MASK     0x7F
-#define DB_PWM_ADDR_OFFSET_SHIFT    3
-#define DB_PWM_ADDR_WID_MASK        0x3 /* Window ID */
-#define DB_PWM_ADDR_WID_SHIFT       10
-#define DB_PWM_ADDR_DPI_MASK        0xFFFF /* Doorbell page ID */
-#define DB_PWM_ADDR_DPI_SHIFT       12
-#define DB_PWM_ADDR_RESERVED1_MASK  0xF
-#define DB_PWM_ADDR_RESERVED1_SHIFT 28
-};
 
 /*
  * Structure for doorbell address, in legacy mode, without DEMS
@@ -1056,37 +1115,23 @@ struct db_legacy_wo_dems_addr {
 
 
 /*
- * Parameters to RDMA firmware, passed in EDPM doorbell
+ * Structure for doorbell address, in PWM mode
  */
-struct db_rdma_dpm_params {
-	__le32 params;
-/* Size in QWORD-s of the DPM burst */
-#define DB_RDMA_DPM_PARAMS_SIZE_MASK                0x3F
-#define DB_RDMA_DPM_PARAMS_SIZE_SHIFT               0
-/* Type of DPM transacation (DPM_RDMA) (use enum db_dpm_type) */
-#define DB_RDMA_DPM_PARAMS_DPM_TYPE_MASK            0x3
-#define DB_RDMA_DPM_PARAMS_DPM_TYPE_SHIFT           6
-/* opcode for RDMA operation */
-#define DB_RDMA_DPM_PARAMS_OPCODE_MASK              0xFF
-#define DB_RDMA_DPM_PARAMS_OPCODE_SHIFT             8
-/* the size of the WQE payload in bytes */
-#define DB_RDMA_DPM_PARAMS_WQE_SIZE_MASK            0x7FF
-#define DB_RDMA_DPM_PARAMS_WQE_SIZE_SHIFT           16
-#define DB_RDMA_DPM_PARAMS_RESERVED0_MASK           0x1
-#define DB_RDMA_DPM_PARAMS_RESERVED0_SHIFT          27
-/* RoCE ack request (will be set 1) */
-#define DB_RDMA_DPM_PARAMS_ACK_REQUEST_MASK         0x1
-#define DB_RDMA_DPM_PARAMS_ACK_REQUEST_SHIFT        28
-#define DB_RDMA_DPM_PARAMS_S_FLG_MASK               0x1 /* RoCE S flag */
-#define DB_RDMA_DPM_PARAMS_S_FLG_SHIFT              29
-/* RoCE completion flag for FW use */
-#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK      0x1
-#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT     30
-/* Connection type is iWARP */
-#define DB_RDMA_DPM_PARAMS_CONN_TYPE_IS_IWARP_MASK  0x1
-#define DB_RDMA_DPM_PARAMS_CONN_TYPE_IS_IWARP_SHIFT 31
+struct db_pwm_addr {
+	__le32 addr;
+#define DB_PWM_ADDR_RESERVED0_MASK  0x7
+#define DB_PWM_ADDR_RESERVED0_SHIFT 0
+#define DB_PWM_ADDR_OFFSET_MASK     0x7F /* Offset in PWM address space */
+#define DB_PWM_ADDR_OFFSET_SHIFT    3
+#define DB_PWM_ADDR_WID_MASK        0x3 /* Window ID */
+#define DB_PWM_ADDR_WID_SHIFT       10
+#define DB_PWM_ADDR_DPI_MASK        0xFFFF /* Doorbell page ID */
+#define DB_PWM_ADDR_DPI_SHIFT       12
+#define DB_PWM_ADDR_RESERVED1_MASK  0xF
+#define DB_PWM_ADDR_RESERVED1_SHIFT 28
 };
 
+
 /*
  * Parameters to RDMA firmware, passed in EDPM doorbell
  */
@@ -1098,11 +1143,9 @@ struct db_rdma_24b_icid_dpm_params {
 /* Type of DPM transaction (DPM_RDMA) (use enum db_dpm_type) */
 #define DB_RDMA_24B_ICID_DPM_PARAMS_DPM_TYPE_MASK            0x3
 #define DB_RDMA_24B_ICID_DPM_PARAMS_DPM_TYPE_SHIFT           6
-/* opcode for RDMA operation */
-#define DB_RDMA_24B_ICID_DPM_PARAMS_OPCODE_MASK              0xFF
+#define DB_RDMA_24B_ICID_DPM_PARAMS_OPCODE_MASK              0xFF /* opcode for RDMA operation */
 #define DB_RDMA_24B_ICID_DPM_PARAMS_OPCODE_SHIFT             8
-/* ICID extension */
-#define DB_RDMA_24B_ICID_DPM_PARAMS_ICID_EXT_MASK            0xFF
+#define DB_RDMA_24B_ICID_DPM_PARAMS_ICID_EXT_MASK            0xFF /* ICID extension */
 #define DB_RDMA_24B_ICID_DPM_PARAMS_ICID_EXT_SHIFT           16
 /* Number of invalid bytes in last QWORD of the DPM transaction */
 #define DB_RDMA_24B_ICID_DPM_PARAMS_INV_BYTE_CNT_MASK        0x7
@@ -1110,41 +1153,571 @@ struct db_rdma_24b_icid_dpm_params {
 /* Flag indicating 24b icid mode is enabled */
 #define DB_RDMA_24B_ICID_DPM_PARAMS_EXT_ICID_MODE_EN_MASK    0x1
 #define DB_RDMA_24B_ICID_DPM_PARAMS_EXT_ICID_MODE_EN_SHIFT   27
-/* RoCE completion flag */
-#define DB_RDMA_24B_ICID_DPM_PARAMS_COMPLETION_FLG_MASK      0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_COMPLETION_FLG_MASK      0x1 /* RoCE completion flag */
 #define DB_RDMA_24B_ICID_DPM_PARAMS_COMPLETION_FLG_SHIFT     28
-/* RoCE S flag */
-#define DB_RDMA_24B_ICID_DPM_PARAMS_S_FLG_MASK               0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_S_FLG_MASK               0x1 /* RoCE S flag */
 #define DB_RDMA_24B_ICID_DPM_PARAMS_S_FLG_SHIFT              29
 #define DB_RDMA_24B_ICID_DPM_PARAMS_RESERVED1_MASK           0x1
 #define DB_RDMA_24B_ICID_DPM_PARAMS_RESERVED1_SHIFT          30
-/* Connection type is iWARP */
-#define DB_RDMA_24B_ICID_DPM_PARAMS_CONN_TYPE_IS_IWARP_MASK  0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_CONN_TYPE_IS_IWARP_MASK  0x1 /* Connection type is iWARP */
 #define DB_RDMA_24B_ICID_DPM_PARAMS_CONN_TYPE_IS_IWARP_SHIFT 31
 };
 
 
 /*
- * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a
- * DPM burst
+ * Parameters to RDMA firmware, passed in EDPM doorbell
+ */
+struct db_rdma_dpm_params {
+	__le32 params;
+#define DB_RDMA_DPM_PARAMS_SIZE_MASK                0x3F /* Size in QWORD-s of the DPM burst */
+#define DB_RDMA_DPM_PARAMS_SIZE_SHIFT               0
+/* Type of DPM transaction (DPM_RDMA) (use enum db_dpm_type) */
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_MASK            0x3
+#define DB_RDMA_DPM_PARAMS_DPM_TYPE_SHIFT           6
+#define DB_RDMA_DPM_PARAMS_OPCODE_MASK              0xFF /* opcode for RDMA operation */
+#define DB_RDMA_DPM_PARAMS_OPCODE_SHIFT             8
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_MASK            0x7FF /* the size of the WQE payload in bytes */
+#define DB_RDMA_DPM_PARAMS_WQE_SIZE_SHIFT           16
+#define DB_RDMA_DPM_PARAMS_RESERVED0_MASK           0x1
+#define DB_RDMA_DPM_PARAMS_RESERVED0_SHIFT          27
+#define DB_RDMA_DPM_PARAMS_ACK_REQUEST_MASK         0x1 /* RoCE ack request (will be set 1) */
+#define DB_RDMA_DPM_PARAMS_ACK_REQUEST_SHIFT        28
+#define DB_RDMA_DPM_PARAMS_S_FLG_MASK               0x1 /* RoCE S flag */
+#define DB_RDMA_DPM_PARAMS_S_FLG_SHIFT              29
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK      0x1 /* RoCE completion flag for FW use */
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT     30
+#define DB_RDMA_DPM_PARAMS_CONN_TYPE_IS_IWARP_MASK  0x1 /* Connection type is iWARP */
+#define DB_RDMA_DPM_PARAMS_CONN_TYPE_IS_IWARP_SHIFT 31
+};
+
+/*
+ * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a DPM burst
  */
 struct db_rdma_dpm_data {
 	__le16 icid /* internal CID */;
 	__le16 prod_val /* aggregated value to update */;
-/* parameters passed to RDMA firmware */
-	struct db_rdma_dpm_params params;
+	struct db_rdma_dpm_params params /* parameters passed to RDMA firmware */;
+};
+
+
+
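For the first doorbell of an EDPM burst, db_rdma_dpm_data below carries the ICID, the producer value, and a params word built from the DB_RDMA_DPM_PARAMS fields; a minimal sketch, assuming SET_FIELD, OSAL_CPU_TO_LE16/32, and the DPM_RDMA value from enum db_dpm_type (helper name and argument list hypothetical):

static void qede_fill_rdma_dpm_data(struct db_rdma_dpm_data *dpm, u16 icid,
				    u16 prod, u8 size_qw, u8 opcode,
				    u16 wqe_size)
{
	u32 params = 0;

	SET_FIELD(params, DB_RDMA_DPM_PARAMS_SIZE, size_qw);
	SET_FIELD(params, DB_RDMA_DPM_PARAMS_DPM_TYPE, DPM_RDMA);
	SET_FIELD(params, DB_RDMA_DPM_PARAMS_OPCODE, opcode);
	SET_FIELD(params, DB_RDMA_DPM_PARAMS_WQE_SIZE, wqe_size);
	SET_FIELD(params, DB_RDMA_DPM_PARAMS_ACK_REQUEST, 1); /* RoCE: set to 1 */

	dpm->icid = OSAL_CPU_TO_LE16(icid);
	dpm->prod_val = OSAL_CPU_TO_LE16(prod);
	dpm->params.params = OSAL_CPU_TO_LE32(params);
}
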
+/*
+ * Rdif context
+ */
+struct e4_rdif_task_context {
+	__le32 initial_ref_tag;
+	__le16 app_tag_value;
+	__le16 app_tag_mask;
+	u8 flags0;
+#define E4_RDIF_TASK_CONTEXT_IGNORE_APP_TAG_MASK             0x1
+#define E4_RDIF_TASK_CONTEXT_IGNORE_APP_TAG_SHIFT            0
+#define E4_RDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_MASK      0x1
+#define E4_RDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_SHIFT     1
+#define E4_RDIF_TASK_CONTEXT_HOST_GUARD_TYPE_MASK            0x1 /* 0 = IP checksum, 1 = CRC */
+#define E4_RDIF_TASK_CONTEXT_HOST_GUARD_TYPE_SHIFT           2
+#define E4_RDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_MASK         0x1
+#define E4_RDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_SHIFT        3
+#define E4_RDIF_TASK_CONTEXT_PROTECTION_TYPE_MASK            0x3 /* 1/2/3 - Protection Type */
+#define E4_RDIF_TASK_CONTEXT_PROTECTION_TYPE_SHIFT           4
+#define E4_RDIF_TASK_CONTEXT_CRC_SEED_MASK                   0x1 /* 0=0x0000, 1=0xffff */
+#define E4_RDIF_TASK_CONTEXT_CRC_SEED_SHIFT                  6
+#define E4_RDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_MASK         0x1 /* Keep reference tag constant */
+#define E4_RDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_SHIFT        7
+	u8 partial_dif_data[7];
+	__le16 partial_crc_value;
+	__le16 partial_checksum_value;
+	__le32 offset_in_io;
+	__le16 flags1;
+#define E4_RDIF_TASK_CONTEXT_VALIDATE_GUARD_MASK             0x1
+#define E4_RDIF_TASK_CONTEXT_VALIDATE_GUARD_SHIFT            0
+#define E4_RDIF_TASK_CONTEXT_VALIDATE_APP_TAG_MASK           0x1
+#define E4_RDIF_TASK_CONTEXT_VALIDATE_APP_TAG_SHIFT          1
+#define E4_RDIF_TASK_CONTEXT_VALIDATE_REF_TAG_MASK           0x1
+#define E4_RDIF_TASK_CONTEXT_VALIDATE_REF_TAG_SHIFT          2
+#define E4_RDIF_TASK_CONTEXT_FORWARD_GUARD_MASK              0x1
+#define E4_RDIF_TASK_CONTEXT_FORWARD_GUARD_SHIFT             3
+#define E4_RDIF_TASK_CONTEXT_FORWARD_APP_TAG_MASK            0x1
+#define E4_RDIF_TASK_CONTEXT_FORWARD_APP_TAG_SHIFT           4
+#define E4_RDIF_TASK_CONTEXT_FORWARD_REF_TAG_MASK            0x1
+#define E4_RDIF_TASK_CONTEXT_FORWARD_REF_TAG_SHIFT           5
+/* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
+#define E4_RDIF_TASK_CONTEXT_INTERVAL_SIZE_MASK              0x7
+#define E4_RDIF_TASK_CONTEXT_INTERVAL_SIZE_SHIFT             6
+#define E4_RDIF_TASK_CONTEXT_HOST_INTERFACE_MASK             0x3 /* 0=None, 1=DIF, 2=DIX */
+#define E4_RDIF_TASK_CONTEXT_HOST_INTERFACE_SHIFT            9
+/* DIF tag right at the beginning of DIF interval */
+#define E4_RDIF_TASK_CONTEXT_DIF_BEFORE_DATA_MASK            0x1
+#define E4_RDIF_TASK_CONTEXT_DIF_BEFORE_DATA_SHIFT           11
+#define E4_RDIF_TASK_CONTEXT_RESERVED0_MASK                  0x1
+#define E4_RDIF_TASK_CONTEXT_RESERVED0_SHIFT                 12
+#define E4_RDIF_TASK_CONTEXT_NETWORK_INTERFACE_MASK          0x1 /* 0=None, 1=DIF */
+#define E4_RDIF_TASK_CONTEXT_NETWORK_INTERFACE_SHIFT         13
+/* Forward application tag with mask */
+#define E4_RDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_MASK  0x1
+#define E4_RDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_SHIFT 14
+/* Forward reference tag with mask */
+#define E4_RDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_MASK  0x1
+#define E4_RDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_SHIFT 15
+	__le16 state;
+#define E4_RDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_MASK    0xF
+#define E4_RDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_SHIFT   0
+#define E4_RDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_MASK  0xF
+#define E4_RDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_SHIFT 4
+#define E4_RDIF_TASK_CONTEXT_ERROR_IN_IO_MASK                0x1
+#define E4_RDIF_TASK_CONTEXT_ERROR_IN_IO_SHIFT               8
+#define E4_RDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_MASK          0x1
+#define E4_RDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_SHIFT         9
+/* mask for reference tag handling */
+#define E4_RDIF_TASK_CONTEXT_REF_TAG_MASK_MASK               0xF
+#define E4_RDIF_TASK_CONTEXT_REF_TAG_MASK_SHIFT              10
+#define E4_RDIF_TASK_CONTEXT_RESERVED1_MASK                  0x3
+#define E4_RDIF_TASK_CONTEXT_RESERVED1_SHIFT                 14
+	__le32 reserved2;
 };
 
-/* Igu interrupt command */
+
+/*
+ * Tdif context
+ */
+struct e4_tdif_task_context {
+	__le32 initial_ref_tag;
+	__le16 app_tag_value;
+	__le16 app_tag_mask;
+	__le16 partial_crc_value_b;
+	__le16 partial_checksum_value_b;
+	__le16 stateB;
+#define E4_TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_B_MASK    0xF
+#define E4_TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_B_SHIFT   0
+#define E4_TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_B_MASK  0xF
+#define E4_TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_B_SHIFT 4
+#define E4_TDIF_TASK_CONTEXT_ERROR_IN_IO_B_MASK                0x1
+#define E4_TDIF_TASK_CONTEXT_ERROR_IN_IO_B_SHIFT               8
+#define E4_TDIF_TASK_CONTEXT_CHECKSUM_VERFLOW_MASK             0x1
+#define E4_TDIF_TASK_CONTEXT_CHECKSUM_VERFLOW_SHIFT            9
+#define E4_TDIF_TASK_CONTEXT_RESERVED0_MASK                    0x3F
+#define E4_TDIF_TASK_CONTEXT_RESERVED0_SHIFT                   10
+	u8 reserved1;
+	u8 flags0;
+#define E4_TDIF_TASK_CONTEXT_IGNORE_APP_TAG_MASK               0x1
+#define E4_TDIF_TASK_CONTEXT_IGNORE_APP_TAG_SHIFT              0
+#define E4_TDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_MASK        0x1
+#define E4_TDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_SHIFT       1
+#define E4_TDIF_TASK_CONTEXT_HOST_GUARD_TYPE_MASK              0x1 /* 0 = IP checksum, 1 = CRC */
+#define E4_TDIF_TASK_CONTEXT_HOST_GUARD_TYPE_SHIFT             2
+#define E4_TDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_MASK           0x1
+#define E4_TDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_SHIFT          3
+#define E4_TDIF_TASK_CONTEXT_PROTECTION_TYPE_MASK              0x3 /* 1/2/3 - Protection Type */
+#define E4_TDIF_TASK_CONTEXT_PROTECTION_TYPE_SHIFT             4
+#define E4_TDIF_TASK_CONTEXT_CRC_SEED_MASK                     0x1 /* 0=0x0000, 1=0xffff */
+#define E4_TDIF_TASK_CONTEXT_CRC_SEED_SHIFT                    6
+#define E4_TDIF_TASK_CONTEXT_RESERVED2_MASK                    0x1
+#define E4_TDIF_TASK_CONTEXT_RESERVED2_SHIFT                   7
+	__le32 flags1;
+#define E4_TDIF_TASK_CONTEXT_VALIDATE_GUARD_MASK               0x1
+#define E4_TDIF_TASK_CONTEXT_VALIDATE_GUARD_SHIFT              0
+#define E4_TDIF_TASK_CONTEXT_VALIDATE_APP_TAG_MASK             0x1
+#define E4_TDIF_TASK_CONTEXT_VALIDATE_APP_TAG_SHIFT            1
+#define E4_TDIF_TASK_CONTEXT_VALIDATE_REF_TAG_MASK             0x1
+#define E4_TDIF_TASK_CONTEXT_VALIDATE_REF_TAG_SHIFT            2
+#define E4_TDIF_TASK_CONTEXT_FORWARD_GUARD_MASK                0x1
+#define E4_TDIF_TASK_CONTEXT_FORWARD_GUARD_SHIFT               3
+#define E4_TDIF_TASK_CONTEXT_FORWARD_APP_TAG_MASK              0x1
+#define E4_TDIF_TASK_CONTEXT_FORWARD_APP_TAG_SHIFT             4
+#define E4_TDIF_TASK_CONTEXT_FORWARD_REF_TAG_MASK              0x1
+#define E4_TDIF_TASK_CONTEXT_FORWARD_REF_TAG_SHIFT             5
+/* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
+#define E4_TDIF_TASK_CONTEXT_INTERVAL_SIZE_MASK                0x7
+#define E4_TDIF_TASK_CONTEXT_INTERVAL_SIZE_SHIFT               6
+#define E4_TDIF_TASK_CONTEXT_HOST_INTERFACE_MASK               0x3 /* 0=None, 1=DIF, 2=DIX */
+#define E4_TDIF_TASK_CONTEXT_HOST_INTERFACE_SHIFT              9
+/* DIF tag right at the beginning of DIF interval */
+#define E4_TDIF_TASK_CONTEXT_DIF_BEFORE_DATA_MASK              0x1
+#define E4_TDIF_TASK_CONTEXT_DIF_BEFORE_DATA_SHIFT             11
+#define E4_TDIF_TASK_CONTEXT_RESERVED3_MASK                    0x1 /* reserved */
+#define E4_TDIF_TASK_CONTEXT_RESERVED3_SHIFT                   12
+#define E4_TDIF_TASK_CONTEXT_NETWORK_INTERFACE_MASK            0x1 /* 0=None, 1=DIF */
+#define E4_TDIF_TASK_CONTEXT_NETWORK_INTERFACE_SHIFT           13
+#define E4_TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_A_MASK    0xF
+#define E4_TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_A_SHIFT   14
+#define E4_TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_A_MASK  0xF
+#define E4_TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_A_SHIFT 18
+#define E4_TDIF_TASK_CONTEXT_ERROR_IN_IO_A_MASK                0x1
+#define E4_TDIF_TASK_CONTEXT_ERROR_IN_IO_A_SHIFT               22
+#define E4_TDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_A_MASK          0x1
+#define E4_TDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_A_SHIFT         23
+/* mask for reference tag handling */
+#define E4_TDIF_TASK_CONTEXT_REF_TAG_MASK_MASK                 0xF
+#define E4_TDIF_TASK_CONTEXT_REF_TAG_MASK_SHIFT                24
+/* Forward application tag with mask */
+#define E4_TDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_MASK    0x1
+#define E4_TDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_SHIFT   28
+/* Forward reference tag with mask */
+#define E4_TDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_MASK    0x1
+#define E4_TDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_SHIFT   29
+#define E4_TDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_MASK           0x1 /* Keep reference tag constant */
+#define E4_TDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_SHIFT          30
+#define E4_TDIF_TASK_CONTEXT_RESERVED4_MASK                    0x1
+#define E4_TDIF_TASK_CONTEXT_RESERVED4_SHIFT                   31
+	__le32 offset_in_io_b;
+	__le16 partial_crc_value_a;
+	__le16 partial_checksum_value_a;
+	__le32 offset_in_io_a;
+	u8 partial_dif_data_a[8];
+	u8 partial_dif_data_b[8];
+};
+
+
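The E4 and E5 DIF contexts are programmed through the same mask/shift convention; as one example, enabling receive-side verification with 4KB intervals on E4 could look like the sketch below, assuming SET_FIELD and OSAL_CPU_TO_LE16 (helper name hypothetical; interval code 3 selects 4KB per the comment above):

static void qede_rdif_enable_validation(struct e4_rdif_task_context *ctx)
{
	u16 flags1 = 0;

	SET_FIELD(flags1, E4_RDIF_TASK_CONTEXT_VALIDATE_GUARD, 1);
	SET_FIELD(flags1, E4_RDIF_TASK_CONTEXT_VALIDATE_REF_TAG, 1);
	SET_FIELD(flags1, E4_RDIF_TASK_CONTEXT_INTERVAL_SIZE, 3);  /* 4KB */
	SET_FIELD(flags1, E4_RDIF_TASK_CONTEXT_HOST_INTERFACE, 1); /* DIF */

	ctx->flags1 = OSAL_CPU_TO_LE16(flags1);
}
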
+/*
+ * Rdif context
+ */
+struct e5_rdif_task_context {
+	__le32 initial_ref_tag;
+	__le16 app_tag_value;
+	__le16 app_tag_mask;
+	u8 flags0;
+#define E5_RDIF_TASK_CONTEXT_IGNORE_APP_TAG_MASK             0x1
+#define E5_RDIF_TASK_CONTEXT_IGNORE_APP_TAG_SHIFT            0
+#define E5_RDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_MASK      0x1
+#define E5_RDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_SHIFT     1
+#define E5_RDIF_TASK_CONTEXT_HOST_GUARD_TYPE_MASK            0x1 /* 0 = IP checksum, 1 = CRC */
+#define E5_RDIF_TASK_CONTEXT_HOST_GUARD_TYPE_SHIFT           2
+#define E5_RDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_MASK         0x1
+#define E5_RDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_SHIFT        3
+#define E5_RDIF_TASK_CONTEXT_PROTECTION_TYPE_MASK            0x3 /* 1/2/3 - Protection Type */
+#define E5_RDIF_TASK_CONTEXT_PROTECTION_TYPE_SHIFT           4
+/* 0=0x0000, 1=0xffff; used only in reg_e4_backward_compatible_mode */
+#define E5_RDIF_TASK_CONTEXT_CRC_SEED_MASK                   0x1
+#define E5_RDIF_TASK_CONTEXT_CRC_SEED_SHIFT                  6
+#define E5_RDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_MASK         0x1 /* Keep reference tag constant */
+#define E5_RDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_SHIFT        7
+	u8 partial_dif_data[7];
+	__le16 partial_crc_value;
+	__le16 partial_checksum_value;
+	__le32 offset_in_io;
+	__le16 flags1;
+#define E5_RDIF_TASK_CONTEXT_VALIDATE_GUARD_MASK             0x1
+#define E5_RDIF_TASK_CONTEXT_VALIDATE_GUARD_SHIFT            0
+#define E5_RDIF_TASK_CONTEXT_VALIDATE_APP_TAG_MASK           0x1
+#define E5_RDIF_TASK_CONTEXT_VALIDATE_APP_TAG_SHIFT          1
+#define E5_RDIF_TASK_CONTEXT_VALIDATE_REF_TAG_MASK           0x1
+#define E5_RDIF_TASK_CONTEXT_VALIDATE_REF_TAG_SHIFT          2
+#define E5_RDIF_TASK_CONTEXT_FORWARD_GUARD_MASK              0x1
+#define E5_RDIF_TASK_CONTEXT_FORWARD_GUARD_SHIFT             3
+#define E5_RDIF_TASK_CONTEXT_FORWARD_APP_TAG_MASK            0x1
+#define E5_RDIF_TASK_CONTEXT_FORWARD_APP_TAG_SHIFT           4
+#define E5_RDIF_TASK_CONTEXT_FORWARD_REF_TAG_MASK            0x1
+#define E5_RDIF_TASK_CONTEXT_FORWARD_REF_TAG_SHIFT           5
+/* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
+#define E5_RDIF_TASK_CONTEXT_INTERVAL_SIZE_MASK              0x7
+#define E5_RDIF_TASK_CONTEXT_INTERVAL_SIZE_SHIFT             6
+#define E5_RDIF_TASK_CONTEXT_HOST_INTERFACE_MASK             0x3 /* 0=None, 1=DIF, 2=DIX */
+#define E5_RDIF_TASK_CONTEXT_HOST_INTERFACE_SHIFT            9
+#define E5_RDIF_TASK_CONTEXT_RESERVED0_MASK                  0x3
+#define E5_RDIF_TASK_CONTEXT_RESERVED0_SHIFT                 11
+#define E5_RDIF_TASK_CONTEXT_NETWORK_INTERFACE_MASK          0x1 /* 0=None, 1=DIF */
+#define E5_RDIF_TASK_CONTEXT_NETWORK_INTERFACE_SHIFT         13
+/* Forward application tag with mask */
+#define E5_RDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_MASK  0x1
+#define E5_RDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_SHIFT 14
+/* Forward reference tag with mask */
+#define E5_RDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_MASK  0x1
+#define E5_RDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_SHIFT 15
+	__le16 state;
+#define E5_RDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_MASK    0xF
+#define E5_RDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_SHIFT   0
+#define E5_RDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_MASK  0xF
+#define E5_RDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_SHIFT 4
+#define E5_RDIF_TASK_CONTEXT_ERROR_IN_IO_MASK                0x1
+#define E5_RDIF_TASK_CONTEXT_ERROR_IN_IO_SHIFT               8
+#define E5_RDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_MASK          0x1
+#define E5_RDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_SHIFT         9
+/* mask for reference tag handling; used only in reg_e4_backward_compatible_mode */
+#define E5_RDIF_TASK_CONTEXT_REF_TAG_MASK_MASK               0xF
+#define E5_RDIF_TASK_CONTEXT_REF_TAG_MASK_SHIFT              10
+#define E5_RDIF_TASK_CONTEXT_RESERVED1_MASK                  0x3
+#define E5_RDIF_TASK_CONTEXT_RESERVED1_SHIFT                 14
+/* mask for reference tag handling; used only when reg_e4_backward_compatible_mode is 0 */
+	__le32 ref_tag_mask_e5;
+	__le16 crc_seed_e5 /* Full CRC seed; used only when tdif_reg_e4_backward_compatible_mode is 0 */;
+	__le16 reserved2[7] /* padding to 128bit */;
+};
+
+
+/*
+ * Tdif context
+ */
+struct e5_tdif_task_context {
+	__le32 initial_ref_tag;
+	__le16 app_tag_value;
+	__le16 app_tag_mask;
+	__le16 partial_crc_value_b;
+	__le16 partial_checksum_value_b;
+	__le16 stateB;
+#define E5_TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_B_MASK    0xF
+#define E5_TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_B_SHIFT   0
+#define E5_TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_B_MASK  0xF
+#define E5_TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_B_SHIFT 4
+#define E5_TDIF_TASK_CONTEXT_ERROR_IN_IO_B_MASK                0x1
+#define E5_TDIF_TASK_CONTEXT_ERROR_IN_IO_B_SHIFT               8
+#define E5_TDIF_TASK_CONTEXT_CHECKSUM_VERFLOW_MASK             0x1
+#define E5_TDIF_TASK_CONTEXT_CHECKSUM_VERFLOW_SHIFT            9
+#define E5_TDIF_TASK_CONTEXT_RESERVED0_MASK                    0x3F
+#define E5_TDIF_TASK_CONTEXT_RESERVED0_SHIFT                   10
+	u8 reserved1;
+	u8 flags0;
+#define E5_TDIF_TASK_CONTEXT_IGNORE_APP_TAG_MASK               0x1
+#define E5_TDIF_TASK_CONTEXT_IGNORE_APP_TAG_SHIFT              0
+#define E5_TDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_MASK        0x1
+#define E5_TDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_SHIFT       1
+#define E5_TDIF_TASK_CONTEXT_HOST_GUARD_TYPE_MASK              0x1 /* 0 = IP checksum, 1 = CRC */
+#define E5_TDIF_TASK_CONTEXT_HOST_GUARD_TYPE_SHIFT             2
+#define E5_TDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_MASK           0x1
+#define E5_TDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_SHIFT          3
+#define E5_TDIF_TASK_CONTEXT_PROTECTION_TYPE_MASK              0x3 /* 1/2/3 - Protection Type */
+#define E5_TDIF_TASK_CONTEXT_PROTECTION_TYPE_SHIFT             4
+/* 0=0x0000, 1=0xffff; used only in tdif_reg_e4_backward_compatible_mode */
+#define E5_TDIF_TASK_CONTEXT_CRC_SEED_MASK                     0x1
+#define E5_TDIF_TASK_CONTEXT_CRC_SEED_SHIFT                    6
+#define E5_TDIF_TASK_CONTEXT_RESERVED2_MASK                    0x1
+#define E5_TDIF_TASK_CONTEXT_RESERVED2_SHIFT                   7
+	__le32 flags1;
+#define E5_TDIF_TASK_CONTEXT_VALIDATE_GUARD_MASK               0x1
+#define E5_TDIF_TASK_CONTEXT_VALIDATE_GUARD_SHIFT              0
+#define E5_TDIF_TASK_CONTEXT_VALIDATE_APP_TAG_MASK             0x1
+#define E5_TDIF_TASK_CONTEXT_VALIDATE_APP_TAG_SHIFT            1
+#define E5_TDIF_TASK_CONTEXT_VALIDATE_REF_TAG_MASK             0x1
+#define E5_TDIF_TASK_CONTEXT_VALIDATE_REF_TAG_SHIFT            2
+#define E5_TDIF_TASK_CONTEXT_FORWARD_GUARD_MASK                0x1
+#define E5_TDIF_TASK_CONTEXT_FORWARD_GUARD_SHIFT               3
+#define E5_TDIF_TASK_CONTEXT_FORWARD_APP_TAG_MASK              0x1
+#define E5_TDIF_TASK_CONTEXT_FORWARD_APP_TAG_SHIFT             4
+#define E5_TDIF_TASK_CONTEXT_FORWARD_REF_TAG_MASK              0x1
+#define E5_TDIF_TASK_CONTEXT_FORWARD_REF_TAG_SHIFT             5
+/* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
+#define E5_TDIF_TASK_CONTEXT_INTERVAL_SIZE_MASK                0x7
+#define E5_TDIF_TASK_CONTEXT_INTERVAL_SIZE_SHIFT               6
+#define E5_TDIF_TASK_CONTEXT_HOST_INTERFACE_MASK               0x3 /* 0=None, 1=DIF, 2=DIX */
+#define E5_TDIF_TASK_CONTEXT_HOST_INTERFACE_SHIFT              9
+#define E5_TDIF_TASK_CONTEXT_RESERVED3_MASK                    0x3 /* reserved */
+#define E5_TDIF_TASK_CONTEXT_RESERVED3_SHIFT                   11
+#define E5_TDIF_TASK_CONTEXT_NETWORK_INTERFACE_MASK            0x1 /* 0=None, 1=DIF */
+#define E5_TDIF_TASK_CONTEXT_NETWORK_INTERFACE_SHIFT           13
+#define E5_TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_A_MASK    0xF
+#define E5_TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_A_SHIFT   14
+#define E5_TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_A_MASK  0xF
+#define E5_TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_A_SHIFT 18
+#define E5_TDIF_TASK_CONTEXT_ERROR_IN_IO_A_MASK                0x1
+#define E5_TDIF_TASK_CONTEXT_ERROR_IN_IO_A_SHIFT               22
+#define E5_TDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_A_MASK          0x1
+#define E5_TDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_A_SHIFT         23
+/* mask for reference tag handling; used only in tdif_reg_e4_backward_compatible_mode */
+#define E5_TDIF_TASK_CONTEXT_REF_TAG_MASK_MASK                 0xF
+#define E5_TDIF_TASK_CONTEXT_REF_TAG_MASK_SHIFT                24
+/* Forward application tag with mask */
+#define E5_TDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_MASK    0x1
+#define E5_TDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_SHIFT   28
+/* Forward reference tag with mask */
+#define E5_TDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_MASK    0x1
+#define E5_TDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_SHIFT   29
+#define E5_TDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_MASK           0x1 /* Keep reference tag constant */
+#define E5_TDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_SHIFT          30
+#define E5_TDIF_TASK_CONTEXT_RESERVED4_MASK                    0x1
+#define E5_TDIF_TASK_CONTEXT_RESERVED4_SHIFT                   31
+	__le32 offset_in_io_b;
+	__le16 partial_crc_value_a;
+	__le16 partial_checksum_value_a;
+	__le32 offset_in_io_a;
+	u8 partial_dif_data_a[8];
+	u8 partial_dif_data_b[8];
+/* Full reference tag mask seed; used only when tdif_reg_e4_backward_compatible_mode is 0 */
+	__le32 ref_tag_mask_e5;
+	__le16 crc_seed_e5 /* Full CRC seed; used only when tdif_reg_e4_backward_compatible_mode is 0 */;
+	__le16 reserved5 /* padding to 64bit */;
+	__le32 reserved6[2] /* padding to 128bit */;
+};
+
+
+/*
+ * Describes L2 upper protocol
+ */
+enum gfs_ether_type_enum {
+	e_gfs_ether_type_other_protocol,
+	e_gfs_ether_type_ipv4,
+	e_gfs_ether_type_ipv6,
+	e_gfs_ether_type_arp,
+	e_gfs_ether_type_roce,
+	e_gfs_ether_type_fcoe,
+	MAX_GFS_ETHER_TYPE_ENUM
+};
+
+
+/*
+ * GFS CAM line struct with fields breakout
+ */
+struct gfs_profile_cam_line {
+	__le32 reg0;
+#define GFS_PROFILE_CAM_LINE_PF_MASK                         0xF
+#define GFS_PROFILE_CAM_LINE_PF_SHIFT                        0
+#define GFS_PROFILE_CAM_LINE_TUNNEL_EXISTS_MASK              0x1
+#define GFS_PROFILE_CAM_LINE_TUNNEL_EXISTS_SHIFT             4
+/*  (use enum gfs_tunnel_type_enum) */
+#define GFS_PROFILE_CAM_LINE_TUNNEL_TYPE_MASK                0xF
+#define GFS_PROFILE_CAM_LINE_TUNNEL_TYPE_SHIFT               5
+#define GFS_PROFILE_CAM_LINE_TUNNEL_IP_VERSION_MASK          0x1
+#define GFS_PROFILE_CAM_LINE_TUNNEL_IP_VERSION_SHIFT         9
+#define GFS_PROFILE_CAM_LINE_L2_HEADER_EXISTS_MASK           0x1
+#define GFS_PROFILE_CAM_LINE_L2_HEADER_EXISTS_SHIFT          10
+/*  (use enum gfs_ether_type_enum) */
+#define GFS_PROFILE_CAM_LINE_ETHER_TYPE_PROTOCOL_MASK        0x7
+#define GFS_PROFILE_CAM_LINE_ETHER_TYPE_PROTOCOL_SHIFT       11
+#define GFS_PROFILE_CAM_LINE_OVER_IP_PROTOCOL_MASK           0xFF
+#define GFS_PROFILE_CAM_LINE_OVER_IP_PROTOCOL_SHIFT          14
+#define GFS_PROFILE_CAM_LINE_IS_IP_MASK                      0x1
+#define GFS_PROFILE_CAM_LINE_IS_IP_SHIFT                     22
+#define GFS_PROFILE_CAM_LINE_IS_TCP_UDP_SCTP_MASK            0x1
+#define GFS_PROFILE_CAM_LINE_IS_TCP_UDP_SCTP_SHIFT           23
+#define GFS_PROFILE_CAM_LINE_IS_UNICAST_MASK                 0x1
+#define GFS_PROFILE_CAM_LINE_IS_UNICAST_SHIFT                24
+#define GFS_PROFILE_CAM_LINE_IP_FRAGMENT_MASK                0x1
+#define GFS_PROFILE_CAM_LINE_IP_FRAGMENT_SHIFT               25
+#define GFS_PROFILE_CAM_LINE_CALCULATED_TCP_FLAGS_MASK       0x3F
+#define GFS_PROFILE_CAM_LINE_CALCULATED_TCP_FLAGS_SHIFT      26
+	__le32 reg1;
+#define GFS_PROFILE_CAM_LINE_INNER_CVLAN_EXISTS_MASK         0x1
+#define GFS_PROFILE_CAM_LINE_INNER_CVLAN_EXISTS_SHIFT        0
+#define GFS_PROFILE_CAM_LINE_INNER_SVLAN_EXISTS_MASK         0x1
+#define GFS_PROFILE_CAM_LINE_INNER_SVLAN_EXISTS_SHIFT        1
+#define GFS_PROFILE_CAM_LINE_TUNNEL_CVLAN_EXISTS_MASK        0x1
+#define GFS_PROFILE_CAM_LINE_TUNNEL_CVLAN_EXISTS_SHIFT       2
+#define GFS_PROFILE_CAM_LINE_TUNNEL_SVLAN_EXISTS_MASK        0x1
+#define GFS_PROFILE_CAM_LINE_TUNNEL_SVLAN_EXISTS_SHIFT       3
+#define GFS_PROFILE_CAM_LINE_MPLS_EXISTS_MASK                0x1
+#define GFS_PROFILE_CAM_LINE_MPLS_EXISTS_SHIFT               4
+#define GFS_PROFILE_CAM_LINE_FLEX_BYTE0_MASK                 0xFF
+#define GFS_PROFILE_CAM_LINE_FLEX_BYTE0_SHIFT                5
+#define GFS_PROFILE_CAM_LINE_FLEX_BYTE1_MASK                 0xFF
+#define GFS_PROFILE_CAM_LINE_FLEX_BYTE1_SHIFT                13
+#define GFS_PROFILE_CAM_LINE_FLEX_BYTE2_LSB_MASK             0xF
+#define GFS_PROFILE_CAM_LINE_FLEX_BYTE2_LSB_SHIFT            21
+#define GFS_PROFILE_CAM_LINE_FLEX_BYTE3_MSB_MASK             0xF
+#define GFS_PROFILE_CAM_LINE_FLEX_BYTE3_MSB_SHIFT            25
+#define GFS_PROFILE_CAM_LINE_RESERVED_MASK                   0x3
+#define GFS_PROFILE_CAM_LINE_RESERVED_SHIFT                  29
+#define GFS_PROFILE_CAM_LINE_VALID_MASK                      0x1
+#define GFS_PROFILE_CAM_LINE_VALID_SHIFT                     31
+	__le32 reg2;
+#define GFS_PROFILE_CAM_LINE_MASK_PF_MASK                    0xF
+#define GFS_PROFILE_CAM_LINE_MASK_PF_SHIFT                   0
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_EXISTS_MASK         0x1
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_EXISTS_SHIFT        4
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_TYPE_MASK           0xF
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_TYPE_SHIFT          5
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_IP_VERSION_MASK     0x1
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_IP_VERSION_SHIFT    9
+#define GFS_PROFILE_CAM_LINE_MASK_L2HEADER_EXISTS_MASK       0x1
+#define GFS_PROFILE_CAM_LINE_MASK_L2HEADER_EXISTS_SHIFT      10
+#define GFS_PROFILE_CAM_LINE_MASK_ETHERTYPE_PROTOCOL_MASK    0x7
+#define GFS_PROFILE_CAM_LINE_MASK_ETHERTYPE_PROTOCOL_SHIFT   11
+#define GFS_PROFILE_CAM_LINE_MASK_OVER_IP_PROTOCOL_MASK      0xFF
+#define GFS_PROFILE_CAM_LINE_MASK_OVER_IP_PROTOCOL_SHIFT     14
+#define GFS_PROFILE_CAM_LINE_MASK_IS_IP_MASK                 0x1
+#define GFS_PROFILE_CAM_LINE_MASK_IS_IP_SHIFT                22
+#define GFS_PROFILE_CAM_LINE_MASK_IS_TCP_UDP_SCTP_MASK       0x1
+#define GFS_PROFILE_CAM_LINE_MASK_IS_TCP_UDP_SCTP_SHIFT      23
+#define GFS_PROFILE_CAM_LINE_MASK_IS_UNICAST_MASK            0x1
+#define GFS_PROFILE_CAM_LINE_MASK_IS_UNICAST_SHIFT           24
+#define GFS_PROFILE_CAM_LINE_MASK_IP_FRAGMENT_MASK           0x1
+#define GFS_PROFILE_CAM_LINE_MASK_IP_FRAGMENT_SHIFT          25
+#define GFS_PROFILE_CAM_LINE_MASK_CALCULATED_TCP_FLAGS_MASK  0x3F
+#define GFS_PROFILE_CAM_LINE_MASK_CALCULATED_TCP_FLAGS_SHIFT 26
+	__le32 reg3;
+#define GFS_PROFILE_CAM_LINE_MASK_INNER_CVLAN_EXISTS_MASK    0x1
+#define GFS_PROFILE_CAM_LINE_MASK_INNER_CVLAN_EXISTS_SHIFT   0
+#define GFS_PROFILE_CAM_LINE_MASK_INNER_SVLAN_EXISTS_MASK    0x1
+#define GFS_PROFILE_CAM_LINE_MASK_INNER_SVLAN_EXISTS_SHIFT   1
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_CVLAN_EXISTS_MASK   0x1
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_CVLAN_EXISTS_SHIFT  2
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_SVLAN_EXISTS_MASK   0x1
+#define GFS_PROFILE_CAM_LINE_MASK_TUNNEL_SVLAN_EXISTS_SHIFT  3
+#define GFS_PROFILE_CAM_LINE_MASK_MPLS_EXISTS_MASK           0x1
+#define GFS_PROFILE_CAM_LINE_MASK_MPLS_EXISTS_SHIFT          4
+#define GFS_PROFILE_CAM_LINE_MASK_FLEX_BYTE0_MASK            0xFF
+#define GFS_PROFILE_CAM_LINE_MASK_FLEX_BYTE0_SHIFT           5
+#define GFS_PROFILE_CAM_LINE_MASK_FLEX_BYTE1_MASK            0xFF
+#define GFS_PROFILE_CAM_LINE_MASK_FLEX_BYTE1_SHIFT           13
+#define GFS_PROFILE_CAM_LINE_MASK_FLEX_BYTE2_LSB_MASK        0xF
+#define GFS_PROFILE_CAM_LINE_MASK_FLEX_BYTE2_LSB_SHIFT       21
+#define GFS_PROFILE_CAM_LINE_MASK_FLEX_BYTE3_MSB_MASK        0xF
+#define GFS_PROFILE_CAM_LINE_MASK_FLEX_BYTE3_MSB_SHIFT       25
+#define GFS_PROFILE_CAM_LINE_RESERVED1_MASK                  0x7
+#define GFS_PROFILE_CAM_LINE_RESERVED1_SHIFT                 29
+};
+
+
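The CAM line pairs match values (reg0/reg1) with per-field mask bits (reg2/reg3). Assuming the usual TCAM convention that a set mask bit makes the corresponding value bit significant, a hypothetical profile matching unicast TCP/UDP/SCTP-over-IP traffic for one PF could be built as:

static void qede_gfs_cam_unicast_l4(struct gfs_profile_cam_line *line, u8 pf)
{
	u32 reg0 = 0, reg1 = 0, reg2 = 0;

	SET_FIELD(reg0, GFS_PROFILE_CAM_LINE_PF, pf);
	SET_FIELD(reg0, GFS_PROFILE_CAM_LINE_IS_IP, 1);
	SET_FIELD(reg0, GFS_PROFILE_CAM_LINE_IS_TCP_UDP_SCTP, 1);
	SET_FIELD(reg0, GFS_PROFILE_CAM_LINE_IS_UNICAST, 1);
	SET_FIELD(reg1, GFS_PROFILE_CAM_LINE_VALID, 1);

	/* declare only the fields set above as significant for the match */
	SET_FIELD(reg2, GFS_PROFILE_CAM_LINE_MASK_PF, 0xF); /* exact PF match */
	SET_FIELD(reg2, GFS_PROFILE_CAM_LINE_MASK_IS_IP, 1);
	SET_FIELD(reg2, GFS_PROFILE_CAM_LINE_MASK_IS_TCP_UDP_SCTP, 1);
	SET_FIELD(reg2, GFS_PROFILE_CAM_LINE_MASK_IS_UNICAST, 1);

	line->reg0 = OSAL_CPU_TO_LE32(reg0);
	line->reg1 = OSAL_CPU_TO_LE32(reg1);
	line->reg2 = OSAL_CPU_TO_LE32(reg2);
	line->reg3 = 0; /* no reg1 match fields used */
}
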
+/*
+ * used to write to HASH_PROF_ENABLE_VEC_TWO_BIT_MODIFY register
+ */
+struct gfs_profile_modify_enable {
+	u8 index0 /* The index of the bit to be modified */;
+	u8 index1 /* The index of the bit to be modified */;
+	u8 flags;
+/* The operation to perform on the bit. 0 - reset, 1 - set. */
+#define GFS_PROFILE_MODIFY_ENABLE_SETRESET0_MASK  0x1
+#define GFS_PROFILE_MODIFY_ENABLE_SETRESET0_SHIFT 0
+/* The operation to perform on the bit. 0 - reset, 1 - set. */
+#define GFS_PROFILE_MODIFY_ENABLE_SETRESET1_MASK  0x1
+#define GFS_PROFILE_MODIFY_ENABLE_SETRESET1_SHIFT 1
+/* The operation will be applied to the bit only if this bit is set. */
+#define GFS_PROFILE_MODIFY_ENABLE_VALID0_MASK     0x1
+#define GFS_PROFILE_MODIFY_ENABLE_VALID0_SHIFT    2
+/* The operation will be applied to the bit only if this bit is set. */
+#define GFS_PROFILE_MODIFY_ENABLE_VALID1_MASK     0x1
+#define GFS_PROFILE_MODIFY_ENABLE_VALID1_SHIFT    3
+#define GFS_PROFILE_MODIFY_ENABLE_RESERVED1_MASK  0xF
+#define GFS_PROFILE_MODIFY_ENABLE_RESERVED1_SHIFT 4
+	u8 reserved2;
+};
+
+
+enum gfs_swap_i2o_enum {
+	e_no_swap,
+	e_swap_if_tunnel_exist,
+	e_swap_if_tunnel_not_exist,
+	e_swap_always,
+	MAX_GFS_SWAP_I2O_ENUM
+};
+
+
+/*
+ * Describes tunnel type
+ */
+enum gfs_tunnel_type_enum {
+	e_gfs_tunnel_type_no_tunnel = 0,
+	e_gfs_tunnel_type_geneve = 1,
+	e_gfs_tunnel_type_gre = 2,
+	e_gfs_tunnel_type_vxlan = 3,
+	e_gfs_tunnel_type_mpls = 4,
+	e_gfs_tunnel_type_gre_mpls = 5,
+	e_gfs_tunnel_type_udp_mpls = 6,
+	MAX_GFS_TUNNEL_TYPE_ENUM
+};
+
+
+/*
+ * Igu interrupt command
+ */
 enum igu_int_cmd {
-	IGU_INT_ENABLE	= 0,
+	IGU_INT_ENABLE = 0,
 	IGU_INT_DISABLE = 1,
-	IGU_INT_NOP	= 2,
-	IGU_INT_NOP2	= 3,
+	IGU_INT_NOP = 2,
+	IGU_INT_NOP2 = 3,
 	MAX_IGU_INT_CMD
 };
 
-/* IGU producer or consumer update command */
+
+/*
+ * IGU producer or consumer update command
+ */
 struct igu_prod_cons_update {
 	__le32 sb_id_and_flags;
 #define IGU_PROD_CONS_UPDATE_SB_INDEX_MASK        0xFFFFFF
@@ -1154,8 +1727,7 @@ struct igu_prod_cons_update {
 /* interrupt enable/disable/nop (use enum igu_int_cmd) */
 #define IGU_PROD_CONS_UPDATE_ENABLE_INT_MASK      0x3
 #define IGU_PROD_CONS_UPDATE_ENABLE_INT_SHIFT     25
-/*  (use enum igu_seg_access) */
-#define IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_MASK  0x1
+#define IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_MASK  0x1 /*  (use enum igu_seg_access) */
 #define IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_SHIFT 27
 #define IGU_PROD_CONS_UPDATE_TIMER_MASK_MASK      0x1
 #define IGU_PROD_CONS_UPDATE_TIMER_MASK_SHIFT     28
@@ -1167,18 +1739,19 @@ struct igu_prod_cons_update {
 	__le32 reserved1;
 };
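
A producer/consumer update is a single 32-bit doorbell write assembled from these fields; a minimal sketch, assuming SET_FIELD and OSAL_CPU_TO_LE32 (helper name hypothetical; IGU_SEG_ACCESS_REG comes from enum igu_seg_access below):

static __le32 qede_igu_ack_value(u32 sb_index, enum igu_int_cmd int_cmd)
{
	u32 val = 0;

	SET_FIELD(val, IGU_PROD_CONS_UPDATE_SB_INDEX, sb_index);
	SET_FIELD(val, IGU_PROD_CONS_UPDATE_ENABLE_INT, int_cmd);
	SET_FIELD(val, IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS, IGU_SEG_ACCESS_REG);

	return OSAL_CPU_TO_LE32(val);
}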
 
-/* Igu segments access for default status block only */
+
+/*
+ * Igu segments access for default status block only
+ */
 enum igu_seg_access {
-	IGU_SEG_ACCESS_REG	= 0,
-	IGU_SEG_ACCESS_ATTN	= 1,
+	IGU_SEG_ACCESS_REG = 0,
+	IGU_SEG_ACCESS_ATTN = 1,
 	MAX_IGU_SEG_ACCESS
 };
 
 
-/*
- * Enumeration for L3 type field of parsing_and_err_flags_union. L3Type:
- * 0 - unknown (not ip) ,1 - Ipv4, 2 - Ipv6 (this field can be filled according
- * to the last-ethertype)
+/* Enumeration for L3 type field of parsing_and_err_flags. L3Type: 0 - unknown (not ip), 1 - Ipv4,
+ * 2 - Ipv6 (this field can be filled according to the last-ethertype)
  */
 enum l3_type {
 	e_l3_type_unknown,
@@ -1188,10 +1761,9 @@ enum l3_type {
 };
 
 
-/*
- * Enumeration for l4Protocol field of parsing_and_err_flags_union. L4-protocol
- * 0 - none, 1 - TCP, 2- UDP. if the packet is IPv4 fragment, and its not the
- * first fragment, the protocol-type should be set to none.
+/* Enumeration for l4Protocol field of parsing_and_err_flags. L4-protocol: 0 - none, 1 - TCP,
+ * 2 - UDP. If the packet is an IPv4 fragment and it's not the first fragment, the protocol-type
+ * should be set to none.
  */
 enum l4_protocol {
 	e_l4_protocol_none,
@@ -1206,59 +1778,55 @@ enum l4_protocol {
  */
 struct parsing_and_err_flags {
 	__le16 flags;
-/* L3Type: 0 - unknown (not ip) ,1 - Ipv4, 2 - Ipv6 (this field can be filled
- * according to the last-ethertype) (use enum l3_type)
+/* L3Type: 0 - unknown (not ip), 1 - Ipv4, 2 - Ipv6 (this field can be filled according to the
+ * last-ethertype) (use enum l3_type)
  */
 #define PARSING_AND_ERR_FLAGS_L3TYPE_MASK                      0x3
 #define PARSING_AND_ERR_FLAGS_L3TYPE_SHIFT                     0
-/* L4-protocol 0 - none, 1 - TCP, 2- UDP. if the packet is IPv4 fragment, and
- * its not the first fragment, the protocol-type should be set to none.
- * (use enum l4_protocol)
+/* L4-protocol: 0 - none, 1 - TCP, 2 - UDP. If the packet is an IPv4 fragment and it's not the
+ * first fragment, the protocol-type should be set to none. (use enum l4_protocol)
  */
 #define PARSING_AND_ERR_FLAGS_L4PROTOCOL_MASK                  0x3
 #define PARSING_AND_ERR_FLAGS_L4PROTOCOL_SHIFT                 2
-/* Set if the packet is IPv4 fragment. */
+/* Set if the packet is IPv4/IPv6 fragment. */
 #define PARSING_AND_ERR_FLAGS_IPV4FRAG_MASK                    0x1
 #define PARSING_AND_ERR_FLAGS_IPV4FRAG_SHIFT                   4
-/* Set if VLAN tag exists. Invalid if tunnel type are IP GRE or IP GENEVE. */
+/* Corresponds to the same 8021q tag that is selected for the 8021q-tag field. This flag should be
+ * set if the tag appears in the packet, regardless of its value.
+ */
 #define PARSING_AND_ERR_FLAGS_TAG8021QEXIST_MASK               0x1
 #define PARSING_AND_ERR_FLAGS_TAG8021QEXIST_SHIFT              5
-/* Set if L4 checksum was calculated. */
+/* Set if L4 checksum was calculated. taken from the EOP descriptor. */
 #define PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_MASK        0x1
 #define PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_SHIFT       6
-/* Set for PTP packet. */
-#define PARSING_AND_ERR_FLAGS_TIMESYNCPKT_MASK                 0x1
+#define PARSING_AND_ERR_FLAGS_TIMESYNCPKT_MASK                 0x1 /* Set for PTP packet. */
 #define PARSING_AND_ERR_FLAGS_TIMESYNCPKT_SHIFT                7
 /* Set if PTP timestamp recorded. */
 #define PARSING_AND_ERR_FLAGS_TIMESTAMPRECORDED_MASK           0x1
 #define PARSING_AND_ERR_FLAGS_TIMESTAMPRECORDED_SHIFT          8
-/* Set if either version-mismatch or hdr-len-error or ipv4-cksm is set or ipv6
- * ver mismatch
- */
+/* Set if either version-mismatch or hdr-len-error or ipv4-cksm is set or ipv6 ver mismatch */
 #define PARSING_AND_ERR_FLAGS_IPHDRERROR_MASK                  0x1
 #define PARSING_AND_ERR_FLAGS_IPHDRERROR_SHIFT                 9
-/* Set if L4 checksum validation failed. Valid only if L4 checksum was
- * calculated.
- */
+/* Set if L4 checksum validation failed. Valid only if L4 checksum was calculated. */
 #define PARSING_AND_ERR_FLAGS_L4CHKSMERROR_MASK                0x1
 #define PARSING_AND_ERR_FLAGS_L4CHKSMERROR_SHIFT               10
 /* Set if GRE/VXLAN/GENEVE tunnel detected. */
 #define PARSING_AND_ERR_FLAGS_TUNNELEXIST_MASK                 0x1
 #define PARSING_AND_ERR_FLAGS_TUNNELEXIST_SHIFT                11
-/* Set if VLAN tag exists in tunnel header. */
+/* This flag should be set if the tag appears in the packet tunnel header, regardless of its
+ * value.
+ */
 #define PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_MASK         0x1
 #define PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_SHIFT        12
-/* Set if either tunnel-ipv4-version-mismatch or tunnel-ipv4-hdr-len-error or
- * tunnel-ipv4-cksm is set or tunneling ipv6 ver mismatch
+/* Set if either tunnel-ipv4-version-mismatch or tunnel-ipv4-hdr-len-error or tunnel-ipv4-cksm is
+ * set or tunneling ipv6 ver mismatch
  */
 #define PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_MASK            0x1
 #define PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_SHIFT           13
-/* Set if GRE or VXLAN/GENEVE UDP checksum was calculated. */
+/* Set if tunnel L4 checksum was calculated; taken from the EOP descriptor. */
 #define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_MASK  0x1
 #define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_SHIFT 14
-/* Set if tunnel L4 checksum validation failed. Valid only if tunnel L4 checksum
- * was calculated.
- */
+/* Set if tunnel L4 checksum validation failed. Valid only if tunnel L4 checksum was calculated. */
 #define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_MASK          0x1
 #define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_SHIFT         15
 };
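
On the receive path these flags gate checksum-offload decisions: the error bit is meaningful only when the corresponding was-calculated bit is set. A minimal sketch, assuming GET_FIELD and OSAL_LE16_TO_CPU (helper name hypothetical):

static bool qede_rx_l4_csum_ok(const struct parsing_and_err_flags *pars)
{
	u16 flags = OSAL_LE16_TO_CPU(pars->flags);

	/* the error flag is valid only if HW actually calculated the csum */
	if (!GET_FIELD(flags, PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED))
		return false;

	return !GET_FIELD(flags, PARSING_AND_ERR_FLAGS_L4CHKSMERROR);
}
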
@@ -1269,8 +1837,7 @@ struct parsing_and_err_flags {
  */
 struct parsing_err_flags {
 	__le16 flags;
-/* MAC error indication */
-#define PARSING_ERR_FLAGS_MAC_ERROR_MASK                          0x1
+#define PARSING_ERR_FLAGS_MAC_ERROR_MASK                          0x1 /* MAC error indication */
 #define PARSING_ERR_FLAGS_MAC_ERROR_SHIFT                         0
 /* truncation error indication */
 #define PARSING_ERR_FLAGS_TRUNC_ERROR_MASK                        0x1
@@ -1278,49 +1845,47 @@ struct parsing_err_flags {
 /* packet too small indication */
 #define PARSING_ERR_FLAGS_PKT_TOO_SMALL_MASK                      0x1
 #define PARSING_ERR_FLAGS_PKT_TOO_SMALL_SHIFT                     2
-/* Header Missing Tag */
-#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_MASK                0x1
+#define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_MASK                0x1 /* Header Missing Tag */
 #define PARSING_ERR_FLAGS_ANY_HDR_MISSING_TAG_SHIFT               3
-/* from frame cracker output */
+/* From FC. See spec for detailed description */
 #define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_MASK             0x1
 #define PARSING_ERR_FLAGS_ANY_HDR_IP_VER_MISMTCH_SHIFT            4
-/* from frame cracker output */
+/* From FC. See spec for detailed description */
 #define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_MASK    0x1
 #define PARSING_ERR_FLAGS_ANY_HDR_IP_V4_HDR_LEN_TOO_SMALL_SHIFT   5
-/* set this error if: 1. total-len is smaller than hdr-len 2. total-ip-len
- * indicates number that is bigger than real packet length 3. tunneling:
- * total-ip-length of the outer header points to offset that is smaller than
- * the one pointed to by the total-ip-len of the inner hdr.
+/* set this error if: 1. total-len is smaller than hdr-len 2. total-ip-len indicates number that is
+ * bigger than real packet length 3. tunneling: total-ip-length of the outer header points to offset
+ * that is smaller than the one pointed to by the total-ip-len of the inner hdr.
  */
 #define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_MASK           0x1
 #define PARSING_ERR_FLAGS_ANY_HDR_IP_BAD_TOTAL_LEN_SHIFT          6
-/* from frame cracker output */
+/* From FC. See spec for detailed description */
 #define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_MASK                  0x1
 #define PARSING_ERR_FLAGS_IP_V4_CHKSM_ERROR_SHIFT                 7
-/* from frame cracker output. for either TCP or UDP */
+/* From FC. See spec for detailed description. for either TCP or UDP */
 #define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_MASK          0x1
 #define PARSING_ERR_FLAGS_ANY_HDR_L4_IP_LEN_MISMTCH_SHIFT         8
-/* from frame cracker output */
+/* From FC. See spec for detailed description */
 #define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_MASK               0x1
 #define PARSING_ERR_FLAGS_ZERO_UDP_IP_V6_CHKSM_SHIFT              9
-/* cksm calculated and value isn't 0xffff or L4-cksm-wasnt-calculated for any
- * reason, like: udp/ipv4 checksum is 0 etc.
+/* cksm calculated and value isn't 0xffff or L4-cksm-wasnt-calculated for any reason, like: udp/ipv4
+ * checksum is 0 etc.
  */
 #define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_MASK               0x1
 #define PARSING_ERR_FLAGS_INNER_L4_CHKSM_ERROR_SHIFT              10
-/* from frame cracker output */
+/* From FC. See spec for detailed description */
 #define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_MASK        0x1
 #define PARSING_ERR_FLAGS_ANY_HDR_ZERO_TTL_OR_HOP_LIM_SHIFT       11
-/* from frame cracker output */
+/* From FC. See spec for detailed description */
 #define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_MASK  0x1
 #define PARSING_ERR_FLAGS_NON_8021Q_TAG_EXISTS_IN_BOTH_HDRS_SHIFT 12
 /* set if geneve option size was over 32 byte */
 #define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_MASK            0x1
 #define PARSING_ERR_FLAGS_GENEVE_OPTION_OVERSIZED_SHIFT           13
-/* from frame cracker output */
+/* From FC. See spec for detailed description */
 #define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_MASK           0x1
 #define PARSING_ERR_FLAGS_TUNNEL_IP_V4_CHKSM_ERROR_SHIFT          14
-/* from frame cracker output */
+/* From FC. See spec for detailed description */
 #define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_MASK              0x1
 #define PARSING_ERR_FLAGS_TUNNEL_L4_CHKSM_ERROR_SHIFT             15
 };
@@ -1333,25 +1898,33 @@ struct pb_context {
 	__le32 crc[4];
 };
 
-/* Concrete Function ID. */
+
+/*
+ * Concrete Function ID.
+ */
 struct pxp_concrete_fid {
 	__le16 fid;
-#define PXP_CONCRETE_FID_PFID_MASK     0xF /* Parent PFID */
+#define PXP_CONCRETE_FID_PFID_MASK     0xF /* Parent PFID (Relative to path) */
 #define PXP_CONCRETE_FID_PFID_SHIFT    0
-#define PXP_CONCRETE_FID_PORT_MASK     0x3 /* port number */
+#define PXP_CONCRETE_FID_PORT_MASK     0x3 /* port number (Relative to path) */
 #define PXP_CONCRETE_FID_PORT_SHIFT    4
 #define PXP_CONCRETE_FID_PATH_MASK     0x1 /* path number */
 #define PXP_CONCRETE_FID_PATH_SHIFT    6
 #define PXP_CONCRETE_FID_VFVALID_MASK  0x1
 #define PXP_CONCRETE_FID_VFVALID_SHIFT 7
-#define PXP_CONCRETE_FID_VFID_MASK     0xFF
+#define PXP_CONCRETE_FID_VFID_MASK     0xFF /* (Relative to path) */
 #define PXP_CONCRETE_FID_VFID_SHIFT    8
 };
 
+
+/*
+ * Concrete Function ID.
+ */
 struct pxp_pretend_concrete_fid {
 	__le16 fid;
-#define PXP_PRETEND_CONCRETE_FID_PFID_MASK      0xF
+#define PXP_PRETEND_CONCRETE_FID_PFID_MASK      0xF /* Parent PFID */
 #define PXP_PRETEND_CONCRETE_FID_PFID_SHIFT     0
+/* port number. Only when part of ME register. */
 #define PXP_PRETEND_CONCRETE_FID_RESERVED_MASK  0x7
 #define PXP_PRETEND_CONCRETE_FID_RESERVED_SHIFT 4
 #define PXP_PRETEND_CONCRETE_FID_VFVALID_MASK   0x1
@@ -1360,15 +1933,20 @@ struct pxp_pretend_concrete_fid {
 #define PXP_PRETEND_CONCRETE_FID_VFID_SHIFT     8
 };
 
+/*
+ * Function ID.
+ */
 union pxp_pretend_fid {
 	struct pxp_pretend_concrete_fid concrete_fid;
-	__le16				opaque_fid;
+	__le16 opaque_fid;
 };
 
-/* Pxp Pretend Command Register. */
+/*
+ * Pxp Pretend Command Register.
+ */
 struct pxp_pretend_cmd {
-	union pxp_pretend_fid	fid;
-	__le16			control;
+	union pxp_pretend_fid fid;
+	__le16 control;
 #define PXP_PRETEND_CMD_PATH_MASK              0x1
 #define PXP_PRETEND_CMD_PATH_SHIFT             0
 #define PXP_PRETEND_CMD_USE_PORT_MASK          0x1
@@ -1379,24 +1957,29 @@ struct pxp_pretend_cmd {
 #define PXP_PRETEND_CMD_RESERVED0_SHIFT        4
 #define PXP_PRETEND_CMD_RESERVED1_MASK         0xF
 #define PXP_PRETEND_CMD_RESERVED1_SHIFT        8
-#define PXP_PRETEND_CMD_PRETEND_PATH_MASK      0x1
+#define PXP_PRETEND_CMD_PRETEND_PATH_MASK      0x1 /* is pretend mode? */
 #define PXP_PRETEND_CMD_PRETEND_PATH_SHIFT     12
-#define PXP_PRETEND_CMD_PRETEND_PORT_MASK      0x1
+#define PXP_PRETEND_CMD_PRETEND_PORT_MASK      0x1 /* is pretend mode? */
 #define PXP_PRETEND_CMD_PRETEND_PORT_SHIFT     13
-#define PXP_PRETEND_CMD_PRETEND_FUNCTION_MASK  0x1
+#define PXP_PRETEND_CMD_PRETEND_FUNCTION_MASK  0x1 /* is pretend mode? */
 #define PXP_PRETEND_CMD_PRETEND_FUNCTION_SHIFT 14
-#define PXP_PRETEND_CMD_IS_CONCRETE_MASK       0x1
+#define PXP_PRETEND_CMD_IS_CONCRETE_MASK       0x1 /* is fid concrete? */
 #define PXP_PRETEND_CMD_IS_CONCRETE_SHIFT      15
 };
 
-/* PTT Record in PXP Admin Window. */
+
+
+
+/*
+ * PTT Record in PXP Admin Window.
+ */
 struct pxp_ptt_entry {
-	__le32			offset;
+	__le32 offset;
 #define PXP_PTT_ENTRY_OFFSET_MASK     0x7FFFFF
 #define PXP_PTT_ENTRY_OFFSET_SHIFT    0
 #define PXP_PTT_ENTRY_RESERVED0_MASK  0x1FF
 #define PXP_PTT_ENTRY_RESERVED0_SHIFT 23
-	struct pxp_pretend_cmd	pretend;
+	struct pxp_pretend_cmd pretend;
 };
 
 
@@ -1416,214 +1999,72 @@ struct pxp_vf_zone_a_permission {
 };
 
 
+
 /*
- * Rdif context
+ * Searcher Table struct
  */
-struct rdif_task_context {
-	__le32 initial_ref_tag;
-	__le16 app_tag_value;
-	__le16 app_tag_mask;
-	u8 flags0;
-#define RDIF_TASK_CONTEXT_IGNORE_APP_TAG_MASK             0x1
-#define RDIF_TASK_CONTEXT_IGNORE_APP_TAG_SHIFT            0
-#define RDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_MASK      0x1
-#define RDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_SHIFT     1
-/* 0 = IP checksum, 1 = CRC */
-#define RDIF_TASK_CONTEXT_HOST_GUARD_TYPE_MASK            0x1
-#define RDIF_TASK_CONTEXT_HOST_GUARD_TYPE_SHIFT           2
-#define RDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_MASK         0x1
-#define RDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_SHIFT        3
-/* 1/2/3 - Protection Type */
-#define RDIF_TASK_CONTEXT_PROTECTION_TYPE_MASK            0x3
-#define RDIF_TASK_CONTEXT_PROTECTION_TYPE_SHIFT           4
-/* 0=0x0000, 1=0xffff */
-#define RDIF_TASK_CONTEXT_CRC_SEED_MASK                   0x1
-#define RDIF_TASK_CONTEXT_CRC_SEED_SHIFT                  6
-/* Keep reference tag constant */
-#define RDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_MASK         0x1
-#define RDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_SHIFT        7
-	u8 partial_dif_data[7];
-	__le16 partial_crc_value;
-	__le16 partial_checksum_value;
-	__le32 offset_in_io;
-	__le16 flags1;
-#define RDIF_TASK_CONTEXT_VALIDATE_GUARD_MASK             0x1
-#define RDIF_TASK_CONTEXT_VALIDATE_GUARD_SHIFT            0
-#define RDIF_TASK_CONTEXT_VALIDATE_APP_TAG_MASK           0x1
-#define RDIF_TASK_CONTEXT_VALIDATE_APP_TAG_SHIFT          1
-#define RDIF_TASK_CONTEXT_VALIDATE_REF_TAG_MASK           0x1
-#define RDIF_TASK_CONTEXT_VALIDATE_REF_TAG_SHIFT          2
-#define RDIF_TASK_CONTEXT_FORWARD_GUARD_MASK              0x1
-#define RDIF_TASK_CONTEXT_FORWARD_GUARD_SHIFT             3
-#define RDIF_TASK_CONTEXT_FORWARD_APP_TAG_MASK            0x1
-#define RDIF_TASK_CONTEXT_FORWARD_APP_TAG_SHIFT           4
-#define RDIF_TASK_CONTEXT_FORWARD_REF_TAG_MASK            0x1
-#define RDIF_TASK_CONTEXT_FORWARD_REF_TAG_SHIFT           5
-/* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
-#define RDIF_TASK_CONTEXT_INTERVAL_SIZE_MASK              0x7
-#define RDIF_TASK_CONTEXT_INTERVAL_SIZE_SHIFT             6
-/* 0=None, 1=DIF, 2=DIX */
-#define RDIF_TASK_CONTEXT_HOST_INTERFACE_MASK             0x3
-#define RDIF_TASK_CONTEXT_HOST_INTERFACE_SHIFT            9
-/* DIF tag right at the beginning of DIF interval */
-#define RDIF_TASK_CONTEXT_DIF_BEFORE_DATA_MASK            0x1
-#define RDIF_TASK_CONTEXT_DIF_BEFORE_DATA_SHIFT           11
-#define RDIF_TASK_CONTEXT_RESERVED0_MASK                  0x1
-#define RDIF_TASK_CONTEXT_RESERVED0_SHIFT                 12
-/* 0=None, 1=DIF */
-#define RDIF_TASK_CONTEXT_NETWORK_INTERFACE_MASK          0x1
-#define RDIF_TASK_CONTEXT_NETWORK_INTERFACE_SHIFT         13
-/* Forward application tag with mask */
-#define RDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_MASK  0x1
-#define RDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_SHIFT 14
-/* Forward reference tag with mask */
-#define RDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_MASK  0x1
-#define RDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_SHIFT 15
-	__le16 state;
-#define RDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_MASK    0xF
-#define RDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_SHIFT   0
-#define RDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_MASK  0xF
-#define RDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_SHIFT 4
-#define RDIF_TASK_CONTEXT_ERROR_IN_IO_MASK                0x1
-#define RDIF_TASK_CONTEXT_ERROR_IN_IO_SHIFT               8
-#define RDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_MASK          0x1
-#define RDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_SHIFT         9
-/* mask for refernce tag handling */
-#define RDIF_TASK_CONTEXT_REF_TAG_MASK_MASK               0xF
-#define RDIF_TASK_CONTEXT_REF_TAG_MASK_SHIFT              10
-#define RDIF_TASK_CONTEXT_RESERVED1_MASK                  0x3
-#define RDIF_TASK_CONTEXT_RESERVED1_SHIFT                 14
-	__le32 reserved2;
+struct src_entry_header {
+	__le32 flags;
+/* Address type: Physical = 0, Logical = 1 (use enum src_header_next_ptr_type_enum) */
+#define SRC_ENTRY_HEADER_NEXT_PTR_TYPE_MASK  0x1
+#define SRC_ENTRY_HEADER_NEXT_PTR_TYPE_SHIFT 0
+#define SRC_ENTRY_HEADER_EMPTY_MASK          0x1 /* In T1 (first hop) indicates an empty hash bin */
+#define SRC_ENTRY_HEADER_EMPTY_SHIFT         1
+#define SRC_ENTRY_HEADER_RESERVED_MASK       0x3FFFFFFF
+#define SRC_ENTRY_HEADER_RESERVED_SHIFT      2
+	__le32 magic_number /* Must be SRC_HEADER_MAGIC_NUMBER (0x600DFEED) */;
+	struct regpair next_ptr /* Pointer to next bin */;
 };
 
+
 /*
- * RSS hash type
- */
-enum rss_hash_type {
-	RSS_HASH_TYPE_DEFAULT = 0,
-	RSS_HASH_TYPE_IPV4 = 1,
-	RSS_HASH_TYPE_TCP_IPV4 = 2,
-	RSS_HASH_TYPE_IPV6 = 3,
-	RSS_HASH_TYPE_TCP_IPV6 = 4,
-	RSS_HASH_TYPE_UDP_IPV4 = 5,
-	RSS_HASH_TYPE_UDP_IPV6 = 6,
-	MAX_RSS_HASH_TYPE
+ * Enumeration for address type
+ */
+enum src_header_next_ptr_type_enum {
+	e_physical_addr /* Address of type physical */,
+	e_logical_addr /* Address of type logical */,
+	MAX_SRC_HEADER_NEXT_PTR_TYPE_ENUM
 };
 
+
 /*
  * status block structure
  */
-struct status_block {
-	__le16 pi_array[PIS_PER_SB];
+struct status_block_e4 {
+	__le16 pi_array[PIS_PER_SB_E4];
 	__le32 sb_num;
-#define STATUS_BLOCK_SB_NUM_MASK      0x1FF
-#define STATUS_BLOCK_SB_NUM_SHIFT     0
-#define STATUS_BLOCK_ZERO_PAD_MASK    0x7F
-#define STATUS_BLOCK_ZERO_PAD_SHIFT   9
-#define STATUS_BLOCK_ZERO_PAD2_MASK   0xFFFF
-#define STATUS_BLOCK_ZERO_PAD2_SHIFT  16
+#define STATUS_BLOCK_E4_SB_NUM_MASK      0x1FF
+#define STATUS_BLOCK_E4_SB_NUM_SHIFT     0
+#define STATUS_BLOCK_E4_ZERO_PAD_MASK    0x7F
+#define STATUS_BLOCK_E4_ZERO_PAD_SHIFT   9
+#define STATUS_BLOCK_E4_ZERO_PAD2_MASK   0xFFFF
+#define STATUS_BLOCK_E4_ZERO_PAD2_SHIFT  16
 	__le32 prod_index;
-#define STATUS_BLOCK_PROD_INDEX_MASK  0xFFFFFF
-#define STATUS_BLOCK_PROD_INDEX_SHIFT 0
-#define STATUS_BLOCK_ZERO_PAD3_MASK   0xFF
-#define STATUS_BLOCK_ZERO_PAD3_SHIFT  24
+#define STATUS_BLOCK_E4_PROD_INDEX_MASK  0xFFFFFF
+#define STATUS_BLOCK_E4_PROD_INDEX_SHIFT 0
+#define STATUS_BLOCK_E4_ZERO_PAD3_MASK   0xFF
+#define STATUS_BLOCK_E4_ZERO_PAD3_SHIFT  24
 };
 
 
 /*
- * Tdif context
+ * status block structure
  */
-struct tdif_task_context {
-	__le32 initial_ref_tag;
-	__le16 app_tag_value;
-	__le16 app_tag_mask;
-	__le16 partial_crc_value_b;
-	__le16 partial_checksum_value_b;
-	__le16 stateB;
-#define TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_B_MASK    0xF
-#define TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_B_SHIFT   0
-#define TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_B_MASK  0xF
-#define TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_B_SHIFT 4
-#define TDIF_TASK_CONTEXT_ERROR_IN_IO_B_MASK                0x1
-#define TDIF_TASK_CONTEXT_ERROR_IN_IO_B_SHIFT               8
-#define TDIF_TASK_CONTEXT_CHECKSUM_VERFLOW_MASK             0x1
-#define TDIF_TASK_CONTEXT_CHECKSUM_VERFLOW_SHIFT            9
-#define TDIF_TASK_CONTEXT_RESERVED0_MASK                    0x3F
-#define TDIF_TASK_CONTEXT_RESERVED0_SHIFT                   10
-	u8 reserved1;
-	u8 flags0;
-#define TDIF_TASK_CONTEXT_IGNORE_APP_TAG_MASK               0x1
-#define TDIF_TASK_CONTEXT_IGNORE_APP_TAG_SHIFT              0
-#define TDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_MASK        0x1
-#define TDIF_TASK_CONTEXT_INITIAL_REF_TAG_VALID_SHIFT       1
-/* 0 = IP checksum, 1 = CRC */
-#define TDIF_TASK_CONTEXT_HOST_GUARD_TYPE_MASK              0x1
-#define TDIF_TASK_CONTEXT_HOST_GUARD_TYPE_SHIFT             2
-#define TDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_MASK           0x1
-#define TDIF_TASK_CONTEXT_SET_ERROR_WITH_EOP_SHIFT          3
-/* 1/2/3 - Protection Type */
-#define TDIF_TASK_CONTEXT_PROTECTION_TYPE_MASK              0x3
-#define TDIF_TASK_CONTEXT_PROTECTION_TYPE_SHIFT             4
-/* 0=0x0000, 1=0xffff */
-#define TDIF_TASK_CONTEXT_CRC_SEED_MASK                     0x1
-#define TDIF_TASK_CONTEXT_CRC_SEED_SHIFT                    6
-#define TDIF_TASK_CONTEXT_RESERVED2_MASK                    0x1
-#define TDIF_TASK_CONTEXT_RESERVED2_SHIFT                   7
-	__le32 flags1;
-#define TDIF_TASK_CONTEXT_VALIDATE_GUARD_MASK               0x1
-#define TDIF_TASK_CONTEXT_VALIDATE_GUARD_SHIFT              0
-#define TDIF_TASK_CONTEXT_VALIDATE_APP_TAG_MASK             0x1
-#define TDIF_TASK_CONTEXT_VALIDATE_APP_TAG_SHIFT            1
-#define TDIF_TASK_CONTEXT_VALIDATE_REF_TAG_MASK             0x1
-#define TDIF_TASK_CONTEXT_VALIDATE_REF_TAG_SHIFT            2
-#define TDIF_TASK_CONTEXT_FORWARD_GUARD_MASK                0x1
-#define TDIF_TASK_CONTEXT_FORWARD_GUARD_SHIFT               3
-#define TDIF_TASK_CONTEXT_FORWARD_APP_TAG_MASK              0x1
-#define TDIF_TASK_CONTEXT_FORWARD_APP_TAG_SHIFT             4
-#define TDIF_TASK_CONTEXT_FORWARD_REF_TAG_MASK              0x1
-#define TDIF_TASK_CONTEXT_FORWARD_REF_TAG_SHIFT             5
-/* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
-#define TDIF_TASK_CONTEXT_INTERVAL_SIZE_MASK                0x7
-#define TDIF_TASK_CONTEXT_INTERVAL_SIZE_SHIFT               6
-/* 0=None, 1=DIF, 2=DIX */
-#define TDIF_TASK_CONTEXT_HOST_INTERFACE_MASK               0x3
-#define TDIF_TASK_CONTEXT_HOST_INTERFACE_SHIFT              9
-/* DIF tag right at the beginning of DIF interval */
-#define TDIF_TASK_CONTEXT_DIF_BEFORE_DATA_MASK              0x1
-#define TDIF_TASK_CONTEXT_DIF_BEFORE_DATA_SHIFT             11
-#define TDIF_TASK_CONTEXT_RESERVED3_MASK                    0x1 /* reserved */
-#define TDIF_TASK_CONTEXT_RESERVED3_SHIFT                   12
-/* 0=None, 1=DIF */
-#define TDIF_TASK_CONTEXT_NETWORK_INTERFACE_MASK            0x1
-#define TDIF_TASK_CONTEXT_NETWORK_INTERFACE_SHIFT           13
-#define TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_A_MASK    0xF
-#define TDIF_TASK_CONTEXT_RECEIVED_DIF_BYTES_LEFT_A_SHIFT   14
-#define TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_A_MASK  0xF
-#define TDIF_TASK_CONTEXT_TRANSMITED_DIF_BYTES_LEFT_A_SHIFT 18
-#define TDIF_TASK_CONTEXT_ERROR_IN_IO_A_MASK                0x1
-#define TDIF_TASK_CONTEXT_ERROR_IN_IO_A_SHIFT               22
-#define TDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_A_MASK          0x1
-#define TDIF_TASK_CONTEXT_CHECKSUM_OVERFLOW_A_SHIFT         23
-/* mask for refernce tag handling */
-#define TDIF_TASK_CONTEXT_REF_TAG_MASK_MASK                 0xF
-#define TDIF_TASK_CONTEXT_REF_TAG_MASK_SHIFT                24
-/* Forward application tag with mask */
-#define TDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_MASK    0x1
-#define TDIF_TASK_CONTEXT_FORWARD_APP_TAG_WITH_MASK_SHIFT   28
-/* Forward reference tag with mask */
-#define TDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_MASK    0x1
-#define TDIF_TASK_CONTEXT_FORWARD_REF_TAG_WITH_MASK_SHIFT   29
-/* Keep reference tag constant */
-#define TDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_MASK           0x1
-#define TDIF_TASK_CONTEXT_KEEP_REF_TAG_CONST_SHIFT          30
-#define TDIF_TASK_CONTEXT_RESERVED4_MASK                    0x1
-#define TDIF_TASK_CONTEXT_RESERVED4_SHIFT                   31
-	__le32 offset_in_io_b;
-	__le16 partial_crc_value_a;
-	__le16 partial_checksum_value_a;
-	__le32 offset_in_io_a;
-	u8 partial_dif_data_a[8];
-	u8 partial_dif_data_b[8];
+struct status_block_e5 {
+	__le16 pi_array[PIS_PER_SB_E5];
+	__le16 pi_array_padding[PIS_PER_SB_PADDING_E5];
+	__le32 sb_num;
+#define STATUS_BLOCK_E5_SB_NUM_MASK      0x1FF
+#define STATUS_BLOCK_E5_SB_NUM_SHIFT     0
+#define STATUS_BLOCK_E5_ZERO_PAD_MASK    0x7F
+#define STATUS_BLOCK_E5_ZERO_PAD_SHIFT   9
+#define STATUS_BLOCK_E5_ZERO_PAD2_MASK   0xFFFF
+#define STATUS_BLOCK_E5_ZERO_PAD2_SHIFT  16
+	__le32 prod_index;
+#define STATUS_BLOCK_E5_PROD_INDEX_MASK  0xFFFFFF
+#define STATUS_BLOCK_E5_PROD_INDEX_SHIFT 0
+#define STATUS_BLOCK_E5_ZERO_PAD3_MASK   0xFF
+#define STATUS_BLOCK_E5_ZERO_PAD3_SHIFT  24
 };
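
The E4 and E5 status blocks differ only in the PI-array sizing, so the same accessor pattern serves both; a sketch for E4, assuming GET_FIELD and OSAL_LE32_TO_CPU (helper name hypothetical):

static u32 qede_sb_prod_index(const struct status_block_e4 *sb)
{
	/* drop the ZERO_PAD3 byte, keep the 24-bit producer index */
	return GET_FIELD(OSAL_LE32_TO_CPU(sb->prod_index),
			 STATUS_BLOCK_E4_PROD_INDEX);
}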
 
 
@@ -1637,11 +2078,9 @@ struct timers_context {
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT    0
 #define TIMERS_CONTEXT_RESERVED0_MASK             0x1
 #define TIMERS_CONTEXT_RESERVED0_SHIFT            27
-/* Valid bit of logical client 0 */
-#define TIMERS_CONTEXT_VALIDLC0_MASK              0x1
+#define TIMERS_CONTEXT_VALIDLC0_MASK              0x1 /* Valid bit of logical client 0 */
 #define TIMERS_CONTEXT_VALIDLC0_SHIFT             28
-/* Active bit of logical client 0 */
-#define TIMERS_CONTEXT_ACTIVELC0_MASK             0x1
+#define TIMERS_CONTEXT_ACTIVELC0_MASK             0x1 /* Active bit of logical client 0 */
 #define TIMERS_CONTEXT_ACTIVELC0_SHIFT            29
 #define TIMERS_CONTEXT_RESERVED1_MASK             0x3
 #define TIMERS_CONTEXT_RESERVED1_SHIFT            30
@@ -1651,11 +2090,9 @@ struct timers_context {
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT    0
 #define TIMERS_CONTEXT_RESERVED2_MASK             0x1
 #define TIMERS_CONTEXT_RESERVED2_SHIFT            27
-/* Valid bit of logical client 1 */
-#define TIMERS_CONTEXT_VALIDLC1_MASK              0x1
+#define TIMERS_CONTEXT_VALIDLC1_MASK              0x1 /* Valid bit of logical client 1 */
 #define TIMERS_CONTEXT_VALIDLC1_SHIFT             28
-/* Active bit of logical client 1 */
-#define TIMERS_CONTEXT_ACTIVELC1_MASK             0x1
+#define TIMERS_CONTEXT_ACTIVELC1_MASK             0x1 /* Active bit of logical client 1 */
 #define TIMERS_CONTEXT_ACTIVELC1_SHIFT            29
 #define TIMERS_CONTEXT_RESERVED3_MASK             0x3
 #define TIMERS_CONTEXT_RESERVED3_SHIFT            30
@@ -1665,11 +2102,9 @@ struct timers_context {
 #define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT    0
 #define TIMERS_CONTEXT_RESERVED4_MASK             0x1
 #define TIMERS_CONTEXT_RESERVED4_SHIFT            27
-/* Valid bit of logical client 2 */
-#define TIMERS_CONTEXT_VALIDLC2_MASK              0x1
+#define TIMERS_CONTEXT_VALIDLC2_MASK              0x1 /* Valid bit of logical client 2 */
 #define TIMERS_CONTEXT_VALIDLC2_SHIFT             28
-/* Active bit of logical client 2 */
-#define TIMERS_CONTEXT_ACTIVELC2_MASK             0x1
+#define TIMERS_CONTEXT_ACTIVELC2_MASK             0x1 /* Active bit of logical client 2 */
 #define TIMERS_CONTEXT_ACTIVELC2_SHIFT            29
 #define TIMERS_CONTEXT_RESERVED5_MASK             0x3
 #define TIMERS_CONTEXT_RESERVED5_SHIFT            30
@@ -1679,8 +2114,7 @@ struct timers_context {
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0
 #define TIMERS_CONTEXT_RESERVED6_MASK             0x1
 #define TIMERS_CONTEXT_RESERVED6_SHIFT            27
-/* Valid bit of host expiration */
-#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK  0x1
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK  0x1 /* Valid bit of host expiration */
 #define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28
 #define TIMERS_CONTEXT_RESERVED7_MASK             0x7
 #define TIMERS_CONTEXT_RESERVED7_SHIFT            29
@@ -1688,7 +2122,7 @@ struct timers_context {
 
 
 /*
- * Enum for next_protocol field of tunnel_parsing_flags
+ * Enum for next_protocol field of tunnel_parsing_flags / tunnelTypeDesc
  */
 enum tunnel_next_protocol {
 	e_unknown = 0,
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 6c8e6d407..3fb8fd2ce 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -516,8 +516,10 @@ struct ecore_fw_data {
 	const u8 *modes_tree_buf;
 	union init_op *init_ops;
 	const u32 *arr_data;
-	const u32 *fw_overlays;
-	u32 fw_overlays_len;
+	const u32 *fw_overlays_e4;
+	u32 fw_overlays_e4_len;
+	const u32 *fw_overlays_e5;
+	u32 fw_overlays_e5_len;
 	u32 init_ops_size;
 };
 
@@ -730,6 +732,7 @@ enum ecore_mf_mode {
 enum ecore_dev_type {
 	ECORE_DEV_TYPE_BB,
 	ECORE_DEV_TYPE_AH,
+	ECORE_DEV_TYPE_E5,
 };
 
 /* @DPDK */
@@ -775,12 +778,15 @@ struct ecore_dev {
 #endif
 #define ECORE_IS_AH(dev)	((dev)->type == ECORE_DEV_TYPE_AH)
 #define ECORE_IS_K2(dev)	ECORE_IS_AH(dev)
+#define ECORE_IS_E4(dev)	(ECORE_IS_BB(dev) || ECORE_IS_AH(dev))
+#define ECORE_IS_E5(dev)	((dev)->type == ECORE_DEV_TYPE_E5)
 
 	u16 vendor_id;
 	u16 device_id;
 #define ECORE_DEV_ID_MASK	0xff00
 #define ECORE_DEV_ID_MASK_BB	0x1600
 #define ECORE_DEV_ID_MASK_AH	0x8000
+#define ECORE_DEV_ID_MASK_E5	0x8100
 
 	u16				chip_num;
 #define CHIP_NUM_MASK			0xffff
@@ -1105,8 +1111,18 @@ void ecore_set_platform_str(struct ecore_hwfn *p_hwfn,
 			    char *buf_str, u32 buf_size);
 
 #define TSTORM_QZONE_START	PXP_VF_BAR0_START_SDM_ZONE_A
+#define TSTORM_QZONE_SIZE(dev) \
+	(ECORE_IS_E4(dev) ? TSTORM_QZONE_SIZE_E4 : TSTORM_QZONE_SIZE_E5)
 
 #define MSTORM_QZONE_START(dev) \
-	(TSTORM_QZONE_START + (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
+	(TSTORM_QZONE_START + (TSTORM_QZONE_SIZE(dev) * NUM_OF_L2_QUEUES(dev)))
+#define MSTORM_QZONE_SIZE(dev) \
+	(ECORE_IS_E4(dev) ? MSTORM_QZONE_SIZE_E4 : MSTORM_QZONE_SIZE_E5)
+
+#define USTORM_QZONE_START(dev) \
+	(MSTORM_QZONE_START(dev) + \
+	 (MSTORM_QZONE_SIZE(dev) *  NUM_OF_L2_QUEUES(dev)))
+#define USTORM_QZONE_SIZE(dev) \
+	(ECORE_IS_E4(dev) ? USTORM_QZONE_SIZE_E4 : USTORM_QZONE_SIZE_E5)
 
 #endif /* __ECORE_H */
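The queue-zone macros above lay the three storm zones out back to back: TSTORM starts at a fixed
base, MSTORM begins after all TSTORM zones, and USTORM after all MSTORM zones, with the per-zone
size now selected by chip generation. A minimal sketch of that arithmetic, assuming hypothetical
zone sizes and queue count (the real values come from the generated HSI headers):

#include <stdio.h>
#include <stdbool.h>

#define TSTORM_QZONE_START	0x0u
#define NUM_OF_L2_QUEUES	128u
/* Hypothetical per-chip zone sizes, for illustration only. */
#define TSTORM_QZONE_SIZE(e4)	((e4) ? 8u : 16u)
#define MSTORM_QZONE_SIZE(e4)	((e4) ? 16u : 32u)

#define MSTORM_QZONE_START(e4) \
	(TSTORM_QZONE_START + TSTORM_QZONE_SIZE(e4) * NUM_OF_L2_QUEUES)
#define USTORM_QZONE_START(e4) \
	(MSTORM_QZONE_START(e4) + MSTORM_QZONE_SIZE(e4) * NUM_OF_L2_QUEUES)

int main(void)
{
	bool e4 = true;	/* E4 (BB/AH) vs. E5 layout */

	printf("MSTORM zone start: %#x\n", MSTORM_QZONE_START(e4));
	printf("USTORM zone start: %#x\n", USTORM_QZONE_START(e4));
	return 0;
}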
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index d3025724b..221cec22b 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -51,18 +51,33 @@
 #define ILT_REG_SIZE_IN_BYTES		4
 
 /* connection context union */
-union conn_context {
-	struct core_conn_context core_ctx;
-	struct eth_conn_context eth_ctx;
+#define CONTEXT_PADDING			2048
+union e4_conn_context {
+	struct e4_core_conn_context core_ctx;
+	struct e4_eth_conn_context eth_ctx;
+	u8 padding[CONTEXT_PADDING];
+};
+
+union e5_conn_context {
+	struct e5_core_conn_context core_ctx;
+	struct e5_eth_conn_context eth_ctx;
 };
 
 /* TYPE-0 task context - iSCSI, FCOE */
-union type0_task_context {
+
+union e4_type0_task_context {
+};
+
+union e5_type0_task_context {
+};
+
+/* TYPE-1 task context - RDMA */
+
+union e4_type1_task_context {
 };
 
-/* TYPE-1 task context - ROCE */
-union type1_task_context {
-	struct regpair reserved; /* @DPDK */
+union e5_type1_task_context {
+	struct e5_eth_task_context eth_ctx;
 };
 
 struct src_ent {
@@ -74,15 +89,33 @@ struct src_ent {
 #define CDUT_SEG_ALIGNMET_IN_BYTES (1 << (CDUT_SEG_ALIGNMET + 12))
 
 #define CONN_CXT_SIZE(p_hwfn) \
-	ALIGNED_TYPE_SIZE(union conn_context, p_hwfn)
+	(ECORE_IS_E4(((p_hwfn)->p_dev)) ? \
+	 ALIGNED_TYPE_SIZE(union e4_conn_context, (p_hwfn)) : \
+	 ALIGNED_TYPE_SIZE(union e5_conn_context, (p_hwfn)))
 
 #define SRQ_CXT_SIZE (sizeof(struct regpair) * 8) /* @DPDK */
 
 #define TYPE0_TASK_CXT_SIZE(p_hwfn) \
-	ALIGNED_TYPE_SIZE(union type0_task_context, p_hwfn)
+	(ECORE_IS_E4(((p_hwfn)->p_dev)) ? \
+	 ALIGNED_TYPE_SIZE(union e4_type0_task_context, (p_hwfn)) : \
+	 ALIGNED_TYPE_SIZE(union e5_type0_task_context, (p_hwfn)))
 
 /* Alignment is inherent to the type1_task_context structure */
-#define TYPE1_TASK_CXT_SIZE(p_hwfn) sizeof(union type1_task_context)
+#define TYPE1_TASK_CXT_SIZE(p_hwfn) \
+	(ECORE_IS_E4(((p_hwfn)->p_dev)) ? \
+	 sizeof(union e4_type1_task_context) : \
+	 sizeof(union e5_type1_task_context))
+
+/* check if resources/configuration is required according to protocol type */
+static bool src_proto(enum protocol_type type)
+{
+	return	type == PROTOCOLID_ISCSI	||
+		type == PROTOCOLID_FCOE		||
+#ifdef CONFIG_ECORE_ROCE_PVRDMA
+		type == PROTOCOLID_ROCE		||
+#endif
+		type == PROTOCOLID_IWARP;
+}
 
 static OSAL_INLINE bool tm_cid_proto(enum protocol_type type)
 {
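The *_CXT_SIZE macros above pick between the E4 and E5 context unions at runtime, so a single
allocation path serves both generations; note the E4 union pads itself out to the fixed 2048-byte
context slot. A minimal sketch of that sizeof dispatch, with hypothetical union layouts standing
in for the generated HSI types:

#include <stdio.h>

/* Hypothetical context layouts; the real unions come from the HSI headers,
 * and the E4 one carries explicit padding to the 2048-byte slot.
 */
union e4_ctx { unsigned char padding[2048]; };
union e5_ctx { unsigned char core_ctx[1024]; };

#define CONN_CXT_SIZE(is_e4) \
	((is_e4) ? sizeof(union e4_ctx) : sizeof(union e5_ctx))

int main(void)
{
	printf("E4 connection context: %zu bytes\n", CONN_CXT_SIZE(1));
	printf("E5 connection context: %zu bytes\n", CONN_CXT_SIZE(0));
	return 0;
}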
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 1a539bbc7..cd0e32e3b 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -22,18 +22,6 @@ enum ecore_cxt_elem_type {
 	ECORE_ELEM_TASK
 };
 
-enum ilt_clients {
-	ILT_CLI_CDUC,
-	ILT_CLI_CDUT,
-	ILT_CLI_QM,
-	ILT_CLI_TM,
-	ILT_CLI_SRC,
-	ILT_CLI_TSDM,
-	ILT_CLI_RGFS,
-	ILT_CLI_TGFS,
-	MAX_ILT_CLIENTS
-};
-
 u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type,
 				  u32 *vf_cid);
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index c4d757d66..5db02d0c4 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3820,14 +3820,19 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			goto load_err;
 
 		/* Clear the pglue_b was_error indication.
-		 * It must be done after the BME and the internal FID_enable for
-		 * the PF are set, since VDMs may cause the indication to be set
-		 * again.
+		 * In E4 it must be done after the BME and the internal
+		 * FID_enable for the PF are set, since VDMs may cause the
+		 * indication to be set again.
 		 */
 		ecore_pglueb_clear_err(p_hwfn, p_hwfn->p_main_ptt);
 
-		fw_overlays = p_dev->fw_data->fw_overlays;
-		fw_overlays_len = p_dev->fw_data->fw_overlays_len;
+		if (ECORE_IS_E4(p_hwfn->p_dev)) {
+			fw_overlays = p_dev->fw_data->fw_overlays_e4;
+			fw_overlays_len = p_dev->fw_data->fw_overlays_e4_len;
+		} else {
+			fw_overlays = p_dev->fw_data->fw_overlays_e5;
+			fw_overlays_len = p_dev->fw_data->fw_overlays_e5_len;
+		}
 		p_hwfn->fw_overlay_mem =
 			ecore_fw_overlay_mem_alloc(p_hwfn, fw_overlays,
 						   fw_overlays_len);
@@ -4437,23 +4442,27 @@ __ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-#define RDMA_NUM_STATISTIC_COUNTERS_K2                  MAX_NUM_VPORTS_K2
-#define RDMA_NUM_STATISTIC_COUNTERS_BB                  MAX_NUM_VPORTS_BB
+/* @DPDK */
+#define RDMA_NUM_STATISTIC_COUNTERS_K2			MAX_NUM_VPORTS_K2
+#define RDMA_NUM_STATISTIC_COUNTERS_BB			MAX_NUM_VPORTS_BB
+#define RDMA_NUM_STATISTIC_COUNTERS_E5			MAX_NUM_VPORTS_E5
 
 static u32 ecore_hsi_def_val[][MAX_CHIP_IDS] = {
-	{MAX_NUM_VFS_BB, MAX_NUM_VFS_K2},
-	{MAX_NUM_L2_QUEUES_BB, MAX_NUM_L2_QUEUES_K2},
-	{MAX_NUM_PORTS_BB, MAX_NUM_PORTS_K2},
-	{MAX_SB_PER_PATH_BB, MAX_SB_PER_PATH_K2, },
-	{MAX_NUM_PFS_BB, MAX_NUM_PFS_K2},
-	{MAX_NUM_VPORTS_BB, MAX_NUM_VPORTS_K2},
-	{ETH_RSS_ENGINE_NUM_BB, ETH_RSS_ENGINE_NUM_K2},
-	{MAX_QM_TX_QUEUES_BB, MAX_QM_TX_QUEUES_K2},
-	{PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2},
-	{RDMA_NUM_STATISTIC_COUNTERS_BB, RDMA_NUM_STATISTIC_COUNTERS_K2},
-	{MAX_QM_GLOBAL_RLS, MAX_QM_GLOBAL_RLS},
-	{PBF_MAX_CMD_LINES, PBF_MAX_CMD_LINES},
-	{BTB_MAX_BLOCKS_BB, BTB_MAX_BLOCKS_K2},
+	{MAX_NUM_VFS_BB, MAX_NUM_VFS_K2, MAX_NUM_VFS_E5},
+	{MAX_NUM_L2_QUEUES_BB, MAX_NUM_L2_QUEUES_K2, MAX_NUM_L2_QUEUES_E5},
+	{MAX_NUM_PORTS_BB, MAX_NUM_PORTS_K2, MAX_NUM_PORTS_E5},
+	{MAX_SB_PER_PATH_BB, MAX_SB_PER_PATH_K2, MAX_SB_PER_PATH_E5},
+	{MAX_NUM_PFS_BB, MAX_NUM_PFS_K2, MAX_NUM_PFS_E5},
+	{MAX_NUM_VPORTS_BB, MAX_NUM_VPORTS_K2, MAX_NUM_VPORTS_E5},
+	{ETH_RSS_ENGINE_NUM_BB, ETH_RSS_ENGINE_NUM_K2, ETH_RSS_ENGINE_NUM_E5},
+	{MAX_QM_TX_QUEUES_BB, MAX_QM_TX_QUEUES_K2, MAX_QM_TX_QUEUES_E5},
+	{PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2,
+	 PXP_NUM_ILT_RECORDS_E5},
+	{RDMA_NUM_STATISTIC_COUNTERS_BB, RDMA_NUM_STATISTIC_COUNTERS_K2,
+	 RDMA_NUM_STATISTIC_COUNTERS_E5},
+	{MAX_QM_GLOBAL_RLS_E4, MAX_QM_GLOBAL_RLS_E4, MAX_QM_GLOBAL_RLS_E5},
+	{PBF_MAX_CMD_LINES_E4, PBF_MAX_CMD_LINES_E4, PBF_MAX_CMD_LINES_E5},
+	{BTB_MAX_BLOCKS_BB, BTB_MAX_BLOCKS_K2, BTB_MAX_BLOCKS_E5},
 };
 
 u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev, enum ecore_hsi_def_type type)
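ecore_hsi_def_val is a two-dimensional lookup: the outer index is the resource type and the inner
index is the chip generation, which is why every row above grows a third column for E5. A minimal
sketch of that lookup, using made-up limits in place of the HSI constants:

#include <stdio.h>

enum chip_ids { CHIP_BB, CHIP_K2, CHIP_E5, MAX_CHIP_IDS };
enum hsi_def_type { DEF_MAX_VFS, DEF_MAX_L2_QUEUES, MAX_DEF_TYPES };

/* Made-up per-chip limits, mirroring the [type][chip] shape of
 * ecore_hsi_def_val; the real values come from the HSI headers.
 */
static const unsigned int hsi_def_val[MAX_DEF_TYPES][MAX_CHIP_IDS] = {
	{ 120, 192, 240 },	/* max VFs */
	{ 128, 256, 512 },	/* max L2 queues */
};

static unsigned int get_hsi_def_val(enum hsi_def_type type, enum chip_ids chip)
{
	return hsi_def_val[type][chip];
}

int main(void)
{
	printf("E5 max VFs: %u\n", get_hsi_def_val(DEF_MAX_VFS, CHIP_E5));
	return 0;
}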
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 578c798a9..638178c30 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -1,17 +1,16 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_HSI_COMMON__
 #define __ECORE_HSI_COMMON__
 /********************************/
 /* Add include to common target */
 /********************************/
 #include "common_hsi.h"
-#include "mcp_public.h"
-
+#include "mcp_public.h" /* @DPDK */
 
 /*
  * opcodes for the event ring
@@ -47,522 +46,6 @@ enum common_ramrod_cmd_id {
 };
 
 
-/*
- * The core storm context for the Ystorm
- */
-struct ystorm_core_conn_st_ctx {
-	__le32 reserved[4];
-};
-
-/*
- * The core storm context for the Pstorm
- */
-struct pstorm_core_conn_st_ctx {
-	__le32 reserved[20];
-};
-
-/*
- * Core Slowpath Connection storm context of Xstorm
- */
-struct xstorm_core_conn_st_ctx {
-	__le32 spq_base_lo /* SPQ Ring Base Address low dword */;
-	__le32 spq_base_hi /* SPQ Ring Base Address high dword */;
-/* Consolidation Ring Base Address */
-	struct regpair consolid_base_addr;
-	__le16 spq_cons /* SPQ Ring Consumer */;
-	__le16 consolid_cons /* Consolidation Ring Consumer */;
-	__le32 reserved0[55] /* Pad to 15 cycles */;
-};
-
-struct xstorm_core_conn_ag_ctx {
-	u8 reserved0 /* cdu_validation */;
-	u8 state /* state */;
-	u8 flags0;
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1 /* exist_in_qm0 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1 /* exist_in_qm1 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1 /* exist_in_qm2 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1 /* exist_in_qm3 */
-#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1 /* bit4 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
-/* cf_array_active */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1 /* bit6 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1 /* bit7 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
-	u8 flags1;
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1 /* bit8 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1 /* bit9 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1 /* bit10 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1 /* bit11 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1 /* bit12 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1 /* bit13 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1 /* bit14 */
-#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1 /* bit15 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
-	u8 flags2;
-#define XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3 /* timer0cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3 /* timer1cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
-#define XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3 /* timer2cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
-/* timer_stop_all */
-#define XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
-#define XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
-	u8 flags3;
-#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
-#define XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
-#define XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
-#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
-#define XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
-#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
-#define XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
-	u8 flags4;
-#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
-#define XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
-#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
-#define XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
-#define XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3 /* cf10 */
-#define XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
-#define XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3 /* cf11 */
-#define XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
-	u8 flags5;
-#define XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3 /* cf12 */
-#define XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
-#define XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3 /* cf13 */
-#define XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
-#define XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3 /* cf14 */
-#define XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
-#define XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3 /* cf15 */
-#define XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
-	u8 flags6;
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3 /* cf16 */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
-#define XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3 /* cf_array_cf */
-#define XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3 /* cf18 */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3 /* cf19 */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
-	u8 flags7;
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3 /* cf20 */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3 /* cf21 */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3 /* cf22 */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1 /* cf0en */
-#define XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1 /* cf1en */
-#define XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
-	u8 flags8;
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1 /* cf2en */
-#define XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1 /* cf3en */
-#define XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1 /* cf4en */
-#define XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1 /* cf5en */
-#define XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1 /* cf6en */
-#define XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1 /* cf7en */
-#define XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1 /* cf8en */
-#define XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1 /* cf9en */
-#define XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
-	u8 flags9;
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1 /* cf10en */
-#define XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1 /* cf11en */
-#define XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1 /* cf12en */
-#define XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1 /* cf13en */
-#define XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1 /* cf14en */
-#define XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1 /* cf15en */
-#define XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1 /* cf16en */
-#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
-/* cf_array_cf_en */
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
-#define XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
-	u8 flags10;
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1 /* cf18en */
-#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1 /* cf19en */
-#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1 /* cf20en */
-#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1 /* cf21en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1 /* cf22en */
-#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1 /* cf23en */
-#define XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1 /* rule0en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1 /* rule1en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
-	u8 flags11;
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1 /* rule2en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1 /* rule3en */
-#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1 /* rule4en */
-#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1 /* rule5en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1 /* rule6en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1 /* rule7en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1 /* rule8en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1 /* rule9en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
-	u8 flags12;
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1 /* rule10en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1 /* rule11en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1 /* rule12en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1 /* rule13en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1 /* rule14en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1 /* rule15en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1 /* rule16en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1 /* rule17en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
-	u8 flags13;
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1 /* rule18en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1 /* rule19en */
-#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1 /* rule20en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1 /* rule21en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1 /* rule22en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1 /* rule23en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1 /* rule24en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1 /* rule25en */
-#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
-	u8 flags14;
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1 /* bit16 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1 /* bit17 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1 /* bit18 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1 /* bit19 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1 /* bit20 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1 /* bit21 */
-#define XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
-#define XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3 /* cf23 */
-#define XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
-	u8 byte2 /* byte2 */;
-	__le16 physical_q0 /* physical_q0 */;
-	__le16 consolid_prod /* physical_q1 */;
-	__le16 reserved16 /* physical_q2 */;
-	__le16 tx_bd_cons /* word3 */;
-	__le16 tx_bd_or_spq_prod /* word4 */;
-	__le16 updated_qm_pq_id /* word5 */;
-	__le16 conn_dpi /* conn_dpi */;
-	u8 byte3 /* byte3 */;
-	u8 byte4 /* byte4 */;
-	u8 byte5 /* byte5 */;
-	u8 byte6 /* byte6 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-	__le32 reg4 /* reg4 */;
-	__le32 reg5 /* cf_array0 */;
-	__le32 reg6 /* cf_array1 */;
-	__le16 word7 /* word7 */;
-	__le16 word8 /* word8 */;
-	__le16 word9 /* word9 */;
-	__le16 word10 /* word10 */;
-	__le32 reg7 /* reg7 */;
-	__le32 reg8 /* reg8 */;
-	__le32 reg9 /* reg9 */;
-	u8 byte7 /* byte7 */;
-	u8 byte8 /* byte8 */;
-	u8 byte9 /* byte9 */;
-	u8 byte10 /* byte10 */;
-	u8 byte11 /* byte11 */;
-	u8 byte12 /* byte12 */;
-	u8 byte13 /* byte13 */;
-	u8 byte14 /* byte14 */;
-	u8 byte15 /* byte15 */;
-	u8 e5_reserved /* e5_reserved */;
-	__le16 word11 /* word11 */;
-	__le32 reg10 /* reg10 */;
-	__le32 reg11 /* reg11 */;
-	__le32 reg12 /* reg12 */;
-	__le32 reg13 /* reg13 */;
-	__le32 reg14 /* reg14 */;
-	__le32 reg15 /* reg15 */;
-	__le32 reg16 /* reg16 */;
-	__le32 reg17 /* reg17 */;
-	__le32 reg18 /* reg18 */;
-	__le32 reg19 /* reg19 */;
-	__le16 word12 /* word12 */;
-	__le16 word13 /* word13 */;
-	__le16 word14 /* word14 */;
-	__le16 word15 /* word15 */;
-};
-
-struct tstorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
-#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
-	u8 flags1;
-#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
-	u8 flags2;
-#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
-#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
-#define TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
-#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
-#define TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
-	u8 flags3;
-#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
-#define TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
-#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
-#define TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
-	u8 flags4;
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
-	u8 flags5;
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-	__le32 reg4 /* reg4 */;
-	__le32 reg5 /* reg5 */;
-	__le32 reg6 /* reg6 */;
-	__le32 reg7 /* reg7 */;
-	__le32 reg8 /* reg8 */;
-	u8 byte2 /* byte2 */;
-	u8 byte3 /* byte3 */;
-	__le16 word0 /* word0 */;
-	u8 byte4 /* byte4 */;
-	u8 byte5 /* byte5 */;
-	__le16 word1 /* word1 */;
-	__le16 word2 /* conn_dpi */;
-	__le16 word3 /* word3 */;
-	__le32 reg9 /* reg9 */;
-	__le32 reg10 /* reg10 */;
-};
-
-struct ustorm_core_conn_ag_ctx {
-	u8 reserved /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
-#define USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
-#define USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
-#define USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
-	u8 flags2;
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
-	u8 flags3;
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
-	u8 byte2 /* byte2 */;
-	u8 byte3 /* byte3 */;
-	__le16 word0 /* conn_dpi */;
-	__le16 word1 /* word1 */;
-	__le32 rx_producers /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-	__le16 word2 /* word2 */;
-	__le16 word3 /* word3 */;
-};
-
-/*
- * The core storm context for the Mstorm
- */
-struct mstorm_core_conn_st_ctx {
-	__le32 reserved[40];
-};
-
-/*
- * The core storm context for the Ustorm
- */
-struct ustorm_core_conn_st_ctx {
-	__le32 reserved[20];
-};
-
-/*
- * The core storm context for the Tstorm
- */
-struct tstorm_core_conn_st_ctx {
-	__le32 reserved[4];
-};
-
-/*
- * core connection context
- */
-struct core_conn_context {
-/* ystorm storm context */
-	struct ystorm_core_conn_st_ctx ystorm_st_context;
-	struct regpair ystorm_st_padding[2] /* padding */;
-/* pstorm storm context */
-	struct pstorm_core_conn_st_ctx pstorm_st_context;
-	struct regpair pstorm_st_padding[2] /* padding */;
-/* xstorm storm context */
-	struct xstorm_core_conn_st_ctx xstorm_st_context;
-/* xstorm aggregative context */
-	struct xstorm_core_conn_ag_ctx xstorm_ag_context;
-/* tstorm aggregative context */
-	struct tstorm_core_conn_ag_ctx tstorm_ag_context;
-/* ustorm aggregative context */
-	struct ustorm_core_conn_ag_ctx ustorm_ag_context;
-/* mstorm storm context */
-	struct mstorm_core_conn_st_ctx mstorm_st_context;
-/* ustorm storm context */
-	struct ustorm_core_conn_st_ctx ustorm_st_context;
-	struct regpair ustorm_st_padding[2] /* padding */;
-/* tstorm storm context */
-	struct tstorm_core_conn_st_ctx tstorm_st_context;
-	struct regpair tstorm_st_padding[2] /* padding */;
-};
-
-
 /*
  * How ll2 should deal with packet upon errors
  */
@@ -616,20 +99,22 @@ struct core_ll2_port_stats {
  * LL2 TX Per Queue Stats
  */
 struct core_ll2_pstorm_per_queue_stat {
-/* number of total bytes sent without errors */
-	struct regpair sent_ucast_bytes;
-/* number of total bytes sent without errors */
-	struct regpair sent_mcast_bytes;
-/* number of total bytes sent without errors */
-	struct regpair sent_bcast_bytes;
-/* number of total packets sent without errors */
-	struct regpair sent_ucast_pkts;
-/* number of total packets sent without errors */
-	struct regpair sent_mcast_pkts;
-/* number of total packets sent without errors */
-	struct regpair sent_bcast_pkts;
-/* number of total packets dropped due to errors */
-	struct regpair error_drop_pkts;
+	struct regpair sent_ucast_bytes /* number of total bytes sent without errors */;
+	struct regpair sent_mcast_bytes /* number of total bytes sent without errors */;
+	struct regpair sent_bcast_bytes /* number of total bytes sent without errors */;
+	struct regpair sent_ucast_pkts /* number of total packets sent without errors */;
+	struct regpair sent_mcast_pkts /* number of total packets sent without errors */;
+	struct regpair sent_bcast_pkts /* number of total packets sent without errors */;
+	struct regpair error_drop_pkts /* number of total packets dropped due to errors */;
+};
+
+
+/*
+ * Light-L2 context RX Producers. Temporary struct; core_ll2_rx_prod will replace it.
+ */
+struct core_ll2_rx_ctx_prod {
+	__le16 bd_prod /* BD Producer */;
+	__le16 cqe_prod /* CQE Producer */;
 };
 
 
@@ -649,6 +134,11 @@ struct core_ll2_ustorm_per_queue_stat {
 	struct regpair rcv_bcast_pkts;
 };
 
+struct core_ll2_rx_per_queue_stat {
+	struct core_ll2_tstorm_per_queue_stat tstorm_stat /* TSTORM per queue statistics */;
+	struct core_ll2_ustorm_per_queue_stat ustorm_stat /* USTORM per queue statistics */;
+};
+
 
 /*
  * Light-L2 RX Producers
@@ -656,13 +146,13 @@ struct core_ll2_ustorm_per_queue_stat {
 struct core_ll2_rx_prod {
 	__le16 bd_prod /* BD Producer */;
 	__le16 cqe_prod /* CQE Producer */;
+	__le32 reserved;
 };
 
 
 
 struct core_ll2_tx_per_queue_stat {
-/* PSTORM per queue statistics */
-	struct core_ll2_pstorm_per_queue_stat pstorm_stat;
+	struct core_ll2_pstorm_per_queue_stat pstorm_stat /* PSTORM per queue statistics */;
 };
 
 
@@ -674,14 +164,12 @@ struct core_pwm_prod_update_data {
 	__le16 icid /* internal CID */;
 	u8 reserved0;
 	u8 params;
-/* aggregative command. Set DB_AGG_CMD_SET for producer update
- * (use enum db_agg_cmd_sel)
- */
+/* aggregative command. Set DB_AGG_CMD_SET for producer update (use enum db_agg_cmd_sel) */
 #define CORE_PWM_PROD_UPDATE_DATA_AGG_CMD_MASK    0x3
 #define CORE_PWM_PROD_UPDATE_DATA_AGG_CMD_SHIFT   0
 #define CORE_PWM_PROD_UPDATE_DATA_RESERVED1_MASK  0x3F /* Set 0. */
 #define CORE_PWM_PROD_UPDATE_DATA_RESERVED1_SHIFT 2
-	struct core_ll2_rx_prod prod /* Producers. */;
+	struct core_ll2_rx_ctx_prod prod /* Producers. */;
 };
 
 
@@ -692,12 +180,12 @@ struct core_queue_stats_query_ramrod_data {
 	u8 rx_stat /* If set, collect RX queue statistics. */;
 	u8 tx_stat /* If set, collect TX queue statistics. */;
 	__le16 reserved[3];
-/* Address of RX statistic buffer. core_ll2_rx_per_queue_stat struct will be
- * write to this address.
+/* Address of RX statistic buffer. The core_ll2_rx_per_queue_stat struct will be written to this
+ * address.
  */
 	struct regpair rx_stat_addr;
-/* Address of TX statistic buffer. core_ll2_tx_per_queue_stat struct will be
- * write to this address.
+/* Address of TX statistic buffer. The core_ll2_tx_per_queue_stat struct will be written to this
+ * address.
  */
 	struct regpair tx_stat_addr;
 };
@@ -768,8 +256,7 @@ struct core_rx_bd_with_buff_len {
  */
 union core_rx_bd_union {
 	struct core_rx_bd rx_bd /* Core Rx Bd static buffer size */;
-/* Core Rx Bd with dynamic buffer length */
-	struct core_rx_bd_with_buff_len rx_bd_with_len;
+	struct core_rx_bd_with_buff_len rx_bd_with_len /* Core Rx Bd with dynamic buffer length */;
 };
 
 
@@ -798,19 +285,18 @@ enum core_rx_cqe_type {
  * Core RX CQE for Light L2 .
  */
 struct core_rx_fast_path_cqe {
-	u8 type /* CQE type */;
-/* Offset (in bytes) of the packet from start of the buffer */
-	u8 placement_offset;
-/* Parsing and error flags from the parser */
-	struct parsing_and_err_flags parse_flags;
+	u8 type /* CQE type (use enum core_rx_cqe_type) */;
+	u8 placement_offset /* Offset (in bytes) of the packet from start of the buffer */;
+	struct parsing_and_err_flags parse_flags /* Parsing and error flags from the parser */;
 	__le16 packet_length /* Total packet length (from the parser) */;
 	__le16 vlan /* 802.1q VLAN tag */;
 	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
-/* bit- map: each bit represents a specific error. errors indications are
- * provided by the cracker. see spec for detailed description
+/* bit-map: each bit represents a specific error. Error indications are provided by the FC. See
+ * the spec for a detailed description.
  */
 	struct parsing_err_flags err_flags;
-	__le16 reserved0;
+	u8 packet_source /* RX packet source. (use enum core_rx_pkt_source) */;
+	u8 reserved0;
 	__le32 reserved1[3];
 };
 
@@ -818,26 +304,25 @@ struct core_rx_fast_path_cqe {
  * Core Rx CM offload CQE .
  */
 struct core_rx_gsi_offload_cqe {
-	u8 type /* CQE type */;
-	u8 data_length_error /* set if gsi data is bigger than buff */;
-/* Parsing and error flags from the parser */
-	struct parsing_and_err_flags parse_flags;
+	u8 type /* CQE type (use enum core_rx_cqe_type) */;
+	u8 data_length_error /* set if GSI data is bigger than the buffer */;
+	struct parsing_and_err_flags parse_flags /* Parsing and error flags from the parser */;
 	__le16 data_length /* Total packet length (from the parser) */;
 	__le16 vlan /* 802.1q VLAN tag */;
 	__le32 src_mac_addrhi /* hi 4 bytes source mac address */;
 	__le16 src_mac_addrlo /* lo 2 bytes of source mac address */;
-/* These are the lower 16 bit of QP id in RoCE BTH header */
-	__le16 qp_id;
+	__le16 qp_id /* These are the lower 16 bits of the QP id in the RoCE BTH header */;
 	__le32 src_qp /* Source QP from DETH header */;
 	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
-	__le32 reserved;
+	u8 packet_source /* RX packet source. (use enum core_rx_pkt_source) */;
+	u8 reserved[3];
 };
 
 /*
  * Core RX CQE for Light L2 .
  */
 struct core_rx_slow_path_cqe {
-	u8 type /* CQE type */;
+	u8 type /* CQE type (use enum core_rx_cqe_type) */;
 	u8 ramrod_cmd_id;
 	__le16 echo;
 	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
@@ -856,6 +341,17 @@ union core_rx_cqe_union {
 
 
 
+/*
+ * RX packet source.
+ */
+enum core_rx_pkt_source {
+	CORE_RX_PKT_SOURCE_NETWORK = 0 /* Regular RX packet from network port */,
+	CORE_RX_PKT_SOURCE_LB /* Loop back packet */,
+	CORE_RX_PKT_SOURCE_TX /* TX packet duplication. Used for debug. */,
+	MAX_CORE_RX_PKT_SOURCE
+};
+
+
 
 /*
  * Ramrod data for rx queue start ramrod
@@ -870,32 +366,28 @@ struct core_rx_start_ramrod_data {
 	u8 complete_event_flg /* if set - post completion to the event ring */;
 	u8 drop_ttl0_flg /* if set - drop packet with ttl=0 */;
 	__le16 num_of_pbl_pages /* Number of pages in CQE PBL */;
-/* if set - 802.1q tag will be removed and copied to CQE */
+/* if set - 802.1q tag will be removed and copied to CQE. Set only if the vport_id_valid flag is
+ * clear. If the vport_id_valid flag is set, the VPORT configuration is used instead.
+ */
 	u8 inner_vlan_stripping_en;
-/* if set - outer tag wont be stripped, valid only in MF OVLAN mode. */
-	u8 outer_vlan_stripping_dis;
+/* if set and an inner vlan does not exist, the outer vlan will be copied to the CQE as the inner
+ * vlan. Should be used in MF_OVLAN mode only.
+ */
+	u8 report_outer_vlan;
 	u8 queue_id /* Light L2 RX Queue ID */;
 	u8 main_func_queue /* Set if this is the main PFs LL2 queue */;
-/* Duplicate broadcast packets to LL2 main queue in mf_si mode. Valid if
- * main_func_queue is set.
- */
+/* Duplicate broadcast packets to LL2 main queue in mf_si mode. Valid if main_func_queue is set. */
 	u8 mf_si_bcast_accept_all;
-/* Duplicate multicast packets to LL2 main queue in mf_si mode. Valid if
- * main_func_queue is set.
- */
+/* Duplicate multicast packets to LL2 main queue in mf_si mode. Valid if main_func_queue is set. */
 	u8 mf_si_mcast_accept_all;
-/* If set, the inner vlan (802.1q tag) priority that is written to cqe will be
- * zero out, used for TenantDcb
- */
 /* Specifies how ll2 should deal with RX packets errors */
 	struct core_rx_action_on_error action_on_error;
 	u8 gsi_offload_flag /* set for GSI offload mode */;
-/* If set, queue is subject for RX VFC classification. */
-	u8 vport_id_valid;
+	u8 vport_id_valid /* If set, queue is subject to RX VFC classification. */;
 	u8 vport_id /* Queue VPORT for RX VFC classification. */;
 	u8 zero_prod_flg /* If set, zero RX producers. */;
-/* If set, the inner vlan (802.1q tag) priority that is written to cqe will be
- * zero out, used for TenantDcb
+/* If set, the inner vlan (802.1q tag) priority that is written to cqe will be zeroed out, used
+ * for TenantDcb.
  */
 	u8 wipe_inner_vlan_pri_en;
 	u8 reserved[2];
@@ -906,8 +398,8 @@ struct core_rx_start_ramrod_data {
  * Ramrod data for rx queue stop ramrod
  */
 struct core_rx_stop_ramrod_data {
-	u8 complete_cqe_flg /* post completion to the CQE ring if set */;
-	u8 complete_event_flg /* post completion to the event ring if set */;
+	u8 complete_cqe_flg /* if set - post completion to the CQE ring */;
+	u8 complete_event_flg /* if set - post completion to the event ring */;
 	u8 queue_id /* Light L2 RX Queue ID */;
 	u8 reserved1;
 	__le16 reserved2[2];
@@ -922,9 +414,7 @@ struct core_tx_bd_data {
 /* Do not allow additional VLAN manipulations on this packet (DCB) */
 #define CORE_TX_BD_DATA_FORCE_VLAN_MODE_MASK         0x1
 #define CORE_TX_BD_DATA_FORCE_VLAN_MODE_SHIFT        0
-/* Insert VLAN into packet. Cannot be set for LB packets
- * (tx_dst == CORE_TX_DEST_LB)
- */
+/* Insert VLAN into packet. Cannot be set for LB packets (tx_dst == CORE_TX_DEST_LB) */
 #define CORE_TX_BD_DATA_VLAN_INSERTION_MASK          0x1
 #define CORE_TX_BD_DATA_VLAN_INSERTION_SHIFT         1
 /* This is the first BD of the packet (for debug) */
@@ -936,16 +426,13 @@ struct core_tx_bd_data {
 /* Calculate the L4 checksum for the packet */
 #define CORE_TX_BD_DATA_L4_CSUM_MASK                 0x1
 #define CORE_TX_BD_DATA_L4_CSUM_SHIFT                4
-/* Packet is IPv6 with extensions */
-#define CORE_TX_BD_DATA_IPV6_EXT_MASK                0x1
+#define CORE_TX_BD_DATA_IPV6_EXT_MASK                0x1 /* Packet is IPv6 with extensions */
 #define CORE_TX_BD_DATA_IPV6_EXT_SHIFT               5
-/* If IPv6+ext, and if l4_csum is 1, than this field indicates L4 protocol:
- * 0-TCP, 1-UDP
- */
+/* If IPv6+ext, and if l4_csum is 1, then this field indicates the L4 protocol: 0-TCP, 1-UDP */
 #define CORE_TX_BD_DATA_L4_PROTOCOL_MASK             0x1
 #define CORE_TX_BD_DATA_L4_PROTOCOL_SHIFT            6
-/* The pseudo checksum mode to place in the L4 checksum field. Required only
- * when IPv6+ext and l4_csum is set. (use enum core_l4_pseudo_checksum_mode)
+/* The pseudo checksum mode to place in the L4 checksum field. Required only when IPv6+ext and
+ * l4_csum is set. (use enum core_l4_pseudo_checksum_mode)
  */
 #define CORE_TX_BD_DATA_L4_PSEUDO_CSUM_MODE_MASK     0x1
 #define CORE_TX_BD_DATA_L4_PSEUDO_CSUM_MODE_SHIFT    7
@@ -954,13 +441,12 @@ struct core_tx_bd_data {
  */
 #define CORE_TX_BD_DATA_NBDS_MASK                    0xF
 #define CORE_TX_BD_DATA_NBDS_SHIFT                   8
-/* Use roce_flavor enum - Differentiate between Roce flavors is valid when
- * connType is ROCE (use enum core_roce_flavor_type)
+/* Use roce_flavor enum - Differentiate between Roce flavors is valid when connType is ROCE (use
+ * enum core_roce_flavor_type)
  */
 #define CORE_TX_BD_DATA_ROCE_FLAV_MASK               0x1
 #define CORE_TX_BD_DATA_ROCE_FLAV_SHIFT              12
-/* Calculate ip length */
-#define CORE_TX_BD_DATA_IP_LEN_MASK                  0x1
+#define CORE_TX_BD_DATA_IP_LEN_MASK                  0x1 /* Calculate ip length */
 #define CORE_TX_BD_DATA_IP_LEN_SHIFT                 13
 /* disables the STAG insertion, relevant only in MF OVLAN mode. */
 #define CORE_TX_BD_DATA_DISABLE_STAG_INSERTION_MASK  0x1
@@ -975,14 +461,14 @@ struct core_tx_bd_data {
 struct core_tx_bd {
 	struct regpair addr /* Buffer Address */;
 	__le16 nbytes /* Number of Bytes in Buffer */;
-/* Network packets: VLAN to insert to packet (if insertion flag set) LoopBack
- * packets: echo data to pass to Rx
+/* Network packets: VLAN to insert to packet (if insertion flag set) LoopBack packets: echo data to
+ * pass to Rx
  */
 	__le16 nw_vlan_or_lb_echo;
-	struct core_tx_bd_data bd_data /* BD Flags */;
+	struct core_tx_bd_data bd_data /* BD flags */;
 	__le16 bitfield1;
-/* L4 Header Offset from start of packet (in Words). This is needed if both
- * l4_csum and ipv6_ext are set
+/* L4 Header Offset from start of packet (in Words). This is needed if both l4_csum and ipv6_ext are
+ * set
  */
 #define CORE_TX_BD_L4_HDR_OFFSET_W_MASK  0x3FFF
 #define CORE_TX_BD_L4_HDR_OFFSET_W_SHIFT 0
@@ -1010,54 +496,1061 @@ enum core_tx_dest {
  */
 struct core_tx_start_ramrod_data {
 	struct regpair pbl_base_addr /* Address of the pbl page */;
-	__le16 mtu /* Maximum transmission unit */;
+	__le16 mtu /* MTU */;
 	__le16 sb_id /* Status block ID */;
-	u8 sb_index /* Status block protocol index */;
-	u8 stats_en /* Statistics Enable */;
-	u8 stats_id /* Statistics Counter ID */;
-	u8 conn_type /* connection type that loaded ll2 */;
+	u8 sb_index /* Status block index */;
+	u8 stats_en /* Ram statistics enable */;
+	u8 stats_id /* Ram statistics counter ID */;
+	u8 conn_type /* Parent protocol type (use enum protocol_type) */;
 	__le16 pbl_size /* Number of BD pages pointed by PBL */;
 	__le16 qm_pq_id /* QM PQ ID */;
 	u8 gsi_offload_flag /* set for GSI offload mode */;
 	u8 ctx_stats_en /* Context statistics enable */;
-/* If set, queue is part of VPORT and subject for TX switching. */
-	u8 vport_id_valid;
-/* vport id of the current connection, used to access non_rdma_in_to_in_pri_map
- * which is per vport
+	u8 vport_id_valid /* If set, queue is part of VPORT and subject to TX switching. */;
+/* vport id of the current connection, used to access non_rdma_in_to_in_pri_map, which is per
+ * vport.
  */
 	u8 vport_id;
+/* if set, security checks will be made for this connection. If set, the disable_stag_insertion
+ * flag must be clear in the TX BD. If set and anti-spoofing is enabled for the associated VPORT,
+ * only packets with a self destination MAC can be forced to LB.
+ */
+	u8 enforce_security_flag;
+	u8 reserved[7];
+};
+
+
+/*
+ * Ramrod data for tx queue stop ramrod
+ */
+struct core_tx_stop_ramrod_data {
+	__le32 reserved0[2];
+};
+
+
+/*
+ * Ramrod data for tx queue update ramrod
+ */
+struct core_tx_update_ramrod_data {
+	u8 update_qm_pq_id_flg /* Flag to Update QM PQ ID */;
+	u8 reserved0;
+	__le16 qm_pq_id /* Updated QM PQ ID */;
+	__le32 reserved1[1];
+};
+
+
+/*
+ * Enum flag for what type of DCB data to update
+ */
+enum dcb_dscp_update_mode {
+	DONT_UPDATE_DCB_DSCP /* Set when no change should be done to DCB data */,
+	UPDATE_DCB /* Set to update only L2 (vlan) priority */,
+	UPDATE_DSCP /* Set to update only IP DSCP */,
+	UPDATE_DCB_DSCP /* Set to update vlan pri and DSCP */,
+	MAX_DCB_DSCP_UPDATE_MODE
+};
+
+
+/*
+ * The core storm context for the Ystorm
+ */
+struct ystorm_core_conn_st_ctx {
+	__le32 reserved[4];
+};
+
+/*
+ * The core storm context for the Pstorm
+ */
+struct pstorm_core_conn_st_ctx {
+	__le32 reserved[20];
+};
+
+/*
+ * Core Slowpath Connection storm context of Xstorm
+ */
+struct xstorm_core_conn_st_ctx {
+	__le32 spq_base_lo /* SPQ Ring Base Address low dword */;
+	__le32 spq_base_hi /* SPQ Ring Base Address high dword */;
+	struct regpair consolid_base_addr /* Consolidation Ring Base Address */;
+	__le16 spq_cons /* SPQ Ring Consumer */;
+	__le16 consolid_cons /* Consolidation Ring Consumer */;
+	__le32 reserved0[55] /* Pad to 15 cycles */;
+};
+
+struct e4_xstorm_core_conn_ag_ctx {
+	u8 reserved0 /* cdu_validation */;
+	u8 state /* state */;
+	u8 flags0;
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1 /* exist_in_qm0 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1 /* exist_in_qm1 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1 /* exist_in_qm2 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1 /* exist_in_qm3 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1 /* bit4 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1 /* cf_array_active */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1 /* bit6 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1 /* bit7 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
+	u8 flags1;
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1 /* bit8 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1 /* bit9 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1 /* bit10 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1 /* bit11 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1 /* bit12 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1 /* bit13 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1 /* bit14 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1 /* bit15 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
+	u8 flags2;
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3 /* timer0cf */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3 /* timer1cf */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3 /* timer2cf */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3 /* timer_stop_all */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
+	u8 flags3;
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
+	u8 flags4;
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3 /* cf10 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3 /* cf11 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
+	u8 flags5;
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3 /* cf12 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3 /* cf13 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3 /* cf14 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3 /* cf15 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
+	u8 flags6;
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3 /* cf16 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3 /* cf_array_cf */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3 /* cf18 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3 /* cf19 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
+	u8 flags7;
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3 /* cf20 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3 /* cf21 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3 /* cf22 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1 /* cf0en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1 /* cf1en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
+	u8 flags8;
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1 /* cf2en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1 /* cf3en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1 /* cf4en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1 /* cf5en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1 /* cf6en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1 /* cf7en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1 /* cf8en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1 /* cf9en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
+	u8 flags9;
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1 /* cf10en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1 /* cf11en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1 /* cf12en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1 /* cf13en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1 /* cf14en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1 /* cf15en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1 /* cf16en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1 /* cf_array_cf_en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
+	u8 flags10;
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1 /* cf18en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1 /* cf19en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1 /* cf20en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1 /* cf21en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1 /* cf22en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1 /* cf23en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1 /* rule0en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1 /* rule1en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
+	u8 flags11;
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1 /* rule2en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1 /* rule3en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1 /* rule4en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1 /* rule5en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1 /* rule6en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1 /* rule7en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1 /* rule8en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1 /* rule9en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
+	u8 flags12;
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1 /* rule10en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1 /* rule11en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1 /* rule12en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1 /* rule13en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1 /* rule14en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1 /* rule15en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1 /* rule16en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1 /* rule17en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
+	u8 flags13;
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1 /* rule18en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1 /* rule19en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1 /* rule20en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1 /* rule21en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1 /* rule22en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1 /* rule23en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1 /* rule24en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1 /* rule25en */
+#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
+	u8 flags14;
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1 /* bit16 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1 /* bit17 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1 /* bit18 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1 /* bit19 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1 /* bit20 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1 /* bit21 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3 /* cf23 */
+#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
+	u8 byte2 /* byte2 */;
+	__le16 physical_q0 /* physical_q0 */;
+	__le16 consolid_prod /* physical_q1 */;
+	__le16 reserved16 /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_or_spq_prod /* word4 */;
+	__le16 updated_qm_pq_id /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	u8 byte6 /* byte6 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+	__le32 reg5 /* cf_array0 */;
+	__le32 reg6 /* cf_array1 */;
+	__le16 word7 /* word7 */;
+	__le16 word8 /* word8 */;
+	__le16 word9 /* word9 */;
+	__le16 word10 /* word10 */;
+	__le32 reg7 /* reg7 */;
+	__le32 reg8 /* reg8 */;
+	__le32 reg9 /* reg9 */;
+	u8 byte7 /* byte7 */;
+	u8 byte8 /* byte8 */;
+	u8 byte9 /* byte9 */;
+	u8 byte10 /* byte10 */;
+	u8 byte11 /* byte11 */;
+	u8 byte12 /* byte12 */;
+	u8 byte13 /* byte13 */;
+	u8 byte14 /* byte14 */;
+	u8 byte15 /* byte15 */;
+	u8 e5_reserved /* e5_reserved */;
+	__le16 word11 /* word11 */;
+	__le32 reg10 /* reg10 */;
+	__le32 reg11 /* reg11 */;
+	__le32 reg12 /* reg12 */;
+	__le32 reg13 /* reg13 */;
+	__le32 reg14 /* reg14 */;
+	__le32 reg15 /* reg15 */;
+	__le32 reg16 /* reg16 */;
+	__le32 reg17 /* reg17 */;
+	__le32 reg18 /* reg18 */;
+	__le32 reg19 /* reg19 */;
+	__le16 word12 /* word12 */;
+	__le16 word13 /* word13 */;
+	__le16 word14 /* word14 */;
+	__le16 word15 /* word15 */;
+};
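+
+/*
+ * Field access note (illustrative): the _MASK/_SHIFT macro pairs above are
+ * consumed by the driver's generic accessors, defined along the lines of:
+ *
+ *   #define GET_FIELD(value, name) \
+ *           (((value) >> name##_SHIFT) & name##_MASK)
+ *   #define SET_FIELD(value, name, flag) \
+ *   do { \
+ *           (value) &= ~((u64)name##_MASK << name##_SHIFT); \
+ *           (value) |= (((u64)(flag)) & (u64)name##_MASK) << name##_SHIFT; \
+ *   } while (0)
+ *
+ * e.g. SET_FIELD(ag_ctx->flags0, E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0, 1);
+ */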
+
+struct e4_tstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
+	u8 flags1;
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
+	u8 flags2;
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
+	u8 flags3;
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
+	u8 flags4;
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+	u8 flags5;
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+	__le32 reg5 /* reg5 */;
+	__le32 reg6 /* reg6 */;
+	__le32 reg7 /* reg7 */;
+	__le32 reg8 /* reg8 */;
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* conn_dpi */;
+	__le16 word3 /* word3 */;
+	__le32 ll2_rx_prod /* reg9 */;
+	__le32 reg10 /* reg10 */;
+};
+
+struct e4_ustorm_core_conn_ag_ctx {
+	u8 reserved /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
+	u8 flags2;
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+	u8 flags3;
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* conn_dpi */;
+	__le16 word1 /* word1 */;
+	__le32 rx_producers /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+};
+
+/*
+ * The core storm context for the Mstorm
+ */
+struct mstorm_core_conn_st_ctx {
+	__le32 reserved[40];
+};
+
+/*
+ * The core storm context for the Ustorm
+ */
+struct ustorm_core_conn_st_ctx {
+	__le32 reserved[20];
+};
+
+/*
+ * The core storm context for the Tstorm
+ */
+struct tstorm_core_conn_st_ctx {
+	__le32 reserved[4];
+};
+
+/*
+ * core connection context
+ */
+struct e4_core_conn_context {
+	struct ystorm_core_conn_st_ctx ystorm_st_context /* ystorm storm context */;
+	struct regpair ystorm_st_padding[2] /* padding */;
+	struct pstorm_core_conn_st_ctx pstorm_st_context /* pstorm storm context */;
+	struct regpair pstorm_st_padding[2] /* padding */;
+	struct xstorm_core_conn_st_ctx xstorm_st_context /* xstorm storm context */;
+	struct e4_xstorm_core_conn_ag_ctx xstorm_ag_context /* xstorm aggregative context */;
+	struct e4_tstorm_core_conn_ag_ctx tstorm_ag_context /* tstorm aggregative context */;
+	struct e4_ustorm_core_conn_ag_ctx ustorm_ag_context /* ustorm aggregative context */;
+	struct mstorm_core_conn_st_ctx mstorm_st_context /* mstorm storm context */;
+	struct ustorm_core_conn_st_ctx ustorm_st_context /* ustorm storm context */;
+	struct regpair ustorm_st_padding[2] /* padding */;
+	struct tstorm_core_conn_st_ctx tstorm_st_context /* tstorm storm context */;
+	struct regpair tstorm_st_padding[2] /* padding */;
+};
+
+
+struct e5_xstorm_core_conn_ag_ctx {
+	u8 reserved0 /* cdu_validation */;
+	u8 state_and_core_id /* state_and_core_id */;
+	u8 flags0;
+#define E5_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1 /* exist_in_qm0 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1 /* exist_in_qm1 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1 /* exist_in_qm2 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
+#define E5_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1 /* exist_in_qm3 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1 /* bit4 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1 /* cf_array_active */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1 /* bit6 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1 /* bit7 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
+	u8 flags1;
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1 /* bit8 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1 /* bit9 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1 /* bit10 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1 /* bit11 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1 /* bit12 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1 /* bit13 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
+#define E5_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1 /* bit14 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
+#define E5_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1 /* bit15 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
+	u8 flags2;
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3 /* timer0cf */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3 /* timer1cf */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3 /* timer2cf */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3 /* timer_stop_all */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
+	u8 flags3;
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
+	u8 flags4;
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3 /* cf10 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3 /* cf11 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
+	u8 flags5;
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3 /* cf12 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3 /* cf13 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3 /* cf14 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3 /* cf15 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
+	u8 flags6;
+#define E5_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3 /* cf16 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3 /* cf_array_cf */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
+#define E5_XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3 /* cf18 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
+#define E5_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3 /* cf19 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
+	u8 flags7;
+#define E5_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3 /* cf20 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3 /* cf21 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
+#define E5_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3 /* cf22 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1 /* cf0en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1 /* cf1en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
+	u8 flags8;
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1 /* cf2en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1 /* cf3en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1 /* cf4en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1 /* cf5en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1 /* cf6en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1 /* cf7en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1 /* cf8en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1 /* cf9en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
+	u8 flags9;
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1 /* cf10en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1 /* cf11en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1 /* cf12en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1 /* cf13en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1 /* cf14en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1 /* cf15en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
+#define E5_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1 /* cf16en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1 /* cf_array_cf_en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
+	u8 flags10;
+#define E5_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1 /* cf18en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
+#define E5_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1 /* cf19en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
+#define E5_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1 /* cf20en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1 /* cf21en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
+#define E5_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1 /* cf22en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1 /* cf23en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1 /* rule0en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1 /* rule1en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
+	u8 flags11;
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1 /* rule2en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1 /* rule3en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
+#define E5_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1 /* rule4en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1 /* rule5en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1 /* rule6en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1 /* rule7en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1 /* rule8en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1 /* rule9en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
+	u8 flags12;
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1 /* rule10en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1 /* rule11en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1 /* rule12en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1 /* rule13en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1 /* rule14en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1 /* rule15en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1 /* rule16en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1 /* rule17en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
+	u8 flags13;
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1 /* rule18en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1 /* rule19en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1 /* rule20en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1 /* rule21en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1 /* rule22en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1 /* rule23en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1 /* rule24en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1 /* rule25en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
+	u8 flags14;
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1 /* bit16 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1 /* bit17 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1 /* bit18 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1 /* bit19 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1 /* bit20 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1 /* bit21 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3 /* cf23 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
+	u8 byte2 /* byte2 */;
+	__le16 physical_q0 /* physical_q0_and_vf_id_lo */;
+	__le16 consolid_prod /* physical_q1_and_vf_id_hi */;
+	__le16 reserved16 /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_or_spq_prod /* word4 */;
+	__le16 updated_qm_pq_id /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	u8 byte6 /* byte6 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+	__le32 reg5 /* cf_array0 */;
+	__le32 reg6 /* cf_array1 */;
+	u8 flags15;
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED1_MASK         0x1 /* bit22 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED1_SHIFT        0
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED2_MASK         0x1 /* bit23 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED2_SHIFT        1
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED3_MASK         0x1 /* bit24 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED3_SHIFT        2
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED4_MASK         0x3 /* cf24 */
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED4_SHIFT        3
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED5_MASK         0x1 /* cf24en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED5_SHIFT        5
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED6_MASK         0x1 /* rule26en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED6_SHIFT        6
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED7_MASK         0x1 /* rule27en */
+#define E5_XSTORM_CORE_CONN_AG_CTX_E4_RESERVED7_SHIFT        7
+	u8 byte7 /* byte7 */;
+	__le16 word7 /* word7 */;
+	__le16 word8 /* word8 */;
+	__le16 word9 /* word9 */;
+	__le16 word10 /* word10 */;
+	__le16 word11 /* word11 */;
+	__le32 reg7 /* reg7 */;
+	__le32 reg8 /* reg8 */;
+	__le32 reg9 /* reg9 */;
+	u8 byte8 /* byte8 */;
+	u8 byte9 /* byte9 */;
+	u8 byte10 /* byte10 */;
+	u8 byte11 /* byte11 */;
+	u8 byte12 /* byte12 */;
+	u8 byte13 /* byte13 */;
+	u8 byte14 /* byte14 */;
+	u8 byte15 /* byte15 */;
+	__le32 reg10 /* reg10 */;
+	__le32 reg11 /* reg11 */;
+	__le32 reg12 /* reg12 */;
+	__le32 reg13 /* reg13 */;
+	__le32 reg14 /* reg14 */;
+	__le32 reg15 /* reg15 */;
+	__le32 reg16 /* reg16 */;
+	__le32 reg17 /* reg17 */;
+	__le32 reg18 /* reg18 */;
+	__le32 reg19 /* reg19 */;
+	__le16 word12 /* word12 */;
+	__le16 word13 /* word13 */;
+	__le16 word14 /* word14 */;
+	__le16 word15 /* word15 */;
 };
 
-
-/*
- * Ramrod data for tx queue stop ramrod
- */
-struct core_tx_stop_ramrod_data {
-	__le32 reserved0[2];
+struct e5_tstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	u8 flags0;
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK          0x1 /* exist_in_qm0 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT         0
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT         1
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT2_MASK          0x1 /* bit2 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT         2
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT3_MASK          0x1 /* bit3 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT         3
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT4_MASK          0x1 /* bit4 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT         4
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT5_MASK          0x1 /* bit5 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT         5
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF0_MASK           0x3 /* timer0cf */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT          6
+	u8 flags1;
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF1_MASK           0x3 /* timer1cf */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT          0
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF2_MASK           0x3 /* timer2cf */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT          2
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF3_MASK           0x3 /* timer_stop_all */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT          4
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF4_MASK           0x3 /* cf4 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT          6
+	u8 flags2;
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF5_MASK           0x3 /* cf5 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT          0
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF6_MASK           0x3 /* cf6 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT          2
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF7_MASK           0x3 /* cf7 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT          4
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF8_MASK           0x3 /* cf8 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT          6
+	u8 flags3;
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF9_MASK           0x3 /* cf9 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT          0
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF10_MASK          0x3 /* cf10 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT         2
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT        4
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT        5
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT        6
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK         0x1 /* cf3en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT        7
+	u8 flags4;
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK         0x1 /* cf4en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT        0
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK         0x1 /* cf5en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT        1
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK         0x1 /* cf6en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT        2
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK         0x1 /* cf7en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT        3
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK         0x1 /* cf8en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT        4
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK         0x1 /* cf9en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT        5
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK        0x1 /* cf10en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT       6
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT      7
+	u8 flags5;
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT      0
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT      1
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT      2
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT      3
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK       0x1 /* rule5en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT      4
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK       0x1 /* rule6en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT      5
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK       0x1 /* rule7en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT      6
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK       0x1 /* rule8en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT      7
+	u8 flags6;
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED1_MASK  0x1 /* bit6 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED1_SHIFT 0
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED2_MASK  0x1 /* bit7 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED2_SHIFT 1
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED3_MASK  0x1 /* bit8 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED3_SHIFT 2
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED4_MASK  0x3 /* cf11 */
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED4_SHIFT 3
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED5_MASK  0x1 /* cf11en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED5_SHIFT 5
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED6_MASK  0x1 /* rule9en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED6_SHIFT 6
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED7_MASK  0x1 /* rule10en */
+#define E5_TSTORM_CORE_CONN_AG_CTX_E4_RESERVED7_SHIFT 7
+	u8 byte2 /* byte2 */;
+	__le16 word0 /* word0 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+	__le32 reg5 /* reg5 */;
+	__le32 reg6 /* reg6 */;
+	__le32 reg7 /* reg7 */;
+	__le32 reg8 /* reg8 */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	u8 e4_reserved8 /* vf_id */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* conn_dpi */;
+	__le32 ll2_rx_prod /* reg9 */;
+	__le16 word3 /* word3 */;
+	__le16 e4_reserved9 /* word4 */;
 };
 
-
-/*
- * Ramrod data for tx queue update ramrod
- */
-struct core_tx_update_ramrod_data {
-	u8 update_qm_pq_id_flg /* Flag to Update QM PQ ID */;
-	u8 reserved0;
-	__le16 qm_pq_id /* Updated QM PQ ID */;
-	__le32 reserved1[1];
+struct e5_ustorm_core_conn_ag_ctx {
+	u8 reserved /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	u8 flags0;
+#define E5_USTORM_CORE_CONN_AG_CTX_BIT0_MASK          0x1 /* exist_in_qm0 */
+#define E5_USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT         0
+#define E5_USTORM_CORE_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E5_USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT         1
+#define E5_USTORM_CORE_CONN_AG_CTX_CF0_MASK           0x3 /* timer0cf */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF0_SHIFT          2
+#define E5_USTORM_CORE_CONN_AG_CTX_CF1_MASK           0x3 /* timer1cf */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF1_SHIFT          4
+#define E5_USTORM_CORE_CONN_AG_CTX_CF2_MASK           0x3 /* timer2cf */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF2_SHIFT          6
+	u8 flags1;
+#define E5_USTORM_CORE_CONN_AG_CTX_CF3_MASK           0x3 /* timer_stop_all */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF3_SHIFT          0
+#define E5_USTORM_CORE_CONN_AG_CTX_CF4_MASK           0x3 /* cf4 */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF4_SHIFT          2
+#define E5_USTORM_CORE_CONN_AG_CTX_CF5_MASK           0x3 /* cf5 */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF5_SHIFT          4
+#define E5_USTORM_CORE_CONN_AG_CTX_CF6_MASK           0x3 /* cf6 */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF6_SHIFT          6
+	u8 flags2;
+#define E5_USTORM_CORE_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT        0
+#define E5_USTORM_CORE_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT        1
+#define E5_USTORM_CORE_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT        2
+#define E5_USTORM_CORE_CONN_AG_CTX_CF3EN_MASK         0x1 /* cf3en */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT        3
+#define E5_USTORM_CORE_CONN_AG_CTX_CF4EN_MASK         0x1 /* cf4en */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT        4
+#define E5_USTORM_CORE_CONN_AG_CTX_CF5EN_MASK         0x1 /* cf5en */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT        5
+#define E5_USTORM_CORE_CONN_AG_CTX_CF6EN_MASK         0x1 /* cf6en */
+#define E5_USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT        6
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT      7
+	u8 flags3;
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT      0
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT      1
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT      2
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT      3
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK       0x1 /* rule5en */
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT      4
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK       0x1 /* rule6en */
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT      5
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK       0x1 /* rule7en */
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT      6
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK       0x1 /* rule8en */
+#define E5_USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT      7
+	u8 flags4;
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED1_MASK  0x1 /* bit2 */
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED1_SHIFT 0
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED2_MASK  0x1 /* bit3 */
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED2_SHIFT 1
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED3_MASK  0x3 /* cf7 */
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED3_SHIFT 2
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED4_MASK  0x3 /* cf8 */
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED4_SHIFT 4
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED5_MASK  0x1 /* cf7en */
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED5_SHIFT 6
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED6_MASK  0x1 /* cf8en */
+#define E5_USTORM_CORE_CONN_AG_CTX_E4_RESERVED6_SHIFT 7
+	u8 byte2 /* byte2 */;
+	__le16 word0 /* conn_dpi */;
+	__le16 word1 /* word1 */;
+	__le32 rx_producers /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
 };
 
-
 /*
- * Enum flag for what type of dcb data to update
+ * core connection context
  */
-enum dcb_dscp_update_mode {
-/* use when no change should be done to DCB data */
-	DONT_UPDATE_DCB_DSCP,
-	UPDATE_DCB /* use to update only L2 (vlan) priority */,
-	UPDATE_DSCP /* use to update only IP DSCP */,
-	UPDATE_DCB_DSCP /* update vlan pri and DSCP */,
-	MAX_DCB_DSCP_UPDATE_FLAG
+struct e5_core_conn_context {
+	struct ystorm_core_conn_st_ctx ystorm_st_context /* ystorm storm context */;
+	struct regpair ystorm_st_padding[2] /* padding */;
+	struct pstorm_core_conn_st_ctx pstorm_st_context /* pstorm storm context */;
+	struct regpair pstorm_st_padding[2] /* padding */;
+	struct xstorm_core_conn_st_ctx xstorm_st_context /* xstorm storm context */;
+	struct regpair xstorm_st_padding[2] /* padding */;
+	struct e5_xstorm_core_conn_ag_ctx xstorm_ag_context /* xstorm aggregative context */;
+	struct regpair xstorm_ag_padding[2] /* padding */;
+	struct e5_tstorm_core_conn_ag_ctx tstorm_ag_context /* tstorm aggregative context */;
+	struct e5_ustorm_core_conn_ag_ctx ustorm_ag_context /* ustorm aggregative context */;
+	struct mstorm_core_conn_st_ctx mstorm_st_context /* mstorm storm context */;
+	struct ustorm_core_conn_st_ctx ustorm_st_context /* ustorm storm context */;
+	struct regpair ustorm_st_padding[2] /* padding */;
+	struct tstorm_core_conn_st_ctx tstorm_st_context /* tstorm storm context */;
+	struct regpair tstorm_st_padding[2] /* padding */;
 };
 
 
@@ -1070,7 +1563,7 @@ struct eth_mstorm_per_pf_stat {
 
 
 struct eth_mstorm_per_queue_stat {
-/* Number of packets discarded because TTL=0 (in IPv4) or hopLimit=0 (IPv6) */
+/* Number of packets discarded because TTL=0 (in IPv4) or hopLimit=0 (in IPv6) */
 	struct regpair ttl0_discard;
 /* Number of packets discarded because they are bigger than MTU */
 	struct regpair packet_too_big_discard;
@@ -1078,14 +1571,13 @@ struct eth_mstorm_per_queue_stat {
 	struct regpair no_buff_discard;
 /* Number of packets discarded because of no active Rx connection */
 	struct regpair not_active_discard;
-/* number of coalesced packets in all TPA aggregations */
-	struct regpair tpa_coalesced_pkts;
-/* total number of TPA aggregations */
-	struct regpair tpa_coalesced_events;
-/* number of aggregations, which abnormally ended */
-	struct regpair tpa_aborts_num;
-/* total TCP payload length in all TPA aggregations */
-	struct regpair tpa_coalesced_bytes;
+/* Number of coalesced packets in all TPA aggregations / discarded hairpin packets received from
+ * Tx loopback
+ */
+	struct regpair tpa_coalesced_pkts_lb_hairpin_discard;
+	struct regpair tpa_coalesced_events /* total number of TPA aggregations */;
+	struct regpair tpa_aborts_num /* number of aggregations that ended abnormally */;
+	struct regpair tpa_coalesced_bytes /* total TCP payload length in all TPA aggregations */;
 };
 
 
@@ -1121,10 +1613,8 @@ struct eth_pstorm_per_pf_stat {
 	struct regpair vxlan_drop_pkts /* Dropped VXLAN TX packets */;
 	struct regpair geneve_drop_pkts /* Dropped GENEVE TX packets */;
 	struct regpair mpls_drop_pkts /* Dropped MPLS TX packets (E5 Only) */;
-/* Dropped GRE MPLS TX packets (E5 Only) */
-	struct regpair gre_mpls_drop_pkts;
-/* Dropped UDP MPLS TX packets (E5 Only) */
-	struct regpair udp_mpls_drop_pkts;
+	struct regpair gre_mpls_drop_pkts /* Dropped GRE MPLS TX packets (E5 Only) */;
+	struct regpair udp_mpls_drop_pkts /* Dropped UDP MPLS TX packets (E5 Only) */;
 };
 
 
@@ -1132,20 +1622,13 @@ struct eth_pstorm_per_pf_stat {
  * Ethernet TX Per Queue Stats
  */
 struct eth_pstorm_per_queue_stat {
-/* number of total bytes sent without errors */
-	struct regpair sent_ucast_bytes;
-/* number of total bytes sent without errors */
-	struct regpair sent_mcast_bytes;
-/* number of total bytes sent without errors */
-	struct regpair sent_bcast_bytes;
-/* number of total packets sent without errors */
-	struct regpair sent_ucast_pkts;
-/* number of total packets sent without errors */
-	struct regpair sent_mcast_pkts;
-/* number of total packets sent without errors */
-	struct regpair sent_bcast_pkts;
-/* number of total packets dropped due to errors */
-	struct regpair error_drop_pkts;
+	struct regpair sent_ucast_bytes /* number of total bytes sent without errors */;
+	struct regpair sent_mcast_bytes /* number of total bytes sent without errors */;
+	struct regpair sent_bcast_bytes /* number of total bytes sent without errors */;
+	struct regpair sent_ucast_pkts /* number of total packets sent without errors */;
+	struct regpair sent_mcast_pkts /* number of total packets sent without errors */;
+	struct regpair sent_bcast_pkts /* number of total packets sent without errors */;
+	struct regpair error_drop_pkts /* number of total packets dropped due to errors */;
 };
 
 
@@ -1155,25 +1638,22 @@ struct eth_pstorm_per_queue_stat {
 struct eth_rx_rate_limit {
 /* Rate Limit Multiplier - (Storm Clock (MHz) * 8 / Desired Bandwidth (MB/s)) */
 	__le16 mult;
-/* Constant term to add (or subtract from number of cycles) */
-	__le16 cnst;
+	__le16 cnst /* Constant term to add to (or subtract from) the number of cycles */;
 	u8 add_sub_cnst /* Add (1) or subtract (0) constant term */;
 	u8 reserved0;
 	__le16 reserved1;
 };
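+
+/*
+ * Worked example for the mult field above (illustrative numbers): with a
+ * 1000 MHz storm clock and a desired bandwidth of 4000 MB/s,
+ * mult = 1000 * 8 / 4000 = 2.
+ */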
 
 
-/* Update RSS indirection table entry command. One outstanding command supported
- * per PF.
+/*
+ * Update RSS indirection table entry command. One outstanding command supported per PF.
  */
 struct eth_tstorm_rss_update_data {
-/* Valid flag. Driver must set this flag, FW clear valid flag when ready for new
- * RSS update command.
+/* Valid flag. Driver must set this flag; FW clears the valid flag when ready for a new RSS
+ * update command.
+ */
 	u8 valid;
-/* Global VPORT ID. If RSS is disable for VPORT, RSS update command will be
- * ignored.
- */
+/* Global VPORT ID. If RSS is disabled for the VPORT, the RSS update command will be ignored. */
 	u8 vport_id;
 	u8 ind_table_index /* RSS indirect table index that will be updated. */;
 	u8 reserved;
@@ -1250,8 +1730,7 @@ union event_ring_data {
 	union rdma_eqe_data rdma_data /* Dedicated field for RDMA data */;
 	struct nvmf_eqe_data nvmf_data /* Dedicated field for NVMf data */;
 	struct malicious_vf_eqe_data malicious_vf /* Malicious VF data */;
-/* VF Initial Cleanup data */
-	struct initial_cleanup_eqe_data vf_init_cleanup;
+	struct initial_cleanup_eqe_data vf_init_cleanup /* VF Initial Cleanup data */;
 };
 
 
@@ -1262,10 +1741,10 @@ struct event_ring_entry {
 	u8 protocol_id /* Event Protocol ID (use enum protocol_type) */;
 	u8 opcode /* Event Opcode (Per Protocol Type) */;
 	u8 reserved0 /* Reserved */;
-	u8 vfId /* vfId for this event, 0xFF if this is a PF event */;
+	u8 vf_id /* VF ID for this event, 0xFF if this is a PF event */;
 	__le16 echo /* Echo value from ramrod data on the host */;
-/* FW return code for SP ramrods. Use (according to protocol) eth_return_code,
- * or rdma_fw_return_code, or fcoe_completion_status
+/* FW return code for SP ramrods. Use (according to protocol) eth_return_code, or
+ * rdma_fw_return_code, or fcoe_completion_status; for LL2: 0 - succeeded, 1 - failed
  */
 	u8 fw_return_code;
 	u8 flags;
@@ -1290,12 +1769,12 @@ struct event_ring_next_addr {
  */
 union event_ring_element {
 	struct event_ring_entry entry /* Event Ring Entry */;
-/* Event Ring Next Page Address */
-	struct event_ring_next_addr next_addr;
+	struct event_ring_next_addr next_addr /* Event Ring Next Page Address */;
 };
 
 
 
+
 /*
  * Ports mode
  */
@@ -1310,14 +1789,12 @@ enum fw_flow_ctrl_mode {
  * GFT profile type.
  */
 enum gft_profile_type {
-/* tunnel type, inner 4 tuple, IP type and L4 type match. */
-	GFT_PROFILE_TYPE_4_TUPLE,
+	GFT_PROFILE_TYPE_4_TUPLE /* tunnel type, inner 4 tuple, IP type and L4 type match. */,
 /* tunnel type, inner L4 destination port, IP type and L4 type match. */
 	GFT_PROFILE_TYPE_L4_DST_PORT,
 /* tunnel type, inner IP destination address and IP type match. */
 	GFT_PROFILE_TYPE_IP_DST_ADDR,
-/* tunnel type, inner IP source address and IP type match. */
-	GFT_PROFILE_TYPE_IP_SRC_ADDR,
+	GFT_PROFILE_TYPE_IP_SRC_ADDR /* tunnel type, inner IP source address and IP type match. */,
 	GFT_PROFILE_TYPE_TUNNEL_TYPE /* tunnel type and outer IP type match. */,
 	MAX_GFT_PROFILE_TYPE
 };
@@ -1332,6 +1809,21 @@ struct hsi_fp_ver_struct {
 };
 
 
+/*
+ * TID HSI Structure
+ */
+struct hsi_tid {
+	__le32 bitfields;
+#define HSI_TID_ITID_MASK        0x3FFFF /* iTID. */
+#define HSI_TID_ITID_SHIFT       0
+#define HSI_TID_SEGMENT_ID_MASK  0x3 /* Segment ID (Set by FW / HSI function). */
+#define HSI_TID_SEGMENT_ID_SHIFT 18
+#define HSI_TID_OPAQUE_FID_MASK  0xFFF /* Opaque FID. */
+#define HSI_TID_OPAQUE_FID_SHIFT 20
+};
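
The new bitfield is unpacked with the usual HSI mask/shift pattern; a minimal
sketch, where the tid pointer is hypothetical and OSAL_LE32_TO_CPU is assumed
to be the driver's endianness helper:

	u32 bf = OSAL_LE32_TO_CPU(tid->bitfields);
	u32 itid = (bf >> HSI_TID_ITID_SHIFT) & HSI_TID_ITID_MASK;
	u32 seg_id = (bf >> HSI_TID_SEGMENT_ID_SHIFT) & HSI_TID_SEGMENT_ID_MASK;
	u32 opaque_fid = (bf >> HSI_TID_OPAQUE_FID_SHIFT) & HSI_TID_OPAQUE_FID_MASK;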
+
+
+
 /*
  * Integration Phase
  */
@@ -1339,6 +1831,12 @@ enum integ_phase {
 	INTEG_PHASE_BB_A0_LATEST = 3 /* BB A0 latest integration phase */,
 	INTEG_PHASE_BB_B0_NO_MCP = 10 /* BB B0 without MCP */,
 	INTEG_PHASE_BB_B0_WITH_MCP = 11 /* BB B0 with MCP */,
+/* E5 A0 - phase 1, ETH VFC bypass. Force RX vportId = portId */
+	INTEG_PHASE_E5_A0_VFC_BYPASS = 50,
+	INTEG_PHASE_E5_A0_PHASE_1 = 51 /* E5 A0 - phase 1 */,
+	INTEG_PHASE_E5_A0_PHASE_2 = 52 /* E5 A0 - phase 2 */,
+	INTEG_PHASE_E5_A0_PHASE_3_NO_RGFS = 53 /* E5 A0 - phase 3. RGFS not used. */,
+	INTEG_PHASE_E5_A0_PHASE_3 = 54 /* E5 A0 - phase 3 */,
 	MAX_INTEG_PHASE
 };
 
@@ -1347,72 +1845,57 @@ enum integ_phase {
  * Ports mode
  */
 enum iwarp_ll2_tx_queues {
-/* LL2 queue for OOO packets sent in-order by the driver */
-	IWARP_LL2_IN_ORDER_TX_QUEUE = 1,
-/* LL2 queue for unaligned packets sent aligned by the driver */
-	IWARP_LL2_ALIGNED_TX_QUEUE,
-/* LL2 queue for unaligned packets sent aligned and was right-trimmed by the
- * driver
- */
+	IWARP_LL2_IN_ORDER_TX_QUEUE = 1 /* LL2 queue for OOO packets sent in-order by the driver */,
+	IWARP_LL2_ALIGNED_TX_QUEUE /* LL2 queue for unaligned packets sent aligned by the driver */,
+/* LL2 queue for unaligned packets sent aligned and right-trimmed by the driver */
 	IWARP_LL2_ALIGNED_RIGHT_TRIMMED_TX_QUEUE,
 	IWARP_LL2_ERROR /* Error indication */,
 	MAX_IWARP_LL2_TX_QUEUES
 };
 
 
+
 /*
  * Malicious VF error ID
  */
 enum malicious_vf_error_id {
-	MALICIOUS_VF_NO_ERROR /* Zero placeholder value */,
-/* Writing to VF/PF channel when it is not ready */
-	VF_PF_CHANNEL_NOT_READY,
+	MALICIOUS_VF_NO_ERROR /* Zero placeholder value */,
+	VF_PF_CHANNEL_NOT_READY /* Writing to VF/PF channel when it is not ready */,
 	VF_ZONE_MSG_NOT_VALID /* VF channel message is not valid */,
 	VF_ZONE_FUNC_NOT_ENABLED /* Parent PF of VF channel is not active */,
-/* TX packet is shorter then reported on BDs or from minimal size */
-	ETH_PACKET_TOO_SMALL,
-/* Tx packet with marked as insert VLAN when its illegal */
-	ETH_ILLEGAL_VLAN_MODE,
+	ETH_PACKET_TOO_SMALL /* TX packet is shorter than reported on BDs or than minimal size */,
+	ETH_ILLEGAL_VLAN_MODE /* TX packet marked as insert VLAN when it is illegal */,
 	ETH_MTU_VIOLATION /* TX packet is greater then MTU */,
-/* TX packet has illegal inband tags marked */
-	ETH_ILLEGAL_INBAND_TAGS,
-/* Vlan cant be added to inband tag */
-	ETH_VLAN_INSERT_AND_INBAND_VLAN,
-/* indicated number of BDs for the packet is illegal */
-	ETH_ILLEGAL_NBDS,
+	ETH_ILLEGAL_INBAND_TAGS /* TX packet has illegal inband tags marked */,
+	ETH_VLAN_INSERT_AND_INBAND_VLAN /* VLAN cannot be added to inband tag */,
+	ETH_ILLEGAL_NBDS /* indicated number of BDs for the packet is illegal */,
 	ETH_FIRST_BD_WO_SOP /* 1st BD must have start_bd flag set */,
-/* There are not enough BDs for transmission of even one packet */
-	ETH_INSUFFICIENT_BDS,
+	ETH_INSUFFICIENT_BDS /* There are not enough BDs for transmission of even one packet */,
 	ETH_ILLEGAL_LSO_HDR_NBDS /* Header NBDs value is illegal */,
 	ETH_ILLEGAL_LSO_MSS /* LSO MSS value is more than allowed */,
-/* empty BD (which not contains control flags) is illegal  */
-	ETH_ZERO_SIZE_BD,
+	ETH_ZERO_SIZE_BD /* empty BD (which contains no control flags) is illegal */,
 	ETH_ILLEGAL_LSO_HDR_LEN /* LSO header size is above the limit  */,
-/* In LSO its expected that on the local BD ring there will be at least MSS
- * bytes of data
- */
+/* In LSO it is expected that on the local BD ring there will be at least MSS bytes of data */
 	ETH_INSUFFICIENT_PAYLOAD,
 	ETH_EDPM_OUT_OF_SYNC /* Valid BDs on local ring after EDPM L2 sync */,
 /* Tunneled packet with IPv6+Ext without a proper number of BDs */
 	ETH_TUNN_IPV6_EXT_NBD_ERR,
 	ETH_CONTROL_PACKET_VIOLATION /* VF sent control frame such as PFC */,
 	ETH_ANTI_SPOOFING_ERR /* Anti-Spoofing verification failure */,
-/* packet scanned is too large (can be 9700 at most) */
-	ETH_PACKET_SIZE_TOO_LARGE,
-/* Tx packet with marked as insert VLAN when its illegal */
-	CORE_ILLEGAL_VLAN_MODE,
-/* indicated number of BDs for the packet is illegal */
-	CORE_ILLEGAL_NBDS,
+	ETH_PACKET_SIZE_TOO_LARGE /* packet scanned is too large (can be 9700 at most) */,
+	CORE_ILLEGAL_VLAN_MODE /* TX packet marked as insert VLAN when it is illegal */,
+	CORE_ILLEGAL_NBDS /* indicated number of BDs for the packet is illegal */,
 	CORE_FIRST_BD_WO_SOP /* 1st BD must have start_bd flag set */,
-/* There are not enough BDs for transmission of even one packet */
-	CORE_INSUFFICIENT_BDS,
-/* TX packet is shorter then reported on BDs or from minimal size */
-	CORE_PACKET_TOO_SMALL,
+	CORE_INSUFFICIENT_BDS /* There are not enough BDs for transmission of even one packet */,
+	CORE_PACKET_TOO_SMALL /* TX packet is shorter than reported on BDs or than minimal size */,
 	CORE_ILLEGAL_INBAND_TAGS /* TX packet has illegal inband tags marked */,
 	CORE_VLAN_INSERT_AND_INBAND_VLAN /* Vlan cant be added to inband tag */,
 	CORE_MTU_VIOLATION /* TX packet is greater then MTU */,
 	CORE_CONTROL_PACKET_VIOLATION /* VF sent control frame such as PFC */,
 	CORE_ANTI_SPOOFING_ERR /* Anti-Spoofing verification failure */,
+	CORE_PACKET_SIZE_TOO_LARGE /* packet scanned is too large (can be 9700 at most) */,
+	CORE_ILLEGAL_BD_FLAGS /* TX packet has illegal BD flags. */,
+	CORE_GSI_PACKET_VIOLATION /* TX packet GSI validation fail */,
 	MAX_MALICIOUS_VF_ERROR_ID
 };
 
@@ -1422,11 +1905,9 @@ enum malicious_vf_error_id {
  * Mstorm non-triggering VF zone
  */
 struct mstorm_non_trigger_vf_zone {
-/* VF statistic bucket */
-	struct eth_mstorm_per_queue_stat eth_queue_stat;
+	struct eth_mstorm_per_queue_stat eth_queue_stat /* VF statistic bucket */;
 /* VF RX queues producers */
-	struct eth_rx_prod_data
-		eth_rx_queue_producers[ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD];
+	struct eth_rx_prod_data eth_rx_queue_producers[ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD];
 };
 
 
@@ -1434,8 +1915,7 @@ struct mstorm_non_trigger_vf_zone {
  * Mstorm VF zone
  */
 struct mstorm_vf_zone {
-/* non-interrupt-triggering zone */
-	struct mstorm_non_trigger_vf_zone non_trigger;
+	struct mstorm_non_trigger_vf_zone non_trigger /* non-interrupt-triggering zone */;
 };
 
 
@@ -1451,35 +1931,43 @@ struct vlan_header {
  * outer tag configurations
  */
 struct outer_tag_config_struct {
-/* Enables updating S-tag priority from inner tag or DCB. Should be 1 for Bette
- * Davis, UFP with Host Control mode, and UFP with DCB over base interface.
- * else - 0.
+/* Enables updating S-tag priority from inner tag or DCB. Should be 1 for Bette Davis, UFP with Host
+ * Control mode, and UFP with DCB over base interface. Else - 0.
  */
 	u8 enable_stag_pri_change;
-/* If inner_to_outer_pri_map is initialize then set pri_map_valid */
-	u8 pri_map_valid;
+	u8 pri_map_valid /* When set, inner_to_outer_pri_map will be used */;
 	u8 reserved[2];
-/* In case mf_mode is MF_OVLAN, this field specifies the outer tag protocol
- * identifier and outer tag control information
+/* In case mf_mode is MF_OVLAN, this field specifies the outer Tag Protocol Identifier and outer Tag
+ * Control Information
  */
 	struct vlan_header outer_tag;
-/* Map from inner to outer priority. Set pri_map_valid when init map */
+/* Map from inner to outer priority. Used if pri_map_valid is set */
 	u8 inner_to_outer_pri_map[8];
 };
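
For reference, a hypothetical fill-in that exercises the pri_map_valid
semantics described above (illustrative values only):

	struct outer_tag_config_struct otc = { 0 };
	int i;

	otc.enable_stag_pri_change = 1;
	otc.pri_map_valid = 1; /* inner_to_outer_pri_map below will be used */
	for (i = 0; i < 8; i++)
		otc.inner_to_outer_pri_map[i] = i; /* identity priority map */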
 
 
+/*
+ * Path init actions
+ */
+enum path_init_actions {
+	PATH_INIT_NONE = 0 /* No Action */,
+	PATH_INIT_PBF_HALT_FIX = 1 /* CQ 103263 - PBF halts under heavy traffic load */,
+	MAX_PATH_INIT_ACTIONS
+};
+
+
 /*
  * personality per PF
  */
 enum personality_type {
 	BAD_PERSONALITY_TYP,
 	PERSONALITY_ISCSI /* iSCSI and LL2 */,
-	PERSONALITY_FCOE /* Fcoe and LL2 */,
-	PERSONALITY_RDMA_AND_ETH /* Roce or Iwarp, Eth and LL2 */,
-	PERSONALITY_RDMA /* Roce and LL2 */,
-	PERSONALITY_CORE /* CORE(LL2) */,
+	PERSONALITY_FCOE /* FCoE and LL2 */,
+	PERSONALITY_RDMA_AND_ETH /* RoCE or IWARP, Ethernet and LL2 */,
+	PERSONALITY_RDMA /* RoCE and LL2 */,
+	PERSONALITY_CORE /* Core (LL2) */,
 	PERSONALITY_ETH /* Ethernet */,
-	PERSONALITY_TOE /* Toe and LL2 */,
+	PERSONALITY_TOE /* TOE and LL2 */,
 	MAX_PERSONALITY_TYPE
 };
 
@@ -1488,32 +1976,33 @@ enum personality_type {
  * tunnel configuration
  */
 struct pf_start_tunnel_config {
-/* Set VXLAN tunnel UDP destination port to vxlan_udp_port. If not set -
- * FW will use a default port
+/* Set VXLAN tunnel UDP destination port to vxlan_udp_port. If not set - FW will use a default
+ * port
  */
 	u8 set_vxlan_udp_port_flg;
-/* Set GENEVE tunnel UDP destination port to geneve_udp_port. If not set -
- * FW will use a default port
+/* Set GENEVE tunnel UDP destination port to geneve_udp_port. If not set - FW will use a default
+ * port
  */
 	u8 set_geneve_udp_port_flg;
-/* Set no-innet-L2 VXLAN tunnel UDP destination port to
- * no_inner_l2_vxlan_udp_port. If not set - FW will use a default port
+/* Set no-inner-L2 VXLAN tunnel UDP destination port to no_inner_l2_vxlan_udp_port. If not set - FW
+ * will use a default port
  */
 	u8 set_no_inner_l2_vxlan_udp_port_flg;
-	u8 tunnel_clss_vxlan /* Rx classification scheme for VXLAN tunnel. */;
-/* Rx classification scheme for l2 GENEVE tunnel. */
+/* Rx classification scheme for VXLAN tunnel. (use enum tunnel_clss) */
+	u8 tunnel_clss_vxlan;
+/* Rx classification scheme for L2 GENEVE tunnel. (use enum tunnel_clss) */
 	u8 tunnel_clss_l2geneve;
-/* Rx classification scheme for ip GENEVE tunnel. */
+/* Rx classification scheme for IP GENEVE tunnel. (use enum tunnel_clss) */
 	u8 tunnel_clss_ipgeneve;
-	u8 tunnel_clss_l2gre /* Rx classification scheme for l2 GRE tunnel. */;
-	u8 tunnel_clss_ipgre /* Rx classification scheme for ip GRE tunnel. */;
+/* Rx classification scheme for L2 GRE tunnel. (use enum tunnel_clss) */
+	u8 tunnel_clss_l2gre;
+/* Rx classification scheme for IP GRE tunnel. (use enum tunnel_clss) */
+	u8 tunnel_clss_ipgre;
 /* VXLAN tunnel UDP destination port. Valid if set_vxlan_udp_port_flg=1 */
 	__le16 vxlan_udp_port;
 /* GENEVE tunnel UDP destination port. Valid if set_geneve_udp_port_flg=1 */
 	__le16 geneve_udp_port;
-/* no-innet-L2 VXLAN  tunnel UDP destination port. Valid if
- * set_no_inner_l2_vxlan_udp_port_flg=1
- */
+/* no-inner-L2 VXLAN tunnel UDP destination port. Valid if set_no_inner_l2_vxlan_udp_port_flg=1 */
 	__le16 no_inner_l2_vxlan_udp_port;
 	__le16 reserved[3];
 };
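
A sketch of how a driver might fill this structure at PF start, assuming the
OSAL_CPU_TO_LE16 helper; the values are examples, not FW defaults:

	struct pf_start_tunnel_config tc = { 0 };

	tc.set_vxlan_udp_port_flg = 1; /* override the FW default port */
	tc.vxlan_udp_port = OSAL_CPU_TO_LE16(4789); /* IANA VXLAN port */
	tc.tunnel_clss_vxlan = TUNNEL_CLSS_MAC_VNI; /* classify on MAC + VNI */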
@@ -1523,34 +2012,27 @@ struct pf_start_tunnel_config {
  */
 struct pf_start_ramrod_data {
 	struct regpair event_ring_pbl_addr /* Address of event ring PBL */;
-/* PBL address of consolidation queue */
-	struct regpair consolid_q_pbl_addr;
-/* tunnel configuration. */
-	struct pf_start_tunnel_config tunnel_config;
+	struct regpair consolid_q_pbl_addr /* PBL address of consolidation queue */;
+	struct pf_start_tunnel_config tunnel_config /* tunnel configuration. */;
 	__le16 event_ring_sb_id /* Status block ID */;
-/* All VfIds owned by Pf will be from baseVfId till baseVfId+numVfs */
-	u8 base_vf_id;
-	u8 num_vfs /* Amount of vfs owned by PF */;
+	u8 base_vf_id /* All VF IDs owned by the PF range from baseVfId to baseVfId+numVfs */;
+	u8 num_vfs /* Number of VFs owned by PF */;
 	u8 event_ring_num_pages /* Number of PBL pages in event ring */;
 	u8 event_ring_sb_index /* Status block index */;
 	u8 path_id /* HW path ID (engine ID) */;
 	u8 warning_as_error /* In FW asserts, treat warning as error */;
-/* If not set - throw a warning for each ramrod (for debug) */
-	u8 dont_log_ramrods;
-	u8 personality /* define what type of personality is new PF */;
-/* Log type mask. Each bit set enables a corresponding event type logging.
- * Event types are defined as ASSERT_LOG_TYPE_xxx
+	u8 dont_log_ramrods /* If set, FW will not log ramrods */;
+	u8 personality /* PFs personality (use enum personality_type) */;
+/* Log type mask. Each bit set enables a corresponding event type logging. Event types are defined
+ * as ASSERT_LOG_TYPE_xxx
  */
 	__le16 log_type_mask;
-	u8 mf_mode /* Multi function mode */;
-	u8 integ_phase /* Integration phase */;
-/* If set, inter-pf tx switching is allowed in Switch Independent func mode */
-	u8 allow_npar_tx_switching;
-	u8 reserved0;
-/* FP HSI version to be used by FW */
-	struct hsi_fp_ver_struct hsi_fp_ver;
-/* Outer tag configurations */
-	struct outer_tag_config_struct outer_tag_config;
+	u8 mf_mode /* Multi function mode (use enum mf_mode) */;
+	u8 integ_phase /* Integration phase (use enum integ_phase) */;
+	u8 allow_npar_tx_switching /* If set, inter-pf tx switching is allowed in NPAR mode */;
+	u8 disable_path_init /* Disable Path Initialization bitmap (use enum path_init_actions) */;
+	struct hsi_fp_ver_struct hsi_fp_ver /* FP HSI version to be used by FW */;
+	struct outer_tag_config_struct outer_tag_config /* Outer tag configurations */;
 };
 
 
@@ -1564,9 +2046,7 @@ struct protocol_dcb_data {
 	u8 dcb_priority /* DCB priority */;
 	u8 dcb_tc /* DCB TC */;
 	u8 dscp_val /* DSCP value to write if dscp_enable_flag is set */;
-/* When DCB is enabled - if this flag is set, dont add VLAN 0 tag to untagged
- * frames
- */
+/* When DCB is enabled - if this flag is set, don't add VLAN 0 tag to untagged frames */
 	u8 dcb_dont_add_vlan0;
 };
 
@@ -1574,34 +2054,30 @@ struct protocol_dcb_data {
  * Update tunnel configuration
  */
 struct pf_update_tunnel_config {
-/* Update RX per PF tunnel classification scheme. */
-	u8 update_rx_pf_clss;
-/* Update per PORT default tunnel RX classification scheme for traffic with
- * unknown unicast outer MAC in NPAR mode.
+	u8 update_rx_pf_clss /* Update per-PF RX tunnel classification scheme. */;
+/* Update per-PORT default tunnel RX classification scheme for traffic with unknown unicast outer
+ * MAC in NPAR mode.
  */
 	u8 update_rx_def_ucast_clss;
-/* Update per PORT default tunnel RX classification scheme for traffic with non
- * unicast outer MAC in NPAR mode.
+/* Update per-PORT default tunnel RX classification scheme for traffic with non-unicast outer MAC in
+ * NPAR mode.
  */
 	u8 update_rx_def_non_ucast_clss;
-/* Update VXLAN tunnel UDP destination port. */
-	u8 set_vxlan_udp_port_flg;
-/* Update GENEVE tunnel UDP destination port. */
-	u8 set_geneve_udp_port_flg;
-/* Update no-innet-L2 VXLAN  tunnel UDP destination port. */
+	u8 set_vxlan_udp_port_flg /* Update VXLAN tunnel UDP destination port. */;
+	u8 set_geneve_udp_port_flg /* Update GENEVE tunnel UDP destination port. */;
+/* Update no-inner-L2 VXLAN tunnel UDP destination port. */
 	u8 set_no_inner_l2_vxlan_udp_port_flg;
-	u8 tunnel_clss_vxlan /* Classification scheme for VXLAN tunnel. */;
-/* Classification scheme for l2 GENEVE tunnel. */
+	u8 tunnel_clss_vxlan /* Classification scheme for VXLAN tunnel. (use enum tunnel_clss) */;
+/* Classification scheme for L2 GENEVE tunnel. (use enum tunnel_clss) */
 	u8 tunnel_clss_l2geneve;
-/* Classification scheme for ip GENEVE tunnel. */
+/* Classification scheme for IP GENEVE tunnel. (use enum tunnel_clss) */
 	u8 tunnel_clss_ipgeneve;
-	u8 tunnel_clss_l2gre /* Classification scheme for l2 GRE tunnel. */;
-	u8 tunnel_clss_ipgre /* Classification scheme for ip GRE tunnel. */;
+	u8 tunnel_clss_l2gre /* Classification scheme for L2 GRE tunnel. (use enum tunnel_clss) */;
+	u8 tunnel_clss_ipgre /* Classification scheme for IP GRE tunnel. (use enum tunnel_clss) */;
 	u8 reserved;
 	__le16 vxlan_udp_port /* VXLAN tunnel UDP destination port. */;
 	__le16 geneve_udp_port /* GENEVE tunnel UDP destination port. */;
-/* no-innet-L2 VXLAN  tunnel UDP destination port. */
-	__le16 no_inner_l2_vxlan_udp_port;
+	__le16 no_inner_l2_vxlan_udp_port /* no-inner-L2 VXLAN tunnel UDP destination port. */;
 	__le16 reserved1[3];
 };
 
@@ -1609,37 +2085,33 @@ struct pf_update_tunnel_config {
  * Data for port update ramrod
  */
 struct pf_update_ramrod_data {
-/* Update Eth DCB  data indication (use enum dcb_dscp_update_mode) */
+/* If set - Update Eth DCB data (use enum dcb_dscp_update_mode) */
 	u8 update_eth_dcb_data_mode;
-/* Update FCOE DCB  data indication (use enum dcb_dscp_update_mode) */
+/* If set - Update FCOE DCB data (use enum dcb_dscp_update_mode) */
 	u8 update_fcoe_dcb_data_mode;
-/* Update iSCSI DCB  data indication (use enum dcb_dscp_update_mode) */
+/* If set - Update iSCSI DCB data (use enum dcb_dscp_update_mode) */
 	u8 update_iscsi_dcb_data_mode;
-	u8 update_roce_dcb_data_mode /* Update ROCE DCB  data indication */;
-/* Update RROCE (RoceV2) DCB  data indication */
+/* If set - Update ROCE DCB data (use enum dcb_dscp_update_mode) */
+	u8 update_roce_dcb_data_mode;
+/* If set - Update RROCE (RoceV2) DCB data (use enum dcb_dscp_update_mode) */
 	u8 update_rroce_dcb_data_mode;
-	u8 update_iwarp_dcb_data_mode /* Update IWARP DCB  data indication */;
-	u8 update_mf_vlan_flag /* Update MF outer vlan Id */;
-/* Update Enable STAG Priority Change indication */
-	u8 update_enable_stag_pri_change;
-	struct protocol_dcb_data eth_dcb_data /* core eth related fields */;
-	struct protocol_dcb_data fcoe_dcb_data /* core fcoe related fields */;
-/* core iscsi related fields */
-	struct protocol_dcb_data iscsi_dcb_data;
-	struct protocol_dcb_data roce_dcb_data /* core roce related fields */;
-/* core roce related fields */
-	struct protocol_dcb_data rroce_dcb_data;
-/* core iwarp related fields */
-	struct protocol_dcb_data iwarp_dcb_data;
-	__le16 mf_vlan /* new outer vlan id value */;
-/* enables updating S-tag priority from inner tag or DCB. Should be 1 for Bette
- * Davis, UFP with Host Control mode, and UFP with DCB over base interface.
- * else - 0
+/* If set - Update IWARP DCB data (use enum dcb_dscp_update_mode) */
+	u8 update_iwarp_dcb_data_mode;
+	u8 update_mf_vlan_flag /* If set - Update MF Tag TCI */;
+	u8 update_enable_stag_pri_change /* If set - Update Enable STAG Priority Change */;
+	struct protocol_dcb_data eth_dcb_data /* eth  DCB data */;
+	struct protocol_dcb_data fcoe_dcb_data /* fcoe DCB data */;
+	struct protocol_dcb_data iscsi_dcb_data /* iscsi DCB data */;
+	struct protocol_dcb_data roce_dcb_data /* roce DCB data */;
+	struct protocol_dcb_data rroce_dcb_data /* rroce DCB data */;
+	struct protocol_dcb_data iwarp_dcb_data /* iwarp DCB data */;
+	__le16 mf_vlan /* MF Tag TCI */;
+/* Enables updating S-tag priority from inner tag or DCB. Should be 1 for Bette Davis, UFP with
+ * Host Control mode, and UFP with DCB over base interface. Else - 0.
  */
 	u8 enable_stag_pri_change;
 	u8 reserved;
-/* tunnel configuration. */
-	struct pf_update_tunnel_config tunnel_config;
+	struct pf_update_tunnel_config tunnel_config /* tunnel configuration. */;
 };
 
 
@@ -1681,8 +2153,7 @@ struct rdma_sent_stats {
  * Pstorm non-triggering VF zone
  */
 struct pstorm_non_trigger_vf_zone {
-/* VF statistic bucket */
-	struct eth_pstorm_per_queue_stat eth_queue_stat;
+	struct eth_pstorm_per_queue_stat eth_queue_stat /* VF statistic bucket */;
 	struct rdma_sent_stats rdma_stats /* RoCE sent statistics */;
 };
 
@@ -1691,8 +2162,7 @@ struct pstorm_non_trigger_vf_zone {
  * Pstorm VF zone
  */
 struct pstorm_vf_zone {
-/* non-interrupt-triggering zone */
-	struct pstorm_non_trigger_vf_zone non_trigger;
+	struct pstorm_non_trigger_vf_zone non_trigger /* non-interrupt-triggering zone */;
 	struct regpair reserved[7] /* vf_zone size mus be power of 2 */;
 };
 
@@ -1703,7 +2173,7 @@ struct pstorm_vf_zone {
 struct ramrod_header {
 	__le32 cid /* Slowpath Connection CID */;
 	u8 cmd_id /* Ramrod Cmd (Per Protocol Type) */;
-	u8 protocol_id /* Ramrod Protocol ID */;
+	u8 protocol_id /* Ramrod Protocol ID (use enum protocol_type) */;
 	__le16 echo /* Ramrod echo */;
 };
 
@@ -1723,22 +2193,20 @@ struct rdma_rcv_stats {
  */
 struct rl_update_ramrod_data {
 	u8 qcn_update_param_flg /* Update QCN global params: timeout. */;
-/* Update DCQCN global params: timeout, g, k. */
-	u8 dcqcn_update_param_flg;
+	u8 dcqcn_update_param_flg /* Update DCQCN global params: timeout, g, k. */;
 	u8 rl_init_flg /* Init RL parameters, when RL disabled. */;
 	u8 rl_start_flg /* Start RL in IDLE state. Set rate to maximum. */;
 	u8 rl_stop_flg /* Stop RL. */;
-	u8 rl_id_first /* ID of first or single RL, that will be updated. */;
-/* ID of last RL, that will be updated. If clear, single RL will updated. */
-	u8 rl_id_last;
 	u8 rl_dc_qcn_flg /* If set, RL will used for DCQCN. */;
 /* If set, alpha will be reset to 1 when the state machine is idle. */
 	u8 dcqcn_reset_alpha_on_idle;
-/* Byte counter threshold to change rate increase stage. */
-	u8 rl_bc_stage_th;
-/* Timer threshold to change rate increase stage. */
-	u8 rl_timer_stage_th;
+	u8 rl_bc_stage_th /* Byte counter threshold to change rate increase stage. */;
+	u8 rl_timer_stage_th /* Timer threshold to change rate increase stage. */;
 	u8 reserved1;
+	__le16 rl_id_first /* ID of the first or only RL to be updated. */;
+/* ID of the last RL to be updated. If clear, a single RL will be updated. */
+	__le16 rl_id_last;
+	__le16 reserved2;
 	__le32 rl_bc_rate /* Byte Counter Limit. */;
 	__le16 rl_max_rate /* Maximum rate in 1.6 Mbps resolution. */;
 	__le16 rl_r_ai /* Active increase rate. */;
@@ -1747,7 +2215,6 @@ struct rl_update_ramrod_data {
 	__le32 dcqcn_k_us /* DCQCN Alpha update interval. */;
 	__le32 dcqcn_timeuot_us /* DCQCN timeout. */;
 	__le32 qcn_timeuot_us /* QCN timeout. */;
-	__le32 reserved2;
 };
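
The widened rl_id_first/rl_id_last fields (u8 to __le16) now allow rate
limiter IDs beyond 255; an illustrative update request, assuming the
OSAL_CPU_TO_LE16 helper:

	struct rl_update_ramrod_data rl = { 0 };

	rl.rl_start_flg = 1; /* start RL in IDLE state at maximum rate */
	rl.rl_id_first = OSAL_CPU_TO_LE16(0);
	rl.rl_id_last = OSAL_CPU_TO_LE16(511); /* a range this wide needs __le16 */
	rl.rl_max_rate = OSAL_CPU_TO_LE16(15625); /* 25 Gbps in 1.6 Mbps units */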
 
 
@@ -1760,6 +2227,7 @@ struct slow_path_element {
 };
 
 
+
 /*
  * Tstorm non-triggering VF zone
  */
@@ -1769,41 +2237,26 @@ struct tstorm_non_trigger_vf_zone {
 
 
 struct tstorm_per_port_stat {
-/* packet is dropped because it was truncated in NIG */
-	struct regpair trunc_error_discard;
-/* packet is dropped because of Ethernet FCS error */
-	struct regpair mac_error_discard;
+	struct regpair trunc_error_discard /* packet is dropped because it was truncated in NIG */;
+	struct regpair mac_error_discard /* packet is dropped because of Ethernet FCS error */;
 /* packet is dropped because classification was unsuccessful */
 	struct regpair mftag_filter_discard;
 /* packet was passed to Ethernet and dropped because of no mac filter match */
 	struct regpair eth_mac_filter_discard;
-/* packet passed to Light L2 and dropped because Light L2 is not configured for
- * this PF
- */
+/* packet passed to Light L2 and dropped because Light L2 is not configured for this PF */
 	struct regpair ll2_mac_filter_discard;
-/* packet passed to Light L2 and dropped because Light L2 is not configured for
- * this PF
- */
+/* packet passed to Light L2 and dropped because Light L2 is not configured for this PF */
 	struct regpair ll2_conn_disabled_discard;
-/* packet is an ISCSI irregular packet */
-	struct regpair iscsi_irregular_pkt;
-/* packet is an FCOE irregular packet */
-	struct regpair fcoe_irregular_pkt;
-/* packet is an ROCE irregular packet */
-	struct regpair roce_irregular_pkt;
-/* packet is an IWARP irregular packet */
-	struct regpair iwarp_irregular_pkt;
-/* packet is an ETH irregular packet */
-	struct regpair eth_irregular_pkt;
-/* packet is an TOE irregular packet */
-	struct regpair toe_irregular_pkt;
-/* packet is an PREROCE irregular packet */
-	struct regpair preroce_irregular_pkt;
+	struct regpair iscsi_irregular_pkt /* packet is an iSCSI irregular packet */;
+	struct regpair fcoe_irregular_pkt /* packet is an FCoE irregular packet */;
+	struct regpair roce_irregular_pkt /* packet is a RoCE irregular packet */;
+	struct regpair iwarp_irregular_pkt /* packet is an IWARP irregular packet */;
+	struct regpair eth_irregular_pkt /* packet is an ETH irregular packet */;
+	struct regpair toe_irregular_pkt /* packet is a TOE irregular packet */;
+	struct regpair preroce_irregular_pkt /* packet is a PreRoCE irregular packet */;
 	struct regpair eth_gre_tunn_filter_discard /* GRE dropped packets */;
-/* VXLAN dropped packets */
-	struct regpair eth_vxlan_tunn_filter_discard;
-/* GENEVE dropped packets */
-	struct regpair eth_geneve_tunn_filter_discard;
+	struct regpair eth_vxlan_tunn_filter_discard /* VXLAN dropped packets */;
+	struct regpair eth_geneve_tunn_filter_discard /* GENEVE dropped packets */;
 	struct regpair eth_gft_drop_pkt /* GFT dropped packets */;
 };
 
@@ -1812,8 +2265,7 @@ struct tstorm_per_port_stat {
  * Tstorm VF zone
  */
 struct tstorm_vf_zone {
-/* non-interrupt-triggering zone */
-	struct tstorm_non_trigger_vf_zone non_trigger;
+	struct tstorm_non_trigger_vf_zone non_trigger /* non-interrupt-triggering zone */;
 };
 
 
@@ -1821,20 +2273,16 @@ struct tstorm_vf_zone {
  * Tunnel classification scheme
  */
 enum tunnel_clss {
-/* Use MAC and VLAN from first L2 header for vport classification. */
+/* Use MAC and VLAN from outermost L2 header for vport classification. */
 	TUNNEL_CLSS_MAC_VLAN = 0,
-/* Use MAC from first L2 header and VNI from tunnel header for vport
- * classification
- */
+/* Use MAC from outermost L2 header and VNI from tunnel header for vport classification */
 	TUNNEL_CLSS_MAC_VNI,
-/* Use MAC and VLAN from last L2 header for vport classification */
+/* Use MAC and VLAN from inner L2 header for vport classification */
 	TUNNEL_CLSS_INNER_MAC_VLAN,
-/* Use MAC from last L2 header and VNI from tunnel header for vport
- * classification
- */
+/* Use MAC from inner L2 header and VNI from tunnel header for vport classification */
 	TUNNEL_CLSS_INNER_MAC_VNI,
-/* Use MAC and VLAN from last L2 header for vport classification. If no exact
- * match, use MAC and VLAN from first L2 header for classification.
+/* Use MAC and VLAN from inner L2 header for vport classification. If no exact match, use MAC and
+ * VLAN from outermost L2 header for vport classification.
  */
 	TUNNEL_CLSS_MAC_VLAN_DUAL_STAGE,
 	MAX_TUNNEL_CLSS
@@ -1846,8 +2294,7 @@ enum tunnel_clss {
  * Ustorm non-triggering VF zone
  */
 struct ustorm_non_trigger_vf_zone {
-/* VF statistic bucket */
-	struct eth_ustorm_per_queue_stat eth_queue_stat;
+	struct eth_ustorm_per_queue_stat eth_queue_stat /* VF statistic bucket */;
 	struct regpair vf_pf_msg_addr /* VF-PF message address */;
 };
 
@@ -1865,8 +2312,7 @@ struct ustorm_trigger_vf_zone {
  * Ustorm VF zone
  */
 struct ustorm_vf_zone {
-/* non-interrupt-triggering zone */
-	struct ustorm_non_trigger_vf_zone non_trigger;
+	struct ustorm_non_trigger_vf_zone non_trigger /* non-interrupt-triggering zone */;
 	struct ustorm_trigger_vf_zone trigger /* interrupt triggering zone */;
 };
 
@@ -1875,19 +2321,18 @@ struct ustorm_vf_zone {
  * VF-PF channel data
  */
 struct vf_pf_channel_data {
-/* 0: VF-PF Channel NOT ready. Waiting for ack from PF driver. 1: VF-PF Channel
- * is ready for a new transaction.
+/* 0: VF-PF Channel NOT ready. Waiting for ack from PF driver. 1: VF-PF Channel is ready for a new
+ * transaction.
  */
 	__le32 ready;
-/* 0: VF-PF Channel is invalid because of malicious VF. 1: VF-PF Channel is
- * valid.
- */
+/* 0: VF-PF Channel is invalid because of malicious VF. 1: VF-PF Channel is valid. */
 	u8 valid;
 	u8 reserved0;
 	__le16 reserved1;
 };
 
 
+
 /*
  * Ramrod data for VF start ramrod
  */
@@ -1896,10 +2341,9 @@ struct vf_start_ramrod_data {
 /* If set, initial cleanup ack will be sent to parent PF SP event queue */
 	u8 enable_flr_ack;
 	__le16 opaque_fid /* VF opaque FID */;
-	u8 personality /* define what type of personality is new VF */;
+	u8 personality /* VFs personality (use enum personality_type) */;
 	u8 reserved[7];
-/* FP HSI version to be used by FW */
-	struct hsi_fp_ver_struct hsi_fp_ver;
+	struct hsi_fp_ver_struct hsi_fp_ver /* FP HSI version to be used by FW */;
 };
 
 
@@ -1918,12 +2362,9 @@ struct vf_stop_ramrod_data {
  * VF zone size mode.
  */
 enum vf_zone_size_mode {
-/* Default VF zone size. Up to 192 VF supported. */
-	VF_ZONE_SIZE_MODE_DEFAULT,
-/* Doubled VF zone size. Up to 96 VF supported. */
-	VF_ZONE_SIZE_MODE_DOUBLE,
-/* Quad VF zone size. Up to 48 VF supported. */
-	VF_ZONE_SIZE_MODE_QUAD,
+	VF_ZONE_SIZE_MODE_DEFAULT /* Default VF zone size. Up to 192 VF supported. */,
+	VF_ZONE_SIZE_MODE_DOUBLE /* Doubled VF zone size. Up to 96 VF supported. */,
+	VF_ZONE_SIZE_MODE_QUAD /* Quad VF zone size. Up to 48 VF supported. */,
 	MAX_VF_ZONE_SIZE_MODE
 };
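
The supported VF count halves each time the zone size doubles; a sketch
(helper name hypothetical) that relies on the enum values being 0, 1, 2:

	static inline u16 max_vfs_for_zone_mode(enum vf_zone_size_mode mode)
	{
		return 192 >> mode; /* DEFAULT -> 192, DOUBLE -> 96, QUAD -> 48 */
	}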
 
@@ -1942,8 +2383,7 @@ struct xstorm_non_trigger_vf_zone {
  * Tstorm VF zone
  */
 struct xstorm_vf_zone {
-/* non-interrupt-triggering zone */
-	struct xstorm_non_trigger_vf_zone non_trigger;
+	struct xstorm_non_trigger_vf_zone non_trigger /* non-interrupt-triggering zone */;
 };
 
 
@@ -1968,9 +2408,7 @@ struct dmae_cmd {
 /* DMA Source. 0 - PCIe, 1 - GRC (use enum dmae_cmd_src_enum) */
 #define DMAE_CMD_SRC_MASK              0x1
 #define DMAE_CMD_SRC_SHIFT             0
-/* DMA destination. 0 - None, 1 - PCIe, 2 - GRC, 3 - None
- * (use enum dmae_cmd_dst_enum)
- */
+/* DMA destination. 0 - None, 1 - PCIe, 2 - GRC, 3 - None (use enum dmae_cmd_dst_enum) */
 #define DMAE_CMD_DST_MASK              0x3
 #define DMAE_CMD_DST_SHIFT             1
 /* Completion destination. 0 - PCie, 1 - GRC (use enum dmae_cmd_c_dst_enum) */
@@ -1979,33 +2417,29 @@ struct dmae_cmd {
 /* Reset the CRC result (do not use the previous result as the seed) */
 #define DMAE_CMD_CRC_RESET_MASK        0x1
 #define DMAE_CMD_CRC_RESET_SHIFT       4
-/* Reset the source address in the next go to the same source address of the
- * previous go
- */
+/* Reset the source address in the next go to the same source address of the previous go */
 #define DMAE_CMD_SRC_ADDR_RESET_MASK   0x1
 #define DMAE_CMD_SRC_ADDR_RESET_SHIFT  5
-/* Reset the destination address in the next go to the same destination address
- * of the previous go
+/* Reset the destination address in the next go to the same destination address of the
+ * previous go
  */
 #define DMAE_CMD_DST_ADDR_RESET_MASK   0x1
 #define DMAE_CMD_DST_ADDR_RESET_SHIFT  6
-/* 0   completion function is the same as src function, 1 - 0 completion
- * function is the same as dst function (use enum dmae_cmd_comp_func_enum)
+/* 0 - completion function is the same as src function, 1 - completion function is the same as
+ * dst function (use enum dmae_cmd_comp_func_enum)
  */
 #define DMAE_CMD_COMP_FUNC_MASK        0x1
 #define DMAE_CMD_COMP_FUNC_SHIFT       7
-/* 0 - Do not write a completion word, 1 - Write a completion word
- * (use enum dmae_cmd_comp_word_en_enum)
+/* 0 - Do not write a completion word, 1 - Write a completion word (use enum
+ * dmae_cmd_comp_word_en_enum)
  */
 #define DMAE_CMD_COMP_WORD_EN_MASK     0x1
 #define DMAE_CMD_COMP_WORD_EN_SHIFT    8
-/* 0 - Do not write a CRC word, 1 - Write a CRC word
- * (use enum dmae_cmd_comp_crc_en_enum)
- */
+/* 0 - Do not write a CRC word, 1 - Write a CRC word (use enum dmae_cmd_comp_crc_en_enum) */
 #define DMAE_CMD_COMP_CRC_EN_MASK      0x1
 #define DMAE_CMD_COMP_CRC_EN_SHIFT     9
-/* The CRC word should be taken from the DMAE address space from address 9+X,
- * where X is the value in these bits.
+/* The CRC word should be taken from the DMAE address space from address 9+X, where X is the value
+ * in these bits.
  */
 #define DMAE_CMD_COMP_CRC_OFFSET_MASK  0x7
 #define DMAE_CMD_COMP_CRC_OFFSET_SHIFT 10
@@ -2013,23 +2447,20 @@ struct dmae_cmd {
 #define DMAE_CMD_RESERVED1_SHIFT       13
 #define DMAE_CMD_ENDIANITY_MODE_MASK   0x3
 #define DMAE_CMD_ENDIANITY_MODE_SHIFT  14
-/* The field specifies how the completion word is affected by PCIe read error. 0
- * Send a regular completion, 1 - Send a completion with an error indication,
- * 2 do not send a completion (use enum dmae_cmd_error_handling_enum)
+/* The field specifies how the completion word is affected by PCIe read error. 0 - Send a regular
+ * completion, 1 - Send a completion with an error indication, 2 - Do not send a completion (use
+ * enum dmae_cmd_error_handling_enum)
  */
 #define DMAE_CMD_ERR_HANDLING_MASK     0x3
 #define DMAE_CMD_ERR_HANDLING_SHIFT    16
-/* The port ID to be placed on the  RF FID  field of the GRC bus. this field is
- * used both when GRC is the destination and when it is the source of the DMAE
- * transaction.
+/* The port ID to be placed on the RF FID field of the GRC bus. This field is used both when GRC
+ * is the destination and when it is the source of the DMAE transaction.
  */
 #define DMAE_CMD_PORT_ID_MASK          0x3
 #define DMAE_CMD_PORT_ID_SHIFT         18
-/* Source PCI function number [3:0] */
-#define DMAE_CMD_SRC_PF_ID_MASK        0xF
+#define DMAE_CMD_SRC_PF_ID_MASK        0xF /* Source PCI function number [3:0] */
 #define DMAE_CMD_SRC_PF_ID_SHIFT       20
-/* Destination PCI function number [3:0] */
-#define DMAE_CMD_DST_PF_ID_MASK        0xF
+#define DMAE_CMD_DST_PF_ID_MASK        0xF /* Destination PCI function number [3:0] */
 #define DMAE_CMD_DST_PF_ID_SHIFT       24
 #define DMAE_CMD_SRC_VF_ID_VALID_MASK  0x1 /* Source VFID valid */
 #define DMAE_CMD_SRC_VF_ID_VALID_SHIFT 28
@@ -2037,10 +2468,8 @@ struct dmae_cmd {
 #define DMAE_CMD_DST_VF_ID_VALID_SHIFT 29
 #define DMAE_CMD_RESERVED2_MASK        0x3
 #define DMAE_CMD_RESERVED2_SHIFT       30
-/* PCIe source address low in bytes or GRC source address in DW */
-	__le32 src_addr_lo;
-/* PCIe source address high in bytes or reserved (if source is GRC) */
-	__le32 src_addr_hi;
+	__le32 src_addr_lo /* PCIe source address low in bytes or GRC source address in DW */;
+	__le32 src_addr_hi /* PCIe source address high in bytes or reserved (if source is GRC) */;
 /* PCIe destination address low in bytes or GRC destination address in DW */
 	__le32 dst_addr_lo;
 /* PCIe destination address high in bytes or reserved (if destination is GRC) */
@@ -2053,9 +2482,7 @@ struct dmae_cmd {
 #define DMAE_CMD_DST_VF_ID_SHIFT       8
 /* PCIe completion address low in bytes or GRC completion address in DW */
 	__le32 comp_addr_lo;
-/* PCIe completion address high in bytes or reserved (if completion address is
- * GRC)
- */
+/* PCIe completion address high in bytes or reserved (if completion address is GRC) */
 	__le32 comp_addr_hi;
 	__le32 comp_val /* Value to write to completion address */;
 	__le32 crc32 /* crc16 result */;
@@ -2115,9 +2542,7 @@ enum dmae_cmd_dst_enum {
 enum dmae_cmd_error_handling_enum {
 /* Send a regular completion (with no error indication) */
 	dmae_cmd_error_handling_send_regular_comp,
-/* Send a completion with an error indication (i.e. set bit 31 of the completion
- * word)
- */
+/* Send a completion with an error indication (i.e. set bit 31 of the completion word) */
 	dmae_cmd_error_handling_send_comp_with_err,
 	dmae_cmd_error_handling_dont_send_comp /* Do not send a completion */,
 	MAX_DMAE_CMD_ERROR_HANDLING_ENUM
@@ -2136,40 +2561,37 @@ enum dmae_cmd_src_enum {
  */
 struct dmae_params {
 	__le32 flags;
-/* If set and the source is a block of length DMAE_MAX_RW_SIZE and the
- * destination is larger, the source block will be duplicated as many
- * times as required to fill the destination block. This is used mostly
- * to write a zeroed buffer to destination address using DMA
+/* If set and the source is a block of length DMAE_MAX_RW_SIZE and the destination is larger, the
+ * source block will be duplicated as many times as required to fill the destination block. This is
+ * used mostly to write a zeroed buffer to destination address using DMA
  */
 #define DMAE_PARAMS_RW_REPL_SRC_MASK     0x1
 #define DMAE_PARAMS_RW_REPL_SRC_SHIFT    0
-/* If set, the source is a VF, and the source VF ID is taken from the
- * src_vf_id parameter.
- */
+/* If set, the source is a VF, and the source VF ID is taken from the src_vf_id parameter. */
 #define DMAE_PARAMS_SRC_VF_VALID_MASK    0x1
 #define DMAE_PARAMS_SRC_VF_VALID_SHIFT   1
-/* If set, the destination is a VF, and the destination VF ID is taken
- * from the dst_vf_id parameter.
+/* If set, the destination is a VF, and the destination VF ID is taken from the dst_vf_id
+ * parameter.
  */
 #define DMAE_PARAMS_DST_VF_VALID_MASK    0x1
 #define DMAE_PARAMS_DST_VF_VALID_SHIFT   2
-/* If set, a completion is sent to the destination function.
- * Otherwise its sent to the source function.
+/* If set, a completion is sent to the destination function. Otherwise it is sent to the source
+ * function.
  */
 #define DMAE_PARAMS_COMPLETION_DST_MASK  0x1
 #define DMAE_PARAMS_COMPLETION_DST_SHIFT 3
-/* If set, the port ID is taken from the port_id parameter.
- * Otherwise, the current port ID is used.
+/* If set, the port ID is taken from the port_id parameter. Otherwise, the current port ID is
+ * used.
  */
 #define DMAE_PARAMS_PORT_VALID_MASK      0x1
 #define DMAE_PARAMS_PORT_VALID_SHIFT     4
-/* If set, the source PF ID is taken from the src_pf_id parameter.
- * Otherwise, the current PF ID is used.
+/* If set, the source PF ID is taken from the src_pf_id parameter. Otherwise, the current PF ID is
+ * used.
  */
 #define DMAE_PARAMS_SRC_PF_VALID_MASK    0x1
 #define DMAE_PARAMS_SRC_PF_VALID_SHIFT   5
-/* If set, the destination PF ID is taken from the dst_pf_id parameter.
- * Otherwise, the current PF ID is used
+/* If set, the destination PF ID is taken from the dst_pf_id parameter. Otherwise, the current PF ID
+ * is used.
  */
 #define DMAE_PARAMS_DST_PF_VALID_MASK    0x1
 #define DMAE_PARAMS_DST_PF_VALID_SHIFT   6
@@ -2185,15 +2607,184 @@ struct dmae_params {
 };
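
Building the flags word goes through the usual mask/shift macros; a minimal
sketch, assuming ecore's SET_FIELD and OSAL_CPU_TO_LE32 helpers:

	struct dmae_params params = { 0 };
	u32 flags = 0;

	SET_FIELD(flags, DMAE_PARAMS_RW_REPL_SRC, 1); /* replicate source block */
	SET_FIELD(flags, DMAE_PARAMS_COMPLETION_DST, 1); /* complete at destination */
	params.flags = OSAL_CPU_TO_LE32(flags);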
 
 
+struct e4_mstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e4_ystorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
+
+struct e5_mstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	u8 flags0;
+#define E5_MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E5_MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E5_MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E5_MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E5_MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E5_MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
+
+
+
+struct e5_ystorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	u8 flags0;
+#define E5_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E5_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define E5_YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E5_YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E5_YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E5_YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
+
 struct fw_asserts_ram_section {
 /* The offset of the section in the RAM in RAM lines (64-bit units) */
 	__le16 section_ram_line_offset;
-/* The size of the section in RAM lines (64-bit units) */
-	__le16 section_ram_line_size;
-/* The offset of the asserts list within the section in dwords */
-	u8 list_dword_offset;
-/* The size of an assert list element in dwords */
-	u8 list_element_dword_size;
+	__le16 section_ram_line_size /* The size of the section in RAM lines (64-bit units) */;
+	u8 list_dword_offset /* The offset of the asserts list within the section in dwords */;
+	u8 list_element_dword_size /* The size of an assert list element in dwords */;
 	u8 list_num_elements /* The number of elements in the asserts list */;
 /* The offset of the next list index field within the section in dwords */
 	u8 list_next_index_dword_offset;
@@ -2225,46 +2816,12 @@ struct fw_info {
 
 struct fw_info_location {
 	__le32 grc_addr /* GRC address where the fw_info struct is located. */;
-/* Size of the fw_info structure (thats located at the grc_addr). */
-	__le32 size;
-};
-
-
-/* DMAE parameters */
-struct ecore_dmae_params {
-	u32 flags;
-/* If QED_DMAE_PARAMS_RW_REPL_SRC flag is set and the
- * source is a block of length DMAE_MAX_RW_SIZE and the
- * destination is larger, the source block will be duplicated as
- * many times as required to fill the destination block. This is
- * used mostly to write a zeroed buffer to destination address
- * using DMA
- */
-#define ECORE_DMAE_PARAMS_RW_REPL_SRC_MASK        0x1
-#define ECORE_DMAE_PARAMS_RW_REPL_SRC_SHIFT       0
-#define ECORE_DMAE_PARAMS_SRC_VF_VALID_MASK       0x1
-#define ECORE_DMAE_PARAMS_SRC_VF_VALID_SHIFT      1
-#define ECORE_DMAE_PARAMS_DST_VF_VALID_MASK       0x1
-#define ECORE_DMAE_PARAMS_DST_VF_VALID_SHIFT      2
-#define ECORE_DMAE_PARAMS_COMPLETION_DST_MASK     0x1
-#define ECORE_DMAE_PARAMS_COMPLETION_DST_SHIFT    3
-#define ECORE_DMAE_PARAMS_PORT_VALID_MASK         0x1
-#define ECORE_DMAE_PARAMS_PORT_VALID_SHIFT        4
-#define ECORE_DMAE_PARAMS_SRC_PF_VALID_MASK       0x1
-#define ECORE_DMAE_PARAMS_SRC_PF_VALID_SHIFT      5
-#define ECORE_DMAE_PARAMS_DST_PF_VALID_MASK       0x1
-#define ECORE_DMAE_PARAMS_DST_PF_VALID_SHIFT      6
-#define ECORE_DMAE_PARAMS_RESERVED_MASK           0x1FFFFFF
-#define ECORE_DMAE_PARAMS_RESERVED_SHIFT          7
-	u8 src_vfid;
-	u8 dst_vfid;
-	u8 port_id;
-	u8 src_pfid;
-	u8 dst_pfid;
-	u8 reserved1;
-	__le16 reserved2;
+	__le32 size /* Size of the fw_info structure (that is located at the grc_addr). */;
 };
 
+
+
+
 /*
  * IGU cleanup command
  */
@@ -2272,13 +2829,11 @@ struct igu_cleanup {
 	__le32 sb_id_and_flags;
 #define IGU_CLEANUP_RESERVED0_MASK     0x7FFFFFF
 #define IGU_CLEANUP_RESERVED0_SHIFT    0
-/* cleanup clear - 0, set - 1 */
-#define IGU_CLEANUP_CLEANUP_SET_MASK   0x1
+#define IGU_CLEANUP_CLEANUP_SET_MASK   0x1 /* cleanup clear - 0, set - 1 */
 #define IGU_CLEANUP_CLEANUP_SET_SHIFT  27
 #define IGU_CLEANUP_CLEANUP_TYPE_MASK  0x7
 #define IGU_CLEANUP_CLEANUP_TYPE_SHIFT 28
-/* must always be set (use enum command_type_bit) */
-#define IGU_CLEANUP_COMMAND_TYPE_MASK  0x1U
+#define IGU_CLEANUP_COMMAND_TYPE_MASK  0x1U /* must always be set (use enum command_type_bit) */
 #define IGU_CLEANUP_COMMAND_TYPE_SHIFT 31
 	__le32 reserved1;
 };
@@ -2303,8 +2858,7 @@ struct igu_command_reg_ctrl {
 #define IGU_COMMAND_REG_CTRL_PXP_BAR_ADDR_SHIFT 0
 #define IGU_COMMAND_REG_CTRL_RESERVED_MASK      0x7
 #define IGU_COMMAND_REG_CTRL_RESERVED_SHIFT     12
-/* command typ: 0 - read, 1 - write */
-#define IGU_COMMAND_REG_CTRL_COMMAND_TYPE_MASK  0x1
+#define IGU_COMMAND_REG_CTRL_COMMAND_TYPE_MASK  0x1 /* command type: 0 - read, 1 - write */
 #define IGU_COMMAND_REG_CTRL_COMMAND_TYPE_SHIFT 15
 };
 
@@ -2348,80 +2902,39 @@ struct igu_msix_vector {
 };
 
 
-struct mstorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
 /*
  * per encapsulation type enabling flags
  */
-struct prs_reg_encapsulation_type_en {
+struct prs_encapsulation_type_en_flags {
 	u8 flags;
 /* Enable bit for Ethernet-over-GRE (L2 GRE) encapsulation. */
-#define PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_MASK     0x1
-#define PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT    0
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_ETH_OVER_GRE_ENABLE_MASK     0x1
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_ETH_OVER_GRE_ENABLE_SHIFT    0
 /* Enable bit for IP-over-GRE (IP GRE) encapsulation. */
-#define PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GRE_ENABLE_MASK      0x1
-#define PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GRE_ENABLE_SHIFT     1
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GRE_ENABLE_MASK      0x1
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GRE_ENABLE_SHIFT     1
 /* Enable bit for VXLAN encapsulation. */
-#define PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_MASK            0x1
-#define PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT           2
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_VXLAN_ENABLE_MASK            0x1
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_VXLAN_ENABLE_SHIFT           2
 /* Enable bit for T-Tag encapsulation. */
-#define PRS_REG_ENCAPSULATION_TYPE_EN_T_TAG_ENABLE_MASK            0x1
-#define PRS_REG_ENCAPSULATION_TYPE_EN_T_TAG_ENABLE_SHIFT           3
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_T_TAG_ENABLE_MASK            0x1
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_T_TAG_ENABLE_SHIFT           3
 /* Enable bit for Ethernet-over-GENEVE (L2 GENEVE) encapsulation. */
-#define PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_MASK  0x1
-#define PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT 4
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_ETH_OVER_GENEVE_ENABLE_MASK  0x1
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_ETH_OVER_GENEVE_ENABLE_SHIFT 4
 /* Enable bit for IP-over-GENEVE (IP GENEVE) encapsulation. */
-#define PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GENEVE_ENABLE_MASK   0x1
-#define PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GENEVE_ENABLE_SHIFT  5
-#define PRS_REG_ENCAPSULATION_TYPE_EN_RESERVED_MASK                0x3
-#define PRS_REG_ENCAPSULATION_TYPE_EN_RESERVED_SHIFT               6
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GENEVE_ENABLE_MASK   0x1
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GENEVE_ENABLE_SHIFT  5
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_RESERVED_MASK                0x3
+#define PRS_ENCAPSULATION_TYPE_EN_FLAGS_RESERVED_SHIFT               6
 };
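
Illustrative use of the renamed structure, again assuming ecore's SET_FIELD
helper (a sketch, not taken from the patch):

	struct prs_encapsulation_type_en_flags en = { 0 };

	SET_FIELD(en.flags, PRS_ENCAPSULATION_TYPE_EN_FLAGS_VXLAN_ENABLE, 1);
	SET_FIELD(en.flags, PRS_ENCAPSULATION_TYPE_EN_FLAGS_ETH_OVER_GENEVE_ENABLE, 1);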
 
 
 enum pxp_tph_st_hint {
 	TPH_ST_HINT_BIDIR /* Read/Write access by Host and Device */,
 	TPH_ST_HINT_REQUESTER /* Read/Write access by Device */,
-/* Device Write and Host Read, or Host Write and Device Read */
-	TPH_ST_HINT_TARGET,
-/* Device Write and Host Read, or Host Write and Device Read - with temporal
- * reuse
- */
+	TPH_ST_HINT_TARGET /* Device Write and Host Read, or Host Write and Device Read */,
+/* Device Write and Host Read, or Host Write and Device Read - with temporal reuse */
 	TPH_ST_HINT_TARGET_PRIO,
 	MAX_PXP_TPH_ST_HINT
 };
@@ -2480,25 +2993,48 @@ struct qm_rf_opportunistic_mask {
 
 
 /*
- * QM hardware structure of QM map memory
+ * E4 QM hardware structure of QM map memory
  */
-struct qm_rf_pq_map {
+struct qm_rf_pq_map_e4 {
 	__le32 reg;
-#define QM_RF_PQ_MAP_PQ_VALID_MASK          0x1 /* PQ active */
-#define QM_RF_PQ_MAP_PQ_VALID_SHIFT         0
-#define QM_RF_PQ_MAP_RL_ID_MASK             0xFF /* RL ID */
-#define QM_RF_PQ_MAP_RL_ID_SHIFT            1
+#define QM_RF_PQ_MAP_E4_PQ_VALID_MASK          0x1 /* PQ active */
+#define QM_RF_PQ_MAP_E4_PQ_VALID_SHIFT         0
+#define QM_RF_PQ_MAP_E4_RL_ID_MASK             0xFF /* RL ID */
+#define QM_RF_PQ_MAP_E4_RL_ID_SHIFT            1
 /* the first PQ associated with the VPORT and VOQ of this PQ */
-#define QM_RF_PQ_MAP_VP_PQ_ID_MASK          0x1FF
-#define QM_RF_PQ_MAP_VP_PQ_ID_SHIFT         9
-#define QM_RF_PQ_MAP_VOQ_MASK               0x1F /* VOQ */
-#define QM_RF_PQ_MAP_VOQ_SHIFT              18
-#define QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_MASK  0x3 /* WRR weight */
-#define QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_SHIFT 23
-#define QM_RF_PQ_MAP_RL_VALID_MASK          0x1 /* RL active */
-#define QM_RF_PQ_MAP_RL_VALID_SHIFT         25
-#define QM_RF_PQ_MAP_RESERVED_MASK          0x3F
-#define QM_RF_PQ_MAP_RESERVED_SHIFT         26
+#define QM_RF_PQ_MAP_E4_VP_PQ_ID_MASK          0x1FF
+#define QM_RF_PQ_MAP_E4_VP_PQ_ID_SHIFT         9
+#define QM_RF_PQ_MAP_E4_VOQ_MASK               0x1F /* VOQ */
+#define QM_RF_PQ_MAP_E4_VOQ_SHIFT              18
+#define QM_RF_PQ_MAP_E4_WRR_WEIGHT_GROUP_MASK  0x3 /* WRR weight */
+#define QM_RF_PQ_MAP_E4_WRR_WEIGHT_GROUP_SHIFT 23
+#define QM_RF_PQ_MAP_E4_RL_VALID_MASK          0x1 /* RL active */
+#define QM_RF_PQ_MAP_E4_RL_VALID_SHIFT         25
+#define QM_RF_PQ_MAP_E4_RESERVED_MASK          0x3F
+#define QM_RF_PQ_MAP_E4_RESERVED_SHIFT         26
+};
+
+
+/*
+ * E5 QM hardware structure of QM map memory
+ */
+struct qm_rf_pq_map_e5 {
+	__le32 reg;
+#define QM_RF_PQ_MAP_E5_PQ_VALID_MASK          0x1 /* PQ active */
+#define QM_RF_PQ_MAP_E5_PQ_VALID_SHIFT         0
+#define QM_RF_PQ_MAP_E5_RL_ID_MASK             0x1FF /* RL ID */
+#define QM_RF_PQ_MAP_E5_RL_ID_SHIFT            1
+/* the first PQ associated with the VPORT and VOQ of this PQ */
+#define QM_RF_PQ_MAP_E5_VP_PQ_ID_MASK          0x1FF
+#define QM_RF_PQ_MAP_E5_VP_PQ_ID_SHIFT         10
+#define QM_RF_PQ_MAP_E5_VOQ_MASK               0x3F /* VOQ */
+#define QM_RF_PQ_MAP_E5_VOQ_SHIFT              19
+#define QM_RF_PQ_MAP_E5_WRR_WEIGHT_GROUP_MASK  0x3 /* WRR weight */
+#define QM_RF_PQ_MAP_E5_WRR_WEIGHT_GROUP_SHIFT 25
+#define QM_RF_PQ_MAP_E5_RL_VALID_MASK          0x1 /* RL active */
+#define QM_RF_PQ_MAP_E5_RL_VALID_SHIFT         27
+#define QM_RF_PQ_MAP_E5_RESERVED_MASK          0xF
+#define QM_RF_PQ_MAP_E5_RESERVED_SHIFT         28
 };
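
The E4 and E5 map words carry the same logical fields, but E5 widens RL_ID
(0xFF to 0x1FF) and VOQ (0x1F to 0x3F), which pushes every subsequent shift
up. Code that goes through the field macros absorbs the relayout; a hedged
packing sketch using the HSI_SET_FIELD stand-in from above (struct
qm_pq_map_params and the function are hypothetical):

    struct qm_pq_map_params {       /* hypothetical caller-side bundle */
            u32 rl_id, vp_pq_id, voq, wrr_group, rl_valid;
    };

    static u32 qm_pack_pq_map_e4(const struct qm_pq_map_params *p)
    {
            u32 reg = 0;    /* host order; convert to __le32 before writing */

            HSI_SET_FIELD(reg, QM_RF_PQ_MAP_E4_PQ_VALID, 1);
            HSI_SET_FIELD(reg, QM_RF_PQ_MAP_E4_RL_ID, p->rl_id);
            HSI_SET_FIELD(reg, QM_RF_PQ_MAP_E4_VP_PQ_ID, p->vp_pq_id);
            HSI_SET_FIELD(reg, QM_RF_PQ_MAP_E4_VOQ, p->voq);
            HSI_SET_FIELD(reg, QM_RF_PQ_MAP_E4_WRR_WEIGHT_GROUP, p->wrr_group);
            HSI_SET_FIELD(reg, QM_RF_PQ_MAP_E4_RL_VALID, p->rl_valid);
            return reg;
    }

The E5 variant is identical except for the E5 macro prefix; the wider masks
and shifts live entirely in the field descriptors.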
 
 
@@ -2524,8 +3060,7 @@ struct sdm_agg_int_comp_params {
  */
 struct sdm_op_gen {
 	__le32 command;
-/* completion parameters 0-15 */
-#define SDM_OP_GEN_COMP_PARAM_MASK  0xFFFF
+#define SDM_OP_GEN_COMP_PARAM_MASK  0xFFFF /* completion parameters 0-15 */
 #define SDM_OP_GEN_COMP_PARAM_SHIFT 0
 #define SDM_OP_GEN_COMP_TYPE_MASK   0xF /* completion type 16-19 */
 #define SDM_OP_GEN_COMP_TYPE_SHIFT  16
@@ -2533,50 +3068,6 @@ struct sdm_op_gen {
 #define SDM_OP_GEN_RESERVED_SHIFT   20
 };
 
-struct ystorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	u8 byte2 /* byte2 */;
-	u8 byte3 /* byte3 */;
-	__le16 word0 /* word0 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le16 word1 /* word1 */;
-	__le16 word2 /* word2 */;
-	__le16 word3 /* word3 */;
-	__le16 word4 /* word4 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-};
-
 /*********/
 /* DEBUG */
 /*********/
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
index c5ef67f84..e18595d62 100644
--- a/drivers/net/qede/base/ecore_hsi_debug_tools.h
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_HSI_DEBUG_TOOLS__
 #define __ECORE_HSI_DEBUG_TOOLS__
 /****************************************/
@@ -16,6 +16,9 @@ enum block_id {
 	BLOCK_MISCS,
 	BLOCK_MISC,
 	BLOCK_DBU,
+	BLOCK_PCIE_CTL,
+	BLOCK_PCIE_PHY_TOP,
+	BLOCK_PGL2PEM,
 	BLOCK_PGLUE_B,
 	BLOCK_CNIG,
 	BLOCK_CPMU,
@@ -65,6 +68,8 @@ enum block_id {
 	BLOCK_MULD,
 	BLOCK_YULD,
 	BLOCK_XYLD,
+	BLOCK_PTLD,
+	BLOCK_YPLD,
 	BLOCK_PRM,
 	BLOCK_PBF_PB1,
 	BLOCK_PBF_PB2,
@@ -78,6 +83,12 @@ enum block_id {
 	BLOCK_TCFC,
 	BLOCK_IGU,
 	BLOCK_CAU,
+	BLOCK_RGFS,
+	BLOCK_RGFC,
+	BLOCK_RGSRC,
+	BLOCK_TGFS,
+	BLOCK_TGFC,
+	BLOCK_TGSRC,
 	BLOCK_UMAC,
 	BLOCK_XMAC,
 	BLOCK_MSTAT,
@@ -93,17 +104,45 @@ enum block_id {
 	BLOCK_LED,
 	BLOCK_AVS_WRAP,
 	BLOCK_PXPREQBUS,
+	BLOCK_CFC_PD,
+	BLOCK_QM_PD,
+	BLOCK_XS_PD,
+	BLOCK_YS_PD,
+	BLOCK_PS_PD,
+	BLOCK_US_PD,
+	BLOCK_TS_PD,
+	BLOCK_MS_PD,
+	BLOCK_RX_PD,
+	BLOCK_TX_PD,
+	BLOCK_PXP_PD,
+	BLOCK_HOST_PD,
+	BLOCK_NM_PD,
+	BLOCK_NMC_PD,
+	BLOCK_NW_PD,
+	BLOCK_BMB_PD,
 	BLOCK_BAR0_MAP,
 	BLOCK_MCP_FIO,
 	BLOCK_LAST_INIT,
+	BLOCK_PRS_FC_B,
+	BLOCK_PRS_FC_A,
 	BLOCK_PRS_FC,
+	BLOCK_TSEM_HC,
+	BLOCK_MSEM_HC,
+	BLOCK_USEM_HC,
+	BLOCK_XSEM_HC,
+	BLOCK_YSEM_HC,
+	BLOCK_PSEM_HC,
+	BLOCK_PBF_FC_B,
 	BLOCK_PBF_FC,
 	BLOCK_NIG_LB_FC,
+	BLOCK_NIG_RX,
 	BLOCK_NIG_LB_FC_PLLH,
 	BLOCK_NIG_TX_FC_PLLH,
 	BLOCK_NIG_TX_FC,
 	BLOCK_NIG_RX_FC_PLLH,
 	BLOCK_NIG_RX_FC,
+	BLOCK_NIG_LB,
+	BLOCK_NIG_TX,
 	MAX_BLOCK_ID
 };
 
@@ -140,15 +179,12 @@ enum bin_dbg_buffer_type {
  */
 struct dbg_attn_bit_mapping {
 	u16 data;
-/* The index of an attention in the blocks attentions list
- * (if is_unused_bit_cnt=0), or a number of consecutive unused attention bits
- * (if is_unused_bit_cnt=1)
+/* The index of an attention in the blocks attentions list (if is_unused_bit_cnt=0), or a number of
+ * consecutive unused attention bits (if is_unused_bit_cnt=1)
  */
 #define DBG_ATTN_BIT_MAPPING_VAL_MASK                0x7FFF
 #define DBG_ATTN_BIT_MAPPING_VAL_SHIFT               0
-/* if set, the val field indicates the number of consecutive unused attention
- * bits
- */
+/* if set, the val field indicates the number of consecutive unused attention bits */
 #define DBG_ATTN_BIT_MAPPING_IS_UNUSED_BIT_CNT_MASK  0x1
 #define DBG_ATTN_BIT_MAPPING_IS_UNUSED_BIT_CNT_SHIFT 15
 };
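
The 16-bit mapping word is effectively a tagged value: VAL holds either an
attention index or a run length of unused bits, discriminated by
IS_UNUSED_BIT_CNT. A decode sketch with the stand-in helpers introduced
earlier (walk_attn_bits and its callback are hypothetical):

    /* Invoke cb(reg_bit, attn_idx) for every used attention bit described
     * by an array of n mapping words.
     */
    static void walk_attn_bits(const struct dbg_attn_bit_mapping *map,
                               unsigned int n,
                               void (*cb)(unsigned int reg_bit,
                                          unsigned int attn_idx))
    {
            unsigned int i, reg_bit = 0;

            for (i = 0; i < n; i++) {
                    u16 val = HSI_GET_FIELD(map[i].data,
                                            DBG_ATTN_BIT_MAPPING_VAL);

                    if (HSI_GET_FIELD(map[i].data,
                                      DBG_ATTN_BIT_MAPPING_IS_UNUSED_BIT_CNT)) {
                            reg_bit += val; /* run of 'val' unused bits */
                    } else {
                            cb(reg_bit, val); /* attention index 'val' */
                            reg_bit++;
                    }
            }
    }
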
@@ -158,15 +194,13 @@ struct dbg_attn_bit_mapping {
  * Attention block per-type data
  */
 struct dbg_attn_block_type_data {
-/* Offset of this block attention names in the debug attention name offsets
- * array
- */
+/* Offset of this block attention names in the debug attention name offsets array */
 	u16 names_offset;
 	u16 reserved1;
 	u8 num_regs /* Number of attention registers in this block */;
 	u8 reserved2;
-/* Offset of this blocks attention registers in the attention registers array
- * (in dbg_attn_reg units)
+/* Offset of this blocks attention registers in the attention registers array (in dbg_attn_reg
+ * units)
  */
 	u16 regs_offset;
 };
@@ -175,9 +209,7 @@ struct dbg_attn_block_type_data {
  * Block attentions
  */
 struct dbg_attn_block {
-/* attention block per-type data. Count must match the number of elements in
- * dbg_attn_type.
- */
+/* attention block per-type data. Count must match the number of elements in dbg_attn_type. */
 	struct dbg_attn_block_type_data per_type_data[2];
 };
 
@@ -193,8 +225,8 @@ struct dbg_attn_reg_result {
 /* Number of attention indexes in this register */
 #define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_MASK  0xFF
 #define DBG_ATTN_REG_RESULT_NUM_REG_ATTN_SHIFT 24
-/* The offset of this registers attentions within the blocks attentions list
- * (a value in the range 0..number of block attentions-1)
+/* The offset of this registers attentions within the blocks attentions list (a value in the range
+ * 0..number of block attentions-1)
  */
 	u16 block_attn_offset;
 	u16 reserved;
@@ -208,19 +240,14 @@ struct dbg_attn_reg_result {
 struct dbg_attn_block_result {
 	u8 block_id /* Registers block ID */;
 	u8 data;
-/* Value from dbg_attn_type enum */
-#define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_MASK  0x3
+#define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_MASK  0x3 /* Value from dbg_attn_type enum */
 #define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_SHIFT 0
-/* Number of registers in block in which at least one attention bit is set */
+/* Number of registers in the block in which at least one attention bit is set */
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_MASK   0x3F
 #define DBG_ATTN_BLOCK_RESULT_NUM_REGS_SHIFT  2
-/* Offset of this registers block attention names in the attention name offsets
- * array
- */
+/* Offset of this registers block attention names in the attention name offsets array */
 	u16 names_offset;
-/* result data for each register in the block in which at least one attention
- * bit is set
- */
+/* result data for each register in the block in which at least one attention bit is set */
 	struct dbg_attn_reg_result reg_results[15];
 };
 
@@ -234,9 +261,7 @@ struct dbg_mode_hdr {
 /* indicates if a mode expression should be evaluated (0/1) */
 #define DBG_MODE_HDR_EVAL_MODE_MASK         0x1
 #define DBG_MODE_HDR_EVAL_MODE_SHIFT        0
-/* offset (in bytes) in modes expression buffer. valid only if eval_mode is
- * set.
- */
+/* offset (in bytes) in modes expression buffer. valid only if eval_mode is set. */
 #define DBG_MODE_HDR_MODES_BUF_OFFSET_MASK  0x7FFF
 #define DBG_MODE_HDR_MODES_BUF_OFFSET_SHIFT 1
 };
@@ -246,19 +271,17 @@ struct dbg_mode_hdr {
  */
 struct dbg_attn_reg {
 	struct dbg_mode_hdr mode /* Mode header */;
-/* The offset of this registers attentions within the blocks attentions list
- * (a value in the range 0..number of block attentions-1)
+/* The offset of this registers attentions within the blocks attentions list (a value in the range
+ * 0..number of block attentions-1)
  */
 	u16 block_attn_offset;
 	u32 data;
 /* STS attention register GRC address (in dwords) */
 #define DBG_ATTN_REG_STS_ADDRESS_MASK   0xFFFFFF
 #define DBG_ATTN_REG_STS_ADDRESS_SHIFT  0
-/* Number of attention in this register */
-#define DBG_ATTN_REG_NUM_REG_ATTN_MASK  0xFF
+#define DBG_ATTN_REG_NUM_REG_ATTN_MASK  0xFF /* Number of attentions in this register */
 #define DBG_ATTN_REG_NUM_REG_ATTN_SHIFT 24
-/* STS_CLR attention register GRC address (in dwords) */
-	u32 sts_clr_address;
+	u32 sts_clr_address /* STS_CLR attention register GRC address (in dwords) */;
 	u32 mask_address /* MASK attention register GRC address (in dwords) */;
 };
 
@@ -278,9 +301,15 @@ enum dbg_attn_type {
  * Block debug data
  */
 struct dbg_block {
-	u8 name[15] /* Block name */;
+	u8 name[16] /* Block name */;
+	u8 flags;
+#define DBG_BLOCK_IS_PD_BLOCK_MASK  0x1 /* Indicates if this block is a PD block (0/1). */
+#define DBG_BLOCK_IS_PD_BLOCK_SHIFT 0
+#define DBG_BLOCK_RESERVED0_MASK    0x7F
+#define DBG_BLOCK_RESERVED0_SHIFT   1
 /* The letter (char) of the associated Storm, or 0 if no associated Storm. */
 	u8 associated_storm_letter;
+	u16 reserved1;
 };
 
 
@@ -295,64 +324,65 @@ struct dbg_block_chip {
 /* Indicates if this block has a reset register (0/1). */
 #define DBG_BLOCK_CHIP_HAS_RESET_REG_MASK        0x1
 #define DBG_BLOCK_CHIP_HAS_RESET_REG_SHIFT       1
-/* Indicates if this block should be taken out of reset before GRC Dump (0/1).
- * Valid only if has_reset_reg is set.
+/* Indicates if this block should be taken out of reset before GRC Dump (0/1). Valid only if
+ * has_reset_reg is set.
  */
 #define DBG_BLOCK_CHIP_UNRESET_BEFORE_DUMP_MASK  0x1
 #define DBG_BLOCK_CHIP_UNRESET_BEFORE_DUMP_SHIFT 2
 /* Indicates if this block has a debug bus (0/1). */
 #define DBG_BLOCK_CHIP_HAS_DBG_BUS_MASK          0x1
 #define DBG_BLOCK_CHIP_HAS_DBG_BUS_SHIFT         3
-/* Indicates if this block has a latency events debug line (0/1). Valid only
- * if has_dbg_bus is set.
+/* Indicates if this block has a latency events debug line (0/1). Valid only if has_dbg_bus is
+ * set.
+ */
 #define DBG_BLOCK_CHIP_HAS_LATENCY_EVENTS_MASK   0x1
 #define DBG_BLOCK_CHIP_HAS_LATENCY_EVENTS_SHIFT  4
 #define DBG_BLOCK_CHIP_RESERVED0_MASK            0x7
 #define DBG_BLOCK_CHIP_RESERVED0_SHIFT           5
-/* The DBG block client ID of this block/chip. Valid only if has_dbg_bus is
- * set.
- */
+/* The DBG block client ID of this block/chip. Valid only if has_dbg_bus is set. */
 	u8 dbg_client_id;
-/* The ID of the reset register of this block/chip in the dbg_reset_reg
- * array.
- */
+/* The ID of the reset register of this block/chip in the dbg_reset_reg array. */
 	u8 reset_reg_id;
-/* The bit offset of this block/chip in the reset register. Valid only if
- * has_reset_reg is set.
- */
+/* The bit offset of this block/chip in the reset register. Valid only if has_reset_reg is set. */
 	u8 reset_reg_bit_offset;
+/* The ID of the PD block that this block is connected to. Valid only if has_dbg_bus is set. */
+	u8 pd_block_id;
+/* The bit offset of this block/chip in the shift_dbg_high register of this blocks PD block. Valid
+ * only if has_dbg_bus is set.
+ */
+	u8 pd_shift_reg_bit_offset;
 	struct dbg_mode_hdr dbg_bus_mode /* Mode header */;
-	u16 reserved1;
-	u8 reserved2;
-/* Number of Debug Bus lines in this block/chip (excluding signature and latency
- * events). Valid only if has_dbg_bus is set.
+	u8 reserved1;
+/* Number of Debug Bus lines in this block/chip (excluding signature and latency events). Valid only
+ * if has_dbg_bus is set.
  */
 	u8 num_of_dbg_bus_lines;
-/* Offset of this block/chip Debug Bus lines in the Debug Bus lines array. Valid
- * only if has_dbg_bus is set.
+/* Offset of this block/chip Debug Bus lines in the Debug Bus lines array. Valid only if has_dbg_bus
+ * is set.
  */
 	u16 dbg_bus_lines_offset;
-/* GRC address of the Debug Bus dbg_select register (in dwords). Valid only if
- * has_dbg_bus is set.
+/* GRC address of the Debug Bus dbg_select register (in dwords). Valid only if has_dbg_bus is
+ * set.
+ */
 	u32 dbg_select_reg_addr;
-/* GRC address of the Debug Bus dbg_dword_enable register (in dwords). Valid
- * only if has_dbg_bus is set.
+/* GRC address of the Debug Bus dbg_dword_enable register (in dwords). Valid only if has_dbg_bus is
+ * set.
  */
 	u32 dbg_dword_enable_reg_addr;
-/* GRC address of the Debug Bus dbg_shift register (in dwords). Valid only if
- * has_dbg_bus is set.
- */
+/* GRC address of the Debug Bus dbg_shift register (in dwords). Valid only if has_dbg_bus is set. */
 	u32 dbg_shift_reg_addr;
-/* GRC address of the Debug Bus dbg_force_valid register (in dwords). Valid only
- * if has_dbg_bus is set.
+/* GRC address of the Debug Bus dbg_force_valid register (in dwords). Valid only if has_dbg_bus is
+ * set.
  */
 	u32 dbg_force_valid_reg_addr;
-/* GRC address of the Debug Bus dbg_force_frame register (in dwords). Valid only
- * if has_dbg_bus is set.
+/* GRC address of the Debug Bus dbg_force_frame register (in dwords). Valid only if has_dbg_bus is
+ * set.
  */
 	u32 dbg_force_frame_reg_addr;
+/* GRC address of the Debug Bus shift_dbg_high register of the PD block of this block/chip (in
+ * dwords). Valid only if is_pd_block is set.
+ */
+	u32 pd_shift_reg_addr;
 };
 
 
@@ -360,12 +390,9 @@ struct dbg_block_chip {
  * Chip-specific block user debug data
  */
 struct dbg_block_chip_user {
-/* Number of debug bus lines in this block (excluding signature and latency
- * events).
- */
+/* Number of debug bus lines in this block (excluding signature and latency events). */
 	u8 num_of_dbg_bus_lines;
-/* Indicates if this block has a latency events debug line (0/1). */
-	u8 has_latency_events;
+	u8 has_latency_events /* Indicates if this block has a latency events debug line (0/1). */;
 /* Offset of this blocks lines in the debug bus line name offsets array. */
 	u16 names_offset;
 };
@@ -384,17 +411,16 @@ struct dbg_block_user {
  */
 struct dbg_bus_line {
 	u8 data;
-/* Number of groups in the line (0-3) */
-#define DBG_BUS_LINE_NUM_OF_GROUPS_MASK  0xF
+#define DBG_BUS_LINE_NUM_OF_GROUPS_MASK  0xF /* Number of groups in the line (0-3) */
 #define DBG_BUS_LINE_NUM_OF_GROUPS_SHIFT 0
 /* Indicates if this is a 128b line (0) or a 256b line (1). */
 #define DBG_BUS_LINE_IS_256B_MASK        0x1
 #define DBG_BUS_LINE_IS_256B_SHIFT       4
 #define DBG_BUS_LINE_RESERVED_MASK       0x7
 #define DBG_BUS_LINE_RESERVED_SHIFT      5
-/* Four 2-bit values, indicating the size of each group minus 1 (i.e.
- * value=0 means size=1, value=1 means size=2, etc), starting from lsb.
- * The sizes are in dwords (if is_256b=0) or in qwords (if is_256b=1).
+/* Four 2-bit values, indicating the size of each group minus 1 (i.e. value=0 means size=1, value=1
+ * means size=2, etc), starting from lsb. The sizes are in dwords (if is_256b=0) or in qwords (if
+ * is_256b=1).
  */
 	u8 group_sizes;
 };
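
group_sizes packs four 2-bit size-minus-one values, and whether they count
dwords or qwords depends on IS_256B. A decode sketch (the function is
hypothetical; helpers as above):

    /* Size of group 'idx' (0-3) of a debug bus line, normalized to dwords */
    static unsigned int dbg_bus_line_group_size(const struct dbg_bus_line *line,
                                                unsigned int idx)
    {
            unsigned int units = ((line->group_sizes >> (idx * 2)) & 0x3) + 1;

            /* units are qwords on a 256b line, dwords on a 128b one */
            return HSI_GET_FIELD(line->data, DBG_BUS_LINE_IS_256B) ?
                   units * 2 : units;
    }
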
@@ -415,17 +441,14 @@ struct dbg_dump_cond_hdr {
  */
 struct dbg_dump_mem {
 	u32 dword0;
-/* register address (in dwords) */
-#define DBG_DUMP_MEM_ADDRESS_MASK       0xFFFFFF
+#define DBG_DUMP_MEM_ADDRESS_MASK       0xFFFFFF /* register address (in dwords) */
 #define DBG_DUMP_MEM_ADDRESS_SHIFT      0
 #define DBG_DUMP_MEM_MEM_GROUP_ID_MASK  0xFF /* memory group ID */
 #define DBG_DUMP_MEM_MEM_GROUP_ID_SHIFT 24
 	u32 dword1;
-/* register size (in dwords) */
-#define DBG_DUMP_MEM_LENGTH_MASK        0xFFFFFF
+#define DBG_DUMP_MEM_LENGTH_MASK        0xFFFFFF /* register size (in dwords) */
 #define DBG_DUMP_MEM_LENGTH_SHIFT       0
-/* indicates if the register is wide-bus */
-#define DBG_DUMP_MEM_WIDE_BUS_MASK      0x1
+#define DBG_DUMP_MEM_WIDE_BUS_MASK      0x1 /* indicates if the register is wide-bus */
 #define DBG_DUMP_MEM_WIDE_BUS_SHIFT     24
 #define DBG_DUMP_MEM_RESERVED_MASK      0x7F
 #define DBG_DUMP_MEM_RESERVED_SHIFT     25
@@ -437,11 +460,9 @@ struct dbg_dump_mem {
  */
 struct dbg_dump_reg {
 	u32 data;
-/* register address (in dwords) */
 #define DBG_DUMP_REG_ADDRESS_MASK   0x7FFFFF /* register address (in dwords) */
 #define DBG_DUMP_REG_ADDRESS_SHIFT  0
-/* indicates if the register is wide-bus */
-#define DBG_DUMP_REG_WIDE_BUS_MASK  0x1
+#define DBG_DUMP_REG_WIDE_BUS_MASK  0x1 /* indicates if the register is wide-bus */
 #define DBG_DUMP_REG_WIDE_BUS_SHIFT 23
 #define DBG_DUMP_REG_LENGTH_MASK    0xFF /* register size (in dwords) */
 #define DBG_DUMP_REG_LENGTH_SHIFT   24
@@ -475,14 +496,11 @@ struct dbg_idle_chk_cond_hdr {
  */
 struct dbg_idle_chk_cond_reg {
 	u32 data;
-/* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0x7FFFFF
+#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK   0x7FFFFF /* Register GRC address (in dwords) */
 #define DBG_IDLE_CHK_COND_REG_ADDRESS_SHIFT  0
-/* indicates if the register is wide-bus */
-#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_COND_REG_WIDE_BUS_MASK  0x1 /* indicates if the register is wide-bus */
 #define DBG_IDLE_CHK_COND_REG_WIDE_BUS_SHIFT 23
-/* value from block_id enum */
-#define DBG_IDLE_CHK_COND_REG_BLOCK_ID_MASK  0xFF
+#define DBG_IDLE_CHK_COND_REG_BLOCK_ID_MASK  0xFF /* value from block_id enum */
 #define DBG_IDLE_CHK_COND_REG_BLOCK_ID_SHIFT 24
 	u16 num_entries /* number of registers entries to check */;
 	u8 entry_size /* size of registers entry (in dwords) */;
@@ -495,14 +513,11 @@ struct dbg_idle_chk_cond_reg {
  */
 struct dbg_idle_chk_info_reg {
 	u32 data;
-/* Register GRC address (in dwords) */
-#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0x7FFFFF
+#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK   0x7FFFFF /* Register GRC address (in dwords) */
 #define DBG_IDLE_CHK_INFO_REG_ADDRESS_SHIFT  0
-/* indicates if the register is wide-bus */
-#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_MASK  0x1
+#define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_MASK  0x1 /* indicates if the register is wide-bus */
 #define DBG_IDLE_CHK_INFO_REG_WIDE_BUS_SHIFT 23
-/* value from block_id enum */
-#define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_MASK  0xFF
+#define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_MASK  0xFF /* value from block_id enum */
 #define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_SHIFT 24
 	u16 size /* register size in dwords */;
 	struct dbg_mode_hdr mode /* Mode header */;
@@ -536,11 +551,9 @@ struct dbg_idle_chk_result_hdr {
  */
 struct dbg_idle_chk_result_reg_hdr {
 	u8 data;
-/* indicates if this register is a memory */
-#define DBG_IDLE_CHK_RESULT_REG_HDR_IS_MEM_MASK  0x1
+#define DBG_IDLE_CHK_RESULT_REG_HDR_IS_MEM_MASK  0x1 /* indicates if this register is a memory */
 #define DBG_IDLE_CHK_RESULT_REG_HDR_IS_MEM_SHIFT 0
-/* register index within the failing rule */
-#define DBG_IDLE_CHK_RESULT_REG_HDR_REG_ID_MASK  0x7F
+#define DBG_IDLE_CHK_RESULT_REG_HDR_REG_ID_MASK  0x7F /* register index within the failing rule */
 #define DBG_IDLE_CHK_RESULT_REG_HDR_REG_ID_SHIFT 1
 	u8 start_entry /* index of the first checked entry */;
 	u16 size /* register size in dwords */;
@@ -558,13 +571,9 @@ struct dbg_idle_chk_rule {
 	u8 num_info_regs /* number of info registers */;
 	u8 num_imms /* number of immediates in the condition */;
 	u8 reserved1;
-/* offset of this rules registers in the idle check register array
- * (in dbg_idle_chk_reg units)
- */
+/* offset of this rules registers in the idle check register array (in dbg_idle_chk_reg units) */
 	u16 reg_offset;
-/* offset of this rules immediate values in the immediate values array
- * (in dwords)
- */
+/* offset of this rules immediate values in the immediate values array (in dwords) */
 	u16 imm_offset;
 };
 
@@ -587,12 +596,10 @@ struct dbg_idle_chk_rule_parsing_data {
  * idle check severity types
  */
 enum dbg_idle_chk_severity_types {
-/* idle check failure should cause an error */
-	IDLE_CHK_SEVERITY_ERROR,
-/* idle check failure should cause an error only if theres no traffic */
+	IDLE_CHK_SEVERITY_ERROR /* idle check failure should cause an error */,
+/* idle check failure should cause an error only if theres no traffic */
 	IDLE_CHK_SEVERITY_ERROR_NO_TRAFFIC,
-/* idle check failure should cause a warning */
-	IDLE_CHK_SEVERITY_WARNING,
+	IDLE_CHK_SEVERITY_WARNING /* idle check failure should cause a warning */,
 	MAX_DBG_IDLE_CHK_SEVERITY_TYPES
 };
 
@@ -605,8 +612,7 @@ struct dbg_reset_reg {
 	u32 data;
 #define DBG_RESET_REG_ADDR_MASK        0xFFFFFF /* GRC address (in dwords) */
 #define DBG_RESET_REG_ADDR_SHIFT       0
-/* indicates if this register is removed (0/1). */
-#define DBG_RESET_REG_IS_REMOVED_MASK  0x1
+#define DBG_RESET_REG_IS_REMOVED_MASK  0x1 /* indicates if this register is removed (0/1). */
 #define DBG_RESET_REG_IS_REMOVED_SHIFT 24
 #define DBG_RESET_REG_RESERVED_MASK    0x7F
 #define DBG_RESET_REG_RESERVED_SHIFT   25
@@ -617,24 +623,25 @@ struct dbg_reset_reg {
  * Debug Bus block data
  */
 struct dbg_bus_block_data {
-/* 4 bit value, bit i set -> dword/qword i is enabled in block. */
-	u8 enable_mask;
-/* Number of dwords/qwords to cyclically  right the blocks output (0-3). */
-	u8 right_shift;
-/* 4 bit value, bit i set -> dword/qword i is forced valid in block. */
-	u8 force_valid_mask;
+	u8 enable_mask /* 4 bit value, bit i set -> dword/qword i is enabled in block. */;
+	u8 right_shift /* Number of dwords/qwords to cyclically right-shift the blocks output (0-3). */;
+	u8 force_valid_mask /* 4 bit value, bit i set -> dword/qword i is forced valid in block. */;
 /* 4 bit value, bit i set -> dword/qword i frame bit is forced in block. */
 	u8 force_frame_mask;
-/* bit i set -> dword i contains this blocks data (after shifting). */
-	u8 dword_mask;
+	u8 dword_mask /* bit i set -> dword i contains this blocks data (after shifting). */;
 	u8 line_num /* Debug line number to select */;
 	u8 hw_id /* HW ID associated with the block */;
 	u8 flags;
+/* 0/1. If 1, the debug line (after right shift) is shifted left by 4 dwords, to the upper 128 bits
+ * of the debug bus. Valid only for 128b debug lines.
+ */
+#define DBG_BUS_BLOCK_DATA_SHIFT_LEFT_4_MASK  0x1
+#define DBG_BUS_BLOCK_DATA_SHIFT_LEFT_4_SHIFT 0
 /* 0/1. If 1, the debug line is 256b, otherwise its 128b. */
 #define DBG_BUS_BLOCK_DATA_IS_256B_LINE_MASK  0x1
-#define DBG_BUS_BLOCK_DATA_IS_256B_LINE_SHIFT 0
-#define DBG_BUS_BLOCK_DATA_RESERVED_MASK      0x7F
-#define DBG_BUS_BLOCK_DATA_RESERVED_SHIFT     1
+#define DBG_BUS_BLOCK_DATA_IS_256B_LINE_SHIFT 1
+#define DBG_BUS_BLOCK_DATA_RESERVED_MASK      0x3F
+#define DBG_BUS_BLOCK_DATA_RESERVED_SHIFT     2
 };
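
Note the flags relayout above: SHIFT_LEFT_4 now occupies bit 0 and
IS_256B_LINE moves from bit 0 to bit 1, shrinking the reserved field. A
consumer that tested bit 0 directly would silently break; going through the
field macros only requires a rebuild against the matching header, e.g.
(hypothetical helper, stand-ins as above):

    static unsigned int dbg_line_width_dwords(const struct dbg_bus_block_data *b)
    {
            /* a 256b line spans 8 dwords, a 128b line spans 4 */
            return HSI_GET_FIELD(b->flags,
                                 DBG_BUS_BLOCK_DATA_IS_256B_LINE) ? 8 : 4;
    }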
 
 
@@ -660,17 +667,15 @@ enum dbg_bus_constraint_ops {
  * Debug Bus trigger state data
  */
 struct dbg_bus_trigger_state_data {
-/* Message length (in cycles) to be used for message-based trigger constraints.
- * If set to 0, message length is based only on frame bit received from HW.
+/* Message length (in cycles) to be used for message-based trigger constraints. If set to 0, message
+ * length is based only on frame bit received from HW.
  */
 	u8 msg_len;
-/* A bit for each dword in the debug bus cycle, indicating if this dword appears
- * in a trigger constraint (1) or not (0)
+/* A bit for each dword in the debug bus cycle, indicating if this dword appears in a trigger
+ * constraint (1) or not (0)
  */
 	u8 constraint_dword_mask;
-/* Storm ID to trigger on. Valid only when triggering on Storm data.
- * (use enum dbg_storms)
- */
+/* Storm ID to trigger on. Valid only when triggering on Storm data. (use enum dbg_storms) */
 	u8 storm_id;
 	u8 reserved;
 };
@@ -712,10 +717,8 @@ struct dbg_bus_storm_eid_mask_params {
  * Debug Bus Storm EID filter params
  */
 union dbg_bus_storm_eid_params {
-/* EID range filter params */
-	struct dbg_bus_storm_eid_range_params range;
-/* EID mask filter params */
-	struct dbg_bus_storm_eid_mask_params mask;
+	struct dbg_bus_storm_eid_range_params range /* EID range filter params */;
+	struct dbg_bus_storm_eid_mask_params mask /* EID mask filter params */;
 };
 
 /*
@@ -723,12 +726,11 @@ union dbg_bus_storm_eid_params {
  */
 struct dbg_bus_storm_data {
 	u8 enabled /* indicates if the Storm is enabled for recording */;
-	u8 mode /* Storm debug mode, valid only if the Storm is enabled */;
+/* Storm debug mode, valid only if the Storm is enabled (use enum dbg_bus_storm_modes) */
+	u8 mode;
 	u8 hw_id /* HW ID associated with the Storm */;
 	u8 eid_filter_en /* Indicates if EID filtering is performed (0/1) */;
-/* 1 = EID range filter, 0 = EID mask filter. Valid only if eid_filter_en is
- * set,
- */
+/* 1 = EID range filter, 0 = EID mask filter. Valid only if eid_filter_en is set. */
 	u8 eid_range_not_mask;
 	u8 cid_filter_en /* Indicates if CID filtering is performed (0/1) */;
 /* EID filter params to filter on. Valid only if eid_filter_en is set. */
@@ -741,76 +743,66 @@ struct dbg_bus_storm_data {
  */
 struct dbg_bus_data {
 	u32 app_version /* The tools version number of the application */;
-	u8 state /* The current debug bus state */;
-	u8 mode_256b_en /* Indicates if the 256 bit mode is enabled */;
+	u8 state /* The current debug bus state (use enum dbg_bus_states) */;
+	u8 e4_256b_mode_en /* Indicates if the E4 256 bit mode is enabled */;
 	u8 num_enabled_blocks /* Number of blocks enabled for recording */;
 	u8 num_enabled_storms /* Number of Storms enabled for recording */;
-	u8 target /* Output target */;
+	u8 target /* Output target (use enum dbg_bus_targets) */;
 	u8 one_shot_en /* Indicates if one-shot mode is enabled (0/1) */;
 	u8 grc_input_en /* Indicates if GRC recording is enabled (0/1) */;
-/* Indicates if timestamp recording is enabled (0/1) */
-	u8 timestamp_input_en;
+	u8 timestamp_input_en /* Indicates if timestamp recording is enabled (0/1) */;
 	u8 filter_en /* Indicates if the recording filter is enabled (0/1) */;
-/* If true, the next added constraint belong to the filter. Otherwise,
- * it belongs to the last added trigger state. Valid only if either filter or
- * triggers are enabled.
+/* If true, the next added constraint belongs to the filter. Otherwise, it belongs to the last added
+ * trigger state. Valid only if either filter or triggers are enabled.
  */
 	u8 adding_filter;
-/* Indicates if the recording filter should be applied before the trigger.
- * Valid only if both filter and trigger are enabled (0/1)
+/* Indicates if the recording filter should be applied before the trigger. Valid only if both filter
+ * and trigger are enabled (0/1)
  */
 	u8 filter_pre_trigger;
-/* Indicates if the recording filter should be applied after the trigger.
- * Valid only if both filter and trigger are enabled (0/1)
+/* Indicates if the recording filter should be applied after the trigger. Valid only if both filter
+ * and trigger are enabled (0/1)
  */
 	u8 filter_post_trigger;
-/* Indicates if the recording trigger is enabled (0/1) */
-	u8 trigger_en;
-/* A bit for each dword in the debug bus cycle, indicating if this dword
- * appears in a filter constraint (1) or not (0)
+	u8 trigger_en /* Indicates if the recording trigger is enabled (0/1) */;
+/* A bit for each dword in the debug bus cycle, indicating if this dword appears in a filter
+ * constraint (1) or not (0)
  */
 	u8 filter_constraint_dword_mask;
 	u8 next_trigger_state /* ID of next trigger state to be added */;
-/* ID of next filter/trigger constraint to be added */
-	u8 next_constraint_id;
-/* trigger states data */
-	struct dbg_bus_trigger_state_data trigger_states[3];
-/* Message length (in cycles) to be used for message-based filter constraints.
- * If set to 0 message length is based only on frame bit received from HW.
+	u8 next_constraint_id /* ID of next filter/trigger constraint to be added */;
+	struct dbg_bus_trigger_state_data trigger_states[3] /* trigger states data */;
+/* Message length (in cycles) to be used for message-based filter constraints. If set to 0, message
+ * length is based only on frame bit received from HW.
  */
 	u8 filter_msg_len;
 /* Indicates if the other engine sends it NW recording to this engine (0/1) */
 	u8 rcv_from_other_engine;
-/* A bit for each dword in the debug bus cycle, indicating if this dword is
- * recorded (1) or not (0)
+/* A bit for each dword in the debug bus cycle, indicating if this dword is recorded (1) or not
+ * (0)
+ */
 	u8 blocks_dword_mask;
-/* Indicates if there are dwords in the debug bus cycle which are recorded
- * by more tan one block (0/1)
+/* Indicates if there are dwords in the debug bus cycle which are recorded by more than one block
+ * (0/1)
  */
 	u8 blocks_dword_overlap;
-/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
- * HW ID of dword/qword i
- */
+/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the HW ID of dword/qword i */
 	u32 hw_id_mask;
-/* Debug Bus PCI buffer data. Valid only when the target is
- * DBG_BUS_TARGET_ID_PCI.
- */
+/* Debug Bus PCI buffer data. Valid only when the target is DBG_BUS_TARGET_ID_PCI. */
 	struct dbg_bus_pci_buf_data pci_buf;
-/* Debug Bus data for each block */
-	struct dbg_bus_block_data blocks[132];
-/* Debug Bus data for each block */
-	struct dbg_bus_storm_data storms[6];
+	struct dbg_bus_block_data blocks[132] /* Debug Bus data for each block */;
+	struct dbg_bus_storm_data storms[6] /* Debug Bus data for each Storm */;
 };
 
 
+
+
 /*
  * Debug bus states
  */
 enum dbg_bus_states {
 	DBG_BUS_STATE_IDLE /* debug bus idle state (not recording) */,
-/* debug bus is ready for configuration and recording */
-	DBG_BUS_STATE_READY,
+	DBG_BUS_STATE_READY /* debug bus is ready for configuration and recording */,
 	DBG_BUS_STATE_RECORDING /* debug bus is currently recording */,
 	DBG_BUS_STATE_STOPPED /* debug bus recording has stopped */,
 	MAX_DBG_BUS_STATES
@@ -831,11 +823,13 @@ enum dbg_bus_storm_modes {
 	DBG_BUS_STORM_MODE_DRA_W /* DRA write data (fast debug) */,
 	DBG_BUS_STORM_MODE_LD_ST_ADDR /* load/store address (fast debug) */,
 	DBG_BUS_STORM_MODE_DRA_FSM /* DRA state machines (fast debug) */,
+	DBG_BUS_STORM_MODE_FAST_DBGMUX /* fast DBGMUX (fast debug - E5 only) */,
 	DBG_BUS_STORM_MODE_RH /* recording handlers (fast debug) */,
-/* recording handlers with store messages (fast debug) */
-	DBG_BUS_STORM_MODE_RH_WITH_STORE,
-	DBG_BUS_STORM_MODE_FOC /* FOC: FIN + DRA Rd (slow debug) */,
-	DBG_BUS_STORM_MODE_EXT_STORE /* FOC: External Store (slow) */,
+/* recording handlers with no compression (fast debug - E5 only) */
+	DBG_BUS_STORM_MODE_RH_NO_COMPRESS,
+	DBG_BUS_STORM_MODE_RH_WITH_STORE /* recording handlers with store messages (fast debug) */,
+	DBG_BUS_STORM_MODE_FOC /* FOC: FIN + DRA Rd (slow debug - E4 only) */,
+	DBG_BUS_STORM_MODE_EXT_STORE /* FOC: External Store (slow debug - E4 only) */,
 	MAX_DBG_BUS_STORM_MODES
 };
 
@@ -844,8 +838,7 @@ enum dbg_bus_storm_modes {
  * Debug bus target IDs
  */
 enum dbg_bus_targets {
-/* records debug bus to DBG block internal buffer */
-	DBG_BUS_TARGET_ID_INT_BUF,
+	DBG_BUS_TARGET_ID_INT_BUF /* records debug bus to DBG block internal buffer */,
 	DBG_BUS_TARGET_ID_NIG /* records debug bus to the NW */,
 	DBG_BUS_TARGET_ID_PCI /* records debug bus to a PCI buffer */,
 	MAX_DBG_BUS_TARGETS
@@ -857,12 +850,10 @@ enum dbg_bus_targets {
  * GRC Dump data
  */
 struct dbg_grc_data {
-/* Indicates if the GRC parameters were initialized */
-	u8 params_initialized;
+	u8 params_initialized /* Indicates if the GRC parameters were initialized */;
 	u8 reserved1;
 	u16 reserved2;
-/* Value of each GRC parameter. Array size must match the enum dbg_grc_params.
- */
+/* Value of each GRC parameter. Array size must match the enum dbg_grc_params. */
 	u32 param_val[48];
 };
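
The formerly reserved parameter slots below are repurposed in place
(DBG_GRC_PARAM_DUMP_NIG, _DUMP_SEM, _DUMP_GFS) rather than appended, so the
enum values of all later parameters are unchanged and param_val[] above stays
index-compatible. A hedged usage sketch (the function is hypothetical):

    /* Request the new NIG and GFS memory regions in the next GRC dump */
    static void grc_request_new_hw_dumps(struct dbg_grc_data *grc)
    {
            grc->param_val[DBG_GRC_PARAM_DUMP_NIG] = 1;
            grc->param_val[DBG_GRC_PARAM_DUMP_GFS] = 1;
    }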
 
@@ -894,7 +885,7 @@ enum dbg_grc_params {
 	DBG_GRC_PARAM_DUMP_BRB /* dump BRB memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_BTB /* dump BTB memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_BMB /* dump BMB memories (0/1) */,
-	DBG_GRC_PARAM_RESERVD1 /* reserved */,
+	DBG_GRC_PARAM_DUMP_NIG /* dump NIG memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_MULD /* dump MULD memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_PRS /* dump PRS memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_DMAE /* dump PRS memories (0/1) */,
@@ -903,20 +894,16 @@ enum dbg_grc_params {
 	DBG_GRC_PARAM_DUMP_DIF /* dump DIF memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_STATIC /* dump static debug data (0/1) */,
 	DBG_GRC_PARAM_UNSTALL /* un-stall Storms after dump (0/1) */,
-	DBG_GRC_PARAM_RESERVED2 /* reserved */,
-/* MCP Trace meta data size in bytes */
-	DBG_GRC_PARAM_MCP_TRACE_META_SIZE,
-/* preset: exclude all memories from dump (1 only) */
-	DBG_GRC_PARAM_EXCLUDE_ALL,
-/* preset: include memories for crash dump (1 only) */
-	DBG_GRC_PARAM_CRASH,
-/* perform dump only if MFW is responding (0/1) */
-	DBG_GRC_PARAM_PARITY_SAFE,
+	DBG_GRC_PARAM_DUMP_SEM /* dump SEM memories (0/1) */,
+	DBG_GRC_PARAM_MCP_TRACE_META_SIZE /* MCP Trace meta data size in bytes */,
+	DBG_GRC_PARAM_EXCLUDE_ALL /* preset: exclude all memories from dump (1 only) */,
+	DBG_GRC_PARAM_CRASH /* preset: include memories for crash dump (1 only) */,
+	DBG_GRC_PARAM_PARITY_SAFE /* perform dump only if MFW is responding (0/1) */,
 	DBG_GRC_PARAM_DUMP_CM /* dump CM memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_PHY /* dump PHY memories (0/1) */,
 	DBG_GRC_PARAM_NO_MCP /* dont perform MCP commands (0/1) */,
 	DBG_GRC_PARAM_NO_FW_VER /* dont read FW/MFW version (0/1) */,
-	DBG_GRC_PARAM_RESERVED3 /* reserved */,
+	DBG_GRC_PARAM_DUMP_GFS /* dump GFS memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_MCP_HW_DUMP /* dump MCP HW Dump (0/1) */,
 	DBG_GRC_PARAM_DUMP_ILT_CDUC /* dump ILT CDUC client (0/1) */,
 	DBG_GRC_PARAM_DUMP_ILT_CDUT /* dump ILT CDUT client (0/1) */,
@@ -988,6 +975,19 @@ enum dbg_status {
 	DBG_STATUS_FILTER_SINGLE_HW_ID,
 	DBG_STATUS_TRIGGER_SINGLE_HW_ID,
 	DBG_STATUS_MISSING_TRIGGER_STATE_STORM,
+	DBG_STATUS_MDUMP2_INVALID_LOG_DATA,
+	DBG_STATUS_MDUMP2_INVALID_LOG_SIZE,
+	DBG_STATUS_MDUMP2_INVALID_SIGNATURE,
+	DBG_STATUS_MDUMP2_INVALID_LOG_HDR,
+	DBG_STATUS_MDUMP2_ERROR_READING_LANE_REGS,
+	DBG_STATUS_MDUMP2_FAILED_TO_REQUEST_OFFSIZE,
+	DBG_STATUS_MDUMP2_FAILED_VALIDATION_OF_DATA_CRC,
+	DBG_STATUS_MDUMP2_ERROR_ALLOCATING_BUF,
+	DBG_STATUS_MDUMP2_ERROR_EXTRACTING_NUM_PORTS,
+	DBG_STATUS_MDUMP2_ERROR_EXTRACTING_MFW_STATUS,
+	DBG_STATUS_MDUMP2_ERROR_DISPLAYING_LINKDUMP,
+	DBG_STATUS_MDUMP2_ERROR_READING_PHY_CFG,
+	DBG_STATUS_MDUMP2_ERROR_READING_PLL_MODE,
 	MAX_DBG_STATUS
 };
 
@@ -1011,8 +1011,7 @@ enum dbg_storms {
  */
 struct idle_chk_data {
 	u32 buf_size /* Idle check buffer size in dwords */;
-/* Indicates if the idle check buffer size was set (0/1) */
-	u8 buf_size_set;
+	u8 buf_size_set /* Indicates if the idle check buffer size was set (0/1) */;
 	u8 reserved1;
 	u16 reserved2;
 };
@@ -1034,8 +1033,7 @@ struct dbg_tools_data {
 	struct dbg_bus_data bus /* Debug Bus data */;
 	struct idle_chk_data idle_chk /* Idle Check data */;
 	u8 mode_enable[40] /* Indicates if a mode is enabled (0/1) */;
-/* Indicates if a block is in reset state (0/1) */
-	u8 block_in_reset[132];
+	u8 block_in_reset[132] /* Indicates if a block is in reset state (0/1) */;
 	u8 chip_id /* Chip ID (from enum chip_ids) */;
 	u8 hw_type /* HW Type */;
 	u8 num_ports /* Number of ports in the chip */;
@@ -1045,8 +1043,24 @@ struct dbg_tools_data {
 	u8 use_dmae /* Indicates if DMAE should be used */;
 	u8 reserved;
 	struct pretend_params pretend /* Current pretend parameters */;
-/* Numbers of registers that were read since last log */
-	u32 num_regs_read;
+	u32 num_regs_read /* Number of registers that were read since last log */;
+};
+
+
+
+/*
+ * ILT Clients
+ */
+enum ilt_clients {
+	ILT_CLI_CDUC,
+	ILT_CLI_CDUT,
+	ILT_CLI_QM,
+	ILT_CLI_TM,
+	ILT_CLI_SRC,
+	ILT_CLI_TSDM,
+	ILT_CLI_RGFS,
+	ILT_CLI_TGFS,
+	MAX_ILT_CLIENTS
 };
 
 
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index bd7bd8658..b49de54ae 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_HSI_ETH__
 #define __ECORE_HSI_ETH__
 /************************************************************************/
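
From this point the eth Storm context structures and their bitfield macros
gain an e4_/E4_ prefix wholesale, making room for separate E5 layouts;
callers need only a mechanical rename. A sketch with the HSI_SET_FIELD
stand-in from earlier (the helper function is hypothetical):

    static void e4_mark_conn_in_qm0(struct e4_xstorm_eth_conn_ag_ctx *ag)
    {
            /* was XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0 before the rename */
            HSI_SET_FIELD(ag->flags0, E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0, 1);
    }
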
@@ -14,242 +14,234 @@
 /*
  * The eth storm context for the Tstorm
  */
-struct tstorm_eth_conn_st_ctx {
+struct e4_tstorm_eth_conn_st_ctx {
 	__le32 reserved[4];
 };
 
 /*
  * The eth storm context for the Pstorm
  */
-struct pstorm_eth_conn_st_ctx {
+struct e4_pstorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
 /*
  * The eth storm context for the Xstorm
  */
-struct xstorm_eth_conn_st_ctx {
+struct e4_xstorm_eth_conn_st_ctx {
 	__le32 reserved[60];
 };
 
-struct xstorm_eth_conn_ag_ctx {
+struct e4_xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1 /* bit4 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1 /* exist_in_qm0 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1 /* exist_in_qm1 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1 /* exist_in_qm2 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1 /* exist_in_qm3 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1 /* bit4 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1 /* cf_array_active */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1 /* bit10 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1 /* bit11 */
-#define XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
-#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED2_MASK            0x1 /* bit12 */
-#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED2_SHIFT           4
-#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED3_MASK            0x1 /* bit13 */
-#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED3_SHIFT           5
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
-#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1 /* bit10 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1 /* bit11 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_E5_RESERVED2_MASK            0x1 /* bit12 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_E5_RESERVED2_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_E5_RESERVED3_MASK            0x1 /* bit13 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_E5_RESERVED3_SHIFT           5
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
-#define XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3 /* timer0cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
-#define XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3 /* timer1cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
-#define XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
-#define XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3 /* timer0cf */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3 /* timer1cf */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3 /* timer_stop_all */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
-#define XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
-#define XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
-#define XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
-#define XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
-#define XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
-#define XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
-#define XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
-#define XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
-#define XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
-#define XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
-#define XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
-#define XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
-#define XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
-#define XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
-#define XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
-#define XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
-#define XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
-#define XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
-#define XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
-#define XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
-#define XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
-#define XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
-#define XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
-#define XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3 /* cf_array_cf */
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1 /* cf0en */
-#define XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1 /* cf1en */
-#define XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1 /* cf0en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1 /* cf1en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
-#define XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
-#define XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1 /* cf4en */
-#define XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1 /* cf5en */
-#define XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1 /* cf6en */
-#define XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1 /* cf7en */
-#define XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1 /* cf8en */
-#define XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1 /* cf9en */
-#define XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1 /* cf4en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1 /* cf5en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1 /* cf6en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1 /* cf7en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1 /* cf8en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1 /* cf9en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1 /* cf10en */
-#define XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1 /* cf11en */
-#define XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1 /* cf12en */
-#define XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1 /* cf13en */
-#define XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1 /* cf14en */
-#define XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1 /* cf15en */
-#define XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
-#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1 /* cf10en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1 /* cf11en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1 /* cf12en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1 /* cf13en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1 /* cf14en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1 /* cf15en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1 /* cf_array_cf_en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1 /* cf18en */
-#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
-#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
-#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1 /* cf21en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1 /* cf22en */
-#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1 /* rule0en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1 /* rule1en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1 /* cf18en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1 /* cf21en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1 /* cf22en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1 /* rule0en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1 /* rule1en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1 /* rule2en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1 /* rule3en */
-#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
-#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1 /* rule8en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1 /* rule9en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1 /* rule2en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1 /* rule3en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1 /* rule8en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1 /* rule9en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1 /* rule10en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1 /* rule11en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1 /* rule12en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1 /* rule13en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1 /* rule14en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1 /* rule15en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1 /* rule16en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1 /* rule17en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1 /* rule10en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1 /* rule11en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1 /* rule12en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1 /* rule13en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1 /* rule14en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1 /* rule15en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1 /* rule16en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1 /* rule17en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1 /* rule18en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1 /* rule19en */
-#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1 /* rule20en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1 /* rule21en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1 /* rule22en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1 /* rule23en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1 /* rule24en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1 /* rule25en */
-#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1 /* rule18en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1 /* rule19en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1 /* rule20en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1 /* rule21en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1 /* rule22en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1 /* rule23en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1 /* rule24en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1 /* rule25en */
+#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
-#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
-#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
-#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
-#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 e5_reserved1 /* physical_q1 */;
@@ -303,89 +295,89 @@ struct xstorm_eth_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
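
For reference, every _MASK/_SHIFT pair above packs a one- or two-bit field
into one of the u8 flagsN bytes of the aggregative context. The base driver
accesses these fields through its GET_FIELD()/SET_FIELD() helpers; below is
a minimal sketch of that access pattern, simplified from the helpers in
ecore.h. The example() wrapper is purely illustrative, and it assumes the
struct keeps the e4_xstorm_eth_conn_ag_ctx name used elsewhere in this patch.

	#include <stdint.h>

	/* Simplified forms of the ecore.h field accessors. */
	#define GET_FIELD(value, name) \
		(((value) >> (name##_SHIFT)) & (name##_MASK))
	#define SET_FIELD(value, name, flag) \
	do { \
		(value) &= ~((name##_MASK) << (name##_SHIFT)); \
		(value) |= (((flag) & (name##_MASK)) << (name##_SHIFT)); \
	} while (0)

	/* Illustrative only: enable the DQ completion flag in flags10
	 * and read it back. */
	static inline uint8_t example(struct e4_xstorm_eth_conn_ag_ctx *ctx)
	{
		SET_FIELD(ctx->flags10, E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN, 1);
		return GET_FIELD(ctx->flags10,
				 E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN);
	}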
 
-struct tstorm_eth_conn_ag_ctx {
+struct e4_tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
-#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
 	u8 flags1;
-#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
-#define TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
-#define TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
-#define TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
 	u8 flags2;
-#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
-#define TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
-#define TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
-#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
-#define TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
-#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
-#define TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
 	u8 flags3;
-#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
-#define TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
-#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
-#define TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
 	u8 flags4;
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
 	u8 flags5;
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
+#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -410,41 +402,41 @@ struct tstorm_eth_conn_ag_ctx {
 /*
  * The eth storm context for the Ystorm
  */
-struct ystorm_eth_conn_st_ctx {
+struct e4_ystorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
-struct ystorm_eth_conn_ag_ctx {
+struct e4_ystorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1 /* exist_in_qm0 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1 /* exist_in_qm1 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1 /* exist_in_qm0 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1 /* exist_in_qm1 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
 	u8 flags1;
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1 /* cf0en */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1 /* cf1en */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1 /* cf2en */
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1 /* rule0en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1 /* rule1en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1 /* rule2en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1 /* rule3en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1 /* rule4en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1 /* cf0en */
+#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1 /* cf1en */
+#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1 /* cf2en */
+#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1 /* rule0en */
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1 /* rule1en */
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1 /* rule2en */
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1 /* rule3en */
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1 /* rule4en */
+#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
 	u8 tx_q0_int_coallecing_timeset /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* word0 */;
@@ -458,66 +450,63 @@ struct ystorm_eth_conn_ag_ctx {
 	__le32 reg3 /* reg3 */;
 };
 
-struct ustorm_eth_conn_ag_ctx {
+struct e4_ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-/* exist_in_qm0 */
-#define USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
-/* exist_in_qm1 */
-#define USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
-#define USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3 /* timer0cf */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3 /* timer1cf */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
-#define USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
-#define USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1 /* exist_in_qm0 */
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1 /* exist_in_qm1 */
+#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3 /* timer0cf */
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3 /* timer1cf */
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
 	u8 flags1;
-/* timer_stop_all */
-#define USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3 /* cf4 */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3 /* cf5 */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3 /* cf6 */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3 /* timer_stop_all */
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3 /* cf4 */
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3 /* cf5 */
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3 /* cf6 */
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
 	u8 flags2;
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf0en */
-#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf1en */
-#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
-#define USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
-#define USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1 /* cf4en */
-#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1 /* cf5en */
-#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1 /* cf6en */
-#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1 /* rule0en */
-#define USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf0en */
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf1en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1 /* cf4en */
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1 /* cf5en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1 /* cf6en */
+#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1 /* rule0en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
 	u8 flags3;
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1 /* rule1en */
-#define USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1 /* rule2en */
-#define USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1 /* rule3en */
-#define USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1 /* rule4en */
-#define USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
-#define USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
-#define USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
-#define USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1 /* rule8en */
-#define USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1 /* rule1en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1 /* rule2en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1 /* rule3en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1 /* rule4en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1 /* rule8en */
+#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -533,68 +522,801 @@ struct ustorm_eth_conn_ag_ctx {
 /*
  * The eth storm context for the Ustorm
  */
-struct ustorm_eth_conn_st_ctx {
+struct e4_ustorm_eth_conn_st_ctx {
 	__le32 reserved[40];
 };
 
 /*
  * The eth storm context for the Mstorm
  */
-struct mstorm_eth_conn_st_ctx {
+struct e4_mstorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
 /*
  * eth connection context
  */
-struct eth_conn_context {
-/* tstorm storm context */
-	struct tstorm_eth_conn_st_ctx tstorm_st_context;
+struct e4_eth_conn_context {
+	struct e4_tstorm_eth_conn_st_ctx tstorm_st_context /* tstorm storm context */;
 	struct regpair tstorm_st_padding[2] /* padding */;
-/* pstorm storm context */
-	struct pstorm_eth_conn_st_ctx pstorm_st_context;
-/* xstorm storm context */
-	struct xstorm_eth_conn_st_ctx xstorm_st_context;
-/* xstorm aggregative context */
-	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
-/* tstorm aggregative context */
-	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
-/* ystorm storm context */
-	struct ystorm_eth_conn_st_ctx ystorm_st_context;
-/* ystorm aggregative context */
-	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
-/* ustorm aggregative context */
-	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
-/* ustorm storm context */
-	struct ustorm_eth_conn_st_ctx ustorm_st_context;
-/* mstorm storm context */
-	struct mstorm_eth_conn_st_ctx mstorm_st_context;
+	struct e4_pstorm_eth_conn_st_ctx pstorm_st_context /* pstorm storm context */;
+	struct e4_xstorm_eth_conn_st_ctx xstorm_st_context /* xstorm storm context */;
+	struct e4_xstorm_eth_conn_ag_ctx xstorm_ag_context /* xstorm aggregative context */;
+	struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context /* tstorm aggregative context */;
+	struct e4_ystorm_eth_conn_st_ctx ystorm_st_context /* ystorm storm context */;
+	struct e4_ystorm_eth_conn_ag_ctx ystorm_ag_context /* ystorm aggregative context */;
+	struct e4_ustorm_eth_conn_ag_ctx ustorm_ag_context /* ustorm aggregative context */;
+	struct e4_ustorm_eth_conn_st_ctx ustorm_st_context /* ustorm storm context */;
+	struct e4_mstorm_eth_conn_st_ctx mstorm_st_context /* mstorm storm context */;
+};
+
+
+/*
+ * The eth storm context for the Tstorm
+ */
+struct e5_tstorm_eth_conn_st_ctx {
+	__le32 reserved[4];
+};
+
+/*
+ * The eth storm context for the Pstorm
+ */
+struct e5_pstorm_eth_conn_st_ctx {
+	__le32 reserved[8];
+};
+
+/*
+ * The eth storm context for the Xstorm
+ */
+struct e5_xstorm_eth_conn_st_ctx {
+	__le32 reserved[72];
+};
+
+struct e5_xstorm_eth_conn_ag_ctx {
+	u8 reserved0 /* cdu_validation */;
+	u8 state_and_core_id /* state_and_core_id */;
+	u8 flags0;
+#define E5_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK                   0x1 /* exist_in_qm0 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT                  0
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK                      0x1 /* exist_in_qm1 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT                     1
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK                      0x1 /* exist_in_qm2 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT                     2
+#define E5_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK                   0x1 /* exist_in_qm3 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT                  3
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK                      0x1 /* bit4 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT                     4
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK                      0x1 /* cf_array_active */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT                     5
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK                      0x1 /* bit6 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT                     6
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK                      0x1 /* bit7 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT                     7
+	u8 flags1;
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK                      0x1 /* bit8 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT                     0
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK                      0x1 /* bit9 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT                     1
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK                      0x1 /* bit10 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT                     2
+#define E5_XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                          0x1 /* bit11 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                         3
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_COPY_CONDITION_LO_MASK         0x1 /* bit12 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_COPY_CONDITION_LO_SHIFT        4
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_COPY_CONDITION_HI_MASK         0x1 /* bit13 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_COPY_CONDITION_HI_SHIFT        5
+#define E5_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK                 0x1 /* bit14 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT                6
+#define E5_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK                   0x1 /* bit15 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT                  7
+	u8 flags2;
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF0_MASK                            0x3 /* timer0cf */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                           0
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF1_MASK                            0x3 /* timer1cf */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                           2
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF2_MASK                            0x3 /* timer2cf */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                           4
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF3_MASK                            0x3 /* timer_stop_all */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                           6
+	u8 flags3;
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF4_MASK                            0x3 /* cf4 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                           0
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF5_MASK                            0x3 /* cf5 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                           2
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF6_MASK                            0x3 /* cf6 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                           4
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF7_MASK                            0x3 /* cf7 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                           6
+	u8 flags4;
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF8_MASK                            0x3 /* cf8 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                           0
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF9_MASK                            0x3 /* cf9 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                           2
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF10_MASK                           0x3 /* cf10 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                          4
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF11_MASK                           0x3 /* cf11 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                          6
+	u8 flags5;
+#define E5_XSTORM_ETH_CONN_AG_CTX_HP_CF_MASK                          0x3 /* cf12 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_HP_CF_SHIFT                         0
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF13_MASK                           0x3 /* cf13 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                          2
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF14_MASK                           0x3 /* cf14 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                          4
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF15_MASK                           0x3 /* cf15 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                          6
+	u8 flags6;
+#define E5_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK               0x3 /* cf16 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT              0
+#define E5_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK               0x3 /* cf_array_cf */
+#define E5_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT              2
+#define E5_XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                          0x3 /* cf18 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                         4
+#define E5_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK                   0x3 /* cf19 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT                  6
+	u8 flags7;
+#define E5_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                       0x3 /* cf20 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT                      0
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK                     0x3 /* cf21 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT                    2
+#define E5_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK                      0x3 /* cf22 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT                     4
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                          0x1 /* cf0en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                         6
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                          0x1 /* cf1en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                         7
+	u8 flags8;
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                          0x1 /* cf2en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                         0
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                          0x1 /* cf3en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                         1
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                          0x1 /* cf4en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                         2
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                          0x1 /* cf5en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                         3
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                          0x1 /* cf6en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                         4
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                          0x1 /* cf7en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                         5
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                          0x1 /* cf8en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                         6
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                          0x1 /* cf9en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                         7
+	u8 flags9;
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                         0x1 /* cf10en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                        0
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                         0x1 /* cf11en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                        1
+#define E5_XSTORM_ETH_CONN_AG_CTX_HP_CF_EN_MASK                       0x1 /* cf12en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_HP_CF_EN_SHIFT                      2
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                         0x1 /* cf13en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                        3
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                         0x1 /* cf14en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                        4
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                         0x1 /* cf15en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                        5
+#define E5_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK            0x1 /* cf16en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT           6
+#define E5_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK            0x1 /* cf_array_cf_en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT           7
+	u8 flags10;
+#define E5_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                       0x1 /* cf18en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT                      0
+#define E5_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK                0x1 /* cf19en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT               1
+#define E5_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK                    0x1 /* cf20en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT                   2
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK                     0x1 /* cf21en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT                    3
+#define E5_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK                   0x1 /* cf22en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT                  4
+#define E5_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK         0x1 /* cf23en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT        5
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK                     0x1 /* rule0en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT                    6
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK                     0x1 /* rule1en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT                    7
+	u8 flags11;
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK                     0x1 /* rule2en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT                    0
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK                     0x1 /* rule3en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT                    1
+#define E5_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK                 0x1 /* rule4en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT                2
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                        0x1 /* rule5en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                       3
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                        0x1 /* rule6en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                       4
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                        0x1 /* rule7en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                       5
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK                   0x1 /* rule8en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT                  6
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                        0x1 /* rule9en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                       7
+	u8 flags12;
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                       0x1 /* rule10en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT                      0
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                       0x1 /* rule11en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT                      1
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK                   0x1 /* rule12en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT                  2
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK                   0x1 /* rule13en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT                  3
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                       0x1 /* rule14en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT                      4
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                       0x1 /* rule15en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT                      5
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                       0x1 /* rule16en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT                      6
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                       0x1 /* rule17en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT                      7
+	u8 flags13;
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                       0x1 /* rule18en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT                      0
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                       0x1 /* rule19en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT                      1
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK                   0x1 /* rule20en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT                  2
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK                   0x1 /* rule21en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT                  3
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK                   0x1 /* rule22en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT                  4
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK                   0x1 /* rule23en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT                  5
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK                   0x1 /* rule24en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT                  6
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK                   0x1 /* rule25en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT                  7
+	u8 flags14;
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK               0x1 /* bit16 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT              0
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK             0x1 /* bit17 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT            1
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK           0x1 /* bit18 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT          2
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK           0x1 /* bit19 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT          3
+#define E5_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK                 0x1 /* bit20 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT                4
+#define E5_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK               0x1 /* bit21 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT              5
+#define E5_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK                     0x3 /* cf23 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT                    6
+	u8 edpm_vport /* byte2 */;
+	__le16 physical_q0 /* physical_q0_and_vf_id_lo */;
+	__le16 tx_l2_edpm_usg_cnt_and_vf_id_hi /* physical_q1_and_vf_id_hi */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 updated_qm_pq_id /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+	u8 fw_spare_data0 /* byte3 */;
+	u8 fw_spare_data1 /* byte4 */;
+	u8 fw_spare_data2 /* byte5 */;
+	u8 fw_spare_data3 /* byte6 */;
+	__le32 fw_spare_data4 /* reg0 */;
+	__le32 fw_spare_data5 /* reg1 */;
+	__le32 fw_spare_data6 /* reg2 */;
+	__le32 fw_spare_data7 /* reg3 */;
+	__le32 fw_spare_data8 /* reg4 */;
+	__le32 fw_spare_data9 /* cf_array0 */;
+	__le32 fw_spare_data10 /* cf_array1 */;
+	u8 flags15;
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_REDIRECTION_CONDITION_LO_MASK  0x1 /* bit22 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_REDIRECTION_CONDITION_LO_SHIFT 0
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_REDIRECTION_CONDITION_HI_MASK  0x1 /* bit23 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_EDPM_REDIRECTION_CONDITION_HI_SHIFT 1
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED3_MASK                   0x1 /* bit24 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED3_SHIFT                  2
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED4_MASK                   0x3 /* cf24 */
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED4_SHIFT                  3
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED5_MASK                   0x1 /* cf24en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED5_SHIFT                  5
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED6_MASK                   0x1 /* rule26en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED6_SHIFT                  6
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED7_MASK                   0x1 /* rule27en */
+#define E5_XSTORM_ETH_CONN_AG_CTX_E4_RESERVED7_SHIFT                  7
+	u8 fw_spare_data11 /* byte7 */;
+	__le16 fw_spare_data12 /* word7 */;
+	__le16 fw_spare_data13 /* word8 */;
+	__le16 fw_spare_data14 /* word9 */;
+	__le16 fw_spare_data15 /* word10 */;
+	__le16 fw_spare_data16 /* word11 */;
+	__le32 fw_spare_data17 /* reg7 */;
+	__le32 fw_data0 /* reg8 */;
+	__le32 fw_data1 /* reg9 */;
+	u8 fw_data2 /* byte8 */;
+	u8 fw_data3 /* byte9 */;
+	u8 fw_data4 /* byte10 */;
+	u8 fw_data5 /* byte11 */;
+	u8 fw_data6 /* byte12 */;
+	u8 fw_data7 /* byte13 */;
+	u8 fw_data8 /* byte14 */;
+	u8 fw_data9 /* byte15 */;
+	__le32 fw_data10 /* reg10 */;
+	__le32 fw_data11 /* reg11 */;
+	__le32 fw_data12 /* reg12 */;
+	__le32 fw_data13 /* reg13 */;
+	__le32 fw_data14 /* reg14 */;
+	__le32 fw_data15 /* reg15 */;
+	__le32 fw_data16 /* reg16 */;
+	__le32 fw_data17 /* reg17 */;
+	__le32 fw_data18 /* reg18 */;
+	__le32 fw_data19 /* reg19 */;
+	__le16 fw_data20 /* word12 */;
+	__le16 fw_data21 /* word13 */;
+	__le16 fw_data22 /* word14 */;
+	__le16 fw_data23 /* word15 */;
+};
+
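With the E4 and E5 layouts now coexisting in this header, chip-common code
has to select the right context type at runtime rather than cast between
them. A hedged sketch of that selection follows; the chip enum and helper
name are placeholders for illustration, not driver API.

	#include <stddef.h>

	enum ctx_chip { CTX_CHIP_E4, CTX_CHIP_E5 }; /* placeholder enum */

	/* Illustrative: the two Xstorm aggregative layouts differ, so any
	 * generic size query must branch on the device type. */
	static size_t xstorm_eth_ag_ctx_size(enum ctx_chip chip)
	{
		return (chip == CTX_CHIP_E5) ?
			sizeof(struct e5_xstorm_eth_conn_ag_ctx) :
			sizeof(struct e4_xstorm_eth_conn_ag_ctx);
	}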
+struct e5_tstorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	u8 flags0;
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT0_MASK          0x1 /* exist_in_qm0 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT         0
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT2_MASK          0x1 /* bit2 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT         2
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT3_MASK          0x1 /* bit3 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT         3
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT4_MASK          0x1 /* bit4 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT         4
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT5_MASK          0x1 /* bit5 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT         5
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* timer0cf */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          6
+	u8 flags1;
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* timer1cf */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          0
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* timer2cf */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          2
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF3_MASK           0x3 /* timer_stop_all */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT          4
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF4_MASK           0x3 /* cf4 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT          6
+	u8 flags2;
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF5_MASK           0x3 /* cf5 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT          0
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF6_MASK           0x3 /* cf6 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT          2
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF7_MASK           0x3 /* cf7 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT          4
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF8_MASK           0x3 /* cf8 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT          6
+	u8 flags3;
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF9_MASK           0x3 /* cf9 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT          0
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF10_MASK          0x3 /* cf10 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT         2
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        4
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        5
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        6
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK         0x1 /* cf3en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT        7
+	u8 flags4;
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK         0x1 /* cf4en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT        0
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK         0x1 /* cf5en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT        1
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK         0x1 /* cf6en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT        2
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK         0x1 /* cf7en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT        3
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK         0x1 /* cf8en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT        4
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK         0x1 /* cf9en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT        5
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK        0x1 /* cf10en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT       6
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      7
+	u8 flags5;
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      0
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      1
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      2
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      3
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK       0x1 /* rule5en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT      4
+#define E5_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK      0x1 /* rule6en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT     5
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK       0x1 /* rule7en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT      6
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK       0x1 /* rule8en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT      7
+	u8 flags6;
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED1_MASK  0x1 /* bit6 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED1_SHIFT 0
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED2_MASK  0x1 /* bit7 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED2_SHIFT 1
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED3_MASK  0x1 /* bit8 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED3_SHIFT 2
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED4_MASK  0x3 /* cf11 */
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED4_SHIFT 3
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED5_MASK  0x1 /* cf11en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED5_SHIFT 5
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED6_MASK  0x1 /* rule9en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED6_SHIFT 6
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED7_MASK  0x1 /* rule10en */
+#define E5_TSTORM_ETH_CONN_AG_CTX_E4_RESERVED7_SHIFT 7
+	u8 byte2 /* byte2 */;
+	__le16 rx_bd_cons /* word0 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+	__le32 reg5 /* reg5 */;
+	__le32 reg6 /* reg6 */;
+	__le32 reg7 /* reg7 */;
+	__le32 reg8 /* reg8 */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 byte5 /* byte5 */;
+	u8 e4_reserved8 /* vf_id */;
+	__le16 rx_bd_prod /* word1 */;
+	__le16 word2 /* conn_dpi */;
+	__le32 reg9 /* reg9 */;
+	__le16 word3 /* word3 */;
+	__le16 e4_reserved9 /* word4 */;
+};
+
+/*
+ * The eth storm context for the Ystorm
+ */
+struct e5_ystorm_eth_conn_st_ctx {
+	__le32 reserved[8];
+};
+
+struct e5_ystorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 state_and_core_id /* state_and_core_id */;
+	u8 flags0;
+#define E5_YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1 /* exist_in_qm0 */
+#define E5_YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define E5_YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1 /* exist_in_qm1 */
+#define E5_YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define E5_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define E5_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define E5_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define E5_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define E5_YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define E5_YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+	u8 flags1;
+#define E5_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1 /* cf0en */
+#define E5_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define E5_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1 /* cf1en */
+#define E5_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define E5_YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1 /* cf2en */
+#define E5_YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1 /* rule0en */
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1 /* rule1en */
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1 /* rule2en */
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1 /* rule3en */
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1 /* rule4en */
+#define E5_YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+	u8 tx_q0_int_coallecing_timeset /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 terminate_spqe /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 tx_bd_cons_upd /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
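Note: the *_MASK/*_SHIFT pairs above pack several sub-fields into each flags
byte. A minimal sketch of driver-side access, assuming the generic
SET_FIELD()/GET_FIELD() helpers from ecore.h:

	struct e5_ystorm_eth_conn_ag_ctx ag_ctx = { 0 };

	/* enable the TX BD consumer update CF (cf0en bit of flags1) */
	SET_FIELD(ag_ctx.flags1,
		  E5_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN, 1);

	/* read the same bit back */
	u8 cf0_en = GET_FIELD(ag_ctx.flags1,
			      E5_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN);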
+struct e5_ustorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	u8 flags0;
+#define E5_USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1 /* exist_in_qm0 */
+#define E5_USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
+#define E5_USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1 /* exist_in_qm1 */
+#define E5_USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3 /* timer0cf */
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
+#define E5_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3 /* timer1cf */
+#define E5_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
+#define E5_USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
+#define E5_USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
+	u8 flags1;
+#define E5_USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3 /* timer_stop_all */
+#define E5_USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3 /* cf4 */
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
+#define E5_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3 /* cf5 */
+#define E5_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3 /* cf6 */
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
+	u8 flags2;
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf0en */
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+#define E5_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf1en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+#define E5_USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define E5_USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
+#define E5_USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define E5_USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1 /* cf4en */
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
+#define E5_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1 /* cf5en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1 /* cf6en */
+#define E5_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1 /* rule0en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
+	u8 flags3;
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1 /* rule1en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1 /* rule2en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1 /* rule3en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1 /* rule4en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1 /* rule8en */
+#define E5_USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
+	u8 flags4;
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED1_MASK            0x1 /* bit2 */
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED1_SHIFT           0
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED2_MASK            0x1 /* bit3 */
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED2_SHIFT           1
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED3_MASK            0x3 /* cf7 */
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED3_SHIFT           2
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED4_MASK            0x3 /* cf8 */
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED4_SHIFT           4
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED5_MASK            0x1 /* cf7en */
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED5_SHIFT           6
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED6_MASK            0x1 /* cf8en */
+#define E5_USTORM_ETH_CONN_AG_CTX_E4_RESERVED6_SHIFT           7
+	u8 byte2 /* byte2 */;
+	__le16 word0 /* conn_dpi */;
+	__le16 tx_bd_cons /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le32 reg2 /* reg2 */;
+	__le32 tx_int_coallecing_timeset /* reg3 */;
+	__le16 tx_drv_bd_cons /* word2 */;
+	__le16 rx_drv_cqe_cons /* word3 */;
 };
 
+/*
+ * The eth storm context for the Ustorm
+ */
+struct e5_ustorm_eth_conn_st_ctx {
+	__le32 reserved[52];
+};
+
+/*
+ * The eth storm context for the Mstorm
+ */
+struct e5_mstorm_eth_conn_st_ctx {
+	__le32 reserved[8];
+};
+
+/*
+ * eth connection context
+ */
+struct e5_eth_conn_context {
+	struct e5_tstorm_eth_conn_st_ctx tstorm_st_context /* tstorm storm context */;
+	struct regpair tstorm_st_padding[2] /* padding */;
+	struct e5_pstorm_eth_conn_st_ctx pstorm_st_context /* pstorm storm context */;
+	struct e5_xstorm_eth_conn_st_ctx xstorm_st_context /* xstorm storm context */;
+	struct e5_xstorm_eth_conn_ag_ctx xstorm_ag_context /* xstorm aggregative context */;
+	struct regpair xstorm_ag_padding[2] /* padding */;
+	struct e5_tstorm_eth_conn_ag_ctx tstorm_ag_context /* tstorm aggregative context */;
+	struct e5_ystorm_eth_conn_st_ctx ystorm_st_context /* ystorm storm context */;
+	struct e5_ystorm_eth_conn_ag_ctx ystorm_ag_context /* ystorm aggregative context */;
+	struct e5_ustorm_eth_conn_ag_ctx ustorm_ag_context /* ustorm aggregative context */;
+	struct e5_ustorm_eth_conn_st_ctx ustorm_st_context /* ustorm storm context */;
+	struct regpair ustorm_st_padding[2] /* padding */;
+	struct e5_mstorm_eth_conn_st_ctx mstorm_st_context /* mstorm storm context */;
+};
+
+
+struct e5_tstorm_eth_task_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	__le16 word0 /* icid */;
+	u8 flags0;
+#define E5_TSTORM_ETH_TASK_AG_CTX_NIBBLE0_MASK  0xF /* connection_type */
+#define E5_TSTORM_ETH_TASK_AG_CTX_NIBBLE0_SHIFT 0
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT0_SHIFT    4
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT1_SHIFT    5
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT2_MASK     0x1 /* bit2 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT2_SHIFT    6
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT3_MASK     0x1 /* bit3 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT3_SHIFT    7
+	u8 flags1;
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT4_MASK     0x1 /* bit4 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT4_SHIFT    0
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT5_MASK     0x1 /* bit5 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_BIT5_SHIFT    1
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF0_SHIFT     2
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF1_SHIFT     4
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF2_SHIFT     6
+	u8 flags2;
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF3_SHIFT     0
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF4_SHIFT     2
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF5_SHIFT     4
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF6_SHIFT     6
+	u8 flags3;
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF7_MASK      0x3 /* cf7 */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF7_SHIFT     0
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF0EN_SHIFT   2
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF1EN_SHIFT   3
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF2EN_SHIFT   4
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF3EN_SHIFT   5
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF4EN_SHIFT   6
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF5EN_SHIFT   7
+	u8 flags4;
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF6EN_SHIFT   0
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_CF7EN_SHIFT   1
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE0EN_SHIFT 2
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE1EN_SHIFT 3
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE2EN_SHIFT 4
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE3EN_SHIFT 5
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE4EN_SHIFT 6
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define E5_TSTORM_ETH_TASK_AG_CTX_RULE5EN_SHIFT 7
+	u8 timestamp /* byte2 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	struct regpair packets64 /* regpair0 */;
+	struct regpair bytes64 /* regpair1 */;
+};
+
+struct e5_mstorm_eth_task_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	__le16 word0 /* icid */;
+	u8 flags0;
+#define E5_MSTORM_ETH_TASK_AG_CTX_NIBBLE0_MASK       0xF /* connection_type */
+#define E5_MSTORM_ETH_TASK_AG_CTX_NIBBLE0_SHIFT      0
+#define E5_MSTORM_ETH_TASK_AG_CTX_BIT0_MASK          0x1 /* exist_in_qm0 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_BIT0_SHIFT         4
+#define E5_MSTORM_ETH_TASK_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_BIT1_SHIFT         5
+#define E5_MSTORM_ETH_TASK_AG_CTX_BIT2_MASK          0x1 /* bit2 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_BIT2_SHIFT         6
+#define E5_MSTORM_ETH_TASK_AG_CTX_BIT3_MASK          0x1 /* bit3 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_BIT3_SHIFT         7
+	u8 flags1;
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF0_SHIFT          0
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF1_SHIFT          2
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF2_SHIFT          4
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF0EN_SHIFT        6
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF1EN_SHIFT        7
+	u8 flags2;
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_CF2EN_SHIFT        0
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE0EN_SHIFT      1
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE1EN_SHIFT      2
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE2EN_SHIFT      3
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE3EN_SHIFT      4
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE4EN_SHIFT      5
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE5EN_MASK       0x1 /* rule5en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE5EN_SHIFT      6
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE6EN_MASK       0x1 /* rule6en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_RULE6EN_SHIFT      7
+	u8 flags3;
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED1_MASK  0x1 /* bit4 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED1_SHIFT 0
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED2_MASK  0x3 /* cf3 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED2_SHIFT 1
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED3_MASK  0x3 /* cf4 */
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED3_SHIFT 3
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED4_MASK  0x1 /* cf3en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED4_SHIFT 5
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED5_MASK  0x1 /* cf4en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED5_SHIFT 6
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED6_MASK  0x1 /* rule7en */
+#define E5_MSTORM_ETH_TASK_AG_CTX_E4_RESERVED6_SHIFT 7
+	__le32 reg0 /* reg0 */;
+	u8 timestamp /* byte2 */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 e4_reserved7 /* icid_ext */;
+	struct regpair packets64 /* regpair0 */;
+	struct regpair bytes64 /* regpair1 */;
+};
+
+/*
+ * eth task context
+ */
+struct e5_eth_task_context {
+	struct e5_tstorm_eth_task_ag_ctx tstorm_ag_context /* tstorm task aggregative context */;
+	struct e5_mstorm_eth_task_ag_ctx mstorm_ag_context /* mstorm task aggregative context */;
+};
+
+
+
+
+
+
+
+
+
 
 /*
  * Ethernet filter types: mac/vlan/pair
  */
 enum eth_error_code {
 	ETH_OK = 0x00 /* command succeeded */,
-/* mac add filters command failed due to cam full state */
-	ETH_FILTERS_MAC_ADD_FAIL_FULL,
+	ETH_FILTERS_MAC_ADD_FAIL_FULL /* mac add filters command failed due to cam full state */,
 /* mac add filters command failed due to mtt2 full state */
 	ETH_FILTERS_MAC_ADD_FAIL_FULL_MTT2,
-/* mac add filters command failed due to duplicate mac address */
+/* mac add filters command failed due to duplicate mac address */
 	ETH_FILTERS_MAC_ADD_FAIL_DUP_MTT2,
-/* mac add filters command failed due to duplicate mac address */
+/* mac add filters command failed due to duplicate mac address */
 	ETH_FILTERS_MAC_ADD_FAIL_DUP_STT2,
-/* mac delete filters command failed due to not found state */
-	ETH_FILTERS_MAC_DEL_FAIL_NOF,
+	ETH_FILTERS_MAC_DEL_FAIL_NOF /* mac delete filters command failed due to not found state */,
 /* mac delete filters command failed due to not found state */
 	ETH_FILTERS_MAC_DEL_FAIL_NOF_MTT2,
 /* mac delete filters command failed due to not found state */
 	ETH_FILTERS_MAC_DEL_FAIL_NOF_STT2,
-/* mac add filters command failed due to MAC Address of 00:00:00:00:00:00 */
+/* mac add filters command failed due to MAC Address of 00:00:00:00:00:00 */
 	ETH_FILTERS_MAC_ADD_FAIL_ZERO_MAC,
-/* vlan add filters command failed due to cam full state */
-	ETH_FILTERS_VLAN_ADD_FAIL_FULL,
+	ETH_FILTERS_VLAN_ADD_FAIL_FULL /* vlan add filters command failed due to cam full state */,
 /* vlan add filters command failed due to duplicate VLAN filter */
 	ETH_FILTERS_VLAN_ADD_FAIL_DUP,
 /* vlan delete filters command failed due to not found state */
@@ -603,21 +1325,33 @@ enum eth_error_code {
 	ETH_FILTERS_VLAN_DEL_FAIL_NOF_TT1,
 /* pair add filters command failed due to duplicate request */
 	ETH_FILTERS_PAIR_ADD_FAIL_DUP,
-/* pair add filters command failed due to full state */
-	ETH_FILTERS_PAIR_ADD_FAIL_FULL,
-/* pair add filters command failed due to full state */
-	ETH_FILTERS_PAIR_ADD_FAIL_FULL_MAC,
-/* pair add filters command failed due not found state */
-	ETH_FILTERS_PAIR_DEL_FAIL_NOF,
-/* pair add filters command failed due not found state */
-	ETH_FILTERS_PAIR_DEL_FAIL_NOF_TT1,
-/* pair add filters command failed due to MAC Address of 00:00:00:00:00:00 */
+	ETH_FILTERS_PAIR_ADD_FAIL_FULL /* pair add filters command failed due to full state */,
+	ETH_FILTERS_PAIR_ADD_FAIL_FULL_MAC /* pair add filters command failed due to full state */,
+	ETH_FILTERS_PAIR_DEL_FAIL_NOF /* pair delete filters command failed due to not found state */,
+/* pair delete filters command failed due to not found state */
+	ETH_FILTERS_PAIR_DEL_FAIL_NOF_TT1,
+/* pair add filters command failed due to MAC Address of 00:00:00:00:00:00 */
 	ETH_FILTERS_PAIR_ADD_FAIL_ZERO_MAC,
-/* vni add filters command failed due to cam full state */
-	ETH_FILTERS_VNI_ADD_FAIL_FULL,
+	ETH_FILTERS_VNI_ADD_FAIL_FULL /* vni add filters command failed due to cam full state */,
 /* vni add filters command failed due to duplicate VNI filter */
 	ETH_FILTERS_VNI_ADD_FAIL_DUP,
 	ETH_FILTERS_GFT_UPDATE_FAIL /* Fail update GFT filter. */,
+/* Fail load VF data like BD or CQE ring next page address. */
+	ETH_RX_QUEUE_FAIL_LOAD_VF_DATA,
+	ETH_FILTERS_GFS_ADD_FILTER_FAIL_MAX_HOPS /* Fail to add a GFS filter - max hops reached. */,
+/* Fail to add a GFS filter - no free entry to add, database full. */
+	ETH_FILTERS_GFS_ADD_FILTER_FAIL_NO_FREE_ENRTY,
+/* Fail to add a GFS filter  - entry already added. */
+	ETH_FILTERS_GFS_ADD_FILTER_FAIL_ALREADY_EXISTS,
+	ETH_FILTERS_GFS_ADD_FILTER_FAIL_PCI_ERROR /* Fail to add a GFS filter  - PCI error. */,
+/* Fail to add a GFS filter  - magic number error. */
+	ETH_FILTERS_GFS_ADD_FILTER_FAIL_MAGIC_NUM_ERROR,
+/* Fail to delete GFS filter - max hops reached. */
+	ETH_FILTERS_GFS_DEL_FILTER_FAIL_MAX_HOPS,
+/* Fail to delete GFS filter - no matched entry to delete. */
+	ETH_FILTERS_GFS_DEL_FILTER_FAIL_NO_MATCH_ENRTY,
+	ETH_FILTERS_GFS_DEL_FILTER_FAIL_PCI_ERROR /* Fail to delete GFS filter - PCI error. */,
+/* Fail to delete GFS filter - magic number error. */
+	ETH_FILTERS_GFS_DEL_FILTER_FAIL_MAGIC_NUM_ERROR,
 	MAX_ETH_ERROR_CODE
 };
 
@@ -644,6 +1378,12 @@ enum eth_event_opcode {
 	ETH_EVENT_RX_CREATE_GFT_ACTION,
 	ETH_EVENT_RX_GFT_UPDATE_FILTER,
 	ETH_EVENT_TX_QUEUE_UPDATE,
+	ETH_EVENT_RGFS_ADD_FILTER,
+	ETH_EVENT_RGFS_DEL_FILTER,
+	ETH_EVENT_TGFS_ADD_FILTER,
+	ETH_EVENT_TGFS_DEL_FILTER,
+	ETH_EVENT_GFS_COUNTERS_REPORT,
+	ETH_EVENT_HAIRPIN_QUEUE_STOP,
 	MAX_ETH_EVENT_OPCODE
 };
 
@@ -655,8 +1395,7 @@ enum eth_filter_action {
 	ETH_FILTER_ACTION_UNUSED,
 	ETH_FILTER_ACTION_REMOVE,
 	ETH_FILTER_ACTION_ADD,
-/* Remove all filters of given type and vport ID. */
-	ETH_FILTER_ACTION_REMOVE_ALL,
+	ETH_FILTER_ACTION_REMOVE_ALL /* Remove all filters of given type and vport ID. */,
 	MAX_ETH_FILTER_ACTION
 };
 
@@ -665,9 +1404,9 @@ enum eth_filter_action {
  * Command for adding/removing a classification rule $$KEEP_ENDIANNESS$$
  */
 struct eth_filter_cmd {
-	u8 type /* Filter Type (MAC/VLAN/Pair/VNI) */;
+	u8 type /* Filter Type (MAC/VLAN/Pair/VNI) (use enum eth_filter_type) */;
 	u8 vport_id /* the vport id */;
-	u8 action /* filter command action: add/remove/replace */;
+	u8 action /* filter command action: add/remove/replace (use enum eth_filter_action) */;
 	u8 reserved0;
 	__le32 vni;
 	__le16 mac_lsb;
@@ -684,8 +1423,8 @@ struct eth_filter_cmd_header {
 	u8 rx /* If set, apply these commands to the RX path */;
 	u8 tx /* If set, apply these commands to the TX path */;
 	u8 cmd_cnt /* Number of filter commands */;
-/* 0 - dont assert in case of filter configuration error. Just return an error
- * code. 1 - assert in case of filter configuration error.
+/* 0 - don't assert in case of filter configuration error. Just return an error code. 1 - assert
+ * in case of filter configuration error.
  */
 	u8 assert_on_error;
 	u8 reserved1[4];
@@ -703,29 +1442,136 @@ enum eth_filter_type {
 	ETH_FILTER_TYPE_INNER_MAC /* Add/remove a inner MAC address */,
 	ETH_FILTER_TYPE_INNER_VLAN /* Add/remove a inner VLAN */,
 	ETH_FILTER_TYPE_INNER_PAIR /* Add/remove a inner MAC-VLAN pair */,
-/* Add/remove a inner MAC-VNI pair */
-	ETH_FILTER_TYPE_INNER_MAC_VNI_PAIR,
+	ETH_FILTER_TYPE_INNER_MAC_VNI_PAIR /* Add/remove a inner MAC-VNI pair */,
 	ETH_FILTER_TYPE_MAC_VNI_PAIR /* Add/remove a MAC-VNI pair */,
 	ETH_FILTER_TYPE_VNI /* Add/remove a VNI */,
 	MAX_ETH_FILTER_TYPE
 };
 
 
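A hedged sketch of filling one classification command for a unicast MAC add,
using the two enums above (the remaining MAC halves of struct eth_filter_cmd
are elided in this hunk and are set the same way as mac_lsb;
rte_cpu_to_le_16() is DPDK's byte-order helper, the values are illustrative):

	struct eth_filter_cmd cmd = { 0 };

	cmd.type = ETH_FILTER_TYPE_MAC;     /* enum eth_filter_type */
	cmd.action = ETH_FILTER_ACTION_ADD; /* enum eth_filter_action */
	cmd.vport_id = 0;                   /* illustrative vport */
	cmd.mac_lsb = rte_cpu_to_le_16(0x5634);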
+/*
+ * GFS copy condition.
+ */
+enum eth_gfs_copy_condition {
+	ETH_GFS_COPY_ALWAYS /* Copy always. */,
+	ETH_GFS_COPY_TTL0 /* Copy if TTL=0. */,
+	ETH_GFS_COPY_TTL1 /* Copy if TTL=1. */,
+	ETH_GFS_COPY_TTL0_TTL1 /* Copy if TTL=0 or TTL=1. */,
+	ETH_GFS_COPY_TCP_FLAGS /* Copy if one of TCP flags from tcp_flags set in packet. */,
+	MAX_ETH_GFS_COPY_CONDITION
+};
+
+
+/*
+ * GFS Flow Counters Report Table Header
+ */
+struct eth_gfs_counters_report_header {
+	__le16 num_valid_entries;
+	u8 reserved[2];
+	__le32 timestamp /* table report timestamp */;
+};
+
+/*
+ * GFS Flow Counters Report Entry
+ */
+struct eth_gfs_counters_report_entry {
+	struct regpair packets /* Packets Counter */;
+	struct regpair bytes /* Bytes Counter */;
+/* TID of GFS Counter. Shall be Reported by FW with zeroised segment_id bits (as received from host
+ * upon counter action configuration)
+ */
+	struct hsi_tid tid;
+	u8 timestamp /* Last-Updated timestamp with timestamp_log2_factor applied  */;
+	u8 reserved[3];
+};
+
+/*
+ * GFS Flow Counters Report Table.
+ */
+struct eth_gfs_counters_report {
+	struct eth_gfs_counters_report_header table_header;
+	struct eth_gfs_counters_report_entry table_entries[NUM_OF_LTIDS_E5];
+};
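A minimal sketch of walking the report table once FW has filled it, assuming
DPDK's rte_le_to_cpu_*() helpers and the lo/hi regpair layout from
common_hsi.h:

	static void walk_gfs_report(const struct eth_gfs_counters_report *rep)
	{
		u16 i, n = rte_le_to_cpu_16(rep->table_header.num_valid_entries);

		for (i = 0; i < n && i < NUM_OF_LTIDS_E5; i++) {
			const struct eth_gfs_counters_report_entry *e =
				&rep->table_entries[i];
			u64 pkts = ((u64)rte_le_to_cpu_32(e->packets.hi) << 32) |
				   rte_le_to_cpu_32(e->packets.lo);

			/* e->tid identifies the counter, e->timestamp its last
			 * update (scaled by timestamp_log2_factor)
			 */
			(void)pkts;
		}
	}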
+
+
+
+
+/*
+ * GFS packet destination type.
+ */
+enum eth_gfs_dest_type {
+	ETH_GFS_DEST_DROP /* Drop TX or RX packet. */,
+	ETH_GFS_DEST_RX_VPORT /* Forward RX or TX packet to RX VPORT. */,
+	ETH_GFS_DEST_RX_QID /* Forward RX packet to RX queue. */,
+	ETH_GFS_DEST_TX_CID /* Forward RX packet to TX queue given by CID (hairpinning). */,
+/* Bypass TX switching for TX packet. TX destination given by tx_dest value. */
+	ETH_GFS_DEST_TXSW_BYPASS,
+	MAX_ETH_GFS_DEST_TYPE
+};
+
+
+/*
+ * GFS packet modified header position.
+ */
+enum eth_gfs_modified_header_position {
+	ETH_GFS_MODIFY_HDR_SINGLE /* Modify single header in no tunnel packet. */,
+	ETH_GFS_MODIFY_HDR_TUNNEL /* Modify tunnel header. */,
+	ETH_GFS_MODIFY_HDR_INNER /* Modify inner header in tunnel packet. */,
+	MAX_ETH_GFS_MODIFIED_HEADER_POSITION
+};
+
+
+enum eth_gfs_pop_hdr_type {
+	ETH_GFS_POP_UNUSED /* Reserved for initialized check */,
+	ETH_GFS_POP_ETH /* Pop outer ETH header. */,
+	ETH_GFS_POP_TUNN /* Pop tunnel header. */,
+	ETH_GFS_POP_TUNN_ETH /* Pop tunnel and inner ETH header. */,
+	MAX_ETH_GFS_POP_HDR_TYPE
+};
+
+
+enum eth_gfs_push_hdr_type {
+	ETH_GFS_PUSH_UNUSED /* Reserved for initialized check */,
+	ETH_GFS_PUSH_ETH /* Push ETH header. */,
+	ETH_GFS_PUSH_GRE_V4 /* Push GRE over IPV4 tunnel header (ETH-IPV4-GRE). */,
+	ETH_GFS_PUSH_GRE_V6 /* Push GRE over IPV6 tunnel header (ETH-IPV6-GRE). */,
+	ETH_GFS_PUSH_VXLAN_V4 /* Push VXLAN over IPV4 tunnel header (ETH-IPV4-UDP-VXLAN). */,
+	ETH_GFS_PUSH_VXLAN_V6 /* Push VXLAN over IPV6 tunnel header (ETH-IPV6-UDP-VXLAN). */,
+/* Push VXLAN over IPV4 tunnel header and inner ETH header (ETH-IPV4-UDP-VXLAN-ETH). */
+	ETH_GFS_PUSH_VXLAN_V4_ETH,
+/* Push VXLAN over IPV6 tunnel header and inner ETH header (ETH-IPV6-UDP-VXLAN-ETH). */
+	ETH_GFS_PUSH_VXLAN_V6_ETH,
+	MAX_ETH_GFS_PUSH_HDR_TYPE
+};
+
+
+/*
+ * GFS redirect condition.
+ */
+enum eth_gfs_redirect_condition {
+	ETH_GFS_REDIRECT_ALWAYS /* Redirect always. */,
+	ETH_GFS_REDIRECT_TTL0 /* Redirect if TTL=0. */,
+	ETH_GFS_REDIRECT_TTL1 /* Redirect if TTL=1. */,
+	ETH_GFS_REDIRECT_TTL0_TTL1 /* Redirect if TTL=0 or TTL=1. */,
+	MAX_ETH_GFS_REDIRECT_CONDITION
+};
+
+
 /*
  * inner to inner vlan priority translation configurations
  */
 struct eth_in_to_in_pri_map_cfg {
-/* If set, non_rdma_in_to_in_pri_map or rdma_in_to_in_pri_map will be used for
- * inner to inner priority mapping depending on protocol type
+/* If set, non_rdma_in_to_in_pri_map or rdma_in_to_in_pri_map will be used for inner to inner
+ * priority mapping depending on protocol type
  */
 	u8 inner_vlan_pri_remap_en;
 	u8 reserved[7];
-/* Map for inner to inner vlan priority translation for Non RDMA protocols, used
- * for TenantDcb. Set inner_vlan_pri_remap_en, when init the map.
+/* Map for inner to inner vlan priority translation for Non RDMA protocols, used for TenantDcb. Set
+ * inner_vlan_pri_remap_en, when init the map.
  */
 	u8 non_rdma_in_to_in_pri_map[8];
-/* Map for inner to inner vlan priority translation for RDMA protocols, used for
- * TenantDcb. Set inner_vlan_pri_remap_en, when init the map.
+/* Map for inner to inner vlan priority translation for RDMA protocols, used for TenantDcb. Set
+ * inner_vlan_pri_remap_en, when init the map.
  */
 	u8 rdma_in_to_in_pri_map[8];
 };
@@ -736,10 +1582,8 @@ struct eth_in_to_in_pri_map_cfg {
  */
 enum eth_ipv4_frag_type {
 	ETH_IPV4_NOT_FRAG /* IPV4 Packet Not Fragmented */,
-/* First Fragment of IPv4 Packet (contains headers) */
-	ETH_IPV4_FIRST_FRAG,
-/* Non-First Fragment of IPv4 Packet (does not contain headers) */
-	ETH_IPV4_NON_FIRST_FRAG,
+	ETH_IPV4_FIRST_FRAG /* First Fragment of IPv4 Packet (contains headers) */,
+	ETH_IPV4_NON_FIRST_FRAG /* Non-First Fragment of IPv4 Packet (does not contain headers) */,
 	MAX_ETH_IPV4_FRAG_TYPE
 };
 
@@ -768,20 +1612,21 @@ enum eth_ramrod_cmd_id {
 	ETH_RAMROD_TX_QUEUE_STOP /* TX Queue Stop Ramrod */,
 	ETH_RAMROD_FILTERS_UPDATE /* Add or Remove Mac/Vlan/Pair filters */,
 	ETH_RAMROD_RX_QUEUE_UPDATE /* RX Queue Update Ramrod */,
-/* RX - Create an Openflow Action */
-	ETH_RAMROD_RX_CREATE_OPENFLOW_ACTION,
-/* RX - Add an Openflow Filter to the Searcher */
-	ETH_RAMROD_RX_ADD_OPENFLOW_FILTER,
-/* RX - Delete an Openflow Filter to the Searcher */
-	ETH_RAMROD_RX_DELETE_OPENFLOW_FILTER,
-/* RX - Add a UDP Filter to the Searcher */
-	ETH_RAMROD_RX_ADD_UDP_FILTER,
-/* RX - Delete a UDP Filter to the Searcher */
-	ETH_RAMROD_RX_DELETE_UDP_FILTER,
+	ETH_RAMROD_RX_CREATE_OPENFLOW_ACTION /* RX - Create an Openflow Action */,
+	ETH_RAMROD_RX_ADD_OPENFLOW_FILTER /* RX - Add an Openflow Filter to the Searcher */,
+	ETH_RAMROD_RX_DELETE_OPENFLOW_FILTER /* RX - Delete an Openflow Filter to the Searcher */,
+	ETH_RAMROD_RX_ADD_UDP_FILTER /* RX - Add a UDP Filter to the Searcher */,
+	ETH_RAMROD_RX_DELETE_UDP_FILTER /* RX - Delete a UDP Filter to the Searcher */,
 	ETH_RAMROD_RX_CREATE_GFT_ACTION /* RX - Create a Gft Action */,
-/* RX - Add/Delete a GFT Filter to the Searcher */
-	ETH_RAMROD_GFT_UPDATE_FILTER,
+	ETH_RAMROD_RX_UPDATE_GFT_FILTER /* RX - Add/Delete a GFT Filter to the Searcher */,
 	ETH_RAMROD_TX_QUEUE_UPDATE /* TX Queue Update Ramrod */,
+	ETH_RAMROD_RGFS_FILTER_ADD /* add new RGFS filter */,
+	ETH_RAMROD_RGFS_FILTER_DEL /* delete RGFS filter */,
+	ETH_RAMROD_TGFS_FILTER_ADD /* add new TGFS filter */,
+	ETH_RAMROD_TGFS_FILTER_DEL /* delete TGFS filter */,
+	ETH_RAMROD_GFS_COUNTERS_REPORT /* GFS Flow Counters Report */,
+/* Hairpin Queue Stop Ramrod (Stops both Rx and Tx Hairpin Queues) */
+	ETH_RAMROD_HAIRPIN_QUEUE_STOP,
 	MAX_ETH_RAMROD_CMD_ID
 };
 
@@ -791,12 +1636,11 @@ enum eth_ramrod_cmd_id {
  */
 struct eth_return_code {
 	u8 value;
-/* error code (use enum eth_error_code) */
-#define ETH_RETURN_CODE_ERR_CODE_MASK  0x3F
+#define ETH_RETURN_CODE_ERR_CODE_MASK  0x3F /* error code (use enum eth_error_code) */
 #define ETH_RETURN_CODE_ERR_CODE_SHIFT 0
 #define ETH_RETURN_CODE_RESERVED_MASK  0x1
 #define ETH_RETURN_CODE_RESERVED_SHIFT 6
-/* rx path - 0, tx path - 1 */
+/* Vport filter update ramrod fail path: 0 - rx path, 1 - tx path */
 #define ETH_RETURN_CODE_RX_TX_MASK     0x1
 #define ETH_RETURN_CODE_RX_TX_SHIFT    7
 };
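A small sketch of decoding the completion status above, again assuming the
generic GET_FIELD() helper from ecore.h:

	static bool filter_failed_on_tx(struct eth_return_code rc)
	{
		return GET_FIELD(rc.value, ETH_RETURN_CODE_ERR_CODE) != ETH_OK &&
		       GET_FIELD(rc.value, ETH_RETURN_CODE_RX_TX) == 1;
	}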
@@ -806,14 +1650,12 @@ struct eth_return_code {
  * tx destination enum
  */
 enum eth_tx_dst_mode_config_enum {
-/* tx destination configuration override is disabled */
-	ETH_TX_DST_MODE_CONFIG_DISABLE,
-/* tx destination configuration override is enabled, vport and tx dst will be
- * taken from from 4th bd
+	ETH_TX_DST_MODE_CONFIG_DISABLE /* tx destination configuration override is disabled */,
+/* tx destination configuration override is enabled, vport and tx dst will be taken from 4th bd
  */
 	ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD,
-/* tx destination configuration override is enabled, vport and tx dst will be
- * taken from from vport data
+/* tx destination configuration override is enabled, vport and tx dst will be taken from vport
+ * data
  */
 	ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_VPORT,
 	MAX_ETH_TX_DST_MODE_CONFIG_ENUM
@@ -825,8 +1667,7 @@ enum eth_tx_dst_mode_config_enum {
  */
 enum eth_tx_err {
 	ETH_TX_ERR_DROP /* Drop erroneous packet. */,
-/* Assert an interrupt for PF, declare as malicious for VF */
-	ETH_TX_ERR_ASSERT_MALICIOUS,
+	ETH_TX_ERR_ASSERT_MALICIOUS /* Assert an interrupt for PF, declare as malicious for VF */,
 	MAX_ETH_TX_ERR
 };
 
@@ -848,21 +1689,22 @@ struct eth_tx_err_vals {
 /* Packet with illegal type of inband tag (use enum eth_tx_err) */
 #define ETH_TX_ERR_VALS_ILLEGAL_INBAND_TAGS_MASK          0x1
 #define ETH_TX_ERR_VALS_ILLEGAL_INBAND_TAGS_SHIFT         3
-/* Packet marked for VLAN insertion when inband tag is present
- * (use enum eth_tx_err)
- */
+/* Packet marked for VLAN insertion when inband tag is present (use enum eth_tx_err) */
 #define ETH_TX_ERR_VALS_VLAN_INSERTION_W_INBAND_TAG_MASK  0x1
 #define ETH_TX_ERR_VALS_VLAN_INSERTION_W_INBAND_TAG_SHIFT 4
 /* Non LSO packet larger than MTU (use enum eth_tx_err) */
 #define ETH_TX_ERR_VALS_MTU_VIOLATION_MASK                0x1
 #define ETH_TX_ERR_VALS_MTU_VIOLATION_SHIFT               5
-/* VF/PF has sent LLDP/PFC or any other type of control packet which is not
- * allowed to (use enum eth_tx_err)
+/* VF/PF has sent LLDP/PFC or any other type of control packet which is not allowed to (use enum
+ * eth_tx_err)
  */
 #define ETH_TX_ERR_VALS_ILLEGAL_CONTROL_FRAME_MASK        0x1
 #define ETH_TX_ERR_VALS_ILLEGAL_CONTROL_FRAME_SHIFT       6
-#define ETH_TX_ERR_VALS_RESERVED_MASK                     0x1FF
-#define ETH_TX_ERR_VALS_RESERVED_SHIFT                    7
+/* Wrong BD flags. (use enum eth_tx_err) */
+#define ETH_TX_ERR_VALS_ILLEGAL_BD_FLAGS_MASK             0x1
+#define ETH_TX_ERR_VALS_ILLEGAL_BD_FLAGS_SHIFT            7
+#define ETH_TX_ERR_VALS_RESERVED_MASK                     0xFF
+#define ETH_TX_ERR_VALS_RESERVED_SHIFT                    8
 };
 
 
@@ -892,157 +1734,746 @@ struct eth_vport_rss_config {
 /* configuration of the 5-tuple capability */
 #define ETH_VPORT_RSS_CONFIG_EN_5_TUPLE_CAPABILITY_MASK  0x1
 #define ETH_VPORT_RSS_CONFIG_EN_5_TUPLE_CAPABILITY_SHIFT 6
-/* if set update the rss keys */
-#define ETH_VPORT_RSS_CONFIG_RESERVED0_MASK              0x1FF
+#define ETH_VPORT_RSS_CONFIG_RESERVED0_MASK              0x1FF /* if set update the rss keys */
 #define ETH_VPORT_RSS_CONFIG_RESERVED0_SHIFT             7
-/* The RSS engine ID. Must be allocated to each vport with RSS enabled.
- * Total number of RSS engines is ETH_RSS_ENGINE_NUM_ , according to chip type.
+/* The RSS engine ID. Must be allocated to each vport with RSS enabled. Total number of RSS engines
+ * is ETH_RSS_ENGINE_NUM_ , according to chip type.
  */
 	u8 rss_id;
-	u8 rss_mode /* The RSS mode for this function */;
-	u8 update_rss_key /* if set update the rss key */;
-/* if set update the indirection table values */
+	u8 rss_mode /* The RSS mode for this function (use enum eth_vport_rss_mode) */;
+	u8 update_rss_key /* if set - update the rss key. rss_id must be valid.  */;
+/* if set - update the indirection table values. rss_id must be valid. */
 	u8 update_rss_ind_table;
-/* if set update the capabilities and indirection table size. */
+/* if set - update the capabilities and indirection table size. rss_id must be valid. */
 	u8 update_rss_capabilities;
 	u8 tbl_size /* rss mask (Tbl size) */;
-	__le32 reserved2[2];
-/* RSS indirection table */
-	__le16 indirection_table[ETH_RSS_IND_TABLE_ENTRIES_NUM];
-/* RSS key supplied to us by OS */
-	__le32 rss_key[ETH_RSS_KEY_SIZE_REGS];
-	__le32 reserved3[2];
+/* If set and update_rss_ind_table set, update part of indirection table according to
+ * ind_table_mask.
+ */
+	u8 ind_table_mask_valid;
+	u8 reserved2[3];
+	__le16 indirection_table[ETH_RSS_IND_TABLE_ENTRIES_NUM] /* RSS indirection table */;
+/* RSS indirection table update mask. Used if update_rss_ind_table and ind_table_mask_valid set. */
+	__le32 ind_table_mask[ETH_RSS_IND_TABLE_MASK_SIZE_REGS];
+	__le32 rss_key[ETH_RSS_KEY_SIZE_REGS] /* RSS key supplied to us by OS */;
+	__le32 reserved3;
+};
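A hedged sketch of the new partial indirection-table update, assuming one
mask bit per table entry and DPDK's byte-order helpers (values illustrative):

	struct eth_vport_rss_config cfg = { 0 };
	u16 entry = 5, new_qid = 3; /* illustrative values */

	cfg.update_rss_ind_table = 1;   /* rss_id must be valid */
	cfg.ind_table_mask_valid = 1;
	cfg.indirection_table[entry] = rte_cpu_to_le_16(new_qid);
	cfg.ind_table_mask[entry / 32] =
		rte_cpu_to_le_32(1U << (entry % 32));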
+
+
+/*
+ * eth vport RSS mode
+ */
+enum eth_vport_rss_mode {
+	ETH_VPORT_RSS_MODE_DISABLED /* RSS Disabled */,
+	ETH_VPORT_RSS_MODE_REGULAR /* Regular (ndis-like) RSS */,
+	MAX_ETH_VPORT_RSS_MODE
+};
+
+
+/*
+ * Command for setting classification flags for a vport $$KEEP_ENDIANNESS$$
+ */
+struct eth_vport_rx_mode {
+	__le16 state;
+#define ETH_VPORT_RX_MODE_UCAST_DROP_ALL_MASK          0x1 /* drop all unicast packets */
+#define ETH_VPORT_RX_MODE_UCAST_DROP_ALL_SHIFT         0
+/* accept all unicast packets (subject to vlan) */
+#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL_MASK        0x1
+#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL_SHIFT       1
+/* accept all unmatched unicast packets */
+#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED_MASK  0x1
+#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED_SHIFT 2
+#define ETH_VPORT_RX_MODE_MCAST_DROP_ALL_MASK          0x1 /* drop all multicast packets */
+#define ETH_VPORT_RX_MODE_MCAST_DROP_ALL_SHIFT         3
+/* accept all multicast packets (subject to vlan) */
+#define ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL_MASK        0x1
+#define ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL_SHIFT       4
+/* accept all broadcast packets (subject to vlan) */
+#define ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_MASK        0x1
+#define ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_SHIFT       5
+/* accept any VNI in tunnel VNI classification. Used for default queue. */
+#define ETH_VPORT_RX_MODE_ACCEPT_ANY_VNI_MASK          0x1
+#define ETH_VPORT_RX_MODE_ACCEPT_ANY_VNI_SHIFT         6
+#define ETH_VPORT_RX_MODE_RESERVED1_MASK               0x1FF
+#define ETH_VPORT_RX_MODE_RESERVED1_SHIFT              7
+};
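A minimal sketch of composing the state bits in host order before storing
them little-endian, assuming SET_FIELD() from ecore.h:

	struct eth_vport_rx_mode rx_mode;
	u16 state = 0;

	SET_FIELD(state, ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL, 1);
	SET_FIELD(state, ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL, 1);
	rx_mode.state = rte_cpu_to_le_16(state);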
+
+
+/*
+ * Command for setting tpa parameters
+ */
+struct eth_vport_tpa_param {
+	u8 tpa_ipv4_en_flg /* Enable TPA for IPv4 packets */;
+	u8 tpa_ipv6_en_flg /* Enable TPA for IPv6 packets */;
+	u8 tpa_ipv4_tunn_en_flg /* Enable TPA for IPv4 over tunnel */;
+	u8 tpa_ipv6_tunn_en_flg /* Enable TPA for IPv6 over tunnel */;
+/* If set, start each TPA segment on new BD (GRO mode). One BD per segment allowed. */
+	u8 tpa_pkt_split_flg;
+/* If set, put header of first TPA segment on first BD and data on second BD. */
+	u8 tpa_hdr_data_split_flg;
+	u8 tpa_gro_consistent_flg /* If set, GRO data consistency is checked for TPA continue */;
+	u8 tpa_max_aggs_num /* maximum number of opened aggregations per v-port  */;
+	__le16 tpa_max_size /* maximal size for the aggregated TPA packets */;
+/* minimum TCP payload size for a packet to start aggregation */
+	__le16 tpa_min_size_to_start;
+/* minimum TCP payload size for a packet to continue aggregation */
+	__le16 tpa_min_size_to_cont;
+	u8 max_buff_num /* maximal number of buffers that can be used for one aggregation */;
+	u8 reserved;
+};
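A hedged configuration sketch with illustrative LRO-style values:

	struct eth_vport_tpa_param tpa = { 0 };

	tpa.tpa_ipv4_en_flg = 1;
	tpa.tpa_ipv6_en_flg = 1;
	tpa.tpa_max_aggs_num = 4;                       /* per vport */
	tpa.tpa_max_size = rte_cpu_to_le_16(0xffff);
	tpa.tpa_min_size_to_start = rte_cpu_to_le_16(256);
	tpa.tpa_min_size_to_cont = rte_cpu_to_le_16(256);
	tpa.max_buff_num = 8;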
+
+
+/*
+ * Command for setting classification flags for a vport $$KEEP_ENDIANNESS$$
+ */
+struct eth_vport_tx_mode {
+	__le16 state;
+#define ETH_VPORT_TX_MODE_UCAST_DROP_ALL_MASK    0x1 /* drop all unicast packets */
+#define ETH_VPORT_TX_MODE_UCAST_DROP_ALL_SHIFT   0
+/* accept all unicast packets (subject to vlan) */
+#define ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL_MASK  0x1
+#define ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL_SHIFT 1
+#define ETH_VPORT_TX_MODE_MCAST_DROP_ALL_MASK    0x1 /* drop all multicast packets */
+#define ETH_VPORT_TX_MODE_MCAST_DROP_ALL_SHIFT   2
+/* accept all multicast packets (subject to vlan) */
+#define ETH_VPORT_TX_MODE_MCAST_ACCEPT_ALL_MASK  0x1
+#define ETH_VPORT_TX_MODE_MCAST_ACCEPT_ALL_SHIFT 3
+/* accept all broadcast packets (subject to vlan) */
+#define ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL_MASK  0x1
+#define ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL_SHIFT 4
+#define ETH_VPORT_TX_MODE_RESERVED1_MASK         0x7FF
+#define ETH_VPORT_TX_MODE_RESERVED1_SHIFT        5
+};
+
+
+/*
+ * GFS action data vlan insert
+ */
+struct gfs_action_data_vlan_insert {
+	__le16 reserved1;
+	__le16 vlan;
+	__le32 reserved2[14];
+};
+
+/*
+ * GFS action data vlan remove
+ */
+struct gfs_action_data_vlan_remove {
+	__le32 reserved2[15];
+};
+
+/*
+ * GFS destination.
+ */
+struct gfs_dest_data {
+	u8 type /* Destination type (use enum eth_gfs_dest_type) */;
+	u8 vport /* Destination vport id (For Hairpinning - Tx Vport to redirect to) */;
+	__le16 rx_qid /* Destination queue ID. */;
+/* Hairpin Queue Index from base_hairpin_cid of hairpin_pf_id to redirect traffic to */
+	u8 hairpin_queue_index;
+	u8 hairpin_pf_id /* PF ID to redirect traffic to */;
+/* if set, vport field is used for hairpin traffic redirection; otherwise the default vport of the
+ * PF is used (set upon 1st Hairpin [Tx/Rx] Queue Start Ramrod)
+ */
+	u8 hairpin_vport_valid;
+	u8 tx_dest /* TX destination for TX switch bypass. (use enum dst_port_mode) */;
+/* GFS redirect or copy action hint value. Limited to 3 bits. Reported by RX CQE for RX packets or TX
+ * packets, forwarded to RX VPORT.
+ */
+	u8 drv_hint;
+	u8 reserved[3];
+};
+
+/*
+ * GFS redirect action data.
+ */
+struct gfs_action_data_redirect {
+	struct gfs_dest_data destination /* GFS destination */;
+	u8 condition /* GFS redirect condition. (use enum eth_gfs_redirect_condition) */;
+	u8 reserved1[3];
+	__le32 reserved2[11];
+};
+
+/*
+ * GFS copy action data.
+ */
+struct gfs_action_data_copy {
+	struct gfs_dest_data destination /* GFS destination */;
+/* Maximum packet length for copy. If the packet exceeds this length, it will be truncated. */
+	__le16 sample_len;
+	u8 original_flg /* If set, copy before header modification */;
+	u8 condition /* GFS copy condition. (use enum eth_gfs_copy_condition) */;
+	u8 tcp_flags /* TCP flags for copy condition. */;
+	u8 reserved1[3];
+	__le32 reserved2[10];
+};
+
+/*
+ * GFS count action data
+ */
+struct gfs_action_data_count {
+/* Note: the Segment ID is set/overridden by internal HSI Function / FW */
+	struct hsi_tid tid;
+	u8 reset_cnt /* if set, reset counter value. */;
+	u8 bitfields;
+/* Log2 factor applied to timestamp base. */
+#define GFS_ACTION_DATA_COUNT_TIMESTAMP_LOG2_FACTOR_MASK  0x1F
+#define GFS_ACTION_DATA_COUNT_TIMESTAMP_LOG2_FACTOR_SHIFT 0
+#define GFS_ACTION_DATA_COUNT_RESERVED_MASK               0x7
+#define GFS_ACTION_DATA_COUNT_RESERVED_SHIFT              5
+	u8 reserved1[2];
+	__le32 reserved2[13];
+};
+
+/*
+ * GFS modify ETH header action data
+ */
+struct gfs_action_data_hdr_modify_eth {
+/* Modified header position. (use enum eth_gfs_modified_header_position) */
+	u8 header_position;
+	u8 set_dst_mac_flg /* If set, modify destination MAC. */;
+	u8 set_src_mac_flg /* If set, modify source MAC. */;
+	u8 set_vlan_id_flg /* If set, modify VLAN ID. */;
+	u8 set_vlan_pri_flg /* If set, modify VLAN priority. */;
+	u8 vlan_pri /* VLAN priority */;
+	__le16 vlan_id /* VID */;
+	__le16 dst_mac_ind_index /* Destination MAC indirect data table index. */;
+	__le16 src_mac_ind_index /* Source MAC indirect data table index. */;
+	__le16 dst_mac_hi /* Destination Mac Bytes 0 to 1 */;
+	__le16 dst_mac_mid /* Destination Mac Bytes 2 to 3 */;
+	__le16 dst_mac_lo /* Destination Mac Bytes 4 to 5 */;
+	__le16 src_mac_hi /* Source Mac Bytes 0 to 1 */;
+	__le16 src_mac_mid /* Source Mac Bytes 2 to 3 */;
+	__le16 src_mac_lo /* Source Mac Bytes 4 to 5 */;
+	__le32 reserved2[9];
+};
+
+/*
+ * GFS modify IPV4 header action data
+ */
+struct gfs_action_data_hdr_modify_ipv4 {
+/* Modified header position. (use enum eth_gfs_modified_header_position) */
+	u8 header_position;
+	u8 set_dst_addr_flg /* If set, modify destination IP address. */;
+	u8 set_src_addr_flg /* If set, modify source IP address. */;
+	u8 set_dscp_flg /* If set, modify DSCP. */;
+	u8 set_ttl_flg /* If set, set TTL value. */;
+	u8 dec_ttl_flg /* If set, decrement TTL by 1. */;
+	u8 dscp /* DSCP */;
+	u8 ttl /* TTL */;
+	__le32 dst_addr /* IP Destination Address */;
+	__le32 src_addr /* IP Source Address */;
+	__le32 reserved2[11];
+};
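A minimal sketch of a router-style header rewrite using the struct above
(values illustrative):

	struct gfs_action_data_hdr_modify_ipv4 m = { 0 };

	m.header_position = ETH_GFS_MODIFY_HDR_SINGLE;
	m.dec_ttl_flg = 1;  /* decrement TTL by 1 */
	m.set_dscp_flg = 1;
	m.dscp = 10;        /* remark DSCP */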
+
+/*
+ * GFS modify IPV6 header action data
+ */
+struct gfs_action_data_hdr_modify_ipv6 {
+/* Modified header position. (use enum eth_gfs_modified_header_position) */
+	u8 header_position;
+	u8 set_dst_addr_flg /* If set, modify destination IP address. */;
+	u8 set_src_addr_flg /* If set, modify source IP address. */;
+	u8 set_dscp_flg /* If set, modify DSCP. */;
+	u8 set_hop_limit_flg /* If set, set hop limit value. */;
+	u8 dec_hop_limit_flg /* If set, decrement hop limit by 1. */;
+	u8 dscp /* DSCP */;
+	u8 hop_limit /* hop limit */;
+	__le16 dst_addr_ind_index /* Destination IP address indirect data table index. */;
+	__le16 src_addr_ind_index /* Source IP address indirect data table index. */;
+	__le32 dst_addr[4] /* IP Destination Address */;
+	__le32 src_addr[4] /* IP Source Address */;
+	__le32 reserved2[4];
+};
+
+/*
+ * GFS modify UDP header action data.
+ */
+struct gfs_action_data_hdr_modify_udp {
+	u8 set_dst_port_flg /* If set, modify destination port. */;
+	u8 set_src_port_flg /* If set, modify source port. */;
+	__le16 reserved1;
+	__le16 dst_port /* UDP Destination Port */;
+	__le16 src_port /* UDP Source Port */;
+	__le32 reserved2[13];
+};
+
+/*
+ * GFS modify TCP header action data.
+ */
+struct gfs_action_data_hdr_modify_tcp {
+	u8 set_dst_port_flg /* If set, modify destination port. */;
+	u8 set_src_port_flg /* If set, modify source port. */;
+	__le16 reserved1;
+	__le16 dst_port /* TCP Destination Port */;
+	__le16 src_port /* TCP Source Port */;
+	__le32 reserved2[13];
+};
+
+/*
+ * GFS modify VXLAN header action data.
+ */
+struct gfs_action_data_hdr_modify_vxlan {
+	u8 set_vni_flg /* If set, modify VNI. */;
+	u8 set_entropy_flg /* If set, use constant value for tunnel UDP source port. */;
+/* If set, calculate tunnel UDP source port from inner headers. */
+	u8 set_entropy_from_inner;
+	u8 reserved1[3];
+	__le16 entropy;
+	__le32 vni /* VNI value. */;
+	__le32 reserved2[12];
+};
+
+/*
+ * GFS modify GRE header action data.
+ */
+struct gfs_action_data_hdr_modify_gre {
+	u8 set_key_flg /* If set, modify key. */;
+	u8 reserved1[3];
+	__le32 key /* KEY value */;
+	__le32 reserved2[13];
+};
+
+/*
+ * GFS pop header action data.
+ */
+struct gfs_action_data_pop_hdr {
+/* Headers, that must be removed from packet. (use enum eth_gfs_pop_hdr_type) */
+	u8 hdr_pop_type;
+	u8 reserved1[3];
+	__le32 reserved2[14];
+};
+
+/*
+ * GFS push header action data.
+ */
+struct gfs_action_data_push_hdr {
+	u8 hdr_push_type /* Headers, that will be added. (use enum eth_gfs_push_hdr_type) */;
+	u8 reserved1[3];
+	__le32 reserved2[14];
+};
+
+union gfs_action_data_union {
+	struct gfs_action_data_vlan_insert vlan_insert;
+	struct gfs_action_data_vlan_remove vlan_remove;
+	struct gfs_action_data_redirect redirect /* Redirect action. */;
+	struct gfs_action_data_copy copy /* Copy action. */;
+	struct gfs_action_data_count count /* Count action. */;
+	struct gfs_action_data_hdr_modify_eth modify_eth /* Modify ETH header action. */;
+	struct gfs_action_data_hdr_modify_ipv4 modify_ipv4 /* Modify IPV4 header action. */;
+	struct gfs_action_data_hdr_modify_ipv6 modify_ipv6 /* Modify IPV6 header action. */;
+	struct gfs_action_data_hdr_modify_udp modify_udp /* Modify UDP header action. */;
+	struct gfs_action_data_hdr_modify_tcp modify_tcp /* Modify TCP header action. */;
+	struct gfs_action_data_hdr_modify_vxlan modify_vxlan /* Modify VXLAN header action. */;
+	struct gfs_action_data_hdr_modify_gre modify_gre /* Modify GRE header action. */;
+	struct gfs_action_data_pop_hdr pop_hdr /* Pop  headers action. */;
+	struct gfs_action_data_push_hdr push_hdr /* Push headers action. */;
+};
+
+struct gfs_action {
+	u8 action_type /* GFS action type (use enum gfs_action_type) */;
+	u8 reserved[3];
+	union gfs_action_data_union action_data /* GFS action data */;
+};
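A hedged sketch of composing one redirect action from the types above:

	struct gfs_action act = { 0 };

	act.action_type = ETH_GFS_ACTION_REDIRECT; /* enum gfs_action_type */
	act.action_data.redirect.condition = ETH_GFS_REDIRECT_ALWAYS;
	act.action_data.redirect.destination.type = ETH_GFS_DEST_RX_VPORT;
	act.action_data.redirect.destination.vport = 2; /* illustrative */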
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+/*
+ * GFS action type enum
+ */
+enum gfs_action_type {
+	ETH_GFS_ACTION_UNUSED,
+	ETH_GFS_ACTION_INSERT_VLAN,
+	ETH_GFS_ACTION_REMOVE_VLAN,
+	ETH_GFS_ACTION_REDIRECT /* Change packet destination */,
+	ETH_GFS_ACTION_COPY /* Copy or sample packet to VPORT */,
+	ETH_GFS_ACTION_COUNT /* Count flow traffic bytes and packets. */,
+	ETH_GFS_ACTION_HDR_MODIFY_ETH /* Modify ETH header */,
+	ETH_GFS_ACTION_HDR_MODIFY_IPV4 /* Modify IPV4 header */,
+	ETH_GFS_ACTION_HDR_MODIFY_IPV6 /* Modify IPV6 header */,
+	ETH_GFS_ACTION_HDR_MODIFY_UDP /* Modify UDP header */,
+	ETH_GFS_ACTION_HDR_MODIFY_TCP /* Modify TCP header */,
+	ETH_GFS_ACTION_HDR_MODIFY_VXLAN /* Modify VXLAN header */,
+	ETH_GFS_ACTION_HDR_MODIFY_GRE /* Modify GRE header */,
+	ETH_GFS_ACTION_POP_HDR /* Pop headers */,
+	ETH_GFS_ACTION_PUSH_HDR /* Push headers */,
+	ETH_GFS_ACTION_APPEND_CONTEXT /* For testing only.  */,
+	MAX_GFS_ACTION_TYPE
+};
+
+
+/*
+ * Add GFS filter command header
+ */
+struct gfs_add_filter_cmd_header {
+	struct regpair pkt_hdr_addr /* Pointer to packet header that defines GFS filter */;
+/* Write GFS hash to this address if return_hash_flg set. */
+	struct regpair return_hash_addr;
+	__le32 filter_pri /* filter priority */;
+	__le16 pkt_hdr_length /* Packet header length */;
+	u8 profile_id /* Profile id. */;
+	u8 vport_id /* Vport id. */;
+	u8 return_hash_flg /* If set, FW will write gfs_filter_hash_value to return_hash_addr. */;
+/* 0 - don't assert in case of filter configuration error, return an error code. 1 - assert in
+ * case of filter configuration error
+ */
+	u8 assert_on_error;
+	u8 reserved[6];
+};
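A minimal sketch of preparing the command header, assuming the
DMA_REGPAIR_LE() helper from ecore.h and a dma_addr_t typedef from
bcm_osal.h:

	static void fill_gfs_add_hdr(struct gfs_add_filter_cmd_header *hdr,
				     dma_addr_t pkt_hdr_phys, u16 len)
	{
		memset(hdr, 0, sizeof(*hdr));
		DMA_REGPAIR_LE(hdr->pkt_hdr_addr, pkt_hdr_phys);
		hdr->pkt_hdr_length = rte_cpu_to_le_16(len);
		hdr->profile_id = 0;      /* illustrative */
		hdr->vport_id = 0;
		hdr->assert_on_error = 0; /* report errors via return code */
	}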
+
+
+/*
+ * Ipv6 and MAC addresses to be stored in indirect table in storm ram
+ */
+struct gfs_indirect_data {
+	__le32 src_ipv6_addr[4] /* Inner or single source IPv6 address. */;
+	__le32 dst_ipv6_addr[4] /* Inner or single destination IPv6 address. */;
+	__le32 tunn_src_ipv6_addr[4] /* Tunnel source IPv6 address. */;
+	__le32 tunn_dst_ipv6_addr[4] /* Tunnel destination IPv6 address. */;
+	__le16 src_mac_addr_hi /* Inner or single source MAC address. */;
+	__le16 src_mac_addr_mid /* Inner or single source MAC address. */;
+	__le16 src_mac_addr_lo /* Inner or single source MAC address. */;
+	__le16 dst_mac_addr_hi /* Inner or single destination MAC address. */;
+	__le16 dst_mac_addr_mid /* Inner or single destination MAC address. */;
+	__le16 dst_mac_addr_lo /* Inner or single destination MAC address. */;
+	__le16 tunn_src_mac_addr_hi /* Tunnel source MAC address. */;
+	__le16 tunn_src_mac_addr_mid /* Tunnel source MAC address. */;
+	__le16 tunn_src_mac_addr_lo /* Tunnel source MAC address. */;
+	__le16 tunn_dst_mac_addr_hi /* Tunnel destination MAC address. */;
+	__le16 tunn_dst_mac_addr_mid /* Tunnel destination MAC address. */;
+	__le16 tunn_dst_mac_addr_lo /* Tunnel destination MAC address. */;
+	__le16 ipid /* Identification field in IPv4 header */;
+	u8 ipid_valid_flg /* if set, ipid field is valid */;
+/* if set, fw will update inner or single ipv6 address according to ipv6 address above */
+	u8 update_ipv6_addr;
+/* if set, fw will update tunnel ipv6 address according to ipv6 address above */
+	u8 update_tunn_ipv6_addr;
+	u8 reset_cnt_0_flg /* if set, reset counter 0 */;
+	u8 reset_cnt_1_flg /* if set, reset counter 1 */;
+	u8 reserved;
+};
+
+/*
+ * GFS filter context.
+ */
+struct gfs_filter_context_data {
+	__le32 context[24] /* GFS context. */;
+/* Ipv6 and MAC addresses to be stored in indirect table in storm ram */
+	struct gfs_indirect_data indirect_data;
+};
+
+/*
+ * GFS filter header
+ */
+struct gfs_filter_header {
+/* flow id associated with this filter. Must be unique to allow performance optimization. Limited to
+ * 31 bits.
+ */
+	__le32 flow_id;
+	__le32 flow_mark /* filter flow mark. Reported by RX CQE. */;
+	u8 num_of_actions /* Number of valid actions in actions list. */;
+/* If set, exception context will be updated. Packet header not needed in this case. Exception
+ * context allocated per PF.
+ */
+	u8 exception_context_flg;
+	u8 reserved[6];
+};
+
+/*
+ * GFS push ETH header data
+ */
+struct gfs_filter_push_data_eth {
+	u8 vlan_exist_flg /* If set, VLAN TAG exist in ETH header. */;
+	u8 reserved;
+	__le16 vlan_id /* VID */;
+	__le16 dst_mac_ind_index /* Destination MAC indirect data table index. */;
+	__le16 src_mac_ind_index /* Source MAC indirect data table index. */;
+	__le16 dst_mac_hi /* Destination Mac Bytes 0 to 1 */;
+	__le16 dst_mac_mid /* Destination Mac Bytes 2 to 3 */;
+	__le16 dst_mac_lo /* Destination Mac Bytes 4 to 5 */;
+	__le16 src_mac_hi /* Source Mac Bytes 0 to 1 */;
+	__le16 src_mac_mid /* Source Mac Bytes 2 to 3 */;
+	__le16 src_mac_lo /* Source Mac Bytes 4 to 5 */;
+};
+
+/*
+ * GFS push IPV4 header data
+ */
+struct gfs_filter_push_data_ipv4 {
+	u8 tc /* TC value */;
+	u8 ttl /* TTL value */;
+	u8 dont_frag_flag /* dont frag flag value. */;
+	u8 set_ipid_in_ram /* If set, update IPID value in ram to ipid_val. */;
+	__le16 ipid_ind_index /* IPID counter indirect data table index. */;
+	__le16 ipid_val /* Initial IPID value. Used if set_ipid_in_ram flag set. */;
+	__le32 dst_addr /* IP destination Address */;
+	__le32 src_addr /* IP source Address */;
+	__le32 reserved[7];
+};
+
+/*
+ * GFS push IPV6 header data
+ */
+struct gfs_filter_push_data_ipv6 {
+	u8 traffic_class /* traffic class */;
+	u8 hop_limit /* hop limit */;
+	__le16 dst_addr_ind_index /* destination IPv6 address indirect data table index. */;
+	__le16 src_addr_ind_index /* source IPv6 address indirect data table index. */;
+	__le16 reserved;
+	__le32 flow_label /* flow label. */;
+	__le32 dst_addr[4] /* IP Destination Address */;
+	__le32 src_addr[4] /* IP Source Address */;
+};
+
+/*
+ * GFS push IP header data
+ */
+union gfs_filter_push_data_ip {
+	struct gfs_filter_push_data_ipv4 ip_v4 /* IPV4 data */;
+	struct gfs_filter_push_data_ipv6 ip_v6 /* IPV6 data */;
+};
+
+/*
+ * GFS push VXLAN tunnel header data.
+ */
+struct gfs_filter_push_data_vxlan {
+	u8 oob_vni /* If set, use out of band VNI from RX path for hairpin traffic. */;
+	u8 udp_checksum_exist_flg /* If set, calculate UDP checksum in tunnel header. */;
+	u8 i_bit /* VXLAN I bit value. Set if VNI valid. */;
+	u8 entropy_from_inner /* If set, calculate tunnel UDP source port from inner headers. */;
+	__le16 reserved1;
+	__le16 entropy;
+	__le32 vni /* VNI value. */;
+};
+
+/*
+ * GFS push GRE tunnel header data.
+ */
+struct gfs_filter_push_data_gre {
+	u8 oob_key /* If set, use out of band KEY from RX path for hairpin traffic. */;
+	u8 checksum_exist_flg /* If set, GRE checksum exist in tunnel header. */;
+	u8 key_exist_flg /* If set, GRE key exist in tunnel header. */;
+	u8 reserved1;
+	__le32 reserved2;
+	__le32 key /* KEY value */;
+};
+
+/*
+ * GFS push tunnel header data
+ */
+union gfs_filter_push_data_tunnel_hdr {
+	struct gfs_filter_push_data_vxlan vxlan /* VXLAN data */;
+	struct gfs_filter_push_data_gre gre /* GRE data */;
+};
+
+/*
+ * GFS filter push data.
+ */
+struct gfs_filter_push_data {
+	struct gfs_filter_push_data_eth tunn_eth_header /* Tunnel ETH header data */;
+	struct gfs_filter_push_data_eth inner_eth_header /* Inner ETH header data */;
+	union gfs_filter_push_data_ip tunn_ip_header /* Tunnel IP header data */;
+	union gfs_filter_push_data_tunnel_hdr tunn_header /* Tunnel header data */;
+};
+
+/*
+ * GFS filter context build input data.
+ */
+struct gfs_filter_ctx_build_data {
+	struct gfs_filter_header filter_hdr /* GFS filter header */;
+	struct gfs_filter_push_data push_data /* Push action data. */;
+	struct gfs_action actions[ETH_GFS_NUM_OF_ACTIONS] /*  GFS actions */;
+};
+
+/*
+ * add GFS filter - filter is a packet header of the type of packet wished to pass a certain FW flow
+ */
+struct gfs_add_filter_ramrod_data {
+	struct gfs_add_filter_cmd_header filter_cmd_hdr /* Add GFS filter command header */;
+	struct gfs_filter_context_data context_data /* GFS filter context. */;
+	struct gfs_filter_ctx_build_data filter_data /* GFS filter data. */;
+};
+
+
+/*
+ * GFS Flow Counters Report Request Ramrod
+ */
+struct gfs_counters_report_ramrod_data {
+	u8 tx_table_valid /* 1: Valid Tx Reporting Table, 0: No Tx Reporting Required */;
+	u8 rx_table_valid /* 1: Valid Rx Reporting Table, 0: No Rx Reporting Required */;
+	u8 timestamp_log2_factor /* Log2 factor applied to timestamp base. */;
+	u8 reserved1[5];
+/* Valid Tx Counter Values reporting Table (type is eth_gfs_counters_report) */
+	struct regpair tx_counters_data_table_address;
+/* Valid Rx Counter Values reporting Table (type is eth_gfs_counters_report) */
+	struct regpair rx_counters_data_table_address;
+};
+
+
+/*
+ * GFS filter hash value.
+ */
+struct gfs_filter_hash_value {
+	__le32 hash[4] /* GFS hash. */;
+};
+
+/*
+ * del GFS filter - filter is a packet header of the type of packet wished to pass a certain FW flow
+ */
+struct gfs_del_filter_ramrod_data {
+	struct regpair pkt_hdr_addr /* Pointer to Packet Header That Defines GFS Filter */;
+	__le16 pkt_hdr_length /* Packet Header Length */;
+	u8 profile_id /* profile id. */;
+	u8 vport_id /* vport id. */;
+/* 0 - don't assert in case of filter configuration error, return an error code. 1 - assert in
+ * case of filter configuration error
+ */
+	u8 assert_on_error;
+	u8 use_hash_flg /* If set, hash value is used for filter delete instead of packet header. */;
+	__le16 reserved;
+	struct gfs_filter_hash_value hash /* GFS filter hash value. */;
+};
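A hedged sketch of the delete-by-hash variant, reusing the hash that FW wrote
to return_hash_addr when the filter was added:

	struct gfs_filter_hash_value saved_hash; /* captured at add time */
	struct gfs_del_filter_ramrod_data del = { 0 };

	del.profile_id = 0; /* illustrative, must match the add */
	del.vport_id = 0;
	del.use_hash_flg = 1;
	del.hash = saved_hash;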
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+enum gfs_module_type {
+	e_rgfs,
+	e_tgfs,
+	MAX_GFS_MODULE_TYPE
 };
 
 
 /*
- * eth vport RSS mode
+ * GFT filter update action type.
  */
-enum eth_vport_rss_mode {
-	ETH_VPORT_RSS_MODE_DISABLED /* RSS Disabled */,
-	ETH_VPORT_RSS_MODE_REGULAR /* Regular (ndis-like) RSS */,
-	MAX_ETH_VPORT_RSS_MODE
+enum gft_filter_update_action {
+	GFT_ADD_FILTER,
+	GFT_DELETE_FILTER,
+	MAX_GFT_FILTER_UPDATE_ACTION
 };
 
 
 /*
- * Command for setting classification flags for a vport $$KEEP_ENDIANNESS$$
+ * Entry (per PF) within the hairpin CIDs allocation table on X-storm RAM
  */
-struct eth_vport_rx_mode {
-	__le16 state;
-/* drop all unicast packets */
-#define ETH_VPORT_RX_MODE_UCAST_DROP_ALL_MASK          0x1
-#define ETH_VPORT_RX_MODE_UCAST_DROP_ALL_SHIFT         0
-/* accept all unicast packets (subject to vlan) */
-#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL_MASK        0x1
-#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL_SHIFT       1
-/* accept all unmatched unicast packets */
-#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED_MASK  0x1
-#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED_SHIFT 2
-/* drop all multicast packets */
-#define ETH_VPORT_RX_MODE_MCAST_DROP_ALL_MASK          0x1
-#define ETH_VPORT_RX_MODE_MCAST_DROP_ALL_SHIFT         3
-/* accept all multicast packets (subject to vlan) */
-#define ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL_MASK        0x1
-#define ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL_SHIFT       4
-/* accept all broadcast packets (subject to vlan) */
-#define ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_MASK        0x1
-#define ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_SHIFT       5
-/* accept any VNI in tunnel VNI classification. Used for default queue. */
-#define ETH_VPORT_RX_MODE_ACCEPT_ANY_VNI_MASK          0x1
-#define ETH_VPORT_RX_MODE_ACCEPT_ANY_VNI_SHIFT         6
-#define ETH_VPORT_RX_MODE_RESERVED1_MASK               0x1FF
-#define ETH_VPORT_RX_MODE_RESERVED1_SHIFT              7
+struct hairpin_per_pf_cid_allocation {
+/* Set only upon the 1st ever Tx/Rx Queue Start, i.e. only if the cell is 0. Verified by assert on
+ * each consecutive CID. HSI documentation defines that, per-PF, the CID of the 1st Hairpin (Tx/Rx)
+ * Queue Start is the Base CID.
+ */
+	__le32 base_hairpin_cid;
+/* The appropriate PF's numPfHairpinQueues is incremented on each Hairpin Tx/Rx Queue start. HSI
+ * documentation requires that, per-PF, the allocated Hairpin CIDs are consecutive.
+ */
+	u8 num_pf_hairpin_queues;
+/* Default Vport for redirection. vport_id Field in First Tx/Rx Queue Start Ramrod. HSI
+ * Documentation defines that per-PF, the 1st Hairpin (Tx/Rx) Queue Start shall define the default
+ * vport to redirect hairpin traffic to.
+ */
+	u8 vport;
+	u8 reserved[2];
 };
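
As a sketch (not part of this patch) of the invariants this table encodes: per
PF, the allocated hairpin CIDs form the consecutive range
[base_hairpin_cid .. base_hairpin_cid + num_pf_hairpin_queues - 1], and the
default redirection vport is taken from the 1st queue start. OSAL_LE32_TO_CPU
is assumed to be the base driver's endianness helper:

	static bool hairpin_cid_in_range(const struct hairpin_per_pf_cid_allocation *a,
					 u32 cid)
	{
		u32 base = OSAL_LE32_TO_CPU(a->base_hairpin_cid);

		return cid >= base && cid < base + a->num_pf_hairpin_queues;
	}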
 
 
 /*
- * Command for setting tpa parameters
- */
-struct eth_vport_tpa_param {
-	u8 tpa_ipv4_en_flg /* Enable TPA for IPv4 packets */;
-	u8 tpa_ipv6_en_flg /* Enable TPA for IPv6 packets */;
-	u8 tpa_ipv4_tunn_en_flg /* Enable TPA for IPv4 over tunnel */;
-	u8 tpa_ipv6_tunn_en_flg /* Enable TPA for IPv6 over tunnel */;
-/* If set, start each TPA segment on new BD (GRO mode). One BD per segment
- * allowed.
+ * Ramrod data for hairpin queue stop ramrod.
  */
-	u8 tpa_pkt_split_flg;
-/* If set, put header of first TPA segment on first BD and data on second BD. */
-	u8 tpa_hdr_data_split_flg;
-/* If set, GRO data consistent will checked for TPA continue */
-	u8 tpa_gro_consistent_flg;
-/* maximum number of opened aggregations per v-port  */
-	u8 tpa_max_aggs_num;
-	__le16 tpa_max_size /* maximal size for the aggregated TPA packets */;
-/* minimum TCP payload size for a packet to start aggregation */
-	__le16 tpa_min_size_to_start;
-/* minimum TCP payload size for a packet to continue aggregation */
-	__le16 tpa_min_size_to_cont;
-/* maximal number of buffers that can be used for one aggregation */
-	u8 max_buff_num;
-	u8 reserved;
+struct hairpin_queue_stop_ramrod_data {
+	__le16 reserved[4];
 };
 
 
 /*
- * Command for setting classification flags for a vport $$KEEP_ENDIANNESS$$
+ * RGFS Command List
  */
-struct eth_vport_tx_mode {
-	__le16 state;
-/* drop all unicast packets */
-#define ETH_VPORT_TX_MODE_UCAST_DROP_ALL_MASK    0x1
-#define ETH_VPORT_TX_MODE_UCAST_DROP_ALL_SHIFT   0
-/* accept all unicast packets (subject to vlan) */
-#define ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL_MASK  0x1
-#define ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL_SHIFT 1
-/* drop all multicast packets */
-#define ETH_VPORT_TX_MODE_MCAST_DROP_ALL_MASK    0x1
-#define ETH_VPORT_TX_MODE_MCAST_DROP_ALL_SHIFT   2
-/* accept all multicast packets (subject to vlan) */
-#define ETH_VPORT_TX_MODE_MCAST_ACCEPT_ALL_MASK  0x1
-#define ETH_VPORT_TX_MODE_MCAST_ACCEPT_ALL_SHIFT 3
-/* accept all broadcast packets (subject to vlan) */
-#define ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL_MASK  0x1
-#define ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL_SHIFT 4
-#define ETH_VPORT_TX_MODE_RESERVED1_MASK         0x7FF
-#define ETH_VPORT_TX_MODE_RESERVED1_SHIFT        5
+enum rgfs_commands_list {
+	RGFS_COMMAND_NOP = 0 /* Nop command. Used by default unmatch flow. */,
+	RGFS_COMMAND_REGULAR /* Regular */,
+	RGFS_COMMAND_SIMPLE /* RGFS simple */,
+	RGFS_COMMAND_4TUPLE /* RGFS 4-tuple modify */,
+	RGFS_COMMAND_ORIGINAL /* RGFS 4-tuple modify, original flag set. */,
+	RGFS_COMMAND_INNER_HDR /* RGFS inner header modify */,
+	RGFS_COMMAND_FULL /* RGFS full */,
+	RGFS_STEERING_REGULAR /* Regular */,
+	RGFS_STEERING_SIMPLE /* RGFS simple */,
+	RGFS_STEERING_4TUPLE /* RGFS 4-tuple modify */,
+	RGFS_STEERING_ORIGINAL /* RGFS 4-tuple modify, original flag set. */,
+	RGFS_STEERING_INNER_HDR /* RGFS inner header modify */,
+	RGFS_STEERING_FULL /* RGFS full */,
+	RGFS_HAIRPINNING_FULL /* RGFS send to TX queue (inner header and full) */,
+/* RGFS send to TX queue, optimized (regular, simple and 4-tuple modify) */
+	RGFS_HAIRPINNING_OPT,
+	RGFS_HAIRPINNING_ORIGINAL /* RGFS send to TX queue. Original flag set. */,
+	RGFS_DROP /* RGFS drop (all flows) */,
+	RGFS_COMMAND_INTEG_TEST /* Integration test flow */,
+	MAX_RGFS_COMMANDS_LIST
 };
 
 
 /*
- * GFT filter update action type.
+ * Ramrod data for rx create gft action
  */
-enum gft_filter_update_action {
-	GFT_ADD_FILTER,
-	GFT_DELETE_FILTER,
-	MAX_GFT_FILTER_UPDATE_ACTION
+struct rx_create_gft_action_ramrod_data {
+	u8 vport_id /* Vport Id of GFT Action */;
+	u8 reserved[7];
 };
 
 
+/*
+ * Ramrod data for rx create openflow action
+ */
+struct rx_create_openflow_action_ramrod_data {
+	u8 vport_id /* Vport ID */;
+	u8 reserved[7];
+};
 
 
 /*
  * Ramrod data for rx add openflow filter
  */
-struct rx_add_openflow_filter_data {
+struct rx_openflow_filter_ramrod_data {
 	__le16 action_icid /* CID of Action to run for this filter */;
 	u8 priority /* Searcher String - Packet priority */;
 	u8 reserved0;
 	__le32 tenant_id /* Searcher String - Tenant ID */;
-/* Searcher String - Destination Mac Bytes 0 to 1 */
-	__le16 dst_mac_hi;
-/* Searcher String - Destination Mac Bytes 2 to 3 */
-	__le16 dst_mac_mid;
-/* Searcher String - Destination Mac Bytes 4 to 5 */
-	__le16 dst_mac_lo;
+	__le16 dst_mac_hi /* Searcher String - Destination Mac Bytes 0 to 1 */;
+	__le16 dst_mac_mid /* Searcher String - Destination Mac Bytes 2 to 3 */;
+	__le16 dst_mac_lo /* Searcher String - Destination Mac Bytes 4 to 5 */;
 	__le16 src_mac_hi /* Searcher String - Source Mac 0 to 1 */;
 	__le16 src_mac_mid /* Searcher String - Source Mac 2 to 3 */;
 	__le16 src_mac_lo /* Searcher String - Source Mac 4 to 5 */;
 	__le16 vlan_id /* Searcher String - Vlan ID */;
 	__le16 l2_eth_type /* Searcher String - Last L2 Ethertype */;
 	u8 ipv4_dscp /* Searcher String - IPv4 6 MSBs of the TOS Field */;
-	u8 ipv4_frag_type /* Searcher String - IPv4 Fragmentation Type */;
+/* Searcher String - IPv4 Fragmentation Type (use enum eth_ipv4_frag_type) */
+	u8 ipv4_frag_type;
 	u8 ipv4_over_ip /* Searcher String - IPv4 Over IP Type */;
 	u8 tenant_id_exists /* Searcher String - Tenant ID Exists */;
 	__le32 ipv4_dst_addr /* Searcher String - IPv4 Destination Address */;
@@ -1052,60 +2483,48 @@ struct rx_add_openflow_filter_data {
 };
 
 
-/*
- * Ramrod data for rx create gft action
- */
-struct rx_create_gft_action_data {
-	u8 vport_id /* Vport Id of GFT Action  */;
-	u8 reserved[7];
-};
-
-
-/*
- * Ramrod data for rx create openflow action
- */
-struct rx_create_openflow_action_data {
-	u8 vport_id /* ID of RX queue */;
-	u8 reserved[7];
-};
-
-
 /*
  * Ramrod data for rx queue start ramrod
  */
 struct rx_queue_start_ramrod_data {
-	__le16 rx_queue_id /* ID of RX queue */;
+/* RX queue ID or [E5-only] Hairpin Queue ID when is_hairpin is set. For Hairpin queues, range is
+ * [0..ETH_MAX_NUM_L2_HAIRPIN_QUEUES-1].
+ */
+	__le16 rx_queue_id;
 	__le16 num_of_pbl_pages /* Number of pages in CQE PBL */;
-	__le16 bd_max_bytes /* maximal bytes that can be places on the bd */;
+	__le16 bd_max_bytes /* maximal number of bytes that can be placed on the bd */;
 	__le16 sb_id /* Status block ID */;
-	u8 sb_index /* index of the protocol index */;
-	u8 vport_id /* ID of virtual port */;
-	u8 default_rss_queue_flg /* set queue as default rss queue if set */;
-	u8 complete_cqe_flg /* post completion to the CQE ring if set */;
-	u8 complete_event_flg /* post completion to the event ring if set */;
+	u8 sb_index /* Status block index */;
+	u8 vport_id /* Vport ID */;
+	u8 default_rss_queue_flg /* if set - this queue will be the default RSS queue */;
+/* if set - post completion to the CQE ring. For Hairpinning - must be cleared. */
+	u8 complete_cqe_flg;
+/* if set - post completion to the event ring. For Hairpinning - must be set. */
+	u8 complete_event_flg;
 	u8 stats_counter_id /* Statistics counter ID */;
-	u8 pin_context /* Pin context in CCFC to improve performance */;
+/* if set - Pin CID context. Total number of pinned connections cannot exceed
+ * ETH_PINNED_CONN_MAX_NUM
+ */
+	u8 pin_context;
 	u8 pxp_tph_valid_bd /* PXP command TPH Valid - for BD/SGE fetch */;
-/* PXP command TPH Valid - for packet placement */
-	u8 pxp_tph_valid_pkt;
-/* PXP command Steering tag hint. Use enum pxp_tph_st_hint */
-	u8 pxp_st_hint;
+	u8 pxp_tph_valid_pkt /* PXP command TPH Valid - for packet placement */;
+	u8 pxp_st_hint /* PXP command Steering tag hint. Use enum pxp_tph_st_hint */;
 	__le16 pxp_st_index /* PXP command Steering tag index */;
-/* Indicates that current queue belongs to poll-mode driver */
-	u8 pmd_mode;
-/* Indicates that the current queue is using the TX notification queue
- * mechanism - should be set only for PMD queue
+	u8 pmd_mode /* Indicates that current queue belongs to poll-mode driver */;
+/* Indicates that the current queue is using the TX notification queue mechanism - should be set
+ * only for PMD queue
  */
 	u8 notify_en;
-/* Initial value for the toggle valid bit - used in PMD mode */
-	u8 toggle_val;
-/* Index of RX producers in VF zone. Used for VF only. */
-	u8 vf_rx_prod_index;
-/* Backward compatibility mode. If set, unprotected mStorm queue zone will used
- * for VF RX producers instead of VF zone.
+	u8 toggle_val /* Initial value for the toggle valid bit - used in PMD mode */;
+	u8 vf_rx_prod_index /* Index of RX producers in VF zone. Used for VF only. */;
+/* Backward compatibility mode. If set, unprotected mStorm queue zone will be used for VF RX
+ * producers instead of VF zone.
  */
 	u8 vf_rx_prod_use_zone_a;
-	u8 reserved[5];
+	u8 is_hairpin /* [E5-only] Hairpin Queue indication */;
+/* [E5-only] Number of full hairpin Rx BD pages. Valid when is_hairpin is set */
+	u8 num_hairpin_rx_bd_pages;
+	u8 reserved[3];
 	__le16 reserved1 /* FW reserved. */;
 	struct regpair cqe_pbl_addr /* Base address on host of CQE PBL */;
 	struct regpair bd_base /* bd address of the first bd page */;
@@ -1117,9 +2536,9 @@ struct rx_queue_start_ramrod_data {
  * Ramrod data for rx queue stop ramrod
  */
 struct rx_queue_stop_ramrod_data {
-	__le16 rx_queue_id /* ID of RX queue */;
-	u8 complete_cqe_flg /* post completion to the CQE ring if set */;
-	u8 complete_event_flg /* post completion to the event ring if set */;
+	__le16 rx_queue_id /* RX queue ID */;
+	u8 complete_cqe_flg /* if set - post completion to the CQE ring */;
+	u8 complete_event_flg /* if set - post completion to the event ring */;
 	u8 vport_id /* ID of virtual port */;
 	u8 reserved[3];
 };
@@ -1129,12 +2548,11 @@ struct rx_queue_stop_ramrod_data {
  * Ramrod data for rx queue update ramrod
  */
 struct rx_queue_update_ramrod_data {
-	__le16 rx_queue_id /* ID of RX queue */;
-	u8 complete_cqe_flg /* post completion to the CQE ring if set */;
-	u8 complete_event_flg /* post completion to the event ring if set */;
-	u8 vport_id /* ID of virtual port */;
-/* If set, update default rss queue to this RX queue. */
-	u8 set_default_rss_queue;
+	__le16 rx_queue_id /* RX queue ID */;
+	u8 complete_cqe_flg /* if set - post completion to the CQE ring */;
+	u8 complete_event_flg /* if set - post completion to the event ring */;
+	u8 vport_id /* Vport ID */;
+	u8 set_default_rss_queue /* If set, update default RSS queue to this queue. */;
 	u8 reserved[3];
 	u8 reserved1 /* FW reserved. */;
 	u8 reserved2 /* FW reserved. */;
@@ -1148,10 +2566,10 @@ struct rx_queue_update_ramrod_data {
 /*
  * Ramrod data for rx Add UDP Filter
  */
-struct rx_udp_filter_data {
+struct rx_udp_filter_ramrod_data {
 	__le16 action_icid /* CID of Action to run for this filter */;
 	__le16 vlan_id /* Searcher String - Vlan ID */;
-	u8 ip_type /* Searcher String - IP Type */;
+	u8 ip_type /* Searcher String - IP Type (use enum eth_ip_type) */;
 	u8 tenant_id_exists /* Searcher String - Tenant ID Exists */;
 	__le16 reserved1;
 /* Searcher String - IP Destination Address, for IPv4 use ip_dst_addr[0] only */
@@ -1164,50 +2582,69 @@ struct rx_udp_filter_data {
 };
 
 
-/*
- * add or delete GFT filter - filter is packet header of type of packet wished
- * to pass certain FW flow
+/* add or delete GFT filter - the filter is a packet header of the type meant to take a certain FW
+ * flow
  */
-struct rx_update_gft_filter_data {
-/* Pointer to Packet Header That Defines GFT Filter */
-	struct regpair pkt_hdr_addr;
+struct rx_update_gft_filter_ramrod_data {
+	struct regpair pkt_hdr_addr /* Pointer to Packet Header That Defines GFT Filter */;
 	__le16 pkt_hdr_length /* Packet Header Length */;
-/* Action icid. Valid if action_icid_valid flag set. */
-	__le16 action_icid;
+	__le16 action_icid /* Action icid. Valid if action_icid_valid flag set. */;
 	__le16 rx_qid /* RX queue ID. Valid if rx_qid_valid set. */;
 	__le16 flow_id /* RX flow ID. Valid if flow_id_valid set. */;
-/* RX vport Id. For drop flow, set to ETH_GFT_TRASHCAN_VPORT. */
-	__le16 vport_id;
-/* If set, action_icid will used for GFT filter update. */
-	u8 action_icid_valid;
-/* If set, rx_qid will used for traffic steering, in additional to vport_id.
- * flow_id_valid must be cleared. If cleared, queue ID will selected by RSS.
+	__le16 vport_id /* RX vport Id. For drop flow, set to ETH_GFT_TRASHCAN_VPORT. */;
+	u8 action_icid_valid /* If set, action_icid is used for GFT filter update. */;
+/* If set, rx_qid is used for traffic steering, in addition to vport_id. flow_id_valid must be
+ * cleared. If cleared, the queue ID is selected by RSS.
+ */
 	u8 rx_qid_valid;
-/* If set, flow_id will reported by CQE, rx_qid_valid must be cleared. If
- * cleared, flow_id 0 will reported by CQE.
+/* If set, flow_id is reported by the CQE; rx_qid_valid must be cleared. If cleared, flow_id 0 is
+ * reported by the CQE.
+ */
 	u8 flow_id_valid;
-	u8 filter_action /* Use to set type of action on filter */;
-/* 0 - dont assert in case of error. Just return an error code. 1 - assert in
- * case of error.
- */
+/* Use to set type of action on filter (use enum gft_filter_update_action) */
+	u8 filter_action;
+/* 0 - don't assert in case of error. Just return an error code. 1 - assert in case of error. */
 	u8 assert_on_error;
-/* If set, inner VLAN will be removed regardless to VPORT configuration.
- * Supported by E4 only.
- */
+/* If set, inner VLAN will be removed regardless of VPORT configuration. Supported by E4 only. */
 	u8 inner_vlan_removal_en;
 };
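
A sketch (not part of this patch) of adding a queue-steered GFT filter;
rx_qid_valid and flow_id_valid are mutually exclusive, and RSS selects the
queue when both are clear. OSAL_CPU_TO_LE16 and DMA_REGPAIR_LE are assumed to
be the base driver's endianness and regpair helpers, and 'hdr_phys' is a
hypothetical DMA address of the defining packet header:

	static void gft_fill_add(struct rx_update_gft_filter_ramrod_data *f,
				 dma_addr_t hdr_phys, u16 hdr_len,
				 u16 vport_id, u16 rx_qid)
	{
		DMA_REGPAIR_LE(f->pkt_hdr_addr, hdr_phys);
		f->pkt_hdr_length = OSAL_CPU_TO_LE16(hdr_len);
		f->vport_id = OSAL_CPU_TO_LE16(vport_id);
		f->rx_qid = OSAL_CPU_TO_LE16(rx_qid);
		f->rx_qid_valid = 1;    /* steer by queue, so... */
		f->flow_id_valid = 0;   /* ...flow_id must stay invalid */
		f->filter_action = GFT_ADD_FILTER;
		f->assert_on_error = 0; /* return an error code instead of asserting */
	}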
 
 
+/*
+ * TGFS Command List
+ */
+enum tgfs_commands_list {
+	TGFS_COMMAND_NO_OPERATION /* TGFS no operation */,
+	TGFS_COMMAND_REGULAR /* Regular */,
+	TGFS_COMMAND_SIMPLE_0 /* TGFS simple */,
+	TGFS_COMMAND_SIMPLE_1 /* TGFS simple */,
+	TGFS_COMMAND_SIMPLE_2 /* TGFS simple */,
+	TGFS_COMMAND_INNER_HDR_0 /* TGFS inner header modify */,
+	TGFS_COMMAND_INNER_HDR_1 /* TGFS inner header modify */,
+	TGFS_COMMAND_INNER_HDR_2 /* TGFS inner header modify */,
+	TGFS_COMMAND_FULL_0 /* TGFS full */,
+	TGFS_COMMAND_FULL_1 /* TGFS full */,
+	TGFS_COMMAND_FULL_2 /* TGFS full */,
+	TGFS_COMMAND_FULL_ORIGINAL_1 /* TGFS full for original copy */,
+	TGFS_COMMAND_FULL_ORIGINAL_2 /* TGFS full for original copy */,
+	TGFS_COMMAND_VLAN_INSERTION /* TGFS vlan insertion      - INTEGRATION command */,
+	TGFS_COMMAND_APPEND_CONTEXT /* TGFS append context test - INTEGRATION command */,
+	TGFS_COMMAND_EDPM_VLAN_INSERTION /* TGFS EDPM vlan insertion - INTEGRATION command */,
+	TGFS_COMMAND_DROP /* TGFS drop flow. */,
+	MAX_TGFS_COMMANDS_LIST
+};
+
 
 /*
  * Ramrod data for tx queue start ramrod
  */
 struct tx_queue_start_ramrod_data {
 	__le16 sb_id /* Status block ID */;
-	u8 sb_index /* Status block protocol index */;
-	u8 vport_id /* VPort ID */;
+	u8 sb_index /* Status block index */;
+/* VPort ID. For Hairpin Queues this is the QM Vport ID, which is used for Rate Limiter
+ * functionality.
+ */
+	u8 vport_id;
 	u8 reserved0 /* FW reserved. (qcn_rl_en) */;
 	u8 stats_counter_id /* Statistics counter ID to use */;
 	__le16 qm_pq_id /* QM PQ ID */;
@@ -1218,44 +2655,47 @@ struct tx_queue_start_ramrod_data {
 /* If set, Test Mode - packets will be duplicated by Xstorm handler */
 #define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_PKT_DUP_MASK      0x1
 #define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_PKT_DUP_SHIFT     1
-/* If set, Test Mode - packets destination will be determined by dest_port_mode
- * field from Tx BD
- */
-#define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_TX_DEST_MASK      0x1
-#define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_TX_DEST_SHIFT     2
 /* Indicates that current queue belongs to poll-mode driver */
 #define TX_QUEUE_START_RAMROD_DATA_PMD_MODE_MASK               0x1
-#define TX_QUEUE_START_RAMROD_DATA_PMD_MODE_SHIFT              3
-/* Indicates that the current queue is using the TX notification queue
- * mechanism - should be set only for PMD queue
+#define TX_QUEUE_START_RAMROD_DATA_PMD_MODE_SHIFT              2
+/* Indicates that the current queue is using the TX notification queue mechanism - should be set
+ * only for PMD queue
  */
 #define TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_MASK              0x1
-#define TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_SHIFT             4
-/* Pin context in CCFC to improve performance */
+#define TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_SHIFT             3
+/* If set - Pin CID context. Total number of pinned connections cannot exceed
+ * ETH_PINNED_CONN_MAX_NUM
+ */
 #define TX_QUEUE_START_RAMROD_DATA_PIN_CONTEXT_MASK            0x1
-#define TX_QUEUE_START_RAMROD_DATA_PIN_CONTEXT_SHIFT           5
+#define TX_QUEUE_START_RAMROD_DATA_PIN_CONTEXT_SHIFT           4
+/* [E5-only] Hairpin Queue indication */
+#define TX_QUEUE_START_RAMROD_DATA_IS_HAIRPIN_MASK             0x1
+#define TX_QUEUE_START_RAMROD_DATA_IS_HAIRPIN_SHIFT            5
 #define TX_QUEUE_START_RAMROD_DATA_RESERVED1_MASK              0x3
 #define TX_QUEUE_START_RAMROD_DATA_RESERVED1_SHIFT             6
-	u8 pxp_st_hint /* PXP command Steering tag hint */;
+	u8 pxp_st_hint /* PXP command Steering tag hint (use enum pxp_tph_st_hint) */;
 	u8 pxp_tph_valid_bd /* PXP command TPH Valid - for BD fetch */;
 	u8 pxp_tph_valid_pkt /* PXP command TPH Valid - for packet fetch */;
 	__le16 pxp_st_index /* PXP command Steering tag index */;
-/* TX completion min agg size - for PMD queues */
-	__le16 comp_agg_size;
+	u8 comp_agg_size /* TX completion min agg size - for PMD queues */;
+	u8 reserved3;
 	__le16 queue_zone_id /* queue zone ID to use */;
 	__le16 reserved2 /* FW reserved. (test_dup_count) */;
 	__le16 pbl_size /* Number of BD pages pointed by PBL */;
-/* unique Queue ID - currently used only by PMD flow */
+/* Unique Queue ID - used only by PMD flow. When the [E5-only] is_hairpin flag is set: hairpin
+ * queue ID - must be equal to the Hairpin Queue ID in the Start Rx Queue ramrod for Hairpin Queues
+ */
 	__le16 tx_queue_id;
-/* Unique Same-As-Last Resource ID - improves performance for same-as-last
- * packets per connection (range 0..ETH_TX_NUM_SAME_AS_LAST_ENTRIES-1 IDs
- * available)
+/* For E4: Unique Same-As-Last Resource ID - improves performance for same-as-last packets per
+ * connection (range 0..ETH_TX_NUM_SAME_AS_LAST_ENTRIES_E4-1 IDs available). Switch off SAL for this
+ * tx queue by setting value to ETH_TX_INACTIVE_SAME_AS_LAST (HSI constant). For E5: Switch off SAL
+ * for this tx queue by setting value to ETH_TX_INACTIVE_SAME_AS_LAST (HSI constant), otherwise
+ * active
  */
 	__le16 same_as_last_id;
 	__le16 reserved[3];
 	struct regpair pbl_base_addr /* address of the pbl page */;
-/* BD consumer address in host - for PMD queues */
-	struct regpair bd_cons_address;
+	struct regpair bd_cons_address /* BD consumer address in host - for PMD queues */;
 };
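
The MASK/SHIFT pairs above pack several single-bit flags into one byte; as a
sketch (not part of this patch), assuming the base driver's SET_FIELD helper
(clear the field, then OR in (flag & MASK) << SHIFT):

	static u8 txq_start_flags(void)
	{
		u8 flags = 0;

		SET_FIELD(flags, TX_QUEUE_START_RAMROD_DATA_PMD_MODE, 1);   /* PMD queue */
		SET_FIELD(flags, TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN, 1);  /* notification queue */
		SET_FIELD(flags, TX_QUEUE_START_RAMROD_DATA_IS_HAIRPIN, 0); /* not E5 hairpin */

		return flags;
	}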
 
 
@@ -1282,8 +2722,7 @@ struct tx_queue_update_ramrod_data {
  * Inner to Inner VLAN priority map update mode
  */
 enum update_in_to_in_pri_map_mode_enum {
-/* Inner to Inner VLAN priority map update Disabled */
-	ETH_IN_TO_IN_PRI_MAP_UPDATE_DISABLED,
+	ETH_IN_TO_IN_PRI_MAP_UPDATE_DISABLED /* Inner to Inner VLAN priority map update Disabled */,
 /* Update Inner to Inner VLAN priority map for non RDMA protocols */
 	ETH_IN_TO_IN_PRI_MAP_UPDATE_NON_RDMA_TBL,
 /* Update Inner to Inner VLAN priority map for RDMA protocols */
@@ -1292,15 +2731,13 @@ enum update_in_to_in_pri_map_mode_enum {
 };
 
 
-
 /*
  * Ramrod data for vport update ramrod
  */
 struct vport_filter_update_ramrod_data {
 /* Header for Filter Commands (RX/TX, Add/Remove/Replace, etc) */
 	struct eth_filter_cmd_header filter_cmd_hdr;
-/* Filter Commands */
-	struct eth_filter_cmd filter_cmds[ETH_FILTER_RULES_COUNT];
+	struct eth_filter_cmd filter_cmds[ETH_FILTER_RULES_COUNT] /* Filter Commands */;
 };
 
 
@@ -1315,35 +2752,44 @@ struct vport_start_ramrod_data {
 	u8 inner_vlan_removal_en;
 	struct eth_vport_rx_mode rx_mode /* Rx filter data */;
 	struct eth_vport_tx_mode tx_mode /* Tx filter data */;
-/* TPA configuration parameters */
-	struct eth_vport_tpa_param tpa_param;
+	struct eth_vport_tpa_param tpa_param /* TPA configuration parameters */;
 	__le16 default_vlan /* Default Vlan value to be forced by FW */;
 	u8 tx_switching_en /* Tx switching is enabled for current Vport */;
-/* Anti-spoofing verification is set for current Vport */
-	u8 anti_spoofing_en;
-/* If set, the default Vlan value is forced by the FW */
-	u8 default_vlan_en;
-/* If set, the vport handles PTP Timesync Packets */
-	u8 handle_ptp_pkts;
+	u8 anti_spoofing_en /* Anti-spoofing verification is set for current Vport */;
+	u8 default_vlan_en /* If set, the default Vlan value is forced by the FW */;
+	u8 handle_ptp_pkts /* If set, the vport handles PTP Timesync Packets */;
 /* If enabled, innerVlan will be stripped and not written to cqe */
 	u8 silent_vlan_removal_en;
-/* If set untagged filter (vlan0) is added to current Vport, otherwise port is
- * marked as any-vlan
- */
+/* If set, untagged filter (vlan0) is added to current Vport, else port is marked as any-vlan */
 	u8 untagged;
-/* Desired behavior per TX error type */
-	struct eth_tx_err_vals tx_err_behav;
-/* If set, ETH header padding will not inserted. placement_offset will be zero.
- */
+	struct eth_tx_err_vals tx_err_behav /* Desired behavior per TX error type */;
+/* If set, ETH header padding will not be inserted. placement_offset will be zero. */
 	u8 zero_placement_offset;
 /* If set, control frames will be filtered according to MAC check. */
 	u8 ctl_frame_mac_check_en;
 /* If set, control frames will be filtered according to ethtype check. */
 	u8 ctl_frame_ethtype_check_en;
-/* If set, the inner vlan (802.1q tag) priority that is written to cqe will be
- * zero out, used for TenantDcb
+	u8 rgfs_search_en_e5 /* Instructs to send an RGFS Search command; valid only in E5 */;
+	u8 tgfs_search_en_e5 /* Instructs to send a TGFS Search command; valid only in E5 */;
+/* Configurations for tx forwarding, used in VF Representor mode (use enum
+ * eth_tx_dst_mode_config_enum)
+ */
+	u8 tx_dst_port_mode_config;
+/* destination Vport ID to forward the packet, applicable only if dst_vport_id_valid is set and when
+ * tx_dst_port_mode_config == ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_VPORT and (tx_dst_port_mode ==
+ * DST_PORT_LOOPBACK or tx_dst_port_mode == DST_PORT_PHY_LOOPBACK)
+ */
+	u8 dst_vport_id;
+/* destination tx to forward the packet, applicable only when tx_dst_port_mode_config ==
+ * ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_VPORT (use enum dst_port_mode)
+ */
+	u8 tx_dst_port_mode;
+	u8 dst_vport_id_valid /* if set, dst_vport_id has valid value */;
+/* If set, the inner vlan (802.1q tag) priority that is written to cqe will be zeroed out, used
+ * for TenantDcb
  */
 	u8 wipe_inner_vlan_pri_en;
+	u8 reserved2[2];
 /* inner to inner vlan priority translation configurations */
 	struct eth_in_to_in_pri_map_cfg in_to_in_vlan_pri_map_cfg;
 };
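
A sketch (not part of this patch) of the new VF-representor style forwarding
knobs: redirect all TX of a vport into another vport via device loopback. The
enum values are taken from the field comments above:

	static void vport_cfg_fwd_in_vport(struct vport_start_ramrod_data *p,
					   u8 dst_vport)
	{
		p->tx_dst_port_mode_config = ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_VPORT;
		p->tx_dst_port_mode = DST_PORT_LOOPBACK; /* stay inside the device */
		p->dst_vport_id = dst_vport;
		p->dst_vport_id_valid = 1; /* otherwise dst_vport_id is ignored */
	}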
@@ -1369,58 +2815,62 @@ struct vport_update_ramrod_data_cmn {
 	u8 tx_active_flg /* tx active flag value */;
 	u8 update_rx_mode_flg /* set if rx state data should be handled */;
 	u8 update_tx_mode_flg /* set if tx state data should be handled */;
-/* set if approx. mcast data should be handled */
-	u8 update_approx_mcast_flg;
+	u8 update_approx_mcast_flg /* set if approx. mcast data should be handled */;
 	u8 update_rss_flg /* set if rss data should be handled  */;
-/* set if inner_vlan_removal_en should be handled */
-	u8 update_inner_vlan_removal_en_flg;
+	u8 update_inner_vlan_removal_en_flg /* set if inner_vlan_removal_en should be handled */;
 	u8 inner_vlan_removal_en;
 /* set if tpa parameters should be handled, TPA must be disabled before */
 	u8 update_tpa_param_flg;
 	u8 update_tpa_en_flg /* set if tpa enable changes */;
-/* set if tx switching en flag should be handled */
-	u8 update_tx_switching_en_flg;
+	u8 update_tx_switching_en_flg /* set if tx switching en flag should be handled */;
 	u8 tx_switching_en /* tx switching en value */;
-/* set if anti spoofing flag should be handled */
-	u8 update_anti_spoofing_en_flg;
+	u8 update_anti_spoofing_en_flg /* set if anti spoofing flag should be handled */;
 	u8 anti_spoofing_en /* Anti-spoofing verification en value */;
-/* set if handle_ptp_pkts should be handled. */
-	u8 update_handle_ptp_pkts;
-/* If set, the vport handles PTP Timesync Packets */
-	u8 handle_ptp_pkts;
-/* If set, the default Vlan enable flag is updated */
-	u8 update_default_vlan_en_flg;
-/* If set, the default Vlan value is forced by the FW */
-	u8 default_vlan_en;
-/* If set, the default Vlan value is updated */
-	u8 update_default_vlan_flg;
+	u8 update_handle_ptp_pkts /* set if handle_ptp_pkts should be handled. */;
+	u8 handle_ptp_pkts /* If set, the vport handles PTP Timesync Packets */;
+	u8 update_default_vlan_en_flg /* If set, the default Vlan enable flag is updated */;
+	u8 default_vlan_en /* If set, the default Vlan value is forced by the FW */;
+	u8 update_default_vlan_flg /* If set, the default Vlan value is updated */;
 	__le16 default_vlan /* Default Vlan value to be forced by FW */;
-/* set if accept_any_vlan should be handled */
-	u8 update_accept_any_vlan_flg;
+	u8 update_accept_any_vlan_flg /* set if accept_any_vlan should be handled */;
 	u8 accept_any_vlan /* accept_any_vlan updated value */;
-/* Set to remove vlan silently, update_inner_vlan_removal_en_flg must be enabled
- * as well. If Rx is in noSgl mode send rx_queue_update_ramrod_data
+/* Set to remove vlan silently, update_inner_vlan_removal_en_flg must be enabled as well. If Rx is
+ * in noSgl mode, send rx_queue_update_ramrod_data
  */
 	u8 silent_vlan_removal_en;
-/* If set, MTU will be updated. Vport must be not active. */
-	u8 update_mtu_flg;
+	u8 update_mtu_flg /* If set, MTU will be updated. Vport must not be active. */;
 	__le16 mtu /* New MTU value. Used if update_mtu_flg is set */;
-/* If set, ctl_frame_mac_check_en and ctl_frame_ethtype_check_en will be
- * updated
- */
+/* If set, ctl_frame_mac_check_en and ctl_frame_ethtype_check_en will be updated */
 	u8 update_ctl_frame_checks_en_flg;
 /* If set, control frames will be filtered according to MAC check. */
 	u8 ctl_frame_mac_check_en;
 /* If set, control frames will be filtered according to ethtype check. */
 	u8 ctl_frame_ethtype_check_en;
-/* Indicates to update RDMA or NON-RDMA vlan remapping priority table according
- * to update_in_to_in_pri_map_mode_enum, used for TenantDcb (use enum
+/* Indicates to update RDMA or NON-RDMA vlan remapping priority table according to
+ * update_in_to_in_pri_map_mode_enum, used for TenantDcb (use enum
  * update_in_to_in_pri_map_mode_enum)
  */
 	u8 update_in_to_in_pri_map_mode;
 /* Map for inner to inner vlan priority translation, used for TenantDcb. */
 	u8 in_to_in_pri_map[8];
-	u8 reserved[6];
+/* If set, tx_dst_port_mode_config, tx_dst_port_mode and dst_vport_id will be updated */
+	u8 update_tx_dst_port_mode_flg;
+/* Configurations for tx forwarding, used in VF Representor mode (use enum
+ * eth_tx_dst_mode_config_enum)
+ */
+	u8 tx_dst_port_mode_config;
+/* destination Vport ID to forward the packet, applicable only if dst_vport_id_valid is set and when
+ * tx_dst_port_mode_config == ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_VPORT and (tx_dst_port_mode ==
+ * DST_PORT_LOOPBACK or tx_dst_port_mode == DST_PORT_PHY_LOOPBACK)
+ */
+	u8 dst_vport_id;
+/* destination tx to forward the packet, applicable only when tx_dst_port_mode_config ==
+ * ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_VPORT (use enum dst_port_mode)
+ */
+	u8 tx_dst_port_mode;
+/* if set, dst_vport_id has valid value. If clear, RX classification will be performed. */
+	u8 dst_vport_id_valid;
+	u8 reserved;
 };
 
 struct vport_update_ramrod_mcast {
@@ -1431,13 +2881,11 @@ struct vport_update_ramrod_mcast {
  * Ramrod data for vport update ramrod
  */
 struct vport_update_ramrod_data {
-/* Common data for all vport update ramrods */
-	struct vport_update_ramrod_data_cmn common;
+	struct vport_update_ramrod_data_cmn common /* Common data for all vport update ramrods */;
 	struct eth_vport_rx_mode rx_mode /* vport rx mode bitmap */;
 	struct eth_vport_tx_mode tx_mode /* vport tx mode bitmap */;
 	__le32 reserved[3];
-/* TPA configuration parameters */
-	struct eth_vport_tpa_param tpa_param;
+	struct eth_vport_tpa_param tpa_param /* TPA configuration parameters */;
 	struct vport_update_ramrod_mcast approx_mcast;
 	struct eth_vport_rss_config rss_config /* rss config data */;
 };
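
The update ramrod follows a flag/value pair convention throughout: a settable
field only takes effect when its update_*_flg companion is set. A sketch (not
part of this patch) of an MTU-only update; OSAL_CPU_TO_LE16 is assumed to be
the base driver's endianness helper:

	static void vport_update_set_mtu(struct vport_update_ramrod_data *p, u16 mtu)
	{
		p->common.update_mtu_flg = 1;          /* only MTU is touched */
		p->common.mtu = OSAL_CPU_TO_LE16(mtu); /* vport must not be active */
	}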
@@ -1445,313 +2893,215 @@ struct vport_update_ramrod_data {
 
 
 
-
-
 struct E4XstormEthConnAgCtxDqExtLdPart {
 	u8 reserved0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
-/* exist_in_qm0 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1 /* exist_in_qm0 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK               0x1 /* exist_in_qm1 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK               0x1 /* exist_in_qm2 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK            0x1 /* exist_in_qm3 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK               0x1 /* bit4 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK               0x1 /* cf_array_active */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT              5
-/* bit6 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK               0x1 /* bit6 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT              6
-/* bit7 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK               0x1 /* bit7 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT              7
 	u8 flags1;
-/* bit8 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK               0x1 /* bit8 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT              0
-/* bit9 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK               0x1 /* bit9 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT              1
-/* bit10 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK               0x1 /* bit10 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT              2
-/* bit11 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                   0x1 /* bit11 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                  3
-/* bit12 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK                   0x1
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT                  4
-/* bit13 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK                   0x1
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT                  5
-/* bit14 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_E5_RESERVED2_MASK            0x1 /* bit12 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_E5_RESERVED2_SHIFT           4
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_E5_RESERVED3_MASK            0x1 /* bit13 */
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_E5_RESERVED3_SHIFT           5
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
-/* timer0cf */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                     0x3 /* timer0cf */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                    0
-/* timer1cf */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                     0x3 /* timer1cf */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                    2
-/* timer2cf */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                     0x3 /* timer2cf */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                    4
-/* timer_stop_all */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                     0x3 /* timer_stop_all */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                    6
 	u8 flags3;
-/* cf4 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                     0x3 /* cf4 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                    0
-/* cf5 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                     0x3 /* cf5 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                    2
-/* cf6 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                     0x3 /* cf6 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                    4
-/* cf7 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                     0x3 /* cf7 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                    6
 	u8 flags4;
-/* cf8 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                     0x3 /* cf8 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                    0
-/* cf9 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                     0x3 /* cf9 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                    2
-/* cf10 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                    0x3 /* cf10 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                   4
-/* cf11 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                    0x3 /* cf11 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                   6
 	u8 flags5;
-/* cf12 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK                    0x3 /* cf12 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT                   0
-/* cf13 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                    0x3 /* cf13 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                   2
-/* cf14 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                    0x3 /* cf14 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                   4
-/* cf15 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                    0x3 /* cf15 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                   6
 	u8 flags6;
-/* cf16 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK        0x3 /* cf_array_cf */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                   0x3 /* cf18 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                  4
-/* cf19 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK            0x3 /* cf19 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT           6
 	u8 flags7;
-/* cf20 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                0x3 /* cf20 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK              0x3 /* cf21 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT             2
-/* cf22 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK               0x3 /* cf22 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                   0x1 /* cf0en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                  6
-/* cf1en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                   0x1 /* cf1en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                  7
 	u8 flags8;
-/* cf2en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                   0x1 /* cf2en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                  0
-/* cf3en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                   0x1 /* cf3en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                  1
-/* cf4en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                   0x1 /* cf4en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                  2
-/* cf5en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                   0x1 /* cf5en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                  3
-/* cf6en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                   0x1 /* cf6en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                  4
-/* cf7en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                   0x1 /* cf7en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                  5
-/* cf8en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                   0x1 /* cf8en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                  6
-/* cf9en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                   0x1 /* cf9en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                  7
 	u8 flags9;
-/* cf10en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                  0x1 /* cf10en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                 0
-/* cf11en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                  0x1 /* cf11en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                 1
-/* cf12en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK                  0x1 /* cf12en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT                 2
-/* cf13en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                  0x1 /* cf13en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                 3
-/* cf14en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                  0x1 /* cf14en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                 4
-/* cf15en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                  0x1 /* cf15en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                 5
-/* cf16en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK     0x1 /* cf_array_cf_en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
-/* cf18en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                0x1 /* cf18en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK              0x1 /* cf21en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT             3
-/* cf22en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK            0x1 /* cf22en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK              0x1 /* rule0en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT             6
-/* rule1en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK              0x1 /* rule1en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT             7
 	u8 flags11;
-/* rule2en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK              0x1 /* rule2en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT             0
-/* rule3en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK              0x1 /* rule3en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT             1
-/* rule4en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                 0x1 /* rule5en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                3
-/* rule6en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                 0x1 /* rule6en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                4
-/* rule7en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                 0x1 /* rule7en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                5
-/* rule8en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK            0x1 /* rule8en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                 0x1 /* rule9en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                7
 	u8 flags12;
-/* rule10en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                0x1 /* rule10en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT               0
-/* rule11en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                0x1 /* rule11en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT               1
-/* rule12en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK            0x1 /* rule12en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK            0x1 /* rule13en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                0x1 /* rule14en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT               4
-/* rule15en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                0x1 /* rule15en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT               5
-/* rule16en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                0x1 /* rule16en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT               6
-/* rule17en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                0x1 /* rule17en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT               7
 	u8 flags13;
-/* rule18en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                0x1 /* rule18en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT               0
-/* rule19en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                0x1 /* rule19en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT               1
-/* rule20en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK            0x1 /* rule20en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK            0x1 /* rule21en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK            0x1 /* rule22en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK            0x1 /* rule23en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK            0x1 /* rule24en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK            0x1 /* rule25en */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT           7
 	u8 flags14;
-/* bit16 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3
+#define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK              0x3 /* cf23 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
@@ -1773,37 +3123,37 @@ struct E4XstormEthConnAgCtxDqExtLdPart {
 };
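
Reading a field back out of the aggregated context uses the same MASK/SHIFT
pairs in reverse; a sketch (not part of this patch), assuming the base
driver's GET_FIELD helper ((value >> SHIFT) & MASK):

	static u8 ag_ctx_dq_cf(const struct E4XstormEthConnAgCtxDqExtLdPart *ag_ctx)
	{
		/* extract the 2-bit DQ_CF field (cf18) from flags6 */
		return GET_FIELD(ag_ctx->flags6, E4XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF);
	}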
 
 
-struct mstorm_eth_conn_ag_ctx {
+struct e4_mstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
-#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
-#define MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
-#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
-#define MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
-#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
-#define MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
-#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
-#define MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
 	u8 flags1;
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
 	__le16 word0 /* word0 */;
 	__le16 word1 /* word1 */;
 	__le32 reg0 /* reg0 */;
@@ -1814,243 +3164,216 @@ struct mstorm_eth_conn_ag_ctx {
 
 
 
-struct xstorm_eth_hw_conn_ag_ctx {
+struct e4_xstorm_eth_hw_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
+	u8 state /* state */;
 	u8 flags0;
-/* exist_in_qm0 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
-/* exist_in_qm1 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
-/* exist_in_qm2 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
-/* exist_in_qm3 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1 /* bit4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
-/* cf_array_active */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1 /* exist_in_qm0 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1 /* exist_in_qm1 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1 /* exist_in_qm2 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1 /* exist_in_qm3 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1 /* bit4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1 /* cf_array_active */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1 /* bit10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1 /* bit11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
-#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED2_MASK            0x1 /* bit12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED2_SHIFT           4
-#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED3_MASK            0x1 /* bit13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED3_SHIFT           5
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1 /* bit10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1 /* bit11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED2_MASK            0x1 /* bit12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED2_SHIFT           4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED3_MASK            0x1 /* bit13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED3_SHIFT           5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
-/* timer0cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
-/* timer1cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
-/* timer2cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
-/* timer_stop_all */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3 /* timer0cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3 /* timer1cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3 /* timer_stop_all */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
-/* cf_array_cf */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3 /* cf_array_cf */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1 /* cf0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1 /* cf1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1 /* cf0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1 /* cf1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1 /* cf4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1 /* cf5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1 /* cf6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1 /* cf7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1 /* cf8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1 /* cf9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1 /* cf4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1 /* cf5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1 /* cf6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1 /* cf7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1 /* cf8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1 /* cf9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1 /* cf10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1 /* cf11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1 /* cf12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1 /* cf13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1 /* cf14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1 /* cf15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
-/* cf_array_cf_en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1 /* cf10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1 /* cf11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1 /* cf12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1 /* cf13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1 /* cf14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1 /* cf15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1 /* cf_array_cf_en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1 /* cf18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1 /* cf21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1 /* cf22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1 /* rule0en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1 /* rule1en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1 /* cf18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1 /* cf21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1 /* cf22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1 /* rule0en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1 /* rule1en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1 /* rule2en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1 /* rule3en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1 /* rule8en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1 /* rule9en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1 /* rule2en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1 /* rule3en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1 /* rule8en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1 /* rule9en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
-/* rule10en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
-/* rule11en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
-/* rule12en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
-/* rule15en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
-/* rule16en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
-/* rule17en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1 /* rule10en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1 /* rule11en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1 /* rule12en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1 /* rule13en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1 /* rule14en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1 /* rule15en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1 /* rule16en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1 /* rule17en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
-/* rule18en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
-/* rule19en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
-/* rule20en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1 /* rule18en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1 /* rule19en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1 /* rule20en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1 /* rule21en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1 /* rule22en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1 /* rule23en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1 /* rule24en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1 /* rule25en */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
-#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 e5_reserved1 /* physical_q1 */;
@@ -2063,61 +3386,904 @@ struct xstorm_eth_hw_conn_ag_ctx {
 
 
 
+struct e5_xstorm_eth_conn_ag_ctx_dq_ext_ld_part {
+	u8 reserved0 /* cdu_validation */;
+	u8 state_and_core_id /* state_and_core_id */;
+	u8 flags0;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK                   0x1 /* exist_in_qm0 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT                  0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK                      0x1 /* exist_in_qm1 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT                     1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK                      0x1 /* exist_in_qm2 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT                     2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK                   0x1 /* exist_in_qm3 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT                  3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK                      0x1 /* bit4 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT                     4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK                      0x1 /* cf_array_active */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT                     5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK                      0x1 /* bit6 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT                     6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK                      0x1 /* bit7 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT                     7
+	u8 flags1;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK                      0x1 /* bit8 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT                     0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK                      0x1 /* bit9 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT                     1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK                      0x1 /* bit10 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT                     2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK                          0x1 /* bit11 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT                         3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_COPY_CONDITION_LO_MASK         0x1 /* bit12 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_COPY_CONDITION_LO_SHIFT        4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_COPY_CONDITION_HI_MASK         0x1 /* bit13 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_COPY_CONDITION_HI_SHIFT        5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK                 0x1 /* bit14 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT                6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK                   0x1 /* bit15 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT                  7
+	u8 flags2;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK                            0x3 /* timer0cf */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT                           0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK                            0x3 /* timer1cf */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT                           2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK                            0x3 /* timer2cf */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT                           4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK                            0x3 /* timer_stop_all */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT                           6
+	u8 flags3;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK                            0x3 /* cf4 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT                           0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK                            0x3 /* cf5 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT                           2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK                            0x3 /* cf6 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT                           4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK                            0x3 /* cf7 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT                           6
+	u8 flags4;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK                            0x3 /* cf8 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT                           0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK                            0x3 /* cf9 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT                           2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK                           0x3 /* cf10 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT                          4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK                           0x3 /* cf11 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT                          6
+	u8 flags5;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_HP_CF_MASK                          0x3 /* cf12 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_HP_CF_SHIFT                         0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK                           0x3 /* cf13 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT                          2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK                           0x3 /* cf14 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT                          4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK                           0x3 /* cf15 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT                          6
+	u8 flags6;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK               0x3 /* cf16 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT              0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK               0x3 /* cf_array_cf */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT              2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK                          0x3 /* cf18 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT                         4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK                   0x3 /* cf19 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT                  6
+	u8 flags7;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK                       0x3 /* cf20 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT                      0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK                     0x3 /* cf21 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT                    2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK                      0x3 /* cf22 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT                     4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK                          0x1 /* cf0en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT                         6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK                          0x1 /* cf1en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT                         7
+	u8 flags8;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK                          0x1 /* cf2en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT                         0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK                          0x1 /* cf3en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT                         1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK                          0x1 /* cf4en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT                         2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK                          0x1 /* cf5en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT                         3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK                          0x1 /* cf6en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT                         4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK                          0x1 /* cf7en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT                         5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK                          0x1 /* cf8en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT                         6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK                          0x1 /* cf9en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT                         7
+	u8 flags9;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK                         0x1 /* cf10en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT                        0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK                         0x1 /* cf11en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT                        1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_HP_CF_EN_MASK                       0x1 /* cf12en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_HP_CF_EN_SHIFT                      2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK                         0x1 /* cf13en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT                        3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK                         0x1 /* cf14en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT                        4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK                         0x1 /* cf15en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT                        5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK            0x1 /* cf16en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT           6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK            0x1 /* cf_array_cf_en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT           7
+	u8 flags10;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK                       0x1 /* cf18en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT                      0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK                0x1 /* cf19en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT               1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK                    0x1 /* cf20en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT                   2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK                     0x1 /* cf21en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT                    3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK                   0x1 /* cf22en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT                  4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK         0x1 /* cf23en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT        5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK                     0x1 /* rule0en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT                    6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK                     0x1 /* rule1en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT                    7
+	u8 flags11;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK                     0x1 /* rule2en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT                    0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK                     0x1 /* rule3en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT                    1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK                 0x1 /* rule4en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT                2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK                        0x1 /* rule5en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT                       3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK                        0x1 /* rule6en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT                       4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK                        0x1 /* rule7en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT                       5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK                   0x1 /* rule8en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT                  6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK                        0x1 /* rule9en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT                       7
+	u8 flags12;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK                       0x1 /* rule10en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT                      0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK                       0x1 /* rule11en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT                      1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK                   0x1 /* rule12en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT                  2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK                   0x1 /* rule13en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT                  3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK                       0x1 /* rule14en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT                      4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK                       0x1 /* rule15en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT                      5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK                       0x1 /* rule16en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT                      6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK                       0x1 /* rule17en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT                      7
+	u8 flags13;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK                       0x1 /* rule18en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT                      0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK                       0x1 /* rule19en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT                      1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK                   0x1 /* rule20en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT                  2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK                   0x1 /* rule21en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT                  3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK                   0x1 /* rule22en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT                  4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK                   0x1 /* rule23en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT                  5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK                   0x1 /* rule24en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT                  6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK                   0x1 /* rule25en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT                  7
+	u8 flags14;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK               0x1 /* bit16 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT              0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK             0x1 /* bit17 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT            1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK           0x1 /* bit18 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT          2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK           0x1 /* bit19 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT          3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK                 0x1 /* bit20 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT                4
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK               0x1 /* bit21 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT              5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK                     0x3 /* cf23 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT                    6
+	u8 edpm_vport /* byte2 */;
+	__le16 physical_q0 /* physical_q0_and_vf_id_lo */;
+	__le16 tx_l2_edpm_usg_cnt_and_vf_id_hi /* physical_q1_and_vf_id_hi */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 updated_qm_pq_id /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+	u8 fw_spare_data0 /* byte3 */;
+	u8 fw_spare_data1 /* byte4 */;
+	u8 fw_spare_data2 /* byte5 */;
+	u8 fw_spare_data3 /* byte6 */;
+	__le32 fw_spare_data4 /* reg0 */;
+	__le32 fw_spare_data5 /* reg1 */;
+	__le32 fw_spare_data6 /* reg2 */;
+	__le32 fw_spare_data7 /* reg3 */;
+	__le32 fw_spare_data8 /* reg4 */;
+	__le32 fw_spare_data9 /* cf_array0 */;
+	__le32 fw_spare_data10 /* cf_array1 */;
+	u8 flags15;
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_REDIRECTION_CONDITION_LO_MASK  0x1 /* bit22 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_REDIRECTION_CONDITION_LO_SHIFT 0
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_REDIRECTION_CONDITION_HI_MASK  0x1 /* bit23 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_REDIRECTION_CONDITION_HI_SHIFT 1
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED3_MASK                   0x1 /* bit24 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED3_SHIFT                  2
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED4_MASK                   0x3 /* cf24 */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED4_SHIFT                  3
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED5_MASK                   0x1 /* cf24en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED5_SHIFT                  5
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED6_MASK                   0x1 /* rule26en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED6_SHIFT                  6
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED7_MASK                   0x1 /* rule27en */
+#define E5XSTORMETHCONNAGCTXDQEXTLDPART_E4_RESERVED7_SHIFT                  7
+	u8 fw_spare_data11 /* byte7 */;
+	__le16 fw_spare_data12 /* word7 */;
+	__le16 fw_spare_data13 /* word8 */;
+	__le16 fw_spare_data14 /* word9 */;
+};
+
+
+struct e5_mstorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	u8 flags0;
+#define E5_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
+#define E5_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+#define E5_MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E5_MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
+	u8 flags1;
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E5_MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E5_MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
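
Note that the E4 and E5 mstorm contexts are field-for-field identical; only
the macro prefix differs. A hypothetical compile-time check, not part of this
patch, that would catch an accidental divergence between the two layouts:

	/* Hypothetical sanity check: both mstorm variants must stay
	 * layout-compatible (16 bytes each as defined above).
	 */
	_Static_assert(sizeof(struct e4_mstorm_eth_conn_ag_ctx) ==
		       sizeof(struct e5_mstorm_eth_conn_ag_ctx),
		       "E4/E5 mstorm eth conn ag ctx size mismatch");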
+
+
+
+
+
+
+struct e5_ustorm_eth_task_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	__le16 word0 /* icid */;
+	u8 flags0;
+#define E5_USTORM_ETH_TASK_AG_CTX_NIBBLE0_MASK       0xF /* connection_type */
+#define E5_USTORM_ETH_TASK_AG_CTX_NIBBLE0_SHIFT      0
+#define E5_USTORM_ETH_TASK_AG_CTX_BIT0_MASK          0x1 /* exist_in_qm0 */
+#define E5_USTORM_ETH_TASK_AG_CTX_BIT0_SHIFT         4
+#define E5_USTORM_ETH_TASK_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E5_USTORM_ETH_TASK_AG_CTX_BIT1_SHIFT         5
+#define E5_USTORM_ETH_TASK_AG_CTX_CF0_MASK           0x3 /* timer0cf */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF0_SHIFT          6
+	u8 flags1;
+#define E5_USTORM_ETH_TASK_AG_CTX_CF1_MASK           0x3 /* timer1cf */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF1_SHIFT          0
+#define E5_USTORM_ETH_TASK_AG_CTX_CF2_MASK           0x3 /* timer2cf */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF2_SHIFT          2
+#define E5_USTORM_ETH_TASK_AG_CTX_CF3_MASK           0x3 /* timer_stop_all */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF3_SHIFT          4
+#define E5_USTORM_ETH_TASK_AG_CTX_CF4_MASK           0x3 /* dif_error_cf */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF4_SHIFT          6
+	u8 flags2;
+#define E5_USTORM_ETH_TASK_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF0EN_SHIFT        0
+#define E5_USTORM_ETH_TASK_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF1EN_SHIFT        1
+#define E5_USTORM_ETH_TASK_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF2EN_SHIFT        2
+#define E5_USTORM_ETH_TASK_AG_CTX_CF3EN_MASK         0x1 /* cf3en */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF3EN_SHIFT        3
+#define E5_USTORM_ETH_TASK_AG_CTX_CF4EN_MASK         0x1 /* cf4en */
+#define E5_USTORM_ETH_TASK_AG_CTX_CF4EN_SHIFT        4
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE0EN_SHIFT      5
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE1EN_SHIFT      6
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE2EN_SHIFT      7
+	u8 flags3;
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE3EN_SHIFT      0
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE4EN_SHIFT      1
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE5EN_MASK       0x1 /* rule5en */
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE5EN_SHIFT      2
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE6EN_MASK       0x1 /* rule6en */
+#define E5_USTORM_ETH_TASK_AG_CTX_RULE6EN_SHIFT      3
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED1_MASK  0x1 /* bit2 */
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED1_SHIFT 4
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED2_MASK  0x1 /* bit3 */
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED2_SHIFT 5
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED3_MASK  0x1 /* bit4 */
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED3_SHIFT 6
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED4_MASK  0x1 /* rule7en */
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED4_SHIFT 7
+	u8 flags4;
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED5_MASK  0x3 /* cf5 */
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED5_SHIFT 0
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED6_MASK  0x1 /* cf5en */
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED6_SHIFT 2
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED7_MASK  0x1 /* rule8en */
+#define E5_USTORM_ETH_TASK_AG_CTX_E4_RESERVED7_SHIFT 3
+#define E5_USTORM_ETH_TASK_AG_CTX_NIBBLE1_MASK       0xF /* dif_error_type */
+#define E5_USTORM_ETH_TASK_AG_CTX_NIBBLE1_SHIFT      4
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	u8 e4_reserved8 /* icid_ext */;
+	__le32 reg0 /* dif_err_intervals */;
+	__le32 reg1 /* dif_error_1st_interval */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+	__le32 reg4 /* reg4 */;
+	__le32 reg5 /* reg5 */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* word2 */;
+	__le32 reg6 /* reg6 */;
+	__le32 reg7 /* reg7 */;
+};
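
For the task context above, the 4-bit connection type sits in the low nibble
of flags0 and the DIF error type in the high nibble of flags4. An illustrative
read using the GET_FIELD sketch from earlier (p_ctx is a hypothetical pointer,
not a name from this patch):

	/* Illustrative only: p_ctx is a hypothetical
	 * struct e5_ustorm_eth_task_ag_ctx pointer.
	 */
	u8 conn_type = GET_FIELD(p_ctx->flags0,
				 E5_USTORM_ETH_TASK_AG_CTX_NIBBLE0);
	u8 dif_err_type = GET_FIELD(p_ctx->flags4,
				    E5_USTORM_ETH_TASK_AG_CTX_NIBBLE1);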
+
+
+
+struct e5_xstorm_eth_hw_conn_ag_ctx {
+	u8 reserved0 /* cdu_validation */;
+	u8 state_and_core_id /* state_and_core_id */;
+	u8 flags0;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1 /* exist_in_qm0 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1 /* exist_in_qm1 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1 /* exist_in_qm2 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1 /* exist_in_qm3 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1 /* bit4 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1 /* cf_array_active */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
+	u8 flags1;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1 /* bit10 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1 /* bit11 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_COPY_CONDITION_LO_MASK  0x1 /* bit12 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_COPY_CONDITION_LO_SHIFT 4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_COPY_CONDITION_HI_MASK  0x1 /* bit13 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_COPY_CONDITION_HI_SHIFT 5
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+	u8 flags2;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3 /* timer0cf */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3 /* timer1cf */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3 /* timer_stop_all */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
+	u8 flags3;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
+	u8 flags4;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
+	u8 flags5;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_HP_CF_MASK                   0x3 /* cf12 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_HP_CF_SHIFT                  0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
+	u8 flags6;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3 /* cf_array_cf */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+	u8 flags7;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1 /* cf0en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1 /* cf1en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
+	u8 flags8;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1 /* cf4en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1 /* cf5en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1 /* cf6en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1 /* cf7en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1 /* cf8en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1 /* cf9en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
+	u8 flags9;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1 /* cf10en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1 /* cf11en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_HP_CF_EN_MASK                0x1 /* cf12en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_HP_CF_EN_SHIFT               2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1 /* cf13en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1 /* cf14en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1 /* cf15en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1 /* cf_array_cf_en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+	u8 flags10;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1 /* cf18en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1 /* cf21en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1 /* cf22en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1 /* rule0en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1 /* rule1en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
+	u8 flags11;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1 /* rule2en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1 /* rule3en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1 /* rule8en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1 /* rule9en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
+	u8 flags12;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1 /* rule10en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1 /* rule11en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1 /* rule12en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1 /* rule13en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1 /* rule14en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1 /* rule15en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1 /* rule16en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1 /* rule17en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
+	u8 flags13;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1 /* rule18en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1 /* rule19en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1 /* rule20en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1 /* rule21en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1 /* rule22en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1 /* rule23en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1 /* rule24en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1 /* rule25en */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+	u8 flags14;
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define E5_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+	u8 edpm_vport /* byte2 */;
+	__le16 physical_q0 /* physical_q0_and_vf_id_lo */;
+	__le16 tx_l2_edpm_usg_cnt_and_vf_id_hi /* physical_q1_and_vf_id_hi */;
+	__le16 edpm_num_bds /* physical_q2 */;
+	__le16 tx_bd_cons /* word3 */;
+	__le16 tx_bd_prod /* word4 */;
+	__le16 updated_qm_pq_id /* word5 */;
+	__le16 conn_dpi /* conn_dpi */;
+};
+
+
+
+struct e5_ystorm_eth_task_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state_and_core_id */;
+	__le16 word0 /* icid */;
+	u8 flags0;
+#define E5_YSTORM_ETH_TASK_AG_CTX_NIBBLE0_MASK       0xF /* connection_type */
+#define E5_YSTORM_ETH_TASK_AG_CTX_NIBBLE0_SHIFT      0
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT0_MASK          0x1 /* exist_in_qm0 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT0_SHIFT         4
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT1_SHIFT         5
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT2_MASK          0x1 /* bit2 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT2_SHIFT         6
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT3_MASK          0x1 /* bit3 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT3_SHIFT         7
+	u8 flags1;
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF0_SHIFT          0
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF1_SHIFT          2
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF2SPECIAL_MASK    0x3 /* cf2special */
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF2SPECIAL_SHIFT   4
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF0EN_SHIFT        6
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_CF1EN_SHIFT        7
+	u8 flags2;
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT4_MASK          0x1 /* bit4 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_BIT4_SHIFT         0
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE0EN_SHIFT      1
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE1EN_SHIFT      2
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE2EN_SHIFT      3
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE3EN_SHIFT      4
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE4EN_SHIFT      5
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE5EN_MASK       0x1 /* rule5en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE5EN_SHIFT      6
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE6EN_MASK       0x1 /* rule6en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_RULE6EN_SHIFT      7
+	u8 flags3;
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED1_MASK  0x1 /* bit5 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED1_SHIFT 0
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED2_MASK  0x3 /* cf3 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED2_SHIFT 1
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED3_MASK  0x3 /* cf4 */
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED3_SHIFT 3
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED4_MASK  0x1 /* cf3en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED4_SHIFT 5
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED5_MASK  0x1 /* cf4en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED5_SHIFT 6
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED6_MASK  0x1 /* rule7en */
+#define E5_YSTORM_ETH_TASK_AG_CTX_E4_RESERVED6_SHIFT 7
+	__le32 reg0 /* reg0 */;
+	u8 byte2 /* byte2 */;
+	u8 byte3 /* byte3 */;
+	u8 byte4 /* byte4 */;
+	u8 e4_reserved7 /* icid_ext */;
+	__le16 word1 /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le16 word5 /* word5 */;
+	__le16 e4_reserved8 /* word6 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
 /*
- * GFT CAM line struct
+ * GFS RAM line struct with fields breakout
  */
-struct gft_cam_line {
-	__le32 camline;
-/* Indication if the line is valid. */
-#define GFT_CAM_LINE_VALID_MASK      0x1
-#define GFT_CAM_LINE_VALID_SHIFT     0
-/* Data bits, the word that compared with the profile key */
-#define GFT_CAM_LINE_DATA_MASK       0x3FFF
-#define GFT_CAM_LINE_DATA_SHIFT      1
-/* Mask bits, indicate the bits in the data that are Dont-Care */
-#define GFT_CAM_LINE_MASK_BITS_MASK  0x3FFF
-#define GFT_CAM_LINE_MASK_BITS_SHIFT 15
-#define GFT_CAM_LINE_RESERVED1_MASK  0x7
-#define GFT_CAM_LINE_RESERVED1_SHIFT 29
+struct gfs_profile_ram_line {
+	__le32 reg0;
+#define GFS_PROFILE_RAM_LINE_IN_SRC_PORT_MASK           0x1F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_IN_SRC_PORT_SHIFT          0
+#define GFS_PROFILE_RAM_LINE_IN_SRC_IP_MASK             0xFF /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_IN_SRC_IP_SHIFT            5
+#define GFS_PROFILE_RAM_LINE_IN_DST_MAC_MASK            0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_IN_DST_MAC_SHIFT           13
+#define GFS_PROFILE_RAM_LINE_IN_SRC_MAC_MASK            0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_IN_SRC_MAC_SHIFT           19
+#define GFS_PROFILE_RAM_LINE_IN_ETHTYPE_MASK            0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_ETHTYPE_SHIFT           25
+#define GFS_PROFILE_RAM_LINE_IN_CVLAN_MASK              0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_CVLAN_SHIFT             26
+#define GFS_PROFILE_RAM_LINE_IN_CVLAN_DEI_MASK          0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_CVLAN_DEI_SHIFT         27
+#define GFS_PROFILE_RAM_LINE_IN_CVLAN_PRI_MASK          0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_CVLAN_PRI_SHIFT         28
+#define GFS_PROFILE_RAM_LINE_IN_IS_UNICAST_MASK         0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_IS_UNICAST_SHIFT        29
+#define GFS_PROFILE_RAM_LINE_IN_CVLAN_EXISTS_MASK       0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_CVLAN_EXISTS_SHIFT      30
+#define GFS_PROFILE_RAM_LINE_IN_SVLAN_EXISTS_MASK       0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_SVLAN_EXISTS_SHIFT      31
+	__le32 reg1;
+#define GFS_PROFILE_RAM_LINE_IN_IS_IP_MASK              0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_IS_IP_SHIFT             0
+#define GFS_PROFILE_RAM_LINE_IN_IS_TCP_UDP_SCTP_MASK    0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_IS_TCP_UDP_SCTP_SHIFT   1
+#define GFS_PROFILE_RAM_LINE_IN_DSCP_MASK               0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_DSCP_SHIFT              2
+#define GFS_PROFILE_RAM_LINE_IN_ECN_MASK                0x3 /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_IN_ECN_SHIFT               3
+#define GFS_PROFILE_RAM_LINE_IN_DST_IP_MASK             0xFF /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_IN_DST_IP_SHIFT            5
+#define GFS_PROFILE_RAM_LINE_IN_DST_PORT_MASK           0x1F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_IN_DST_PORT_SHIFT          13
+#define GFS_PROFILE_RAM_LINE_IN_TTL_EQUALS_ZERO_MASK    0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_TTL_EQUALS_ZERO_SHIFT   18
+#define GFS_PROFILE_RAM_LINE_IN_TTL_EQUALS_ONE_MASK     0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_TTL_EQUALS_ONE_SHIFT    19
+#define GFS_PROFILE_RAM_LINE_IN_IP_PROTOCOL_MASK        0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_IP_PROTOCOL_SHIFT       20
+#define GFS_PROFILE_RAM_LINE_IN_IP_VERSION_MASK         0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IN_IP_VERSION_SHIFT        21
+#define GFS_PROFILE_RAM_LINE_IN_IPV6_FLOW_LABEL_MASK    0x1F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_IN_IPV6_FLOW_LABEL_SHIFT   22
+#define GFS_PROFILE_RAM_LINE_TUN_SRC_PORT_MASK          0x1F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_TUN_SRC_PORT_SHIFT         27
+	__le32 reg2;
+#define GFS_PROFILE_RAM_LINE_TUN_SRC_IP_MASK            0xFF /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_TUN_SRC_IP_SHIFT           0
+#define GFS_PROFILE_RAM_LINE_TUN_DST_MAC_MASK           0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_TUN_DST_MAC_SHIFT          8
+#define GFS_PROFILE_RAM_LINE_TUN_SRC_MAC_MASK           0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_TUN_SRC_MAC_SHIFT          14
+#define GFS_PROFILE_RAM_LINE_TUN_ETHTYPE_MASK           0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_ETHTYPE_SHIFT          20
+#define GFS_PROFILE_RAM_LINE_TUN_CVLAN_MASK             0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_CVLAN_SHIFT            21
+#define GFS_PROFILE_RAM_LINE_TUN_CVLAN_DEI_MASK         0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_CVLAN_DEI_SHIFT        22
+#define GFS_PROFILE_RAM_LINE_TUN_CVLAN_PRI_MASK         0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_CVLAN_PRI_SHIFT        23
+#define GFS_PROFILE_RAM_LINE_TUN_IS_UNICAST_MASK        0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_IS_UNICAST_SHIFT       24
+#define GFS_PROFILE_RAM_LINE_TUN_CVLAN_EXISTS_MASK      0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_CVLAN_EXISTS_SHIFT     25
+#define GFS_PROFILE_RAM_LINE_TUN_SVLAN_EXISTS_MASK      0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_SVLAN_EXISTS_SHIFT     26
+#define GFS_PROFILE_RAM_LINE_TUN_IS_IP_MASK             0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_IS_IP_SHIFT            27
+#define GFS_PROFILE_RAM_LINE_TUN_IS_TCP_UDP_SCTP_MASK   0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_IS_TCP_UDP_SCTP_SHIFT  28
+#define GFS_PROFILE_RAM_LINE_TUN_DSCP_MASK              0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_DSCP_SHIFT             29
+#define GFS_PROFILE_RAM_LINE_TUN_ECN_MASK               0x3 /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_TUN_ECN_SHIFT              30
+	__le32 reg3;
+#define GFS_PROFILE_RAM_LINE_TUN_DST_IP_MASK            0xFF /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_TUN_DST_IP_SHIFT           0
+#define GFS_PROFILE_RAM_LINE_TUN_DST_PORT_MASK          0x1F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_TUN_DST_PORT_SHIFT         8
+#define GFS_PROFILE_RAM_LINE_TUN_TTL_EQUALS_ZERO_MASK   0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_TTL_EQUALS_ZERO_SHIFT  13
+#define GFS_PROFILE_RAM_LINE_TUN_TTL_EQUALS_ONE_MASK    0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_TTL_EQUALS_ONE_SHIFT   14
+#define GFS_PROFILE_RAM_LINE_TUN_IP_PROTOCOL_MASK       0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_IP_PROTOCOL_SHIFT      15
+#define GFS_PROFILE_RAM_LINE_TUN_IP_VERSION_MASK        0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUN_IP_VERSION_SHIFT       16
+#define GFS_PROFILE_RAM_LINE_TUN_IPV6_FLOW_LABEL_MASK   0x1F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_TUN_IPV6_FLOW_LABEL_SHIFT  17
+#define GFS_PROFILE_RAM_LINE_PF_MASK                    0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_PF_SHIFT                   22
+#define GFS_PROFILE_RAM_LINE_TUNNEL_EXISTS_MASK         0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TUNNEL_EXISTS_SHIFT        23
+/* mask of type: bitwise (use enum gfs_tunnel_type_enum) */
+#define GFS_PROFILE_RAM_LINE_TUNNEL_TYPE_MASK           0xF
+#define GFS_PROFILE_RAM_LINE_TUNNEL_TYPE_SHIFT          24
+#define GFS_PROFILE_RAM_LINE_TENANT_ID_MASK             0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_TENANT_ID_SHIFT            28
+#define GFS_PROFILE_RAM_LINE_ENTROPY_MASK               0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_ENTROPY_SHIFT              29
+#define GFS_PROFILE_RAM_LINE_L2_HEADER_EXISTS_MASK      0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_L2_HEADER_EXISTS_SHIFT     30
+#define GFS_PROFILE_RAM_LINE_IP_FRAGMENT_MASK           0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_IP_FRAGMENT_SHIFT          31
+	__le32 reg4;
+#define GFS_PROFILE_RAM_LINE_TCP_FLAGS_MASK             0x3FF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_TCP_FLAGS_SHIFT            0
+#define GFS_PROFILE_RAM_LINE_CALC_TCP_FLAGS_MASK        0x3F /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_CALC_TCP_FLAGS_SHIFT       10
+#define GFS_PROFILE_RAM_LINE_STAG_MASK                  0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_STAG_SHIFT                 16
+#define GFS_PROFILE_RAM_LINE_STAG_DEI_MASK              0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_STAG_DEI_SHIFT             17
+#define GFS_PROFILE_RAM_LINE_STAG_PRI_MASK              0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_STAG_PRI_SHIFT             18
+#define GFS_PROFILE_RAM_LINE_MPLS_EXISTS_MASK           0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_MPLS_EXISTS_SHIFT          19
+#define GFS_PROFILE_RAM_LINE_MPLS_LABEL_MASK            0x1F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_MPLS_LABEL_SHIFT           20
+#define GFS_PROFILE_RAM_LINE_MPLS_TC_MASK               0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_MPLS_TC_SHIFT              25
+#define GFS_PROFILE_RAM_LINE_MPLS_BOS_MASK              0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_MPLS_BOS_SHIFT             26
+#define GFS_PROFILE_RAM_LINE_MPLS_TTL_EQUALS_ZERO_MASK  0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_MPLS_TTL_EQUALS_ZERO_SHIFT 27
+#define GFS_PROFILE_RAM_LINE_MPLS_TTL_EQUALS_ONE_MASK   0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_MPLS_TTL_EQUALS_ONE_SHIFT  28
+#define GFS_PROFILE_RAM_LINE_MPLS2_EXISTS_MASK          0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_MPLS2_EXISTS_SHIFT         29
+#define GFS_PROFILE_RAM_LINE_MPLS3_EXISTS_MASK          0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_MPLS3_EXISTS_SHIFT         30
+#define GFS_PROFILE_RAM_LINE_MPLS4_EXISTS_MASK          0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_MPLS4_EXISTS_SHIFT         31
+	__le32 reg5;
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE0_MASK            0xFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE0_SHIFT           0
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE1_MASK            0xFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE1_SHIFT           8
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE2_MASK            0xFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE2_SHIFT           16
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE3_MASK            0xFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE3_SHIFT           24
+	__le32 reg6;
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE4_MASK            0xFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE4_SHIFT           0
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE5_MASK            0xFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_BYTE5_SHIFT           8
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD0_MASK            0xFFFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD0_SHIFT           16
+	__le32 reg7;
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD1_MASK            0xFFFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD1_SHIFT           0
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD2_MASK            0xFFFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD2_SHIFT           16
+	__le32 reg8;
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD3_MASK            0xFFFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD3_SHIFT           0
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD4_MASK            0xFFFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD4_SHIFT           16
+	__le32 reg9;
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD5_MASK            0xFFFF /* mask of type: bitwise */
+#define GFS_PROFILE_RAM_LINE_FLEX_WORD5_SHIFT           0
+/* mask of type: bitwise, profile id associated with this context */
+#define GFS_PROFILE_RAM_LINE_PROFILE_ID_MASK            0x3FF
+#define GFS_PROFILE_RAM_LINE_PROFILE_ID_SHIFT           16
+#define GFS_PROFILE_RAM_LINE_FLEX_REG0_MASK             0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_FLEX_REG0_SHIFT            26
+	__le32 reg10;
+#define GFS_PROFILE_RAM_LINE_FLEX_REG1_MASK             0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_FLEX_REG1_SHIFT            0
+#define GFS_PROFILE_RAM_LINE_FLEX_REG2_MASK             0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_FLEX_REG2_SHIFT            6
+#define GFS_PROFILE_RAM_LINE_FLEX_REG3_MASK             0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_FLEX_REG3_SHIFT            12
+#define GFS_PROFILE_RAM_LINE_FLEX_REG4_MASK             0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_FLEX_REG4_SHIFT            18
+#define GFS_PROFILE_RAM_LINE_FLEX_REG5_MASK             0x3F /* mask of type: prefix */
+#define GFS_PROFILE_RAM_LINE_FLEX_REG5_SHIFT            24
+#define GFS_PROFILE_RAM_LINE_VPORT_ID_MASK              0x1 /* mask of type: bool */
+#define GFS_PROFILE_RAM_LINE_VPORT_ID_SHIFT             30
+/* When set - if a lookup from this profile matches, all the lookups from subsequent profiles will
+ * be discarded
+ */
+#define GFS_PROFILE_RAM_LINE_PRIORITY_MASK              0x1
+#define GFS_PROFILE_RAM_LINE_PRIORITY_SHIFT             31
+	__le32 reg11;
+#define GFS_PROFILE_RAM_LINE_SWAP_I2O_MASK              0x3 /*  (use enum gfs_swap_i2o_enum) */
+#define GFS_PROFILE_RAM_LINE_SWAP_I2O_SHIFT             0
+#define GFS_PROFILE_RAM_LINE_RESERVED_MASK              0x3FFFFFFF
+#define GFS_PROFILE_RAM_LINE_RESERVED_SHIFT             2
+	__le32 reserved_regs[4];
 };
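
The profile RAM line above encodes, per packet field, how much of that field
participates in the GFS lookup key: a "bool" mask is a single include/exclude bit,
a "bitwise" mask selects individual bits, and a "prefix" mask carries what appears
to be a prefix length. A minimal sketch of keying on the inner L4 ports, assuming
the GET_FIELD/SET_FIELD helpers from ecore.h, OSAL_CPU_TO_LE32 from bcm_osal.h,
and the prefix-length reading of "prefix" masks:

    struct gfs_profile_ram_line line = { 0 };
    u32 reg0 = 0, reg1 = 0;

    SET_FIELD(reg0, GFS_PROFILE_RAM_LINE_IN_SRC_PORT, 16);       /* full 16-bit port match */
    SET_FIELD(reg1, GFS_PROFILE_RAM_LINE_IN_DST_PORT, 16);       /* full 16-bit port match */
    SET_FIELD(reg1, GFS_PROFILE_RAM_LINE_IN_IS_TCP_UDP_SCTP, 1); /* require TCP/UDP/SCTP */

    line.reg0 = OSAL_CPU_TO_LE32(reg0); /* struct fields are little-endian */
    line.reg1 = OSAL_CPU_TO_LE32(reg1);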
 
 
 /*
- * GFT CAM line struct (for driversim use)
+ * GFT CAM line struct with fields breakout
  */
 struct gft_cam_line_mapped {
 	__le32 camline;
 /* Indication if the line is valid. */
 #define GFT_CAM_LINE_MAPPED_VALID_MASK                     0x1
 #define GFT_CAM_LINE_MAPPED_VALID_SHIFT                    0
-/* use enum gft_profile_ip_version (use enum gft_profile_ip_version) */
+/*  (use enum gft_profile_ip_version) */
 #define GFT_CAM_LINE_MAPPED_IP_VERSION_MASK                0x1
 #define GFT_CAM_LINE_MAPPED_IP_VERSION_SHIFT               1
-/* use enum gft_profile_ip_version (use enum gft_profile_ip_version) */
+/*  (use enum gft_profile_ip_version) */
 #define GFT_CAM_LINE_MAPPED_TUNNEL_IP_VERSION_MASK         0x1
 #define GFT_CAM_LINE_MAPPED_TUNNEL_IP_VERSION_SHIFT        2
-/* use enum gft_profile_upper_protocol_type
- * (use enum gft_profile_upper_protocol_type)
- */
+/*  (use enum gft_profile_upper_protocol_type) */
 #define GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK       0xF
 #define GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_SHIFT      3
-/* use enum gft_profile_tunnel_type (use enum gft_profile_tunnel_type) */
+/*  (use enum gft_profile_tunnel_type) */
 #define GFT_CAM_LINE_MAPPED_TUNNEL_TYPE_MASK               0xF
 #define GFT_CAM_LINE_MAPPED_TUNNEL_TYPE_SHIFT              7
 #define GFT_CAM_LINE_MAPPED_PF_ID_MASK                     0xF
 #define GFT_CAM_LINE_MAPPED_PF_ID_SHIFT                    11
-/* use enum gft_profile_ip_version (use enum gft_profile_ip_version) */
 #define GFT_CAM_LINE_MAPPED_IP_VERSION_MASK_MASK           0x1
 #define GFT_CAM_LINE_MAPPED_IP_VERSION_MASK_SHIFT          15
-/* use enum gft_profile_ip_version (use enum gft_profile_ip_version) */
 #define GFT_CAM_LINE_MAPPED_TUNNEL_IP_VERSION_MASK_MASK    0x1
 #define GFT_CAM_LINE_MAPPED_TUNNEL_IP_VERSION_MASK_SHIFT   16
-/* use enum gft_profile_upper_protocol_type
- * (use enum gft_profile_upper_protocol_type)
- */
 #define GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK  0xF
 #define GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_SHIFT 17
-/* use enum gft_profile_tunnel_type (use enum gft_profile_tunnel_type) */
 #define GFT_CAM_LINE_MAPPED_TUNNEL_TYPE_MASK_MASK          0xF
 #define GFT_CAM_LINE_MAPPED_TUNNEL_TYPE_MASK_SHIFT         21
 #define GFT_CAM_LINE_MAPPED_PF_ID_MASK_MASK                0xF
@@ -2127,12 +4293,6 @@ struct gft_cam_line_mapped {
 };
 
 
-union gft_cam_line_union {
-	struct gft_cam_line cam_line;
-	struct gft_cam_line_mapped cam_line_mapped;
-};
-
-
 /*
  * Used in gft_profile_key: Indication for ip version
  */
@@ -2154,9 +4314,7 @@ struct gft_profile_key {
 /* use enum gft_profile_ip_version (use enum gft_profile_ip_version) */
 #define GFT_PROFILE_KEY_TUNNEL_IP_VERSION_MASK    0x1
 #define GFT_PROFILE_KEY_TUNNEL_IP_VERSION_SHIFT   1
-/* use enum gft_profile_upper_protocol_type
- * (use enum gft_profile_upper_protocol_type)
- */
+/* use enum gft_profile_upper_protocol_type (use enum gft_profile_upper_protocol_type) */
 #define GFT_PROFILE_KEY_UPPER_PROTOCOL_TYPE_MASK  0xF
 #define GFT_PROFILE_KEY_UPPER_PROTOCOL_TYPE_SHIFT 2
 /* use enum gft_profile_tunnel_type (use enum gft_profile_tunnel_type) */
@@ -2212,7 +4370,7 @@ enum gft_profile_upper_protocol_type {
  */
 struct gft_ram_line {
 	__le32 lo;
-#define GFT_RAM_LINE_VLAN_SELECT_MASK              0x3
+#define GFT_RAM_LINE_VLAN_SELECT_MASK              0x3 /*  (use enum gft_vlan_select) */
 #define GFT_RAM_LINE_VLAN_SELECT_SHIFT             0
 #define GFT_RAM_LINE_TUNNEL_ENTROPHY_MASK          0x1
 #define GFT_RAM_LINE_TUNNEL_ENTROPHY_SHIFT         2
@@ -2272,7 +4430,7 @@ struct gft_ram_line {
 #define GFT_RAM_LINE_TCP_FLAG_NS_SHIFT             29
 #define GFT_RAM_LINE_DST_PORT_MASK                 0x1
 #define GFT_RAM_LINE_DST_PORT_SHIFT                30
-#define GFT_RAM_LINE_SRC_PORT_MASK                 0x1U
+#define GFT_RAM_LINE_SRC_PORT_MASK                 0x1
 #define GFT_RAM_LINE_SRC_PORT_SHIFT                31
 	__le32 hi;
 #define GFT_RAM_LINE_DSCP_MASK                     0x1
@@ -2311,5 +4469,4 @@ enum gft_vlan_select {
 	MAX_GFT_VLAN_SELECT
 };
 
-
 #endif /* __ECORE_HSI_ETH__ */
diff --git a/drivers/net/qede/base/ecore_hsi_func_common.h b/drivers/net/qede/base/ecore_hsi_func_common.h
index 4351b7b14..6dc10aece 100644
--- a/drivers/net/qede/base/ecore_hsi_func_common.h
+++ b/drivers/net/qede/base/ecore_hsi_func_common.h
@@ -1,7 +1,8 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2016 - 2020 Cavium Inc.
+ * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
 
 #ifndef _HSI_FUNC_COMMON_H
diff --git a/drivers/net/qede/base/ecore_hsi_init_func.h b/drivers/net/qede/base/ecore_hsi_init_func.h
index 7efe2eff1..93b24b729 100644
--- a/drivers/net/qede/base/ecore_hsi_init_func.h
+++ b/drivers/net/qede/base/ecore_hsi_init_func.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_HSI_INIT_FUNC__
 #define __ECORE_HSI_INIT_FUNC__
 /********************************/
@@ -18,6 +18,670 @@
 #define CRC8_TABLE_SIZE					256
 #endif
 
+/*
+ * GFS context Descriptor QREG (0)
+ */
+struct gfs_context_descriptor {
+	u32 bitfields0;
+/* Reserved for valid bit in searcher database */
+#define GFS_CONTEXT_DESCRIPTOR_RESERVED_VALID_MASK            0x1
+#define GFS_CONTEXT_DESCRIPTOR_RESERVED_VALID_SHIFT           0
+/* Reserved for last bit in searcher database */
+#define GFS_CONTEXT_DESCRIPTOR_RESERVED_LAST_MASK             0x1
+#define GFS_CONTEXT_DESCRIPTOR_RESERVED_LAST_SHIFT            1
+#define GFS_CONTEXT_DESCRIPTOR_FW_RESERVED0_MASK              0x1F
+#define GFS_CONTEXT_DESCRIPTOR_FW_RESERVED0_SHIFT             2
+/* Skips the last search in the bundle according to the logic described in the GFS spec */
+#define GFS_CONTEXT_DESCRIPTOR_SKIP_LAST_SEARCH_MASK          0x1
+#define GFS_CONTEXT_DESCRIPTOR_SKIP_LAST_SEARCH_SHIFT         7
+/* Always use leading. (Leading, Accumulative, both leading and accumulative) (use enum
+ * gfs_context_type)
+ */
+#define GFS_CONTEXT_DESCRIPTOR_CONTEXT_TYPE_MASK              0x3
+#define GFS_CONTEXT_DESCRIPTOR_CONTEXT_TYPE_SHIFT             8
+/* 0 - do always, 1 - if redirection condition is not met (use enum gfs_cntx_cmd0) */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_COND_MASK             0x1
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_COND_SHIFT            10
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_CHANGE_VPORT_MASK     0x1 /* if set, change the vport */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_CHANGE_VPORT_SHIFT    11
+/* Substituted to resolution structure for FW use (use enum gfs_res_struct_dst_type) */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_FW_HINT_MASK          0x7
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_FW_HINT_SHIFT         12
+/* Command ID: 0 - From the 3rd QREG of this context (QREG#2), 1-127 - From canned Commands RAM */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_LOCATION_MASK         0x7F
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_LOCATION_SHIFT        15
+/* Used if ChangeVport is set */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_VPORT_MASK            0xFF
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND0_VPORT_SHIFT           22
+/* 0 - disabled, 1 - do always, 2 - do if redirection condition is met, 3 - do if sampling
+ * condition is met (use enum gfs_cntx_cmd1)
+ */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_COND_MASK             0x3
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_COND_SHIFT            30
+	u32 priority /* Priority of the leading context */;
+	u32 bitfields1;
+/* 0 - disabled, 1 - copy to a single Vport always, 2 - copy to a single Vport if copy condition
+ * is met, 3 - copy to multiple Vports always, 4 - copy to multiple Vports if copy condition is
+ * met, 5 - do if sampling condition is met (use enum gfs_cntx_cmd2)
+ */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_COND_MASK             0x7
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_COND_SHIFT            0
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_CHANGE_VPORT_MASK     0x1 /* if set, change the vport */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_CHANGE_VPORT_SHIFT    3
+/* Substituted to resolution structure for FW use (use enum gfs_res_struct_dst_type) */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_FW_HINT_MASK          0x7
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_FW_HINT_SHIFT         4
+/* 0 - From the 3rd QREG of this context (QREG#2), 1-127 - From canned Commands RAM */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_LOCATION_MASK         0x7F
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_LOCATION_SHIFT        7
+/* Used if ChangeVport is set */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_VPORT_MASK            0xFF
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND1_VPORT_SHIFT           14
+/* Matched with 8 TCP flags and 2 additional bits supplied in the GFS header. */
+#define GFS_CONTEXT_DESCRIPTOR_COPY_CONDITION_EN_MASK         0x3FF
+#define GFS_CONTEXT_DESCRIPTOR_COPY_CONDITION_EN_SHIFT        22
+	u32 bitfields2;
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_CHANGE_VPORT_MASK     0x1 /* if set, change the vport */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_CHANGE_VPORT_SHIFT    0
+/* Substituted to resolution structure for FW use (use enum gfs_res_struct_dst_type) */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_FW_HINT_MASK          0x7
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_FW_HINT_SHIFT         1
+/* 0 - From the 3rd QREG of this context (QREG#2), 1-127 - From canned Commands RAM */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_LOCATION_MASK         0x7F
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_LOCATION_SHIFT        4
+/* For copy to a single Vport - Vport number. For copy to multiple Vports: 0-31 - index of RAM
+ * entry that contains the Vports bitmask, 0xFF - the bitmask is located in QREG#2 and QREG#3
+ */
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_VPORT_MASK            0xFF
+#define GFS_CONTEXT_DESCRIPTOR_COMMAND2_VPORT_SHIFT           11
+/* Matched with TTL==0 (bit 0), TTL==1 (bit 1) from eth_gfs_redirect_condition in
+ * action_data.redirect.condition, and 2 additional bits (2 and 3) supplied in the GFS header.
+ */
+#define GFS_CONTEXT_DESCRIPTOR_REDIRECTION_CONDITION_EN_MASK  0xF
+#define GFS_CONTEXT_DESCRIPTOR_REDIRECTION_CONDITION_EN_SHIFT 19
+#define GFS_CONTEXT_DESCRIPTOR_FW_RESERVED1_MASK              0x1FF
+#define GFS_CONTEXT_DESCRIPTOR_FW_RESERVED1_SHIFT             23
+};
+
+/*
+ * RGFS context replacement data QREG 1
+ */
+struct rgfs_replacement_struct {
+/* CID for replacement in CM header. Used for steering to TX queue. Only one copy can use CID. In
+ * TGFS used to change event ID - select TGFS flow in PSTORM
+ */
+	u32 cid;
+	u16 flags;
+/* Connection Type for replacement in the CM header (Unused) */
+#define RGFS_REPLACEMENT_STRUCT_CONNECTION_TYPE_MASK          0xF
+#define RGFS_REPLACEMENT_STRUCT_CONNECTION_TYPE_SHIFT         0
+/* Core Affinity for replacement in the CM header (Unused) */
+#define RGFS_REPLACEMENT_STRUCT_CORE_AFFINITY_MASK            0x3
+#define RGFS_REPLACEMENT_STRUCT_CORE_AFFINITY_SHIFT           4
+/* Change dst IP address that RSS header refers to. Set 1 to replace inner IPV4 address */
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_DST_IP_ADDRESS_MASK    0x1
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_DST_IP_ADDRESS_SHIFT   6
+/* Change src IP address that RSS header refers to. Set 1 to replace inner IPV4 address */
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_SRC_IP_ADDRESS_MASK    0x1
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_SRC_IP_ADDRESS_SHIFT   7
+/* Change dst port that RSS header refers to. Set 1 to replace inner UDP/TCP port */
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_DST_PORT_MASK          0x1
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_DST_PORT_SHIFT         8
+/* Change src port that RSS header refers to. Set 1 to replace inner UDP/TCP port */
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_SRC_PORT_MASK          0x1
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_SRC_PORT_SHIFT         9
+/* Change IpVerType field in the RSS header. Set 0 */
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_RSS_IPVER_TYPE_MASK    0x1
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_RSS_IPVER_TYPE_SHIFT   10
+/* Change L4Type field in the RSS header. Set 0 */
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_RSS_L4_TYPE_MASK       0x1
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_RSS_L4_TYPE_SHIFT      11
+#define RGFS_REPLACEMENT_STRUCT_RSS_L4_TYPE_MASK              0x3 /* Unused */
+#define RGFS_REPLACEMENT_STRUCT_RSS_L4_TYPE_SHIFT             12
+/* Change OVER_IP_PROTOCOL field in the RSS header. Set 0 */
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_RSS_OVER_IP_PROT_MASK  0x1
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_RSS_OVER_IP_PROT_SHIFT 14
+/* Change Ipv4Frag field in the RSS header. Set 0 */
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_RSS_IPV4_FRAG_MASK     0x1
+#define RGFS_REPLACEMENT_STRUCT_CHANGE_RSS_IPV4_FRAG_SHIFT    15
+	u16 dst_port /* Dst port value for replacement. L4 port */;
+	u16 src_port /* Src port value for replacement. L4 port */;
+	u16 fw_reserved;
+/* Flow Id. Used for TMLD aggregation. Always set. Replace last register in TM message. */
+	u32 fw_flow_id;
+};
+
+struct gfs_modify_inner_ipv4_addr {
+	u32 dst_ip /* Inner IPv4 destination address. */;
+};
+
+struct gfs_modify_inner_ipv6_addr {
+	u16 dst_ip_index /* Inner IPv6 destination IP indirection index. */;
+	u16 src_ip_index /* Inner IPv6 source IP indirection index. */;
+};
+
+union gfs_modify_inner_ip_addr {
+	struct gfs_modify_inner_ipv4_addr ipv4 /* Inner IPv4 destination address. */;
+	struct gfs_modify_inner_ipv6_addr ipv6 /* Inner IPv6 indirection indexes. */;
+};
+
+/*
+ * Modify tunnel IPV4 header flags
+ */
+struct gfs_modify_tunnel_ipv4_header_flags {
+	u16 bitfields0;
+/* Modify tunnel source IP address. */
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_MODIFY_TUNN_IP_SRC_ADDR_MASK    0x1
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_MODIFY_TUNN_IP_SRC_ADDR_SHIFT   0
+/* Modify tunnel destination IP address. */
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_MODIFY_TUNN_IP_DST_ADDR_MASK    0x1
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_MODIFY_TUNN_IP_DST_ADDR_SHIFT   1
+/* Modify tunnel IP TTL or Hop Limit */
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_SET_TUNN_IP_TTL_HOP_LIMIT_MASK  0x1
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_SET_TUNN_IP_TTL_HOP_LIMIT_SHIFT 2
+/* Decrement tunnel IP TTL or Hop Limit */
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_DEC_TUNN_IP_TTL_HOP_LIMIT_MASK  0x1
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_DEC_TUNN_IP_TTL_HOP_LIMIT_SHIFT 3
+/* Modify tunnel IP DSCP */
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_MODIFY_TUNN_IP_DSCP_MASK        0x1
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_FW_MODIFY_TUNN_IP_DSCP_SHIFT       4
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_RESERVED_MASK                      0x7FF
+#define GFS_MODIFY_TUNNEL_IPV4_HEADER_FLAGS_RESERVED_SHIFT                     5
+};
+
+/*
+ * Push tunnel IPV4 header flags
+ */
+struct gfs_push_tunnel_ipv4_header_flags {
+	u16 bitfields0;
+#define GFS_PUSH_TUNNEL_IPV4_HEADER_FLAGS_DONT_FRAG_FLAG_MASK             0x1
+#define GFS_PUSH_TUNNEL_IPV4_HEADER_FLAGS_DONT_FRAG_FLAG_SHIFT            0
+/* Modify tunnel destination IP address. */
+#define GFS_PUSH_TUNNEL_IPV4_HEADER_FLAGS_FW_TUNN_IPV4_ID_IND_INDEX_MASK  0x1FF
+#define GFS_PUSH_TUNNEL_IPV4_HEADER_FLAGS_FW_TUNN_IPV4_ID_IND_INDEX_SHIFT 1
+#define GFS_PUSH_TUNNEL_IPV4_HEADER_FLAGS_RESERVED_MASK                   0x3F
+#define GFS_PUSH_TUNNEL_IPV4_HEADER_FLAGS_RESERVED_SHIFT                  10
+};
+
+union gfs_tunnel_header_flags {
+/* Modify tunnel IPV4 header Flags */
+	struct gfs_modify_tunnel_ipv4_header_flags modify_tunnel_ipv4_header_flags;
+/* Push tunnel IPV4 header Flags */
+	struct gfs_push_tunnel_ipv4_header_flags push_tunnel_ipv4_header_flags;
+};
+
+/*
+ * Push tunnel VXLAN header data
+ */
+struct gfs_push_tunnel_vxlan_header_data {
+	u16 vni_bits0to15 /* Tunnel ID bits  0..15 */;
+	u8 vni_bits16to23 /* Tunnel ID bits 16..23 */;
+	u8 bitfields0;
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_RESERVED0_MASK           0x7
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_RESERVED0_SHIFT          0
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_UDP_CHECKSUM_VALID_MASK  0x1 /* UDP checksum exists */
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_UDP_CHECKSUM_VALID_SHIFT 3
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_I_FRAG_MASK              0x1 /* VNI valid. */
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_I_FRAG_SHIFT             4
+/* Use out of band VNI from RX path for hairpin traffic. */
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_OOB_VNI_MASK             0x1
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_OOB_VNI_SHIFT            5
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_RESERVED1_MASK           0x1
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_RESERVED1_SHIFT          6
+/* Calculate UDP source port from 4-tuple. */
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_CALC_ENTROPY_MASK        0x1
+#define GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_CALC_ENTROPY_SHIFT       7
+	u16 entropy /* VXLAN UDP source value. */;
+};
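
Note that the 24-bit VXLAN VNI is split across two fields here and in the modify
variant below. A minimal sketch of filling the push variant (SET_FIELD as in
ecore.h; the flag choices are illustrative):

    u32 vni = 0x123456; /* example VNI */
    struct gfs_push_tunnel_vxlan_header_data d = { 0 };

    d.vni_bits0to15 = (u16)(vni & 0xffff);       /* Tunnel ID bits  0..15 */
    d.vni_bits16to23 = (u8)((vni >> 16) & 0xff); /* Tunnel ID bits 16..23 */
    SET_FIELD(d.bitfields0, GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_I_FRAG, 1);       /* VNI valid */
    SET_FIELD(d.bitfields0, GFS_PUSH_TUNNEL_VXLAN_HEADER_DATA_CALC_ENTROPY, 1); /* port from 4-tuple */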
+
+/*
+ * Modify tunnel VXLAN header data
+ */
+struct gfs_modify_tunnel_vxlan_header_data {
+	u16 vni_bits0to15 /* Tunnel ID bits  0..15 */;
+	u8 vni_bits16to23 /* Tunnel ID bits 16..23 */;
+	u8 bitfields0;
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_RESERVED0_MASK        0x7
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_RESERVED0_SHIFT       0
+/* Update UDP checksum if exists */
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_UPDATE_CHECKSUM_MASK  0x1
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_UPDATE_CHECKSUM_SHIFT 3
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_SET_VNI_MASK          0x1 /* Set VNI */
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_SET_VNI_SHIFT         4
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_RESERVED1_MASK        0x1
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_RESERVED1_SHIFT       5
+/* Set       UDP source port. */
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_SET_ENTROPY_MASK      0x1
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_SET_ENTROPY_SHIFT     6
+/* Calculate UDP source port from 4-tuple. */
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_CALC_ENTROPY_MASK     0x1
+#define GFS_MODIFY_TUNNEL_VXLAN_HEADER_DATA_CALC_ENTROPY_SHIFT    7
+	u16 entropy /* VXLAN UDP source value. */;
+};
+
+/*
+ * Push tunnel GRE header data
+ */
+struct gfs_push_tunnel_gre_header_data {
+	u16 key_bits0to15 /* Tunnel ID bits  0..15. */;
+	u16 key_bits16to31 /* Tunnel ID bits 16..31. */;
+	u16 bitfields0;
+#define GFS_PUSH_TUNNEL_GRE_HEADER_DATA_CHECKSUM_VALID_MASK  0x1 /* GRE checksum exists */
+#define GFS_PUSH_TUNNEL_GRE_HEADER_DATA_CHECKSUM_VALID_SHIFT 0
+#define GFS_PUSH_TUNNEL_GRE_HEADER_DATA_KEY_VALID_MASK       0x1 /* Key exists */
+#define GFS_PUSH_TUNNEL_GRE_HEADER_DATA_KEY_VALID_SHIFT      1
+/* Use out of band KEY from RX path for hairpin traffic. */
+#define GFS_PUSH_TUNNEL_GRE_HEADER_DATA_OOB_KEY_MASK         0x1
+#define GFS_PUSH_TUNNEL_GRE_HEADER_DATA_OOB_KEY_SHIFT        2
+#define GFS_PUSH_TUNNEL_GRE_HEADER_DATA_RESERVED_MASK        0x1FFF
+#define GFS_PUSH_TUNNEL_GRE_HEADER_DATA_RESERVED_SHIFT       3
+};
+
+/*
+ * Modify tunnel GRE header data
+ */
+struct gfs_modify_tunnel_gre_header_data {
+	u16 key_bits0to15 /* Tunnel ID bits  0..15. */;
+	u16 key_bits16to31 /* Tunnel ID bits 16..31. */;
+	u16 bitfields0;
+/* Update GRE checksum if exists */
+#define GFS_MODIFY_TUNNEL_GRE_HEADER_DATA_UPDATE_CHECKSUM_MASK  0x1
+#define GFS_MODIFY_TUNNEL_GRE_HEADER_DATA_UPDATE_CHECKSUM_SHIFT 0
+#define GFS_MODIFY_TUNNEL_GRE_HEADER_DATA_SET_VALID_MASK        0x1 /* Set Key */
+#define GFS_MODIFY_TUNNEL_GRE_HEADER_DATA_SET_VALID_SHIFT       1
+#define GFS_MODIFY_TUNNEL_GRE_HEADER_DATA_RESERVED_MASK         0x3FFF
+#define GFS_MODIFY_TUNNEL_GRE_HEADER_DATA_RESERVED_SHIFT        2
+};
+
+union gfs_tunnel_header_data {
+/* Push   VXLAN tunnel header Data */
+	struct gfs_push_tunnel_vxlan_header_data push_tunnel_vxlan_header_data;
+/* Modify VXLAN tunnel header Data */
+	struct gfs_modify_tunnel_vxlan_header_data modify_tunnel_vxlan_header_data;
+/* Push   GRE   tunnel header Data */
+	struct gfs_push_tunnel_gre_header_data push_tunnel_gre_header_data;
+/* Modify GRE   tunnel header Data */
+	struct gfs_modify_tunnel_gre_header_data modify_tunnel_gre_header_data;
+};
+
+/*
+ * RGFS context Descriptor QREG
+ */
+struct eth_rgfs_full_context {
+	struct gfs_context_descriptor context_descriptor;
+	struct rgfs_replacement_struct replacement_struct;
+	u32 bitfields2;
+/* The log2 factor to apply to the timestamps (must be the same for both TIDs in flow) */
+#define ETH_RGFS_FULL_CONTEXT_FW_COUNTERS_TIMESTAMP_LOG2_FACTOR_MASK  0x1F
+#define ETH_RGFS_FULL_CONTEXT_FW_COUNTERS_TIMESTAMP_LOG2_FACTOR_SHIFT 0
+/* First ETH header (after packet modification) VLAN mode: NOP, POP, PUSH, change VID, change PRI,
+ * change whole tag. (use enum gfs_vlan_mode)
+ */
+#define ETH_RGFS_FULL_CONTEXT_FW_FIRST_HDR_VLAN_MODE_MASK             0x7
+#define ETH_RGFS_FULL_CONTEXT_FW_FIRST_HDR_VLAN_MODE_SHIFT            5
+#define ETH_RGFS_FULL_CONTEXT_RESERVED1_MASK                          0xFF
+#define ETH_RGFS_FULL_CONTEXT_RESERVED1_SHIFT                         8
+/*  (use enum eth_gfs_pop_hdr_type) */
+#define ETH_RGFS_FULL_CONTEXT_FW_POP_TYPE_MASK                        0x3
+#define ETH_RGFS_FULL_CONTEXT_FW_POP_TYPE_SHIFT                       16
+#define ETH_RGFS_FULL_CONTEXT_FW_TID0_VALID_MASK                      0x1 /* TID 0 Valid */
+#define ETH_RGFS_FULL_CONTEXT_FW_TID0_VALID_SHIFT                     18
+#define ETH_RGFS_FULL_CONTEXT_FW_TID1_VALID_MASK                      0x1 /* TID 1 Valid */
+#define ETH_RGFS_FULL_CONTEXT_FW_TID1_VALID_SHIFT                     19
+/* First duplication driver hint. */
+#define ETH_RGFS_FULL_CONTEXT_DRV_HINT_0_MASK                         0x7
+#define ETH_RGFS_FULL_CONTEXT_DRV_HINT_0_SHIFT                        20
+#define ETH_RGFS_FULL_CONTEXT_RESERVED2_MASK                          0x7
+#define ETH_RGFS_FULL_CONTEXT_RESERVED2_SHIFT                         23
+/* Second duplication driver hint. */
+#define ETH_RGFS_FULL_CONTEXT_DRV_HINT_1_MASK                         0x7
+#define ETH_RGFS_FULL_CONTEXT_DRV_HINT_1_SHIFT                        26
+/* Third duplication driver hint. */
+#define ETH_RGFS_FULL_CONTEXT_DRV_HINT_2_MASK                         0x7
+#define ETH_RGFS_FULL_CONTEXT_DRV_HINT_2_SHIFT                        29
+	u16 fw_first_hdr_vlan_val /* Tunnel or single header VLAN value */;
+	u16 fw_command1_sample_len /* 2nd Command sample length (1st command is command0) */;
+	u32 fw_tid0 /* Counter 0 TID */;
+	u32 fw_tid1 /* Counter 1 TID */;
+/* HDR Modify - inner IPv4 destination IP or IPv6 indirection indexes. */
+	union gfs_modify_inner_ip_addr fw_modify_inner_ip_addr;
+	u16 fw_tunnel_src_mac_ind;
+/* Tunnel source MAC indirection index. 0 is an invalid index. */
+#define ETH_RGFS_FULL_CONTEXT_FW_TUNNEL_SRC_MAC_IND_INDEX_MASK        0x1FF
+#define ETH_RGFS_FULL_CONTEXT_FW_TUNNEL_SRC_MAC_IND_INDEX_SHIFT       0
+#define ETH_RGFS_FULL_CONTEXT_RESERVED3_MASK                          0xF
+#define ETH_RGFS_FULL_CONTEXT_RESERVED3_SHIFT                         9
+/* CM HDR override - Clear for drop. No special allocation needed for drop fields. */
+#define ETH_RGFS_FULL_CONTEXT_XXLOCK_CMD_MASK                         0x7
+#define ETH_RGFS_FULL_CONTEXT_XXLOCK_CMD_SHIFT                        13
+	u16 bitfields3;
+/* Modify inner/single L4 ports */
+#define ETH_RGFS_FULL_CONTEXT_FW_MODIFY_INNER_4TUPLE_PORTS_MASK       0x1
+#define ETH_RGFS_FULL_CONTEXT_FW_MODIFY_INNER_4TUPLE_PORTS_SHIFT      0
+/* Modify inner/single IP addresses */
+#define ETH_RGFS_FULL_CONTEXT_FW_MODIFY_INNER_4TUPLE_ADDR_MASK        0x1
+#define ETH_RGFS_FULL_CONTEXT_FW_MODIFY_INNER_4TUPLE_ADDR_SHIFT       1
+/* Modify inner/single ETH header */
+#define ETH_RGFS_FULL_CONTEXT_FW_MODIFY_INNER_MAC_MASK                0x1
+#define ETH_RGFS_FULL_CONTEXT_FW_MODIFY_INNER_MAC_SHIFT               2
+#define ETH_RGFS_FULL_CONTEXT_FW_MODIFY_INNER_IP_DSCP_MASK            0x1 /* Modify inner IP DSCP */
+#define ETH_RGFS_FULL_CONTEXT_FW_MODIFY_INNER_IP_DSCP_SHIFT           3
+/* Inner ETH header VLAN modification: NOP, change VID, change PRI, change whole tag, POP, PUSH (use
+ * enum gfs_vlan_mode)
+ */
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_HDR_VLAN_MODE_MASK             0x7
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_HDR_VLAN_MODE_SHIFT            4
+/* Modify inner IP TTL or Hop Limit */
+#define ETH_RGFS_FULL_CONTEXT_FW_SET_INNER_IP_TTL_HOP_LIMIT_MASK      0x1
+#define ETH_RGFS_FULL_CONTEXT_FW_SET_INNER_IP_TTL_HOP_LIMIT_SHIFT     7
+/* Decrement inner IP TTL or Hop Limit */
+#define ETH_RGFS_FULL_CONTEXT_FW_DEC_INNER_IP_TTL_HOP_LIMIT_MASK      0x1
+#define ETH_RGFS_FULL_CONTEXT_FW_DEC_INNER_IP_TTL_HOP_LIMIT_SHIFT     8
+/* CM HDR override - Clear for drop. No special allocation needed for drop fields. */
+#define ETH_RGFS_FULL_CONTEXT_LOAD_ST_CTX_FLG_MASK                    0x1
+#define ETH_RGFS_FULL_CONTEXT_LOAD_ST_CTX_FLG_SHIFT                   9
+#define ETH_RGFS_FULL_CONTEXT_RESERVED4_MASK                          0x3F
+#define ETH_RGFS_FULL_CONTEXT_RESERVED4_SHIFT                         10
+	u16 fw_command2_sample_len /* 3rd Command sample length (1st command is command0) */;
+	u16 fw_inner_vlan_val /* Inner VLAN value */;
+	u32 bitfields4;
+/* inner IP DSCP Set value */
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_IP_DSCP_MASK                   0x3F
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_IP_DSCP_SHIFT                  0
+/* Inner source MAC indirection index. 0 is an invalid index. */
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_SRC_MAC_IND_INDEX_MASK         0x1FF
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_SRC_MAC_IND_INDEX_SHIFT        6
+/* Inner destination MAC indirection index. 0 is an invalid index. */
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_DEST_MAC_IND_INDEX_MASK        0x1FF
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_DEST_MAC_IND_INDEX_SHIFT       15
+/* inner IP TTL or Hop Limit Set value */
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_IP_TTL_HOP_LIMIT_MASK          0xFF
+#define ETH_RGFS_FULL_CONTEXT_FW_INNER_IP_TTL_HOP_LIMIT_SHIFT         24
+	u32 src_ip /* HDR Modify: IPv4 source IP. */;
+/* Filter marker. Always set. Replace per-pf config in basic block. Reported by CQE */
+	u32 flow_mark;
+/* HDR Modify: tunnel IPv4 destination IP or IPv6 indirection index. Used by FW. */
+	u32 fw_tunn_dest_ip;
+/* HDR Modify: tunnel IPv4 source IP or IPv6 indirection index. Used by FW. */
+	u32 fw_tunn_src_ip;
+/* Tunnel IP tunnel header flag union. */
+	union gfs_tunnel_header_flags tunnel_ip_header_flags;
+	u8 fw_tunn_ip_ttl_hop_limit /* Tunnel IP TTL or Hop Limit Set value */;
+	u8 bitfields5;
+/* RSS HDR override - for CID steering. Disable CID override. */
+#define ETH_RGFS_FULL_CONTEXT_CID_OVERWRITE_MASK                      0x1
+#define ETH_RGFS_FULL_CONTEXT_CID_OVERWRITE_SHIFT                     0
+/* Tunnel IP header flags for PUSH flow: 2 LSBits of tos field. */
+#define ETH_RGFS_FULL_CONTEXT_FW_TUNNEL_IP_ECN_MASK                   0x3
+#define ETH_RGFS_FULL_CONTEXT_FW_TUNNEL_IP_ECN_SHIFT                  1
+/* Push headers type. (use enum eth_gfs_push_hdr_type) */
+#define ETH_RGFS_FULL_CONTEXT_FW_PUSH_TYPE_MASK                       0x7
+#define ETH_RGFS_FULL_CONTEXT_FW_PUSH_TYPE_SHIFT                      3
+#define ETH_RGFS_FULL_CONTEXT_RESERVED_MASK                           0x3
+#define ETH_RGFS_FULL_CONTEXT_RESERVED_SHIFT                          6
+	union gfs_tunnel_header_data tunnel_header_data /* VXLAN/GRE tunnel header union. */;
+	u16 bitfields6;
+/* Tunnel destination MAC indirection index. 0 is an invalid index. */
+#define ETH_RGFS_FULL_CONTEXT_FW_TUNNEL_DEST_MAC_IND_INDEX_MASK       0x1FF
+#define ETH_RGFS_FULL_CONTEXT_FW_TUNNEL_DEST_MAC_IND_INDEX_SHIFT      0
+#define ETH_RGFS_FULL_CONTEXT_RESERVED5_MASK                          0x7F
+#define ETH_RGFS_FULL_CONTEXT_RESERVED5_SHIFT                         9
+	u32 bitfields7;
+/* RSS HDR override - for queue/CID steering. Only one copy can use queue ID. Limited by 256. Bits
+ * 9-11 can be used by context.
+ */
+#define ETH_RGFS_FULL_CONTEXT_LDR_HEADER_REGION_MASK                  0xFF
+#define ETH_RGFS_FULL_CONTEXT_LDR_HEADER_REGION_SHIFT                 0
+#define ETH_RGFS_FULL_CONTEXT_RESERVED6_MASK                          0xF
+#define ETH_RGFS_FULL_CONTEXT_RESERVED6_SHIFT                         8
+/* RSS HDR override - For steering. Only one copy can use queue ID. Limited by 1. Bits 1-3 can be
+ * used by context. * Cannot Move *
+ */
+#define ETH_RGFS_FULL_CONTEXT_CALC_RSS_AND_IND_TBL_MASK               0xF
+#define ETH_RGFS_FULL_CONTEXT_CALC_RSS_AND_IND_TBL_SHIFT              12
+/* Tunnel IP DSCP Set value */
+#define ETH_RGFS_FULL_CONTEXT_FW_TUNNEL_IP_DSCP_MASK                  0x3F
+#define ETH_RGFS_FULL_CONTEXT_FW_TUNNEL_IP_DSCP_SHIFT                 16
+/* RSS HDR override - For steering. Only one copy can use queue ID. Limited by 511. Bit 9 can be
+ * used by context. * Cannot Move *
+ */
+#define ETH_RGFS_FULL_CONTEXT_DEFAULT_QUEUE_ID_MASK                   0x3FF
+#define ETH_RGFS_FULL_CONTEXT_DEFAULT_QUEUE_ID_SHIFT                  22
+};
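
The MASK/SHIFT pairs above describe fields packed into the plain u8/u16/u32 words of the
structure; callers are expected to go through the generic SET_FIELD()/GET_FIELD() bitfield
helpers used throughout the base driver rather than touch the words directly. A minimal sketch
of programming an RGFS context (the function and the push type value are illustrative, not
part of this series):

static void example_rgfs_ctx_setup(struct eth_rgfs_full_context *ctx)
{
	u16 qid;

	/* 3-bit push type at bits 3..5 of bitfields5 (enum eth_gfs_push_hdr_type) */
	SET_FIELD(ctx->bitfields5, ETH_RGFS_FULL_CONTEXT_FW_PUSH_TYPE, 2);

	/* 10-bit default queue ID at bits 22..31 of bitfields7 */
	SET_FIELD(ctx->bitfields7, ETH_RGFS_FULL_CONTEXT_DEFAULT_QUEUE_ID, 7);

	/* GET_FIELD shifts right first, then applies the mask */
	qid = (u16)GET_FIELD(ctx->bitfields7, ETH_RGFS_FULL_CONTEXT_DEFAULT_QUEUE_ID);
	(void)qid;
}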
+
+
+/*
+ * TGFS context Descriptor QREG
+ */
+struct eth_tgfs_full_context {
+	struct gfs_context_descriptor context_descriptor;
+/* Tunnel or single VLAN value, setting vlan value for push command, updating pri, vid or vlan value
+ * for the pri, vid and whole_tag update commands, respectively
+ */
+	u16 fw_first_hdr_vlan_val;
+	u16 bitfields0;
+/* The log2 factor to apply to the timestamp (must be the same for both TIDs in flow) */
+#define ETH_TGFS_FULL_CONTEXT_FW_COUNTERS_TIMESTAMP_LOG2_FACTOR_MASK  0x1F
+#define ETH_TGFS_FULL_CONTEXT_FW_COUNTERS_TIMESTAMP_LOG2_FACTOR_SHIFT 0
+/* First ETH header VLAN modification: NOP, POP, PUSH, change VID, change PRI, change whole tag.
+ * Replace basic block register 8 (use enum gfs_vlan_mode)
+ */
+#define ETH_TGFS_FULL_CONTEXT_FW_FIRST_HDR_VLAN_MODE_MASK             0x7
+#define ETH_TGFS_FULL_CONTEXT_FW_FIRST_HDR_VLAN_MODE_SHIFT            5
+#define ETH_TGFS_FULL_CONTEXT_FW_TID0_VALID_MASK                      0x1 /* TID 0 Valid */
+#define ETH_TGFS_FULL_CONTEXT_FW_TID0_VALID_SHIFT                     8
+#define ETH_TGFS_FULL_CONTEXT_FW_TID1_VALID_MASK                      0x1 /* TID 1 Valid */
+#define ETH_TGFS_FULL_CONTEXT_FW_TID1_VALID_SHIFT                     9
+#define ETH_TGFS_FULL_CONTEXT_RESERVED0_MASK                          0x3F
+#define ETH_TGFS_FULL_CONTEXT_RESERVED0_SHIFT                         10
+	u16 fw_command1_sample_len /* Command 2 sample length */;
+	u16 fw_command2_sample_len /* Command 3 sample length */;
+	u32 fw_tid0 /* Counter TID. */;
+	u32 fw_tid1 /* Counter TID. */;
+	u32 src_ip /* Inner IPv4 source IP or IPv6 indirection index. */;
+	u32 dst_ip /* Inner IPv4 destination IP or IPv6 indirection index. */;
+	u16 dst_port /* L4 destination port. */;
+	u16 src_port /* L4 source port. */;
+	u32 bitfields1;
+/* Inner source MAC indirection index. 0 is an invalid index. */
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_SRC_MAC_IND_INDEX_MASK         0x1FF
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_SRC_MAC_IND_INDEX_SHIFT        0
+/* Inner destination MAC indirection index. 0 is an invalid index. */
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_DEST_MAC_IND_INDEX_MASK        0x1FF
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_DEST_MAC_IND_INDEX_SHIFT       9
+/* Modify inner/single source IP address. */
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_SRC_IP_ADDR_MASK        0x1
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_SRC_IP_ADDR_SHIFT       18
+/* Modify inner/single destination IP address. */
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_DEST_IP_ADDR_MASK       0x1
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_DEST_IP_ADDR_SHIFT      19
+/* Modify inner/single L4 header source port */
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_L4_SRC_PORT_MASK        0x1
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_L4_SRC_PORT_SHIFT       20
+/* Modify inner/single L4 header destination port */
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_L4_DEST_PORT_MASK       0x1
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_L4_DEST_PORT_SHIFT      21
+/* Modify MACs in inner ETH header */
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_MAC_MASK                0x1
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_MAC_SHIFT               22
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_IP_DSCP_MASK            0x1 /* Modify inner IP DSCP */
+#define ETH_TGFS_FULL_CONTEXT_FW_MODIFY_INNER_IP_DSCP_SHIFT           23
+/* Modify inner IP TTL or Hop Limit */
+#define ETH_TGFS_FULL_CONTEXT_FW_SET_INNER_IP_TTL_HOP_LIMIT_MASK      0x1
+#define ETH_TGFS_FULL_CONTEXT_FW_SET_INNER_IP_TTL_HOP_LIMIT_SHIFT     24
+/* Decrement inner IP TTL or Hop Limit */
+#define ETH_TGFS_FULL_CONTEXT_FW_DEC_INNER_IP_TTL_HOP_LIMIT_MASK      0x1
+#define ETH_TGFS_FULL_CONTEXT_FW_DEC_INNER_IP_TTL_HOP_LIMIT_SHIFT     25
+#define ETH_TGFS_FULL_CONTEXT_RESERVED1_MASK                          0x1
+#define ETH_TGFS_FULL_CONTEXT_RESERVED1_SHIFT                         26
+/* Original flag set for some copy. */
+#define ETH_TGFS_FULL_CONTEXT_FW_ORIGINAL_COPY_FLG_MASK               0x1
+#define ETH_TGFS_FULL_CONTEXT_FW_ORIGINAL_COPY_FLG_SHIFT              27
+/* Set checksum offload due to modified fields. FP must verify if checksum exists. bit 0 - ip,
+ * bit 1 - l4, bit 2 - tunn_ip, bit 3 - tunn_l4
+ */
+#define ETH_TGFS_FULL_CONTEXT_FW_SET_CHECKSUM_OFFLOAD_MASK            0xF
+#define ETH_TGFS_FULL_CONTEXT_FW_SET_CHECKSUM_OFFLOAD_SHIFT           28
+	u32 bitfields2;
+/* First ETH header destination MAC indirection index. 0 for unchanged. Replace basic block register
+ * 8.
+ */
+#define ETH_TGFS_FULL_CONTEXT_FW_FIRST_HDR_DEST_MAC_IND_INDEX_MASK    0x1FF
+#define ETH_TGFS_FULL_CONTEXT_FW_FIRST_HDR_DEST_MAC_IND_INDEX_SHIFT   0
+/* First ETH header source MAC indirection index. 0 for unchanged. Replace basic block register 8 */
+#define ETH_TGFS_FULL_CONTEXT_FW_FIRST_HDR_SRC_MAC_IND_INDEX_MASK     0x1FF
+#define ETH_TGFS_FULL_CONTEXT_FW_FIRST_HDR_SRC_MAC_IND_INDEX_SHIFT    9
+/* Reserved for last bit in searcher database (use enum eth_gfs_pop_hdr_type) */
+#define ETH_TGFS_FULL_CONTEXT_FW_POP_TYPE_MASK                        0x3
+#define ETH_TGFS_FULL_CONTEXT_FW_POP_TYPE_SHIFT                       18
+#define ETH_TGFS_FULL_CONTEXT_RESERVED2_MASK                          0x1
+#define ETH_TGFS_FULL_CONTEXT_RESERVED2_SHIFT                         20
+/* Inner ETH header VLAN modification: NOP, change VID, change PRI, change whole tag, POP, PUSH.
+ * Replace basic block register 8 (use enum gfs_vlan_mode)
+ */
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_HDR_VLAN_MODE_MASK             0x7
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_HDR_VLAN_MODE_SHIFT            21
+/* inner IP TTL or Hop Limit Set value. Replace basic block register 8. */
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_IP_TTL_HOP_LIMIT_MASK          0xFF
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_IP_TTL_HOP_LIMIT_SHIFT         24
+	u16 bitfields3;
+/* Push headers type. (use enum eth_gfs_push_hdr_type) */
+#define ETH_TGFS_FULL_CONTEXT_FW_PUSH_TYPE_MASK                       0x7
+#define ETH_TGFS_FULL_CONTEXT_FW_PUSH_TYPE_SHIFT                      0
+/* inner IP DSCP Set value */
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_IP_DSCP_MASK                   0x3F
+#define ETH_TGFS_FULL_CONTEXT_FW_INNER_IP_DSCP_SHIFT                  3
+#define ETH_TGFS_FULL_CONTEXT_RESERVED3_MASK                          0x7F
+#define ETH_TGFS_FULL_CONTEXT_RESERVED3_SHIFT                         9
+	u16 fw_inner_vlan_val /* Inner VLAN value */;
+	u32 flow_mark /* Filter marker. Pass to Rx if destination is RX_VPORT */;
+	u8 bitfields4;
+/* Tunnel IP header flags for PUSH flow: 2 LSBits of tos field. */
+#define ETH_TGFS_FULL_CONTEXT_FW_TUNNEL_IP_ECN_MASK                   0x3
+#define ETH_TGFS_FULL_CONTEXT_FW_TUNNEL_IP_ECN_SHIFT                  0
+/* Tunnel IP DSCP Set value */
+#define ETH_TGFS_FULL_CONTEXT_FW_TUNNEL_IP_DSCP_MASK                  0x3F
+#define ETH_TGFS_FULL_CONTEXT_FW_TUNNEL_IP_DSCP_SHIFT                 2
+	u8 fw_tunn_ip_ttl_hop_limit /* Tunnel IP TTL or Hop Limit Set value */;
+	u16 reserved4;
+	union gfs_tunnel_header_data tunnel_header_data /* VXLAN/GRE tunnel header union. */;
+/* Tunnel IP tunnel header flag union. */
+	union gfs_tunnel_header_flags tunnel_ip_header_flags;
+/* HDR Modify   tunnel IPv4 destination IP or IPv6 indirection index. */
+	u32 fw_tunn_dest_ip;
+	u32 fw_tunn_src_ip /* HDR Modify   tunnel IPv4 source IP or IPv6 indirection index. */;
+	u16 reserved5;
+	u8 event_id /* This field is used to replace part of event ID in MCM header. */;
+	u8 reserved6;
+	u32 fw_flow_id /* Flow Id. Replace basic block register 9. */;
+	u32 reserved7[2];
+};
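
One field worth calling out: FW_SET_CHECKSUM_OFFLOAD in bitfields1 is a 4-bit bitmap rather
than an enum; per its comment, bit 0 requests an inner IP checksum recompute, bit 1 inner L4,
bit 2 tunnel IP and bit 3 tunnel L4. A hedged sketch of composing it after an inner 4-tuple
rewrite (the EX_CSUM_* constants and the function are illustrative only):

#define EX_CSUM_IP      (1 << 0) /* inner/single IP checksum */
#define EX_CSUM_L4      (1 << 1) /* inner/single L4 checksum */
#define EX_CSUM_TUNN_IP (1 << 2) /* tunnel IP checksum */
#define EX_CSUM_TUNN_L4 (1 << 3) /* tunnel L4 checksum */

static void example_request_inner_csums(struct eth_tgfs_full_context *ctx)
{
	/* Inner addresses/ports were modified, so FW must recompute both
	 * inner checksums (if present in the packet).
	 */
	SET_FIELD(ctx->bitfields1, ETH_TGFS_FULL_CONTEXT_FW_SET_CHECKSUM_OFFLOAD,
		  EX_CSUM_IP | EX_CSUM_L4);
}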
+
+
+
+
+
+
+
+
+
+
+
+
+
+/*
+ * GFS Context VLAN mode enum
+ */
+enum gfs_vlan_mode {
+	e_vlan_mode_nop,
+	e_vlan_mode_change_vid,
+	e_vlan_mode_change_pri,
+	e_vlan_mode_change_whole_tag,
+/* In order to be applicable for fw_inner_vlan_modify_flg as well, pop and push are placed at the
+ * end so as to allow nop..change_whole_tag to consume exactly 2 bits
+ */
+	e_vlan_mode_pop,
+/* In order to be applicable for fw_inner_vlan_modify_flg as well, pop and push are placed at the
+ * end so as to allow nop..change_whole_tag to consume exactly 2 bits
+ */
+	e_vlan_mode_push,
+	MAX_GFS_VLAN_MODE
+};
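
The ordering constraint spelled out in the comments above can be pinned down with a build-time
check: the first four modes must encode in exactly 2 bits, while pop and push may spill into
the third bit of the 3-bit *_VLAN_MODE fields. A hypothetical C11 assertion, not present in
the original headers:

_Static_assert(e_vlan_mode_change_whole_tag <= 0x3,
	       "nop..change_whole_tag must fit in 2 bits");
_Static_assert(e_vlan_mode_push <= 0x7,
	       "all VLAN modes must fit in the 3-bit VLAN_MODE fields");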
+
+
+/*
+ * structure of the tgfs fw hint
+ */
+struct tgfs_fw_hint {
+	u8 bitfields;
+#define TGFS_FW_HINT_FIRST_DUP_FLG_MASK  0x1 /* set if first duplicate */
+#define TGFS_FW_HINT_FIRST_DUP_FLG_SHIFT 0
+/* specify where the pkt is destined for (use enum gfs_res_struct_dst_type) */
+#define TGFS_FW_HINT_DEST_TYPE_MASK      0x3
+#define TGFS_FW_HINT_DEST_TYPE_SHIFT     1
+#define TGFS_FW_HINT_RESERVED_MASK       0x1F
+#define TGFS_FW_HINT_RESERVED_SHIFT      3
+};
+
+
+/*
+ * GFS Context Command 0 enum
+ */
+enum gfs_cntx_cmd0 {
+	e_cntx_cmd0_do_always /* do always */,
+	e_cntx_cmd0_if_red_not_met /* if redirection condition is not met */,
+	MAX_GFS_CNTX_CMD0
+};
+
+
+/*
+ * GFS Context Command 1 enum
+ */
+enum gfs_cntx_cmd1 {
+	e_cntx_cmd1_disable /* disabled */,
+	e_cntx_cmd1_do_always /* do always */,
+	e_cntx_cmd1_do_if_red_met /* do if redirection condition is met */,
+	e_cntx_cmd1_do_if_samp_met /* do if sampling condition is met */,
+	MAX_GFS_CNTX_CMD1
+};
+
+
+/*
+ * GFS Context Command 2 enum
+ */
+enum gfs_cntx_cmd2 {
+	e_cntx_cmd2_disable /* disabled */,
+	e_cntx_cmd2_cpy_single_vport_always /* copy to a single Vport always */,
+	e_cntx_cmd2_cpy_single_vport_cpy_met /* copy to a single Vport if copy condition is met */,
+	e_cntx_cmd2_cpy_mul_vport_always /* copy to multiple Vports always */,
+	e_cntx_cmd2_cpy_mul_vport_cpy_met /* copy to multiple Vports if copy condition is met */,
+	e_cntx_cmd2_do_if_samp_met /* do if sampling condition is met */,
+	MAX_GFS_CNTX_CMD2
+};
+
+
+
+/*
+ * GFS context type
+ */
+enum gfs_context_type {
+	e_gfs_cntx_lead /* leading context type */,
+	e_gfs_cntx_accum /* accumulative context type */,
+	e_gfs_cntx_lead_and_accum /* leading and accumulative context type */,
+	MAX_GFS_CONTEXT_TYPE
+};
+
+
+enum gfs_res_struct_dst_type {
+	e_gfs_dst_type_regular /* Perform regular TX-Switching classification */,
+	e_gfs_dst_type_rx_vport /* Redirect to RX Vport */,
+/* TX-Switching is bypassed; the TX destination is written in vport field in resolution struct */
+	e_gfs_dst_type_tx_sw_bypass,
+	MAX_GFS_RES_STRUCT_DST_TYPE
+};
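
These definitions work together: the DEST_TYPE field of struct tgfs_fw_hint above carries a
gfs_res_struct_dst_type value. A minimal decode sketch, assuming the standard GET_FIELD()
helper (the function name is illustrative):

static enum gfs_res_struct_dst_type
example_hint_dst_type(const struct tgfs_fw_hint *hint)
{
	return (enum gfs_res_struct_dst_type)
		GET_FIELD(hint->bitfields, TGFS_FW_HINT_DEST_TYPE);
}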
+
+
+
 /*
  * BRB RAM init requirements
  */
@@ -34,12 +698,12 @@ struct init_brb_ram_req {
  * ETS per-TC init requirements
  */
 struct init_ets_tc_req {
-/* if set, this TC participates in the arbitration with a strict priority
- * (the priority is equal to the TC ID)
+/* if set, this TC participates in the arbitration with a strict priority (the priority is equal to
+ * the TC ID)
  */
 	u8 use_sp;
-/* if set, this TC participates in the arbitration with a WFQ weight
- * (indicated by the weight field)
+/* if set, this TC participates in the arbitration with a WFQ weight (indicated by the weight field)
+ *
  */
 	u8 use_wfq;
 	u16 weight /* An arbitration weight. Valid only if use_wfq is set. */;
@@ -50,8 +714,7 @@ struct init_ets_tc_req {
  */
 struct init_ets_req {
 	u32 mtu /* Max packet size (in bytes) */;
-/* ETS initialization requirements per TC. */
-	struct init_ets_tc_req tc_req[NUM_OF_TCS];
+	struct init_ets_tc_req tc_req[NUM_OF_TCS] /* ETS initialization requirements per TC. */;
 };
 
 
@@ -62,8 +725,7 @@ struct init_ets_req {
 struct init_nig_lb_rl_req {
 /* Global MAC+LB RL rate (in Mbps). If set to 0, the RL will be disabled. */
 	u16 lb_mac_rate;
-/* Global LB RL rate (in Mbps). If set to 0, the RL will be disabled. */
-	u16 lb_rate;
+	u16 lb_rate /* Global LB RL rate (in Mbps). If set to 0, the RL will be disabled. */;
 	u32 mtu /* Max packet size (in bytes) */;
 /* RL rate per physical TC (in Mbps). If set to 0, the RL will be disabled. */
 	u16 tc_rate[NUM_OF_PHYS_TCS];
@@ -91,9 +753,7 @@ struct init_nig_pri_tc_map_req {
  * QM per global RL init parameters
  */
 struct init_qm_global_rl_params {
-/* Rate limit in Mb/sec units. If set to zero, the link speed is uwsed
- * instead.
- */
+/* Rate limit in Mb/sec units. If set to zero, the link speed is used instead. */
 	u32 rate_limit;
 };
 
@@ -103,25 +763,24 @@ struct init_qm_global_rl_params {
  */
 struct init_qm_port_params {
 	u8 active /* Indicates if this port is active */;
-/* Vector of valid bits for active TCs used by this port */
-	u8 active_phys_tcs;
-/* number of PBF command lines that can be used by this port */
+	u8 active_phys_tcs /* Vector of valid bits for active TCs used by this port */;
+/* Number of PBF command lines that can be used by this port. In E4 each line is 256b, and in E5
+ * each line is 512b.
+ */
 	u16 num_pbf_cmd_lines;
-/* number of BTB blocks that can be used by this port */
-	u16 num_btb_blocks;
+	u16 num_btb_blocks /* Number of BTB blocks that can be used by this port */;
 	u16 reserved;
 };
 
 
 /*
- * QM per-PQ init parameters
+ * QM per PQ init parameters
  */
 struct init_qm_pq_params {
 	u8 vport_id /* VPORT ID */;
 	u8 tc_id /* TC ID */;
 	u8 wrr_group /* WRR group */;
-/* Indicates if a rate limiter should be allocated for the PQ (0/1) */
-	u8 rl_valid;
+	u8 rl_valid /* Indicates if a rate limiter should be allocated for the PQ (0/1) */;
 	u16 rl_id /* RL ID, valid only if rl_valid is true */;
 	u8 port_id /* Port ID */;
 	u8 reserved;
@@ -132,9 +791,7 @@ struct init_qm_pq_params {
  * QM per VPORT init parameters
  */
 struct init_qm_vport_params {
-/* WFQ weight. A value of 0 means dont configure. ignored if VPORT WFQ is
- * globally disabled.
- */
+/* WFQ weight. A value of 0 means don't configure. Ignored if VPORT WFQ is globally disabled. */
 	u16 wfq;
 /* the first Tx PQ ID associated with this VPORT for each TC. */
 	u16 first_tx_pq_id[NUM_OF_TCS];
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index 4f878d061..bdbf54730 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_HSI_INIT_TOOL__
 #define __ECORE_HSI_INIT_TOOL__
 /**************************************/
@@ -20,19 +20,87 @@
 /* Max size in dwords of a zipped array */
 #define MAX_ZIPPED_SIZE			8192
 
+
 enum chip_ids {
 	CHIP_BB,
 	CHIP_K2,
+	CHIP_E5,
 	MAX_CHIP_IDS
 };
 
 
+enum dbg_bus_clients {
+	DBG_BUS_CLIENT_RBCN,
+	DBG_BUS_CLIENT_RBCP,
+	DBG_BUS_CLIENT_RBCR,
+	DBG_BUS_CLIENT_RBCT,
+	DBG_BUS_CLIENT_RBCU,
+	DBG_BUS_CLIENT_RBCF,
+	DBG_BUS_CLIENT_RBCX,
+	DBG_BUS_CLIENT_RBCS,
+	DBG_BUS_CLIENT_RBCH,
+	DBG_BUS_CLIENT_RBCZ,
+	DBG_BUS_CLIENT_OTHER_ENGINE,
+	DBG_BUS_CLIENT_TIMESTAMP,
+	DBG_BUS_CLIENT_CPU,
+	DBG_BUS_CLIENT_RBCY,
+	DBG_BUS_CLIENT_RBCQ,
+	DBG_BUS_CLIENT_RBCM,
+	DBG_BUS_CLIENT_RBCB,
+	DBG_BUS_CLIENT_RBCW,
+	DBG_BUS_CLIENT_RBCV,
+	MAX_DBG_BUS_CLIENTS
+};
+
+
+enum init_modes {
+	MODE_BB_A0_DEPRECATED,
+	MODE_BB,
+	MODE_K2,
+	MODE_ASIC,
+	MODE_EMUL_REDUCED,
+	MODE_EMUL_FULL,
+	MODE_FPGA,
+	MODE_CHIPSIM,
+	MODE_SF,
+	MODE_MF_SD,
+	MODE_MF_SI,
+	MODE_PORTS_PER_ENG_1,
+	MODE_PORTS_PER_ENG_2,
+	MODE_PORTS_PER_ENG_4,
+	MODE_100G,
+	MODE_E5,
+	MODE_SKIP_PRAM_INIT,
+	MODE_EMUL_MAC,
+	MAX_INIT_MODES
+};
+
+
+enum init_phases {
+	PHASE_ENGINE,
+	PHASE_PORT,
+	PHASE_PF,
+	PHASE_VF,
+	PHASE_QM_PF,
+	MAX_INIT_PHASES
+};
+
+
+enum init_split_types {
+	SPLIT_TYPE_NONE,
+	SPLIT_TYPE_PORT,
+	SPLIT_TYPE_PF,
+	SPLIT_TYPE_PORT_PF,
+	SPLIT_TYPE_VF,
+	MAX_INIT_SPLIT_TYPES
+};
+
+
 /*
  * Binary buffer header
  */
 struct bin_buffer_hdr {
-/* buffer offset in bytes from the beginning of the binary file */
-	u32 offset;
+	u32 offset /* buffer offset in bytes from the beginning of the binary file */;
 	u32 length /* buffer length in bytes */;
 };
 
@@ -46,7 +114,8 @@ enum bin_init_buffer_type {
 	BIN_BUF_INIT_VAL /* init data */,
 	BIN_BUF_INIT_MODE_TREE /* init modes tree */,
 	BIN_BUF_INIT_IRO /* internal RAM offsets */,
-	BIN_BUF_INIT_OVERLAYS /* FW overlays (except overlay 0) */,
+	BIN_BUF_INIT_OVERLAYS_E4 /* E4 FW overlays (except overlay 0) */,
+	BIN_BUF_INIT_OVERLAYS_E5 /* E5 FW overlays (except overlay 0) */,
 	MAX_BIN_INIT_BUFFER_TYPE
 };
 
@@ -58,8 +127,7 @@ struct fw_overlay_buf_hdr {
 	u32 data;
 #define FW_OVERLAY_BUF_HDR_STORM_ID_MASK  0xFF /* Storm ID */
 #define FW_OVERLAY_BUF_HDR_STORM_ID_SHIFT 0
-/* Size of Storm FW overlay buffer in dwords */
-#define FW_OVERLAY_BUF_HDR_BUF_SIZE_MASK  0xFFFFFF
+#define FW_OVERLAY_BUF_HDR_BUF_SIZE_MASK  0xFFFFFF /* Size of Storm FW overlay buffer in dwords */
 #define FW_OVERLAY_BUF_HDR_BUF_SIZE_SHIFT 8
 };
 
@@ -69,11 +137,9 @@ struct fw_overlay_buf_hdr {
  */
 struct init_array_raw_hdr {
 	u32 data;
-/* Init array type, from init_array_types enum */
-#define INIT_ARRAY_RAW_HDR_TYPE_MASK    0xF
+#define INIT_ARRAY_RAW_HDR_TYPE_MASK    0xF /* Init array type, from init_array_types enum */
 #define INIT_ARRAY_RAW_HDR_TYPE_SHIFT   0
-/* init array params */
-#define INIT_ARRAY_RAW_HDR_PARAMS_MASK  0xFFFFFFF
+#define INIT_ARRAY_RAW_HDR_PARAMS_MASK  0xFFFFFFF /* init array params */
 #define INIT_ARRAY_RAW_HDR_PARAMS_SHIFT 4
 };
 
@@ -82,11 +148,9 @@ struct init_array_raw_hdr {
  */
 struct init_array_standard_hdr {
 	u32 data;
-/* Init array type, from init_array_types enum */
-#define INIT_ARRAY_STANDARD_HDR_TYPE_MASK  0xF
+#define INIT_ARRAY_STANDARD_HDR_TYPE_MASK  0xF /* Init array type, from init_array_types enum */
 #define INIT_ARRAY_STANDARD_HDR_TYPE_SHIFT 0
-/* Init array size (in dwords) */
-#define INIT_ARRAY_STANDARD_HDR_SIZE_MASK  0xFFFFFFF
+#define INIT_ARRAY_STANDARD_HDR_SIZE_MASK  0xFFFFFFF /* Init array size (in dwords) */
 #define INIT_ARRAY_STANDARD_HDR_SIZE_SHIFT 4
 };
 
@@ -98,8 +162,7 @@ struct init_array_zipped_hdr {
 /* Init array type, from init_array_types enum */
 #define INIT_ARRAY_ZIPPED_HDR_TYPE_MASK         0xF
 #define INIT_ARRAY_ZIPPED_HDR_TYPE_SHIFT        0
-/* Init array zipped size (in bytes) */
-#define INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE_MASK  0xFFFFFFF
+#define INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE_MASK  0xFFFFFFF /* Init array zipped size (in bytes) */
 #define INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE_SHIFT 4
 };
 
@@ -111,11 +174,9 @@ struct init_array_pattern_hdr {
 /* Init array type, from init_array_types enum */
 #define INIT_ARRAY_PATTERN_HDR_TYPE_MASK          0xF
 #define INIT_ARRAY_PATTERN_HDR_TYPE_SHIFT         0
-/* pattern size in dword */
-#define INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE_MASK  0xF
+#define INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE_MASK  0xF /* pattern size in dword */
 #define INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE_SHIFT 4
-/* pattern repetitions */
-#define INIT_ARRAY_PATTERN_HDR_REPETITIONS_MASK   0xFFFFFF
+#define INIT_ARRAY_PATTERN_HDR_REPETITIONS_MASK   0xFFFFFF /* pattern repetitions */
 #define INIT_ARRAY_PATTERN_HDR_REPETITIONS_SHIFT  8
 };
 
@@ -124,77 +185,13 @@ struct init_array_pattern_hdr {
  */
 union init_array_hdr {
 	struct init_array_raw_hdr raw /* raw init array header */;
-/* standard init array header */
-	struct init_array_standard_hdr standard;
+	struct init_array_standard_hdr standard /* standard init array header */;
 	struct init_array_zipped_hdr zipped /* zipped init array header */;
 	struct init_array_pattern_hdr pattern /* pattern init array header */;
 };
 
 
-enum dbg_bus_clients {
-	DBG_BUS_CLIENT_RBCN,
-	DBG_BUS_CLIENT_RBCP,
-	DBG_BUS_CLIENT_RBCR,
-	DBG_BUS_CLIENT_RBCT,
-	DBG_BUS_CLIENT_RBCU,
-	DBG_BUS_CLIENT_RBCF,
-	DBG_BUS_CLIENT_RBCX,
-	DBG_BUS_CLIENT_RBCS,
-	DBG_BUS_CLIENT_RBCH,
-	DBG_BUS_CLIENT_RBCZ,
-	DBG_BUS_CLIENT_OTHER_ENGINE,
-	DBG_BUS_CLIENT_TIMESTAMP,
-	DBG_BUS_CLIENT_CPU,
-	DBG_BUS_CLIENT_RBCY,
-	DBG_BUS_CLIENT_RBCQ,
-	DBG_BUS_CLIENT_RBCM,
-	DBG_BUS_CLIENT_RBCB,
-	DBG_BUS_CLIENT_RBCW,
-	DBG_BUS_CLIENT_RBCV,
-	MAX_DBG_BUS_CLIENTS
-};
-
-
-enum init_modes {
-	MODE_BB_A0_DEPRECATED,
-	MODE_BB,
-	MODE_K2,
-	MODE_ASIC,
-	MODE_EMUL_REDUCED,
-	MODE_EMUL_FULL,
-	MODE_FPGA,
-	MODE_CHIPSIM,
-	MODE_SF,
-	MODE_MF_SD,
-	MODE_MF_SI,
-	MODE_PORTS_PER_ENG_1,
-	MODE_PORTS_PER_ENG_2,
-	MODE_PORTS_PER_ENG_4,
-	MODE_100G,
-	MODE_SKIP_PRAM_INIT,
-	MODE_EMUL_MAC,
-	MAX_INIT_MODES
-};
-
-
-enum init_phases {
-	PHASE_ENGINE,
-	PHASE_PORT,
-	PHASE_PF,
-	PHASE_VF,
-	PHASE_QM_PF,
-	MAX_INIT_PHASES
-};
-
 
-enum init_split_types {
-	SPLIT_TYPE_NONE,
-	SPLIT_TYPE_PORT,
-	SPLIT_TYPE_PF,
-	SPLIT_TYPE_PORT_PF,
-	SPLIT_TYPE_VF,
-	MAX_INIT_SPLIT_TYPES
-};
 
 
 /*
@@ -214,8 +211,7 @@ enum init_array_types {
  */
 struct init_callback_op {
 	u32 op_data;
-/* Init operation, from init_op_types enum */
-#define INIT_CALLBACK_OP_OP_MASK        0xF
+#define INIT_CALLBACK_OP_OP_MASK        0xF /* Init operation, from init_op_types enum */
 #define INIT_CALLBACK_OP_OP_SHIFT       0
 #define INIT_CALLBACK_OP_RESERVED_MASK  0xFFFFFFF
 #define INIT_CALLBACK_OP_RESERVED_SHIFT 4
@@ -229,12 +225,11 @@ struct init_callback_op {
  */
 struct init_delay_op {
 	u32 op_data;
-/* Init operation, from init_op_types enum */
-#define INIT_DELAY_OP_OP_MASK        0xF
+#define INIT_DELAY_OP_OP_MASK        0xF /* Init operation, from init_op_types enum */
 #define INIT_DELAY_OP_OP_SHIFT       0
 #define INIT_DELAY_OP_RESERVED_MASK  0xFFFFFFF
 #define INIT_DELAY_OP_RESERVED_SHIFT 4
-	__le32 delay /* delay in us */;
+	u32 delay /* delay in us */;
 };
 
 
@@ -243,13 +238,11 @@ struct init_delay_op {
  */
 struct init_if_mode_op {
 	u32 op_data;
-/* Init operation, from init_op_types enum */
-#define INIT_IF_MODE_OP_OP_MASK          0xF
+#define INIT_IF_MODE_OP_OP_MASK          0xF /* Init operation, from init_op_types enum */
 #define INIT_IF_MODE_OP_OP_SHIFT         0
 #define INIT_IF_MODE_OP_RESERVED1_MASK   0xFFF
 #define INIT_IF_MODE_OP_RESERVED1_SHIFT  4
-/* Commands to skip if the modes dont match */
-#define INIT_IF_MODE_OP_CMD_OFFSET_MASK  0xFFFF
+#define INIT_IF_MODE_OP_CMD_OFFSET_MASK  0xFFFF /* Commands to skip if the modes don't match */
 #define INIT_IF_MODE_OP_CMD_OFFSET_SHIFT 16
 	u16 reserved2;
 	u16 modes_buf_offset /* offset (in bytes) in modes expression buffer */;
@@ -261,24 +254,19 @@ struct init_if_mode_op {
  */
 struct init_if_phase_op {
 	u32 op_data;
-/* Init operation, from init_op_types enum */
-#define INIT_IF_PHASE_OP_OP_MASK           0xF
-#define INIT_IF_PHASE_OP_OP_SHIFT          0
-/* Indicates if DMAE is enabled in this phase */
-#define INIT_IF_PHASE_OP_DMAE_ENABLE_MASK  0x1
-#define INIT_IF_PHASE_OP_DMAE_ENABLE_SHIFT 4
-#define INIT_IF_PHASE_OP_RESERVED1_MASK    0x7FF
-#define INIT_IF_PHASE_OP_RESERVED1_SHIFT   5
-/* Commands to skip if the phases dont match */
-#define INIT_IF_PHASE_OP_CMD_OFFSET_MASK   0xFFFF
-#define INIT_IF_PHASE_OP_CMD_OFFSET_SHIFT  16
+#define INIT_IF_PHASE_OP_OP_MASK          0xF /* Init operation, from init_op_types enum */
+#define INIT_IF_PHASE_OP_OP_SHIFT         0
+#define INIT_IF_PHASE_OP_RESERVED1_MASK   0xFFF
+#define INIT_IF_PHASE_OP_RESERVED1_SHIFT  4
+#define INIT_IF_PHASE_OP_CMD_OFFSET_MASK  0xFFFF /* Commands to skip if the phases don't match */
+#define INIT_IF_PHASE_OP_CMD_OFFSET_SHIFT 16
 	u32 phase_data;
-#define INIT_IF_PHASE_OP_PHASE_MASK        0xFF /* Init phase */
-#define INIT_IF_PHASE_OP_PHASE_SHIFT       0
-#define INIT_IF_PHASE_OP_RESERVED2_MASK    0xFF
-#define INIT_IF_PHASE_OP_RESERVED2_SHIFT   8
-#define INIT_IF_PHASE_OP_PHASE_ID_MASK     0xFFFF /* Init phase ID */
-#define INIT_IF_PHASE_OP_PHASE_ID_SHIFT    16
+#define INIT_IF_PHASE_OP_PHASE_MASK       0xFF /* Init phase */
+#define INIT_IF_PHASE_OP_PHASE_SHIFT      0
+#define INIT_IF_PHASE_OP_RESERVED2_MASK   0xFF
+#define INIT_IF_PHASE_OP_RESERVED2_SHIFT  8
+#define INIT_IF_PHASE_OP_PHASE_ID_MASK    0xFFFF /* Init phase ID */
+#define INIT_IF_PHASE_OP_PHASE_ID_SHIFT   16
 };
 
 
@@ -298,8 +286,7 @@ enum init_mode_ops {
  */
 struct init_raw_op {
 	u32 op_data;
-/* Init operation, from init_op_types enum */
-#define INIT_RAW_OP_OP_MASK      0xF
+#define INIT_RAW_OP_OP_MASK      0xF /* Init operation, from init_op_types enum */
 #define INIT_RAW_OP_OP_SHIFT     0
 #define INIT_RAW_OP_PARAM1_MASK  0xFFFFFFF /* init param 1 */
 #define INIT_RAW_OP_PARAM1_SHIFT 4
@@ -318,12 +305,9 @@ struct init_op_array_params {
  * Write init operation arguments
  */
 union init_write_args {
-/* value to write, used when init source is INIT_SRC_INLINE */
-	u32 inline_val;
-/* number of zeros to write, used when init source is INIT_SRC_ZEROS */
-	u32 zeros_count;
-/* array offset to write, used when init source is INIT_SRC_ARRAY */
-	u32 array_offset;
+	u32 inline_val /* value to write, used when init source is INIT_SRC_INLINE */;
+	u32 zeros_count /* number of zeros to write, used when init source is INIT_SRC_ZEROS */;
+	u32 array_offset /* array offset to write, used when init source is INIT_SRC_ARRAY */;
 /* runtime array params to write, used when init source is INIT_SRC_RUNTIME */
 	struct init_op_array_params runtime;
 };
@@ -333,19 +317,15 @@ union init_write_args {
  */
 struct init_write_op {
 	u32 data;
-/* init operation, from init_op_types enum */
-#define INIT_WRITE_OP_OP_MASK        0xF
+#define INIT_WRITE_OP_OP_MASK        0xF /* init operation, from init_op_types enum */
 #define INIT_WRITE_OP_OP_SHIFT       0
-/* init source type, taken from init_source_types enum */
-#define INIT_WRITE_OP_SOURCE_MASK    0x7
+#define INIT_WRITE_OP_SOURCE_MASK    0x7 /* init source type, taken from init_source_types enum */
 #define INIT_WRITE_OP_SOURCE_SHIFT   4
 #define INIT_WRITE_OP_RESERVED_MASK  0x1
 #define INIT_WRITE_OP_RESERVED_SHIFT 7
-/* indicates if the register is wide-bus */
-#define INIT_WRITE_OP_WIDE_BUS_MASK  0x1
+#define INIT_WRITE_OP_WIDE_BUS_MASK  0x1 /* indicates if the register is wide-bus */
 #define INIT_WRITE_OP_WIDE_BUS_SHIFT 8
-/* internal (absolute) GRC address, in dwords */
-#define INIT_WRITE_OP_ADDRESS_MASK   0x7FFFFF
+#define INIT_WRITE_OP_ADDRESS_MASK   0x7FFFFF /* internal (absolute) GRC address, in dwords */
 #define INIT_WRITE_OP_ADDRESS_SHIFT  9
 	union init_write_args args /* Write init operation arguments */;
 };
@@ -355,19 +335,15 @@ struct init_write_op {
  */
 struct init_read_op {
 	u32 op_data;
-/* init operation, from init_op_types enum */
-#define INIT_READ_OP_OP_MASK         0xF
+#define INIT_READ_OP_OP_MASK         0xF /* init operation, from init_op_types enum */
 #define INIT_READ_OP_OP_SHIFT        0
-/* polling type, from init_poll_types enum */
-#define INIT_READ_OP_POLL_TYPE_MASK  0xF
+#define INIT_READ_OP_POLL_TYPE_MASK  0xF /* polling type, from init_poll_types enum */
 #define INIT_READ_OP_POLL_TYPE_SHIFT 4
 #define INIT_READ_OP_RESERVED_MASK   0x1
 #define INIT_READ_OP_RESERVED_SHIFT  8
-/* internal (absolute) GRC address, in dwords */
-#define INIT_READ_OP_ADDRESS_MASK    0x7FFFFF
+#define INIT_READ_OP_ADDRESS_MASK    0x7FFFFF /* internal (absolute) GRC address, in dwords */
 #define INIT_READ_OP_ADDRESS_SHIFT   9
-/* expected polling value, used only when polling is done */
-	u32 expected_val;
+	u32 expected_val /* expected polling value, used only when polling is done */;
 };
 
 /*
@@ -391,10 +367,8 @@ union init_op {
 enum init_op_types {
 	INIT_OP_READ /* GRC read init command */,
 	INIT_OP_WRITE /* GRC write init command */,
-/* Skip init commands if the init modes expression doesn't match */
-	INIT_OP_IF_MODE,
-/* Skip init commands if the init phase doesn't match */
-	INIT_OP_IF_PHASE,
+	INIT_OP_IF_MODE /* Skip init commands if the init modes expression doesn't match */,
+	INIT_OP_IF_PHASE /* Skip init commands if the init phase doesn't match */,
 	INIT_OP_DELAY /* delay init command */,
 	INIT_OP_CALLBACK /* callback init command */,
 	MAX_INIT_OP_TYPES
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 627620baf..1b751d837 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -13,15 +13,28 @@
 #include "ecore_hsi_init_tool.h"
 #include "ecore_iro.h"
 #include "ecore_init_fw_funcs.h"
-static u16 con_region_offsets[3][NUM_OF_CONNECTION_TYPES] = {
+static u16 con_region_offsets_e4[3][NUM_OF_CONNECTION_TYPES_E4] = {
 	{ 400,  336,  352,  368,  304,  384,  416,  352}, /* region 3 offsets */
 	{ 528,  496,  416,  512,  448,  512,  544,  480}, /* region 4 offsets */
 	{ 608,  544,  496,  576,  576,  592,  624,  560}  /* region 5 offsets */
 };
-static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
+static u16 task_region_offsets_e4[1][NUM_OF_CONNECTION_TYPES_E4] = {
 	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
 
+static u16 con_region_offsets_e5[3][NUM_OF_CONNECTION_TYPES_E5] = {
+/* region 3 offsets */
+	{ 400,  352,  384,  384,  352,  384,  416,  352,    0,    0,    0,    0,    0,    0, 0, 0},
+/* region 4 offsets */
+	{ 528,  512,  512,  544,  512,  512,  544,  480,    0,    0,    0,    0,    0,    0, 0, 0},
+/* region 5 offsets */
+	{ 608,  560,  592,  608,  640,  592,  624,  560,    0,    0,    0,    0,    0,    0, 0, 0}
+};
+static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
+/* region 1 offsets */
+	{ 256,  256,  128,    0,   32,    0,    0,   96,    0,    0,    0,    0,    0,    0, 0, 0}
+};
+
 /* General constants */
 #define QM_PQ_MEM_4KB(pq_size) \
 	(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
@@ -149,22 +162,20 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 #define QM_STOP_CMD_POLL_PERIOD_US	500
 
 /* QM command macros */
-#define QM_CMD_STRUCT_SIZE(cmd) cmd##_STRUCT_SIZE
+#define QM_CMD_STRUCT_SIZE(cmd)		cmd##_STRUCT_SIZE
 #define QM_CMD_SET_FIELD(var, cmd, field, value) \
 	SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
 
-#define QM_INIT_TX_PQ_MAP(p_hwfn, map, pq_id, vp_pq_id, \
-			   rl_valid, rl_id, voq, wrr) \
+#define QM_INIT_TX_PQ_MAP(p_hwfn, map, chip, pq_id, vp_pq_id, rl_valid, rl_id, ext_voq, wrr) \
 	do { \
 		OSAL_MEMSET(&(map), 0, sizeof(map)); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_VALID, rl_valid ? 1 : 0); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_ID, rl_id); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_VP_PQ_ID, vp_pq_id); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_VOQ, voq); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, wrr); \
-		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + (pq_id), \
-			     *((u32 *)&(map))); \
+		SET_FIELD((map).reg, QM_RF_PQ_MAP_##chip##_PQ_VALID, 1); \
+		SET_FIELD((map).reg, QM_RF_PQ_MAP_##chip##_RL_VALID, rl_valid ? 1 : 0); \
+		SET_FIELD((map).reg, QM_RF_PQ_MAP_##chip##_RL_ID, rl_id); \
+		SET_FIELD((map).reg, QM_RF_PQ_MAP_##chip##_VP_PQ_ID, vp_pq_id); \
+		SET_FIELD((map).reg, QM_RF_PQ_MAP_##chip##_VOQ, ext_voq); \
+		SET_FIELD((map).reg, QM_RF_PQ_MAP_##chip##_WRR_WEIGHT_GROUP, wrr); \
+		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + (pq_id), *((u32 *)&(map))); \
 	} while (0)
 
 #define WRITE_PQ_INFO_TO_RAM		1
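
The reworked QM_INIT_TX_PQ_MAP() token-pastes its new chip argument into the register-field
names, so a single macro body serves both the E4 and E5 QM_RF_PQ_MAP layouts. Hand-expanding a
chip = E5 call gives roughly the following (a sketch, not actual preprocessor output):

OSAL_MEMSET(&map, 0, sizeof(map));
SET_FIELD(map.reg, QM_RF_PQ_MAP_E5_PQ_VALID, 1);
SET_FIELD(map.reg, QM_RF_PQ_MAP_E5_RL_VALID, rl_valid ? 1 : 0);
SET_FIELD(map.reg, QM_RF_PQ_MAP_E5_RL_ID, rl_id);
SET_FIELD(map.reg, QM_RF_PQ_MAP_E5_VP_PQ_ID, vp_pq_id);
SET_FIELD(map.reg, QM_RF_PQ_MAP_E5_VOQ, ext_voq);
SET_FIELD(map.reg, QM_RF_PQ_MAP_E5_WRR_WEIGHT_GROUP, wrr);
STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id, *((u32 *)&map));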
@@ -389,20 +400,19 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
  */
 static int ecore_global_rl_rt_init(struct ecore_hwfn *p_hwfn,
 				   struct init_qm_global_rl_params
-				     global_rl_params[COMMON_MAX_QM_GLOBAL_RLS])
+					global_rl_params[COMMON_MAX_QM_GLOBAL_RLS])
 {
-	u32 upper_bound = QM_VP_RL_UPPER_BOUND(QM_MAX_LINK_SPEED) |
-			  (u32)QM_RL_CRD_REG_SIGN_BIT;
+	u16 num_global_rls = ECORE_IS_E5(p_hwfn->p_dev) ? MAX_QM_GLOBAL_RLS_E5
+							: MAX_QM_GLOBAL_RLS_E4;
+	u32 upper_bound = QM_VP_RL_UPPER_BOUND(QM_MAX_LINK_SPEED) | (u32)QM_RL_CRD_REG_SIGN_BIT;
 	u32 inc_val;
 	u16 rl_id;
 
 	/* Go over all global RLs */
-	for (rl_id = 0; rl_id < MAX_QM_GLOBAL_RLS; rl_id++) {
-		u32 rate_limit = global_rl_params ?
-				 global_rl_params[rl_id].rate_limit : 0;
+	for (rl_id = 0; rl_id < num_global_rls; rl_id++) {
+		u32 rate_limit = global_rl_params ? global_rl_params[rl_id].rate_limit : 0;
 
-		inc_val = QM_RL_INC_VAL(rate_limit ?
-					rate_limit : QM_MAX_LINK_SPEED);
+		inc_val = QM_RL_INC_VAL(rate_limit ? rate_limit : QM_MAX_LINK_SPEED);
 		if (inc_val > QM_VP_RL_MAX_INC_VAL(QM_MAX_LINK_SPEED)) {
 			DP_NOTICE(p_hwfn, true, "Invalid rate limit configuration.\n");
 			return -1;
@@ -467,11 +477,10 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 	/* Go over all Tx PQs */
 	for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) {
 		u16 first_tx_pq_id, vport_id_in_pf;
-		struct qm_rf_pq_map tx_pq_map;
 		bool is_vf_pq;
-		u8 voq;
+		u8 ext_voq;
 
-		voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id,
+		ext_voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id,
 			  max_phys_tcs_per_port);
 		is_vf_pq = (i >= num_pf_pqs);
 
@@ -480,7 +489,7 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		first_tx_pq_id =
 		vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
 		if (first_tx_pq_id == QM_INVALID_PQ_ID) {
-			u32 map_val = (voq << QM_WFQ_VP_PQ_VOQ_SHIFT) |
+			u32 map_val = (ext_voq << QM_WFQ_VP_PQ_VOQ_SHIFT) |
 				      (pf_id << QM_WFQ_VP_PQ_PF_SHIFT);
 
 			/* Create new VP PQ */
@@ -494,9 +503,19 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		}
 
 		/* Prepare PQ map entry */
-		QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, pq_id, first_tx_pq_id,
-				  pq_params[i].rl_valid, pq_params[i].rl_id,
-				  voq, pq_params[i].wrr_group);
+		if (ECORE_IS_E5(p_hwfn->p_dev)) {
+			struct qm_rf_pq_map_e5 tx_pq_map;
+
+			QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, E5, pq_id, first_tx_pq_id,
+				pq_params[i].rl_valid, pq_params[i].rl_id, ext_voq,
+				pq_params[i].wrr_group);
+		} else {
+			struct qm_rf_pq_map_e4 tx_pq_map;
+
+			QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, E4, pq_id, first_tx_pq_id,
+				pq_params[i].rl_valid, pq_params[i].rl_id, ext_voq,
+				pq_params[i].wrr_group);
+		}
 
 		/* Set PQ base address */
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id,
@@ -941,7 +960,8 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 						u32 vport_rl,
 						u32 link_speed)
 {
-	u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS;
+	u32 inc_val, max_qm_global_rls = ECORE_IS_E5(p_hwfn->p_dev) ? MAX_QM_GLOBAL_RLS_E5
+								    : MAX_QM_GLOBAL_RLS_E4;
 
 	if (vport_id >= max_qm_global_rls) {
 		DP_NOTICE(p_hwfn, true,
@@ -1501,9 +1521,8 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 
 	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
-	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
-			   PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT,
-			   vxlan_enable);
+	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, PRS_ENCAPSULATION_TYPE_EN_FLAGS_VXLAN_ENABLE_SHIFT,
+		vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) { /* TODO: handle E5 init */
 		reg_val = ecore_rd(p_hwfn, p_ptt,
@@ -1536,11 +1555,11 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
-		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT,
-		   eth_gre_enable);
+		PRS_ENCAPSULATION_TYPE_EN_FLAGS_ETH_OVER_GRE_ENABLE_SHIFT,
+		eth_gre_enable);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
-		   PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GRE_ENABLE_SHIFT,
-		   ip_gre_enable);
+		PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GRE_ENABLE_SHIFT,
+		ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) { /* TODO: handle E5 init */
 		reg_val = ecore_rd(p_hwfn, p_ptt,
@@ -1591,11 +1610,11 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 	/* Update PRS register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
-		   PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT,
-		   eth_geneve_enable);
+		PRS_ENCAPSULATION_TYPE_EN_FLAGS_ETH_OVER_GENEVE_ENABLE_SHIFT,
+		eth_geneve_enable);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
-		   PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GENEVE_ENABLE_SHIFT,
-		   ip_geneve_enable);
+		PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GENEVE_ENABLE_SHIFT,
+		ip_geneve_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
 	if (reg_val) { /* TODO: handle E5 init */
 		reg_val = ecore_rd(p_hwfn, p_ptt,
@@ -1903,8 +1922,7 @@ static u8 cdu_crc8_table[CRC8_TABLE_SIZE];
 /* Calculate and return CDU validation byte per connection type / region /
  * cid
  */
-static u8 ecore_calc_cdu_validation_byte(struct ecore_hwfn *p_hwfn,
-					 u8 conn_type, u8 region, u32 cid)
+static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
 {
 	static u8 crc8_table_valid;	/*automatically initialized to 0*/
 	u8 crc, validation_byte = 0;
@@ -1971,17 +1989,23 @@ void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
 {
 	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
 
-	p_ctx = (u8 *)p_ctx_mem;
+	p_ctx = (u8 * const)p_ctx_mem;
 
-	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
-	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
-	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		x_val_ptr = &p_ctx[con_region_offsets_e5[0][ctx_type]];
+		t_val_ptr = &p_ctx[con_region_offsets_e5[1][ctx_type]];
+		u_val_ptr = &p_ctx[con_region_offsets_e5[2][ctx_type]];
+	} else {
+		x_val_ptr = &p_ctx[con_region_offsets_e4[0][ctx_type]];
+		t_val_ptr = &p_ctx[con_region_offsets_e4[1][ctx_type]];
+		u_val_ptr = &p_ctx[con_region_offsets_e4[2][ctx_type]];
+	}
 
 	OSAL_MEMSET(p_ctx, 0, ctx_size);
 
-	*x_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 3, cid);
-	*t_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 4, cid);
-	*u_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 5, cid);
+	*x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid);
+	*t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid);
+	*u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid);
 }
 
 /* Calcualte and set validation bytes for task context */
@@ -1990,13 +2014,15 @@ void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
 {
 	u8 *p_ctx, *region1_val_ptr;
 
-	p_ctx = (u8 *)p_ctx_mem;
-	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+	p_ctx = (u8 * const)p_ctx_mem;
+
+	region1_val_ptr = ECORE_IS_E5(p_hwfn->p_dev) ?
+		&p_ctx[task_region_offsets_e5[0][ctx_type]] :
+		&p_ctx[task_region_offsets_e4[0][ctx_type]];
 
 	OSAL_MEMSET(p_ctx, 0, ctx_size);
 
-	*region1_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 1,
-							  tid);
+	*region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid);
 }
 
 /* Memset session context to 0 while preserving validation bytes */
@@ -2006,11 +2032,17 @@ void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
 	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
 	u8 x_val, t_val, u_val;
 
-	p_ctx = (u8 *)p_ctx_mem;
+	p_ctx = (u8 * const)p_ctx_mem;
 
-	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
-	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
-	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		x_val_ptr = &p_ctx[con_region_offsets_e5[0][ctx_type]];
+		t_val_ptr = &p_ctx[con_region_offsets_e5[1][ctx_type]];
+		u_val_ptr = &p_ctx[con_region_offsets_e5[2][ctx_type]];
+	} else {
+		x_val_ptr = &p_ctx[con_region_offsets_e4[0][ctx_type]];
+		t_val_ptr = &p_ctx[con_region_offsets_e4[1][ctx_type]];
+		u_val_ptr = &p_ctx[con_region_offsets_e4[2][ctx_type]];
+	}
 
 	x_val = *x_val_ptr;
 	t_val = *t_val_ptr;
@@ -2030,8 +2062,11 @@ void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
 	u8 *p_ctx, *region1_val_ptr;
 	u8 region1_val;
 
-	p_ctx = (u8 *)p_ctx_mem;
-	region1_val_ptr = &p_ctx[task_region_offsets[0][ctx_type]];
+	p_ctx = (u8 * const)p_ctx_mem;
+
+	region1_val_ptr = ECORE_IS_E5(p_hwfn->p_dev) ?
+		&p_ctx[task_region_offsets_e5[0][ctx_type]] :
+		&p_ctx[task_region_offsets_e4[0][ctx_type]];
 
 	region1_val = *region1_val_ptr;
 
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index ea964ea2f..42885a401 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -568,10 +568,18 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
 	fw->modes_tree_buf = (u8 *)((uintptr_t)(fw_data + offset));
 	len = buf_hdr[BIN_BUF_INIT_CMD].length;
 	fw->init_ops_size = len / sizeof(struct init_raw_op);
-	offset = buf_hdr[BIN_BUF_INIT_OVERLAYS].offset;
-	fw->fw_overlays = (u32 *)(fw_data + offset);
-	len = buf_hdr[BIN_BUF_INIT_OVERLAYS].length;
-	fw->fw_overlays_len = len;
+
+	if (ECORE_IS_E4(p_dev)) {
+		offset = buf_hdr[BIN_BUF_INIT_OVERLAYS_E4].offset;
+		fw->fw_overlays_e4 = (u32 *)(fw_data + offset);
+		len = buf_hdr[BIN_BUF_INIT_OVERLAYS_E4].length;
+		fw->fw_overlays_e4_len = len;
+	} else {
+		offset = buf_hdr[BIN_BUF_INIT_OVERLAYS_E5].offset;
+		fw->fw_overlays_e5 = (u32 *)(fw_data + offset);
+		len = buf_hdr[BIN_BUF_INIT_OVERLAYS_E5].length;
+		fw->fw_overlays_e5_len = len;
+	}
 #else
 	fw->init_ops = (union init_op *)init_ops;
 	fw->arr_data = (u32 *)init_val;
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index 4207b1853..e0d61e375 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -1544,7 +1544,8 @@ static void _ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
 	if (IS_VF(p_hwfn->p_dev))
 		return;/* @@@TBD MichalK- VF CAU... */
 
-	sb_offset = igu_sb_id * PIS_PER_SB;
+	sb_offset = igu_sb_id * (ECORE_IS_E4(p_hwfn->p_dev) ? PIS_PER_SB_E4
+							    : PIS_PER_SB_E5);
 	OSAL_MEMSET(&pi_entry, 0, sizeof(struct cau_pi_entry));
 
 	SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_PI_TIMESET, timeset);
@@ -1735,14 +1736,24 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 				       void *sb_virt_addr,
 				       dma_addr_t sb_phy_addr, u16 sb_id)
 {
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
+
 	sb_info->sb_virt = sb_virt_addr;
-	struct status_block *sb_virt;
+	if (ECORE_IS_E4(p_hwfn->p_dev)) {
+		struct status_block_e4 *sb_virt_e4 =
+			(struct status_block_e4 *)sb_info->sb_virt;
 
-	sb_virt = (struct status_block *)sb_info->sb_virt;
+		sb_info->sb_size = sizeof(*sb_virt_e4);
+		sb_info->sb_pi_array = sb_virt_e4->pi_array;
+		sb_info->sb_prod_index = &sb_virt_e4->prod_index;
+	} else {
+		struct status_block_e5 *sb_virt_e5 =
+			(struct status_block_e5 *)sb_info->sb_virt;
 
-	sb_info->sb_size = sizeof(*sb_virt);
-	sb_info->sb_pi_array = sb_virt->pi_array;
-	sb_info->sb_prod_index = &sb_virt->prod_index;
+		sb_info->sb_size = sizeof(*sb_virt_e5);
+		sb_info->sb_pi_array = sb_virt_e5->pi_array;
+		sb_info->sb_prod_index = &sb_virt_e5->prod_index;
+	}
 
 	sb_info->sb_phys = sb_phy_addr;
 
@@ -1875,7 +1886,8 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
 	ecore_int_sb_init(p_hwfn, p_ptt, &p_sb->sb_info,
 			  p_virt, p_phys, ECORE_SP_SB_ID);
 
-	p_sb->pi_info_arr_size = PIS_PER_SB;
+	p_sb->pi_info_arr_size = ECORE_IS_E4(p_hwfn->p_dev) ? PIS_PER_SB_E4
+							    : PIS_PER_SB_E5;
 
 	return ECORE_SUCCESS;
 }
@@ -2081,6 +2093,7 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
 				       u16 igu_sb_id, u16 opaque, bool b_set)
 {
 	struct ecore_igu_block *p_block;
+	u8 pis_per_sb;
 	int pi, i;
 
 	p_block = &p_hwfn->hw_info.p_igu_info->entry[igu_sb_id];
@@ -2114,10 +2127,11 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
 			  igu_sb_id);
 
 	/* Clear the CAU for the SB */
-	for (pi = 0; pi < PIS_PER_SB; pi++)
+	pis_per_sb = ECORE_IS_E4(p_hwfn->p_dev) ? PIS_PER_SB_E4 : PIS_PER_SB_E5;
+	for (pi = 0; pi < pis_per_sb; pi++)
 		ecore_wr(p_hwfn, p_ptt,
 			 CAU_REG_PI_MEMORY +
-			 (igu_sb_id * PIS_PER_SB + pi) * 4,
+			 (igu_sb_id * pis_per_sb + pi) * 4,
 			 0);
 }
 
@@ -2718,6 +2732,7 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 					  struct ecore_sb_info_dbg *p_info)
 {
 	u16 sbid = p_sb->igu_sb_id;
+	u32 pis_per_sb;
 	u32 i;
 
 	if (IS_VF(p_hwfn->p_dev))
@@ -2731,11 +2746,11 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
 				    IGU_REG_CONSUMER_MEM + sbid * 4);
 
-	for (i = 0; i < PIS_PER_SB; i++)
+	pis_per_sb = ECORE_IS_E4(p_hwfn->p_dev) ? PIS_PER_SB_E4 : PIS_PER_SB_E5;
+	for (i = 0; i < pis_per_sb; i++)
 		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
 					      CAU_REG_PI_MEMORY +
-					      sbid * 4 * PIS_PER_SB +
-					      i * 4);
+					      sbid * 4 * pis_per_sb + i * 4);
 
 	return ECORE_SUCCESS;
 }
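
The ECORE_IS_E4() ? PIS_PER_SB_E4 : PIS_PER_SB_E5 selection now recurs in
_ecore_int_cau_conf_pi(), ecore_int_sp_sb_alloc(), ecore_int_igu_init_pure_rt_single() and
ecore_int_get_sb_dbg(). A small helper would keep those call sites shorter; a possible
cleanup, not part of this series:

static u8 ecore_pis_per_sb(struct ecore_dev *p_dev)
{
	/* Per-chip number of protocol indices per status block */
	return ECORE_IS_E4(p_dev) ? PIS_PER_SB_E4 : PIS_PER_SB_E5;
}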
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 5042cd1d1..924c31429 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -17,7 +17,9 @@
 #define ECORE_SB_EVENT_MASK	0x0003
 
 #define SB_ALIGNED_SIZE(p_hwfn) \
-	ALIGNED_TYPE_SIZE(struct status_block, p_hwfn)
+	(ECORE_IS_E4((p_hwfn)->p_dev) ? \
+	 ALIGNED_TYPE_SIZE(struct status_block_e4, p_hwfn) : \
+	 ALIGNED_TYPE_SIZE(struct status_block_e5, p_hwfn))
 
 #define ECORE_SB_INVALID_IDX	0xffff
 
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index d7b6b86cc..553d7e6b3 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -47,7 +47,7 @@ struct ecore_sb_info {
 struct ecore_sb_info_dbg {
 	u32 igu_prod;
 	u32 igu_cons;
-	u16 pi[PIS_PER_SB];
+	u16 pi[MAX_PIS_PER_SB];
 };
 
 struct ecore_sb_cnt_info {
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index af234dec8..01eb69c0e 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -1869,7 +1869,7 @@ static void __ecore_get_vport_mstats(struct ecore_hwfn *p_hwfn,
 	p_stats->common.ttl0_discard +=
 		HILO_64_REGPAIR(mstats.ttl0_discard);
 	p_stats->common.tpa_coalesced_pkts +=
-		HILO_64_REGPAIR(mstats.tpa_coalesced_pkts);
+		HILO_64_REGPAIR(mstats.tpa_coalesced_pkts_lb_hairpin_discard);
 	p_stats->common.tpa_coalesced_events +=
 		HILO_64_REGPAIR(mstats.tpa_coalesced_events);
 	p_stats->common.tpa_aborts_num +=
@@ -2144,7 +2144,7 @@ ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
 				  struct ecore_spq_comp_cb *p_cb,
 				  struct ecore_ntuple_filter_params *p_params)
 {
-	struct rx_update_gft_filter_data *p_ramrod = OSAL_NULL;
+	struct rx_update_gft_filter_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
 	struct ecore_sp_init_data init_data;
 	u16 abs_rx_q_id = 0;
@@ -2165,7 +2165,7 @@ ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
 	}
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
-				   ETH_RAMROD_GFT_UPDATE_FILTER,
+				   ETH_RAMROD_RX_UPDATE_GFT_FILTER,
 				   PROTOCOLID_ETH, &init_data);
 	if (rc != ECORE_SUCCESS)
 		return rc;
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 02f613688..93e984bf8 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -201,32 +201,49 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 	__le32 *p_spq_base_lo, *p_spq_base_hi;
 	struct regpair *p_consolid_base_addr;
 	u8 *p_flags1, *p_flags9, *p_flags10;
-	struct core_conn_context *p_cxt;
 	struct ecore_cxt_info cxt_info;
 	u32 core_conn_context_size;
 	__le16 *p_physical_q0;
 	u16 physical_q;
+	void *p_cxt;
 	enum _ecore_status_t rc;
 
 	cxt_info.iid = p_spq->cid;
 
 	rc = ecore_cxt_get_cid_info(p_hwfn, &cxt_info);
-
 	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Cannot find context info for cid=%d\n",
 			  p_spq->cid);
 		return;
 	}
 
-	p_cxt = cxt_info.p_cxt;
-	core_conn_context_size = sizeof(*p_cxt);
-	p_flags1 = &p_cxt->xstorm_ag_context.flags1;
-	p_flags9 = &p_cxt->xstorm_ag_context.flags9;
-	p_flags10 = &p_cxt->xstorm_ag_context.flags10;
-	p_physical_q0 = &p_cxt->xstorm_ag_context.physical_q0;
-	p_spq_base_lo = &p_cxt->xstorm_st_context.spq_base_lo;
-	p_spq_base_hi = &p_cxt->xstorm_st_context.spq_base_hi;
-	p_consolid_base_addr = &p_cxt->xstorm_st_context.consolid_base_addr;
+	if (ECORE_IS_E4(p_hwfn->p_dev)) {
+		struct e4_core_conn_context *p_cxt_e4 = cxt_info.p_cxt;
+
+		p_cxt = p_cxt_e4;
+		core_conn_context_size = sizeof(*p_cxt_e4);
+		p_flags1 = &p_cxt_e4->xstorm_ag_context.flags1;
+		p_flags9 = &p_cxt_e4->xstorm_ag_context.flags9;
+		p_flags10 = &p_cxt_e4->xstorm_ag_context.flags10;
+		p_physical_q0 = &p_cxt_e4->xstorm_ag_context.physical_q0;
+		p_spq_base_lo = &p_cxt_e4->xstorm_st_context.spq_base_lo;
+		p_spq_base_hi = &p_cxt_e4->xstorm_st_context.spq_base_hi;
+		p_consolid_base_addr =
+			&p_cxt_e4->xstorm_st_context.consolid_base_addr;
+	} else {
+		struct e5_core_conn_context *p_cxt_e5 = cxt_info.p_cxt;
+
+		p_cxt = p_cxt_e5;
+		core_conn_context_size = sizeof(*p_cxt_e5);
+		p_flags1 = &p_cxt_e5->xstorm_ag_context.flags1;
+		p_flags9 = &p_cxt_e5->xstorm_ag_context.flags9;
+		p_flags10 = &p_cxt_e5->xstorm_ag_context.flags10;
+		p_physical_q0 = &p_cxt_e5->xstorm_ag_context.physical_q0;
+		p_spq_base_lo = &p_cxt_e5->xstorm_st_context.spq_base_lo;
+		p_spq_base_hi = &p_cxt_e5->xstorm_st_context.spq_base_hi;
+		p_consolid_base_addr =
+			&p_cxt_e5->xstorm_st_context.consolid_base_addr;
+	}
 
 	/* @@@TBD we zero the context until we have ilt_reset implemented. */
 	OSAL_MEM_ZERO(p_cxt, core_conn_context_size);
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 0958e5a0a..80dbdeaa0 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -24,7 +24,7 @@ union ramrod_data {
 	struct tx_queue_stop_ramrod_data		tx_queue_stop;
 	struct vport_start_ramrod_data			vport_start;
 	struct vport_stop_ramrod_data			vport_stop;
-	struct rx_update_gft_filter_data		rx_update_gft;
+	struct rx_update_gft_filter_ramrod_data		rx_update_gft;
 	struct vport_update_ramrod_data			vport_update;
 	struct core_rx_start_ramrod_data		core_rx_queue_start;
 	struct core_rx_stop_ramrod_data			core_rx_queue_stop;
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index ed8cc695f..b49465b88 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -1859,8 +1859,9 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 
 	/* fill in pfdev info */
 	pfdev_info->chip_num = p_hwfn->p_dev->chip_num;
-	pfdev_info->db_size = 0;	/* @@@ TBD MichalK Vf Doorbells */
-	pfdev_info->indices_per_sb = PIS_PER_SB;
+	pfdev_info->db_size = 0; /* @@@ TBD MichalK Vf Doorbells */
+	pfdev_info->indices_per_sb = ECORE_IS_E4(p_hwfn->p_dev) ? PIS_PER_SB_E4
+								: PIS_PER_SB_E5;
 
 	pfdev_info->capabilities = PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED |
 				   PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE;
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index db03bc494..e00caf646 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -874,7 +874,7 @@ ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 
 		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
 			   MSTORM_QZONE_START(p_hwfn->p_dev) +
-			   (hw_qid) * MSTORM_QZONE_SIZE;
+			   hw_qid * MSTORM_QZONE_SIZE(p_hwfn->p_dev);
 
 		/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
 		__internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index 4611d86d9..1ca11c473 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -24,7 +24,7 @@
  */
 #define ETH_HSI_VER_NO_PKT_LEN_TUNN         5
 
-/* Maximum number of pinned L2 connections (CIDs)*/
+/* Maximum number of pinned L2 connections (CIDs) */
 #define ETH_PINNED_CONN_MAX_NUM             32
 
 #define ETH_CACHE_LINE_SIZE                 64
@@ -100,12 +100,15 @@
 #define ETH_FILTER_RULES_COUNT              10
 /* number of RSS indirection table entries, per Vport) */
 #define ETH_RSS_IND_TABLE_ENTRIES_NUM       128
+/* RSS indirection table mask size in registers */
+#define ETH_RSS_IND_TABLE_MASK_SIZE_REGS    (ETH_RSS_IND_TABLE_ENTRIES_NUM / 32)
 /* Length of RSS key (in regs) */
 #define ETH_RSS_KEY_SIZE_REGS               10
 /* number of available RSS engines in AH */
 #define ETH_RSS_ENGINE_NUM_K2               207
 /* number of available RSS engines in BB */
 #define ETH_RSS_ENGINE_NUM_BB               127
+#define ETH_RSS_ENGINE_NUM_E5               256 /* number of available RSS engines in E5 */
 
 /* TPA constants */
 /* Maximum number of open TPA aggregations */
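
The new ETH_RSS_IND_TABLE_MASK_SIZE_REGS constant in the hunk above expresses the indirection
table as a register-sized bitmask: 128 entries at 32 bits per register come to 4 registers. A
throwaway C11 check of that arithmetic (illustrative only):

_Static_assert(ETH_RSS_IND_TABLE_MASK_SIZE_REGS == 4,
	       "128 entries / 32 bits per register == 4 mask registers");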
@@ -123,6 +126,7 @@
 
 /* GFS constants */
 #define ETH_GFT_TRASHCAN_VPORT         0x1FF /* GFT drop flow vport number */
+#define ETH_GFS_NUM_OF_ACTIONS         10           /* Maximum number of GFS actions supported */
 
 
 
diff --git a/drivers/net/qede/qede_debug.c b/drivers/net/qede/qede_debug.c
index 180d6f92c..d46727c45 100644
--- a/drivers/net/qede/qede_debug.c
+++ b/drivers/net/qede/qede_debug.c
@@ -2719,9 +2719,13 @@ static u32 qed_grc_dump_ctx_data(struct ecore_hwfn *p_hwfn,
 static u32 qed_grc_dump_ctx(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt, u32 *dump_buf, bool dump)
 {
-	u32 offset = 0;
+	struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
+	u32 offset = 0, num_ltids;
 	u8 storm_id;
 
+	num_ltids = dev_data->chip_id >=
+		CHIP_E5 ? NUM_OF_LTIDS_E5 : NUM_OF_LTIDS_E4;
+
 	for (storm_id = 0; storm_id < MAX_DBG_STORMS; storm_id++) {
 		if (!qed_grc_is_storm_included(p_hwfn,
 					       (enum dbg_storms)storm_id))
@@ -2751,7 +2755,7 @@ static u32 qed_grc_dump_ctx(struct ecore_hwfn *p_hwfn,
 						dump_buf + offset,
 						dump,
 						"TASK_AG_CTX",
-						NUM_OF_LTIDS,
+						num_ltids,
 						CM_CTX_TASK_AG, storm_id);
 
 		/* Dump Task ST context size */
@@ -2760,7 +2764,7 @@ static u32 qed_grc_dump_ctx(struct ecore_hwfn *p_hwfn,
 						dump_buf + offset,
 						dump,
 						"TASK_ST_CTX",
-						NUM_OF_LTIDS,
+						num_ltids,
 						CM_CTX_TASK_ST, storm_id);
 	}
 
@@ -3398,10 +3402,12 @@ static enum dbg_status qed_grc_dump(struct ecore_hwfn *p_hwfn,
 				    bool dump, u32 *num_dumped_dwords)
 {
 	struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
-	u32 dwords_read, offset = 0;
+	u32 dwords_read, offset = 0, num_ltids;
 	bool parities_masked = false;
 	u8 i;
 
+	num_ltids = dev_data->chip_id >=
+		CHIP_E5 ? NUM_OF_LTIDS_E5 : NUM_OF_LTIDS_E4;
 	*num_dumped_dwords = 0;
 	dev_data->num_regs_read = 0;
 
@@ -3422,7 +3428,7 @@ static enum dbg_status qed_grc_dump(struct ecore_hwfn *p_hwfn,
 	offset += qed_dump_num_param(dump_buf + offset,
 				     dump,
 				     "num-ltids",
-				     NUM_OF_LTIDS);
+				     num_ltids);
 	offset += qed_dump_num_param(dump_buf + offset,
 				     dump, "num-ports", dev_data->num_ports);
 
@@ -4775,9 +4781,9 @@ static u32 qed_ilt_dump(struct ecore_hwfn *p_hwfn,
 	offset += qed_dump_section_hdr(dump_buf + offset,
 				       dump, "num_pf_cids_per_conn_type", 1);
 	offset += qed_dump_num_param(dump_buf + offset,
-				     dump, "size", NUM_OF_CONNECTION_TYPES);
+				     dump, "size", NUM_OF_CONNECTION_TYPES_E4);
 	for (conn_type = 0, valid_conn_pf_cids = 0;
-	     conn_type < NUM_OF_CONNECTION_TYPES; conn_type++, offset++) {
+	     conn_type < NUM_OF_CONNECTION_TYPES_E4; conn_type++, offset++) {
 		u32 num_pf_cids =
 		    p_hwfn->p_cxt_mngr->conn_cfg[conn_type].cid_count;
 
@@ -4790,9 +4796,9 @@ static u32 qed_ilt_dump(struct ecore_hwfn *p_hwfn,
 	offset += qed_dump_section_hdr(dump_buf + offset,
 				       dump, "num_vf_cids_per_conn_type", 1);
 	offset += qed_dump_num_param(dump_buf + offset,
-				     dump, "size", NUM_OF_CONNECTION_TYPES);
+				     dump, "size", NUM_OF_CONNECTION_TYPES_E4);
 	for (conn_type = 0, valid_conn_vf_cids = 0;
-	     conn_type < NUM_OF_CONNECTION_TYPES; conn_type++, offset++) {
+	     conn_type < NUM_OF_CONNECTION_TYPES_E4; conn_type++, offset++) {
 		u32 num_vf_cids =
 		    p_hwfn->p_cxt_mngr->conn_cfg[conn_type].cids_per_vf;
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index caa9d1d4f..e865d988f 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -19,7 +19,7 @@
 char qede_fw_file[PATH_MAX];
 
 static const char * const QEDE_DEFAULT_FIRMWARE =
-	"/lib/firmware/qed/qed_init_values-8.40.33.0.bin";
+	"/lib/firmware/qed/qed_init_values-8.62.0.0.bin";
 
 static void
 qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 75d78cebb..6fc67d878 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -569,12 +569,14 @@ qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
 		  uint16_t sb_id)
 {
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct status_block *sb_virt;
 	dma_addr_t sb_phys;
+	void *sb_virt;
+	u32 sb_size;
 	int rc;
 
-	sb_virt = OSAL_DMA_ALLOC_COHERENT(edev, &sb_phys,
-					  sizeof(struct status_block));
+	sb_size = ECORE_IS_E5(edev) ? sizeof(struct status_block_e5) :
+				      sizeof(struct status_block_e4);
+	sb_virt = OSAL_DMA_ALLOC_COHERENT(edev, &sb_phys, sb_size);
 	if (!sb_virt) {
 		DP_ERR(edev, "Status block allocation failed\n");
 		return -ENOMEM;
@@ -583,8 +585,7 @@ qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
 					sb_phys, sb_id);
 	if (rc) {
 		DP_ERR(edev, "Status block initialization failed\n");
-		OSAL_DMA_FREE_COHERENT(edev, sb_virt, sb_phys,
-				       sizeof(struct status_block));
+		OSAL_DMA_FREE_COHERENT(edev, sb_virt, sb_phys, sb_size);
 		return rc;
 	}
 
@@ -680,8 +681,8 @@ void qede_dealloc_fp_resc(struct rte_eth_dev *eth_dev)
 			DP_INFO(edev, "Free sb_info index 0x%x\n",
 					fp->sb_info->igu_sb_id);
 			OSAL_DMA_FREE_COHERENT(edev, fp->sb_info->sb_virt,
-				fp->sb_info->sb_phys,
-				sizeof(struct status_block));
+					       fp->sb_info->sb_phys,
+					       fp->sb_info->sb_size);
 			rte_free(fp->sb_info);
 			fp->sb_info = NULL;
 		}
-- 
2.18.0
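
The status block change above sizes the DMA area per chip generation.
A minimal sketch of the same pattern (the helper name is hypothetical;
ECORE_IS_E5(), the status_block_e4/_e5 structures and the OSAL DMA
helpers are from this series):

	/* Hypothetical helper: allocate a status block sized for the
	 * running chip generation (E5 vs. the older E4 devices).
	 */
	static void *qede_sb_alloc(struct ecore_dev *edev, dma_addr_t *phys,
				   u32 *size)
	{
		*size = ECORE_IS_E5(edev) ? sizeof(struct status_block_e5)
					  : sizeof(struct status_block_e4);
		return OSAL_DMA_ALLOC_COHERENT(edev, phys, *size);
	}

	/* The caller must keep *size for the matching
	 * OSAL_DMA_FREE_COHERENT(), which is why sb_size is now stored
	 * in sb_info.
	 */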


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH 3/7] net/qede/base: add OS abstracted changes
  2021-02-19 10:14 [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 2/7] net/qede/base: changes for HSI to support " Rasesh Mody
@ 2021-02-19 10:14 ` Rasesh Mody
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 4/7] net/qede/base: update base driver to 8.62.4.0 Rasesh Mody
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Rasesh Mody @ 2021-02-19 10:14 UTC (permalink / raw)
  To: jerinj, ferruh.yigit; +Cc: Rasesh Mody, dev, GR-Everest-DPDK-Dev, Igor Russkikh

The patch includes the OS-abstraction changes required to support the
new hardware and the new features it provides. It also adds new bit
operations to the RTE library; a usage sketch follows the diffstat
below.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
---
 drivers/net/qede/base/bcm_osal.c    |  2 +-
 drivers/net/qede/base/bcm_osal.h    | 39 ++++++++++++++++++---
 lib/librte_eal/include/rte_bitops.h | 54 ++++++++++++++++++++++++++++-
 3 files changed, 88 insertions(+), 7 deletions(-)
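
A short usage sketch of the new relaxed bit operations added below
(values follow from the implementations in this patch):

	#include <rte_bitops.h>

	uint32_t word = 0x5;	/* bits 0 and 2 set */

	/* test_and_clear returns the previous bit and forces it to 0 */
	rte_bit_relaxed_test_and_clear32(0, &word);	/* returns 0x1, word = 0x4 */

	/* test_and_flip returns the previous bit and toggles it */
	rte_bit_relaxed_test_and_flip32(1, &word);	/* returns 0x0, word = 0x6 */
	rte_bit_relaxed_test_and_flip32(1, &word);	/* returns 0x2, word = 0x4 */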

diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 2c59397e0..23a84795f 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -121,7 +121,7 @@ void qede_vf_fill_driver_data(struct ecore_hwfn *hwfn,
 			      struct ecore_vf_acquire_sw_info *vf_sw_info)
 {
 	vf_sw_info->os_type = VFPF_ACQUIRE_OS_LINUX_USERSPACE;
-	vf_sw_info->override_fw_version = 1;
+	/* TODO - fill driver version */
 }
 
 void *osal_dma_alloc_coherent(struct ecore_dev *p_dev,
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index c5b539928..38b7fff67 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -47,9 +47,10 @@ void qed_link_update(struct ecore_hwfn *hwfn);
 #endif
 #endif
 
-#define OSAL_WARN(arg1, arg2, arg3, ...) (0)
-
-#define UNUSED(x)	(void)(x)
+#define UNUSED1(a)		(void)(a)
+#define UNUSED2(a, b)		((void)(a), UNUSED1(b))
+#define UNUSED3(a, b, c)	((void)(a), UNUSED2(b, c))
+#define UNUSED4(a, b, c, d)	((void)(a), UNUSED3(b, c, d))
 
 /* Memory Types */
 typedef uint8_t u8;
@@ -167,9 +168,8 @@ typedef pthread_mutex_t osal_mutex_t;
 #define OSAL_SPIN_UNLOCK(lock) rte_spinlock_unlock(lock)
 #define OSAL_SPIN_LOCK_IRQSAVE(lock, flags)	\
 	do {					\
-		UNUSED(lock);			\
 		flags = 0;			\
-		UNUSED(flags);			\
+		UNUSED2(lock, flags);		\
 	} while (0)
 #define OSAL_SPIN_UNLOCK_IRQSAVE(lock, flags) nothing
 #define OSAL_SPIN_LOCK_ALLOC(hwfn, lock) nothing
@@ -326,6 +326,18 @@ typedef struct osal_list_t {
 #define OSAL_GET_BIT(bit, bitmap) \
 	rte_bit_relaxed_get32(bit, bitmap)
 
+#define OSAL_TEST_BIT(bit, bitmap) \
+	OSAL_GET_BIT(bit, bitmap)
+
+#define OSAL_TEST_AND_CLEAR_BIT(bit, bitmap) \
+	rte_bit_relaxed_test_and_clear32(bit, bitmap)
+
+#define OSAL_TEST_AND_FLIP_BIT(bit, bitmap) \
+	rte_bit_relaxed_test_and_flip32(bit, bitmap)
+
+#define OSAL_NON_ATOMIC_SET_BIT(bit, bitmap) \
+	rte_bit_relaxed_set32(bit, bitmap)
+
 u32 qede_find_first_bit(unsigned long *, u32);
 #define OSAL_FIND_FIRST_BIT(bitmap, length) \
 	qede_find_first_bit(bitmap, length)
@@ -342,7 +354,10 @@ u32 qede_find_first_zero_bit(u32 *bitmap, u32 length);
 #define OSAL_BITMAP_WEIGHT(bitmap, count) 0
 
 #define OSAL_LINK_UPDATE(hwfn) qed_link_update(hwfn)
+#define OSAL_BW_UPDATE(hwfn, ptt) nothing
 #define OSAL_TRANSCEIVER_UPDATE(hwfn) nothing
+#define OSAL_TRANSCEIVER_TX_FAULT(hwfn) nothing
+#define OSAL_TRANSCEIVER_RX_LOS(hwfn) nothing
 #define OSAL_DCBX_AEN(hwfn, mib_type) nothing
 
 /* SR-IOV channel */
@@ -366,6 +381,8 @@ void osal_vf_flr_update(struct ecore_hwfn *p_hwfn);
 #define OSAL_IOV_VF_MSG_TYPE(hwfn, vfid, vf_msg_type) nothing
 #define OSAL_IOV_PF_RESP_TYPE(hwfn, vfid, pf_resp_type) nothing
 #define OSAL_IOV_VF_VPORT_STOP(hwfn, vf) nothing
+#define OSAL_IOV_DB_REC_HANDLER(hwfn) nothing
+#define OSAL_IOV_BULLETIN_UPDATE(hwfn) nothing
 
 u32 qede_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
 		   u8 *input_buf, u32 max_size, u8 *unzip_buf);
@@ -412,16 +429,20 @@ u32 qede_osal_log2(u32);
 
 #define OFFSETOF(str, field) __builtin_offsetof(str, field)
 #define OSAL_ASSERT(is_assert) assert(is_assert)
+#define OSAL_WARN(condition, format, ...) (0)
 #define OSAL_BEFORE_PF_START(file, engine) nothing
 #define OSAL_AFTER_PF_STOP(file, engine) nothing
+#define OSAL_GCD(a, b)  (1)
 
 /* Endian macros */
 #define OSAL_CPU_TO_BE32(val) rte_cpu_to_be_32(val)
+#define OSAL_CPU_TO_BE16(val) rte_cpu_to_be_16(val)
 #define OSAL_BE32_TO_CPU(val) rte_be_to_cpu_32(val)
 #define OSAL_CPU_TO_LE32(val) rte_cpu_to_le_32(val)
 #define OSAL_CPU_TO_LE16(val) rte_cpu_to_le_16(val)
 #define OSAL_LE32_TO_CPU(val) rte_le_to_cpu_32(val)
 #define OSAL_LE16_TO_CPU(val) rte_le_to_cpu_16(val)
+#define OSAL_BE16_TO_CPU(val) rte_be_to_cpu_16(val)
 #define OSAL_CPU_TO_BE64(val) rte_cpu_to_be_64(val)
 
 #define OSAL_ARRAY_SIZE(arr) RTE_DIM(arr)
@@ -432,6 +453,7 @@ u32 qede_osal_log2(u32);
 #define OSAL_STRLEN(string) strlen(string)
 #define OSAL_STRCPY(dst, string) strcpy(dst, string)
 #define OSAL_STRNCPY(dst, string, len) strncpy(dst, string, len)
+#define OSAL_STRLCPY(dst, string, len) strlcpy(dst, string, len)
 #define OSAL_STRCMP(str1, str2) strcmp(str1, str2)
 #define OSAL_STRTOUL(str, base, res) 0
 
@@ -463,6 +485,7 @@ u32 qede_crc32(u32 crc, u8 *ptr, u32 length);
 #define OSAL_MFW_FILL_TLV_DATA(type, buf, data) (0)
 #define OSAL_HW_INFO_CHANGE(p_hwfn, change) nothing
 #define OSAL_MFW_CMD_PREEMPT(p_hwfn) nothing
+#define OSAL_NUM_FUNCS_IS_SET(p_hwfn) nothing
 #define OSAL_PF_VALIDATE_MODIFY_TUNN_CONFIG(p_hwfn, mask, b_update, tunn) 0
 
 #define OSAL_DIV_S64(a, b)	((a) / (b))
@@ -478,4 +501,10 @@ enum dbg_status	qed_dbg_alloc_user_data(struct ecore_hwfn *p_hwfn,
 	qed_dbg_alloc_user_data(p_hwfn, user_data_ptr)
 #define OSAL_DB_REC_OCCURRED(p_hwfn) nothing
 
+typedef int osal_va_list;
+#define OSAL_VA_START(A, B) UNUSED2(A, B)
+#define OSAL_VA_END(A) UNUSED1(A)
+#define OSAL_VSNPRINTF(A, ...) 0
+#define OSAL_INT_DBG_STORE(P_DEV, ...) nothing
+
 #endif /* __BCM_OSAL_H */
diff --git a/lib/librte_eal/include/rte_bitops.h b/lib/librte_eal/include/rte_bitops.h
index 141e8ea73..9a39dd13c 100644
--- a/lib/librte_eal/include/rte_bitops.h
+++ b/lib/librte_eal/include/rte_bitops.h
@@ -54,7 +54,7 @@ rte_bit_relaxed_get32(unsigned int nr, volatile uint32_t *addr)
 {
 	RTE_ASSERT(nr < 32);
 
-	uint32_t mask = UINT32_C(1) << nr;
+	uint32_t mask = RTE_BIT32(nr);
 	return (*addr) & mask;
 }
 
@@ -152,6 +152,32 @@ rte_bit_relaxed_test_and_clear32(unsigned int nr, volatile uint32_t *addr)
 	return val & mask;
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Return the original bit from a 32-bit value, then flip (toggle) it,
+ * without memory ordering.
+ *
+ * @param nr
+ *   The target bit to get and flip.
+ * @param addr
+ *   The address holding the bit.
+ * @return
+ *   The original bit.
+ */
+__rte_experimental
+static inline uint32_t
+rte_bit_relaxed_test_and_flip32(unsigned int nr, volatile uint32_t *addr)
+{
+	RTE_ASSERT(nr < 32);
+
+	uint32_t mask = RTE_BIT32(nr);
+	uint32_t val = *addr;
+	*addr = val ^ mask;
+	return val & mask;
+}
+
 /*------------------------ 64-bit relaxed operations ------------------------*/
 
 /**
@@ -271,4 +297,30 @@ rte_bit_relaxed_test_and_clear64(unsigned int nr, volatile uint64_t *addr)
 	return val & mask;
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Return the original bit from a 64-bit value, then flip (toggle) it,
+ * without memory ordering.
+ *
+ * @param nr
+ *   The target bit to get and flip.
+ * @param addr
+ *   The address holding the bit.
+ * @return
+ *   The original bit.
+ */
+__rte_experimental
+static inline uint64_t
+rte_bit_relaxed_test_and_flip64(unsigned int nr, volatile uint64_t *addr)
+{
+	RTE_ASSERT(nr < 64);
+
+	uint64_t mask = RTE_BIT64(nr);
+	uint64_t val = *addr;
+	*addr = val ^ mask;
+	return val & mask;
+}
+
 #endif /* _RTE_BITOPS_H_ */
-- 
2.18.0
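
The chained UNUSEDn helpers added above expand into a single comma
expression; a sketch (the example function is hypothetical):

	/* UNUSED2(a, b) -> ((void)(a), UNUSED1(b)) -> ((void)(a), (void)(b)) */
	static void example(osal_spinlock_t *lock, unsigned long flags)
	{
		/* Evaluates and discards both arguments in one statement,
		 * which is how stubbed-out macros such as
		 * OSAL_SPIN_LOCK_IRQSAVE() silence unused warnings.
		 */
		UNUSED2(lock, flags);
	}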


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH 4/7] net/qede/base: update base driver to 8.62.4.0
  2021-02-19 10:14 [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 2/7] net/qede/base: changes for HSI to support " Rasesh Mody
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 3/7] net/qede/base: add OS abstracted changes Rasesh Mody
@ 2021-02-19 10:14 ` Rasesh Mody
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 5/7] net/qede: changes for DMA page chain allocation and free Rasesh Mody
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Rasesh Mody @ 2021-02-19 10:14 UTC (permalink / raw)
  To: jerinj, ferruh.yigit; +Cc: Rasesh Mody, dev, GR-Everest-DPDK-Dev, Igor Russkikh

The patch updates the base driver to support the new hardware and the
new firmware it runs, while continuing to support the existing
hardware. The base driver is updated to version 8.62.4.0. There are
also some whitespace changes adhering to the new coding policy; a
sketch of the recurring chip-dispatch pattern follows the diffstat
below.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
---
 drivers/net/qede/base/ecore.h               |  550 +-
 drivers/net/qede/base/ecore_attn_values.h   |    3 +-
 drivers/net/qede/base/ecore_cxt.c           | 1184 +++--
 drivers/net/qede/base/ecore_cxt.h           |  140 +-
 drivers/net/qede/base/ecore_cxt_api.h       |   31 +-
 drivers/net/qede/base/ecore_dcbx.c          |  524 +-
 drivers/net/qede/base/ecore_dcbx.h          |   16 +-
 drivers/net/qede/base/ecore_dcbx_api.h      |   41 +-
 drivers/net/qede/base/ecore_dev.c           | 3865 ++++++++++----
 drivers/net/qede/base/ecore_dev_api.h       |  354 +-
 drivers/net/qede/base/ecore_gtt_reg_addr.h  |   93 +-
 drivers/net/qede/base/ecore_gtt_values.h    |    4 +-
 drivers/net/qede/base/ecore_hsi_common.h    |    2 +-
 drivers/net/qede/base/ecore_hw.c            |  381 +-
 drivers/net/qede/base/ecore_hw.h            |   55 +-
 drivers/net/qede/base/ecore_hw_defs.h       |   45 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c | 1224 +++--
 drivers/net/qede/base/ecore_init_fw_funcs.h |  457 +-
 drivers/net/qede/base/ecore_init_ops.c      |  143 +-
 drivers/net/qede/base/ecore_init_ops.h      |   19 +-
 drivers/net/qede/base/ecore_int.c           | 1313 +++--
 drivers/net/qede/base/ecore_int.h           |   60 +-
 drivers/net/qede/base/ecore_int_api.h       |  125 +-
 drivers/net/qede/base/ecore_iov_api.h       |  115 +-
 drivers/net/qede/base/ecore_iro.h           |  427 +-
 drivers/net/qede/base/ecore_iro_values.h    |  463 +-
 drivers/net/qede/base/ecore_l2.c            |  485 +-
 drivers/net/qede/base/ecore_l2.h            |   18 +-
 drivers/net/qede/base/ecore_l2_api.h        |  148 +-
 drivers/net/qede/base/ecore_mcp.c           | 2610 +++++++---
 drivers/net/qede/base/ecore_mcp.h           |  124 +-
 drivers/net/qede/base/ecore_mcp_api.h       |  467 +-
 drivers/net/qede/base/ecore_mng_tlv.c       |  910 ++--
 drivers/net/qede/base/ecore_proto_if.h      |   69 +-
 drivers/net/qede/base/ecore_rt_defs.h       |  895 ++--
 drivers/net/qede/base/ecore_sp_api.h        |    6 +-
 drivers/net/qede/base/ecore_sp_commands.c   |  138 +-
 drivers/net/qede/base/ecore_sp_commands.h   |   18 +-
 drivers/net/qede/base/ecore_spq.c           |  330 +-
 drivers/net/qede/base/ecore_spq.h           |   63 +-
 drivers/net/qede/base/ecore_sriov.c         | 1660 ++++--
 drivers/net/qede/base/ecore_sriov.h         |  146 +-
 drivers/net/qede/base/ecore_status.h        |    4 +-
 drivers/net/qede/base/ecore_utils.h         |   18 +-
 drivers/net/qede/base/ecore_vf.c            |  546 +-
 drivers/net/qede/base/ecore_vf.h            |   54 +-
 drivers/net/qede/base/ecore_vf_api.h        |   72 +-
 drivers/net/qede/base/ecore_vfpf_if.h       |  117 +-
 drivers/net/qede/base/eth_common.h          |  294 +-
 drivers/net/qede/base/mcp_public.h          | 2341 ++++++---
 drivers/net/qede/base/nvm_cfg.h             | 5059 ++++++++++---------
 drivers/net/qede/qede_debug.c               |   31 +-
 drivers/net/qede/qede_ethdev.c              |    2 +-
 drivers/net/qede/qede_rxtx.c                |    2 +-
 54 files changed, 18340 insertions(+), 9921 deletions(-)
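
A sketch of the chip-dispatch pattern that recurs throughout this
update (the PIS_PER_SB() wrapper is hypothetical; ECORE_IS_E4() and the
per-chip constants are from this series):

	/* E4 covers the older BB and AH devices, E5 is the new
	 * generation, so per-chip constants and helpers are selected
	 * with a single generation check.
	 */
	#define PIS_PER_SB(dev) \
		(ECORE_IS_E4(dev) ? PIS_PER_SB_E4 : PIS_PER_SB_E5)

	/* The same dispatch appears for doorbell addresses:
	 * DB_ADDR_VF(dev, cid, DEMS) selects DB_ADDR_VF_E4() or
	 * DB_ADDR_VF_E5() based on ECORE_IS_E4(dev).
	 */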

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 3fb8fd2ce..9801d5348 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -1,19 +1,28 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_H
 #define __ECORE_H
 
+#include <linux/if_ether.h>
+
+#define ECORE_ETH_ALEN ETH_ALEN
+#define ECORE_ETH_P_8021Q ETH_P_8021Q
+#define ECORE_ETH_P_8021AD ETH_P_8021AD
+#define UEFI
+
 /* @DPDK */
+#define CONFIG_ECORE_BINARY_FW
+#undef CONFIG_ECORE_ZIPPED_FW
+
+#ifdef CONFIG_ECORE_BINARY_FW
 #include <sys/stat.h>
 #include <fcntl.h>
 #include <unistd.h>
-
-#define CONFIG_ECORE_BINARY_FW
-#undef CONFIG_ECORE_ZIPPED_FW
+#endif
 
 #ifdef CONFIG_ECORE_ZIPPED_FW
 #include <zlib.h>
@@ -29,8 +38,8 @@
 #include "mcp_public.h"
 
 #define ECORE_MAJOR_VERSION		8
-#define ECORE_MINOR_VERSION		40
-#define ECORE_REVISION_VERSION		26
+#define ECORE_MINOR_VERSION		62
+#define ECORE_REVISION_VERSION		4
 #define ECORE_ENGINEERING_VERSION	0
 
 #define ECORE_VERSION							\
@@ -49,14 +58,23 @@
 #define ECORE_WFQ_UNIT	100
 #include "../qede_logs.h" /* @DPDK */
 
-#define ISCSI_BDQ_ID(_port_id) (_port_id)
-#define FCOE_BDQ_ID(_port_id) (_port_id + 2)
 /* Constants */
 #define ECORE_WID_SIZE		(1024)
 #define ECORE_MIN_WIDS		(4)
 
 /* Configurable */
 #define ECORE_PF_DEMS_SIZE	(4)
+#define ECORE_VF_DEMS_SIZE	(32)
+#define ECORE_MIN_DPIS		(4)  /* The minimal number of DPIs required to
+				      * load the driver. The number was
+				      * arbitrarily set.
+				      */
+/* Derived */
+#define ECORE_MIN_PWM_REGION	(ECORE_WID_SIZE * ECORE_MIN_DPIS)
+
+#define ECORE_CXT_PF_CID (0xff)
+
+#define ECORE_HW_STOP_RETRY_LIMIT	(10)
 
 /* cau states */
 enum ecore_coalescing_mode {
@@ -71,23 +89,29 @@ enum ecore_nvm_cmd {
 	ECORE_NVM_WRITE_NVRAM = DRV_MSG_CODE_NVM_WRITE_NVRAM,
 	ECORE_NVM_DEL_FILE = DRV_MSG_CODE_NVM_DEL_FILE,
 	ECORE_EXT_PHY_FW_UPGRADE = DRV_MSG_CODE_EXT_PHY_FW_UPGRADE,
-	ECORE_NVM_SET_SECURE_MODE = DRV_MSG_CODE_SET_SECURE_MODE,
 	ECORE_PHY_RAW_READ = DRV_MSG_CODE_PHY_RAW_READ,
 	ECORE_PHY_RAW_WRITE = DRV_MSG_CODE_PHY_RAW_WRITE,
 	ECORE_PHY_CORE_READ = DRV_MSG_CODE_PHY_CORE_READ,
 	ECORE_PHY_CORE_WRITE = DRV_MSG_CODE_PHY_CORE_WRITE,
+	ECORE_ENCRYPT_PASSWORD = DRV_MSG_CODE_ENCRYPT_PASSWORD,
 	ECORE_GET_MCP_NVM_RESP = 0xFFFFFF00
 };
 
-#ifndef LINUX_REMOVE
-#if !defined(CONFIG_ECORE_L2)
+#if !defined(CONFIG_ECORE_L2) && !defined(CONFIG_ECORE_ROCE) && \
+	!defined(CONFIG_ECORE_FCOE) && !defined(CONFIG_ECORE_ISCSI) && \
+	!defined(CONFIG_ECORE_IWARP) && !defined(CONFIG_ECORE_OOO)
 #define CONFIG_ECORE_L2
 #define CONFIG_ECORE_SRIOV
-#endif
+#define CONFIG_ECORE_ROCE
+#define CONFIG_ECORE_IWARP
+#define CONFIG_ECORE_FCOE
+#define CONFIG_ECORE_ISCSI
+#define CONFIG_ECORE_LL2
+#define CONFIG_ECORE_OOO
 #endif
 
 /* helpers */
-#ifndef __EXTRACT__LINUX__
+#ifndef __EXTRACT__LINUX__IF__
 #define MASK_FIELD(_name, _value)					\
 		((_value) &= (_name##_MASK))
 
@@ -103,14 +127,16 @@ do {									\
 #define GET_FIELD(value, name)						\
 	(((value) >> (name##_SHIFT)) & name##_MASK)
 
-#define GET_MFW_FIELD(name, field)				\
+#define GET_MFW_FIELD(name, field)					\
 	(((name) & (field ## _MASK)) >> (field ## _OFFSET))
 
 #define SET_MFW_FIELD(name, field, value)				\
 do {									\
-	(name) &= ~((field ## _MASK));		\
+	(name) &= ~(field ## _MASK);					\
 	(name) |= (((value) << (field ## _OFFSET)) & (field ## _MASK));	\
 } while (0)
+
+#define DB_ADDR_SHIFT(addr)             ((addr) << DB_PWM_ADDR_OFFSET_SHIFT)
 #endif
 
 static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
@@ -121,7 +147,7 @@ static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
 	return db_addr;
 }
 
-static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS)
+static OSAL_INLINE u32 DB_ADDR_VF_E4(u32 cid, u32 DEMS)
 {
 	u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) |
 		      FIELD_VALUE(DB_LEGACY_ADDR_ICID, cid);
@@ -129,6 +155,17 @@ static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS)
 	return db_addr;
 }
 
+static OSAL_INLINE u32 DB_ADDR_VF_E5(u32 cid, u32 DEMS)
+{
+	u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) |
+		      (cid * ECORE_VF_DEMS_SIZE);
+
+	return db_addr;
+}
+
+#define DB_ADDR_VF(dev, cid, DEMS) \
+	(ECORE_IS_E4(dev) ? DB_ADDR_VF_E4(cid, DEMS) : DB_ADDR_VF_E5(cid, DEMS))
+
 #define ALIGNED_TYPE_SIZE(type_name, p_hwfn)				  \
 	((sizeof(type_name) + (u32)(1 << (p_hwfn->p_dev->cache_shift)) - 1) & \
 	 ~((1 << (p_hwfn->p_dev->cache_shift)) - 1))
@@ -143,7 +180,84 @@ static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS)
 #endif
 #endif
 
-#ifndef __EXTRACT__LINUX__
+#ifndef __EXTRACT__LINUX__IF__
+#define ECORE_INT_DEBUG_SIZE_DEF     _MB(2)
+struct ecore_internal_trace {
+	char *buf;
+	u32 size;
+	u64 prod;
+	osal_spinlock_t lock;
+};
+
+#define ECORE_DP_INT_LOG_MAX_STR_SIZE 256
+#define ECORE_DP_INT_LOG_DEFAULT_MASK (0xffffc3ff)
+
+#ifndef UEFI
+/* Debug print definitions */
+#define DP_INT_LOG(P_DEV, LEVEL, MODULE, fmt, ...)		\
+do {								\
+	if (OSAL_UNLIKELY((P_DEV)->dp_int_level > (LEVEL)))	\
+		break;						\
+	if (OSAL_UNLIKELY((P_DEV)->dp_int_level == ECORE_LEVEL_VERBOSE) && \
+	    ((LEVEL) == ECORE_LEVEL_VERBOSE) &&			\
+	    ((P_DEV)->dp_int_module & (MODULE)) == 0)		\
+		break;						\
+								\
+	OSAL_INT_DBG_STORE(P_DEV, fmt,				\
+				__func__, __LINE__,		\
+				(P_DEV)->name ? (P_DEV)->name : "",	\
+				##__VA_ARGS__);		\
+} while (0)
+
+#define DP_ERR(P_DEV, fmt, ...)				\
+do {							\
+	DP_INT_LOG((P_DEV), ECORE_LEVEL_ERR, 0,		\
+		"ERR: [%s:%d(%s)]" fmt, ##__VA_ARGS__);	\
+	PRINT_ERR((P_DEV)->dp_ctx, "[%s:%d(%s)]" fmt,	\
+		  __func__, __LINE__,			\
+		  (P_DEV)->name ? (P_DEV)->name : "",	\
+		  ##__VA_ARGS__);			\
+} while (0)
+
+#define DP_NOTICE(P_DEV, is_assert, fmt, ...)				\
+do {									\
+	DP_INT_LOG((P_DEV), ECORE_LEVEL_NOTICE,	0,			\
+		 "NOTICE: [%s:%d(%s)]" fmt, ##__VA_ARGS__);		\
+	if (OSAL_UNLIKELY((P_DEV)->dp_level <= ECORE_LEVEL_NOTICE)) {	\
+		PRINT((P_DEV)->dp_ctx, "[%s:%d(%s)]" fmt,		\
+		      __func__, __LINE__,				\
+		      (P_DEV)->name ? (P_DEV)->name : "",		\
+		      ##__VA_ARGS__);					\
+		OSAL_ASSERT(!(is_assert));				\
+	}								\
+} while (0)
+
+#define DP_INFO(P_DEV, fmt, ...)				      \
+do {								      \
+	DP_INT_LOG((P_DEV), ECORE_LEVEL_INFO, 0,		      \
+		"INFO: [%s:%d(%s)]" fmt, ##__VA_ARGS__);	      \
+	if (OSAL_UNLIKELY((P_DEV)->dp_level <= ECORE_LEVEL_INFO)) {   \
+		PRINT((P_DEV)->dp_ctx, "[%s:%d(%s)]" fmt,	      \
+		      __func__, __LINE__,			      \
+		      (P_DEV)->name ? (P_DEV)->name : "",	      \
+		      ##__VA_ARGS__);				      \
+	}							      \
+} while (0)
+
+#define DP_VERBOSE(P_DEV, module, fmt, ...)				\
+do {									\
+	DP_INT_LOG((P_DEV), ECORE_LEVEL_VERBOSE, module,		\
+		"VERBOSE: [%s:%d(%s)]" fmt, ##__VA_ARGS__);		\
+	if (OSAL_UNLIKELY(((P_DEV)->dp_level <= ECORE_LEVEL_VERBOSE) &&	\
+	    ((P_DEV)->dp_module & module))) {				\
+		PRINT((P_DEV)->dp_ctx, "[%s:%d(%s)]" fmt,		\
+		      __func__, __LINE__,				\
+		      (P_DEV)->name ? (P_DEV)->name : "",		\
+		      ##__VA_ARGS__);					\
+	}								\
+} while (0)
+#endif
+
 enum DP_LEVEL {
 	ECORE_LEVEL_VERBOSE	= 0x0,
 	ECORE_LEVEL_INFO	= 0x1,
@@ -181,6 +295,7 @@ enum DP_MODULE {
 	ECORE_MSG_SP		= 0x100000,
 	ECORE_MSG_STORAGE	= 0x200000,
 	ECORE_MSG_OOO		= 0x200000,
+	ECORE_MSG_FS		= 0x400000,
 	ECORE_MSG_CXT		= 0x800000,
 	ECORE_MSG_LL2		= 0x1000000,
 	ECORE_MSG_ILT		= 0x2000000,
@@ -188,13 +303,49 @@ enum DP_MODULE {
 	ECORE_MSG_DEBUG		= 0x8000000,
 	/* to be added...up to 0x8000000 */
 };
+
+/**
+ * @brief Convert from 32b debug param to two params of level and module
+ *
+ * @param debug
+ * @param p_dp_module
+ * @param p_dp_level
+ * @return void
+ *
+ * @note Input 32b decoding:
+ *	 b31 - enable all NOTICE prints. NOTICE prints are for deviation from
+ *	 the 'happy' flow, e.g. memory allocation failed.
+ *	 b30 - enable all INFO prints. INFO prints are for major steps in the
+ *	 flow and provide important parameters.
+ *	 b29-b0 - per-module bitmap, where each bit enables VERBOSE prints of
+ *	 that module. VERBOSE prints are for tracking a specific flow at a
+ *	 low level.
+ *
+ *	 Notice that the level should be that of the lowest required logs.
+ */
+static OSAL_INLINE void ecore_config_debug(u32 debug, u32 *p_dp_module,
+					   u8 *p_dp_level)
+{
+	*p_dp_level = ECORE_LEVEL_NOTICE;
+	*p_dp_module = 0;
+
+	if (debug & ECORE_LOG_VERBOSE_MASK) {
+		*p_dp_level = ECORE_LEVEL_VERBOSE;
+		*p_dp_module = (debug & 0x3FFFFFFF);
+	} else if (debug & ECORE_LOG_INFO_MASK) {
+		*p_dp_level = ECORE_LEVEL_INFO;
+	} else if (debug & ECORE_LOG_NOTICE_MASK) {
+		*p_dp_level = ECORE_LEVEL_NOTICE;
+	}
+}
+
 #endif
 
 #define for_each_hwfn(p_dev, i)	for (i = 0; i < p_dev->num_hwfns; i++)
 
 #define D_TRINE(val, cond1, cond2, true1, true2, def) \
-	(val == (cond1) ? true1 : \
-	 (val == (cond2) ? true2 : def))
+	((val) == (cond1) ? (true1) : \
+	 ((val) == (cond2) ? (true2) : (def)))
 
 /* forward */
 struct ecore_ptt_pool;
@@ -210,6 +361,8 @@ struct ecore_igu_info;
 struct ecore_mcp_info;
 struct ecore_dcbx_info;
 struct ecore_llh_info;
+struct ecore_fs_info_e4;
+struct ecore_fs_info_e5;
 
 struct ecore_rt_data {
 	u32	*init_val;
@@ -233,6 +386,13 @@ enum ecore_tunn_clss {
 	MAX_ECORE_TUNN_CLSS,
 };
 
+#ifndef __EXTRACT__LINUX__IF__
+enum ecore_tcp_ip_version {
+	ECORE_TCP_IPV4,
+	ECORE_TCP_IPV6,
+};
+#endif
+
 struct ecore_tunn_update_type {
 	bool b_update_mode;
 	bool b_mode_enabled;
@@ -256,6 +416,9 @@ struct ecore_tunnel_info {
 
 	bool b_update_rx_cls;
 	bool b_update_tx_cls;
+
+	bool update_non_l2_vxlan;
+	bool non_l2_vxlan_enable;
 };
 
 /* The PCI personality is not quite synonymous to protocol ID:
@@ -279,6 +442,15 @@ struct ecore_qm_iids {
 	u32 tids;
 };
 
+/* The PCI relax ordering is either taken care by management FW or can be
+ * enable/disable by ecore client.
+ */
+enum ecore_pci_rlx_odr {
+	ECORE_DEFAULT_RLX_ODR,
+	ECORE_ENABLE_RLX_ODR,
+	ECORE_DISABLE_RLX_ODR
+};
+
 #define MAX_PF_PER_PORT 8
 
 /* HW / FW resources, output of features supported below, most information
@@ -292,12 +464,16 @@ enum ecore_resources {
 	ECORE_RL,
 	ECORE_MAC,
 	ECORE_VLAN,
+	ECORE_VF_RDMA_CNQ_RAM,
 	ECORE_RDMA_CNQ_RAM,
 	ECORE_ILT,
-	ECORE_LL2_QUEUE,
+	ECORE_LL2_RAM_QUEUE,
+	ECORE_LL2_CTX_QUEUE,
 	ECORE_CMDQS_CQS,
 	ECORE_RDMA_STATS_QUEUE,
 	ECORE_BDQ,
+	ECORE_VF_MAC_ADDR,
+	ECORE_GFS_PROFILE,
 
 	/* This is needed only internally for matching against the IGU.
 	 * In case of legacy MFW, would be set to `0'.
@@ -317,6 +493,7 @@ enum ecore_feature {
 	ECORE_EXTRA_VF_QUE,
 	ECORE_VMQ,
 	ECORE_RDMA_CNQ,
+	ECORE_VF_RDMA_CNQ,
 	ECORE_ISCSI_CQ,
 	ECORE_FCOE_CQ,
 	ECORE_VF_L2_QUE,
@@ -335,6 +512,11 @@ enum ecore_port_mode {
 	ECORE_PORT_MODE_DE_1X25G,
 	ECORE_PORT_MODE_DE_4X25G,
 	ECORE_PORT_MODE_DE_2X10G,
+	ECORE_PORT_MODE_DE_2X50G_R1,
+	ECORE_PORT_MODE_DE_4X50G_R1,
+	ECORE_PORT_MODE_DE_1X100G_R2,
+	ECORE_PORT_MODE_DE_2X100G_R2,
+	ECORE_PORT_MODE_DE_1X100G_R4,
 };
 
 enum ecore_dev_cap {
@@ -345,7 +527,7 @@ enum ecore_dev_cap {
 	ECORE_DEV_CAP_IWARP
 };
 
-#ifndef __EXTRACT__LINUX__
+#ifndef __EXTRACT__LINUX__IF__
 enum ecore_hw_err_type {
 	ECORE_HW_ERR_FAN_FAIL,
 	ECORE_HW_ERR_MFW_RESP_FAIL,
@@ -356,10 +538,9 @@ enum ecore_hw_err_type {
 };
 #endif
 
-enum ecore_db_rec_exec {
-	DB_REC_DRY_RUN,
-	DB_REC_REAL_DEAL,
-	DB_REC_ONCE,
+enum ecore_wol_support {
+	ECORE_WOL_SUPPORT_NONE,
+	ECORE_WOL_SUPPORT_PME,
 };
 
 struct ecore_hw_info {
@@ -382,7 +563,9 @@ struct ecore_hw_info {
 	((dev)->hw_info.personality == ECORE_PCI_FCOE)
 #define ECORE_IS_ISCSI_PERSONALITY(dev) \
 	((dev)->hw_info.personality == ECORE_PCI_ISCSI)
-
+#define ECORE_IS_NVMETCP_PERSONALITY(dev) \
+	((dev)->hw_info.personality == ECORE_PCI_ISCSI && \
+	 (dev)->is_nvmetcp)
 	/* Resource Allocation scheme results */
 	u32 resc_start[ECORE_MAX_RESC];
 	u32 resc_num[ECORE_MAX_RESC];
@@ -397,23 +580,22 @@ struct ecore_hw_info {
 	/* Amount of traffic classes HW supports */
 	u8 num_hw_tc;
 
-/* Amount of TCs which should be active according to DCBx or upper layer driver
- * configuration
- */
-
+/* Amount of TCs which should be active according to DCBx or upper layer driver configuration */
 	u8 num_active_tc;
 
 	/* The traffic class used by PF for it's offloaded protocol */
 	u8 offload_tc;
+	bool offload_tc_set;
+
+	bool multi_tc_roce_en;
+#define IS_ECORE_MULTI_TC_ROCE(p_hwfn) (!!((p_hwfn)->hw_info.multi_tc_roce_en))
 
 	u32 concrete_fid;
 	u16 opaque_fid;
 	u16 ovlan;
 	u32 part_num[4];
 
-	unsigned char hw_mac_addr[ETH_ALEN];
-	u64 node_wwn; /* For FCoE only */
-	u64 port_wwn; /* For FCoE only */
+	unsigned char hw_mac_addr[ECORE_ETH_ALEN];
 
 	u16 num_iscsi_conns;
 	u16 num_fcoe_conns;
@@ -424,12 +606,16 @@ struct ecore_hw_info {
 
 	u32 port_mode;
 	u32 hw_mode;
-	u32 device_capabilities;
+	u32 device_capabilities; /* @DPDK */
 
+#ifndef __EXTRACT__LINUX__THROW__
 	/* Default DCBX mode */
 	u8 dcbx_mode;
+#endif
 
 	u16 mtu;
+
+	enum ecore_wol_support		b_wol_support;
 };
 
 /* maximun size of read/write commands (HW limit) */
@@ -470,38 +656,71 @@ struct ecore_wfq_data {
 
 #define OFLD_GRP_SIZE 4
 
+struct ecore_offload_pq {
+	u8 port;
+	u8 tc;
+};
+
 struct ecore_qm_info {
 	struct init_qm_pq_params    *qm_pq_params;
 	struct init_qm_vport_params *qm_vport_params;
 	struct init_qm_port_params  *qm_port_params;
 	u16			start_pq;
-	u8			start_vport;
+	u16			start_vport;
+	u16			start_rl;
 	u16			pure_lb_pq;
-	u16			offload_pq;
+	u16			first_ofld_pq;
+	u16			first_llt_pq;
 	u16			pure_ack_pq;
 	u16			ooo_pq;
+	u16			single_vf_rdma_pq;
 	u16			first_vf_pq;
 	u16			first_mcos_pq;
 	u16			first_rl_pq;
+	u16			first_ofld_grp_pq;
 	u16			num_pqs;
 	u16			num_vf_pqs;
+	u16			ilt_pf_pqs;
 	u8			num_vports;
+	u8			num_rls;
 	u8			max_phys_tcs_per_port;
 	u8			ooo_tc;
+	bool			pq_overflow;
 	bool			pf_rl_en;
 	bool			pf_wfq_en;
 	bool			vport_rl_en;
 	bool			vport_wfq_en;
+	bool			vf_rdma_en;
+#define IS_ECORE_QM_VF_RDMA(_p_hwfn) ((_p_hwfn)->qm_info.vf_rdma_en)
 	u8			pf_wfq;
 	u32			pf_rl;
 	struct ecore_wfq_data	*wfq_data;
 	u8			num_pf_rls;
+	struct ecore_offload_pq offload_group[OFLD_GRP_SIZE];
+	u8			offload_group_count;
+#define IS_ECORE_OFLD_GRP(p_hwfn) ((p_hwfn)->qm_info.offload_group_count > 0)
+
+	/* Locks PQ getters against QM info initialization */
+	osal_spinlock_t		qm_info_lock;
 };
 
+#define ECORE_OVERFLOW_BIT	1
+
 struct ecore_db_recovery_info {
-	osal_list_t list;
-	osal_spinlock_t lock;
-	u32 db_recovery_counter;
+	osal_list_t	list;
+	osal_spinlock_t	lock;
+	u32		count;
+
+	/* PF doorbell overflow sticky indicator was cleared in the DORQ
+	 * attention callback, but still needs to execute doorbell recovery.
+	 * Full (REAL_DEAL) doorbell recovery is executed in the periodic
+	 * handler.
+	 * This value doesn't require a lock but must use atomic operations.
+	 */
+	u32		overflow; /* @DPDK */
+
+	/* Indicates that DORQ attention was handled in ecore_int_deassertion */
+	bool		dorq_attn;
 };
 
 struct storm_stats {
@@ -553,18 +772,32 @@ enum ecore_mf_mode_bit {
 	/* Use stag for steering */
 	ECORE_MF_8021AD_TAGGING,
 
+	/* Allow DSCP to TC mapping */
+	ECORE_MF_DSCP_TO_TC_MAP,
+
 	/* Allow FIP discovery fallback */
 	ECORE_MF_FIP_SPECIAL,
+
+	/* Do not insert a vlan tag with id 0 */
+	ECORE_MF_DONT_ADD_VLAN0_TAG,
+
+	/* Allow VF RDMA */
+	ECORE_MF_VF_RDMA,
+
+	/* Allow RoCE LAG */
+	ECORE_MF_ROCE_LAG,
 };
 
 enum ecore_ufp_mode {
 	ECORE_UFP_MODE_ETS,
 	ECORE_UFP_MODE_VNIC_BW,
+	ECORE_UFP_MODE_UNKNOWN
 };
 
 enum ecore_ufp_pri_type {
 	ECORE_UFP_PRI_OS,
-	ECORE_UFP_PRI_VNIC
+	ECORE_UFP_PRI_VNIC,
+	ECORE_UFP_PRI_UNKNOWN
 };
 
 struct ecore_ufp_info {
@@ -578,16 +811,85 @@ enum BAR_ID {
 	BAR_ID_1	/* Used for doorbells */
 };
 
+#ifndef __EXTRACT__LINUX__IF__
+enum ecore_lag_type {
+	ECORE_LAG_TYPE_NONE,
+	ECORE_LAG_TYPE_ACTIVEACTIVE,
+	ECORE_LAG_TYPE_ACTIVEBACKUP
+};
+#endif
+
 struct ecore_nvm_image_info {
 	u32				num_images;
 	struct bist_nvm_image_att	*image_att;
 	bool				valid;
 };
 
+#define LAG_MAX_PORT_NUM	2
+
+struct ecore_lag_info {
+	enum ecore_lag_type	lag_type;
+	void			(*link_change_cb)(void *cxt);
+	void			*cxt;
+	u8			port_num;
+	u32			active_ports; /* @DPDK */
+	u8			first_port;
+	u8			second_port;
+	bool			is_master;
+	u8			master_pf;
+};
+
+/* PWM region specific data */
+struct ecore_dpi_info {
+	u16			wid_count;
+	u32			dpi_size;
+	u32			dpi_count;
+	u32			dpi_start_offset; /* this is used to calculate
+						   * the doorbell address
+						   */
+	u32			dpi_bit_shift_addr;
+};
+
+struct ecore_common_dpm_info {
+	u8			db_bar_no_edpm;
+	u8			mfw_no_edpm;
+	bool			vf_cfg;
+};
+
+enum ecore_hsi_def_type {
+	ECORE_HSI_DEF_MAX_NUM_VFS,
+	ECORE_HSI_DEF_MAX_NUM_L2_QUEUES,
+	ECORE_HSI_DEF_MAX_NUM_PORTS,
+	ECORE_HSI_DEF_MAX_SB_PER_PATH,
+	ECORE_HSI_DEF_MAX_NUM_PFS,
+	ECORE_HSI_DEF_MAX_NUM_VPORTS,
+	ECORE_HSI_DEF_NUM_ETH_RSS_ENGINE,
+	ECORE_HSI_DEF_MAX_QM_TX_QUEUES,
+	ECORE_HSI_DEF_NUM_PXP_ILT_RECORDS,
+	ECORE_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS,
+	ECORE_HSI_DEF_MAX_QM_GLOBAL_RLS,
+	ECORE_HSI_DEF_MAX_PBF_CMD_LINES,
+	ECORE_HSI_DEF_MAX_BTB_BLOCKS,
+	ECORE_NUM_HSI_DEFS
+};
+
+enum ecore_rx_config_flags {
+	ECORE_RX_CONF_SKIP_ACCEPT_FLAGS_UPDATE,
+	ECORE_RX_CONF_SKIP_UCAST_FILTER_UPDATE,
+	ECORE_RX_CONF_SET_LB_VPORT
+};
+
+struct ecore_rx_config {
+	u32			flags; /* @DPDK */
+	u8			loopback_dst_vport_id;
+};
+
 struct ecore_hwfn {
 	struct ecore_dev		*p_dev;
 	u8				my_id;		/* ID inside the PF */
-#define IS_LEAD_HWFN(edev)		(!((edev)->my_id))
+#define IS_LEAD_HWFN(_p_hwfn)		(!((_p_hwfn)->my_id))
+#define IS_AFFIN_HWFN(_p_hwfn) \
+	((_p_hwfn) == ECORE_AFFIN_HWFN((_p_hwfn)->p_dev))
 	u8				rel_pf_id;	/* Relative to engine*/
 	u8				abs_pf_id;
 #define ECORE_PATH_ID(_p_hwfn) \
@@ -597,10 +899,11 @@ struct ecore_hwfn {
 
 	u32				dp_module;
 	u8				dp_level;
+	u32				dp_int_module;
+	u8				dp_int_level;
 	char				name[NAME_SIZE];
 	void				*dp_ctx;
 
-	bool				first_on_engine;
 	bool				hw_init_done;
 
 	u8				num_funcs_on_engine;
@@ -612,6 +915,8 @@ struct ecore_hwfn {
 	void OSAL_IOMEM			*doorbells;
 	u64				db_phys_addr;
 	unsigned long			db_size;
+	u64				reg_offset;
+	u64				db_offset;
 
 	/* PTT pool */
 	struct ecore_ptt_pool		*p_ptt_pool;
@@ -638,6 +943,11 @@ struct ecore_hwfn {
 	struct ecore_ptt		*p_main_ptt;
 	struct ecore_ptt		*p_dpc_ptt;
 
+	/* PTP will be used only by the leading function.
+	 * Usage of all PTP-apis should be synchronized as result.
+	 */
+	struct ecore_ptt		*p_ptp_ptt;
+
 	struct ecore_sb_sp_info		*p_sp_sb;
 	struct ecore_sb_attn_info	*p_sb_attn;
 
@@ -649,6 +959,7 @@ struct ecore_hwfn {
 	struct ecore_fcoe_info		*p_fcoe_info;
 	struct ecore_rdma_info		*p_rdma_info;
 	struct ecore_pf_params		pf_params;
+	bool				is_nvmetcp;
 
 	bool				b_rdma_enabled_in_prs;
 	u32				rdma_prs_search_reg;
@@ -673,8 +984,8 @@ struct ecore_hwfn {
 	/* QM init */
 	struct ecore_qm_info		qm_info;
 
-#ifdef CONFIG_ECORE_ZIPPED_FW
 	/* Buffer for unzipping firmware data */
+#ifdef CONFIG_ECORE_ZIPPED_FW
 	void *unzip_buf;
 #endif
 
@@ -682,19 +993,12 @@ struct ecore_hwfn {
 	void				*dbg_user_info;
 	struct virt_mem_desc		dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE];
 
-	struct z_stream_s		*stream;
-
-	/* PWM region specific data */
-	u32				dpi_size;
-	u32				dpi_count;
-	u32				dpi_start_offset; /* this is used to
-							   * calculate th
-							   * doorbell address
-							   */
+	struct ecore_dpi_info		dpi_info;
 
-	/* If one of the following is set then EDPM shouldn't be used */
+	struct ecore_common_dpm_info	dpm_info;
+	u8				roce_edpm_mode;
 	u8				dcbx_no_edpm;
-	u8				db_bar_no_edpm;
+	u8				num_vf_cnqs;
 
 	/* L2-related */
 	struct ecore_l2_info		*p_l2_info;
@@ -708,11 +1012,26 @@ struct ecore_hwfn {
 	 * struct ecore_hw_prepare_params by ecore client.
 	 */
 	bool b_en_pacing;
+	struct ecore_lag_info		lag_info;
 
 	/* Nvm images number and attributes */
-	struct ecore_nvm_image_info     nvm_info;
+	struct ecore_nvm_image_info	nvm_info;
+
+	/* Flow steering info */
+	union {
+		struct ecore_fs_info_e4	*e4;
+		struct ecore_fs_info_e5	*e5;
+		void			*info;
+	} fs_info;
 
-	struct phys_mem_desc            *fw_overlay_mem;
+	/* Flow steering statistics accuracy */
+	u8				fs_accuracy;
+
+	struct phys_mem_desc		*fw_overlay_mem;
+	enum _ecore_status_t		(*p_dummy_cb)
+					(struct ecore_hwfn *p_hwfn, void *cookie);
+	/* Rx configuration */
+	struct ecore_rx_config		rx_conf;
 
 	/* @DPDK */
 	struct ecore_ptt		*p_arfs_ptt;
@@ -722,18 +1041,22 @@ struct ecore_hwfn {
 	u32 iov_task_flags;
 };
 
+#ifndef __EXTRACT__LINUX__THROW__
 enum ecore_mf_mode {
 	ECORE_MF_DEFAULT,
 	ECORE_MF_OVLAN,
 	ECORE_MF_NPAR,
 	ECORE_MF_UFP,
 };
+#endif
 
+#ifndef __EXTRACT__LINUX__IF__
 enum ecore_dev_type {
 	ECORE_DEV_TYPE_BB,
 	ECORE_DEV_TYPE_AH,
 	ECORE_DEV_TYPE_E5,
 };
+#endif
 
 /* @DPDK */
 enum ecore_dbg_features {
@@ -765,7 +1088,12 @@ struct ecore_dev {
 	u8				dp_level;
 	char				name[NAME_SIZE];
 	void				*dp_ctx;
+	struct ecore_internal_trace	internal_trace;
+	u8				dp_int_level;
+	u32				dp_int_module;
 
+/* Lets the DP_* macros work with cdev, hwfn, etc. */
+	struct ecore_dev		*p_dev;
 	enum ecore_dev_type		type;
 /* Translate type/revision combo into the proper conditions */
 #define ECORE_IS_BB(dev)	((dev)->type == ECORE_DEV_TYPE_BB)
@@ -781,6 +1109,8 @@ struct ecore_dev {
 #define ECORE_IS_E4(dev)	(ECORE_IS_BB(dev) || ECORE_IS_AH(dev))
 #define ECORE_IS_E5(dev)	((dev)->type == ECORE_DEV_TYPE_E5)
 
+#define ECORE_E5_MISSING_CODE	OSAL_BUILD_BUG_ON(false)
+
 	u16 vendor_id;
 	u16 device_id;
 #define ECORE_DEV_ID_MASK	0xff00
@@ -829,21 +1159,18 @@ struct ecore_dev {
 #define CHIP_BOND_ID_MASK		0xff
 #define CHIP_BOND_ID_SHIFT		0
 
-	u8				num_engines;
 	u8				num_ports;
 	u8				num_ports_in_engine;
-	u8				num_funcs_in_port;
 
 	u8				path_id;
 
-	u32				mf_bits;
+	u32				mf_bits; /* @DPDK */
+#ifndef __EXTRACT__LINUX__THROW__
 	enum ecore_mf_mode		mf_mode;
-#define IS_MF_DEFAULT(_p_hwfn)	\
-	(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_DEFAULT)
-#define IS_MF_SI(_p_hwfn)	\
-	(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_NPAR)
-#define IS_MF_SD(_p_hwfn)	\
-	(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_OVLAN)
+#define IS_MF_DEFAULT(_p_hwfn)	(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_DEFAULT)
+#define IS_MF_SI(_p_hwfn)	(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_NPAR)
+#define IS_MF_SD(_p_hwfn)	(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_OVLAN)
+#endif
 
 	int				pcie_width;
 	int				pcie_speed;
@@ -852,7 +1179,9 @@ struct ecore_dev {
 	u8				mcp_rev;
 	u8				boot_mode;
 
-	u8				wol;
+	/* WoL related configurations */
+	u8				wol_config;
+	u8				wol_mac[ECORE_ETH_ALEN];
 
 	u32				int_mode;
 	enum ecore_coalescing_mode	int_coalescing_mode;
@@ -907,6 +1236,7 @@ struct ecore_dev {
 	u32				rdma_max_sge;
 	u32				rdma_max_inline;
 	u32				rdma_max_srq_sge;
+	u8				ilt_page_size;
 
 	struct ecore_eth_stats		*reset_stats;
 	struct ecore_fw_data		*fw_data;
@@ -916,8 +1246,7 @@ struct ecore_dev {
 	/* Recovery */
 	bool				recov_in_prog;
 
-/* Indicates whether should prevent attentions from being reasserted */
-
+	/* Indicates whether should prevent attentions from being reasserted */
 	bool				attn_clr_en;
 
 	/* Indicates whether allowing the MFW to collect a crash dump */
@@ -926,6 +1255,9 @@ struct ecore_dev {
 	/* Indicates if the reg_fifo is checked after any register access */
 	bool				chk_reg_fifo;
 
+	/* Indicates the monitored address by ecore_rd()/ecore_wr() */
+	u32				monitored_hw_addr;
+
 #ifndef ASIC_ONLY
 	bool				b_is_emul_full;
 	bool				b_is_emul_mac;
@@ -937,6 +1269,9 @@ struct ecore_dev {
 	/* Indicates whether this PF serves a storage target */
 	bool				b_is_target;
 
+	/* Instruct driver to read statistics from the specified bin id */
+	u16				stats_bin_id;
+
 #ifdef CONFIG_ECORE_BINARY_FW /* @DPDK */
 	void				*firmware;
 	u64				fw_len;
@@ -952,23 +1287,6 @@ struct ecore_dev {
 	struct rte_pci_device		*pci_dev;
 };
 
-enum ecore_hsi_def_type {
-	ECORE_HSI_DEF_MAX_NUM_VFS,
-	ECORE_HSI_DEF_MAX_NUM_L2_QUEUES,
-	ECORE_HSI_DEF_MAX_NUM_PORTS,
-	ECORE_HSI_DEF_MAX_SB_PER_PATH,
-	ECORE_HSI_DEF_MAX_NUM_PFS,
-	ECORE_HSI_DEF_MAX_NUM_VPORTS,
-	ECORE_HSI_DEF_NUM_ETH_RSS_ENGINE,
-	ECORE_HSI_DEF_MAX_QM_TX_QUEUES,
-	ECORE_HSI_DEF_NUM_PXP_ILT_RECORDS,
-	ECORE_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS,
-	ECORE_HSI_DEF_MAX_QM_GLOBAL_RLS,
-	ECORE_HSI_DEF_MAX_PBF_CMD_LINES,
-	ECORE_HSI_DEF_MAX_BTB_BLOCKS,
-	ECORE_NUM_HSI_DEFS
-};
-
 u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev,
 			  enum ecore_hsi_def_type type);
 
@@ -1039,6 +1357,11 @@ int ecore_device_num_ports(struct ecore_dev *p_dev);
 void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 			   u8 *mac);
 
+#define ECORE_TOS_ECN_SHIFT	0
+#define ECORE_TOS_ECN_MASK	0x3
+#define ECORE_TOS_DSCP_SHIFT	2
+#define ECORE_TOS_DSCP_MASK	0x3f
+
 /* Flags for indication of required queues */
 #define PQ_FLAGS_RLS	(1 << 0)
 #define PQ_FLAGS_MCOS	(1 << 1)
@@ -1046,26 +1369,50 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
 #define PQ_FLAGS_OOO	(1 << 3)
 #define PQ_FLAGS_ACK	(1 << 4)
 #define PQ_FLAGS_OFLD	(1 << 5)
-#define PQ_FLAGS_VFS	(1 << 6)
-#define PQ_FLAGS_LLT	(1 << 7)
+#define PQ_FLAGS_GRP	(1 << 6)
+#define PQ_FLAGS_VFS	(1 << 7)
+#define PQ_FLAGS_LLT	(1 << 8)
+#define PQ_FLAGS_MTC	(1 << 9)
+#define PQ_FLAGS_VFR	(1 << 10)
+#define PQ_FLAGS_VSR	(1 << 11)
 
 /* physical queue index for cm context intialization */
 u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags);
 u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc);
 u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf);
+u16 ecore_get_cm_pq_idx_vf_rdma(struct ecore_hwfn *p_hwfn, u16 vf);
+
 u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl);
+u16 ecore_get_cm_pq_idx_grp(struct ecore_hwfn *p_hwfn, u8 idx);
+u16 ecore_get_cm_pq_idx_ofld_mtc(struct ecore_hwfn *p_hwfn, u16 idx, u8 tc);
+u16 ecore_get_cm_pq_idx_llt_mtc(struct ecore_hwfn *p_hwfn, u16 idx, u8 tc);
+u16 ecore_get_cm_pq_idx_ll2(struct ecore_hwfn *p_hwfn, u8 tc);
 
-/* qm vport for rate limit configuration */
-u16 ecore_get_qm_vport_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl);
+/* qm vport/rl for rate limit configuration */
+u16 ecore_get_pq_vport_id_from_rl(struct ecore_hwfn *p_hwfn, u16 rl);
+u16 ecore_get_pq_vport_id_from_vf(struct ecore_hwfn *p_hwfn, u16 vf);
+u16 ecore_get_pq_rl_id_from_rl(struct ecore_hwfn *p_hwfn, u16 rl);
+u16 ecore_get_pq_rl_id_from_vf(struct ecore_hwfn *p_hwfn, u16 vf);
 
 const char *ecore_hw_get_resc_name(enum ecore_resources res_id);
 
 /* doorbell recovery mechanism */
 void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn);
-void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn,
-			       enum ecore_db_rec_exec);
-
-bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn);
+void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn);
+enum _ecore_status_t
+ecore_db_rec_flush_queue(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			 u32 usage_cnt_reg, u32 *count);
+#define ECORE_DB_REC_COUNT	1000
+#define ECORE_DB_REC_INTERVAL	100
+
+bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn,
+			struct ecore_common_dpm_info *dpm_info);
+
+enum _ecore_status_t ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    struct ecore_dpi_info *dpi_info,
+					    u32 pwm_region_size,
+					    u32 n_cpus);
 
 /* amount of resources used in qm init */
 u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn);
@@ -1074,9 +1421,18 @@ u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn);
 u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn);
 u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
 
+void ecore_hw_info_set_offload_tc(struct ecore_hw_info *p_info, u8 tc);
+u8 ecore_get_offload_tc(struct ecore_hwfn *p_hwfn);
+
 #define MFW_PORT(_p_hwfn)	((_p_hwfn)->abs_pf_id % \
 				 ecore_device_num_ports((_p_hwfn)->p_dev))
 
+enum _ecore_status_t ecore_abs_ppfid(struct ecore_dev *p_dev, u8 rel_ppfid,
+				     u8 *p_abs_ppfid);
+enum _ecore_status_t ecore_llh_map_ppfid_to_pfid(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u8 ppfid, u8 pfid);
+
 /* The PFID<->PPFID calculation is based on the relative index of a PF on its
  * port. In BB there is a bug in the LLH in which the PPFID is actually engine
  * based, and thus it equals the PFID.
@@ -1110,6 +1466,12 @@ enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev);
 void ecore_set_platform_str(struct ecore_hwfn *p_hwfn,
 			    char *buf_str, u32 buf_size);
 
+#define LNX_STATIC
+#define IFDEF_HAS_IFLA_VF_RATE
+#define ENDIF_HAS_IFLA_VF_RATE
+#define IFDEF_DEFINE_IFLA_VF_SPOOFCHK
+#define ENDIF_DEFINE_IFLA_VF_SPOOFCHK
+
 #define TSTORM_QZONE_START	PXP_VF_BAR0_START_SDM_ZONE_A
 #define TSTORM_QZONE_SIZE(dev) \
 	(ECORE_IS_E4(dev) ? TSTORM_QZONE_SIZE_E4 : TSTORM_QZONE_SIZE_E5)
diff --git a/drivers/net/qede/base/ecore_attn_values.h b/drivers/net/qede/base/ecore_attn_values.h
index ec773fbdd..ff4fc5c38 100644
--- a/drivers/net/qede/base/ecore_attn_values.h
+++ b/drivers/net/qede/base/ecore_attn_values.h
@@ -1,7 +1,8 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
 
 #ifndef __ATTN_VALUES_H__
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 221cec22b..c22f97f7d 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "reg_addr.h"
 #include "common_hsi.h"
@@ -37,13 +37,11 @@
 #define TM_ELEM_SIZE	4
 
 /* ILT constants */
-#define ILT_DEFAULT_HW_P_SIZE	4
-
 #define ILT_PAGE_IN_BYTES(hw_p_size)	(1U << ((hw_p_size) + 12))
 #define ILT_CFG_REG(cli, reg)		PSWRQ2_REG_##cli##_##reg##_RT_OFFSET
 
 /* ILT entry structure */
-#define ILT_ENTRY_PHY_ADDR_MASK		0x000FFFFFFFFFFFULL
+#define ILT_ENTRY_PHY_ADDR_MASK		(~0ULL >> 12)
 #define ILT_ENTRY_PHY_ADDR_SHIFT	0
 #define ILT_ENTRY_VALID_MASK		0x1ULL
 #define ILT_ENTRY_VALID_SHIFT		52
@@ -81,11 +79,11 @@ union e5_type1_task_context {
 };
 
 struct src_ent {
-	u8 opaque[56];
+	u8  opaque[56];
 	u64 next;
 };
 
-#define CDUT_SEG_ALIGNMET 3	/* in 4k chunks */
+#define CDUT_SEG_ALIGNMET 3 /* in 4k chunks */
 #define CDUT_SEG_ALIGNMET_IN_BYTES (1 << (CDUT_SEG_ALIGNMET + 12))
 
 #define CONN_CXT_SIZE(p_hwfn) \
@@ -93,8 +91,6 @@ struct src_ent {
 	 ALIGNED_TYPE_SIZE(union e4_conn_context, (p_hwfn)) : \
 	 ALIGNED_TYPE_SIZE(union e5_conn_context, (p_hwfn)))
 
-#define SRQ_CXT_SIZE (sizeof(struct regpair) * 8) /* @DPDK */
-
 #define TYPE0_TASK_CXT_SIZE(p_hwfn) \
 	(ECORE_IS_E4(((p_hwfn)->p_dev)) ? \
 	 ALIGNED_TYPE_SIZE(union e4_type0_task_context, (p_hwfn)) : \
@@ -117,9 +113,12 @@ static bool src_proto(enum protocol_type type)
 		type == PROTOCOLID_IWARP;
 }
 
-static OSAL_INLINE bool tm_cid_proto(enum protocol_type type)
+static bool tm_cid_proto(enum protocol_type type)
 {
-	return type == PROTOCOLID_TOE;
+	return type == PROTOCOLID_ISCSI ||
+	       type == PROTOCOLID_FCOE  ||
+	       type == PROTOCOLID_ROCE  ||
+	       type == PROTOCOLID_IWARP;
 }
 
 static bool tm_tid_proto(enum protocol_type type)
@@ -133,8 +132,8 @@ struct ecore_cdu_iids {
 	u32 per_vf_cids;
 };
 
-static void ecore_cxt_cdu_iids(struct ecore_cxt_mngr *p_mngr,
-			       struct ecore_cdu_iids *iids)
+static void ecore_cxt_cdu_iids(struct ecore_cxt_mngr   *p_mngr,
+			       struct ecore_cdu_iids	*iids)
 {
 	u32 type;
 
@@ -146,8 +145,8 @@ static void ecore_cxt_cdu_iids(struct ecore_cxt_mngr *p_mngr,
 
 /* counts the iids for the Searcher block configuration */
 struct ecore_src_iids {
-	u32 pf_cids;
-	u32 per_vf_cids;
+	u32			pf_cids;
+	u32			per_vf_cids;
 };
 
 static void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr,
@@ -156,6 +155,9 @@ static void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr,
 	u32 i;
 
 	for (i = 0; i < MAX_CONN_TYPES; i++) {
+		if (!src_proto(i))
+			continue;
+
 		iids->pf_cids += p_mngr->conn_cfg[i].cid_count;
 		iids->per_vf_cids += p_mngr->conn_cfg[i].cids_per_vf;
 	}
@@ -167,24 +169,39 @@ static void ecore_cxt_src_iids(struct ecore_cxt_mngr *p_mngr,
 /* counts the iids for the Timers block configuration */
 struct ecore_tm_iids {
 	u32 pf_cids;
-	u32 pf_tids[NUM_TASK_PF_SEGMENTS];	/* per segment */
+	u32 pf_tids[NUM_TASK_PF_SEGMENTS]; /* per segment */
 	u32 pf_tids_total;
 	u32 per_vf_cids;
 	u32 per_vf_tids;
 };
 
 static void ecore_cxt_tm_iids(struct ecore_hwfn *p_hwfn,
-			      struct ecore_cxt_mngr *p_mngr,
 			      struct ecore_tm_iids *iids)
 {
+	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_conn_type_cfg *p_cfg;
 	bool tm_vf_required = false;
 	bool tm_required = false;
-	u32 i, j;
+	int i, j;
 
-	for (i = 0; i < MAX_CONN_TYPES; i++) {
+	/* Timers is a special case -> we don't count how many cids require
+	 * timers but what's the max cid that will be used by the timer block.
+	 * Therefore we traverse in reverse order, and once we hit a protocol
+	 * that requires the timers memory, we'll sum all the protocols up
+	 * to that one.
+	 */
+	for (i = MAX_CONN_TYPES - 1; i >= 0; i--) {
 		p_cfg = &p_mngr->conn_cfg[i];
 
+		/* In E5 the CORE CIDs are allocated first, and not according to
+		 * its 'enum protocol_type' value. To not miss the count of the
+		 * CORE CIDs by a protocol that requires the timers memory, but
+		 * with a lower 'enum protocol_type' value - the addition of the
+		 * CORE CIDs is done outside the loop.
+		 */
+		if (ECORE_IS_E5(p_hwfn->p_dev) && (i == PROTOCOLID_CORE))
+			continue;
+
 		if (tm_cid_proto(i) || tm_required) {
 			if (p_cfg->cid_count)
 				tm_required = true;
@@ -196,6 +213,7 @@ static void ecore_cxt_tm_iids(struct ecore_hwfn *p_hwfn,
 			if (p_cfg->cids_per_vf)
 				tm_vf_required = true;
 
+			iids->per_vf_cids += p_cfg->cids_per_vf;
 		}
 
 		if (tm_tid_proto(i)) {
@@ -215,6 +233,15 @@ static void ecore_cxt_tm_iids(struct ecore_hwfn *p_hwfn,
 		}
 	}
 
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		p_cfg = &p_mngr->conn_cfg[PROTOCOLID_CORE];
+
+		if (tm_required)
+			iids->pf_cids += p_cfg->cid_count;
+		if (tm_vf_required)
+			iids->per_vf_cids += p_cfg->cids_per_vf;
+	}
+
 	iids->pf_cids = ROUNDUP(iids->pf_cids, TM_ALIGN);
 	iids->per_vf_cids = ROUNDUP(iids->per_vf_cids, TM_ALIGN);
 	iids->per_vf_tids = ROUNDUP(iids->per_vf_tids, TM_ALIGN);
@@ -251,7 +278,7 @@ static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
 		vf_tids += segs[NUM_TASK_PF_SEGMENTS].count;
 	}
 
-	iids->vf_cids += vf_cids * p_mngr->vf_count;
+	iids->vf_cids = vf_cids;
 	iids->tids += vf_tids * p_mngr->vf_count;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
@@ -259,8 +286,8 @@ static void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
 		   iids->cids, iids->vf_cids, iids->tids, vf_tids);
 }
 
-static struct ecore_tid_seg *ecore_cxt_tid_seg_info(struct ecore_hwfn *p_hwfn,
-						    u32 seg)
+static struct ecore_tid_seg *ecore_cxt_tid_seg_info(struct ecore_hwfn   *p_hwfn,
+						    u32			seg)
 {
 	struct ecore_cxt_mngr *p_cfg = p_hwfn->p_cxt_mngr;
 	u32 i;
@@ -275,24 +302,18 @@ static struct ecore_tid_seg *ecore_cxt_tid_seg_info(struct ecore_hwfn *p_hwfn,
 	return OSAL_NULL;
 }
 
-static void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
-{
-	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
-
-	p_mgr->srq_count = num_srqs;
-}
-
-u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn)
+/* This function was written under the assumption that all the ILT clients
+ * share the same ILT page size (although it is not required).
+ */
+u32 ecore_cxt_get_ilt_page_size(struct ecore_hwfn *p_hwfn)
 {
-	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
-
-	return p_mgr->srq_count;
+	return ILT_PAGE_IN_BYTES(p_hwfn->p_dev->ilt_page_size);
 }
 
 /* set the iids (cid/tid) count per protocol */
 static void ecore_cxt_set_proto_cid_count(struct ecore_hwfn *p_hwfn,
-				   enum protocol_type type,
-				   u32 cid_count, u32 vf_cid_cnt)
+					  enum protocol_type type,
+					  u32 cid_count, u32 vf_cid_cnt)
 {
 	struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
 	struct ecore_conn_type_cfg *p_conn = &p_mgr->conn_cfg[type];
@@ -301,8 +322,9 @@ static void ecore_cxt_set_proto_cid_count(struct ecore_hwfn *p_hwfn,
 	p_conn->cids_per_vf = ROUNDUP(vf_cid_cnt, DQ_RANGE_ALIGN);
 }
 
-u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
-				  enum protocol_type type, u32 *vf_cid)
+u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn	*p_hwfn,
+				  enum protocol_type	type,
+				  u32			*vf_cid)
 {
 	if (vf_cid)
 		*vf_cid = p_hwfn->p_cxt_mngr->conn_cfg[type].cids_per_vf;
@@ -310,28 +332,41 @@ u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
 	return p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
 }
 
-u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
-				  enum protocol_type type)
+u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn	*p_hwfn,
+				  enum protocol_type	type,
+				  u8			vf_id)
 {
-	return p_hwfn->p_cxt_mngr->acquired[type].start_cid;
+	if (vf_id != ECORE_CXT_PF_CID)
+		return p_hwfn->p_cxt_mngr->acquired_vf[type][vf_id].start_cid;
+	else
+		return p_hwfn->p_cxt_mngr->acquired[type].start_cid;
 }
 
 u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn,
-					 enum protocol_type type)
+				  enum protocol_type type,
+				  u8		     vf_id)
 {
+	struct ecore_conn_type_cfg *p_conn_cfg;
 	u32 cnt = 0;
 	int i;
 
-	for (i = 0; i < TASK_SEGMENTS; i++)
-		cnt += p_hwfn->p_cxt_mngr->conn_cfg[type].tid_seg[i].count;
+	p_conn_cfg = &p_hwfn->p_cxt_mngr->conn_cfg[type];
+
+	if (vf_id != ECORE_CXT_PF_CID)
+		return p_conn_cfg->tid_seg[TASK_SEGMENT_VF].count;
+
+	for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++)
+		cnt += p_conn_cfg->tid_seg[i].count;
 
 	return cnt;
 }
 
-static OSAL_INLINE void
-ecore_cxt_set_proto_tid_count(struct ecore_hwfn *p_hwfn,
-			      enum protocol_type proto,
-			      u8 seg, u8 seg_type, u32 count, bool has_fl)
+static void ecore_cxt_set_proto_tid_count(struct ecore_hwfn *p_hwfn,
+					  enum protocol_type proto,
+					  u8 seg,
+					  u8 seg_type,
+					  u32 count,
+					  bool has_fl)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_tid_seg *p_seg = &p_mngr->conn_cfg[proto].tid_seg[seg];
@@ -342,12 +377,13 @@ ecore_cxt_set_proto_tid_count(struct ecore_hwfn *p_hwfn,
 }
 
 /* the *p_line parameter must be either 0 for the first invocation or the
- * value returned in the previous invocation.
+   value returned in the previous invocation.
  */
-static void ecore_ilt_cli_blk_fill(struct ecore_ilt_client_cfg *p_cli,
-				   struct ecore_ilt_cli_blk *p_blk,
-				   u32 start_line,
-				   u32 total_size, u32 elem_size)
+static void ecore_ilt_cli_blk_fill(struct ecore_ilt_client_cfg	*p_cli,
+				   struct ecore_ilt_cli_blk	*p_blk,
+				   u32				start_line,
+				   u32				total_size,
+				   u32				elem_size)
 {
 	u32 ilt_size = ILT_PAGE_IN_BYTES(p_cli->p_size.val);
 
@@ -362,10 +398,11 @@ static void ecore_ilt_cli_blk_fill(struct ecore_ilt_client_cfg *p_cli,
 	p_blk->start_line = start_line;
 }
 
-static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn,
-				   struct ecore_ilt_client_cfg *p_cli,
-				   struct ecore_ilt_cli_blk *p_blk,
-				   u32 *p_line, enum ilt_clients client_id)
+static void ecore_ilt_cli_adv_line(struct ecore_hwfn		*p_hwfn,
+				    struct ecore_ilt_client_cfg	*p_cli,
+				    struct ecore_ilt_cli_blk	*p_blk,
+				    u32				*p_line,
+				    enum ilt_clients		client_id)
 {
 	if (!p_blk->total_size)
 		return;
@@ -378,8 +415,7 @@ static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn,
 	p_cli->last.val = *p_line - 1;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
-		   "ILT[Client %d] - Lines: [%08x - %08x]. Block - Size %08x"
-		   " [Real %08x] Start line %d\n",
+		   "ILT[Client %d] - Lines: [%08x - %08x]. Block - Size %08x [Real %08x] Start line %d\n",
 		   client_id, p_cli->first.val, p_cli->last.val,
 		   p_blk->total_size, p_blk->real_size_in_page,
 		   p_blk->start_line);
@@ -388,7 +424,7 @@ static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn,
 static void ecore_ilt_get_dynamic_line_range(struct ecore_hwfn *p_hwfn,
 					     enum ilt_clients ilt_client,
 					     u32 *dynamic_line_offset,
-					     u32 *dynamic_line_cnt)
+					     u32 *dynamic_line_cnt, u8 is_vf)
 {
 	struct ecore_ilt_client_cfg *p_cli;
 	struct ecore_conn_type_cfg *p_cfg;
@@ -404,9 +440,23 @@ static void ecore_ilt_get_dynamic_line_range(struct ecore_hwfn *p_hwfn,
 		p_cfg = &p_hwfn->p_cxt_mngr->conn_cfg[PROTOCOLID_ROCE];
 
 		cxts_per_p = ILT_PAGE_IN_BYTES(p_cli->p_size.val) /
-		    (u32)CONN_CXT_SIZE(p_hwfn);
-
-		*dynamic_line_cnt = p_cfg->cid_count / cxts_per_p;
+			     (u32)CONN_CXT_SIZE(p_hwfn);
+
+		*dynamic_line_cnt = is_vf ? p_cfg->cids_per_vf / cxts_per_p :
+					    p_cfg->cid_count / cxts_per_p;
+
+		/* In E5 the CORE CIDs are allocated before the ROCE CIDs */
+		if (*dynamic_line_cnt && ECORE_IS_E5(p_hwfn->p_dev)) {
+			u32 roce_cid_cnt = is_vf ? p_cfg->cids_per_vf :
+					   p_cfg->cid_count;
+			u32 core_cid_cnt;
+
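+			/* Illustrative math (example numbers, not from HW
+			 * docs): with core_cid_cnt = 96, roce_cid_cnt = 160
+			 * and cxts_per_p = 32, the dynamic range starts at
+			 * line 1 + 96 / 32 = 4 and spans
+			 * (96 + 160) / 32 - 4 = 4 lines.
+			 */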
+			p_cfg = &p_hwfn->p_cxt_mngr->conn_cfg[PROTOCOLID_CORE];
+			core_cid_cnt = p_cfg->cid_count;
+			*dynamic_line_offset = 1 + (core_cid_cnt / cxts_per_p);
+			*dynamic_line_cnt = ((core_cid_cnt + roce_cid_cnt) /
+					     cxts_per_p) - *dynamic_line_offset;
+		}
 	}
 }
 
@@ -424,7 +474,7 @@ ecore_cxt_set_blk(struct ecore_ilt_cli_blk *p_blk)
 {
 	p_blk->total_size = 0;
 	return p_blk;
-	}
+}
 
 static u32
 ecore_cxt_src_elements(struct ecore_cxt_mngr *p_mngr)
@@ -435,9 +485,9 @@ ecore_cxt_src_elements(struct ecore_cxt_mngr *p_mngr)
 	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
 	ecore_cxt_src_iids(p_mngr, &src_iids);
 
-	/* Both the PF and VFs searcher connections are stored in the per PF
-	 * database. Thus sum the PF searcher cids and all the VFs searcher
-	 * cids.
+	/* Both the PF and VFs searcher connections are stored
+	 * in the per PF database. Thus sum the PF searcher
+	 * cids and all the VFs searcher cids.
 	 */
 	elem_num = src_iids.pf_cids +
 		   src_iids.per_vf_cids * p_mngr->vf_count;
@@ -450,16 +500,34 @@ ecore_cxt_src_elements(struct ecore_cxt_mngr *p_mngr)
 	return elem_num;
 }
 
-enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
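+/* Clear the per-client PF and VF block sizes so a recompute starts clean */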
+static void ecore_ilt_blk_reset(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_ilt_client_cfg *clients = p_hwfn->p_cxt_mngr->clients;
+	u32 cli_idx, blk_idx;
+
+	for (cli_idx = 0; cli_idx < MAX_ILT_CLIENTS; cli_idx++) {
+		for (blk_idx = 0; blk_idx < ILT_CLI_PF_BLOCKS; blk_idx++)
+			clients[cli_idx].pf_blks[blk_idx].total_size = 0;
+		for (blk_idx = 0; blk_idx < ILT_CLI_VF_BLOCKS; blk_idx++)
+			clients[cli_idx].vf_blks[blk_idx].total_size = 0;
+	}
+}
+
+enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn,
+					       u32 *line_count)
 {
-	u32 curr_line, total, i, task_size, line, total_size, elem_size;
+	u32 total, i, task_size, line, total_size, elem_size;
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u32 curr_line, prev_line, total_lines;
 	struct ecore_ilt_client_cfg *p_cli;
 	struct ecore_ilt_cli_blk *p_blk;
 	struct ecore_cdu_iids cdu_iids;
 	struct ecore_qm_iids qm_iids;
 	struct ecore_tm_iids tm_iids;
 	struct ecore_tid_seg *p_seg;
+	u16 num_vf_pqs;
 
 	OSAL_MEM_ZERO(&qm_iids, sizeof(qm_iids));
 	OSAL_MEM_ZERO(&cdu_iids, sizeof(cdu_iids));
@@ -467,8 +535,14 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 	p_mngr->pf_start_line = RESC_START(p_hwfn, ECORE_ILT);
 
+	/* Reset all the ILT blocks at the beginning of ILT compute - this
+	 * is done in order to prevent memory allocation for irrelevant blocks
+	 * afterwards (e.g. VF timer block after disabling VF-RDMA).
+	 */
+	ecore_ilt_blk_reset(p_hwfn);
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
-		   "hwfn [%d] - Set context mngr starting line to be 0x%08x\n",
+		   "hwfn [%d] - Set context manager starting line to be 0x%08x\n",
 		   p_hwfn->my_id, p_hwfn->p_cxt_mngr->pf_start_line);
 
 	/* CDUC */
@@ -494,7 +568,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 	ecore_ilt_get_dynamic_line_range(p_hwfn, ILT_CLI_CDUC,
 					 &p_blk->dynamic_line_offset,
-					 &p_blk->dynamic_line_cnt);
+					 &p_blk->dynamic_line_cnt, IOV_PF);
 
 	/* CDUC VF */
 	p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[CDUC_BLK]);
@@ -510,6 +584,10 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 				       ILT_CLI_CDUC);
 
+	ecore_ilt_get_dynamic_line_range(p_hwfn, ILT_CLI_CDUC,
+					 &p_blk->dynamic_line_offset,
+					 &p_blk->dynamic_line_cnt, IOV_VF);
+
 	/* CDUT PF */
 	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_CDUT]);
 	p_cli->first.val = curr_line;
@@ -525,8 +603,39 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line, total,
 				       p_mngr->task_type_size[p_seg->type]);
 
+		prev_line = curr_line;
 		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 				       ILT_CLI_CDUT);
+		total_lines = curr_line - prev_line;
+
+		switch (i) {
+		case ECORE_CXT_ISCSI_TID_SEG:
+			p_mngr->iscsi_task_pages = (u16)total_lines;
+			DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+				   "CDUT ILT Info: iscsi_task_pages %hu\n",
+				   p_mngr->iscsi_task_pages);
+			break;
+		case ECORE_CXT_FCOE_TID_SEG:
+			p_mngr->fcoe_task_pages = (u16)total_lines;
+			DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+				   "CDUT ILT Info: fcoe_task_pages %hu\n",
+				   p_mngr->fcoe_task_pages);
+			break;
+		case ECORE_CXT_ROCE_TID_SEG:
+			p_mngr->roce_task_pages = (u16)total_lines;
+			DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+				   "CDUT ILT Info: roce_task_pages %hu\n",
+				   p_mngr->roce_task_pages);
+			break;
+		case ECORE_CXT_ETH_TID_SEG:
+			p_mngr->eth_task_pages = (u16)total_lines;
+			DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+				   "CDUT ILT Info: eth_task_pages %hu\n",
+				   p_mngr->eth_task_pages);
+			break;
+		default:
+			break;
+		}
 	}
 
 	/* next the 'init' task memory (forced load memory) */
@@ -535,8 +644,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		if (!p_seg || p_seg->count == 0)
 			continue;
 
-		p_blk =
-		     ecore_cxt_set_blk(&p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)]);
+		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)]);
 
 		if (!p_seg->has_fl_mem) {
 			/* The segment is active (total size pf 'working'
@@ -590,8 +698,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 				       ILT_CLI_CDUT);
 
 		/* 'init' memory */
-		p_blk =
-		     ecore_cxt_set_blk(&p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)]);
+		p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)]);
 		if (!p_seg->has_fl_mem) {
 			/* see comment above */
 			line = p_cli->vf_blks[CDUT_SEG_BLK(0)].start_line;
@@ -599,7 +706,8 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		} else {
 			task_size = p_mngr->task_type_size[p_seg->type];
 			ecore_ilt_cli_blk_fill(p_cli, p_blk,
-					       curr_line, total, task_size);
+					       curr_line, total,
+					       task_size);
 			ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 					       ILT_CLI_CDUT);
 		}
@@ -624,23 +732,29 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_QM]);
 	p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]);
 
+	ecore_cxt_qm_iids(p_hwfn, &qm_iids);
+
 	/* At this stage, after the first QM configuration, the PF PQs amount
 	 * is the highest possible. Save this value at qm_info->ilt_pf_pqs to
 	 * detect overflows in the future.
 	 * Even though VF PQs amount can be larger than VF count, use vf_count
 	 * because each VF requires only the full amount of CIDs.
 	 */
-	ecore_cxt_qm_iids(p_hwfn, &qm_iids);
+	qm_info->ilt_pf_pqs = qm_info->num_pqs - qm_info->num_vf_pqs;
+	if (ECORE_IS_VF_RDMA(p_hwfn))
+		num_vf_pqs = RESC_NUM(p_hwfn, ECORE_PQ) - qm_info->ilt_pf_pqs;
+	else
+		num_vf_pqs = (u16)p_mngr->vf_count;
+
 	total = ecore_qm_pf_mem_size(p_hwfn, qm_iids.cids,
 				     qm_iids.vf_cids, qm_iids.tids,
-				     p_hwfn->qm_info.num_pqs + OFLD_GRP_SIZE,
-				     p_hwfn->qm_info.num_vf_pqs);
+				     qm_info->ilt_pf_pqs,
+				     num_vf_pqs);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
-		   "QM ILT Info, (cids=%d, vf_cids=%d, tids=%d, num_pqs=%d,"
-		   " num_vf_pqs=%d, memory_size=%d)\n",
+		   "QM ILT Info, (cids=%d, vf_cids=%d, tids=%d, pf_pqs=%d, vf_count=%d, memory_size=%d)\n",
 		   qm_iids.cids, qm_iids.vf_cids, qm_iids.tids,
-		   p_hwfn->qm_info.num_pqs, p_hwfn->qm_info.num_vf_pqs, total);
+		   qm_info->ilt_pf_pqs, p_mngr->vf_count, total);
 
 	ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line, total * 0x1000,
 			       QM_PQ_ELEMENT_SIZE);
@@ -650,7 +764,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 	/* TM PF */
 	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_TM]);
-	ecore_cxt_tm_iids(p_hwfn, p_mngr, &tm_iids);
+	ecore_cxt_tm_iids(p_hwfn, &tm_iids);
 	total = tm_iids.pf_cids + tm_iids.pf_tids_total;
 	if (total) {
 		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]);
@@ -668,12 +782,14 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	if (total) {
 		p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[0]);
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
-				       total * TM_ELEM_SIZE, TM_ELEM_SIZE);
+				       total * TM_ELEM_SIZE,
+				       TM_ELEM_SIZE);
 
 		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 				       ILT_CLI_TM);
 
 		p_cli->vf_total_lines = curr_line - p_blk->start_line;
+
 		for (i = 1; i < p_mngr->vf_count; i++) {
 			ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 					       ILT_CLI_TM);
@@ -696,30 +812,54 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		p_cli->pf_total_lines = curr_line - p_blk->start_line;
 	}
 
-	/* TSDM (SRQ CONTEXT) */
-	total = ecore_cxt_get_srq_count(p_hwfn);
-
-	if (total) {
-		p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_TSDM]);
-		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[SRQ_BLK]);
-		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
-				       total * SRQ_CXT_SIZE, SRQ_CXT_SIZE);
-
-		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
-				       ILT_CLI_TSDM);
-		p_cli->pf_total_lines = curr_line - p_blk->start_line;
-	}
+	*line_count = curr_line - p_hwfn->p_cxt_mngr->pf_start_line;
 
 	if (curr_line - p_hwfn->p_cxt_mngr->pf_start_line >
-	    RESC_NUM(p_hwfn, ECORE_ILT)) {
-		DP_ERR(p_hwfn, "too many ilt lines...#lines=%d\n",
-		       curr_line - p_hwfn->p_cxt_mngr->pf_start_line);
+	    RESC_NUM(p_hwfn, ECORE_ILT))
 		return ECORE_INVAL;
-	}
 
 	return ECORE_SUCCESS;
 }
 
+u32 ecore_cxt_cfg_ilt_compute_excess(struct ecore_hwfn *p_hwfn, u32 used_lines)
+{
+	struct ecore_ilt_client_cfg *p_cli;
+	u32 excess_lines, available_lines;
+	struct ecore_cxt_mngr *p_mngr;
+	u32 ilt_page_size, elem_size;
+	struct ecore_tid_seg *p_seg;
+	int i;
+
+	available_lines = RESC_NUM(p_hwfn, ECORE_ILT);
+	excess_lines = used_lines - available_lines;
+
+	if (!excess_lines)
+		return 0;
+
+	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
+		return 0;
+
+	p_mngr = p_hwfn->p_cxt_mngr;
+	p_cli = &p_mngr->clients[ILT_CLI_CDUT];
+	ilt_page_size = ILT_PAGE_IN_BYTES(p_cli->p_size.val);
+
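+	/* Each excess line frees ilt_page_size / elem_size elements of the
+	 * first active task segment; e.g. (illustrative sizes) a 32K ILT page
+	 * holding 128B elements trims 256 elements per excess line.
+	 */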
+	for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++) {
+		p_seg = ecore_cxt_tid_seg_info(p_hwfn, i);
+		if (!p_seg || p_seg->count == 0)
+			continue;
+
+		elem_size = p_mngr->task_type_size[p_seg->type];
+		if (!elem_size)
+			continue;
+
+		return (ilt_page_size / elem_size) * excess_lines;
+	}
+
+	DP_ERR(p_hwfn, "failed computing excess ILT lines\n");
+	return 0;
+}
+
 static void ecore_cxt_src_t2_free(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_src_t2 *p_t2 = &p_hwfn->p_cxt_mngr->src_t2;
@@ -810,6 +950,7 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 	if (rc)
 		goto t2_fail;
 
 	/* Set the t2 pointers */
 
 	/* entries per page - must be a power of two */
@@ -829,7 +970,8 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 		u32 j;
 
 		for (j = 0; j < ent_num - 1; j++) {
-			val = p_ent_phys + (j + 1) * sizeof(struct src_ent);
+			val = p_ent_phys +
+			      (j + 1) * sizeof(struct src_ent);
 			entries[j].next = OSAL_CPU_TO_BE64(val);
 		}
 
@@ -849,11 +991,11 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 	return rc;
 }
 
-#define for_each_ilt_valid_client(pos, clients)		\
-	for (pos = 0; pos < MAX_ILT_CLIENTS; pos++)		\
-		if (!clients[pos].active) {		\
-			continue;			\
-		} else					\
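+/* Iterate only over active ILT clients; the trailing 'else' lets the macro
+ * be followed by a regular loop body without swallowing a later 'else'.
+ */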
+#define for_each_ilt_valid_client(pos, clients)	\
+	for (pos = 0; pos < MAX_ILT_CLIENTS; pos++)	\
+		if (!(clients)[pos].active) {	\
+			continue;		\
+		} else				\
 
 
 /* Total number of ILT lines used by this PF */
@@ -885,24 +1027,26 @@ static void ecore_ilt_shadow_free(struct ecore_hwfn *p_hwfn)
 
 		if (p_dma->virt_addr)
 			OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
-					       p_dma->p_virt,
-					       p_dma->phys_addr, p_dma->size);
+					       p_dma->virt_addr,
+					       p_dma->phys_addr,
+					       p_dma->size);
 		p_dma->virt_addr = OSAL_NULL;
 	}
 	OSAL_FREE(p_hwfn->p_dev, p_mngr->ilt_shadow);
 	p_mngr->ilt_shadow = OSAL_NULL;
 }
 
-static enum _ecore_status_t
-ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ilt_cli_blk *p_blk,
-		    enum ilt_clients ilt_client, u32 start_line_offset)
+static enum _ecore_status_t ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
+						struct ecore_ilt_cli_blk *p_blk,
+						enum ilt_clients ilt_client,
+						u32 start_line_offset)
 {
 	struct phys_mem_desc *ilt_shadow = p_hwfn->p_cxt_mngr->ilt_shadow;
 	u32 lines, line, sz_left, lines_to_skip, first_skipped_line;
 
 	/* Special handling for RoCE that supports dynamic allocation */
-	if (ilt_client == ILT_CLI_CDUT || ilt_client == ILT_CLI_TSDM)
+	if (ECORE_IS_RDMA_PERSONALITY(p_hwfn) &&
+	    (ilt_client == ILT_CLI_CDUT || ilt_client == ILT_CLI_TSDM))
 		return ECORE_SUCCESS;
 
 	if (!p_blk->total_size)
@@ -910,7 +1054,8 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 
 	sz_left = p_blk->total_size;
 	lines_to_skip = p_blk->dynamic_line_cnt;
-	lines = DIV_ROUND_UP(sz_left, p_blk->real_size_in_page) - lines_to_skip;
+	lines = DIV_ROUND_UP(sz_left, p_blk->real_size_in_page) -
+		lines_to_skip;
 	line = p_blk->start_line + start_line_offset -
 	       p_hwfn->p_cxt_mngr->pf_start_line;
 	first_skipped_line = line + p_blk->dynamic_line_offset;
@@ -926,7 +1071,6 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 		}
 
 		size = OSAL_MIN_T(u32, sz_left, p_blk->real_size_in_page);
-
 /* @DPDK */
 #define ILT_BLOCK_ALIGN_SIZE 0x1000
 		p_virt = OSAL_DMA_ALLOC_COHERENT_ALIGNED(p_hwfn->p_dev,
@@ -941,9 +1085,8 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 		ilt_shadow[line].size = size;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
-			   "ILT shadow: Line [%d] Physical 0x%lx"
-			   " Virtual %p Size %d\n",
-			   line, (unsigned long)p_phys, p_virt, size);
+			   "ILT shadow: Line [%d] Physical 0x%" PRIx64 " Virtual %p Size %d\n",
+			   line, (u64)p_phys, p_virt, size);
 
 		sz_left -= size;
 		line++;
@@ -955,7 +1098,7 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_ilt_client_cfg *clients = p_mngr->clients;
 	struct ecore_ilt_cli_blk *p_blk;
 	u32 size, i, j, k;
@@ -965,7 +1108,7 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 	p_mngr->ilt_shadow = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 					 size * sizeof(struct phys_mem_desc));
 
-	if (!p_mngr->ilt_shadow) {
+	if (p_mngr->ilt_shadow == OSAL_NULL) {
 		DP_NOTICE(p_hwfn, false, "Failed to allocate ilt shadow table\n");
 		rc = ECORE_NOMEM;
 		goto ilt_shadow_fail;
@@ -995,6 +1138,8 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 		}
 	}
 
+	p_mngr->ilt_shadow_size = size;
+
 	return ECORE_SUCCESS;
 
 ilt_shadow_fail:
@@ -1013,6 +1158,9 @@ static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 		p_mngr->acquired[type].max_count = 0;
 		p_mngr->acquired[type].start_cid = 0;
 
+		if (!p_mngr->acquired_vf[type])
+			continue;
+
 		for (vf = 0; vf < max_num_vfs; vf++) {
 			OSAL_FREE(p_hwfn->p_dev,
 				  p_mngr->acquired_vf[type][vf].cid_map);
@@ -1025,8 +1173,8 @@ static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 
 static enum _ecore_status_t
 __ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
-			   u32 cid_start, u32 cid_count,
-			   struct ecore_cid_acquired_map *p_map)
+			     u32 cid_start, u32 cid_count,
+			     struct ecore_cid_acquired_map *p_map)
 {
 	u32 size;
 
@@ -1060,8 +1208,8 @@ ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type, u32 start_cid,
 
 	p_cfg = &p_mngr->conn_cfg[type];
 
-		/* Handle PF maps */
-		p_map = &p_mngr->acquired[type];
+	/* Handle PF maps */
+	p_map = &p_mngr->acquired[type];
 	rc = __ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
 					  p_cfg->cid_count, p_map);
 	if (rc != ECORE_SUCCESS)
@@ -1086,7 +1234,23 @@ static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 	u32 type;
 	enum _ecore_status_t rc;
 
+	/* Set the CORE CIDs to be first so it can have a global range ID */
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		rc = ecore_cid_map_alloc_single(p_hwfn, PROTOCOLID_CORE,
+						start_cid, vf_start_cid);
+		if (rc != ECORE_SUCCESS)
+			goto cid_map_fail;
+
+		start_cid = p_mngr->conn_cfg[PROTOCOLID_CORE].cid_count;
+
+		/* Add to VFs the required offset to be after the CORE CIDs */
+		vf_start_cid = start_cid;
+	}
+
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
+		if (ECORE_IS_E5(p_hwfn->p_dev) && (type == PROTOCOLID_CORE))
+			continue;
+
 		rc = ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
 						vf_start_cid);
 		if (rc != ECORE_SUCCESS)
@@ -1107,6 +1271,7 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cid_acquired_map *acquired_vf;
 	struct ecore_ilt_client_cfg *clients;
+	struct ecore_hw_sriov_info *p_iov;
 	struct ecore_cxt_mngr *p_mngr;
 	u32 i, max_num_vfs;
 
@@ -1116,6 +1281,9 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 		return ECORE_NOMEM;
 	}
 
+	/* Set the cxt mngr pointer prior to further allocations */
+	p_hwfn->p_cxt_mngr = p_mngr;
+
 	/* Initialize ILT client registers */
 	clients = p_mngr->clients;
 	clients[ILT_CLI_CDUC].first.reg = ILT_CFG_REG(CDUC, FIRST_ILT);
@@ -1144,16 +1312,22 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 
-	/* default ILT page size for all clients is 64K */
+	/* ILT page size for all clients */
 	for (i = 0; i < MAX_ILT_CLIENTS; i++)
-		p_mngr->clients[i].p_size.val = ILT_DEFAULT_HW_P_SIZE;
+		p_mngr->clients[i].p_size.val = p_hwfn->p_dev->ilt_page_size;
 
+	/* Initialize task sizes */
 	/* due to removal of ISCSI/FCoE files union type0_task_context
 	 * task_type_size will be 0. So hardcoded for now.
 	 */
 	p_mngr->task_type_size[0] = 512; /* @DPDK */
 	p_mngr->task_type_size[1] = 128; /* @DPDK */
 
-	if (p_hwfn->p_dev->p_iov_info)
-		p_mngr->vf_count = p_hwfn->p_dev->p_iov_info->total_vfs;
+	p_mngr->conn_ctx_size = CONN_CXT_SIZE(p_hwfn);
+
+	p_iov = p_hwfn->p_dev->p_iov_info;
+	if (p_iov) {
+		p_mngr->vf_count = p_iov->total_vfs;
+		p_mngr->first_vf_in_pf = p_iov->first_vf_in_pf;
+	}
 
 	/* Initialize the dynamic ILT allocation mutex */
 #ifdef CONFIG_ECORE_LOCK_ALLOC
@@ -1164,9 +1338,6 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 #endif
 	OSAL_MUTEX_INIT(&p_mngr->mutex);
 
-	/* Set the cxt mangr pointer prior to further allocations */
-	p_hwfn->p_cxt_mngr = p_mngr;
-
 	max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	for (i = 0; i < MAX_CONN_TYPES; i++) {
 		acquired_vf = OSAL_CALLOC(p_hwfn->p_dev, GFP_KERNEL,
@@ -1177,7 +1348,7 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 			return ECORE_NOMEM;
 		}
 
-		p_mngr->acquired_vf[i] = acquired_vf;
+		p_hwfn->p_cxt_mngr->acquired_vf[i] = acquired_vf;
 	}
 
 	return ECORE_SUCCESS;
@@ -1185,7 +1356,7 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 
 enum _ecore_status_t ecore_cxt_tables_alloc(struct ecore_hwfn *p_hwfn)
 {
 	enum _ecore_status_t rc;
 
 	/* Allocate the ILT shadow table */
 	rc = ecore_ilt_shadow_alloc(p_hwfn);
@@ -1194,10 +1365,11 @@ enum _ecore_status_t ecore_cxt_tables_alloc(struct ecore_hwfn *p_hwfn)
 		goto tables_alloc_fail;
 	}
 
-	/* Allocate the T2  table */
+	/* Allocate the T2 tables */
 	rc = ecore_cxt_src_t2_alloc(p_hwfn);
 	if (rc) {
-		DP_NOTICE(p_hwfn, false, "Failed to allocate T2 memory\n");
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to allocate src T2 memory\n");
 		goto tables_alloc_fail;
 	}
 
@@ -1334,7 +1506,7 @@ void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 
 static void ecore_cdu_init_common(struct ecore_hwfn *p_hwfn)
 {
 	u32 page_sz, elems_per_page, block_waste, cxt_size, cdu_params = 0;
 
 	/* CDUC - connection configuration */
 	page_sz = p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC].p_size.val;
@@ -1390,7 +1562,8 @@ static void ecore_cdu_init_pf(struct ecore_hwfn *p_hwfn)
 		CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET,
 		CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET,
 		CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET,
-		CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET
+		CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET,
+		CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET
 	};
 
 	static const u32 rt_type_offset_fl_arr[] = {
@@ -1404,7 +1577,6 @@ static void ecore_cdu_init_pf(struct ecore_hwfn *p_hwfn)
 
 	/* There are initializations only for CDUT during pf Phase */
 	for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++) {
-		/* Segment 0 */
 		p_seg = ecore_cxt_tid_seg_info(p_hwfn, i);
 		if (!p_seg)
 			continue;
@@ -1415,22 +1587,40 @@ static void ecore_cdu_init_pf(struct ecore_hwfn *p_hwfn)
 		 * Page size is larger than 32K!
 		 */
 		offset = (ILT_PAGE_IN_BYTES(p_cli->p_size.val) *
-			  (p_cli->pf_blks[CDUT_SEG_BLK(i)].start_line -
-			   p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES;
+			 (p_cli->pf_blks[CDUT_SEG_BLK(i)].start_line -
+			  p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES;
 
 		cdu_seg_params = 0;
 		SET_FIELD(cdu_seg_params, CDU_SEG_REG_TYPE, p_seg->type);
 		SET_FIELD(cdu_seg_params, CDU_SEG_REG_OFFSET, offset);
-		STORE_RT_REG(p_hwfn, rt_type_offset_arr[i], cdu_seg_params);
+		STORE_RT_REG(p_hwfn, rt_type_offset_arr[i],
+			     cdu_seg_params);
 
 		offset = (ILT_PAGE_IN_BYTES(p_cli->p_size.val) *
-			  (p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)].start_line -
-			   p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES;
+			 (p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)].start_line -
+			  p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES;
+
+		cdu_seg_params = 0;
+		SET_FIELD(cdu_seg_params, CDU_SEG_REG_TYPE, p_seg->type);
+		SET_FIELD(cdu_seg_params, CDU_SEG_REG_OFFSET, offset);
+		STORE_RT_REG(p_hwfn, rt_type_offset_fl_arr[i],
+			     cdu_seg_params);
+	}
 
+	/* Init VF (single) segment */
+	p_seg = ecore_cxt_tid_seg_info(p_hwfn, TASK_SEGMENT_VF);
+	if (p_seg) {
+		/* VF has a single segment so the offset is 0 by definition.
+		 * The offset expresses where this segment starts relative to
+		 * the VF section in CDUT. Since there is a single segment it
+		 * will be 0 by definition
+		 */
+		offset = 0;
 		cdu_seg_params = 0;
 		SET_FIELD(cdu_seg_params, CDU_SEG_REG_TYPE, p_seg->type);
 		SET_FIELD(cdu_seg_params, CDU_SEG_REG_OFFSET, offset);
-		STORE_RT_REG(p_hwfn, rt_type_offset_fl_arr[i], cdu_seg_params);
+		STORE_RT_REG(p_hwfn, rt_type_offset_arr[TASK_SEGMENT_VF],
+			     cdu_seg_params);
 	}
 }
 
@@ -1459,12 +1649,35 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 /* CM PF */
 static void ecore_cm_init_pf(struct ecore_hwfn *p_hwfn)
 {
 	STORE_RT_REG(p_hwfn, XCM_REG_CON_PHY_Q3_RT_OFFSET,
 		     ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB));
+}
+
+#define GLB_MAX_ICID_RT_OFFSET(id) \
+	DORQ_REG_GLB_MAX_ICID_ ## id ## _RT_OFFSET
+#define GLB_RANGE2CONN_TYPE_RT_OFFSET(id) \
+	DORQ_REG_GLB_RANGE2CONN_TYPE_ ## id ## _RT_OFFSET
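+/* e.g. GLB_MAX_ICID_RT_OFFSET(0) expands to
+ * DORQ_REG_GLB_MAX_ICID_0_RT_OFFSET
+ */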
+
+static void ecore_dq_init_common(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	u32 dq_core_max_cid;
+
+	if (!ECORE_IS_E5(p_hwfn->p_dev))
+		return;
+
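+	/* The DORQ range registers hold ICID limits in DQ_RANGE_SHIFT
+	 * granularity; global range #0 is mapped to the CORE connection type.
+	 */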
+	dq_core_max_cid = p_mngr->conn_cfg[PROTOCOLID_CORE].cid_count >>
+			  DQ_RANGE_SHIFT;
+	STORE_RT_REG(p_hwfn, GLB_MAX_ICID_RT_OFFSET(0), dq_core_max_cid);
+
+	/* Range ID #1 is an empty range */
+	STORE_RT_REG(p_hwfn, GLB_MAX_ICID_RT_OFFSET(1), dq_core_max_cid);
+
+	STORE_RT_REG(p_hwfn, GLB_RANGE2CONN_TYPE_RT_OFFSET(0), PROTOCOLID_CORE);
 }
 
 /* DQ PF */
-static void ecore_dq_init_pf(struct ecore_hwfn *p_hwfn)
+static void ecore_dq_init_pf_e4(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	u32 dq_pf_max_cid = 0, dq_vf_max_cid = 0;
@@ -1505,11 +1718,10 @@ static void ecore_dq_init_pf(struct ecore_hwfn *p_hwfn)
 	dq_vf_max_cid += (p_mngr->conn_cfg[5].cids_per_vf >> DQ_RANGE_SHIFT);
 	STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_5_RT_OFFSET, dq_vf_max_cid);
 
-	/* Connection types 6 & 7 are not in use, yet they must be configured
-	 * as the highest possible connection. Not configuring them means the
-	 * defaults will be  used, and with a large number of cids a bug may
-	 * occur, if the defaults will be smaller than dq_pf_max_cid /
-	 * dq_vf_max_cid.
+	/* Connection types 6 & 7 are not in use, but still must be configured
+	 * as the highest possible connection. Not configuring them means that
+	 * the defaults will be used, and with a large number of cids a bug may
+	 * occur, if the defaults are smaller than dq_pf_max_cid/dq_vf_max_cid.
 	 */
 	STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_6_RT_OFFSET, dq_pf_max_cid);
 	STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_6_RT_OFFSET, dq_vf_max_cid);
@@ -1518,6 +1730,90 @@ static void ecore_dq_init_pf(struct ecore_hwfn *p_hwfn)
 	STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_7_RT_OFFSET, dq_vf_max_cid);
 }
 
+#define PRV_MAX_ICID_RT_OFFSET(pfvf, id) \
+	DORQ_REG_PRV_ ## pfvf ##  _MAX_ICID_ ## id ## _RT_OFFSET
+#define PRV_RANGE2CONN_TYPE_RT_OFFSET(pfvf, id) \
+	DORQ_REG_PRV_ ## pfvf ## _RANGE2CONN_TYPE_ ## id ## _RT_OFFSET
+
+static void ecore_dq_init_pf_e5(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	u32 dq_pf_max_cid, dq_vf_max_cid, type;
+
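+	/* Private range layout, as built below: range #2 holds the
+	 * personality's main protocol, range #3 holds ETH (RoCE) or IWARP,
+	 * ranges #4-#5 are left empty, and every limit is cumulative.
+	 */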
+	/* The private ranges should start after the CORE's global range */
+	dq_pf_max_cid = p_mngr->conn_cfg[PROTOCOLID_CORE].cid_count >>
+			DQ_RANGE_SHIFT;
+	dq_vf_max_cid = dq_pf_max_cid;
+
+	/* Range ID #2 */
+	if (ECORE_IS_ISCSI_PERSONALITY(p_hwfn))
+		type = PROTOCOLID_ISCSI;
+	else if (ECORE_IS_FCOE_PERSONALITY(p_hwfn))
+		type = PROTOCOLID_FCOE;
+	else if (ECORE_IS_ROCE_PERSONALITY(p_hwfn))
+		type = PROTOCOLID_ROCE;
+	else /* ETH or ETH_IWARP */
+		type = PROTOCOLID_ETH;
+
+	dq_pf_max_cid += p_mngr->conn_cfg[type].cid_count >> DQ_RANGE_SHIFT;
+	dq_vf_max_cid += p_mngr->conn_cfg[type].cids_per_vf >> DQ_RANGE_SHIFT;
+	STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 2), dq_pf_max_cid);
+	STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(PF, 2), type);
+	STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 2), dq_vf_max_cid);
+	STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(VF, 2), type);
+
+	/* Range ID #3 */
+	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
+		dq_pf_max_cid += p_mngr->conn_cfg[PROTOCOLID_ETH].cid_count >>
+				 DQ_RANGE_SHIFT;
+		dq_vf_max_cid +=
+			p_mngr->conn_cfg[PROTOCOLID_ETH].cids_per_vf >>
+			DQ_RANGE_SHIFT;
+		STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 3),
+			     dq_pf_max_cid);
+		STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(PF, 3),
+			     PROTOCOLID_ETH);
+		STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 3),
+			     dq_vf_max_cid);
+		STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(VF, 3),
+			     PROTOCOLID_ETH);
+	} else if (ECORE_IS_IWARP_PERSONALITY(p_hwfn)) {
+		dq_pf_max_cid += p_mngr->conn_cfg[PROTOCOLID_IWARP].cid_count >>
+				 DQ_RANGE_SHIFT;
+		dq_vf_max_cid +=
+			p_mngr->conn_cfg[PROTOCOLID_IWARP].cids_per_vf >>
+			DQ_RANGE_SHIFT;
+		STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 3),
+			     dq_pf_max_cid);
+		STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(PF, 3),
+			     PROTOCOLID_IWARP);
+		STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 3),
+			     dq_vf_max_cid);
+		STORE_RT_REG(p_hwfn, PRV_RANGE2CONN_TYPE_RT_OFFSET(VF, 3),
+			     PROTOCOLID_IWARP);
+	} else {
+		/* Range ID #3 is an empty range */
+		STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 3),
+			     dq_pf_max_cid);
+		STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 3),
+			     dq_vf_max_cid);
+	}
+
+	/* Range IDs #4 and #5 are empty ranges */
+	STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 4), dq_pf_max_cid);
+	STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 4), dq_vf_max_cid);
+	STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(PF, 5), dq_pf_max_cid);
+	STORE_RT_REG(p_hwfn, PRV_MAX_ICID_RT_OFFSET(VF, 5), dq_vf_max_cid);
+}
+
+static void ecore_dq_init_pf(struct ecore_hwfn *p_hwfn)
+{
+	if (ECORE_IS_E4(p_hwfn->p_dev))
+		ecore_dq_init_pf_e4(p_hwfn);
+	else
+		ecore_dq_init_pf_e5(p_hwfn);
+}
+
 static void ecore_ilt_bounds_init(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_ilt_client_cfg *ilt_clients;
@@ -1529,7 +1825,8 @@ static void ecore_ilt_bounds_init(struct ecore_hwfn *p_hwfn)
 			     ilt_clients[i].first.reg,
 			     ilt_clients[i].first.val);
 		STORE_RT_REG(p_hwfn,
-			     ilt_clients[i].last.reg, ilt_clients[i].last.val);
+			     ilt_clients[i].last.reg,
+			     ilt_clients[i].last.val);
 		STORE_RT_REG(p_hwfn,
 			     ilt_clients[i].p_size.reg,
 			     ilt_clients[i].p_size.val);
@@ -1585,7 +1882,8 @@ static void ecore_ilt_vf_bounds_init(struct ecore_hwfn *p_hwfn)
 	blk_factor = OSAL_LOG2(ILT_PAGE_IN_BYTES(p_cli->p_size.val) >> 10);
 	if (p_cli->active) {
 		STORE_RT_REG(p_hwfn,
-			     PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET, blk_factor);
+			     PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET,
+			     blk_factor);
 		STORE_RT_REG(p_hwfn,
 			     PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET,
 			     p_cli->pf_total_lines);
@@ -1606,8 +1904,8 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
 	ecore_ilt_bounds_init(p_hwfn);
 	ecore_ilt_vf_bounds_init(p_hwfn);
 
 	p_mngr = p_hwfn->p_cxt_mngr;
 	p_shdw = p_mngr->ilt_shadow;
 	clients = p_hwfn->p_cxt_mngr->clients;
 
 	for_each_ilt_valid_client(i, clients) {
@@ -1616,13 +1914,13 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
 		 */
 		line = clients[i].first.val - p_mngr->pf_start_line;
 		rt_offst = PSWRQ2_REG_ILT_MEMORY_RT_OFFSET +
-		    clients[i].first.val * ILT_ENTRY_IN_REGS;
+			   clients[i].first.val * ILT_ENTRY_IN_REGS;
 
 		for (; line <= clients[i].last.val - p_mngr->pf_start_line;
 		     line++, rt_offst += ILT_ENTRY_IN_REGS) {
 			u64 ilt_hw_entry = 0;
 
-			/** p_virt could be OSAL_NULL incase of dynamic
+			/* virt_addr could be OSAL_NULL in case of dynamic
 			 *  allocation
 			 */
 			if (p_shdw[line].virt_addr != OSAL_NULL) {
@@ -1631,12 +1929,10 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
 					  (p_shdw[line].phys_addr >> 12));
 
 				DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
-					"Setting RT[0x%08x] from"
-					" ILT[0x%08x] [Client is %d] to"
-					" Physical addr: 0x%lx\n",
+					"Setting RT[0x%08x] from ILT[0x%08x] [Client is %d] to "
+					"Physical addr: 0x%" PRIx64 "\n",
 					rt_offst, line, i,
-					(unsigned long)(p_shdw[line].
-							phys_addr >> 12));
+					(u64)(p_shdw[line].phys_addr >> 12));
 			}
 
 			STORE_RT_REG_AGG(p_hwfn, rt_offst, ilt_hw_entry);
@@ -1673,65 +1969,114 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 		   conn_num);
 }
 
-/* Timers PF */
-#define TM_CFG_NUM_IDS_SHIFT		0
-#define TM_CFG_NUM_IDS_MASK		0xFFFFULL
-#define TM_CFG_PRE_SCAN_OFFSET_SHIFT	16
-#define TM_CFG_PRE_SCAN_OFFSET_MASK	0x1FFULL
-#define TM_CFG_PARENT_PF_SHIFT		25
-#define TM_CFG_PARENT_PF_MASK		0x7ULL
-
-#define TM_CFG_CID_PRE_SCAN_ROWS_SHIFT	30
-#define TM_CFG_CID_PRE_SCAN_ROWS_MASK	0x1FFULL
+/* Timers PF - configuration memory for the connections and the tasks */
+
+/* Common parts to connections and tasks */
+#define TM_CFG_NUM_IDS_SHIFT			0
+#define TM_CFG_NUM_IDS_MASK			0xFFFFULL
+/* BB */
+#define TM_CFG_PRE_SCAN_OFFSET_BB_SHIFT		16
+#define TM_CFG_PRE_SCAN_OFFSET_BB_MASK		0x1FFULL
+#define TM_CFG_PARENT_PF_BB_SHIFT		25
+#define TM_CFG_PARENT_PF_BB_MASK		0x7ULL
+/* AH */
+#define TM_CFG_PRE_SCAN_OFFSET_AH_SHIFT		16
+#define TM_CFG_PRE_SCAN_OFFSET_AH_MASK		0x3FFULL
+#define TM_CFG_PARENT_PF_AH_SHIFT		26
+#define TM_CFG_PARENT_PF_AH_MASK		0xFULL
+/* E5 */
+#define TM_CFG_PRE_SCAN_OFFSET_E5_SHIFT		16
+	if (vf_id != ECORE_CXT_PF_CID)
+		return p_hwfn->p_cxt_mngr->acquired_vf[type][vf_id].start_cid;
+
+	return p_hwfn->p_cxt_mngr->acquired[type].start_cid;
+/* Connections specific */
+#define TM_CFG_CID_PRE_SCAN_ROWS_SHIFT		30
+#define TM_CFG_CID_PRE_SCAN_ROWS_MASK		0x1FFULL
+
+/* Tasks specific */
+#define TM_CFG_TID_OFFSET_SHIFT			30
+#define TM_CFG_TID_OFFSET_MASK			0x7FFFFULL
+#define TM_CFG_TID_PRE_SCAN_ROWS_SHIFT		49
+#define TM_CFG_TID_PRE_SCAN_ROWS_MASK		0x1FFULL
+
+static void ecore_tm_cfg_set_parent_pf(struct ecore_dev *p_dev, u64 *cfg_word,
+				       u8 val)
+{
+	if (ECORE_IS_BB(p_dev))
+		SET_FIELD(*cfg_word, TM_CFG_PARENT_PF_BB, val);
+	else if (ECORE_IS_AH(p_dev))
+		SET_FIELD(*cfg_word, TM_CFG_PARENT_PF_AH, val);
+	else /* E5 */
+		SET_FIELD(*cfg_word, TM_CFG_PARENT_PF_E5, val);
+}
 
-#define TM_CFG_TID_OFFSET_SHIFT		30
-#define TM_CFG_TID_OFFSET_MASK		0x7FFFFULL
-#define TM_CFG_TID_PRE_SCAN_ROWS_SHIFT	49
-#define TM_CFG_TID_PRE_SCAN_ROWS_MASK	0x1FFULL
+static void ecore_tm_cfg_set_pre_scan_offset(struct ecore_dev *p_dev,
+					     u64 *cfg_word, u8 val)
+{
+	if (ECORE_IS_BB(p_dev))
+		SET_FIELD(*cfg_word, TM_CFG_PRE_SCAN_OFFSET_BB, val);
+	else if (ECORE_IS_AH(p_dev))
+		SET_FIELD(*cfg_word, TM_CFG_PRE_SCAN_OFFSET_AH, val);
+	else /* E5 */
+		SET_FIELD(*cfg_word, TM_CFG_PRE_SCAN_OFFSET_E5, val);
+}
 
 static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	u32 active_seg_mask = 0, tm_offset, rt_reg;
+	u32 *p_cfg_word_32, cfg_word_size;
 	struct ecore_tm_iids tm_iids;
 	u64 cfg_word;
 	u8 i;
 
 	OSAL_MEM_ZERO(&tm_iids, sizeof(tm_iids));
-	ecore_cxt_tm_iids(p_hwfn, p_mngr, &tm_iids);
+	ecore_cxt_tm_iids(p_hwfn, &tm_iids);
 
 	/* @@@TBD No pre-scan for now */
 
-		cfg_word = 0;
-		SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_cids);
-		SET_FIELD(cfg_word, TM_CFG_PARENT_PF, p_hwfn->rel_pf_id);
-	SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
+	cfg_word = 0;
+	SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_cids);
+	ecore_tm_cfg_set_parent_pf(p_hwfn->p_dev, &cfg_word, p_hwfn->rel_pf_id);
+	ecore_tm_cfg_set_pre_scan_offset(p_hwfn->p_dev, &cfg_word, 0);
+	if (ECORE_IS_E4(p_hwfn->p_dev))
 		SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all */
 
+	/* Each CONFIG_CONN_MEM row in E5 is 32 bits and not 64 bits as in E4 */
+	p_cfg_word_32 = (u32 *)&cfg_word;
+	cfg_word_size = ECORE_IS_E4(p_hwfn->p_dev) ? sizeof(cfg_word)
+						   : sizeof(*p_cfg_word_32);
+
 	/* Note: We assume consecutive VFs for a PF */
 	for (i = 0; i < p_mngr->vf_count; i++) {
 		rt_reg = TM_REG_CONFIG_CONN_MEM_RT_OFFSET +
-		    (sizeof(cfg_word) / sizeof(u32)) *
-		    (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
-		STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
+			 (cfg_word_size / sizeof(u32)) *
+			 (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
+		if (ECORE_IS_E4(p_hwfn->p_dev))
+			STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
+		else
+			STORE_RT_REG_AGG(p_hwfn, rt_reg, *p_cfg_word_32);
 	}
 
 	cfg_word = 0;
 	SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.pf_cids);
-	SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
-	SET_FIELD(cfg_word, TM_CFG_PARENT_PF, 0);	/* n/a for PF */
-	SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all   */
+	ecore_tm_cfg_set_parent_pf(p_hwfn->p_dev, &cfg_word, 0); /* n/a for PF */
+	ecore_tm_cfg_set_pre_scan_offset(p_hwfn->p_dev, &cfg_word, 0);
+	if (ECORE_IS_E4(p_hwfn->p_dev))
+		SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all */
 
 	rt_reg = TM_REG_CONFIG_CONN_MEM_RT_OFFSET +
-	    (sizeof(cfg_word) / sizeof(u32)) *
-	    (NUM_OF_VFS(p_hwfn->p_dev) + p_hwfn->rel_pf_id);
-	STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
+		 (cfg_word_size / sizeof(u32)) *
+		 (NUM_OF_VFS(p_hwfn->p_dev) + p_hwfn->rel_pf_id);
+	if (ECORE_IS_E4(p_hwfn->p_dev))
+		STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
+	else
+		STORE_RT_REG_AGG(p_hwfn, rt_reg, *p_cfg_word_32);
 
-	/* enable scan */
+	/* enable scan for PF */
 	STORE_RT_REG(p_hwfn, TM_REG_PF_ENABLE_CONN_RT_OFFSET,
-		     tm_iids.pf_cids ? 0x1 : 0x0);
-
-	/* @@@TBD how to enable the scan for the VFs */
+		     tm_iids.pf_cids ? 0x1 : 0x0);
 
 	tm_offset = tm_iids.per_vf_cids;
 
@@ -1739,14 +2084,15 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
 	for (i = 0; i < p_mngr->vf_count; i++) {
 		cfg_word = 0;
 		SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_tids);
-		SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
-		SET_FIELD(cfg_word, TM_CFG_PARENT_PF, p_hwfn->rel_pf_id);
+		ecore_tm_cfg_set_pre_scan_offset(p_hwfn->p_dev, &cfg_word, 0);
+		ecore_tm_cfg_set_parent_pf(p_hwfn->p_dev, &cfg_word,
+					   p_hwfn->rel_pf_id);
 		SET_FIELD(cfg_word, TM_CFG_TID_OFFSET, tm_offset);
 		SET_FIELD(cfg_word, TM_CFG_TID_PRE_SCAN_ROWS, (u64)0);
 
 		rt_reg = TM_REG_CONFIG_TASK_MEM_RT_OFFSET +
-		    (sizeof(cfg_word) / sizeof(u32)) *
-		    (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
+			 (sizeof(cfg_word) / sizeof(u32)) *
+			 (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
 
 		STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
 	}
@@ -1755,15 +2101,15 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
 	for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++) {
 		cfg_word = 0;
 		SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.pf_tids[i]);
-		SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
-		SET_FIELD(cfg_word, TM_CFG_PARENT_PF, 0);
+		ecore_tm_cfg_set_pre_scan_offset(p_hwfn->p_dev, &cfg_word, 0);
+		ecore_tm_cfg_set_parent_pf(p_hwfn->p_dev, &cfg_word, 0);
 		SET_FIELD(cfg_word, TM_CFG_TID_OFFSET, tm_offset);
 		SET_FIELD(cfg_word, TM_CFG_TID_PRE_SCAN_ROWS, (u64)0);
 
 		rt_reg = TM_REG_CONFIG_TASK_MEM_RT_OFFSET +
-		    (sizeof(cfg_word) / sizeof(u32)) *
-		    (NUM_OF_VFS(p_hwfn->p_dev) +
-		     p_hwfn->rel_pf_id * NUM_TASK_PF_SEGMENTS + i);
+			 (sizeof(cfg_word) / sizeof(u32)) *
+			 (NUM_OF_VFS(p_hwfn->p_dev) +
+			 p_hwfn->rel_pf_id * NUM_TASK_PF_SEGMENTS + i);
 
 		STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
 		active_seg_mask |= (tm_iids.pf_tids[i] ? (1 << i) : 0);
@@ -1771,9 +2117,43 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
 		tm_offset += tm_iids.pf_tids[i];
 	}
 
+	if (ECORE_IS_RDMA_PERSONALITY(p_hwfn))
+		active_seg_mask = 0;
+
 	STORE_RT_REG(p_hwfn, TM_REG_PF_ENABLE_TASK_RT_OFFSET, active_seg_mask);
+}
 
-	/* @@@TBD how to enable the scan for the VFs */
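+/* Zero the TM working memory of a single VF (e.g. as part of the VF FLR
+ * flow) so a re-enabled VF does not see stale timer state; the ILT lines
+ * themselves stay mapped.
+ */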
+void ecore_tm_clear_vf_ilt(struct ecore_hwfn *p_hwfn, u16 vf_idx)
+{
+	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_ilt_client_cfg *p_cli;
+	struct phys_mem_desc *shadow_line;
+	struct ecore_ilt_cli_blk *p_blk;
+	u32 shadow_start_line, line;
+	u32 i;
+
+	p_cli = &p_mngr->clients[ILT_CLI_TM];
+	p_blk = &p_cli->vf_blks[0];
+	line = p_blk->start_line + vf_idx * p_cli->vf_total_lines;
+	shadow_start_line = line - p_mngr->pf_start_line;
+
+	for (i = 0; i < p_cli->vf_total_lines; i++) {
+		shadow_line = &p_mngr->ilt_shadow[shadow_start_line + i];
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_CXT,
+			   "zeroing ILT for VF %d line %d address %p size %d\n",
+			   vf_idx, i, shadow_line->virt_addr, shadow_line->size);
+
+		if (shadow_line->virt_addr != OSAL_NULL)
+			OSAL_MEM_ZERO(shadow_line->virt_addr, shadow_line->size);
+	}
+}
+
+static void ecore_prs_init_common(struct ecore_hwfn *p_hwfn)
+{
+	if ((p_hwfn->hw_info.personality == ECORE_PCI_FCOE) &&
+	    p_hwfn->pf_params.fcoe_pf_params.is_target)
+		STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET, 0);
 }
 
 static void ecore_prs_init_pf(struct ecore_hwfn *p_hwfn)
@@ -1781,6 +2161,7 @@ static void ecore_prs_init_pf(struct ecore_hwfn *p_hwfn)
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_conn_type_cfg *p_fcoe;
 	struct ecore_tid_seg *p_tid;
+	u32 max_tid;
 
 	p_fcoe = &p_mngr->conn_cfg[PROTOCOLID_FCOE];
 
@@ -1789,15 +2170,23 @@ static void ecore_prs_init_pf(struct ecore_hwfn *p_hwfn)
 		return;
 
 	p_tid = &p_fcoe->tid_seg[ECORE_CXT_FCOE_TID_SEG];
-	STORE_RT_REG_AGG(p_hwfn,
-			PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET,
-			p_tid->count);
+	max_tid = p_tid->count - 1;
+	if (p_hwfn->pf_params.fcoe_pf_params.is_target) {
+		STORE_RT_REG_AGG(p_hwfn,
+				 PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET,
+				 max_tid);
+	} else {
+		STORE_RT_REG_AGG(p_hwfn,
+				PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET,
+				max_tid);
+	}
 }
 
 void ecore_cxt_hw_init_common(struct ecore_hwfn *p_hwfn)
 {
-	/* CDU configuration */
 	ecore_cdu_init_common(p_hwfn);
+	ecore_prs_init_common(p_hwfn);
+	ecore_dq_init_common(p_hwfn);
 }
 
 void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
@@ -1807,7 +2196,10 @@ void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	ecore_dq_init_pf(p_hwfn);
 	ecore_cdu_init_pf(p_hwfn);
 	ecore_ilt_init_pf(p_hwfn);
-	ecore_src_init_pf(p_hwfn);
+
+	if (!ECORE_IS_E5(p_hwfn->p_dev))
+		ecore_src_init_pf(p_hwfn);
+
 	ecore_tm_init_pf(p_hwfn);
 	ecore_prs_init_pf(p_hwfn);
 }
@@ -1850,7 +2242,7 @@ enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
 		return ECORE_NORESOURCES;
 	}
 
-	OSAL_SET_BIT(rel_cid, p_map->cid_map);
+	OSAL_NON_ATOMIC_SET_BIT(rel_cid, p_map->cid_map);
 
 	*p_cid = rel_cid + p_map->start_cid;
 
@@ -1890,15 +2282,16 @@ static bool ecore_cxt_test_cid_acquired(struct ecore_hwfn *p_hwfn,
 			break;
 		}
 	}
+
 	if (*p_type == MAX_CONN_TYPES) {
-		DP_NOTICE(p_hwfn, true, "Invalid CID %d vfid %02x", cid, vfid);
+		DP_NOTICE(p_hwfn, false, "Invalid CID %d vfid %02x\n", cid, vfid);
 		goto fail;
 	}
 
 	rel_cid = cid - (*pp_map)->start_cid;
-	if (!OSAL_GET_BIT(rel_cid, (*pp_map)->cid_map)) {
-		DP_NOTICE(p_hwfn, true,
-			  "CID %d [vifd %02x] not acquired", cid, vfid);
+	if (!OSAL_TEST_BIT(rel_cid, (*pp_map)->cid_map)) {
+		DP_NOTICE(p_hwfn, false,
+			  "CID %d [vfid %02x] not acquired\n", cid, vfid);
 		goto fail;
 	}
 
@@ -1975,29 +2368,38 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 
 	p_info->p_cxt = (u8 *)p_mngr->ilt_shadow[line].virt_addr +
-	    p_info->iid % cxts_per_p * conn_cxt_size;
+			      p_info->iid % cxts_per_p * conn_cxt_size;
 
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_ILT | ECORE_MSG_CXT),
-		"Accessing ILT shadow[%d]: CXT pointer is at %p (for iid %d)\n",
-		(p_info->iid / cxts_per_p), p_info->p_cxt, p_info->iid);
+		   "Accessing ILT shadow[%d]: CXT pointer is at %p (for iid %d)\n",
+		   (p_info->iid / cxts_per_p), p_info->p_cxt, p_info->iid);
 
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
+enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn,
+					     u32 rdma_tasks, u32 eth_tasks)
 {
+	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+
 	/* Set the number of required CORE connections */
-	u32 core_cids = 1;	/* SPQ */
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u32 core_cids = 1; /* SPQ */
+	u32 vf_core_cids = 0;
 
-	ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_CORE, core_cids, 0);
+	ecore_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_CORE, core_cids,
+				      vf_core_cids);
 
 	switch (p_hwfn->hw_info.personality) {
 	case ECORE_PCI_ETH:
-		{
+	{
 		u32 count = 0;
 
 		struct ecore_eth_pf_params *p_params =
-			    &p_hwfn->pf_params.eth_pf_params;
+					&p_hwfn->pf_params.eth_pf_params;
+
+		p_mngr->task_ctx_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
 
 		if (!p_params->num_vf_cons)
 			p_params->num_vf_cons = ETH_PF_PARAMS_VF_CONS_DEFAULT;
@@ -2005,17 +2407,108 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 					      p_params->num_cons,
 					      p_params->num_vf_cons);
 
+		ecore_cxt_set_proto_tid_count(p_hwfn, PROTOCOLID_ETH,
+					      ECORE_CXT_ETH_TID_SEG,
+					      ETH_CDU_TASK_SEG_TYPE,
+					      eth_tasks, false);
+
+#ifdef CONFIG_ECORE_FS
+		if (ECORE_IS_E5(p_hwfn->p_dev))
+			p_hwfn->fs_info.e5->tid_count = eth_tasks;
+#endif
+
 		count = p_params->num_arfs_filters;
 
-		if (!OSAL_GET_BIT(ECORE_MF_DISABLE_ARFS,
+		if (!OSAL_TEST_BIT(ECORE_MF_DISABLE_ARFS,
 				   &p_hwfn->p_dev->mf_bits))
 			p_hwfn->p_cxt_mngr->arfs_count = count;
 
 		break;
-		}
+	}
 	default:
+		rc = ECORE_INVAL;
+	}
+
+	return rc;
+}
+
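+/* Export the force-load TID memory layout of the storage personality's task
+ * segment: per-block virtual addresses, TIDs per block and per-page waste.
+ */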
+enum _ecore_status_t ecore_cxt_get_tid_mem_info(struct ecore_hwfn *p_hwfn,
+						struct ecore_tid_mem *p_info)
+{
+	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	u32 proto, seg, total_lines, i, shadow_line;
+	struct ecore_ilt_client_cfg *p_cli;
+	struct ecore_ilt_cli_blk *p_fl_seg;
+	struct ecore_tid_seg *p_seg_info;
+
+	/* Verify the personality */
+	switch (p_hwfn->hw_info.personality) {
+	case ECORE_PCI_FCOE:
+		proto = PROTOCOLID_FCOE;
+		seg = ECORE_CXT_FCOE_TID_SEG;
+		break;
+	case ECORE_PCI_ISCSI:
+		proto = PROTOCOLID_ISCSI;
+		seg = ECORE_CXT_ISCSI_TID_SEG;
+		break;
+	default:
+		return ECORE_INVAL;
+	}
+
+	p_cli = &p_mngr->clients[ILT_CLI_CDUT];
+	if (!p_cli->active)
+		return ECORE_INVAL;
+
+	p_seg_info = &p_mngr->conn_cfg[proto].tid_seg[seg];
+	if (!p_seg_info->has_fl_mem)
 		return ECORE_INVAL;
+
+	p_fl_seg = &p_cli->pf_blks[CDUT_FL_SEG_BLK(seg, PF)];
+	total_lines = DIV_ROUND_UP(p_fl_seg->total_size,
+				   p_fl_seg->real_size_in_page);
+
+	for (i = 0; i < total_lines; i++) {
+		shadow_line = i + p_fl_seg->start_line -
+			      p_hwfn->p_cxt_mngr->pf_start_line;
+		p_info->blocks[i] = p_mngr->ilt_shadow[shadow_line].virt_addr;
 	}
+	p_info->waste = ILT_PAGE_IN_BYTES(p_cli->p_size.val) -
+			p_fl_seg->real_size_in_page;
+	p_info->tid_size = p_mngr->task_type_size[p_seg_info->type];
+	p_info->num_tids_per_block = p_fl_seg->real_size_in_page /
+				     p_info->tid_size;
+
+	return ECORE_SUCCESS;
+}
+
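+/* Map a context element type to its ILT client, its PF or VF block and the
+ * element size.
+ */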
+static enum _ecore_status_t
+ecore_cxt_get_iid_info(struct ecore_hwfn *p_hwfn,
+		       enum ecore_cxt_elem_type elem_type,
+		       struct ecore_ilt_client_cfg **pp_cli,
+		       struct ecore_ilt_cli_blk **pp_blk,
+		       u32 *elem_size, bool is_vf)
+{
+	struct ecore_ilt_client_cfg *p_cli;
+
+	switch (elem_type) {
+	case ECORE_ELEM_CXT:
+		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
+		*elem_size = CONN_CXT_SIZE(p_hwfn);
+		*pp_blk = is_vf ? &p_cli->vf_blks[CDUC_BLK] :
+			  &p_cli->pf_blks[CDUC_BLK];
+		break;
+	case ECORE_ELEM_ETH_TASK:
+		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
+		*elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
+		*pp_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ETH_TID_SEG)];
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
+			  "ECORE_INVALID elem type = %d\n", elem_type);
+		return ECORE_INVAL;
+	}
+
+	*pp_cli = p_cli;
 
 	return ECORE_SUCCESS;
 }
@@ -2026,8 +2519,13 @@ enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
 enum _ecore_status_t
 ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 			    enum ecore_cxt_elem_type elem_type,
-			    u32 iid)
+			    u32 iid, u8 vf_id)
 {
+	/* TODO
+	 * Check to see if we need to do anything differently if this is
+	 * called on behalf of a VF.
+	 */
+
 	u32 reg_offset, shadow_line, elem_size, hw_p_size, elems_per_p, line;
 	struct ecore_ilt_client_cfg *p_cli;
 	struct ecore_ilt_cli_blk *p_blk;
@@ -2036,33 +2534,25 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 	u64 ilt_hw_entry;
 	void *p_virt;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
+	bool is_vf = (vf_id != ECORE_CXT_PF_CID);
 
-	switch (elem_type) {
-	case ECORE_ELEM_CXT:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
-		elem_size = CONN_CXT_SIZE(p_hwfn);
-		p_blk = &p_cli->pf_blks[CDUC_BLK];
-		break;
-	case ECORE_ELEM_SRQ:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM];
-		elem_size = SRQ_CXT_SIZE;
-		p_blk = &p_cli->pf_blks[SRQ_BLK];
-		break;
-	case ECORE_ELEM_TASK:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
-		elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
-		p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)];
-		break;
-	default:
-		DP_NOTICE(p_hwfn, false,
-			  "ECORE_INVALID elem type = %d", elem_type);
-		return ECORE_INVAL;
-	}
+	rc = ecore_cxt_get_iid_info(p_hwfn, elem_type, &p_cli, &p_blk,
+				    &elem_size, is_vf);
+	if (rc)
+		return rc;
 
 	/* Calculate line in ilt */
 	hw_p_size = p_cli->p_size.val;
 	elems_per_p = ILT_PAGE_IN_BYTES(hw_p_size) / elem_size;
-	line = p_blk->start_line + (iid / elems_per_p);
+	if (is_vf)
+		/* start_line - where the VF section starts (p_blk is the VF's
+		 * one); (vf_id * p_cli->vf_total_lines) - where this VF starts
+		 */
+		line = p_blk->start_line +
+		       (vf_id * p_cli->vf_total_lines) + (iid / elems_per_p);
+	else
+		line = p_blk->start_line + (iid / elems_per_p);
+
 	shadow_line = line - p_hwfn->p_cxt_mngr->pf_start_line;
 
 	/* If line is already allocated, do nothing, otherwise allocate it and
@@ -2070,7 +2560,9 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 	 * This section can be run in parallel from different contexts and thus
 	 * a mutex protection is needed.
 	 */
-
+#ifdef _NTDDK_
+#pragma warning(suppress : 28121)
+#endif
 	OSAL_MUTEX_ACQUIRE(&p_hwfn->p_cxt_mngr->mutex);
 
 	if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr)
@@ -2106,14 +2598,13 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL);
 	SET_FIELD(ilt_hw_entry,
 		  ILT_ENTRY_PHY_ADDR,
-		 (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr >> 12));
-
-/* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */
+		  (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr >>
+		   12));
 
+	/* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */
 	ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&ilt_hw_entry,
 			    reg_offset, sizeof(ilt_hw_entry) / sizeof(u32),
 			    OSAL_NULL /* default parameters */);
-
 out1:
 	ecore_ptt_release(p_hwfn, p_ptt);
 out0:
@@ -2125,77 +2616,119 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 /* This function is very RoCE oriented, if another protocol in the future
  * will want this feature we'll need to modify the function to be more generic
  */
-static enum _ecore_status_t
+enum _ecore_status_t
 ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
 			 enum ecore_cxt_elem_type elem_type,
-			 u32 start_iid, u32 count)
+			 u32 start_iid, u32 count, u8 vf_id)
 {
-	u32 start_line, end_line, shadow_start_line, shadow_end_line;
+	u32 end_iid, start_line, end_line, shadow_start_line, shadow_end_line;
+	u32 start_offset, end_offset, start_iid_offset, end_iid_offset;
 	u32 reg_offset, elem_size, hw_p_size, elems_per_p;
+	bool b_skip_start = false, b_skip_end = false;
+	bool is_vf = (vf_id != ECORE_CXT_PF_CID);
 	struct ecore_ilt_client_cfg *p_cli;
 	struct ecore_ilt_cli_blk *p_blk;
-	u32 end_iid = start_iid + count;
+	struct phys_mem_desc *ilt_page;
 	struct ecore_ptt *p_ptt;
 	u64 ilt_hw_entry = 0;
-	u32 i;
+	u32 i, abs_line;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	switch (elem_type) {
-	case ECORE_ELEM_CXT:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
-		elem_size = CONN_CXT_SIZE(p_hwfn);
-		p_blk = &p_cli->pf_blks[CDUC_BLK];
-		break;
-	case ECORE_ELEM_SRQ:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM];
-		elem_size = SRQ_CXT_SIZE;
-		p_blk = &p_cli->pf_blks[SRQ_BLK];
-		break;
-	case ECORE_ELEM_TASK:
-		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
-		elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
-		p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)];
-		break;
-	default:
-		DP_NOTICE(p_hwfn, false,
-			  "ECORE_INVALID elem type = %d", elem_type);
-		return ECORE_INVAL;
-	}
+	/* in case this client has no ILT lines, no need to free anything */
+	if (count == 0)
+		return ECORE_SUCCESS;
 
-	/* Calculate line in ilt */
+	rc = ecore_cxt_get_iid_info(p_hwfn, elem_type, &p_cli, &p_blk,
+				    &elem_size, is_vf);
+	if (rc)
+		return rc;
+
+	/* Calculate lines in ILT.
+	 * Skip the start line if 'start_iid' is not the first element in page.
+	 * Skip the end line if 'end_iid' is not the last element in page.
+	 */
 	hw_p_size = p_cli->p_size.val;
 	elems_per_p = ILT_PAGE_IN_BYTES(hw_p_size) / elem_size;
+	end_iid = start_iid + count - 1;
+
 	start_line = p_blk->start_line + (start_iid / elems_per_p);
 	end_line = p_blk->start_line + (end_iid / elems_per_p);
-	if (((end_iid + 1) / elems_per_p) != (end_iid / elems_per_p))
-		end_line--;
+
+	if (is_vf) {
+		start_line += (vf_id * p_cli->vf_total_lines);
+		end_line += (vf_id * p_cli->vf_total_lines);
+	}
+
+	if (start_iid % elems_per_p)
+		b_skip_start = true;
+
+	if ((end_iid % elems_per_p) != (elems_per_p - 1))
+		b_skip_end = true;
+
+	start_iid_offset = (start_iid % elems_per_p) * elem_size;
+	end_iid_offset = ((end_iid % elems_per_p) + 1) * elem_size;
 
 	shadow_start_line = start_line - p_hwfn->p_cxt_mngr->pf_start_line;
 	shadow_end_line = end_line - p_hwfn->p_cxt_mngr->pf_start_line;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt) {
-		DP_NOTICE(p_hwfn, false,
-			  "ECORE_TIME_OUT on ptt acquire - dynamic allocation");
+		DP_NOTICE(p_hwfn, false, "ECORE_TIME_OUT on ptt acquire - dynamic allocation");
 		return ECORE_TIMEOUT;
 	}
 
-	for (i = shadow_start_line; i < shadow_end_line; i++) {
-		if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr)
+	/* This piece of code takes care of freeing the ILT specific range, as
+	 * well as setting it to zero.
+	 * In case the start or the end lines of the range share other iids,
+	 * they should not be freed, but only be set to zero.
+	 * The reason for zeroing the lines is to prevent future access to the
+	 * old iids.
+	 *
+	 * For example, let's assume the RDMA tids of VF0 occupy 3.5 lines.
+	 * Now we run a test which uses all the tids and then perform VF FLR.
+	 * During the FLR we will free the first 3 lines but not the 4th.
+	 * If we don't zero the first half of the 4th line, a new VF0 might try
+	 * to use the old tids which are stored there, and this will lead to an
+	 * error.
+	 */
+	for (i = shadow_start_line; i <= shadow_end_line; i++) {
+		ilt_page = &p_hwfn->p_cxt_mngr->ilt_shadow[i];
+
+		if (!ilt_page->virt_addr) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+				   "Virtual address of ILT shadow line %u is NULL\n", i);
+			continue;
+		}
+
+		start_offset = (i == shadow_start_line && b_skip_start) ?
+			       start_iid_offset : 0;
+		end_offset = (i == shadow_end_line && b_skip_end) ?
+			     end_iid_offset : ilt_page->size;
+
+		OSAL_MEM_ZERO((u8 *)ilt_page->virt_addr + start_offset,
+			      end_offset - start_offset);
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
+			   "Zeroing shadow line %u start offset %x end offset %x\n",
+			   i, start_offset, end_offset);
+
+		if (end_offset - start_offset < ilt_page->size)
 			continue;
 
 		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
-				    p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr,
-				    p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr,
-				    p_hwfn->p_cxt_mngr->ilt_shadow[i].size);
+				       ilt_page->virt_addr,
+				       ilt_page->phys_addr,
+				       ilt_page->size);
 
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr = OSAL_NULL;
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr = 0;
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].size = 0;
+		ilt_page->virt_addr = OSAL_NULL;
+		ilt_page->phys_addr = 0;
+		ilt_page->size = 0;
 
 		/* compute absolute offset */
+		abs_line = p_hwfn->p_cxt_mngr->pf_start_line + i;
 		reg_offset = PSWRQ2_REG_ILT_MEMORY +
-		    ((start_line++) * ILT_REG_SIZE_IN_BYTES *
-		     ILT_ENTRY_IN_REGS);
+			     (abs_line * ILT_REG_SIZE_IN_BYTES *
+			      ILT_ENTRY_IN_REGS);
 
 		/* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a
 		 * wide-bus.
@@ -2212,6 +2745,75 @@ ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
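
For reviewers, the partial-line bookkeeping above is compact but easy to misread. A minimal standalone sketch of the same arithmetic (plain C, invented page/element sizes and helper names; not part of the patch):

    /* Standalone model of the range-to-lines mapping above; sizes invented. */
    #include <stdbool.h>
    #include <stdio.h>

    struct range {
    	unsigned int start_line, end_line;	/* inclusive line span */
    	bool skip_start, skip_end;		/* shared lines: zero only */
    	unsigned int start_off, end_off;	/* byte offsets in those lines */
    };

    static struct range map_range(unsigned int blk_start_line,
    			      unsigned int start_iid, unsigned int count,
    			      unsigned int elems_per_p, unsigned int elem_size)
    {
    	unsigned int end_iid = start_iid + count - 1;
    	struct range r;

    	r.start_line = blk_start_line + start_iid / elems_per_p;
    	r.end_line = blk_start_line + end_iid / elems_per_p;
    	r.skip_start = (start_iid % elems_per_p) != 0;
    	r.skip_end = (end_iid % elems_per_p) != elems_per_p - 1;
    	r.start_off = (start_iid % elems_per_p) * elem_size;
    	r.end_off = ((end_iid % elems_per_p) + 1) * elem_size;
    	return r;
    }

    int main(void)
    {
    	/* 4 elements per 256 B page: iids 2..9 span lines 0..2 */
    	struct range r = map_range(0, 2, 8, 4, 64);

    	printf("lines %u..%u, skip_start %d, skip_end %d, offs %u..%u\n",
    	       r.start_line, r.end_line, r.skip_start, r.skip_end,
    	       r.start_off, r.end_off);
    	return 0;
    }

Here lines 0 and 2 are shared with neighbouring iids, so they are only zeroed (from byte 128, and up to byte 128, respectively), while line 1 is freed outright.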
 
+enum _ecore_status_t ecore_cxt_get_task_ctx(struct ecore_hwfn *p_hwfn,
+					    u32 tid,
+					    u8 ctx_type,
+					    void **pp_task_ctx)
+{
+	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_ilt_client_cfg *p_cli;
+	struct ecore_tid_seg *p_seg_info;
+	struct ecore_ilt_cli_blk *p_seg;
+	u32 num_tids_per_block;
+	u32 tid_size, ilt_idx;
+	u32 total_lines;
+	u32 proto, seg;
+
+	/* Verify the personality */
+	switch (p_hwfn->hw_info.personality) {
+	case ECORE_PCI_FCOE:
+		proto = PROTOCOLID_FCOE;
+		seg = ECORE_CXT_FCOE_TID_SEG;
+		break;
+	case ECORE_PCI_ISCSI:
+		proto = PROTOCOLID_ISCSI;
+		seg = ECORE_CXT_ISCSI_TID_SEG;
+		break;
+	case ECORE_PCI_ETH_RDMA:
+	case ECORE_PCI_ETH_IWARP:
+	case ECORE_PCI_ETH_ROCE:
+	case ECORE_PCI_ETH:
+		/* All ETH personalities refer to Ethernet TIDs since RDMA does
+		 * not use this API.
+		 */
+		proto = PROTOCOLID_ETH;
+		seg = ECORE_CXT_ETH_TID_SEG;
+		break;
+	default:
+		return ECORE_INVAL;
+	}
+
+	p_cli = &p_mngr->clients[ILT_CLI_CDUT];
+	if (!p_cli->active)
+		return ECORE_INVAL;
+
+	p_seg_info = &p_mngr->conn_cfg[proto].tid_seg[seg];
+
+	if (ctx_type == ECORE_CTX_WORKING_MEM) {
+		p_seg = &p_cli->pf_blks[CDUT_SEG_BLK(seg)];
+	} else if (ctx_type == ECORE_CTX_FL_MEM) {
+		if (!p_seg_info->has_fl_mem)
+			return ECORE_INVAL;
+		p_seg = &p_cli->pf_blks[CDUT_FL_SEG_BLK(seg, PF)];
+	} else {
+		return ECORE_INVAL;
+	}
+	total_lines = DIV_ROUND_UP(p_seg->total_size,
+				   p_seg->real_size_in_page);
+	tid_size = p_mngr->task_type_size[p_seg_info->type];
+	num_tids_per_block = p_seg->real_size_in_page / tid_size;
+
+	if (total_lines < tid / num_tids_per_block)
+		return ECORE_INVAL;
+
+	ilt_idx = tid / num_tids_per_block + p_seg->start_line -
+		  p_mngr->pf_start_line;
+	*pp_task_ctx = (u8 *)p_mngr->ilt_shadow[ilt_idx].virt_addr +
+			     (tid % num_tids_per_block) * tid_size;
+
+	return ECORE_SUCCESS;
+}
+
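
For reference, the tid lookup above boils down to two divisions; a toy example with invented sizes (4 KB real page, 128 B task context):

    #include <stdio.h>

    int main(void)
    {
    	/* Invented sizes: 4 KB ILT page, 128 B task context */
    	unsigned int tid = 100, page = 4096, tid_size = 128;
    	unsigned int per_block = page / tid_size;	/* 32 tids per line */

    	printf("shadow line %u, byte offset %u\n",
    	       tid / per_block, (tid % per_block) * tid_size);
    	return 0;
    }

This prints "shadow line 3, byte offset 512", i.e. tid 100 lives 512 bytes into the fourth shadow line of the segment.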
 static u16 ecore_blk_calculate_pages(struct ecore_ilt_cli_blk *p_blk)
 {
 	if (p_blk->real_size_in_page == 0)
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index cd0e32e3b..fe61a8b8f 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -1,25 +1,35 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef _ECORE_CID_
 #define _ECORE_CID_
 
 #include "ecore_hsi_common.h"
+#include "ecore_hsi_eth.h"
 #include "ecore_proto_if.h"
 #include "ecore_cxt_api.h"
 
-/* Tasks segments definitions  */
-#define ECORE_CXT_ISCSI_TID_SEG			PROTOCOLID_ISCSI	/* 0 */
-#define ECORE_CXT_FCOE_TID_SEG			PROTOCOLID_FCOE		/* 1 */
-#define ECORE_CXT_ROCE_TID_SEG			PROTOCOLID_ROCE		/* 2 */
+/* Task segment definitions (keeping this numbering is necessary) */
+#define ECORE_CXT_ISCSI_TID_SEG			0	/* PROTOCOLID_ISCSI */
+#define ECORE_CXT_FCOE_TID_SEG			1	/* PROTOCOLID_FCOE */
+#define ECORE_CXT_ROCE_TID_SEG			2	/* PROTOCOLID_ROCE */
+#define ECORE_CXT_ETH_TID_SEG			3
 
 enum ecore_cxt_elem_type {
 	ECORE_ELEM_CXT,
 	ECORE_ELEM_SRQ,
-	ECORE_ELEM_TASK
+	ECORE_ELEM_RDMA_TASK,
+	ECORE_ELEM_ETH_TASK,
+	ECORE_ELEM_XRC_SRQ,
+};
+
+/* @DPDK */
+enum ecore_iov_is_vf_or_pf {
+	IOV_PF	= 0,	/* This is a PF instance. */
+	IOV_VF	= 1	/* This is a VF instance. */
 };
 
 u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
@@ -27,11 +37,12 @@ u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
 				  u32 *vf_cid);
 
 u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn,
-				  enum protocol_type type);
+				  enum protocol_type type,
+				  u8 vf_id);
 
 u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
-				  enum protocol_type type);
-u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
+				  enum protocol_type type,
+				  u8 vf_id);
 
 /**
  * @brief ecore_cxt_set_pf_params - Set the PF params for cxt init
@@ -40,16 +51,27 @@ u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn);
+enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn,
+					     u32 rdma_tasks, u32 eth_tasks);
 
 /**
  * @brief ecore_cxt_cfg_ilt_compute - compute ILT init parameters
  *
  * @param p_hwfn
+ * @param last_line
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn);
+enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn,
+					       u32 *last_line);
+
+/**
+ * @brief ecore_cxt_cfg_ilt_compute_excess - compute by how many lines the ILT can be reduced
+ *
+ * @param p_hwfn
+ * @param used_lines
+ */
+u32 ecore_cxt_cfg_ilt_compute_excess(struct ecore_hwfn *p_hwfn, u32 used_lines);
 
 /**
  * @brief ecore_cxt_mngr_alloc - Allocate and init the context manager struct
@@ -68,8 +90,7 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn);
 void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_cxt_tables_alloc - Allocate ILT shadow, Searcher T2, acquired
- *        map
+ * @brief ecore_cxt_tables_alloc - Allocate ILT shadow, Searcher T2, acquired map
  *
  * @param p_hwfn
  *
@@ -85,8 +106,7 @@ enum _ecore_status_t ecore_cxt_tables_alloc(struct ecore_hwfn *p_hwfn);
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn);
 
 /**
- * @brief ecore_cxt_hw_init_common - Initailze ILT and DQ, common phase, per
- *        path.
+ * @brief ecore_cxt_hw_init_common - Initialize ILT and DQ, common phase, per path.
  *
  * @param p_hwfn
  */
@@ -121,7 +141,16 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt);
 
-#define ECORE_CXT_PF_CID (0xff)
+/**
+ * @brief Reconfigures QM from a non-sleepable context.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_qm_reconf_intr(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt);
 
 /**
  * @brief ecore_cxt_release - Release a cid
@@ -177,28 +206,39 @@ enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
  * @param p_hwfn
  * @param elem_type
  * @param iid
+ * @param vf_id
  *
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t
 ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 			    enum ecore_cxt_elem_type elem_type,
-			    u32 iid);
+			    u32 iid, u8 vf_id);
 
 /**
- * @brief ecore_cxt_free_proto_ilt - function frees ilt pages
- *        associated with the protocol passed.
+ * @brief ecore_cxt_free_ilt_range - function frees ilt pages
+ *        associated with the protocol and element type passed.
  *
  * @param p_hwfn
  * @param proto
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_cxt_free_proto_ilt(struct ecore_hwfn *p_hwfn,
-					      enum protocol_type proto);
+enum _ecore_status_t
+ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
+			 enum ecore_cxt_elem_type elem_type,
+			 u32 start_iid, u32 count, u8 vf_id);
 
 #define ECORE_CTX_WORKING_MEM 0
 #define ECORE_CTX_FL_MEM 1
+enum _ecore_status_t ecore_cxt_get_task_ctx(struct ecore_hwfn *p_hwfn,
+					    u32 tid,
+					    u8 ctx_type,
+					    void **pp_task_ctx);
+
+u32 ecore_cxt_get_ilt_page_size(struct ecore_hwfn *p_hwfn);
+
+u32 ecore_cxt_get_total_srq_count(struct ecore_hwfn *p_hwfn);
 
 /* Max number of connection types in HW (DQ/CDU etc.) */
 #define MAX_CONN_TYPES		PROTOCOLID_COMMON
@@ -206,20 +246,20 @@ enum _ecore_status_t ecore_cxt_free_proto_ilt(struct ecore_hwfn *p_hwfn,
 #define NUM_TASK_PF_SEGMENTS	4
 #define NUM_TASK_VF_SEGMENTS	1
 
-/* PF per protocol configuration object */
+/* PF per protocol configuration object */
 #define TASK_SEGMENTS   (NUM_TASK_PF_SEGMENTS + NUM_TASK_VF_SEGMENTS)
 #define TASK_SEGMENT_VF (NUM_TASK_PF_SEGMENTS)
 
 struct ecore_tid_seg {
-	u32 count;
-	u8 type;
-	bool has_fl_mem;
+	u32	count;
+	u8	type;
+	bool	has_fl_mem;
 };
 
 struct ecore_conn_type_cfg {
-	u32 cid_count;
-	u32 cids_per_vf;
-	struct ecore_tid_seg tid_seg[TASK_SEGMENTS];
+	u32			cid_count;
+	u32			cids_per_vf;
+	struct ecore_tid_seg	tid_seg[TASK_SEGMENTS];
 };
 
 /* ILT Client configuration,
@@ -240,7 +280,7 @@ struct ilt_cfg_pair {
 };
 
 struct ecore_ilt_cli_blk {
-	u32 total_size;		/* 0 means not active */
+	u32 total_size; /* 0 means not active */
 	u32 real_size_in_page;
 	u32 start_line;
 	u32 dynamic_line_offset;
@@ -248,29 +288,29 @@ struct ecore_ilt_cli_blk {
 };
 
 struct ecore_ilt_client_cfg {
-	bool active;
+	bool				active;
 
 	/* ILT boundaries */
-	struct ilt_cfg_pair first;
-	struct ilt_cfg_pair last;
-	struct ilt_cfg_pair p_size;
+	struct ilt_cfg_pair		first;
+	struct ilt_cfg_pair		last;
+	struct ilt_cfg_pair		p_size;
 
 	/* ILT client blocks for PF */
-	struct ecore_ilt_cli_blk pf_blks[ILT_CLI_PF_BLOCKS];
-	u32 pf_total_lines;
+	struct ecore_ilt_cli_blk	pf_blks[ILT_CLI_PF_BLOCKS];
+	u32				pf_total_lines;
 
 	/* ILT client blocks for VFs */
-	struct ecore_ilt_cli_blk vf_blks[ILT_CLI_VF_BLOCKS];
-	u32 vf_total_lines;
+	struct ecore_ilt_cli_blk	vf_blks[ILT_CLI_VF_BLOCKS];
+	u32				vf_total_lines;
 };
 
 #define MAP_WORD_SIZE		sizeof(unsigned long)
 #define BITS_PER_MAP_WORD	(MAP_WORD_SIZE * 8)
 
 struct ecore_cid_acquired_map {
-	u32 start_cid;
-	u32 max_count;
-	u32 *cid_map;
+	u32		start_cid;
+	u32		max_count;
+	u32		*cid_map; /* @DPDK */
 };
 
 struct ecore_src_t2 {
@@ -281,7 +321,7 @@ struct ecore_src_t2 {
 };
 
 struct ecore_cxt_mngr {
-	/* Per protocol configuration */
+	/* Per protocol configuration */
 	struct ecore_conn_type_cfg	conn_cfg[MAX_CONN_TYPES];
 
 	/* computed ILT structure */
@@ -300,15 +340,14 @@ struct ecore_cxt_mngr {
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
 	struct ecore_cid_acquired_map *acquired_vf[MAX_CONN_TYPES];
 
-	/* ILT  shadow table */
+	/* ILT shadow table */
 	struct phys_mem_desc		*ilt_shadow;
 	u32				ilt_shadow_size;
 	u32				pf_start_line;
 
 	/* Mutex for a dynamic ILT allocation */
-	osal_mutex_t mutex;
+	osal_mutex_t			mutex;
 
-	/* SRC T2 */
 	struct ecore_src_t2		src_t2;
 
 	/* The infrastructure originally was very generic and context/task
@@ -317,19 +356,24 @@ struct ecore_cxt_mngr {
 	 * needing for a given block we'd iterate over all the relevant
 	 * connection-types.
 	 * But since then we've had some additional resources, some of which
-	 * require memory which is independent of the general context/task
+	 * require memory which is independent of the general context/task
 	 * scheme. We add those here explicitly per-feature.
 	 */
 
 	/* total number of SRQ's for this hwfn */
 	u32				srq_count;
+	u32				xrc_srq_count;
+	u32				vfs_srq_count;
 
 	/* Maximal number of L2 steering filters */
 	u32				arfs_count;
 
 	/* TODO - VF arfs filters ? */
 
-	u8				task_type_id;
+	u16				iscsi_task_pages;
+	u16				fcoe_task_pages;
+	u16				roce_task_pages;
+	u16				eth_task_pages;
 	u16				task_ctx_size;
 	u16				conn_ctx_size;
 };
@@ -338,4 +382,6 @@ u16 ecore_get_cdut_num_pf_init_pages(struct ecore_hwfn *p_hwfn);
 u16 ecore_get_cdut_num_vf_init_pages(struct ecore_hwfn *p_hwfn);
 u16 ecore_get_cdut_num_pf_work_pages(struct ecore_hwfn *p_hwfn);
 u16 ecore_get_cdut_num_vf_work_pages(struct ecore_hwfn *p_hwfn);
+
+void ecore_tm_clear_vf_ilt(struct ecore_hwfn *p_hwfn, u16 vf_idx);
 #endif /* _ECORE_CID_ */
diff --git a/drivers/net/qede/base/ecore_cxt_api.h b/drivers/net/qede/base/ecore_cxt_api.h
index 6c8b2831c..fa85a0dc8 100644
--- a/drivers/net/qede/base/ecore_cxt_api.h
+++ b/drivers/net/qede/base/ecore_cxt_api.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_CXT_API_H__
 #define __ECORE_CXT_API_H__
 
@@ -24,15 +24,26 @@ struct ecore_tid_mem {
 };
 
 /**
-* @brief ecoreo_cid_get_cxt_info - Returns the context info for a specific cid
-*
-*
-* @param p_hwfn
-* @param p_info in/out
-*
-* @return enum _ecore_status_t
-*/
+ * @brief ecore_cxt_get_cid_info - Returns the context info for a specific cid
+ *
+ *
+ * @param p_hwfn
+ * @param p_info in/out
+ *
+ * @return enum _ecore_status_t
+ */
 enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 					    struct ecore_cxt_info *p_info);
 
+/**
+ * @brief ecore_cxt_get_tid_mem_info
+ *
+ * @param p_hwfn
+ * @param p_info
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_cxt_get_tid_mem_info(struct ecore_hwfn *p_hwfn,
+						struct ecore_tid_mem *p_info);
+
 #endif
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index f94efce72..627975369 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "ecore_sp_commands.h"
@@ -15,6 +15,10 @@
 
 #define ECORE_DCBX_MAX_MIB_READ_TRY	(100)
 #define ECORE_ETH_TYPE_DEFAULT		(0)
+#define ECORE_ETH_TYPE_ROCE		(0x8915)
+#define ECORE_UDP_PORT_TYPE_ROCE_V2	(0x12B7)
+#define ECORE_ETH_TYPE_FCOE		(0x8906)
+#define ECORE_TCP_PORT_ISCSI		(0xCBC)
 
 #define ECORE_DCBX_INVALID_PRIORITY	0xFF
 
@@ -22,7 +26,7 @@
  * the traffic class corresponding to the priority.
  */
 #define ECORE_DCBX_PRIO2TC(prio_tc_tbl, prio) \
-		((u32)(prio_tc_tbl >> ((7 - prio) * 4)) & 0x7)
+		((u32)(prio_tc_tbl >> ((7 - prio) * 4)) & 0xF)
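
Widening the mask from 0x7 to 0xF matters because each priority owns a full nibble in prio_tc_tbl (priority 0 in the most significant nibble), so TC values 8-15 were previously truncated. A quick standalone check of the macro semantics (table value invented):

    #include <assert.h>
    #include <stdint.h>

    #define PRIO2TC(tbl, prio) ((uint32_t)((tbl) >> ((7 - (prio)) * 4)) & 0xF)

    int main(void)
    {
    	uint32_t tbl = 0x01234567;	/* prio 0 -> TC 0, ..., prio 7 -> TC 7 */

    	assert(PRIO2TC(tbl, 2) == 2);
    	assert(PRIO2TC(tbl, 7) == 7);
    	/* TC values above 7 survive only with the 0xF mask */
    	assert(PRIO2TC(0xF0000000u, 0) == 0xF);
    	return 0;
    }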
 
 static bool ecore_dcbx_app_ethtype(u32 app_info_bitmap)
 {
@@ -70,6 +74,56 @@ static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
 	return !!(ethtype && (proto_id == ECORE_ETH_TYPE_DEFAULT));
 }
 
+static bool ecore_dcbx_iscsi_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
+{
+	bool port;
+
+	if (ieee)
+		port = ecore_dcbx_ieee_app_port(app_info_bitmap,
+						DCBX_APP_SF_IEEE_TCP_PORT);
+	else
+		port = ecore_dcbx_app_port(app_info_bitmap);
+
+	return !!(port && (proto_id == ECORE_TCP_PORT_ISCSI));
+}
+
+static bool ecore_dcbx_fcoe_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
+{
+	bool ethtype;
+
+	if (ieee)
+		ethtype = ecore_dcbx_ieee_app_ethtype(app_info_bitmap);
+	else
+		ethtype = ecore_dcbx_app_ethtype(app_info_bitmap);
+
+	return !!(ethtype && (proto_id == ECORE_ETH_TYPE_FCOE));
+}
+
+static bool ecore_dcbx_roce_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
+{
+	bool ethtype;
+
+	if (ieee)
+		ethtype = ecore_dcbx_ieee_app_ethtype(app_info_bitmap);
+	else
+		ethtype = ecore_dcbx_app_ethtype(app_info_bitmap);
+
+	return !!(ethtype && (proto_id == ECORE_ETH_TYPE_ROCE));
+}
+
+static bool ecore_dcbx_roce_v2_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
+{
+	bool port;
+
+	if (ieee)
+		port = ecore_dcbx_ieee_app_port(app_info_bitmap,
+						DCBX_APP_SF_IEEE_UDP_PORT);
+	else
+		port = ecore_dcbx_app_port(app_info_bitmap);
+
+	return !!(port && (proto_id == ECORE_UDP_PORT_TYPE_ROCE_V2));
+}
+
 static bool ecore_dcbx_iwarp_tlv(struct ecore_hwfn *p_hwfn, u32 app_info_bitmap,
 				 u16 proto_id, bool ieee)
 {
@@ -92,7 +146,7 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 		       struct ecore_dcbx_results *p_data)
 {
 	enum dcbx_protocol_type id;
-	int i;
+	u32 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "DCBX negotiated: %d\n",
 		   p_data->dcbx_enabled);
@@ -101,10 +155,8 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
 		id = ecore_dcbx_app_update[i].id;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
-			   "%s info: update %d, enable %d, prio %d, tc %d,"
-			   " num_active_tc %d dscp_enable = %d dscp_val = %d\n",
-			   ecore_dcbx_app_update[i].name,
-			   p_data->arr[id].update,
+			   "%s info: update %d, enable %d, prio %d, tc %d, num_active_tc %d dscp_enable = %d dscp_val = %d\n",
+			   ecore_dcbx_app_update[i].name, p_data->arr[id].update,
 			   p_data->arr[id].enable, p_data->arr[id].priority,
 			   p_data->arr[id].tc, p_hwfn->hw_info.num_active_tc,
 			   p_data->arr[id].dscp_enable,
@@ -130,7 +182,7 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
 static void
 ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		      struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		      bool enable, u8 prio, u8 tc,
+		      bool app_tlv, bool enable, u8 prio, u8 tc,
 		      enum dcbx_protocol_type type,
 		      enum ecore_pci_personality personality)
 {
@@ -143,21 +195,21 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 		p_data->arr[type].dscp_enable = false;
 		p_data->arr[type].dscp_val = 0;
 	} else {
-		p_data->arr[type].dscp_enable = true;
+		p_data->arr[type].dscp_enable = enable;
 	}
+
 	p_data->arr[type].update = UPDATE_DCB_DSCP;
 
-	/* Do not add valn tag 0 when DCB is enabled and port is in UFP mode */
-	if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits))
+	if (OSAL_TEST_BIT(ECORE_MF_DONT_ADD_VLAN0_TAG, &p_hwfn->p_dev->mf_bits))
 		p_data->arr[type].dont_add_vlan0 = true;
 
 	/* QM reconf data */
-	if (p_hwfn->hw_info.personality == personality)
-		p_hwfn->hw_info.offload_tc = tc;
+	if (app_tlv && p_hwfn->hw_info.personality == personality)
+		ecore_hw_info_set_offload_tc(&p_hwfn->hw_info, tc);
 
 	/* Configure dcbx vlan priority in doorbell block for roce EDPM */
-	if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits) &&
-	    type == DCBX_PROTOCOL_ROCE) {
+	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits) &&
+	    (type == DCBX_PROTOCOL_ROCE)) {
 		ecore_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1);
 		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_PCP_BB_K2, prio << 1);
 	}
@@ -167,12 +219,12 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 static void
 ecore_dcbx_update_app_info(struct ecore_dcbx_results *p_data,
 			   struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			   bool enable, u8 prio, u8 tc,
+			   bool app_tlv, bool enable, u8 prio, u8 tc,
 			   enum dcbx_protocol_type type)
 {
 	enum ecore_pci_personality personality;
 	enum dcbx_protocol_type id;
-	int i;
+	u32 i;
 
 	for (i = 0; i < OSAL_ARRAY_SIZE(ecore_dcbx_app_update); i++) {
 		id = ecore_dcbx_app_update[i].id;
@@ -182,7 +234,7 @@ ecore_dcbx_update_app_info(struct ecore_dcbx_results *p_data,
 
 		personality = ecore_dcbx_app_update[i].personality;
 
-		ecore_dcbx_set_params(p_data, p_hwfn, p_ptt, enable,
+		ecore_dcbx_set_params(p_data, p_hwfn, p_ptt, app_tlv, enable,
 				      prio, tc, type, personality);
 	}
 }
@@ -221,12 +273,22 @@ ecore_dcbx_get_app_protocol_type(struct ecore_hwfn *p_hwfn,
 				 u32 app_prio_bitmap, u16 id,
 				 enum dcbx_protocol_type *type, bool ieee)
 {
-	if (ecore_dcbx_default_tlv(app_prio_bitmap, id, ieee)) {
+	if (ecore_dcbx_fcoe_tlv(app_prio_bitmap, id, ieee)) {
+		*type = DCBX_PROTOCOL_FCOE;
+	} else if (ecore_dcbx_roce_tlv(app_prio_bitmap, id, ieee)) {
+		*type = DCBX_PROTOCOL_ROCE;
+	} else if (ecore_dcbx_iscsi_tlv(app_prio_bitmap, id, ieee)) {
+		*type = DCBX_PROTOCOL_ISCSI;
+	} else if (ecore_dcbx_default_tlv(app_prio_bitmap, id, ieee)) {
 		*type = DCBX_PROTOCOL_ETH;
+	} else if (ecore_dcbx_roce_v2_tlv(app_prio_bitmap, id, ieee)) {
+		*type = DCBX_PROTOCOL_ROCE_V2;
+	} else if (ecore_dcbx_iwarp_tlv(p_hwfn, app_prio_bitmap, id, ieee)) {
+		*type = DCBX_PROTOCOL_IWARP;
 	} else {
 		*type = DCBX_MAX_PROTOCOL_TYPE;
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
-			    "No action required, App TLV entry = 0x%x\n",
+			   "No action required, App TLV entry = 0x%x\n",
 			   app_prio_bitmap);
 		return false;
 	}
@@ -287,13 +349,13 @@ ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 				enable = true;
 			}
 
-			ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt,
+			ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt, true,
 						   enable, priority, tc, type);
 		}
 	}
 
 	/* If Eth TLV is not detected, use UFP TC as default TC */
-	if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC,
+	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC,
 			  &p_hwfn->p_dev->mf_bits) && !eth_tlv)
 		p_data->arr[DCBX_PROTOCOL_ETH].tc = p_hwfn->ufp_info.tc;
 
@@ -310,9 +372,9 @@ ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			continue;
 
 		/* if no app tlv was present, don't override in FW */
-		ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt,
-					  p_data->arr[DCBX_PROTOCOL_ETH].enable,
-					  priority, tc, type);
+		ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false,
+					   p_data->arr[DCBX_PROTOCOL_ETH].enable,
+					   priority, tc, type);
 	}
 
 	return ECORE_SUCCESS;
@@ -399,16 +461,14 @@ ecore_dcbx_copy_mib(struct ecore_hwfn *p_hwfn,
 		read_count++;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
-			   "mib type = %d, try count = %d prefix seq num  ="
-			   " %d suffix seq num = %d\n",
+			   "mib type = %d, try count = %d prefix seq num  = %d suffix seq num = %d\n",
 			   type, read_count, prefix_seq_num, suffix_seq_num);
 	} while ((prefix_seq_num != suffix_seq_num) &&
 		 (read_count < ECORE_DCBX_MAX_MIB_READ_TRY));
 
 	if (read_count >= ECORE_DCBX_MAX_MIB_READ_TRY) {
 		DP_ERR(p_hwfn,
-		       "MIB read err, mib type = %d, try count ="
-		       " %d prefix seq num = %d suffix seq num = %d\n",
+		       "MIB read err, mib type = %d, try count = %d prefix seq num = %d suffix seq num = %d\n",
 		       type, read_count, prefix_seq_num, suffix_seq_num);
 		rc = ECORE_IO;
 	}
@@ -423,12 +483,36 @@ ecore_dcbx_get_priority_info(struct ecore_hwfn *p_hwfn,
 {
 	u8 val;
 
+	p_prio->roce = ECORE_DCBX_INVALID_PRIORITY;
+	p_prio->roce_v2 = ECORE_DCBX_INVALID_PRIORITY;
+	p_prio->iscsi = ECORE_DCBX_INVALID_PRIORITY;
+	p_prio->fcoe = ECORE_DCBX_INVALID_PRIORITY;
+
+	if (p_results->arr[DCBX_PROTOCOL_ROCE].update &&
+	    p_results->arr[DCBX_PROTOCOL_ROCE].enable)
+		p_prio->roce = p_results->arr[DCBX_PROTOCOL_ROCE].priority;
+
+	if (p_results->arr[DCBX_PROTOCOL_ROCE_V2].update &&
+	    p_results->arr[DCBX_PROTOCOL_ROCE_V2].enable) {
+		val = p_results->arr[DCBX_PROTOCOL_ROCE_V2].priority;
+		p_prio->roce_v2 = val;
+	}
+
+	if (p_results->arr[DCBX_PROTOCOL_ISCSI].update &&
+	    p_results->arr[DCBX_PROTOCOL_ISCSI].enable)
+		p_prio->iscsi = p_results->arr[DCBX_PROTOCOL_ISCSI].priority;
+
+	if (p_results->arr[DCBX_PROTOCOL_FCOE].update &&
+	    p_results->arr[DCBX_PROTOCOL_FCOE].enable)
+		p_prio->fcoe = p_results->arr[DCBX_PROTOCOL_FCOE].priority;
+
 	if (p_results->arr[DCBX_PROTOCOL_ETH].update &&
 	    p_results->arr[DCBX_PROTOCOL_ETH].enable)
 		p_prio->eth = p_results->arr[DCBX_PROTOCOL_ETH].priority;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
-		   "Priorities: eth %d\n",
+		   "Priorities: iscsi %d, roce %d, roce v2 %d, fcoe %d, eth %d\n",
+		   p_prio->iscsi, p_prio->roce, p_prio->roce_v2, p_prio->fcoe,
 		   p_prio->eth);
 }
 
@@ -474,8 +558,7 @@ ecore_dcbx_get_app_data(struct ecore_hwfn *p_hwfn,
 				entry->sf_ieee = ECORE_DCBX_SF_IEEE_UDP_PORT;
 				break;
 			case DCBX_APP_SF_IEEE_TCP_UDP_PORT:
-				entry->sf_ieee =
-						ECORE_DCBX_SF_IEEE_TCP_UDP_PORT;
+				entry->sf_ieee = ECORE_DCBX_SF_IEEE_TCP_UDP_PORT;
 				break;
 			}
 		} else {
@@ -506,6 +589,7 @@ ecore_dcbx_get_pfc_data(struct ecore_hwfn *p_hwfn,
 
 	p_params->pfc.willing = GET_MFW_FIELD(pfc, DCBX_PFC_WILLING);
 	p_params->pfc.max_tc = GET_MFW_FIELD(pfc, DCBX_PFC_CAPS);
+	p_params->pfc.mbc = GET_MFW_FIELD(pfc, DCBX_PFC_MBC);
 	p_params->pfc.enabled = GET_MFW_FIELD(pfc, DCBX_PFC_ENABLED);
 	pfc_map = GET_MFW_FIELD(pfc, DCBX_PFC_PRI_EN_BITMAP);
 	p_params->pfc.prio[0] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_0);
@@ -518,9 +602,9 @@ ecore_dcbx_get_pfc_data(struct ecore_hwfn *p_hwfn,
 	p_params->pfc.prio[7] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_7);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
-		   "PFC params: willing %d, pfc_bitmap %u max_tc = %u enabled = %d\n",
+		   "PFC params: willing %d, pfc_bitmap %u max_tc = %u enabled = %d mbc = %d\n",
 		   p_params->pfc.willing, pfc_map, p_params->pfc.max_tc,
-		   p_params->pfc.enabled);
+		   p_params->pfc.enabled, p_params->pfc.mbc);
 }
 
 static void
@@ -540,7 +624,12 @@ ecore_dcbx_get_ets_data(struct ecore_hwfn *p_hwfn,
 		   p_params->ets_willing, p_params->ets_enabled,
 		   p_params->ets_cbs, p_ets->pri_tc_tbl[0],
 		   p_params->max_ets_tc);
-
+	if (p_params->ets_enabled && !p_params->max_ets_tc) {
+		p_params->max_ets_tc = ECORE_MAX_PFC_PRIORITIES;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+			   "ETS params: max_ets_tc is forced to %d\n",
+			   p_params->max_ets_tc);
+	}
 	/* 8 bit tsa and bw data corresponding to each of the 8 TC's are
 	 * encoded in a type u32 array of size 2.
 	 */
@@ -600,8 +689,8 @@ ecore_dcbx_get_remote_params(struct ecore_hwfn *p_hwfn,
 	params->remote.valid = true;
 }
 
-static void  ecore_dcbx_get_dscp_params(struct ecore_hwfn *p_hwfn,
-					struct ecore_dcbx_get *params)
+static void ecore_dcbx_get_dscp_params(struct ecore_hwfn *p_hwfn,
+				       struct ecore_dcbx_get *params)
 {
 	struct ecore_dcbx_dscp_params *p_dscp;
 	struct dcb_dscp_map *p_dscp_map;
@@ -616,7 +705,7 @@ static void  ecore_dcbx_get_dscp_params(struct ecore_hwfn *p_hwfn,
 	 * where each entry holds the 4bit priority map for 8 dscp entries.
 	 */
 	for (i = 0, entry = 0; i < ECORE_DCBX_DSCP_SIZE / 8; i++) {
-		pri_map = OSAL_BE32_TO_CPU(p_dscp_map->dscp_pri_map[i]);
+		pri_map = p_dscp_map->dscp_pri_map[i];
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "elem %d pri_map 0x%x\n",
 			   entry, pri_map);
 		for (j = 0; j < ECORE_DCBX_DSCP_SIZE / 8; j++, entry++)
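
The nibble packing used by the DSCP map (8 entries of 4 priority bits per u32 word) can be sanity-checked in isolation; a small sketch with invented priority values:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
    	uint32_t word = 0;
    	unsigned int j, pri[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };

    	/* pack: entry j lands in nibble j of the word */
    	for (j = 0; j < 8; j++)
    		word |= (uint32_t)pri[j] << (j * 4);
    	assert(word == 0x76543210);

    	/* unpack entry 5 again */
    	assert(((word >> (5 * 4)) & 0xF) == 5);
    	return 0;
    }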
@@ -785,7 +874,7 @@ ecore_dcbx_read_operational_mib(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEM_ZERO(&data, sizeof(data));
 	data.addr = p_hwfn->mcp_info->port_addr +
-	    offsetof(struct public_port, operational_dcbx_mib);
+		    offsetof(struct public_port, operational_dcbx_mib);
 	data.mib = &p_hwfn->p_dcbx_info->operational;
 	data.size = sizeof(struct dcbx_mib);
 	rc = ecore_dcbx_copy_mib(p_hwfn, p_ptt, &data, type);
@@ -803,7 +892,7 @@ ecore_dcbx_read_remote_mib(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEM_ZERO(&data, sizeof(data));
 	data.addr = p_hwfn->mcp_info->port_addr +
-	    offsetof(struct public_port, remote_dcbx_mib);
+		    offsetof(struct public_port, remote_dcbx_mib);
 	data.mib = &p_hwfn->p_dcbx_info->remote;
 	data.size = sizeof(struct dcbx_mib);
 	rc = ecore_dcbx_copy_mib(p_hwfn, p_ptt, &data, type);
@@ -819,7 +908,7 @@ ecore_dcbx_read_local_mib(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 
 	OSAL_MEM_ZERO(&data, sizeof(data));
 	data.addr = p_hwfn->mcp_info->port_addr +
-	    offsetof(struct public_port, local_admin_dcbx_mib);
+			offsetof(struct public_port, local_admin_dcbx_mib);
 	data.local_admin = &p_hwfn->p_dcbx_info->local_admin;
 	data.size = sizeof(struct dcbx_local_params);
 	ecore_memcpy_from(p_hwfn, p_ptt, data.local_admin,
@@ -867,6 +956,61 @@ static enum _ecore_status_t ecore_dcbx_read_mib(struct ecore_hwfn *p_hwfn,
 		DP_ERR(p_hwfn, "MIB read err, unknown mib type %d\n", type);
 	}
 
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_dscp_map_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   bool b_en)
+{
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u8 ppfid, abs_ppfid, pfid;
+	u32 addr, val;
+	u16 fid;
+	enum _ecore_status_t rc;
+
+	if (!OSAL_TEST_BIT(ECORE_MF_DSCP_TO_TC_MAP, &p_dev->mf_bits))
+		return ECORE_INVAL;
+
+	if (ECORE_IS_E4(p_hwfn->p_dev)) {
+		addr = NIG_REG_DSCP_TO_TC_MAP_ENABLE_BB_K2;
+		val = b_en ? 0x1 : 0x0;
+	} else { /* E5 */
+		addr = NIG_REG_LLH_TC_CLS_DSCP_MODE_E5;
+		val = b_en ? 0x2  /* L2-PRI if exists, else L3-DSCP */
+			   : 0x0; /* L2-PRI only */
+	}
+
+	if (!ECORE_IS_AH(p_dev))
+		return ecore_all_ppfids_wr(p_hwfn, p_ptt, addr, val);
+
+	/* Workaround for a HW bug in E4 (only AH is affected):
+	 * Instead of writing to "NIG_REG_DSCP_TO_TC_MAP_ENABLE[ppfid]", write
+	 * to "NIG_REG_DSCP_TO_TC_MAP_ENABLE[n]", where "n" is the "pfid" which
+	 * is read from "NIG_REG_LLH_PPFID2PFID_TBL[ppfid]".
+	 */
+	for (ppfid = 0; ppfid < ecore_llh_get_num_ppfid(p_dev); ppfid++) {
+		rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		/* Cannot just take "rel_pf_id" since a ppfid could have been
+		 * loaned to another pf (e.g. RDMA bonding).
+		 */
+		pfid = (u8)ecore_rd(p_hwfn, p_ptt,
+				    NIG_REG_LLH_PPFID2PFID_TBL_0 +
+				    abs_ppfid * 0x4);
+
+		fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, pfid);
+		ecore_fid_pretend(p_hwfn, p_ptt, fid);
+
+		ecore_wr(p_hwfn, p_ptt, addr, val);
+
+		fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID,
+				  p_hwfn->rel_pf_id);
+		ecore_fid_pretend(p_hwfn, p_ptt, fid);
+	}
+
 	return ECORE_SUCCESS;
 }
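
The pretend/write/restore sequence in the workaround above is the usual fid-pretend idiom; a schematic of just the pattern, with stand-in helpers rather than the real ecore_fid_pretend()/ecore_wr() calls:

    #include <stdio.h>

    /* Stand-ins for ecore_fid_pretend()/ecore_wr(); illustration only */
    static void pretend(unsigned int pfid) { printf("pretend as pfid %u\n", pfid); }
    static void reg_wr(unsigned int addr, unsigned int val)
    {
    	printf("wr 0x%x <- 0x%x\n", addr, val);
    }

    static void write_as_pf(unsigned int target_pfid, unsigned int my_pfid,
    			unsigned int addr, unsigned int val)
    {
    	pretend(target_pfid);	/* subsequent accesses act as target_pfid */
    	reg_wr(addr, val);
    	pretend(my_pfid);	/* always restore our own identity */
    }

    int main(void)
    {
    	write_as_pf(3, 0, 0x1234, 0x1);	/* invented pfid/addr/val */
    	return 0;
    }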
 
@@ -893,7 +1037,7 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			/* reconfigure tcs of QM queues according
 			 * to negotiation results
 			 */
-			ecore_qm_reconf(p_hwfn, p_ptt);
+			ecore_qm_reconf_intr(p_hwfn, p_ptt);
 
 			/* update storm FW with negotiation results */
 			ecore_sp_pf_update_dcbx(p_hwfn);
@@ -902,20 +1046,33 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	ecore_dcbx_get_params(p_hwfn, &p_hwfn->p_dcbx_info->get, type);
 
-	/* Update the DSCP to TC mapping enable bit if required */
-	if ((type == ECORE_DCBX_OPERATIONAL_MIB) &&
-	    p_hwfn->p_dcbx_info->dscp_nig_update) {
-		u8 val = !!p_hwfn->p_dcbx_info->get.dscp.enabled;
-		u32 addr = NIG_REG_DSCP_TO_TC_MAP_ENABLE_BB_K2;
+	if (type == ECORE_DCBX_OPERATIONAL_MIB) {
+		struct ecore_dcbx_results *p_data;
+		u16 val;
+
+		/* Enable/disable the DSCP to TC mapping if required */
+		if (p_hwfn->p_dcbx_info->dscp_nig_update) {
+			bool b_en = p_hwfn->p_dcbx_info->get.dscp.enabled;
+
+			rc = ecore_dcbx_dscp_map_enable(p_hwfn, p_ptt, b_en);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, false,
+					  "Failed to update the DSCP to TC mapping enable bit\n");
+				return rc;
+			}
 
-		rc = ecore_all_ppfids_wr(p_hwfn, p_ptt, addr, val);
-		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, false,
-				  "Failed to update the DSCP to TC mapping enable bit\n");
-			return rc;
+			p_hwfn->p_dcbx_info->dscp_nig_update = false;
 		}
 
-		p_hwfn->p_dcbx_info->dscp_nig_update = false;
+		/* Configure in NIG which protocols support EDPM and should
+		 * honor PFC.
+		 */
+		p_data = &p_hwfn->p_dcbx_info->results;
+		val = (0x1 << p_data->arr[DCBX_PROTOCOL_ROCE].tc) |
+			(0x1 << p_data->arr[DCBX_PROTOCOL_ROCE_V2].tc);
+		val <<= NIG_REG_TX_EDPM_CTRL_TX_EDPM_TC_EN_SHIFT;
+		val |= NIG_REG_TX_EDPM_CTRL_TX_EDPM_EN;
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_EDPM_CTRL, val);
 	}
 
 	OSAL_DCBX_AEN(p_hwfn, type);
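
To make the EDPM bitmap computation above concrete: with ROCE on TC 2 and RoCEv2 on TC 3 (invented values, and an invented shift/enable-bit layout), the register value works out as follows:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
    	const uint16_t TC_EN_SHIFT = 4, EDPM_EN = 0x1; /* invented layout */
    	uint16_t roce_tc = 2, roce_v2_tc = 3;
    	uint16_t val = (1 << roce_tc) | (1 << roce_v2_tc);	/* 0x000C */

    	val <<= TC_EN_SHIFT;					/* 0x00C0 */
    	val |= EDPM_EN;						/* 0x00C1 */
    	assert(val == 0xC1);
    	return 0;
    }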
@@ -925,6 +1082,15 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn)
 {
+#ifndef __EXTRACT__LINUX__
+	OSAL_BUILD_BUG_ON(ECORE_LLDP_CHASSIS_ID_STAT_LEN !=
+			  LLDP_CHASSIS_ID_STAT_LEN);
+	OSAL_BUILD_BUG_ON(ECORE_LLDP_PORT_ID_STAT_LEN !=
+			  LLDP_PORT_ID_STAT_LEN);
+	OSAL_BUILD_BUG_ON(ECORE_DCBX_MAX_APP_PROTOCOL !=
+			  DCBX_MAX_APP_PROTOCOL);
+#endif
+
 	p_hwfn->p_dcbx_info = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 					  sizeof(*p_hwfn->p_dcbx_info));
 	if (!p_hwfn->p_dcbx_info) {
@@ -942,6 +1108,7 @@ enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn)
 void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn)
 {
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_dcbx_info);
+	p_hwfn->p_dcbx_info = OSAL_NULL;
 }
 
 static void ecore_dcbx_update_protocol_data(struct protocol_dcb_data *p_data,
@@ -963,17 +1130,62 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 	struct protocol_dcb_data *p_dcb_data;
 	u8 update_flag;
 
+	update_flag = p_src->arr[DCBX_PROTOCOL_FCOE].update;
+	p_dest->update_fcoe_dcb_data_mode = update_flag;
+
+	update_flag = p_src->arr[DCBX_PROTOCOL_ROCE].update;
+	p_dest->update_roce_dcb_data_mode = update_flag;
+
+	update_flag = p_src->arr[DCBX_PROTOCOL_ROCE_V2].update;
+	p_dest->update_rroce_dcb_data_mode = update_flag;
+
+	update_flag = p_src->arr[DCBX_PROTOCOL_ISCSI].update;
+	p_dest->update_iscsi_dcb_data_mode = update_flag;
 	update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
 	p_dest->update_eth_dcb_data_mode = update_flag;
 	update_flag = p_src->arr[DCBX_PROTOCOL_IWARP].update;
 	p_dest->update_iwarp_dcb_data_mode = update_flag;
 
+	p_dcb_data = &p_dest->fcoe_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_FCOE);
+	p_dcb_data = &p_dest->roce_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ROCE);
+	p_dcb_data = &p_dest->rroce_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src,
+					DCBX_PROTOCOL_ROCE_V2);
+	p_dcb_data = &p_dest->iscsi_dcb_data;
+	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ISCSI);
 	p_dcb_data = &p_dest->eth_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
 	p_dcb_data = &p_dest->iwarp_dcb_data;
 	ecore_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_IWARP);
 }
 
+bool ecore_dcbx_get_dscp_state(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_dcbx_get *p_dcbx_info = &p_hwfn->p_dcbx_info->get;
+
+	return p_dcbx_info->dscp.enabled;
+}
+
+u8 ecore_dcbx_get_priority_tc(struct ecore_hwfn *p_hwfn, u8 pri)
+{
+	struct ecore_dcbx_get *dcbx_info = &p_hwfn->p_dcbx_info->get;
+
+	if (pri >= ECORE_MAX_PFC_PRIORITIES) {
+		DP_ERR(p_hwfn, "Invalid priority %d\n", pri);
+		return ECORE_DCBX_DEFAULT_TC;
+	}
+
+	if (!dcbx_info->operational.valid) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
+			   "Dcbx parameters not available\n");
+		return ECORE_DCBX_DEFAULT_TC;
+	}
+
+	return dcbx_info->operational.params.ets_pri_tc_tbl[pri];
+}
+
 enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
 					     struct ecore_dcbx_get *p_get,
 					     enum ecore_mib_read_type type)
@@ -981,6 +1193,11 @@ enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
 	struct ecore_ptt *p_ptt;
 	enum _ecore_status_t rc;
 
+#ifndef ASIC_ONLY
+	if (!ecore_mcp_is_init(p_hwfn))
+		return ECORE_INVAL;
+#endif
+
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
@@ -1008,24 +1225,16 @@ ecore_dcbx_set_pfc_data(struct ecore_hwfn *p_hwfn,
 	u8 pfc_map = 0;
 	int i;
 
-	if (p_params->pfc.willing)
-		*pfc |= DCBX_PFC_WILLING_MASK;
-	else
-		*pfc &= ~DCBX_PFC_WILLING_MASK;
-
-	if (p_params->pfc.enabled)
-		*pfc |= DCBX_PFC_ENABLED_MASK;
-	else
-		*pfc &= ~DCBX_PFC_ENABLED_MASK;
-
-	*pfc &= ~DCBX_PFC_CAPS_MASK;
-	*pfc |= (u32)p_params->pfc.max_tc << DCBX_PFC_CAPS_OFFSET;
+	SET_MFW_FIELD(*pfc, DCBX_PFC_ERROR, 0);
+	SET_MFW_FIELD(*pfc, DCBX_PFC_WILLING, p_params->pfc.willing ? 1 : 0);
+	SET_MFW_FIELD(*pfc, DCBX_PFC_ENABLED, p_params->pfc.enabled ? 1 : 0);
+	SET_MFW_FIELD(*pfc, DCBX_PFC_CAPS, (u32)p_params->pfc.max_tc);
+	SET_MFW_FIELD(*pfc, DCBX_PFC_MBC, p_params->pfc.mbc ? 1 : 0);
 
 	for (i = 0; i < ECORE_MAX_PFC_PRIORITIES; i++)
 		if (p_params->pfc.prio[i])
 			pfc_map |= (1 << i);
-	*pfc &= ~DCBX_PFC_PRI_EN_BITMAP_MASK;
-	*pfc |= (pfc_map << DCBX_PFC_PRI_EN_BITMAP_OFFSET);
+	SET_MFW_FIELD(*pfc, DCBX_PFC_PRI_EN_BITMAP, pfc_map);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "pfc = 0x%x\n", *pfc);
 }
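
The SET_MFW_FIELD conversions in this hunk all replace open-coded clear-then-set sequences; the idiom is equivalent to something like the sketch below (the real driver macro derives <NAME>_MASK / <NAME>_OFFSET from the field name by token pasting, this shows only the shape):

    #include <assert.h>
    #include <stdint.h>

    #define SET_FIELD(var, mask, offset, val) \
    	((var) = ((var) & ~(mask)) | (((uint32_t)(val) << (offset)) & (mask)))

    int main(void)
    {
    	uint32_t flags = 0xFFFFFFFFu;

    	SET_FIELD(flags, 0x700u, 8, 5);	/* 3-bit field at bits 8..10 */
    	assert(((flags & 0x700u) >> 8) == 5);
    	assert((flags & ~0x700u) == (0xFFFFFFFFu & ~0x700u)); /* rest intact */
    	return 0;
    }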
@@ -1039,23 +1248,14 @@ ecore_dcbx_set_ets_data(struct ecore_hwfn *p_hwfn,
 	u32 val;
 	int i;
 
-	if (p_params->ets_willing)
-		p_ets->flags |= DCBX_ETS_WILLING_MASK;
-	else
-		p_ets->flags &= ~DCBX_ETS_WILLING_MASK;
-
-	if (p_params->ets_cbs)
-		p_ets->flags |= DCBX_ETS_CBS_MASK;
-	else
-		p_ets->flags &= ~DCBX_ETS_CBS_MASK;
-
-	if (p_params->ets_enabled)
-		p_ets->flags |= DCBX_ETS_ENABLED_MASK;
-	else
-		p_ets->flags &= ~DCBX_ETS_ENABLED_MASK;
-
-	p_ets->flags &= ~DCBX_ETS_MAX_TCS_MASK;
-	p_ets->flags |= (u32)p_params->max_ets_tc << DCBX_ETS_MAX_TCS_OFFSET;
+	SET_MFW_FIELD(p_ets->flags, DCBX_ETS_WILLING,
+		      p_params->ets_willing ? 1 : 0);
+	SET_MFW_FIELD(p_ets->flags, DCBX_ETS_CBS,
+		      p_params->ets_cbs ? 1 : 0);
+	SET_MFW_FIELD(p_ets->flags, DCBX_ETS_ENABLED,
+		      p_params->ets_enabled ? 1 : 0);
+	SET_MFW_FIELD(p_ets->flags, DCBX_ETS_MAX_TCS,
+		      (u32)p_params->max_ets_tc);
 
 	bw_map = (u8 *)&p_ets->tc_bw_tbl[0];
 	tsa_map = (u8 *)&p_ets->tc_tsa_tbl[0];
@@ -1089,66 +1289,55 @@ ecore_dcbx_set_app_data(struct ecore_hwfn *p_hwfn,
 	u32 *entry;
 	int i;
 
-	if (p_params->app_willing)
-		p_app->flags |= DCBX_APP_WILLING_MASK;
-	else
-		p_app->flags &= ~DCBX_APP_WILLING_MASK;
-
-	if (p_params->app_valid)
-		p_app->flags |= DCBX_APP_ENABLED_MASK;
-	else
-		p_app->flags &= ~DCBX_APP_ENABLED_MASK;
-
-	p_app->flags &= ~DCBX_APP_NUM_ENTRIES_MASK;
-	p_app->flags |= (u32)p_params->num_app_entries <<
-			DCBX_APP_NUM_ENTRIES_OFFSET;
+	SET_MFW_FIELD(p_app->flags, DCBX_APP_WILLING,
+		      p_params->app_willing ? 1 : 0);
+	SET_MFW_FIELD(p_app->flags, DCBX_APP_ENABLED,
+		      p_params->app_valid ? 1 : 0);
+	SET_MFW_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES,
+		      (u32)p_params->num_app_entries);
 
 	for (i = 0; i < p_params->num_app_entries; i++) {
 		entry = &p_app->app_pri_tbl[i].entry;
 		*entry = 0;
 		if (ieee) {
-			*entry &= ~(DCBX_APP_SF_IEEE_MASK | DCBX_APP_SF_MASK);
+			SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE, 0);
+			SET_MFW_FIELD(*entry, DCBX_APP_SF, 0);
 			switch (p_params->app_entry[i].sf_ieee) {
 			case ECORE_DCBX_SF_IEEE_ETHTYPE:
-				*entry  |= ((u32)DCBX_APP_SF_IEEE_ETHTYPE <<
-					    DCBX_APP_SF_IEEE_OFFSET);
-				*entry  |= ((u32)DCBX_APP_SF_ETHTYPE <<
-					    DCBX_APP_SF_OFFSET);
+				SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE,
+					      (u32)DCBX_APP_SF_IEEE_ETHTYPE);
+				SET_MFW_FIELD(*entry, DCBX_APP_SF,
+					      (u32)DCBX_APP_SF_ETHTYPE);
 				break;
 			case ECORE_DCBX_SF_IEEE_TCP_PORT:
-				*entry  |= ((u32)DCBX_APP_SF_IEEE_TCP_PORT <<
-					    DCBX_APP_SF_IEEE_OFFSET);
-				*entry  |= ((u32)DCBX_APP_SF_PORT <<
-					    DCBX_APP_SF_OFFSET);
+				SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE,
+					      (u32)DCBX_APP_SF_IEEE_TCP_PORT);
+				SET_MFW_FIELD(*entry, DCBX_APP_SF,
+					      (u32)DCBX_APP_SF_PORT);
 				break;
 			case ECORE_DCBX_SF_IEEE_UDP_PORT:
-				*entry  |= ((u32)DCBX_APP_SF_IEEE_UDP_PORT <<
-					    DCBX_APP_SF_IEEE_OFFSET);
-				*entry  |= ((u32)DCBX_APP_SF_PORT <<
-					    DCBX_APP_SF_OFFSET);
+				SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE,
+					      (u32)DCBX_APP_SF_IEEE_UDP_PORT);
+				SET_MFW_FIELD(*entry, DCBX_APP_SF,
+					      (u32)DCBX_APP_SF_PORT);
 				break;
 			case ECORE_DCBX_SF_IEEE_TCP_UDP_PORT:
-				*entry  |= (u32)DCBX_APP_SF_IEEE_TCP_UDP_PORT <<
-					    DCBX_APP_SF_IEEE_OFFSET;
-				*entry  |= ((u32)DCBX_APP_SF_PORT <<
-					    DCBX_APP_SF_OFFSET);
+				SET_MFW_FIELD(*entry, DCBX_APP_SF_IEEE,
+					      (u32)DCBX_APP_SF_IEEE_TCP_UDP_PORT);
+				SET_MFW_FIELD(*entry, DCBX_APP_SF,
+					      (u32)DCBX_APP_SF_PORT);
 				break;
 			}
 		} else {
-			*entry &= ~DCBX_APP_SF_MASK;
-			if (p_params->app_entry[i].ethtype)
-				*entry  |= ((u32)DCBX_APP_SF_ETHTYPE <<
-					    DCBX_APP_SF_OFFSET);
-			else
-				*entry  |= ((u32)DCBX_APP_SF_PORT <<
-					    DCBX_APP_SF_OFFSET);
+			SET_MFW_FIELD(*entry, DCBX_APP_SF,
+				      p_params->app_entry[i].ethtype ?
+				      (u32)DCBX_APP_SF_ETHTYPE :
+				      (u32)DCBX_APP_SF_PORT);
 		}
-		*entry &= ~DCBX_APP_PROTOCOL_ID_MASK;
-		*entry |= ((u32)p_params->app_entry[i].proto_id <<
-			   DCBX_APP_PROTOCOL_ID_OFFSET);
-		*entry &= ~DCBX_APP_PRI_MAP_MASK;
-		*entry |= ((u32)(p_params->app_entry[i].prio) <<
-			   DCBX_APP_PRI_MAP_OFFSET);
+		SET_MFW_FIELD(*entry, DCBX_APP_PROTOCOL_ID,
+			      (u32)p_params->app_entry[i].proto_id);
+		SET_MFW_FIELD(*entry, DCBX_APP_PRI_MAP,
+			      (u32)(1 << p_params->app_entry[i].prio));
 	}
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "flags = 0x%x\n", p_app->flags);
@@ -1200,9 +1389,8 @@ ecore_dcbx_set_dscp_params(struct ecore_hwfn *p_hwfn,
 	OSAL_MEMCPY(p_dscp_map, &p_hwfn->p_dcbx_info->dscp_map,
 		    sizeof(*p_dscp_map));
 
-	p_dscp_map->flags &= ~DCB_DSCP_ENABLE_MASK;
-	if (p_params->dscp.enabled)
-		p_dscp_map->flags |= DCB_DSCP_ENABLE_MASK;
+	SET_MFW_FIELD(p_dscp_map->flags, DCB_DSCP_ENABLE,
+		      p_params->dscp.enabled ? 1 : 0);
 
 	for (i = 0, entry = 0; i < 8; i++) {
 		val = 0;
@@ -1210,7 +1398,7 @@ ecore_dcbx_set_dscp_params(struct ecore_hwfn *p_hwfn,
 			val |= (((u32)p_params->dscp.dscp_pri_map[entry]) <<
 				(j * 4));
 
-		p_dscp_map->dscp_pri_map[i] = OSAL_CPU_TO_BE32(val);
+		p_dscp_map->dscp_pri_map[i] = val;
 	}
 
 	p_hwfn->p_dcbx_info->dscp_nig_update = true;
@@ -1231,10 +1419,10 @@ enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *p_hwfn,
 					      struct ecore_dcbx_set *params,
 					      bool hw_commit)
 {
+	u32 resp = 0, param = 0, drv_mb_param = 0;
 	struct dcbx_local_params local_admin;
 	struct ecore_dcbx_mib_meta_data data;
 	struct dcb_dscp_map dscp_map;
-	u32 resp = 0, param = 0;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	OSAL_MEMCPY(&p_hwfn->p_dcbx_info->set, params,
@@ -1263,8 +1451,9 @@ enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *p_hwfn,
 				data.size);
 	}
 
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_LLDP_SEND, 1);
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_DCBX,
-			   1 << DRV_MB_PARAM_LLDP_SEND_OFFSET, &resp, &param);
+			   drv_mb_param, &resp, &param);
 	if (rc != ECORE_SUCCESS)
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to send DCBX update request\n");
@@ -1276,7 +1465,7 @@ enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *p_hwfn,
 						  struct ecore_dcbx_set *params)
 {
 	struct ecore_dcbx_get *dcbx_info;
-	int rc;
+	enum _ecore_status_t rc;
 
 	if (p_hwfn->p_dcbx_info->set.config.valid) {
 		OSAL_MEMCPY(params, &p_hwfn->p_dcbx_info->set,
@@ -1557,6 +1746,11 @@ ecore_dcbx_get_dscp_priority(struct ecore_hwfn *p_hwfn,
 	struct ecore_dcbx_get *p_dcbx_info;
 	enum _ecore_status_t rc;
 
+	if (IS_VF(p_hwfn->p_dev)) {
+		DP_ERR(p_hwfn->p_dev, "ecore rdma get dscp priority not supported for VF.\n");
+		return ECORE_INVAL;
+	}
+
 	if (dscp_index >= ECORE_DCBX_DSCP_SIZE) {
 		DP_ERR(p_hwfn, "Invalid dscp index %d\n", dscp_index);
 		return ECORE_INVAL;
@@ -1588,6 +1782,11 @@ ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	struct ecore_dcbx_set dcbx_set;
 	enum _ecore_status_t rc;
 
+	if (IS_VF(p_hwfn->p_dev)) {
+		DP_ERR(p_hwfn->p_dev, "ecore rdma set dscp priority not supported for VF.\n");
+		return ECORE_INVAL;
+	}
+
 	if (dscp_index >= ECORE_DCBX_DSCP_SIZE ||
 	    pri_val >= ECORE_MAX_PFC_PRIORITIES) {
 		DP_ERR(p_hwfn, "Invalid dscp params: index = %d pri = %d\n",
@@ -1605,3 +1804,58 @@ ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	return ecore_dcbx_config_params(p_hwfn, p_ptt, &dcbx_set, 1);
 }
+
+enum _ecore_status_t
+ecore_lldp_get_stats(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		     struct ecore_lldp_stats *p_params)
+{
+	u32 mcp_resp = 0, mcp_param = 0, drv_mb_param = 0, addr, val;
+	struct lldp_stats_stc lldp_stats;
+	enum _ecore_status_t rc;
+
+	switch (p_params->agent) {
+	case ECORE_LLDP_NEAREST_BRIDGE:
+		val = LLDP_NEAREST_BRIDGE;
+		break;
+	case ECORE_LLDP_NEAREST_NON_TPMR_BRIDGE:
+		val = LLDP_NEAREST_NON_TPMR_BRIDGE;
+		break;
+	case ECORE_LLDP_NEAREST_CUSTOMER_BRIDGE:
+		val = LLDP_NEAREST_CUSTOMER_BRIDGE;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Invalid agent type %d\n", p_params->agent);
+		return ECORE_INVAL;
+	}
+
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_LLDP_STATS_AGENT, val);
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_LLDP_STATS,
+			   drv_mb_param, &mcp_resp, &mcp_param);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(p_hwfn, "GET_LLDP_STATS failed, error = %d\n", rc);
+		return rc;
+	}
+
+	addr = p_hwfn->mcp_info->drv_mb_addr +
+		OFFSETOF(struct public_drv_mb, union_data);
+
+	ecore_memcpy_from(p_hwfn, p_ptt, &lldp_stats, addr, sizeof(lldp_stats));
+
+	p_params->tx_frames = lldp_stats.tx_frames_total;
+	p_params->rx_frames = lldp_stats.rx_frames_total;
+	p_params->rx_discards = lldp_stats.rx_frames_discarded;
+	p_params->rx_age_outs = lldp_stats.rx_age_outs;
+
+	return ECORE_SUCCESS;
+}
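
A hypothetical caller-side sketch of the new stats API, using only the types and helpers declared in this series (p_hwfn and p_ptt are assumed to be acquired the same way as elsewhere in the driver; this fragment is not part of the patch):

    struct ecore_lldp_stats stats = { .agent = ECORE_LLDP_NEAREST_BRIDGE };

    if (ecore_lldp_get_stats(p_hwfn, p_ptt, &stats) == ECORE_SUCCESS)
    	DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
    		   "lldp: tx %u rx %u discards %u age-outs %u\n",
    		   stats.tx_frames, stats.rx_frames,
    		   stats.rx_discards, stats.rx_age_outs);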
+
+bool ecore_dcbx_is_enabled(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_dcbx_operational_params *op_params =
+		&p_hwfn->p_dcbx_info->get.operational;
+
+	if (op_params->valid && op_params->enabled)
+		return true;
+
+	return false;
+}
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index 519e6ceaa..57c07c59f 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_DCBX_H__
 #define __ECORE_DCBX_H__
 
@@ -45,18 +45,26 @@ struct ecore_dcbx_mib_meta_data {
 
 /* ECORE local interface routines */
 enum _ecore_status_t
-ecore_dcbx_mib_update_event(struct ecore_hwfn *, struct ecore_ptt *,
-			    enum ecore_mib_read_type);
+ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			    enum ecore_mib_read_type type);
 
 enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn);
 void ecore_dcbx_info_free(struct ecore_hwfn *p_hwfn);
 void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
 				     struct pf_update_ramrod_data *p_dest);
 
+#define ECORE_DCBX_DEFAULT_TC	0
+
+u8 ecore_dcbx_get_priority_tc(struct ecore_hwfn *p_hwfn, u8 pri);
+
+bool ecore_dcbx_get_dscp_state(struct ecore_hwfn *p_hwfn);
+
 /* Returns TOS value for a given priority */
 u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri);
 
 enum _ecore_status_t
 ecore_lldp_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
+bool ecore_dcbx_is_enabled(struct ecore_hwfn *p_hwfn);
+
 #endif /* __ECORE_DCBX_H__ */
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 6fad2ecc2..26616f0b9 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_DCBX_API_H__
 #define __ECORE_DCBX_API_H__
 
@@ -30,7 +30,7 @@ struct ecore_dcbx_app_data {
 	bool dont_add_vlan0;	/* Do not insert a vlan tag with id 0 */
 };
 
-#ifndef __EXTRACT__LINUX__
+#ifndef __EXTRACT__LINUX__IF__
 enum dcbx_protocol_type {
 	DCBX_PROTOCOL_ISCSI,
 	DCBX_PROTOCOL_FCOE,
@@ -72,6 +72,7 @@ struct ecore_dcbx_app_prio {
 struct ecore_dbcx_pfc_params {
 	bool	willing;
 	bool	enabled;
+	bool	mbc;
 	u8	prio[ECORE_MAX_PFC_PRIORITIES];
 	u8	max_tc;
 };
@@ -86,7 +87,6 @@ enum ecore_dcbx_sf_ieee_type {
 struct ecore_app_entry {
 	bool ethtype;
 	enum ecore_dcbx_sf_ieee_type sf_ieee;
-	bool enabled;
 	u8 prio;
 	u16 proto_id;
 	enum dcbx_protocol_type proto_type;
@@ -199,17 +199,26 @@ struct ecore_lldp_sys_tlvs {
 	u16 buf_size;
 };
 
-enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *,
-					     struct ecore_dcbx_get *,
-					     enum ecore_mib_read_type);
+struct ecore_lldp_stats {
+	enum ecore_lldp_agent agent;
+	u32 tx_frames;
+	u32 rx_frames;
+	u32 rx_discards;
+	u32 rx_age_outs;
+};
+
+enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
+					     struct ecore_dcbx_get *p_get,
+					     enum ecore_mib_read_type type);
 
-enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *,
-						  struct ecore_dcbx_set *);
+enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *p_hwfn,
+						  struct ecore_dcbx_set
+						  *params);
 
-enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *,
-					      struct ecore_ptt *,
-					      struct ecore_dcbx_set *,
-					      bool);
+enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
+					      struct ecore_dcbx_set *params,
+					      bool hw_commit);
 
 enum _ecore_status_t ecore_lldp_register_tlv(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt,
@@ -238,6 +247,11 @@ enum _ecore_status_t
 ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			     u8 dscp_index, u8 pri_val);
 
+enum _ecore_status_t
+ecore_lldp_get_stats(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		     struct ecore_lldp_stats *p_params);
+
+#ifndef __EXTRACT__LINUX__C__
 static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
 	{DCBX_PROTOCOL_ISCSI, "ISCSI", ECORE_PCI_ISCSI},
 	{DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
@@ -246,5 +260,6 @@ static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
 	{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH},
 	{DCBX_PROTOCOL_IWARP, "IWARP", ECORE_PCI_ETH_IWARP}
 };
+#endif
 
 #endif /* __ECORE_DCBX_API_H__ */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 5db02d0c4..63e5d6860 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "reg_addr.h"
 #include "ecore_gtt_reg_addr.h"
@@ -27,8 +27,15 @@
 #include "ecore_iro.h"
 #include "nvm_cfg.h"
 #include "ecore_dcbx.h"
+#include <linux/pci_regs.h> /* @DPDK */
 #include "ecore_l2.h"
 
+#ifdef _NTDDK_
+#pragma warning(push)
+#pragma warning(disable : 28167)
+#pragma warning(disable : 28123)
+#endif
+
 /* TODO - there's a bug in DCBx re-configuration flows in MF, as the QM
  * registers involved are not split and thus configuration is a race where
  * some of the PFs configuration might be lost.
@@ -43,6 +50,11 @@ static u32 qm_lock_ref_cnt;
 static bool b_ptt_gtt_init;
 #endif
 
+void ecore_set_ilt_page_size(struct ecore_dev *p_dev, u8 ilt_page_size)
+{
+	p_dev->ilt_page_size = ilt_page_size;
+}
+
 /******************** Doorbell Recovery *******************/
 /* The doorbell recovery mechanism consists of a list of entries which represent
  * doorbelling entities (l2 queues, roce sq/rq/cqs, the slowpath spq, etc). Each
@@ -60,10 +72,11 @@ struct ecore_db_recovery_entry {
 	u8			hwfn_idx;
 };
 
+/* @DPDK */
 /* display a single doorbell recovery entry */
-void ecore_db_recovery_dp_entry(struct ecore_hwfn *p_hwfn,
-				struct ecore_db_recovery_entry *db_entry,
-				const char *action)
+static void ecore_db_recovery_dp_entry(struct ecore_hwfn *p_hwfn,
+				       struct ecore_db_recovery_entry *db_entry,
+				       const char *action)
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "(%s: db_entry %p, addr %p, data %p, width %s, %s space, hwfn %d)\n",
 		   action, db_entry, db_entry->db_addr, db_entry->db_data,
@@ -72,17 +85,40 @@ void ecore_db_recovery_dp_entry(struct ecore_hwfn *p_hwfn,
 		   db_entry->hwfn_idx);
 }
 
-/* doorbell address sanity (address within doorbell bar range) */
-bool ecore_db_rec_sanity(struct ecore_dev *p_dev, void OSAL_IOMEM *db_addr,
-			 void *db_data)
+/* find hwfn according to the doorbell address */
+static struct ecore_hwfn *ecore_db_rec_find_hwfn(struct ecore_dev *p_dev,
+						 void OSAL_IOMEM *db_addr)
 {
-	/* make sure doorbell address  is within the doorbell bar */
-	if (db_addr < p_dev->doorbells || (u8 *)db_addr >
-			(u8 *)p_dev->doorbells + p_dev->db_size) {
+	struct ecore_hwfn *p_hwfn;
+
+	/* In CMT the doorbell bar is split down the middle between engine 0 and engine 1 */
+	if (ECORE_IS_CMT(p_dev))
+		p_hwfn = db_addr < p_dev->hwfns[1].doorbells ?
+			&p_dev->hwfns[0] : &p_dev->hwfns[1];
+	else
+		p_hwfn = ECORE_LEADING_HWFN(p_dev);
+
+	return p_hwfn;
+}
+
+/* doorbell address sanity (address within doorbell bar range) */
+static bool ecore_db_rec_sanity(struct ecore_dev *p_dev,
+				void OSAL_IOMEM *db_addr,
+				enum ecore_db_rec_width db_width,
+				void *db_data)
+{
+	struct ecore_hwfn *p_hwfn = ecore_db_rec_find_hwfn(p_dev, db_addr);
+	u32 width = (db_width == DB_REC_WIDTH_32B) ? 32 : 64;
+
+	/* make sure doorbell address is within the doorbell bar */
+	if (db_addr < p_hwfn->doorbells ||
+	    (u8 OSAL_IOMEM *)db_addr + width >
+	    (u8 OSAL_IOMEM *)p_hwfn->doorbells + p_hwfn->db_size) {
 		OSAL_WARN(true,
 			  "Illegal doorbell address: %p. Legal range for doorbell addresses is [%p..%p]\n",
-			  db_addr, p_dev->doorbells,
-			  (u8 *)p_dev->doorbells + p_dev->db_size);
+			  db_addr, p_hwfn->doorbells,
+			  (u8 OSAL_IOMEM *)p_hwfn->doorbells + p_hwfn->db_size);
+
 		return false;
 	}
 
@@ -95,24 +131,6 @@ bool ecore_db_rec_sanity(struct ecore_dev *p_dev, void OSAL_IOMEM *db_addr,
 	return true;
 }
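
The widened sanity check now also rejects a doorbell whose access would start inside the bar but end past it; the same condition modelled with plain integers (invented bar base/size):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool db_in_bar(uintptr_t db, unsigned int width_bytes,
    		      uintptr_t bar, unsigned int bar_size)
    {
    	return db >= bar && db + width_bytes <= bar + bar_size;
    }

    int main(void)
    {
    	uintptr_t bar = 0x1000;

    	printf("%d\n", db_in_bar(bar + 0xFF8, 8, bar, 0x1000)); /* 1: fits   */
    	printf("%d\n", db_in_bar(bar + 0xFFC, 8, bar, 0x1000)); /* 0: spills */
    	return 0;
    }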
 
-/* find hwfn according to the doorbell address */
-struct ecore_hwfn *ecore_db_rec_find_hwfn(struct ecore_dev *p_dev,
-					  void OSAL_IOMEM *db_addr)
-{
-	struct ecore_hwfn *p_hwfn;
-
-	/* In CMT doorbell bar is split down the middle between engine 0 and
-	 * enigne 1
-	 */
-	if (ECORE_IS_CMT(p_dev))
-		p_hwfn = db_addr < p_dev->hwfns[1].doorbells ?
-			&p_dev->hwfns[0] : &p_dev->hwfns[1];
-	else
-		p_hwfn = ECORE_LEADING_HWFN(p_dev);
-
-	return p_hwfn;
-}
-
 /* add a new entry to the doorbell recovery mechanism */
 enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev,
 					   void OSAL_IOMEM *db_addr,
@@ -123,14 +141,8 @@ enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev,
 	struct ecore_db_recovery_entry *db_entry;
 	struct ecore_hwfn *p_hwfn;
 
-	/* shortcircuit VFs, for now */
-	if (IS_VF(p_dev)) {
-		DP_VERBOSE(p_dev, ECORE_MSG_IOV, "db recovery - skipping VF doorbell\n");
-		return ECORE_SUCCESS;
-	}
-
 	/* sanitize doorbell address */
-	if (!ecore_db_rec_sanity(p_dev, db_addr, db_data))
+	if (!ecore_db_rec_sanity(p_dev, db_addr, db_width, db_data))
 		return ECORE_INVAL;
 
 	/* obtain hwfn from doorbell address */
@@ -171,16 +183,6 @@ enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev,
 	enum _ecore_status_t rc = ECORE_INVAL;
 	struct ecore_hwfn *p_hwfn;
 
-	/* shortcircuit VFs, for now */
-	if (IS_VF(p_dev)) {
-		DP_VERBOSE(p_dev, ECORE_MSG_IOV, "db recovery - skipping VF doorbell\n");
-		return ECORE_SUCCESS;
-	}
-
-	/* sanitize doorbell address */
-	if (!ecore_db_rec_sanity(p_dev, db_addr, db_data))
-		return ECORE_INVAL;
-
 	/* obtain hwfn from doorbell address */
 	p_hwfn = ecore_db_rec_find_hwfn(p_dev, db_addr);
 
@@ -190,12 +192,9 @@ enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev,
 				 &p_hwfn->db_recovery_info.list,
 				 list_entry,
 				 struct ecore_db_recovery_entry) {
-		/* search according to db_data addr since db_addr is not unique
-		 * (roce)
-		 */
+		/* search according to db_data addr since db_addr is not unique (roce) */
 		if (db_entry->db_data == db_data) {
-			ecore_db_recovery_dp_entry(p_hwfn, db_entry,
-						   "Deleting");
+			ecore_db_recovery_dp_entry(p_hwfn, db_entry, "Deleting");
 			OSAL_LIST_REMOVE_ENTRY(&db_entry->list_entry,
 					       &p_hwfn->db_recovery_info.list);
 			rc = ECORE_SUCCESS;
@@ -217,40 +216,40 @@ enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev,
 }
 
 /* initialize the doorbell recovery mechanism */
-enum _ecore_status_t ecore_db_recovery_setup(struct ecore_hwfn *p_hwfn)
+static enum _ecore_status_t ecore_db_recovery_setup(struct ecore_hwfn *p_hwfn)
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Setting up db recovery\n");
 
 	/* make sure db_size was set in p_dev */
-	if (!p_hwfn->p_dev->db_size) {
+	if (!p_hwfn->db_size) {
 		DP_ERR(p_hwfn->p_dev, "db_size not set\n");
 		return ECORE_INVAL;
 	}
 
 	OSAL_LIST_INIT(&p_hwfn->db_recovery_info.list);
 #ifdef CONFIG_ECORE_LOCK_ALLOC
-	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_hwfn->db_recovery_info.lock))
+	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_hwfn->db_recovery_info.lock,
+				 "db_recov_lock"))
 		return ECORE_NOMEM;
 #endif
 	OSAL_SPIN_LOCK_INIT(&p_hwfn->db_recovery_info.lock);
-	p_hwfn->db_recovery_info.db_recovery_counter = 0;
+	p_hwfn->db_recovery_info.count = 0;
 
 	return ECORE_SUCCESS;
 }
 
 /* destroy the doorbell recovery mechanism */
-void ecore_db_recovery_teardown(struct ecore_hwfn *p_hwfn)
+static void ecore_db_recovery_teardown(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_db_recovery_entry *db_entry = OSAL_NULL;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Tearing down db recovery\n");
 	if (!OSAL_LIST_IS_EMPTY(&p_hwfn->db_recovery_info.list)) {
-		DP_VERBOSE(p_hwfn, false, "Doorbell Recovery teardown found the doorbell recovery list was not empty (Expected in disorderly driver unload (e.g. recovery) otherwise this probably means some flow forgot to db_recovery_del). Prepare to purge doorbell recovery list...\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Doorbell Recovery teardown found the doorbell recovery list was not empty (Expected in disorderly driver unload (e.g. recovery) otherwise this probably means some flow forgot to db_recovery_del). Prepare to purge doorbell recovery list...\n");
 		while (!OSAL_LIST_IS_EMPTY(&p_hwfn->db_recovery_info.list)) {
-			db_entry = OSAL_LIST_FIRST_ENTRY(
-						&p_hwfn->db_recovery_info.list,
-						struct ecore_db_recovery_entry,
-						list_entry);
+			db_entry = OSAL_LIST_FIRST_ENTRY(&p_hwfn->db_recovery_info.list,
+							 struct ecore_db_recovery_entry,
+							 list_entry);
 			ecore_db_recovery_dp_entry(p_hwfn, db_entry, "Purging");
 			OSAL_LIST_REMOVE_ENTRY(&db_entry->list_entry,
 					       &p_hwfn->db_recovery_info.list);
@@ -260,17 +259,34 @@ void ecore_db_recovery_teardown(struct ecore_hwfn *p_hwfn)
 #ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->db_recovery_info.lock);
 #endif
-	p_hwfn->db_recovery_info.db_recovery_counter = 0;
+	p_hwfn->db_recovery_info.count = 0;
 }
 
 /* print the content of the doorbell recovery mechanism */
 void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_db_recovery_entry *db_entry = OSAL_NULL;
+	u32 dp_module;
+	u8 dp_level;
 
 	DP_NOTICE(p_hwfn, false,
-		  "Dispalying doorbell recovery database. Counter was %d\n",
-		  p_hwfn->db_recovery_info.db_recovery_counter);
+		  "Displaying doorbell recovery database. Counter is %d\n",
+		  p_hwfn->db_recovery_info.count);
+
+	if (IS_PF(p_hwfn->p_dev))
+		if (p_hwfn->pf_iov_info->max_db_rec_count > 0)
+			DP_NOTICE(p_hwfn, false,
+				  "Max VF counter is %u (VF %u)\n",
+				  p_hwfn->pf_iov_info->max_db_rec_count,
+				  p_hwfn->pf_iov_info->max_db_rec_vfid);
+
+	/* Save dp_module/dp_level values and enable ECORE_MSG_SPQ verbosity
+	 * to force print the db entries.
+	 */
+	dp_module = p_hwfn->dp_module;
+	p_hwfn->dp_module |= ECORE_MSG_SPQ;
+	dp_level = p_hwfn->dp_level;
+	p_hwfn->dp_level = ECORE_LEVEL_VERBOSE;
 
 	/* protect the list */
 	OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock);
@@ -282,28 +298,27 @@ void ecore_db_recovery_dp(struct ecore_hwfn *p_hwfn)
 	}
 
 	OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock);
+
+	/* Get back to saved dp_module/dp_level values */
+	p_hwfn->dp_module = dp_module;
+	p_hwfn->dp_level = dp_level;
 }
 
 /* ring the doorbell of a single doorbell recovery entry */
-void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn,
-			    struct ecore_db_recovery_entry *db_entry,
-			    enum ecore_db_rec_exec db_exec)
+static void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn,
+				   struct ecore_db_recovery_entry *db_entry)
 {
 	/* Print according to width */
 	if (db_entry->db_width == DB_REC_WIDTH_32B)
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "%s doorbell address %p data %x\n",
-			   db_exec == DB_REC_DRY_RUN ? "would have rung" : "ringing",
-			   db_entry->db_addr, *(u32 *)db_entry->db_data);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
+			   "ringing doorbell address %p data %x\n",
+			   db_entry->db_addr,
+			   *(u32 *)db_entry->db_data);
 	else
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "%s doorbell address %p data %lx\n",
-			   db_exec == DB_REC_DRY_RUN ? "would have rung" : "ringing",
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
+			   "ringing doorbell address %p data %" PRIx64 "\n",
 			   db_entry->db_addr,
-			   *(unsigned long *)(db_entry->db_data));
-
-	/* Sanity */
-	if (!ecore_db_rec_sanity(p_hwfn->p_dev, db_entry->db_addr,
-				 db_entry->db_data))
-		return;
+			   *(u64 *)(db_entry->db_data));
 
 	/* Flush the write combined buffer. Since there are multiple doorbelling
 	 * entities using the same address, if we don't flush, a transaction
@@ -312,14 +327,12 @@ void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn,
 	OSAL_WMB(p_hwfn->p_dev);
 
 	/* Ring the doorbell */
-	if (db_exec == DB_REC_REAL_DEAL || db_exec == DB_REC_ONCE) {
-		if (db_entry->db_width == DB_REC_WIDTH_32B)
-			DIRECT_REG_WR(p_hwfn, db_entry->db_addr,
-				      *(u32 *)(db_entry->db_data));
-		else
-			DIRECT_REG_WR64(p_hwfn, db_entry->db_addr,
-					*(u64 *)(db_entry->db_data));
-	}
+	if (db_entry->db_width == DB_REC_WIDTH_32B)
+		DIRECT_REG_WR(p_hwfn, db_entry->db_addr,
+			      *(u32 *)(db_entry->db_data));
+	else
+		DIRECT_REG_WR64(p_hwfn, db_entry->db_addr,
+				*(u64 *)(db_entry->db_data));
 
 	/* Flush the write combined buffer. Next doorbell may come from a
 	 * different entity to the same address...
@@ -328,30 +341,21 @@ void ecore_db_recovery_ring(struct ecore_hwfn *p_hwfn,
 }
 
 /* traverse the doorbell recovery entry list and ring all the doorbells */
-void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn,
-			       enum ecore_db_rec_exec db_exec)
+void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_db_recovery_entry *db_entry = OSAL_NULL;
 
-	if (db_exec != DB_REC_ONCE) {
-		DP_NOTICE(p_hwfn, false, "Executing doorbell recovery. Counter was %d\n",
-			  p_hwfn->db_recovery_info.db_recovery_counter);
-
-		/* track amount of times recovery was executed */
-		p_hwfn->db_recovery_info.db_recovery_counter++;
-	}
+	DP_NOTICE(p_hwfn, false,
+		  "Executing doorbell recovery. Counter is %d\n",
+		  ++p_hwfn->db_recovery_info.count);
 
 	/* protect the list */
 	OSAL_SPIN_LOCK(&p_hwfn->db_recovery_info.lock);
 	OSAL_LIST_FOR_EACH_ENTRY(db_entry,
 				 &p_hwfn->db_recovery_info.list,
 				 list_entry,
-				 struct ecore_db_recovery_entry) {
-		ecore_db_recovery_ring(p_hwfn, db_entry, db_exec);
-		if (db_exec == DB_REC_ONCE)
-			break;
-	}
-
+				 struct ecore_db_recovery_entry)
+		ecore_db_recovery_ring(p_hwfn, db_entry);
 	OSAL_SPIN_UNLOCK(&p_hwfn->db_recovery_info.lock);
 }
 /******************** Doorbell Recovery end ****************/
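
For context, the expected calling pattern of this mechanism is: register a
doorbell when its queue is created, replay all registered doorbells when a
doorbell drop is detected, and deregister on teardown. A hypothetical usage
sketch (argument order and the DB_REC_* constants follow the driver's API;
note that entries are keyed by db_data, not db_addr, on delete):

	/* at queue creation time */
	rc = ecore_db_recovery_add(p_dev, db_addr, &db_data,
				   DB_REC_WIDTH_32B, DB_REC_KERNEL);

	/* on doorbell recovery (e.g. after a doorbell drop is detected) */
	ecore_db_recovery_execute(p_hwfn);

	/* at queue teardown time */
	rc = ecore_db_recovery_del(p_dev, db_addr, &db_data);
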
@@ -364,7 +368,7 @@ enum ecore_llh_filter_type {
 };
 
 struct ecore_llh_mac_filter {
-	u8 addr[ETH_ALEN];
+	u8 addr[ECORE_ETH_ALEN];
 };
 
 struct ecore_llh_protocol_filter {
@@ -661,16 +665,15 @@ ecore_llh_shadow_remove_all_filters(struct ecore_dev *p_dev, u8 ppfid)
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t ecore_abs_ppfid(struct ecore_dev *p_dev,
-					    u8 rel_ppfid, u8 *p_abs_ppfid)
+enum _ecore_status_t ecore_abs_ppfid(struct ecore_dev *p_dev, u8 rel_ppfid,
+				     u8 *p_abs_ppfid)
 {
 	struct ecore_llh_info *p_llh_info = p_dev->p_llh_info;
-	u8 ppfids = p_llh_info->num_ppfid - 1;
 
 	if (rel_ppfid >= p_llh_info->num_ppfid) {
 		DP_NOTICE(p_dev, false,
-			  "rel_ppfid %d is not valid, available indices are 0..%hhd\n",
-			  rel_ppfid, ppfids);
+			  "rel_ppfid %d is not valid, available indices are 0..%hhu\n",
+			  rel_ppfid, p_llh_info->num_ppfid - 1);
 		return ECORE_INVAL;
 	}
 
@@ -679,6 +682,24 @@ static enum _ecore_status_t ecore_abs_ppfid(struct ecore_dev *p_dev,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t ecore_llh_map_ppfid_to_pfid(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u8 ppfid, u8 pfid)
+{
+	u8 abs_ppfid;
+	u32 addr;
+	enum _ecore_status_t rc;
+
+	rc = ecore_abs_ppfid(p_hwfn->p_dev, ppfid, &abs_ppfid);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	addr = NIG_REG_LLH_PPFID2PFID_TBL_0 + abs_ppfid * 0x4;
+	ecore_wr(p_hwfn, p_ptt, addr, pfid);
+
+	return ECORE_SUCCESS;
+}
+
 static enum _ecore_status_t
 __ecore_llh_set_engine_affin(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 {
@@ -791,21 +812,21 @@ static enum _ecore_status_t ecore_llh_hw_init_pf(struct ecore_hwfn *p_hwfn,
 						 bool avoid_eng_affin)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	u8 ppfid, abs_ppfid;
+	u8 ppfid;
 	enum _ecore_status_t rc;
 
 	for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) {
-		u32 addr;
-
-		rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
-		if (rc != ECORE_SUCCESS)
+		rc = ecore_llh_map_ppfid_to_pfid(p_hwfn, p_ptt, ppfid,
+						 p_hwfn->rel_pf_id);
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_dev, false,
+				  "Failed to map ppfid %d to pfid %d\n",
+				  ppfid, p_hwfn->rel_pf_id);
 			return rc;
-
-		addr = NIG_REG_LLH_PPFID2PFID_TBL_0 + abs_ppfid * 0x4;
-		ecore_wr(p_hwfn, p_ptt, addr, p_hwfn->rel_pf_id);
+		}
 	}
 
-	if (OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) &&
+	if (OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) &&
 	    !ECORE_IS_FCOE_PERSONALITY(p_hwfn)) {
 		rc = ecore_llh_add_mac_filter(p_dev, 0,
 					      p_hwfn->hw_info.hw_mac_addr);
@@ -833,7 +854,10 @@ enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev)
 	return p_dev->l2_affin_hint ? ECORE_ENG1 : ECORE_ENG0;
 }
 
-/* TBD - should be removed when these definitions are available in reg_addr.h */
+/* TBD -
+ * When the relevant definitions are available in reg_addr.h, the SHIFT
+ * definitions should be removed, and the MASK definitions should be revised.
+ */
 #define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_MASK		0x3
 #define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_SHIFT		0
 #define NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE_MASK		0x3
@@ -939,7 +963,7 @@ enum _ecore_status_t ecore_llh_set_roce_affinity(struct ecore_dev *p_dev,
 	return rc;
 }
 
-struct ecore_llh_filter_details {
+struct ecore_llh_filter_e4_details {
 	u64 value;
 	u32 mode;
 	u32 protocol_type;
@@ -948,10 +972,10 @@ struct ecore_llh_filter_details {
 };
 
 static enum _ecore_status_t
-ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx,
-			struct ecore_llh_filter_details *p_details,
-			bool b_write_access)
+ecore_llh_access_filter_e4(struct ecore_hwfn *p_hwfn,
+			   struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx,
+			   struct ecore_llh_filter_e4_details *p_details,
+			   bool b_write_access)
 {
 	u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
 	struct dmae_params params;
@@ -1035,16 +1059,16 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 }
 
 static enum _ecore_status_t
-ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type,
 			u32 high, u32 low)
 {
-	struct ecore_llh_filter_details filter_details;
+	struct ecore_llh_filter_e4_details filter_details;
 
 	filter_details.enable = 1;
 	filter_details.value = ((u64)high << 32) | low;
 	filter_details.hdr_sel =
-		OSAL_GET_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits) ?
+		OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits) ?
 		1 : /* inner/encapsulated header */
 		0;  /* outer/tunnel header */
 	filter_details.protocol_type = filter_prot_type;
@@ -1052,42 +1076,104 @@ ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			      1 : /* protocol-based classification */
 			      0;  /* MAC-address based classification */
 
-	return ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
-				&filter_details,
-				true /* write access */);
+	return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+					  &filter_details,
+					  true /* write access */);
 }
 
 static enum _ecore_status_t
-ecore_llh_remove_filter(struct ecore_hwfn *p_hwfn,
+ecore_llh_remove_filter_e4(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx)
 {
-	struct ecore_llh_filter_details filter_details;
+	struct ecore_llh_filter_e4_details filter_details;
 
 	OSAL_MEMSET(&filter_details, 0, sizeof(filter_details));
 
-	return ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
-				       &filter_details,
-				       true /* write access */);
+	return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+					  &filter_details,
+					  true /* write access */);
+}
+
+/* OSAL_UNUSED is temporarily used to avoid unused-parameter compilation warnings.
+ * Should be removed when the function is implemented.
+ */
+static enum _ecore_status_t
+ecore_llh_add_filter_e5(struct ecore_hwfn OSAL_UNUSED * p_hwfn,
+			struct ecore_ptt OSAL_UNUSED * p_ptt,
+			u8 OSAL_UNUSED abs_ppfid, u8 OSAL_UNUSED filter_idx,
+			u8 OSAL_UNUSED filter_prot_type, u32 OSAL_UNUSED high,
+			u32 OSAL_UNUSED low)
+{
+	ECORE_E5_MISSING_CODE;
+
+	return ECORE_SUCCESS;
+}
+
+/* OSAL_UNUSED is temporarily used to avoid unused-parameter compilation warnings.
+ * Should be removed when the function is implemented.
+ */
+static enum _ecore_status_t
+ecore_llh_remove_filter_e5(struct ecore_hwfn OSAL_UNUSED * p_hwfn,
+			   struct ecore_ptt OSAL_UNUSED * p_ptt,
+			   u8 OSAL_UNUSED abs_ppfid,
+			   u8 OSAL_UNUSED filter_idx)
+{
+	ECORE_E5_MISSING_CODE;
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		     u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type, u32 high,
+		     u32 low)
+{
+	if (ECORE_IS_E4(p_hwfn->p_dev))
+		return ecore_llh_add_filter_e4(p_hwfn, p_ptt, abs_ppfid,
+					       filter_idx, filter_prot_type,
+					       high, low);
+	else /* E5 */
+		return ecore_llh_add_filter_e5(p_hwfn, p_ptt, abs_ppfid,
+					       filter_idx, filter_prot_type,
+					       high, low);
+}
+
+static enum _ecore_status_t
+ecore_llh_remove_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			u8 abs_ppfid, u8 filter_idx)
+{
+	if (ECORE_IS_E4(p_hwfn->p_dev))
+		return ecore_llh_remove_filter_e4(p_hwfn, p_ptt, abs_ppfid,
+						  filter_idx);
+	else /* E5 */
+		return ecore_llh_remove_filter_e5(p_hwfn, p_ptt, abs_ppfid,
+						  filter_idx);
 }
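
This add/remove pair illustrates the split-and-dispatch pattern used
throughout the series for the new HW: the existing logic becomes an _e4
variant, an _e5 stub is added (ECORE_E5_MISSING_CODE until implemented),
and a thin wrapper selects the variant at run time. Schematically, with
ecore_foo() standing in for any such function:

	static enum _ecore_status_t ecore_foo(struct ecore_hwfn *p_hwfn)
	{
		if (ECORE_IS_E4(p_hwfn->p_dev))
			return ecore_foo_e4(p_hwfn);
		else /* E5 */
			return ecore_foo_e5(p_hwfn);
	}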
 
 enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
-					      u8 mac_addr[ETH_ALEN])
+					      u8 mac_addr[ECORE_ETH_ALEN])
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
 	union ecore_llh_filter filter;
 	u8 filter_idx, abs_ppfid;
+	struct ecore_ptt *p_ptt;
 	u32 high, low, ref_cnt;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
+	if (!OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
+		return rc;
+
+	if (IS_VF(p_hwfn->p_dev)) {
+		DP_NOTICE(p_dev, false, "Setting MAC to LLH is not supported for VFs\n");
+		return ECORE_NOTIMPL;
+	}
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (p_ptt == OSAL_NULL)
 		return ECORE_AGAIN;
 
-	if (!OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
-		goto out;
-
 	OSAL_MEM_ZERO(&filter, sizeof(filter));
-	OSAL_MEMCPY(filter.mac.addr, mac_addr, ETH_ALEN);
+	OSAL_MEMCPY(filter.mac.addr, mac_addr, ECORE_ETH_ALEN);
 	rc = ecore_llh_shadow_add_filter(p_dev, ppfid,
 					 ECORE_LLH_FILTER_TYPE_MAC,
 					 &filter, &filter_idx, &ref_cnt);
@@ -1204,6 +1290,22 @@ ecore_llh_protocol_filter_to_hilo(struct ecore_dev *p_dev,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t
+ecore_llh_add_dst_tcp_port_filter(struct ecore_dev *p_dev, u16 dest_port)
+{
+	return ecore_llh_add_protocol_filter(p_dev, 0,
+					ECORE_LLH_FILTER_TCP_DEST_PORT,
+					ECORE_LLH_DONT_CARE, dest_port);
+}
+
+enum _ecore_status_t
+ecore_llh_add_src_tcp_port_filter(struct ecore_dev *p_dev, u16 src_port)
+{
+	return ecore_llh_add_protocol_filter(p_dev, 0,
+					ECORE_LLH_FILTER_TCP_SRC_PORT,
+					src_port, ECORE_LLH_DONT_CARE);
+}
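
These wrappers make the common single-port case a one-liner over the
generic protocol filter API, with ppfid 0 and ECORE_LLH_DONT_CARE for the
unused field. A hypothetical usage (the port number is illustrative only;
the matching remove wrappers appear further down in this patch):

	/* steer TCP traffic destined to port 4420 to the default ppfid */
	rc = ecore_llh_add_dst_tcp_port_filter(p_dev, 4420);
	if (rc != ECORE_SUCCESS)
		return rc;
	...
	ecore_llh_remove_dst_tcp_port_filter(p_dev, 4420);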
+
 enum _ecore_status_t
 ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
 			      enum ecore_llh_prot_filter_type_t type,
@@ -1212,15 +1314,15 @@ ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
 	u8 filter_idx, abs_ppfid, type_bitmap;
-	char str[32];
 	union ecore_llh_filter filter;
 	u32 high, low, ref_cnt;
+	char str[32];
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	if (p_ptt == OSAL_NULL)
 		return ECORE_AGAIN;
 
-	if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
+	if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
 		goto out;
 
 	rc = ecore_llh_protocol_filter_stringify(p_dev, type,
@@ -1275,23 +1377,33 @@ ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
 }
 
 void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
-				 u8 mac_addr[ETH_ALEN])
+				 u8 mac_addr[ECORE_ETH_ALEN])
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
 	union ecore_llh_filter filter;
 	u8 filter_idx, abs_ppfid;
+	struct ecore_ptt *p_ptt;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u32 ref_cnt;
 
-	if (p_ptt == OSAL_NULL)
+	if (!OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
 		return;
 
-	if (!OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
-		goto out;
+	if (ECORE_IS_NVMETCP_PERSONALITY(p_hwfn))
+		return;
+
+	if (IS_VF(p_hwfn->p_dev)) {
+		DP_NOTICE(p_dev, false, "Removing MAC from LLH is not supported for VFs\n");
+		return;
+	}
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+
+	if (p_ptt == OSAL_NULL)
+		return;
 
 	OSAL_MEM_ZERO(&filter, sizeof(filter));
-	OSAL_MEMCPY(filter.mac.addr, mac_addr, ETH_ALEN);
+	OSAL_MEMCPY(filter.mac.addr, mac_addr, ECORE_ETH_ALEN);
 	rc = ecore_llh_shadow_remove_filter(p_dev, ppfid, &filter, &filter_idx,
 					    &ref_cnt);
 	if (rc != ECORE_SUCCESS)
@@ -1326,6 +1438,22 @@ void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
 	ecore_ptt_release(p_hwfn, p_ptt);
 }
 
+void ecore_llh_remove_dst_tcp_port_filter(struct ecore_dev *p_dev,
+					  u16 dest_port)
+{
+	ecore_llh_remove_protocol_filter(p_dev, 0,
+					 ECORE_LLH_FILTER_TCP_DEST_PORT,
+					 ECORE_LLH_DONT_CARE, dest_port);
+}
+
+void ecore_llh_remove_src_tcp_port_filter(struct ecore_dev *p_dev,
+					  u16 src_port)
+{
+	ecore_llh_remove_protocol_filter(p_dev, 0,
+					 ECORE_LLH_FILTER_TCP_SRC_PORT,
+					 src_port, ECORE_LLH_DONT_CARE);
+}
+
 void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
 				      enum ecore_llh_prot_filter_type_t type,
 				      u16 source_port_or_eth_type,
@@ -1334,15 +1462,15 @@ void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
 	u8 filter_idx, abs_ppfid;
-	char str[32];
 	union ecore_llh_filter filter;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
+	char str[32];
 	u32 ref_cnt;
 
 	if (p_ptt == OSAL_NULL)
 		return;
 
-	if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
+	if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
 		goto out;
 
 	rc = ecore_llh_protocol_filter_stringify(p_dev, type,
@@ -1396,8 +1524,8 @@ void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid)
 	if (p_ptt == OSAL_NULL)
 		return;
 
-	if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) &&
-	    !OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
+	if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) &&
+	    !OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
 		goto out;
 
 	rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
@@ -1411,7 +1539,7 @@ void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid)
 	for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
 	     filter_idx++) {
 		rc = ecore_llh_remove_filter(p_hwfn, p_ptt,
-						abs_ppfid, filter_idx);
+					     abs_ppfid, filter_idx);
 		if (rc != ECORE_SUCCESS)
 			goto out;
 	}
@@ -1423,8 +1551,8 @@ void ecore_llh_clear_all_filters(struct ecore_dev *p_dev)
 {
 	u8 ppfid;
 
-	if (!OSAL_GET_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) &&
-	    !OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
+	if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) &&
+	    !OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
 		return;
 
 	for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++)
@@ -1450,22 +1578,18 @@ enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t
-ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
+static enum _ecore_status_t
+ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			u8 ppfid)
 {
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
-	struct ecore_llh_filter_details filter_details;
+	struct ecore_llh_filter_e4_details filter_details;
 	u8 abs_ppfid, filter_idx;
 	u32 addr;
 	enum _ecore_status_t rc;
 
-	if (!p_ptt)
-		return ECORE_AGAIN;
-
 	rc = ecore_abs_ppfid(p_hwfn->p_dev, ppfid, &abs_ppfid);
 	if (rc != ECORE_SUCCESS)
-		goto out;
+		return rc;
 
 	addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4;
 	DP_NOTICE(p_hwfn, false,
@@ -1476,22 +1600,46 @@ ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
 	for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
 	     filter_idx++) {
 		OSAL_MEMSET(&filter_details, 0, sizeof(filter_details));
-		rc =  ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid,
-					      filter_idx, &filter_details,
-					      false /* read access */);
+		rc =  ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid,
+						 filter_idx, &filter_details,
+						 false /* read access */);
 		if (rc != ECORE_SUCCESS)
-			goto out;
+			return rc;
 
 		DP_NOTICE(p_hwfn, false,
-			  "filter %2hhd: enable %d, value 0x%016lx, mode %d, protocol_type 0x%x, hdr_sel 0x%x\n",
+			  "filter %2hhd: enable %d, value 0x%016" PRIx64 ", mode %d, protocol_type 0x%x, hdr_sel 0x%x\n",
 			  filter_idx, filter_details.enable,
-			  (unsigned long)filter_details.value,
-			  filter_details.mode,
+			  filter_details.value, filter_details.mode,
 			  filter_details.protocol_type, filter_details.hdr_sel);
 	}
 
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_llh_dump_ppfid_e5(struct ecore_hwfn OSAL_UNUSED * p_hwfn,
+			struct ecore_ptt OSAL_UNUSED * p_ptt,
+			u8 OSAL_UNUSED ppfid)
+{
+	ECORE_E5_MISSING_CODE;
+
+	return ECORE_NOTIMPL;
+}
+
+enum _ecore_status_t ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
+{
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+	enum _ecore_status_t rc;
+
+	if (p_ptt == OSAL_NULL)
+		return ECORE_AGAIN;
+
+	if (ECORE_IS_E4(p_dev))
+		rc = ecore_llh_dump_ppfid_e4(p_hwfn, p_ptt, ppfid);
+	else /* E5 */
+		rc = ecore_llh_dump_ppfid_e5(p_hwfn, p_ptt, ppfid);
 
-out:
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -1513,15 +1661,6 @@ enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev)
 
 /******************************* NIG LLH - End ********************************/
 
-/* Configurable */
-#define ECORE_MIN_DPIS		(4)	/* The minimal num of DPIs required to
-					 * load the driver. The number was
-					 * arbitrarily set.
-					 */
-
-/* Derived */
-#define ECORE_MIN_PWM_REGION	(ECORE_WID_SIZE * ECORE_MIN_DPIS)
-
 static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn,
 			     struct ecore_ptt *p_ptt,
 			     enum BAR_ID bar_id)
@@ -1544,18 +1683,18 @@ static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn,
 	if (ECORE_IS_CMT(p_hwfn->p_dev)) {
 		DP_INFO(p_hwfn,
 			"BAR size not configured. Assuming BAR size of 256kB for GRC and 512kB for DB\n");
-		val = BAR_ID_0 ? 256 * 1024 : 512 * 1024;
+		return BAR_ID_0 ? 256 * 1024 : 512 * 1024;
 	} else {
 		DP_INFO(p_hwfn,
 			"BAR size not configured. Assuming BAR size of 512kB for GRC and 512kB for DB\n");
-		val = 512 * 1024;
+		return 512 * 1024;
 	}
-
-	return val;
 }
 
-void ecore_init_dp(struct ecore_dev *p_dev,
-		   u32 dp_module, u8 dp_level, void *dp_ctx)
+void ecore_init_dp(struct ecore_dev	*p_dev,
+		   u32			dp_module,
+		   u8			dp_level,
+		   void			*dp_ctx)
 {
 	u32 i;
 
@@ -1571,6 +1710,70 @@ void ecore_init_dp(struct ecore_dev *p_dev,
 	}
 }
 
+void ecore_init_int_dp(struct ecore_dev *p_dev, u32 dp_module, u8 dp_level)
+{
+	u32 i;
+
+	p_dev->dp_int_level = dp_level;
+	p_dev->dp_int_module = dp_module;
+	for (i = 0; i < MAX_HWFNS_PER_DEVICE; i++) {
+		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+
+		p_hwfn->dp_int_level = dp_level;
+		p_hwfn->dp_int_module = dp_module;
+	}
+}
+
+#define ECORE_DP_INT_LOG_MAX_STR_SIZE 256 /* @DPDK */
+
+void ecore_dp_internal_log(struct ecore_dev *p_dev, char *fmt, ...)
+{
+	char buff[ECORE_DP_INT_LOG_MAX_STR_SIZE];
+	struct ecore_internal_trace *p_int_log;
+	u32 len, partial_len;
+	unsigned long flags;
+	osal_va_list args;
+	char *buf = buff;
+	u32 prod;
+
+	if (!p_dev)
+		return;
+
+	p_int_log = &p_dev->internal_trace;
+
+	if (!p_int_log->buf)
+		return;
+
+	OSAL_VA_START(args, fmt);
+	len = OSAL_VSNPRINTF(buf, ECORE_DP_INT_LOG_MAX_STR_SIZE, fmt, args);
+	OSAL_VA_END(args);
+
+	if (len > ECORE_DP_INT_LOG_MAX_STR_SIZE) {
+		len = ECORE_DP_INT_LOG_MAX_STR_SIZE;
+		buf[len - 1] = '\n';
+	}
+
+	partial_len = len;
+
+	OSAL_SPIN_LOCK_IRQSAVE(&p_int_log->lock, flags);
+	prod = p_int_log->prod % p_int_log->size;
+
+	if (p_int_log->size - prod <= len) {
+		partial_len = p_int_log->size - prod;
+		OSAL_MEMCPY(p_int_log->buf + prod, buf, partial_len);
+		p_int_log->prod += partial_len;
+		prod = p_int_log->prod % p_int_log->size;
+		buf += partial_len;
+		partial_len = len - partial_len;
+	}
+
+	OSAL_MEMCPY(p_int_log->buf + prod, buf, partial_len);
+
+	p_int_log->prod += partial_len;
+
+	OSAL_SPIN_UNLOCK_IRQSAVE(&p_int_log->lock, flags);
+}
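
The producer logic above is a standard ring-buffer write with wraparound:
when the formatted string does not fit in the space left before the end of
the buffer, it is split into a tail copy plus a head copy. A standalone
model of the same arithmetic (a sketch only; u32 stands for the driver's
unsigned 32-bit type):

	static void ring_put(char *ring, u32 size, u32 *p_prod,
			     const char *msg, u32 len)
	{
		u32 off = *p_prod % size;

		if (size - off <= len) {
			/* wraps: copy the tail, then restart at offset 0 */
			OSAL_MEMCPY(ring + off, msg, size - off);
			*p_prod += size - off; /* prod is now a multiple of size */
			msg += size - off;
			len -= size - off;
			off = 0;
		}
		OSAL_MEMCPY(ring + off, msg, len);
		*p_prod += len;
	}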
+
 enum _ecore_status_t ecore_init_struct(struct ecore_dev *p_dev)
 {
 	u8 i;
@@ -1581,19 +1784,32 @@ enum _ecore_status_t ecore_init_struct(struct ecore_dev *p_dev)
 		p_hwfn->p_dev = p_dev;
 		p_hwfn->my_id = i;
 		p_hwfn->b_active = false;
+		p_hwfn->p_dummy_cb = ecore_int_dummy_comp_cb;
 
 #ifdef CONFIG_ECORE_LOCK_ALLOC
-		if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_hwfn->dmae_info.lock))
+		if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_hwfn->dmae_info.lock,
+					 "dma_info_lock"))
 			goto handle_err;
 #endif
 		OSAL_SPIN_LOCK_INIT(&p_hwfn->dmae_info.lock);
 	}
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	if (OSAL_SPIN_LOCK_ALLOC(&p_dev->hwfns[0], &p_dev->internal_trace.lock,
+				 "internal_trace_lock"))
+		goto handle_err;
+#endif
+	OSAL_SPIN_LOCK_INIT(&p_dev->internal_trace.lock);
+
+	p_dev->p_dev = p_dev;
 
 	/* hwfn 0 is always active */
 	p_dev->hwfns[0].b_active = true;
 
 	/* set the default cache alignment to 128 (may be overridden later) */
 	p_dev->cache_shift = 7;
+
+	p_dev->ilt_page_size = ECORE_DEFAULT_ILT_PAGE_SIZE;
+
 	return ECORE_SUCCESS;
 #ifdef CONFIG_ECORE_LOCK_ALLOC
 handle_err:
@@ -1612,9 +1828,16 @@ static void ecore_qm_info_free(struct ecore_hwfn *p_hwfn)
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
 	OSAL_FREE(p_hwfn->p_dev, qm_info->qm_pq_params);
+	qm_info->qm_pq_params = OSAL_NULL;
 	OSAL_FREE(p_hwfn->p_dev, qm_info->qm_vport_params);
+	qm_info->qm_vport_params = OSAL_NULL;
 	OSAL_FREE(p_hwfn->p_dev, qm_info->qm_port_params);
+	qm_info->qm_port_params = OSAL_NULL;
 	OSAL_FREE(p_hwfn->p_dev, qm_info->wfq_data);
+	qm_info->wfq_data = OSAL_NULL;
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_SPIN_LOCK_DEALLOC(&qm_info->qm_info_lock);
+#endif
 }
 
 static void ecore_dbg_user_data_free(struct ecore_hwfn *p_hwfn)
@@ -1627,49 +1850,66 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 {
 	int i;
 
-	if (IS_VF(p_dev)) {
-		for_each_hwfn(p_dev, i)
-			ecore_l2_free(&p_dev->hwfns[i]);
-		return;
-	}
-
-	OSAL_FREE(p_dev, p_dev->fw_data);
-
 	OSAL_FREE(p_dev, p_dev->reset_stats);
+	p_dev->reset_stats = OSAL_NULL;
 
-	ecore_llh_free(p_dev);
+	if (IS_PF(p_dev)) {
+		OSAL_FREE(p_dev, p_dev->fw_data);
+		p_dev->fw_data = OSAL_NULL;
+
+		ecore_llh_free(p_dev);
+	}
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 
+		ecore_spq_free(p_hwfn);
+		ecore_l2_free(p_hwfn);
+
+		if (IS_VF(p_dev)) {
+			ecore_db_recovery_teardown(p_hwfn);
+			continue;
+		}
+
 		ecore_cxt_mngr_free(p_hwfn);
 		ecore_qm_info_free(p_hwfn);
-		ecore_spq_free(p_hwfn);
 		ecore_eq_free(p_hwfn);
 		ecore_consq_free(p_hwfn);
 		ecore_int_free(p_hwfn);
+
 		ecore_iov_free(p_hwfn);
-		ecore_l2_free(p_hwfn);
 		ecore_dmae_info_free(p_hwfn);
 		ecore_dcbx_info_free(p_hwfn);
 		ecore_dbg_user_data_free(p_hwfn);
-		ecore_fw_overlay_mem_free(p_hwfn, p_hwfn->fw_overlay_mem);
-		/* @@@TBD Flush work-queue ? */
+		ecore_fw_overlay_mem_free(p_hwfn, &p_hwfn->fw_overlay_mem);
+		/* @@@TBD Flush work-queue ?*/
 
 		/* destroy doorbell recovery mechanism */
 		ecore_db_recovery_teardown(p_hwfn);
 	}
+
+	if (IS_PF(p_dev)) {
+		OSAL_FREE(p_dev, p_dev->fw_data);
+		p_dev->fw_data = OSAL_NULL;
+	}
 }
 
 /******************** QM initialization *******************/
-
-/* bitmaps for indicating active traffic classes.
- * Special case for Arrowhead 4 port
- */
+/* bitmaps for indicating active traffic classes. Special case for Arrowhead 4 port */
 /* 0..3 actualy used, 4 serves OOO, 7 serves high priority stuff (e.g. DCQCN) */
-#define ACTIVE_TCS_BMAP 0x9f
-/* 0..3 actually used, OOO and high priority stuff all use 3 */
-#define ACTIVE_TCS_BMAP_4PORT_K2 0xf
+#define ACTIVE_TCS_BMAP_E4 0x9f
+#define ACTIVE_TCS_BMAP_E5 0x1f /* 0..3 actually used, 4 serves OOO */
+#define ACTIVE_TCS_BMAP_4PORT_K2 0xf /* 0..3 actually used, OOO and high priority stuff all use 3 */
+
+#define ACTIVE_TCS_BMAP(_p_hwfn) \
+	(ECORE_IS_E4((_p_hwfn)->p_dev) ? \
+	 ACTIVE_TCS_BMAP_E4 : ACTIVE_TCS_BMAP_E5)
+
+static u16 ecore_init_qm_get_num_active_vfs(struct ecore_hwfn *p_hwfn)
+{
+	return IS_ECORE_SRIOV(p_hwfn->p_dev) ?
+		p_hwfn->pf_iov_info->max_active_vfs : 0;
+}
 
 /* determines the physical queue flags for a given PF. */
 static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn)
@@ -1680,8 +1920,10 @@ static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn)
 	flags = PQ_FLAGS_LB;
 
 	/* feature flags */
-	if (IS_ECORE_SRIOV(p_hwfn->p_dev))
+	if (ecore_init_qm_get_num_active_vfs(p_hwfn))
 		flags |= PQ_FLAGS_VFS;
+
+	/* @DPDK */
 	if (IS_ECORE_PACING(p_hwfn))
 		flags |= PQ_FLAGS_RLS;
 
@@ -1691,27 +1933,11 @@ static u32 ecore_get_pq_flags(struct ecore_hwfn *p_hwfn)
 		if (!IS_ECORE_PACING(p_hwfn))
 			flags |= PQ_FLAGS_MCOS;
 		break;
-	case ECORE_PCI_FCOE:
-		flags |= PQ_FLAGS_OFLD;
-		break;
-	case ECORE_PCI_ISCSI:
-		flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD;
-		break;
-	case ECORE_PCI_ETH_ROCE:
-		flags |= PQ_FLAGS_OFLD | PQ_FLAGS_LLT;
-		if (!IS_ECORE_PACING(p_hwfn))
-			flags |= PQ_FLAGS_MCOS;
-		break;
-	case ECORE_PCI_ETH_IWARP:
-		flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD;
-		if (!IS_ECORE_PACING(p_hwfn))
-			flags |= PQ_FLAGS_MCOS;
-		break;
 	default:
-		DP_ERR(p_hwfn, "unknown personality %d\n",
-		       p_hwfn->hw_info.personality);
+		DP_ERR(p_hwfn, "unknown personality %d\n", p_hwfn->hw_info.personality);
 		return 0;
 	}
+
 	return flags;
 }
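
For example, an Ethernet PF with active VFs (hypothetical configuration)
resolves to:

	/* ECORE_PCI_ETH, active VFs, no pacing */
	flags = PQ_FLAGS_LB | PQ_FLAGS_MCOS | PQ_FLAGS_VFS;

	/* the same PF with pacing enabled trades MCOS for rate-limit PQs */
	flags = PQ_FLAGS_LB | PQ_FLAGS_VFS | PQ_FLAGS_RLS;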
 
@@ -1723,8 +1949,53 @@ u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn)
 
 u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn)
 {
-	return IS_ECORE_SRIOV(p_hwfn->p_dev) ?
-			p_hwfn->p_dev->p_iov_info->total_vfs : 0;
+	return IS_ECORE_SRIOV(p_hwfn->p_dev) ? p_hwfn->p_dev->p_iov_info->total_vfs : 0;
+}
+
+static u16 ecore_init_qm_get_num_vfs_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u16 num_pqs, num_vfs = ecore_init_qm_get_num_active_vfs(p_hwfn);
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	/* One L2 PQ per VF */
+	num_pqs = num_vfs;
+
+	/* Separate RDMA PQ per VF */
+	if ((PQ_FLAGS_VFR & pq_flags))
+		num_pqs += num_vfs;
+
+	/* Separate RDMA PQ for all VFs */
+	if ((PQ_FLAGS_VSR & pq_flags))
+		num_pqs += 1;
+
+	return num_pqs;
+}
+
+static bool ecore_lag_support(struct ecore_hwfn *p_hwfn)
+{
+	return (ECORE_IS_AH(p_hwfn->p_dev) &&
+		ECORE_IS_ROCE_PERSONALITY(p_hwfn) &&
+		OSAL_TEST_BIT(ECORE_MF_ROCE_LAG, &p_hwfn->p_dev->mf_bits));
+}
+
+static u8 ecore_init_qm_get_num_mtc_tcs(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	if (!(PQ_FLAGS_MTC & pq_flags))
+		return 1;
+
+	return ecore_init_qm_get_num_tcs(p_hwfn);
+}
+
+static u8 ecore_init_qm_get_num_mtc_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u32 num_ports, num_tcs;
+
+	num_ports = ecore_lag_support(p_hwfn) ? LAG_MAX_PORT_NUM : 1;
+	num_tcs = ecore_init_qm_get_num_mtc_tcs(p_hwfn);
+
+	return num_ports * num_tcs;
 }
 
 #define NUM_DEFAULT_RLS 1
@@ -1733,21 +2004,13 @@ u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn)
 {
 	u16 num_pf_rls, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
 
-	/* num RLs can't exceed resource amount of rls or vports or the
-	 * dcqcn qps
-	 */
-	num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
-				     RESC_NUM(p_hwfn, ECORE_VPORT));
+	/* num RLs can't exceed resource amount of rls or vports or the dcqcn qps */
+	num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL), RESC_NUM(p_hwfn,
+				     ECORE_VPORT));
 
-	/* make sure after we reserve the default and VF rls we'll have
-	 * something left
-	 */
-	if (num_pf_rls < num_vfs + NUM_DEFAULT_RLS) {
-		DP_NOTICE(p_hwfn, false,
-			  "no rate limiters left for PF rate limiting"
-			  " [num_pf_rls %d num_vfs %d]\n", num_pf_rls, num_vfs);
+	/* make sure after we reserve the default and VF rls we'll have something left */
+	if (num_pf_rls < num_vfs + NUM_DEFAULT_RLS)
 		return 0;
-	}
 
 	/* subtract rls necessary for VFs and one default one for the PF */
 	num_pf_rls -= num_vfs + NUM_DEFAULT_RLS;
@@ -1755,17 +2018,40 @@ u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn)
 	return num_pf_rls;
 }
 
+static u16 ecore_init_qm_get_num_rls(struct ecore_hwfn *p_hwfn)
+{
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+	u16 num_rls = 0;
+
+	num_rls += (!!(PQ_FLAGS_RLS & pq_flags)) *
+		   ecore_init_qm_get_num_pf_rls(p_hwfn);
+
+	/* RL for each VF L2 PQ */
+	num_rls += (!!(PQ_FLAGS_VFS & pq_flags)) *
+		   ecore_init_qm_get_num_active_vfs(p_hwfn);
+
+	/* RL for each VF RDMA PQ */
+	num_rls += (!!(PQ_FLAGS_VFR & pq_flags)) *
+		   ecore_init_qm_get_num_active_vfs(p_hwfn);
+
+	/* RL for VF RDMA single PQ */
+	num_rls += (!!(PQ_FLAGS_VSR & pq_flags));
+
+	return num_rls;
+}
+
 u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn)
 {
 	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
 
-	/* all pqs share the same vport (hence the 1 below), except for vfs
-	 * and pf_rl pqs
-	 */
-	return (!!(PQ_FLAGS_RLS & pq_flags)) *
-		ecore_init_qm_get_num_pf_rls(p_hwfn) +
-	       (!!(PQ_FLAGS_VFS & pq_flags)) *
-		ecore_init_qm_get_num_vfs(p_hwfn) + 1;
+	/* all pqs share the same vport (hence the 1 below), except for vfs and pf_rl pqs */
+	return (!!(PQ_FLAGS_RLS & pq_flags)) * ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) * ecore_init_qm_get_num_vfs(p_hwfn) + 1;
+}
+
+static u8 ecore_init_qm_get_group_count(struct ecore_hwfn *p_hwfn)
+{
+	return p_hwfn->qm_info.offload_group_count;
 }
 
 /* calc amount of PQs according to the requested flags */
@@ -1773,16 +2059,15 @@ u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn)
 {
 	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
 
-	return (!!(PQ_FLAGS_RLS & pq_flags)) *
-		ecore_init_qm_get_num_pf_rls(p_hwfn) +
-	       (!!(PQ_FLAGS_MCOS & pq_flags)) *
-		ecore_init_qm_get_num_tcs(p_hwfn) +
+	return (!!(PQ_FLAGS_RLS & pq_flags)) * ecore_init_qm_get_num_pf_rls(p_hwfn) +
+	       (!!(PQ_FLAGS_MCOS & pq_flags)) * ecore_init_qm_get_num_tcs(p_hwfn) +
 	       (!!(PQ_FLAGS_LB & pq_flags)) +
 	       (!!(PQ_FLAGS_OOO & pq_flags)) +
 	       (!!(PQ_FLAGS_ACK & pq_flags)) +
-	       (!!(PQ_FLAGS_OFLD & pq_flags)) +
-	       (!!(PQ_FLAGS_VFS & pq_flags)) *
-		ecore_init_qm_get_num_vfs(p_hwfn);
+	       (!!(PQ_FLAGS_OFLD & pq_flags)) * ecore_init_qm_get_num_mtc_pqs(p_hwfn) +
+	       (!!(PQ_FLAGS_GRP & pq_flags)) * OFLD_GRP_SIZE +
+	       (!!(PQ_FLAGS_LLT & pq_flags)) * ecore_init_qm_get_num_mtc_pqs(p_hwfn) +
+	       (!!(PQ_FLAGS_VFS & pq_flags)) * ecore_init_qm_get_num_vfs_pqs(p_hwfn);
 }
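
As a worked example of the accounting (hypothetical numbers): an Ethernet
PF with 4 TCs, 8 active VFs and no pacing has pq_flags = PQ_FLAGS_LB |
PQ_FLAGS_MCOS | PQ_FLAGS_VFS, so:

	num_pqs = 1	/* PQ_FLAGS_LB:   pure loopback PQ              */
		+ 4	/* PQ_FLAGS_MCOS: one PQ per TC                 */
		+ 8;	/* PQ_FLAGS_VFS:  one L2 PQ per VF (no VFR/VSR) */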
 
 /* initialize the top level QM params */
@@ -1793,7 +2078,8 @@ static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 
 	/* pq and vport bases for this PF */
 	qm_info->start_pq = (u16)RESC_START(p_hwfn, ECORE_PQ);
-	qm_info->start_vport = (u8)RESC_START(p_hwfn, ECORE_VPORT);
+	qm_info->start_vport = (u16)RESC_START(p_hwfn, ECORE_VPORT);
+	qm_info->start_rl = (u16)RESC_START(p_hwfn, ECORE_RL);
 
 	/* rate limiting and weighted fair queueing are always enabled */
 	qm_info->vport_rl_en = 1;
@@ -1803,15 +2089,11 @@ static void ecore_init_qm_params(struct ecore_hwfn *p_hwfn)
 	four_port = p_hwfn->p_dev->num_ports_in_engine == MAX_NUM_PORTS_K2;
 
 	/* in AH 4 port we have fewer TCs per port */
-	qm_info->max_phys_tcs_per_port = four_port ? NUM_PHYS_TCS_4PORT_K2 :
-						     NUM_OF_PHYS_TCS;
+	qm_info->max_phys_tcs_per_port = four_port ? NUM_PHYS_TCS_4PORT_K2 : NUM_OF_PHYS_TCS;
 
-	/* unless MFW indicated otherwise, ooo_tc should be 3 for AH 4 port and
-	 * 4 otherwise
-	 */
+	/* unless MFW indicated otherwise, ooo_tc should be 3 for AH 4 port and 4 otherwise */
 	if (!qm_info->ooo_tc)
-		qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC :
-					      DCBX_TCP_OOO_TC;
+		qm_info->ooo_tc = four_port ? DCBX_TCP_OOO_K2_4PORT_TC : DCBX_TCP_OOO_TC;
 }
 
 /* initialize qm vport params */
@@ -1834,7 +2116,7 @@ static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
 
 	/* indicate how ooo and high pri traffic is dealt with */
 	active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ?
-		ACTIVE_TCS_BMAP_4PORT_K2 : ACTIVE_TCS_BMAP;
+		ACTIVE_TCS_BMAP_4PORT_K2 : ACTIVE_TCS_BMAP(p_hwfn);
 
 	for (i = 0; i < num_ports; i++) {
 		struct init_qm_port_params *p_qm_port =
@@ -1862,11 +2144,14 @@ static void ecore_init_qm_reset_params(struct ecore_hwfn *p_hwfn)
 
 	qm_info->num_pqs = 0;
 	qm_info->num_vports = 0;
+	qm_info->num_rls = 0;
 	qm_info->num_pf_rls = 0;
 	qm_info->num_vf_pqs = 0;
 	qm_info->first_vf_pq = 0;
 	qm_info->first_mcos_pq = 0;
 	qm_info->first_rl_pq = 0;
+	qm_info->single_vf_rdma_pq = 0;
+	qm_info->pq_overflow = false;
 }
 
 static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn)
@@ -1876,18 +2161,13 @@ static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn)
 	qm_info->num_vports++;
 
 	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
-		DP_ERR(p_hwfn,
-		       "vport overflow! qm_info->num_vports %d,"
-		       " qm_init_get_num_vports() %d\n",
-		       qm_info->num_vports,
-		       ecore_init_qm_get_num_vports(p_hwfn));
+		DP_ERR(p_hwfn, "vport overflow! qm_info->num_vports %d, qm_init_get_num_vports() %d\n",
+				qm_info->num_vports, ecore_init_qm_get_num_vports(p_hwfn));
 }
 
 /* initialize a single pq and manage qm_info resources accounting.
- * The pq_init_flags param determines whether the PQ is rate limited
- * (for VF or PF)
- * and whether a new vport is allocated to the pq or not (i.e. vport will be
- * shared)
+ * The pq_init_flags param determines whether the PQ is rate limited (for VF or PF)
+ * and whether a new vport is allocated to the pq or not (i.e. vport will be shared)
  */
 
 /* flags for pq init */
@@ -1898,65 +2178,108 @@ static void ecore_init_qm_advance_vport(struct ecore_hwfn *p_hwfn)
 /* defines for pq init */
 #define PQ_INIT_DEFAULT_WRR_GROUP	1
 #define PQ_INIT_DEFAULT_TC		0
-#define PQ_INIT_OFLD_TC			(p_hwfn->hw_info.offload_tc)
 
-static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn,
-			     struct ecore_qm_info *qm_info,
-			     u8 tc, u32 pq_init_flags)
+void ecore_hw_info_set_offload_tc(struct ecore_hw_info *p_info, u8 tc)
 {
-	u16 pq_idx = qm_info->num_pqs, max_pq =
-					ecore_init_qm_get_num_pqs(p_hwfn);
+	p_info->offload_tc = tc;
+	p_info->offload_tc_set = true;
+}
 
-	if (pq_idx > max_pq)
-		DP_ERR(p_hwfn,
-		       "pq overflow! pq %d, max pq %d\n", pq_idx, max_pq);
+static bool ecore_is_offload_tc_set(struct ecore_hwfn *p_hwfn)
+{
+	return p_hwfn->hw_info.offload_tc_set;
+}
+
+u8 ecore_get_offload_tc(struct ecore_hwfn *p_hwfn)
+{
+	if (ecore_is_offload_tc_set(p_hwfn))
+		return p_hwfn->hw_info.offload_tc;
+
+	return PQ_INIT_DEFAULT_TC;
+}
+
+static void ecore_init_qm_pq_port(struct ecore_hwfn *p_hwfn,
+				  struct ecore_qm_info *qm_info,
+				  u8 tc, u32 pq_init_flags, u8 port)
+{
+	u16 pq_idx = qm_info->num_pqs, max_pq = ecore_init_qm_get_num_pqs(p_hwfn);
+	u16 num_pf_pqs;
+
+	if (pq_idx > max_pq) {
+		qm_info->pq_overflow = true;
+		DP_ERR(p_hwfn, "pq overflow! pq %d, max pq %d\n", pq_idx, max_pq);
+	}
 
 	/* init pq params */
-	qm_info->qm_pq_params[pq_idx].port_id = p_hwfn->port_id;
-	qm_info->qm_pq_params[pq_idx].vport_id = qm_info->start_vport +
-						 qm_info->num_vports;
+	qm_info->qm_pq_params[pq_idx].port_id = port;
+	qm_info->qm_pq_params[pq_idx].vport_id = qm_info->start_vport + qm_info->num_vports;
 	qm_info->qm_pq_params[pq_idx].tc_id = tc;
 	qm_info->qm_pq_params[pq_idx].wrr_group = PQ_INIT_DEFAULT_WRR_GROUP;
-	qm_info->qm_pq_params[pq_idx].rl_valid =
-		(pq_init_flags & PQ_INIT_PF_RL ||
-		 pq_init_flags & PQ_INIT_VF_RL);
 
-	/* The "rl_id" is set as the "vport_id" */
-	qm_info->qm_pq_params[pq_idx].rl_id =
-		qm_info->qm_pq_params[pq_idx].vport_id;
+	if (pq_init_flags & (PQ_INIT_PF_RL | PQ_INIT_VF_RL)) {
+		qm_info->qm_pq_params[pq_idx].rl_valid = 1;
+		qm_info->qm_pq_params[pq_idx].rl_id =
+			qm_info->start_rl + qm_info->num_rls++;
+	}
 
 	/* qm params accounting */
 	qm_info->num_pqs++;
+	if (pq_init_flags & PQ_INIT_VF_RL) {
+		qm_info->num_vf_pqs++;
+	} else {
+		num_pf_pqs = qm_info->num_pqs - qm_info->num_vf_pqs;
+		if (qm_info->ilt_pf_pqs && num_pf_pqs > qm_info->ilt_pf_pqs) {
+			qm_info->pq_overflow = true;
+			DP_ERR(p_hwfn,
+			       "ilt overflow! num_pf_pqs %d, qm_info->ilt_pf_pqs %d\n",
+			       num_pf_pqs, qm_info->ilt_pf_pqs);
+		}
+	}
+
 	if (!(pq_init_flags & PQ_INIT_SHARE_VPORT))
 		qm_info->num_vports++;
 
 	if (pq_init_flags & PQ_INIT_PF_RL)
 		qm_info->num_pf_rls++;
 
-	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn))
-		DP_ERR(p_hwfn,
-		       "vport overflow! qm_info->num_vports %d,"
-		       " qm_init_get_num_vports() %d\n",
-		       qm_info->num_vports,
-		       ecore_init_qm_get_num_vports(p_hwfn));
+	if (qm_info->num_vports > ecore_init_qm_get_num_vports(p_hwfn)) {
+		qm_info->pq_overflow = true;
+		DP_ERR(p_hwfn, "vport overflow! qm_info->num_vports %d, qm_init_get_num_vports() %d\n",
+				qm_info->num_vports, ecore_init_qm_get_num_vports(p_hwfn));
+	}
 
-	if (qm_info->num_pf_rls > ecore_init_qm_get_num_pf_rls(p_hwfn))
-		DP_ERR(p_hwfn, "rl overflow! qm_info->num_pf_rls %d,"
-		       " qm_init_get_num_pf_rls() %d\n",
-		       qm_info->num_pf_rls,
-		       ecore_init_qm_get_num_pf_rls(p_hwfn));
+	if (qm_info->num_pf_rls > ecore_init_qm_get_num_pf_rls(p_hwfn)) {
+		qm_info->pq_overflow = true;
+		DP_ERR(p_hwfn, "rl overflow! qm_info->num_pf_rls %d, qm_init_get_num_pf_rls() %d\n",
+		       qm_info->num_pf_rls, ecore_init_qm_get_num_pf_rls(p_hwfn));
+	}
+}
+
+/* init one qm pq, assume port of the PF */
+static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn,
+			     struct ecore_qm_info *qm_info,
+			     u8 tc, u32 pq_init_flags)
+{
+	ecore_init_qm_pq_port(p_hwfn, qm_info, tc, pq_init_flags, p_hwfn->port_id);
 }
 
 /* get pq index according to PQ_FLAGS */
 static u16 *ecore_init_qm_get_idx_from_flags(struct ecore_hwfn *p_hwfn,
-					     u32 pq_flags)
+					     unsigned long pq_flags)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 
 	/* Can't have multiple flags set here */
-	if (OSAL_BITMAP_WEIGHT((unsigned long *)&pq_flags,
-				sizeof(pq_flags)) > 1)
+	if (OSAL_BITMAP_WEIGHT(&pq_flags,
+			       sizeof(pq_flags) * BITS_PER_BYTE) > 1) {
+		DP_ERR(p_hwfn, "requested multiple pq flags 0x%lx\n", pq_flags);
+		goto err;
+	}
+
+	if (!(ecore_get_pq_flags(p_hwfn) & pq_flags)) {
+		DP_ERR(p_hwfn, "pq flag 0x%lx is not set\n", pq_flags);
 		goto err;
+	}
 
 	switch (pq_flags) {
 	case PQ_FLAGS_RLS:
@@ -1970,16 +2293,20 @@ static u16 *ecore_init_qm_get_idx_from_flags(struct ecore_hwfn *p_hwfn,
 	case PQ_FLAGS_ACK:
 		return &qm_info->pure_ack_pq;
 	case PQ_FLAGS_OFLD:
-		return &qm_info->offload_pq;
+		return &qm_info->first_ofld_pq;
+	case PQ_FLAGS_LLT:
+		return &qm_info->first_llt_pq;
 	case PQ_FLAGS_VFS:
 		return &qm_info->first_vf_pq;
+	case PQ_FLAGS_GRP:
+		return &qm_info->first_ofld_grp_pq;
+	case PQ_FLAGS_VSR:
+		return &qm_info->single_vf_rdma_pq;
 	default:
 		goto err;
 	}
-
 err:
-	DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags);
-	return OSAL_NULL;
+	return &qm_info->start_pq;
 }
 
 /* save pq index in qm info */
@@ -1991,59 +2318,285 @@ static void ecore_init_qm_set_idx(struct ecore_hwfn *p_hwfn,
 	*base_pq_idx = p_hwfn->qm_info.start_pq + pq_val;
 }
 
+static u16 ecore_qm_get_start_pq(struct ecore_hwfn *p_hwfn)
+{
+	u16 start_pq;
+
+	OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock);
+	start_pq = p_hwfn->qm_info.start_pq;
+	OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock);
+
+	return start_pq;
+}
+
 /* get tx pq index, with the PQ TX base already set (ready for context init) */
 u16 ecore_get_cm_pq_idx(struct ecore_hwfn *p_hwfn, u32 pq_flags)
 {
-	u16 *base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+	u16 *base_pq_idx;
+	u16 pq_idx;
+
+	OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock);
+	base_pq_idx = ecore_init_qm_get_idx_from_flags(p_hwfn, pq_flags);
+	pq_idx = *base_pq_idx + CM_TX_PQ_BASE;
+	OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock);
+
+	return pq_idx;
+}
+
+u16 ecore_get_cm_pq_idx_grp(struct ecore_hwfn *p_hwfn, u8 idx)
+{
+	u16 pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_GRP);
+	u8 max_idx = ecore_init_qm_get_group_count(p_hwfn);
 
-	return *base_pq_idx + CM_TX_PQ_BASE;
+	if (max_idx == 0) {
+		DP_ERR(p_hwfn, "pq with flag 0x%x does not exist\n",
+		       PQ_FLAGS_GRP);
+		return ecore_qm_get_start_pq(p_hwfn);
+	}
+
+	if (idx > max_idx)
+		DP_ERR(p_hwfn, "idx %d must be smaller than %d\n", idx, max_idx);
+
+	return pq_idx + (idx % max_idx);
 }
 
 u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc)
 {
+	u16 pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS);
 	u8 max_tc = ecore_init_qm_get_num_tcs(p_hwfn);
 
+	if (max_tc == 0) {
+		DP_ERR(p_hwfn, "pq with flag 0x%x does not exist\n",
+		       PQ_FLAGS_MCOS);
+		return ecore_qm_get_start_pq(p_hwfn);
+	}
+
 	if (tc > max_tc)
 		DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);
 
-	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc);
+	return pq_idx + (tc % max_tc);
+}
+
+static u8 ecore_qm_get_pqs_per_vf(struct ecore_hwfn *p_hwfn)
+{
+	u8 pqs_per_vf;
+	u32 pq_flags;
+
+	/* When VFR is set, there is a pair of PQs per VF. If VSR is set,
+	 * no additional action required in computing the per VF PQ.
+	 */
+	OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock);
+	pq_flags = ecore_get_pq_flags(p_hwfn);
+	pqs_per_vf = (PQ_FLAGS_VFR & pq_flags) ? 2 : 1;
+	OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock);
+
+	return pqs_per_vf;
 }
 
 u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf)
 {
-	u16 max_vf = ecore_init_qm_get_num_vfs(p_hwfn);
+	u16 pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS);
+	u16 max_vf = ecore_init_qm_get_num_active_vfs(p_hwfn);
+	u8 pqs_per_vf;
+
+	if (max_vf == 0) {
+		DP_ERR(p_hwfn, "pq with flag 0x%x does not exist\n",
+		       PQ_FLAGS_VFS);
+		return ecore_qm_get_start_pq(p_hwfn);
+	}
 
 	if (vf > max_vf)
 		DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);
 
-	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + (vf % max_vf);
+	pqs_per_vf = ecore_qm_get_pqs_per_vf(p_hwfn);
+
+	return pq_idx + ((vf % max_vf) * pqs_per_vf);
+}
+
+u16 ecore_get_cm_pq_idx_vf_rdma(struct ecore_hwfn *p_hwfn, u16 vf)
+{
+	u32 pq_flags;
+	u16 pq_idx;
+
+	OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock);
+	pq_flags = ecore_get_pq_flags(p_hwfn);
+	OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock);
+
+	/* If VSR is set, dedicated single PQ for VFs RDMA */
+	if (PQ_FLAGS_VSR & pq_flags)
+		pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VSR);
+	else
+		pq_idx = ecore_get_cm_pq_idx_vf(p_hwfn, vf);
+
+	/* If VFR is set, VF's 2nd PQ is for RDMA */
+	if ((PQ_FLAGS_VFR & pq_flags))
+		pq_idx++;
+
+	return pq_idx;
 }
 
 u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl)
 {
+	u16 pq_idx = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS);
 	u16 max_rl = ecore_init_qm_get_num_pf_rls(p_hwfn);
 
-	/* for rate limiters, it is okay to use the modulo behavior - no
-	 * DP_ERR
+	if (max_rl == 0) {
+		DP_ERR(p_hwfn, "pq with flag 0x%x does not exist\n",
+		       PQ_FLAGS_RLS);
+		return ecore_qm_get_start_pq(p_hwfn);
+	}
+
+	/* When an invalid RL index is requested, return the highest
+	 * available RL PQ. "max_rl - 1" is the relative index of the
+	 * last PQ reserved for RLs.
 	 */
-	return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS) + (rl % max_rl);
+	if (rl >= max_rl) {
+		DP_ERR(p_hwfn,
+		       "rl %hu is not a valid rate limiter, returning rl %hu\n",
+		       rl, max_rl - 1);
+		return pq_idx + max_rl - 1;
+	}
+
+	return pq_idx + rl;
 }
 
-u16 ecore_get_qm_vport_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl)
+static u16 ecore_get_qm_pq_from_cm_pq(struct ecore_hwfn *p_hwfn, u16 cm_pq_id)
 {
-	u16 start_pq, pq, qm_pq_idx;
+	u16 start_pq = ecore_qm_get_start_pq(p_hwfn);
 
-	pq = ecore_get_cm_pq_idx_rl(p_hwfn, rl);
-	start_pq = p_hwfn->qm_info.start_pq;
-	qm_pq_idx = pq - start_pq - CM_TX_PQ_BASE;
+	return cm_pq_id - CM_TX_PQ_BASE - start_pq;
+}
+
+static u16 ecore_get_vport_id_from_pq(struct ecore_hwfn *p_hwfn, u16 pq_id)
+{
+	u16 vport_id;
+
+	OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock);
+	vport_id = p_hwfn->qm_info.qm_pq_params[pq_id].vport_id;
+	OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock);
+
+	return vport_id;
+}
+
+static u16 ecore_get_rl_id_from_pq(struct ecore_hwfn *p_hwfn, u16 pq_id)
+{
+	u16 rl_id;
+
+	OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock);
+	rl_id = p_hwfn->qm_info.qm_pq_params[pq_id].rl_id;
+	OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock);
 
-	if (qm_pq_idx > p_hwfn->qm_info.num_pqs) {
+	return rl_id;
+}
+
+u16 ecore_get_pq_vport_id_from_rl(struct ecore_hwfn *p_hwfn, u16 rl)
+{
+	u16 cm_pq_id = ecore_get_cm_pq_idx_rl(p_hwfn, rl);
+	u16 qm_pq_id = ecore_get_qm_pq_from_cm_pq(p_hwfn, cm_pq_id);
+
+	return ecore_get_vport_id_from_pq(p_hwfn, qm_pq_id);
+}
+
+u16 ecore_get_pq_vport_id_from_vf(struct ecore_hwfn *p_hwfn, u16 vf)
+{
+	u16 cm_pq_id = ecore_get_cm_pq_idx_vf(p_hwfn, vf);
+	u16 qm_pq_id = ecore_get_qm_pq_from_cm_pq(p_hwfn, cm_pq_id);
+
+	return ecore_get_vport_id_from_pq(p_hwfn, qm_pq_id);
+}
+
+u16 ecore_get_pq_rl_id_from_rl(struct ecore_hwfn *p_hwfn, u16 rl)
+{
+	u16 cm_pq_id = ecore_get_cm_pq_idx_rl(p_hwfn, rl);
+	u16 qm_pq_id = ecore_get_qm_pq_from_cm_pq(p_hwfn, cm_pq_id);
+
+	return ecore_get_rl_id_from_pq(p_hwfn, qm_pq_id);
+}
+
+u16 ecore_get_pq_rl_id_from_vf(struct ecore_hwfn *p_hwfn, u16 vf)
+{
+	u16 cm_pq_id = ecore_get_cm_pq_idx_vf(p_hwfn, vf);
+	u16 qm_pq_id = ecore_get_qm_pq_from_cm_pq(p_hwfn, cm_pq_id);
+
+	return ecore_get_rl_id_from_pq(p_hwfn, qm_pq_id);
+}
+
+static u16 ecore_get_cm_pq_offset_mtc(struct ecore_hwfn *p_hwfn,
+				      u16 idx, u8 tc)
+{
+	u16 pq_offset = 0, max_pqs;
+	u8 num_ports, num_tcs;
+
+	num_ports = ecore_lag_support(p_hwfn) ? LAG_MAX_PORT_NUM : 1;
+	num_tcs = ecore_init_qm_get_num_mtc_tcs(p_hwfn);
+
+	/* add the port offset */
+	pq_offset += (idx % num_ports) * num_tcs;
+	/* add the tc offset */
+	pq_offset += tc % num_tcs;
+
+	/* Verify that the pq returned is within pqs range */
+	max_pqs = ecore_init_qm_get_num_mtc_pqs(p_hwfn);
+	if (pq_offset >= max_pqs) {
 		DP_ERR(p_hwfn,
-		       "qm_pq_idx %d must be smaller than %d\n",
-			qm_pq_idx, p_hwfn->qm_info.num_pqs);
+		       "pq_offset %d must be smaller than %d (idx %d tc %d)\n",
+		       pq_offset, max_pqs, idx, tc);
+		return 0;
 	}
 
-	return p_hwfn->qm_info.qm_pq_params[qm_pq_idx].vport_id;
+	return pq_offset;
+}
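
The offset is laid out port-major. With LAG supported (num_ports == 2) and
4 MTC TCs (hypothetical values):

	/* pq_offset = (idx % num_ports) * num_tcs + (tc % num_tcs)  */
	/*   idx 0, tc 2 -> (0 * 4) + 2 = 2                          */
	/*   idx 1, tc 2 -> (1 * 4) + 2 = 6                          */
	/* both are < max_pqs = 2 * 4 = 8, so the range check holds  */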
+
+u16 ecore_get_cm_pq_idx_ofld_mtc(struct ecore_hwfn *p_hwfn,
+				  u16 idx, u8 tc)
+{
+	u16 first_ofld_pq, pq_offset;
+
+#ifdef CONFIG_DCQCN
+	if (p_hwfn->p_rdma_info->roce.dcqcn_enabled)
+		return ecore_get_cm_pq_idx_rl(p_hwfn, idx);
+#endif
+
+	first_ofld_pq = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_OFLD);
+	pq_offset = ecore_get_cm_pq_offset_mtc(p_hwfn, idx, tc);
+
+	return first_ofld_pq + pq_offset;
+}
+
+u16 ecore_get_cm_pq_idx_llt_mtc(struct ecore_hwfn *p_hwfn,
+				 u16 idx, u8 tc)
+{
+	u16 first_llt_pq, pq_offset;
+
+#ifdef CONFIG_DCQCN
+	if (p_hwfn->p_rdma_info->roce.dcqcn_enabled)
+		return ecore_get_cm_pq_idx_rl(p_hwfn, idx);
+#endif
+
+	first_llt_pq = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LLT);
+	pq_offset = ecore_get_cm_pq_offset_mtc(p_hwfn, idx, tc);
+
+	return first_llt_pq + pq_offset;
+}
+
+u16 ecore_get_cm_pq_idx_ll2(struct ecore_hwfn *p_hwfn, u8 tc)
+{
+	switch (tc) {
+	case PURE_LB_TC:
+		return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB);
+	case PKT_LB_TC:
+		return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_OOO);
+	default:
+#ifdef CONFIG_DCQCN
+		/* In RoCE, when DCQCN is enabled, there are no OFLD pqs,
+		 * get the first RL pq.
+		 */
+		if (ECORE_IS_ROCE_PERSONALITY(p_hwfn) &&
+		    p_hwfn->p_rdma_info->roce.dcqcn_enabled)
+			return ecore_get_cm_pq_idx_rl(p_hwfn, 0);
+#endif
+		return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_OFLD);
+	}
 }
 
 /* Functions for creating specific types of pqs */
@@ -2077,7 +2630,39 @@ static void ecore_init_qm_pure_ack_pq(struct ecore_hwfn *p_hwfn)
 		return;
 
 	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_ACK, qm_info->num_pqs);
-	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+	ecore_init_qm_pq(p_hwfn, qm_info, ecore_get_offload_tc(p_hwfn),
+			 PQ_INIT_SHARE_VPORT);
+}
+
+static void ecore_init_qm_mtc_pqs(struct ecore_hwfn *p_hwfn)
+{
+	u8 num_tcs = ecore_init_qm_get_num_mtc_tcs(p_hwfn);
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 second_port = p_hwfn->port_id;
+	u8 first_port = p_hwfn->port_id;
+	u8 tc;
+
+	/* if lag is not active, init all pqs with p_hwfn's default port */
+	if (ecore_lag_is_active(p_hwfn)) {
+		first_port = p_hwfn->lag_info.first_port;
+		second_port = p_hwfn->lag_info.second_port;
+	}
+
+	/* override pq's TC if offload TC is set */
+	for (tc = 0; tc < num_tcs; tc++)
+		ecore_init_qm_pq_port(p_hwfn, qm_info,
+				      ecore_is_offload_tc_set(p_hwfn) ?
+				      p_hwfn->hw_info.offload_tc : tc,
+				      PQ_INIT_SHARE_VPORT,
+				      first_port);
+	if (ecore_lag_support(p_hwfn))
+		/* initialize second port's pqs even if lag is not active */
+		for (tc = 0; tc < num_tcs; tc++)
+			ecore_init_qm_pq_port(p_hwfn, qm_info,
+					      ecore_is_offload_tc_set(p_hwfn) ?
+					      p_hwfn->hw_info.offload_tc : tc,
+					      PQ_INIT_SHARE_VPORT,
+					      second_port);
 }
 
 static void ecore_init_qm_offload_pq(struct ecore_hwfn *p_hwfn)
@@ -2088,7 +2673,36 @@ static void ecore_init_qm_offload_pq(struct ecore_hwfn *p_hwfn)
 		return;
 
 	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_OFLD, qm_info->num_pqs);
-	ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC, PQ_INIT_SHARE_VPORT);
+	ecore_init_qm_mtc_pqs(p_hwfn);
+}
+
+static void ecore_init_qm_low_latency_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_LLT))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_LLT, qm_info->num_pqs);
+	ecore_init_qm_mtc_pqs(p_hwfn);
+}
+
+static void ecore_init_qm_offload_pq_group(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 idx;
+
+	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_GRP))
+		return;
+
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_GRP, qm_info->num_pqs);
+
+	/* iterate over offload pqs */
+	for (idx = 0; idx < ecore_init_qm_get_group_count(p_hwfn); idx++) {
+		ecore_init_qm_pq_port(p_hwfn, qm_info, qm_info->offload_group[idx].tc,
+				      PQ_INIT_SHARE_VPORT,
+				      qm_info->offload_group[idx].port);
+	}
 }
 
 static void ecore_init_qm_mcos_pqs(struct ecore_hwfn *p_hwfn)
@@ -2104,34 +2718,76 @@ static void ecore_init_qm_mcos_pqs(struct ecore_hwfn *p_hwfn)
 		ecore_init_qm_pq(p_hwfn, qm_info, tc_idx, PQ_INIT_SHARE_VPORT);
 }
 
+static void ecore_init_qm_vf_single_rdma_pq(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+
+	if (!(pq_flags & PQ_FLAGS_VSR))
+		return;
+
+	/* ecore_init_qm_pq_params() is going to increment vport ID anyway,
+	 * so keep it shared here so we don't waste a vport.
+	 */
+	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_VSR, qm_info->num_pqs);
+	ecore_init_qm_pq(p_hwfn, qm_info, ecore_get_offload_tc(p_hwfn),
+			 PQ_INIT_VF_RL | PQ_INIT_SHARE_VPORT);
+}
+
 static void ecore_init_qm_vf_pqs(struct ecore_hwfn *p_hwfn)
 {
+	u16 vf_idx, num_vfs = ecore_init_qm_get_num_active_vfs(p_hwfn);
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	u16 vf_idx, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
+	u32 pq_flags = ecore_get_pq_flags(p_hwfn);
+	u32 l2_pq_init_flags = PQ_INIT_VF_RL;
 
-	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_VFS))
+	if (!(pq_flags & PQ_FLAGS_VFS))
 		return;
 
+	/* Mark PQ starting VF range */
 	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_VFS, qm_info->num_pqs);
 
-	qm_info->num_vf_pqs = num_vfs;
-	for (vf_idx = 0; vf_idx < num_vfs; vf_idx++)
+	/* If VFR is set, the L2 PQ will share the rate limiter with the RDMA PQ */
+	if (pq_flags & PQ_FLAGS_VFR)
+		l2_pq_init_flags |= PQ_INIT_SHARE_VPORT;
+
+	/* Init the per-VF PQs */
+	for (vf_idx = 0; vf_idx < num_vfs; vf_idx++) {
+		/* Per VF L2 PQ */
 		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_DEFAULT_TC,
-				 PQ_INIT_VF_RL);
+				 l2_pq_init_flags);
+
+		/* Per VF RDMA PQ */
+		if (pq_flags & PQ_FLAGS_VFR)
+			ecore_init_qm_pq(p_hwfn, qm_info,
+					 ecore_get_offload_tc(p_hwfn),
+					 PQ_INIT_VF_RL);
+	}
 }
 
 static void ecore_init_qm_rl_pqs(struct ecore_hwfn *p_hwfn)
 {
 	u16 pf_rls_idx, num_pf_rls = ecore_init_qm_get_num_pf_rls(p_hwfn);
+	struct ecore_lag_info *lag_info = &p_hwfn->lag_info;
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u8 port = p_hwfn->port_id, tc;
 
 	if (!(ecore_get_pq_flags(p_hwfn) & PQ_FLAGS_RLS))
 		return;
 
 	ecore_init_qm_set_idx(p_hwfn, PQ_FLAGS_RLS, qm_info->num_pqs);
-	for (pf_rls_idx = 0; pf_rls_idx < num_pf_rls; pf_rls_idx++)
-		ecore_init_qm_pq(p_hwfn, qm_info, PQ_INIT_OFLD_TC,
-				 PQ_INIT_PF_RL);
+	tc = ecore_get_offload_tc(p_hwfn);
+	for (pf_rls_idx = 0; pf_rls_idx < num_pf_rls; pf_rls_idx++) {
+		/* If LAG is present, assign these PQs to the two ports by index parity */
+		if (lag_info->is_master &&
+		    lag_info->lag_type != ECORE_LAG_TYPE_NONE &&
+		    lag_info->port_num > 0)
+			port = (pf_rls_idx % lag_info->port_num == 0) ?
+			       lag_info->first_port : lag_info->second_port;
+
+		ecore_init_qm_pq_port(p_hwfn, qm_info, tc, PQ_INIT_PF_RL,
+				      port);
+	}
 }
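+
+/* Example of the parity assignment above (a sketch, assuming an active
+ * 2-port LAG on the master): RL PQs with an even pf_rls_idx land on
+ * first_port and odd ones on second_port, spreading rate-limited traffic
+ * across the bond.
+ */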
 
 static void ecore_init_qm_pq_params(struct ecore_hwfn *p_hwfn)
@@ -2154,24 +2810,68 @@ static void ecore_init_qm_pq_params(struct ecore_hwfn *p_hwfn)
 	/* pq for offloaded protocol */
 	ecore_init_qm_offload_pq(p_hwfn);
 
-	/* done sharing vports */
+	/* low latency pq */
+	ecore_init_qm_low_latency_pq(p_hwfn);
+
+	/* per offload group pqs */
+	ecore_init_qm_offload_pq_group(p_hwfn);
+
+	/* Single VF-RDMA PQ, in case there weren't enough for each VF */
+	ecore_init_qm_vf_single_rdma_pq(p_hwfn);
+
+	/* PF done sharing vports, advance vport for first VF.
+	 * Vport ID is incremented in a separate function because we can't
+	 * rely on the last PF PQ to not use PQ_INIT_SHARE_VPORT, which can
+	 * be different in every QM reconfiguration.
+	 */
 	ecore_init_qm_advance_vport(p_hwfn);
 
 	/* pqs for vfs */
 	ecore_init_qm_vf_pqs(p_hwfn);
 }
 
-/* compare values of getters against resources amounts */
-static enum _ecore_status_t ecore_init_qm_sanity(struct ecore_hwfn *p_hwfn)
+/* Finds the optimal features configuration to maximize PQ utilization */
+static enum _ecore_status_t ecore_init_qm_features(struct ecore_hwfn *p_hwfn)
 {
-	if (ecore_init_qm_get_num_vports(p_hwfn) >
-	    RESC_NUM(p_hwfn, ECORE_VPORT)) {
+	if (ecore_init_qm_get_num_vports(p_hwfn) > RESC_NUM(p_hwfn, ECORE_VPORT)) {
 		DP_ERR(p_hwfn, "requested amount of vports exceeds resource\n");
 		return ECORE_INVAL;
 	}
 
-	if (ecore_init_qm_get_num_pqs(p_hwfn) > RESC_NUM(p_hwfn, ECORE_PQ)) {
-		DP_ERR(p_hwfn, "requested amount of pqs exceeds resource\n");
+	if (ecore_init_qm_get_num_pf_rls(p_hwfn) == 0) {
+		if (IS_ECORE_PACING(p_hwfn)) {
+			DP_ERR(p_hwfn, "No rate limiters available for PF\n");
+			return ECORE_INVAL;
+		}
+	}
+
+	/* For VF RDMA try to provide 2 PQs (separate PQ for RDMA) per VF */
+	if (ECORE_IS_RDMA_PERSONALITY(p_hwfn) && ECORE_IS_VF_RDMA(p_hwfn) &&
+	    ecore_init_qm_get_num_active_vfs(p_hwfn))
+		p_hwfn->qm_info.vf_rdma_en = true;
+
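+	/* Trim features until the request fits: per-VF RDMA PQs are given up
+	 * first (falling back to the single VF-RDMA PQ), then multi-TC RoCE;
+	 * only if the request still exceeds the available PQs/RLs do we fail.
+	 */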
+	while (ecore_init_qm_get_num_pqs(p_hwfn) > RESC_NUM(p_hwfn, ECORE_PQ) ||
+	       ecore_init_qm_get_num_rls(p_hwfn) > RESC_NUM(p_hwfn, ECORE_RL)) {
+		if (IS_ECORE_QM_VF_RDMA(p_hwfn)) {
+			p_hwfn->qm_info.vf_rdma_en = false;
+			DP_NOTICE(p_hwfn, false,
+				  "PQ per RDMA VF was disabled to reduce the requested amount of PQs/RLs. A single PQ for all RDMA VFs will be used\n");
+			continue;
+		}
+
+		if (IS_ECORE_MULTI_TC_ROCE(p_hwfn)) {
+			p_hwfn->hw_info.multi_tc_roce_en = false;
+			DP_NOTICE(p_hwfn, false,
+				  "Multi-TC RoCE was disabled to reduce the requested amount of PQs/RLs\n");
+			continue;
+		}
+
+		DP_ERR(p_hwfn,
+		       "Requested amount: %d pqs %d rls, Actual amount: %d pqs %d rls\n",
+		       ecore_init_qm_get_num_pqs(p_hwfn),
+		       ecore_init_qm_get_num_rls(p_hwfn),
+		       RESC_NUM(p_hwfn, ECORE_PQ),
+		       RESC_NUM(p_hwfn, ECORE_RL));
 		return ECORE_INVAL;
 	}
 
@@ -2189,35 +2889,38 @@ static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
 	struct init_qm_pq_params *pq;
 	int i, tc;
 
+	if (qm_info->pq_overflow)
+		return;
+
 	/* top level params */
-	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-		   "qm init top level params: start_pq %d, start_vport %d,"
-		   " pure_lb_pq %d, offload_pq %d, pure_ack_pq %d\n",
-		   qm_info->start_pq, qm_info->start_vport, qm_info->pure_lb_pq,
-		   qm_info->offload_pq, qm_info->pure_ack_pq);
-	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-		   "ooo_pq %d, first_vf_pq %d, num_pqs %d, num_vf_pqs %d,"
-		   " num_vports %d, max_phys_tcs_per_port %d\n",
-		   qm_info->ooo_pq, qm_info->first_vf_pq, qm_info->num_pqs,
-		   qm_info->num_vf_pqs, qm_info->num_vports,
-		   qm_info->max_phys_tcs_per_port);
-	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-		   "pf_rl_en %d, pf_wfq_en %d, vport_rl_en %d, vport_wfq_en %d,"
-		   " pf_wfq %d, pf_rl %d, num_pf_rls %d, pq_flags %x\n",
-		   qm_info->pf_rl_en, qm_info->pf_wfq_en, qm_info->vport_rl_en,
-		   qm_info->vport_wfq_en, qm_info->pf_wfq, qm_info->pf_rl,
-		   qm_info->num_pf_rls, ecore_get_pq_flags(p_hwfn));
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "qm init params: pq_flags 0x%x, num_pqs %d, num_vf_pqs %d, start_pq %d\n",
+		   ecore_get_pq_flags(p_hwfn), qm_info->num_pqs,
+		   qm_info->num_vf_pqs, qm_info->start_pq);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "qm init params: pf_rl_en %d, pf_wfq_en %d, vport_rl_en %d, vport_wfq_en %d\n",
+		   qm_info->pf_rl_en, qm_info->pf_wfq_en,
+		   qm_info->vport_rl_en, qm_info->vport_wfq_en);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "qm init params: num_vports %d, start_vport %d, num_rls %d, num_pf_rls %d, start_rl %d, pf_rl %d\n",
+		   qm_info->num_vports, qm_info->start_vport,
+		   qm_info->num_rls, qm_info->num_pf_rls,
+		   qm_info->start_rl, qm_info->pf_rl);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "qm init params: pure_lb_pq %d, ooo_pq %d, pure_ack_pq %d, first_ofld_pq %d, first_llt_pq %d\n",
+		   qm_info->pure_lb_pq, qm_info->ooo_pq, qm_info->pure_ack_pq,
+		   qm_info->first_ofld_pq, qm_info->first_llt_pq);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "qm init params: single_vf_rdma_pq %d, first_vf_pq %d, max_phys_tcs_per_port %d, pf_wfq %d\n",
+		   qm_info->single_vf_rdma_pq, qm_info->first_vf_pq,
+		   qm_info->max_phys_tcs_per_port, qm_info->pf_wfq);
 
 	/* port table */
 	for (i = 0; i < p_hwfn->p_dev->num_ports_in_engine; i++) {
 		port = &qm_info->qm_port_params[i];
-		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-			   "port idx %d, active %d, active_phys_tcs %d,"
-			   " num_pbf_cmd_lines %d, num_btb_blocks %d,"
-			   " reserved %d\n",
-			   i, port->active, port->active_phys_tcs,
-			   port->num_pbf_cmd_lines, port->num_btb_blocks,
-			   port->reserved);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "port idx %d, active %d, active_phys_tcs %d, num_pbf_cmd_lines %d, num_btb_blocks %d, reserved %d\n",
+			   i, port->active, port->active_phys_tcs, port->num_pbf_cmd_lines,
+			   port->num_btb_blocks, port->reserved);
 	}
 
 	/* vport table */
@@ -2226,8 +2929,7 @@ static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "vport idx %d, wfq %d, first_tx_pq_id [ ",
 			   qm_info->start_vport + i, vport->wfq);
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
-			DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ",
-				   vport->first_tx_pq_id[tc]);
+			DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ", vport->first_tx_pq_id[tc]);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "]\n");
 	}
 
@@ -2270,19 +2972,66 @@ static void ecore_init_qm_info(struct ecore_hwfn *p_hwfn)
  * 4. activate init tool in QM_PF stage
  * 5. send an sdm_qm_cmd through rbc interface to release the QM
  */
-enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
-				     struct ecore_ptt *p_ptt)
+static enum _ecore_status_t __ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
+					      bool b_can_sleep)
 {
-	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	bool b_rc;
+	struct ecore_resc_unlock_params resc_unlock_params;
+	struct ecore_resc_lock_params resc_lock_params;
+	bool b_rc, b_mfw_unlock = true;
+	struct ecore_qm_info *qm_info;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	/* multiple flows can issue qm reconf. Need to lock */
+	qm_info = &p_hwfn->qm_info;
+
+	/* Obtain the MFW resource lock to sync with PFs whose driver
+	 * instances are not covered by the static global qm_lock
+	 * (monolithic, dpdk, PDA).
+	 */
+	ecore_mcp_resc_lock_default_init(&resc_lock_params, &resc_unlock_params,
+					 ECORE_RESC_LOCK_QM_RECONF, false);
+	resc_lock_params.sleep_b4_retry = b_can_sleep;
+	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params);
+
+	/* Abort only if the lock is genuinely stuck. If the MFW does not
+	 * support the feature, or merely took too long to acquire the lock,
+	 * we soldier on.
+	 */
+	if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL && rc != ECORE_TIMEOUT) {
+		DP_ERR(p_hwfn,
+		       "QM reconf MFW lock is stuck. Failing reconf flow\n");
+		return ECORE_INVAL;
+	}
+
+	/* If the MFW doesn't support the lock, there is no need to unlock.
+	 * Trying would be harmless, but we would have to tweak the rc value
+	 * for ECORE_NOTIMPL, so it is nicer to avoid it.
+	 */
+	if (rc == ECORE_NOTIMPL)
+		b_mfw_unlock = false;
+
+	/* Multiple hwfn flows can issue qm reconf. Need to lock between hwfn
+	 * flows.
+	 */
 	OSAL_SPIN_LOCK(&qm_lock);
 
+	/* qm_info is invalid while this lock is taken */
+	OSAL_SPIN_LOCK(&p_hwfn->qm_info.qm_info_lock);
+
+	rc = ecore_init_qm_features(p_hwfn);
+	if (rc != ECORE_SUCCESS) {
+		OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock);
+		goto unlock;
+	}
+
 	/* initialize ecore's qm data structure */
 	ecore_init_qm_info(p_hwfn);
 
+	OSAL_SPIN_UNLOCK(&p_hwfn->qm_info.qm_info_lock);
+
+	if (qm_info->pq_overflow) {
+		rc = ECORE_INVAL;
+		goto unlock;
+	}
+
 	/* stop PF's qm queues */
 	b_rc = ecore_send_qm_stop_cmd(p_hwfn, p_ptt, false, true,
 				      qm_info->start_pq, qm_info->num_pqs);
@@ -2291,9 +3040,6 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 		goto unlock;
 	}
 
-	/* clear the QM_PF runtime phase leftovers from previous init */
-	ecore_init_clear_rt_data(p_hwfn);
-
 	/* prepare QM portion of runtime array */
 	ecore_qm_init_pf(p_hwfn, p_ptt, false);
 
@@ -2310,39 +3056,66 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 unlock:
 	OSAL_SPIN_UNLOCK(&qm_lock);
 
+	if (b_mfw_unlock)
+		rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt, &resc_unlock_params);
+
 	return rc;
 }
 
+enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt)
+{
+	return __ecore_qm_reconf(p_hwfn, p_ptt, true);
+}
+
+enum _ecore_status_t ecore_qm_reconf_intr(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	return __ecore_qm_reconf(p_hwfn, p_ptt, false);
+}
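+
+/* The two wrappers above differ only in whether the MFW resource lock may
+ * sleep between acquisition retries: ecore_qm_reconf() is meant for process
+ * context, while ecore_qm_reconf_intr() busy-waits and is therefore usable
+ * from non-sleepable contexts such as the LAG link-change flow below.
+ */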
+
 static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
+	u16 max_pqs_num, max_vports_num;
 	enum _ecore_status_t rc;
 
-	rc = ecore_init_qm_sanity(p_hwfn);
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	rc = OSAL_SPIN_LOCK_ALLOC(p_hwfn, &qm_info->qm_info_lock,
+				  "qm_info_lock");
+	if (rc)
+		goto alloc_err;
+#endif
+	OSAL_SPIN_LOCK_INIT(&qm_info->qm_info_lock);
+
+	rc = ecore_init_qm_features(p_hwfn);
 	if (rc != ECORE_SUCCESS)
 		goto alloc_err;
 
+	max_pqs_num = (u16)RESC_NUM(p_hwfn, ECORE_PQ);
+	max_vports_num = (u16)RESC_NUM(p_hwfn, ECORE_VPORT);
+
 	qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 					    sizeof(struct init_qm_pq_params) *
-					    ecore_init_qm_get_num_pqs(p_hwfn));
+					    max_pqs_num);
 	if (!qm_info->qm_pq_params)
 		goto alloc_err;
 
 	qm_info->qm_vport_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-				       sizeof(struct init_qm_vport_params) *
-				       ecore_init_qm_get_num_vports(p_hwfn));
+					       sizeof(struct init_qm_vport_params) *
+					       max_vports_num);
 	if (!qm_info->qm_vport_params)
 		goto alloc_err;
 
 	qm_info->qm_port_params = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-				      sizeof(struct init_qm_port_params) *
-				      p_hwfn->p_dev->num_ports_in_engine);
+					      sizeof(struct init_qm_port_params) *
+					      p_hwfn->p_dev->num_ports_in_engine);
 	if (!qm_info->qm_port_params)
 		goto alloc_err;
 
 	qm_info->wfq_data = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
 					sizeof(struct ecore_wfq_data) *
-					ecore_init_qm_get_num_vports(p_hwfn));
+					max_vports_num);
 	if (!qm_info->wfq_data)
 		goto alloc_err;
 
@@ -2355,25 +3128,256 @@ static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
 }
 /******************** End QM initialization ***************/
 
-enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
+static enum _ecore_status_t ecore_lag_create_slave(struct ecore_hwfn *p_hwfn,
+						   u8 master_pfid)
+{
+	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+	u8 slave_ppfid = 1; /* TODO: Need some sort of resource management function
+			     * to return a free entry
+			     */
+	enum _ecore_status_t rc;
+
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	rc = ecore_llh_map_ppfid_to_pfid(p_hwfn, p_ptt, slave_ppfid,
+					 master_pfid);
+	ecore_ptt_release(p_hwfn, p_ptt);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Protocol filter for RoCE v1 */
+	rc = ecore_llh_add_protocol_filter(p_hwfn->p_dev, slave_ppfid,
+					   ECORE_LLH_FILTER_ETHERTYPE, 0x8915,
+					   ECORE_LLH_DONT_CARE);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Protocol filter for RoCE v2 */
+	return ecore_llh_add_protocol_filter(p_hwfn->p_dev, slave_ppfid,
+					     ECORE_LLH_FILTER_UDP_DEST_PORT,
+					     ECORE_LLH_DONT_CARE, 4791);
+}
+
+static void ecore_lag_destroy_slave(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+	u8 slave_ppfid = 1; /* Need some sort of resource management function
+			     * to return a free entry
+			     */
+
+	/* Protocol filter for RoCE v1 */
+	ecore_llh_remove_protocol_filter(p_hwfn->p_dev, slave_ppfid,
+					 ECORE_LLH_FILTER_ETHERTYPE,
+					 0x8915, ECORE_LLH_DONT_CARE);
+
+	/* Protocol filter for RoCE v2 */
+	ecore_llh_remove_protocol_filter(p_hwfn->p_dev, slave_ppfid,
+					 ECORE_LLH_FILTER_UDP_DEST_PORT,
+					 ECORE_LLH_DONT_CARE, 4791);
+
+	if (p_ptt) {
+		ecore_llh_map_ppfid_to_pfid(p_hwfn, p_ptt, slave_ppfid,
+					    p_hwfn->rel_pf_id);
+		ecore_ptt_release(p_hwfn, p_ptt);
+	}
+}
+
+/* Map ports:
+ *      port 0/2 - 0/2
+ *      port 1/3 - 1/3
+ * If port 0/2 is down, map both to port 1/3, if port 1/3 is down, map both to
+ * port 0/2, and if both are down, it doesn't really matter.
+ */
+static void ecore_lag_map_ports(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_lag_info *lag_info = &p_hwfn->lag_info;
+
+	/* for now support only 2 ports in the bond */
+	if (lag_info->master_pf == 0) {
+		lag_info->first_port = (lag_info->active_ports & (1 << 0)) ? 0 : 1;
+		lag_info->second_port = (lag_info->active_ports & (1 << 1)) ? 1 : 0;
+	} else if (lag_info->master_pf == 2) {
+		lag_info->first_port = (lag_info->active_ports & (1 << 2)) ? 2 : 3;
+		lag_info->second_port = (lag_info->active_ports & (1 << 3)) ? 3 : 2;
+	}
+	lag_info->port_num = LAG_MAX_PORT_NUM;
+}
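+
+/* Worked example (a sketch): with master_pf == 0 and only port 1 up
+ * (active_ports == 0x2), both first_port and second_port resolve to 1, so
+ * every per-port PQ lands on the surviving port.
+ */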
+
+/* The following function strongly assumes two ports only */
+static enum _ecore_status_t ecore_lag_create_master(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+	enum _ecore_status_t rc;
+
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	ecore_lag_map_ports(p_hwfn);
+	rc = ecore_qm_reconf_intr(p_hwfn, p_ptt);
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
+/* The following function strongly assumes two ports only */
+static enum _ecore_status_t ecore_lag_destroy_master(struct ecore_hwfn *p_hwfn)
 {
+	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+	enum _ecore_status_t rc;
+
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	p_hwfn->qm_info.offload_group_count = 0;
+
+	rc = ecore_qm_reconf_intr(p_hwfn, p_ptt);
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_lag_create(struct ecore_dev *dev,
+				      enum ecore_lag_type lag_type,
+				      void (*link_change_cb)(void *cxt),
+				      void *cxt,
+				      u8 active_ports)
+{
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(dev);
+	u8 master_pfid = p_hwfn->abs_pf_id < 2 ? 0 : 2;
+
+	if (!ecore_lag_support(p_hwfn)) {
+		DP_NOTICE(p_hwfn, false, "RDMA bonding will not be configured - only supported on AH devices on default mode\n");
+		return ECORE_INVAL;
+	}
+
+	/* TODO: Check Supported MFW */
+	p_hwfn->lag_info.lag_type = lag_type;
+	p_hwfn->lag_info.link_change_cb = link_change_cb;
+	p_hwfn->lag_info.cxt = cxt;
+	p_hwfn->lag_info.active_ports = active_ports;
+	p_hwfn->lag_info.is_master = p_hwfn->abs_pf_id == master_pfid;
+	p_hwfn->lag_info.master_pf = master_pfid;
+
+	/* Configure RX for LAG */
+	if (p_hwfn->lag_info.is_master)
+		return ecore_lag_create_master(p_hwfn);
+
+	return ecore_lag_create_slave(p_hwfn, master_pfid);
+}
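+
+/* Typical usage (a sketch; my_link_change_cb and my_ctx are hypothetical
+ * caller-defined names, lag_type is any ecore_lag_type other than
+ * ECORE_LAG_TYPE_NONE, and 0x3 marks both ports active):
+ *
+ *	rc = ecore_lag_create(dev, lag_type, my_link_change_cb, my_ctx, 0x3);
+ *
+ * followed by ecore_lag_modify() on link events and ecore_lag_destroy() on
+ * teardown.
+ */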
+
+/* Modify the link state of a given port */
+enum _ecore_status_t ecore_lag_modify(struct ecore_dev *dev,
+				      u8 port_id,
+				      u8 link_active)
+{
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(dev);
+	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+	struct ecore_lag_info *lag_info = &p_hwfn->lag_info;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	enum dbg_status debug_status = DBG_STATUS_OK;
-	int i;
+	unsigned long active_ports;
+	u8 curr_active;
 
-	if (IS_VF(p_dev)) {
-		for_each_hwfn(p_dev, i) {
-			rc = ecore_l2_alloc(&p_dev->hwfns[i]);
-			if (rc != ECORE_SUCCESS)
-				return rc;
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Active ports changed before %x link active %x port_id=%d\n",
+		   lag_info->active_ports, link_active, port_id);
+
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	active_ports = lag_info->active_ports;
+	curr_active = !!OSAL_TEST_BIT(port_id, &lag_info->active_ports);
+	if (curr_active != link_active) {
+		OSAL_TEST_AND_FLIP_BIT(port_id, &lag_info->active_ports);
+
+		/* Reconfigure QM according to active_ports */
+		if (lag_info->is_master) {
+			ecore_lag_map_ports(p_hwfn);
+			rc = ecore_qm_reconf_intr(p_hwfn, p_ptt);
 		}
-		return rc;
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Active ports changed before %lx after %x\n",
+			   active_ports, lag_info->active_ports);
+	} else {
+		/* No change in active ports - triggered from a port event;
+		 * call DCBX-related code.
+		 */
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Nothing changed\n");
 	}
 
-	p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL,
-				     sizeof(*p_dev->fw_data));
-	if (!p_dev->fw_data)
-		return ECORE_NOMEM;
+	ecore_ptt_release(p_hwfn, p_ptt);
+	return rc;
+}
+
+enum _ecore_status_t ecore_lag_destroy(struct ecore_dev *dev)
+{
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(dev);
+	struct ecore_lag_info *lag_info = &p_hwfn->lag_info;
+
+	lag_info->lag_type = ECORE_LAG_TYPE_NONE;
+	lag_info->link_change_cb = OSAL_NULL;
+	lag_info->cxt = OSAL_NULL;
+
+	if (!lag_info->is_master) {
+		ecore_lag_destroy_slave(p_hwfn);
+		return ECORE_SUCCESS;
+	}
+
+	return ecore_lag_destroy_master(p_hwfn);
+}
+
+bool ecore_lag_is_active(struct ecore_hwfn *p_hwfn)
+{
+	return p_hwfn->lag_info.lag_type != ECORE_LAG_TYPE_NONE;
+}
+
+static enum _ecore_status_t ecore_cxt_calculate_tasks(struct ecore_hwfn *p_hwfn,
+						      u32 *rdma_tasks,
+						      u32 *eth_tasks,
+						      u32 excess_tasks)
+{
+	u32 eth_tasks_tmp, rdma_tasks_tmp, reduced_tasks = 0;
+
+/* @DPDK */
+#define ECORE_ETH_MAX_TIDS 0 /* !CONFIG_ECORE_FS */
+	/* ETH tasks are used for GFS stats counters in AHP */
+	*eth_tasks = ECORE_IS_E5(p_hwfn->p_dev) ? ECORE_ETH_MAX_TIDS : 0;
+
+	/* No tasks requested. If there are no excess tasks, return.
+	 * If there are excess tasks and the ILT lines need to be reduced,
+	 * that can't be done by reducing the number of tasks, so return
+	 * failure.
+	 */
+	/* DPDK */
+	if (*eth_tasks == 0)
+		return excess_tasks ? ECORE_INVAL : ECORE_SUCCESS;
+
+	/* Cannot reduce enough tasks */
+	if (excess_tasks > *eth_tasks)
+		return ECORE_INVAL;
+
+	eth_tasks_tmp = *eth_tasks;
+
+	while (reduced_tasks < excess_tasks) {
+		eth_tasks_tmp >>= 1;
+		/* DPDK */
+		reduced_tasks = (*eth_tasks) - (eth_tasks_tmp);
+	}
+
+	*eth_tasks = eth_tasks_tmp;
+
+	return ECORE_SUCCESS;
+}
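+
+/* Example of the reduction loop (a sketch): with *eth_tasks == 0x8000 and
+ * excess_tasks == 0x3000, the first halving gives eth_tasks_tmp == 0x4000
+ * and reduced_tasks == 0x4000 >= 0x3000, so *eth_tasks drops to 0x4000.
+ */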
+
+enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
+{
+	u32 rdma_tasks = 0, eth_tasks = 0, excess_tasks = 0, line_count;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	int i;
+
+	if (IS_PF(p_dev)) {
+		p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL,
+					     sizeof(*p_dev->fw_data));
+		if (!p_dev->fw_data)
+			return ECORE_NOMEM;
+	}
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -2384,6 +3388,26 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
+		rc = ecore_l2_alloc(p_hwfn);
+		if (rc != ECORE_SUCCESS)
+			goto alloc_err;
+
+		if (IS_VF(p_dev)) {
+			/* Allocating the entire spq struct although only the
+			 * async_comp callbacks are used by the VF.
+			 */
+			rc = ecore_spq_alloc(p_hwfn);
+			if (rc != ECORE_SUCCESS)
+				return rc;
+
+			continue;
+		}
+
+		/* ecore_iov_alloc must be called before ecore_cxt_set_pf_params() */
+		rc = ecore_iov_alloc(p_hwfn);
+		if (rc)
+			goto alloc_err;
+
 		/* First allocate the context manager structure */
 		rc = ecore_cxt_mngr_alloc(p_hwfn);
 		if (rc)
@@ -2392,7 +3416,8 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		/* Set the HW cid/tid numbers (in the context manager)
 		 * Must be done prior to any further computations.
 		 */
-		rc = ecore_cxt_set_pf_params(p_hwfn);
+		ecore_cxt_calculate_tasks(p_hwfn, &rdma_tasks, &eth_tasks, 0);
+		rc = ecore_cxt_set_pf_params(p_hwfn, rdma_tasks, eth_tasks);
 		if (rc)
 			goto alloc_err;
 
@@ -2404,9 +3429,70 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		ecore_init_qm_info(p_hwfn);
 
 		/* Compute the ILT client partition */
-		rc = ecore_cxt_cfg_ilt_compute(p_hwfn);
-		if (rc)
-			goto alloc_err;
+		rc = ecore_cxt_cfg_ilt_compute(p_hwfn, &line_count);
+		if (rc) {
+			u32 ilt_page_size_kb =
+				ecore_cxt_get_ilt_page_size(p_hwfn) >> 10;
+
+			DP_NOTICE(p_hwfn, false,
+				  "Requested %u lines but only %u are available. line size is %u KB; 0x%x rdma tasks 0x%x eth tasks; VF RDMA %s\n",
+				  line_count, RESC_NUM(p_hwfn, ECORE_ILT),
+				  ilt_page_size_kb, rdma_tasks, eth_tasks,
+				  ECORE_IS_VF_RDMA(p_hwfn) ?
+					"enabled" : "disabled");
+
+			/* Calculate the number of tasks need to be reduced
+			 * in order to have a successful ILT computation.
+			 */
+			excess_tasks = ecore_cxt_cfg_ilt_compute_excess(p_hwfn, line_count);
+			if (!excess_tasks)
+				goto alloc_err;
+
+			rc = ecore_cxt_calculate_tasks(p_hwfn, &rdma_tasks,
+						       &eth_tasks,
+						       excess_tasks);
+			if (!rc) {
+				DP_NOTICE(p_hwfn, false,
+					  "Re-computing after reducing tasks (0x%x rdma tasks, 0x%x eth tasks)\n",
+					  rdma_tasks, eth_tasks);
+				rc = ecore_cxt_set_pf_params(p_hwfn, rdma_tasks,
+							     eth_tasks);
+				if (rc)
+					goto alloc_err;
+
+				rc = ecore_cxt_cfg_ilt_compute(p_hwfn,
+							       &line_count);
+				if (rc)
+					DP_NOTICE(p_hwfn, false,
+						  "Requested %u lines but only %u are available.\n",
+						  line_count,
+						  RESC_NUM(p_hwfn, ECORE_ILT));
+			}
+
+			if (rc && ECORE_IS_VF_RDMA(p_hwfn)) {
+				DP_NOTICE(p_hwfn, false,
+					  "Re-computing after disabling VF RDMA\n");
+				p_hwfn->pf_iov_info->rdma_enable = false;
+
+				/* After disabling VF RDMA, we must call
+				 * ecore_cxt_set_pf_params(), in order to
+				 * recalculate the VF cids/tids amount.
+				 */
+				rc = ecore_cxt_set_pf_params(p_hwfn, rdma_tasks,
+							     eth_tasks);
+				if (rc)
+					goto alloc_err;
+
+				rc = ecore_cxt_cfg_ilt_compute(p_hwfn, &line_count);
+			}
+
+			if (rc) {
+				DP_ERR(p_hwfn,
+				       "ILT compute failed - requested %u lines but only %u are available. Need to increase ILT line size (current size is %u KB)\n",
+				       line_count, RESC_NUM(p_hwfn, ECORE_ILT), ilt_page_size_kb);
+				goto alloc_err;
+			}
+		}
 
 		/* CID map / ILT shadow table / T2
 		 * The table sizes are determined by the computations above
@@ -2428,62 +3514,12 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
-		rc = ecore_iov_alloc(p_hwfn);
-		if (rc)
-			goto alloc_err;
-
 		/* EQ */
 		n_eqes = ecore_chain_get_capacity(&p_hwfn->p_spq->chain);
-		if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) {
-			/* Calculate the EQ size
-			 * ---------------------
-			 * Each ICID may generate up to one event at a time i.e.
-			 * the event must be handled/cleared before a new one
-			 * can be generated. We calculate the sum of events per
-			 * protocol and create an EQ deep enough to handle the
-			 * worst case:
-			 * - Core - according to SPQ.
-			 * - RoCE - per QP there are a couple of ICIDs, one
-			 *	  responder and one requester, each can
-			 *	  generate an EQE => n_eqes_qp = 2 * n_qp.
-			 *	  Each CQ can generate an EQE. There are 2 CQs
-			 *	  per QP => n_eqes_cq = 2 * n_qp.
-			 *	  Hence the RoCE total is 4 * n_qp or
-			 *	  2 * num_cons.
-			 * - ENet - There can be up to two events per VF. One
-			 *	  for VF-PF channel and another for VF FLR
-			 *	  initial cleanup. The number of VFs is
-			 *	  bounded by MAX_NUM_VFS_BB, and is much
-			 *	  smaller than RoCE's so we avoid exact
-			 *	  calculation.
-			 */
-			if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
-				num_cons =
-				    ecore_cxt_get_proto_cid_count(
-						p_hwfn,
-						PROTOCOLID_ROCE,
-						OSAL_NULL);
-				num_cons *= 2;
-			} else {
-				num_cons = ecore_cxt_get_proto_cid_count(
-						p_hwfn,
-						PROTOCOLID_IWARP,
-						OSAL_NULL);
-			}
-			n_eqes += num_cons + 2 * MAX_NUM_VFS_BB;
-		} else if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) {
-			num_cons =
-			    ecore_cxt_get_proto_cid_count(p_hwfn,
-							  PROTOCOLID_ISCSI,
-							  OSAL_NULL);
-			n_eqes += 2 * num_cons;
-		}
-
-		if (n_eqes > 0xFFFF) {
-			DP_ERR(p_hwfn, "Cannot allocate 0x%x EQ elements."
-				       "The maximum of a u16 chain is 0x%x\n",
-			       n_eqes, 0xFFFF);
-			goto alloc_no_mem;
+		if (n_eqes > ECORE_EQ_MAX_ELEMENTS) {
+			DP_INFO(p_hwfn, "EQs maxing out at 0x%x elements\n",
+				ECORE_EQ_MAX_ELEMENTS);
+			n_eqes = ECORE_EQ_MAX_ELEMENTS;
 		}
 
 		rc = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
@@ -2494,14 +3530,11 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 		if (rc)
 			goto alloc_err;
 
-		rc = ecore_l2_alloc(p_hwfn);
-		if (rc != ECORE_SUCCESS)
-			goto alloc_err;
-
 		/* DMA info initialization */
 		rc = ecore_dmae_info_alloc(p_hwfn);
 		if (rc) {
-			DP_NOTICE(p_hwfn, false, "Failed to allocate memory for dmae_info structure\n");
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to allocate memory for dmae_info structure\n");
 			goto alloc_err;
 		}
 
@@ -2513,36 +3546,28 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 			goto alloc_err;
 		}
 
-		debug_status = OSAL_DBG_ALLOC_USER_DATA(p_hwfn,
-							&p_hwfn->dbg_user_info);
-		if (debug_status) {
+		rc = OSAL_DBG_ALLOC_USER_DATA(p_hwfn, &p_hwfn->dbg_user_info);
+		if (rc) {
 			DP_NOTICE(p_hwfn, false,
 				  "Failed to allocate dbg user info structure\n");
-			rc = (enum _ecore_status_t)debug_status;
 			goto alloc_err;
 		}
+	} /* hwfn loop */
 
-		debug_status = OSAL_DBG_ALLOC_USER_DATA(p_hwfn,
-							&p_hwfn->dbg_user_info);
-		if (debug_status) {
-			DP_NOTICE(p_hwfn, false,
-				  "Failed to allocate dbg user info structure\n");
-			rc = (enum _ecore_status_t)debug_status;
+	if (IS_PF(p_dev)) {
+		rc = ecore_llh_alloc(p_dev);
+		if (rc != ECORE_SUCCESS) {
+			DP_NOTICE(p_dev, false,
+				  "Failed to allocate memory for the llh_info structure\n");
 			goto alloc_err;
 		}
-	} /* hwfn loop */
-
-	rc = ecore_llh_alloc(p_dev);
-	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_dev, true,
-			  "Failed to allocate memory for the llh_info structure\n");
-		goto alloc_err;
 	}
 
 	p_dev->reset_stats = OSAL_ZALLOC(p_dev, GFP_KERNEL,
 					 sizeof(*p_dev->reset_stats));
 	if (!p_dev->reset_stats) {
-		DP_NOTICE(p_dev, false, "Failed to allocate reset statistics\n");
+		DP_NOTICE(p_dev, false,
+			  "Failed to allocate reset statistics\n");
 		goto alloc_no_mem;
 	}
 
@@ -2560,8 +3585,10 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
 	int i;
 
 	if (IS_VF(p_dev)) {
-		for_each_hwfn(p_dev, i)
+		for_each_hwfn(p_dev, i) {
 			ecore_l2_setup(&p_dev->hwfns[i]);
+		}
+
 		return;
 	}
 
@@ -2592,36 +3619,37 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 u16 id, bool is_vf)
 {
-	u32 command = 0, addr, count = FINAL_CLEANUP_POLL_CNT;
+	u32 count = FINAL_CLEANUP_POLL_CNT, poll_time = FINAL_CLEANUP_POLL_TIME;
+	u32 command = 0, addr;
 	enum _ecore_status_t rc = ECORE_TIMEOUT;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_TEDIBEAR(p_hwfn->p_dev) ||
-	    CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+	    (ECORE_IS_E4(p_hwfn->p_dev) && CHIP_REV_IS_SLOW(p_hwfn->p_dev))) {
 		DP_INFO(p_hwfn, "Skipping final cleanup for non-ASIC\n");
 		return ECORE_SUCCESS;
 	}
+
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
+		poll_time *= 10;
 #endif
 
 	addr = GTT_BAR0_MAP_REG_USDM_RAM +
-	    USTORM_FLR_FINAL_ACK_OFFSET(p_hwfn->rel_pf_id);
+	       USTORM_FLR_FINAL_ACK_OFFSET(p_hwfn->rel_pf_id);
 
 	if (is_vf)
 		id += 0x10;
 
-	command |= X_FINAL_CLEANUP_AGG_INT <<
-	    SDM_AGG_INT_COMP_PARAMS_AGG_INT_INDEX_SHIFT;
-	command |= 1 << SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE_SHIFT;
-	command |= id << SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT_SHIFT;
-	command |= SDM_COMP_TYPE_AGG_INT << SDM_OP_GEN_COMP_TYPE_SHIFT;
-
-/* Make sure notification is not set before initiating final cleanup */
+	SET_FIELD(command, SDM_AGG_INT_COMP_PARAMS_AGG_INT_INDEX,
+		  X_FINAL_CLEANUP_AGG_INT);
+	SET_FIELD(command, SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE, 1);
+	SET_FIELD(command, SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT, id);
+	SET_FIELD(command, SDM_OP_GEN_COMP_TYPE, SDM_COMP_TYPE_AGG_INT);
 
+	/* Make sure notification is not set before initiating final cleanup */
 	if (REG_RD(p_hwfn, addr)) {
 		DP_NOTICE(p_hwfn, false,
-			  "Unexpected; Found final cleanup notification");
-		DP_NOTICE(p_hwfn, false,
-			  " before initiating final cleanup\n");
+			  "Unexpected; Found final cleanup notification before initiating final cleanup\n");
 		REG_WR(p_hwfn, addr, 0);
 	}
 
@@ -2633,13 +3661,12 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn,
 
 	/* Poll until completion */
 	while (!REG_RD(p_hwfn, addr) && count--)
-		OSAL_MSLEEP(FINAL_CLEANUP_POLL_TIME);
+		OSAL_MSLEEP(poll_time);
 
 	if (REG_RD(p_hwfn, addr))
 		rc = ECORE_SUCCESS;
 	else
-		DP_NOTICE(p_hwfn, true,
-			  "Failed to receive FW final cleanup notification\n");
+		DP_NOTICE(p_hwfn, true, "Failed to receive FW final cleanup notification\n");
 
 	/* Cleanup afterwards */
 	REG_WR(p_hwfn, addr, 0);
@@ -2655,13 +3682,15 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 		hw_mode |= 1 << MODE_BB;
 	} else if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_K2;
+	} else if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		hw_mode |= 1 << MODE_E5;
 	} else {
 		DP_NOTICE(p_hwfn, true, "Unknown chip type %#x\n",
 			  p_hwfn->p_dev->type);
 		return ECORE_INVAL;
 	}
 
 	/* Ports per engine is based on the values in CNIG_REG_NW_PORT_MODE */
 	switch (p_hwfn->p_dev->num_ports_in_engine) {
 	case 1:
 		hw_mode |= 1 << MODE_PORTS_PER_ENG_1;
@@ -2673,13 +3702,12 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 		hw_mode |= 1 << MODE_PORTS_PER_ENG_4;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true,
-			  "num_ports_in_engine = %d not supported\n",
+		DP_NOTICE(p_hwfn, true, "num_ports_in_engine = %d not supported\n",
 			  p_hwfn->p_dev->num_ports_in_engine);
 		return ECORE_INVAL;
 	}
 
-	if (OSAL_GET_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits))
+	if (OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits))
 		hw_mode |= 1 << MODE_MF_SD;
 	else
 		hw_mode |= 1 << MODE_MF_SI;
@@ -2711,8 +3739,6 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 }
 
 #ifndef ASIC_ONLY
-#define PCI_EXP_DEVCTL_PAYLOAD 0x00e0
-
 /* MFW-replacement initializations for emulation */
 static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev,
 					       struct ecore_ptt *p_ptt)
@@ -2728,17 +3754,23 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev,
 		return ECORE_INVAL;
 	}
 
-	pl_hv = ECORE_IS_BB(p_dev) ? 0x1 : 0x401;
+	if (ECORE_IS_BB(p_dev))
+		pl_hv = 0x1;
+	else if (ECORE_IS_AH(p_dev))
+		pl_hv = 0x401;
+	else /* E5 */
+		pl_hv = 0x4601;
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
-	if (ECORE_IS_AH(p_dev))
-		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5, 0x3ffffff);
+	if (ECORE_IS_AH(p_dev) || ECORE_IS_E5(p_dev))
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5,
+			 0x3ffffff);
 
 	/* Initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
 	if (ECORE_IS_BB(p_dev))
 		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4);
 
-	if (ECORE_IS_AH(p_dev)) {
+	if (ECORE_IS_AH(p_dev) || ECORE_IS_E5(p_dev)) {
 		/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
 		ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
 			 p_dev->num_ports_in_engine >> 1);
@@ -2778,6 +3810,22 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev,
 	 */
 	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_PRTY_STS_WR_H_0, 0x8);
 
+	if (ECORE_IS_E5(p_dev)) {
+		/* Clock enable for CSCLK and PCE_CLK */
+		ecore_wr(p_hwfn, p_ptt,
+			 PGL2PEM_REG_PEM_NATIVE_E5 + PEM_REG_PEM_CLK_EN_E5,
+			 0x0);
+
+		/* Allow the traffic to be sent out the PCIe link */
+		ecore_wr(p_hwfn, p_ptt,
+			 PGL2PEM_REG_PEM_NATIVE_E5 + PEM_REG_PEM_DIS_PORT_E5,
+			 0x1);
+
+		/* Enable zone_b access of VFs to SDM */
+		ecore_wr(p_hwfn, p_ptt,
+			 PGLUE_B_REG_DISABLE_ZONE_B_ACCESS_OF_VF_E5, 0x0);
+	}
+
 	/* Configure PSWRQ2_REG_WR_MBS0 according to the MaxPayloadSize field in
 	 * the PCI configuration space. The value is common for all PFs, so it
 	 * is okay to do it according to the first loading PF.
@@ -2796,6 +3844,18 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev,
 	/* Configure the PGLUE_B to discard mode */
 	ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_DISCARD_NBLOCK, 0x3f);
 
+	/* Workaround for a HW bug in E5 to allow VFs of PFs > 0 */
+	if (ECORE_IS_E5(p_dev)) {
+		u32 addr, timer_ctl;
+
+		addr = PGLCS_REG_PGL_CS + PCIEIP_REG_PCIEEP_TIMER_CTL_E5;
+		timer_ctl = ecore_rd(p_hwfn, p_ptt, addr);
+		timer_ctl = (timer_ctl &
+			     ~PCIEIP_REG_PCIEEP_TIMER_CTL_MFUNCN_E5) |
+			    (0xf & PCIEIP_REG_PCIEEP_TIMER_CTL_MFUNCN_E5);
+		ecore_wr(p_hwfn, p_ptt, addr, timer_ctl);
+	}
+
 	return ECORE_SUCCESS;
 }
 #endif
@@ -2827,7 +3887,8 @@ static void ecore_init_cau_rt_data(struct ecore_dev *p_dev)
 				continue;
 
 			ecore_init_cau_sb_entry(p_hwfn, &sb_entry,
-						p_block->function_id, 0, 0);
+						p_block->function_id,
+						0, 0);
 			STORE_RT_REG_AGG(p_hwfn, offset + igu_sb_id * 2,
 					 sb_entry);
 		}
@@ -2898,7 +3959,7 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	u8 vf_id, max_num_vfs;
 	u16 num_pfs, pf_id;
 	u32 concrete_fid;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	ecore_init_cau_rt_data(p_dev);
 
@@ -2975,6 +4036,26 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	/* pretend to original PF */
 	ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 
+#ifndef ASIC_ONLY
+	/* Clear the FIRST_VF register for all PFs */
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) &&
+	    ECORE_IS_E5(p_hwfn->p_dev) &&
+	    IS_PF_SRIOV(p_hwfn)) {
+		u8 pf_id;
+
+		for (pf_id = 0; pf_id < MAX_NUM_PFS_E5; pf_id++) {
+			/* pretend to the relevant PF */
+			ecore_fid_pretend(p_hwfn, p_ptt,
+					  FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, pf_id));
+			ecore_wr(p_hwfn, p_ptt, PGLCS_REG_FIRST_VF_K2_E5, 0);
+		}
+
+		/* pretend to the original PF */
+		ecore_fid_pretend(p_hwfn, p_ptt,
+				  FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, p_hwfn->rel_pf_id));
+	}
+#endif
+
 	return rc;
 }
 
@@ -2984,9 +4065,12 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 
 #define PMEG_IF_BYTE_COUNT	8
 
-static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
-			     struct ecore_ptt *p_ptt,
-			     u32 addr, u64 data, u8 reg_type, u8 port)
+static void ecore_wr_nw_port(struct ecore_hwfn	*p_hwfn,
+			     struct ecore_ptt	*p_ptt,
+			     u32		addr,
+			     u64		data,
+			     u8			reg_type,
+			     u8			port)
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 		   "CMD: %08x, ADDR: 0x%08x, DATA: %08x:%08x\n",
@@ -2998,7 +4082,8 @@ static void ecore_wr_nw_port(struct ecore_hwfn *p_hwfn,
 
 	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB,
 		 (ecore_rd(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_CMD_BB) &
-		  0xffff00fe) | (8 << PMEG_IF_BYTE_COUNT));
+		  0xffff00fe) |
+		 (8 << PMEG_IF_BYTE_COUNT));
 	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_ADDR_BB,
 		 (reg_type << 25) | (addr << 8) | port);
 	ecore_wr(p_hwfn, p_ptt, CNIG_REG_PMEG_IF_WRDATA_BB, data & 0xffffffff);
@@ -3023,36 +4108,34 @@ static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 {
 	u8 loopback = 0, port = p_hwfn->port_id * 2;
 
-	/* XLPORT MAC MODE *//* 0 Quad, 4 Single... */
-	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
-			 port);
+	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG,
+			 (0x4 << 4) | 0x4, 1, port); /* XLPORT MAC MODE: 0 Quad, 4 Single... */
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MAC_CONTROL, 0, 1, port);
-	/* XLMAC: SOFT RESET */
-	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x40, 0, port);
-	/* XLMAC: Port Speed >= 10Gbps */
-	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_MODE, 0x40, 0, port);
-	/* XLMAC: Max Size */
-	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_RX_MAX_SIZE, 0x3fff, 0, port);
+	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL,
+			 0x40, 0, port); /*XLMAC: SOFT RESET */
+	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_MODE,
+			 0x40, 0, port); /*XLMAC: Port Speed >= 10Gbps */
+	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_RX_MAX_SIZE,
+			 0x3fff, 0, port); /* XLMAC: Max Size */
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_TX_CTRL,
 			 0x01000000800ULL | (0xa << 12) | ((u64)1 << 38),
 			 0, port);
-	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_PAUSE_CTRL, 0x7c000, 0, port);
+	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_PAUSE_CTRL,
+			 0x7c000, 0, port);
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_PFC_CTRL,
 			 0x30ffffc000ULL, 0, port);
-	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x3 | (loopback << 2), 0,
-			 port);	/* XLMAC: TX_EN, RX_EN */
-	/* XLMAC: TX_EN, RX_EN, SW_LINK_STATUS */
-	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL,
-			 0x1003 | (loopback << 2), 0, port);
-	/* Enabled Parallel PFC interface */
-	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_FLOW_CONTROL_CONFIG, 1, 0, port);
-
-	/* XLPORT port enable */
-	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port);
+	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x3 | (loopback << 2),
+			 0, port); /* XLMAC: TX_EN, RX_EN */
+	ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x1003 | (loopback << 2),
+			 0, port); /* XLMAC: TX_EN, RX_EN, SW_LINK_STATUS */
+	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_FLOW_CONTROL_CONFIG,
+			 1, 0, port); /* Enabled Parallel PFC interface */
+	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG,
+			 0xf, 1, port); /* XLPORT port enable */
 }
 
 static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn,
-				       struct ecore_ptt *p_ptt)
+				    struct ecore_ptt *p_ptt)
 {
 	u32 mac_base, mac_config_val = 0xa853;
 	u8 port = p_hwfn->port_id;
@@ -3089,6 +4172,187 @@ static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn,
 		 mac_config_val);
 }
 
+static struct {
+	u32 offset;
+	u32 value;
+} ecore_e5_emul_mac_init[] = {
+	{RCE_4CH_REG_MAC_RS_TX_0_MAC_TX_MAX_PKT_LEN_E5, 0x3fff},
+	{RCE_4CH_REG_MAC_RS_TX_1_MAC_TX_MAX_PKT_LEN_E5, 0x3fff},
+	{RCE_4CH_REG_MAC_RS_TX_2_MAC_TX_MAX_PKT_LEN_E5, 0x3fff},
+	{RCE_4CH_REG_MAC_RS_TX_3_MAC_TX_MAX_PKT_LEN_E5, 0x3fff},
+	{RCE_4CH_REG_MAC_RS_RX_0_RX_MAX_PKT_LEN_E5, 0x3fff},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_MAX_PKT_LEN_E5, 0x3fff},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_MAX_PKT_LEN_E5, 0x3fff},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_MAX_PKT_LEN_E5, 0x3fff},
+	{RCE_4CH_REG_APP_FIFO_0_TX_SOF_THRESH_E5, 0x3},
+	{RCE_4CH_REG_MAC_RS_TX_0_TX_MAC_CFG_E5, 0x68094e5},
+	{RCE_4CH_REG_MAC_RS_TX_0_STN_ADD_47_32_E5, 0xa868},
+	{RCE_4CH_REG_MAC_RS_TX_0_STN_ADD_31_0_E5, 0xaf88776f},
+	{RCE_4CH_REG_MAC_RS_TX_0_TX_MAC_FC_CONFIG0_E5, 0x7f97},
+	{RCE_4CH_REG_MAC_RS_TX_0_TX_MAC_FC_CONFIG2_E5, 0x11},
+	{RCE_4CH_REG_MAC_RS_TX_0_TX_START_PRE_SMD_E5, 0xb37f4cd5},
+	{RCE_4CH_REG_MAC_RS_TX_0_RS_CTRL_E5, 0x8},
+	{RCE_4CH_REG_MAC_RS_TX_0_TX_IS_FIFO_THRESH_E5, 0x83},
+	{RCE_4CH_REG_MAC_RS_RX_0_RX_MAC_CFG_E5, 0x15d5},
+	{RCE_4CH_REG_MAC_RS_RX_0_RX_MAC_FC_CONFIG_E5, 0xff08},
+	{RCE_4CH_REG_MAC_RS_RX_0_RS_CTRL_E5, 0x4},
+	{RCE_4CH_REG_PCS_RX_0_RX_PCS_CFG_E5, 0x1808a03},
+	{RCE_4CH_REG_PCS_RX_0_RX_PCS_CFG2_E5, 0x2060c202},
+	{RCE_4CH_REG_PCS_RX_0_RX_PCS_ALIGN_CFG_E5, 0xa00},
+	{RCE_4CH_REG_PCS_TX_0_TX_PCS_CFG_E5, 0x8a03},
+	{RCE_4CH_REG_PCS_TX_0_TX_PCS_CFG2_E5, 0x5a0610},
+	{RCE_4CH_REG_PCS_TX_0_TX_PCS_CTRL_ALIGN_PERIOD_E5, 0xa00},
+	{RCE_4CH_REG_PCS_TX_0_TX_PCS_CTRL_ENCODER_E5, 0xc00000},
+	{RCE_4CH_REG_MAC_RS_RX_0_RX_EXPRESS_SMD0_3_E5, 0xffffffd5},
+	{RCE_4CH_REG_APP_FIFO_1_TX_SOF_THRESH_E5, 0x3},
+	{RCE_4CH_REG_MAC_RS_TX_1_TX_MAC_CFG_E5, 0x68494e5},
+	{RCE_4CH_REG_MAC_RS_TX_1_STN_ADD_47_32_E5, 0x44b2},
+	{RCE_4CH_REG_MAC_RS_TX_1_STN_ADD_31_0_E5, 0x20123eee},
+	{RCE_4CH_REG_MAC_RS_TX_1_TX_MAC_FC_CONFIG0_E5, 0x7f97},
+	{RCE_4CH_REG_MAC_RS_TX_1_TX_MAC_FC_CONFIG2_E5, 0x11},
+	{RCE_4CH_REG_MAC_RS_TX_1_TX_START_PRE_SMD_E5, 0xb37f4cd5},
+	{RCE_4CH_REG_MAC_RS_TX_1_RS_CTRL_E5, 0x8},
+	{RCE_4CH_REG_MAC_RS_TX_1_TX_IS_FIFO_THRESH_E5, 0x83},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_MAC_CFG_E5, 0x415d5},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_MAC_FC_CONFIG_E5, 0xff08},
+	{RCE_4CH_REG_MAC_RS_RX_1_RS_CTRL_E5, 0x4},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_START_PRE_SMD_VC0_E5, 0xffffffff},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_START_PRE_SMD_VC1_E5, 0xb37f4ce6},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_CONT_FRAG_SMD_VC0_E5, 0xffffffff},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_CONT_FRAG_SMD_VC1_E5, 0x2a9e5261},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_EXP_VER_SMD_VC0_3_E5, 0xffff07ff},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_EXP_RES_SMD_VC0_3_E5, 0xffff19ff},
+	{RCE_4CH_REG_PCS_RX_1_RX_PCS_CFG_E5, 0x1808a03},
+	{RCE_4CH_REG_PCS_RX_1_RX_PCS_CFG2_E5, 0x2060c202},
+	{RCE_4CH_REG_PCS_RX_1_RX_PCS_ALIGN_CFG_E5, 0xa00},
+	{RCE_4CH_REG_PCS_TX_1_TX_PCS_CFG_E5, 0x8a03},
+	{RCE_4CH_REG_PCS_TX_1_TX_PCS_CFG2_E5, 0x5a0610},
+	{RCE_4CH_REG_PCS_TX_1_TX_PCS_CTRL_ALIGN_PERIOD_E5, 0xa00},
+	{RCE_4CH_REG_PCS_TX_1_TX_PCS_CTRL_ENCODER_E5, 0xc00000},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_EXPRESS_SMD0_3_E5, 0xffffd5ff},
+	{RCE_4CH_REG_APP_FIFO_2_TX_SOF_THRESH_E5, 0x3},
+	{RCE_4CH_REG_MAC_RS_TX_2_TX_MAC_CFG_E5, 0x68894e5},
+	{RCE_4CH_REG_MAC_RS_TX_2_STN_ADD_47_32_E5, 0xf1c0},
+	{RCE_4CH_REG_MAC_RS_TX_2_STN_ADD_31_0_E5, 0xabc40c51},
+	{RCE_4CH_REG_MAC_RS_TX_2_TX_MAC_FC_CONFIG0_E5, 0x7f97},
+	{RCE_4CH_REG_MAC_RS_TX_2_TX_MAC_FC_CONFIG2_E5, 0x15},
+	{RCE_4CH_REG_MAC_RS_TX_2_TX_START_PRE_SMD_E5, 0xb37f4cd5},
+	{RCE_4CH_REG_MAC_RS_TX_2_RS_CTRL_E5, 0x8},
+	{RCE_4CH_REG_MAC_RS_TX_2_TX_IS_FIFO_THRESH_E5, 0x83},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_MAC_CFG_E5, 0x815d5},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_MAC_FC_CONFIG_E5, 0xff08},
+	{RCE_4CH_REG_MAC_RS_RX_2_RS_CTRL_E5, 0x4},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_START_PRE_SMD_VC0_E5, 0xffffffff},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_START_PRE_SMD_VC2_E5, 0xb37f4ce6},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_CONT_FRAG_SMD_VC0_E5, 0xffffffff},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_CONT_FRAG_SMD_VC2_E5, 0x2a9e5261},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_EXP_VER_SMD_VC0_3_E5, 0xff07ffff},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_EXP_RES_SMD_VC0_3_E5, 0xff19ffff},
+	{RCE_4CH_REG_PCS_RX_2_RX_PCS_CFG_E5, 0x1808a03},
+	{RCE_4CH_REG_PCS_RX_2_RX_PCS_CFG2_E5, 0x2060c202},
+	{RCE_4CH_REG_PCS_RX_2_RX_PCS_ALIGN_CFG_E5, 0xa00},
+	{RCE_4CH_REG_PCS_TX_2_TX_PCS_CFG_E5, 0x8a03},
+	{RCE_4CH_REG_PCS_TX_2_TX_PCS_CFG2_E5, 0x5a0610},
+	{RCE_4CH_REG_PCS_TX_2_TX_PCS_CTRL_ALIGN_PERIOD_E5, 0xa00},
+	{RCE_4CH_REG_PCS_TX_2_TX_PCS_CTRL_ENCODER_E5, 0xc00000},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_EXPRESS_SMD0_3_E5, 0xffd5ffff},
+	{RCE_4CH_REG_APP_FIFO_3_TX_SOF_THRESH_E5, 0x3},
+	{RCE_4CH_REG_MAC_RS_TX_3_TX_MAC_CFG_E5, 0x68c94e5},
+	{RCE_4CH_REG_MAC_RS_TX_3_STN_ADD_47_32_E5, 0xa235},
+	{RCE_4CH_REG_MAC_RS_TX_3_STN_ADD_31_0_E5, 0x23650a9d},
+	{RCE_4CH_REG_MAC_RS_TX_3_TX_MAC_FC_CONFIG0_E5, 0x7f97},
+	{RCE_4CH_REG_MAC_RS_TX_3_TX_MAC_FC_CONFIG2_E5, 0x1f},
+	{RCE_4CH_REG_MAC_RS_TX_3_TX_START_PRE_SMD_E5, 0xb37f4cd5},
+	{RCE_4CH_REG_MAC_RS_TX_3_RS_CTRL_E5, 0x8},
+	{RCE_4CH_REG_MAC_RS_TX_3_TX_IS_FIFO_THRESH_E5, 0x83},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_MAC_CFG_E5, 0xc15d5},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_MAC_FC_CONFIG_E5, 0xff08},
+	{RCE_4CH_REG_MAC_RS_RX_3_RS_CTRL_E5, 0x4},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_START_PRE_SMD_VC0_E5, 0xffffffff},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_START_PRE_SMD_VC3_E5, 0xb37f4ce6},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_CONT_FRAG_SMD_VC0_E5, 0xffffffff},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_CONT_FRAG_SMD_VC3_E5, 0x2a9e5261},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_EXP_VER_SMD_VC0_3_E5, 0x7ffffff},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_EXP_RES_SMD_VC0_3_E5, 0x19ffffff},
+	{RCE_4CH_REG_PCS_RX_3_RX_PCS_CFG_E5, 0x1808a03},
+	{RCE_4CH_REG_PCS_RX_3_RX_PCS_CFG2_E5, 0x2060c202},
+	{RCE_4CH_REG_PCS_RX_3_RX_PCS_ALIGN_CFG_E5, 0xa00},
+	{RCE_4CH_REG_PCS_TX_3_TX_PCS_CFG_E5, 0x8a03},
+	{RCE_4CH_REG_PCS_TX_3_TX_PCS_CFG2_E5, 0x5a0610},
+	{RCE_4CH_REG_PCS_TX_3_TX_PCS_CTRL_ALIGN_PERIOD_E5, 0xa00},
+	{RCE_4CH_REG_PCS_TX_3_TX_PCS_CTRL_ENCODER_E5, 0xc00000},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_EXPRESS_SMD0_3_E5, 0xd5ffffff},
+	{RCE_4CH_REG_MAC_TOP_TS_TO_PC_MAP_E5, 0x650},
+	{RCE_4CH_REG_PCS_TOP_PCS_SEL_MAP_E5, 0xb6d249},
+	{RCE_4CH_REG_PCS_TOP_PMA_TX_FIFO_CFG3_E5, 0x5294a},
+	{RCE_4CH_REG_PCS_TOP_RX_PMA_SEL_MAP1_E5, 0x18022},
+	{RCE_4CH_REG_PCS_TOP_TX_PMA_SEL_MAP1_E5, 0x443},
+	{RCE_4CH_REG_MAC_RS_TX_0_TX_MAC_CFG_E5, 0x68094e6},
+	{RCE_4CH_REG_MAC_RS_RX_0_RX_MAC_CFG_E5, 0x15d6},
+	{RCE_4CH_REG_PCS_RX_0_RX_PCS_CFG_E5, 0x1808a02},
+	{RCE_4CH_REG_PCS_TX_0_TX_PCS_CFG_E5, 0x8a02},
+	{RCE_4CH_REG_MAC_RS_TX_1_TX_MAC_CFG_E5, 0x68494e6},
+	{RCE_4CH_REG_MAC_RS_RX_1_RX_MAC_CFG_E5, 0x415d6},
+	{RCE_4CH_REG_PCS_RX_1_RX_PCS_CFG_E5, 0x1808a02},
+	{RCE_4CH_REG_PCS_TX_1_TX_PCS_CFG_E5, 0x8a02},
+	{RCE_4CH_REG_MAC_RS_TX_2_TX_MAC_CFG_E5, 0x68894e6},
+	{RCE_4CH_REG_MAC_RS_RX_2_RX_MAC_CFG_E5, 0x815d6},
+	{RCE_4CH_REG_PCS_RX_2_RX_PCS_CFG_E5, 0x1808a02},
+	{RCE_4CH_REG_PCS_TX_2_TX_PCS_CFG_E5, 0x8a02},
+	{RCE_4CH_REG_MAC_RS_TX_3_TX_MAC_CFG_E5, 0x68c94e6},
+	{RCE_4CH_REG_MAC_RS_RX_3_RX_MAC_CFG_E5, 0xc15d6},
+	{RCE_4CH_REG_PCS_RX_3_RX_PCS_CFG_E5, 0x1808a02},
+	{RCE_4CH_REG_PCS_TX_3_TX_PCS_CFG_E5, 0x8a02},
+	{RCE_4CH_REG_PCS_TOP_PCS_RESET_E5, 0xd},
+	{RCE_4CH_REG_PCS_TOP_PCS_RESET_E5, 0x5},
+	{RCE_4CH_REG_PCS_TOP_PCS_RESET_E5, 0x4},
+	{RCE_4CH_REG_PCS_TOP_PCS_RESET_E5, 0x0},
+	{RCE_4CH_REG_PCS_TOP_PMA_RX_FIFO_CFG3_E5, 0xf0},
+	{RCE_4CH_REG_PCS_TOP_PMA_TX_FIFO_CFG5_E5, 0xf0},
+	{RCE_4CH_REG_APP_FIFO_0_APP_FIFO_CFG_E5, 0x43c},
+	{RCE_4CH_REG_APP_FIFO_1_APP_FIFO_CFG_E5, 0x43c},
+	{RCE_4CH_REG_APP_FIFO_2_APP_FIFO_CFG_E5, 0x43c},
+	{RCE_4CH_REG_APP_FIFO_3_APP_FIFO_CFG_E5, 0x43c},
+};
+
+static void ecore_emul_link_init_e5(struct ecore_hwfn *p_hwfn,
+				    struct ecore_ptt *p_ptt)
+{
+	u32 val;
+
+	/* Init the MAC if it exists and disable external loopback.
+	 * Without MAC - configure a loopback in the NIG.
+	 */
+	if (p_hwfn->p_dev->b_is_emul_mac) {
+		u32 i, base = NWM_REG_MAC_E5;
+		osal_size_t size;
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+			   "E5 emulation: Initializing the MAC\n");
+
+		size = sizeof(ecore_e5_emul_mac_init) /
+		       sizeof(ecore_e5_emul_mac_init[0]);
+		for (i = 0; i < size; i++)
+			ecore_wr(p_hwfn, p_ptt,
+				 base + ecore_e5_emul_mac_init[i].offset,
+				 ecore_e5_emul_mac_init[i].value);
+
+		/* MISCS_REG_ECO_RESERVED[31:30]: enable/disable ext loopback */
+		val = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
+		val &= ~0xc0000000;
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED, val);
+	} else {
+		u8 port_mode = 0x2;
+
+		/* NIG_REG_ECO_RESERVED[1:0]: NIG loopback.
+		 * '2' - ports "0 <-> 1" and "2 <-> 3".
+		 */
+		val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ECO_RESERVED);
+		val = (val & ~0x3) | port_mode;
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_ECO_RESERVED, val);
+	}
+}
+
 static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
 				 struct ecore_ptt *p_ptt)
 {
@@ -3100,8 +4364,10 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
 
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		ecore_emul_link_init_bb(p_hwfn, p_ptt);
-	else
+	else if (ECORE_IS_AH(p_hwfn->p_dev))
 		ecore_emul_link_init_ah(p_hwfn, p_ptt);
+	else /* E5 */
+		ecore_emul_link_init_e5(p_hwfn, p_ptt);
 
 	return;
 }
@@ -3110,15 +4376,15 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,  u8 port)
 {
 	int port_offset = port ? 0x800 : 0;
 	u32 xmac_rxctrl = 0;
 
 	/* Reset of XMAC */
 	/* FIXME: move to common start */
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + 2 * sizeof(u32),
-		 MISC_REG_RESET_REG_2_XMAC_BIT);	/* Clear */
+		 MISC_REG_RESET_REG_2_XMAC_BIT); /* Clear */
 	OSAL_MSLEEP(1);
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + sizeof(u32),
-		 MISC_REG_RESET_REG_2_XMAC_BIT);	/* Set */
+		 MISC_REG_RESET_REG_2_XMAC_BIT); /* Set */
 
 	ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE_BB, 1);
 
@@ -3152,28 +4418,45 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
-static u32 ecore_hw_norm_region_conn(struct ecore_hwfn *p_hwfn)
+static u32 ecore_hw_get_norm_region_conn(struct ecore_hwfn *p_hwfn)
 {
 	u32 norm_region_conn;
 
-	/* The order of CIDs allocation is according to the order of
+	/* In E4, the order of CID allocation is according to the order of
 	 * 'enum protocol_type'. Therefore, the number of CIDs for the normal
 	 * region is calculated based on the CORE CIDs, in case of non-ETH
 	 * personality, and otherwise - based on the ETH CIDs.
+	 * In E5 there is an exception - the CORE CIDs are allocated first.
+	 * Therefore, the calculation should consider every possible
+	 * personality.
 	 */
 	norm_region_conn =
-		ecore_cxt_get_proto_cid_start(p_hwfn, PROTOCOLID_CORE) +
+		ecore_cxt_get_proto_cid_start(p_hwfn, PROTOCOLID_CORE,
+					      ECORE_CXT_PF_CID) +
 		ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_CORE,
 					      OSAL_NULL) +
 		ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
 					      OSAL_NULL);
 
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		norm_region_conn +=
+			ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ISCSI,
+						      OSAL_NULL) +
+			ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_FCOE,
+						      OSAL_NULL) +
+			ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ROCE,
+						      OSAL_NULL);
+	}
+
 	return norm_region_conn;
 }
 
-static enum _ecore_status_t
+enum _ecore_status_t
 ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
-		       struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus)
+		       struct ecore_ptt *p_ptt,
+		       struct ecore_dpi_info *dpi_info,
+		       u32 pwm_region_size,
+		       u32 n_cpus)
 {
 	u32 dpi_bit_shift, dpi_count, dpi_page_size;
 	u32 min_dpis;
@@ -3202,20 +4485,16 @@ ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
 	 */
 	n_wids = OSAL_MAX_T(u32, ECORE_MIN_WIDS, n_cpus);
 	dpi_page_size = ECORE_WID_SIZE * OSAL_ROUNDUP_POW_OF_TWO(n_wids);
-	dpi_page_size = (dpi_page_size + OSAL_PAGE_SIZE - 1) &
-			~(OSAL_PAGE_SIZE - 1);
+	dpi_page_size = (dpi_page_size + OSAL_PAGE_SIZE - 1) & ~(OSAL_PAGE_SIZE - 1);
 	dpi_bit_shift = OSAL_LOG2(dpi_page_size / 4096);
 	dpi_count = pwm_region_size / dpi_page_size;
 
 	min_dpis = p_hwfn->pf_params.rdma_pf_params.min_dpis;
 	min_dpis = OSAL_MAX_T(u32, ECORE_MIN_DPIS, min_dpis);
 
-	/* Update hwfn */
-	p_hwfn->dpi_size = dpi_page_size;
-	p_hwfn->dpi_count = dpi_count;
-
-	/* Update registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DPI_BIT_SHIFT, dpi_bit_shift);
+	dpi_info->dpi_size = dpi_page_size;
+	dpi_info->dpi_count = dpi_count;
+	ecore_wr(p_hwfn, p_ptt, dpi_info->dpi_bit_shift_addr, dpi_bit_shift);
 
 	if (dpi_count < min_dpis)
 		return ECORE_NORESOURCES;
@@ -3223,15 +4502,11 @@ ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
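+
+/* Numeric example (a sketch, assuming ECORE_WID_SIZE is 1K, a 4K
+ * OSAL_PAGE_SIZE and ECORE_MIN_WIDS <= 8): n_cpus == 8 gives n_wids == 8,
+ * dpi_page_size == 8K and dpi_bit_shift == 1, so a 2MB PWM region yields
+ * dpi_count == 256.
+ */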
 
-enum ECORE_ROCE_EDPM_MODE {
-	ECORE_ROCE_EDPM_MODE_ENABLE = 0,
-	ECORE_ROCE_EDPM_MODE_FORCE_ON = 1,
-	ECORE_ROCE_EDPM_MODE_DISABLE = 2,
-};
-
-bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn)
+bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn,
+			struct ecore_common_dpm_info *dpm_info)
 {
-	if (p_hwfn->dcbx_no_edpm || p_hwfn->db_bar_no_edpm)
+	if (p_hwfn->dcbx_no_edpm || dpm_info->db_bar_no_edpm ||
+	    dpm_info->mfw_no_edpm)
 		return false;
 
 	return true;
@@ -3241,20 +4516,24 @@ static enum _ecore_status_t
 ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt)
 {
+	struct ecore_common_dpm_info *dpm_info;
 	u32 norm_region_conn, min_addr_reg1;
+	struct ecore_dpi_info *dpi_info;
 	u32 pwm_regsize, norm_regsize;
-	u32 db_bar_size, n_cpus;
+	u32 db_bar_size, n_cpus = 1;
 	u32 roce_edpm_mode;
-	u32 pf_dems_shift;
+	u32 dems_shift;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u8 cond;
 
 	db_bar_size = ecore_hw_bar_size(p_hwfn, p_ptt, BAR_ID_1);
+
+	/* In CMT, doorbell bar should be split over both engines */
 	if (ECORE_IS_CMT(p_hwfn->p_dev))
 		db_bar_size /= 2;
 
 	/* Calculate doorbell regions
-	 * -----------------------------------
+	 * --------------------------
 	 * The doorbell BAR is made of two regions. The first is called normal
 	 * region and the second is called PWM region. In the normal region
 	 * each ICID has its own set of addresses so that writing to that
@@ -3267,7 +4546,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	 * connections. The DORQ_REG_PF_MIN_ADDR_REG1 register is
 	 * in units of 4,096 bytes.
 	 */
-	norm_region_conn = ecore_hw_norm_region_conn(p_hwfn);
+	norm_region_conn = ecore_hw_get_norm_region_conn(p_hwfn);
 	norm_regsize = ROUNDUP(ECORE_PF_DEMS_SIZE * norm_region_conn,
 			       OSAL_PAGE_SIZE);
 	min_addr_reg1 = norm_regsize / 4096;
@@ -3288,15 +4567,32 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		return ECORE_NORESOURCES;
 	}
 
-	/* Calculate number of DPIs */
-	roce_edpm_mode = p_hwfn->pf_params.rdma_pf_params.roce_edpm_mode;
+	dpi_info = &p_hwfn->dpi_info;
+	dpm_info = &p_hwfn->dpm_info;
+
+	dpi_info->dpi_bit_shift_addr = DORQ_REG_PF_DPI_BIT_SHIFT;
+	dpm_info->vf_cfg = false;
+
+	if (p_hwfn->roce_edpm_mode <= ECORE_ROCE_EDPM_MODE_DISABLE) {
+		roce_edpm_mode = p_hwfn->roce_edpm_mode;
+	} else {
+		DP_ERR(p_hwfn->p_dev,
+		       "roce edpm mode was configured to an illegal value of %u. Resetting it to 0-Enable EDPM if BAR size is adequate\n",
+		       p_hwfn->roce_edpm_mode);
+
+		/* Reset this field in order to save this check for a VF */
+		p_hwfn->roce_edpm_mode = 0;
+		roce_edpm_mode = 0;
+	}
+
 	if ((roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE) ||
 	    ((roce_edpm_mode == ECORE_ROCE_EDPM_MODE_FORCE_ON))) {
 		/* Either EDPM is mandatory, or we are attempting to allocate a
 		 * WID per CPU.
 		 */
 		n_cpus = OSAL_NUM_CPUS();
-		rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus);
+		rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, dpi_info,
+					    pwm_regsize, n_cpus);
 	}
 
 	cond = ((rc != ECORE_SUCCESS) &&
@@ -3308,38 +4604,55 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 		 * allocated a WID per CPU.
 		 */
 		n_cpus = 1;
-		rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus);
-
-		/* If we entered this flow due to DCBX then the DPM register is
-		 * already configured.
-		 */
+		rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, dpi_info,
+					    pwm_regsize, n_cpus);
 	}
 
-	DP_INFO(p_hwfn,
-		"doorbell bar: normal_region_size=%d, pwm_region_size=%d",
-		norm_regsize, pwm_regsize);
-	DP_INFO(p_hwfn,
-		" dpi_size=%d, dpi_count=%d, roce_edpm=%s\n",
-		p_hwfn->dpi_size, p_hwfn->dpi_count,
-		(!ecore_edpm_enabled(p_hwfn)) ?
-		"disabled" : "enabled");
+	dpi_info->wid_count = (u16)n_cpus;
 
 	/* Check return codes from above calls */
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(p_hwfn,
-		       "Failed to allocate enough DPIs\n");
+		       "Failed to allocate enough DPIs. Allocated %d but the current minimum is set to %d. You can reduce this minimum down to %d via user configuration min_dpis or by disabling EDPM via user configuration roce_edpm_mode\n",
+		       dpi_info->dpi_count,
+		       p_hwfn->pf_params.rdma_pf_params.min_dpis,
+		       ECORE_MIN_DPIS);
+		DP_ERR(p_hwfn,
+		       "PF doorbell bar: normal_region_size=0x%x, pwm_region_size=0x%x, dpi_size=0x%x, dpi_count=%d, roce_edpm=%s, page_size=%u\n",
+		       norm_regsize, pwm_regsize, dpi_info->dpi_size,
+		       dpi_info->dpi_count,
+		       (!ecore_edpm_enabled(p_hwfn, dpm_info)) ?
+		       "disabled" : "enabled", OSAL_PAGE_SIZE);
+
 		return ECORE_NORESOURCES;
 	}
 
+	DP_INFO(p_hwfn,
+		"PF doorbell bar: db_bar_size=0x%x normal_region_size=0x%x, pwm_region_size=0x%x, dpi_size=0x%x, dpi_count=%d, roce_edpm=%s, page_size=%u\n",
+		db_bar_size, norm_regsize, pwm_regsize, dpi_info->dpi_size,
+		dpi_info->dpi_count, (!ecore_edpm_enabled(p_hwfn, dpm_info)) ?
+		"disabled" : "enabled", OSAL_PAGE_SIZE);
+
 	/* Update hwfn */
-	p_hwfn->dpi_start_offset = norm_regsize;
+	/* dpi_start_offset is later used to calculate the doorbell address */
+	dpi_info->dpi_start_offset = norm_regsize;
 
-	/* Update registers */
-	/* DEMS size is configured log2 of DWORDs, hence the division by 4 */
-	pf_dems_shift = OSAL_LOG2(ECORE_PF_DEMS_SIZE / 4);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_ICID_BIT_SHIFT_NORM, pf_dems_shift);
+	/* Update the DORQ registers.
+	 * DEMS size is configured as log2 of DWORDs, hence the division by 4.
+	 */
+
+	dems_shift = OSAL_LOG2(ECORE_PF_DEMS_SIZE / 4);
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_ICID_BIT_SHIFT_NORM, dems_shift);
 	ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_MIN_ADDR_REG1, min_addr_reg1);
 
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		dems_shift = OSAL_LOG2(ECORE_VF_DEMS_SIZE / 4);
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_ICID_BIT_SHIFT_NORM,
+			 dems_shift);
+	}
+
 	return ECORE_SUCCESS;
 }
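
The "log2 of DWORDs" encoding of the DEMS shift is easy to check in
isolation. A small sketch, assuming a DEMS size of 4 bytes (one DWORD) as a
stand-in for ECORE_PF_DEMS_SIZE / ECORE_VF_DEMS_SIZE:

#include <stdint.h>
#include <stdio.h>

#define PF_DEMS_SIZE 4u   /* assumed: 4 bytes == 1 DWORD */
#define VF_DEMS_SIZE 4u

static uint32_t ilog2(uint32_t v)
{
	uint32_t r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	/* DEMS size is programmed as log2 of DWORDs, hence the /4 */
	printf("PF shift=%u VF shift=%u\n",
	       ilog2(PF_DEMS_SIZE / 4), ilog2(VF_DEMS_SIZE / 4));
	return 0;
}
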
 
@@ -3389,7 +4702,7 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 		 * The ppfid should be set in the vector, except in BB which has
 		 * a bug in the LLH where the ppfid is actually engine based.
 		 */
-		if (OSAL_GET_BIT(ECORE_MF_NEED_DEF_PF, &p_dev->mf_bits)) {
+		if (OSAL_TEST_BIT(ECORE_MF_NEED_DEF_PF, &p_dev->mf_bits)) {
 			u8 pf_id = p_hwfn->rel_pf_id;
 
 			if (!ECORE_IS_BB(p_dev))
@@ -3411,8 +4724,8 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 {
 	u8 rel_pf_id = p_hwfn->rel_pf_id;
 	u32 prs_reg;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u16 ctrl;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	u16 ctrl = 0;
 	int pos;
 
 	if (p_hwfn->mcp_info) {
@@ -3444,13 +4757,12 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	/* Enable classification by MAC if needed */
 	if (hw_mode & (1 << MODE_MF_SI)) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-			   "Configuring TAGMAC_CLS_TYPE\n");
-		STORE_RT_REG(p_hwfn, NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET,
-			     1);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "Configuring TAGMAC_CLS_TYPE\n");
+		STORE_RT_REG(p_hwfn,
+			     NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET, 1);
 	}
 
-	/* Protocl Configuration  - @@@TBD - should we set 0 otherwise? */
+	/* Protocol Configuration - @@@TBD - should we set 0 otherwise? */
 	STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_TCP_RT_OFFSET,
 		     (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) ? 1 : 0);
 	STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_FCOE_RT_OFFSET,
@@ -3480,30 +4792,50 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	/* Pure runtime initializations - directly to the HW  */
 	ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, true, true);
 
-	/* PCI relaxed ordering causes a decrease in the performance on some
-	 * systems. Till a root cause is found, disable this attribute in the
-	 * PCI config space.
+	#if 0 /* @DPDK */
+	/* PCI relaxed ordering is generally beneficial for performance,
+	 * but can hurt performance or lead to instability on some setups.
+	 * If management FW is taking care of it go with that, otherwise
+	 * disable to be on the safe side.
 	 */
-	/* Not in use @DPDK
-	* pos = OSAL_PCI_FIND_CAPABILITY(p_hwfn->p_dev, PCI_CAP_ID_EXP);
-	* if (!pos) {
-	*	DP_NOTICE(p_hwfn, true,
-	*		  "Failed to find the PCIe Cap\n");
-	*	return ECORE_IO;
-	* }
-	* OSAL_PCI_READ_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, &ctrl);
-	* ctrl &= ~PCI_EXP_DEVCTL_RELAX_EN;
-	* OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, ctrl);
-	*/
+	pos = OSAL_PCI_FIND_CAPABILITY(p_hwfn->p_dev, PCI_CAP_ID_EXP);
+	if (!pos) {
+		DP_NOTICE(p_hwfn, true,
+			  "Failed to find the PCI Express Capability structure in the PCI config space\n");
+		return ECORE_IO;
+	}
+
+	OSAL_PCI_READ_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, &ctrl);
+
+	if (p_params->pci_rlx_odr_mode == ECORE_ENABLE_RLX_ODR) {
+		ctrl |= PCI_EXP_DEVCTL_RELAX_EN;
+		OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev,
+					   pos + PCI_EXP_DEVCTL, ctrl);
+	} else if (p_params->pci_rlx_odr_mode == ECORE_DISABLE_RLX_ODR) {
+		ctrl &= ~PCI_EXP_DEVCTL_RELAX_EN;
+		OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev,
+					   pos + PCI_EXP_DEVCTL, ctrl);
+	} else if (ecore_mcp_rlx_odr_supported(p_hwfn)) {
+		DP_INFO(p_hwfn, "PCI relax ordering configured by MFW\n");
+	} else {
+		ctrl &= ~PCI_EXP_DEVCTL_RELAX_EN;
+		OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev,
+					   pos + PCI_EXP_DEVCTL, ctrl);
+	}
+	#endif /* @DPDK */
 
 	rc = ecore_hw_init_pf_doorbell_bar(p_hwfn, p_ptt);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
+	rc = ecore_iov_init_vf_doorbell_bar(p_hwfn, p_ptt);
+	if (rc != ECORE_SUCCESS && ECORE_IS_VF_RDMA(p_hwfn))
+		p_hwfn->pf_iov_info->rdma_enable = false;
+
 	/* Use the leading hwfn since in CMT only NIG #0 is operational */
 	if (IS_LEAD_HWFN(p_hwfn)) {
 		rc = ecore_llh_hw_init_pf(p_hwfn, p_ptt,
-					p_params->avoid_eng_affin);
+					  p_params->avoid_eng_affin);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 	}
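
The relaxed-ordering block above (compiled out for DPDK) ultimately just
flips one bit in the PCIe Device Control register. A standalone sketch of
that bit manipulation, using the standard PCIe register constants and an
illustrative read-back value:

#include <stdint.h>
#include <stdio.h>

#define PCI_EXP_DEVCTL          8       /* Device Control offset in PCIe cap */
#define PCI_EXP_DEVCTL_RELAX_EN 0x0010  /* Enable Relaxed Ordering bit */

enum rlx_odr_mode { RLX_ODR_ENABLE, RLX_ODR_DISABLE };

/* Toggle the Relaxed Ordering enable bit in a DEVCTL value */
static uint16_t set_relaxed_ordering(uint16_t devctl, enum rlx_odr_mode mode)
{
	if (mode == RLX_ODR_ENABLE)
		devctl |= PCI_EXP_DEVCTL_RELAX_EN;
	else
		devctl &= ~PCI_EXP_DEVCTL_RELAX_EN;
	return devctl;
}

int main(void)
{
	uint16_t ctrl = 0x2810;  /* illustrative DEVCTL read-back */

	printf("enable:  0x%04x\n", set_relaxed_ordering(ctrl, RLX_ODR_ENABLE));
	printf("disable: 0x%04x\n", set_relaxed_ordering(ctrl, RLX_ODR_DISABLE));
	return 0;
}
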
@@ -3518,44 +4850,69 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		rc = ecore_sp_pf_start(p_hwfn, p_ptt, p_params->p_tunn,
 				       p_params->allow_npar_tx_switch);
 		if (rc) {
-			DP_NOTICE(p_hwfn, true,
-				  "Function start ramrod failed\n");
+			DP_NOTICE(p_hwfn, true, "Function start ramrod failed\n");
 			return rc;
 		}
+
 		prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_TAG1);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-				"PRS_REG_SEARCH_TAG1: %x\n", prs_reg);
+			   "PRS_REG_SEARCH_TAG1: %x\n", prs_reg);
 
 		if (p_hwfn->hw_info.personality == ECORE_PCI_FCOE) {
-			ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TAG1,
-					(1 << 2));
+			ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TAG1, (1 << 2));
 			ecore_wr(p_hwfn, p_ptt,
 				 PRS_REG_PKT_LEN_STAT_TAGS_NOT_COUNTED_FIRST,
 				 0x100);
 		}
+
 		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-				"PRS_REG_SEARCH registers after start PFn\n");
+			   "PRS_REG_SEARCH registers after start PFn\n");
 		prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_TCP);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-				"PRS_REG_SEARCH_TCP: %x\n", prs_reg);
+			   "PRS_REG_SEARCH_TCP: %x\n", prs_reg);
 		prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_UDP);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-				"PRS_REG_SEARCH_UDP: %x\n", prs_reg);
+			   "PRS_REG_SEARCH_UDP: %x\n", prs_reg);
 		prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_FCOE);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-				"PRS_REG_SEARCH_FCOE: %x\n", prs_reg);
+			   "PRS_REG_SEARCH_FCOE: %x\n", prs_reg);
 		prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-				"PRS_REG_SEARCH_ROCE: %x\n", prs_reg);
+			   "PRS_REG_SEARCH_ROCE: %x\n", prs_reg);
 		prs_reg = ecore_rd(p_hwfn, p_ptt,
-				PRS_REG_SEARCH_TCP_FIRST_FRAG);
+				   PRS_REG_SEARCH_TCP_FIRST_FRAG);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-				"PRS_REG_SEARCH_TCP_FIRST_FRAG: %x\n",
-				prs_reg);
+			   "PRS_REG_SEARCH_TCP_FIRST_FRAG: %x\n", prs_reg);
 		prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_TAG1);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
-				"PRS_REG_SEARCH_TAG1: %x\n", prs_reg);
+			   "PRS_REG_SEARCH_TAG1: %x\n", prs_reg);
+	}
+
+#ifndef ASIC_ONLY
+	/* Configure the first VF number for the current PF and the following PF */
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) &&
+	    ECORE_IS_E5(p_hwfn->p_dev) &&
+	    IS_PF_SRIOV(p_hwfn)) {
+		struct ecore_hw_sriov_info *p_iov_info = p_hwfn->p_dev->p_iov_info;
+
+		ecore_wr(p_hwfn, p_ptt, PGLCS_REG_FIRST_VF_K2_E5,
+			 p_iov_info->first_vf_in_pf);
+
+		/* If this is not the last PF, pretend to the next PF and set first VF register */
+		if ((rel_pf_id + 1) < MAX_NUM_PFS_E5) {
+			/* pretend to the following PF */
+			ecore_fid_pretend(p_hwfn, p_ptt,
+					  FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID,
+						      (rel_pf_id + 1)));
+			ecore_wr(p_hwfn, p_ptt, PGLCS_REG_FIRST_VF_K2_E5,
+				 p_iov_info->first_vf_in_pf + p_iov_info->total_vfs);
+			/* pretend to the original PF */
+			ecore_fid_pretend(p_hwfn, p_ptt,
+					  FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, rel_pf_id));
+		}
 	}
+#endif
+
 	return ECORE_SUCCESS;
 }
 
@@ -3582,14 +4939,14 @@ enum _ecore_status_t ecore_pglueb_set_pfid_enable(struct ecore_hwfn *p_hwfn,
 	if (val != set_val) {
 		DP_NOTICE(p_hwfn, true,
 			  "PFID_ENABLE_MASTER wasn't changed after a second\n");
-		return ECORE_UNKNOWN_ERROR;
+		return ECORE_AGAIN;
 	}
 
 	return ECORE_SUCCESS;
 }
 
 static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_main_ptt)
+			struct ecore_ptt *p_main_ptt)
 {
 	/* Read shadow of current MFW mailbox */
 	ecore_mcp_read_mb(p_hwfn, p_main_ptt);
@@ -3598,13 +4955,6 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
 		    p_hwfn->mcp_info->mfw_mb_length);
 }
 
-static void ecore_pglueb_clear_err(struct ecore_hwfn *p_hwfn,
-				   struct ecore_ptt *p_ptt)
-{
-	ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_WAS_ERROR_PF_31_0_CLR,
-		 1 << p_hwfn->abs_pf_id);
-}
-
 static enum _ecore_status_t
 ecore_fill_load_req_params(struct ecore_hwfn *p_hwfn,
 			   struct ecore_load_req_params *p_load_req,
@@ -3666,8 +5016,8 @@ ecore_fill_load_req_params(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
-				    struct ecore_hw_init_params *p_params)
+static enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
+					   struct ecore_hw_init_params *p_params)
 {
 	if (p_params->p_tunn) {
 		ecore_vf_set_vf_start_tunn_update_param(p_params->p_tunn);
@@ -3679,16 +5029,23 @@ enum _ecore_status_t ecore_vf_start(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+static void ecore_pglueb_clear_err(struct ecore_hwfn *p_hwfn,
+				   struct ecore_ptt *p_ptt)
+{
+	ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_WAS_ERROR_PF_31_0_CLR,
+		 1 << p_hwfn->abs_pf_id);
+}
+
 enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 				   struct ecore_hw_init_params *p_params)
 {
 	struct ecore_load_req_params load_req_params;
 	u32 load_code, resp, param, drv_mb_param;
+	enum _ecore_status_t rc = ECORE_SUCCESS, cancel_load_rc;
 	bool b_default_mtu = true;
 	struct ecore_hwfn *p_hwfn;
 	const u32 *fw_overlays;
 	u32 fw_overlays_len;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u16 ether_type;
 	int i;
 
@@ -3718,19 +5075,22 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			continue;
 		}
 
+		/* Some flows may keep variable set */
+		p_hwfn->mcp_info->mcp_handling_status = 0;
+
 		rc = ecore_calc_hw_mode(p_hwfn);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
-		if (IS_PF(p_dev) && (OSAL_GET_BIT(ECORE_MF_8021Q_TAGGING,
+		if (IS_PF(p_dev) && (OSAL_TEST_BIT(ECORE_MF_8021Q_TAGGING,
 						   &p_dev->mf_bits) ||
-				     OSAL_GET_BIT(ECORE_MF_8021AD_TAGGING,
+				     OSAL_TEST_BIT(ECORE_MF_8021AD_TAGGING,
 						   &p_dev->mf_bits))) {
-			if (OSAL_GET_BIT(ECORE_MF_8021Q_TAGGING,
+			if (OSAL_TEST_BIT(ECORE_MF_8021Q_TAGGING,
 					  &p_dev->mf_bits))
-				ether_type = ETHER_TYPE_VLAN;
+				ether_type = ECORE_ETH_P_8021Q;
 			else
-				ether_type = ETHER_TYPE_QINQ;
+				ether_type = ECORE_ETH_P_8021AD;
 			STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET,
 				     ether_type);
 			STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET,
@@ -3761,10 +5121,15 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			   "Load request was sent. Load code: 0x%x\n",
 			   load_code);
 
+		/* Only relevant for recovery:
+		 * Clear the indication after the MFW responds to the LOAD_REQ.
+		 */
+		p_dev->recov_in_prog = false;
+
 		ecore_mcp_set_capabilities(p_hwfn, p_hwfn->p_main_ptt);
 
 		/* CQ75580:
-		 * When coming back from hiberbate state, the registers from
+		 * When coming back from hibernate state, the registers from
 		 * which shadow is read initially are not initialized. It turns
 		 * out that these registers get initialized during the call to
 		 * ecore_mcp_load_req request. So we need to reread them here
@@ -3775,18 +5140,9 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		 */
 		ecore_reset_mb_shadow(p_hwfn, p_hwfn->p_main_ptt);
 
-		/* Only relevant for recovery:
-		 * Clear the indication after the LOAD_REQ command is responded
-		 * by the MFW.
-		 */
-		p_dev->recov_in_prog = false;
-
-		p_hwfn->first_on_engine = (load_code ==
-					   FW_MSG_CODE_DRV_LOAD_ENGINE);
-
 		if (!qm_lock_ref_cnt) {
 #ifdef CONFIG_ECORE_LOCK_ALLOC
-			rc = OSAL_SPIN_LOCK_ALLOC(p_hwfn, &qm_lock);
+			rc = OSAL_SPIN_LOCK_ALLOC(p_hwfn, &qm_lock, "qm_lock");
 			if (rc) {
 				DP_ERR(p_hwfn, "qm_lock allocation failed\n");
 				goto qm_lock_fail;
@@ -3804,8 +5160,16 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			rc = ecore_final_cleanup(p_hwfn, p_hwfn->p_main_ptt,
 						 p_hwfn->rel_pf_id, false);
 			if (rc != ECORE_SUCCESS) {
-				ecore_hw_err_notify(p_hwfn,
-						    ECORE_HW_ERR_RAMROD_FAIL);
+				u8 str[ECORE_HW_ERR_MAX_STR_SIZE];
+
+				OSAL_SNPRINTF((char *)str,
+					      ECORE_HW_ERR_MAX_STR_SIZE,
+					      "Final cleanup failed\n");
+				DP_NOTICE(p_hwfn, false, "%s", str);
+				ecore_hw_err_notify(p_hwfn, p_hwfn->p_main_ptt,
+						    ECORE_HW_ERR_RAMROD_FAIL,
+						    str,
+						    OSAL_STRLEN((char *)str) + 1);
 				goto load_err;
 			}
 		}
@@ -3839,6 +5203,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		if (!p_hwfn->fw_overlay_mem) {
 			DP_NOTICE(p_hwfn, false,
 				  "Failed to allocate fw overlay memory\n");
+			rc = ECORE_NOMEM;
 			goto load_err;
 		}
 
@@ -3848,13 +5213,13 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 						  p_hwfn->hw_info.hw_mode);
 			if (rc != ECORE_SUCCESS)
 				break;
-			/* Fall into */
+			/* Fall through */
 		case FW_MSG_CODE_DRV_LOAD_PORT:
 			rc = ecore_hw_init_port(p_hwfn, p_hwfn->p_main_ptt,
 						p_hwfn->hw_info.hw_mode);
 			if (rc != ECORE_SUCCESS)
 				break;
-			/* Fall into */
+			/* Fall through */
 		case FW_MSG_CODE_DRV_LOAD_FUNCTION:
 			rc = ecore_hw_init_pf(p_hwfn, p_hwfn->p_main_ptt,
 					      p_hwfn->hw_info.hw_mode,
@@ -3876,8 +5241,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 		rc = ecore_mcp_load_done(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, false,
-				  "Sending load done failed, rc = %d\n", rc);
+			DP_NOTICE(p_hwfn, false, "Sending load done failed, rc = %d\n", rc);
 			if (rc == ECORE_NOMEM) {
 				DP_NOTICE(p_hwfn, false,
 					  "Sending load done was failed due to memory allocation failure\n");
@@ -3889,10 +5253,11 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		/* send DCBX attention request command */
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
 			   "sending phony dcbx set command to trigger DCBx attention handling\n");
+		drv_mb_param = 0;
+		SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_DCBX_NOTIFY, 1);
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_SET_DCBX,
-				   1 << DRV_MB_PARAM_DCBX_NOTIFY_OFFSET, &resp,
-				   &param);
+				   drv_mb_param, &resp, &param);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, false,
 				  "Failed to send DCBX attention request\n");
@@ -3907,24 +5272,11 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		p_hwfn = ECORE_LEADING_HWFN(p_dev);
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
 			   "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n");
+		drv_mb_param = 0;
+		SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_DUMMY_OEM_UPDATES, 1);
 		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
 				   DRV_MSG_CODE_GET_OEM_UPDATES,
-				   1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET,
-				   &resp, &param);
-		if (rc != ECORE_SUCCESS)
-			DP_NOTICE(p_hwfn, false,
-				  "Failed to send GET_OEM_UPDATES attention request\n");
-	}
-
-	if (IS_PF(p_dev)) {
-		/* Get pre-negotiated values for stag, bandwidth etc. */
-		p_hwfn = ECORE_LEADING_HWFN(p_dev);
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
-			   "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n");
-		rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
-				   DRV_MSG_CODE_GET_OEM_UPDATES,
-				   1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET,
-				   &resp, &param);
+				   drv_mb_param, &resp, &param);
 		if (rc != ECORE_SUCCESS)
 			DP_NOTICE(p_hwfn, false,
 				  "Failed to send GET_OEM_UPDATES attention request\n");
@@ -3948,7 +5300,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 
 		rc = ecore_mcp_ov_update_driver_state(p_hwfn,
 						      p_hwfn->p_main_ptt,
-						ECORE_OV_DRIVER_STATE_DISABLED);
+						      ECORE_OV_DRIVER_STATE_DISABLED);
 		if (rc != ECORE_SUCCESS)
 			DP_INFO(p_hwfn, "Failed to update driver state\n");
 
@@ -3967,11 +5319,16 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 		OSAL_SPIN_LOCK_DEALLOC(&qm_lock);
 qm_lock_fail:
 #endif
-	/* The MFW load lock should be released regardless of success or failure
-	 * of initialization.
-	 * TODO: replace this with an attempt to send cancel_load.
+	/* The MFW load lock should also be released when initialization fails.
+	 * If supported, use a cancel_load request to update the MFW with the
+	 * load failure.
 	 */
-	ecore_mcp_load_done(p_hwfn, p_hwfn->p_main_ptt);
+	cancel_load_rc = ecore_mcp_cancel_load_req(p_hwfn, p_hwfn->p_main_ptt);
+	if (cancel_load_rc == ECORE_NOTIMPL) {
+		DP_INFO(p_hwfn,
+			"Send a load done request instead of cancel load\n");
+		ecore_mcp_load_done(p_hwfn, p_hwfn->p_main_ptt);
+	}
 	return rc;
 }
 
@@ -3985,11 +5342,15 @@ static void ecore_hw_timers_stop(struct ecore_dev *p_dev,
 	/* close timers */
 	ecore_wr(p_hwfn, p_ptt, TM_REG_PF_ENABLE_CONN, 0x0);
 	ecore_wr(p_hwfn, p_ptt, TM_REG_PF_ENABLE_TASK, 0x0);
-	for (i = 0; i < ECORE_HW_STOP_RETRY_LIMIT && !p_dev->recov_in_prog;
-									i++) {
+
+	if (p_dev->recov_in_prog)
+		return;
+
+	for (i = 0; i < ECORE_HW_STOP_RETRY_LIMIT; i++) {
 		if ((!ecore_rd(p_hwfn, p_ptt,
 			       TM_REG_PF_SCAN_ACTIVE_CONN)) &&
-		    (!ecore_rd(p_hwfn, p_ptt, TM_REG_PF_SCAN_ACTIVE_TASK)))
+		    (!ecore_rd(p_hwfn, p_ptt,
+			       TM_REG_PF_SCAN_ACTIVE_TASK)))
 			break;
 
 		/* Dependent on number of connection/tasks, possibly
@@ -4052,10 +5413,10 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 			ecore_vf_pf_int_cleanup(p_hwfn);
 			rc = ecore_vf_pf_reset(p_hwfn);
 			if (rc != ECORE_SUCCESS) {
-				DP_NOTICE(p_hwfn, true,
+				DP_NOTICE(p_hwfn, false,
 					  "ecore_vf_pf_reset failed. rc = %d.\n",
 					  rc);
-				rc2 = ECORE_UNKNOWN_ERROR;
+				rc2 = rc;
 			}
 			continue;
 		}
@@ -4130,12 +5491,6 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		/* Need to wait 1ms to guarantee SBs are cleared */
 		OSAL_MSLEEP(1);
 
-		if (IS_LEAD_HWFN(p_hwfn) &&
-		    OSAL_GET_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) &&
-		    !ECORE_IS_FCOE_PERSONALITY(p_hwfn))
-			ecore_llh_remove_mac_filter(p_dev, 0,
-						   p_hwfn->hw_info.hw_mac_addr);
-
 		if (!p_dev->recov_in_prog) {
 			ecore_verify_reg_val(p_hwfn, p_ptt,
 					     QM_REG_USG_CNT_PF_TX, 0);
@@ -4148,6 +5503,12 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DB_ENABLE, 0);
 		ecore_wr(p_hwfn, p_ptt, QM_REG_PF_EN, 0);
 
+		if (IS_LEAD_HWFN(p_hwfn) &&
+		    OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) &&
+		    !ECORE_IS_FCOE_PERSONALITY(p_hwfn))
+			ecore_llh_remove_mac_filter(p_dev, 0,
+						    p_hwfn->hw_info.hw_mac_addr);
+
 		--qm_lock_ref_cnt;
 #ifdef CONFIG_ECORE_LOCK_ALLOC
 		if (!qm_lock_ref_cnt)
@@ -4179,8 +5540,7 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		  * only after all transactions have stopped for all active
 		  * hw-functions.
 		  */
-		rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_hwfn->p_main_ptt,
-						  false);
+		rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_ptt, false);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, true,
 				  "ecore_pglueb_set_pfid_enable() failed. rc = %d.\n",
@@ -4208,18 +5568,11 @@ enum _ecore_status_t ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
 		if (!p_ptt)
 			return ECORE_AGAIN;
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
-			   "Shutting down the fastpath\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Shutting down the fastpath\n");
 
 		ecore_wr(p_hwfn, p_ptt,
 			 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x1);
 
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TCP, 0x0);
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_UDP, 0x0);
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_FCOE, 0x0);
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE, 0x0);
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_OPENFLOW, 0x0);
-
 		/* @@@TBD - clean transmission queues (5.b) */
 		/* @@@TBD - clean BTB (5.c) */
 
@@ -4245,20 +5598,172 @@ enum _ecore_status_t ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
 	if (!p_ptt)
 		return ECORE_AGAIN;
 
-	/* If roce info is allocated it means roce is initialized and should
-	 * be enabled in searcher.
-	 */
-	if (p_hwfn->p_rdma_info) {
-		if (p_hwfn->b_rdma_enabled_in_prs)
-			ecore_wr(p_hwfn, p_ptt,
-				 p_hwfn->rdma_prs_search_reg, 0x1);
-		ecore_wr(p_hwfn, p_ptt, TM_REG_PF_ENABLE_CONN, 0x1);
+	/* Re-open incoming traffic */
+	ecore_wr(p_hwfn, p_ptt,
+		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return ECORE_SUCCESS;
+}
+
+static void ecore_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			 u32 hw_addr, u32 val)
+{
+	if (ECORE_IS_BB(p_hwfn->p_dev))
+		ecore_wr(p_hwfn, p_ptt, hw_addr, val);
+	else
+		ecore_mcp_wol_wr(p_hwfn, p_ptt, hw_addr, val);
+}
+
+enum _ecore_status_t ecore_set_nwuf_reg(struct ecore_dev *p_dev, u32 reg_idx,
+					u32 pattern_size, u32 crc)
+{
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_ptt *p_ptt;
+	u32 reg_len = 0;
+	u32 reg_crc = 0;
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	/* Get length and CRC register offsets */
+	switch (reg_idx) {
+	case 0:
+		reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_0_LEN_BB :
+				WOL_REG_ACPI_PAT_0_LEN_K2_E5;
+		reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_0_CRC_BB :
+				WOL_REG_ACPI_PAT_0_CRC_K2_E5;
+		break;
+	case 1:
+		reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_1_LEN_BB :
+				WOL_REG_ACPI_PAT_1_LEN_K2_E5;
+		reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_1_CRC_BB :
+				WOL_REG_ACPI_PAT_1_CRC_K2_E5;
+		break;
+	case 2:
+		reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_2_LEN_BB :
+				WOL_REG_ACPI_PAT_2_LEN_K2_E5;
+		reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_2_CRC_BB :
+				WOL_REG_ACPI_PAT_2_CRC_K2_E5;
+		break;
+	case 3:
+		reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_3_LEN_BB :
+				WOL_REG_ACPI_PAT_3_LEN_K2_E5;
+		reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_3_CRC_BB :
+				WOL_REG_ACPI_PAT_3_CRC_K2_E5;
+		break;
+	case 4:
+		reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_4_LEN_BB :
+				WOL_REG_ACPI_PAT_4_LEN_K2_E5;
+		reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_4_CRC_BB :
+				WOL_REG_ACPI_PAT_4_CRC_K2_E5;
+		break;
+	case 5:
+		reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_5_LEN_BB :
+				WOL_REG_ACPI_PAT_5_LEN_K2_E5;
+		reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_5_CRC_BB :
+				WOL_REG_ACPI_PAT_5_CRC_K2_E5;
+		break;
+	case 6:
+		reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_6_LEN_BB :
+				WOL_REG_ACPI_PAT_6_LEN_K2_E5;
+		reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_6_CRC_BB :
+				WOL_REG_ACPI_PAT_6_CRC_K2_E5;
+		break;
+	case 7:
+		reg_len = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_7_LEN_BB :
+				WOL_REG_ACPI_PAT_7_LEN_K2_E5;
+		reg_crc = ECORE_IS_BB(p_dev) ? NIG_REG_ACPI_PAT_7_CRC_BB :
+				WOL_REG_ACPI_PAT_7_CRC_K2_E5;
+		break;
+	default:
+		rc = ECORE_UNKNOWN_ERROR;
+		goto out;
+	}
+
+	/* Align the pattern size up to a multiple of 4 bytes */
+	pattern_size = ROUNDUP(pattern_size, 4);
+
+	/* Write pattern length */
+	ecore_wol_wr(p_hwfn, p_ptt, reg_len, pattern_size);
+
+	/* Write CRC value */
+	ecore_wol_wr(p_hwfn, p_ptt, reg_crc, crc);
+
+	DP_INFO(p_dev,
+		"idx[%d] reg_crc[0x%x=0x%08x] reg_len[0x%x=0x%x]\n",
+		reg_idx, reg_crc, crc, reg_len, pattern_size);
+out:
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
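
The eight near-identical cases in ecore_set_nwuf_reg() could also be
written table-driven. A hedged sketch with made-up register addresses (the
real NIG_*/WOL_* macros are not guaranteed to form a regular pattern, which
is presumably why the driver spells the switch out):

#include <stdint.h>
#include <stdio.h>

/* Illustrative register addresses; the real NIG_/WOL_ macros differ */
struct acpi_pat_regs {
	uint32_t len_bb, crc_bb;        /* BB chip variants */
	uint32_t len_k2_e5, crc_k2_e5;  /* K2/E5 chip variants */
};

static const struct acpi_pat_regs acpi_pat[8] = {
	{ 0x5010, 0x5014, 0x6010, 0x6014 },  /* pattern 0 (made-up addrs) */
	{ 0x5018, 0x501c, 0x6018, 0x601c },  /* pattern 1 */
	/* ... patterns 2..7 would follow the same shape ... */
};

/* Pick LEN/CRC register offsets for a pattern index */
static int lookup_pat_regs(int is_bb, uint32_t idx,
			   uint32_t *reg_len, uint32_t *reg_crc)
{
	if (idx >= 8)
		return -1;  /* matches the switch's default: unknown error */
	*reg_len = is_bb ? acpi_pat[idx].len_bb : acpi_pat[idx].len_k2_e5;
	*reg_crc = is_bb ? acpi_pat[idx].crc_bb : acpi_pat[idx].crc_k2_e5;
	return 0;
}

int main(void)
{
	uint32_t len, crc;

	if (!lookup_pat_regs(1 /* BB */, 1, &len, &crc))
		printf("len=0x%x crc=0x%x\n", len, crc);
	return 0;
}
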
+void ecore_wol_buffer_clear(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt)
+{
+	const u32 wake_buffer_clear_offset =
+		ECORE_IS_BB(p_hwfn->p_dev) ?
+		NIG_REG_WAKE_BUFFER_CLEAR_BB : WOL_REG_WAKE_BUFFER_CLEAR_K2_E5;
+
+	DP_INFO(p_hwfn->p_dev,
+		"reset REG_WAKE_BUFFER_CLEAR offset=0x%08x\n",
+		wake_buffer_clear_offset);
+
+	ecore_wol_wr(p_hwfn, p_ptt, wake_buffer_clear_offset, 1);
+	ecore_wol_wr(p_hwfn, p_ptt, wake_buffer_clear_offset, 0);
+}
+
+enum _ecore_status_t ecore_get_wake_info(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_wake_info *wake_info)
+{
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u32 *buf = OSAL_NULL;
+	u32 i    = 0;
+	const u32 reg_wake_buffer_offset =
+		ECORE_IS_BB(p_dev) ? NIG_REG_WAKE_BUFFER_BB :
+			WOL_REG_WAKE_BUFFER_K2_E5;
+
+	wake_info->wk_info    = ecore_rd(p_hwfn, p_ptt,
+				ECORE_IS_BB(p_dev) ? NIG_REG_WAKE_INFO_BB :
+				WOL_REG_WAKE_INFO_K2_E5);
+	wake_info->wk_details = ecore_rd(p_hwfn, p_ptt,
+				ECORE_IS_BB(p_dev) ? NIG_REG_WAKE_DETAILS_BB :
+				WOL_REG_WAKE_DETAILS_K2_E5);
+	wake_info->wk_pkt_len = ecore_rd(p_hwfn, p_ptt,
+				ECORE_IS_BB(p_dev) ? NIG_REG_WAKE_PKT_LEN_BB :
+				WOL_REG_WAKE_PKT_LEN_K2_E5);
+
+	DP_INFO(p_dev,
+		"REG_WAKE_INFO=0x%08x REG_WAKE_DETAILS=0x%08x REG_WAKE_PKT_LEN=0x%08x\n",
+		wake_info->wk_info, wake_info->wk_details,
+		wake_info->wk_pkt_len);
+
+	buf = (u32 *)wake_info->wk_buffer;
+
+	for (i = 0; i < (wake_info->wk_pkt_len / sizeof(u32)); i++) {
+		if ((i * sizeof(u32)) >= sizeof(wake_info->wk_buffer)) {
+			DP_INFO(p_dev,
+				"wake packet exceeds wk_buffer; truncating at word %d\n",
+				i);
+			break;
+		}
+		buf[i] = ecore_rd(p_hwfn, p_ptt,
+				  reg_wake_buffer_offset + (i * sizeof(u32)));
+		DP_INFO(p_dev, "wk_buffer[%u]: 0x%08x\n",
+			i, buf[i]);
 	}
 
-	/* Re-open incoming traffic */
-	ecore_wr(p_hwfn, p_ptt,
-		 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
-	ecore_ptt_release(p_hwfn, p_ptt);
+	ecore_wol_buffer_clear(p_hwfn, p_ptt);
 
 	return ECORE_SUCCESS;
 }
@@ -4268,13 +5773,14 @@ static void ecore_hw_hwfn_free(struct ecore_hwfn *p_hwfn)
 {
 	ecore_ptt_pool_free(p_hwfn);
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->hw_info.p_igu_info);
+	p_hwfn->hw_info.p_igu_info = OSAL_NULL;
 }
 
 /* Setup bar access */
 static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn)
 {
 	/* clear indirect access */
-	if (ECORE_IS_AH(p_hwfn->p_dev)) {
+	if (ECORE_IS_AH(p_hwfn->p_dev) || ECORE_IS_E5(p_hwfn->p_dev)) {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
 			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
@@ -4306,7 +5812,7 @@ static void get_function_id(struct ecore_hwfn *p_hwfn)
 {
 	/* ME Register */
 	p_hwfn->hw_info.opaque_fid = (u16)REG_RD(p_hwfn,
-						  PXP_PF_ME_OPAQUE_ADDR);
+						 PXP_PF_ME_OPAQUE_ADDR);
 
 	p_hwfn->hw_info.concrete_fid = REG_RD(p_hwfn, PXP_PF_ME_CONCRETE_ADDR);
 
@@ -4322,22 +5828,65 @@ static void get_function_id(struct ecore_hwfn *p_hwfn)
 		   p_hwfn->hw_info.concrete_fid, p_hwfn->hw_info.opaque_fid);
 }
 
-static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
+void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 {
 	u32 *feat_num = p_hwfn->hw_info.feat_num;
 	struct ecore_sb_cnt_info sb_cnt;
-	u32 non_l2_sbs = 0;
+	u32 non_l2_sbs = 0, non_l2_iov_sbs = 0;
 
 	OSAL_MEM_ZERO(&sb_cnt, sizeof(sb_cnt));
 	ecore_int_get_num_sbs(p_hwfn, &sb_cnt);
 
+#ifdef CONFIG_ECORE_ROCE
+	/* Each RoCE CNQ requires one status block and one CNQ. Divide the
+	 * status blocks equally between L2 and RoCE, taking into account
+	 * how many L2 queues / CNQs we have.
+	 */
+	if (ECORE_IS_RDMA_PERSONALITY(p_hwfn)) {
+#ifndef __EXTRACT__LINUX__THROW__
+		u32 max_cnqs;
+#endif
+
+		feat_num[ECORE_RDMA_CNQ] =
+			OSAL_MIN_T(u32,
+				   sb_cnt.cnt / 2,
+				   RESC_NUM(p_hwfn, ECORE_RDMA_CNQ_RAM));
+
+		feat_num[ECORE_VF_RDMA_CNQ] =
+			OSAL_MIN_T(u32,
+				   sb_cnt.iov_cnt / 2,
+				   RESC_NUM(p_hwfn, ECORE_VF_RDMA_CNQ_RAM));
+
+#ifndef __EXTRACT__LINUX__THROW__
+		/* Upper layer might require less */
+		max_cnqs = (u32)p_hwfn->pf_params.rdma_pf_params.max_cnqs;
+		if (max_cnqs) {
+			if (max_cnqs == ECORE_RDMA_PF_PARAMS_CNQS_NONE)
+				max_cnqs = 0;
+			feat_num[ECORE_RDMA_CNQ] =
+				OSAL_MIN_T(u32,
+					   feat_num[ECORE_RDMA_CNQ],
+					   max_cnqs);
+			feat_num[ECORE_VF_RDMA_CNQ] =
+				OSAL_MIN_T(u32,
+					   feat_num[ECORE_VF_RDMA_CNQ],
+					   max_cnqs);
+		}
+#endif
+
+		non_l2_sbs = feat_num[ECORE_RDMA_CNQ];
+		non_l2_iov_sbs = feat_num[ECORE_VF_RDMA_CNQ];
+	}
+#endif
+
 	/* L2 Queues require each: 1 status block. 1 L2 queue */
 	if (ECORE_IS_L2_PERSONALITY(p_hwfn)) {
 		/* Start by allocating VF queues, then PF's */
 		feat_num[ECORE_VF_L2_QUE] =
 			OSAL_MIN_T(u32,
-				   RESC_NUM(p_hwfn, ECORE_L2_QUEUE),
-				   sb_cnt.iov_cnt);
+				   sb_cnt.iov_cnt - non_l2_iov_sbs,
+				   RESC_NUM(p_hwfn, ECORE_L2_QUEUE));
+
 		feat_num[ECORE_PF_L2_QUE] =
 			OSAL_MIN_T(u32,
 				   sb_cnt.cnt - non_l2_sbs,
@@ -4345,36 +5894,12 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
 				   FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE));
 	}
 
-	if (ECORE_IS_FCOE_PERSONALITY(p_hwfn) ||
-	    ECORE_IS_ISCSI_PERSONALITY(p_hwfn)) {
-		u32 *p_storage_feat = ECORE_IS_FCOE_PERSONALITY(p_hwfn) ?
-				      &feat_num[ECORE_FCOE_CQ] :
-				      &feat_num[ECORE_ISCSI_CQ];
-		u32 limit = sb_cnt.cnt;
-
-		/* The number of queues should not exceed the number of FP SBs.
-		 * In storage target, the queues are divided into pairs of a CQ
-		 * and a CmdQ, and each pair uses a single SB. The limit in
-		 * this case should allow a max ratio of 2:1 instead of 1:1.
-		 */
-		if (p_hwfn->p_dev->b_is_target)
-			limit *= 2;
-		*p_storage_feat = OSAL_MIN_T(u32, limit,
-					     RESC_NUM(p_hwfn, ECORE_CMDQS_CQS));
-
-		/* @DPDK */
-		/* The size of "cq_cmdq_sb_num_arr" in the fcoe/iscsi init
-		 * ramrod is limited to "NUM_OF_GLOBAL_QUEUES / 2".
-		 */
-		*p_storage_feat = OSAL_MIN_T(u32, *p_storage_feat,
-					     (NUM_OF_GLOBAL_QUEUES / 2));
-	}
-
 	DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
-		   "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n",
+		   "#PF_L2_QUEUE=%d VF_L2_QUEUES=%d #PF_ROCE_CNQ=%d #VF_ROCE_CNQ=%d #FCOE_CQ=%d #ISCSI_CQ=%d #SB=%d\n",
 		   (int)FEAT_NUM(p_hwfn, ECORE_PF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_VF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, ECORE_RDMA_CNQ),
+		   (int)FEAT_NUM(p_hwfn, ECORE_VF_RDMA_CNQ),
 		   (int)FEAT_NUM(p_hwfn, ECORE_FCOE_CQ),
 		   (int)FEAT_NUM(p_hwfn, ECORE_ISCSI_CQ),
 		   (int)sb_cnt.cnt);
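
A worked example of the SB split performed in ecore_hw_set_feat(), with
illustrative counts in place of the real resource values:

#include <stdint.h>
#include <stdio.h>

#define MIN_U32(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	/* Illustrative counts, not real resource values */
	uint32_t sb_cnt = 64, iov_sb_cnt = 32;
	uint32_t rdma_cnq_ram = 16, vf_rdma_cnq_ram = 8;
	uint32_t l2_queues = 128;

	/* RDMA first: half the SBs, capped by the CNQ RAM resource */
	uint32_t pf_cnq = MIN_U32(sb_cnt / 2, rdma_cnq_ram);          /* 16 */
	uint32_t vf_cnq = MIN_U32(iov_sb_cnt / 2, vf_rdma_cnq_ram);   /* 8 */

	/* L2 gets what is left after the non-L2 (CNQ) SBs are taken */
	uint32_t vf_l2 = MIN_U32(iov_sb_cnt - vf_cnq, l2_queues);     /* 24 */
	uint32_t pf_l2 = MIN_U32(sb_cnt - pf_cnq, l2_queues - vf_l2); /* 48 */

	printf("pf_cnq=%u vf_cnq=%u vf_l2=%u pf_l2=%u\n",
	       pf_cnq, vf_cnq, vf_l2, pf_l2);
	return 0;
}
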
@@ -4397,18 +5922,26 @@ const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
 		return "MAC";
 	case ECORE_VLAN:
 		return "VLAN";
+	case ECORE_VF_RDMA_CNQ_RAM:
+		return "VF_RDMA_CNQ_RAM";
 	case ECORE_RDMA_CNQ_RAM:
 		return "RDMA_CNQ_RAM";
 	case ECORE_ILT:
 		return "ILT";
-	case ECORE_LL2_QUEUE:
-		return "LL2_QUEUE";
+	case ECORE_LL2_RAM_QUEUE:
+		return "LL2_RAM_QUEUE";
+	case ECORE_LL2_CTX_QUEUE:
+		return "LL2_CTX_QUEUE";
 	case ECORE_CMDQS_CQS:
 		return "CMDQS_CQS";
 	case ECORE_RDMA_STATS_QUEUE:
 		return "RDMA_STATS_QUEUE";
 	case ECORE_BDQ:
 		return "BDQ";
+	case ECORE_VF_MAC_ADDR:
+		return "VF_MAC_ADDR";
+	case ECORE_GFS_PROFILE:
+		return "GFS_PROFILE";
 	case ECORE_SB:
 		return "SB";
 	default:
@@ -4467,7 +6000,8 @@ static u32 ecore_hsi_def_val[][MAX_CHIP_IDS] = {
 
 u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev, enum ecore_hsi_def_type type)
 {
-	enum chip_ids chip_id = ECORE_IS_BB(p_dev) ? CHIP_BB : CHIP_K2;
+	enum chip_ids chip_id = ECORE_IS_BB(p_dev) ? CHIP_BB :
+				ECORE_IS_AH(p_dev) ? CHIP_K2 : CHIP_E5;
 
 	if (type >= ECORE_NUM_HSI_DEFS) {
 		DP_ERR(p_dev, "Unexpected HSI definition type [%d]\n", type);
@@ -4482,18 +6016,13 @@ ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt)
 {
 	u32 resc_max_val, mcp_resp;
-	u8 res_id;
+	u8 num_vf_cnqs, res_id;
 	enum _ecore_status_t rc;
 
+	num_vf_cnqs = p_hwfn->num_vf_cnqs;
+
 	for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
-		/* @DPDK */
 		switch (res_id) {
-		case ECORE_LL2_QUEUE:
-		case ECORE_RDMA_CNQ_RAM:
-		case ECORE_RDMA_STATS_QUEUE:
-		case ECORE_BDQ:
-			resc_max_val = 0;
-			break;
 		default:
 			continue;
 		}
@@ -4523,6 +6052,7 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u8 num_vf_cnqs = p_hwfn->num_vf_cnqs;
 
 	switch (res_id) {
 	case ECORE_L2_QUEUE:
@@ -4549,37 +6079,57 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 	case ECORE_ILT:
 		*p_resc_num = NUM_OF_PXP_ILT_RECORDS(p_dev) / num_funcs;
 		break;
-	case ECORE_LL2_QUEUE:
+	case ECORE_LL2_RAM_QUEUE:
 		*p_resc_num = MAX_NUM_LL2_RX_RAM_QUEUES / num_funcs;
 		break;
+	case ECORE_LL2_CTX_QUEUE:
+		*p_resc_num = MAX_NUM_LL2_RX_CTX_QUEUES / num_funcs;
+		break;
+	case ECORE_VF_RDMA_CNQ_RAM:
+		*p_resc_num = num_vf_cnqs / num_funcs;
+		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
 		/* CNQ/CMDQS are the same resource */
-		/* @DPDK */
-		*p_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
+		*p_resc_num = (NUM_OF_GLOBAL_QUEUES - num_vf_cnqs) / num_funcs;
 		break;
 	case ECORE_RDMA_STATS_QUEUE:
 		*p_resc_num = NUM_OF_RDMA_STATISTIC_COUNTERS(p_dev) / num_funcs;
 		break;
 	case ECORE_BDQ:
-		/* @DPDK */
+		if (p_hwfn->hw_info.personality != ECORE_PCI_ISCSI &&
+		    p_hwfn->hw_info.personality != ECORE_PCI_FCOE)
+			*p_resc_num = 0;
+		else
+			*p_resc_num = 1;
+		break;
+	case ECORE_VF_MAC_ADDR:
 		*p_resc_num = 0;
 		break;
-	default:
+	case ECORE_GFS_PROFILE:
+		*p_resc_num = ECORE_IS_E5(p_hwfn->p_dev) ?
+			      GFS_PROFILE_MAX_ENTRIES / num_funcs : 0;
+		break;
+	case ECORE_SB:
+		/* Since we want its value to reflect whether MFW supports
+		 * the new scheme, have a default of 0.
+		 */
+		*p_resc_num = 0;
 		break;
+	default:
+		return ECORE_INVAL;
 	}
 
-
 	switch (res_id) {
 	case ECORE_BDQ:
 		if (!*p_resc_num)
 			*p_resc_start = 0;
-		break;
-	case ECORE_SB:
-		/* Since we want its value to reflect whether MFW supports
-		 * the new scheme, have a default of 0.
-		 */
-		*p_resc_num = 0;
+		else if (p_hwfn->p_dev->num_ports_in_engine == 4)
+			*p_resc_start = p_hwfn->port_id;
+		else if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI)
+			*p_resc_start = p_hwfn->port_id;
+		else if (p_hwfn->hw_info.personality == ECORE_PCI_FCOE)
+			*p_resc_start = p_hwfn->port_id + 2;
 		break;
 	default:
 		*p_resc_start = *p_resc_num * p_hwfn->enabled_func_idx;
@@ -4609,8 +6159,19 @@ __ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
 		return rc;
 	}
 
+	/* TODO: Add MFW support for GFS profiles resource */
+	if (res_id == ECORE_GFS_PROFILE) {
+		DP_INFO(p_hwfn,
+			"Resource %d [%s]: Applying default values [%d,%d]\n",
+			res_id, ecore_hw_get_resc_name(res_id),
+			dflt_resc_num, dflt_resc_start);
+		*p_resc_num = dflt_resc_num;
+		*p_resc_start = dflt_resc_start;
+		return ECORE_SUCCESS;
+	}
+
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && !ECORE_IS_E5(p_hwfn->p_dev)) {
 		*p_resc_num = dflt_resc_num;
 		*p_resc_start = dflt_resc_start;
 		goto out;
@@ -4620,9 +6181,8 @@ __ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
 	rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, res_id,
 				     &mcp_resp, p_resc_num, p_resc_start);
 	if (rc != ECORE_SUCCESS) {
-		DP_NOTICE(p_hwfn, true,
-			  "MFW response failure for an allocation request for"
-			  " resource %d [%s]\n",
+		DP_NOTICE(p_hwfn, false,
+			  "MFW response failure for an allocation request for resource %d [%s]\n",
 			  res_id, ecore_hw_get_resc_name(res_id));
 		return rc;
 	}
@@ -4634,12 +6194,9 @@ __ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
 	 */
 	if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK) {
 		DP_INFO(p_hwfn,
-			"Failed to receive allocation info for resource %d [%s]."
-			" mcp_resp = 0x%x. Applying default values"
-			" [%d,%d].\n",
+			"Failed to receive allocation info for resource %d [%s]. mcp_resp = 0x%x. Applying default values [%d,%d].\n",
 			res_id, ecore_hw_get_resc_name(res_id), mcp_resp,
 			dflt_resc_num, dflt_resc_start);
-
 		*p_resc_num = dflt_resc_num;
 		*p_resc_start = dflt_resc_start;
 		goto out;
@@ -4659,6 +6216,37 @@ __ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
 		}
 	}
 out:
+	/* PQs have to divide by 8 [that's the HW granularity].
+	 * Reduce number so it would fit.
+	 */
+	if ((res_id == ECORE_PQ) &&
+	    ((*p_resc_num % 8) || (*p_resc_start % 8))) {
+		DP_INFO(p_hwfn,
+			"PQs need to align by 8; Number %08x --> %08x, Start %08x --> %08x\n",
+			*p_resc_num, (*p_resc_num) & ~0x7,
+			*p_resc_start, (*p_resc_start) & ~0x7);
+		*p_resc_num &= ~0x7;
+		*p_resc_start &= ~0x7;
+	}
+
+	/* The last RSS engine is used by the FW for TPA hash calculation.
+	 * Old MFW versions allocate it to the drivers, so it needs to be
+	 * truncated.
+	 */
+	if (res_id == ECORE_RSS_ENG &&
+	    *p_resc_num &&
+	    (*p_resc_start + *p_resc_num - 1) ==
+	    NUM_OF_RSS_ENGINES(p_hwfn->p_dev))
+		--*p_resc_num;
+
+	/* In case personality is not RDMA or there is no separate doorbell bar
+	 * for VFs, VF-RDMA will not be supported and no VF CNQs are needed.
+	 */
+	if ((res_id == ECORE_VF_RDMA_CNQ_RAM) &&
+	    (!ECORE_IS_RDMA_PERSONALITY(p_hwfn) ||
+	    !ecore_iov_vf_db_bar_size(p_hwfn, p_hwfn->p_main_ptt)))
+		*p_resc_num = 0;
+
 	return ECORE_SUCCESS;
 }
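
The align-to-8 fixup above is plain bit masking; a tiny worked example with
illustrative values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Illustrative MFW answer that is not 8-aligned */
	uint32_t num = 0x2a, start = 0x1d;

	/* HW granularity for PQs is 8: mask off the low three bits */
	printf("num 0x%08x -> 0x%08x, start 0x%08x -> 0x%08x\n",
	       num, num & ~0x7u, start, start & ~0x7u);
	return 0;
}
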
 
@@ -4697,7 +6285,7 @@ static enum _ecore_status_t ecore_hw_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
 	/* 4-ports mode has limitations that should be enforced:
 	 * - BB: the MFW can access only PPFIDs which their corresponding PFIDs
 	 *       belong to this certain port.
-	 * - AH: only 4 PPFIDs per port are available.
+	 * - AH/E5: only 4 PPFIDs per port are available.
 	 */
 	if (ecore_device_num_ports(p_dev) == 4) {
 		u8 mask;
@@ -4762,9 +6350,17 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	 * Old drivers that don't acquire the lock can run in parallel, and
 	 * their allocation values won't be affected by the updated max values.
 	 */
+
 	ecore_mcp_resc_lock_default_init(&resc_lock_params, &resc_unlock_params,
 					 ECORE_RESC_LOCK_RESC_ALLOC, false);
 
+	/* Changes on top of the default values to accommodate parallel
+	 * attempts by several PFs.
+	 * [10 x 10 msec by default ==> 20 x 50 msec]
+	 */
+	resc_lock_params.retry_num *= 2;
+	resc_lock_params.retry_interval *= 5;
+
 	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params);
 	if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
 		return rc;
@@ -4774,8 +6370,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	} else if (rc == ECORE_SUCCESS && !resc_lock_params.b_granted) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to acquire the resource lock for the resource allocation commands\n");
-		rc = ECORE_BUSY;
-		goto unlock_and_exit;
+		return ECORE_BUSY;
 	} else {
 		rc = ecore_hw_set_soft_resc_size(p_hwfn, p_ptt);
 		if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
@@ -4818,7 +6413,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 		if (!(p_dev->b_is_emul_full)) {
 			resc_num[ECORE_PQ] = 32;
 			resc_start[ECORE_PQ] = resc_num[ECORE_PQ] *
-			    p_hwfn->enabled_func_idx;
+					       p_hwfn->enabled_func_idx;
 		}
 
 		/* For AH emulation, since we have a possible maximal number of
@@ -4832,23 +6427,38 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 			if (!p_hwfn->rel_pf_id) {
 				resc_num[ECORE_ILT] =
 					OSAL_MAX_T(u32, resc_num[ECORE_ILT],
-							 roce_min_ilt_lines);
+						   roce_min_ilt_lines);
 			} else if (resc_num[ECORE_ILT] < roce_min_ilt_lines) {
 				resc_start[ECORE_ILT] += roce_min_ilt_lines -
 							 resc_num[ECORE_ILT];
 			}
 		}
+
+		/* [E5 emulation] MF + SRIOV:
+		 * The 14 low PFs have 16 VFs while the 2 high PFs have 8 VFs.
+		 * Need to adjust the vport values so that each PF/VF will have
+		 * a vport:
+		 * 256 vports = 14 x (PF + 16 VFs) + 2 x (PF + 8 VFs)
+		 */
+		if (ECORE_IS_E5(p_dev) &&
+		    p_hwfn->num_funcs_on_engine == 16 &&
+		    IS_ECORE_SRIOV(p_dev)) {
+			u8 id = p_hwfn->abs_pf_id;
+
+			resc_num[ECORE_VPORT] = (id < 14) ? 17 : 9;
+			resc_start[ECORE_VPORT] = (id < 14) ?
+						  (id * 17) :
+						  (14 * 17 + (id - 14) * 9);
+		}
 	}
 #endif
 
 	/* Sanity for ILT */
 	max_ilt_lines = NUM_OF_PXP_ILT_RECORDS(p_dev);
 	if (RESC_END(p_hwfn, ECORE_ILT) > max_ilt_lines) {
-		DP_NOTICE(p_hwfn, true,
-			  "Can't assign ILT pages [%08x,...,%08x]\n",
-			  RESC_START(p_hwfn, ECORE_ILT), RESC_END(p_hwfn,
-								  ECORE_ILT) -
-			  1);
+		DP_NOTICE(p_hwfn, true, "Can't assign ILT pages [%08x,...,%08x]\n",
+			  RESC_START(p_hwfn, ECORE_ILT),
+			  RESC_END(p_hwfn, ECORE_ILT) - 1);
 		return ECORE_INVAL;
 	}
 
@@ -4875,6 +6485,26 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+static bool ecore_is_dscp_mapping_allowed(struct ecore_hwfn *p_hwfn,
+					  u32 mf_mode)
+{
+	/* HW bug in E4:
+	 * The NIG accesses the "NIG_REG_DSCP_TO_TC_MAP_ENABLE" PORT_PF register
+	 * with the PFID, as set in "NIG_REG_LLH_PPFID2PFID_TBL", instead of
+	 * with the PPFID. AH is being affected from this bug, and thus DSCP to
+	 * TC mapping is only allowed on the following configurations:
+	 * - QPAR, 2-ports
+	 * - QPAR, 4-ports, single PF on a port
+	 * There is no impact on BB since it has another bug in which the PPFID
+	 * is actually engine based.
+	 */
+	return !ECORE_IS_AH(p_hwfn->p_dev) ||
+	       (mf_mode == NVM_CFG1_GLOB_MF_MODE_DEFAULT &&
+		(ecore_device_num_ports(p_hwfn->p_dev) == 2 ||
+		 (ecore_device_num_ports(p_hwfn->p_dev) == 4 &&
+		  p_hwfn->num_funcs_on_port == 1)));
+}
+
 #ifndef ASIC_ONLY
 static enum _ecore_status_t
 ecore_emul_hw_get_nvm_info(struct ecore_hwfn *p_hwfn)
@@ -4885,7 +6515,8 @@ ecore_emul_hw_get_nvm_info(struct ecore_hwfn *p_hwfn)
 		/* The MF mode on emulation is either default or NPAR 1.0 */
 		p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS |
 				 1 << ECORE_MF_LLH_PROTO_CLSS |
-				 1 << ECORE_MF_LL2_NON_UNICAST;
+				 1 << ECORE_MF_LL2_NON_UNICAST |
+				 1 << ECORE_MF_DSCP_TO_TC_MAP;
 		if (p_hwfn->num_funcs_on_port > 1)
 			p_dev->mf_bits |= 1 << ECORE_MF_INTER_PF_SWITCH |
 					  1 << ECORE_MF_DISABLE_ARFS;
@@ -4902,14 +6533,16 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		      struct ecore_ptt *p_ptt,
 		      struct ecore_hw_prepare_params *p_params)
 {
-	u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg, dcbx_mode;
 	u32 port_cfg_addr, link_temp, nvm_cfg_addr, device_capabilities;
+	u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg;
 	struct ecore_mcp_link_capabilities *p_caps;
 	struct ecore_mcp_link_params *link;
 	enum _ecore_status_t rc;
+	u32 dcbx_mode;  /* __LINUX__THROW__ */
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) ||
+	    p_hwfn->mcp_info->recovery_mode)
 		return ecore_emul_hw_get_nvm_info(p_hwfn);
 #endif
 
@@ -4924,18 +6557,16 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
-/* Read nvm_cfg1  (Notice this is just offset, and not offsize (TBD) */
-
+	/* Read nvm_cfg1 (note: this is just the offset, not the offsize (TBD)) */
 	nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt, nvm_cfg_addr + 4);
 
-	addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
+	addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
 		   OFFSETOF(struct nvm_cfg1, glob) +
 		   OFFSETOF(struct nvm_cfg1_glob, core_cfg);
 
 	core_cfg = ecore_rd(p_hwfn, p_ptt, addr);
 
-	switch ((core_cfg & NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK) >>
-		NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET) {
+	switch (GET_MFW_FIELD(core_cfg, NVM_CFG1_GLOB_NETWORK_PORT_MODE)) {
 	case NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_2X40G:
 		p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_2X40G;
 		break;
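
GET_MFW_FIELD/SET_MFW_FIELD replace the open-coded mask-and-shift seen in
the removed lines. A plausible shape for these helpers, assuming the usual
_MASK/_OFFSET suffix convention visible in the HSI macros (the exact
definitions live in the ecore headers):

#include <stdint.h>
#include <stdio.h>

/* Suffix convention taken from the removed open-coded form above */
#define NET_PORT_MODE_MASK   0x000000f0u
#define NET_PORT_MODE_OFFSET 4

/* Hedged sketch of the helpers, not the driver's exact definitions */
#define GET_MFW_FIELD(reg, field) \
	(((reg) & field##_MASK) >> field##_OFFSET)
#define SET_MFW_FIELD(reg, field, val)                                 \
	do {                                                           \
		(reg) &= ~field##_MASK;                                \
		(reg) |= ((val) << field##_OFFSET) & field##_MASK;     \
	} while (0)

int main(void)
{
	uint32_t core_cfg = 0x00000130;  /* illustrative register value */
	uint32_t mode = GET_MFW_FIELD(core_cfg, NET_PORT_MODE);

	printf("mode=%u\n", mode);              /* (0x130 & 0xf0) >> 4 == 3 */
	SET_MFW_FIELD(core_cfg, NET_PORT_MODE, 7);
	printf("core_cfg=0x%08x\n", core_cfg);  /* 0x00000170 */
	return 0;
}
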
@@ -4969,20 +6600,35 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	case NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G:
 		p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_4X25G;
 		break;
+	case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_2X50G_R1:
+		p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_2X50G_R1;
+		break;
+	case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_4X50G_R1:
+		p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_4X50G_R1;
+		break;
+	case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_1X100G_R2:
+		p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_1X100G_R2;
+		break;
+	case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_2X100G_R2:
+		p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_2X100G_R2;
+		break;
+	case NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_1X100G_R4:
+		p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_1X100G_R4;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Unknown port mode in 0x%08x\n",
 			  core_cfg);
 		break;
 	}
 
+#ifndef __EXTRACT__LINUX__THROW__
 	/* Read DCBX configuration */
 	port_cfg_addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
 			OFFSETOF(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]);
 	dcbx_mode = ecore_rd(p_hwfn, p_ptt,
 			     port_cfg_addr +
 			     OFFSETOF(struct nvm_cfg1_port, generic_cont0));
-	dcbx_mode = (dcbx_mode & NVM_CFG1_PORT_DCBX_MODE_MASK)
-		>> NVM_CFG1_PORT_DCBX_MODE_OFFSET;
+	dcbx_mode = GET_MFW_FIELD(dcbx_mode, NVM_CFG1_PORT_DCBX_MODE);
 	switch (dcbx_mode) {
 	case NVM_CFG1_PORT_DCBX_MODE_DYNAMIC:
 		p_hwfn->hw_info.dcbx_mode = ECORE_DCBX_VERSION_DYNAMIC;
@@ -4996,12 +6642,13 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	default:
 		p_hwfn->hw_info.dcbx_mode = ECORE_DCBX_VERSION_DISABLED;
 	}
+#endif
 
 	/* Read default link configuration */
 	link = &p_hwfn->mcp_info->link_input;
 	p_caps = &p_hwfn->mcp_info->link_capabilities;
 	port_cfg_addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
-	    OFFSETOF(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]);
+			OFFSETOF(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]);
 	link_temp = ecore_rd(p_hwfn, p_ptt,
 			     port_cfg_addr +
 			     OFFSETOF(struct nvm_cfg1_port, speed_cap_mask));
@@ -5012,8 +6659,7 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	link_temp = ecore_rd(p_hwfn, p_ptt,
 				 port_cfg_addr +
 				 OFFSETOF(struct nvm_cfg1_port, link_settings));
-	switch ((link_temp & NVM_CFG1_PORT_DRV_LINK_SPEED_MASK) >>
-		NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET) {
+	switch (GET_MFW_FIELD(link_temp, NVM_CFG1_PORT_DRV_LINK_SPEED)) {
 	case NVM_CFG1_PORT_DRV_LINK_SPEED_AUTONEG:
 		link->speed.autoneg = true;
 		break;
@@ -5023,6 +6669,9 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	case NVM_CFG1_PORT_DRV_LINK_SPEED_10G:
 		link->speed.forced_speed = 10000;
 		break;
+	case NVM_CFG1_PORT_DRV_LINK_SPEED_20G:
+		link->speed.forced_speed = 20000;
+		break;
 	case NVM_CFG1_PORT_DRV_LINK_SPEED_25G:
 		link->speed.forced_speed = 25000;
 		break;
@@ -5036,27 +6685,56 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		link->speed.forced_speed = 100000;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Unknown Speed in 0x%08x\n", link_temp);
+		DP_NOTICE(p_hwfn, true, "Unknown Speed in 0x%08x\n",
+			  link_temp);
 	}
 
-	p_caps->default_speed = link->speed.forced_speed;
+	p_caps->default_speed = link->speed.forced_speed; /* __LINUX__THROW__ */
 	p_caps->default_speed_autoneg = link->speed.autoneg;
 
-	link_temp &= NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK;
-	link_temp >>= NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET;
+	link_temp = GET_MFW_FIELD(link_temp, NVM_CFG1_PORT_DRV_FLOW_CONTROL);
 	link->pause.autoneg = !!(link_temp &
-				  NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG);
+				 NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG);
 	link->pause.forced_rx = !!(link_temp &
-				    NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX);
+				   NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX);
 	link->pause.forced_tx = !!(link_temp &
-				    NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX);
+				   NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX);
 	link->loopback_mode = 0;
 
+	if (p_hwfn->mcp_info->capabilities &
+			FW_MB_PARAM_FEATURE_SUPPORT_FEC_CONTROL) {
+		link_temp = ecore_rd(p_hwfn, p_ptt, port_cfg_addr +
+				     OFFSETOF(struct nvm_cfg1_port,
+				     link_settings));
+		switch (GET_MFW_FIELD(link_temp,
+			NVM_CFG1_PORT_FEC_FORCE_MODE)) {
+		case NVM_CFG1_PORT_FEC_FORCE_MODE_NONE:
+			p_caps->fec_default |= ECORE_MCP_FEC_NONE;
+			break;
+		case NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE:
+			p_caps->fec_default |= ECORE_MCP_FEC_FIRECODE;
+			break;
+		case NVM_CFG1_PORT_FEC_FORCE_MODE_RS:
+			p_caps->fec_default |= ECORE_MCP_FEC_RS;
+			break;
+		case NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO:
+			p_caps->fec_default |= ECORE_MCP_FEC_AUTO;
+			break;
+		default:
+			DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+				   "unknown FEC mode in 0x%08x\n", link_temp);
+		}
+	} else {
+		p_caps->fec_default = ECORE_MCP_FEC_UNSUPPORTED;
+	}
+
+	link->fec = p_caps->fec_default;
+
 	if (p_hwfn->mcp_info->capabilities & FW_MB_PARAM_FEATURE_SUPPORT_EEE) {
 		link_temp = ecore_rd(p_hwfn, p_ptt, port_cfg_addr +
 				     OFFSETOF(struct nvm_cfg1_port, ext_phy));
-		link_temp &= NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_MASK;
-		link_temp >>= NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_OFFSET;
+		link_temp = GET_MFW_FIELD(link_temp,
+					  NVM_CFG1_PORT_EEE_POWER_SAVING_MODE);
 		p_caps->default_eee = ECORE_MCP_EEE_ENABLED;
 		link->eee.enable = true;
 		switch (link_temp) {
@@ -5083,100 +6761,202 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		p_caps->default_eee = ECORE_MCP_EEE_UNSUPPORTED;
 	}
 
+	if (p_hwfn->mcp_info->capabilities &
+	    FW_MB_PARAM_FEATURE_SUPPORT_EXT_SPEED_FEC_CONTROL) {
+		u32 mask;
+
+		link_temp = ecore_rd(p_hwfn, p_ptt,
+			     port_cfg_addr +
+			     OFFSETOF(struct nvm_cfg1_port, extended_speed));
+		mask = GET_MFW_FIELD(link_temp, NVM_CFG1_PORT_EXTENDED_SPEED);
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_AN)
+			link->ext_speed.autoneg = true;
+		link->ext_speed.forced_speed = 0;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_1G)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_1G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_10G)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_10G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_20G)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_20G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_25G)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_25G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_40G)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_40G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_50G_R;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R2)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_50G_R2;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R2)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_100G_R2;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R4)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_100G_R4;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_P4)
+			link->ext_speed.forced_speed |= ECORE_EXT_SPEED_100G_P4;
+
+		mask = GET_MFW_FIELD(link_temp,
+				     NVM_CFG1_PORT_EXTENDED_SPEED_CAP);
+		link->ext_speed.advertised_speeds = 0;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_RESERVED)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_RES;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_1G)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_1G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_10G)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_10G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_20G)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_20G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_25G)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_25G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_40G)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_40G;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_50G_R;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R2)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_50G_R2;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R2)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_100G_R2;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R4)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_100G_R4;
+		if (mask & NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_P4)
+			link->ext_speed.advertised_speeds |=
+						ECORE_EXT_SPEED_MASK_100G_P4;
+
+		link->ext_fec_mode = ecore_rd(p_hwfn, p_ptt, port_cfg_addr +
+					      OFFSETOF(struct nvm_cfg1_port,
+						       extended_fec_mode));
+		p_caps->default_ext_speed_caps =
+					link->ext_speed.advertised_speeds;
+		p_caps->default_ext_speed = link->ext_speed.forced_speed;
+		p_caps->default_ext_autoneg = link->ext_speed.autoneg;
+		p_caps->default_ext_fec = link->ext_fec_mode;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+			   "Read default extended link config: Speed 0x%08x, Adv. Speed 0x%08x, AN: 0x%02x, FEC: 0x%02x\n",
+			   link->ext_speed.forced_speed,
+			   link->ext_speed.advertised_speeds,
+			   link->ext_speed.autoneg, link->ext_fec_mode);
+	}
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-		   "Read default link: Speed 0x%08x, Adv. Speed 0x%08x, AN: 0x%02x, PAUSE AN: 0x%02x\n EEE: %02x [%08x usec]",
+		   "Read default link: Speed 0x%08x, Adv. Speed 0x%08x, AN: 0x%02x, PAUSE AN: 0x%02x, EEE: %02x [%08x usec], FEC: 0x%02x\n",
 		   link->speed.forced_speed, link->speed.advertised_speeds,
 		   link->speed.autoneg, link->pause.autoneg,
-		   p_caps->default_eee, p_caps->eee_lpi_timer);
-
-	/* Read Multi-function information from shmem */
-	addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
-		   OFFSETOF(struct nvm_cfg1, glob) +
-		   OFFSETOF(struct nvm_cfg1_glob, generic_cont0);
+		   p_caps->default_eee, p_caps->eee_lpi_timer,
+		   p_caps->fec_default);
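
The long if-chain above maps NVM extended-speed bits onto driver speed bits
one by one. A table-driven equivalent, sketched with made-up bit values (the
real NVM_CFG1_*/ECORE_EXT_SPEED_* macros differ):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative bit values; the real NVM_/ECORE_ macros differ */
struct speed_map { uint32_t nvm_bit; uint32_t drv_bit; };

static const struct speed_map ext_speed_map[] = {
	{ 1u << 1, 1u << 0 },  /* e.g. ..._EXTND_SPD_1G  -> EXT_SPEED_1G  */
	{ 1u << 2, 1u << 1 },  /* e.g. ..._EXTND_SPD_10G -> EXT_SPEED_10G */
	{ 1u << 3, 1u << 2 },  /* and so on for 20G/25G/40G/50G/100G...   */
};

static uint32_t map_speeds(uint32_t nvm_mask)
{
	uint32_t out = 0;
	size_t i;

	for (i = 0; i < sizeof(ext_speed_map) / sizeof(ext_speed_map[0]); i++)
		if (nvm_mask & ext_speed_map[i].nvm_bit)
			out |= ext_speed_map[i].drv_bit;
	return out;
}

int main(void)
{
	printf("0x%x\n", map_speeds((1u << 1) | (1u << 3)));  /* 0x5 */
	return 0;
}
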
 
-	generic_cont0 = ecore_rd(p_hwfn, p_ptt, addr);
+	if (IS_LEAD_HWFN(p_hwfn)) {
+		struct ecore_dev *p_dev = p_hwfn->p_dev;
 
-	mf_mode = (generic_cont0 & NVM_CFG1_GLOB_MF_MODE_MASK) >>
-	    NVM_CFG1_GLOB_MF_MODE_OFFSET;
+		/* Read Multi-function information from shmem */
+		addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
+		       OFFSETOF(struct nvm_cfg1, glob) +
+		       OFFSETOF(struct nvm_cfg1_glob, generic_cont0);
+		generic_cont0 = ecore_rd(p_hwfn, p_ptt, addr);
+		mf_mode = GET_MFW_FIELD(generic_cont0, NVM_CFG1_GLOB_MF_MODE);
 
-	switch (mf_mode) {
-	case NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED:
-		p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS;
-		break;
-	case NVM_CFG1_GLOB_MF_MODE_UFP:
-		p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS |
+		switch (mf_mode) {
+		case NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED:
+			p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS;
+			break;
+		case NVM_CFG1_GLOB_MF_MODE_UFP:
+			p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS |
+					 1 << ECORE_MF_LLH_PROTO_CLSS |
 					 1 << ECORE_MF_UFP_SPECIFIC |
-					 1 << ECORE_MF_8021Q_TAGGING;
-		break;
-	case NVM_CFG1_GLOB_MF_MODE_BD:
-		p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS |
+					 1 << ECORE_MF_8021Q_TAGGING |
+					 1 << ECORE_MF_DONT_ADD_VLAN0_TAG;
+			break;
+		case NVM_CFG1_GLOB_MF_MODE_BD:
+			p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS |
 					 1 << ECORE_MF_LLH_PROTO_CLSS |
 					 1 << ECORE_MF_8021AD_TAGGING |
-					 1 << ECORE_MF_FIP_SPECIAL;
-		break;
-	case NVM_CFG1_GLOB_MF_MODE_NPAR1_0:
-		p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS |
+					 1 << ECORE_MF_FIP_SPECIAL |
+					 1 << ECORE_MF_DONT_ADD_VLAN0_TAG;
+			break;
+		case NVM_CFG1_GLOB_MF_MODE_NPAR1_0:
+			p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS |
 					 1 << ECORE_MF_LLH_PROTO_CLSS |
 					 1 << ECORE_MF_LL2_NON_UNICAST |
 					 1 << ECORE_MF_INTER_PF_SWITCH |
 					 1 << ECORE_MF_DISABLE_ARFS;
-		break;
-	case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
-		p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS |
+			break;
+		case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
+			p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS |
 					 1 << ECORE_MF_LLH_PROTO_CLSS |
-					 1 << ECORE_MF_LL2_NON_UNICAST;
-		if (ECORE_IS_BB(p_hwfn->p_dev))
-			p_hwfn->p_dev->mf_bits |= 1 << ECORE_MF_NEED_DEF_PF;
-		break;
-	}
-	DP_INFO(p_hwfn, "Multi function mode is 0x%x\n",
-		p_hwfn->p_dev->mf_bits);
+					 1 << ECORE_MF_LL2_NON_UNICAST |
+					 1 << ECORE_MF_VF_RDMA |
+					 1 << ECORE_MF_ROCE_LAG;
+			if (ECORE_IS_BB(p_dev))
+				p_dev->mf_bits |= 1 << ECORE_MF_NEED_DEF_PF;
+			break;
+		}
 
-	if (ECORE_IS_CMT(p_hwfn->p_dev))
-		p_hwfn->p_dev->mf_bits |= (1 << ECORE_MF_DISABLE_ARFS);
+		DP_INFO(p_dev, "Multi function mode is 0x%x\n",
+			p_dev->mf_bits);
 
-	/* It's funny since we have another switch, but it's easier
-	 * to throw this away in linux this way. Long term, it might be
-	 * better to have have getters for needed ECORE_MF_* fields,
-	 * convert client code and eliminate this.
-	 */
-	switch (mf_mode) {
-	case NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED:
-	case NVM_CFG1_GLOB_MF_MODE_BD:
-		p_hwfn->p_dev->mf_mode = ECORE_MF_OVLAN;
-		break;
-	case NVM_CFG1_GLOB_MF_MODE_NPAR1_0:
-		p_hwfn->p_dev->mf_mode = ECORE_MF_NPAR;
-		break;
-	case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
-		p_hwfn->p_dev->mf_mode = ECORE_MF_DEFAULT;
-		break;
-	case NVM_CFG1_GLOB_MF_MODE_UFP:
-		p_hwfn->p_dev->mf_mode = ECORE_MF_UFP;
-		break;
+		/* In CMT the PF is unknown when the GFS block processes the
+		 * packet. Therefore cannot use searcher as it has a per PF
+		 * database, and thus ARFS must be disabled.
+		 */
+		if (ECORE_IS_CMT(p_dev))
+			p_dev->mf_bits |= 1 << ECORE_MF_DISABLE_ARFS;
+
+		if (ecore_is_dscp_mapping_allowed(p_hwfn, mf_mode))
+			p_dev->mf_bits |= 1 << ECORE_MF_DSCP_TO_TC_MAP;
+
+#ifndef __EXTRACT__LINUX__THROW__
+		/* It's funny since we have another switch, but it's easier
+		 * to throw this away in linux this way. Long term, it might be
+		 * better to have getters for needed ECORE_MF_* fields,
+		 * convert client code and eliminate this.
+		 */
+		switch (mf_mode) {
+		case NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED:
+		case NVM_CFG1_GLOB_MF_MODE_BD:
+			p_dev->mf_mode = ECORE_MF_OVLAN;
+			break;
+		case NVM_CFG1_GLOB_MF_MODE_NPAR1_0:
+			p_dev->mf_mode = ECORE_MF_NPAR;
+			break;
+		case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
+			p_dev->mf_mode = ECORE_MF_DEFAULT;
+			break;
+		case NVM_CFG1_GLOB_MF_MODE_UFP:
+			p_dev->mf_mode = ECORE_MF_UFP;
+			break;
+		}
+#endif
 	}
 
-	/* Read Multi-function information from shmem */
+	/* Read device capabilities information from shmem */
 	addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
 		   OFFSETOF(struct nvm_cfg1, glob) +
 		   OFFSETOF(struct nvm_cfg1_glob, device_capabilities);
 
 	device_capabilities = ecore_rd(p_hwfn, p_ptt, addr);
 	if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET)
-		OSAL_SET_BIT(ECORE_DEV_CAP_ETH,
-				&p_hwfn->hw_info.device_capabilities);
+		OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_ETH,
+					&p_hwfn->hw_info.device_capabilities);
 	if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE)
-		OSAL_SET_BIT(ECORE_DEV_CAP_FCOE,
-				&p_hwfn->hw_info.device_capabilities);
+		OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_FCOE,
+					&p_hwfn->hw_info.device_capabilities);
 	if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ISCSI)
-		OSAL_SET_BIT(ECORE_DEV_CAP_ISCSI,
-				&p_hwfn->hw_info.device_capabilities);
+		OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_ISCSI,
+					&p_hwfn->hw_info.device_capabilities);
 	if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ROCE)
-		OSAL_SET_BIT(ECORE_DEV_CAP_ROCE,
-				&p_hwfn->hw_info.device_capabilities);
+		OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_ROCE,
+					&p_hwfn->hw_info.device_capabilities);
 	if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_IWARP)
-		OSAL_SET_BIT(ECORE_DEV_CAP_IWARP,
-				&p_hwfn->hw_info.device_capabilities);
+		OSAL_NON_ATOMIC_SET_BIT(ECORE_DEV_CAP_IWARP,
+					&p_hwfn->hw_info.device_capabilities);
 
 	rc = ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
 	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
@@ -5190,11 +6970,14 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt)
 {
-	u8 num_funcs, enabled_func_idx = p_hwfn->rel_pf_id;
-	u32 reg_function_hide, tmp, eng_mask, low_pfs_mask;
+	u8 num_funcs, enabled_func_idx, num_funcs_on_port;
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u32 reg_function_hide;
 
+	/* Default "worst-case" values */
 	num_funcs = ECORE_IS_AH(p_dev) ? MAX_NUM_PFS_K2 : MAX_NUM_PFS_BB;
+	enabled_func_idx = p_hwfn->rel_pf_id;
+	num_funcs_on_port = MAX_PF_PER_PORT;
 
 	/* Bit 0 of MISCS_REG_FUNCTION_HIDE indicates whether the bypass values
 	 * in the other bits are selected.
@@ -5207,21 +6990,23 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 	reg_function_hide = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE);
 
 	if (reg_function_hide & 0x1) {
+		u32 enabled_funcs, eng_mask, low_pfs_mask, port_mask, tmp;
+
+		/* Get a bitmask of the enabled functions on the device */
+		enabled_funcs = (reg_function_hide & 0xffff) ^ 0xfffe;
+
+		/* Get the number of the enabled functions on the engine */
 		if (ECORE_IS_BB(p_dev)) {
-			if (ECORE_PATH_ID(p_hwfn) && !ECORE_IS_CMT(p_dev)) {
-				num_funcs = 0;
+			if (ECORE_PATH_ID(p_hwfn) && !ECORE_IS_CMT(p_dev))
 				eng_mask = 0xaaaa;
-			} else {
-				num_funcs = 1;
-				eng_mask = 0x5554;
-			}
+			else
+				eng_mask = 0x5555;
 		} else {
-			num_funcs = 1;
-			eng_mask = 0xfffe;
+			eng_mask = 0xffff;
 		}
 
-		/* Get the number of the enabled functions on the engine */
-		tmp = (reg_function_hide ^ 0xffffffff) & eng_mask;
+		num_funcs = 0;
+		tmp = enabled_funcs & eng_mask;
 		while (tmp) {
 			if (tmp & 0x1)
 				num_funcs++;
@@ -5229,17 +7014,35 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 		}
 
 		/* Get the PF index within the enabled functions */
-		low_pfs_mask = (0x1 << p_hwfn->abs_pf_id) - 1;
-		tmp = reg_function_hide & eng_mask & low_pfs_mask;
+		low_pfs_mask = (1 << ECORE_LEADING_HWFN(p_dev)->abs_pf_id) - 1;
+		enabled_func_idx = 0;
+		tmp = enabled_funcs & eng_mask & low_pfs_mask;
+		while (tmp) {
+			if (tmp & 0x1)
+				enabled_func_idx++;
+			tmp >>= 0x1;
+		}
+
+		/* Get the number of functions on the port */
+		if (ecore_device_num_ports(p_hwfn->p_dev) == 4)
+			port_mask = 0x1111 << (p_hwfn->abs_pf_id % 4);
+		else if (ecore_device_num_ports(p_hwfn->p_dev) == 2)
+			port_mask = 0x5555 << (p_hwfn->abs_pf_id % 2);
+		else /* single port */
+			port_mask = 0xffff;
+
+		num_funcs_on_port = 0;
+		tmp = enabled_funcs & port_mask;
 		while (tmp) {
 			if (tmp & 0x1)
-				enabled_func_idx--;
+				num_funcs_on_port++;
 			tmp >>= 0x1;
 		}
 	}
 
 	p_hwfn->num_funcs_on_engine = num_funcs;
 	p_hwfn->enabled_func_idx = enabled_func_idx;
+	p_hwfn->num_funcs_on_port = num_funcs_on_port;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_FPGA(p_dev)) {
@@ -5250,14 +7053,15 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 #endif
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
-		   "PF [rel_id %d, abs_id %d] occupies index %d within the %d enabled functions on the engine\n",
-		   p_hwfn->rel_pf_id, p_hwfn->abs_pf_id,
-		   p_hwfn->enabled_func_idx, p_hwfn->num_funcs_on_engine);
+		   "PF {abs %d, rel %d}: enabled func %d, %d funcs on engine, %d funcs on port [reg_function_hide 0x%x]\n",
+		   p_hwfn->abs_pf_id, p_hwfn->rel_pf_id,
+		   p_hwfn->enabled_func_idx, p_hwfn->num_funcs_on_engine,
+		   p_hwfn->num_funcs_on_port, reg_function_hide);
 }
 
 #ifndef ASIC_ONLY
 static void ecore_emul_hw_info_port_num(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt)
+					struct ecore_ptt *p_ptt)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	u32 eco_reserved;
@@ -5265,22 +7069,22 @@ static void ecore_emul_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	/* MISCS_REG_ECO_RESERVED[15:12]: num of ports in an engine */
 	eco_reserved = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
 	switch ((eco_reserved & 0xf000) >> 12) {
-		case 1:
-			p_dev->num_ports_in_engine = 1;
-			break;
-		case 3:
-			p_dev->num_ports_in_engine = 2;
-			break;
-		case 0xf:
-			p_dev->num_ports_in_engine = 4;
-			break;
-		default:
-			DP_NOTICE(p_hwfn, false,
+	case 1:
+		p_dev->num_ports_in_engine = 1;
+		break;
+	case 3:
+		p_dev->num_ports_in_engine = 2;
+		break;
+	case 0xf:
+		p_dev->num_ports_in_engine = 4;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, false,
 			  "Emulation: Unknown port mode [ECO_RESERVED 0x%08x]\n",
 			  eco_reserved);
 		p_dev->num_ports_in_engine = 1; /* Default to something */
 		break;
-		}
+	}
 
 	p_dev->num_ports = p_dev->num_ports_in_engine *
 			   ecore_device_num_engines(p_dev);
@@ -5307,7 +7111,7 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 	}
 #endif
 
-		/* In CMT there is always only one port */
+	/* In CMT there is always only one port */
 	if (ECORE_IS_CMT(p_dev)) {
 		p_dev->num_ports_in_engine = 1;
 		p_dev->num_ports = 1;
@@ -5355,8 +7159,7 @@ static void ecore_mcp_get_eee_caps(struct ecore_hwfn *p_hwfn,
 	p_caps->eee_speed_caps = 0;
 	eee_status = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr +
 			      OFFSETOF(struct public_port, eee_status));
-	eee_status = (eee_status & EEE_SUPPORTED_SPEED_MASK) >>
-			EEE_SUPPORTED_SPEED_OFFSET;
+	eee_status = GET_MFW_FIELD(eee_status, EEE_SUPPORTED_SPEED);
 	if (eee_status & EEE_1G_SUPPORTED)
 		p_caps->eee_speed_caps |= ECORE_EEE_1G_ADV;
 	if (eee_status & EEE_10G_ADV)
@@ -5377,26 +7180,41 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	}
 
 	/* Since all information is common, only first hwfns should do this */
-	if (IS_LEAD_HWFN(p_hwfn) && !IS_ECORE_PACING(p_hwfn)) {
+	if (IS_LEAD_HWFN(p_hwfn) && !IS_ECORE_PACING(p_hwfn) &&
+	    !p_params->b_sriov_disable) {
 		rc = ecore_iov_hw_info(p_hwfn);
+
 		if (rc != ECORE_SUCCESS) {
 			if (p_params->b_relaxed_probe)
 				p_params->p_relaxed_res =
-						ECORE_HW_PREPARE_BAD_IOV;
+					ECORE_HW_PREPARE_BAD_IOV;
 			else
 				return rc;
 		}
 	}
 
+	/* The following order should be kept:
+	 * (1) ecore_hw_info_port_num(),
+	 * (2) ecore_get_num_funcs(),
+	 * (3) ecore_mcp_get_capabilities(), and
+	 * (4) ecore_hw_get_nvm_info(),
+	 * since (2) depends on (1), and (4) depends on both.
+	 * In addition, a mailbox command can be sent to the MFW only after
+	 * (1) is called.
+	 */
 	if (IS_LEAD_HWFN(p_hwfn))
 		ecore_hw_info_port_num(p_hwfn, p_ptt);
 
+	ecore_get_num_funcs(p_hwfn, p_ptt);
+
 	ecore_mcp_get_capabilities(p_hwfn, p_ptt);
 
 	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt, p_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
+	if (p_hwfn->mcp_info->recovery_mode)
+		return ECORE_SUCCESS;
+
 	rc = ecore_int_igu_read_cam(p_hwfn, p_ptt);
 	if (rc != ECORE_SUCCESS) {
 		if (p_params->b_relaxed_probe)
@@ -5405,16 +7223,18 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			return rc;
 	}
 
+	OSAL_NUM_FUNCS_IS_SET(p_hwfn);
+
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev) && ecore_mcp_is_init(p_hwfn)) {
 #endif
 		OSAL_MEMCPY(p_hwfn->hw_info.hw_mac_addr,
-			    p_hwfn->mcp_info->func_info.mac, ETH_ALEN);
+			    p_hwfn->mcp_info->func_info.mac, ECORE_ETH_ALEN);
 #ifndef ASIC_ONLY
 	} else {
-		static u8 mcp_hw_mac[6] = { 0, 2, 3, 4, 5, 6 };
+		static u8 mcp_hw_mac[6] = {0, 2, 3, 4, 5, 6};
 
-		OSAL_MEMCPY(p_hwfn->hw_info.hw_mac_addr, mcp_hw_mac, ETH_ALEN);
+		OSAL_MEMCPY(p_hwfn->hw_info.hw_mac_addr, mcp_hw_mac, ECORE_ETH_ALEN);
 		p_hwfn->hw_info.hw_mac_addr[5] = p_hwfn->abs_pf_id;
 	}
 #endif
@@ -5422,7 +7242,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if (ecore_mcp_is_init(p_hwfn)) {
 		if (p_hwfn->mcp_info->func_info.ovlan != ECORE_MCP_VLAN_UNSET)
 			p_hwfn->hw_info.ovlan =
-			    p_hwfn->mcp_info->func_info.ovlan;
+				p_hwfn->mcp_info->func_info.ovlan;
 
 		ecore_mcp_cmd_port_init(p_hwfn, p_ptt);
 
@@ -5443,7 +7263,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
 		/* AH emulation:
 		 * Allow only PF0 to be RoCE to overcome a lack of ILT lines.
-	 */
+		 */
 		if (ECORE_IS_AH(p_hwfn->p_dev) && p_hwfn->rel_pf_id)
 			p_hwfn->hw_info.personality = ECORE_PCI_ETH;
 		else
@@ -5451,6 +7271,9 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	}
 #endif
 
+	if (ECORE_IS_ROCE_PERSONALITY(p_hwfn))
+		p_hwfn->hw_info.multi_tc_roce_en = 1;
+
 	/* although in BB some constellations may support more than 4 tcs,
 	 * that can result in performance penalty in some cases. 4
 	 * represents a good tradeoff between performance and flexibility.
@@ -5465,16 +7288,18 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	 */
 	p_hwfn->hw_info.num_active_tc = 1;
 
-	ecore_get_num_funcs(p_hwfn, p_ptt);
-
 	if (ecore_mcp_is_init(p_hwfn))
 		p_hwfn->hw_info.mtu = p_hwfn->mcp_info->func_info.mtu;
 
-	/* In case of forcing the driver's default resource allocation, calling
-	 * ecore_hw_get_resc() should come after initializing the personality
-	 * and after getting the number of functions, since the calculation of
-	 * the resources/features depends on them.
-	 * This order is not harmful if not forcing.
+	/* In case the current MF mode doesn't support VF-RDMA, there is no
+	 * need in VF CNQs.
+	 */
+	if (!OSAL_TEST_BIT(ECORE_MF_VF_RDMA, &p_hwfn->p_dev->mf_bits))
+		p_hwfn->num_vf_cnqs = 0;
+
+	/* Due to a dependency, ecore_hw_get_resc() should be called after
+	 * getting the number of functions on an engine, and after initializing
+	 * the personality.
 	 */
 	rc = ecore_hw_get_resc(p_hwfn, p_ptt, drv_resc_alloc);
 	if (rc != ECORE_SUCCESS && p_params->b_relaxed_probe) {
@@ -5485,7 +7310,7 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return rc;
 }
 
-#define ECORE_MAX_DEVICE_NAME_LEN (8)
+#define ECORE_MAX_DEVICE_NAME_LEN	(8)
 
 void ecore_get_dev_name(struct ecore_dev *p_dev, u8 *name, u8 max_chars)
 {
@@ -5493,7 +7318,8 @@ void ecore_get_dev_name(struct ecore_dev *p_dev, u8 *name, u8 max_chars)
 
 	n = OSAL_MIN_T(u8, max_chars, ECORE_MAX_DEVICE_NAME_LEN);
 	OSAL_SNPRINTF((char *)name, n, "%s %c%d",
-		      ECORE_IS_BB(p_dev) ? "BB" : "AH",
+		      ECORE_IS_BB(p_dev) ? "BB"
+					 : ECORE_IS_AH(p_dev) ? "AH" : "E5",
 		      'A' + p_dev->chip_rev, (int)p_dev->chip_metal);
 }
 
@@ -5519,6 +7345,9 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 	case ECORE_DEV_ID_MASK_AH:
 		p_dev->type = ECORE_DEV_TYPE_AH;
 		break;
+	case ECORE_DEV_ID_MASK_E5:
+		p_dev->type = ECORE_DEV_TYPE_E5;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Unknown device id 0x%x\n",
 			  p_dev->device_id);
@@ -5545,8 +7374,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 		/* For some reason we have problems with this register
 		 * in BB B0 emulation; Simply assume no CMT
 		 */
-		DP_NOTICE(p_dev->hwfns, false,
-			  "device on emul - assume no CMT\n");
+		DP_NOTICE(p_dev->hwfns, false, "device on emul - assume no CMT\n");
 		p_dev->num_hwfns = 1;
 	}
 #endif
@@ -5558,7 +7386,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 
 	DP_INFO(p_dev->hwfns,
 		"Chip details - %s %c%d, Num: %04x Rev: %02x Bond id: %02x Metal: %02x\n",
-		ECORE_IS_BB(p_dev) ? "BB" : "AH",
+		ECORE_IS_BB(p_dev) ? "BB" : ECORE_IS_AH(p_dev) ? "AH" : "E5",
 		'A' + p_dev->chip_rev, (int)p_dev->chip_metal,
 		p_dev->chip_num, p_dev->chip_rev, p_dev->chip_bond_id,
 		p_dev->chip_metal);
@@ -5568,6 +7396,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 			  "The chip type/rev (BB A0) is not supported!\n");
 		return ECORE_ABORTED;
 	}
+
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_dev) && ECORE_IS_AH(p_dev))
 		ecore_wr(p_hwfn, p_ptt, MISCS_REG_PLL_MAIN_CTRL_4, 0x1);
@@ -5581,7 +7410,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 		/* MISCS_REG_ECO_RESERVED[28]: emulation build w/ or w/o MAC */
 		p_dev->b_is_emul_mac = !!(tmp & (1 << 28));
 
-			DP_NOTICE(p_hwfn, false,
+		DP_NOTICE(p_hwfn, false,
 			  "Emulation: Running on a %s build %s MAC\n",
 			  p_dev->b_is_emul_full ? "full" : "reduced",
 			  p_dev->b_is_emul_mac ? "with" : "without");
@@ -5591,8 +7420,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-#ifndef LINUX_REMOVE
-void ecore_prepare_hibernate(struct ecore_dev *p_dev)
+void ecore_hw_hibernate_prepare(struct ecore_dev *p_dev)
 {
 	int j;
 
@@ -5602,19 +7430,41 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
 	for_each_hwfn(p_dev, j) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
-			   "Mark hw/fw uninitialized\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN, "Mark hw/fw uninitialized\n");
 
 		p_hwfn->hw_init_done = false;
 
 		ecore_ptt_invalidate(p_hwfn);
 	}
 }
-#endif
+
+void ecore_hw_hibernate_resume(struct ecore_dev *p_dev)
+{
+	int j = 0;
+
+	if (IS_VF(p_dev))
+		return;
+
+	for_each_hwfn(p_dev, j) {
+		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[j];
+		struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+
+		ecore_hw_hwfn_prepare(p_hwfn);
+
+		if (!p_ptt) {
+			DP_NOTICE(p_hwfn, false, "ptt acquire failed\n");
+		} else {
+			ecore_load_mcp_offsets(p_hwfn, p_ptt);
+			ecore_ptt_release(p_hwfn, p_ptt);
+		}
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP, "Reinitialized hw after low power state\n");
+	}
+}
 
 static enum _ecore_status_t
 ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 			void OSAL_IOMEM *p_doorbells, u64 db_phys_addr,
+			unsigned long db_size,
 			struct ecore_hw_prepare_params *p_params)
 {
 	struct ecore_mdump_retain_data mdump_retain;
@@ -5626,14 +7476,18 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 	p_hwfn->regview = p_regview;
 	p_hwfn->doorbells = p_doorbells;
 	p_hwfn->db_phys_addr = db_phys_addr;
+	p_hwfn->db_size = db_size;
+	p_hwfn->reg_offset = (u8 *)p_hwfn->regview -
+			     (u8 *)p_hwfn->p_dev->regview;
+	p_hwfn->db_offset = (u8 *)p_hwfn->doorbells -
+			    (u8 *)p_hwfn->p_dev->doorbells;
 
 	if (IS_VF(p_dev))
 		return ecore_vf_hw_prepare(p_hwfn, p_params);
 
 	/* Validate that chip access is feasible */
 	if (REG_RD(p_hwfn, PXP_PF_ME_OPAQUE_ADDR) == 0xffffffff) {
-		DP_ERR(p_hwfn,
-		       "Reading the ME register returns all Fs; Preventing further chip access\n");
+		DP_ERR(p_hwfn, "Reading the ME register returns all Fs; Preventing further chip access\n");
 		if (p_params->b_relaxed_probe)
 			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_ME;
 		return ECORE_INVAL;
@@ -5677,6 +7531,7 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 		if (val != 1) {
 			DP_ERR(p_hwfn,
 			       "PTT and GTT init in PGLUE_B didn't complete\n");
+			rc = ECORE_BUSY;
 			goto err1;
 		}
 
@@ -5710,21 +7565,19 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 		goto err2;
 	}
 
-	/* Sending a mailbox to the MFW should be after ecore_get_hw_info() is
-	 * called, since among others it sets the ports number in an engine.
-	 */
 	if (p_params->initiate_pf_flr && IS_LEAD_HWFN(p_hwfn) &&
 	    !p_dev->recov_in_prog) {
 		rc = ecore_mcp_initiate_pf_flr(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS)
 			DP_NOTICE(p_hwfn, false, "Failed to initiate PF FLR\n");
+	}
 
-		/* Workaround for MFW issue where PF FLR does not cleanup
-		 * IGU block
-		 */
-		if (!(p_hwfn->mcp_info->capabilities &
-		      FW_MB_PARAM_FEATURE_SUPPORT_IGU_CLEANUP))
-			ecore_pf_flr_igu_cleanup(p_hwfn);
+	/* NVRAM info initialization and population */
+	rc = ecore_mcp_nvm_info_populate(p_hwfn);
+	if (rc) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to populate nvm info shadow\n");
+		goto err2;
 	}
 
 	/* Check if mdump logs/data are present and update the epoch value */
@@ -5733,7 +7586,7 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 					      &mdump_info);
 		if (rc == ECORE_SUCCESS && mdump_info.num_of_logs)
 			DP_NOTICE(p_hwfn, false,
-				  "* * * IMPORTANT - HW ERROR register dump captured by device * * *\n");
+				  "mdump data available.\n");
 
 		rc = ecore_mcp_mdump_get_retain(p_hwfn, p_hwfn->p_main_ptt,
 						&mdump_retain);
@@ -5753,8 +7606,9 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 		DP_NOTICE(p_hwfn, false, "Failed to allocate the init array\n");
 		if (p_params->b_relaxed_probe)
 			p_params->p_relaxed_res = ECORE_HW_PREPARE_FAILED_MEM;
-		goto err2;
+		goto err3;
 	}
+
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_FPGA(p_dev)) {
 		if (ECORE_IS_AH(p_dev)) {
@@ -5772,6 +7626,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 #endif
 
 	return rc;
+err3:
+	ecore_mcp_nvm_info_free(p_hwfn);
 err2:
 	if (IS_LEAD_HWFN(p_hwfn))
 		ecore_iov_free_hw_info(p_dev);
@@ -5789,9 +7645,13 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 	enum _ecore_status_t rc;
 
 	p_dev->chk_reg_fifo = p_params->chk_reg_fifo;
+	p_dev->monitored_hw_addr = p_params->monitored_hw_addr;
 	p_dev->allow_mdump = p_params->allow_mdump;
 	p_hwfn->b_en_pacing = p_params->b_en_pacing;
 	p_dev->b_is_target = p_params->b_is_target;
+	p_hwfn->roce_edpm_mode = p_params->roce_edpm_mode;
+	p_hwfn->num_vf_cnqs = p_params->num_vf_cnqs;
+	p_hwfn->fs_accuracy = p_params->fs_accuracy;
 
 	if (p_params->b_relaxed_probe)
 		p_params->p_relaxed_res = ECORE_HW_PREPARE_SUCCESS;
@@ -5799,7 +7659,7 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 	/* Initialize the first hwfn - will learn number of hwfns */
 	rc = ecore_hw_prepare_single(p_hwfn, p_dev->regview,
 				     p_dev->doorbells, p_dev->db_phys_addr,
-				     p_params);
+				     p_dev->db_size, p_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -5825,13 +7685,14 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 		db_phys_addr = p_dev->db_phys_addr + offset;
 
 		p_dev->hwfns[1].b_en_pacing = p_params->b_en_pacing;
+		p_dev->hwfns[1].num_vf_cnqs = p_params->num_vf_cnqs;
 		/* prepare second hw function */
 		rc = ecore_hw_prepare_single(&p_dev->hwfns[1], p_regview,
 					     p_doorbell, db_phys_addr,
-					     p_params);
+					     p_dev->db_size, p_params);
 
 		/* in case of error, need to free the previously
-		 * initiliazed hwfn 0.
+		 * initialized hwfn 0.
 		 */
 		if (rc != ECORE_SUCCESS) {
 			if (p_params->b_relaxed_probe)
@@ -5840,6 +7701,7 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 			if (IS_PF(p_dev)) {
 				ecore_init_free(p_hwfn);
+				ecore_mcp_nvm_info_free(p_hwfn);
 				ecore_mcp_free(p_hwfn);
 				ecore_hw_hwfn_free(p_hwfn);
 			} else {
@@ -5854,12 +7716,13 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 void ecore_hw_remove(struct ecore_dev *p_dev)
 {
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	int i;
 
-	if (IS_PF(p_dev))
+	if (IS_PF(p_dev)) {
+		struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 		ecore_mcp_ov_update_driver_state(p_hwfn, p_hwfn->p_main_ptt,
-					ECORE_OV_DRIVER_STATE_NOT_LOADED);
+						 ECORE_OV_DRIVER_STATE_NOT_LOADED);
+	}
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -5870,6 +7733,7 @@ void ecore_hw_remove(struct ecore_dev *p_dev)
 		}
 
 		ecore_init_free(p_hwfn);
+		ecore_mcp_nvm_info_free(p_hwfn);
 		ecore_hw_hwfn_free(p_hwfn);
 		ecore_mcp_free(p_hwfn);
 
@@ -5878,6 +7742,9 @@ void ecore_hw_remove(struct ecore_dev *p_dev)
 #endif
 	}
 
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	OSAL_SPIN_LOCK_DEALLOC(&p_dev->internal_trace.lock);
+#endif
 	ecore_iov_free_hw_info(p_dev);
 }
 
@@ -5903,7 +7770,7 @@ static void ecore_chain_free_next_ptr(struct ecore_dev *p_dev,
 		p_phys_next = HILO_DMA_REGPAIR(p_next->next_phys);
 
 		OSAL_DMA_FREE_COHERENT(p_dev, p_virt, p_phys,
-				       ECORE_CHAIN_PAGE_SIZE);
+				       p_chain->page_size);
 
 		p_virt = p_virt_next;
 		p_phys = p_phys_next;
@@ -6168,11 +8035,16 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
 enum _ecore_status_t ecore_fw_l2_queue(struct ecore_hwfn *p_hwfn,
 				       u16 src_id, u16 *dst_id)
 {
+	if (!RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
+		DP_NOTICE(p_hwfn, false, "No L2 queue is available\n");
+		return ECORE_INVAL;
+	}
+
 	if (src_id >= RESC_NUM(p_hwfn, ECORE_L2_QUEUE)) {
 		u16 min, max;
 
 		min = (u16)RESC_START(p_hwfn, ECORE_L2_QUEUE);
-		max = min + RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
+		max = min + RESC_NUM(p_hwfn, ECORE_L2_QUEUE) - 1;
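+		/* e.g., with RESC_START = 16 and RESC_NUM = 8 (illustrative
+		 * values), the valid range printed below is [16 - 23]
+		 */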
 		DP_NOTICE(p_hwfn, true,
 			  "l2_queue id [%d] is not valid, available indices [%d - %d]\n",
 			  src_id, min, max);
@@ -6188,11 +8060,16 @@ enum _ecore_status_t ecore_fw_l2_queue(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_fw_vport(struct ecore_hwfn *p_hwfn,
 				    u8 src_id, u8 *dst_id)
 {
+	if (!RESC_NUM(p_hwfn, ECORE_VPORT)) {
+		DP_NOTICE(p_hwfn, false, "No vport is available\n");
+		return ECORE_INVAL;
+	}
+
 	if (src_id >= RESC_NUM(p_hwfn, ECORE_VPORT)) {
 		u8 min, max;
 
 		min = (u8)RESC_START(p_hwfn, ECORE_VPORT);
-		max = min + RESC_NUM(p_hwfn, ECORE_VPORT);
+		max = min + RESC_NUM(p_hwfn, ECORE_VPORT) - 1;
 		DP_NOTICE(p_hwfn, true,
 			  "vport id [%d] is not valid, available indices [%d - %d]\n",
 			  src_id, min, max);
@@ -6208,11 +8085,16 @@ enum _ecore_status_t ecore_fw_vport(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
 				      u8 src_id, u8 *dst_id)
 {
+	if (!RESC_NUM(p_hwfn, ECORE_RSS_ENG)) {
+		DP_NOTICE(p_hwfn, false, "No RSS engine is available\n");
+		return ECORE_INVAL;
+	}
+
 	if (src_id >= RESC_NUM(p_hwfn, ECORE_RSS_ENG)) {
 		u8 min, max;
 
 		min = (u8)RESC_START(p_hwfn, ECORE_RSS_ENG);
-		max = min + RESC_NUM(p_hwfn, ECORE_RSS_ENG);
+		max = min + RESC_NUM(p_hwfn, ECORE_RSS_ENG) - 1;
 		DP_NOTICE(p_hwfn, true,
 			  "rss_eng id [%d] is not valid, available indices [%d - %d]\n",
 			  src_id, min, max);
@@ -6229,17 +8111,17 @@ enum _ecore_status_t
 ecore_llh_set_function_as_default(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt)
 {
-	if (OSAL_GET_BIT(ECORE_MF_NEED_DEF_PF, &p_hwfn->p_dev->mf_bits)) {
+	if (OSAL_TEST_BIT(ECORE_MF_NEED_DEF_PF, &p_hwfn->p_dev->mf_bits)) {
 		ecore_wr(p_hwfn, p_ptt,
 			 NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR,
 			 1 << p_hwfn->abs_pf_id / 2);
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_MSG_INFO, 0);
 		return ECORE_SUCCESS;
+	} else {
+		DP_NOTICE(p_hwfn, false,
+			  "This function can't be set as default\n");
+		return ECORE_INVAL;
 	}
-
-	DP_NOTICE(p_hwfn, false,
-		  "This function can't be set as default\n");
-	return ECORE_INVAL;
 }
 
 static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
@@ -6366,7 +8248,6 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 		DP_ERR(p_hwfn, "Invalid coalesce value - %d\n", coalesce);
 		return ECORE_INVAL;
 	}
-
 	timeset = (u8)(coalesce >> timer_res);
 
 	rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res,
@@ -6385,7 +8266,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 /* Calculate final WFQ values for all vports and configure it.
  * After this configuration each vport must have
- * approx min rate =  wfq * min_pf_rate / ECORE_WFQ_UNIT
+ * approx min rate =  vport_wfq * min_pf_rate / ECORE_WFQ_UNIT
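+ * For example (illustrative numbers only): with min_pf_rate = 10000 Mbps,
+ * ECORE_WFQ_UNIT = 100 and vport_wfq = 25, the vport is guaranteed roughly
+ * 25 * 10000 / 100 = 2500 Mbps.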
  */
 static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt,
@@ -6400,7 +8281,7 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 		u32 wfq_speed = p_hwfn->qm_info.wfq_data[i].min_speed;
 
 		vport_params[i].wfq = (wfq_speed * ECORE_WFQ_UNIT) /
-		    min_pf_rate;
+				      min_pf_rate;
 		ecore_init_vport_wfq(p_hwfn, p_ptt,
 				     vport_params[i].first_tx_pq_id,
 				     vport_params[i].wfq);
@@ -6408,6 +8289,7 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 }
 
 static void ecore_init_wfq_default_param(struct ecore_hwfn *p_hwfn)
 {
 	int i;
 
@@ -6447,8 +8329,7 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn,
 
 	num_vports = p_hwfn->qm_info.num_vports;
 
-/* Accounting for the vports which are configured for WFQ explicitly */
-
+	/* Accounting for the vports which are configured for WFQ explicitly */
 	for (i = 0; i < num_vports; i++) {
 		u32 tmp_speed;
 
@@ -6466,24 +8347,23 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn,
 
 	/* validate possible error cases */
 	if (req_rate < min_pf_rate / ECORE_WFQ_UNIT) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-			   "Vport [%d] - Requested rate[%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
-			   vport_id, req_rate, min_pf_rate);
+		DP_ERR(p_hwfn,
+		       "Vport [%d] - Requested rate[%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
+		       vport_id, req_rate, min_pf_rate);
 		return ECORE_INVAL;
 	}
 
 	/* TBD - for number of vports greater than 100 */
 	if (num_vports > ECORE_WFQ_UNIT) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-			   "Number of vports is greater than %d\n",
-			   ECORE_WFQ_UNIT);
+		DP_ERR(p_hwfn, "Number of vports is greater than %d\n",
+		       ECORE_WFQ_UNIT);
 		return ECORE_INVAL;
 	}
 
 	if (total_req_min_rate > min_pf_rate) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-			   "Total requested min rate for all vports[%d Mbps] is greater than configured PF min rate[%d Mbps]\n",
-			   total_req_min_rate, min_pf_rate);
+		DP_ERR(p_hwfn,
+		       "Total requested min rate for all vports[%d Mbps] is greater than configured PF min rate[%d Mbps]\n",
+		       total_req_min_rate, min_pf_rate);
 		return ECORE_INVAL;
 	}
 
@@ -6493,9 +8373,9 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn,
 
 	/* validate if non requested get < 1% of min bw */
 	if (left_rate_per_vp < min_pf_rate / ECORE_WFQ_UNIT) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-			   "Non WFQ configured vports rate [%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
-			   left_rate_per_vp, min_pf_rate);
+		DP_ERR(p_hwfn,
+		       "Non WFQ configured vports rate [%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
+		       left_rate_per_vp, min_pf_rate);
 		return ECORE_INVAL;
 	}
 
@@ -6621,8 +8501,8 @@ void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
 
 	/* TBD - for multiple hardware functions - that is 100 gig */
 	if (ECORE_IS_CMT(p_dev)) {
-		DP_VERBOSE(p_dev, ECORE_MSG_LINK,
-			   "WFQ configuration is not supported for this device\n");
+		DP_ERR(p_dev,
+		       "WFQ configuration is not supported for this device\n");
 		return;
 	}
 
@@ -6778,7 +8658,7 @@ void ecore_clean_wfq_db(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 
 	OSAL_MEMSET(p_hwfn->qm_info.wfq_data, 0,
 		    sizeof(*p_hwfn->qm_info.wfq_data) *
 		    p_hwfn->qm_info.num_vports);
 }
 
 int ecore_device_num_engines(struct ecore_dev *p_dev)
@@ -6804,6 +8684,15 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb,
 	((u8 *)fw_lsb)[1] = mac[4];
 }
 
+void ecore_set_dev_access_enable(struct ecore_dev *p_dev, bool b_enable)
+{
+	if (p_dev->recov_in_prog != !b_enable) {
+		DP_INFO(p_dev, "%s access to the device\n",
+			b_enable ? "Enable" : "Disable");
+		p_dev->recov_in_prog = !b_enable;
+	}
+}
+
 void ecore_set_platform_str(struct ecore_hwfn *p_hwfn,
 			    char *buf_str, u32 buf_size)
 {
@@ -6819,5 +8708,19 @@ void ecore_set_platform_str(struct ecore_hwfn *p_hwfn,
 
 bool ecore_is_mf_fip_special(struct ecore_dev *p_dev)
 {
-	return !!OSAL_GET_BIT(ECORE_MF_FIP_SPECIAL, &p_dev->mf_bits);
+	return !!OSAL_TEST_BIT(ECORE_MF_FIP_SPECIAL, &p_dev->mf_bits);
 }
+
+bool ecore_is_dscp_to_tc_capable(struct ecore_dev *p_dev)
+{
+	return !!OSAL_TEST_BIT(ECORE_MF_DSCP_TO_TC_MAP, &p_dev->mf_bits);
+}
+
+u8 ecore_get_num_funcs_on_engine(struct ecore_hwfn *p_hwfn)
+{
+	return p_hwfn->num_funcs_on_engine;
+}
+
+#ifdef _NTDDK_
+#pragma warning(pop)
+#endif
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 9ddf502eb..1ffe286d7 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_DEV_API_H__
 #define __ECORE_DEV_API_H__
 
@@ -11,6 +11,15 @@
 #include "ecore_chain.h"
 #include "ecore_int_api.h"
 
+#define ECORE_DEFAULT_ILT_PAGE_SIZE 4
+
+struct ecore_wake_info {
+	u32 wk_info;
+	u32 wk_details;
+	u32 wk_pkt_len;
+	u8  wk_buffer[256];
+};
+
 /**
  * @brief ecore_init_dp - initialize the debug level
  *
@@ -24,6 +33,26 @@ void ecore_init_dp(struct ecore_dev *p_dev,
 		   u8 dp_level,
 		   void *dp_ctx);
 
+/**
+ * @brief ecore_init_int_dp - initialize the internal debug level
+ *
+ * @param p_dev
+ * @param dp_module
+ * @param dp_level
+ */
+void ecore_init_int_dp(struct ecore_dev *p_dev,
+		       u32 dp_module,
+		       u8 dp_level);
+
+/**
+ * @brief ecore_dp_internal_log - store into internal log
+ *
+ * @param cdev
+ * @param fmt - printf-style format string, followed by its arguments
+ */
+void ecore_dp_internal_log(struct ecore_dev *cdev, char *fmt, ...);
+
 /**
  * @brief ecore_init_struct - initialize the device structure to
  *        its defaults
@@ -84,7 +113,7 @@ struct ecore_drv_load_params {
 #define ECORE_LOAD_REQ_LOCK_TO_NONE	255
 
 	/* Action to take in case the MFW doesn't support timeout values other
-	 * than default and none.
+	 * then default and none.
 	 */
 	enum ecore_mfw_timeout_fallback mfw_timeout_fallback;
 
@@ -104,10 +133,12 @@ struct ecore_hw_init_params {
 	/* Interrupt mode [msix, inta, etc.] to use */
 	enum ecore_int_mode int_mode;
 
-	/* NPAR tx switching to be used for vports configured for tx-switching
-	 */
+	/* NPAR tx switching to be used for vports configured for tx-switching */
 	bool allow_npar_tx_switch;
 
+	/* PCI relax ordering to be configured by MFW or ecore client */
+	enum ecore_pci_rlx_odr pci_rlx_odr_mode;
+
 	/* Binary fw data pointer in binary fw file */
 	const u8 *bin_fw_data;
 
@@ -161,61 +192,23 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev);
  */
 enum _ecore_status_t ecore_hw_stop_fastpath(struct ecore_dev *p_dev);
 
-#ifndef LINUX_REMOVE
 /**
- * @brief ecore_prepare_hibernate -should be called when
+ * @brief ecore_hw_hibernate_prepare - should be called when
  *        the system is going into the hibernate state
  *
  * @param p_dev
  *
  */
-void ecore_prepare_hibernate(struct ecore_dev *p_dev);
-
-enum ecore_db_rec_width {
-	DB_REC_WIDTH_32B,
-	DB_REC_WIDTH_64B,
-};
-
-enum ecore_db_rec_space {
-	DB_REC_KERNEL,
-	DB_REC_USER,
-};
+void ecore_hw_hibernate_prepare(struct ecore_dev *p_dev);
 
 /**
- * @brief db_recovery_add - add doorbell information to the doorbell
- * recovery mechanism.
+ * @brief ecore_hw_hibernate_resume - should be called when the system is
+ *	  resuming from D3 power state and before calling ecore_hw_init.
  *
- * @param p_dev
- * @param db_addr - doorbell address
- * @param db_data - address of where db_data is stored
- * @param db_width - doorbell is 32b pr 64b
- * @param db_space - doorbell recovery addresses are user or kernel space
- */
-enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev,
-					   void OSAL_IOMEM *db_addr,
-					   void *db_data,
-					   enum ecore_db_rec_width db_width,
-					   enum ecore_db_rec_space db_space);
-
-/**
- * @brief db_recovery_del - remove doorbell information from the doorbell
- * recovery mechanism. db_data serves as key (db_addr is not unique).
+ * @param p_dev
  *
- * @param cdev
- * @param db_addr - doorbell address
- * @param db_data - address where db_data is stored. Serves as key for the
- *                  entry to delete.
  */
-enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev,
-					   void OSAL_IOMEM *db_addr,
-					   void *db_data);
-
-static OSAL_INLINE bool ecore_is_mf_ufp(struct ecore_hwfn *p_hwfn)
-{
-	return !!OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits);
-}
-
-#endif
+void ecore_hw_hibernate_resume(struct ecore_dev *p_dev);
 
 /**
  * @brief ecore_hw_start_fastpath -restart fastpath traffic,
@@ -246,6 +239,12 @@ enum ecore_hw_prepare_result {
 	ECORE_HW_PREPARE_BAD_IGU,
 };
 
+enum ECORE_ROCE_EDPM_MODE {
+	ECORE_ROCE_EDPM_MODE_ENABLE	= 0,
+	ECORE_ROCE_EDPM_MODE_FORCE_ON	= 1,
+	ECORE_ROCE_EDPM_MODE_DISABLE	= 2,
+};
+
 struct ecore_hw_prepare_params {
 	/* Personality to initialize */
 	int personality;
@@ -256,6 +255,9 @@ struct ecore_hw_prepare_params {
 	/* Check the reg_fifo after any register access */
 	bool chk_reg_fifo;
 
+	/* Monitored address by ecore_rd()/ecore_wr() */
+	u32 monitored_hw_addr;
+
 	/* Request the MFW to initiate PF FLR */
 	bool initiate_pf_flr;
 
@@ -278,8 +280,20 @@ struct ecore_hw_prepare_params {
 	/* Indicates whether this PF serves a storage target */
 	bool b_is_target;
 
+	/* EDPM can be enabled/forced_on/disabled */
+	u8 roce_edpm_mode;
+
 	/* retry count for VF acquire on channel timeout */
 	u8 acquire_retry_cnt;
+
+	/* Num of VF CNQs resources that will be requested */
+	u8 num_vf_cnqs;
+
+	/* Flow steering statistics accuracy */
+	u8 fs_accuracy;
+
+	/* Disable SRIOV */
+	bool b_sriov_disable;
 };
 
 /**
@@ -300,6 +314,43 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
  */
 void ecore_hw_remove(struct ecore_dev *p_dev);
 
+/**
+ * @brief ecore_set_nwuf_reg - configure a wake-up frame (pattern) register
+ *
+ * @param p_dev
+ * @param reg_idx - Index of the pattern register
+ * @param pattern_size - size of pattern
+ * @param crc - CRC value of patter & mask
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_set_nwuf_reg(struct ecore_dev *p_dev,
+					u32 reg_idx, u32 pattern_size, u32 crc);
+
+/**
+ * @brief ecore_get_wake_info - get magic packet buffer
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param wake_info - pointer to ecore_wake_info buffer
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_get_wake_info(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt,
+					 struct ecore_wake_info *wake_info);
+
+/**
+ * @brief ecore_wol_buffer_clear - Clear magic packet buffer
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return void
+ */
+void ecore_wol_buffer_clear(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt);
+
 /**
  * @brief ecore_ptt_acquire - Allocate a PTT window
  *
@@ -325,6 +376,18 @@ struct ecore_ptt *ecore_ptt_acquire(struct ecore_hwfn *p_hwfn);
 void ecore_ptt_release(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt);
 
+/**
+ * @brief ecore_get_dev_name - get device name, e.g., "BB B0"
+ *
+ * @param p_dev
+ * @param name - this is where the name will be written to
+ * @param max_chars - maximum chars that can be written to name including '\0'
+ */
+void ecore_get_dev_name(struct ecore_dev *p_dev,
+			u8 *name,
+			u8 max_chars);
+
+#ifndef __EXTRACT__LINUX__IF__
 struct ecore_eth_stats_common {
 	u64 no_buff_discards;
 	u64 packet_too_big_discard;
@@ -417,6 +480,7 @@ struct ecore_eth_stats {
 		struct ecore_eth_stats_ah ah;
 	};
 };
+#endif
 
 /**
  * @brief ecore_chain_alloc - Allocate and initialize a chain
@@ -488,6 +552,7 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
 				      u8 src_id,
 				      u8 *dst_id);
 
+#define ECORE_LLH_DONT_CARE 0
 /**
  * @brief ecore_llh_get_num_ppfid - Return the allocated number of LLH filter
  *	banks that are allocated to the PF.
@@ -548,7 +613,7 @@ enum _ecore_status_t ecore_llh_set_roce_affinity(struct ecore_dev *p_dev,
  * @return enum _ecore_status_t
  */
 enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
-					      u8 mac_addr[ETH_ALEN]);
+					      u8 mac_addr[ECORE_ETH_ALEN]);
 
 /**
  * @brief ecore_llh_remove_mac_filter - Remove a LLH MAC filter from the given
@@ -559,7 +624,7 @@ enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
  * @param mac_addr - MAC to remove
  */
 void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
-				 u8 mac_addr[ETH_ALEN]);
+				 u8 mac_addr[ECORE_ETH_ALEN]);
 
 enum ecore_llh_prot_filter_type_t {
 	ECORE_LLH_FILTER_ETHERTYPE,
@@ -571,6 +636,30 @@ enum ecore_llh_prot_filter_type_t {
 	ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT
 };
 
+/**
+ * @brief ecore_llh_add_dst_tcp_port_filter - Add a destination tcp port
+ *	LLH filter into the given filter bank.
+ *
+ * @param p_dev
+ * @param dest_port - destination port to add
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_llh_add_dst_tcp_port_filter(struct ecore_dev *p_dev, u16 dest_port);
+
+/**
+ * @brief ecore_llh_add_src_tcp_port_filter - Add a source tcp port filter
+ *	into the given filter bank.
+ *
+ * @param p_dev
+ * @param src_port - source port to add
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_llh_add_src_tcp_port_filter(struct ecore_dev *p_dev, u16 src_port);
+
 /**
  * @brief ecore_llh_add_protocol_filter - Add a LLH protocol filter into the
  *	given filter bank.
@@ -588,6 +677,30 @@ ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
 			      enum ecore_llh_prot_filter_type_t type,
 			      u16 source_port_or_eth_type, u16 dest_port);
 
+/**
+ * @brief ecore_llh_remove_dst_tcp_port_filter - Remove a destination tcp
+ *	port LLH filter from the given filter bank.
+ *
+ * @param p_dev
+ * @param dest_port - destination port
+ */
+void ecore_llh_remove_dst_tcp_port_filter(struct ecore_dev *p_dev,
+					  u16 dest_port);
+
+/**
+ * @brief ecore_llh_remove_src_tcp_port_filter - Remove a source tcp
+ *	port LLH filter from the given filter bank.
+ *
+ * @param p_dev
+ * @param src_port - source port
+ *
+void ecore_llh_remove_src_tcp_port_filter(struct ecore_dev *p_dev,
+					  u16 src_port);
+
 /**
  * @brief ecore_llh_remove_protocol_filter - Remove a LLH protocol filter from
  *	the given filter bank.
@@ -677,6 +790,16 @@ enum _ecore_status_t
 ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal,
 			 u16 tx_coal, void *p_handle);
 
+/**
+ * @brief - Recalculate feature distributions based on HW resources and
+ * user inputs. Currently this affects RDMA_CNQ, PF_L2_QUE and VF_L2_QUE.
+ * As a result, this must not be called while RDMA is active or while VFs
+ * are enabled.
+ *
+ * @param p_hwfn
+ */
+void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn);
+
 /**
  * @brief ecore_pglueb_set_pfid_enable - Enable or disable PCI BUS MASTER
  *
@@ -690,6 +813,116 @@ enum _ecore_status_t ecore_pglueb_set_pfid_enable(struct ecore_hwfn *p_hwfn,
 						  struct ecore_ptt *p_ptt,
 						  bool b_enable);
 
+#ifndef __EXTRACT__LINUX__IF__
+enum ecore_db_rec_width {
+	DB_REC_WIDTH_32B,
+	DB_REC_WIDTH_64B,
+};
+
+enum ecore_db_rec_space {
+	DB_REC_KERNEL,
+	DB_REC_USER,
+};
+#endif
+
+/**
+ * @brief db_recovery_add - add doorbell information to the doorbell
+ * recovery mechanism.
+ *
+ * @param p_dev
+ * @param db_addr - doorbell address
+ * @param db_data - address of where db_data is stored
+ * @param db_width - doorbell is 32b or 64b
+ * @param db_space - doorbell recovery addresses are user or kernel space
+ */
+enum _ecore_status_t ecore_db_recovery_add(struct ecore_dev *p_dev,
+					   void OSAL_IOMEM *db_addr,
+					   void *db_data,
+					   enum ecore_db_rec_width db_width,
+					   enum ecore_db_rec_space db_space);
+
+/**
+ * @brief db_recovery_del - remove doorbell information from the doorbell
+ * recovery mechanism. db_data serves as key (db_addr is not unique).
+ *
+ * @param cdev
+ * @param db_addr - doorbell address
+ * @param db_data - address where db_data is stored. Serves as key for the
+ *                  entry to delete.
+ */
+enum _ecore_status_t ecore_db_recovery_del(struct ecore_dev *p_dev,
+					   void OSAL_IOMEM *db_addr,
+					   void *db_data);
+
+#ifndef __EXTRACT__LINUX__THROW__
+static OSAL_INLINE bool ecore_is_mf_ufp(struct ecore_hwfn *p_hwfn)
+{
+	return !!OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits);
+}
+#endif
+
+/**
+ * @brief ecore_set_dev_access_enable - Enable or disable access to the device
+ *
+ * @param p_dev
+ * @param b_enable - true/false
+ */
+void ecore_set_dev_access_enable(struct ecore_dev *p_dev, bool b_enable);
+
+/**
+ * @brief ecore_set_ilt_page_size - Set ILT page size
+ *
+ * @param p_dev
+ * @param ilt_size
+ */
+void ecore_set_ilt_page_size(struct ecore_dev *p_dev, u8 ilt_size);
+
+/**
+ * @brief Create Lag
+ *
+ *        Should be called when two ports of the same device are bonded or
+ *        unbonded, or when the link status changes.
+ *
+ * @param dev
+ * @param lag_type: LAG_TYPE_NONE: Disable lag
+ *	 LAG_TYPE_ACTIVEACTIVE: Utilize all ports
+ *	 LAG_TYPE_ACTIVEBACKUP: Configure all queues to the
+ *	 active port
+ * @param link_change_cb: Callback function to call if port
+ *	settings change, such as dcbx.
+ * @param cxt: Parameter that will be passed to the
+ *	link_change_cb function
+ * @param active_ports: Bitmap, each bit represents whether the
+ *	 port is active or not (1 - active)
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_lag_create(struct ecore_dev *dev,
+				      enum ecore_lag_type lag_type,
+				      void (*link_change_cb)(void *cxt),
+				      void *cxt,
+				      u8 active_ports);
+/**
+ * @brief Modify lag link status of a given port
+ *
+ * @param port_id: the port id that changed
+ * @param link_active: current link state
+ */
+enum _ecore_status_t ecore_lag_modify(struct ecore_dev *dev,
+				      u8 port_id,
+				      u8 link_active);
+
+/**
+ * @brief Exit lag mode
+ *
+ * @param dev
+ */
+enum _ecore_status_t ecore_lag_destroy(struct ecore_dev *dev);
+
+bool ecore_lag_is_active(struct ecore_hwfn *p_hwfn);
+
 /**
  * @brief Whether FIP discovery fallback special mode is enabled or not.
  *
@@ -698,4 +931,23 @@ enum _ecore_status_t ecore_pglueb_set_pfid_enable(struct ecore_hwfn *p_hwfn,
  * @return true if device is in FIP special mode, false otherwise.
  */
 bool ecore_is_mf_fip_special(struct ecore_dev *p_dev);
+
+/**
+ * @brief Whether device allows DSCP to TC mapping or not.
+ *
+ * @param cdev
+ *
+ * @return true if device allows dscp to tc mapping.
+ */
+bool ecore_is_dscp_to_tc_capable(struct ecore_dev *p_dev);
+
+/**
+ * @brief Returns the number of PFs on the engine.
+ *
+ * @param p_hwfn
+ *
+ * @return u8 - Number of PFs on the engine.
+ */
+u8 ecore_get_num_funcs_on_engine(struct ecore_hwfn *p_hwfn);
+
 #endif
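
A hedged usage sketch of the doorbell-recovery API declared above; the
surrounding context (p_dev, db_addr, and keeping db_data up to date in the
fastpath) is assumed rather than taken from this patch:

	enum _ecore_status_t rc;
	u64 db_data;	/* last doorbell value, replayed on recovery */

	rc = ecore_db_recovery_add(p_dev, db_addr, &db_data,
				   DB_REC_WIDTH_64B, DB_REC_KERNEL);
	if (rc != ECORE_SUCCESS)
		return rc;

	/* ... fastpath updates db_data and rings db_addr ... */

	/* db_data is the key; db_addr alone is not unique */
	rc = ecore_db_recovery_del(p_dev, db_addr, &db_data);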
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index f5b11eb28..e21e5a5df 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -1,60 +1,75 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef GTT_REG_ADDR_H
 #define GTT_REG_ADDR_H
 
-/* Win 2 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_IGU_CMD                                      0x00f000UL
+/**
+ * Win 2
+ */
+#define GTT_BAR0_MAP_REG_IGU_CMD 0x00f000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 3 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_TSDM_RAM                                     0x010000UL
+/**
+ * Win 3
+ */
+#define GTT_BAR0_MAP_REG_TSDM_RAM 0x010000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 4 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_MSDM_RAM                                     0x011000UL
+/**
+ * Win 4
+ */
+#define GTT_BAR0_MAP_REG_MSDM_RAM 0x011000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 5 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_MSDM_RAM_1024                                0x012000UL
+/**
+ * Win 5
+ */
+#define GTT_BAR0_MAP_REG_MSDM_RAM_1024 0x012000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 6 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_MSDM_RAM_2048                                0x013000UL
+/**
+ * Win 6
+ */
+#define GTT_BAR0_MAP_REG_MSDM_RAM_2048 0x013000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 7 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_USDM_RAM                                     0x014000UL
+/**
+ * Win 7
+ */
+#define GTT_BAR0_MAP_REG_USDM_RAM 0x014000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 8 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x015000UL
+/**
+ * Win 8
+ */
+#define GTT_BAR0_MAP_REG_USDM_RAM_1024 0x015000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 9 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x016000UL
+/**
+ * Win 9
+ */
+#define GTT_BAR0_MAP_REG_USDM_RAM_2048 0x016000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 10 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x017000UL
+/**
+ * Win 10
+ */
+#define GTT_BAR0_MAP_REG_XSDM_RAM 0x017000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 11 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_XSDM_RAM_1024                                0x018000UL
+/**
+ * Win 11
+ */
+#define GTT_BAR0_MAP_REG_XSDM_RAM_1024 0x018000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 12 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x019000UL
+/**
+ * Win 12
+ */
+#define GTT_BAR0_MAP_REG_YSDM_RAM 0x019000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 13 */
-//Access:RW   DataWidth:0x20   //
-#define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x01a000UL
+/**
+ * Win 13
+ */
+#define GTT_BAR0_MAP_REG_PSDM_RAM 0x01a000UL /* Access:RW DataWidth:0x20 */
 
-/* Win 14 */
+/**
+ * Win 14
+ */
+#define GTT_BAR0_MAP_REG_IGU_CMD_EXT_E5 0x01b000UL /* Access:RW DataWidth:0x20 */
 
 #endif
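
For orientation, the GTT windows above are fixed BAR0 mappings, so they can
be accessed without acquiring a PTT. A sketch of an IGU command write through
Win 2, where cmd_addr and cmd are assumed to come from the caller:

	u32 igu_addr = GTT_BAR0_MAP_REG_IGU_CMD + (cmd_addr << 3);

	DIRECT_REG_WR(p_hwfn, (u8 OSAL_IOMEM *)p_hwfn->regview + igu_addr, cmd);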
diff --git a/drivers/net/qede/base/ecore_gtt_values.h b/drivers/net/qede/base/ecore_gtt_values.h
index 2035bed5c..6752178b6 100644
--- a/drivers/net/qede/base/ecore_gtt_values.h
+++ b/drivers/net/qede/base/ecore_gtt_values.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __PREVENT_PXP_GLOBAL_WIN__
 
 static u32 pxp_global_win[] = {
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 638178c30..80812d4ce 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -3191,7 +3191,7 @@ enum nvm_image_type {
 	NVM_TYPE_INIT_HW = 0x19,
 	NVM_TYPE_DEFAULT_CFG = 0x1a,
 	NVM_TYPE_MDUMP = 0x1b,
-	NVM_TYPE_META = 0x1c,
+	NVM_TYPE_NVM_META = 0x1c, /* @DPDK */
 	NVM_TYPE_ISCSI_CFG = 0x1d,
 	NVM_TYPE_FCOE_CFG = 0x1f,
 	NVM_TYPE_ETH_PHY_FW1 = 0x20,
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 1db39d6a3..e4f9b5e85 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "ecore_hsi_common.h"
 #include "ecore_status.h"
@@ -14,6 +14,14 @@
 #include "ecore_iov_api.h"
 #include "ecore_gtt_values.h"
 #include "ecore_dev_api.h"
+#include "ecore_mcp.h"
+
+#ifdef _NTDDK_
+#pragma warning(push)
+#pragma warning(disable : 28167)
+#pragma warning(disable : 28123)
+#pragma warning(disable : 28121)
+#endif
 
 #ifndef ASIC_ONLY
 #define ECORE_EMUL_FACTOR 2000
@@ -26,19 +34,19 @@
 #define ECORE_BAR_INVALID_OFFSET	(OSAL_CPU_TO_LE32(-1))
 
 struct ecore_ptt {
-	osal_list_entry_t list_entry;
-	unsigned int idx;
-	struct pxp_ptt_entry pxp;
-	u8 hwfn_id;
+	osal_list_entry_t	list_entry;
+	unsigned int		idx;
+	struct pxp_ptt_entry	pxp;
+	u8			hwfn_id;
 };
 
 struct ecore_ptt_pool {
-	osal_list_t free_list;
-	osal_spinlock_t lock; /* ptt synchronized access */
-	struct ecore_ptt ptts[PXP_EXTERNAL_BAR_PF_WINDOW_NUM];
+	osal_list_t		free_list;
+	osal_spinlock_t		lock; /* ptt synchronized access */
+	struct ecore_ptt	ptts[PXP_EXTERNAL_BAR_PF_WINDOW_NUM];
 };
 
-void __ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn)
+static void __ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn)
 {
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_ptt_pool);
 	p_hwfn->p_ptt_pool = OSAL_NULL;
@@ -71,12 +79,13 @@ enum _ecore_status_t ecore_ptt_pool_alloc(struct ecore_hwfn *p_hwfn)
 
 	p_hwfn->p_ptt_pool = p_pool;
 #ifdef CONFIG_ECORE_LOCK_ALLOC
-	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_pool->lock)) {
+	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_pool->lock, "ptt_lock")) {
 		__ecore_ptt_pool_free(p_hwfn);
 		return ECORE_NOMEM;
 	}
 #endif
 	OSAL_SPIN_LOCK_INIT(&p_pool->lock);
+
 	return ECORE_SUCCESS;
 }
 
@@ -122,10 +131,10 @@ struct ecore_ptt *ecore_ptt_acquire(struct ecore_hwfn *p_hwfn)
 	/* Take the free PTT from the list */
 	for (i = 0; i < ECORE_BAR_ACQUIRE_TIMEOUT; i++) {
 		OSAL_SPIN_LOCK(&p_hwfn->p_ptt_pool->lock);
+
 		if (!OSAL_LIST_IS_EMPTY(&p_hwfn->p_ptt_pool->free_list)) {
-			p_ptt = OSAL_LIST_FIRST_ENTRY(
-						&p_hwfn->p_ptt_pool->free_list,
-						struct ecore_ptt, list_entry);
+			p_ptt = OSAL_LIST_FIRST_ENTRY(&p_hwfn->p_ptt_pool->free_list,
+						      struct ecore_ptt, list_entry);
 			OSAL_LIST_REMOVE_ENTRY(&p_ptt->list_entry,
 					       &p_hwfn->p_ptt_pool->free_list);
 
@@ -141,17 +150,15 @@ struct ecore_ptt *ecore_ptt_acquire(struct ecore_hwfn *p_hwfn)
 		OSAL_MSLEEP(1);
 	}
 
-	DP_NOTICE(p_hwfn, true,
-		  "PTT acquire timeout - failed to allocate PTT\n");
+	DP_NOTICE(p_hwfn, true, "PTT acquire timeout - failed to allocate PTT\n");
 	return OSAL_NULL;
 }
 
-void ecore_ptt_release(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+void ecore_ptt_release(struct ecore_hwfn *p_hwfn,
+		       struct ecore_ptt *p_ptt)
 {
 	/* This PTT should not be set to pretend if it is being released */
-	/* TODO - add some pretend sanity checks, to make sure pretend
-	 * isn't set on this ptt
-	 */
+	/* TODO - add some pretend sanity checks, to make sure pretend isn't set on this ptt */
 
 	OSAL_SPIN_LOCK(&p_hwfn->p_ptt_pool->lock);
 	OSAL_LIST_PUSH_HEAD(&p_ptt->list_entry, &p_hwfn->p_ptt_pool->free_list);
@@ -167,17 +174,18 @@ static u32 ecore_ptt_get_hw_addr(struct ecore_ptt *p_ptt)
 static u32 ecore_ptt_config_addr(struct ecore_ptt *p_ptt)
 {
 	return PXP_PF_WINDOW_ADMIN_PER_PF_START +
-	    p_ptt->idx * sizeof(struct pxp_ptt_entry);
+	       p_ptt->idx * sizeof(struct pxp_ptt_entry);
 }
 
 u32 ecore_ptt_get_bar_addr(struct ecore_ptt *p_ptt)
 {
 	return PXP_EXTERNAL_BAR_PF_WINDOW_START +
-	    p_ptt->idx * PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE;
+	       p_ptt->idx * PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE;
 }
 
 void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn,
-		       struct ecore_ptt *p_ptt, u32 new_hw_addr)
+		       struct ecore_ptt *p_ptt,
+		       u32 new_hw_addr)
 {
 	u32 prev_hw_addr;
 
@@ -201,7 +209,8 @@ void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn,
 }
 
 static u32 ecore_set_ptt(struct ecore_hwfn *p_hwfn,
-			 struct ecore_ptt *p_ptt, u32 hw_addr)
+			 struct ecore_ptt *p_ptt,
+			 u32 hw_addr)
 {
 	u32 win_hw_addr = ecore_ptt_get_hw_addr(p_ptt);
 	u32 offset;
@@ -257,8 +266,8 @@ static bool ecore_is_reg_fifo_empty(struct ecore_hwfn *p_hwfn,
 	return is_empty;
 }
 
-void ecore_wr(struct ecore_hwfn *p_hwfn,
-	      struct ecore_ptt *p_ptt, u32 hw_addr, u32 val)
+void ecore_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr,
+	      u32 val)
 {
 	bool prev_fifo_err;
 	u32 bar_addr;
@@ -271,13 +280,16 @@ void ecore_wr(struct ecore_hwfn *p_hwfn,
 		   "bar_addr 0x%x, hw_addr 0x%x, val 0x%x\n",
 		   bar_addr, hw_addr, val);
 
+	OSAL_WARN((hw_addr == p_hwfn->p_dev->monitored_hw_addr),
+		  "accessed hw_addr 0x%x\n", hw_addr);
+
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
 		OSAL_UDELAY(100);
 #endif
 
 	OSAL_WARN(!prev_fifo_err && !ecore_is_reg_fifo_empty(p_hwfn, p_ptt),
-		  "reg_fifo err was caused by a call to ecore_wr(0x%x, 0x%x)\n",
+		  "reg_fifo error was caused hw_addr 0x%x, val 0x%x\n",
 		  hw_addr, val);
 }
 
@@ -295,6 +307,9 @@ u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr)
 		   "bar_addr 0x%x, hw_addr 0x%x, val 0x%x\n",
 		   bar_addr, hw_addr, val);
 
+	OSAL_WARN((hw_addr == p_hwfn->p_dev->monitored_hw_addr),
+		  "accessed hw_addr 0x%x\n", hw_addr);
+
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
 		OSAL_UDELAY(100);
@@ -307,12 +322,20 @@ u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr)
 	return val;
 }
 
+enum memcpy_action {
+	TO_DEVICE,
+	FROM_DEVICE,
+	MEMZERO_DEVICE
+};
+
 static void ecore_memcpy_hw(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
 			    void *addr,
-			    u32 hw_addr, osal_size_t n, bool to_device)
+			    u32 hw_addr,
+			    osal_size_t n,
+			    enum memcpy_action action)
 {
-	u32 dw_count, *host_addr, hw_offset;
+	u32 dw_count, *host_addr = OSAL_NULL, hw_offset;
 	osal_size_t quota, done = 0;
 	u32 OSAL_IOMEM *reg_addr;
 
@@ -328,16 +351,30 @@ static void ecore_memcpy_hw(struct ecore_hwfn *p_hwfn,
 		}
 
 		dw_count = quota / 4;
-		host_addr = (u32 *)((u8 *)addr + done);
+		if (addr)
+			host_addr = (u32 *)((u8 *)addr + done);
 		reg_addr = (u32 OSAL_IOMEM *)OSAL_REG_ADDR(p_hwfn, hw_offset);
 
-		if (to_device)
+		switch (action) {
+		case TO_DEVICE:
 			while (dw_count--)
-				DIRECT_REG_WR(p_hwfn, reg_addr++, *host_addr++);
-		else
+				DIRECT_REG_WR(p_hwfn, reg_addr++,
+					      *host_addr++);
+			break;
+		case FROM_DEVICE:
 			while (dw_count--)
-				*host_addr++ = DIRECT_REG_RD(p_hwfn,
-							     reg_addr++);
+				*host_addr++ =
+					DIRECT_REG_RD(p_hwfn, reg_addr++);
+			break;
+		case MEMZERO_DEVICE:
+			while (dw_count--)
+				DIRECT_REG_WR(p_hwfn, reg_addr++, 0);
+			break;
+		default:
+			DP_NOTICE(p_hwfn, true,
+				  "Invalid memcpy_action %d\n", action);
+			return;
+		}
 
 		done += quota;
 	}
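
For reference, the refactor above replaces the old boolean to_device flag with
a three-way memcpy_action dispatch so the same windowed loop can also zero
device memory. A minimal host-side sketch of the dispatch pattern (plain C,
names hypothetical, not driver code):

/* Sketch: dword-wise dispatch mirroring the switch in ecore_memcpy_hw.
 * Assumes 4-byte-aligned buffers.
 */
enum copy_action { COPY_TO_DEV, COPY_FROM_DEV, ZERO_DEV };

static void dword_op(volatile unsigned int *dev, unsigned int *host,
		     unsigned long n_dwords, enum copy_action action)
{
	while (n_dwords--) {
		switch (action) {
		case COPY_TO_DEV:   *dev++ = *host++; break;
		case COPY_FROM_DEV: *host++ = *dev++; break;
		case ZERO_DEV:      *dev++ = 0;       break;
		}
	}
}
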
@@ -351,7 +388,7 @@ void ecore_memcpy_from(struct ecore_hwfn *p_hwfn,
 		   "hw_addr 0x%x, dest %p hw_addr 0x%x, size %lu\n",
 		   hw_addr, dest, hw_addr, (unsigned long)n);
 
-	ecore_memcpy_hw(p_hwfn, p_ptt, dest, hw_addr, n, false);
+	ecore_memcpy_hw(p_hwfn, p_ptt, dest, hw_addr, n, FROM_DEVICE);
 }
 
 void ecore_memcpy_to(struct ecore_hwfn *p_hwfn,
@@ -362,7 +399,19 @@ void ecore_memcpy_to(struct ecore_hwfn *p_hwfn,
 		   "hw_addr 0x%x, hw_addr 0x%x, src %p size %lu\n",
 		   hw_addr, hw_addr, src, (unsigned long)n);
 
-	ecore_memcpy_hw(p_hwfn, p_ptt, src, hw_addr, n, true);
+	ecore_memcpy_hw(p_hwfn, p_ptt, src, hw_addr, n, TO_DEVICE);
+}
+
+void ecore_memzero_hw(struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt,
+		      u32 hw_addr,
+		      osal_size_t n)
+{
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+		   "hw_addr 0x%x, hw_addr 0x%x, size %lu\n",
+		   hw_addr, hw_addr, (unsigned long)n);
+
+	ecore_memcpy_hw(p_hwfn, p_ptt, OSAL_NULL, hw_addr, n, MEMZERO_DEVICE);
 }
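
A typical caller pairs the new helper with PTT acquire/release. A sketch,
assuming the ecore headers are included; the GRC address below is made up for
illustration:

/* Sketch: zero 64 bytes of BAR-mapped memory through an acquired PTT.
 * EX_STATS_RAM_ADDR is a hypothetical GRC address.
 */
#define EX_STATS_RAM_ADDR 0x1c0000

static void ex_clear_stats_window(struct ecore_hwfn *p_hwfn)
{
	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);

	if (!p_ptt)
		return;
	ecore_memzero_hw(p_hwfn, p_ptt, EX_STATS_RAM_ADDR, 64);
	ecore_ptt_release(p_hwfn, p_ptt);
}
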
 
 void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
@@ -373,8 +422,9 @@ void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(control, PXP_PRETEND_CMD_IS_CONCRETE, 1);
 	SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_FUNCTION, 1);
 
-/* Every pretend undos prev pretends, including previous port pretend */
-
+	/* Every pretend undoes previous pretends, including
+	 * previous port pretend.
+	 */
 	SET_FIELD(control, PXP_PRETEND_CMD_PORT, 0);
 	SET_FIELD(control, PXP_PRETEND_CMD_USE_PORT, 0);
 	SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_PORT, 1);
@@ -388,7 +438,7 @@ void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
 	REG_WR(p_hwfn,
 	       ecore_ptt_config_addr(p_ptt) +
 	       OFFSETOF(struct pxp_ptt_entry, pretend),
-			*(u32 *)&p_ptt->pxp.pretend);
+	       *(u32 *)&p_ptt->pxp.pretend);
 }
 
 void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
@@ -404,10 +454,11 @@ void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
 	REG_WR(p_hwfn,
 	       ecore_ptt_config_addr(p_ptt) +
 	       OFFSETOF(struct pxp_ptt_entry, pretend),
-			*(u32 *)&p_ptt->pxp.pretend);
+	       *(u32 *)&p_ptt->pxp.pretend);
 }
 
-void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+void ecore_port_unpretend(struct ecore_hwfn *p_hwfn,
+			  struct ecore_ptt *p_ptt)
 {
 	u16 control = 0;
 
@@ -420,7 +471,7 @@ void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	REG_WR(p_hwfn,
 	       ecore_ptt_config_addr(p_ptt) +
 	       OFFSETOF(struct pxp_ptt_entry, pretend),
-			*(u32 *)&p_ptt->pxp.pretend);
+	       *(u32 *)&p_ptt->pxp.pretend);
 }
 
 void ecore_port_fid_pretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
@@ -458,23 +509,15 @@ u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid)
 	return concrete_fid;
 }
 
-/* Not in use @DPDK
- * Ecore HW lock
- * =============
- * Although the implementation is ready, today we don't have any flow that
- * utliizes said locks - and we want to keep it this way.
- * If this changes, this needs to be revisted.
- */
-
 /* DMAE */
 
 #define ECORE_DMAE_FLAGS_IS_SET(params, flag)	\
 	((params) != OSAL_NULL && \
 	 GET_FIELD((params)->flags, DMAE_PARAMS_##flag))
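
ECORE_DMAE_FLAGS_IS_SET builds on the GET_FIELD idiom; a simplified
stand-alone version for reference (the real macro is defined in the ecore
headers and may differ in detail):

/* Simplified sketch of GET_FIELD: extract field 'name' from 'value'
 * using its companion name##_MASK / name##_SHIFT definitions.
 */
#define EX_GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & (name##_MASK))
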
 
-static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
-			      const u8 is_src_type_grc,
-			      const u8 is_dst_type_grc,
+static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
+			      const u8 is_src_type_grc,
+			      const u8 is_dst_type_grc,
 			      struct dmae_params *p_params)
 {
 	u8 src_pf_id, dst_pf_id, port_id;
@@ -485,62 +528,61 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 	 * 0- The source is the PCIe
 	 * 1- The source is the GRC.
 	 */
-	opcode |= (is_src_type_grc ? dmae_cmd_src_grc : dmae_cmd_src_pcie) <<
-		  DMAE_CMD_SRC_SHIFT;
+	SET_FIELD(opcode, DMAE_CMD_SRC,
+		  (is_src_type_grc ? dmae_cmd_src_grc
+				   : dmae_cmd_src_pcie));
 	src_pf_id = ECORE_DMAE_FLAGS_IS_SET(p_params, SRC_PF_VALID) ?
 		    p_params->src_pf_id : p_hwfn->rel_pf_id;
-	opcode |= (src_pf_id & DMAE_CMD_SRC_PF_ID_MASK) <<
-		  DMAE_CMD_SRC_PF_ID_SHIFT;
+	SET_FIELD(opcode, DMAE_CMD_SRC_PF_ID, src_pf_id);
 
 	/* The destination of the DMA can be: 0-None 1-PCIe 2-GRC 3-None */
-	opcode |= (is_dst_type_grc ? dmae_cmd_dst_grc : dmae_cmd_dst_pcie) <<
-		  DMAE_CMD_DST_SHIFT;
+	SET_FIELD(opcode, DMAE_CMD_DST,
+		  (is_dst_type_grc ? dmae_cmd_dst_grc
+				   : dmae_cmd_dst_pcie));
 	dst_pf_id = ECORE_DMAE_FLAGS_IS_SET(p_params, DST_PF_VALID) ?
 		    p_params->dst_pf_id : p_hwfn->rel_pf_id;
-	opcode |= (dst_pf_id & DMAE_CMD_DST_PF_ID_MASK) <<
-		  DMAE_CMD_DST_PF_ID_SHIFT;
+	SET_FIELD(opcode, DMAE_CMD_DST_PF_ID, dst_pf_id);
 
 	/* DMAE_E4_TODO need to check which value to specify here. */
-	/* opcode |= (!b_complete_to_host)<< DMAE_CMD_C_DST_SHIFT; */
+	/* SET_FIELD(opcode, DMAE_CMD_C_DST, !b_complete_to_host);*/
 
 	/* Whether to write a completion word to the completion destination:
 	 * 0-Do not write a completion word
 	 * 1-Write the completion word
 	 */
-	opcode |= DMAE_CMD_COMP_WORD_EN_MASK << DMAE_CMD_COMP_WORD_EN_SHIFT;
-	opcode |= DMAE_CMD_SRC_ADDR_RESET_MASK << DMAE_CMD_SRC_ADDR_RESET_SHIFT;
+	SET_FIELD(opcode, DMAE_CMD_COMP_WORD_EN, 1);
+	SET_FIELD(opcode, DMAE_CMD_SRC_ADDR_RESET, 1);
 
 	if (ECORE_DMAE_FLAGS_IS_SET(p_params, COMPLETION_DST))
-		opcode |= 1 << DMAE_CMD_COMP_FUNC_SHIFT;
+		SET_FIELD(opcode, DMAE_CMD_COMP_FUNC, 1);
 
 	/* swapping mode 3 - big endian there should be a define ifdefed in
 	 * the HSI somewhere. Since it is currently
 	 */
-	opcode |= DMAE_CMD_ENDIANITY << DMAE_CMD_ENDIANITY_MODE_SHIFT;
+	SET_FIELD(opcode, DMAE_CMD_ENDIANITY_MODE, DMAE_CMD_ENDIANITY);
 
 	port_id = (ECORE_DMAE_FLAGS_IS_SET(p_params, PORT_VALID)) ?
 		  p_params->port_id : p_hwfn->port_id;
-	opcode |= port_id << DMAE_CMD_PORT_ID_SHIFT;
+	SET_FIELD(opcode, DMAE_CMD_PORT_ID, port_id);
 
 	/* reset source address in next go */
-	opcode |= DMAE_CMD_SRC_ADDR_RESET_MASK << DMAE_CMD_SRC_ADDR_RESET_SHIFT;
+	SET_FIELD(opcode, DMAE_CMD_SRC_ADDR_RESET, 1);
 
 	/* reset dest address in next go */
-	opcode |= DMAE_CMD_DST_ADDR_RESET_MASK << DMAE_CMD_DST_ADDR_RESET_SHIFT;
+	SET_FIELD(opcode, DMAE_CMD_DST_ADDR_RESET, 1);
 
 	/* SRC/DST VFID: all 1's - pf, otherwise VF id */
 	if (ECORE_DMAE_FLAGS_IS_SET(p_params, SRC_VF_VALID)) {
-		opcode |= (1 << DMAE_CMD_SRC_VF_ID_VALID_SHIFT);
-		opcode_b |= (p_params->src_vf_id <<  DMAE_CMD_SRC_VF_ID_SHIFT);
+		SET_FIELD(opcode, DMAE_CMD_SRC_VF_ID_VALID, 1);
+		SET_FIELD(opcode_b, DMAE_CMD_SRC_VF_ID, p_params->src_vf_id);
 	} else {
-		opcode_b |= (DMAE_CMD_SRC_VF_ID_MASK <<
-			     DMAE_CMD_SRC_VF_ID_SHIFT);
+		SET_FIELD(opcode_b, DMAE_CMD_SRC_VF_ID, 0xFF);
 	}
 	if (ECORE_DMAE_FLAGS_IS_SET(p_params, DST_VF_VALID)) {
-		opcode |= 1 << DMAE_CMD_DST_VF_ID_VALID_SHIFT;
-		opcode_b |= p_params->dst_vf_id << DMAE_CMD_DST_VF_ID_SHIFT;
+		SET_FIELD(opcode, DMAE_CMD_DST_VF_ID_VALID, 1);
+		SET_FIELD(opcode_b, DMAE_CMD_DST_VF_ID, p_params->dst_vf_id);
 	} else {
-		opcode_b |= DMAE_CMD_DST_VF_ID_MASK << DMAE_CMD_DST_VF_ID_SHIFT;
+		SET_FIELD(opcode_b, DMAE_CMD_DST_VF_ID, 0xFF);
 	}
 
 	p_hwfn->dmae_info.p_dmae_cmd->opcode = OSAL_CPU_TO_LE32(opcode);
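
The conversions above replace open-coded shift-and-OR arithmetic with
SET_FIELD, which clears a field before writing it; a simplified sketch of the
idiom (the real macro lives in the ecore headers):

/* Simplified sketch of SET_FIELD: clear field 'name' in 'value', then
 * OR in 'field_val' at the field's position.
 */
#define EX_SET_FIELD(value, name, field_val) \
	do { \
		(value) &= ~((name##_MASK) << (name##_SHIFT)); \
		(value) |= (((field_val) & (name##_MASK)) << (name##_SHIFT)); \
	} while (0)

With an 8-bit VF id field, the all-ones value can then be written simply as
0xFF instead of DMAE_CMD_SRC_VF_ID_MASK << DMAE_CMD_SRC_VF_ID_SHIFT.
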
@@ -549,7 +591,8 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 
 static u32 ecore_dmae_idx_to_go_cmd(u8 idx)
 {
-	OSAL_BUILD_BUG_ON((DMAE_REG_GO_C31 - DMAE_REG_GO_C0) != 31 * 4);
+	OSAL_BUILD_BUG_ON((DMAE_REG_GO_C31 - DMAE_REG_GO_C0) !=
+			  31 * 4);
 
 	/* All the DMAE 'go' registers form an array in internal memory */
 	return DMAE_REG_GO_C0 + (idx << 2);
@@ -567,8 +610,7 @@ static enum _ecore_status_t ecore_dmae_post_command(struct ecore_hwfn *p_hwfn,
 	     ((!p_command->src_addr_lo) && (!p_command->src_addr_hi)))) {
 		DP_NOTICE(p_hwfn, true,
 			  "source or destination address 0 idx_cmd=%d\n"
-			  "opcode = [0x%08x,0x%04x] len=0x%x"
-			  " src=0x%x:%x dst=0x%x:%x\n",
+			  "opcode = [0x%08x,0x%04x] len=0x%x src=0x%x:%x dst=0x%x:%x\n",
 			  idx_cmd,
 			  OSAL_LE32_TO_CPU(p_command->opcode),
 			  OSAL_LE16_TO_CPU(p_command->opcode_b),
@@ -582,8 +624,7 @@ static enum _ecore_status_t ecore_dmae_post_command(struct ecore_hwfn *p_hwfn,
 	}
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-		   "Posting DMAE command [idx %d]: opcode = [0x%08x,0x%04x]"
-		   "len=0x%x src=0x%x:%x dst=0x%x:%x\n",
+		   "Posting DMAE command [idx %d]: opcode = [0x%08x,0x%04x] len=0x%x src=0x%x:%x dst=0x%x:%x\n",
 		   idx_cmd,
 		   OSAL_LE32_TO_CPU(p_command->opcode),
 		   OSAL_LE16_TO_CPU(p_command->opcode_b),
@@ -602,7 +643,7 @@ static enum _ecore_status_t ecore_dmae_post_command(struct ecore_hwfn *p_hwfn,
 	 */
 	for (i = 0; i < DMAE_CMD_SIZE; i++) {
 		u32 data = (i < DMAE_CMD_SIZE_TO_FILL) ?
-		    *(((u32 *)p_command) + i) : 0;
+			    *(((u32 *)p_command) + i) : 0;
 
 		ecore_wr(p_hwfn, p_ptt,
 			 DMAE_REG_CMD_MEM +
@@ -611,7 +652,8 @@ static enum _ecore_status_t ecore_dmae_post_command(struct ecore_hwfn *p_hwfn,
 	}
 
 	ecore_wr(p_hwfn, p_ptt,
-		 ecore_dmae_idx_to_go_cmd(idx_cmd), DMAE_GO_VALUE);
+		 ecore_dmae_idx_to_go_cmd(idx_cmd),
+		 DMAE_GO_VALUE);
 
 	return ecore_status;
 }
@@ -630,7 +672,7 @@ enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn)
 		goto err;
 	}
 
-	p_addr = &p_hwfn->dmae_info.dmae_cmd_phys_addr;
+	p_addr =  &p_hwfn->dmae_info.dmae_cmd_phys_addr;
 	*p_cmd = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev, p_addr,
 					 sizeof(struct dmae_cmd));
 	if (*p_cmd == OSAL_NULL) {
@@ -690,7 +732,8 @@ void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn)
 	}
 }
 
-static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
+static enum _ecore_status_t
+ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
 {
 	u32 wait_cnt_limit = 10000, wait_cnt = 0;
 	enum _ecore_status_t ecore_status = ECORE_SUCCESS;
@@ -713,14 +756,14 @@ static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
 		OSAL_UDELAY(DMAE_MIN_WAIT_TIME);
 		if (++wait_cnt > wait_cnt_limit) {
 			DP_NOTICE(p_hwfn->p_dev, false,
-				  "Timed-out waiting for operation to"
-				  " complete. Completion word is 0x%08x"
-				  " expected 0x%08x.\n",
+				  "Timed-out waiting for operation to complete. Completion word is 0x%08x expected 0x%08x.\n",
 				  *p_hwfn->dmae_info.p_completion_word,
 				  DMAE_COMPLETION_VAL);
+			OSAL_WARN(true, "Dumping stack\n");
 			ecore_status = ECORE_TIMEOUT;
 			break;
 		}
+
 		/* to sync the completion_word since we are not
 		 * using the volatile keyword for p_completion_word
 		 */
@@ -739,12 +782,13 @@ enum ecore_dmae_address_type {
 	ECORE_DMAE_ADDRESS_GRC
 };
 
-static enum _ecore_status_t
-ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt,
-				 u64 src_addr,
-				 u64 dst_addr,
-				 u8 src_type, u8 dst_type, u32 length_dw)
+static enum _ecore_status_t ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
+							     struct ecore_ptt *p_ptt,
+							     u64 src_addr,
+							     u64 dst_addr,
+							     u8 src_type,
+							     u8 dst_type,
+							     u32 length_dw)
 {
 	dma_addr_t phys = p_hwfn->dmae_info.intermediate_buffer_phys_addr;
 	struct dmae_cmd *cmd = p_hwfn->dmae_info.p_dmae_cmd;
@@ -756,7 +800,7 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
 		cmd->src_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(src_addr));
 		cmd->src_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(src_addr));
 		break;
-		/* for virt source addresses we use the intermediate buffer. */
+	/* for virtual source addresses we use the intermediate buffer. */
 	case ECORE_DMAE_ADDRESS_HOST_VIRT:
 		cmd->src_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(phys));
 		cmd->src_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(phys));
@@ -774,7 +818,7 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
 		cmd->dst_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(dst_addr));
 		cmd->dst_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(dst_addr));
 		break;
-		/* for virt destination address we use the intermediate buff. */
+	/* for virtual destination addresses we use the intermediate buffer. */
 	case ECORE_DMAE_ADDRESS_HOST_VIRT:
 		cmd->dst_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(phys));
 		cmd->dst_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(phys));
@@ -784,18 +828,20 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
 	}
 
 	cmd->length_dw = OSAL_CPU_TO_LE16((u16)length_dw);
-
+#ifndef __EXTRACT__LINUX__
 	if (src_type == ECORE_DMAE_ADDRESS_HOST_VIRT ||
 	    src_type == ECORE_DMAE_ADDRESS_HOST_PHYS)
 		OSAL_DMA_SYNC(p_hwfn->p_dev,
 			      (void *)HILO_U64(cmd->src_addr_hi,
 					       cmd->src_addr_lo),
 			      length_dw * sizeof(u32), false);
+#endif
 
 	ecore_dmae_post_command(p_hwfn, p_ptt);
 
 	ecore_status = ecore_dmae_operation_wait(p_hwfn);
 
+#ifndef __EXTRACT__LINUX__
 	/* TODO - is it true ? */
 	if (src_type == ECORE_DMAE_ADDRESS_HOST_VIRT ||
 	    src_type == ECORE_DMAE_ADDRESS_HOST_PHYS)
@@ -803,13 +849,14 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
 			      (void *)HILO_U64(cmd->src_addr_hi,
 					       cmd->src_addr_lo),
 			      length_dw * sizeof(u32), true);
+#endif
 
 	if (ecore_status != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, false,
-			  "Wait Failed. source_addr 0x%lx, grc_addr 0x%lx, size_in_dwords 0x%x, intermediate buffer 0x%lx.\n",
-			  (unsigned long)src_addr, (unsigned long)dst_addr,
-			  length_dw,
-			  (unsigned long)p_hwfn->dmae_info.intermediate_buffer_phys_addr);
+			  "Wait Failed. source_addr 0x%" PRIx64 ", grc_addr 0x%" PRIx64 ", "
+			  "size_in_dwords 0x%x, intermediate buffer 0x%" PRIx64 ".\n",
+			  src_addr, dst_addr, length_dw,
+			  (u64)p_hwfn->dmae_info.intermediate_buffer_phys_addr);
 		return ecore_status;
 	}
 
@@ -821,15 +868,12 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
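
Context for the sub-operation above: the DMAE engine addresses physical memory
only, so virtual host addresses are staged through the pre-allocated
DMA-coherent intermediate buffer whose physical address ('phys') the command
carries. A simplified sketch of the staging step; the p_intermediate_buffer
field name is an assumption here:

/* Sketch: stage a virtual source buffer into the intermediate buffer
 * before posting the DMAE command. Simplified; the real flow also
 * syncs the buffer (OSAL_DMA_SYNC) around the operation on some
 * platforms, as seen above.
 */
static void ex_stage_virt_src(struct ecore_hwfn *p_hwfn, const void *src,
			      u32 length_dw)
{
	/* assumed field: virtual address of the intermediate buffer */
	OSAL_MEMCPY(p_hwfn->dmae_info.p_intermediate_buffer, src,
		    length_dw * sizeof(u32));
}
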
 
-static enum _ecore_status_t
-ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt,
-			   u64 src_addr,
-			   u64 dst_addr,
-			   u8 src_type,
-			   u8 dst_type,
-			   u32 size_in_dwords,
-			   struct dmae_params *p_params)
+static enum _ecore_status_t ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
+						       struct ecore_ptt *p_ptt,
+						       u64 src_addr, u64 dst_addr,
+						       u8 src_type, u8 dst_type,
+						       u32 size_in_dwords,
+						       struct dmae_params *p_params)
 {
 	dma_addr_t phys = p_hwfn->dmae_info.completion_word_phys_addr;
 	u16 length_cur = 0, i = 0, cnt_split = 0, length_mod = 0;
@@ -841,37 +885,39 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
 
 	if (!p_hwfn->dmae_info.b_mem_ready) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-			   "No buffers allocated. Avoid DMAE transaction [{src: addr 0x%lx, type %d}, {dst: addr 0x%lx, type %d}, size %d].\n",
-			   (unsigned long)src_addr, src_type,
-			   (unsigned long)dst_addr, dst_type,
+			   "No buffers allocated. Avoid DMAE transaction "
+			   "[{src: addr 0x%" PRIx64 ", type %d}, "
+			   "{dst: addr 0x%" PRIx64 ", type %d}, size %d].\n",
+			   src_addr, src_type, dst_addr, dst_type,
 			   size_in_dwords);
 		return ECORE_NOMEM;
 	}
 
 	if (p_hwfn->p_dev->recov_in_prog) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-			   "Recovery is in progress. Avoid DMAE transaction [{src: addr 0x%lx, type %d}, {dst: addr 0x%lx, type %d}, size %d].\n",
-			   (unsigned long)src_addr, src_type,
-			   (unsigned long)dst_addr, dst_type,
+			   "Recovery is in progress. Avoid DMAE transaction "
+			   "[{src: addr 0x%" PRIx64 ", type %d}, "
+			   "{dst: addr 0x%" PRIx64 ", type %d}, size %d].\n",
+			   src_addr, src_type, dst_addr, dst_type,
 			   size_in_dwords);
-		/* Return success to let the flow to be completed successfully
-		 * w/o any error handling.
-		 */
+
+		/* Let the flow complete w/o any error handling */
 		return ECORE_SUCCESS;
 	}
 
 	if (!cmd) {
 		DP_NOTICE(p_hwfn, true,
-			  "ecore_dmae_execute_sub_operation failed. Invalid state. source_addr 0x%lx, destination addr 0x%lx, size_in_dwords 0x%x\n",
-			  (unsigned long)src_addr,
-			  (unsigned long)dst_addr,
-			  length_cur);
+			  "ecore_dmae_execute_sub_operation failed. Invalid state. "
+			  "source_addr 0x%" PRIx64 ", destination addr 0x%" PRIx64 ", "
+			  "size_in_dwords 0x%x\n",
+			  src_addr, dst_addr, length_cur);
 		return ECORE_INVAL;
 	}
 
 	ecore_dmae_opcode(p_hwfn,
 			  (src_type == ECORE_DMAE_ADDRESS_GRC),
-			  (dst_type == ECORE_DMAE_ADDRESS_GRC), p_params);
+			  (dst_type == ECORE_DMAE_ADDRESS_GRC),
+			  p_params);
 
 	cmd->comp_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(phys));
 	cmd->comp_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(phys));
@@ -913,14 +959,18 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
 								dst_type,
 								length_cur);
 		if (ecore_status != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, false,
-				  "ecore_dmae_execute_sub_operation Failed"
-				  " with error 0x%x. source_addr 0x%lx,"
-				  " dest addr 0x%lx, size_in_dwords 0x%x\n",
-				  ecore_status, (unsigned long)src_addr,
-				  (unsigned long)dst_addr, length_cur);
-
-			ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_DMAE_FAIL);
+			u8 str[ECORE_HW_ERR_MAX_STR_SIZE];
+
+			OSAL_SNPRINTF((char *)str, ECORE_HW_ERR_MAX_STR_SIZE,
+				      "ecore_dmae_execute_sub_operation Failed with error 0x%x. "
+				      "source_addr 0x%" PRIx64 ", "
+				      "destination addr 0x%" PRIx64 ", size_in_dwords 0x%x\n",
+				      ecore_status, src_addr, dst_addr,
+				      length_cur);
+			DP_NOTICE(p_hwfn, false, "%s", (char *)str);
+			ecore_hw_err_notify(p_hwfn, p_ptt,
+					    ECORE_HW_ERR_DMAE_FAIL, str,
+					    OSAL_STRLEN((char *)str) + 1);
 			break;
 		}
 	}
@@ -973,12 +1023,11 @@ enum _ecore_status_t ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
-		     struct ecore_ptt *p_ptt,
-		     dma_addr_t source_addr,
-		     dma_addr_t dest_addr,
-		     u32 size_in_dwords,
+enum _ecore_status_t ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  dma_addr_t source_addr,
+					  dma_addr_t dest_addr,
+					  u32 size_in_dwords,
 					  struct dmae_params *p_params)
 {
 	enum _ecore_status_t rc;
@@ -989,26 +1038,31 @@ ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
 					dest_addr,
 					ECORE_DMAE_ADDRESS_HOST_PHYS,
 					ECORE_DMAE_ADDRESS_HOST_PHYS,
-					size_in_dwords, p_params);
+					size_in_dwords,
+					p_params);
 
 	OSAL_SPIN_UNLOCK(&p_hwfn->dmae_info.lock);
 
 	return rc;
 }
 
-void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
-			 enum ecore_hw_err_type err_type)
+void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			 enum ecore_hw_err_type err_type, u8 *p_buf, u32 size)
 {
 	/* Fan failure cannot be masked by handling of another HW error */
 	if (p_hwfn->p_dev->recov_in_prog && err_type != ECORE_HW_ERR_FAN_FAIL) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_DRV,
-			   "Recovery is in progress."
-			   "Avoid notifying about HW error %d.\n",
+			   "Recovery is in progress. Avoid notifying about HW error %d.\n",
 			   err_type);
 		return;
 	}
 
 	OSAL_HW_ERROR_OCCURRED(p_hwfn, err_type);
+
+	if (p_buf != OSAL_NULL)
+		ecore_mcp_send_raw_debug_data(p_hwfn, p_ptt, p_buf, size);
+
+	ecore_mcp_gen_mdump_idlechk(p_hwfn, p_ptt);
 }
 
 enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
@@ -1042,9 +1096,9 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO((u8 *)p_virt + size, size);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "DMAE sanity [%s]: src_addr={phys 0x%lx, virt %p}, dst_addr={phys 0x%lx, virt %p}, size 0x%x\n",
-		   phase, (unsigned long)p_phys, p_virt,
-		   (unsigned long)(p_phys + size),
+		   "DMAE sanity [%s]: src_addr={phys 0x%" PRIx64 ", virt %p}, "
+		   "dst_addr={phys 0x%" PRIx64 ", virt %p}, size 0x%x\n",
+		   phase, (u64)p_phys, p_virt, (u64)(p_phys + size),
 		   (u8 *)p_virt + size, size);
 
 	rc = ecore_dmae_host2host(p_hwfn, p_ptt, p_phys, p_phys + size,
@@ -1066,10 +1120,9 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
 
 		if (*p_tmp != val) {
 			DP_NOTICE(p_hwfn, false,
-				  "DMAE sanity [%s]: addr={phys 0x%lx, virt %p}, read_val 0x%08x, expected_val 0x%08x\n",
+				  "DMAE sanity [%s]: addr={phys 0x%" PRIx64 ", virt %p}, read_val 0x%08x, expected_val 0x%08x\n",
 				  phase,
-				  (unsigned long)p_phys +
-				   ((u8 *)p_tmp - (u8 *)p_virt),
+				  (u64)p_phys + ((u8 *)p_tmp - (u8 *)p_virt),
 				  p_tmp, *p_tmp, val);
 			rc = ECORE_UNKNOWN_ERROR;
 			goto out;
@@ -1085,13 +1138,15 @@ void ecore_ppfid_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		    u8 abs_ppfid, u32 hw_addr, u32 val)
 {
 	u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
+	u16 fid;
+
+	fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, pfid);
+	ecore_fid_pretend(p_hwfn, p_ptt, fid);
 
-	ecore_fid_pretend(p_hwfn, p_ptt,
-			  pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
 	ecore_wr(p_hwfn, p_ptt, hw_addr, val);
-	ecore_fid_pretend(p_hwfn, p_ptt,
-			  p_hwfn->rel_pf_id <<
-			  PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+
+	fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, p_hwfn->rel_pf_id);
+	ecore_fid_pretend(p_hwfn, p_ptt, fid);
 }
 
 u32 ecore_ppfid_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
@@ -1099,13 +1154,19 @@ u32 ecore_ppfid_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 {
 	u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
 	u32 val;
+	u16 fid;
+
+	fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, pfid);
+	ecore_fid_pretend(p_hwfn, p_ptt, fid);
 
-	ecore_fid_pretend(p_hwfn, p_ptt,
-			  pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
 	val = ecore_rd(p_hwfn, p_ptt, hw_addr);
-	ecore_fid_pretend(p_hwfn, p_ptt,
-			  p_hwfn->rel_pf_id <<
-			  PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+
+	fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, p_hwfn->rel_pf_id);
+	ecore_fid_pretend(p_hwfn, p_ptt, fid);
 
 	return val;
 }
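
FIELD_VALUE, used above to build the pretend FID, is the value-returning
cousin of SET_FIELD; a simplified sketch (real definition in the ecore
headers):

/* Simplified sketch of FIELD_VALUE: place 'value' at the position of
 * field 'name'. With PXP_PRETEND_CONCRETE_FID_PFID this produces the
 * fid argument passed to ecore_fid_pretend() above.
 */
#define EX_FIELD_VALUE(name, value) \
	(((value) & (name##_MASK)) << (name##_SHIFT))
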
+
+#ifdef _NTDDK_
+#pragma warning(pop)
+#endif
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 238bdb9db..cbf65d737 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_HW_H__
 #define __ECORE_HW_H__
 
@@ -42,17 +42,19 @@ enum reserved_ptts {
 #endif
 
 #define DMAE_CMD_SIZE	14
-/* size of DMAE command structure to fill.. DMAE_CMD_SIZE-5 */
+
+/* Size of DMAE command structure to fill: DMAE_CMD_SIZE - 5 */
 #define DMAE_CMD_SIZE_TO_FILL	(DMAE_CMD_SIZE - 5)
-/* Minimum wait for dmae opertaion to complete 2 milliseconds */
+
+/* Minimum wait for DMAE operation to complete - 2 milliseconds */
 #define DMAE_MIN_WAIT_TIME	0x2
 #define DMAE_MAX_CLIENTS	32
 
 /**
-* @brief ecore_gtt_init - Initialize GTT windows
-*
-* @param p_hwfn
-*/
+ * @brief ecore_gtt_init - Initialize GTT windows
+ *
+ * @param p_hwfn
+ */
 void ecore_gtt_init(struct ecore_hwfn *p_hwfn);
 
 /**
@@ -85,7 +87,7 @@ void ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn);
  *
  * @return u32
  */
-u32 ecore_ptt_get_bar_addr(struct ecore_ptt	*p_ptt);
+u32 ecore_ptt_get_bar_addr(struct ecore_ptt *p_ptt);
 
 /**
  * @brief ecore_ptt_set_win - Set PTT Window's GRC BAR address
@@ -133,6 +135,21 @@ u32 ecore_rd(struct ecore_hwfn	*p_hwfn,
 	     struct ecore_ptt	*p_ptt,
 	     u32		hw_addr);
 
+/**
+ * @brief ecore_memzero_hw - zero (n / 4) DWORDs of BAR memory using the
+ *        given ptt
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param hw_addr
+ * @param n
+ */
+void ecore_memzero_hw(struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *p_ptt,
+		      u32 hw_addr,
+		      osal_size_t n);
+
 /**
  * @brief ecore_memcpy_from - copy n bytes from BAR using the given
  *        ptt
@@ -228,7 +245,7 @@ u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid);
 * which is part of p_hwfn.
 * @param p_hwfn
 */
-enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn	*p_hwfn);
+enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn);
 
 /**
 * @brief ecore_dmae_info_free - Free the dmae_info structure
@@ -236,7 +253,7 @@ enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn	*p_hwfn);
 *
 * @param p_hwfn
 */
-void ecore_dmae_info_free(struct ecore_hwfn	*p_hwfn);
+void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_dmae_host2grc - copy data from source address to
@@ -307,8 +324,20 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
 					const u8 *fw_data);
 
-void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
-			 enum ecore_hw_err_type err_type);
+#define ECORE_HW_ERR_MAX_STR_SIZE	256
+
+/**
+ * @brief ecore_hw_err_notify - Notify upper layer driver and management FW
+ *	about a HW error.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param err_type
+ * @param p_buf - debug data buffer to send to the MFW
+ * @param size - buffer size
+ */
+void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			 enum ecore_hw_err_type err_type, u8 *p_buf, u32 size);
 
 /**
  * @brief ecore_ppfid_wr - Write value to BAR using the given ptt while
diff --git a/drivers/net/qede/base/ecore_hw_defs.h b/drivers/net/qede/base/ecore_hw_defs.h
index 92361e79c..e47dbd7e5 100644
--- a/drivers/net/qede/base/ecore_hw_defs.h
+++ b/drivers/net/qede/base/ecore_hw_defs.h
@@ -1,37 +1,26 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef _ECORE_IGU_DEF_H_
 #define _ECORE_IGU_DEF_H_
 
 /* Fields of IGU PF CONFIGURATION REGISTER */
-/* function enable        */
-#define IGU_PF_CONF_FUNC_EN       (0x1 << 0)
-/* MSI/MSIX enable        */
-#define IGU_PF_CONF_MSI_MSIX_EN   (0x1 << 1)
-/* INT enable             */
-#define IGU_PF_CONF_INT_LINE_EN   (0x1 << 2)
-/* attention enable       */
-#define IGU_PF_CONF_ATTN_BIT_EN   (0x1 << 3)
-/* single ISR mode enable */
-#define IGU_PF_CONF_SINGLE_ISR_EN (0x1 << 4)
-/* simd all ones mode     */
-#define IGU_PF_CONF_SIMD_MODE     (0x1 << 5)
+#define IGU_PF_CONF_FUNC_EN       (0x1 << 0)  /* function enable        */
+#define IGU_PF_CONF_MSI_MSIX_EN   (0x1 << 1)  /* MSI/MSIX enable        */
+#define IGU_PF_CONF_INT_LINE_EN   (0x1 << 2)  /* INT enable             */
+#define IGU_PF_CONF_ATTN_BIT_EN   (0x1 << 3)  /* attention enable       */
+#define IGU_PF_CONF_SINGLE_ISR_EN (0x1 << 4)  /* single ISR mode enable */
+#define IGU_PF_CONF_SIMD_MODE     (0x1 << 5)  /* simd all ones mode     */
 
 /* Fields of IGU VF CONFIGURATION REGISTER */
-/* function enable        */
-#define IGU_VF_CONF_FUNC_EN        (0x1 << 0)
-/* MSI/MSIX enable        */
-#define IGU_VF_CONF_MSI_MSIX_EN    (0x1 << 1)
-/* single ISR mode enable */
-#define IGU_VF_CONF_SINGLE_ISR_EN  (0x1 << 4)
-/* Parent PF              */
-#define IGU_VF_CONF_PARENT_MASK    (0xF)
-/* Parent PF              */
-#define IGU_VF_CONF_PARENT_SHIFT   5
+#define IGU_VF_CONF_FUNC_EN        (0x1 << 0)  /* function enable        */
+#define IGU_VF_CONF_MSI_MSIX_EN    (0x1 << 1)  /* MSI/MSIX enable        */
+#define IGU_VF_CONF_SINGLE_ISR_EN  (0x1 << 4)  /* single ISR mode enable */
+#define IGU_VF_CONF_PARENT_MASK    (0xF)     /* Parent PF              */
+#define IGU_VF_CONF_PARENT_SHIFT   5         /* Parent PF              */
 
 /* Igu control commands
  */
@@ -47,11 +36,11 @@ struct igu_ctrl_reg {
 	u32 ctrl_data;
 #define IGU_CTRL_REG_FID_MASK		0xFFFF /* Opaque_FID	 */
 #define IGU_CTRL_REG_FID_SHIFT		0
-#define IGU_CTRL_REG_PXP_ADDR_MASK	0xFFF /* Command address */
+#define IGU_CTRL_REG_PXP_ADDR_MASK	0x1FFF /* Command address */
 #define IGU_CTRL_REG_PXP_ADDR_SHIFT	16
-#define IGU_CTRL_REG_RESERVED_MASK	0x1
-#define IGU_CTRL_REG_RESERVED_SHIFT	28
-#define IGU_CTRL_REG_TYPE_MASK		0x1U /* use enum igu_ctrl_cmd */
+#define IGU_CTRL_REG_RESERVED_MASK	0x3
+#define IGU_CTRL_REG_RESERVED_SHIFT	29
+#define IGU_CTRL_REG_TYPE_MASK		0x1U /* use enum igu_ctrl_cmd */
 #define IGU_CTRL_REG_TYPE_SHIFT		31
 };
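
With the PXP address field widened from 12 to 13 bits, the reserved field
shrinks to bits 29-30. Composing ctrl_data from these fields might look like
the following sketch (only the masks and shifts above are real; the helper is
hypothetical):

/* Sketch: compose igu_ctrl_reg.ctrl_data from its fields. */
static u32 ex_igu_build_ctrl(u16 opaque_fid, u32 pxp_addr, u32 cmd_type)
{
	u32 ctrl = 0;

	ctrl |= (opaque_fid & IGU_CTRL_REG_FID_MASK) << IGU_CTRL_REG_FID_SHIFT;
	ctrl |= (pxp_addr & IGU_CTRL_REG_PXP_ADDR_MASK) << IGU_CTRL_REG_PXP_ADDR_SHIFT;
	ctrl |= (cmd_type & IGU_CTRL_REG_TYPE_MASK) << IGU_CTRL_REG_TYPE_SHIFT;
	return ctrl;
}
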
 
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 1b751d837..575ba8070 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "ecore_hw.h"
 #include "ecore_init_ops.h"
@@ -38,8 +38,7 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
 /* General constants */
 #define QM_PQ_MEM_4KB(pq_size) \
 	(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
-#define QM_PQ_SIZE_256B(pq_size) \
-	(pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
+#define QM_PQ_SIZE_256B(pq_size)		(pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
 #define QM_INVALID_PQ_ID		0xffff
 
 /* Max link speed (in Mbps) */
@@ -50,11 +49,12 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
 #define QM_BYTE_CRD_EN			1
 
 /* Other PQ constants */
-#define QM_OTHER_PQS_PER_PF		4
+#define QM_OTHER_PQS_PER_PF_E4		4
+#define QM_OTHER_PQS_PER_PF_E5		4
 
 /* VOQ constants */
-#define MAX_NUM_VOQS			(MAX_NUM_PORTS_K2 * NUM_TCS_4PORT_K2)
-#define VOQS_BIT_MASK			((1 << MAX_NUM_VOQS) - 1)
+#define MAX_NUM_VOQS_E4			(MAX_NUM_PORTS_K2 * NUM_TCS_4PORT_K2)
+#define QM_E5_NUM_EXT_VOQ		(MAX_NUM_PORTS_E5 * NUM_OF_TCS)
 
 /* WFQ constants: */
 
@@ -65,7 +65,8 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
 #define QM_WFQ_VP_PQ_VOQ_SHIFT		0
 
 /* Bit  of PF in WFQ VP PQ map */
-#define QM_WFQ_VP_PQ_PF_SHIFT		5
+#define QM_WFQ_VP_PQ_PF_E4_SHIFT	5
+#define QM_WFQ_VP_PQ_PF_E5_SHIFT	6
 
 /* 0x9000 = 4*9*1024 */
 #define QM_WFQ_INC_VAL(weight)		((weight) * 0x9000)
@@ -73,6 +74,9 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
 /* Max WFQ increment value is 0.7 * upper bound */
 #define QM_WFQ_MAX_INC_VAL		((QM_WFQ_UPPER_BOUND * 7) / 10)
 
+/* Number of VOQs in E5 QmWfqCrd register */
+#define QM_WFQ_CRD_E5_NUM_VOQS		16
+
 /* RL constants: */
 
 /* Period in us */
@@ -89,8 +93,7 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
  * this point.
  */
 #define QM_RL_INC_VAL(rate) \
-	OSAL_MAX_T(u32, (u32)(((rate ? rate : 100000) * QM_RL_PERIOD * 101) / \
-	(8 * 100)), 1)
+	OSAL_MAX_T(u32, (u32)(((rate ? rate : 100000) * QM_RL_PERIOD * 101) / (8 * 100)), 1)
 
 /* PF RL Upper bound is set to 10 * burst size of 1ms in 50Gbps */
 #define QM_PF_RL_UPPER_BOUND		62500000
@@ -99,8 +102,7 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
 #define QM_PF_RL_MAX_INC_VAL		((QM_PF_RL_UPPER_BOUND * 7) / 10)
 
 /* Vport RL Upper bound, link speed is in Mpbs */
-#define QM_VP_RL_UPPER_BOUND(speed) \
-	((u32)OSAL_MAX_T(u32, QM_RL_INC_VAL(speed), 9700 + 1000))
+#define QM_VP_RL_UPPER_BOUND(speed)	((u32)OSAL_MAX_T(u32, QM_RL_INC_VAL(speed), 9700 + 1000))
 
 /* Max Vport RL increment value is the Vport RL upper bound */
 #define QM_VP_RL_MAX_INC_VAL(speed)	QM_VP_RL_UPPER_BOUND(speed)
@@ -116,22 +118,24 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
 /* Command Queue constants: */
 
 /* Pure LB CmdQ lines (+spare) */
-#define PBF_CMDQ_PURE_LB_LINES		150
+#define PBF_CMDQ_PURE_LB_LINES_E4	150
+#define PBF_CMDQ_PURE_LB_LINES_E5	75
+
+#define PBF_CMDQ_LINES_E5_RSVD_RATIO	8
 
 #define PBF_CMDQ_LINES_RT_OFFSET(ext_voq) \
-	(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \
-	 ext_voq * \
-	 (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - \
-	  PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
+	(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + (ext_voq) * \
+	 (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET - PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET))
 
 #define PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq) \
-	(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + \
-	 ext_voq * \
-	 (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET - \
-	  PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
+	(PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET + (ext_voq) * \
+	 (PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET	- PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET))
 
-#define QM_VOQ_LINE_CRD(pbf_cmd_lines) \
-((((pbf_cmd_lines) - 4) * 2) | QM_LINE_CRD_REG_SIGN_BIT)
+/* Returns the VOQ line credit for the specified number of PBF command lines.
+ * PBF lines are specified in 256b units in E4, and in 512b units in E5.
+ */
+#define QM_VOQ_LINE_CRD(pbf_cmd_lines, is_e5) \
+	((((pbf_cmd_lines) - 4) * (is_e5 ? 4 : 2)) | QM_LINE_CRD_REG_SIGN_BIT)
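
Worked example: the multiplier doubles on E5 because E5 command lines are
twice as wide, so with the pure-LB defaults defined above:

/* E4: (150 - 4) * 2 = 292 line credits; E5: (75 - 4) * 4 = 284.
 * Both results are ORed with QM_LINE_CRD_REG_SIGN_BIT.
 */
u32 e4_crd = QM_VOQ_LINE_CRD(PBF_CMDQ_PURE_LB_LINES_E4, 0);
u32 e5_crd = QM_VOQ_LINE_CRD(PBF_CMDQ_PURE_LB_LINES_E5, 1);
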
 
 /* BTB: blocks constants (block size = 256B) */
 
@@ -151,7 +155,7 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
 #define QM_STOP_CMD_STRUCT_SIZE		2
 #define QM_STOP_CMD_PAUSE_MASK_OFFSET	0
 #define QM_STOP_CMD_PAUSE_MASK_SHIFT	0
-#define QM_STOP_CMD_PAUSE_MASK_MASK	0xffffffff /* @DPDK */
+#define QM_STOP_CMD_PAUSE_MASK_MASK	((u64)-1)
 #define QM_STOP_CMD_GROUP_ID_OFFSET	1
 #define QM_STOP_CMD_GROUP_ID_SHIFT	16
 #define QM_STOP_CMD_GROUP_ID_MASK	15
@@ -185,25 +189,31 @@ static u16 task_region_offsets_e5[1][NUM_OF_CONNECTION_TYPES_E5] = {
 	 ((rl_valid ? 1 : 0) << 22) | (((rl_id) & 255) << 24) | \
 	 (((rl_id) >> 8) << 9))
 
-#define PQ_INFO_RAM_GRC_ADDRESS(pq_id) (XSEM_REG_FAST_MEMORY + \
-	SEM_FAST_REG_INT_RAM + XSTORM_PQ_INFO_OFFSET(pq_id))
+#define PQ_INFO_RAM_GRC_ADDRESS(pq_id) \
+	(XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + XSTORM_PQ_INFO_OFFSET(pq_id))
 
 /******************** INTERNAL IMPLEMENTATION *********************/
 
 /* Prepare PF RL enable/disable runtime init values */
-static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
+static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn,
+			       bool pf_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0);
 	if (pf_rl_en) {
+		u8 num_ext_voqs = ECORE_IS_E5(p_hwfn->p_dev) ? QM_E5_NUM_EXT_VOQ : MAX_NUM_VOQS_E4;
+		u64 voq_bit_mask = ((u64)1 << num_ext_voqs) - 1;
+
 		/* Enable RLs for all VOQs */
-		STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET,
-			     VOQS_BIT_MASK);
+		STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET, (u32)voq_bit_mask);
+#ifdef QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET
+		if (num_ext_voqs >= 32)
+			STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET,
+				     (u32)(voq_bit_mask >> 32));
+#endif
 
 		/* Write RL period */
-		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET,
-			     QM_RL_PERIOD_CLK_25M);
-		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET,
-			     QM_RL_PERIOD_CLK_25M);
+		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET, QM_RL_PERIOD_CLK_25M);
+		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIODTIMER_RT_OFFSET, QM_RL_PERIOD_CLK_25M);
 
 		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
@@ -213,184 +223,193 @@ static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 }
 
 /* Prepare PF WFQ enable/disable runtime init values */
-static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en)
+static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn,
+				bool pf_wfq_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFENABLE_RT_OFFSET, pf_wfq_en ? 1 : 0);
 
 	/* Set credit threshold for QM bypass flow */
 	if (pf_wfq_en && QM_BYPASS_EN)
-		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET,
-			     QM_WFQ_UPPER_BOUND);
+		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET, QM_WFQ_UPPER_BOUND);
 }
 
 /* Prepare global RL enable/disable runtime init values */
 static void ecore_enable_global_rl(struct ecore_hwfn *p_hwfn,
 				   bool global_rl_en)
 {
-	STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET,
-		     global_rl_en ? 1 : 0);
+	STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET, global_rl_en ? 1 : 0);
 	if (global_rl_en) {
 		/* Write RL period (use timer 0 only) */
-		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET,
-			     QM_RL_PERIOD_CLK_25M);
-		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET,
-			     QM_RL_PERIOD_CLK_25M);
+		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET, QM_RL_PERIOD_CLK_25M);
+		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET, QM_RL_PERIOD_CLK_25M);
 
 		/* Set credit threshold for QM bypass flow */
 		if (QM_BYPASS_EN)
-			STORE_RT_REG(p_hwfn,
-				     QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET,
+			STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET,
 				     QM_VP_RL_BYPASS_THRESH_SPEED);
 	}
 }
 
 /* Prepare VPORT WFQ enable/disable runtime init values */
-static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en)
+static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn,
+				   bool vport_wfq_en)
 {
-	STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET,
-		     vport_wfq_en ? 1 : 0);
+	STORE_RT_REG(p_hwfn, QM_REG_WFQVPENABLE_RT_OFFSET, vport_wfq_en ? 1 : 0);
 
 	/* Set credit threshold for QM bypass flow */
 	if (vport_wfq_en && QM_BYPASS_EN)
-		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET,
-			     QM_WFQ_UPPER_BOUND);
+		STORE_RT_REG(p_hwfn, QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET, QM_WFQ_UPPER_BOUND);
 }
 
 /* Prepare runtime init values to allocate PBF command queue lines for
- * the specified VOQ
+ * the specified VOQ.
  */
 static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn,
-					 u8 voq,
+					 u8 ext_voq,
 					 u16 cmdq_lines)
 {
-	u32 qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
+	u32 qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines, ECORE_IS_E5(p_hwfn->p_dev));
 
-	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq),
-			 (u32)cmdq_lines);
-	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd);
-	STORE_RT_REG(p_hwfn, QM_REG_VOQINITCRDLINE_RT_OFFSET + voq,
-		     qm_line_crd);
+	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(ext_voq), (u32)cmdq_lines);
+	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + ext_voq, qm_line_crd);
+	STORE_RT_REG(p_hwfn, QM_REG_VOQINITCRDLINE_RT_OFFSET + ext_voq, qm_line_crd);
 }
 
 /* Prepare runtime init values to allocate PBF command queue lines. */
 static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 				     u8 max_ports_per_engine,
 				     u8 max_phys_tcs_per_port,
-				     struct init_qm_port_params
-				     port_params[MAX_NUM_PORTS])
+				     struct init_qm_port_params port_params[MAX_NUM_PORTS])
 {
-	u8 tc, voq, port_id, num_tcs_in_port;
+	u8 tc, ext_voq, port_id, num_tcs_in_port;
+	u8 num_ext_voqs = ECORE_IS_E5(p_hwfn->p_dev) ? QM_E5_NUM_EXT_VOQ : MAX_NUM_VOQS_E4;
 
 	/* Clear PBF lines of all VOQs */
-	for (voq = 0; voq < MAX_NUM_VOQS; voq++)
-		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0);
+	for (ext_voq = 0; ext_voq < num_ext_voqs; ext_voq++)
+		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(ext_voq), 0);
 
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
-		u16 phys_lines, phys_lines_per_tc;
+		u16 phys_lines, phys_lines_per_tc, pure_lb_lines;
 
 		if (!port_params[port_id].active)
 			continue;
 
 		/* Find number of command queue lines to divide between the
-		 * active physical TCs.
+		 * active physical TCs. In E5, 1/8 of the lines are reserved,
+		 * and the lines for the pure LB TC are subtracted.
 		 */
 		phys_lines = port_params[port_id].num_pbf_cmd_lines;
-		phys_lines -= PBF_CMDQ_PURE_LB_LINES;
+		if (ECORE_IS_E5(p_hwfn->p_dev)) {
+			phys_lines -= DIV_ROUND_UP(phys_lines, PBF_CMDQ_LINES_E5_RSVD_RATIO);
+			pure_lb_lines = PBF_CMDQ_PURE_LB_LINES_E5;
+			/* E5 TGFS workaround uses 2 TCs for PURE_LB */
+			if (((port_params[port_id].active_phys_tcs >>
+			      PBF_TGFS_PURE_LB_TC_E5) & 0x1) == 0) {
+				phys_lines -= PBF_CMDQ_PURE_LB_LINES_E5;
+
+				/* Init registers for PBF_TGFS_PURE_LB_TC_E5 */
+				ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PBF_TGFS_PURE_LB_TC_E5,
+							    max_phys_tcs_per_port);
+				ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq, pure_lb_lines);
+			} else {
+				DP_NOTICE(p_hwfn, true, "Error : TC %d used for TGFS ramrods workaround. Must not used by driver.\n",
+					  PBF_TGFS_PURE_LB_TC_E5);
+			}
+		} else {
+			pure_lb_lines = PBF_CMDQ_PURE_LB_LINES_E4;
+		}
+		phys_lines -= pure_lb_lines;
 
 		/* Find #lines per active physical TC */
 		num_tcs_in_port = 0;
 		for (tc = 0; tc < max_phys_tcs_per_port; tc++)
-			if (((port_params[port_id].active_phys_tcs >> tc) &
-			      0x1) == 1)
+			if (((port_params[port_id].active_phys_tcs >> tc) & 0x1) == 1)
 				num_tcs_in_port++;
 		phys_lines_per_tc = phys_lines / num_tcs_in_port;
 
 		/* Init registers per active TC */
 		for (tc = 0; tc < max_phys_tcs_per_port; tc++) {
-			voq = VOQ(port_id, tc, max_phys_tcs_per_port);
-			if (((port_params[port_id].active_phys_tcs >>
-			      tc) & 0x1) == 1)
-				ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
-							     phys_lines_per_tc);
+			ext_voq = ecore_get_ext_voq(p_hwfn, port_id, tc, max_phys_tcs_per_port);
+			if (((port_params[port_id].active_phys_tcs >> tc) & 0x1) == 1)
+				ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq, phys_lines_per_tc);
 		}
 
 		/* Init registers for pure LB TC */
-		voq = VOQ(port_id, PURE_LB_TC, max_phys_tcs_per_port);
-		ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
-					     PBF_CMDQ_PURE_LB_LINES);
+		ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PURE_LB_TC, max_phys_tcs_per_port);
+		ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq, pure_lb_lines);
 	}
 }
 
-/*
- * Prepare runtime init values to allocate guaranteed BTB blocks for the
+/* Prepare runtime init values to allocate guaranteed BTB blocks for the
  * specified port. The guaranteed BTB space is divided between the TCs as
  * follows (shared space Is currently not used):
  * 1. Parameters:
- *     B BTB blocks for this port
- *     C Number of physical TCs for this port
+ *    B - BTB blocks for this port
+ *    C - Number of physical TCs for this port
  * 2. Calculation:
- *     a. 38 blocks (9700B jumbo frame) are allocated for global per port
- *        headroom
- *     b. B = B 38 (remainder after global headroom allocation)
- *     c. MAX(38,B/(C+0.7)) blocks are allocated for the pure LB VOQ.
- *     d. B = B MAX(38, B/(C+0.7)) (remainder after pure LB allocation).
- *     e. B/C blocks are allocated for each physical TC.
+ *    a. 38 blocks (9700B jumbo frame) are allocated for global per port
+ *	 headroom.
+ *    b. B = B - 38 (remainder after global headroom allocation).
+ *    c. MAX(38,B/(C + 0.7)) blocks are allocated for the pure LB VOQ.
+ *    d. B = B - MAX(38, B/(C + 0.7)) (remainder after pure LB allocation).
+ *    e. B/C blocks are allocated for each physical TC.
  * Assumptions:
  * - MTU is up to 9700 bytes (38 blocks)
  * - All TCs are considered symmetrical (same rate and packet size)
- * - No optimization for lossy TC (all are considered lossless). Shared space is
- *   not enabled and allocated for each TC.
+ * - No optimization for lossy TC (all are considered lossless). Shared space
+ *   is not enabled and allocated for each TC.
  */
 static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 				     u8 max_ports_per_engine,
 				     u8 max_phys_tcs_per_port,
-				     struct init_qm_port_params
-				     port_params[MAX_NUM_PORTS])
+				     struct init_qm_port_params port_params[MAX_NUM_PORTS])
 {
 	u32 usable_blocks, pure_lb_blocks, phys_blocks;
-	u8 tc, voq, port_id, num_tcs_in_port;
+	u8 tc, ext_voq, port_id, num_tcs_in_port;
 
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
 		if (!port_params[port_id].active)
 			continue;
 
 		/* Subtract headroom blocks */
-		usable_blocks = port_params[port_id].num_btb_blocks -
-				BTB_HEADROOM_BLOCKS;
+		usable_blocks = port_params[port_id].num_btb_blocks - BTB_HEADROOM_BLOCKS;
 
 		/* Find blocks per physical TC. Use factor to avoid floating
 		 * arithmetic.
 		 */
 		num_tcs_in_port = 0;
 		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
-			if (((port_params[port_id].active_phys_tcs >> tc) &
-			      0x1) == 1)
+			if (((port_params[port_id].active_phys_tcs >> tc) & 0x1) == 1)
 				num_tcs_in_port++;
 
 		pure_lb_blocks = (usable_blocks * BTB_PURE_LB_FACTOR) /
-				  (num_tcs_in_port * BTB_PURE_LB_FACTOR +
-				   BTB_PURE_LB_RATIO);
+				 (num_tcs_in_port * BTB_PURE_LB_FACTOR + BTB_PURE_LB_RATIO);
 		pure_lb_blocks = OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
-					    pure_lb_blocks /
-					    BTB_PURE_LB_FACTOR);
-		phys_blocks = (usable_blocks - pure_lb_blocks) /
-			      num_tcs_in_port;
+					    pure_lb_blocks / BTB_PURE_LB_FACTOR);
+		phys_blocks = (usable_blocks - pure_lb_blocks) / num_tcs_in_port;
 
 		/* Init physical TCs */
 		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-			if (((port_params[port_id].active_phys_tcs >> tc) &
-			     0x1) == 1) {
-				voq = VOQ(port_id, tc, max_phys_tcs_per_port);
-				STORE_RT_REG(p_hwfn,
-					PBF_BTB_GUARANTEED_RT_OFFSET(voq),
-					phys_blocks);
+			if (((port_params[port_id].active_phys_tcs >> tc) & 0x1) == 1) {
+				ext_voq = ecore_get_ext_voq(p_hwfn, port_id, tc,
+							    max_phys_tcs_per_port);
+				STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq),
+					     phys_blocks);
 			}
 		}
 
 		/* Init pure LB TC */
-		voq = VOQ(port_id, PURE_LB_TC, max_phys_tcs_per_port);
-		STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(voq),
-			     pure_lb_blocks);
+		ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PURE_LB_TC, max_phys_tcs_per_port);
+		STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq), pure_lb_blocks);
+
+		if (ECORE_IS_E5(p_hwfn->p_dev) &&
+			(((port_params[port_id].active_phys_tcs >>
+			   PBF_TGFS_PURE_LB_TC_E5) & 0x1) == 0)) {
+			/* Init registers for PBF_TGFS_PURE_LB_TC_E5 */
+			ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PBF_TGFS_PURE_LB_TC_E5,
+						    max_phys_tcs_per_port);
+			STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq), pure_lb_blocks);
+		}
 	}
 }
 
@@ -418,12 +437,18 @@ static int ecore_global_rl_rt_init(struct ecore_hwfn *p_hwfn,
 			return -1;
 		}
 
-		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + rl_id,
-			     (u32)QM_RL_CRD_REG_SIGN_BIT);
-		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLUPPERBOUND_RT_OFFSET + rl_id,
-			     upper_bound);
-		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + rl_id,
-			     inc_val);
+		STORE_RT_REG(p_hwfn, rl_id < QM_REG_RLGLBLCRD_RT_SIZE ?
+			QM_REG_RLGLBLCRD_RT_OFFSET + rl_id :
+			QM_REG_RLGLBLCRD_MSB_RT_OFFSET + rl_id - QM_REG_RLGLBLCRD_RT_SIZE,
+			(u32)QM_RL_CRD_REG_SIGN_BIT);
+		STORE_RT_REG(p_hwfn, rl_id < QM_REG_RLGLBLUPPERBOUND_RT_SIZE ?
+			QM_REG_RLGLBLUPPERBOUND_RT_OFFSET + rl_id :
+			QM_REG_RLGLBLUPPERBOUND_MSB_RT_OFFSET + rl_id -
+			QM_REG_RLGLBLUPPERBOUND_RT_SIZE, upper_bound);
+		STORE_RT_REG(p_hwfn, rl_id < QM_REG_RLGLBLINCVAL_RT_SIZE ?
+			QM_REG_RLGLBLINCVAL_RT_OFFSET + rl_id :
+			QM_REG_RLGLBLINCVAL_MSB_RT_OFFSET + rl_id - QM_REG_RLGLBLINCVAL_RT_SIZE,
+			inc_val);
 	}
 
 	return 0;
@@ -431,19 +456,19 @@ static int ecore_global_rl_rt_init(struct ecore_hwfn *p_hwfn,
 
 /* Prepare Tx PQ mapping runtime init values for the specified PF */
 static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
-				    struct ecore_ptt *p_ptt,
-				    u8 pf_id,
-				    u8 max_phys_tcs_per_port,
-						bool is_pf_loading,
-				    u32 num_pf_cids,
-				    u32 num_vf_cids,
-				    u16 start_pq,
-				    u16 num_pf_pqs,
-				    u16 num_vf_pqs,
+				   struct ecore_ptt *p_ptt,
+				   u8 pf_id,
+				   u8 max_phys_tcs_per_port,
+				   bool is_pf_loading,
+				   u32 num_pf_cids,
+				   u32 num_vf_cids,
+				   u16 start_pq,
+				   u16 num_pf_pqs,
+				   u16 num_vf_pqs,
 				   u16 start_vport,
-				    u32 base_mem_addr_4kb,
-				    struct init_qm_pq_params *pq_params,
-				    struct init_qm_vport_params *vport_params)
+				   u32 base_mem_addr_4kb,
+				   struct init_qm_pq_params *pq_params,
+				   struct init_qm_vport_params *vport_params)
 {
 	/* A bit per Tx PQ indicating if the PQ is associated with a VF */
 	u32 tx_pq_vf_mask[MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE] = { 0 };
@@ -465,14 +490,11 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 
 	/* Set mapping from PQ group to PF */
 	for (pq_group = first_pq_group; pq_group <= last_pq_group; pq_group++)
-		STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group,
-			     (u32)(pf_id));
+		STORE_RT_REG(p_hwfn, QM_REG_PQTX2PF_0_RT_OFFSET + pq_group, (u32)(pf_id));
 
 	/* Set PQ sizes */
-	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET,
-		     QM_PQ_SIZE_256B(num_pf_cids));
-	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET,
-		     QM_PQ_SIZE_256B(num_vf_cids));
+	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_0_RT_OFFSET, QM_PQ_SIZE_256B(num_pf_cids));
+	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_1_RT_OFFSET, QM_PQ_SIZE_256B(num_vf_cids));
 
 	/* Go over all Tx PQs */
 	for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) {
@@ -480,26 +502,24 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		bool is_vf_pq;
 		u8 ext_voq;
 
-		ext_voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id,
-			  max_phys_tcs_per_port);
+		ext_voq = ecore_get_ext_voq(p_hwfn, pq_params[i].port_id, pq_params[i].tc_id,
+					    max_phys_tcs_per_port);
 		is_vf_pq = (i >= num_pf_pqs);
 
 		/* Update first Tx PQ of VPORT/TC */
 		vport_id_in_pf = pq_params[i].vport_id - start_vport;
-		first_tx_pq_id =
-		vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
+		first_tx_pq_id = vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
 		if (first_tx_pq_id == QM_INVALID_PQ_ID) {
 			u32 map_val = (ext_voq << QM_WFQ_VP_PQ_VOQ_SHIFT) |
-				      (pf_id << QM_WFQ_VP_PQ_PF_SHIFT);
+				      (pf_id << (ECORE_IS_E5(p_hwfn->p_dev) ?
+					QM_WFQ_VP_PQ_PF_E5_SHIFT : QM_WFQ_VP_PQ_PF_E4_SHIFT));
 
 			/* Create new VP PQ */
-			vport_params[vport_id_in_pf].
-			    first_tx_pq_id[pq_params[i].tc_id] = pq_id;
+			vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id] = pq_id;
 			first_tx_pq_id = pq_id;
 
 			/* Map VP PQ to VOQ and PF */
-			STORE_RT_REG(p_hwfn, QM_REG_WFQVPMAP_RT_OFFSET +
-				     first_tx_pq_id, map_val);
+			STORE_RT_REG(p_hwfn, QM_REG_WFQVPMAP_RT_OFFSET + first_tx_pq_id, map_val);
 		}
 
 		/* Prepare PQ map entry */
@@ -518,8 +538,7 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		}
 
 		/* Set PQ base address */
-		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id,
-			     mem_addr_4kb);
+		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id, mem_addr_4kb);
 
 		/* Clear PQ pointer table entry (64 bit) */
 		if (is_pf_loading)
@@ -529,19 +548,16 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 
 		/* Write PQ info to RAM */
 #if (WRITE_PQ_INFO_TO_RAM != 0)
-		pq_info = PQ_INFO_ELEMENT(first_tx_pq_id, pf_id,
-					  pq_params[i].tc_id,
-					  pq_params[i].port_id,
-					  pq_params[i].rl_valid,
+		pq_info = PQ_INFO_ELEMENT(first_tx_pq_id, pf_id, pq_params[i].tc_id,
+					  pq_params[i].port_id, pq_params[i].rl_valid,
 					  pq_params[i].rl_id);
-		ecore_wr(p_hwfn, p_ptt, PQ_INFO_RAM_GRC_ADDRESS(pq_id),
-			 pq_info);
+		ecore_wr(p_hwfn, p_ptt, PQ_INFO_RAM_GRC_ADDRESS(pq_id), pq_info);
 #endif
 
 		/* If VF PQ, add indication to PQ VF mask */
 		if (is_vf_pq) {
 			tx_pq_vf_mask[pq_id / QM_PF_QUEUE_GROUP_SIZE] |=
-				(1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE));
+							(1 << (pq_id % QM_PF_QUEUE_GROUP_SIZE));
 			mem_addr_4kb += vport_pq_mem_4kb;
 		} else {
 			mem_addr_4kb += pq_mem_4kb;
@@ -551,8 +567,8 @@ static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 	/* Store Tx PQ VF mask to size select register */
 	for (i = 0; i < num_tx_pq_vf_masks; i++)
 		if (tx_pq_vf_mask[i])
-			STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET +
-				     i, tx_pq_vf_mask[i]);
+			STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET + i,
+				     tx_pq_vf_mask[i]);
 
 	return 0;
 }
@@ -566,7 +582,7 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				       u32 base_mem_addr_4kb)
 {
 	u32 pq_size, pq_mem_4kb, mem_addr_4kb;
-	u16 i, j, pq_id, pq_group;
+	u16 i, j, pq_id, pq_group, qm_other_pqs_per_pf;
 
 	/* A single other PQ group is used in each PF, where PQ group i is used
 	 * in PF i.
@@ -575,26 +591,23 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 	pq_size = num_pf_cids + num_tids;
 	pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
 	mem_addr_4kb = base_mem_addr_4kb;
+	qm_other_pqs_per_pf = ECORE_IS_E5(p_hwfn->p_dev) ? QM_OTHER_PQS_PER_PF_E5 :
+							   QM_OTHER_PQS_PER_PF_E4;
 
 	/* Map PQ group to PF */
-	STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group,
-		     (u32)(pf_id));
+	STORE_RT_REG(p_hwfn, QM_REG_PQOTHER2PF_0_RT_OFFSET + pq_group, (u32)(pf_id));
 
 	/* Set PQ sizes */
-	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET,
-		     QM_PQ_SIZE_256B(pq_size));
+	STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZE_2_RT_OFFSET, QM_PQ_SIZE_256B(pq_size));
 
-	for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE;
-	     i < QM_OTHER_PQS_PER_PF; i++, pq_id++) {
+	for (i = 0, pq_id = pf_id * QM_PF_QUEUE_GROUP_SIZE; i < qm_other_pqs_per_pf; i++, pq_id++) {
 		/* Set PQ base address */
-		STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id,
-			     mem_addr_4kb);
+		STORE_RT_REG(p_hwfn, QM_REG_BASEADDROTHERPQ_RT_OFFSET + pq_id, mem_addr_4kb);
 
 		/* Clear PQ pointer table entry */
 		if (is_pf_loading)
 			for (j = 0; j < 2; j++)
-				STORE_RT_REG(p_hwfn,
-					     QM_REG_PTRTBLOTHER_RT_OFFSET +
+				STORE_RT_REG(p_hwfn, QM_REG_PTRTBLOTHER_RT_OFFSET +
 					     (pq_id * 2) + j, 0);
 
 		mem_addr_4kb += pq_mem_4kb;
@@ -612,30 +625,30 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				struct init_qm_pq_params *pq_params)
 {
 	u32 inc_val, crd_reg_offset;
-	u8 voq;
+	u8 ext_voq;
 	u16 i;
 
 	inc_val = QM_WFQ_INC_VAL(pf_wfq);
 	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid PF WFQ weight configuration\n");
+		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
 
 	for (i = 0; i < num_tx_pqs; i++) {
-		voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id,
-			  max_phys_tcs_per_port);
-		crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ?
-				  QM_REG_WFQPFCRD_RT_OFFSET :
-				  QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
-				 voq * MAX_NUM_PFS_BB +
-				 (pf_id % MAX_NUM_PFS_BB);
-		OVERWRITE_RT_REG(p_hwfn, crd_reg_offset,
-				 (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+		ext_voq = ecore_get_ext_voq(p_hwfn, pq_params[i].port_id, pq_params[i].tc_id,
+					    max_phys_tcs_per_port);
+		crd_reg_offset = ECORE_IS_E5(p_hwfn->p_dev) ?
+			(ext_voq < QM_WFQ_CRD_E5_NUM_VOQS ? QM_REG_WFQPFCRD_RT_OFFSET :
+							    QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
+			(ext_voq % QM_WFQ_CRD_E5_NUM_VOQS) * MAX_NUM_PFS_E5 + pf_id :
+			(pf_id < MAX_NUM_PFS_BB ? QM_REG_WFQPFCRD_RT_OFFSET :
+						   QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
+			ext_voq * MAX_NUM_PFS_BB + (pf_id % MAX_NUM_PFS_BB);
+		OVERWRITE_RT_REG(p_hwfn, crd_reg_offset, (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	}
 
-	STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET +
-		     pf_id, QM_WFQ_UPPER_BOUND | (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+	STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + pf_id, QM_WFQ_UPPER_BOUND |
+		     (u32)QM_WFQ_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_WFQPFWEIGHT_RT_OFFSET + pf_id, inc_val);
 
 	return 0;
@@ -644,21 +657,21 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 /* Prepare PF RL runtime init values for the specified PF.
  * Return -1 on error.
  */
-static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
+static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn,
+			       u8 pf_id,
+			       u32 pf_rl)
 {
 	u32 inc_val;
 
 	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_PF_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid PF rate limit configuration\n");
+		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration\n");
 		return -1;
 	}
 
-	STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id,
+	STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id, (u32)QM_RL_CRD_REG_SIGN_BIT);
+	STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id, QM_PF_RL_UPPER_BOUND |
 		     (u32)QM_RL_CRD_REG_SIGN_BIT);
-	STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id,
-		     QM_PF_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT);
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFINCVAL_RT_OFFSET + pf_id, inc_val);
 
 	return 0;
@@ -682,8 +695,7 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 
 		inc_val = QM_WFQ_INC_VAL(vport_params[vport_id].wfq);
 		if (inc_val > QM_WFQ_MAX_INC_VAL) {
-			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT WFQ weight configuration\n");
+			DP_NOTICE(p_hwfn, true, "Invalid VPORT WFQ weight configuration\n");
 			return -1;
 		}
 
@@ -693,10 +705,9 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 			if (vp_pq_id == QM_INVALID_PQ_ID)
 				continue;
 
-			STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
-				     vp_pq_id, (u32)QM_WFQ_CRD_REG_SIGN_BIT);
-			STORE_RT_REG(p_hwfn, QM_REG_WFQVPWEIGHT_RT_OFFSET +
-				     vp_pq_id, inc_val);
+			STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET + vp_pq_id,
+				     (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+			STORE_RT_REG(p_hwfn, QM_REG_WFQVPWEIGHT_RT_OFFSET + vp_pq_id, inc_val);
 		}
 	}
 
@@ -708,16 +719,14 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn,
 {
 	u32 reg_val, i;
 
-	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val;
-	     i++) {
+	for (i = 0, reg_val = 0; i < QM_STOP_CMD_MAX_POLL_COUNT && !reg_val; i++) {
 		OSAL_UDELAY(QM_STOP_CMD_POLL_PERIOD_US);
 		reg_val = ecore_rd(p_hwfn, p_ptt, QM_REG_SDMCMDREADY);
 	}
 
 	/* Check if timeout while waiting for SDM command ready */
 	if (i == QM_STOP_CMD_MAX_POLL_COUNT) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
-			   "Timeout waiting for QM SDM cmd ready signal\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
+			   "Timeout waiting for QM SDM command ready signal\n");
 		return false;
 	}
 
@@ -726,9 +735,9 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn,
 
 static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt,
-							  u32 cmd_addr,
-							  u32 cmd_data_lsb,
-							  u32 cmd_data_msb)
+			      u32 cmd_addr,
+			      u32 cmd_data_lsb,
+			      u32 cmd_data_msb)
 {
 	if (!ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt))
 		return false;
@@ -744,16 +753,30 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 
 /******************** INTERFACE IMPLEMENTATION *********************/
 
+u8 ecore_get_ext_voq(struct ecore_hwfn *p_hwfn,
+		     u8 port_id,
+		     u8 tc,
+		     u8 max_phys_tcs_per_port)
+{
+	if (tc == PURE_LB_TC)
+		return NUM_OF_PHYS_TCS * (ECORE_IS_E5(p_hwfn->p_dev) ? MAX_NUM_PORTS_E5 :
+								       MAX_NUM_PORTS_BB) + port_id;
+	else
+		return port_id * (ECORE_IS_E5(p_hwfn->p_dev) ? NUM_OF_PHYS_TCS :
+							       max_phys_tcs_per_port) + tc;
+}
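+
+/* Illustrative note (not part of the patch): the extended VOQ index is
+ * port-major. E.g. on an E4 device with port_id = 1, tc = 2 and
+ * max_phys_tcs_per_port = 4, a physical TC maps to VOQ 1 * 4 + 2 = 6.
+ * For tc == PURE_LB_TC the loopback VOQs start after all physical ones,
+ * at NUM_OF_PHYS_TCS * MAX_NUM_PORTS_{BB,E5} + port_id.
+ */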
+
 u32 ecore_qm_pf_mem_size(struct ecore_hwfn *p_hwfn,
 			 u32 num_pf_cids,
-						 u32 num_vf_cids,
-						 u32 num_tids,
-						 u16 num_pf_pqs,
-						 u16 num_vf_pqs)
+			 u32 num_vf_cids,
+			 u32 num_tids,
+			 u16 num_pf_pqs,
+			 u16 num_vf_pqs)
 {
 	return QM_PQ_MEM_4KB(num_pf_cids) * num_pf_pqs +
-	    QM_PQ_MEM_4KB(num_vf_cids) * num_vf_pqs +
-	    QM_PQ_MEM_4KB(num_pf_cids + num_tids) * QM_OTHER_PQS_PER_PF;
+		   QM_PQ_MEM_4KB(num_vf_cids) * num_vf_pqs +
+		   QM_PQ_MEM_4KB(num_pf_cids + num_tids) * (ECORE_IS_E5(p_hwfn->p_dev) ?
+						QM_OTHER_PQS_PER_PF_E5 : QM_OTHER_PQS_PER_PF_E4);
 }
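 
 /* Worked example (illustrative only): with num_pf_cids = 64, num_vf_cids = 32,
  * num_tids = 0, num_pf_pqs = 4 and num_vf_pqs = 2, the PF requires
  * QM_PQ_MEM_4KB(64) * 4 + QM_PQ_MEM_4KB(32) * 2 pages for Tx PQs, plus
  * QM_PQ_MEM_4KB(64) pages for each "other" PQ (QM_OTHER_PQS_PER_PF_E4 or
  * QM_OTHER_PQS_PER_PF_E5 of them, depending on the chip).
  */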
 
 int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
@@ -763,24 +786,21 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			    bool pf_wfq_en,
 			    bool global_rl_en,
 			    bool vport_wfq_en,
-			    struct init_qm_port_params
-				   port_params[MAX_NUM_PORTS],
+			    struct init_qm_port_params port_params[MAX_NUM_PORTS],
 			    struct init_qm_global_rl_params
-				   global_rl_params[COMMON_MAX_QM_GLOBAL_RLS])
+							global_rl_params[COMMON_MAX_QM_GLOBAL_RLS])
 {
 	u32 mask = 0;
 
 	/* Init AFullOprtnstcCrdMask */
-	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_LINEVOQ,
-		  QM_OPPOR_LINE_VOQ_DEF);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_LINEVOQ, QM_OPPOR_LINE_VOQ_DEF);
 	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ, QM_BYTE_CRD_EN);
-	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFWFQ, pf_wfq_en);
-	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPWFQ, vport_wfq_en);
-	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFRL, pf_rl_en);
-	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPQCNRL, global_rl_en);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFWFQ, pf_wfq_en ? 1 : 0);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPWFQ, vport_wfq_en ? 1 : 0);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFRL, pf_rl_en ? 1 : 0);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPQCNRL, global_rl_en ? 1 : 0);
 	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_FWPAUSE, QM_OPPOR_FW_STOP_DEF);
-	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY,
-		  QM_OPPOR_PQ_EMPTY_DEF);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY, QM_OPPOR_PQ_EMPTY_DEF);
 	STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask);
 
 	/* Enable/disable PF RL */
@@ -796,12 +816,10 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 	ecore_enable_vport_wfq(p_hwfn, vport_wfq_en);
 
 	/* Init PBF CMDQ line credit */
-	ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine,
-				 max_phys_tcs_per_port, port_params);
+	ecore_cmdq_lines_rt_init(p_hwfn, max_ports_per_engine, max_phys_tcs_per_port, port_params);
 
 	/* Init BTB blocks in PBF */
-	ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine,
-				 max_phys_tcs_per_port, port_params);
+	ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine, max_phys_tcs_per_port, port_params);
 
 	ecore_global_rl_rt_init(p_hwfn, global_rl_params);
 
@@ -830,33 +848,27 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 	u16 vport_id;
 	u8 tc;
 
-	other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) *
-			     QM_OTHER_PQS_PER_PF;
+	other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) * (ECORE_IS_E5(p_hwfn->p_dev) ?
+						QM_OTHER_PQS_PER_PF_E5 : QM_OTHER_PQS_PER_PF_E4);
 
 	/* Clear first Tx PQ ID array for each VPORT */
 	for (vport_id = 0; vport_id < num_vports; vport_id++)
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
-			vport_params[vport_id].first_tx_pq_id[tc] =
-				QM_INVALID_PQ_ID;
+			vport_params[vport_id].first_tx_pq_id[tc] = QM_INVALID_PQ_ID;
 
 	/* Map Other PQs (if any) */
-#if QM_OTHER_PQS_PER_PF > 0
-	ecore_other_pq_map_rt_init(p_hwfn, pf_id, is_pf_loading, num_pf_cids,
-				   num_tids, 0);
-#endif
+	if ((ECORE_IS_E5(p_hwfn->p_dev) ? QM_OTHER_PQS_PER_PF_E5 : QM_OTHER_PQS_PER_PF_E4) > 0)
+		ecore_other_pq_map_rt_init(p_hwfn, pf_id, is_pf_loading, num_pf_cids, num_tids, 0);
 
 	/* Map Tx PQs */
-	if (ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, pf_id, max_phys_tcs_per_port,
-				    is_pf_loading, num_pf_cids, num_vf_cids,
-				    start_pq, num_pf_pqs, num_vf_pqs,
-				    start_vport, other_mem_size_4kb, pq_params,
-				    vport_params))
+	if (ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, pf_id, max_phys_tcs_per_port, is_pf_loading,
+				    num_pf_cids, num_vf_cids, start_pq, num_pf_pqs, num_vf_pqs,
+				    start_vport, other_mem_size_4kb, pq_params, vport_params))
 		return -1;
 
 	/* Init PF WFQ */
 	if (pf_wfq)
-		if (ecore_pf_wfq_rt_init(p_hwfn, pf_id, pf_wfq,
-					 max_phys_tcs_per_port,
+		if (ecore_pf_wfq_rt_init(p_hwfn, pf_id, pf_wfq, max_phys_tcs_per_port,
 					 num_pf_pqs + num_vf_pqs, pq_params))
 			return -1;
 
@@ -872,14 +884,15 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 }
 
 int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
-		      struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq)
+		      struct ecore_ptt *p_ptt,
+		      u8 pf_id,
+		      u16 pf_wfq)
 {
 	u32 inc_val;
 
 	inc_val = QM_WFQ_INC_VAL(pf_wfq);
 	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid PF WFQ weight configuration\n");
+		DP_NOTICE(p_hwfn, true, "Invalid PF WFQ weight configuration\n");
 		return -1;
 	}
 
@@ -889,19 +902,19 @@ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
 }
 
 int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
-		     struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl)
+		     struct ecore_ptt *p_ptt,
+		     u8 pf_id,
+		     u32 pf_rl)
 {
 	u32 inc_val;
 
 	inc_val = QM_RL_INC_VAL(pf_rl);
 	if (inc_val > QM_PF_RL_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid PF rate limit configuration\n");
+		DP_NOTICE(p_hwfn, true, "Invalid PF rate limit configuration\n");
 		return -1;
 	}
 
-	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4,
-		 (u32)QM_RL_CRD_REG_SIGN_BIT);
+	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4, (u32)QM_RL_CRD_REG_SIGN_BIT);
 	ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFINCVAL + pf_id * 4, inc_val);
 
 	return 0;
@@ -918,22 +931,20 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 
 	inc_val = QM_WFQ_INC_VAL(wfq);
 	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT WFQ weight configuration\n");
+		DP_NOTICE(p_hwfn, true, "Invalid VPORT WFQ weight configuration\n");
 		return -1;
 	}
 
 	/* A VPORT can have several VPORT PQ IDs for various TCs */
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		vp_pq_id = first_tx_pq_id[tc];
-		if (vp_pq_id != QM_INVALID_PQ_ID) {
-			ecore_wr(p_hwfn, p_ptt,
-				 QM_REG_WFQVPWEIGHT + vp_pq_id * 4, inc_val);
-		}
+		if (vp_pq_id == QM_INVALID_PQ_ID)
+			continue;
+		ecore_wr(p_hwfn, p_ptt, QM_REG_WFQVPWEIGHT + vp_pq_id * 4, inc_val);
 	}
 
 	return 0;
-		}
+}
 
 int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
@@ -948,37 +959,13 @@ int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
 		return -1;
 	}
 
-	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + rl_id * 4,
-		 (u32)QM_RL_CRD_REG_SIGN_BIT);
-	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + rl_id * 4, inc_val);
-
-	return 0;
-}
-
-int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, u8 vport_id,
-						u32 vport_rl,
-						u32 link_speed)
-{
-	u32 inc_val, max_qm_global_rls = ECORE_IS_E5(p_hwfn->p_dev) ? MAX_QM_GLOBAL_RLS_E5
-								    : MAX_QM_GLOBAL_RLS_E4;
-
-	if (vport_id >= max_qm_global_rls) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration\n");
-		return -1;
-	}
-
-	inc_val = QM_RL_INC_VAL(vport_rl ? vport_rl : link_speed);
-	if (inc_val > QM_VP_RL_MAX_INC_VAL(link_speed)) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT rate-limit configuration\n");
-		return -1;
-	}
-
-	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
-		 (u32)QM_RL_CRD_REG_SIGN_BIT);
-	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
+	ecore_wr(p_hwfn, p_ptt, rl_id < QM_REG_RLGLBLCRD_SIZE ?
+		QM_REG_RLGLBLCRD + rl_id * 4 :
+		QM_REG_RLGLBLCRD_MSB_E5 + (rl_id - QM_REG_RLGLBLCRD_SIZE) * 4,
+		(u32)QM_RL_CRD_REG_SIGN_BIT);
+	ecore_wr(p_hwfn, p_ptt, rl_id < QM_REG_RLGLBLINCVAL_SIZE ?
+		QM_REG_RLGLBLINCVAL + rl_id * 4 :
+		QM_REG_RLGLBLINCVAL_MSB_E5 + (rl_id - QM_REG_RLGLBLINCVAL_SIZE) * 4, inc_val);
 
 	return 0;
 }
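 
 /* Note (illustrative): rate limiter IDs beyond the base register array spill
  * into the E5 MSB copies. Assuming QM_REG_RLGLBLCRD_SIZE were 256, rl_id = 10
  * would still write QM_REG_RLGLBLCRD + 10 * 4, while rl_id = 300 would write
  * QM_REG_RLGLBLCRD_MSB_E5 + (300 - 256) * 4.
  */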
@@ -986,9 +973,11 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
 bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
 			    bool is_release_cmd,
-			    bool is_tx_pq, u16 start_pq, u16 num_pqs)
+			    bool is_tx_pq,
+			    u16 start_pq,
+			    u16 num_pqs)
 {
-	u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = { 0 };
+	u32 cmd_arr[QM_CMD_STRUCT_SIZE(QM_STOP_CMD)] = {0};
 	u32 pq_mask = 0, last_pq, pq_id;
 
 	last_pq = start_pq + num_pqs - 1;
@@ -1003,16 +992,13 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 			pq_mask |= (1 << (pq_id % QM_STOP_PQ_MASK_WIDTH));
 
 		/* If last PQ or end of PQ mask, write command */
-		if ((pq_id == last_pq) ||
-		    (pq_id % QM_STOP_PQ_MASK_WIDTH ==
-		    (QM_STOP_PQ_MASK_WIDTH - 1))) {
-			QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PAUSE_MASK,
-					 pq_mask);
-			QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, GROUP_ID,
-					 pq_id / QM_STOP_PQ_MASK_WIDTH);
-			if (!ecore_send_qm_cmd
-			    (p_hwfn, p_ptt, QM_STOP_CMD_ADDR, cmd_arr[0],
-			     cmd_arr[1]))
+		if ((pq_id == last_pq) || (pq_id % QM_STOP_PQ_MASK_WIDTH ==
+					   (QM_STOP_PQ_MASK_WIDTH - 1))) {
+			QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD, PAUSE_MASK, pq_mask);
+			QM_CMD_SET_FIELD(cmd_arr, QM_STOP_CMD,
+					 GROUP_ID, pq_id / QM_STOP_PQ_MASK_WIDTH);
+			if (!ecore_send_qm_cmd(p_hwfn, p_ptt, QM_STOP_CMD_ADDR,
+					       cmd_arr[0], cmd_arr[1]))
 				return false;
 			pq_mask = 0;
 		}
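 
 /* Note (illustrative): PQs are grouped into QM_STOP_PQ_MASK_WIDTH-wide masks.
  * Assuming a width of 32, stopping PQs 30..34 issues two commands: group 0
  * with bits 30-31 set (flushed when pq_id % 32 == 31) and group 1 with
  * bits 0-2 set (flushed at last_pq).
  */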
@@ -1029,8 +1015,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 #define NIG_ETS_MIN_WFQ_BYTES		1600
 
 /* NIG: ETS constants */
-#define NIG_ETS_UP_BOUND(weight, mtu) \
-	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+#define NIG_ETS_UP_BOUND(weight, mtu)		(2 * ((weight) > (mtu) ? (weight) : (mtu)))
 
 /* NIG: RL constants */
 
@@ -1046,8 +1031,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 /* Rate in mbps */
 #define NIG_RL_INC_VAL(rate)		(((rate) * NIG_RL_PERIOD) / 8)
 
-#define NIG_RL_MAX_VAL(inc_val, mtu) \
-	(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
+#define NIG_RL_MAX_VAL(inc_val, mtu)		(2 * ((inc_val) > (mtu) ? (inc_val) : (mtu)))
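+
+/* Worked example (illustrative): per NIG_RL_INC_VAL, a 10000 mbps rate adds
+ * 10000 * NIG_RL_PERIOD / 8 bytes of credit each period, and NIG_RL_MAX_VAL
+ * caps the bucket at twice the larger of that increment and the MTU.
+ */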
 
 /* NIG: packet priority configuration constants */
 #define NIG_PRIORITY_MAP_TC_BITS	4
@@ -1055,7 +1039,8 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 
 void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 			struct ecore_ptt *p_ptt,
-			struct init_ets_req *req, bool is_lb)
+			struct init_ets_req *req,
+			bool is_lb)
 {
 	u32 min_weight, tc_weight_base_addr, tc_weight_addr_diff;
 	u32 tc_bound_base_addr, tc_bound_addr_diff;
@@ -1063,8 +1048,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 	u8 tc, num_tc, tc_client_offset;
 
 	num_tc = is_lb ? NUM_OF_TCS : NUM_OF_PHYS_TCS;
-	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET :
-				   NIG_TX_ETS_CLIENT_OFFSET;
+	tc_client_offset = is_lb ? NIG_LB_ETS_CLIENT_OFFSET : NIG_TX_ETS_CLIENT_OFFSET;
 	min_weight = 0xffffffff;
 	tc_weight_base_addr = is_lb ? NIG_REG_LB_ARB_CREDIT_WEIGHT_0 :
 				      NIG_REG_TX_ARB_CREDIT_WEIGHT_0;
@@ -1098,17 +1082,14 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 	}
 
 	/* Write SP map */
-	ecore_wr(p_hwfn, p_ptt,
-		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT :
-		 NIG_REG_TX_ARB_CLIENT_IS_STRICT,
-		 (sp_tc_map << tc_client_offset));
+	ecore_wr(p_hwfn, p_ptt, is_lb ? NIG_REG_LB_ARB_CLIENT_IS_STRICT :
+		 NIG_REG_TX_ARB_CLIENT_IS_STRICT, (sp_tc_map << tc_client_offset));
 
 	/* Write WFQ map */
-	ecore_wr(p_hwfn, p_ptt,
-		 is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ :
-		 NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ,
-		 (wfq_tc_map << tc_client_offset));
-	/* write WFQ weights */
+	ecore_wr(p_hwfn, p_ptt, is_lb ? NIG_REG_LB_ARB_CLIENT_IS_SUBJECT2WFQ :
+		 NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ, (wfq_tc_map << tc_client_offset));
+
+	/* Write WFQ weights */
 	for (tc = 0; tc < num_tc; tc++, tc_client_offset++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
 		u32 byte_weight;
@@ -1117,8 +1098,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
 			continue;
 
 		/* Translate weight to bytes */
-		byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			      min_weight;
+		byte_weight = (NIG_ETS_MIN_WFQ_BYTES * tc_req->weight) / min_weight;
 
 		/* Write WFQ weight */
 		ecore_wr(p_hwfn, p_ptt, tc_weight_base_addr +
@@ -1138,62 +1118,50 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 	u32 ctrl, inc_val, reg_offset;
 	u8 tc;
 
-	/* Disable global MAC+LB RL */
-	ctrl =
-	    NIG_RL_BASE_TYPE <<
-	    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT;
+	/* Disable global MAC + LB RL */
+	ctrl = NIG_RL_BASE_TYPE <<
+	       NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
 
-	/* Configure and enable global MAC+LB RL */
+	/* Configure and enable global MAC + LB RL */
 	if (req->lb_mac_rate) {
 		/* Configure */
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_PERIOD,
 			 NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_mac_rate);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_VALUE,
-			 inc_val);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_INC_VALUE, inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
 
 		/* Enable */
-		ctrl |=
-		    1 <<
-		    NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT;
+		ctrl |= 1 << NIG_REG_TX_LB_GLBRATELIMIT_CTRL_TX_LB_GLBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_TX_LB_GLBRATELIMIT_CTRL, ctrl);
 	}
 
 	/* Disable global LB-only RL */
-	ctrl =
-	    NIG_RL_BASE_TYPE <<
-	    NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT;
+	ctrl = NIG_RL_BASE_TYPE << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_BASE_TYPE_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
 
 	/* Configure and enable global LB-only RL */
 	if (req->lb_rate) {
 		/* Configure */
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD,
-			 NIG_RL_PERIOD_CLK_25M);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_PERIOD, NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->lb_rate);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_VALUE,
-			 inc_val);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_INC_VALUE, inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_MAX_VALUE,
 			 NIG_RL_MAX_VAL(inc_val, req->mtu));
 
 		/* Enable */
-		ctrl |=
-		    1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT;
+		ctrl |= 1 << NIG_REG_LB_BRBRATELIMIT_CTRL_LB_BRBRATELIMIT_EN_SHIFT;
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_BRBRATELIMIT_CTRL, ctrl);
 	}
 
 	/* Per-TC RLs */
-	for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS;
-	     tc++, reg_offset += 4) {
+	for (tc = 0, reg_offset = 0; tc < NUM_OF_PHYS_TCS; tc++, reg_offset += 4) {
 		/* Disable TC RL */
-		ctrl =
-		    NIG_RL_BASE_TYPE <<
-		NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT;
-		ecore_wr(p_hwfn, p_ptt,
-			 NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
+		ctrl = NIG_RL_BASE_TYPE <<
+		       NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_BASE_TYPE_0_SHIFT;
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
 
 		/* Configure and enable TC RL */
 		if (!req->tc_rate[tc])
@@ -1203,16 +1171,13 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_PERIOD_0 +
 			 reg_offset, NIG_RL_PERIOD_CLK_25M);
 		inc_val = NIG_RL_INC_VAL(req->tc_rate[tc]);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 +
-			 reg_offset, inc_val);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_INC_VALUE_0 + reg_offset, inc_val);
 		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_MAX_VALUE_0 +
 			 reg_offset, NIG_RL_MAX_VAL(inc_val, req->mtu));
 
 		/* Enable */
-		ctrl |= 1 <<
-			NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 +
-			 reg_offset, ctrl);
+		ctrl |= 1 << NIG_REG_LB_TCRATELIMIT_CTRL_0_LB_TCRATELIMIT_EN_0_SHIFT;
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_LB_TCRATELIMIT_CTRL_0 + reg_offset, ctrl);
 	}
 }
 
@@ -1228,8 +1193,7 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 		if (!req->pri[pri].valid)
 			continue;
 
-		pri_tc_mask |= (req->pri[pri].tc_id <<
-				(pri * NIG_PRIORITY_MAP_TC_BITS));
+		pri_tc_mask |= (req->pri[pri].tc_id << (pri * NIG_PRIORITY_MAP_TC_BITS));
 		tc_pri_mask[req->pri[pri].tc_id] |= (1 << pri);
 	}
 
@@ -1238,10 +1202,8 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 
 	/* Write TC -> priority mask */
 	for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4,
-			 tc_pri_mask[tc]);
-		ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_TC0_PRIORITY_MASK + tc * 4,
-			 tc_pri_mask[tc]);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_PRIORITY_FOR_TC_0 + tc * 4, tc_pri_mask[tc]);
+		ecore_wr(p_hwfn, p_ptt, NIG_REG_RX_TC0_PRIORITY_MASK + tc * 4, tc_pri_mask[tc]);
 	}
 }
 
@@ -1251,18 +1213,17 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 
 /* PRS: ETS configuration constants */
 #define PRS_ETS_MIN_WFQ_BYTES		1600
-#define PRS_ETS_UP_BOUND(weight, mtu) \
-	(2 * ((weight) > (mtu) ? (weight) : (mtu)))
+#define PRS_ETS_UP_BOUND(weight, mtu)		(2 * ((weight) > (mtu) ? (weight) : (mtu)))
 
 
 void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, struct init_ets_req *req)
+			struct ecore_ptt *p_ptt,
+			struct init_ets_req *req)
 {
 	u32 tc_weight_addr_diff, tc_bound_addr_diff, min_weight = 0xffffffff;
 	u8 tc, sp_tc_map = 0, wfq_tc_map = 0;
 
-	tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 -
-			      PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
+	tc_weight_addr_diff = PRS_REG_ETS_ARB_CREDIT_WEIGHT_1 - PRS_REG_ETS_ARB_CREDIT_WEIGHT_0;
 	tc_bound_addr_diff = PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_1 -
 			     PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0;
 
@@ -1284,14 +1245,13 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 			min_weight = tc_req->weight;
 	}
 
-	/* write SP map */
+	/* Write SP map */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_STRICT, sp_tc_map);
 
-	/* write WFQ map */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ,
-		 wfq_tc_map);
+	/* Write WFQ map */
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CLIENT_IS_SUBJECT2WFQ, wfq_tc_map);
 
-	/* write WFQ weights */
+	/* Write WFQ weights */
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
 		struct init_ets_tc_req *tc_req = &req->tc_req[tc];
 		u32 byte_weight;
@@ -1300,17 +1260,15 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 			continue;
 
 		/* Translate weight to bytes */
-		byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) /
-			      min_weight;
+		byte_weight = (PRS_ETS_MIN_WFQ_BYTES * tc_req->weight) / min_weight;
 
 		/* Write WFQ weight */
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 + tc *
-			 tc_weight_addr_diff, byte_weight);
+		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_WEIGHT_0 +
+			 tc * tc_weight_addr_diff, byte_weight);
 
 		/* Write WFQ upper bound */
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_ETS_ARB_CREDIT_UPPER_BOUND_0 +
-			 tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight,
-								   req->mtu));
+			 tc * tc_bound_addr_diff, PRS_ETS_UP_BOUND(byte_weight, req->mtu));
 	}
 }
 
@@ -1327,16 +1285,15 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 
 /* Temporary big RAM allocation - should be updated */
 void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, struct init_brb_ram_req *req)
+			struct ecore_ptt *p_ptt,
+			struct init_brb_ram_req *req)
 {
 	u32 tc_headroom_blocks, min_pkt_size_blocks, total_blocks;
 	u32 active_port_blocks, reg_offset = 0;
 	u8 port, active_ports = 0;
 
-	tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc,
-					       BRB_BLOCK_SIZE);
-	min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size,
-						BRB_BLOCK_SIZE);
+	tc_headroom_blocks = (u32)DIV_ROUND_UP(req->headroom_per_tc, BRB_BLOCK_SIZE);
+	min_pkt_size_blocks = (u32)DIV_ROUND_UP(req->min_pkt_size, BRB_BLOCK_SIZE);
 	total_blocks = ECORE_IS_K2(p_hwfn->p_dev) ? BRB_TOTAL_RAM_BLOCKS_K2 :
 						    BRB_TOTAL_RAM_BLOCKS_BB;
 
@@ -1354,26 +1311,20 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 		u8 tc;
 
 		/* Calculate per-port sizes */
-		tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc,
-							 BRB_BLOCK_SIZE);
-		port_blocks = req->num_active_tcs[port] ? active_port_blocks :
-							  0;
-		port_guaranteed_blocks = req->num_active_tcs[port] *
-					 tc_guaranteed_blocks;
+		tc_guaranteed_blocks = (u32)DIV_ROUND_UP(req->guranteed_per_tc, BRB_BLOCK_SIZE);
+		port_blocks = req->num_active_tcs[port] ? active_port_blocks : 0;
+		port_guaranteed_blocks = req->num_active_tcs[port] * tc_guaranteed_blocks;
 		port_shared_blocks = port_blocks - port_guaranteed_blocks;
-		full_xoff_th = req->num_active_tcs[port] *
-			       BRB_MIN_BLOCKS_PER_TC;
+		full_xoff_th = req->num_active_tcs[port] * BRB_MIN_BLOCKS_PER_TC;
 		full_xon_th = full_xoff_th + min_pkt_size_blocks;
 		pause_xoff_th = tc_headroom_blocks;
 		pause_xon_th = pause_xoff_th + min_pkt_size_blocks;
 
 		/* Init total size per port */
-		ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4,
-			 port_blocks);
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_TOTAL_MAC_SIZE + port * 4, port_blocks);
 
 		/* Init shared size per port */
-		ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4,
-			 port_shared_blocks);
+		ecore_wr(p_hwfn, p_ptt, BRB_REG_SHARED_HR_AREA + port * 4, port_shared_blocks);
 
 		for (tc = 0; tc < NUM_OF_TCS; tc++, reg_offset += 4) {
 			/* Clear init values for non-active TCs */
@@ -1386,43 +1337,33 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 			}
 
 			/* Init guaranteed size per TC */
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_TC_GUARANTIED_0 + reg_offset,
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_TC_GUARANTIED_0 + reg_offset,
 				 tc_guaranteed_blocks);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset,
-				 BRB_HYST_BLOCKS);
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_GUARANTIED_HYST_0 +
+				 reg_offset, BRB_HYST_BLOCKS);
 
 			/* Init pause/full thresholds per physical TC - for
 			 * loopback traffic.
 			 */
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_LB_TC_FULL_XON_THRESHOLD_0 +
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_FULL_XON_THRESHOLD_0 +
 				 reg_offset, full_xon_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_LB_TC_PAUSE_XOFF_THRESHOLD_0 +
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_PAUSE_XOFF_THRESHOLD_0 +
 				 reg_offset, pause_xoff_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
 				 reg_offset, pause_xon_th);
 
 			/* Init pause/full thresholds per physical TC - for
 			 * main traffic.
 			 */
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
 				 reg_offset, full_xoff_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_FULL_XON_THRESHOLD_0 +
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_FULL_XON_THRESHOLD_0 +
 				 reg_offset, full_xon_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_PAUSE_XOFF_THRESHOLD_0 +
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_PAUSE_XOFF_THRESHOLD_0 +
 				 reg_offset, pause_xoff_th);
-			ecore_wr(p_hwfn, p_ptt,
-				 BRB_REG_MAIN_TC_PAUSE_XON_THRESHOLD_0 +
+			ecore_wr(p_hwfn, p_ptt, BRB_REG_MAIN_TC_PAUSE_XON_THRESHOLD_0 +
 				 reg_offset, pause_xon_th);
 		}
 	}
@@ -1431,12 +1372,14 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 #endif /* UNUSED_HSI_FUNC */
 #ifndef UNUSED_HSI_FUNC
 
-#define ARR_REG_WR(dev, ptt, addr, arr, arr_size)		\
-	do {							\
-		u32 i;						\
-		for (i = 0; i < (arr_size); i++)		\
-			ecore_wr(dev, ptt, ((addr) + (4 * i)),	\
-				 ((u32 *)&(arr))[i]);		\
+/* @DPDK */
+#define ARR_REG_WR(dev, ptt, addr, arr, arr_size) \
+	do { \
+		u32 i; \
+		       \
+		for (i = 0; i < (arr_size); i++) \
+			ecore_wr(dev, ptt, ((addr) + (4 * i)), \
+				 ((u32 *)&(arr))[i]); \
 	} while (0)
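 
 /* Usage sketch (illustrative, not part of the patch): writing a small table
  * to consecutive 32-bit registers; "base_addr" is a placeholder:
  *
  *	u32 vals[2] = {0x1, 0x2};
  *
  *	ARR_REG_WR(p_hwfn, p_ptt, base_addr, vals, 2);
  *
  * The macro issues one ecore_wr() per dword, at base_addr, base_addr + 4, ...
  */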
 
 #ifndef DWORDS_TO_BYTES
@@ -1445,25 +1388,27 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 
 
 /**
- * @brief ecore_dmae_to_grc - is an internal function - writes from host to
- * wide-bus registers (split registers are not supported yet)
+ * @brief ecore_dmae_to_grc - internal function that writes from host to wide-bus registers
+ *                            (split registers are not supported yet)
  *
  * @param p_hwfn -       HW device data
  * @param p_ptt -       ptt window used for writing the registers.
- * @param pData - pointer to source data.
+ * @param p_data - pointer to source data.
  * @param addr - Destination register address.
 * @param len_in_dwords - data length in DWORDS (u32)
  */
 static int ecore_dmae_to_grc(struct ecore_hwfn *p_hwfn,
-			     struct ecore_ptt *p_ptt,
-			     u32 *pData,
-			     u32 addr,
-			     u32 len_in_dwords)
+	struct ecore_ptt *p_ptt,
+	u32 *p_data,
+	u32 addr,
+	u32 len_in_dwords)
 {
 	struct dmae_params params;
 	bool read_using_dmae = false;
 
-	if (!pData)
+	if (!p_data)
 		return -1;
 
 	/* Set DMAE params */
@@ -1472,17 +1417,17 @@ static int ecore_dmae_to_grc(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 1);
 
 	/* Execute DMAE command */
-	read_using_dmae = !ecore_dmae_host2grc(p_hwfn, p_ptt,
-					       (u64)(osal_uintptr_t)(pData),
+	read_using_dmae = !ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)(p_data),
 					       addr, len_in_dwords, &params);
 	if (!read_using_dmae)
-		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
-			   "Failed writing to chip using DMAE, using GRC instead\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
+			   "Failed writing to chip using DMAE, using GRC instead\n");
 
 	/* If not read using DMAE, read using GRC */
-	if (!read_using_dmae)
+	if (!read_using_dmae) {
 		/* write to registers using GRC */
-		ARR_REG_WR(p_hwfn, p_ptt, addr, pData, len_in_dwords);
+		ARR_REG_WR(p_hwfn, p_ptt, addr, p_data, len_in_dwords);
+	}
 
 	return len_in_dwords;
 }
@@ -1497,12 +1442,13 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, u32 ethType)
 #endif /* UNUSED_HSI_FUNC */
 
 #define SET_TUNNEL_TYPE_ENABLE_BIT(var, offset, enable) \
-(var = ((var) & ~(1 << (offset))) | ((enable) ? (1 << (offset)) : 0))
-#define PRS_ETH_TUNN_OUTPUT_FORMAT        -188897008
-#define PRS_ETH_OUTPUT_FORMAT             -46832
+	(var = ((var) & ~(1 << (offset))) | ((enable) ? (1 << (offset)) : 0))
+#define PRS_ETH_TUNN_OUTPUT_FORMAT_E4     0xF4DAB910
+#define PRS_ETH_OUTPUT_FORMAT_E4          0xFFFF4910
 
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
-			       struct ecore_ptt *p_ptt, u16 dest_port)
+	struct ecore_ptt *p_ptt,
+	u16 dest_port)
 {
 	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port);
@@ -1515,7 +1461,8 @@ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 }
 
 void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt, bool vxlan_enable)
+	struct ecore_ptt *p_ptt,
+	bool vxlan_enable)
 {
 	u32 reg_val;
 
@@ -1524,31 +1471,38 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, PRS_ENCAPSULATION_TYPE_EN_FLAGS_VXLAN_ENABLE_SHIFT,
 		vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
-	if (reg_val) { /* TODO: handle E5 init */
-		reg_val = ecore_rd(p_hwfn, p_ptt,
-				   PRS_REG_OUTPUT_FORMAT_4_0_BB_K2);
+	if (reg_val != 0 && !ECORE_IS_E5(p_hwfn->p_dev)) {
+		reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2);
 
-		/* Update output  only if tunnel blocks not included. */
-		if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT)
+		/* @DPDK */
+		/* Update output format only if tunnel blocks are not included. */
+		if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT_E4)
 			ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
-				 (u32)PRS_ETH_TUNN_OUTPUT_FORMAT);
+				 (u32)PRS_ETH_TUNN_OUTPUT_FORMAT_E4);
 	}
 
 	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
 	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
-				   NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT,
-				   vxlan_enable);
+		NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT, vxlan_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
 
-	/* Update DORQ register */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN_BB_K2,
-		 vxlan_enable ? 1 : 0);
+	/* Update PBF register */
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		reg_val = ecore_rd(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5);
+		SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
+			PBF_REG_TUNNEL_ENABLES_TUNNEL_VXLAN_EN_E5_SHIFT, vxlan_enable);
+		ecore_wr(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5, reg_val);
+	} else { /* Update DORQ register */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN_BB_K2,
+			 vxlan_enable ? 1 : 0);
+	}
 }
 
 void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
-			  struct ecore_ptt *p_ptt,
-			  bool eth_gre_enable, bool ip_gre_enable)
+	struct ecore_ptt *p_ptt,
+	bool eth_gre_enable,
+	bool ip_gre_enable)
 {
 	u32 reg_val;
 
@@ -1561,35 +1515,44 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 		PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GRE_ENABLE_SHIFT,
 		ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
-	if (reg_val) { /* TODO: handle E5 init */
-		reg_val = ecore_rd(p_hwfn, p_ptt,
-				   PRS_REG_OUTPUT_FORMAT_4_0_BB_K2);
+	if (reg_val != 0 && !ECORE_IS_E5(p_hwfn->p_dev)) {
+		reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2);
 
-		/* Update output  only if tunnel blocks not included. */
-		if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT)
+		/* @DPDK */
+		/* Update output format only if tunnel blocks are not included. */
+		if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT_E4)
 			ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
-				 (u32)PRS_ETH_TUNN_OUTPUT_FORMAT);
+				 (u32)PRS_ETH_TUNN_OUTPUT_FORMAT_E4);
 	}
 
 	/* Update NIG register */
 	reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
-	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
-		   NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT,
-		   eth_gre_enable);
-	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
-		   NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT,
-		   ip_gre_enable);
+	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT,
+				   eth_gre_enable);
+	SET_TUNNEL_TYPE_ENABLE_BIT(reg_val, NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT,
+				   ip_gre_enable);
 	ecore_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
 
-	/* Update DORQ registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN_BB_K2,
-		 eth_gre_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN_BB_K2,
-		 ip_gre_enable ? 1 : 0);
+	/* Update PBF register */
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		reg_val = ecore_rd(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5);
+		SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
+			PBF_REG_TUNNEL_ENABLES_TUNNEL_GRE_ETH_EN_E5_SHIFT, eth_gre_enable);
+		SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
+			PBF_REG_TUNNEL_ENABLES_TUNNEL_GRE_IP_EN_E5_SHIFT, ip_gre_enable);
+		ecore_wr(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5, reg_val);
+	} else { /* Update DORQ registers */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN_BB_K2,
+			 eth_gre_enable ? 1 : 0);
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN_BB_K2,
+			 ip_gre_enable ? 1 : 0);
+	}
 }
 
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
-				struct ecore_ptt *p_ptt, u16 dest_port)
+	struct ecore_ptt *p_ptt,
+	u16 dest_port)
 {
 	/* Update PRS register */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_NGE_PORT, dest_port);
@@ -1603,7 +1566,8 @@ void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 
 void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     struct ecore_ptt *p_ptt,
-			     bool eth_geneve_enable, bool ip_geneve_enable)
+			     bool eth_geneve_enable,
+			     bool ip_geneve_enable)
 {
 	u32 reg_val;
 
@@ -1616,35 +1580,42 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		PRS_ENCAPSULATION_TYPE_EN_FLAGS_IP_OVER_GENEVE_ENABLE_SHIFT,
 		ip_geneve_enable);
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
-	if (reg_val) { /* TODO: handle E5 init */
-		reg_val = ecore_rd(p_hwfn, p_ptt,
-				   PRS_REG_OUTPUT_FORMAT_4_0_BB_K2);
+	if (reg_val != 0 && !ECORE_IS_E5(p_hwfn->p_dev)) {
+		reg_val = ecore_rd(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2);
 
-		/* Update output  only if tunnel blocks not included. */
-		if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT)
+		/* @DPDK */
+		/* Update output format only if tunnel blocks are not included. */
+		if (reg_val == (u32)PRS_ETH_OUTPUT_FORMAT_E4)
 			ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
-				 (u32)PRS_ETH_TUNN_OUTPUT_FORMAT);
+				 (u32)PRS_ETH_TUNN_OUTPUT_FORMAT_E4);
 	}
 
 	/* Update NIG register */
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE,
-		 eth_geneve_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE,
-		 ip_geneve_enable ? 1 : 0);
+	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE, eth_geneve_enable ? 1 : 0);
+	ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE, ip_geneve_enable ? 1 : 0);
 
 	/* EDPM with geneve tunnel not supported in BB */
 	if (ECORE_IS_BB_B0(p_hwfn->p_dev))
 		return;
 
-	/* Update DORQ registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2,
-		 eth_geneve_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2,
-		 ip_geneve_enable ? 1 : 0);
+	/* Update PBF register */
+	if (ECORE_IS_E5(p_hwfn->p_dev)) {
+		reg_val = ecore_rd(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5);
+		SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
+			PBF_REG_TUNNEL_ENABLES_TUNNEL_NGE_ETH_EN_E5_SHIFT, eth_geneve_enable);
+		SET_TUNNEL_TYPE_ENABLE_BIT(reg_val,
+			PBF_REG_TUNNEL_ENABLES_TUNNEL_NGE_IP_EN_E5_SHIFT, ip_geneve_enable);
+		ecore_wr(p_hwfn, p_ptt, PBF_REG_TUNNEL_ENABLES_E5, reg_val);
+	} else { /* Update DORQ registers */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2,
+			 eth_geneve_enable ? 1 : 0);
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2,
+			 ip_geneve_enable ? 1 : 0);
+	}
 }
 
 #define PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET      3
-#define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT   -925189872
+#define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT_E4   0xC8DAB910
 
 void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt,
@@ -1658,13 +1629,14 @@ void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
 	/* set VXLAN_NO_L2_ENABLE mask */
 	cfg_mask = (1 << PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET);
 
-	if (enable) {
+	if (enable && !ECORE_IS_E5(p_hwfn->p_dev)) {
 		/* set VXLAN_NO_L2_ENABLE flag */
 		reg_val |= cfg_mask;
 
 		/* update PRS FIC Format register */
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
-		 (u32)PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT);
+			 (u32)PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT_E4);
+	} else {
 		/* clear VXLAN_NO_L2_ENABLE flag */
 		reg_val &= ~cfg_mask;
 	}
@@ -1683,9 +1655,10 @@ void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
 #define RAM_LINE_SIZE sizeof(u64)
 #define REG_SIZE sizeof(u32)
 
 void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
-		       struct ecore_ptt *p_ptt,
-		       u16 pf_id)
+	struct ecore_ptt *p_ptt,
+	u16 pf_id)
 {
 	struct regpair ram_line;
 	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
@@ -1699,35 +1672,32 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id, 0);
 
 	/* Zero ramline */
-	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
-			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
-			  sizeof(ram_line) / REG_SIZE);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line, PRS_REG_GFT_PROFILE_MASK_RAM +
+			  RAM_LINE_SIZE * pf_id, sizeof(ram_line) / REG_SIZE);
 
 }
 
-
+/* @DPDK */
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
-				   struct ecore_ptt *p_ptt)
+	struct ecore_ptt *p_ptt)
 {
 	u32 rfs_cm_hdr_event_id;
 
 	/* Set RFS event ID to be awakened in Tstorm by PRS */
 	rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
-	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
-	    PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
-	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
-	    PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
+	rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID << PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
+	rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR << PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
 }
 
 void ecore_gft_config(struct ecore_hwfn *p_hwfn,
-			       struct ecore_ptt *p_ptt,
-			       u16 pf_id,
-			       bool tcp,
-			       bool udp,
-			       bool ipv4,
-			       bool ipv6,
-			       enum gft_profile_type profile_type)
+	struct ecore_ptt *p_ptt,
+	u16 pf_id,
+	bool tcp,
+	bool udp,
+	bool ipv4,
+	bool ipv6,
+	enum gft_profile_type profile_type)
 {
 	u32 reg_val, cam_line, search_non_ip_as_gft;
 	struct regpair ram_line = { 0 };
@@ -1740,8 +1710,7 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 		DP_NOTICE(p_hwfn, true, "gft_config: unsupported gft_profile_type\n");
 
 	/* Set RFS event ID to be awakened in Tstorm by PRS */
-	reg_val = T_ETH_PACKET_MATCH_RFS_EVENTID <<
-		  PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
+	reg_val = T_ETH_PACKET_MATCH_RFS_EVENTID << PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
 	reg_val |= PARSER_ETH_CONN_CM_HDR << PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, reg_val);
 
@@ -1756,13 +1725,11 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_VALID, 1);
 
 	/* Filters are per PF!! */
-	SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_PF_ID_MASK,
-		  GFT_CAM_LINE_MAPPED_PF_ID_MASK_MASK);
+	SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_PF_ID_MASK, GFT_CAM_LINE_MAPPED_PF_ID_MASK_MASK);
 	SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_PF_ID, pf_id);
 
 	if (!(tcp && udp)) {
-		SET_FIELD(cam_line,
-			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK,
+		SET_FIELD(cam_line, GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK,
 			  GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK);
 		if (tcp)
 			SET_FIELD(cam_line,
@@ -1785,10 +1752,8 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 	}
 
 	/* Write characteristics to cam */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
-		 cam_line);
-	cam_line = ecore_rd(p_hwfn, p_ptt,
-			    PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id, cam_line);
+	cam_line = ecore_rd(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
 
 	/* Write line to RAM - compare to filter 4 tuple */
 
@@ -1823,19 +1788,15 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 		search_non_ip_as_gft = 1;
 	}
 
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_NON_IP_AS_GFT,
-		 search_non_ip_as_gft);
-	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
-			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
-			  sizeof(ram_line) / REG_SIZE);
+	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_NON_IP_AS_GFT, search_non_ip_as_gft);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line, PRS_REG_GFT_PROFILE_MASK_RAM +
+			  RAM_LINE_SIZE * pf_id, sizeof(ram_line) / REG_SIZE);
 
 	/* Set default profile so that no filter match will happen */
 	ram_line.lo = 0xffffffff;
 	ram_line.hi = 0x3ff;
-	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
-			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE *
-			  PRS_GFT_CAM_LINES_NO_MATCH,
-			  sizeof(ram_line) / REG_SIZE);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line, PRS_REG_GFT_PROFILE_MASK_RAM +
+			  RAM_LINE_SIZE * PRS_GFT_CAM_LINES_NO_MATCH, sizeof(ram_line) / REG_SIZE);
 
 	/* Enable gft search */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
@@ -1844,14 +1805,16 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 
 #endif /* UNUSED_HSI_FUNC */
 
-/* Configure VF zone size mode */
-void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
-				    struct ecore_ptt *p_ptt, u16 mode,
+/* Configure VF zone size mode */
+void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 mode,
 				    bool runtime_init)
 {
 	u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
 	u32 msdm_vf_offset_mask;
 
+	if (ECORE_IS_E5(p_hwfn->p_dev))
+		DP_NOTICE(p_hwfn, true, "config_vf_zone_size_mode: Not supported in E5\n");
+
 	if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
 		msdm_vf_size_log += 1;
 	else if (mode == VF_ZONE_SIZE_MODE_QUAD)
@@ -1860,55 +1823,51 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 	msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;
 
 	if (runtime_init) {
-		STORE_RT_REG(p_hwfn,
-			     PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
-			     msdm_vf_size_log);
-		STORE_RT_REG(p_hwfn,
-			     PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET,
-			     msdm_vf_offset_mask);
+		STORE_RT_REG(p_hwfn, PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET, msdm_vf_size_log);
+		STORE_RT_REG(p_hwfn, PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET, msdm_vf_offset_mask);
 	} else {
-		ecore_wr(p_hwfn, p_ptt,
-			 PGLUE_B_REG_MSDM_VF_SHIFT_B, msdm_vf_size_log);
-		ecore_wr(p_hwfn, p_ptt,
-			 PGLUE_B_REG_MSDM_OFFSET_MASK_B, msdm_vf_offset_mask);
+		ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MSDM_VF_SHIFT_B, msdm_vf_size_log);
+		ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MSDM_OFFSET_MASK_B, msdm_vf_offset_mask);
 	}
 }
 
 /* Get mstorm statistics for offset by VF zone size mode */
-u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
-				       u16 stat_cnt_id,
+u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn, u16 stat_cnt_id,
 				       u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id);
 
-	if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) &&
-	    (stat_cnt_id > MAX_NUM_PFS)) {
+	if (ECORE_IS_E5(p_hwfn->p_dev))
+		return offset;
+
+	if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) && (stat_cnt_id > MAX_NUM_PFS)) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
 			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
-			    (stat_cnt_id - MAX_NUM_PFS);
+				   (stat_cnt_id - MAX_NUM_PFS);
 		else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD)
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
-			    (stat_cnt_id - MAX_NUM_PFS);
+				       (stat_cnt_id - MAX_NUM_PFS);
 	}
 
 	return offset;
 }
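 
 /* Worked example (illustrative): in VF_ZONE_SIZE_MODE_DOUBLE on a non-E5
  * chip, stat_cnt_id = MAX_NUM_PFS + 3 adds 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG)
  * to the base offset, since each of the three preceding VF zones has grown
  * by one default-sized slice.
  */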
 
 /* Get mstorm VF producer offset by VF zone size mode */
-u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
-					 u8 vf_id,
-					 u8 vf_queue_id,
+u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8 vf_queue_id,
 					 u16 vf_zone_size_mode)
 {
 	u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id);
 
+	if (ECORE_IS_E5(p_hwfn->p_dev))
+		DP_NOTICE(p_hwfn, true, "get_mstorm_eth_vf_prods_offset: Not supported in E5. Use queue zone instead.\n");
+
 	if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
 		if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
 			offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
 				   vf_id;
 		else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD)
 			offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
-				  vf_id;
+				       vf_id;
 	}
 
 	return offset;
@@ -1919,12 +1878,10 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
 #endif
 static u8 cdu_crc8_table[CRC8_TABLE_SIZE];
 
-/* Calculate and return CDU validation byte per connection type / region /
- * cid
- */
+/* Calculate and return CDU validation byte per connection type/region/cid */
 static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
 {
-	static u8 crc8_table_valid;	/*automatically initialized to 0*/
+	static u8 crc8_table_valid; /* automatically initialized to 0 */
 	u8 crc, validation_byte = 0;
 	u32 validation_string = 0;
 	u32 data_to_crc;
@@ -1934,62 +1891,55 @@ static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
 		crc8_table_valid = 1;
 	}
 
-	/*
-	 * The CRC is calculated on the String-to-compress:
-	 * [31:8]  = {CID[31:20],CID[11:0]}
-	 * [7:4]   = Region
-	 * [3:0]   = Type
+	/* The CRC is calculated on the String-to-compress:
+	 * [31:8] = {CID[31:20],CID[11:0]}
+	 * [ 7:4] = Region
+	 * [ 3:0] = Type
 	 */
-#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
-	CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
-	validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
+	validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
 #endif
 
-#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
-	CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
-	validation_string |= ((region & 0xF) << 4);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
+	validation_string |= ((region & 0xF) << 4);
 #endif
 
-#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
-	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
-	validation_string |= (conn_type & 0xF);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
+	validation_string |= (conn_type & 0xF);
 #endif
 	/* Convert to big-endian and calculate CRC8 */
 	data_to_crc = OSAL_BE32_TO_CPU(validation_string);
 
-	crc = OSAL_CRC8(cdu_crc8_table, (u8 *)&data_to_crc, sizeof(data_to_crc),
-			CRC8_INIT_VALUE);
+	crc = OSAL_CRC8(cdu_crc8_table, (u8 *)&data_to_crc, sizeof(data_to_crc), CRC8_INIT_VALUE);
 
 	/* The validation byte [7:0] is composed:
 	 * for type A validation
-	 * [7]		= active configuration bit
-	 * [6:0]	= crc[6:0]
+	 *   [7] = active configuration bit
+	 * [6:0] = crc[6:0]
 	 *
 	 * for type B validation
-	 * [7]		= active configuration bit
-	 * [6:3]	= connection_type[3:0]
-	 * [2:0]	= crc[2:0]
+	 *   [7] = active configuration bit
+	 * [6:3] = connection_type[3:0]
+	 * [2:0] = crc[2:0]
 	 */
 	validation_byte |= ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >>
 			     CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7;
 
-#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
-	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
-	validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
+	validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
 #else
-	validation_byte |= crc & 0x7F;
+	validation_byte |= crc & 0x7F;
 #endif
 	return validation_byte;
 }
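 
 /* Worked example (illustrative): with conn_type = 1, region = 5 and
  * cid = 0x00100ABC, and all three CFG_USE_* bits set, the string-to-compress
  * is (cid & 0xFFF00000) | ((cid & 0xFFF) << 8) | (5 << 4) | 1 = 0x001ABC51,
  * which is then byte-swapped and run through CRC8.
  */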
 
 /* Calculate and set validation bytes for session context */
-void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
-				       void *p_ctx_mem, u16 ctx_size,
+void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, u16 ctx_size,
 				       u8 ctx_type, u32 cid)
 {
 	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
 
-	p_ctx = (u8 * const)p_ctx_mem;
+	p_ctx = (u8 *const)p_ctx_mem;
 
 	if (ECORE_IS_E5(p_hwfn->p_dev)) {
 		x_val_ptr = &p_ctx[con_region_offsets_e5[0][ctx_type]];
@@ -2009,12 +1959,12 @@ void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
 }
 
 /* Calculate and set validation bytes for task context */
-void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-				    u16 ctx_size, u8 ctx_type, u32 tid)
+void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, u16 ctx_size,
+				    u8 ctx_type, u32 tid)
 {
 	u8 *p_ctx, *region1_val_ptr;
 
-	p_ctx = (u8 * const)p_ctx_mem;
+	p_ctx = (u8 *const)p_ctx_mem;
 
 	region1_val_ptr = ECORE_IS_E5(p_hwfn->p_dev) ?
 		&p_ctx[task_region_offsets_e5[0][ctx_type]] :
@@ -2026,13 +1976,12 @@ void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
 }
 
 /* Memset session context to 0 while preserving validation bytes */
-void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-			      u32 ctx_size, u8 ctx_type)
+void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
 {
 	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
 	u8 x_val, t_val, u_val;
 
-	p_ctx = (u8 * const)p_ctx_mem;
+	p_ctx = (u8 *const)p_ctx_mem;
 
 	if (ECORE_IS_E5(p_hwfn->p_dev)) {
 		x_val_ptr = &p_ctx[con_region_offsets_e5[0][ctx_type]];
@@ -2056,13 +2005,12 @@ void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
 }
 
 /* Memset task context to 0 while preserving validation bytes */
-void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
-			   u32 ctx_size, u8 ctx_type)
+void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
 {
 	u8 *p_ctx, *region1_val_ptr;
 	u8 region1_val;
 
-	p_ctx = (u8 * const)p_ctx_mem;
+	p_ctx = (u8 *const)p_ctx_mem;
 
 	region1_val_ptr = ECORE_IS_E5(p_hwfn->p_dev) ?
 		&p_ctx[task_region_offsets_e5[0][ctx_type]] :
@@ -2076,8 +2024,7 @@ void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
 }
 
 /* Enable and configure context validation */
-void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
-				     struct ecore_ptt *p_ptt)
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 {
 	u32 ctx_validation;
 
@@ -2094,33 +2041,77 @@ void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
 }
 
-#define PHYS_ADDR_DWORDS        DIV_ROUND_UP(sizeof(dma_addr_t), 4)
-#define OVERLAY_HDR_SIZE_DWORDS (sizeof(struct fw_overlay_buf_hdr) / 4)
+/*******************************************************************************
+ * File name : rdma_init.c
+ * Author    : Michael Shteinbok
+ *******************************************************************************
+ *******************************************************************************
+ * Description:
+ * RDMA HSI functions
+ *
+ *******************************************************************************
+ * Notes: This is the input to the auto generated file drv_init_fw_funcs.c
+ *
+ *******************************************************************************/
 
-static u32 ecore_get_overlay_addr_ram_addr(struct ecore_hwfn *p_hwfn,
-					   u8 storm_id)
+static u32 ecore_get_rdma_assert_ram_addr(struct ecore_hwfn *p_hwfn,
+					  u8 storm_id)
 {
 	switch (storm_id) {
 	case 0: return TSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-			TSTORM_OVERLAY_BUF_ADDR_OFFSET;
+		       TSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
 	case 1: return MSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-			MSTORM_OVERLAY_BUF_ADDR_OFFSET;
+		       MSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
 	case 2: return USEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-			USTORM_OVERLAY_BUF_ADDR_OFFSET;
+		       USTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
 	case 3: return XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-			XSTORM_OVERLAY_BUF_ADDR_OFFSET;
+		       XSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
 	case 4: return YSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-			YSTORM_OVERLAY_BUF_ADDR_OFFSET;
+		       YSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
 	case 5: return PSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-			PSTORM_OVERLAY_BUF_ADDR_OFFSET;
+		       PSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
+
+	default: return 0;
+	}
+}
+
+void ecore_set_rdma_error_level(struct ecore_hwfn *p_hwfn,
+				struct ecore_ptt *p_ptt,
+				u8 assert_level[NUM_STORMS])
+{
+	u8 storm_id;
+
+	for (storm_id = 0; storm_id < NUM_STORMS; storm_id++) {
+		u32 ram_addr = ecore_get_rdma_assert_ram_addr(p_hwfn, storm_id);
+
+		ecore_wr(p_hwfn, p_ptt, ram_addr, assert_level[storm_id]);
+	}
+}
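+
+/* Usage sketch (illustrative, not part of the patch): setting one assert
+ * level for every Storm (NUM_STORMS entries, one per T/M/U/X/Y/P Storm):
+ *
+ *	u8 levels[NUM_STORMS];
+ *
+ *	OSAL_MEMSET(levels, 1, sizeof(levels));
+ *	ecore_set_rdma_error_level(p_hwfn, p_ptt, levels);
+ */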
+
+#define PHYS_ADDR_DWORDS	DIV_ROUND_UP(sizeof(dma_addr_t), 4)
+#define OVERLAY_HDR_SIZE_DWORDS	(sizeof(struct fw_overlay_buf_hdr) / 4)
+
+static u32 ecore_get_overlay_addr_ram_addr(struct ecore_hwfn *p_hwfn,
+					   u8 storm_id)
+{
+	switch (storm_id) {
+	case 0: return TSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + TSTORM_OVERLAY_BUF_ADDR_OFFSET;
+	case 1: return MSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + MSTORM_OVERLAY_BUF_ADDR_OFFSET;
+	case 2: return USEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + USTORM_OVERLAY_BUF_ADDR_OFFSET;
+	case 3: return XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + XSTORM_OVERLAY_BUF_ADDR_OFFSET;
+	case 4: return YSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + YSTORM_OVERLAY_BUF_ADDR_OFFSET;
+	case 5: return PSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + PSTORM_OVERLAY_BUF_ADDR_OFFSET;
 
 	default: return 0;
 	}
 }
 
 struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn,
-					 const u32 *const fw_overlay_in_buf,
-					 u32 buf_size_in_bytes)
+						 const u32 *const fw_overlay_in_buf,
+						 u32 buf_size_in_bytes)
 {
 	u32 buf_size = buf_size_in_bytes / sizeof(u32), buf_offset = 0;
 	struct phys_mem_desc *allocated_mem;
@@ -2128,15 +2119,12 @@ struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn,
 	if (!buf_size)
 		return OSAL_NULL;
 
-	allocated_mem = (struct phys_mem_desc *)OSAL_ZALLOC(p_hwfn->p_dev,
-							    GFP_KERNEL,
-							    NUM_STORMS *
-						  sizeof(struct phys_mem_desc));
+	allocated_mem = (struct phys_mem_desc *)OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+			 NUM_STORMS * sizeof(struct phys_mem_desc));
 	if (!allocated_mem)
 		return OSAL_NULL;
 
-	OSAL_MEMSET(allocated_mem, 0, NUM_STORMS *
-		    sizeof(struct phys_mem_desc));
+	OSAL_MEMSET(allocated_mem, 0, NUM_STORMS * sizeof(struct phys_mem_desc));
 
 	/* For each Storm, set physical address in RAM */
 	while (buf_offset < buf_size) {
@@ -2145,19 +2133,16 @@ struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn,
 		u32 storm_buf_size;
 		u8 storm_id;
 
-		hdr =
-		    (struct fw_overlay_buf_hdr *)&fw_overlay_in_buf[buf_offset];
-		storm_buf_size = GET_FIELD(hdr->data,
-					   FW_OVERLAY_BUF_HDR_BUF_SIZE);
+		hdr = (struct fw_overlay_buf_hdr *)&fw_overlay_in_buf[buf_offset];
+		storm_buf_size = GET_FIELD(hdr->data, FW_OVERLAY_BUF_HDR_BUF_SIZE);
 		storm_id = GET_FIELD(hdr->data, FW_OVERLAY_BUF_HDR_STORM_ID);
 		storm_mem_desc = allocated_mem + storm_id;
 		storm_mem_desc->size = storm_buf_size * sizeof(u32);
 
 		/* Allocate physical memory for Storm's overlays buffer */
-		storm_mem_desc->virt_addr =
-			OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
-						&storm_mem_desc->phys_addr,
-						storm_mem_desc->size);
+		storm_mem_desc->virt_addr = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+								    &storm_mem_desc->phys_addr,
+								    storm_mem_desc->size);
 		if (!storm_mem_desc->virt_addr)
 			break;
 
@@ -2165,8 +2150,7 @@ struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn,
 		buf_offset += OVERLAY_HDR_SIZE_DWORDS;
 
 		/* Copy Storm's overlays buffer to allocated memory */
-		OSAL_MEMCPY(storm_mem_desc->virt_addr,
-			    &fw_overlay_in_buf[buf_offset],
+		OSAL_MEMCPY(storm_mem_desc->virt_addr, &fw_overlay_in_buf[buf_offset],
 			    storm_mem_desc->size);
 
 		/* Advance to next Storm */
@@ -2175,7 +2159,7 @@ struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn,
 
 	/* If memory allocation has failed, free all allocated memory */
 	if (buf_offset < buf_size) {
-		ecore_fw_overlay_mem_free(p_hwfn, allocated_mem);
+		ecore_fw_overlay_mem_free(p_hwfn, &allocated_mem);
 		return OSAL_NULL;
 	}
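
A note on the loop above: the input image is a stream of per-Storm chunks, each led by a
one-dword fw_overlay_buf_hdr packing the Storm ID and the chunk size, and buf_offset hops
from header to header. A minimal standalone sketch of that walk, with an illustrative bit
layout (the real field positions come from GET_FIELD and the HSI headers):

#include <stdint.h>
#include <stdio.h>

/* Illustrative packing only: bits 0-7 = storm ID, bits 8-31 = chunk size
 * in dwords. Not the real fw_overlay_buf_hdr layout.
 */
#define HDR_STORM_ID(h)	((h) & 0xff)
#define HDR_BUF_SIZE(h)	((h) >> 8)

static void walk_overlay_buf(const uint32_t *buf, uint32_t size_dwords)
{
	uint32_t off = 0;

	while (off < size_dwords) {
		uint32_t hdr = buf[off++];		/* consume header dword */
		uint32_t chunk = HDR_BUF_SIZE(hdr);

		printf("storm %u: %u dwords at offset %u\n",
		       HDR_STORM_ID(hdr), chunk, off);
		off += chunk;				/* jump to next header */
	}
}

int main(void)
{
	/* Two chunks: 2 dwords for storm 0, then 1 dword for storm 3 */
	uint32_t buf[] = { (2u << 8) | 0, 0xaaaa, 0xbbbb,
			   (1u << 8) | 3, 0xcccc };

	walk_overlay_buf(buf, sizeof(buf) / sizeof(buf[0]));
	return 0;
}
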
 
@@ -2189,8 +2173,8 @@ void ecore_fw_overlay_init_ram(struct ecore_hwfn *p_hwfn,
 	u8 storm_id;
 
 	for (storm_id = 0; storm_id < NUM_STORMS; storm_id++) {
-		struct phys_mem_desc *storm_mem_desc =
-			      (struct phys_mem_desc *)fw_overlay_mem + storm_id;
+		struct phys_mem_desc *storm_mem_desc = (struct phys_mem_desc *)fw_overlay_mem +
+							storm_id;
 		u32 ram_addr, i;
 
 		/* Skip Storms with no FW overlays */
@@ -2203,31 +2187,29 @@ void ecore_fw_overlay_init_ram(struct ecore_hwfn *p_hwfn,
 
 		/* Write Storm's overlay physical address to RAM */
 		for (i = 0; i < PHYS_ADDR_DWORDS; i++, ram_addr += sizeof(u32))
-			ecore_wr(p_hwfn, p_ptt, ram_addr,
-				 ((u32 *)&storm_mem_desc->phys_addr)[i]);
+			ecore_wr(p_hwfn, p_ptt, ram_addr, ((u32 *)&storm_mem_desc->phys_addr)[i]);
 	}
 }
 
 void ecore_fw_overlay_mem_free(struct ecore_hwfn *p_hwfn,
-			       struct phys_mem_desc *fw_overlay_mem)
+				struct phys_mem_desc **fw_overlay_mem)
 {
 	u8 storm_id;
 
-	if (!fw_overlay_mem)
+	if (!fw_overlay_mem || !(*fw_overlay_mem))
 		return;
 
 	for (storm_id = 0; storm_id < NUM_STORMS; storm_id++) {
-		struct phys_mem_desc *storm_mem_desc =
-			      (struct phys_mem_desc *)fw_overlay_mem + storm_id;
+		struct phys_mem_desc *storm_mem_desc = (struct phys_mem_desc *)*fw_overlay_mem +
+						       storm_id;
 
 		/* Free Storm's physical memory */
 		if (storm_mem_desc->virt_addr)
-			OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
-					       storm_mem_desc->virt_addr,
-					       storm_mem_desc->phys_addr,
-					       storm_mem_desc->size);
+			OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, storm_mem_desc->virt_addr,
+					       storm_mem_desc->phys_addr, storm_mem_desc->size);
 	}
 
 	/* Free allocated virtual memory */
-	OSAL_FREE(p_hwfn->p_dev, fw_overlay_mem);
+	OSAL_FREE(p_hwfn->p_dev, *fw_overlay_mem);
+	*fw_overlay_mem = OSAL_NULL;
 }
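
The signature change here is the notable part: ecore_fw_overlay_mem_free() now takes the
caller's handle by reference so it can clear it after releasing the memory, turning a
repeated free of the same handle into a no-op. The same idiom in standalone C, with
standard allocators standing in for the OSAL wrappers:

#include <stdlib.h>

struct phys_mem_desc {
	void *virt_addr;
	size_t size;
};

/* Free the descriptor array and null the caller's pointer, so a second
 * call with the same handle returns early instead of double-freeing.
 */
static void overlay_mem_free(struct phys_mem_desc **mem)
{
	if (!mem || !*mem)
		return;

	free(*mem);
	*mem = NULL;
}

int main(void)
{
	struct phys_mem_desc *mem = calloc(6, sizeof(*mem));

	overlay_mem_free(&mem);	/* frees and clears mem */
	overlay_mem_free(&mem);	/* safe: *mem is already NULL */
	return 0;
}
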
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index a393d088f..7d71ef951 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -1,21 +1,35 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef _INIT_FW_FUNCS_H
 #define _INIT_FW_FUNCS_H
 #include "ecore_hsi_common.h"
 #include "ecore_hsi_eth.h"
+#include "ecore_hsi_func_common.h"
+#define NUM_STORMS 6
 
-/* Returns the VOQ based on port and TC */
-#define VOQ(port, tc, max_phys_tcs_per_port) \
-	((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + (port) : \
-	 (port) * (max_phys_tcs_per_port) + (tc))
+/* Forward declarations */
 
 struct init_qm_pq_params;
 
+/**
+ * @brief ecore_get_ext_voq - Return the external VOQ ID
+ *
+ * @param p_hwfn -			 HW device data
+ * @param port_id -		 port ID
+ * @param tc -			 traffic class ID
+ * @param max_phys_tcs_per_port - max number of physical TCs per port in HW
+ *
+ * @return The external VOQ ID.
+ */
+u8 ecore_get_ext_voq(struct ecore_hwfn *p_hwfn,
+		     u8 port_id,
+		     u8 tc,
+		     u8 max_phys_tcs_per_port);
+
 /**
  * @brief ecore_qm_pf_mem_size - Prepare QM ILT sizes
  *
@@ -33,22 +47,22 @@ struct init_qm_pq_params;
  */
 u32 ecore_qm_pf_mem_size(struct ecore_hwfn *p_hwfn,
 			 u32 num_pf_cids,
-						 u32 num_vf_cids,
-						 u32 num_tids,
-						 u16 num_pf_pqs,
-						 u16 num_vf_pqs);
+			 u32 num_vf_cids,
+			 u32 num_tids,
+			 u16 num_pf_pqs,
+			 u16 num_vf_pqs);
 
 /**
- * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine
- *                                  phase
+ * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for the
+ * engine phase.
  *
- * @param p_hwfn
- * @param max_ports_per_engine	- max number of ports per engine in HW
+ * @param p_hwfn -			  HW device data
+ * @param max_ports_per_engine -  max number of ports per engine in HW
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
- * @param pf_rl_en		- enable per-PF rate limiters
- * @param pf_wfq_en		- enable per-PF WFQ
+ * @param pf_rl_en -		  enable per-PF rate limiters
+ * @param pf_wfq_en -		  enable per-PF WFQ
  * @param global_rl_en -	  enable global rate limiters
- * @param vport_wfq_en		- enable per-VPORT WFQ
+ * @param vport_wfq_en -	  enable per-VPORT WFQ
  * @param port_params -		  array with parameters for each port.
  * @param global_rl_params -	  array with parameters for each global RL.
  *				  If OSAL_NULL, global RLs are not configured.
@@ -62,36 +76,39 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			    bool pf_wfq_en,
 			    bool global_rl_en,
 			    bool vport_wfq_en,
-			  struct init_qm_port_params port_params[MAX_NUM_PORTS],
-			  struct init_qm_global_rl_params
-				 global_rl_params[COMMON_MAX_QM_GLOBAL_RLS]);
+			    struct init_qm_port_params port_params[MAX_NUM_PORTS],
+			    struct init_qm_global_rl_params
+					global_rl_params[COMMON_MAX_QM_GLOBAL_RLS]);
 
 /**
- * @brief ecore_qm_pf_rt_init  Prepare QM runtime init values for the PF phase
+ * @brief ecore_qm_pf_rt_init - Prepare QM runtime init values for the PF phase
  *
- * @param p_hwfn
- * @param p_ptt			- ptt window used for writing the registers
- * @param pf_id			- PF ID
+ * @param p_hwfn -			  HW device data
+ * @param p_ptt -			  ptt window used for writing the registers
+ * @param pf_id -		  PF ID
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
  * @param is_pf_loading -	  indicates if the PF is currently loading,
  *				  i.e. it has no allocated QM resources.
- * @param num_pf_cids		- number of connections used by this PF
- * @param num_vf_cids		- number of connections used by VFs of this PF
- * @param num_tids		- number of tasks used by this PF
- * @param start_pq		- first Tx PQ ID associated with this PF
- * @param num_pf_pqs		- number of Tx PQs associated with this PF
- *                                (non-VF)
- * @param num_vf_pqs		- number of Tx PQs associated with a VF
- * @param start_vport		- first VPORT ID associated with this PF
- * @param num_vports - number of VPORTs associated with this PF
- * @param pf_wfq - WFQ weight. if PF WFQ is globally disabled, the weight must
- *		   be 0. otherwise, the weight must be non-zero.
- * @param pf_rl - rate limit in Mb/sec units. a value of 0 means don't
- *                configure. ignored if PF RL is globally disabled.
- * @param pq_params - array of size (num_pf_pqs+num_vf_pqs) with parameters for
- *                    each Tx PQ associated with the specified PF.
- * @param vport_params - array of size num_vports with parameters for each
- *                       associated VPORT.
+ * @param num_pf_cids -		  number of connections used by this PF
+ * @param num_vf_cids -		  number of connections used by VFs of this PF
+ * @param num_tids -		  number of tasks used by this PF
+ * @param start_pq -		  first Tx PQ ID associated with this PF
+ * @param num_pf_pqs -		  number of Tx PQs associated with this PF
+ *				  (non-VF)
+ * @param num_vf_pqs -		  number of Tx PQs associated with a VF
+ * @param start_vport -		  first VPORT ID associated with this PF
+ * @param num_vports -		  number of VPORTs associated with this PF
+ * @param pf_wfq -		  WFQ weight. If PF WFQ is globally disabled,
+ *				  the weight must be 0. Otherwise, the weight
+ *				  must be non-zero.
+ * @param pf_rl -		  rate limit in Mb/sec units. A value of 0
+ *				  means don't configure. Ignored if PF RL is
+ *				  globally disabled.
+ * @param pq_params -		  array of size (num_pf_pqs + num_vf_pqs) with
+ *				  parameters for each Tx PQ associated with the
+ *				  specified PF.
+ * @param vport_params -	  array of size num_vports with parameters for
+ *				  each associated VPORT.
  *
  * @return 0 on success, -1 on error.
  */
@@ -114,50 +131,50 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			struct init_qm_vport_params *vport_params);
 
 /**
- * @brief ecore_init_pf_wfq  Initializes the WFQ weight of the specified PF
+ * @brief ecore_init_pf_wfq - Initializes the WFQ weight of the specified PF
  *
- * @param p_hwfn
- * @param p_ptt		- ptt window used for writing the registers
- * @param pf_id		- PF ID
- * @param pf_wfq	- WFQ weight. Must be non-zero.
+ * @param p_hwfn -	   HW device data
+ * @param p_ptt -	   ptt window used for writing the registers
+ * @param pf_id -	   PF ID
+ * @param pf_wfq -	   WFQ weight. Must be non-zero.
  *
  * @return 0 on success, -1 on error.
  */
 int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
-					  struct ecore_ptt *p_ptt,
-					  u8 pf_id,
-					  u16 pf_wfq);
+		      struct ecore_ptt *p_ptt,
+		      u8 pf_id,
+		      u16 pf_wfq);
 
 /**
  * @brief ecore_init_pf_rl - Initializes the rate limit of the specified PF
  *
  * @param p_hwfn
- * @param p_ptt - ptt window used for writing the registers
+ * @param p_ptt -   ptt window used for writing the registers
  * @param pf_id	- PF ID
  * @param pf_rl	- rate limit in Mb/sec units
  *
  * @return 0 on success, -1 on error.
  */
 int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
-					 struct ecore_ptt *p_ptt,
-					 u8 pf_id,
-					 u32 pf_rl);
+		     struct ecore_ptt *p_ptt,
+		     u8 pf_id,
+		     u32 pf_rl);
 
 /**
- * @brief ecore_init_vport_wfq  Initializes the WFQ weight of specified VPORT
+ * @brief ecore_init_vport_wfq - Initializes the WFQ weight of the specified VPORT
  *
- * @param p_hwfn
- * @param p_ptt			- ptt window used for writing the registers
- * @param first_tx_pq_id- An array containing the first Tx PQ ID associated
- *                        with the VPORT for each TC. This array is filled by
- *                        ecore_qm_pf_rt_init
+ * @param p_hwfn -		   HW device data
+ * @param p_ptt -		   ptt window used for writing the registers
+ * @param first_tx_pq_id - An array containing the first Tx PQ ID associated
+ *                         with the VPORT for each TC. This array is filled by
+ *                         ecore_qm_pf_rt_init
  * @param wfq -		   WFQ weight. Must be non-zero.
  *
  * @return 0 on success, -1 on error.
  */
 int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
-						 struct ecore_ptt *p_ptt,
-						 u16 first_tx_pq_id[NUM_OF_TCS],
+			 struct ecore_ptt *p_ptt,
+			 u16 first_tx_pq_id[NUM_OF_TCS],
 			 u16 wfq);
 
 /**
@@ -177,142 +194,124 @@ int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
 			 u32 rate_limit);
 
 /**
- * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
- * VPORT.
- *
- * @param p_hwfn -	       HW device data
- * @param p_ptt -	       ptt window used for writing the registers
- * @param vport_id -   VPORT ID
- * @param vport_rl -   rate limit in Mb/sec units
- * @param link_speed - link speed in Mbps.
- *
- * @return 0 on success, -1 on error.
- */
-int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						u8 vport_id,
-						u32 vport_rl,
-						u32 link_speed);
-
-/**
- * @brief ecore_send_qm_stop_cmd  Sends a stop command to the QM
+ * @brief ecore_send_qm_stop_cmd - Sends a stop command to the QM
  *
- * @param p_hwfn
- * @param p_ptt	         - ptt window used for writing the registers
+ * @param p_hwfn -		   HW device data
+ * @param p_ptt -		   ptt window used for writing the registers
  * @param is_release_cmd - true for release, false for stop.
- * @param is_tx_pq       - true for Tx PQs, false for Other PQs.
- * @param start_pq       - first PQ ID to stop
- * @param num_pqs        - Number of PQs to stop, starting from start_pq.
+ * @param is_tx_pq -	   true for Tx PQs, false for Other PQs.
+ * @param start_pq -	   first PQ ID to stop
+ * @param num_pqs -	   Number of PQs to stop, starting from start_pq.
  *
- * @return bool, true if successful, false if timeout occurred while waiting
- *  for QM command done.
+ * @return bool, true if successful, false if timeout occurred while waiting for
+ * QM command done.
  */
 bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
-							struct ecore_ptt *p_ptt,
-							bool is_release_cmd,
-							bool is_tx_pq,
-							u16 start_pq,
-							u16 num_pqs);
+			    struct ecore_ptt *p_ptt,
+			    bool is_release_cmd,
+			    bool is_tx_pq,
+			    u16 start_pq,
+			    u16 num_pqs);
+
 #ifndef UNUSED_HSI_FUNC
 
 /**
- * @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
+ * @brief ecore_init_nig_ets - Initializes the NIG ETS arbiter
  *
  * Based on weight/priority requirements per-TC.
  *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- the NIG ETS initialization requirements.
+ * @param p_hwfn -   HW device data
+ * @param p_ptt -   ptt window used for writing the registers.
+ * @param req -   the NIG ETS initialization requirements.
  * @param is_lb	- if set, the loopback port arbiter is initialized, otherwise
  *		  the physical port arbiter is initialized. The pure-LB TC
  *		  requirements are ignored when is_lb is cleared.
  */
 void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						struct init_ets_req *req,
-						bool is_lb);
+			struct ecore_ptt *p_ptt,
+			struct init_ets_req *req,
+			bool is_lb);
 
 /**
- * @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
+ * @brief ecore_init_nig_lb_rl - Initializes the NIG LB RLs
  *
  * Based on global and per-TC rate requirements
  *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- the NIG LB RLs initialization requirements.
+ * @param p_hwfn -	HW device data
+ * @param p_ptt - ptt window used for writing the registers.
+ * @param req -	the NIG LB RLs initialization requirements.
  */
 void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  struct init_nig_lb_rl_req *req);
+			  struct ecore_ptt *p_ptt,
+			  struct init_nig_lb_rl_req *req);
+
 #endif /* UNUSED_HSI_FUNC */
 
 /**
- * @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
+ * @brief ecore_init_nig_pri_tc_map - Initializes the NIG priority to TC map.
  *
  * Assumes valid arguments.
  *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- required mapping from prioirties to TCs.
+ * @param p_hwfn -	HW device data
+ * @param p_ptt - ptt window used for writing the registers.
+ * @param req - required mapping from priorities to TCs.
  */
 void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *p_ptt,
-					   struct init_nig_pri_tc_map_req *req);
+			       struct ecore_ptt *p_ptt,
+			       struct init_nig_pri_tc_map_req *req);
 
 #ifndef UNUSED_HSI_FUNC
+
 /**
- * @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
+ * @brief ecore_init_prs_ets - Initializes the PRS Rx ETS arbiter
  *
  * Based on weight/priority requirements per-TC.
  *
- * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- the PRS ETS initialization requirements.
+ * @param p_hwfn -	HW device data
+ * @param p_ptt - ptt window used for writing the registers.
+ * @param req -	the PRS ETS initialization requirements.
  */
 void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						struct init_ets_req *req);
-#endif /* UNUSED_HSI_FUNC */
+			struct ecore_ptt *p_ptt,
+			struct init_ets_req *req);
 
+#endif /* UNUSED_HSI_FUNC */
 #ifndef UNUSED_HSI_FUNC
+
 /**
- * @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC
+ * @brief ecore_init_brb_ram - Initializes BRB RAM sizes per TC.
  *
  * Based on weight/priority requirements per-TC.
  *
+ * @param p_hwfn -   HW device data
  * @param p_ptt	- ptt window used for writing the registers.
- * @param req	- the BRB RAM initialization requirements.
+ * @param req -   the BRB RAM initialization requirements.
  */
 void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						struct init_brb_ram_req *req);
-#endif /* UNUSED_HSI_FUNC */
-
-/**
- * @brief ecore_set_vxlan_no_l2_enable - enable or disable VXLAN no L2 parsing
- *
- * @param p_ptt             - ptt window used for writing the registers.
- * @param enable            - VXLAN no L2 enable flag.
- */
-void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt,
-				  bool enable);
+			struct ecore_ptt *p_ptt,
+			struct init_brb_ram_req *req);
 
+#endif /* UNUSED_HSI_FUNC */
 #ifndef UNUSED_HSI_FUNC
+
 /**
  * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to
- *                                           input ethType should Be called
- *                                           once per port.
+ * input ethType. Should be called once per port.
  *
- * @param p_hwfn -	    HW device data
+ * @param p_hwfn -     HW device data
  * @param ethType - etherType to configure
  */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 				      u32 ethType);
+
 #endif /* UNUSED_HSI_FUNC */
 
 /**
- * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
+ * @brief ecore_set_vxlan_dest_port - Initializes vxlan tunnel destination udp
  * port.
  *
  * @param p_hwfn -       HW device data
- * @param p_ptt     - ptt window used for writing the registers.
+ * @param p_ptt -       ptt window used for writing the registers.
  * @param dest_port - vxlan destination udp port.
  */
 void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
@@ -320,23 +319,23 @@ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 			       u16 dest_port);
 
 /**
- * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
+ * @brief ecore_set_vxlan_enable - Enable or disable VXLAN tunnel in HW
  *
  * @param p_hwfn -      HW device data
- * @param p_ptt		- ptt window used for writing the registers.
- * @param vxlan_enable	- vxlan enable flag.
+ * @param p_ptt -      ptt window used for writing the registers.
+ * @param vxlan_enable - vxlan enable flag.
  */
 void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt,
 			    bool vxlan_enable);
 
 /**
- * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
+ * @brief ecore_set_gre_enable - Enable or disable GRE tunnel in HW
  *
  * @param p_hwfn -        HW device data
- * @param p_ptt          - ptt window used for writing the registers.
- * @param eth_gre_enable - eth GRE enable enable flag.
- * @param ip_gre_enable  - IP GRE enable enable flag.
+ * @param p_ptt -        ptt window used for writing the registers.
+ * @param eth_gre_enable - eth GRE enable flag.
+ * @param ip_gre_enable -  IP GRE enable flag.
  */
 void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  struct ecore_ptt *p_ptt,
@@ -344,11 +343,11 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
 			  bool ip_gre_enable);
 
 /**
- * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination
- *                                     udp port
+ * @brief ecore_set_geneve_dest_port - Initializes geneve tunnel destination
+ * udp port.
  *
  * @param p_hwfn -       HW device data
- * @param p_ptt     - ptt window used for writing the registers.
+ * @param p_ptt -       ptt window used for writing the registers.
  * @param dest_port - geneve destination udp port.
  */
 void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
@@ -356,24 +355,36 @@ void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				u16 dest_port);
 
 /**
- * @brief ecore_set_geneve_enable - enable or disable GRE tunnel in HW
+ * @brief ecore_set_geneve_enable - Enable or disable GENEVE tunnel in HW
  *
  * @param p_hwfn -         HW device data
- * @param p_ptt             - ptt window used for writing the registers.
- * @param eth_geneve_enable - eth GENEVE enable enable flag.
- * @param ip_geneve_enable  - IP GENEVE enable enable flag.
+ * @param p_ptt -         ptt window used for writing the registers.
+ * @param eth_geneve_enable -   eth GENEVE enable flag.
+ * @param ip_geneve_enable -    IP GENEVE enable flag.
   */
 void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 			     struct ecore_ptt *p_ptt,
 			     bool eth_geneve_enable,
 			     bool ip_geneve_enable);
+
+/**
+ * @brief ecore_set_vxlan_no_l2_enable - Enable or disable VXLAN no L2 parsing
+ *
+ * @param p_hwfn -  HW device data
+ * @param p_ptt -   ptt window used for writing the registers.
+ * @param enable -  VXLAN no L2 enable flag.
+ */
+void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt,
+				  bool enable);
+
 #ifndef UNUSED_HSI_FUNC
 
 /**
-* @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header
-*
-* @param p_ptt          - ptt window used for writing the registers.
-*/
+ * @brief ecore_set_gft_event_id_cm_hdr - Configure GFT event id and cm header
+ *
+ * @param p_hwfn - HW device data
+ * @param p_ptt - ptt window used for writing the registers.
+ */
 void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
 
@@ -385,21 +396,21 @@ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
  * @param pf_id - pf on which to disable GFT.
  */
 void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						u16 pf_id);
+			struct ecore_ptt *p_ptt,
+			u16 pf_id);
 
 /**
  * @brief ecore_gft_config - Enable and configure HW for GFT
-*
+ *
  * @param p_hwfn -   HW device data
-* @param p_ptt	- ptt window used for writing the registers.
+ * @param p_ptt -   ptt window used for writing the registers.
  * @param pf_id - pf on which to enable GFT.
-* @param tcp	- set profile tcp packets.
-* @param udp	- set profile udp  packet.
-* @param ipv4	- set profile ipv4 packet.
-* @param ipv6	- set profile ipv6 packet.
+ * @param tcp -   set profile tcp packets.
+ * @param udp -   set profile udp packets.
+ * @param ipv4 -  set profile ipv4 packets.
+ * @param ipv6 -  set profile ipv6 packets.
 * @param profile_type - defines which packet fields to match. Use enum gft_profile_type.
-*/
+ */
 void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 	struct ecore_ptt *p_ptt,
 	u16 pf_id,
@@ -408,54 +419,60 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 	bool ipv4,
 	bool ipv6,
 	enum gft_profile_type profile_type);
+
 #endif /* UNUSED_HSI_FUNC */
 
 /**
-* @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
-*                                         used before first ETH queue started.
-*
+ * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
+ * used before first ETH queue started.
+ *
  * @param p_hwfn -      HW device data
-* @param p_ptt        -  ptt window used for writing the registers. Don't care
+ * @param p_ptt -      ptt window used for writing the registers. Don't care
  *           if runtime_init used.
-* @param mode         -  VF zone size mode. Use enum vf_zone_size_mode.
+ * @param mode -     VF zone size mode. Use enum vf_zone_size_mode.
  * @param runtime_init - Set 1 to init runtime registers in engine phase.
  *           Set 0 if VF zone size mode configured after engine
  *           phase.
-*/
-void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
-				    *p_ptt, u16 mode, bool runtime_init);
+ */
+void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
+				    struct ecore_ptt *p_ptt,
+				    u16 mode,
+				    bool runtime_init);
 
 /**
  * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
  * VF zone size mode.
-*
+ *
  * @param p_hwfn -         HW device data
-* @param stat_cnt_id         -  statistic counter id
-* @param vf_zone_size_mode   -  VF zone size mode. Use enum vf_zone_size_mode.
-*/
+ * @param stat_cnt_id -     statistic counter id
+ * @param vf_zone_size_mode -   VF zone size mode. Use enum vf_zone_size_mode.
+ */
 u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
-				       u16 stat_cnt_id, u16 vf_zone_size_mode);
+				       u16 stat_cnt_id,
+				       u16 vf_zone_size_mode);
 
 /**
  * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
  * size mode.
-*
+ *
  * @param p_hwfn -           HW device data
-* @param vf_id               -  vf id.
-* @param vf_queue_id         -  per VF rx queue id.
-* @param vf_zone_size_mode   -  vf zone size mode. Use enum vf_zone_size_mode.
-*/
-u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
-					 vf_queue_id, u16 vf_zone_size_mode);
+ * @param vf_id -         vf id.
+ * @param vf_queue_id -       per VF rx queue id.
+ * @param vf_zone_size_mode - vf zone size mode. Use enum vf_zone_size_mode.
+ */
+u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
+					 u8 vf_id,
+					 u8 vf_queue_id,
+					 u16 vf_zone_size_mode);
+
 /**
  * @brief ecore_enable_context_validation - Enable and configure context
- *                                          validation.
+ * validation.
  *
  * @param p_hwfn -   HW device data
- * @param p_ptt - ptt window used for writing the registers.
+ * @param p_ptt -   ptt window used for writing the registers.
  */
-void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
-				     struct ecore_ptt *p_ptt);
+void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+
 /**
  * @brief ecore_calc_session_ctx_validation - Calcualte validation byte for
  * session context.
@@ -480,7 +497,7 @@ void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
  * @param p_ctx_mem -	pointer to context memory.
  * @param ctx_size -	context size.
  * @param ctx_type -	context type.
- * @param tid -		    context tid.
+ * @param tid -		context tid.
  */
 void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn,
 				    void *p_ctx_mem,
@@ -492,10 +509,10 @@ void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn,
  * @brief ecore_memset_session_ctx - Memset session context to 0 while
  * preserving validation bytes.
  *
- * @param p_hwfn -		  HW device data
- * @param p_ctx_mem - pointer to context memory.
- * @param ctx_size -  size to initialzie.
- * @param ctx_type -  context type.
+ * @param p_hwfn -		HW device data
+ * @param p_ctx_mem -	pointer to context memory.
+ * @param ctx_size -	size to initialize.
+ * @param ctx_type -	context type.
  */
 void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn,
 			      void *p_ctx_mem,
@@ -507,9 +524,9 @@ void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn,
  * validation bytes.
  *
  * @param p_hwfn -		HW device data
- * @param p_ctx_mem - pointer to context memory.
- * @param ctx_size -  size to initialzie.
- * @param ctx_type -  context type.
+ * @param p_ctx_mem -	pointer to context memory.
+ * @param ctx_size -	size to initialize.
+ * @param ctx_type -	context type.
  */
 void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn,
 			   void *p_ctx_mem,
@@ -517,57 +534,41 @@ void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn,
 			   u8 ctx_type);
 
 
-/*******************************************************************************
- * File name : rdma_init.h
- * Author    : Michael Shteinbok
- *******************************************************************************
- *******************************************************************************
- * Description:
- * RDMA HSI functions header
- *
- *******************************************************************************
- * Notes: This is the input to the auto generated file drv_init_fw_funcs.h
- *
- *******************************************************************************
- */
-#define NUM_STORMS 6
-
-
-
-/**
+/**
  * @brief ecore_set_rdma_error_level - Sets the RDMA assert level.
  *                                     If the severity of the error will be
- *				       above the level, the FW will assert.
+ *                                     above the level, the FW will assert.
  * @param p_hwfn -		   HW device data
  * @param p_ptt -		   ptt window used for writing the registers
  * @param assert_level - An array of assert levels for each storm.
  */
 void ecore_set_rdma_error_level(struct ecore_hwfn *p_hwfn,
-				struct ecore_ptt *p_ptt,
-				u8 assert_level[NUM_STORMS]);
+								struct ecore_ptt *p_ptt,
+								u8 assert_level[NUM_STORMS]);
 
 /**
- * @brief ecore_fw_overlay_mem_alloc - Allocates and fills the FW overlay memory
+ * @brief ecore_fw_overlay_mem_alloc - Allocates and fills the FW overlay memory.
  *
- * @param p_hwfn -                     HW device data
+ * @param p_hwfn -		      HW device data
  * @param fw_overlay_in_buf -  the input FW overlay buffer.
- * @param buf_size -           the size of the input FW overlay buffer in bytes.
- *                             must be aligned to dwords.
+ * @param buf_size_in_bytes - the size of the input FW overlay buffer in
+ *			      bytes. Must be aligned to dwords.
  * @param fw_overlay_out_mem - OUT: a pointer to the allocated overlays memory.
  *
- * @return a pointer to the allocated overlays memory, or OSAL_NULL in case of
- *  failures.
+ * @return a pointer to the allocated overlays memory, or OSAL_NULL in case of failure.
  */
 struct phys_mem_desc *ecore_fw_overlay_mem_alloc(struct ecore_hwfn *p_hwfn,
-					 const u32 *const fw_overlay_in_buf,
-					 u32 buf_size_in_bytes);
+						 const u32 *const fw_overlay_in_buf,
+						 u32 buf_size_in_bytes);
 
 /**
  * @brief ecore_fw_overlay_init_ram - Initializes the FW overlay RAM.
  *
- * @param p_hwfn -                    HW device data.
- * @param p_ptt -                     ptt window used for writing the registers.
- * @param fw_overlay_mem -       the allocated FW overlay memory.
+ * @param p_hwfn -			HW device data.
+ * @param p_ptt -			ptt window used for writing the registers.
+ * @param fw_overlay_mem -	the allocated FW overlay memory.
  */
 void ecore_fw_overlay_init_ram(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
@@ -576,10 +577,10 @@ void ecore_fw_overlay_init_ram(struct ecore_hwfn *p_hwfn,
 /**
  * @brief ecore_fw_overlay_mem_free - Frees the FW overlay memory.
  *
- * @param p_hwfn -                    HW device data.
- * @param fw_overlay_mem -       the allocated FW overlay memory to free.
+ * @param p_hwfn -				HW device data.
+ * @param fw_overlay_mem -	pointer to the allocated FW overlay memory to free.
  */
 void ecore_fw_overlay_mem_free(struct ecore_hwfn *p_hwfn,
-			       struct phys_mem_desc *fw_overlay_mem);
+			       struct phys_mem_desc **fw_overlay_mem);
 
 #endif
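
One functional note on this header: the old VOQ() macro is replaced by the
ecore_get_ext_voq() declaration at the top. The arithmetic the removed macro performed is
simple enough to show standalone; the constants below are placeholders for the real HSI
values:

#include <stdint.h>
#include <stdio.h>

/* Placeholder values; the real constants live in the HSI headers. */
#define PURE_LB_TC		8
#define NUM_OF_PHYS_TCS		8
#define MAX_NUM_PORTS_BB	2

/* Same arithmetic as the removed VOQ() macro: the pure-LB TC maps to a
 * dedicated per-port VOQ past all physical ones, while physical TCs map
 * linearly per port.
 */
static uint8_t ext_voq(uint8_t port, uint8_t tc, uint8_t max_phys_tcs_per_port)
{
	if (tc == PURE_LB_TC)
		return NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + port;

	return port * max_phys_tcs_per_port + tc;
}

int main(void)
{
	printf("port 1, tc 2 -> VOQ %u\n", ext_voq(1, 2, 4));		/* 6 */
	printf("port 1, LB -> VOQ %u\n", ext_voq(1, PURE_LB_TC, 4));	/* 17 */
	return 0;
}
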
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index 42885a401..a01f90dff 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 /* include the precompiled configuration values - only once */
 #include "bcm_osal.h"
 #include "ecore_hsi_common.h"
@@ -13,6 +13,14 @@
 #include "ecore_rt_defs.h"
 #include "ecore_init_fw_funcs.h"
 
+#ifndef CONFIG_ECORE_BINARY_FW
+#ifdef CONFIG_ECORE_ZIPPED_FW
+#include "ecore_init_values_zipped.h"
+#else
+#include "ecore_init_values.h"
+#endif
+#endif
+
 #include "ecore_iro_values.h"
 #include "ecore_sriov.h"
 #include "reg_addr.h"
@@ -23,19 +31,13 @@
 
 void ecore_init_iro_array(struct ecore_dev *p_dev)
 {
-	p_dev->iro_arr = iro_arr + E4_IRO_ARR_OFFSET;
-}
-
-/* Runtime configuration helpers */
-void ecore_init_clear_rt_data(struct ecore_hwfn *p_hwfn)
-{
-	int i;
+	u32 offset = ECORE_IS_E5(p_dev) ? E5_IRO_ARR_OFFSET : E4_IRO_ARR_OFFSET;
 
-	for (i = 0; i < RUNTIME_ARRAY_SIZE; i++)
-		p_hwfn->rt_data.b_valid[i] = false;
+	p_dev->iro_arr = iro_arr + offset;
 }
 
-void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn, u32 rt_offset, u32 val)
+void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn,
+			     u32 rt_offset, u32 val)
 {
 	if (rt_offset >= RUNTIME_ARRAY_SIZE) {
 		DP_ERR(p_hwfn,
@@ -49,7 +51,8 @@ void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn, u32 rt_offset, u32 val)
 }
 
 void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
-			     u32 rt_offset, u32 *p_val, osal_size_t size)
+			     u32 rt_offset, u32 *p_val,
+			     osal_size_t size)
 {
 	osal_size_t i;
 
@@ -64,6 +67,7 @@ void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < size / sizeof(u32); i++) {
 		p_hwfn->rt_data.init_val[rt_offset + i] = p_val[i];
 		p_hwfn->rt_data.b_valid[rt_offset + i] = true;
+
 	}
 }
 
@@ -71,11 +75,12 @@ static enum _ecore_status_t ecore_init_rt(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u32 addr,
 					  u16 rt_offset,
-					  u16 size, bool b_must_dmae)
+					  u16 size,
+					  bool b_must_dmae)
 {
 	u32 *p_init_val = &p_hwfn->rt_data.init_val[rt_offset];
 	bool *p_valid = &p_hwfn->rt_data.b_valid[rt_offset];
-	u16 i, segment;
+	u16 i, j, segment;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	/* Since not all RT entries are initialized, go over the RT and
@@ -89,7 +94,9 @@ static enum _ecore_status_t ecore_init_rt(struct ecore_hwfn *p_hwfn,
 		 * simply write the data instead of using dmae.
 		 */
 		if (!b_must_dmae) {
-			ecore_wr(p_hwfn, p_ptt, addr + (i << 2), p_init_val[i]);
+			ecore_wr(p_hwfn, p_ptt, addr + (i << 2),
+				 p_init_val[i]);
+			p_valid[i] = false;
 			continue;
 		}
 
@@ -105,6 +112,10 @@ static enum _ecore_status_t ecore_init_rt(struct ecore_hwfn *p_hwfn,
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
+		/* invalidate after writing */
+		for (j = i; j < i + segment; j++)
+			p_valid[j] = false;
+
 		/* Jump over the entire segment, including invalid entry */
 		i += segment;
 	}
@@ -128,6 +139,7 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn)
 					sizeof(u32) * RUNTIME_ARRAY_SIZE);
 	if (!rt_data->init_val) {
 		OSAL_FREE(p_hwfn->p_dev, rt_data->b_valid);
+		rt_data->b_valid = OSAL_NULL;
 		return ECORE_NOMEM;
 	}
 
@@ -137,18 +149,18 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn)
 void ecore_init_free(struct ecore_hwfn *p_hwfn)
 {
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->rt_data.init_val);
+	p_hwfn->rt_data.init_val = OSAL_NULL;
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->rt_data.b_valid);
+	p_hwfn->rt_data.b_valid = OSAL_NULL;
 }
 
 static enum _ecore_status_t ecore_init_array_dmae(struct ecore_hwfn *p_hwfn,
-						  struct ecore_ptt *p_ptt,
-						  u32 addr,
-						  u32 dmae_data_offset,
-						  u32 size, const u32 *p_buf,
-						  bool b_must_dmae,
-						  bool b_can_dmae)
+				  struct ecore_ptt *p_ptt,
+				  u32 addr, u32 dmae_data_offset,
+				  u32 size, const u32 *p_buf,
+				  bool b_must_dmae, bool b_can_dmae)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
+	enum _ecore_status_t rc	= ECORE_SUCCESS;
 
 	/* Perform DMAE only for lengthy enough sections or for wide-bus */
 #ifndef ASIC_ONLY
@@ -185,7 +197,7 @@ static enum _ecore_status_t ecore_init_fill_dmae(struct ecore_hwfn *p_hwfn,
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	SET_FIELD(params.flags, DMAE_PARAMS_RW_REPL_SRC, 0x1);
 	return ecore_dmae_host2grc(p_hwfn, p_ptt,
-				   (osal_uintptr_t)&zero_buffer[0],
+				   (osal_uintptr_t)(&zero_buffer[0]),
 				   addr, fill_count, &params);
 }
 
@@ -199,6 +211,7 @@ static void ecore_init_fill(struct ecore_hwfn *p_hwfn,
 		ecore_wr(p_hwfn, p_ptt, addr, fill);
 }
 
+
 static enum _ecore_status_t ecore_init_cmd_array(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 struct init_write_op *cmd,
@@ -219,28 +232,29 @@ static enum _ecore_status_t ecore_init_cmd_array(struct ecore_hwfn *p_hwfn,
 
 	array_data = p_dev->fw_data->arr_data;
 
-	hdr = (union init_array_hdr *)
-		(uintptr_t)(array_data + dmae_array_offset);
+	hdr = (union init_array_hdr *)(array_data +
+				       dmae_array_offset);
 	data = OSAL_LE32_TO_CPU(hdr->raw.data);
 	switch (GET_FIELD(data, INIT_ARRAY_RAW_HDR_TYPE)) {
 	case INIT_ARR_ZIPPED:
 #ifdef CONFIG_ECORE_ZIPPED_FW
 		offset = dmae_array_offset + 1;
-		input_len = GET_FIELD(data, INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE);
+		input_len = GET_FIELD(data,
+				      INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE);
 		max_size = MAX_ZIPPED_SIZE * 4;
 		OSAL_MEMSET(p_hwfn->unzip_buf, 0, max_size);
 
 		output_len = OSAL_UNZIP_DATA(p_hwfn, input_len,
-				(u8 *)(uintptr_t)&array_data[offset],
-				max_size,
-				(u8 *)p_hwfn->unzip_buf);
+					     (u8 *)&array_data[offset],
+					     max_size, (u8 *)p_hwfn->unzip_buf);
 		if (output_len) {
 			rc = ecore_init_array_dmae(p_hwfn, p_ptt, addr, 0,
 						   output_len,
 						   p_hwfn->unzip_buf,
 						   b_must_dmae, b_can_dmae);
 		} else {
-			DP_NOTICE(p_hwfn, true, "Failed to unzip dmae data\n");
+			DP_NOTICE(p_hwfn, true,
+				  "Failed to unzip dmae data\n");
 			rc = ECORE_INVAL;
 		}
 #else
@@ -250,27 +264,27 @@ static enum _ecore_status_t ecore_init_cmd_array(struct ecore_hwfn *p_hwfn,
 #endif
 		break;
 	case INIT_ARR_PATTERN:
-		{
-			u32 repeats = GET_FIELD(data,
+	{
+		u32 repeats = GET_FIELD(data,
 					INIT_ARRAY_PATTERN_HDR_REPETITIONS);
-			u32 i;
-
-			size = GET_FIELD(data,
-					 INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE);
-
-			for (i = 0; i < repeats; i++, addr += size << 2) {
-				rc = ecore_init_array_dmae(p_hwfn, p_ptt, addr,
-							   dmae_array_offset +
-							   1, size, array_data,
-							   b_must_dmae,
-							   b_can_dmae);
-				if (rc)
-					break;
+		u32 i;
+
+		size = GET_FIELD(data,
+				 INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE);
+
+		for (i = 0; i < repeats; i++, addr += size << 2) {
+			rc = ecore_init_array_dmae(p_hwfn, p_ptt, addr,
+						   dmae_array_offset + 1,
+						   size, array_data,
+						   b_must_dmae, b_can_dmae);
+			if (rc)
+				break;
 		}
 		break;
 	}
 	case INIT_ARR_STANDARD:
-		size = GET_FIELD(data, INIT_ARRAY_STANDARD_HDR_SIZE);
+		size = GET_FIELD(data,
+				 INIT_ARRAY_STANDARD_HDR_SIZE);
 		rc = ecore_init_array_dmae(p_hwfn, p_ptt, addr,
 					   dmae_array_offset + 1,
 					   size, array_data,
@@ -290,13 +304,12 @@ static enum _ecore_status_t ecore_init_cmd_wr(struct ecore_hwfn *p_hwfn,
 	u32 data = OSAL_LE32_TO_CPU(p_cmd->data);
 	bool b_must_dmae = GET_FIELD(data, INIT_WRITE_OP_WIDE_BUS);
 	u32 addr = GET_FIELD(data, INIT_WRITE_OP_ADDRESS) << 2;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
+	enum _ecore_status_t rc	= ECORE_SUCCESS;
 
 	/* Sanitize */
 	if (b_must_dmae && !b_can_dmae) {
 		DP_NOTICE(p_hwfn, true,
-			  "Need to write to %08x for Wide-bus but DMAE isn't"
-			  " allowed\n",
+			  "Need to write to %08x for Wide-bus but DMAE isn't allowed\n",
 			  addr);
 		return ECORE_INVAL;
 	}
@@ -345,7 +358,8 @@ static OSAL_INLINE bool comp_or(u32 val, u32 expected_val)
 
 /* init_ops read/poll commands */
 static void ecore_init_cmd_rd(struct ecore_hwfn *p_hwfn,
-			      struct ecore_ptt *p_ptt, struct init_read_op *cmd)
+			      struct ecore_ptt *p_ptt,
+			      struct init_read_op *cmd)
 {
 	bool (*comp_check)(u32 val, u32 expected_val);
 	u32 delay = ECORE_INIT_POLL_PERIOD_US, val;
@@ -384,14 +398,16 @@ static void ecore_init_cmd_rd(struct ecore_hwfn *p_hwfn,
 
 	data = OSAL_LE32_TO_CPU(cmd->expected_val);
 	for (i = 0;
-	     i < ECORE_INIT_MAX_POLL_COUNT && !comp_check(val, data); i++) {
+	     i < ECORE_INIT_MAX_POLL_COUNT && !comp_check(val, data);
+	     i++) {
 		OSAL_UDELAY(delay);
 		val = ecore_rd(p_hwfn, p_ptt, addr);
 	}
 
 	if (i == ECORE_INIT_MAX_POLL_COUNT)
 		DP_ERR(p_hwfn, "Timeout when polling reg: 0x%08x [ Waiting-for: %08x Got: %08x (comparison %08x)]\n",
-		       addr, OSAL_LE32_TO_CPU(cmd->expected_val), val,
+		       addr,
+		       OSAL_LE32_TO_CPU(cmd->expected_val), val,
 		       OSAL_LE32_TO_CPU(cmd->op_data));
 }
 
@@ -469,7 +485,9 @@ static u32 ecore_init_cmd_phase(struct init_if_phase_op *p_cmd,
 
 enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt,
-				    int phase, int phase_id, int modes)
+				    int phase,
+				    int phase_id,
+				    int modes)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	bool b_dmae = (phase != PHASE_ENGINE);
@@ -531,6 +549,7 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 	}
 #ifdef CONFIG_ECORE_ZIPPED_FW
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->unzip_buf);
+	p_hwfn->unzip_buf = OSAL_NULL;
 #endif
 	return rc;
 }
@@ -553,19 +572,19 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
 		return ECORE_INVAL;
 	}
 
-	buf_hdr = (struct bin_buffer_hdr *)(uintptr_t)fw_data;
+	buf_hdr = (struct bin_buffer_hdr *)fw_data;
 
 	offset = buf_hdr[BIN_BUF_INIT_FW_VER_INFO].offset;
-	fw->fw_ver_info = (struct fw_ver_info *)((uintptr_t)(fw_data + offset));
+	fw->fw_ver_info = (struct fw_ver_info *)(fw_data + offset);
 
 	offset = buf_hdr[BIN_BUF_INIT_CMD].offset;
-	fw->init_ops = (union init_op *)((uintptr_t)(fw_data + offset));
+	fw->init_ops = (union init_op *)(fw_data + offset);
 
 	offset = buf_hdr[BIN_BUF_INIT_VAL].offset;
-	fw->arr_data = (u32 *)((uintptr_t)(fw_data + offset));
+	fw->arr_data = (u32 *)(fw_data + offset);
 
 	offset = buf_hdr[BIN_BUF_INIT_MODE_TREE].offset;
-	fw->modes_tree_buf = (u8 *)((uintptr_t)(fw_data + offset));
+	fw->modes_tree_buf = (u8 *)(fw_data + offset);
 	len = buf_hdr[BIN_BUF_INIT_CMD].length;
 	fw->init_ops_size = len / sizeof(struct init_raw_op);
 
@@ -585,8 +604,10 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
 	fw->arr_data = (u32 *)init_val;
 	fw->modes_tree_buf = (u8 *)modes_tree_buf;
 	fw->init_ops_size = init_ops_size;
-	fw->fw_overlays = fw_overlays;
-	fw->fw_overlays_len = sizeof(fw_overlays);
+	fw->fw_overlays_e4 = e4_overlays;
+	fw->fw_overlays_e4_len = sizeof(e4_overlays);
+	fw->fw_overlays_e5 = e5_overlays;
+	fw->fw_overlays_e5_len = sizeof(e5_overlays);
 #endif
 
 	return ECORE_SUCCESS;
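
For context, ecore_init_fw_data() above resolves each section of the binary FW image
through a table of bin_buffer_hdr entries at the front of the blob, one offset/length pair
per section. A minimal sketch of that scheme, assuming an illustrative two-section layout
(the struct and enum names below are stand-ins, not the real HSI definitions):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for struct bin_buffer_hdr */
struct buf_hdr {
	uint32_t offset;	/* byte offset of the section in the blob */
	uint32_t length;	/* section length in bytes */
};

enum { BUF_VER_INFO, BUF_INIT_CMD, NUM_BUFS };

static const uint8_t *get_section(const uint8_t *blob, int id, uint32_t *len)
{
	const struct buf_hdr *hdrs = (const struct buf_hdr *)blob;

	*len = hdrs[id].length;
	return blob + hdrs[id].offset;
}

int main(void)
{
	uint32_t blob_storage[16] = { 0 };	/* keep the blob aligned */
	uint8_t *blob = (uint8_t *)blob_storage;
	struct buf_hdr hdrs[NUM_BUFS] = {
		{ sizeof(hdrs), 4 },		/* version info */
		{ sizeof(hdrs) + 4, 8 },	/* init commands */
	};
	uint32_t len;
	const uint8_t *sec;

	memcpy(blob, hdrs, sizeof(hdrs));	/* header table leads the blob */
	sec = get_section(blob, BUF_INIT_CMD, &len);
	printf("init cmd section: %u bytes at offset %td\n", len, sec - blob);
	return 0;
}
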
diff --git a/drivers/net/qede/base/ecore_init_ops.h b/drivers/net/qede/base/ecore_init_ops.h
index 0cbf293b3..fb305e531 100644
--- a/drivers/net/qede/base/ecore_init_ops.h
+++ b/drivers/net/qede/base/ecore_init_ops.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_INIT_OPS__
 #define __ECORE_INIT_OPS__
 
@@ -29,7 +29,7 @@ void ecore_init_iro_array(struct ecore_dev *p_dev);
  * @return _ecore_status_t
  */
 enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
-				    struct ecore_ptt *p_ptt,
+				    struct ecore_ptt  *p_ptt,
 				    int               phase,
 				    int               phase_id,
 				    int               modes);
@@ -52,15 +52,6 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn);
  */
 void ecore_init_free(struct ecore_hwfn *p_hwfn);
 
-
-/**
- * @brief ecore_init_clear_rt_data - Clears the runtime init array.
- *
- *
- * @param p_hwfn
- */
-void ecore_init_clear_rt_data(struct ecore_hwfn *p_hwfn);
-
 /**
  * @brief ecore_init_store_rt_reg - Store a configuration value in the RT array.
  *
@@ -70,8 +61,8 @@ void ecore_init_clear_rt_data(struct ecore_hwfn *p_hwfn);
  * @param val
  */
 void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn,
-			     u32               rt_offset,
-			     u32               val);
+			     u32		rt_offset,
+			     u32		val);
 
 #define STORE_RT_REG(hwfn, offset, val)				\
 	ecore_init_store_rt_reg(hwfn, offset, val)
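
With this patch the runtime array also becomes self-cleaning: the flush pass in
ecore_init_rt() invalidates each slot after writing it, which is why
ecore_init_clear_rt_data() could be removed above. A standalone sketch of the
stage-then-flush pattern, with a plain array standing in for the device registers:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RT_SIZE 8

static uint32_t init_val[RT_SIZE];	/* staged values */
static bool b_valid[RT_SIZE];		/* slots pending a write */
static uint32_t device_regs[RT_SIZE];	/* stand-in for HW registers */

static void store_rt_reg(uint32_t off, uint32_t val)
{
	init_val[off] = val;
	b_valid[off] = true;
}

/* Flush valid slots to the "device", clearing each valid flag after the
 * write so the staging array is clean for the next init cycle.
 */
static void flush_rt(void)
{
	for (uint32_t i = 0; i < RT_SIZE; i++) {
		if (!b_valid[i])
			continue;

		device_regs[i] = init_val[i];
		b_valid[i] = false;
	}
}

int main(void)
{
	store_rt_reg(2, 0xdead);
	store_rt_reg(3, 0xbeef);
	flush_rt();
	printf("reg[3] = 0x%x, valid = %d\n", device_regs[3], b_valid[3]);
	return 0;
}
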
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e0d61e375..e56d5dba7 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -1,14 +1,13 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
-#include <rte_string_fns.h>
-
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "ecore_spq.h"
+#include "reg_addr.h"
 #include "ecore_gtt_reg_addr.h"
 #include "ecore_init_ops.h"
 #include "ecore_rt_defs.h"
@@ -20,10 +19,32 @@
 #include "ecore_hw_defs.h"
 #include "ecore_hsi_common.h"
 #include "ecore_mcp.h"
+#include "../qede_debug.h" /* @DPDK */
+
+#ifdef DIAG
+/* This is nasty, but diag is using the drv_dbg_fw_funcs.c [non-ecore flavor],
+ * and so the functions are lacking ecore prefix.
+ * If there would be other clients needing this [or if the content that isn't
+ * really optional there would increase], we'll need to re-think this.
+ */
+enum dbg_status dbg_read_attn(struct ecore_hwfn *dev,
+							  struct ecore_ptt *ptt,
+							  enum block_id block,
+							  enum dbg_attn_type attn_type,
+							  bool clear_status,
+							  struct dbg_attn_block_result *results);
+
+const char *dbg_get_status_str(enum dbg_status status);
+
+#define ecore_dbg_read_attn(hwfn, ptt, id, type, clear, results) \
+	dbg_read_attn(hwfn, ptt, id, type, clear, results)
+#define ecore_dbg_get_status_str(status) \
+	dbg_get_status_str(status)
+#endif
 
 struct ecore_pi_info {
 	ecore_int_comp_cb_t comp_cb;
-	void *cookie;		/* Will be sent to the compl cb function */
+	void *cookie; /* Will be sent to the completion callback function */
 };
 
 struct ecore_sb_sp_info {
@@ -47,17 +68,16 @@ struct aeu_invert_reg_bit {
 
 #define ATTENTION_PARITY		(1 << 0)
 
-#define ATTENTION_LENGTH_MASK		(0x00000ff0)
+#define ATTENTION_LENGTH_MASK		(0xff)
 #define ATTENTION_LENGTH_SHIFT		(4)
-#define ATTENTION_LENGTH(flags)		(((flags) & ATTENTION_LENGTH_MASK) >> \
-					 ATTENTION_LENGTH_SHIFT)
+#define ATTENTION_LENGTH(flags)		(GET_FIELD((flags), ATTENTION_LENGTH))
 #define ATTENTION_SINGLE		(1 << ATTENTION_LENGTH_SHIFT)
 #define ATTENTION_PAR			(ATTENTION_SINGLE | ATTENTION_PARITY)
 #define ATTENTION_PAR_INT		((2 << ATTENTION_LENGTH_SHIFT) | \
 					 ATTENTION_PARITY)
 
 /* Multiple bits start with this offset */
-#define ATTENTION_OFFSET_MASK		(0x000ff000)
+#define ATTENTION_OFFSET_MASK		(0xff)
 #define ATTENTION_OFFSET_SHIFT		(12)
 
 #define ATTENTION_BB_MASK		(0xf)
@@ -78,79 +98,83 @@ struct aeu_invert_reg {
 	struct aeu_invert_reg_bit bits[32];
 };
 
-#define MAX_ATTN_GRPS		(8)
-#define NUM_ATTN_REGS		(9)
+#define MAX_ATTN_GRPS		8
+#define NUM_ATTN_REGS_E4	9
+#define NUM_ATTN_REGS_E5	10
+#define MAX_NUM_ATTN_REGS	NUM_ATTN_REGS_E5
+#define NUM_ATTN_REGS(p_hwfn) \
+	(ECORE_IS_E4((p_hwfn)->p_dev) ? NUM_ATTN_REGS_E4 : NUM_ATTN_REGS_E5)
 
 static enum _ecore_status_t ecore_mcp_attn_cb(struct ecore_hwfn *p_hwfn)
 {
 	u32 tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, MCP_REG_CPU_STATE);
 
-	DP_INFO(p_hwfn->p_dev, "MCP_REG_CPU_STATE: %08x - Masking...\n", tmp);
-	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, MCP_REG_CPU_EVENT_MASK, 0xffffffff);
+	DP_INFO(p_hwfn->p_dev, "MCP_REG_CPU_STATE: %08x - Masking...\n",
+		tmp);
+	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, MCP_REG_CPU_EVENT_MASK,
+		 0xffffffff);
 
 	return ECORE_SUCCESS;
 }
 
-#define ECORE_PSWHST_ATTENTION_DISABLED_PF_MASK		(0x3c000)
-#define ECORE_PSWHST_ATTENTION_DISABLED_PF_SHIFT	(14)
-#define ECORE_PSWHST_ATTENTION_DISABLED_VF_MASK		(0x03fc0)
-#define ECORE_PSWHST_ATTENTION_DISABLED_VF_SHIFT	(6)
-#define ECORE_PSWHST_ATTENTION_DISABLED_VALID_MASK	(0x00020)
-#define ECORE_PSWHST_ATTENTION_DISABLED_VALID_SHIFT	(5)
-#define ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_MASK	(0x0001e)
-#define ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_SHIFT	(1)
-#define ECORE_PSWHST_ATTENTION_DISABLED_WRITE_MASK	(0x1)
-#define ECORE_PSWHST_ATTNETION_DISABLED_WRITE_SHIFT	(0)
-#define ECORE_PSWHST_ATTENTION_VF_DISABLED		(0x1)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS		(0x1)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_MASK		(0x1)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_SHIFT	(0)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_MASK	(0x1e)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_SHIFT	(1)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_MASK	(0x20)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_SHIFT	(5)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_MASK	(0x3fc0)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_SHIFT	(6)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_MASK	(0x3c000)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_SHIFT	(14)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_MASK	(0x3fc0000)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_SHIFT	(18)
+/* Register PSWHST_REG_VF_DISABLED_ERROR_DATA */
+#define ECORE_PSWHST_ATTN_DISABLED_PF_MASK		(0xf)
+#define ECORE_PSWHST_ATTN_DISABLED_PF_SHIFT		(14)
+#define ECORE_PSWHST_ATTN_DISABLED_VF_MASK		(0xff)
+#define ECORE_PSWHST_ATTN_DISABLED_VF_SHIFT		(6)
+#define ECORE_PSWHST_ATTN_DISABLED_VALID_MASK		(0x1)
+#define ECORE_PSWHST_ATTN_DISABLED_VALID_SHIFT		(5)
+#define ECORE_PSWHST_ATTN_DISABLED_CLIENT_MASK		(0xf)
+#define ECORE_PSWHST_ATTN_DISABLED_CLIENT_SHIFT		(1)
+#define ECORE_PSWHST_ATTN_DISABLED_WRITE_MASK		(0x1)
+#define ECORE_PSWHST_ATTN_DISABLED_WRITE_SHIFT		(0)
+
+/* Register PSWHST_REG_VF_DISABLED_ERROR_VALID */
+#define ECORE_PSWHST_ATTN_VF_DISABLED_MASK		(0x1)
+#define ECORE_PSWHST_ATTN_VF_DISABLED_SHIFT		(0)
+
+/* Register PSWHST_REG_INCORRECT_ACCESS_VALID */
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_MASK		(0x1)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_SHIFT	(0)
+
+/* Register PSWHST_REG_INCORRECT_ACCESS_DATA */
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_WR_MASK		(0x1)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_WR_SHIFT		(0)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_CLIENT_MASK		(0xf)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_CLIENT_SHIFT		(1)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_VALID_MASK	(0x1)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_VALID_SHIFT	(5)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_ID_MASK		(0xff)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_ID_SHIFT		(6)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_PF_ID_MASK		(0xf)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_PF_ID_SHIFT		(14)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_BYTE_EN_MASK		(0xff)
+#define ECORE_PSWHST_ATTN_INCORRECT_ACCESS_BYTE_EN_SHIFT	(18)
+
 static enum _ecore_status_t ecore_pswhst_attn_cb(struct ecore_hwfn *p_hwfn)
 {
-	u32 tmp =
-	    ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
-		     PSWHST_REG_VF_DISABLED_ERROR_VALID);
+	u32 tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, PSWHST_REG_VF_DISABLED_ERROR_VALID);
 
 	/* Disabled VF access */
-	if (tmp & ECORE_PSWHST_ATTENTION_VF_DISABLED) {
+	if (GET_FIELD(tmp, ECORE_PSWHST_ATTN_VF_DISABLED)) {
 		u32 addr, data;
 
 		addr = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
 				PSWHST_REG_VF_DISABLED_ERROR_ADDRESS);
 		data = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
 				PSWHST_REG_VF_DISABLED_ERROR_DATA);
-		DP_INFO(p_hwfn->p_dev,
-			"PF[0x%02x] VF [0x%02x] [Valid 0x%02x] Client [0x%02x]"
-			" Write [0x%02x] Addr [0x%08x]\n",
-			(u8)((data & ECORE_PSWHST_ATTENTION_DISABLED_PF_MASK)
-			     >> ECORE_PSWHST_ATTENTION_DISABLED_PF_SHIFT),
-			(u8)((data & ECORE_PSWHST_ATTENTION_DISABLED_VF_MASK)
-			     >> ECORE_PSWHST_ATTENTION_DISABLED_VF_SHIFT),
-			(u8)((data &
-			      ECORE_PSWHST_ATTENTION_DISABLED_VALID_MASK) >>
-			      ECORE_PSWHST_ATTENTION_DISABLED_VALID_SHIFT),
-			(u8)((data &
-			      ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_MASK) >>
-			      ECORE_PSWHST_ATTENTION_DISABLED_CLIENT_SHIFT),
-			(u8)((data &
-			      ECORE_PSWHST_ATTENTION_DISABLED_WRITE_MASK) >>
-			      ECORE_PSWHST_ATTNETION_DISABLED_WRITE_SHIFT),
+		DP_INFO(p_hwfn->p_dev, "PF[0x%02x] VF [0x%02x] [Valid 0x%02x] Client [0x%02x] Write [0x%02x] Addr [0x%08x]\n",
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_PF)),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_VF)),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_VALID)),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_CLIENT)),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_DISABLED_WRITE)),
 			addr);
 	}
 
 	tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
 		       PSWHST_REG_INCORRECT_ACCESS_VALID);
-	if (tmp & ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS) {
+	if (GET_FIELD(tmp, ECORE_PSWHST_ATTN_INCORRECT_ACCESS)) {
 		u32 addr, data, length;
 
 		addr = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
@@ -160,29 +184,14 @@ static enum _ecore_status_t ecore_pswhst_attn_cb(struct ecore_hwfn *p_hwfn)
 		length = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
 				  PSWHST_REG_INCORRECT_ACCESS_LENGTH);
 
-		DP_INFO(p_hwfn->p_dev,
-			"Incorrect access to %08x of length %08x - PF [%02x]"
-			" VF [%04x] [valid %02x] client [%02x] write [%02x]"
-			" Byte-Enable [%04x] [%08x]\n",
+		DP_INFO(p_hwfn->p_dev, "Incorrect access to %08x of length %08x - PF [%02x] VF [%04x] [valid %02x] client [%02x] write [%02x] Byte-Enable [%04x] [%08x]\n",
 			addr, length,
-			(u8)((data &
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_MASK) >>
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_PF_ID_SHIFT),
-			(u8)((data &
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_MASK) >>
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_ID_SHIFT),
-			(u8)((data &
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_MASK) >>
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_VF_VALID_SHIFT),
-			(u8)((data &
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_MASK) >>
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_SHIFT),
-			(u8)((data &
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_MASK) >>
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_SHIFT),
-			(u8)((data &
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_MASK) >>
-		      ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_BYTE_EN_SHIFT),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_PF_ID)),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_ID)),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_VF_VALID)),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_CLIENT)),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_WR)),
+			(u8)(GET_FIELD(data, ECORE_PSWHST_ATTN_INCORRECT_ACCESS_BYTE_EN)),
 			data);
 	}
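
From here on the attention callbacks decode hardware registers through GET_FIELD(), which
token-pastes _MASK and _SHIFT onto the field name and expects the mask unshifted; that is
why masks such as 0x3c000 become 0xf above. A standalone sketch of the idiom (the macro is
reproduced here for illustration; the driver's own definition lives in the HSI headers):

#include <stdint.h>
#include <stdio.h>

/* Shift first, then mask: the _MASK macro holds only the field width */
#define GET_FIELD(value, name) \
	(((value) >> name##_SHIFT) & name##_MASK)

/* Example field: 4 bits at bit offset 14 of a status word */
#define DISABLED_PF_MASK	0xf
#define DISABLED_PF_SHIFT	14

int main(void)
{
	uint32_t data = 0x5u << 14;	/* PF id 5 encoded in the field */

	printf("pf = %u\n", GET_FIELD(data, DISABLED_PF));	/* prints 5 */
	return 0;
}
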
 
@@ -193,42 +202,41 @@ static enum _ecore_status_t ecore_pswhst_attn_cb(struct ecore_hwfn *p_hwfn)
 }
 
 /* Register GRC_REG_TIMEOUT_ATTN_ACCESS_VALID */
-#define ECORE_GRC_ATTENTION_VALID_BIT_MASK      (0x1)
-#define ECORE_GRC_ATTENTION_VALID_BIT_SHIFT     (0)
-
-#define ECORE_GRC_ATTENTION_ADDRESS_MASK	(0x7fffff << 0)
-#define ECORE_GRC_ATTENTION_RDWR_BIT		(1 << 23)
-#define ECORE_GRC_ATTENTION_MASTER_MASK		(0xf << 24)
+#define ECORE_GRC_ATTENTION_VALID_BIT_MASK	(0x1)
+#define ECORE_GRC_ATTENTION_VALID_BIT_SHIFT	(0)
+
+/* Register GRC_REG_TIMEOUT_ATTN_ACCESS_DATA_0 */
+#define ECORE_GRC_ATTENTION_ADDRESS_MASK	(0x7fffff)
+#define ECORE_GRC_ATTENTION_ADDRESS_SHIFT	(0)
+#define ECORE_GRC_ATTENTION_RDWR_BIT_MASK	(0x1)
+#define ECORE_GRC_ATTENTION_RDWR_BIT_SHIFT	(23)
+#define ECORE_GRC_ATTENTION_MASTER_MASK		(0xf)
 #define ECORE_GRC_ATTENTION_MASTER_SHIFT	(24)
+
+/* Register GRC_REG_TIMEOUT_ATTN_ACCESS_DATA_1 */
 #define ECORE_GRC_ATTENTION_PF_MASK		(0xf)
-#define ECORE_GRC_ATTENTION_VF_MASK		(0xff << 4)
+#define ECORE_GRC_ATTENTION_PF_SHIFT		(0)
+#define ECORE_GRC_ATTENTION_VF_MASK		(0xff)
 #define ECORE_GRC_ATTENTION_VF_SHIFT		(4)
-#define ECORE_GRC_ATTENTION_PRIV_MASK		(0x3 << 14)
+#define ECORE_GRC_ATTENTION_PRIV_MASK		(0x3)
 #define ECORE_GRC_ATTENTION_PRIV_SHIFT		(14)
+
+/* Constant value for ECORE_GRC_ATTENTION_PRIV field */
 #define ECORE_GRC_ATTENTION_PRIV_VF		(0)
+
 static const char *grc_timeout_attn_master_to_str(u8 master)
 {
 	switch (master) {
-	case 1:
-		return "PXP";
-	case 2:
-		return "MCP";
-	case 3:
-		return "MSDM";
-	case 4:
-		return "PSDM";
-	case 5:
-		return "YSDM";
-	case 6:
-		return "USDM";
-	case 7:
-		return "TSDM";
-	case 8:
-		return "XSDM";
-	case 9:
-		return "DBU";
-	case 10:
-		return "DMAE";
+	case 1: return "PXP";
+	case 2: return "MCP";
+	case 3: return "MSDM";
+	case 4: return "PSDM";
+	case 5: return "YSDM";
+	case 6: return "USDM";
+	case 7: return "TSDM";
+	case 8: return "XSDM";
+	case 9: return "DBU";
+	case 10: return "DMAE";
 	default:
 		return "Unknown";
 	}
@@ -255,21 +263,19 @@ static enum _ecore_status_t ecore_grc_attn_cb(struct ecore_hwfn *p_hwfn)
 	tmp2 = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
 			GRC_REG_TIMEOUT_ATTN_ACCESS_DATA_1);
 
+	/* ECORE_GRC_ATTENTION_ADDRESS is a dword address - shift to bytes */
 	DP_NOTICE(p_hwfn->p_dev, false,
 		  "GRC timeout [%08x:%08x] - %s Address [%08x] [Master %s] [PF: %02x %s %02x]\n",
 		  tmp2, tmp,
-		  (tmp & ECORE_GRC_ATTENTION_RDWR_BIT) ? "Write to"
+		  GET_FIELD(tmp, ECORE_GRC_ATTENTION_RDWR_BIT) ? "Write to"
 						       : "Read from",
-		  (tmp & ECORE_GRC_ATTENTION_ADDRESS_MASK) << 2,
+		  GET_FIELD(tmp, ECORE_GRC_ATTENTION_ADDRESS) << 2,
 		  grc_timeout_attn_master_to_str(
-			(tmp & ECORE_GRC_ATTENTION_MASTER_MASK) >>
-			 ECORE_GRC_ATTENTION_MASTER_SHIFT),
-		  (tmp2 & ECORE_GRC_ATTENTION_PF_MASK),
-		  (((tmp2 & ECORE_GRC_ATTENTION_PRIV_MASK) >>
-		  ECORE_GRC_ATTENTION_PRIV_SHIFT) ==
+			GET_FIELD(tmp, ECORE_GRC_ATTENTION_MASTER)),
+		  GET_FIELD(tmp2, ECORE_GRC_ATTENTION_PF),
+		  (GET_FIELD(tmp2, ECORE_GRC_ATTENTION_PRIV) ==
 		  ECORE_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Irrelevant:)",
-		  (tmp2 & ECORE_GRC_ATTENTION_VF_MASK) >>
-		  ECORE_GRC_ATTENTION_VF_SHIFT);
+		  GET_FIELD(tmp2, ECORE_GRC_ATTENTION_VF));
 
 	/* Clean the validity bit */
 	ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt,
@@ -278,19 +284,41 @@ static enum _ecore_status_t ecore_grc_attn_cb(struct ecore_hwfn *p_hwfn)
 	return rc;
 }
 
-#define ECORE_PGLUE_ATTENTION_VALID (1 << 29)
-#define ECORE_PGLUE_ATTENTION_RD_VALID (1 << 26)
-#define ECORE_PGLUE_ATTENTION_DETAILS_PFID_MASK (0xf << 20)
-#define ECORE_PGLUE_ATTENTION_DETAILS_PFID_SHIFT (20)
-#define ECORE_PGLUE_ATTENTION_DETAILS_VF_VALID (1 << 19)
-#define ECORE_PGLUE_ATTENTION_DETAILS_VFID_MASK (0xff << 24)
-#define ECORE_PGLUE_ATTENTION_DETAILS_VFID_SHIFT (24)
-#define ECORE_PGLUE_ATTENTION_DETAILS2_WAS_ERR (1 << 21)
-#define ECORE_PGLUE_ATTENTION_DETAILS2_BME	(1 << 22)
-#define ECORE_PGLUE_ATTENTION_DETAILS2_FID_EN (1 << 23)
-#define ECORE_PGLUE_ATTENTION_ICPL_VALID (1 << 23)
-#define ECORE_PGLUE_ATTENTION_ZLR_VALID (1 << 25)
-#define ECORE_PGLUE_ATTENTION_ILT_VALID (1 << 23)
+/* Register PGLUE_B_REG_TX_ERR_RD_DETAILS and
+ * Register PGLUE_B_REG_TX_ERR_WR_DETAILS
+ */
+#define ECORE_PGLUE_ATTN_DETAILS_VF_VALID_MASK		(0x1)
+#define ECORE_PGLUE_ATTN_DETAILS_VF_VALID_SHIFT		(19)
+#define ECORE_PGLUE_ATTN_DETAILS_PFID_MASK		(0xf)
+#define ECORE_PGLUE_ATTN_DETAILS_PFID_SHIFT		(20)
+#define ECORE_PGLUE_ATTN_DETAILS_VFID_MASK		(0xff)
+#define ECORE_PGLUE_ATTN_DETAILS_VFID_SHIFT		(24)
+
+/* Register PGLUE_B_REG_TX_ERR_RD_DETAILS2 and
+ * Register PGLUE_B_REG_TX_ERR_WR_DETAILS2
+ */
+#define ECORE_PGLUE_ATTN_DETAILS2_WAS_ERR_MASK		(0x1)
+#define ECORE_PGLUE_ATTN_DETAILS2_WAS_ERR_SHIFT		(21)
+#define ECORE_PGLUE_ATTN_DETAILS2_BME_MASK		(0x1)
+#define ECORE_PGLUE_ATTN_DETAILS2_BME_SHIFT		(22)
+#define ECORE_PGLUE_ATTN_DETAILS2_FID_EN_MASK		(0x1)
+#define ECORE_PGLUE_ATTN_DETAILS2_FID_EN_SHIFT		(23)
+#define ECORE_PGLUE_ATTN_DETAILS2_RD_VALID_MASK		(0x1)
+#define ECORE_PGLUE_ATTN_DETAILS2_RD_VALID_SHIFT	(26)
+#define ECORE_PGLUE_ATTN_DETAILS2_WR_VALID_MASK		(0x1)
+#define ECORE_PGLUE_ATTN_DETAILS2_WR_VALID_SHIFT	(29)
+
+/* Register PGLUE_B_REG_TX_ERR_WR_DETAILS_ICPL */
+#define ECORE_PGLUE_ATTN_ICPL_VALID_MASK		(0x1)
+#define ECORE_PGLUE_ATTN_ICPL_VALID_SHIFT		(23)
+
+/* Register PGLUE_B_REG_MASTER_ZLR_ERR_DETAILS */
+#define ECORE_PGLUE_ATTN_ZLR_VALID_MASK			(0x1)
+#define ECORE_PGLUE_ATTN_ZLR_VALID_SHIFT		(25)
+
+/* Register PGLUE_B_REG_VF_ILT_ERR_DETAILS2 */
+#define ECORE_PGLUE_ATTN_ILT_VALID_MASK			(0x1)
+#define ECORE_PGLUE_ATTN_ILT_VALID_SHIFT		(23)
 
 enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn,
 						   struct ecore_ptt *p_ptt,
@@ -300,7 +328,7 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn,
 	char str[512] = {0};
 
 	tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_WR_DETAILS2);
-	if (tmp & ECORE_PGLUE_ATTENTION_VALID) {
+	if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_WR_VALID)) {
 		u32 addr_lo, addr_hi, details;
 
 		addr_lo = ecore_rd(p_hwfn, p_ptt,
@@ -310,23 +338,19 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn,
 		details = ecore_rd(p_hwfn, p_ptt,
 				   PGLUE_B_REG_TX_ERR_WR_DETAILS);
 		OSAL_SNPRINTF(str, 512,
-			 "Illegal write by chip to [%08x:%08x] blocked. Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x] Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]\n",
-			  addr_hi, addr_lo, details,
-			  (u8)((details &
-				ECORE_PGLUE_ATTENTION_DETAILS_PFID_MASK) >>
-			       ECORE_PGLUE_ATTENTION_DETAILS_PFID_SHIFT),
-			  (u8)((details &
-				ECORE_PGLUE_ATTENTION_DETAILS_VFID_MASK) >>
-			       ECORE_PGLUE_ATTENTION_DETAILS_VFID_SHIFT),
-			  (u8)((details &
-			       ECORE_PGLUE_ATTENTION_DETAILS_VF_VALID) ? 1 : 0),
-			  tmp,
-			  (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_WAS_ERR) ?
-				1 : 0),
-			  (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_BME) ?
-				1 : 0),
-			  (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_FID_EN) ?
-				1 : 0));
+			      "Illegal write by chip to [%08x:%08x] blocked. Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x] Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]\n",
+			      addr_hi, addr_lo, details,
+			      (u8)(GET_FIELD(details,
+					     ECORE_PGLUE_ATTN_DETAILS_PFID)),
+			      (u8)(GET_FIELD(details,
+					     ECORE_PGLUE_ATTN_DETAILS_VFID)),
+			      (u8)(GET_FIELD(details,
+					     ECORE_PGLUE_ATTN_DETAILS_VF_VALID) ? 1 : 0),
+			      tmp,
+			      (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_WAS_ERR) ? 1 : 0),
+			      (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_BME) ? 1 : 0),
+			      (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_FID_EN) ? 1 : 0));
+
 		if (is_hw_init)
 			DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "%s", str);
 		else
@@ -334,7 +358,7 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn,
 	}
 
 	tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_RD_DETAILS2);
-	if (tmp & ECORE_PGLUE_ATTENTION_RD_VALID) {
+	if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_RD_VALID)) {
 		u32 addr_lo, addr_hi, details;
 
 		addr_lo = ecore_rd(p_hwfn, p_ptt,
@@ -347,29 +371,23 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn,
 		DP_NOTICE(p_hwfn, false,
 			  "Illegal read by chip from [%08x:%08x] blocked. Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x] Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]\n",
 			  addr_hi, addr_lo, details,
-			  (u8)((details &
-				ECORE_PGLUE_ATTENTION_DETAILS_PFID_MASK) >>
-			       ECORE_PGLUE_ATTENTION_DETAILS_PFID_SHIFT),
-			  (u8)((details &
-				ECORE_PGLUE_ATTENTION_DETAILS_VFID_MASK) >>
-			       ECORE_PGLUE_ATTENTION_DETAILS_VFID_SHIFT),
-			  (u8)((details &
-			       ECORE_PGLUE_ATTENTION_DETAILS_VF_VALID) ? 1 : 0),
+			  (u8)(GET_FIELD(details,
+					 ECORE_PGLUE_ATTN_DETAILS_PFID)),
+			  (u8)(GET_FIELD(details,
+					 ECORE_PGLUE_ATTN_DETAILS_VFID)),
+			  (u8)(GET_FIELD(details, ECORE_PGLUE_ATTN_DETAILS_VF_VALID) ? 1 : 0),
 			  tmp,
-			  (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_WAS_ERR) ?
-				1 : 0),
-			  (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_BME) ?
-				1 : 0),
-			  (u8)((tmp & ECORE_PGLUE_ATTENTION_DETAILS2_FID_EN) ?
-				1 : 0));
+			  (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_WAS_ERR) ? 1 : 0),
+			  (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_BME) ? 1 : 0),
+			  (u8)(GET_FIELD(tmp, ECORE_PGLUE_ATTN_DETAILS2_FID_EN) ? 1 : 0));
 	}
 
 	tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_WR_DETAILS_ICPL);
-	if (tmp & ECORE_PGLUE_ATTENTION_ICPL_VALID)
-		DP_NOTICE(p_hwfn, false, "ICPL erorr - %08x\n", tmp);
+	if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_ICPL_VALID))
+		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "ICPL error - %08x\n", tmp);
 
 	tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_ZLR_ERR_DETAILS);
-	if (tmp & ECORE_PGLUE_ATTENTION_ZLR_VALID) {
+	if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_ZLR_VALID)) {
 		u32 addr_hi, addr_lo;
 
 		addr_lo = ecore_rd(p_hwfn, p_ptt,
@@ -378,12 +396,12 @@ enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn,
 				   PGLUE_B_REG_MASTER_ZLR_ERR_ADD_63_32);
 
 		DP_NOTICE(p_hwfn, false,
-			  "ICPL erorr - %08x [Address %08x:%08x]\n",
+			  "ZLR error - %08x [Address %08x:%08x]\n",
 			  tmp, addr_hi, addr_lo);
 	}
 
 	tmp = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_VF_ILT_ERR_DETAILS2);
-	if (tmp & ECORE_PGLUE_ATTENTION_ILT_VALID) {
+	if (GET_FIELD(tmp, ECORE_PGLUE_ATTN_ILT_VALID)) {
 		u32 addr_hi, addr_lo, details;
 
 		addr_lo = ecore_rd(p_hwfn, p_ptt,
@@ -411,9 +429,12 @@ static enum _ecore_status_t ecore_pglueb_rbc_attn_cb(struct ecore_hwfn *p_hwfn)
 
 static enum _ecore_status_t ecore_fw_assertion(struct ecore_hwfn *p_hwfn)
 {
-	DP_NOTICE(p_hwfn, false, "FW assertion!\n");
+	u8 str[ECORE_HW_ERR_MAX_STR_SIZE];
 
-	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FW_ASSERT);
+	OSAL_SNPRINTF((char *)str, ECORE_HW_ERR_MAX_STR_SIZE, "FW assertion!\n");
+	DP_NOTICE(p_hwfn, false, "%s", str);
+	ecore_hw_err_notify(p_hwfn, p_hwfn->p_dpc_ptt, ECORE_HW_ERR_FW_ASSERT,
+			    str, OSAL_STRLEN((char *)str) + 1);
 
 	return ECORE_INVAL;
 }
@@ -432,91 +453,212 @@ ecore_general_attention_35(struct ecore_hwfn *p_hwfn)
 #define ECORE_DORQ_ATTENTION_SIZE_MASK		(0x7f)
 #define ECORE_DORQ_ATTENTION_SIZE_SHIFT		(16)
 
-#define ECORE_DB_REC_COUNT			1000
-#define ECORE_DB_REC_INTERVAL			100
-
-static enum _ecore_status_t ecore_db_rec_flush_queue(struct ecore_hwfn *p_hwfn,
-						     struct ecore_ptt *p_ptt)
+/* Wait for usage to zero or count to run out. This is necessary since EDPM
+ * doorbell transactions can take multiple 64b cycles, and as such can "split"
+ * over the PCI. Possibly, the doorbell drop can happen with half an EDPM in
+ * the queue and the other half dropped. Another EDPM doorbell to the same
+ * address (from the doorbell recovery mechanism or from the doorbelling
+ * entity) could have its first half dropped and its second half interpreted
+ * as a continuation of the first. To prevent such malformed doorbells from
+ * reaching the device, flush the queue before releasing the overflow sticky
+ * indication. For PFs/VFs which do not use EDPM, flushing is not necessary.
+ */
+enum _ecore_status_t
+ecore_db_rec_flush_queue(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			 u32 usage_cnt_reg, u32 *count)
 {
-	u32 count = ECORE_DB_REC_COUNT;
-	u32 usage = 1;
+	u32 usage = ecore_rd(p_hwfn, p_ptt, usage_cnt_reg);
 
-	/* wait for usage to zero or count to run out. This is necessary since
-	 * EDPM doorbell transactions can take multiple 64b cycles, and as such
-	 * can "split" over the pci. Possibly, the doorbell drop can happen with
-	 * half an EDPM in the queue and other half dropped. Another EDPM
-	 * doorbell to the same address (from doorbell recovery mechanism or
-	 * from the doorbelling entity) could have first half dropped and second
-	 * half interperted as continuation of the first. To prevent such
-	 * malformed doorbells from reaching the device, flush the queue before
-	 * releaseing the overflow sticky indication.
+	/* Flush any pending (e)dpm as they may never arrive.
+	 * This command will flush pending dpms for all customer PFs and VFs
+	 * of this DORQ block, and will force them to use a regular doorbell.
 	 */
-	while (count-- && usage) {
-		usage = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_USAGE_CNT);
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1);
+
+	while (*count && usage) {
+		(*count)--;
 		OSAL_UDELAY(ECORE_DB_REC_INTERVAL);
+		usage = ecore_rd(p_hwfn, p_ptt, usage_cnt_reg);
 	}
 
 	/* should have been depleted by now */
 	if (usage) {
 		DP_NOTICE(p_hwfn->p_dev, false,
-			  "DB recovery: doorbell usage failed to zero after %d usec. usage was %x\n",
-			  ECORE_DB_REC_INTERVAL * ECORE_DB_REC_COUNT, usage);
+			  "Flushing DORQ failed, usage was %u after %d usec\n",
+			  usage, ECORE_DB_REC_INTERVAL * ECORE_DB_REC_COUNT);
+
 		return ECORE_TIMEOUT;
 	}
 
 	return ECORE_SUCCESS;
 }
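
Note that the flush budget is now an in/out parameter, so a single budget
can be shared across several successive flushes (as the VF loop further
below does with flush_delay_count). The removed defines suggest the
defaults are 1000 polls of 100 usec - presumably relocated to a header,
since the new code still uses them - bounding the worst case at roughly
100 ms. A minimal caller sketch:

	u32 budget = ECORE_DB_REC_COUNT;
	enum _ecore_status_t rc;

	/* each call polls the usage counter and decrements the budget */
	rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt,
				      DORQ_REG_PF_USAGE_CNT, &budget);
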
 
-enum _ecore_status_t ecore_db_rec_handler(struct ecore_hwfn *p_hwfn,
-					  struct ecore_ptt *p_ptt)
+void ecore_db_rec_handler(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 {
-	u32 overflow;
+	u32 attn_ovfl, cur_ovfl, flush_count = ECORE_DB_REC_COUNT;
 	enum _ecore_status_t rc;
 
-	overflow = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY);
-	DP_NOTICE(p_hwfn, false, "PF Overflow sticky 0x%x\n", overflow);
-	if (!overflow) {
-		ecore_db_recovery_execute(p_hwfn, DB_REC_ONCE);
-		return ECORE_SUCCESS;
-	}
-
-	if (ecore_edpm_enabled(p_hwfn)) {
-		rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt);
-		if (rc != ECORE_SUCCESS)
-			return rc;
-	}
+	/* If PF overflow was caught by DORQ attn callback, PF must execute
+	 * doorbell recovery. Reading the bit is atomic because the dorq
+	 * attention handler is setting it in interrupt context.
+	 */
+	attn_ovfl = OSAL_TEST_AND_CLEAR_BIT(ECORE_OVERFLOW_BIT,
+					    &p_hwfn->db_recovery_info.overflow);
 
-	/* flush any pedning (e)dpm as they may never arrive */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1);
+	cur_ovfl = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY);
 
-	/* release overflow sticky indication (stop silently dropping
-	 * everything)
+	/* Check if sticky overflow indication is set, or it was caught set
+	 * during the DORQ attention callback.
 	 */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0);
+	if (cur_ovfl || attn_ovfl) {
+		DP_NOTICE(p_hwfn, false,
+			  "PF Overflow sticky: attn %u current %u\n",
+			  attn_ovfl ? 1 : 0, cur_ovfl ? 1 : 0);
+
+		/* In case the PF is capable of DPM, i.e. it has sufficient
+		 * doorbell BAR, the DB queue must be flushed.
+		 * Note that DCBX events which can modify the DPM register
+		 * state aren't taken into account because they can change
+		 * dynamically.
+		 */
+		if (cur_ovfl && !p_hwfn->dpm_info.db_bar_no_edpm) {
+			rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt,
+						      DORQ_REG_PF_USAGE_CNT,
+						      &flush_count);
+			if (rc != ECORE_SUCCESS)
+				return;
+		}
 
-	/* repeat all last doorbells (doorbell drop recovery) */
-	ecore_db_recovery_execute(p_hwfn, DB_REC_REAL_DEAL);
+		/* release overflow sticky indication (stop silently dropping
+		 * everything).
+		 */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0);
 
-	return ECORE_SUCCESS;
+		/* VF doorbell recovery in next iov handler for all VFs.
+		 * Setting the bit is atomic because the iov handler is reading
+		 * it in a parallel flow.
+		 */
+		if (cur_ovfl && IS_PF_SRIOV_ALLOC(p_hwfn))
+			OSAL_SET_BIT(ECORE_OVERFLOW_BIT,
+				     &p_hwfn->pf_iov_info->overflow);
+	}
+
+	/* Even if the PF didn't overflow, some of its child VFs may have.
+	 * Either way, schedule VFs handler for doorbell recovery.
+	 */
+	if (IS_PF_SRIOV_ALLOC(p_hwfn))
+		OSAL_IOV_DB_REC_HANDLER(p_hwfn);
+
+	if (cur_ovfl || attn_ovfl)
+		ecore_db_recovery_execute(p_hwfn);
 }
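
Both sides of the overflow handshake lean on single atomic bitmap
operations: the DORQ callback publishes with OSAL_SET_BIT from interrupt
context, and this handler consumes with OSAL_TEST_AND_CLEAR_BIT, so a set
bit can never be lost between a separate read and clear. A sketch of the
semantics assumed of the OSAL helper (illustrative, using GCC builtins):

	/* Atomically clear bit 'nr' and return its previous value */
	static inline int test_and_clear_bit_sketch(long nr,
						    unsigned long *addr)
	{
		unsigned long mask = 1UL << nr;

		return !!(__atomic_fetch_and(addr, ~mask,
					     __ATOMIC_SEQ_CST) & mask);
	}
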
 
-static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn)
+static void ecore_dorq_attn_overflow(struct ecore_hwfn *p_hwfn)
 {
-	u32 int_sts, first_drop_reason, details, address, all_drops_reason;
+	u32 overflow, i, flush_delay_count = ECORE_DB_REC_COUNT;
 	struct ecore_ptt *p_ptt = p_hwfn->p_dpc_ptt;
+	struct ecore_vf_info *p_vf;
 	enum _ecore_status_t rc;
 
-	int_sts = ecore_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS);
-	DP_NOTICE(p_hwfn->p_dev, false, "DORQ attention. int_sts was %x\n",
-		  int_sts);
+	overflow = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY);
+	if (overflow) {
+		/* PF doorbell recovery in next periodic handler.
+		 * Setting the bit is atomic because the db_rec handler is
+		 * reading it in the periodic handler flow.
+		 */
+		OSAL_SET_BIT(ECORE_OVERFLOW_BIT,
+			     &p_hwfn->db_recovery_info.overflow);
+
+		/* VF doorbell recovery in next iov handler for all VFs.
+		 * Setting the bit is atomic because the iov handler is reading
+		 * it in a parallel flow.
+		 */
+		if (IS_PF_SRIOV_ALLOC(p_hwfn))
+			OSAL_SET_BIT(ECORE_OVERFLOW_BIT,
+				     &p_hwfn->pf_iov_info->overflow);
+
+		/* In case the PF is capable of DPM, i.e. it has sufficient
+		 * doorbell BAR, the DB queue must be flushed.
+		 * Note that DCBX events which can modify the DPM register
+		 * state aren't taken into account because they can change
+		 * dynamically.
+		 */
+		if (!p_hwfn->dpm_info.db_bar_no_edpm) {
+			rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt,
+						      DORQ_REG_PF_USAGE_CNT,
+						      &flush_delay_count);
+			if (rc != ECORE_SUCCESS)
+				goto out;
+		}
+
+		/* Allow DORQ to process doorbells from this PF again */
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0);
+	} else {
+		/* Checking for VF overflows is only needed if the PF didn't
+		 * overflow. If the PF did overflow, ecore_iov_db_rec_handler()
+		 * will be scheduled by the periodic doorbell recovery handler
+		 * anyway.
+		 */
+		ecore_for_each_vf(p_hwfn, i) {
+			p_vf = &p_hwfn->pf_iov_info->vfs_array[i];
+			ecore_fid_pretend(p_hwfn, p_ptt,
+					  (u16)p_vf->concrete_fid);
+			overflow = ecore_rd(p_hwfn, p_ptt,
+					    DORQ_REG_VF_OVFL_STICKY);
+			if (overflow) {
+				/* VF doorbell recovery in next iov handler.
+				 * Setting the bit is atomic because the iov
+				 * handler is reading it in a parallel flow.
+				 */
+				OSAL_SET_BIT(ECORE_OVERFLOW_BIT,
+					     &p_vf->db_recovery_info.overflow);
+
+				if (!p_vf->db_recovery_info.edpm_disabled) {
+					rc = ecore_db_rec_flush_queue(p_hwfn,
+						p_ptt, DORQ_REG_VF_USAGE_CNT,
+						&flush_delay_count);
+					/* Do not clear VF sticky for this VF
+					 * if flush failed.
+					 */
+					if (rc != ECORE_SUCCESS)
+						continue;
+				}
+
+				/* Allow DORQ to process doorbells from this
+				 * VF again.
+				 */
+				ecore_wr(p_hwfn, p_ptt,
+					 DORQ_REG_VF_OVFL_STICKY, 0x0);
+			}
+		}
+
+		ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
+	}
+out:
+	/* Schedule the handler even if overflow was not detected. If an
+	 * overflow occurred after reading the sticky values, there won't be
+	 * another attention.
+	 */
+	OSAL_DB_REC_OCCURRED(p_hwfn);
+}
+
+static enum _ecore_status_t ecore_dorq_attn_int_sts(struct ecore_hwfn *p_hwfn)
+{
+	u32 int_sts, first_drop_reason, details, address, all_drops_reason;
+	struct ecore_ptt *p_ptt = p_hwfn->p_dpc_ptt;
 
 	/* int_sts may be zero since all PFs were interrupted for doorbell
 	 * overflow but another one already handled it. Can abort here. If
-	 * This PF also requires overflow recovery we will be interrupted again
+	 * this PF also requires overflow recovery we will be interrupted again.
+	 * The masked almost full indication may also be set. Ignoring.
 	 */
-	if (!int_sts)
+	int_sts = ecore_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS);
+	if (!(int_sts & ~DORQ_REG_INT_STS_DORQ_FIFO_AFULL))
 		return ECORE_SUCCESS;
 
+	DP_NOTICE(p_hwfn->p_dev, false, "DORQ attention. int_sts was %x\n",
+		  int_sts);
+
 	/* check if db_drop or overflow happened */
 	if (int_sts & (DORQ_REG_INT_STS_DB_DROP |
 		       DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR)) {
@@ -539,29 +681,20 @@ static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn)
 			  "Size\t\t0x%04x\t\t(in bytes)\n"
 			  "1st drop reason\t0x%08x\t(details on first drop since last handling)\n"
 			  "Sticky reasons\t0x%08x\t(all drop reasons since last handling)\n",
-			  address,
-			  GET_FIELD(details, ECORE_DORQ_ATTENTION_OPAQUE),
+			  address, GET_FIELD(details, ECORE_DORQ_ATTENTION_OPAQUE),
 			  GET_FIELD(details, ECORE_DORQ_ATTENTION_SIZE) * 4,
 			  first_drop_reason, all_drops_reason);
 
-		rc = ecore_db_rec_handler(p_hwfn, p_ptt);
-		OSAL_DB_REC_OCCURRED(p_hwfn);
-		if (rc != ECORE_SUCCESS)
-			return rc;
-
 		/* clear the doorbell drop details and prepare for next drop */
 		ecore_wr(p_hwfn, p_ptt, DORQ_REG_DB_DROP_DETAILS_REL, 0);
 
-		/* mark interrupt as handeld (note: even if drop was due to a
-		 * different reason than overflow we mark as handled)
+		/* mark interrupt as handled (note: even if drop was due to a different
+		 * reason than overflow we mark as handled)
 		 */
 		ecore_wr(p_hwfn, p_ptt, DORQ_REG_INT_STS_WR,
-			 DORQ_REG_INT_STS_DB_DROP |
-			 DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR);
+			 DORQ_REG_INT_STS_DB_DROP | DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR);
 
-		/* if there are no indications otherthan drop indications,
-		 * success
-		 */
+		/* if there are no indications other than drop indications, success */
 		if ((int_sts & ~(DORQ_REG_INT_STS_DB_DROP |
 				 DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR |
 				 DORQ_REG_INT_STS_DORQ_FIFO_AFULL)) == 0)
@@ -574,6 +707,31 @@ static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn)
 	return ECORE_INVAL;
 }
 
+static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn)
+{
+	p_hwfn->db_recovery_info.dorq_attn = true;
+	ecore_dorq_attn_overflow(p_hwfn);
+
+	return ecore_dorq_attn_int_sts(p_hwfn);
+}
+
+/* Handle a corner case race condition where another PF clears the DORQ's
+ * attention bit in MISCS bitmap before this PF got to read it. This is needed
+ * only for DORQ attentions because it doesn't send another attention later if
+ * the original issue was not resolved. If DORQ attention was already handled,
+ * its callback already set a flag to indicate it.
+ */
+static void ecore_dorq_attn_handler(struct ecore_hwfn *p_hwfn)
+{
+	if (p_hwfn->db_recovery_info.dorq_attn)
+		goto out;
+
+	/* Call DORQ callback if the attention was missed */
+	ecore_dorq_attn_cb(p_hwfn);
+out:
+	p_hwfn->db_recovery_info.dorq_attn = false;
+}
+
 static enum _ecore_status_t ecore_tm_attn_cb(struct ecore_hwfn *p_hwfn)
 {
 #ifndef ASIC_ONLY
@@ -587,12 +745,10 @@ static enum _ecore_status_t ecore_tm_attn_cb(struct ecore_hwfn *p_hwfn)
 
 		if (val & (TM_REG_INT_STS_1_PEND_TASK_SCAN |
 			   TM_REG_INT_STS_1_PEND_CONN_SCAN))
-			DP_INFO(p_hwfn,
-				"TM attention on emulation - most likely"
-				" results of clock-ratios\n");
+			DP_INFO(p_hwfn, "TM attention on emulation - most likely results of clock-ratios\n");
 		val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, TM_REG_INT_MASK_1);
 		val |= TM_REG_INT_MASK_1_PEND_CONN_SCAN |
-		    TM_REG_INT_MASK_1_PEND_TASK_SCAN;
+		       TM_REG_INT_MASK_1_PEND_TASK_SCAN;
 		ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, TM_REG_INT_MASK_1, val);
 
 		return ECORE_SUCCESS;
@@ -626,191 +782,213 @@ aeu_descs_special[AEU_INVERT_REG_SPECIAL_MAX] = {
 };
 
 /* Notice aeu_invert_reg must be defined in the same order of bits as HW; */
-static struct aeu_invert_reg aeu_descs[NUM_ATTN_REGS] = {
+static struct aeu_invert_reg aeu_descs[MAX_NUM_ATTN_REGS] = {
 	{
-	 {			/* After Invert 1 */
-	  {"GPIO0 function%d", (32 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
-	   MAX_BLOCK_ID},
-	  }
-	 },
+		{	/* After Invert 1 */
+			{"GPIO0 function%d", FIELD_VALUE(ATTENTION_LENGTH, 32), OSAL_NULL,
+			 MAX_BLOCK_ID},
+		}
+	},
 
 	{
-	 {			/* After Invert 2 */
-	  {"PGLUE config_space", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"PGLUE misc_flr", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"PGLUE B RBC", ATTENTION_PAR_INT, ecore_pglueb_rbc_attn_cb,
-	   BLOCK_PGLUE_B},
-	  {"PGLUE misc_mctp", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"Flash event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"SMB event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"Main Power", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"SW timers #%d",
-	   (8 << ATTENTION_LENGTH_SHIFT) | (1 << ATTENTION_OFFSET_SHIFT),
-	   OSAL_NULL, MAX_BLOCK_ID},
-	  {"PCIE glue/PXP VPD %d", (16 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
-	   BLOCK_PGLCS},
-	  }
-	 },
+		{	/* After Invert 2 */
+			{"PGLUE config_space", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"PGLUE misc_flr", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"PGLUE B RBC", ATTENTION_PAR_INT, ecore_pglueb_rbc_attn_cb, BLOCK_PGLUE_B},
+			{"PGLUE misc_mctp", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"Flash event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"SMB event", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"Main Power", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"SW timers #%d", FIELD_VALUE(ATTENTION_LENGTH, 8) |
+			 FIELD_VALUE(ATTENTION_OFFSET, 1), OSAL_NULL, MAX_BLOCK_ID},
+			{"PCIE glue/PXP VPD %d", FIELD_VALUE(ATTENTION_LENGTH, 16), OSAL_NULL,
+			 BLOCK_PGLCS},
+		}
+	},
 
 	{
-	 {			/* After Invert 3 */
-	  {"General Attention %d", (32 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
-	   MAX_BLOCK_ID},
-	  }
-	 },
+		{	/* After Invert 3 */
+			{"General Attention %d", FIELD_VALUE(ATTENTION_LENGTH, 32), OSAL_NULL,
+			 MAX_BLOCK_ID},
+		}
+	},
 
 	{
-	 {			/* After Invert 4 */
-	  {"General Attention 32", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE,
-	   ecore_fw_assertion, MAX_BLOCK_ID},
-	  {"General Attention %d",
-	   (2 << ATTENTION_LENGTH_SHIFT) | (33 << ATTENTION_OFFSET_SHIFT),
-	   OSAL_NULL, MAX_BLOCK_ID},
-	  {"General Attention 35", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE,
-	   ecore_general_attention_35, MAX_BLOCK_ID},
-	  {"NWS Parity", ATTENTION_PAR | ATTENTION_BB_DIFFERENT |
-			 ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_0),
-			 OSAL_NULL, BLOCK_NWS},
-	  {"NWS Interrupt", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT |
-			    ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_1),
-			    OSAL_NULL, BLOCK_NWS},
-	  {"NWM Parity", ATTENTION_PAR | ATTENTION_BB_DIFFERENT |
-			 ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_2),
-			 OSAL_NULL, BLOCK_NWM},
-	  {"NWM Interrupt", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT |
-			    ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_3),
-			    OSAL_NULL, BLOCK_NWM},
-	  {"MCP CPU", ATTENTION_SINGLE, ecore_mcp_attn_cb, MAX_BLOCK_ID},
-	  {"MCP Watchdog timer", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"MCP M2P", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"AVS stop status ready", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"MSTAT", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
-	  {"MSTAT per-path", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
+		{	/* After Invert 4 */
+			{"General Attention 32", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE,
+			 ecore_fw_assertion, MAX_BLOCK_ID},
+			{"General Attention %d", FIELD_VALUE(ATTENTION_LENGTH, 2) |
+			 FIELD_VALUE(ATTENTION_OFFSET, 33), OSAL_NULL, MAX_BLOCK_ID},
+			{"General Attention 35", ATTENTION_SINGLE | ATTENTION_CLEAR_ENABLE,
+			 ecore_general_attention_35, MAX_BLOCK_ID},
+			{"NWS Parity", ATTENTION_PAR | ATTENTION_BB_DIFFERENT |
+				       ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_0), OSAL_NULL,
+			 BLOCK_NWS},
+			{"NWS Interrupt", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT |
+					  ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_1), OSAL_NULL,
+			 BLOCK_NWS},
+			{"NWM Parity", ATTENTION_PAR | ATTENTION_BB_DIFFERENT |
+				       ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_2), OSAL_NULL,
+			 BLOCK_NWM},
+			{"NWM Interrupt", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT |
+					  ATTENTION_BB(AEU_INVERT_REG_SPECIAL_CNIG_3), OSAL_NULL,
+			 BLOCK_NWM},
+			{"MCP CPU", ATTENTION_SINGLE, ecore_mcp_attn_cb, MAX_BLOCK_ID},
+			{"MCP Watchdog timer", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"MCP M2P", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"AVS stop status ready", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"MSTAT", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
+			{"MSTAT per-path", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
 			{"OPTE", ATTENTION_PAR, OSAL_NULL, BLOCK_OPTE},
 			{"MCP", ATTENTION_PAR, OSAL_NULL, BLOCK_MCP},
 			{"MS", ATTENTION_SINGLE, OSAL_NULL, BLOCK_MS},
 			{"UMAC", ATTENTION_SINGLE, OSAL_NULL, BLOCK_UMAC},
 			{"LED", ATTENTION_SINGLE, OSAL_NULL, BLOCK_LED},
 			{"BMBN", ATTENTION_SINGLE, OSAL_NULL, BLOCK_BMBN},
-	  {"NIG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NIG},
-	  {"BMB/OPTE/MCP", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB},
+			{"NIG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NIG},
 			{"BMB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB},
-	  {"BTB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BTB},
-	  {"BRB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BRB},
-	  {"PRS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRS},
-	  }
-	 },
+			{"BTB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BTB},
+			{"BRB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BRB},
+			{"PRS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRS},
+		}
+	},
 
 	{
-	 {			/* After Invert 5 */
-	  {"SRC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_SRC},
-	  {"PB Client1", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB1},
-	  {"PB Client2", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB2},
-	  {"RPB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RPB},
-	  {"PBF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF},
-	  {"QM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_QM},
-	  {"TM", ATTENTION_PAR_INT, ecore_tm_attn_cb, BLOCK_TM},
-	  {"MCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MCM},
-	  {"MSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSDM},
-	  {"MSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSEM},
-	  {"PCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PCM},
-	  {"PSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSDM},
-	  {"PSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSEM},
-	  {"TCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TCM},
-	  {"TSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TSDM},
-	  {"TSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TSEM},
-	  }
-	 },
+		{	/* After Invert 5 */
+			{"SRC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_SRC},
+			{"PB Client1", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB1},
+			{"PB Client2", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF_PB2},
+			{"RPB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RPB},
+			{"PBF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PBF},
+			{"QM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_QM},
+			{"TM", ATTENTION_PAR_INT, ecore_tm_attn_cb, BLOCK_TM},
+			{"MCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MCM},
+			{"MSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSDM},
+			{"MSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MSEM},
+			{"PCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PCM},
+			{"PSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSDM},
+			{"PSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSEM},
+			{"TCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TCM},
+			{"TSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TSDM},
+			{"TSEM", ATTENTION_PAR_INT,  OSAL_NULL, BLOCK_TSEM},
+		}
+	},
 
 	{
-	 {			/* After Invert 6 */
-	  {"UCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_UCM},
-	  {"USDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USDM},
-	  {"USEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USEM},
-	  {"XCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XCM},
-	  {"XSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSDM},
-	  {"XSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSEM},
-	  {"YCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YCM},
-	  {"YSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSDM},
-	  {"YSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSEM},
-	  {"XYLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XYLD},
-	  {"TMLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TMLD},
-	  {"MYLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MULD},
-	  {"YULD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YULD},
-	  {"DORQ", ATTENTION_PAR_INT, ecore_dorq_attn_cb, BLOCK_DORQ},
-	  {"DBG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DBG},
-	  {"IPC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IPC},
-	  }
-	 },
+		{	/* After Invert 6 */
+			{"UCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_UCM},
+			{"USDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USDM},
+			{"USEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_USEM},
+			{"XCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XCM},
+			{"XSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSDM},
+			{"XSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XSEM},
+			{"YCM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YCM},
+			{"YSDM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSDM},
+			{"YSEM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YSEM},
+			{"XYLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_XYLD},
+			{"TMLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TMLD},
+			{"MYLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MULD},
+			{"YULD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YULD},
+			{"DORQ", ATTENTION_PAR_INT, ecore_dorq_attn_cb, BLOCK_DORQ},
+			{"DBG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DBG},
+			{"IPC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IPC},
+		}
+	},
 
 	{
-	 {			/* After Invert 7 */
-	  {"CCFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CCFC},
-	  {"CDU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CDU},
-	  {"DMAE", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DMAE},
-	  {"IGU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IGU},
-	  {"ATC", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
-	  {"CAU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CAU},
-	  {"PTU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PTU},
-	  {"PRM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRM},
-	  {"TCFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TCFC},
-	  {"RDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RDIF},
-	  {"TDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TDIF},
-	  {"RSS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RSS},
-	  {"MISC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISC},
-	  {"MISCS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISCS},
-	  {"PCIE", ATTENTION_PAR, OSAL_NULL, BLOCK_PCIE},
-	  {"Vaux PCI core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
-	  {"PSWRQ", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ},
-	  }
-	 },
+		{	/* After Invert 7 */
+			{"CCFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CCFC},
+			{"CDU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CDU},
+			{"DMAE", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_DMAE},
+			{"IGU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_IGU},
+			{"ATC", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
+			{"CAU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CAU},
+			{"PTU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PTU},
+			{"PRM", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRM},
+			{"TCFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TCFC},
+			{"RDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RDIF},
+			{"TDIF", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TDIF},
+			{"RSS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RSS},
+			{"MISC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISC},
+			{"MISCS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_MISCS},
+			{"PCIE", ATTENTION_PAR, OSAL_NULL, BLOCK_PCIE},
+			{"Vaux PCI core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+			{"PSWRQ", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ},
+		}
+	},
 
 	{
-	 {			/* After Invert 8 */
-	  {"PSWRQ (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ2},
-	  {"PSWWR", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR},
-	  {"PSWWR (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR2},
-	  {"PSWRD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD},
-	  {"PSWRD (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD2},
-	  {"PSWHST", ATTENTION_PAR_INT, ecore_pswhst_attn_cb, BLOCK_PSWHST},
-	  {"PSWHST (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWHST2},
-	  {"GRC", ATTENTION_PAR_INT, ecore_grc_attn_cb, BLOCK_GRC},
-	  {"CPMU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CPMU},
-	  {"NCSI", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NCSI},
-	  {"MSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"PSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"TSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"USEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"XSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"YSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"pxp_misc_mps", ATTENTION_PAR, OSAL_NULL, BLOCK_PGLCS},
-	  {"PCIE glue/PXP Exp. ROM", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
-	  {"PERST_B assertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"PERST_B deassertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
-	  {"Reserved %d", (2 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
-	   MAX_BLOCK_ID},
-	  }
-	 },
+		{	/* After Invert 8 */
+			{"PSWRQ (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRQ2},
+			{"PSWWR", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR},
+			{"PSWWR (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWWR2},
+			{"PSWRD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD},
+			{"PSWRD (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWRD2},
+			{"PSWHST", ATTENTION_PAR_INT, ecore_pswhst_attn_cb, BLOCK_PSWHST},
+			{"PSWHST (pci_clk)", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PSWHST2},
+			{"GRC", ATTENTION_PAR_INT, ecore_grc_attn_cb, BLOCK_GRC},
+			{"CPMU", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_CPMU},
+			{"NCSI", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NCSI},
+			{"MSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+			{"PSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+			{"TSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+			{"USEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+			{"XSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+			{"YSEM PRAM", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+			{"pxp_misc_mps", ATTENTION_PAR, OSAL_NULL, BLOCK_PGLCS},
+			{"PCIE glue/PXP Exp. ROM", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+			{"PERST_B assertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"PERST_B deassertion", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"Reserved %d", FIELD_VALUE(ATTENTION_LENGTH, 2), OSAL_NULL, MAX_BLOCK_ID },
+		}
+	},
 
 	{
-	 {			/* After Invert 9 */
-	  {"MCP Latched memory", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"MCP Latched scratchpad cache", ATTENTION_SINGLE, OSAL_NULL,
-	   MAX_BLOCK_ID},
-	  {"AVS", ATTENTION_PAR | ATTENTION_BB_DIFFERENT |
-	   ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_UMP_TX), OSAL_NULL,
-	   BLOCK_AVS_WRAP},
-	  {"AVS", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT |
-	   ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_SCPAD), OSAL_NULL,
-	   BLOCK_AVS_WRAP},
-	  {"PCIe core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
-	  {"PCIe link up", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
-	  {"PCIe hot reset", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
-	  {"Reserved %d", (9 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
-	    MAX_BLOCK_ID},
-	  }
-	 },
+		{	/* After Invert 9 */
+			{"MCP Latched memory", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+			{"MCP Latched scratchpad cache", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
+			{"AVS", ATTENTION_PAR | ATTENTION_BB_DIFFERENT |
+				ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_UMP_TX), OSAL_NULL,
+			 BLOCK_AVS_WRAP},
+			{"AVS", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT |
+				ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_SCPAD), OSAL_NULL,
+			 BLOCK_AVS_WRAP},
+			{"PCIe core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+			{"PCIe link up", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+			{"PCIe hot reset", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+			{"YPLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_YPLD},
+			{"PTLD", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PTLD},
+			{"RGFS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RGFS},
+			{"TGFS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TGFS},
+			{"RGSRC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RGSRC},
+			{"TGSRC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TGSRC},
+			{"RGFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_RGFC},
+			{"TGFC", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_TGFC},
+			{"Reserved %d", FIELD_VALUE(ATTENTION_LENGTH, 9), OSAL_NULL, MAX_BLOCK_ID},
+		}
+	},
 
+	{
+		{	/* After Invert 10 */
+			{"BMB_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_BMB_PD},
+			{"CFC_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_CFC_PD},
+			{"MS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_MS_PD},
+			{"HOST_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_HOST_PD},
+			{"NMC_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_NMC_PD},
+			{"NM_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_NM_PD},
+			{"NW_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_NW_PD},
+			{"PS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PS_PD},
+			{"PXP_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PXP_PD},
+			{"QM_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_QM_PD},
+			{"TS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_TS_PD},
+			{"TX_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_TX_PD},
+			{"US_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_US_PD},
+			{"XS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_XS_PD},
+			{"YS_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_YS_PD},
+			{"RX_PD", ATTENTION_SINGLE, OSAL_NULL, BLOCK_RX_PD},
+			{"Reserved %d", FIELD_VALUE(ATTENTION_LENGTH, 16), OSAL_NULL, MAX_BLOCK_ID},
+		}
+	},
 };
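
Throughout the table above, FIELD_VALUE(ATTENTION_LENGTH, n) replaces the
old open-coded "(n << ATTENTION_LENGTH_SHIFT)" entries. As a hedged sketch
(the in-tree definition may differ), FIELD_VALUE() is the encode-side
companion of GET_FIELD(), masking a constant and placing it in its field:

	/* Illustrative only */
	#define FIELD_VALUE(name, value) \
		(((value) & name##_MASK) << name##_SHIFT)
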
 
 static struct aeu_invert_reg_bit *
@@ -823,8 +1001,7 @@ ecore_int_aeu_translate(struct ecore_hwfn *p_hwfn,
 	if (!(p_bit->flags & ATTENTION_BB_DIFFERENT))
 		return p_bit;
 
-	return &aeu_descs_special[(p_bit->flags & ATTENTION_BB_MASK) >>
-				  ATTENTION_BB_SHIFT];
+	return &aeu_descs_special[GET_FIELD(p_bit->flags, ATTENTION_BB)];
 }
 
 static bool ecore_int_is_parity_flag(struct ecore_hwfn *p_hwfn,
@@ -838,23 +1015,23 @@ static bool ecore_int_is_parity_flag(struct ecore_hwfn *p_hwfn,
 #define ATTN_BITS_MASKABLE	(0x3ff)
 struct ecore_sb_attn_info {
 	/* Virtual & Physical address of the SB */
-	struct atten_status_block *sb_attn;
-	dma_addr_t sb_phys;
+	struct atten_status_block	*sb_attn;
+	dma_addr_t			sb_phys;
 
 	/* Last seen running index */
-	u16 index;
+	u16				index;
 
 	/* A mask of the AEU bits resulting in a parity error */
-	u32 parity_mask[NUM_ATTN_REGS];
+	u32				parity_mask[MAX_NUM_ATTN_REGS];
 
 	/* A pointer to the attention description structure */
-	struct aeu_invert_reg *p_aeu_desc;
+	struct aeu_invert_reg		*p_aeu_desc;
 
 	/* Previously asserted attentions, which are still unasserted */
-	u16 known_attn;
+	u16				known_attn;
 
 	/* Cleanup address for the link's general hw attention */
-	u32 mfw_attn_addr;
+	u32				mfw_attn_addr;
 };
 
 static u16 ecore_attn_update_idx(struct ecore_hwfn *p_hwfn,
@@ -912,10 +1089,11 @@ static enum _ecore_status_t ecore_int_assertion(struct ecore_hwfn *p_hwfn,
 
 	/* FIXME - this will change once we'll have GOOD gtt definitions */
 	DIRECT_REG_WR(p_hwfn,
-		      (u8 OSAL_IOMEM *) p_hwfn->regview +
+		      (u8 OSAL_IOMEM *)p_hwfn->regview +
 		      GTT_BAR0_MAP_REG_IGU_CMD +
 		      ((IGU_CMD_ATTN_BIT_SET_UPPER -
-			IGU_CMD_INT_ACK_BASE) << 3), (u32)asserted_bits);
+			IGU_CMD_INT_ACK_BASE) << 3),
+			(u32)asserted_bits);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "set cmd IGU: 0x%04x\n",
 		   asserted_bits);
@@ -923,12 +1101,24 @@ static enum _ecore_status_t ecore_int_assertion(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+/* @DPDK */
 static void ecore_int_attn_print(struct ecore_hwfn *p_hwfn,
 				 enum block_id id, enum dbg_attn_type type,
 				 bool b_clear)
 {
-	/* @DPDK */
-	DP_NOTICE(p_hwfn->p_dev, false, "[block_id %d type %d]\n", id, type);
+	struct dbg_attn_block_result attn_results;
+	enum dbg_status status;
+
+	OSAL_MEMSET(&attn_results, 0, sizeof(attn_results));
+
+	status = qed_dbg_read_attn(p_hwfn, p_hwfn->p_dpc_ptt, id, type,
+				   b_clear, &attn_results);
+	if (status != DBG_STATUS_OK)
+		DP_NOTICE(p_hwfn, true,
+			  "Failed to parse attention information [status: %s]\n",
+			  qed_dbg_get_status_str(status));
+	else
+		qed_dbg_parse_attn(p_hwfn, &attn_results);
 }
 
 /**
@@ -967,23 +1157,24 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
 		b_fatal = true;
 
 	/* Print HW block interrupt registers */
-	if (p_aeu->block_index != MAX_BLOCK_ID) {
+	if (p_aeu->block_index != MAX_BLOCK_ID)
 		ecore_int_attn_print(p_hwfn, p_aeu->block_index,
 				     ATTN_TYPE_INTERRUPT, !b_fatal);
-}
 
-	/* @DPDK */
 	/* Reach assertion if attention is fatal */
-	if (b_fatal || (strcmp(p_bit_name, "PGLUE B RBC") == 0)) {
+	if (b_fatal) {
+		u8 str[ECORE_HW_ERR_MAX_STR_SIZE];
+
+		OSAL_SNPRINTF((char *)str, ECORE_HW_ERR_MAX_STR_SIZE,
+			      "`%s': Fatal attention\n", p_bit_name);
 #ifndef ASIC_ONLY
-		DP_NOTICE(p_hwfn, !CHIP_REV_IS_EMUL(p_hwfn->p_dev),
-			  "`%s': Fatal attention\n", p_bit_name);
+		DP_NOTICE(p_hwfn, !CHIP_REV_IS_EMUL(p_hwfn->p_dev), "%s", str);
 #else
-		DP_NOTICE(p_hwfn, true, "`%s': Fatal attention\n",
-			  p_bit_name);
+		DP_NOTICE(p_hwfn, true, "%s", str);
 #endif
-
-		ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN);
+		ecore_hw_err_notify(p_hwfn, p_hwfn->p_dpc_ptt,
+				    ECORE_HW_ERR_HW_ATTN, str,
+				    OSAL_STRLEN((char *)str) + 1);
 	}
 
 	/* Prevent this Attention from being asserted in the future */
@@ -996,7 +1187,7 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
 		u32 mask = ~bitmask;
 		val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg);
 		ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg, (val & mask));
-		DP_ERR(p_hwfn, "`%s' - Disabled future attentions\n",
+		DP_INFO(p_hwfn, "`%s' - Disabled future attentions\n",
 			p_bit_name);
 	}
 
@@ -1042,11 +1233,15 @@ static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn,
 }
 
 #define MISC_REG_AEU_AFTER_INVERT_IGU(n) \
-	(MISC_REG_AEU_AFTER_INVERT_1_IGU + (n) * 0x4)
+	((n) < NUM_ATTN_REGS_E4 ? \
+	 MISC_REG_AEU_AFTER_INVERT_1_IGU + (n) * 0x4 : \
+	 MISC_REG_AEU_AFTER_INVERT_10_IGU_E5)
 
 #define MISC_REG_AEU_ENABLE_IGU_OUT(n, group) \
-	(MISC_REG_AEU_ENABLE1_IGU_OUT_0 + (n) * 0x4 + \
-	 (group) * 0x4 * NUM_ATTN_REGS)
+	((n) < NUM_ATTN_REGS_E4 ? \
+	 (MISC_REG_AEU_ENABLE1_IGU_OUT_0 + (n) * 0x4 + \
+	  (group) * 0x4 * NUM_ATTN_REGS_E4) : \
+	 (MISC_REG_AEU_ENABLE10_IGU_OUT_0_E5 + (group) * 0x4))
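
A worked expansion of the new dual-layout lookup, assuming NUM_ATTN_REGS_E4
is 9 (the E4 parts expose nine "after invert" registers, matching the nine
entries the aeu_descs table had before this patch): indices 0..8 resolve to
the contiguous E4 array, while index 9 falls through to the dedicated E5
register:

	/* Illustrative expansions */
	MISC_REG_AEU_AFTER_INVERT_IGU(0); /* ..._AFTER_INVERT_1_IGU + 0x00 */
	MISC_REG_AEU_AFTER_INVERT_IGU(8); /* ..._AFTER_INVERT_1_IGU + 0x20 */
	MISC_REG_AEU_AFTER_INVERT_IGU(9); /* ..._AFTER_INVERT_10_IGU_E5 */
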
 
 /**
  * @brief - handles deassertion of previously asserted attentions.
@@ -1059,21 +1254,22 @@ static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn,
 static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 						  u16 deasserted_bits)
 {
+	u8 i, j, k, bit_idx, num_attn_regs = NUM_ATTN_REGS(p_hwfn);
 	struct ecore_sb_attn_info *sb_attn_sw = p_hwfn->p_sb_attn;
-	u32 aeu_inv_arr[NUM_ATTN_REGS], aeu_mask, aeu_en, en;
-	u8 i, j, k, bit_idx;
+	u32 aeu_inv_arr[MAX_NUM_ATTN_REGS], aeu_mask, aeu_en, en;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	/* Read the attention registers in the AEU */
-	for (i = 0; i < NUM_ATTN_REGS; i++) {
+	for (i = 0; i < num_attn_regs; i++) {
 		aeu_inv_arr[i] = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
 					  MISC_REG_AEU_AFTER_INVERT_IGU(i));
 		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
-			   "Deasserted bits [%d]: %08x\n", i, aeu_inv_arr[i]);
+			   "Deasserted bits [%d]: %08x\n",
+			   i, aeu_inv_arr[i]);
 	}
 
 	/* Handle parity attentions first */
-	for (i = 0; i < NUM_ATTN_REGS; i++) {
+	for (i = 0; i < num_attn_regs; i++) {
 		struct aeu_invert_reg *p_aeu = &sb_attn_sw->p_aeu_desc[i];
 		u32 parities;
 
@@ -1105,7 +1301,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 		if (!(deasserted_bits & (1 << k)))
 			continue;
 
-		for (i = 0; i < NUM_ATTN_REGS; i++) {
+		for (i = 0; i < num_attn_regs; i++) {
 			u32 bits;
 
 			aeu_en = MISC_REG_AEU_ENABLE_IGU_OUT(i, k);
@@ -1121,11 +1317,12 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 			 * previous assertion.
 			 */
 			for (j = 0, bit_idx = 0; bit_idx < 32; j++) {
-				unsigned long int bitmask;
+				unsigned long bitmask;
 				u8 bit, bit_len;
 
 				/* Need to account bits with changed meaning */
 				p_aeu = &sb_attn_sw->p_aeu_desc[i].bits[j];
+				p_aeu = ecore_int_aeu_translate(p_hwfn, p_aeu);
 
 				bit = bit_idx;
 				bit_len = ATTENTION_LENGTH(p_aeu->flags);
@@ -1160,9 +1357,9 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 							      p_aeu->bit_name,
 							      num);
 					else
-						strlcpy(bit_name,
-							p_aeu->bit_name,
-							sizeof(bit_name));
+						OSAL_STRLCPY(bit_name,
+							     p_aeu->bit_name,
+							     30);
 
 					/* We now need to pass bitmask in its
 					 * correct position.
@@ -1182,13 +1379,17 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 		}
 	}
 
+	/* Handle missed DORQ attention */
+	ecore_dorq_attn_handler(p_hwfn);
+
 	/* Clear IGU indication for the deasserted bits */
 	/* FIXME - this will change once we'll have GOOD gtt definitions */
 	DIRECT_REG_WR(p_hwfn,
-		      (u8 OSAL_IOMEM *) p_hwfn->regview +
+		      (u8 OSAL_IOMEM *)p_hwfn->regview +
 		      GTT_BAR0_MAP_REG_IGU_CMD +
 		      ((IGU_CMD_ATTN_BIT_CLR_UPPER -
-			IGU_CMD_INT_ACK_BASE) << 3), ~((u32)deasserted_bits));
+			IGU_CMD_INT_ACK_BASE) << 3),
+		      ~((u32)deasserted_bits));
 
 	/* Unmask deasserted attentions in IGU */
 	aeu_mask = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
@@ -1215,6 +1416,8 @@ static enum _ecore_status_t ecore_int_attentions(struct ecore_hwfn *p_hwfn)
 	 */
 	do {
 		index = OSAL_LE16_TO_CPU(p_sb_attn->sb_index);
+		/* Make sure reading index is not optimized out */
+		OSAL_RMB(p_hwfn->p_dev);
 		attn_bits = OSAL_LE32_TO_CPU(p_sb_attn->atten_bits);
 		attn_acks = OSAL_LE32_TO_CPU(p_sb_attn->atten_ack);
 	} while (index != OSAL_LE16_TO_CPU(p_sb_attn->sb_index));
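
The added read barrier makes this seqlock-style snapshot sound: the index
is sampled, the attention bits are read, and the loop retries if the index
moved in between. Without the barrier, the compiler or CPU could reorder
the bits reads ahead of the index read, letting the retry check pass on a
torn snapshot.
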
@@ -1226,9 +1429,9 @@ static enum _ecore_status_t ecore_int_attentions(struct ecore_hwfn *p_hwfn)
 	 * attention with no previous attention
 	 */
 	asserted_bits = (attn_bits & ~attn_acks & ATTN_STATE_BITS) &
-	    ~p_sb_attn_sw->known_attn;
+			~p_sb_attn_sw->known_attn;
 	deasserted_bits = (~attn_bits & attn_acks & ATTN_STATE_BITS) &
-	    p_sb_attn_sw->known_attn;
+			  p_sb_attn_sw->known_attn;
 
 	if ((asserted_bits & ~0x100) || (deasserted_bits & ~0x100))
 		DP_INFO(p_hwfn,
@@ -1236,7 +1439,8 @@ static enum _ecore_status_t ecore_int_attentions(struct ecore_hwfn *p_hwfn)
 			index, attn_bits, attn_acks, asserted_bits,
 			deasserted_bits, p_sb_attn_sw->known_attn);
 	else if (asserted_bits == 0x100)
-		DP_INFO(p_hwfn, "MFW indication via attention\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
+			   "MFW indication via attention\n");
 	else
 		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
 			   "MFW indication [deassertion]\n");
@@ -1259,12 +1463,14 @@ static void ecore_sb_ack_attn(struct ecore_hwfn *p_hwfn,
 	struct igu_prod_cons_update igu_ack;
 
 	OSAL_MEMSET(&igu_ack, 0, sizeof(struct igu_prod_cons_update));
-	igu_ack.sb_id_and_flags =
-	    ((ack_cons << IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT) |
-	     (1 << IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT) |
-	     (IGU_INT_NOP << IGU_PROD_CONS_UPDATE_ENABLE_INT_SHIFT) |
-	     (IGU_SEG_ACCESS_ATTN <<
-	      IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_SHIFT));
+	SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_SB_INDEX,
+		  ack_cons);
+	SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_UPDATE_FLAG,
+		  1);
+	SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_ENABLE_INT,
+		  IGU_INT_NOP);
+	SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS,
+		  IGU_SEG_ACCESS_ATTN);
 
 	DIRECT_REG_WR(p_hwfn, igu_addr, igu_ack.sb_id_and_flags);
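
Since igu_ack is zeroed by the OSAL_MEMSET above, the four SET_FIELD()
read-modify-writes OR their fields into a clean word, producing exactly
the encoding the removed single expression built:

	/* Equivalent old-style encoding (from the removed lines) */
	igu_ack.sb_id_and_flags =
		(ack_cons << IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT) |
		(1 << IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT) |
		(IGU_INT_NOP << IGU_PROD_CONS_UPDATE_ENABLE_INT_SHIFT) |
		(IGU_SEG_ACCESS_ATTN <<
		 IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_SHIFT);
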
 
@@ -1293,8 +1499,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 
 	sb_info = &p_hwfn->p_sp_sb->sb_info;
 	if (!sb_info) {
-		DP_ERR(p_hwfn->p_dev,
-		       "Status block is NULL - cannot ack interrupts\n");
+		DP_ERR(p_hwfn->p_dev, "Status block is NULL - cannot ack interrupts\n");
 		return;
 	}
 
@@ -1302,7 +1507,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 		DP_ERR(p_hwfn->p_dev, "DPC called - no p_sb_attn");
 		return;
 	}
-	sb_attn = p_hwfn->p_sb_attn;
+	sb_attn = p_hwfn->p_sb_attn;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_INTR, "DPC Called! (hwfn %p %d)\n",
 		   p_hwfn, p_hwfn->my_id);
@@ -1315,8 +1520,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 	/* Gather Interrupts/Attentions information */
 	if (!sb_info->sb_virt) {
 		DP_ERR(p_hwfn->p_dev,
-		       "Interrupt Status block is NULL -"
-		       " cannot check for new interrupts!\n");
+		       "Interrupt Status block is NULL - cannot check for new interrupts!\n");
 	} else {
 		u32 tmp_index = sb_info->sb_ack;
 		rc = ecore_sb_update_sb_idx(sb_info);
@@ -1326,9 +1530,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 	}
 
 	if (!sb_attn || !sb_attn->sb_attn) {
-		DP_ERR(p_hwfn->p_dev,
-		       "Attentions Status block is NULL -"
-		       " cannot check for new attentions!\n");
+		DP_ERR(p_hwfn->p_dev, "Attentions Status block is NULL - cannot check for new attentions!\n");
 	} else {
 		u16 tmp_index = sb_attn->index;
 
@@ -1344,8 +1546,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 		return;
 	}
 
-/* Check the validity of the DPC ptt. If not ack interrupts and fail */
-
+	/* Check the validity of the DPC ptt. If not ack interrupts and fail */
 	if (!p_hwfn->p_dpc_ptt) {
 		DP_NOTICE(p_hwfn->p_dev, true, "Failed to allocate PTT\n");
 		ecore_sb_ack(sb_info, IGU_INT_ENABLE, 1);
@@ -1392,7 +1593,9 @@ static void ecore_int_sb_attn_free(struct ecore_hwfn *p_hwfn)
 				       p_sb->sb_phys,
 				       SB_ATTN_ALIGNED_SIZE(p_hwfn));
 	}
+
 	OSAL_FREE(p_hwfn->p_dev, p_sb);
+	p_hwfn->p_sb_attn = OSAL_NULL;
 }
 
 static void ecore_int_sb_attn_setup(struct ecore_hwfn *p_hwfn,
@@ -1414,10 +1617,11 @@ static void ecore_int_sb_attn_setup(struct ecore_hwfn *p_hwfn,
 
 static void ecore_int_sb_attn_init(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   void *sb_virt_addr, dma_addr_t sb_phy_addr)
+				   void *sb_virt_addr,
+				   dma_addr_t sb_phy_addr)
 {
 	struct ecore_sb_attn_info *sb_info = p_hwfn->p_sb_attn;
-	int i, j, k;
+	int i, j, k, num_attn_regs = NUM_ATTN_REGS(p_hwfn);
 
 	sb_info->sb_attn = sb_virt_addr;
 	sb_info->sb_phys = sb_phy_addr;
@@ -1426,8 +1630,8 @@ static void ecore_int_sb_attn_init(struct ecore_hwfn *p_hwfn,
 	sb_info->p_aeu_desc = aeu_descs;
 
 	/* Calculate Parity Masks */
-	OSAL_MEMSET(sb_info->parity_mask, 0, sizeof(u32) * NUM_ATTN_REGS);
-	for (i = 0; i < NUM_ATTN_REGS; i++) {
+	OSAL_MEMSET(sb_info->parity_mask, 0, sizeof(u32) * MAX_NUM_ATTN_REGS);
+	for (i = 0; i < num_attn_regs; i++) {
 		/* j is array index, k is bit index */
 		for (j = 0, k = 0; k < 32; j++) {
 			struct aeu_invert_reg_bit *p_aeu;
@@ -1445,7 +1649,7 @@ static void ecore_int_sb_attn_init(struct ecore_hwfn *p_hwfn,
 
 	/* Set the address of cleanup for the mcp attention */
 	sb_info->mfw_attn_addr = (p_hwfn->rel_pf_id << 3) +
-	    MISC_REG_AEU_GENERAL_ATTN_0;
+				 MISC_REG_AEU_GENERAL_ATTN_0;
 
 	ecore_int_sb_attn_setup(p_hwfn, p_ptt);
 }
@@ -1590,25 +1794,23 @@ void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
 		/* Wide-bus, initialize via DMAE */
 		u64 phys_addr = (u64)sb_phys;
 
-		ecore_dmae_host2grc(p_hwfn, p_ptt,
-				    (u64)(osal_uintptr_t)&phys_addr,
+		ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&phys_addr,
 				    CAU_REG_SB_ADDR_MEMORY +
 				    igu_sb_id * sizeof(u64), 2,
 				    OSAL_NULL /* default parameters */);
-		ecore_dmae_host2grc(p_hwfn, p_ptt,
-				    (u64)(osal_uintptr_t)&sb_entry,
+		ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&sb_entry,
 				    CAU_REG_SB_VAR_MEMORY +
 				    igu_sb_id * sizeof(u64), 2,
 				    OSAL_NULL /* default parameters */);
 	} else {
 		/* Initialize Status Block Address */
 		STORE_RT_REG_AGG(p_hwfn,
-				 CAU_REG_SB_ADDR_MEMORY_RT_OFFSET +
-				 igu_sb_id * 2, sb_phys);
+				 CAU_REG_SB_ADDR_MEMORY_RT_OFFSET + igu_sb_id * 2,
+				 sb_phys);
 
 		STORE_RT_REG_AGG(p_hwfn,
-				 CAU_REG_SB_VAR_MEMORY_RT_OFFSET +
-				 igu_sb_id * 2, sb_entry);
+				 CAU_REG_SB_VAR_MEMORY_RT_OFFSET + igu_sb_id * 2,
+				 sb_entry);
 	}
 
 	/* Configure pi coalescing if set */
@@ -1650,7 +1852,8 @@ void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
 }
 
 void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, struct ecore_sb_info *sb_info)
+			struct ecore_ptt *p_ptt,
+			struct ecore_sb_info *sb_info)
 {
 	/* zero status block and ack counter */
 	sb_info->sb_ack = 0;
@@ -1734,7 +1937,8 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt,
 				       struct ecore_sb_info *sb_info,
 				       void *sb_virt_addr,
-				       dma_addr_t sb_phy_addr, u16 sb_id)
+				       dma_addr_t sb_phy_addr,
+				       u16 sb_id)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
 
@@ -1764,7 +1968,7 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 
 	/* Let the igu info reference the client's SB info */
 	if (sb_id != ECORE_SP_SB_ID) {
-		if (IS_PF(p_hwfn->p_dev)) {
+		if (IS_PF(p_dev)) {
 			struct ecore_igu_info *p_info;
 			struct ecore_igu_block *p_block;
 
@@ -1778,24 +1982,51 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 			ecore_vf_set_sb_info(p_hwfn, sb_id, sb_info);
 		}
 	}
+
 #ifdef ECORE_CONFIG_DIRECT_HWFN
 	sb_info->p_hwfn = p_hwfn;
 #endif
-	sb_info->p_dev = p_hwfn->p_dev;
+	sb_info->p_dev = p_dev;
 
 	/* The igu address will hold the absolute address that needs to be
 	 * written to for a specific status block
 	 */
-	if (IS_PF(p_hwfn->p_dev))
+	if (IS_PF(p_dev)) {
+		u32 igu_cmd_gtt_win = ECORE_IS_E4(p_dev) ?
+				      GTT_BAR0_MAP_REG_IGU_CMD :
+				      GTT_BAR0_MAP_REG_IGU_CMD_EXT_E5;
+
 		sb_info->igu_addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
-				     GTT_BAR0_MAP_REG_IGU_CMD +
-				     (sb_info->igu_sb_id << 3);
+				    igu_cmd_gtt_win +
+				    (sb_info->igu_sb_id << 3);
+	} else {
+		u32 igu_bar_offset, int_ack_base;
+
+		if (ECORE_IS_E4(p_dev)) {
+			igu_bar_offset = PXP_VF_BAR0_START_IGU;
+			int_ack_base = IGU_CMD_INT_ACK_BASE;
+		} else {
+			igu_bar_offset = PXP_VF_BAR0_START_IGU2;
+			int_ack_base = IGU_CMD_INT_ACK_E5_BASE -
+				       (IGU_CMD_INT_ACK_RESERVED_UPPER + 1);
+
+			/* Force emulator to use legacy IGU for lower SBs */
+			if (CHIP_REV_IS_EMUL(p_dev)) {
+				int_ack_base = IGU_CMD_INT_ACK_BASE;
+				if (sb_info->igu_sb_id <
+				    MAX_SB_PER_PATH_E5_BC_MODE)
+					igu_bar_offset = PXP_VF_BAR0_START_IGU;
+				else
+					igu_bar_offset =
+						PXP_VF_BAR0_START_IGU2 -
+						PXP_VF_BAR0_IGU_LENGTH;
+			}
+		}
 
-	else
 		sb_info->igu_addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
-		    PXP_VF_BAR0_START_IGU +
-				     ((IGU_CMD_INT_ACK_BASE +
-				       sb_info->igu_sb_id) << 3);
+				    igu_bar_offset +
+				    ((int_ack_base + sb_info->igu_sb_id) << 3);
+	}
 
 	sb_info->flags |= ECORE_SB_INFO_INIT;
 
@@ -1855,6 +2086,7 @@ static void ecore_int_sp_sb_free(struct ecore_hwfn *p_hwfn)
 	}
 
 	OSAL_FREE(p_hwfn->p_dev, p_sb);
+	p_hwfn->p_sp_sb = OSAL_NULL;
 }
 
 static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
@@ -1874,7 +2106,8 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
 
 	/* SB ring  */
 	p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
-					 &p_phys, SB_ALIGNED_SIZE(p_hwfn));
+					 &p_phys,
+					 SB_ALIGNED_SIZE(p_hwfn));
 	if (!p_virt) {
 		DP_NOTICE(p_hwfn, false, "Failed to allocate status block\n");
 		OSAL_FREE(p_hwfn->p_dev, p_sb);
@@ -1892,24 +2125,42 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+enum _ecore_status_t
+ecore_int_dummy_comp_cb(struct ecore_hwfn OSAL_UNUSED * p_hwfn,
+			void OSAL_UNUSED * cookie)
+{
+	/* Empty completion callback to avoid a race */
+	return ECORE_SUCCESS;
+}
+
 enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
 					   ecore_int_comp_cb_t comp_cb,
 					   void *cookie,
-					   u8 *sb_idx, __le16 **p_fw_cons)
+					   u8 *sb_idx,
+					   __le16 **p_fw_cons)
 {
-	struct ecore_sb_sp_info *p_sp_sb = p_hwfn->p_sp_sb;
+	struct ecore_sb_sp_info *p_sp_sb  = p_hwfn->p_sp_sb;
 	enum _ecore_status_t rc = ECORE_NOMEM;
 	u8 pi;
 
 	/* Look for a free index */
 	for (pi = 0; pi < p_sp_sb->pi_info_arr_size; pi++) {
-		if (p_sp_sb->pi_info_arr[pi].comp_cb != OSAL_NULL)
+		if ((p_sp_sb->pi_info_arr[pi].comp_cb != OSAL_NULL) &&
+		    (p_sp_sb->pi_info_arr[pi].comp_cb != p_hwfn->p_dummy_cb))
 			continue;
 
-		p_sp_sb->pi_info_arr[pi].comp_cb = comp_cb;
 		p_sp_sb->pi_info_arr[pi].cookie = cookie;
 		*sb_idx = pi;
 		*p_fw_cons = &p_sp_sb->sb_info.sb_pi_array[pi];
+		**p_fw_cons = 0;
+
+		/* The callback might be called in parallel from
+		 * ecore_int_sp_dpc(), so make sure that the cookie is set and
+		 * that the PI producer is zeroed before setting the callback.
+		 */
+		OSAL_BARRIER(p_hwfn->p_dev);
+		p_sp_sb->pi_info_arr[pi].comp_cb = comp_cb;
+
 		rc = ECORE_SUCCESS;
 		break;
 	}
@@ -1917,15 +2168,19 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, u8 pi)
+enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn,
+					     u8 pi)
 {
 	struct ecore_sb_sp_info *p_sp_sb = p_hwfn->p_sp_sb;
 
 	if (p_sp_sb->pi_info_arr[pi].comp_cb == OSAL_NULL)
 		return ECORE_NOMEM;
 
-	p_sp_sb->pi_info_arr[pi].comp_cb = OSAL_NULL;
-	p_sp_sb->pi_info_arr[pi].cookie = OSAL_NULL;
+	/* To prevent damage in a possible race, install the dummy callback
+	 * and keep the cookie, rather than clearing the table entry to NULL.
+	 */
+	p_sp_sb->pi_info_arr[pi].comp_cb = p_hwfn->p_dummy_cb;
+
 	return ECORE_SUCCESS;
 }
 
@@ -1935,7 +2190,7 @@ u16 ecore_int_get_sp_sb_id(struct ecore_hwfn *p_hwfn)
 }
 
 void ecore_int_igu_enable_int(struct ecore_hwfn *p_hwfn,
-			      struct ecore_ptt *p_ptt,
+			      struct ecore_ptt	*p_ptt,
 			      enum ecore_int_mode int_mode)
 {
 	u32 igu_pf_conf = IGU_PF_CONF_FUNC_EN | IGU_PF_CONF_ATTN_BIT_EN;
@@ -1974,8 +2229,7 @@ static void ecore_int_igu_enable_attn(struct ecore_hwfn *p_hwfn,
 {
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		DP_INFO(p_hwfn,
-			"FPGA - Don't enable Attentions in IGU and MISC\n");
+		DP_INFO(p_hwfn, "FPGA - Don't enable Attentions in IGU and MISC\n");
 		return;
 	}
 #endif
@@ -2004,8 +2258,7 @@ ecore_int_igu_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	if ((int_mode != ECORE_INT_MODE_INTA) || IS_LEAD_HWFN(p_hwfn)) {
 		rc = OSAL_SLOWPATH_IRQ_REQ(p_hwfn);
 		if (rc != ECORE_SUCCESS) {
-			DP_NOTICE(p_hwfn, true,
-				  "Slowpath IRQ request failed\n");
+			DP_NOTICE(p_hwfn, true, "Slowpath IRQ request failed\n");
 			return ECORE_NORESOURCES;
 		}
 		p_hwfn->b_int_requested = true;
@@ -2019,8 +2272,8 @@ ecore_int_igu_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return rc;
 }
 
-void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn,
-			       struct ecore_ptt *p_ptt)
+void ecore_int_igu_disable_int(struct ecore_hwfn	*p_hwfn,
+			       struct ecore_ptt		*p_ptt)
 {
 	p_hwfn->b_int_enabled = 0;
 
@@ -2033,12 +2286,13 @@ void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn,
 #define IGU_CLEANUP_SLEEP_LENGTH		(1000)
 static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt,
-				     u32 igu_sb_id,
+				     u16 igu_sb_id,
 				     bool cleanup_set,
 				     u16 opaque_fid)
 {
-	u32 data = 0, cmd_ctrl = 0, sb_bit, sb_bit_addr, pxp_addr;
+	u32 data = 0, cmd_ctrl = 0, sb_bit, sb_bit_addr;
 	u32 sleep_cnt = IGU_CLEANUP_SLEEP_LENGTH, val;
+	u32 pxp_addr, int_ack_base;
 	u8 type = 0;
 
 	OSAL_BUILD_BUG_ON((IGU_REG_CLEANUP_STATUS_4 -
@@ -2054,7 +2308,9 @@ static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(data, IGU_CLEANUP_COMMAND_TYPE, IGU_COMMAND_TYPE_SET);
 
 	/* Set the control register */
-	pxp_addr = IGU_CMD_INT_ACK_BASE + igu_sb_id;
+	int_ack_base = ECORE_IS_E4(p_hwfn->p_dev) ? IGU_CMD_INT_ACK_BASE
+						  : IGU_CMD_INT_ACK_E5_BASE;
+	pxp_addr = int_ack_base + igu_sb_id;
 	SET_FIELD(cmd_ctrl, IGU_CTRL_REG_PXP_ADDR, pxp_addr);
 	SET_FIELD(cmd_ctrl, IGU_CTRL_REG_FID, opaque_fid);
 	SET_FIELD(cmd_ctrl, IGU_CTRL_REG_TYPE, IGU_CTRL_CMD_TYPE_WR);
@@ -2136,8 +2392,9 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
 }
 
 void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt,
-				bool b_set, bool b_slowpath)
+				bool b_set,
+				bool b_slowpath)
 {
 	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
 	struct ecore_igu_block *p_block;
@@ -2344,10 +2601,11 @@ static void ecore_int_igu_read_cam_block(struct ecore_hwfn *p_hwfn,
 	p_block = &p_hwfn->hw_info.p_igu_info->entry[igu_sb_id];
 
 	/* Fill the block information */
-	p_block->function_id = GET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER);
+	p_block->function_id = GET_FIELD(val,
+					 IGU_MAPPING_LINE_FUNCTION_NUMBER);
 	p_block->is_pf = GET_FIELD(val, IGU_MAPPING_LINE_PF_VALID);
-	p_block->vector_number = GET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER);
-
+	p_block->vector_number = GET_FIELD(val,
+					   IGU_MAPPING_LINE_VECTOR_NUMBER);
 	p_block->igu_sb_id = igu_sb_id;
 }
 
@@ -2507,6 +2765,12 @@ ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		return ECORE_INVAL;
 	}
 
+	if (p_block == OSAL_NULL) {
+		DP_VERBOSE(p_hwfn, (ECORE_MSG_INTR | ECORE_MSG_IOV),
+			   "SB address (p_block) is NULL\n");
+		return ECORE_INVAL;
+	}
+
 	/* At this point, p_block points to the SB we want to relocate */
 	if (b_to_vf) {
 		p_block->status &= ~ECORE_IGU_STATUS_PF;
@@ -2546,6 +2810,11 @@ ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		p_hwfn->hw_info.feat_num[ECORE_VF_L2_QUE]--;
 	}
 
+	/* clean up the PF's SB before assigning it to the VF */
+	if (b_to_vf)
+		ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 1,
+					 p_hwfn->hw_info.opaque_fid);
+
 	/* Update the IGU and CAU with the new configuration */
 	SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER,
 		  p_block->function_id);
@@ -2567,6 +2836,11 @@ ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		   igu_sb_id, p_block->function_id,
 		   p_block->is_pf, p_block->vector_number);
 
+	/* clean up the newly assigned PF's SB */
+	if (p_block->is_pf)
+		ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 1,
+					 p_hwfn->hw_info.opaque_fid);
+
 	return ECORE_SUCCESS;
 }
 
@@ -2620,6 +2894,7 @@ static enum _ecore_status_t ecore_int_sp_dpc_alloc(struct ecore_hwfn *p_hwfn)
 static void ecore_int_sp_dpc_free(struct ecore_hwfn *p_hwfn)
 {
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->sp_dpc);
+	p_hwfn->sp_dpc = OSAL_NULL;
 }
 
 enum _ecore_status_t ecore_int_alloc(struct ecore_hwfn *p_hwfn,
@@ -2754,35 +3029,3 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 
 	return ECORE_SUCCESS;
 }
-
-void ecore_pf_flr_igu_cleanup(struct ecore_hwfn *p_hwfn)
-{
-	struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
-	struct ecore_ptt *p_dpc_ptt = ecore_get_reserved_ptt(p_hwfn,
-							     RESERVED_PTT_DPC);
-	int i;
-
-	/* Do not reorder the following cleanup sequence */
-	/* Ack all attentions */
-	ecore_wr(p_hwfn, p_ptt, IGU_REG_ATTENTION_ACK_BITS, 0xfff);
-
-	/* Clear driver attention */
-	ecore_wr(p_hwfn,  p_dpc_ptt,
-		((p_hwfn->rel_pf_id << 3) + MISC_REG_AEU_GENERAL_ATTN_0), 0);
-
-	/* Clear per-PF IGU registers to restore them as if the IGU
-	 * was reset for this PF
-	 */
-	ecore_wr(p_hwfn, p_ptt, IGU_REG_LEADING_EDGE_LATCH, 0);
-	ecore_wr(p_hwfn, p_ptt, IGU_REG_TRAILING_EDGE_LATCH, 0);
-	ecore_wr(p_hwfn, p_ptt, IGU_REG_PF_CONFIGURATION, 0);
-
-	/* Execute IGU clean up*/
-	ecore_wr(p_hwfn, p_ptt, IGU_REG_PF_FUNCTIONAL_CLEANUP, 1);
-
-	/* Clear Stats */
-	ecore_wr(p_hwfn, p_ptt, IGU_REG_STATISTIC_NUM_OF_INTA_ASSERTED, 0);
-
-	for (i = 0; i < IGU_REG_PBA_STS_PF_SIZE; i++)
-		ecore_wr(p_hwfn, p_ptt, IGU_REG_PBA_STS_PF + i * 4, 0);
-}
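
The dummy-callback scheme above is worth spelling out: ecore_int_register_cb()
publishes the cookie and zeroes the producer before the callback pointer
becomes visible, and ecore_int_unregister_cb() swaps in
ecore_int_dummy_comp_cb() rather than NULL, so a DPC running concurrently
always finds a callable pointer with a valid cookie. A minimal standalone
sketch of the same pattern; slots[], dummy_cb() and barrier() are illustrative
stand-ins for pi_info_arr[], ecore_int_dummy_comp_cb() and OSAL_BARRIER(), not
driver API:

#include <stddef.h>

typedef int (*comp_cb_t)(void *cookie);

struct cb_slot {
	comp_cb_t comp_cb;
	void *cookie;
};

static int dummy_cb(void *cookie)
{
	(void)cookie;
	return 0; /* benign no-op if a DPC races with unregister */
}

#define barrier() __asm__ __volatile__("" : : : "memory")

static struct cb_slot slots[16];

static int register_cb(comp_cb_t cb, void *cookie)
{
	unsigned int pi;

	for (pi = 0; pi < 16; pi++) {
		/* a slot holding the dummy callback is free for reuse */
		if (slots[pi].comp_cb != NULL && slots[pi].comp_cb != dummy_cb)
			continue;

		slots[pi].cookie = cookie;
		barrier(); /* publish the cookie before the callback */
		slots[pi].comp_cb = cb;
		return (int)pi;
	}
	return -1; /* no free slot */
}

static void unregister_cb(unsigned int pi)
{
	/* swap in the dummy, not NULL, so a concurrent caller never
	 * dereferences a cleared function pointer
	 */
	slots[pi].comp_cb = dummy_cb;
}
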
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 924c31429..6d72d5ced 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_INT_H__
 #define __ECORE_INT_H__
 
@@ -24,15 +24,15 @@
 #define ECORE_SB_INVALID_IDX	0xffff
 
 struct ecore_igu_block {
-	u8 status;
+	u8	status;
 #define ECORE_IGU_STATUS_FREE	0x01
 #define ECORE_IGU_STATUS_VALID	0x02
 #define ECORE_IGU_STATUS_PF	0x04
 #define ECORE_IGU_STATUS_DSB	0x08
 
-	u8 vector_number;
-	u8 function_id;
-	u8 is_pf;
+	u8	vector_number;
+	u8	function_id;
+	u8	is_pf;
 
 	/* Index inside IGU [meant for back reference] */
 	u16 igu_sb_id;
@@ -91,12 +91,14 @@ u16 ecore_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
  */
 struct ecore_igu_block *
 ecore_get_igu_free_sb(struct ecore_hwfn *p_hwfn, bool b_is_pf);
+
 /* TODO Names of function may change... */
-void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn,
-				struct ecore_ptt *p_ptt,
-				bool b_set, bool b_slowpath);
+void ecore_int_igu_init_pure_rt(struct ecore_hwfn	*p_hwfn,
+				 struct ecore_ptt	*p_ptt,
+				 bool			b_set,
+				 bool			b_slowpath);
 
-void ecore_int_igu_init_rt(struct ecore_hwfn *p_hwfn);
+void ecore_int_igu_init_rt(struct ecore_hwfn		*p_hwfn);
 
 /**
  * @brief ecore_int_igu_read_cam - Reads the IGU CAM.
@@ -109,11 +111,11 @@ void ecore_int_igu_init_rt(struct ecore_hwfn *p_hwfn);
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt);
+enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn	*p_hwfn,
+					    struct ecore_ptt	*p_ptt);
 
-typedef enum _ecore_status_t (*ecore_int_comp_cb_t) (struct ecore_hwfn *p_hwfn,
-						     void *cookie);
+typedef enum _ecore_status_t(*ecore_int_comp_cb_t)(struct ecore_hwfn *p_hwfn,
+						   void *cookie);
 /**
  * @brief ecore_int_register_cb - Register callback func for
  *      slowhwfn statusblock.
@@ -134,10 +136,11 @@ typedef enum _ecore_status_t (*ecore_int_comp_cb_t) (struct ecore_hwfn *p_hwfn,
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
-					   ecore_int_comp_cb_t comp_cb,
-					   void *cookie,
-					   u8 *sb_idx, __le16 **p_fw_cons);
+enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn    *p_hwfn,
+					   ecore_int_comp_cb_t  comp_cb,
+					   void                 *cookie,
+					   u8                   *sb_idx,
+					   __le16               **p_fw_cons);
 /**
  * @brief ecore_int_unregister_cb - Unregisters callback
  *      function from sp sb.
@@ -149,7 +152,8 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn, u8 pi);
+enum _ecore_status_t ecore_int_unregister_cb(struct ecore_hwfn *p_hwfn,
+					     u8 pi);
 
 /**
  * @brief ecore_int_get_sp_sb_id - Get the slowhwfn sb id.
@@ -187,10 +191,12 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn	*p_hwfn,
  * @param vf_number
  * @param vf_valid
  */
-void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt,
-			   dma_addr_t sb_phys,
-			   u16 igu_sb_id, u16 vf_number, u8 vf_valid);
+void ecore_int_cau_conf_sb(struct ecore_hwfn	*p_hwfn,
+			   struct ecore_ptt	*p_ptt,
+			   dma_addr_t		sb_phys,
+			   u16			igu_sb_id,
+			   u16			vf_number,
+			   u8			vf_valid);
 
 /**
 * @brief ecore_int_alloc
@@ -216,7 +222,8 @@ void ecore_int_free(struct ecore_hwfn *p_hwfn);
 * @param p_hwfn
 * @param p_ptt
 */
-void ecore_int_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+void ecore_int_setup(struct ecore_hwfn	*p_hwfn,
+		     struct ecore_ptt	*p_ptt);
 
 /**
  * @brief - Enable Interrupt & Attention for hw function
@@ -258,6 +265,9 @@ enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_pglueb_rbc_attn_handler(struct ecore_hwfn *p_hwfn,
 						   struct ecore_ptt *p_ptt,
 						   bool is_hw_init);
-void ecore_pf_flr_igu_cleanup(struct ecore_hwfn *p_hwfn);
+
+enum _ecore_status_t
+ecore_int_dummy_comp_cb(struct ecore_hwfn OSAL_UNUSED * p_hwfn,
+			void OSAL_UNUSED * cookie);
 
 #endif /* __ECORE_INT_H__ */
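
Taken together, the declarations above form a small contract for slowpath
clients: register a handler, receive a PI slot index plus a pointer to the
firmware consumer, and later unregister by index. A hedged usage sketch,
assuming only the declarations in this header; my_handler() and its ctx cookie
are invented placeholders:

static enum _ecore_status_t my_handler(struct ecore_hwfn *p_hwfn,
				       void *cookie)
{
	/* consume the event the firmware consumer advanced past */
	(void)p_hwfn;
	(void)cookie;
	return ECORE_SUCCESS;
}

static enum _ecore_status_t attach_handler(struct ecore_hwfn *p_hwfn,
					   void *ctx)
{
	enum _ecore_status_t rc;
	__le16 *p_fw_cons;
	u8 sb_idx;

	rc = ecore_int_register_cb(p_hwfn, my_handler, ctx,
				   &sb_idx, &p_fw_cons);
	if (rc != ECORE_SUCCESS)
		return rc; /* ECORE_NOMEM - no free PI slot */

	/* ... compare *p_fw_cons against the local consumer ... */

	return ecore_int_unregister_cb(p_hwfn, sb_idx);
}
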
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index 553d7e6b3..8f82d74e0 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -1,18 +1,33 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_INT_API_H__
 #define __ECORE_INT_API_H__
 
+#include "common_hsi.h"
+
 #ifndef __EXTRACT__LINUX__
 #define ECORE_SB_IDX		0x0002
 
 #define RX_PI		0
 #define TX_PI(tc)	(RX_PI + 1 + tc)
 
+#define LL2_VF_RX_PI_E4		9
+#define LL2_VF_TX_PI_E4		10
+#define LL2_VF_RX_PI_E5		6
+#define LL2_VF_TX_PI_E5		7
+
+#define LL2_VF_RX_PI(_p_hwfn) \
+	(ECORE_IS_E4((_p_hwfn)->p_dev) ? \
+	 LL2_VF_RX_PI_E4 : LL2_VF_RX_PI_E5)
+
+#define LL2_VF_TX_PI(_p_hwfn) \
+	(ECORE_IS_E4((_p_hwfn)->p_dev) ? \
+	 LL2_VF_TX_PI_E4 : LL2_VF_TX_PI_E5)
+
 #ifndef ECORE_INT_MODE
 #define ECORE_INT_MODE
 enum ecore_int_mode {
@@ -31,7 +46,7 @@ struct ecore_sb_info {
 #define STATUS_BLOCK_PROD_INDEX_MASK	0xFFFFFF
 
 	dma_addr_t sb_phys;
-	u32 sb_ack;		/* Last given ack */
+	u32 sb_ack; /* Last given ack */
 	u16 igu_sb_id;
 	void OSAL_IOMEM *igu_addr;
 	u8 flags;
@@ -52,22 +67,22 @@ struct ecore_sb_info_dbg {
 
 struct ecore_sb_cnt_info {
 	/* Original, current, and free SBs for PF */
-	int orig;
-	int cnt;
-	int free_cnt;
+	u32 orig;
+	u32 cnt;
+	u32 free_cnt;
 
 	/* Original, current and free SBS for child VFs */
-	int iov_orig;
-	int iov_cnt;
-	int free_cnt_iov;
+	u32 iov_orig;
+	u32 iov_cnt;
+	u32 free_cnt_iov;
 };
 
 static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info)
 {
 	u32 prod = 0;
-	u16 rc = 0;
+	u16 rc   = 0;
 
-	/* barrier(); status block is written to by the chip */
+	/* barrier(); */ /* status block is written to by the chip */
 	/* FIXME: need some sort of barrier. */
 	prod = OSAL_LE32_TO_CPU(*sb_info->sb_prod_index) &
 	       STATUS_BLOCK_PROD_INDEX_MASK;
@@ -81,7 +96,6 @@ static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info)
 }
 
 /**
- *
  * @brief This function creates an update command for interrupts that is
  *        written to the IGU.
  *
@@ -98,19 +112,29 @@ static OSAL_INLINE void ecore_sb_ack(struct ecore_sb_info *sb_info,
 				     enum igu_int_cmd int_cmd, u8 upd_flg)
 {
 	struct igu_prod_cons_update igu_ack;
+#ifndef ECORE_CONFIG_DIRECT_HWFN
+	u32 val;
+#endif
 
+	if (sb_info->p_dev->int_mode == ECORE_INT_MODE_POLL)
+		return;
 	OSAL_MEMSET(&igu_ack, 0, sizeof(struct igu_prod_cons_update));
-	igu_ack.sb_id_and_flags =
-	    ((sb_info->sb_ack << IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT) |
-	     (upd_flg << IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT) |
-	     (int_cmd << IGU_PROD_CONS_UPDATE_ENABLE_INT_SHIFT) |
-	     (IGU_SEG_ACCESS_REG << IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_SHIFT));
+	SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_SB_INDEX,
+		  sb_info->sb_ack);
+	SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_UPDATE_FLAG,
+		  upd_flg);
+	SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_ENABLE_INT,
+		  int_cmd);
+	SET_FIELD(igu_ack.sb_id_and_flags, IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS,
+		  IGU_SEG_ACCESS_REG);
+	igu_ack.sb_id_and_flags = OSAL_CPU_TO_LE32(igu_ack.sb_id_and_flags);
 
 #ifdef ECORE_CONFIG_DIRECT_HWFN
 	DIRECT_REG_WR(sb_info->p_hwfn, sb_info->igu_addr,
 		      igu_ack.sb_id_and_flags);
 #else
-	DIRECT_REG_WR(OSAL_NULL, sb_info->igu_addr, igu_ack.sb_id_and_flags);
+	val = OSAL_LE32_TO_CPU(igu_ack.sb_id_and_flags);
+	DIRECT_REG_WR(OSAL_NULL, sb_info->igu_addr, val);
 #endif
 	/* Both segments (interrupts & acks) are written to same place address;
 	 * Need to guarantee all commands will be received (in-order) by HW.
@@ -127,46 +151,29 @@ static OSAL_INLINE void __internal_ram_wr(struct ecore_hwfn *p_hwfn,
 static OSAL_INLINE void __internal_ram_wr(__rte_unused void *p_hwfn,
 					  void OSAL_IOMEM *addr,
 					  int size, u32 *data)
-#endif
-{
-	unsigned int i;
 
-	for (i = 0; i < size / sizeof(*data); i++)
-		DIRECT_REG_WR(p_hwfn, &((u32 OSAL_IOMEM *)addr)[i], data[i]);
-}
-
-#ifdef ECORE_CONFIG_DIRECT_HWFN
-static OSAL_INLINE void __internal_ram_wr_relaxed(struct ecore_hwfn *p_hwfn,
-						  void OSAL_IOMEM * addr,
-						  int size, u32 *data)
-#else
-static OSAL_INLINE void __internal_ram_wr_relaxed(__rte_unused void *p_hwfn,
-						  void OSAL_IOMEM * addr,
-						  int size, u32 *data)
 #endif
 {
 	unsigned int i;
 
 	for (i = 0; i < size / sizeof(*data); i++)
-		DIRECT_REG_WR_RELAXED(p_hwfn, &((u32 OSAL_IOMEM *)addr)[i],
-				      data[i]);
+		DIRECT_REG_WR(p_hwfn, &((u32 OSAL_IOMEM *)addr)[i], data[i]);
 }
 
 #ifdef ECORE_CONFIG_DIRECT_HWFN
 static OSAL_INLINE void internal_ram_wr(struct ecore_hwfn *p_hwfn,
-						void OSAL_IOMEM * addr,
-						int size, u32 *data)
+					void OSAL_IOMEM *addr,
+					int size, u32 *data)
 {
-	__internal_ram_wr_relaxed(p_hwfn, addr, size, data);
+	__internal_ram_wr(p_hwfn, addr, size, data);
 }
 #else
 static OSAL_INLINE void internal_ram_wr(void OSAL_IOMEM *addr,
-						int size, u32 *data)
+					int size, u32 *data)
 {
-	__internal_ram_wr_relaxed(OSAL_NULL, addr, size, data);
+	__internal_ram_wr(OSAL_NULL, addr, size, data);
 }
 #endif
-
 #endif
 
 struct ecore_hwfn;
@@ -196,7 +203,6 @@ void ecore_int_cau_conf_pi(struct ecore_hwfn		*p_hwfn,
 			   u8				timeset);
 
 /**
- *
  * @brief ecore_int_igu_enable_int - enable device interrupts
  *
  * @param p_hwfn
@@ -208,17 +214,15 @@ void ecore_int_igu_enable_int(struct ecore_hwfn *p_hwfn,
 			      enum ecore_int_mode int_mode);
 
 /**
- *
  * @brief ecore_int_igu_disable_int - disable device interrupts
  *
  * @param p_hwfn
  * @param p_ptt
  */
 void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn,
-			       struct ecore_ptt *p_ptt);
+			       struct ecore_ptt	*p_ptt);
 
 /**
- *
  * @brief ecore_int_igu_read_sisr_reg - Reads the single isr multiple dpc
  *        register from igu.
  *
@@ -246,11 +250,12 @@ u64 ecore_int_igu_read_sisr_reg(struct ecore_hwfn *p_hwfn);
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
-				       struct ecore_ptt *p_ptt,
-				       struct ecore_sb_info *sb_info,
-				       void *sb_virt_addr,
-				       dma_addr_t sb_phy_addr, u16 sb_id);
+enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn	*p_hwfn,
+				       struct ecore_ptt		*p_ptt,
+				       struct ecore_sb_info	*sb_info,
+				       void			*sb_virt_addr,
+				       dma_addr_t		sb_phy_addr,
+				       u16			sb_id);
 /**
  * @brief ecore_int_sb_setup - Setup the sb.
  *
@@ -258,8 +263,9 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
  * @param p_ptt
  * @param sb_info	initialized sb_info structure
  */
-void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt, struct ecore_sb_info *sb_info);
+void ecore_int_sb_setup(struct ecore_hwfn	*p_hwfn,
+			struct ecore_ptt	*p_ptt,
+			struct ecore_sb_info	*sb_info);
 
 /**
  * @brief ecore_int_sb_release - releases the sb_info structure.
@@ -274,9 +280,9 @@ void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn,
-					  struct ecore_sb_info *sb_info,
-					  u16 sb_id);
+enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn	*p_hwfn,
+					  struct ecore_sb_info	*sb_info,
+					  u16			sb_id);
 
 /**
  * @brief ecore_int_sp_dpc - To be called when an interrupt is received on the
@@ -296,7 +302,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie);
  *
  * @return
  */
-void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
+void ecore_int_get_num_sbs(struct ecore_hwfn	    *p_hwfn,
 			   struct ecore_sb_cnt_info *p_sb_cnt_info);
 
 /**
@@ -352,12 +358,11 @@ ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 /**
  * @brief - Doorbell Recovery handler.
- *          Run DB_REAL_DEAL doorbell recovery in case of PF overflow
- *          (and flush DORQ if needed), otherwise run DB_REC_ONCE.
+ *          Run doorbell recovery in case of PF overflow (and flush DORQ if
+ *          needed).
  *
  * @param p_hwfn
  * @param p_ptt
  */
-enum _ecore_status_t ecore_db_rec_handler(struct ecore_hwfn *p_hwfn,
-					  struct ecore_ptt *p_ptt);
+void ecore_db_rec_handler(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 #endif
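
The ecore_sb_ack() change above replaces open-coded shift-and-OR arithmetic
with SET_FIELD() against the HSI's *_MASK/*_SHIFT pairs, so each field's width
and position live in a single definition. A simplified sketch of the
convention; the field helpers are a reduced variant and the ACK_* layout below
is illustrative, not the real igu_prod_cons_update definition:

#include <stdint.h>

/* simplified variant of the HSI field helpers */
#define SET_FIELD(value, name, flag) \
	((value) |= (((uint32_t)(flag) & name##_MASK) << name##_SHIFT))
#define GET_FIELD(value, name) \
	(((value) >> name##_SHIFT) & name##_MASK)

/* illustrative field layout */
#define ACK_SB_INDEX_MASK	0xFFFFFFu
#define ACK_SB_INDEX_SHIFT	0
#define ACK_UPDATE_FLAG_MASK	0x1u
#define ACK_UPDATE_FLAG_SHIFT	24

static uint32_t build_ack(uint32_t sb_ack, uint8_t upd_flg)
{
	uint32_t dword = 0;

	SET_FIELD(dword, ACK_SB_INDEX, sb_ack);
	SET_FIELD(dword, ACK_UPDATE_FLAG, upd_flg);
	/* the caller still converts to little-endian before the MMIO write */
	return dword;
}
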
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index bd7c5703f..329b30b7b 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_SRIOV_API_H__
 #define __ECORE_SRIOV_API_H__
 
@@ -12,7 +12,8 @@
 
 #define ECORE_ETH_VF_NUM_MAC_FILTERS 1
 #define ECORE_ETH_VF_NUM_VLAN_FILTERS 2
-#define ECORE_VF_ARRAY_LENGTH (3)
+
+#define ECORE_VF_ARRAY_LENGTH (DIV_ROUND_UP(MAX_NUM_VFS, 64))
 
 #define ECORE_VF_ARRAY_GET_VFID(arr, vfid)	\
 	(((arr)[(vfid) / 64]) & (1ULL << ((vfid) % 64)))
@@ -28,7 +38,10 @@
 #define IS_PF_PDA(p_hwfn)	0 /* @@TBD Michalk */
 
 /* @@@ TBD MichalK - what should this number be*/
-#define ECORE_MAX_VF_CHAINS_PER_PF 16
+#define ECORE_MAX_QUEUE_VF_CHAINS_PER_PF 16
+#define ECORE_MAX_CNQ_VF_CHAINS_PER_PF 16
+#define ECORE_MAX_VF_CHAINS_PER_PF	\
+	(ECORE_MAX_QUEUE_VF_CHAINS_PER_PF + ECORE_MAX_CNQ_VF_CHAINS_PER_PF)
 
 /* vport update extended feature tlvs flags */
 enum ecore_iov_vport_update_flag {
@@ -45,7 +58,7 @@ enum ecore_iov_vport_update_flag {
 
 /* PF to VF STATUS is part of vfpf-channel API
  * and must be forward compatible
-*/
+ */
 enum ecore_iov_pf_to_vf_status {
 	PFVF_STATUS_WAITING = 0,
 	PFVF_STATUS_SUCCESS,
@@ -67,26 +80,18 @@ struct ecore_mcp_link_capabilities;
 #define VFPF_ACQUIRE_OS_ESX (2)
 #define VFPF_ACQUIRE_OS_SOLARIS (3)
 #define VFPF_ACQUIRE_OS_LINUX_USERSPACE (4)
+#define VFPF_ACQUIRE_OS_FREEBSD (5)
 
 struct ecore_vf_acquire_sw_info {
 	u32 driver_version;
 	u8 os_type;
-
-	/* We have several close releases that all use ~same FW with different
-	 * versions [making it incompatible as the versioning scheme is still
-	 * tied directly to FW version], allow to override the checking. Only
-	 * those versions would actually support this feature [so it would not
-	 * break forward compatibility with newer HV drivers that are no longer
-	 * suited].
-	 */
-	bool override_fw_version;
 };
 
 struct ecore_public_vf_info {
 	/* These copies will later be reflected in the bulletin board,
 	 * but this copy should be newer.
 	 */
-	u8 forced_mac[ETH_ALEN];
+	u8 forced_mac[ECORE_ETH_ALEN];
 	u16 forced_vlan;
 
 	/* Trusted VFs can configure promiscuous mode and
@@ -104,14 +109,17 @@ struct ecore_iov_vf_init_params {
 	 * number of Rx/Tx queues.
 	 */
 	/* TODO - remove this limitation */
-	u16 num_queues;
+	u8 num_queues;
+	u8 num_cnqs;
+
+	u8 cnq_offset;
 
 	/* Allow the client to choose which qzones to use for Rx/Tx,
 	 * and which queue_base to use for Tx queues on a per-queue basis.
 	 * Notice values should be relative to the PF resources.
 	 */
-	u16 req_rx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
-	u16 req_tx_queue[ECORE_MAX_VF_CHAINS_PER_PF];
+	u16 req_rx_queue[ECORE_MAX_QUEUE_VF_CHAINS_PER_PF];
+	u16 req_tx_queue[ECORE_MAX_QUEUE_VF_CHAINS_PER_PF];
 
 	u8 vport_id;
 
@@ -175,6 +183,33 @@ struct ecore_hw_sriov_info {
 };
 
 #ifdef CONFIG_ECORE_SRIOV
+/**
+ * @brief ecore_iov_pci_enable_prolog - Called before enabling sriov on pci.
+ *        Reconfigure QM to initialize PQs for the
+ *        max_active_vfs.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param max_active_vfs
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_pci_enable_prolog(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u8 max_active_vfs);
+
+/**
+ * @brief ecore_iov_pci_disable_epilog - Called after disabling sriov on pci.
+ *        Reconfigure QM to delete VF PQs.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_pci_disable_epilog(struct ecore_hwfn *p_hwfn,
+						  struct ecore_ptt *p_ptt);
+
 #ifndef LINUX_REMOVE
 /**
  * @brief mark/clear all VFs before/after an incoming PCIe sriov
@@ -210,10 +245,10 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt,
-					      struct ecore_iov_vf_init_params
-						     *p_params);
+enum _ecore_status_t
+ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 struct ecore_iov_vf_init_params *p_params);
 
 /**
  * @brief ecore_iov_process_mbx_req - process a request received
@@ -322,6 +357,7 @@ void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
  */
 bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn,
 				 u16 rel_vf_id);
+#endif
 
 /**
  * @brief Check if given VF ID @vfid is valid
@@ -704,7 +740,7 @@ bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn,
  *
  * @return - rate in Mbps
  */
-int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid);
+u32 ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid);
 
 /**
  * @brief - Configure min rate for VF's vport.
@@ -716,11 +752,20 @@
  */
 enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
 						     int vfid, u32 rate);
-#endif
 
 /**
- * @brief ecore_pf_configure_vf_queue_coalesce - PF configure coalesce
- *    parameters of VFs for Rx and Tx queue.
+ * @brief Set the default vlan in VF info without configuring FW/HW.
+ *
+ * @param p_hwfn
+ * @param vlan
+ * @param vfid
+ */
+enum _ecore_status_t ecore_iov_set_default_vlan(struct ecore_hwfn *p_hwfn,
+						u16 vlan, int vfid);
+
+/**
+ * @brief ecore_pf_configure_vf_queue_coalesce - PF configures the coalescing
+ *    parameters of VF Rx and Tx queues.
  *    While the API allows setting coalescing per-qid, all queues sharing a SB
  *    should be in same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff]
  *    otherwise configuration would break.
@@ -744,10 +790,9 @@ ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
  * @param p_hwfn
  * @param rel_vf_id
  *
- * @return MAX_NUM_VFS_K2 in case no further active VFs, otherwise index.
+ * @return MAX_NUM_VFS if there are no further active VFs, otherwise the index.
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
-
 void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid,
 				      u16 vxlan_port, u16 geneve_port);
 
@@ -764,11 +809,21 @@ void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn, int vfid,
 void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid,
 				 bool b_is_hw);
 #endif
-#endif /* CONFIG_ECORE_SRIOV */
+
+/**
+ * @brief Run DORQ flush for VFs that may use EDPM, and notify VFs that need
+ *        to perform doorbell overflow recovery. A VF needs to perform recovery
+ *        if it or its PF overflowed.
+ *
+ * @param p_hwfn
+ */
+enum _ecore_status_t ecore_iov_db_rec_handler(struct ecore_hwfn *p_hwfn);
+
+#endif
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
 	for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0);		\
-	     _i < MAX_NUM_VFS_K2;					\
+	     _i < MAX_NUM_VFS;					\
 	     _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1))
 
 #endif
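
With ECORE_VF_ARRAY_LENGTH now derived from MAX_NUM_VFS instead of being
hard-coded to 3, the companion macros treat the array as one bit per VF packed
into u64 words. The arithmetic is easy to sanity-check in isolation; a
runnable sketch, where the MAX_NUM_VFS value is an illustrative stand-in (the
real one comes from the HSI headers):

#include <stdint.h>
#include <stdio.h>

#define MAX_NUM_VFS		240 /* illustrative stand-in */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define VF_ARRAY_LENGTH		DIV_ROUND_UP(MAX_NUM_VFS, 64)

int main(void)
{
	uint64_t vfs[VF_ARRAY_LENGTH] = { 0 };
	int vfid = 70; /* lands in word 1, bit 6 */

	vfs[vfid / 64] |= 1ULL << (vfid % 64);		/* SET */
	printf("set: %d\n", !!(vfs[vfid / 64] & (1ULL << (vfid % 64))));
	vfs[vfid / 64] &= ~(1ULL << (vfid % 64));	/* CLEAR */
	printf("cleared: %d\n", !(vfs[vfid / 64] & (1ULL << (vfid % 64))));
	return 0;
}
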
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index b146faff9..87f6ecacb 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -1,273 +1,304 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __IRO_H__
 #define __IRO_H__
 
 /* Ystorm flow control mode. Use enum fw_flow_ctrl_mode */
-#define YSTORM_FLOW_CONTROL_MODE_OFFSET (IRO[0].base)
-#define YSTORM_FLOW_CONTROL_MODE_SIZE (IRO[0].size)
+#define YSTORM_FLOW_CONTROL_MODE_OFFSET                             (IRO[0].base)
+#define YSTORM_FLOW_CONTROL_MODE_SIZE                               (IRO[0].size)
+/* Pstorm LL2 packet duplication configuration. Use pstorm_pkt_dup_cfg data type. */
+#define PSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id)                    \
+	(IRO[1].base + ((pf_id) * IRO[1].m1))
+#define PSTORM_PKT_DUPLICATION_CFG_SIZE                             (IRO[1].size)
 /* Tstorm port statistics */
-#define TSTORM_PORT_STAT_OFFSET(port_id) (IRO[1].base + ((port_id) * IRO[1].m1))
-#define TSTORM_PORT_STAT_SIZE (IRO[1].size)
+#define TSTORM_PORT_STAT_OFFSET(port_id)                            \
+	(IRO[2].base + ((port_id) * IRO[2].m1))
+#define TSTORM_PORT_STAT_SIZE                                       (IRO[2].size)
 /* Tstorm ll2 port statistics */
-#define TSTORM_LL2_PORT_STAT_OFFSET(port_id) (IRO[2].base + \
-	((port_id) * IRO[2].m1))
-#define TSTORM_LL2_PORT_STAT_SIZE (IRO[2].size)
+#define TSTORM_LL2_PORT_STAT_OFFSET(port_id)                        \
+	(IRO[3].base + ((port_id) * IRO[3].m1))
+#define TSTORM_LL2_PORT_STAT_SIZE                                   (IRO[3].size)
+/* Tstorm LL2 packet duplication configuration. Use tstorm_pkt_dup_cfg data type. */
+#define TSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id)                    \
+	(IRO[4].base + ((pf_id) * IRO[4].m1))
+#define TSTORM_PKT_DUPLICATION_CFG_SIZE                             (IRO[4].size)
 /* Ustorm VF-PF Channel ready flag */
-#define USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) (IRO[3].base + \
-	((vf_id) * IRO[3].m1))
-#define USTORM_VF_PF_CHANNEL_READY_SIZE (IRO[3].size)
+#define USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id)                    \
+	(IRO[5].base + ((vf_id) * IRO[5].m1))
+#define USTORM_VF_PF_CHANNEL_READY_SIZE                             (IRO[5].size)
 /* Ustorm Final flr cleanup ack */
-#define USTORM_FLR_FINAL_ACK_OFFSET(pf_id) (IRO[4].base + ((pf_id) * IRO[4].m1))
-#define USTORM_FLR_FINAL_ACK_SIZE (IRO[4].size)
+#define USTORM_FLR_FINAL_ACK_OFFSET(pf_id)                          \
+	(IRO[6].base + ((pf_id) * IRO[6].m1))
+#define USTORM_FLR_FINAL_ACK_SIZE                                   (IRO[6].size)
 /* Ustorm Event ring consumer */
-#define USTORM_EQE_CONS_OFFSET(pf_id) (IRO[5].base + ((pf_id) * IRO[5].m1))
-#define USTORM_EQE_CONS_SIZE (IRO[5].size)
+#define USTORM_EQE_CONS_OFFSET(pf_id)                               \
+	(IRO[7].base + ((pf_id) * IRO[7].m1))
+#define USTORM_EQE_CONS_SIZE                                        (IRO[7].size)
 /* Ustorm eth queue zone */
-#define USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id) (IRO[6].base + \
-	((queue_zone_id) * IRO[6].m1))
-#define USTORM_ETH_QUEUE_ZONE_SIZE (IRO[6].size)
+#define USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id)                 \
+	(IRO[8].base + ((queue_zone_id) * IRO[8].m1))
+#define USTORM_ETH_QUEUE_ZONE_SIZE                                  (IRO[8].size)
 /* Ustorm Common Queue ring consumer */
-#define USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id) (IRO[7].base + \
-	((queue_zone_id) * IRO[7].m1))
-#define USTORM_COMMON_QUEUE_CONS_SIZE (IRO[7].size)
+#define USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id)              \
+	(IRO[9].base + ((queue_zone_id) * IRO[9].m1))
+#define USTORM_COMMON_QUEUE_CONS_SIZE                               (IRO[9].size)
 /* Xstorm common PQ info */
-#define XSTORM_PQ_INFO_OFFSET(pq_id) (IRO[8].base + ((pq_id) * IRO[8].m1))
-#define XSTORM_PQ_INFO_SIZE (IRO[8].size)
+#define XSTORM_PQ_INFO_OFFSET(pq_id)                                \
+	(IRO[10].base + ((pq_id) * IRO[10].m1))
+#define XSTORM_PQ_INFO_SIZE                                         (IRO[10].size)
 /* Xstorm Integration Test Data */
-#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base)
-#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size)
+#define XSTORM_INTEG_TEST_DATA_OFFSET                               (IRO[11].base)
+#define XSTORM_INTEG_TEST_DATA_SIZE                                 (IRO[11].size)
 /* Ystorm Integration Test Data */
-#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base)
-#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size)
+#define YSTORM_INTEG_TEST_DATA_OFFSET                               (IRO[12].base)
+#define YSTORM_INTEG_TEST_DATA_SIZE                                 (IRO[12].size)
 /* Pstorm Integration Test Data */
-#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base)
-#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size)
+#define PSTORM_INTEG_TEST_DATA_OFFSET                               (IRO[13].base)
+#define PSTORM_INTEG_TEST_DATA_SIZE                                 (IRO[13].size)
 /* Tstorm Integration Test Data */
-#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base)
-#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size)
+#define TSTORM_INTEG_TEST_DATA_OFFSET                               (IRO[14].base)
+#define TSTORM_INTEG_TEST_DATA_SIZE                                 (IRO[14].size)
 /* Mstorm Integration Test Data */
-#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[13].base)
-#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[13].size)
+#define MSTORM_INTEG_TEST_DATA_OFFSET                               (IRO[15].base)
+#define MSTORM_INTEG_TEST_DATA_SIZE                                 (IRO[15].size)
 /* Ustorm Integration Test Data */
-#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[14].base)
-#define USTORM_INTEG_TEST_DATA_SIZE (IRO[14].size)
+#define USTORM_INTEG_TEST_DATA_OFFSET                               (IRO[16].base)
+#define USTORM_INTEG_TEST_DATA_SIZE                                 (IRO[16].size)
 /* Xstorm overlay buffer host address */
-#define XSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[15].base)
-#define XSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[15].size)
+#define XSTORM_OVERLAY_BUF_ADDR_OFFSET                              (IRO[17].base)
+#define XSTORM_OVERLAY_BUF_ADDR_SIZE                                (IRO[17].size)
 /* Ystorm overlay buffer host address */
-#define YSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[16].base)
-#define YSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[16].size)
+#define YSTORM_OVERLAY_BUF_ADDR_OFFSET                              (IRO[18].base)
+#define YSTORM_OVERLAY_BUF_ADDR_SIZE                                (IRO[18].size)
 /* Pstorm overlay buffer host address */
-#define PSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[17].base)
-#define PSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[17].size)
+#define PSTORM_OVERLAY_BUF_ADDR_OFFSET                              (IRO[19].base)
+#define PSTORM_OVERLAY_BUF_ADDR_SIZE                                (IRO[19].size)
 /* Tstorm overlay buffer host address */
-#define TSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[18].base)
-#define TSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[18].size)
+#define TSTORM_OVERLAY_BUF_ADDR_OFFSET                              (IRO[20].base)
+#define TSTORM_OVERLAY_BUF_ADDR_SIZE                                (IRO[20].size)
 /* Mstorm overlay buffer host address */
-#define MSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[19].base)
-#define MSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[19].size)
+#define MSTORM_OVERLAY_BUF_ADDR_OFFSET                              (IRO[21].base)
+#define MSTORM_OVERLAY_BUF_ADDR_SIZE                                (IRO[21].size)
 /* Ustorm overlay buffer host address */
-#define USTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[20].base)
-#define USTORM_OVERLAY_BUF_ADDR_SIZE (IRO[20].size)
+#define USTORM_OVERLAY_BUF_ADDR_OFFSET                              (IRO[22].base)
+#define USTORM_OVERLAY_BUF_ADDR_SIZE                                (IRO[22].size)
 /* Tstorm producers */
-#define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) (IRO[21].base + \
-	((core_rx_queue_id) * IRO[21].m1))
-#define TSTORM_LL2_RX_PRODS_SIZE (IRO[21].size)
+#define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id)                \
+	(IRO[23].base + ((core_rx_queue_id) * IRO[23].m1))
+#define TSTORM_LL2_RX_PRODS_SIZE                                    (IRO[23].size)
 /* Tstorm LightL2 queue statistics */
-#define CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \
-	(IRO[22].base + ((core_rx_queue_id) * IRO[22].m1))
-#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[22].size)
+#define CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id)     \
+	(IRO[24].base + ((core_rx_queue_id) * IRO[24].m1))
+#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE                         (IRO[24].size)
 /* Ustorm LiteL2 queue statistics */
-#define CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \
-	(IRO[23].base + ((core_rx_queue_id) * IRO[23].m1))
-#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[23].size)
+#define CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id)     \
+	(IRO[25].base + ((core_rx_queue_id) * IRO[25].m1))
+#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE                         (IRO[25].size)
 /* Pstorm LiteL2 queue statistics */
-#define CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) \
-	(IRO[24].base + ((core_tx_stats_id) * IRO[24].m1))
-#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[24].size)
+#define CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id)     \
+	(IRO[26].base + ((core_tx_stats_id) * IRO[26].m1))
+#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE                         (IRO[26].size)
 /* Mstorm queue statistics */
-#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[25].base + \
-	((stat_counter_id) * IRO[25].m1))
-#define MSTORM_QUEUE_STAT_SIZE (IRO[25].size)
-/* TPA agregation timeout in us resolution (on ASIC) */
-#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[26].base)
-#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[26].size)
-/* Mstorm ETH VF queues producers offset in RAM. Used in default VF zone size
- * mode.
- */
-#define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id) (IRO[27].base + \
-	((vf_id) * IRO[27].m1) + ((vf_queue_id) * IRO[27].m2))
-#define MSTORM_ETH_VF_PRODS_SIZE (IRO[27].size)
+#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id)                   \
+	(IRO[27].base + ((stat_counter_id) * IRO[27].m1))
+#define MSTORM_QUEUE_STAT_SIZE                                      (IRO[27].size)
+/* TPA aggregation timeout in us resolution (on ASIC) */
+#define MSTORM_TPA_TIMEOUT_US_OFFSET                                (IRO[28].base)
+#define MSTORM_TPA_TIMEOUT_US_SIZE                                  (IRO[28].size)
+/* Mstorm ETH VF queues producers offset in RAM. Used in default VF zone size mode. */
+#define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id)               \
+	(IRO[29].base + ((vf_id) * IRO[29].m1) + ((vf_queue_id) * IRO[29].m2))
+#define MSTORM_ETH_VF_PRODS_SIZE                                    (IRO[29].size)
 /* Mstorm ETH PF queues producers */
-#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) (IRO[28].base + \
-	((queue_id) * IRO[28].m1))
-#define MSTORM_ETH_PF_PRODS_SIZE (IRO[28].size)
+#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id)                        \
+	(IRO[30].base + ((queue_id) * IRO[30].m1))
+#define MSTORM_ETH_PF_PRODS_SIZE                                    (IRO[30].size)
+/* Mstorm ETH PF BD queues producers */
+#define MSTORM_ETH_PF_BD_PROD_OFFSET(queue_id)                      \
+	(IRO[31].base + ((queue_id) * IRO[31].m1))
+#define MSTORM_ETH_PF_BD_PROD_SIZE                                  (IRO[31].size)
+/* Mstorm ETH PF CQ queues producers */
+#define MSTORM_ETH_PF_CQE_PROD_OFFSET(queue_id)                     \
+	(IRO[32].base + ((queue_id) * IRO[32].m1))
+#define MSTORM_ETH_PF_CQE_PROD_SIZE                                 (IRO[32].size)
 /* Mstorm pf statistics */
-#define MSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[29].base + ((pf_id) * IRO[29].m1))
-#define MSTORM_ETH_PF_STAT_SIZE (IRO[29].size)
+#define MSTORM_ETH_PF_STAT_OFFSET(pf_id)                            \
+	(IRO[33].base + ((pf_id) * IRO[33].m1))
+#define MSTORM_ETH_PF_STAT_SIZE                                     (IRO[33].size)
 /* Ustorm queue statistics */
-#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[30].base + \
-	((stat_counter_id) * IRO[30].m1))
-#define USTORM_QUEUE_STAT_SIZE (IRO[30].size)
+#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id)                   \
+	(IRO[34].base + ((stat_counter_id) * IRO[34].m1))
+#define USTORM_QUEUE_STAT_SIZE                                      (IRO[34].size)
 /* Ustorm pf statistics */
-#define USTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[31].base + ((pf_id) * IRO[31].m1))
-#define USTORM_ETH_PF_STAT_SIZE (IRO[31].size)
+#define USTORM_ETH_PF_STAT_OFFSET(pf_id)                            \
+	(IRO[35].base + ((pf_id) * IRO[35].m1))
+#define USTORM_ETH_PF_STAT_SIZE                                     (IRO[35].size)
 /* Pstorm queue statistics */
-#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[32].base + \
-	((stat_counter_id) * IRO[32].m1))
-#define PSTORM_QUEUE_STAT_SIZE (IRO[32].size)
+#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id)                   \
+	(IRO[36].base + ((stat_counter_id) * IRO[36].m1))
+#define PSTORM_QUEUE_STAT_SIZE                                      (IRO[36].size)
 /* Pstorm pf statistics */
-#define PSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[33].base + ((pf_id) * IRO[33].m1))
-#define PSTORM_ETH_PF_STAT_SIZE (IRO[33].size)
+#define PSTORM_ETH_PF_STAT_OFFSET(pf_id)                            \
+	(IRO[37].base + ((pf_id) * IRO[37].m1))
+#define PSTORM_ETH_PF_STAT_SIZE                                     (IRO[37].size)
 /* Control frame's EthType configuration for TX control frame security */
-#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) (IRO[34].base + \
-	((ethType_id) * IRO[34].m1))
-#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[34].size)
+#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(eth_type_id)                \
+	(IRO[38].base + ((eth_type_id) * IRO[38].m1))
+#define PSTORM_CTL_FRAME_ETHTYPE_SIZE                               (IRO[38].size)
 /* Tstorm last parser message */
-#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[35].base)
-#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[35].size)
+#define TSTORM_ETH_PRS_INPUT_OFFSET                                 (IRO[39].base)
+#define TSTORM_ETH_PRS_INPUT_SIZE                                   (IRO[39].size)
 /* Tstorm Eth limit Rx rate */
-#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) (IRO[36].base + ((pf_id) * IRO[36].m1))
-#define ETH_RX_RATE_LIMIT_SIZE (IRO[36].size)
-/* RSS indirection table entry update command per PF offset in TSTORM PF BAR0.
- * Use eth_tstorm_rss_update_data for update.
+#define ETH_RX_RATE_LIMIT_OFFSET(pf_id)                             \
+	(IRO[40].base + ((pf_id) * IRO[40].m1))
+#define ETH_RX_RATE_LIMIT_SIZE                                      (IRO[40].size)
+/* RSS indirection table entry update command per PF offset in TSTORM PF BAR0. Use
+ * eth_tstorm_rss_update_data for update.
  */
-#define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) (IRO[37].base + \
-	((pf_id) * IRO[37].m1))
-#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[37].size)
+#define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id)                         \
+	(IRO[41].base + ((pf_id) * IRO[41].m1))
+#define TSTORM_ETH_RSS_UPDATE_SIZE                                  (IRO[41].size)
 /* Xstorm queue zone */
-#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) (IRO[38].base + \
-	((queue_id) * IRO[38].m1))
-#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[38].size)
+#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id)                      \
+	(IRO[42].base + ((queue_id) * IRO[42].m1))
+#define XSTORM_ETH_QUEUE_ZONE_SIZE                                  (IRO[42].size)
+/* Hairpin Allocations Table with Per-PF Entry of Hairpin base CID, number of allocated CIDs and
+ * Default Vport
+ */
+#define XSTORM_ETH_HAIRPIN_CID_ALLOCATION_OFFSET(pf_id)             \
+	(IRO[43].base + ((pf_id) * IRO[43].m1))
+#define XSTORM_ETH_HAIRPIN_CID_ALLOCATION_SIZE                      (IRO[43].size)
 /* Ystorm cqe producer */
-#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[39].base + \
-	((rss_id) * IRO[39].m1))
-#define YSTORM_TOE_CQ_PROD_SIZE (IRO[39].size)
+#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id)                           \
+	(IRO[44].base + ((rss_id) * IRO[44].m1))
+#define YSTORM_TOE_CQ_PROD_SIZE                                     (IRO[44].size)
 /* Ustorm cqe producer */
-#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[40].base + \
-	((rss_id) * IRO[40].m1))
-#define USTORM_TOE_CQ_PROD_SIZE (IRO[40].size)
+#define USTORM_TOE_CQ_PROD_OFFSET(rss_id)                           \
+	(IRO[45].base + ((rss_id) * IRO[45].m1))
+#define USTORM_TOE_CQ_PROD_SIZE                                     (IRO[45].size)
 /* Ustorm grq producer */
-#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) (IRO[41].base + \
-	((pf_id) * IRO[41].m1))
-#define USTORM_TOE_GRQ_PROD_SIZE (IRO[41].size)
+#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id)                           \
+	(IRO[46].base + ((pf_id) * IRO[46].m1))
+#define USTORM_TOE_GRQ_PROD_SIZE                                    (IRO[46].size)
 /* Tstorm cmdq-cons of given command queue-id */
-#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) (IRO[42].base + \
-	((cmdq_queue_id) * IRO[42].m1))
-#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[42].size)
-/* Tstorm (reflects M-Storm) bdq-external-producer of given function ID,
- * BDqueue-id
- */
-#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \
-	(IRO[43].base + ((storage_func_id) * IRO[43].m1) + \
-	((bdq_id) * IRO[43].m2))
-#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[43].size)
+#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id)                 \
+	(IRO[47].base + ((cmdq_queue_id) * IRO[47].m1))
+#define TSTORM_SCSI_CMDQ_CONS_SIZE                                  (IRO[47].size)
+/* Tstorm (reflects M-Storm) bdq-external-producer of given function ID, BDqueue-id */
+#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id)     \
+	(IRO[48].base + ((storage_func_id) * IRO[48].m1) + ((bdq_id) * IRO[48].m2))
+#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE                               (IRO[48].size)
 /* Mstorm bdq-external-producer of given BDQ resource ID, BDqueue-id */
-#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \
-	(IRO[44].base + ((storage_func_id) * IRO[44].m1) + \
-	((bdq_id) * IRO[44].m2))
-#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[44].size)
+#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id)     \
+	(IRO[49].base + ((storage_func_id) * IRO[49].m1) + ((bdq_id) * IRO[49].m2))
+#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE                               (IRO[49].size)
 /* Tstorm iSCSI RX stats */
-#define TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[45].base + \
-	((storage_func_id) * IRO[45].m1))
-#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[45].size)
+#define TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id)               \
+	(IRO[50].base + ((storage_func_id) * IRO[50].m1))
+#define TSTORM_ISCSI_RX_STATS_SIZE                                  (IRO[50].size)
 /* Mstorm iSCSI RX stats */
-#define MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[46].base + \
-	((storage_func_id) * IRO[46].m1))
-#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[46].size)
+#define MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id)               \
+	(IRO[51].base + ((storage_func_id) * IRO[51].m1))
+#define MSTORM_ISCSI_RX_STATS_SIZE                                  (IRO[51].size)
 /* Ustorm iSCSI RX stats */
-#define USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[47].base + \
-	((storage_func_id) * IRO[47].m1))
-#define USTORM_ISCSI_RX_STATS_SIZE (IRO[47].size)
+#define USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id)               \
+	(IRO[52].base + ((storage_func_id) * IRO[52].m1))
+#define USTORM_ISCSI_RX_STATS_SIZE                                  (IRO[52].size)
 /* Xstorm iSCSI TX stats */
-#define XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[48].base + \
-	((storage_func_id) * IRO[48].m1))
-#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[48].size)
+#define XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id)               \
+	(IRO[53].base + ((storage_func_id) * IRO[53].m1))
+#define XSTORM_ISCSI_TX_STATS_SIZE                                  (IRO[53].size)
 /* Ystorm iSCSI TX stats */
-#define YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[49].base + \
-	((storage_func_id) * IRO[49].m1))
-#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[49].size)
+#define YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id)               \
+	(IRO[54].base + ((storage_func_id) * IRO[54].m1))
+#define YSTORM_ISCSI_TX_STATS_SIZE                                  (IRO[54].size)
 /* Pstorm iSCSI TX stats */
-#define PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[50].base + \
-	((storage_func_id) * IRO[50].m1))
-#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[50].size)
+#define PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id)               \
+	(IRO[55].base + ((storage_func_id) * IRO[55].m1))
+#define PSTORM_ISCSI_TX_STATS_SIZE                                  (IRO[55].size)
 /* Tstorm FCoE RX stats */
-#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) (IRO[51].base + \
-	((pf_id) * IRO[51].m1))
-#define TSTORM_FCOE_RX_STATS_SIZE (IRO[51].size)
+#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id)                          \
+	(IRO[56].base + ((pf_id) * IRO[56].m1))
+#define TSTORM_FCOE_RX_STATS_SIZE                                   (IRO[56].size)
 /* Pstorm FCoE TX stats */
-#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) (IRO[52].base + \
-	((pf_id) * IRO[52].m1))
-#define PSTORM_FCOE_TX_STATS_SIZE (IRO[52].size)
+#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id)                          \
+	(IRO[57].base + ((pf_id) * IRO[57].m1))
+#define PSTORM_FCOE_TX_STATS_SIZE                                   (IRO[57].size)
 /* Pstorm RDMA queue statistics */
-#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[53].base + \
-	((rdma_stat_counter_id) * IRO[53].m1))
-#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[53].size)
+#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id)         \
+	(IRO[58].base + ((rdma_stat_counter_id) * IRO[58].m1))
+#define PSTORM_RDMA_QUEUE_STAT_SIZE                                 (IRO[58].size)
 /* Tstorm RDMA queue statistics */
-#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[54].base + \
-	((rdma_stat_counter_id) * IRO[54].m1))
-#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[54].size)
+#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id)         \
+	(IRO[59].base + ((rdma_stat_counter_id) * IRO[59].m1))
+#define TSTORM_RDMA_QUEUE_STAT_SIZE                                 (IRO[59].size)
 /* Xstorm error level for assert */
-#define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[55].base + \
-	((pf_id) * IRO[55].m1))
-#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[55].size)
+#define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id)                      \
+	(IRO[60].base + ((pf_id) * IRO[60].m1))
+#define XSTORM_RDMA_ASSERT_LEVEL_SIZE                               (IRO[60].size)
 /* Ystorm error level for assert */
-#define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[56].base + \
-	((pf_id) * IRO[56].m1))
-#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[56].size)
+#define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id)                      \
+	(IRO[61].base + ((pf_id) * IRO[61].m1))
+#define YSTORM_RDMA_ASSERT_LEVEL_SIZE                               (IRO[61].size)
 /* Pstorm error level for assert */
-#define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[57].base + \
-	((pf_id) * IRO[57].m1))
-#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[57].size)
+#define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id)                      \
+	(IRO[62].base + ((pf_id) * IRO[62].m1))
+#define PSTORM_RDMA_ASSERT_LEVEL_SIZE                               (IRO[62].size)
 /* Tstorm error level for assert */
-#define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[58].base + \
-	((pf_id) * IRO[58].m1))
-#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[58].size)
+#define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id)                      \
+	(IRO[63].base + ((pf_id) * IRO[63].m1))
+#define TSTORM_RDMA_ASSERT_LEVEL_SIZE                               (IRO[63].size)
 /* Mstorm error level for assert */
-#define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[59].base + \
-	((pf_id) * IRO[59].m1))
-#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[59].size)
+#define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id)                      \
+	(IRO[64].base + ((pf_id) * IRO[64].m1))
+#define MSTORM_RDMA_ASSERT_LEVEL_SIZE                               (IRO[64].size)
 /* Ustorm error level for assert */
-#define USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[60].base + \
-	((pf_id) * IRO[60].m1))
-#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[60].size)
+#define USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id)                      \
+	(IRO[65].base + ((pf_id) * IRO[65].m1))
+#define USTORM_RDMA_ASSERT_LEVEL_SIZE                               (IRO[65].size)
 /* Xstorm iWARP rxmit stats */
-#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[61].base + \
-	((pf_id) * IRO[61].m1))
-#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[61].size)
+#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id)                      \
+	(IRO[66].base + ((pf_id) * IRO[66].m1))
+#define XSTORM_IWARP_RXMIT_STATS_SIZE                               (IRO[66].size)
 /* Tstorm RoCE Event Statistics */
-#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[62].base + \
-	((roce_pf_id) * IRO[62].m1))
-#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[62].size)
+#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id)                  \
+	(IRO[67].base + ((roce_pf_id) * IRO[67].m1))
+#define TSTORM_ROCE_EVENTS_STAT_SIZE                                (IRO[67].size)
 /* DCQCN Received Statistics */
-#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) (IRO[63].base + \
-	((roce_pf_id) * IRO[63].m1))
-#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[63].size)
+#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id)         \
+	(IRO[68].base + ((roce_pf_id) * IRO[68].m1))
+#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE                       (IRO[68].size)
 /* RoCE Error Statistics */
-#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) (IRO[64].base + \
-	((roce_pf_id) * IRO[64].m1))
-#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[64].size)
+#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id)                  \
+	(IRO[69].base + ((roce_pf_id) * IRO[69].m1))
+#define YSTORM_ROCE_ERROR_STATS_SIZE                                (IRO[69].size)
 /* DCQCN Sent Statistics */
-#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) (IRO[65].base + \
-	((roce_pf_id) * IRO[65].m1))
-#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[65].size)
+#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id)             \
+	(IRO[70].base + ((roce_pf_id) * IRO[70].m1))
+#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE                           (IRO[70].size)
 /* RoCE CQEs Statistics */
-#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) (IRO[66].base + \
-	((roce_pf_id) * IRO[66].m1))
-#define USTORM_ROCE_CQE_STATS_SIZE (IRO[66].size)
+#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id)                    \
+	(IRO[71].base + ((roce_pf_id) * IRO[71].m1))
+#define USTORM_ROCE_CQE_STATS_SIZE                                  (IRO[71].size)
 /* Tstorm NVMf per port per producer consumer data */
-#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id, \
-	taskpool_index) (IRO[67].base + ((port_num_id) * IRO[67].m1) + \
-	((taskpool_index) * IRO[67].m2))
-#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_SIZE (IRO[67].size)
+#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id, taskpool_index) \
+	(IRO[72].base + ((port_num_id) * IRO[72].m1) + ((taskpool_index) * IRO[72].m2))
+#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_SIZE            (IRO[72].size)
 /* Ustorm NVMf per port counters */
-#define USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id) (IRO[68].base + \
-	((port_num_id) * IRO[68].m1))
-#define USTORM_NVMF_PORT_COUNTERS_SIZE (IRO[68].size)
+#define USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id)               \
+	(IRO[73].base + ((port_num_id) * IRO[73].m1))
+#define USTORM_NVMF_PORT_COUNTERS_SIZE                              (IRO[73].size)
+/* Xstorm RoCE non EDPM ACK sent */
+#define XSTORM_ROCE_NON_EDPM_ACK_PKT_OFFSET(counter_id)             \
+	(IRO[74].base + ((counter_id) * IRO[74].m1))
+#define XSTORM_ROCE_NON_EDPM_ACK_PKT_SIZE                           (IRO[74].size)
+/* Mstorm RoCE ACK sent */
+#define MSTORM_ROCE_ACK_PKT_OFFSET(counter_id)                      \
+	(IRO[75].base + ((counter_id) * IRO[75].m1))
+#define MSTORM_ROCE_ACK_PKT_SIZE                                    (IRO[75].size)
 
 #endif
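
[Note for reviewers: the *_OFFSET/*_SIZE pairs renumbered above all follow one
pattern -- a base taken from the firmware-generated IRO table, plus each index
scaled by a per-entry multiplier (m1, and additionally m2 for two-dimensional
resources such as the NVMf taskpool producer/consumer), with the element size
kept alongside. The sketch below restates that pattern with a hypothetical
struct layout and made-up table contents; it is illustrative only and is not
the driver's generated definition.]

	#include <stdint.h>

	/* Stand-in for an IRO table entry; field widths here are an
	 * assumption for illustration, not the driver's real type.
	 */
	struct iro {
		uint32_t base;	/* RAM byte offset of element 0 */
		uint16_t m1;	/* stride for the first index */
		uint16_t m2;	/* stride for the second index (0 if unused) */
		uint16_t size;	/* size of one element in bytes */
	};

	/* Hypothetical contents -- NOT the generated values. */
	static const struct iro iro[2] = {
		{ 0x1000, 0x8,   0x0,  0x8  },	/* one-dimensional entry */
		{ 0x2000, 0x288, 0x50, 0x10 },	/* two-dimensional entry */
	};

	/* One-dimensional form, as in USTORM_NVMF_PORT_COUNTERS_OFFSET(). */
	#define EXAMPLE_1D_OFFSET(id) \
		(iro[0].base + ((id) * iro[0].m1))

	/* Two-dimensional form, as in
	 * TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(): the base
	 * plus two independently scaled indices.
	 */
	#define EXAMPLE_2D_OFFSET(id1, id2) \
		(iro[1].base + ((id1) * iro[1].m1) + ((id2) * iro[1].m2))
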
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index dd7349778..562146dd0 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -1,227 +1,336 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __IRO_VALUES_H__
 #define __IRO_VALUES_H__
 
 /* Per-chip offsets in iro_arr in dwords */
 #define E4_IRO_ARR_OFFSET 0
+#define E5_IRO_ARR_OFFSET 228
 
 /* IRO Array */
-static const u32 iro_arr[] = {
+static const u32 iro_arr[] = { /* @DPDK */
 	/* E4 */
-	/* YSTORM_FLOW_CONTROL_MODE_OFFSET */
-	/* offset=0x0, size=0x8 */
+/* YSTORM_FLOW_CONTROL_MODE_OFFSET, offset=0x0, size=0x8 */
 	0x00000000, 0x00000000, 0x00080000,
-	/* TSTORM_PORT_STAT_OFFSET(port_id), */
-	/* offset=0x3288, mult1=0x88, size=0x88 */
+/* PSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id), offset=0x4478, mult1=0x8, size=0x8 */
+	0x00004478, 0x00000008, 0x00080000,
+/* TSTORM_PORT_STAT_OFFSET(port_id), offset=0x3288, mult1=0x88, size=0x88 */
 	0x00003288, 0x00000088, 0x00880000,
-	/* TSTORM_LL2_PORT_STAT_OFFSET(port_id), */
-	/* offset=0x58f0, mult1=0x20, size=0x20 */
-	0x000058f0, 0x00000020, 0x00200000,
-	/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id), */
-	/* offset=0xb00, mult1=0x8, size=0x4 */
+/* TSTORM_LL2_PORT_STAT_OFFSET(port_id), offset=0x5868, mult1=0x20, size=0x20 */
+	0x00005868, 0x00000020, 0x00200000,
+/* TSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id), offset=0x3188, mult1=0x8, size=0x8 */
+	0x00003188, 0x00000008, 0x00080000,
+/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id), offset=0xb00, mult1=0x8, size=0x4 */
 	0x00000b00, 0x00000008, 0x00040000,
-	/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id), */
-	/* offset=0xa80, mult1=0x8, size=0x4 */
+/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id), offset=0xa80, mult1=0x8, size=0x4 */
 	0x00000a80, 0x00000008, 0x00040000,
-	/* USTORM_EQE_CONS_OFFSET(pf_id), */
-	/* offset=0x0, mult1=0x8, size=0x2 */
+/* USTORM_EQE_CONS_OFFSET(pf_id), offset=0x0, mult1=0x8, size=0x2 */
 	0x00000000, 0x00000008, 0x00020000,
-	/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id), */
-	/* offset=0x80, mult1=0x8, size=0x4 */
+/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id), offset=0x80, mult1=0x8, size=0x4 */
 	0x00000080, 0x00000008, 0x00040000,
-	/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id), */
-	/* offset=0x84, mult1=0x8, size=0x2 */
+/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id), offset=0x84, mult1=0x8, size=0x2 */
 	0x00000084, 0x00000008, 0x00020000,
-	/* XSTORM_PQ_INFO_OFFSET(pq_id), */
-	/* offset=0x5718, mult1=0x4, size=0x4 */
-	0x00005718, 0x00000004, 0x00040000,
-	/* XSTORM_INTEG_TEST_DATA_OFFSET, */
-	/* offset=0x4dd0, size=0x78 */
-	0x00004dd0, 0x00000000, 0x00780000,
-	/* YSTORM_INTEG_TEST_DATA_OFFSET */
-	/* offset=0x3e40, size=0x78 */
+/* XSTORM_PQ_INFO_OFFSET(pq_id), offset=0x5798, mult1=0x4, size=0x4 */
+	0x00005798, 0x00000004, 0x00040000,
+/* XSTORM_INTEG_TEST_DATA_OFFSET, offset=0x4e50, size=0x78 */
+	0x00004e50, 0x00000000, 0x00780000,
+/* YSTORM_INTEG_TEST_DATA_OFFSET, offset=0x3e40, size=0x78 */
 	0x00003e40, 0x00000000, 0x00780000,
-	/* PSTORM_INTEG_TEST_DATA_OFFSET, */
-	/* offset=0x4480, size=0x78 */
-	0x00004480, 0x00000000, 0x00780000,
-	/* TSTORM_INTEG_TEST_DATA_OFFSET, */
-	/* offset=0x3210, size=0x78 */
+/* PSTORM_INTEG_TEST_DATA_OFFSET, offset=0x4500, size=0x78 */
+	0x00004500, 0x00000000, 0x00780000,
+/* TSTORM_INTEG_TEST_DATA_OFFSET, offset=0x3210, size=0x78 */
 	0x00003210, 0x00000000, 0x00780000,
-	/* MSTORM_INTEG_TEST_DATA_OFFSET */
-	/* offset=0x3b50, size=0x78 */
+/* MSTORM_INTEG_TEST_DATA_OFFSET, offset=0x3b50, size=0x78 */
 	0x00003b50, 0x00000000, 0x00780000,
-	/* USTORM_INTEG_TEST_DATA_OFFSET */
-	/* offset=0x7f58, size=0x78 */
+/* USTORM_INTEG_TEST_DATA_OFFSET, offset=0x7f58, size=0x78 */
 	0x00007f58, 0x00000000, 0x00780000,
-	/* XSTORM_OVERLAY_BUF_ADDR_OFFSET, */
-	/* offset=0x5f58, size=0x8 */
-	0x00005f58, 0x00000000, 0x00080000,
-	/* YSTORM_OVERLAY_BUF_ADDR_OFFSET */
-	/* offset=0x7100, size=0x8 */
+/* XSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x5fd8, size=0x8 */
+	0x00005fd8, 0x00000000, 0x00080000,
+/* YSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x7100, size=0x8 */
 	0x00007100, 0x00000000, 0x00080000,
-	/* PSTORM_OVERLAY_BUF_ADDR_OFFSET, */
-	/* offset=0xaea0, size=0x8 */
-	0x0000aea0, 0x00000000, 0x00080000,
-	/* TSTORM_OVERLAY_BUF_ADDR_OFFSET, */
-	/* offset=0x4398, size=0x8 */
+/* PSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xaf20, size=0x8 */
+	0x0000af20, 0x00000000, 0x00080000,
+/* TSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x4398, size=0x8 */
 	0x00004398, 0x00000000, 0x00080000,
-	/* MSTORM_OVERLAY_BUF_ADDR_OFFSET */
-	/* offset=0xa5a0, size=0x8 */
+/* MSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xa5a0, size=0x8 */
 	0x0000a5a0, 0x00000000, 0x00080000,
-	/* USTORM_OVERLAY_BUF_ADDR_OFFSET */
-	/* offset=0xbde8, size=0x8 */
-	0x0000bde8, 0x00000000, 0x00080000,
-	/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id), */
-	/* offset=0x20, mult1=0x4, size=0x4 */
-	0x00000020, 0x00000004, 0x00040000,
-	/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), */
-	/* offset=0x56d0, mult1=0x10, size=0x10 */
-	0x000056d0, 0x00000010, 0x00100000,
-	/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), */
-	/* offset=0xc210, mult1=0x30, size=0x30 */
-	0x0000c210, 0x00000030, 0x00300000,
-	/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id), */
-	/* offset=0xb088, mult1=0x38, size=0x38 */
-	0x0000b088, 0x00000038, 0x00380000,
-	/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id), */
-	/* offset=0x3d20, mult1=0x80, size=0x40 */
+/* USTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xc1e8, size=0x8 */
+	0x0000c1e8, 0x00000000, 0x00080000,
+/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id), offset=0x20, mult1=0x8, size=0x8 */
+	0x00000020, 0x00000008, 0x00080000,
+/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), offset=0x5548, mult1=0x10, size=0x10 */
+	0x00005548, 0x00000010, 0x00100000,
+/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), offset=0xc610, mult1=0x30, size=0x30 */
+	0x0000c610, 0x00000030, 0x00300000,
+/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id), offset=0xb108, mult1=0x38, size=0x38 */
+	0x0000b108, 0x00000038, 0x00380000,
+/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x3d20, mult1=0x80, size=0x40 */
 	0x00003d20, 0x00000080, 0x00400000,
-	/* MSTORM_TPA_TIMEOUT_US_OFFSET */
-	/* offset=0xbf60, size=0x4 */
+/* MSTORM_TPA_TIMEOUT_US_OFFSET, offset=0xbf60, size=0x4 */
 	0x0000bf60, 0x00000000, 0x00040000,
-	/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id), */
-	/* offset=0x4560, mult1=0x80, mult2=0x4, size=0x4 */
+/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id), offset=0x4560, mult1=0x80, mult2=0x4, size=0x4 */
 	0x00004560, 0x00040080, 0x00040000,
-	/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id), */
-	/* offset=0x1f8, mult1=0x4, size=0x4 */
+/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id), offset=0x1f8, mult1=0x4, size=0x4 */
 	0x000001f8, 0x00000004, 0x00040000,
-	/* MSTORM_ETH_PF_STAT_OFFSET(pf_id), */
-	/* offset=0x3d60, mult1=0x80, size=0x20 */
+/* MSTORM_ETH_PF_BD_PROD_OFFSET(queue_id), offset=0x1f8, mult1=0x4, size=0x2 */
+	0x000001f8, 0x00000004, 0x00020000,
+/* MSTORM_ETH_PF_CQE_PROD_OFFSET(queue_id), offset=0x1fa, mult1=0x4, size=0x2 */
+	0x000001fa, 0x00000004, 0x00020000,
+/* MSTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x3d60, mult1=0x80, size=0x20 */
 	0x00003d60, 0x00000080, 0x00200000,
-	/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id), */
-	/* offset=0x8960, mult1=0x40, size=0x30 */
-	0x00008960, 0x00000040, 0x00300000,
-	/* USTORM_ETH_PF_STAT_OFFSET(pf_id), */
-	/* offset=0xe840, mult1=0x60, size=0x60 */
-	0x0000e840, 0x00000060, 0x00600000,
-	/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id), */
-	/* offset=0x4618, mult1=0x80, size=0x38 */
-	0x00004618, 0x00000080, 0x00380000,
-	/* PSTORM_ETH_PF_STAT_OFFSET(pf_id), */
-	/* offset=0x10738, mult1=0xc0, size=0xc0 */
-	0x00010738, 0x000000c0, 0x00c00000,
-	/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id), */
-	/* offset=0x1f8, mult1=0x2, size=0x2 */
+/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x8d60, mult1=0x40, size=0x30 */
+	0x00008d60, 0x00000040, 0x00300000,
+/* USTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0xef40, mult1=0x60, size=0x60 */
+	0x0000ef40, 0x00000060, 0x00600000,
+/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x4698, mult1=0x80, size=0x38 */
+	0x00004698, 0x00000080, 0x00380000,
+/* PSTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x107b0, mult1=0xc0, size=0xc0 */
+	0x000107b0, 0x000000c0, 0x00c00000,
+/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id), offset=0x1f8, mult1=0x2, size=0x2 */
 	0x000001f8, 0x00000002, 0x00020000,
-	/* TSTORM_ETH_PRS_INPUT_OFFSET, */
-	/* offset=0xa2a8, size=0x108 */
-	0x0000a2a8, 0x00000000, 0x01080000,
-	/* ETH_RX_RATE_LIMIT_OFFSET(pf_id), */
-	/* offset=0xa3b0, mult1=0x8, size=0x8 */
-	0x0000a3b0, 0x00000008, 0x00080000,
-	/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id), */
-	/* offset=0x1c0, mult1=0x8, size=0x8 */
+/* TSTORM_ETH_PRS_INPUT_OFFSET, offset=0xa220, size=0x108 */
+	0x0000a220, 0x00000000, 0x01080000,
+/* ETH_RX_RATE_LIMIT_OFFSET(pf_id), offset=0xa328, mult1=0x8, size=0x8 */
+	0x0000a328, 0x00000008, 0x00080000,
+/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id), offset=0x1c0, mult1=0x8, size=0x8 */
 	0x000001c0, 0x00000008, 0x00080000,
-	/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id), */
-	/* offset=0x1f8, mult1=0x8, size=0x8 */
+/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id), offset=0x1f8, mult1=0x8, size=0x8 */
 	0x000001f8, 0x00000008, 0x00080000,
-	/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id), */
-	/* offset=0xac0, mult1=0x8, size=0x8 */
+/* XSTORM_ETH_HAIRPIN_CID_ALLOCATION_OFFSET(pf_id), offset=0x8ce0, mult1=0x8, size=0x8 */
+	0x00008ce0, 0x00000008, 0x00080000,
+/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id), offset=0xac0, mult1=0x8, size=0x8 */
 	0x00000ac0, 0x00000008, 0x00080000,
-	/* USTORM_TOE_CQ_PROD_OFFSET(rss_id), */
-	/* offset=0x2578, mult1=0x8, size=0x8 */
+/* USTORM_TOE_CQ_PROD_OFFSET(rss_id), offset=0x2578, mult1=0x8, size=0x8 */
 	0x00002578, 0x00000008, 0x00080000,
-	/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id), */
-	/* offset=0x24f8, mult1=0x8, size=0x8 */
+/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id), offset=0x24f8, mult1=0x8, size=0x8 */
 	0x000024f8, 0x00000008, 0x00080000,
-	/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id), */
-	/* offset=0x280, mult1=0x8, size=0x8 */
+/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id), offset=0x280, mult1=0x8, size=0x8 */
 	0x00000280, 0x00000008, 0x00080000,
-	/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), */
-	/* offset=0x680, mult1=0x18, mult2=0x8, size=0x8 */
+/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), offset=0x680, mult1=0x18, mult2=0x8,
+ * size=0x8
+ */
 	0x00000680, 0x00080018, 0x00080000,
-	/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), */
-	/* offset=0xb78, mult1=0x18, mult2=0x8, size=0x2 */
+/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), offset=0xb78, mult1=0x18, mult2=0x8,
+ * size=0x2
+ */
 	0x00000b78, 0x00080018, 0x00020000,
-	/* TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */
-	/* offset=0xc648, mult1=0x50, size=0x3c */
-	0x0000c648, 0x00000050, 0x003c0000,
-	/* MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */
-	/* offset=0x12038, mult1=0x18, size=0x10 */
+/* TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0xc5c0, mult1=0x50, size=0x3c */
+	0x0000c5c0, 0x00000050, 0x003c0000,
+/* MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x12038, mult1=0x18, size=0x10 */
 	0x00012038, 0x00000018, 0x00100000,
-	/* USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */
-	/* offset=0x11b00, mult1=0x40, size=0x18 */
-	0x00011b00, 0x00000040, 0x00180000,
-	/* XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */
-	/* offset=0x95d0, mult1=0x50, size=0x20 */
-	0x000095d0, 0x00000050, 0x00200000,
-	/* YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */
-	/* offset=0x8b10, mult1=0x40, size=0x28 */
+/* USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x12200, mult1=0x40, size=0x18 */
+	0x00012200, 0x00000040, 0x00180000,
+/* XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x96c8, mult1=0x50, size=0x20 */
+	0x000096c8, 0x00000050, 0x00200000,
+/* YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x8b10, mult1=0x40, size=0x28 */
 	0x00008b10, 0x00000040, 0x00280000,
-	/* PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */
-	/* offset=0x11640, mult1=0x18, size=0x10 */
-	0x00011640, 0x00000018, 0x00100000,
-	/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id), */
-	/* offset=0xc830, mult1=0x48, size=0x38 */
+/* PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x116b8, mult1=0x18, size=0x10 */
+	0x000116b8, 0x00000018, 0x00100000,
+/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id), offset=0xc830, mult1=0x48, size=0x38 */
 	0x0000c830, 0x00000048, 0x00380000,
-	/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id), */
-	/* offset=0x11710, mult1=0x20, size=0x20 */
-	0x00011710, 0x00000020, 0x00200000,
-	/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), */
-	/* offset=0x4650, mult1=0x80, size=0x10 */
-	0x00004650, 0x00000080, 0x00100000,
-	/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), */
-	/* offset=0x3618, mult1=0x10, size=0x10 */
+/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id), offset=0x11788, mult1=0x20, size=0x20 */
+	0x00011788, 0x00000020, 0x00200000,
+/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), offset=0x46d0, mult1=0x80, size=0x10 */
+	0x000046d0, 0x00000080, 0x00100000,
+/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), offset=0x3618, mult1=0x10, size=0x10 */
 	0x00003618, 0x00000010, 0x00100000,
-	/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
-	/* offset=0xa968, mult1=0x8, size=0x1 */
-	0x0000a968, 0x00000008, 0x00010000,
-	/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
-	/* offset=0x97a0, mult1=0x8, size=0x1 */
+/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0xaa60, mult1=0x8, size=0x1 */
+	0x0000aa60, 0x00000008, 0x00010000,
+/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x97a0, mult1=0x8, size=0x1 */
 	0x000097a0, 0x00000008, 0x00010000,
-	/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
-	/* offset=0x11990, mult1=0x8, size=0x1 */
-	0x00011990, 0x00000008, 0x00010000,
-	/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
-	/* offset=0xf020, mult1=0x8, size=0x1 */
-	0x0000f020, 0x00000008, 0x00010000,
-	/* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
-	/* offset=0x12628, mult1=0x8, size=0x1 */
+/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x11a08, mult1=0x8, size=0x1 */
+	0x00011a08, 0x00000008, 0x00010000,
+/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0xea20, mult1=0x8, size=0x1 */
+	0x0000ea20, 0x00000008, 0x00010000,
+/* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x12628, mult1=0x8, size=0x1 */
 	0x00012628, 0x00000008, 0x00010000,
-	/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
-	/* offset=0x11da8, mult1=0x8, size=0x1 */
-	0x00011da8, 0x00000008, 0x00010000,
-	/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id), */
-	/* offset=0xaa78, mult1=0x30, size=0x10 */
-	0x0000aa78, 0x00000030, 0x00100000,
-	/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id), */
-	/* offset=0xd770, mult1=0x28, size=0x28 */
+/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x12930, mult1=0x8, size=0x1 */
+	0x00012930, 0x00000008, 0x00010000,
+/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id), offset=0xaf80, mult1=0x30, size=0x10 */
+	0x0000af80, 0x00000030, 0x00100000,
+/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id), offset=0xd770, mult1=0x28, size=0x28 */
 	0x0000d770, 0x00000028, 0x00280000,
-	/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id), */
-	/* offset=0x9a58, mult1=0x18, size=0x18 */
-	0x00009a58, 0x00000018, 0x00180000,
-	/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id), */
-	/* offset=0x9bd8, mult1=0x8, size=0x8 */
-	0x00009bd8, 0x00000008, 0x00080000,
-	/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id), */
-	/* offset=0x13a18, mult1=0x8, size=0x8 */
-	0x00013a18, 0x00000008, 0x00080000,
-	/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id), */
-	/* offset=0x126e8, mult1=0x18, size=0x18 */
-	0x000126e8, 0x00000018, 0x00180000,
-	/* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index), */
-	/* offset=0xe610, mult1=0x288, mult2=0x50, size=0x10 */
-	0x0000e610, 0x00500288, 0x00100000,
-	/* USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id), */
-	/* offset=0x12970, mult1=0x138, size=0x28 */
-	0x00012970, 0x00000138, 0x00280000,
+/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id), offset=0x9e68, mult1=0x18, size=0x18 */
+	0x00009e68, 0x00000018, 0x00180000,
+/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id), offset=0x9fe8, mult1=0x8, size=0x8 */
+	0x00009fe8, 0x00000008, 0x00080000,
+/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id), offset=0x13ea0, mult1=0x8, size=0x8 */
+	0x00013ea0, 0x00000008, 0x00080000,
+/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id), offset=0x13680, mult1=0x18, size=0x18 */
+	0x00013680, 0x00000018, 0x00180000,
+/* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index), offset=0xe010,
+ * mult1=0x288, mult2=0x50, size=0x10
+ */
+	0x0000e010, 0x00500288, 0x00100000,
+/* USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id), offset=0x13908, mult1=0x138, size=0x28 */
+	0x00013908, 0x00000138, 0x00280000,
+	0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000,
+	/* E5 */
+/* YSTORM_FLOW_CONTROL_MODE_OFFSET, offset=0x0, size=0x8 */
+	0x00000000, 0x00000000, 0x00080000,
+/* PSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id), offset=0x7ff8, mult1=0x8, size=0x8 */
+	0x00007ff8, 0x00000008, 0x00080000,
+/* TSTORM_PORT_STAT_OFFSET(port_id), offset=0x6fc8, mult1=0x88, size=0x88 */
+	0x00006fc8, 0x00000088, 0x00880000,
+/* TSTORM_LL2_PORT_STAT_OFFSET(port_id), offset=0x9b30, mult1=0x20, size=0x20 */
+	0x00009b30, 0x00000020, 0x00200000,
+/* TSTORM_PKT_DUPLICATION_CFG_OFFSET(pf_id), offset=0x6ec8, mult1=0x8, size=0x8 */
+	0x00006ec8, 0x00000008, 0x00080000,
+/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id), offset=0x1100, mult1=0x8, size=0x4 */
+	0x00001100, 0x00000008, 0x00040000,
+/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id), offset=0x1080, mult1=0x8, size=0x4 */
+	0x00001080, 0x00000008, 0x00040000,
+/* USTORM_EQE_CONS_OFFSET(pf_id), offset=0x0, mult1=0x8, size=0x2 */
+	0x00000000, 0x00000008, 0x00020000,
+/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id), offset=0x80, mult1=0x8, size=0x4 */
+	0x00000080, 0x00000008, 0x00040000,
+/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id), offset=0x84, mult1=0x8, size=0x2 */
+	0x00000084, 0x00000008, 0x00020000,
+/* XSTORM_PQ_INFO_OFFSET(pq_id), offset=0x8b98, mult1=0x4, size=0x4 */
+	0x00008b98, 0x00000004, 0x00040000,
+/* XSTORM_INTEG_TEST_DATA_OFFSET, offset=0x80d0, size=0x78 */
+	0x000080d0, 0x00000000, 0x00780000,
+/* YSTORM_INTEG_TEST_DATA_OFFSET, offset=0x6b10, size=0x78 */
+	0x00006b10, 0x00000000, 0x00780000,
+/* PSTORM_INTEG_TEST_DATA_OFFSET, offset=0x8080, size=0x78 */
+	0x00008080, 0x00000000, 0x00780000,
+/* TSTORM_INTEG_TEST_DATA_OFFSET, offset=0x6f50, size=0x78 */
+	0x00006f50, 0x00000000, 0x00780000,
+/* MSTORM_INTEG_TEST_DATA_OFFSET, offset=0x7b10, size=0x78 */
+	0x00007b10, 0x00000000, 0x00780000,
+/* USTORM_INTEG_TEST_DATA_OFFSET, offset=0xaa20, size=0x78 */
+	0x0000aa20, 0x00000000, 0x00780000,
+/* XSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x93d8, size=0x8 */
+	0x000093d8, 0x00000000, 0x00080000,
+/* YSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xc950, size=0x8 */
+	0x0000c950, 0x00000000, 0x00080000,
+/* PSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x102a8, size=0x8 */
+	0x000102a8, 0x00000000, 0x00080000,
+/* TSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x83d8, size=0x8 */
+	0x000083d8, 0x00000000, 0x00080000,
+/* MSTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0xfd68, size=0x8 */
+	0x0000fd68, 0x00000000, 0x00080000,
+/* USTORM_OVERLAY_BUF_ADDR_OFFSET, offset=0x116b0, size=0x8 */
+	0x000116b0, 0x00000000, 0x00080000,
+/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id), offset=0x40, mult1=0x8, size=0x8 */
+	0x00000040, 0x00000008, 0x00080000,
+/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), offset=0x9810, mult1=0x10, size=0x10 */
+	0x00009810, 0x00000010, 0x00100000,
+/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), offset=0x11d60, mult1=0x30, size=0x30 */
+	0x00011d60, 0x00000030, 0x00300000,
+/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id), offset=0x10958, mult1=0x38, size=0x38 */
+	0x00010958, 0x00000038, 0x00380000,
+/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x7ce8, mult1=0x80, size=0x40 */
+	0x00007ce8, 0x00000080, 0x00400000,
+/* MSTORM_TPA_TIMEOUT_US_OFFSET, offset=0x12830, size=0x4 */
+	0x00012830, 0x00000000, 0x00040000,
+/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id), offset=0x1f8, mult1=0x0, mult2=0x0, size=0x4 */
+	0x000001f8, 0x00000000, 0x00040000,
+/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id), offset=0x1f8, mult1=0x10, size=0x4 */
+	0x000001f8, 0x00000010, 0x00040000,
+/* MSTORM_ETH_PF_BD_PROD_OFFSET(queue_id), offset=0x1f8, mult1=0x10, size=0x2 */
+	0x000001f8, 0x00000010, 0x00020000,
+/* MSTORM_ETH_PF_CQE_PROD_OFFSET(queue_id), offset=0x1fa, mult1=0x10, size=0x2 */
+	0x000001fa, 0x00000010, 0x00020000,
+/* MSTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x17310, mult1=0x20, size=0x20 */
+	0x00017310, 0x00000020, 0x00200000,
+/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0xd628, mult1=0x40, size=0x30 */
+	0x0000d628, 0x00000040, 0x00300000,
+/* USTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x15500, mult1=0x60, size=0x60 */
+	0x00015500, 0x00000060, 0x00600000,
+/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id), offset=0x8218, mult1=0x80, size=0x38 */
+	0x00008218, 0x00000080, 0x00380000,
+/* PSTORM_ETH_PF_STAT_OFFSET(pf_id), offset=0x14a68, mult1=0xc0, size=0xc0 */
+	0x00014a68, 0x000000c0, 0x00c00000,
+/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id), offset=0x1f8, mult1=0x2, size=0x2 */
+	0x000001f8, 0x00000002, 0x00020000,
+/* TSTORM_ETH_PRS_INPUT_OFFSET, offset=0xeeb8, size=0x128 */
+	0x0000eeb8, 0x00000000, 0x01280000,
+/* ETH_RX_RATE_LIMIT_OFFSET(pf_id), offset=0xefe0, mult1=0x8, size=0x8 */
+	0x0000efe0, 0x00000008, 0x00080000,
+/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id), offset=0x200, mult1=0x8, size=0x8 */
+	0x00000200, 0x00000008, 0x00080000,
+/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id), offset=0x1f8, mult1=0x8, size=0x8 */
+	0x000001f8, 0x00000008, 0x00080000,
+/* XSTORM_ETH_HAIRPIN_CID_ALLOCATION_OFFSET(pf_id), offset=0x12370, mult1=0x8, size=0x8 */
+	0x00012370, 0x00000008, 0x00080000,
+/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id), offset=0xac0, mult1=0x8, size=0x8 */
+	0x00000ac0, 0x00000008, 0x00080000,
+/* USTORM_TOE_CQ_PROD_OFFSET(rss_id), offset=0x2880, mult1=0x8, size=0x8 */
+	0x00002880, 0x00000008, 0x00080000,
+/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id), offset=0x2800, mult1=0x8, size=0x8 */
+	0x00002800, 0x00000008, 0x00080000,
+/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id), offset=0x340, mult1=0x8, size=0x8 */
+	0x00000340, 0x00000008, 0x00080000,
+/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), offset=0x740, mult1=0x18, mult2=0x8,
+ * size=0x8
+ */
+	0x00000740, 0x00080018, 0x00080000,
+/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), offset=0x2508, mult1=0x18, mult2=0x8,
+ * size=0x2
+ */
+	0x00002508, 0x00080018, 0x00020000,
+/* TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x11298, mult1=0x50, size=0x3c */
+	0x00011298, 0x00000050, 0x003c0000,
+/* MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x19888, mult1=0x18, size=0x10 */
+	0x00019888, 0x00000018, 0x00100000,
+/* USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), offset=0x187c0, mult1=0x40, size=0x18 */
+	0x000187c0, 0x00000040, 0x00180000,
+/* XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x12d58, mult1=0x50, size=0x20 */
+	0x00012d58, 0x00000050, 0x00200000,
+/* YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0xe928, mult1=0x40, size=0x28 */
+	0x0000e928, 0x00000040, 0x00280000,
+/* PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), offset=0x17968, mult1=0x18, size=0x10 */
+	0x00017968, 0x00000018, 0x00100000,
+/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id), offset=0x11508, mult1=0x48, size=0x38 */
+	0x00011508, 0x00000048, 0x00380000,
+/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id), offset=0x17a38, mult1=0x20, size=0x20 */
+	0x00017a38, 0x00000020, 0x00200000,
+/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), offset=0x8250, mult1=0x80, size=0x10 */
+	0x00008250, 0x00000080, 0x00100000,
+/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), offset=0x7358, mult1=0x10, size=0x10 */
+	0x00007358, 0x00000010, 0x00100000,
+/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x14270, mult1=0x8, size=0x1 */
+	0x00014270, 0x00000008, 0x00010000,
+/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0xf7b8, mult1=0x8, size=0x1 */
+	0x0000f7b8, 0x00000008, 0x00010000,
+/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x17cb8, mult1=0x8, size=0x1 */
+	0x00017cb8, 0x00000008, 0x00010000,
+/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x13978, mult1=0x8, size=0x1 */
+	0x00013978, 0x00000008, 0x00010000,
+/* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x19e78, mult1=0x8, size=0x1 */
+	0x00019e78, 0x00000008, 0x00010000,
+/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), offset=0x18ef0, mult1=0x8, size=0x1 */
+	0x00018ef0, 0x00000008, 0x00010000,
+/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id), offset=0x14790, mult1=0x30, size=0x10 */
+	0x00014790, 0x00000030, 0x00100000,
+/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id), offset=0x125c8, mult1=0x28, size=0x28 */
+	0x000125c8, 0x00000028, 0x00280000,
+/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id), offset=0xfe80, mult1=0x18, size=0x18 */
+	0x0000fe80, 0x00000018, 0x00180000,
+/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id), offset=0x10000, mult1=0x8, size=0x8 */
+	0x00010000, 0x00000008, 0x00080000,
+/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id), offset=0x1a150, mult1=0x8, size=0x8 */
+	0x0001a150, 0x00000008, 0x00080000,
+/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id), offset=0x19c40, mult1=0x18, size=0x18 */
+	0x00019c40, 0x00000018, 0x00180000,
+/* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index), offset=0x12e70,
+ * mult1=0x2c8, mult2=0x58, size=0x10
+ */
+	0x00012e70, 0x005802c8, 0x00100000,
+/* USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id), offset=0x19ec8, mult1=0x138, size=0x28 */
+	0x00019ec8, 0x00000138, 0x00280000,
+/* XSTORM_ROCE_NON_EDPM_ACK_PKT_OFFSET(counter_id), offset=0x82e8, mult1=0x8, size=0x8 */
+	0x000082e8, 0x00000008, 0x00080000,
+/* MSTORM_ROCE_ACK_PKT_OFFSET(counter_id), offset=0x7d28, mult1=0x80, size=0x8 */
+	0x00007d28, 0x00000080, 0x00080000,
 };
-/* Data size: 828 bytes */
+/* Data size: 1824 bytes */
 
 
-#endif /* __IRO_VALUES_H__ */
+#endif
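
[Note for reviewers: a consistency check on the regenerated table. Each
iro_arr row packs one IRO entry into three dwords, so E5_IRO_ARR_OFFSET = 228
dwords corresponds to 76 rows (the 74 E4 entries plus the two all-zero padding
rows), and the stated data size of 1824 bytes works out to 152 rows of 12
bytes -- 76 per chip. Judging from the generated comments, dword 0 carries the
base offset, dword 1 packs mult2 in its high half-word and mult1 in its low
half-word, and dword 2 carries the size in its high half-word. The sketch
below decodes one triplet on that assumption; the field placement is inferred
from the values shown here, not taken from the driver headers.]

	#include <stdint.h>
	#include <stdio.h>

	struct iro_entry {
		uint32_t base;	/* dword 0: byte offset into the storm RAM */
		uint16_t m1;	/* dword 1, bits 15:0: first-index stride */
		uint16_t m2;	/* dword 1, bits 31:16: second-index stride */
		uint16_t size;	/* dword 2, bits 31:16: element size, bytes */
	};

	/* Assumed unpacking, inferred from the offset/mult1/mult2/size
	 * comments in the table above.
	 */
	static struct iro_entry unpack_iro(const uint32_t dw[3])
	{
		struct iro_entry e;

		e.base = dw[0];
		e.m1 = (uint16_t)(dw[1] & 0xffff);
		e.m2 = (uint16_t)(dw[1] >> 16);
		e.size = (uint16_t)(dw[2] >> 16);
		return e;
	}

	int main(void)
	{
		/* E4 TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER row from
		 * above: offset=0xe010, mult1=0x288, mult2=0x50, size=0x10
		 */
		const uint32_t dw[3] = { 0x0000e010, 0x00500288, 0x00100000 };
		struct iro_entry e = unpack_iro(dw);

		printf("base=0x%x m1=0x%x m2=0x%x size=0x%x\n",
		       e.base, e.m1, e.m2, e.size);
		return 0;
	}
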
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 01eb69c0e..646cf07cb 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 
 #include "ecore.h"
@@ -20,16 +20,23 @@
 #include "reg_addr.h"
 #include "ecore_int.h"
 #include "ecore_hw.h"
-#include "ecore_vf.h"
 #include "ecore_sriov.h"
+#include "ecore_vf.h"
 #include "ecore_mcp.h"
 
 #define ECORE_MAX_SGES_NUM 16
 #define CRC32_POLY 0x1edc6f41
 
+#ifdef _NTDDK_
+#pragma warning(push)
+#pragma warning(disable : 28167)
+#pragma warning(disable : 28123)
+#pragma warning(disable : 28121)
+#endif
+
 struct ecore_l2_info {
 	u32 queues;
-	u32 **pp_qid_usage;
+	u32 **pp_qid_usage; /* @DPDK */
 
 	/* The lock is meant to synchronize access to the qid usage */
 	osal_mutex_t lock;
@@ -38,7 +45,7 @@ struct ecore_l2_info {
 enum _ecore_status_t ecore_l2_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_l2_info *p_l2_info;
-	u32 **pp_qids;
+	u32 **pp_qids; /* @DPDK */
 	u32 i;
 
 	if (!ECORE_IS_L2_PERSONALITY(p_hwfn))
@@ -152,7 +159,7 @@ static bool ecore_eth_queue_qid_usage_add(struct ecore_hwfn *p_hwfn,
 		goto out;
 	}
 
-	OSAL_SET_BIT(first, p_l2_info->pp_qid_usage[queue_id]);
+	OSAL_NON_ATOMIC_SET_BIT(first, p_l2_info->pp_qid_usage[queue_id]);
 	p_cid->qid_usage_idx = first;
 
 out:
@@ -190,7 +197,7 @@ void ecore_eth_queue_cid_release(struct ecore_hwfn *p_hwfn,
 	OSAL_VFREE(p_hwfn->p_dev, p_cid);
 }
 
-/* The internal is only meant to be directly called by PFs initializeing CIDs
+/* The internal is only meant to be directly called by PFs initializing CIDs
  * for their VFs.
  */
 static struct ecore_queue_cid *
@@ -293,23 +300,26 @@ ecore_eth_queue_to_cid(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
 		       bool b_is_rx,
 		       struct ecore_queue_cid_vf_params *p_vf_params)
 {
+	u8 vfid = p_vf_params ? p_vf_params->vfid : ECORE_CXT_PF_CID;
 	struct ecore_queue_cid *p_cid;
-	u8 vfid = ECORE_CXT_PF_CID;
 	bool b_legacy_vf = false;
 	u32 cid = 0;
 
-	/* In case of legacy VFs, The CID can be derived from the additional
-	 * VF parameters - the VF assumes queue X uses CID X, so we can simply
+	/* In case of legacy VFs, the ICID can be derived from the additional
+	 * VF parameters - the VF assumes queue X uses ICID X, so we can simply
 	 * use the vf_qid for this purpose as well.
+	 * In E5 the ICID should be after the CORE's global range, so an offset
+	 * is required.
 	 */
-	if (p_vf_params) {
-		vfid = p_vf_params->vfid;
-
-		if (p_vf_params->vf_legacy &
-		    ECORE_QCID_LEGACY_VF_CID) {
-			b_legacy_vf = true;
-			cid = p_vf_params->vf_qid;
-		}
+	if (p_vf_params &&
+	    (p_vf_params->vf_legacy & ECORE_QCID_LEGACY_VF_CID)) {
+		b_legacy_vf = true;
+		cid = p_vf_params->vf_qid;
+
+		if (ECORE_IS_E5(p_hwfn->p_dev))
+			cid += ecore_cxt_get_proto_cid_count(p_hwfn,
+							     PROTOCOLID_CORE,
+							     OSAL_NULL);
 	}
 
 	/* Get a unique firmware CID for this queue, in case it's a PF.
@@ -341,9 +351,8 @@ ecore_eth_queue_to_cid_pf(struct ecore_hwfn *p_hwfn, u16 opaque_fid,
 				      OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
-			 struct ecore_sp_vport_start_params *p_params)
+enum _ecore_status_t ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
+					      struct ecore_sp_vport_start_params *p_params)
 {
 	struct vport_start_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
@@ -375,14 +384,14 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 	p_ramrod->mtu = OSAL_CPU_TO_LE16(p_params->mtu);
 	p_ramrod->handle_ptp_pkts = p_params->handle_ptp_pkts;
 	p_ramrod->inner_vlan_removal_en	= p_params->remove_inner_vlan;
-	p_ramrod->drop_ttl0_en = p_params->drop_ttl0;
+	p_ramrod->drop_ttl0_en	= p_params->drop_ttl0;
 	p_ramrod->untagged = p_params->only_untagged;
 	p_ramrod->zero_placement_offset = p_params->zero_placement_offset;
 
 	SET_FIELD(rx_mode, ETH_VPORT_RX_MODE_UCAST_DROP_ALL, 1);
 	SET_FIELD(rx_mode, ETH_VPORT_RX_MODE_MCAST_DROP_ALL, 1);
 
-	p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(rx_mode);
+	p_ramrod->rx_mode.state	= OSAL_CPU_TO_LE16(rx_mode);
 
 	/* Handle requests for strict behavior on transmission errors */
 	SET_FIELD(tx_err, ETH_TX_ERR_VALS_ILLEGAL_VLAN_MODE,
@@ -431,8 +440,13 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 	}
 
 	p_ramrod->tx_switching_en = p_params->tx_switching;
+	p_ramrod->tx_dst_port_mode_config = p_params->tx_dst_port_mode_config;
+	p_ramrod->tx_dst_port_mode = p_params->tx_dst_port_mode;
+	p_ramrod->dst_vport_id = p_params->dst_vport_id;
+	p_ramrod->dst_vport_id_valid = p_params->dst_vport_id_valid;
+
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && ECORE_IS_E4(p_hwfn->p_dev))
 		p_ramrod->tx_switching_en = 0;
 #endif
 
@@ -445,9 +459,8 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
-enum _ecore_status_t
-ecore_sp_vport_start(struct ecore_hwfn *p_hwfn,
-		     struct ecore_sp_vport_start_params *p_params)
+enum _ecore_status_t ecore_sp_vport_start(struct ecore_hwfn *p_hwfn,
+					  struct ecore_sp_vport_start_params *p_params)
 {
 	if (IS_VF(p_hwfn->p_dev))
 		return ecore_vf_pf_vport_start(p_hwfn, p_params->vport_id,
@@ -455,7 +468,8 @@ ecore_sp_vport_start(struct ecore_hwfn *p_hwfn,
 					       p_params->remove_inner_vlan,
 					       p_params->tpa_mode,
 					       p_params->max_buffers_per_cqe,
-					       p_params->only_untagged);
+					       p_params->only_untagged,
+					       p_params->zero_placement_offset);
 
 	return ecore_sp_eth_vport_start(p_hwfn, p_params);
 }
@@ -477,9 +491,10 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 	p_config = &p_ramrod->rss_config;
 
 	OSAL_BUILD_BUG_ON(ECORE_RSS_IND_TABLE_SIZE !=
-			  ETH_RSS_IND_TABLE_ENTRIES_NUM);
+			   ETH_RSS_IND_TABLE_ENTRIES_NUM);
 
-	rc = ecore_fw_rss_eng(p_hwfn, p_rss->rss_eng_id, &p_config->rss_id);
+	rc = ecore_fw_rss_eng(p_hwfn, p_rss->rss_eng_id,
+			      &p_config->rss_id);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -489,7 +504,8 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 	p_config->update_rss_key = p_rss->update_rss_key;
 
 	p_config->rss_mode = p_rss->rss_enable ?
-	    ETH_VPORT_RSS_MODE_REGULAR : ETH_VPORT_RSS_MODE_DISABLED;
+			     ETH_VPORT_RSS_MODE_REGULAR :
+			     ETH_VPORT_RSS_MODE_DISABLED;
 
 	p_config->capabilities = 0;
 
@@ -520,7 +536,8 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 		   p_config->rss_mode,
 		   p_config->update_rss_capabilities,
 		   p_config->capabilities,
-		   p_config->update_rss_ind_table, p_config->update_rss_key);
+		   p_config->update_rss_ind_table,
+		   p_config->update_rss_key);
 
 	table_size = OSAL_MIN_T(int, ECORE_RSS_IND_TABLE_SIZE,
 				1 << p_config->tbl_size);
@@ -550,15 +567,15 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
 			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 7]),
 			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 8]),
 			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 9]),
-			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 10]),
-			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 11]),
-			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 12]),
-			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 13]),
-			  OSAL_LE16_TO_CPU(p_config->indirection_table[i + 14]),
-			 OSAL_LE16_TO_CPU(p_config->indirection_table[i + 15]));
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 10]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 11]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 12]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 13]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 14]),
+			   OSAL_LE16_TO_CPU(p_config->indirection_table[i + 15]));
 	}
 
-	for (i = 0; i < 10; i++)
+	for (i = 0; i <  10; i++)
 		p_config->rss_key[i] = OSAL_CPU_TO_LE32(p_rss->rss_key[i]);
 
 	return rc;
@@ -578,7 +595,7 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 	/* On B0 emulation we cannot enable Tx, since this would cause writes
 	 * to PVFC HW block which isn't implemented in emulation.
 	 */
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && ECORE_IS_E4(p_hwfn->p_dev)) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Non-Asic - prevent Tx mode in vport update\n");
 		p_ramrod->common.update_tx_mode_flg = 0;
@@ -599,7 +616,7 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 
 		SET_FIELD(state, ETH_VPORT_RX_MODE_MCAST_DROP_ALL,
 			  !(!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) ||
-			    !!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED)));
+			   !!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED)));
 
 		SET_FIELD(state, ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL,
 			  (!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) &&
@@ -632,6 +649,10 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
 			  (!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) &&
 			   !!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED)));
 
+		SET_FIELD(state, ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL,
+			  (!!(accept_filter & ECORE_ACCEPT_UCAST_MATCHED) &&
+			   !!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED)));
+
 		SET_FIELD(state, ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL,
 			  !!(accept_filter & ECORE_ACCEPT_BCAST));
 
@@ -697,11 +718,10 @@ ecore_sp_update_mcast_bin(struct vport_update_ramrod_data *p_ramrod,
 	}
 }
 
-enum _ecore_status_t
-ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
-		      struct ecore_sp_vport_update_params *p_params,
-		      enum spq_mode comp_mode,
-		      struct ecore_spq_comp_cb *p_comp_data)
+enum _ecore_status_t ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
+					   struct ecore_sp_vport_update_params *p_params,
+					   enum spq_mode comp_mode,
+					   struct ecore_spq_comp_cb *p_comp_data)
 {
 	struct ecore_rss_params *p_rss_params = p_params->rss_params;
 	struct vport_update_ramrod_data_cmn *p_cmn;
@@ -762,13 +782,11 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
 	p_cmn->silent_vlan_removal_en = p_params->silent_vlan_removal_flg;
 
 	p_ramrod->common.tx_switching_en = p_params->tx_switching_flg;
-
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
 		if (p_ramrod->common.tx_switching_en ||
 		    p_ramrod->common.update_tx_switching_en_flg) {
-			DP_NOTICE(p_hwfn, false,
-				  "FPGA - why are we seeing tx-switching? Overriding it\n");
+			DP_NOTICE(p_hwfn, false, "FPGA - why are we seeing tx-switching? Overriding it\n");
 			p_ramrod->common.tx_switching_en = 0;
 			p_ramrod->common.update_tx_switching_en_flg = 1;
 		}
@@ -781,8 +799,7 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_sp_vport_update_rss(p_hwfn, p_ramrod, p_rss_params);
 	if (rc != ECORE_SUCCESS) {
-		/* Return spq entry which is taken in ecore_sp_init_request()*/
-		ecore_spq_return_entry(p_hwfn, p_ent);
+		ecore_sp_destroy_request(p_hwfn, p_ent);
 		return rc;
 	}
 
@@ -791,11 +808,19 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
 		p_cmn->ctl_frame_ethtype_check_en = p_params->ethtype_chk_en;
 	}
 
+	p_cmn->update_tx_dst_port_mode_flg =
+		p_params->update_tx_dst_port_mode_flag;
+	p_cmn->tx_dst_port_mode_config = p_params->tx_dst_port_mode_config;
+	p_cmn->tx_dst_port_mode = p_params->tx_dst_port_mode;
+	p_cmn->dst_vport_id = p_params->dst_vport_id;
+	p_cmn->dst_vport_id_valid = p_params->dst_vport_id_valid;
+
 	/* Update mcast bins for VFs, PF doesn't use this functionality */
 	ecore_sp_update_mcast_bin(p_ramrod, p_params);
 
 	ecore_sp_update_accept_mode(p_hwfn, p_ramrod, p_params->accept_flags);
 	ecore_sp_vport_update_sge_tpa(p_ramrod, p_params->sge_tpa_params);
+
 	if (p_params->mtu) {
 		p_ramrod->common.update_mtu_flg = 1;
 		p_ramrod->common.mtu = OSAL_CPU_TO_LE16(p_params->mtu);
@@ -805,7 +830,8 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
 }
 
 enum _ecore_status_t ecore_sp_vport_stop(struct ecore_hwfn *p_hwfn,
-					 u16 opaque_fid, u8 vport_id)
+					  u16 opaque_fid,
+					  u8 vport_id)
 {
 	struct vport_stop_ramrod_data *p_ramrod;
 	struct ecore_sp_init_data init_data;
@@ -851,14 +877,13 @@ ecore_vf_pf_accept_flags(struct ecore_hwfn *p_hwfn,
 	return ecore_vf_pf_vport_update(p_hwfn, &s_params);
 }
 
-enum _ecore_status_t
-ecore_filter_accept_cmd(struct ecore_dev *p_dev,
-			u8 vport,
-			struct ecore_filter_accept_flags accept_flags,
-			u8 update_accept_any_vlan,
-			u8 accept_any_vlan,
-			enum spq_mode comp_mode,
-			struct ecore_spq_comp_cb *p_comp_data)
+enum _ecore_status_t ecore_filter_accept_cmd(struct ecore_dev *p_dev,
+					     u8 vport,
+					     struct ecore_filter_accept_flags accept_flags,
+					     u8 update_accept_any_vlan,
+					     u8 accept_any_vlan,
+					     enum spq_mode comp_mode,
+					     struct ecore_spq_comp_cb *p_comp_data)
 {
 	struct ecore_sp_vport_update_params vport_update_params;
 	int i, rc;
@@ -882,6 +907,12 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev,
 			continue;
 		}
 
+		/* Check whether skipping accept flags update is required,
+		 * e.g. when Switchdev is enabled.
+		 */
+		if (ecore_get_skip_accept_flags_update(p_hwfn))
+			continue;
+
 		rc = ecore_sp_vport_update(p_hwfn, &vport_update_params,
 					   comp_mode, p_comp_data);
 		if (rc != ECORE_SUCCESS) {
@@ -916,8 +947,7 @@ ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n",
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "opaque_fid=0x%x, cid=0x%x, rx_qzone=0x%x, vport_id=0x%x, sb_id=0x%x\n",
 		   p_cid->opaque_fid, p_cid->cid, p_cid->abs.queue_id,
 		   p_cid->abs.vport_id, p_cid->sb_igu_id);
 
@@ -954,8 +984,7 @@ ecore_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
 				      ECORE_QCID_LEGACY_VF_RX_PROD);
 
 		p_ramrod->vf_rx_prod_index = p_cid->vf_qid;
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "Queue%s is meant for VF rxq[%02x]\n",
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Queue%s is meant for VF rxq[%02x]\n",
 			   b_legacy_vf ? " [legacy]" : "",
 			   p_cid->vf_qid);
 		p_ramrod->vf_rx_prod_use_zone_a = b_legacy_vf;
@@ -971,7 +1000,7 @@ ecore_eth_pf_rx_queue_start(struct ecore_hwfn *p_hwfn,
 			    dma_addr_t bd_chain_phys_addr,
 			    dma_addr_t cqe_pbl_addr,
 			    u16 cqe_pbl_size,
-			    void OSAL_IOMEM * *pp_prod)
+			    void OSAL_IOMEM **pp_prod)
 {
 	u32 init_prod_val = 0;
 
@@ -1031,14 +1060,13 @@ ecore_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
-			      void **pp_rxq_handles,
-			      u8 num_rxqs,
-			      u8 complete_cqe_flg,
-			      u8 complete_event_flg,
-			      enum spq_mode comp_mode,
-			      struct ecore_spq_comp_cb *p_comp_data)
+enum _ecore_status_t ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
+						   void **pp_rxq_handles,
+						   u8 num_rxqs,
+						   u8 complete_cqe_flg,
+						   u8 complete_event_flg,
+						   enum spq_mode comp_mode,
+						   struct ecore_spq_comp_cb *p_comp_data)
 {
 	struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
@@ -1087,6 +1115,50 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+enum _ecore_status_t
+ecore_sp_eth_rx_queues_set_default(struct ecore_hwfn *p_hwfn,
+				   void *p_rxq_handler,
+				   enum spq_mode comp_mode,
+				   struct ecore_spq_comp_cb *p_comp_data)
+{
+	struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL;
+	struct ecore_spq_entry *p_ent = OSAL_NULL;
+	struct ecore_sp_init_data init_data;
+	struct ecore_queue_cid *p_cid;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	if (IS_VF(p_hwfn->p_dev))
+		return ECORE_NOTIMPL;
+
+	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+	init_data.comp_mode = comp_mode;
+	init_data.p_comp_data = p_comp_data;
+
+	p_cid = (struct ecore_queue_cid *)p_rxq_handler;
+
+	/* Get SPQ entry */
+	init_data.cid = p_cid->cid;
+	init_data.opaque_fid = p_cid->opaque_fid;
+
+	rc = ecore_sp_init_request(p_hwfn, &p_ent,
+				   ETH_RAMROD_RX_QUEUE_UPDATE,
+				   PROTOCOLID_ETH, &init_data);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	p_ramrod = &p_ent->ramrod.rx_queue_update;
+	p_ramrod->vport_id = p_cid->abs.vport_id;
+
+	p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(p_cid->abs.queue_id);
+	p_ramrod->complete_cqe_flg = 0;
+	p_ramrod->complete_event_flg = 1;
+	p_ramrod->set_default_rss_queue = 1;
+
+	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+
+	return rc;
+}
+
 static enum _ecore_status_t
 ecore_eth_pf_rx_queue_stop(struct ecore_hwfn *p_hwfn,
 			   struct ecore_queue_cid *p_cid,
@@ -1191,7 +1263,7 @@ ecore_eth_pf_tx_queue_start(struct ecore_hwfn *p_hwfn,
 			    struct ecore_queue_cid *p_cid,
 			    u8 tc,
 			    dma_addr_t pbl_addr, u16 pbl_size,
-			    void OSAL_IOMEM * *pp_doorbell)
+			    void OSAL_IOMEM **pp_doorbell)
 {
 	enum _ecore_status_t rc;
 	u16 pq_id;
@@ -1288,8 +1360,7 @@ enum _ecore_status_t ecore_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-static enum eth_filter_action
-ecore_filter_action(enum ecore_filter_opcode opcode)
+static enum eth_filter_action ecore_filter_action(enum ecore_filter_opcode opcode)
 {
 	enum eth_filter_action action = MAX_ETH_FILTER_ACTION;
 
@@ -1356,21 +1427,20 @@ ecore_filter_ucast_common(struct ecore_hwfn *p_hwfn,
 	p_ramrod->filter_cmd_hdr.tx = p_filter_cmd->is_tx_filter ? 1 : 0;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && ECORE_IS_E4(p_hwfn->p_dev)) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Non-Asic - prevent Tx filters\n");
 		p_ramrod->filter_cmd_hdr.tx = 0;
 	}
+
 #endif
 
 	switch (p_filter_cmd->opcode) {
 	case ECORE_FILTER_REPLACE:
 	case ECORE_FILTER_MOVE:
-		p_ramrod->filter_cmd_hdr.cmd_cnt = 2;
-		break;
+		p_ramrod->filter_cmd_hdr.cmd_cnt = 2; break;
 	default:
-		p_ramrod->filter_cmd_hdr.cmd_cnt = 1;
-		break;
+		p_ramrod->filter_cmd_hdr.cmd_cnt = 1; break;
 	}
 
 	p_first_filter = &p_ramrod->filter_cmds[0];
@@ -1378,35 +1448,26 @@ ecore_filter_ucast_common(struct ecore_hwfn *p_hwfn,
 
 	switch (p_filter_cmd->type) {
 	case ECORE_FILTER_MAC:
-		p_first_filter->type = ETH_FILTER_TYPE_MAC;
-		break;
+		p_first_filter->type = ETH_FILTER_TYPE_MAC; break;
 	case ECORE_FILTER_VLAN:
-		p_first_filter->type = ETH_FILTER_TYPE_VLAN;
-		break;
+		p_first_filter->type = ETH_FILTER_TYPE_VLAN; break;
 	case ECORE_FILTER_MAC_VLAN:
-		p_first_filter->type = ETH_FILTER_TYPE_PAIR;
-		break;
+		p_first_filter->type = ETH_FILTER_TYPE_PAIR; break;
 	case ECORE_FILTER_INNER_MAC:
-		p_first_filter->type = ETH_FILTER_TYPE_INNER_MAC;
-		break;
+		p_first_filter->type = ETH_FILTER_TYPE_INNER_MAC; break;
 	case ECORE_FILTER_INNER_VLAN:
-		p_first_filter->type = ETH_FILTER_TYPE_INNER_VLAN;
-		break;
+		p_first_filter->type = ETH_FILTER_TYPE_INNER_VLAN; break;
 	case ECORE_FILTER_INNER_PAIR:
-		p_first_filter->type = ETH_FILTER_TYPE_INNER_PAIR;
-		break;
+		p_first_filter->type = ETH_FILTER_TYPE_INNER_PAIR; break;
 	case ECORE_FILTER_INNER_MAC_VNI_PAIR:
 		p_first_filter->type = ETH_FILTER_TYPE_INNER_MAC_VNI_PAIR;
 		break;
 	case ECORE_FILTER_MAC_VNI_PAIR:
-		p_first_filter->type = ETH_FILTER_TYPE_MAC_VNI_PAIR;
-		break;
+		p_first_filter->type = ETH_FILTER_TYPE_MAC_VNI_PAIR; break;
 	case ECORE_FILTER_VNI:
-		p_first_filter->type = ETH_FILTER_TYPE_VNI;
-		break;
+		p_first_filter->type = ETH_FILTER_TYPE_VNI; break;
 	case ECORE_FILTER_UNUSED: /* @DPDK */
-		p_first_filter->type = MAX_ETH_FILTER_TYPE;
-		break;
+		p_first_filter->type = MAX_ETH_FILTER_TYPE; break;
 	}
 
 	if ((p_first_filter->type == ETH_FILTER_TYPE_MAC) ||
@@ -1458,24 +1519,24 @@ ecore_filter_ucast_common(struct ecore_hwfn *p_hwfn,
 			DP_NOTICE(p_hwfn, true,
 				  "%d is not supported yet\n",
 				  p_filter_cmd->opcode);
+			ecore_sp_destroy_request(p_hwfn, *pp_ent);
 			return ECORE_NOTIMPL;
 		}
 
 		p_first_filter->action = action;
 		p_first_filter->vport_id =
-		    (p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ?
-		    vport_to_remove_from : vport_to_add_to;
+			(p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ?
+			vport_to_remove_from : vport_to_add_to;
 	}
 
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t
-ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
-			  u16 opaque_fid,
-			  struct ecore_filter_ucast *p_filter_cmd,
-			  enum spq_mode comp_mode,
-			  struct ecore_spq_comp_cb *p_comp_data)
+enum _ecore_status_t ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
+					       u16 opaque_fid,
+					       struct ecore_filter_ucast *p_filter_cmd,
+					       enum spq_mode comp_mode,
+					       struct ecore_spq_comp_cb *p_comp_data)
 {
 	struct vport_filter_update_ramrod_data *p_ramrod = OSAL_NULL;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
@@ -1494,22 +1555,25 @@ ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn, "Unicast filter ADD command failed %d\n", rc);
+		DP_ERR(p_hwfn,
+		       "Unicast filter ADD command failed %d\n",
+		       rc);
 		return rc;
 	}
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Unicast filter configured, opcode = %s, type = %s, cmd_cnt = %d, is_rx_filter = %d, is_tx_filter = %d\n",
 		   (p_filter_cmd->opcode == ECORE_FILTER_ADD) ? "ADD" :
-		   ((p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ?
-		    "REMOVE" :
-		    ((p_filter_cmd->opcode == ECORE_FILTER_MOVE) ?
-		     "MOVE" : "REPLACE")),
+		    ((p_filter_cmd->opcode == ECORE_FILTER_REMOVE) ?
+		     "REMOVE" :
+		     ((p_filter_cmd->opcode == ECORE_FILTER_MOVE) ?
+		      "MOVE" : "REPLACE")),
 		   (p_filter_cmd->type == ECORE_FILTER_MAC) ? "MAC" :
-		   ((p_filter_cmd->type == ECORE_FILTER_VLAN) ?
-		    "VLAN" : "MAC & VLAN"),
+		    ((p_filter_cmd->type == ECORE_FILTER_VLAN) ?
+		     "VLAN" : "MAC & VLAN"),
 		   p_ramrod->filter_cmd_hdr.cmd_cnt,
-		   p_filter_cmd->is_rx_filter, p_filter_cmd->is_tx_filter);
+		   p_filter_cmd->is_rx_filter,
+		   p_filter_cmd->is_tx_filter);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "vport_to_add_to = %d, vport_to_remove_from = %d, mac = %2x:%2x:%2x:%2x:%2x:%2x, vlan = %d\n",
 		   p_filter_cmd->vport_to_add_to,
@@ -1531,10 +1595,11 @@ ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
 static u32 ecore_calc_crc32c(u8 *crc32_packet, u32 crc32_length, u32 crc32_seed)
 {
 	u32 byte = 0, bit = 0, crc32_result = crc32_seed;
-	u8 msb = 0, current_byte = 0;
+	u8  msb = 0, current_byte = 0;
 
 	if ((crc32_packet == OSAL_NULL) ||
-	    (crc32_length == 0) || ((crc32_length % 8) != 0)) {
+	    (crc32_length == 0) ||
+	    ((crc32_length % 8) != 0)) {
 		return crc32_result;
 	}
 
@@ -1545,7 +1610,7 @@ static u32 ecore_calc_crc32c(u8 *crc32_packet, u32 crc32_length, u32 crc32_seed)
 			crc32_result = crc32_result << 1;
 			if (msb != (0x1 & (current_byte >> bit))) {
 				crc32_result = crc32_result ^ CRC32_POLY;
-				crc32_result |= 1;
+				crc32_result |= 1; /*crc32_result[0] = 1;*/
 			}
 		}
 	}
@@ -1555,7 +1620,7 @@ static u32 ecore_calc_crc32c(u8 *crc32_packet, u32 crc32_length, u32 crc32_seed)
 
 static u32 ecore_crc32c_le(u32 seed, u8 *mac)
 {
-	u32 packet_buf[2] = { 0 };
+	u32 packet_buf[2] = {0};
 
 	OSAL_MEMCPY((u8 *)(&packet_buf[0]), &mac[0], 6);
 	return ecore_calc_crc32c((u8 *)packet_buf, 8, seed);
@@ -1583,12 +1648,10 @@ ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn,
 	int i;
 
 	if (p_filter_cmd->opcode == ECORE_FILTER_ADD)
-		rc = ecore_fw_vport(p_hwfn,
-				    p_filter_cmd->vport_to_add_to,
+		rc = ecore_fw_vport(p_hwfn, p_filter_cmd->vport_to_add_to,
 				    &abs_vport_id);
 	else
-		rc = ecore_fw_vport(p_hwfn,
-				    p_filter_cmd->vport_to_remove_from,
+		rc = ecore_fw_vport(p_hwfn, p_filter_cmd->vport_to_remove_from,
 				    &abs_vport_id);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -1644,19 +1707,18 @@ ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
-		       struct ecore_filter_mcast *p_filter_cmd,
-		       enum spq_mode comp_mode,
-		       struct ecore_spq_comp_cb *p_comp_data)
+enum _ecore_status_t ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
+					    struct ecore_filter_mcast *p_filter_cmd,
+					    enum spq_mode comp_mode,
+					    struct ecore_spq_comp_cb *p_comp_data)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
 
 	/* only ADD and REMOVE operations are supported for multi-cast */
-	if ((p_filter_cmd->opcode != ECORE_FILTER_ADD &&
+	if ((p_filter_cmd->opcode != ECORE_FILTER_ADD  &&
 	     (p_filter_cmd->opcode != ECORE_FILTER_REMOVE)) ||
-	    (p_filter_cmd->num_mc_addrs > ECORE_MAX_MC_ADDRS)) {
+	     (p_filter_cmd->num_mc_addrs > ECORE_MAX_MC_ADDRS)) {
 		return ECORE_INVAL;
 	}
 
@@ -1670,7 +1732,8 @@ ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
 
 		rc = ecore_sp_eth_filter_mcast(p_hwfn,
 					       p_filter_cmd,
-					       comp_mode, p_comp_data);
+					       comp_mode,
+					       p_comp_data);
 		if (rc != ECORE_SUCCESS)
 			break;
 	}
@@ -1678,11 +1741,10 @@ ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_filter_ucast_cmd(struct ecore_dev *p_dev,
-		       struct ecore_filter_ucast *p_filter_cmd,
-		       enum spq_mode comp_mode,
-		       struct ecore_spq_comp_cb *p_comp_data)
+enum _ecore_status_t ecore_filter_ucast_cmd(struct ecore_dev *p_dev,
+					    struct ecore_filter_ucast *p_filter_cmd,
+					    enum spq_mode comp_mode,
+					    struct ecore_spq_comp_cb *p_comp_data)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	int i;
@@ -1696,11 +1758,18 @@ ecore_filter_ucast_cmd(struct ecore_dev *p_dev,
 			continue;
 		}
 
+		/* Check whether skipping unicast filter update is required,
+		 * e.g. when Switchdev mode is enabled.
+		 */
+		if (ecore_get_skip_ucast_filter_update(p_hwfn))
+			continue;
+
 		opaque_fid = p_hwfn->hw_info.opaque_fid;
 		rc = ecore_sp_eth_filter_ucast(p_hwfn,
 					       opaque_fid,
 					       p_filter_cmd,
-					       comp_mode, p_comp_data);
+					       comp_mode,
+					       p_comp_data);
 		if (rc != ECORE_SUCCESS)
 			break;
 	}
@@ -1715,7 +1784,7 @@ static void __ecore_get_vport_pstats_addrlen(struct ecore_hwfn *p_hwfn,
 {
 	if (IS_PF(p_hwfn->p_dev)) {
 		*p_addr = BAR0_MAP_REG_PSDM_RAM +
-		    PSTORM_QUEUE_STAT_OFFSET(statistics_bin);
+			  PSTORM_QUEUE_STAT_OFFSET(statistics_bin);
 		*p_len = sizeof(struct eth_pstorm_per_queue_stat);
 	} else {
 		struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
@@ -1738,7 +1807,8 @@ static void __ecore_get_vport_pstats(struct ecore_hwfn *p_hwfn,
 					 statistics_bin);
 
 	OSAL_MEMSET(&pstats, 0, sizeof(pstats));
-	ecore_memcpy_from(p_hwfn, p_ptt, &pstats, pstats_addr, pstats_len);
+	ecore_memcpy_from(p_hwfn, p_ptt, &pstats,
+			  pstats_addr, pstats_len);
 
 	p_stats->common.tx_ucast_bytes +=
 		HILO_64_REGPAIR(pstats.sent_ucast_bytes);
@@ -1765,7 +1835,7 @@ static void __ecore_get_vport_tstats(struct ecore_hwfn *p_hwfn,
 
 	if (IS_PF(p_hwfn->p_dev)) {
 		tstats_addr = BAR0_MAP_REG_TSDM_RAM +
-		    TSTORM_PORT_STAT_OFFSET(MFW_PORT(p_hwfn));
+			      TSTORM_PORT_STAT_OFFSET(MFW_PORT(p_hwfn));
 		tstats_len = sizeof(struct tstorm_per_port_stat);
 	} else {
 		struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
@@ -1776,7 +1846,8 @@ static void __ecore_get_vport_tstats(struct ecore_hwfn *p_hwfn,
 	}
 
 	OSAL_MEMSET(&tstats, 0, sizeof(tstats));
-	ecore_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, tstats_len);
+	ecore_memcpy_from(p_hwfn, p_ptt, &tstats,
+			  tstats_addr, tstats_len);
 
 	p_stats->common.mftag_filter_discards +=
 		HILO_64_REGPAIR(tstats.mftag_filter_discard);
@@ -1792,7 +1863,7 @@ static void __ecore_get_vport_ustats_addrlen(struct ecore_hwfn *p_hwfn,
 {
 	if (IS_PF(p_hwfn->p_dev)) {
 		*p_addr = BAR0_MAP_REG_USDM_RAM +
-		    USTORM_QUEUE_STAT_OFFSET(statistics_bin);
+			  USTORM_QUEUE_STAT_OFFSET(statistics_bin);
 		*p_len = sizeof(struct eth_ustorm_per_queue_stat);
 	} else {
 		struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
@@ -1815,7 +1886,8 @@ static void __ecore_get_vport_ustats(struct ecore_hwfn *p_hwfn,
 					 statistics_bin);
 
 	OSAL_MEMSET(&ustats, 0, sizeof(ustats));
-	ecore_memcpy_from(p_hwfn, p_ptt, &ustats, ustats_addr, ustats_len);
+	ecore_memcpy_from(p_hwfn, p_ptt, &ustats,
+			  ustats_addr, ustats_len);
 
 	p_stats->common.rx_ucast_bytes +=
 		HILO_64_REGPAIR(ustats.rcv_ucast_bytes);
@@ -1837,7 +1909,7 @@ static void __ecore_get_vport_mstats_addrlen(struct ecore_hwfn *p_hwfn,
 {
 	if (IS_PF(p_hwfn->p_dev)) {
 		*p_addr = BAR0_MAP_REG_MSDM_RAM +
-		    MSTORM_QUEUE_STAT_OFFSET(statistics_bin);
+			  MSTORM_QUEUE_STAT_OFFSET(statistics_bin);
 		*p_len = sizeof(struct eth_mstorm_per_queue_stat);
 	} else {
 		struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
@@ -1860,7 +1932,8 @@ static void __ecore_get_vport_mstats(struct ecore_hwfn *p_hwfn,
 					 statistics_bin);
 
 	OSAL_MEMSET(&mstats, 0, sizeof(mstats));
-	ecore_memcpy_from(p_hwfn, p_ptt, &mstats, mstats_addr, mstats_len);
+	ecore_memcpy_from(p_hwfn, p_ptt, &mstats,
+			  mstats_addr, mstats_len);
 
 	p_stats->common.no_buff_discards +=
 		HILO_64_REGPAIR(mstats.no_buff_discard);
@@ -1981,7 +2054,7 @@ void __ecore_get_vport_stats(struct ecore_hwfn *p_hwfn,
 	__ecore_get_vport_pstats(p_hwfn, p_ptt, stats, statistics_bin);
 
 #ifndef ASIC_ONLY
-	/* Avoid getting PORT stats for emulation. */
+	/* Avoid getting PORT stats for emulation. */
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
 		return;
 #endif
@@ -2001,12 +2074,14 @@ static void _ecore_get_vport_stats(struct ecore_dev *p_dev,
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
 		struct ecore_ptt *p_ptt = IS_PF(p_dev) ?
-		    ecore_ptt_acquire(p_hwfn) : OSAL_NULL;
+					  ecore_ptt_acquire(p_hwfn) : OSAL_NULL;
 		bool b_get_port_stats;
 
 		if (IS_PF(p_dev)) {
+			if (p_dev->stats_bin_id)
+				fw_vport = (u8)p_dev->stats_bin_id;
 			/* The main vport index is relative first */
-			if (ecore_fw_vport(p_hwfn, 0, &fw_vport)) {
+			else if (ecore_fw_vport(p_hwfn, 0, &fw_vport)) {
 				DP_ERR(p_hwfn, "No vport available!\n");
 				goto out;
 			}
@@ -2017,7 +2092,8 @@ static void _ecore_get_vport_stats(struct ecore_dev *p_dev,
 			continue;
 		}
 
-		b_get_port_stats = IS_PF(p_dev) && IS_LEAD_HWFN(p_hwfn);
+		b_get_port_stats = p_dev->stats_bin_id ? 0 :
+					IS_PF(p_dev) && IS_LEAD_HWFN(p_hwfn);
 		__ecore_get_vport_stats(p_hwfn, p_ptt, stats, fw_vport,
 					b_get_port_stats);
 
@@ -2025,6 +2101,8 @@ static void _ecore_get_vport_stats(struct ecore_dev *p_dev,
 		if (IS_PF(p_dev) && p_ptt)
 			ecore_ptt_release(p_hwfn, p_ptt);
 	}
+
+	p_dev->stats_bin_id = 0;
 }
 
 void ecore_get_vport_stats(struct ecore_dev *p_dev,
@@ -2058,7 +2136,7 @@ void ecore_reset_vport_stats(struct ecore_dev *p_dev)
 		struct eth_ustorm_per_queue_stat ustats;
 		struct eth_pstorm_per_queue_stat pstats;
 		struct ecore_ptt *p_ptt = IS_PF(p_dev) ?
-		    ecore_ptt_acquire(p_hwfn) : OSAL_NULL;
+					  ecore_ptt_acquire(p_hwfn) : OSAL_NULL;
 		u32 addr = 0, len = 0;
 
 		if (IS_PF(p_dev) && !p_ptt) {
@@ -2099,13 +2177,10 @@ ecore_arfs_mode_to_hsi(enum ecore_filter_config_mode mode)
 {
 	if (mode == ECORE_FILTER_CONFIG_MODE_5_TUPLE)
 		return GFT_PROFILE_TYPE_4_TUPLE;
-
 	if (mode == ECORE_FILTER_CONFIG_MODE_IP_DEST)
 		return GFT_PROFILE_TYPE_IP_DST_ADDR;
-
 	if (mode == ECORE_FILTER_CONFIG_MODE_TUNN_TYPE)
 		return GFT_PROFILE_TYPE_TUNNEL_TYPE;
-
 	if (mode == ECORE_FILTER_CONFIG_MODE_IP_SRC)
 		return GFT_PROFILE_TYPE_IP_SRC_ADDR;
 
@@ -2116,7 +2191,7 @@ void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       struct ecore_arfs_config_params *p_cfg_params)
 {
-	if (OSAL_GET_BIT(ECORE_MF_DISABLE_ARFS, &p_hwfn->p_dev->mf_bits))
+	if (OSAL_TEST_BIT(ECORE_MF_DISABLE_ARFS, &p_hwfn->p_dev->mf_bits))
 		return;
 
 	if (p_cfg_params->mode != ECORE_FILTER_CONFIG_MODE_DISABLE) {
@@ -2126,17 +2201,18 @@ void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
 				 p_cfg_params->ipv4,
 				 p_cfg_params->ipv6,
 				 ecore_arfs_mode_to_hsi(p_cfg_params->mode));
+
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-			   "tcp = %s, udp = %s, ipv4 = %s, ipv6 =%s\n",
+			   "Configured Filtering: tcp = %s, udp = %s, ipv4 = %s, ipv6 = %s, mode = %08x\n",
 			   p_cfg_params->tcp ? "Enable" : "Disable",
 			   p_cfg_params->udp ? "Enable" : "Disable",
 			   p_cfg_params->ipv4 ? "Enable" : "Disable",
-			   p_cfg_params->ipv6 ? "Enable" : "Disable");
+			   p_cfg_params->ipv6 ? "Enable" : "Disable",
+			   (u32)p_cfg_params->mode);
 	} else {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Disabled Filtering\n");
 		ecore_gft_disable(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
 	}
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Configured ARFS mode : %d\n",
-		   (int)p_cfg_params->mode);
 }
 
 enum _ecore_status_t
@@ -2181,13 +2257,13 @@ ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
 		rc = ecore_fw_vport(p_hwfn, p_params->vport_id,
 				    &abs_vport_id);
 		if (rc)
-			return rc;
+			goto err;
 
 		if (p_params->qid != ECORE_RFS_NTUPLE_QID_RSS) {
 			rc = ecore_fw_l2_queue(p_hwfn, p_params->qid,
 					       &abs_rx_q_id);
 			if (rc)
-				return rc;
+				goto err;
 
 			p_ramrod->rx_qid_valid = 1;
 			p_ramrod->rx_qid = OSAL_CPU_TO_LE16(abs_rx_q_id);
@@ -2198,17 +2274,20 @@ ecore_configure_rfs_ntuple_filter(struct ecore_hwfn *p_hwfn,
 
 	p_ramrod->flow_id_valid = 0;
 	p_ramrod->flow_id = 0;
-
 	p_ramrod->filter_action = p_params->b_is_add ? GFT_ADD_FILTER
 						     : GFT_DELETE_FILTER;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "V[%0x], Q[%04x] - %s filter from 0x%lx [length %04xb]\n",
+		   "V[%0x], Q[%04x] - %s filter from 0x%" PRIx64 " [length %04xb]\n",
 		   abs_vport_id, abs_rx_q_id,
 		   p_params->b_is_add ? "Adding" : "Removing",
-		   (unsigned long)p_params->addr, p_params->length);
+		   (u64)p_params->addr, p_params->length);
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+
+err:
+	ecore_sp_destroy_request(p_hwfn, p_ent);
+	return rc;
 }
 
 enum _ecore_status_t ecore_get_rxq_coalesce(struct ecore_hwfn *p_hwfn,
@@ -2323,16 +2402,22 @@ ecore_eth_tx_queue_maxrate(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt,
 			   struct ecore_queue_cid *p_cid, u32 rate)
 {
-	u16 rl_id;
-	u8 vport;
+	u16 qid, vport_id, rl_id;
 
-	vport = (u8)ecore_get_qm_vport_idx_rl(p_hwfn, p_cid->rel.queue_id);
+	if (!IS_ECORE_PACING(p_hwfn)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
+			   "Packet pacing is not enabled\n");
+		return ECORE_INVAL;
+	}
+
+	qid = p_cid->rel.queue_id;
+	vport_id = ecore_get_pq_vport_id_from_rl(p_hwfn, qid);
+	rl_id = ecore_get_pq_rl_id_from_rl(p_hwfn, qid);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-		   "About to rate limit qm vport %d for queue %d with rate %d\n",
-		   vport, p_cid->rel.queue_id, rate);
+		   "About to rate limit qm vport %d (rl_id %d) for queue %d with rate %d\n",
+		   vport_id, rl_id, qid, rate);
 
-	rl_id = vport; /* The "rl_id" is set as the "vport_id" */
 	return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, rate);
 }
 
@@ -2357,7 +2442,8 @@ ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	addr = (u8 *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
+	addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
+	       GTT_BAR0_MAP_REG_TSDM_RAM +
 	       TSTORM_ETH_RSS_UPDATE_OFFSET(p_hwfn->rel_pf_id);
 
 	*(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr);
@@ -2386,3 +2472,62 @@ ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
 
 	return ECORE_SUCCESS;
 }
+
+enum _ecore_status_t
+ecore_set_rx_config(struct ecore_hwfn *p_hwfn, struct ecore_rx_config *p_conf)
+{
+	struct ecore_rx_config *rx_conf = &p_hwfn->rx_conf;
+	u8 abs_dst_vport_id = 0;
+	enum _ecore_status_t rc;
+
+	if (OSAL_TEST_BIT(ECORE_RX_CONF_SET_LB_VPORT, &p_conf->flags)) {
+		rc = ecore_fw_vport(p_hwfn, p_conf->loopback_dst_vport_id,
+				    &abs_dst_vport_id);
+		if (rc)
+			return rc;
+	}
+
+	rx_conf->flags = p_conf->flags;
+	rx_conf->loopback_dst_vport_id = abs_dst_vport_id;
+
+	return ECORE_SUCCESS;
+}
+
+int ecore_get_skip_accept_flags_update(struct ecore_hwfn *p_hwfn)
+{
+	return OSAL_TEST_BIT(ECORE_RX_CONF_SKIP_ACCEPT_FLAGS_UPDATE,
+			     &p_hwfn->rx_conf.flags);
+}
+
+int ecore_get_skip_ucast_filter_update(struct ecore_hwfn *p_hwfn)
+{
+	return OSAL_TEST_BIT(ECORE_RX_CONF_SKIP_UCAST_FILTER_UPDATE,
+			     &p_hwfn->rx_conf.flags);
+}
+
+int ecore_get_loopback_vport(struct ecore_hwfn *p_hwfn, u8 *abs_vport_id)
+{
+	*abs_vport_id = p_hwfn->rx_conf.loopback_dst_vport_id;
+
+	return OSAL_TEST_BIT(ECORE_RX_CONF_SET_LB_VPORT,
+			     &p_hwfn->rx_conf.flags);
+}
+
+enum _ecore_status_t ecore_set_vf_stats_bin_id(struct ecore_dev *p_dev,
+					       u16 vf_id)
+{
+	struct ecore_vf_info *p_vf;
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+
+	p_vf = ecore_iov_get_vf_info(p_hwfn, vf_id, true);
+	if (!p_vf)
+		return ECORE_INVAL;
+
+	p_dev->stats_bin_id = p_vf->abs_vf_id + ECORE_VF_START_BIN_ID;
+
+	return ECORE_SUCCESS;
+}
+
+#ifdef _NTDDK_
+#pragma warning(pop)
+#endif
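
[Editor's note: to show how the new Rx-config knobs above compose, here is a hypothetical caller-side sketch. The helper name and the vport ID are illustrative, not part of this patch; the flags, fields, and ecore_set_rx_config() itself are taken from the code above.]

static enum _ecore_status_t example_switchdev_rx(struct ecore_hwfn *p_hwfn)
{
	struct ecore_rx_config conf;

	OSAL_MEM_ZERO(&conf, sizeof(conf));
	OSAL_SET_BIT(ECORE_RX_CONF_SKIP_UCAST_FILTER_UPDATE, &conf.flags);
	OSAL_SET_BIT(ECORE_RX_CONF_SET_LB_VPORT, &conf.flags);
	conf.loopback_dst_vport_id = 5;	/* relative ID; illustrative value */

	/* Translates the relative vport via ecore_fw_vport() and caches the
	 * result in p_hwfn->rx_conf, where ecore_get_loopback_vport() and
	 * ecore_get_skip_ucast_filter_update() later read it back.
	 */
	return ecore_set_rx_config(p_hwfn, &conf);
}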
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 8fa403029..3e72f1539 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_L2_H__
 #define __ECORE_L2_H__
 
@@ -162,4 +162,18 @@ enum _ecore_status_t ecore_get_txq_coalesce(struct ecore_hwfn *p_hwfn,
 					    struct ecore_queue_cid *p_cid,
 					    u16 *p_hw_coal);
 
+/**
+ * @brief - Sets a Tx rate limit on a specific queue
+ *
+ * @params p_hwfn
+ * @params p_ptt
+ * @params p_cid
+ * @params rate
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_eth_tx_queue_maxrate(struct ecore_hwfn *p_hwfn,
+			   struct ecore_ptt *p_ptt,
+			   struct ecore_queue_cid *p_cid, u32 rate);
 #endif
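
[Editor's note: a hypothetical usage sketch for the newly exported prototype above. The wrapper is illustrative; 'rate' is forwarded to ecore_init_global_rl() unchanged, so it uses whatever units that function consumes.]

static void example_cap_txq(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
			    struct ecore_queue_cid *p_cid, u32 rate)
{
	enum _ecore_status_t rc;

	rc = ecore_eth_tx_queue_maxrate(p_hwfn, p_ptt, p_cid, rate);
	if (rc == ECORE_INVAL)	/* returned when packet pacing is disabled */
		DP_NOTICE(p_hwfn, false, "Pacing off; cannot rate-limit queue\n");
}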
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index bebf412ed..13f559fc7 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_L2_API_H__
 #define __ECORE_L2_API_H__
 
@@ -24,8 +24,30 @@ enum ecore_rss_caps {
 /* Should be the same as ETH_RSS_IND_TABLE_ENTRIES_NUM */
 #define ECORE_RSS_IND_TABLE_SIZE 128
 #define ECORE_RSS_KEY_SIZE 10 /* size in 32b chunks */
+
+#define ECORE_MAX_PHC_DRIFT_PPB	291666666
+#define ECORE_VF_START_BIN_ID	16
+
+enum ecore_ptp_filter_type {
+	ECORE_PTP_FILTER_NONE,
+	ECORE_PTP_FILTER_ALL,
+	ECORE_PTP_FILTER_V1_L4_EVENT,
+	ECORE_PTP_FILTER_V1_L4_GEN,
+	ECORE_PTP_FILTER_V2_L4_EVENT,
+	ECORE_PTP_FILTER_V2_L4_GEN,
+	ECORE_PTP_FILTER_V2_L2_EVENT,
+	ECORE_PTP_FILTER_V2_L2_GEN,
+	ECORE_PTP_FILTER_V2_EVENT,
+	ECORE_PTP_FILTER_V2_GEN
+};
+
+enum ecore_ptp_hwtstamp_tx_type {
+	ECORE_PTP_HWTSTAMP_TX_OFF,
+	ECORE_PTP_HWTSTAMP_TX_ON,
+};
 #endif
 
+#ifndef __EXTRACT__LINUX__
 struct ecore_queue_start_common_params {
 	/* Should always be relative to entity sending this. */
 	u8 vport_id;
@@ -36,6 +58,8 @@ struct ecore_queue_start_common_params {
 
 	struct ecore_sb_info *p_sb;
 	u8 sb_idx;
+
+	u8 tc;
 };
 
 struct ecore_rxq_start_ret_params {
@@ -47,6 +71,7 @@ struct ecore_txq_start_ret_params {
 	void OSAL_IOMEM *p_doorbell;
 	void *p_handle;
 };
+#endif
 
 struct ecore_rss_params {
 	u8 update_rss_config;
@@ -110,7 +135,7 @@ struct ecore_filter_ucast {
 	u8 is_tx_filter;
 	u8 vport_to_add_to;
 	u8 vport_to_remove_from;
-	unsigned char mac[ETH_ALEN];
+	unsigned char mac[ECORE_ETH_ALEN];
 	u8 assert_on_error;
 	u16 vlan;
 	u32 vni;
@@ -123,7 +148,7 @@ struct ecore_filter_mcast {
 	u8 vport_to_remove_from;
 	u8	num_mc_addrs;
 #define ECORE_MAX_MC_ADDRS	64
-	unsigned char mac[ECORE_MAX_MC_ADDRS][ETH_ALEN];
+	unsigned char mac[ECORE_MAX_MC_ADDRS][ECORE_ETH_ALEN];
 };
 
 struct ecore_filter_accept_flags {
@@ -140,6 +165,7 @@ struct ecore_filter_accept_flags {
 #define ECORE_ACCEPT_ANY_VNI		0x40
 };
 
+#ifndef __EXTRACT__LINUX__
 enum ecore_filter_config_mode {
 	ECORE_FILTER_CONFIG_MODE_DISABLE,
 	ECORE_FILTER_CONFIG_MODE_5_TUPLE,
@@ -148,6 +174,7 @@ enum ecore_filter_config_mode {
 	ECORE_FILTER_CONFIG_MODE_TUNN_TYPE,
 	ECORE_FILTER_CONFIG_MODE_IP_SRC,
 };
+#endif
 
 struct ecore_arfs_config_params {
 	bool tcp;
@@ -247,7 +274,7 @@ ecore_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
  * @param tc			traffic class to use with this L2 txq
  * @param pbl_addr		address of the pbl array
  * @param pbl_size		number of entries in pbl
- * @param p_ret_params		Pointer to fill the return parameters in.
+ * @param p_ret_params		Pointer to fill the return parameters in.
  *
  * @return enum _ecore_status_t
  */
@@ -293,6 +320,10 @@ struct ecore_sp_vport_start_params {
 	bool zero_placement_offset;
 	bool check_mac;
 	bool check_ethtype;
+	u8 tx_dst_port_mode_config;	/* Configurations for tx forwarding */
+	u8 dst_vport_id;	/* Destination Vport ID to forward the packet */
+	u8 tx_dst_port_mode;	/* Destination tx to forward the packet */
+	u8 dst_vport_id_valid;	/* If set, dst_vport_id has valid value */
 
 	/* Strict behavior on transmission errors */
 	bool b_err_illegal_vlan_mode;
@@ -346,6 +377,7 @@ struct ecore_sp_vport_update_params {
 	struct ecore_rss_params	*rss_params;
 	struct ecore_filter_accept_flags accept_flags;
 	struct ecore_sge_tpa_params *sge_tpa_params;
+
 	/* MTU change - notice this requires the vport to be disabled.
 	 * If non-zero, value would be used.
 	 */
@@ -353,6 +385,20 @@ struct ecore_sp_vport_update_params {
 	u8			update_ctl_frame_check;
 	u8			mac_chk_en;
 	u8			ethtype_chk_en;
+
+	u8			update_tx_dst_port_mode_flag;
+
+	/* Configurations for tx forwarding */
+	u8			tx_dst_port_mode_config;
+
+	/* Destination Vport ID to forward the packet */
+	u8			dst_vport_id;
+
+	/* Destination tx to forward the packet */
+	u8			tx_dst_port_mode;
+
+	/* If set, dst_vport_id has valid value */
+	u8			dst_vport_id_valid;
 };
 
 /**
@@ -425,6 +471,27 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
 			      enum spq_mode comp_mode,
 			      struct ecore_spq_comp_cb *p_comp_data);
 
+/**
+ * @brief ecore_sp_eth_rx_queues_set_default -
+ *
+ * This ramrod sets RSS RX queue as default one.
+ *
+ * @note Final phase API.
+ *
+ * @param p_hwfn
+ * @param p_rxq_handlers	queue handlers to be updated.
+ * @param comp_mode
+ * @param p_comp_data
+ *
+ * @return enum _ecore_status_t
+ */
+
+enum _ecore_status_t
+ecore_sp_eth_rx_queues_set_default(struct ecore_hwfn *p_hwfn,
+				   void *p_rxq_handler,
+				   enum spq_mode comp_mode,
+				   struct ecore_spq_comp_cb *p_comp_data);
+
 void __ecore_get_vport_stats(struct ecore_hwfn *p_hwfn,
 			     struct ecore_ptt *p_ptt,
 			     struct ecore_eth_stats *stats,
@@ -450,6 +517,7 @@ void ecore_arfs_mode_configure(struct ecore_hwfn *p_hwfn,
 			       struct ecore_ptt *p_ptt,
 			       struct ecore_arfs_config_params *p_cfg_params);
 
+#ifndef __EXTRACT__LINUX__
 struct ecore_ntuple_filter_params {
 	/* Physically mapped address containing header of buffer to be used
 	 * as filter.
@@ -460,7 +528,7 @@ struct ecore_ntuple_filter_params {
 	u16 length;
 
 	/* Relative queue-id to receive classified packet */
-	#define ECORE_RFS_NTUPLE_QID_RSS ((u16)-1)
+#define ECORE_RFS_NTUPLE_QID_RSS ((u16)-1)
 	u16 qid;
 
 	/* Identifier can either be according to vport-id or vfid */
@@ -468,12 +536,13 @@ struct ecore_ntuple_filter_params {
 	u8 vport_id;
 	u8 vf_id;
 
-	/* true if this filter is to be added. Else to be removed */
+	/* true if this filter is to be added, false if it is to be removed */
 	bool b_is_add;
 
-	/* If packet needs to be dropped */
+	/* If the packet needs to be dropped */
 	bool b_is_drop;
 };
+#endif
 
 /**
  * @brief - ecore_configure_rfs_ntuple_filter
@@ -514,4 +583,67 @@ ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
 				     u8 vport_id,
 				     u8 ind_table_index,
 				     u16 ind_table_value);
+
+/**
+ * @brief - ecore_set_rx_config
+ *
+ * Set a new Rx configuration. The current options are:
+ * - Skip updating new accept flags.
+ * - Skip updating unicast filter (mac/vlan/mac-vlan pair).
+ * - Set the PF/VFs vports to loopback to a specific vport ID.
+ *
+ * @params p_hwfn
+ * @params p_conf	The new Rx configuration
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_set_rx_config(struct ecore_hwfn *p_hwfn, struct ecore_rx_config *p_conf);
+
+/**
+ * @brief - ecore_get_skip_accept_flags_update
+ *
+ * Returns whether accept flags update should be skipped.
+ *
+ * @params p_hwfn
+ *
+ * @return int
+ */
+int ecore_get_skip_accept_flags_update(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief - ecore_get_skip_ucast_filter_update
+ *
+ * Returns whether unicast filter update should be skipped.
+ *
+ * @params p_hwfn
+ *
+ * @return int
+ */
+int ecore_get_skip_ucast_filter_update(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief - ecore_get_loopback_vport
+ *
+ * Returns whether loopback to a specific vport is required.
+ *
+ * @params p_hwfn
+ * @params abs_vport_id	   The destination vport ID to loopback to.
+ *
+ * @return int
+ */
+int ecore_get_loopback_vport(struct ecore_hwfn *p_hwfn, u8 *abs_vport_id);
+
+/**
+ * @brief - ecore_set_vf_stats_bin_id
+ *
+ * Set stats bin id of a given vf
+ *
+ * @params p_dev
+ * @params vf_id	VF number
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_set_vf_stats_bin_id(struct ecore_dev *p_dev,
+					       u16 vf_id);
 #endif
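
[Editor's note: putting the stats additions together, a hypothetical PF-side sketch of reading one VF's statistics through the one-shot stats-bin mechanism added above. The wrapper name is illustrative; the two API calls and the bin formula come from this patch.]

static void example_read_vf_stats(struct ecore_dev *p_dev, u16 vf_id)
{
	struct ecore_eth_stats stats;

	if (ecore_set_vf_stats_bin_id(p_dev, vf_id) != ECORE_SUCCESS)
		return;

	/* Reads bin (abs_vf_id + ECORE_VF_START_BIN_ID); the bin ID is
	 * consumed and reset to 0 by this query.
	 */
	ecore_get_vport_stats(p_dev, &stats);
}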
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6f092dd37..2e945556c 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "ecore_status.h"
@@ -25,18 +25,18 @@
 #define GRCBASE_MCP	0xe00000
 
 #define ECORE_MCP_RESP_ITER_US		10
-#define ECORE_DRV_MB_MAX_RETRIES (500 * 1000)	/* Account for 5 sec */
-#define ECORE_MCP_RESET_RETRIES (50 * 1000)	/* Account for 500 msec */
+#define ECORE_DRV_MB_MAX_RETRIES	(500 * 1000) /* Account for 5 sec */
+#define ECORE_MCP_RESET_RETRIES		(50 * 1000) /* Account for 500 msec */
 
 #ifndef ASIC_ONLY
 /* Non-ASIC:
  * The waiting interval is multiplied by 100 to reduce the impact of the
  * built-in delay of 100usec in each ecore_rd().
- * In addition, a factor of 4 comparing to ASIC is applied.
+ * In addition, a factor of 6 compared to ASIC is applied.
  */
 #define ECORE_EMUL_MCP_RESP_ITER_US	(ECORE_MCP_RESP_ITER_US * 100)
-#define ECORE_EMUL_DRV_MB_MAX_RETRIES	((ECORE_DRV_MB_MAX_RETRIES / 100) * 4)
-#define ECORE_EMUL_MCP_RESET_RETRIES	((ECORE_MCP_RESET_RETRIES / 100) * 4)
+#define ECORE_EMUL_DRV_MB_MAX_RETRIES	((ECORE_DRV_MB_MAX_RETRIES / 100) * 6)
+#define ECORE_EMUL_MCP_RESET_RETRIES	((ECORE_MCP_RESET_RETRIES / 100) * 6)
 #endif
 
 #define DRV_INNER_WR(_p_hwfn, _p_ptt, _ptr, _offset, _val) \
@@ -54,11 +54,16 @@
 	DRV_INNER_RD(_p_hwfn, _p_ptt, drv_mb_addr, \
 		     OFFSETOF(struct public_drv_mb, _field))
 
-#define PDA_COMP (((FW_MAJOR_VERSION) + (FW_MINOR_VERSION << 8)) << \
-	DRV_ID_PDA_COMP_VER_OFFSET)
+#define PDA_COMP ((FW_MAJOR_VERSION) + (FW_MINOR_VERSION << 8))
 
 #define MCP_BYTES_PER_MBIT_OFFSET 17
 
+#ifdef _NTDDK_
+#pragma warning(push)
+#pragma warning(disable : 28167)
+#pragma warning(disable : 28123)
+#endif
+
 #ifndef ASIC_ONLY
 static int loaded;
 static int loaded_port[MAX_NUM_PORTS] = { 0 };
@@ -66,12 +71,13 @@ static int loaded_port[MAX_NUM_PORTS] = { 0 };
 
 bool ecore_mcp_is_init(struct ecore_hwfn *p_hwfn)
 {
-	if (!p_hwfn->mcp_info || !p_hwfn->mcp_info->public_base)
+	if (!p_hwfn->mcp_info || !p_hwfn->mcp_info->b_mfw_running)
 		return false;
 	return true;
 }
 
-void ecore_mcp_cmd_port_init(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+void ecore_mcp_cmd_port_init(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt)
 {
 	u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
 					PUBLIC_PORT);
@@ -84,7 +90,8 @@ void ecore_mcp_cmd_port_init(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 		   p_hwfn->mcp_info->port_addr, MFW_PORT(p_hwfn));
 }
 
-void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn,
+		       struct ecore_ptt *p_ptt)
 {
 	u32 length = MFW_DRV_MSG_MAX_DWORDS(p_hwfn->mcp_info->mfw_mb_length);
 	OSAL_BE32 tmp;
@@ -104,7 +111,7 @@ void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 			       (i << 2) + sizeof(u32));
 
 		((u32 *)p_hwfn->mcp_info->mfw_mb_cur)[i] =
-		    OSAL_BE32_TO_CPU(tmp);
+						OSAL_BE32_TO_CPU(tmp);
 	}
 }
 
@@ -180,10 +187,13 @@ enum _ecore_status_t ecore_mcp_free(struct ecore_hwfn *p_hwfn)
 #ifdef CONFIG_ECORE_LOCK_ALLOC
 		OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->cmd_lock);
 		OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->link_lock);
+		OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->dbg_data_lock);
+		OSAL_SPIN_LOCK_DEALLOC(&p_hwfn->mcp_info->unload_lock);
 #endif
 	}
 
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info);
+	p_hwfn->mcp_info = OSAL_NULL;
 
 	return ECORE_SUCCESS;
 }
@@ -192,28 +202,52 @@ enum _ecore_status_t ecore_mcp_free(struct ecore_hwfn *p_hwfn)
 #define ECORE_MCP_SHMEM_RDY_MAX_RETRIES	20
 #define ECORE_MCP_SHMEM_RDY_ITER_MS	50
 
-static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
-						   struct ecore_ptt *p_ptt)
+#define MCP_REG_CACHE_PAGING_ENABLE_ENABLE_MASK	\
+	MCP_REG_CACHE_PAGING_ENABLE_ENABLE
+
+enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt)
 {
 	u32 drv_mb_offsize, mfw_mb_offsize, cache_paging_en_addr, val;
 	struct ecore_mcp_info *p_info = p_hwfn->mcp_info;
 	u8 cnt = ECORE_MCP_SHMEM_RDY_MAX_RETRIES;
 	u8 msec = ECORE_MCP_SHMEM_RDY_ITER_MS;
 	u32 mcp_pf_id = MCP_PF_ID(p_hwfn);
+	u32 recovery_mode;
 
 	/* Disabled cache paging in the MCP implies that no MFW is running */
-	cache_paging_en_addr = MCP_REG_CACHE_PAGING_ENABLE_BB_K2;
+	cache_paging_en_addr = ECORE_IS_E4(p_hwfn->p_dev) ?
+			       MCP_REG_CACHE_PAGING_ENABLE_BB_K2 :
+			       MCP_REG_CACHE_PAGING_ENABLE_E5;
 	val = ecore_rd(p_hwfn, p_ptt, cache_paging_en_addr);
 
+	/* MISCS_REG_GENERIC_HW_0[31] is indication that the MCP/ROM is in
+	 * recovery mode, serving NVM upgrade mailbox commands.
+	 */
+	recovery_mode = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_HW_0) &
+		0x80000000;
+	p_info->recovery_mode = recovery_mode ? true : false;
+	p_info->b_mfw_running =
+		GET_FIELD(val, MCP_REG_CACHE_PAGING_ENABLE_ENABLE) ||
+		p_info->recovery_mode;
+	if (!p_info->b_mfw_running) {
+#ifndef ASIC_ONLY
+		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+			DP_INFO(p_hwfn,
+				"Emulation: Assume that no MFW is running [MCP_REG_CACHE_PAGING_ENABLE 0x%x]\n",
+				val);
+		else
+#endif
+			DP_NOTICE(p_hwfn, false,
+				  "Assume that no MFW is running [MCP_REG_CACHE_PAGING_ENABLE 0x%x]\n",
+				  val);
+		return ECORE_INVAL;
+	}
+
 	p_info->public_base = ecore_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR);
 	if (!p_info->public_base) {
 		DP_NOTICE(p_hwfn, false,
 			  "The address of the MCP scratch-pad is not configured\n");
-#ifndef ASIC_ONLY
-		/* Zeroed "public_base" implies no MFW */
-		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
-			DP_INFO(p_hwfn, "Emulation: Assume no MFW\n");
-#endif
 		return ECORE_INVAL;
 	}
 
@@ -225,27 +259,34 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
 				  PUBLIC_MFW_MB));
 	p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
 	p_info->mfw_mb_length = (u16)ecore_rd(p_hwfn, p_ptt,
-					      p_info->mfw_mb_addr);
+					      p_info->mfw_mb_addr +
+					      OFFSETOF(struct public_mfw_mb,
+						       sup_msgs));
 
 	/* @@@TBD:
-	 * The driver can notify that there was an MCP reset, and read the SHMEM
-	 * values before the MFW has completed initializing them.
-	 * As a temporary solution, the "sup_msgs" field is used as a data ready
-	 * indication.
+	 * The driver can notify that there was an MCP reset, and might read the
+	 * SHMEM values before the MFW has completed initializing them.
+	 * As a temporary solution, the "sup_msgs" field in the MFW mailbox is
+	 * used as a data ready indication.
 	 * This should be replaced with an actual indication when it is provided
 	 * by the MFW.
 	 */
-	while (!p_info->mfw_mb_length && cnt--) {
-		OSAL_MSLEEP(msec);
-		p_info->mfw_mb_length = (u16)ecore_rd(p_hwfn, p_ptt,
-						      p_info->mfw_mb_addr);
-	}
+	if (!p_info->recovery_mode) {
+		while (!p_info->mfw_mb_length && --cnt) {
+			OSAL_MSLEEP(msec);
+			p_info->mfw_mb_length =
+				(u16)ecore_rd(p_hwfn, p_ptt,
+					      p_info->mfw_mb_addr +
+					      OFFSETOF(struct public_mfw_mb,
+						       sup_msgs));
+		}
 
-	if (!cnt) {
-		DP_NOTICE(p_hwfn, false,
-			  "Failed to get the SHMEM ready notification after %d msec\n",
-			  ECORE_MCP_SHMEM_RDY_MAX_RETRIES * msec);
-		return ECORE_TIMEOUT;
+		if (!cnt) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to get the SHMEM ready notification after %d msec\n",
+				  ECORE_MCP_SHMEM_RDY_MAX_RETRIES * msec);
+			return ECORE_TIMEOUT;
+		}
 	}
 
 	/* Calculate the driver and MFW mailbox address */
@@ -254,19 +295,18 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
 						       PUBLIC_DRV_MB));
 	p_info->drv_mb_addr = SECTION_ADDR(drv_mb_offsize, mcp_pf_id);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "drv_mb_offsiz = 0x%x, drv_mb_addr = 0x%x"
-		   " mcp_pf_id = 0x%x\n",
+		   "drv_mb_offsize = 0x%x, drv_mb_addr = 0x%x, mcp_pf_id = 0x%x\n",
 		   drv_mb_offsize, p_info->drv_mb_addr, mcp_pf_id);
 
 	/* Get the current driver mailbox sequence before sending
 	 * the first command
 	 */
 	p_info->drv_mb_seq = DRV_MB_RD(p_hwfn, p_ptt, drv_mb_header) &
-	    DRV_MSG_SEQ_NUMBER_MASK;
+				       DRV_MSG_SEQ_NUMBER_MASK;
 
 	/* Get current FW pulse sequence */
 	p_info->drv_pulse_seq = DRV_MB_RD(p_hwfn, p_ptt, drv_pulse_mb) &
-	    DRV_PULSE_SEQ_MASK;
+				DRV_PULSE_SEQ_MASK;
 
 	p_info->mcp_hist = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
 
@@ -290,18 +330,34 @@ enum _ecore_status_t ecore_mcp_cmd_init(struct ecore_hwfn *p_hwfn,
 
 	/* Initialize the MFW spinlocks */
 #ifdef CONFIG_ECORE_LOCK_ALLOC
-	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->cmd_lock)) {
+	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->cmd_lock, "cmd_lock")) {
+		OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info);
+		return ECORE_NOMEM;
+	}
+	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->link_lock, "link_lock")) {
+		OSAL_SPIN_LOCK_DEALLOC(&p_info->cmd_lock);
+		OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info);
+		return ECORE_NOMEM;
+	}
+	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->dbg_data_lock,
+				 "dbg_data_lock")) {
+		OSAL_SPIN_LOCK_DEALLOC(&p_info->cmd_lock);
+		OSAL_SPIN_LOCK_DEALLOC(&p_info->link_lock);
 		OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info);
 		return ECORE_NOMEM;
 	}
-	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->link_lock)) {
+	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_info->unload_lock, "unload_lock")) {
 		OSAL_SPIN_LOCK_DEALLOC(&p_info->cmd_lock);
+		OSAL_SPIN_LOCK_DEALLOC(&p_info->link_lock);
+		OSAL_SPIN_LOCK_DEALLOC(&p_info->dbg_data_lock);
 		OSAL_FREE(p_hwfn->p_dev, p_hwfn->mcp_info);
 		return ECORE_NOMEM;
 	}
 #endif
 	OSAL_SPIN_LOCK_INIT(&p_info->cmd_lock);
 	OSAL_SPIN_LOCK_INIT(&p_info->link_lock);
+	OSAL_SPIN_LOCK_INIT(&p_info->dbg_data_lock);
+	OSAL_SPIN_LOCK_INIT(&p_info->unload_lock);
 
 	OSAL_LIST_INIT(&p_info->cmd_list);
 
@@ -316,7 +372,7 @@ enum _ecore_status_t ecore_mcp_cmd_init(struct ecore_hwfn *p_hwfn,
 	size = MFW_DRV_MSG_MAX_DWORDS(p_info->mfw_mb_length) * sizeof(u32);
 	p_info->mfw_mb_cur = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size);
 	p_info->mfw_mb_shadow = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, size);
-	if (!p_info->mfw_mb_shadow || !p_info->mfw_mb_addr)
+	if (p_info->mfw_mb_cur == OSAL_NULL || p_info->mfw_mb_shadow == OSAL_NULL)
 		goto err;
 
 	return ECORE_SUCCESS;
@@ -352,7 +408,7 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 	u32 retries = ECORE_MCP_RESET_RETRIES;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-#ifndef ASIC_ONLY
+#if (!defined ASIC_ONLY) && (!defined ROM_TEST)
 	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
 		delay = ECORE_EMUL_MCP_RESP_ITER_US;
 		retries = ECORE_EMUL_MCP_RESET_RETRIES;
@@ -407,6 +463,7 @@ static void ecore_emul_mcp_load_req(struct ecore_hwfn *p_hwfn,
 		return;
 	}
 
+	/* @DPDK */
 	if (!loaded)
 		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_ENGINE;
 	else if (!loaded_port[p_hwfn->port_id])
@@ -566,8 +623,16 @@ static void ecore_mcp_cmd_set_blocking(struct ecore_hwfn *p_hwfn,
 		block_cmd ? "Block" : "Unblock");
 }
 
-void ecore_mcp_print_cpu_info(struct ecore_hwfn *p_hwfn,
-			      struct ecore_ptt *p_ptt)
+static void ecore_mcp_cmd_set_halt(struct ecore_hwfn *p_hwfn, bool halt)
+{
+	p_hwfn->mcp_info->b_halted = halt;
+
+	DP_INFO(p_hwfn, "%s sending of mailbox commands to the MFW\n",
+		halt ? "Halt" : "Unhalt");
+}
+
+static void ecore_mcp_print_cpu_info(struct ecore_hwfn *p_hwfn,
+				     struct ecore_ptt *p_ptt)
 {
 	u32 cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2;
 	u32 delay = ECORE_MCP_RESP_ITER_US;
@@ -592,13 +657,21 @@ void ecore_mcp_print_cpu_info(struct ecore_hwfn *p_hwfn,
 static enum _ecore_status_t
 _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			 struct ecore_mcp_mb_params *p_mb_params,
-			 u32 max_retries, u32 delay)
+			 u32 max_retries, u32 usecs)
 {
+	u32 cnt = 0, msecs = DIV_ROUND_UP(usecs, 1000);
 	struct ecore_mcp_cmd_elem *p_cmd_elem;
-	u32 cnt = 0;
+	u32 ud_time = 10, ud_iter = 100;
 	u16 seq_num;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
+	if (!p_mb_params)
+		return ECORE_INVAL;
+
+	/* First 100 udelay wait iterations equate to 1 ms of wait time */
+	if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
+		max_retries += ud_iter - 1;
+
 	/* Wait until the mailbox is non-occupied */
 	do {
 		/* Exit the loop if there is no pending command, or if the
@@ -618,7 +691,15 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			goto err;
 
 		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
-		OSAL_UDELAY(delay);
+
+		if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
+			if (cnt < ud_iter)
+				OSAL_UDELAY(ud_time);
+			else
+				OSAL_MSLEEP(msecs);
+		} else {
+			OSAL_UDELAY(usecs);
+		}
 		OSAL_MFW_CMD_PREEMPT(p_hwfn);
 	} while (++cnt < max_retries);
 
@@ -648,7 +729,15 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		 * The spinlock stays locked until the list element is removed.
 		 */
 
-		OSAL_UDELAY(delay);
+		if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
+			if (cnt < ud_iter)
+				OSAL_UDELAY(ud_time);
+			else
+				OSAL_MSLEEP(msecs);
+		} else {
+			OSAL_UDELAY(usecs);
+		}
+
 		OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
 
 		if (p_cmd_elem->b_is_completed)
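
[Editor's note: the two polling loops above share the same CAN_SLEEP ladder. Below is a standalone model of one wait iteration, an illustrative sketch reusing the OSAL names from this file; note that max_retries is pre-incremented by (ud_iter - 1) above so the extra fine-grained polls do not shorten the overall timeout.]

static void mb_poll_wait_model(u32 cnt, u32 usecs, bool can_sleep)
{
	u32 ud_time = 10, ud_iter = 100;
	u32 msecs = DIV_ROUND_UP(usecs, 1000);

	if (!can_sleep)
		OSAL_UDELAY(usecs);	/* atomic caller: busy-wait only */
	else if (cnt < ud_iter)
		OSAL_UDELAY(ud_time);	/* first ~1 ms: fine-grained polls */
	else
		OSAL_MSLEEP(msecs);	/* afterwards: coarse sleeping polls */
}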
@@ -674,8 +763,10 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		ecore_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
 		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
 
-		ecore_mcp_cmd_set_blocking(p_hwfn, true);
-		ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_MFW_RESP_FAIL);
+		if (!ECORE_MB_FLAGS_IS_SET(p_mb_params, AVOID_BLOCK))
+			ecore_mcp_cmd_set_blocking(p_hwfn, true);
+		ecore_hw_err_notify(p_hwfn, p_ptt, ECORE_HW_ERR_MFW_RESP_FAIL,
+				    OSAL_NULL, 0);
 		return ECORE_AGAIN;
 	}
 
@@ -685,7 +776,7 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
 		   "MFW mailbox: response 0x%08x param 0x%08x [after %d.%03d ms]\n",
 		   p_mb_params->mcp_resp, p_mb_params->mcp_param,
-		   (cnt * delay) / 1000, (cnt * delay) % 1000);
+		   (cnt * usecs) / 1000, (cnt * usecs) % 1000);
 
 	/* Clear the sequence number from the MFW response */
 	p_mb_params->mcp_resp &= FW_MSG_CODE_MASK;
@@ -697,15 +788,17 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return rc;
 }
 
-static enum _ecore_status_t
-ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
-			struct ecore_ptt *p_ptt,
-			struct ecore_mcp_mb_params *p_mb_params)
+static enum _ecore_status_t ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
+						    struct ecore_ptt *p_ptt,
+						    struct ecore_mcp_mb_params *p_mb_params)
 {
 	osal_size_t union_data_size = sizeof(union drv_union_data);
 	u32 max_retries = ECORE_DRV_MB_MAX_RETRIES;
 	u32 usecs = ECORE_MCP_RESP_ITER_US;
 
+	if (!p_mb_params)
+		return ECORE_INVAL;
+
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn))
 		return ecore_emul_mcp_cmd(p_hwfn, p_mb_params);
@@ -715,17 +808,29 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 		usecs = ECORE_EMUL_MCP_RESP_ITER_US;
 	}
 #endif
-	if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
-		max_retries = DIV_ROUND_UP(max_retries, 1000);
-		usecs *= 1000;
-	}
-
 	/* MCP not initialized */
 	if (!ecore_mcp_is_init(p_hwfn)) {
 		DP_NOTICE(p_hwfn, true, "MFW is not initialized!\n");
 		return ECORE_BUSY;
 	}
 
+	/* If MFW was halted by another process, let the mailbox sender know
+	 * they can retry sending the mailbox later.
+	 */
+	if (p_hwfn->mcp_info->b_halted) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+			   "The MFW is halted. Avoid sending mailbox command 0x%08x [param 0x%08x].\n",
+			   p_mb_params->cmd, p_mb_params->param);
+		return ECORE_AGAIN;
+	}
+
+	if (p_hwfn->mcp_info->b_block_cmd) {
+		DP_NOTICE(p_hwfn, false,
+			  "The MFW is not responsive. Avoid sending mailbox command 0x%08x [param 0x%08x].\n",
+			  p_mb_params->cmd, p_mb_params->param);
+		return ECORE_ABORTED;
+	}
+
 	if (p_mb_params->data_src_size > union_data_size ||
 	    p_mb_params->data_dst_size > union_data_size) {
 		DP_ERR(p_hwfn,
@@ -735,11 +840,9 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
-	if (p_hwfn->mcp_info->b_block_cmd) {
-		DP_NOTICE(p_hwfn, false,
-			  "The MFW is not responsive. Avoid sending mailbox command 0x%08x [param 0x%08x].\n",
-			  p_mb_params->cmd, p_mb_params->param);
-		return ECORE_ABORTED;
+	if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
+		max_retries = DIV_ROUND_UP(max_retries, 1000);
+		usecs *= 1000;
 	}
 
 	return _ecore_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries,
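
[Editor's note: the halted and blocked states above fail differently on purpose. A hypothetical caller pattern, with the wrapper, cmd/param, and retry budget purely illustrative; ecore_mcp_cmd() and its signature are taken from this file.]

static enum _ecore_status_t example_mb_send(struct ecore_hwfn *p_hwfn,
					    struct ecore_ptt *p_ptt,
					    u32 cmd, u32 param)
{
	u32 resp = 0, mcp_param = 0, retries = 10;
	enum _ecore_status_t rc;

	do {
		rc = ecore_mcp_cmd(p_hwfn, p_ptt, cmd, param, &resp,
				   &mcp_param);
		if (rc != ECORE_AGAIN)	/* halted MFW returns ECORE_AGAIN */
			break;
		OSAL_MSLEEP(10);	/* let the halting flow finish */
	} while (--retries);	/* ECORE_ABORTED (blocked MFW) is terminal */

	return rc;
}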
@@ -772,7 +875,9 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 param,
 					  u32 *o_mcp_resp,
 					  u32 *o_mcp_param,
-					  u32 i_txn_size, u32 *i_buf)
+					  u32 i_txn_size,
+					  u32 *i_buf,
+					  bool b_can_sleep)
 {
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
@@ -782,6 +887,8 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
 	mb_params.param = param;
 	mb_params.p_data_src = i_buf;
 	mb_params.data_src_size = (u8)i_txn_size;
+	if (b_can_sleep)
+		mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -789,6 +896,9 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
 	*o_mcp_resp = mb_params.mcp_resp;
 	*o_mcp_param = mb_params.mcp_param;
 
+	/* nvm_info needs to be updated */
+	p_hwfn->nvm_info.valid = false;
+
 	return ECORE_SUCCESS;
 }
 
@@ -798,7 +908,9 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 param,
 					  u32 *o_mcp_resp,
 					  u32 *o_mcp_param,
-					  u32 *o_txn_size, u32 *o_buf)
+					  u32 *o_txn_size,
+					  u32 *o_buf,
+					  bool b_can_sleep)
 {
 	struct ecore_mcp_mb_params mb_params;
 	u8 raw_data[MCP_DRV_NVM_BUF_LEN];
@@ -811,6 +923,8 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 
 	/* Use the maximal value since the actual one is part of the response */
 	mb_params.data_dst_size = MCP_DRV_NVM_BUF_LEN;
+	if (b_can_sleep)
+		mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP;
 
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
@@ -820,8 +934,7 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 	*o_mcp_param = mb_params.mcp_param;
 
 	*o_txn_size = *o_mcp_param;
-	/* @DPDK */
-	OSAL_MEMCPY(o_buf, raw_data, RTE_MIN(*o_txn_size, MCP_DRV_NVM_BUF_LEN));
+	OSAL_MEMCPY(o_buf, raw_data, *o_txn_size);
 
 	return ECORE_SUCCESS;
 }
@@ -850,19 +963,27 @@ ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role,
 	return can_force_load;
 }
 
-static enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn,
-						      struct ecore_ptt *p_ptt)
+enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt)
 {
 	u32 resp = 0, param = 0;
 	enum _ecore_status_t rc;
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CANCEL_LOAD_REQ, 0,
 			   &resp, &param);
-	if (rc != ECORE_SUCCESS)
+	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to send cancel load request, rc = %d\n", rc);
+		return rc;
+	}
 
-	return rc;
+	if (resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The cancel load command is unsupported by the MFW\n");
+		return ECORE_NOTIMPL;
+	}
+
+	return ECORE_SUCCESS;
 }
 
 #define CONFIG_ECORE_L2_BITMAP_IDX	(0x1 << 0)
@@ -933,7 +1054,6 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	struct ecore_mcp_mb_params mb_params;
 	struct load_req_stc load_req;
 	struct load_rsp_stc load_rsp;
-	u32 hsi_ver;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&load_req, sizeof(load_req));
@@ -947,17 +1067,18 @@ __ecore_mcp_load_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	SET_MFW_FIELD(load_req.misc0, LOAD_REQ_FLAGS0,
 		      p_in_params->avoid_eng_reset);
 
-	hsi_ver = (p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT) ?
-		  DRV_ID_MCP_HSI_VER_CURRENT :
-		  (p_in_params->hsi_ver << DRV_ID_MCP_HSI_VER_OFFSET);
-
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
-	mb_params.param = PDA_COMP | hsi_ver | p_hwfn->p_dev->drv_type;
+	mb_params.param = p_hwfn->p_dev->drv_type;
+	SET_MFW_FIELD(mb_params.param, DRV_ID_PDA_COMP_VER, PDA_COMP);
+	SET_MFW_FIELD(mb_params.param, DRV_ID_MCP_HSI_VER,
+		      p_in_params->hsi_ver == ECORE_LOAD_REQ_HSI_VER_DEFAULT ?
+		      LOAD_REQ_HSI_VERSION : p_in_params->hsi_ver);
 	mb_params.p_data_src = &load_req;
 	mb_params.data_src_size = sizeof(load_req);
 	mb_params.p_data_dst = &load_rsp;
 	mb_params.data_dst_size = sizeof(load_rsp);
+	mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP | ECORE_MB_FLAG_AVOID_BLOCK;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
@@ -1113,7 +1234,7 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 				return rc;
 		} else {
 			DP_NOTICE(p_hwfn, false,
-				  "A force load is required [{role, fw_ver, drv_ver}: loading={%d, 0x%08x, x%08x_0x%08x}, existing={%d, 0x%08x, 0x%08x_0x%08x}] - Avoid\n",
+				  "A force load is required [{role, fw_ver, drv_ver}: loading={%d, 0x%08x, 0x%08x_%08x}, existing={%d, 0x%08x, 0x%08x_%08x}] - Avoid\n",
 				  in_params.drv_role, in_params.fw_ver,
 				  in_params.drv_ver_0, in_params.drv_ver_1,
 				  out_params.exist_drv_role,
@@ -1172,6 +1293,12 @@ enum _ecore_status_t ecore_mcp_load_done(struct ecore_hwfn *p_hwfn,
 		return rc;
 	}
 
+	if (resp == FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT) {
+		DP_NOTICE(p_hwfn, false,
+			  "Received a LOAD_REFUSED_REJECT response from the mfw\n");
+		return ECORE_ABORTED;
+	}
+
 	/* Check if there is a DID mismatch between nvm-cfg/efuse */
 	if (param & FW_MB_PARAM_LOAD_DONE_DID_EFUSE_ERROR)
 		DP_NOTICE(p_hwfn, false,
@@ -1180,16 +1307,56 @@ enum _ecore_status_t ecore_mcp_load_done(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+#define MFW_COMPLETION_MAX_ITER 5000
+#define MFW_COMPLETION_INTERVAL_MS 1
+
 enum _ecore_status_t ecore_mcp_unload_req(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt)
 {
-	u32 wol_param, mcp_resp, mcp_param;
+	struct ecore_mcp_mb_params mb_params;
+	u32 cnt = MFW_COMPLETION_MAX_ITER;
+	u32 wol_param;
+	enum _ecore_status_t rc;
 
-	/* @DPDK */
-	wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+	switch (p_hwfn->p_dev->wol_config) {
+	case ECORE_OV_WOL_DISABLED:
+		wol_param = DRV_MB_PARAM_UNLOAD_WOL_DISABLED;
+		break;
+	case ECORE_OV_WOL_ENABLED:
+		wol_param = DRV_MB_PARAM_UNLOAD_WOL_ENABLED;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, true,
+			  "Unknown WoL configuration %02x\n",
+			  p_hwfn->p_dev->wol_config);
+		/* Fallthrough */
+	case ECORE_OV_WOL_DEFAULT:
+		wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+	}
 
-	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
-			     &mcp_resp, &mcp_param);
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_UNLOAD_REQ;
+	mb_params.param = wol_param;
+	mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP | ECORE_MB_FLAG_AVOID_BLOCK;
+
+	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->unload_lock);
+	OSAL_SET_BIT(ECORE_MCP_BYPASS_PROC_BIT,
+		     &p_hwfn->mcp_info->mcp_handling_status);
+	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->unload_lock);
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+
+	while (OSAL_TEST_BIT(ECORE_MCP_IN_PROCESSING_BIT,
+			     &p_hwfn->mcp_info->mcp_handling_status) && --cnt)
+		OSAL_MSLEEP(MFW_COMPLETION_INTERVAL_MS);
+
+	if (!cnt)
+		DP_NOTICE(p_hwfn, true,
+			  "Failed to wait for MFW event completion after %d msec\n",
+			  MFW_COMPLETION_MAX_ITER *
+			  MFW_COMPLETION_INTERVAL_MS);
+
+	return rc;
 }
 
 enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
@@ -1201,6 +1368,24 @@ enum _ecore_status_t ecore_mcp_unload_done(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_UNLOAD_DONE;
 
+	/* Set the primary MAC if WoL is enabled */
+	if (p_hwfn->p_dev->wol_config == ECORE_OV_WOL_ENABLED) {
+		u8 *p_mac = p_hwfn->p_dev->wol_mac;
+
+		OSAL_MEM_ZERO(&wol_mac, sizeof(wol_mac));
+		wol_mac.mac_upper = p_mac[0] << 8 | p_mac[1];
+		wol_mac.mac_lower = p_mac[2] << 24 | p_mac[3] << 16 |
+				    p_mac[4] << 8 | p_mac[5];
+
+		DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFDOWN),
+			   "Setting WoL MAC: %02x:%02x:%02x:%02x:%02x:%02x --> [%08x,%08x]\n",
+			   p_mac[0], p_mac[1], p_mac[2], p_mac[3], p_mac[4],
+			   p_mac[5], wol_mac.mac_upper, wol_mac.mac_lower);
+
+		mb_params.p_data_src = &wol_mac;
+		mb_params.data_src_size = sizeof(wol_mac);
+	}
+
 	return ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 }
 
@@ -1218,8 +1403,7 @@ static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(disabled_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Reading Disabled VF information from [offset %08x],"
-		   " path_addr %08x\n",
+		   "Reading Disabled VF information from [offset %08x], path_addr %08x\n",
 		   mfw_path_offsize, path_addr);
 
 	for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++) {
@@ -1241,11 +1425,19 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u32 *vfs_to_ack)
 {
+	u16 i, vf_bitmap_size, vf_bitmap_size_in_bytes;
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
-	u16 i;
 
-	for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++)
+	if (ECORE_IS_E4(p_hwfn->p_dev)) {
+		vf_bitmap_size = VF_BITMAP_SIZE_IN_DWORDS;
+		vf_bitmap_size_in_bytes = VF_BITMAP_SIZE_IN_BYTES;
+	} else {
+		vf_bitmap_size = EXT_VF_BITMAP_SIZE_IN_DWORDS;
+		vf_bitmap_size_in_bytes = EXT_VF_BITMAP_SIZE_IN_BYTES;
+	}
+
+	for (i = 0; i < vf_bitmap_size; i++)
 		DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IOV),
 			   "Acking VFs [%08x,...,%08x] - %08x\n",
 			   i * 32, (i + 1) * 32 - 1, vfs_to_ack[i]);
@@ -1253,9 +1445,8 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
 	mb_params.p_data_src = vfs_to_ack;
-	mb_params.data_src_size = (u8)VF_BITMAP_SIZE_IN_BYTES;
-	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
-				     &mb_params);
+	mb_params.data_src_size = (u8)vf_bitmap_size_in_bytes;
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to pass ACK for VF flr to MFW\n");
@@ -1276,8 +1467,7 @@ static void ecore_mcp_handle_transceiver_change(struct ecore_hwfn *p_hwfn,
 					      transceiver_data));
 
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_HW | ECORE_MSG_SP),
-		   "Received transceiver state update [0x%08x] from mfw"
-		   " [Addr 0x%x]\n",
+		   "Received transceiver state update [0x%08x] from mfw [Addr 0x%x]\n",
 		   transceiver_state, (u32)(p_hwfn->mcp_info->port_addr +
 					    OFFSETOF(struct public_port,
 						     transceiver_data)));
@@ -1293,6 +1483,22 @@ static void ecore_mcp_handle_transceiver_change(struct ecore_hwfn *p_hwfn,
 	OSAL_TRANSCEIVER_UPDATE(p_hwfn);
 }
 
+static void ecore_mcp_handle_transceiver_tx_fault(struct ecore_hwfn *p_hwfn)
+{
+	DP_VERBOSE(p_hwfn, (ECORE_MSG_HW | ECORE_MSG_SP),
+		   "Received TX_FAULT event from mfw\n");
+
+	OSAL_TRANSCEIVER_TX_FAULT(p_hwfn);
+}
+
+static void ecore_mcp_handle_transceiver_rx_los(struct ecore_hwfn *p_hwfn)
+{
+	DP_VERBOSE(p_hwfn, (ECORE_MSG_HW | ECORE_MSG_SP),
+		   "Received RX_LOS event from mfw\n");
+
+	OSAL_TRANSCEIVER_RX_LOS(p_hwfn);
+}
+
 static void ecore_mcp_read_eee_config(struct ecore_hwfn *p_hwfn,
 				      struct ecore_ptt *p_ptt,
 				      struct ecore_mcp_link_state *p_link)
@@ -1304,12 +1510,12 @@ static void ecore_mcp_read_eee_config(struct ecore_hwfn *p_hwfn,
 	eee_status = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr +
 				     OFFSETOF(struct public_port, eee_status));
 	p_link->eee_active = !!(eee_status & EEE_ACTIVE_BIT);
-	val = (eee_status & EEE_LD_ADV_STATUS_MASK) >> EEE_LD_ADV_STATUS_OFFSET;
+	val = GET_MFW_FIELD(eee_status, EEE_LD_ADV_STATUS);
 	if (val & EEE_1G_ADV)
 		p_link->eee_adv_caps |= ECORE_EEE_1G_ADV;
 	if (val & EEE_10G_ADV)
 		p_link->eee_adv_caps |= ECORE_EEE_10G_ADV;
-	val = (eee_status & EEE_LP_ADV_STATUS_MASK) >> EEE_LP_ADV_STATUS_OFFSET;
+	val = GET_MFW_FIELD(eee_status, EEE_LP_ADV_STATUS);
 	if (val & EEE_1G_ADV)
 		p_link->eee_lp_adv_caps |= ECORE_EEE_1G_ADV;
 	if (val & EEE_10G_ADV)
@@ -1338,6 +1544,38 @@ static u32 ecore_mcp_get_shmem_func(struct ecore_hwfn *p_hwfn,
 	return size;
 }
 
+static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
+				    struct public_func *p_shmem_info)
+{
+	struct ecore_mcp_function_info *p_info;
+
+	p_info = &p_hwfn->mcp_info->func_info;
+
+	/* TODO - bandwidth min/max should have valid values of 1-100,
+	 * as well as some indication that the feature is disabled.
+	 * Until MFW/qlediag enforce those limitations, assume there is
+	 * always a limit, and clamp the value to the range [1, 100] when
+	 * it is out of bounds.
+	 */
+	p_info->bandwidth_min = GET_MFW_FIELD(p_shmem_info->config,
+					      FUNC_MF_CFG_MIN_BW);
+	if (p_info->bandwidth_min < 1 || p_info->bandwidth_min > 100) {
+		DP_INFO(p_hwfn,
+			"bandwidth minimum out of bounds [%02x]. Set to 1\n",
+			p_info->bandwidth_min);
+		p_info->bandwidth_min = 1;
+	}
+
+	p_info->bandwidth_max = GET_MFW_FIELD(p_shmem_info->config,
+					      FUNC_MF_CFG_MAX_BW);
+	if (p_info->bandwidth_max < 1 || p_info->bandwidth_max > 100) {
+		DP_INFO(p_hwfn,
+			"bandwidth maximum out of bounds [%02x]. Set to 100\n",
+			p_info->bandwidth_max);
+		p_info->bandwidth_max = 100;
+	}
+}
+
 static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 bool b_reset)
@@ -1356,11 +1594,9 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 				  p_hwfn->mcp_info->port_addr +
 				  OFFSETOF(struct public_port, link_status));
 		DP_VERBOSE(p_hwfn, (ECORE_MSG_LINK | ECORE_MSG_SP),
-			   "Received link update [0x%08x] from mfw"
-			   " [Addr 0x%x]\n",
+			   "Received link update [0x%08x] from mfw [Addr 0x%x]\n",
 			   status, (u32)(p_hwfn->mcp_info->port_addr +
-					  OFFSETOF(struct public_port,
-						   link_status)));
+			   OFFSETOF(struct public_port, link_status)));
 	} else {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 			   "Resetting link indications\n");
@@ -1379,11 +1615,20 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 						 MCP_PF_ID(p_hwfn));
 			p_link->link_up = !!(shmem_info.status &
 					     FUNC_STATUS_VIRTUAL_LINK_UP);
+			ecore_read_pf_bandwidth(p_hwfn, &shmem_info);
+			DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+				   "Virtual link_up = %d\n", p_link->link_up);
 		} else {
 			p_link->link_up = !!(status & LINK_STATUS_LINK_UP);
+			DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+				   "Physical link_up = %d\n", p_link->link_up);
 		}
 	} else {
 		p_link->link_up = false;
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+			   "Link is down - disable dcbx\n");
+		p_hwfn->p_dcbx_info->get.operational.enabled = false;
 	}
 
 	p_link->full_duplex = true;
@@ -1414,6 +1659,7 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 		break;
 	default:
 		p_link->speed = 0;
+		p_link->link_up = 0;
 	}
 
 	/* We never store total line speed as p_link->speed is
@@ -1428,50 +1674,49 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 	min_bw = p_hwfn->mcp_info->func_info.bandwidth_min;
 
 	/* Max bandwidth configuration */
-	__ecore_configure_pf_max_bandwidth(p_hwfn, p_ptt,
-					   p_link, max_bw);
+	__ecore_configure_pf_max_bandwidth(p_hwfn, p_ptt, p_link, max_bw);
 
 	/* Min bandwidth configuration */
-	__ecore_configure_pf_min_bandwidth(p_hwfn, p_ptt,
-					   p_link, min_bw);
+	__ecore_configure_pf_min_bandwidth(p_hwfn, p_ptt, p_link, min_bw);
 	ecore_configure_vp_wfq_on_link_change(p_hwfn->p_dev, p_ptt,
 					      p_link->min_pf_rate);
 
 	p_link->an = !!(status & LINK_STATUS_AUTO_NEGOTIATE_ENABLED);
-	p_link->an_complete = !!(status & LINK_STATUS_AUTO_NEGOTIATE_COMPLETE);
+	p_link->an_complete = !!(status &
+				 LINK_STATUS_AUTO_NEGOTIATE_COMPLETE);
 	p_link->parallel_detection = !!(status &
-					 LINK_STATUS_PARALLEL_DETECTION_USED);
+					LINK_STATUS_PARALLEL_DETECTION_USED);
 	p_link->pfc_enabled = !!(status & LINK_STATUS_PFC_ENABLED);
 
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_1G_FD : 0;
+		(status & LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_1G_FD : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_1G_HD : 0;
+		(status & LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_1G_HD : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_10G_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_10G : 0;
+		(status & LINK_STATUS_LINK_PARTNER_10G_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_10G : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_20G_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_20G : 0;
+		(status & LINK_STATUS_LINK_PARTNER_20G_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_20G : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_25G_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_25G : 0;
+		(status & LINK_STATUS_LINK_PARTNER_25G_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_25G : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_40G_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_40G : 0;
+		(status & LINK_STATUS_LINK_PARTNER_40G_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_40G : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_50G_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_50G : 0;
+		(status & LINK_STATUS_LINK_PARTNER_50G_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_50G : 0;
 	p_link->partner_adv_speed |=
-	    (status & LINK_STATUS_LINK_PARTNER_100G_CAPABLE) ?
-	    ECORE_LINK_PARTNER_SPEED_100G : 0;
+		(status & LINK_STATUS_LINK_PARTNER_100G_CAPABLE) ?
+		ECORE_LINK_PARTNER_SPEED_100G : 0;
 
 	p_link->partner_tx_flow_ctrl_en =
-	    !!(status & LINK_STATUS_TX_FLOW_CONTROL_ENABLED);
+		!!(status & LINK_STATUS_TX_FLOW_CONTROL_ENABLED);
 	p_link->partner_rx_flow_ctrl_en =
-	    !!(status & LINK_STATUS_RX_FLOW_CONTROL_ENABLED);
+		!!(status & LINK_STATUS_RX_FLOW_CONTROL_ENABLED);
 
 	switch (status & LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK) {
 	case LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE:
@@ -1492,24 +1737,44 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
 	if (p_hwfn->mcp_info->capabilities & FW_MB_PARAM_FEATURE_SUPPORT_EEE)
 		ecore_mcp_read_eee_config(p_hwfn, p_ptt, p_link);
 
-	OSAL_LINK_UPDATE(p_hwfn);
+	if (p_hwfn->mcp_info->capabilities &
+		FW_MB_PARAM_FEATURE_SUPPORT_FEC_CONTROL) {
+		switch (status & LINK_STATUS_FEC_MODE_MASK) {
+		case LINK_STATUS_FEC_MODE_NONE:
+			p_link->fec_active = ECORE_MCP_FEC_NONE;
+			break;
+		case LINK_STATUS_FEC_MODE_FIRECODE_CL74:
+			p_link->fec_active = ECORE_MCP_FEC_FIRECODE;
+			break;
+		case LINK_STATUS_FEC_MODE_RS_CL91:
+			p_link->fec_active = ECORE_MCP_FEC_RS;
+			break;
+		default:
+			p_link->fec_active = ECORE_MCP_FEC_AUTO;
+		}
+	} else {
+		p_link->fec_active = ECORE_MCP_FEC_UNSUPPORTED;
+	}
+
+	OSAL_LINK_UPDATE(p_hwfn); /* @DPDK */
 out:
 	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->link_lock);
 }
 
 enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
-					struct ecore_ptt *p_ptt, bool b_up)
+					struct ecore_ptt *p_ptt,
+					bool b_up)
 {
 	struct ecore_mcp_link_params *params = &p_hwfn->mcp_info->link_input;
 	struct ecore_mcp_mb_params mb_params;
 	struct eth_phy_cfg phy_cfg;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 cmd;
+	u32 cmd, fec_bit = 0;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
 		if (b_up)
-			OSAL_LINK_UPDATE(p_hwfn);
+			OSAL_LINK_UPDATE(p_hwfn); /* @DPDK */
 		return ECORE_SUCCESS;
 	}
 #endif
@@ -1540,18 +1805,87 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 			phy_cfg.eee_cfg |= EEE_CFG_ADV_SPEED_1G;
 		if (params->eee.adv_caps & ECORE_EEE_10G_ADV)
 			phy_cfg.eee_cfg |= EEE_CFG_ADV_SPEED_10G;
-		phy_cfg.eee_cfg |= (params->eee.tx_lpi_timer <<
-				    EEE_TX_TIMER_USEC_OFFSET) &
-					EEE_TX_TIMER_USEC_MASK;
+		SET_MFW_FIELD(phy_cfg.eee_cfg, EEE_TX_TIMER_USEC,
+			      params->eee.tx_lpi_timer);
+	}
+
+	if (p_hwfn->mcp_info->capabilities &
+		FW_MB_PARAM_FEATURE_SUPPORT_FEC_CONTROL) {
+		if (params->fec & ECORE_MCP_FEC_NONE)
+			fec_bit |= FEC_FORCE_MODE_NONE;
+		else if (params->fec & ECORE_MCP_FEC_FIRECODE)
+			fec_bit |= FEC_FORCE_MODE_FIRECODE;
+		else if (params->fec & ECORE_MCP_FEC_RS)
+			fec_bit |= FEC_FORCE_MODE_RS;
+		else if (params->fec & ECORE_MCP_FEC_AUTO)
+			fec_bit |= FEC_FORCE_MODE_AUTO;
+		SET_MFW_FIELD(phy_cfg.fec_mode, FEC_FORCE_MODE, fec_bit);
+	}
+
+	if (p_hwfn->mcp_info->capabilities &
+		FW_MB_PARAM_FEATURE_SUPPORT_EXT_SPEED_FEC_CONTROL) {
+		u32 spd_mask = 0, val = params->ext_speed.forced_speed;
+
+		if (params->ext_speed.autoneg)
+			spd_mask |= ETH_EXT_SPEED_AN;
+		if (val & ECORE_EXT_SPEED_1G)
+			spd_mask |= ETH_EXT_SPEED_1G;
+		if (val & ECORE_EXT_SPEED_10G)
+			spd_mask |= ETH_EXT_SPEED_10G;
+		if (val & ECORE_EXT_SPEED_20G)
+			spd_mask |= ETH_EXT_SPEED_20G;
+		if (val & ECORE_EXT_SPEED_25G)
+			spd_mask |= ETH_EXT_SPEED_25G;
+		if (val & ECORE_EXT_SPEED_40G)
+			spd_mask |= ETH_EXT_SPEED_40G;
+		if (val & ECORE_EXT_SPEED_50G_R)
+			spd_mask |= ETH_EXT_SPEED_50G_BASE_R;
+		if (val & ECORE_EXT_SPEED_50G_R2)
+			spd_mask |= ETH_EXT_SPEED_50G_BASE_R2;
+		if (val & ECORE_EXT_SPEED_100G_R2)
+			spd_mask |= ETH_EXT_SPEED_100G_BASE_R2;
+		if (val & ECORE_EXT_SPEED_100G_R4)
+			spd_mask |= ETH_EXT_SPEED_100G_BASE_R4;
+		if (val & ECORE_EXT_SPEED_100G_P4)
+			spd_mask |= ETH_EXT_SPEED_100G_BASE_P4;
+		SET_MFW_FIELD(phy_cfg.extended_speed, ETH_EXT_SPEED, spd_mask);
+
+		spd_mask = 0;
+		val = params->ext_speed.advertised_speeds;
+		if (val & ECORE_EXT_SPEED_MASK_1G)
+			spd_mask |= ETH_EXT_ADV_SPEED_1G;
+		if (val & ECORE_EXT_SPEED_MASK_10G)
+			spd_mask |= ETH_EXT_ADV_SPEED_10G;
+		if (val & ECORE_EXT_SPEED_MASK_20G)
+			spd_mask |= ETH_EXT_ADV_SPEED_20G;
+		if (val & ECORE_EXT_SPEED_MASK_25G)
+			spd_mask |= ETH_EXT_ADV_SPEED_25G;
+		if (val & ECORE_EXT_SPEED_MASK_40G)
+			spd_mask |= ETH_EXT_ADV_SPEED_40G;
+		if (val & ECORE_EXT_SPEED_MASK_50G_R)
+			spd_mask |= ETH_EXT_ADV_SPEED_50G_BASE_R;
+		if (val & ECORE_EXT_SPEED_MASK_50G_R2)
+			spd_mask |= ETH_EXT_ADV_SPEED_50G_BASE_R2;
+		if (val & ECORE_EXT_SPEED_MASK_100G_R2)
+			spd_mask |= ETH_EXT_ADV_SPEED_100G_BASE_R2;
+		if (val & ECORE_EXT_SPEED_MASK_100G_R4)
+			spd_mask |= ETH_EXT_ADV_SPEED_100G_BASE_R4;
+		if (val & ECORE_EXT_SPEED_MASK_100G_P4)
+			spd_mask |= ETH_EXT_ADV_SPEED_100G_BASE_P4;
+		phy_cfg.extended_speed |= spd_mask;
+
+		SET_MFW_FIELD(phy_cfg.fec_mode, FEC_EXTENDED_MODE,
+			      params->ext_fec_mode);
 	}
 
 	p_hwfn->b_drv_link_init = b_up;
 
 	if (b_up)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
-			   "Configuring Link: Speed 0x%08x, Pause 0x%08x, adv_speed 0x%08x, loopback 0x%08x\n",
+			   "Configuring Link: Speed 0x%08x, Pause 0x%08x, adv_speed 0x%08x, loopback 0x%08x FEC 0x%08x, ext_speed 0x%08x\n",
 			   phy_cfg.speed, phy_cfg.pause, phy_cfg.adv_speed,
-			   phy_cfg.loopback_mode);
+			   phy_cfg.loopback_mode, phy_cfg.fec_mode,
+			   phy_cfg.extended_speed);
 	else
 		DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "Resetting link\n");
 
@@ -1559,6 +1893,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 	mb_params.cmd = cmd;
 	mb_params.p_data_src = &phy_cfg;
 	mb_params.data_src_size = sizeof(phy_cfg);
+	mb_params.flags = ECORE_MB_FLAG_CAN_SLEEP;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 
 	/* if mcp fails to respond we must abort */
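
The forced-speed and advertised-speed translations above are flag-for-flag
if-chains. A table-driven equivalent is sketched below (illustrative only,
not part of the patch; a real table must list exactly the
ECORE_EXT_SPEED_*/ETH_EXT_SPEED_* pairs used in the hunk):

struct ext_spd_map {
	u32 drv_flag;
	u32 mfw_flag;
};

static u32 ext_speed_xlate(u32 drv_mask, const struct ext_spd_map *map,
			   unsigned int count)
{
	u32 mfw_mask = 0;
	unsigned int i;

	for (i = 0; i < count; i++)
		if (drv_mask & map[i].drv_flag)
			mfw_mask |= map[i].mfw_flag;

	return mfw_mask;
}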
@@ -1595,7 +1930,7 @@ u32 ecore_get_process_kill_counter(struct ecore_hwfn *p_hwfn,
 	proc_kill_cnt = ecore_rd(p_hwfn, p_ptt,
 				 path_addr +
 				 OFFSETOF(struct public_path, process_kill)) &
-	    PROCESS_KILL_COUNTER_MASK;
+			PROCESS_KILL_COUNTER_MASK;
 
 	return proc_kill_cnt;
 }
@@ -1621,8 +1956,7 @@ static void ecore_mcp_handle_process_kill(struct ecore_hwfn *p_hwfn,
 
 	if (p_dev->recov_in_prog) {
 		DP_NOTICE(p_hwfn, false,
-			  "Ignoring the indication since a recovery"
-			  " process is already in progress\n");
+			  "Ignoring the indication since a recovery process is already in progress\n");
 		return;
 	}
 
@@ -1649,6 +1983,18 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 		stats_type = ECORE_MCP_LAN_STATS;
 		hsi_param = DRV_MSG_CODE_STATS_TYPE_LAN;
 		break;
+	case MFW_DRV_MSG_GET_FCOE_STATS:
+		stats_type = ECORE_MCP_FCOE_STATS;
+		hsi_param = DRV_MSG_CODE_STATS_TYPE_FCOE;
+		break;
+	case MFW_DRV_MSG_GET_ISCSI_STATS:
+		stats_type = ECORE_MCP_ISCSI_STATS;
+		hsi_param = DRV_MSG_CODE_STATS_TYPE_ISCSI;
+		break;
+	case MFW_DRV_MSG_GET_RDMA_STATS:
+		stats_type = ECORE_MCP_RDMA_STATS;
+		hsi_param = DRV_MSG_CODE_STATS_TYPE_RDMA;
+		break;
 	default:
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "Invalid protocol type %d\n", type);
@@ -1667,40 +2013,6 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
 		DP_ERR(p_hwfn, "Failed to send protocol stats, rc = %d\n", rc);
 }
 
-static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
-				    struct public_func *p_shmem_info)
-{
-	struct ecore_mcp_function_info *p_info;
-
-	p_info = &p_hwfn->mcp_info->func_info;
-
-	/* TODO - bandwidth min/max should have valid values of 1-100,
-	 * as well as some indication that the feature is disabled.
-	 * Until MFW/qlediag enforce those limitations, Assume THERE IS ALWAYS
-	 * limit and correct value to min `1' and max `100' if limit isn't in
-	 * range.
-	 */
-	p_info->bandwidth_min = (p_shmem_info->config &
-				 FUNC_MF_CFG_MIN_BW_MASK) >>
-	    FUNC_MF_CFG_MIN_BW_OFFSET;
-	if (p_info->bandwidth_min < 1 || p_info->bandwidth_min > 100) {
-		DP_INFO(p_hwfn,
-			"bandwidth minimum out of bounds [%02x]. Set to 1\n",
-			p_info->bandwidth_min);
-		p_info->bandwidth_min = 1;
-	}
-
-	p_info->bandwidth_max = (p_shmem_info->config &
-				 FUNC_MF_CFG_MAX_BW_MASK) >>
-	    FUNC_MF_CFG_MAX_BW_OFFSET;
-	if (p_info->bandwidth_max < 1 || p_info->bandwidth_max > 100) {
-		DP_INFO(p_hwfn,
-			"bandwidth maximum out of bounds [%02x]. Set to 100\n",
-			p_info->bandwidth_max);
-		p_info->bandwidth_max = 100;
-	}
-}
-
 static void
 ecore_mcp_update_bw(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 {
@@ -1708,7 +2020,10 @@ ecore_mcp_update_bw(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	struct public_func shmem_info;
 	u32 resp = 0, param = 0;
 
-	ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info, MCP_PF_ID(p_hwfn));
+	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->link_lock);
+
+	ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info,
+				 MCP_PF_ID(p_hwfn));
 
 	ecore_read_pf_bandwidth(p_hwfn, &shmem_info);
 
@@ -1718,6 +2033,10 @@ ecore_mcp_update_bw(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 
 	ecore_configure_pf_max_bandwidth(p_hwfn->p_dev, p_info->bandwidth_max);
 
+	OSAL_BW_UPDATE(p_hwfn, p_ptt);
+
+	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->link_lock);
+
 	/* Acknowledge the MFW */
 	ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BW_UPDATE_ACK, 0, &resp,
 		      &param);
@@ -1735,7 +2054,7 @@ static void ecore_mcp_update_stag(struct ecore_hwfn *p_hwfn,
 	p_hwfn->mcp_info->func_info.ovlan = (u16)shmem_info.ovlan_stag &
 						 FUNC_MF_CFG_OV_STAG_MASK;
 	p_hwfn->hw_info.ovlan = p_hwfn->mcp_info->func_info.ovlan;
-	if (OSAL_GET_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits)) {
+	if (OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits)) {
 		if (p_hwfn->hw_info.ovlan != ECORE_MCP_VLAN_UNSET) {
 			ecore_wr(p_hwfn, p_ptt, NIG_REG_LLH_FUNC_TAG_VALUE,
 				 p_hwfn->hw_info.ovlan);
@@ -1766,29 +2085,64 @@ static void ecore_mcp_update_stag(struct ecore_hwfn *p_hwfn,
 		      &resp, &param);
 }
 
-static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn)
+static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn,
+					 struct ecore_ptt *p_ptt)
 {
+	u32 addr, global_offsize, global_addr;
+	u32 nvm_cfg_addr, nvm_cfg1_offset;
+	u32 phy_mod_temp, gpio_value;
+	u32 fan_fail_enforcement;
+	u32 int_temp, ext_temp;
+	u32 gpio0;
+
 	/* A single notification should be sent to upper driver in CMT mode */
 	if (p_hwfn != ECORE_LEADING_HWFN(p_hwfn->p_dev))
 		return;
 
+	addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+				    PUBLIC_GLOBAL);
+	global_offsize = ecore_rd(p_hwfn, p_ptt, addr);
+	global_addr = SECTION_ADDR(global_offsize, 0);
+	int_temp = ecore_rd(p_hwfn, p_ptt, global_addr +
+			    OFFSETOF(struct public_global,
+				     internal_temperature));
+
+	addr = global_addr + OFFSETOF(struct public_global,
+				      external_temperature);
+	ext_temp = ecore_rd(p_hwfn, p_ptt, addr);
+
+	addr = p_hwfn->mcp_info->port_addr + OFFSETOF(struct public_port,
+						      phy_module_temperature);
+	phy_mod_temp = ecore_rd(p_hwfn, p_ptt, addr);
+
+	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
+	nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt, nvm_cfg_addr + 4);
+	addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
+		OFFSETOF(struct nvm_cfg1, glob) +
+		OFFSETOF(struct nvm_cfg1_glob, generic_cont0);
+	fan_fail_enforcement = ecore_rd(p_hwfn, p_ptt, addr);
+	fan_fail_enforcement &= NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK;
+
+	if (fan_fail_enforcement ==
+	    NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_ENABLED) {
+		ecore_mcp_gpio_read(p_hwfn, p_ptt, MISCS_REG_GPIO_VAL,
+				    &gpio_value);
+		gpio0 = gpio_value & 1;
+		DP_NOTICE(p_hwfn, false, "Fan Failure(Status = %d)\n",
+			  gpio0);
+	}
+
 	DP_NOTICE(p_hwfn, false,
-		  "Fan failure was detected on the network interface card"
-		  " and it's going to be shut down.\n");
+		  "Fan failure: Internal Temp=%dC External Temp=%dC Phy Module Temp=%dC\n",
+		  int_temp, ext_temp, phy_mod_temp);
+	DP_NOTICE(p_hwfn, false,
+		  "Fan failure was detected on the network interface card and it's going to be shut down.\n");
 
-	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FAN_FAIL);
+	ecore_hw_err_notify(p_hwfn, p_ptt, ECORE_HW_ERR_FAN_FAIL, OSAL_NULL, 0);
 }
 
-struct ecore_mdump_cmd_params {
-	u32 cmd;
-	void *p_data_src;
-	u8 data_src_size;
-	void *p_data_dst;
-	u8 data_dst_size;
-	u32 mcp_resp;
-};
 
-static enum _ecore_status_t
+enum _ecore_status_t
 ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		    struct ecore_mdump_cmd_params *p_mdump_cmd_params)
 {
@@ -1807,6 +2161,7 @@ ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		return rc;
 
 	p_mdump_cmd_params->mcp_resp = mb_params.mcp_resp;
+	p_mdump_cmd_params->mcp_param = mb_params.mcp_param;
 
 	if (p_mdump_cmd_params->mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
 		DP_INFO(p_hwfn,
@@ -1851,11 +2206,22 @@ enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt)
 {
 	struct ecore_mdump_cmd_params mdump_cmd_params;
+	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
 	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_TRIGGER;
 
-	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND) {
+		DP_INFO(p_hwfn,
+			"The mdump command failed because there is no HW_DUMP image\n");
+		rc = ECORE_INVAL;
+	} else if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_INFO(p_hwfn,
+			"The mdump command failed because of an MFW error\n");
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+	return rc;
 }
 
 static enum _ecore_status_t
@@ -1988,17 +2354,260 @@ enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
 	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
 }
 
-static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
-					    struct ecore_ptt *p_ptt)
+enum _ecore_status_t ecore_mcp_hw_dump_trigger(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt)
 {
-	struct ecore_mdump_retain_data mdump_retain;
+	struct ecore_mdump_cmd_params mdump_cmd_params;
 	enum _ecore_status_t rc;
 
-	/* In CMT mode - no need for more than a single acknowledgment to the
-	 * MFW, and no more than a single notification to the upper driver.
-	 */
-	if (p_hwfn != ECORE_LEADING_HWFN(p_hwfn->p_dev))
-		return;
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_HW_DUMP_TRIGGER;
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	if (mdump_cmd_params.mcp_resp == FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND) {
+		DP_INFO(p_hwfn,
+			"The hw_dump command failed because there is no HW_DUMP image\n");
+		rc = ECORE_INVAL;
+	} else if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_INFO(p_hwfn,
+			"The hw_dump command failed because of an MFW error\n");
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+	return rc;
+}
+
+enum _ecore_status_t ecore_mdump2_req_offsize(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
+					      u32 *buff_byte_size,
+					      u32 *buff_byte_addr)
+{
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GEN_LINK_DUMP;
+
+	SET_MFW_FIELD(mdump_cmd_params.cmd,
+		      DRV_MSG_CODE_MDUMP_USE_DRIVER_BUF, 1);
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_INFO(p_hwfn,
+			"The mdump2 command failed because of MFW error\n");
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+
+	*buff_byte_size = SECTION_SIZE(mdump_cmd_params.mcp_param);
+	*buff_byte_addr = SECTION_ADDR(mdump_cmd_params.mcp_param, 0);
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mdump2_req_free(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt)
+{
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_FREE_DRIVER_BUF;
+
+	rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+	if (mdump_cmd_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_INFO(p_hwfn,
+			"The mdump2 command failed because of MFW error\n");
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+
+	return rc;
+}
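
The response handling in ecore_mcp_mdump_trigger(), ecore_mcp_hw_dump_trigger()
and the two mdump2 wrappers above is identical. A possible shared helper (a
sketch, not part of the patch):

static enum _ecore_status_t
ecore_mdump_resp_to_rc(struct ecore_hwfn *p_hwfn, u32 mcp_resp,
		       const char *cmd_name)
{
	if (mcp_resp == FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND) {
		DP_INFO(p_hwfn,
			"The %s command failed because there is no HW_DUMP image\n",
			cmd_name);
		return ECORE_INVAL;
	}

	if (mcp_resp != FW_MSG_CODE_OK) {
		DP_INFO(p_hwfn,
			"The %s command failed because of an MFW error\n",
			cmd_name);
		return ECORE_UNKNOWN_ERROR;
	}

	return ECORE_SUCCESS;
}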
+
+static enum _ecore_status_t
+ecore_mcp_get_disabled_attns(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     u32 *disabled_attns)
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct get_att_ctrl_stc get_att_ctrl;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&get_att_ctrl, sizeof(get_att_ctrl));
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_GET_ATTN_CONTROL;
+	mb_params.p_data_dst = &get_att_ctrl;
+	mb_params.data_dst_size = sizeof(get_att_ctrl);
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false, "GET_ATTN_CONTROL failed, rc = %d\n",
+			  rc);
+		return rc;
+	}
+	if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"GET_ATTN_CONTROL command is not supported\n");
+		return ECORE_NOTIMPL;
+	}
+
+	*disabled_attns = get_att_ctrl.disabled_attns;
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_mcp_get_attn_control(struct ecore_hwfn *p_hwfn,
+			   struct ecore_ptt *p_ptt,
+			   enum MFW_DRV_MSG_TYPE type, u8 *enabled)
+{
+	u32 disabled_attns = 0;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_get_disabled_attns(p_hwfn, p_ptt, &disabled_attns);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*enabled = !(disabled_attns & (1 << type));
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_mcp_set_attn_control(struct ecore_hwfn *p_hwfn,
+			   struct ecore_ptt *p_ptt,
+			   enum MFW_DRV_MSG_TYPE type, u8 enable)
+{
+	struct ecore_mcp_mb_params mb_params;
+	u32 disabled_attns = 0;
+	u8 is_enabled;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_get_disabled_attns(p_hwfn, p_ptt, &disabled_attns);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	is_enabled = (~disabled_attns >> type) & 0x1;
+	if (is_enabled == enable)
+		return ECORE_SUCCESS;
+
+	if (enable)
+		disabled_attns &= ~(1 << type);
+	else
+		disabled_attns |= (1 << type);
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_SET_ATTN_CONTROL;
+	mb_params.p_data_src = &disabled_attns;
+	mb_params.data_src_size = sizeof(disabled_attns);
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false, "SET_ATTN_CONTROL failed, rc = %d\n",
+			  rc);
+		return rc;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_mcp_is_tx_flt_attn_enabled(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt,
+				 u8 *enabled)
+{
+	return ecore_mcp_get_attn_control(p_hwfn, p_ptt,
+					  MFW_DRV_MSG_XCVR_TX_FAULT,
+					  enabled);
+}
+
+enum _ecore_status_t
+ecore_mcp_is_rx_los_attn_enabled(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt,
+				 u8 *enabled)
+{
+	return ecore_mcp_get_attn_control(p_hwfn, p_ptt,
+					  MFW_DRV_MSG_XCVR_RX_LOS,
+					  enabled);
+}
+
+enum _ecore_status_t
+ecore_mcp_enable_tx_flt_attn(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     u8 enable)
+{
+	return ecore_mcp_set_attn_control(p_hwfn, p_ptt,
+					  MFW_DRV_MSG_XCVR_TX_FAULT,
+					  enable);
+}
+
+enum _ecore_status_t
+ecore_mcp_enable_rx_los_attn(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     u8 enable)
+{
+	return ecore_mcp_set_attn_control(p_hwfn, p_ptt,
+					  MFW_DRV_MSG_XCVR_RX_LOS,
+					  enable);
+}
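
Usage sketch for the attention-control wrappers above (not part of the patch;
note that ecore_mcp_set_attn_control() already skips the SET mailbox when the
requested state matches the current one):

u8 tx_flt_enabled = 0;

if (ecore_mcp_is_tx_flt_attn_enabled(p_hwfn, p_ptt,
				     &tx_flt_enabled) == ECORE_SUCCESS)
	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "TX fault attn %sabled\n",
		   tx_flt_enabled ? "en" : "dis");

/* Enable transmit-fault attentions regardless of the current state */
(void)ecore_mcp_enable_tx_flt_attn(p_hwfn, p_ptt, 1);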
+
+enum _ecore_status_t
+ecore_mcp_set_trace_filter(struct ecore_hwfn *p_hwfn,
+			   u32 *dbg_level, u32 *dbg_modules)
+{
+	struct trace_filter_stc trace_filter;
+	struct ecore_ptt  *p_ptt;
+	u32 resp = 0, param;
+	enum _ecore_status_t rc;
+
+	trace_filter.level = *dbg_level;
+	trace_filter.modules = *dbg_modules;
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_BUSY;
+
+	rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_TRACE_FILTER,
+				  sizeof(trace_filter), &resp,
+				  &param, sizeof(trace_filter),
+				  (u32 *)&trace_filter, false);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, false, "MCP command rc = %d\n", rc);
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_mcp_restore_trace_filter(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_ptt *p_ptt;
+	u32 resp = 0, param;
+	enum _ecore_status_t rc;
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return ECORE_BUSY;
+
+	rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt,
+				  DRV_MSG_CODE_RESTORE_TRACE_FILTER,
+				  0, &resp, &param, 0, OSAL_NULL, false);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, false, "MCP command rc = %d\n", rc);
+	ecore_ptt_release(p_hwfn, p_ptt);
+
+	return rc;
+}
+
+static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt)
+{
+	struct ecore_mdump_retain_data mdump_retain;
+	enum _ecore_status_t rc;
+
+	/* In CMT mode - no need for more than a single acknowledgment to the
+	 * MFW, and no more than a single notification to the upper driver.
+	 */
+	if (p_hwfn != ECORE_LEADING_HWFN(p_hwfn->p_dev))
+		return;
 
 	rc = ecore_mcp_mdump_get_retain(p_hwfn, p_ptt, &mdump_retain);
 	if (rc == ECORE_SUCCESS && mdump_retain.valid) {
@@ -2020,7 +2629,7 @@ static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
 	DP_NOTICE(p_hwfn, false,
 		  "Acknowledging the notification to not allow the MFW crash dump [driver debug data collection is preferable]\n");
 	ecore_mcp_mdump_ack(p_hwfn, p_ptt);
-	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN);
+	ecore_hw_err_notify(p_hwfn, p_ptt, ECORE_HW_ERR_HW_ATTN, OSAL_NULL, 0);
 }
 
 void
@@ -2029,7 +2638,7 @@ ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	struct public_func shmem_info;
 	u32 port_cfg, val;
 
-	if (!OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits))
+	if (!OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits))
 		return;
 
 	OSAL_MEMSET(&p_hwfn->ufp_info, 0, sizeof(p_hwfn->ufp_info));
@@ -2037,17 +2646,19 @@ ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 			    OFFSETOF(struct public_port, oem_cfg_port));
 	val = GET_MFW_FIELD(port_cfg, OEM_CFG_CHANNEL_TYPE);
 	if (val != OEM_CFG_CHANNEL_TYPE_STAGGED)
-		DP_NOTICE(p_hwfn, false, "Incorrect UFP Channel type  %d\n",
-			  val);
+		DP_NOTICE(p_hwfn, false, "Incorrect UFP Channel type  %d port_id 0x%02x\n",
+			  val, MFW_PORT(p_hwfn));
 
 	val = GET_MFW_FIELD(port_cfg, OEM_CFG_SCHED_TYPE);
-	if (val == OEM_CFG_SCHED_TYPE_ETS)
+	if (val == OEM_CFG_SCHED_TYPE_ETS) {
 		p_hwfn->ufp_info.mode = ECORE_UFP_MODE_ETS;
-	else if (val == OEM_CFG_SCHED_TYPE_VNIC_BW)
+	} else if (val == OEM_CFG_SCHED_TYPE_VNIC_BW) {
 		p_hwfn->ufp_info.mode = ECORE_UFP_MODE_VNIC_BW;
-	else
-		DP_NOTICE(p_hwfn, false, "Unknown UFP scheduling mode %d\n",
-			  val);
+	} else {
+		p_hwfn->ufp_info.mode = ECORE_UFP_MODE_UNKNOWN;
+		DP_NOTICE(p_hwfn, false, "Unknown UFP scheduling mode %d port_id 0x%02x\n",
+			  val, MFW_PORT(p_hwfn));
+	}
 
 	ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info,
 				 MCP_PF_ID(p_hwfn));
@@ -2055,18 +2666,20 @@ ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	p_hwfn->ufp_info.tc = (u8)val;
 	val = GET_MFW_FIELD(shmem_info.oem_cfg_func,
 			    OEM_CFG_FUNC_HOST_PRI_CTRL);
-	if (val == OEM_CFG_FUNC_HOST_PRI_CTRL_VNIC)
+	if (val == OEM_CFG_FUNC_HOST_PRI_CTRL_VNIC) {
 		p_hwfn->ufp_info.pri_type = ECORE_UFP_PRI_VNIC;
-	else if (val == OEM_CFG_FUNC_HOST_PRI_CTRL_OS)
+	} else if (val == OEM_CFG_FUNC_HOST_PRI_CTRL_OS) {
 		p_hwfn->ufp_info.pri_type = ECORE_UFP_PRI_OS;
-	else
-		DP_NOTICE(p_hwfn, false, "Unknown Host priority control %d\n",
-			  val);
+	} else {
+		p_hwfn->ufp_info.pri_type = ECORE_UFP_PRI_UNKNOWN;
+		DP_NOTICE(p_hwfn, false, "Unknown Host priority control %d port_id 0x%02x\n",
+			  val, MFW_PORT(p_hwfn));
+	}
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "UFP shmem config: mode = %d tc = %d pri_type = %d\n",
-		   p_hwfn->ufp_info.mode, p_hwfn->ufp_info.tc,
-		   p_hwfn->ufp_info.pri_type);
+	DP_NOTICE(p_hwfn, false,
+		  "UFP shmem config: mode = %d tc = %d pri_type = %d port_id 0x%02x\n",
+		  p_hwfn->ufp_info.mode, p_hwfn->ufp_info.tc,
+		  p_hwfn->ufp_info.pri_type, MFW_PORT(p_hwfn));
 }
 
 static enum _ecore_status_t
@@ -2076,13 +2689,16 @@ ecore_mcp_handle_ufp_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 
 	if (p_hwfn->ufp_info.mode == ECORE_UFP_MODE_VNIC_BW) {
 		p_hwfn->qm_info.ooo_tc = p_hwfn->ufp_info.tc;
-		p_hwfn->hw_info.offload_tc = p_hwfn->ufp_info.tc;
-
-		ecore_qm_reconf(p_hwfn, p_ptt);
-	} else {
+		ecore_hw_info_set_offload_tc(&p_hwfn->hw_info,
+					     p_hwfn->ufp_info.tc);
+		ecore_qm_reconf_intr(p_hwfn, p_ptt);
+	} else if (p_hwfn->ufp_info.mode == ECORE_UFP_MODE_ETS) {
 		/* Merge UFP TC with the dcbx TC data */
 		ecore_dcbx_mib_update_event(p_hwfn, p_ptt,
 					    ECORE_DCBX_OPERATIONAL_MIB);
+	} else {
+		DP_ERR(p_hwfn, "Invalid sched type, discard the UFP config\n");
+		return ECORE_INVAL;
 	}
 
 	/* update storm FW with negotiation results */
@@ -2118,6 +2734,19 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 			   "Msg [%d] - old CMD 0x%02x, new CMD 0x%02x\n",
 			   i, info->mfw_mb_shadow[i], info->mfw_mb_cur[i]);
 
+		OSAL_SPIN_LOCK(&p_hwfn->mcp_info->unload_lock);
+		if (OSAL_TEST_BIT(ECORE_MCP_BYPASS_PROC_BIT,
+				  &p_hwfn->mcp_info->mcp_handling_status)) {
+			OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->unload_lock);
+			DP_INFO(p_hwfn,
+				"Msg [%d] is bypassed on unload flow\n", i);
+			continue;
+		}
+
+		OSAL_SET_BIT(ECORE_MCP_IN_PROCESSING_BIT,
+			     &p_hwfn->mcp_info->mcp_handling_status);
+		OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->unload_lock);
+
 		switch (i) {
 		case MFW_DRV_MSG_LINK_CHANGE:
 			ecore_mcp_handle_link_change(p_hwfn, p_ptt, false);
@@ -2149,6 +2778,12 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 		case MFW_DRV_MSG_TRANSCEIVER_STATE_CHANGE:
 			ecore_mcp_handle_transceiver_change(p_hwfn, p_ptt);
 			break;
+		case MFW_DRV_MSG_XCVR_TX_FAULT:
+			ecore_mcp_handle_transceiver_tx_fault(p_hwfn);
+			break;
+		case MFW_DRV_MSG_XCVR_RX_LOS:
+			ecore_mcp_handle_transceiver_rx_los(p_hwfn);
+			break;
 		case MFW_DRV_MSG_ERROR_RECOVERY:
 			ecore_mcp_handle_process_kill(p_hwfn, p_ptt);
 			break;
@@ -2165,15 +2800,21 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 			ecore_mcp_update_stag(p_hwfn, p_ptt);
 			break;
 		case MFW_DRV_MSG_FAILURE_DETECTED:
-			ecore_mcp_handle_fan_failure(p_hwfn);
+			ecore_mcp_handle_fan_failure(p_hwfn, p_ptt);
 			break;
 		case MFW_DRV_MSG_CRITICAL_ERROR_OCCURRED:
 			ecore_mcp_handle_critical_error(p_hwfn, p_ptt);
 			break;
+		case MFW_DRV_MSG_GET_TLV_REQ:
+			OSAL_MFW_TLV_REQ(p_hwfn);
+			break;
 		default:
 			DP_INFO(p_hwfn, "Unimplemented MFW message %d\n", i);
 			rc = ECORE_INVAL;
 		}
+
+		OSAL_CLEAR_BIT(ECORE_MCP_IN_PROCESSING_BIT,
+			       &p_hwfn->mcp_info->mcp_handling_status);
 	}
 
 	/* ACK everything */
@@ -2188,9 +2829,8 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
 	}
 
 	if (!found) {
-		DP_NOTICE(p_hwfn, false,
-			  "Received an MFW message indication but no"
-			  " new message!\n");
+		DP_INFO(p_hwfn,
+			"Received an MFW message indication but no new message!\n");
 		rc = ECORE_INVAL;
 	}
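
The bypass/in-processing bits added to the event loop above imply a
mirror-image sequence on the unload path. A sketch of that counterpart
(assumed; the unload-side code is not part of this hunk):

/* Unload side: block new handlers, then drain any handler that is
 * already past the bypass check.
 */
OSAL_SPIN_LOCK(&p_hwfn->mcp_info->unload_lock);
OSAL_SET_BIT(ECORE_MCP_BYPASS_PROC_BIT,
	     &p_hwfn->mcp_info->mcp_handling_status);
OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->unload_lock);

while (OSAL_TEST_BIT(ECORE_MCP_IN_PROCESSING_BIT,
		     &p_hwfn->mcp_info->mcp_handling_status))
	OSAL_MSLEEP(1);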
 
@@ -2229,60 +2869,57 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
 	}
 
 	global_offsize = ecore_rd(p_hwfn, p_ptt,
-				  SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->
-						       public_base,
-						       PUBLIC_GLOBAL));
-	*p_mfw_ver =
-	    ecore_rd(p_hwfn, p_ptt,
-		     SECTION_ADDR(global_offsize,
-				  0) + OFFSETOF(struct public_global, mfw_ver));
+			  SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+					       PUBLIC_GLOBAL));
+	*p_mfw_ver = ecore_rd(p_hwfn, p_ptt,
+			SECTION_ADDR(global_offsize, 0) +
+			OFFSETOF(struct public_global, mfw_ver));
 
 	if (p_running_bundle_id != OSAL_NULL) {
 		*p_running_bundle_id = ecore_rd(p_hwfn, p_ptt,
-						SECTION_ADDR(global_offsize,
-							     0) +
-						OFFSETOF(struct public_global,
-							 running_bundle_id));
+				SECTION_ADDR(global_offsize, 0) +
+				OFFSETOF(struct public_global,
+					 running_bundle_id));
 	}
 
 	return ECORE_SUCCESS;
 }
 
-int ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
-			  struct ecore_ptt *p_ptt, u32 *p_mbi_ver)
+enum _ecore_status_t ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u32 *p_mbi_ver)
 {
 	u32 nvm_cfg_addr, nvm_cfg1_offset, mbi_ver_addr;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
 		DP_INFO(p_hwfn, "Emulation: Can't get MBI version\n");
-		return -EOPNOTSUPP;
+		return ECORE_NOTIMPL;
 	}
 #endif
 
 	if (IS_VF(p_hwfn->p_dev))
-		return -EINVAL;
+		return ECORE_INVAL;
 
 	/* Read the address of the nvm_cfg */
 	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
 	if (!nvm_cfg_addr) {
 		DP_NOTICE(p_hwfn, false, "Shared memory not initialized\n");
-		return -EINVAL;
+		return ECORE_INVAL;
 	}
 
 	/* Read the offset of nvm_cfg1 */
 	nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt, nvm_cfg_addr + 4);
 
 	mbi_ver_addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
-	    offsetof(struct nvm_cfg1, glob) + offsetof(struct nvm_cfg1_glob,
-						       mbi_version);
-	*p_mbi_ver =
-	    ecore_rd(p_hwfn, p_ptt,
-		     mbi_ver_addr) & (NVM_CFG1_GLOB_MBI_VERSION_0_MASK |
-				      NVM_CFG1_GLOB_MBI_VERSION_1_MASK |
-				      NVM_CFG1_GLOB_MBI_VERSION_2_MASK);
+		       OFFSETOF(struct nvm_cfg1, glob) +
+		       OFFSETOF(struct nvm_cfg1_glob, mbi_version);
+	*p_mbi_ver = ecore_rd(p_hwfn, p_ptt, mbi_ver_addr) &
+		     (NVM_CFG1_GLOB_MBI_VERSION_0_MASK |
+		      NVM_CFG1_GLOB_MBI_VERSION_1_MASK |
+		      NVM_CFG1_GLOB_MBI_VERSION_2_MASK);
 
-	return 0;
+	return ECORE_SUCCESS;
 }
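
Usage sketch for the reworked ecore_mcp_get_mbi_ver() (not part of the patch;
the major/minor/revision split assumes the NVM_CFG1_GLOB_MBI_VERSION_{0,1,2}
fields occupy bits 0-7, 8-15 and 16-23 respectively):

u32 mbi_ver = 0;

if (ecore_mcp_get_mbi_ver(p_hwfn, p_ptt, &mbi_ver) == ECORE_SUCCESS)
	DP_INFO(p_hwfn, "MBI version %d.%d.%d\n",
		(mbi_ver >> 16) & 0xff, (mbi_ver >> 8) & 0xff,
		mbi_ver & 0xff);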
 
 enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
@@ -2322,44 +2959,55 @@ enum _ecore_status_t ecore_mcp_get_transceiver_data(struct ecore_hwfn *p_hwfn,
 						    u32 *p_transceiver_type)
 {
 	u32 transceiver_info;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
+	*p_transceiver_type = ETH_TRANSCEIVER_TYPE_NONE;
+	*p_transceiver_state = ETH_TRANSCEIVER_STATE_UPDATING;
 
 	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
 	if (!ecore_mcp_is_init(p_hwfn)) {
+#ifndef ASIC_ONLY
+		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+			DP_INFO(p_hwfn,
+				"Emulation: Can't get transceiver data\n");
+			return ECORE_NOTIMPL;
+		}
+#endif
 		DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
 		return ECORE_BUSY;
 	}
 
-	*p_transceiver_type = ETH_TRANSCEIVER_TYPE_NONE;
-	*p_transceiver_state = ETH_TRANSCEIVER_STATE_UPDATING;
-
 	transceiver_info = ecore_rd(p_hwfn, p_ptt,
 				    p_hwfn->mcp_info->port_addr +
 				    offsetof(struct public_port,
 				    transceiver_data));
-
 	*p_transceiver_state = GET_MFW_FIELD(transceiver_info,
 					     ETH_TRANSCEIVER_STATE);
 
 	if (*p_transceiver_state == ETH_TRANSCEIVER_STATE_PRESENT) {
 		*p_transceiver_type = GET_MFW_FIELD(transceiver_info,
-					    ETH_TRANSCEIVER_TYPE);
+						    ETH_TRANSCEIVER_TYPE);
 	} else {
 		*p_transceiver_type = ETH_TRANSCEIVER_TYPE_UNKNOWN;
 	}
 
-	return rc;
+	return ECORE_SUCCESS;
 }
 
 static int is_transceiver_ready(u32 transceiver_state, u32 transceiver_type)
 {
 	if ((transceiver_state & ETH_TRANSCEIVER_STATE_PRESENT) &&
 	    ((transceiver_state & ETH_TRANSCEIVER_STATE_UPDATING) == 0x0) &&
-	    (transceiver_type != ETH_TRANSCEIVER_TYPE_NONE))
+	    (transceiver_type != ETH_TRANSCEIVER_TYPE_NONE)) {
 		return 1;
+	}
 
 	return 0;
 }
@@ -2368,14 +3016,17 @@ enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt,
 						u32 *p_speed_mask)
 {
-	u32 transceiver_type = ETH_TRANSCEIVER_TYPE_NONE, transceiver_state;
-
-	ecore_mcp_get_transceiver_data(p_hwfn, p_ptt, &transceiver_state,
-				       &transceiver_type);
+	u32 transceiver_type, transceiver_state;
+	enum _ecore_status_t rc;
 
+	rc = ecore_mcp_get_transceiver_data(p_hwfn, p_ptt, &transceiver_state,
+					    &transceiver_type);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-	if (is_transceiver_ready(transceiver_state, transceiver_type) == 0)
+	if (is_transceiver_ready(transceiver_state, transceiver_type) == 0) {
 		return ECORE_INVAL;
+	}
 
 	switch (transceiver_type) {
 	case ETH_TRANSCEIVER_TYPE_1G_LX:
@@ -2431,6 +3082,11 @@ enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
 			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
 			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
 		break;
+	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_25G_SR:
+	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_25G_LR:
+		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G |
+				NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G;
+		break;
 
 	case ETH_TRANSCEIVER_TYPE_40G_CR4:
 	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_CR:
@@ -2466,12 +3122,14 @@ enum _ecore_status_t ecore_mcp_trans_speed_mask(struct ecore_hwfn *p_hwfn,
 		break;
 
 	case ETH_TRANSCEIVER_TYPE_10G_BASET:
+	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_1G_10G_SR:
+	case ETH_TRANSCEIVER_TYPE_MULTI_RATE_1G_10G_LR:
 		*p_speed_mask = NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G |
 			NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G;
 		break;
 
 	default:
-		DP_INFO(p_hwfn, "Unknown transcevier type 0x%x\n",
+		DP_INFO(p_hwfn, "Unknown transceiver type 0x%x\n",
 			transceiver_type);
 		*p_speed_mask = 0xff;
 		break;
@@ -2485,24 +3143,29 @@ enum _ecore_status_t ecore_mcp_get_board_config(struct ecore_hwfn *p_hwfn,
 						u32 *p_board_config)
 {
 	u32 nvm_cfg_addr, nvm_cfg1_offset, port_cfg_addr;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
 	if (!ecore_mcp_is_init(p_hwfn)) {
+#ifndef ASIC_ONLY
+		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+			DP_INFO(p_hwfn, "Emulation: Can't get board config\n");
+			return ECORE_NOTIMPL;
+		}
+#endif
 		DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
 		return ECORE_BUSY;
 	}
 	if (!p_ptt) {
 		*p_board_config = NVM_CFG1_PORT_PORT_TYPE_UNDEFINED;
-		rc = ECORE_INVAL;
+		return ECORE_INVAL;
 	} else {
 		nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt,
-					MISC_REG_GEN_PURP_CR0);
+				MISC_REG_GEN_PURP_CR0);
 		nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt,
-					   nvm_cfg_addr + 4);
+				nvm_cfg_addr + 4);
 		port_cfg_addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
 			offsetof(struct nvm_cfg1, port[MFW_PORT(p_hwfn)]);
 		*p_board_config  =  ecore_rd(p_hwfn, p_ptt,
@@ -2511,23 +3174,28 @@ enum _ecore_status_t ecore_mcp_get_board_config(struct ecore_hwfn *p_hwfn,
 					     board_cfg));
 	}
 
-	return rc;
+	return ECORE_SUCCESS;
 }
 
-/* @DPDK */
 /* Old MFW has a global configuration for all PFs regarding RDMA support */
 static void
 ecore_mcp_get_shmem_proto_legacy(struct ecore_hwfn *p_hwfn,
 				 enum ecore_pci_personality *p_proto)
 {
-	*p_proto = ECORE_PCI_ETH;
+	/* There wasn't ever a legacy MFW that published iwarp.
+	 * So at this point, this is either plain l2 or RoCE.
+	 */
+	if (OSAL_TEST_BIT(ECORE_DEV_CAP_ROCE,
+			  &p_hwfn->hw_info.device_capabilities))
+		*p_proto = ECORE_PCI_ETH_ROCE;
+	else
+		*p_proto = ECORE_PCI_ETH;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
 		   "According to Legacy capabilities, L2 personality is %08x\n",
 		   (u32)*p_proto);
 }
 
-/* @DPDK */
 static enum _ecore_status_t
 ecore_mcp_get_shmem_proto_mfw(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt,
@@ -2536,6 +3204,37 @@ ecore_mcp_get_shmem_proto_mfw(struct ecore_hwfn *p_hwfn,
 	u32 resp = 0, param = 0;
 	enum _ecore_status_t rc;
 
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt,
+			 DRV_MSG_CODE_GET_PF_RDMA_PROTOCOL, 0, &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+	if (resp != FW_MSG_CODE_OK) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
+			   "MFW lacks support for command; Returns %08x\n",
+			   resp);
+		return ECORE_INVAL;
+	}
+
+	switch (param) {
+	case FW_MB_PARAM_GET_PF_RDMA_NONE:
+		*p_proto = ECORE_PCI_ETH;
+		break;
+	case FW_MB_PARAM_GET_PF_RDMA_ROCE:
+		*p_proto = ECORE_PCI_ETH_ROCE;
+		break;
+	case FW_MB_PARAM_GET_PF_RDMA_IWARP:
+		*p_proto = ECORE_PCI_ETH_IWARP;
+		break;
+	case FW_MB_PARAM_GET_PF_RDMA_BOTH:
+		*p_proto = ECORE_PCI_ETH_RDMA;
+		break;
+	default:
+		DP_NOTICE(p_hwfn, true,
+			  "MFW answers GET_PF_RDMA_PROTOCOL but param is %08x\n",
+			  param);
+		return ECORE_INVAL;
+	}
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IFUP,
 		   "According to capabilities, L2 personality is %08x [resp %08x param %08x]\n",
 		   (u32)*p_proto, resp, param);
@@ -2556,6 +3255,16 @@ ecore_mcp_get_shmem_proto(struct ecore_hwfn *p_hwfn,
 		    ECORE_SUCCESS)
 			ecore_mcp_get_shmem_proto_legacy(p_hwfn, p_proto);
 		break;
+	case FUNC_MF_CFG_PROTOCOL_NVMETCP:
+	case FUNC_MF_CFG_PROTOCOL_ISCSI:
+		*p_proto = ECORE_PCI_ISCSI;
+		break;
+	case FUNC_MF_CFG_PROTOCOL_FCOE:
+		*p_proto = ECORE_PCI_FCOE;
+		break;
+	case FUNC_MF_CFG_PROTOCOL_ROCE:
+		DP_NOTICE(p_hwfn, true, "RoCE personality is not a valid value!\n");
+		/* Fallthrough */
 	default:
 		rc = ECORE_INVAL;
 	}
@@ -2569,7 +3278,8 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_function_info *info;
 	struct public_func shmem_info;
 
-	ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info, MCP_PF_ID(p_hwfn));
+	ecore_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info,
+				 MCP_PF_ID(p_hwfn));
 	info = &p_hwfn->mcp_info->func_info;
 
 	info->pause_on_host = (shmem_info.config &
@@ -2591,37 +3301,49 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 		info->mac[3] = (u8)(shmem_info.mac_lower >> 16);
 		info->mac[4] = (u8)(shmem_info.mac_lower >> 8);
 		info->mac[5] = (u8)(shmem_info.mac_lower);
+
+		/* Store primary MAC for later possible WoL */
+		OSAL_MEMCPY(p_hwfn->p_dev->wol_mac, info->mac, ECORE_ETH_ALEN);
+
 	} else {
 		/* TODO - are there protocols for which there's no MAC? */
 		DP_NOTICE(p_hwfn, false, "MAC is 0 in shmem\n");
 	}
 
 	/* TODO - are these calculations true for BE machine? */
-	info->wwn_port = (u64)shmem_info.fcoe_wwn_port_name_upper |
-			 (((u64)shmem_info.fcoe_wwn_port_name_lower) << 32);
-	info->wwn_node = (u64)shmem_info.fcoe_wwn_node_name_upper |
-			 (((u64)shmem_info.fcoe_wwn_node_name_lower) << 32);
+	info->wwn_port = (u64)shmem_info.fcoe_wwn_port_name_lower |
+			 (((u64)shmem_info.fcoe_wwn_port_name_upper) << 32);
+	info->wwn_node = (u64)shmem_info.fcoe_wwn_node_name_lower |
+			 (((u64)shmem_info.fcoe_wwn_node_name_upper) << 32);
 
 	info->ovlan = (u16)(shmem_info.ovlan_stag & FUNC_MF_CFG_OV_STAG_MASK);
 
 	info->mtu = (u16)shmem_info.mtu_size;
 
-	if (info->mtu == 0)
-		info->mtu = 1500;
+	p_hwfn->hw_info.b_wol_support = ECORE_WOL_SUPPORT_NONE;
+	p_hwfn->p_dev->wol_config = (u8)ECORE_OV_WOL_DEFAULT;
+	if (ecore_mcp_is_init(p_hwfn)) {
+		u32 resp = 0, param = 0;
+		enum _ecore_status_t rc;
 
-	info->mtu = (u16)shmem_info.mtu_size;
+		rc = ecore_mcp_cmd(p_hwfn, p_ptt,
+				   DRV_MSG_CODE_OS_WOL, 0, &resp, &param);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+		if (resp == FW_MSG_CODE_OS_WOL_SUPPORTED)
+			p_hwfn->hw_info.b_wol_support = ECORE_WOL_SUPPORT_PME;
+	}
 
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
-		   "Read configuration from shmem: pause_on_host %02x"
-		    " protocol %02x BW [%02x - %02x]"
-		    " MAC %02x:%02x:%02x:%02x:%02x:%02x wwn port %lx"
-		    " node %lx ovlan %04x\n",
+		   "Read configuration from shmem: pause_on_host %02x protocol %02x "
+		   "BW [%02x - %02x] MAC %02x:%02x:%02x:%02x:%02x:%02x wwn port %" PRIx64 " "
+		   "node %" PRIx64 " ovlan %04x wol %02x\n",
 		   info->pause_on_host, info->protocol,
 		   info->bandwidth_min, info->bandwidth_max,
 		   info->mac[0], info->mac[1], info->mac[2],
 		   info->mac[3], info->mac[4], info->mac[5],
-		   (unsigned long)info->wwn_port,
-		   (unsigned long)info->wwn_node, info->ovlan);
+		   info->wwn_port, info->wwn_node, info->ovlan,
+		   (u8)p_hwfn->hw_info.b_wol_support);
 
 	return ECORE_SUCCESS;
 }
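
The WWN fix above swaps the halves that were previously reversed: the shmem
*_upper word now lands in the high 32 bits of the 64-bit name. A worked
example (not part of the patch):

/* upper = 0x10000090, lower = 0xfa123456
 * wwn   = (u64)0xfa123456 | ((u64)0x10000090 << 32)
 *       = 0x10000090fa123456
 */
static u64 wwn_from_shmem(u32 upper, u32 lower)
{
	return (u64)lower | ((u64)upper << 32);
}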
@@ -2665,7 +3387,8 @@ enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt,
-			   DRV_MSG_CODE_NIG_DRAIN, 1000, &resp, &param);
+			   DRV_MSG_CODE_NIG_DRAIN, 1000,
+			   &resp, &param);
 
 	/* Wait for the drain to complete before returning */
 	OSAL_MSLEEP(1020);
@@ -2682,7 +3405,8 @@ const struct ecore_mcp_function_info
 }
 
 int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt, u32 personalities)
+				  struct ecore_ptt *p_ptt,
+				  u32 personalities)
 {
 	enum ecore_pci_personality protocol = ECORE_PCI_DEFAULT;
 	struct public_func shmem_info;
@@ -2741,8 +3465,7 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 
 	if (p_dev->recov_in_prog) {
 		DP_NOTICE(p_hwfn, false,
-			  "Avoid triggering a recovery since such a process"
-			  " is already in progress\n");
+			  "Avoid triggering a recovery since such a process is already in progress\n");
 		return ECORE_AGAIN;
 	}
 
@@ -2752,29 +3475,47 @@ enum _ecore_status_t ecore_start_recovery_process(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t
-ecore_mcp_config_vf_msix_bb(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt,
-			    u8 vf_id, u8 num)
-{
-	u32 resp = 0, param = 0, rc_param = 0;
+#define ECORE_RECOVERY_PROLOG_SLEEP_MS	100
+
+enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev)
+{
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
 	enum _ecore_status_t rc;
 
-/* Only Leader can configure MSIX, and need to take CMT into account */
+	/* Allow ongoing PCIe transactions to complete */
+	OSAL_MSLEEP(ECORE_RECOVERY_PROLOG_SLEEP_MS);
+
+	/* Clear the PF's internal FID_enable in the PXP */
+	rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_ptt, false);
+	if (rc != ECORE_SUCCESS)
+		DP_NOTICE(p_hwfn, false,
+			  "ecore_pglueb_set_pfid_enable() failed. rc = %d.\n",
+			  rc);
+
+	return rc;
+}
+
+static enum _ecore_status_t
+ecore_mcp_config_vf_msix_bb(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt,
+			    u8 vf_id, u8 num)
+{
+	u32 resp = 0, param = 0, rc_param = 0;
+	enum _ecore_status_t rc;
 
+	/* Only Leader can configure MSIX, and need to take CMT into account */
 	if (!IS_LEAD_HWFN(p_hwfn))
 		return ECORE_SUCCESS;
 	num *= p_hwfn->p_dev->num_hwfns;
 
-	param |= (vf_id << DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_OFFSET) &
-	    DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK;
-	param |= (num << DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_OFFSET) &
-	    DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_MASK;
+	SET_MFW_FIELD(param, DRV_MB_PARAM_CFG_VF_MSIX_VF_ID, vf_id);
+	SET_MFW_FIELD(param, DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM, num);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CFG_VF_MSIX, param,
 			   &resp, &rc_param);
 
-	if (resp != FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE) {
+	if (resp != (u32)FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE) {
 		DP_NOTICE(p_hwfn, true, "VF[%d]: MFW failed to set MSI-X\n",
 			  vf_id);
 		rc = ECORE_INVAL;
@@ -2788,9 +3529,8 @@ ecore_mcp_config_vf_msix_bb(struct ecore_hwfn *p_hwfn,
 }
 
 static enum _ecore_status_t
-ecore_mcp_config_vf_msix_ah(struct ecore_hwfn *p_hwfn,
-			    struct ecore_ptt *p_ptt,
-			    u8 num)
+ecore_mcp_config_vf_msix_ah_e5(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt, u8 num)
 {
 	u32 resp = 0, param = num, rc_param = 0;
 	enum _ecore_status_t rc;
@@ -2803,8 +3543,7 @@ ecore_mcp_config_vf_msix_ah(struct ecore_hwfn *p_hwfn,
 		rc = ECORE_INVAL;
 	} else {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Requested 0x%02x MSI-x interrupts for VFs\n",
-			   num);
+			   "Requested 0x%02x MSI-x interrupts for VFs\n", num);
 	}
 
 	return rc;
@@ -2827,7 +3566,7 @@ enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		return ecore_mcp_config_vf_msix_bb(p_hwfn, p_ptt, vf_id, num);
 	else
-		return ecore_mcp_config_vf_msix_ah(p_hwfn, p_ptt, num);
+		return ecore_mcp_config_vf_msix_ah_e5(p_hwfn, p_ptt, num);
 }
 
 enum _ecore_status_t
@@ -2898,7 +3637,7 @@ enum _ecore_status_t ecore_mcp_halt(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
-	ecore_mcp_cmd_set_blocking(p_hwfn, true);
+	ecore_mcp_cmd_set_halt(p_hwfn, true);
 
 	return ECORE_SUCCESS;
 }
@@ -2915,7 +3654,6 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 	cpu_mode = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
 	cpu_mode &= ~MCP_REG_CPU_MODE_SOFT_HALT;
 	ecore_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, cpu_mode);
-
 	OSAL_MSLEEP(ECORE_MCP_RESUME_SLEEP_MS);
 	cpu_state = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
 
@@ -2926,7 +3664,7 @@ enum _ecore_status_t ecore_mcp_resume(struct ecore_hwfn *p_hwfn,
 		return ECORE_BUSY;
 	}
 
-	ecore_mcp_cmd_set_blocking(p_hwfn, false);
+	ecore_mcp_cmd_set_halt(p_hwfn, false);
 
 	return ECORE_SUCCESS;
 }
@@ -2951,7 +3689,8 @@ ecore_mcp_ov_update_current_config(struct ecore_hwfn *p_hwfn,
 		drv_mb_param = DRV_MB_PARAM_OV_CURR_CFG_VENDOR_SPEC;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Invalid client type %d\n", client);
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid client type %d\n", client);
 		return ECORE_INVAL;
 	}
 
@@ -2983,7 +3722,8 @@ ecore_mcp_ov_update_driver_state(struct ecore_hwfn *p_hwfn,
 		drv_mb_param = DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_ACTIVE;
 		break;
 	default:
-		DP_NOTICE(p_hwfn, true, "Invalid driver state %d\n", drv_state);
+		DP_NOTICE(p_hwfn, true,
+			  "Invalid driver state %d\n", drv_state);
 		return ECORE_INVAL;
 	}
 
@@ -2999,7 +3739,78 @@ enum _ecore_status_t
 ecore_mcp_ov_get_fc_npiv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			 struct ecore_fc_npiv_tbl *p_table)
 {
-	return 0;
+	struct dci_fc_npiv_tbl *p_npiv_table;
+	u8 *p_buf = OSAL_NULL;
+	u32 addr, size, i;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct dci_fc_npiv_cfg npiv_cfg;
+
+	p_table->num_wwpn = 0;
+	p_table->num_wwnn = 0;
+	addr = ecore_rd(p_hwfn, p_ptt, p_hwfn->mcp_info->port_addr +
+			OFFSETOF(struct public_port, fc_npiv_nvram_tbl_addr));
+	if (addr == NPIV_TBL_INVALID_ADDR) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "NPIV table doesn't exist\n");
+		return ECORE_INVAL;
+	}
+
+	/* First read table header. Then use number of entries field to
+	 * know actual number of npiv entries.
+	 */
+	size = sizeof(struct dci_fc_npiv_cfg);
+	OSAL_MEMSET(&npiv_cfg, 0, size);
+
+	rc = ecore_mcp_nvm_read(p_hwfn->p_dev, addr, (u8 *)&npiv_cfg, size);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(p_hwfn, "NPIV table configuration read failed.\n");
+		return rc;
+	}
+
+	if (npiv_cfg.num_of_npiv < 1) {
+		DP_ERR(p_hwfn, "No entries in NPIV table\n");
+		return ECORE_INVAL;
+	}
+
+	size += sizeof(struct dci_npiv_settings) * npiv_cfg.num_of_npiv;
+
+	p_buf = OSAL_VZALLOC(p_hwfn->p_dev, size);
+	if (!p_buf) {
+		DP_ERR(p_hwfn, "Buffer allocation failed\n");
+		return ECORE_NOMEM;
+	}
+
+	rc = ecore_mcp_nvm_read(p_hwfn->p_dev, addr, p_buf, size);
+	if (rc != ECORE_SUCCESS) {
+		OSAL_VFREE(p_hwfn->p_dev, p_buf);
+		return rc;
+	}
+
+	p_npiv_table = (struct dci_fc_npiv_tbl *)p_buf;
+	p_table->num_wwpn = (u16)p_npiv_table->fc_npiv_cfg.num_of_npiv;
+	p_table->num_wwnn = (u16)p_npiv_table->fc_npiv_cfg.num_of_npiv;
+	for (i = 0; i < p_npiv_table->fc_npiv_cfg.num_of_npiv; i++) {
+		p_table->wwpn[i][0] = p_npiv_table->settings[i].npiv_wwpn[3];
+		p_table->wwpn[i][1] = p_npiv_table->settings[i].npiv_wwpn[2];
+		p_table->wwpn[i][2] = p_npiv_table->settings[i].npiv_wwpn[1];
+		p_table->wwpn[i][3] = p_npiv_table->settings[i].npiv_wwpn[0];
+		p_table->wwpn[i][4] = p_npiv_table->settings[i].npiv_wwpn[7];
+		p_table->wwpn[i][5] = p_npiv_table->settings[i].npiv_wwpn[6];
+		p_table->wwpn[i][6] = p_npiv_table->settings[i].npiv_wwpn[5];
+		p_table->wwpn[i][7] = p_npiv_table->settings[i].npiv_wwpn[4];
+
+		p_table->wwnn[i][0] = p_npiv_table->settings[i].npiv_wwnn[3];
+		p_table->wwnn[i][1] = p_npiv_table->settings[i].npiv_wwnn[2];
+		p_table->wwnn[i][2] = p_npiv_table->settings[i].npiv_wwnn[1];
+		p_table->wwnn[i][3] = p_npiv_table->settings[i].npiv_wwnn[0];
+		p_table->wwnn[i][4] = p_npiv_table->settings[i].npiv_wwnn[7];
+		p_table->wwnn[i][5] = p_npiv_table->settings[i].npiv_wwnn[6];
+		p_table->wwnn[i][6] = p_npiv_table->settings[i].npiv_wwnn[5];
+		p_table->wwnn[i][7] = p_npiv_table->settings[i].npiv_wwnn[4];
+	}
+
+	OSAL_VFREE(p_hwfn->p_dev, p_buf);
+
+	return ECORE_SUCCESS;
 }
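
The index pattern in the NPIV copy loop above ([3][2][1][0] then [7][6][5][4])
is a byte swap within each 32-bit half of the NVRAM-resident name. An
equivalent helper (a sketch, not part of the patch):

static void npiv_copy_swab32(u8 *dst, const u8 *src)
{
	unsigned int word, byte;

	for (word = 0; word < 2; word++)	/* two 32-bit halves */
		for (byte = 0; byte < 4; byte++)
			dst[word * 4 + byte] = src[word * 4 + (3 - byte)];
}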
 
 enum _ecore_status_t
@@ -3023,7 +3834,7 @@ ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 *mac)
 {
 	struct ecore_mcp_mb_params mb_params;
-	union drv_union_data union_data;
+	u32 mfw_mac[2];
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
@@ -3031,12 +3842,64 @@ ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_VMAC_TYPE,
 		      DRV_MSG_CODE_VMAC_TYPE_MAC);
 	mb_params.param |= MCP_PF_ID(p_hwfn);
-	OSAL_MEMCPY(&union_data.raw_data, mac, ETH_ALEN);
-	mb_params.p_data_src = &union_data;
+
+	/* MCP is BE, and on LE platforms PCI would swap access to SHMEM
+	 * in 32-bit granularity.
+	 * So the MAC has to be set in native order [and not byte order],
+	 * otherwise it would be read incorrectly by MFW after swap.
+	 */
+	mfw_mac[0] = mac[0] << 24 | mac[1] << 16 | mac[2] << 8 | mac[3];
+	mfw_mac[1] = mac[4] << 24 | mac[5] << 16;
+
+	mb_params.p_data_src = (u8 *)mfw_mac;
+	mb_params.data_src_size = 8;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
 	if (rc != ECORE_SUCCESS)
 		DP_ERR(p_hwfn, "Failed to send mac address, rc = %d\n", rc);
 
+	/* Store primary MAC for later possible WoL */
+	OSAL_MEMCPY(p_hwfn->p_dev->wol_mac, mac, ECORE_ETH_ALEN);
+
+	return rc;
+}
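
A worked example of the native-order MAC packing above (not part of the
patch): for MAC 00:11:22:33:44:55,

u8 mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
u32 mfw_mac[2];

mfw_mac[0] = mac[0] << 24 | mac[1] << 16 | mac[2] << 8 | mac[3];
mfw_mac[1] = mac[4] << 24 | mac[5] << 16;
/* mfw_mac[0] == 0x00112233, mfw_mac[1] == 0x44550000; after the MFW's
 * per-32-bit swap the bytes come back in wire order.
 */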
+
+enum _ecore_status_t
+ecore_mcp_ov_update_wol(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_ov_wol wol)
+{
+	u32 resp = 0, param = 0;
+	u32 drv_mb_param;
+	enum _ecore_status_t rc;
+
+	if (p_hwfn->hw_info.b_wol_support == ECORE_WOL_SUPPORT_NONE) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Can't change WoL configuration when WoL isn't supported\n");
+		return ECORE_INVAL;
+	}
+
+	switch (wol) {
+	case ECORE_OV_WOL_DEFAULT:
+		drv_mb_param = DRV_MB_PARAM_WOL_DEFAULT;
+		break;
+	case ECORE_OV_WOL_DISABLED:
+		drv_mb_param = DRV_MB_PARAM_WOL_DISABLED;
+		break;
+	case ECORE_OV_WOL_ENABLED:
+		drv_mb_param = DRV_MB_PARAM_WOL_ENABLED;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Invalid wol state %d\n", wol);
+		return ECORE_INVAL;
+	}
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OV_UPDATE_WOL,
+			   drv_mb_param, &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to send wol mode, rc = %d\n", rc);
+
+	/* Store the WoL update for a future unload */
+	p_hwfn->p_dev->wol_config = (u8)wol;
+
 	return rc;
 }
 
@@ -3044,10 +3907,9 @@ enum _ecore_status_t
 ecore_mcp_ov_update_eswitch(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			    enum ecore_ov_eswitch eswitch)
 {
-	enum _ecore_status_t rc;
 	u32 resp = 0, param = 0;
 	u32 drv_mb_param;
-
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 	switch (eswitch) {
 	case ECORE_OV_ESWITCH_NONE:
 		drv_mb_param = DRV_MB_PARAM_ESWITCH_MODE_NONE;
@@ -3112,24 +3974,24 @@ enum _ecore_status_t ecore_mcp_mask_parities(struct ecore_hwfn *p_hwfn,
 			   mask_parities, &resp, &param);
 
 	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn,
-		       "MCP response failure for mask parities, aborting\n");
+		DP_ERR(p_hwfn, "MCP response failure for mask parities, aborting\n");
 	} else if (resp != FW_MSG_CODE_OK) {
-		DP_ERR(p_hwfn,
-		       "MCP did not ack mask parity request. Old MFW?\n");
+		DP_ERR(p_hwfn, "MCP did not acknowledge mask parity request. Old MFW?\n");
 		rc = ECORE_INVAL;
 	}
 
 	return rc;
 }
 
+#define FLASH_PAGE_SIZE 0x1000 /* @DPDK */
+
 enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
-					u8 *p_buf, u32 len)
+			   u8 *p_buf, u32 len)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	u32 bytes_left, offset, bytes_to_copy, buf_size;
-	u32 nvm_offset, resp, param;
-	struct ecore_ptt *p_ptt;
+	u32 nvm_offset = 0, resp = 0, param;
+	struct ecore_ptt  *p_ptt;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
@@ -3141,12 +4003,13 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 	while (bytes_left > 0) {
 		bytes_to_copy = OSAL_MIN_T(u32, bytes_left,
 					   MCP_DRV_NVM_BUF_LEN);
-		nvm_offset = (addr + offset) | (bytes_to_copy <<
-						DRV_MB_PARAM_NVM_LEN_OFFSET);
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_NVM_OFFSET,
+			      addr + offset);
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_NVM_LEN, bytes_to_copy);
 		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
 					  DRV_MSG_CODE_NVM_READ_NVRAM,
 					  nvm_offset, &resp, &param, &buf_size,
-					  (u32 *)(p_buf + offset));
+					  (u32 *)(p_buf + offset), false);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_dev, false,
 				  "ecore_mcp_nvm_rd_cmd() failed, rc = %d\n",
@@ -3165,8 +4028,8 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
 		/* This can be a lengthy process, and it's possible scheduler
 		 * isn't preemptible. Sleep a bit to prevent CPU hogging.
 		 */
-		if (bytes_left % 0x1000 <
-		    (bytes_left - buf_size) % 0x1000)
+		if (bytes_left % FLASH_PAGE_SIZE <
+		    (bytes_left - buf_size) % FLASH_PAGE_SIZE)
 			OSAL_MSLEEP(1);
 
 		offset += buf_size;
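
Each NVM_READ_NVRAM mailbox in the loop above packs the NVM offset and the
chunk length into a single param word. For example, the first chunk of a read
at NVM address 0x2000 would be built as follows (sketch, not part of the
patch):

u32 nvm_offset = 0;

SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_NVM_OFFSET, 0x2000);
SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_NVM_LEN, MCP_DRV_NVM_BUF_LEN);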
@@ -3183,10 +4046,15 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
 					u32 addr, u8 *p_buf, u32 *p_len)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
+	struct ecore_ptt  *p_ptt;
 	u32 resp = 0, param;
 	enum _ecore_status_t rc;
 
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
@@ -3194,8 +4062,8 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
 	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
 				  (cmd == ECORE_PHY_CORE_READ) ?
 				  DRV_MSG_CODE_PHY_CORE_READ :
-				  DRV_MSG_CODE_PHY_RAW_READ,
-				  addr, &resp, &param, p_len, (u32 *)p_buf);
+				  DRV_MSG_CODE_PHY_RAW_READ, addr, &resp,
+				  &param, p_len, (u32 *)p_buf, false);
 	if (rc != ECORE_SUCCESS)
 		DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
 
@@ -3207,23 +4075,16 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
 
 enum _ecore_status_t ecore_mcp_nvm_resp(struct ecore_dev *p_dev, u8 *p_buf)
 {
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-
 	OSAL_MEMCPY(p_buf, &p_dev->mcp_nvm_resp, sizeof(p_dev->mcp_nvm_resp));
-	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return ECORE_SUCCESS;
 }
 
-enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev, u32 addr)
+enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev,
+					    u32 addr)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
+	struct ecore_ptt  *p_ptt;
 	u32 resp = 0, param;
 	enum _ecore_status_t rc;
 
@@ -3235,24 +4096,8 @@ enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev, u32 addr)
 	p_dev->mcp_nvm_resp = resp;
 	ecore_ptt_release(p_hwfn, p_ptt);
 
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
-						  u32 addr)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-	u32 resp = 0, param;
-	enum _ecore_status_t rc;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_PUT_FILE_BEGIN, addr,
-			   &resp, &param);
-	p_dev->mcp_nvm_resp = resp;
-	ecore_ptt_release(p_hwfn, p_ptt);
+	/* nvm_info needs to be updated */
+	p_hwfn->nvm_info.valid = false;
 
 	return rc;
 }
@@ -3263,17 +4108,19 @@ enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
 enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 					 u32 addr, u8 *p_buf, u32 len)
 {
-	u32 buf_idx, buf_size, nvm_cmd, nvm_offset;
-	u32 resp = FW_MSG_CODE_ERROR, param;
+	u32 buf_idx, buf_size, nvm_cmd, nvm_offset, resp = 0, param;
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
 	enum _ecore_status_t rc = ECORE_INVAL;
-	struct ecore_ptt *p_ptt;
+	struct ecore_ptt  *p_ptt;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
 
 	switch (cmd) {
+	case ECORE_PUT_FILE_BEGIN:
+		nvm_cmd = DRV_MSG_CODE_NVM_PUT_FILE_BEGIN;
+		break;
 	case ECORE_PUT_FILE_DATA:
 		nvm_cmd = DRV_MSG_CODE_NVM_PUT_FILE_DATA;
 		break;
@@ -3283,6 +4130,9 @@ enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 	case ECORE_EXT_PHY_FW_UPGRADE:
 		nvm_cmd = DRV_MSG_CODE_EXT_PHY_FW_UPGRADE;
 		break;
+	case ECORE_ENCRYPT_PASSWORD:
+		nvm_cmd = DRV_MSG_CODE_ENCRYPT_PASSWORD;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Invalid nvm write command 0x%x\n",
 			  cmd);
@@ -3291,15 +4141,16 @@ enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 	}
 
 	buf_idx = 0;
+	buf_size = OSAL_MIN_T(u32, (len - buf_idx), MCP_DRV_NVM_BUF_LEN);
 	while (buf_idx < len) {
-		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
-				      MCP_DRV_NVM_BUF_LEN);
-		nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_OFFSET) |
-			      addr) +
-			     buf_idx;
+		if (cmd == ECORE_PUT_FILE_BEGIN)
+			nvm_offset = addr;
+		else
+			nvm_offset = ((buf_size << DRV_MB_PARAM_NVM_LEN_OFFSET) |
+				      addr) + buf_idx;
 		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, nvm_offset,
 					  &resp, &param, buf_size,
-					  (u32 *)&p_buf[buf_idx]);
+					  (u32 *)&p_buf[buf_idx], true);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_dev, false,
 				  "ecore_mcp_nvm_write() failed, rc = %d\n",
@@ -3320,11 +4171,22 @@ enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 		/* This can be a lengthy process, and it's possible scheduler
 		 * isn't preemptible. Sleep a bit to prevent CPU hogging.
 		 */
-		if (buf_idx % 0x1000 >
-		    (buf_idx + buf_size) % 0x1000)
+		if (buf_idx % FLASH_PAGE_SIZE > (buf_idx + buf_size) % FLASH_PAGE_SIZE)
 			OSAL_MSLEEP(1);
 
-		buf_idx += buf_size;
+		/* For MBI upgrade, MFW response includes the next buffer offset
+		 * to be delivered to MFW.
+		 */
+		if (param && cmd == ECORE_PUT_FILE_DATA) {
+			buf_idx = GET_MFW_FIELD(param,
+					FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET);
+			buf_size = GET_MFW_FIELD(param,
+					FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE);
+		} else {
+			buf_idx += buf_size;
+			buf_size = OSAL_MIN_T(u32, (len - buf_idx),
+					      MCP_DRV_NVM_BUF_LEN);
+		}
 	}
 
 	p_dev->mcp_nvm_resp = resp;
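
Illustration of the MBI resume handling above (not part of the patch): when
the MFW answers a PUT_FILE_DATA chunk with a non-zero param, the loop resumes
from wherever the MFW asks instead of the linearly next chunk.

/* e.g. param decodes to offset 0x4000, size 0x80: the next mailbox
 * carries p_buf[0x4000..0x407f].
 */
buf_idx = GET_MFW_FIELD(param, FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET);
buf_size = GET_MFW_FIELD(param, FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE);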
@@ -3338,10 +4200,15 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
 					 u32 addr, u8 *p_buf, u32 len)
 {
 	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	struct ecore_ptt  *p_ptt;
 	u32 resp = 0, param, nvm_cmd;
-	struct ecore_ptt *p_ptt;
 	enum _ecore_status_t rc;
 
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
 		return ECORE_BUSY;
@@ -3349,7 +4216,7 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
 	nvm_cmd = (cmd == ECORE_PHY_CORE_WRITE) ?  DRV_MSG_CODE_PHY_CORE_WRITE :
 			DRV_MSG_CODE_PHY_RAW_WRITE;
 	rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt, nvm_cmd, addr,
-				  &resp, &param, len, (u32 *)p_buf);
+				  &resp, &param, len, (u32 *)p_buf, false);
 	if (rc != ECORE_SUCCESS)
 		DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
 	p_dev->mcp_nvm_resp = resp;
@@ -3358,37 +4225,22 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_nvm_set_secure_mode(struct ecore_dev *p_dev,
-						   u32 addr)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt;
-	u32 resp = 0, param;
-	enum _ecore_status_t rc;
-
-	p_ptt = ecore_ptt_acquire(p_hwfn);
-	if (!p_ptt)
-		return ECORE_BUSY;
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_SECURE_MODE, addr,
-			   &resp, &param);
-	p_dev->mcp_nvm_resp = resp;
-	ecore_ptt_release(p_hwfn, p_ptt);
-
-	return rc;
-}
-
 enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
 					    struct ecore_ptt *p_ptt,
 					    u32 port, u32 addr, u32 offset,
 					    u32 len, u8 *p_buf)
 {
-	u32 bytes_left, bytes_to_copy, buf_size, nvm_offset;
+	u32 bytes_left, bytes_to_copy, buf_size, nvm_offset = 0;
 	u32 resp, param;
 	enum _ecore_status_t rc;
 
-	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
-			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
+	SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_PORT, port);
+	SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS, addr);
 	addr = offset;
 	offset = 0;
 	bytes_left = len;
@@ -3397,14 +4249,14 @@ enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
 					   MAX_I2C_TRANSACTION_SIZE);
 		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
 			       DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
-		nvm_offset |= ((addr + offset) <<
-				DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
-		nvm_offset |= (bytes_to_copy <<
-			       DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_OFFSET,
+			      (addr + offset));
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_SIZE,
+			      bytes_to_copy);
 		rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
 					  DRV_MSG_CODE_TRANSCEIVER_READ,
 					  nvm_offset, &resp, &param, &buf_size,
-					  (u32 *)(p_buf + offset));
+					  (u32 *)(p_buf + offset), true);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, false,
 				  "Failed to send a transceiver read command to the MFW. rc = %d.\n",
@@ -3419,6 +4271,11 @@ enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
 
 		offset += buf_size;
 		bytes_left -= buf_size;
+
+		/* The MFW reads from the SFP using the slow I2C bus, so a short
+		 * sleep is needed to prevent CPU hogging.
+		 */
+		OSAL_MSLEEP(1);
 	}
 
 	return ECORE_SUCCESS;
@@ -3429,25 +4286,30 @@ enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
 					     u32 port, u32 addr, u32 offset,
 					     u32 len, u8 *p_buf)
 {
-	u32 buf_idx, buf_size, nvm_offset, resp, param;
+	u32 buf_idx, buf_size, nvm_offset = 0, resp, param;
 	enum _ecore_status_t rc;
 
-	nvm_offset = (port << DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET) |
-			(addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_OFFSET);
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
+	SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_PORT, port);
+	SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS, addr);
 	buf_idx = 0;
 	while (buf_idx < len) {
 		buf_size = OSAL_MIN_T(u32, (len - buf_idx),
 				      MAX_I2C_TRANSACTION_SIZE);
 		nvm_offset &= (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
 				 DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
-		nvm_offset |= ((offset + buf_idx) <<
-				 DRV_MB_PARAM_TRANSCEIVER_OFFSET_OFFSET);
-		nvm_offset |= (buf_size <<
-			       DRV_MB_PARAM_TRANSCEIVER_SIZE_OFFSET);
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_OFFSET,
+			      (offset + buf_idx));
+		SET_MFW_FIELD(nvm_offset, DRV_MB_PARAM_TRANSCEIVER_SIZE,
+			      buf_size);
 		rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt,
 					  DRV_MSG_CODE_TRANSCEIVER_WRITE,
 					  nvm_offset, &resp, &param, buf_size,
-					  (u32 *)&p_buf[buf_idx]);
+					  (u32 *)&p_buf[buf_idx], false);
 		if (rc != ECORE_SUCCESS) {
 			DP_NOTICE(p_hwfn, false,
 				  "Failed to send a transceiver write command to the MFW. rc = %d.\n",
@@ -3461,6 +4323,11 @@ enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
 			return ECORE_UNKNOWN_ERROR;
 
 		buf_idx += buf_size;
+
+		/* The MFW writes to the SFP using the slow I2C bus, so a short
+		 * sleep is needed to prevent CPU hogging.
+		 */
+		OSAL_MSLEEP(1);
 	}
 
 	return ECORE_SUCCESS;
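
Both SFP paths above share one chunking pattern: clamp each transaction to
MAX_I2C_TRANSACTION_SIZE, advance the cursor, and sleep a millisecond
between transactions because the MFW drives a slow I2C bus. A
self-contained sketch of that loop, with an assumed chunk size and xfer()
standing in for the TRANSCEIVER_READ/WRITE mailbox command:

    #include <stddef.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_I2C_CHUNK 16  /* stand-in; not the driver's MAX_I2C_TRANSACTION_SIZE */

    static void xfer(size_t off, size_t len)
    {
            printf("xfer offset=%zu len=%zu\n", off, len);
    }

    int main(void)
    {
            size_t len = 40, idx = 0;

            while (idx < len) {
                    size_t chunk = (len - idx < MAX_I2C_CHUNK) ?
                                   (len - idx) : MAX_I2C_CHUNK;

                    xfer(idx, chunk); /* one mailbox transaction */
                    idx += chunk;
                    usleep(1000);     /* slow I2C bus: yield the CPU */
            }
            return 0;
    }
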
@@ -3471,9 +4338,14 @@ enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
 					 u16 gpio, u32 *gpio_val)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 drv_mb_param = 0, rsp = 0;
+	u32 drv_mb_param = 0, rsp;
+
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
 
-	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_NUMBER, gpio);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_READ,
 			   drv_mb_param, &rsp, gpio_val);
@@ -3492,10 +4364,15 @@ enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
 					  u16 gpio, u16 gpio_val)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	u32 drv_mb_param = 0, param, rsp = 0;
+	u32 drv_mb_param = 0, param, rsp;
+
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
 
-	drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET) |
-		(gpio_val << DRV_MB_PARAM_GPIO_VALUE_OFFSET);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_NUMBER, gpio);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_VALUE, gpio_val);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_WRITE,
 			   drv_mb_param, &rsp, &param);
@@ -3517,17 +4394,20 @@ enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
 	u32 drv_mb_param = 0, rsp, val = 0;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	drv_mb_param = gpio << DRV_MB_PARAM_GPIO_NUMBER_OFFSET;
+	if (p_hwfn->p_dev->recov_in_prog) {
+		DP_ERR(p_hwfn, "Error recovery in progress\n");
+		return ECORE_AGAIN;
+	}
+
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_NUMBER, gpio);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_INFO,
 			   drv_mb_param, &rsp, &val);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	*gpio_direction = (val & DRV_MB_PARAM_GPIO_DIRECTION_MASK) >>
-			   DRV_MB_PARAM_GPIO_DIRECTION_OFFSET;
-	*gpio_ctrl = (val & DRV_MB_PARAM_GPIO_CTRL_MASK) >>
-		      DRV_MB_PARAM_GPIO_CTRL_OFFSET;
+	*gpio_direction = GET_MFW_FIELD(val, DRV_MB_PARAM_GPIO_DIRECTION);
+	*gpio_ctrl = GET_MFW_FIELD(val, DRV_MB_PARAM_GPIO_CTRL);
 
 	if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
 		return ECORE_UNKNOWN_ERROR;
@@ -3541,8 +4421,8 @@ enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
 	u32 drv_mb_param = 0, rsp, param;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	drv_mb_param = (DRV_MB_PARAM_BIST_REGISTER_TEST <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_BIST_TEST_INDEX,
+		      DRV_MB_PARAM_BIST_REGISTER_TEST);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 			   drv_mb_param, &rsp, &param);
@@ -3560,11 +4440,11 @@ enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
-	u32 drv_mb_param, rsp, param;
+	u32 drv_mb_param = 0, rsp, param;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	drv_mb_param = (DRV_MB_PARAM_BIST_CLOCK_TEST <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
+	SET_MFW_FIELD(drv_mb_param, DRV_MB_PARAM_BIST_TEST_INDEX,
+		      DRV_MB_PARAM_BIST_CLOCK_TEST);
 
 	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 			   drv_mb_param, &rsp, &param);
@@ -3579,54 +4459,8 @@ enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
-	struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 *num_images)
-{
-	u32 drv_mb_param = 0, rsp = 0;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
-
-	drv_mb_param = (DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES <<
-			DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
-			   drv_mb_param, &rsp, num_images);
-
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK))
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
-	struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-	struct bist_nvm_image_att *p_image_att, u32 image_index)
-{
-	u32 buf_size, nvm_offset, resp, param;
-	enum _ecore_status_t rc;
-
-	nvm_offset = (DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX <<
-				    DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET);
-	nvm_offset |= (image_index <<
-		       DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_OFFSET);
-	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
-				  nvm_offset, &resp, &param, &buf_size,
-				  (u32 *)p_image_att);
-	if (rc != ECORE_SUCCESS)
-		return rc;
-
-	if (((resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
-	    (p_image_att->return_code != 1))
-		rc = ECORE_UNKNOWN_ERROR;
-
-	return rc;
-}
-
-enum _ecore_status_t
-ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
-				  struct ecore_ptt *p_ptt, u32 *num_images)
+enum _ecore_status_t ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
+	struct ecore_ptt *p_ptt, u32 *num_images)
 {
 	u32 drv_mb_param = 0, rsp;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
@@ -3647,11 +4481,9 @@ ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt,
-				 struct bist_nvm_image_att *p_image_att,
-				 u32 image_index)
+enum _ecore_status_t ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
+	struct ecore_ptt *p_ptt,
+	struct bist_nvm_image_att *p_image_att, u32 image_index)
 {
 	u32 buf_size, nvm_offset = 0, resp, param;
 	enum _ecore_status_t rc;
@@ -3662,7 +4494,7 @@ ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
 		      image_index);
 	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
 				  nvm_offset, &resp, &param, &buf_size,
-				  (u32 *)p_image_att);
+				  (u32 *)p_image_att, false);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -3686,7 +4518,8 @@ enum _ecore_status_t ecore_mcp_nvm_info_populate(struct ecore_hwfn *p_hwfn)
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) ||
-	    CHIP_REV_IS_TEDIBEAR(p_hwfn->p_dev))
+	    CHIP_REV_IS_TEDIBEAR(p_hwfn->p_dev) ||
+	    p_hwfn->mcp_info->recovery_mode)
 		return ECORE_SUCCESS;
 #endif
 
@@ -3751,6 +4584,18 @@ enum _ecore_status_t ecore_mcp_nvm_info_populate(struct ecore_hwfn *p_hwfn)
 	return rc;
 }
 
+void ecore_mcp_nvm_info_free(struct ecore_hwfn *p_hwfn)
+{
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) ||
+	    CHIP_REV_IS_TEDIBEAR(p_hwfn->p_dev))
+		return;
+#endif
+	OSAL_FREE(p_hwfn->p_dev, p_hwfn->nvm_info.image_att);
+	p_hwfn->nvm_info.image_att = OSAL_NULL;
+	p_hwfn->nvm_info.valid = false;
+}
+
 enum _ecore_status_t
 ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn,
 			    enum ecore_nvm_images image_id,
@@ -3777,7 +4622,7 @@ ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn,
 		type = NVM_TYPE_DEFAULT_CFG;
 		break;
 	case ECORE_NVM_IMAGE_NVM_META:
-		type = NVM_TYPE_META;
+		type = NVM_TYPE_NVM_META;
 		break;
 	default:
 		DP_NOTICE(p_hwfn, false, "Unknown request of image_id %08x\n",
@@ -3832,7 +4677,7 @@ enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
 	}
 
 	return ecore_mcp_nvm_read(p_hwfn->p_dev, image_att.start_addr,
-				  (u8 *)p_buffer, image_att.length);
+				  p_buffer, image_att.length);
 }
 
 enum _ecore_status_t
@@ -3861,14 +4706,13 @@ ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < p_temp_info->num_sensors; i++) {
 		val = mfw_temp_info.sensor[i];
 		p_temp_sensor = &p_temp_info->sensors[i];
-		p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
-						 SENSOR_LOCATION_OFFSET;
-		p_temp_sensor->threshold_high = (val & THRESHOLD_HIGH_MASK) >>
-						THRESHOLD_HIGH_OFFSET;
-		p_temp_sensor->critical = (val & CRITICAL_TEMPERATURE_MASK) >>
-					  CRITICAL_TEMPERATURE_OFFSET;
-		p_temp_sensor->current_temp = (val & CURRENT_TEMP_MASK) >>
-					      CURRENT_TEMP_OFFSET;
+		p_temp_sensor->sensor_location =
+			GET_MFW_FIELD(val, SENSOR_LOCATION);
+		p_temp_sensor->threshold_high =
+			GET_MFW_FIELD(val, THRESHOLD_HIGH);
+		p_temp_sensor->critical =
+			GET_MFW_FIELD(val, CRITICAL_TEMPERATURE);
+		p_temp_sensor->current_temp = GET_MFW_FIELD(val, CURRENT_TEMP);
 	}
 
 	return ECORE_SUCCESS;
@@ -3884,7 +4728,7 @@ enum _ecore_status_t ecore_mcp_get_mba_versions(
 
 	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_MBA_VERSION,
 				  0, &resp, &param, &buf_size,
-				  &p_mba_vers->mba_vers[0]);
+				  &p_mba_vers->mba_vers[0], false);
 
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -3902,10 +4746,24 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
 					      u64 *num_events)
 {
-	u32 rsp;
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_val64 val64;
+	enum _ecore_status_t rc;
+
+	OSAL_MEMSET(&mb_params, 0, sizeof(struct ecore_mcp_mb_params));
+	mb_params.cmd = DRV_MSG_CODE_MEM_ECC_EVENTS;
+	mb_params.p_data_dst = &val64;
+	mb_params.data_dst_size = sizeof(val64);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	*num_events = ((u64)val64.hi << 32) | val64.lo;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Number of memory ECC events: %" PRId64 "\n", *num_events);
 
-	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MEM_ECC_EVENTS,
-			     0, &rsp, (u32 *)num_events);
+	return ECORE_SUCCESS;
 }
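
The rework above stops funneling the 64-bit counter through a (u32 *) cast
and instead receives two 32-bit words that it reassembles; since the
counter is unsigned, PRIu64/PRIx64 are the matching format specifiers. A
tiny sketch of the reassembly, assuming mcp_val64 lays out {lo, hi} in that
order, as the shift above implies:

    #include <inttypes.h>
    #include <stdio.h>

    struct val64 { uint32_t lo; uint32_t hi; }; /* assumed mcp_val64 order */

    int main(void)
    {
            struct val64 v = { .lo = 0x89abcdefu, .hi = 0x01234567u };
            uint64_t num_events = ((uint64_t)v.hi << 32) | v.lo;

            printf("events=0x%016" PRIx64 "\n", num_events); /* 0x0123456789abcdef */
            return 0;
    }
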
 
 static enum resource_id_enum
@@ -3940,9 +4798,15 @@ ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
 	case ECORE_ILT:
 		mfw_res_id = RESOURCE_ILT_E;
 		break;
-	case ECORE_LL2_QUEUE:
+	case ECORE_LL2_RAM_QUEUE:
 		mfw_res_id = RESOURCE_LL2_QUEUE_E;
 		break;
+	case ECORE_LL2_CTX_QUEUE:
+		mfw_res_id = RESOURCE_LL2_CTX_E;
+		break;
+	case ECORE_VF_RDMA_CNQ_RAM:
+		mfw_res_id = RESOURCE_VF_CNQS;
+		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
 		/* CNQ/CMDQS are the same resource */
@@ -3954,6 +4818,9 @@ ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
 	case ECORE_BDQ:
 		mfw_res_id = RESOURCE_BDQ_E;
 		break;
+	case ECORE_VF_MAC_ADDR:
+		mfw_res_id = RESOURCE_VF_MAC_ADDR;
+		break;
 	default:
 		break;
 	}
@@ -3963,11 +4830,6 @@ ecore_mcp_get_mfw_res_id(enum ecore_resources res_id)
 
 #define ECORE_RESC_ALLOC_VERSION_MAJOR	2
 #define ECORE_RESC_ALLOC_VERSION_MINOR	0
-#define ECORE_RESC_ALLOC_VERSION				\
-	((ECORE_RESC_ALLOC_VERSION_MAJOR <<			\
-	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_OFFSET) |	\
-	 (ECORE_RESC_ALLOC_VERSION_MINOR <<			\
-	  DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_OFFSET))
 
 struct ecore_resc_alloc_in_params {
 	u32 cmd;
@@ -3985,27 +4847,6 @@ struct ecore_resc_alloc_out_params {
 	u32 flags;
 };
 
-#define ECORE_RECOVERY_PROLOG_SLEEP_MS	100
-
-enum _ecore_status_t ecore_recovery_prolog(struct ecore_dev *p_dev)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
-	enum _ecore_status_t rc;
-
-	/* Allow ongoing PCIe transactions to complete */
-	OSAL_MSLEEP(ECORE_RECOVERY_PROLOG_SLEEP_MS);
-
-	/* Clear the PF's internal FID_enable in the PXP */
-	rc = ecore_pglueb_set_pfid_enable(p_hwfn, p_ptt, false);
-	if (rc != ECORE_SUCCESS)
-		DP_NOTICE(p_hwfn, false,
-			  "ecore_pglueb_set_pfid_enable() failed. rc = %d.\n",
-			  rc);
-
-	return rc;
-}
-
 static enum _ecore_status_t
 ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt,
@@ -4041,7 +4882,12 @@ ecore_mcp_resc_allocation_msg(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = p_in_params->cmd;
-	mb_params.param = ECORE_RESC_ALLOC_VERSION;
+	SET_MFW_FIELD(mb_params.param,
+		      DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR,
+		      ECORE_RESC_ALLOC_VERSION_MAJOR);
+	SET_MFW_FIELD(mb_params.param,
+		      DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR,
+		      ECORE_RESC_ALLOC_VERSION_MINOR);
 	mb_params.p_data_src = &mfw_resc_info;
 	mb_params.data_src_size = sizeof(mfw_resc_info);
 	mb_params.p_data_dst = mb_params.p_data_src;
@@ -4143,14 +4989,162 @@ enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 			     &mcp_resp, &mcp_param);
 }
 
-static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
-						   struct ecore_ptt *p_ptt,
-						   u32 param, u32 *p_mcp_resp,
-						   u32 *p_mcp_param)
+static enum _ecore_status_t
+ecore_mcp_initiate_vf_flr(struct ecore_hwfn *p_hwfn,
+			  struct ecore_ptt *p_ptt, u32 *vfs_to_flr)
 {
+	u8 i, vf_bitmap_size, vf_bitmap_size_in_bytes;
+	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
 
-	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_RESOURCE_CMD, param,
+	if (ECORE_IS_E4(p_hwfn->p_dev)) {
+		vf_bitmap_size = VF_BITMAP_SIZE_IN_DWORDS;
+		vf_bitmap_size_in_bytes = VF_BITMAP_SIZE_IN_BYTES;
+	} else {
+		vf_bitmap_size = EXT_VF_BITMAP_SIZE_IN_DWORDS;
+		vf_bitmap_size_in_bytes = EXT_VF_BITMAP_SIZE_IN_BYTES;
+	}
+
+	for (i = 0; i < vf_bitmap_size; i++)
+		DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IOV),
+			   "FLR-ing VFs [%08x,...,%08x] - %08x\n",
+			   i * 32, (i + 1) * 32 - 1, vfs_to_flr[i]);
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_INITIATE_VF_FLR;
+	mb_params.p_data_src = vfs_to_flr;
+	mb_params.data_src_size = vf_bitmap_size_in_bytes;
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to send an initiate VF FLR request\n");
+		return rc;
+	}
+
+	if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The VF FLR command is unsupported by the MFW\n");
+		rc = ECORE_NOTIMPL;
+	} else if (mb_params.mcp_resp != (u32)FW_MSG_CODE_INITIATE_VF_FLR_OK) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to initiate VF FLR [resp 0x%08x]\n",
+			  mb_params.mcp_resp);
+		rc = ECORE_INVAL;
+	}
+
+	return rc;
+}
+
+enum _ecore_status_t ecore_mcp_vf_flr(struct ecore_hwfn *p_hwfn,
+				      struct ecore_ptt *p_ptt, u8 rel_vf_id)
+{
+	u32 vfs_to_flr[EXT_VF_BITMAP_SIZE_IN_DWORDS];
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u8 abs_vf_id, index, bit;
+	enum _ecore_status_t rc;
+
+	if (!IS_PF_SRIOV(p_hwfn)) {
+		DP_ERR(p_hwfn,
+		       "Failed to initiate VF FLR: SRIOV is disabled\n");
+		return ECORE_INVAL;
+	}
+
+	if (rel_vf_id >= p_dev->p_iov_info->total_vfs) {
+		DP_ERR(p_hwfn,
+		       "Failed to initiate VF FLR: VF ID is too high [total_vfs %d]\n",
+		       p_dev->p_iov_info->total_vfs);
+		return ECORE_INVAL;
+	}
+
+	abs_vf_id = (u8)p_dev->p_iov_info->first_vf_in_pf + rel_vf_id;
+	index = abs_vf_id / 32;
+	bit = abs_vf_id % 32;
+	OSAL_MEMSET(vfs_to_flr, 0, EXT_VF_BITMAP_SIZE_IN_BYTES);
+	vfs_to_flr[index] |= (1 << bit);
+
+	rc = ecore_mcp_initiate_vf_flr(p_hwfn, p_ptt, vfs_to_flr);
+	return rc;
+}
+
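
The index/bit arithmetic above is the usual dword-bitmap idiom: 32 VFs per
u32, so the dword is abs_vf_id / 32 and the bit within it is abs_vf_id % 32.
A self-contained sketch, with an assumed bitmap size standing in for
EXT_VF_BITMAP_SIZE_IN_DWORDS:

    #include <stdint.h>
    #include <stdio.h>

    #define BITMAP_DWORDS 8 /* stand-in for EXT_VF_BITMAP_SIZE_IN_DWORDS */

    int main(void)
    {
            uint32_t vfs_to_flr[BITMAP_DWORDS] = { 0 };
            uint8_t abs_vf_id = 77;         /* first_vf_in_pf + rel_vf_id */
            uint8_t index = abs_vf_id / 32; /* -> dword 2 */
            uint8_t bit = abs_vf_id % 32;   /* -> bit 13 */

            vfs_to_flr[index] |= 1u << bit;
            printf("dword %d = 0x%08x\n", index, vfs_to_flr[index]); /* 0x00002000 */
            return 0;
    }
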
+enum _ecore_status_t ecore_mcp_get_lldp_mac(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u8 lldp_mac_addr[ECORE_ETH_ALEN])
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac lldp_mac;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_GET_LLDP_MAC;
+	mb_params.p_data_dst = &lldp_mac;
+	mb_params.data_dst_size = sizeof(lldp_mac);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	if (mb_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_NOTICE(p_hwfn, false,
+			  "MFW lacks support for the GET_LLDP_MAC command [resp 0x%08x]\n",
+			  mb_params.mcp_resp);
+		return ECORE_INVAL;
+	}
+
+	*(u16 *)lldp_mac_addr = OSAL_BE16_TO_CPU(*(u16 *)&lldp_mac.mac_upper);
+	*(u32 *)(lldp_mac_addr + sizeof(u16)) =
+		OSAL_BE32_TO_CPU(lldp_mac.mac_lower);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "LLDP MAC address is %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx\n",
+		   lldp_mac_addr[0], lldp_mac_addr[1], lldp_mac_addr[2],
+		   lldp_mac_addr[3], lldp_mac_addr[4], lldp_mac_addr[5]);
+
+	return ECORE_SUCCESS;
+}
+
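
The mailbox carries the MAC as a big-endian u16 (mac_upper) followed by a
u32 (mac_lower), and the code above byte-swaps them into the caller's
array. Below is one conventional way to expand such a pair into six bytes
once both halves are already in CPU order; this is a hedged sketch of the
layout, not the driver's exact unaligned-store sequence:

    #include <stdint.h>
    #include <stdio.h>

    /* Expand a MAC carried as u16 (first two bytes) + u32 (last four),
     * already converted to CPU order, into the usual six-byte array. */
    static void unpack_mac(uint16_t upper, uint32_t lower, uint8_t mac[6])
    {
            mac[0] = upper >> 8;
            mac[1] = upper & 0xff;
            mac[2] = lower >> 24;
            mac[3] = (lower >> 16) & 0xff;
            mac[4] = (lower >> 8) & 0xff;
            mac[5] = lower & 0xff;
    }

    int main(void)
    {
            uint8_t mac[6];

            unpack_mac(0x000e, 0x1e123456u, mac);
            printf("%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx\n",
                   mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
            return 0; /* prints 00:0e:1e:12:34:56 */
    }
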
+enum _ecore_status_t ecore_mcp_set_lldp_mac(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u8 lldp_mac_addr[ECORE_ETH_ALEN])
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac lldp_mac;
+	enum _ecore_status_t rc;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Configuring LLDP MAC address to %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx\n",
+		   lldp_mac_addr[0], lldp_mac_addr[1], lldp_mac_addr[2],
+		   lldp_mac_addr[3], lldp_mac_addr[4], lldp_mac_addr[5]);
+
+	OSAL_MEM_ZERO(&lldp_mac, sizeof(lldp_mac));
+	lldp_mac.mac_upper = OSAL_CPU_TO_BE16(*(u16 *)lldp_mac_addr);
+	lldp_mac.mac_lower =
+		OSAL_CPU_TO_BE32(*(u32 *)(lldp_mac_addr + sizeof(u16)));
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_SET_LLDP_MAC;
+	mb_params.p_data_src = &lldp_mac;
+	mb_params.data_src_size = sizeof(lldp_mac);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	if (mb_params.mcp_resp != FW_MSG_CODE_OK) {
+		DP_NOTICE(p_hwfn, false,
+			  "MFW lacks support for the SET_LLDP_MAC command [resp 0x%08x]\n",
+			  mb_params.mcp_resp);
+		return ECORE_INVAL;
+	}
+
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
+						   struct ecore_ptt *p_ptt,
+						   u32 param, u32 *p_mcp_resp,
+						   u32 *p_mcp_param)
+{
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_RESOURCE_CMD, param,
 			   p_mcp_resp, p_mcp_param);
 	if (rc != ECORE_SUCCESS)
 		return rc;
@@ -4173,35 +5167,36 @@ static enum _ecore_status_t ecore_mcp_resource_cmd(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
+static enum _ecore_status_t
 __ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		      struct ecore_resc_lock_params *p_params)
 {
-	u32 param = 0, mcp_resp = 0, mcp_param = 0;
-	u8 opcode;
+	u32 param = 0, mcp_resp, mcp_param;
+	u8 opcode, timeout;
 	enum _ecore_status_t rc;
 
 	switch (p_params->timeout) {
 	case ECORE_MCP_RESC_LOCK_TO_DEFAULT:
 		opcode = RESOURCE_OPCODE_REQ;
-		p_params->timeout = 0;
+		timeout = 0;
 		break;
 	case ECORE_MCP_RESC_LOCK_TO_NONE:
 		opcode = RESOURCE_OPCODE_REQ_WO_AGING;
-		p_params->timeout = 0;
+		timeout = 0;
 		break;
 	default:
 		opcode = RESOURCE_OPCODE_REQ_W_AGING;
+		timeout = p_params->timeout;
 		break;
 	}
 
 	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_RESC, p_params->resource);
 	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_OPCODE, opcode);
-	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_AGE, p_params->timeout);
+	SET_MFW_FIELD(param, RESOURCE_CMD_REQ_AGE, timeout);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Resource lock request: param 0x%08x [age %d, opcode %d, resource %d]\n",
-		   param, p_params->timeout, opcode, p_params->resource);
+		   param, timeout, opcode, p_params->resource);
 
 	/* Attempt to acquire the resource */
 	rc = ecore_mcp_resource_cmd(p_hwfn, p_ptt, param, &mcp_resp,
@@ -4245,7 +5240,7 @@ ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		/* No need for an interval before the first iteration */
 		if (retry_cnt) {
 			if (p_params->sleep_b4_retry) {
-				u16 retry_interval_in_ms =
+				u32 retry_interval_in_ms =
 					DIV_ROUND_UP(p_params->retry_interval,
 						     1000);
 
@@ -4256,44 +5251,16 @@ ecore_mcp_resc_lock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		}
 
 		rc = __ecore_mcp_resc_lock(p_hwfn, p_ptt, p_params);
+		if (rc == ECORE_AGAIN)
+			continue;
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
 		if (p_params->b_granted)
-			break;
+			return ECORE_SUCCESS;
 	} while (retry_cnt++ < p_params->retry_num);
 
-	return ECORE_SUCCESS;
-}
-
-void ecore_mcp_resc_lock_default_init(struct ecore_resc_lock_params *p_lock,
-				      struct ecore_resc_unlock_params *p_unlock,
-				      enum ecore_resc_lock resource,
-				      bool b_is_permanent)
-{
-	if (p_lock != OSAL_NULL) {
-		OSAL_MEM_ZERO(p_lock, sizeof(*p_lock));
-
-		/* Permanent resources don't require aging, and there's no
-		 * point in trying to acquire them more than once since it's
-		 * unexpected another entity would release them.
-		 */
-		if (b_is_permanent) {
-			p_lock->timeout = ECORE_MCP_RESC_LOCK_TO_NONE;
-		} else {
-			p_lock->retry_num = ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT;
-			p_lock->retry_interval =
-					ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT;
-			p_lock->sleep_b4_retry = true;
-		}
-
-		p_lock->resource = resource;
-	}
-
-	if (p_unlock != OSAL_NULL) {
-		OSAL_MEM_ZERO(p_unlock, sizeof(*p_unlock));
-		p_unlock->resource = resource;
-	}
+	return ECORE_TIMEOUT;
 }
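
The reworked loop now distinguishes three outcomes: ECORE_AGAIN retries, a
grant returns ECORE_SUCCESS, and an exhausted retry budget returns
ECORE_TIMEOUT, with no wait before the first attempt. A self-contained
sketch of that shape, with try_lock() standing in for the
__ecore_mcp_resc_lock() mailbox exchange:

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Pretend the current owner releases the lock after a few attempts. */
    static bool try_lock(unsigned int attempt)
    {
            return attempt >= 3;
    }

    int main(void)
    {
            const unsigned int retry_num = 10, retry_interval_us = 10000;
            unsigned int retry_cnt = 0;

            do {
                    if (retry_cnt) /* no wait before the first attempt */
                            usleep(retry_interval_us);
                    if (try_lock(retry_cnt)) {
                            printf("granted after %u retries\n", retry_cnt);
                            return 0; /* ECORE_SUCCESS */
                    }
            } while (retry_cnt++ < retry_num);

            printf("timed out\n"); /* ECORE_TIMEOUT */
            return 1;
    }
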
 
 enum _ecore_status_t
@@ -4348,12 +5315,114 @@ ecore_mcp_resc_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return ECORE_SUCCESS;
 }
 
+void ecore_mcp_resc_lock_default_init(struct ecore_resc_lock_params *p_lock,
+				      struct ecore_resc_unlock_params *p_unlock,
+				      enum ecore_resc_lock resource,
+				      bool b_is_permanent)
+{
+	if (p_lock != OSAL_NULL) {
+		OSAL_MEM_ZERO(p_lock, sizeof(*p_lock));
+
+		/* Permanent resources don't require aging, and there's no
+		 * point in trying to acquire them more than once, since it's
+		 * not expected that another entity would release them.
+		 */
+		if (b_is_permanent) {
+			p_lock->timeout = ECORE_MCP_RESC_LOCK_TO_NONE;
+		} else {
+			p_lock->retry_num = ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT;
+			p_lock->retry_interval =
+					ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT;
+			p_lock->sleep_b4_retry = true;
+		}
+
+		p_lock->resource = resource;
+	}
+
+	if (p_unlock != OSAL_NULL) {
+		OSAL_MEM_ZERO(p_unlock, sizeof(*p_unlock));
+		p_unlock->resource = resource;
+	}
+}
+
+enum _ecore_status_t
+ecore_mcp_update_fcoe_cvid(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   u16 vlan)
+{
+	u32 resp = 0, param = 0, mcp_param = 0;
+	enum _ecore_status_t rc;
+
+	SET_MFW_FIELD(param, DRV_MB_PARAM_FCOE_CVID, vlan);
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_OEM_UPDATE_FCOE_CVID,
+			   param, &resp, &mcp_param);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to update fcoe vlan, rc = %d\n", rc);
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_mcp_update_fcoe_fabric_name(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt, u8 *wwn)
+{
+	struct ecore_mcp_mb_params mb_params;
+	struct mcp_wwn fabric_name;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&fabric_name, sizeof(fabric_name));
+	fabric_name.wwn_upper = *(u32 *)wwn;
+	fabric_name.wwn_lower = *(u32 *)(wwn + 4);
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_OEM_UPDATE_FCOE_FABRIC_NAME;
+	mb_params.p_data_src = &fabric_name;
+	mb_params.data_src_size = sizeof(fabric_name);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to update fcoe wwn, rc = %d\n", rc);
+
+	return rc;
+}
+
+void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      u32 offset, u32 val)
+{
+	enum _ecore_status_t	   rc = ECORE_SUCCESS;
+	u32			   dword = val;
+	struct ecore_mcp_mb_params mb_params;
+
+	OSAL_MEMSET(&mb_params, 0, sizeof(struct ecore_mcp_mb_params));
+	mb_params.cmd = DRV_MSG_CODE_WRITE_WOL_REG;
+	mb_params.param = offset;
+	mb_params.p_data_src = &dword;
+	mb_params.data_src_size = sizeof(dword);
+
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to wol write request, rc = %d\n", rc);
+	}
+
+	if (mb_params.mcp_resp != FW_MSG_CODE_WOL_READ_WRITE_OK) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to write value 0x%x to offset 0x%x [mcp_resp 0x%x]\n",
+			  val, offset, mb_params.mcp_resp);
+		rc = ECORE_UNKNOWN_ERROR;
+	}
+}
+
 bool ecore_mcp_is_smart_an_supported(struct ecore_hwfn *p_hwfn)
 {
 	return !!(p_hwfn->mcp_info->capabilities &
 		  FW_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ);
 }
 
+bool ecore_mcp_rlx_odr_supported(struct ecore_hwfn *p_hwfn)
+{
+	return !!(p_hwfn->mcp_info->capabilities &
+		  FW_MB_PARAM_FEATURE_SUPPORT_RELAXED_ORD);
+}
+
 enum _ecore_status_t ecore_mcp_get_capabilities(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt)
 {
@@ -4377,7 +5446,12 @@ enum _ecore_status_t ecore_mcp_set_capabilities(struct ecore_hwfn *p_hwfn,
 
 	features = DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SMARTLINQ |
 		   DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EEE |
-		   DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_VLINK;
+		   DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_VLINK |
+		   DRV_MB_PARAM_FEATURE_SUPPORT_PORT_FEC_CONTROL;
+
+	if (ECORE_IS_E5(p_hwfn->p_dev))
+		features |=
+			DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EXT_SPEED_FEC_CONTROL;
 
 	return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_FEATURE_SUPPORT,
 			     features, &mcp_resp, &mcp_param);
@@ -4525,29 +5599,403 @@ enum _ecore_status_t ecore_mcp_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		      u32 offset, u32 val)
+enum _ecore_status_t
+ecore_mcp_ind_table_lock(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 u8 retry_num,
+			 u32 retry_interval)
+{
+	struct ecore_resc_lock_params resc_lock_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&resc_lock_params,
+		      sizeof(struct ecore_resc_lock_params));
+	resc_lock_params.resource = ECORE_RESC_LOCK_IND_TABLE;
+	if (!retry_num)
+		retry_num = ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT;
+	resc_lock_params.retry_num = retry_num;
+
+	if (!retry_interval)
+		retry_interval = ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT;
+	resc_lock_params.retry_interval = retry_interval;
+
+	rc = ecore_mcp_resc_lock(p_hwfn, p_ptt, &resc_lock_params);
+	if (rc == ECORE_SUCCESS && !resc_lock_params.b_granted) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to acquire the resource lock for IDT access\n");
+		return ECORE_BUSY;
+	}
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_mcp_ind_table_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	struct ecore_resc_unlock_params resc_unlock_params;
+	enum _ecore_status_t rc;
+
+	OSAL_MEM_ZERO(&resc_unlock_params,
+		      sizeof(struct ecore_resc_unlock_params));
+	resc_unlock_params.resource = ECORE_RESC_LOCK_IND_TABLE;
+	rc = ecore_mcp_resc_unlock(p_hwfn, p_ptt,
+				  &resc_unlock_params);
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_mcp_nvm_get_cfg(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      u16 option_id, u8 entity_id, u16 flags, u8 *p_buf,
+		      u32 *p_len)
+{
+	u32 mb_param = 0, resp, param;
+	enum _ecore_status_t rc;
+
+	SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ID, option_id);
+	if (flags & ECORE_NVM_CFG_OPTION_INIT)
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_INIT, 1);
+	if (flags & ECORE_NVM_CFG_OPTION_FREE)
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_FREE, 1);
+	if (flags & ECORE_NVM_CFG_OPTION_ENTITY_SEL) {
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_SEL,
+			      1);
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_ID,
+			      entity_id);
+	}
+
+	rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt,
+				  DRV_MSG_CODE_GET_NVM_CFG_OPTION, mb_param,
+				  &resp, &param, p_len, (u32 *)p_buf, false);
+
+	return rc;
+}
+
+enum _ecore_status_t
+ecore_mcp_nvm_set_cfg(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      u16 option_id, u8 entity_id, u16 flags, u8 *p_buf,
+		      u32 len)
+{
+	u32 mb_param = 0, resp, param;
+	enum _ecore_status_t rc;
+
+	SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ID, option_id);
+	if (flags & ECORE_NVM_CFG_OPTION_ALL)
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ALL, 1);
+	if (flags & ECORE_NVM_CFG_OPTION_INIT)
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_INIT, 1);
+	if (flags & ECORE_NVM_CFG_OPTION_COMMIT)
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT, 1);
+	if (flags & ECORE_NVM_CFG_OPTION_FREE)
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_FREE, 1);
+	if (flags & ECORE_NVM_CFG_OPTION_ENTITY_SEL) {
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_SEL,
+			      1);
+		SET_MFW_FIELD(mb_param, DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_ID,
+			      entity_id);
+	}
+	if (flags & ECORE_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL) {
+		if (!ecore_mcp_is_restore_default_cfg_supported(p_hwfn)) {
+			DP_ERR(p_hwfn, "Restoring nvm config to default value is unsupported by the MFW\n");
+			return ECORE_NOTIMPL;
+		}
+		SET_MFW_FIELD(mb_param,
+			      DRV_MB_PARAM_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL,
+			      1);
+	}
+
+	rc = ecore_mcp_nvm_wr_cmd(p_hwfn, p_ptt,
+				  DRV_MSG_CODE_SET_NVM_CFG_OPTION, mb_param,
+				  &resp, &param, len, (u32 *)p_buf, false);
+
+	return rc;
+}
+
+enum ecore_perm_mac_type {
+	ECORE_PERM_MAC_TYPE_PF,
+	ECORE_PERM_MAC_TYPE_BMC,
+	ECORE_PERM_MAC_TYPE_VF,
+	ECORE_PERM_MAC_TYPE_LLDP,
+};
+
+static enum _ecore_status_t
+ecore_mcp_get_permanent_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			    enum ecore_perm_mac_type type, u32 index,
+			    u8 mac_addr[ECORE_ETH_ALEN])
 {
-	enum _ecore_status_t	   rc = ECORE_SUCCESS;
-	u32			   dword = val;
 	struct ecore_mcp_mb_params mb_params;
+	struct mcp_mac perm_mac;
+	u32 mfw_type;
+	enum _ecore_status_t rc;
 
-	OSAL_MEMSET(&mb_params, 0, sizeof(struct ecore_mcp_mb_params));
-	mb_params.cmd = DRV_MSG_CODE_WRITE_WOL_REG;
-	mb_params.param = offset;
-	mb_params.p_data_src = &dword;
-	mb_params.data_src_size = sizeof(dword);
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_GET_PERM_MAC;
+	switch (type) {
+	case ECORE_PERM_MAC_TYPE_PF:
+		mfw_type = DRV_MSG_CODE_GET_PERM_MAC_TYPE_PF;
+		break;
+	case ECORE_PERM_MAC_TYPE_BMC:
+		mfw_type = DRV_MSG_CODE_GET_PERM_MAC_TYPE_BMC;
+		break;
+	case ECORE_PERM_MAC_TYPE_VF:
+		mfw_type = DRV_MSG_CODE_GET_PERM_MAC_TYPE_VF;
+		break;
+	case ECORE_PERM_MAC_TYPE_LLDP:
+		mfw_type = DRV_MSG_CODE_GET_PERM_MAC_TYPE_LLDP;
+		break;
+	default:
+		DP_ERR(p_hwfn, "Unexpected type of permanent MAC [%d]\n", type);
+		return ECORE_INVAL;
+	}
+	SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_GET_PERM_MAC_TYPE,
+		      mfw_type);
+	SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_GET_PERM_MAC_INDEX, index);
+	mb_params.p_data_dst = &perm_mac;
+	mb_params.data_dst_size = sizeof(perm_mac);
+	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The GET_PERM_MAC command is unsupported by the MFW\n");
+		return ECORE_NOTIMPL;
+	} else if (mb_params.mcp_resp != (u32)FW_MSG_CODE_GET_PERM_MAC_OK) {
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to get a permanent MAC address [type %d, index %d, resp 0x%08x]\n",
+			  type, index, mb_params.mcp_resp);
+		return ECORE_INVAL;
+	}
+
+	*(u16 *)mac_addr = OSAL_BE16_TO_CPU(*(u16 *)&perm_mac.mac_upper);
+	*(u32 *)(mac_addr + sizeof(u16)) = OSAL_BE32_TO_CPU(perm_mac.mac_lower);
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Permanent MAC address [type %d, index %d] is %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx\n",
+		   type, index, mac_addr[0], mac_addr[1], mac_addr[2],
+		   mac_addr[3], mac_addr[4], mac_addr[5]);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_get_perm_vf_mac(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       u32 rel_vf_id,
+					       u8 mac_addr[ECORE_ETH_ALEN])
+{
+	u32 index;
+
+	if (!RESC_NUM(p_hwfn, ECORE_VF_MAC_ADDR)) {
+		DP_INFO(p_hwfn, "There are no permanent VF MAC addresses\n");
+		return ECORE_NORESOURCES;
+	}
+
+	if (rel_vf_id >= RESC_NUM(p_hwfn, ECORE_VF_MAC_ADDR)) {
+		DP_INFO(p_hwfn,
+			"There are permanent VF MAC addresses for only the first %d VFs\n",
+			RESC_NUM(p_hwfn, ECORE_VF_MAC_ADDR));
+		return ECORE_NORESOURCES;
+	}
+
+	index = RESC_START(p_hwfn, ECORE_VF_MAC_ADDR) + rel_vf_id;
+	return ecore_mcp_get_permanent_mac(p_hwfn, p_ptt,
+					   ECORE_PERM_MAC_TYPE_VF, index,
+					   mac_addr);
+}
 
+#define ECORE_MCP_DBG_DATA_MAX_SIZE		MCP_DRV_NVM_BUF_LEN
+#define ECORE_MCP_DBG_DATA_MAX_HEADER_SIZE	sizeof(u32)
+#define ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE \
+	(ECORE_MCP_DBG_DATA_MAX_SIZE - ECORE_MCP_DBG_DATA_MAX_HEADER_SIZE)
+
+static enum _ecore_status_t
+__ecore_mcp_send_debug_data(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			    u8 *p_buf, u8 size)
+{
+	struct ecore_mcp_mb_params mb_params;
+	enum _ecore_status_t rc;
+
+	if (size > ECORE_MCP_DBG_DATA_MAX_SIZE) {
+		DP_ERR(p_hwfn,
+		       "Debug data size is %d while it should not exceed %d\n",
+		       size, ECORE_MCP_DBG_DATA_MAX_SIZE);
+		return ECORE_INVAL;
+	}
+
+	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+	mb_params.cmd = DRV_MSG_CODE_DEBUG_DATA_SEND;
+	SET_MFW_FIELD(mb_params.param, DRV_MSG_CODE_DEBUG_DATA_SEND_SIZE, size);
+	mb_params.p_data_src = p_buf;
+	mb_params.data_src_size = size;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
-	if (rc != ECORE_SUCCESS) {
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+		DP_INFO(p_hwfn,
+			"The DEBUG_DATA_SEND command is unsupported by the MFW\n");
+		return ECORE_NOTIMPL;
+	} else if (mb_params.mcp_resp == (u32)FW_MSG_CODE_DEBUG_NOT_ENABLED) {
+		DP_INFO(p_hwfn,
+			"The DEBUG_DATA_SEND command is not enabled\n");
+		return ECORE_ABORTED;
+	} else if (mb_params.mcp_resp != (u32)FW_MSG_CODE_DEBUG_DATA_SEND_OK) {
 		DP_NOTICE(p_hwfn, false,
-			  "Failed to wol write request, rc = %d\n", rc);
+			  "Failed to send debug data to the MFW [resp 0x%08x]\n",
+			  mb_params.mcp_resp);
+		return ECORE_INVAL;
 	}
 
-	if (mb_params.mcp_resp != FW_MSG_CODE_WOL_READ_WRITE_OK) {
+	return ECORE_SUCCESS;
+}
+
+enum ecore_mcp_dbg_data_type {
+	ECORE_MCP_DBG_DATA_TYPE_RAW,
+};
+
+/* Header format: [31:28] PFID, [27:20] flags, [19:12] type, [11:0] S/N */
+#define ECORE_MCP_DBG_DATA_HDR_SN_OFFSET	0
+#define ECORE_MCP_DBG_DATA_HDR_SN_MASK		0x00000fff
+#define ECORE_MCP_DBG_DATA_HDR_TYPE_OFFSET	12
+#define ECORE_MCP_DBG_DATA_HDR_TYPE_MASK	0x000ff000
+#define ECORE_MCP_DBG_DATA_HDR_FLAGS_OFFSET	20
+#define ECORE_MCP_DBG_DATA_HDR_FLAGS_MASK	0x0ff00000
+#define ECORE_MCP_DBG_DATA_HDR_PF_OFFSET	28
+#define ECORE_MCP_DBG_DATA_HDR_PF_MASK		0xf0000000
+
+#define ECORE_MCP_DBG_DATA_HDR_FLAGS_FIRST	0x1
+#define ECORE_MCP_DBG_DATA_HDR_FLAGS_LAST	0x2
+
+static enum _ecore_status_t
+ecore_mcp_send_debug_data(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			  enum ecore_mcp_dbg_data_type type, u8 *p_buf,
+			  u32 size)
+{
+	u8 raw_data[ECORE_MCP_DBG_DATA_MAX_SIZE], *p_tmp_buf = p_buf;
+	u32 tmp_size = size, *p_header, *p_payload;
+	u8 flags = 0;
+	u16 seq;
+	enum _ecore_status_t rc;
+
+	p_header = (u32 *)raw_data;
+	p_payload = (u32 *)(raw_data + ECORE_MCP_DBG_DATA_MAX_HEADER_SIZE);
+
+	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->dbg_data_lock);
+	seq = p_hwfn->mcp_info->dbg_data_seq++;
+	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->dbg_data_lock);
+
+	/* First chunk is marked as 'first' */
+	flags |= ECORE_MCP_DBG_DATA_HDR_FLAGS_FIRST;
+
+	*p_header = 0;
+	SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_SN, seq);
+	SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_TYPE, type);
+	SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_FLAGS, flags);
+	SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_PF, p_hwfn->abs_pf_id);
+
+	while (tmp_size > ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE) {
+		OSAL_MEMCPY(p_payload, p_tmp_buf,
+			    ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE);
+		rc = __ecore_mcp_send_debug_data(p_hwfn, p_ptt, raw_data,
+						 ECORE_MCP_DBG_DATA_MAX_SIZE);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+
+		/* Clear the 'first' marking after sending the first chunk */
+		if (p_tmp_buf == p_buf) {
+			flags &= ~ECORE_MCP_DBG_DATA_HDR_FLAGS_FIRST;
+			SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_FLAGS,
+				      flags);
+		}
+
+		p_tmp_buf += ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE;
+		tmp_size -= ECORE_MCP_DBG_DATA_MAX_PAYLOAD_SIZE;
+	}
+
+	/* Last chunk is marked as 'last' */
+	flags |= ECORE_MCP_DBG_DATA_HDR_FLAGS_LAST;
+	SET_MFW_FIELD(*p_header, ECORE_MCP_DBG_DATA_HDR_FLAGS, flags);
+	OSAL_MEMCPY(p_payload, p_tmp_buf, tmp_size);
+
+	/* Casting the remaining size to u8 is OK since at this point it is <= 32 */
+	return __ecore_mcp_send_debug_data(p_hwfn, p_ptt, raw_data,
+					   (u8)(ECORE_MCP_DBG_DATA_MAX_HEADER_SIZE +
+						tmp_size));
+}
+
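
For reference, the header above packs four fields into one dword
([31:28] PFID, [27:20] flags, [19:12] type, [11:0] S/N), and the loop
toggles the FIRST/LAST flags around the chunk boundaries. A short,
compilable check of the packing, reusing the mask/offset values from the
defines above:

    #include <stdint.h>
    #include <stdio.h>

    /* Masks/offsets copied from the ECORE_MCP_DBG_DATA_HDR_* defines. */
    #define HDR_SN_OFFSET    0
    #define HDR_SN_MASK      0x00000fffu
    #define HDR_TYPE_OFFSET  12
    #define HDR_TYPE_MASK    0x000ff000u
    #define HDR_FLAGS_OFFSET 20
    #define HDR_FLAGS_MASK   0x0ff00000u
    #define HDR_PF_OFFSET    28
    #define HDR_PF_MASK      0xf0000000u

    int main(void)
    {
            uint32_t hdr = 0;

            hdr |= (123u << HDR_SN_OFFSET) & HDR_SN_MASK;     /* S/N 123 */
            hdr |= (0u << HDR_TYPE_OFFSET) & HDR_TYPE_MASK;   /* TYPE_RAW */
            hdr |= (3u << HDR_FLAGS_OFFSET) & HDR_FLAGS_MASK; /* FIRST|LAST */
            hdr |= (2u << HDR_PF_OFFSET) & HDR_PF_MASK;       /* PF 2 */
            printf("header=0x%08x\n", hdr);                   /* 0x2030007b */
            return 0;
    }
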
+enum _ecore_status_t
+ecore_mcp_send_raw_debug_data(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt, u8 *p_buf, u32 size)
+{
+	return ecore_mcp_send_debug_data(p_hwfn, p_ptt,
+					 ECORE_MCP_DBG_DATA_TYPE_RAW, p_buf,
+					 size);
+}
+
+enum _ecore_status_t
+ecore_mcp_set_bandwidth(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			u8 bw_min, u8 bw_max)
+{
+	u32 resp = 0, param = 0, drv_mb_param = 0;
+	enum _ecore_status_t rc;
+
+	SET_MFW_FIELD(drv_mb_param, BW_MIN, bw_min);
+	SET_MFW_FIELD(drv_mb_param, BW_MAX, bw_max);
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_BW,
+			   drv_mb_param, &resp, &param);
+	if (rc != ECORE_SUCCESS)
+		DP_ERR(p_hwfn, "Failed to send BW update, rc = %d\n", rc);
+
+	return rc;
+}
+
+bool ecore_mcp_is_esl_supported(struct ecore_hwfn *p_hwfn)
+{
+	return !!(p_hwfn->mcp_info->capabilities &
+		  FW_MB_PARAM_FEATURE_SUPPORT_ENHANCED_SYS_LCK);
+}
+
+enum _ecore_status_t
+ecore_mcp_get_esl_status(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			 bool *active)
+{
+	u32 resp = 0, param = 0;
+	enum _ecore_status_t rc;
+
+	rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GET_MANAGEMENT_STATUS, 0,
+			   &resp, &param);
+	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, false,
-			  "Failed to write value 0x%x to offset 0x%x [mcp_resp 0x%x]\n",
-			  val, offset, mb_params.mcp_resp);
-		rc = ECORE_UNKNOWN_ERROR;
+			  "Failed to send ESL command, rc = %d\n", rc);
+		return rc;
 	}
+
+	*active = !!(param & FW_MB_PARAM_MANAGEMENT_STATUS_LOCKDOWN_ENABLED);
+
+	return ECORE_SUCCESS;
+}
+
+bool ecore_mcp_is_ext_speed_supported(struct ecore_hwfn *p_hwfn)
+{
+	return !!(p_hwfn->mcp_info->capabilities &
+		  FW_MB_PARAM_FEATURE_SUPPORT_EXT_SPEED_FEC_CONTROL);
+}
+
+enum _ecore_status_t
+ecore_mcp_gen_mdump_idlechk(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+	struct ecore_mdump_cmd_params mdump_cmd_params;
+
+	OSAL_MEM_ZERO(&mdump_cmd_params, sizeof(mdump_cmd_params));
+	mdump_cmd_params.cmd = DRV_MSG_CODE_MDUMP_GEN_IDLE_CHK;
+
+	return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, &mdump_cmd_params);
+}
+
+bool ecore_mcp_is_restore_default_cfg_supported(struct ecore_hwfn *p_hwfn)
+{
+	return !!(p_hwfn->mcp_info->capabilities &
+		  FW_MB_PARAM_FEATURE_SUPPORT_RESTORE_DEFAULT_CFG);
 }
+#ifdef _NTDDK_
+#pragma warning(pop)
+#endif
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 185cc2339..81d3dc04a 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_MCP_H__
 #define __ECORE_MCP_H__
 
@@ -26,6 +26,9 @@
 #define MCP_PF_ID(p_hwfn)	MCP_PF_ID_BY_REL(p_hwfn, (p_hwfn)->rel_pf_id)
 
 struct ecore_mcp_info {
+	/* Flag to indicate whether the MFW is running */
+	bool b_mfw_running;
+
 	/* List for mailbox commands which were sent and wait for a response */
 	osal_list_t cmd_list;
 
@@ -37,6 +40,14 @@ struct ecore_mcp_info {
 	/* Flag to indicate whether sending a MFW mailbox command is blocked */
 	bool b_block_cmd;
 
+	/* Flag to indicate whether MFW was halted */
+	bool b_halted;
+
+	/* Flag to indicate that the driver discovered that the MCP is
+	 * running in recovery mode.
+	 */
+	bool recovery_mode;
+
 	/* Spinlock used for syncing SW link-changes and link-changes
 	 * originating from attention context.
 	 */
@@ -67,8 +78,16 @@ struct ecore_mcp_info {
 	u16 mfw_mb_length;
 	u32 mcp_hist;
 
-	/* Capabilties negotiated with the MFW */
+	/* Capabilities negotiated with the MFW */
 	u32 capabilities;
+
+	/* S/N for debug data mailbox commands and a spinlock to protect it */
+	u16 dbg_data_seq;
+	osal_spinlock_t dbg_data_lock;
+	osal_spinlock_t unload_lock;
+	u32 mcp_handling_status; /* @DPDK */
+#define ECORE_MCP_BYPASS_PROC_BIT	0
+#define ECORE_MCP_IN_PROCESSING_BIT	1
 };
 
 struct ecore_mcp_mb_params {
@@ -76,14 +95,14 @@ struct ecore_mcp_mb_params {
 	u32 param;
 	void *p_data_src;
 	void *p_data_dst;
-	u32 mcp_resp;
-	u32 mcp_param;
 	u8 data_src_size;
 	u8 data_dst_size;
+	u32 mcp_resp;
+	u32 mcp_param;
 	u32 flags;
-#define ECORE_MB_FLAG_CAN_SLEEP         (0x1 << 0)
-#define ECORE_MB_FLAG_AVOID_BLOCK       (0x1 << 1)
-#define ECORE_MB_FLAGS_IS_SET(params, flag) \
+#define ECORE_MB_FLAG_CAN_SLEEP		(0x1 << 0)
+#define ECORE_MB_FLAG_AVOID_BLOCK	(0x1 << 1)
+#define ECORE_MB_FLAGS_IS_SET(params, flag)	\
 	((params) != OSAL_NULL && ((params)->flags & ECORE_MB_FLAG_##flag))
 };
 
@@ -199,6 +218,17 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_load_done(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt);
 
+/**
+ * @brief Sends a CANCEL_LOAD_REQ message to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - Operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_cancel_load_req(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt);
+
 /**
  * @brief Sends a UNLOAD_REQ message to the MFW
  *
@@ -242,6 +272,8 @@ void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u32 *vfs_to_ack);
+enum _ecore_status_t ecore_mcp_vf_flr(struct ecore_hwfn *p_hwfn,
+				      struct ecore_ptt *p_ptt, u8 rel_vf_id);
 
 /**
  * @brief - calls during init to read shmem of all function-related info.
@@ -319,11 +351,13 @@ int __ecore_configure_pf_min_bandwidth(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_mask_parities(struct ecore_hwfn *p_hwfn,
 					     struct ecore_ptt *p_ptt,
 					     u32 mask_parities);
+
 /**
  * @brief - Sends crash mdump related info to the MFW.
  *
  * @param p_hwfn
  * @param p_ptt
+ * @param epoch
  *
  * @param return ECORE_SUCCESS upon success.
  */
@@ -336,7 +370,6 @@ enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
  *
  * @param p_hwfn
  * @param p_ptt
- * @param epoch
  *
  * @param return ECORE_SUCCESS upon success.
  */
@@ -399,10 +432,10 @@ ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 /**
  * @brief - Initiates PF FLR
  *
- * @param p_hwfn
- * @param p_ptt
+ *  @param p_hwfn
+ *  @param p_ptt
  *
- * @param return ECORE_SUCCESS upon success.
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
 enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
@@ -415,6 +448,12 @@ enum ecore_resc_lock {
 	/* Locks that the MFW is aware of should be added here downwards */
 
 	/* Ecore only locks should be added here upwards */
+	ECORE_RESC_LOCK_QM_RECONF = 25,
+	ECORE_RESC_LOCK_IND_TABLE = 26,
+	ECORE_RESC_LOCK_PTP_PORT0 = 27,
+	ECORE_RESC_LOCK_PTP_PORT1 = 28,
+	ECORE_RESC_LOCK_PTP_PORT2 = 29,
+	ECORE_RESC_LOCK_PTP_PORT3 = 30,
 	ECORE_RESC_LOCK_RESC_ALLOC = ECORE_MCP_RESC_LOCK_MAX_VAL,
 
 	/* A dummy value to be used for auxiliary functions in need of
@@ -437,7 +476,7 @@ struct ecore_resc_lock_params {
 #define ECORE_MCP_RESC_LOCK_RETRY_CNT_DFLT	10
 
 	/* The interval in usec between retries */
-	u16 retry_interval;
+	u32 retry_interval;
 #define ECORE_MCP_RESC_LOCK_RETRY_VAL_DFLT	10000
 
 	/* Use sleep or delay between retries */
@@ -502,6 +541,9 @@ void ecore_mcp_resc_lock_default_init(struct ecore_resc_lock_params *p_lock,
 				      enum ecore_resc_lock resource,
 				      bool b_is_permanent);
 
+void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      u32 offset, u32 val);
+
 /**
  * @brief Learn of supported MFW features; To be done during early init
  *
@@ -521,6 +563,15 @@ enum _ecore_status_t ecore_mcp_get_capabilities(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_set_capabilities(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt);
 
+/**
+ * @brief Initialize MFW mailbox and sequence values for driver interaction.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt);
+
 enum ecore_mcp_drv_attr_cmd {
 	ECORE_MCP_DRV_ATTR_CMD_READ,
 	ECORE_MCP_DRV_ATTR_CMD_WRITE,
@@ -565,9 +616,6 @@ ecore_mcp_drv_attribute(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 void
 ecore_mcp_read_ufp_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
 
-void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		      u32 offset, u32 val);
-
 /**
  * @brief Get the engine affinity configuration.
  *
@@ -586,4 +634,48 @@ enum _ecore_status_t ecore_mcp_get_engine_config(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt);
 
+/**
+ * @brief Acquire the MCP lock to access HW indirection table entries
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param retry_num
+ * @param retry_interval
+ */
+enum _ecore_status_t
+ecore_mcp_ind_table_lock(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 u8 retry_num,
+			 u32 retry_interval);
+
+/**
+ * @brief Release the MCP lock for access to HW indirection table entries
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+enum _ecore_status_t
+ecore_mcp_ind_table_unlock(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+
+/**
+ * @brief Populate the nvm info shadow in the given hardware function
+ *
+ * @param p_hwfn
+ */
+enum _ecore_status_t
+ecore_mcp_nvm_info_populate(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief Delete the nvm info shadow in the given hardware function
+ *
+ * @param p_hwfn
+ */
+void ecore_mcp_nvm_info_free(struct ecore_hwfn *p_hwfn);
+
+enum _ecore_status_t
+ecore_mcp_set_trace_filter(struct ecore_hwfn *p_hwfn,
+			   u32 *dbg_level, u32 *dbg_modules);
+
+enum _ecore_status_t
+ecore_mcp_restore_trace_filter(struct ecore_hwfn *p_hwfn);
 #endif /* __ECORE_MCP_H__ */
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index c3922ba43..800a23fb1 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_MCP_API_H__
 #define __ECORE_MCP_API_H__
 
@@ -12,7 +12,28 @@
 struct ecore_mcp_link_speed_params {
 	bool autoneg;
 	u32 advertised_speeds; /* bitmask of DRV_SPEED_CAPABILITY */
+#define ECORE_EXT_SPEED_MASK_RES	0x1
+#define ECORE_EXT_SPEED_MASK_1G		0x2
+#define ECORE_EXT_SPEED_MASK_10G	0x4
+#define ECORE_EXT_SPEED_MASK_20G	0x8
+#define ECORE_EXT_SPEED_MASK_25G	0x10
+#define ECORE_EXT_SPEED_MASK_40G	0x20
+#define ECORE_EXT_SPEED_MASK_50G_R	0x40
+#define ECORE_EXT_SPEED_MASK_50G_R2	0x80
+#define ECORE_EXT_SPEED_MASK_100G_R2	0x100
+#define ECORE_EXT_SPEED_MASK_100G_R4	0x200
+#define ECORE_EXT_SPEED_MASK_100G_P4	0x400
 	u32 forced_speed; /* In Mb/s */
+#define ECORE_EXT_SPEED_1G	0x1
+#define ECORE_EXT_SPEED_10G	0x2
+#define ECORE_EXT_SPEED_20G	0x4
+#define ECORE_EXT_SPEED_25G	0x8
+#define ECORE_EXT_SPEED_40G	0x10
+#define ECORE_EXT_SPEED_50G_R	0x20
+#define ECORE_EXT_SPEED_50G_R2	0x40
+#define ECORE_EXT_SPEED_100G_R2	0x80
+#define ECORE_EXT_SPEED_100G_R4	0x100
+#define ECORE_EXT_SPEED_100G_P4	0x200
 };
 
 struct ecore_mcp_link_pause_params {
@@ -27,6 +48,13 @@ enum ecore_mcp_eee_mode {
 	ECORE_MCP_EEE_UNSUPPORTED
 };
 
+#ifndef __EXTRACT__LINUX__
+#define ECORE_MCP_FEC_NONE		(1 << 0)
+#define ECORE_MCP_FEC_FIRECODE		(1 << 1)
+#define ECORE_MCP_FEC_RS		(1 << 2)
+#define ECORE_MCP_FEC_AUTO		(1 << 3)
+#define ECORE_MCP_FEC_UNSUPPORTED	(1 << 4)
+
 struct ecore_link_eee_params {
 	u32 tx_lpi_timer;
 #define ECORE_EEE_1G_ADV	(1 << 0)
@@ -37,21 +65,40 @@ struct ecore_link_eee_params {
 	bool enable;
 	bool tx_lpi_enable;
 };
+#endif
+
+struct ecore_mdump_cmd_params {
+	u32 cmd;
+	void *p_data_src;
+	u8 data_src_size;
+	void *p_data_dst;
+	u8 data_dst_size;
+	u32 mcp_resp;
+	u32 mcp_param;
+};
 
 struct ecore_mcp_link_params {
 	struct ecore_mcp_link_speed_params speed;
 	struct ecore_mcp_link_pause_params pause;
 	u32 loopback_mode; /* in PMM_LOOPBACK values */
 	struct ecore_link_eee_params eee;
+	u32 fec;
+	struct ecore_mcp_link_speed_params ext_speed;
+	u32 ext_fec_mode;
 };
 
 struct ecore_mcp_link_capabilities {
 	u32 speed_capabilities;
 	bool default_speed_autoneg; /* In Mb/s */
-	u32 default_speed; /* In Mb/s */
+	u32 default_speed; /* In Mb/s */ /* __LINUX__THROW__ */
+	u32 fec_default;
 	enum ecore_mcp_eee_mode default_eee;
 	u32 eee_lpi_timer;
 	u8 eee_speed_caps;
+	u32 default_ext_speed_caps;
+	u32 default_ext_autoneg;
+	u32 default_ext_speed;
+	u32 default_ext_fec;
 };
 
 struct ecore_mcp_link_state {
@@ -96,6 +143,8 @@ struct ecore_mcp_link_state {
 	bool eee_active;
 	u8 eee_adv_caps;
 	u8 eee_lp_adv_caps;
+
+	u32 fec_active;
 };
 
 struct ecore_mcp_function_info {
@@ -106,7 +155,7 @@ struct ecore_mcp_function_info {
 	u8 bandwidth_min;
 	u8 bandwidth_max;
 
-	u8 mac[ETH_ALEN];
+	u8 mac[ECORE_ETH_ALEN];
 
 	u64 wwn_port;
 	u64 wwn_node;
@@ -189,21 +238,28 @@ enum ecore_ov_driver_state {
 	ECORE_OV_DRIVER_STATE_ACTIVE
 };
 
+enum ecore_ov_wol {
+	ECORE_OV_WOL_DEFAULT,
+	ECORE_OV_WOL_DISABLED,
+	ECORE_OV_WOL_ENABLED
+};
+
 enum ecore_ov_eswitch {
 	ECORE_OV_ESWITCH_NONE,
 	ECORE_OV_ESWITCH_VEB,
 	ECORE_OV_ESWITCH_VEPA
 };
 
+#ifndef __EXTRACT__LINUX__
 #define ECORE_MAX_NPIV_ENTRIES 128
 #define ECORE_WWN_SIZE 8
 struct ecore_fc_npiv_tbl {
-	u32 count;
+	u16 num_wwpn;
+	u16 num_wwnn;
 	u8 wwpn[ECORE_MAX_NPIV_ENTRIES][ECORE_WWN_SIZE];
 	u8 wwnn[ECORE_MAX_NPIV_ENTRIES][ECORE_WWN_SIZE];
 };
 
-#ifndef __EXTRACT__LINUX__
 enum ecore_led_mode {
 	ECORE_LED_MODE_OFF,
 	ECORE_LED_MODE_ON,
@@ -249,18 +305,17 @@ enum ecore_mfw_tlv_type {
 };
 
 struct ecore_mfw_tlv_generic {
-	u16 feat_flags;
-	bool feat_flags_set;
-	u64 local_mac;
-	bool local_mac_set;
-	u64 additional_mac1;
-	bool additional_mac1_set;
-	u64 additional_mac2;
-	bool additional_mac2_set;
-	u8 drv_state;
-	bool drv_state_set;
-	u8 pxe_progress;
-	bool pxe_progress_set;
+	struct {
+		u8 ipv4_csum_offload;
+		u8 lso_supported;
+		bool b_set;
+	} flags;
+
+#define ECORE_MFW_TLV_MAC_COUNT 3
+	/* First entry for primary MAC, 2 secondary MACs possible */
+	u8 mac[ECORE_MFW_TLV_MAC_COUNT][6];
+	bool mac_set[ECORE_MFW_TLV_MAC_COUNT];
+
 	u64 rx_frames;
 	bool rx_frames_set;
 	u64 rx_bytes;
@@ -271,6 +326,7 @@ struct ecore_mfw_tlv_generic {
 	bool tx_bytes_set;
 };
 
+#ifndef __EXTRACT__LINUX__
 struct ecore_mfw_tlv_eth {
 	u16 lso_maxoff_size;
 	bool lso_maxoff_size_set;
@@ -293,6 +349,10 @@ struct ecore_mfw_tlv_eth {
 	u16 rx_descr_qdepth;
 	bool rx_descr_qdepth_set;
 	u8 iov_offload;
+#define ECORE_MFW_TLV_IOV_OFFLOAD_NONE		(0)
+#define ECORE_MFW_TLV_IOV_OFFLOAD_MULTIQUEUE	(1)
+#define ECORE_MFW_TLV_IOV_OFFLOAD_VEB		(2)
+#define ECORE_MFW_TLV_IOV_OFFLOAD_VEPA		(3)
 	bool iov_offload_set;
 	u8 txqs_empty;
 	bool txqs_empty_set;
@@ -304,6 +364,16 @@ struct ecore_mfw_tlv_eth {
 	bool num_rxqs_full_set;
 };
 
+struct ecore_mfw_tlv_time {
+	bool b_set;
+	u8 month;
+	u8 day;
+	u8 hour;
+	u8 min;
+	u16 msec;
+	u16 usec;
+};
+
 struct ecore_mfw_tlv_fcoe {
 	u8 scsi_timeout;
 	bool scsi_timeout_set;
@@ -338,6 +408,10 @@ struct ecore_mfw_tlv_fcoe {
 	u8 port_alias[3];
 	bool port_alias_set;
 	u8 port_state;
+#define ECORE_MFW_TLV_PORT_STATE_OFFLINE	(0)
+#define ECORE_MFW_TLV_PORT_STATE_LOOP		(1)
+#define ECORE_MFW_TLV_PORT_STATE_P2P		(2)
+#define ECORE_MFW_TLV_PORT_STATE_FABRIC		(3)
 	bool port_state_set;
 	u16 fip_tx_descr_size;
 	bool fip_tx_descr_size_set;
@@ -367,8 +441,7 @@ struct ecore_mfw_tlv_fcoe {
 	bool crc_count_set;
 	u32 crc_err_src_fcid[5];
 	bool crc_err_src_fcid_set[5];
-	u8 crc_err_tstamp[5][14];
-	bool crc_err_tstamp_set[5];
+	struct ecore_mfw_tlv_time crc_err[5];
 	u16 losync_err;
 	bool losync_err_set;
 	u16 losig_err;
@@ -381,16 +454,13 @@ struct ecore_mfw_tlv_fcoe {
 	bool code_violation_err_set;
 	u32 flogi_param[4];
 	bool flogi_param_set[4];
-	u8 flogi_tstamp[14];
-	bool flogi_tstamp_set;
+	struct ecore_mfw_tlv_time flogi_tstamp;
 	u32 flogi_acc_param[4];
 	bool flogi_acc_param_set[4];
-	u8 flogi_acc_tstamp[14];
-	bool flogi_acc_tstamp_set;
+	struct ecore_mfw_tlv_time flogi_acc_tstamp;
 	u32 flogi_rjt;
 	bool flogi_rjt_set;
-	u8 flogi_rjt_tstamp[14];
-	bool flogi_rjt_tstamp_set;
+	struct ecore_mfw_tlv_time flogi_rjt_tstamp;
 	u32 fdiscs;
 	bool fdiscs_set;
 	u8 fdisc_acc;
@@ -405,12 +475,10 @@ struct ecore_mfw_tlv_fcoe {
 	bool plogi_rjt_set;
 	u32 plogi_dst_fcid[5];
 	bool plogi_dst_fcid_set[5];
-	u8 plogi_tstamp[5][14];
-	bool plogi_tstamp_set[5];
+	struct ecore_mfw_tlv_time plogi_tstamp[5];
 	u32 plogi_acc_src_fcid[5];
 	bool plogi_acc_src_fcid_set[5];
-	u8 plogi_acc_tstamp[5][14];
-	bool plogi_acc_tstamp_set[5];
+	struct ecore_mfw_tlv_time plogi_acc_tstamp[5];
 	u8 tx_plogos;
 	bool tx_plogos_set;
 	u8 plogo_acc;
@@ -419,8 +487,7 @@ struct ecore_mfw_tlv_fcoe {
 	bool plogo_rjt_set;
 	u32 plogo_src_fcid[5];
 	bool plogo_src_fcid_set[5];
-	u8 plogo_tstamp[5][14];
-	bool plogo_tstamp_set[5];
+	struct ecore_mfw_tlv_time plogo_tstamp[5];
 	u8 rx_logos;
 	bool rx_logos_set;
 	u8 tx_accs;
@@ -437,8 +504,7 @@ struct ecore_mfw_tlv_fcoe {
 	bool rx_abts_rjt_set;
 	u32 abts_dst_fcid[5];
 	bool abts_dst_fcid_set[5];
-	u8 abts_tstamp[5][14];
-	bool abts_tstamp_set[5];
+	struct ecore_mfw_tlv_time abts_tstamp[5];
 	u8 rx_rscn;
 	bool rx_rscn_set;
 	u32 rx_rscn_nport[4];
@@ -487,8 +553,7 @@ struct ecore_mfw_tlv_fcoe {
 	bool scsi_tsk_abort_set;
 	u32 scsi_rx_chk[5];
 	bool scsi_rx_chk_set[5];
-	u8 scsi_chk_tstamp[5][14];
-	bool scsi_chk_tstamp_set[5];
+	struct ecore_mfw_tlv_time scsi_chk_tstamp[5];
 };
 
 struct ecore_mfw_tlv_iscsi {
@@ -499,6 +564,9 @@ struct ecore_mfw_tlv_iscsi {
 	u8 data_digest;
 	bool data_digest_set;
 	u8 auth_method;
+#define ECORE_MFW_TLV_AUTH_METHOD_NONE		(1)
+#define ECORE_MFW_TLV_AUTH_METHOD_CHAP		(2)
+#define ECORE_MFW_TLV_AUTH_METHOD_MUTUAL_CHAP	(3)
 	bool auth_method_set;
 	u16 boot_taget_portal;
 	bool boot_taget_portal_set;
@@ -523,6 +591,7 @@ struct ecore_mfw_tlv_iscsi {
 	u64 tx_bytes;
 	bool tx_bytes_set;
 };
+#endif
 
 union ecore_mfw_tlv_data {
 	struct ecore_mfw_tlv_generic generic;
@@ -531,9 +600,30 @@ union ecore_mfw_tlv_data {
 	struct ecore_mfw_tlv_iscsi iscsi;
 };
 
+#ifndef __EXTRACT__LINUX__
 enum ecore_hw_info_change {
 	ECORE_HW_INFO_CHANGE_OVLAN,
 };
+#endif
+
+#define ECORE_NVM_CFG_OPTION_ALL	(1 << 0)
+#define ECORE_NVM_CFG_OPTION_INIT	(1 << 1)
+#define ECORE_NVM_CFG_OPTION_COMMIT	(1 << 2)
+#define ECORE_NVM_CFG_OPTION_FREE	(1 << 3)
+#define ECORE_NVM_CFG_OPTION_ENTITY_SEL	(1 << 4)
+#define ECORE_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL (1 << 5)
+#define ECORE_NVM_CFG_GET_FLAGS		0xA
+#define ECORE_NVM_CFG_GET_PF_FLAGS	0x1A
+#define ECORE_NVM_CFG_MAX_ATTRS		50
+
+enum ecore_nvm_flash_cmd {
+	ECORE_NVM_FLASH_CMD_FILE_DATA = 0x2,
+	ECORE_NVM_FLASH_CMD_FILE_START = 0x3,
+	ECORE_NVM_FLASH_CMD_NVM_CHANGE = 0x4,
+	ECORE_NVM_FLASH_CMD_NVM_CFG_ID = 0x5,
+	ECORE_NVM_FLASH_CMD_NVM_CFG_RESTORE_DEFAULT = 0x6,
+	ECORE_NVM_FLASH_CMD_NVM_MAX,
+};
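/* A minimal usage sketch, assuming the OPTION_* bits above are what get
 * OR'd into the 'flags' argument of ecore_mcp_nvm_get_cfg() (declared
 * further below): ECORE_NVM_CFG_GET_FLAGS (0xA) is OPTION_INIT |
 * OPTION_FREE, and ECORE_NVM_CFG_GET_PF_FLAGS (0x1A) additionally sets
 * OPTION_ENTITY_SEL. The option_id and buffer size are illustrative.
 */
static enum _ecore_status_t example_nvm_cfg_read(struct ecore_hwfn *p_hwfn,
						 struct ecore_ptt *p_ptt,
						 u16 option_id)
{
	u8 buf[64];
	u32 len = sizeof(buf);

	return ecore_mcp_nvm_get_cfg(p_hwfn, p_ptt, option_id,
				     0 /* entity_id */,
				     ECORE_NVM_CFG_GET_PF_FLAGS, buf, &len);
}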
 
 /**
  * @brief - returns the link params of the hw function
@@ -542,7 +632,8 @@ enum ecore_hw_info_change {
  *
  * @returns pointer to link params
  */
-struct ecore_mcp_link_params *ecore_mcp_get_link_params(struct ecore_hwfn *);
+struct ecore_mcp_link_params *ecore_mcp_get_link_params(struct ecore_hwfn
+							*p_hwfn);
 
 /**
  * @brief - return the link state of the hw function
@@ -551,7 +642,8 @@ struct ecore_mcp_link_params *ecore_mcp_get_link_params(struct ecore_hwfn *);
  *
  * @returns pointer to link state
  */
-struct ecore_mcp_link_state *ecore_mcp_get_link_state(struct ecore_hwfn *);
+struct ecore_mcp_link_state *ecore_mcp_get_link_state(struct ecore_hwfn
+						      *p_hwfn);
 
 /**
  * @brief - return the link capabilities of the hw function
@@ -598,10 +690,11 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
  * @param p_ptt
  * @param p_mbi_ver - A pointer to a variable to be filled with the MBI version.
  *
- * @return int - 0 - operation was successful.
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-int ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
-			  struct ecore_ptt *p_ptt, u32 *p_mbi_ver);
+enum _ecore_status_t ecore_mcp_get_mbi_ver(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt,
+					   u32 *p_mbi_ver);
 
 /**
  * @brief Get media type value of the port.
@@ -850,6 +943,19 @@ enum _ecore_status_t
 ecore_mcp_ov_update_mac(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 *mac);
 
+/**
+ * @brief Send WOL mode to MFW
+ *
+ *  @param p_hwfn
+ *  @param p_ptt
+ *  @param wol - WOL mode
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t
+ecore_mcp_ov_update_wol(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			enum ecore_ov_wol wol);
+
 /**
  * @brief Send eswitch mode to MFW
  *
@@ -876,17 +982,6 @@ enum _ecore_status_t ecore_mcp_set_led(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt,
 				       enum ecore_led_mode mode);
 
-/**
- * @brief Set secure mode
- *
- *  @param p_dev
- *  @param addr - nvm offset
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_set_secure_mode(struct ecore_dev *p_dev,
-						   u32 addr);
-
 /**
  * @brief Write to phy
  *
@@ -915,17 +1010,6 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
 enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
 					 u32 addr, u8 *p_buf, u32 len);
 
-/**
- * @brief Put file begin
- *
- *  @param p_dev
- *  @param addr - nvm offset
- *
- * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
- */
-enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
-						  u32 addr);
-
 /**
  * @brief Delete file
  *
@@ -1001,7 +1085,7 @@ ecore_mcp_get_nvm_image_att(struct ecore_hwfn *p_hwfn,
  * @param p_buffer - allocated buffer into which to fill data
  * @param buffer_len - length of the allocated buffer.
  *
- * @return ECORE_SUCCESS if p_buffer now contains the nvram image.
+ * @return ECORE_SUCCESS iff p_buffer now contains the nvram image.
  */
 enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
 					     enum ecore_nvm_images image_id,
@@ -1020,6 +1104,7 @@ enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
  * @param o_mcp_param - MCP response param
  * @param i_txn_size -  Buffer size
  * @param i_buf - Pointer to the buffer
+ * @param b_can_sleep - Whether sleep is allowed in this execution path.
  *
  * @param return ECORE_SUCCESS upon success.
  */
@@ -1030,7 +1115,8 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 *o_mcp_resp,
 					  u32 *o_mcp_param,
 					  u32 i_txn_size,
-					  u32 *i_buf);
+					  u32 *i_buf,
+					  bool b_can_sleep);
 
 /**
  * @brief - Sends an NVM read command request to the MFW to get
@@ -1045,6 +1131,7 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
  * @param o_mcp_param - MCP response param
  * @param o_txn_size -  Buffer size output
  * @param o_buf - Pointer to the buffer returned by the MFW.
+ * @param b_can_sleep - Whether sleep is allowed in this execution path.
  *
  * @param return ECORE_SUCCESS upon success.
  */
@@ -1055,7 +1142,8 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 					  u32 *o_mcp_resp,
 					  u32 *o_mcp_param,
 					  u32 *o_txn_size,
-					  u32 *o_buf);
+					  u32 *o_buf,
+					  bool b_can_sleep);
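/* A usage sketch for the new b_can_sleep argument: callers running in
 * an atomic context pass false so the wait for the MFW response polls
 * instead of sleeping. DRV_MSG_CODE_NVM_READ_NVRAM (from mcp_public.h)
 * and the buffer size are assumptions for illustration only.
 */
static enum _ecore_status_t example_nvm_read(struct ecore_hwfn *p_hwfn,
					     struct ecore_ptt *p_ptt, u32 addr)
{
	u32 resp = 0, param = 0, txn_size = 0;
	u32 buf[8];

	return ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_READ_NVRAM,
				    addr, &resp, &param, &txn_size, buf,
				    false /* b_can_sleep */);
}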
 
 /**
  * @brief Read from sfp
@@ -1170,10 +1258,9 @@ enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
-						struct ecore_hwfn *p_hwfn,
-						struct ecore_ptt *p_ptt,
-						u32 *num_images);
+enum _ecore_status_t ecore_mcp_bist_nvm_get_num_images(struct ecore_hwfn *p_hwfn,
+						       struct ecore_ptt *p_ptt,
+						       u32 *num_images);
 
 /**
  * @brief Bist nvm test - get image attributes by index
@@ -1185,11 +1272,10 @@ enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
  *
  * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
  */
-enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
-					struct ecore_hwfn *p_hwfn,
-					struct ecore_ptt *p_ptt,
-					struct bist_nvm_image_att *p_image_att,
-					u32 image_index);
+enum _ecore_status_t ecore_mcp_bist_nvm_get_image_att(struct ecore_hwfn *p_hwfn,
+						      struct ecore_ptt *p_ptt,
+						      struct bist_nvm_image_att *p_image_att,
+						      u32 image_index);
 
 /**
  * @brief ecore_mcp_get_temperature_info - get the status of the temperature
@@ -1278,6 +1364,56 @@ enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
 						struct ecore_ptt *p_ptt);
 
+/**
+ * @brief - Get mdump2 offset and size.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param buff_byte_size - pointer to get mdump2 buffer size in bytes
+ * @param buff_byte_addr - pointer to get mdump2 buffer address.
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mdump2_req_offsize(struct ecore_hwfn *p_hwfn,
+					      struct ecore_ptt *p_ptt,
+					      u32 *buff_byte_size,
+					      u32 *buff_byte_addr);
+
+/**
+ * @brief - Request to free mdump2 buffer.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mdump2_req_free(struct ecore_hwfn *p_hwfn,
+					   struct ecore_ptt *p_ptt);
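/* The call sequence implied by the two prototypes above, as a sketch:
 * query the mdump2 buffer location, consume it, then ask the MFW to
 * release it. The consuming read itself is elided.
 */
static void example_mdump2_drain(struct ecore_hwfn *p_hwfn,
				 struct ecore_ptt *p_ptt)
{
	u32 size = 0, addr = 0;

	if (ecore_mdump2_req_offsize(p_hwfn, p_ptt, &size, &addr) !=
	    ECORE_SUCCESS)
		return;

	/* ... copy 'size' bytes starting at 'addr' ... */

	(void)ecore_mdump2_req_free(p_hwfn, p_ptt);
}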
+
+/**
+ * @brief - Gets the LLDP MAC address.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param lldp_mac_addr - a buffer to be filled with the read LLDP MAC address.
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_get_lldp_mac(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u8 lldp_mac_addr[ECORE_ETH_ALEN]);
+
+/**
+ * @brief - Sets the LLDP MAC address.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param lldp_mac_addr - a buffer with the LLDP MAC address to be written.
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_set_lldp_mac(struct ecore_hwfn *p_hwfn,
+					    struct ecore_ptt *p_ptt,
+					    u8 lldp_mac_addr[ECORE_ETH_ALEN]);
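/* A round-trip sketch for the two LLDP MAC accessors above; flipping
 * the locally administered bit is only an example mutation.
 */
static enum _ecore_status_t example_lldp_mac_update(struct ecore_hwfn *p_hwfn,
						    struct ecore_ptt *p_ptt)
{
	u8 mac[ECORE_ETH_ALEN];
	enum _ecore_status_t rc;

	rc = ecore_mcp_get_lldp_mac(p_hwfn, p_ptt, mac);
	if (rc != ECORE_SUCCESS)
		return rc;

	mac[0] |= 0x02; /* locally administered address bit */

	return ecore_mcp_set_lldp_mac(p_hwfn, p_ptt, mac);
}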
+
 /**
  * @brief - Processes the TLV request from MFW i.e., get the required TLV info
  *          from the ecore client and send it to the MFW.
@@ -1290,13 +1426,192 @@ enum _ecore_status_t ecore_mcp_mdump_clr_retain(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt);
 
+/**
+ * @brief - Update fcoe vlan id value to the MFW.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vlan - fcoe vlan
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t
+ecore_mcp_update_fcoe_cvid(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			   u16 vlan);
+
+/**
+ * @brief - Update fabric name (wwn) value to the MFW.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param wwn - world wide name
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t
+ecore_mcp_update_fcoe_fabric_name(struct ecore_hwfn *p_hwfn,
+				  struct ecore_ptt *p_ptt, u8 *wwn);
 
 /**
  * @brief - Return whether management firmware support smart AN
  *
  * @param p_hwfn
  *
- * @return bool - true iff feature is supported.
+ * @return bool - true if feature is supported.
  */
 bool ecore_mcp_is_smart_an_supported(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief - Return whether management firmware supports setting of
+ *          PCI relaxed ordering.
+ *
+ * @param p_hwfn
+ *
+ * @return bool - true if feature is supported.
+ */
+bool ecore_mcp_rlx_odr_supported(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief - Triggers a HW dump procedure.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_hw_dump_trigger(struct ecore_hwfn *p_hwfn,
+					     struct ecore_ptt *p_ptt);
+
+enum _ecore_status_t
+ecore_mcp_nvm_get_cfg(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      u16 option_id, u8 entity_id, u16 flags, u8 *p_buf,
+		      u32 *p_len);
+
+enum _ecore_status_t
+ecore_mcp_nvm_set_cfg(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		      u16 option_id, u8 entity_id, u16 flags, u8 *p_buf,
+		      u32 len);
+
+enum _ecore_status_t
+ecore_mcp_is_tx_flt_attn_enabled(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt,
+				 u8 *enabled);
+
+enum _ecore_status_t
+ecore_mcp_is_rx_los_attn_enabled(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt,
+				 u8 *enabled);
+
+enum _ecore_status_t
+ecore_mcp_enable_tx_flt_attn(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     u8 enable);
+
+enum _ecore_status_t
+ecore_mcp_enable_rx_los_attn(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     u8 enable);
+
+/**
+ * @brief - Gets the permanent VF MAC address of the given relative VF ID.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param rel_vf_id
+ * @param mac_addr - a buffer to be filled with the read VF MAC address.
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_get_perm_vf_mac(struct ecore_hwfn *p_hwfn,
+					       struct ecore_ptt *p_ptt,
+					       u32 rel_vf_id,
+					       u8 mac_addr[ECORE_ETH_ALEN]);
+
+/**
+ * @brief Send raw debug data to the MFW
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_buf - raw debug data buffer
+ * @param size - buffer size
+ */
+enum _ecore_status_t
+ecore_mcp_send_raw_debug_data(struct ecore_hwfn *p_hwfn,
+			      struct ecore_ptt *p_ptt,
+			      u8 *p_buf, u32 size);
+
+/**
+ * @brief Configure min/max bandwidths.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param bw_min
+ * @param bw_max
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_mcp_set_bandwidth(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			u8 bw_min, u8 bw_max);
+
+/**
+ * @brief - Return whether management firmware supports ESL or not.
+ *
+ * @param p_hwfn
+ *
+ * @return bool - true if feature is supported.
+ */
+bool ecore_mcp_is_esl_supported(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief Get enhanced system lockdown status
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param active - ESL active status
+ */
+enum _ecore_status_t
+ecore_mcp_get_esl_status(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			 bool *active);
+
+/**
+ * @brief - Return whether management firmware supports Extended speeds or not.
+ *
+ * @param p_hwfn
+ *
+ * @return bool - true if feature is supported.
+ */
+bool ecore_mcp_is_ext_speed_supported(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief - Instruct mfw to collect idlechk and fw asserts.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @param return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t
+ecore_mcp_gen_mdump_idlechk(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+
+/**
+ * @brief - Return whether management firmware supports restoring nvm cfg to
+ * default values or not.
+ *
+ * @param p_hwfn
+ *
+ * @return bool - true if feature is supported.
+ */
+bool ecore_mcp_is_restore_default_cfg_supported(struct ecore_hwfn *p_hwfn);
+
+/**
+ * @brief Send mdump command
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_mdump_cmd_params
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was
+ * successful.
+ */
+enum _ecore_status_t
+ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		    struct ecore_mdump_cmd_params *p_mdump_cmd_params);
+
 #endif
diff --git a/drivers/net/qede/base/ecore_mng_tlv.c b/drivers/net/qede/base/ecore_mng_tlv.c
index f7666472d..35f866223 100644
--- a/drivers/net/qede/base/ecore_mng_tlv.c
+++ b/drivers/net/qede/base/ecore_mng_tlv.c
@@ -1,20 +1,32 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "ecore_status.h"
 #include "ecore_mcp.h"
 #include "ecore_hw.h"
+#include "ecore_hsi_roce.h"
+#include "ecore_hsi_iwarp.h"
+#include "ecore_rdma.h"
 #include "reg_addr.h"
 
 #define TLV_TYPE(p)	(p[0])
 #define TLV_LENGTH(p)	(p[1])
 #define TLV_FLAGS(p)	(p[3])
 
+#define ECORE_TLV_DATA_MAX (14)
+struct ecore_tlv_parsed_buf {
+	/* To be filled with the address to set in Value field */
+	u8 *p_val;
+
+	/* To be used internally in case the value has to be modified */
+	u8 data[ECORE_TLV_DATA_MAX];
+};
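/* The contract all getters below follow, sketched: on success return
 * the value length and point p_val either directly at the driver-owned
 * field (zero-copy) or at the scratch 'data' area when the value has
 * to be re-encoded first; on "not set" return -1. Schematically:
 */
static int example_tlv_getter(u64 *p_field, bool b_field_set,
			      struct ecore_tlv_parsed_buf *p_buf)
{
	if (!b_field_set)
		return -1;

	p_buf->p_val = (u8 *)p_field;	/* zero-copy path */
	return sizeof(*p_field);
}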
+
 static enum _ecore_status_t
 ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group)
 {
@@ -29,6 +41,14 @@ ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group)
 	case DRV_TLV_RX_BYTES_RECEIVED:
 	case DRV_TLV_TX_FRAMES_SENT:
 	case DRV_TLV_TX_BYTES_SENT:
+	case DRV_TLV_NPIV_ENABLED:
+	case DRV_TLV_PCIE_BUS_RX_UTILIZATION:
+	case DRV_TLV_PCIE_BUS_TX_UTILIZATION:
+	case DRV_TLV_DEVICE_CPU_CORES_UTILIZATION:
+	case DRV_TLV_LAST_VALID_DCC_TLV_RECEIVED:
+	case DRV_TLV_NCSI_RX_BYTES_RECEIVED:
+	case DRV_TLV_NCSI_TX_BYTES_SENT:
+	case DRV_TLV_RDMA_DRV_VERSION:
 		*tlv_group |= ECORE_MFW_TLV_GENERIC;
 		break;
 	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
@@ -224,71 +244,70 @@ ecore_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group)
 }
 
 static int
-ecore_mfw_get_gen_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
+ecore_mfw_get_gen_tlv_value(struct ecore_hwfn *p_hwfn,
+			    struct ecore_drv_tlv_hdr *p_tlv,
 			    struct ecore_mfw_tlv_generic *p_drv_buf,
-			    u8 **p_tlv_buf)
+			    struct ecore_tlv_parsed_buf *p_buf)
 {
 	switch (p_tlv->tlv_type) {
 	case DRV_TLV_FEATURE_FLAGS:
-		if (p_drv_buf->feat_flags_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->feat_flags;
-			return sizeof(p_drv_buf->feat_flags);
+		if (p_drv_buf->flags.b_set) {
+			OSAL_MEM_ZERO(p_buf->data,
+				      sizeof(u8) * ECORE_TLV_DATA_MAX);
+			p_buf->data[0] = p_drv_buf->flags.ipv4_csum_offload ?
+					 1 : 0;
+			p_buf->data[0] |= (p_drv_buf->flags.lso_supported ?
+					   1 : 0) << 1;
+			p_buf->p_val = p_buf->data;
+			return 2;
 		}
 		break;
+
 	case DRV_TLV_LOCAL_ADMIN_ADDR:
-		if (p_drv_buf->local_mac_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->local_mac;
-			return sizeof(p_drv_buf->local_mac);
-		}
-		break;
 	case DRV_TLV_ADDITIONAL_MAC_ADDR_1:
-		if (p_drv_buf->additional_mac1_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac1;
-			return sizeof(p_drv_buf->additional_mac1);
-		}
-		break;
 	case DRV_TLV_ADDITIONAL_MAC_ADDR_2:
-		if (p_drv_buf->additional_mac2_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->additional_mac2;
-			return sizeof(p_drv_buf->additional_mac2);
-		}
-		break;
-	case DRV_TLV_OS_DRIVER_STATES:
-		if (p_drv_buf->drv_state_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->drv_state;
-			return sizeof(p_drv_buf->drv_state);
-		}
-		break;
-	case DRV_TLV_PXE_BOOT_PROGRESS:
-		if (p_drv_buf->pxe_progress_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->pxe_progress;
-			return sizeof(p_drv_buf->pxe_progress);
+	{
+		int idx = p_tlv->tlv_type - DRV_TLV_LOCAL_ADMIN_ADDR;
+
+		if (p_drv_buf->mac_set[idx]) {
+			p_buf->p_val = p_drv_buf->mac[idx];
+			return 6;
 		}
 		break;
+	}
+
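/* The idx arithmetic above assumes the three MAC TLV types are declared
 * consecutively, i.e. DRV_TLV_LOCAL_ADMIN_ADDR maps to mac[0] and
 * DRV_TLV_ADDITIONAL_MAC_ADDR_1/2 to mac[1]/mac[2], matching the
 * ECORE_MFW_TLV_MAC_COUNT entries in struct ecore_mfw_tlv_generic.
 */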
 	case DRV_TLV_RX_FRAMES_RECEIVED:
 		if (p_drv_buf->rx_frames_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_frames;
 			return sizeof(p_drv_buf->rx_frames);
 		}
 		break;
 	case DRV_TLV_RX_BYTES_RECEIVED:
 		if (p_drv_buf->rx_bytes_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_bytes;
 			return sizeof(p_drv_buf->rx_bytes);
 		}
 		break;
 	case DRV_TLV_TX_FRAMES_SENT:
 		if (p_drv_buf->tx_frames_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_frames;
 			return sizeof(p_drv_buf->tx_frames);
 		}
 		break;
 	case DRV_TLV_TX_BYTES_SENT:
 		if (p_drv_buf->tx_bytes_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_bytes;
 			return sizeof(p_drv_buf->tx_bytes);
 		}
 		break;
+	case DRV_TLV_RDMA_DRV_VERSION:
+		if (p_hwfn->p_rdma_info && p_hwfn->p_rdma_info->drv_ver) {
+			OSAL_MEM_ZERO(p_buf->data, sizeof(u8) * ECORE_TLV_DATA_MAX);
+			OSAL_MEMCPY(p_buf->data, &p_hwfn->p_rdma_info->drv_ver,
+				    sizeof(p_hwfn->p_rdma_info->drv_ver));
+			p_buf->p_val = p_buf->data;
+			return sizeof(p_hwfn->p_rdma_info->drv_ver);
+		}
+		break;
 	default:
 		break;
 	}
@@ -299,96 +318,96 @@ ecore_mfw_get_gen_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
 static int
 ecore_mfw_get_eth_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
 			    struct ecore_mfw_tlv_eth *p_drv_buf,
-			    u8 **p_tlv_buf)
+			    struct ecore_tlv_parsed_buf *p_buf)
 {
 	switch (p_tlv->tlv_type) {
 	case DRV_TLV_LSO_MAX_OFFLOAD_SIZE:
 		if (p_drv_buf->lso_maxoff_size_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->lso_maxoff_size;
+			p_buf->p_val = (u8 *)&p_drv_buf->lso_maxoff_size;
 			return sizeof(p_drv_buf->lso_maxoff_size);
 		}
 		break;
 	case DRV_TLV_LSO_MIN_SEGMENT_COUNT:
 		if (p_drv_buf->lso_minseg_size_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->lso_minseg_size;
+			p_buf->p_val = (u8 *)&p_drv_buf->lso_minseg_size;
 			return sizeof(p_drv_buf->lso_minseg_size);
 		}
 		break;
 	case DRV_TLV_PROMISCUOUS_MODE:
 		if (p_drv_buf->prom_mode_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->prom_mode;
+			p_buf->p_val = (u8 *)&p_drv_buf->prom_mode;
 			return sizeof(p_drv_buf->prom_mode);
 		}
 		break;
 	case DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE:
 		if (p_drv_buf->tx_descr_size_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_size;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_descr_size;
 			return sizeof(p_drv_buf->tx_descr_size);
 		}
 		break;
 	case DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE:
 		if (p_drv_buf->rx_descr_size_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_size;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_descr_size;
 			return sizeof(p_drv_buf->rx_descr_size);
 		}
 		break;
 	case DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG:
 		if (p_drv_buf->netq_count_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->netq_count;
+			p_buf->p_val = (u8 *)&p_drv_buf->netq_count;
 			return sizeof(p_drv_buf->netq_count);
 		}
 		break;
 	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4:
 		if (p_drv_buf->tcp4_offloads_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tcp4_offloads;
+			p_buf->p_val = (u8 *)&p_drv_buf->tcp4_offloads;
 			return sizeof(p_drv_buf->tcp4_offloads);
 		}
 		break;
 	case DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6:
 		if (p_drv_buf->tcp6_offloads_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tcp6_offloads;
+			p_buf->p_val = (u8 *)&p_drv_buf->tcp6_offloads;
 			return sizeof(p_drv_buf->tcp6_offloads);
 		}
 		break;
 	case DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
 		if (p_drv_buf->tx_descr_qdepth_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_descr_qdepth;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_descr_qdepth;
 			return sizeof(p_drv_buf->tx_descr_qdepth);
 		}
 		break;
 	case DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
 		if (p_drv_buf->rx_descr_qdepth_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_descr_qdepth;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_descr_qdepth;
 			return sizeof(p_drv_buf->rx_descr_qdepth);
 		}
 		break;
 	case DRV_TLV_IOV_OFFLOAD:
 		if (p_drv_buf->iov_offload_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->iov_offload;
+			p_buf->p_val = (u8 *)&p_drv_buf->iov_offload;
 			return sizeof(p_drv_buf->iov_offload);
 		}
 		break;
 	case DRV_TLV_TX_QUEUES_EMPTY:
 		if (p_drv_buf->txqs_empty_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->txqs_empty;
+			p_buf->p_val = (u8 *)&p_drv_buf->txqs_empty;
 			return sizeof(p_drv_buf->txqs_empty);
 		}
 		break;
 	case DRV_TLV_RX_QUEUES_EMPTY:
 		if (p_drv_buf->rxqs_empty_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rxqs_empty;
+			p_buf->p_val = (u8 *)&p_drv_buf->rxqs_empty;
 			return sizeof(p_drv_buf->rxqs_empty);
 		}
 		break;
 	case DRV_TLV_TX_QUEUES_FULL:
 		if (p_drv_buf->num_txqs_full_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->num_txqs_full;
+			p_buf->p_val = (u8 *)&p_drv_buf->num_txqs_full;
 			return sizeof(p_drv_buf->num_txqs_full);
 		}
 		break;
 	case DRV_TLV_RX_QUEUES_FULL:
 		if (p_drv_buf->num_rxqs_full_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->num_rxqs_full;
+			p_buf->p_val = (u8 *)&p_drv_buf->num_rxqs_full;
 			return sizeof(p_drv_buf->num_rxqs_full);
 		}
 		break;
@@ -399,906 +418,704 @@ ecore_mfw_get_eth_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
 	return -1;
 }
 
+static int
+ecore_mfw_get_tlv_time_value(struct ecore_mfw_tlv_time *p_time,
+			     struct ecore_tlv_parsed_buf *p_buf)
+{
+	if (!p_time->b_set)
+		return -1;
+
+	/* Validate numbers */
+	if (p_time->month > 12)
+		p_time->month = 0;
+	if (p_time->day > 31)
+		p_time->day = 0;
+	if (p_time->hour > 23)
+		p_time->hour = 0;
+	if (p_time->min > 59)
+		p_time->min = 0;
+	if (p_time->msec > 999)
+		p_time->msec = 0;
+	if (p_time->usec > 999)
+		p_time->usec = 0;
+
+	OSAL_MEM_ZERO(p_buf->data, sizeof(u8) * ECORE_TLV_DATA_MAX);
+	OSAL_SNPRINTF((char *)p_buf->data, 14, "%d%d%d%d%d%d",
+		      p_time->month, p_time->day,
+		      p_time->hour, p_time->min,
+		      p_time->msec, p_time->usec);
+
+	p_buf->p_val = p_buf->data;
+	return 14;
+}
+
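/* A worked example of the encoding above: since the "%d" fields are
 * neither zero-padded nor fixed-width, month=7 day=4 hour=23 min=59
 * msec=123 usec=456 is emitted as the string "742359123456"; the helper
 * always reports the full 14-byte TLV value length, not the string
 * length.
 */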
 static int
 ecore_mfw_get_fcoe_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
 			     struct ecore_mfw_tlv_fcoe *p_drv_buf,
-			     u8 **p_tlv_buf)
+			     struct ecore_tlv_parsed_buf *p_buf)
 {
 	switch (p_tlv->tlv_type) {
 	case DRV_TLV_SCSI_TO:
 		if (p_drv_buf->scsi_timeout_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_timeout;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_timeout;
 			return sizeof(p_drv_buf->scsi_timeout);
 		}
 		break;
 	case DRV_TLV_R_T_TOV:
 		if (p_drv_buf->rt_tov_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rt_tov;
+			p_buf->p_val = (u8 *)&p_drv_buf->rt_tov;
 			return sizeof(p_drv_buf->rt_tov);
 		}
 		break;
 	case DRV_TLV_R_A_TOV:
 		if (p_drv_buf->ra_tov_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->ra_tov;
+			p_buf->p_val = (u8 *)&p_drv_buf->ra_tov;
 			return sizeof(p_drv_buf->ra_tov);
 		}
 		break;
 	case DRV_TLV_E_D_TOV:
 		if (p_drv_buf->ed_tov_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->ed_tov;
+			p_buf->p_val = (u8 *)&p_drv_buf->ed_tov;
 			return sizeof(p_drv_buf->ed_tov);
 		}
 		break;
 	case DRV_TLV_CR_TOV:
 		if (p_drv_buf->cr_tov_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->cr_tov;
+			p_buf->p_val = (u8 *)&p_drv_buf->cr_tov;
 			return sizeof(p_drv_buf->cr_tov);
 		}
 		break;
 	case DRV_TLV_BOOT_TYPE:
 		if (p_drv_buf->boot_type_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->boot_type;
+			p_buf->p_val = (u8 *)&p_drv_buf->boot_type;
 			return sizeof(p_drv_buf->boot_type);
 		}
 		break;
 	case DRV_TLV_NPIV_STATE:
 		if (p_drv_buf->npiv_state_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->npiv_state;
+			p_buf->p_val = (u8 *)&p_drv_buf->npiv_state;
 			return sizeof(p_drv_buf->npiv_state);
 		}
 		break;
 	case DRV_TLV_NUM_OF_NPIV_IDS:
 		if (p_drv_buf->num_npiv_ids_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->num_npiv_ids;
+			p_buf->p_val = (u8 *)&p_drv_buf->num_npiv_ids;
 			return sizeof(p_drv_buf->num_npiv_ids);
 		}
 		break;
 	case DRV_TLV_SWITCH_NAME:
 		if (p_drv_buf->switch_name_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->switch_name;
+			p_buf->p_val = (u8 *)&p_drv_buf->switch_name;
 			return sizeof(p_drv_buf->switch_name);
 		}
 		break;
 	case DRV_TLV_SWITCH_PORT_NUM:
 		if (p_drv_buf->switch_portnum_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portnum;
+			p_buf->p_val = (u8 *)&p_drv_buf->switch_portnum;
 			return sizeof(p_drv_buf->switch_portnum);
 		}
 		break;
 	case DRV_TLV_SWITCH_PORT_ID:
 		if (p_drv_buf->switch_portid_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->switch_portid;
+			p_buf->p_val = (u8 *)&p_drv_buf->switch_portid;
 			return sizeof(p_drv_buf->switch_portid);
 		}
 		break;
 	case DRV_TLV_VENDOR_NAME:
 		if (p_drv_buf->vendor_name_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->vendor_name;
+			p_buf->p_val = (u8 *)&p_drv_buf->vendor_name;
 			return sizeof(p_drv_buf->vendor_name);
 		}
 		break;
 	case DRV_TLV_SWITCH_MODEL:
 		if (p_drv_buf->switch_model_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->switch_model;
+			p_buf->p_val = (u8 *)&p_drv_buf->switch_model;
 			return sizeof(p_drv_buf->switch_model);
 		}
 		break;
 	case DRV_TLV_SWITCH_FW_VER:
 		if (p_drv_buf->switch_fw_version_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->switch_fw_version;
+			p_buf->p_val = (u8 *)&p_drv_buf->switch_fw_version;
 			return sizeof(p_drv_buf->switch_fw_version);
 		}
 		break;
 	case DRV_TLV_QOS_PRIORITY_PER_802_1P:
 		if (p_drv_buf->qos_pri_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->qos_pri;
+			p_buf->p_val = (u8 *)&p_drv_buf->qos_pri;
 			return sizeof(p_drv_buf->qos_pri);
 		}
 		break;
 	case DRV_TLV_PORT_ALIAS:
 		if (p_drv_buf->port_alias_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->port_alias;
+			p_buf->p_val = (u8 *)&p_drv_buf->port_alias;
 			return sizeof(p_drv_buf->port_alias);
 		}
 		break;
 	case DRV_TLV_PORT_STATE:
 		if (p_drv_buf->port_state_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->port_state;
+			p_buf->p_val = (u8 *)&p_drv_buf->port_state;
 			return sizeof(p_drv_buf->port_state);
 		}
 		break;
 	case DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE:
 		if (p_drv_buf->fip_tx_descr_size_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fip_tx_descr_size;
+			p_buf->p_val = (u8 *)&p_drv_buf->fip_tx_descr_size;
 			return sizeof(p_drv_buf->fip_tx_descr_size);
 		}
 		break;
 	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE:
 		if (p_drv_buf->fip_rx_descr_size_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fip_rx_descr_size;
+			p_buf->p_val = (u8 *)&p_drv_buf->fip_rx_descr_size;
 			return sizeof(p_drv_buf->fip_rx_descr_size);
 		}
 		break;
 	case DRV_TLV_LINK_FAILURE_COUNT:
 		if (p_drv_buf->link_failures_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->link_failures;
+			p_buf->p_val = (u8 *)&p_drv_buf->link_failures;
 			return sizeof(p_drv_buf->link_failures);
 		}
 		break;
 	case DRV_TLV_FCOE_BOOT_PROGRESS:
 		if (p_drv_buf->fcoe_boot_progress_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_boot_progress;
+			p_buf->p_val = (u8 *)&p_drv_buf->fcoe_boot_progress;
 			return sizeof(p_drv_buf->fcoe_boot_progress);
 		}
 		break;
 	case DRV_TLV_RX_BROADCAST_PACKETS:
 		if (p_drv_buf->rx_bcast_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bcast;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_bcast;
 			return sizeof(p_drv_buf->rx_bcast);
 		}
 		break;
 	case DRV_TLV_TX_BROADCAST_PACKETS:
 		if (p_drv_buf->tx_bcast_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bcast;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_bcast;
 			return sizeof(p_drv_buf->tx_bcast);
 		}
 		break;
 	case DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
 		if (p_drv_buf->fcoe_txq_depth_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_txq_depth;
+			p_buf->p_val = (u8 *)&p_drv_buf->fcoe_txq_depth;
 			return sizeof(p_drv_buf->fcoe_txq_depth);
 		}
 		break;
 	case DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
 		if (p_drv_buf->fcoe_rxq_depth_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rxq_depth;
+			p_buf->p_val = (u8 *)&p_drv_buf->fcoe_rxq_depth;
 			return sizeof(p_drv_buf->fcoe_rxq_depth);
 		}
 		break;
 	case DRV_TLV_FCOE_RX_FRAMES_RECEIVED:
 		if (p_drv_buf->fcoe_rx_frames_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_frames;
+			p_buf->p_val = (u8 *)&p_drv_buf->fcoe_rx_frames;
 			return sizeof(p_drv_buf->fcoe_rx_frames);
 		}
 		break;
 	case DRV_TLV_FCOE_RX_BYTES_RECEIVED:
 		if (p_drv_buf->fcoe_rx_bytes_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_rx_bytes;
+			p_buf->p_val = (u8 *)&p_drv_buf->fcoe_rx_bytes;
 			return sizeof(p_drv_buf->fcoe_rx_bytes);
 		}
 		break;
 	case DRV_TLV_FCOE_TX_FRAMES_SENT:
 		if (p_drv_buf->fcoe_tx_frames_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_frames;
+			p_buf->p_val = (u8 *)&p_drv_buf->fcoe_tx_frames;
 			return sizeof(p_drv_buf->fcoe_tx_frames);
 		}
 		break;
 	case DRV_TLV_FCOE_TX_BYTES_SENT:
 		if (p_drv_buf->fcoe_tx_bytes_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fcoe_tx_bytes;
+			p_buf->p_val = (u8 *)&p_drv_buf->fcoe_tx_bytes;
 			return sizeof(p_drv_buf->fcoe_tx_bytes);
 		}
 		break;
 	case DRV_TLV_CRC_ERROR_COUNT:
 		if (p_drv_buf->crc_count_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_count;
+			p_buf->p_val = (u8 *)&p_drv_buf->crc_count;
 			return sizeof(p_drv_buf->crc_count);
 		}
 		break;
 	case DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->crc_err_src_fcid_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[0];
-			return sizeof(p_drv_buf->crc_err_src_fcid[0]);
-		}
-		break;
 	case DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->crc_err_src_fcid_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[1];
-			return sizeof(p_drv_buf->crc_err_src_fcid[1]);
-		}
-		break;
 	case DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->crc_err_src_fcid_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[2];
-			return sizeof(p_drv_buf->crc_err_src_fcid[2]);
-		}
-		break;
 	case DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->crc_err_src_fcid_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[3];
-			return sizeof(p_drv_buf->crc_err_src_fcid[3]);
-		}
-		break;
 	case DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->crc_err_src_fcid_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_src_fcid[4];
-			return sizeof(p_drv_buf->crc_err_src_fcid[4]);
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID) / 2;
+
+		if (p_drv_buf->crc_err_src_fcid_set[idx]) {
+			p_buf->p_val = (u8 *)&p_drv_buf->crc_err_src_fcid[idx];
+			return sizeof(p_drv_buf->crc_err_src_fcid[idx]);
 		}
 		break;
+	}
+
 	case DRV_TLV_CRC_ERROR_1_TIMESTAMP:
-		if (p_drv_buf->crc_err_tstamp_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[0];
-			return sizeof(p_drv_buf->crc_err_tstamp[0]);
-		}
-		break;
 	case DRV_TLV_CRC_ERROR_2_TIMESTAMP:
-		if (p_drv_buf->crc_err_tstamp_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[1];
-			return sizeof(p_drv_buf->crc_err_tstamp[1]);
-		}
-		break;
 	case DRV_TLV_CRC_ERROR_3_TIMESTAMP:
-		if (p_drv_buf->crc_err_tstamp_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[2];
-			return sizeof(p_drv_buf->crc_err_tstamp[2]);
-		}
-		break;
 	case DRV_TLV_CRC_ERROR_4_TIMESTAMP:
-		if (p_drv_buf->crc_err_tstamp_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[3];
-			return sizeof(p_drv_buf->crc_err_tstamp[3]);
-		}
-		break;
 	case DRV_TLV_CRC_ERROR_5_TIMESTAMP:
-		if (p_drv_buf->crc_err_tstamp_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->crc_err_tstamp[4];
-			return sizeof(p_drv_buf->crc_err_tstamp[4]);
-		}
-		break;
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_CRC_ERROR_1_TIMESTAMP) / 2;
+
+		return ecore_mfw_get_tlv_time_value(&p_drv_buf->crc_err[idx],
+						    p_buf);
+	}
+
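/* The '/ 2' above assumes the FC-ID and timestamp TLV types interleave:
 *   ..._1_RECEIVED_SOURCE_FC_ID, ..._1_TIMESTAMP,
 *   ..._2_RECEIVED_SOURCE_FC_ID, ..._2_TIMESTAMP, ...
 * so two consecutive instances of the same kind differ by 2. The same
 * layout assumption underlies the PLOGI, LOGO, ABTS and SCSI-check
 * consolidations later in this function.
 */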
 	case DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT:
 		if (p_drv_buf->losync_err_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->losync_err;
+			p_buf->p_val = (u8 *)&p_drv_buf->losync_err;
 			return sizeof(p_drv_buf->losync_err);
 		}
 		break;
 	case DRV_TLV_LOSS_OF_SIGNAL_ERRORS:
 		if (p_drv_buf->losig_err_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->losig_err;
+			p_buf->p_val = (u8 *)&p_drv_buf->losig_err;
 			return sizeof(p_drv_buf->losig_err);
 		}
 		break;
 	case DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT:
 		if (p_drv_buf->primtive_err_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->primtive_err;
+			p_buf->p_val = (u8 *)&p_drv_buf->primtive_err;
 			return sizeof(p_drv_buf->primtive_err);
 		}
 		break;
 	case DRV_TLV_DISPARITY_ERROR_COUNT:
 		if (p_drv_buf->disparity_err_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->disparity_err;
+			p_buf->p_val = (u8 *)&p_drv_buf->disparity_err;
 			return sizeof(p_drv_buf->disparity_err);
 		}
 		break;
 	case DRV_TLV_CODE_VIOLATION_ERROR_COUNT:
 		if (p_drv_buf->code_violation_err_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->code_violation_err;
+			p_buf->p_val = (u8 *)&p_drv_buf->code_violation_err;
 			return sizeof(p_drv_buf->code_violation_err);
 		}
 		break;
 	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1:
-		if (p_drv_buf->flogi_param_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[0];
-			return sizeof(p_drv_buf->flogi_param[0]);
-		}
-		break;
 	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2:
-		if (p_drv_buf->flogi_param_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[1];
-			return sizeof(p_drv_buf->flogi_param[1]);
-		}
-		break;
 	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3:
-		if (p_drv_buf->flogi_param_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[2];
-			return sizeof(p_drv_buf->flogi_param[2]);
-		}
-		break;
 	case DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4:
-		if (p_drv_buf->flogi_param_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_param[3];
-			return sizeof(p_drv_buf->flogi_param[3]);
+	{
+		u8 idx = p_tlv->tlv_type -
+			 DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1;
+		if (p_drv_buf->flogi_param_set[idx]) {
+			p_buf->p_val = (u8 *)&p_drv_buf->flogi_param[idx];
+			return sizeof(p_drv_buf->flogi_param[idx]);
 		}
 		break;
+	}
 	case DRV_TLV_LAST_FLOGI_TIMESTAMP:
-		if (p_drv_buf->flogi_tstamp_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_tstamp;
-			return sizeof(p_drv_buf->flogi_tstamp);
-		}
-		break;
+		return ecore_mfw_get_tlv_time_value(&p_drv_buf->flogi_tstamp,
+						    p_buf);
 	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1:
-		if (p_drv_buf->flogi_acc_param_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[0];
-			return sizeof(p_drv_buf->flogi_acc_param[0]);
-		}
-		break;
 	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2:
-		if (p_drv_buf->flogi_acc_param_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[1];
-			return sizeof(p_drv_buf->flogi_acc_param[1]);
-		}
-		break;
 	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3:
-		if (p_drv_buf->flogi_acc_param_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[2];
-			return sizeof(p_drv_buf->flogi_acc_param[2]);
-		}
-		break;
 	case DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4:
-		if (p_drv_buf->flogi_acc_param_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_param[3];
-			return sizeof(p_drv_buf->flogi_acc_param[3]);
+	{
+		u8 idx = p_tlv->tlv_type -
+			 DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1;
+
+		if (p_drv_buf->flogi_acc_param_set[idx]) {
+			p_buf->p_val = (u8 *)&p_drv_buf->flogi_acc_param[idx];
+			return sizeof(p_drv_buf->flogi_acc_param[idx]);
 		}
 		break;
+	}
 	case DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP:
-		if (p_drv_buf->flogi_acc_tstamp_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_acc_tstamp;
-			return sizeof(p_drv_buf->flogi_acc_tstamp);
-		}
-		break;
+		return ecore_mfw_get_tlv_time_value(&p_drv_buf->flogi_acc_tstamp,
+						    p_buf);
 	case DRV_TLV_LAST_FLOGI_RJT:
 		if (p_drv_buf->flogi_rjt_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt;
+			p_buf->p_val = (u8 *)&p_drv_buf->flogi_rjt;
 			return sizeof(p_drv_buf->flogi_rjt);
 		}
 		break;
 	case DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP:
-		if (p_drv_buf->flogi_rjt_tstamp_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->flogi_rjt_tstamp;
-			return sizeof(p_drv_buf->flogi_rjt_tstamp);
-		}
-		break;
+		return ecore_mfw_get_tlv_time_value(&p_drv_buf->flogi_rjt_tstamp,
+						    p_buf);
 	case DRV_TLV_FDISCS_SENT_COUNT:
 		if (p_drv_buf->fdiscs_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fdiscs;
+			p_buf->p_val = (u8 *)&p_drv_buf->fdiscs;
 			return sizeof(p_drv_buf->fdiscs);
 		}
 		break;
 	case DRV_TLV_FDISC_ACCS_RECEIVED:
 		if (p_drv_buf->fdisc_acc_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_acc;
+			p_buf->p_val = (u8 *)&p_drv_buf->fdisc_acc;
 			return sizeof(p_drv_buf->fdisc_acc);
 		}
 		break;
 	case DRV_TLV_FDISC_RJTS_RECEIVED:
 		if (p_drv_buf->fdisc_rjt_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->fdisc_rjt;
+			p_buf->p_val = (u8 *)&p_drv_buf->fdisc_rjt;
 			return sizeof(p_drv_buf->fdisc_rjt);
 		}
 		break;
 	case DRV_TLV_PLOGI_SENT_COUNT:
 		if (p_drv_buf->plogi_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi;
+			p_buf->p_val = (u8 *)&p_drv_buf->plogi;
 			return sizeof(p_drv_buf->plogi);
 		}
 		break;
 	case DRV_TLV_PLOGI_ACCS_RECEIVED:
 		if (p_drv_buf->plogi_acc_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc;
+			p_buf->p_val = (u8 *)&p_drv_buf->plogi_acc;
 			return sizeof(p_drv_buf->plogi_acc);
 		}
 		break;
 	case DRV_TLV_PLOGI_RJTS_RECEIVED:
 		if (p_drv_buf->plogi_rjt_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_rjt;
+			p_buf->p_val = (u8 *)&p_drv_buf->plogi_rjt;
 			return sizeof(p_drv_buf->plogi_rjt);
 		}
 		break;
 	case DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->plogi_dst_fcid_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[0];
-			return sizeof(p_drv_buf->plogi_dst_fcid[0]);
-		}
-		break;
 	case DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->plogi_dst_fcid_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[1];
-			return sizeof(p_drv_buf->plogi_dst_fcid[1]);
-		}
-		break;
 	case DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->plogi_dst_fcid_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[2];
-			return sizeof(p_drv_buf->plogi_dst_fcid[2]);
-		}
-		break;
 	case DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->plogi_dst_fcid_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[3];
-			return sizeof(p_drv_buf->plogi_dst_fcid[3]);
-		}
-		break;
 	case DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->plogi_dst_fcid_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_dst_fcid[4];
-			return sizeof(p_drv_buf->plogi_dst_fcid[4]);
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID) / 2;
+
+		if (p_drv_buf->plogi_dst_fcid_set[idx]) {
+			p_buf->p_val = (u8 *)&p_drv_buf->plogi_dst_fcid[idx];
+			return sizeof(p_drv_buf->plogi_dst_fcid[idx]);
 		}
 		break;
+	}
 	case DRV_TLV_PLOGI_1_TIMESTAMP:
-		if (p_drv_buf->plogi_tstamp_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[0];
-			return sizeof(p_drv_buf->plogi_tstamp[0]);
-		}
-		break;
 	case DRV_TLV_PLOGI_2_TIMESTAMP:
-		if (p_drv_buf->plogi_tstamp_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[1];
-			return sizeof(p_drv_buf->plogi_tstamp[1]);
-		}
-		break;
 	case DRV_TLV_PLOGI_3_TIMESTAMP:
-		if (p_drv_buf->plogi_tstamp_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[2];
-			return sizeof(p_drv_buf->plogi_tstamp[2]);
-		}
-		break;
 	case DRV_TLV_PLOGI_4_TIMESTAMP:
-		if (p_drv_buf->plogi_tstamp_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[3];
-			return sizeof(p_drv_buf->plogi_tstamp[3]);
-		}
-		break;
 	case DRV_TLV_PLOGI_5_TIMESTAMP:
-		if (p_drv_buf->plogi_tstamp_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_tstamp[4];
-			return sizeof(p_drv_buf->plogi_tstamp[4]);
-		}
-		break;
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_PLOGI_1_TIMESTAMP) / 2;
+
+		return ecore_mfw_get_tlv_time_value(&p_drv_buf->plogi_tstamp[idx],
+						    p_buf);
+	}
+
 	case DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogi_acc_src_fcid_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[0];
-			return sizeof(p_drv_buf->plogi_acc_src_fcid[0]);
-		}
-		break;
 	case DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogi_acc_src_fcid_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[1];
-			return sizeof(p_drv_buf->plogi_acc_src_fcid[1]);
-		}
-		break;
 	case DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogi_acc_src_fcid_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[2];
-			return sizeof(p_drv_buf->plogi_acc_src_fcid[2]);
-		}
-		break;
 	case DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogi_acc_src_fcid_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[3];
-			return sizeof(p_drv_buf->plogi_acc_src_fcid[3]);
-		}
-		break;
 	case DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogi_acc_src_fcid_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_src_fcid[4];
-			return sizeof(p_drv_buf->plogi_acc_src_fcid[4]);
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID) / 2;
+
+		if (p_drv_buf->plogi_acc_src_fcid_set[idx]) {
+			p_buf->p_val = (u8 *)&p_drv_buf->plogi_acc_src_fcid[idx];
+			return sizeof(p_drv_buf->plogi_acc_src_fcid[idx]);
 		}
 		break;
+	}
 	case DRV_TLV_PLOGI_1_ACC_TIMESTAMP:
-		if (p_drv_buf->plogi_acc_tstamp_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[0];
-			return sizeof(p_drv_buf->plogi_acc_tstamp[0]);
-		}
-		break;
 	case DRV_TLV_PLOGI_2_ACC_TIMESTAMP:
-		if (p_drv_buf->plogi_acc_tstamp_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[1];
-			return sizeof(p_drv_buf->plogi_acc_tstamp[1]);
-		}
-		break;
 	case DRV_TLV_PLOGI_3_ACC_TIMESTAMP:
-		if (p_drv_buf->plogi_acc_tstamp_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[2];
-			return sizeof(p_drv_buf->plogi_acc_tstamp[2]);
-		}
-		break;
 	case DRV_TLV_PLOGI_4_ACC_TIMESTAMP:
-		if (p_drv_buf->plogi_acc_tstamp_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[3];
-			return sizeof(p_drv_buf->plogi_acc_tstamp[3]);
-		}
-		break;
 	case DRV_TLV_PLOGI_5_ACC_TIMESTAMP:
-		if (p_drv_buf->plogi_acc_tstamp_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogi_acc_tstamp[4];
-			return sizeof(p_drv_buf->plogi_acc_tstamp[4]);
-		}
-		break;
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_PLOGI_1_ACC_TIMESTAMP) / 2;
+
+		return ecore_mfw_get_tlv_time_value(&p_drv_buf->plogi_acc_tstamp[idx],
+						    p_buf);
+	}
+
 	case DRV_TLV_LOGOS_ISSUED:
 		if (p_drv_buf->tx_plogos_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_plogos;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_plogos;
 			return sizeof(p_drv_buf->tx_plogos);
 		}
 		break;
 	case DRV_TLV_LOGO_ACCS_RECEIVED:
 		if (p_drv_buf->plogo_acc_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_acc;
+			p_buf->p_val = (u8 *)&p_drv_buf->plogo_acc;
 			return sizeof(p_drv_buf->plogo_acc);
 		}
 		break;
 	case DRV_TLV_LOGO_RJTS_RECEIVED:
 		if (p_drv_buf->plogo_rjt_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_rjt;
+			p_buf->p_val = (u8 *)&p_drv_buf->plogo_rjt;
 			return sizeof(p_drv_buf->plogo_rjt);
 		}
 		break;
 	case DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogo_src_fcid_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[0];
-			return sizeof(p_drv_buf->plogo_src_fcid[0]);
-		}
-		break;
 	case DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogo_src_fcid_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[1];
-			return sizeof(p_drv_buf->plogo_src_fcid[1]);
-		}
-		break;
 	case DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogo_src_fcid_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[2];
-			return sizeof(p_drv_buf->plogo_src_fcid[2]);
-		}
-		break;
 	case DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogo_src_fcid_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[3];
-			return sizeof(p_drv_buf->plogo_src_fcid[3]);
-		}
-		break;
 	case DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID:
-		if (p_drv_buf->plogo_src_fcid_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_src_fcid[4];
-			return sizeof(p_drv_buf->plogo_src_fcid[4]);
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID) / 2;
+
+		if (p_drv_buf->plogo_src_fcid_set[idx]) {
+			p_buf->p_val = (u8 *)&p_drv_buf->plogo_src_fcid[idx];
+			return sizeof(p_drv_buf->plogo_src_fcid[idx]);
 		}
 		break;
+	}
 	case DRV_TLV_LOGO_1_TIMESTAMP:
-		if (p_drv_buf->plogo_tstamp_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[0];
-			return sizeof(p_drv_buf->plogo_tstamp[0]);
-		}
-		break;
 	case DRV_TLV_LOGO_2_TIMESTAMP:
-		if (p_drv_buf->plogo_tstamp_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[1];
-			return sizeof(p_drv_buf->plogo_tstamp[1]);
-		}
-		break;
 	case DRV_TLV_LOGO_3_TIMESTAMP:
-		if (p_drv_buf->plogo_tstamp_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[2];
-			return sizeof(p_drv_buf->plogo_tstamp[2]);
-		}
-		break;
 	case DRV_TLV_LOGO_4_TIMESTAMP:
-		if (p_drv_buf->plogo_tstamp_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[3];
-			return sizeof(p_drv_buf->plogo_tstamp[3]);
-		}
-		break;
 	case DRV_TLV_LOGO_5_TIMESTAMP:
-		if (p_drv_buf->plogo_tstamp_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->plogo_tstamp[4];
-			return sizeof(p_drv_buf->plogo_tstamp[4]);
-		}
-		break;
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_LOGO_1_TIMESTAMP) / 2;
+
+		return ecore_mfw_get_tlv_time_value(&p_drv_buf->plogo_tstamp[idx],
+						    p_buf);
+	}
 	case DRV_TLV_LOGOS_RECEIVED:
 		if (p_drv_buf->rx_logos_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_logos;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_logos;
 			return sizeof(p_drv_buf->rx_logos);
 		}
 		break;
 	case DRV_TLV_ACCS_ISSUED:
 		if (p_drv_buf->tx_accs_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_accs;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_accs;
 			return sizeof(p_drv_buf->tx_accs);
 		}
 		break;
 	case DRV_TLV_PRLIS_ISSUED:
 		if (p_drv_buf->tx_prlis_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_prlis;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_prlis;
 			return sizeof(p_drv_buf->tx_prlis);
 		}
 		break;
 	case DRV_TLV_ACCS_RECEIVED:
 		if (p_drv_buf->rx_accs_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_accs;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_accs;
 			return sizeof(p_drv_buf->rx_accs);
 		}
 		break;
 	case DRV_TLV_ABTS_SENT_COUNT:
 		if (p_drv_buf->tx_abts_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_abts;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_abts;
 			return sizeof(p_drv_buf->tx_abts);
 		}
 		break;
 	case DRV_TLV_ABTS_ACCS_RECEIVED:
 		if (p_drv_buf->rx_abts_acc_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_acc;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_abts_acc;
 			return sizeof(p_drv_buf->rx_abts_acc);
 		}
 		break;
 	case DRV_TLV_ABTS_RJTS_RECEIVED:
 		if (p_drv_buf->rx_abts_rjt_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_abts_rjt;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_abts_rjt;
 			return sizeof(p_drv_buf->rx_abts_rjt);
 		}
 		break;
 	case DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->abts_dst_fcid_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[0];
-			return sizeof(p_drv_buf->abts_dst_fcid[0]);
-		}
-		break;
 	case DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->abts_dst_fcid_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[1];
-			return sizeof(p_drv_buf->abts_dst_fcid[1]);
-		}
-		break;
 	case DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->abts_dst_fcid_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[2];
-			return sizeof(p_drv_buf->abts_dst_fcid[2]);
-		}
-		break;
 	case DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->abts_dst_fcid_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[3];
-			return sizeof(p_drv_buf->abts_dst_fcid[3]);
-		}
-		break;
 	case DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID:
-		if (p_drv_buf->abts_dst_fcid_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_dst_fcid[4];
-			return sizeof(p_drv_buf->abts_dst_fcid[4]);
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID) / 2;
+
+		if (p_drv_buf->abts_dst_fcid_set[idx]) {
+			p_buf->p_val = (u8 *)&p_drv_buf->abts_dst_fcid[idx];
+			return sizeof(p_drv_buf->abts_dst_fcid[idx]);
 		}
 		break;
+	}
 	case DRV_TLV_ABTS_1_TIMESTAMP:
-		if (p_drv_buf->abts_tstamp_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[0];
-			return sizeof(p_drv_buf->abts_tstamp[0]);
-		}
-		break;
 	case DRV_TLV_ABTS_2_TIMESTAMP:
-		if (p_drv_buf->abts_tstamp_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[1];
-			return sizeof(p_drv_buf->abts_tstamp[1]);
-		}
-		break;
 	case DRV_TLV_ABTS_3_TIMESTAMP:
-		if (p_drv_buf->abts_tstamp_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[2];
-			return sizeof(p_drv_buf->abts_tstamp[2]);
-		}
-		break;
 	case DRV_TLV_ABTS_4_TIMESTAMP:
-		if (p_drv_buf->abts_tstamp_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[3];
-			return sizeof(p_drv_buf->abts_tstamp[3]);
-		}
-		break;
 	case DRV_TLV_ABTS_5_TIMESTAMP:
-		if (p_drv_buf->abts_tstamp_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abts_tstamp[4];
-			return sizeof(p_drv_buf->abts_tstamp[4]);
-		}
-		break;
+	{
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_ABTS_1_TIMESTAMP) / 2;
+
+		return ecore_mfw_get_tlv_time_value(&p_drv_buf->abts_tstamp[idx],
+						    p_buf);
+	}
+
 	case DRV_TLV_RSCNS_RECEIVED:
 		if (p_drv_buf->rx_rscn_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_rscn;
 			return sizeof(p_drv_buf->rx_rscn);
 		}
 		break;
 	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1:
-		if (p_drv_buf->rx_rscn_nport_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[0];
-			return sizeof(p_drv_buf->rx_rscn_nport[0]);
-		}
-		break;
 	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2:
-		if (p_drv_buf->rx_rscn_nport_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[1];
-			return sizeof(p_drv_buf->rx_rscn_nport[1]);
-		}
-		break;
 	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3:
-		if (p_drv_buf->rx_rscn_nport_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[2];
-			return sizeof(p_drv_buf->rx_rscn_nport[2]);
-		}
-		break;
 	case DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4:
-		if (p_drv_buf->rx_rscn_nport_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_rscn_nport[3];
-			return sizeof(p_drv_buf->rx_rscn_nport[3]);
+	{
+		u8 idx = p_tlv->tlv_type - DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1;
+
+		if (p_drv_buf->rx_rscn_nport_set[idx]) {
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_rscn_nport[idx];
+			return sizeof(p_drv_buf->rx_rscn_nport[idx]);
 		}
 		break;
+	}
 	case DRV_TLV_LUN_RESETS_ISSUED:
 		if (p_drv_buf->tx_lun_rst_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lun_rst;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_lun_rst;
 			return sizeof(p_drv_buf->tx_lun_rst);
 		}
 		break;
 	case DRV_TLV_ABORT_TASK_SETS_ISSUED:
 		if (p_drv_buf->abort_task_sets_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->abort_task_sets;
+			p_buf->p_val = (u8 *)&p_drv_buf->abort_task_sets;
 			return sizeof(p_drv_buf->abort_task_sets);
 		}
 		break;
 	case DRV_TLV_TPRLOS_SENT:
 		if (p_drv_buf->tx_tprlos_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_tprlos;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_tprlos;
 			return sizeof(p_drv_buf->tx_tprlos);
 		}
 		break;
 	case DRV_TLV_NOS_SENT_COUNT:
 		if (p_drv_buf->tx_nos_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_nos;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_nos;
 			return sizeof(p_drv_buf->tx_nos);
 		}
 		break;
 	case DRV_TLV_NOS_RECEIVED_COUNT:
 		if (p_drv_buf->rx_nos_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_nos;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_nos;
 			return sizeof(p_drv_buf->rx_nos);
 		}
 		break;
 	case DRV_TLV_OLS_COUNT:
 		if (p_drv_buf->ols_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->ols;
+			p_buf->p_val = (u8 *)&p_drv_buf->ols;
 			return sizeof(p_drv_buf->ols);
 		}
 		break;
 	case DRV_TLV_LR_COUNT:
 		if (p_drv_buf->lr_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->lr;
+			p_buf->p_val = (u8 *)&p_drv_buf->lr;
 			return sizeof(p_drv_buf->lr);
 		}
 		break;
 	case DRV_TLV_LRR_COUNT:
 		if (p_drv_buf->lrr_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->lrr;
+			p_buf->p_val = (u8 *)&p_drv_buf->lrr;
 			return sizeof(p_drv_buf->lrr);
 		}
 		break;
 	case DRV_TLV_LIP_SENT_COUNT:
 		if (p_drv_buf->tx_lip_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_lip;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_lip;
 			return sizeof(p_drv_buf->tx_lip);
 		}
 		break;
 	case DRV_TLV_LIP_RECEIVED_COUNT:
 		if (p_drv_buf->rx_lip_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_lip;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_lip;
 			return sizeof(p_drv_buf->rx_lip);
 		}
 		break;
 	case DRV_TLV_EOFA_COUNT:
 		if (p_drv_buf->eofa_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->eofa;
+			p_buf->p_val = (u8 *)&p_drv_buf->eofa;
 			return sizeof(p_drv_buf->eofa);
 		}
 		break;
 	case DRV_TLV_EOFNI_COUNT:
 		if (p_drv_buf->eofni_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->eofni;
+			p_buf->p_val = (u8 *)&p_drv_buf->eofni;
 			return sizeof(p_drv_buf->eofni);
 		}
 		break;
 	case DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT:
 		if (p_drv_buf->scsi_chks_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chks;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_chks;
 			return sizeof(p_drv_buf->scsi_chks);
 		}
 		break;
 	case DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT:
 		if (p_drv_buf->scsi_cond_met_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_cond_met;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_cond_met;
 			return sizeof(p_drv_buf->scsi_cond_met);
 		}
 		break;
 	case DRV_TLV_SCSI_STATUS_BUSY_COUNT:
 		if (p_drv_buf->scsi_busy_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_busy;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_busy;
 			return sizeof(p_drv_buf->scsi_busy);
 		}
 		break;
 	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT:
 		if (p_drv_buf->scsi_inter_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_inter;
 			return sizeof(p_drv_buf->scsi_inter);
 		}
 		break;
 	case DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT:
 		if (p_drv_buf->scsi_inter_cond_met_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_inter_cond_met;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_inter_cond_met;
 			return sizeof(p_drv_buf->scsi_inter_cond_met);
 		}
 		break;
 	case DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT:
 		if (p_drv_buf->scsi_rsv_conflicts_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rsv_conflicts;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_rsv_conflicts;
 			return sizeof(p_drv_buf->scsi_rsv_conflicts);
 		}
 		break;
 	case DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT:
 		if (p_drv_buf->scsi_tsk_full_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_full;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_tsk_full;
 			return sizeof(p_drv_buf->scsi_tsk_full);
 		}
 		break;
 	case DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT:
 		if (p_drv_buf->scsi_aca_active_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_aca_active;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_aca_active;
 			return sizeof(p_drv_buf->scsi_aca_active);
 		}
 		break;
 	case DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT:
 		if (p_drv_buf->scsi_tsk_abort_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_tsk_abort;
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_tsk_abort;
 			return sizeof(p_drv_buf->scsi_tsk_abort);
 		}
 		break;
 	case DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ:
-		if (p_drv_buf->scsi_rx_chk_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[0];
-			return sizeof(p_drv_buf->scsi_rx_chk[0]);
-		}
-		break;
 	case DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ:
-		if (p_drv_buf->scsi_rx_chk_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[1];
-			return sizeof(p_drv_buf->scsi_rx_chk[1]);
-		}
-		break;
 	case DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ:
-		if (p_drv_buf->scsi_rx_chk_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[2];
-			return sizeof(p_drv_buf->scsi_rx_chk[2]);
-		}
-		break;
 	case DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ:
-		if (p_drv_buf->scsi_rx_chk_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[3];
-			return sizeof(p_drv_buf->scsi_rx_chk[4]);
-		}
-		break;
 	case DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ:
-		if (p_drv_buf->scsi_rx_chk_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_rx_chk[4];
-			return sizeof(p_drv_buf->scsi_rx_chk[4]);
+	{
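+		/* The CHECK_CONDITION TLV types are interleaved with their
+		 * TIMESTAMP counterparts, hence the stride of 2 when deriving
+		 * the array index.
+		 */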
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ) / 2;
+
+		if (p_drv_buf->scsi_rx_chk_set[idx]) {
+			p_buf->p_val = (u8 *)&p_drv_buf->scsi_rx_chk[idx];
+			return sizeof(p_drv_buf->scsi_rx_chk[idx]);
 		}
 		break;
+	}
 	case DRV_TLV_SCSI_CHECK_1_TIMESTAMP:
-		if (p_drv_buf->scsi_chk_tstamp_set[0]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[0];
-			return sizeof(p_drv_buf->scsi_chk_tstamp[0]);
-		}
-		break;
 	case DRV_TLV_SCSI_CHECK_2_TIMESTAMP:
-		if (p_drv_buf->scsi_chk_tstamp_set[1]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[1];
-			return sizeof(p_drv_buf->scsi_chk_tstamp[1]);
-		}
-		break;
 	case DRV_TLV_SCSI_CHECK_3_TIMESTAMP:
-		if (p_drv_buf->scsi_chk_tstamp_set[2]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[2];
-			return sizeof(p_drv_buf->scsi_chk_tstamp[2]);
-		}
-		break;
 	case DRV_TLV_SCSI_CHECK_4_TIMESTAMP:
-		if (p_drv_buf->scsi_chk_tstamp_set[3]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[3];
-			return sizeof(p_drv_buf->scsi_chk_tstamp[3]);
-		}
-		break;
 	case DRV_TLV_SCSI_CHECK_5_TIMESTAMP:
-		if (p_drv_buf->scsi_chk_tstamp_set[4]) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->scsi_chk_tstamp[4];
-			return sizeof(p_drv_buf->scsi_chk_tstamp[4]);
-		}
-		break;
+	{
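+		/* Same interleaved type layout as the check-condition TLVs;
+		 * ecore_mfw_get_tlv_time_value() fills p_buf and returns the
+		 * value length.
+		 */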
+		u8 idx = (p_tlv->tlv_type -
+			  DRV_TLV_SCSI_CHECK_1_TIMESTAMP) / 2;
+
+		return ecore_mfw_get_tlv_time_value(&p_drv_buf->scsi_chk_tstamp[idx],
+						    p_buf);
+	}
+
 	default:
 		break;
 	}
@@ -1309,96 +1126,96 @@ ecore_mfw_get_fcoe_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
 static int
 ecore_mfw_get_iscsi_tlv_value(struct ecore_drv_tlv_hdr *p_tlv,
 			      struct ecore_mfw_tlv_iscsi *p_drv_buf,
-			      u8 **p_tlv_buf)
+			      struct ecore_tlv_parsed_buf *p_buf)
 {
 	switch (p_tlv->tlv_type) {
 	case DRV_TLV_TARGET_LLMNR_ENABLED:
 		if (p_drv_buf->target_llmnr_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->target_llmnr;
+			p_buf->p_val = (u8 *)&p_drv_buf->target_llmnr;
 			return sizeof(p_drv_buf->target_llmnr);
 		}
 		break;
 	case DRV_TLV_HEADER_DIGEST_FLAG_ENABLED:
 		if (p_drv_buf->header_digest_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->header_digest;
+			p_buf->p_val = (u8 *)&p_drv_buf->header_digest;
 			return sizeof(p_drv_buf->header_digest);
 		}
 		break;
 	case DRV_TLV_DATA_DIGEST_FLAG_ENABLED:
 		if (p_drv_buf->data_digest_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->data_digest;
+			p_buf->p_val = (u8 *)&p_drv_buf->data_digest;
 			return sizeof(p_drv_buf->data_digest);
 		}
 		break;
 	case DRV_TLV_AUTHENTICATION_METHOD:
 		if (p_drv_buf->auth_method_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->auth_method;
+			p_buf->p_val = (u8 *)&p_drv_buf->auth_method;
 			return sizeof(p_drv_buf->auth_method);
 		}
 		break;
 	case DRV_TLV_ISCSI_BOOT_TARGET_PORTAL:
 		if (p_drv_buf->boot_taget_portal_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->boot_taget_portal;
+			p_buf->p_val = (u8 *)&p_drv_buf->boot_taget_portal;
 			return sizeof(p_drv_buf->boot_taget_portal);
 		}
 		break;
 	case DRV_TLV_MAX_FRAME_SIZE:
 		if (p_drv_buf->frame_size_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->frame_size;
+			p_buf->p_val = (u8 *)&p_drv_buf->frame_size;
 			return sizeof(p_drv_buf->frame_size);
 		}
 		break;
 	case DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE:
 		if (p_drv_buf->tx_desc_size_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_size;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_desc_size;
 			return sizeof(p_drv_buf->tx_desc_size);
 		}
 		break;
 	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE:
 		if (p_drv_buf->rx_desc_size_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_size;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_desc_size;
 			return sizeof(p_drv_buf->rx_desc_size);
 		}
 		break;
 	case DRV_TLV_ISCSI_BOOT_PROGRESS:
 		if (p_drv_buf->boot_progress_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->boot_progress;
+			p_buf->p_val = (u8 *)&p_drv_buf->boot_progress;
 			return sizeof(p_drv_buf->boot_progress);
 		}
 		break;
 	case DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH:
 		if (p_drv_buf->tx_desc_qdepth_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_desc_qdepth;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_desc_qdepth;
 			return sizeof(p_drv_buf->tx_desc_qdepth);
 		}
 		break;
 	case DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH:
 		if (p_drv_buf->rx_desc_qdepth_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_desc_qdepth;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_desc_qdepth;
 			return sizeof(p_drv_buf->rx_desc_qdepth);
 		}
 		break;
 	case DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED:
 		if (p_drv_buf->rx_frames_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_frames;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_frames;
 			return sizeof(p_drv_buf->rx_frames);
 		}
 		break;
 	case DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED:
 		if (p_drv_buf->rx_bytes_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->rx_bytes;
+			p_buf->p_val = (u8 *)&p_drv_buf->rx_bytes;
 			return sizeof(p_drv_buf->rx_bytes);
 		}
 		break;
 	case DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT:
 		if (p_drv_buf->tx_frames_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_frames;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_frames;
 			return sizeof(p_drv_buf->tx_frames);
 		}
 		break;
 	case DRV_TLV_ISCSI_PDU_TX_BYTES_SENT:
 		if (p_drv_buf->tx_bytes_set) {
-			*p_tlv_buf = (u8 *)&p_drv_buf->tx_bytes;
+			p_buf->p_val = (u8 *)&p_drv_buf->tx_bytes;
 			return sizeof(p_drv_buf->tx_bytes);
 		}
 		break;
@@ -1414,10 +1231,11 @@ static enum _ecore_status_t ecore_mfw_update_tlvs(struct ecore_hwfn *p_hwfn,
 						  u32 size)
 {
 	union ecore_mfw_tlv_data *p_tlv_data;
+	struct ecore_tlv_parsed_buf buffer;
 	struct ecore_drv_tlv_hdr tlv;
-	u8 *p_tlv_ptr = OSAL_NULL, *p_temp;
 	u32 offset;
 	int len;
+	u8 *p_tlv;
 
 	p_tlv_data = OSAL_VZALLOC(p_hwfn->p_dev, sizeof(*p_tlv_data));
 	if (!p_tlv_data)
@@ -1428,41 +1246,39 @@ static enum _ecore_status_t ecore_mfw_update_tlvs(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
-	offset = 0;
 	OSAL_MEMSET(&tlv, 0, sizeof(tlv));
-	while (offset < size) {
-		p_temp = &p_mfw_buf[offset];
-		tlv.tlv_type = TLV_TYPE(p_temp);
-		tlv.tlv_length = TLV_LENGTH(p_temp);
-		tlv.tlv_flags = TLV_FLAGS(p_temp);
-		DP_INFO(p_hwfn, "Type %d length = %d flags = 0x%x\n",
-			tlv.tlv_type, tlv.tlv_length, tlv.tlv_flags);
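+	/* tlv_length is in units of dwords; each iteration skips the TLV
+	 * header plus its value area.
+	 */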
+	for (offset = 0; offset < size;
+	     offset += sizeof(tlv) + sizeof(u32) * tlv.tlv_length) {
+		p_tlv = &p_mfw_buf[offset];
+		tlv.tlv_type = TLV_TYPE(p_tlv);
+		tlv.tlv_length = TLV_LENGTH(p_tlv);
+		tlv.tlv_flags = TLV_FLAGS(p_tlv);
+
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Type %d length = %d flags = 0x%x\n", tlv.tlv_type,
+			   tlv.tlv_length, tlv.tlv_flags);
 
-		offset += sizeof(tlv);
 		if (tlv_group == ECORE_MFW_TLV_GENERIC)
-			len = ecore_mfw_get_gen_tlv_value(&tlv,
-					&p_tlv_data->generic, &p_tlv_ptr);
+			len = ecore_mfw_get_gen_tlv_value(p_hwfn, &tlv,
+							  &p_tlv_data->generic,
+							  &buffer);
 		else if (tlv_group == ECORE_MFW_TLV_ETH)
-			len = ecore_mfw_get_eth_tlv_value(&tlv,
-					&p_tlv_data->eth, &p_tlv_ptr);
+			len = ecore_mfw_get_eth_tlv_value(&tlv, &p_tlv_data->eth, &buffer);
 		else if (tlv_group == ECORE_MFW_TLV_FCOE)
-			len = ecore_mfw_get_fcoe_tlv_value(&tlv,
-					&p_tlv_data->fcoe, &p_tlv_ptr);
+			len = ecore_mfw_get_fcoe_tlv_value(&tlv, &p_tlv_data->fcoe, &buffer);
 		else
-			len = ecore_mfw_get_iscsi_tlv_value(&tlv,
-					&p_tlv_data->iscsi, &p_tlv_ptr);
+			len = ecore_mfw_get_iscsi_tlv_value(&tlv, &p_tlv_data->iscsi, &buffer);
 
 		if (len > 0) {
 			OSAL_WARN(len > 4 * tlv.tlv_length,
-				  "Incorrect MFW TLV length");
+				  "Incorrect MFW TLV length %d; it should not exceed %d\n",
+				  len, 4 * tlv.tlv_length);
 			len = OSAL_MIN_T(int, len, 4 * tlv.tlv_length);
 			tlv.tlv_flags |= ECORE_DRV_TLV_FLAGS_CHANGED;
-			/* TODO: Endianness handling? */
-			OSAL_MEMCPY(p_mfw_buf, &tlv, sizeof(tlv));
-			OSAL_MEMCPY(p_mfw_buf + offset, p_tlv_ptr, len);
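+			/* Update only the flags in the shared buffer; the TLV
+			 * type and length already present are left intact.
+			 */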
+			TLV_FLAGS(p_tlv) = tlv.tlv_flags;
+			OSAL_MEMCPY(p_mfw_buf + offset + sizeof(tlv),
+				    buffer.p_val, len);
 		}
-
-		offset += sizeof(u32) * tlv.tlv_length;
 	}
 
 	OSAL_VFREE(p_hwfn->p_dev, p_tlv_data);
@@ -1484,6 +1300,7 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	global_offsize = ecore_rd(p_hwfn, p_ptt, addr);
 	global_addr = SECTION_ADDR(global_offsize, 0);
 	addr = global_addr + OFFSETOF(struct public_global, data_ptr);
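+	/* public_global.data_ptr holds the address of the TLV request buffer,
+	 * so an extra read is needed to dereference it.
+	 */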
+	addr = ecore_rd(p_hwfn, p_ptt, addr);
 	size = ecore_rd(p_hwfn, p_ptt, global_addr +
 			OFFSETOF(struct public_global, data_size));
 
@@ -1494,14 +1311,19 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 
 	p_mfw_buf = (void *)OSAL_VZALLOC(p_hwfn->p_dev, size);
 	if (!p_mfw_buf) {
-		DP_NOTICE(p_hwfn, false,
-			  "Failed allocate memory for p_mfw_buf\n");
+		DP_NOTICE(p_hwfn, false, "Failed to allocate memory for p_mfw_buf\n");
 		goto drv_done;
 	}
 
-	/* Read the TLV request to local buffer */
+	/* Read the TLV request to local buffer. The MFW represents the TLV in
+	 * little endian format, but the MCP returns it in big endian format.
+	 * Hence the driver needs to convert the data to little endian first,
+	 * and then memcpy (cast) it to preserve the MFW TLV format in the
+	 * driver buffer.
+	 */
 	for (offset = 0; offset < size; offset += sizeof(u32)) {
 		val = ecore_rd(p_hwfn, p_ptt, addr + offset);
+		val = OSAL_BE32_TO_CPU(val);
 		OSAL_MEMCPY(&p_mfw_buf[offset], &val, sizeof(u32));
 	}
 
@@ -1512,7 +1334,30 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 		tlv.tlv_type = TLV_TYPE(p_temp);
 		tlv.tlv_length = TLV_LENGTH(p_temp);
 		if (ecore_mfw_get_tlv_group(tlv.tlv_type, &tlv_group))
-			goto drv_done;
+			DP_VERBOSE(p_hwfn, ECORE_MSG_DRV,
+				   "Unrecognized TLV %d\n", tlv.tlv_type);
+	}
+
+	/* Sanitize the TLV groups according to personality */
+	if ((tlv_group & ECORE_MFW_TLV_FCOE) &&
+	    p_hwfn->hw_info.personality != ECORE_PCI_FCOE) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Skipping FCoE TLVs for non-FCoE function\n");
+		tlv_group &= ~ECORE_MFW_TLV_FCOE;
+	}
+
+	if ((tlv_group & ECORE_MFW_TLV_ISCSI) &&
+	    p_hwfn->hw_info.personality != ECORE_PCI_ISCSI) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Skipping iSCSI TLVs for non-iSCSI function\n");
+		tlv_group &= ~ECORE_MFW_TLV_ISCSI;
+	}
+
+	if ((tlv_group & ECORE_MFW_TLV_ETH) &&
+	    !ECORE_IS_L2_PERSONALITY(p_hwfn)) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "Skipping L2 TLVs for non-L2 function\n");
+		tlv_group &= ~ECORE_MFW_TLV_ETH;
 	}
 
 	/* Update the TLV values in the local buffer */
@@ -1523,11 +1368,14 @@ ecore_mfw_process_tlv_req(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 		}
 	}
 
-	/* Write the TLV data to shared memory */
+	/* Write the TLV data to shared memory. The stream of 4 bytes first
+	 * needs to be copied into a u32 element to preserve the byte order,
+	 * and then converted to big endian as required by the MCP write.
+	 */
 	for (offset = 0; offset < size; offset += sizeof(u32)) {
-		val = (u32)p_mfw_buf[offset];
+		OSAL_MEMCPY(&val, &p_mfw_buf[offset], sizeof(u32));
+		val = OSAL_CPU_TO_BE32(val);
 		ecore_wr(p_hwfn, p_ptt, addr + offset, val);
-		offset += sizeof(u32);
 	}
 
 drv_done:
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index 64509f7cc..0e15c51b3 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_PROTO_IF_H__
 #define __ECORE_PROTO_IF_H__
 
@@ -34,10 +34,43 @@ struct ecore_eth_pf_params {
 	bool	allow_vf_mac_change;
 };
 
+/* Most of the parameters below are described in the FW FCoE HSI */
+struct ecore_fcoe_pf_params {
+	/* The following parameters are used during protocol-init */
+	u64		glbl_q_params_addr;
+	u64		bdq_pbl_base_addr[2];
+
+	/* The following parameters are used during HW-init and need to be
+	 * passed as arguments to the update_pf_params routine invoked before
+	 * slowpath start.
+	 */
+	u16		num_cons;
+	u16		num_tasks;
+
+	/* The following parameters are used during protocol-init */
+	u16		sq_num_pbl_pages;
+
+	u16		cq_num_entries;
+	u16		cmdq_num_entries;
+	u16		rq_buffer_log_size;
+	u16		mtu;
+	u16		dummy_icid;
+	u16		bdq_xoff_threshold[2];
+	u16		bdq_xon_threshold[2];
+	u16		rq_buffer_size;
+	u8		num_cqs; /* num of global CQs */
+	u8		log_page_size;
+	u8		gl_rq_pi;
+	u8		gl_cmd_pi;
+	u8		debug_mode;
+	u8		is_target;
+	u8		bdq_pbl_num_entries[2];
+};
+
 /* Most of the parameters below are described in the FW iSCSI / TCP HSI */
 struct ecore_iscsi_pf_params {
 	u64		glbl_q_params_addr;
-	u64		bdq_pbl_base_addr[2];
+	u64		bdq_pbl_base_addr[3];
 	u16		cq_num_entries;
 	u16		cmdq_num_entries;
 	u32		two_msl_timer;
@@ -51,33 +84,36 @@ struct ecore_iscsi_pf_params {
 
 	/* The following parameters are used during protocol-init */
 	u16		half_way_close_timeout;
-	u16		bdq_xoff_threshold[2];
-	u16		bdq_xon_threshold[2];
+	u16		bdq_xoff_threshold[3];
+	u16		bdq_xon_threshold[3];
 	u16		cmdq_xoff_threshold;
 	u16		cmdq_xon_threshold;
+	u16		in_capsule_max_size;
 	u16		rq_buffer_size;
 
 	u8		num_sq_pages_in_ring;
-	u8		num_r2tq_pages_in_ring;
 	u8		num_uhq_pages_in_ring;
 	u8		num_queues;
 	u8		log_page_size;
 	u8		log_page_size_conn;
 	u8		rqe_log_size;
+	u8		max_syn_rt;
 	u8		max_fin_rt;
 	u8		gl_rq_pi;
 	u8		gl_cmd_pi;
 	u8		debug_mode;
 	u8		ll2_ooo_queue_id;
-	u8		ooo_enable;
-
 	u8		is_target;
-	u8		bdq_pbl_num_entries[2];
+	u8		is_soc_en;
+	u8		soc_num_of_blocks_log;
+	u8		bdq_pbl_num_entries[3];
 	u8		disable_stats_collection;
+	u8		nvmetcp_en;
 };
 
 enum ecore_rdma_protocol {
 	ECORE_RDMA_PROTOCOL_DEFAULT,
+	ECORE_RDMA_PROTOCOL_NONE,
 	ECORE_RDMA_PROTOCOL_ROCE,
 	ECORE_RDMA_PROTOCOL_IWARP,
 };
@@ -87,15 +123,23 @@ struct ecore_rdma_pf_params {
 	 * the doorbell BAR).
 	 */
 	u32		min_dpis;	/* number of requested DPIs */
-	u32		num_mrs;	/* number of requested memory regions*/
 	u32		num_qps;	/* number of requested Queue Pairs */
-	u32		num_srqs;	/* number of requested SRQ */
-	u8		roce_edpm_mode; /* see QED_ROCE_EDPM_MODE_ENABLE */
+	u32		num_vf_qps;	/* number of requested Queue Pairs for VF */
+	u32		num_vf_tasks;	/* number of requested MRs for VF */
+	u32		num_srqs;	/* number of requested SRQs */
+	u32		num_vf_srqs;	/* number of requested SRQs for VF */
 	u8		gl_pi;		/* protocol index */
 
 	/* Will allocate rate limiters to be used with QPs */
 	u8		enable_dcqcn;
 
+	/* Max number of CNQs - limits the number used by the ECORE_RDMA_CNQ
+	 * feature, allowing ECORE_PF_L2_QUE to be increased accordingly.
+	 * To disable CNQs, use the dedicated value instead of `0'.
+	 */
+#define ECORE_RDMA_PF_PARAMS_CNQS_NONE	(0xffff)
+	u16		max_cnqs;
+
 	/* TCP port number used for the iwarp traffic */
 	u16		iwarp_port;
 	enum ecore_rdma_protocol rdma_protocol;
@@ -103,6 +147,7 @@ struct ecore_rdma_pf_params {
 
 struct ecore_pf_params {
 	struct ecore_eth_pf_params	eth_pf_params;
+	struct ecore_fcoe_pf_params	fcoe_pf_params;
 	struct ecore_iscsi_pf_params	iscsi_pf_params;
 	struct ecore_rdma_pf_params	rdma_pf_params;
 };
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 08b1f4700..7badacd6d 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __RT_DEFS_H__
 #define __RT_DEFS_H__
 
@@ -27,425 +27,534 @@
 #define DORQ_REG_VF_ICID_BIT_SHIFT_NORM_RT_OFFSET                   16
 #define DORQ_REG_PF_WAKE_ALL_RT_OFFSET                              17
 #define DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET                           18
-#define IGU_REG_PF_CONFIGURATION_RT_OFFSET                          19
-#define IGU_REG_VF_CONFIGURATION_RT_OFFSET                          20
-#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET                           21
-#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET                           22
-#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET                        23
-#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET                       24
-#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET                         25
-#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET                             26
-#define CAU_REG_SB_VAR_MEMORY_RT_SIZE                               736
-#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET                            762
-#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE                              736
-#define CAU_REG_PI_MEMORY_RT_OFFSET                                 1498
-#define CAU_REG_PI_MEMORY_RT_SIZE                                   4416
-#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET                5914
-#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET                  5915
-#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET                  5916
-#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET                     5917
-#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET                     5918
-#define PRS_REG_SEARCH_TCP_RT_OFFSET                                5919
-#define PRS_REG_SEARCH_FCOE_RT_OFFSET                               5920
-#define PRS_REG_SEARCH_ROCE_RT_OFFSET                               5921
-#define PRS_REG_ROCE_DEST_QP_MAX_VF_RT_OFFSET                       5922
-#define PRS_REG_ROCE_DEST_QP_MAX_PF_RT_OFFSET                       5923
-#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET                           5924
-#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET                 5925
-#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET       5926
-#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET                  5927
-#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET                           5928
-#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET                     5929
-#define SRC_REG_FIRSTFREE_RT_OFFSET                                 5930
+#define DORQ_REG_GLB_MAX_ICID_0_RT_OFFSET                           19
+#define DORQ_REG_GLB_MAX_ICID_1_RT_OFFSET                           20
+#define DORQ_REG_GLB_RANGE2CONN_TYPE_0_RT_OFFSET                    21
+#define DORQ_REG_GLB_RANGE2CONN_TYPE_1_RT_OFFSET                    22
+#define DORQ_REG_PRV_PF_MAX_ICID_2_RT_OFFSET                        23
+#define DORQ_REG_PRV_PF_MAX_ICID_3_RT_OFFSET                        24
+#define DORQ_REG_PRV_PF_MAX_ICID_4_RT_OFFSET                        25
+#define DORQ_REG_PRV_PF_MAX_ICID_5_RT_OFFSET                        26
+#define DORQ_REG_PRV_VF_MAX_ICID_2_RT_OFFSET                        27
+#define DORQ_REG_PRV_VF_MAX_ICID_3_RT_OFFSET                        28
+#define DORQ_REG_PRV_VF_MAX_ICID_4_RT_OFFSET                        29
+#define DORQ_REG_PRV_VF_MAX_ICID_5_RT_OFFSET                        30
+#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_2_RT_OFFSET                 31
+#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_3_RT_OFFSET                 32
+#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_4_RT_OFFSET                 33
+#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_5_RT_OFFSET                 34
+#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_2_RT_OFFSET                 35
+#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_3_RT_OFFSET                 36
+#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_4_RT_OFFSET                 37
+#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_5_RT_OFFSET                 38
+#define IGU_REG_PF_CONFIGURATION_RT_OFFSET                          39
+#define IGU_REG_VF_CONFIGURATION_RT_OFFSET                          40
+#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET                           41
+#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET                           42
+#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET                        43
+#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET                       44
+#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET                         45
+#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET                             46
+#define CAU_REG_SB_VAR_MEMORY_RT_SIZE                               1024
+#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET                            1070
+#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE                              1024
+#define CAU_REG_PI_MEMORY_RT_OFFSET                                 2094
+#define CAU_REG_PI_MEMORY_RT_SIZE                                   5120
+#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET                7214
+#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET                  7215
+#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET                  7216
+#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET                     7217
+#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET                     7218
+#define PRS_REG_SEARCH_TCP_RT_OFFSET                                7219
+#define PRS_REG_SEARCH_FCOE_RT_OFFSET                               7220
+#define PRS_REG_SEARCH_ROCE_RT_OFFSET                               7221
+#define PRS_REG_ROCE_DEST_QP_MAX_VF_RT_OFFSET                       7222
+#define PRS_REG_ROCE_DEST_QP_MAX_PF_RT_OFFSET                       7223
+#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET                           7224
+#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET                 7225
+#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET       7226
+#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET                  7227
+#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET                           7228
+#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET                     7229
+#define SRC_REG_TABLE_T1_ENTRY_SIZE_RT_OFFSET                       7230
+#define SRC_REG_TABLE_T2_ENTRY_SIZE_RT_OFFSET                       7231
+#define SRC_REG_FIRSTFREE_RT_OFFSET                                 7232
 #define SRC_REG_FIRSTFREE_RT_SIZE                                   2
-#define SRC_REG_LASTFREE_RT_OFFSET                                  5932
+#define SRC_REG_LASTFREE_RT_OFFSET                                  7234
 #define SRC_REG_LASTFREE_RT_SIZE                                    2
-#define SRC_REG_COUNTFREE_RT_OFFSET                                 5934
-#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET                          5935
-#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET                            5936
-#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET                            5937
-#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET                              5938
-#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET                              5939
-#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET                             5940
-#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET                            5941
-#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET                           5942
-#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET                            5943
-#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET                           5944
-#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET                            5945
-#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET                          5946
-#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET                           5947
-#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET                         5948
-#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET                          5949
-#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET                         5950
-#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET                          5951
-#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET                         5952
-#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET                          5953
-#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET                 5954
-#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET               5955
-#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET               5956
-#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET                           5957
-#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET                         5958
-#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET                         5959
-#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET                       5960
-#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET                     5961
-#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET                     5962
-#define PSWRQ2_REG_VF_BASE_RT_OFFSET                                5963
-#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET                            5964
-#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET                          5965
-#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET                          5966
-#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET                             5967
-#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE                               22000
-#define PGLUE_REG_B_VF_BASE_RT_OFFSET                               27967
-#define PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET                    27968
-#define PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET                       27969
-#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET                       27970
-#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET                          27971
-#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET                          27972
-#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET                          27973
-#define TM_REG_VF_ENABLE_CONN_RT_OFFSET                             27974
-#define TM_REG_PF_ENABLE_CONN_RT_OFFSET                             27975
-#define TM_REG_PF_ENABLE_TASK_RT_OFFSET                             27976
-#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET                 27977
-#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET                 27978
-#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            27979
+#define SRC_REG_COUNTFREE_RT_OFFSET                                 7236
+#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET                          7237
+#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET                            7238
+#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET                            7239
+#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET                              7240
+#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET                              7241
+#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET                             7242
+#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET                            7243
+#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET                           7244
+#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET                            7245
+#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET                           7246
+#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET                            7247
+#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET                          7248
+#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET                           7249
+#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET                         7250
+#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET                          7251
+#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET                         7252
+#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET                          7253
+#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET                         7254
+#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET                          7255
+#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET                 7256
+#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET               7257
+#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET               7258
+#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET                           7259
+#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET                         7260
+#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET                         7261
+#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET                       7262
+#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET                     7263
+#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET                     7264
+#define PSWRQ2_REG_VF_BASE_RT_OFFSET                                7265
+#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET                            7266
+#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET                          7267
+#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET                          7268
+#define PSWRQ2_REG_TGSRC_P_SIZE_RT_OFFSET                           7269
+#define PSWRQ2_REG_RGSRC_P_SIZE_RT_OFFSET                           7270
+#define PSWRQ2_REG_TGSRC_FIRST_ILT_RT_OFFSET                        7271
+#define PSWRQ2_REG_RGSRC_FIRST_ILT_RT_OFFSET                        7272
+#define PSWRQ2_REG_TGSRC_LAST_ILT_RT_OFFSET                         7273
+#define PSWRQ2_REG_RGSRC_LAST_ILT_RT_OFFSET                         7274
+#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET                             7275
+#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE                               27328
+#define PGLUE_REG_B_VF_BASE_RT_OFFSET                               34603
+#define PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET                    34604
+#define PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET                       34605
+#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET                       34606
+#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET                          34607
+#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET                          34608
+#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET                          34609
+#define TM_REG_VF_ENABLE_CONN_RT_OFFSET                             34610
+#define TM_REG_PF_ENABLE_CONN_RT_OFFSET                             34611
+#define TM_REG_PF_ENABLE_TASK_RT_OFFSET                             34612
+#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET                 34613
+#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET                 34614
+#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            34615
 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE                              416
-#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            28395
-#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              512
-#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                28907
-#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                28908
-#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                28909
-#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           28910
-#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           28911
-#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           28912
-#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           28913
-#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           28914
-#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           28915
-#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           28916
-#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           28917
-#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           28918
-#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           28919
-#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          28920
-#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          28921
-#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          28922
-#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          28923
-#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          28924
-#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          28925
-#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          28926
-#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          28927
-#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          28928
-#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          28929
-#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          28930
-#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          28931
-#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          28932
-#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          28933
-#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          28934
-#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          28935
-#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          28936
-#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          28937
-#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          28938
-#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          28939
-#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          28940
-#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          28941
-#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          28942
-#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          28943
-#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          28944
-#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          28945
-#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          28946
-#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          28947
-#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          28948
-#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          28949
-#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          28950
-#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          28951
-#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          28952
-#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          28953
-#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          28954
-#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          28955
-#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          28956
-#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          28957
-#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          28958
-#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          28959
-#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          28960
-#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          28961
-#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          28962
-#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          28963
-#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          28964
-#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          28965
-#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          28966
-#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          28967
-#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          28968
-#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          28969
-#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          28970
-#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          28971
-#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          28972
-#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          28973
-#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            28974
+#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            35031
+#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              608
+#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                35639
+#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                35640
+#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                35641
+#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           35642
+#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           35643
+#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           35644
+#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           35645
+#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           35646
+#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           35647
+#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           35648
+#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           35649
+#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           35650
+#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           35651
+#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          35652
+#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          35653
+#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          35654
+#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          35655
+#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          35656
+#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          35657
+#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          35658
+#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          35659
+#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          35660
+#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          35661
+#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          35662
+#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          35663
+#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          35664
+#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          35665
+#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          35666
+#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          35667
+#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          35668
+#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          35669
+#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          35670
+#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          35671
+#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          35672
+#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          35673
+#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          35674
+#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          35675
+#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          35676
+#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          35677
+#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          35678
+#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          35679
+#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          35680
+#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          35681
+#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          35682
+#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          35683
+#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          35684
+#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          35685
+#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          35686
+#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          35687
+#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          35688
+#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          35689
+#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          35690
+#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          35691
+#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          35692
+#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          35693
+#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          35694
+#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          35695
+#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          35696
+#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          35697
+#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          35698
+#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          35699
+#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          35700
+#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          35701
+#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          35702
+#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          35703
+#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          35704
+#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          35705
+#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            35706
 #define QM_REG_BASEADDROTHERPQ_RT_SIZE                              128
-#define QM_REG_PTRTBLOTHER_RT_OFFSET                                29102
+#define QM_REG_PTRTBLOTHER_RT_OFFSET                                35834
 #define QM_REG_PTRTBLOTHER_RT_SIZE                                  256
-#define QM_REG_VOQCRDLINE_RT_OFFSET                                 29358
-#define QM_REG_VOQCRDLINE_RT_SIZE                                   20
-#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             29378
-#define QM_REG_VOQINITCRDLINE_RT_SIZE                               20
-#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29398
-#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29399
-#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29400
-#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29401
-#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29402
-#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29403
-#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29404
-#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29405
-#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29406
-#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29407
-#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29408
-#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29409
-#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29410
-#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29411
-#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29412
-#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29413
-#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29414
-#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29415
-#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29416
-#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29417
-#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29418
-#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29419
-#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29420
-#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29421
-#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29422
-#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29423
-#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29424
-#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29425
-#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29426
-#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29427
-#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29428
-#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29429
-#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29430
-#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29431
-#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29432
-#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29433
-#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29434
-#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29435
-#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29436
-#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29437
-#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29438
-#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29439
-#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29440
-#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29441
-#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29442
-#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29443
-#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29444
-#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29445
-#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29446
-#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29447
-#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29448
-#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29449
-#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29450
-#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29451
-#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29452
-#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29453
-#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29454
-#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29455
-#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29456
-#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29457
-#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29458
-#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29459
-#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29460
-#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29461
-#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29462
-#define QM_REG_PQTX2PF_38_RT_OFFSET                                 29463
-#define QM_REG_PQTX2PF_39_RT_OFFSET                                 29464
-#define QM_REG_PQTX2PF_40_RT_OFFSET                                 29465
-#define QM_REG_PQTX2PF_41_RT_OFFSET                                 29466
-#define QM_REG_PQTX2PF_42_RT_OFFSET                                 29467
-#define QM_REG_PQTX2PF_43_RT_OFFSET                                 29468
-#define QM_REG_PQTX2PF_44_RT_OFFSET                                 29469
-#define QM_REG_PQTX2PF_45_RT_OFFSET                                 29470
-#define QM_REG_PQTX2PF_46_RT_OFFSET                                 29471
-#define QM_REG_PQTX2PF_47_RT_OFFSET                                 29472
-#define QM_REG_PQTX2PF_48_RT_OFFSET                                 29473
-#define QM_REG_PQTX2PF_49_RT_OFFSET                                 29474
-#define QM_REG_PQTX2PF_50_RT_OFFSET                                 29475
-#define QM_REG_PQTX2PF_51_RT_OFFSET                                 29476
-#define QM_REG_PQTX2PF_52_RT_OFFSET                                 29477
-#define QM_REG_PQTX2PF_53_RT_OFFSET                                 29478
-#define QM_REG_PQTX2PF_54_RT_OFFSET                                 29479
-#define QM_REG_PQTX2PF_55_RT_OFFSET                                 29480
-#define QM_REG_PQTX2PF_56_RT_OFFSET                                 29481
-#define QM_REG_PQTX2PF_57_RT_OFFSET                                 29482
-#define QM_REG_PQTX2PF_58_RT_OFFSET                                 29483
-#define QM_REG_PQTX2PF_59_RT_OFFSET                                 29484
-#define QM_REG_PQTX2PF_60_RT_OFFSET                                 29485
-#define QM_REG_PQTX2PF_61_RT_OFFSET                                 29486
-#define QM_REG_PQTX2PF_62_RT_OFFSET                                 29487
-#define QM_REG_PQTX2PF_63_RT_OFFSET                                 29488
-#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               29489
-#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               29490
-#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               29491
-#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               29492
-#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               29493
-#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               29494
-#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               29495
-#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               29496
-#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               29497
-#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               29498
-#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              29499
-#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              29500
-#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              29501
-#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              29502
-#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              29503
-#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              29504
-#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             29505
-#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             29506
-#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        29507
-#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        29508
-#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          29509
-#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          29510
-#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          29511
-#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          29512
-#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          29513
-#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          29514
-#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          29515
-#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          29516
-#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               29517
+#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         36090
+#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         36091
+#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          36092
+#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        36093
+#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       36094
+#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            36095
+#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            36096
+#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            36097
+#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            36098
+#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            36099
+#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            36100
+#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            36101
+#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            36102
+#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            36103
+#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            36104
+#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           36105
+#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           36106
+#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           36107
+#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           36108
+#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           36109
+#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           36110
+#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        36111
+#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        36112
+#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        36113
+#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        36114
+#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           36115
+#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           36116
+#define QM_REG_PQTX2PF_0_RT_OFFSET                                  36117
+#define QM_REG_PQTX2PF_1_RT_OFFSET                                  36118
+#define QM_REG_PQTX2PF_2_RT_OFFSET                                  36119
+#define QM_REG_PQTX2PF_3_RT_OFFSET                                  36120
+#define QM_REG_PQTX2PF_4_RT_OFFSET                                  36121
+#define QM_REG_PQTX2PF_5_RT_OFFSET                                  36122
+#define QM_REG_PQTX2PF_6_RT_OFFSET                                  36123
+#define QM_REG_PQTX2PF_7_RT_OFFSET                                  36124
+#define QM_REG_PQTX2PF_8_RT_OFFSET                                  36125
+#define QM_REG_PQTX2PF_9_RT_OFFSET                                  36126
+#define QM_REG_PQTX2PF_10_RT_OFFSET                                 36127
+#define QM_REG_PQTX2PF_11_RT_OFFSET                                 36128
+#define QM_REG_PQTX2PF_12_RT_OFFSET                                 36129
+#define QM_REG_PQTX2PF_13_RT_OFFSET                                 36130
+#define QM_REG_PQTX2PF_14_RT_OFFSET                                 36131
+#define QM_REG_PQTX2PF_15_RT_OFFSET                                 36132
+#define QM_REG_PQTX2PF_16_RT_OFFSET                                 36133
+#define QM_REG_PQTX2PF_17_RT_OFFSET                                 36134
+#define QM_REG_PQTX2PF_18_RT_OFFSET                                 36135
+#define QM_REG_PQTX2PF_19_RT_OFFSET                                 36136
+#define QM_REG_PQTX2PF_20_RT_OFFSET                                 36137
+#define QM_REG_PQTX2PF_21_RT_OFFSET                                 36138
+#define QM_REG_PQTX2PF_22_RT_OFFSET                                 36139
+#define QM_REG_PQTX2PF_23_RT_OFFSET                                 36140
+#define QM_REG_PQTX2PF_24_RT_OFFSET                                 36141
+#define QM_REG_PQTX2PF_25_RT_OFFSET                                 36142
+#define QM_REG_PQTX2PF_26_RT_OFFSET                                 36143
+#define QM_REG_PQTX2PF_27_RT_OFFSET                                 36144
+#define QM_REG_PQTX2PF_28_RT_OFFSET                                 36145
+#define QM_REG_PQTX2PF_29_RT_OFFSET                                 36146
+#define QM_REG_PQTX2PF_30_RT_OFFSET                                 36147
+#define QM_REG_PQTX2PF_31_RT_OFFSET                                 36148
+#define QM_REG_PQTX2PF_32_RT_OFFSET                                 36149
+#define QM_REG_PQTX2PF_33_RT_OFFSET                                 36150
+#define QM_REG_PQTX2PF_34_RT_OFFSET                                 36151
+#define QM_REG_PQTX2PF_35_RT_OFFSET                                 36152
+#define QM_REG_PQTX2PF_36_RT_OFFSET                                 36153
+#define QM_REG_PQTX2PF_37_RT_OFFSET                                 36154
+#define QM_REG_PQTX2PF_38_RT_OFFSET                                 36155
+#define QM_REG_PQTX2PF_39_RT_OFFSET                                 36156
+#define QM_REG_PQTX2PF_40_RT_OFFSET                                 36157
+#define QM_REG_PQTX2PF_41_RT_OFFSET                                 36158
+#define QM_REG_PQTX2PF_42_RT_OFFSET                                 36159
+#define QM_REG_PQTX2PF_43_RT_OFFSET                                 36160
+#define QM_REG_PQTX2PF_44_RT_OFFSET                                 36161
+#define QM_REG_PQTX2PF_45_RT_OFFSET                                 36162
+#define QM_REG_PQTX2PF_46_RT_OFFSET                                 36163
+#define QM_REG_PQTX2PF_47_RT_OFFSET                                 36164
+#define QM_REG_PQTX2PF_48_RT_OFFSET                                 36165
+#define QM_REG_PQTX2PF_49_RT_OFFSET                                 36166
+#define QM_REG_PQTX2PF_50_RT_OFFSET                                 36167
+#define QM_REG_PQTX2PF_51_RT_OFFSET                                 36168
+#define QM_REG_PQTX2PF_52_RT_OFFSET                                 36169
+#define QM_REG_PQTX2PF_53_RT_OFFSET                                 36170
+#define QM_REG_PQTX2PF_54_RT_OFFSET                                 36171
+#define QM_REG_PQTX2PF_55_RT_OFFSET                                 36172
+#define QM_REG_PQTX2PF_56_RT_OFFSET                                 36173
+#define QM_REG_PQTX2PF_57_RT_OFFSET                                 36174
+#define QM_REG_PQTX2PF_58_RT_OFFSET                                 36175
+#define QM_REG_PQTX2PF_59_RT_OFFSET                                 36176
+#define QM_REG_PQTX2PF_60_RT_OFFSET                                 36177
+#define QM_REG_PQTX2PF_61_RT_OFFSET                                 36178
+#define QM_REG_PQTX2PF_62_RT_OFFSET                                 36179
+#define QM_REG_PQTX2PF_63_RT_OFFSET                                 36180
+#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               36181
+#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               36182
+#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               36183
+#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               36184
+#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               36185
+#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               36186
+#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               36187
+#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               36188
+#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               36189
+#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               36190
+#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              36191
+#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              36192
+#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              36193
+#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              36194
+#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              36195
+#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              36196
+#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             36197
+#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             36198
+#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        36199
+#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        36200
+#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          36201
+#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          36202
+#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          36203
+#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          36204
+#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          36205
+#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          36206
+#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          36207
+#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          36208
+#define QM_REG_RLGLBLPERIODSEL_8_RT_OFFSET                          36209
+#define QM_REG_RLGLBLPERIODSEL_9_RT_OFFSET                          36210
+#define QM_REG_RLGLBLPERIODSEL_10_RT_OFFSET                         36211
+#define QM_REG_RLGLBLPERIODSEL_11_RT_OFFSET                         36212
+#define QM_REG_RLGLBLPERIODSEL_12_RT_OFFSET                         36213
+#define QM_REG_RLGLBLPERIODSEL_13_RT_OFFSET                         36214
+#define QM_REG_RLGLBLPERIODSEL_14_RT_OFFSET                         36215
+#define QM_REG_RLGLBLPERIODSEL_15_RT_OFFSET                         36216
+#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               36217
 #define QM_REG_RLGLBLINCVAL_RT_SIZE                                 256
-#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           29773
+#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           36473
 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE                             256
-#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30029
+#define QM_REG_RLGLBLCRD_RT_OFFSET                                  36729
 #define QM_REG_RLGLBLCRD_RT_SIZE                                    256
-#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30285
-#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30286
-#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30287
-#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30288
+#define QM_REG_RLGLBLENABLE_RT_OFFSET                               36985
+#define QM_REG_RLPFPERIOD_RT_OFFSET                                 36986
+#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            36987
+#define QM_REG_RLPFINCVAL_RT_OFFSET                                 36988
 #define QM_REG_RLPFINCVAL_RT_SIZE                                   16
-#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30304
+#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             37004
 #define QM_REG_RLPFUPPERBOUND_RT_SIZE                               16
-#define QM_REG_RLPFCRD_RT_OFFSET                                    30320
+#define QM_REG_RLPFCRD_RT_OFFSET                                    37020
 #define QM_REG_RLPFCRD_RT_SIZE                                      16
-#define QM_REG_RLPFENABLE_RT_OFFSET                                 30336
-#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30337
-#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30338
+#define QM_REG_RLPFENABLE_RT_OFFSET                                 37036
+#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              37037
+#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                37038
 #define QM_REG_WFQPFWEIGHT_RT_SIZE                                  16
-#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30354
+#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            37054
 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE                              16
-#define QM_REG_WFQPFCRD_RT_OFFSET                                   30370
-#define QM_REG_WFQPFCRD_RT_SIZE                                     160
-#define QM_REG_WFQPFENABLE_RT_OFFSET                                30530
-#define QM_REG_WFQVPENABLE_RT_OFFSET                                30531
-#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               30532
+#define QM_REG_WFQPFCRD_RT_OFFSET                                   37070
+#define QM_REG_WFQPFCRD_RT_SIZE                                     256
+#define QM_REG_WFQPFENABLE_RT_OFFSET                                37326
+#define QM_REG_WFQVPENABLE_RT_OFFSET                                37327
+#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               37328
 #define QM_REG_BASEADDRTXPQ_RT_SIZE                                 512
-#define QM_REG_TXPQMAP_RT_OFFSET                                    31044
+#define QM_REG_TXPQMAP_RT_OFFSET                                    37840
 #define QM_REG_TXPQMAP_RT_SIZE                                      512
-#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                31556
+#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                38352
 #define QM_REG_WFQVPWEIGHT_RT_SIZE                                  512
-#define QM_REG_WFQVPCRD_RT_OFFSET                                   32068
+#define QM_REG_RLGLBLINCVAL_MSB_RT_OFFSET                           38864
+#define QM_REG_RLGLBLINCVAL_MSB_RT_SIZE                             256
+#define QM_REG_RLGLBLUPPERBOUND_MSB_RT_OFFSET                       39120
+#define QM_REG_RLGLBLUPPERBOUND_MSB_RT_SIZE                         256
+#define QM_REG_WFQVPCRD_RT_OFFSET                                   39376
 #define QM_REG_WFQVPCRD_RT_SIZE                                     512
-#define QM_REG_WFQVPMAP_RT_OFFSET                                   32580
+#define QM_REG_RLGLBLCRD_MSB_RT_OFFSET                              39888
+#define QM_REG_RLGLBLCRD_MSB_RT_SIZE                                256
+#define QM_REG_WFQVPMAP_RT_OFFSET                                   40144
 #define QM_REG_WFQVPMAP_RT_SIZE                                     512
-#define QM_REG_PTRTBLTX_RT_OFFSET                                   33092
+#define QM_REG_PTRTBLTX_RT_OFFSET                                   40656
 #define QM_REG_PTRTBLTX_RT_SIZE                                     1024
-#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               34116
-#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 160
-#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34276
-#define NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET                      34277
-#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     34278
-#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     34279
-#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     34280
-#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     34281
-#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  34282
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           34283
+#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               41680
+#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 320
+#define QM_REG_VOQCRDLINE_RT_OFFSET                                 42000
+#define QM_REG_VOQCRDLINE_RT_SIZE                                   36
+#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             42036
+#define QM_REG_VOQINITCRDLINE_RT_SIZE                               36
+#define QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET                          42072
+#define RGSRC_REG_TABLE_T1_ENTRY_SIZE_RT_OFFSET                     42073
+#define RGSRC_REG_TABLE_T2_ENTRY_SIZE_RT_OFFSET                     42074
+#define RGSRC_REG_NUMBER_HASH_BITS_RT_OFFSET                        42075
+#define TGSRC_REG_TABLE_T1_ENTRY_SIZE_RT_OFFSET                     42076
+#define TGSRC_REG_TABLE_T2_ENTRY_SIZE_RT_OFFSET                     42077
+#define TGSRC_REG_NUMBER_HASH_BITS_RT_OFFSET                        42078
+#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           42079
+#define NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET                      42080
+#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     42081
+#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     42082
+#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     42083
+#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     42084
+#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  42085
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           42086
 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE                             4
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        34287
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        42090
 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE                          4
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     34291
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     42094
 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE                       32
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        34323
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        42126
 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE                          16
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      34339
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      42142
 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE                        16
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             34355
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             42158
 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE               16
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   34371
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   42174
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE                     16
-#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              34387
-#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET                         34388
+#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              42190
+#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET                         42191
 #define NIG_REG_PPF_TO_ENGINE_SEL_RT_SIZE                           8
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           34396
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           34397
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           34398
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       34399
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       34400
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       34401
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       34402
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    34403
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    34404
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    34405
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    34406
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        34407
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     34408
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34409
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      34410
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    34411
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       34412
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                34413
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    34414
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       34415
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                34416
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    34417
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       34418
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                34419
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    34420
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       34421
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                34422
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    34423
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       34424
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                34425
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    34426
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       34427
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                34428
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    34429
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       34430
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                34431
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    34432
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       34433
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                34434
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    34435
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       34436
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                34437
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    34438
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       34439
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                34440
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   34441
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      34442
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               34443
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   34444
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      34445
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               34446
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   34447
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      34448
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               34449
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   34450
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      34451
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               34452
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   34453
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      34454
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               34455
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   34456
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      34457
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               34458
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   34459
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      34460
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               34461
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   34462
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      34463
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               34464
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   34465
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      34466
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               34467
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   34468
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      34469
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               34470
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                34471
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_OFFSET              42199
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_SIZE                1024
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_OFFSET                 43223
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_SIZE                   512
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_OFFSET               43735
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_SIZE                 512
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET      44247
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE        512
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_OFFSET            44759
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_SIZE              512
+#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_OFFSET                    45271
+#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_SIZE                      32
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           45303
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           45304
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           45305
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       45306
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       45307
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       45308
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       45309
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    45310
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    45311
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    45312
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    45313
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        45314
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     45315
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           45316
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      45317
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    45318
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       45319
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                45320
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    45321
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       45322
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                45323
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    45324
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       45325
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                45326
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    45327
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       45328
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                45329
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    45330
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       45331
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                45332
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    45333
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       45334
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                45335
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    45336
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       45337
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                45338
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    45339
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       45340
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                45341
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    45342
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       45343
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                45344
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    45345
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       45346
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                45347
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   45348
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      45349
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               45350
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   45351
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      45352
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               45353
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   45354
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      45355
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               45356
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   45357
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      45358
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               45359
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   45360
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      45361
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               45362
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   45363
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      45364
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               45365
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   45366
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      45367
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               45368
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   45369
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      45370
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               45371
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   45372
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      45373
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               45374
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   45375
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      45376
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               45377
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ20_RT_OFFSET                   45378
+#define PBF_REG_BTB_GUARANTEED_VOQ20_RT_OFFSET                      45379
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ20_RT_OFFSET               45380
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ21_RT_OFFSET                   45381
+#define PBF_REG_BTB_GUARANTEED_VOQ21_RT_OFFSET                      45382
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ21_RT_OFFSET               45383
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ22_RT_OFFSET                   45384
+#define PBF_REG_BTB_GUARANTEED_VOQ22_RT_OFFSET                      45385
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ22_RT_OFFSET               45386
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ23_RT_OFFSET                   45387
+#define PBF_REG_BTB_GUARANTEED_VOQ23_RT_OFFSET                      45388
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ23_RT_OFFSET               45389
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ24_RT_OFFSET                   45390
+#define PBF_REG_BTB_GUARANTEED_VOQ24_RT_OFFSET                      45391
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ24_RT_OFFSET               45392
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ25_RT_OFFSET                   45393
+#define PBF_REG_BTB_GUARANTEED_VOQ25_RT_OFFSET                      45394
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ25_RT_OFFSET               45395
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ26_RT_OFFSET                   45396
+#define PBF_REG_BTB_GUARANTEED_VOQ26_RT_OFFSET                      45397
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ26_RT_OFFSET               45398
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ27_RT_OFFSET                   45399
+#define PBF_REG_BTB_GUARANTEED_VOQ27_RT_OFFSET                      45400
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ27_RT_OFFSET               45401
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ28_RT_OFFSET                   45402
+#define PBF_REG_BTB_GUARANTEED_VOQ28_RT_OFFSET                      45403
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ28_RT_OFFSET               45404
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ29_RT_OFFSET                   45405
+#define PBF_REG_BTB_GUARANTEED_VOQ29_RT_OFFSET                      45406
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ29_RT_OFFSET               45407
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ30_RT_OFFSET                   45408
+#define PBF_REG_BTB_GUARANTEED_VOQ30_RT_OFFSET                      45409
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ30_RT_OFFSET               45410
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ31_RT_OFFSET                   45411
+#define PBF_REG_BTB_GUARANTEED_VOQ31_RT_OFFSET                      45412
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ31_RT_OFFSET               45413
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ32_RT_OFFSET                   45414
+#define PBF_REG_BTB_GUARANTEED_VOQ32_RT_OFFSET                      45415
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ32_RT_OFFSET               45416
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ33_RT_OFFSET                   45417
+#define PBF_REG_BTB_GUARANTEED_VOQ33_RT_OFFSET                      45418
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ33_RT_OFFSET               45419
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ34_RT_OFFSET                   45420
+#define PBF_REG_BTB_GUARANTEED_VOQ34_RT_OFFSET                      45421
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ34_RT_OFFSET               45422
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ35_RT_OFFSET                   45423
+#define PBF_REG_BTB_GUARANTEED_VOQ35_RT_OFFSET                      45424
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ35_RT_OFFSET               45425
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                45426
 
-#define RUNTIME_ARRAY_SIZE 34472
+#define RUNTIME_ARRAY_SIZE 45427
 
 /* Init Callbacks */
 #define DMAE_READY_CB                                               0
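Reviewer note: the *_RT_OFFSET values above are indices into the per-function runtime-init array, not MMIO addresses; the new HW widens several blocks (e.g. QM_REG_WFQPFCRD_RT_SIZE grows from 160 to 256 entries), which is why RUNTIME_ARRAY_SIZE jumps from 34472 to 45427. As a minimal sketch of how such an offset is consumed -- assuming the rt_data layout used elsewhere in the ecore init code, helper name illustrative only:

	static void example_store_rt_reg(struct ecore_hwfn *p_hwfn,
					 u32 rt_offset, u32 val)
	{
		/* Stage the value; the init engine flushes it to HW later. */
		p_hwfn->rt_data.init_val[rt_offset] = val;
		p_hwfn->rt_data.b_valid[rt_offset] = true;
	}

	/* e.g.: example_store_rt_reg(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET, 1); */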
diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h
index 4633dbebe..5d326bef9 100644
--- a/drivers/net/qede/base/ecore_sp_api.h
+++ b/drivers/net/qede/base/ecore_sp_api.h
@@ -1,16 +1,16 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_SP_API_H__
 #define __ECORE_SP_API_H__
 
 #include "ecore_status.h"
 
 enum spq_mode {
-	ECORE_SPQ_MODE_BLOCK, /* Client will poll a designated mem. address */
+	ECORE_SPQ_MODE_BLOCK,   /* Client will poll a designated mem. address */
 	ECORE_SPQ_MODE_CB,  /* Client supplies a callback */
 	ECORE_SPQ_MODE_EBLOCK,  /* ECORE should block until completion */
 };
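Reviewer note: the three spq_mode values differ in who waits for the ramrod: BLOCK polls a caller-supplied completion address, CB invokes a client callback from EQ processing, and EBLOCK makes ecore itself block until completion. A hedged sketch of issuing a ramrod in callback mode, following the init_data pattern visible in ecore_sp_commands.c below (the callback wiring is illustrative):

	struct ecore_spq_comp_cb cb;		/* illustrative callback setup */
	struct ecore_sp_init_data init_data;

	cb.function = my_done_cb;		/* hypothetical completion handler */
	cb.cookie = my_ctx;			/* hypothetical caller context */

	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
	init_data.cid = ecore_spq_get_cid(p_hwfn);
	init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
	init_data.comp_mode = ECORE_SPQ_MODE_CB;	/* don't block the caller */
	init_data.p_comp_data = &cb;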
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 44ced135d..3a507843a 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 
 #include "ecore.h"
@@ -22,6 +22,15 @@
 #include "ecore_sriov.h"
 #include "ecore_vf.h"
 
+void ecore_sp_destroy_request(struct ecore_hwfn *p_hwfn,
+			      struct ecore_spq_entry *p_ent)
+{
+	if (p_ent->queue == &p_hwfn->p_spq->unlimited_pending)
+		OSAL_FREE(p_hwfn->p_dev, p_ent);
+	else
+		ecore_spq_return_entry(p_hwfn, p_ent);
+}
+
 enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 					   struct ecore_spq_entry **pp_ent,
 					   u8 cmd,
@@ -56,7 +65,7 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 
 	case ECORE_SPQ_MODE_BLOCK:
 		if (!p_data->p_comp_data)
-			return ECORE_INVAL;
+			goto err;
 
 		p_ent->comp_cb.cookie = p_data->p_comp_data->cookie;
 		break;
@@ -71,13 +80,13 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 	default:
 		DP_NOTICE(p_hwfn, true, "Unknown SPQE completion mode %d\n",
 			  p_ent->comp_mode);
-		return ECORE_INVAL;
+		goto err;
 	}
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
-		   "Initialized: CID %08x cmd %02x protocol %02x data_addr %lu comp_mode [%s]\n",
+		   "Initialized: CID %08x cmd %02x protocol %02x data_addr %llx comp_mode [%s]\n",
 		   opaque_cid, cmd, protocol,
-		   (unsigned long)&p_ent->ramrod,
+		   (unsigned long long)(osal_uintptr_t)&p_ent->ramrod,
 		   D_TRINE(p_ent->comp_mode, ECORE_SPQ_MODE_EBLOCK,
 			   ECORE_SPQ_MODE_BLOCK, "MODE_EBLOCK", "MODE_BLOCK",
 			   "MODE_CB"));
@@ -85,6 +94,11 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
 	OSAL_MEMSET(&p_ent->ramrod, 0, sizeof(p_ent->ramrod));
 
 	return ECORE_SUCCESS;
+
+err:
+	ecore_sp_destroy_request(p_hwfn, p_ent);
+
+	return ECORE_INVAL;
 }
 
 static enum tunnel_clss ecore_tunn_clss_to_fw_clss(u8 type)
@@ -131,22 +145,14 @@ ecore_set_pf_update_tunn_mode(struct ecore_tunnel_info *p_tun,
 static void ecore_set_tunn_cls_info(struct ecore_tunnel_info *p_tun,
 				    struct ecore_tunnel_info *p_src)
 {
-	enum tunnel_clss type;
-
-	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
-	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
-
-	/* @DPDK - typecast tunnul class */
-	type = ecore_tunn_clss_to_fw_clss(p_src->vxlan.tun_cls);
-	p_tun->vxlan.tun_cls = (enum ecore_tunn_clss)type;
-	type = ecore_tunn_clss_to_fw_clss(p_src->l2_gre.tun_cls);
-	p_tun->l2_gre.tun_cls = (enum ecore_tunn_clss)type;
-	type = ecore_tunn_clss_to_fw_clss(p_src->ip_gre.tun_cls);
-	p_tun->ip_gre.tun_cls = (enum ecore_tunn_clss)type;
-	type = ecore_tunn_clss_to_fw_clss(p_src->l2_geneve.tun_cls);
-	p_tun->l2_geneve.tun_cls = (enum ecore_tunn_clss)type;
-	type = ecore_tunn_clss_to_fw_clss(p_src->ip_geneve.tun_cls);
-	p_tun->ip_geneve.tun_cls = (enum ecore_tunn_clss)type;
+	p_tun->b_update_rx_cls   = p_src->b_update_rx_cls;
+	p_tun->b_update_tx_cls   = p_src->b_update_tx_cls;
+
+	p_tun->vxlan.tun_cls     = p_src->vxlan.tun_cls;
+	p_tun->l2_gre.tun_cls    = p_src->l2_gre.tun_cls;
+	p_tun->ip_gre.tun_cls    = p_src->ip_gre.tun_cls;
+	p_tun->l2_geneve.tun_cls = p_src->l2_geneve.tun_cls;
+	p_tun->ip_geneve.tun_cls = p_src->ip_geneve.tun_cls;
 }
 
 static void ecore_set_tunn_ports(struct ecore_tunnel_info *p_tun,
@@ -166,7 +172,7 @@ static void
 __ecore_set_ramrod_tunnel_param(u8 *p_tunn_cls,
 				struct ecore_tunn_update_type *tun_type)
 {
-	*p_tunn_cls = tun_type->tun_cls;
+	*p_tunn_cls = ecore_tunn_clss_to_fw_clss(tun_type->tun_cls);
 }
 
 static void
@@ -218,7 +224,7 @@ ecore_tunn_set_pf_update_params(struct ecore_hwfn		*p_hwfn,
 }
 
 static void ecore_set_hw_tunn_mode(struct ecore_hwfn *p_hwfn,
-				   struct ecore_ptt *p_ptt,
+				   struct ecore_ptt  *p_ptt,
 				   struct ecore_tunnel_info *p_tun)
 {
 	ecore_set_gre_enable(p_hwfn, p_ptt, p_tun->l2_gre.b_mode_enabled,
@@ -251,9 +257,9 @@ static void ecore_set_hw_tunn_mode_port(struct ecore_hwfn *p_hwfn,
 }
 
 static void
-ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
+ecore_tunn_set_pf_start_params(struct ecore_hwfn		*p_hwfn,
 			       struct ecore_tunnel_info		*p_src,
-			       struct pf_start_tunnel_config *p_tunn_cfg)
+			       struct pf_start_tunnel_config	*p_tunn_cfg)
 {
 	struct ecore_tunnel_info *p_tun = &p_hwfn->p_dev->tunnel;
 
@@ -292,9 +298,6 @@ ecore_tunn_set_pf_start_params(struct ecore_hwfn *p_hwfn,
 					&p_tun->ip_gre);
 }
 
-#define ETH_P_8021Q 0x8100
-#define ETH_P_8021AD 0x88A8 /* 802.1ad Service VLAN         */
-
 enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt,
 				       struct ecore_tunnel_info *p_tunn,
@@ -321,7 +324,8 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_sp_init_request(p_hwfn, &p_ent,
 				   COMMON_RAMROD_PF_START,
-				   PROTOCOLID_COMMON, &init_data);
+				   PROTOCOLID_COMMON,
+				   &init_data);
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
@@ -335,18 +339,18 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	p_ramrod->dont_log_ramrods = 0;
 	p_ramrod->log_type_mask = OSAL_CPU_TO_LE16(0x8f);
 
-	if (OSAL_GET_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits))
+	if (OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits))
 		p_ramrod->mf_mode = MF_OVLAN;
 	else
 		p_ramrod->mf_mode = MF_NPAR;
 
 	p_ramrod->outer_tag_config.outer_tag.tci =
 		OSAL_CPU_TO_LE16(p_hwfn->hw_info.ovlan);
-	if (OSAL_GET_BIT(ECORE_MF_8021Q_TAGGING, &p_hwfn->p_dev->mf_bits)) {
-		p_ramrod->outer_tag_config.outer_tag.tpid = ETH_P_8021Q;
-	} else if (OSAL_GET_BIT(ECORE_MF_8021AD_TAGGING,
+	if (OSAL_TEST_BIT(ECORE_MF_8021Q_TAGGING, &p_hwfn->p_dev->mf_bits)) {
+		p_ramrod->outer_tag_config.outer_tag.tpid = ECORE_ETH_P_8021Q;
+	} else if (OSAL_TEST_BIT(ECORE_MF_8021AD_TAGGING,
 		 &p_hwfn->p_dev->mf_bits)) {
-		p_ramrod->outer_tag_config.outer_tag.tpid = ETH_P_8021AD;
+		p_ramrod->outer_tag_config.outer_tag.tpid = ECORE_ETH_P_8021AD;
 		p_ramrod->outer_tag_config.enable_stag_pri_change = 1;
 	}
 
@@ -357,7 +361,7 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	/* enable_stag_pri_change should be set if port is in BD mode or
 	 * UFP with Host Control mode.
 	 */
-	if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) {
+	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) {
 		if (p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS)
 			p_ramrod->outer_tag_config.enable_stag_pri_change = 1;
 		else
@@ -378,7 +382,7 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	ecore_tunn_set_pf_start_params(p_hwfn, p_tunn,
 				       &p_ramrod->tunnel_config);
 
-	if (OSAL_GET_BIT(ECORE_MF_INTER_PF_SWITCH,
+	if (OSAL_TEST_BIT(ECORE_MF_INTER_PF_SWITCH,
 			  &p_hwfn->p_dev->mf_bits))
 		p_ramrod->allow_npar_tx_switching = allow_npar_tx_switch;
 
@@ -386,9 +390,20 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 	case ECORE_PCI_ETH:
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
+	case ECORE_PCI_FCOE:
+		p_ramrod->personality = PERSONALITY_FCOE;
+		break;
+	case ECORE_PCI_ISCSI:
+		p_ramrod->personality = PERSONALITY_ISCSI;
+		break;
+	case ECORE_PCI_ETH_IWARP:
+	case ECORE_PCI_ETH_ROCE:
+	case ECORE_PCI_ETH_RDMA:
+		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
+		break;
 	default:
 		DP_NOTICE(p_hwfn, true, "Unknown personality %d\n",
-			 p_hwfn->hw_info.personality);
+			  p_hwfn->hw_info.personality);
 		p_ramrod->personality = PERSONALITY_ETH;
 	}
 
@@ -448,6 +463,12 @@ enum _ecore_status_t ecore_sp_pf_update_ufp(struct ecore_hwfn *p_hwfn)
 	struct ecore_sp_init_data init_data;
 	enum _ecore_status_t rc = ECORE_NOTIMPL;
 
+	if (p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_UNKNOWN) {
+		DP_INFO(p_hwfn, "Invalid priority type %d\n",
+			p_hwfn->ufp_info.pri_type);
+		return ECORE_INVAL;
+	}
+
 	/* Get SPQ entry */
 	OSAL_MEMSET(&init_data, 0, sizeof(init_data));
 	init_data.cid = ecore_spq_get_cid(p_hwfn);
@@ -476,12 +497,12 @@ enum _ecore_status_t ecore_sp_pf_update_ufp(struct ecore_hwfn *p_hwfn)
 /* FW uses 1/64k to express gd */
 #define FW_GD_RESOLUTION(gd)		(64 * 1024 / (gd))
 
-u16 ecore_sp_rl_mb_to_qm(u32 mb_val)
+static u16 ecore_sp_rl_mb_to_qm(u32 mb_val)
 {
 	return (u16)OSAL_MIN_T(u32, (u16)(~0U), QM_RL_RESOLUTION(mb_val));
 }
 
-u16 ecore_sp_rl_gd_denom(u32 gd)
+static u16 ecore_sp_rl_gd_denom(u32 gd)
 {
 	return gd ? (u16)OSAL_MIN_T(u32, (u16)(~0U), FW_GD_RESOLUTION(gd)) : 0;
 }
@@ -516,32 +537,20 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
 	rl_update->rl_id_first = params->rl_id_first;
 	rl_update->rl_id_last = params->rl_id_last;
 	rl_update->rl_dc_qcn_flg = params->rl_dc_qcn_flg;
-	rl_update->dcqcn_reset_alpha_on_idle =
-		params->dcqcn_reset_alpha_on_idle;
-	rl_update->rl_bc_stage_th = params->rl_bc_stage_th;
-	rl_update->rl_timer_stage_th = params->rl_timer_stage_th;
 	rl_update->rl_bc_rate = OSAL_CPU_TO_LE32(params->rl_bc_rate);
-	rl_update->rl_max_rate =
-		OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_max_rate));
-	rl_update->rl_r_ai =
-		OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_ai));
-	rl_update->rl_r_hai =
-		OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_hai));
-	rl_update->dcqcn_g =
-		OSAL_CPU_TO_LE16(ecore_sp_rl_gd_denom(params->dcqcn_gd));
+	rl_update->rl_max_rate = OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_max_rate));
+	rl_update->rl_r_ai = OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_ai));
+	rl_update->rl_r_hai = OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_r_hai));
+	rl_update->dcqcn_g = OSAL_CPU_TO_LE16(ecore_sp_rl_gd_denom(params->dcqcn_gd));
 	rl_update->dcqcn_k_us = OSAL_CPU_TO_LE32(params->dcqcn_k_us);
-	rl_update->dcqcn_timeuot_us =
-		OSAL_CPU_TO_LE32(params->dcqcn_timeuot_us);
+	rl_update->dcqcn_timeuot_us = OSAL_CPU_TO_LE32(params->dcqcn_timeuot_us);
 	rl_update->qcn_timeuot_us = OSAL_CPU_TO_LE32(params->qcn_timeuot_us);
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "rl_params: qcn_update_param_flg %x, dcqcn_update_param_flg %x, rl_init_flg %x, rl_start_flg %x, rl_stop_flg %x, rl_id_first %x, rl_id_last %x, rl_dc_qcn_flg %x,dcqcn_reset_alpha_on_idle %x, rl_bc_stage_th %x, rl_timer_stage_th %x, rl_bc_rate %x, rl_max_rate %x, rl_r_ai %x, rl_r_hai %x, dcqcn_g %x, dcqcn_k_us %x, dcqcn_timeuot_us %x, qcn_timeuot_us %x\n",
-		   rl_update->qcn_update_param_flg,
-		   rl_update->dcqcn_update_param_flg,
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "rl_params: qcn_update_param_flg %x, dcqcn_update_param_flg %x, rl_init_flg %x, rl_start_flg %x, rl_stop_flg %x, rl_id_first %x, rl_id_last %x, rl_dc_qcn_flg %x, rl_bc_rate %x, rl_max_rate %x, rl_r_ai %x, rl_r_hai %x, dcqcn_g %x, dcqcn_k_us %x, dcqcn_timeuot_us %x, qcn_timeuot_us %x\n",
+		   rl_update->qcn_update_param_flg, rl_update->dcqcn_update_param_flg,
 		   rl_update->rl_init_flg, rl_update->rl_start_flg,
 		   rl_update->rl_stop_flg, rl_update->rl_id_first,
 		   rl_update->rl_id_last, rl_update->rl_dc_qcn_flg,
-		   rl_update->dcqcn_reset_alpha_on_idle,
-		   rl_update->rl_bc_stage_th, rl_update->rl_timer_stage_th,
 		   rl_update->rl_bc_rate, rl_update->rl_max_rate,
 		   rl_update->rl_r_ai, rl_update->rl_r_hai,
 		   rl_update->dcqcn_g, rl_update->dcqcn_k_us,
@@ -638,10 +647,6 @@ enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn)
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	if (OSAL_GET_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits))
-		p_ent->ramrod.pf_update.mf_vlan |=
-			OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13));
-
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
@@ -664,8 +669,11 @@ enum _ecore_status_t ecore_sp_pf_update_stag(struct ecore_hwfn *p_hwfn)
 		return rc;
 
 	p_ent->ramrod.pf_update.update_mf_vlan_flag = true;
-	p_ent->ramrod.pf_update.mf_vlan =
-				OSAL_CPU_TO_LE16(p_hwfn->hw_info.ovlan);
+	p_ent->ramrod.pf_update.mf_vlan = OSAL_CPU_TO_LE16(p_hwfn->hw_info.ovlan);
+
+	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits))
+		p_ent->ramrod.pf_update.mf_vlan |=
+			OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13));
 
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
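Reviewer note: the last two hunks move the UFP traffic-class encoding out of the heartbeat ramrod and into ecore_sp_pf_update_stag(), where mf_vlan is actually refreshed. The "<< 13" places the TC in the PCP field of the 802.1Q TCI (bits 15:13, above the DEI bit and the 12-bit VID). For illustration only:

	/* Sketch: outer-VLAN TCI layout used by the PF-update ramrod.
	 * PCP (priority/TC) = bits 15:13, DEI = bit 12, VID = bits 11:0.
	 */
	static inline u16 example_mf_vlan_tci(u16 ovlan_vid, u8 tc)
	{
		return (u16)(((u16)(tc & 0x7) << 13) | (ovlan_vid & 0xfff));
	}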
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 524fe57a1..4850a60f1 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_SP_COMMANDS_H__
 #define __ECORE_SP_COMMANDS_H__
 
@@ -28,6 +28,17 @@ struct ecore_sp_init_data {
 
 };
 
+/**
+ * @brief Returns an SPQ entry to the pool / frees the entry if allocated
+ *        Should be called in error flows after initializing the SPQ entry
+ *        and before posting it.
+ *
+ * @param p_hwfn
+ * @param p_ent
+ */
+void ecore_sp_destroy_request(struct ecore_hwfn *p_hwfn,
+			      struct ecore_spq_entry *p_ent);
+
 /**
  * @brief Acquire and initialize an SPQ entry for a given ramrod.
  *
@@ -119,9 +130,6 @@ struct ecore_rl_update_params {
 	u8 rl_stop_flg;
 	u8 rl_id_first;
 	u8 rl_id_last;
-	u8 dcqcn_reset_alpha_on_idle;
-	u8 rl_bc_stage_th;
-	u8 rl_timer_stage_th;
 	u8 rl_dc_qcn_flg; /* If set, RL will be used for DCQCN */
 	u32 rl_bc_rate; /* Byte Counter Limit */
 	u32 rl_max_rate; /* Maximum rate in Mbps resolution */
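Reviewer note: ecore_sp_destroy_request() closes a leak in the init_request error paths -- an entry taken from the free pool must be returned, while one allocated for the unlimited-pending queue must be OSAL_FREE()d, and the new helper picks the right action. A sketch of the intended caller pattern between init and post (the fill step is a placeholder):

	rc = ecore_sp_init_request(p_hwfn, &p_ent, cmd, protocol, &init_data);
	if (rc != ECORE_SUCCESS)
		return rc;

	rc = fill_ramrod_data(p_ent);		/* illustrative helper */
	if (rc != ECORE_SUCCESS) {
		/* Initialized but never posted: release the entry. */
		ecore_sp_destroy_request(p_hwfn, p_ent);
		return rc;
	}

	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);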
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 93e984bf8..47c8a4e90 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "reg_addr.h"
 #include "ecore_gtt_reg_addr.h"
@@ -20,6 +20,12 @@
 #include "ecore_hw.h"
 #include "ecore_sriov.h"
 
+#ifdef _NTDDK_
+#pragma warning(push)
+#pragma warning(disable : 28167)
+#pragma warning(disable : 28123)
+#endif
+
 /***************************************************************************
  * Structures & Definitions
  ***************************************************************************/
@@ -88,6 +94,7 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
 					    u8 *p_fw_ret, bool skip_quick_poll)
 {
 	struct ecore_spq_comp_done *comp_done;
+	u8 str[ECORE_HW_ERR_MAX_STR_SIZE];
 	struct ecore_ptt *p_ptt;
 	enum _ecore_status_t rc;
 
@@ -97,19 +104,20 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
 	if (!skip_quick_poll) {
 		rc = __ecore_spq_block(p_hwfn, p_ent, p_fw_ret, false);
 		if (rc == ECORE_SUCCESS)
-			return ECORE_SUCCESS;
+			return rc;
 	}
 
 	/* Move to polling with a sleeping period between iterations */
 	rc = __ecore_spq_block(p_hwfn, p_ent, p_fw_ret, true);
 	if (rc == ECORE_SUCCESS)
-		return ECORE_SUCCESS;
+		return rc;
+
+	DP_INFO(p_hwfn, "Ramrod is stuck, requesting MCP drain\n");
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
 	if (!p_ptt)
-		return ECORE_AGAIN;
+		goto err;
 
-	DP_INFO(p_hwfn, "Ramrod is stuck, requesting MCP drain\n");
 	rc = ecore_mcp_drain(p_hwfn, p_ptt);
 	ecore_ptt_release(p_hwfn, p_ptt);
 	if (rc != ECORE_SUCCESS) {
@@ -120,24 +128,34 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
 	/* Retry after drain */
 	rc = __ecore_spq_block(p_hwfn, p_ent, p_fw_ret, true);
 	if (rc == ECORE_SUCCESS)
-		return ECORE_SUCCESS;
+		return rc;
 
 	comp_done = (struct ecore_spq_comp_done *)p_ent->comp_cb.cookie;
 	if (comp_done->done == 1) {
 		if (p_fw_ret)
 			*p_fw_ret = comp_done->fw_return_code;
+
 		return ECORE_SUCCESS;
 	}
+
+	rc = ECORE_BUSY;
 err:
-	DP_NOTICE(p_hwfn, true,
-		  "Ramrod is stuck [CID %08x cmd %02x proto %02x echo %04x]\n",
-		  OSAL_LE32_TO_CPU(p_ent->elem.hdr.cid),
-		  p_ent->elem.hdr.cmd_id, p_ent->elem.hdr.protocol_id,
-		  OSAL_LE16_TO_CPU(p_ent->elem.hdr.echo));
+	OSAL_SNPRINTF((char *)str, ECORE_HW_ERR_MAX_STR_SIZE,
+		      "Ramrod is stuck [CID %08x cmd %02x protocol %02x echo %04x]\n",
+		      OSAL_LE32_TO_CPU(p_ent->elem.hdr.cid),
+		      p_ent->elem.hdr.cmd_id, p_ent->elem.hdr.protocol_id,
+		      OSAL_LE16_TO_CPU(p_ent->elem.hdr.echo));
+	DP_NOTICE(p_hwfn, true, "%s", str);
+
+	p_ptt = ecore_ptt_acquire(p_hwfn);
+	if (!p_ptt)
+		return rc;
 
-	ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_RAMROD_FAIL);
+	ecore_hw_err_notify(p_hwfn, p_ptt, ECORE_HW_ERR_RAMROD_FAIL, str,
+			    OSAL_STRLEN((char *)str) + 1);
+	ecore_ptt_release(p_hwfn, p_ptt);
 
-	return ECORE_BUSY;
+	return rc;
 }
 
 void ecore_set_spq_block_timeout(struct ecore_hwfn *p_hwfn,
@@ -151,8 +169,8 @@ void ecore_set_spq_block_timeout(struct ecore_hwfn *p_hwfn,
 /***************************************************************************
  * SPQ entries inner API
  ***************************************************************************/
-static enum _ecore_status_t
-ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
+static enum _ecore_status_t ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn,
+						 struct ecore_spq_entry *p_ent)
 {
 	p_ent->flags = 0;
 
@@ -170,8 +188,7 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 	}
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
-		   "Ramrod header: [CID 0x%08x CMD 0x%02x protocol 0x%02x]"
-		   " Data pointer: [%08x:%08x] Completion Mode: %s\n",
+		   "Ramrod header: [CID 0x%08x CMD 0x%02x protocol 0x%02x] Data pointer: [%08x:%08x] Completion Mode: %s\n",
 		   p_ent->elem.hdr.cid, p_ent->elem.hdr.cmd_id,
 		   p_ent->elem.hdr.protocol_id,
 		   p_ent->elem.data_ptr.hi, p_ent->elem.data_ptr.lo,
@@ -196,7 +213,7 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 #define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT	6
 
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
-				    struct ecore_spq *p_spq)
+				    struct ecore_spq  *p_spq)
 {
 	__le32 *p_spq_base_lo, *p_spq_base_hi;
 	struct regpair *p_consolid_base_addr;
@@ -265,9 +282,9 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 		       p_hwfn->p_consq->chain.p_phys_addr);
 }
 
-static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
-					      struct ecore_spq *p_spq,
-					      struct ecore_spq_entry *p_ent)
+static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn		*p_hwfn,
+					      struct ecore_spq		*p_spq,
+					      struct ecore_spq_entry	*p_ent)
 {
 	struct ecore_chain *p_chain = &p_hwfn->p_spq->chain;
 	struct core_db_data *p_db_data = &p_spq->db_data;
@@ -281,7 +298,7 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
-	*elem = p_ent->elem;	/* Struct assignment */
+	*elem = p_ent->elem; /* Struct assignment */
 
 	p_db_data->spq_prod =
 		OSAL_CPU_TO_LE16(ecore_chain_get_prod_idx(p_chain));
@@ -291,12 +308,11 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
 
 	DOORBELL(p_hwfn, p_spq->db_addr_offset, *(u32 *)p_db_data);
 
-	/* Make sure doorbell is rang */
+	/* Make sure doorbell was rung */
 	OSAL_WMB(p_hwfn->p_dev);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
-		   "Doorbelled [0x%08x, CID 0x%08x] with Flags: %02x"
-		   " agg_params: %02x, prod: %04x\n",
+		   "Doorbelled [0x%08x, CID 0x%08x] with Flags: %02x agg_params: %02x, prod: %04x\n",
 		   p_spq->db_addr_offset, p_spq->cid, p_db_data->params,
 		   p_db_data->agg_flags, ecore_chain_get_prod_idx(p_chain));
 
@@ -307,7 +323,7 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
  * Asynchronous events
  ***************************************************************************/
 
-static enum _ecore_status_t
+enum _ecore_status_t
 ecore_async_event_completion(struct ecore_hwfn *p_hwfn,
 			     struct event_ring_entry *p_eqe)
 {
@@ -319,6 +335,15 @@ ecore_async_event_completion(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
+	/* This function runs on the PF only. VF-originated completions are
+	 * forwarded to the IOV handler; the exceptions the PF handles itself
+	 * are PROTOCOLID_COMMON and ROCE_ASYNC_EVENT_DESTROY_QP_DONE events.
+	 */
+	/* @DPDK */
+	if (IS_PF(p_hwfn->p_dev) && p_eqe->vf_id != ECORE_CXT_PF_CID &&
+	    p_eqe->protocol_id != PROTOCOLID_COMMON)
+		return ecore_iov_async_event_completion(p_hwfn, p_eqe);
+
 	cb = p_hwfn->p_spq->async_comp_cb[p_eqe->protocol_id];
 	if (!cb) {
 		DP_NOTICE(p_hwfn,
@@ -328,7 +353,7 @@ ecore_async_event_completion(struct ecore_hwfn *p_hwfn,
 	}
 
 	rc = cb(p_hwfn, p_eqe->opcode, p_eqe->echo,
-		&p_eqe->data, p_eqe->fw_return_code);
+		&p_eqe->data, p_eqe->fw_return_code, p_eqe->vf_id);
 	if (rc != ECORE_SUCCESS)
 		DP_NOTICE(p_hwfn, true,
 			  "Async completion callback failed, rc = %d [opcode %x, echo %x, fw_return_code %x]",
@@ -363,10 +388,11 @@ ecore_spq_unregister_async_cb(struct ecore_hwfn *p_hwfn,
 /***************************************************************************
  * EQ API
  ***************************************************************************/
-void ecore_eq_prod_update(struct ecore_hwfn *p_hwfn, u16 prod)
+void ecore_eq_prod_update(struct ecore_hwfn	*p_hwfn,
+			  u16			prod)
 {
 	u32 addr = GTT_BAR0_MAP_REG_USDM_RAM +
-	    USTORM_EQE_CONS_OFFSET(p_hwfn->rel_pf_id);
+		USTORM_EQE_CONS_OFFSET(p_hwfn->rel_pf_id);
 
 	REG_WR16(p_hwfn, addr, prod);
 
@@ -374,13 +400,24 @@ void ecore_eq_prod_update(struct ecore_hwfn *p_hwfn, u16 prod)
 	OSAL_MMIOWB(p_hwfn->p_dev);
 }
 
-enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
-					 void *cookie)
+void ecore_event_ring_entry_dp(struct ecore_hwfn *p_hwfn,
+			       struct event_ring_entry *p_eqe)
 {
-	struct ecore_eq *p_eq = cookie;
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
+		   "op %x prot %x res0 %x vfid %x echo %x fwret %x flags %x\n",
+		   p_eqe->opcode, p_eqe->protocol_id, p_eqe->reserved0,
+		   p_eqe->vf_id, OSAL_LE16_TO_CPU(p_eqe->echo),
+		   p_eqe->fw_return_code, p_eqe->flags);
+}
+
+enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn	*p_hwfn,
+					 void                   *cookie)
+{
+	struct ecore_eq    *p_eq    = cookie;
 	struct ecore_chain *p_chain = &p_eq->chain;
 	u16 fw_cons_idx             = 0;
-	enum _ecore_status_t rc = ECORE_SUCCESS;
+	enum _ecore_status_t rc;
 
 	if (!p_hwfn->p_spq) {
 		DP_ERR(p_hwfn, "Unexpected NULL p_spq\n");
@@ -409,17 +446,7 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
 			break;
 		}
 
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
-			   "op %x prot %x res0 %x echo %x fwret %x flags %x\n",
-			   p_eqe->opcode,	     /* Event Opcode */
-			   p_eqe->protocol_id,	/* Event Protocol ID */
-			   p_eqe->reserved0,	/* Reserved */
-			   /* Echo value from ramrod data on the host */
-			   OSAL_LE16_TO_CPU(p_eqe->echo),
-			   p_eqe->fw_return_code,    /* FW return code for SP
-						      * ramrods
-						      */
-			   p_eqe->flags);
+		ecore_event_ring_entry_dp(p_hwfn, p_eqe);
 
 		if (GET_FIELD(p_eqe->flags, EVENT_RING_ENTRY_ASYNC))
 			ecore_async_event_completion(p_hwfn, p_eqe);
@@ -434,6 +461,11 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
 
 	ecore_eq_prod_update(p_hwfn, ecore_chain_get_prod_idx(p_chain));
 
+	/* Attempt to post pending requests */
+	OSAL_SPIN_LOCK(&p_hwfn->p_spq->lock);
+	rc = ecore_spq_pend_post(p_hwfn);
+	OSAL_SPIN_UNLOCK(&p_hwfn->p_spq->lock);
+
 	return rc;
 }
 
@@ -493,8 +525,7 @@ void ecore_eq_free(struct ecore_hwfn *p_hwfn)
 * CQE API - manipulate EQ functionality
 ***************************************************************************/
 static enum _ecore_status_t ecore_cqe_completion(struct ecore_hwfn *p_hwfn,
-						 struct eth_slow_path_rx_cqe
-						 *cqe,
+						 struct eth_slow_path_rx_cqe *cqe,
 						 enum protocol_type protocol)
 {
 	if (IS_VF(p_hwfn->p_dev))
@@ -556,10 +587,10 @@ void ecore_spq_setup(struct ecore_hwfn *p_hwfn)
 	}
 
 	/* Statistics */
-	p_spq->normal_count = 0;
-	p_spq->comp_count = 0;
-	p_spq->comp_sent_count = 0;
-	p_spq->unlimited_pending_count = 0;
+	p_spq->normal_count		= 0;
+	p_spq->comp_count		= 0;
+	p_spq->comp_sent_count		= 0;
+	p_spq->unlimited_pending_count	= 0;
 
 	OSAL_MEM_ZERO(p_spq->p_comp_bitmap,
 		      SPQ_COMP_BMAP_SIZE * sizeof(unsigned long));
@@ -583,7 +614,8 @@ void ecore_spq_setup(struct ecore_hwfn *p_hwfn)
 	p_db_data->agg_flags = DQ_XCM_CORE_DQ_CF_CMD;
 
 	/* Register the SPQ doorbell with the doorbell recovery mechanism */
-	db_addr = (void *)((u8 *)p_hwfn->doorbells + p_spq->db_addr_offset);
+	db_addr = (void OSAL_IOMEM *)((u8 OSAL_IOMEM *)p_hwfn->doorbells +
+				      p_spq->db_addr_offset);
 	rc = ecore_db_recovery_add(p_hwfn->p_dev, db_addr, &p_spq->db_data,
 				   DB_REC_WIDTH_32B, DB_REC_KERNEL);
 	if (rc != ECORE_SUCCESS)
@@ -630,7 +662,7 @@ enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn)
 	p_spq->p_phys = p_phys;
 
 #ifdef CONFIG_ECORE_LOCK_ALLOC
-	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_spq->lock))
+	if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_spq->lock, "spq_lock"))
 		goto spq_allocate_fail;
 #endif
 
@@ -652,8 +684,12 @@ void ecore_spq_free(struct ecore_hwfn *p_hwfn)
 	if (!p_spq)
 		return;
 
+	if (IS_VF(p_hwfn->p_dev))
+		goto out;
+
 	/* Delete the SPQ doorbell from the doorbell recovery mechanism */
-	db_addr = (void *)((u8 *)p_hwfn->doorbells + p_spq->db_addr_offset);
+	db_addr = (void OSAL_IOMEM *)((u8 OSAL_IOMEM *)p_hwfn->doorbells +
+				      p_spq->db_addr_offset);
 	ecore_db_recovery_del(p_hwfn->p_dev, db_addr, &p_spq->db_data);
 
 	if (p_spq->p_virt) {
@@ -669,12 +705,13 @@ void ecore_spq_free(struct ecore_hwfn *p_hwfn)
 #ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_SPIN_LOCK_DEALLOC(&p_spq->lock);
 #endif
-
+out:
 	OSAL_FREE(p_hwfn->p_dev, p_spq);
+	p_hwfn->p_spq = OSAL_NULL;
 }
 
-enum _ecore_status_t
-ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent)
+enum _ecore_status_t ecore_spq_get_entry(struct ecore_hwfn *p_hwfn,
+					 struct ecore_spq_entry **pp_ent)
 {
 	struct ecore_spq *p_spq = p_hwfn->p_spq;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
@@ -692,7 +729,8 @@ ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent)
 		p_ent->queue = &p_spq->unlimited_pending;
 	} else {
 		p_ent = OSAL_LIST_FIRST_ENTRY(&p_spq->free_pool,
-					      struct ecore_spq_entry, list);
+					      struct ecore_spq_entry,
+					      list);
 		OSAL_LIST_REMOVE_ENTRY(&p_ent->list, &p_spq->free_pool);
 		p_ent->queue = &p_spq->pending;
 	}
@@ -706,7 +744,7 @@ ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent)
 
 /* Locked variant; Should be called while the SPQ lock is taken */
 static void __ecore_spq_return_entry(struct ecore_hwfn *p_hwfn,
-				     struct ecore_spq_entry *p_ent)
+			      struct ecore_spq_entry *p_ent)
 {
 	OSAL_LIST_PUSH_TAIL(&p_ent->list, &p_hwfn->p_spq->free_pool);
 }
@@ -733,11 +771,11 @@ void ecore_spq_return_entry(struct ecore_hwfn *p_hwfn,
  *
  * @return enum _ecore_status_t
  */
-static enum _ecore_status_t
-ecore_spq_add_entry(struct ecore_hwfn *p_hwfn,
-		    struct ecore_spq_entry *p_ent, enum spq_priority priority)
+static enum _ecore_status_t ecore_spq_add_entry(struct ecore_hwfn *p_hwfn,
+						struct ecore_spq_entry *p_ent,
+						enum spq_priority priority)
 {
-	struct ecore_spq *p_spq = p_hwfn->p_spq;
+	struct ecore_spq	*p_spq	= p_hwfn->p_spq;
 
 	if (p_ent->queue == &p_spq->unlimited_pending) {
 		if (OSAL_LIST_IS_EMPTY(&p_spq->free_pool)) {
@@ -766,6 +804,8 @@ ecore_spq_add_entry(struct ecore_hwfn *p_hwfn,
 			/* EBLOCK responsible to free the allocated p_ent */
 			if (p_ent->comp_mode != ECORE_SPQ_MODE_EBLOCK)
 				OSAL_FREE(p_hwfn->p_dev, p_ent);
+			else
+				p_ent->post_ent = p_en2;
 
 			p_ent = p_en2;
 		}
@@ -804,11 +844,11 @@ u32 ecore_spq_get_cid(struct ecore_hwfn *p_hwfn)
  ***************************************************************************/
 
 static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn,
-						osal_list_t *head,
-						u32 keep_reserve)
+						osal_list_t	  *head,
+						u32		  keep_reserve)
 {
-	struct ecore_spq *p_spq = p_hwfn->p_spq;
-	enum _ecore_status_t rc;
+	struct ecore_spq	*p_spq = p_hwfn->p_spq;
+	enum _ecore_status_t	rc;
 
 	/* TODO - implementation might be wasteful; will always keep room
 	 * for an additional high priority ramrod (even if one is already
@@ -816,21 +856,20 @@ static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn,
 	 */
 	while (ecore_chain_get_elem_left(&p_spq->chain) > keep_reserve &&
 	       !OSAL_LIST_IS_EMPTY(head)) {
-		struct ecore_spq_entry *p_ent =
+		struct ecore_spq_entry  *p_ent =
 		    OSAL_LIST_FIRST_ENTRY(head, struct ecore_spq_entry, list);
 		if (p_ent != OSAL_NULL) {
 #if defined(_NTDDK_)
 #pragma warning(suppress : 6011 28182)
 #endif
 			OSAL_LIST_REMOVE_ENTRY(&p_ent->list, head);
-			OSAL_LIST_PUSH_TAIL(&p_ent->list,
-					    &p_spq->completion_pending);
+			OSAL_LIST_PUSH_TAIL(&p_ent->list, &p_spq->completion_pending);
 			p_spq->comp_sent_count++;
 
 			rc = ecore_spq_hw_post(p_hwfn, p_spq, p_ent);
 			if (rc) {
 				OSAL_LIST_REMOVE_ENTRY(&p_ent->list,
-						    &p_spq->completion_pending);
+						       &p_spq->completion_pending);
 				__ecore_spq_return_entry(p_hwfn, p_ent);
 				return rc;
 			}
@@ -840,7 +879,7 @@ static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn)
+enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_spq *p_spq = p_hwfn->p_spq;
 	struct ecore_spq_entry *p_ent = OSAL_NULL;
@@ -850,7 +889,8 @@ static enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn)
 			break;
 
 		p_ent = OSAL_LIST_FIRST_ENTRY(&p_spq->unlimited_pending,
-					      struct ecore_spq_entry, list);
+					      struct ecore_spq_entry,
+					      list);
 		if (!p_ent)
 			return ECORE_INVAL;
 
@@ -862,17 +902,43 @@ static enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn)
 		ecore_spq_add_entry(p_hwfn, p_ent, p_ent->priority);
 	}
 
-	return ecore_spq_post_list(p_hwfn,
-				 &p_spq->pending, SPQ_HIGH_PRI_RESERVE_DEFAULT);
+	return ecore_spq_post_list(p_hwfn, &p_spq->pending,
+				   SPQ_HIGH_PRI_RESERVE_DEFAULT);
+}
+
+static void ecore_spq_recov_set_ret_code(struct ecore_spq_entry *p_ent,
+					 u8 *fw_return_code)
+{
+	if (fw_return_code == OSAL_NULL)
+		return;
+
+	/* Let RDMA-related ramrods complete successfully during recovery */
+	if (p_ent->elem.hdr.protocol_id == PROTOCOLID_ROCE ||
+	    p_ent->elem.hdr.protocol_id == PROTOCOLID_IWARP)
+		*fw_return_code = RDMA_RETURN_OK;
 }
 
-enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
-				    struct ecore_spq_entry *p_ent,
-				    u8 *fw_return_code)
+/* Avoid overriding of SPQ entries when getting out-of-order completions, by
+ * marking the completions in a bitmap and increasing the chain consumer only
+ * for the first successive completed entries.
+ */
+static void ecore_spq_comp_bmap_update(struct ecore_hwfn *p_hwfn, __le16 echo)
+{
+	struct ecore_spq *p_spq = p_hwfn->p_spq;
+
+	SPQ_COMP_BMAP_SET_BIT(p_spq, echo);
+	while (SPQ_COMP_BMAP_TEST_BIT(p_spq, p_spq->comp_bitmap_idx)) {
+		SPQ_COMP_BMAP_CLEAR_BIT(p_spq, p_spq->comp_bitmap_idx);
+		p_spq->comp_bitmap_idx++;
+		ecore_chain_return_produced(&p_spq->chain);
+	}
+}
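
The bitmap walk above is the whole out-of-order story: completions are marked by echo, and the chain consumer only advances over the contiguous completed prefix starting at comp_bitmap_idx. A self-contained toy version, with a plain array in place of the OSAL bitmap and a counter standing in for ecore_chain_return_produced():

/* Standalone illustration (not driver code) of the out-of-order
 * completion bitmap: completions arrive in any order, but the
 * consumer only advances over a contiguous run of completed entries.
 */
#include <stdio.h>

#define RING_SIZE 8

static unsigned char bmap[RING_SIZE];
static unsigned int comp_bitmap_idx;	/* oldest not-yet-returned echo */
static unsigned int chain_cons;		/* emulated chain consumer */

static void comp_bmap_update(unsigned int echo)
{
	bmap[echo % RING_SIZE] = 1;
	while (bmap[comp_bitmap_idx % RING_SIZE]) {
		bmap[comp_bitmap_idx % RING_SIZE] = 0;
		comp_bitmap_idx++;
		chain_cons++;	/* ecore_chain_return_produced() */
	}
}

int main(void)
{
	comp_bmap_update(1);	/* out of order: echo 0 still pending */
	printf("cons=%u\n", chain_cons);	/* 0 */
	comp_bmap_update(0);	/* fills the gap, releases both */
	printf("cons=%u\n", chain_cons);	/* 2 */
	return 0;
}
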
+
+enum _ecore_status_t ecore_spq_post(struct ecore_hwfn		*p_hwfn,
+				    struct ecore_spq_entry	*p_ent,
+				    u8				*fw_return_code)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	struct ecore_spq *p_spq = p_hwfn ? p_hwfn->p_spq : OSAL_NULL;
 	bool b_ret_ent = true;
+	bool eblock;
 
 	if (!p_hwfn)
 		return ECORE_INVAL;
@@ -884,12 +950,11 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
 
 	if (p_hwfn->p_dev->recov_in_prog) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
-			   "Recovery is in progress -> skip spq post"
-			   " [cmd %02x protocol %02x]\n",
+			   "Recovery is in progress. Skip spq post [cmd %02x protocol %02x]\n",
 			   p_ent->elem.hdr.cmd_id, p_ent->elem.hdr.protocol_id);
-		/* Return success to let the flows to be completed successfully
-		 * w/o any error handling.
-		 */
+
+		/* Let the flow complete w/o any error handling */
+		ecore_spq_recov_set_ret_code(p_ent, fw_return_code);
 		return ECORE_SUCCESS;
 	}
 
@@ -902,6 +967,11 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
 	if (rc)
 		goto spq_post_fail;
 
+	/* Check if entry is in block mode before ecore_spq_add_entry,
+	 * which might free p_ent.
+	 */
+	eblock = (p_ent->comp_mode == ECORE_SPQ_MODE_EBLOCK);
+
 	/* Add the request to the pending queue */
 	rc = ecore_spq_add_entry(p_hwfn, p_ent, p_ent->priority);
 	if (rc)
@@ -919,7 +989,7 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
 
 	OSAL_SPIN_UNLOCK(&p_spq->lock);
 
-	if (p_ent->comp_mode == ECORE_SPQ_MODE_EBLOCK) {
+	if (eblock) {
 		/* For entries in ECORE BLOCK mode, the completion code cannot
 		 * perform the necessary cleanup - if it did, we couldn't
 		 * access p_ent here to see whether it's successful or not.
@@ -929,15 +999,11 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
 				     p_ent->queue == &p_spq->unlimited_pending);
 
 		if (p_ent->queue == &p_spq->unlimited_pending) {
-			/* This is an allocated p_ent which does not need to
-			 * return to pool.
-			 */
+			struct ecore_spq_entry *p_post_ent = p_ent->post_ent;
 			OSAL_FREE(p_hwfn->p_dev, p_ent);
 
-			/* TBD: handle error flow and remove p_ent from
-			 * completion pending
-			 */
-			return rc;
+			/* Return the entry which was actually posted */
+			p_ent = p_post_ent;
 		}
 
 		if (rc)
@@ -951,7 +1017,7 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
 spq_post_fail2:
 	OSAL_SPIN_LOCK(&p_spq->lock);
 	OSAL_LIST_REMOVE_ENTRY(&p_ent->list, &p_spq->completion_pending);
-	ecore_chain_return_produced(&p_spq->chain);
+	ecore_spq_comp_bmap_update(p_hwfn, p_ent->elem.hdr.echo);
 
 spq_post_fail:
 	/* return to the free pool */
@@ -965,13 +1031,12 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 					  __le16 echo,
 					  u8 fw_return_code,
-					  union event_ring_data *p_data)
+					  union event_ring_data	*p_data)
 {
-	struct ecore_spq *p_spq;
-	struct ecore_spq_entry *p_ent = OSAL_NULL;
-	struct ecore_spq_entry *tmp;
-	struct ecore_spq_entry *found = OSAL_NULL;
-	enum _ecore_status_t rc;
+	struct ecore_spq	*p_spq;
+	struct ecore_spq_entry	*p_ent = OSAL_NULL;
+	struct ecore_spq_entry	*tmp;
+	struct ecore_spq_entry	*found = OSAL_NULL;
 
 	p_spq = p_hwfn->p_spq;
 	if (!p_spq) {
@@ -983,25 +1048,26 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 	OSAL_LIST_FOR_EACH_ENTRY_SAFE(p_ent,
 				      tmp,
 				      &p_spq->completion_pending,
-				      list, struct ecore_spq_entry) {
+				      list,
+				      struct ecore_spq_entry) {
 		if (p_ent->elem.hdr.echo == echo) {
+#ifndef ECORE_UPSTREAM
+			/* Ignoring completions is only supported for EBLOCK */
+			if (p_spq->drop_next_completion &&
+			    p_ent->comp_mode == ECORE_SPQ_MODE_EBLOCK) {
+				DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
+					   "Got completion for echo %04x - ignoring\n",
+					   echo);
+				p_spq->drop_next_completion = false;
+				OSAL_SPIN_UNLOCK(&p_spq->lock);
+				return ECORE_SUCCESS;
+			}
+#endif
+
 			OSAL_LIST_REMOVE_ENTRY(&p_ent->list,
 					       &p_spq->completion_pending);
 
-			/* Avoid overriding of SPQ entries when getting
-			 * out-of-order completions, by marking the completions
-			 * in a bitmap and increasing the chain consumer only
-			 * for the first successive completed entries.
-			 */
-			SPQ_COMP_BMAP_SET_BIT(p_spq, echo);
-			while (SPQ_COMP_BMAP_GET_BIT(p_spq,
-						      p_spq->comp_bitmap_idx)) {
-				SPQ_COMP_BMAP_CLEAR_BIT(p_spq,
-							p_spq->comp_bitmap_idx);
-				p_spq->comp_bitmap_idx++;
-				ecore_chain_return_produced(&p_spq->chain);
-			}
-
+			ecore_spq_comp_bmap_update(p_hwfn, echo);
 			p_spq->comp_count++;
 			found = p_ent;
 			break;
@@ -1011,8 +1077,7 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 		 * on scenarios which have multiple per-PF sent ramrods.
 		 */
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
-			   "Got completion for echo %04x - doesn't match"
-			   " echo %04x in completion pending list\n",
+			   "Got completion for echo %04x - doesn't match echo %04x in completion pending list\n",
 			   OSAL_LE16_TO_CPU(echo),
 			   OSAL_LE16_TO_CPU(p_ent->elem.hdr.echo));
 	}
@@ -1024,8 +1089,7 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 
 	if (!found) {
 		DP_NOTICE(p_hwfn, true,
-			  "Failed to find an entry this"
-			  " EQE [echo %04x] completes\n",
+			  "Failed to find an entry this EQE [echo %04x] completes\n",
 			  OSAL_LE16_TO_CPU(echo));
 		return ECORE_EXISTS;
 	}
@@ -1038,32 +1102,28 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
 		found->comp_cb.function(p_hwfn, found->comp_cb.cookie, p_data,
 					fw_return_code);
 	else
-		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
-			   "Got a completion without a callback function\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "Got a completion without a callback function\n");
 
-	if ((found->comp_mode != ECORE_SPQ_MODE_EBLOCK) ||
-	    (found->queue == &p_spq->unlimited_pending))
-		/* EBLOCK  is responsible for returning its own entry into the
-		 * free list, unless it originally added the entry into the
-		 * unlimited pending list.
-		 */
+	/* EBLOCK is responsible for returning its own entry */
+	if (found->comp_mode != ECORE_SPQ_MODE_EBLOCK)
 		ecore_spq_return_entry(p_hwfn, found);
 
-	/* Attempt to post pending requests */
-	OSAL_SPIN_LOCK(&p_spq->lock);
-	rc = ecore_spq_pend_post(p_hwfn);
-	OSAL_SPIN_UNLOCK(&p_spq->lock);
+	return ECORE_SUCCESS;
+}
 
-	return rc;
+#ifndef ECORE_UPSTREAM
+void ecore_spq_drop_next_completion(struct ecore_hwfn *p_hwfn)
+{
+	p_hwfn->p_spq->drop_next_completion = true;
 }
+#endif
 
 enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_consq *p_consq;
 
 	/* Allocate ConsQ struct */
-	p_consq =
-	    OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_consq));
+	p_consq = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_consq));
 	if (!p_consq) {
 		DP_NOTICE(p_hwfn, false,
 			  "Failed to allocate `struct ecore_consq'\n");
@@ -1101,5 +1161,11 @@ void ecore_consq_free(struct ecore_hwfn *p_hwfn)
 		return;
 
 	ecore_chain_free(p_hwfn->p_dev, &p_hwfn->p_consq->chain);
+
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_consq);
+	p_hwfn->p_consq = OSAL_NULL;
 }
+
+#ifdef _NTDDK_
+#pragma warning(pop)
+#endif
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 80dbdeaa0..72a0e1512 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_SPQ_H__
 #define __ECORE_SPQ_H__
 
@@ -72,8 +72,13 @@ struct ecore_spq_entry {
 	enum spq_mode			comp_mode;
 	struct ecore_spq_comp_cb	comp_cb;
 	struct ecore_spq_comp_done	comp_done; /* SPQ_MODE_EBLOCK */
+
+	/* Posted entry for unlimited list entry in EBLOCK mode */
+	struct ecore_spq_entry		*post_ent;
 };
 
+#define ECORE_EQ_MAX_ELEMENTS	0xFF00
+
 struct ecore_eq {
 	struct ecore_chain	chain;
 	u8			eq_sb_index;	/* index within the SB */
@@ -89,7 +94,8 @@ typedef enum _ecore_status_t
 			   u8 opcode,
 			   u16 echo,
 			   union event_ring_data *data,
-			   u8 fw_return_code);
+			   u8 fw_return_code,
+			   u8 vf_id);
 
 enum _ecore_status_t
 ecore_spq_register_async_cb(struct ecore_hwfn *p_hwfn,
@@ -120,18 +126,19 @@ struct ecore_spq {
 	/* Bitmap for handling out-of-order completions */
 #define SPQ_RING_SIZE		\
 	(CORE_SPQE_PAGE_SIZE_BYTES / sizeof(struct slow_path_element))
-/* BITS_PER_LONG */
-#define SPQ_COMP_BMAP_SIZE	(SPQ_RING_SIZE / (sizeof(u32) * 8))
-	u32			p_comp_bitmap[SPQ_COMP_BMAP_SIZE];
+#define SPQ_COMP_BMAP_SIZE	(SPQ_RING_SIZE / (sizeof(u32) * 8 /* BITS_PER_LONG */)) /* @DPDK */
+	u32			p_comp_bitmap[SPQ_COMP_BMAP_SIZE]; /* @DPDK */
 	u8			comp_bitmap_idx;
 #define SPQ_COMP_BMAP_SET_BIT(p_spq, idx)				\
-	(OSAL_SET_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
-
+	do {								\
+		OSAL_NON_ATOMIC_SET_BIT(((idx) % SPQ_RING_SIZE),	\
+					(p_spq)->p_comp_bitmap);	\
+	} while (0)
+/* @DPDK */
 #define SPQ_COMP_BMAP_CLEAR_BIT(p_spq, idx)				\
 	(OSAL_CLEAR_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
-
-#define SPQ_COMP_BMAP_GET_BIT(p_spq, idx)	\
-	(OSAL_GET_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
+#define SPQ_COMP_BMAP_TEST_BIT(p_spq, idx)	\
+	(OSAL_TEST_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
 
 	/* Statistics */
 	u32				unlimited_pending_count;
@@ -145,6 +152,10 @@ struct ecore_spq {
 	u32				db_addr_offset;
 	struct core_db_data		db_data;
 	ecore_spq_async_comp_cb		async_comp_cb[MAX_PROTOCOL_TYPE];
+
+#ifndef ECORE_UPSTREAM
+	bool				drop_next_completion;
+#endif
 };
 
 struct ecore_port;
@@ -152,7 +163,7 @@ struct ecore_hwfn;
 
 /**
  * @brief ecore_set_spq_block_timeout - calculates the maximum sleep
- * iterations used in __ecore_spq_block();
+ *	  iterations used in __ecore_spq_block();
  *
  * @param p_hwfn
  * @param spq_timeout_ms
@@ -278,6 +289,15 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn	*p_hwfn,
 					  u8                    fw_return_code,
 					  union event_ring_data	*p_data);
 
+#ifndef ECORE_UPSTREAM
+/**
+ * @brief ecore_spq_drop_next_completion - Next SPQ completion will be ignored
+ *
+ * @param p_hwfn
+ */
+void ecore_spq_drop_next_completion(struct ecore_hwfn *p_hwfn);
+#endif
+
 /**
  * @brief ecore_spq_get_cid - Given p_hwfn, return cid for the hwfn's SPQ
  *
@@ -309,5 +329,24 @@ void ecore_consq_setup(struct ecore_hwfn *p_hwfn);
  * @param p_hwfn
  */
 void ecore_consq_free(struct ecore_hwfn *p_hwfn);
+enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn);
 
+/**
+ * @brief ecore_event_ring_entry_dp - print ring entry info
+ *
+ * @param p_hwfn
+ * @param p_eqe
+ */
+void ecore_event_ring_entry_dp(struct ecore_hwfn *p_hwfn,
+			       struct event_ring_entry *p_eqe);
+
+/**
+ * @brief ecore_async_event_completion - handle EQ completions
+ *
+ * @param p_hwfn
+ * @param p_eqe
+ */
+enum _ecore_status_t
+ecore_async_event_completion(struct ecore_hwfn *p_hwfn,
+			     struct event_ring_entry *p_eqe);
 #endif /* __ECORE_SPQ_H__ */
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index b49465b88..0d768c525 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "reg_addr.h"
@@ -29,10 +29,11 @@ static enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
 						  u8 opcode,
 						  __le16 echo,
 						  union event_ring_data *data,
-						  u8 fw_return_code);
+						  u8 OSAL_UNUSED fw_return_code,
+						  u8 OSAL_UNUSED vf_id);
 
-const char *qede_ecore_channel_tlvs_string[] = {
-	"CHANNEL_TLV_NONE",	/* ends tlv sequence */
+const char *ecore_channel_tlvs_string[] = {
+	"CHANNEL_TLV_NONE", /* ends tlv sequence */
 	"CHANNEL_TLV_ACQUIRE",
 	"CHANNEL_TLV_VPORT_START",
 	"CHANNEL_TLV_VPORT_UPDATE",
@@ -78,9 +79,6 @@ const char *qede_ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_RDMA_MODIFY_QP",
 	"CHANNEL_TLV_RDMA_QUERY_QP",
 	"CHANNEL_TLV_RDMA_DESTROY_QP",
-	"CHANNEL_TLV_RDMA_CREATE_SRQ",
-	"CHANNEL_TLV_RDMA_MODIFY_SRQ",
-	"CHANNEL_TLV_RDMA_DESTROY_SRQ",
 	"CHANNEL_TLV_RDMA_QUERY_PORT",
 	"CHANNEL_TLV_RDMA_QUERY_DEVICE",
 	"CHANNEL_TLV_RDMA_IWARP_CONNECT",
@@ -93,10 +91,52 @@ const char *qede_ecore_channel_tlvs_string[] = {
 	"CHANNEL_TLV_ESTABLISH_LL2_CONN",
 	"CHANNEL_TLV_TERMINATE_LL2_CONN",
 	"CHANNEL_TLV_ASYNC_EVENT",
+	"CHANNEL_TLV_RDMA_CREATE_SRQ",
+	"CHANNEL_TLV_RDMA_MODIFY_SRQ",
+	"CHANNEL_TLV_RDMA_DESTROY_SRQ",
 	"CHANNEL_TLV_SOFT_FLR",
 	"CHANNEL_TLV_MAX"
 };
 
+enum _ecore_status_t ecore_iov_pci_enable_prolog(struct ecore_hwfn *p_hwfn,
+						 struct ecore_ptt *p_ptt,
+						 u8 max_active_vfs)
+{
+	if (IS_LEAD_HWFN(p_hwfn) &&
+	    max_active_vfs > p_hwfn->p_dev->p_iov_info->total_vfs) {
+		DP_ERR(p_hwfn,
+		       "max_active_vfs %u exceeds the total number of VFs supported by the PF (%u)\n",
+		       max_active_vfs, p_hwfn->p_dev->p_iov_info->total_vfs);
+		return ECORE_INVAL;
+	}
+
+	if (p_hwfn->pf_iov_info->max_active_vfs) {
+		DP_ERR(p_hwfn,
+		       "max_active_vfs is already set to %u. To set a new value, unload all the VFs and call disable_epilog first\n",
+		       p_hwfn->pf_iov_info->max_active_vfs);
+		return ECORE_INVAL;
+	}
+
+	p_hwfn->pf_iov_info->max_active_vfs = max_active_vfs;
+
+	return ecore_qm_reconf(p_hwfn, p_ptt);
+}
+
+enum _ecore_status_t ecore_iov_pci_disable_epilog(struct ecore_hwfn *p_hwfn,
+						  struct ecore_ptt *p_ptt)
+{
+	if (IS_LEAD_HWFN(p_hwfn) && p_hwfn->p_dev->p_iov_info->num_vfs) {
+		DP_ERR(p_hwfn,
+		       "There are still loaded VFs (%u) - can't re-configure the QM\n",
+		       p_hwfn->p_dev->p_iov_info->num_vfs);
+		return ECORE_INVAL;
+	}
+
+	p_hwfn->pf_iov_info->max_active_vfs = 0;
+
+	return ecore_qm_reconf(p_hwfn, p_ptt);
+}
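
A compact standalone model of the guard logic in this prolog/epilog pair, using plain ints instead of the ecore structs (the ecore_qm_reconf() step is represented here by the successful return):

/* Illustrative state checks only; mirrors the guard conditions of the
 * prolog/epilog pair above.
 */
#include <stdio.h>

struct fake_pf {
	unsigned int total_vfs;		/* PCI SR-IOV capability limit */
	unsigned int num_vfs;		/* currently loaded VFs */
	unsigned int max_active_vfs;	/* QM was sized for this many */
};

static int enable_prolog(struct fake_pf *pf, unsigned int max_active_vfs)
{
	if (max_active_vfs > pf->total_vfs)
		return -1;		/* more than the PF can support */
	if (pf->max_active_vfs)
		return -1;		/* must run disable_epilog first */
	pf->max_active_vfs = max_active_vfs;
	return 0;			/* real code reconfigures the QM here */
}

static int disable_epilog(struct fake_pf *pf)
{
	if (pf->num_vfs)
		return -1;		/* can't resize QM with VFs loaded */
	pf->max_active_vfs = 0;
	return 0;
}

int main(void)
{
	struct fake_pf pf = { 16, 0, 0 };

	printf("%d\n", enable_prolog(&pf, 8));	/* 0 */
	printf("%d\n", enable_prolog(&pf, 4));	/* -1: already set */
	printf("%d\n", disable_epilog(&pf));	/* 0 */
	return 0;
}
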
+
 static u8 ecore_vf_calculate_legacy(struct ecore_vf_info *p_vf)
 {
 	u8 legacy = 0;
@@ -150,6 +190,7 @@ static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
 	default:
 		DP_NOTICE(p_hwfn, true, "Unknown VF personality %d\n",
 			  p_hwfn->hw_info.personality);
+		ecore_sp_destroy_request(p_hwfn, p_ent);
 		return ECORE_INVAL;
 	}
 
@@ -157,9 +198,7 @@ static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
 	if (fp_minor > ETH_HSI_VER_MINOR &&
 	    fp_minor != ETH_HSI_VER_NO_PKT_LEN_TUNN) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF [%d] - Requested fp hsi %02x.%02x which is"
-			   " slightly newer than PF's %02x.%02x; Configuring"
-			   " PFs version\n",
+			   "VF [%d] - Requested fp hsi %02x.%02x which is slightly newer than PF's %02x.%02x; Configuring PFs version\n",
 			   p_vf->abs_vf_id,
 			   ETH_HSI_VER_MAJOR, fp_minor,
 			   ETH_HSI_VER_MAJOR, ETH_HSI_VER_MINOR);
@@ -304,8 +343,7 @@ static bool ecore_iov_validate_rxq(struct ecore_hwfn *p_hwfn,
 {
 	if (rx_qid >= p_vf->num_rxqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[0x%02x] - can't touch Rx queue[%04x];"
-			   " Only 0x%04x are allocated\n",
+			   "VF[0x%02x] - can't touch Rx queue[%04x]; Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, rx_qid, p_vf->num_rxqs);
 		return false;
 	}
@@ -320,8 +358,7 @@ static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn,
 {
 	if (tx_qid >= p_vf->num_txqs) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[0x%02x] - can't touch Tx queue[%04x];"
-			   " Only 0x%04x are allocated\n",
+			   "VF[0x%02x] - can't touch Tx queue[%04x]; Only 0x%04x are allocated\n",
 			   p_vf->abs_vf_id, tx_qid, p_vf->num_txqs);
 		return false;
 	}
@@ -340,8 +377,7 @@ static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
 			return true;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "VF[0%02x] - tried using sb_idx %04x which doesn't exist as"
-		   " one of its 0x%02x SBs\n",
+		   "VF[0%02x] - tried using sb_idx %04x which doesn't exist as one of its 0x%02x SBs\n",
 		   p_vf->abs_vf_id, sb_idx, p_vf->num_sbs);
 
 	return false;
@@ -374,14 +410,23 @@ static bool ecore_iov_validate_active_txq(struct ecore_vf_info *p_vf)
 	return false;
 }
 
-enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
-						int vfid,
-						struct ecore_ptt *p_ptt)
+static u32 ecore_iov_vf_bulletin_crc(struct ecore_vf_info *p_vf)
 {
-	struct ecore_bulletin_content *p_bulletin;
+	struct ecore_bulletin_content *p_bulletin = p_vf->bulletin.p_virt;
 	int crc_size = sizeof(p_bulletin->crc);
+
+	return OSAL_CRC32(0, (u8 *)p_bulletin + crc_size,
+			  p_vf->bulletin.size - crc_size);
+}
+
+LNX_STATIC enum _ecore_status_t
+ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn, int vfid,
+			   struct ecore_ptt *p_ptt)
+{
+	struct ecore_bulletin_content *p_bulletin;
 	struct dmae_params params;
 	struct ecore_vf_info *p_vf;
+	u32 crc;
 
 	p_vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
 	if (!p_vf)
@@ -393,14 +438,19 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
 
 	p_bulletin = p_vf->bulletin.p_virt;
 
+	/* Do not post if bulletin was not changed */
+	crc = ecore_iov_vf_bulletin_crc(p_vf);
+	if (crc == p_bulletin->crc)
+		return ECORE_SUCCESS;
+
 	/* Increment bulletin board version and compute crc */
 	p_bulletin->version++;
-	p_bulletin->crc = OSAL_CRC32(0, (u8 *)p_bulletin + crc_size,
-				     p_vf->bulletin.size - crc_size);
+	p_bulletin->crc = ecore_iov_vf_bulletin_crc(p_vf);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 		   "Posting Bulletin 0x%08x to VF[%d] (CRC 0x%08x)\n",
-		   p_bulletin->version, p_vf->relative_vf_id, p_bulletin->crc);
+		   p_bulletin->version, p_vf->relative_vf_id,
+		   p_bulletin->crc);
 
 	/* propagate bulletin board via dmae to vm memory */
 	OSAL_MEMSET(&params, 0, sizeof(params));
@@ -411,6 +461,7 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
 				    &params);
 }
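
To see why computing the CRC before bumping the version saves a DMA when nothing changed, here is a toy standalone version. It assumes a bitwise CRC-32 in place of OSAL_CRC32 and a trimmed-down bulletin layout; only the skip-if-unchanged flow mirrors the driver:

/* Toy version of the "skip identical bulletins" logic: the CRC is
 * computed over everything after the crc field itself, and the post
 * is skipped when nothing changed.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct bulletin {
	uint32_t crc;		/* must stay first, as in the driver */
	uint32_t version;
	uint8_t mac[6];
};

static uint32_t crc32_bitwise(const uint8_t *p, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
	}
	return ~crc;
}

static uint32_t bulletin_crc(const struct bulletin *b)
{
	const size_t crc_size = sizeof(b->crc);

	return crc32_bitwise((const uint8_t *)b + crc_size,
			     sizeof(*b) - crc_size);
}

static int post_bulletin(struct bulletin *b)
{
	if (bulletin_crc(b) == b->crc)
		return 0;	/* unchanged - skip the DMA */

	b->version++;
	b->crc = bulletin_crc(b);
	return 1;		/* would DMA to VF memory here */
}

int main(void)
{
	struct bulletin b;

	memset(&b, 0, sizeof(b));	/* also zeroes struct padding */
	printf("%d\n", post_bulletin(&b));	/* 1: first post */
	printf("%d\n", post_bulletin(&b));	/* 0: nothing changed */
	b.mac[0] = 0x02;
	printf("%d\n", post_bulletin(&b));	/* 1: content changed */
	return 0;
}
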
 
+/* @DPDK */
 static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev)
 {
 	struct ecore_hw_sriov_info *iov = p_dev->p_iov_info;
@@ -419,44 +470,33 @@ static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev)
 	DP_VERBOSE(p_dev, ECORE_MSG_IOV, "sriov ext pos %d\n", pos);
 	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_CTRL, &iov->ctrl);
 
-	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_TOTAL_VF,
-				  &iov->total_vfs);
-	OSAL_PCI_READ_CONFIG_WORD(p_dev,
-				  pos + RTE_PCI_SRIOV_INITIAL_VF,
-				  &iov->initial_vfs);
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_TOTAL_VF, &iov->total_vfs);
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_INITIAL_VF, &iov->initial_vfs);
 
-	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_NUM_VF,
-				  &iov->num_vfs);
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_NUM_VF, &iov->num_vfs);
 	if (iov->num_vfs) {
 		/* @@@TODO - in future we might want to add an OSAL here to
 		 * allow each OS to decide on its own how to act.
 		 */
 		DP_VERBOSE(p_dev, ECORE_MSG_IOV,
-			   "Number of VFs are already set to non-zero value."
-			   " Ignoring PCI configuration value\n");
+			   "Number of VFs is already set to a non-zero value. Ignoring PCI configuration value\n");
 		iov->num_vfs = 0;
 	}
 
-	OSAL_PCI_READ_CONFIG_WORD(p_dev,
-				  pos + RTE_PCI_SRIOV_VF_OFFSET, &iov->offset);
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_VF_OFFSET, &iov->offset);
 
-	OSAL_PCI_READ_CONFIG_WORD(p_dev,
-				  pos + RTE_PCI_SRIOV_VF_STRIDE, &iov->stride);
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_VF_STRIDE, &iov->stride);
 
-	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_VF_DID,
-				  &iov->vf_device_id);
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_VF_DID, &iov->vf_device_id);
 
-	OSAL_PCI_READ_CONFIG_DWORD(p_dev,
-				   pos + RTE_PCI_SRIOV_SUP_PGSIZE, &iov->pgsz);
+	OSAL_PCI_READ_CONFIG_DWORD(p_dev, pos + RTE_PCI_SRIOV_SUP_PGSIZE, &iov->pgsz);
 
 	OSAL_PCI_READ_CONFIG_DWORD(p_dev, pos + RTE_PCI_SRIOV_CAP, &iov->cap);
 
-	OSAL_PCI_READ_CONFIG_BYTE(p_dev, pos + RTE_PCI_SRIOV_FUNC_LINK,
-				  &iov->link);
+	OSAL_PCI_READ_CONFIG_BYTE(p_dev, pos + RTE_PCI_SRIOV_FUNC_LINK, &iov->link);
 
-	DP_VERBOSE(p_dev, ECORE_MSG_IOV, "IOV info: nres %d, cap 0x%x,"
-		   "ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d,"
-		   " stride %d, page size 0x%x\n",
+	DP_VERBOSE(p_dev, ECORE_MSG_IOV,
+		   "IOV info: nres %d, cap 0x%x, ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d, stride %d, page size 0x%x\n",
 		   iov->nres, iov->cap, iov->ctrl,
 		   iov->total_vfs, iov->initial_vfs, iov->nr_virtfn,
 		   iov->offset, iov->stride, iov->pgsz);
@@ -468,9 +508,7 @@ static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev)
 		 * num_vfs to zero to avoid memory corruption in the code that
 		 * assumes max number of vfs
 		 */
-		DP_NOTICE(p_dev, false,
-			  "IOV: Unexpected number of vfs set: %d"
-			  " setting num_vf to zero\n",
+		DP_NOTICE(p_dev, false, "IOV: Unexpected number of vfs set: %d, setting num_vf to zero\n",
 			  iov->num_vfs);
 
 		iov->num_vfs = 0;
@@ -490,17 +528,20 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 	union vfpf_tlvs *p_req_virt_addr;
 	u8 idx = 0;
 
-	OSAL_MEMSET(p_iov_info->vfs_array, 0, sizeof(p_iov_info->vfs_array));
-
 	p_req_virt_addr = p_iov_info->mbx_msg_virt_addr;
 	req_p = p_iov_info->mbx_msg_phys_addr;
 	p_reply_virt_addr = p_iov_info->mbx_reply_virt_addr;
 	rply_p = p_iov_info->mbx_reply_phys_addr;
 	p_bulletin_virt = p_iov_info->p_bulletins;
 	bulletin_p = p_iov_info->bulletins_phys;
+
 	if (!p_req_virt_addr || !p_reply_virt_addr || !p_bulletin_virt) {
-		DP_ERR(p_hwfn,
-		       "ecore_iov_setup_vfdb called without alloc mem first\n");
+		DP_ERR(p_hwfn, "called without allocating mem first\n");
+		return;
+	}
+
+	if (ECORE_IS_VF_RDMA(p_hwfn) && !p_iov_info->p_rdma_info) {
+		DP_ERR(p_hwfn, "called without allocating mem first for rdma_info\n");
 		return;
 	}
 
@@ -521,7 +562,8 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		vf->b_init = false;
 
 		vf->bulletin.phys = idx *
-		    sizeof(struct ecore_bulletin_content) + bulletin_p;
+				    sizeof(struct ecore_bulletin_content) +
+				    bulletin_p;
 		vf->bulletin.p_virt = p_bulletin_virt + idx;
 		vf->bulletin.size = sizeof(struct ecore_bulletin_content);
 
@@ -531,10 +573,15 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		vf->concrete_fid = concrete;
 		/* TODO - need to devise a better way of getting opaque */
 		vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) |
-		    (vf->abs_vf_id << 8);
+				 (vf->abs_vf_id << 8);
 
 		vf->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
 		vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
+
+		/* Pending EQs list for this VF and its lock */
+		OSAL_LIST_INIT(&vf->vf_eq_info.eq_list);
+		OSAL_SPIN_LOCK_INIT(&vf->vf_eq_info.eq_list_lock);
+		vf->vf_eq_info.eq_active = true;
 	}
 }
 
@@ -568,7 +615,7 @@ static enum _ecore_status_t ecore_iov_allocate_vfdb(struct ecore_hwfn *p_hwfn)
 		return ECORE_NOMEM;
 
 	p_iov_info->bulletins_size = sizeof(struct ecore_bulletin_content) *
-	    num_vfs;
+				     num_vfs;
 	p_v_addr = &p_iov_info->p_bulletins;
 	*p_v_addr = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
 					    &p_iov_info->bulletins_phys,
@@ -577,15 +624,15 @@ static enum _ecore_status_t ecore_iov_allocate_vfdb(struct ecore_hwfn *p_hwfn)
 		return ECORE_NOMEM;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "PF's Requests mailbox [%p virt 0x%lx phys],  "
-		   "Response mailbox [%p virt 0x%lx phys] Bulletinsi"
-		   " [%p virt 0x%lx phys]\n",
+		   "PF's Requests mailbox [%p virt 0x%" PRIx64 " phys], "
+		   "Response mailbox [%p virt 0x%" PRIx64 " phys] "
+		   "Bulletins [%p virt 0x%" PRIx64 " phys]\n",
 		   p_iov_info->mbx_msg_virt_addr,
-		   (unsigned long)p_iov_info->mbx_msg_phys_addr,
+		   (u64)p_iov_info->mbx_msg_phys_addr,
 		   p_iov_info->mbx_reply_virt_addr,
-		   (unsigned long)p_iov_info->mbx_reply_phys_addr,
+		   (u64)p_iov_info->mbx_reply_phys_addr,
 		   p_iov_info->p_bulletins,
-		   (unsigned long)p_iov_info->bulletins_phys);
+		   (u64)p_iov_info->bulletins_phys);
 
 	return ECORE_SUCCESS;
 }
@@ -593,6 +640,7 @@ static enum _ecore_status_t ecore_iov_allocate_vfdb(struct ecore_hwfn *p_hwfn)
 static void ecore_iov_free_vfdb(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_pf_iov *p_iov_info = p_hwfn->pf_iov_info;
+	u8 idx;
 
 	if (p_hwfn->pf_iov_info->mbx_msg_virt_addr)
 		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
@@ -611,11 +659,39 @@ static void ecore_iov_free_vfdb(struct ecore_hwfn *p_hwfn)
 				       p_iov_info->p_bulletins,
 				       p_iov_info->bulletins_phys,
 				       p_iov_info->bulletins_size);
+
+	/* Free pending EQ entries */
+	for (idx = 0; idx < p_hwfn->p_dev->p_iov_info->total_vfs; idx++) {
+		struct ecore_vf_info *p_vf = &p_iov_info->vfs_array[idx];
+
+		if (!p_vf->vf_eq_info.eq_active)
+			continue;
+
+		p_vf->vf_eq_info.eq_active = false;
+		while (!OSAL_LIST_IS_EMPTY(&p_vf->vf_eq_info.eq_list)) {
+			struct event_ring_list_entry *p_eqe;
+
+			p_eqe = OSAL_LIST_FIRST_ENTRY(&p_vf->vf_eq_info.eq_list,
+						      struct event_ring_list_entry,
+						      list_entry);
+			if (p_eqe) {
+				OSAL_LIST_REMOVE_ENTRY(&p_eqe->list_entry,
+						       &p_vf->vf_eq_info.eq_list);
+				OSAL_FREE(p_hwfn->p_dev, p_eqe);
+			}
+		}
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+		OSAL_SPIN_LOCK_DEALLOC(&p_vf->vf_eq_info.eq_list_lock);
+#endif
+	}
 }
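
The drain above follows a common teardown shape: clear eq_active so producers stop queueing, then free whatever is left. A standalone sketch with a plain singly linked list (the per-VF spinlock is omitted in the demo):

/* Standalone sketch of the drain pattern: flip the active flag first
 * so producers stop queueing, then free the remaining entries.
 */
#include <stdio.h>
#include <stdlib.h>

struct eq_entry {
	struct eq_entry *next;
	int data;
};

struct vf_eq {
	struct eq_entry *head;
	int eq_active;
};

static void eq_drain(struct vf_eq *eq)
{
	eq->eq_active = 0;	/* producers must check this first */

	while (eq->head) {
		struct eq_entry *e = eq->head;

		eq->head = e->next;
		free(e);
	}
}

int main(void)
{
	struct vf_eq eq = { NULL, 1 };
	struct eq_entry *e = malloc(sizeof(*e));

	e->next = NULL;
	e->data = 42;
	eq.head = e;

	eq_drain(&eq);
	printf("active=%d empty=%d\n", eq.eq_active, eq.head == NULL);
	return 0;
}
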
 
 enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_pf_iov *p_sriov;
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	u8 idx;
+#endif
 
 	if (!IS_PF_SRIOV(p_hwfn)) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -629,20 +705,34 @@ enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn *p_hwfn)
 		return ECORE_NOMEM;
 	}
 
-	p_hwfn->pf_iov_info = p_sriov;
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+	for (idx = 0; idx < p_hwfn->p_dev->p_iov_info->total_vfs; idx++) {
+		struct ecore_vf_info *p_vf = &p_sriov->vfs_array[idx];
 
-	ecore_spq_register_async_cb(p_hwfn, PROTOCOLID_COMMON,
-				    ecore_sriov_eqe_event);
+		if (OSAL_SPIN_LOCK_ALLOC(p_hwfn, &p_vf->vf_eq_info.eq_list_lock,
+					 "vf_eq_list_lock")) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to allocate `eq_list_lock'\n");
+			OSAL_FREE(p_hwfn->p_dev, p_sriov);
+			return ECORE_NOMEM;
+		}
+	}
+#endif
+
+	p_hwfn->pf_iov_info = p_sriov;
 
 	return ecore_iov_allocate_vfdb(p_hwfn);
 }
 
-void ecore_iov_setup(struct ecore_hwfn *p_hwfn)
+void ecore_iov_setup(struct ecore_hwfn	*p_hwfn)
 {
 	if (!IS_PF_SRIOV(p_hwfn) || !IS_PF_SRIOV_ALLOC(p_hwfn))
 		return;
 
 	ecore_iov_setup_vfdb(p_hwfn);
+
+	ecore_spq_register_async_cb(p_hwfn, PROTOCOLID_COMMON,
+				    ecore_sriov_eqe_event);
 }
 
 void ecore_iov_free(struct ecore_hwfn *p_hwfn)
@@ -652,12 +742,14 @@ void ecore_iov_free(struct ecore_hwfn *p_hwfn)
 	if (IS_PF_SRIOV_ALLOC(p_hwfn)) {
 		ecore_iov_free_vfdb(p_hwfn);
 		OSAL_FREE(p_hwfn->p_dev, p_hwfn->pf_iov_info);
+		p_hwfn->pf_iov_info = OSAL_NULL;
 	}
 }
 
 void ecore_iov_free_hw_info(struct ecore_dev *p_dev)
 {
 	OSAL_FREE(p_dev, p_dev->p_iov_info);
+	p_dev->p_iov_info = OSAL_NULL;
 }
 
 enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn)
@@ -700,6 +792,7 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "IOV capabilities, but no VFs are published\n");
 		OSAL_FREE(p_dev, p_dev->p_iov_info);
+		p_dev->p_iov_info = OSAL_NULL;
 		return ECORE_SUCCESS;
 	}
 
@@ -750,13 +843,14 @@ static bool _ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid,
 	return true;
 }
 
-bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid)
+LNX_STATIC bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid)
 {
 	return _ecore_iov_pf_sanity_check(p_hwfn, vfid, true);
 }
 
-void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
-				 u16 rel_vf_id, u8 to_disable)
+LNX_STATIC void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
+					    u16 rel_vf_id,
+					    u8 to_disable)
 {
 	struct ecore_vf_info *vf;
 	int i;
@@ -772,8 +866,8 @@ void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
 	}
 }
 
-void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev,
-				  u8 to_disable)
+LNX_STATIC void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev,
+					     u8 to_disable)
 {
 	u16 i;
 
@@ -835,9 +929,10 @@ static void ecore_iov_vf_igu_reset(struct ecore_hwfn *p_hwfn,
 						  vf->opaque_fid, true);
 }
 
-static void ecore_iov_vf_igu_set_int(struct ecore_hwfn *p_hwfn,
-				     struct ecore_ptt *p_ptt,
-				     struct ecore_vf_info *vf, bool enable)
+static void ecore_iov_vf_igu_set_int(struct ecore_hwfn		*p_hwfn,
+				     struct ecore_ptt		*p_ptt,
+				     struct ecore_vf_info	*vf,
+				     bool			enable)
 {
 	u32 igu_vf_conf;
 
@@ -892,11 +987,12 @@ ecore_iov_enable_vf_access_msix(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t
-ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt, struct ecore_vf_info *vf)
+static enum _ecore_status_t ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
+						       struct ecore_ptt *p_ptt,
+						       struct ecore_vf_info *vf)
 {
 	u32 igu_vf_conf = IGU_VF_CONF_FUNC_EN;
+	bool enable_scan = false;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	/* It's possible VF was previously considered malicious -
@@ -907,9 +1003,8 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
 	if (vf->to_disable)
 		return ECORE_SUCCESS;
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "Enable internal access for vf %x [abs %x]\n", vf->abs_vf_id,
-		   ECORE_VF_ABS_ID(p_hwfn, vf));
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Enable internal access for vf %x [abs %x]\n",
+		   vf->abs_vf_id, ECORE_VF_ABS_ID(p_hwfn, vf));
 
 	ecore_iov_vf_pglue_clear_err(p_hwfn, p_ptt,
 				     ECORE_VF_ABS_ID(p_hwfn, vf));
@@ -921,6 +1016,15 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
+	/* enable scan for VF only if it supports RDMA */
+	enable_scan = ECORE_IS_VF_RDMA(p_hwfn);
+
+	if (enable_scan)
+		ecore_tm_clear_vf_ilt(p_hwfn, vf->relative_vf_id);
+
+	STORE_RT_REG(p_hwfn, TM_REG_VF_ENABLE_CONN_RT_OFFSET,
+		     enable_scan ? 0x1 : 0x0);
+
 	ecore_fid_pretend(p_hwfn, p_ptt, (u16)vf->concrete_fid);
 
 	SET_FIELD(igu_vf_conf, IGU_VF_CONF_PARENT, p_hwfn->rel_pf_id);
@@ -938,10 +1042,9 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
 }
 
 /**
- *
  * @brief ecore_iov_config_perm_table - configure the permission
  *      zone table.
- *      The queue zone permission table size is 320x9. There
+ *      In E4, queue zone permission table size is 320x9. There
  *      are 320 VF queues for single engine device (256 for dual
  *      engine device), and each entry has the following format:
  *      {Valid, VF[7:0]}
@@ -950,9 +1053,10 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
  * @param vf
  * @param enable
  */
-static void ecore_iov_config_perm_table(struct ecore_hwfn *p_hwfn,
-					struct ecore_ptt *p_ptt,
-					struct ecore_vf_info *vf, u8 enable)
+static void ecore_iov_config_perm_table(struct ecore_hwfn	*p_hwfn,
+					struct ecore_ptt	*p_ptt,
+					struct ecore_vf_info	*vf,
+					u8			enable)
 {
 	u32 reg_addr, val;
 	u16 qzone_id = 0;
@@ -984,23 +1088,23 @@ static void ecore_iov_enable_vf_traffic(struct ecore_hwfn *p_hwfn,
 static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt,
 				     struct ecore_vf_info *vf,
-				     u16 num_rx_queues)
+				     u16 num_irqs)
 {
 	struct ecore_igu_block *p_block;
 	struct cau_sb_entry sb_entry;
 	int qid = 0;
 	u32 val = 0;
 
-	if (num_rx_queues > p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov)
-		num_rx_queues =
-		(u16)p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov;
-	p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov -= num_rx_queues;
+	if (num_irqs > p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov)
+		num_irqs = (u16)p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov;
+
+	p_hwfn->hw_info.p_igu_info->usage.free_cnt_iov -= num_irqs;
 
 	SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER, vf->abs_vf_id);
 	SET_FIELD(val, IGU_MAPPING_LINE_VALID, 1);
 	SET_FIELD(val, IGU_MAPPING_LINE_PF_VALID, 0);
 
-	for (qid = 0; qid < num_rx_queues; qid++) {
+	for (qid = 0; qid < num_irqs; qid++) {
 		p_block = ecore_get_igu_free_sb(p_hwfn, false);
 		if (!p_block)
 			continue;
@@ -1025,7 +1129,7 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 				    OSAL_NULL /* default parameters */);
 	}
 
-	vf->num_sbs = (u8)num_rx_queues;
+	vf->num_sbs = (u8)num_irqs;
 
 	return vf->num_sbs;
 }
@@ -1044,6 +1148,7 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 				      struct ecore_ptt *p_ptt,
 				      struct ecore_vf_info *vf)
+
 {
 	struct ecore_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
 	int idx, igu_id;
@@ -1052,7 +1157,8 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	/* Invalidate igu CAM lines and mark them as free */
 	for (idx = 0; idx < vf->num_sbs; idx++) {
 		igu_id = vf->igu_sbs[idx];
-		addr = IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_id;
+		addr = IGU_REG_MAPPING_MEMORY +
+		       sizeof(u32) * igu_id;
 
 		val = ecore_rd(p_hwfn, p_ptt, addr);
 		SET_FIELD(val, IGU_MAPPING_LINE_VALID, 0);
@@ -1065,11 +1171,11 @@ static void ecore_iov_free_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 	vf->num_sbs = 0;
 }
 
-void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
-			u16 vfid,
-			struct ecore_mcp_link_params *params,
-			struct ecore_mcp_link_state *link,
-			struct ecore_mcp_link_capabilities *p_caps)
+LNX_STATIC void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
+				   u16 vfid,
+				   struct ecore_mcp_link_params *params,
+				   struct ecore_mcp_link_state *link,
+				   struct ecore_mcp_link_capabilities *p_caps)
 {
 	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
 	struct ecore_bulletin_content *p_bulletin;
@@ -1111,7 +1217,7 @@ static void ecore_emul_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
-enum _ecore_status_t
+LNX_STATIC enum _ecore_status_t
 ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
 			 struct ecore_iov_vf_init_params *p_params)
@@ -1126,6 +1232,16 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	u32 cids;
 	u8 i;
 
+	if (IS_LEAD_HWFN(p_hwfn) &&
+	    p_hwfn->p_dev->p_iov_info->num_vfs ==
+	    p_hwfn->pf_iov_info->max_active_vfs) {
+		DP_NOTICE(p_hwfn, false,
+			  "Can't add this VF - num_vfs (%u) has already reached max_active_vfs (%u)\n",
+			  p_hwfn->p_dev->p_iov_info->num_vfs,
+			  p_hwfn->pf_iov_info->max_active_vfs);
+		return ECORE_NORESOURCES;
+	}
+
 	vf = ecore_iov_get_vf_info(p_hwfn, p_params->rel_vf_id, false);
 	if (!vf) {
 		DP_ERR(p_hwfn, "ecore_iov_init_hw_for_vf : vf is OSAL_NULL\n");
@@ -1140,14 +1256,14 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 
 	/* Perform sanity checking on the requested vport/rss */
 	if (p_params->vport_id >= RESC_NUM(p_hwfn, ECORE_VPORT)) {
-		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use VPORT %02x\n",
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use VPORT 0x%02x\n",
 			  p_params->rel_vf_id, p_params->vport_id);
 		return ECORE_INVAL;
 	}
 
 	if ((p_params->num_queues > 1) &&
 	    (p_params->rss_eng_id >= RESC_NUM(p_hwfn, ECORE_RSS_ENG))) {
-		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use RSS_ENG %02x\n",
+		DP_NOTICE(p_hwfn, true, "VF[%d] - can't use RSS_ENG 0x%02x\n",
 			  p_params->rel_vf_id, p_params->rss_eng_id);
 		return ECORE_INVAL;
 	}
@@ -1193,10 +1309,14 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 	/* Limit number of queues according to number of CIDs */
 	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "VF[%d] - requesting to initialize for 0x%04x queues"
-		   " [0x%04x CIDs available]\n",
-		   vf->relative_vf_id, p_params->num_queues, (u16)cids);
-	num_irqs = OSAL_MIN_T(u16, p_params->num_queues, ((u16)cids));
+		   "VF[%d] - requesting to initialize for 0x%04x queues [0x%04x CIDs available] and 0x%04x cnqs\n",
+		   vf->relative_vf_id, p_params->num_queues, (u16)cids,
+		   p_params->num_cnqs);
+
+	p_params->num_queues = OSAL_MIN_T(u16, (p_params->num_queues),
+					  ((u16)cids));
+
+	num_irqs = p_params->num_queues + p_params->num_cnqs;
 
 	num_of_vf_available_chains = ecore_iov_alloc_vf_igu_sbs(p_hwfn,
 							       p_ptt,
@@ -1207,9 +1327,21 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 		return ECORE_NOMEM;
 	}
 
-	/* Choose queue number and index ranges */
-	vf->num_rxqs = num_of_vf_available_chains;
-	vf->num_txqs = num_of_vf_available_chains;
+	/* L2 queues will get as many chains as they need. The rest will
+	 * serve the CNQs.
+	 */
+	if (num_of_vf_available_chains > p_params->num_queues) {
+		vf->num_rxqs = p_params->num_queues;
+		vf->num_txqs = p_params->num_queues;
+		vf->num_cnqs = num_of_vf_available_chains - p_params->num_queues;
+		vf->cnq_offset = p_params->cnq_offset;
+	} else {
+		vf->num_rxqs = num_of_vf_available_chains;
+		vf->num_txqs = num_of_vf_available_chains;
+		vf->num_cnqs = 0;
+	}
+
+	vf->cnq_sb_start_id = vf->num_rxqs;
 
 	for (i = 0; i < vf->num_rxqs; i++) {
 		struct ecore_vf_queue *p_queue = &vf->vf_queues[i];
@@ -1240,8 +1372,8 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 
 	vf->b_init = true;
 #ifndef REMOVE_DBG
-	p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] |=
-			(1ULL << (vf->relative_vf_id % 64));
+	ECORE_VF_ARRAY_SET_VFID(p_hwfn->pf_iov_info->active_vfs,
+				vf->relative_vf_id);
 #endif
 
 	if (IS_LEAD_HWFN(p_hwfn))
@@ -1253,7 +1385,7 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 #endif
 
 	return ECORE_SUCCESS;
-	}
+}
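
The L2-queue/CNQ split a few lines up is plain arithmetic and can be checked in isolation. A standalone sketch, with ints standing in for the vf and IGU bookkeeping:

/* Standalone sketch of the chain split: L2 queues are served first,
 * and whatever IGU chains remain go to the CNQs. Illustrative types.
 */
#include <stdio.h>

struct split {
	unsigned int num_rxqs, num_txqs, num_cnqs, cnq_sb_start_id;
};

static struct split split_chains(unsigned int available,
				 unsigned int requested_queues)
{
	struct split s;

	if (available > requested_queues) {
		s.num_rxqs = requested_queues;
		s.num_txqs = requested_queues;
		s.num_cnqs = available - requested_queues;
	} else {
		s.num_rxqs = available;
		s.num_txqs = available;
		s.num_cnqs = 0;	/* nothing left over for RDMA CNQs */
	}
	s.cnq_sb_start_id = s.num_rxqs;	/* CNQ SBs follow the L2 SBs */
	return s;
}

int main(void)
{
	struct split s = split_chains(8, 6);

	printf("rxq=%u cnq=%u cnq_sb_start=%u\n",
	       s.num_rxqs, s.num_cnqs, s.cnq_sb_start_id); /* 6 2 6 */
	return 0;
}
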
 
 #ifndef ASIC_ONLY
 static void ecore_emul_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
@@ -1265,18 +1397,56 @@ static void ecore_emul_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 
 		ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_SR_IOV_DISABLED_REQUEST_CLR,
 			 sriov_dis);
-}
+	}
 }
 #endif
 
-enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
-						 struct ecore_ptt *p_ptt,
-						 u16 rel_vf_id)
+static enum _ecore_status_t
+ecore_iov_timers_stop(struct ecore_hwfn *p_hwfn,
+		      struct ecore_vf_info *p_vf,
+		      struct ecore_ptt *p_ptt)
+{
+	int i;
+
+	/* pretend to the specific VF for the split */
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_vf->concrete_fid);
+
+	/* close timers */
+	ecore_wr(p_hwfn, p_ptt, TM_REG_VF_ENABLE_CONN, 0x0);
+
+	for (i = 0; i < ECORE_HW_STOP_RETRY_LIMIT; i++) {
+		if (!ecore_rd(p_hwfn, p_ptt, TM_REG_VF_SCAN_ACTIVE_CONN))
+			break;
+
+		/* Depending on the number of connections/tasks, a 1ms
+		 * sleep may be required between polls
+		 */
+		OSAL_MSLEEP(1);
+	}
+
+	/* pretend back to the PF */
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
+
+	if (i < ECORE_HW_STOP_RETRY_LIMIT)
+		return ECORE_SUCCESS;
+
+	DP_NOTICE(p_hwfn, false,
+		  "Timer linear scan connection is not over [%02x]\n",
+		  (u8)ecore_rd(p_hwfn, p_ptt, TM_REG_VF_SCAN_ACTIVE_CONN));
+
+	return ECORE_TIMEOUT;
+}
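
ecore_iov_timers_stop() is a bounded poll loop wrapped in a pretend/unpretend pair. A generic standalone version of the same loop shape, with a faked register read in place of ecore_rd():

/* Generic poll-until-clear helper in the same shape as the timer-stop
 * loop: bounded retries, a short sleep between polls, timeout if the
 * counter never clears. The "register" is faked for the demo.
 */
#include <stdio.h>
#include <unistd.h>

#define STOP_RETRY_LIMIT 10

static int fake_active_conn = 3;	/* pretend HW scan counter */

static unsigned int read_scan_active(void)
{
	if (fake_active_conn > 0)
		fake_active_conn--;	/* HW drains one per poll */
	return fake_active_conn;
}

static int wait_scan_idle(void)
{
	int i;

	for (i = 0; i < STOP_RETRY_LIMIT; i++) {
		if (!read_scan_active())
			break;
		usleep(1000);	/* ~1ms between polls, as in the driver */
	}

	return (i < STOP_RETRY_LIMIT) ? 0 : -1;	/* 0 = idle, -1 = timeout */
}

int main(void)
{
	printf("%s\n", wait_scan_idle() ? "timeout" : "idle");
	return 0;
}
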
+
+LNX_STATIC enum _ecore_status_t
+ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
+			    struct ecore_ptt *p_ptt,
+			    u16 rel_vf_id)
 {
 	struct ecore_mcp_link_capabilities caps;
 	struct ecore_mcp_link_params params;
 	struct ecore_mcp_link_state link;
 	struct ecore_vf_info *vf = OSAL_NULL;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
 	if (!vf) {
@@ -1318,8 +1488,14 @@ enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 
 	if (vf->b_init) {
 		vf->b_init = false;
-		p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] &=
-					~(1ULL << (vf->relative_vf_id / 64));
+
+#ifdef CONFIG_ECORE_SW_CHANNEL
+		vf->b_hw_channel = 0;
+#endif
+#ifndef REMOVE_DBG
+		ECORE_VF_ARRAY_CLEAR_VFID(p_hwfn->pf_iov_info->active_vfs,
+					  vf->relative_vf_id);
+#endif
 
 		if (IS_LEAD_HWFN(p_hwfn))
 			p_hwfn->p_dev->p_iov_info->num_vfs--;
@@ -1330,7 +1506,10 @@ enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 		ecore_emul_iov_release_hw_for_vf(p_hwfn, p_ptt);
 #endif
 
-	return ECORE_SUCCESS;
+	/* disable timer scan for VF */
+	rc = ecore_iov_timers_stop(p_hwfn, vf, p_ptt);
+
+	return rc;
 }
 
 static bool ecore_iov_tlv_supported(u16 tlvtype)
@@ -1339,7 +1518,8 @@ static bool ecore_iov_tlv_supported(u16 tlvtype)
 }
 
 static void ecore_iov_lock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
-					 struct ecore_vf_info *vf, u16 tlv)
+					 struct ecore_vf_info *vf,
+					 u16 tlv)
 {
 	/* lock the channel */
 	/* mutex_lock(&vf->op_mutex); @@@TBD MichalK - add lock... */
@@ -1349,14 +1529,12 @@ static void ecore_iov_lock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
 
 	/* log the lock */
 	if (ecore_iov_tlv_supported(tlv))
-		DP_VERBOSE(p_hwfn,
-			   ECORE_MSG_IOV,
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[%d]: vf pf channel locked by %s\n",
 			   vf->abs_vf_id,
-			   qede_ecore_channel_tlvs_string[tlv]);
+			   ecore_channel_tlvs_string[tlv]);
 	else
-		DP_VERBOSE(p_hwfn,
-			   ECORE_MSG_IOV,
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[%d]: vf pf channel locked by %04x\n",
 			   vf->abs_vf_id, tlv);
 }
@@ -1365,13 +1543,23 @@ static void ecore_iov_unlock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
 					   struct ecore_vf_info *vf,
 					   u16 expected_tlv)
 {
+	/* WARN(expected_tlv != vf->op_current,
+	 * "lock mismatch: expected %s found %s",
+	 * channel_tlvs_string[expected_tlv],
+	 * channel_tlvs_string[vf->op_current]);
+	 * @@@TBD MichalK
+	 */
+
+	/* unlock the channel */
+	/* mutex_unlock(&vf->op_mutex); @@@TBD MichalK add the lock */
+
 	/* log the unlock */
 	if (ecore_iov_tlv_supported(expected_tlv))
 		DP_VERBOSE(p_hwfn,
 			   ECORE_MSG_IOV,
 			   "VF[%d]: vf pf channel unlocked by %s\n",
 			   vf->abs_vf_id,
-			   qede_ecore_channel_tlvs_string[expected_tlv]);
+			   ecore_channel_tlvs_string[expected_tlv]);
 	else
 		DP_VERBOSE(p_hwfn,
 			   ECORE_MSG_IOV,
@@ -1379,7 +1567,7 @@ static void ecore_iov_unlock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
 			   vf->abs_vf_id, expected_tlv);
 
 	/* record the locking op */
-	/* vf->op_current = CHANNEL_TLV_NONE; */
+	/* vf->op_current = CHANNEL_TLV_NONE;*/
 }
 
 /* place a given tlv on the tlv buffer, continuing current tlv list */
@@ -1404,14 +1592,14 @@ void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn, void *tlvs_list)
 	struct channel_tlv *tlv;
 
 	do {
-		/* cast current tlv list entry to channel tlv header */
+		/* cast current tlv list entry to channel tlv header*/
 		tlv = (struct channel_tlv *)((u8 *)tlvs_list + total_length);
 
 		/* output tlv */
 		if (ecore_iov_tlv_supported(tlv->type))
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "TLV number %d: type %s, length %d\n",
-				   i, qede_ecore_channel_tlvs_string[tlv->type],
+				   i, ecore_channel_tlvs_string[tlv->type],
 				   tlv->length);
 		else
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -1426,7 +1614,9 @@ void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn, void *tlvs_list)
 			DP_NOTICE(p_hwfn, false, "TLV of length 0 found\n");
 			return;
 		}
+
 		total_length += tlv->length;
+
 		if (total_length >= sizeof(struct tlv_buffer_size)) {
 			DP_NOTICE(p_hwfn, false, "TLV ==> Buffer overflow\n");
 			return;
@@ -1456,7 +1646,7 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
 	mbx->sw_mbx.response_size =
-	    length + sizeof(struct channel_list_end_tlv);
+		length + sizeof(struct channel_list_end_tlv);
 
 	if (!p_vf->b_hw_channel)
 		return;
@@ -1480,7 +1670,8 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 	 */
 	REG_WR(p_hwfn,
 	       GTT_BAR0_MAP_REG_USDM_RAM +
-	       USTORM_VF_PF_CHANNEL_READY_OFFSET(eng_vf_id), 1);
+	       USTORM_VF_PF_CHANNEL_READY_OFFSET(eng_vf_id),
+	       1);
 
 	ecore_dmae_host2host(p_hwfn, p_ptt, mbx->reply_phys,
 			     mbx->req_virt->first_tlv.reply_address,
@@ -1543,7 +1734,7 @@ static u16 ecore_iov_prep_vp_update_resp_tlvs(struct ecore_hwfn *p_hwfn,
 			resp->hdr.status = PFVF_STATUS_NOT_SUPPORTED;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - vport_update resp: TLV %d, status %02x\n",
+			   "VF[%d] - vport_update response: TLV %d, status %02x\n",
 			   p_vf->relative_vf_id,
 			   ecore_iov_vport_to_tlv(i),
 			   resp->hdr.status);
@@ -1573,10 +1764,10 @@ static void ecore_iov_prepare_resp(struct ecore_hwfn *p_hwfn,
 	ecore_iov_send_response(p_hwfn, p_ptt, vf_info, length, status);
 }
 
-struct ecore_public_vf_info
-*ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn,
-			      u16 relative_vf_id,
-			      bool b_enabled_only)
+LNX_STATIC struct ecore_public_vf_info *
+ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn,
+			     u16 relative_vf_id,
+			     bool b_enabled_only)
 {
 	struct ecore_vf_info *vf = OSAL_NULL;
 
@@ -1590,18 +1781,19 @@ struct ecore_public_vf_info
 static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 				 struct ecore_vf_info *p_vf)
 {
+	struct ecore_bulletin_content *p_bulletin;
 	u32 i, j;
+
+	p_bulletin = p_vf->bulletin.p_virt;
+	p_bulletin->version = 0;
+
 	p_vf->vf_bulletin = 0;
 	p_vf->vport_instance = 0;
 	p_vf->configured_features = 0;
 
-	/* If VF previously requested less resources, go back to default */
-	p_vf->num_rxqs = p_vf->num_sbs;
-	p_vf->num_txqs = p_vf->num_sbs;
-
 	p_vf->num_active_rxqs = 0;
 
-	for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
+	for (i = 0; i < ECORE_MAX_QUEUE_VF_CHAINS_PER_PF; i++) {
 		struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
 
 		for (j = 0; j < MAX_QUEUES_PER_QZONE; j++) {
@@ -1620,16 +1812,29 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
 }
 
 /* Returns either 0, or log(size) */
-static u32 ecore_iov_vf_db_bar_size(struct ecore_hwfn *p_hwfn,
-				    struct ecore_ptt *p_ptt)
+#define PGLUE_B_REG_VF_BAR1_SIZE_LOG_OFFSET 11
+u32 ecore_iov_vf_db_bar_size(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt)
 {
 	u32 val = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_VF_BAR1_SIZE);
 
+	/* The register holds the log of the bar size minus 11, e.g. a value
+	 * of 1 indicates a 4K bar size (log2(4K) - 11 = 1).
+	 * Therefore, we need to add 11 to the register value.
+	 */
 	if (val)
-		return val + 11;
+		return val + PGLUE_B_REG_VF_BAR1_SIZE_LOG_OFFSET;
+
 	return 0;
 }
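
A quick standalone check of the log encoding documented above; a register value of 1 decodes to a 4K BAR:

/* Tiny check of the log encoding: the register stores
 * log2(bar_size) - 11, so value 1 decodes to a 4K BAR.
 */
#include <stdio.h>

#define VF_BAR1_SIZE_LOG_OFFSET 11

static unsigned long long bar_size_bytes(unsigned int reg_val)
{
	if (!reg_val)
		return 0;	/* register not populated */
	return 1ULL << (reg_val + VF_BAR1_SIZE_LOG_OFFSET);
}

int main(void)
{
	printf("%llu\n", bar_size_bytes(1));	/* 4096 */
	printf("%llu\n", bar_size_bytes(5));	/* 65536 */
	return 0;
}
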
 
+u8 ecore_iov_abs_to_rel_id(struct ecore_hwfn *p_hwfn, u8 abs_id)
+{
+	struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
+
+	return abs_id - p_iov->first_vf_in_pf;
+}
+
 static void
 ecore_iov_vf_mbx_acquire_resc_cids(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
@@ -1638,8 +1843,8 @@ ecore_iov_vf_mbx_acquire_resc_cids(struct ecore_hwfn *p_hwfn,
 				   struct pf_vf_resc *p_resp)
 {
 	u8 num_vf_cons = p_hwfn->pf_params.eth_pf_params.num_vf_cons;
-	u8 db_size = DB_ADDR_VF(1, DQ_DEMS_LEGACY) -
-		     DB_ADDR_VF(0, DQ_DEMS_LEGACY);
+	u8 db_size = DB_ADDR_VF(p_hwfn->p_dev, 1, DQ_DEMS_LEGACY) -
+		     DB_ADDR_VF(p_hwfn->p_dev, 0, DQ_DEMS_LEGACY);
 	u32 bar_size;
 
 	p_resp->num_cids = OSAL_MIN_T(u8, p_req->num_cids, num_vf_cons);
@@ -1662,6 +1867,7 @@ ecore_iov_vf_mbx_acquire_resc_cids(struct ecore_hwfn *p_hwfn,
 		if (bar_size)
 			bar_size = 1 << bar_size;
 
+		/* In CMT, doorbell bar should be split over both engines */
 		if (ECORE_IS_CMT(p_hwfn->p_dev))
 			bar_size /= 2;
 	} else {
@@ -1677,16 +1883,18 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 					struct ecore_ptt *p_ptt,
 					struct ecore_vf_info *p_vf,
 					struct vf_pf_resc_request *p_req,
-					struct pf_vf_resc *p_resp)
+					struct pf_vf_resc *p_resp,
+					bool is_rdma_supported)
 {
 	u8 i;
 
 	/* Queue related information */
-	p_resp->num_rxqs = p_vf->num_rxqs;
-	p_resp->num_txqs = p_vf->num_txqs;
-	p_resp->num_sbs = p_vf->num_sbs;
+	p_resp->num_rxqs = OSAL_MIN_T(u8, p_vf->num_rxqs, p_req->num_rxqs);
+	p_resp->num_txqs = OSAL_MIN_T(u8, p_vf->num_txqs, p_req->num_txqs);
+	/* Legacy drivers expect num SB and num rxqs to match */
+	p_resp->num_sbs = is_rdma_supported ? p_vf->num_sbs : p_vf->num_rxqs;
 
-	for (i = 0; i < p_resp->num_sbs; i++) {
+	for (i = 0; i < p_resp->num_rxqs; i++) {
 		p_resp->hw_sbs[i].hw_sb_id = p_vf->igu_sbs[i];
 		/* TODO - what's this sb_qid field? Is it deprecated?
 		 * or is there an ecore_client that looks at this?
@@ -1725,7 +1933,7 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 	    p_resp->num_mc_filters < p_req->num_mc_filters ||
 	    p_resp->num_cids < p_req->num_cids) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "VF[%d] - Insufficient resources: rxq [%02x/%02x] txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x] vlan [%02x/%02x] mc [%02x/%02x] cids [%02x/%02x]\n",
+			   "VF[%d] - Insufficient resources: rxq [%u/%u] txq [%u/%u] sbs [%u/%u] mac [%u/%u] vlan [%u/%u] mc [%u/%u] cids [%u/%u]\n",
 			   p_vf->abs_vf_id,
 			   p_req->num_rxqs, p_resp->num_rxqs,
 			   p_req->num_txqs, p_resp->num_txqs,
@@ -1747,6 +1955,10 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 		return PFVF_STATUS_NO_RESOURCE;
 	}
 
+	/* Save the actual numbers from the response */
+	p_vf->actual_num_rxqs = p_resp->num_rxqs;
+	p_vf->actual_num_txqs = p_resp->num_txqs;
+
 	return PFVF_STATUS_SUCCESS;
 }
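
The resource negotiation in this function boils down to granting min(requested, available) per resource and failing when any grant falls short of the request. A minimal standalone model (the legacy-VF exceptions handled by the real code are omitted):

/* Minimal model of the acquire-resource negotiation: grant
 * min(requested, available) per resource, fail on any shortage.
 */
#include <stdio.h>

#define NRES 3	/* rxqs, txqs, cids - enough for the demo */

static int negotiate(const unsigned int req[NRES],
		     const unsigned int avail[NRES],
		     unsigned int resp[NRES])
{
	int shortage = 0, i;

	for (i = 0; i < NRES; i++) {
		resp[i] = req[i] < avail[i] ? req[i] : avail[i];
		if (resp[i] < req[i])
			shortage = 1;
	}
	return shortage ? -1 : 0;	/* -1 ~ PFVF_STATUS_NO_RESOURCE */
}

int main(void)
{
	unsigned int req[NRES]   = { 4, 4, 8 };
	unsigned int avail[NRES] = { 8, 2, 8 };
	unsigned int resp[NRES];

	printf("%d txq=%u\n", negotiate(req, avail, resp), resp[1]);
	/* prints "-1 txq=2": the txq grant is short of the request */
	return 0;
}
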
 
@@ -1798,6 +2010,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[%d] sent ACQUIRE but is already in state %d - fail request\n",
 			   vf->abs_vf_id, vf->state);
+		vfpf_status = PFVF_STATUS_ACQUIRED;
 		goto out;
 	}
 
@@ -1819,9 +2032,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 			p_vfdev->eth_fp_hsi_minor = ETH_HSI_VER_NO_PKT_LEN_TUNN;
 		} else {
 			DP_INFO(p_hwfn,
-				"VF[%d] needs fastpath HSI %02x.%02x, which is"
-				" incompatible with loaded FW's faspath"
-				" HSI %02x.%02x\n",
+				"VF[%d] needs fastpath HSI %02x.%02x, which is incompatible with loaded FW's fastpath HSI %02x.%02x\n",
 				vf->abs_vf_id,
 				req->vfdev_info.eth_fp_hsi_major,
 				req->vfdev_info.eth_fp_hsi_minor,
@@ -1834,9 +2045,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	/* On 100g PFs, prevent old VFs from loading */
 	if (ECORE_IS_CMT(p_hwfn->p_dev) &&
 	    !(req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_100G)) {
-		DP_INFO(p_hwfn,
-			"VF[%d] is running an old driver that doesn't support"
-			" 100g\n",
+		DP_INFO(p_hwfn, "VF[%d] is running an old driver that doesn't support 100g\n",
 			vf->abs_vf_id);
 		goto out;
 	}
@@ -1855,7 +2064,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 
 	vf->vf_bulletin = req->bulletin_addr;
 	vf->bulletin.size = (vf->bulletin.size < req->bulletin_size) ?
-	    vf->bulletin.size : req->bulletin_size;
+			    vf->bulletin.size : req->bulletin_size;
 
 	/* fill in pfdev info */
 	pfdev_info->chip_num = p_hwfn->p_dev->chip_num;
@@ -1874,6 +2083,22 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	if (req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_QUEUE_QIDS)
 		pfdev_info->capabilities |= PFVF_ACQUIRE_CAP_QUEUE_QIDS;
 
+	/* EDPM is disabled for the VF when the capability is missing, the VF's
+	 * doorbell bar size is insufficient, or the MFW does not support it.
+	 * Otherwise, let the VF know that EDPM is supported.
+	 */
+	if (!(req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_EDPM) ||
+	    p_hwfn->pf_iov_info->dpm_info.db_bar_no_edpm ||
+	    p_hwfn->pf_iov_info->dpm_info.mfw_no_edpm) {
+		vf->db_recovery_info.edpm_disabled = true;
+	} else {
+		pfdev_info->capabilities |= PFVF_ACQUIRE_CAP_EDPM;
+	}
+
+	/* This VF supports requesting, receiving and processing async events */
+	if (req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_EQ)
+		vf->vf_eq_info.eq_support = true;
+
 	/* Share the sizes of the bars with VF */
 	resp->pfdev_info.bar_size = (u8)ecore_iov_vf_db_bar_size(p_hwfn,
 							     p_ptt);
@@ -1881,7 +2106,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	ecore_iov_vf_mbx_acquire_stats(&pfdev_info->stats_info);
 
 	OSAL_MEMCPY(pfdev_info->port_mac, p_hwfn->hw_info.hw_mac_addr,
-		    ETH_ALEN);
+		    ECORE_ETH_ALEN);
 
 	pfdev_info->fw_major = FW_MAJOR_VERSION;
 	pfdev_info->fw_minor = FW_MINOR_VERSION;
@@ -1900,14 +2125,6 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	pfdev_info->dev_type = p_hwfn->p_dev->type;
 	pfdev_info->chip_rev = p_hwfn->p_dev->chip_rev;
 
-	/* Fill resources available to VF; Make sure there are enough to
-	 * satisfy the VF's request.
-	 */
-	vfpf_status = ecore_iov_vf_mbx_acquire_resc(p_hwfn, p_ptt, vf,
-						    &req->resc_request, resc);
-	if (vfpf_status != PFVF_STATUS_SUCCESS)
-		goto out;
-
 	/* Start the VF in FW */
 	rc = ecore_sp_vf_start(p_hwfn, vf);
 	if (rc != ECORE_SUCCESS) {
@@ -1924,18 +2141,28 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	ecore_iov_post_vf_bulletin(p_hwfn, vf->relative_vf_id, p_ptt);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "VF[%d] ACQUIRE_RESPONSE: pfdev_info- chip_num=0x%x,"
-		   " db_size=%d, idx_per_sb=%d, pf_cap=0x%lx\n"
-		   "resources- n_rxq-%d, n_txq-%d, n_sbs-%d, n_macs-%d,"
-		   " n_vlans-%d\n",
+		   "VF[%d] ACQUIRE_RESPONSE: pfdev_info- chip_num=0x%x, db_size=%d, idx_per_sb=%d, "
+		   "pf_cap=0x%" PRIx64 "\n"
+		   "resources- n_rxq-%d, n_txq-%d, n_sbs-%d, n_macs-%d, n_vlans-%d\n",
 		   vf->abs_vf_id, resp->pfdev_info.chip_num,
 		   resp->pfdev_info.db_size, resp->pfdev_info.indices_per_sb,
-		   (unsigned long)resp->pfdev_info.capabilities, resc->num_rxqs,
+		   resp->pfdev_info.capabilities, resc->num_rxqs,
 		   resc->num_txqs, resc->num_sbs, resc->num_mac_filters,
 		   resc->num_vlan_filters);
 
 	vf->state = VF_ACQUIRED;
 
+	/* Enable load requests for this VF - pretend to the specific VF
+	 * for the split.
+	 */
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)vf->concrete_fid);
+
+	ecore_wr(p_hwfn, p_ptt, CCFC_REG_WEAK_ENABLE_VF, 0x1);
+	ecore_wr(p_hwfn, p_ptt, TCFC_REG_WEAK_ENABLE_VF, 0x1);
+
+	/* unpretend */
+	ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
+
 out:
 	/* Prepare Response */
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_ACQUIRE,
@@ -1943,16 +2170,16 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 			       vfpf_status);
 }
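
The weak-enable writes added to the acquire path above (mirrored with 0x0
in the release path later in this patch) follow a pretend/write/unpretend
bracket: the PF temporarily assumes the VF's concrete FID so the write
lands in the VF's context, then restores its own FID. A sketch of the
pattern (the wrapper name is hypothetical; ecore_fid_pretend() and
ecore_wr() are the driver's own helpers):

	static void vf_pretend_wr(struct ecore_hwfn *p_hwfn,
				  struct ecore_ptt *p_ptt,
				  u16 vf_concrete_fid, u32 addr, u32 val)
	{
		/* act as the VF for the duration of the write */
		ecore_fid_pretend(p_hwfn, p_ptt, vf_concrete_fid);
		ecore_wr(p_hwfn, p_ptt, addr, val);
		/* restore the PF's own function id */
		ecore_fid_pretend(p_hwfn, p_ptt,
				  (u16)p_hwfn->hw_info.concrete_fid);
	}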
 
-static enum _ecore_status_t
-__ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
-			 struct ecore_vf_info *p_vf, bool val)
+static enum _ecore_status_t __ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
+						     struct ecore_vf_info *p_vf, bool val)
 {
 	struct ecore_sp_vport_update_params params;
 	enum _ecore_status_t rc;
 
 	if (val == p_vf->spoof_chk) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Spoofchk value[%d] is already configured\n", val);
+			   "Spoofchk value[%d] is already configured\n",
+			   val);
 		return ECORE_SUCCESS;
 	}
 
@@ -1978,9 +2205,8 @@ __ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-static enum _ecore_status_t
-ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn,
-				   struct ecore_vf_info *p_vf)
+static enum _ecore_status_t ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn,
+							       struct ecore_vf_info *p_vf)
 {
 	struct ecore_filter_ucast filter;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
@@ -2003,13 +2229,11 @@ ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn,
 			   "Reconfiguring VLAN [0x%04x] for VF [%04x]\n",
 			   filter.vlan, p_vf->relative_vf_id);
 		rc = ecore_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid,
-					       &filter, ECORE_SPQ_MODE_CB,
-					       OSAL_NULL);
+					       &filter, ECORE_SPQ_MODE_CB, OSAL_NULL);
 		if (rc) {
-			DP_NOTICE(p_hwfn, true,
-				  "Failed to configure VLAN [%04x]"
-				  " to VF [%04x]\n",
-				  filter.vlan, p_vf->relative_vf_id);
+			DP_NOTICE(p_hwfn, true, "Failed to configure VLAN [%04x] to VF [%04x]\n",
+				  filter.vlan,
+				  p_vf->relative_vf_id);
 			break;
 		}
 	}
@@ -2019,11 +2243,12 @@ ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_iov_reconfigure_unicast_shadow(struct ecore_hwfn *p_hwfn,
-				     struct ecore_vf_info *p_vf, u64 events)
+				     struct ecore_vf_info *p_vf,
+				     u64 events)
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	/*TODO - what about MACs? */
+	/* TODO - what about MACs? */
 
 	if ((events & (1 << VLAN_ADDR_FORCED)) &&
 	    !(p_vf->configured_features & (1 << VLAN_ADDR_FORCED)))
@@ -2043,6 +2268,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 	if (!p_vf->vport_instance)
 		return ECORE_INVAL;
 
+	/* Check whether skipping unicast filter update is required, e.g. when
+	 * Switchdev mode is enabled.
+	 */
+	if (ecore_get_skip_ucast_filter_update(p_hwfn))
+		return ECORE_INVAL;
+
 	if ((events & (1 << MAC_ADDR_FORCED)) ||
 	    p_hwfn->pf_params.eth_pf_params.allow_vf_mac_change ||
 	    p_vf->p_vf_info.is_trusted_configured) {
@@ -2055,7 +2286,9 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 		filter.is_rx_filter = 1;
 		filter.is_tx_filter = 1;
 		filter.vport_to_add_to = p_vf->vport_id;
-		OSAL_MEMCPY(filter.mac, p_vf->bulletin.p_virt->mac, ETH_ALEN);
+		OSAL_MEMCPY(filter.mac,
+			    p_vf->bulletin.p_virt->mac,
+			    ECORE_ETH_ALEN);
 
 		rc = ecore_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid,
 					       &filter,
@@ -2086,7 +2319,7 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 		filter.vport_to_add_to = p_vf->vport_id;
 		filter.vlan = p_vf->bulletin.p_virt->pvid;
 		filter.opcode = filter.vlan ? ECORE_FILTER_REPLACE :
-		    ECORE_FILTER_FLUSH;
+					      ECORE_FILTER_FLUSH;
 
 		/* Send the ramrod */
 		rc = ecore_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid,
@@ -2109,11 +2342,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 		vport_update.update_inner_vlan_removal_flg = 1;
 		removal = filter.vlan ?
-		    1 : p_vf->shadow_config.inner_vlan_removal;
+			  1 : p_vf->shadow_config.inner_vlan_removal;
 		vport_update.inner_vlan_removal_flg = removal;
 		vport_update.silent_vlan_removal_flg = filter.vlan ? 1 : 0;
 		rc = ecore_sp_vport_update(p_hwfn, &vport_update,
-					   ECORE_SPQ_MODE_EBLOCK, OSAL_NULL);
+					   ECORE_SPQ_MODE_EBLOCK,
+					   OSAL_NULL);
 		if (rc) {
 			DP_NOTICE(p_hwfn, true,
 				  "PF failed to configure VF vport for vlan\n");
@@ -2121,7 +2355,7 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 		}
 
 		/* Update all the Rx queues */
-		for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++) {
+		for (i = 0; i < ECORE_MAX_QUEUE_VF_CHAINS_PER_PF; i++) {
 			struct ecore_vf_queue *p_queue = &p_vf->vf_queues[i];
 			struct ecore_queue_cid *p_cid = OSAL_NULL;
 
@@ -2132,13 +2366,12 @@ ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
 
 			rc = ecore_sp_eth_rx_queues_update(p_hwfn,
 							   (void **)&p_cid,
-						   1, 0, 1,
-						   ECORE_SPQ_MODE_EBLOCK,
-						   OSAL_NULL);
+							   1, 0, 1,
+							   ECORE_SPQ_MODE_EBLOCK,
+							   OSAL_NULL);
 			if (rc) {
 				DP_NOTICE(p_hwfn, true,
-					  "Failed to send Rx update"
-					  " fo queue[0x%04x]\n",
+					  "Failed to send Rx update for queue[0x%04x]\n",
 					  p_cid->rel.queue_id);
 				return rc;
 			}
@@ -2186,7 +2419,7 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 	ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
 
 	/* Initialize Status block in CAU */
-	for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) {
+	for (sb_id = 0; sb_id < vf->num_rxqs; sb_id++) {
 		if (!start->sb_addr[sb_id]) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "VF[%d] did not fill the address of SB %d\n",
@@ -2216,14 +2449,14 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 	}
 
 	OSAL_MEMSET(&params, 0, sizeof(struct ecore_sp_vport_start_params));
 	params.tpa_mode = start->tpa_mode;
 	params.remove_inner_vlan = start->inner_vlan_removal;
 	params.tx_switching = true;
+	params.zero_placement_offset = start->zero_placement_offset;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false,
-			  "FPGA: Don't config VF for Tx-switching [no pVFC]\n");
+		DP_NOTICE(p_hwfn, false, "FPGA: Don't configure VF for Tx-switching [no pVFC]\n");
 		params.tx_switching = false;
 	}
 #endif
@@ -2241,8 +2474,7 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_sp_eth_vport_start(p_hwfn, &params);
 	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn,
-		       "ecore_iov_vf_mbx_start_vport returned error %d\n", rc);
+		DP_ERR(p_hwfn, "returned error %d\n", rc);
 		status = PFVF_STATUS_FAILURE;
 	} else {
 		vf->vport_instance++;
@@ -2253,7 +2485,6 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
 					  vf->vport_id, vf->opaque_fid);
 		__ecore_iov_spoofchk_set(p_hwfn, vf, vf->req_spoofchk_val);
 	}
-
 	ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_VPORT_START,
 			       sizeof(struct pfvf_def_resp_tlv), status);
 }
@@ -2265,6 +2496,14 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 	u8 status = PFVF_STATUS_SUCCESS;
 	enum _ecore_status_t rc;
 
+	if (!vf->vport_instance) {
+		DP_NOTICE(p_hwfn, false,
+			  "VF [%02x] requested vport stop, but no vport active\n",
+			  vf->abs_vf_id);
+		status = PFVF_STATUS_FAILURE;
+		goto out;
+	}
+
 	OSAL_IOV_VF_VPORT_STOP(p_hwfn, vf);
 	vf->vport_instance--;
 	vf->spoof_chk = false;
@@ -2272,9 +2511,8 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 	if ((ecore_iov_validate_active_rxq(vf)) ||
 	    (ecore_iov_validate_active_txq(vf))) {
 		vf->b_malicious = true;
-		DP_NOTICE(p_hwfn, false,
-			  "VF [%02x] - considered malicious;"
-			  " Unable to stop RX/TX queuess\n",
+		DP_NOTICE(p_hwfn, false,
+			  "VF [%02x] - considered malicious; Unable to stop RX/TX queues\n",
 			  vf->abs_vf_id);
 		status = PFVF_STATUS_MALICIOUS;
 		goto out;
@@ -2282,8 +2520,8 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id);
 	if (rc != ECORE_SUCCESS) {
-		DP_ERR(p_hwfn,
-		       "ecore_iov_vf_mbx_stop_vport returned error %d\n", rc);
+		DP_ERR(p_hwfn, "returned error %d\n", rc);
 		status = PFVF_STATUS_FAILURE;
 	}
 
@@ -2303,6 +2541,7 @@ static void ecore_iov_vf_mbx_start_rxq_resp(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct pfvf_start_queue_resp_tlv *p_tlv;
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	struct vfpf_start_rxq_tlv *req;
 	u16 length;
 
@@ -2322,16 +2561,32 @@ static void ecore_iov_vf_mbx_start_rxq_resp(struct ecore_hwfn *p_hwfn,
 		      sizeof(struct channel_list_end_tlv));
 
 	/* Update the TLV with the response.
-	 * The VF Rx producers are located in the vf zone.
+	 * The VF Rx producers are located in the vf zone in E4, and in the
+	 * queue zone in E5.
 	 */
 	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
 		req = &mbx->req_virt->start_rxq;
 
-		p_tlv->offset =
-			PXP_VF_BAR0_START_MSDM_ZONE_B +
+		if (ECORE_IS_E4(p_dev)) {
+			p_tlv->offset =
+				PXP_VF_BAR0_START_MSDM_ZONE_B +
 				OFFSETOF(struct mstorm_vf_zone,
 					 non_trigger.eth_rx_queue_producers) +
 				sizeof(struct eth_rx_prod_data) * req->rx_qid;
+		} else {
+			struct ecore_vf_queue *p_queue;
+			u16 qzone_id;
+
+			p_queue = &vf->vf_queues[req->rx_qid];
+			ecore_fw_l2_queue(p_hwfn, p_queue->fw_rx_qid,
+					  &qzone_id);
+
+			p_tlv->offset =
+				MSTORM_QZONE_START(p_dev) +
+				qzone_id * MSTORM_QZONE_SIZE(p_dev) +
+				OFFSETOF(struct mstorm_eth_queue_zone_e5,
+					 rx_producers);
+		}
 	}
 
 	ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
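
Summarizing the two producer locations above: on E4 the Rx producer of a
VF queue sits in the per-VF mstorm zone and is indexed by the VF-relative
rx_qid, while on E5 it sits in the per-queue zone and must be indexed by
the FW queue-zone id. Side by side (same symbols as in the hunk):

	/* E4: per-VF zone, indexed by rx_qid */
	offset = PXP_VF_BAR0_START_MSDM_ZONE_B +
		 OFFSETOF(struct mstorm_vf_zone,
			  non_trigger.eth_rx_queue_producers) +
		 sizeof(struct eth_rx_prod_data) * rx_qid;

	/* E5: per-queue zone, indexed by the FW queue-zone id */
	offset = MSTORM_QZONE_START(p_dev) +
		 qzone_id * MSTORM_QZONE_SIZE(p_dev) +
		 OFFSETOF(struct mstorm_eth_queue_zone_e5, rx_producers);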
@@ -2409,7 +2664,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	params.queue_id = (u8)p_queue->fw_rx_qid;
 	params.vport_id = vf->vport_id;
-	params.stats_id = vf->abs_vf_id + 0x10;
+	params.stats_id = vf->abs_vf_id + ECORE_VF_START_BIN_ID;
 
 	/* Since IGU index is passed via sb_info, construct a dummy one */
 	OSAL_MEM_ZERO(&sb_dummy, sizeof(sb_dummy));
@@ -2428,16 +2683,31 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	if (p_cid == OSAL_NULL)
 		goto out;
 
-	/* The VF Rx producers are located in the vf zone.
-	 * Legacy VFs have their producers in the queue zone, but they
+	/* The VF Rx producers are located in the vf zone in E4, and in the
+	 * queue zone in E5.
+	 * Legacy E4 VFs have their producers in the queue zone, but they
 	 * calculate the location on their own and clean them prior to this.
 	 */
-	if (!(vf_legacy & ECORE_QCID_LEGACY_VF_RX_PROD))
-		REG_WR(p_hwfn,
-		       GTT_BAR0_MAP_REG_MSDM_RAM +
-		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id,
-						  req->rx_qid),
-		       0);
+	if (!(vf_legacy & ECORE_QCID_LEGACY_VF_RX_PROD)) {
+		if (ECORE_IS_E4(p_hwfn->p_dev)) {
+			REG_WR(p_hwfn,
+			       GTT_BAR0_MAP_REG_MSDM_RAM +
+			       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id,
+							  req->rx_qid),
+			       0);
+		} else {
+			u16 qzone_id;
+
+			if (ecore_fw_l2_queue(p_hwfn, p_queue->fw_rx_qid,
+					      &qzone_id))
+				goto out;
+
+			REG_WR(p_hwfn,
+			       GTT_BAR0_MAP_REG_MSDM_RAM +
+			       MSTORM_ETH_PF_PRODS_OFFSET(qzone_id),
+			       0);
+		}
+	}
 
 	rc = ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
 					req->bd_max_bytes,
@@ -2516,7 +2786,8 @@ ecore_iov_pf_validate_tunn_param(struct vfpf_update_tunn_param_tlv *p_req)
 	bool b_update_requested = false;
 
 	if (p_req->tun_mode_update_mask || p_req->update_tun_cls ||
-	    p_req->update_geneve_port || p_req->update_vxlan_port)
+	    p_req->update_geneve_port || p_req->update_vxlan_port ||
+	    p_req->update_non_l2_vxlan)
 		b_update_requested = true;
 
 	return b_update_requested;
@@ -2549,6 +2820,16 @@ static void ecore_iov_vf_mbx_update_tunn_param(struct ecore_hwfn *p_hwfn,
 		goto send_resp;
 	}
 
+	if (p_req->update_non_l2_vxlan) {
+		if (p_req->non_l2_vxlan_enable) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "non_l2_vxlan mode requested by VF\n");
+			ecore_set_vxlan_no_l2_enable(p_hwfn, p_ptt, true);
+		} else {
+			ecore_set_vxlan_no_l2_enable(p_hwfn, p_ptt, false);
+		}
+	}
+
 	tunn.b_update_rx_cls = p_req->update_tun_cls;
 	tunn.b_update_tx_cls = p_req->update_tun_cls;
 
@@ -2642,7 +2923,7 @@ static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
 
 	/* Update the TLV with the response */
 	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy)
-		p_tlv->offset = DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
+		p_tlv->offset = DB_ADDR_VF(p_hwfn->p_dev, cid, DQ_DEMS_LEGACY);
 
 	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, length, status);
 }
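
DB_ADDR_VF() now takes the device so the legacy DEMS doorbell layout can
differ between E4 and E5. One useful consequence, used near the top of
this file, is that the per-CID doorbell stride no longer needs to be
hard-coded; it can be derived from two consecutive CIDs:

	u8 db_size = DB_ADDR_VF(p_hwfn->p_dev, 1, DQ_DEMS_LEGACY) -
		     DB_ADDR_VF(p_hwfn->p_dev, 0, DQ_DEMS_LEGACY);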
@@ -2685,7 +2966,7 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
 	/* Acquire a new queue-cid */
 	params.queue_id = p_queue->fw_tx_qid;
 	params.vport_id = vf->vport_id;
-	params.stats_id = vf->abs_vf_id + 0x10;
+	params.stats_id = vf->abs_vf_id + ECORE_VF_START_BIN_ID;
 
 	/* Since IGU index is passed via sb_info, construct a dummy one */
 	OSAL_MEM_ZERO(&sb_dummy, sizeof(sb_dummy));
@@ -2877,7 +3158,7 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 struct ecore_vf_info *vf)
 {
-	struct ecore_queue_cid *handlers[ECORE_MAX_VF_CHAINS_PER_PF];
+	struct ecore_queue_cid *handlers[ECORE_MAX_QUEUE_VF_CHAINS_PER_PF];
 	u16 length = sizeof(struct pfvf_def_resp_tlv);
 	struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
 	struct vfpf_update_rxq_tlv *req;
@@ -2947,8 +3228,8 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_iov_vf_pf_update_mtu(struct ecore_hwfn *p_hwfn,
-				    struct ecore_ptt *p_ptt,
-				    struct ecore_vf_info *p_vf)
+			   struct ecore_ptt *p_ptt,
+			   struct ecore_vf_info *p_vf)
 {
 	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
 	struct ecore_sp_vport_update_params params;
@@ -2984,22 +3265,187 @@ ecore_iov_vf_pf_update_mtu(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+static u32
+ecore_iov_get_norm_region_conn(struct ecore_hwfn *p_hwfn)
+{
+	u32 roce_cids, core_cids, eth_cids;
+
+	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ROCE, &roce_cids);
+	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_CORE, &core_cids);
+	ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &eth_cids);
+
+	return roce_cids + core_cids + eth_cids;
+}
+
+#define ECORE_DPM_TIMEOUT	1200
+
+enum _ecore_status_t
+ecore_iov_init_vf_doorbell_bar(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt)
+{
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct ecore_common_dpm_info *dpm_info;
+	u32 norm_region_conn, min_addr_reg1;
+	struct ecore_dpi_info *dpi_info;
+	u32 pwm_regsize, norm_regsize;
+	u32 db_bar_size, n_cpus = 1;
+	u32 dems_shift;
+
+	if (!IS_PF_SRIOV(p_hwfn))
+		return ECORE_SUCCESS;
+
+	db_bar_size = ecore_iov_vf_db_bar_size(p_hwfn, p_ptt);
+
+	if (db_bar_size) {
+		db_bar_size = 1 << db_bar_size;
+
+		/* In CMT, doorbell bar should be split over both engines */
+		if (ECORE_IS_CMT(p_hwfn->p_dev))
+			db_bar_size /= 2;
+	} else {
+		db_bar_size = PXP_VF_BAR0_DQ_LENGTH;
+	}
+
+	norm_region_conn = ecore_iov_get_norm_region_conn(p_hwfn);
+	norm_regsize = ROUNDUP(ECORE_VF_DEMS_SIZE * norm_region_conn,
+			       OSAL_PAGE_SIZE);
+	min_addr_reg1 = norm_regsize / 4096;
+	pwm_regsize = db_bar_size - norm_regsize;
+
+	/* Check that the normal and PWM sizes are valid */
+	if (db_bar_size < norm_regsize) {
+		DP_ERR(p_hwfn->p_dev,
+		       "Disabling RDMA for VFs: VF Doorbell BAR size is 0x%x but the normal region required to map the L2 CIDs would be 0x%x\n",
+		       db_bar_size, norm_regsize);
+		return ECORE_NORESOURCES;
+	}
+
+	if (pwm_regsize < ECORE_MIN_PWM_REGION) {
+		DP_ERR(p_hwfn->p_dev,
+		       "Disabling RDMA for VFs: PWM region size 0x%x is too small. Should be at least 0x%x (Doorbell BAR size is 0x%x and normal region size is 0x%x)\n",
+		       pwm_regsize, ECORE_MIN_PWM_REGION, db_bar_size,
+		       norm_regsize);
+		return ECORE_NORESOURCES;
+	}
+
+	dpi_info = &p_hwfn->pf_iov_info->dpi_info;
+	dpm_info = &p_hwfn->pf_iov_info->dpm_info;
+
+	dpi_info->dpi_bit_shift_addr = DORQ_REG_VF_DPI_BIT_SHIFT;
+	dpm_info->vf_cfg = true;
+
+	dpi_info->wid_count = (u16)n_cpus;
+
+	/* Check return codes from above calls */
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(p_hwfn,
+		       "Failed to allocate enough DPIs. Allocated %d but the current minimum is set to %d\n",
+			dpi_info->dpi_count,
+			p_hwfn->pf_params.rdma_pf_params.min_dpis);
+
+		DP_ERR(p_hwfn,
+		       "VF doorbell bar: normal_region_size=0x%x, pwm_region_size=0x%x, dpi_size=0x%x, dpi_count=%d, roce_edpm=%s, page_size=%u\n",
+			norm_regsize, pwm_regsize, dpi_info->dpi_size,
+			dpi_info->dpi_count,
+			ecore_edpm_enabled(p_hwfn, dpm_info) ?
+			"enabled" : "disabled", OSAL_PAGE_SIZE);
+
+		return ECORE_NORESOURCES;
+	}
+
+	DP_INFO(p_hwfn,
+		"VF doorbell bar: db_bar_size=0x%x, normal_region_size=0x%x, pwm_region_size=0x%x, dpi_size=0x%x, dpi_count=%d, roce_edpm=%s, page_size=%u\n",
+		db_bar_size, norm_regsize, pwm_regsize, dpi_info->dpi_size,
+		dpi_info->dpi_count, ecore_edpm_enabled(p_hwfn, dpm_info) ?
+		"enabled" : "disabled", OSAL_PAGE_SIZE);
+
+	/* On E4, when DPM is enabled for a VF, the DPM timeout needs to
+	 * be increased.
+	 */
+	if (ECORE_IS_E4(p_hwfn->p_dev) && ecore_edpm_enabled(p_hwfn, dpm_info))
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_DPM_TIMEOUT,
+			 ECORE_DPM_TIMEOUT);
+
+	dpi_info->dpi_start_offset = norm_regsize;
+
+	/* Update the DORQ registers.
+	 * DEMS size is configured as log2 of DWORDs, hence the division by 4.
+	 */
+	dems_shift = OSAL_LOG2(ECORE_VF_DEMS_SIZE / 4);
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_ICID_BIT_SHIFT_NORM, dems_shift);
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_MIN_ADDR_REG1, min_addr_reg1);
+
+	return ECORE_SUCCESS;
+}
+
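
A worked example of the doorbell-BAR split above, with purely
illustrative numbers (the real ECORE_VF_DEMS_SIZE and L2 connection
count depend on the configuration): assuming a 32-byte DEMS, 96 L2
connections and 4 KB pages,

	norm_regsize  = ROUNDUP(32 * 96, 4096);	/* 3072 rounds up to 4096 */
	min_addr_reg1 = 4096 / 4096;		/* 1, in 4 KB units */
	pwm_regsize   = db_bar_size - 4096;	/* remainder is PWM region */
	dems_shift    = OSAL_LOG2(32 / 4);	/* 3, DEMS in DWORD units */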
+static void
+ecore_iov_vf_pf_async_event_resp(struct ecore_hwfn *p_hwfn,
+				 struct ecore_ptt *p_ptt,
+				 struct ecore_vf_info *p_vf)
+{
+	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+	struct pfvf_async_event_resp_tlv *p_resp;
+	struct event_ring_list_entry *p_eqe_ent;
+	u8 status = PFVF_STATUS_SUCCESS;
+	u8 i;
+
+	mbx->offset = (u8 *)mbx->reply_virt;
+	p_resp = ecore_add_tlv(&mbx->offset, CHANNEL_TLV_ASYNC_EVENT,
+			       sizeof(*p_resp));
+
+	OSAL_SPIN_LOCK(&p_vf->vf_eq_info.eq_list_lock);
+	for (i = 0; i < ECORE_PFVF_EQ_COUNT; i++) {
+		if (OSAL_LIST_IS_EMPTY(&p_vf->vf_eq_info.eq_list))
+			break;
+
+		p_eqe_ent = OSAL_LIST_FIRST_ENTRY(&p_vf->vf_eq_info.eq_list,
+						  struct event_ring_list_entry,
+						  list_entry);
+		OSAL_LIST_REMOVE_ENTRY(&p_eqe_ent->list_entry,
+				       &p_vf->vf_eq_info.eq_list);
+		p_vf->vf_eq_info.eq_list_size--;
+		p_resp->eqs[i] = p_eqe_ent->eqe;
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
+			   "vf %u: Send eq entry - op %x prot %x echo %x\n",
+			   p_eqe_ent->eqe.vf_id, p_eqe_ent->eqe.opcode,
+			   p_eqe_ent->eqe.protocol_id,
+			   OSAL_LE16_TO_CPU(p_eqe_ent->eqe.echo));
+		OSAL_FREE(p_hwfn->p_dev, p_eqe_ent);
+	}
+
+	/* Notify VF about more pending completions */
+	if (!OSAL_LIST_IS_EMPTY(&p_vf->vf_eq_info.eq_list)) {
+		p_vf->bulletin.p_virt->eq_completion++;
+		OSAL_IOV_BULLETIN_UPDATE(p_hwfn);
+	}
+
+	OSAL_SPIN_UNLOCK(&p_vf->vf_eq_info.eq_list_lock);
+
+	p_resp->eq_count = i;
+	ecore_add_tlv(&mbx->offset, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+	ecore_iov_send_response(p_hwfn, p_ptt, p_vf, sizeof(*p_resp), status);
+}
+
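
The async-event responder above is a bounded drain: it pops at most
ECORE_PFVF_EQ_COUNT entries while holding the list lock and, if anything
remains queued, bumps the bulletin's eq_completion counter so the VF
knows to request again. A generic, self-contained sketch of the pattern
(all names hypothetical):

	#include <stdbool.h>
	#include <stddef.h>

	struct eq_node { struct eq_node *next; int payload; };

	/* Pop at most 'max' entries; report whether more remain so the
	 * caller can signal the peer.
	 */
	static size_t drain_bounded(struct eq_node **head, int *out,
				    size_t max, bool *more_pending)
	{
		size_t n = 0;

		while (n < max && *head != NULL) {
			struct eq_node *e = *head;

			*head = e->next;
			out[n++] = e->payload;	/* the real code frees 'e' */
		}
		*more_pending = (*head != NULL);
		return n;
	}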
 void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
-				 void *p_tlvs_list, u16 req_type)
+					void *p_tlvs_list, u16 req_type)
 {
 	struct channel_tlv *p_tlv = (struct channel_tlv *)p_tlvs_list;
 	int len = 0;
 
 	do {
 		if (!p_tlv->length) {
-			DP_NOTICE(p_hwfn, true, "Zero length TLV found\n");
+			DP_NOTICE(p_hwfn, true,
+				  "Zero length TLV found\n");
 			return OSAL_NULL;
 		}
 
 		if (p_tlv->type == req_type) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "Extended tlv type %s, length %d found\n",
-				   qede_ecore_channel_tlvs_string[p_tlv->type],
+				   ecore_channel_tlvs_string[p_tlv->type],
 				   p_tlv->length);
 			return p_tlv;
 		}
@@ -3026,7 +3472,8 @@ ecore_iov_vp_update_act_param(struct ecore_hwfn *p_hwfn,
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
 
 	p_act_tlv = (struct vfpf_vport_update_activate_tlv *)
-	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+		    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+					       tlv);
 	if (!p_act_tlv)
 		return;
 
@@ -3047,7 +3494,8 @@ ecore_iov_vp_update_vlan_param(struct ecore_hwfn *p_hwfn,
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP;
 
 	p_vlan_tlv = (struct vfpf_vport_update_vlan_strip_tlv *)
-	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+		     ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+						tlv);
 	if (!p_vlan_tlv)
 		return;
 
@@ -3071,15 +3519,14 @@ ecore_iov_vp_update_tx_switch(struct ecore_hwfn *p_hwfn,
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
 
 	p_tx_switch_tlv = (struct vfpf_vport_update_tx_switch_tlv *)
-	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+			  ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+						     tlv);
 	if (!p_tx_switch_tlv)
 		return;
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false,
-			  "FPGA: Ignore tx-switching configuration originating"
-			  " from VFs\n");
+		DP_NOTICE(p_hwfn, false, "FPGA: Ignore tx-switching configuration originating from VFs\n");
 		return;
 	}
 #endif
@@ -3099,7 +3546,8 @@ ecore_iov_vp_update_mcast_bin_param(struct ecore_hwfn *p_hwfn,
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_MCAST;
 
 	p_mcast_tlv = (struct vfpf_vport_update_mcast_bin_tlv *)
-	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+		      ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+						 tlv);
 	if (!p_mcast_tlv)
 		return;
 
@@ -3119,7 +3567,8 @@ ecore_iov_vp_update_accept_flag(struct ecore_hwfn *p_hwfn,
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
 
 	p_accept_tlv = (struct vfpf_vport_update_accept_param_tlv *)
-	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+		       ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+						  tlv);
 	if (!p_accept_tlv)
 		return;
 
@@ -3140,7 +3589,8 @@ ecore_iov_vp_update_accept_any_vlan(struct ecore_hwfn *p_hwfn,
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN;
 
 	p_accept_any_vlan = (struct vfpf_vport_update_accept_any_vlan_tlv *)
-	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+			    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+						       tlv);
 	if (!p_accept_any_vlan)
 		return;
 
@@ -3165,7 +3615,8 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 	u16 i, q_idx;
 
 	p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
-	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+		    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+					       tlv);
 	if (!p_rss_tlv) {
 		p_data->rss_params = OSAL_NULL;
 		return;
@@ -3173,18 +3624,14 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MEMSET(p_rss, 0, sizeof(struct ecore_rss_params));
 
-	p_rss->update_rss_config =
-	    !!(p_rss_tlv->update_rss_flags &
-		VFPF_UPDATE_RSS_CONFIG_FLAG);
-	p_rss->update_rss_capabilities =
-	    !!(p_rss_tlv->update_rss_flags &
-		VFPF_UPDATE_RSS_CAPS_FLAG);
-	p_rss->update_rss_ind_table =
-	    !!(p_rss_tlv->update_rss_flags &
-		VFPF_UPDATE_RSS_IND_TABLE_FLAG);
-	p_rss->update_rss_key =
-	    !!(p_rss_tlv->update_rss_flags &
-		VFPF_UPDATE_RSS_KEY_FLAG);
+	p_rss->update_rss_config = !!(p_rss_tlv->update_rss_flags &
+				      VFPF_UPDATE_RSS_CONFIG_FLAG);
+	p_rss->update_rss_capabilities = !!(p_rss_tlv->update_rss_flags &
+					    VFPF_UPDATE_RSS_CAPS_FLAG);
+	p_rss->update_rss_ind_table = !!(p_rss_tlv->update_rss_flags &
+					 VFPF_UPDATE_RSS_IND_TABLE_FLAG);
+	p_rss->update_rss_key = !!(p_rss_tlv->update_rss_flags &
+				   VFPF_UPDATE_RSS_KEY_FLAG);
 
 	p_rss->rss_enable = p_rss_tlv->rss_enable;
 	p_rss->rss_eng_id = vf->rss_eng_id;
@@ -3231,7 +3678,8 @@ ecore_iov_vp_update_sge_tpa_param(struct ecore_hwfn *p_hwfn,
 	u16 tlv = CHANNEL_TLV_VPORT_UPDATE_SGE_TPA;
 
 	p_sge_tpa_tlv = (struct vfpf_vport_update_sge_tpa_tlv *)
-	    ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+			ecore_iov_search_list_tlvs(p_hwfn,
+						   p_mbx->req_virt, tlv);
 
 	if (!p_sge_tpa_tlv) {
 		p_data->sge_tpa_params = OSAL_NULL;
@@ -3241,27 +3689,42 @@ ecore_iov_vp_update_sge_tpa_param(struct ecore_hwfn *p_hwfn,
 	OSAL_MEMSET(p_sge_tpa, 0, sizeof(struct ecore_sge_tpa_params));
 
 	p_sge_tpa->update_tpa_en_flg =
-	    !!(p_sge_tpa_tlv->update_sge_tpa_flags & VFPF_UPDATE_TPA_EN_FLAG);
+		!!(p_sge_tpa_tlv->update_sge_tpa_flags &
+		   VFPF_UPDATE_TPA_EN_FLAG);
 	p_sge_tpa->update_tpa_param_flg =
-	    !!(p_sge_tpa_tlv->update_sge_tpa_flags &
-		VFPF_UPDATE_TPA_PARAM_FLAG);
+		!!(p_sge_tpa_tlv->update_sge_tpa_flags &
+		   VFPF_UPDATE_TPA_PARAM_FLAG);
 
 	p_sge_tpa->tpa_ipv4_en_flg =
-	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_IPV4_EN_FLAG);
+		!!(p_sge_tpa_tlv->sge_tpa_flags &
+		   VFPF_TPA_IPV4_EN_FLAG);
 	p_sge_tpa->tpa_ipv6_en_flg =
-	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_IPV6_EN_FLAG);
+		!!(p_sge_tpa_tlv->sge_tpa_flags &
+		   VFPF_TPA_IPV6_EN_FLAG);
 	p_sge_tpa->tpa_pkt_split_flg =
-	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_PKT_SPLIT_FLAG);
+		!!(p_sge_tpa_tlv->sge_tpa_flags &
+		   VFPF_TPA_PKT_SPLIT_FLAG);
 	p_sge_tpa->tpa_hdr_data_split_flg =
-	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_HDR_DATA_SPLIT_FLAG);
+		!!(p_sge_tpa_tlv->sge_tpa_flags &
+		   VFPF_TPA_HDR_DATA_SPLIT_FLAG);
 	p_sge_tpa->tpa_gro_consistent_flg =
-	    !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_GRO_CONSIST_FLAG);
+		!!(p_sge_tpa_tlv->sge_tpa_flags &
+		   VFPF_TPA_GRO_CONSIST_FLAG);
+	p_sge_tpa->tpa_ipv4_tunn_en_flg =
+		!!(p_sge_tpa_tlv->sge_tpa_flags &
+		   VFPF_TPA_TUNN_IPV4_EN_FLAG);
+	p_sge_tpa->tpa_ipv6_tunn_en_flg =
+		!!(p_sge_tpa_tlv->sge_tpa_flags &
+		   VFPF_TPA_TUNN_IPV6_EN_FLAG);
 
 	p_sge_tpa->tpa_max_aggs_num = p_sge_tpa_tlv->tpa_max_aggs_num;
 	p_sge_tpa->tpa_max_size = p_sge_tpa_tlv->tpa_max_size;
-	p_sge_tpa->tpa_min_size_to_start = p_sge_tpa_tlv->tpa_min_size_to_start;
-	p_sge_tpa->tpa_min_size_to_cont = p_sge_tpa_tlv->tpa_min_size_to_cont;
-	p_sge_tpa->max_buffers_per_cqe = p_sge_tpa_tlv->max_buffers_per_cqe;
+	p_sge_tpa->tpa_min_size_to_start =
+		p_sge_tpa_tlv->tpa_min_size_to_start;
+	p_sge_tpa->tpa_min_size_to_cont =
+		p_sge_tpa_tlv->tpa_min_size_to_cont;
+	p_sge_tpa->max_buffers_per_cqe =
+		p_sge_tpa_tlv->max_buffers_per_cqe;
 
 	p_data->sge_tpa_params = p_sge_tpa;
 
@@ -3284,8 +3747,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	/* Validate PF can send such a request */
 	if (!vf->vport_instance) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "No VPORT instance available for VF[%d],"
-			   " failing vport update\n",
+			   "No VPORT instance available for VF[%d], failing vport update\n",
 			   vf->abs_vf_id);
 		status = PFVF_STATUS_FAILURE;
 		goto out;
@@ -3298,7 +3760,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	}
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
 	params.opaque_fid = vf->opaque_fid;
 	params.vport_id = vf->vport_id;
 	params.rss_params = OSAL_NULL;
 
@@ -3323,6 +3785,24 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	ecore_iov_vp_update_rss_param(p_hwfn, vf, &params, p_rss_params,
 				      mbx, &tlvs_mask, &tlvs_accepted);
 
+	/* Check whether skipping accept flags update is required, e.g. when
+	 * Switchdev mode is enabled. If this is the case and the VF requested
+	 * it, the accept flags update will be skipped and the VF will be
+	 * responded with PFVF_STATUS_SUCCESS.
+	 */
+	if (ecore_get_skip_accept_flags_update(p_hwfn)) {
+		u16 accept_param = 1 << ECORE_IOV_VP_UPDATE_ACCEPT_PARAM;
+
+		if (tlvs_accepted & accept_param) {
+			if (tlvs_accepted == accept_param) {
+				goto out;
+			} else {
+				params.accept_flags.update_rx_mode_config = 0;
+				params.accept_flags.update_tx_mode_config = 0;
+			}
+		}
+	}
+
 	/* Just log a message if there is no single extended tlv in buffer.
 	 * When all features of vport update ramrod would be requested by VF
 	 * as extended TLVs in buffer then an error can be returned in response
@@ -3339,8 +3819,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	if (!tlvs_accepted) {
 		if (tlvs_mask)
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "Upper-layer prevents said VF"
-				   " configuration\n");
+				   "Upper-layer prevents said VF configuration\n");
 		else
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "No feature tlvs found for vport update\n");
@@ -3361,10 +3840,9 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
 	ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
 }
 
-static enum _ecore_status_t
-ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn,
-				struct ecore_vf_info *p_vf,
-				struct ecore_filter_ucast *p_params)
+static enum _ecore_status_t ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn,
+							    struct ecore_vf_info *p_vf,
+							    struct ecore_filter_ucast *p_params)
 {
 	int i;
 
@@ -3379,9 +3857,8 @@ ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn,
 			}
 		if (i == ECORE_ETH_VF_NUM_VLAN_FILTERS + 1) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF [%d] - Tries to remove a non-existing"
-				   " vlan\n",
-				   p_vf->relative_vf_id);
+				   "VF [%d] - Tries to remove a non-existing vlan\n",
+				   p_vf->relative_vf_id);
 			return ECORE_INVAL;
 		}
 	} else if (p_params->opcode == ECORE_FILTER_REPLACE ||
@@ -3409,8 +3886,7 @@ ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn,
 
 		if (i == ECORE_ETH_VF_NUM_VLAN_FILTERS + 1) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF [%d] - Tries to configure more than %d"
-				   " vlan filters\n",
+				   "VF [%d] - Tries to configure more than %d vlan filters\n",
 				   p_vf->relative_vf_id,
 				   ECORE_ETH_VF_NUM_VLAN_FILTERS + 1);
 			return ECORE_INVAL;
@@ -3420,15 +3896,14 @@ ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t
-ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn,
-			       struct ecore_vf_info *p_vf,
-			       struct ecore_filter_ucast *p_params)
+static enum _ecore_status_t ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn,
+							   struct ecore_vf_info *p_vf,
+							   struct ecore_filter_ucast *p_params)
 {
-	char empty_mac[ETH_ALEN];
+	char empty_mac[ECORE_ETH_ALEN];
 	int i;
 
-	OSAL_MEM_ZERO(empty_mac, ETH_ALEN);
+	OSAL_MEM_ZERO(empty_mac, ECORE_ETH_ALEN);
 
 	/* If we're in forced-mode, we don't allow any change */
 	/* TODO - this would change if we were ever to implement logic for
@@ -3451,9 +3926,9 @@ ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn,
 	if (p_params->opcode == ECORE_FILTER_REMOVE) {
 		for (i = 0; i < ECORE_ETH_VF_NUM_MAC_FILTERS; i++) {
 			if (!OSAL_MEMCMP(p_vf->shadow_config.macs[i],
-					 p_params->mac, ETH_ALEN)) {
+					 p_params->mac, ECORE_ETH_ALEN)) {
 				OSAL_MEM_ZERO(p_vf->shadow_config.macs[i],
-					      ETH_ALEN);
+					      ECORE_ETH_ALEN);
 				break;
 			}
 		}
@@ -3466,7 +3941,7 @@ ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn,
 	} else if (p_params->opcode == ECORE_FILTER_REPLACE ||
 		   p_params->opcode == ECORE_FILTER_FLUSH) {
 		for (i = 0; i < ECORE_ETH_VF_NUM_MAC_FILTERS; i++)
-			OSAL_MEM_ZERO(p_vf->shadow_config.macs[i], ETH_ALEN);
+			OSAL_MEM_ZERO(p_vf->shadow_config.macs[i], ECORE_ETH_ALEN);
 	}
 
 	/* List the new MAC address */
@@ -3476,9 +3951,9 @@ ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn,
 
 	for (i = 0; i < ECORE_ETH_VF_NUM_MAC_FILTERS; i++) {
 		if (!OSAL_MEMCMP(p_vf->shadow_config.macs[i],
-				 empty_mac, ETH_ALEN)) {
+				 empty_mac, ECORE_ETH_ALEN)) {
 			OSAL_MEMCPY(p_vf->shadow_config.macs[i],
-				    p_params->mac, ETH_ALEN);
+				    p_params->mac, ECORE_ETH_ALEN);
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "Added MAC at %d entry in shadow\n", i);
 			break;
@@ -3524,6 +3999,13 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
 	struct ecore_filter_ucast params;
 	enum _ecore_status_t rc;
 
+	/* Check whether skipping unicast filter update is required, e.g. when
+	 * Switchdev mode is enabled. If this is the case, the VF will be
+	 * responded with PFVF_STATUS_SUCCESS.
+	 */
+	if (ecore_get_skip_ucast_filter_update(p_hwfn))
+		goto out;
+
 	/* Prepare the unicast filter params */
 	OSAL_MEMSET(&params, 0, sizeof(struct ecore_filter_ucast));
 	req = &mbx->req_virt->ucast_filter;
@@ -3535,12 +4017,11 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
 	params.is_tx_filter = 1;
 	params.vport_to_remove_from = vf->vport_id;
 	params.vport_to_add_to = vf->vport_id;
-	OSAL_MEMCPY(params.mac, req->mac, ETH_ALEN);
+	OSAL_MEMCPY(params.mac, req->mac, ECORE_ETH_ALEN);
 	params.vlan = req->vlan;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "VF[%d]: opcode 0x%02x type 0x%02x [%s %s] [vport 0x%02x]"
-		   " MAC %02x:%02x:%02x:%02x:%02x:%02x, vlan 0x%04x\n",
+		   "VF[%d]: opcode 0x%02x type 0x%02x [%s %s] [vport 0x%02x] MAC %02x:%02x:%02x:%02x:%02x:%02x, vlan 0x%04x\n",
 		   vf->abs_vf_id, params.opcode, params.type,
 		   params.is_rx_filter ? "RX" : "",
 		   params.is_tx_filter ? "TX" : "",
@@ -3550,8 +4031,7 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
 
 	if (!vf->vport_instance) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "No VPORT instance available for VF[%d],"
-			   " failing ucast MAC configuration\n",
+			   "No VPORT instance available for VF[%d], failing ucast MAC configuration\n",
 			   vf->abs_vf_id);
 		status = PFVF_STATUS_FAILURE;
 		goto out;
@@ -3581,7 +4061,7 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
 	if ((p_bulletin->valid_bitmap & (1 << MAC_ADDR_FORCED)) &&
 	    (params.type == ECORE_FILTER_MAC ||
 	     params.type == ECORE_FILTER_MAC_VLAN)) {
-		if (OSAL_MEMCMP(p_bulletin->mac, params.mac, ETH_ALEN) ||
+		if (OSAL_MEMCMP(p_bulletin->mac, params.mac, ECORE_ETH_ALEN) ||
 		    (params.opcode != ECORE_FILTER_ADD &&
 		     params.opcode != ECORE_FILTER_REPLACE))
 			status = PFVF_STATUS_FORCED;
@@ -3606,6 +4086,43 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
 			       sizeof(struct pfvf_def_resp_tlv), status);
 }
 
+static enum _ecore_status_t
+ecore_iov_vf_pf_bulletin_update_mac(struct ecore_hwfn *p_hwfn,
+				    struct ecore_ptt *p_ptt,
+				    struct ecore_vf_info *p_vf)
+{
+	struct ecore_bulletin_content *p_bulletin = p_vf->bulletin.p_virt;
+	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+	struct vfpf_bulletin_update_mac_tlv *p_req;
+	u8 status = PFVF_STATUS_SUCCESS;
+	u8 *p_mac;
+
+	if (!p_vf->p_vf_info.is_trusted_configured) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Blocking bulletin update request from untrusted VF[%d]\n",
+			   p_vf->abs_vf_id);
+		status = PFVF_STATUS_NOT_SUPPORTED;
+		rc = ECORE_INVAL;
+		goto send_status;
+	}
+
+	p_req = &mbx->req_virt->bulletin_update_mac;
+	p_mac = p_req->mac;
+	OSAL_MEMCPY(p_bulletin->mac, p_mac, ECORE_ETH_ALEN);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Updated bulletin of VF[%d] with requested MAC[%02x:%02x:%02x:%02x:%02x:%02x]\n",
+		   p_vf->abs_vf_id,
+		   p_mac[0], p_mac[1], p_mac[2], p_mac[3], p_mac[4], p_mac[5]);
+
+send_status:
+	ecore_iov_prepare_resp(p_hwfn, p_ptt, p_vf,
+			       CHANNEL_TLV_BULLETIN_UPDATE_MAC,
+			       sizeof(struct pfvf_def_resp_tlv),
+			       status);
+	return rc;
+}
+
 static void ecore_iov_vf_mbx_int_cleanup(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt,
 					 struct ecore_vf_info *vf)
@@ -3625,10 +4142,10 @@ static void ecore_iov_vf_mbx_int_cleanup(struct ecore_hwfn *p_hwfn,
 
 static void ecore_iov_vf_mbx_close(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt,
-				   struct ecore_vf_info *vf)
+				   struct ecore_vf_info	*vf)
 {
-	u16 length = sizeof(struct pfvf_def_resp_tlv);
-	u8 status = PFVF_STATUS_SUCCESS;
+	u16                      length = sizeof(struct pfvf_def_resp_tlv);
+	u8                       status = PFVF_STATUS_SUCCESS;
 
 	/* Disable Interrupts for VF */
 	ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0);
@@ -3661,6 +4178,18 @@ static void ecore_iov_vf_mbx_release(struct ecore_hwfn *p_hwfn,
 			status = PFVF_STATUS_FAILURE;
 		}
 
+		/* Load requests for this VF will result in a load-cancel
+		 * response and an execution error - pretend to the specific VF
+		 * for the split.
+		 */
+		ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_vf->concrete_fid);
+
+		ecore_wr(p_hwfn, p_ptt, CCFC_REG_WEAK_ENABLE_VF, 0x0);
+		ecore_wr(p_hwfn, p_ptt, TCFC_REG_WEAK_ENABLE_VF, 0x0);
+
+		/* unpretend */
+		ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
+
 		p_vf->state = VF_STOPPED;
 	}
 
@@ -3910,7 +4439,8 @@ ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
-			   struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
+			   struct ecore_vf_info *p_vf,
+			   struct ecore_ptt *p_ptt)
 {
 	int cnt;
 	u32 val;
@@ -3923,11 +4453,11 @@ ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 			break;
 		OSAL_MSLEEP(20);
 	}
+
 	ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
 
 	if (cnt == 50) {
-		DP_ERR(p_hwfn,
-		       "VF[%d] - dorq failed to cleanup [usage 0x%08x]\n",
+		DP_ERR(p_hwfn, "VF[%d] - dorq failed to cleanup [usage 0x%08x]\n",
 		       p_vf->abs_vf_id, val);
 		return ECORE_TIMEOUT;
 	}
@@ -3939,7 +4469,8 @@ ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn,
-			  struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
+			  struct ecore_vf_info *p_vf,
+			  struct ecore_ptt *p_ptt)
 {
 	u32 prod, cons[MAX_NUM_EXT_VOQS], distance[MAX_NUM_EXT_VOQS], tmp;
 	u8 max_phys_tcs_per_port = p_hwfn->qm_info.max_phys_tcs_per_port;
@@ -3949,6 +4480,9 @@ ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn,
 	u8 port_id, tc, tc_id = 0, voq = 0;
 	int cnt;
 
+	OSAL_MEM_ZERO(cons, MAX_NUM_EXT_VOQS * sizeof(u32));
+	OSAL_MEM_ZERO(distance, MAX_NUM_EXT_VOQS * sizeof(u32));
+
 	/* Read initial consumers & producers */
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
 		/* "max_phys_tcs_per_port" active TCs + 1 pure LB TC */
@@ -3956,10 +4490,11 @@ ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn,
 			tc_id = (tc < max_phys_tcs_per_port) ?
 				tc :
 				PURE_LB_TC;
-			voq = VOQ(port_id, tc_id, max_phys_tcs_per_port);
+			voq = ecore_get_ext_voq(p_hwfn, port_id, tc_id,
+						max_phys_tcs_per_port);
 			cons[voq] = ecore_rd(p_hwfn, p_ptt,
 					     cons_voq0_addr + voq * 0x40);
-		prod = ecore_rd(p_hwfn, p_ptt,
+			prod = ecore_rd(p_hwfn, p_ptt,
 					prod_voq0_addr + voq * 0x40);
 			distance[voq] = prod - cons[voq];
 		}
@@ -3975,13 +4510,13 @@ ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn,
 				tc_id = (tc < max_phys_tcs_per_port) ?
 					tc :
 					PURE_LB_TC;
-				voq = VOQ(port_id, tc_id,
-					  max_phys_tcs_per_port);
-			tmp = ecore_rd(p_hwfn, p_ptt,
+				voq = ecore_get_ext_voq(p_hwfn, port_id, tc_id,
+							max_phys_tcs_per_port);
+				tmp = ecore_rd(p_hwfn, p_ptt,
 					       cons_voq0_addr + voq * 0x40);
-			if (distance[voq] > tmp - cons[voq])
-				break;
-		}
+				if (distance[voq] > tmp - cons[voq])
+					break;
+			}
 
 			if (tc == max_phys_tcs_per_port + 1)
 				tc = 0;
@@ -4011,7 +4546,10 @@ static enum _ecore_status_t ecore_iov_vf_flr_poll(struct ecore_hwfn *p_hwfn,
 {
 	enum _ecore_status_t rc;
 
-	/* TODO - add SRC and TM polling once we add storage IOV */
+	/* TODO - add SRC polling and TM task once we add storage/iwarp IOV */
+	rc = ecore_iov_timers_stop(p_hwfn, p_vf, p_ptt);
+	if (rc)
+		return rc;
 
 	rc = ecore_iov_vf_flr_poll_dorq(p_hwfn, p_vf, p_ptt);
 	if (rc)
@@ -4026,8 +4564,9 @@ static enum _ecore_status_t ecore_iov_vf_flr_poll(struct ecore_hwfn *p_hwfn,
 
 static enum _ecore_status_t
 ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
-				 struct ecore_ptt *p_ptt,
-				 u16 rel_vf_id, u32 *ack_vfs)
+				 struct ecore_ptt  *p_ptt,
+				 u16		   rel_vf_id,
+				 u32		   *ack_vfs)
 {
 	struct ecore_vf_info *p_vf;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
@@ -4036,9 +4575,10 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 	if (!p_vf)
 		return ECORE_SUCCESS;
 
-	if (p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &
-	    (1ULL << (rel_vf_id % 64))) {
+	if (ECORE_VF_ARRAY_GET_VFID(p_hwfn->pf_iov_info->pending_flr,
+				    rel_vf_id)) {
 		u16 vfid = p_vf->abs_vf_id;
+		int i;
 
 		/* TODO - should we lock channel? */
 
@@ -4059,7 +4599,8 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 		rc = ecore_final_cleanup(p_hwfn, p_ptt, vfid, true);
 		if (rc) {
 			/* TODO - what's now? What a mess.... */
-			DP_ERR(p_hwfn, "Failed handle FLR of VF[%d]\n", vfid);
+			DP_ERR(p_hwfn, "Failed to handle FLR of VF[%d]\n", vfid);
 			return rc;
 		}
 
@@ -4087,18 +4628,20 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 		if (p_vf->state == VF_RESET)
 			p_vf->state = VF_STOPPED;
 		ack_vfs[vfid / 32] |= (1 << (vfid % 32));
-		p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &=
-		    ~(1ULL << (rel_vf_id % 64));
+		ECORE_VF_ARRAY_CLEAR_VFID(p_hwfn->pf_iov_info->pending_flr,
+					  rel_vf_id);
 		p_vf->vf_mbx.b_pending_msg = false;
 	}
 
 	return rc;
 }
 
-enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
-					      struct ecore_ptt *p_ptt)
+LNX_STATIC enum _ecore_status_t
+ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 {
 	u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
+	u32 err_data, err_valid, int_sts;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u16 i;
 
@@ -4110,6 +4653,15 @@ enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 	 */
 	OSAL_MSLEEP(100);
 
+	/* Clear expected VF_DISABLED_ACCESS error after pci passthrough */
+	err_data = ecore_rd(p_hwfn, p_ptt, PSWHST_REG_VF_DISABLED_ERROR_DATA);
+	err_valid = ecore_rd(p_hwfn, p_ptt, PSWHST_REG_VF_DISABLED_ERROR_VALID);
+	int_sts = ecore_rd(p_hwfn, p_ptt, PSWHST_REG_INT_STS_CLR);
+	if (int_sts && int_sts != PSWHST_REG_INT_STS_CLR_HST_VF_DISABLED_ACCESS)
+		DP_NOTICE(p_hwfn, false,
+			  "PSWHST int_sts was 0x%x, err_data = 0x%x, err_valid = 0x%x\n",
+			  int_sts, err_data, err_valid);
+
 	for (i = 0; i < p_hwfn->p_dev->p_iov_info->total_vfs; i++)
 		ecore_iov_execute_vf_flr_cleanup(p_hwfn, p_ptt, i, ack_vfs);
 
@@ -4119,7 +4671,9 @@ enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 
 enum _ecore_status_t
 ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
-				struct ecore_ptt *p_ptt, u16 rel_vf_id)
+				struct ecore_ptt  *p_ptt,
+				u16		  rel_vf_id)
 {
 	u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
 	enum _ecore_status_t rc = ECORE_SUCCESS;
@@ -4135,14 +4689,18 @@ ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
+bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn,
+			  u32 *p_disabled_vfs)
 {
+	u16 i, vf_bitmap_size;
 	bool found = false;
-	u16 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
 
-	for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++)
+	vf_bitmap_size = ECORE_IS_E4(p_hwfn->p_dev) ?
+			 VF_BITMAP_SIZE_IN_DWORDS :
+			 EXT_VF_BITMAP_SIZE_IN_DWORDS;
+	for (i = 0; i < vf_bitmap_size; i++)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "[%08x,...,%08x]: %08x\n",
 			   i * 32, (i + 1) * 32 - 1, p_disabled_vfs[i]);
@@ -4163,7 +4721,7 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 
 		vfid = p_vf->abs_vf_id;
 		if ((1 << (vfid % 32)) & p_disabled_vfs[vfid / 32]) {
 			u64 *p_flr = p_hwfn->pf_iov_info->pending_flr;
 			u16 rel_vf_id = p_vf->relative_vf_id;
 
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -4177,7 +4735,7 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 			 * MFW will not trigger an additional attention for
 			 * VF flr until ACKs, we're safe.
 			 */
-			p_flr[rel_vf_id / 64] |= 1ULL << (rel_vf_id % 64);
+			ECORE_VF_ARRAY_SET_VFID(p_flr, rel_vf_id);
 			found = true;
 		}
 	}
@@ -4185,11 +4743,11 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 	return found;
 }
 
-void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
-			u16 vfid,
-			struct ecore_mcp_link_params *p_params,
-			struct ecore_mcp_link_state *p_link,
-			struct ecore_mcp_link_capabilities *p_caps)
+LNX_STATIC void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
+				   u16 vfid,
+				   struct ecore_mcp_link_params *p_params,
+				   struct ecore_mcp_link_state *p_link,
+				   struct ecore_mcp_link_capabilities *p_caps)
 {
 	struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
 	struct ecore_bulletin_content *p_bulletin;
@@ -4207,8 +4765,24 @@ void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
 		__ecore_vf_get_link_caps(p_caps, p_bulletin);
 }
 
-void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
-			       struct ecore_ptt *p_ptt, int vfid)
+static bool
+ecore_iov_vf_has_error(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+		       struct ecore_vf_info *p_vf)
+{
+	u16 abs_vf_id = ECORE_VF_ABS_ID(p_hwfn, p_vf);
+	u32 hw_addr;
+
+	hw_addr = PGLUE_B_REG_WAS_ERROR_VF_31_0 + (abs_vf_id >> 5) * 4;
+
+	if (ecore_rd(p_hwfn, p_ptt, hw_addr) & (1 << (abs_vf_id & 0x1f)))
+		return true;
+
+	return false;
+}
+
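
The was-error lookup above packs one bit per VF into consecutive 32-bit
registers starting at PGLUE_B_REG_WAS_ERROR_VF_31_0, hence the >> 5 and
& 0x1f arithmetic. For example, for abs_vf_id 70:

	u32 addr = PGLUE_B_REG_WAS_ERROR_VF_31_0 + (70 >> 5) * 4; /* dword 2 */
	u32 mask = 1 << (70 & 0x1f);		/* bit 6 of that dword */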
+LNX_STATIC void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt,
+					  int vfid)
 {
 	struct ecore_iov_vf_mbx *mbx;
 	struct ecore_vf_info *p_vf;
@@ -4243,8 +4817,8 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 	/* Lock the per vf op mutex and note the locker's identity.
 	 * The unlock will take place in mbx response.
 	 */
-	ecore_iov_lock_vf_pf_channel(p_hwfn,
-				     p_vf, mbx->first_tlv.tl.type);
+	ecore_iov_lock_vf_pf_channel(p_hwfn, p_vf,
+				     mbx->first_tlv.tl.type);
 
 	/* check if tlv type is known */
 	if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type) &&
@@ -4299,9 +4873,19 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		case CHANNEL_TLV_COALESCE_READ:
 			ecore_iov_vf_pf_get_coalesce(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_BULLETIN_UPDATE_MAC:
+			ecore_iov_vf_pf_bulletin_update_mac(p_hwfn, p_ptt,
+							    p_vf);
+			break;
 		case CHANNEL_TLV_UPDATE_MTU:
 			ecore_iov_vf_pf_update_mtu(p_hwfn, p_ptt, p_vf);
 			break;
+		case CHANNEL_TLV_ASYNC_EVENT:
+			ecore_iov_vf_pf_async_event_resp(p_hwfn, p_ptt, p_vf);
+			break;
+		case CHANNEL_TLV_SOFT_FLR:
+			ecore_mcp_vf_flr(p_hwfn, p_ptt, p_vf->relative_vf_id);
+			break;
 		}
 	} else if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
 		/* If we've received a message from a VF we consider malicious
@@ -4332,13 +4916,23 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		 * VF driver and is sending garbage over the channel.
 		 */
 		DP_NOTICE(p_hwfn, false,
-			  "VF[%02x]: unknown TLV. type %04x length %04x"
-			  " padding %08x reply address %lu\n",
+			  "VF[%02x]: unknown TLV. type %04x length %04x padding %08x "
+			  "reply address %" PRIu64 "\n",
 			  p_vf->abs_vf_id,
 			  mbx->first_tlv.tl.type,
 			  mbx->first_tlv.tl.length,
 			  mbx->first_tlv.padding,
-			  (unsigned long)mbx->first_tlv.reply_address);
+			  mbx->first_tlv.reply_address);
+
+		if (ecore_iov_vf_has_error(p_hwfn, p_ptt, p_vf)) {
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+				   "VF abs[%d]:rel[%d] has error, issue FLR\n",
+				   p_vf->abs_vf_id, p_vf->relative_vf_id);
+			ecore_mcp_vf_flr(p_hwfn, p_ptt, p_vf->relative_vf_id);
+			ecore_iov_unlock_vf_pf_channel(p_hwfn, p_vf,
+						       mbx->first_tlv.tl.type);
+			return;
+		}
 
 		/* Try replying in case reply address matches the acquisition's
 		 * posted address.
@@ -4352,8 +4946,7 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 					       PFVF_STATUS_NOT_SUPPORTED);
 		else
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF[%02x]: Can't respond to TLV -"
-				   " no valid reply address\n",
+				   "VF[%02x]: Can't respond to TLV - no valid reply address\n",
 				   p_vf->abs_vf_id);
 	}
 
@@ -4366,8 +4959,8 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 #endif
 }
 
-void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn,
-				     u64 *events)
+LNX_STATIC void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn,
+						u64 *events)
 {
 	int i;
 
@@ -4378,7 +4971,7 @@ void ecore_iov_pf_get_pending_events(struct ecore_hwfn *p_hwfn,
 
 		p_vf = &p_hwfn->pf_iov_info->vfs_array[i];
 		if (p_vf->vf_mbx.b_pending_msg)
-			events[i / 64] |= 1ULL << (i % 64);
+			ECORE_VF_ARRAY_SET_VFID(events, i);
 	}
 }
 
@@ -4389,8 +4982,7 @@ ecore_sriov_get_vf_from_absid(struct ecore_hwfn *p_hwfn, u16 abs_vfid)
 
 	if (!_ecore_iov_pf_sanity_check(p_hwfn, (int)abs_vfid - min, false)) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Got indication for VF [abs 0x%08x] that cannot be"
-			   " handled by PF\n",
+			   "Got indication for VF [abs 0x%08x] that cannot be handled by PF\n",
 			   abs_vfid);
 		return OSAL_NULL;
 	}
@@ -4411,7 +5003,8 @@ static enum _ecore_status_t ecore_sriov_vfpf_msg(struct ecore_hwfn *p_hwfn,
 	/* List the physical address of the request so that handler
 	 * could later on copy the message from it.
 	 */
-	p_vf->vf_mbx.pending_req = (((u64)vf_msg->hi) << 32) | vf_msg->lo;
+	p_vf->vf_mbx.pending_req = (((u64)vf_msg->hi) << 32) |
+				   vf_msg->lo;
 
 	p_vf->vf_mbx.b_pending_msg = true;
 
@@ -4447,7 +5040,8 @@ static enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
 						  u8 opcode,
 						  __le16 echo,
 						  union event_ring_data *data,
-						  u8 OSAL_UNUSED fw_return_code)
+						  u8 OSAL_UNUSED fw_return_code,
+						  u8 OSAL_UNUSED vf_id)
 {
 	switch (opcode) {
 	case COMMON_EVENT_VF_PF_CHANNEL:
@@ -4467,10 +5061,11 @@ static enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn,
+				 u16		   rel_vf_id)
 {
-	return !!(p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &
-		   (1ULL << (rel_vf_id % 64)));
+	return !!(ECORE_VF_ARRAY_GET_VFID(p_hwfn->pf_iov_info->pending_flr,
+					  rel_vf_id));
 }
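
The ECORE_VF_ARRAY_* helpers used throughout these hunks are defined in
a header outside this patch; judging from the open-coded u64 bitmap
operations they replace (e.g. p_flr[id / 64] |= 1ULL << (id % 64)), a
plausible shape is:

	/* sketch only - the actual definitions live in the ecore headers */
	#define ECORE_VF_ARRAY_SET_VFID(arr, id) \
		((arr)[(id) / 64] |= (1ULL << ((id) % 64)))
	#define ECORE_VF_ARRAY_CLEAR_VFID(arr, id) \
		((arr)[(id) / 64] &= ~(1ULL << ((id) % 64)))
	#define ECORE_VF_ARRAY_GET_VFID(arr, id) \
		(!!((arr)[(id) / 64] & (1ULL << ((id) % 64))))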
 
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
@@ -4482,15 +5077,17 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 		goto out;
 
 	for (i = rel_vf_id; i < p_iov->total_vfs; i++)
-		if (ecore_iov_is_valid_vfid(p_hwfn, rel_vf_id, true, false))
+		if (ecore_iov_is_valid_vfid(p_hwfn, i, true, false))
 			return i;
 
 out:
-	return MAX_NUM_VFS_K2;
+	return MAX_NUM_VFS;
 }
 
-enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
-					   struct ecore_ptt *ptt, int vfid)
+LNX_STATIC enum _ecore_status_t
+ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
+		      struct ecore_ptt *ptt,
+		      int vfid)
 {
 	struct dmae_params params;
 	struct ecore_vf_info *vf_info;
@@ -4507,9 +5104,11 @@ enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
 	if (ecore_dmae_host2host(p_hwfn, ptt,
 				 vf_info->vf_mbx.pending_req,
 				 vf_info->vf_mbx.req_phys,
-				 sizeof(union vfpf_tlvs) / 4, &params)) {
+				 sizeof(union vfpf_tlvs) / 4,
+				 &params)) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Failed to copy message from VF 0x%02x\n", vfid);
+			   "Failed to copy message from VF 0x%02x\n",
+			   vfid);
 
 		return ECORE_IO;
 	}
@@ -4517,21 +5116,20 @@ enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn,
-				       u8 *mac, int vfid)
+LNX_STATIC void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn,
+						  u8 *mac, int vfid)
 {
 	struct ecore_vf_info *vf_info;
 	u64 feature;
 
 	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
 	if (!vf_info) {
-		DP_NOTICE(p_hwfn->p_dev, true,
-			  "Can not set forced MAC, invalid vfid [%d]\n", vfid);
+		DP_NOTICE(p_hwfn->p_dev, true, "Can not set forced MAC, invalid vfid [%d]\n",
+			  vfid);
 		return;
 	}
 	if (vf_info->b_malicious) {
-		DP_NOTICE(p_hwfn->p_dev, false,
-			  "Can't set forced MAC to malicious VF [%d]\n",
+		DP_NOTICE(p_hwfn->p_dev, false, "Can't set forced MAC to malicious VF [%d]\n",
 			  vfid);
 		return;
 	}
@@ -4550,40 +5148,39 @@ void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn *p_hwfn,
 	}
 
 	OSAL_MEMCPY(vf_info->bulletin.p_virt->mac,
-		    mac, ETH_ALEN);
+		    mac, ECORE_ETH_ALEN);
 
 	vf_info->bulletin.p_virt->valid_bitmap |= feature;
 
 	ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
 }
 
-enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
-						u8 *mac, int vfid)
+LNX_STATIC enum _ecore_status_t
+ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn, u8 *mac, int vfid)
 {
 	struct ecore_vf_info *vf_info;
 	u64 feature;
 
 	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
 	if (!vf_info) {
-		DP_NOTICE(p_hwfn->p_dev, true,
-			  "Can not set MAC, invalid vfid [%d]\n", vfid);
+		DP_NOTICE(p_hwfn->p_dev, true, "Can not set MAC, invalid vfid [%d]\n",
+			  vfid);
 		return ECORE_INVAL;
 	}
 	if (vf_info->b_malicious) {
-		DP_NOTICE(p_hwfn->p_dev, false,
-			  "Can't set MAC to malicious VF [%d]\n",
+		DP_NOTICE(p_hwfn->p_dev, false, "Can't set MAC to malicious VF [%d]\n",
 			  vfid);
 		return ECORE_INVAL;
 	}
 
 	if (vf_info->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED)) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Can not set MAC, Forced MAC is configured\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Can not set MAC, Forced MAC is configured\n");
 		return ECORE_INVAL;
 	}
 
 	feature = 1 << VFPF_BULLETIN_MAC_ADDR;
-	OSAL_MEMCPY(vf_info->bulletin.p_virt->mac, mac, ETH_ALEN);
+	OSAL_MEMCPY(vf_info->bulletin.p_virt->mac,
+		    mac, ECORE_ETH_ALEN);
 
 	vf_info->bulletin.p_virt->valid_bitmap |= feature;
 
@@ -4597,7 +5194,8 @@ enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
 #ifndef LINUX_REMOVE
 enum _ecore_status_t
 ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
-					       bool b_untagged_only, int vfid)
+					       bool b_untagged_only,
+					       int vfid)
 {
 	struct ecore_vf_info *vf_info;
 	u64 feature;
@@ -4621,8 +5219,7 @@ ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
 	 */
 	if (vf_info->state == VF_ENABLED) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "Can't support untagged change for vfid[%d] -"
-			   " VF is already active\n",
+			   "Can't support untagged change for vfid[%d] - VF is already active\n",
 			   vfid);
 		return ECORE_INVAL;
 	}
@@ -4631,11 +5228,11 @@ ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
 	 * VF initialization.
 	 */
 	feature = (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT) |
-	    (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED);
+		  (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED);
 	vf_info->bulletin.p_virt->valid_bitmap |= feature;
 
 	vf_info->bulletin.p_virt->default_only_untagged = b_untagged_only ? 1
-	    : 0;
+									  : 0;
 
 	return ECORE_SUCCESS;
 }
@@ -4653,16 +5250,15 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 }
 #endif
 
-void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
-					u16 pvid, int vfid)
+LNX_STATIC void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
+						   u16 pvid, int vfid)
 {
 	struct ecore_vf_info *vf_info;
 	u64 feature;
 
 	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
 	if (!vf_info) {
-		DP_NOTICE(p_hwfn->p_dev, true,
-			  "Can not set forced MAC, invalid vfid [%d]\n",
+		DP_NOTICE(p_hwfn->p_dev, true, "Can not set forced VLAN, invalid vfid [%d]\n",
 			  vfid);
 		return;
 	}
@@ -4706,7 +5302,8 @@ void ecore_iov_bulletin_set_udp_ports(struct ecore_hwfn *p_hwfn,
 	vf_info->bulletin.p_virt->geneve_udp_port = geneve_port;
 }
 
-bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
+LNX_STATIC bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn,
+						int vfid)
 {
 	struct ecore_vf_info *p_vf_info;
 
@@ -4717,7 +5314,7 @@ bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
 	return !!p_vf_info->vport_instance;
 }
 
-bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid)
+LNX_STATIC bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid)
 {
 	struct ecore_vf_info *p_vf_info;
 
@@ -4728,7 +5325,8 @@ bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid)
 	return p_vf_info->state == VF_STOPPED;
 }
 
-bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid)
+IFDEF_DEFINE_IFLA_VF_SPOOFCHK
+LNX_STATIC bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid)
 {
 	struct ecore_vf_info *vf_info;
 
@@ -4738,9 +5336,10 @@ bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid)
 
 	return vf_info->spoof_chk;
 }
+ENDIF_DEFINE_IFLA_VF_SPOOFCHK
 
-enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
-					    int vfid, bool val)
+LNX_STATIC enum _ecore_status_t
+ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn, int vfid, bool val)
 {
 	struct ecore_vf_info *vf;
 	enum _ecore_status_t rc = ECORE_INVAL;
@@ -4772,8 +5371,8 @@ u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn)
 {
 	u8 max_chains_per_vf = p_hwfn->hw_info.max_chains_per_vf;
 
-	max_chains_per_vf = (max_chains_per_vf) ? max_chains_per_vf
-	    : ECORE_MAX_VF_CHAINS_PER_PF;
+	max_chains_per_vf = (max_chains_per_vf) ?
+		max_chains_per_vf : ECORE_MAX_QUEUE_VF_CHAINS_PER_PF;
 
 	return max_chains_per_vf;
 }
@@ -4784,7 +5383,7 @@ void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn,
 					  u16 *p_req_virt_size)
 {
 	struct ecore_vf_info *vf_info =
-	    ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+		ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
 
 	if (!vf_info)
 		return;
@@ -4797,12 +5396,12 @@ void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn,
 }
 
 void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn,
-					    u16 rel_vf_id,
+					    u16	rel_vf_id,
 					    void **pp_reply_virt_addr,
-					    u16 *p_reply_virt_size)
+					    u16	*p_reply_virt_size)
 {
 	struct ecore_vf_info *vf_info =
-	    ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+		ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
 
 	if (!vf_info)
 		return;
@@ -4815,11 +5414,12 @@ void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn,
 }
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
-struct ecore_iov_sw_mbx *ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
-						 u16 rel_vf_id)
+struct ecore_iov_sw_mbx*
+ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
+			u16 rel_vf_id)
 {
 	struct ecore_vf_info *vf_info =
-	    ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+		ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
 
 	if (!vf_info)
 		return OSAL_NULL;
@@ -4839,8 +5439,22 @@ u32 ecore_iov_pfvf_msg_length(void)
 	return sizeof(union pfvf_tlvs);
 }
 
-u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn,
-				      u16 rel_vf_id)
+enum _ecore_status_t
+ecore_iov_set_default_vlan(struct ecore_hwfn *p_hwfn, u16 vlan, int vfid)
+{
+	struct ecore_vf_info *vf_info;
+
+	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+	if (!vf_info)
+		return ECORE_INVAL;
+
+	vf_info->p_vf_info.forced_vlan = vlan;
+
+	return ECORE_SUCCESS;
+}
+
+LNX_STATIC u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn,
+					  u16 rel_vf_id)
 {
 	struct ecore_vf_info *p_vf;
 
@@ -4855,7 +5469,8 @@ u8 *ecore_iov_bulletin_get_mac(struct ecore_hwfn *p_hwfn,
 	return p_vf->bulletin.p_virt->mac;
 }
 
-u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+LNX_STATIC u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn,
+						 u16 rel_vf_id)
 {
 	struct ecore_vf_info *p_vf;
 
@@ -4869,8 +5484,8 @@ u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 	return p_vf->bulletin.p_virt->mac;
 }
 
-u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn,
-				       u16 rel_vf_id)
+LNX_STATIC u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn,
+						  u16 rel_vf_id)
 {
 	struct ecore_vf_info *p_vf;
 
@@ -4884,36 +5499,24 @@ u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn *p_hwfn,
 	return p_vf->bulletin.p_virt->pvid;
 }
 
-enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
-						 struct ecore_ptt *p_ptt,
-						 int vfid, int val)
+LNX_STATIC enum _ecore_status_t
+ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+			    int vfid, int val)
 {
-	struct ecore_vf_info *vf;
-	u8 abs_vp_id = 0;
-	u16 rl_id;
-	enum _ecore_status_t rc;
-
-	vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
-
-	if (!vf)
-		return ECORE_INVAL;
-
-	rc = ecore_fw_vport(p_hwfn, vf->vport_id, &abs_vp_id);
-	if (rc != ECORE_SUCCESS)
-		return rc;
+	u16 rl_id = ecore_get_pq_rl_id_from_vf(p_hwfn, (u16)vfid);
 
-	rl_id = abs_vp_id; /* The "rl_id" is set as the "vport_id" */
 	return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, (u32)val);
 }
 
-enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
-						     int vfid, u32 rate)
+LNX_STATIC enum _ecore_status_t
+ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev, int vfid, u32 rate)
 {
-	struct ecore_vf_info *vf;
+	struct ecore_hwfn *p_hwfn;
+	u16 vport_id;
 	int i;
 
 	for_each_hwfn(p_dev, i) {
-		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+		p_hwfn = &p_dev->hwfns[i];
 
 		if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
 			DP_NOTICE(p_hwfn, true,
@@ -4922,14 +5525,25 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
 		}
 	}
 
-	vf = ecore_iov_get_vf_info(ECORE_LEADING_HWFN(p_dev), (u16)vfid, true);
-	if (!vf) {
-		DP_NOTICE(p_dev, true,
-			  "Getting vf info failed, can't set min rate\n");
+	/* Use the leading hwfn - the qm configuration should be symmetric
+	 * between the hwfns.
+	 */
+	p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	vport_id = ecore_get_pq_vport_id_from_vf(p_hwfn, (u16)vfid);
+
+	return ecore_configure_vport_wfq(p_dev, vport_id, rate);
+}
+
+enum _ecore_status_t ecore_iov_get_vf_vport(struct ecore_hwfn *p_hwfn,
+					    int rel_vf_id, u8 *abs_vport_id)
+{
+	struct ecore_vf_info *vf;
+
+	vf = ecore_iov_get_vf_info(p_hwfn, (u16)rel_vf_id, true);
+	if (!vf || vf->state != VF_ENABLED)
 		return ECORE_INVAL;
-	}
 
-	return ecore_configure_vport_wfq(p_dev, vf->vport_id, rate);
+	return ecore_fw_vport(p_hwfn, vf->vport_id, abs_vport_id);
 }
 
 enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
@@ -4952,7 +5566,8 @@ enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn,
+			     u16 rel_vf_id)
 {
 	struct ecore_vf_info *p_vf;
 
@@ -4963,7 +5578,8 @@ u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 	return p_vf->num_rxqs;
 }
 
-u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn,
+				    u16 rel_vf_id)
 {
 	struct ecore_vf_info *p_vf;
 
@@ -4974,7 +5590,8 @@ u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 	return p_vf->num_active_rxqs;
 }
 
-void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn,
+			   u16 rel_vf_id)
 {
 	struct ecore_vf_info *p_vf;
 
@@ -4985,7 +5602,8 @@ void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 	return p_vf->ctx;
 }
 
-u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn,
+			    u16 rel_vf_id)
 {
 	struct ecore_vf_info *p_vf;
 
@@ -4996,7 +5614,8 @@ u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 	return p_vf->num_sbs;
 }
 
-bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn,
+				      u16 rel_vf_id)
 {
 	struct ecore_vf_info *p_vf;
 
@@ -5019,7 +5638,8 @@ bool ecore_iov_is_vf_acquired_not_initialized(struct ecore_hwfn *p_hwfn,
 	return (p_vf->state == VF_ACQUIRED);
 }
 
-bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
+bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn,
+				 u16 rel_vf_id)
 {
 	struct ecore_vf_info *p_vf;
 
@@ -5042,8 +5662,8 @@ bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn,
 	return (p_vf->state != VF_FREE && p_vf->state != VF_STOPPED);
 }
 
-int
-ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid)
+IFDEF_HAS_IFLA_VF_RATE
+LNX_STATIC u32 ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid)
 {
 	struct ecore_wfq_data *vf_vp_wfq;
 	struct ecore_vf_info *vf_info;
@@ -5059,6 +5679,7 @@ ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid)
 	else
 		return 0;
 }
+ENDIF_HAS_IFLA_VF_RATE
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
 void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid,
@@ -5073,3 +5694,172 @@ void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid,
 	vf_info->b_hw_channel = b_is_hw;
 }
 #endif
+
+static void ecore_iov_db_rec_stats_update(struct ecore_hwfn *p_hwfn,
+					  struct ecore_vf_info *p_vf)
+{
+	u16 vf_id = GET_FIELD(p_vf->concrete_fid, PXP_CONCRETE_FID_VFID);
+	struct ecore_pf_iov *p_iov_info = p_hwfn->pf_iov_info;
+
+	if (++p_vf->db_recovery_info.count > p_iov_info->max_db_rec_count) {
+		p_iov_info->max_db_rec_count = p_vf->db_recovery_info.count;
+		p_hwfn->pf_iov_info->max_db_rec_vfid = vf_id;
+	}
+}
+
+static void ecore_iov_db_rec_handler_vf(struct ecore_hwfn *p_hwfn,
+					struct ecore_ptt *p_ptt,
+					struct ecore_vf_info *p_vf,
+					u32 pf_attn_ovfl,
+					u32 *flush_delay_count)
+{
+	struct ecore_bulletin_content *p_bulletin;
+	u32 vf_cur_ovfl, vf_attn_ovfl;
+	u16 vf_id;
+	enum _ecore_status_t rc;
+
+	/* If VF sticky was caught by the DORQ attn callback, VF must execute
+	 * doorbell recovery. Reading the bit is atomic because the dorq
+	 * attention handler is setting it in interrupt context.
+	 */
+	vf_attn_ovfl = OSAL_TEST_AND_CLEAR_BIT(ECORE_OVERFLOW_BIT,
+					       &p_vf->db_recovery_info.overflow);
+
+	vf_id = GET_FIELD(p_vf->concrete_fid, PXP_CONCRETE_FID_VFID);
+	vf_cur_ovfl = ecore_rd(p_hwfn, p_ptt, DORQ_REG_VF_OVFL_STICKY);
+	if (!vf_cur_ovfl && !vf_attn_ovfl && !pf_attn_ovfl)
+		return;
+
+	DP_NOTICE(p_hwfn, false,
+		  "VF %u Overflow sticky: attn %u current %u pf %u\n",
+		  vf_id, vf_attn_ovfl ? 1 : 0, vf_cur_ovfl ? 1 : 0,
+		  pf_attn_ovfl ? 1 : 0);
+
+	/* Check if sticky overflow indication is set, or it was caught set
+	 * during the DORQ attention callback.
+	 */
+	if (vf_cur_ovfl || vf_attn_ovfl) {
+		if (vf_cur_ovfl && !p_vf->db_recovery_info.edpm_disabled) {
+			rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt,
+						      DORQ_REG_VF_USAGE_CNT,
+						      flush_delay_count);
+			if (rc != ECORE_SUCCESS) {
+				DP_NOTICE(p_hwfn, false,
+					  "Doorbell Recovery flush failed for VF %u, unable to post any more doorbells.\n",
+					  vf_id);
+				return;
+			}
+		}
+
+		ecore_iov_db_rec_stats_update(p_hwfn, p_vf);
+	}
+
+	/* Make doorbell recovery possible for this VF by resetting
+	 * VF_STICKY, so the DORQ accepts doorbells from this VF again.
+	 */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_OVFL_STICKY, 0x0);
+
+	p_bulletin = p_vf->bulletin.p_virt;
+	p_bulletin->db_recovery_execute++;
+}
+
+enum _ecore_status_t
+ecore_iov_db_rec_handler(struct ecore_hwfn *p_hwfn)
+{
+	u32 pf_cur_ovfl, pf_attn_ovfl, flush_delay_count = ECORE_DB_REC_COUNT;
+	struct ecore_pf_iov *p_iov_info = p_hwfn->pf_iov_info;
+	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+	struct ecore_vf_info *p_vf;
+	u16 i;
+
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
+	/* If PF sticky was caught by the DORQ attn callback, all VFs must
+	 * execute doorbell recovery. Reading the bit is atomic because the
+	 * dorq attention handler is setting it in interrupt context (or in
+	 * the periodic db_rec handler in a parallel flow).
+	 */
+	pf_attn_ovfl = OSAL_TEST_AND_CLEAR_BIT(ECORE_OVERFLOW_BIT,
+					       &p_hwfn->pf_iov_info->overflow);
+
+	ecore_for_each_vf(p_hwfn, i) {
+		/* PF sticky might change during this loop */
+		pf_cur_ovfl = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY);
+		if (pf_cur_ovfl) {
+			DP_NOTICE(p_hwfn, false,
+				  "PF Overflow sticky is set, VF doorbell recovery will be rescheduled.\n");
+			goto out;
+		}
+
+		/* Check DORQ drops (sticky) for VF */
+		p_vf = &p_iov_info->vfs_array[i];
+		ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_vf->concrete_fid);
+		ecore_iov_db_rec_handler_vf(p_hwfn, p_ptt, p_vf, pf_attn_ovfl,
+					    &flush_delay_count);
+		ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
+	}
+out:
+	ecore_ptt_release(p_hwfn, p_ptt);
+	OSAL_IOV_BULLETIN_UPDATE(p_hwfn);
+
+	return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_iov_async_event_completion(struct ecore_hwfn *p_hwfn,
+				 struct event_ring_entry *p_eqe)
+{
+	struct event_ring_list_entry *p_eqe_ent, *p_eqe_tmp;
+	struct ecore_pf_iov *p_iov_info;
+	struct ecore_vf_info *p_vf;
+
+	p_iov_info = p_hwfn->pf_iov_info;
+	if (!p_iov_info)
+		return ECORE_INVAL;
+
+	p_vf = ecore_sriov_get_vf_from_absid(p_hwfn, p_eqe->vf_id);
+	if (!p_vf || p_vf->state != VF_ENABLED)
+		return ECORE_INVAL;
+
+	/* Old version VF which doesn't support async events */
+	if (!p_vf->vf_eq_info.eq_support)
+		return ECORE_SUCCESS;
+
+	p_eqe_ent = OSAL_ZALLOC(p_hwfn->p_dev, GFP_ATOMIC, sizeof(*p_eqe_ent));
+	if (!p_eqe_ent) {
+		DP_NOTICE(p_hwfn, false,
+			  "vf %u: Failed to allocate eq entry - op %x prot %x echo %x\n",
+			  p_eqe->vf_id, p_eqe->opcode, p_eqe->protocol_id,
+			  OSAL_LE16_TO_CPU(p_eqe->echo));
+		return ECORE_NOMEM;
+	}
+
+	OSAL_SPIN_LOCK(&p_vf->vf_eq_info.eq_list_lock);
+	p_eqe_ent->eqe = *p_eqe;
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
+		   "vf %u: Store eq entry - op %x prot %x echo %x\n",
+		   p_eqe->vf_id, p_eqe->opcode, p_eqe->protocol_id,
+		   OSAL_LE16_TO_CPU(p_eqe->echo));
+	OSAL_LIST_PUSH_TAIL(&p_eqe_ent->list_entry, &p_vf->vf_eq_info.eq_list);
+	if (++p_vf->vf_eq_info.eq_list_size == ECORE_EQ_MAX_ELEMENTS) {
+		DP_NOTICE(p_hwfn, false,
+			  "vf %u: EQ list reached maximum size 0x%x\n",
+			  p_eqe->vf_id, ECORE_EQ_MAX_ELEMENTS);
+
+		/* Delete oldest EQ entry */
+		p_eqe_tmp = OSAL_LIST_FIRST_ENTRY(&p_vf->vf_eq_info.eq_list,
+						  struct event_ring_list_entry,
+						  list_entry);
+		OSAL_LIST_REMOVE_ENTRY(&p_eqe_tmp->list_entry,
+				       &p_vf->vf_eq_info.eq_list);
+		p_vf->vf_eq_info.eq_list_size--;
+		OSAL_FREE(p_hwfn->p_dev, p_eqe_tmp);
+	}
+
+	p_vf->bulletin.p_virt->eq_completion++;
+	OSAL_SPIN_UNLOCK(&p_vf->vf_eq_info.eq_list_lock);
+	OSAL_IOV_BULLETIN_UPDATE(p_hwfn);
+
+	return ECORE_SUCCESS;
+}
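
Note: ecore_iov_async_event_completion() above bounds the per-VF EQ list at
ECORE_EQ_MAX_ELEMENTS by evicting the oldest entry once the limit is hit, so
a VF that never drains its events cannot exhaust PF memory. A minimal,
self-contained sketch of that drop-oldest policy (names here are
illustrative, not driver APIs):

	#include <stdlib.h>

	struct node { struct node *next; int val; };
	struct bounded_q { struct node *head, *tail; unsigned int len, cap; };

	static int enqueue_drop_oldest(struct bounded_q *q, int val)
	{
		struct node *n = calloc(1, sizeof(*n));

		if (!n)
			return -1;
		n->val = val;
		if (q->tail)
			q->tail->next = n;
		else
			q->head = n;
		q->tail = n;
		if (++q->len == q->cap) {	/* limit hit: drop oldest */
			struct node *old = q->head;

			q->head = old->next;
			if (!q->head)
				q->tail = NULL;
			free(old);
			q->len--;
		}
		return 0;
	}
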
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index e748e67d7..8d8a12b92 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -1,20 +1,20 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_SRIOV_H__
 #define __ECORE_SRIOV_H__
 
 #include "ecore_status.h"
-#include "ecore_vfpf_if.h"
 #include "ecore_iov_api.h"
 #include "ecore_hsi_common.h"
 #include "ecore_l2.h"
+#include "ecore_vfpf_if.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
-	(MAX_NUM_VFS_K2 * ECORE_ETH_VF_NUM_VLAN_FILTERS)
+	(MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
 
 /* Represents a full message. Both the request filled by VF
  * and the response filled by the PF. The VF needs one copy
@@ -100,10 +100,43 @@ struct ecore_vf_shadow_config {
 	struct ecore_vf_vlan_shadow vlans[ECORE_ETH_VF_NUM_VLAN_FILTERS + 1];
 
 	/* Shadow copy of all configured MACs; Empty if forcing MACs */
-	u8 macs[ECORE_ETH_VF_NUM_MAC_FILTERS][ETH_ALEN];
+	u8 macs[ECORE_ETH_VF_NUM_MAC_FILTERS][ECORE_ETH_ALEN];
 	u8 inner_vlan_removal;
 };
 
+struct ecore_vf_ll2_queue {
+	u32 cid;
+	u8 qid; /* abs queue id */
+	bool used;
+};
+
+struct ecore_vf_db_rec_info {
+	/* VF doorbell overflow counter */
+	u32		count;
+
+	/* VF doesn't need to flush DORQ if it doesn't use EDPM */
+	bool		edpm_disabled;
+
+	/* VF doorbell overflow sticky indicator was cleared in the DORQ
+	 * attention callback, but the VF still needs to execute doorbell
+	 * recovery.
+	 * This value doesn't require a lock but must use atomic operations.
+	 */
+	u32		overflow; /* @DPDK */
+};
+
+struct event_ring_list_entry {
+	osal_list_entry_t	list_entry;
+	struct event_ring_entry	eqe;
+};
+
+struct ecore_vf_eq_info {
+	bool			eq_support;
+	bool			eq_active;
+	osal_list_t		eq_list;
+	osal_spinlock_t		eq_list_lock;
+	u16			eq_list_size;
+};
+
 /* PFs maintain an array of this structure, per VF */
 struct ecore_vf_info {
 	struct ecore_iov_vf_mbx vf_mbx;
@@ -138,6 +171,14 @@ struct ecore_vf_info {
 	u8			vport_instance; /* Number of active vports */
 	u8			num_rxqs;
 	u8			num_txqs;
+	u8			num_cnqs;
+	u8			cnq_offset;
+
+	/* The numbers which are communicated to the VF */
+	u8			actual_num_rxqs;
+	u8			actual_num_txqs;
+
+	u16			cnq_sb_start_id;
 
 	u16			rx_coal;
 	u16			tx_coal;
@@ -147,7 +188,8 @@ struct ecore_vf_info {
 	u8			num_mac_filters;
 	u8			num_vlan_filters;
 
-	struct ecore_vf_queue	vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
+	struct ecore_vf_queue	   vf_queues[ECORE_MAX_QUEUE_VF_CHAINS_PER_PF];
+
 	u16			igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
 
 	/* TODO - Only windows is using it - should be removed */
@@ -167,13 +209,19 @@ struct ecore_vf_info {
 	u64 configured_features;
 #define ECORE_IOV_CONFIGURED_FEATURES_MASK	((1 << MAC_ADDR_FORCED) | \
 						 (1 << VLAN_ADDR_FORCED))
+	struct ecore_rdma_info  *rdma_info;
+
+	struct ecore_vf_db_rec_info		db_recovery_info;
+
+	/* Pending EQs list to send to VF */
+	struct ecore_vf_eq_info	vf_eq_info;
 };
 
 /* This structure is part of ecore_hwfn and used only for PFs that have sriov
  * capability enabled.
  */
 struct ecore_pf_iov {
-	struct ecore_vf_info	vfs_array[MAX_NUM_VFS_K2];
+	struct ecore_vf_info	vfs_array[MAX_NUM_VFS];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
 
 #ifndef REMOVE_DBG
@@ -193,6 +241,39 @@ struct ecore_pf_iov {
 	void			*p_bulletins;
 	dma_addr_t		bulletins_phys;
 	u32			bulletins_size;
+
+#define ECORE_IS_VF_RDMA(p_hwfn)	(((p_hwfn)->pf_iov_info) && \
+					((p_hwfn)->pf_iov_info->rdma_enable))
+
+	/* Indicates whether vf-rdma is supported */
+	bool			rdma_enable;
+
+	/* Hold the original numbers of l2_queue and cnq features */
+	u8			num_l2_queue_feat;
+	u8			num_cnq_feat;
+
+	struct ecore_dpi_info	dpi_info;
+
+	struct ecore_common_dpm_info	dpm_info;
+
+	void			*p_rdma_info;
+
+	/* Keep doorbell recovery statistics on corresponding VFs:
+	 * Which VF overflowed the most, and how many times.
+	 */
+	u32			max_db_rec_count;
+	u16			max_db_rec_vfid;
+
+	/* PF doorbell overflow sticky indicator was cleared in the DORQ
+	 * attention callback, but all its child VFs still need to execute
+	 * doorbell recovery, because when PF overflows, VF doorbells are also
+	 * silently dropped without raising the VF sticky.
+	 * This value doesn't require a lock but must use atomic operations.
+	 */
+	u32			overflow; /* @DPDK */
+
+	/* Holds the max number of VFs which can be loaded */
+	u8			max_active_vfs;
 };
 
 #ifdef CONFIG_ECORE_SRIOV
@@ -265,7 +346,7 @@ void ecore_iov_free_hw_info(struct ecore_dev *p_dev);
  * @return true iff one of the PF's vfs got FLRed. false otherwise.
  */
 bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn,
-			   u32 *disabled_vfs);
+			  u32 *disabled_vfs);
 
 /**
  * @brief Search extended TLVs in request/reply buffer.
@@ -293,5 +374,54 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
 struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
 					    u16 relative_vf_id,
 					    bool b_enabled_only);
+
+/**
+ * @brief get the VF doorbell bar size.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+u32 ecore_iov_vf_db_bar_size(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt);
+
+/**
+ * @brief get the relative id from absolute id.
+ *
+ * @param p_hwfn
+ * @param abs_id
+ */
+u8 ecore_iov_abs_to_rel_id(struct ecore_hwfn *p_hwfn, u8 abs_id);
+
+/**
+ * @brief store the EQ entry and notify the VF
+ *
+ * @param p_hwfn
+ * @param p_eqe
+ */
+enum _ecore_status_t
+ecore_iov_async_event_completion(struct ecore_hwfn *p_hwfn,
+				 struct event_ring_entry *p_eqe);
+
+/**
+ * @brief initialize the VF doorbell bar
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+enum _ecore_status_t
+ecore_iov_init_vf_doorbell_bar(struct ecore_hwfn *p_hwfn,
+			       struct ecore_ptt *p_ptt);
+
+/**
+ * @brief ecore_iov_get_vf_vport - return the vport_id of a specific VF
+ *
+ * @param p_hwfn
+ * @param rel_vf_id - relative id of the VF for which info is requested
+ * @param abs_vport_id - result vport_id
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_iov_get_vf_vport(struct ecore_hwfn *p_hwfn,
+					    int rel_vf_id, u8 *abs_vport_id);
 #endif
 #endif /* __ECORE_SRIOV_H__ */
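
Note: both overflow fields above are documented as lock-free but atomic; the
recovery handler consumes them with OSAL_TEST_AND_CLEAR_BIT(), presumably
with the usual test_and_clear_bit() semantics: atomically clear the bit and
return whether it had been set. A sketch of that consume pattern with C11
atomics (a stand-in for the OSAL wrapper, not driver code):

	#include <stdatomic.h>
	#include <stdbool.h>

	static bool test_and_clear_bit32(atomic_uint *word, unsigned int bit)
	{
		unsigned int mask = 1U << bit;

		/* fetch-and returns the prior value; the bit is now clear */
		return (atomic_fetch_and(word, ~mask) & mask) != 0;
	}
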
diff --git a/drivers/net/qede/base/ecore_status.h b/drivers/net/qede/base/ecore_status.h
index 01cf9fd31..0c1dc52ea 100644
--- a/drivers/net/qede/base/ecore_status.h
+++ b/drivers/net/qede/base/ecore_status.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_STATUS_H__
 #define __ECORE_STATUS_H__
 
diff --git a/drivers/net/qede/base/ecore_utils.h b/drivers/net/qede/base/ecore_utils.h
index 249136b08..c040d1411 100644
--- a/drivers/net/qede/base/ecore_utils.h
+++ b/drivers/net/qede/base/ecore_utils.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_UTILS_H__
 #define __ECORE_UTILS_H__
 
@@ -32,4 +32,18 @@
 #define HILO_DMA_REGPAIR(regpair)	(HILO_DMA(regpair.hi, regpair.lo))
 #define HILO_64_REGPAIR(regpair)	(HILO_64(regpair.hi, regpair.lo))
 
+#ifndef USHRT_MAX
+#define USHRT_MAX       ((u16)(~0U))
+#endif
+
+#define ether_addr_copy(_dst_mac, _src_mac) OSAL_MEMCPY(_dst_mac, _src_mac, ECORE_ETH_ALEN)
+#define ether_addr_equal(_mac1, _mac2) !OSAL_MEMCMP(_mac1, _mac2, ECORE_ETH_ALEN)
+#ifndef BITS_PER_BYTE
+#define BITS_PER_BYTE 8
+#endif
+
+#ifndef ALIGN
+#define __ALIGN_MASK(x, mask)	(((x) + (mask)) & ~(mask))
+#define ALIGN(x, a)		__ALIGN_MASK(x, (a) - 1)
+#endif
 #endif
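
Note: a worked example of the ALIGN() macro added above, which rounds x up
to the next multiple of a (power-of-two 'a' only) by adding a-1 and masking
off the low bits:

	#include <assert.h>

	#define __ALIGN_MASK(x, mask)	(((x) + (mask)) & ~(mask))
	#define ALIGN(x, a)		__ALIGN_MASK(x, (a) - 1)

	int main(void)
	{
		assert(ALIGN(5, 8) == 8);	/* 5 + 7 = 12; 12 & ~7 = 8 */
		assert(ALIGN(8, 8) == 8);	/* already aligned stays put */
		assert(ALIGN(9, 8) == 16);
		return 0;
	}
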
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index e00caf646..9b99e9281 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "ecore_hsi_eth.h"
@@ -18,7 +18,15 @@
 #include "ecore_mcp_api.h"
 #include "ecore_vf_api.h"
 
-static void *ecore_vf_pf_prep(struct ecore_hwfn *p_hwfn, u16 type, u16 length)
+#ifdef _NTDDK_
+#pragma warning(push)
+#pragma warning(disable : 28167)
+#pragma warning(disable : 28123)
+#pragma warning(disable : 28121)
+#endif
+
+static void *ecore_vf_pf_prep(struct ecore_hwfn *p_hwfn,
+			      u16 type, u16 length)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	void *p_tlv;
@@ -30,23 +38,24 @@ static void *ecore_vf_pf_prep(struct ecore_hwfn *p_hwfn, u16 type, u16 length)
 	 */
 	OSAL_MUTEX_ACQUIRE(&p_iov->mutex);
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "preparing to send %s tlv over vf pf channel\n",
-		   qede_ecore_channel_tlvs_string[type]);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "preparing to send %s tlv over vf pf channel\n",
+		   ecore_channel_tlvs_string[type]);
 
 	/* Reset Request offset */
-	p_iov->offset = (u8 *)(p_iov->vf2pf_request);
+	p_iov->offset = (u8 *)p_iov->vf2pf_request;
 
 	/* Clear mailbox - both request and reply */
-	OSAL_MEMSET(p_iov->vf2pf_request, 0, sizeof(union vfpf_tlvs));
-	OSAL_MEMSET(p_iov->pf2vf_reply, 0, sizeof(union pfvf_tlvs));
+	OSAL_MEMSET(p_iov->vf2pf_request, 0,
+		    sizeof(union vfpf_tlvs));
+	OSAL_MEMSET(p_iov->pf2vf_reply, 0,
+		    sizeof(union pfvf_tlvs));
 
 	/* Init type and length */
 	p_tlv = ecore_add_tlv(&p_iov->offset, type, length);
 
 	/* Init first tlv header */
 	((struct vfpf_first_tlv *)p_tlv)->reply_address =
-	    (u64)p_iov->pf2vf_reply_phys;
+		(u64)p_iov->pf2vf_reply_phys;
 
 	return p_tlv;
 }
@@ -71,6 +80,13 @@ static void ecore_vf_pf_req_end(struct ecore_hwfn *p_hwfn,
  * We'd need to handshake in acquire capabilities for any such.
  */
 #endif
+
+#define ECORE_VF_CHANNEL_USLEEP_ITERATIONS	90
+#define ECORE_VF_CHANNEL_USLEEP_DELAY		100
+#define ECORE_VF_CHANNEL_MSLEEP_ITERATIONS	10
+#define ECORE_VF_CHANNEL_MSLEEP_DELAY		25
+#define ECORE_VF_CHANNEL_MSLEEP_ITER_EMUL	90
+
 static enum _ecore_status_t
 ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
 		  u8 *done, u32 resp_size)
@@ -79,7 +95,7 @@ ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
 	struct ustorm_trigger_vf_zone trigger;
 	struct ustorm_vf_zone *zone_data;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
-	int time = 100;
+	int iter;
 
 	zone_data = (struct ustorm_vf_zone *)PXP_VF_BAR0_START_USDM_ZONE_B;
 
@@ -89,19 +105,32 @@ ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
 	/* need to add the END TLV to the message size */
 	resp_size += sizeof(struct channel_list_end_tlv);
 
+#ifdef CONFIG_ECORE_SW_CHANNEL
+	if (!p_hwfn->vf_iov_info->b_hw_channel) {
+		rc = OSAL_VF_SEND_MSG2PF(p_hwfn->p_dev,
+					   done,
+					   p_req,
+					   p_hwfn->vf_iov_info->pf2vf_reply,
+					   sizeof(union vfpf_tlvs),
+					   resp_size);
+		/* TODO - no prints about message ? */
+		return rc;
+	}
+#endif
+
 	/* Send TLVs over HW channel */
 	OSAL_MEMSET(&trigger, 0, sizeof(struct ustorm_trigger_vf_zone));
 	trigger.vf_pf_msg_valid = 1;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "VF -> PF [%02x] message: [%08x, %08x] --> %p,"
-		   " %08x --> %p\n",
+		   "VF -> PF [%02x] message: [%08x, %08x] --> %p, %08x --> %p\n",
 		   GET_FIELD(p_hwfn->hw_info.concrete_fid,
 			     PXP_CONCRETE_FID_PFID),
 		   U64_HI(p_hwfn->vf_iov_info->vf2pf_request_phys),
 		   U64_LO(p_hwfn->vf_iov_info->vf2pf_request_phys),
 		   &zone_data->non_trigger.vf_pf_msg_addr,
-		   *((u32 *)&trigger), &zone_data->trigger);
+		   *((u32 *)&trigger),
+		   &zone_data->trigger);
 
 	REG_WR(p_hwfn,
 	       (osal_uintptr_t)&zone_data->non_trigger.vf_pf_msg_addr.lo,
@@ -116,34 +145,39 @@ ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
 	 */
 	OSAL_WMB(p_hwfn->p_dev);
 
-	REG_WR(p_hwfn, (osal_uintptr_t)&zone_data->trigger,
-	       *((u32 *)&trigger));
+	REG_WR(p_hwfn, (osal_uintptr_t)&zone_data->trigger, *((u32 *)&trigger));
 
 	/* When PF would be done with the response, it would write back to the
 	 * `done' address. Poll until then.
 	 */
-	while ((!*done) && time) {
-		OSAL_MSLEEP(25);
-		time--;
-	}
+	iter = ECORE_VF_CHANNEL_USLEEP_ITERATIONS;
+	while (!*done && iter--)
+		OSAL_UDELAY(ECORE_VF_CHANNEL_USLEEP_DELAY);
+
+	iter = ECORE_VF_CHANNEL_MSLEEP_ITERATIONS;
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		iter += ECORE_VF_CHANNEL_MSLEEP_ITER_EMUL;
+
+	while (!*done && iter--)
+		OSAL_MSLEEP(ECORE_VF_CHANNEL_MSLEEP_DELAY);
 
 	if (!*done) {
 		DP_NOTICE(p_hwfn, true,
 			  "VF <-- PF Timeout [Type %d]\n",
 			  p_req->first_tlv.tl.type);
-		rc = ECORE_TIMEOUT;
-	} else {
-		if ((*done != PFVF_STATUS_SUCCESS) &&
-		    (*done != PFVF_STATUS_NO_RESOURCE))
-			DP_NOTICE(p_hwfn, false,
-				  "PF response: %d [Type %d]\n",
-				  *done, p_req->first_tlv.tl.type);
-		else
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "PF response: %d [Type %d]\n",
-				   *done, p_req->first_tlv.tl.type);
+		return ECORE_TIMEOUT;
 	}
 
+	if ((*done != PFVF_STATUS_SUCCESS) &&
+	    (*done != PFVF_STATUS_NO_RESOURCE))
+		DP_NOTICE(p_hwfn, false,
+			  "PF response: %d [Type %d]\n",
+			  *done, p_req->first_tlv.tl.type);
+	else
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "PF response: %d [Type %d]\n",
+			   *done, p_req->first_tlv.tl.type);
+
 	return rc;
 }
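
Note: the rewritten wait loop above polls in two phases: 90 x 100 us
busy-waits (9 ms), then 10 x 25 ms sleeps (250 ms), for ~259 ms total;
emulation adds 90 more sleep iterations (~2.25 s) for roughly 2.5 s, which
matches the "~2.5 sec" cited in the acquire retry comment further down. A
minimal sketch of the pattern (names and usleep() granularity are
illustrative, not driver code):

	#include <stdbool.h>
	#include <unistd.h>

	static bool wait_done(volatile unsigned char *done, bool is_emul)
	{
		int iter = 90;

		while (!*done && iter--)	/* phase 1: 90 x 100 us */
			usleep(100);

		iter = is_emul ? 100 : 10;	/* phase 2: 25 ms sleeps */
		while (!*done && iter--)
			usleep(25 * 1000);

		return *done != 0;
	}
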
 
@@ -163,8 +197,8 @@ static void ecore_vf_pf_add_qid(struct ecore_hwfn *p_hwfn,
 	p_qid_tlv->qid = p_cid->qid_usage_idx;
 }
 
-enum _ecore_status_t _ecore_vf_pf_release(struct ecore_hwfn *p_hwfn,
-					  bool b_final)
+static enum _ecore_status_t _ecore_vf_pf_release(struct ecore_hwfn *p_hwfn,
+						 bool b_final)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct pfvf_def_resp_tlv *resp;
@@ -231,7 +265,7 @@ static void ecore_vf_pf_acquire_reduce_resc(struct ecore_hwfn *p_hwfn,
 					    struct pf_vf_resc *p_resp)
 {
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "PF unwilling to fullill resource request: rxq [%02x/%02x] txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x] vlan [%02x/%02x] mc [%02x/%02x] cids [%02x/%02x]. Try PF recommended amount\n",
+		   "PF unwilling to fullill resource request: rxq [%u/%u] txq [%u/%u] sbs [%u/%u] mac [%u/%u] vlan [%u/%u] mc [%u/%u] cids [%u/%u]. Try PF recommended amount\n",
 		   p_req->num_rxqs, p_resp->num_rxqs,
 		   p_req->num_rxqs, p_resp->num_txqs,
 		   p_req->num_sbs, p_resp->num_sbs,
@@ -304,8 +338,8 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	/* @@@ TBD: PF may not be ready bnx2x_get_vf_id... */
 	req->vfdev_info.opaque_fid = p_hwfn->hw_info.opaque_fid;
 
-	p_resc->num_rxqs = ECORE_MAX_VF_CHAINS_PER_PF;
-	p_resc->num_txqs = ECORE_MAX_VF_CHAINS_PER_PF;
+	p_resc->num_rxqs = ECORE_MAX_QUEUE_VF_CHAINS_PER_PF;
+	p_resc->num_txqs = ECORE_MAX_QUEUE_VF_CHAINS_PER_PF;
 	p_resc->num_sbs = ECORE_MAX_VF_CHAINS_PER_PF;
 	p_resc->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
 	p_resc->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
@@ -326,10 +360,25 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	/* Fill capability field with any non-deprecated config we support */
 	req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_100G;
 
-	/* If we've mapped the doorbell bar, try using queue qids */
-	if (p_iov->b_doorbell_bar)
+	/* If doorbell bar size is sufficient, try using queue qids */
+	if (p_hwfn->db_size > PXP_VF_BAR0_DQ_LENGTH) {
 		req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_PHYSICAL_BAR |
 						VFPF_ACQUIRE_CAP_QUEUE_QIDS;
+		p_resc->num_cids = ECORE_ETH_VF_MAX_NUM_CIDS;
+	}
+
+	/* Currently, fill ROCE by default. */
+	req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_ROCE;
+
+	/* Check if VF uses EDPM and needs DORQ flush when overflowed */
+	if (p_hwfn->roce_edpm_mode != ECORE_ROCE_EDPM_MODE_DISABLE)
+		req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_EDPM;
+
+	/* This VF version supports async events */
+	req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_EQ;
+
+	/* Set when IWARP is supported. */
+	/* req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_IWARP; */
 
 	/* pf 2 vf bulletin board address */
 	req->bulletin_addr = p_iov->bulletin.phys;
@@ -341,8 +390,7 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 		      sizeof(struct channel_list_end_tlv));
 
 	while (!resources_acquired) {
-		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-			   "attempting to acquire resources\n");
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "attempting to acquire resources\n");
 
 		/* Clear response buffer, as this might be a re-send */
 		OSAL_MEMSET(p_iov->pf2vf_reply, 0,
@@ -350,8 +398,17 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 
 		/* send acquire request */
 		rc = ecore_send_msg2pf(p_hwfn,
-				       &resp->hdr.status, sizeof(*resp));
-
+				       &resp->hdr.status,
+				       sizeof(*resp));
+		/* In general, the PF shouldn't take long to respond to a
+		 * message the VF successfully sent over the VPC. If there is
+		 * no response for a sufficiently long time (~2.5 sec), the
+		 * most likely cause is that the VF was FLRed, which makes
+		 * sends from the VF to the PF over the VPC disallowed and
+		 * leads to this timeout. In such cases, retry the acquire
+		 * message; it can succeed once VF FLR handling is completed.
+		 */
 		if (retry_cnt && rc == ECORE_TIMEOUT) {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "VF retrying to acquire due to VPC timeout\n");
@@ -364,7 +421,8 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 
 		/* copy acquire response from buffer to p_hwfn */
 		OSAL_MEMCPY(&p_iov->acquire_resp,
-			    resp, sizeof(p_iov->acquire_resp));
+			    resp,
+			    sizeof(p_iov->acquire_resp));
 
 		attempts++;
 
@@ -379,8 +437,26 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 				req->vfdev_info.capabilities |=
 					VFPF_ACQUIRE_CAP_PRE_FP_HSI;
 			}
-			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "resources acquired\n");
+
+			DP_VERBOSE(p_hwfn, (ECORE_MSG_IOV | ECORE_MSG_RDMA),
+				   "VF AQUIRE-RDMA capabilities is:%" PRIx64 "\n",
+				   resp->pfdev_info.capabilities);
+
+			if (resp->pfdev_info.capabilities &
+			    PFVF_ACQUIRE_CAP_ROCE)
+				p_hwfn->hw_info.personality =
+							ECORE_PCI_ETH_ROCE;
+
+			if (resp->pfdev_info.capabilities &
+			    PFVF_ACQUIRE_CAP_IWARP)
+				p_hwfn->hw_info.personality =
+							ECORE_PCI_ETH_IWARP;
+
+			if (resp->pfdev_info.capabilities &
+			    PFVF_ACQUIRE_CAP_EDPM)
+				p_hwfn->vf_iov_info->dpm_enabled = true;
+
+			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "resources acquired\n");
 			resources_acquired = true;
 		} /* PF refuses to allocate our resources */
 		else if (resp->hdr.status == PFVF_STATUS_NO_RESOURCE &&
@@ -392,10 +468,7 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 			if (pfdev_info->major_fp_hsi &&
 			    (pfdev_info->major_fp_hsi != ETH_HSI_VER_MAJOR)) {
 				DP_NOTICE(p_hwfn, false,
-					  "PF uses an incompatible fastpath HSI"
-					  " %02x.%02x [VF requires %02x.%02x]."
-					  " Please change to a VF driver using"
-					  " %02x.xx.\n",
+					  "PF uses an incompatible fastpath HSI %02x.%02x [VF requires %02x.%02x]. Please change to a VF driver using %02x.xx.\n",
 					  pfdev_info->major_fp_hsi,
 					  pfdev_info->minor_fp_hsi,
 					  ETH_HSI_VER_MAJOR, ETH_HSI_VER_MINOR,
@@ -408,17 +481,12 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 				if (req->vfdev_info.capabilities &
 				    VFPF_ACQUIRE_CAP_PRE_FP_HSI) {
 					DP_NOTICE(p_hwfn, false,
-						  "PF uses very old drivers."
-						  " Please change to a VF"
-						  " driver using no later than"
-						  " 8.8.x.x.\n");
+						  "PF uses very old drivers. Please change to a VF driver using no later than 8.8.x.x.\n");
 					rc = ECORE_INVAL;
 					goto exit;
 				} else {
 					DP_INFO(p_hwfn,
-						"PF is old - try re-acquire to"
-						" see if it supports FW-version"
-						" override\n");
+						"PF is old - try re-acquire to see if it supports FW-version override\n");
 					req->vfdev_info.capabilities |=
 						VFPF_ACQUIRE_CAP_PRE_FP_HSI;
 					continue;
@@ -436,14 +504,19 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 			ecore_vf_pf_req_end(p_hwfn, ECORE_AGAIN);
 			return ecore_vf_pf_soft_flr_acquire(p_hwfn);
 		} else {
-			DP_ERR(p_hwfn,
-			       "PF returned err %d to VF acquisition request\n",
+			DP_ERR(p_hwfn, "PF returned error %d to VF acquisition request\n",
 			       resp->hdr.status);
 			rc = ECORE_AGAIN;
 			goto exit;
 		}
 	}
 
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "resc acquired : rxq [%u] txq [%u] sbs [%u] mac [%u] vlan [%u] mc [%u] cids [%u].\n",
+		   resp->resc.num_rxqs, resp->resc.num_txqs, resp->resc.num_sbs,
+		   resp->resc.num_mac_filters, resp->resc.num_vlan_filters,
+		   resp->resc.num_mc_filters, resp->resc.num_cids);
+
 	/* Mark the PF as legacy, if needed */
 	if (req->vfdev_info.capabilities &
 	    VFPF_ACQUIRE_CAP_PRE_FP_HSI)
@@ -461,8 +534,7 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	rc = OSAL_VF_UPDATE_ACQUIRE_RESC_RESP(p_hwfn, &resp->resc);
 	if (rc) {
 		DP_NOTICE(p_hwfn, true,
-			  "VF_UPDATE_ACQUIRE_RESC_RESP Failed:"
-			  " status = 0x%x.\n",
+			  "VF_UPDATE_ACQUIRE_RESC_RESP Failed: status = 0x%x.\n",
 			  rc);
 		rc = ECORE_AGAIN;
 		goto exit;
@@ -475,8 +547,8 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	p_dev->type = resp->pfdev_info.dev_type;
 	p_dev->chip_rev = (u8)resp->pfdev_info.chip_rev;
 
-	DP_INFO(p_hwfn, "Chip details - %s%d\n",
-		ECORE_IS_BB(p_dev) ? "BB" : "AH",
+	DP_INFO(p_hwfn, "Chip details - %s %d\n",
+		ECORE_IS_BB(p_dev) ? "BB" : ECORE_IS_AH(p_dev) ? "AH" : "E5",
 		CHIP_REV_IS_A0(p_hwfn->p_dev) ? 0 : 1);
 
 	p_dev->chip_num = pfdev_info->chip_num & 0xffff;
@@ -484,18 +556,15 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	/* Learn of the possibility of CMT */
 	if (IS_LEAD_HWFN(p_hwfn)) {
 		if (resp->pfdev_info.capabilities & PFVF_ACQUIRE_CAP_100G) {
-			DP_INFO(p_hwfn, "100g VF\n");
+			DP_NOTICE(p_hwfn, false, "100g VF\n");
 			p_dev->num_hwfns = 2;
 		}
 	}
 
-	/* @DPDK */
-	if (((p_iov->b_pre_fp_hsi == true) &
-	    ETH_HSI_VER_MINOR) &&
+	if (!p_iov->b_pre_fp_hsi &&
 	    (resp->pfdev_info.minor_fp_hsi < ETH_HSI_VER_MINOR))
 		DP_INFO(p_hwfn,
-			"PF is using older fastpath HSI;"
-			" %02x.%02x is configured\n",
+			"PF is using older fastpath HSI; %02x.%02x is configured\n",
 			ETH_HSI_VER_MAJOR,
 			resp->pfdev_info.minor_fp_hsi);
 
@@ -545,8 +614,7 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn,
 	/* Allocate vf sriov info */
 	p_iov = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_iov));
 	if (!p_iov) {
-		DP_NOTICE(p_hwfn, true,
-			  "Failed to allocate `struct ecore_sriov'\n");
+		DP_NOTICE(p_hwfn, true, "Failed to allocate `struct ecore_sriov'\n");
 		return ECORE_NOMEM;
 	}
 
@@ -556,7 +624,10 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn,
 	 */
 	if (p_hwfn->doorbells == OSAL_NULL) {
 		p_hwfn->doorbells = (u8 OSAL_IOMEM *)p_hwfn->regview +
-						     PXP_VF_BAR0_START_DQ;
+						    PXP_VF_BAR0_START_DQ;
+		p_hwfn->db_size = PXP_VF_BAR0_DQ_LENGTH;
+		p_hwfn->db_offset = (u8 *)p_hwfn->doorbells -
+				    (u8 *)p_hwfn->p_dev->doorbells;
 	} else if (p_hwfn == p_lead) {
 		/* For leading hw-function, value is always correct, but need
 		 * to handle scenario where legacy PF would not support 100g
@@ -569,56 +640,52 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn,
 		 */
 		if (p_lead->vf_iov_info->b_doorbell_bar)
 			p_iov->b_doorbell_bar = true;
-		else
-			p_hwfn->doorbells = (u8 OSAL_IOMEM *)
-					    p_hwfn->regview +
+		else {
+			p_hwfn->doorbells = (u8 OSAL_IOMEM *)p_hwfn->regview +
 					    PXP_VF_BAR0_START_DQ;
+			p_hwfn->db_offset = (u8 *)p_hwfn->doorbells -
+					    (u8 *)p_hwfn->p_dev->doorbells;
+		}
 	}
 
 	/* Allocate vf2pf msg */
 	p_iov->vf2pf_request = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
-							 &p_iov->
-							 vf2pf_request_phys,
-							 sizeof(union
-								vfpf_tlvs));
+							 &p_iov->vf2pf_request_phys,
+							 sizeof(union vfpf_tlvs));
 	if (!p_iov->vf2pf_request) {
-		DP_NOTICE(p_hwfn, true,
-			 "Failed to allocate `vf2pf_request' DMA memory\n");
+		DP_NOTICE(p_hwfn, true, "Failed to allocate `vf2pf_request' DMA memory\n");
 		goto free_p_iov;
 	}
 
 	p_iov->pf2vf_reply = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
-						       &p_iov->
-						       pf2vf_reply_phys,
+						       &p_iov->pf2vf_reply_phys,
 						       sizeof(union pfvf_tlvs));
 	if (!p_iov->pf2vf_reply) {
-		DP_NOTICE(p_hwfn, true,
-			  "Failed to allocate `pf2vf_reply' DMA memory\n");
+		DP_NOTICE(p_hwfn, true, "Failed to allocate `pf2vf_reply' DMA memory\n");
 		goto free_vf2pf_request;
 	}
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "VF's Request mailbox [%p virt 0x%lx phys], "
-		   "Response mailbox [%p virt 0x%lx phys]\n",
+		   "VF's Request mailbox [%p virt 0x%" PRIx64 " phys], "
+		   "Response mailbox [%p virt 0x%" PRIx64 " phys]\n",
 		   p_iov->vf2pf_request,
-		   (unsigned long)p_iov->vf2pf_request_phys,
+		   (u64)p_iov->vf2pf_request_phys,
 		   p_iov->pf2vf_reply,
-		   (unsigned long)p_iov->pf2vf_reply_phys);
+		   (u64)p_iov->pf2vf_reply_phys);
 
 	/* Allocate Bulletin board */
 	p_iov->bulletin.size = sizeof(struct ecore_bulletin_content);
 	p_iov->bulletin.p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
-							   &p_iov->bulletin.
-							   phys,
-							   p_iov->bulletin.
-							   size);
+							   &p_iov->bulletin.phys,
+							   p_iov->bulletin.size);
 	if (!p_iov->bulletin.p_virt) {
 		DP_NOTICE(p_hwfn, false, "Failed to alloc bulletin memory\n");
 		goto free_pf2vf_reply;
 	}
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "VF's bulletin Board [%p virt 0x%lx phys 0x%08x bytes]\n",
-		   p_iov->bulletin.p_virt, (unsigned long)p_iov->bulletin.phys,
+		   "VF's bulletin Board [%p virt 0x%" PRIx64 " phys 0x%08x bytes]\n",
+		   p_iov->bulletin.p_virt,
+		   (u64)p_iov->bulletin.phys,
 		   p_iov->bulletin.size);
 
 #ifdef CONFIG_ECORE_LOCK_ALLOC
@@ -635,6 +702,8 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn,
 	p_hwfn->hw_info.personality = ECORE_PCI_ETH;
 
 	rc = ecore_vf_pf_acquire(p_hwfn);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
 	/* If VF is 100g using a mapped bar and PF is too old to support that,
 	 * acquisition would succeed - but the VF would have no way knowing
@@ -653,7 +722,9 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn,
 
 		p_iov->b_doorbell_bar = false;
 		p_hwfn->doorbells = (u8 OSAL_IOMEM *)p_hwfn->regview +
-						     PXP_VF_BAR0_START_DQ;
+						    PXP_VF_BAR0_START_DQ;
+		p_hwfn->db_offset = (u8 *)p_hwfn->doorbells -
+				    (u8 *)p_hwfn->p_dev->doorbells;
 		rc = ecore_vf_pf_acquire(p_hwfn);
 	}
 
@@ -684,7 +755,6 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn,
 	return ECORE_NOMEM;
 }
 
-/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
 static void
 __ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
 			     struct ecore_tunn_update_type *p_src,
@@ -700,7 +770,6 @@ __ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
 	*p_cls = p_src->tun_cls;
 }
 
-/* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
 static void
 ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
 			   struct ecore_tunn_update_type *p_src,
@@ -802,6 +871,8 @@ ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				   &p_req->vxlan_clss, &p_src->vxlan_port,
 				   &p_req->update_vxlan_port,
 				   &p_req->vxlan_port);
+	p_req->update_non_l2_vxlan = p_src->update_non_l2_vxlan;
+	p_req->non_l2_vxlan_enable = p_src->non_l2_vxlan_enable;
 	ecore_vf_prep_tunn_req_tlv(p_req, &p_src->l2_geneve,
 				   ECORE_MODE_L2GENEVE_TUNN,
 				   &p_req->l2geneve_clss, &p_src->geneve_port,
@@ -1005,18 +1076,17 @@ ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
 	 * provided only the queue id.
 	 */
 	if (!p_iov->b_pre_fp_hsi) {
-		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-						resp->offset;
+		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells + resp->offset;
 	} else {
 		u8 cid = p_iov->acquire_resp.resc.cid[qid];
 
 		*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
-						DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
+			       DB_ADDR_VF(p_hwfn->p_dev, cid, DQ_DEMS_LEGACY);
 	}
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-		   "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
-		   qid, *pp_doorbell, resp->offset);
+		   "Txq[0x%02x.%02x]: doorbell at %p [offset 0x%08x]\n",
+		   qid, p_cid->qid_usage_idx, *pp_doorbell, resp->offset);
 exit:
 	ecore_vf_pf_req_end(p_hwfn, rc);
 
@@ -1114,11 +1184,14 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-enum _ecore_status_t
-ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, u8 vport_id,
-			u16 mtu, u8 inner_vlan_removal,
-			enum ecore_tpa_mode tpa_mode, u8 max_buffers_per_cqe,
-			u8 only_untagged)
+enum _ecore_status_t ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn,
+					     u8 vport_id,
+					     u16 mtu,
+					     u8 inner_vlan_removal,
+					     enum ecore_tpa_mode tpa_mode,
+					     u8 max_buffers_per_cqe,
+					     u8 only_untagged,
+					     u8 zero_placement_offset)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_vport_start_tlv *req;
@@ -1135,9 +1208,10 @@ ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, u8 vport_id,
 	req->tpa_mode = tpa_mode;
 	req->max_buffers_per_cqe = max_buffers_per_cqe;
 	req->only_untagged = only_untagged;
+	req->zero_placement_offset = zero_placement_offset;
 
 	/* status blocks */
-	for (i = 0; i < p_hwfn->vf_iov_info->acquire_resp.resc.num_sbs; i++) {
+	for (i = 0; i < p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs; i++) {
 		struct ecore_sb_info *p_sb = p_hwfn->vf_iov_info->sbs_info[i];
 
 		if (p_sb)
@@ -1168,7 +1242,7 @@ ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, u8 vport_id,
 enum _ecore_status_t ecore_vf_pf_vport_stop(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
-	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	struct pfvf_def_resp_tlv *resp  = &p_iov->pf2vf_reply->default_resp;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
@@ -1227,7 +1301,7 @@ ecore_vf_handle_vp_update_is_needed(struct ecore_hwfn *p_hwfn,
 		return !!p_data->sge_tpa_params;
 	default:
 		DP_INFO(p_hwfn, "Unexpected vport-update TLV[%d] %s\n",
-			tlv, qede_ecore_channel_tlvs_string[tlv]);
+			tlv, ecore_channel_tlvs_string[tlv]);
 		return false;
 	}
 }
@@ -1247,19 +1321,19 @@ ecore_vf_handle_vp_update_tlvs_resp(struct ecore_hwfn *p_hwfn,
 			continue;
 
 		p_resp = (struct pfvf_def_resp_tlv *)
-		    ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
+			 ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply,
+						    tlv);
 		if (p_resp && p_resp->hdr.status)
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 				   "TLV[%d] type %s Configuration %s\n",
-				   tlv, qede_ecore_channel_tlvs_string[tlv],
+				   tlv, ecore_channel_tlvs_string[tlv],
 				   (p_resp && p_resp->hdr.status) ? "succeeded"
 								  : "failed");
 	}
 }
 
-enum _ecore_status_t
-ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
-			 struct ecore_sp_vport_update_params *p_params)
+enum _ecore_status_t ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
+					      struct ecore_sp_vport_update_params *p_params)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_vport_update_tlv *req;
@@ -1299,6 +1373,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 		}
 	}
 
+#ifndef ECORE_UPSTREAM
 	if (p_params->update_inner_vlan_removal_flg) {
 		struct vfpf_vport_update_vlan_strip_tlv *p_vlan_tlv;
 
@@ -1310,6 +1385,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 
 		p_vlan_tlv->remove_vlan = p_params->inner_vlan_removal_flg;
 	}
+#endif
 
 	if (p_params->update_tx_switching_flg) {
 		struct vfpf_vport_update_tx_switch_tlv *p_tx_switch_tlv;
@@ -1350,13 +1426,13 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 		if (update_rx) {
 			p_accept_tlv->update_rx_mode = update_rx;
 			p_accept_tlv->rx_accept_filter =
-			    p_params->accept_flags.rx_accept_filter;
+					p_params->accept_flags.rx_accept_filter;
 		}
 
 		if (update_tx) {
 			p_accept_tlv->update_tx_mode = update_tx;
 			p_accept_tlv->tx_accept_filter =
-			    p_params->accept_flags.tx_accept_filter;
+					p_params->accept_flags.tx_accept_filter;
 		}
 	}
 
@@ -1372,15 +1448,16 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 
 		if (rss_params->update_rss_config)
 			p_rss_tlv->update_rss_flags |=
-			    VFPF_UPDATE_RSS_CONFIG_FLAG;
+				VFPF_UPDATE_RSS_CONFIG_FLAG;
 		if (rss_params->update_rss_capabilities)
 			p_rss_tlv->update_rss_flags |=
-			    VFPF_UPDATE_RSS_CAPS_FLAG;
+				VFPF_UPDATE_RSS_CAPS_FLAG;
 		if (rss_params->update_rss_ind_table)
 			p_rss_tlv->update_rss_flags |=
-			    VFPF_UPDATE_RSS_IND_TABLE_FLAG;
+				VFPF_UPDATE_RSS_IND_TABLE_FLAG;
 		if (rss_params->update_rss_key)
-			p_rss_tlv->update_rss_flags |= VFPF_UPDATE_RSS_KEY_FLAG;
+			p_rss_tlv->update_rss_flags |=
+				VFPF_UPDATE_RSS_KEY_FLAG;
 
 		p_rss_tlv->rss_enable = rss_params->rss_enable;
 		p_rss_tlv->rss_caps = rss_params->rss_caps;
@@ -1409,7 +1486,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 		resp_size += sizeof(struct pfvf_def_resp_tlv);
 		p_any_vlan_tlv->accept_any_vlan = p_params->accept_any_vlan;
 		p_any_vlan_tlv->update_accept_any_vlan_flg =
-		    p_params->update_accept_any_vlan_flg;
+					p_params->update_accept_any_vlan_flg;
 	}
 
 	if (p_params->sge_tpa_params) {
@@ -1425,40 +1502,43 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
 
 		if (sge_tpa_params->update_tpa_en_flg)
 			p_sge_tpa_tlv->update_sge_tpa_flags |=
-			    VFPF_UPDATE_TPA_EN_FLAG;
+				VFPF_UPDATE_TPA_EN_FLAG;
 		if (sge_tpa_params->update_tpa_param_flg)
 			p_sge_tpa_tlv->update_sge_tpa_flags |=
-			    VFPF_UPDATE_TPA_PARAM_FLAG;
+				VFPF_UPDATE_TPA_PARAM_FLAG;
 
 		if (sge_tpa_params->tpa_ipv4_en_flg)
-			p_sge_tpa_tlv->sge_tpa_flags |= VFPF_TPA_IPV4_EN_FLAG;
+			p_sge_tpa_tlv->sge_tpa_flags |=
+				VFPF_TPA_IPV4_EN_FLAG;
 		if (sge_tpa_params->tpa_ipv6_en_flg)
-			p_sge_tpa_tlv->sge_tpa_flags |= VFPF_TPA_IPV6_EN_FLAG;
+			p_sge_tpa_tlv->sge_tpa_flags |=
+				VFPF_TPA_IPV6_EN_FLAG;
 		if (sge_tpa_params->tpa_pkt_split_flg)
-			p_sge_tpa_tlv->sge_tpa_flags |= VFPF_TPA_PKT_SPLIT_FLAG;
+			p_sge_tpa_tlv->sge_tpa_flags |=
+				VFPF_TPA_PKT_SPLIT_FLAG;
 		if (sge_tpa_params->tpa_hdr_data_split_flg)
 			p_sge_tpa_tlv->sge_tpa_flags |=
-			    VFPF_TPA_HDR_DATA_SPLIT_FLAG;
+				VFPF_TPA_HDR_DATA_SPLIT_FLAG;
 		if (sge_tpa_params->tpa_gro_consistent_flg)
 			p_sge_tpa_tlv->sge_tpa_flags |=
-			    VFPF_TPA_GRO_CONSIST_FLAG;
+				VFPF_TPA_GRO_CONSIST_FLAG;
 		if (sge_tpa_params->tpa_ipv4_tunn_en_flg)
 			p_sge_tpa_tlv->sge_tpa_flags |=
-			    VFPF_TPA_TUNN_IPV4_EN_FLAG;
+				VFPF_TPA_TUNN_IPV4_EN_FLAG;
 		if (sge_tpa_params->tpa_ipv6_tunn_en_flg)
 			p_sge_tpa_tlv->sge_tpa_flags |=
-			    VFPF_TPA_TUNN_IPV6_EN_FLAG;
+				VFPF_TPA_TUNN_IPV6_EN_FLAG;
 
 		p_sge_tpa_tlv->tpa_max_aggs_num =
-		    sge_tpa_params->tpa_max_aggs_num;
+			sge_tpa_params->tpa_max_aggs_num;
 		p_sge_tpa_tlv->tpa_max_size = sge_tpa_params->tpa_max_size;
 		p_sge_tpa_tlv->tpa_min_size_to_start =
-		    sge_tpa_params->tpa_min_size_to_start;
+			sge_tpa_params->tpa_min_size_to_start;
 		p_sge_tpa_tlv->tpa_min_size_to_cont =
-		    sge_tpa_params->tpa_min_size_to_cont;
+			sge_tpa_params->tpa_min_size_to_cont;
 
 		p_sge_tpa_tlv->max_buffers_per_cqe =
-		    sge_tpa_params->max_buffers_per_cqe;
+			sge_tpa_params->max_buffers_per_cqe;
 	}
 
 	/* add list termination tlv */
@@ -1538,8 +1618,7 @@ void ecore_vf_pf_filter_mcast(struct ecore_hwfn *p_hwfn,
 }
 
 enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
-					      struct ecore_filter_ucast
-					      *p_ucast)
+					      struct ecore_filter_ucast *p_ucast)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
 	struct vfpf_ucast_filter_tlv *req;
@@ -1548,8 +1627,7 @@ enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
 
 	/* Sanitize */
 	if (p_ucast->opcode == ECORE_FILTER_MOVE) {
-		DP_NOTICE(p_hwfn, true,
-			  "VFs don't support Moving of filters\n");
+		DP_NOTICE(p_hwfn, true, "VFs don't support Moving of filters\n");
 		return ECORE_INVAL;
 	}
 
@@ -1557,7 +1635,7 @@ enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_UCAST_FILTER, sizeof(*req));
 	req->opcode = (u8)p_ucast->opcode;
 	req->type = (u8)p_ucast->type;
-	OSAL_MEMCPY(req->mac, p_ucast->mac, ETH_ALEN);
+	OSAL_MEMCPY(req->mac, p_ucast->mac, ECORE_ETH_ALEN);
 	req->vlan = p_ucast->vlan;
 
 	/* add list termination tlv */
@@ -1584,7 +1662,7 @@ enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_vf_pf_int_cleanup(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
-	struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+	struct pfvf_def_resp_tlv *resp  = &p_iov->pf2vf_reply->default_resp;
 	enum _ecore_status_t rc;
 
 	/* clear mailbox and prep first tlv */
@@ -1644,6 +1722,37 @@ enum _ecore_status_t ecore_vf_pf_get_coalesce(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+enum _ecore_status_t
+ecore_vf_pf_bulletin_update_mac(struct ecore_hwfn *p_hwfn, u8 *p_mac)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct vfpf_bulletin_update_mac_tlv *p_req;
+	struct pfvf_def_resp_tlv *p_resp;
+	enum _ecore_status_t rc;
+
+	if (!p_mac)
+		return ECORE_INVAL;
+
+	/* clear mailbox and prep header tlv */
+	p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_BULLETIN_UPDATE_MAC,
+				 sizeof(*p_req));
+	OSAL_MEMCPY(p_req->mac, p_mac, ECORE_ETH_ALEN);
+	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+		   "Requesting bulletin update for MAC[%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx]\n",
+		   p_mac[0], p_mac[1], p_mac[2], p_mac[3], p_mac[4], p_mac[5]);
+
+	/* add list termination tlv */
+	ecore_add_tlv(&p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	p_resp = &p_iov->pf2vf_reply->default_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp));
+	ecore_vf_pf_req_end(p_hwfn, rc);
+
+	return rc;
+}
+
 enum _ecore_status_t
 ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn, u16 rx_coal, u16 tx_coal,
 			 struct ecore_queue_cid     *p_cid)
@@ -1723,12 +1832,16 @@ u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
 			   u16               sb_id)
 {
 	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	u8 num_rxqs;
 
 	if (!p_iov) {
 		DP_NOTICE(p_hwfn, true, "vf_sriov_info isn't initialized\n");
 		return 0;
 	}
 
+	ecore_vf_get_num_rxqs(p_hwfn, &num_rxqs);
+
+	/* @DPDK */
 	return p_iov->acquire_resp.resc.hw_sbs[sb_id].hw_sb_id;
 }
 
@@ -1746,10 +1859,44 @@ void ecore_vf_set_sb_info(struct ecore_hwfn *p_hwfn,
 		DP_NOTICE(p_hwfn, true, "Can't configure SB %04x\n", sb_id);
 		return;
 	}
-
 	p_iov->sbs_info[sb_id] = p_sb;
 }
 
+void ecore_vf_init_admin_mac(struct ecore_hwfn *p_hwfn, u8 *p_mac)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct ecore_bulletin_content temp;
+	u32 crc, crc_size;
+
+	crc_size = sizeof(p_iov->bulletin.p_virt->crc);
+	OSAL_MEMCPY(&temp, p_iov->bulletin.p_virt, p_iov->bulletin.size);
+
+	/* If version did not update, no need to do anything */
+	if (temp.version == p_iov->bulletin_shadow.version) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Bulletin version didn't update\n");
+		return;
+	}
+
+	/* Verify the bulletin we see is valid */
+	crc = OSAL_CRC32(0, (u8 *)&temp + crc_size,
+			 p_iov->bulletin.size - crc_size);
+	if (crc != temp.crc) {
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Bulletin CRC validation failed\n");
+		return;
+	}
+
+	if ((temp.valid_bitmap & (1 << MAC_ADDR_FORCED)) ||
+	    (temp.valid_bitmap & (1 << VFPF_BULLETIN_MAC_ADDR))) {
+		OSAL_MEMCPY(p_mac, temp.mac, ECORE_ETH_ALEN);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+			   "Admin MAC read from bulletin:[%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx]\n",
+			   p_mac[0], p_mac[1], p_mac[2], p_mac[3],
+			   p_mac[4], p_mac[5]);
+	}
+}
+
 enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn *p_hwfn,
 					    u8 *p_change)
 {
@@ -1845,7 +1992,14 @@ void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
 				 &p_hwfn->vf_iov_info->bulletin_shadow);
 }
 
-void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs)
+void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_sbs)
+{
+	*num_sbs = p_hwfn->vf_iov_info->acquire_resp.resc.num_sbs;
+}
+
+void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
+			   u8 *num_rxqs)
 {
 	*num_rxqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs;
 }
@@ -1856,11 +2010,18 @@ void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
 	*num_txqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_txqs;
 }
 
-void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac)
+void ecore_vf_get_num_cids(struct ecore_hwfn *p_hwfn,
+			   u8 *num_cids)
+{
+	*num_cids = p_hwfn->vf_iov_info->acquire_resp.resc.num_cids;
+}
+
+void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn,
+			   u8 *port_mac)
 {
 	OSAL_MEMCPY(port_mac,
 		    p_hwfn->vf_iov_info->acquire_resp.pfdev_info.port_mac,
-		    ETH_ALEN);
+		    ECORE_ETH_ALEN);
 }
 
 void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
@@ -1872,17 +2033,8 @@ void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
 	*num_vlan_filters = p_vf->acquire_resp.resc.num_vlan_filters;
 }
 
-void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn,
-			  u32 *num_sbs)
-{
-	struct ecore_vf_iov *p_vf;
-
-	p_vf = p_hwfn->vf_iov_info;
-	*num_sbs = (u32)p_vf->acquire_resp.resc.num_sbs;
-}
-
 void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn,
-				  u32 *num_mac_filters)
+				  u8 *num_mac_filters)
 {
 	struct ecore_vf_iov *p_vf = p_hwfn->vf_iov_info;
 
@@ -1898,14 +2050,14 @@ bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac)
 		return true;
 
 	/* Forbid VF from changing a MAC enforced by PF */
-	if (OSAL_MEMCMP(bulletin->mac, mac, ETH_ALEN))
+	if (OSAL_MEMCMP(bulletin->mac, mac, ECORE_ETH_ALEN))
 		return false;
 
 	return false;
 }
 
-bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
-				      u8 *p_is_forced)
+LNX_STATIC bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn,
+						 u8 *dst_mac, u8 *p_is_forced)
 {
 	struct ecore_bulletin_content *bulletin;
 
@@ -1921,7 +2073,7 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
 		return false;
 	}
 
-	OSAL_MEMCPY(dst_mac, bulletin->mac, ETH_ALEN);
+	OSAL_MEMCPY(dst_mac, bulletin->mac, ECORE_ETH_ALEN);
 
 	return true;
 }
@@ -1972,9 +2124,85 @@ void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
 	*fw_eng = info->fw_eng;
 }
 
+void ecore_vf_update_mac(struct ecore_hwfn *p_hwfn, u8 *mac)
+{
+	if (p_hwfn->vf_iov_info)
+		OSAL_MEMCPY(p_hwfn->vf_iov_info->mac_addr, mac, ECORE_ETH_ALEN);
+}
+
 #ifdef CONFIG_ECORE_SW_CHANNEL
 void ecore_vf_set_hw_channel(struct ecore_hwfn *p_hwfn, bool b_is_hw)
 {
 	p_hwfn->vf_iov_info->b_hw_channel = b_is_hw;
 }
 #endif
+
+LNX_STATIC void ecore_vf_db_recovery_execute(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct ecore_bulletin_content *p_bulletin;
+
+	p_bulletin = &p_iov->bulletin_shadow;
+
+	/* Got a new indication from PF to execute doorbell overflow recovery */
+	if (p_iov->db_recovery_execute_prev != p_bulletin->db_recovery_execute)
+		ecore_db_recovery_execute(p_hwfn);
+
+	p_iov->db_recovery_execute_prev = p_bulletin->db_recovery_execute;
+}
+
+LNX_STATIC enum _ecore_status_t
+ecore_vf_async_event_req(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+	struct pfvf_async_event_resp_tlv *p_resp = OSAL_NULL;
+	struct vfpf_async_event_req_tlv *p_req = OSAL_NULL;
+	struct ecore_bulletin_content *p_bulletin;
+	enum _ecore_status_t rc;
+	u8 i;
+
+	p_bulletin = &p_iov->bulletin_shadow;
+	if (p_iov->eq_completion_prev == p_bulletin->eq_completion)
+		return ECORE_SUCCESS;
+
+	/* Update prev value for future comparison */
+	p_iov->eq_completion_prev = p_bulletin->eq_completion;
+
+	p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_ASYNC_EVENT,
+				 sizeof(*p_req));
+	ecore_add_tlv(&p_iov->offset,
+		      CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	p_resp = &p_iov->pf2vf_reply->async_event_resp;
+	rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp));
+	if (rc != ECORE_SUCCESS)
+		goto exit;
+
+	if (p_resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		rc = ECORE_INVAL;
+		goto exit;
+	}
+
+	for (i = 0; i < p_resp->eq_count; i++) {
+		struct event_ring_entry *p_eqe = &p_resp->eqs[i];
+
+		ecore_event_ring_entry_dp(p_hwfn, p_eqe);
+		rc = ecore_async_event_completion(p_hwfn, p_eqe);
+		if (rc != ECORE_SUCCESS)
+			DP_NOTICE(p_hwfn, false,
+				  "Processing async event completion failed - op %x prot %x echo %x\n",
+				  p_eqe->opcode, p_eqe->protocol_id,
+				  OSAL_LE16_TO_CPU(p_eqe->echo));
+	}
+
+	rc = ECORE_SUCCESS;
+exit:
+	ecore_vf_pf_req_end(p_hwfn, rc);
+
+	return rc;
+}
+
+#ifdef _NTDDK_
+#pragma warning(pop)
+#endif
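
All of the new requests above (bulletin MAC update, async-event fetch, the
coalesce/MTU paths) share one VF->PF channel sequence: prepare the request
TLV, terminate the TLV list, send, and always release the mailbox. A minimal
sketch of that sequence, assuming a request with no type-specific payload
(the TLV type below is only an example):

static enum _ecore_status_t vf_channel_request_sketch(struct ecore_hwfn *p_hwfn)
{
	struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
	struct pfvf_def_resp_tlv *p_resp;
	struct vfpf_first_tlv *p_req;
	enum _ecore_status_t rc;

	/* Clear the mailbox and place the request header TLV */
	p_req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_INT_CLEANUP, sizeof(*p_req));
	(void)p_req; /* a real request would fill type-specific fields here */

	/* Terminate the TLV list */
	ecore_add_tlv(&p_iov->offset, CHANNEL_TLV_LIST_END,
		      sizeof(struct channel_list_end_tlv));

	/* Send to the PF and wait for its response */
	p_resp = &p_iov->pf2vf_reply->default_resp;
	rc = ecore_send_msg2pf(p_hwfn, &p_resp->hdr.status, sizeof(*p_resp));
	if (rc == ECORE_SUCCESS && p_resp->hdr.status != PFVF_STATUS_SUCCESS)
		rc = ECORE_INVAL;

	/* Always release the mailbox, even on failure */
	ecore_vf_pf_req_end(p_hwfn, rc);

	return rc;
}
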
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index f027eba3e..098cbe6b3 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_VF_H__
 #define __ECORE_VF_H__
 
@@ -14,9 +14,11 @@
 #include "ecore_dev_api.h"
 
 /* Default number of CIDs [total of both Rx and Tx] to be requested
- * by default.
+ * by default, and the maximum possible number.
  */
 #define ECORE_ETH_VF_DEFAULT_NUM_CIDS	(32)
+#define ECORE_ETH_VF_MAX_NUM_CIDS	(255)
+#define ECORE_VF_RDMA_DEFAULT_CNQS	(4)
 
 /* This data is held in the ecore_hwfn structure for VFs only. */
 struct ecore_vf_iov {
@@ -63,8 +65,23 @@ struct ecore_vf_iov {
 
 	/* retry count for VF acquire on channel timeout */
 	u8 acquire_retry_cnt;
+
+	/* Compare this value with bulletin board's db_recovery_execute to know
+	 * whether doorbell overflow recovery is needed.
+	 */
+	u8 db_recovery_execute_prev;
+
+	/* Compare this value with bulletin board's eq_completion to know
+	 * whether new async events completions are pending.
+	 */
+	u16 eq_completion_prev;
+
+	u8 mac_addr[ECORE_ETH_ALEN];
+
+	bool dpm_enabled;
 };
 
+#ifdef CONFIG_ECORE_SRIOV
 /**
  * @brief VF - Get coalesce per VF's relative queue.
  *
@@ -77,7 +94,6 @@ enum _ecore_status_t ecore_vf_pf_get_coalesce(struct ecore_hwfn *p_hwfn,
 					      u16 *p_coal,
 					      struct ecore_queue_cid *p_cid);
 
-enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn);
 /**
  * @brief VF - Set Rx/Tx coalesce per VF's relative queue.
  *             Coalesce value '0' will omit the configuration.
@@ -92,7 +108,6 @@ enum _ecore_status_t ecore_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 					      u16 rx_coal, u16 tx_coal,
 					      struct ecore_queue_cid *p_cid);
 
-#ifdef CONFIG_ECORE_SRIOV
 /**
  * @brief hw preparation for VF
  *	sends ACQUIRE message
@@ -136,7 +151,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
  * @param p_cid
  * @param bd_chain_phys_addr	- physical address of tx chain
  * @param pp_doorbell		- pointer to address to which to
- *				write the doorbell too..
+ *		write the doorbell to.
  *
  * @return enum _ecore_status_t
  */
@@ -200,9 +215,8 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
  *
  * @return enum _ecore_status_t
  */
-enum _ecore_status_t
-ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
-			 struct ecore_sp_vport_update_params *p_params);
+enum _ecore_status_t ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
+					      struct ecore_sp_vport_update_params *p_params);
 
 /**
  * @brief VF - send a close message to PF
@@ -254,6 +268,7 @@ void ecore_vf_set_sb_info(struct ecore_hwfn *p_hwfn,
  * @param tpa_mode
  * @param max_buffers_per_cqe,
  * @param only_untagged - default behavior regarding vlan acceptance
+ * @param zero_placement_offset - if set, zero padding will be inserted
  *
  * @return enum _ecore_status
  */
@@ -264,7 +279,8 @@ enum _ecore_status_t ecore_vf_pf_vport_start(
 			u8 inner_vlan_removal,
 			enum ecore_tpa_mode tpa_mode,
 			u8 max_buffers_per_cqe,
-			u8 only_untagged);
+			u8 only_untagged,
+			u8 zero_placement_offset);
 
 /**
  * @brief ecore_vf_pf_vport_stop - stop the VF's vport
@@ -317,23 +333,29 @@ void __ecore_vf_get_link_state(struct ecore_mcp_link_state *p_link,
  */
 void __ecore_vf_get_link_caps(struct ecore_mcp_link_capabilities *p_link_caps,
 			      struct ecore_bulletin_content *p_bulletin);
-
 enum _ecore_status_t
 ecore_vf_pf_tunnel_param_update(struct ecore_hwfn *p_hwfn,
 				struct ecore_tunnel_info *p_tunn);
-
 void ecore_vf_set_vf_start_tunn_update_param(struct ecore_tunnel_info *p_tun);
 
-u32 ecore_vf_hw_bar_size(struct ecore_hwfn *p_hwfn,
-		     enum BAR_ID bar_id);
-
+u32 ecore_vf_hw_bar_size(struct ecore_hwfn *p_hwfn, enum BAR_ID bar_id);
+
+/**
+ * @brief - Ask PF to update the MAC address in its bulletin board
+ *
+ * @param p_mac - mac address to be updated in bulletin board
+ */
+enum _ecore_status_t
+ecore_vf_pf_bulletin_update_mac(struct ecore_hwfn *p_hwfn, u8 *p_mac);
 /**
  * @brief - ecore_vf_pf_update_mtu Update MTU for VF.
+ *          The vport must not be active when this function is called.
  *
- * @param p_hwfn
  * @param - mtu
  */
 enum _ecore_status_t
 ecore_vf_pf_update_mtu(struct ecore_hwfn *p_hwfn, u16 mtu);
+
+enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn);
 #endif
+
 #endif /* __ECORE_VF_H__ */
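
For reference, the new ecore_vf_pf_bulletin_update_mac() pairs with
ecore_vf_update_mac(), which refreshes the VF's shadow copy (mac_addr[]
above). A hypothetical caller-side sketch, not part of this patch - the MAC
value is made up:

	u8 new_mac[ECORE_ETH_ALEN] = {0x02, 0x00, 0x11, 0x22, 0x33, 0x44};

	/* Ask the PF to mirror the new address into the bulletin board */
	if (ecore_vf_pf_bulletin_update_mac(p_hwfn, new_mac) == ECORE_SUCCESS)
		ecore_vf_update_mac(p_hwfn, new_mac); /* keep shadow in sync */
	else
		DP_NOTICE(p_hwfn, false, "Bulletin MAC update request failed\n");
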
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index 43951a9a3..b664dafbd 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_VF_API_H__
 #define __ECORE_VF_API_H__
 
@@ -52,6 +52,15 @@ void ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
 void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
 			    struct ecore_mcp_link_capabilities *p_link_caps);
 
+/**
+ * @brief Get number of status blocks allocated for VF by ecore
+ *
+ *  @param p_hwfn
+ *  @param num_sbs - allocated status blocks
+ */
+void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn,
+			  u8 *num_sbs);
+
 /**
  * @brief Get number of Rx queues allocated for VF by ecore
  *
@@ -70,6 +79,29 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
 void ecore_vf_get_num_txqs(struct ecore_hwfn *p_hwfn,
 			   u8 *num_txqs);
 
+/**
+ * @brief Get number of available connections [both Rx and Tx] for VF
+ *
+ * @param p_hwfn
+ * @param num_cids - allocated number of connections
+ */
+void ecore_vf_get_num_cids(struct ecore_hwfn *p_hwfn, u8 *num_cids);
+
+/**
+ * @brief Get number of cnqs allocated for VF by ecore
+ *
+ * @param p_hwfn
+ * @param num_cnqs - allocated cnqs
+ */
+void ecore_vf_get_num_cnqs(struct ecore_hwfn *p_hwfn, u8 *num_cnqs);
+
+/**
+ * @brief Returns the relative sb_id for the first CNQ
+ *
+ * @param p_hwfn
+ */
+u8 ecore_vf_rdma_cnq_sb_start_id(struct ecore_hwfn *p_hwfn);
+
 /**
  * @brief Get port mac address for VF
  *
@@ -88,9 +120,6 @@ void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn,
 void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
 				   u8 *num_vlan_filters);
 
-void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn,
-			  u32 *num_sbs);
-
 /**
  * @brief Get number of MAC filters allocated for VF by ecore
  *
@@ -98,7 +127,7 @@ void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn,
  *  @param num_rxqs - allocated MAC filters
  */
 void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn,
-				  u32 *num_mac_filters);
+				  u8 *num_mac_filters);
 
 /**
  * @brief Check if VF can set a MAC address
@@ -110,6 +139,17 @@ void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn,
  */
 bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac);
 
+/**
+ * @brief Init the admin mac for the VF from the bulletin
+ *        without updating VF shadow copy of bulletin
+ *
+ * @param p_hwfn
+ * @param p_mac
+ *
+ * @return void
+ */
+void ecore_vf_init_admin_mac(struct ecore_hwfn *p_hwfn, u8 *p_mac);
+
 #ifndef LINUX_REMOVE
 /**
  * @brief Copy forced MAC address from bulletin board
@@ -117,7 +157,7 @@ bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac);
  * @param hwfn
  * @param dst_mac
 * @param p_is_forced - out param indicating whether the mac
- *			exist if it forced or not.
+ *	        exists and whether it is forced.
  *
  * @return bool       - return true if mac exist and false if
  *                      not.
@@ -145,11 +185,17 @@ bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid);
  */
 bool ecore_vf_get_pre_fp_hsi(struct ecore_hwfn *p_hwfn);
 
+/**
+ * @brief Execute doorbell overflow recovery if needed
+ *
+ * @param p_hwfn
+ */
+void ecore_vf_db_recovery_execute(struct ecore_hwfn *p_hwfn);
+
 #endif
 
 /**
- * @brief Set firmware version information in dev_info from VFs acquire
- *  response tlv
+ * @brief Set firmware version information in dev_info from the VF's acquire response TLV
  *
  * @param p_hwfn
  * @param fw_major
@@ -164,6 +210,7 @@ void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
 			     u16 *fw_eng);
 void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
 				     u16 *p_vxlan_port, u16 *p_geneve_port);
+void ecore_vf_update_mac(struct ecore_hwfn *p_hwfn, u8 *mac);
 
 #ifdef CONFIG_ECORE_SW_CHANNEL
 /**
@@ -177,5 +224,12 @@ void ecore_vf_bulletin_get_udp_ports(struct ecore_hwfn *p_hwfn,
  */
 void ecore_vf_set_hw_channel(struct ecore_hwfn *p_hwfn, bool b_is_hw);
 #endif
+
+/**
+ * @brief Request and process completed async events
+ *
+ * @param p_hwfn
+ */
+enum _ecore_status_t ecore_vf_async_event_req(struct ecore_hwfn *p_hwfn);
 #endif
 #endif
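
The two new bulletin-driven services (doorbell overflow recovery and
async-event fetch) are both edge-triggered off bulletin field changes. An
illustrative slow-path polling step, assuming a periodic task context:

	u8 changed = 0;

	if (ecore_vf_read_bulletin(p_hwfn, &changed) == ECORE_SUCCESS && changed) {
		/* Run doorbell overflow recovery if the PF requested it */
		ecore_vf_db_recovery_execute(p_hwfn);

		/* Fetch and process pending async EQ completions */
		if (ecore_vf_async_event_req(p_hwfn) != ECORE_SUCCESS)
			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
				   "Async event request failed\n");
	}
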
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index f92dc428a..e03db7c2b 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -1,14 +1,13 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_VF_PF_IF_H__
 #define __ECORE_VF_PF_IF_H__
 
-/* @@@ TBD MichalK this should be HSI? */
-#define T_ETH_INDIRECTION_TABLE_SIZE 128
+#define T_ETH_INDIRECTION_TABLE_SIZE 128 /* @@@ TBD MichalK this should be HSI? */
 #define T_ETH_RSS_KEY_SIZE 10 /* @@@ TBD this should be HSI? */
 
 /***********************************************
@@ -33,6 +32,15 @@ struct hw_sb_info {
 	u8 padding[5];
 };
 
+/* ECORE GID can be used as IPv4/6 address in RoCE v2 */
+union vfpf_roce_gid {
+	u8 bytes[16];
+	u16 words[8];
+	u32 dwords[4];
+	u64 qwords[2];
+	u32 ipv4_addr;
+};
+
 /***********************************************
  *
  * HW VF-PF channel definitions
@@ -42,6 +50,11 @@ struct hw_sb_info {
  **/
 #define TLV_BUFFER_SIZE		1024
 
+#define PFVF_MAX_QUEUES_PER_VF		16
+#define PFVF_MAX_CNQS_PER_VF		16
+#define PFVF_MAX_SBS_PER_VF		\
+	(PFVF_MAX_QUEUES_PER_VF + PFVF_MAX_CNQS_PER_VF)
+
 /* vf pf channel tlvs */
 /* general tlv header (used for both vf->pf request and pf->vf response) */
 struct channel_tlv {
@@ -88,8 +101,7 @@ struct vfpf_acquire_tlv {
 	 * and allow them to interact.
 	 */
 #endif
-/* VF pre-FP hsi version */
-#define VFPF_ACQUIRE_CAP_PRE_FP_HSI	(1 << 0)
+#define VFPF_ACQUIRE_CAP_PRE_FP_HSI	(1 << 0) /* VF pre-FP hsi version */
 #define VFPF_ACQUIRE_CAP_100G		(1 << 1) /* VF can support 100g */
 
 	/* A requirement for supporting multi-Tx queues on a single queue-zone,
@@ -105,6 +117,21 @@ struct vfpf_acquire_tlv {
 	 * QUEUE_QIDS is set.
 	 */
 #define VFPF_ACQUIRE_CAP_PHYSICAL_BAR	(1 << 3)
+
+	/* Let the PF know this VF needs DORQ flush in case of overflow */
+#define VFPF_ACQUIRE_CAP_EDPM		(1 << 4)
+
+	/* Below capabilities to let PF know that VF supports RDMA. */
+#define VFPF_ACQUIRE_CAP_ROCE		(1 << 5)
+#define VFPF_ACQUIRE_CAP_IWARP		(1 << 6)
+
+	/* Requesting and receiving EQs from PF */
+#define VFPF_ACQUIRE_CAP_EQ		(1 << 7)
+
+	/* Let the PF know this VF supports XRC mode */
+#define VFPF_ACQUIRE_CAP_XRC		(1 << 8)
+
 		u64 capabilities;
 		u8 fw_major;
 		u8 fw_minor;
@@ -186,6 +213,13 @@ struct pfvf_acquire_resp_tlv {
 	/* PF expects queues to be received with additional qids */
 #define PFVF_ACQUIRE_CAP_QUEUE_QIDS		(1 << 3)
 
+	/* Below capabilities to let VF know what PF/chip supports. */
+#define PFVF_ACQUIRE_CAP_ROCE		(1 << 4)
+#define PFVF_ACQUIRE_CAP_IWARP		(1 << 5)
+
+	/* Let the VF know whether the PF supports EDPM */
+#define PFVF_ACQUIRE_CAP_EDPM		(1 << 6)
+
 		u16 db_size;
 		u8  indices_per_sb;
 		u8 os_type;
@@ -199,7 +233,7 @@ struct pfvf_acquire_resp_tlv {
 
 		struct pfvf_stats_info stats_info;
 
-		u8 port_mac[ETH_ALEN];
+		u8 port_mac[ECORE_ETH_ALEN];
 
 		/* It's possible PF had to configure an older fastpath HSI
 		 * [in case VF is newer than PF]. This is communicated back
@@ -215,9 +249,7 @@ struct pfvf_acquire_resp_tlv {
 		 * this struct with suggested amount of resources for next
 		 * acquire request
 		 */
-		#define PFVF_MAX_QUEUES_PER_VF         16
-		#define PFVF_MAX_SBS_PER_VF            16
-		struct hw_sb_info hw_sbs[PFVF_MAX_SBS_PER_VF];
+		struct hw_sb_info hw_sbs[PFVF_MAX_QUEUES_PER_VF];
 		u8      hw_qid[PFVF_MAX_QUEUES_PER_VF];
 		u8      cid[PFVF_MAX_QUEUES_PER_VF];
 
@@ -253,7 +285,7 @@ struct vfpf_qid_tlv {
 
 /* Soft FLR req */
 struct vfpf_soft_flr_tlv {
-	struct vfpf_first_tlv first_tlv;
+	struct vfpf_first_tlv	first_tlv;
 	u32 reserved1;
 	u32 reserved2;
 };
@@ -349,9 +381,9 @@ struct vfpf_q_mac_vlan_filter {
 	u32 flags;
 	#define VFPF_Q_FILTER_DEST_MAC_VALID    0x01
 	#define VFPF_Q_FILTER_VLAN_TAG_VALID    0x02
-	#define VFPF_Q_FILTER_SET_MAC		0x100   /* set/clear */
+	#define VFPF_Q_FILTER_SET_MAC	0x100   /* set/clear */
 
-	u8  mac[ETH_ALEN];
+	u8  mac[ECORE_ETH_ALEN];
 	u16 vlan_tag;
 
 	u8	padding[4];
@@ -361,7 +393,7 @@ struct vfpf_q_mac_vlan_filter {
 struct vfpf_vport_start_tlv {
 	struct vfpf_first_tlv	first_tlv;
 
-	u64		sb_addr[PFVF_MAX_SBS_PER_VF];
+	u64			sb_addr[PFVF_MAX_QUEUES_PER_VF];
 
 	u32			tpa_mode;
 	u16			dep1;
@@ -373,7 +405,8 @@ struct vfpf_vport_start_tlv {
 	u8			only_untagged;
 	u8			max_buffers_per_cqe;
 
-	u8			padding[4];
+	u8			zero_placement_offset;
+	u8			padding[3];
 };
 
 /* Extended tlvs - need to add rss, mcast, accept mode tlvs */
@@ -468,7 +501,7 @@ struct vfpf_ucast_filter_tlv {
 	u8			opcode;
 	u8			type;
 
-	u8			mac[ETH_ALEN];
+	u8			mac[ECORE_ETH_ALEN];
 
 	u16			vlan;
 	u16			padding[3];
@@ -490,7 +523,8 @@ struct vfpf_update_tunn_param_tlv {
 	u8			update_vxlan_port;
 	u16			geneve_port;
 	u16			vxlan_port;
-	u8			padding[2];
+	u8			update_non_l2_vxlan;
+	u8			non_l2_vxlan_enable;
 };
 
 struct pfvf_update_tunn_param_tlv {
@@ -538,7 +572,7 @@ struct pfvf_read_coal_resp_tlv {
 
 struct vfpf_bulletin_update_mac_tlv {
 	struct vfpf_first_tlv first_tlv;
-	u8 mac[ETH_ALEN];
+	u8 mac[ECORE_ETH_ALEN];
 	u8 padding[2];
 };
 
@@ -548,6 +582,22 @@ struct vfpf_update_mtu_tlv {
 	u8 padding[6];
 };
 
+struct vfpf_async_event_req_tlv {
+	struct vfpf_first_tlv	first_tlv;
+	u32 reserved1;
+	u32 reserved2;
+};
+
+#define ECORE_PFVF_EQ_COUNT	32
+
+struct pfvf_async_event_resp_tlv {
+	struct pfvf_tlv		hdr;
+
+	struct event_ring_entry	eqs[ECORE_PFVF_EQ_COUNT];
+	u8			eq_count;
+	u8			padding[7];
+};
+
 union vfpf_tlvs {
 	struct vfpf_first_tlv			first_tlv;
 	struct vfpf_acquire_tlv			acquire;
@@ -564,17 +614,19 @@ union vfpf_tlvs {
 	struct vfpf_read_coal_req_tlv		read_coal_req;
 	struct vfpf_bulletin_update_mac_tlv	bulletin_update_mac;
 	struct vfpf_update_mtu_tlv		update_mtu;
-	struct vfpf_soft_flr_tlv		soft_flr;
+	struct vfpf_async_event_req_tlv		async_event_req;
+	struct vfpf_soft_flr_tlv		vf_soft_flr;
 	struct tlv_buffer_size			tlv_buf_size;
 };
 
 union pfvf_tlvs {
-	struct pfvf_def_resp_tlv		default_resp;
-	struct pfvf_acquire_resp_tlv		acquire_resp;
-	struct tlv_buffer_size			tlv_buf_size;
-	struct pfvf_start_queue_resp_tlv	queue_start;
-	struct pfvf_update_tunn_param_tlv	tunn_param_resp;
-	struct pfvf_read_coal_resp_tlv		read_coal_resp;
+	struct pfvf_def_resp_tlv		 default_resp;
+	struct pfvf_acquire_resp_tlv		 acquire_resp;
+	struct tlv_buffer_size			 tlv_buf_size;
+	struct pfvf_start_queue_resp_tlv	 queue_start;
+	struct pfvf_update_tunn_param_tlv	 tunn_param_resp;
+	struct pfvf_read_coal_resp_tlv		 read_coal_resp;
+	struct pfvf_async_event_resp_tlv	 async_event_resp;
 };
 
 /* This is a structure which is allocated in the VF, which the PF may update
@@ -599,7 +651,7 @@ enum ecore_bulletin_bit {
 	/* Alert the VF that suggested mac was sent by the PF.
 	 * MAC_ADDR will be disabled in case MAC_ADDR_FORCED is set
 	 */
-	VFPF_BULLETIN_MAC_ADDR = 5
+	VFPF_BULLETIN_MAC_ADDR = 5,
 };
 
 struct ecore_bulletin_content {
@@ -612,7 +664,7 @@ struct ecore_bulletin_content {
 	u64 valid_bitmap;
 
 	/* used for MAC_ADDR or MAC_ADDR_FORCED */
-	u8 mac[ETH_ALEN];
+	u8 mac[ECORE_ETH_ALEN];
 
 	/* If valid, 1 => only untagged Rx if no vlan is configured */
 	u8 default_only_untagged;
@@ -647,7 +699,9 @@ struct ecore_bulletin_content {
 	u8 sfp_tx_fault;
 	u16 vxlan_udp_port;
 	u16 geneve_udp_port;
-	u8 padding4[2];
+
+	/* Pending async events completions */
+	u16 eq_completion;
 
 	u32 speed;
 	u32 partner_adv_speed;
@@ -656,7 +710,9 @@ struct ecore_bulletin_content {
 
 	/* Forced vlan */
 	u16 pvid;
-	u16 padding5;
+
+	u8 db_recovery_execute;
+	u8 padding4;
 };
 
 struct ecore_bulletin {
@@ -667,6 +723,7 @@ struct ecore_bulletin {
 
 enum {
 /*!!!!! Make sure to update STRINGS structure accordingly !!!!!*/
+/*!!!!! Also make sure that new TLVs will be inserted last (before CHANNEL_TLV_MAX) !!!!!*/
 
 	CHANNEL_TLV_NONE, /* ends tlv sequence */
 	CHANNEL_TLV_ACQUIRE,
@@ -739,6 +796,6 @@ enum {
 
 /*!!!!! Make sure to update STRINGS structure accordingly !!!!!*/
 };
-extern const char *qede_ecore_channel_tlvs_string[];
+extern const char *ecore_channel_tlvs_string[];
 
 #endif /* __ECORE_VF_PF_IF_H__ */
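
The bulletin board above is validated by the VF before any field is trusted:
the crc member covers every byte of struct ecore_bulletin_content that
follows it, and version is bumped by the PF on each post. A sketch of the
check (mirroring ecore_vf_init_admin_mac() earlier in this patch;
bulletin_size stands in for the negotiated p_iov->bulletin.size):

	u32 crc_size = sizeof(p_bulletin->crc);
	u32 crc = OSAL_CRC32(0, (u8 *)p_bulletin + crc_size,
			     bulletin_size - crc_size);

	if (crc == p_bulletin->crc && p_bulletin->version != prev_version) {
		/* Valid, new bulletin - safe to consume eq_completion,
		 * db_recovery_execute, MAC, link state, etc.
		 */
	}
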
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index 1ca11c473..706443838 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ETH_COMMON__
 #define __ETH_COMMON__
 /********************/
@@ -13,9 +13,7 @@
 /* FP HSI version. FP HSI is compatible if (fwVer.major == drvVer.major &&
  * fwVer.minor >= drvVer.minor)
  */
-/* ETH FP HSI Major version */
-#define ETH_HSI_VER_MAJOR                   3
-/* ETH FP HSI Minor version */
+#define ETH_HSI_VER_MAJOR                   3    /* ETH FP HSI Major version */
 #define ETH_HSI_VER_MINOR                   11   /* ETH FP HSI Minor version */
 
 /* Alias for 8.7.x.x/8.8.x.x ETH FP HSI MINOR version. In this version driver
@@ -35,12 +33,12 @@
 #define ETH_RX_CQE_PAGE_SIZE_BYTES          4096
 #define ETH_RX_NUM_NEXT_PAGE_BDS            2
 
-/* Limitation for Tunneled LSO Packets on the offset (in bytes) of the inner IP
- * header (relevant to LSO for tunneled packet):
+/* Limitation for Tunneled LSO Packets on the offset (in bytes) of the inner IP header (relevant to
+ * LSO for tunneled packet):
  */
-/* Offset is limited to 253 bytes (inclusive). */
+/* E4 Only: Offset is limited to 253 bytes (inclusive). */
 #define ETH_MAX_TUNN_LSO_INNER_IPV4_OFFSET          253
-/* Offset is limited to 251 bytes (inclusive). */
+/* E4 Only: Offset is limited to 251 bytes (inclusive). */
 #define ETH_MAX_TUNN_LSO_INNER_IPV6_OFFSET          251
 
 #define ETH_TX_MIN_BDS_PER_NON_LSO_PKT              1
@@ -70,14 +68,16 @@
 /* Value for a connection for which same as last feature is disabled */
 #define ETH_TX_INACTIVE_SAME_AS_LAST                0xFFFF
 
-/* Maximum number of statistics counters */
-#define ETH_NUM_STATISTIC_COUNTERS                  MAX_NUM_VPORTS
-/* Maximum number of statistics counters when doubled VF zone used */
-#define ETH_NUM_STATISTIC_COUNTERS_DOUBLE_VF_ZONE \
-	(ETH_NUM_STATISTIC_COUNTERS - MAX_NUM_VFS / 2)
-/* Maximum number of statistics counters when quad VF zone used */
-#define ETH_NUM_STATISTIC_COUNTERS_QUAD_VF_ZONE \
-	(ETH_NUM_STATISTIC_COUNTERS - 3 * MAX_NUM_VFS / 4)
+/* Maximum number of statistics counters for E4 */
+#define ETH_NUM_STATISTIC_COUNTERS_E4               MAX_NUM_VPORTS_E4
+/* Maximum number of statistics counters for E5 */
+#define ETH_NUM_STATISTIC_COUNTERS_E5               MAX_NUM_VPORTS_E5
+/* E4 Only: Maximum number of statistics counters when doubled VF zone used */
+#define ETH_NUM_STATISTIC_COUNTERS_DOUBLE_VF_ZONE   \
+	(ETH_NUM_STATISTIC_COUNTERS_E4 - MAX_NUM_VFS_E4 / 2)
+/* E4 Only: Maximum number of statistics counters when quad VF zone used */
+#define ETH_NUM_STATISTIC_COUNTERS_QUAD_VF_ZONE     \
+	(ETH_NUM_STATISTIC_COUNTERS_E4 - 3 * MAX_NUM_VFS_E4 / 4)
 
 /* Maximum number of buffers, used for RX packet placement */
 #define ETH_RX_MAX_BUFF_PER_PKT             5
@@ -122,13 +122,15 @@
 
 /* Control frame check constants */
 /* Number of etherType values configured by driver for control frame check */
-#define ETH_CTL_FRAME_ETH_TYPE_NUM              4
+#define ETH_CTL_FRAME_ETH_TYPE_NUM          4
 
 /* GFS constants */
 #define ETH_GFT_TRASHCAN_VPORT         0x1FF /* GFT drop flow vport number */
 #define ETH_GFS_NUM_OF_ACTIONS         10           /* Maximum number of GFS actions supported */
 
 
+#define ETH_TGSRC_CTX_SIZE                6 /* Size in QREGS */
+#define ETH_RGSRC_CTX_SIZE                6 /* Size in QREGS */
 
 /*
  * Destination port mode
@@ -156,39 +158,34 @@ enum eth_addr_type {
 
 struct eth_tx_1st_bd_flags {
 	u8 bitfields;
-/* Set to 1 in the first BD. (for debug) */
-#define ETH_TX_1ST_BD_FLAGS_START_BD_MASK         0x1
+#define ETH_TX_1ST_BD_FLAGS_START_BD_MASK         0x1 /* Set to 1 in the first BD. (for debug) */
 #define ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT        0
 /* Do not allow additional VLAN manipulations on this packet. */
 #define ETH_TX_1ST_BD_FLAGS_FORCE_VLAN_MODE_MASK  0x1
 #define ETH_TX_1ST_BD_FLAGS_FORCE_VLAN_MODE_SHIFT 1
-/* Recalculate IP checksum. For tunneled packet - relevant to inner header. */
+/* Recalculate IP checksum. For tunneled packet - relevant to inner header. Note: For LSO packets,
+ * must be set.
+ */
 #define ETH_TX_1ST_BD_FLAGS_IP_CSUM_MASK          0x1
 #define ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT         2
-/* Recalculate TCP/UDP checksum.
- * For tunneled packet - relevant to inner header.
- */
+/* Recalculate TCP/UDP checksum. For tunneled packet - relevant to inner header. */
 #define ETH_TX_1ST_BD_FLAGS_L4_CSUM_MASK          0x1
 #define ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT         3
-/* If set, insert VLAN tag from vlan field to the packet.
- * For tunneled packet - relevant to outer header.
+/* If set, insert VLAN tag from vlan field to the packet. For tunneled packet - relevant to outer
+ * header.
  */
 #define ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_MASK   0x1
 #define ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT  4
-/* If set, this is an LSO packet. Note: For Tunneled LSO packets, the offset of
- * the inner IPV4 (and IPV6) header is limited to 253 (and 251 respectively)
- * bytes, inclusive.
+/* If set, this is an LSO packet. Note: For Tunneled LSO packets, the offset of the inner IPV4 (and
+ * IPV6) header is limited to 253 (and 251 respectively) bytes, inclusive.
  */
 #define ETH_TX_1ST_BD_FLAGS_LSO_MASK              0x1
 #define ETH_TX_1ST_BD_FLAGS_LSO_SHIFT             5
 /* Recalculate Tunnel IP Checksum (if Tunnel IP Header is IPv4) */
 #define ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK     0x1
 #define ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT    6
-/* Recalculate Tunnel UDP/GRE Checksum (Depending on Tunnel Type) */
-#define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK     0x1
-/* Recalculate Tunnel UDP/GRE Checksum (Depending on Tunnel Type). In case of
- * GRE tunnel, this flag means GRE CSO, and in this case GRE checksum field
- * Must be present.
+/* Recalculate Tunnel UDP/GRE Checksum (Depending on Tunnel Type). In case of GRE tunnel, this flag
+ * means GRE CSO, and in this case the GRE checksum field must be present.
  */
 #define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK     0x1
 #define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_SHIFT    7
@@ -198,10 +195,9 @@ struct eth_tx_1st_bd_flags {
  * The parsing information data for the first tx bd of a given packet.
  */
 struct eth_tx_data_1st_bd {
-/* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */
-	__le16 vlan;
-/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least
- * 3 in LSO (or Tunnel with IPv6+ext) packet.
+	__le16 vlan /* VLAN tag to insert to packet (if enabled by vlan_insertion flag). */;
+/* Number of BDs in packet. Should be at least 1 in non-LSO packet and at least 3 in LSO (or Tunnel
+ * with IPv6+ext) packet.
  */
 	u8 nbds;
 	struct eth_tx_1st_bd_flags bd_flags;
@@ -226,47 +222,47 @@ struct eth_tx_data_2nd_bd {
 /* For Tunnel header with IPv6 ext. - Inner L2 Header Size (in 2-byte WORDs) */
 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_MASK  0xF
 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_SHIFT 0
-/* For Tunnel header with IPv6 ext. - Inner L2 Header MAC DA Type
- * (use enum eth_addr_type)
- */
+/* For Tunnel header with IPv6 ext. - Inner L2 Header MAC DA Type (use enum eth_addr_type) */
 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_ETH_TYPE_MASK       0x3
 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_ETH_TYPE_SHIFT      4
-/* Destination port mode. (use enum dest_port_mode) */
-#define ETH_TX_DATA_2ND_BD_DEST_PORT_MODE_MASK            0x3
-#define ETH_TX_DATA_2ND_BD_DEST_PORT_MODE_SHIFT           6
+/* Destination port mode, applicable only when tx_dst_port_mode_config ==
+ * ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD. (use enum dst_port_mode)
+ */
+#define ETH_TX_DATA_2ND_BD_DST_PORT_MODE_MASK             0x3
+#define ETH_TX_DATA_2ND_BD_DST_PORT_MODE_SHIFT            6
 /* Should be 0 in all the BDs, except the first one. (for debug) */
 #define ETH_TX_DATA_2ND_BD_START_BD_MASK                  0x1
 #define ETH_TX_DATA_2ND_BD_START_BD_SHIFT                 8
 /* For Tunnel header with IPv6 ext. - Tunnel Type (use enum eth_tx_tunn_type) */
 #define ETH_TX_DATA_2ND_BD_TUNN_TYPE_MASK                 0x3
 #define ETH_TX_DATA_2ND_BD_TUNN_TYPE_SHIFT                9
-/* For LSO / Tunnel header with IPv6+ext - Set if inner header is IPv6 */
+/* Relevant only for LSO. When Tunnel is with IPv6+ext - Set if inner header is IPv6 */
 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_MASK           0x1
 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_SHIFT          11
-/* In tunneling mode - Set to 1 when the Inner header is IPv6 with extension.
- * Otherwise set to 1 if the header is IPv6 with extension.
+/* In tunneling mode - Set to 1 when the Inner header is IPv6 with extension. Otherwise set to 1 if
+ * the header is IPv6 with extension.
  */
 #define ETH_TX_DATA_2ND_BD_IPV6_EXT_MASK                  0x1
 #define ETH_TX_DATA_2ND_BD_IPV6_EXT_SHIFT                 12
-/* Set to 1 if Tunnel (outer = encapsulating) header has IPv6 ext. (Note: 3rd BD
- * is required, hence EDPM does not support Tunnel [outer] header with Ipv6Ext)
+/* Set to 1 if Tunnel (outer = encapsulating) header has IPv6 ext. (Note: 3rd BD is required, hence
+ * EDPM does not support Tunnel [outer] header with Ipv6Ext)
  */
 #define ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_MASK             0x1
 #define ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_SHIFT            13
-/* Set if (inner) L4 protocol is UDP. (Required when IPv6+ext (or tunnel with
- * inner or outer Ipv6+ext) and l4_csum is set)
+/* Set if (inner) L4 protocol is UDP. (Required when IPv6+ext (or tunnel with inner or outer
+ * Ipv6+ext) and l4_csum is set)
  */
 #define ETH_TX_DATA_2ND_BD_L4_UDP_MASK                    0x1
 #define ETH_TX_DATA_2ND_BD_L4_UDP_SHIFT                   14
-/* The pseudo header checksum type in the L4 checksum field. Required when
- * IPv6+ext and l4_csum is set. (use enum eth_l4_pseudo_checksum_mode)
+/* The pseudo header checksum type in the L4 checksum field. Required when IPv6+ext and l4_csum is
+ * set. (use enum eth_l4_pseudo_checksum_mode)
  */
 #define ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_MASK       0x1
 #define ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_SHIFT      15
 	__le16 bitfields2;
-/* For inner/outer header IPv6+ext - (inner) L4 header offset (in 2-byte WORDs).
- * For regular packet - offset from the beginning of the packet. For tunneled
- * packet - offset from the beginning of the inner header
+/* For inner/outer header IPv6+ext - (inner) L4 header offset (in 2-byte WORDs). For regular packet
+ * - offset from the beginning of the packet. For tunneled packet - offset from the beginning of the
+ * inner header
  */
 #define ETH_TX_DATA_2ND_BD_L4_HDR_START_OFFSET_W_MASK     0x1FFF
 #define ETH_TX_DATA_2ND_BD_L4_HDR_START_OFFSET_W_SHIFT    0
@@ -278,40 +274,27 @@ struct eth_tx_data_2nd_bd {
  * Firmware data for L2-EDPM packet.
  */
 struct eth_edpm_fw_data {
-/* Parsing information data from the 1st BD. */
-	struct eth_tx_data_1st_bd data_1st_bd;
-/* Parsing information data from the 2nd BD. */
-	struct eth_tx_data_2nd_bd data_2nd_bd;
+	struct eth_tx_data_1st_bd data_1st_bd /* Parsing information data from the 1st BD. */;
+	struct eth_tx_data_2nd_bd data_2nd_bd /* Parsing information data from the 2nd BD. */;
 	__le32 reserved;
 };
 
 
-/*
- * FW debug.
- */
-struct eth_fast_path_cqe_fw_debug {
-	__le16 reserved2 /* FW reserved. */;
-};
-
-
 /*
  * tunneling parsing flags
  */
 struct eth_tunnel_parsing_flags {
 	u8 flags;
-/* 0 - no tunneling, 1 - GENEVE, 2 - GRE, 3 - VXLAN
- * (use enum eth_rx_tunn_type)
- */
+/* 0 - no tunneling, 1 - GENEVE, 2 - GRE, 3 - VXLAN (use enum eth_rx_tunn_type) */
 #define ETH_TUNNEL_PARSING_FLAGS_TYPE_MASK              0x3
 #define ETH_TUNNEL_PARSING_FLAGS_TYPE_SHIFT             0
-/*  If it s not an encapsulated packet then put 0x0. If it s an encapsulated
- *  packet but the tenant-id doesn t exist then put 0x0. Else put 0x1
- *
+/* If it's not an encapsulated packet then put 0x0. If it's an encapsulated packet but the
+ * tenant-id doesn't exist then put 0x0. Else put 0x1
  */
 #define ETH_TUNNEL_PARSING_FLAGS_TENNANT_ID_EXIST_MASK  0x1
 #define ETH_TUNNEL_PARSING_FLAGS_TENNANT_ID_EXIST_SHIFT 2
-/* Type of the next header above the tunneling: 0 - unknown, 1 - L2, 2 - Ipv4,
- * 3 - IPv6 (use enum tunnel_next_protocol)
+/* Type of the next header above the tunneling: 0 - unknown, 1 - L2, 2 - Ipv4, 3 - IPv6 (use enum
+ * tunnel_next_protocol)
  */
 #define ETH_TUNNEL_PARSING_FLAGS_NEXT_PROTOCOL_MASK     0x3
 #define ETH_TUNNEL_PARSING_FLAGS_NEXT_PROTOCOL_SHIFT    3
@@ -341,30 +324,27 @@ struct eth_pmd_flow_flags {
  * Regular ETH Rx FP CQE.
  */
 struct eth_fast_path_rx_reg_cqe {
-	u8 type /* CQE type */;
+	u8 type /* CQE type (use enum eth_rx_cqe_type) */;
 	u8 bitfields;
 /* Type of calculated RSS hash (use enum rss_hash_type) */
 #define ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE_MASK  0x7
 #define ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE_SHIFT 0
-/* Traffic Class */
-#define ETH_FAST_PATH_RX_REG_CQE_TC_MASK             0xF
+#define ETH_FAST_PATH_RX_REG_CQE_TC_MASK             0xF /* Traffic Class */
 #define ETH_FAST_PATH_RX_REG_CQE_TC_SHIFT            3
 #define ETH_FAST_PATH_RX_REG_CQE_RESERVED0_MASK      0x1
 #define ETH_FAST_PATH_RX_REG_CQE_RESERVED0_SHIFT     7
 	__le16 pkt_len /* Total packet length (from the parser) */;
-/* Parsing and error flags from the parser */
-	struct parsing_and_err_flags pars_flags;
+	struct parsing_and_err_flags pars_flags /* Parsing and error flags from the parser */;
 	__le16 vlan_tag /* 802.1q VLAN tag */;
 	__le32 rss_hash /* RSS hash result */;
 	__le16 len_on_first_bd /* Number of bytes placed on first BD */;
 	u8 placement_offset /* Offset of placement from BD start */;
-/* Tunnel Parsing Flags */
-	struct eth_tunnel_parsing_flags tunnel_pars_flags;
+	struct eth_tunnel_parsing_flags tunnel_pars_flags /* Tunnel Parsing Flags */;
 	u8 bd_num /* Number of BDs, used for packet */;
 	u8 reserved;
 	__le16 reserved2;
-/* aRFS flow ID or Resource ID - Indicates a Vport ID from which packet was
- * sent, used when sending from VF to VF Representor.
+ /* aRFS flow ID or Resource ID - Indicates a Vport ID from which packet was
+  * sent, used when sending from VF to VF Representor.
  */
 	__le32 flow_id_or_resource_id;
 	u8 reserved1[7];
@@ -376,10 +356,9 @@ struct eth_fast_path_rx_reg_cqe {
  * TPA-continue ETH Rx FP CQE.
  */
 struct eth_fast_path_rx_tpa_cont_cqe {
-	u8 type /* CQE type */;
+	u8 type /* CQE type (use enum eth_rx_cqe_type) */;
 	u8 tpa_agg_index /* TPA aggregation index */;
-/* List of the segment sizes */
-	__le16 len_list[ETH_TPA_CQE_CONT_LEN_LIST_SIZE];
+	__le16 len_list[ETH_TPA_CQE_CONT_LEN_LIST_SIZE] /* List of the segment sizes */;
 	u8 reserved;
 	u8 reserved1 /* FW reserved. */;
 	__le16 reserved2[ETH_TPA_CQE_CONT_LEN_LIST_SIZE] /* FW reserved. */;
@@ -392,16 +371,14 @@ struct eth_fast_path_rx_tpa_cont_cqe {
  * TPA-end ETH Rx FP CQE .
  */
 struct eth_fast_path_rx_tpa_end_cqe {
-	u8 type /* CQE type */;
+	u8 type /* CQE type (use enum eth_rx_cqe_type) */;
 	u8 tpa_agg_index /* TPA aggregation index */;
 	__le16 total_packet_len /* Total aggregated packet length */;
 	u8 num_of_bds /* Total number of BDs comprising the packet */;
-/* Aggregation end reason. Use enum eth_tpa_end_reason */
-	u8 end_reason;
+	u8 end_reason /* Aggregation end reason. Use enum eth_tpa_end_reason */;
 	__le16 num_of_coalesced_segs /* Number of coalesced TCP segments */;
 	__le32 ts_delta /* TCP timestamp delta */;
-/* List of the segment sizes */
-	__le16 len_list[ETH_TPA_CQE_END_LEN_LIST_SIZE];
+	__le16 len_list[ETH_TPA_CQE_END_LEN_LIST_SIZE] /* List of the segment sizes */;
 	__le16 reserved3[ETH_TPA_CQE_END_LEN_LIST_SIZE] /* FW reserved. */;
 	__le16 reserved1;
 	u8 reserved2 /* FW reserved. */;
@@ -413,32 +390,29 @@ struct eth_fast_path_rx_tpa_end_cqe {
  * TPA-start ETH Rx FP CQE.
  */
 struct eth_fast_path_rx_tpa_start_cqe {
-	u8 type /* CQE type */;
+	u8 type /* CQE type (use enum eth_rx_cqe_type) */;
 	u8 bitfields;
 /* Type of calculated RSS hash (use enum rss_hash_type) */
 #define ETH_FAST_PATH_RX_TPA_START_CQE_RSS_HASH_TYPE_MASK  0x7
 #define ETH_FAST_PATH_RX_TPA_START_CQE_RSS_HASH_TYPE_SHIFT 0
-/* Traffic Class */
-#define ETH_FAST_PATH_RX_TPA_START_CQE_TC_MASK             0xF
+#define ETH_FAST_PATH_RX_TPA_START_CQE_TC_MASK             0xF /* Traffic Class */
 #define ETH_FAST_PATH_RX_TPA_START_CQE_TC_SHIFT            3
 #define ETH_FAST_PATH_RX_TPA_START_CQE_RESERVED0_MASK      0x1
 #define ETH_FAST_PATH_RX_TPA_START_CQE_RESERVED0_SHIFT     7
 	__le16 seg_len /* Segment length (packetLen from the parser) */;
-/* Parsing and error flags from the parser */
-	struct parsing_and_err_flags pars_flags;
+	struct parsing_and_err_flags pars_flags /* Parsing and error flags from the parser */;
 	__le16 vlan_tag /* 802.1q VLAN tag */;
 	__le32 rss_hash /* RSS hash result */;
 	__le16 len_on_first_bd /* Number of bytes placed on first BD */;
 	u8 placement_offset /* Offset of placement from BD start */;
-/* Tunnel Parsing Flags */
-	struct eth_tunnel_parsing_flags tunnel_pars_flags;
+	struct eth_tunnel_parsing_flags tunnel_pars_flags /* Tunnel Parsing Flags */;
 	u8 tpa_agg_index /* TPA aggregation index */;
 	u8 header_len /* Packet L2+L3+L4 header length */;
 /* Additional BDs length list. Used for backward compatible. */
 	__le16 bw_ext_bd_len_list[ETH_TPA_CQE_START_BW_LEN_LIST_SIZE];
 	__le16 reserved2;
-/* aRFS or GFS flow ID or Resource ID - Indicates a Vport ID from which packet
- * was sent, used when sending from VF to VF Representor
+ /* aRFS or GFS flow ID or Resource ID - Indicates a Vport ID from which packet
+  * was sent, used when sending from VF to VF Representor
  */
 	__le32 flow_id_or_resource_id;
 	u8 reserved[3];
@@ -446,15 +420,18 @@ struct eth_fast_path_rx_tpa_start_cqe {
 };
 
 
+
 /*
  * The L4 pseudo checksum mode for Ethernet
  */
 enum eth_l4_pseudo_checksum_mode {
-/* Pseudo Header checksum on packet is calculated with the correct packet length
+/* Pseudo Header checksum on packet is calculated with the correct (as defined in RFC) packet length
  * field.
  */
 	ETH_L4_PSEUDO_CSUM_CORRECT_LENGTH,
-/* Pseudo Header checksum on packet is calculated with zero length field. */
+/* (Supported only for LSO) Pseudo Header checksum on packet is calculated with zero length
+ * field.
+ */
 	ETH_L4_PSEUDO_CSUM_ZERO_LENGTH,
 	MAX_ETH_L4_PSEUDO_CHECKSUM_MODE
 };
@@ -470,7 +447,7 @@ struct eth_rx_bd {
  * regular ETH Rx SP CQE
  */
 struct eth_slow_path_rx_cqe {
-	u8 type /* CQE type */;
+	u8 type /* CQE type (use enum eth_rx_cqe_type) */;
 	u8 ramrod_cmd_id;
 	u8 error_flag;
 	u8 reserved[25];
@@ -483,14 +460,10 @@ struct eth_slow_path_rx_cqe {
  * union for all ETH Rx CQE types
  */
 union eth_rx_cqe {
-/* Regular FP CQE */
-	struct eth_fast_path_rx_reg_cqe fast_path_regular;
-/* TPA-start CQE */
-	struct eth_fast_path_rx_tpa_start_cqe fast_path_tpa_start;
-/* TPA-continue CQE */
-	struct eth_fast_path_rx_tpa_cont_cqe fast_path_tpa_cont;
-/* TPA-end CQE */
-	struct eth_fast_path_rx_tpa_end_cqe fast_path_tpa_end;
+	struct eth_fast_path_rx_reg_cqe fast_path_regular /* Regular FP CQE */;
+	struct eth_fast_path_rx_tpa_start_cqe fast_path_tpa_start /* TPA-start CQE */;
+	struct eth_fast_path_rx_tpa_cont_cqe fast_path_tpa_cont /* TPA-continue CQE */;
+	struct eth_fast_path_rx_tpa_end_cqe fast_path_tpa_end /* TPA-end CQE */;
 	struct eth_slow_path_rx_cqe slow_path /* SP CQE */;
 };
 
@@ -510,8 +483,7 @@ enum eth_rx_cqe_type {
 
 
 /*
- * Wrapper for PD RX CQE - used in order to cover full cache line when writing
- * CQE
+ * Wrapper for PD RX CQE - used in order to cover full cache line when writing CQE
  */
 struct eth_rx_pmd_cqe {
 	union eth_rx_cqe cqe /* CQE data itself */;
@@ -538,23 +510,21 @@ enum eth_rx_tunn_type {
 enum eth_tpa_end_reason {
 	ETH_AGG_END_UNUSED,
 	ETH_AGG_END_SP_UPDATE /* SP configuration update */,
-/* Maximum aggregation length or maximum buffer number used. */
-	ETH_AGG_END_MAX_LEN,
-/* TCP PSH flag or TCP payload length below continue threshold. */
-	ETH_AGG_END_LAST_SEG,
+	ETH_AGG_END_MAX_LEN /* Maximum aggregation length or maximum buffer number used. */,
+	ETH_AGG_END_LAST_SEG /* TCP PSH flag or TCP payload length below continue threshold. */,
 	ETH_AGG_END_TIMEOUT /* Timeout expiration. */,
-/* Packet header not consistency: different IPv4 TOS, TTL or flags, IPv6 TC,
- * Hop limit or Flow label, TCP header length or TS options. In GRO different
- * TS value, SMAC, DMAC, ackNum, windowSize or VLAN
+/* Packet header inconsistency: different IPv4 TOS, TTL or flags, IPv6 TC, Hop limit or Flow
+ * label, TCP header length or TS options. In GRO different TS value, SMAC, DMAC, ackNum, windowSize
+ * or VLAN
  */
 	ETH_AGG_END_NOT_CONSISTENT,
-/* Out of order or retransmission packet: sequence, ack or timestamp not
- * consistent with previous segment.
+/* Out of order or retransmission packet: sequence, ack or timestamp not consistent with previous
+ * segment.
  */
 	ETH_AGG_END_OUT_OF_ORDER,
-/* Next segment cant be aggregated due to LLC/SNAP, IP error, IP fragment, IPv4
- * options, IPv6 extension, IP ECN = CE, TCP errors, TCP options, zero TCP
- * payload length , TCP flags or not supported tunnel header options.
+/* Next segment can't be aggregated due to LLC/SNAP, IP error, IP fragment, IPv4 options, IPv6
+ * extension, IP ECN = CE, TCP errors, TCP options, zero TCP payload length, TCP flags or
+ * unsupported tunnel header options.
  */
 	ETH_AGG_END_NON_TPA_SEG,
 	MAX_ETH_TPA_END_REASON
@@ -568,7 +538,7 @@ enum eth_tpa_end_reason {
 struct eth_tx_1st_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_1st_bd data /* Parsing information data. */;
+	struct eth_tx_data_1st_bd data /* 1st BD Data */;
 };
 
 
@@ -579,7 +549,7 @@ struct eth_tx_1st_bd {
 struct eth_tx_2nd_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_2nd_bd data /* Parsing information data. */;
+	struct eth_tx_data_2nd_bd data /* 2nd BD Data */;
 };
 
 
@@ -589,11 +559,14 @@ struct eth_tx_2nd_bd {
 struct eth_tx_data_3rd_bd {
 	__le16 lso_mss /* For LSO packet - the MSS in bytes. */;
 	__le16 bitfields;
-/* For LSO with inner/outer IPv6+ext - TCP header length (in 4-byte WORDs) */
+/* Inner (for tunneled packets, and single for non-tunneled packets) TCP header length (in 4 byte
+ * words). This value is required in LSO mode, when the inner/outer header is Ipv6 with extension
+ * headers.
+ */
 #define ETH_TX_DATA_3RD_BD_TCP_HDR_LEN_DW_MASK  0xF
 #define ETH_TX_DATA_3RD_BD_TCP_HDR_LEN_DW_SHIFT 0
-/* LSO - number of BDs which contain headers. value should be in range
- * (1..ETH_TX_MAX_LSO_HDR_NBD).
+/* LSO - number of BDs which contain headers. Value should be in range
+ * (1..ETH_TX_MAX_LSO_HDR_NBD).
  */
 #define ETH_TX_DATA_3RD_BD_HDR_NBD_MASK         0xF
 #define ETH_TX_DATA_3RD_BD_HDR_NBD_SHIFT        4
@@ -602,9 +575,9 @@ struct eth_tx_data_3rd_bd {
 #define ETH_TX_DATA_3RD_BD_START_BD_SHIFT       8
 #define ETH_TX_DATA_3RD_BD_RESERVED0_MASK       0x7F
 #define ETH_TX_DATA_3RD_BD_RESERVED0_SHIFT      9
-/* For tunnel with IPv6+ext - Pointer to tunnel L4 Header (in 2-byte WORDs) */
+/* For tunnel with IPv6+ext - Pointer to the tunnel L4 Header (in 2-byte WORDs) */
 	u8 tunn_l4_hdr_start_offset_w;
-/* For tunnel with IPv6+ext - Total size of Tunnel Header (in 2-byte WORDs) */
+/* For tunnel with IPv6+ext - Total size of the Tunnel Header (in 2-byte WORDs) */
 	u8 tunn_hdr_size_w;
 };
 
@@ -614,7 +587,7 @@ struct eth_tx_data_3rd_bd {
 struct eth_tx_3rd_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_3rd_bd data /* Parsing information data. */;
+	struct eth_tx_data_3rd_bd data /* 3rd BD Data */;
 };
 
 
@@ -622,10 +595,9 @@ struct eth_tx_3rd_bd {
 * The parsing information data for the fourth tx bd of a given packet.
  */
 struct eth_tx_data_4th_bd {
-/* Destination Vport ID to forward the packet, applicable only when
- * tx_dst_port_mode_config == ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD and
- * dst_port_mode == DST_PORT_LOOPBACK, used to route the packet from VF
- * Representor to VF
+/* Destination Vport ID to forward the packet, applicable only when tx_dst_port_mode_config ==
+ * ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD and dst_port_mode== DST_PORT_LOOPBACK, used to route
+ * the packet from VF Representor to VF
  */
 	u8 dst_vport_id;
 	u8 reserved4;
@@ -644,12 +616,12 @@ struct eth_tx_data_4th_bd {
 };
 
 /*
- * The forth tx bd of a given packet
+ * The fourth tx bd of a given packet
  */
 struct eth_tx_4th_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_4th_bd data /* Parsing information data. */;
+	struct eth_tx_data_4th_bd data /* 4th BD Data */;
 };
 
 
@@ -670,19 +642,18 @@ struct eth_tx_data_bd {
 };
 
 /*
- * The common regular TX BD ring element
+ * The common regular bd
  */
 struct eth_tx_bd {
 	struct regpair addr /* Single continuous buffer */;
 	__le16 nbytes /* Number of bytes in this BD. */;
-	struct eth_tx_data_bd data /* Complementary information. */;
+	struct eth_tx_data_bd data /* Common BD Data */;
 };
 
 
 union eth_tx_bd_types {
 	struct eth_tx_1st_bd first_bd /* The first tx bd of a given packet */;
-/* The second tx bd of a given packet */
-	struct eth_tx_2nd_bd second_bd;
+	struct eth_tx_2nd_bd second_bd /* The second tx bd of a given packet */;
 	struct eth_tx_3rd_bd third_bd /* The third tx bd of a given packet */;
 	struct eth_tx_4th_bd fourth_bd /* The fourth tx bd of a given packet */;
 	struct eth_tx_bd reg_bd /* The common regular bd */;
@@ -693,6 +664,7 @@ union eth_tx_bd_types {
 
 
 
+
 /*
  * Eth Tx Tunnel Type
  */
@@ -708,7 +680,7 @@ enum eth_tx_tunn_type {
 /*
  * Mstorm Queue Zone
  */
-struct mstorm_eth_queue_zone {
+struct mstorm_eth_queue_zone_e5 {
 	struct eth_rx_prod_data rx_producers /* ETH Rx producers data */;
 	__le32 reserved[3];
 };
@@ -718,8 +690,7 @@ struct mstorm_eth_queue_zone {
  * Ystorm Queue Zone
  */
 struct xstorm_eth_queue_zone {
-/* Tx interrupt coalescing TimeSet */
-	struct coalescing_timeset int_coalescing_timeset;
+	struct coalescing_timeset int_coalescing_timeset /* Tx interrupt coalescing TimeSet */;
 	u8 reserved[7];
 };
 
@@ -729,22 +700,33 @@ struct xstorm_eth_queue_zone {
  */
 struct eth_db_data {
 	u8 params;
-/* destination of doorbell (use enum db_dest) */
-#define ETH_DB_DATA_DEST_MASK         0x3
+#define ETH_DB_DATA_DEST_MASK         0x3 /* destination of doorbell (use enum db_dest) */
 #define ETH_DB_DATA_DEST_SHIFT        0
-/* aggregative command to CM (use enum db_agg_cmd_sel) */
-#define ETH_DB_DATA_AGG_CMD_MASK      0x3
+#define ETH_DB_DATA_AGG_CMD_MASK      0x3 /* aggregative command to CM (use enum db_agg_cmd_sel) */
 #define ETH_DB_DATA_AGG_CMD_SHIFT     2
 #define ETH_DB_DATA_BYPASS_EN_MASK    0x1 /* enable QM bypass */
 #define ETH_DB_DATA_BYPASS_EN_SHIFT   4
 #define ETH_DB_DATA_RESERVED_MASK     0x1
 #define ETH_DB_DATA_RESERVED_SHIFT    5
-/* aggregative value selection */
-#define ETH_DB_DATA_AGG_VAL_SEL_MASK  0x3
+#define ETH_DB_DATA_AGG_VAL_SEL_MASK  0x3 /* aggregative value selection */
 #define ETH_DB_DATA_AGG_VAL_SEL_SHIFT 6
-/* bit for every DQ counter flags in CM context that DQ can increment */
-	u8 agg_flags;
+	u8 agg_flags /* bit for every DQ counter flags in CM context that DQ can increment */;
 	__le16 bd_prod;
 };
 
+
+/*
+ * RSS hash type
+ */
+enum rss_hash_type {
+	RSS_HASH_TYPE_DEFAULT = 0,
+	RSS_HASH_TYPE_IPV4 = 1,
+	RSS_HASH_TYPE_TCP_IPV4 = 2,
+	RSS_HASH_TYPE_IPV6 = 3,
+	RSS_HASH_TYPE_TCP_IPV6 = 4,
+	RSS_HASH_TYPE_UDP_IPV4 = 5,
+	RSS_HASH_TYPE_UDP_IPV6 = 6,
+	MAX_RSS_HASH_TYPE
+};
+
 #endif /* __ETH_COMMON__ */
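
All the bitfields in this header follow the same MASK/SHIFT convention:
fields are packed into plain u8/__le16 words and accessed with the matching
*_MASK/*_SHIFT pair (the base driver normally wraps this in SET_FIELD /
GET_FIELD helpers). A minimal open-coded sketch for the first Tx BD flags:

	struct eth_tx_1st_bd_flags bd_flags = {0};

	/* Mark the first BD and request inner L4 checksum recalculation */
	bd_flags.bitfields |= 1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
	bd_flags.bitfields |= 1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;

	/* Read a field back */
	u8 is_lso = (bd_flags.bitfields >> ETH_TX_1ST_BD_FLAGS_LSO_SHIFT) &
		    ETH_TX_1ST_BD_FLAGS_LSO_MASK;
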
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7a..437482422 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1,30 +1,32 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 /****************************************************************************
  *
- * Name:        mcp_public.h
+ * Name:		mcp_public.h
  *
  * Description: MCP public data
  *
- * Created:     13/01/2013 yanivr
+ * Created:	13/01/2013 yanivr
  *
  ****************************************************************************/
 
 #ifndef MCP_PUBLIC_H
 #define MCP_PUBLIC_H
 
-#define VF_MAX_STATIC 192	/* In case of AH */
-#define VF_BITMAP_SIZE_IN_DWORDS        (VF_MAX_STATIC / 32)
-#define VF_BITMAP_SIZE_IN_BYTES         (VF_BITMAP_SIZE_IN_DWORDS * sizeof(u32))
+#define UEFI
+
+#define VF_MAX_STATIC			192	/* In case of AH */
+#define VF_BITMAP_SIZE_IN_DWORDS	(VF_MAX_STATIC / 32)
+#define VF_BITMAP_SIZE_IN_BYTES		(VF_BITMAP_SIZE_IN_DWORDS * sizeof(u32))
 
 /* Extended array size to support for 240 VFs 8 dwords */
-#define EXT_VF_MAX_STATIC               240
-#define EXT_VF_BITMAP_SIZE_IN_DWORDS    (((EXT_VF_MAX_STATIC - 1) / 32) + 1)
-#define EXT_VF_BITMAP_SIZE_IN_BYTES     (EXT_VF_BITMAP_SIZE_IN_DWORDS * \
+#define EXT_VF_MAX_STATIC		240
+#define EXT_VF_BITMAP_SIZE_IN_DWORDS	(((EXT_VF_MAX_STATIC - 1) / 32) + 1)
+#define EXT_VF_BITMAP_SIZE_IN_BYTES	(EXT_VF_BITMAP_SIZE_IN_DWORDS * \
 					 sizeof(u32))
 #define ADDED_VF_BITMAP_SIZE 2
 
@@ -33,7 +35,7 @@
 #define MCP_GLOB_PORT_MAX	4	/* Global */
 #define MCP_GLOB_FUNC_MAX	16	/* Global */
 
-typedef u32 offsize_t;      /* In DWORDS !!! */
+typedef u32 offsize_t;	/* In DWORDS !!! */
 /* Offset from the beginning of the MCP scratchpad */
 #define OFFSIZE_OFFSET_OFFSET	0
 #define OFFSIZE_OFFSET_MASK	0x0000ffff
@@ -43,58 +45,48 @@ typedef u32 offsize_t;      /* In DWORDS !!! */
 
 /* SECTION_OFFSET is calculating the offset in bytes out of offsize */
 #define SECTION_OFFSET(_offsize)	\
-	((((_offsize & OFFSIZE_OFFSET_MASK) >> OFFSIZE_OFFSET_OFFSET) << 2))
+	(((((_offsize) & OFFSIZE_OFFSET_MASK) >> OFFSIZE_OFFSET_OFFSET) << 2))
 
 /* SECTION_SIZE is calculating the size in bytes out of offsize */
 #define SECTION_SIZE(_offsize)		\
-	(((_offsize & OFFSIZE_SIZE_MASK) >> OFFSIZE_SIZE_OFFSET) << 2)
+	((((_offsize) & OFFSIZE_SIZE_MASK) >> OFFSIZE_SIZE_OFFSET) << 2)
 
-/* SECTION_ADDR returns the GRC addr of a section, given offsize and index
- * within section
- */
+/* SECTION_ADDR returns the GRC addr of a section, given offsize and index within section */
 #define SECTION_ADDR(_offsize, idx)	\
-	(MCP_REG_SCRATCH +		\
-	 SECTION_OFFSET(_offsize) + (SECTION_SIZE(_offsize) * idx))
+	(MCP_REG_SCRATCH + SECTION_OFFSET(_offsize) + (SECTION_SIZE(_offsize) * (idx)))
 
-/* SECTION_OFFSIZE_ADDR returns the GRC addr to the offsize address. Use
- * offsetof, since the OFFSETUP collide with the firmware definition
+/* SECTION_OFFSIZE_ADDR returns the GRC addr to the offsize address. Use offsetof, since the
+ * OFFSETUP collide with the firmware definition
  */
 #define SECTION_OFFSIZE_ADDR(_pub_base, _section) \
-	(_pub_base + offsetof(struct mcp_public_data, sections[_section]))
+	((_pub_base) + offsetof(struct mcp_public_data, sections[_section]))
 /* PHY configuration */
 struct eth_phy_cfg {
-/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
-	u32 speed;
+	u32 speed;	/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
 #define ETH_SPEED_AUTONEG   0
 #define ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
 
-	u32 pause;      /* bitmask */
+	u32 pause;	/* bitmask */
 #define ETH_PAUSE_NONE		0x0
 #define ETH_PAUSE_AUTONEG	0x1
 #define ETH_PAUSE_RX		0x2
 #define ETH_PAUSE_TX		0x4
 
-	u32 adv_speed;      /* Default should be the speed_cap_mask */
+	u32 adv_speed;	/* Default should be the speed_cap_mask */
 	u32 loopback_mode;
-#define ETH_LOOPBACK_NONE		 (0)
-/* Serdes loopback. In AH, it refers to Near End */
-#define ETH_LOOPBACK_INT_PHY		 (1)
+#define ETH_LOOPBACK_NONE	 (0)
+#define ETH_LOOPBACK_INT_PHY		 (1) /* Serdes loopback. In AH, it refers to Near End */
 #define ETH_LOOPBACK_EXT_PHY		 (2) /* External PHY Loopback */
-/* External Loopback (Require loopback plug) */
-#define ETH_LOOPBACK_EXT		 (3)
+#define ETH_LOOPBACK_EXT		 (3) /* External Loopback (Require loopback plug) */
 #define ETH_LOOPBACK_MAC		 (4) /* MAC Loopback - not supported */
-#define ETH_LOOPBACK_CNIG_AH_ONLY_0123	 (5) /* Port to itself */
-#define ETH_LOOPBACK_CNIG_AH_ONLY_2301	 (6) /* Port to Port */
+#define ETH_LOOPBACK_CNIG_AH_ONLY_0123   (5) /* Port to itself */
+#define ETH_LOOPBACK_CNIG_AH_ONLY_2301   (6) /* Port to Port */
 #define ETH_LOOPBACK_PCS_AH_ONLY	 (7) /* PCS loopback (TX to RX) */
-/* Loop RX packet from PCS to TX */
-#define ETH_LOOPBACK_REVERSE_MAC_AH_ONLY (8)
-/* Remote Serdes Loopback (RX to TX) */
-#define ETH_LOOPBACK_INT_PHY_FEA_AH_ONLY (9)
+#define ETH_LOOPBACK_REVERSE_MAC_AH_ONLY (8) /* Loop RX packet from PCS to TX */
+#define ETH_LOOPBACK_INT_PHY_FEA_AH_ONLY (9) /* Remote Serdes Loopback (RX to TX) */
 
 	u32 eee_cfg;
-/* EEE is enabled (configuration). Refer to eee_status->active for negotiated
- * status
- */
+/* EEE is enabled (configuration). Refer to eee_status->active for negotiated status */
 #define EEE_CFG_EEE_ENABLED	(1 << 0)
 #define EEE_CFG_TX_LPI		(1 << 1)
 #define EEE_CFG_ADV_SPEED_1G	(1 << 2)
@@ -107,28 +99,84 @@ struct eth_phy_cfg {
 
 	u32 link_modes; /* Additional link modes */
 #define LINK_MODE_SMARTLINQ_ENABLE		0x1  /* XXX Deprecate */
+
+	u32 fec_mode; /* Values similar to nvm cfg */
+#define FEC_FORCE_MODE_MASK			0x000000FF
+#define FEC_FORCE_MODE_OFFSET			0
+#define FEC_FORCE_MODE_NONE			0x00
+#define FEC_FORCE_MODE_FIRECODE			0x01
+#define FEC_FORCE_MODE_RS			0x02
+#define FEC_FORCE_MODE_AUTO			0x07
+#define FEC_EXTENDED_MODE_MASK		0xFFFFFF00
+#define FEC_EXTENDED_MODE_OFFSET	8
+#define ETH_EXT_FEC_NONE			0x00000100
+#define ETH_EXT_FEC_10G_NONE		0x00000200
+#define ETH_EXT_FEC_10G_BASE_R		0x00000400
+#define ETH_EXT_FEC_20G_NONE		0x00000800
+#define ETH_EXT_FEC_20G_BASE_R		0x00001000
+#define ETH_EXT_FEC_25G_NONE		0x00002000
+#define ETH_EXT_FEC_25G_BASE_R		0x00004000
+#define ETH_EXT_FEC_25G_RS528		0x00008000
+#define ETH_EXT_FEC_40G_NONE		0x00010000
+#define ETH_EXT_FEC_40G_BASE_R		0x00020000
+#define ETH_EXT_FEC_50G_NONE		0x00040000
+#define ETH_EXT_FEC_50G_BASE_R		0x00080000
+#define ETH_EXT_FEC_50G_RS528		0x00100000
+#define ETH_EXT_FEC_50G_RS544		0x00200000
+#define ETH_EXT_FEC_100G_NONE		0x00400000
+#define ETH_EXT_FEC_100G_BASE_R		0x00800000
+#define ETH_EXT_FEC_100G_RS528		0x01000000
+#define ETH_EXT_FEC_100G_RS544		0x02000000
+	u32 extended_speed;	/* Values similar to nvm cfg */
+#define ETH_EXT_SPEED_MASK			0x0000FFFF
+#define ETH_EXT_SPEED_OFFSET		0
+#define ETH_EXT_SPEED_AN			0x00000001
+#define ETH_EXT_SPEED_1G			0x00000002
+#define ETH_EXT_SPEED_10G			0x00000004
+#define ETH_EXT_SPEED_20G			0x00000008
+#define ETH_EXT_SPEED_25G			0x00000010
+#define ETH_EXT_SPEED_40G			0x00000020
+#define ETH_EXT_SPEED_50G_BASE_R	0x00000040
+#define ETH_EXT_SPEED_50G_BASE_R2	0x00000080
+#define ETH_EXT_SPEED_100G_BASE_R2	0x00000100
+#define ETH_EXT_SPEED_100G_BASE_R4	0x00000200
+#define ETH_EXT_SPEED_100G_BASE_P4	0x00000400
+#define ETH_EXT_ADV_SPEED_MASK			0xFFFF0000
+#define ETH_EXT_ADV_SPEED_OFFSET		16
+#define ETH_EXT_ADV_SPEED_RESERVED		0x00010000
+#define ETH_EXT_ADV_SPEED_1G			0x00020000
+#define ETH_EXT_ADV_SPEED_10G			0x00040000
+#define ETH_EXT_ADV_SPEED_20G			0x00080000
+#define ETH_EXT_ADV_SPEED_25G			0x00100000
+#define ETH_EXT_ADV_SPEED_40G			0x00200000
+#define ETH_EXT_ADV_SPEED_50G_BASE_R	0x00400000
+#define ETH_EXT_ADV_SPEED_50G_BASE_R2	0x00800000
+#define ETH_EXT_ADV_SPEED_100G_BASE_R2	0x01000000
+#define ETH_EXT_ADV_SPEED_100G_BASE_R4	0x02000000
+#define ETH_EXT_ADV_SPEED_100G_BASE_P4	0x04000000
 };
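
A minimal sketch of how a driver might fill the new fields (not part of the patch; 'cfg'
is a hypothetical local instance):

	struct eth_phy_cfg cfg = { 0 };

	cfg.speed = ETH_SPEED_AUTONEG;
	cfg.fec_mode = FEC_FORCE_MODE_RS;		/* forced RS FEC */
	cfg.extended_speed = ETH_EXT_SPEED_AN |		/* extended autoneg */
			     ETH_EXT_SPEED_100G_BASE_R4 |
			     ETH_EXT_ADV_SPEED_100G_BASE_R4;
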
 
 struct port_mf_cfg {
-	u32 dynamic_cfg;    /* device control channel */
-#define PORT_MF_CFG_OV_TAG_MASK              0x0000ffff
-#define PORT_MF_CFG_OV_TAG_OFFSET             0
-#define PORT_MF_CFG_OV_TAG_DEFAULT         PORT_MF_CFG_OV_TAG_MASK
-
-	u32 reserved[1];
+	u32 dynamic_cfg;	/* device control channel */
+#define PORT_MF_CFG_OV_TAG_MASK			 0x0000ffff
+#define PORT_MF_CFG_OV_TAG_OFFSET		  0
+#define PORT_MF_CFG_OV_TAG_DEFAULT	   PORT_MF_CFG_OV_TAG_MASK
+
+	u32 reserved[2];	/* This is to make sure the next field "stats" is aligned to 64-bit.
+				 * It doesn't change the existing alignment in MFW
+				 */
 };
 
 /* DO NOT add new fields in the middle
  * MUST be synced with struct pmm_stats_map
  */
 struct eth_stats {
-	u64 r64;        /* 0x00 (Offset 0x00 ) RX 64-byte frame counter*/
-	u64 r127; /* 0x01 (Offset 0x08 ) RX 65 to 127 byte frame counter*/
-	u64 r255; /* 0x02 (Offset 0x10 ) RX 128 to 255 byte frame counter*/
-	u64 r511; /* 0x03 (Offset 0x18 ) RX 256 to 511 byte frame counter*/
-	u64 r1023; /* 0x04 (Offset 0x20 ) RX 512 to 1023 byte frame counter*/
-/* 0x05 (Offset 0x28 ) RX 1024 to 1518 byte frame counter */
-	u64 r1518;
+	u64 r64;		/* 0x00 (Offset 0x00 ) RX 64-byte frame counter*/
+	u64 r127;	/* 0x01 (Offset 0x08 ) RX 65 to 127 byte frame counter*/
+	u64 r255;	/* 0x02 (Offset 0x10 ) RX 128 to 255 byte frame counter*/
+	u64 r511;	/* 0x03 (Offset 0x18 ) RX 256 to 511 byte frame counter*/
+	u64 r1023;	/* 0x04 (Offset 0x20 ) RX 512 to 1023 byte frame counter*/
+	u64 r1518;	/* 0x05 (Offset 0x28 ) RX 1024 to 1518 byte frame counter */
 	union {
 		struct { /* bb */
 /* 0x06 (Offset 0x30 ) RX 1519 to 1522 byte VLAN-tagged frame counter */
@@ -141,33 +189,40 @@ struct eth_stats {
 			u64 r9216;
 /* 0x0A (Offset 0x50 ) RX 9217 to 16383 byte frame counter */
 			u64 r16383;
+
 		} bb0;
 		struct { /* ah */
 			u64 unused1;
-/* 0x07 (Offset 0x38 ) RX 1519 to max byte frame counter*/
-			u64 r1519_to_max;
+			u64 r1519_to_max; /* 0x07 (Offset 0x38 ) RX 1519 to max byte frame counter*/
 			u64 unused2;
 			u64 unused3;
 			u64 unused4;
 		} ah0;
+		struct { /* E5 */
+/* 0x06 (Offset 0x30 ) RX 1519 to 2047 byte frame counter*/
+			u64 r2047;
+			u64 r2048_to_max; /* 0x07 (Offset 0x38 ) RX 2048 to max byte frame counter*/
+			u64 unused11;
+			u64 unused12;
+			u64 unused13;
+		} e50;
 	} u0;
-	u64 rfcs;       /* 0x0F (Offset 0x58 ) RX FCS error frame counter*/
-	u64 rxcf;       /* 0x10 (Offset 0x60 ) RX control frame counter*/
-	u64 rxpf;       /* 0x11 (Offset 0x68 ) RX pause frame counter*/
-	u64 rxpp;       /* 0x12 (Offset 0x70 ) RX PFC frame counter*/
-	u64 raln;       /* 0x16 (Offset 0x78 ) RX alignment error counter*/
-	u64 rfcr;       /* 0x19 (Offset 0x80 ) RX false carrier counter */
-	u64 rovr;       /* 0x1A (Offset 0x88 ) RX oversized frame counter*/
-	u64 rjbr;       /* 0x1B (Offset 0x90 ) RX jabber frame counter */
-	u64 rund;       /* 0x34 (Offset 0x98 ) RX undersized frame counter */
-	u64 rfrg;       /* 0x35 (Offset 0xa0 ) RX fragment counter */
-	u64 t64;        /* 0x40 (Offset 0xa8 ) TX 64-byte frame counter */
-	u64 t127; /* 0x41 (Offset 0xb0 ) TX 65 to 127 byte frame counter */
-	u64 t255; /* 0x42 (Offset 0xb8 ) TX 128 to 255 byte frame counter*/
-	u64 t511; /* 0x43 (Offset 0xc0 ) TX 256 to 511 byte frame counter*/
-	u64 t1023; /* 0x44 (Offset 0xc8 ) TX 512 to 1023 byte frame counter*/
-/* 0x45 (Offset 0xd0 ) TX 1024 to 1518 byte frame counter */
-	u64 t1518;
+	u64 rfcs;	/* 0x0F (Offset 0x58 ) RX FCS error frame counter*/
+	u64 rxcf;	/* 0x10 (Offset 0x60 ) RX control frame counter*/
+	u64 rxpf;	/* 0x11 (Offset 0x68 ) RX pause frame counter*/
+	u64 rxpp;	/* 0x12 (Offset 0x70 ) RX PFC frame counter*/
+	u64 raln;	/* 0x16 (Offset 0x78 ) RX alignment error counter*/
+	u64 rfcr;	/* 0x19 (Offset 0x80 ) RX false carrier counter */
+	u64 rovr;	/* 0x1A (Offset 0x88 ) RX oversized frame counter*/
+	u64 rjbr;	/* 0x1B (Offset 0x90 ) RX jabber frame counter */
+	u64 rund;	/* 0x34 (Offset 0x98 ) RX undersized frame counter */
+	u64 rfrg;	/* 0x35 (Offset 0xa0 ) RX fragment counter */
+	u64 t64;		/* 0x40 (Offset 0xa8 ) TX 64-byte frame counter */
+	u64 t127;	/* 0x41 (Offset 0xb0 ) TX 65 to 127 byte frame counter */
+	u64 t255;	/* 0x42 (Offset 0xb8 ) TX 128 to 255 byte frame counter*/
+	u64 t511;	/* 0x43 (Offset 0xc0 ) TX 256 to 511 byte frame counter*/
+	u64 t1023;	/* 0x44 (Offset 0xc8 ) TX 512 to 1023 byte frame counter*/
+	u64 t1518;	/* 0x45 (Offset 0xd0 ) TX 1024 to 1518 byte frame counter */
 	union {
 		struct { /* bb */
 /* 0x47 (Offset 0xd8 ) TX 1519 to 2047 byte frame counter */
@@ -186,37 +241,43 @@ struct eth_stats {
 			u64 unused7;
 			u64 unused8;
 		} ah1;
+		struct { /* E5 */
+			u64 t2047;	  /* (Offset 0xd8 ) TX 1519 to 2047 byte frame counter */
+			u64 t2048_to_max; /* (Offset 0xe0 ) TX 2048 to max byte frame counter */
+			u64 unused14;
+			u64 unused15;
+		} e5_1;
 	} u1;
-	u64 txpf;       /* 0x50 (Offset 0xf8 ) TX pause frame counter */
-	u64 txpp;       /* 0x51 (Offset 0x100) TX PFC frame counter */
-/* 0x6C (Offset 0x108) Transmit Logical Type LLFC message counter */
+	u64 txpf;	/* 0x50 (Offset 0xf8 ) TX pause frame counter */
+	u64 txpp;	/* 0x51 (Offset 0x100) TX PFC frame counter */
 	union {
 		struct { /* bb */
 /* 0x6C (Offset 0x108) Transmit Logical Type LLFC message counter */
 			u64 tlpiec;
-/* 0x6E (Offset 0x110) Transmit Total Collision Counter */
-			u64 tncl;
+			u64 tncl;	/* 0x6E (Offset 0x110) Transmit Total Collision Counter */
 		} bb2;
 		struct { /* ah */
 			u64 unused9;
 			u64 unused10;
 		} ah2;
+		struct { /* E5 */
+			u64 unused16;
+			u64 unused17;
+		} e5_2;
 	} u2;
-	u64 rbyte;      /* 0x3d (Offset 0x118) RX byte counter */
-	u64 rxuca;      /* 0x0c (Offset 0x120) RX UC frame counter */
-	u64 rxmca;      /* 0x0d (Offset 0x128) RX MC frame counter */
-	u64 rxbca;      /* 0x0e (Offset 0x130) RX BC frame counter */
+	u64 rbyte;	/* 0x3d (Offset 0x118) RX byte counter */
+	u64 rxuca;	/* 0x0c (Offset 0x120) RX UC frame counter */
+	u64 rxmca;	/* 0x0d (Offset 0x128) RX MC frame counter */
+	u64 rxbca;	/* 0x0e (Offset 0x130) RX BC frame counter */
 /* 0x22 (Offset 0x138) RX good frame (good CRC, not oversized, no ERROR) */
 	u64 rxpok;
-	u64 tbyte;      /* 0x6f (Offset 0x140) TX byte counter */
-	u64 txuca;      /* 0x4d (Offset 0x148) TX UC frame counter */
-	u64 txmca;      /* 0x4e (Offset 0x150) TX MC frame counter */
-	u64 txbca;      /* 0x4f (Offset 0x158) TX BC frame counter */
-	u64 txcf;       /* 0x54 (Offset 0x160) TX control frame counter */
-/* HSI - Cannot add more stats to this struct. If needed, then need to open new
- * struct
- */
-
+	u64 tbyte;	/* 0x6f (Offset 0x140) TX byte counter */
+	u64 txuca;	/* 0x4d (Offset 0x148) TX UC frame counter */
+	u64 txmca;	/* 0x4e (Offset 0x150) TX MC frame counter */
+	u64 txbca;	/* 0x4f (Offset 0x158) TX BC frame counter */
+	u64 txcf;	/* 0x54 (Offset 0x160) TX control frame counter */
+	/* HSI - Cannot add more stats to this struct. If needed, a new struct must be opened */
 };
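
A short sketch of how the per-chip union might be consumed (not part of the patch; the
helper and the 'is_e5' flag are hypothetical, and the BB branch is omitted):

	static u64 rx_1519_to_max(const struct eth_stats *s, bool is_e5)
	{
		if (is_e5)	/* E5 splits the range at 2047 bytes */
			return s->u0.e50.r2047 + s->u0.e50.r2048_to_max;
		return s->u0.ah0.r1519_to_max;	/* AH keeps a single counter */
	}
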
 
 struct brb_stats {
@@ -229,26 +290,25 @@ struct port_stats {
 	struct eth_stats eth;
 };
 
-/*----+------------------------------------------------------------------------
- * C  | Number and | Ports in| Ports in|2 PHY-s |# of ports|# of engines
- * h  | rate of    | team #1 | team #2 |are used|per path  | (paths)
- * i  | physical   |         |         |        |          | enabled
- * p  | ports      |         |         |        |          |
- *====+============+=========+=========+========+==========+===================
- * BB | 1x100G     | This is special mode, where there are actually 2 HW func
- * BB | 2x10/20Gbps| 0,1     | NA      |  No    | 1        | 1
- * BB | 2x40 Gbps  | 0,1     | NA      |  Yes   | 1        | 1
- * BB | 2x50Gbps   | 0,1     | NA      |  No    | 1        | 1
- * BB | 4x10Gbps   | 0,2     | 1,3     |  No    | 1/2      | 1,2 (2 is optional)
- * BB | 4x10Gbps   | 0,1     | 2,3     |  No    | 1/2      | 1,2 (2 is optional)
- * BB | 4x10Gbps   | 0,3     | 1,2     |  No    | 1/2      | 1,2 (2 is optional)
- * BB | 4x10Gbps   | 0,1,2,3 | NA      |  No    | 1        | 1
- * AH | 2x10/20Gbps| 0,1     | NA      |  NA    | 1        | NA
- * AH | 4x10Gbps   | 0,1     | 2,3     |  NA    | 2        | NA
- * AH | 4x10Gbps   | 0,2     | 1,3     |  NA    | 2        | NA
- * AH | 4x10Gbps   | 0,3     | 1,2     |  NA    | 2        | NA
- * AH | 4x10Gbps   | 0,1,2,3 | NA      |  NA    | 1        | NA
- *====+============+=========+=========+========+==========+===================
+/*------+-------------------+---------+---------+--------+----------+---------------------
+ * Chip | Number and        | Ports in| Ports in|2 PHY-s |# of ports|# of engines
+ *      | rate of physical  | team #1 | team #2 |are used|per path  | (paths) enabled
+ *      | ports             |         |         |        |          |
+ *======+===================+=========+=========+========+==========+=====================
+ * BB   | 1x100G            | This is a special mode, where there are actually 2 HW func
+ * BB   | 2x10/20Gbps       | 0,1     | NA      |  No    | 1        | 1
+ * BB   | 2x40 Gbps         | 0,1     | NA      |  Yes   | 1        | 1
+ * BB   | 2x50Gbps          | 0,1     | NA      |  No    | 1        | 1
+ * BB   | 4x10Gbps          | 0,2     | 1,3     |  No    | 1/2      | 1,2 (2 is optional)
+ * BB   | 4x10Gbps          | 0,1     | 2,3     |  No    | 1/2      | 1,2 (2 is optional)
+ * BB   | 4x10Gbps          | 0,3     | 1,2     |  No    | 1/2      | 1,2 (2 is optional)
+ * BB   | 4x10Gbps          | 0,1,2,3 | NA      |  No    | 1        | 1
+ * AH   | 2x10/20Gbps       | 0,1     | NA      |  NA    | 1        | NA
+ * AH   | 4x10Gbps          | 0,1     | 2,3     |  NA    | 2        | NA
+ * AH   | 4x10Gbps          | 0,2     | 1,3     |  NA    | 2        | NA
+ * AH   | 4x10Gbps          | 0,3     | 1,2     |  NA    | 2        | NA
+ * AH   | 4x10Gbps          | 0,1,2,3 | NA      |  NA    | 1        | NA
+ *======+===================+=========+=========+========+==========+=====================
  */
 
 #define CMT_TEAM0 0
@@ -257,24 +317,24 @@ struct port_stats {
 
 struct couple_mode_teaming {
 	u8 port_cmt[MCP_GLOB_PORT_MAX];
-#define PORT_CMT_IN_TEAM            (1 << 0)
+#define PORT_CMT_IN_TEAM			(1 << 0)
 
-#define PORT_CMT_PORT_ROLE          (1 << 1)
-#define PORT_CMT_PORT_INACTIVE      (0 << 1)
-#define PORT_CMT_PORT_ACTIVE        (1 << 1)
+#define PORT_CMT_PORT_ROLE		(1 << 1)
+#define PORT_CMT_PORT_INACTIVE	(0 << 1)
+#define PORT_CMT_PORT_ACTIVE		(1 << 1)
 
-#define PORT_CMT_TEAM_MASK          (1 << 2)
-#define PORT_CMT_TEAM0              (0 << 2)
-#define PORT_CMT_TEAM1              (1 << 2)
+#define PORT_CMT_TEAM_MASK		(1 << 2)
+#define PORT_CMT_TEAM0			(0 << 2)
+#define PORT_CMT_TEAM1			(1 << 2)
 };
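
Decoding one port's CMT byte might look like this (a sketch, not part of the patch;
'cmt' and 'port' are hypothetical):

	u8 v = cmt->port_cmt[port];
	bool in_team = v & PORT_CMT_IN_TEAM;
	bool active  = (v & PORT_CMT_PORT_ROLE) == PORT_CMT_PORT_ACTIVE;
	int  team    = (v & PORT_CMT_TEAM_MASK) >> 2;	/* 0 or 1 */
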
 
 /**************************************
  *     LLDP and DCBX HSI structures
  **************************************/
 #define LLDP_CHASSIS_ID_STAT_LEN	4
-#define LLDP_PORT_ID_STAT_LEN		4
+#define LLDP_PORT_ID_STAT_LEN	4
 #define DCBX_MAX_APP_PROTOCOL		32
-#define MAX_SYSTEM_LLDP_TLV_DATA	32  /* In dwords. 128 in bytes*/
+#define MAX_SYSTEM_LLDP_TLV_DATA		32  /* In dwords. 128 in bytes*/
 #define MAX_TLV_BUFFER			128 /* In dwords. 512 in bytes*/
 typedef enum _lldp_agent_e {
 	LLDP_NEAREST_BRIDGE = 0,
@@ -285,23 +345,23 @@ typedef enum _lldp_agent_e {
 
 struct lldp_config_params_s {
 	u32 config;
-#define LLDP_CONFIG_TX_INTERVAL_MASK        0x000000ff
-#define LLDP_CONFIG_TX_INTERVAL_OFFSET       0
-#define LLDP_CONFIG_HOLD_MASK               0x00000f00
-#define LLDP_CONFIG_HOLD_OFFSET              8
-#define LLDP_CONFIG_MAX_CREDIT_MASK         0x0000f000
-#define LLDP_CONFIG_MAX_CREDIT_OFFSET        12
-#define LLDP_CONFIG_ENABLE_RX_MASK          0x40000000
-#define LLDP_CONFIG_ENABLE_RX_OFFSET         30
-#define LLDP_CONFIG_ENABLE_TX_MASK          0x80000000
-#define LLDP_CONFIG_ENABLE_TX_OFFSET         31
+#define LLDP_CONFIG_TX_INTERVAL_MASK		0x000000ff
+#define LLDP_CONFIG_TX_INTERVAL_OFFSET	 0
+#define LLDP_CONFIG_HOLD_MASK			0x00000f00
+#define LLDP_CONFIG_HOLD_OFFSET			 8
+#define LLDP_CONFIG_MAX_CREDIT_MASK		0x0000f000
+#define LLDP_CONFIG_MAX_CREDIT_OFFSET	 12
+#define LLDP_CONFIG_ENABLE_RX_MASK		0x40000000
+#define LLDP_CONFIG_ENABLE_RX_OFFSET		 30
+#define LLDP_CONFIG_ENABLE_TX_MASK		0x80000000
+#define LLDP_CONFIG_ENABLE_TX_OFFSET		 31
 	/* Holds local Chassis ID TLV header, subtype and 9B of payload.
 	 * If first byte is 0, then we will use default chassis ID
 	 */
 	u32 local_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
 	/* Holds local Port ID TLV header, subtype and 9B of payload.
 	 * If first byte is 0, then we will use default port ID
-	*/
+	 */
 	u32 local_port_id[LLDP_PORT_ID_STAT_LEN];
 };
 
@@ -317,40 +377,29 @@ struct lldp_status_params_s {
 
 struct dcbx_ets_feature {
 	u32 flags;
-#define DCBX_ETS_ENABLED_MASK                   0x00000001
-#define DCBX_ETS_ENABLED_OFFSET                  0
-#define DCBX_ETS_WILLING_MASK                   0x00000002
-#define DCBX_ETS_WILLING_OFFSET                  1
-#define DCBX_ETS_ERROR_MASK                     0x00000004
-#define DCBX_ETS_ERROR_OFFSET                    2
-#define DCBX_ETS_CBS_MASK                       0x00000008
-#define DCBX_ETS_CBS_OFFSET                      3
-#define DCBX_ETS_MAX_TCS_MASK                   0x000000f0
-#define DCBX_ETS_MAX_TCS_OFFSET                  4
-#define DCBX_OOO_TC_MASK                        0x00000f00
-#define DCBX_OOO_TC_OFFSET                       8
-/* Entries in tc table are orginized that the left most is pri 0, right most is
- * prio 7
- */
-
+#define DCBX_ETS_ENABLED_MASK				0x00000001
+#define DCBX_ETS_ENABLED_OFFSET				 0
+#define DCBX_ETS_WILLING_MASK				0x00000002
+#define DCBX_ETS_WILLING_OFFSET				 1
+#define DCBX_ETS_ERROR_MASK					0x00000004
+#define DCBX_ETS_ERROR_OFFSET				 2
+#define DCBX_ETS_CBS_MASK					0x00000008
+#define DCBX_ETS_CBS_OFFSET					 3
+#define DCBX_ETS_MAX_TCS_MASK				0x000000f0
+#define DCBX_ETS_MAX_TCS_OFFSET				 4
+#define DCBX_OOO_TC_MASK			0x00000f00
+#define DCBX_OOO_TC_OFFSET				8
+	/* Entries in the tc table are organized such that the leftmost is pri 0, rightmost is prio 7 */
 	u32  pri_tc_tbl[1];
-/* Fixed TCP OOO TC usage is deprecated and used only for driver backward
- * compatibility
- */
+/* Fixed TCP OOO TC usage is deprecated and used only for driver backward compatibility */
 #define DCBX_TCP_OOO_TC				(4)
 #define DCBX_TCP_OOO_K2_4PORT_TC		(3)
 
 #define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET		(DCBX_TCP_OOO_TC + 1)
 #define DCBX_CEE_STRICT_PRIORITY		0xf
-/* Entries in tc table are orginized that the left most is pri 0, right most is
- * prio 7
- */
-
+	/* Entries in the tc table are organized such that the leftmost is pri 0, rightmost is prio 7 */
 	u32  tc_bw_tbl[2];
-/* Entries in tc table are orginized that the left most is pri 0, right most is
- * prio 7
- */
-
+	/* Entries in the tc table are organized such that the leftmost is pri 0, rightmost is prio 7 */
 	u32  tc_tsa_tbl[2];
 #define DCBX_ETS_TSA_STRICT			0
 #define DCBX_ETS_TSA_CBS			1
@@ -359,24 +408,24 @@ struct dcbx_ets_feature {
 
 struct dcbx_app_priority_entry {
 	u32 entry;
-#define DCBX_APP_PRI_MAP_MASK       0x000000ff
-#define DCBX_APP_PRI_MAP_OFFSET      0
-#define DCBX_APP_PRI_0              0x01
-#define DCBX_APP_PRI_1              0x02
-#define DCBX_APP_PRI_2              0x04
-#define DCBX_APP_PRI_3              0x08
-#define DCBX_APP_PRI_4              0x10
-#define DCBX_APP_PRI_5              0x20
-#define DCBX_APP_PRI_6              0x40
-#define DCBX_APP_PRI_7              0x80
-#define DCBX_APP_SF_MASK            0x00000300
-#define DCBX_APP_SF_OFFSET           8
-#define DCBX_APP_SF_ETHTYPE         0
-#define DCBX_APP_SF_PORT            1
-#define DCBX_APP_SF_IEEE_MASK       0x0000f000
-#define DCBX_APP_SF_IEEE_OFFSET      12
+#define DCBX_APP_PRI_MAP_MASK	0x000000ff
+#define DCBX_APP_PRI_MAP_OFFSET	 0
+#define DCBX_APP_PRI_0			0x01
+#define DCBX_APP_PRI_1			0x02
+#define DCBX_APP_PRI_2			0x04
+#define DCBX_APP_PRI_3			0x08
+#define DCBX_APP_PRI_4			0x10
+#define DCBX_APP_PRI_5			0x20
+#define DCBX_APP_PRI_6			0x40
+#define DCBX_APP_PRI_7			0x80
+#define DCBX_APP_SF_MASK			0x00000300
+#define DCBX_APP_SF_OFFSET		 8
+#define DCBX_APP_SF_ETHTYPE		0
+#define DCBX_APP_SF_PORT			1
+#define DCBX_APP_SF_IEEE_MASK	0x0000f000
+#define DCBX_APP_SF_IEEE_OFFSET	 12
 #define DCBX_APP_SF_IEEE_RESERVED   0
-#define DCBX_APP_SF_IEEE_ETHTYPE    1
+#define DCBX_APP_SF_IEEE_ETHTYPE	1
 #define DCBX_APP_SF_IEEE_TCP_PORT   2
 #define DCBX_APP_SF_IEEE_UDP_PORT   3
 #define DCBX_APP_SF_IEEE_TCP_UDP_PORT 4
@@ -389,20 +438,20 @@ struct dcbx_app_priority_entry {
 /* FW structure in BE */
 struct dcbx_app_priority_feature {
 	u32 flags;
-#define DCBX_APP_ENABLED_MASK           0x00000001
-#define DCBX_APP_ENABLED_OFFSET          0
-#define DCBX_APP_WILLING_MASK           0x00000002
-#define DCBX_APP_WILLING_OFFSET          1
-#define DCBX_APP_ERROR_MASK             0x00000004
-#define DCBX_APP_ERROR_OFFSET            2
+#define DCBX_APP_ENABLED_MASK		0x00000001
+#define DCBX_APP_ENABLED_OFFSET		 0
+#define DCBX_APP_WILLING_MASK		0x00000002
+#define DCBX_APP_WILLING_OFFSET		 1
+#define DCBX_APP_ERROR_MASK			0x00000004
+#define DCBX_APP_ERROR_OFFSET		 2
 	/* Not in use
-	#define DCBX_APP_DEFAULT_PRI_MASK       0x00000f00
-	#define DCBX_APP_DEFAULT_PRI_OFFSET      8
+#define DCBX_APP_DEFAULT_PRI_MASK	0x00000f00
+#define DCBX_APP_DEFAULT_PRI_OFFSET	 8
 	*/
-#define DCBX_APP_MAX_TCS_MASK           0x0000f000
-#define DCBX_APP_MAX_TCS_OFFSET          12
-#define DCBX_APP_NUM_ENTRIES_MASK       0x00ff0000
-#define DCBX_APP_NUM_ENTRIES_OFFSET      16
+#define DCBX_APP_MAX_TCS_MASK		0x0000f000
+#define DCBX_APP_MAX_TCS_OFFSET		 12
+#define DCBX_APP_NUM_ENTRIES_MASK	0x00ff0000
+#define DCBX_APP_NUM_ENTRIES_OFFSET	 16
 	struct dcbx_app_priority_entry  app_pri_tbl[DCBX_MAX_APP_PROTOCOL];
 };
 
@@ -412,29 +461,29 @@ struct dcbx_features {
 	struct dcbx_ets_feature ets;
 	/* PFC feature */
 	u32 pfc;
-#define DCBX_PFC_PRI_EN_BITMAP_MASK             0x000000ff
-#define DCBX_PFC_PRI_EN_BITMAP_OFFSET            0
-#define DCBX_PFC_PRI_EN_BITMAP_PRI_0            0x01
-#define DCBX_PFC_PRI_EN_BITMAP_PRI_1            0x02
-#define DCBX_PFC_PRI_EN_BITMAP_PRI_2            0x04
-#define DCBX_PFC_PRI_EN_BITMAP_PRI_3            0x08
-#define DCBX_PFC_PRI_EN_BITMAP_PRI_4            0x10
-#define DCBX_PFC_PRI_EN_BITMAP_PRI_5            0x20
-#define DCBX_PFC_PRI_EN_BITMAP_PRI_6            0x40
-#define DCBX_PFC_PRI_EN_BITMAP_PRI_7            0x80
-
-#define DCBX_PFC_FLAGS_MASK                     0x0000ff00
-#define DCBX_PFC_FLAGS_OFFSET                    8
-#define DCBX_PFC_CAPS_MASK                      0x00000f00
-#define DCBX_PFC_CAPS_OFFSET                     8
-#define DCBX_PFC_MBC_MASK                       0x00004000
-#define DCBX_PFC_MBC_OFFSET                      14
-#define DCBX_PFC_WILLING_MASK                   0x00008000
-#define DCBX_PFC_WILLING_OFFSET                  15
-#define DCBX_PFC_ENABLED_MASK                   0x00010000
-#define DCBX_PFC_ENABLED_OFFSET                  16
-#define DCBX_PFC_ERROR_MASK                     0x00020000
-#define DCBX_PFC_ERROR_OFFSET                    17
+#define DCBX_PFC_PRI_EN_BITMAP_MASK			0x000000ff
+#define DCBX_PFC_PRI_EN_BITMAP_OFFSET		 0
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_0			0x01
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_1			0x02
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_2			0x04
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_3			0x08
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_4			0x10
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_5			0x20
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_6			0x40
+#define DCBX_PFC_PRI_EN_BITMAP_PRI_7			0x80
+
+#define DCBX_PFC_FLAGS_MASK					0x0000ff00
+#define DCBX_PFC_FLAGS_OFFSET				 8
+#define DCBX_PFC_CAPS_MASK					0x00000f00
+#define DCBX_PFC_CAPS_OFFSET					 8
+#define DCBX_PFC_MBC_MASK					0x00004000
+#define DCBX_PFC_MBC_OFFSET					 14
+#define DCBX_PFC_WILLING_MASK				0x00008000
+#define DCBX_PFC_WILLING_OFFSET				 15
+#define DCBX_PFC_ENABLED_MASK				0x00010000
+#define DCBX_PFC_ENABLED_OFFSET				 16
+#define DCBX_PFC_ERROR_MASK					0x00020000
+#define DCBX_PFC_ERROR_OFFSET				 17
 
 	/* APP feature */
 	struct dcbx_app_priority_feature app;
@@ -442,14 +491,13 @@ struct dcbx_features {
 
 struct dcbx_local_params {
 	u32 config;
-#define DCBX_CONFIG_VERSION_MASK            0x00000007
-#define DCBX_CONFIG_VERSION_OFFSET           0
-#define DCBX_CONFIG_VERSION_DISABLED        0
-#define DCBX_CONFIG_VERSION_IEEE            1
-#define DCBX_CONFIG_VERSION_CEE             2
-#define DCBX_CONFIG_VERSION_DYNAMIC         \
-	(DCBX_CONFIG_VERSION_IEEE | DCBX_CONFIG_VERSION_CEE)
-#define DCBX_CONFIG_VERSION_STATIC          4
+#define DCBX_CONFIG_VERSION_MASK			0x00000007
+#define DCBX_CONFIG_VERSION_OFFSET		 0
+#define DCBX_CONFIG_VERSION_DISABLED		0
+#define DCBX_CONFIG_VERSION_IEEE			1
+#define DCBX_CONFIG_VERSION_CEE			2
+#define DCBX_CONFIG_VERSION_DYNAMIC		(DCBX_CONFIG_VERSION_IEEE | DCBX_CONFIG_VERSION_CEE)
+#define DCBX_CONFIG_VERSION_STATIC		4
 
 	u32 flags;
 	struct dcbx_features features;
@@ -459,12 +507,12 @@ struct dcbx_mib {
 	u32 prefix_seq_num;
 	u32 flags;
 	/*
-	#define DCBX_CONFIG_VERSION_MASK            0x00000007
-	#define DCBX_CONFIG_VERSION_OFFSET           0
-	#define DCBX_CONFIG_VERSION_DISABLED        0
-	#define DCBX_CONFIG_VERSION_IEEE            1
-	#define DCBX_CONFIG_VERSION_CEE             2
-	#define DCBX_CONFIG_VERSION_STATIC          4
+#define DCBX_CONFIG_VERSION_MASK			0x00000007
+#define DCBX_CONFIG_VERSION_OFFSET		 0
+#define DCBX_CONFIG_VERSION_DISABLED		0
+#define DCBX_CONFIG_VERSION_IEEE			1
+#define DCBX_CONFIG_VERSION_CEE			2
+#define DCBX_CONFIG_VERSION_STATIC		4
 	*/
 	struct dcbx_features features;
 	u32 suffix_seq_num;
@@ -500,6 +548,43 @@ struct dcb_dscp_map {
 #define DCB_DSCP_ENABLE_OFFSET			0
 #define DCB_DSCP_ENABLE				1
 	u32 dscp_pri_map[8];
+	/* The map structure is the following:
+	 * each u32 is split into 4-bit chunks, each chunk holds the priority
+	 * for the respective dscp; the lowest dscp is at the lsb.
+	 *        bits:        31..28      27..24      23..20      19..16
+	 * dscp_pri_map[0]: | dscp7 pri | dscp6 pri | dscp5 pri | dscp4 pri |
+	 *        bits:        15..12      11..8        7..4        3..0
+	 *                  | dscp3 pri | dscp2 pri | dscp1 pri | dscp0 pri |
+	 * dscp_pri_map[1]: | dscp15 pri| dscp14 pri| dscp13 pri| dscp12 pri|
+	 *                  | dscp11 pri| dscp10 pri| dscp9 pri | dscp8 pri |
+	 * etc.
+	 */
+};
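
The nibble lookup the comment describes, as a sketch (not part of the patch):

	static u8 dscp_to_pri(const struct dcb_dscp_map *map, u8 dscp)
	{
		u32 w = map->dscp_pri_map[dscp / 8];	/* 8 nibbles per dword */

		return (w >> ((dscp % 8) * 4)) & 0xf;	/* lowest dscp at the lsb */
	}
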
+
+struct mcp_val64 {
+	u32 lo;
+	u32 hi;
+};
+
+/* generic_idc_msg_t is used for inter-driver communication.
+ * source_pf specifies the originating PF that sent the message to all target PFs
+ * msg contains the 64-bit value of the message - opaque to the MFW
+ */
+struct generic_idc_msg_s {
+	u32 source_pf;
+	struct mcp_val64 msg;
+};
+
+/**************************************
+ *     PCIE Statistics structures
+ **************************************/
+struct pcie_stats_stc {
+	u32 sr_cnt_wr_byte_msb;
+	u32 sr_cnt_wr_byte_lsb;
+	u32 sr_cnt_wr_cnt;
+	u32 sr_cnt_rd_byte_msb;
+	u32 sr_cnt_rd_byte_lsb;
+	u32 sr_cnt_rd_cnt;
 };
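
The msb/lsb halves recombine in the obvious way (a sketch; 'st' is a hypothetical
pointer to the struct above):

	u64 wr_bytes = ((u64)st->sr_cnt_wr_byte_msb << 32) | st->sr_cnt_wr_byte_lsb;
	u64 rd_bytes = ((u64)st->sr_cnt_rd_byte_msb << 32) | st->sr_cnt_rd_byte_lsb;
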
 
 /**************************************
@@ -515,14 +600,11 @@ enum _attribute_commands_e {
 };
 
 /**************************************/
-/*                                    */
-/*     P U B L I C      G L O B A L   */
-/*                                    */
+/* P U B L I C G L O B A L */
 /**************************************/
 struct public_global {
-	u32 max_path;       /* 32bit is wasty, but this will be used often */
-/* (Global) 32bit is wasty, but this will be used often */
-	u32 max_ports;
+	u32 max_path;	/* 32bit is wasteful, but this will be used often */
+	u32 max_ports;	/* (Global) 32bit is wasteful, but this will be used often */
 #define MODE_1P	1		/* TBD - NEED TO THINK OF A BETTER NAME */
 #define MODE_2P	2
 #define MODE_3P	3
@@ -530,7 +612,7 @@ struct public_global {
 	u32 debug_mb_offset;
 	u32 phymod_dbg_mb_offset;
 	struct couple_mode_teaming cmt;
-/* Temperature in Celcius (-255C / +255C), measured every second. */
	/* Temperature in Celsius (-255C / +255C), measured every second. */
 	s32 internal_temperature;
 	u32 mfw_ver;
 	u32 running_bundle_id;
@@ -547,33 +629,55 @@ struct public_global {
 #define EXT_PHY_FW_UPGRADE_STATUS_SUCCESS	(3)
 #define EXT_PHY_FW_UPGRADE_TYPE_MASK		(0xffff0000)
 #define EXT_PHY_FW_UPGRADE_TYPE_OFFSET		(16)
+
+	u8 runtime_port_swap_map[MODE_4P];
+	u32 data_ptr;
+	u32 data_size;
+	u32 bmb_error_status_cnt;
+	u32 bmb_jumbo_frame_cnt;
+	u32 sent_to_bmc_cnt;
+	u32 handled_by_mfw;
+	u32 sent_to_nw_cnt; /* BMC BC/MC are handled by MFW */
+#define LEAKY_BUCKET_SAMPLES 10
+	u32 to_bmc_kb_per_second; /* Used to be sent_to_nw_cnt */
+	u32 bcast_dropped_to_bmc_cnt;
+	u32 mcast_dropped_to_bmc_cnt;
+	u32 ucast_dropped_to_bmc_cnt;
+	u32 ncsi_response_failure_cnt;
+	u32 device_attr;
+#define DEVICE_ATTR_NVMETCP_HOST	(1 << 0) /* NVMeTCP host exists in the device */
+#define DEVICE_ATTR_NVMETCP_TARGET	(1 << 1) /* NVMeTCP target exists in the device */
+#define DEVICE_ATTR_L2_VFS		(1 << 2) /* At least one VF assigned for ETH PF */
+#define DEVICE_ATTR_RDMA		(1 << 3) /* RDMA personality exists in device */
+/* RDMA with VFs personality exists in device */
+#define DEVICE_ATTR_RDMA_VFS		(1 << 4)
+#define DEVICE_ATTR_ISCSI		(1 << 5) /* ISCSI personality exists in device */
+#define DEVICE_ATTR_FCOE		(1 << 7) /* FCOE personality exists in device */
 };
 
 /**************************************/
-/*                                    */
-/*     P U B L I C      P A T H       */
-/*                                    */
+/* P U B L I C P A T H */
 /**************************************/
 
 /****************************************************************************
- * Shared Memory 2 Region                                                   *
+ * Shared Memory 2 Region						    *
  ****************************************************************************/
-/* The fw_flr_ack is actually built in the following way:                   */
-/* 8 bit:  PF ack                                                           */
-/* 128 bit: VF ack                                                           */
-/* 8 bit:  ios_dis_ack                                                      */
+/* The fw_flr_ack is actually built in the following way: */
+/* 8 bit: PF ack */
+/* 128 bit: VF ack */
+/* 8 bit: ios_dis_ack */
 /* In order to maintain endianness in the mailbox hsi, we want to keep using */
-/* u32. The fw must have the VF right after the PF since this is how it     */
-/* access arrays(it expects always the VF to reside after the PF, and that  */
-/* makes the calculation much easier for it. )                              */
+/* u32. The fw must have the VF right after the PF since this is how it */
+/* accesses arrays (it always expects the VF to reside after the PF, and */
+/* that makes the calculation much easier for it) */
 /* In order to answer both limitations, and keep the struct small, the code */
-/* will abuse the structure defined here to achieve the actual partition    */
-/* above                                                                    */
+/* will abuse the structure defined here to achieve the actual partition */
+/* above */
 /****************************************************************************/
 struct fw_flr_mb {
 	u32 aggint;
 	u32 opgen_addr;
-	u32 accum_ack;      /* 0..15:PF, 16..207:VF, 256..271:IOV_DIS */
+	u32 accum_ack;	/* 0..15:PF, 16..207:VF, 256..271:IOV_DIS */
 #define ACCUM_ACK_PF_BASE	0
 #define ACCUM_ACK_PF_SHIFT	0
 
@@ -591,23 +695,20 @@ struct public_path {
 	 * mcp_vf_disabled is set by the MCP to inform the driver about VFs
 	 * which were disabled/flred
 	 */
-	u32 mcp_vf_disabled[VF_MAX_STATIC / 32];    /* 0x003c */
+	u32 mcp_vf_disabled[VF_MAX_STATIC / 32];	/* 0x003c */
 
-/* Reset on mcp reset, and incremented for eveny process kill event. */
-	u32 process_kill;
+	u32 process_kill; /* Reset on mcp reset, and incremented for every process kill event. */
 #define PROCESS_KILL_COUNTER_MASK		0x0000ffff
 #define PROCESS_KILL_COUNTER_OFFSET		0
 #define PROCESS_KILL_GLOB_AEU_BIT_MASK		0xffff0000
-#define PROCESS_KILL_GLOB_AEU_BIT_OFFSET	16
+#define PROCESS_KILL_GLOB_AEU_BIT_OFFSET		16
 #define GLOBAL_AEU_BIT(aeu_reg_id, aeu_bit) ((aeu_reg_id) * 32 + (aeu_bit))
 	/* Added to support E5 240 VFs */
 	u32 mcp_vf_disabled2[ADDED_VF_BITMAP_SIZE];
 };
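
Splitting the process_kill word then looks like (a sketch; 'path' is a hypothetical
pointer):

	u16 kill_count = (path->process_kill & PROCESS_KILL_COUNTER_MASK) >>
			 PROCESS_KILL_COUNTER_OFFSET;
	u16 aeu_bit    = (path->process_kill & PROCESS_KILL_GLOB_AEU_BIT_MASK) >>
			 PROCESS_KILL_GLOB_AEU_BIT_OFFSET;
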
 
 /**************************************/
-/*                                    */
-/*     P U B L I C      P O R T       */
-/*                                    */
+/* P U B L I C P O R T */
 /**************************************/
 #define FC_NPIV_WWPN_SIZE 8
 #define FC_NPIV_WWNN_SIZE 8
@@ -629,32 +730,31 @@ struct dci_fc_npiv_tbl {
 };
 
 /****************************************************************************
- * Driver <-> FW Mailbox                                                    *
+ * Driver <-> FW Mailbox						    *
  ****************************************************************************/
 
 struct public_port {
-	u32 validity_map;   /* 0x0 (4*2 = 0x8) */
+	u32 validity_map;					/* 0x0 (0x4) */
 
 	/* validity bits */
-#define MCP_VALIDITY_PCI_CFG                    0x00100000
-#define MCP_VALIDITY_MB                         0x00200000
-#define MCP_VALIDITY_DEV_INFO                   0x00400000
-#define MCP_VALIDITY_RESERVED                   0x00000007
+#define MCP_VALIDITY_PCI_CFG					0x00100000
+#define MCP_VALIDITY_MB						0x00200000
+#define MCP_VALIDITY_DEV_INFO				0x00400000
+#define MCP_VALIDITY_RESERVED				0x00000007
 
 	/* One licensing bit should be set */
-/* yaniv - tbd ? license */
-#define MCP_VALIDITY_LIC_KEY_IN_EFFECT_MASK     0x00000038
-#define MCP_VALIDITY_LIC_MANUF_KEY_IN_EFFECT    0x00000008
+#define MCP_VALIDITY_LIC_KEY_IN_EFFECT_MASK	0x00000038	/* yaniv - tbd ? license */
+#define MCP_VALIDITY_LIC_MANUF_KEY_IN_EFFECT	0x00000008
 #define MCP_VALIDITY_LIC_UPGRADE_KEY_IN_EFFECT  0x00000010
-#define MCP_VALIDITY_LIC_NO_KEY_IN_EFFECT       0x00000020
+#define MCP_VALIDITY_LIC_NO_KEY_IN_EFFECT	0x00000020
 
 	/* Active MFW */
-#define MCP_VALIDITY_ACTIVE_MFW_UNKNOWN         0x00000000
-#define MCP_VALIDITY_ACTIVE_MFW_MASK            0x000001c0
-#define MCP_VALIDITY_ACTIVE_MFW_NCSI            0x00000040
-#define MCP_VALIDITY_ACTIVE_MFW_NONE            0x000001c0
+#define MCP_VALIDITY_ACTIVE_MFW_UNKNOWN		0x00000000
+#define MCP_VALIDITY_ACTIVE_MFW_MASK			0x000001c0
+#define MCP_VALIDITY_ACTIVE_MFW_NCSI			0x00000040
+#define MCP_VALIDITY_ACTIVE_MFW_NONE			0x000001c0
 
-	u32 link_status;
+	u32 link_status;					/* 0x4 (0x4)*/
 #define LINK_STATUS_LINK_UP				0x00000001
 #define LINK_STATUS_SPEED_AND_DUPLEX_MASK		0x0000001e
 #define LINK_STATUS_SPEED_AND_DUPLEX_1000THD		(1 << 1)
@@ -669,6 +769,7 @@ struct public_port {
 #define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE		0x00000040
 #define LINK_STATUS_PARALLEL_DETECTION_USED		0x00000080
 #define LINK_STATUS_PFC_ENABLED				0x00000100
+#define LINK_STATUS_LINK_PARTNER_SPEED_MASK		0x0001fe00
 #define LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE	0x00000200
 #define LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE	0x00000400
 #define LINK_STATUS_LINK_PARTNER_10G_CAPABLE		0x00000800
@@ -695,30 +796,46 @@ struct public_port {
 #define LINK_STATUS_FEC_MODE_RS_CL91			(2 << 27)
 #define LINK_STATUS_EXT_PHY_LINK_UP			0x40000000
 
-	u32 link_status1;
-	u32 ext_phy_fw_version;
+	u32 link_status1;				/* 0x8 (0x4) */
+#define LP_PRESENCE_STATUS_OFFSET	0
+#define LP_PRESENCE_STATUS_MASK		0x3
+#define LP_PRESENCE_UNKNOWN		0x0
+#define LP_PRESENCE_PROBING		0x1
+#define	LP_PRESENT			0x2
+#define	LP_NOT_PRESENT			0x3
+#define TXFAULT_STATUS_OFFSET		2
+#define TXFAULT_STATUS_MASK		(0x1 << TXFAULT_STATUS_OFFSET)
+#define TXFAULT_NOT_PRESENT		(0x0 << TXFAULT_STATUS_OFFSET)
+#define TXFAULT_PRESENT			(0x1 << TXFAULT_STATUS_OFFSET)
+#define RXLOS_STATUS_OFFSET		3
+#define RXLOS_STATUS_MASK		(0x1 << RXLOS_STATUS_OFFSET)
+#define RXLOS_NOT_PRESENT		(0x0 << RXLOS_STATUS_OFFSET)
+#define RXLOS_PRESENT			(0x1 << RXLOS_STATUS_OFFSET)
+#define PORT_NOT_READY			(1 << 4) /* Port init has not completed or has failed */
+	u32 ext_phy_fw_version;				/* 0xc (0x4) */
 /* Points to struct eth_phy_cfg (For READ-ONLY) */
 	u32 drv_phy_cfg_addr;
 
-	u32 port_stx;
+	u32 port_stx;			/* 0x14 (0x4) */
 
-	u32 stat_nig_timer;
+	u32 stat_nig_timer;		/* 0x18 (0x4) */
 
-	struct port_mf_cfg port_mf_config;
-	struct port_stats stats;
+	struct port_mf_cfg port_mf_config; /* 0x1c (0xc) */
+	struct port_stats stats;	/* 0x28 (0x1e8) 64-bit aligned*/
 
-	u32 media_type;
-#define	MEDIA_UNSPECIFIED	0x0
+	u32 media_type;			/* 0x210 (0x4) */
+#define	MEDIA_UNSPECIFIED		0x0
 #define	MEDIA_SFPP_10G_FIBER	0x1	/* Use MEDIA_MODULE_FIBER instead */
-#define	MEDIA_XFP_FIBER		0x2	/* Use MEDIA_MODULE_FIBER instead */
-#define	MEDIA_DA_TWINAX		0x3
-#define	MEDIA_BASE_T		0x4
-#define MEDIA_SFP_1G_FIBER	0x5	/* Use MEDIA_MODULE_FIBER instead */
-#define MEDIA_MODULE_FIBER	0x6
-#define	MEDIA_KR		0xf0
-#define	MEDIA_NOT_PRESENT	0xff
-
-	u32 lfa_status;
+/* Use MEDIA_MODULE_FIBER instead */
+#define	MEDIA_XFP_FIBER			0x2
+#define	MEDIA_DA_TWINAX			0x3
+#define	MEDIA_BASE_T			0x4
+#define MEDIA_SFP_1G_FIBER		0x5	/* Use MEDIA_MODULE_FIBER instead */
+#define MEDIA_MODULE_FIBER		0x6
+#define	MEDIA_KR				0xf0
+#define	MEDIA_NOT_PRESENT		0xff
+
+	u32 lfa_status;			/* 0x214 (0x4) */
 #define LFA_LINK_FLAP_REASON_OFFSET		0
 #define LFA_LINK_FLAP_REASON_MASK		0x000000ff
 #define LFA_NO_REASON					(0 << 0)
@@ -734,11 +851,15 @@ struct public_port {
 #define LINK_FLAP_AVOIDANCE_COUNT_MASK		0x0000ff00
 #define LINK_FLAP_COUNT_OFFSET			16
 #define LINK_FLAP_COUNT_MASK			0x00ff0000
-
-	u32 link_change_count;
+#define LFA_LINK_FLAP_EXT_REASON_OFFSET	24
+#define LFA_LINK_FLAP_EXT_REASON_MASK		0xff000000
+#define LFA_EXT_SPEED_MISMATCH				(1 << 24)
+#define LFA_EXT_ADV_SPEED_MISMATCH			(1 << 25)
+#define LFA_EXT_FEC_MISMATCH				(1 << 26)
+	u32 link_change_count;			/* 0x218 (0x4) */
 
 	/* LLDP params */
-/* offset: 536 bytes? */
+	/* offset: 536 bytes? */
 	struct lldp_config_params_s lldp_config_params[LLDP_MAX_LLDP_AGENTS];
 	struct lldp_status_params_s lldp_status_params[LLDP_MAX_LLDP_AGENTS];
 	struct lldp_system_tlvs_buffer_s system_lldp_tlvs_buf;
@@ -748,46 +869,42 @@ struct public_port {
 	struct dcbx_mib remote_dcbx_mib;
 	struct dcbx_mib operational_dcbx_mib;
 
-/* FC_NPIV table offset & size in NVRAM value of 0 means not present */
-
+	/* FC_NPIV table offset & size in NVRAM value of 0 means not present */
 	u32 fc_npiv_nvram_tbl_addr;
+#define NPIV_TBL_INVALID_ADDR			0xFFFFFFFF
+
 	u32 fc_npiv_nvram_tbl_size;
 	u32 transceiver_data;
-#define ETH_TRANSCEIVER_STATE_MASK			0x000000FF
-#define ETH_TRANSCEIVER_STATE_OFFSET			0x00000000
-#define ETH_TRANSCEIVER_STATE_UNPLUGGED			0x00000000
-#define ETH_TRANSCEIVER_STATE_PRESENT			0x00000001
-#define ETH_TRANSCEIVER_STATE_VALID			0x00000003
-#define ETH_TRANSCEIVER_STATE_UPDATING			0x00000008
-#define ETH_TRANSCEIVER_TYPE_MASK			0x0000FF00
-#define ETH_TRANSCEIVER_TYPE_OFFSET			0x00000008
-#define ETH_TRANSCEIVER_TYPE_NONE			0x00000000
-#define ETH_TRANSCEIVER_TYPE_UNKNOWN			0x000000FF
-/* 1G Passive copper cable */
-#define ETH_TRANSCEIVER_TYPE_1G_PCC			0x01
-/* 1G Active copper cable  */
-#define ETH_TRANSCEIVER_TYPE_1G_ACC			0x02
+#define ETH_TRANSCEIVER_STATE_MASK		0x000000FF
+#define ETH_TRANSCEIVER_STATE_OFFSET			0x0
+#define ETH_TRANSCEIVER_STATE_UNPLUGGED			0x00
+#define ETH_TRANSCEIVER_STATE_PRESENT			0x01
+#define ETH_TRANSCEIVER_STATE_VALID			0x03
+#define ETH_TRANSCEIVER_STATE_UPDATING			0x08
+#define ETH_TRANSCEIVER_STATE_IN_SETUP			0x10
+#define ETH_TRANSCEIVER_TYPE_MASK		0x0000FF00
+#define ETH_TRANSCEIVER_TYPE_OFFSET		0x8
+#define ETH_TRANSCEIVER_TYPE_NONE			0x00
+#define ETH_TRANSCEIVER_TYPE_UNKNOWN			0xFF
+#define ETH_TRANSCEIVER_TYPE_1G_PCC			0x01 /* 1G Passive copper cable */
+#define ETH_TRANSCEIVER_TYPE_1G_ACC			0x02 /* 1G Active copper cable  */
 #define ETH_TRANSCEIVER_TYPE_1G_LX			0x03
 #define ETH_TRANSCEIVER_TYPE_1G_SX			0x04
 #define ETH_TRANSCEIVER_TYPE_10G_SR			0x05
 #define ETH_TRANSCEIVER_TYPE_10G_LR			0x06
 #define ETH_TRANSCEIVER_TYPE_10G_LRM			0x07
 #define ETH_TRANSCEIVER_TYPE_10G_ER			0x08
-/* 10G Passive copper cable */
-#define ETH_TRANSCEIVER_TYPE_10G_PCC			0x09
-/* 10G Active copper cable  */
-#define ETH_TRANSCEIVER_TYPE_10G_ACC			0x0a
+#define ETH_TRANSCEIVER_TYPE_10G_PCC			0x09 /* 10G Passive copper cable */
+#define ETH_TRANSCEIVER_TYPE_10G_ACC			0x0a /* 10G Active copper cable  */
 #define ETH_TRANSCEIVER_TYPE_XLPPI			0x0b
 #define ETH_TRANSCEIVER_TYPE_40G_LR4			0x0c
 #define ETH_TRANSCEIVER_TYPE_40G_SR4			0x0d
 #define ETH_TRANSCEIVER_TYPE_40G_CR4			0x0e
-/* Active optical cable */
-#define ETH_TRANSCEIVER_TYPE_100G_AOC			0x0f
+#define ETH_TRANSCEIVER_TYPE_100G_AOC			0x0f /* Active optical cable */
 #define ETH_TRANSCEIVER_TYPE_100G_SR4			0x10
 #define ETH_TRANSCEIVER_TYPE_100G_LR4			0x11
 #define ETH_TRANSCEIVER_TYPE_100G_ER4			0x12
-/* Active copper cable */
-#define ETH_TRANSCEIVER_TYPE_100G_ACC			0x13
+#define ETH_TRANSCEIVER_TYPE_100G_ACC			0x13 /* Active copper cable */
 #define ETH_TRANSCEIVER_TYPE_100G_CR4			0x14
 #define ETH_TRANSCEIVER_TYPE_4x10G_SR			0x15
 /* 25G Passive copper cable - short */
@@ -805,7 +922,6 @@ struct public_port {
 #define ETH_TRANSCEIVER_TYPE_25G_SR			0x1c
 #define ETH_TRANSCEIVER_TYPE_25G_LR			0x1d
 #define ETH_TRANSCEIVER_TYPE_25G_AOC			0x1e
-
 #define ETH_TRANSCEIVER_TYPE_4x10G			0x1f
 #define ETH_TRANSCEIVER_TYPE_4x25G_CR			0x20
 #define ETH_TRANSCEIVER_TYPE_1000BASET			0x21
@@ -817,6 +933,62 @@ struct public_port {
 #define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_CR	0x34
 #define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_LR	0x35
 #define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_AOC	0x36
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_25G_SR	0x37
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_25G_LR	0x38
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_1G_10G_SR	0x39
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_1G_10G_LR	0x3a
+#define ETH_TRANSCEIVER_TYPE_25G_ER			0x3b
+#define ETH_TRANSCEIVER_TYPE_50G_CR			0x3c
+#define ETH_TRANSCEIVER_TYPE_50G_SR			0x3d
+#define ETH_TRANSCEIVER_TYPE_50G_LR			0x3e
+#define ETH_TRANSCEIVER_TYPE_50G_FR			0x3f
+#define ETH_TRANSCEIVER_TYPE_50G_R1_AOC			0x40
+#define ETH_TRANSCEIVER_TYPE_50G_R1_ACC_BER_1E6		0x41
+#define ETH_TRANSCEIVER_TYPE_50G_R1_ACC_BER_1E4		0x42
+#define ETH_TRANSCEIVER_TYPE_50G_R1_AOC_BER_1E6		0x43
+#define ETH_TRANSCEIVER_TYPE_50G_R1_AOC_BER_1E4		0x44
+#define ETH_TRANSCEIVER_TYPE_100G_CR2			0x45
+#define ETH_TRANSCEIVER_TYPE_100G_SR2			0x46
+#define ETH_TRANSCEIVER_TYPE_100G_R2_AOC		0x47
+#define ETH_TRANSCEIVER_TYPE_100G_R2_ACC_BER_1E6	0x48
+#define ETH_TRANSCEIVER_TYPE_100G_R2_ACC_BER_1E4	0x49
+#define ETH_TRANSCEIVER_TYPE_100G_R2_AOC_BER_1E6	0x4a
+#define ETH_TRANSCEIVER_TYPE_100G_R2_AOC_BER_1E4	0x4b
+#define ETH_TRANSCEIVER_TYPE_200G_CR4			0x4c
+#define ETH_TRANSCEIVER_TYPE_200G_SR4			0x4d
+#define ETH_TRANSCEIVER_TYPE_200G_LR4			0x4e
+#define ETH_TRANSCEIVER_TYPE_200G_FR4			0x4f
+#define ETH_TRANSCEIVER_TYPE_200G_DR4			0x50
+#define ETH_TRANSCEIVER_TYPE_200G_R4_AOC		0x51
+#define ETH_TRANSCEIVER_TYPE_200G_R4_ACC_BER_1E6	0x52
+#define ETH_TRANSCEIVER_TYPE_200G_R4_ACC_BER_1E4	0x53
+#define ETH_TRANSCEIVER_TYPE_200G_R4_AOC_BER_1E6	0x54
+#define ETH_TRANSCEIVER_TYPE_200G_R4_AOC_BER_1E4	0x55
+/* COULD ALSO ADD... BUT FOR NOW UNSUPPORTED */
+#define ETH_TRANSCEIVER_TYPE_5G_BASET			0x56
+#define ETH_TRANSCEIVER_TYPE_2p5G_BASET			0x57
+#define ETH_TRANSCEIVER_TYPE_10G_BASET_30M		0x58
+#define ETH_TRANSCEIVER_TYPE_25G_BASET			0x59
+#define ETH_TRANSCEIVER_TYPE_40G_BASET			0x5a
+#define ETH_TRANSCEIVER_TYPE_40G_ER4			0x5b
+#define ETH_TRANSCEIVER_TYPE_40G_PSM4			0x5c
+#define ETH_TRANSCEIVER_TYPE_40G_SWDM4			0x5d
+#define ETH_TRANSCEIVER_TYPE_50G_BASET			0x5e
+#define ETH_TRANSCEIVER_TYPE_100G_SR10			0x5f
+#define ETH_TRANSCEIVER_TYPE_100G_CDWM4			0x60
+#define ETH_TRANSCEIVER_TYPE_100G_DWDM2			0x61
+#define ETH_TRANSCEIVER_TYPE_100G_PSM4			0x62
+#define ETH_TRANSCEIVER_TYPE_100G_CLR4			0x63
+#define ETH_TRANSCEIVER_TYPE_100G_WDM			0x64
+#define ETH_TRANSCEIVER_TYPE_100G_SWDM4			0x65 /* Supported */
+#define ETH_TRANSCEIVER_TYPE_100G_4WDM_10		0x66
+#define ETH_TRANSCEIVER_TYPE_100G_4WDM_20		0x67
+#define ETH_TRANSCEIVER_TYPE_100G_4WDM_40		0x68
+#define ETH_TRANSCEIVER_TYPE_100G_DR_CAUI4		0x69
+#define ETH_TRANSCEIVER_TYPE_100G_FR_CAUI4		0x6a
+#define ETH_TRANSCEIVER_TYPE_100G_LR_CAUI4		0x6b
+#define ETH_TRANSCEIVER_TYPE_200G_PSM4			0x6c
+
 	u32 wol_info;
 	u32 wol_pkt_len;
 	u32 wol_pkt_details;
@@ -829,17 +1001,16 @@ struct public_port {
 /* Shows the Local Device EEE capabilities */
 #define EEE_LD_ADV_STATUS_MASK	0x000000f0
 #define EEE_LD_ADV_STATUS_OFFSET	4
-	#define EEE_1G_ADV	(1 << 1)
-	#define EEE_10G_ADV	(1 << 2)
+#define EEE_1G_ADV	(1 << 1)
+#define EEE_10G_ADV	(1 << 2)
 /* Same values as in EEE_LD_ADV, but for Link Parter */
 #define	EEE_LP_ADV_STATUS_MASK	0x00000f00
 #define EEE_LP_ADV_STATUS_OFFSET	8
 
-/* Supported speeds for EEE */
-#define EEE_SUPPORTED_SPEED_MASK	0x0000f000
+#define EEE_SUPPORTED_SPEED_MASK	0x0000f000	/* Supported speeds for EEE */
 #define EEE_SUPPORTED_SPEED_OFFSET	12
-	#define EEE_1G_SUPPORTED	(1 << 1)
-	#define EEE_10G_SUPPORTED	(1 << 2)
+#define EEE_1G_SUPPORTED	(1 << 1)
+#define EEE_10G_SUPPORTED	(1 << 2)
 
 	u32 eee_remote;	/* Used for EEE in LLDP */
 #define EEE_REMOTE_TW_TX_MASK	0x0000ffff
@@ -857,6 +1028,10 @@ struct public_port {
 #define ETH_TRANSCEIVER_HAS_DIAGNOSTIC			(1 << 6)
 #define ETH_TRANSCEIVER_IDENT_MASK			0x0000ff00
 #define ETH_TRANSCEIVER_IDENT_OFFSET			8
+#define ETH_TRANSCEIVER_ENHANCED_OPTIONS_MASK		0x00ff0000
+#define ETH_TRANSCEIVER_ENHANCED_OPTIONS_OFFSET		16
+#define ETH_TRANSCEIVER_HAS_TXFAULT			(1 << 5)
+#define ETH_TRANSCEIVER_HAS_RXLOS			(1 << 4)
 
 	u32 oem_cfg_port;
 #define OEM_CFG_CHANNEL_TYPE_MASK			0x00000003
@@ -871,12 +1046,15 @@ struct public_port {
 
 	struct lldp_received_tlvs_s lldp_received_tlvs[LLDP_MAX_LLDP_AGENTS];
 	u32 system_lldp_tlvs_buf2[MAX_SYSTEM_LLDP_TLV_DATA];
+	u32 phy_module_temperature; /* Temperature of transceiver / external PHY - 0xc80 (0x4)*/
+	u32 nig_reg_stat_rx_bmb_packet;
+	u32 nig_reg_rx_llh_ncsi_mcp_mask;
+	u32 nig_reg_rx_llh_ncsi_mcp_mask_2;
+
 };
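
The new link_status1 sub-fields decode as follows (a sketch, not part of the patch;
'p_port' is a hypothetical pointer):

	u32 ls1 = p_port->link_status1;
	bool lp_present = (ls1 & LP_PRESENCE_STATUS_MASK) == LP_PRESENT;
	bool tx_fault   = (ls1 & TXFAULT_STATUS_MASK) == TXFAULT_PRESENT;
	bool rx_los     = (ls1 & RXLOS_STATUS_MASK) == RXLOS_PRESENT;
	bool not_ready  = ls1 & PORT_NOT_READY;
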
 
 /**************************************/
-/*                                    */
-/*     P U B L I C      F U N C       */
-/*                                    */
+/* P U B L I C F U N C */
 /**************************************/
 
 struct public_func {
@@ -885,8 +1063,7 @@ struct public_func {
 
 	/* MTU size per function is needed for the OV feature */
 	u32 mtu_size;
-/* 9 entires for the C2S PCP map for each inner VLAN PCP + 1 default */
-
+	/* 9 entries for the C2S PCP map for each inner VLAN PCP + 1 default */
 	/* For PCP values 0-3 use the map lower */
 	/* 0xFF000000 - PCP 0, 0x00FF0000 - PCP 1,
 	 * 0x0000FF00 - PCP 2, 0x000000FF PCP 3
@@ -901,45 +1078,66 @@ struct public_func {
 	/* For PCP default value get the MSB byte of the map default */
 	u32 c2s_pcp_map_default;
 
-	u32 reserved[4];
+	/* For generic inter-driver communication channel messages between PFs via MFW */
+	struct generic_idc_msg_s generic_idc_msg;
+
+	u32 num_of_msix;
 
 	/* replace old mf_cfg */
 	u32 config;
 	/* E/R/I/D */
 	/* function 0 of each port cannot be hidden */
-#define FUNC_MF_CFG_FUNC_HIDE                   0x00000001
-#define FUNC_MF_CFG_PAUSE_ON_HOST_RING          0x00000002
+#define FUNC_MF_CFG_FUNC_HIDE				0x00000001
+#define FUNC_MF_CFG_PAUSE_ON_HOST_RING		0x00000002
 #define FUNC_MF_CFG_PAUSE_ON_HOST_RING_OFFSET    0x00000001
 
 
-#define FUNC_MF_CFG_PROTOCOL_MASK               0x000000f0
-#define FUNC_MF_CFG_PROTOCOL_OFFSET              4
-#define FUNC_MF_CFG_PROTOCOL_ETHERNET           0x00000000
-#define FUNC_MF_CFG_PROTOCOL_ISCSI              0x00000010
-#define FUNC_MF_CFG_PROTOCOL_FCOE		0x00000020
-#define FUNC_MF_CFG_PROTOCOL_ROCE               0x00000030
-#define FUNC_MF_CFG_PROTOCOL_MAX	        0x00000030
+#define FUNC_MF_CFG_PROTOCOL_MASK			0x000000f0
+#define FUNC_MF_CFG_PROTOCOL_OFFSET			 4
+#define FUNC_MF_CFG_PROTOCOL_ETHERNET		0x00000000
+#define FUNC_MF_CFG_PROTOCOL_ISCSI			0x00000010
+#define FUNC_MF_CFG_PROTOCOL_FCOE			0x00000020
+#define FUNC_MF_CFG_PROTOCOL_ROCE			0x00000030
+#define FUNC_MF_CFG_PROTOCOL_NVMETCP			0x00000040
+
+
+#define FUNC_MF_CFG_PROTOCOL_MAX			0x00000050
 
 	/* MINBW, MAXBW */
 	/* value range - 0..100, increments in 1 %  */
-#define FUNC_MF_CFG_MIN_BW_MASK                 0x0000ff00
-#define FUNC_MF_CFG_MIN_BW_OFFSET                8
-#define FUNC_MF_CFG_MIN_BW_DEFAULT              0x00000000
-#define FUNC_MF_CFG_MAX_BW_MASK                 0x00ff0000
-#define FUNC_MF_CFG_MAX_BW_OFFSET                16
-#define FUNC_MF_CFG_MAX_BW_DEFAULT              0x00640000
+#define FUNC_MF_CFG_MIN_BW_MASK				0x0000ff00
+#define FUNC_MF_CFG_MIN_BW_OFFSET			 8
+#define FUNC_MF_CFG_MIN_BW_DEFAULT			0x00000000
+#define FUNC_MF_CFG_MAX_BW_MASK				0x00ff0000
+#define FUNC_MF_CFG_MAX_BW_OFFSET			 16
+#define FUNC_MF_CFG_MAX_BW_DEFAULT			0x00640000
+
+	/* RDMA PROTOCOL */
+#define FUNC_MF_CFG_RDMA_PROTOCOL_MASK		0x03000000
+#define FUNC_MF_CFG_RDMA_PROTOCOL_OFFSET		 24
+#define FUNC_MF_CFG_RDMA_PROTOCOL_NONE		0x00000000
+#define FUNC_MF_CFG_RDMA_PROTOCOL_ROCE		0x01000000
+#define FUNC_MF_CFG_RDMA_PROTOCOL_IWARP		0x02000000
+	/* for future support */
+#define FUNC_MF_CFG_RDMA_PROTOCOL_BOTH		0x03000000
+
+#define FUNC_MF_CFG_BOOT_MODE_MASK		0x0C000000
+#define FUNC_MF_CFG_BOOT_MODE_OFFSET		26
+#define FUNC_MF_CFG_BOOT_MODE_BIOS_CTRL		0x00000000
+#define FUNC_MF_CFG_BOOT_MODE_DISABLED		0x04000000
+#define FUNC_MF_CFG_BOOT_MODE_ENABLED		0x08000000
 
 	u32 status;
 #define FUNC_STATUS_VIRTUAL_LINK_UP		0x00000001
 #define FUNC_STATUS_LOGICAL_LINK_UP		0x00000002
-#define FUNC_STATUS_FORCED_LINK			0x00000004
+#define FUNC_STATUS_FORCED_LINK		0x00000004
 
-	u32 mac_upper;      /* MAC */
-#define FUNC_MF_CFG_UPPERMAC_MASK               0x0000ffff
-#define FUNC_MF_CFG_UPPERMAC_OFFSET              0
-#define FUNC_MF_CFG_UPPERMAC_DEFAULT            FUNC_MF_CFG_UPPERMAC_MASK
+	u32 mac_upper;	/* MAC */
+#define FUNC_MF_CFG_UPPERMAC_MASK			0x0000ffff
+#define FUNC_MF_CFG_UPPERMAC_OFFSET			 0
+#define FUNC_MF_CFG_UPPERMAC_DEFAULT			FUNC_MF_CFG_UPPERMAC_MASK
 	u32 mac_lower;
-#define FUNC_MF_CFG_LOWERMAC_DEFAULT            0xffffffff
+#define FUNC_MF_CFG_LOWERMAC_DEFAULT			0xffffffff
 
 	u32 fcoe_wwn_port_name_upper;
 	u32 fcoe_wwn_port_name_lower;
@@ -947,10 +1145,10 @@ struct public_func {
 	u32 fcoe_wwn_node_name_upper;
 	u32 fcoe_wwn_node_name_lower;
 
-	u32 ovlan_stag;     /* tags */
-#define FUNC_MF_CFG_OV_STAG_MASK              0x0000ffff
-#define FUNC_MF_CFG_OV_STAG_OFFSET             0
-#define FUNC_MF_CFG_OV_STAG_DEFAULT           FUNC_MF_CFG_OV_STAG_MASK
+	u32 ovlan_stag;	/* tags */
+#define FUNC_MF_CFG_OV_STAG_MASK			  0x0000ffff
+#define FUNC_MF_CFG_OV_STAG_OFFSET		   0
+#define FUNC_MF_CFG_OV_STAG_DEFAULT		  FUNC_MF_CFG_OV_STAG_MASK
 
 	u32 pf_allocation; /* vf per pf */
 
@@ -962,7 +1160,8 @@ struct public_func {
 	 * drv_ack_vf_disabled is set by the PF driver to ack handled disabled
 	 * VFs
 	 */
-	u32 drv_ack_vf_disabled[VF_MAX_STATIC / 32];    /* 0x0044 */
+	/* Not in use anymore */
+	u32 drv_ack_vf_disabled[VF_MAX_STATIC / 32];	/* 0x0044 */
 
 	u32 drv_id;
 #define DRV_ID_PDA_COMP_VER_MASK	0x0000ffff
@@ -971,8 +1170,7 @@ struct public_func {
 #define LOAD_REQ_HSI_VERSION		2
 #define DRV_ID_MCP_HSI_VER_MASK		0x00ff0000
 #define DRV_ID_MCP_HSI_VER_OFFSET	16
-#define DRV_ID_MCP_HSI_VER_CURRENT	(LOAD_REQ_HSI_VERSION << \
-					 DRV_ID_MCP_HSI_VER_OFFSET)
+#define DRV_ID_MCP_HSI_VER_CURRENT	(LOAD_REQ_HSI_VERSION << DRV_ID_MCP_HSI_VER_OFFSET)
 
 #define DRV_ID_DRV_TYPE_MASK		0x7f000000
 #define DRV_ID_DRV_TYPE_OFFSET		24
@@ -986,6 +1184,10 @@ struct public_func {
 #define DRV_ID_DRV_TYPE_FREEBSD		(7 << DRV_ID_DRV_TYPE_OFFSET)
 #define DRV_ID_DRV_TYPE_AIX		(8 << DRV_ID_DRV_TYPE_OFFSET)
 
+#define DRV_ID_DRV_TYPE_OS		(DRV_ID_DRV_TYPE_LINUX | DRV_ID_DRV_TYPE_WINDOWS | \
+					 DRV_ID_DRV_TYPE_SOLARIS | DRV_ID_DRV_TYPE_VMWARE | \
+					 DRV_ID_DRV_TYPE_FREEBSD | DRV_ID_DRV_TYPE_AIX)
+
 #define DRV_ID_DRV_INIT_HW_MASK		0x80000000
 #define DRV_ID_DRV_INIT_HW_OFFSET	31
 #define DRV_ID_DRV_INIT_HW_FLAG		(1 << DRV_ID_DRV_INIT_HW_OFFSET)
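
Composing drv_id for, say, a Linux driver that performs the HW init would then look like
(a sketch; 'pda_ver' is a hypothetical 16-bit PDA compatibility version):

	u32 drv_id = (pda_ver & DRV_ID_PDA_COMP_VER_MASK) |
		     DRV_ID_MCP_HSI_VER_CURRENT |
		     DRV_ID_DRV_TYPE_LINUX |
		     DRV_ID_DRV_INIT_HW_FLAG;
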
@@ -1009,9 +1211,7 @@ struct public_func {
 };
 
 /**************************************/
-/*                                    */
-/*     P U B L I C       M B          */
-/*                                    */
+/* P U B L I C M B */
 /**************************************/
 /* This is the only section that the driver can write to, and each */
 /* Basically each driver request to set feature parameters,
@@ -1021,15 +1221,10 @@ struct public_func {
  */
 
 struct mcp_mac {
-	u32 mac_upper;      /* Upper 16 bits are always zeroes */
+	u32 mac_upper;	/* Upper 16 bits are always zeroes */
 	u32 mac_lower;
 };
 
-struct mcp_val64 {
-	u32 lo;
-	u32 hi;
-};
-
 struct mcp_file_att {
 	u32 nvm_start_addr;
 	u32 len;
@@ -1085,6 +1280,27 @@ struct ocbb_data_stc {
 	u32 ocsd_req_update_interval;
 };
 
+struct fcoe_cap_stc {
+#define FCOE_CAP_UNDEFINED_VALUE	0xffff
+	u32	max_ios;	/* Maximum number of I/Os per connection */
+#define FCOE_CAP_IOS_MASK		0x0000ffff
+#define FCOE_CAP_IOS_OFFSET		0
+	u32	max_log;	/* Maximum number of Logins per port */
+#define FCOE_CAP_LOG_MASK		0x0000ffff
+#define FCOE_CAP_LOG_OFFSET		0
+	u32	max_exch;	/* Maximum number of exchanges */
+#define FCOE_CAP_EXCH_MASK		0x0000ffff
+#define FCOE_CAP_EXCH_OFFSET		0
+	u32	max_npiv;	/* Maximum NPIV WWN per port */
+#define FCOE_CAP_NPIV_MASK		0x0000ffff
+#define FCOE_CAP_NPIV_OFFSET		0
+	u32	max_tgt;	/* Maximum number of targets supported */
+#define FCOE_CAP_TGT_MASK		0x0000ffff
+#define FCOE_CAP_TGT_OFFSET		0
+	u32	max_outstnd;	/* Maximum number of outstanding commands across all connections */
+#define FCOE_CAP_OUTSTND_MASK		0x0000ffff
+#define FCOE_CAP_OUTSTND_OFFSET		0
+};
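
Each field carries a 16-bit value in its low half, with 0xffff meaning "not reported";
reading one might look like (a sketch; 'cap' is a hypothetical pointer):

	u16 max_ios  = (cap->max_ios & FCOE_CAP_IOS_MASK) >> FCOE_CAP_IOS_OFFSET;
	bool defined = (max_ios != FCOE_CAP_UNDEFINED_VALUE);
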
 #define MAX_NUM_OF_SENSORS			7
 #define MFW_SENSOR_LOCATION_INTERNAL		1
 #define MFW_SENSOR_LOCATION_EXTERNAL		2
@@ -1133,6 +1349,11 @@ enum resource_id_enum {
 	RESOURCE_LL2_QUEUE_E		=	15,
 	RESOURCE_RDMA_STATS_QUEUE_E	=	16,
 	RESOURCE_BDQ_E			=	17,
+	RESOURCE_QCN_E			=	18,
+	RESOURCE_LLH_FILTER_E		=	19,
+	RESOURCE_VF_MAC_ADDR		=	20,
+	RESOURCE_LL2_CTX_E		=	21,
+	RESOURCE_VF_CNQS		=	22,
 	RESOURCE_MAX_NUM,
 	RESOURCE_NUM_INVALID		=	0xFFFFFFFF
 };
@@ -1150,6 +1371,11 @@ struct resource_info {
 #define RESOURCE_ELEMENT_STRICT (1 << 0)
 };
 
+struct mcp_wwn {
+	u32 wwn_upper;
+	u32 wwn_lower;
+};
+
 #define DRV_ROLE_NONE		0
 #define DRV_ROLE_PREBOOT	1
 #define DRV_ROLE_OS		2
@@ -1203,11 +1429,26 @@ struct attribute_cmd_write_stc {
 	u32 offset;
 };
 
+struct lldp_stats_stc {
+	u32 tx_frames_total;
+	u32 rx_frames_total;
+	u32 rx_frames_discarded;
+	u32 rx_age_outs;
+};
+
+struct get_att_ctrl_stc {
+	u32 disabled_attns;
+	u32 controllable_attns;
+};
+
+struct trace_filter_stc {
+	u32 level;
+	u32 modules;
+};
 union drv_union_data {
 	struct mcp_mac wol_mac; /* UNLOAD_DONE */
 
-/* This configuration should be set by the driver for the LINK_SET command. */
-
+	/* This configuration should be set by the driver for the LINK_SET command. */
 	struct eth_phy_cfg drv_phy_cfg;
 
 	struct mcp_val64 val64; /* For PHY / AVS commands */
@@ -1215,8 +1456,8 @@ union drv_union_data {
 	u8 raw_data[MCP_DRV_NVM_BUF_LEN];
 
 	struct mcp_file_att file_att;
-
-	u32 ack_vf_disabled[VF_MAX_STATIC / 32];
+	/* extended from VF_MAX_STATIC / 32 (6 dwords) to support 240 VFs (8 dwords) */
+	u32 ack_vf_disabled[EXT_VF_BITMAP_SIZE_IN_DWORDS];
 
 	struct drv_version_stc drv_version;
 
@@ -1229,287 +1470,385 @@ union drv_union_data {
 	struct resource_info resource;
 	struct bist_nvm_image_att nvm_image_att;
 	struct mdump_config_stc mdump_config;
+	struct mcp_mac lldp_mac;
+	struct mcp_wwn fcoe_fabric_name;
 	u32 dword;
 
 	struct load_req_stc load_req;
 	struct load_rsp_stc load_rsp;
 	struct mdump_retain_data_stc mdump_retain;
 	struct attribute_cmd_write_stc attribute_cmd_write;
+	struct lldp_stats_stc lldp_stats;
+	struct pcie_stats_stc pcie_stats;
+
+	struct get_att_ctrl_stc get_att_ctrl;
+	struct fcoe_cap_stc fcoe_cap;
+	struct trace_filter_stc trace_filter;
 	/* ... */
 };
 
 struct public_drv_mb {
 	u32 drv_mb_header;
-#define DRV_MSG_CODE_MASK                       0xffff0000
-#define DRV_MSG_CODE_LOAD_REQ                   0x10000000
-#define DRV_MSG_CODE_LOAD_DONE                  0x11000000
-#define DRV_MSG_CODE_INIT_HW                    0x12000000
-#define DRV_MSG_CODE_CANCEL_LOAD_REQ            0x13000000
-#define DRV_MSG_CODE_UNLOAD_REQ		        0x20000000
-#define DRV_MSG_CODE_UNLOAD_DONE                0x21000000
-#define DRV_MSG_CODE_INIT_PHY			0x22000000
-	/* Params - FORCE - Reinitialize the link regardless of LFA */
-	/*        - DONT_CARE - Don't flap the link if up */
-#define DRV_MSG_CODE_LINK_RESET			0x23000000
-
-#define DRV_MSG_CODE_SET_LLDP                   0x24000000
-#define DRV_MSG_CODE_REGISTER_LLDP_TLVS_RX      0x24100000
-#define DRV_MSG_CODE_SET_DCBX                   0x25000000
-	/* OneView feature driver HSI*/
-#define DRV_MSG_CODE_OV_UPDATE_CURR_CFG		0x26000000
-#define DRV_MSG_CODE_OV_UPDATE_BUS_NUM		0x27000000
-#define DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS	0x28000000
-#define DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER	0x29000000
-#define DRV_MSG_CODE_NIG_DRAIN			0x30000000
-#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE	0x31000000
-#define DRV_MSG_CODE_BW_UPDATE_ACK		0x32000000
-#define DRV_MSG_CODE_OV_UPDATE_MTU		0x33000000
-/* DRV_MB Param: driver version supp, FW_MB param: MFW version supp,
- * data: struct resource_info
- */
-#define DRV_MSG_GET_RESOURCE_ALLOC_MSG		0x34000000
-#define DRV_MSG_SET_RESOURCE_VALUE_MSG		0x35000000
-#define DRV_MSG_CODE_OV_UPDATE_WOL		0x38000000
-#define DRV_MSG_CODE_OV_UPDATE_ESWITCH_MODE	0x39000000
-#define DRV_MSG_CODE_S_TAG_UPDATE_ACK		0x3b000000
-#define DRV_MSG_CODE_OEM_UPDATE_FCOE_CVID	0x3c000000
-#define DRV_MSG_CODE_OEM_UPDATE_FCOE_FABRIC_NAME	0x3d000000
-#define DRV_MSG_CODE_OEM_UPDATE_BOOT_CFG	0x3e000000
-#define DRV_MSG_CODE_OEM_RESET_TO_DEFAULT	0x3f000000
-#define DRV_MSG_CODE_OV_GET_CURR_CFG		0x40000000
-#define DRV_MSG_CODE_GET_OEM_UPDATES		0x41000000
-/* params [31:8] - reserved, [7:0] - bitmap */
-#define DRV_MSG_CODE_GET_PPFID_BITMAP		0x43000000
+#define DRV_MSG_SEQ_NUMBER_MASK			0x0000ffff
+#define DRV_MSG_SEQ_NUMBER_OFFSET		0
+#define DRV_MSG_CODE_MASK			0xffff0000
+#define DRV_MSG_CODE_OFFSET			16
+	u32 drv_mb_param;
 
-/* Param: [0:15] Option ID, [16] - All, [17] - Init, [18] - Commit,
- * [19] - Free
- */
-#define DRV_MSG_CODE_GET_NVM_CFG_OPTION		0x003e0000
-/* Param: [0:15] Option ID,             [17] - Init, [18]       , [19] - Free */
-#define DRV_MSG_CODE_SET_NVM_CFG_OPTION		0x003f0000
-/*deprecated don't use*/
-#define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED    0x02000000
-#define DRV_MSG_CODE_INITIATE_PF_FLR            0x02010000
-#define DRV_MSG_CODE_INITIATE_VF_FLR		0x02020000
-#define DRV_MSG_CODE_VF_DISABLED_DONE           0xc0000000
-#define DRV_MSG_CODE_CFG_VF_MSIX                0xc0010000
-#define DRV_MSG_CODE_CFG_PF_VFS_MSIX            0xc0020000
+	u32 fw_mb_header;
+#define FW_MSG_SEQ_NUMBER_MASK			0x0000ffff
+#define FW_MSG_SEQ_NUMBER_OFFSET		0
+#define FW_MSG_CODE_MASK			0xffff0000
+#define FW_MSG_CODE_OFFSET			16
+	u32 fw_mb_param;
+
+	u32 drv_pulse_mb;
+#define DRV_PULSE_SEQ_MASK					0x00007fff
+#define DRV_PULSE_SYSTEM_TIME_MASK			0xffff0000
+	/*
+	 * The system time is in the format of
+	 * (year-2001)*12*32 + month*32 + day.
+	 */
+#define DRV_PULSE_ALWAYS_ALIVE				0x00008000
+	/*
+	 * Indicate to the firmware not to go into the
+	 * OS-absent when it is not getting driver pulse.
+	 * This is used for debugging as well for PXE(MBA).
+	 */
+
+	u32 mcp_pulse_mb;
+#define MCP_PULSE_SEQ_MASK					0x00007fff
+#define MCP_PULSE_ALWAYS_ALIVE				0x00008000
+	/* Indicates to the driver not to assert due to lack
+	 * of MCP response
+	 */
+#define MCP_EVENT_MASK						0xffff0000
+#define MCP_EVENT_OTHER_DRIVER_RESET_REQ		0x00010000
+
+	/* The union data is used by the driver to pass parameters to the scratchpad. */
+	union drv_union_data union_data;
+
+};
+/***************************************************************/
+/* Driver Message Code (Request) */
+/***************************************************************/
+#define DRV_MSG_CODE(_code_) ((_code_) << DRV_MSG_CODE_OFFSET)
+enum drv_msg_code_enum {
 /* Param is either DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MFW/IMAGE */
-#define DRV_MSG_CODE_NVM_PUT_FILE_BEGIN		0x00010000
+	DRV_MSG_CODE_NVM_PUT_FILE_BEGIN		= DRV_MSG_CODE(0x0001),
 /* Param should be set to the transaction size (up to 64 bytes) */
-#define DRV_MSG_CODE_NVM_PUT_FILE_DATA		0x00020000
+	DRV_MSG_CODE_NVM_PUT_FILE_DATA		= DRV_MSG_CODE(0x0002),
 /* MFW will place the file offset and len in file_att struct */
-#define DRV_MSG_CODE_NVM_GET_FILE_ATT		0x00030000
-/* Read 32bytes of nvram data. Param is [0:23] ??? Offset [24:31] -
- * ??? Len in Bytes
+	DRV_MSG_CODE_NVM_GET_FILE_ATT		= DRV_MSG_CODE(0x0003),
+/* Read 32 bytes of nvram data. Param is [0:23] - Offset, [24:31] - Len in Bytes */
+	DRV_MSG_CODE_NVM_READ_NVRAM		= DRV_MSG_CODE(0x0005),
+/* Writes up to 32 bytes to nvram. Param is [0:23] - Offset, [24:31] - Len in Bytes. In case this
+ * address is in the range of a secured file in secured mode, the operation will fail
  */
-#define DRV_MSG_CODE_NVM_READ_NVRAM		0x00050000
-/* Writes up to 32Bytes to nvram. Param is [0:23] ??? Offset [24:31]
- * ??? Len in Bytes. In case this address is in the range of secured file in
- * secured mode, the operation will fail
- */
-#define DRV_MSG_CODE_NVM_WRITE_NVRAM		0x00060000
+	DRV_MSG_CODE_NVM_WRITE_NVRAM		= DRV_MSG_CODE(0x0006),
 /* Delete a file from nvram. Param is image_type. */
-#define DRV_MSG_CODE_NVM_DEL_FILE		0x00080000
-/* Reset MCP when no NVM operation is going on, and no drivers are loaded.
- * In case operation succeed, MCP will not ack back.
- */
-#define DRV_MSG_CODE_MCP_RESET			0x00090000
-/* Temporary command to set secure mode, where the param is 0 (None secure) /
- * 1 (Secure) / 2 (Full-Secure)
+	DRV_MSG_CODE_NVM_DEL_FILE		= DRV_MSG_CODE(0x0008),
+/* Reset MCP when no NVM operation is going on, and no drivers are loaded. If the operation
+ * succeeds, MCP will not ack back.
  */
-#define DRV_MSG_CODE_SET_SECURE_MODE		0x000a0000
-/* Param: [0:15] - Address, [16:18] - lane# (0/1/2/3 - for single lane,
- * 4/5 - for dual lanes, 6 - for all lanes, [28] - PMD reg, [29] - select port,
- * [30:31] - port
+	DRV_MSG_CODE_MCP_RESET			= DRV_MSG_CODE(0x0009),
+/* Param: [0:15] - Address, [16:18] - lane# (0/1/2/3 - for single lane, 4/5 - for dual lanes, 6 -
+ * for all lanes, [28] - PMD reg, [29] - select port, [30:31] - port
  */
-#define DRV_MSG_CODE_PHY_RAW_READ		0x000b0000
-/* Param: [0:15] - Address, [16:18] - lane# (0/1/2/3 - for single lane,
- * 4/5 - for dual lanes, 6 - for all lanes, [28] - PMD reg, [29] - select port,
- * [30:31] - port
+	DRV_MSG_CODE_PHY_RAW_READ		= DRV_MSG_CODE(0x000b),
+/* Param: [0:15] - Address, [16:18] - lane# (0/1/2/3 - for single lane, 4/5 - for dual lanes, 6 -
+ * for all lanes, [28] - PMD reg, [29] - select port, [30:31] - port
  */
-#define DRV_MSG_CODE_PHY_RAW_WRITE		0x000c0000
-/* Param: [0:15] - Address, [30:31] - port */
-#define DRV_MSG_CODE_PHY_CORE_READ		0x000d0000
-/* Param: [0:15] - Address, [30:31] - port */
-#define DRV_MSG_CODE_PHY_CORE_WRITE		0x000e0000
+	DRV_MSG_CODE_PHY_RAW_WRITE		= DRV_MSG_CODE(0x000c),
+	/* Param: [0:15] - Address, [30:31] - port */
+	DRV_MSG_CODE_PHY_CORE_READ		= DRV_MSG_CODE(0x000d),
+	/* Param: [0:15] - Address, [30:31] - port */
+	DRV_MSG_CODE_PHY_CORE_WRITE		= DRV_MSG_CODE(0x000e),
 /* Param: [0:3] - version, [4:15] - name (null terminated) */
-#define DRV_MSG_CODE_SET_VERSION		0x000f0000
-#define DRV_MSG_CODE_MCP_RESET_FORCE		0x000f04ce
-/* Halts the MCP. To resume MCP, user will need to use
- * MCP_REG_CPU_STATE/MCP_REG_CPU_MODE registers.
+	DRV_MSG_CODE_SET_VERSION		= DRV_MSG_CODE(0x000f),
+/* Halts the MCP. To resume MCP, user will need to use MCP_REG_CPU_STATE/MCP_REG_CPU_MODE
+ * registers.
  */
-#define DRV_MSG_CODE_MCP_HALT			0x00100000
-/* Set virtual mac address, params [31:6] - reserved, [5:4] - type,
- * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
+	DRV_MSG_CODE_MCP_HALT			= DRV_MSG_CODE(0x0010),
+/* Set virtual mac address, params [31:6] - reserved, [5:4] - type, [3:0] - func, drv_data[7:0] -
+ * MAC/WWNN/WWPN
  */
-#define DRV_MSG_CODE_SET_VMAC                   0x00110000
-/* Set virtual mac address, params [31:6] - reserved, [5:4] - type,
- * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
+	DRV_MSG_CODE_SET_VMAC			= DRV_MSG_CODE(0x0011),
+/* Get virtual mac address, params [31:6] - reserved, [5:4] - type, [3:0] - func, drv_data[7:0] -
+ * MAC/WWNN/WWPN
  */
-#define DRV_MSG_CODE_GET_VMAC                   0x00120000
-#define DRV_MSG_CODE_VMAC_TYPE_OFFSET		4
-#define DRV_MSG_CODE_VMAC_TYPE_MASK             0x30
-#define DRV_MSG_CODE_VMAC_TYPE_MAC              1
-#define DRV_MSG_CODE_VMAC_TYPE_WWNN             2
-#define DRV_MSG_CODE_VMAC_TYPE_WWPN             3
+	DRV_MSG_CODE_GET_VMAC			= DRV_MSG_CODE(0x0012),
 /* Get statistics from pf, params [31:4] - reserved, [3:0] - stats type */
-#define DRV_MSG_CODE_GET_STATS                  0x00130000
-#define DRV_MSG_CODE_STATS_TYPE_LAN             1
-#define DRV_MSG_CODE_STATS_TYPE_FCOE            2
-#define DRV_MSG_CODE_STATS_TYPE_ISCSI           3
-#define DRV_MSG_CODE_STATS_TYPE_RDMA            4
-/* Host shall provide buffer and size for MFW  */
-#define DRV_MSG_CODE_PMD_DIAG_DUMP		0x00140000
-/* Host shall provide buffer and size for MFW  */
-#define DRV_MSG_CODE_PMD_DIAG_EYE		0x00150000
-/* Param: [0:1] - Port, [2:7] - read size, [8:15] - I2C address,
- * [16:31] - offset
- */
-#define DRV_MSG_CODE_TRANSCEIVER_READ		0x00160000
-/* Param: [0:1] - Port, [2:7] - write size, [8:15] - I2C address,
- * [16:31] - offset
- */
-#define DRV_MSG_CODE_TRANSCEIVER_WRITE		0x00170000
+	DRV_MSG_CODE_GET_STATS			= DRV_MSG_CODE(0x0013),
+/* Host shall provide buffer and size for MFW */
+	DRV_MSG_CODE_PMD_DIAG_DUMP		= DRV_MSG_CODE(0x0014),
+/* Host shall provide buffer and size for MFW */
+	DRV_MSG_CODE_PMD_DIAG_EYE		= DRV_MSG_CODE(0x0015),
+/* Param: [0:1] - Port, [2:7] - read size, [8:15] - I2C address, [16:31] - offset */
+	DRV_MSG_CODE_TRANSCEIVER_READ		= DRV_MSG_CODE(0x0016),
+/* Param: [0:1] - Port, [2:7] - write size, [8:15] - I2C address, [16:31] - offset */
+	DRV_MSG_CODE_TRANSCEIVER_WRITE		= DRV_MSG_CODE(0x0017),
 /* indicate OCBB related information */
-#define DRV_MSG_CODE_OCBB_DATA			0x00180000
+	DRV_MSG_CODE_OCBB_DATA			= DRV_MSG_CODE(0x0018),
 /* Set function BW, params[15:8] - min, params[7:0] - max */
-#define DRV_MSG_CODE_SET_BW			0x00190000
+	DRV_MSG_CODE_SET_BW			= DRV_MSG_CODE(0x0019),
+/* When param is set to 1, all parities will be masked (disabled). When param is set to 0, parities
+ * will be unmasked again.
+ */
+	DRV_MSG_CODE_MASK_PARITIES		= DRV_MSG_CODE(0x001a),
+/* param[0] - Simulate fan failure, param[1] - simulate over temp. */
+	DRV_MSG_CODE_INDUCE_FAILURE		= DRV_MSG_CODE(0x001b),
+/* Param: [0:15] - gpio number */
+	DRV_MSG_CODE_GPIO_READ			= DRV_MSG_CODE(0x001c),
+/* Param: [0:15] - gpio number, [16:31] - gpio value */
+	DRV_MSG_CODE_GPIO_WRITE			= DRV_MSG_CODE(0x001d),
+/* Param: [0:7] - test enum, [8:15] - image index, [16:31] - reserved */
+	DRV_MSG_CODE_BIST_TEST			= DRV_MSG_CODE(0x001e),
+	DRV_MSG_CODE_GET_TEMPERATURE		= DRV_MSG_CODE(0x001f),
+/* Set LED mode. Params: 0 - operational, 1 - LED turn ON, 2 - LED turn OFF */
+	DRV_MSG_CODE_SET_LED_MODE		= DRV_MSG_CODE(0x0020),
+/* drv_data[7:0] - epoch in seconds, drv_data[15:8] - driver version (MAJ MIN BUILD SUB) */
+	DRV_MSG_CODE_TIMESTAMP			= DRV_MSG_CODE(0x0021),
+/* This is an empty mailbox; just return OK */
+	DRV_MSG_CODE_EMPTY_MB			= DRV_MSG_CODE(0x0022),
+/* Param[0:4] - resource number (0-31), Param[5:7] - opcode, param[15:8] - age */
+	DRV_MSG_CODE_RESOURCE_CMD		= DRV_MSG_CODE(0x0023),
+	DRV_MSG_CODE_GET_MBA_VERSION		= DRV_MSG_CODE(0x0024), /* Get MBA version */
+/* Send crash dump commands with param[3:0] - opcode */
+	DRV_MSG_CODE_MDUMP_CMD			= DRV_MSG_CODE(0x0025),
+	DRV_MSG_CODE_MEM_ECC_EVENTS		= DRV_MSG_CODE(0x0026), /* Param: None */
+/* Param: [0:15] - gpio number */
+	DRV_MSG_CODE_GPIO_INFO			= DRV_MSG_CODE(0x0027),
+/* Value will be placed in union */
+	DRV_MSG_CODE_EXT_PHY_READ		= DRV_MSG_CODE(0x0028),
+/* Value should be placed in union */
+	DRV_MSG_CODE_EXT_PHY_WRITE		= DRV_MSG_CODE(0x0029),
+	DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		= DRV_MSG_CODE(0x002a),
+	DRV_MSG_CODE_GET_PF_RDMA_PROTOCOL	= DRV_MSG_CODE(0x002b),
+	DRV_MSG_CODE_SET_LLDP_MAC		= DRV_MSG_CODE(0x002c),
+	DRV_MSG_CODE_GET_LLDP_MAC		= DRV_MSG_CODE(0x002d),
+	DRV_MSG_CODE_OS_WOL			= DRV_MSG_CODE(0x002e),
+	DRV_MSG_CODE_GET_TLV_DONE		= DRV_MSG_CODE(0x002f), /* Param: None */
+/* Param: Set DRV_MB_PARAM_FEATURE_SUPPORT_* */
+	DRV_MSG_CODE_FEATURE_SUPPORT		= DRV_MSG_CODE(0x0030),
+/* return FW_MB_PARAM_FEATURE_SUPPORT_* */
+	DRV_MSG_CODE_GET_MFW_FEATURE_SUPPORT	= DRV_MSG_CODE(0x0031),
+	DRV_MSG_CODE_READ_WOL_REG		= DRV_MSG_CODE(0x0032),
+	DRV_MSG_CODE_WRITE_WOL_REG		= DRV_MSG_CODE(0x0033),
+	DRV_MSG_CODE_GET_WOL_BUFFER		= DRV_MSG_CODE(0x0034),
+/* Param: [0:23] Attribute key, [24:31] Attribute sub command */
+	DRV_MSG_CODE_ATTRIBUTE		= DRV_MSG_CODE(0x0035),
+/* Param: Password len. Union: Plain Password */
+	DRV_MSG_CODE_ENCRYPT_PASSWORD		= DRV_MSG_CODE(0x0036),
+	DRV_MSG_CODE_GET_ENGINE_CONFIG		= DRV_MSG_CODE(0x0037), /* Param: None */
+/* Param: [0:7] - Cmd, [8:9] - len */
+	DRV_MSG_CODE_PMBUS_READ			= DRV_MSG_CODE(0x0038),
+/* Param: [0:7] - Cmd, [8:9] - len, [16:31] - data */
+	DRV_MSG_CODE_PMBUS_WRITE		= DRV_MSG_CODE(0x0039),
+	DRV_MSG_CODE_GENERIC_IDC		= DRV_MSG_CODE(0x003a),
+/* If param = 0, it triggers engine reset. If param = 1, it triggers soft_reset */
+	DRV_MSG_CODE_RESET_CHIP			= DRV_MSG_CODE(0x003b),
+	DRV_MSG_CODE_SET_RETAIN_VMAC		= DRV_MSG_CODE(0x003c),
+	DRV_MSG_CODE_GET_RETAIN_VMAC		= DRV_MSG_CODE(0x003d),
+/* Param: [0:15] Option ID, [16] - All, [17] - Init, [18] - Commit, [19] - Free, [20] -
+ * SelectEntity, [24:27] - EntityID
+ */
+	DRV_MSG_CODE_GET_NVM_CFG_OPTION		= DRV_MSG_CODE(0x003e),
+/* Param: [0:15] Option ID, [17] - Init, [18] - Commit, [19] - Free, [20] - SelectEntity,
+ * [24:27] - EntityID
+ */
+	DRV_MSG_CODE_SET_NVM_CFG_OPTION		= DRV_MSG_CODE(0x003f),
+	/* param determines the timeout in usec */
+	DRV_MSG_CODE_PCIE_STATS_START		= DRV_MSG_CODE(0x0040),
+/* MFW returns the stats (struct pcie_stats_stc pcie_stats) gathered in fw_param */
+	DRV_MSG_CODE_PCIE_STATS_GET		= DRV_MSG_CODE(0x0041),
+	DRV_MSG_CODE_GET_ATTN_CONTROL		= DRV_MSG_CODE(0x0042),
+	DRV_MSG_CODE_SET_ATTN_CONTROL		= DRV_MSG_CODE(0x0043),
+/* Param is the struct size. Arguments are in trace_filter_stc (raw data) */
+	DRV_MSG_CODE_SET_TRACE_FILTER		= DRV_MSG_CODE(0x0044),
+/* No params. Restores the trace level to the nvm cfg */
+	DRV_MSG_CODE_RESTORE_TRACE_FILTER	= DRV_MSG_CODE(0x0045),
+	DRV_MSG_CODE_INITIATE_FLR_DEPRECATED	= DRV_MSG_CODE(0x0200), /* deprecated, don't use */
+	DRV_MSG_CODE_INITIATE_PF_FLR		= DRV_MSG_CODE(0x0201),
+	DRV_MSG_CODE_INITIATE_VF_FLR		= DRV_MSG_CODE(0x0202),
+	DRV_MSG_CODE_LOAD_REQ			= DRV_MSG_CODE(0x1000),
+	DRV_MSG_CODE_LOAD_DONE			= DRV_MSG_CODE(0x1100),
+	DRV_MSG_CODE_INIT_HW			= DRV_MSG_CODE(0x1200),
+	DRV_MSG_CODE_CANCEL_LOAD_REQ		= DRV_MSG_CODE(0x1300),
+	DRV_MSG_CODE_UNLOAD_REQ			= DRV_MSG_CODE(0x2000),
+	DRV_MSG_CODE_UNLOAD_DONE		= DRV_MSG_CODE(0x2100),
+/* Params: FORCE - Reinitialize the link regardless of LFA; DONT_CARE - Don't flap the link if up
+ */
+	DRV_MSG_CODE_INIT_PHY			= DRV_MSG_CODE(0x2200),
+	DRV_MSG_CODE_LINK_RESET			= DRV_MSG_CODE(0x2300),
+	DRV_MSG_CODE_SET_LLDP			= DRV_MSG_CODE(0x2400),
+	DRV_MSG_CODE_REGISTER_LLDP_TLVS_RX	= DRV_MSG_CODE(0x2410),
+/* OneView feature driver HSI */
+	DRV_MSG_CODE_SET_DCBX			= DRV_MSG_CODE(0x2500),
+	DRV_MSG_CODE_OV_UPDATE_CURR_CFG		= DRV_MSG_CODE(0x2600),
+	DRV_MSG_CODE_OV_UPDATE_BUS_NUM		= DRV_MSG_CODE(0x2700),
+	DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS	= DRV_MSG_CODE(0x2800),
+	DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER	= DRV_MSG_CODE(0x2900),
+	DRV_MSG_CODE_NIG_DRAIN			= DRV_MSG_CODE(0x3000),
+	DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE	= DRV_MSG_CODE(0x3100),
+	DRV_MSG_CODE_BW_UPDATE_ACK		= DRV_MSG_CODE(0x3200),
+	DRV_MSG_CODE_OV_UPDATE_MTU		= DRV_MSG_CODE(0x3300),
+/* DRV_MB Param: driver version supp, FW_MB param: MFW version supp, data: struct resource_info */
+	DRV_MSG_GET_RESOURCE_ALLOC_MSG		= DRV_MSG_CODE(0x3400),
+	DRV_MSG_SET_RESOURCE_VALUE_MSG		= DRV_MSG_CODE(0x3500),
+	DRV_MSG_CODE_OV_UPDATE_WOL		= DRV_MSG_CODE(0x3800),
+	DRV_MSG_CODE_OV_UPDATE_ESWITCH_MODE	= DRV_MSG_CODE(0x3900),
+	DRV_MSG_CODE_S_TAG_UPDATE_ACK		= DRV_MSG_CODE(0x3b00),
+	DRV_MSG_CODE_OEM_UPDATE_FCOE_CVID	= DRV_MSG_CODE(0x3c00),
+	DRV_MSG_CODE_OEM_UPDATE_FCOE_FABRIC_NAME = DRV_MSG_CODE(0x3d00),
+	DRV_MSG_CODE_OEM_UPDATE_BOOT_CFG	= DRV_MSG_CODE(0x3e00),
+	DRV_MSG_CODE_OEM_RESET_TO_DEFAULT	= DRV_MSG_CODE(0x3f00),
+	DRV_MSG_CODE_OV_GET_CURR_CFG		= DRV_MSG_CODE(0x4000),
+	DRV_MSG_CODE_GET_OEM_UPDATES		= DRV_MSG_CODE(0x4100),
+	DRV_MSG_CODE_GET_LLDP_STATS		= DRV_MSG_CODE(0x4200),
+/* params [31:8] - reserved, [7:0] - bitmap */
+	DRV_MSG_CODE_GET_PPFID_BITMAP		= DRV_MSG_CODE(0x4300),
+	DRV_MSG_CODE_VF_DISABLED_DONE		= DRV_MSG_CODE(0xc000),
+	DRV_MSG_CODE_CFG_VF_MSIX		= DRV_MSG_CODE(0xc001),
+	DRV_MSG_CODE_CFG_PF_VFS_MSIX		= DRV_MSG_CODE(0xc002),
+/* get permanent mac - params: [31:24] - reserved, [23:8] - index, [7:4] - reserved, [3:0] - type;
+ * driver_data[7:0] - mac
+ */
+	DRV_MSG_CODE_GET_PERM_MAC		= DRV_MSG_CODE(0xc003),
+/* Send driver debug data of up to 32 bytes. Params: [7:0] - size (< 32); data in the union part */
+	DRV_MSG_CODE_DEBUG_DATA_SEND		= DRV_MSG_CODE(0xc004),
+/* get fcoe capabilities from driver */
+	DRV_MSG_CODE_GET_FCOE_CAP		= DRV_MSG_CODE(0xc005),
+	/* params [7:0] - VFID, [8] - set/clear */
+	DRV_MSG_CODE_VF_WITH_MORE_16SB		= DRV_MSG_CODE(0xc006),
+	/* return FW_MB_MANAGEMENT_STATUS */
+	DRV_MSG_CODE_GET_MANAGEMENT_STATUS	= DRV_MSG_CODE(0xc007),
+};
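
As a hedged usage sketch (illustrative only, not part of this patch): the enum values above are
already shifted into bits [31:16] by DRV_MSG_CODE(), so a request header is just the code OR'd
with the driver's rolling sequence number in bits [15:0]:

    /* Illustrative sketch only - compose a drv_mb_header for a request */
    static inline u32 drv_mb_compose(u32 drv_msg_code, u16 seq)
    {
    	return (drv_msg_code & DRV_MSG_CODE_MASK) |
    	       (((u32)seq << DRV_MSG_SEQ_NUMBER_OFFSET) & DRV_MSG_SEQ_NUMBER_MASK);
    }

    /* e.g. drv_mb_compose(DRV_MSG_CODE_LOAD_REQ, seq) */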
+
+/* DRV_MSG_CODE_VMAC_TYPE parameters */
+#define DRV_MSG_CODE_VMAC_TYPE_OFFSET		4
+#define DRV_MSG_CODE_VMAC_TYPE_MASK		0x30
+#define DRV_MSG_CODE_VMAC_TYPE_MAC		1
+#define DRV_MSG_CODE_VMAC_TYPE_WWNN		2
+#define DRV_MSG_CODE_VMAC_TYPE_WWPN		3
+
+/* DRV_MSG_CODE_RETAIN_VMAC parameters */
+#define DRV_MSG_CODE_RETAIN_VMAC_FUNC_OFFSET	0
+#define DRV_MSG_CODE_RETAIN_VMAC_FUNC_MASK		0xf
+
+#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_OFFSET	4
+#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_MASK		0x70
+#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_L2		0
+#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_ISCSI		1
+#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_FCOE	2
+#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_WWNN		3
+#define DRV_MSG_CODE_RETAIN_VMAC_TYPE_WWPN		4
+
+#define DRV_MSG_CODE_MCP_RESET_FORCE		0xf04ce
+/* DRV_MSG_CODE_GET_STATS parameters */
+#define DRV_MSG_CODE_STATS_TYPE_LAN		1
+#define DRV_MSG_CODE_STATS_TYPE_FCOE		2
+#define DRV_MSG_CODE_STATS_TYPE_ISCSI		3
+#define DRV_MSG_CODE_STATS_TYPE_RDMA		4
+
+/* DRV_MSG_CODE_SET_BW parameters */
 #define BW_MAX_MASK				0x000000ff
 #define BW_MAX_OFFSET				0
 #define BW_MIN_MASK				0x0000ff00
 #define BW_MIN_OFFSET				8
 
-/* When param is set to 1, all parities will be masked(disabled). When params
- * are set to 0, parities will be unmasked again.
- */
-#define DRV_MSG_CODE_MASK_PARITIES		0x001a0000
-/* param[0] - Simulate fan failure,  param[1] - simulate over temp. */
-#define DRV_MSG_CODE_INDUCE_FAILURE		0x001b0000
+/* DRV_MSG_CODE_INDUCE_FAILURE parameters */
 #define DRV_MSG_FAN_FAILURE_TYPE		(1 << 0)
 #define DRV_MSG_TEMPERATURE_FAILURE_TYPE	(1 << 1)
-/* Param: [0:15] - gpio number */
-#define DRV_MSG_CODE_GPIO_READ			0x001c0000
-/* Param: [0:15] - gpio number, [16:31] - gpio value */
-#define DRV_MSG_CODE_GPIO_WRITE			0x001d0000
-/* Param: [0:7] - test enum, [8:15] - image index, [16:31] - reserved */
-#define DRV_MSG_CODE_BIST_TEST			0x001e0000
-#define DRV_MSG_CODE_GET_TEMPERATURE            0x001f0000
-
-/* Set LED mode  params :0 operational, 1 LED turn ON, 2 LED turn OFF */
-#define DRV_MSG_CODE_SET_LED_MODE		0x00200000
-/* drv_data[7:0] - EPOC in seconds, drv_data[15:8] -
- * driver version (MAJ MIN BUILD SUB)
- */
-#define DRV_MSG_CODE_TIMESTAMP                  0x00210000
-/* This is an empty mailbox just return OK*/
-#define DRV_MSG_CODE_EMPTY_MB			0x00220000
-
-/* Param[0:4] - resource number (0-31), Param[5:7] - opcode,
- * param[15:8] - age
- */
-#define DRV_MSG_CODE_RESOURCE_CMD		0x00230000
 
+/* DRV_MSG_CODE_RESOURCE_CMD parameters */
 #define RESOURCE_CMD_REQ_RESC_MASK		0x0000001F
 #define RESOURCE_CMD_REQ_RESC_OFFSET		0
 #define RESOURCE_CMD_REQ_OPCODE_MASK		0x000000E0
 #define RESOURCE_CMD_REQ_OPCODE_OFFSET		5
 /* request resource ownership with default aging */
 #define RESOURCE_OPCODE_REQ			1
-/* request resource ownership without aging */
-#define RESOURCE_OPCODE_REQ_WO_AGING		2
+#define RESOURCE_OPCODE_REQ_WO_AGING		2 /* request resource ownership without aging */
 /* request resource ownership with specific aging timer (in seconds) */
 #define RESOURCE_OPCODE_REQ_W_AGING		3
 #define RESOURCE_OPCODE_RELEASE			4 /* release resource */
-/* force resource release */
-#define RESOURCE_OPCODE_FORCE_RELEASE		5
+#define RESOURCE_OPCODE_FORCE_RELEASE		5 /* force resource release */
 #define RESOURCE_CMD_REQ_AGE_MASK		0x0000FF00
 #define RESOURCE_CMD_REQ_AGE_OFFSET		8
-
 #define RESOURCE_CMD_RSP_OWNER_MASK		0x000000FF
 #define RESOURCE_CMD_RSP_OWNER_OFFSET		0
 #define RESOURCE_CMD_RSP_OPCODE_MASK		0x00000700
 #define RESOURCE_CMD_RSP_OPCODE_OFFSET		8
-/* resource is free and granted to requester */
-#define RESOURCE_OPCODE_GNT			1
-/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15,
- * 16 = MFW, 17 = diag over serial
+#define RESOURCE_OPCODE_GNT			1 /* resource is free and granted to requester */
+/* resource is busy, param[7:0] indicates owner as follow 0-15 = PF0-15, 16 = MFW, 17 = diag over
+ * serial
  */
 #define RESOURCE_OPCODE_BUSY			2
-/* indicate release request was acknowledged */
-#define RESOURCE_OPCODE_RELEASED		3
+#define RESOURCE_OPCODE_RELEASED		3 /* indicate release request was acknowledged */
 /* indicate release request was previously received by other owner */
 #define RESOURCE_OPCODE_RELEASED_PREVIOUS	4
-/* indicate wrong owner during release */
-#define RESOURCE_OPCODE_WRONG_OWNER		5
+#define RESOURCE_OPCODE_WRONG_OWNER		5 /* indicate wrong owner during release */
 #define RESOURCE_OPCODE_UNKNOWN_CMD		255
+#define RESOURCE_DUMP				0 /* dedicate resource 0 for dump */
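
A hedged sketch of how the request fields above compose a RESOURCE_CMD parameter (resource
number and aging value are made up for illustration):

    /* Illustrative sketch only - request ownership of resource 5 with 10s aging */
    u32 param = ((5 << RESOURCE_CMD_REQ_RESC_OFFSET) & RESOURCE_CMD_REQ_RESC_MASK) |
    	    ((RESOURCE_OPCODE_REQ_W_AGING << RESOURCE_CMD_REQ_OPCODE_OFFSET) &
    	     RESOURCE_CMD_REQ_OPCODE_MASK) |
    	    ((10 << RESOURCE_CMD_REQ_AGE_OFFSET) & RESOURCE_CMD_REQ_AGE_MASK);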
 
-/* dedicate resource 0 for dump */
-#define RESOURCE_DUMP				0
-
-#define DRV_MSG_CODE_GET_MBA_VERSION		0x00240000 /* Get MBA version */
-/* Send crash dump commands with param[3:0] - opcode */
-#define DRV_MSG_CODE_MDUMP_CMD			0x00250000
-#define MDUMP_DRV_PARAM_OPCODE_MASK		0x0000000f
+/* DRV_MSG_CODE_MDUMP_CMD parameters */
+#define MDUMP_DRV_PARAM_OPCODE_MASK		0x000000ff
 /* acknowledge reception of error indication */
-#define DRV_MSG_CODE_MDUMP_ACK			0x01
-/* set epoc and personality as follow: drv_data[3:0] - epoch,
- * drv_data[7:4] - personality
- */
+#define DRV_MSG_CODE_MDUMP_ACK				0x01
+/* set epoc and personality as follow: drv_data[3:0] - epoch, drv_data[7:4] - personality */
 #define DRV_MSG_CODE_MDUMP_SET_VALUES		0x02
-/* trigger crash dump procedure */
-#define DRV_MSG_CODE_MDUMP_TRIGGER		0x03
-/* Request valid logs and config words */
-#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04
-/* Set triggers mask. drv_mb_param should indicate (bitwise) which
- * trigger enabled
- */
+#define DRV_MSG_CODE_MDUMP_TRIGGER			0x03 /* trigger crash dump procedure */
+#define DRV_MSG_CODE_MDUMP_GET_CONFIG		0x04 /* Request valid logs and config words */
+/* Set triggers mask. drv_mb_param should indicate (bitwise) which trigger enabled */
 #define DRV_MSG_CODE_MDUMP_SET_ENABLE		0x05
-/* Clear all logs */
-#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06
+#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS		0x06 /* Clear all logs */
 #define DRV_MSG_CODE_MDUMP_GET_RETAIN		0x07 /* Get retained data */
 #define DRV_MSG_CODE_MDUMP_CLR_RETAIN		0x08 /* Clear retain data */
-#define DRV_MSG_CODE_MEM_ECC_EVENTS		0x00260000 /* Param: None */
-/* Param: [0:15] - gpio number */
-#define DRV_MSG_CODE_GPIO_INFO			0x00270000
-/* Value will be placed in union */
-#define DRV_MSG_CODE_EXT_PHY_READ		0x00280000
-/* Value should be placed in union */
-#define DRV_MSG_CODE_EXT_PHY_WRITE		0x00290000
-#define DRV_MB_PARAM_ADDR_OFFSET			0
+/* hw_dump parameters */
+#define DRV_MSG_CODE_HW_DUMP_TRIGGER		0x0a /* Trigger HW dump */
+/* Free Link_Dump / Idle_check buffer after the driver has consumed it */
+#define DRV_MSG_CODE_MDUMP_FREE_DRIVER_BUF	0x0b
+/* Generate debug data for scripts such as wol_dump, link_dump and phy_dump. Data will be
+ * available at the offset (offsize) returned in fw_param for 6 seconds
+ */
+#define DRV_MSG_CODE_MDUMP_GEN_LINK_DUMP	0x0c
+/* Generate debug data for scripts such as idle_chk */
+#define DRV_MSG_CODE_MDUMP_GEN_IDLE_CHK		0x0d
+
+/* DRV_MSG_CODE_MDUMP_CMD options */
+#define MDUMP_DRV_PARAM_OPTION_MASK		0x00000f00
+/* Driver will not use debug data (GEN_LINK_DUMP, GEN_IDLE_CHK); free the buffer immediately */
+#define DRV_MSG_CODE_MDUMP_USE_DRIVER_BUF_OFFSET	8
+#define DRV_MSG_CODE_MDUMP_USE_DRIVER_BUF_MASK	0x100
+
+/* DRV_MSG_CODE_EXT_PHY_READ/DRV_MSG_CODE_EXT_PHY_WRITE parameters */
+#define DRV_MB_PARAM_ADDR_OFFSET		0
 #define DRV_MB_PARAM_ADDR_MASK			0x0000FFFF
 #define DRV_MB_PARAM_DEVAD_OFFSET		16
 #define DRV_MB_PARAM_DEVAD_MASK			0x001F0000
-#define DRV_MB_PARAM_PORT_OFFSET			21
+#define DRV_MB_PARAM_PORT_OFFSET		21
 #define DRV_MB_PARAM_PORT_MASK			0x00600000
-#define DRV_MSG_CODE_EXT_PHY_FW_UPGRADE		0x002a0000
-
-#define DRV_MSG_CODE_GET_TLV_DONE		0x002f0000 /* Param: None */
-/* Param: Set DRV_MB_PARAM_FEATURE_SUPPORT_* */
-#define DRV_MSG_CODE_FEATURE_SUPPORT            0x00300000
-/* return FW_MB_PARAM_FEATURE_SUPPORT_*  */
-#define DRV_MSG_CODE_GET_MFW_FEATURE_SUPPORT	0x00310000
-#define DRV_MSG_CODE_READ_WOL_REG		0X00320000
-#define DRV_MSG_CODE_WRITE_WOL_REG		0X00330000
-#define DRV_MSG_CODE_GET_WOL_BUFFER		0X00340000
-/* Param: [0:23] Attribute key, [24:31] Attribute sub command */
-#define DRV_MSG_CODE_ATTRIBUTE			0x00350000
 
-/* Param: Password len. Union: Plain Password */
-#define DRV_MSG_CODE_ENCRYPT_PASSWORD		0x00360000
-#define DRV_MSG_CODE_GET_ENGINE_CONFIG		0x00370000 /* Param: None */
-
-#define DRV_MSG_SEQ_NUMBER_MASK                 0x0000ffff
+/* DRV_MSG_CODE_PMBUS_READ/DRV_MSG_CODE_PMBUS_WRITE parameters */
+#define DRV_MB_PARAM_PMBUS_CMD_OFFSET		0
+#define DRV_MB_PARAM_PMBUS_CMD_MASK		0xFF
+#define DRV_MB_PARAM_PMBUS_LEN_OFFSET		8
+#define DRV_MB_PARAM_PMBUS_LEN_MASK		0x300
+#define DRV_MB_PARAM_PMBUS_DATA_OFFSET		16
+#define DRV_MB_PARAM_PMBUS_DATA_MASK		0xFFFF0000
 
-	u32 drv_mb_param;
 	/* UNLOAD_REQ params */
-#define DRV_MB_PARAM_UNLOAD_WOL_UNKNOWN         0x00000000
+#define DRV_MB_PARAM_UNLOAD_WOL_UNKNOWN		0x00000000
 #define DRV_MB_PARAM_UNLOAD_WOL_MCP		0x00000001
-#define DRV_MB_PARAM_UNLOAD_WOL_DISABLED        0x00000002
-#define DRV_MB_PARAM_UNLOAD_WOL_ENABLED         0x00000003
+#define DRV_MB_PARAM_UNLOAD_WOL_DISABLED	0x00000002
+#define DRV_MB_PARAM_UNLOAD_WOL_ENABLED		0x00000003
 
 	/* UNLOAD_DONE_params */
-#define DRV_MB_PARAM_UNLOAD_NON_D3_POWER        0x00000001
+#define DRV_MB_PARAM_UNLOAD_NON_D3_POWER	0x00000001
 
 	/* INIT_PHY params */
 #define DRV_MB_PARAM_INIT_PHY_FORCE		0x00000001
@@ -1534,8 +1873,11 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_NIG_DRAIN_PERIOD_MS_MASK	0x000000FF
 #define DRV_MB_PARAM_NIG_DRAIN_PERIOD_MS_OFFSET	0
 
+#define DRV_MB_PARAM_NVM_PUT_FILE_TYPE_MASK	0x000000ff
+#define DRV_MB_PARAM_NVM_PUT_FILE_TYPE_OFFSET	0
 #define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MFW	0x1
 #define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_IMAGE	0x2
+#define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MBI	0x3
 
 #define DRV_MB_PARAM_NVM_OFFSET_OFFSET		0
 #define DRV_MB_PARAM_NVM_OFFSET_MASK		0x00FFFFFF
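
Per the NVM_READ_NVRAM/NVM_WRITE_NVRAM comments earlier, the parameter packs a 24-bit offset with
an 8-bit byte count; a hedged sketch (the length shift of 24 follows the [24:31] note in those
comments, not a define visible in this hunk):

    /* Illustrative sketch only - read 32 bytes of NVRAM starting at offset 0x1000 */
    u32 addr = 0x1000;
    u32 param = ((addr << DRV_MB_PARAM_NVM_OFFSET_OFFSET) &
    	     DRV_MB_PARAM_NVM_OFFSET_MASK) | (32U << 24);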
@@ -1557,13 +1899,18 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_PHYMOD_SIZE_MASK		0x000FFF00
 	/* configure vf MSIX params BB */
 #define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_OFFSET	0
-#define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK	0x000000FF
+#define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK		0x000000FF
 #define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_OFFSET	8
 #define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_MASK	0x0000FF00
 	/* configure vf MSIX for PF params AH*/
 #define DRV_MB_PARAM_CFG_PF_VFS_MSIX_SB_NUM_OFFSET	0
 #define DRV_MB_PARAM_CFG_PF_VFS_MSIX_SB_NUM_MASK	0x000000FF
 
+#define DRV_MSG_CODE_VF_WITH_MORE_16SB_VF_ID_OFFSET	0
+#define DRV_MSG_CODE_VF_WITH_MORE_16SB_VF_ID_MASK	0x000000FF
+#define DRV_MSG_CODE_VF_WITH_MORE_16SB_SET_OFFSET	8
+#define DRV_MSG_CODE_VF_WITH_MORE_16SB_SET_MASK		0x00000100
+
 	/* OneView configuration parameters */
 #define DRV_MB_PARAM_OV_CURR_CFG_OFFSET		0
 #define DRV_MB_PARAM_OV_CURR_CFG_MASK		0x0000000F
@@ -1576,7 +1923,7 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_OV_CURR_CFG_DCI		6
 #define DRV_MB_PARAM_OV_CURR_CFG_HII		7
 
-#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OFFSET				0
+#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OFFSET			0
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_MASK			0x000000FF
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_NONE				(1 << 0)
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_ISCSI_IP_ACQUIRED		(1 << 1)
@@ -1587,9 +1934,9 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_LOGGED_INTO_TGT		(1 << 4)
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_IMG_DOWNLOADED			(1 << 5)
 #define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OS_HANDOFF			(1 << 6)
-#define DRV_MB_PARAM_OV_UPDATE_BOOT_COMPLETED				0
+#define DRV_MB_PARAM_OV_UPDATE_BOOT_COMPLETED			0
 
-#define DRV_MB_PARAM_OV_PCI_BUS_NUM_OFFSET				0
+#define DRV_MB_PARAM_OV_PCI_BUS_NUM_OFFSET		0
 #define DRV_MB_PARAM_OV_PCI_BUS_NUM_MASK		0x000000FF
 
 #define DRV_MB_PARAM_OV_STORM_FW_VER_OFFSET		0
@@ -1602,26 +1949,42 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_OFFSET		0
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_MASK		0xF
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_UNKNOWN		0x1
-/* Not Installed*/
-#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_NOT_LOADED	0x2
+#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_NOT_LOADED	0x2 /* Not Installed*/
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_LOADING		0x3
 /* installed but disabled by user/admin/OS */
 #define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_DISABLED	0x4
-/* installed and active */
-#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_ACTIVE		0x5
+#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_ACTIVE		0x5 /* installed and active */
 
-#define DRV_MB_PARAM_OV_MTU_SIZE_OFFSET		0
+#define DRV_MB_PARAM_OV_MTU_SIZE_OFFSET	0
 #define DRV_MB_PARAM_OV_MTU_SIZE_MASK		0xFFFFFFFF
 
-#define DRV_MB_PARAM_ESWITCH_MODE_MASK  (DRV_MB_PARAM_ESWITCH_MODE_NONE | \
+#define DRV_MB_PARAM_WOL_MASK		(DRV_MB_PARAM_WOL_DEFAULT |	\
+					 DRV_MB_PARAM_WOL_DISABLED |	\
+					 DRV_MB_PARAM_WOL_ENABLED)
+#define DRV_MB_PARAM_WOL_DEFAULT	DRV_MB_PARAM_UNLOAD_WOL_MCP
+#define DRV_MB_PARAM_WOL_DISABLED	DRV_MB_PARAM_UNLOAD_WOL_DISABLED
+#define DRV_MB_PARAM_WOL_ENABLED	DRV_MB_PARAM_UNLOAD_WOL_ENABLED
+
+#define DRV_MB_PARAM_ESWITCH_MODE_MASK	(DRV_MB_PARAM_ESWITCH_MODE_NONE | \
 					 DRV_MB_PARAM_ESWITCH_MODE_VEB |   \
 					 DRV_MB_PARAM_ESWITCH_MODE_VEPA)
-#define DRV_MB_PARAM_ESWITCH_MODE_NONE  0x0
-#define DRV_MB_PARAM_ESWITCH_MODE_VEB   0x1
-#define DRV_MB_PARAM_ESWITCH_MODE_VEPA  0x2
+#define DRV_MB_PARAM_ESWITCH_MODE_NONE	0x0
+#define DRV_MB_PARAM_ESWITCH_MODE_VEB	0x1
+#define DRV_MB_PARAM_ESWITCH_MODE_VEPA	0x2
+
+#define DRV_MB_PARAM_FCOE_CVID_MASK	0xFFF
+#define DRV_MB_PARAM_FCOE_CVID_OFFSET	0
+
+#define DRV_MB_PARAM_BOOT_CFG_UPDATE_MASK	0xFF
+#define DRV_MB_PARAM_BOOT_CFG_UPDATE_OFFSET	0
+#define DRV_MB_PARAM_BOOT_CFG_UPDATE_FCOE	0x01
+#define DRV_MB_PARAM_BOOT_CFG_UPDATE_ISCSI	0x02
 
-#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_MASK     0x1
-#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET   0
+#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_MASK	0x1
+#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET	0
+
+#define DRV_MB_PARAM_LLDP_STATS_AGENT_MASK	0xFF
+#define DRV_MB_PARAM_LLDP_STATS_AGENT_OFFSET	0
 
 #define DRV_MB_PARAM_SET_LED_MODE_OPER		0x0
 #define DRV_MB_PARAM_SET_LED_MODE_ON		0x1
@@ -1665,39 +2028,43 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_BIST_RC_FAILED		2
 #define DRV_MB_PARAM_BIST_RC_INVALID_PARAMETER		3
 
-#define DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET      0
-#define DRV_MB_PARAM_BIST_TEST_INDEX_MASK       0x000000FF
+#define DRV_MB_PARAM_BIST_TEST_INDEX_OFFSET	0
+#define DRV_MB_PARAM_BIST_TEST_INDEX_MASK	0x000000FF
 #define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_OFFSET      8
-#define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_MASK       0x0000FF00
+#define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_MASK	0x0000FF00
 
-#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_MASK      0x0000FFFF
-#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_OFFSET     0
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_MASK	0x0000FFFF
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_OFFSET	0
 /* driver supports SmartLinQ parameter */
 #define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_SMARTLINQ 0x00000001
-/* driver supports EEE parameter */
-#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EEE       0x00000002
-#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_MASK      0xFFFF0000
-#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_OFFSET     16
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EEE	0x00000002 /* driver supports EEE parameter */
+/* driver supports FEC Control parameter */
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_FEC_CONTROL 0x00000004
+/* driver supports extended speed and FEC Control parameters */
+#define DRV_MB_PARAM_FEATURE_SUPPORT_PORT_EXT_SPEED_FEC_CONTROL 0x00000008
+#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_MASK	0xFFFF0000
+#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_OFFSET	16
 /* driver supports virtual link parameter */
-#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_VLINK     0x00010000
+#define DRV_MB_PARAM_FEATURE_SUPPORT_FUNC_VLINK	0x00010000
+
+/* DRV_MSG_CODE_DEBUG_DATA_SEND parameters */
+#define DRV_MSG_CODE_DEBUG_DATA_SEND_SIZE_OFFSET	0
+#define DRV_MSG_CODE_DEBUG_DATA_SEND_SIZE_MASK		0xFF
+
 	/* Driver attributes params */
-#define DRV_MB_PARAM_ATTRIBUTE_KEY_OFFSET		 0
+#define DRV_MB_PARAM_ATTRIBUTE_KEY_OFFSET	0
 #define DRV_MB_PARAM_ATTRIBUTE_KEY_MASK		0x00FFFFFF
 #define DRV_MB_PARAM_ATTRIBUTE_CMD_OFFSET		24
 #define DRV_MB_PARAM_ATTRIBUTE_CMD_MASK		0xFF000000
 
 #define DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET		0
-/* Option# */
-#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK		0x0000FFFF
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK		0x0000FFFF	/* Option# */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_IGNORE		0x0000FFFF
 #define DRV_MB_PARAM_NVM_CFG_OPTION_ALL_OFFSET		16
-/* (Only for Set) Applies option<92>s value to all entities (port/func)
- * depending on the option type
- */
+/* (Only for Set) Applies option's value to all entities (port/func) depending on the option type */
 #define DRV_MB_PARAM_NVM_CFG_OPTION_ALL_MASK		0x00010000
 #define DRV_MB_PARAM_NVM_CFG_OPTION_INIT_OFFSET		17
-/* When set, and state is IDLE, MFW will allocate resources and load
- * configuration from NVM
- */
+/* When set, and state is IDLE, MFW will allocate resources and load configuration from NVM */
 #define DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK		0x00020000
 #define DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_OFFSET	18
 /* (Only for Set) - When set submit changed nvm_cfg1 to flash */
@@ -1705,162 +2072,247 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_NVM_CFG_OPTION_FREE_OFFSET		19
 /* Free - When set, free allocated resources, and return to IDLE state. */
 #define DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK		0x00080000
-#define SINGLE_NVM_WR_OP(optionId) \
-	((((optionId) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \
-	  DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | \
-	 (DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK | \
-	  DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | \
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_SEL_OFFSET	20
+/* When set, command applies to the entity (func/port) in the ENTITY_ID field */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_SEL_MASK	0x00100000
+#define DRV_MB_PARAM_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL_OFFSET	21
+/* When set, the nvram is restored to its default values */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL_MASK	0x00200000
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_ID_OFFSET	24
+/* When ENTITY_SEL is set, the command applies to the entity ID (func/port) in this field */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ENTITY_ID_MASK	0x0f000000
+
+#define SINGLE_NVM_WR_OP(option_id)			\
+	((((option_id) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | (DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK | \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK))
+#define BEGIN_NVM_WR_OP(option_id)			\
+	((((option_id) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | (DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK))
+#define CONT_NVM_WR_OP(option_id)			\
+	((((option_id) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET))
+#define END_NVM_WR_OP(option_id)			\
+	((((option_id) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | (DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | \
 	  DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK))
-	u32 fw_mb_header;
-#define FW_MSG_CODE_UNSUPPORTED			0x00000000
-#define FW_MSG_CODE_DRV_LOAD_ENGINE		0x10100000
-#define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
-#define FW_MSG_CODE_DRV_LOAD_FUNCTION           0x10120000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_PDA        0x10200000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1      0x10210000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG       0x10220000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_HSI        0x10230000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE 0x10300000
-#define FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT     0x10310000
-#define FW_MSG_CODE_DRV_LOAD_DONE               0x11100000
-#define FW_MSG_CODE_DRV_UNLOAD_ENGINE           0x20110000
-#define FW_MSG_CODE_DRV_UNLOAD_PORT             0x20120000
-#define FW_MSG_CODE_DRV_UNLOAD_FUNCTION         0x20130000
-#define FW_MSG_CODE_DRV_UNLOAD_DONE             0x21100000
-#define FW_MSG_CODE_INIT_PHY_DONE		0x21200000
-#define FW_MSG_CODE_INIT_PHY_ERR_INVALID_ARGS	0x21300000
-#define FW_MSG_CODE_LINK_RESET_DONE		0x23000000
-#define FW_MSG_CODE_SET_LLDP_DONE               0x24000000
-#define FW_MSG_CODE_SET_LLDP_UNSUPPORTED_AGENT  0x24010000
-#define FW_MSG_CODE_REGISTER_LLDP_TLVS_RX_DONE  0x24100000
-#define FW_MSG_CODE_SET_DCBX_DONE               0x25000000
-#define FW_MSG_CODE_UPDATE_CURR_CFG_DONE        0x26000000
-#define FW_MSG_CODE_UPDATE_BUS_NUM_DONE         0x27000000
-#define FW_MSG_CODE_UPDATE_BOOT_PROGRESS_DONE   0x28000000
-#define FW_MSG_CODE_UPDATE_STORM_FW_VER_DONE    0x29000000
-#define FW_MSG_CODE_UPDATE_DRIVER_STATE_DONE    0x31000000
-#define FW_MSG_CODE_DRV_MSG_CODE_BW_UPDATE_DONE 0x32000000
-#define FW_MSG_CODE_DRV_MSG_CODE_MTU_SIZE_DONE  0x33000000
-#define FW_MSG_CODE_RESOURCE_ALLOC_OK           0x34000000
-#define FW_MSG_CODE_RESOURCE_ALLOC_UNKNOWN      0x35000000
-#define FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED   0x36000000
-#define FW_MSG_CODE_RESOURCE_ALLOC_GEN_ERR      0x37000000
-#define FW_MSG_CODE_GET_OEM_UPDATES_DONE	0x41000000
-
-#define FW_MSG_CODE_NIG_DRAIN_DONE              0x30000000
-#define FW_MSG_CODE_VF_DISABLED_DONE            0xb0000000
-#define FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE        0xb0010000
-#define FW_MSG_CODE_INITIATE_VF_FLR_OK		0xb0030000
-#define FW_MSG_CODE_ERR_RESOURCE_TEMPORARY_UNAVAILABLE	0x008b0000
-#define FW_MSG_CODE_ERR_RESOURCE_ALREADY_ALLOCATED	0x008c0000
-#define FW_MSG_CODE_ERR_RESOURCE_NOT_ALLOCATED		0x008d0000
-#define FW_MSG_CODE_ERR_NON_USER_OPTION			0x008e0000
-#define FW_MSG_CODE_ERR_UNKNOWN_OPTION			0x008f0000
-#define FW_MSG_CODE_WAIT				0x00900000
-#define FW_MSG_CODE_FLR_ACK                     0x02000000
-#define FW_MSG_CODE_FLR_NACK                    0x02100000
-#define FW_MSG_CODE_SET_DRIVER_DONE		0x02200000
-#define FW_MSG_CODE_SET_VMAC_SUCCESS            0x02300000
-#define FW_MSG_CODE_SET_VMAC_FAIL               0x02400000
-
-#define FW_MSG_CODE_NVM_OK			0x00010000
-#define FW_MSG_CODE_NVM_INVALID_MODE		0x00020000
-#define FW_MSG_CODE_NVM_PREV_CMD_WAS_NOT_FINISHED	0x00030000
-#define FW_MSG_CODE_NVM_FAILED_TO_ALLOCATE_PAGE	0x00040000
-#define FW_MSG_CODE_NVM_INVALID_DIR_FOUND	0x00050000
-#define FW_MSG_CODE_NVM_PAGE_NOT_FOUND		0x00060000
-#define FW_MSG_CODE_NVM_FAILED_PARSING_BNDLE_HEADER 0x00070000
-#define FW_MSG_CODE_NVM_FAILED_PARSING_IMAGE_HEADER 0x00080000
-#define FW_MSG_CODE_NVM_PARSING_OUT_OF_SYNC	0x00090000
-#define FW_MSG_CODE_NVM_FAILED_UPDATING_DIR	0x000a0000
-#define FW_MSG_CODE_NVM_FAILED_TO_FREE_PAGE	0x000b0000
-#define FW_MSG_CODE_NVM_FILE_NOT_FOUND		0x000c0000
-#define FW_MSG_CODE_NVM_OPERATION_FAILED	0x000d0000
-#define FW_MSG_CODE_NVM_FAILED_UNALIGNED	0x000e0000
-#define FW_MSG_CODE_NVM_BAD_OFFSET		0x000f0000
-#define FW_MSG_CODE_NVM_BAD_SIGNATURE		0x00100000
-#define FW_MSG_CODE_NVM_FILE_READ_ONLY		0x00200000
-#define FW_MSG_CODE_NVM_UNKNOWN_FILE		0x00300000
-#define FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK	0x00400000
+#define DEFAULT_RESTORE_NVM_OP()			\
+	(DRV_MB_PARAM_NVM_CFG_OPTION_DEFAULT_RESTORE_ALL_MASK | \
+	 DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK | DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | \
+	 DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK)
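
These macros fold the option ID into bits [15:0] and set the INIT/COMMIT/FREE state bits, so a
one-shot option write is a single parameter; hedged sketch (option ID 0x42 is made up):

    /* Illustrative sketch only - one-shot write of NVM cfg option 0x42,
     * to be sent as the param of DRV_MSG_CODE_SET_NVM_CFG_OPTION
     */
    u32 param = SINGLE_NVM_WR_OP(0x42);

For a multi-step sequence the same bits compose as BEGIN_NVM_WR_OP() once, CONT_NVM_WR_OP() per
option, then END_NVM_WR_OP() to commit and free.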
+
+
+/* DRV_MSG_CODE_GET_PERM_MAC parameters */
+#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_OFFSET		0
+#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_MASK		0xF
+#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_PF		0
+#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_BMC		1
+#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_VF		2
+#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_LLDP		3
+#define DRV_MSG_CODE_GET_PERM_MAC_TYPE_MAX		4
+#define DRV_MSG_CODE_GET_PERM_MAC_INDEX_OFFSET		8
+#define DRV_MSG_CODE_GET_PERM_MAC_INDEX_MASK		0xFFFF00
+
+/***************************************************************/
+/* Firmware Message Code (Response) */
+/***************************************************************/
+
+#define FW_MSG_CODE(_code_) ((_code_) << FW_MSG_CODE_OFFSET)
+enum fw_msg_code_enum {
+	FW_MSG_CODE_UNSUPPORTED				= FW_MSG_CODE(0x0000),
+	FW_MSG_CODE_NVM_OK				= FW_MSG_CODE(0x0001),
+	FW_MSG_CODE_NVM_INVALID_MODE			= FW_MSG_CODE(0x0002),
+	FW_MSG_CODE_NVM_PREV_CMD_WAS_NOT_FINISHED	= FW_MSG_CODE(0x0003),
+	FW_MSG_CODE_ATTRIBUTE_INVALID_KEY		= FW_MSG_CODE(0x0002),
+	FW_MSG_CODE_ATTRIBUTE_INVALID_CMD		= FW_MSG_CODE(0x0003),
+	FW_MSG_CODE_NVM_FAILED_TO_ALLOCATE_PAGE		= FW_MSG_CODE(0x0004),
+	FW_MSG_CODE_NVM_INVALID_DIR_FOUND		= FW_MSG_CODE(0x0005),
+	FW_MSG_CODE_NVM_PAGE_NOT_FOUND			= FW_MSG_CODE(0x0006),
+	FW_MSG_CODE_NVM_FAILED_PARSING_BNDLE_HEADER	= FW_MSG_CODE(0x0007),
+	FW_MSG_CODE_NVM_FAILED_PARSING_IMAGE_HEADER	= FW_MSG_CODE(0x0008),
+	FW_MSG_CODE_NVM_PARSING_OUT_OF_SYNC		= FW_MSG_CODE(0x0009),
+	FW_MSG_CODE_NVM_FAILED_UPDATING_DIR		= FW_MSG_CODE(0x000a),
+	FW_MSG_CODE_NVM_FAILED_TO_FREE_PAGE		= FW_MSG_CODE(0x000b),
+	FW_MSG_CODE_NVM_FILE_NOT_FOUND			= FW_MSG_CODE(0x000c),
+	FW_MSG_CODE_NVM_OPERATION_FAILED		= FW_MSG_CODE(0x000d),
+	FW_MSG_CODE_NVM_FAILED_UNALIGNED		= FW_MSG_CODE(0x000e),
+	FW_MSG_CODE_NVM_BAD_OFFSET			= FW_MSG_CODE(0x000f),
+	FW_MSG_CODE_NVM_BAD_SIGNATURE			= FW_MSG_CODE(0x0010),
+	FW_MSG_CODE_NVM_OPERATION_NOT_STARTED		= FW_MSG_CODE(0x0011),
+	FW_MSG_CODE_NVM_FILE_READ_ONLY			= FW_MSG_CODE(0x0020),
+	FW_MSG_CODE_NVM_UNKNOWN_FILE			= FW_MSG_CODE(0x0030),
+	FW_MSG_CODE_NVM_FAILED_CALC_HASH		= FW_MSG_CODE(0x0031),
+	FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING		= FW_MSG_CODE(0x0032),
+	FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY		= FW_MSG_CODE(0x0033),
+	FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK		= FW_MSG_CODE(0x0040),
+	FW_MSG_CODE_UNVERIFIED_KEY_CHAIN		= FW_MSG_CODE(0x0050),
 /* MFW reject "mcp reset" command if one of the drivers is up */
-#define FW_MSG_CODE_MCP_RESET_REJECT		0x00600000
-#define FW_MSG_CODE_NVM_FAILED_CALC_HASH	0x00310000
-#define FW_MSG_CODE_NVM_PUBLIC_KEY_MISSING	0x00320000
-#define FW_MSG_CODE_NVM_INVALID_PUBLIC_KEY	0x00330000
-
-#define FW_MSG_CODE_PHY_OK			0x00110000
-#define FW_MSG_CODE_PHY_ERROR			0x00120000
-#define FW_MSG_CODE_SET_SECURE_MODE_ERROR	0x00130000
-#define FW_MSG_CODE_SET_SECURE_MODE_OK		0x00140000
-#define FW_MSG_MODE_PHY_PRIVILEGE_ERROR		0x00150000
-#define FW_MSG_CODE_OK				0x00160000
-#define FW_MSG_CODE_ERROR			0x00170000
-#define FW_MSG_CODE_LED_MODE_INVALID		0x00170000
-#define FW_MSG_CODE_PHY_DIAG_OK			0x00160000
-#define FW_MSG_CODE_PHY_DIAG_ERROR		0x00170000
-#define FW_MSG_CODE_INIT_HW_FAILED_TO_ALLOCATE_PAGE	0x00040000
-#define FW_MSG_CODE_INIT_HW_FAILED_BAD_STATE    0x00170000
-#define FW_MSG_CODE_INIT_HW_FAILED_TO_SET_WINDOW 0x000d0000
-#define FW_MSG_CODE_INIT_HW_FAILED_NO_IMAGE	0x000c0000
-#define FW_MSG_CODE_INIT_HW_FAILED_VERSION_MISMATCH	0x00100000
-#define FW_MSG_CODE_TRANSCEIVER_DIAG_OK			0x00160000
-#define FW_MSG_CODE_TRANSCEIVER_DIAG_ERROR		0x00170000
-#define FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT		0x00020000
-#define FW_MSG_CODE_TRANSCEIVER_BAD_BUFFER_SIZE		0x000f0000
-#define FW_MSG_CODE_GPIO_OK			0x00160000
-#define FW_MSG_CODE_GPIO_DIRECTION_ERR		0x00170000
-#define FW_MSG_CODE_GPIO_CTRL_ERR		0x00020000
-#define FW_MSG_CODE_GPIO_INVALID		0x000f0000
-#define FW_MSG_CODE_GPIO_INVALID_VALUE		0x00050000
-#define FW_MSG_CODE_BIST_TEST_INVALID		0x000f0000
-#define FW_MSG_CODE_EXTPHY_INVALID_IMAGE_HEADER	0x00700000
-#define FW_MSG_CODE_EXTPHY_INVALID_PHY_TYPE	0x00710000
-#define FW_MSG_CODE_EXTPHY_OPERATION_FAILED	0x00720000
-#define FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED	0x00730000
-#define FW_MSG_CODE_RECOVERY_MODE		0x00740000
-
-	/* mdump related response codes */
-#define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND	0x00010000
-#define FW_MSG_CODE_MDUMP_ALLOC_FAILED		0x00020000
-#define FW_MSG_CODE_MDUMP_INVALID_CMD		0x00030000
-#define FW_MSG_CODE_MDUMP_IN_PROGRESS		0x00040000
-#define FW_MSG_CODE_MDUMP_WRITE_FAILED		0x00050000
-
-
-#define FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_DONE     0x00870000
-#define FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_BAD_ASIC 0x00880000
-
-#define FW_MSG_CODE_WOL_READ_WRITE_OK		0x00820000
-#define FW_MSG_CODE_WOL_READ_WRITE_INVALID_VAL	0x00830000
-#define FW_MSG_CODE_WOL_READ_WRITE_INVALID_ADDR	0x00840000
-#define FW_MSG_CODE_WOL_READ_BUFFER_OK		0x00850000
-#define FW_MSG_CODE_WOL_READ_BUFFER_INVALID_VAL	0x00860000
-
-#define FW_MSG_CODE_ATTRIBUTE_INVALID_KEY	0x00020000
-#define FW_MSG_CODE_ATTRIBUTE_INVALID_CMD	0x00030000
+	FW_MSG_CODE_MCP_RESET_REJECT			= FW_MSG_CODE(0x0060),
+	FW_MSG_CODE_PHY_OK				= FW_MSG_CODE(0x0011),
+	FW_MSG_CODE_PHY_ERROR				= FW_MSG_CODE(0x0012),
+	FW_MSG_CODE_SET_SECURE_MODE_ERROR		= FW_MSG_CODE(0x0013),
+	FW_MSG_CODE_SET_SECURE_MODE_OK			= FW_MSG_CODE(0x0014),
+	FW_MSG_MODE_PHY_PRIVILEGE_ERROR			= FW_MSG_CODE(0x0015),
+	FW_MSG_CODE_OK					= FW_MSG_CODE(0x0016),
+	FW_MSG_CODE_ERROR				= FW_MSG_CODE(0x0017),
+	FW_MSG_CODE_LED_MODE_INVALID			= FW_MSG_CODE(0x0017),
+	FW_MSG_CODE_PHY_DIAG_OK				= FW_MSG_CODE(0x0016),
+	FW_MSG_CODE_PHY_DIAG_ERROR			= FW_MSG_CODE(0x0017),
+	FW_MSG_CODE_INIT_HW_FAILED_TO_ALLOCATE_PAGE	= FW_MSG_CODE(0x0004),
+	FW_MSG_CODE_INIT_HW_FAILED_BAD_STATE		= FW_MSG_CODE(0x0017),
+	FW_MSG_CODE_INIT_HW_FAILED_TO_SET_WINDOW	= FW_MSG_CODE(0x000d),
+	FW_MSG_CODE_INIT_HW_FAILED_NO_IMAGE		= FW_MSG_CODE(0x000c),
+	FW_MSG_CODE_INIT_HW_FAILED_VERSION_MISMATCH	= FW_MSG_CODE(0x0010),
+	FW_MSG_CODE_INIT_HW_FAILED_UNALLOWED_ADDR	= FW_MSG_CODE(0x0092),
+	FW_MSG_CODE_TRANSCEIVER_DIAG_OK			= FW_MSG_CODE(0x0016),
+	FW_MSG_CODE_TRANSCEIVER_DIAG_ERROR		= FW_MSG_CODE(0x0017),
+	FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT		= FW_MSG_CODE(0x0002),
+	FW_MSG_CODE_TRANSCEIVER_BAD_BUFFER_SIZE		= FW_MSG_CODE(0x000f),
+	FW_MSG_CODE_GPIO_OK				= FW_MSG_CODE(0x0016),
+	FW_MSG_CODE_GPIO_DIRECTION_ERR			= FW_MSG_CODE(0x0017),
+	FW_MSG_CODE_GPIO_CTRL_ERR			= FW_MSG_CODE(0x0002),
+	FW_MSG_CODE_GPIO_INVALID			= FW_MSG_CODE(0x000f),
+	FW_MSG_CODE_GPIO_INVALID_VALUE			= FW_MSG_CODE(0x0005),
+	FW_MSG_CODE_BIST_TEST_INVALID			= FW_MSG_CODE(0x000f),
+	FW_MSG_CODE_EXTPHY_INVALID_IMAGE_HEADER		= FW_MSG_CODE(0x0070),
+	FW_MSG_CODE_EXTPHY_INVALID_PHY_TYPE		= FW_MSG_CODE(0x0071),
+	FW_MSG_CODE_EXTPHY_OPERATION_FAILED		= FW_MSG_CODE(0x0072),
+	FW_MSG_CODE_EXTPHY_NO_PHY_DETECTED		= FW_MSG_CODE(0x0073),
+	FW_MSG_CODE_RECOVERY_MODE			= FW_MSG_CODE(0x0074),
+	FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND		= FW_MSG_CODE(0x0001),
+	FW_MSG_CODE_MDUMP_ALLOC_FAILED			= FW_MSG_CODE(0x0002),
+	FW_MSG_CODE_MDUMP_INVALID_CMD			= FW_MSG_CODE(0x0003),
+	FW_MSG_CODE_MDUMP_IN_PROGRESS			= FW_MSG_CODE(0x0004),
+	FW_MSG_CODE_MDUMP_WRITE_FAILED			= FW_MSG_CODE(0x0005),
+	FW_MSG_CODE_OS_WOL_SUPPORTED			= FW_MSG_CODE(0x0080),
+	FW_MSG_CODE_OS_WOL_NOT_SUPPORTED		= FW_MSG_CODE(0x0081),
+	FW_MSG_CODE_WOL_READ_WRITE_OK			= FW_MSG_CODE(0x0082),
+	FW_MSG_CODE_WOL_READ_WRITE_INVALID_VAL		= FW_MSG_CODE(0x0083),
+	FW_MSG_CODE_WOL_READ_WRITE_INVALID_ADDR		= FW_MSG_CODE(0x0084),
+	FW_MSG_CODE_WOL_READ_BUFFER_OK			= FW_MSG_CODE(0x0085),
+	FW_MSG_CODE_WOL_READ_BUFFER_INVALID_VAL		= FW_MSG_CODE(0x0086),
+	FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_DONE		= FW_MSG_CODE(0x0087),
+	FW_MSG_CODE_DRV_CFG_PF_VFS_MSIX_BAD_ASIC	= FW_MSG_CODE(0x0088),
+	FW_MSG_CODE_RETAIN_VMAC_SUCCESS			= FW_MSG_CODE(0x0089),
+	FW_MSG_CODE_RETAIN_VMAC_FAIL			= FW_MSG_CODE(0x008a),
+	FW_MSG_CODE_ERR_RESOURCE_TEMPORARY_UNAVAILABLE  = FW_MSG_CODE(0x008b),
+	FW_MSG_CODE_ERR_RESOURCE_ALREADY_ALLOCATED	= FW_MSG_CODE(0x008c),
+	FW_MSG_CODE_ERR_RESOURCE_NOT_ALLOCATED		= FW_MSG_CODE(0x008d),
+	FW_MSG_CODE_ERR_NON_USER_OPTION			= FW_MSG_CODE(0x008e),
+	FW_MSG_CODE_ERR_UNKNOWN_OPTION			= FW_MSG_CODE(0x008f),
+	FW_MSG_CODE_WAIT				= FW_MSG_CODE(0x0090),
+	FW_MSG_CODE_BUSY				= FW_MSG_CODE(0x0091),
+	FW_MSG_CODE_FLR_ACK				= FW_MSG_CODE(0x0200),
+	FW_MSG_CODE_FLR_NACK				= FW_MSG_CODE(0x0210),
+	FW_MSG_CODE_SET_DRIVER_DONE			= FW_MSG_CODE(0x0220),
+	FW_MSG_CODE_SET_VMAC_SUCCESS			= FW_MSG_CODE(0x0230),
+	FW_MSG_CODE_SET_VMAC_FAIL			= FW_MSG_CODE(0x0240),
+	FW_MSG_CODE_DRV_LOAD_ENGINE			= FW_MSG_CODE(0x1010),
+	FW_MSG_CODE_DRV_LOAD_PORT			= FW_MSG_CODE(0x1011),
+	FW_MSG_CODE_DRV_LOAD_FUNCTION			= FW_MSG_CODE(0x1012),
+	FW_MSG_CODE_DRV_LOAD_REFUSED_PDA		= FW_MSG_CODE(0x1020),
+	FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1		= FW_MSG_CODE(0x1021),
+	FW_MSG_CODE_DRV_LOAD_REFUSED_DIAG		= FW_MSG_CODE(0x1022),
+	FW_MSG_CODE_DRV_LOAD_REFUSED_HSI		= FW_MSG_CODE(0x1023),
+	FW_MSG_CODE_DRV_LOAD_REFUSED_REQUIRES_FORCE	= FW_MSG_CODE(0x1030),
+	FW_MSG_CODE_DRV_LOAD_REFUSED_REJECT		= FW_MSG_CODE(0x1031),
+	FW_MSG_CODE_DRV_LOAD_DONE			= FW_MSG_CODE(0x1110),
+	FW_MSG_CODE_DRV_UNLOAD_ENGINE			= FW_MSG_CODE(0x2011),
+	FW_MSG_CODE_DRV_UNLOAD_PORT			= FW_MSG_CODE(0x2012),
+	FW_MSG_CODE_DRV_UNLOAD_FUNCTION			= FW_MSG_CODE(0x2013),
+	FW_MSG_CODE_DRV_UNLOAD_DONE			= FW_MSG_CODE(0x2110),
+	FW_MSG_CODE_INIT_PHY_DONE			= FW_MSG_CODE(0x2120),
+	FW_MSG_CODE_INIT_PHY_ERR_INVALID_ARGS		= FW_MSG_CODE(0x2130),
+	FW_MSG_CODE_LINK_RESET_DONE			= FW_MSG_CODE(0x2300),
+	FW_MSG_CODE_SET_LLDP_DONE			= FW_MSG_CODE(0x2400),
+	FW_MSG_CODE_SET_LLDP_UNSUPPORTED_AGENT		= FW_MSG_CODE(0x2401),
+	FW_MSG_CODE_ERROR_EMBEDDED_LLDP_IS_DISABLED	= FW_MSG_CODE(0x2402),
+	FW_MSG_CODE_REGISTER_LLDP_TLVS_RX_DONE		= FW_MSG_CODE(0x2410),
+	FW_MSG_CODE_SET_DCBX_DONE			= FW_MSG_CODE(0x2500),
+	FW_MSG_CODE_UPDATE_CURR_CFG_DONE		= FW_MSG_CODE(0x2600),
+	FW_MSG_CODE_UPDATE_BUS_NUM_DONE			= FW_MSG_CODE(0x2700),
+	FW_MSG_CODE_UPDATE_BOOT_PROGRESS_DONE		= FW_MSG_CODE(0x2800),
+	FW_MSG_CODE_UPDATE_STORM_FW_VER_DONE		= FW_MSG_CODE(0x2900),
+	FW_MSG_CODE_UPDATE_DRIVER_STATE_DONE		= FW_MSG_CODE(0x3100),
+	FW_MSG_CODE_DRV_MSG_CODE_BW_UPDATE_DONE		= FW_MSG_CODE(0x3200),
+	FW_MSG_CODE_DRV_MSG_CODE_MTU_SIZE_DONE		= FW_MSG_CODE(0x3300),
+	FW_MSG_CODE_RESOURCE_ALLOC_OK			= FW_MSG_CODE(0x3400),
+	FW_MSG_CODE_RESOURCE_ALLOC_UNKNOWN		= FW_MSG_CODE(0x3500),
+	FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED		= FW_MSG_CODE(0x3600),
+	FW_MSG_CODE_RESOURCE_ALLOC_GEN_ERR		= FW_MSG_CODE(0x3700),
+	FW_MSG_CODE_UPDATE_WOL_DONE			= FW_MSG_CODE(0x3800),
+	FW_MSG_CODE_UPDATE_ESWITCH_MODE_DONE		= FW_MSG_CODE(0x3900),
+	FW_MSG_CODE_UPDATE_ERR				= FW_MSG_CODE(0x3a01),
+	FW_MSG_CODE_UPDATE_PARAM_ERR			= FW_MSG_CODE(0x3a02),
+	FW_MSG_CODE_UPDATE_NOT_ALLOWED			= FW_MSG_CODE(0x3a03),
+	FW_MSG_CODE_S_TAG_UPDATE_ACK_DONE		= FW_MSG_CODE(0x3b00),
+	FW_MSG_CODE_UPDATE_FCOE_CVID_DONE		= FW_MSG_CODE(0x3c00),
+	FW_MSG_CODE_UPDATE_FCOE_FABRIC_NAME_DONE	= FW_MSG_CODE(0x3d00),
+	FW_MSG_CODE_UPDATE_BOOT_CFG_DONE		= FW_MSG_CODE(0x3e00),
+	FW_MSG_CODE_RESET_TO_DEFAULT_ACK		= FW_MSG_CODE(0x3f00),
+	FW_MSG_CODE_OV_GET_CURR_CFG_DONE		= FW_MSG_CODE(0x4000),
+	FW_MSG_CODE_GET_OEM_UPDATES_DONE		= FW_MSG_CODE(0x4100),
+	FW_MSG_CODE_GET_LLDP_STATS_DONE			= FW_MSG_CODE(0x4200),
+	FW_MSG_CODE_GET_LLDP_STATS_ERROR		= FW_MSG_CODE(0x4201),
+	FW_MSG_CODE_NIG_DRAIN_DONE			= FW_MSG_CODE(0x3000),
+	FW_MSG_CODE_VF_DISABLED_DONE			= FW_MSG_CODE(0xb000),
+	FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE		= FW_MSG_CODE(0xb001),
+	FW_MSG_CODE_INITIATE_VF_FLR_BAD_PARAM		= FW_MSG_CODE(0xb002),
+	FW_MSG_CODE_INITIATE_VF_FLR_OK			= FW_MSG_CODE(0xb003),
+	FW_MSG_CODE_IDC_BUSY				= FW_MSG_CODE(0x0001),
+	FW_MSG_CODE_GET_PERM_MAC_WRONG_TYPE		= FW_MSG_CODE(0xb004),
+	FW_MSG_CODE_GET_PERM_MAC_WRONG_INDEX		= FW_MSG_CODE(0xb005),
+	FW_MSG_CODE_GET_PERM_MAC_OK			= FW_MSG_CODE(0xb006),
+	FW_MSG_CODE_DEBUG_DATA_SEND_INV_ARG		= FW_MSG_CODE(0xb007),
+	FW_MSG_CODE_DEBUG_DATA_SEND_BUF_FULL		= FW_MSG_CODE(0xb008),
+	FW_MSG_CODE_DEBUG_DATA_SEND_NO_BUF		= FW_MSG_CODE(0xb009),
+	FW_MSG_CODE_DEBUG_NOT_ENABLED			= FW_MSG_CODE(0xb00a),
+	FW_MSG_CODE_DEBUG_DATA_SEND_OK			= FW_MSG_CODE(0xb00b),
+};
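
Note that several response codes share a numeric value (e.g. FW_MSG_CODE_OK,
FW_MSG_CODE_PHY_DIAG_OK and FW_MSG_CODE_GPIO_OK are all 0x0016), so a decoded value is only
meaningful in the context of the command that was sent. A hedged sketch of the decode:

    /* Illustrative sketch only - extract the response code from the mailbox's
     * fw_mb_header after a load request
     */
    u32 fw_msg_code = fw_mb_header & FW_MSG_CODE_MASK;

    if (fw_msg_code == FW_MSG_CODE_DRV_LOAD_ENGINE)
    	/* this function is the first loaded driver on its engine */;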
 
-#define FW_MSG_SEQ_NUMBER_MASK			0x0000ffff
-#define FW_MSG_SEQ_NUMBER_OFFSET		0
-#define FW_MSG_CODE_MASK			0xffff0000
-#define FW_MSG_CODE_OFFSET			16
-	u32 fw_mb_param;
-/* Resource Allocation params - MFW  version support */
+/* Resource Allocation params - MFW version support */
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_OFFSET		16
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK	0x0000FFFF
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_OFFSET		0
 
+/* get pf rdma protocol command response */
+#define FW_MB_PARAM_GET_PF_RDMA_NONE		0x0
+#define FW_MB_PARAM_GET_PF_RDMA_ROCE		0x1
+#define FW_MB_PARAM_GET_PF_RDMA_IWARP		0x2
+#define FW_MB_PARAM_GET_PF_RDMA_BOTH		0x3
+
 /* get MFW feature support response */
 /* MFW supports SmartLinQ */
-#define FW_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ   0x00000001
-/* MFW supports EEE */
-#define FW_MB_PARAM_FEATURE_SUPPORT_EEE         0x00000002
+#define FW_MB_PARAM_FEATURE_SUPPORT_SMARTLINQ                       (1 << 0)
+#define FW_MB_PARAM_FEATURE_SUPPORT_EEE                             (1 << 1)  /* MFW supports EEE */
 /* MFW supports DRV_LOAD Timeout */
-#define FW_MB_PARAM_FEATURE_SUPPORT_DRV_LOAD_TO  0x00000004
-/* MFW support complete IGU cleanup upon FLR */
-#define FW_MB_PARAM_FEATURE_SUPPORT_IGU_CLEANUP	0x00000080
+#define FW_MB_PARAM_FEATURE_SUPPORT_DRV_LOAD_TO                     (1 << 2)
+/* MFW supports early detection of LP Presence */
+#define FW_MB_PARAM_FEATURE_SUPPORT_LP_PRES_DET                     (1 << 3)
+/* MFW supports relaxed ordering setting */
+#define FW_MB_PARAM_FEATURE_SUPPORT_RELAXED_ORD                     (1 << 4)
+/* MFW supports FEC control by driver */
+#define FW_MB_PARAM_FEATURE_SUPPORT_FEC_CONTROL                     (1 << 5)
+/* MFW supports extended speed and FEC Control parameters */
+#define FW_MB_PARAM_FEATURE_SUPPORT_EXT_SPEED_FEC_CONTROL           (1 << 6)
+/* MFW supports complete IGU cleanup upon FLR */
+#define FW_MB_PARAM_FEATURE_SUPPORT_IGU_CLEANUP                     (1 << 7)
+/* MFW supports VF DPM (bug fixes) */
+#define FW_MB_PARAM_FEATURE_SUPPORT_VF_DPM                          (1 << 8)
+/* MFW supports idleChk collection */
+#define FW_MB_PARAM_FEATURE_SUPPORT_IDLE_CHK                        (1 << 9)
 /* MFW supports virtual link */
-#define FW_MB_PARAM_FEATURE_SUPPORT_VLINK       0x00010000
+#define FW_MB_PARAM_FEATURE_SUPPORT_VLINK                           (1 << 16)
+/* MFW supports disabling embedded LLDP */
+#define FW_MB_PARAM_FEATURE_SUPPORT_DISABLE_LLDP                    (1 << 17)
+/* MFW supports ESL: Enhanced System Lockdown */
+#define FW_MB_PARAM_FEATURE_SUPPORT_ENHANCED_SYS_LCK                (1 << 18)
+/* MFW supports restoring default values of NVM_CFG image */
+#define FW_MB_PARAM_FEATURE_SUPPORT_RESTORE_DEFAULT_CFG             (1 << 19)
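
Feature negotiation is a plain bitmap exchange: the driver advertises its
DRV_MB_PARAM_FEATURE_SUPPORT_* bits and reads the MFW's FW_MB_PARAM_FEATURE_SUPPORT_* bits back
from fw_mb_param. A hedged sketch:

    /* Illustrative sketch only - gate EEE configuration on MFW support */
    if (fw_mb_param & FW_MB_PARAM_FEATURE_SUPPORT_EEE)
    	/* MFW accepts the EEE link parameters */;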
+
+/* get MFW management status response */
+#define FW_MB_PARAM_MANAGEMENT_STATUS_LOCKDOWN_ENABLED    0x00000001 /* MFW ESL is active */
 
 #define FW_MB_PARAM_LOAD_DONE_DID_EFUSE_ERROR	(1 << 0)
 
@@ -1882,34 +2334,10 @@ struct public_drv_mb {
 #define FW_MB_PARAM_PPFID_BITMAP_MASK   0xFF
 #define FW_MB_PARAM_PPFID_BITMAP_OFFSET    0
 
-	u32 drv_pulse_mb;
-#define DRV_PULSE_SEQ_MASK                      0x00007fff
-#define DRV_PULSE_SYSTEM_TIME_MASK              0xffff0000
-	/*
-	 * The system time is in the format of
-	 * (year-2001)*12*32 + month*32 + day.
-	 */
-#define DRV_PULSE_ALWAYS_ALIVE                  0x00008000
-	/*
-	 * Indicate to the firmware not to go into the
-	 * OS-absent when it is not getting driver pulse.
-	 * This is used for debugging as well for PXE(MBA).
-	 */
-
-	u32 mcp_pulse_mb;
-#define MCP_PULSE_SEQ_MASK                      0x00007fff
-#define MCP_PULSE_ALWAYS_ALIVE                  0x00008000
-	/* Indicates to the driver not to assert due to lack
-	 * of MCP response
-	 */
-#define MCP_EVENT_MASK                          0xffff0000
-#define MCP_EVENT_OTHER_DRIVER_RESET_REQ        0x00010000
-
-/* The union data is used by the driver to pass parameters to the scratchpad. */
-
-	union drv_union_data union_data;
-
-};
+#define FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET_MASK		0x00ffffff
+#define FW_MB_PARAM_NVM_PUT_FILE_REQ_OFFSET_OFFSET		0
+#define FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE_MASK			0xff000000
+#define FW_MB_PARAM_NVM_PUT_FILE_REQ_SIZE_OFFSET		24
 
 /* MFW - DRV MB */
 /**********************************************************************
@@ -1948,6 +2376,12 @@ enum MFW_DRV_MSG_TYPE {
 	MFW_DRV_MSG_GET_TLV_REQ,
 	MFW_DRV_MSG_OEM_CFG_UPDATE,
 	MFW_DRV_MSG_LLDP_RECEIVED_TLVS_UPDATED,
+	MFW_DRV_MSG_GENERIC_IDC,	/* Generic Inter Driver Communication message */
+	MFW_DRV_MSG_XCVR_TX_FAULT,
+	MFW_DRV_MSG_XCVR_RX_LOS,
+	MFW_DRV_MSG_GET_FCOE_CAP,
+	MFW_DRV_MSG_GEN_LINK_DUMP,
+	MFW_DRV_MSG_GEN_IDLE_CHK,
 	MFW_DRV_MSG_MAX
 };
 
@@ -1958,31 +2392,33 @@ enum MFW_DRV_MSG_TYPE {
 
 #ifdef BIG_ENDIAN		/* Like MFW */
 #define DRV_ACK_MSG(msg_p, msg_id) \
-((u8)((u8 *)msg_p)[msg_id]++;)
+	do { \
+		(u8)((u8 *)msg_p)[msg_id]++; \
+	} while (0)
 #else
 #define DRV_ACK_MSG(msg_p, msg_id) \
-((u8)((u8 *)msg_p)[((msg_id & ~3) | ((~msg_id) & 3))]++;)
+	do { \
+		(u8)((u8 *)msg_p)[((msg_id & ~3) | ((~msg_id) & 3))]++; \
+	} while (0)
 #endif
 
 #define MFW_DRV_UPDATE(shmem_func, msg_id) \
-((u8)((u8 *)(MFW_MB_P(shmem_func)->msg))[msg_id]++;)
+	do { \
+		(u8)((u8 *)(MFW_MB_P(shmem_func)->msg))[msg_id]++; \
+	} while (0)
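The do/while(0) rewrite above is the standard fix for statement-like macros:
the old bodies carried a stray semicolon inside the parentheses and so were
not even well-formed expressions, and an unbraced multi-statement macro
mis-nests under if/else. A standalone illustration of the hazard (simplified
names, not the real macros):

#include <stdio.h>

/* Unsafe: a bare two-statement body */
#define ACK_BAD(id)	puts("ack"); puts(#id)
/* Safe: do/while(0) behaves as a single statement */
#define ACK_GOOD(id)	do { puts("ack"); puts(#id); } while (0)

int main(void)
{
	int pending = 0;

	if (pending)
		ACK_GOOD(msg0);		/* entire body guarded by the if */
	else
		puts("nothing to ack");

	/* Substituting ACK_BAD(msg0) above would leave the second puts()
	 * outside the if, and the dangling else would fail to compile. */
	return 0;
}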
 
 struct public_mfw_mb {
-	u32 sup_msgs;       /* Assigend with MFW_DRV_MSG_MAX */
-/* Incremented by the MFW */
-	u32 msg[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)];
-/* Incremented by the driver */
-	u32 ack[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)];
+	u32 sup_msgs;	/* Assigned with MFW_DRV_MSG_MAX */
+	u32 msg[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)];   /* Incremented by the MFW */
+	u32 ack[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)];   /* Incremented by the driver */
 };
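public_mfw_mb is a simple byte-wide producer/consumer channel: the MFW bumps a
per-event counter byte in msg[] and the driver mirrors it into ack[] once the
event is handled, so msg[i] != ack[i] means event i is pending. A sketch of
that scan, assuming plain host-endian byte indexing (the real DRV_ACK_MSG
above additionally swizzles the index on little-endian hosts):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MSG_MAX		16
#define MSG_DWORDS(n)	(((n) + 3) / 4)

struct mfw_mb {
	uint32_t sup_msgs;
	uint32_t msg[MSG_DWORDS(MSG_MAX)];	/* incremented by the MFW */
	uint32_t ack[MSG_DWORDS(MSG_MAX)];	/* incremented by the driver */
};

int main(void)
{
	struct mfw_mb mb;
	uint8_t *msg, *ack;
	uint32_t i;

	memset(&mb, 0, sizeof(mb));
	mb.sup_msgs = MSG_MAX;
	((uint8_t *)mb.msg)[3]++;		/* MFW posts event id 3 */

	msg = (uint8_t *)mb.msg;
	ack = (uint8_t *)mb.ack;
	for (i = 0; i < mb.sup_msgs; i++) {
		if (msg[i] != ack[i]) {
			printf("event %u pending\n", i);
			ack[i] = msg[i];	/* acknowledge it */
		}
	}
	return 0;
}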
 
 /**************************************/
-/*                                    */
-/*     P U B L I C       D A T A      */
-/*                                    */
+/* P U B L I C D A T A */
 /**************************************/
 enum public_sections {
-	PUBLIC_DRV_MB,      /* Points to the first drv_mb of path0 */
-	PUBLIC_MFW_MB,      /* Points to the first mfw_mb of path0 */
+	PUBLIC_DRV_MB,	/* Points to the first drv_mb of path0 */
+	PUBLIC_MFW_MB,	/* Points to the first mfw_mb of path0 */
 	PUBLIC_GLOBAL,
 	PUBLIC_PATH,
 	PUBLIC_PORT,
@@ -2020,4 +2456,325 @@ struct mcp_public_data {
 #define MAX_I2C_TRANSACTION_SIZE	16
 #define MAX_I2C_TRANSCEIVER_PAGE_SIZE	256
 
+/* OCBB definitions */
+enum tlvs {
+	/* Category 1: Device Properties */
+	DRV_TLV_CLP_STR,
+	DRV_TLV_CLP_STR_CTD,
+	/* Category 6: Device Configuration */
+	DRV_TLV_SCSI_TO,
+	DRV_TLV_R_T_TOV,
+	DRV_TLV_R_A_TOV,
+	DRV_TLV_E_D_TOV,
+	DRV_TLV_CR_TOV,
+	DRV_TLV_BOOT_TYPE,
+	/* Category 8: Port Configuration */
+	DRV_TLV_NPIV_ENABLED,
+	/* Category 10: Function Configuration */
+	DRV_TLV_FEATURE_FLAGS,
+	DRV_TLV_LOCAL_ADMIN_ADDR,
+	DRV_TLV_ADDITIONAL_MAC_ADDR_1,
+	DRV_TLV_ADDITIONAL_MAC_ADDR_2,
+	DRV_TLV_LSO_MAX_OFFLOAD_SIZE,
+	DRV_TLV_LSO_MIN_SEGMENT_COUNT,
+	DRV_TLV_PROMISCUOUS_MODE,
+	DRV_TLV_TX_DESCRIPTORS_QUEUE_SIZE,
+	DRV_TLV_RX_DESCRIPTORS_QUEUE_SIZE,
+	DRV_TLV_NUM_OF_NET_QUEUE_VMQ_CFG,
+	DRV_TLV_FLEX_NIC_OUTER_VLAN_ID,
+	DRV_TLV_OS_DRIVER_STATES,
+	DRV_TLV_PXE_BOOT_PROGRESS,
+	/* Category 12: FC/FCoE Configuration */
+	DRV_TLV_NPIV_STATE,
+	DRV_TLV_NUM_OF_NPIV_IDS,
+	DRV_TLV_SWITCH_NAME,
+	DRV_TLV_SWITCH_PORT_NUM,
+	DRV_TLV_SWITCH_PORT_ID,
+	DRV_TLV_VENDOR_NAME,
+	DRV_TLV_SWITCH_MODEL,
+	DRV_TLV_SWITCH_FW_VER,
+	DRV_TLV_QOS_PRIORITY_PER_802_1P,
+	DRV_TLV_PORT_ALIAS,
+	DRV_TLV_PORT_STATE,
+	DRV_TLV_FIP_TX_DESCRIPTORS_QUEUE_SIZE,
+	DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_SIZE,
+	DRV_TLV_LINK_FAILURE_COUNT,
+	DRV_TLV_FCOE_BOOT_PROGRESS,
+	/* Category 13: iSCSI Configuration */
+	DRV_TLV_TARGET_LLMNR_ENABLED,
+	DRV_TLV_HEADER_DIGEST_FLAG_ENABLED,
+	DRV_TLV_DATA_DIGEST_FLAG_ENABLED,
+	DRV_TLV_AUTHENTICATION_METHOD,
+	DRV_TLV_ISCSI_BOOT_TARGET_PORTAL,
+	DRV_TLV_MAX_FRAME_SIZE,
+	DRV_TLV_PDU_TX_DESCRIPTORS_QUEUE_SIZE,
+	DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_SIZE,
+	DRV_TLV_ISCSI_BOOT_PROGRESS,
+	/* Category 20: Device Data */
+	DRV_TLV_PCIE_BUS_RX_UTILIZATION,
+	DRV_TLV_PCIE_BUS_TX_UTILIZATION,
+	DRV_TLV_DEVICE_CPU_CORES_UTILIZATION,
+	DRV_TLV_LAST_VALID_DCC_TLV_RECEIVED,
+	DRV_TLV_NCSI_RX_BYTES_RECEIVED,
+	DRV_TLV_NCSI_TX_BYTES_SENT,
+	/* Category 22: Base Port Data */
+	DRV_TLV_RX_DISCARDS,
+	DRV_TLV_RX_ERRORS,
+	DRV_TLV_TX_ERRORS,
+	DRV_TLV_TX_DISCARDS,
+	DRV_TLV_RX_FRAMES_RECEIVED,
+	DRV_TLV_TX_FRAMES_SENT,
+	/* Category 23: FC/FCoE Port Data */
+	DRV_TLV_RX_BROADCAST_PACKETS,
+	DRV_TLV_TX_BROADCAST_PACKETS,
+	/* Category 28: Base Function Data */
+	DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV4,
+	DRV_TLV_NUM_OFFLOADED_CONNECTIONS_TCP_IPV6,
+	DRV_TLV_TX_DESCRIPTOR_QUEUE_AVG_DEPTH,
+	DRV_TLV_RX_DESCRIPTORS_QUEUE_AVG_DEPTH,
+	DRV_TLV_PF_RX_FRAMES_RECEIVED,
+	DRV_TLV_RX_BYTES_RECEIVED,
+	DRV_TLV_PF_TX_FRAMES_SENT,
+	DRV_TLV_TX_BYTES_SENT,
+	DRV_TLV_IOV_OFFLOAD,
+	DRV_TLV_PCI_ERRORS_CAP_ID,
+	DRV_TLV_UNCORRECTABLE_ERROR_STATUS,
+	DRV_TLV_UNCORRECTABLE_ERROR_MASK,
+	DRV_TLV_CORRECTABLE_ERROR_STATUS,
+	DRV_TLV_CORRECTABLE_ERROR_MASK,
+	DRV_TLV_PCI_ERRORS_AECC_REGISTER,
+	DRV_TLV_TX_QUEUES_EMPTY,
+	DRV_TLV_RX_QUEUES_EMPTY,
+	DRV_TLV_TX_QUEUES_FULL,
+	DRV_TLV_RX_QUEUES_FULL,
+	/* Category 29: FC/FCoE Function Data */
+	DRV_TLV_FCOE_TX_DESCRIPTOR_QUEUE_AVG_DEPTH,
+	DRV_TLV_FCOE_RX_DESCRIPTORS_QUEUE_AVG_DEPTH,
+	DRV_TLV_FCOE_RX_FRAMES_RECEIVED,
+	DRV_TLV_FCOE_RX_BYTES_RECEIVED,
+	DRV_TLV_FCOE_TX_FRAMES_SENT,
+	DRV_TLV_FCOE_TX_BYTES_SENT,
+	DRV_TLV_CRC_ERROR_COUNT,
+	DRV_TLV_CRC_ERROR_1_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_CRC_ERROR_1_TIMESTAMP,
+	DRV_TLV_CRC_ERROR_2_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_CRC_ERROR_2_TIMESTAMP,
+	DRV_TLV_CRC_ERROR_3_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_CRC_ERROR_3_TIMESTAMP,
+	DRV_TLV_CRC_ERROR_4_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_CRC_ERROR_4_TIMESTAMP,
+	DRV_TLV_CRC_ERROR_5_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_CRC_ERROR_5_TIMESTAMP,
+	DRV_TLV_LOSS_OF_SYNC_ERROR_COUNT,
+	DRV_TLV_LOSS_OF_SIGNAL_ERRORS,
+	DRV_TLV_PRIMITIVE_SEQUENCE_PROTOCOL_ERROR_COUNT,
+	DRV_TLV_DISPARITY_ERROR_COUNT,
+	DRV_TLV_CODE_VIOLATION_ERROR_COUNT,
+	DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_1,
+	DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_2,
+	DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_3,
+	DRV_TLV_LAST_FLOGI_ISSUED_COMMON_PARAMETERS_WORD_4,
+	DRV_TLV_LAST_FLOGI_TIMESTAMP,
+	DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_1,
+	DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_2,
+	DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_3,
+	DRV_TLV_LAST_FLOGI_ACC_COMMON_PARAMETERS_WORD_4,
+	DRV_TLV_LAST_FLOGI_ACC_TIMESTAMP,
+	DRV_TLV_LAST_FLOGI_RJT,
+	DRV_TLV_LAST_FLOGI_RJT_TIMESTAMP,
+	DRV_TLV_FDISCS_SENT_COUNT,
+	DRV_TLV_FDISC_ACCS_RECEIVED,
+	DRV_TLV_FDISC_RJTS_RECEIVED,
+	DRV_TLV_PLOGI_SENT_COUNT,
+	DRV_TLV_PLOGI_ACCS_RECEIVED,
+	DRV_TLV_PLOGI_RJTS_RECEIVED,
+	DRV_TLV_PLOGI_1_SENT_DESTINATION_FC_ID,
+	DRV_TLV_PLOGI_1_TIMESTAMP,
+	DRV_TLV_PLOGI_2_SENT_DESTINATION_FC_ID,
+	DRV_TLV_PLOGI_2_TIMESTAMP,
+	DRV_TLV_PLOGI_3_SENT_DESTINATION_FC_ID,
+	DRV_TLV_PLOGI_3_TIMESTAMP,
+	DRV_TLV_PLOGI_4_SENT_DESTINATION_FC_ID,
+	DRV_TLV_PLOGI_4_TIMESTAMP,
+	DRV_TLV_PLOGI_5_SENT_DESTINATION_FC_ID,
+	DRV_TLV_PLOGI_5_TIMESTAMP,
+	DRV_TLV_PLOGI_1_ACC_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_PLOGI_1_ACC_TIMESTAMP,
+	DRV_TLV_PLOGI_2_ACC_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_PLOGI_2_ACC_TIMESTAMP,
+	DRV_TLV_PLOGI_3_ACC_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_PLOGI_3_ACC_TIMESTAMP,
+	DRV_TLV_PLOGI_4_ACC_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_PLOGI_4_ACC_TIMESTAMP,
+	DRV_TLV_PLOGI_5_ACC_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_PLOGI_5_ACC_TIMESTAMP,
+	DRV_TLV_LOGOS_ISSUED,
+	DRV_TLV_LOGO_ACCS_RECEIVED,
+	DRV_TLV_LOGO_RJTS_RECEIVED,
+	DRV_TLV_LOGO_1_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_LOGO_1_TIMESTAMP,
+	DRV_TLV_LOGO_2_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_LOGO_2_TIMESTAMP,
+	DRV_TLV_LOGO_3_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_LOGO_3_TIMESTAMP,
+	DRV_TLV_LOGO_4_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_LOGO_4_TIMESTAMP,
+	DRV_TLV_LOGO_5_RECEIVED_SOURCE_FC_ID,
+	DRV_TLV_LOGO_5_TIMESTAMP,
+	DRV_TLV_LOGOS_RECEIVED,
+	DRV_TLV_ACCS_ISSUED,
+	DRV_TLV_PRLIS_ISSUED,
+	DRV_TLV_ACCS_RECEIVED,
+	DRV_TLV_ABTS_SENT_COUNT,
+	DRV_TLV_ABTS_ACCS_RECEIVED,
+	DRV_TLV_ABTS_RJTS_RECEIVED,
+	DRV_TLV_ABTS_1_SENT_DESTINATION_FC_ID,
+	DRV_TLV_ABTS_1_TIMESTAMP,
+	DRV_TLV_ABTS_2_SENT_DESTINATION_FC_ID,
+	DRV_TLV_ABTS_2_TIMESTAMP,
+	DRV_TLV_ABTS_3_SENT_DESTINATION_FC_ID,
+	DRV_TLV_ABTS_3_TIMESTAMP,
+	DRV_TLV_ABTS_4_SENT_DESTINATION_FC_ID,
+	DRV_TLV_ABTS_4_TIMESTAMP,
+	DRV_TLV_ABTS_5_SENT_DESTINATION_FC_ID,
+	DRV_TLV_ABTS_5_TIMESTAMP,
+	DRV_TLV_RSCNS_RECEIVED,
+	DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_1,
+	DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_2,
+	DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_3,
+	DRV_TLV_LAST_RSCN_RECEIVED_N_PORT_4,
+	DRV_TLV_LUN_RESETS_ISSUED,
+	DRV_TLV_ABORT_TASK_SETS_ISSUED,
+	DRV_TLV_TPRLOS_SENT,
+	DRV_TLV_NOS_SENT_COUNT,
+	DRV_TLV_NOS_RECEIVED_COUNT,
+	DRV_TLV_OLS_COUNT,
+	DRV_TLV_LR_COUNT,
+	DRV_TLV_LRR_COUNT,
+	DRV_TLV_LIP_SENT_COUNT,
+	DRV_TLV_LIP_RECEIVED_COUNT,
+	DRV_TLV_EOFA_COUNT,
+	DRV_TLV_EOFNI_COUNT,
+	DRV_TLV_SCSI_STATUS_CHECK_CONDITION_COUNT,
+	DRV_TLV_SCSI_STATUS_CONDITION_MET_COUNT,
+	DRV_TLV_SCSI_STATUS_BUSY_COUNT,
+	DRV_TLV_SCSI_STATUS_INTERMEDIATE_COUNT,
+	DRV_TLV_SCSI_STATUS_INTERMEDIATE_CONDITION_MET_COUNT,
+	DRV_TLV_SCSI_STATUS_RESERVATION_CONFLICT_COUNT,
+	DRV_TLV_SCSI_STATUS_TASK_SET_FULL_COUNT,
+	DRV_TLV_SCSI_STATUS_ACA_ACTIVE_COUNT,
+	DRV_TLV_SCSI_STATUS_TASK_ABORTED_COUNT,
+	DRV_TLV_SCSI_CHECK_CONDITION_1_RECEIVED_SK_ASC_ASCQ,
+	DRV_TLV_SCSI_CHECK_1_TIMESTAMP,
+	DRV_TLV_SCSI_CHECK_CONDITION_2_RECEIVED_SK_ASC_ASCQ,
+	DRV_TLV_SCSI_CHECK_2_TIMESTAMP,
+	DRV_TLV_SCSI_CHECK_CONDITION_3_RECEIVED_SK_ASC_ASCQ,
+	DRV_TLV_SCSI_CHECK_3_TIMESTAMP,
+	DRV_TLV_SCSI_CHECK_CONDITION_4_RECEIVED_SK_ASC_ASCQ,
+	DRV_TLV_SCSI_CHECK_4_TIMESTAMP,
+	DRV_TLV_SCSI_CHECK_CONDITION_5_RECEIVED_SK_ASC_ASCQ,
+	DRV_TLV_SCSI_CHECK_5_TIMESTAMP,
+	/* Category 30: iSCSI Function Data */
+	DRV_TLV_PDU_TX_DESCRIPTOR_QUEUE_AVG_DEPTH,
+	DRV_TLV_PDU_RX_DESCRIPTORS_QUEUE_AVG_DEPTH,
+	DRV_TLV_ISCSI_PDU_RX_FRAMES_RECEIVED,
+	DRV_TLV_ISCSI_PDU_RX_BYTES_RECEIVED,
+	DRV_TLV_ISCSI_PDU_TX_FRAMES_SENT,
+	DRV_TLV_ISCSI_PDU_TX_BYTES_SENT,
+	DRV_TLV_RDMA_DRV_VERSION
+};
+
+#define I2C_DEV_ADDR_A2				  0xa2
+#define SFP_EEPROM_A2_TEMPERATURE_ADDR		  0x60
+#define SFP_EEPROM_A2_TEMPERATURE_SIZE		  2
+#define SFP_EEPROM_A2_VCC_ADDR				  0x62
+#define SFP_EEPROM_A2_VCC_SIZE				  2
+#define SFP_EEPROM_A2_TX_BIAS_ADDR			  0x64
+#define SFP_EEPROM_A2_TX_BIAS_SIZE			  2
+#define SFP_EEPROM_A2_TX_POWER_ADDR			  0x66
+#define SFP_EEPROM_A2_TX_POWER_SIZE			  2
+#define SFP_EEPROM_A2_RX_POWER_ADDR			  0x68
+#define SFP_EEPROM_A2_RX_POWER_SIZE			  2
+
+#define I2C_DEV_ADDR_A0				  0xa0
+#define QSFP_EEPROM_A0_TEMPERATURE_ADDR		   0x16
+#define QSFP_EEPROM_A0_TEMPERATURE_SIZE		   2
+#define QSFP_EEPROM_A0_VCC_ADDR				   0x1a
+#define QSFP_EEPROM_A0_VCC_SIZE				   2
+#define QSFP_EEPROM_A0_TX1_BIAS_ADDR			   0x2a
+#define QSFP_EEPROM_A0_TX1_BIAS_SIZE			   2
+#define QSFP_EEPROM_A0_TX1_POWER_ADDR		   0x32
+#define QSFP_EEPROM_A0_TX1_POWER_SIZE		   2
+#define QSFP_EEPROM_A0_RX1_POWER_ADDR		   0x22
+#define QSFP_EEPROM_A0_RX1_POWER_SIZE		   2
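The SFP/QSFP constants above give the I2C device, register offset and byte
count for each diagnostic word; the values themselves are 16-bit big-endian.
Decoding temperature, assuming the conventional SFF-8472 signed 8.8
fixed-point encoding (1/256 degC per LSB), looks roughly like this (the raw
bytes are invented sample data):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Two bytes as read from I2C device 0xa2 at offset 0x60 */
	uint8_t buf[2] = { 0x1a, 0x80 };

	/* Big-endian, signed 8.8 fixed point: 1/256 degC per LSB
	 * (the usual SFF-8472 encoding, assumed here). */
	int16_t raw = (int16_t)(((uint16_t)buf[0] << 8) | buf[1]);

	printf("module temperature: %.2f C\n", raw / 256.0);	/* 26.50 */
	return 0;
}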
+
+/**************************************
+ *     eDiag NETWORK Mode (DON)
+ **************************************/
+
+#define ETH_DON_TYPE		   0x0911    /* NETWORK Mode for QeDiag */
+#define ETH_DON_TRACE_TYPE    0x0912	/* NETWORK Mode Continuous Trace */
+
+#define	DON_RESP_UNKNOWN_CMD_ID	0x10	/* Response Error */
+
+/* Op Codes, Response is Op Code+1 */
+
+#define	DON_REG_READ_REQ_CMD_ID			0x11
+#define	DON_REG_WRITE_REQ_CMD_ID		0x22
+#define	DON_CHALLENGE_REQ_CMD_ID		0x33
+#define	DON_NVM_READ_REQ_CMD_ID			0x44
+#define	DON_BLOCK_READ_REQ_CMD_ID		0x55
+
+#define	DON_MFW_MODE_TRACE_CONTINUOUS_ID	0x70
+
+#if defined(MFW) || defined(DIAG) || defined(WINEDIAG)
+
+#ifndef UEFI
+#if defined(_MSC_VER)
+#pragma pack(push, 1)
+#else
+#pragma pack(1)
+#endif
+#endif
+
+typedef struct {
+	u8 dst_addr[6];
+	u8 src_addr[6];
+	u16 ether_type;
+
+	/* DON Message data starts here, after L2 header		*/
+	/* Do not change alignment to keep backward compatibility */
+	u16 cmd_id;			/* Op code and response code */
+
+	union {
+		struct {		/* DON Commands */
+			u32 address;
+			u32 val;
+			u32 resp_status;
+		};
+		struct {		/* DON Traces	*/
+			u16 mcp_clock;		/* MCP Clock in MHz		*/
+			u16 trace_size;		/* Trace size in bytes		*/
+
+			u32 seconds;		/* Seconds since last reset	*/
+			u32 ticks;		/* Timestamp (NOW)		*/
+		};
+	};
+	union {
+		u8 digest[32]; /* SHA256 */
+		u8 data[32];
+		/* u32 dword[8]; */
+	};
+} don_packet_t;
+
+#ifndef UEFI
+#if defined(_MSC_VER)
+#pragma pack(pop)
+#else
+#pragma pack(0)
+#endif
+#endif		/* #ifndef UEFI */
+
+#endif		/* #if defined(MFW) || defined(DIAG) || defined(WINEDIAG) */
+
 #endif				/* MCP_PUBLIC_H */
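Because don_packet_t travels on the wire, its layout is pinned with pack(1)
and must not drift between builds; a compile-time size check is a cheap
guard. The sketch below restates the struct (field names copied from the
header, using C11 anonymous members) and asserts the size implied by the
declared fields, 6 + 6 + 2 + 2 + 12 + 32 = 60 bytes:

#include <stdint.h>

#pragma pack(push, 1)
struct don_packet {
	uint8_t  dst_addr[6];
	uint8_t  src_addr[6];
	uint16_t ether_type;
	uint16_t cmd_id;		/* op code / response code */
	union {
		struct {		/* commands */
			uint32_t address, val, resp_status;
		};
		struct {		/* traces */
			uint16_t mcp_clock, trace_size;
			uint32_t seconds, ticks;
		};
	};
	union {
		uint8_t digest[32];	/* SHA256 */
		uint8_t data[32];
	};
};
#pragma pack(pop)

/* pack(1) forbids padding, so the wire size is exactly the sum of the
 * fields: 6 + 6 + 2 + 2 + 12 + 32 = 60 bytes. */
_Static_assert(sizeof(struct don_packet) == 60, "DON packet layout drifted");

int main(void)
{
	return 0;
}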
diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index daa5437dd..806b8e9a4 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 /****************************************************************************
  *
  * Name:        nvm_cfg.h
@@ -11,7 +11,7 @@
  * Description: NVM config file - Generated file from nvm cfg excel.
  *              DO NOT MODIFY !!!
  *
- * Created:     1/6/2019
+ * Created:     8/4/2020
  *
  ****************************************************************************/
 
@@ -19,18 +19,18 @@
 #define NVM_CFG_H
 
 
-#define NVM_CFG_version 0x84500
+#define NVM_CFG_version 0x86009
 
-#define NVM_CFG_new_option_seq 45
+#define NVM_CFG_new_option_seq 60
 
 #define NVM_CFG_removed_option_seq 4
 
-#define NVM_CFG_updated_value_seq 13
+#define NVM_CFG_updated_value_seq 22
 
 struct nvm_cfg_mac_address {
 	u32 mac_addr_hi;
-		#define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF
-		#define NVM_CFG_MAC_ADDRESS_HI_OFFSET 0
+		#define NVM_CFG_MAC_ADDRESS_HI_MASK                             0x0000FFFF
+		#define NVM_CFG_MAC_ADDRESS_HI_OFFSET                           0
 	u32 mac_addr_lo;
 };
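nvm_cfg_mac_address splits the MAC across two words: per the HI mask, only
the low 16 bits of mac_addr_hi are used, conventionally holding the two
most-significant octets, while mac_addr_lo holds the remaining four. A sketch
that reassembles a printable address under that assumption (the stored
values are invented):

#include <stdint.h>
#include <stdio.h>

#define NVM_CFG_MAC_ADDRESS_HI_MASK	0x0000FFFF
#define NVM_CFG_MAC_ADDRESS_HI_OFFSET	0

struct nvm_cfg_mac_address {
	uint32_t mac_addr_hi;
	uint32_t mac_addr_lo;
};

int main(void)
{
	/* Invented NVM contents for MAC 00:0e:1e:12:34:56 */
	struct nvm_cfg_mac_address m = { 0x0000000e, 0x1e123456 };
	uint32_t hi = (m.mac_addr_hi & NVM_CFG_MAC_ADDRESS_HI_MASK) >>
		      NVM_CFG_MAC_ADDRESS_HI_OFFSET;

	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
	       (hi >> 8) & 0xff, hi & 0xff,
	       (m.mac_addr_lo >> 24) & 0xff, (m.mac_addr_lo >> 16) & 0xff,
	       (m.mac_addr_lo >> 8) & 0xff, m.mac_addr_lo & 0xff);
	return 0;
}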
 
@@ -38,2555 +38,2636 @@ struct nvm_cfg_mac_address {
  * nvm_cfg1 structs
  ******************************************/
 struct nvm_cfg1_glob {
-	u32 generic_cont0; /* 0x0 */
-		#define NVM_CFG1_GLOB_BOARD_SWAP_MASK 0x0000000F
-		#define NVM_CFG1_GLOB_BOARD_SWAP_OFFSET 0
-		#define NVM_CFG1_GLOB_BOARD_SWAP_NONE 0x0
-		#define NVM_CFG1_GLOB_BOARD_SWAP_PATH 0x1
-		#define NVM_CFG1_GLOB_BOARD_SWAP_PORT 0x2
-		#define NVM_CFG1_GLOB_BOARD_SWAP_BOTH 0x3
-		#define NVM_CFG1_GLOB_MF_MODE_MASK 0x00000FF0
-		#define NVM_CFG1_GLOB_MF_MODE_OFFSET 4
-		#define NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED 0x0
-		#define NVM_CFG1_GLOB_MF_MODE_DEFAULT 0x1
-		#define NVM_CFG1_GLOB_MF_MODE_SPIO4 0x2
-		#define NVM_CFG1_GLOB_MF_MODE_NPAR1_0 0x3
-		#define NVM_CFG1_GLOB_MF_MODE_NPAR1_5 0x4
-		#define NVM_CFG1_GLOB_MF_MODE_NPAR2_0 0x5
-		#define NVM_CFG1_GLOB_MF_MODE_BD 0x6
-		#define NVM_CFG1_GLOB_MF_MODE_UFP 0x7
-		#define NVM_CFG1_GLOB_MF_MODE_DCI_NPAR 0x8
-		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK 0x00001000
-		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_OFFSET 12
-		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_DISABLED 0x0
-		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_ENABLED 0x1
-		#define NVM_CFG1_GLOB_AVS_MARGIN_LOW_MASK 0x001FE000
-		#define NVM_CFG1_GLOB_AVS_MARGIN_LOW_OFFSET 13
-		#define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_MASK 0x1FE00000
-		#define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_OFFSET 21
-		#define NVM_CFG1_GLOB_ENABLE_SRIOV_MASK 0x20000000
-		#define NVM_CFG1_GLOB_ENABLE_SRIOV_OFFSET 29
-		#define NVM_CFG1_GLOB_ENABLE_SRIOV_DISABLED 0x0
-		#define NVM_CFG1_GLOB_ENABLE_SRIOV_ENABLED 0x1
-		#define NVM_CFG1_GLOB_ENABLE_ATC_MASK 0x40000000
-		#define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30
-		#define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0
-		#define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1
-		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_MASK \
-								0x80000000
-		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_OFFSET 31
-		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_DISABLED \
-								0x0
-		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_ENABLED 0x1
-	u32 engineering_change[3]; /* 0x4 */
-	u32 manufacturing_id; /* 0x10 */
-	u32 serial_number[4]; /* 0x14 */
-	u32 pcie_cfg; /* 0x24 */
-		#define NVM_CFG1_GLOB_PCI_GEN_MASK 0x00000003
-		#define NVM_CFG1_GLOB_PCI_GEN_OFFSET 0
-		#define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN1 0x0
-		#define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN2 0x1
-		#define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN3 0x2
-		#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_MASK 0x00000004
-		#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_OFFSET 2
-		#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_DISABLED 0x0
-		#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_ENABLED 0x1
-		#define NVM_CFG1_GLOB_ASPM_SUPPORT_MASK 0x00000018
-		#define NVM_CFG1_GLOB_ASPM_SUPPORT_OFFSET 3
-		#define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_ENABLED 0x0
-		#define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_DISABLED 0x1
-		#define NVM_CFG1_GLOB_ASPM_SUPPORT_L1_DISABLED 0x2
-		#define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_DISABLED 0x3
-		#define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_MASK \
-			0x00000020
-		#define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_OFFSET 5
-		#define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_MASK 0x000003C0
-		#define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_OFFSET 6
-		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_MASK 0x00001C00
-		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_OFFSET 10
-		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_HW 0x0
-		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_0DB 0x1
-		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_3_5DB 0x2
-		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_6_0DB 0x3
-		#define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_MASK 0x001FE000
-		#define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_OFFSET 13
-		#define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_MASK 0x1FE00000
-		#define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_OFFSET 21
-		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_MASK 0x60000000
-		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_OFFSET 29
-	/*  Set the duration, in sec, fan failure signal should be sampled */
-		#define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_MASK \
-			0x80000000
-		#define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_OFFSET 31
-	u32 mgmt_traffic; /* 0x28 */
-		#define NVM_CFG1_GLOB_RESERVED60_MASK 0x00000001
-		#define NVM_CFG1_GLOB_RESERVED60_OFFSET 0
-		#define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_MASK 0x000001FE
-		#define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_OFFSET 1
-		#define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_MASK 0x0001FE00
-		#define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_OFFSET 9
-		#define NVM_CFG1_GLOB_SMBUS_ADDRESS_MASK 0x01FE0000
-		#define NVM_CFG1_GLOB_SMBUS_ADDRESS_OFFSET 17
-		#define NVM_CFG1_GLOB_SIDEBAND_MODE_MASK 0x06000000
-		#define NVM_CFG1_GLOB_SIDEBAND_MODE_OFFSET 25
-		#define NVM_CFG1_GLOB_SIDEBAND_MODE_DISABLED 0x0
-		#define NVM_CFG1_GLOB_SIDEBAND_MODE_RMII 0x1
-		#define NVM_CFG1_GLOB_SIDEBAND_MODE_SGMII 0x2
-		#define NVM_CFG1_GLOB_AUX_MODE_MASK 0x78000000
-		#define NVM_CFG1_GLOB_AUX_MODE_OFFSET 27
-		#define NVM_CFG1_GLOB_AUX_MODE_DEFAULT 0x0
-		#define NVM_CFG1_GLOB_AUX_MODE_SMBUS_ONLY 0x1
+	u32 generic_cont0;                                                  /* 0x0 */
+		#define NVM_CFG1_GLOB_BOARD_SWAP_MASK                           0x0000000F
+		#define NVM_CFG1_GLOB_BOARD_SWAP_OFFSET                         0
+		#define NVM_CFG1_GLOB_BOARD_SWAP_NONE                           0x0
+		#define NVM_CFG1_GLOB_BOARD_SWAP_PATH                           0x1
+		#define NVM_CFG1_GLOB_BOARD_SWAP_PORT                           0x2
+		#define NVM_CFG1_GLOB_BOARD_SWAP_BOTH                           0x3
+		#define NVM_CFG1_GLOB_MF_MODE_MASK                              0x00000FF0
+		#define NVM_CFG1_GLOB_MF_MODE_OFFSET                            4
+		#define NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED                        0x0
+		#define NVM_CFG1_GLOB_MF_MODE_DEFAULT                           0x1
+		#define NVM_CFG1_GLOB_MF_MODE_SPIO4                             0x2
+		#define NVM_CFG1_GLOB_MF_MODE_NPAR1_0                           0x3
+		#define NVM_CFG1_GLOB_MF_MODE_NPAR1_5                           0x4
+		#define NVM_CFG1_GLOB_MF_MODE_NPAR2_0                           0x5
+		#define NVM_CFG1_GLOB_MF_MODE_BD                                0x6
+		#define NVM_CFG1_GLOB_MF_MODE_UFP                               0x7
+		#define NVM_CFG1_GLOB_MF_MODE_DCI_NPAR                          0x8
+		#define NVM_CFG1_GLOB_MF_MODE_QINQ                              0x9
+		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK              0x00001000
+		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_OFFSET            12
+		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_DISABLED          0x0
+		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_ENABLED           0x1
+		#define NVM_CFG1_GLOB_AVS_MARGIN_LOW_MASK                       0x001FE000
+		#define NVM_CFG1_GLOB_AVS_MARGIN_LOW_OFFSET                     13
+		#define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_MASK                      0x1FE00000
+		#define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_OFFSET                    21
+		#define NVM_CFG1_GLOB_ENABLE_SRIOV_MASK                         0x20000000
+		#define NVM_CFG1_GLOB_ENABLE_SRIOV_OFFSET                       29
+		#define NVM_CFG1_GLOB_ENABLE_SRIOV_DISABLED                     0x0
+		#define NVM_CFG1_GLOB_ENABLE_SRIOV_ENABLED                      0x1
+		#define NVM_CFG1_GLOB_ENABLE_ATC_MASK                           0x40000000
+		#define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET                         30
+		#define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED                       0x0
+		#define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED                        0x1
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_MASK       0x80000000
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_OFFSET     31
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_DISABLED   0x0
+		#define NVM_CFG1_GLOB_RESERVED__M_WAS_CLOCK_SLOWDOWN_ENABLED    0x1
+	u32 engineering_change[3];                                          /* 0x4 */
+	u32 manufacturing_id;                                              /* 0x10 */
+	u32 serial_number[4];                                              /* 0x14 */
+	u32 pcie_cfg;                                                      /* 0x24 */
+		#define NVM_CFG1_GLOB_PCI_GEN_MASK                              0x00000003
+		#define NVM_CFG1_GLOB_PCI_GEN_OFFSET                            0
+		#define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN1                          0x0
+		#define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN2                          0x1
+		#define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN3                          0x2
+		#define NVM_CFG1_GLOB_PCI_GEN_AHP_PCI_GEN4                      0x3
+		#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_MASK                   0x00000004
+		#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_OFFSET                 2
+		#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_DISABLED               0x0
+		#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_ENABLED                0x1
+		#define NVM_CFG1_GLOB_ASPM_SUPPORT_MASK                         0x00000018
+		#define NVM_CFG1_GLOB_ASPM_SUPPORT_OFFSET                       3
+		#define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_ENABLED               0x0
+		#define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_DISABLED                 0x1
+		#define NVM_CFG1_GLOB_ASPM_SUPPORT_L1_DISABLED                  0x2
+		#define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_DISABLED              0x3
+		#define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_MASK     0x00000020
+		#define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_OFFSET   5
+		#define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_MASK                 0x000003C0
+		#define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_OFFSET               6
+		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_MASK                     0x00001C00
+		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_OFFSET                   10
+		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_HW                       0x0
+		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_0DB                      0x1
+		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_3_5DB                    0x2
+		#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_6_0DB                    0x3
+		#define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_MASK                     0x001FE000
+		#define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_OFFSET                   13
+		#define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_MASK                     0x1FE00000
+		#define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_OFFSET                   21
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_MASK                      0x60000000
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_OFFSET                    29
+	/* Set the duration, in seconds, the fan failure signal should be
+	 * sampled
+	 */
+		#define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_MASK        0x80000000
+		#define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_OFFSET      31
+	u32 mgmt_traffic;                                                  /* 0x28 */
+		#define NVM_CFG1_GLOB_RESERVED60_MASK                           0x00000001
+		#define NVM_CFG1_GLOB_RESERVED60_OFFSET                         0
+		#define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_MASK                     0x000001FE
+		#define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_OFFSET                   1
+		#define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_MASK                     0x0001FE00
+		#define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_OFFSET                   9
+		#define NVM_CFG1_GLOB_SMBUS_ADDRESS_MASK                        0x01FE0000
+		#define NVM_CFG1_GLOB_SMBUS_ADDRESS_OFFSET                      17
+		#define NVM_CFG1_GLOB_SIDEBAND_MODE_MASK                        0x06000000
+		#define NVM_CFG1_GLOB_SIDEBAND_MODE_OFFSET                      25
+		#define NVM_CFG1_GLOB_SIDEBAND_MODE_DISABLED                    0x0
+		#define NVM_CFG1_GLOB_SIDEBAND_MODE_RMII                        0x1
+		#define NVM_CFG1_GLOB_SIDEBAND_MODE_SGMII                       0x2
+		#define NVM_CFG1_GLOB_AUX_MODE_MASK                             0x78000000
+		#define NVM_CFG1_GLOB_AUX_MODE_OFFSET                           27
+		#define NVM_CFG1_GLOB_AUX_MODE_DEFAULT                          0x0
+		#define NVM_CFG1_GLOB_AUX_MODE_SMBUS_ONLY                       0x1
 	/*  Indicates whether external thermal sensor is available */
-		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_MASK 0x80000000
-		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_OFFSET 31
-		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_DISABLED 0x0
-		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ENABLED 0x1
-	u32 core_cfg; /* 0x2C */
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET 0
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_2X40G 0x0
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X50G 0x1
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_1X100G 0x2
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X10G_F 0x3
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X10G_E 0x4
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X20G 0x5
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X40G 0xB
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G 0xC
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X25G 0xD
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G 0xE
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X10G 0xF
-		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G_LIO2 0x10
-		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_MASK 0x00000100
-		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_OFFSET 8
-		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_DISABLED 0x0
-		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_ENABLED 0x1
-		#define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_MASK 0x00000200
-		#define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_OFFSET 9
-		#define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_DISABLED 0x0
-		#define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_ENABLED 0x1
-		#define NVM_CFG1_GLOB_MPS10_CORE_ADDR_MASK 0x0003FC00
-		#define NVM_CFG1_GLOB_MPS10_CORE_ADDR_OFFSET 10
-		#define NVM_CFG1_GLOB_MPS25_CORE_ADDR_MASK 0x03FC0000
-		#define NVM_CFG1_GLOB_MPS25_CORE_ADDR_OFFSET 18
-		#define NVM_CFG1_GLOB_AVS_MODE_MASK 0x1C000000
-		#define NVM_CFG1_GLOB_AVS_MODE_OFFSET 26
-		#define NVM_CFG1_GLOB_AVS_MODE_CLOSE_LOOP 0x0
-		#define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_CFG 0x1
-		#define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_OTP 0x2
-		#define NVM_CFG1_GLOB_AVS_MODE_DISABLED 0x3
-		#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_MASK 0x60000000
-		#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_OFFSET 29
-		#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_DISABLED 0x0
-		#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_ENABLED 0x1
-	u32 e_lane_cfg1; /* 0x30 */
-		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
-		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
-		#define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0
-		#define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4
-		#define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00
-		#define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8
-		#define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000
-		#define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12
-		#define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000
-		#define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16
-		#define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000
-		#define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20
-		#define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000
-		#define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24
-		#define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000
-		#define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28
-	u32 e_lane_cfg2; /* 0x34 */
-		#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001
-		#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0
-		#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002
-		#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1
-		#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004
-		#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2
-		#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008
-		#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3
-		#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010
-		#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4
-		#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020
-		#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5
-		#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040
-		#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6
-		#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080
-		#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7
-		#define NVM_CFG1_GLOB_SMBUS_MODE_MASK 0x00000F00
-		#define NVM_CFG1_GLOB_SMBUS_MODE_OFFSET 8
-		#define NVM_CFG1_GLOB_SMBUS_MODE_DISABLED 0x0
-		#define NVM_CFG1_GLOB_SMBUS_MODE_100KHZ 0x1
-		#define NVM_CFG1_GLOB_SMBUS_MODE_400KHZ 0x2
-		#define NVM_CFG1_GLOB_NCSI_MASK 0x0000F000
-		#define NVM_CFG1_GLOB_NCSI_OFFSET 12
-		#define NVM_CFG1_GLOB_NCSI_DISABLED 0x0
-		#define NVM_CFG1_GLOB_NCSI_ENABLED 0x1
+		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_MASK              0x80000000
+		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_OFFSET            31
+		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_DISABLED          0x0
+		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ENABLED           0x1
+	u32 core_cfg;                                                      /* 0x2C */
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK                    0x000000FF
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET                  0
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_2X40G                0x0
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X50G                   0x1
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_1X100G               0x2
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X10G_F                 0x3
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X10G_E              0x4
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X20G                0x5
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X40G                   0xB
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G                   0xC
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X25G                   0xD
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G                   0xE
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X10G                   0xF
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G_LIO2              0x10
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_2X50G_R1            0x11
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_4X50G_R1            0x12
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_1X100G_R2           0x13
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_2X100G_R2           0x14
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_AHP_1X100G_R4           0x15
+		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_MASK             0x00000100
+		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_OFFSET           8
+		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_DISABLED         0x0
+		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_ENABLED          0x1
+		#define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_MASK             0x00000200
+		#define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_OFFSET           9
+		#define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_DISABLED         0x0
+		#define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_ENABLED          0x1
+		#define NVM_CFG1_GLOB_MPS10_CORE_ADDR_MASK                      0x0003FC00
+		#define NVM_CFG1_GLOB_MPS10_CORE_ADDR_OFFSET                    10
+		#define NVM_CFG1_GLOB_MPS25_CORE_ADDR_MASK                      0x03FC0000
+		#define NVM_CFG1_GLOB_MPS25_CORE_ADDR_OFFSET                    18
+		#define NVM_CFG1_GLOB_AVS_MODE_MASK                             0x1C000000
+		#define NVM_CFG1_GLOB_AVS_MODE_OFFSET                           26
+		#define NVM_CFG1_GLOB_AVS_MODE_CLOSE_LOOP                       0x0
+		#define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_CFG                    0x1
+		#define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_OTP                    0x2
+		#define NVM_CFG1_GLOB_AVS_MODE_DISABLED                         0x3
+		#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_MASK                 0x60000000
+		#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_OFFSET               29
+		#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_DISABLED             0x0
+		#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_ENABLED              0x1
+		#define NVM_CFG1_GLOB_DCI_SUPPORT_MASK                          0x80000000
+		#define NVM_CFG1_GLOB_DCI_SUPPORT_OFFSET                        31
+		#define NVM_CFG1_GLOB_DCI_SUPPORT_DISABLED                      0x0
+		#define NVM_CFG1_GLOB_DCI_SUPPORT_ENABLED                       0x1
+	u32 e_lane_cfg1;                                                   /* 0x30 */
+		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK                        0x0000000F
+		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET                      0
+		#define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK                        0x000000F0
+		#define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET                      4
+		#define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK                        0x00000F00
+		#define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET                      8
+		#define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK                        0x0000F000
+		#define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET                      12
+		#define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK                        0x000F0000
+		#define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET                      16
+		#define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK                        0x00F00000
+		#define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET                      20
+		#define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK                        0x0F000000
+		#define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET                      24
+		#define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK                        0xF0000000
+		#define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET                      28
+	u32 e_lane_cfg2;                                                   /* 0x34 */
+		#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK                    0x00000001
+		#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET                  0
+		#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK                    0x00000002
+		#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET                  1
+		#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK                    0x00000004
+		#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET                  2
+		#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK                    0x00000008
+		#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET                  3
+		#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK                    0x00000010
+		#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET                  4
+		#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK                    0x00000020
+		#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET                  5
+		#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK                    0x00000040
+		#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET                  6
+		#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK                    0x00000080
+		#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET                  7
+		#define NVM_CFG1_GLOB_SMBUS_MODE_MASK                           0x00000F00
+		#define NVM_CFG1_GLOB_SMBUS_MODE_OFFSET                         8
+		#define NVM_CFG1_GLOB_SMBUS_MODE_DISABLED                       0x0
+		#define NVM_CFG1_GLOB_SMBUS_MODE_100KHZ                         0x1
+		#define NVM_CFG1_GLOB_SMBUS_MODE_400KHZ                         0x2
+		#define NVM_CFG1_GLOB_NCSI_MASK                                 0x0000F000
+		#define NVM_CFG1_GLOB_NCSI_OFFSET                               12
+		#define NVM_CFG1_GLOB_NCSI_DISABLED                             0x0
+		#define NVM_CFG1_GLOB_NCSI_ENABLED                              0x1
 	/*  Maximum advertised pcie link width */
-		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_MASK 0x000F0000
-		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_OFFSET 16
-		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_BB_16_LANES 0x0
-		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_1_LANE 0x1
-		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_2_LANES 0x2
-		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_4_LANES 0x3
-		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_8_LANES 0x4
+		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_MASK                       0x000F0000
+		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_OFFSET                     16
+		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_BB_16_AHP_16_LANES         0x0
+		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_1_LANE                     0x1
+		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_2_LANES                    0x2
+		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_4_LANES                    0x3
+		#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_8_LANES                    0x4
 	/*  ASPM L1 mode */
-		#define NVM_CFG1_GLOB_ASPM_L1_MODE_MASK 0x00300000
-		#define NVM_CFG1_GLOB_ASPM_L1_MODE_OFFSET 20
-		#define NVM_CFG1_GLOB_ASPM_L1_MODE_FORCED 0x0
-		#define NVM_CFG1_GLOB_ASPM_L1_MODE_DYNAMIC_LOW_LATENCY 0x1
-		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_MASK 0x01C00000
-		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_OFFSET 22
-		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_DISABLED 0x0
-		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_I2C 0x1
-		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_ONLY 0x2
-		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_SMBUS 0x3
-		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_MASK \
-			0x06000000
-		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_OFFSET 25
-		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_DISABLE 0x0
-		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_INTERNAL 0x1
-		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_EXTERNAL 0x2
-		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_BOTH 0x3
+		#define NVM_CFG1_GLOB_ASPM_L1_MODE_MASK                         0x00300000
+		#define NVM_CFG1_GLOB_ASPM_L1_MODE_OFFSET                       20
+		#define NVM_CFG1_GLOB_ASPM_L1_MODE_FORCED                       0x0
+		#define NVM_CFG1_GLOB_ASPM_L1_MODE_DYNAMIC_LOW_LATENCY          0x1
+		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_MASK                  0x01C00000
+		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_OFFSET                22
+		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_DISABLED              0x0
+		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_I2C           0x1
+		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_ONLY              0x2
+		#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_SMBUS         0x3
+		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_MASK          0x06000000
+		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_OFFSET        25
+		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_DISABLE       0x0
+		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_INTERNAL      0x1
+		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_EXTERNAL      0x2
+		#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_BOTH          0x3
 	/*  Set the PLDM sensor modes */
-		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_MASK 0x38000000
-		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_OFFSET 27
-		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0
-		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1
-		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2
+		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_MASK                     0x38000000
+		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_OFFSET                   27
+		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL                 0x0
+		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL                 0x1
+		#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH                     0x2
+	/*  Enable VDM interface */
+		#define NVM_CFG1_GLOB_PCIE_VDM_ENABLED_MASK                     0x40000000
+		#define NVM_CFG1_GLOB_PCIE_VDM_ENABLED_OFFSET                   30
+		#define NVM_CFG1_GLOB_PCIE_VDM_ENABLED_DISABLED                 0x0
+		#define NVM_CFG1_GLOB_PCIE_VDM_ENABLED_ENABLED                  0x1
 	/*  ROL enable */
-		#define NVM_CFG1_GLOB_RESET_ON_LAN_MASK 0x80000000
-		#define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET 31
-		#define NVM_CFG1_GLOB_RESET_ON_LAN_DISABLED 0x0
-		#define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED 0x1
-	u32 f_lane_cfg1; /* 0x38 */
-		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
-		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
-		#define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0
-		#define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4
-		#define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00
-		#define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8
-		#define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000
-		#define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12
-		#define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000
-		#define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16
-		#define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000
-		#define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20
-		#define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000
-		#define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24
-		#define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000
-		#define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28
-	u32 f_lane_cfg2; /* 0x3C */
-		#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001
-		#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0
-		#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002
-		#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1
-		#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004
-		#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2
-		#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008
-		#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3
-		#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010
-		#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4
-		#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020
-		#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5
-		#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040
-		#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6
-		#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080
-		#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_MASK                         0x80000000
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_OFFSET                       31
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_DISABLED                     0x0
+		#define NVM_CFG1_GLOB_RESET_ON_LAN_ENABLED                      0x1
+	u32 f_lane_cfg1;                                                   /* 0x38 */
+		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK                        0x0000000F
+		#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET                      0
+		#define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK                        0x000000F0
+		#define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET                      4
+		#define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK                        0x00000F00
+		#define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET                      8
+		#define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK                        0x0000F000
+		#define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET                      12
+		#define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK                        0x000F0000
+		#define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET                      16
+		#define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK                        0x00F00000
+		#define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET                      20
+		#define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK                        0x0F000000
+		#define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET                      24
+		#define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK                        0xF0000000
+		#define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET                      28
+	u32 f_lane_cfg2;                                                   /* 0x3C */
+		#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK                    0x00000001
+		#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET                  0
+		#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK                    0x00000002
+		#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET                  1
+		#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK                    0x00000004
+		#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET                  2
+		#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK                    0x00000008
+		#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET                  3
+		#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK                    0x00000010
+		#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET                  4
+		#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK                    0x00000020
+		#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET                  5
+		#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK                    0x00000040
+		#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET                  6
+		#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK                    0x00000080
+		#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET                  7
 	/*  Control the period between two successive checks */
-		#define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_MASK \
-			0x0000FF00
-		#define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_OFFSET 8
+		#define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_MASK    0x0000FF00
+		#define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_OFFSET  8
 	/*  Set shutdown temperature */
-		#define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_MASK \
-			0x00FF0000
-		#define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_OFFSET 16
+		#define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_MASK       0x00FF0000
+		#define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_OFFSET     16
 	/*  Set max. count for over operational temperature */
-		#define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_OFFSET 24
-	u32 mps10_preemphasis; /* 0x40 */
-		#define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24
-	u32 mps10_driver_current; /* 0x44 */
-		#define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24
-	u32 mps25_preemphasis; /* 0x48 */
-		#define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24
-	u32 mps25_driver_current; /* 0x4C */
-		#define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24
-	u32 pci_id; /* 0x50 */
-		#define NVM_CFG1_GLOB_VENDOR_ID_MASK 0x0000FFFF
-		#define NVM_CFG1_GLOB_VENDOR_ID_OFFSET 0
+		#define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_MASK             0xFF000000
+		#define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_OFFSET           24
+	u32 mps10_preemphasis;                                             /* 0x40 */
+		#define NVM_CFG1_GLOB_LANE0_PREEMP_MASK                         0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET                       0
+		#define NVM_CFG1_GLOB_LANE1_PREEMP_MASK                         0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET                       8
+		#define NVM_CFG1_GLOB_LANE2_PREEMP_MASK                         0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET                       16
+		#define NVM_CFG1_GLOB_LANE3_PREEMP_MASK                         0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET                       24
+	u32 mps10_driver_current;                                          /* 0x44 */
+		#define NVM_CFG1_GLOB_LANE0_AMP_MASK                            0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_AMP_OFFSET                          0
+		#define NVM_CFG1_GLOB_LANE1_AMP_MASK                            0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_AMP_OFFSET                          8
+		#define NVM_CFG1_GLOB_LANE2_AMP_MASK                            0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_AMP_OFFSET                          16
+		#define NVM_CFG1_GLOB_LANE3_AMP_MASK                            0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_AMP_OFFSET                          24
+	u32 mps25_preemphasis;                                             /* 0x48 */
+		#define NVM_CFG1_GLOB_LANE0_PREEMP_MASK                         0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET                       0
+		#define NVM_CFG1_GLOB_LANE1_PREEMP_MASK                         0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET                       8
+		#define NVM_CFG1_GLOB_LANE2_PREEMP_MASK                         0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET                       16
+		#define NVM_CFG1_GLOB_LANE3_PREEMP_MASK                         0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET                       24
+	u32 mps25_driver_current;                                          /* 0x4C */
+		#define NVM_CFG1_GLOB_LANE0_AMP_MASK                            0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_AMP_OFFSET                          0
+		#define NVM_CFG1_GLOB_LANE1_AMP_MASK                            0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_AMP_OFFSET                          8
+		#define NVM_CFG1_GLOB_LANE2_AMP_MASK                            0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_AMP_OFFSET                          16
+		#define NVM_CFG1_GLOB_LANE3_AMP_MASK                            0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_AMP_OFFSET                          24
+	u32 pci_id;                                                        /* 0x50 */
+		#define NVM_CFG1_GLOB_VENDOR_ID_MASK                            0x0000FFFF
+		#define NVM_CFG1_GLOB_VENDOR_ID_OFFSET                          0
 	/*  Set caution temperature */
-		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_TEMPERATURE_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_TEMPERATURE_OFFSET 16
+		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_TEMPERATURE_MASK             0x00FF0000
+		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_TEMPERATURE_OFFSET           16
 	/*  Set external thermal sensor I2C address */
-		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_MASK \
-			0xFF000000
-		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_OFFSET 24
-	u32 pci_subsys_id; /* 0x54 */
-		#define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_MASK 0x0000FFFF
-		#define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_OFFSET 0
-		#define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_MASK 0xFFFF0000
-		#define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_OFFSET 16
-	u32 bar; /* 0x58 */
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_MASK 0x0000000F
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_OFFSET 0
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_DISABLED 0x0
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2K 0x1
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4K 0x2
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8K 0x3
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16K 0x4
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32K 0x5
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_64K 0x6
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_128K 0x7
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_256K 0x8
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_512K 0x9
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_1M 0xA
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2M 0xB
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4M 0xC
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8M 0xD
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16M 0xE
-		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32M 0xF
+		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_MASK      0xFF000000
+		#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_OFFSET    24
+	u32 pci_subsys_id;                                                 /* 0x54 */
+		#define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_MASK                  0x0000FFFF
+		#define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_OFFSET                0
+		#define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_MASK                  0xFFFF0000
+		#define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_OFFSET                16
+	u32 bar;                                                           /* 0x58 */
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_MASK                   0x0000000F
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_OFFSET                 0
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_DISABLED               0x0
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2K                     0x1
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4K                     0x2
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8K                     0x3
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16K                    0x4
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32K                    0x5
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_64K                    0x6
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_128K                   0x7
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_256K                   0x8
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_512K                   0x9
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_1M                     0xA
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2M                     0xB
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4M                     0xC
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8M                     0xD
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16M                    0xE
+		#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32M                    0xF
 	/*  BB VF BAR2 size */
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_MASK 0x000000F0
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_OFFSET 4
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_DISABLED 0x0
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4K 0x1
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8K 0x2
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16K 0x3
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32K 0x4
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64K 0x5
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_128K 0x6
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_256K 0x7
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_512K 0x8
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_1M 0x9
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_2M 0xA
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4M 0xB
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8M 0xC
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16M 0xD
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32M 0xE
-		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64M 0xF
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_MASK                     0x000000F0
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_OFFSET                   4
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_DISABLED                 0x0
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4K                       0x1
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8K                       0x2
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16K                      0x3
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32K                      0x4
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64K                      0x5
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_128K                     0x6
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_256K                     0x7
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_512K                     0x8
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_1M                       0x9
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_2M                       0xA
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4M                       0xB
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8M                       0xC
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16M                      0xD
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32M                      0xE
+		#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64M                      0xF
 	/*  BB BAR2 size (global) */
-		#define NVM_CFG1_GLOB_BAR2_SIZE_MASK 0x00000F00
-		#define NVM_CFG1_GLOB_BAR2_SIZE_OFFSET 8
-		#define NVM_CFG1_GLOB_BAR2_SIZE_DISABLED 0x0
-		#define NVM_CFG1_GLOB_BAR2_SIZE_64K 0x1
-		#define NVM_CFG1_GLOB_BAR2_SIZE_128K 0x2
-		#define NVM_CFG1_GLOB_BAR2_SIZE_256K 0x3
-		#define NVM_CFG1_GLOB_BAR2_SIZE_512K 0x4
-		#define NVM_CFG1_GLOB_BAR2_SIZE_1M 0x5
-		#define NVM_CFG1_GLOB_BAR2_SIZE_2M 0x6
-		#define NVM_CFG1_GLOB_BAR2_SIZE_4M 0x7
-		#define NVM_CFG1_GLOB_BAR2_SIZE_8M 0x8
-		#define NVM_CFG1_GLOB_BAR2_SIZE_16M 0x9
-		#define NVM_CFG1_GLOB_BAR2_SIZE_32M 0xA
-		#define NVM_CFG1_GLOB_BAR2_SIZE_64M 0xB
-		#define NVM_CFG1_GLOB_BAR2_SIZE_128M 0xC
-		#define NVM_CFG1_GLOB_BAR2_SIZE_256M 0xD
-		#define NVM_CFG1_GLOB_BAR2_SIZE_512M 0xE
-		#define NVM_CFG1_GLOB_BAR2_SIZE_1G 0xF
-	/*  Set the duration, in secs, fan failure signal should be sampled */
-		#define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_MASK 0x0000F000
-		#define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_OFFSET 12
-	/*  This field defines the board total budget  for bar2 when disabled
+		#define NVM_CFG1_GLOB_BAR2_SIZE_MASK                            0x00000F00
+		#define NVM_CFG1_GLOB_BAR2_SIZE_OFFSET                          8
+		#define NVM_CFG1_GLOB_BAR2_SIZE_DISABLED                        0x0
+		#define NVM_CFG1_GLOB_BAR2_SIZE_64K                             0x1
+		#define NVM_CFG1_GLOB_BAR2_SIZE_128K                            0x2
+		#define NVM_CFG1_GLOB_BAR2_SIZE_256K                            0x3
+		#define NVM_CFG1_GLOB_BAR2_SIZE_512K                            0x4
+		#define NVM_CFG1_GLOB_BAR2_SIZE_1M                              0x5
+		#define NVM_CFG1_GLOB_BAR2_SIZE_2M                              0x6
+		#define NVM_CFG1_GLOB_BAR2_SIZE_4M                              0x7
+		#define NVM_CFG1_GLOB_BAR2_SIZE_8M                              0x8
+		#define NVM_CFG1_GLOB_BAR2_SIZE_16M                             0x9
+		#define NVM_CFG1_GLOB_BAR2_SIZE_32M                             0xA
+		#define NVM_CFG1_GLOB_BAR2_SIZE_64M                             0xB
+		#define NVM_CFG1_GLOB_BAR2_SIZE_128M                            0xC
+		#define NVM_CFG1_GLOB_BAR2_SIZE_256M                            0xD
+		#define NVM_CFG1_GLOB_BAR2_SIZE_512M                            0xE
+		#define NVM_CFG1_GLOB_BAR2_SIZE_1G                              0xF
+	/* Set the duration, in seconds, for which the fan failure signal
+	 * should be sampled
+	 */
+		#define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_MASK                 0x0000F000
+		#define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_OFFSET               12
+	/* This field defines the board total budget for bar2; when disabled,
 	 * the regular bar size is used.
 	 */
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_OFFSET 16
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_DISABLED 0x0
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64K 0x1
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128K 0x2
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256K 0x3
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512K 0x4
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1M 0x5
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_2M 0x6
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_4M 0x7
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_8M 0x8
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_16M 0x9
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_32M 0xA
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64M 0xB
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128M 0xC
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256M 0xD
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512M 0xE
-		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1G 0xF
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_MASK                    0x00FF0000
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_OFFSET                  16
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_DISABLED                0x0
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64K                     0x1
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128K                    0x2
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256K                    0x3
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512K                    0x4
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1M                      0x5
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_2M                      0x6
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_4M                      0x7
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_8M                      0x8
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_16M                     0x9
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_32M                     0xA
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64M                     0xB
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128M                    0xC
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256M                    0xD
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512M                    0xE
+		#define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1G                      0xF
 	/*  Enable/Disable Crash dump triggers */
-		#define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_OFFSET 24
-	u32 mps10_txfir_main; /* 0x5C */
-		#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24
-	u32 mps10_txfir_post; /* 0x60 */
-		#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24
-	u32 mps25_txfir_main; /* 0x64 */
-		#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24
-	u32 mps25_txfir_post; /* 0x68 */
-		#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24
-	u32 manufacture_ver; /* 0x6C */
-		#define NVM_CFG1_GLOB_MANUF0_VER_MASK 0x0000003F
-		#define NVM_CFG1_GLOB_MANUF0_VER_OFFSET 0
-		#define NVM_CFG1_GLOB_MANUF1_VER_MASK 0x00000FC0
-		#define NVM_CFG1_GLOB_MANUF1_VER_OFFSET 6
-		#define NVM_CFG1_GLOB_MANUF2_VER_MASK 0x0003F000
-		#define NVM_CFG1_GLOB_MANUF2_VER_OFFSET 12
-		#define NVM_CFG1_GLOB_MANUF3_VER_MASK 0x00FC0000
-		#define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18
-		#define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000
-		#define NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24
+		#define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_MASK            0xFF000000
+		#define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_OFFSET          24
+	u32 mps10_txfir_main;                                              /* 0x5C */
+		#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK                     0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET                   0
+		#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK                     0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET                   8
+		#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK                     0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET                   16
+		#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK                     0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET                   24
+	u32 mps10_txfir_post;                                              /* 0x60 */
+		#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK                     0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET                   0
+		#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK                     0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET                   8
+		#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK                     0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET                   16
+		#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK                     0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET                   24
+	u32 mps25_txfir_main;                                              /* 0x64 */
+		#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK                     0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET                   0
+		#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK                     0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET                   8
+		#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK                     0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET                   16
+		#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK                     0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET                   24
+	u32 mps25_txfir_post;                                              /* 0x68 */
+		#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK                     0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET                   0
+		#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK                     0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET                   8
+		#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK                     0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET                   16
+		#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK                     0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET                   24
+	u32 manufacture_ver;                                               /* 0x6C */
+		#define NVM_CFG1_GLOB_MANUF0_VER_MASK                           0x0000003F
+		#define NVM_CFG1_GLOB_MANUF0_VER_OFFSET                         0
+		#define NVM_CFG1_GLOB_MANUF1_VER_MASK                           0x00000FC0
+		#define NVM_CFG1_GLOB_MANUF1_VER_OFFSET                         6
+		#define NVM_CFG1_GLOB_MANUF2_VER_MASK                           0x0003F000
+		#define NVM_CFG1_GLOB_MANUF2_VER_OFFSET                         12
+		#define NVM_CFG1_GLOB_MANUF3_VER_MASK                           0x00FC0000
+		#define NVM_CFG1_GLOB_MANUF3_VER_OFFSET                         18
+		#define NVM_CFG1_GLOB_MANUF4_VER_MASK                           0x3F000000
+		#define NVM_CFG1_GLOB_MANUF4_VER_OFFSET                         24
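The manufacture_ver word above packs five 6-bit digits (MANUF0..MANUF4 at offsets 0, 6, 12, 18 and 24). Updating one digit is the usual read-modify-write on its MASK/OFFSET pair; a sketch assuming only the macros above, with a hypothetical helper name:

	/* Hypothetical helper: replace the MANUF1 digit in manufacture_ver. */
	static inline uint32_t example_set_manuf1(uint32_t ver_word, uint32_t digit)
	{
		ver_word &= ~NVM_CFG1_GLOB_MANUF1_VER_MASK;
		ver_word |= (digit << NVM_CFG1_GLOB_MANUF1_VER_OFFSET) &
			    NVM_CFG1_GLOB_MANUF1_VER_MASK;
		return ver_word;
	}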
 	/*  Select package id method */
-		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_MASK 0x40000000
-		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_OFFSET 30
-		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_NVRAM 0x0
-		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_IO_PINS 0x1
-		#define NVM_CFG1_GLOB_RECOVERY_MODE_MASK 0x80000000
-		#define NVM_CFG1_GLOB_RECOVERY_MODE_OFFSET 31
-		#define NVM_CFG1_GLOB_RECOVERY_MODE_DISABLED 0x0
-		#define NVM_CFG1_GLOB_RECOVERY_MODE_ENABLED 0x1
-	u32 manufacture_time; /* 0x70 */
-		#define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F
-		#define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0
-		#define NVM_CFG1_GLOB_MANUF1_TIME_MASK 0x00000FC0
-		#define NVM_CFG1_GLOB_MANUF1_TIME_OFFSET 6
-		#define NVM_CFG1_GLOB_MANUF2_TIME_MASK 0x0003F000
-		#define NVM_CFG1_GLOB_MANUF2_TIME_OFFSET 12
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_MASK                   0x40000000
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_OFFSET                 30
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_NVRAM                  0x0
+		#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_IO_IO_PINS                0x1
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_MASK                        0x80000000
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_OFFSET                      31
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_DISABLED                    0x0
+		#define NVM_CFG1_GLOB_RECOVERY_MODE_ENABLED                     0x1
+	u32 manufacture_time;                                              /* 0x70 */
+		#define NVM_CFG1_GLOB_MANUF0_TIME_MASK                          0x0000003F
+		#define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET                        0
+		#define NVM_CFG1_GLOB_MANUF1_TIME_MASK                          0x00000FC0
+		#define NVM_CFG1_GLOB_MANUF1_TIME_OFFSET                        6
+		#define NVM_CFG1_GLOB_MANUF2_TIME_MASK                          0x0003F000
+		#define NVM_CFG1_GLOB_MANUF2_TIME_OFFSET                        12
 	/*  Max MSIX for Ethernet in default mode */
-		#define NVM_CFG1_GLOB_MAX_MSIX_MASK 0x03FC0000
-		#define NVM_CFG1_GLOB_MAX_MSIX_OFFSET 18
+		#define NVM_CFG1_GLOB_MAX_MSIX_MASK                             0x03FC0000
+		#define NVM_CFG1_GLOB_MAX_MSIX_OFFSET                           18
 	/*  PF Mapping */
-		#define NVM_CFG1_GLOB_PF_MAPPING_MASK 0x0C000000
-		#define NVM_CFG1_GLOB_PF_MAPPING_OFFSET 26
-		#define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS 0x0
-		#define NVM_CFG1_GLOB_PF_MAPPING_FIXED 0x1
-		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_MASK 0x30000000
-		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_OFFSET 28
-		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_DISABLED 0x0
-		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_TI 0x1
+		#define NVM_CFG1_GLOB_PF_MAPPING_MASK                           0x0C000000
+		#define NVM_CFG1_GLOB_PF_MAPPING_OFFSET                         26
+		#define NVM_CFG1_GLOB_PF_MAPPING_CONTINUOUS                     0x0
+		#define NVM_CFG1_GLOB_PF_MAPPING_FIXED                          0x1
+		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_MASK               0x30000000
+		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_OFFSET             28
+		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_DISABLED           0x0
+		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_TI                 0x1
 	/*  Enable/Disable PCIE Relaxed Ordering */
-		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_MASK 0x40000000
-		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_OFFSET 30
-		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_DISABLED 0x0
-		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_ENABLED 0x1
-	/*  Reset the chip using iPOR to release PCIe due to short PERST
-	 *  issues
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_MASK                0x40000000
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_OFFSET              30
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_DISABLED            0x0
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_ENABLED             0x1
+	/* Reset the chip using iPOR to release PCIe due to short PERST
+	 * issues
 	 */
-		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_MASK 0x80000000
-		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_OFFSET 31
-		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_DISABLED 0x0
-		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_ENABLED 0x1
-	u32 led_global_settings; /* 0x74 */
-		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
-		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
-		#define NVM_CFG1_GLOB_LED_SWAP_1_MASK 0x000000F0
-		#define NVM_CFG1_GLOB_LED_SWAP_1_OFFSET 4
-		#define NVM_CFG1_GLOB_LED_SWAP_2_MASK 0x00000F00
-		#define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8
-		#define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000
-		#define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_MASK               0x80000000
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_OFFSET             31
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_DISABLED           0x0
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_ENABLED            0x1
+	u32 led_global_settings;                                           /* 0x74 */
+		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK                           0x0000000F
+		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET                         0
+		#define NVM_CFG1_GLOB_LED_SWAP_1_MASK                           0x000000F0
+		#define NVM_CFG1_GLOB_LED_SWAP_1_OFFSET                         4
+		#define NVM_CFG1_GLOB_LED_SWAP_2_MASK                           0x00000F00
+		#define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET                         8
+		#define NVM_CFG1_GLOB_LED_SWAP_3_MASK                           0x0000F000
+		#define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET                         12
 	/*  Max. continues operating temperature */
-		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_OFFSET 16
-	/*  GPIO which triggers run-time port swap according to the map
-	 *  specified in option 205
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_MASK              0x00FF0000
+		#define NVM_CFG1_GLOB_MAX_CONT_OPERATING_TEMP_OFFSET            16
+	/* GPIO which triggers run-time port swap according to the map
+	 * specified in option 205
 	 */
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_OFFSET 24
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO31 0x20
-	u32 generic_cont1; /* 0x78 */
-		#define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF
-		#define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE0_SWAP_MASK 0x00000C00
-		#define NVM_CFG1_GLOB_LANE0_SWAP_OFFSET 10
-		#define NVM_CFG1_GLOB_LANE1_SWAP_MASK 0x00003000
-		#define NVM_CFG1_GLOB_LANE1_SWAP_OFFSET 12
-		#define NVM_CFG1_GLOB_LANE2_SWAP_MASK 0x0000C000
-		#define NVM_CFG1_GLOB_LANE2_SWAP_OFFSET 14
-		#define NVM_CFG1_GLOB_LANE3_SWAP_MASK 0x00030000
-		#define NVM_CFG1_GLOB_LANE3_SWAP_OFFSET 16
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_MASK               0xFF000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_OFFSET             24
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA                 0x0
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO0              0x1
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO1              0x2
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO2              0x3
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO3              0x4
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO4              0x5
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO5              0x6
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO6              0x7
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO7              0x8
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO8              0x9
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO9              0xA
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO10             0xB
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO11             0xC
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO12             0xD
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO13             0xE
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO14             0xF
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO15             0x10
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO16             0x11
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO17             0x12
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO18             0x13
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO19             0x14
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO20             0x15
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO21             0x16
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO22             0x17
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO23             0x18
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO24             0x19
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO25             0x1A
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO26             0x1B
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO27             0x1C
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO28             0x1D
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO29             0x1E
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO30             0x1F
+		#define NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_GPIO31             0x20
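All the GPIO selector enumerations in this file share one encoding: 0x0 is NA (not assigned) and 0x1..0x20 map to GPIO0..GPIO31, so the pin number is the encoding minus one. A sketch under that assumption (helper name hypothetical):

	/* Hypothetical helper: map a GPIO selector encoding to a pin number,
	 * returning -1 when the selector is NA (0x0).
	 */
	static inline int example_gpio_selector_to_pin(uint32_t enc)
	{
		return enc == NVM_CFG1_GLOB_RUNTIME_PORT_SWAP_GPIO_NA ?
			-1 : (int)(enc - 1);
	}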
+	u32 generic_cont1;                                                 /* 0x78 */
+		#define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK                         0x000003FF
+		#define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET                       0
+		#define NVM_CFG1_GLOB_LANE0_SWAP_MASK                           0x00000C00
+		#define NVM_CFG1_GLOB_LANE0_SWAP_OFFSET                         10
+		#define NVM_CFG1_GLOB_LANE1_SWAP_MASK                           0x00003000
+		#define NVM_CFG1_GLOB_LANE1_SWAP_OFFSET                         12
+		#define NVM_CFG1_GLOB_LANE2_SWAP_MASK                           0x0000C000
+		#define NVM_CFG1_GLOB_LANE2_SWAP_OFFSET                         14
+		#define NVM_CFG1_GLOB_LANE3_SWAP_MASK                           0x00030000
+		#define NVM_CFG1_GLOB_LANE3_SWAP_OFFSET                         16
 	/*  Enable option 195 - Overriding the PCIe Preset value */
-		#define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_MASK 0x00040000
-		#define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_OFFSET 18
-		#define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_DISABLED 0x0
-		#define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_ENABLED 0x1
+		#define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_MASK           0x00040000
+		#define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_OFFSET         18
+		#define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_DISABLED       0x0
+		#define NVM_CFG1_GLOB_OVERRIDE_PCIE_PRESET_EQUAL_ENABLED        0x1
 	/*  PCIe Preset value - applies only if option 194 is enabled */
-		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_MASK 0x00780000
-		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_OFFSET 19
-	/*  Port mapping to be used when the run-time GPIO for port-swap is
-	 *  defined and set.
+		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_MASK                    0x00780000
+		#define NVM_CFG1_GLOB_PCIE_PRESET_VALUE_OFFSET                  19
+	/* Port mapping to be used when the run-time GPIO for port-swap is
+	 * defined and set.
 	 */
-		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_MASK 0x01800000
-		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_OFFSET 23
-		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_MASK 0x06000000
-		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_OFFSET 25
-		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_MASK 0x18000000
-		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27
-		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000
-		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_MASK               0x01800000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT0_SWAP_MAP_OFFSET             23
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_MASK               0x06000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT1_SWAP_MAP_OFFSET             25
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_MASK               0x18000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET             27
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK               0x60000000
+		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET             29
 	/*  Option to Disable embedded LLDP, 0 - Off, 1 - On */
-		#define NVM_CFG1_GLOB_LLDP_DISABLE_MASK 0x80000000
-		#define NVM_CFG1_GLOB_LLDP_DISABLE_OFFSET 31
-		#define NVM_CFG1_GLOB_LLDP_DISABLE_OFF 0x0
-		#define NVM_CFG1_GLOB_LLDP_DISABLE_ON 0x1
-	u32 mbi_version; /* 0x7C */
-		#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
-		#define NVM_CFG1_GLOB_MBI_VERSION_1_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8
-		#define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16
-	/*  If set to other than NA, 0 - Normal operation, 1 - Thermal event
-	 *  occurred
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_MASK                         0x80000000
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_OFFSET                       31
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_OFF                          0x0
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_ON                           0x1
+	u32 mbi_version;                                                   /* 0x7C */
+		#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK                        0x000000FF
+		#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET                      0
+		#define NVM_CFG1_GLOB_MBI_VERSION_1_MASK                        0x0000FF00
+		#define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET                      8
+		#define NVM_CFG1_GLOB_MBI_VERSION_2_MASK                        0x00FF0000
+		#define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET                      16
+	/* If set to other than NA, 0 - Normal operation, 1 - Thermal event
+	 * occurred
 	 */
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_OFFSET 24
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO31 0x20
-	u32 mbi_date; /* 0x80 */
-	u32 misc_sig; /* 0x84 */
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_MASK                   0xFF000000
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_OFFSET                 24
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_NA                     0x0
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO0                  0x1
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO1                  0x2
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO2                  0x3
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO3                  0x4
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO4                  0x5
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO5                  0x6
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO6                  0x7
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO7                  0x8
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO8                  0x9
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO9                  0xA
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO10                 0xB
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO11                 0xC
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO12                 0xD
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO13                 0xE
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO14                 0xF
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO15                 0x10
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO16                 0x11
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO17                 0x12
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO18                 0x13
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO19                 0x14
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO20                 0x15
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO21                 0x16
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO22                 0x17
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO23                 0x18
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO24                 0x19
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO25                 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO26                 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO27                 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO28                 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO29                 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO30                 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_EVENT_GPIO_GPIO31                 0x20
+	u32 mbi_date;                                                      /* 0x80 */
+	u32 misc_sig;                                                      /* 0x84 */
 	/*  Define the GPIO mapping to switch i2c mux */
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_OFFSET 0
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_OFFSET 8
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__NA 0x0
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO0 0x1
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO1 0x2
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO2 0x3
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO3 0x4
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO4 0x5
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO5 0x6
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO6 0x7
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO7 0x8
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO8 0x9
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO9 0xA
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO10 0xB
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO11 0xC
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO12 0xD
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO13 0xE
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO14 0xF
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO15 0x10
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO16 0x11
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO17 0x12
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO18 0x13
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO19 0x14
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO20 0x15
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO21 0x16
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO22 0x17
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO23 0x18
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO24 0x19
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO25 0x1A
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO26 0x1B
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO27 0x1C
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO28 0x1D
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F
-		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_MASK                   0x000000FF
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_OFFSET                 0
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_MASK                   0x0000FF00
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_OFFSET                 8
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__NA                      0x0
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO0                   0x1
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO1                   0x2
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO2                   0x3
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO3                   0x4
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO4                   0x5
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO5                   0x6
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO6                   0x7
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO7                   0x8
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO8                   0x9
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO9                   0xA
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO10                  0xB
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO11                  0xC
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO12                  0xD
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO13                  0xE
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO14                  0xF
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO15                  0x10
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO16                  0x11
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO17                  0x12
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO18                  0x13
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO19                  0x14
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO20                  0x15
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO21                  0x16
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO22                  0x17
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO23                  0x18
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO24                  0x19
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO25                  0x1A
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO26                  0x1B
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO27                  0x1C
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO28                  0x1D
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29                  0x1E
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30                  0x1F
+		#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31                  0x20
 	/*  Interrupt signal used for SMBus/I2C management interface
-	 *  0 = Interrupt event occurred
-	 *  1 = Normal
-	 */
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_OFFSET 16
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO31 0x20
+	 *
+	 *  0 = Interrupt event occurred
+	 *  1 = Normal
+	 */
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_MASK                   0x00FF0000
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_OFFSET                 16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_NA                     0x0
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO0                  0x1
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO1                  0x2
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO2                  0x3
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO3                  0x4
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO4                  0x5
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO5                  0x6
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO6                  0x7
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO7                  0x8
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO8                  0x9
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO9                  0xA
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO10                 0xB
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO11                 0xC
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO12                 0xD
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO13                 0xE
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO14                 0xF
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO15                 0x10
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO16                 0x11
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO17                 0x12
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO18                 0x13
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO19                 0x14
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO20                 0x15
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO21                 0x16
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO22                 0x17
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO23                 0x18
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO24                 0x19
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO25                 0x1A
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO26                 0x1B
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO27                 0x1C
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO28                 0x1D
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO29                 0x1E
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO30                 0x1F
+		#define NVM_CFG1_GLOB_I2C_INTERRUPT_GPIO_GPIO31                 0x20
 	/*  Set aLOM FAN on GPIO */
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_OFFSET 24
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO31 0x20
-	u32 device_capabilities; /* 0x88 */
-		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1
-		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE 0x2
-		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ISCSI 0x4
-		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ROCE 0x8
-		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_IWARP 0x10
-	u32 power_dissipated; /* 0x8C */
-		#define NVM_CFG1_GLOB_POWER_DIS_D0_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_POWER_DIS_D0_OFFSET 0
-		#define NVM_CFG1_GLOB_POWER_DIS_D1_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_POWER_DIS_D1_OFFSET 8
-		#define NVM_CFG1_GLOB_POWER_DIS_D2_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_POWER_DIS_D2_OFFSET 16
-		#define NVM_CFG1_GLOB_POWER_DIS_D3_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_POWER_DIS_D3_OFFSET 24
-	u32 power_consumed; /* 0x90 */
-		#define NVM_CFG1_GLOB_POWER_CONS_D0_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_POWER_CONS_D0_OFFSET 0
-		#define NVM_CFG1_GLOB_POWER_CONS_D1_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_POWER_CONS_D1_OFFSET 8
-		#define NVM_CFG1_GLOB_POWER_CONS_D2_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_POWER_CONS_D2_OFFSET 16
-		#define NVM_CFG1_GLOB_POWER_CONS_D3_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_POWER_CONS_D3_OFFSET 24
-	u32 efi_version; /* 0x94 */
-	u32 multi_network_modes_capability; /* 0x98 */
-		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X10G 0x1
-		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X25G 0x2
-		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X25G 0x4
-		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X25G 0x8
-		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X40G 0x10
-		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X40G 0x20
-		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X50G 0x40
-		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G \
-			0x80
-		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X10G 0x100
-	/* @DPDK */
-	u32 reserved1[12]; /* 0x9C */
-	u32 oem1_number[8]; /* 0xCC */
-	u32 oem2_number[8]; /* 0xEC */
-	u32 mps25_active_txfir_pre; /* 0x10C */
-		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_OFFSET 24
-	u32 mps25_active_txfir_main; /* 0x110 */
-		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_OFFSET 24
-	u32 mps25_active_txfir_post; /* 0x114 */
-		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_OFFSET 0
-		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_OFFSET 8
-		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_OFFSET 16
-		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_OFFSET 24
-	u32 features; /* 0x118 */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_MASK                 0xFF000000
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_OFFSET               24
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_NA                   0x0
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO0                0x1
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO1                0x2
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO2                0x3
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO3                0x4
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO4                0x5
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO5                0x6
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO6                0x7
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO7                0x8
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO8                0x9
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO9                0xA
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO10               0xB
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO11               0xC
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO12               0xD
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO13               0xE
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO14               0xF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO15               0x10
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO16               0x11
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO17               0x12
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO18               0x13
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO19               0x14
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO20               0x15
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO21               0x16
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO22               0x17
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO23               0x18
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO24               0x19
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO25               0x1A
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO26               0x1B
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO27               0x1C
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO28               0x1D
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO29               0x1E
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO30               0x1F
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_GPIO_GPIO31               0x20
+	u32 device_capabilities;                                           /* 0x88 */
+		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET              0x1
+		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE                  0x2
+		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ISCSI                 0x4
+		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ROCE                  0x8
+		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_IWARP                 0x10
+		#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_NVMETCP               0x20
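Unlike the packed fields elsewhere in this struct, device_capabilities is a plain flag word: each capability, including the NVMe/TCP bit added here, is a single bit tested directly. A sketch assuming only the flags above; the function and parameter names are hypothetical:

	/* Hypothetical check: does the NVM config advertise both Ethernet
	 * and the newly added NVMe/TCP capability?
	 */
	static inline int example_has_eth_nvmetcp(uint32_t caps)
	{
		return (caps & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET) &&
		       (caps & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_NVMETCP);
	}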
+	u32 power_dissipated;                                              /* 0x8C */
+		#define NVM_CFG1_GLOB_POWER_DIS_D0_MASK                         0x000000FF
+		#define NVM_CFG1_GLOB_POWER_DIS_D0_OFFSET                       0
+		#define NVM_CFG1_GLOB_POWER_DIS_D1_MASK                         0x0000FF00
+		#define NVM_CFG1_GLOB_POWER_DIS_D1_OFFSET                       8
+		#define NVM_CFG1_GLOB_POWER_DIS_D2_MASK                         0x00FF0000
+		#define NVM_CFG1_GLOB_POWER_DIS_D2_OFFSET                       16
+		#define NVM_CFG1_GLOB_POWER_DIS_D3_MASK                         0xFF000000
+		#define NVM_CFG1_GLOB_POWER_DIS_D3_OFFSET                       24
+	u32 power_consumed;                                                /* 0x90 */
+		#define NVM_CFG1_GLOB_POWER_CONS_D0_MASK                        0x000000FF
+		#define NVM_CFG1_GLOB_POWER_CONS_D0_OFFSET                      0
+		#define NVM_CFG1_GLOB_POWER_CONS_D1_MASK                        0x0000FF00
+		#define NVM_CFG1_GLOB_POWER_CONS_D1_OFFSET                      8
+		#define NVM_CFG1_GLOB_POWER_CONS_D2_MASK                        0x00FF0000
+		#define NVM_CFG1_GLOB_POWER_CONS_D2_OFFSET                      16
+		#define NVM_CFG1_GLOB_POWER_CONS_D3_MASK                        0xFF000000
+		#define NVM_CFG1_GLOB_POWER_CONS_D3_OFFSET                      24
+	u32 efi_version;                                                   /* 0x94 */
+	u32 multi_network_modes_capability;                                /* 0x98 */
+		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X10G      0x1
+		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X25G      0x2
+		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X25G      0x4
+		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X25G      0x8
+		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X40G      0x10
+		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X40G      0x20
+		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X50G      0x40
+		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G  0x80
+		#define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X10G      0x100
+	u32 nvm_cfg_version;                                               /* 0x9C */
+	u32 nvm_cfg_new_option_seq;                                        /* 0xA0 */
+	u32 nvm_cfg_removed_option_seq;                                    /* 0xA4 */
+	u32 nvm_cfg_updated_value_seq;                                     /* 0xA8 */
+	u32 extended_serial_number[8];                                     /* 0xAC */
+	u32 option_kit_pn[8];                                              /* 0xCC */
+	u32 spare_pn[8];                                                   /* 0xEC */
+	u32 mps25_active_txfir_pre;                                       /* 0x10C */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_MASK                  0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_PRE_OFFSET                0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_MASK                  0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_PRE_OFFSET                8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_MASK                  0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_PRE_OFFSET                16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_MASK                  0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_PRE_OFFSET                24
+	u32 mps25_active_txfir_main;                                      /* 0x110 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_MASK                 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_MAIN_OFFSET               0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_MASK                 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_MAIN_OFFSET               8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_MASK                 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_MAIN_OFFSET               16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_MASK                 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_MAIN_OFFSET               24
+	u32 mps25_active_txfir_post;                                      /* 0x114 */
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_MASK                 0x000000FF
+		#define NVM_CFG1_GLOB_LANE0_ACT_TXFIR_POST_OFFSET               0
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_MASK                 0x0000FF00
+		#define NVM_CFG1_GLOB_LANE1_ACT_TXFIR_POST_OFFSET               8
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_MASK                 0x00FF0000
+		#define NVM_CFG1_GLOB_LANE2_ACT_TXFIR_POST_OFFSET               16
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_MASK                 0xFF000000
+		#define NVM_CFG1_GLOB_LANE3_ACT_TXFIR_POST_OFFSET               24
+	u32 features;                                                     /* 0x118 */
 	/*  Set the Aux Fan on temperature  */
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_OFFSET 0
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_MASK                0x000000FF
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_VALUE_OFFSET              0
 	/*  Set NC-SI package ID */
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_OFFSET 8
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO31 0x20
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_MASK                         0x0000FF00
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_OFFSET                       8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_NA                           0x0
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO0                        0x1
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO1                        0x2
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO2                        0x3
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO3                        0x4
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO4                        0x5
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO5                        0x6
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO6                        0x7
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO7                        0x8
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO8                        0x9
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO9                        0xA
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO10                       0xB
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO11                       0xC
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO12                       0xD
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO13                       0xE
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO14                       0xF
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO15                       0x10
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO16                       0x11
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO17                       0x12
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO18                       0x13
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO19                       0x14
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO20                       0x15
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO21                       0x16
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO22                       0x17
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO23                       0x18
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO24                       0x19
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO25                       0x1A
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO26                       0x1B
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO27                       0x1C
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO28                       0x1D
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO29                       0x1E
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO30                       0x1F
+		#define NVM_CFG1_GLOB_SLOT_ID_GPIO_GPIO31                       0x20
 	/*  PMBUS Clock GPIO */
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_OFFSET 16
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO31 0x20
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_MASK                       0x00FF0000
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_OFFSET                     16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_NA                         0x0
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO0                      0x1
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO1                      0x2
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO2                      0x3
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO3                      0x4
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO4                      0x5
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO5                      0x6
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO6                      0x7
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO7                      0x8
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO8                      0x9
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO9                      0xA
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO10                     0xB
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO11                     0xC
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO12                     0xD
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO13                     0xE
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO14                     0xF
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO15                     0x10
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO16                     0x11
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO17                     0x12
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO18                     0x13
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO19                     0x14
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO20                     0x15
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO21                     0x16
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO22                     0x17
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO23                     0x18
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO24                     0x19
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO25                     0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO26                     0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO27                     0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO28                     0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO29                     0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO30                     0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SCL_GPIO_GPIO31                     0x20
 	/*  PMBUS Data GPIO */
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_OFFSET 24
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO31 0x20
-	u32 tx_rx_eq_25g_hlpc; /* 0x11C */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_OFFSET 24
-	u32 tx_rx_eq_25g_llpc; /* 0x120 */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_OFFSET 24
-	u32 tx_rx_eq_25g_ac; /* 0x124 */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_OFFSET 24
-	u32 tx_rx_eq_10g_pc; /* 0x128 */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_OFFSET 24
-	u32 tx_rx_eq_10g_ac; /* 0x12C */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_OFFSET 24
-	u32 tx_rx_eq_1g; /* 0x130 */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_OFFSET 24
-	u32 tx_rx_eq_25g_bt; /* 0x134 */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_OFFSET 24
-	u32 tx_rx_eq_10g_bt; /* 0x138 */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_OFFSET 24
-	u32 generic_cont4; /* 0x13C */
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_OFFSET 0
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_MASK                       0xFF000000
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_OFFSET                     24
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_NA                         0x0
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO0                      0x1
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO1                      0x2
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO2                      0x3
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO3                      0x4
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO4                      0x5
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO5                      0x6
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO6                      0x7
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO7                      0x8
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO8                      0x9
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO9                      0xA
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO10                     0xB
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO11                     0xC
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO12                     0xD
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO13                     0xE
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO14                     0xF
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO15                     0x10
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO16                     0x11
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO17                     0x12
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO18                     0x13
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO19                     0x14
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO20                     0x15
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO21                     0x16
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO22                     0x17
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO23                     0x18
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO24                     0x19
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO25                     0x1A
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO26                     0x1B
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO27                     0x1C
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO28                     0x1D
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO29                     0x1E
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO30                     0x1F
+		#define NVM_CFG1_GLOB_PMBUS_SDA_GPIO_GPIO31                     0x20
+	u32 tx_rx_eq_25g_hlpc;                                            /* 0x11C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_MASK             0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_HLPC_OFFSET           0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_MASK             0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_HLPC_OFFSET           8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_MASK             0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_HLPC_OFFSET           16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_MASK             0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_HLPC_OFFSET           24
+	u32 tx_rx_eq_25g_llpc;                                            /* 0x120 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_MASK             0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_LLPC_OFFSET           0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_MASK             0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_LLPC_OFFSET           8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_MASK             0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_LLPC_OFFSET           16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_MASK             0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_LLPC_OFFSET           24
+	u32 tx_rx_eq_25g_ac;                                              /* 0x124 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_MASK               0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_AC_OFFSET             0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_MASK               0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_AC_OFFSET             8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_MASK               0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_AC_OFFSET             16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_MASK               0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_AC_OFFSET             24
+	u32 tx_rx_eq_10g_pc;                                              /* 0x128 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_MASK               0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_PC_OFFSET             0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_MASK               0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_PC_OFFSET             8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_MASK               0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_PC_OFFSET             16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_MASK               0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_PC_OFFSET             24
+	u32 tx_rx_eq_10g_ac;                                              /* 0x12C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_MASK               0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_AC_OFFSET             0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_MASK               0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_AC_OFFSET             8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_MASK               0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_AC_OFFSET             16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_MASK               0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_AC_OFFSET             24
+	u32 tx_rx_eq_1g;                                                  /* 0x130 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_MASK                   0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_1G_OFFSET                 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_MASK                   0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_1G_OFFSET                 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_MASK                   0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_1G_OFFSET                 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_MASK                   0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_1G_OFFSET                 24
+	u32 tx_rx_eq_25g_bt;                                              /* 0x134 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_MASK               0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_25G_BT_OFFSET             0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_MASK               0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_25G_BT_OFFSET             8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_MASK               0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_25G_BT_OFFSET             16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_MASK               0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_25G_BT_OFFSET             24
+	u32 tx_rx_eq_10g_bt;                                              /* 0x138 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_MASK               0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_10G_BT_OFFSET             0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_MASK               0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_10G_BT_OFFSET             8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_MASK               0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_10G_BT_OFFSET             16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_MASK               0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_10G_BT_OFFSET             24
+	u32 generic_cont4;                                                /* 0x13C */
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_MASK                   0x000000FF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_OFFSET                 0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_NA                     0x0
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO0                  0x1
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO1                  0x2
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO2                  0x3
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO3                  0x4
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO4                  0x5
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO5                  0x6
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO6                  0x7
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO7                  0x8
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO8                  0x9
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO9                  0xA
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO10                 0xB
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO11                 0xC
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO12                 0xD
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO13                 0xE
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO14                 0xF
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO15                 0x10
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO16                 0x11
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO17                 0x12
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO18                 0x13
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO19                 0x14
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO20                 0x15
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO21                 0x16
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO22                 0x17
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO23                 0x18
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO24                 0x19
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO25                 0x1A
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO26                 0x1B
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO27                 0x1C
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO28                 0x1D
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29                 0x1E
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30                 0x1F
+		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31                 0x20
 	/*  Select the number of allowed port links in aux power */
-		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_MASK 0x00000300
-		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_OFFSET 8
-		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_DEFAULT 0x0
-		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_1_PORT 0x1
-		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_2_PORTS 0x2
-		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_3_PORTS 0x3
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_MASK                        0x00000300
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_OFFSET                      8
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_DEFAULT                     0x0
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_1_PORT                      0x1
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_2_PORTS                     0x2
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_3_PORTS                     0x3
 	/*  Set Trace Filter Log Level */
-		#define NVM_CFG1_GLOB_TRACE_LEVEL_MASK 0x00000C00
-		#define NVM_CFG1_GLOB_TRACE_LEVEL_OFFSET 10
-		#define NVM_CFG1_GLOB_TRACE_LEVEL_ALL 0x0
-		#define NVM_CFG1_GLOB_TRACE_LEVEL_DEBUG 0x1
-		#define NVM_CFG1_GLOB_TRACE_LEVEL_TRACE 0x2
-		#define NVM_CFG1_GLOB_TRACE_LEVEL_ERROR 0x3
-	/*  For OCP2.0, MFW listens on SMBUS slave address 0x3e, and return
-	 *  temperature reading
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_MASK                          0x00000C00
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_OFFSET                        10
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_DEFAULT                       0x0
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_DEBUG                         0x1
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_TRACE                         0x2
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_ERROR                         0x3
+	/* For OCP2.0, MFW listens on SMBUS slave address 0x3e, and returns
+	 * a temperature reading
 	 */
-		#define NVM_CFG1_GLOB_EMULATED_TMP421_MASK 0x00001000
-		#define NVM_CFG1_GLOB_EMULATED_TMP421_OFFSET 12
-		#define NVM_CFG1_GLOB_EMULATED_TMP421_DISABLED 0x0
-		#define NVM_CFG1_GLOB_EMULATED_TMP421_ENABLED 0x1
-	/*  GPIO which triggers when ASIC temperature reaches nvm option 286
-	 *  value
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_MASK                      0x00001000
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_OFFSET                    12
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_DISABLED                  0x0
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_ENABLED                   0x1
+	/* GPIO which triggers when ASIC temperature reaches nvm option 286
+	 * value
 	 */
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_MASK 0x001FE000
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_OFFSET 13
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO31 0x20
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_MASK             0x001FE000
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_OFFSET           13
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_NA               0x0
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO0            0x1
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO1            0x2
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO2            0x3
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO3            0x4
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO4            0x5
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO5            0x6
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO6            0x7
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO7            0x8
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO8            0x9
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO9            0xA
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO10           0xB
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO11           0xC
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO12           0xD
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO13           0xE
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO14           0xF
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO15           0x10
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO16           0x11
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO17           0x12
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO18           0x13
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO19           0x14
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO20           0x15
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO21           0x16
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO22           0x17
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO23           0x18
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO24           0x19
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO25           0x1A
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO26           0x1B
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO27           0x1C
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO28           0x1D
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO29           0x1E
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO30           0x1F
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO31           0x20
 	/*  Warning temperature threshold used with nvm option 286 */
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_MASK 0x1FE00000
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_OFFSET 21
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_MASK        0x1FE00000
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_OFFSET      21
 	/*  Disable PLDM protocol */
-		#define NVM_CFG1_GLOB_DISABLE_PLDM_MASK 0x20000000
-		#define NVM_CFG1_GLOB_DISABLE_PLDM_OFFSET 29
-		#define NVM_CFG1_GLOB_DISABLE_PLDM_DISABLED 0x0
-		#define NVM_CFG1_GLOB_DISABLE_PLDM_ENABLED 0x1
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_MASK                         0x20000000
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_OFFSET                       29
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_DISABLED                     0x0
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_ENABLED                      0x1
 	/*  Disable OCBB protocol */
-		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_MASK 0x40000000
-		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_OFFSET 30
-		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_DISABLED 0x0
-		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_ENABLED 0x1
-	u32 preboot_debug_mode_std; /* 0x140 */
-	u32 preboot_debug_mode_ext; /* 0x144 */
-	u32 ext_phy_cfg1; /* 0x148 */
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_MASK                     0x40000000
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_OFFSET                   30
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_DISABLED                 0x0
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_ENABLED                  0x1
+	/*  MCTP Virtual Link OEM3 */
+		#define NVM_CFG1_GLOB_MCTP_VIRTUAL_LINK_OEM3_MASK               0x80000000
+		#define NVM_CFG1_GLOB_MCTP_VIRTUAL_LINK_OEM3_OFFSET             31
+		#define NVM_CFG1_GLOB_MCTP_VIRTUAL_LINK_OEM3_DISABLED           0x0
+		#define NVM_CFG1_GLOB_MCTP_VIRTUAL_LINK_OEM3_ENABLED            0x1
+	u32 preboot_debug_mode_std;                                       /* 0x140 */
+	u32 preboot_debug_mode_ext;                                       /* 0x144 */
+	u32 ext_phy_cfg1;                                                 /* 0x148 */
 	/*  Ext PHY MDI pair swap value */
-		#define NVM_CFG1_GLOB_RESERVED_244_MASK 0x0000FFFF
-		#define NVM_CFG1_GLOB_RESERVED_244_OFFSET 0
+		#define NVM_CFG1_GLOB_RESERVED_244_MASK                         0x0000FFFF
+		#define NVM_CFG1_GLOB_RESERVED_244_OFFSET                       0
 	/*  Define for PGOOD signal mapping for EXT PHY */
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_OFFSET 16
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_NA 0x0
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO0 0x1
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO1 0x2
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO2 0x3
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO3 0x4
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO4 0x5
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO5 0x6
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO6 0x7
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO7 0x8
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO8 0x9
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO9 0xA
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO10 0xB
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO11 0xC
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO12 0xD
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO13 0xE
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO14 0xF
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO15 0x10
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO16 0x11
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO17 0x12
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO18 0x13
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO19 0x14
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO20 0x15
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO21 0x16
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO22 0x17
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO23 0x18
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO24 0x19
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO31 0x20
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_MASK                        0x00FF0000
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_OFFSET                      16
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_NA                          0x0
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO0                       0x1
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO1                       0x2
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO2                       0x3
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO3                       0x4
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO4                       0x5
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO5                       0x6
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO6                       0x7
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO7                       0x8
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO8                       0x9
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO9                       0xA
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO10                      0xB
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO11                      0xC
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO12                      0xD
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO13                      0xE
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO14                      0xF
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO15                      0x10
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO16                      0x11
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO17                      0x12
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO18                      0x13
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO19                      0x14
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO20                      0x15
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO21                      0x16
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO22                      0x17
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO23                      0x18
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO24                      0x19
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO25                      0x1A
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO26                      0x1B
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO27                      0x1C
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO28                      0x1D
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO29                      0x1E
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO30                      0x1F
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO31                      0x20
 	/*  GPIO which triggers when PERST is asserted  */
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_OFFSET 24
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_NA 0x0
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO0 0x1
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO1 0x2
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO2 0x3
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO3 0x4
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO4 0x5
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO5 0x6
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO6 0x7
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO7 0x8
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO8 0x9
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO9 0xA
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO10 0xB
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO11 0xC
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO12 0xD
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO13 0xE
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO14 0xF
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO15 0x10
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO16 0x11
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO17 0x12
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO18 0x13
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO19 0x14
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO20 0x15
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO21 0x16
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO22 0x17
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO23 0x18
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO24 0x19
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO25 0x1A
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO26 0x1B
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO27 0x1C
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO28 0x1D
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO29 0x1E
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO30 0x1F
-		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO31 0x20
-	u32 clocks; /* 0x14C */
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_MASK                0xFF000000
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_OFFSET              24
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_NA                  0x0
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO0               0x1
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO1               0x2
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO2               0x3
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO3               0x4
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO4               0x5
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO5               0x6
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO6               0x7
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO7               0x8
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO8               0x9
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO9               0xA
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO10              0xB
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO11              0xC
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO12              0xD
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO13              0xE
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO14              0xF
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO15              0x10
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO16              0x11
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO17              0x12
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO18              0x13
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO19              0x14
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO20              0x15
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO21              0x16
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO22              0x17
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO23              0x18
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO24              0x19
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO25              0x1A
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO26              0x1B
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO27              0x1C
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO28              0x1D
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO29              0x1E
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO30              0x1F
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO31              0x20
+	u32 clocks;                                                       /* 0x14C */
 	/*  Sets core clock frequency */
-		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_OFFSET 0
-		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_DEFAULT 0x0
-		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_375 0x1
-		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_350 0x2
-		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_325 0x3
-		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_300 0x4
-		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_280 0x5
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MASK                 0x000000FF
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_OFFSET               0
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_DEFAULT     0x0
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_375         0x1
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_350         0x2
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_325         0x3
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_300         0x4
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_280         0x5
 	/*  Sets MAC clock frequency */
-		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_OFFSET 8
-		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_DEFAULT 0x0
-		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_782 0x1
-		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_516 0x2
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MASK                  0x0000FF00
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_OFFSET                8
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_DEFAULT       0x0
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_782           0x1
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_516           0x2
 	/*  Sets storm clock frequency */
-		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_OFFSET 16
-		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_DEFAULT 0x0
-		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1200 0x1
-		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1000 0x2
-		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_900 0x3
-		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1100 0x4
-	/*  Non zero value will override PCIe AGC threshold to improve
-	 *  receiver
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_MASK                0x00FF0000
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_OFFSET              16
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_DEFAULT   0x0
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1200      0x1
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1000      0x2
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_900       0x3
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1100      0x4
+	/* Non-zero value will override PCIe AGC threshold to improve
+	 * receiver
 	 */
-		#define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_OFFSET 24
-	u32 pre2_generic_cont_1; /* 0x150 */
-		#define NVM_CFG1_GLOB_50G_HLPC_PRE2_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_50G_HLPC_PRE2_OFFSET 0
-		#define NVM_CFG1_GLOB_50G_MLPC_PRE2_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_50G_MLPC_PRE2_OFFSET 8
-		#define NVM_CFG1_GLOB_50G_LLPC_PRE2_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_50G_LLPC_PRE2_OFFSET 16
-		#define NVM_CFG1_GLOB_25G_HLPC_PRE2_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_25G_HLPC_PRE2_OFFSET 24
-	u32 pre2_generic_cont_2; /* 0x154 */
-		#define NVM_CFG1_GLOB_25G_LLPC_PRE2_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_25G_LLPC_PRE2_OFFSET 0
-		#define NVM_CFG1_GLOB_25G_AC_PRE2_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_25G_AC_PRE2_OFFSET 8
-		#define NVM_CFG1_GLOB_10G_PC_PRE2_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_10G_PC_PRE2_OFFSET 16
-		#define NVM_CFG1_GLOB_PRE2_10G_AC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_PRE2_10G_AC_OFFSET 24
-	u32 pre2_generic_cont_3; /* 0x158 */
-		#define NVM_CFG1_GLOB_1G_PRE2_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_1G_PRE2_OFFSET 0
-		#define NVM_CFG1_GLOB_5G_BT_PRE2_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_5G_BT_PRE2_OFFSET 8
-		#define NVM_CFG1_GLOB_10G_BT_PRE2_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_10G_BT_PRE2_OFFSET 16
-	/*  When temperature goes below (warning temperature - delta) warning
-	 *  gpio is unset
+		#define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_MASK               0xFF000000
+		#define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_OFFSET             24
+	u32 pre2_generic_cont_1;                                          /* 0x150 */
+		#define NVM_CFG1_GLOB_50G_HLPC_PRE2_MASK                        0x000000FF
+		#define NVM_CFG1_GLOB_50G_HLPC_PRE2_OFFSET                      0
+		#define NVM_CFG1_GLOB_50G_MLPC_PRE2_MASK                        0x0000FF00
+		#define NVM_CFG1_GLOB_50G_MLPC_PRE2_OFFSET                      8
+		#define NVM_CFG1_GLOB_50G_LLPC_PRE2_MASK                        0x00FF0000
+		#define NVM_CFG1_GLOB_50G_LLPC_PRE2_OFFSET                      16
+		#define NVM_CFG1_GLOB_25G_HLPC_PRE2_MASK                        0xFF000000
+		#define NVM_CFG1_GLOB_25G_HLPC_PRE2_OFFSET                      24
+	u32 pre2_generic_cont_2;                                          /* 0x154 */
+		#define NVM_CFG1_GLOB_25G_LLPC_PRE2_MASK                        0x000000FF
+		#define NVM_CFG1_GLOB_25G_LLPC_PRE2_OFFSET                      0
+		#define NVM_CFG1_GLOB_25G_AC_PRE2_MASK                          0x0000FF00
+		#define NVM_CFG1_GLOB_25G_AC_PRE2_OFFSET                        8
+		#define NVM_CFG1_GLOB_10G_PC_PRE2_MASK                          0x00FF0000
+		#define NVM_CFG1_GLOB_10G_PC_PRE2_OFFSET                        16
+		#define NVM_CFG1_GLOB_PRE2_10G_AC_MASK                          0xFF000000
+		#define NVM_CFG1_GLOB_PRE2_10G_AC_OFFSET                        24
+	u32 pre2_generic_cont_3;                                          /* 0x158 */
+		#define NVM_CFG1_GLOB_1G_PRE2_MASK                              0x000000FF
+		#define NVM_CFG1_GLOB_1G_PRE2_OFFSET                            0
+		#define NVM_CFG1_GLOB_5G_BT_PRE2_MASK                           0x0000FF00
+		#define NVM_CFG1_GLOB_5G_BT_PRE2_OFFSET                         8
+		#define NVM_CFG1_GLOB_10G_BT_PRE2_MASK                          0x00FF0000
+		#define NVM_CFG1_GLOB_10G_BT_PRE2_OFFSET                        16
+	/* When the temperature goes below (warning temperature - delta), the
+	 * warning gpio is unset
 	 */
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_OFFSET 24
-	u32 tx_rx_eq_50g_hlpc; /* 0x15C */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_OFFSET 24
-	u32 tx_rx_eq_50g_mlpc; /* 0x160 */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_OFFSET 24
-	u32 tx_rx_eq_50g_llpc; /* 0x164 */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_OFFSET 24
-	u32 tx_rx_eq_50g_ac; /* 0x168 */
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_MASK 0x000000FF
-		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_OFFSET 0
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_MASK 0x0000FF00
-		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_OFFSET 8
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_OFFSET 16
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_OFFSET 24
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_MASK            0xFF000000
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_OFFSET          24
+	u32 tx_rx_eq_50g_hlpc;                                            /* 0x15C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_MASK             0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_OFFSET           0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_MASK             0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_OFFSET           8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_MASK             0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_OFFSET           16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_MASK             0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_OFFSET           24
+	u32 tx_rx_eq_50g_mlpc;                                            /* 0x160 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_MASK             0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_OFFSET           0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_MASK             0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_OFFSET           8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_MASK             0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_OFFSET           16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_MASK             0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_OFFSET           24
+	u32 tx_rx_eq_50g_llpc;                                            /* 0x164 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_MASK             0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_OFFSET           0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_MASK             0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_OFFSET           8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_MASK             0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_OFFSET           16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_MASK             0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_OFFSET           24
+	u32 tx_rx_eq_50g_ac;                                              /* 0x168 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_MASK               0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_OFFSET             0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_MASK               0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_OFFSET             8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_MASK               0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_OFFSET             16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_MASK               0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_OFFSET             24
 	/*  Set Trace Filter Modules Log Bit Mask */
-	u32 trace_modules; /* 0x16C */
-		#define NVM_CFG1_GLOB_TRACE_MODULES_ERROR 0x1
-		#define NVM_CFG1_GLOB_TRACE_MODULES_DBG 0x2
-		#define NVM_CFG1_GLOB_TRACE_MODULES_DRV_HSI 0x4
-		#define NVM_CFG1_GLOB_TRACE_MODULES_INTERRUPT 0x8
-		#define NVM_CFG1_GLOB_TRACE_MODULES_VPD 0x10
-		#define NVM_CFG1_GLOB_TRACE_MODULES_FLR 0x20
-		#define NVM_CFG1_GLOB_TRACE_MODULES_INIT 0x40
-		#define NVM_CFG1_GLOB_TRACE_MODULES_NVM 0x80
-		#define NVM_CFG1_GLOB_TRACE_MODULES_PIM 0x100
-		#define NVM_CFG1_GLOB_TRACE_MODULES_NET 0x200
-		#define NVM_CFG1_GLOB_TRACE_MODULES_POWER 0x400
-		#define NVM_CFG1_GLOB_TRACE_MODULES_UTILS 0x800
-		#define NVM_CFG1_GLOB_TRACE_MODULES_RESOURCES 0x1000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_SCHEDULER 0x2000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_PHYMOD 0x4000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_EVENTS 0x8000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_PMM 0x10000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_DBG_DRV 0x20000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_ETH 0x40000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_SECURITY 0x80000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_PCIE 0x100000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_TRACE 0x200000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_MANAGEMENT 0x400000
-		#define NVM_CFG1_GLOB_TRACE_MODULES_SIM 0x800000
-	u32 pcie_class_code_fcoe; /* 0x170 */
+	u32 trace_modules;                                                /* 0x16C */
+		#define NVM_CFG1_GLOB_TRACE_MODULES_ERROR                       0x1
+		#define NVM_CFG1_GLOB_TRACE_MODULES_DBG                         0x2
+		#define NVM_CFG1_GLOB_TRACE_MODULES_DRV_HSI                     0x4
+		#define NVM_CFG1_GLOB_TRACE_MODULES_INTERRUPT                   0x8
+		#define NVM_CFG1_GLOB_TRACE_MODULES_TEMPERATURE                 0x10
+		#define NVM_CFG1_GLOB_TRACE_MODULES_FLR                         0x20
+		#define NVM_CFG1_GLOB_TRACE_MODULES_INIT                        0x40
+		#define NVM_CFG1_GLOB_TRACE_MODULES_NVM                         0x80
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PIM                         0x100
+		#define NVM_CFG1_GLOB_TRACE_MODULES_NET                         0x200
+		#define NVM_CFG1_GLOB_TRACE_MODULES_POWER                       0x400
+		#define NVM_CFG1_GLOB_TRACE_MODULES_UTILS                       0x800
+		#define NVM_CFG1_GLOB_TRACE_MODULES_RESOURCES                   0x1000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_SCHEDULER                   0x2000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PHYMOD                      0x4000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_EVENTS                      0x8000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PMM                         0x10000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_DBG_DRV                     0x20000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_ETH                         0x40000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_SECURITY                    0x80000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PCIE                        0x100000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_TRACE                       0x200000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_MANAGEMENT                  0x400000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_SIM                         0x800000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_BUF_MGR                     0x1000000
+	u32 pcie_class_code_fcoe;                                         /* 0x170 */
 	/*  Set PCIe FCoE Class Code */
-		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_MASK 0x00FFFFFF
-		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_OFFSET 0
-	/*  When temperature goes below (ALOM FAN ON AUX value - delta) ALOM
-	 *  FAN ON AUX gpio is unset
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_MASK                 0x00FFFFFF
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_OFFSET               0
+	/* When temperature goes below (ALOM FAN ON AUX value - delta), ALOM
+	 * FAN ON AUX gpio is unset
 	 */
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_OFFSET 24
-	u32 pcie_class_code_iscsi; /* 0x174 */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_MASK                0xFF000000
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_OFFSET              24
+	u32 pcie_class_code_iscsi;                                        /* 0x174 */
 	/*  Set PCIe iSCSI Class Code */
-		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_MASK 0x00FFFFFF
-		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_OFFSET 0
-	/*  When temperature goes below (Dead Temp TH  - delta)Thermal Event
-	 *  gpio is unset
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_MASK                0x00FFFFFF
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_OFFSET              0
+	/* When temperature goes below (Dead Temp TH - delta), Thermal Event
+	 * gpio is unset
 	 */
-		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_MASK 0xFF000000
-		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_OFFSET 24
-	u32 no_provisioned_mac; /* 0x178 */
+		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_MASK                   0xFF000000
+		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_OFFSET                 24
+	u32 no_provisioned_mac;                                           /* 0x178 */
 	/*  Set number of provisioned MAC addresses */
-		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_MASK 0x0000FFFF
-		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_OFFSET 0
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_MASK            0x0000FFFF
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_OFFSET          0
 	/*  Set number of provisioned VF MAC addresses */
-		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_MASK 0x00FF0000
-		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_OFFSET 16
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_MASK         0x00FF0000
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_OFFSET       16
 	/*  Enable/Disable BMC MAC */
-		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_MASK 0x01000000
-		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_OFFSET 24
-		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_DISABLED 0x0
-		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_ENABLED 0x1
-	u32 reserved[43]; /* 0x17C */
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_MASK                  0x01000000
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_OFFSET                24
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_DISABLED              0x0
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_ENABLED               0x1
+	/*  Select the number of ports NCSI reports */
+		#define NVM_CFG1_GLOB_NCSI_GET_CAPAS_CH_COUNT_MASK              0x06000000
+		#define NVM_CFG1_GLOB_NCSI_GET_CAPAS_CH_COUNT_OFFSET            25
+	u32 lowest_mbi_version;                                           /* 0x17C */
+		#define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_0_MASK                 0x000000FF
+		#define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_0_OFFSET               0
+		#define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_1_MASK                 0x0000FF00
+		#define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_1_OFFSET               8
+		#define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_2_MASK                 0x00FF0000
+		#define NVM_CFG1_GLOB_LOWEST_MBI_VERSION_2_OFFSET               16
+	u32 generic_cont5;                                                /* 0x180 */
+	/*  Activate options 305-306 */
+		#define NVM_CFG1_GLOB_OVERRIDE_TX_DRIVER_REGULATOR_MASK         0x00000001
+		#define NVM_CFG1_GLOB_OVERRIDE_TX_DRIVER_REGULATOR_OFFSET       0
+		#define NVM_CFG1_GLOB_OVERRIDE_TX_DRIVER_REGULATOR_DISABLED     0x0
+		#define NVM_CFG1_GLOB_OVERRIDE_TX_DRIVER_REGULATOR_ENABLED      0x1
+	/* A value of 0x00 gives 720mV, a setting of 0x05 gives 800mV, and a
+	 * setting of 0x15 gives a swing of 1060mV
+	 */
+		#define NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN1_2_MASK          0x0000003E
+		#define NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN1_2_OFFSET        1
+	/* A value of 0x00 gives 720mV, a setting of 0x05 gives 800mV, and a
+	 * setting of 0x15 gives a swing of 1060mV
+	 */
+		#define NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN3_MASK            0x000007C0
+		#define NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN3_OFFSET          6
+	/*  QinQ Device Capability */
+		#define NVM_CFG1_GLOB_QINQ_SUPPORT_MASK                         0x00000800
+		#define NVM_CFG1_GLOB_QINQ_SUPPORT_OFFSET                       11
+		#define NVM_CFG1_GLOB_QINQ_SUPPORT_DISABLED                     0x0
+		#define NVM_CFG1_GLOB_QINQ_SUPPORT_ENABLED                      0x1
+	u32 pre2_generic_cont_4;                                          /* 0x184 */
+		#define NVM_CFG1_GLOB_50G_AC_PRE2_MASK                          0x000000FF
+		#define NVM_CFG1_GLOB_50G_AC_PRE2_OFFSET                        0
+	/*  Set PCIe NVMeTCP Class Code */
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_NVMETCP_MASK              0xFFFFFF00
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_NVMETCP_OFFSET            8
+	u32 reserved[40];                                                 /* 0x188 */
 };
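
Note: every option above follows the file's MASK/OFFSET convention, so a
field is decoded by masking its containing u32 and shifting right by the
offset. A minimal sketch of such a decode, assuming the raw generic_cont5
word has already been read from NVM; the helper name and the standalone
typedef are illustrative only and not part of this patch:

	#include <stdint.h>

	typedef uint32_t u32; /* stand-in for the driver's fixed-width typedef */

	/* Illustrative helper (not part of this patch): decode the Gen1/Gen2
	 * TX regulator voltage code from generic_cont5 using the MASK/OFFSET
	 * pair defined above. Per the structure comment, a code of 0x00
	 * corresponds to 720mV and 0x05 to 800mV.
	 */
	static inline u32 nvm_cfg_get_tx_reg_voltage_gen1_2(u32 generic_cont5)
	{
		return (generic_cont5 &
			NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN1_2_MASK) >>
		       NVM_CFG1_GLOB_TX_REGULATOR_VOLTAGE_GEN1_2_OFFSET;
	}

Encoding works symmetrically: shift the code left by the OFFSET and merge it
into the word under the MASK.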
 
 struct nvm_cfg1_path {
-	u32 reserved[1]; /* 0x0 */
+	u32 reserved[1];                                                    /* 0x0 */
 };
 
 struct nvm_cfg1_port {
-	u32 reserved__m_relocated_to_option_123; /* 0x0 */
-	u32 reserved__m_relocated_to_option_124; /* 0x4 */
-	u32 generic_cont0; /* 0x8 */
-		#define NVM_CFG1_PORT_LED_MODE_MASK 0x000000FF
-		#define NVM_CFG1_PORT_LED_MODE_OFFSET 0
-		#define NVM_CFG1_PORT_LED_MODE_MAC1 0x0
-		#define NVM_CFG1_PORT_LED_MODE_PHY1 0x1
-		#define NVM_CFG1_PORT_LED_MODE_PHY2 0x2
-		#define NVM_CFG1_PORT_LED_MODE_PHY3 0x3
-		#define NVM_CFG1_PORT_LED_MODE_MAC2 0x4
-		#define NVM_CFG1_PORT_LED_MODE_PHY4 0x5
-		#define NVM_CFG1_PORT_LED_MODE_PHY5 0x6
-		#define NVM_CFG1_PORT_LED_MODE_PHY6 0x7
-		#define NVM_CFG1_PORT_LED_MODE_MAC3 0x8
-		#define NVM_CFG1_PORT_LED_MODE_PHY7 0x9
-		#define NVM_CFG1_PORT_LED_MODE_PHY8 0xA
-		#define NVM_CFG1_PORT_LED_MODE_PHY9 0xB
-		#define NVM_CFG1_PORT_LED_MODE_MAC4 0xC
-		#define NVM_CFG1_PORT_LED_MODE_PHY10 0xD
-		#define NVM_CFG1_PORT_LED_MODE_PHY11 0xE
-		#define NVM_CFG1_PORT_LED_MODE_PHY12 0xF
-		#define NVM_CFG1_PORT_LED_MODE_BREAKOUT 0x10
-		#define NVM_CFG1_PORT_LED_MODE_OCP_3_0 0x11
-		#define NVM_CFG1_PORT_LED_MODE_OCP_3_0_MAC2 0x12
-		#define NVM_CFG1_PORT_LED_MODE_SW_DEF1 0x13
-		#define NVM_CFG1_PORT_LED_MODE_SW_DEF1_MAC2 0x14
-		#define NVM_CFG1_PORT_ROCE_PRIORITY_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_ROCE_PRIORITY_OFFSET 8
-		#define NVM_CFG1_PORT_DCBX_MODE_MASK 0x000F0000
-		#define NVM_CFG1_PORT_DCBX_MODE_OFFSET 16
-		#define NVM_CFG1_PORT_DCBX_MODE_DISABLED 0x0
-		#define NVM_CFG1_PORT_DCBX_MODE_IEEE 0x1
-		#define NVM_CFG1_PORT_DCBX_MODE_CEE 0x2
-		#define NVM_CFG1_PORT_DCBX_MODE_DYNAMIC 0x3
-		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_MASK 0x00F00000
-		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_OFFSET 20
-		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ETHERNET 0x1
-		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_FCOE 0x2
-		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ISCSI 0x4
+	u32 reserved__m_relocated_to_option_123;                            /* 0x0 */
+	u32 reserved__m_relocated_to_option_124;                            /* 0x4 */
+	u32 generic_cont0;                                                  /* 0x8 */
+		#define NVM_CFG1_PORT_LED_MODE_MASK                             0x000000FF
+		#define NVM_CFG1_PORT_LED_MODE_OFFSET                           0
+		#define NVM_CFG1_PORT_LED_MODE_MAC1                             0x0
+		#define NVM_CFG1_PORT_LED_MODE_PHY1                             0x1
+		#define NVM_CFG1_PORT_LED_MODE_PHY2                             0x2
+		#define NVM_CFG1_PORT_LED_MODE_PHY3                             0x3
+		#define NVM_CFG1_PORT_LED_MODE_MAC2                             0x4
+		#define NVM_CFG1_PORT_LED_MODE_PHY4                             0x5
+		#define NVM_CFG1_PORT_LED_MODE_PHY5                             0x6
+		#define NVM_CFG1_PORT_LED_MODE_PHY6                             0x7
+		#define NVM_CFG1_PORT_LED_MODE_MAC3                             0x8
+		#define NVM_CFG1_PORT_LED_MODE_PHY7                             0x9
+		#define NVM_CFG1_PORT_LED_MODE_PHY8                             0xA
+		#define NVM_CFG1_PORT_LED_MODE_PHY9                             0xB
+		#define NVM_CFG1_PORT_LED_MODE_MAC4                             0xC
+		#define NVM_CFG1_PORT_LED_MODE_PHY10                            0xD
+		#define NVM_CFG1_PORT_LED_MODE_PHY11                            0xE
+		#define NVM_CFG1_PORT_LED_MODE_PHY12                            0xF
+		#define NVM_CFG1_PORT_LED_MODE_BREAKOUT                         0x10
+		#define NVM_CFG1_PORT_LED_MODE_OCP_3_0                          0x11
+		#define NVM_CFG1_PORT_LED_MODE_OCP_3_0_MAC2                     0x12
+		#define NVM_CFG1_PORT_LED_MODE_SW_DEF1                          0x13
+		#define NVM_CFG1_PORT_LED_MODE_SW_DEF1_MAC2                     0x14
+		#define NVM_CFG1_PORT_ROCE_PRIORITY_MASK                        0x0000FF00
+		#define NVM_CFG1_PORT_ROCE_PRIORITY_OFFSET                      8
+		#define NVM_CFG1_PORT_DCBX_MODE_MASK                            0x000F0000
+		#define NVM_CFG1_PORT_DCBX_MODE_OFFSET                          16
+		#define NVM_CFG1_PORT_DCBX_MODE_DISABLED                        0x0
+		#define NVM_CFG1_PORT_DCBX_MODE_IEEE                            0x1
+		#define NVM_CFG1_PORT_DCBX_MODE_CEE                             0x2
+		#define NVM_CFG1_PORT_DCBX_MODE_DYNAMIC                         0x3
+		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_MASK            0x00F00000
+		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_OFFSET          20
+		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ETHERNET        0x1
+		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_FCOE            0x2
+		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ISCSI           0x4
+		#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_NVMETCP         0x8
 	/* GPIO for HW reset of the PHY. If it is the same for all ports, the
 	 * same value must be set for all ports
 	 */
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_MASK 0xFF000000
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_OFFSET 24
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_NA 0x0
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO0 0x1
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO1 0x2
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO2 0x3
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO3 0x4
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO4 0x5
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO5 0x6
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO6 0x7
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO7 0x8
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO8 0x9
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO9 0xA
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO10 0xB
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO11 0xC
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO12 0xD
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO13 0xE
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO14 0xF
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO15 0x10
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO16 0x11
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO17 0x12
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO18 0x13
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO19 0x14
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO20 0x15
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO21 0x16
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO22 0x17
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO23 0x18
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO24 0x19
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO25 0x1A
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO26 0x1B
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO27 0x1C
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO28 0x1D
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO29 0x1E
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO30 0x1F
-		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO31 0x20
-	u32 pcie_cfg; /* 0xC */
-		#define NVM_CFG1_PORT_RESERVED15_MASK 0x00000007
-		#define NVM_CFG1_PORT_RESERVED15_OFFSET 0
-	u32 features; /* 0x10 */
-		#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_MASK 0x00000001
-		#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_OFFSET 0
-		#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_DISABLED 0x0
-		#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_ENABLED 0x1
-		#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_MASK 0x00000002
-		#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_OFFSET 1
-		#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_DISABLED 0x0
-		#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_ENABLED 0x1
-	u32 speed_cap_mask; /* 0x14 */
-		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_MASK 0x0000FFFF
-		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
-		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G 0x20
-		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
-		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000
-		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_OFFSET 16
-		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_50G 0x20
-		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
-	u32 link_settings; /* 0x18 */
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_MASK 0x0000000F
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET 0
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK 0x00000070
-		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET 4
-		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG 0x1
-		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX 0x2
-		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX 0x4
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_MASK 0x00000780
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_OFFSET 7
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MFW_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_MASK 0x00003800
-		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_OFFSET 11
-		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_AUTONEG 0x1
-		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_RX 0x2
-		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_TX 0x4
-		#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_MASK \
-			0x00004000
-		#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_OFFSET 14
-		#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_DISABLED \
-			0x0
-		#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_ENABLED \
-			0x1
-		#define NVM_CFG1_PORT_AN_25G_50G_OUI_MASK 0x00018000
-		#define NVM_CFG1_PORT_AN_25G_50G_OUI_OFFSET 15
-		#define NVM_CFG1_PORT_AN_25G_50G_OUI_CONSORTIUM 0x0
-		#define NVM_CFG1_PORT_AN_25G_50G_OUI_BAM 0x1
-		#define NVM_CFG1_PORT_FEC_FORCE_MODE_MASK 0x000E0000
-		#define NVM_CFG1_PORT_FEC_FORCE_MODE_OFFSET 17
-		#define NVM_CFG1_PORT_FEC_FORCE_MODE_NONE 0x0
-		#define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE 0x1
-		#define NVM_CFG1_PORT_FEC_FORCE_MODE_RS 0x2
-		#define NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO 0x7
-		#define NVM_CFG1_PORT_FEC_AN_MODE_MASK 0x00700000
-		#define NVM_CFG1_PORT_FEC_AN_MODE_OFFSET 20
-		#define NVM_CFG1_PORT_FEC_AN_MODE_NONE 0x0
-		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_FIRECODE 0x1
-		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE 0x2
-		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_AND_25G_FIRECODE 0x3
-		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS 0x4
-		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS 0x5
-		#define NVM_CFG1_PORT_FEC_AN_MODE_ALL 0x6
-		#define NVM_CFG1_PORT_SMARTLINQ_MODE_MASK 0x00800000
-		#define NVM_CFG1_PORT_SMARTLINQ_MODE_OFFSET 23
-		#define NVM_CFG1_PORT_SMARTLINQ_MODE_DISABLED 0x0
-		#define NVM_CFG1_PORT_SMARTLINQ_MODE_ENABLED 0x1
-		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_MASK 0x01000000
-		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_OFFSET 24
-		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_DISABLED 0x0
-		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_ENABLED 0x1
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_MASK                        0xFF000000
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_OFFSET                      24
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_NA                          0x0
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO0                       0x1
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO1                       0x2
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO2                       0x3
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO3                       0x4
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO4                       0x5
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO5                       0x6
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO6                       0x7
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO7                       0x8
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO8                       0x9
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO9                       0xA
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO10                      0xB
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO11                      0xC
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO12                      0xD
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO13                      0xE
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO14                      0xF
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO15                      0x10
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO16                      0x11
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO17                      0x12
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO18                      0x13
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO19                      0x14
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO20                      0x15
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO21                      0x16
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO22                      0x17
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO23                      0x18
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO24                      0x19
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO25                      0x1A
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO26                      0x1B
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO27                      0x1C
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO28                      0x1D
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO29                      0x1E
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO30                      0x1F
+		#define NVM_CFG1_PORT_EXT_PHY_RESET_GPIO31                      0x20
+	u32 pcie_cfg;                                                       /* 0xC */
+		#define NVM_CFG1_PORT_RESERVED15_MASK                           0x00000007
+		#define NVM_CFG1_PORT_RESERVED15_OFFSET                         0
+		#define NVM_CFG1_PORT_NVMETCP_DID_SUFFIX_MASK                   0x000007F8
+		#define NVM_CFG1_PORT_NVMETCP_DID_SUFFIX_OFFSET                 3
+		#define NVM_CFG1_PORT_NVMETCP_MODE_MASK                         0x00007800
+		#define NVM_CFG1_PORT_NVMETCP_MODE_OFFSET                       11
+		#define NVM_CFG1_PORT_NVMETCP_MODE_HOST_MODE_ONLY               0x0
+		#define NVM_CFG1_PORT_NVMETCP_MODE_TARGET_MODE_ONLY             0x1
+		#define NVM_CFG1_PORT_NVMETCP_MODE_DUAL_MODE                    0x2
+	u32 features;                                                      /* 0x10 */
+		#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_MASK           0x00000001
+		#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_OFFSET         0
+		#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_DISABLED       0x0
+		#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_ENABLED        0x1
+		#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_MASK                     0x00000002
+		#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_OFFSET                   1
+		#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_DISABLED                 0x0
+		#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_ENABLED                  0x1
+	/*  Enable the Permit Total Port Shutdown feature */
+		#define NVM_CFG1_PORT_PERMIT_TOTAL_PORT_SHUTDOWN_MASK           0x00000004
+		#define NVM_CFG1_PORT_PERMIT_TOTAL_PORT_SHUTDOWN_OFFSET         2
+		#define NVM_CFG1_PORT_PERMIT_TOTAL_PORT_SHUTDOWN_DISABLED       0x0
+		#define NVM_CFG1_PORT_PERMIT_TOTAL_PORT_SHUTDOWN_ENABLED        0x1
+	u32 speed_cap_mask;                                                /* 0x14 */
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_MASK            0x0000FFFF
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_OFFSET          0
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G              0x1
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G             0x2
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_20G             0x4
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G             0x8
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G             0x10
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G             0x20
+		#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G         0x40
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_MASK            0xFFFF0000
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_OFFSET          16
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_1G              0x1
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_10G             0x2
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_20G             0x4
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_25G             0x8
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_40G             0x10
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_50G             0x20
+		#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_BB_100G         0x40
+	u32 link_settings;                                                 /* 0x18 */
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_MASK                       0x0000000F
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET                     0
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_AUTONEG                    0x0
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_1G                         0x1
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_10G                        0x2
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_20G                        0x3
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_25G                        0x4
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_40G                        0x5
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_50G                        0x6
+		#define NVM_CFG1_PORT_DRV_LINK_SPEED_BB_100G                    0x7
+		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK                     0x00000070
+		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET                   4
+		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG                  0x1
+		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX                       0x2
+		#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX                       0x4
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_MASK                       0x00000780
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_OFFSET                     7
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_AUTONEG                    0x0
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_1G                         0x1
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_10G                        0x2
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_20G                        0x3
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_25G                        0x4
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_40G                        0x5
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_50G                        0x6
+		#define NVM_CFG1_PORT_MFW_LINK_SPEED_BB_100G                    0x7
+		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_MASK                     0x00003800
+		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_OFFSET                   11
+		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_AUTONEG                  0x1
+		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_RX                       0x2
+		#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_TX                       0x4
+		#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_MASK      0x00004000
+		#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_OFFSET    14
+		#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_DISABLED  0x0
+		#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_ENABLED   0x1
+		#define NVM_CFG1_PORT_AN_25G_50G_OUI_MASK                       0x00018000
+		#define NVM_CFG1_PORT_AN_25G_50G_OUI_OFFSET                     15
+		#define NVM_CFG1_PORT_AN_25G_50G_OUI_CONSORTIUM                 0x0
+		#define NVM_CFG1_PORT_AN_25G_50G_OUI_BAM                        0x1
+		#define NVM_CFG1_PORT_FEC_FORCE_MODE_MASK                       0x000E0000
+		#define NVM_CFG1_PORT_FEC_FORCE_MODE_OFFSET                     17
+		#define NVM_CFG1_PORT_FEC_FORCE_MODE_NONE                       0x0
+		#define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE                   0x1
+		#define NVM_CFG1_PORT_FEC_FORCE_MODE_RS                         0x2
+		#define NVM_CFG1_PORT_FEC_FORCE_MODE_AUTO                       0x7
+		#define NVM_CFG1_PORT_FEC_AN_MODE_MASK                          0x00700000
+		#define NVM_CFG1_PORT_FEC_AN_MODE_OFFSET                        20
+		#define NVM_CFG1_PORT_FEC_AN_MODE_NONE                          0x0
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_FIRECODE                  0x1
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE                  0x2
+		#define NVM_CFG1_PORT_FEC_AN_MODE_10G_AND_25G_FIRECODE          0x3
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_RS                        0x4
+		#define NVM_CFG1_PORT_FEC_AN_MODE_25G_FIRECODE_AND_RS           0x5
+		#define NVM_CFG1_PORT_FEC_AN_MODE_ALL                           0x6
+		#define NVM_CFG1_PORT_SMARTLINQ_MODE_MASK                       0x00800000
+		#define NVM_CFG1_PORT_SMARTLINQ_MODE_OFFSET                     23
+		#define NVM_CFG1_PORT_SMARTLINQ_MODE_DISABLED                   0x0
+		#define NVM_CFG1_PORT_SMARTLINQ_MODE_ENABLED                    0x1
+		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_MASK           0x01000000
+		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_OFFSET         24
+		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_DISABLED       0x0
+		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_ENABLED        0x1
 	/*  Enable/Disable RX PAM-4 precoding */
-		#define NVM_CFG1_PORT_RX_PRECODE_MASK 0x02000000
-		#define NVM_CFG1_PORT_RX_PRECODE_OFFSET 25
-		#define NVM_CFG1_PORT_RX_PRECODE_DISABLED 0x0
-		#define NVM_CFG1_PORT_RX_PRECODE_ENABLED 0x1
+		#define NVM_CFG1_PORT_RX_PRECODE_MASK                           0x02000000
+		#define NVM_CFG1_PORT_RX_PRECODE_OFFSET                         25
+		#define NVM_CFG1_PORT_RX_PRECODE_DISABLED                       0x0
+		#define NVM_CFG1_PORT_RX_PRECODE_ENABLED                        0x1
 	/*  Enable/Disable TX PAM-4 precoding */
-		#define NVM_CFG1_PORT_TX_PRECODE_MASK 0x04000000
-		#define NVM_CFG1_PORT_TX_PRECODE_OFFSET 26
-		#define NVM_CFG1_PORT_TX_PRECODE_DISABLED 0x0
-		#define NVM_CFG1_PORT_TX_PRECODE_ENABLED 0x1
-	u32 phy_cfg; /* 0x1C */
-		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
-		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
-		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_HIGIG 0x1
-		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_SCRAMBLER 0x2
-		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_FIBER 0x4
-		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_CL72_AN 0x8
-		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_FEC_AN 0x10
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_MASK 0x00FF0000
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_OFFSET 16
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_BYPASS 0x0
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR 0x2
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR2 0x3
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR4 0x4
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XFI 0x8
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SFI 0x9
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_1000X 0xB
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SGMII 0xC
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLAUI 0x11
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLPPI 0x12
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CAUI 0x21
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CPPI 0x22
-		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_25GAUI 0x31
-		#define NVM_CFG1_PORT_AN_MODE_MASK 0xFF000000
-		#define NVM_CFG1_PORT_AN_MODE_OFFSET 24
-		#define NVM_CFG1_PORT_AN_MODE_NONE 0x0
-		#define NVM_CFG1_PORT_AN_MODE_CL73 0x1
-		#define NVM_CFG1_PORT_AN_MODE_CL37 0x2
-		#define NVM_CFG1_PORT_AN_MODE_CL73_BAM 0x3
-		#define NVM_CFG1_PORT_AN_MODE_BB_CL37_BAM 0x4
-		#define NVM_CFG1_PORT_AN_MODE_BB_HPAM 0x5
-		#define NVM_CFG1_PORT_AN_MODE_BB_SGMII 0x6
-	u32 mgmt_traffic; /* 0x20 */
-		#define NVM_CFG1_PORT_RESERVED61_MASK 0x0000000F
-		#define NVM_CFG1_PORT_RESERVED61_OFFSET 0
-	u32 ext_phy; /* 0x24 */
-		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_MASK 0x000000FF
-		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_OFFSET 0
-		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_NONE 0x0
-		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM8485X 0x1
-		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM5422X 0x2
-		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_88X33X0 0x3
-		#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_OFFSET 8
+		#define NVM_CFG1_PORT_TX_PRECODE_MASK                           0x04000000
+		#define NVM_CFG1_PORT_TX_PRECODE_OFFSET                         26
+		#define NVM_CFG1_PORT_TX_PRECODE_DISABLED                       0x0
+		#define NVM_CFG1_PORT_TX_PRECODE_ENABLED                        0x1
+	u32 phy_cfg;                                                       /* 0x1C */
+		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK                  0x0000FFFF
+		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET                0
+		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_HIGIG                 0x1
+		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_SCRAMBLER             0x2
+		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_FIBER                 0x4
+		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_CL72_AN       0x8
+		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_FEC_AN        0x10
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_MASK                 0x00FF0000
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_OFFSET               16
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_BYPASS               0x0
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR                   0x2
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR2                  0x3
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR4                  0x4
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XFI                  0x8
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SFI                  0x9
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_1000X                0xB
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SGMII                0xC
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLAUI                0x11
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLPPI                0x12
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CAUI                 0x21
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CPPI                 0x22
+		#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_25GAUI               0x31
+		#define NVM_CFG1_PORT_AN_MODE_MASK                              0xFF000000
+		#define NVM_CFG1_PORT_AN_MODE_OFFSET                            24
+		#define NVM_CFG1_PORT_AN_MODE_NONE                              0x0
+		#define NVM_CFG1_PORT_AN_MODE_CL73                              0x1
+		#define NVM_CFG1_PORT_AN_MODE_CL37                              0x2
+		#define NVM_CFG1_PORT_AN_MODE_CL73_BAM                          0x3
+		#define NVM_CFG1_PORT_AN_MODE_BB_CL37_BAM                       0x4
+		#define NVM_CFG1_PORT_AN_MODE_BB_HPAM                           0x5
+		#define NVM_CFG1_PORT_AN_MODE_BB_SGMII                          0x6
+	u32 mgmt_traffic;                                                  /* 0x20 */
+		#define NVM_CFG1_PORT_RESERVED61_MASK                           0x0000000F
+		#define NVM_CFG1_PORT_RESERVED61_OFFSET                         0
+	u32 ext_phy;                                                       /* 0x24 */
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_MASK                    0x000000FF
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_OFFSET                  0
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_NONE                    0x0
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM8485X                0x1
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM5422X                0x2
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_88X33X0                 0x3
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_AQR11X                  0x4
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_MASK                 0x0000FF00
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_OFFSET               8
 	/*  EEE power saving mode */
-		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_MASK 0x00FF0000
-		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_OFFSET 16
-		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_DISABLED 0x0
-		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_BALANCED 0x1
-		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_AGGRESSIVE 0x2
-		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_LOW_LATENCY 0x3
-	u32 mba_cfg1; /* 0x28 */
-		#define NVM_CFG1_PORT_PREBOOT_OPROM_MASK 0x00000001
-		#define NVM_CFG1_PORT_PREBOOT_OPROM_OFFSET 0
-		#define NVM_CFG1_PORT_PREBOOT_OPROM_DISABLED 0x0
-		#define NVM_CFG1_PORT_PREBOOT_OPROM_ENABLED 0x1
-		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_MASK 0x00000006
-		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_OFFSET 1
-		#define NVM_CFG1_PORT_MBA_DELAY_TIME_MASK 0x00000078
-		#define NVM_CFG1_PORT_MBA_DELAY_TIME_OFFSET 3
-		#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_MASK 0x00000080
-		#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_OFFSET 7
-		#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_S 0x0
-		#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_B 0x1
-		#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_MASK 0x00000100
-		#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_OFFSET 8
-		#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_DISABLED 0x0
-		#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_ENABLED 0x1
-		#define NVM_CFG1_PORT_RESERVED5_MASK 0x0001FE00
-		#define NVM_CFG1_PORT_RESERVED5_OFFSET 9
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_MASK 0x001E0000
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_OFFSET 17
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_MASK \
-			0x00E00000
-		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_OFFSET 21
-		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_MASK \
-			0x01000000
-		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_OFFSET 24
-		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_DISABLED \
-			0x0
-		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_ENABLED 0x1
-	u32 mba_cfg2; /* 0x2C */
-		#define NVM_CFG1_PORT_RESERVED65_MASK 0x0000FFFF
-		#define NVM_CFG1_PORT_RESERVED65_OFFSET 0
-		#define NVM_CFG1_PORT_RESERVED66_MASK 0x00010000
-		#define NVM_CFG1_PORT_RESERVED66_OFFSET 16
-		#define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_MASK 0x01FE0000
-		#define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_OFFSET 17
-	u32 vf_cfg; /* 0x30 */
-		#define NVM_CFG1_PORT_RESERVED8_MASK 0x0000FFFF
-		#define NVM_CFG1_PORT_RESERVED8_OFFSET 0
-		#define NVM_CFG1_PORT_RESERVED6_MASK 0x000F0000
-		#define NVM_CFG1_PORT_RESERVED6_OFFSET 16
-	struct nvm_cfg_mac_address lldp_mac_address; /* 0x34 */
-	u32 led_port_settings; /* 0x3C */
-		#define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_MASK 0x000000FF
-		#define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_OFFSET 0
-		#define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_OFFSET 8
-		#define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_MASK 0x00FF0000
-		#define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_OFFSET 16
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_1G 0x1
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_10G 0x2
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_25G 0x4
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_25G 0x8
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_40G 0x8
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_40G 0x10
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_50G 0x10
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_50G 0x20
-		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_100G 0x40
+		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_MASK                0x00FF0000
+		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_OFFSET              16
+		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_DISABLED            0x0
+		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_BALANCED            0x1
+		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_AGGRESSIVE          0x2
+		#define NVM_CFG1_PORT_EEE_POWER_SAVING_MODE_LOW_LATENCY         0x3
+	/*  Ext PHY AVS Enable */
+		#define NVM_CFG1_PORT_EXT_PHY_AVS_ENABLE_MASK                   0x01000000
+		#define NVM_CFG1_PORT_EXT_PHY_AVS_ENABLE_OFFSET                 24
+		#define NVM_CFG1_PORT_EXT_PHY_AVS_ENABLE_DISABLED               0x0
+		#define NVM_CFG1_PORT_EXT_PHY_AVS_ENABLE_ENABLED                0x1
+	u32 mba_cfg1;                                                      /* 0x28 */
+		#define NVM_CFG1_PORT_PREBOOT_OPROM_MASK                        0x00000001
+		#define NVM_CFG1_PORT_PREBOOT_OPROM_OFFSET                      0
+		#define NVM_CFG1_PORT_PREBOOT_OPROM_DISABLED                    0x0
+		#define NVM_CFG1_PORT_PREBOOT_OPROM_ENABLED                     0x1
+		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_MASK            0x00000006
+		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_OFFSET          1
+		#define NVM_CFG1_PORT_MBA_DELAY_TIME_MASK                       0x00000078
+		#define NVM_CFG1_PORT_MBA_DELAY_TIME_OFFSET                     3
+		#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_MASK                    0x00000080
+		#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_OFFSET                  7
+		#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_S                  0x0
+		#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_B                  0x1
+		#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_MASK                0x00000100
+		#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_OFFSET              8
+		#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_DISABLED            0x0
+		#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_ENABLED             0x1
+		#define NVM_CFG1_PORT_RESERVED5_MASK                            0x0001FE00
+		#define NVM_CFG1_PORT_RESERVED5_OFFSET                          9
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_MASK                   0x001E0000
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_OFFSET                 17
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_AUTONEG                0x0
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_1G                     0x1
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_10G                    0x2
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_20G                    0x3
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_25G                    0x4
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_40G                    0x5
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_50G                    0x6
+		#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_BB_100G                0x7
+		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_MASK     0x00E00000
+		#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_OFFSET   21
+		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_MASK       0x01000000
+		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_OFFSET     24
+		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_DISABLED   0x0
+		#define NVM_CFG1_PORT_RESERVED_WAS_PREBOOT_SMARTLINQ_ENABLED    0x1
+	u32 mba_cfg2;                                                      /* 0x2C */
+		#define NVM_CFG1_PORT_RESERVED65_MASK                           0x0000FFFF
+		#define NVM_CFG1_PORT_RESERVED65_OFFSET                         0
+		#define NVM_CFG1_PORT_RESERVED66_MASK                           0x00010000
+		#define NVM_CFG1_PORT_RESERVED66_OFFSET                         16
+		#define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_MASK                0x01FE0000
+		#define NVM_CFG1_PORT_PREBOOT_LINK_UP_DELAY_OFFSET              17
+	u32 vf_cfg;                                                        /* 0x30 */
+		#define NVM_CFG1_PORT_RESERVED8_MASK                            0x0000FFFF
+		#define NVM_CFG1_PORT_RESERVED8_OFFSET                          0
+		#define NVM_CFG1_PORT_RESERVED6_MASK                            0x000F0000
+		#define NVM_CFG1_PORT_RESERVED6_OFFSET                          16
+	struct nvm_cfg_mac_address lldp_mac_address;                       /* 0x34 */
+	u32 led_port_settings;                                             /* 0x3C */
+		#define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_MASK                   0x000000FF
+		#define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_OFFSET                 0
+		#define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_MASK                   0x0000FF00
+		#define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_OFFSET                 8
+		#define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_MASK                   0x00FF0000
+		#define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_OFFSET                 16
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_1G                      0x1
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_10G                     0x2
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_25G_AHP_25G          0x4
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_25G_AH_40G_AHP_40G   0x8
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_40G_AH_50G_AHP_50G   0x10
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_50G_AHP_100G         0x20
+		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_100G                 0x40
 	/*  UID LED Blink Mode Settings */
-		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_MASK 0x0F000000
-		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_OFFSET 24
-		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_ACTIVITY_LED 0x1
-		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED0 0x2
-		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED1 0x4
-		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED2 0x8
-	u32 transceiver_00; /* 0x40 */
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_MASK                    0x0F000000
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_OFFSET                  24
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_ACTIVITY_LED            0x1
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED0               0x2
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED1               0x4
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED2               0x8
+	/*  Activity LED Blink Rate in Hz */
+		#define NVM_CFG1_PORT_ACTIVITY_LED_BLINK_RATE_HZ_MASK           0xF0000000
+		#define NVM_CFG1_PORT_ACTIVITY_LED_BLINK_RATE_HZ_OFFSET         28
+	u32 transceiver_00;                                                /* 0x40 */
 	/*  Defines the GPIO mapping of the transceiver module-absent signal */
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_MASK 0x000000FF
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_OFFSET 0
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_NA 0x0
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO0 0x1
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO1 0x2
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO2 0x3
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO3 0x4
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO4 0x5
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO5 0x6
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO6 0x7
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO7 0x8
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO8 0x9
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO9 0xA
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO10 0xB
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO11 0xC
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO12 0xD
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO13 0xE
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO14 0xF
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO15 0x10
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO16 0x11
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO17 0x12
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO18 0x13
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO19 0x14
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO20 0x15
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO21 0x16
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO22 0x17
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO23 0x18
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO24 0x19
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO25 0x1A
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO26 0x1B
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO27 0x1C
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO28 0x1D
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO29 0x1E
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO30 0x1F
-		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO31 0x20
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_MASK                     0x000000FF
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_OFFSET                   0
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_NA                       0x0
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO0                    0x1
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO1                    0x2
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO2                    0x3
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO3                    0x4
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO4                    0x5
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO5                    0x6
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO6                    0x7
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO7                    0x8
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO8                    0x9
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO9                    0xA
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO10                   0xB
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO11                   0xC
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO12                   0xD
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO13                   0xE
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO14                   0xF
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO15                   0x10
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO16                   0x11
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO17                   0x12
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO18                   0x13
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO19                   0x14
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO20                   0x15
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO21                   0x16
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO22                   0x17
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO23                   0x18
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO24                   0x19
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO25                   0x1A
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO26                   0x1B
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO27                   0x1C
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO28                   0x1D
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO29                   0x1E
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO30                   0x1F
+		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO31                   0x20
 	/*  Define the GPIO mux settings to switch the i2c mux to this port */
-		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_MASK 0x00000F00
-		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_OFFSET 8
-		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_MASK 0x0000F000
-		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_OFFSET 12
+		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_MASK                  0x00000F00
+		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_OFFSET                8
+		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_MASK                  0x0000F000
+		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_OFFSET                12
 	/*  Option to override SmartAN FEC requirements */
-		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_MASK 0x00010000
-		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_OFFSET 16
-		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_DISABLED 0x0
-		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_ENABLED 0x1
-	u32 device_ids; /* 0x44 */
-		#define NVM_CFG1_PORT_ETH_DID_SUFFIX_MASK 0x000000FF
-		#define NVM_CFG1_PORT_ETH_DID_SUFFIX_OFFSET 0
-		#define NVM_CFG1_PORT_FCOE_DID_SUFFIX_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_FCOE_DID_SUFFIX_OFFSET 8
-		#define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_MASK 0x00FF0000
-		#define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_OFFSET 16
-		#define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_MASK 0xFF000000
-		#define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_OFFSET 24
-	u32 board_cfg; /* 0x48 */
-	/*  This field defines the board technology
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_MASK                 0x00010000
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_OFFSET               16
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_DISABLED             0x0
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_ENABLED              0x1
+	u32 device_ids;                                                    /* 0x44 */
+		#define NVM_CFG1_PORT_ETH_DID_SUFFIX_MASK                       0x000000FF
+		#define NVM_CFG1_PORT_ETH_DID_SUFFIX_OFFSET                     0
+		#define NVM_CFG1_PORT_FCOE_DID_SUFFIX_MASK                      0x0000FF00
+		#define NVM_CFG1_PORT_FCOE_DID_SUFFIX_OFFSET                    8
+		#define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_MASK                     0x00FF0000
+		#define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_OFFSET                   16
+		#define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_MASK                  0xFF000000
+		#define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_OFFSET                24
+	u32 board_cfg;                                                     /* 0x48 */
+	/* This field defines the board technology
 	 * (backplane, transceiver, external PHY)
 	 */
-		#define NVM_CFG1_PORT_PORT_TYPE_MASK 0x000000FF
-		#define NVM_CFG1_PORT_PORT_TYPE_OFFSET 0
-		#define NVM_CFG1_PORT_PORT_TYPE_UNDEFINED 0x0
-		#define NVM_CFG1_PORT_PORT_TYPE_MODULE 0x1
-		#define NVM_CFG1_PORT_PORT_TYPE_BACKPLANE 0x2
-		#define NVM_CFG1_PORT_PORT_TYPE_EXT_PHY 0x3
-		#define NVM_CFG1_PORT_PORT_TYPE_MODULE_SLAVE 0x4
+		#define NVM_CFG1_PORT_PORT_TYPE_MASK                            0x000000FF
+		#define NVM_CFG1_PORT_PORT_TYPE_OFFSET                          0
+		#define NVM_CFG1_PORT_PORT_TYPE_UNDEFINED                       0x0
+		#define NVM_CFG1_PORT_PORT_TYPE_MODULE                          0x1
+		#define NVM_CFG1_PORT_PORT_TYPE_BACKPLANE                       0x2
+		#define NVM_CFG1_PORT_PORT_TYPE_EXT_PHY                         0x3
+		#define NVM_CFG1_PORT_PORT_TYPE_MODULE_SLAVE                    0x4
 	/*  This field defines the GPIO mapped to the tx_disable signal in the SFP */
-		#define NVM_CFG1_PORT_TX_DISABLE_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_TX_DISABLE_OFFSET 8
-		#define NVM_CFG1_PORT_TX_DISABLE_NA 0x0
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO0 0x1
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO1 0x2
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO2 0x3
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO3 0x4
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO4 0x5
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO5 0x6
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO6 0x7
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO7 0x8
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO8 0x9
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO9 0xA
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO10 0xB
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO11 0xC
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO12 0xD
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO13 0xE
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO14 0xF
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO15 0x10
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO16 0x11
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO17 0x12
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO18 0x13
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO19 0x14
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO20 0x15
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO21 0x16
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO22 0x17
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO23 0x18
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO24 0x19
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO25 0x1A
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO26 0x1B
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO27 0x1C
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO28 0x1D
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO29 0x1E
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO30 0x1F
-		#define NVM_CFG1_PORT_TX_DISABLE_GPIO31 0x20
-	u32 mnm_10g_cap; /* 0x4C */
-		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_MASK \
-			0x0000FFFF
-		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
-		#define \
-		    NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
-		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_MASK \
-			0xFFFF0000
-		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_OFFSET \
-			16
-		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
-		#define \
-		    NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
-	u32 mnm_10g_ctrl; /* 0x50 */
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_MASK 0x0000000F
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_MASK 0x000000F0
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_OFFSET 4
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_BB_100G 0x7
-	/*  This field defines the board technology
+		#define NVM_CFG1_PORT_TX_DISABLE_MASK                           0x0000FF00
+		#define NVM_CFG1_PORT_TX_DISABLE_OFFSET                         8
+		#define NVM_CFG1_PORT_TX_DISABLE_NA                             0x0
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO0                          0x1
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO1                          0x2
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO2                          0x3
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO3                          0x4
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO4                          0x5
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO5                          0x6
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO6                          0x7
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO7                          0x8
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO8                          0x9
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO9                          0xA
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO10                         0xB
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO11                         0xC
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO12                         0xD
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO13                         0xE
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO14                         0xF
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO15                         0x10
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO16                         0x11
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO17                         0x12
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO18                         0x13
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO19                         0x14
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO20                         0x15
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO21                         0x16
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO22                         0x17
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO23                         0x18
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO24                         0x19
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO25                         0x1A
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO26                         0x1B
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO27                         0x1C
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO28                         0x1D
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO29                         0x1E
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO30                         0x1F
+		#define NVM_CFG1_PORT_TX_DISABLE_GPIO31                         0x20
+	u32 mnm_10g_cap;                                                   /* 0x4C */
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_MASK    0x0000FFFF
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_OFFSET  0
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_1G      0x1
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_10G     0x2
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_20G     0x4
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_25G     0x8
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_40G     0x10
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_50G     0x20
+		#define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_MASK    0xFFFF0000
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_OFFSET  16
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_1G      0x1
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_10G     0x2
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_20G     0x4
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_25G     0x8
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_40G     0x10
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_50G     0x20
+		#define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
+	u32 mnm_10g_ctrl;                                                  /* 0x50 */
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_MASK               0x0000000F
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_OFFSET             0
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_AUTONEG            0x0
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_1G                 0x1
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_10G                0x2
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_20G                0x3
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_25G                0x4
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_40G                0x5
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_50G                0x6
+		#define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_BB_100G            0x7
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_MASK               0x000000F0
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_OFFSET             4
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_AUTONEG            0x0
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_1G                 0x1
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_10G                0x2
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_20G                0x3
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_25G                0x4
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_40G                0x5
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_50G                0x6
+		#define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_BB_100G            0x7
+	/* This field defines the board technology
 	 * (backplane, transceiver, external PHY)
-	*/
-		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_OFFSET 8
-		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_UNDEFINED 0x0
-		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE 0x1
-		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_BACKPLANE 0x2
-		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_EXT_PHY 0x3
-		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE_SLAVE 0x4
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_MASK \
-			0x00FF0000
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_OFFSET 16
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_BYPASS 0x0
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR 0x2
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR2 0x3
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR4 0x4
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XFI 0x8
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SFI 0x9
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_1000X 0xB
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SGMII 0xC
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLAUI 0x11
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLPPI 0x12
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CAUI 0x21
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CPPI 0x22
-		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_25GAUI 0x31
-		#define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_MASK 0xFF000000
-		#define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_OFFSET 24
-	u32 mnm_10g_misc; /* 0x54 */
-		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_MASK 0x00000007
-		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_NONE 0x0
-		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_FIRECODE 0x1
-		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_RS 0x2
-		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_AUTO 0x7
-	u32 mnm_25g_cap; /* 0x58 */
-		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_MASK \
-			0x0000FFFF
-		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
-		#define \
-		    NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
-		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_MASK \
-			0xFFFF0000
-		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_OFFSET \
-			16
-		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
-		#define \
-		    NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
-	u32 mnm_25g_ctrl; /* 0x5C */
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_MASK 0x0000000F
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_MASK 0x000000F0
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_OFFSET 4
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_BB_100G 0x7
-	/*  This field defines the board technology
+	 */
+		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MASK                    0x0000FF00
+		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_OFFSET                  8
+		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_UNDEFINED               0x0
+		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE                  0x1
+		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_BACKPLANE               0x2
+		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_EXT_PHY                 0x3
+		#define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE_SLAVE            0x4
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_MASK         0x00FF0000
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_OFFSET       16
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_BYPASS       0x0
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR           0x2
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR2          0x3
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR4          0x4
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XFI          0x8
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SFI          0x9
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_1000X        0xB
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SGMII        0xC
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLAUI        0x11
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLPPI        0x12
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CAUI         0x21
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CPPI         0x22
+		#define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_25GAUI       0x31
+		#define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_MASK               0xFF000000
+		#define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_OFFSET             24
+	u32 mnm_10g_misc;                                                  /* 0x54 */
+		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_MASK               0x00000007
+		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_OFFSET             0
+		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_NONE               0x0
+		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_FIRECODE           0x1
+		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_RS                 0x2
+		#define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_AUTO               0x7
+	u32 mnm_25g_cap;                                                   /* 0x58 */
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_MASK    0x0000FFFF
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_OFFSET  0
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_1G      0x1
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_10G     0x2
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_20G     0x4
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_25G     0x8
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_40G     0x10
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_50G     0x20
+		#define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_MASK    0xFFFF0000
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_OFFSET  16
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_1G      0x1
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_10G     0x2
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_20G     0x4
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_25G     0x8
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_40G     0x10
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_50G     0x20
+		#define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
+	u32 mnm_25g_ctrl;                                                  /* 0x5C */
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_MASK               0x0000000F
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_OFFSET             0
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_AUTONEG            0x0
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_1G                 0x1
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_10G                0x2
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_20G                0x3
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_25G                0x4
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_40G                0x5
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_50G                0x6
+		#define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_BB_100G            0x7
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_MASK               0x000000F0
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_OFFSET             4
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_AUTONEG            0x0
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_1G                 0x1
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_10G                0x2
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_20G                0x3
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_25G                0x4
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_40G                0x5
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_50G                0x6
+		#define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_BB_100G            0x7
+	/* This field defines the board technology
 	 * (backplane, transceiver, external PHY)
-	*/
-		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_OFFSET 8
-		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_UNDEFINED 0x0
-		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE 0x1
-		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_BACKPLANE 0x2
-		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_EXT_PHY 0x3
-		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE_SLAVE 0x4
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_MASK \
-			0x00FF0000
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_OFFSET 16
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_BYPASS 0x0
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR 0x2
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR2 0x3
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR4 0x4
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XFI 0x8
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SFI 0x9
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_1000X 0xB
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SGMII 0xC
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLAUI 0x11
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLPPI 0x12
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CAUI 0x21
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CPPI 0x22
-		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_25GAUI 0x31
-		#define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_MASK 0xFF000000
-		#define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_OFFSET 24
-	u32 mnm_25g_misc; /* 0x60 */
-		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_MASK 0x00000007
-		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_NONE 0x0
-		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_FIRECODE 0x1
-		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_RS 0x2
-		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_AUTO 0x7
-	u32 mnm_40g_cap; /* 0x64 */
-		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_MASK \
-			0x0000FFFF
-		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
-		#define \
-		    NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
-		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_MASK \
-			0xFFFF0000
-		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_OFFSET \
-			16
-		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
-		#define \
-		    NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
-	u32 mnm_40g_ctrl; /* 0x68 */
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_MASK 0x0000000F
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_MASK 0x000000F0
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_OFFSET 4
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_BB_100G 0x7
-	/*  This field defines the board technology
+	 */
+		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MASK                    0x0000FF00
+		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_OFFSET                  8
+		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_UNDEFINED               0x0
+		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE                  0x1
+		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_BACKPLANE               0x2
+		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_EXT_PHY                 0x3
+		#define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE_SLAVE            0x4
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_MASK         0x00FF0000
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_OFFSET       16
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_BYPASS       0x0
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR           0x2
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR2          0x3
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR4          0x4
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XFI          0x8
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SFI          0x9
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_1000X        0xB
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SGMII        0xC
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLAUI        0x11
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLPPI        0x12
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CAUI         0x21
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CPPI         0x22
+		#define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_25GAUI       0x31
+		#define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_MASK               0xFF000000
+		#define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_OFFSET             24
+	u32 mnm_25g_misc;                                                  /* 0x60 */
+		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_MASK               0x00000007
+		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_OFFSET             0
+		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_NONE               0x0
+		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_FIRECODE           0x1
+		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_RS                 0x2
+		#define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_AUTO               0x7
+	u32 mnm_40g_cap;                                                   /* 0x64 */
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_MASK    0x0000FFFF
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_OFFSET  0
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_1G      0x1
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_10G     0x2
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_20G     0x4
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_25G     0x8
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_40G     0x10
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_50G     0x20
+		#define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_MASK    0xFFFF0000
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_OFFSET  16
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_1G      0x1
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_10G     0x2
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_20G     0x4
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_25G     0x8
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_40G     0x10
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_50G     0x20
+		#define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
+	u32 mnm_40g_ctrl;                                                  /* 0x68 */
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_MASK               0x0000000F
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_OFFSET             0
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_AUTONEG            0x0
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_1G                 0x1
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_10G                0x2
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_20G                0x3
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_25G                0x4
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_40G                0x5
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_50G                0x6
+		#define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_BB_100G            0x7
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_MASK               0x000000F0
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_OFFSET             4
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_AUTONEG            0x0
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_1G                 0x1
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_10G                0x2
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_20G                0x3
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_25G                0x4
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_40G                0x5
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_50G                0x6
+		#define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_BB_100G            0x7
+	/* This field defines the board technology
 	 * (backplane, transceiver, external PHY)
-	*/
-		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_OFFSET 8
-		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_UNDEFINED 0x0
-		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE 0x1
-		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_BACKPLANE 0x2
-		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_EXT_PHY 0x3
-		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE_SLAVE 0x4
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_MASK \
-			0x00FF0000
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_OFFSET 16
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_BYPASS 0x0
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR 0x2
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR2 0x3
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR4 0x4
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XFI 0x8
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SFI 0x9
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_1000X 0xB
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SGMII 0xC
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLAUI 0x11
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLPPI 0x12
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CAUI 0x21
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CPPI 0x22
-		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_25GAUI 0x31
-		#define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_MASK 0xFF000000
-		#define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_OFFSET 24
-	u32 mnm_40g_misc; /* 0x6C */
-		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_MASK 0x00000007
-		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_NONE 0x0
-		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_FIRECODE 0x1
-		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_RS 0x2
-		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_AUTO 0x7
-	u32 mnm_50g_cap; /* 0x70 */
-		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_MASK \
-			0x0000FFFF
-		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
-		#define \
-		    NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_BB_100G \
-			0x40
-		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_MASK \
-			0xFFFF0000
-		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_OFFSET \
-			16
-		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
-		#define \
-		    NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_BB_100G \
-			0x40
-	u32 mnm_50g_ctrl; /* 0x74 */
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_MASK 0x0000000F
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_MASK 0x000000F0
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_OFFSET 4
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_BB_100G 0x7
-	/*  This field defines the board technology
+	 */
+		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MASK                    0x0000FF00
+		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_OFFSET                  8
+		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_UNDEFINED               0x0
+		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE                  0x1
+		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_BACKPLANE               0x2
+		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_EXT_PHY                 0x3
+		#define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE_SLAVE            0x4
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_MASK         0x00FF0000
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_OFFSET       16
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_BYPASS       0x0
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR           0x2
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR2          0x3
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR4          0x4
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XFI          0x8
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SFI          0x9
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_1000X        0xB
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SGMII        0xC
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLAUI        0x11
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLPPI        0x12
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CAUI         0x21
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CPPI         0x22
+		#define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_25GAUI       0x31
+		#define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_MASK               0xFF000000
+		#define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_OFFSET             24
+	u32 mnm_40g_misc;                                                  /* 0x6C */
+		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_MASK               0x00000007
+		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_OFFSET             0
+		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_NONE               0x0
+		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_FIRECODE           0x1
+		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_RS                 0x2
+		#define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_AUTO               0x7
+	u32 mnm_50g_cap;                                                   /* 0x70 */
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_MASK    0x0000FFFF
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_OFFSET  0
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_1G      0x1
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_10G     0x2
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_20G     0x4
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_25G     0x8
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_40G     0x10
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_50G     0x20
+		#define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_MASK    0xFFFF0000
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_OFFSET  16
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_1G      0x1
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_10G     0x2
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_20G     0x4
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_25G     0x8
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_40G     0x10
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_50G     0x20
+		#define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
+	u32 mnm_50g_ctrl;                                                  /* 0x74 */
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_MASK               0x0000000F
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_OFFSET             0
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_AUTONEG            0x0
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_1G                 0x1
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_10G                0x2
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_20G                0x3
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_25G                0x4
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_40G                0x5
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_50G                0x6
+		#define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_BB_100G            0x7
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_MASK               0x000000F0
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_OFFSET             4
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_AUTONEG            0x0
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_1G                 0x1
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_10G                0x2
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_20G                0x3
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_25G                0x4
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_40G                0x5
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_50G                0x6
+		#define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_BB_100G            0x7
+	/* This field defines the board technology
 	 * (backplane, transceiver, external PHY)
-	*/
-		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_OFFSET 8
-		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_UNDEFINED 0x0
-		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE 0x1
-		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_BACKPLANE 0x2
-		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_EXT_PHY 0x3
-		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE_SLAVE 0x4
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_MASK \
-			0x00FF0000
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_OFFSET 16
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_BYPASS 0x0
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR 0x2
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR2 0x3
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR4 0x4
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XFI 0x8
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SFI 0x9
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_1000X 0xB
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SGMII 0xC
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLAUI 0x11
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLPPI 0x12
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CAUI 0x21
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CPPI 0x22
-		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_25GAUI 0x31
-		#define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_MASK 0xFF000000
-		#define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_OFFSET 24
-	u32 mnm_50g_misc; /* 0x78 */
-		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_MASK 0x00000007
-		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_NONE 0x0
-		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_FIRECODE 0x1
-		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_RS 0x2
-		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_AUTO 0x7
-	u32 mnm_100g_cap; /* 0x7C */
-		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_MASK \
-			0x0000FFFF
-		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_50G 0x20
-		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_BB_100G 0x40
-		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_MASK \
-			0xFFFF0000
-		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_OFFSET 16
-		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_1G 0x1
-		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_10G 0x2
-		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_20G 0x4
-		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_25G 0x8
-		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_40G 0x10
-		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_50G 0x20
-		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_BB_100G 0x40
-	u32 mnm_100g_ctrl; /* 0x80 */
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_MASK 0x0000000F
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_BB_100G 0x7
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_MASK 0x000000F0
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_OFFSET 4
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_AUTONEG 0x0
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_1G 0x1
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_10G 0x2
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_20G 0x3
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_25G 0x4
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_40G 0x5
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_50G 0x6
-		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_BB_100G 0x7
-	/*  This field defines the board technology
+	 */
+		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MASK                    0x0000FF00
+		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_OFFSET                  8
+		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_UNDEFINED               0x0
+		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE                  0x1
+		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_BACKPLANE               0x2
+		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_EXT_PHY                 0x3
+		#define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE_SLAVE            0x4
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_MASK         0x00FF0000
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_OFFSET       16
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_BYPASS       0x0
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR           0x2
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR2          0x3
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR4          0x4
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XFI          0x8
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SFI          0x9
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_1000X        0xB
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SGMII        0xC
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLAUI        0x11
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLPPI        0x12
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CAUI         0x21
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CPPI         0x22
+		#define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_25GAUI       0x31
+		#define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_MASK               0xFF000000
+		#define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_OFFSET             24
+	u32 mnm_50g_misc;                                                  /* 0x78 */
+		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_MASK               0x00000007
+		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_OFFSET             0
+		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_NONE               0x0
+		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_FIRECODE           0x1
+		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_RS                 0x2
+		#define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_AUTO               0x7
+	u32 mnm_100g_cap;                                                  /* 0x7C */
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_MASK          0x0000FFFF
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_OFFSET        0
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_1G            0x1
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_10G           0x2
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_20G           0x4
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_25G           0x8
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_40G           0x10
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_50G           0x20
+		#define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_BB_100G       0x40
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_MASK          0xFFFF0000
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_OFFSET        16
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_1G            0x1
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_10G           0x2
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_20G           0x4
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_25G           0x8
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_40G           0x10
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_50G           0x20
+		#define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_BB_100G       0x40
+	u32 mnm_100g_ctrl;                                                 /* 0x80 */
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_MASK              0x0000000F
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_OFFSET            0
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_AUTONEG           0x0
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_1G                0x1
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_10G               0x2
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_20G               0x3
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_25G               0x4
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_40G               0x5
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_50G               0x6
+		#define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_BB_100G           0x7
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_MASK              0x000000F0
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_OFFSET            4
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_AUTONEG           0x0
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_1G                0x1
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_10G               0x2
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_20G               0x3
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_25G               0x4
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_40G               0x5
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_50G               0x6
+		#define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_BB_100G           0x7
+	/* This field defines the board technology
 	 * (backplane, transceiver, external PHY)
-	*/
-		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_OFFSET 8
-		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_UNDEFINED 0x0
-		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE 0x1
-		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_BACKPLANE 0x2
-		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_EXT_PHY 0x3
-		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE_SLAVE 0x4
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_MASK \
-			0x00FF0000
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_OFFSET 16
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_BYPASS 0x0
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR 0x2
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR2 0x3
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR4 0x4
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XFI 0x8
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SFI 0x9
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_1000X 0xB
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SGMII 0xC
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLAUI 0x11
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLPPI 0x12
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CAUI 0x21
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CPPI 0x22
-		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_25GAUI 0x31
-		#define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_MASK 0xFF000000
-		#define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_OFFSET 24
-	u32 mnm_100g_misc; /* 0x84 */
-		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_MASK 0x00000007
-		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_OFFSET 0
-		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_NONE 0x0
-		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_FIRECODE 0x1
-		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_RS 0x2
-		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_AUTO 0x7
-	u32 temperature; /* 0x88 */
-		#define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_MASK 0x000000FF
-		#define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_OFFSET 0
-		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_MASK \
-			0x0000FF00
-		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_OFFSET 8
+	 */
+		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MASK                   0x0000FF00
+		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_OFFSET                 8
+		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_UNDEFINED              0x0
+		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE                 0x1
+		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_BACKPLANE              0x2
+		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_EXT_PHY                0x3
+		#define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE_SLAVE           0x4
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_MASK        0x00FF0000
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_OFFSET      16
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_BYPASS      0x0
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR          0x2
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR2         0x3
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR4         0x4
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XFI         0x8
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SFI         0x9
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_1000X       0xB
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SGMII       0xC
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLAUI       0x11
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLPPI       0x12
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CAUI        0x21
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CPPI        0x22
+		#define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_25GAUI      0x31
+		#define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_MASK              0xFF000000
+		#define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_OFFSET            24
+	u32 mnm_100g_misc;                                                 /* 0x84 */
+		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_MASK              0x00000007
+		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_OFFSET            0
+		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_NONE              0x0
+		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_FIRECODE          0x1
+		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_RS                0x2
+		#define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_AUTO              0x7
+	u32 temperature;                                                   /* 0x88 */
+		#define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_MASK              0x000000FF
+		#define NVM_CFG1_PORT_PHY_MODULE_DEAD_TEMP_TH_OFFSET            0
+		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_MASK       0x0000FF00
+		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_OFFSET     8
 	/*  Warning temperature threshold used with nvm option 235 */
-		#define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_MASK 0x00FF0000
-		#define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_OFFSET 16
-	u32 ext_phy_cfg1; /* 0x8C */
+		#define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_MASK           0x00FF0000
+		#define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_OFFSET         16
+	u32 ext_phy_cfg1;                                                  /* 0x8C */
 	/*  Ext PHY MDI pair swap value */
-		#define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_MASK 0x0000FFFF
-		#define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_OFFSET 0
-	u32 extended_speed; /* 0x90 */
+		#define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_MASK                0x0000FFFF
+		#define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_OFFSET              0
+	u32 extended_speed;                                                /* 0x90 */
 	/*  Sets speed in conjunction with legacy speed field */
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_MASK 0x0000FFFF
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_OFFSET 0
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_NONE 0x1
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_1G 0x2
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_10G 0x4
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_25G 0x8
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_40G 0x10
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R 0x20
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R2 0x40
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R2 0x80
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R4 0x100
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_P4 0x200
-	/*  Sets speed capabilities in conjunction with legacy capabilities
-	 *  field
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_MASK                       0x0000FFFF
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_OFFSET                     0
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_AN               0x1
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_1G               0x2
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_10G              0x4
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_20G              0x8
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_25G              0x10
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_40G              0x20
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R            0x40
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R2           0x80
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R2          0x100
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R4          0x200
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_P4          0x400
+	/* Sets speed capabilities in conjunction with legacy capabilities
+	 * field
 	 */
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_MASK 0xFFFF0000
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_OFFSET 16
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_NONE 0x1
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_1G 0x2
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_10G 0x4
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_25G 0x8
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_40G 0x10
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R 0x20
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R2 0x40
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R2 0x80
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R4 0x100
-		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_P4 0x200
-	/*  Set speed specific FEC setting in conjunction with legacy FEC
-	 *  mode
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_MASK                   0xFFFF0000
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_OFFSET                 16
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_RESERVED     0x1
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_1G           0x2
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_10G          0x4
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_20G          0x8
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_25G          0x10
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_40G          0x20
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R        0x40
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R2       0x80
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R2      0x100
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R4      0x200
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_P4      0x400
+	/* Set speed specific FEC setting in conjunction with legacy FEC
+	 * mode
 	 */
-	u32 extended_fec_mode; /* 0x94 */
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_NONE 0x1
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_NONE 0x2
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_BASE_R 0x4
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_NONE 0x8
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_BASE_R 0x10
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_RS528 0x20
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_NONE 0x40
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_BASE_R 0x80
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_NONE 0x100
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_BASE_R 0x200
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS528 0x400
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS544 0x800
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_NONE 0x1000
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_BASE_R 0x2000
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS528 0x4000
-		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS544 0x8000
-	u32 port_generic_cont_01; /* 0x98 */
+	u32 extended_fec_mode;                                             /* 0x94 */
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_NONE          0x1
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_NONE      0x2
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_BASE_R    0x4
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_20G_NONE      0x8
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_20G_BASE_R    0x10
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_NONE      0x20
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_BASE_R    0x40
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_RS528     0x80
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_NONE      0x100
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_BASE_R    0x200
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_NONE      0x400
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_BASE_R    0x800
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS528     0x1000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS544     0x2000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_NONE     0x4000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_BASE_R   0x8000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS528    0x10000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS544    0x20000
+	u32 port_generic_cont_01;                                          /* 0x98 */
 	/*  Define for GPIO mapping of SFP Rate Select 0 */
-		#define NVM_CFG1_PORT_MODULE_RS0_MASK 0x000000FF
-		#define NVM_CFG1_PORT_MODULE_RS0_OFFSET 0
-		#define NVM_CFG1_PORT_MODULE_RS0_NA 0x0
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO0 0x1
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO1 0x2
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO2 0x3
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO3 0x4
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO4 0x5
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO5 0x6
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO6 0x7
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO7 0x8
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO8 0x9
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO9 0xA
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO10 0xB
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO11 0xC
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO12 0xD
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO13 0xE
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO14 0xF
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO15 0x10
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO16 0x11
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO17 0x12
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO18 0x13
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO19 0x14
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO20 0x15
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO21 0x16
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO22 0x17
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO23 0x18
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO24 0x19
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO25 0x1A
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO26 0x1B
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO27 0x1C
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO28 0x1D
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO29 0x1E
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO30 0x1F
-		#define NVM_CFG1_PORT_MODULE_RS0_GPIO31 0x20
+		#define NVM_CFG1_PORT_MODULE_RS0_MASK                           0x000000FF
+		#define NVM_CFG1_PORT_MODULE_RS0_OFFSET                         0
+		#define NVM_CFG1_PORT_MODULE_RS0_NA                             0x0
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO0                          0x1
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO1                          0x2
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO2                          0x3
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO3                          0x4
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO4                          0x5
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO5                          0x6
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO6                          0x7
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO7                          0x8
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO8                          0x9
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO9                          0xA
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO10                         0xB
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO11                         0xC
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO12                         0xD
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO13                         0xE
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO14                         0xF
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO15                         0x10
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO16                         0x11
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO17                         0x12
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO18                         0x13
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO19                         0x14
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO20                         0x15
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO21                         0x16
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO22                         0x17
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO23                         0x18
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO24                         0x19
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO25                         0x1A
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO26                         0x1B
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO27                         0x1C
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO28                         0x1D
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO29                         0x1E
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO30                         0x1F
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO31                         0x20
 	/*  Define for GPIO mapping of SFP Rate Select 1 */
-		#define NVM_CFG1_PORT_MODULE_RS1_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_MODULE_RS1_OFFSET 8
-		#define NVM_CFG1_PORT_MODULE_RS1_NA 0x0
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO0 0x1
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO1 0x2
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO2 0x3
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO3 0x4
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO4 0x5
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO5 0x6
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO6 0x7
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO7 0x8
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO8 0x9
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO9 0xA
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO10 0xB
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO11 0xC
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO12 0xD
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO13 0xE
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO14 0xF
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO15 0x10
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO16 0x11
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO17 0x12
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO18 0x13
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO19 0x14
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO20 0x15
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO21 0x16
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO22 0x17
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO23 0x18
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO24 0x19
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO25 0x1A
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO26 0x1B
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO27 0x1C
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO28 0x1D
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO29 0x1E
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO30 0x1F
-		#define NVM_CFG1_PORT_MODULE_RS1_GPIO31 0x20
+		#define NVM_CFG1_PORT_MODULE_RS1_MASK                           0x0000FF00
+		#define NVM_CFG1_PORT_MODULE_RS1_OFFSET                         8
+		#define NVM_CFG1_PORT_MODULE_RS1_NA                             0x0
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO0                          0x1
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO1                          0x2
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO2                          0x3
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO3                          0x4
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO4                          0x5
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO5                          0x6
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO6                          0x7
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO7                          0x8
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO8                          0x9
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO9                          0xA
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO10                         0xB
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO11                         0xC
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO12                         0xD
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO13                         0xE
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO14                         0xF
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO15                         0x10
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO16                         0x11
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO17                         0x12
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO18                         0x13
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO19                         0x14
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO20                         0x15
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO21                         0x16
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO22                         0x17
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO23                         0x18
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO24                         0x19
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO25                         0x1A
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO26                         0x1B
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO27                         0x1C
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO28                         0x1D
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO29                         0x1E
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO30                         0x1F
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO31                         0x20
 	/*  Define for GPIO mapping of SFP Module TX Fault */
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_MASK 0x00FF0000
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_OFFSET 16
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_NA 0x0
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO0 0x1
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO1 0x2
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO2 0x3
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO3 0x4
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO4 0x5
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO5 0x6
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO6 0x7
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO7 0x8
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO8 0x9
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO9 0xA
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO10 0xB
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO11 0xC
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO12 0xD
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO13 0xE
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO14 0xF
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO15 0x10
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO16 0x11
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO17 0x12
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO18 0x13
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO19 0x14
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO20 0x15
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO21 0x16
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO22 0x17
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO23 0x18
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO24 0x19
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO25 0x1A
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO26 0x1B
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO27 0x1C
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO28 0x1D
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO29 0x1E
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO30 0x1F
-		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO31 0x20
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_MASK                      0x00FF0000
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_OFFSET                    16
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_NA                        0x0
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO0                     0x1
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO1                     0x2
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO2                     0x3
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO3                     0x4
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO4                     0x5
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO5                     0x6
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO6                     0x7
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO7                     0x8
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO8                     0x9
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO9                     0xA
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO10                    0xB
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO11                    0xC
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO12                    0xD
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO13                    0xE
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO14                    0xF
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO15                    0x10
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO16                    0x11
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO17                    0x12
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO18                    0x13
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO19                    0x14
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO20                    0x15
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO21                    0x16
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO22                    0x17
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO23                    0x18
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO24                    0x19
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO25                    0x1A
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO26                    0x1B
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO27                    0x1C
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO28                    0x1D
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO29                    0x1E
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO30                    0x1F
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO31                    0x20
 	/*  Define for GPIO mapping of QSFP Reset signal */
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_MASK 0xFF000000
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_OFFSET 24
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_NA 0x0
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO0 0x1
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO1 0x2
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO2 0x3
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO3 0x4
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO4 0x5
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO5 0x6
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO6 0x7
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO7 0x8
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO8 0x9
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO9 0xA
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO10 0xB
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO11 0xC
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO12 0xD
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO13 0xE
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO14 0xF
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO15 0x10
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO16 0x11
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO17 0x12
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO18 0x13
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO19 0x14
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO20 0x15
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO21 0x16
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO22 0x17
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO23 0x18
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO24 0x19
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO25 0x1A
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO26 0x1B
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO27 0x1C
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO28 0x1D
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO29 0x1E
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO30 0x1F
-		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO31 0x20
-	u32 port_generic_cont_02; /* 0x9C */
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_MASK                    0xFF000000
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_OFFSET                  24
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_NA                      0x0
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO0                   0x1
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO1                   0x2
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO2                   0x3
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO3                   0x4
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO4                   0x5
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO5                   0x6
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO6                   0x7
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO7                   0x8
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO8                   0x9
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO9                   0xA
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO10                  0xB
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO11                  0xC
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO12                  0xD
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO13                  0xE
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO14                  0xF
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO15                  0x10
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO16                  0x11
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO17                  0x12
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO18                  0x13
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO19                  0x14
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO20                  0x15
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO21                  0x16
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO22                  0x17
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO23                  0x18
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO24                  0x19
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO25                  0x1A
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO26                  0x1B
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO27                  0x1C
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO28                  0x1D
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO29                  0x1E
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO30                  0x1F
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO31                  0x20
+	u32 port_generic_cont_02;                                          /* 0x9C */
 	/*  Define for GPIO mapping of QSFP Transceiver LP mode */
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_MASK 0x000000FF
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_OFFSET 0
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_NA 0x0
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO0 0x1
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO1 0x2
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO2 0x3
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO3 0x4
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO4 0x5
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO5 0x6
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO6 0x7
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO7 0x8
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO8 0x9
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO9 0xA
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO10 0xB
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO11 0xC
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO12 0xD
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO13 0xE
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO14 0xF
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO15 0x10
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO16 0x11
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO17 0x12
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO18 0x13
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO19 0x14
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO20 0x15
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO21 0x16
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO22 0x17
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO23 0x18
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO24 0x19
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO25 0x1A
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO26 0x1B
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO27 0x1C
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO28 0x1D
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO29 0x1E
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO30 0x1F
-		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO31 0x20
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_MASK                  0x000000FF
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_OFFSET                0
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_NA                    0x0
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO0                 0x1
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO1                 0x2
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO2                 0x3
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO3                 0x4
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO4                 0x5
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO5                 0x6
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO6                 0x7
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO7                 0x8
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO8                 0x9
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO9                 0xA
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO10                0xB
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO11                0xC
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO12                0xD
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO13                0xE
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO14                0xF
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO15                0x10
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO16                0x11
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO17                0x12
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO18                0x13
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO19                0x14
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO20                0x15
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO21                0x16
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO22                0x17
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO23                0x18
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO24                0x19
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO25                0x1A
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO26                0x1B
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO27                0x1C
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO28                0x1D
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO29                0x1E
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO30                0x1F
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO31                0x20
 	/*  Define for GPIO mapping of Transceiver Power Enable */
-		#define NVM_CFG1_PORT_MODULE_POWER_MASK 0x0000FF00
-		#define NVM_CFG1_PORT_MODULE_POWER_OFFSET 8
-		#define NVM_CFG1_PORT_MODULE_POWER_NA 0x0
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO0 0x1
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO1 0x2
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO2 0x3
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO3 0x4
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO4 0x5
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO5 0x6
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO6 0x7
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO7 0x8
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO8 0x9
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO9 0xA
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO10 0xB
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO11 0xC
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO12 0xD
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO13 0xE
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO14 0xF
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO15 0x10
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO16 0x11
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO17 0x12
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO18 0x13
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO19 0x14
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO20 0x15
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO21 0x16
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO22 0x17
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO23 0x18
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO24 0x19
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO25 0x1A
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO26 0x1B
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO27 0x1C
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO28 0x1D
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO29 0x1E
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO30 0x1F
-		#define NVM_CFG1_PORT_MODULE_POWER_GPIO31 0x20
+		#define NVM_CFG1_PORT_MODULE_POWER_MASK                         0x0000FF00
+		#define NVM_CFG1_PORT_MODULE_POWER_OFFSET                       8
+		#define NVM_CFG1_PORT_MODULE_POWER_NA                           0x0
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO0                        0x1
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO1                        0x2
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO2                        0x3
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO3                        0x4
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO4                        0x5
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO5                        0x6
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO6                        0x7
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO7                        0x8
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO8                        0x9
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO9                        0xA
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO10                       0xB
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO11                       0xC
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO12                       0xD
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO13                       0xE
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO14                       0xF
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO15                       0x10
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO16                       0x11
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO17                       0x12
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO18                       0x13
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO19                       0x14
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO20                       0x15
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO21                       0x16
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO22                       0x17
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO23                       0x18
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO24                       0x19
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO25                       0x1A
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO26                       0x1B
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO27                       0x1C
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO28                       0x1D
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO29                       0x1E
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO30                       0x1F
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO31                       0x20
 	/*  Define for LASI Mapping of Interrupt from module or PHY */
-		#define NVM_CFG1_PORT_LASI_INTR_IN_MASK 0x000F0000
-		#define NVM_CFG1_PORT_LASI_INTR_IN_OFFSET 16
-		#define NVM_CFG1_PORT_LASI_INTR_IN_NA 0x0
-		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI0 0x1
-		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI1 0x2
-		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI2 0x3
-		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI3 0x4
-	u32 reserved[110]; /* 0xA0 */
+		#define NVM_CFG1_PORT_LASI_INTR_IN_MASK                         0x000F0000
+		#define NVM_CFG1_PORT_LASI_INTR_IN_OFFSET                       16
+		#define NVM_CFG1_PORT_LASI_INTR_IN_NA                           0x0
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI0                        0x1
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI1                        0x2
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI2                        0x3
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI3                        0x4
+	u32 phy_temp_monitor;                                              /* 0xA0 */
+	/* Enables PHY temperature monitoring. In case the PHY temperature
+	 * rises above option 308 (Temperature monitoring PHY temp) for
+	 * option 309 (#samples) times in a row, MFW will shut down the chip.
+	 */
+		#define NVM_CFG1_PORT_SHUTDOWN__M_PHY_MONITOR_ENABLED_MASK      0x00000001
+		#define NVM_CFG1_PORT_SHUTDOWN__M_PHY_MONITOR_ENABLED_OFFSET    0
+		#define NVM_CFG1_PORT_SHUTDOWN__M_PHY_MONITOR_ENABLED_DISABLED  0x0
+		#define NVM_CFG1_PORT_SHUTDOWN__M_PHY_MONITOR_ENABLED_ENABLED   0x1
+	/* In case option 307 is enabled, this option determines the
+	 * temperature threshold above which the chip is shut down if
+	 * sampled more than [option 309 (#samples)] times in a row.
+	 */
+		#define NVM_CFG1_PORT_SHUTDOWN__M_PHY_THRESHOLD_TEMP_MASK       0x000001FE
+		#define NVM_CFG1_PORT_SHUTDOWN__M_PHY_THRESHOLD_TEMP_OFFSET     1
+	/* Number of PHY temperature samples above option 308 required in a
+	 * row for MFW to shut down the chip.
+	 */
+		#define NVM_CFG1_PORT_SHUTDOWN__M_PHY_SAMPLES_MASK              0x0001FE00
+		#define NVM_CFG1_PORT_SHUTDOWN__M_PHY_SAMPLES_OFFSET            9
+	u32 reserved[109];                                                 /* 0xA4 */
 };
 
 struct nvm_cfg1_func {
-	struct nvm_cfg_mac_address mac_address; /* 0x0 */
-	u32 rsrv1; /* 0x8 */
-		#define NVM_CFG1_FUNC_RESERVED1_MASK 0x0000FFFF
-		#define NVM_CFG1_FUNC_RESERVED1_OFFSET 0
-		#define NVM_CFG1_FUNC_RESERVED2_MASK 0xFFFF0000
-		#define NVM_CFG1_FUNC_RESERVED2_OFFSET 16
-	u32 rsrv2; /* 0xC */
-		#define NVM_CFG1_FUNC_RESERVED3_MASK 0x0000FFFF
-		#define NVM_CFG1_FUNC_RESERVED3_OFFSET 0
-		#define NVM_CFG1_FUNC_RESERVED4_MASK 0xFFFF0000
-		#define NVM_CFG1_FUNC_RESERVED4_OFFSET 16
-	u32 device_id; /* 0x10 */
-		#define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_MASK 0x0000FFFF
-		#define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_OFFSET 0
-		#define NVM_CFG1_FUNC_RESERVED77_MASK 0xFFFF0000
-		#define NVM_CFG1_FUNC_RESERVED77_OFFSET 16
-	u32 cmn_cfg; /* 0x14 */
-		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_MASK 0x00000007
-		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_OFFSET 0
-		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_PXE 0x0
-		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_ISCSI_BOOT 0x3
-		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_FCOE_BOOT 0x4
-		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_NONE 0x7
-		#define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_MASK 0x0007FFF8
-		#define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_OFFSET 3
-		#define NVM_CFG1_FUNC_PERSONALITY_MASK 0x00780000
-		#define NVM_CFG1_FUNC_PERSONALITY_OFFSET 19
-		#define NVM_CFG1_FUNC_PERSONALITY_ETHERNET 0x0
-		#define NVM_CFG1_FUNC_PERSONALITY_ISCSI 0x1
-		#define NVM_CFG1_FUNC_PERSONALITY_FCOE 0x2
-		#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK 0x7F800000
-		#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET 23
-		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_MASK 0x80000000
-		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_OFFSET 31
-		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_DISABLED 0x0
-		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_ENABLED 0x1
-	u32 pci_cfg; /* 0x18 */
-		#define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_MASK 0x0000007F
-		#define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_OFFSET 0
+	struct nvm_cfg_mac_address mac_address;                             /* 0x0 */
+	u32 rsrv1;                                                          /* 0x8 */
+		#define NVM_CFG1_FUNC_RESERVED1_MASK                            0x0000FFFF
+		#define NVM_CFG1_FUNC_RESERVED1_OFFSET                          0
+		#define NVM_CFG1_FUNC_RESERVED2_MASK                            0xFFFF0000
+		#define NVM_CFG1_FUNC_RESERVED2_OFFSET                          16
+	u32 rsrv2;                                                          /* 0xC */
+		#define NVM_CFG1_FUNC_RESERVED3_MASK                            0x0000FFFF
+		#define NVM_CFG1_FUNC_RESERVED3_OFFSET                          0
+		#define NVM_CFG1_FUNC_RESERVED4_MASK                            0xFFFF0000
+		#define NVM_CFG1_FUNC_RESERVED4_OFFSET                          16
+	u32 device_id;                                                     /* 0x10 */
+		#define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_MASK                  0x0000FFFF
+		#define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_OFFSET                0
+		#define NVM_CFG1_FUNC_RESERVED77_MASK                           0xFFFF0000
+		#define NVM_CFG1_FUNC_RESERVED77_OFFSET                         16
+	u32 cmn_cfg;                                                       /* 0x14 */
+		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_MASK                0x00000007
+		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_OFFSET              0
+		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_PXE                 0x0
+		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_ISCSI_BOOT          0x3
+		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_FCOE_BOOT           0x4
+		#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_NONE                0x7
+		#define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_MASK                     0x0007FFF8
+		#define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_OFFSET                   3
+		#define NVM_CFG1_FUNC_PERSONALITY_MASK                          0x00780000
+		#define NVM_CFG1_FUNC_PERSONALITY_OFFSET                        19
+		#define NVM_CFG1_FUNC_PERSONALITY_ETHERNET                      0x0
+		#define NVM_CFG1_FUNC_PERSONALITY_ISCSI                         0x1
+		#define NVM_CFG1_FUNC_PERSONALITY_FCOE                          0x2
+		#define NVM_CFG1_FUNC_PERSONALITY_NVMETCP                       0x4
+		#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK                     0x7F800000
+		#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET                   23
+		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_MASK                   0x80000000
+		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_OFFSET                 31
+		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_DISABLED               0x0
+		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_ENABLED                0x1
+	u32 pci_cfg;                                                       /* 0x18 */
+		#define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_MASK                 0x0000007F
+		#define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_OFFSET               0
 	/*  AH VF BAR2 size */
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_MASK 0x00003F80
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_OFFSET 7
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_DISABLED 0x0
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4K 0x1
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8K 0x2
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16K 0x3
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32K 0x4
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64K 0x5
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_128K 0x6
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_256K 0x7
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_512K 0x8
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_1M 0x9
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_2M 0xA
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4M 0xB
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8M 0xC
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16M 0xD
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32M 0xE
-		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64M 0xF
-		#define NVM_CFG1_FUNC_BAR1_SIZE_MASK 0x0003C000
-		#define NVM_CFG1_FUNC_BAR1_SIZE_OFFSET 14
-		#define NVM_CFG1_FUNC_BAR1_SIZE_DISABLED 0x0
-		#define NVM_CFG1_FUNC_BAR1_SIZE_64K 0x1
-		#define NVM_CFG1_FUNC_BAR1_SIZE_128K 0x2
-		#define NVM_CFG1_FUNC_BAR1_SIZE_256K 0x3
-		#define NVM_CFG1_FUNC_BAR1_SIZE_512K 0x4
-		#define NVM_CFG1_FUNC_BAR1_SIZE_1M 0x5
-		#define NVM_CFG1_FUNC_BAR1_SIZE_2M 0x6
-		#define NVM_CFG1_FUNC_BAR1_SIZE_4M 0x7
-		#define NVM_CFG1_FUNC_BAR1_SIZE_8M 0x8
-		#define NVM_CFG1_FUNC_BAR1_SIZE_16M 0x9
-		#define NVM_CFG1_FUNC_BAR1_SIZE_32M 0xA
-		#define NVM_CFG1_FUNC_BAR1_SIZE_64M 0xB
-		#define NVM_CFG1_FUNC_BAR1_SIZE_128M 0xC
-		#define NVM_CFG1_FUNC_BAR1_SIZE_256M 0xD
-		#define NVM_CFG1_FUNC_BAR1_SIZE_512M 0xE
-		#define NVM_CFG1_FUNC_BAR1_SIZE_1G 0xF
-		#define NVM_CFG1_FUNC_MAX_BANDWIDTH_MASK 0x03FC0000
-		#define NVM_CFG1_FUNC_MAX_BANDWIDTH_OFFSET 18
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_MASK                     0x00003F80
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_OFFSET                   7
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_DISABLED                 0x0
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4K                       0x1
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8K                       0x2
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16K                      0x3
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32K                      0x4
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64K                      0x5
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_128K                     0x6
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_256K                     0x7
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_512K                     0x8
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_1M                       0x9
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_2M                       0xA
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4M                       0xB
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8M                       0xC
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16M                      0xD
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32M                      0xE
+		#define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64M                      0xF
+		#define NVM_CFG1_FUNC_BAR1_SIZE_MASK                            0x0003C000
+		#define NVM_CFG1_FUNC_BAR1_SIZE_OFFSET                          14
+		#define NVM_CFG1_FUNC_BAR1_SIZE_DISABLED                        0x0
+		#define NVM_CFG1_FUNC_BAR1_SIZE_64K                             0x1
+		#define NVM_CFG1_FUNC_BAR1_SIZE_128K                            0x2
+		#define NVM_CFG1_FUNC_BAR1_SIZE_256K                            0x3
+		#define NVM_CFG1_FUNC_BAR1_SIZE_512K                            0x4
+		#define NVM_CFG1_FUNC_BAR1_SIZE_1M                              0x5
+		#define NVM_CFG1_FUNC_BAR1_SIZE_2M                              0x6
+		#define NVM_CFG1_FUNC_BAR1_SIZE_4M                              0x7
+		#define NVM_CFG1_FUNC_BAR1_SIZE_8M                              0x8
+		#define NVM_CFG1_FUNC_BAR1_SIZE_16M                             0x9
+		#define NVM_CFG1_FUNC_BAR1_SIZE_32M                             0xA
+		#define NVM_CFG1_FUNC_BAR1_SIZE_64M                             0xB
+		#define NVM_CFG1_FUNC_BAR1_SIZE_128M                            0xC
+		#define NVM_CFG1_FUNC_BAR1_SIZE_256M                            0xD
+		#define NVM_CFG1_FUNC_BAR1_SIZE_512M                            0xE
+		#define NVM_CFG1_FUNC_BAR1_SIZE_1G                              0xF
+		#define NVM_CFG1_FUNC_MAX_BANDWIDTH_MASK                        0x03FC0000
+		#define NVM_CFG1_FUNC_MAX_BANDWIDTH_OFFSET                      18
 	/*  Hide function in npar mode */
-		#define NVM_CFG1_FUNC_FUNCTION_HIDE_MASK 0x04000000
-		#define NVM_CFG1_FUNC_FUNCTION_HIDE_OFFSET 26
-		#define NVM_CFG1_FUNC_FUNCTION_HIDE_DISABLED 0x0
-		#define NVM_CFG1_FUNC_FUNCTION_HIDE_ENABLED 0x1
+		#define NVM_CFG1_FUNC_FUNCTION_HIDE_MASK                        0x04000000
+		#define NVM_CFG1_FUNC_FUNCTION_HIDE_OFFSET                      26
+		#define NVM_CFG1_FUNC_FUNCTION_HIDE_DISABLED                    0x0
+		#define NVM_CFG1_FUNC_FUNCTION_HIDE_ENABLED                     0x1
 	/*  AH BAR2 size (per function) */
-		#define NVM_CFG1_FUNC_BAR2_SIZE_MASK 0x78000000
-		#define NVM_CFG1_FUNC_BAR2_SIZE_OFFSET 27
-		#define NVM_CFG1_FUNC_BAR2_SIZE_DISABLED 0x0
-		#define NVM_CFG1_FUNC_BAR2_SIZE_1M 0x5
-		#define NVM_CFG1_FUNC_BAR2_SIZE_2M 0x6
-		#define NVM_CFG1_FUNC_BAR2_SIZE_4M 0x7
-		#define NVM_CFG1_FUNC_BAR2_SIZE_8M 0x8
-		#define NVM_CFG1_FUNC_BAR2_SIZE_16M 0x9
-		#define NVM_CFG1_FUNC_BAR2_SIZE_32M 0xA
-		#define NVM_CFG1_FUNC_BAR2_SIZE_64M 0xB
-		#define NVM_CFG1_FUNC_BAR2_SIZE_128M 0xC
-		#define NVM_CFG1_FUNC_BAR2_SIZE_256M 0xD
-		#define NVM_CFG1_FUNC_BAR2_SIZE_512M 0xE
-		#define NVM_CFG1_FUNC_BAR2_SIZE_1G 0xF
-	struct nvm_cfg_mac_address fcoe_node_wwn_mac_addr; /* 0x1C */
-	struct nvm_cfg_mac_address fcoe_port_wwn_mac_addr; /* 0x24 */
-	u32 preboot_generic_cfg; /* 0x2C */
-		#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_MASK 0x0000FFFF
-		#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0
-		#define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000
-		#define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16
-		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_MASK 0x001E0000
-		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_OFFSET 17
-		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET 0x1
-		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE 0x2
-		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI 0x4
-		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_RDMA 0x8
-	u32 reserved[8]; /* 0x30 */
+		#define NVM_CFG1_FUNC_BAR2_SIZE_MASK                            0x78000000
+		#define NVM_CFG1_FUNC_BAR2_SIZE_OFFSET                          27
+		#define NVM_CFG1_FUNC_BAR2_SIZE_DISABLED                        0x0
+		#define NVM_CFG1_FUNC_BAR2_SIZE_1M                              0x5
+		#define NVM_CFG1_FUNC_BAR2_SIZE_2M                              0x6
+		#define NVM_CFG1_FUNC_BAR2_SIZE_4M                              0x7
+		#define NVM_CFG1_FUNC_BAR2_SIZE_8M                              0x8
+		#define NVM_CFG1_FUNC_BAR2_SIZE_16M                             0x9
+		#define NVM_CFG1_FUNC_BAR2_SIZE_32M                             0xA
+		#define NVM_CFG1_FUNC_BAR2_SIZE_64M                             0xB
+		#define NVM_CFG1_FUNC_BAR2_SIZE_128M                            0xC
+		#define NVM_CFG1_FUNC_BAR2_SIZE_256M                            0xD
+		#define NVM_CFG1_FUNC_BAR2_SIZE_512M                            0xE
+		#define NVM_CFG1_FUNC_BAR2_SIZE_1G                              0xF
+	struct nvm_cfg_mac_address fcoe_node_wwn_mac_addr;                 /* 0x1C */
+	struct nvm_cfg_mac_address fcoe_port_wwn_mac_addr;                 /* 0x24 */
+	u32 preboot_generic_cfg;                                           /* 0x2C */
+		#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_MASK                   0x0000FFFF
+		#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET                 0
+		#define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK                         0x00010000
+		#define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET                       16
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_MASK                0x01FE0000
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_OFFSET              17
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ETHERNET            0x1
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_FCOE                0x2
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_ISCSI               0x4
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_RDMA                0x8
+		#define NVM_CFG1_FUNC_NPAR_ENABLED_PROTOCOL_NVMETCP             0x10
+	u32 features;                                                      /* 0x30 */
+	/*  RDMA protocol enablement  */
+		#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_MASK                      0x00000003
+		#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_OFFSET                    0
+		#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_NONE                      0x0
+		#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_ROCE                      0x1
+		#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_IWARP                     0x2
+		#define NVM_CFG1_FUNC_RDMA_ENABLEMENT_BOTH                      0x3
+	u32 mf_mode_feature;                                               /* 0x34 */
+	/*  QinQ Outer VLAN */
+		#define NVM_CFG1_FUNC_QINQ_OUTER_VLAN_MASK                      0x0000FFFF
+		#define NVM_CFG1_FUNC_QINQ_OUTER_VLAN_OFFSET                    0
+	u32 reserved[6];                                                   /* 0x38 */
 };
 
 struct nvm_cfg1 {
-	struct nvm_cfg1_glob glob; /* 0x0 */
-	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x228 */
-	struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
-	struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
+	struct nvm_cfg1_glob glob;                                          /* 0x0 */
+	struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX];                     /* 0x228 */
+	struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX];                     /* 0x230 */
+	struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX];                     /* 0xB90 */
 };
 
 /******************************************
@@ -2602,6 +2683,11 @@ struct board_info {
 	char *friendly_name;
 };
 
+struct trace_module_info {
+	char *module_name;
+};
+#define NUM_TRACE_MODULES 25
+
 enum nvm_cfg_sections {
 	NVM_CFG_SECTION_NVM_CFG1,
 	NVM_CFG_SECTION_MAX
@@ -2642,7 +2728,6 @@ struct nvm_cfg {
 #define NVM_CFG_ID_MFW_FLOW_CONTROL                                  32
 #define NVM_CFG_ID_OPTIC_MODULE_VENDOR_ENFORCEMENT                   33
 #define NVM_CFG_ID_OPTIONAL_LINK_MODES_BB                            34
-#define NVM_CFG_ID_MF_VENDOR_DEVICE_ID                               37
 #define NVM_CFG_ID_NETWORK_PORT_MODE                                 38
 #define NVM_CFG_ID_MPS10_RX_LANE_SWAP_BB                             39
 #define NVM_CFG_ID_MPS10_TX_LANE_SWAP_BB                             40
@@ -2655,8 +2740,8 @@ struct nvm_cfg {
 #define NVM_CFG_ID_MPS10_PREEMPHASIS_BB                              47
 #define NVM_CFG_ID_MPS10_DRIVER_CURRENT_BB                           48
 #define NVM_CFG_ID_MPS10_ENFORCE_TX_FIR_CFG_BB                       49
-#define NVM_CFG_ID_MPS25_PREEMPHASIS                                 50
-#define NVM_CFG_ID_MPS25_DRIVER_CURRENT                              51
+#define NVM_CFG_ID_MPS25_PREEMPHASIS_BB_K2                           50
+#define NVM_CFG_ID_MPS25_DRIVER_CURRENT_BB_K2                        51
 #define NVM_CFG_ID_MPS25_ENFORCE_TX_FIR_CFG                          52
 #define NVM_CFG_ID_MPS10_CORE_ADDR_BB                                53
 #define NVM_CFG_ID_MPS25_CORE_ADDR_BB                                54
@@ -2668,7 +2753,7 @@ struct nvm_cfg {
 #define NVM_CFG_ID_MBA_DELAY_TIME                                    61
 #define NVM_CFG_ID_MBA_SETUP_HOT_KEY                                 62
 #define NVM_CFG_ID_MBA_HIDE_SETUP_PROMPT                             63
-#define NVM_CFG_ID_PREBOOT_LINK_SPEED                                67
+#define NVM_CFG_ID_PREBOOT_LINK_SPEED_BB_K2                          67
 #define NVM_CFG_ID_PREBOOT_BOOT_PROTOCOL                             69
 #define NVM_CFG_ID_ENABLE_SRIOV                                      70
 #define NVM_CFG_ID_ENABLE_ATC                                        71
@@ -2677,14 +2762,15 @@ struct nvm_cfg {
 #define NVM_CFG_ID_VENDOR_ID                                         76
 #define NVM_CFG_ID_SUBSYSTEM_VENDOR_ID                               78
 #define NVM_CFG_ID_SUBSYSTEM_DEVICE_ID                               79
+#define NVM_CFG_ID_EXPANSION_ROM_SIZE                                80
 #define NVM_CFG_ID_VF_PCI_BAR2_SIZE_BB                               81
 #define NVM_CFG_ID_BAR1_SIZE                                         82
 #define NVM_CFG_ID_BAR2_SIZE_BB                                      83
 #define NVM_CFG_ID_VF_PCI_DEVICE_ID                                  84
 #define NVM_CFG_ID_MPS10_TXFIR_MAIN_BB                               85
 #define NVM_CFG_ID_MPS10_TXFIR_POST_BB                               86
-#define NVM_CFG_ID_MPS25_TXFIR_MAIN                                  87
-#define NVM_CFG_ID_MPS25_TXFIR_POST                                  88
+#define NVM_CFG_ID_MPS25_TXFIR_MAIN_BB_K2                            87
+#define NVM_CFG_ID_MPS25_TXFIR_POST_BB_K2                            88
 #define NVM_CFG_ID_MANUFACTURE_KIT_VERSION                           89
 #define NVM_CFG_ID_MANUFACTURE_TIMESTAMP                             90
 #define NVM_CFG_ID_PERSONALITY                                       92
@@ -2795,20 +2881,19 @@ struct nvm_cfg {
 #define NVM_CFG_ID_NVM_CFG_UPDATED_VALUE_SEQ                         200
 #define NVM_CFG_ID_EXTENDED_SERIAL_NUMBER                            201
 #define NVM_CFG_ID_RDMA_ENABLEMENT                                   202
-#define NVM_CFG_ID_MAX_CONT_OPERATING_TEMP                           203
 #define NVM_CFG_ID_RUNTIME_PORT_SWAP_GPIO                            204
 #define NVM_CFG_ID_RUNTIME_PORT_SWAP_MAP                             205
 #define NVM_CFG_ID_THERMAL_EVENT_GPIO                                206
 #define NVM_CFG_ID_I2C_INTERRUPT_GPIO                                207
 #define NVM_CFG_ID_DCI_SUPPORT                                       208
 #define NVM_CFG_ID_PCIE_VDM_ENABLED                                  209
-#define NVM_CFG_ID_OEM1_NUMBER                                       210
-#define NVM_CFG_ID_OEM2_NUMBER                                       211
+#define NVM_CFG_ID_OPTION_KIT_PN                                     210
+#define NVM_CFG_ID_SPARE_PN                                          211
 #define NVM_CFG_ID_FEC_AN_MODE_K2_E5                                 212
 #define NVM_CFG_ID_NPAR_ENABLED_PROTOCOL                             213
-#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_PRE                            214
-#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_MAIN                           215
-#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_POST                           216
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_PRE_BB_K2                      214
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_MAIN_BB_K2                     215
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_POST_BB_K2                     216
 #define NVM_CFG_ID_ALOM_FAN_ON_AUX_GPIO                              217
 #define NVM_CFG_ID_ALOM_FAN_ON_AUX_VALUE                             218
 #define NVM_CFG_ID_SLOT_ID_GPIO                                      219
@@ -2840,9 +2925,29 @@ struct nvm_cfg {
 #define NVM_CFG_ID_EXT_PHY_MDI_PAIR_SWAP                             249
 #define NVM_CFG_ID_UID_LED_MODE_MASK                                 250
 #define NVM_CFG_ID_NCSI_AUX_LINK                                     251
+#define NVM_CFG_ID_EXTENDED_SPEED_E5                                 252
+#define NVM_CFG_ID_EXTENDED_SPEED_CAP_E5                             253
+#define NVM_CFG_ID_EXTENDED_FEC_MODE_E5                              254
+#define NVM_CFG_ID_RX_PRECODE_E5                                     255
+#define NVM_CFG_ID_TX_PRECODE_E5                                     256
+#define NVM_CFG_ID_50G_HLPC_PRE2_E5                                  257
+#define NVM_CFG_ID_50G_MLPC_PRE2_E5                                  258
+#define NVM_CFG_ID_50G_LLPC_PRE2_E5                                  259
+#define NVM_CFG_ID_25G_HLPC_PRE2_E5                                  260
+#define NVM_CFG_ID_25G_LLPC_PRE2_E5                                  261
+#define NVM_CFG_ID_25G_AC_PRE2_E5                                    262
+#define NVM_CFG_ID_10G_PC_PRE2_E5                                    263
+#define NVM_CFG_ID_10G_AC_PRE2_E5                                    264
+#define NVM_CFG_ID_1G_PRE2_E5                                        265
+#define NVM_CFG_ID_25G_BT_PRE2_E5                                    266
+#define NVM_CFG_ID_10G_BT_PRE2_E5                                    267
+#define NVM_CFG_ID_TX_RX_EQ_50G_HLPC_E5                              268
+#define NVM_CFG_ID_TX_RX_EQ_50G_MLPC_E5                              269
+#define NVM_CFG_ID_TX_RX_EQ_50G_LLPC_E5                              270
+#define NVM_CFG_ID_TX_RX_EQ_50G_AC_E5                                271
 #define NVM_CFG_ID_SMARTAN_FEC_OVERRIDE                              272
 #define NVM_CFG_ID_LLDP_DISABLE                                      273
-#define NVM_CFG_ID_SHORT_PERST_PROTECTION_K2_E5                      274
+#define NVM_CFG_ID_SHORT_PERST_PROTECTION_K2                         274
 #define NVM_CFG_ID_TRANSCEIVER_RATE_SELECT_0                         275
 #define NVM_CFG_ID_TRANSCEIVER_RATE_SELECT_1                         276
 #define NVM_CFG_ID_TRANSCEIVER_MODULE_TX_FAULT                       277
@@ -2869,4 +2974,22 @@ struct nvm_cfg {
 #define NVM_CFG_ID_PHY_MODULE_WARNING_TEMP_TH                        298
 #define NVM_CFG_ID_DISABLE_PLDM                                      299
 #define NVM_CFG_ID_DISABLE_MCTP_OEM                                  300
+#define NVM_CFG_ID_LOWEST_MBI_VERSION                                301
+#define NVM_CFG_ID_ACTIVITY_LED_BLINK_RATE_HZ                        302
+#define NVM_CFG_ID_EXT_PHY_AVS_ENABLE                                303
+#define NVM_CFG_ID_OVERRIDE_TX_DRIVER_REGULATOR_K2                   304
+#define NVM_CFG_ID_TX_REGULATOR_VOLTAGE_GEN1_2_K2                    305
+#define NVM_CFG_ID_TX_REGULATOR_VOLTAGE_GEN3_K2                      306
+#define NVM_CFG_ID_SHUTDOWN___PHY_MONITOR_ENABLED                    307
+#define NVM_CFG_ID_SHUTDOWN___PHY_THRESHOLD_TEMP                     308
+#define NVM_CFG_ID_SHUTDOWN___PHY_SAMPLES                            309
+#define NVM_CFG_ID_NCSI_GET_CAPAS_CH_COUNT                           310
+#define NVM_CFG_ID_PERMIT_TOTAL_PORT_SHUTDOWN                        311
+#define NVM_CFG_ID_MCTP_VIRTUAL_LINK_OEM3                            312
+#define NVM_CFG_ID_50G_AC_PRE2_E5                                    313
+#define NVM_CFG_ID_QINQ_SUPPORT                                      314
+#define NVM_CFG_ID_QINQ_OUTER_VLAN                                   315
+#define NVM_CFG_ID_NVMETCP_DID_SUFFIX_K2_E5                          316
+#define NVM_CFG_ID_NVMETCP_MODE_K2_E5                                317
+#define NVM_CFG_ID_PCIE_CLASS_CODE_NVMETCP_K2_E5                     318
 #endif /* NVM_CFG_H */
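
The shutdown-monitor fields added above follow the header's usual
MASK/OFFSET convention. As a minimal sketch (not part of the patch, with
hypothetical helper names), option 308 (threshold temperature) and
option 309 (#samples) can be decoded from the phy_temp_monitor word as
follows:

/* Hypothetical decoding helpers, shown only to illustrate the
 * MASK/OFFSET convention used throughout nvm_cfg1; both fields
 * are 8 bits wide, so a u8 return is sufficient.
 */
static u8 nvm_get_phy_shutdown_temp(u32 phy_temp_monitor)
{
	return (u8)((phy_temp_monitor &
		     NVM_CFG1_PORT_SHUTDOWN__M_PHY_THRESHOLD_TEMP_MASK) >>
		    NVM_CFG1_PORT_SHUTDOWN__M_PHY_THRESHOLD_TEMP_OFFSET);
}

static u8 nvm_get_phy_shutdown_samples(u32 phy_temp_monitor)
{
	return (u8)((phy_temp_monitor &
		     NVM_CFG1_PORT_SHUTDOWN__M_PHY_SAMPLES_MASK) >>
		    NVM_CFG1_PORT_SHUTDOWN__M_PHY_SAMPLES_OFFSET);
}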
diff --git a/drivers/net/qede/qede_debug.c b/drivers/net/qede/qede_debug.c
index d46727c45..085d0532b 100644
--- a/drivers/net/qede/qede_debug.c
+++ b/drivers/net/qede/qede_debug.c
@@ -3937,16 +3937,17 @@ static enum dbg_status qed_find_nvram_image(struct ecore_hwfn *p_hwfn,
 
 	/* Call NVRAM get file command */
 	nvm_result = ecore_mcp_nvm_rd_cmd(p_hwfn,
-					p_ptt,
-					DRV_MSG_CODE_NVM_GET_FILE_ATT,
-					image_type,
-					&ret_mcp_resp,
-					&ret_mcp_param,
-					&ret_txn_size, (u32 *)&file_att);
+					  p_ptt,
+					  DRV_MSG_CODE_NVM_GET_FILE_ATT,
+					  image_type,
+					  &ret_mcp_resp,
+					  &ret_mcp_param,
+					  &ret_txn_size,
+					  (u32 *)&file_att, false);
 
 	/* Check response */
-	if (nvm_result ||
-	    (ret_mcp_resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_NVM_OK)
+	if (nvm_result || (ret_mcp_resp & FW_MSG_CODE_MASK) !=
+	    FW_MSG_CODE_NVM_OK)
 		return DBG_STATUS_NVRAM_GET_IMAGE_FAILED;
 
 	/* Update return values */
@@ -3995,8 +3996,8 @@ static enum dbg_status qed_nvram_read(struct ecore_hwfn *p_hwfn,
 					 DRV_MSG_CODE_NVM_READ_NVRAM, param,
 					 &ret_mcp_resp,
 					 &ret_mcp_param, &ret_read_size,
-					 (u32 *)((u8 *)ret_buf +
-						 read_offset))) {
+					 (u32 *)((u8 *)ret_buf + read_offset),
+					 false)) {
 			DP_NOTICE(p_hwfn->p_dev, false, "rc = DBG_STATUS_NVRAM_READ_FAILED\n");
 			return DBG_STATUS_NVRAM_READ_FAILED;
 		}
@@ -4685,9 +4686,9 @@ static u32 qed_ilt_dump(struct ecore_hwfn *p_hwfn,
 	num_cids_per_page = (int)(cduc_page_size / conn_ctx_size);
 	ilt_pages = p_hwfn->p_cxt_mngr->ilt_shadow;
 
-	/* Dump global params - 22 must match number of params below */
+	/* Dump global params - 21 must match number of params below */
 	offset += qed_dump_common_global_params(p_hwfn, p_ptt,
-						dump_buf + offset, dump, 22);
+						dump_buf + offset, dump, 21);
 	offset += qed_dump_str_param(dump_buf + offset,
 				     dump, "dump-type", "ilt-dump");
 	offset += qed_dump_num_param(dump_buf + offset,
@@ -4746,15 +4747,11 @@ static u32 qed_ilt_dump(struct ecore_hwfn *p_hwfn,
 				     dump,
 				     "max-task-ctx-size",
 				     p_hwfn->p_cxt_mngr->task_ctx_size);
-	offset += qed_dump_num_param(dump_buf + offset,
-				     dump,
-				     "task-type-id",
-				     p_hwfn->p_cxt_mngr->task_type_id);
 	offset += qed_dump_num_param(dump_buf + offset,
 				     dump,
 				     "first-vf-id-in-pf",
 				     p_hwfn->p_cxt_mngr->first_vf_in_pf);
-	offset += /* 18 */ qed_dump_num_param(dump_buf + offset,
+	offset += /* 17 */ qed_dump_num_param(dump_buf + offset,
 					      dump,
 					      "num-vfs-in-pf",
 					      p_hwfn->p_cxt_mngr->vf_count);
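
The hunks above also pass a new trailing boolean to
ecore_mcp_nvm_rd_cmd(), whose prototype is extended elsewhere in this
series. A sketch of the resulting call shape; the flag's name and
meaning are assumptions, not taken from the patch:

/* Assumed call shape after this series; "b_can_sleep" is a guessed
 * name for the new trailing flag, passed as false in these debug
 * paths.
 */
rc = ecore_mcp_nvm_rd_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_NVM_READ_NVRAM,
			  param, &mcp_resp, &mcp_param, &read_size,
			  (u32 *)buf, false /* b_can_sleep */);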
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index ab5f5b106..668821dcb 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2656,7 +2656,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 					    ECORE_MAC);
 	else
 		ecore_vf_get_num_mac_filters(ECORE_LEADING_HWFN(edev),
-				(uint32_t *)&adapter->dev_info.num_mac_filters);
+				(uint8_t *)&adapter->dev_info.num_mac_filters);
 
 	/* Allocate memory for storing MAC addr */
 	eth_dev->data->mac_addrs = rte_zmalloc(edev->name,
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 6fc67d878..c91fcc1fa 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -596,7 +596,7 @@ int qede_alloc_fp_resc(struct qede_dev *qdev)
 {
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	struct qede_fastpath *fp;
-	uint32_t num_sbs;
+	uint8_t num_sbs;
 	uint16_t sb_idx;
 	int i;
 
-- 
2.18.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH 5/7] net/qede: changes for DMA page chain allocation and free
  2021-02-19 10:14 [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
                   ` (2 preceding siblings ...)
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 4/7] net/qede/base: update base driver to 8.62.4.0 Rasesh Mody
@ 2021-02-19 10:14 ` Rasesh Mody
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 6/7] net/qede: add support for new HW Rasesh Mody
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Rasesh Mody @ 2021-02-19 10:14 UTC (permalink / raw)
  To: jerinj, ferruh.yigit; +Cc: Rasesh Mody, dev, GR-Everest-DPDK-Dev, Igor Russkikh

This patch changes the DMA page chain allocation and free mechanism
used for fast path resources. It introduces the addr_shadow structure,
which keeps each chain page's virtual address together with its DMA
mapping, and the ecore_chain_params structure, which carries all the
DMA page chain parameters.

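As an illustration of the reworked API, a caller now fills an
ecore_chain_params and hands it to ecore_chain_alloc(); a minimal
sketch based on the hunks below (the 256-element Rx completion chain
is only an example):

	struct ecore_chain_params params;
	struct ecore_chain chain;

	/* Also fills the common defaults: page_size is set to
	 * ECORE_CHAIN_PAGE_SIZE (0x1000) and ext_pbl to OSAL_NULL.
	 */
	ecore_chain_params_init(&params,
				ECORE_CHAIN_USE_TO_CONSUME,
				ECORE_CHAIN_MODE_PBL,
				ECORE_CHAIN_CNT_TYPE_U16,
				256, /* num_elems */
				sizeof(union eth_rx_cqe));

	if (ecore_chain_alloc(edev, &chain, &params) != ECORE_SUCCESS)
		goto err; /* ecore_chain_free() releases partial state */
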
Signed-off-by: Rasesh Mody <rmody@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
---
 drivers/net/qede/base/ecore_chain.h   | 241 +++++++++++++++-----------
 drivers/net/qede/base/ecore_dev.c     | 119 ++++++++-----
 drivers/net/qede/base/ecore_dev_api.h |  13 +-
 drivers/net/qede/base/ecore_spq.c     |  55 +++---
 drivers/net/qede/qede_if.h            |  20 +--
 drivers/net/qede/qede_main.c          |   1 +
 drivers/net/qede/qede_rxtx.c          |  67 +++----
 7 files changed, 301 insertions(+), 215 deletions(-)

diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index c69920be5..8c7971081 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2016 - 2018 Cavium Inc.
+ * Copyright (c) 2018 - 2020 Marvell Semiconductor Inc.
  * All rights reserved.
- * www.cavium.com
+ * www.marvell.com
  */
-
 #ifndef __ECORE_CHAIN_H__
 #define __ECORE_CHAIN_H__
 
@@ -24,8 +24,8 @@ enum ecore_chain_mode {
 };
 
 enum ecore_chain_use_mode {
-	ECORE_CHAIN_USE_TO_PRODUCE,	/* Chain starts empty */
-	ECORE_CHAIN_USE_TO_CONSUME,	/* Chain starts full */
+	ECORE_CHAIN_USE_TO_PRODUCE,		/* Chain starts empty */
+	ECORE_CHAIN_USE_TO_CONSUME,		/* Chain starts full */
 	ECORE_CHAIN_USE_TO_CONSUME_PRODUCE,	/* Chain starts empty */
 };
 
@@ -38,35 +38,40 @@ enum ecore_chain_cnt_type {
 };
 
 struct ecore_chain_next {
-	struct regpair next_phys;
-	void *next_virt;
+	struct regpair	next_phys;
+	void		*next_virt;
 };
 
 struct ecore_chain_pbl_u16 {
-	u16 prod_page_idx;
-	u16 cons_page_idx;
+	u16	prod_page_idx;
+	u16	cons_page_idx;
 };
 
 struct ecore_chain_pbl_u32 {
-	u32 prod_page_idx;
-	u32 cons_page_idx;
+	u32	prod_page_idx;
+	u32	cons_page_idx;
 };
 
 struct ecore_chain_ext_pbl {
-	dma_addr_t p_pbl_phys;
-	void *p_pbl_virt;
+	dma_addr_t	p_pbl_phys;
+	void		*p_pbl_virt;
 };
 
 struct ecore_chain_u16 {
 	/* Cyclic index of next element to produce/consume */
-	u16 prod_idx;
-	u16 cons_idx;
+	u16	prod_idx;
+	u16	cons_idx;
 };
 
 struct ecore_chain_u32 {
 	/* Cyclic index of next element to produce/consume */
-	u32 prod_idx;
-	u32 cons_idx;
+	u32	prod_idx;
+	u32	cons_idx;
+};
+
+struct addr_shadow {
+	void *virt_addr;
+	dma_addr_t dma_map;
 };
 
 struct ecore_chain {
@@ -74,16 +79,17 @@ struct ecore_chain {
 	 * as produce / consume.
 	 */
 	/* Point to next element to produce/consume */
-	void *p_prod_elem;
-	void *p_cons_elem;
+	void				*p_prod_elem;
+	void				*p_cons_elem;
 
 	/* Fastpath portions of the PBL [if exists] */
 
 	struct {
-		/* Table for keeping the virtual addresses of the chain pages,
-		 * respectively to the physical addresses in the pbl table.
+		/* Table for keeping the virtual and physical addresses of the
+		 * chain pages, respectively to the physical addresses
+		 * in the pbl table.
 		 */
-		void		**pp_virt_addr_tbl;
+		struct addr_shadow *shadow;
 
 		union {
 			struct ecore_chain_pbl_u16	pbl_u16;
@@ -92,8 +98,8 @@ struct ecore_chain {
 	} pbl;
 
 	union {
-		struct ecore_chain_u16 chain16;
-		struct ecore_chain_u32 chain32;
+		struct ecore_chain_u16	chain16;
+		struct ecore_chain_u32	chain32;
 	} u;
 
 	/* Capacity counts only usable elements */
@@ -106,12 +112,12 @@ struct ecore_chain {
 	enum ecore_chain_mode		mode;
 
 	/* Elements information for fast calculations */
-	u16 elem_per_page;
-	u16 elem_per_page_mask;
-	u16 elem_size;
-	u16 next_page_mask;
-	u16 usable_per_page;
-	u8 elem_unusable;
+	u16				elem_per_page;
+	u16				elem_per_page_mask;
+	u16				elem_size;
+	u16				next_page_mask;
+	u16				usable_per_page;
+	u8				elem_unusable;
 
 	u8				cnt_type;
 
@@ -119,6 +125,8 @@ struct ecore_chain {
 	 * but isn't involved in regular functionality.
 	 */
 
+	u32				page_size;
+
 	/* Base address of a pre-allocated buffer for pbl */
 	struct {
 		dma_addr_t		p_phys_table;
@@ -140,24 +148,35 @@ struct ecore_chain {
 	/* TBD - do we really need this? Couldn't find usage for it */
 	bool				b_external_pbl;
 
-	void *dp_ctx;
+	void				*dp_ctx;
+};
+
+struct ecore_chain_params {
+	enum ecore_chain_use_mode	intended_use;
+	enum ecore_chain_mode		mode;
+	enum ecore_chain_cnt_type	cnt_type;
+	u32				num_elems;
+	osal_size_t			elem_size;
+	u32				page_size;
+	struct ecore_chain_ext_pbl	*ext_pbl;
 };
 
-#define ECORE_CHAIN_PBL_ENTRY_SIZE	(8)
-#define ECORE_CHAIN_PAGE_SIZE		(0x1000)
-#define ELEMS_PER_PAGE(elem_size)	(ECORE_CHAIN_PAGE_SIZE / (elem_size))
+#define ECORE_CHAIN_PBL_ENTRY_SIZE		8
+#define ECORE_CHAIN_PAGE_SIZE			0x1000
+#define ELEMS_PER_PAGE(page_size, elem_size)	((page_size) / (elem_size))
 
-#define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)		\
-	  ((mode == ECORE_CHAIN_MODE_NEXT_PTR) ?		\
-	   (u8)(1 + ((sizeof(struct ecore_chain_next) - 1) /	\
+#define UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)			\
+	  (((mode) == ECORE_CHAIN_MODE_NEXT_PTR) ?			\
+	   (u8)(1 + ((sizeof(struct ecore_chain_next) - 1) /		\
 		     (elem_size))) : 0)
 
-#define USABLE_ELEMS_PER_PAGE(elem_size, mode)		\
-	((u32)(ELEMS_PER_PAGE(elem_size) -			\
-	UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)))
+#define USABLE_ELEMS_PER_PAGE(elem_size, page_size, mode)		\
+	  ((u32)(ELEMS_PER_PAGE(page_size, elem_size) -			\
+	   UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)))
 
-#define ECORE_CHAIN_PAGE_CNT(elem_cnt, elem_size, mode)		\
-	DIV_ROUND_UP(elem_cnt, USABLE_ELEMS_PER_PAGE(elem_size, mode))
+#define ECORE_CHAIN_PAGE_CNT(elem_cnt, elem_size, page_size, mode)	\
+	DIV_ROUND_UP(elem_cnt,						\
+		     USABLE_ELEMS_PER_PAGE(elem_size, page_size, mode))
 
 #define is_chain_u16(p)	((p)->cnt_type == ECORE_CHAIN_CNT_TYPE_U16)
 #define is_chain_u32(p)	((p)->cnt_type == ECORE_CHAIN_CNT_TYPE_U32)
@@ -195,18 +214,41 @@ static OSAL_INLINE u32 ecore_chain_get_cons_idx_u32(struct ecore_chain *p_chain)
 #define ECORE_U16_MAX	((u16)~0U)
 #define ECORE_U32_MAX	((u32)~0U)
 
+static OSAL_INLINE u16
+ecore_chain_get_elem_used(struct ecore_chain *p_chain)
+{
+	u16 elem_per_page = p_chain->elem_per_page;
+	u32 prod = p_chain->u.chain16.prod_idx;
+	u32 cons = p_chain->u.chain16.cons_idx;
+	u16 used;
+
+	OSAL_ASSERT(is_chain_u16(p_chain));
+
+	if (prod < cons)
+		prod += (u32)ECORE_U16_MAX + 1;
+
+	used = (u16)(prod - cons);
+	if (p_chain->mode == ECORE_CHAIN_MODE_NEXT_PTR)
+		used -= prod / elem_per_page - cons / elem_per_page;
+
+	return (u16)(used);
+}
+
 static OSAL_INLINE u16 ecore_chain_get_elem_left(struct ecore_chain *p_chain)
 {
+	u16 elem_per_page = p_chain->elem_per_page;
+	u32 prod = p_chain->u.chain16.prod_idx;
+	u32 cons = p_chain->u.chain16.cons_idx;
 	u16 used;
 
 	OSAL_ASSERT(is_chain_u16(p_chain));
 
-	used = (u16)(((u32)ECORE_U16_MAX + 1 +
-		      (u32)(p_chain->u.chain16.prod_idx)) -
-		     (u32)p_chain->u.chain16.cons_idx);
+	if (prod < cons)
+		prod += (u32)ECORE_U16_MAX + 1;
+
+	used = (u16)(prod - cons);
 	if (p_chain->mode == ECORE_CHAIN_MODE_NEXT_PTR)
-		used -= p_chain->u.chain16.prod_idx / p_chain->elem_per_page -
-			p_chain->u.chain16.cons_idx / p_chain->elem_per_page;
+		used -= prod / elem_per_page - cons / elem_per_page;
 
 	return (u16)(p_chain->capacity - used);
 }
@@ -214,16 +256,19 @@ static OSAL_INLINE u16 ecore_chain_get_elem_left(struct ecore_chain *p_chain)
 static OSAL_INLINE u32
 ecore_chain_get_elem_left_u32(struct ecore_chain *p_chain)
 {
+	u16 elem_per_page = p_chain->elem_per_page;
+	u64 prod = p_chain->u.chain32.prod_idx;
+	u64 cons = p_chain->u.chain32.cons_idx;
 	u32 used;
 
 	OSAL_ASSERT(is_chain_u32(p_chain));
 
-	used = (u32)(((u64)ECORE_U32_MAX + 1 +
-		      (u64)(p_chain->u.chain32.prod_idx)) -
-		     (u64)p_chain->u.chain32.cons_idx);
+	if (prod < cons)
+		prod += (u64)ECORE_U32_MAX + 1;
+
+	used = (u32)(prod - cons);
 	if (p_chain->mode == ECORE_CHAIN_MODE_NEXT_PTR)
-		used -= p_chain->u.chain32.prod_idx / p_chain->elem_per_page -
-			p_chain->u.chain32.cons_idx / p_chain->elem_per_page;
+		used -= (u32)(prod / elem_per_page - cons / elem_per_page);
 
 	return p_chain->capacity - used;
 }
@@ -319,7 +364,7 @@ ecore_chain_advance_page(struct ecore_chain *p_chain, void **p_next_elem,
 				*(u32 *)page_to_inc = 0;
 			page_index = *(u32 *)page_to_inc;
 		}
-		*p_next_elem = p_chain->pbl.pp_virt_addr_tbl[page_index];
+		*p_next_elem = p_chain->pbl.shadow[page_index].virt_addr;
 	}
 }
 
@@ -330,24 +375,20 @@ ecore_chain_advance_page(struct ecore_chain *p_chain, void **p_next_elem,
 	(((p)->u.chain32.idx & (p)->elem_per_page_mask) == (p)->usable_per_page)
 
 #define is_unusable_next_idx(p, idx)		\
-	((((p)->u.chain16.idx + 1) &		\
-	(p)->elem_per_page_mask) == (p)->usable_per_page)
+	((((p)->u.chain16.idx + 1) & (p)->elem_per_page_mask) == (p)->usable_per_page)
 
 #define is_unusable_next_idx_u32(p, idx)	\
-	((((p)->u.chain32.idx + 1) &		\
-	(p)->elem_per_page_mask) == (p)->usable_per_page)
-
-#define test_and_skip(p, idx)						\
-	do {								\
-		if (is_chain_u16(p)) {					\
-			if (is_unusable_idx(p, idx))			\
-				(p)->u.chain16.idx +=			\
-					(p)->elem_unusable;		\
-		} else {						\
-			if (is_unusable_idx_u32(p, idx))		\
-				(p)->u.chain32.idx +=			\
-					(p)->elem_unusable;		\
-		}							\
+	((((p)->u.chain32.idx + 1) & (p)->elem_per_page_mask) == (p)->usable_per_page)
+
+#define test_and_skip(p, idx)							\
+	do {									\
+		if (is_chain_u16(p)) {						\
+			if (is_unusable_idx(p, idx))				\
+				(p)->u.chain16.idx += (p)->elem_unusable;	\
+		} else {							\
+			if (is_unusable_idx_u32(p, idx))			\
+				(p)->u.chain32.idx += (p)->elem_unusable;	\
+		}								\
 	} while (0)
 
 /**
@@ -395,7 +436,7 @@ static OSAL_INLINE void ecore_chain_return_produced(struct ecore_chain *p_chain)
  *
  * @param p_chain
  *
- * @return void*, a pointer to next element
+ * @return void *, a pointer to next element
  */
 static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 {
@@ -403,7 +444,8 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 
 	if (is_chain_u16(p_chain)) {
 		if ((p_chain->u.chain16.prod_idx &
-		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
+		     p_chain->elem_per_page_mask) ==
+		    p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain16.prod_idx;
 			p_prod_page_idx = &p_chain->pbl.c.pbl_u16.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
@@ -412,7 +454,8 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		p_chain->u.chain16.prod_idx++;
 	} else {
 		if ((p_chain->u.chain32.prod_idx &
-		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
+		     p_chain->elem_per_page_mask) ==
+		    p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain32.prod_idx;
 			p_prod_page_idx = &p_chain->pbl.c.pbl_u32.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
@@ -423,7 +466,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 
 	p_ret = p_chain->p_prod_elem;
 	p_chain->p_prod_elem = (void *)(((u8 *)p_chain->p_prod_elem) +
-					p_chain->elem_size);
+				       p_chain->elem_size);
 
 	return p_ret;
 }
@@ -469,7 +512,7 @@ void ecore_chain_recycle_consumed(struct ecore_chain *p_chain)
  *
  * @param p_chain
  *
- * @return void*, a pointer to the next buffer written
+ * @return void *, a pointer to the next buffer written
  */
 static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 {
@@ -477,7 +520,8 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 
 	if (is_chain_u16(p_chain)) {
 		if ((p_chain->u.chain16.cons_idx &
-		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
+		     p_chain->elem_per_page_mask) ==
+		    p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain16.cons_idx;
 			p_cons_page_idx = &p_chain->pbl.c.pbl_u16.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
@@ -486,7 +530,8 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		p_chain->u.chain16.cons_idx++;
 	} else {
 		if ((p_chain->u.chain32.cons_idx &
-		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
+		     p_chain->elem_per_page_mask) ==
+		    p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain32.cons_idx;
 			p_cons_page_idx = &p_chain->pbl.c.pbl_u32.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
@@ -497,7 +542,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 
 	p_ret = p_chain->p_cons_elem;
 	p_chain->p_cons_elem = (void *)(((u8 *)p_chain->p_cons_elem) +
-					p_chain->elem_size);
+				       p_chain->elem_size);
 
 	return p_ret;
 }
@@ -524,7 +569,7 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 	p_chain->p_prod_elem = p_chain->p_virt_addr;
 
 	if (p_chain->mode == ECORE_CHAIN_MODE_PBL) {
-		/* Use "page_cnt-1" as a reset value for the prod/cons page's
+		/* Use "page_cnt - 1" as a reset value for the prod/cons page's
 		 * indices, to avoid unnecessary page advancing on the first
 		 * call to ecore_chain_produce/consume. Instead, the indices
 		 * will be advanced to page_cnt and then will be wrapped to 0.
@@ -556,7 +601,7 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 }
 
 /**
- * @brief ecore_chain_init_params -
+ * @brief ecore_chain_init -
  *
  * Initializes a basic chain struct
  *
@@ -569,10 +614,12 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
  * @param dp_ctx
  */
 static OSAL_INLINE void
-ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
-			enum ecore_chain_use_mode intended_use,
-			enum ecore_chain_mode mode,
-			enum ecore_chain_cnt_type cnt_type, void *dp_ctx)
+ecore_chain_init(struct ecore_chain *p_chain,
+		 u32 page_cnt, u8 elem_size, u32 page_size,
+		 enum ecore_chain_use_mode intended_use,
+		 enum ecore_chain_mode mode,
+		 enum ecore_chain_cnt_type cnt_type,
+		 void *dp_ctx)
 {
 	/* chain fixed parameters */
 	p_chain->p_virt_addr = OSAL_NULL;
@@ -582,20 +629,22 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
 	p_chain->mode = mode;
 	p_chain->cnt_type = (u8)cnt_type;
 
-	p_chain->elem_per_page = ELEMS_PER_PAGE(elem_size);
-	p_chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, mode);
+	p_chain->elem_per_page = ELEMS_PER_PAGE(page_size, elem_size);
+	p_chain->usable_per_page = USABLE_ELEMS_PER_PAGE(elem_size, page_size,
+							 mode);
 	p_chain->elem_per_page_mask = p_chain->elem_per_page - 1;
 	p_chain->elem_unusable = UNUSABLE_ELEMS_PER_PAGE(elem_size, mode);
 	p_chain->next_page_mask = (p_chain->usable_per_page &
 				   p_chain->elem_per_page_mask);
 
+	p_chain->page_size = page_size;
 	p_chain->page_cnt = page_cnt;
 	p_chain->capacity = p_chain->usable_per_page * page_cnt;
 	p_chain->size = p_chain->elem_per_page * page_cnt;
 	p_chain->b_external_pbl = false;
 	p_chain->pbl_sp.p_phys_table = 0;
 	p_chain->pbl_sp.p_virt_table = OSAL_NULL;
-	p_chain->pbl.pp_virt_addr_tbl = OSAL_NULL;
+	p_chain->pbl.shadow = OSAL_NULL;
 
 	p_chain->dp_ctx = dp_ctx;
 }
@@ -636,11 +685,11 @@ static OSAL_INLINE void ecore_chain_init_mem(struct ecore_chain *p_chain,
 static OSAL_INLINE void ecore_chain_init_pbl_mem(struct ecore_chain *p_chain,
 						 void *p_virt_pbl,
 						 dma_addr_t p_phys_pbl,
-						 void **pp_virt_addr_tbl)
+						 struct addr_shadow *shadow)
 {
 	p_chain->pbl_sp.p_phys_table = p_phys_pbl;
 	p_chain->pbl_sp.p_virt_table = p_virt_pbl;
-	p_chain->pbl.pp_virt_addr_tbl = pp_virt_addr_tbl;
+	p_chain->pbl.shadow = shadow;
 }
 
 /**
@@ -677,7 +726,7 @@ ecore_chain_init_next_ptr_elem(struct ecore_chain *p_chain, void *p_virt_curr,
  *
  * @param p_chain
  *
- * @return void*
+ * @return void *
  */
 static OSAL_INLINE void *ecore_chain_get_last_elem(struct ecore_chain *p_chain)
 {
@@ -695,9 +744,8 @@ static OSAL_INLINE void *ecore_chain_get_last_elem(struct ecore_chain *p_chain)
 		p_next = (struct ecore_chain_next *)((u8 *)p_virt_addr + size);
 		while (p_next->next_virt != p_chain->p_virt_addr) {
 			p_virt_addr = p_next->next_virt;
-			p_next =
-			    (struct ecore_chain_next *)((u8 *)p_virt_addr +
-							size);
+			p_next = (struct ecore_chain_next *)((u8 *)p_virt_addr +
+							     size);
 		}
 		break;
 	case ECORE_CHAIN_MODE_SINGLE:
@@ -705,12 +753,12 @@ static OSAL_INLINE void *ecore_chain_get_last_elem(struct ecore_chain *p_chain)
 		break;
 	case ECORE_CHAIN_MODE_PBL:
 		last_page_idx = p_chain->page_cnt - 1;
-		p_virt_addr = p_chain->pbl.pp_virt_addr_tbl[last_page_idx];
+		p_virt_addr = p_chain->pbl.shadow[last_page_idx].virt_addr;
 		break;
 	}
 	/* p_virt_addr points at this stage to the last page of the chain */
 	size = p_chain->elem_size * (p_chain->usable_per_page - 1);
-	p_virt_addr = ((u8 *)p_virt_addr + size);
+	p_virt_addr = (u8 *)p_virt_addr + size;
 out:
 	return p_virt_addr;
 }
@@ -825,8 +873,8 @@ static OSAL_INLINE void ecore_chain_pbl_zero_mem(struct ecore_chain *p_chain)
 	page_cnt = ecore_chain_get_page_cnt(p_chain);
 
 	for (i = 0; i < page_cnt; i++)
-		OSAL_MEM_ZERO(p_chain->pbl.pp_virt_addr_tbl[i],
-			      ECORE_CHAIN_PAGE_SIZE);
+		OSAL_MEM_ZERO(p_chain->pbl.shadow[i].virt_addr,
+			      p_chain->page_size);
 }
 
 int ecore_chain_print(struct ecore_chain *p_chain, char *buffer,
@@ -835,8 +883,7 @@ int ecore_chain_print(struct ecore_chain *p_chain, char *buffer,
 		      int (*func_ptr_print_element)(struct ecore_chain *p_chain,
 						    void *p_element,
 						    char *buffer),
-		      int (*func_ptr_print_metadata)(struct ecore_chain
-						     *p_chain,
+		      int (*func_ptr_print_metadata)(struct ecore_chain *p_chain,
 						     char *buffer));
 
 #endif /* __ECORE_CHAIN_H__ */
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 63e5d6860..93af8c897 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -7784,43 +7784,40 @@ static void ecore_chain_free_single(struct ecore_dev *p_dev,
 		return;
 
 	OSAL_DMA_FREE_COHERENT(p_dev, p_chain->p_virt_addr,
-			       p_chain->p_phys_addr, ECORE_CHAIN_PAGE_SIZE);
+			       p_chain->p_phys_addr, p_chain->page_size);
 }
 
 static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
 				 struct ecore_chain *p_chain)
 {
-	void **pp_virt_addr_tbl = p_chain->pbl.pp_virt_addr_tbl;
-	u8 *p_pbl_virt = (u8 *)p_chain->pbl_sp.p_virt_table;
+	struct addr_shadow *shadow = p_chain->pbl.shadow;
 	u32 page_cnt = p_chain->page_cnt, i, pbl_size;
 
-	if (!pp_virt_addr_tbl)
+	if (!shadow)
 		return;
 
-	if (!p_pbl_virt)
-		goto out;
-
 	for (i = 0; i < page_cnt; i++) {
-		if (!pp_virt_addr_tbl[i])
+		if (!shadow[i].virt_addr || !shadow[i].dma_map)
 			break;
 
-		OSAL_DMA_FREE_COHERENT(p_dev, pp_virt_addr_tbl[i],
-				       *(dma_addr_t *)p_pbl_virt,
-				       ECORE_CHAIN_PAGE_SIZE);
-
-		p_pbl_virt += ECORE_CHAIN_PBL_ENTRY_SIZE;
+		OSAL_DMA_FREE_COHERENT(p_dev, shadow[i].virt_addr,
+				       shadow[i].dma_map,
+				       p_chain->page_size);
 	}
 
 	pbl_size = page_cnt * ECORE_CHAIN_PBL_ENTRY_SIZE;
 
-	if (!p_chain->b_external_pbl)
+	if (!p_chain->b_external_pbl) {
 		OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl_sp.p_virt_table,
 				       p_chain->pbl_sp.p_phys_table, pbl_size);
-out:
-	OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
+	}
+
+	OSAL_VFREE(p_dev, p_chain->pbl.shadow);
+	p_chain->pbl.shadow = OSAL_NULL;
 }
 
-void ecore_chain_free(struct ecore_dev *p_dev, struct ecore_chain *p_chain)
+void ecore_chain_free(struct ecore_dev *p_dev,
+		      struct ecore_chain *p_chain)
 {
 	switch (p_chain->mode) {
 	case ECORE_CHAIN_MODE_NEXT_PTR:
@@ -7833,14 +7830,18 @@ void ecore_chain_free(struct ecore_dev *p_dev, struct ecore_chain *p_chain)
 		ecore_chain_free_pbl(p_dev, p_chain);
 		break;
 	}
+
+	/* reset chain addresses to avoid double free */
+	ecore_chain_init_mem(p_chain, OSAL_NULL, 0);
 }
 
 static enum _ecore_status_t
 ecore_chain_alloc_sanity_check(struct ecore_dev *p_dev,
 			       enum ecore_chain_cnt_type cnt_type,
-			       osal_size_t elem_size, u32 page_cnt)
+			       osal_size_t elem_size,
+			       u32 page_size, u32 page_cnt)
 {
-	u64 chain_size = ELEMS_PER_PAGE(elem_size) * page_cnt;
+	u64 chain_size = ELEMS_PER_PAGE(page_size, elem_size) * page_cnt;
 
 	/* The actual chain size can be larger than the maximal possible value
 	 * after rounding up the requested elements number to pages, and after
@@ -7853,8 +7854,8 @@ ecore_chain_alloc_sanity_check(struct ecore_dev *p_dev,
 	    (cnt_type == ECORE_CHAIN_CNT_TYPE_U32 &&
 	     chain_size > ECORE_U32_MAX)) {
 		DP_NOTICE(p_dev, true,
-			  "The actual chain size (0x%lx) is larger than the maximal possible value\n",
-			  (unsigned long)chain_size);
+			  "The actual chain size (0x%" PRIx64 ") is larger than the maximal possible value\n",
+			  chain_size);
 		return ECORE_INVAL;
 	}
 
@@ -7870,7 +7871,7 @@ ecore_chain_alloc_next_ptr(struct ecore_dev *p_dev, struct ecore_chain *p_chain)
 
 	for (i = 0; i < p_chain->page_cnt; i++) {
 		p_virt = OSAL_DMA_ALLOC_COHERENT(p_dev, &p_phys,
-						 ECORE_CHAIN_PAGE_SIZE);
+						 p_chain->page_size);
 		if (!p_virt) {
 			DP_NOTICE(p_dev, false,
 				  "Failed to allocate chain memory\n");
@@ -7903,7 +7904,7 @@ ecore_chain_alloc_single(struct ecore_dev *p_dev, struct ecore_chain *p_chain)
 	dma_addr_t p_phys = 0;
 	void *p_virt = OSAL_NULL;
 
-	p_virt = OSAL_DMA_ALLOC_COHERENT(p_dev, &p_phys, ECORE_CHAIN_PAGE_SIZE);
+	p_virt = OSAL_DMA_ALLOC_COHERENT(p_dev, &p_phys, p_chain->page_size);
 	if (!p_virt) {
 		DP_NOTICE(p_dev, false, "Failed to allocate chain memory\n");
 		return ECORE_NOMEM;
@@ -7922,22 +7923,22 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 {
 	u32 page_cnt = p_chain->page_cnt, size, i;
 	dma_addr_t p_phys = 0, p_pbl_phys = 0;
-	void **pp_virt_addr_tbl = OSAL_NULL;
+	struct addr_shadow *shadow = OSAL_NULL;
 	u8 *p_pbl_virt = OSAL_NULL;
 	void *p_virt = OSAL_NULL;
 
-	size = page_cnt * sizeof(*pp_virt_addr_tbl);
-	pp_virt_addr_tbl = (void **)OSAL_VZALLOC(p_dev, size);
-	if (!pp_virt_addr_tbl) {
+	size = page_cnt * sizeof(struct addr_shadow);
+	shadow = (struct addr_shadow *)OSAL_VZALLOC(p_dev, size);
+	if (!shadow) {
 		DP_NOTICE(p_dev, false,
-			  "Failed to allocate memory for the chain virtual addresses table\n");
+			  "Failed to allocate memory for the chain virtual/physical addresses table\n");
 		return ECORE_NOMEM;
 	}
 
 	/* The allocation of the PBL table is done with its full size, since it
 	 * is expected to be successive.
 	 * ecore_chain_init_pbl_mem() is called even in a case of an allocation
-	 * failure, since pp_virt_addr_tbl was previously allocated, and it
+	 * failure, since the shadow table was previously allocated, and it
 	 * should be saved to allow its freeing during the error flow.
 	 */
 	size = page_cnt * ECORE_CHAIN_PBL_ENTRY_SIZE;
@@ -7950,8 +7951,7 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 		p_chain->b_external_pbl = true;
 	}
 
-	ecore_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys,
-				 pp_virt_addr_tbl);
+	ecore_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys, shadow);
 	if (!p_pbl_virt) {
 		DP_NOTICE(p_dev, false, "Failed to allocate chain pbl memory\n");
 		return ECORE_NOMEM;
@@ -7959,7 +7959,7 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 
 	for (i = 0; i < page_cnt; i++) {
 		p_virt = OSAL_DMA_ALLOC_COHERENT(p_dev, &p_phys,
-						 ECORE_CHAIN_PAGE_SIZE);
+						 p_chain->page_size);
 		if (!p_virt) {
 			DP_NOTICE(p_dev, false,
 				  "Failed to allocate chain memory\n");
@@ -7974,7 +7974,8 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 		/* Fill the PBL table with the physical address of the page */
 		*(dma_addr_t *)p_pbl_virt = p_phys;
 		/* Keep the virtual address of the page */
-		p_chain->pbl.pp_virt_addr_tbl[i] = p_virt;
+		p_chain->pbl.shadow[i].virt_addr = p_virt;
+		p_chain->pbl.shadow[i].dma_map = p_phys;
 
 		p_pbl_virt += ECORE_CHAIN_PBL_ENTRY_SIZE;
 	}
@@ -7982,36 +7983,58 @@ ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
 	return ECORE_SUCCESS;
 }
 
+void ecore_chain_params_init(struct ecore_chain_params *p_params,
+			     enum ecore_chain_use_mode intended_use,
+			     enum ecore_chain_mode mode,
+			     enum ecore_chain_cnt_type cnt_type,
+			     u32 num_elems,
+			     osal_size_t elem_size)
+{
+	p_params->intended_use = intended_use;
+	p_params->mode = mode;
+	p_params->cnt_type = cnt_type;
+	p_params->num_elems = num_elems;
+	p_params->elem_size = elem_size;
+
+	/* common values */
+	p_params->page_size = ECORE_CHAIN_PAGE_SIZE;
+	p_params->ext_pbl = OSAL_NULL;
+}
+
 enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
-				       enum ecore_chain_use_mode intended_use,
-				       enum ecore_chain_mode mode,
-				       enum ecore_chain_cnt_type cnt_type,
-				       u32 num_elems, osal_size_t elem_size,
 				       struct ecore_chain *p_chain,
-				       struct ecore_chain_ext_pbl *ext_pbl)
+				       struct ecore_chain_params *p_params)
 {
 	u32 page_cnt;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	if (mode == ECORE_CHAIN_MODE_SINGLE)
+	if (p_params->mode == ECORE_CHAIN_MODE_SINGLE)
 		page_cnt = 1;
 	else
-		page_cnt = ECORE_CHAIN_PAGE_CNT(num_elems, elem_size, mode);
-
-	rc = ecore_chain_alloc_sanity_check(p_dev, cnt_type, elem_size,
+		page_cnt = ECORE_CHAIN_PAGE_CNT(p_params->num_elems,
+						p_params->elem_size,
+						p_params->page_size,
+						p_params->mode);
+
+	rc = ecore_chain_alloc_sanity_check(p_dev, p_params->cnt_type,
+					    p_params->elem_size,
+					    p_params->page_size,
 					    page_cnt);
 	if (rc) {
 		DP_NOTICE(p_dev, false,
 			  "Cannot allocate a chain with the given arguments:\n"
-			  "[use_mode %d, mode %d, cnt_type %d, num_elems %d, elem_size %zu]\n",
-			  intended_use, mode, cnt_type, num_elems, elem_size);
+			  "[use_mode %d, mode %d, cnt_type %d, num_elems %d, elem_size %zu, page_size %u]\n",
+			  p_params->intended_use, p_params->mode,
+			  p_params->cnt_type, p_params->num_elems,
+			  p_params->elem_size, p_params->page_size);
 		return rc;
 	}
 
-	ecore_chain_init_params(p_chain, page_cnt, (u8)elem_size, intended_use,
-				mode, cnt_type, p_dev->dp_ctx);
+	ecore_chain_init(p_chain, page_cnt, (u8)p_params->elem_size,
+			 p_params->page_size, p_params->intended_use,
+			 p_params->mode, p_params->cnt_type, p_dev->dp_ctx);
 
-	switch (mode) {
+	switch (p_params->mode) {
 	case ECORE_CHAIN_MODE_NEXT_PTR:
 		rc = ecore_chain_alloc_next_ptr(p_dev, p_chain);
 		break;
@@ -8019,7 +8042,7 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
 		rc = ecore_chain_alloc_single(p_dev, p_chain);
 		break;
 	case ECORE_CHAIN_MODE_PBL:
-		rc = ecore_chain_alloc_pbl(p_dev, p_chain, ext_pbl);
+		rc = ecore_chain_alloc_pbl(p_dev, p_chain, p_params->ext_pbl);
 		break;
 	}
 	if (rc)
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 1ffe286d7..2e06e77c9 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -482,6 +482,12 @@ struct ecore_eth_stats {
 };
 #endif
 
+void ecore_chain_params_init(struct ecore_chain_params *p_params,
+			     enum ecore_chain_use_mode intended_use,
+			     enum ecore_chain_mode mode,
+			     enum ecore_chain_cnt_type cnt_type,
+			     u32 num_elems,
+			     osal_size_t elem_size);
 /**
  * @brief ecore_chain_alloc - Allocate and initialize a chain
  *
@@ -496,13 +502,8 @@ struct ecore_eth_stats {
  */
 enum _ecore_status_t
 ecore_chain_alloc(struct ecore_dev *p_dev,
-		  enum ecore_chain_use_mode intended_use,
-		  enum ecore_chain_mode mode,
-		  enum ecore_chain_cnt_type cnt_type,
-		  u32 num_elems,
-		  osal_size_t elem_size,
 		  struct ecore_chain *p_chain,
-		  struct ecore_chain_ext_pbl *ext_pbl);
+		  struct ecore_chain_params *p_params);
 
 /**
  * @brief ecore_chain_free - Free chain DMA memory
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 47c8a4e90..dda36c995 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -471,7 +471,8 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn	*p_hwfn,
 
 enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 {
-	struct ecore_eq *p_eq;
+	struct ecore_chain_params chain_params;
+	struct ecore_eq	*p_eq;
 
 	/* Allocate EQ struct */
 	p_eq = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_eq));
@@ -482,13 +483,13 @@ enum _ecore_status_t ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
 	}
 
 	/* Allocate and initialize EQ chain */
-	if (ecore_chain_alloc(p_hwfn->p_dev,
-			      ECORE_CHAIN_USE_TO_PRODUCE,
-			      ECORE_CHAIN_MODE_PBL,
-			      ECORE_CHAIN_CNT_TYPE_U16,
-			      num_elem,
-			      sizeof(union event_ring_element),
-			      &p_eq->chain, OSAL_NULL) != ECORE_SUCCESS) {
+	ecore_chain_params_init(&chain_params,
+				ECORE_CHAIN_USE_TO_PRODUCE,
+				ECORE_CHAIN_MODE_PBL,
+				ECORE_CHAIN_CNT_TYPE_U16,
+				num_elem,
+				sizeof(union event_ring_element));
+	if (ecore_chain_alloc(p_hwfn->p_dev, &p_eq->chain, &chain_params)) {
 		DP_NOTICE(p_hwfn, false, "Failed to allocate eq chain\n");
 		goto eq_allocate_fail;
 	}
@@ -626,6 +627,7 @@ void ecore_spq_setup(struct ecore_hwfn *p_hwfn)
 enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_spq_entry *p_virt = OSAL_NULL;
+	struct ecore_chain_params chain_params;
 	struct ecore_spq *p_spq = OSAL_NULL;
 	dma_addr_t p_phys = 0;
 	u32 capacity;
@@ -638,14 +640,20 @@ enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn)
 		return ECORE_NOMEM;
 	}
 
+	/* SPQ for VF is needed only for async_comp callbacks */
+	if (IS_VF(p_hwfn->p_dev)) {
+		p_hwfn->p_spq = p_spq;
+		return ECORE_SUCCESS;
+	}
+
 	/* SPQ ring  */
-	if (ecore_chain_alloc(p_hwfn->p_dev,
-			      ECORE_CHAIN_USE_TO_PRODUCE,
-			      ECORE_CHAIN_MODE_SINGLE,
-			      ECORE_CHAIN_CNT_TYPE_U16,
-			      0, /* N/A when the mode is SINGLE */
-			      sizeof(struct slow_path_element),
-			      &p_spq->chain, OSAL_NULL)) {
+	ecore_chain_params_init(&chain_params,
+				ECORE_CHAIN_USE_TO_PRODUCE,
+				ECORE_CHAIN_MODE_SINGLE,
+				ECORE_CHAIN_CNT_TYPE_U16,
+				0, /* N/A when the mode is SINGLE */
+				sizeof(struct slow_path_element));
+	if (ecore_chain_alloc(p_hwfn->p_dev, &p_spq->chain, &chain_params)) {
 		DP_NOTICE(p_hwfn, false, "Failed to allocate spq chain\n");
 		goto spq_allocate_fail;
 	}
@@ -1120,6 +1128,7 @@ void ecore_spq_drop_next_completion(struct ecore_hwfn *p_hwfn)
 
 enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 {
+	struct ecore_chain_params chain_params;
 	struct ecore_consq *p_consq;
 
 	/* Allocate ConsQ struct */
@@ -1130,14 +1139,14 @@ enum _ecore_status_t ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
 		return ECORE_NOMEM;
 	}
 
-	/* Allocate and initialize EQ chain */
-	if (ecore_chain_alloc(p_hwfn->p_dev,
-			      ECORE_CHAIN_USE_TO_PRODUCE,
-			      ECORE_CHAIN_MODE_PBL,
-			      ECORE_CHAIN_CNT_TYPE_U16,
-			      ECORE_CHAIN_PAGE_SIZE / 0x80,
-			      0x80,
-			      &p_consq->chain, OSAL_NULL) != ECORE_SUCCESS) {
+	/* Allocate and initialize ConsQ chain */
+	ecore_chain_params_init(&chain_params,
+				ECORE_CHAIN_USE_TO_PRODUCE,
+				ECORE_CHAIN_MODE_PBL,
+				ECORE_CHAIN_CNT_TYPE_U16,
+				ECORE_CHAIN_PAGE_SIZE / 0x80,
+				0x80);
+	if (ecore_chain_alloc(p_hwfn->p_dev, &p_consq->chain, &chain_params)) {
 		DP_NOTICE(p_hwfn, false, "Failed to allocate consq chain");
 		goto consq_allocate_fail;
 	}
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 1693a243f..daefd4b98 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -141,16 +141,16 @@ struct qed_common_ops {
 		     struct rte_pci_device *pci_dev,
 		     uint32_t dp_module, uint8_t dp_level, bool is_vf);
 	void (*set_name)(struct ecore_dev *edev, char name[]);
-	enum _ecore_status_t
-		(*chain_alloc)(struct ecore_dev *edev,
-			       enum ecore_chain_use_mode
-			       intended_use,
-			       enum ecore_chain_mode mode,
-			       enum ecore_chain_cnt_type cnt_type,
-			       uint32_t num_elems,
-			       osal_size_t elem_size,
-			       struct ecore_chain *p_chain,
-			       struct ecore_chain_ext_pbl *ext_pbl);
+
+	void (*chain_params_init)(struct ecore_chain_params *p_params,
+				  enum ecore_chain_use_mode intended_use,
+				  enum ecore_chain_mode mode,
+				  enum ecore_chain_cnt_type cnt_type,
+				  u32 num_elems, size_t elem_size);
+
+	int (*chain_alloc)(struct ecore_dev *edev,
+			   struct ecore_chain *p_chain,
+			   struct ecore_chain_params *p_params);
 
 	void (*chain_free)(struct ecore_dev *edev,
 			   struct ecore_chain *p_chain);
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e865d988f..4f99ab8b7 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -781,6 +781,7 @@ const struct qed_common_ops qed_common_ops_pass = {
 	INIT_STRUCT_FIELD(update_pf_params, &qed_update_pf_params),
 	INIT_STRUCT_FIELD(slowpath_start, &qed_slowpath_start),
 	INIT_STRUCT_FIELD(set_name, &qed_set_name),
+	INIT_STRUCT_FIELD(chain_params_init, &ecore_chain_params_init),
 	INIT_STRUCT_FIELD(chain_alloc, &ecore_chain_alloc),
 	INIT_STRUCT_FIELD(chain_free, &ecore_chain_free),
 	INIT_STRUCT_FIELD(sb_init, &qed_sb_init),
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c91fcc1fa..b6ff59457 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -135,6 +135,7 @@ qede_alloc_rx_queue_mem(struct rte_eth_dev *dev,
 	struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	struct qede_rx_queue *rxq;
+	struct ecore_chain_params chain_params;
 	size_t size;
 	int rc;
 
@@ -172,43 +173,45 @@ qede_alloc_rx_queue_mem(struct rte_eth_dev *dev,
 	}
 
 	/* Allocate FW Rx ring  */
-	rc = qdev->ops->common->chain_alloc(edev,
-					    ECORE_CHAIN_USE_TO_CONSUME_PRODUCE,
-					    ECORE_CHAIN_MODE_NEXT_PTR,
-					    ECORE_CHAIN_CNT_TYPE_U16,
-					    rxq->nb_rx_desc,
-					    sizeof(struct eth_rx_bd),
-					    &rxq->rx_bd_ring,
-					    NULL);
+	qdev->ops->common->chain_params_init(&chain_params,
+					     ECORE_CHAIN_USE_TO_CONSUME_PRODUCE,
+					     ECORE_CHAIN_MODE_NEXT_PTR,
+					     ECORE_CHAIN_CNT_TYPE_U16,
+					     rxq->nb_rx_desc,
+					     sizeof(struct eth_rx_bd));
+	rc = qdev->ops->common->chain_alloc(edev, &rxq->rx_bd_ring,
+					    &chain_params);
 
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(edev, "Memory allocation fails for RX BD ring"
 		       " on socket %u\n", socket_id);
-		rte_free(rxq->sw_rx_ring);
-		rte_free(rxq);
-		return NULL;
+		goto err1;
 	}
 
 	/* Allocate FW completion ring */
-	rc = qdev->ops->common->chain_alloc(edev,
-					    ECORE_CHAIN_USE_TO_CONSUME,
-					    ECORE_CHAIN_MODE_PBL,
-					    ECORE_CHAIN_CNT_TYPE_U16,
-					    rxq->nb_rx_desc,
-					    sizeof(union eth_rx_cqe),
-					    &rxq->rx_comp_ring,
-					    NULL);
+	qdev->ops->common->chain_params_init(&chain_params,
+					     ECORE_CHAIN_USE_TO_CONSUME,
+					     ECORE_CHAIN_MODE_PBL,
+					     ECORE_CHAIN_CNT_TYPE_U16,
+					     rxq->nb_rx_desc,
+					     sizeof(union eth_rx_cqe));
+	rc = qdev->ops->common->chain_alloc(edev, &rxq->rx_comp_ring,
+					    &chain_params);
 
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(edev, "Memory allocation fails for RX CQE ring"
 		       " on socket %u\n", socket_id);
-		qdev->ops->common->chain_free(edev, &rxq->rx_bd_ring);
-		rte_free(rxq->sw_rx_ring);
-		rte_free(rxq);
-		return NULL;
+		goto err2;
 	}
 
 	return rxq;
+
+err2:
+	qdev->ops->common->chain_free(edev, &rxq->rx_bd_ring);
+err1:
+	rte_free(rxq->sw_rx_ring);
+	rte_free(rxq);
+	return NULL;
 }
 
 int
@@ -392,6 +395,8 @@ qede_alloc_tx_queue_mem(struct rte_eth_dev *dev,
 	struct qede_dev *qdev = dev->data->dev_private;
 	struct ecore_dev *edev = &qdev->edev;
 	struct qede_tx_queue *txq;
+	struct ecore_chain_params chain_params;
+	union eth_tx_bd_types *p_virt;
 	int rc;
 
 	txq = rte_zmalloc_socket("qede_tx_queue", sizeof(struct qede_tx_queue),
@@ -408,14 +413,14 @@ qede_alloc_tx_queue_mem(struct rte_eth_dev *dev,
 	txq->qdev = qdev;
 	txq->port_id = dev->data->port_id;
 
-	rc = qdev->ops->common->chain_alloc(edev,
-					    ECORE_CHAIN_USE_TO_CONSUME_PRODUCE,
-					    ECORE_CHAIN_MODE_PBL,
-					    ECORE_CHAIN_CNT_TYPE_U16,
-					    txq->nb_tx_desc,
-					    sizeof(union eth_tx_bd_types),
-					    &txq->tx_pbl,
-					    NULL);
+	qdev->ops->common->chain_params_init(&chain_params,
+					     ECORE_CHAIN_USE_TO_CONSUME_PRODUCE,
+					     ECORE_CHAIN_MODE_PBL,
+					     ECORE_CHAIN_CNT_TYPE_U16,
+					     txq->nb_tx_desc,
+					     sizeof(*p_virt));
+	rc = qdev->ops->common->chain_alloc(edev, &txq->tx_pbl,
+					     &chain_params);
 	if (rc != ECORE_SUCCESS) {
 		DP_ERR(edev,
 		       "Unable to allocate memory for txbd ring on socket %u",
-- 
2.18.0

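One more note on the index arithmetic reworked in
ecore_chain_get_elem_used()/ecore_chain_get_elem_left() above:
widening prod/cons and adding one counter period only when the
producer has wrapped keeps the subtraction well defined. A standalone
sketch of the u16 case (plain C for illustration, not driver code):

	#include <assert.h>
	#include <stdint.h>

	static uint16_t elems_used(uint16_t prod_idx, uint16_t cons_idx)
	{
		uint32_t prod = prod_idx, cons = cons_idx;

		/* Unwrap one full u16 counter period */
		if (prod < cons)
			prod += (uint32_t)UINT16_MAX + 1;

		return (uint16_t)(prod - cons);
	}

	int main(void)
	{
		assert(elems_used(10, 5) == 5);    /* no wrap */
		assert(elems_used(2, 65533) == 5); /* prod wrapped past 0xffff */
		return 0;
	}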

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH 6/7] net/qede: add support for new HW
  2021-02-19 10:14 [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
                   ` (3 preceding siblings ...)
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 5/7] net/qede: changes for DMA page chain allocation and free Rasesh Mody
@ 2021-02-19 10:14 ` Rasesh Mody
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 7/7] net/qede/base: clean unnecessary ifdef and comments Rasesh Mody
  2021-02-19 12:00 ` [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
  6 siblings, 0 replies; 8+ messages in thread
From: Rasesh Mody @ 2021-02-19 10:14 UTC (permalink / raw)
  To: jerinj, ferruh.yigit; +Cc: Rasesh Mody, dev, GR-Everest-DPDK-Dev, Igor Russkikh

This patch adds PMD support for the new hardware, the 50xxx family
of Marvell FastLinQ adapters, by adding its PCI IDs. The PMD version
is updated to 3.0.0.1.

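The new IDs plug into the standard rte_pci_id match tables. Roughly,
each table entry expands as in the sketch below (0x1077 is assumed to
be the QLogic vendor ID used by the existing QEDE_RTE_PCI_DEVICE()
wrapper; example_map is illustrative):

	static const struct rte_pci_id example_map[] = {
		{ RTE_PCI_DEVICE(0x1077 /* QLogic */, 0x8170 /* CHIP_NUM_E5 */) },
		{ .vendor_id = 0, } /* table sentinel */
	};
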
Signed-off-by: Rasesh Mody <rmody@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
---
 drivers/net/qede/qede_debug.c  |  3 +--
 drivers/net/qede/qede_ethdev.c |  8 +++++++-
 drivers/net/qede/qede_ethdev.h | 11 +++++++----
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/net/qede/qede_debug.c b/drivers/net/qede/qede_debug.c
index 085d0532b..ca2879432 100644
--- a/drivers/net/qede/qede_debug.c
+++ b/drivers/net/qede/qede_debug.c
@@ -8099,8 +8099,7 @@ void qed_dbg_pf_init(struct ecore_dev *edev)
 	/* Debug values are after init values.
 	 * The offset is the first dword of the file.
 	 */
-	/* TBD: change hardcoded value to offset from FW file */
-	dbg_values = (const u8 *)edev->firmware + 1337296;
+	dbg_values = (const u8 *)edev->firmware + sizeof(u32);
 
 	for_each_hwfn(edev, i) {
 		qed_dbg_set_bin_ptr(&edev->hwfns[i], dbg_values);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 668821dcb..8b151e907 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -374,7 +374,7 @@ static void qede_print_adapter_info(struct rte_eth_dev *dev)
 	DP_INFO(edev, "**************************************************\n");
 	DP_INFO(edev, " %-20s: %s\n", "DPDK version", rte_version());
 	DP_INFO(edev, " %-20s: %s %c%d\n", "Chip details",
-		  ECORE_IS_BB(edev) ? "BB" : "AH",
+		  ECORE_IS_E5(edev) ? "AHP" : ECORE_IS_BB(edev) ? "BB" : "AH",
 		  'A' + edev->chip_rev,
 		  (int)edev->chip_metal);
 	snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%s",
@@ -2811,6 +2811,9 @@ static const struct rte_pci_id pci_id_qedevf_map[] = {
 	{
 		QEDEVF_RTE_PCI_DEVICE(PCI_DEVICE_ID_QLOGIC_AH_IOV)
 	},
+	{
+		QEDEVF_RTE_PCI_DEVICE(PCI_DEVICE_ID_QLOGIC_E5_IOV)
+	},
 	{.vendor_id = 0,}
 };
 
@@ -2846,6 +2849,9 @@ static const struct rte_pci_id pci_id_qede_map[] = {
 	{
 		QEDE_RTE_PCI_DEVICE(PCI_DEVICE_ID_QLOGIC_AH_25G)
 	},
+	{
+		QEDE_RTE_PCI_DEVICE(PCI_DEVICE_ID_QLOGIC_E5)
+	},
 	{.vendor_id = 0,}
 };
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index da4b87f5e..d065069a9 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -45,9 +45,9 @@
 /* Driver versions */
 #define QEDE_PMD_DRV_VER_STR_SIZE NAME_SIZE /* 128 */
 #define QEDE_PMD_VER_PREFIX		"QEDE PMD"
-#define QEDE_PMD_VERSION_MAJOR		2
-#define QEDE_PMD_VERSION_MINOR	        11
-#define QEDE_PMD_VERSION_REVISION       3
+#define QEDE_PMD_VERSION_MAJOR		3
+#define QEDE_PMD_VERSION_MINOR	        0
+#define QEDE_PMD_VERSION_REVISION       0
 #define QEDE_PMD_VERSION_PATCH	        1
 
 #define QEDE_PMD_DRV_VERSION qede_stringify(QEDE_PMD_VERSION_MAJOR) "."     \
@@ -109,6 +109,8 @@
 #define CHIP_NUM_AH_40G			       0x8072
 #define CHIP_NUM_AH_25G			       0x8073
 #define CHIP_NUM_AH_IOV			       0x8090
+#define CHIP_NUM_E5			       0x8170
+#define CHIP_NUM_E5_IOV			       0x8190
 
 #define PCI_DEVICE_ID_QLOGIC_NX2_57980E        CHIP_NUM_57980E
 #define PCI_DEVICE_ID_QLOGIC_NX2_57980S        CHIP_NUM_57980S
@@ -123,7 +125,8 @@
 #define PCI_DEVICE_ID_QLOGIC_AH_40G            CHIP_NUM_AH_40G
 #define PCI_DEVICE_ID_QLOGIC_AH_25G            CHIP_NUM_AH_25G
 #define PCI_DEVICE_ID_QLOGIC_AH_IOV            CHIP_NUM_AH_IOV
-
+#define PCI_DEVICE_ID_QLOGIC_E5                CHIP_NUM_E5
+#define PCI_DEVICE_ID_QLOGIC_E5_IOV            CHIP_NUM_E5_IOV
 
 
 extern char qede_fw_file[];
-- 
2.18.0

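For reference, the version macros above compose the advertised string
through two-level stringization; with the usual qede_stringify helpers
(assumed from qede_ethdev.h) the result is "3.0.0.1":

	/* Sketch of the stringization behind QEDE_PMD_DRV_VERSION */
	#define stringify1(x) #x
	#define stringify(x)  stringify1(x)
	#define PMD_VER stringify(3) "." stringify(0) "." \
			stringify(0) "." stringify(1)
	/* PMD_VER expands to the string literal "3.0.0.1" */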

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dpdk-dev] [PATCH 7/7] net/qede/base: clean unnecessary ifdef and comments
  2021-02-19 10:14 [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
                   ` (4 preceding siblings ...)
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 6/7] net/qede: add support for new HW Rasesh Mody
@ 2021-02-19 10:14 ` Rasesh Mody
  2021-02-19 12:00 ` [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
  6 siblings, 0 replies; 8+ messages in thread
From: Rasesh Mody @ 2021-02-19 10:14 UTC (permalink / raw)
  To: jerinj, ferruh.yigit; +Cc: Rasesh Mody, dev, GR-Everest-DPDK-Dev, Igor Russkikh

Removed #ifdef LINUX_REMOVE and #ifndef LINUX_REMOVE blocks along with
stale TODO and TBD comments; most of these were no longer relevant.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
---
 drivers/net/qede/base/bcm_osal.c            |  1 -
 drivers/net/qede/base/bcm_osal.h            |  3 --
 drivers/net/qede/base/ecore.h               |  5 ---
 drivers/net/qede/base/ecore_chain.h         |  1 -
 drivers/net/qede/base/ecore_cxt.c           |  7 ----
 drivers/net/qede/base/ecore_cxt.h           |  1 -
 drivers/net/qede/base/ecore_dev.c           | 43 +-------------------
 drivers/net/qede/base/ecore_hsi_common.h    |  1 -
 drivers/net/qede/base/ecore_hsi_init_func.h |  2 -
 drivers/net/qede/base/ecore_hw.c            |  7 ----
 drivers/net/qede/base/ecore_init_fw_funcs.c |  2 -
 drivers/net/qede/base/ecore_int.c           | 11 -----
 drivers/net/qede/base/ecore_int.h           |  1 -
 drivers/net/qede/base/ecore_iov_api.h       |  5 ---
 drivers/net/qede/base/ecore_l2.c            |  6 ---
 drivers/net/qede/base/ecore_mcp.c           | 20 ---------
 drivers/net/qede/base/ecore_mcp.h           |  1 -
 drivers/net/qede/base/ecore_mcp_api.h       |  4 --
 drivers/net/qede/base/ecore_sp_commands.c   |  3 --
 drivers/net/qede/base/ecore_spq.c           |  5 ---
 drivers/net/qede/base/ecore_sriov.c         | 45 +--------------------
 drivers/net/qede/base/ecore_sriov.h         |  1 -
 drivers/net/qede/base/ecore_vf.c            |  4 --
 drivers/net/qede/base/ecore_vf.h            |  3 --
 drivers/net/qede/base/ecore_vf_api.h        |  2 -
 drivers/net/qede/base/ecore_vfpf_if.h       |  9 +----
 drivers/net/qede/base/mcp_public.h          |  2 -
 drivers/net/qede/qede_ethdev.c              |  1 -
 drivers/net/qede/qede_main.c                |  1 -
 drivers/net/qede/qede_rxtx.c                |  5 ---
 drivers/net/qede/qede_sriov.c               |  4 --
 31 files changed, 6 insertions(+), 200 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 23a84795f..8e4fe6926 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -121,7 +121,6 @@ void qede_vf_fill_driver_data(struct ecore_hwfn *hwfn,
 			      struct ecore_vf_acquire_sw_info *vf_sw_info)
 {
 	vf_sw_info->os_type = VFPF_ACQUIRE_OS_LINUX_USERSPACE;
-	/* TODO - fill driver version */
 }
 
 void *osal_dma_alloc_coherent(struct ecore_dev *p_dev,
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 38b7fff67..315d091e1 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -270,7 +270,6 @@ typedef struct osal_list_t {
 #define OSAL_LIST_NEXT(entry, field, type) \
 	(type *)((&((entry)->field))->next)
 
-/* TODO: Check field, type order */
 
 #define OSAL_LIST_FOR_EACH_ENTRY(entry, list, field, type) \
 	for (entry = OSAL_LIST_FIRST_ENTRY(list, type, field); \
@@ -284,7 +283,6 @@ typedef struct osal_list_t {
 	  entry = (type *)tmp_entry,					 \
 	  tmp_entry = (entry) ? OSAL_LIST_NEXT(entry, field, type) : NULL)
 
-/* TODO: OSAL_LIST_INSERT_ENTRY_AFTER */
 #define OSAL_LIST_INSERT_ENTRY_AFTER(new_entry, entry, list) \
 	OSAL_LIST_PUSH_HEAD(new_entry, list)
 
@@ -396,7 +394,6 @@ void qede_hw_err_notify(struct ecore_hwfn *p_hwfn,
 #define OSAL_UNZIP_DATA(p_hwfn, input_len, buf, max_size, unzip_buf) \
 	qede_unzip_data(p_hwfn, input_len, buf, max_size, unzip_buf)
 
-/* TODO: */
 #define OSAL_SCHEDULE_RECOVERY_HANDLER(hwfn) nothing
 
 int qede_save_fw_dump(uint16_t port_id);
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 9801d5348..0c75934d2 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -170,7 +170,6 @@ static OSAL_INLINE u32 DB_ADDR_VF_E5(u32 cid, u32 DEMS)
 	((sizeof(type_name) + (u32)(1 << (p_hwfn->p_dev->cache_shift)) - 1) & \
 	 ~((1 << (p_hwfn->p_dev->cache_shift)) - 1))
 
-#ifndef LINUX_REMOVE
 #ifndef U64_HI
 #define U64_HI(val) ((u32)(((u64)(val))  >> 32))
 #endif
@@ -178,7 +177,6 @@ static OSAL_INLINE u32 DB_ADDR_VF_E5(u32 cid, u32 DEMS)
 #ifndef U64_LO
 #define U64_LO(val) ((u32)(((u64)(val)) & 0xffffffff))
 #endif
-#endif
 
 #ifndef __EXTRACT__LINUX__IF__
 #define ECORE_INT_DEBUG_SIZE_DEF     _MB(2)
@@ -271,7 +269,6 @@ enum DP_LEVEL {
 #define ECORE_LOG_NOTICE_MASK	(0x80000000)
 
 enum DP_MODULE {
-#ifndef LINUX_REMOVE
 	ECORE_MSG_DRV		= 0x0001,
 	ECORE_MSG_PROBE		= 0x0002,
 	ECORE_MSG_LINK		= 0x0004,
@@ -287,7 +284,6 @@ enum DP_MODULE {
 	ECORE_MSG_PKTDATA	= 0x1000,
 	ECORE_MSG_HW		= 0x2000,
 	ECORE_MSG_WOL		= 0x4000,
-#endif
 	ECORE_MSG_SPQ		= 0x10000,
 	ECORE_MSG_STATS		= 0x20000,
 	ECORE_MSG_DCB		= 0x40000,
@@ -761,7 +757,6 @@ enum ecore_mf_mode_bit {
 	/* Allow Cross-PF [& child VFs] Tx-switching */
 	ECORE_MF_INTER_PF_SWITCH,
 
-	/* TODO - if we ever re-utilize any of this logic, we can rename */
 	ECORE_MF_UFP_SPECIFIC,
 
 	ECORE_MF_DISABLE_ARFS,
diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index 8c7971081..78a10ce34 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -145,7 +145,6 @@ struct ecore_chain {
 
 	u8				intended_use;
 
-	/* TBD - do we really need this? Couldn't find usage for it */
 	bool				b_external_pbl;
 
 	void				*dp_ctx;
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index c22f97f7d..74cf77255 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -430,7 +430,6 @@ static void ecore_ilt_get_dynamic_line_range(struct ecore_hwfn *p_hwfn,
 	struct ecore_conn_type_cfg *p_cfg;
 	u32 cxts_per_p;
 
-	/* TBD MK: ILT code should be simplified once PROTO enum is changed */
 
 	*dynamic_line_offset = 0;
 	*dynamic_line_cnt = 0;
@@ -2034,7 +2033,6 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
 	OSAL_MEM_ZERO(&tm_iids, sizeof(tm_iids));
 	ecore_cxt_tm_iids(p_hwfn, &tm_iids);
 
-	/* @@@TBD No pre-scan for now */
 
 	cfg_word = 0;
 	SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_cids);
@@ -2521,11 +2519,6 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 			    enum ecore_cxt_elem_type elem_type,
 			    u32 iid, u8 vf_id)
 {
-	/* TODO
-	 * Check to see if we need to do anything differeny if this is
-	 * called on behalf of VF.
-	 */
-
 	u32 reg_offset, shadow_line, elem_size, hw_p_size, elems_per_p, line;
 	struct ecore_ilt_client_cfg *p_cli;
 	struct ecore_ilt_cli_blk *p_blk;
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index fe61a8b8f..11cecb4c6 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -368,7 +368,6 @@ struct ecore_cxt_mngr {
 	/* Maximal number of L2 steering filters */
 	u32				arfs_count;
 
-	/* TODO - VF arfs filters ? */
 
 	u16				iscsi_task_pages;
 	u16				fcoe_task_pages;
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 93af8c897..08e165b07 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -36,13 +36,6 @@
 #pragma warning(disable : 28123)
 #endif
 
-/* TODO - there's a bug in DCBx re-configuration flows in MF, as the QM
- * registers involved are not split and thus configuration is a race where
- * some of the PFs configuration might be lost.
- * Eventually, this needs to move into a MFW-covered HW-lock as arbitration
- * mechanism as this doesn't cover some cases [E.g., PDA or scenarios where
- * there's more than a single compiled ecore component in system].
- */
 static osal_spinlock_t qm_lock;
 static u32 qm_lock_ref_cnt;
 
@@ -854,10 +847,6 @@ enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev)
 	return p_dev->l2_affin_hint ? ECORE_ENG1 : ECORE_ENG0;
 }
 
-/* TBD -
- * When the relevant definitions are available in reg_addr.h, the SHIFT
- * definitions should be removed, and the MASK definitions should be revised.
- */
 #define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_MASK		0x3
 #define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_SHIFT		0
 #define NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE_MASK		0x3
@@ -1882,7 +1871,6 @@ void ecore_resc_free(struct ecore_dev *p_dev)
 		ecore_dcbx_info_free(p_hwfn);
 		ecore_dbg_user_data_free(p_hwfn);
 		ecore_fw_overlay_mem_free(p_hwfn, &p_hwfn->fw_overlay_mem);
-		/* @@@TBD Flush work-queue ?*/
 
 		/* destroy doorbell recovery mechanism */
 		ecore_db_recovery_teardown(p_hwfn);
@@ -3132,9 +3120,7 @@ static enum _ecore_status_t ecore_lag_create_slave(struct ecore_hwfn *p_hwfn,
 						   u8 master_pfid)
 {
 	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
-	u8 slave_ppfid = 1; /* TODO: Need some sort of resource management function
-			     * to return a free entry
-			     */
+	u8 slave_ppfid = 1;
 	enum _ecore_status_t rc;
 
 	if (!p_ptt)
@@ -3251,7 +3237,6 @@ enum _ecore_status_t ecore_lag_create(struct ecore_dev *dev,
 		return ECORE_INVAL;
 	}
 
-	/* TODO: Check Supported MFW */
 	p_hwfn->lag_info.lag_type = lag_type;
 	p_hwfn->lag_info.link_change_cb = link_change_cb;
 	p_hwfn->lag_info.cxt = cxt;
@@ -3860,10 +3845,7 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev,
 }
 #endif
 
-/* Init run time data for all PFs and their VFs on an engine.
- * TBD - for VFs - Once we have parent PF info for each VF in
- * shmem available as CAU requires knowledge of parent PF for each VF.
- */
+/* Init run time data for all PFs and their VFs on an engine. */
 static void ecore_init_cau_rt_data(struct ecore_dev *p_dev)
 {
 	u32 offset = CAU_REG_SB_VAR_MEMORY_RT_OFFSET;
@@ -3998,9 +3980,6 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	/* @@TBD MichalK - should add VALIDATE_VFID to init tool...
-	 * need to decide with which value, maybe runtime
-	 */
 	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_L2P_VALIDATE_VFID, 0);
 	ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_USE_CLIENTID_IN_TAG, 1);
 
@@ -4762,7 +4741,6 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			     NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET, 1);
 	}
 
-	/* Protocl Configuration  - @@@TBD - should we set 0 otherwise?*/
 	STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_TCP_RT_OFFSET,
 		     (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) ? 1 : 0);
 	STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_FCOE_RT_OFFSET,
@@ -5469,12 +5447,9 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE, 0x0);
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_OPENFLOW, 0x0);
 
-		/* @@@TBD - clean transmission queues (5.b) */
-		/* @@@TBD - clean BTB (5.c) */
 
 		ecore_hw_timers_stop(p_dev, p_hwfn, p_ptt);
 
-		/* @@@TBD - verify DMAE requests are done (8) */
 
 		/* Disable Attention Generation */
 		ecore_int_igu_disable_int(p_hwfn, p_ptt);
@@ -5496,7 +5471,6 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
 					     QM_REG_USG_CNT_PF_TX, 0);
 			ecore_verify_reg_val(p_hwfn, p_ptt,
 					     QM_REG_USG_CNT_PF_OTHER, 0);
-			/* @@@TBD - assert on incorrect xCFC values (10.b) */
 		}
 
 		/* Disable PF in HW blocks */
@@ -5573,10 +5547,7 @@ enum _ecore_status_t ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
 		ecore_wr(p_hwfn, p_ptt,
 			 NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x1);
 
-		/* @@@TBD - clean transmission queues (5.b) */
-		/* @@@TBD - clean BTB (5.c) */
 
-		/* @@@TBD - verify DMAE requests are done (8) */
 
 		ecore_int_igu_init_pure_rt(p_hwfn, p_ptt, false, false);
 		/* Need to wait 1ms to guarantee SBs are cleared */
@@ -6159,7 +6130,6 @@ __ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn, enum ecore_resources res_id,
 		return rc;
 	}
 
-	/* TODO: Add MFW support for GFS profiles resource */
 	if (res_id == ECORE_GFS_PROFILE) {
 		DP_INFO(p_hwfn,
 			"Resource %d [%s]: Applying default values [%d,%d]\n",
@@ -6557,7 +6527,6 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
-	/* Read nvm_cfg1  (Notice this is just offset, and not offsize (TBD) */
 	nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt, nvm_cfg_addr + 4);
 
 	addr = MCP_REG_SCRATCH  + nvm_cfg1_offset +
@@ -8178,11 +8147,6 @@ enum _ecore_status_t ecore_set_queue_coalesce(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	struct ecore_ptt *p_ptt;
 
-	/* TODO - Configuring a single queue's coalescing but
-	 * claiming all queues are abiding same configuration
-	 * for PF and VF both.
-	 */
-
 	if (IS_VF(p_hwfn->p_dev))
 		return ecore_vf_pf_set_coalesce(p_hwfn, rx_coal,
 						tx_coal, p_cid);
@@ -8376,7 +8340,6 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
-	/* TBD - for number of vports greater than 100 */
 	if (num_vports > ECORE_WFQ_UNIT) {
 		DP_ERR(p_hwfn, "Number of vports is greater than %d\n",
 		       ECORE_WFQ_UNIT);
@@ -8487,7 +8450,6 @@ int ecore_configure_vport_wfq(struct ecore_dev *p_dev, u16 vp_id, u32 rate)
 {
 	int i, rc = ECORE_INVAL;
 
-	/* TBD - for multiple hardware functions - that is 100 gig */
 	if (ECORE_IS_CMT(p_dev)) {
 		DP_NOTICE(p_dev, false,
 			  "WFQ configuration is not supported for this device\n");
@@ -8522,7 +8484,6 @@ void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
 {
 	int i;
 
-	/* TBD - for multiple hardware functions - that is 100 gig */
 	if (ECORE_IS_CMT(p_dev)) {
 		DP_ERR(p_dev,
 		       "WFQ configuration is not supported for this device\n");
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 80812d4ce..4533e96a0 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -3049,7 +3049,6 @@ struct sdm_agg_int_comp_params {
 /* 1 - set a bit in aggregated vector, 0 - dont set */
 #define SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE_MASK  0x1
 #define SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE_SHIFT 6
-/* Number of bit in the aggregated vector, 0-279 (TBD) */
 #define SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT_MASK     0x1FF
 #define SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT_SHIFT    7
 };
diff --git a/drivers/net/qede/base/ecore_hsi_init_func.h b/drivers/net/qede/base/ecore_hsi_init_func.h
index 93b24b729..b972388db 100644
--- a/drivers/net/qede/base/ecore_hsi_init_func.h
+++ b/drivers/net/qede/base/ecore_hsi_init_func.h
@@ -14,9 +14,7 @@
 #define NUM_OF_VLAN_PRIORITIES			8
 
 /* Size of CRC8 lookup table */
-#ifndef LINUX_REMOVE
 #define CRC8_TABLE_SIZE					256
-#endif
 
 /*
  * GFS context Descriptor QREG (0)
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index e4f9b5e85..00b496416 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -158,7 +158,6 @@ void ecore_ptt_release(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt)
 {
 	/* This PTT should not be set to pretend if it is being released */
-	/* TODO - add some pretend sanity checks, to make sure pretend isn't set on this ptt */
 
 	OSAL_SPIN_LOCK(&p_hwfn->p_ptt_pool->lock);
 	OSAL_LIST_PUSH_HEAD(&p_ptt->list_entry, &p_hwfn->p_ptt_pool->free_list);
@@ -543,7 +542,4 @@ static void ecore_dmae_opcode(struct ecore_hwfn	*p_hwfn,
 		    p_params->dst_pf_id : p_hwfn->rel_pf_id;
 	SET_FIELD(opcode, DMAE_CMD_DST_PF_ID, dst_pf_id);
 
-	/* DMAE_E4_TODO need to check which value to specify here. */
-	/* SET_FIELD(opcode, DMAE_CMD_C_DST, !b_complete_to_host);*/
-
 	/* Whether to write a completion word to the completion destination:
@@ -747,10 +745,6 @@ ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
 	wait_cnt_limit *= factor;
 #endif
 
-	/* DMAE_E4_TODO : TODO check if we have to call any other function
-	 * other than BARRIER to sync the completion_word since we are not
-	 * using the volatile keyword for this
-	 */
 	OSAL_BARRIER(p_hwfn->p_dev);
 	while (*p_hwfn->dmae_info.p_completion_word != DMAE_COMPLETION_VAL) {
 		OSAL_UDELAY(DMAE_MIN_WAIT_TIME);
@@ -842,7 +836,6 @@ static enum _ecore_status_t ecore_dmae_execute_sub_operation(struct ecore_hwfn *
 	ecore_status = ecore_dmae_operation_wait(p_hwfn);
 
 #ifndef __EXTRACT__LINUX__
-	/* TODO - is it true ? */
 	if (src_type == ECORE_DMAE_ADDRESS_HOST_VIRT ||
 	    src_type == ECORE_DMAE_ADDRESS_HOST_PHYS)
 		OSAL_DMA_SYNC(p_hwfn->p_dev,
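
The completion wait in ecore_dmae_operation_wait() polls a DMA-written
completion word through OSAL_BARRIER() instead of a volatile qualifier;
a compiler barrier before each re-read forces a fresh load, which is
why the old DMAE_E4_TODO note could simply be dropped. A minimal sketch
of the pattern (DMAE_COMPLETION_VAL mirrors the driver's define; the
poll budget here is illustrative):

#include <stdint.h>
#include <unistd.h>

#define DMAE_COMPLETION_VAL 0xD1AE	/* value written on completion */
#define DMAE_MIN_WAIT_TIME  2		/* us between polls */
#define DMAE_MAX_POLLS      10000

/* Returns 0 on completion, -1 on timeout. */
static int dmae_wait_completion(const uint32_t *p_completion_word)
{
	int cnt = DMAE_MAX_POLLS;

	/* Barrier: force a fresh load of *p_completion_word each pass */
	__asm__ __volatile__("" : : : "memory");
	while (*p_completion_word != DMAE_COMPLETION_VAL) {
		if (--cnt == 0)
			return -1;
		usleep(DMAE_MIN_WAIT_TIME);
		__asm__ __volatile__("" : : : "memory");
	}
	return 0;
}
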
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 575ba8070..ba23e2c4e 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -1873,9 +1873,7 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
 	return offset;
 }
 
-#ifndef LINUX_REMOVE
 #define CRC8_INIT_VALUE 0xFF
-#endif
 static u8 cdu_crc8_table[CRC8_TABLE_SIZE];
 
 /* Calculate and return CDU validation byte per connection type/region/cid */
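
For context, cdu_crc8_table above is a standard 256-entry, MSB-first
CRC-8 lookup table seeded with CRC8_INIT_VALUE. A self-contained sketch
of how such a table is populated and used; the polynomial 0x07 is an
assumption carried over from the Linux qed counterpart, not something
this patch states:

#include <stddef.h>
#include <stdint.h>

#define CRC8_TABLE_SIZE 256
#define CRC8_INIT_VALUE 0xFF

static void crc8_populate_msb(uint8_t table[CRC8_TABLE_SIZE], uint8_t poly)
{
	int i, bit;

	for (i = 0; i < CRC8_TABLE_SIZE; i++) {
		uint8_t crc = (uint8_t)i;

		/* Shift out each bit, XORing in the polynomial on carry */
		for (bit = 0; bit < 8; bit++)
			crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ poly)
					   : (uint8_t)(crc << 1);
		table[i] = crc;
	}
}

static uint8_t crc8(const uint8_t table[CRC8_TABLE_SIZE],
		    const uint8_t *data, size_t len, uint8_t crc)
{
	while (len--)
		crc = table[crc ^ *data++];
	return crc;
}
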
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e56d5dba7..4d3d0ac6e 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -195,9 +195,6 @@ static enum _ecore_status_t ecore_pswhst_attn_cb(struct ecore_hwfn *p_hwfn)
 			data);
 	}
 
-	/* TODO - We know 'some' of these are legal due to virtualization,
-	 * but is it true for all of them?
-	 */
 	return ECORE_SUCCESS;
 }
 
@@ -1746,7 +1743,7 @@ static void _ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
 	u32 sb_offset, pi_offset;
 
 	if (IS_VF(p_hwfn->p_dev))
-		return;/* @@@TBD MichalK- VF CAU... */
+		return;
 
 	sb_offset = igu_sb_id * (ECORE_IS_E4(p_hwfn->p_dev) ? PIS_PER_SB_E4
 							    : PIS_PER_SB_E5);
@@ -2401,7 +2397,6 @@ void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn,
 	u16 igu_sb_id = 0;
 	u32 val = 0;
 
-	/* @@@TBD MichalK temporary... should be moved to init-tool... */
 	val = ecore_rd(p_hwfn, p_ptt, IGU_REG_BLOCK_CONFIGURATION);
 	val |= IGU_REG_BLOCK_CONFIGURATION_VF_CLEANUP_EN;
 	val &= ~IGU_REG_BLOCK_CONFIGURATION_PXP_TPH_INTERFACE_EN;
@@ -2459,7 +2454,6 @@ int ecore_int_igu_reset_cam(struct ecore_hwfn *p_hwfn,
 			p_info->usage.cnt = RESC_NUM(p_hwfn, ECORE_SB) - 1;
 		}
 
-		/* TODO - how do we learn about VF SBs from MFW? */
 		if (IS_PF_SRIOV(p_hwfn)) {
 			u16 vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
 
@@ -2586,7 +2580,6 @@ int ecore_int_igu_reset_cam_default(struct ecore_hwfn *p_hwfn,
 	p_cnt->orig = 0;
 	p_cnt->iov_orig = 0;
 
-	/* TODO - we probably need to re-configure the CAU as well... */
 	return ecore_int_igu_reset_cam(p_hwfn, p_ptt);
 }
 
@@ -2789,10 +2782,6 @@ ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		p_info->usage.iov_cnt++;
 		p_info->usage.free_cnt_iov++;
 
-		/* TODO - if SBs aren't really the limiting factor,
-		 * then it might not be accurate [in the since that
-		 * we might not need decrement the feature].
-		 */
 		p_hwfn->hw_info.feat_num[ECORE_PF_L2_QUE]--;
 		p_hwfn->hw_info.feat_num[ECORE_VF_L2_QUE]++;
 	} else {
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 6d72d5ced..24bbaab75 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -92,7 +92,6 @@ u16 ecore_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
 struct ecore_igu_block *
 ecore_get_igu_free_sb(struct ecore_hwfn *p_hwfn, bool b_is_pf);
 
-/* TODO Names of function may change... */
 void ecore_int_igu_init_pure_rt(struct ecore_hwfn	*p_hwfn,
 				 struct ecore_ptt	*p_ptt,
 				 bool			b_set,
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 329b30b7b..422de71c7 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -35,9 +35,7 @@
 #define IS_PF_SRIOV(p_hwfn)	(0)
 #endif
 #define IS_PF_SRIOV_ALLOC(p_hwfn)	(!!((p_hwfn)->pf_iov_info))
-#define IS_PF_PDA(p_hwfn)	0 /* @@TBD Michalk */
 
-/* @@@ TBD MichalK - what should this number be*/
 #define ECORE_MAX_QUEUE_VF_CHAINS_PER_PF 16
 #define ECORE_MAX_CNQ_VF_CHAINS_PER_PF 16
 #define ECORE_MAX_VF_CHAINS_PER_PF	\
@@ -108,7 +106,6 @@ struct ecore_iov_vf_init_params {
 	/* Number of requested Queues; Currently, don't support different
 	 * number of Rx/Tx queues.
 	 */
-	/* TODO - remove this limitation */
 	u8 num_queues;
 	u8 num_cnqs;
 
@@ -210,7 +207,6 @@ enum _ecore_status_t ecore_iov_pci_enable_prolog(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_iov_pci_disable_epilog(struct ecore_hwfn *p_hwfn,
 						  struct ecore_ptt *p_ptt);
 
-#ifndef LINUX_REMOVE
 /**
  * @brief mark/clear all VFs before/after an incoming PCIe sriov
  *        disable.
@@ -357,7 +353,6 @@ void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
  */
 bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn,
 				 u16 rel_vf_id);
-#endif
 
 /**
  * @brief Check if given VF ID @vfid is valid
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 646cf07cb..ac3d0f44d 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -133,7 +133,6 @@ void ecore_l2_free(struct ecore_hwfn *p_hwfn)
 	p_hwfn->p_l2_info = OSAL_NULL;
 }
 
-/* TODO - we'll need locking around these... */
 static bool ecore_eth_queue_qid_usage_add(struct ecore_hwfn *p_hwfn,
 					  struct ecore_queue_cid *p_cid)
 {
@@ -1268,11 +1267,9 @@ ecore_eth_pf_tx_queue_start(struct ecore_hwfn *p_hwfn,
 	enum _ecore_status_t rc;
 	u16 pq_id;
 
-	/* TODO - set tc in the pq_params for multi-cos.
-	 * If pacing is enabled then select queue according to
-	 * rate limiter availability otherwise select queue based
-	 * on multi cos.
-	 */
+	/* If pacing is enabled, select the queue according to rate limiter
+	 * availability; otherwise select it based on multi-cos.
+	 */
 	if (IS_ECORE_PACING(p_hwfn))
 		pq_id = ecore_get_cm_pq_idx_rl(p_hwfn, p_cid->rel.queue_id);
 	else
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2e945556c..eda95dba7 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -263,14 +263,9 @@ enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
 					      OFFSETOF(struct public_mfw_mb,
 						       sup_msgs));
 
-	/* @@@TBD:
-	 * The driver can notify that there was an MCP reset, and might read the
-	 * SHMEM values before the MFW has completed initializing them.
-	 * As a temporary solution, the "sup_msgs" field in the MFW mailbox is
-	 * used as a data ready indication.
-	 * This should be replaced with an actual indication when it is provided
-	 * by the MFW.
-	 */
+	/* The driver might read the SHMEM values before the MFW has completed
+	 * initializing them after an MCP reset; the "sup_msgs" field in the
+	 * MFW mailbox is used as a data-ready indication.
+	 */
 	if (!p_info->recovery_mode) {
 		while (!p_info->mfw_mb_length && --cnt) {
 			OSAL_MSLEEP(msec);
@@ -1551,12 +1543,6 @@ static void ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
 
 	p_info = &p_hwfn->mcp_info->func_info;
 
-	/* TODO - bandwidth min/max should have valid values of 1-100,
-	 * as well as some indication that the feature is disabled.
-	 * Until MFW/qlediag enforce those limitations, Assume THERE IS ALWAYS
-	 * limit and correct value to min `1' and max `100' if limit isn't in
-	 * range.
-	 */
 	p_info->bandwidth_min = GET_MFW_FIELD(p_shmem_info->config,
 					      FUNC_MF_CFG_MIN_BW);
 	if (p_info->bandwidth_min < 1 || p_info->bandwidth_min > 100) {
@@ -1918,7 +1904,6 @@ u32 ecore_get_process_kill_counter(struct ecore_hwfn *p_hwfn,
 {
 	u32 path_offsize_addr, path_offsize, path_addr, proc_kill_cnt;
 
-	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
@@ -2928,7 +2913,6 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
 {
 	*p_media_type = MEDIA_UNSPECIFIED;
 
-	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
@@ -2968,7 +2952,6 @@ enum _ecore_status_t ecore_mcp_get_transceiver_data(struct ecore_hwfn *p_hwfn,
 	*p_transceiver_type = ETH_TRANSCEIVER_TYPE_NONE;
 	*p_transceiver_state = ETH_TRANSCEIVER_STATE_UPDATING;
 
-	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
@@ -3144,7 +3127,6 @@ enum _ecore_status_t ecore_mcp_get_board_config(struct ecore_hwfn *p_hwfn,
 {
 	u32 nvm_cfg_addr, nvm_cfg1_offset, port_cfg_addr;
 
-	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
@@ -3306,11 +3288,9 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
 		OSAL_MEMCPY(p_hwfn->p_dev->wol_mac, info->mac, ECORE_ETH_ALEN);
 
 	} else {
-		/* TODO - are there protocols for which there's no MAC? */
 		DP_NOTICE(p_hwfn, false, "MAC is 0 in shmem\n");
 	}
 
-	/* TODO - are these calculations true for BE machine? */
 	info->wwn_port = (u64)shmem_info.fcoe_wwn_port_name_lower |
 			 (((u64)shmem_info.fcoe_wwn_port_name_upper) << 32);
 	info->wwn_node = (u64)shmem_info.fcoe_wwn_node_name_lower |
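
On the removed "true for BE machine?" question near the WWN assembly:
building a u64 from two u32 halves with shift-and-or operates on
values, not bytes, so it produces the same result on little- and
big-endian hosts, provided the halves were already read in host order.
A tiny standalone check with made-up values:

#include <assert.h>
#include <stdint.h>

static uint64_t wwn_from_halves(uint32_t lower, uint32_t upper)
{
	/* Value-based, hence endian-independent */
	return (uint64_t)lower | ((uint64_t)upper << 32);
}

int main(void)
{
	assert(wwn_from_halves(0x33445566u, 0x50014380u) ==
	       0x5001438033445566ULL);
	return 0;
}
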
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 81d3dc04a..60e2b8307 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -15,7 +15,6 @@
 
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
- * same pf_num may be used by two different hwfn
- * TODO - this shouldn't really be in .h file, but until all fields
- * required during hw-init will be placed in their correct place in shmem
- * we need it in ecore_dev.c [for readin the nvram reflection in shmem].
+ * same pf_num may be used by two different hwfn. Until all fields required
+ * during hw-init are placed in their correct place in shmem, we need it in
+ * ecore_dev.c [for reading the nvram reflection in shmem].
  */
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 800a23fb1..3c5d8b2aa 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -787,7 +787,6 @@ enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt);
 
-#ifndef LINUX_REMOVE
 /**
  * @brief - return the mcp function info of the hw function
  *
@@ -797,9 +796,7 @@ enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
  */
 const struct ecore_mcp_function_info
 *ecore_mcp_get_function_info(struct ecore_hwfn *p_hwfn);
-#endif
 
-#ifndef LINUX_REMOVE
 /**
  * @brief - count number of function with a matching personality on engine.
  *
@@ -813,7 +810,6 @@ const struct ecore_mcp_function_info
 int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt,
 				  u32 personalities);
-#endif
 
 /**
  * @brief Get the flash size value
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 3a507843a..3ba6d3f19 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -413,9 +413,6 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 		p_ramrod->base_vf_id = (u8)p_iov->first_vf_in_pf;
 		p_ramrod->num_vfs = (u8)p_iov->total_vfs;
 	}
-	/* @@@TBD - update also the "ROCE_VER_KEY" entries when the FW RoCE HSI
-	 * version is available.
-	 */
 	p_ramrod->hsi_fp_ver.major_ver_arr[ETH_VER_KEY] = ETH_HSI_VER_MAJOR;
 	p_ramrod->hsi_fp_ver.minor_ver_arr[ETH_VER_KEY] = ETH_HSI_VER_MINOR;
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index dda36c995..08ecb7979 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -262,7 +262,6 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 			&p_cxt_e5->xstorm_st_context.consolid_base_addr;
 	}
 
-	/* @@@TBD we zero the context until we have ilt_reset implemented. */
 	OSAL_MEM_ZERO(p_cxt, core_conn_context_size);
 
 	SET_FIELD(*p_flags10, XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
@@ -858,10 +857,6 @@ static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn,
 	struct ecore_spq	*p_spq = p_hwfn->p_spq;
 	enum _ecore_status_t	rc;
 
-	/* TODO - implementation might be wasteful; will always keep room
-	 * for an additional high priority ramrod (even if one is already
-	 * pending FW)
-	 */
 	while (ecore_chain_get_elem_left(&p_spq->chain) > keep_reserve &&
 	       !OSAL_LIST_IS_EMPTY(head)) {
 		struct ecore_spq_entry  *p_ent =
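
The loop condition in ecore_spq_post_list() -- posting only while
ecore_chain_get_elem_left() exceeds keep_reserve -- is exactly what the
removed TODO described: room is always held back for a high-priority
ramrod. A rough sketch of that policy on a generic producer/consumer
ring (names are illustrative, not the driver's API):

#include <errno.h>

struct ring { unsigned int prod, cons, size; };

static unsigned int elems_left(const struct ring *r)
{
	return r->size - (r->prod - r->cons);
}

static int post_pending(struct ring *r, unsigned int keep_reserve,
			unsigned int *pending)
{
	/* Stop while 'keep_reserve' free elements still remain */
	while (elems_left(r) > keep_reserve && *pending) {
		r->prod++;	/* consume one ring element */
		(*pending)--;	/* one fewer queued request */
	}
	return *pending ? -EAGAIN : 0;
}
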
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 0d768c525..8d62d52a8 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -432,7 +432,6 @@ ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn, int vfid,
 	if (!p_vf)
 		return ECORE_INVAL;
 
-	/* TODO - check VF is in a state where it can accept message */
 	if (!p_vf->vf_bulletin)
 		return ECORE_INVAL;
 
@@ -475,9 +474,6 @@ static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev)
 
 	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + RTE_PCI_SRIOV_NUM_VF, &iov->num_vfs);
 	if (iov->num_vfs) {
-		/* @@@TODO - in future we might want to add an OSAL here to
-		 * allow each OS to decide on its own how to act.
-		 */
 		DP_VERBOSE(p_dev, ECORE_MSG_IOV,
 			   "Number of VFs are already set to non-zero value. Ignoring PCI configuration value\n");
 		iov->num_vfs = 0;
@@ -571,7 +567,6 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
 		vf->abs_vf_id = idx + p_iov->first_vf_in_pf;
 		concrete = ecore_vfid_to_concrete(p_hwfn, vf->abs_vf_id);
 		vf->concrete_fid = concrete;
-		/* TODO - need to devise a better way of getting opaque */
 		vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) |
 				 (vf->abs_vf_id << 8);
 
@@ -770,7 +765,6 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn)
 	}
 
 	/* Allocate a new struct for IOV information */
-	/* TODO - can change to VALLOC when its available */
 	p_dev->p_iov_info = OSAL_ZALLOC(p_dev, GFP_KERNEL,
 					sizeof(*p_dev->p_iov_info));
 	if (!p_dev->p_iov_info) {
@@ -878,8 +872,6 @@ LNX_STATIC void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev,
 		ecore_iov_set_vf_to_disable(p_dev, i, to_disable);
 }
 
-#ifndef LINUX_REMOVE
-/* @@@TBD Consider taking outside of ecore... */
 enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
 					  u16		    vf_id,
 					  void		    *ctx)
@@ -897,7 +889,6 @@ enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
 	}
 	return rc;
 }
-#endif
 
 static void ecore_iov_vf_pglue_clear_err(struct ecore_hwfn      *p_hwfn,
 					 struct ecore_ptt	*p_ptt,
@@ -1268,7 +1259,6 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 		return ECORE_INVAL;
 	}
 
-	/* TODO - remove this once we get confidence of change */
 	if (!p_params->vport_id) {
 		DP_NOTICE(p_hwfn, false,
 			  "VF[%d] - Unlikely that VF uses vport0. Forgotten?\n",
@@ -1522,10 +1512,4 @@ static void ecore_iov_lock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
 					 u16 tlv)
 {
-	/* lock the channel */
-	/* mutex_lock(&vf->op_mutex); @@@TBD MichalK - add lock... */
-
-	/* record the locking op */
-	/* vf->op_current = tlv; @@@TBD MichalK */
-
 	/* log the lock */
 	if (ecore_iov_tlv_supported(tlv))
@@ -1547,11 +1535,7 @@ static void ecore_iov_unlock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
 	 * "lock mismatch: expected %s found %s",
 	 * channel_tlvs_string[expected_tlv],
 	 * channel_tlvs_string[vf->op_current]);
-	 * @@@TBD MichalK
 	 */
 
-	/* lock the channel */
-	/* mutex_unlock(&vf->op_mutex); @@@TBD MichalK add the lock */
-
 	/* log the unlock */
 	if (ecore_iov_tlv_supported(expected_tlv))
@@ -1896,9 +1882,6 @@ static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
 
 	for (i = 0; i < p_resp->num_rxqs; i++) {
 		p_resp->hw_sbs[i].hw_sb_id = p_vf->igu_sbs[i];
-		/* TODO - what's this sb_qid field? Is it deprecated?
-		 * or is there an ecore_client that looks at this?
-		 */
 		p_resp->hw_sbs[i].sb_qid = 0;
 	}
 
@@ -2001,10 +1984,6 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	pfdev_info->major_fp_hsi = ETH_HSI_VER_MAJOR;
 	pfdev_info->minor_fp_hsi = ETH_HSI_VER_MINOR;
 
-	/* TODO - not doing anything is bad since we'll assert, but this isn't
-	 * necessarily the right behavior - perhaps we should have allowed some
-	 * versatility here.
-	 */
 	if (vf->state != VF_FREE &&
 	    vf->state != VF_STOPPED) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -2068,7 +2047,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 
 	/* fill in pfdev info */
 	pfdev_info->chip_num = p_hwfn->p_dev->chip_num;
-	pfdev_info->db_size = 0; /* @@@ TBD MichalK Vf Doorbells */
+	pfdev_info->db_size = 0;
 	pfdev_info->indices_per_sb = ECORE_IS_E4(p_hwfn->p_dev) ? PIS_PER_SB_E4
 								: PIS_PER_SB_E5;
 
@@ -2248,7 +2227,5 @@ ecore_iov_reconfigure_unicast_shadow(struct ecore_hwfn *p_hwfn,
 {
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	/* TODO - what about MACs? */
-
 	if ((events & (1 << VLAN_ADDR_FORCED)) &&
 	    !(p_vf->configured_features & (1 << VLAN_ADDR_FORCED)))
@@ -3906,10 +3884,6 @@ static enum _ecore_status_t ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_
 	OSAL_MEM_ZERO(empty_mac, ECORE_ETH_ALEN);
 
 	/* If we're in forced-mode, we don't allow any change */
-	/* TODO - this would change if we were ever to implement logic for
-	 * removing a forced MAC altogether [in which case, like for vlans,
-	 * we should be able to re-trace previous configuration.
-	 */
 	if (p_vf->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED))
 		return ECORE_SUCCESS;
 
@@ -4012,7 +3986,6 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
 	params.opcode = (enum ecore_filter_opcode)req->opcode;
 	params.type = (enum ecore_filter_ucast_type)req->type;
 
-	/* @@@TBD - We might need logic on HV side in determining this */
 	params.is_rx_filter = 1;
 	params.is_tx_filter = 1;
 	params.vport_to_remove_from = vf->vport_id;
@@ -4319,9 +4292,6 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
 		vf->rx_coal = rx_coal;
 	}
 
-	/* TODO - in future, it might be possible to pass this in a per-cid
-	 * granularity. For now, do this for all Tx queues.
-	 */
 	if (tx_coal) {
 		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
 
@@ -4406,9 +4376,6 @@ ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
 		vf->rx_coal = rx_coal;
 	}
 
-	/* TODO - in future, it might be possible to pass this in a per-cid
-	 * granularity. For now, do this for all Tx queues.
-	 */
 	if (tx_coal) {
 		struct ecore_vf_queue *p_queue = &vf->vf_queues[qid];
 
@@ -4546,7 +4513,6 @@ static enum _ecore_status_t ecore_iov_vf_flr_poll(struct ecore_hwfn *p_hwfn,
 {
 	enum _ecore_status_t rc;
 
-	/* TODO - add SRC polling and Tm task once we add storage/iwarp IOV */
 	rc = ecore_iov_timers_stop(p_hwfn, p_vf, p_ptt);
 	if (rc)
 		return rc;
@@ -4580,7 +4546,5 @@
 		u16 vfid = p_vf->abs_vf_id;
 		int i;
 
-		/* TODO - should we lock channel? */
-
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "VF[%d] - Handling FLR\n", vfid);
@@ -4591,14 +4556,12 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 		if (!p_vf->b_init)
 			goto cleanup;
 
-		/* TODO - what to do in case of failure? */
 		rc = ecore_iov_vf_flr_poll(p_hwfn, p_vf, p_ptt);
 		if (rc != ECORE_SUCCESS)
 			goto cleanup;
 
 		rc = ecore_final_cleanup(p_hwfn, p_ptt, vfid, true);
 		if (rc) {
-			/* TODO - what's now? What a mess.... */
 			DP_ERR(p_hwfn, "Failed handle FLR of VF[%d]\n",
 			       vfid);
 			return rc;
@@ -4618,7 +4581,6 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 
 		rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, p_vf);
 		if (rc) {
-			/* TODO - again, a mess... */
 			DP_ERR(p_hwfn, "Failed to re-enable VF[%d] access\n",
 			       vfid);
 			return rc;
@@ -4894,9 +4856,8 @@ LNX_STATIC void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
 		 * next loaded driver to start again.
 		 */
 		if (mbx->first_tlv.tl.type == CHANNEL_TLV_RELEASE) {
-			/* TODO - initiate FLR, remove malicious indication */
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
-				   "VF [%02x] - considered malicious, but wanted to RELEASE. TODO\n",
+				   "VF [%02x] - considered malicious, but wanted to RELEASE.\n",
 				   p_vf->abs_vf_id);
 		} else {
 			DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -5191,7 +5152,6 @@ ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn, u8 *mac, int vfid)
 	return ECORE_SUCCESS;
 }
 
-#ifndef LINUX_REMOVE
 enum _ecore_status_t
 ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
 					       bool b_untagged_only,
@@ -5248,7 +5208,6 @@ void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn, int vfid,
 
 	*opaque_fid = vf_info->opaque_fid;
 }
-#endif
 
 LNX_STATIC void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
 						   u16 pvid, int vfid)
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 8d8a12b92..3fada4ba8 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -192,7 +192,6 @@ struct ecore_vf_info {
 
 	u16			igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
 
-	/* TODO - Only windows is using it - should be removed */
 	u8 was_malicious;
 	u8 num_active_rxqs;
 	void *ctx;
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 9b99e9281..8879b6722 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -113,7 +113,6 @@ ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
 					   p_hwfn->vf_iov_info->pf2vf_reply,
 					   sizeof(union vfpf_tlvs),
 					   resp_size);
-		/* TODO - no prints about message ? */
 		return rc;
 	}
 #endif
@@ -335,7 +334,6 @@ enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_ACQUIRE, sizeof(*req));
 	p_resc = &req->resc_request;
 
-	/* @@@ TBD: PF may not be ready bnx2x_get_vf_id... */
 	req->vfdev_info.opaque_fid = p_hwfn->hw_info.opaque_fid;
 
 	p_resc->num_rxqs = ECORE_MAX_QUEUE_VF_CHAINS_PER_PF;
@@ -1143,8 +1141,6 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 
 	/* Starting with CHANNEL_TLV_QID and the need for additional queue
 	 * information, this API stopped supporting multiple rxqs.
-	 * TODO - remove this and change the API to accept a single queue-cid
-	 * in a follow-up patch.
 	 */
 	if (num_rxqs != 1) {
 		DP_NOTICE(p_hwfn, true,
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 098cbe6b3..6da949fe8 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -185,9 +185,6 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
 					  struct ecore_queue_cid *p_cid);
 
-/* TODO - fix all the !SRIOV prototypes */
-
-#ifndef LINUX_REMOVE
 /**
  * @brief VF - update the RX queue by sending a message to the
  *        PF
@@ -205,7 +203,6 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
 					     u8 num_rxqs,
 					     u8 comp_cqe_flg,
 					     u8 comp_event_flg);
-#endif
 
 /**
  * @brief VF - send a vport update command
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index b664dafbd..5f638aede 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -150,7 +150,6 @@ bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac);
  */
 void ecore_vf_init_admin_mac(struct ecore_hwfn *p_hwfn, u8 *p_mac);
 
-#ifndef LINUX_REMOVE
 /**
  * @brief Copy forced MAC address from bulletin board
  *
@@ -192,7 +191,5 @@ bool ecore_vf_get_pre_fp_hsi(struct ecore_hwfn *p_hwfn);
  */
 void ecore_vf_db_recovery_execute(struct ecore_hwfn *p_hwfn);
 
-#endif
-
 /**
  * @brief Set firmware version information in dev_info from VFs acquire response tlv
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index e03db7c2b..45a0ae429 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -7,8 +7,8 @@
 #ifndef __ECORE_VF_PF_IF_H__
 #define __ECORE_VF_PF_IF_H__
 
-#define T_ETH_INDIRECTION_TABLE_SIZE 128 /* @@@ TBD MichalK this should be HSI? */
-#define T_ETH_RSS_KEY_SIZE 10 /* @@@ TBD this should be HSI? */
+#define T_ETH_INDIRECTION_TABLE_SIZE 128
+#define T_ETH_RSS_KEY_SIZE 10
 
 /***********************************************
  *
@@ -94,20 +94,18 @@ struct vfpf_acquire_tlv {
 	struct vfpf_first_tlv first_tlv;
 
 	struct vf_pf_vfdev_info {
-#ifndef LINUX_REMOVE
 	/* First bit was used on 8.7.x and 8.8.x versions, which had different
 	 * FWs used but with the same faspath HSI. As this was prior to the
 	 * fastpath versioning, wanted to have ability to override fw matching
 	 * and allow them to interact.
 	 */
-#endif
 #define VFPF_ACQUIRE_CAP_PRE_FP_HSI	(1 << 0) /* VF pre-FP hsi version */
 #define VFPF_ACQUIRE_CAP_100G		(1 << 1) /* VF can support 100g */
 
 	/* A requirement for supporting multi-Tx queues on a single queue-zone,
 	 * VF would pass qids as additional information whenever passing queue
 	 * references.
-	 * TODO - due to the CID limitations in Bar0, VFs currently don't pass
-	 * this, and use the legacy CID scheme.
+	 * Due to the CID limitations in Bar0, VFs currently don't pass this,
+	 * and use the legacy CID scheme.
 	 */
 #define VFPF_ACQUIRE_CAP_QUEUE_QIDS	(1 << 2)
@@ -203,11 +200,9 @@ struct pfvf_acquire_resp_tlv {
  * To overcome this, PFs now indicate that they're past that point and the new
  * VFs would fail probe on the older PFs that fail to do so.
  */
-#ifndef LINUX_REMOVE
 /* Said bug was in quest/serpens; Can't be certain no official release included
  * the bug since the fix arrived very late in the programs.
  */
-#endif
 #define PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE	(1 << 2)
 
 	/* PF expects queues to be received with additional qids */
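
As a reminder of the units involved: T_ETH_RSS_KEY_SIZE is counted in
32-bit words (a 40-byte key) and T_ETH_INDIRECTION_TABLE_SIZE is 128
entries. A hypothetical consumer of these constants might spread its Rx
queues over the indirection table round-robin, along these lines
(sketch only, assumes nb_queues > 0):

#include <stdint.h>

#define T_ETH_INDIRECTION_TABLE_SIZE 128
#define T_ETH_RSS_KEY_SIZE 10	/* in u32 words, i.e. a 40-byte key */

static void fill_ind_table(uint16_t table[T_ETH_INDIRECTION_TABLE_SIZE],
			   uint16_t nb_queues)
{
	int i;

	/* Round-robin spread of the configured Rx queues */
	for (i = 0; i < T_ETH_INDIRECTION_TABLE_SIZE; i++)
		table[i] = i % nb_queues;
}
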
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 437482422..aace40dc5 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -367,7 +367,7 @@ struct lldp_config_params_s {
 
 struct lldp_status_params_s {
 	u32 prefix_seq_num;
-	u32 status; /* TBD */
+	u32 status;
 	/* Holds remote Chassis ID TLV header, subtype and 9B of payload. */
 	u32 peer_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
 	/* Holds remote Port ID TLV header, subtype and 9B of payload. */
@@ -605,7 +604,7 @@ enum _attribute_commands_e {
 struct public_global {
 	u32 max_path;	/* 32bit is wasty, but this will be used often */
 	u32 max_ports;	/* (Global) 32bit is wasty, but this will be used often */
-#define MODE_1P	1		/* TBD - NEED TO THINK OF A BETTER NAME */
+#define MODE_1P	1
 #define MODE_2P	2
 #define MODE_3P	3
 #define MODE_4P	4
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 8b151e907..93c39c5b9 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1193,7 +1193,7 @@ static int qede_dev_stop(struct rte_eth_dev *eth_dev)
 	qede_stop_queues(eth_dev);
 
 	/* Disable traffic */
-	ecore_hw_stop_fastpath(edev); /* TBD - loop */
+	ecore_hw_stop_fastpath(edev);
 
 	DP_INFO(edev, "Device is stopped\n");
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 4f99ab8b7..ae32f1e86 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -433,7 +433,7 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
 
 	memset(info, 0, sizeof(*info));
 
-	info->num_tc = 1 /* @@@TBD aelior MULTI_COS */;
+	info->num_tc = 1;
 
 	if (IS_PF(edev)) {
 		int max_vf_vlan_filters = 0;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index b6ff59457..ebc010d2d 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1395,7 +1395,6 @@ qede_rx_process_tpa_end_cqe(struct qede_dev *qdev,
 					     cqe->len_list[0]);
 	/* Update total length and frags based on end TPA */
 	rx_mb = rxq->tpa_info[cqe->tpa_agg_index].tpa_head;
-	/* TODO:  Add Sanity Checks */
 	rx_mb->nb_segs = cqe->num_of_bds;
 	rx_mb->pkt_len = cqe->total_packet_len;
 
@@ -2172,7 +2171,6 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
 				rte_errno = EINVAL;
 				break;
 			}
-			/* TBD: confirm its ~9700B for both ? */
 			if (m->tso_segsz > ETH_TX_MAX_NON_LSO_PKT_LEN) {
 				rte_errno = EINVAL;
 				break;
@@ -2529,9 +2527,6 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 							1 << ETH_TX_DATA_2ND_BD_L4_UDP_SHIFT;
 					}
 
-					/* TODO other pseudo checksum modes are
-					 * not supported
-					 */
 					bd2_bf1 |=
 					ETH_L4_PSEUDO_CSUM_CORRECT_LENGTH <<
 					ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_SHIFT;
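
For context on the Tx-prepare path touched above: qede_xmit_prep_pkts()
validates each mbuf and stops at the first offender, leaving the error
code in rte_errno. A condensed sketch of the TSO segment-size check;
the 9700-byte cap stands in for ETH_TX_MAX_NON_LSO_PKT_LEN and the
helper name is made up (flag names match the DPDK API of this era):

#include <errno.h>
#include <rte_errno.h>
#include <rte_mbuf.h>

#define MAX_NON_LSO_PKT_LEN 9700	/* assumed device limit */

static uint16_t prep_pkts(struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	uint16_t i;

	for (i = 0; i < nb_pkts; i++) {
		struct rte_mbuf *m = tx_pkts[i];

		/* Reject TSO whose segment size exceeds the largest
		 * frame the device can send without LSO.
		 */
		if ((m->ol_flags & PKT_TX_TCP_SEG) &&
		    m->tso_segsz > MAX_NON_LSO_PKT_LEN) {
			rte_errno = EINVAL;
			break;	/* caller sends only the first i pkts */
		}
	}
	return i;
}
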
diff --git a/drivers/net/qede/qede_sriov.c b/drivers/net/qede/qede_sriov.c
index 0b99a8d6f..9fa227cc4 100644
--- a/drivers/net/qede/qede_sriov.c
+++ b/drivers/net/qede/qede_sriov.c
@@ -138,10 +138,6 @@ static void qed_handle_bulletin_post(struct ecore_hwfn *hwfn)
 		return;
 	}
 
-	/* TODO - at the moment update bulletin board of all VFs.
-	 * if this proves to costly, we can mark VFs that need their
-	 * bulletins updated.
-	 */
 	ecore_for_each_vf(hwfn, i)
 		ecore_iov_post_vf_bulletin(hwfn, i, ptt);
 
-- 
2.18.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW
  2021-02-19 10:14 [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
                   ` (5 preceding siblings ...)
  2021-02-19 10:14 ` [dpdk-dev] [PATCH 7/7] net/qede/base: clean unnecessary ifdef and comments Rasesh Mody
@ 2021-02-19 12:00 ` Rasesh Mody
  6 siblings, 0 replies; 8+ messages in thread
From: Rasesh Mody @ 2021-02-19 12:00 UTC (permalink / raw)
  To: Rasesh Mody, Jerin Jacob Kollanukkaran, ferruh.yigit
  Cc: dev, GR-Everest-DPDK-Dev, Igor Russkikh

> From: Rasesh Mody <rmody@marvell.com>
> Sent: Friday, February 19, 2021 3:44 PM
> 
> Hi,
> 
> This patch series adds support for new HW while modifying existing driver to
> continue supporting previous HWs.
> Highlights of changes:
>  - Registers, HW specific and initialization updates for new HW
>  - FW upgrade
>  - Base driver upgrade, other optimizations and cleanup
> 
> The new 50xxx family of Marvell QLogic fastlinq adapters will bring in support
> for higher speeds, will increase max PPS rates significantly.
> This family will eventually support flexible flow steering and various in-device
> switching modes.
> 
> At the same time, that’s the same architecture and design, as with previous
> QEDE driver. Thus, a lot of fast path and slow path code is expected to be the
> same.
> 
> Please note for checkpatch 100 character max_line_length was used.
> 
> Thanks,
> Rasesh
> 
> Rasesh Mody (7):
>   net/qede/base: update and add register definitions
>   net/qede/base: changes for HSI to support new HW
>   net/qede/base: add OS abstracted changes
>   net/qede/base: update base driver to 8.62.4.0
>   net/qede: changes for DMA page chain allocation and free
>   net/qede: add support for new HW
>   net/qede/base: clean unnecessary ifdef and comments
> 
>  drivers/net/qede/base/bcm_osal.c              |      1 -
>  drivers/net/qede/base/bcm_osal.h              |     42 +-
>  drivers/net/qede/base/common_hsi.h            |   1752 +-
>  drivers/net/qede/base/ecore.h                 |    575 +-
>  drivers/net/qede/base/ecore_attn_values.h     |      3 +-
>  drivers/net/qede/base/ecore_chain.h           |    242 +-
>  drivers/net/qede/base/ecore_cxt.c             |   1234 +-
>  drivers/net/qede/base/ecore_cxt.h             |    149 +-
>  drivers/net/qede/base/ecore_cxt_api.h         |     31 +-
>  drivers/net/qede/base/ecore_dcbx.c            |    526 +-
>  drivers/net/qede/base/ecore_dcbx.h            |     16 +-
>  drivers/net/qede/base/ecore_dcbx_api.h        |     41 +-
>  drivers/net/qede/base/ecore_dev.c             |   4083 +-
>  drivers/net/qede/base/ecore_dev_api.h         |    367 +-
>  drivers/net/qede/base/ecore_gtt_reg_addr.h    |     93 +-
>  drivers/net/qede/base/ecore_gtt_values.h      |      4 +-
>  drivers/net/qede/base/ecore_hsi_common.h      |   2722 +-
>  drivers/net/qede/base/ecore_hsi_debug_tools.h |    426 +-
>  drivers/net/qede/base/ecore_hsi_eth.h         |   4541 +-
>  drivers/net/qede/base/ecore_hsi_func_common.h |      5 +-
>  drivers/net/qede/base/ecore_hsi_init_func.h   |    707 +-
>  drivers/net/qede/base/ecore_hsi_init_tool.h   |    254 +-
>  drivers/net/qede/base/ecore_hw.c              |    386 +-
>  drivers/net/qede/base/ecore_hw.h              |     55 +-
>  drivers/net/qede/base/ecore_hw_defs.h         |     45 +-
>  drivers/net/qede/base/ecore_init_fw_funcs.c   |   1365 +-
>  drivers/net/qede/base/ecore_init_fw_funcs.h   |    457 +-
>  drivers/net/qede/base/ecore_init_ops.c        |    159 +-
>  drivers/net/qede/base/ecore_init_ops.h        |     19 +-
>  drivers/net/qede/base/ecore_int.c             |   1363 +-
>  drivers/net/qede/base/ecore_int.h             |     65 +-
>  drivers/net/qede/base/ecore_int_api.h         |    127 +-
>  drivers/net/qede/base/ecore_iov_api.h         |    118 +-
>  drivers/net/qede/base/ecore_iro.h             |    427 +-
>  drivers/net/qede/base/ecore_iro_values.h      |    463 +-
>  drivers/net/qede/base/ecore_l2.c              |    497 +-
>  drivers/net/qede/base/ecore_l2.h              |     18 +-
>  drivers/net/qede/base/ecore_l2_api.h          |    148 +-
>  drivers/net/qede/base/ecore_mcp.c             |   2631 +-
>  drivers/net/qede/base/ecore_mcp.h             |    125 +-
>  drivers/net/qede/base/ecore_mcp_api.h         |    471 +-
>  drivers/net/qede/base/ecore_mng_tlv.c         |    910 +-
>  drivers/net/qede/base/ecore_proto_if.h        |     69 +-
>  drivers/net/qede/base/ecore_rt_defs.h         |    895 +-
>  drivers/net/qede/base/ecore_sp_api.h          |      6 +-
>  drivers/net/qede/base/ecore_sp_commands.c     |    141 +-
>  drivers/net/qede/base/ecore_sp_commands.h     |     18 +-
>  drivers/net/qede/base/ecore_spq.c             |    431 +-
>  drivers/net/qede/base/ecore_spq.h             |     65 +-
>  drivers/net/qede/base/ecore_sriov.c           |   1700 +-
>  drivers/net/qede/base/ecore_sriov.h           |    147 +-
>  drivers/net/qede/base/ecore_status.h          |      4 +-
>  drivers/net/qede/base/ecore_utils.h           |     18 +-
>  drivers/net/qede/base/ecore_vf.c              |    550 +-
>  drivers/net/qede/base/ecore_vf.h              |     57 +-
>  drivers/net/qede/base/ecore_vf_api.h          |     74 +-
>  drivers/net/qede/base/ecore_vfpf_if.h         |    122 +-
>  drivers/net/qede/base/eth_common.h            |    300 +-
>  drivers/net/qede/base/mcp_public.h            |   2343 +-
>  drivers/net/qede/base/nvm_cfg.h               |   5059 +-
>  drivers/net/qede/base/reg_addr.h              | 190590 ++++++++++++++-
>  drivers/net/qede/qede_debug.c                 |    117 +-
>  drivers/net/qede/qede_ethdev.c                |     11 +-
>  drivers/net/qede/qede_ethdev.h                |     11 +-
>  drivers/net/qede/qede_if.h                    |     20 +-
>  drivers/net/qede/qede_main.c                  |      4 +-
>  drivers/net/qede/qede_rxtx.c                  |     89 +-
>  drivers/net/qede/qede_sriov.c                 |      4 -
>  lib/librte_eal/include/rte_bitops.h           |     54 +-
>  69 files changed, 215373 insertions(+), 15189 deletions(-)
> 
> --

Please discard this v1 patch set due to the size issues encountered. We'll send a v2 series.

Thanks!
Rasesh

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2021-02-19 12:00 UTC | newest]

Thread overview: 8+ messages
2021-02-19 10:14 [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
2021-02-19 10:14 ` [dpdk-dev] [PATCH 2/7] net/qede/base: changes for HSI to support " Rasesh Mody
2021-02-19 10:14 ` [dpdk-dev] [PATCH 3/7] net/qede/base: add OS abstracted changes Rasesh Mody
2021-02-19 10:14 ` [dpdk-dev] [PATCH 4/7] net/qede/base: update base driver to 8.62.4.0 Rasesh Mody
2021-02-19 10:14 ` [dpdk-dev] [PATCH 5/7] net/qede: changes for DMA page chain allocation and free Rasesh Mody
2021-02-19 10:14 ` [dpdk-dev] [PATCH 6/7] net/qede: add support for new HW Rasesh Mody
2021-02-19 10:14 ` [dpdk-dev] [PATCH 7/7] net/qede/base: clean unnecessary ifdef and comments Rasesh Mody
2021-02-19 12:00 ` [dpdk-dev] [PATCH 0/7] net/qede: add support for new HW Rasesh Mody
