DPDK patches and discussions
* [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0
@ 2019-09-30  2:49 Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 1/9] net/qede/base: calculate right page index for PBL chains Rasesh Mody
                   ` (19 more replies)
  0 siblings, 20 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Hi,

This patch series updates the FW to 8.40.25.0 and includes corresponding
base driver changes. It also includes some enhancements and fixes.
The PMD version is bumped to 2.11.0.1.

Please apply.

Thanks!
-Rasesh

Rasesh Mody (9):
  net/qede/base: calculate right page index for PBL chains
  net/qede/base: change MFW mailbox command log verbosity
  net/qede/base: lock entire QM reconfiguration flow
  net/qede/base: rename HSI datatypes and funcs
  net/qede/base: update rt defs NVM cfg and mcp code
  net/qede/base: move dmae code to HSI
  net/qede/base: update HSI code
  net/qede/base: update the FW to 8.40.25.0
  net/qede: print adapter info during init failure

 drivers/net/qede/base/bcm_osal.c              |    1 +
 drivers/net/qede/base/bcm_osal.h              |    5 +-
 drivers/net/qede/base/common_hsi.h            |  257 +--
 drivers/net/qede/base/ecore.h                 |   77 +-
 drivers/net/qede/base/ecore_chain.h           |   84 +-
 drivers/net/qede/base/ecore_cxt.c             |  520 ++++---
 drivers/net/qede/base/ecore_cxt.h             |   12 +
 drivers/net/qede/base/ecore_dcbx.c            |    7 +-
 drivers/net/qede/base/ecore_dev.c             |  753 +++++----
 drivers/net/qede/base/ecore_dev_api.h         |   92 --
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   42 +-
 drivers/net/qede/base/ecore_gtt_values.h      |   18 +-
 drivers/net/qede/base/ecore_hsi_common.h      | 1134 +++++++-------
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  475 +++---
 drivers/net/qede/base/ecore_hsi_eth.h         | 1386 ++++++++---------
 drivers/net/qede/base/ecore_hsi_init_func.h   |   25 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   42 +-
 drivers/net/qede/base/ecore_hw.c              |   68 +-
 drivers/net/qede/base/ecore_hw.h              |   98 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   |  717 ++++-----
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  107 +-
 drivers/net/qede/base/ecore_init_ops.c        |   66 +-
 drivers/net/qede/base/ecore_init_ops.h        |   12 +-
 drivers/net/qede/base/ecore_int.c             |  131 +-
 drivers/net/qede/base/ecore_int.h             |    4 +-
 drivers/net/qede/base/ecore_int_api.h         |   13 +-
 drivers/net/qede/base/ecore_iov_api.h         |    4 +-
 drivers/net/qede/base/ecore_iro.h             |  320 ++--
 drivers/net/qede/base/ecore_iro_values.h      |  336 ++--
 drivers/net/qede/base/ecore_l2.c              |   10 +-
 drivers/net/qede/base/ecore_l2_api.h          |    2 +
 drivers/net/qede/base/ecore_mcp.c             |  296 ++--
 drivers/net/qede/base/ecore_mcp.h             |    9 +-
 drivers/net/qede/base/ecore_proto_if.h        |    1 +
 drivers/net/qede/base/ecore_rt_defs.h         |  870 +++++------
 drivers/net/qede/base/ecore_sp_commands.c     |   15 +-
 drivers/net/qede/base/ecore_spq.c             |   55 +-
 drivers/net/qede/base/ecore_sriov.c           |  178 ++-
 drivers/net/qede/base/ecore_sriov.h           |    4 +-
 drivers/net/qede/base/ecore_vf.c              |   18 +-
 drivers/net/qede/base/eth_common.h            |  101 +-
 drivers/net/qede/base/mcp_public.h            |   59 +-
 drivers/net/qede/base/nvm_cfg.h               |  909 ++++++++++-
 drivers/net/qede/base/reg_addr.h              |   75 +-
 drivers/net/qede/qede_ethdev.c                |   54 +-
 drivers/net/qede/qede_ethdev.h                |   21 +-
 drivers/net/qede/qede_main.c                  |    2 +-
 drivers/net/qede/qede_rxtx.c                  |   28 +-
 48 files changed, 5471 insertions(+), 4042 deletions(-)

-- 
2.18.0



* [dpdk-dev] [PATCH 1/9] net/qede/base: calculate right page index for PBL chains
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
@ 2019-09-30  2:49 ` Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 2/9] net/qede/base: change MFW mailbox command log verbosity Rasesh Mody
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

ecore_chain_set_prod()/ecore_chain_set_cons() set the wrong page index for
PBL chains with a non-power-of-2 page count. Fix them to calculate the
correct page index relative to the current producer/consumer indexes.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore_chain.h | 84 ++++++++++++++++++++---------
 1 file changed, 58 insertions(+), 26 deletions(-)
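
For illustration only (not part of the patch; the values and names below are
made up), here is a minimal standalone sketch of the relative page-index
computation the new code performs, for a PBL chain with 3 pages of 8 elements:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t elem_per_page = 8, page_cnt = 3;	/* non-power-of-2 page count */
	uint32_t page_mask = ~(elem_per_page - 1);	/* elements per page is a power of 2 */
	uint32_t cur_prod = 17, prod_idx = 9;		/* rewind the producer by one page */
	uint32_t cur_page_idx = 2;			/* page currently holding cur_prod */

	/* Pages between the two producer positions. The "- 1" mirrors
	 * ecore_chain_produce(), which advances the page index when the
	 * producer reaches the first element of the next page.
	 */
	uint32_t page_diff = (((cur_prod - 1) & page_mask) -
			      ((prod_idx - 1) & page_mask)) / elem_per_page;

	/* New page index, wrapped by the real page count. */
	uint32_t new_page_idx = (cur_page_idx - page_diff + page_cnt) % page_cnt;

	/* Prints page_diff=1 new_page_idx=1: element 8 lives in page 1. */
	printf("page_diff=%u new_page_idx=%u\n",
	       (unsigned)page_diff, (unsigned)new_page_idx);
	return 0;
}

The update is relative to the current index, so wrap-around of the raw 16/32-bit
producer index is harmless: as the comment in the patch notes, the difference
between the current and requested indexes is always positive and below the
chain's capacity.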

diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index 6d0382d3a..c69920be5 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -86,8 +86,8 @@ struct ecore_chain {
 		void		**pp_virt_addr_tbl;
 
 		union {
-			struct ecore_chain_pbl_u16	u16;
-			struct ecore_chain_pbl_u32	u32;
+			struct ecore_chain_pbl_u16	pbl_u16;
+			struct ecore_chain_pbl_u32	pbl_u32;
 		} c;
 	} pbl;
 
@@ -405,7 +405,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain16.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.c.u16.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.pbl_u16.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -414,7 +414,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain32.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.c.u32.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.pbl_u32.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -479,7 +479,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain16.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.c.u16.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.pbl_u16.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -488,7 +488,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain32.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.c.u32.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.pbl_u32.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -532,11 +532,11 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 		u32 reset_val = p_chain->page_cnt - 1;
 
 		if (is_chain_u16(p_chain)) {
-			p_chain->pbl.c.u16.prod_page_idx = (u16)reset_val;
-			p_chain->pbl.c.u16.cons_page_idx = (u16)reset_val;
+			p_chain->pbl.c.pbl_u16.prod_page_idx = (u16)reset_val;
+			p_chain->pbl.c.pbl_u16.cons_page_idx = (u16)reset_val;
 		} else {
-			p_chain->pbl.c.u32.prod_page_idx = reset_val;
-			p_chain->pbl.c.u32.cons_page_idx = reset_val;
+			p_chain->pbl.c.pbl_u32.prod_page_idx = reset_val;
+			p_chain->pbl.c.pbl_u32.cons_page_idx = reset_val;
 		}
 	}
 
@@ -725,18 +725,34 @@ static OSAL_INLINE void ecore_chain_set_prod(struct ecore_chain *p_chain,
 					     u32 prod_idx, void *p_prod_elem)
 {
 	if (p_chain->mode == ECORE_CHAIN_MODE_PBL) {
-		/* Use "prod_idx-1" since ecore_chain_produce() advances the
-		 * page index before the producer index when getting to
-		 * "next_page_mask".
+		u32 cur_prod, page_mask, page_cnt, page_diff;
+
+		cur_prod = is_chain_u16(p_chain) ? p_chain->u.chain16.prod_idx
+						 : p_chain->u.chain32.prod_idx;
+
+		/* Assume that number of elements in a page is power of 2 */
+		page_mask = ~p_chain->elem_per_page_mask;
+
+		/* Use "cur_prod - 1" and "prod_idx - 1" since producer index
+		 * reaches the first element of next page before the page index
+		 * is incremented. See ecore_chain_produce().
+		 * Index wrap around is not a problem because the difference
+		 * between current and given producer indexes is always
+		 * positive and lower than the chain's capacity.
 		 */
-		u32 elem_idx =
-			(prod_idx - 1 + p_chain->capacity) % p_chain->capacity;
-		u32 page_idx = elem_idx / p_chain->elem_per_page;
+		page_diff = (((cur_prod - 1) & page_mask) -
+			     ((prod_idx - 1) & page_mask)) /
+			    p_chain->elem_per_page;
 
+		page_cnt = ecore_chain_get_page_cnt(p_chain);
 		if (is_chain_u16(p_chain))
-			p_chain->pbl.c.u16.prod_page_idx = (u16)page_idx;
+			p_chain->pbl.c.pbl_u16.prod_page_idx =
+				(p_chain->pbl.c.pbl_u16.prod_page_idx -
+				 page_diff + page_cnt) % page_cnt;
 		else
-			p_chain->pbl.c.u32.prod_page_idx = page_idx;
+			p_chain->pbl.c.pbl_u32.prod_page_idx =
+				(p_chain->pbl.c.pbl_u32.prod_page_idx -
+				 page_diff + page_cnt) % page_cnt;
 	}
 
 	if (is_chain_u16(p_chain))
@@ -756,18 +772,34 @@ static OSAL_INLINE void ecore_chain_set_cons(struct ecore_chain *p_chain,
 					     u32 cons_idx, void *p_cons_elem)
 {
 	if (p_chain->mode == ECORE_CHAIN_MODE_PBL) {
-		/* Use "cons_idx-1" since ecore_chain_consume() advances the
-		 * page index before the consumer index when getting to
-		 * "next_page_mask".
+		u32 cur_cons, page_mask, page_cnt, page_diff;
+
+		cur_cons = is_chain_u16(p_chain) ? p_chain->u.chain16.cons_idx
+						 : p_chain->u.chain32.cons_idx;
+
+		/* Assume that number of elements in a page is power of 2 */
+		page_mask = ~p_chain->elem_per_page_mask;
+
+		/* Use "cur_cons - 1" and "cons_idx - 1" since consumer index
+		 * reaches the first element of next page before the page index
+		 * is incremented. See ecore_chain_consume().
+		 * Index wrap around is not a problem because the difference
+		 * between current and given consumer indexes is always
+		 * positive and lower than the chain's capacity.
 		 */
-		u32 elem_idx =
-			(cons_idx - 1 + p_chain->capacity) % p_chain->capacity;
-		u32 page_idx = elem_idx / p_chain->elem_per_page;
+		page_diff = (((cur_cons - 1) & page_mask) -
+			     ((cons_idx - 1) & page_mask)) /
+			    p_chain->elem_per_page;
 
+		page_cnt = ecore_chain_get_page_cnt(p_chain);
 		if (is_chain_u16(p_chain))
-			p_chain->pbl.c.u16.cons_page_idx = (u16)page_idx;
+			p_chain->pbl.c.pbl_u16.cons_page_idx =
+				(p_chain->pbl.c.pbl_u16.cons_page_idx -
+				 page_diff + page_cnt) % page_cnt;
 		else
-			p_chain->pbl.c.u32.cons_page_idx = page_idx;
+			p_chain->pbl.c.pbl_u32.cons_page_idx =
+				(p_chain->pbl.c.pbl_u32.cons_page_idx -
+				 page_diff + page_cnt) % page_cnt;
 	}
 
 	if (is_chain_u16(p_chain))
-- 
2.18.0



* [dpdk-dev] [PATCH 2/9] net/qede/base: change MFW mailbox command log verbosity
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 1/9] net/qede/base: calculate right page index for PBL chains Rasesh Mody
@ 2019-09-30  2:49 ` Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 3/9] net/qede/base: lock entire QM reconfiguration flow Rasesh Mody
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Change the log module for management FW (MFW) mailbox DP_VERBOSE messages
from ECORE_MSG_SP to ECORE_MSG_HW.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore_mcp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6c6560688..1a5152ec5 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -469,7 +469,7 @@ static void __ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 	/* Set the drv command along with the sequence number */
 	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (p_mb_params->cmd | seq_num));
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
 		   "MFW mailbox: command 0x%08x param 0x%08x\n",
 		   (p_mb_params->cmd | seq_num), p_mb_params->param);
 }
@@ -594,7 +594,7 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	ecore_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
 	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
 		   "MFW mailbox: response 0x%08x param 0x%08x [after %d.%03d ms]\n",
 		   p_mb_params->mcp_resp, p_mb_params->mcp_param,
 		   (cnt * delay) / 1000, (cnt * delay) % 1000);
-- 
2.18.0



* [dpdk-dev] [PATCH 3/9] net/qede/base: lock entire QM reconfiguration flow
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 1/9] net/qede/base: calculate right page index for PBL chains Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 2/9] net/qede/base: change MFW mailbox command log verbosity Rasesh Mody
@ 2019-09-30  2:49 ` Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 4/9] net/qede/base: rename HSI datatypes and funcs Rasesh Mody
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Multiple flows can issue QM reconfiguration, so hold the lock across the
entire reconfiguration flow rather than around individual steps.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore_dev.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)
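
The shape of the change is the usual single-lock/single-unlock-label pattern.
A minimal standalone sketch (pthread-based, with made-up stub steps, not
driver code) looks like this:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t qm_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stubs standing in for the stop/init/start steps of the real flow. */
static bool stop_queues(void)   { return true; }
static int  run_init_tool(void) { return 0; }
static bool start_queues(void)  { return true; }

static int qm_reconf(void)
{
	int rc = 0;

	pthread_mutex_lock(&qm_lock);	/* taken once, before the first step */

	if (!stop_queues()) {		/* every failure path jumps to unlock */
		rc = -1;
		goto unlock;
	}

	rc = run_init_tool();

	if (!start_queues())
		rc = -1;

unlock:
	pthread_mutex_unlock(&qm_lock);	/* released on every exit path */
	return rc;
}

int main(void)
{
	return qm_reconf();
}

As in the patch, a failure after the lock is taken no longer returns directly;
it falls through (or jumps) to the single unlock site, so the lock cannot be
leaked.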

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d7e1d7b32..b183519b5 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2291,18 +2291,21 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 	bool b_rc;
-	enum _ecore_status_t rc;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	/* multiple flows can issue qm reconf. Need to lock */
+	OSAL_SPIN_LOCK(&qm_lock);
 
 	/* initialize ecore's qm data structure */
 	ecore_init_qm_info(p_hwfn);
 
 	/* stop PF's qm queues */
-	OSAL_SPIN_LOCK(&qm_lock);
 	b_rc = ecore_send_qm_stop_cmd(p_hwfn, p_ptt, false, true,
 				      qm_info->start_pq, qm_info->num_pqs);
-	OSAL_SPIN_UNLOCK(&qm_lock);
-	if (!b_rc)
-		return ECORE_INVAL;
+	if (!b_rc) {
+		rc = ECORE_INVAL;
+		goto unlock;
+	}
 
 	/* clear the QM_PF runtime phase leftovers from previous init */
 	ecore_init_clear_rt_data(p_hwfn);
@@ -2313,18 +2316,17 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 	/* activate init tool on runtime array */
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_QM_PF, p_hwfn->rel_pf_id,
 			    p_hwfn->hw_info.hw_mode);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
 	/* start PF's qm queues */
-	OSAL_SPIN_LOCK(&qm_lock);
 	b_rc = ecore_send_qm_stop_cmd(p_hwfn, p_ptt, true, true,
 				      qm_info->start_pq, qm_info->num_pqs);
-	OSAL_SPIN_UNLOCK(&qm_lock);
 	if (!b_rc)
-		return ECORE_INVAL;
+		rc = ECORE_INVAL;
 
-	return ECORE_SUCCESS;
+unlock:
+	OSAL_SPIN_UNLOCK(&qm_lock);
+
+	return rc;
 }
 
 static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
-- 
2.18.0



* [dpdk-dev] [PATCH 4/9] net/qede/base: rename HSI datatypes and funcs
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (2 preceding siblings ...)
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 3/9] net/qede/base: lock entire QM reconfiguration flow Rasesh Mody
@ 2019-09-30  2:49 ` Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 5/9] net/qede/base: update rt defs NVM cfg and mcp code Rasesh Mody
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

This patch removes the E4/E5/e4/e5/BB_K2 prefixes and suffixes from the code:
 - HSI datatypes: removed all e5 datatypes and renamed all e4 datatypes
   to drop the prefix/suffix (s/_E4//; s/_e4//; s/E4_//).
 - HSI functions: removed the e4/e5 prefixes/suffixes.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/common_hsi.h          |   93 +-
 drivers/net/qede/base/ecore_cxt.c           |    4 +-
 drivers/net/qede/base/ecore_dcbx.c          |    2 +-
 drivers/net/qede/base/ecore_dev.c           |  116 +-
 drivers/net/qede/base/ecore_hsi_common.h    |  847 +++++-------
 drivers/net/qede/base/ecore_hsi_eth.h       | 1308 ++++++++-----------
 drivers/net/qede/base/ecore_hsi_init_tool.h |    4 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c |   41 +-
 drivers/net/qede/base/ecore_int.c           |    8 +-
 drivers/net/qede/base/ecore_int.h           |    4 +-
 drivers/net/qede/base/ecore_int_api.h       |    6 +-
 drivers/net/qede/base/ecore_iov_api.h       |    4 +-
 drivers/net/qede/base/ecore_mcp.c           |    4 +-
 drivers/net/qede/base/ecore_spq.c           |    8 +-
 drivers/net/qede/base/ecore_sriov.c         |    4 +-
 drivers/net/qede/base/ecore_sriov.h         |    4 +-
 drivers/net/qede/base/reg_addr.h            |   65 +-
 drivers/net/qede/qede_rxtx.c                |    8 +-
 18 files changed, 1076 insertions(+), 1454 deletions(-)

diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 7047eb9f8..b878a92aa 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -106,59 +106,43 @@
 /* PCI functions */
 #define MAX_NUM_PORTS_BB        (2)
 #define MAX_NUM_PORTS_K2        (4)
-#define MAX_NUM_PORTS_E5        (4)
-#define MAX_NUM_PORTS           (MAX_NUM_PORTS_E5)
+#define MAX_NUM_PORTS           (MAX_NUM_PORTS_K2)
 
 #define MAX_NUM_PFS_BB          (8)
 #define MAX_NUM_PFS_K2          (16)
-#define MAX_NUM_PFS_E5          (16)
-#define MAX_NUM_PFS             (MAX_NUM_PFS_E5)
+#define MAX_NUM_PFS             (MAX_NUM_PFS_K2)
 #define MAX_NUM_OF_PFS_IN_CHIP  (16) /* On both engines */
 
 #define MAX_NUM_VFS_BB          (120)
 #define MAX_NUM_VFS_K2          (192)
-#define MAX_NUM_VFS_E4          (MAX_NUM_VFS_K2)
-#define MAX_NUM_VFS_E5          (240)
-#define COMMON_MAX_NUM_VFS      (MAX_NUM_VFS_E5)
+#define COMMON_MAX_NUM_VFS      (MAX_NUM_VFS_K2)
 
 #define MAX_NUM_FUNCTIONS_BB    (MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2    (MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
-#define MAX_NUM_FUNCTIONS       (MAX_NUM_PFS + MAX_NUM_VFS_E4)
 
 /* in both BB and K2, the VF number starts from 16. so for arrays containing all
  * possible PFs and VFs - we need a constant for this size
  */
 #define MAX_FUNCTION_NUMBER_BB      (MAX_NUM_PFS + MAX_NUM_VFS_BB)
 #define MAX_FUNCTION_NUMBER_K2      (MAX_NUM_PFS + MAX_NUM_VFS_K2)
-#define MAX_FUNCTION_NUMBER_E4      (MAX_NUM_PFS + MAX_NUM_VFS_E4)
-#define MAX_FUNCTION_NUMBER_E5      (MAX_NUM_PFS + MAX_NUM_VFS_E5)
-#define COMMON_MAX_FUNCTION_NUMBER  (MAX_NUM_PFS + MAX_NUM_VFS_E5)
+#define COMMON_MAX_FUNCTION_NUMBER  (MAX_NUM_PFS + MAX_NUM_VFS_K2)
 
 #define MAX_NUM_VPORTS_K2       (208)
 #define MAX_NUM_VPORTS_BB       (160)
-#define MAX_NUM_VPORTS_E4       (MAX_NUM_VPORTS_K2)
-#define MAX_NUM_VPORTS_E5       (256)
-#define COMMON_MAX_NUM_VPORTS   (MAX_NUM_VPORTS_E5)
+#define COMMON_MAX_NUM_VPORTS   (MAX_NUM_VPORTS_K2)
 
 #define MAX_NUM_L2_QUEUES_BB	(256)
 #define MAX_NUM_L2_QUEUES_K2    (320)
-#define MAX_NUM_L2_QUEUES_E5    (320) /* TODO_E5_VITALY - fix to 512 */
-#define MAX_NUM_L2_QUEUES		(MAX_NUM_L2_QUEUES_E5)
 
 /* Traffic classes in network-facing blocks (PBF, BTB, NIG, BRB, PRS and QM) */
 #define NUM_PHYS_TCS_4PORT_K2     4
-#define NUM_PHYS_TCS_4PORT_TX_E5  6
-#define NUM_PHYS_TCS_4PORT_RX_E5  4
 #define NUM_OF_PHYS_TCS           8
 #define PURE_LB_TC                NUM_OF_PHYS_TCS
 #define NUM_TCS_4PORT_K2          (NUM_PHYS_TCS_4PORT_K2 + 1)
-#define NUM_TCS_4PORT_TX_E5       (NUM_PHYS_TCS_4PORT_TX_E5 + 1)
-#define NUM_TCS_4PORT_RX_E5       (NUM_PHYS_TCS_4PORT_RX_E5 + 1)
 #define NUM_OF_TCS                (NUM_OF_PHYS_TCS + 1)
 
 /* CIDs */
-#define NUM_OF_CONNECTION_TYPES_E4 (8)
-#define NUM_OF_CONNECTION_TYPES_E5 (16)
+#define NUM_OF_CONNECTION_TYPES (8)
 #define NUM_OF_TASK_TYPES       (8)
 #define NUM_OF_LCIDS            (320)
 #define NUM_OF_LTIDS            (320)
@@ -412,9 +396,8 @@
 #define CAU_FSM_ETH_TX  1
 
 /* Number of Protocol Indices per Status Block */
-#define PIS_PER_SB_E4    12
-#define PIS_PER_SB_E5    8
-#define MAX_PIS_PER_SB_E4	 OSAL_MAX_T(PIS_PER_SB_E4, PIS_PER_SB_E5)
+#define PIS_PER_SB    12
+#define MAX_PIS_PER_SB	 PIS_PER_SB
 
 /* fsm is stopped or not valid for this sb */
 #define CAU_HC_STOPPED_STATE		3
@@ -430,8 +413,7 @@
 
 #define MAX_SB_PER_PATH_K2			(368)
 #define MAX_SB_PER_PATH_BB			(288)
-#define MAX_SB_PER_PATH_E5			(512)
-#define MAX_TOT_SB_PER_PATH			MAX_SB_PER_PATH_E5
+#define MAX_TOT_SB_PER_PATH			MAX_SB_PER_PATH_K2
 
 #define MAX_SB_PER_PF_MIMD			129
 #define MAX_SB_PER_PF_SIMD			64
@@ -639,12 +621,8 @@
 #define MAX_NUM_ILT_RECORDS \
 	OSAL_MAX_T(PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2)
 
-#define PXP_NUM_ILT_RECORDS_E5 13664
-
-
 // Host Interface
-#define PXP_QUEUES_ZONE_MAX_NUM_E4	320
-#define PXP_QUEUES_ZONE_MAX_NUM_E5	512
+#define PXP_QUEUES_ZONE_MAX_NUM	320
 
 
 /*****************/
@@ -691,11 +669,12 @@
 /* PBF CONSTANTS  */
 /******************/
 
-/* Number of PBF command queue lines. Each line is 32B. */
-#define PBF_MAX_CMD_LINES_E4 3328
-#define PBF_MAX_CMD_LINES_E5 5280
+/* Number of PBF command queue lines. */
+#define PBF_MAX_CMD_LINES 3328 /* Each line is 256b */
 
 /* Number of BTB blocks. Each block is 256B. */
+#define BTB_MAX_BLOCKS_BB 1440 /* 2880 blocks of 128B */
+#define BTB_MAX_BLOCKS_K2 1840 /* 3680 blocks of 128B */
 #define BTB_MAX_BLOCKS 1440
 
 /*****************/
@@ -1435,40 +1414,20 @@ enum rss_hash_type {
 /*
  * status block structure
  */
-struct status_block_e4 {
-	__le16 pi_array[PIS_PER_SB_E4];
-	__le32 sb_num;
-#define STATUS_BLOCK_E4_SB_NUM_MASK      0x1FF
-#define STATUS_BLOCK_E4_SB_NUM_SHIFT     0
-#define STATUS_BLOCK_E4_ZERO_PAD_MASK    0x7F
-#define STATUS_BLOCK_E4_ZERO_PAD_SHIFT   9
-#define STATUS_BLOCK_E4_ZERO_PAD2_MASK   0xFFFF
-#define STATUS_BLOCK_E4_ZERO_PAD2_SHIFT  16
-	__le32 prod_index;
-#define STATUS_BLOCK_E4_PROD_INDEX_MASK  0xFFFFFF
-#define STATUS_BLOCK_E4_PROD_INDEX_SHIFT 0
-#define STATUS_BLOCK_E4_ZERO_PAD3_MASK   0xFF
-#define STATUS_BLOCK_E4_ZERO_PAD3_SHIFT  24
-};
-
-
-/*
- * status block structure
- */
-struct status_block_e5 {
-	__le16 pi_array[PIS_PER_SB_E5];
+struct status_block {
+	__le16 pi_array[PIS_PER_SB];
 	__le32 sb_num;
-#define STATUS_BLOCK_E5_SB_NUM_MASK      0x1FF
-#define STATUS_BLOCK_E5_SB_NUM_SHIFT     0
-#define STATUS_BLOCK_E5_ZERO_PAD_MASK    0x7F
-#define STATUS_BLOCK_E5_ZERO_PAD_SHIFT   9
-#define STATUS_BLOCK_E5_ZERO_PAD2_MASK   0xFFFF
-#define STATUS_BLOCK_E5_ZERO_PAD2_SHIFT  16
+#define STATUS_BLOCK_SB_NUM_MASK      0x1FF
+#define STATUS_BLOCK_SB_NUM_SHIFT     0
+#define STATUS_BLOCK_ZERO_PAD_MASK    0x7F
+#define STATUS_BLOCK_ZERO_PAD_SHIFT   9
+#define STATUS_BLOCK_ZERO_PAD2_MASK   0xFFFF
+#define STATUS_BLOCK_ZERO_PAD2_SHIFT  16
 	__le32 prod_index;
-#define STATUS_BLOCK_E5_PROD_INDEX_MASK  0xFFFFFF
-#define STATUS_BLOCK_E5_PROD_INDEX_SHIFT 0
-#define STATUS_BLOCK_E5_ZERO_PAD3_MASK   0xFF
-#define STATUS_BLOCK_E5_ZERO_PAD3_SHIFT  24
+#define STATUS_BLOCK_PROD_INDEX_MASK  0xFFFFFF
+#define STATUS_BLOCK_PROD_INDEX_SHIFT 0
+#define STATUS_BLOCK_ZERO_PAD3_MASK   0xFF
+#define STATUS_BLOCK_ZERO_PAD3_SHIFT  24
 };
 
 
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 5c3370e10..bc5628c4e 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -54,8 +54,8 @@
 
 /* connection context union */
 union conn_context {
-	struct e4_core_conn_context core_ctx;
-	struct e4_eth_conn_context eth_ctx;
+	struct core_conn_context core_ctx;
+	struct eth_conn_context eth_ctx;
 };
 
 /* TYPE-0 task context - iSCSI, FCOE */
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index cbc69cde7..b82ca49ff 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -159,7 +159,7 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits) &&
 	    (type == DCBX_PROTOCOL_ROCE)) {
 		ecore_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1);
-		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_PCP_BB_K2, prio << 1);
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_PCP, prio << 1);
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index b183519b5..749aea4e8 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -935,7 +935,7 @@ enum _ecore_status_t ecore_llh_set_roce_affinity(struct ecore_dev *p_dev,
 	return rc;
 }
 
-struct ecore_llh_filter_e4_details {
+struct ecore_llh_filter_details {
 	u64 value;
 	u32 mode;
 	u32 protocol_type;
@@ -944,10 +944,10 @@ struct ecore_llh_filter_e4_details {
 };
 
 static enum _ecore_status_t
-ecore_llh_access_filter_e4(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx,
-			   struct ecore_llh_filter_e4_details *p_details,
-			   bool b_write_access)
+ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
+			struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx,
+			struct ecore_llh_filter_details *p_details,
+			bool b_write_access)
 {
 	u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
 	struct ecore_dmae_params params;
@@ -1008,7 +1008,7 @@ ecore_llh_access_filter_e4(struct ecore_hwfn *p_hwfn,
 							  abs_ppfid, addr);
 
 	/* Filter header select */
-	addr = NIG_REG_LLH_FUNC_FILTER_HDR_SEL_BB_K2 + filter_idx * 0x4;
+	addr = NIG_REG_LLH_FUNC_FILTER_HDR_SEL + filter_idx * 0x4;
 	if (b_write_access)
 		ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
 			       p_details->hdr_sel);
@@ -1035,7 +1035,7 @@ ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type,
 			u32 high, u32 low)
 {
-	struct ecore_llh_filter_e4_details filter_details;
+	struct ecore_llh_filter_details filter_details;
 
 	filter_details.enable = 1;
 	filter_details.value = ((u64)high << 32) | low;
@@ -1048,22 +1048,22 @@ ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			      1 : /* protocol-based classification */
 			      0;  /* MAC-address based classification */
 
-	return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx,
-					  &filter_details,
-					  true /* write access */);
+	return ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+				&filter_details,
+				true /* write access */);
 }
 
 static enum _ecore_status_t
 ecore_llh_remove_filter_e4(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx)
 {
-	struct ecore_llh_filter_e4_details filter_details;
+	struct ecore_llh_filter_details filter_details;
 
 	OSAL_MEMSET(&filter_details, 0, sizeof(filter_details));
 
-	return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx,
-					  &filter_details,
-					  true /* write access */);
+	return ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+				       &filter_details,
+				       true /* write access */);
 }
 
 static enum _ecore_status_t
@@ -1468,7 +1468,7 @@ static enum _ecore_status_t
 ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 ppfid)
 {
-	struct ecore_llh_filter_e4_details filter_details;
+	struct ecore_llh_filter_details filter_details;
 	u8 abs_ppfid, filter_idx;
 	u32 addr;
 	enum _ecore_status_t rc;
@@ -1486,9 +1486,9 @@ ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
 	     filter_idx++) {
 		OSAL_MEMSET(&filter_details, 0, sizeof(filter_details));
-		rc =  ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid,
-						 filter_idx, &filter_details,
-						 false /* read access */);
+		rc =  ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid,
+					      filter_idx, &filter_details,
+					      false /* read access */);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
@@ -1862,7 +1862,7 @@ static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
 
 		p_qm_port->active = 1;
 		p_qm_port->active_phys_tcs = active_phys_tcs;
-		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES_E4 / num_ports;
+		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
 		p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
 	}
 }
@@ -2730,10 +2730,8 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
-	if (CHIP_REV_IS_EMUL(p_dev) &&
-	    (ECORE_IS_AH(p_dev)))
-		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5,
-			 0x3ffffff);
+	if (ECORE_IS_AH(p_dev))
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2, 0x3ffffff);
 
 	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
 	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
@@ -3017,49 +3015,59 @@ static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port);
 }
 
-static void ecore_emul_link_init_ah_e5(struct ecore_hwfn *p_hwfn,
+static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt)
 {
+	u32 mac_base, mac_config_val = 0xa853;
 	u8 port = p_hwfn->port_id;
-	u32 mac_base = NWM_REG_MAC0_K2_E5 + (port << 2) * NWM_REG_MAC0_SIZE;
 
-	DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port);
-
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2_E5 + (port << 2),
-		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT) |
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2 + (port << 2),
+		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_SHIFT) |
 		 (port <<
-		  CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT) |
-		 (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT));
+		  CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_SHIFT) |
+		 (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_SHIFT));
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2_E5,
-		 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT);
+	mac_base = NWM_REG_MAC0_K2 + (port << 2) * NWM_REG_MAC0_SIZE;
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2_E5,
-		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT);
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2,
+		 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_SHIFT);
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2_E5,
-		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT);
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2,
+		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_SHIFT);
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5,
-		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT);
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2,
+		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_SHIFT);
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5,
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2,
+		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2,
 		 (0xA <<
-		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT) |
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_SHIFT) |
 		 (8 <<
-		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT));
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_SHIFT));
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2_E5,
-		 0xa853);
+	/* Strip the CRC field from the frame */
+	mac_config_val &= ~ETH_MAC_REG_COMMAND_CONFIG_CRC_FWD_K2;
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2,
+		 mac_config_val);
 }
 
 static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
 				 struct ecore_ptt *p_ptt)
 {
-	if (ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_emul_link_init_ah_e5(p_hwfn, p_ptt);
-	else /* BB */
+	u8 port = ECORE_IS_BB(p_hwfn->p_dev) ? p_hwfn->port_id * 2
+					     : p_hwfn->port_id;
+
+	DP_INFO(p_hwfn->p_dev, "Emulation: Configuring Link [port %02x]\n",
+		port);
+
+	if (ECORE_IS_BB(p_hwfn->p_dev))
 		ecore_emul_link_init_bb(p_hwfn, p_ptt);
+	else
+		ecore_emul_link_init_ah(p_hwfn, p_ptt);
+
+	return;
 }
 
 static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
@@ -4190,13 +4198,13 @@ static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn)
 	/* clear indirect access */
 	if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0);
+			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5, 0);
+			 PGLUE_B_REG_PGL_ADDR_EC_F0_K2, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5, 0);
+			 PGLUE_B_REG_PGL_ADDR_F0_F0_K2, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5, 0);
+			 PGLUE_B_REG_PGL_ADDR_F4_F0_K2, 0);
 	} else {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
 			 PGLUE_B_REG_PGL_ADDR_88_F0_BB, 0);
@@ -5178,7 +5186,7 @@ static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
 #endif
 		for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
 			port = ecore_rd(p_hwfn, p_ptt,
-					CNIG_REG_NIG_PORT0_CONF_K2_E5 +
+					CNIG_REG_NIG_PORT0_CONF_K2 +
 					(i * 4));
 			if (port & 1)
 				p_dev->num_ports_in_engine++;
@@ -5612,13 +5620,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 	if (CHIP_REV_IS_FPGA(p_dev)) {
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround; Prevent DMAE parities\n");
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2_E5,
+		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2,
 			 7);
 
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround: Set VF bar0 size\n");
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_VF_BAR0_SIZE_K2_E5, 4);
+			 PGLUE_B_REG_VF_BAR0_SIZE_K2, 4);
 	}
 #endif
 
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 2ce0ea9e5..7a94ed506 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -73,306 +73,219 @@ struct xstorm_core_conn_st_ctx {
 	__le32 reserved0[55] /* Pad to 15 cycles */;
 };
 
-struct e4_xstorm_core_conn_ag_ctx {
+struct xstorm_core_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 core_state /* state */;
 	u8 flags0;
-/* exist_in_qm0 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
-/* exist_in_qm1 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
-/* exist_in_qm2 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
-/* exist_in_qm3 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
-/* bit4 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
+#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1 /* exist_in_qm0 */
+#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1 /* exist_in_qm1 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1 /* exist_in_qm2 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
+#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1 /* exist_in_qm3 */
+#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1 /* bit4 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
 /* cf_array_active */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
-/* bit6 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
-/* bit7 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1 /* bit6 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1 /* bit7 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
 	u8 flags1;
-/* bit8 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
-/* bit9 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
-/* bit10 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
-/* bit11 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
-/* bit12 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
-/* bit13 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
-/* bit14 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
-/* bit15 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1 /* bit8 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1 /* bit9 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1 /* bit10 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
+#define XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1 /* bit11 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
+#define XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1 /* bit12 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
+#define XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1 /* bit13 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
+#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1 /* bit14 */
+#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1 /* bit15 */
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
 	u8 flags2;
-/* timer0cf */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
-/* timer1cf */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
-/* timer2cf */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
+#define XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3 /* timer0cf */
+#define XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
+#define XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3 /* timer1cf */
+#define XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
+#define XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3 /* timer2cf */
+#define XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
 /* timer_stop_all */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
+#define XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
+#define XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
 	u8 flags3;
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
+#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
+#define XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
+#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
+#define XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
+#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
+#define XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
+#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
+#define XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
 	u8 flags4;
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
-/* cf10 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
-/* cf11 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
+#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
+#define XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
+#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
+#define XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
+#define XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3 /* cf10 */
+#define XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
+#define XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3 /* cf11 */
+#define XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
 	u8 flags5;
-/* cf12 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
-/* cf13 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
-/* cf14 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
-/* cf15 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
+#define XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3 /* cf12 */
+#define XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
+#define XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3 /* cf13 */
+#define XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
+#define XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3 /* cf14 */
+#define XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
+#define XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3 /* cf15 */
+#define XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
 	u8 flags6;
-/* cf16 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
-/* cf_array_cf */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
-/* cf18 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
-/* cf19 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3 /* cf16 */
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
+#define XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3 /* cf_array_cf */
+#define XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3 /* cf18 */
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
+#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3 /* cf19 */
+#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
 	u8 flags7;
-/* cf20 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
-/* cf21 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
-/* cf22 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
-/* cf0en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
-/* cf1en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
+#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3 /* cf20 */
+#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3 /* cf21 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3 /* cf22 */
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
+#define XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1 /* cf0en */
+#define XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
+#define XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1 /* cf1en */
+#define XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
 	u8 flags8;
-/* cf2en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
-/* cf3en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
-/* cf4en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
-/* cf5en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
-/* cf6en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
-/* cf7en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
-/* cf8en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
-/* cf9en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
+#define XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1 /* cf2en */
+#define XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
+#define XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1 /* cf3en */
+#define XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
+#define XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1 /* cf4en */
+#define XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
+#define XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1 /* cf5en */
+#define XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
+#define XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1 /* cf6en */
+#define XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
+#define XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1 /* cf7en */
+#define XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
+#define XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1 /* cf8en */
+#define XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
+#define XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1 /* cf9en */
+#define XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
 	u8 flags9;
-/* cf10en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
-/* cf11en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
-/* cf12en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
-/* cf13en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
-/* cf14en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
-/* cf15en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
-/* cf16en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+#define XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1 /* cf10en */
+#define XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
+#define XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1 /* cf11en */
+#define XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
+#define XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1 /* cf12en */
+#define XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
+#define XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1 /* cf13en */
+#define XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
+#define XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1 /* cf14en */
+#define XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
+#define XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1 /* cf15en */
+#define XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1 /* cf16en */
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
 /* cf_array_cf_en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
+#define XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
+#define XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
 	u8 flags10;
-/* cf18en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
-/* cf19en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
-/* cf20en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
-/* cf21en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
-/* cf22en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
-/* cf23en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
-/* rule0en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
-/* rule1en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1 /* cf18en */
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
+#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1 /* cf19en */
+#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
+#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1 /* cf20en */
+#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1 /* cf21en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1 /* cf22en */
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
+#define XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1 /* cf23en */
+#define XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1 /* rule0en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1 /* rule1en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
 	u8 flags11;
-/* rule2en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
-/* rule3en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
-/* rule4en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
-/* rule5en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
-/* rule6en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
-/* rule7en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
-/* rule8en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
-/* rule9en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1 /* rule2en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1 /* rule3en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
+#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1 /* rule4en */
+#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
+#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1 /* rule5en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
+#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1 /* rule6en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
+#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1 /* rule7en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1 /* rule8en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
+#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1 /* rule9en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
 	u8 flags12;
-/* rule10en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
-/* rule11en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
-/* rule12en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
-/* rule13en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
-/* rule14en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
-/* rule15en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
-/* rule16en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
-/* rule17en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
+#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1 /* rule10en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
+#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1 /* rule11en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1 /* rule12en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1 /* rule13en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
+#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1 /* rule14en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
+#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1 /* rule15en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
+#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1 /* rule16en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
+#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1 /* rule17en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
 	u8 flags13;
-/* rule18en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
-/* rule19en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
-/* rule20en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
-/* rule21en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
-/* rule22en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
-/* rule23en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
-/* rule24en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
-/* rule25en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
+#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1 /* rule18en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
+#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1 /* rule19en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1 /* rule20en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1 /* rule21en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1 /* rule22en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1 /* rule23en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1 /* rule24en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1 /* rule25en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
 	u8 flags14;
-/* bit16 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
-/* bit17 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
-/* bit18 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
-/* bit19 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
-/* bit20 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
-/* bit21 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
-/* cf23 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
+#define XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1 /* bit16 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
+#define XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1 /* bit17 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
+#define XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1 /* bit18 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
+#define XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1 /* bit19 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
+#define XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1 /* bit20 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
+#define XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1 /* bit21 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
+#define XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3 /* cf23 */
+#define XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
 	u8 byte2 /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 consolid_prod /* physical_q1 */;
@@ -426,89 +339,89 @@ struct e4_xstorm_core_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
 
-struct e4_tstorm_core_conn_ag_ctx {
+struct tstorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
+#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
+#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
+#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
+#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
+#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
 	u8 flags1;
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
+#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
+#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
+#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
+#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
 	u8 flags2;
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
+#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
+#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
+#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
+#define TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
+#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
+#define TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
 	u8 flags3;
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
+#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
+#define TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
+#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
+#define TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
+#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
+#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
+#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
+#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
 	u8 flags4;
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
+#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
+#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
+#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
+#define TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
+#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
+#define TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
+#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
+#define TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
+#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
+#define TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
+#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags5;
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -530,63 +443,63 @@ struct e4_tstorm_core_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct e4_ustorm_core_conn_ag_ctx {
+struct ustorm_core_conn_ag_ctx {
 	u8 reserved /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define E4_USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define E4_USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define E4_USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
 	u8 flags1;
-#define E4_USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
-#define E4_USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
-#define E4_USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
-#define E4_USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
+#define USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
+#define USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
+#define USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
+#define USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
 	u8 flags2;
-#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
-#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
-#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
-#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
+#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
+#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
+#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
+#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags3;
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -616,7 +529,7 @@ struct ustorm_core_conn_st_ctx {
 /*
  * core connection context
  */
-struct e4_core_conn_context {
+struct core_conn_context {
 /* ystorm storm context */
 	struct ystorm_core_conn_st_ctx ystorm_st_context;
 	struct regpair ystorm_st_padding[2] /* padding */;
@@ -626,11 +539,11 @@ struct e4_core_conn_context {
 /* xstorm storm context */
 	struct xstorm_core_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct e4_xstorm_core_conn_ag_ctx xstorm_ag_context;
+	struct xstorm_core_conn_ag_ctx xstorm_ag_context;
 /* tstorm aggregative context */
-	struct e4_tstorm_core_conn_ag_ctx tstorm_ag_context;
+	struct tstorm_core_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct e4_ustorm_core_conn_ag_ctx ustorm_ag_context;
+	struct ustorm_core_conn_ag_ctx ustorm_ag_context;
 /* mstorm storm context */
 	struct mstorm_core_conn_st_ctx mstorm_st_context;
 /* ustorm storm context */
@@ -2104,90 +2017,6 @@ enum dmae_cmd_src_enum {
 };
 
 
-struct e4_mstorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
-
-
-
-struct e4_ystorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	u8 byte2 /* byte2 */;
-	u8 byte3 /* byte3 */;
-	__le16 word0 /* word0 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le16 word1 /* word1 */;
-	__le16 word2 /* word2 */;
-	__le16 word3 /* word3 */;
-	__le16 word4 /* word4 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-};
 
 
 struct fw_asserts_ram_section {
@@ -2416,23 +2245,23 @@ struct qm_rf_opportunistic_mask {
 /*
  * QM hardware structure of QM map memory
  */
-struct qm_rf_pq_map_e4 {
+struct qm_rf_pq_map {
 	__le32 reg;
-#define QM_RF_PQ_MAP_E4_PQ_VALID_MASK          0x1 /* PQ active */
-#define QM_RF_PQ_MAP_E4_PQ_VALID_SHIFT         0
-#define QM_RF_PQ_MAP_E4_RL_ID_MASK             0xFF /* RL ID */
-#define QM_RF_PQ_MAP_E4_RL_ID_SHIFT            1
+#define QM_RF_PQ_MAP_PQ_VALID_MASK          0x1 /* PQ active */
+#define QM_RF_PQ_MAP_PQ_VALID_SHIFT         0
+#define QM_RF_PQ_MAP_RL_ID_MASK             0xFF /* RL ID */
+#define QM_RF_PQ_MAP_RL_ID_SHIFT            1
 /* the first PQ associated with the VPORT and VOQ of this PQ */
-#define QM_RF_PQ_MAP_E4_VP_PQ_ID_MASK          0x1FF
-#define QM_RF_PQ_MAP_E4_VP_PQ_ID_SHIFT         9
-#define QM_RF_PQ_MAP_E4_VOQ_MASK               0x1F /* VOQ */
-#define QM_RF_PQ_MAP_E4_VOQ_SHIFT              18
-#define QM_RF_PQ_MAP_E4_WRR_WEIGHT_GROUP_MASK  0x3 /* WRR weight */
-#define QM_RF_PQ_MAP_E4_WRR_WEIGHT_GROUP_SHIFT 23
-#define QM_RF_PQ_MAP_E4_RL_VALID_MASK          0x1 /* RL active */
-#define QM_RF_PQ_MAP_E4_RL_VALID_SHIFT         25
-#define QM_RF_PQ_MAP_E4_RESERVED_MASK          0x3F
-#define QM_RF_PQ_MAP_E4_RESERVED_SHIFT         26
+#define QM_RF_PQ_MAP_VP_PQ_ID_MASK          0x1FF
+#define QM_RF_PQ_MAP_VP_PQ_ID_SHIFT         9
+#define QM_RF_PQ_MAP_VOQ_MASK               0x1F /* VOQ */
+#define QM_RF_PQ_MAP_VOQ_SHIFT              18
+#define QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_MASK  0x3 /* WRR weight */
+#define QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_SHIFT 23
+#define QM_RF_PQ_MAP_RL_VALID_MASK          0x1 /* RL active */
+#define QM_RF_PQ_MAP_RL_VALID_SHIFT         25
+#define QM_RF_PQ_MAP_RESERVED_MASK          0x3F
+#define QM_RF_PQ_MAP_RESERVED_SHIFT         26
 };
 
 
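[Editor's note, not part of the patch: the rename above only drops the E4_ prefix from these MASK/SHIFT pairs; callers that pack the qm_rf_pq_map register word keep the same logic and just use the new names. The standalone C sketch below illustrates how such a field pair is typically packed and unpacked. The GET_FIELD/SET_FIELD helpers are re-declared locally for illustration and are assumed to mirror the driver's usual bitfield helpers; the mask/shift values are copied from the renamed definitions in the hunk above.]

/* Minimal, self-contained illustration -- not part of the patch. */
#include <stdint.h>
#include <stdio.h>

/* Values copied from the renamed qm_rf_pq_map definitions above. */
#define QM_RF_PQ_MAP_PQ_VALID_MASK   0x1
#define QM_RF_PQ_MAP_PQ_VALID_SHIFT  0
#define QM_RF_PQ_MAP_RL_ID_MASK      0xFF
#define QM_RF_PQ_MAP_RL_ID_SHIFT     1
#define QM_RF_PQ_MAP_VP_PQ_ID_MASK   0x1FF
#define QM_RF_PQ_MAP_VP_PQ_ID_SHIFT  9

/* Local stand-ins assumed to mirror the driver's usual helpers. */
#define GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & name##_MASK)
#define SET_FIELD(value, name, flag) \
	do { \
		(value) &= ~((uint32_t)(name##_MASK) << (name##_SHIFT)); \
		(value) |= ((uint32_t)(flag) & (name##_MASK)) << (name##_SHIFT); \
	} while (0)

int main(void)
{
	uint32_t reg = 0; /* stands in for struct qm_rf_pq_map.reg */

	SET_FIELD(reg, QM_RF_PQ_MAP_PQ_VALID, 1);  /* mark the PQ active */
	SET_FIELD(reg, QM_RF_PQ_MAP_RL_ID, 7);     /* rate limiter ID */
	SET_FIELD(reg, QM_RF_PQ_MAP_VP_PQ_ID, 42); /* first PQ of this VPORT/VOQ */

	printf("reg=0x%08x rl_id=%u vp_pq_id=%u\n", reg,
	       (unsigned int)GET_FIELD(reg, QM_RF_PQ_MAP_RL_ID),
	       (unsigned int)GET_FIELD(reg, QM_RF_PQ_MAP_VP_PQ_ID));
	return 0;
}
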
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 7bc094792..b1cab2910 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -32,312 +32,224 @@ struct xstorm_eth_conn_st_ctx {
 	__le32 reserved[60];
 };
 
-struct e4_xstorm_eth_conn_ag_ctx {
+struct xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 eth_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
 /* exist_in_qm1 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
 /* exist_in_qm2 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
 /* exist_in_qm3 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1 /* bit4 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
 /* cf_array_active */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
-/* bit6 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
-/* bit7 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
-/* bit8 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
-/* bit9 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
-/* bit10 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
-/* bit11 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
-/* bit12 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
-/* bit13 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
-/* bit14 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1 /* bit10 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1 /* bit11 */
+#define XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
+#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED2_MASK            0x1 /* bit12 */
+#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED2_SHIFT           4
+#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED3_MASK            0x1 /* bit13 */
+#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED3_SHIFT           5
+#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
+#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
-/* timer0cf */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
-/* timer1cf */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
-/* timer2cf */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
+#define XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3 /* timer0cf */
+#define XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
+#define XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3 /* timer1cf */
+#define XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
+#define XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
+#define XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
 /* timer_stop_all */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
+#define XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
-/* cf4 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
-/* cf5 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
-/* cf6 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
-/* cf7 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
+#define XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
+#define XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
+#define XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
+#define XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
-/* cf8 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
-/* cf9 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
-/* cf10 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
-/* cf11 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
+#define XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
+#define XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
+#define XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
+#define XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
-/* cf12 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
-/* cf13 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
-/* cf14 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
-/* cf15 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
+#define XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
+#define XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
+#define XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
+#define XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
-/* cf16 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
 /* cf_array_cf */
-#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
-/* cf19 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
-/* cf20 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
-/* cf22 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
-/* cf1en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1 /* cf0en */
+#define XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1 /* cf1en */
+#define XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
-/* cf2en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
-/* cf3en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
-/* cf4en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
-/* cf5en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
-/* cf6en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
-/* cf7en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
-/* cf8en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
-/* cf9en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1 /* cf4en */
+#define XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1 /* cf5en */
+#define XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1 /* cf6en */
+#define XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1 /* cf7en */
+#define XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1 /* cf8en */
+#define XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1 /* cf9en */
+#define XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
-/* cf10en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
-/* cf11en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
-/* cf12en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
-/* cf13en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
-/* cf14en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
-/* cf15en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
-/* cf16en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1 /* cf10en */
+#define XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1 /* cf11en */
+#define XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1 /* cf12en */
+#define XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1 /* cf13en */
+#define XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1 /* cf14en */
+#define XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1 /* cf15en */
+#define XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
+#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
 /* cf_array_cf_en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
-/* cf18en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
-/* cf22en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
-/* rule1en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1 /* cf18en */
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
+#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
+#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1 /* cf21en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1 /* cf22en */
+#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
+#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1 /* rule0en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1 /* rule1en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
-/* rule2en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
-/* rule3en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
-/* rule4en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
-/* rule6en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
-/* rule7en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
-/* rule8en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1 /* rule2en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1 /* rule3en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
+#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1 /* rule8en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1 /* rule9en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
-/* rule10en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
-/* rule11en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
-/* rule12en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
-/* rule15en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
-/* rule16en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
-/* rule17en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1 /* rule10en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1 /* rule11en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1 /* rule12en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1 /* rule13en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1 /* rule14en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1 /* rule15en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1 /* rule16en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1 /* rule17en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
-/* rule18en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
-/* rule19en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
-/* rule20en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1 /* rule18en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1 /* rule19en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1 /* rule20en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1 /* rule21en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1 /* rule22en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1 /* rule23en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1 /* rule24en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1 /* rule25en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
-/* bit16 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
+#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
+#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 e5_reserved1 /* physical_q1 */;
@@ -398,47 +310,37 @@ struct ystorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
-struct e4_ystorm_eth_conn_ag_ctx {
+struct ystorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
-/* exist_in_qm0 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
-/* exist_in_qm1 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1 /* exist_in_qm0 */
+#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1 /* exist_in_qm1 */
+#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
 	u8 flags1;
-/* cf0en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
-/* cf1en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
-/* cf2en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
-/* rule0en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
-/* rule1en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
-/* rule2en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
-/* rule3en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
-/* rule4en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1 /* cf0en */
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1 /* cf1en */
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1 /* cf2en */
+#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1 /* rule0en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1 /* rule1en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1 /* rule2en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1 /* rule3en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1 /* rule4en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
 	u8 tx_q0_int_coallecing_timeset /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* word0 */;
@@ -452,89 +354,89 @@ struct e4_ystorm_eth_conn_ag_ctx {
 	__le32 reg3 /* reg3 */;
 };
 
-struct e4_tstorm_eth_conn_ag_ctx {
+struct tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
+#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
+#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
+#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
+#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
+#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
+#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
+#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
+#define TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
 	u8 flags1;
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
+#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
+#define TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
+#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
+#define TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
+#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
+#define TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
+#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
+#define TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
 	u8 flags2;
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
+#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
+#define TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
+#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
+#define TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
+#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
+#define TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
+#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
+#define TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
 	u8 flags3;
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
+#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
+#define TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
+#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
+#define TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
+#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
+#define TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
+#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
+#define TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
+#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
+#define TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
+#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
+#define TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
 	u8 flags4;
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
+#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
+#define TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
+#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
+#define TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
+#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
+#define TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
+#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
+#define TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
+#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
+#define TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
+#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
+#define TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
+#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
+#define TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
+#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
 	u8 flags5;
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
-#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
+#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
+#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
+#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
+#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
+#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
+#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
+#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
+#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
+#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -556,88 +458,66 @@ struct e4_tstorm_eth_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct e4_ustorm_eth_conn_ag_ctx {
+struct ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
+#define USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
+#define USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
 /* exist_in_qm1 */
-#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
-/* timer0cf */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
-/* timer1cf */
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
-/* timer2cf */
-#define E4_USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
+#define USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
+#define USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
+#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3 /* timer0cf */
+#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
+#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3 /* timer1cf */
+#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
+#define USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
+#define USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
 	u8 flags1;
 /* timer_stop_all */
-#define E4_USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
-/* cf4 */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
-/* cf5 */
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
-/* cf6 */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
+#define USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
+#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3 /* cf4 */
+#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
+#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3 /* cf5 */
+#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
+#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3 /* cf6 */
+#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
 	u8 flags2;
-/* cf0en */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
-/* cf1en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
-/* cf2en */
-#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
-/* cf3en */
-#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
-/* cf4en */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
-/* cf5en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
-/* cf6en */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
-/* rule0en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
+#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf0en */
+#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf1en */
+#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+#define USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
+#define USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
+#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1 /* cf4en */
+#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
+#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1 /* cf5en */
+#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
+#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1 /* cf6en */
+#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
+#define USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1 /* rule0en */
+#define USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
 	u8 flags3;
-/* rule1en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
-/* rule2en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
-/* rule3en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
-/* rule4en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
-/* rule5en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
-/* rule6en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
-/* rule7en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
-/* rule8en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
+#define USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1 /* rule1en */
+#define USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
+#define USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1 /* rule2en */
+#define USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
+#define USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1 /* rule3en */
+#define USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
+#define USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1 /* rule4en */
+#define USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
+#define USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
+#define USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
+#define USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
+#define USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1 /* rule8en */
+#define USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -667,7 +547,7 @@ struct mstorm_eth_conn_st_ctx {
 /*
  * eth connection context
  */
-struct e4_eth_conn_context {
+struct eth_conn_context {
 /* tstorm storm context */
 	struct tstorm_eth_conn_st_ctx tstorm_st_context;
 	struct regpair tstorm_st_padding[2] /* padding */;
@@ -676,15 +556,15 @@ struct e4_eth_conn_context {
 /* xstorm storm context */
 	struct xstorm_eth_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct e4_xstorm_eth_conn_ag_ctx xstorm_ag_context;
+	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
 /* ystorm storm context */
 	struct ystorm_eth_conn_st_ctx ystorm_st_context;
 /* ystorm aggregative context */
-	struct e4_ystorm_eth_conn_ag_ctx ystorm_ag_context;
+	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
 /* tstorm aggregative context */
-	struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context;
+	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct e4_ustorm_eth_conn_ag_ctx ustorm_ag_context;
+	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
 /* ustorm storm context */
 	struct ustorm_eth_conn_st_ctx ustorm_st_context;
 /* mstorm storm context */
@@ -1875,37 +1755,37 @@ struct E4XstormEthConnAgCtxDqExtLdPart {
 };
 
 
-struct e4_mstorm_eth_conn_ag_ctx {
+struct mstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
-#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
+#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
+#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+#define MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
+#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
+#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
 	u8 flags1;
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
+#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
+#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
+#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
+#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
+#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
+#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
+#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
+#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
 	__le16 word0 /* word0 */;
 	__le16 word1 /* word1 */;
 	__le32 reg0 /* reg0 */;
@@ -1916,289 +1796,243 @@ struct e4_mstorm_eth_conn_ag_ctx {
 
 
 
-struct e4_xstorm_eth_hw_conn_ag_ctx {
+struct xstorm_eth_hw_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 eth_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
 /* exist_in_qm1 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
 /* exist_in_qm2 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
 /* exist_in_qm3 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1 /* bit4 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
 /* cf_array_active */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
-/* bit10 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
-/* bit11 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
-/* bit12 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
-/* bit13 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
-/* bit14 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1 /* bit10 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1 /* bit11 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
+#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED2_MASK            0x1 /* bit12 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED2_SHIFT           4
+#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED3_MASK            0x1 /* bit13 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED3_SHIFT           5
+#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
 /* timer0cf */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
 /* timer1cf */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
 /* timer2cf */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
 /* timer_stop_all */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
 /* cf_array_cf */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
-/* cf1en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1 /* cf0en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1 /* cf1en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
-/* cf2en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
-/* cf3en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
-/* cf4en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
-/* cf5en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
-/* cf6en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
-/* cf7en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
-/* cf8en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
-/* cf9en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1 /* cf4en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1 /* cf5en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1 /* cf6en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1 /* cf7en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1 /* cf8en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1 /* cf9en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
-/* cf10en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
-/* cf11en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
-/* cf12en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
-/* cf13en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
-/* cf14en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
-/* cf15en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
-/* cf16en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1 /* cf10en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1 /* cf11en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1 /* cf12en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1 /* cf13en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1 /* cf14en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1 /* cf15en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
 /* cf_array_cf_en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
-/* cf18en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
-/* cf22en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
-/* rule1en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1 /* cf18en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1 /* cf21en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1 /* cf22en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1 /* rule0en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1 /* rule1en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
-/* rule2en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
-/* rule3en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
-/* rule4en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
-/* rule6en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
-/* rule7en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
-/* rule8en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1 /* rule2en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1 /* rule3en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1 /* rule8en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1 /* rule9en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
 /* rule10en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
 /* rule11en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
 /* rule12en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
 /* rule13en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
 /* rule14en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
 /* rule15en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
 /* rule16en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
 /* rule17en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
 /* rule18en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
 /* rule19en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
 /* rule20en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
 /* rule21en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
 /* rule22en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
 /* rule23en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
 /* rule24en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
 /* rule25en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
-/* bit16 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 e5_reserved1 /* physical_q1 */;
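
Not part of the patch: a standalone sketch of how the renamed MASK/SHIFT pairs above are consumed once the E4_ prefix is gone. The local helper macros mirror the driver's SET_FIELD/GET_FIELD convention and the flags14 layout is copied from the xstorm definitions above; everything else is illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Field layout copied from the renamed xstorm flags14 definitions above. */
#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK   0x1
#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT  4
#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK       0x3
#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT      6

/* Local stand-ins that mirror the driver's SET_FIELD/GET_FIELD convention. */
#define SKETCH_SET_FIELD(val, name, fval)                          \
	do {                                                       \
		(val) &= ~((name##_MASK) << (name##_SHIFT));       \
		(val) |= (uint8_t)((fval) << (name##_SHIFT));      \
	} while (0)
#define SKETCH_GET_FIELD(val, name) (((val) >> (name##_SHIFT)) & (name##_MASK))

int main(void)
{
	uint8_t flags14 = 0;

	SKETCH_SET_FIELD(flags14, XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE, 1);
	SKETCH_SET_FIELD(flags14, XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE, 2);

	/* Prints 0x90: bit 4 set for L2 EDPM, value 2 in bits 6..7 for TPH. */
	printf("flags14 = 0x%02x, tph = %u\n", flags14,
	       SKETCH_GET_FIELD(flags14, XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE));
	return 0;
}
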
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index 0e157f9bc..1fe4bfc61 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -23,7 +23,6 @@
 enum chip_ids {
 	CHIP_BB,
 	CHIP_K2,
-	CHIP_E5,
 	MAX_CHIP_IDS
 };
 
@@ -134,7 +133,8 @@ enum init_modes {
 	MODE_PORTS_PER_ENG_2,
 	MODE_PORTS_PER_ENG_4,
 	MODE_100G,
-	MODE_E5,
+	MODE_SKIP_PRAM_INIT,
+	MODE_EMUL_MAC,
 	MAX_INIT_MODES
 };
 
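
Not part of the patch: with CHIP_E5 removed, MAX_CHIP_IDS drops from 3 to 2, so any table dimensioned by it loses an entry. A small sketch of the kind of per-chip lookup this affects; the array contents are made up for illustration.

#include <stdio.h>

/* Mirrors the updated enum in ecore_hsi_init_tool.h: the E5 entry is gone. */
enum chip_ids {
	CHIP_BB,
	CHIP_K2,
	MAX_CHIP_IDS
};

int main(void)
{
	/* Hypothetical per-chip values; real tables sized by MAX_CHIP_IDS
	 * now carry two entries instead of three. */
	static const unsigned int demo_ports_per_chip[MAX_CHIP_IDS] = { 2, 4 };
	int chip;

	for (chip = 0; chip < MAX_CHIP_IDS; chip++)
		printf("chip %d -> %u ports\n", chip, demo_ports_per_chip[chip]);
	return 0;
}
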
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index cfc1156eb..34bcc4249 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -18,12 +18,12 @@
 
 #define CDU_VALIDATION_DEFAULT_CFG 61
 
-static u16 con_region_offsets[3][NUM_OF_CONNECTION_TYPES_E4] = {
+static u16 con_region_offsets[3][NUM_OF_CONNECTION_TYPES] = {
 	{ 400,  336,  352,  304,  304,  384,  416,  352}, /* region 3 offsets */
 	{ 528,  496,  416,  448,  448,  512,  544,  480}, /* region 4 offsets */
 	{ 608,  544,  496,  512,  576,  592,  624,  560}  /* region 5 offsets */
 };
-static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES_E4] = {
+static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
 
@@ -160,19 +160,17 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES_E4] = {
 #define QM_CMD_SET_FIELD(var, cmd, field, value) \
 	SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
 
-#define QM_INIT_TX_PQ_MAP(p_hwfn, map, chip, pq_id, rl_valid, \
-			  vp_pq_id, rl_id, ext_voq, wrr) \
-	do {						\
-		OSAL_MEMSET(&map, 0, sizeof(map)); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_PQ_VALID, 1); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_RL_VALID, rl_valid); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_VP_PQ_ID, vp_pq_id); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_RL_ID, rl_id); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_VOQ, ext_voq); \
-		SET_FIELD(map.reg, \
-			  QM_RF_PQ_MAP_##chip##_WRR_WEIGHT_GROUP, wrr); \
-		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id, \
-			     *((u32 *)&map)); \
+#define QM_INIT_TX_PQ_MAP(p_hwfn, map, pq_id, vp_pq_id, \
+			   rl_valid, rl_id, voq, wrr) \
+	do { \
+	OSAL_MEMSET(&map, 0, sizeof(map)); \
+	SET_FIELD(map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); \
+	SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_VALID, rl_valid ? 1 : 0); \
+	SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_ID, rl_id); \
+	SET_FIELD(map.reg, QM_RF_PQ_MAP_VP_PQ_ID, vp_pq_id); \
+	SET_FIELD(map.reg, QM_RF_PQ_MAP_VOQ, voq); \
+	SET_FIELD(map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, wrr); \
+	STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id, *((u32 *)&map));\
 	} while (0)
 
 #define WRITE_PQ_INFO_TO_RAM		1
@@ -497,12 +495,11 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		}
 
 		/* Prepare PQ map entry */
-		struct qm_rf_pq_map_e4 tx_pq_map;
+		struct qm_rf_pq_map tx_pq_map;
 
-		QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, E4, pq_id, rl_valid ?
-				  1 : 0,
-				  first_tx_pq_id, rl_valid ?
-				  pq_params[i].vport_id : 0,
+		QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, pq_id, first_tx_pq_id,
+				  rl_valid ? 1 : 0,
+				  rl_valid ? pq_params[i].vport_id : 0,
 				  ext_voq, pq_params[i].wrr_group);
 
 		/* Set PQ base address */
@@ -1577,9 +1574,9 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		return;
 
 	/* Update DORQ registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5,
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2,
 		 eth_geneve_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5,
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2,
 		 ip_geneve_enable ? 1 : 0);
 }
 
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index 7368d55f7..c8536380c 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -29,7 +29,7 @@ struct ecore_pi_info {
 struct ecore_sb_sp_info {
 	struct ecore_sb_info sb_info;
 	/* per protocol index data */
-	struct ecore_pi_info pi_info_arr[PIS_PER_SB_E4];
+	struct ecore_pi_info pi_info_arr[MAX_PIS_PER_SB];
 };
 
 enum ecore_attention_type {
@@ -1514,7 +1514,7 @@ static void _ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
 	if (IS_VF(p_hwfn->p_dev))
 		return;/* @@@TBD MichalK- VF CAU... */
 
-	sb_offset = igu_sb_id * PIS_PER_SB_E4;
+	sb_offset = igu_sb_id * MAX_PIS_PER_SB;
 	OSAL_MEMSET(&pi_entry, 0, sizeof(struct cau_pi_entry));
 
 	SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_PI_TIMESET, timeset);
@@ -2692,10 +2692,10 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
 				    IGU_REG_CONSUMER_MEM + sbid * 4);
 
-	for (i = 0; i < PIS_PER_SB_E4; i++)
+	for (i = 0; i < MAX_PIS_PER_SB; i++)
 		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
 					      CAU_REG_PI_MEMORY +
-					      sbid * 4 * PIS_PER_SB_E4 +
+					      sbid * 4 * MAX_PIS_PER_SB +
 					      i * 4);
 
 	return ECORE_SUCCESS;
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index ff2310cff..5042cd1d1 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -16,8 +16,8 @@
 #define ECORE_SB_ATT_IDX	0x0001
 #define ECORE_SB_EVENT_MASK	0x0003
 
-#define SB_ALIGNED_SIZE(p_hwfn)					\
-	ALIGNED_TYPE_SIZE(struct status_block_e4, p_hwfn)
+#define SB_ALIGNED_SIZE(p_hwfn) \
+	ALIGNED_TYPE_SIZE(struct status_block, p_hwfn)
 
 #define ECORE_SB_INVALID_IDX	0xffff
 
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index 42538a46c..abea2a716 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -24,7 +24,7 @@ enum ecore_int_mode {
 #endif
 
 struct ecore_sb_info {
-	struct status_block_e4 *sb_virt;
+	struct status_block *sb_virt;
 	dma_addr_t sb_phys;
 	u32 sb_ack;		/* Last given ack */
 	u16 igu_sb_id;
@@ -42,7 +42,7 @@ struct ecore_sb_info {
 struct ecore_sb_info_dbg {
 	u32 igu_prod;
 	u32 igu_cons;
-	u16 pi[PIS_PER_SB_E4];
+	u16 pi[MAX_PIS_PER_SB];
 };
 
 struct ecore_sb_cnt_info {
@@ -65,7 +65,7 @@ static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info)
 	/* barrier(); status block is written to by the chip */
 	/* FIXME: need some sort of barrier. */
 	prod = OSAL_LE32_TO_CPU(sb_info->sb_virt->prod_index) &
-	    STATUS_BLOCK_E4_PROD_INDEX_MASK;
+	       STATUS_BLOCK_PROD_INDEX_MASK;
 	if (sb_info->sb_ack != prod) {
 		sb_info->sb_ack = prod;
 		rc |= ECORE_SB_IDX;
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 55de7086d..c998dbf8d 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -740,7 +740,7 @@ ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
  * @param p_hwfn
  * @param rel_vf_id
  *
- * @return MAX_NUM_VFS_E4 in case no further active VFs, otherwise index.
+ * @return MAX_NUM_VFS_K2 in case no further active VFs, otherwise index.
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
@@ -764,7 +764,7 @@ void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid,
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
 	for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0);		\
-	     _i < MAX_NUM_VFS_E4;					\
+	     _i < MAX_NUM_VFS_K2;					\
 	     _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1))
 
 #endif
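
[Editor's note] The hunks above change the VF iteration bound from MAX_NUM_VFS_E4 to MAX_NUM_VFS_K2 in both the ecore_iov_get_next_active_vf() doc comment and the ecore_for_each_vf() macro. A minimal illustrative sketch of a caller, assuming the ecore headers from this series; the wrapper function itself is hypothetical:

/* Illustrative sketch only: walk the active VFs with the macro shown
 * above.  ecore_for_each_vf() starts at the first active VF and stops
 * once ecore_iov_get_next_active_vf() returns MAX_NUM_VFS_K2. */
static u16 example_count_active_vfs(struct ecore_hwfn *p_hwfn)
{
	u16 rel_vf_id, num_active = 0;

	ecore_for_each_vf(p_hwfn, rel_vf_id)
		num_active++;

	return num_active;
}
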
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 1a5152ec5..23336c282 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1703,7 +1703,7 @@ static void ecore_mcp_update_stag(struct ecore_hwfn *p_hwfn,
 
 			/* Configure DB to add external vlan to EDPM packets */
 			ecore_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID_BB_K2,
+			ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID,
 				 p_hwfn->hw_info.ovlan);
 		} else {
 			ecore_wr(p_hwfn, p_ptt, NIG_REG_LLH_FUNC_TAG_EN, 0);
@@ -1711,7 +1711,7 @@ static void ecore_mcp_update_stag(struct ecore_hwfn *p_hwfn,
 
 			/* Configure DB to add external vlan to EDPM packets */
 			ecore_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 0);
-			ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID_BB_K2, 0);
+			ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID, 0);
 		}
 
 		ecore_sp_pf_update_stag(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 88ad961e7..486b21dd9 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -188,7 +188,7 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 				    struct ecore_spq *p_spq)
 {
-	struct e4_core_conn_context *p_cxt;
+	struct core_conn_context *p_cxt;
 	struct ecore_cxt_info cxt_info;
 	u16 physical_q;
 	enum _ecore_status_t rc;
@@ -210,14 +210,14 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
 		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
+			  XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
 		SET_FIELD(p_cxt->xstorm_ag_context.flags1,
-			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
+			  XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
 		/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
 		 *	  E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
 		 */
 		SET_FIELD(p_cxt->xstorm_ag_context.flags9,
-			  E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+			  XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
 	}
 
 	/* CDU validation - FIXME currently disabled */
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 7d73ef9fb..d771ac6d4 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -1787,7 +1787,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	/* fill in pfdev info */
 	pfdev_info->chip_num = p_hwfn->p_dev->chip_num;
 	pfdev_info->db_size = 0;	/* @@@ TBD MichalK Vf Doorbells */
-	pfdev_info->indices_per_sb = PIS_PER_SB_E4;
+	pfdev_info->indices_per_sb = MAX_PIS_PER_SB;
 
 	pfdev_info->capabilities = PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED |
 				   PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE;
@@ -4383,7 +4383,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 			return i;
 
 out:
-	return MAX_NUM_VFS_E4;
+	return MAX_NUM_VFS_K2;
 }
 
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 50c7d2c93..e748e67d7 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -14,7 +14,7 @@
 #include "ecore_l2.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
-	(MAX_NUM_VFS_E4 * ECORE_ETH_VF_NUM_VLAN_FILTERS)
+	(MAX_NUM_VFS_K2 * ECORE_ETH_VF_NUM_VLAN_FILTERS)
 
 /* Represents a full message. Both the request filled by VF
  * and the response filled by the PF. The VF needs one copy
@@ -173,7 +173,7 @@ struct ecore_vf_info {
  * capability enabled.
  */
 struct ecore_pf_iov {
-	struct ecore_vf_info	vfs_array[MAX_NUM_VFS_E4];
+	struct ecore_vf_info	vfs_array[MAX_NUM_VFS_K2];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
 
 #ifndef REMOVE_DBG
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index be59f7738..9277b46fa 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -134,7 +134,7 @@
 	0x009060UL
 #define  MISCS_REG_CLK_100G_MODE	\
 	0x009070UL
-#define MISCS_REG_RESET_PL_HV_2 \
+#define MISCS_REG_RESET_PL_HV_2_K2 \
 	0x009150UL
 #define  MSDM_REG_ENABLE_IN1 \
 	0xfc0004UL
@@ -1109,7 +1109,7 @@
 #define DORQ_REG_PF_MIN_ADDR_REG1 0x100400UL
 #define MISCS_REG_FUNCTION_HIDE 0x0096f0UL
 #define PCIE_REG_PRTY_MASK 0x0547b4UL
-#define PGLUE_B_REG_VF_BAR0_SIZE 0x2aaeb4UL
+#define PGLUE_B_REG_VF_BAR0_SIZE_K2 0x2aaeb4UL
 #define BAR0_MAP_REG_YSDM_RAM 0x1e80000UL
 #define SEM_FAST_REG_INT_RAM_SIZE 20480
 #define MCP_REG_SCRATCH_SIZE 57344
@@ -1136,12 +1136,12 @@
 #define PGLUE_B_REG_MSDM_OFFSET_MASK_B 0x2aa1c0UL
 #define PRS_REG_PKT_LEN_STAT_TAGS_NOT_COUNTED_FIRST 0x1f0a0cUL
 #define PRS_REG_SEARCH_FCOE 0x1f0408UL
-#define PGLUE_B_REG_PGL_ADDR_E8_F0 0x2aaf98UL
+#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2 0x2aaf98UL
 #define NIG_REG_DSCP_TO_TC_MAP_ENABLE 0x5088f8UL
-#define PGLUE_B_REG_PGL_ADDR_EC_F0 0x2aaf9cUL
-#define PGLUE_B_REG_PGL_ADDR_F0_F0 0x2aafa0UL
+#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2 0x2aaf9cUL
+#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2 0x2aafa0UL
 #define PRS_REG_ROCE_DEST_QP_MAX_PF 0x1f0430UL
-#define PGLUE_B_REG_PGL_ADDR_F4_F0 0x2aafa4UL
+#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2 0x2aafa4UL
 #define IGU_REG_WRITE_DONE_PENDING 0x180900UL
 #define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR 0x50196cUL
 #define PRS_REG_MSG_INFO 0x1f0a1cUL
@@ -1157,30 +1157,30 @@
 #define CDU_REG_CCFC_CTX_VALID1 0x580404UL
 #define CDU_REG_TCFC_CTX_VALID0 0x580408UL
 
-#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5 0x10092cUL
-#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5 0x100930UL
-#define MISCS_REG_RESET_PL_HV_2_K2_E5 0x009150UL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2 0x100930UL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2 0x10092cUL
 #define CNIG_REG_NW_PORT_MODE_BB 0x218200UL
 #define CNIG_REG_PMEG_IF_CMD_BB 0x21821cUL
 #define CNIG_REG_PMEG_IF_ADDR_BB 0x218224UL
 #define CNIG_REG_PMEG_IF_WRDATA_BB 0x218228UL
-#define NWM_REG_MAC0_K2_E5 0x800400UL
-#define CNIG_REG_NIG_PORT0_CONF_K2_E5 0x218200UL
-#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT 0
-#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT 1
-#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT 3
-#define ETH_MAC_REG_XIF_MODE_K2_E5 0x000080UL
-#define ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT 0
-#define ETH_MAC_REG_FRM_LENGTH_K2_E5 0x000014UL
-#define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT 0
-#define ETH_MAC_REG_TX_IPG_LENGTH_K2_E5 0x000044UL
-#define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT 0
-#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5 0x00001cUL
-#define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT 0
-#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5 0x000020UL
-#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT 16
-#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT 0
-#define ETH_MAC_REG_COMMAND_CONFIG_K2_E5 0x000008UL
+#define NWM_REG_MAC0_K2 0x800400UL
+  #define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_SHIFT 0
+  #define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_SHIFT 1
+  #define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_SHIFT 3
+#define ETH_MAC_REG_XIF_MODE_K2 0x000080UL
+  #define ETH_MAC_REG_XIF_MODE_XGMII_K2_SHIFT 0
+#define ETH_MAC_REG_FRM_LENGTH_K2 0x000014UL
+  #define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_SHIFT 0
+#define ETH_MAC_REG_TX_IPG_LENGTH_K2 0x000044UL
+  #define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_SHIFT 0
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2 0x00001cUL
+  #define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_SHIFT 0
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2 0x000020UL
+  #define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_SHIFT 16
+  #define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_SHIFT 0
+  #define ETH_MAC_REG_COMMAND_CONFIG_CRC_FWD_K2 (0x1 << 6)
+  #define ETH_MAC_REG_COMMAND_CONFIG_CRC_FWD_K2_SHIFT 6
+#define ETH_MAC_REG_COMMAND_CONFIG_K2 0x000008UL
 #define MISC_REG_XMAC_CORE_PORT_MODE_BB 0x008c08UL
 #define MISC_REG_XMAC_PHY_PORT_MODE_BB 0x008c04UL
 #define XMAC_REG_MODE_BB 0x210008UL
@@ -1192,17 +1192,12 @@
 #define XMAC_REG_RX_CTRL_BB 0x210030UL
 #define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1UL << 12)
 
-#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5 0x2aaf98UL
-#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5 0x2aaf9cUL
-#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5 0x2aafa0UL
-#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5 0x2aafa4UL
 #define PGLUE_B_REG_PGL_ADDR_88_F0_BB 0x2aa404UL
 #define PGLUE_B_REG_PGL_ADDR_8C_F0_BB 0x2aa408UL
 #define PGLUE_B_REG_PGL_ADDR_90_F0_BB 0x2aa40cUL
 #define PGLUE_B_REG_PGL_ADDR_94_F0_BB 0x2aa410UL
 #define MISCS_REG_FUNCTION_HIDE_BB_K2 0x0096f0UL
-#define PCIE_REG_PRTY_MASK_K2_E5 0x0547b4UL
-#define PGLUE_B_REG_VF_BAR0_SIZE_K2_E5 0x2aaeb4UL
+#define PCIE_REG_PRTY_MASK_K2 0x0547b4UL
 
 #define PRS_REG_OUTPUT_FORMAT_4_0_BB_K2 0x1f099cUL
 
@@ -1233,10 +1228,10 @@
 #define NIG_REG_LLH_FUNC_TAG_EN 0x5019b0UL
 #define NIG_REG_LLH_FUNC_TAG_VALUE 0x5019d0UL
 #define DORQ_REG_TAG1_OVRD_MODE 0x1008b4UL
-#define DORQ_REG_PF_PCP_BB_K2 0x1008c4UL
-#define DORQ_REG_PF_EXT_VID_BB_K2 0x1008c8UL
+#define DORQ_REG_PF_PCP 0x1008c4UL
+#define DORQ_REG_PF_EXT_VID 0x1008c8UL
 #define PRS_REG_SEARCH_NON_IP_AS_GFT 0x1f11c0UL
 #define NIG_REG_LLH_PPFID2PFID_TBL_0 0x501970UL
 #define NIG_REG_PPF_TO_ENGINE_SEL 0x508900UL
 #define NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL 0x501b98UL
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_BB_K2 0x501b40UL
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL 0x501b40UL
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index dbb74fc64..abc86402d 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -569,12 +569,12 @@ qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
 		  uint16_t sb_id)
 {
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct status_block_e4 *sb_virt;
+	struct status_block *sb_virt;
 	dma_addr_t sb_phys;
 	int rc;
 
 	sb_virt = OSAL_DMA_ALLOC_COHERENT(edev, &sb_phys,
-					  sizeof(struct status_block_e4));
+					  sizeof(struct status_block));
 	if (!sb_virt) {
 		DP_ERR(edev, "Status block allocation failed\n");
 		return -ENOMEM;
@@ -584,7 +584,7 @@ qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
 	if (rc) {
 		DP_ERR(edev, "Status block initialization failed\n");
 		OSAL_DMA_FREE_COHERENT(edev, sb_virt, sb_phys,
-				       sizeof(struct status_block_e4));
+				       sizeof(struct status_block));
 		return rc;
 	}
 
@@ -683,7 +683,7 @@ void qede_dealloc_fp_resc(struct rte_eth_dev *eth_dev)
 		if (fp->sb_info) {
 			OSAL_DMA_FREE_COHERENT(edev, fp->sb_info->sb_virt,
 				fp->sb_info->sb_phys,
-				sizeof(struct status_block_e4));
+				sizeof(struct status_block));
 			rte_free(fp->sb_info);
 			fp->sb_info = NULL;
 		}
-- 
2.18.0



* [dpdk-dev] [PATCH 5/9] net/qede/base: update rt defs NVM cfg and mcp code
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (3 preceding siblings ...)
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 4/9] net/qede/base: rename HSI datatypes and funcs Rasesh Mody
@ 2019-09-30  2:49 ` Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 6/9] net/qede/base: move dmae code to HSI Rasesh Mody
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Update and add runtime array offsets (rt defs), non-volatile memory
configuration options (nvm cfg), and management co-processor (mcp)
shared code in preparation for updating the firmware to version
8.40.25.0.
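
[Editor's note] For orientation, a minimal illustrative sketch of how an rt def is consumed. STORE_RT_REG() and DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET are taken from this series (ecore_init_fw_funcs.c and ecore_rt_defs.h); the wrapper function is hypothetical:

/* Illustrative sketch only: an *_RT_OFFSET constant indexes the
 * runtime-init array.  STORE_RT_REG() records a value at that index,
 * and the init tool later programs it into the chip register. */
static void example_store_tag1_ethertype(struct ecore_hwfn *p_hwfn,
					 u32 ethertype)
{
	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethertype);
}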

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/bcm_osal.h      |   5 +-
 drivers/net/qede/base/ecore_rt_defs.h | 870 +++++++++++-------------
 drivers/net/qede/base/mcp_public.h    |  59 +-
 drivers/net/qede/base/nvm_cfg.h       | 909 +++++++++++++++++++++++++-
 4 files changed, 1351 insertions(+), 492 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 51edc4151..0f09557cf 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -148,8 +148,8 @@ void osal_dma_free_mem(struct ecore_dev *edev, dma_addr_t phys);
 			      ((u8 *)(uintptr_t)(_p_hwfn->doorbells) +	\
 			      (_db_addr)), (u32)_val)
 
-#define DIRECT_REG_WR64(hwfn, addr, value) nothing
-#define DIRECT_REG_RD64(hwfn, addr) 0
+#define DIRECT_REG_RD64(hwfn, addr) rte_read64(addr)
+#define DIRECT_REG_WR64(hwfn, addr, value) rte_write64((value), (addr))
 
 /* Mutexes */
 
@@ -455,6 +455,7 @@ u32 qede_crc32(u32 crc, u8 *ptr, u32 length);
 
 #define OSAL_DIV_S64(a, b)	((a) / (b))
 #define OSAL_LLDP_RX_TLVS(p_hwfn, tlv_buf, tlv_size) nothing
+#define OSAL_GET_EPOCH(p_hwfn)	0
 #define OSAL_DBG_ALLOC_USER_DATA(p_hwfn, user_data_ptr) (0)
 #define OSAL_DB_REC_OCCURRED(p_hwfn) nothing
 
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 3860e1a56..08b1f4700 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -24,512 +24,428 @@
 #define DORQ_REG_VF_MAX_ICID_5_RT_OFFSET                            13
 #define DORQ_REG_VF_MAX_ICID_6_RT_OFFSET                            14
 #define DORQ_REG_VF_MAX_ICID_7_RT_OFFSET                            15
-#define DORQ_REG_PF_WAKE_ALL_RT_OFFSET                              16
-#define DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET                           17
-#define DORQ_REG_GLB_MAX_ICID_0_RT_OFFSET                           18
-#define DORQ_REG_GLB_MAX_ICID_1_RT_OFFSET                           19
-#define DORQ_REG_GLB_RANGE2CONN_TYPE_0_RT_OFFSET                    20
-#define DORQ_REG_GLB_RANGE2CONN_TYPE_1_RT_OFFSET                    21
-#define DORQ_REG_PRV_PF_MAX_ICID_2_RT_OFFSET                        22
-#define DORQ_REG_PRV_PF_MAX_ICID_3_RT_OFFSET                        23
-#define DORQ_REG_PRV_PF_MAX_ICID_4_RT_OFFSET                        24
-#define DORQ_REG_PRV_PF_MAX_ICID_5_RT_OFFSET                        25
-#define DORQ_REG_PRV_VF_MAX_ICID_2_RT_OFFSET                        26
-#define DORQ_REG_PRV_VF_MAX_ICID_3_RT_OFFSET                        27
-#define DORQ_REG_PRV_VF_MAX_ICID_4_RT_OFFSET                        28
-#define DORQ_REG_PRV_VF_MAX_ICID_5_RT_OFFSET                        29
-#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_2_RT_OFFSET                 30
-#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_3_RT_OFFSET                 31
-#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_4_RT_OFFSET                 32
-#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_5_RT_OFFSET                 33
-#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_2_RT_OFFSET                 34
-#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_3_RT_OFFSET                 35
-#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_4_RT_OFFSET                 36
-#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_5_RT_OFFSET                 37
-#define IGU_REG_PF_CONFIGURATION_RT_OFFSET                          38
-#define IGU_REG_VF_CONFIGURATION_RT_OFFSET                          39
-#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET                           40
-#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET                           41
-#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET                        42
-#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET                       43
-#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET                         44
-#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET                             45
-#define CAU_REG_SB_VAR_MEMORY_RT_SIZE                               1024
-#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET                            1069
-#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE                              1024
-#define CAU_REG_PI_MEMORY_RT_OFFSET                                 2093
+#define DORQ_REG_VF_ICID_BIT_SHIFT_NORM_RT_OFFSET                   16
+#define DORQ_REG_PF_WAKE_ALL_RT_OFFSET                              17
+#define DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET                           18
+#define IGU_REG_PF_CONFIGURATION_RT_OFFSET                          19
+#define IGU_REG_VF_CONFIGURATION_RT_OFFSET                          20
+#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET                           21
+#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET                           22
+#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET                        23
+#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET                       24
+#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET                         25
+#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET                             26
+#define CAU_REG_SB_VAR_MEMORY_RT_SIZE                               736
+#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET                            762
+#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE                              736
+#define CAU_REG_PI_MEMORY_RT_OFFSET                                 1498
 #define CAU_REG_PI_MEMORY_RT_SIZE                                   4416
-#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET                6509
-#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET                  6510
-#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET                  6511
-#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET                     6512
-#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET                     6513
-#define PRS_REG_SEARCH_TCP_RT_OFFSET                                6514
-#define PRS_REG_SEARCH_FCOE_RT_OFFSET                               6515
-#define PRS_REG_SEARCH_ROCE_RT_OFFSET                               6516
-#define PRS_REG_ROCE_DEST_QP_MAX_VF_RT_OFFSET                       6517
-#define PRS_REG_ROCE_DEST_QP_MAX_PF_RT_OFFSET                       6518
-#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET                           6519
-#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET                 6520
-#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET       6521
-#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET                  6522
-#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET                           6523
-#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET                     6524
-#define SRC_REG_FIRSTFREE_RT_OFFSET                                 6525
+#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET                5914
+#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET                  5915
+#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET                  5916
+#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET                     5917
+#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET                     5918
+#define PRS_REG_SEARCH_TCP_RT_OFFSET                                5919
+#define PRS_REG_SEARCH_FCOE_RT_OFFSET                               5920
+#define PRS_REG_SEARCH_ROCE_RT_OFFSET                               5921
+#define PRS_REG_ROCE_DEST_QP_MAX_VF_RT_OFFSET                       5922
+#define PRS_REG_ROCE_DEST_QP_MAX_PF_RT_OFFSET                       5923
+#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET                           5924
+#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET                 5925
+#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET       5926
+#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET                  5927
+#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET                           5928
+#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET                     5929
+#define SRC_REG_FIRSTFREE_RT_OFFSET                                 5930
 #define SRC_REG_FIRSTFREE_RT_SIZE                                   2
-#define SRC_REG_LASTFREE_RT_OFFSET                                  6527
+#define SRC_REG_LASTFREE_RT_OFFSET                                  5932
 #define SRC_REG_LASTFREE_RT_SIZE                                    2
-#define SRC_REG_COUNTFREE_RT_OFFSET                                 6529
-#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET                          6530
-#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET                            6531
-#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET                            6532
-#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET                              6533
-#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET                              6534
-#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET                             6535
-#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET                            6536
-#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET                           6537
-#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET                            6538
-#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET                           6539
-#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET                            6540
-#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET                          6541
-#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET                           6542
-#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET                         6543
-#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET                          6544
-#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET                         6545
-#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET                          6546
-#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET                         6547
-#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET                          6548
-#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET                 6549
-#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET               6550
-#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET               6551
-#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET                           6552
-#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET                         6553
-#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET                         6554
-#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET                       6555
-#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET                     6556
-#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET                     6557
-#define PSWRQ2_REG_VF_BASE_RT_OFFSET                                6558
-#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET                            6559
-#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET                          6560
-#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET                          6561
-#define PSWRQ2_REG_TGSRC_FIRST_ILT_RT_OFFSET                        6562
-#define PSWRQ2_REG_RGSRC_FIRST_ILT_RT_OFFSET                        6563
-#define PSWRQ2_REG_TGSRC_LAST_ILT_RT_OFFSET                         6564
-#define PSWRQ2_REG_RGSRC_LAST_ILT_RT_OFFSET                         6565
-#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET                             6566
-#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE                               26414
-#define PGLUE_REG_B_VF_BASE_RT_OFFSET                               32980
-#define PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET                    32981
-#define PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET                       32982
-#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET                       32983
-#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET                          32984
-#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET                          32985
-#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET                          32986
-#define TM_REG_VF_ENABLE_CONN_RT_OFFSET                             32987
-#define TM_REG_PF_ENABLE_CONN_RT_OFFSET                             32988
-#define TM_REG_PF_ENABLE_TASK_RT_OFFSET                             32989
-#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET                 32990
-#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET                 32991
-#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            32992
+#define SRC_REG_COUNTFREE_RT_OFFSET                                 5934
+#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET                          5935
+#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET                            5936
+#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET                            5937
+#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET                              5938
+#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET                              5939
+#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET                             5940
+#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET                            5941
+#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET                           5942
+#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET                            5943
+#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET                           5944
+#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET                            5945
+#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET                          5946
+#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET                           5947
+#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET                         5948
+#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET                          5949
+#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET                         5950
+#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET                          5951
+#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET                         5952
+#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET                          5953
+#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET                 5954
+#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET               5955
+#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET               5956
+#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET                           5957
+#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET                         5958
+#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET                         5959
+#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET                       5960
+#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET                     5961
+#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET                     5962
+#define PSWRQ2_REG_VF_BASE_RT_OFFSET                                5963
+#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET                            5964
+#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET                          5965
+#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET                          5966
+#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET                             5967
+#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE                               22000
+#define PGLUE_REG_B_VF_BASE_RT_OFFSET                               27967
+#define PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET                    27968
+#define PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET                       27969
+#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET                       27970
+#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET                          27971
+#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET                          27972
+#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET                          27973
+#define TM_REG_VF_ENABLE_CONN_RT_OFFSET                             27974
+#define TM_REG_PF_ENABLE_CONN_RT_OFFSET                             27975
+#define TM_REG_PF_ENABLE_TASK_RT_OFFSET                             27976
+#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET                 27977
+#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET                 27978
+#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            27979
 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE                              416
-#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            33408
-#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              608
-#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                34016
-#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                34017
-#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                34018
-#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           34019
-#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           34020
-#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           34021
-#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           34022
-#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           34023
-#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           34024
-#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           34025
-#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           34026
-#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           34027
-#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           34028
-#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          34029
-#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          34030
-#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          34031
-#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          34032
-#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          34033
-#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          34034
-#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          34035
-#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          34036
-#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          34037
-#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          34038
-#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          34039
-#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          34040
-#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          34041
-#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          34042
-#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          34043
-#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          34044
-#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          34045
-#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          34046
-#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          34047
-#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          34048
-#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          34049
-#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          34050
-#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          34051
-#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          34052
-#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          34053
-#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          34054
-#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          34055
-#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          34056
-#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          34057
-#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          34058
-#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          34059
-#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          34060
-#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          34061
-#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          34062
-#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          34063
-#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          34064
-#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          34065
-#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          34066
-#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          34067
-#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          34068
-#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          34069
-#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          34070
-#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          34071
-#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          34072
-#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          34073
-#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          34074
-#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          34075
-#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          34076
-#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          34077
-#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          34078
-#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          34079
-#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          34080
-#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          34081
-#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          34082
-#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            34083
+#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            28395
+#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              512
+#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                28907
+#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                28908
+#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                28909
+#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           28910
+#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           28911
+#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           28912
+#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           28913
+#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           28914
+#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           28915
+#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           28916
+#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           28917
+#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           28918
+#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           28919
+#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          28920
+#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          28921
+#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          28922
+#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          28923
+#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          28924
+#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          28925
+#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          28926
+#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          28927
+#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          28928
+#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          28929
+#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          28930
+#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          28931
+#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          28932
+#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          28933
+#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          28934
+#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          28935
+#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          28936
+#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          28937
+#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          28938
+#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          28939
+#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          28940
+#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          28941
+#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          28942
+#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          28943
+#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          28944
+#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          28945
+#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          28946
+#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          28947
+#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          28948
+#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          28949
+#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          28950
+#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          28951
+#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          28952
+#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          28953
+#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          28954
+#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          28955
+#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          28956
+#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          28957
+#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          28958
+#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          28959
+#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          28960
+#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          28961
+#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          28962
+#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          28963
+#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          28964
+#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          28965
+#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          28966
+#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          28967
+#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          28968
+#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          28969
+#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          28970
+#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          28971
+#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          28972
+#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          28973
+#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            28974
 #define QM_REG_BASEADDROTHERPQ_RT_SIZE                              128
-#define QM_REG_PTRTBLOTHER_RT_OFFSET                                34211
+#define QM_REG_PTRTBLOTHER_RT_OFFSET                                29102
 #define QM_REG_PTRTBLOTHER_RT_SIZE                                  256
-#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         34467
-#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         34468
-#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          34469
-#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        34470
-#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       34471
-#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            34472
-#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            34473
-#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            34474
-#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            34475
-#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            34476
-#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            34477
-#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            34478
-#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            34479
-#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            34480
-#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            34481
-#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           34482
-#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           34483
-#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           34484
-#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           34485
-#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           34486
-#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           34487
-#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        34488
-#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        34489
-#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        34490
-#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        34491
-#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           34492
-#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           34493
-#define QM_REG_PQTX2PF_0_RT_OFFSET                                  34494
-#define QM_REG_PQTX2PF_1_RT_OFFSET                                  34495
-#define QM_REG_PQTX2PF_2_RT_OFFSET                                  34496
-#define QM_REG_PQTX2PF_3_RT_OFFSET                                  34497
-#define QM_REG_PQTX2PF_4_RT_OFFSET                                  34498
-#define QM_REG_PQTX2PF_5_RT_OFFSET                                  34499
-#define QM_REG_PQTX2PF_6_RT_OFFSET                                  34500
-#define QM_REG_PQTX2PF_7_RT_OFFSET                                  34501
-#define QM_REG_PQTX2PF_8_RT_OFFSET                                  34502
-#define QM_REG_PQTX2PF_9_RT_OFFSET                                  34503
-#define QM_REG_PQTX2PF_10_RT_OFFSET                                 34504
-#define QM_REG_PQTX2PF_11_RT_OFFSET                                 34505
-#define QM_REG_PQTX2PF_12_RT_OFFSET                                 34506
-#define QM_REG_PQTX2PF_13_RT_OFFSET                                 34507
-#define QM_REG_PQTX2PF_14_RT_OFFSET                                 34508
-#define QM_REG_PQTX2PF_15_RT_OFFSET                                 34509
-#define QM_REG_PQTX2PF_16_RT_OFFSET                                 34510
-#define QM_REG_PQTX2PF_17_RT_OFFSET                                 34511
-#define QM_REG_PQTX2PF_18_RT_OFFSET                                 34512
-#define QM_REG_PQTX2PF_19_RT_OFFSET                                 34513
-#define QM_REG_PQTX2PF_20_RT_OFFSET                                 34514
-#define QM_REG_PQTX2PF_21_RT_OFFSET                                 34515
-#define QM_REG_PQTX2PF_22_RT_OFFSET                                 34516
-#define QM_REG_PQTX2PF_23_RT_OFFSET                                 34517
-#define QM_REG_PQTX2PF_24_RT_OFFSET                                 34518
-#define QM_REG_PQTX2PF_25_RT_OFFSET                                 34519
-#define QM_REG_PQTX2PF_26_RT_OFFSET                                 34520
-#define QM_REG_PQTX2PF_27_RT_OFFSET                                 34521
-#define QM_REG_PQTX2PF_28_RT_OFFSET                                 34522
-#define QM_REG_PQTX2PF_29_RT_OFFSET                                 34523
-#define QM_REG_PQTX2PF_30_RT_OFFSET                                 34524
-#define QM_REG_PQTX2PF_31_RT_OFFSET                                 34525
-#define QM_REG_PQTX2PF_32_RT_OFFSET                                 34526
-#define QM_REG_PQTX2PF_33_RT_OFFSET                                 34527
-#define QM_REG_PQTX2PF_34_RT_OFFSET                                 34528
-#define QM_REG_PQTX2PF_35_RT_OFFSET                                 34529
-#define QM_REG_PQTX2PF_36_RT_OFFSET                                 34530
-#define QM_REG_PQTX2PF_37_RT_OFFSET                                 34531
-#define QM_REG_PQTX2PF_38_RT_OFFSET                                 34532
-#define QM_REG_PQTX2PF_39_RT_OFFSET                                 34533
-#define QM_REG_PQTX2PF_40_RT_OFFSET                                 34534
-#define QM_REG_PQTX2PF_41_RT_OFFSET                                 34535
-#define QM_REG_PQTX2PF_42_RT_OFFSET                                 34536
-#define QM_REG_PQTX2PF_43_RT_OFFSET                                 34537
-#define QM_REG_PQTX2PF_44_RT_OFFSET                                 34538
-#define QM_REG_PQTX2PF_45_RT_OFFSET                                 34539
-#define QM_REG_PQTX2PF_46_RT_OFFSET                                 34540
-#define QM_REG_PQTX2PF_47_RT_OFFSET                                 34541
-#define QM_REG_PQTX2PF_48_RT_OFFSET                                 34542
-#define QM_REG_PQTX2PF_49_RT_OFFSET                                 34543
-#define QM_REG_PQTX2PF_50_RT_OFFSET                                 34544
-#define QM_REG_PQTX2PF_51_RT_OFFSET                                 34545
-#define QM_REG_PQTX2PF_52_RT_OFFSET                                 34546
-#define QM_REG_PQTX2PF_53_RT_OFFSET                                 34547
-#define QM_REG_PQTX2PF_54_RT_OFFSET                                 34548
-#define QM_REG_PQTX2PF_55_RT_OFFSET                                 34549
-#define QM_REG_PQTX2PF_56_RT_OFFSET                                 34550
-#define QM_REG_PQTX2PF_57_RT_OFFSET                                 34551
-#define QM_REG_PQTX2PF_58_RT_OFFSET                                 34552
-#define QM_REG_PQTX2PF_59_RT_OFFSET                                 34553
-#define QM_REG_PQTX2PF_60_RT_OFFSET                                 34554
-#define QM_REG_PQTX2PF_61_RT_OFFSET                                 34555
-#define QM_REG_PQTX2PF_62_RT_OFFSET                                 34556
-#define QM_REG_PQTX2PF_63_RT_OFFSET                                 34557
-#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               34558
-#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               34559
-#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               34560
-#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               34561
-#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               34562
-#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               34563
-#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               34564
-#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               34565
-#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               34566
-#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               34567
-#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              34568
-#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              34569
-#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              34570
-#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              34571
-#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              34572
-#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              34573
-#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             34574
-#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             34575
-#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        34576
-#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        34577
-#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          34578
-#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          34579
-#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          34580
-#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          34581
-#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          34582
-#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          34583
-#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          34584
-#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          34585
-#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               34586
+#define QM_REG_VOQCRDLINE_RT_OFFSET                                 29358
+#define QM_REG_VOQCRDLINE_RT_SIZE                                   20
+#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             29378
+#define QM_REG_VOQINITCRDLINE_RT_SIZE                               20
+#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29398
+#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29399
+#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29400
+#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29401
+#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29402
+#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29403
+#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29404
+#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29405
+#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29406
+#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29407
+#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29408
+#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29409
+#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29410
+#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29411
+#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29412
+#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29413
+#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29414
+#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29415
+#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29416
+#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29417
+#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29418
+#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29419
+#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29420
+#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29421
+#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29422
+#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29423
+#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29424
+#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29425
+#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29426
+#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29427
+#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29428
+#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29429
+#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29430
+#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29431
+#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29432
+#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29433
+#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29434
+#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29435
+#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29436
+#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29437
+#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29438
+#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29439
+#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29440
+#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29441
+#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29442
+#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29443
+#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29444
+#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29445
+#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29446
+#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29447
+#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29448
+#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29449
+#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29450
+#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29451
+#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29452
+#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29453
+#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29454
+#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29455
+#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29456
+#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29457
+#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29458
+#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29459
+#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29460
+#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29461
+#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29462
+#define QM_REG_PQTX2PF_38_RT_OFFSET                                 29463
+#define QM_REG_PQTX2PF_39_RT_OFFSET                                 29464
+#define QM_REG_PQTX2PF_40_RT_OFFSET                                 29465
+#define QM_REG_PQTX2PF_41_RT_OFFSET                                 29466
+#define QM_REG_PQTX2PF_42_RT_OFFSET                                 29467
+#define QM_REG_PQTX2PF_43_RT_OFFSET                                 29468
+#define QM_REG_PQTX2PF_44_RT_OFFSET                                 29469
+#define QM_REG_PQTX2PF_45_RT_OFFSET                                 29470
+#define QM_REG_PQTX2PF_46_RT_OFFSET                                 29471
+#define QM_REG_PQTX2PF_47_RT_OFFSET                                 29472
+#define QM_REG_PQTX2PF_48_RT_OFFSET                                 29473
+#define QM_REG_PQTX2PF_49_RT_OFFSET                                 29474
+#define QM_REG_PQTX2PF_50_RT_OFFSET                                 29475
+#define QM_REG_PQTX2PF_51_RT_OFFSET                                 29476
+#define QM_REG_PQTX2PF_52_RT_OFFSET                                 29477
+#define QM_REG_PQTX2PF_53_RT_OFFSET                                 29478
+#define QM_REG_PQTX2PF_54_RT_OFFSET                                 29479
+#define QM_REG_PQTX2PF_55_RT_OFFSET                                 29480
+#define QM_REG_PQTX2PF_56_RT_OFFSET                                 29481
+#define QM_REG_PQTX2PF_57_RT_OFFSET                                 29482
+#define QM_REG_PQTX2PF_58_RT_OFFSET                                 29483
+#define QM_REG_PQTX2PF_59_RT_OFFSET                                 29484
+#define QM_REG_PQTX2PF_60_RT_OFFSET                                 29485
+#define QM_REG_PQTX2PF_61_RT_OFFSET                                 29486
+#define QM_REG_PQTX2PF_62_RT_OFFSET                                 29487
+#define QM_REG_PQTX2PF_63_RT_OFFSET                                 29488
+#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               29489
+#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               29490
+#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               29491
+#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               29492
+#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               29493
+#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               29494
+#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               29495
+#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               29496
+#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               29497
+#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               29498
+#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              29499
+#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              29500
+#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              29501
+#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              29502
+#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              29503
+#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              29504
+#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             29505
+#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             29506
+#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        29507
+#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        29508
+#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          29509
+#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          29510
+#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          29511
+#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          29512
+#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          29513
+#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          29514
+#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          29515
+#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          29516
+#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               29517
 #define QM_REG_RLGLBLINCVAL_RT_SIZE                                 256
-#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           34842
+#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           29773
 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE                             256
-#define QM_REG_RLGLBLCRD_RT_OFFSET                                  35098
+#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30029
 #define QM_REG_RLGLBLCRD_RT_SIZE                                    256
-#define QM_REG_RLGLBLENABLE_RT_OFFSET                               35354
-#define QM_REG_RLPFPERIOD_RT_OFFSET                                 35355
-#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            35356
-#define QM_REG_RLPFINCVAL_RT_OFFSET                                 35357
+#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30285
+#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30286
+#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30287
+#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30288
 #define QM_REG_RLPFINCVAL_RT_SIZE                                   16
-#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             35373
+#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30304
 #define QM_REG_RLPFUPPERBOUND_RT_SIZE                               16
-#define QM_REG_RLPFCRD_RT_OFFSET                                    35389
+#define QM_REG_RLPFCRD_RT_OFFSET                                    30320
 #define QM_REG_RLPFCRD_RT_SIZE                                      16
-#define QM_REG_RLPFENABLE_RT_OFFSET                                 35405
-#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              35406
-#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                35407
+#define QM_REG_RLPFENABLE_RT_OFFSET                                 30336
+#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30337
+#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30338
 #define QM_REG_WFQPFWEIGHT_RT_SIZE                                  16
-#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            35423
+#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30354
 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE                              16
-#define QM_REG_WFQPFCRD_RT_OFFSET                                   35439
-#define QM_REG_WFQPFCRD_RT_SIZE                                     256
-#define QM_REG_WFQPFENABLE_RT_OFFSET                                35695
-#define QM_REG_WFQVPENABLE_RT_OFFSET                                35696
-#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               35697
+#define QM_REG_WFQPFCRD_RT_OFFSET                                   30370
+#define QM_REG_WFQPFCRD_RT_SIZE                                     160
+#define QM_REG_WFQPFENABLE_RT_OFFSET                                30530
+#define QM_REG_WFQVPENABLE_RT_OFFSET                                30531
+#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               30532
 #define QM_REG_BASEADDRTXPQ_RT_SIZE                                 512
-#define QM_REG_TXPQMAP_RT_OFFSET                                    36209
+#define QM_REG_TXPQMAP_RT_OFFSET                                    31044
 #define QM_REG_TXPQMAP_RT_SIZE                                      512
-#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                36721
+#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                31556
 #define QM_REG_WFQVPWEIGHT_RT_SIZE                                  512
-#define QM_REG_WFQVPCRD_RT_OFFSET                                   37233
+#define QM_REG_WFQVPCRD_RT_OFFSET                                   32068
 #define QM_REG_WFQVPCRD_RT_SIZE                                     512
-#define QM_REG_WFQVPMAP_RT_OFFSET                                   37745
+#define QM_REG_WFQVPMAP_RT_OFFSET                                   32580
 #define QM_REG_WFQVPMAP_RT_SIZE                                     512
-#define QM_REG_PTRTBLTX_RT_OFFSET                                   38257
+#define QM_REG_PTRTBLTX_RT_OFFSET                                   33092
 #define QM_REG_PTRTBLTX_RT_SIZE                                     1024
-#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               39281
-#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 320
-#define QM_REG_VOQCRDLINE_RT_OFFSET                                 39601
-#define QM_REG_VOQCRDLINE_RT_SIZE                                   36
-#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             39637
-#define QM_REG_VOQINITCRDLINE_RT_SIZE                               36
-#define QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET                          39673
-#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           39674
-#define NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET                      39675
-#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     39676
-#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     39677
-#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     39678
-#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     39679
-#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  39680
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           39681
+#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               34116
+#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 160
+#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34276
+#define NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET                      34277
+#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     34278
+#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     34279
+#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     34280
+#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     34281
+#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  34282
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           34283
 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE                             4
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        39685
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        34287
 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE                          4
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     39689
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     34291
 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE                       32
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        39721
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        34323
 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE                          16
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      39737
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      34339
 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE                        16
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             39753
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             34355
 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE               16
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   39769
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   34371
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE                     16
-#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              39785
-#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET                         39786
+#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              34387
+#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET                         34388
 #define NIG_REG_PPF_TO_ENGINE_SEL_RT_SIZE                           8
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_OFFSET              39794
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_SIZE                1024
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_OFFSET                 40818
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_SIZE                   512
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_OFFSET               41330
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_SIZE                 512
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET      41842
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE        512
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_OFFSET            42354
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_SIZE              512
-#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_OFFSET                    42866
-#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_SIZE                      32
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           42898
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           42899
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           42900
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       42901
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       42902
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       42903
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       42904
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    42905
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    42906
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    42907
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    42908
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        42909
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     42910
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           42911
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      42912
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    42913
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       42914
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                42915
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    42916
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       42917
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                42918
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    42919
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       42920
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                42921
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    42922
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       42923
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                42924
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    42925
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       42926
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                42927
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    42928
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       42929
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                42930
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    42931
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       42932
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                42933
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    42934
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       42935
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                42936
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    42937
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       42938
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                42939
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    42940
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       42941
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                42942
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   42943
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      42944
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               42945
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   42946
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      42947
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               42948
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   42949
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      42950
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               42951
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   42952
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      42953
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               42954
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   42955
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      42956
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               42957
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   42958
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      42959
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               42960
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   42961
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      42962
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               42963
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   42964
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      42965
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               42966
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   42967
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      42968
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               42969
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   42970
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      42971
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               42972
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ20_RT_OFFSET                   42973
-#define PBF_REG_BTB_GUARANTEED_VOQ20_RT_OFFSET                      42974
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ20_RT_OFFSET               42975
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ21_RT_OFFSET                   42976
-#define PBF_REG_BTB_GUARANTEED_VOQ21_RT_OFFSET                      42977
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ21_RT_OFFSET               42978
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ22_RT_OFFSET                   42979
-#define PBF_REG_BTB_GUARANTEED_VOQ22_RT_OFFSET                      42980
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ22_RT_OFFSET               42981
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ23_RT_OFFSET                   42982
-#define PBF_REG_BTB_GUARANTEED_VOQ23_RT_OFFSET                      42983
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ23_RT_OFFSET               42984
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ24_RT_OFFSET                   42985
-#define PBF_REG_BTB_GUARANTEED_VOQ24_RT_OFFSET                      42986
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ24_RT_OFFSET               42987
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ25_RT_OFFSET                   42988
-#define PBF_REG_BTB_GUARANTEED_VOQ25_RT_OFFSET                      42989
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ25_RT_OFFSET               42990
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ26_RT_OFFSET                   42991
-#define PBF_REG_BTB_GUARANTEED_VOQ26_RT_OFFSET                      42992
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ26_RT_OFFSET               42993
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ27_RT_OFFSET                   42994
-#define PBF_REG_BTB_GUARANTEED_VOQ27_RT_OFFSET                      42995
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ27_RT_OFFSET               42996
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ28_RT_OFFSET                   42997
-#define PBF_REG_BTB_GUARANTEED_VOQ28_RT_OFFSET                      42998
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ28_RT_OFFSET               42999
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ29_RT_OFFSET                   43000
-#define PBF_REG_BTB_GUARANTEED_VOQ29_RT_OFFSET                      43001
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ29_RT_OFFSET               43002
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ30_RT_OFFSET                   43003
-#define PBF_REG_BTB_GUARANTEED_VOQ30_RT_OFFSET                      43004
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ30_RT_OFFSET               43005
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ31_RT_OFFSET                   43006
-#define PBF_REG_BTB_GUARANTEED_VOQ31_RT_OFFSET                      43007
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ31_RT_OFFSET               43008
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ32_RT_OFFSET                   43009
-#define PBF_REG_BTB_GUARANTEED_VOQ32_RT_OFFSET                      43010
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ32_RT_OFFSET               43011
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ33_RT_OFFSET                   43012
-#define PBF_REG_BTB_GUARANTEED_VOQ33_RT_OFFSET                      43013
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ33_RT_OFFSET               43014
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ34_RT_OFFSET                   43015
-#define PBF_REG_BTB_GUARANTEED_VOQ34_RT_OFFSET                      43016
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ34_RT_OFFSET               43017
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ35_RT_OFFSET                   43018
-#define PBF_REG_BTB_GUARANTEED_VOQ35_RT_OFFSET                      43019
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ35_RT_OFFSET               43020
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                43021
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           34396
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           34397
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           34398
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       34399
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       34400
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       34401
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       34402
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    34403
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    34404
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    34405
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    34406
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        34407
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     34408
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34409
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      34410
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    34411
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       34412
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                34413
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    34414
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       34415
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                34416
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    34417
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       34418
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                34419
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    34420
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       34421
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                34422
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    34423
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       34424
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                34425
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    34426
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       34427
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                34428
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    34429
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       34430
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                34431
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    34432
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       34433
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                34434
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    34435
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       34436
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                34437
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    34438
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       34439
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                34440
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   34441
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      34442
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               34443
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   34444
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      34445
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               34446
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   34447
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      34448
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               34449
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   34450
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      34451
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               34452
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   34453
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      34454
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               34455
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   34456
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      34457
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               34458
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   34459
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      34460
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               34461
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   34462
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      34463
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               34464
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   34465
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      34466
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               34467
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   34468
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      34469
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               34470
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                34471
 
-#define RUNTIME_ARRAY_SIZE 43022
+#define RUNTIME_ARRAY_SIZE 34472
 
 /* Init Callbacks */
 #define DMAE_READY_CB                                               0
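
(Editor's note, not part of the patch: each *_RT_OFFSET above is an index into the per-hwfn runtime init array, whose total length is given by RUNTIME_ARRAY_SIZE, and the *_RT_SIZE values give the number of consecutive entries for array-valued registers. The standalone C sketch below is illustrative only; the struct and helper names are hypothetical, not the driver's API.)

  #include <stdbool.h>
  #include <stdint.h>

  #define RUNTIME_ARRAY_SIZE_SKETCH 34472 /* mirrors RUNTIME_ARRAY_SIZE above */

  struct rt_data_sketch {
          uint32_t init_val[RUNTIME_ARRAY_SIZE_SKETCH]; /* one slot per *_RT_OFFSET */
          bool b_valid[RUNTIME_ARRAY_SIZE_SKETCH];      /* marks slots that were written */
  };

  static inline void rt_store_sketch(struct rt_data_sketch *rt,
                                     uint32_t rt_offset, uint32_t val)
  {
          /* e.g. rt_offset == QM_REG_RLPFENABLE_RT_OFFSET */
          rt->init_val[rt_offset] = val;
          rt->b_valid[rt_offset] = true;
  }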
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 13c2e2d11..98b9723dd 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -18,6 +18,15 @@
 #define MCP_PUBLIC_H
 
 #define VF_MAX_STATIC 192	/* In case of AH */
+#define VF_BITMAP_SIZE_IN_DWORDS        (VF_MAX_STATIC / 32)
+#define VF_BITMAP_SIZE_IN_BYTES         (VF_BITMAP_SIZE_IN_DWORDS * sizeof(u32))
+
+/* Extended array size to support 240 VFs (8 dwords) */
+#define EXT_VF_MAX_STATIC               240
+#define EXT_VF_BITMAP_SIZE_IN_DWORDS    (((EXT_VF_MAX_STATIC - 1) / 32) + 1)
+#define EXT_VF_BITMAP_SIZE_IN_BYTES     (EXT_VF_BITMAP_SIZE_IN_DWORDS * \
+					 sizeof(u32))
+#define ADDED_VF_BITMAP_SIZE 2
 
 #define MCP_GLOB_PATH_MAX	2
 #define MCP_PORT_MAX		2	/* Global */
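
(Editor's note, not part of the patch: the bitmap sizing above is plain round-up arithmetic; the standalone snippet below just checks the 240-VF case and is illustrative only.)

  #include <assert.h>
  #include <stdint.h>

  /* Round 240 VFs up to whole 32-bit dwords: ((240 - 1) / 32) + 1 == 8 dwords
   * (32 bytes), versus 192 / 32 == 6 dwords for the legacy bitmap.
   */
  int main(void)
  {
          uint32_t ext_dwords = ((240 - 1) / 32) + 1;
          uint32_t legacy_dwords = 192 / 32;

          assert(ext_dwords == 8);
          assert(ext_dwords - legacy_dwords == 2); /* matches ADDED_VF_BITMAP_SIZE */
          return 0;
  }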
@@ -591,6 +600,8 @@ struct public_path {
 #define PROCESS_KILL_GLOB_AEU_BIT_MASK		0xffff0000
 #define PROCESS_KILL_GLOB_AEU_BIT_OFFSET	16
 #define GLOBAL_AEU_BIT(aeu_reg_id, aeu_bit) (aeu_reg_id * 32 + aeu_bit)
+	/*Added to support E5 240 VFs*/
+	u32 mcp_vf_disabled2[ADDED_VF_BITMAP_SIZE];
 };
 
 /**************************************/
@@ -1270,6 +1281,12 @@ struct public_drv_mb {
 /* params [31:8] - reserved, [7:0] - bitmap */
 #define DRV_MSG_CODE_GET_PPFID_BITMAP		0x43000000
 
+/* Param: [0:15] Option ID, [16] - All, [17] - Init, [18] - Commit,
+ * [19] - Free
+ */
+#define DRV_MSG_CODE_GET_NVM_CFG_OPTION		0x003e0000
+/* Param: [0:15] Option ID,             [17] - Init, [18]       , [19] - Free */
+#define DRV_MSG_CODE_SET_NVM_CFG_OPTION		0x003f0000
 /*deprecated don't use*/
 #define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED    0x02000000
 #define DRV_MSG_CODE_INITIATE_PF_FLR            0x02010000
@@ -1317,6 +1334,7 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_PHY_CORE_WRITE		0x000e0000
 /* Param: [0:3] - version, [4:15] - name (null terminated) */
 #define DRV_MSG_CODE_SET_VERSION		0x000f0000
+#define DRV_MSG_CODE_MCP_RESET_FORCE		0x000f04ce
 /* Halts the MCP. To resume MCP, user will need to use
  * MCP_REG_CPU_STATE/MCP_REG_CPU_MODE registers.
  */
@@ -1607,6 +1625,9 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_SET_LED_MODE_OPER		0x0
 #define DRV_MB_PARAM_SET_LED_MODE_ON		0x1
 #define DRV_MB_PARAM_SET_LED_MODE_OFF		0x2
+#define DRV_MB_PARAM_SET_LED1_MODE_ON		0x3
+#define DRV_MB_PARAM_SET_LED2_MODE_ON		0x4
+#define DRV_MB_PARAM_SET_ACT_LED_MODE_ON	0x6
 
 #define DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET		0
 #define DRV_MB_PARAM_TRANSCEIVER_PORT_MASK		0x00000003
@@ -1664,8 +1685,32 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_ATTRIBUTE_CMD_OFFSET		24
 #define DRV_MB_PARAM_ATTRIBUTE_CMD_MASK		0xFF000000
 
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET		0
+/* Option# */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK		0x0000FFFF
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ALL_OFFSET		16
+/* (Only for Set) Applies option's value to all entities (port/func)
+ * depending on the option type
+ */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ALL_MASK		0x00010000
+#define DRV_MB_PARAM_NVM_CFG_OPTION_INIT_OFFSET		17
+/* When set, and state is IDLE, MFW will allocate resources and load
+ * configuration from NVM
+ */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK		0x00020000
+#define DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_OFFSET	18
+/* (Only for Set) - When set submit changed nvm_cfg1 to flash */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK		0x00040000
+#define DRV_MB_PARAM_NVM_CFG_OPTION_FREE_OFFSET		19
+/* Free - When set, free allocated resources, and return to IDLE state. */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK		0x00080000
+#define SINGLE_NVM_WR_OP(optionId) \
+	((((optionId) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | \
+	 (DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK | \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK))
 	u32 fw_mb_header;
-#define FW_MSG_CODE_MASK                        0xffff0000
 #define FW_MSG_CODE_UNSUPPORTED			0x00000000
 #define FW_MSG_CODE_DRV_LOAD_ENGINE		0x10100000
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
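
(Editor's note, not part of the patch: as a reading aid for the new NVM_CFG_OPTION bit-fields, the mailbox param packs the option ID into bits [0:15] and the control flags above it, which is what SINGLE_NVM_WR_OP() expands to. The helper below is a hypothetical, standalone sketch with the mask values copied from the defines.)

  #include <stdint.h>

  /* Hypothetical helper mirroring SINGLE_NVM_WR_OP(): option ID in [0:15],
   * then INIT (bit 17), COMMIT (bit 18) and FREE (bit 19) so one mailbox
   * command loads the cfg, writes the option and releases the resources.
   */
  static inline uint32_t nvm_cfg_single_wr_param(uint16_t option_id)
  {
          return ((uint32_t)option_id << 0) | /* ..._OPTION_ID_OFFSET */
                 0x00020000 |                 /* ..._OPTION_INIT_MASK */
                 0x00040000 |                 /* ..._OPTION_COMMIT_MASK */
                 0x00080000;                  /* ..._OPTION_FREE_MASK */
  }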
@@ -1704,6 +1749,12 @@ struct public_drv_mb {
 #define FW_MSG_CODE_NIG_DRAIN_DONE              0x30000000
 #define FW_MSG_CODE_VF_DISABLED_DONE            0xb0000000
 #define FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE        0xb0010000
+#define FW_MSG_CODE_ERR_RESOURCE_TEMPORARY_UNAVAILABLE	0x008b0000
+#define FW_MSG_CODE_ERR_RESOURCE_ALREADY_ALLOCATED	0x008c0000
+#define FW_MSG_CODE_ERR_RESOURCE_NOT_ALLOCATED		0x008d0000
+#define FW_MSG_CODE_ERR_NON_USER_OPTION			0x008e0000
+#define FW_MSG_CODE_ERR_UNKNOWN_OPTION			0x008f0000
+#define FW_MSG_CODE_WAIT				0x00900000
 #define FW_MSG_CODE_FLR_ACK                     0x02000000
 #define FW_MSG_CODE_FLR_NACK                    0x02100000
 #define FW_MSG_CODE_SET_DRIVER_DONE		0x02200000
@@ -1783,11 +1834,13 @@ struct public_drv_mb {
 #define FW_MSG_CODE_WOL_READ_BUFFER_OK		0x00850000
 #define FW_MSG_CODE_WOL_READ_BUFFER_INVALID_VAL	0x00860000
 
-#define FW_MSG_SEQ_NUMBER_MASK                  0x0000ffff
-
 #define FW_MSG_CODE_ATTRIBUTE_INVALID_KEY	0x00020000
 #define FW_MSG_CODE_ATTRIBUTE_INVALID_CMD	0x00030000
 
+#define FW_MSG_SEQ_NUMBER_MASK			0x0000ffff
+#define FW_MSG_SEQ_NUMBER_OFFSET		0
+#define FW_MSG_CODE_MASK			0xffff0000
+#define FW_MSG_CODE_OFFSET			16
 	u32 fw_mb_param;
 /* Resource Allocation params - MFW  version support */
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
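
(Editor's note, not part of the patch: with FW_MSG_CODE_MASK/OFFSET now grouped next to the sequence-number fields, decoding fw_mb_header is a plain mask operation. The standalone sketch below is illustrative only, with the mask values copied from the defines.)

  #include <stdint.h>

  /* The code field compares directly against the FW_MSG_CODE_* values above
   * (they already sit in the high 16 bits); the low 16 bits carry the
   * sequence number.
   */
  static inline uint32_t fw_mb_code(uint32_t fw_mb_header)
  {
          return fw_mb_header & 0xffff0000;            /* FW_MSG_CODE_MASK */
  }

  static inline uint16_t fw_mb_seq(uint32_t fw_mb_header)
  {
          return (uint16_t)(fw_mb_header & 0x0000ffff); /* FW_MSG_SEQ_NUMBER_MASK */
  }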
diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index ab86260ed..daa5437dd 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -11,20 +11,21 @@
  * Description: NVM config file - Generated file from nvm cfg excel.
  *              DO NOT MODIFY !!!
  *
- * Created:     5/8/2017
+ * Created:     1/6/2019
  *
  ****************************************************************************/
 
 #ifndef NVM_CFG_H
 #define NVM_CFG_H
 
-#define NVM_CFG_version 0x83000
 
-#define NVM_CFG_new_option_seq 23
+#define NVM_CFG_version 0x84500
 
-#define NVM_CFG_removed_option_seq 1
+#define NVM_CFG_new_option_seq 45
 
-#define NVM_CFG_updated_value_seq 4
+#define NVM_CFG_removed_option_seq 4
+
+#define NVM_CFG_updated_value_seq 13
 
 struct nvm_cfg_mac_address {
 	u32 mac_addr_hi;
@@ -54,6 +55,7 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MF_MODE_NPAR2_0 0x5
 		#define NVM_CFG1_GLOB_MF_MODE_BD 0x6
 		#define NVM_CFG1_GLOB_MF_MODE_UFP 0x7
+		#define NVM_CFG1_GLOB_MF_MODE_DCI_NPAR 0x8
 		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK 0x00001000
 		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_OFFSET 12
 		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_DISABLED 0x0
@@ -153,6 +155,7 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X25G 0xD
 		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G 0xE
 		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X10G 0xF
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G_LIO2 0x10
 		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_MASK 0x00000100
 		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_OFFSET 8
 		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_DISABLED 0x0
@@ -510,6 +513,18 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_OFFSET 28
 		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_DISABLED 0x0
 		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_TI 0x1
+	/*  Enable/Disable PCIE Relaxed Ordering */
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_MASK 0x40000000
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_OFFSET 30
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_DISABLED 0x0
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_ENABLED 0x1
+	/*  Reset the chip using iPOR to release PCIe due to short PERST
+	 *  issues
+	 */
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_MASK 0x80000000
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_OFFSET 31
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_DISABLED 0x0
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_ENABLED 0x1
 	u32 led_global_settings; /* 0x74 */
 		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
@@ -590,6 +605,11 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27
 		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000
 		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29
+	/*  Option to Disable embedded LLDP, 0 - Off, 1 - On */
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_MASK 0x80000000
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_OFFSET 31
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_OFF 0x0
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_ON 0x1
 	u32 mbi_version; /* 0x7C */
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
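
(Editor's note, not part of the patch: all nvm_cfg fields follow the same MASK/OFFSET convention, so reading any option, for example the new LLDP_DISABLE bit added above, is a mask-and-shift. The standalone sketch below is illustrative only.)

  #include <stdint.h>

  /* Generic field read for nvm_cfg MASK/OFFSET pairs. */
  static inline uint32_t nvm_cfg_get_field(uint32_t reg, uint32_t mask,
                                           uint32_t offset)
  {
          return (reg & mask) >> offset;
  }

  /* Usage with the LLDP_DISABLE values above:
   * lldp_off = nvm_cfg_get_field(val, 0x80000000, 31);
   */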
@@ -1037,13 +1057,308 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
 		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
 		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
+	/*  Select the number of allowed port link in aux power */
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_MASK 0x00000300
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_OFFSET 8
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_DEFAULT 0x0
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_1_PORT 0x1
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_2_PORTS 0x2
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_3_PORTS 0x3
+	/*  Set Trace Filter Log Level */
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_MASK 0x00000C00
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_OFFSET 10
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_ALL 0x0
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_DEBUG 0x1
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_TRACE 0x2
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_ERROR 0x3
+	/*  For OCP2.0, MFW listens on SMBUS slave address 0x3e and returns
+	 *  the temperature reading
+	 */
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_MASK 0x00001000
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_OFFSET 12
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_DISABLED 0x0
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_ENABLED 0x1
+	/*  GPIO which triggers when ASIC temperature reaches nvm option 286
+	 *  value
+	 */
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_MASK 0x001FE000
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_OFFSET 13
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO31 0x20
+	/*  Warning temperature threshold used with nvm option 286 */
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_MASK 0x1FE00000
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_OFFSET 21
+	/*  Disable PLDM protocol */
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_MASK 0x20000000
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_OFFSET 29
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_DISABLED 0x0
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_ENABLED 0x1
+	/*  Disable OCBB protocol */
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_MASK 0x40000000
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_OFFSET 30
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_DISABLED 0x0
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_ENABLED 0x1
 	u32 preboot_debug_mode_std; /* 0x140 */
 	u32 preboot_debug_mode_ext; /* 0x144 */
 	u32 ext_phy_cfg1; /* 0x148 */
 	/*  Ext PHY MDI pair swap value */
-		#define NVM_CFG1_GLOB_EXT_PHY_MDI_PAIR_SWAP_MASK 0x0000FFFF
-		#define NVM_CFG1_GLOB_EXT_PHY_MDI_PAIR_SWAP_OFFSET 0
-	u32 reserved[55]; /* 0x14C */
+		#define NVM_CFG1_GLOB_RESERVED_244_MASK 0x0000FFFF
+		#define NVM_CFG1_GLOB_RESERVED_244_OFFSET 0
+	/*  Define for PGOOD signal Mapping  for EXT PHY */
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_OFFSET 16
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_NA 0x0
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO0 0x1
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO1 0x2
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO2 0x3
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO3 0x4
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO4 0x5
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO5 0x6
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO6 0x7
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO7 0x8
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO8 0x9
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO9 0xA
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO10 0xB
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO11 0xC
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO12 0xD
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO13 0xE
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO14 0xF
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO15 0x10
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO16 0x11
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO17 0x12
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO18 0x13
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO19 0x14
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO20 0x15
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO21 0x16
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO22 0x17
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO23 0x18
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO24 0x19
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO31 0x20
+	/*  GPIO which triggers when PERST is asserted */
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO31 0x20
+	u32 clocks; /* 0x14C */
+	/*  Sets core clock frequency */
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_OFFSET 0
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_DEFAULT 0x0
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_375 0x1
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_350 0x2
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_325 0x3
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_300 0x4
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_280 0x5
+	/*  Sets MAC clock frequency */
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_OFFSET 8
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_DEFAULT 0x0
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_782 0x1
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_516 0x2
+	/*  Sets storm clock frequency */
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_OFFSET 16
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_DEFAULT 0x0
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1200 0x1
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1000 0x2
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_900 0x3
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1100 0x4
+	/*  Non zero value will override PCIe AGC threshold to improve
+	 *  receiver
+	 */
+		#define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_OFFSET 24
+	u32 pre2_generic_cont_1; /* 0x150 */
+		#define NVM_CFG1_GLOB_50G_HLPC_PRE2_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_50G_HLPC_PRE2_OFFSET 0
+		#define NVM_CFG1_GLOB_50G_MLPC_PRE2_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_50G_MLPC_PRE2_OFFSET 8
+		#define NVM_CFG1_GLOB_50G_LLPC_PRE2_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_50G_LLPC_PRE2_OFFSET 16
+		#define NVM_CFG1_GLOB_25G_HLPC_PRE2_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_25G_HLPC_PRE2_OFFSET 24
+	u32 pre2_generic_cont_2; /* 0x154 */
+		#define NVM_CFG1_GLOB_25G_LLPC_PRE2_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_25G_LLPC_PRE2_OFFSET 0
+		#define NVM_CFG1_GLOB_25G_AC_PRE2_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_25G_AC_PRE2_OFFSET 8
+		#define NVM_CFG1_GLOB_10G_PC_PRE2_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_10G_PC_PRE2_OFFSET 16
+		#define NVM_CFG1_GLOB_PRE2_10G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_PRE2_10G_AC_OFFSET 24
+	u32 pre2_generic_cont_3; /* 0x158 */
+		#define NVM_CFG1_GLOB_1G_PRE2_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_1G_PRE2_OFFSET 0
+		#define NVM_CFG1_GLOB_5G_BT_PRE2_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_5G_BT_PRE2_OFFSET 8
+		#define NVM_CFG1_GLOB_10G_BT_PRE2_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_10G_BT_PRE2_OFFSET 16
+	/*  When temperature goes below (warning temperature - delta) warning
+	 *  gpio is unset
+	 */
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_OFFSET 24
+	u32 tx_rx_eq_50g_hlpc; /* 0x15C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_OFFSET 24
+	u32 tx_rx_eq_50g_mlpc; /* 0x160 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_OFFSET 24
+	u32 tx_rx_eq_50g_llpc; /* 0x164 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_OFFSET 24
+	u32 tx_rx_eq_50g_ac; /* 0x168 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_OFFSET 24
+	/*  Set Trace Filter Modules Log Bit Mask */
+	u32 trace_modules; /* 0x16C */
+		#define NVM_CFG1_GLOB_TRACE_MODULES_ERROR 0x1
+		#define NVM_CFG1_GLOB_TRACE_MODULES_DBG 0x2
+		#define NVM_CFG1_GLOB_TRACE_MODULES_DRV_HSI 0x4
+		#define NVM_CFG1_GLOB_TRACE_MODULES_INTERRUPT 0x8
+		#define NVM_CFG1_GLOB_TRACE_MODULES_VPD 0x10
+		#define NVM_CFG1_GLOB_TRACE_MODULES_FLR 0x20
+		#define NVM_CFG1_GLOB_TRACE_MODULES_INIT 0x40
+		#define NVM_CFG1_GLOB_TRACE_MODULES_NVM 0x80
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PIM 0x100
+		#define NVM_CFG1_GLOB_TRACE_MODULES_NET 0x200
+		#define NVM_CFG1_GLOB_TRACE_MODULES_POWER 0x400
+		#define NVM_CFG1_GLOB_TRACE_MODULES_UTILS 0x800
+		#define NVM_CFG1_GLOB_TRACE_MODULES_RESOURCES 0x1000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_SCHEDULER 0x2000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PHYMOD 0x4000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_EVENTS 0x8000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PMM 0x10000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_DBG_DRV 0x20000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_ETH 0x40000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_SECURITY 0x80000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PCIE 0x100000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_TRACE 0x200000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_MANAGEMENT 0x400000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_SIM 0x800000
+	u32 pcie_class_code_fcoe; /* 0x170 */
+	/*  Set PCIe FCoE Class Code */
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_MASK 0x00FFFFFF
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_OFFSET 0
+	/*  When temperature goes below (ALOM FAN ON AUX value - delta) ALOM
+	 *  FAN ON AUX gpio is unset
+	 */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_OFFSET 24
+	u32 pcie_class_code_iscsi; /* 0x174 */
+	/*  Set PCIe iSCSI Class Code */
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_MASK 0x00FFFFFF
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_OFFSET 0
+	/*  When temperature goes below (Dead Temp TH - delta) Thermal Event
+	 *  gpio is unset
+	 */
+		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_OFFSET 24
+	u32 no_provisioned_mac; /* 0x178 */
+	/*  Set number of provisioned MAC addresses */
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_MASK 0x0000FFFF
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_OFFSET 0
+	/*  Set number of provisioned VF MAC addresses */
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_OFFSET 16
+	/*  Enable/Disable BMC MAC */
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_MASK 0x01000000
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_OFFSET 24
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_DISABLED 0x0
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_ENABLED 0x1
+	u32 reserved[43]; /* 0x17C */
 };
 
 struct nvm_cfg1_path {
@@ -1073,6 +1388,10 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_LED_MODE_PHY11 0xE
 		#define NVM_CFG1_PORT_LED_MODE_PHY12 0xF
 		#define NVM_CFG1_PORT_LED_MODE_BREAKOUT 0x10
+		#define NVM_CFG1_PORT_LED_MODE_OCP_3_0 0x11
+		#define NVM_CFG1_PORT_LED_MODE_OCP_3_0_MAC2 0x12
+		#define NVM_CFG1_PORT_LED_MODE_SW_DEF1 0x13
+		#define NVM_CFG1_PORT_LED_MODE_SW_DEF1_MAC2 0x14
 		#define NVM_CFG1_PORT_ROCE_PRIORITY_MASK 0x0000FF00
 		#define NVM_CFG1_PORT_ROCE_PRIORITY_OFFSET 8
 		#define NVM_CFG1_PORT_DCBX_MODE_MASK 0x000F0000
@@ -1220,6 +1539,16 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_OFFSET 24
 		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_DISABLED 0x0
 		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_ENABLED 0x1
+	/*  Enable/Disable RX PAM-4 precoding */
+		#define NVM_CFG1_PORT_RX_PRECODE_MASK 0x02000000
+		#define NVM_CFG1_PORT_RX_PRECODE_OFFSET 25
+		#define NVM_CFG1_PORT_RX_PRECODE_DISABLED 0x0
+		#define NVM_CFG1_PORT_RX_PRECODE_ENABLED 0x1
+	/*  Enable/Disable TX PAM-4 precoding */
+		#define NVM_CFG1_PORT_TX_PRECODE_MASK 0x04000000
+		#define NVM_CFG1_PORT_TX_PRECODE_OFFSET 26
+		#define NVM_CFG1_PORT_TX_PRECODE_DISABLED 0x0
+		#define NVM_CFG1_PORT_TX_PRECODE_ENABLED 0x1
 	u32 phy_cfg; /* 0x1C */
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
@@ -1261,6 +1590,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_NONE 0x0
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM8485X 0x1
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM5422X 0x2
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_88X33X0 0x3
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_MASK 0x0000FF00
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_OFFSET 8
 	/*  EEE power saving mode */
@@ -1337,6 +1667,13 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_50G 0x10
 		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_50G 0x20
 		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_100G 0x40
+	/*  UID LED Blink Mode Settings */
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_MASK 0x0F000000
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_OFFSET 24
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_ACTIVITY_LED 0x1
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED0 0x2
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED1 0x4
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED2 0x8
 	u32 transceiver_00; /* 0x40 */
 	/*  Define for mapping of transceiver signal module absent */
 		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_MASK 0x000000FF
@@ -1379,6 +1716,11 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_OFFSET 8
 		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_MASK 0x0000F000
 		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_OFFSET 12
+	/*  Option to override SmartAN FEC requirements */
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_MASK 0x00010000
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_OFFSET 16
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_DISABLED 0x0
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_ENABLED 0x1
 	u32 device_ids; /* 0x44 */
 		#define NVM_CFG1_PORT_ETH_DID_SUFFIX_MASK 0x000000FF
 		#define NVM_CFG1_PORT_ETH_DID_SUFFIX_OFFSET 0
@@ -1840,7 +2182,289 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_MASK \
 			0x0000FF00
 		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_OFFSET 8
-	u32 reserved[115]; /* 0x8C */
+	/*  Warning temperature threshold used with nvm option 235 */
+		#define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_MASK 0x00FF0000
+		#define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_OFFSET 16
+	u32 ext_phy_cfg1; /* 0x8C */
+	/*  Ext PHY MDI pair swap value */
+		#define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_MASK 0x0000FFFF
+		#define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_OFFSET 0
+	u32 extended_speed; /* 0x90 */
+	/*  Sets speed in conjunction with legacy speed field */
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_MASK 0x0000FFFF
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_OFFSET 0
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_NONE 0x1
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_1G 0x2
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_10G 0x4
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_25G 0x8
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_40G 0x10
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R 0x20
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R2 0x40
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R2 0x80
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R4 0x100
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_P4 0x200
+	/*  Sets speed capabilities in conjunction with legacy capabilities
+	 *  field
+	 */
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_MASK 0xFFFF0000
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_OFFSET 16
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_NONE 0x1
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_1G 0x2
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_10G 0x4
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_25G 0x8
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_40G 0x10
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R 0x20
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R2 0x40
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R2 0x80
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R4 0x100
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_P4 0x200
+	/*  Set speed specific FEC setting in conjunction with legacy FEC
+	 *  mode
+	 */
+	u32 extended_fec_mode; /* 0x94 */
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_NONE 0x1
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_NONE 0x2
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_BASE_R 0x4
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_NONE 0x8
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_BASE_R 0x10
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_RS528 0x20
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_NONE 0x40
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_BASE_R 0x80
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_NONE 0x100
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_BASE_R 0x200
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS528 0x400
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS544 0x800
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_NONE 0x1000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_BASE_R 0x2000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS528 0x4000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS544 0x8000
+	u32 port_generic_cont_01; /* 0x98 */
+	/*  Define for GPIO mapping of SFP Rate Select 0 */
+		#define NVM_CFG1_PORT_MODULE_RS0_MASK 0x000000FF
+		#define NVM_CFG1_PORT_MODULE_RS0_OFFSET 0
+		#define NVM_CFG1_PORT_MODULE_RS0_NA 0x0
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO0 0x1
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO1 0x2
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO2 0x3
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO3 0x4
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO4 0x5
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO5 0x6
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO6 0x7
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO7 0x8
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO8 0x9
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO9 0xA
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO10 0xB
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO11 0xC
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO12 0xD
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO13 0xE
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO14 0xF
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO15 0x10
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO16 0x11
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO17 0x12
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO18 0x13
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO19 0x14
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO20 0x15
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO21 0x16
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO22 0x17
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO23 0x18
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO24 0x19
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO25 0x1A
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO26 0x1B
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO27 0x1C
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO28 0x1D
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO29 0x1E
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO30 0x1F
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO31 0x20
+	/*  Define for GPIO mapping of SFP Rate Select 1 */
+		#define NVM_CFG1_PORT_MODULE_RS1_MASK 0x0000FF00
+		#define NVM_CFG1_PORT_MODULE_RS1_OFFSET 8
+		#define NVM_CFG1_PORT_MODULE_RS1_NA 0x0
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO0 0x1
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO1 0x2
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO2 0x3
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO3 0x4
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO4 0x5
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO5 0x6
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO6 0x7
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO7 0x8
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO8 0x9
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO9 0xA
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO10 0xB
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO11 0xC
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO12 0xD
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO13 0xE
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO14 0xF
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO15 0x10
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO16 0x11
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO17 0x12
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO18 0x13
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO19 0x14
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO20 0x15
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO21 0x16
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO22 0x17
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO23 0x18
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO24 0x19
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO25 0x1A
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO26 0x1B
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO27 0x1C
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO28 0x1D
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO29 0x1E
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO30 0x1F
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO31 0x20
+	/*  Define for GPIO mapping of SFP Module TX Fault */
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_MASK 0x00FF0000
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_OFFSET 16
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_NA 0x0
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO0 0x1
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO1 0x2
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO2 0x3
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO3 0x4
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO4 0x5
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO5 0x6
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO6 0x7
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO7 0x8
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO8 0x9
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO9 0xA
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO10 0xB
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO11 0xC
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO12 0xD
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO13 0xE
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO14 0xF
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO15 0x10
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO16 0x11
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO17 0x12
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO18 0x13
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO19 0x14
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO20 0x15
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO21 0x16
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO22 0x17
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO23 0x18
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO24 0x19
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO25 0x1A
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO26 0x1B
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO27 0x1C
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO28 0x1D
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO29 0x1E
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO30 0x1F
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO31 0x20
+	/*  Define for GPIO mapping of QSFP Reset signal */
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_MASK 0xFF000000
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_OFFSET 24
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_NA 0x0
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO0 0x1
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO1 0x2
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO2 0x3
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO3 0x4
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO4 0x5
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO5 0x6
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO6 0x7
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO7 0x8
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO8 0x9
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO9 0xA
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO10 0xB
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO11 0xC
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO12 0xD
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO13 0xE
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO14 0xF
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO15 0x10
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO16 0x11
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO17 0x12
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO18 0x13
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO19 0x14
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO20 0x15
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO21 0x16
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO22 0x17
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO23 0x18
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO24 0x19
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO25 0x1A
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO26 0x1B
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO27 0x1C
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO28 0x1D
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO29 0x1E
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO30 0x1F
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO31 0x20
+	u32 port_generic_cont_02; /* 0x9C */
+	/*  Define for GPIO mapping of QSFP Transceiver LP mode */
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_MASK 0x000000FF
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_OFFSET 0
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_NA 0x0
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO0 0x1
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO1 0x2
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO2 0x3
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO3 0x4
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO4 0x5
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO5 0x6
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO6 0x7
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO7 0x8
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO8 0x9
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO9 0xA
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO10 0xB
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO11 0xC
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO12 0xD
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO13 0xE
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO14 0xF
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO15 0x10
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO16 0x11
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO17 0x12
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO18 0x13
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO19 0x14
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO20 0x15
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO21 0x16
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO22 0x17
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO23 0x18
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO24 0x19
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO25 0x1A
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO26 0x1B
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO27 0x1C
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO28 0x1D
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO29 0x1E
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO30 0x1F
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO31 0x20
+	/*  Define for GPIO mapping of Transceiver Power Enable */
+		#define NVM_CFG1_PORT_MODULE_POWER_MASK 0x0000FF00
+		#define NVM_CFG1_PORT_MODULE_POWER_OFFSET 8
+		#define NVM_CFG1_PORT_MODULE_POWER_NA 0x0
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO0 0x1
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO1 0x2
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO2 0x3
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO3 0x4
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO4 0x5
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO5 0x6
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO6 0x7
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO7 0x8
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO8 0x9
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO9 0xA
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO10 0xB
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO11 0xC
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO12 0xD
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO13 0xE
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO14 0xF
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO15 0x10
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO16 0x11
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO17 0x12
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO18 0x13
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO19 0x14
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO20 0x15
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO21 0x16
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO22 0x17
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO23 0x18
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO24 0x19
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO25 0x1A
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO26 0x1B
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO27 0x1C
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO28 0x1D
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO29 0x1E
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO30 0x1F
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO31 0x20
+	/*  Define for LASI Mapping of Interrupt from module or PHY */
+		#define NVM_CFG1_PORT_LASI_INTR_IN_MASK 0x000F0000
+		#define NVM_CFG1_PORT_LASI_INTR_IN_OFFSET 16
+		#define NVM_CFG1_PORT_LASI_INTR_IN_NA 0x0
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI0 0x1
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI1 0x2
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI2 0x3
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI3 0x4
+	u32 reserved[110]; /* 0xA0 */
 };
 
 struct nvm_cfg1_func {
@@ -1874,7 +2498,6 @@ struct nvm_cfg1_func {
 		#define NVM_CFG1_FUNC_PERSONALITY_ETHERNET 0x0
 		#define NVM_CFG1_FUNC_PERSONALITY_ISCSI 0x1
 		#define NVM_CFG1_FUNC_PERSONALITY_FCOE 0x2
-		#define NVM_CFG1_FUNC_PERSONALITY_ROCE 0x3
 		#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK 0x7F800000
 		#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET 23
 		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_MASK 0x80000000
@@ -1969,6 +2592,16 @@ struct nvm_cfg1 {
 /******************************************
  * nvm_cfg structs
  ******************************************/
+
+struct board_info {
+	u16 vendor_id;
+	u16 eth_did_suffix;
+	u16 sub_vendor_id;
+	u16 sub_device_id;
+	char *board_name;
+	char *friendly_name;
+};
+
 enum nvm_cfg_sections {
 	NVM_CFG_SECTION_NVM_CFG1,
 	NVM_CFG_SECTION_MAX
@@ -1980,4 +2613,260 @@ struct nvm_cfg {
 	struct nvm_cfg1 cfg1;
 };
 
+/******************************************
+ * nvm_cfg options
+ ******************************************/
+
+#define NVM_CFG_ID_MAC_ADDRESS                                       1
+#define NVM_CFG_ID_BOARD_SWAP                                        8
+#define NVM_CFG_ID_MF_MODE                                           9
+#define NVM_CFG_ID_LED_MODE                                          10
+#define NVM_CFG_ID_FAN_FAILURE_ENFORCEMENT                           11
+#define NVM_CFG_ID_ENGINEERING_CHANGE                                12
+#define NVM_CFG_ID_MANUFACTURING_ID                                  13
+#define NVM_CFG_ID_SERIAL_NUMBER                                     14
+#define NVM_CFG_ID_PCI_GEN                                           15
+#define NVM_CFG_ID_BEACON_WOL_ENABLED                                16
+#define NVM_CFG_ID_ASPM_SUPPORT                                      17
+#define NVM_CFG_ID_ROCE_PRIORITY                                     20
+#define NVM_CFG_ID_ENABLE_WOL_ON_ACPI_PATTERN                        22
+#define NVM_CFG_ID_MAGIC_PACKET_WOL                                  23
+#define NVM_CFG_ID_AVS_MARGIN_LOW_BB                                 24
+#define NVM_CFG_ID_AVS_MARGIN_HIGH_BB                                25
+#define NVM_CFG_ID_DCBX_MODE                                         26
+#define NVM_CFG_ID_DRV_SPEED_CAPABILITY_MASK                         27
+#define NVM_CFG_ID_MFW_SPEED_CAPABILITY_MASK                         28
+#define NVM_CFG_ID_DRV_LINK_SPEED                                    29
+#define NVM_CFG_ID_DRV_FLOW_CONTROL                                  30
+#define NVM_CFG_ID_MFW_LINK_SPEED                                    31
+#define NVM_CFG_ID_MFW_FLOW_CONTROL                                  32
+#define NVM_CFG_ID_OPTIC_MODULE_VENDOR_ENFORCEMENT                   33
+#define NVM_CFG_ID_OPTIONAL_LINK_MODES_BB                            34
+#define NVM_CFG_ID_MF_VENDOR_DEVICE_ID                               37
+#define NVM_CFG_ID_NETWORK_PORT_MODE                                 38
+#define NVM_CFG_ID_MPS10_RX_LANE_SWAP_BB                             39
+#define NVM_CFG_ID_MPS10_TX_LANE_SWAP_BB                             40
+#define NVM_CFG_ID_MPS10_RX_LANE_POLARITY_BB                         41
+#define NVM_CFG_ID_MPS10_TX_LANE_POLARITY_BB                         42
+#define NVM_CFG_ID_MPS25_RX_LANE_SWAP_BB                             43
+#define NVM_CFG_ID_MPS25_TX_LANE_SWAP_BB                             44
+#define NVM_CFG_ID_MPS25_RX_LANE_POLARITY                            45
+#define NVM_CFG_ID_MPS25_TX_LANE_POLARITY                            46
+#define NVM_CFG_ID_MPS10_PREEMPHASIS_BB                              47
+#define NVM_CFG_ID_MPS10_DRIVER_CURRENT_BB                           48
+#define NVM_CFG_ID_MPS10_ENFORCE_TX_FIR_CFG_BB                       49
+#define NVM_CFG_ID_MPS25_PREEMPHASIS                                 50
+#define NVM_CFG_ID_MPS25_DRIVER_CURRENT                              51
+#define NVM_CFG_ID_MPS25_ENFORCE_TX_FIR_CFG                          52
+#define NVM_CFG_ID_MPS10_CORE_ADDR_BB                                53
+#define NVM_CFG_ID_MPS25_CORE_ADDR_BB                                54
+#define NVM_CFG_ID_EXTERNAL_PHY_TYPE                                 55
+#define NVM_CFG_ID_EXTERNAL_PHY_ADDRESS                              56
+#define NVM_CFG_ID_SERDES_NET_INTERFACE_BB                           57
+#define NVM_CFG_ID_AN_MODE_BB                                        58
+#define NVM_CFG_ID_PREBOOT_OPROM                                     59
+#define NVM_CFG_ID_MBA_DELAY_TIME                                    61
+#define NVM_CFG_ID_MBA_SETUP_HOT_KEY                                 62
+#define NVM_CFG_ID_MBA_HIDE_SETUP_PROMPT                             63
+#define NVM_CFG_ID_PREBOOT_LINK_SPEED                                67
+#define NVM_CFG_ID_PREBOOT_BOOT_PROTOCOL                             69
+#define NVM_CFG_ID_ENABLE_SRIOV                                      70
+#define NVM_CFG_ID_ENABLE_ATC                                        71
+#define NVM_CFG_ID_NUMBER_OF_VFS_PER_PF                              74
+#define NVM_CFG_ID_VF_PCI_BAR2_SIZE_K2_E5                            75
+#define NVM_CFG_ID_VENDOR_ID                                         76
+#define NVM_CFG_ID_SUBSYSTEM_VENDOR_ID                               78
+#define NVM_CFG_ID_SUBSYSTEM_DEVICE_ID                               79
+#define NVM_CFG_ID_VF_PCI_BAR2_SIZE_BB                               81
+#define NVM_CFG_ID_BAR1_SIZE                                         82
+#define NVM_CFG_ID_BAR2_SIZE_BB                                      83
+#define NVM_CFG_ID_VF_PCI_DEVICE_ID                                  84
+#define NVM_CFG_ID_MPS10_TXFIR_MAIN_BB                               85
+#define NVM_CFG_ID_MPS10_TXFIR_POST_BB                               86
+#define NVM_CFG_ID_MPS25_TXFIR_MAIN                                  87
+#define NVM_CFG_ID_MPS25_TXFIR_POST                                  88
+#define NVM_CFG_ID_MANUFACTURE_KIT_VERSION                           89
+#define NVM_CFG_ID_MANUFACTURE_TIMESTAMP                             90
+#define NVM_CFG_ID_PERSONALITY                                       92
+#define NVM_CFG_ID_FCOE_NODE_WWN_MAC_ADDR                            93
+#define NVM_CFG_ID_FCOE_PORT_WWN_MAC_ADDR                            94
+#define NVM_CFG_ID_BANDWIDTH_WEIGHT                                  95
+#define NVM_CFG_ID_MAX_BANDWIDTH                                     96
+#define NVM_CFG_ID_PAUSE_ON_HOST_RING                                97
+#define NVM_CFG_ID_PCIE_PREEMPHASIS                                  98
+#define NVM_CFG_ID_LLDP_MAC_ADDRESS                                  99
+#define NVM_CFG_ID_FCOE_WWN_NODE_PREFIX                              100
+#define NVM_CFG_ID_FCOE_WWN_PORT_PREFIX                              101
+#define NVM_CFG_ID_LED_SPEED_SELECT                                  102
+#define NVM_CFG_ID_LED_PORT_SWAP                                     103
+#define NVM_CFG_ID_AVS_MODE_BB                                       104
+#define NVM_CFG_ID_OVERRIDE_SECURE_MODE                              105
+#define NVM_CFG_ID_AVS_DAC_CODE_BB                                   106
+#define NVM_CFG_ID_MBI_VERSION                                       107
+#define NVM_CFG_ID_MBI_DATE                                          108
+#define NVM_CFG_ID_SMBUS_ADDRESS                                     109
+#define NVM_CFG_ID_NCSI_PACKAGE_ID                                   110
+#define NVM_CFG_ID_SIDEBAND_MODE                                     111
+#define NVM_CFG_ID_SMBUS_MODE                                        112
+#define NVM_CFG_ID_NCSI                                              113
+#define NVM_CFG_ID_TRANSCEIVER_MODULE_ABSENT                         114
+#define NVM_CFG_ID_I2C_MUX_SELECT_GPIO_BB                            115
+#define NVM_CFG_ID_I2C_MUX_SELECT_VALUE_BB                           116
+#define NVM_CFG_ID_DEVICE_CAPABILITIES                               117
+#define NVM_CFG_ID_ETH_DID_SUFFIX                                    118
+#define NVM_CFG_ID_FCOE_DID_SUFFIX                                   119
+#define NVM_CFG_ID_ISCSI_DID_SUFFIX                                  120
+#define NVM_CFG_ID_DEFAULT_ENABLED_PROTOCOLS                         122
+#define NVM_CFG_ID_POWER_DISSIPATED_BB                               123
+#define NVM_CFG_ID_POWER_CONSUMED_BB                                 124
+#define NVM_CFG_ID_AUX_MODE                                          125
+#define NVM_CFG_ID_PORT_TYPE                                         126
+#define NVM_CFG_ID_TX_DISABLE                                        127
+#define NVM_CFG_ID_MAX_LINK_WIDTH                                    128
+#define NVM_CFG_ID_ASPM_L1_MODE                                      130
+#define NVM_CFG_ID_ON_CHIP_SENSOR_MODE                               131
+#define NVM_CFG_ID_PREBOOT_VLAN_VALUE                                132
+#define NVM_CFG_ID_PREBOOT_VLAN                                      133
+#define NVM_CFG_ID_TEMPERATURE_PERIOD_BETWEEN_CHECKS                 134
+#define NVM_CFG_ID_SHUTDOWN_THRESHOLD_TEMPERATURE                    135
+#define NVM_CFG_ID_MAX_COUNT_OPER_THRESHOLD                          136
+#define NVM_CFG_ID_DEAD_TEMP_TH_TEMPERATURE                          137
+#define NVM_CFG_ID_TEMPERATURE_MONITORING_MODE                       139
+#define NVM_CFG_ID_AN_25G_50G_OUI                                    140
+#define NVM_CFG_ID_PLDM_SENSOR_MODE                                  141
+#define NVM_CFG_ID_EXTERNAL_THERMAL_SENSOR                           142
+#define NVM_CFG_ID_EXTERNAL_THERMAL_SENSOR_ADDRESS                   143
+#define NVM_CFG_ID_FAN_FAILURE_DURATION                              144
+#define NVM_CFG_ID_FEC_FORCE_MODE                                    145
+#define NVM_CFG_ID_MULTI_NETWORK_MODES_CAPABILITY                    146
+#define NVM_CFG_ID_MNM_10G_DRV_SPEED_CAPABILITY_MASK                 147
+#define NVM_CFG_ID_MNM_10G_MFW_SPEED_CAPABILITY_MASK                 148
+#define NVM_CFG_ID_MNM_10G_DRV_LINK_SPEED                            149
+#define NVM_CFG_ID_MNM_10G_MFW_LINK_SPEED                            150
+#define NVM_CFG_ID_MNM_10G_PORT_TYPE                                 151
+#define NVM_CFG_ID_MNM_10G_SERDES_NET_INTERFACE                      152
+#define NVM_CFG_ID_MNM_10G_FEC_FORCE_MODE                            153
+#define NVM_CFG_ID_MNM_10G_ETH_DID_SUFFIX                            154
+#define NVM_CFG_ID_MNM_25G_DRV_SPEED_CAPABILITY_MASK                 155
+#define NVM_CFG_ID_MNM_25G_MFW_SPEED_CAPABILITY_MASK                 156
+#define NVM_CFG_ID_MNM_25G_DRV_LINK_SPEED                            157
+#define NVM_CFG_ID_MNM_25G_MFW_LINK_SPEED                            158
+#define NVM_CFG_ID_MNM_25G_PORT_TYPE                                 159
+#define NVM_CFG_ID_MNM_25G_SERDES_NET_INTERFACE                      160
+#define NVM_CFG_ID_MNM_25G_ETH_DID_SUFFIX                            161
+#define NVM_CFG_ID_MNM_25G_FEC_FORCE_MODE                            162
+#define NVM_CFG_ID_MNM_40G_DRV_SPEED_CAPABILITY_MASK                 163
+#define NVM_CFG_ID_MNM_40G_MFW_SPEED_CAPABILITY_MASK                 164
+#define NVM_CFG_ID_MNM_40G_DRV_LINK_SPEED                            165
+#define NVM_CFG_ID_MNM_40G_MFW_LINK_SPEED                            166
+#define NVM_CFG_ID_MNM_40G_PORT_TYPE                                 167
+#define NVM_CFG_ID_MNM_40G_SERDES_NET_INTERFACE                      168
+#define NVM_CFG_ID_MNM_40G_ETH_DID_SUFFIX                            169
+#define NVM_CFG_ID_MNM_40G_FEC_FORCE_MODE                            170
+#define NVM_CFG_ID_MNM_50G_DRV_SPEED_CAPABILITY_MASK                 171
+#define NVM_CFG_ID_MNM_50G_MFW_SPEED_CAPABILITY_MASK                 172
+#define NVM_CFG_ID_MNM_50G_DRV_LINK_SPEED                            173
+#define NVM_CFG_ID_MNM_50G_MFW_LINK_SPEED                            174
+#define NVM_CFG_ID_MNM_50G_PORT_TYPE                                 175
+#define NVM_CFG_ID_MNM_50G_SERDES_NET_INTERFACE                      176
+#define NVM_CFG_ID_MNM_50G_ETH_DID_SUFFIX                            177
+#define NVM_CFG_ID_MNM_50G_FEC_FORCE_MODE                            178
+#define NVM_CFG_ID_MNM_100G_DRV_SPEED_CAP_MASK_BB                    179
+#define NVM_CFG_ID_MNM_100G_MFW_SPEED_CAP_MASK_BB                    180
+#define NVM_CFG_ID_MNM_100G_DRV_LINK_SPEED_BB                        181
+#define NVM_CFG_ID_MNM_100G_MFW_LINK_SPEED_BB                        182
+#define NVM_CFG_ID_MNM_100G_PORT_TYPE_BB                             183
+#define NVM_CFG_ID_MNM_100G_SERDES_NET_INTERFACE_BB                  184
+#define NVM_CFG_ID_MNM_100G_ETH_DID_SUFFIX_BB                        185
+#define NVM_CFG_ID_MNM_100G_FEC_FORCE_MODE_BB                        186
+#define NVM_CFG_ID_FUNCTION_HIDE                                     187
+#define NVM_CFG_ID_BAR2_TOTAL_BUDGET_BB                              188
+#define NVM_CFG_ID_CRASH_DUMP_TRIGGER_ENABLE                         189
+#define NVM_CFG_ID_MPS25_LANE_SWAP_K2_E5                             190
+#define NVM_CFG_ID_BAR2_SIZE_K2_E5                                   191
+#define NVM_CFG_ID_EXT_PHY_RESET                                     192
+#define NVM_CFG_ID_EEE_POWER_SAVING_MODE                             193
+#define NVM_CFG_ID_OVERRIDE_PCIE_PRESET_EQUAL_BB                     194
+#define NVM_CFG_ID_PCIE_PRESET_VALUE_BB                              195
+#define NVM_CFG_ID_MAX_MSIX                                          196
+#define NVM_CFG_ID_NVM_CFG_VERSION                                   197
+#define NVM_CFG_ID_NVM_CFG_NEW_OPTION_SEQ                            198
+#define NVM_CFG_ID_NVM_CFG_REMOVED_OPTION_SEQ                        199
+#define NVM_CFG_ID_NVM_CFG_UPDATED_VALUE_SEQ                         200
+#define NVM_CFG_ID_EXTENDED_SERIAL_NUMBER                            201
+#define NVM_CFG_ID_RDMA_ENABLEMENT                                   202
+#define NVM_CFG_ID_MAX_CONT_OPERATING_TEMP                           203
+#define NVM_CFG_ID_RUNTIME_PORT_SWAP_GPIO                            204
+#define NVM_CFG_ID_RUNTIME_PORT_SWAP_MAP                             205
+#define NVM_CFG_ID_THERMAL_EVENT_GPIO                                206
+#define NVM_CFG_ID_I2C_INTERRUPT_GPIO                                207
+#define NVM_CFG_ID_DCI_SUPPORT                                       208
+#define NVM_CFG_ID_PCIE_VDM_ENABLED                                  209
+#define NVM_CFG_ID_OEM1_NUMBER                                       210
+#define NVM_CFG_ID_OEM2_NUMBER                                       211
+#define NVM_CFG_ID_FEC_AN_MODE_K2_E5                                 212
+#define NVM_CFG_ID_NPAR_ENABLED_PROTOCOL                             213
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_PRE                            214
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_MAIN                           215
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_POST                           216
+#define NVM_CFG_ID_ALOM_FAN_ON_AUX_GPIO                              217
+#define NVM_CFG_ID_ALOM_FAN_ON_AUX_VALUE                             218
+#define NVM_CFG_ID_SLOT_ID_GPIO                                      219
+#define NVM_CFG_ID_PMBUS_SCL_GPIO                                    220
+#define NVM_CFG_ID_PMBUS_SDA_GPIO                                    221
+#define NVM_CFG_ID_RESET_ON_LAN                                      222
+#define NVM_CFG_ID_NCSI_PACKAGE_ID_IO                                223
+#define NVM_CFG_ID_TX_RX_EQ_25G_HLPC                                 224
+#define NVM_CFG_ID_TX_RX_EQ_25G_LLPC                                 225
+#define NVM_CFG_ID_TX_RX_EQ_25G_AC                                   226
+#define NVM_CFG_ID_TX_RX_EQ_10G_PC                                   227
+#define NVM_CFG_ID_TX_RX_EQ_10G_AC                                   228
+#define NVM_CFG_ID_TX_RX_EQ_1G                                       229
+#define NVM_CFG_ID_TX_RX_EQ_25G_BT                                   230
+#define NVM_CFG_ID_TX_RX_EQ_10G_BT                                   231
+#define NVM_CFG_ID_PF_MAPPING                                        232
+#define NVM_CFG_ID_RECOVERY_MODE                                     234
+#define NVM_CFG_ID_PHY_MODULE_DEAD_TEMP_TH                           235
+#define NVM_CFG_ID_PHY_MODULE_ALOM_FAN_ON_TEMP_TH                    236
+#define NVM_CFG_ID_PREBOOT_DEBUG_MODE_STD                            237
+#define NVM_CFG_ID_PREBOOT_DEBUG_MODE_EXT                            238
+#define NVM_CFG_ID_SMARTLINQ_MODE                                    239
+#define NVM_CFG_ID_PREBOOT_LINK_UP_DELAY                             242
+#define NVM_CFG_ID_VOLTAGE_REGULATOR_TYPE                            243
+#define NVM_CFG_ID_MAIN_CLOCK_FREQUENCY                              245
+#define NVM_CFG_ID_MAC_CLOCK_FREQUENCY                               246
+#define NVM_CFG_ID_STORM_CLOCK_FREQUENCY                             247
+#define NVM_CFG_ID_PCIE_RELAXED_ORDERING                             248
+#define NVM_CFG_ID_EXT_PHY_MDI_PAIR_SWAP                             249
+#define NVM_CFG_ID_UID_LED_MODE_MASK                                 250
+#define NVM_CFG_ID_NCSI_AUX_LINK                                     251
+#define NVM_CFG_ID_SMARTAN_FEC_OVERRIDE                              272
+#define NVM_CFG_ID_LLDP_DISABLE                                      273
+#define NVM_CFG_ID_SHORT_PERST_PROTECTION_K2_E5                      274
+#define NVM_CFG_ID_TRANSCEIVER_RATE_SELECT_0                         275
+#define NVM_CFG_ID_TRANSCEIVER_RATE_SELECT_1                         276
+#define NVM_CFG_ID_TRANSCEIVER_MODULE_TX_FAULT                       277
+#define NVM_CFG_ID_TRANSCEIVER_QSFP_MODULE_RESET                     278
+#define NVM_CFG_ID_TRANSCEIVER_QSFP_LP_MODE                          279
+#define NVM_CFG_ID_TRANSCEIVER_POWER_ENABLE                          280
+#define NVM_CFG_ID_LASI_INTERRUPT_INPUT                              281
+#define NVM_CFG_ID_EXT_PHY_PGOOD_INPUT                               282
+#define NVM_CFG_ID_TRACE_LEVEL                                       283
+#define NVM_CFG_ID_TRACE_MODULES                                     284
+#define NVM_CFG_ID_EMULATED_TMP421                                   285
+#define NVM_CFG_ID_WARNING_TEMPERATURE_GPIO                          286
+#define NVM_CFG_ID_WARNING_TEMPERATURE_THRESHOLD                     287
+#define NVM_CFG_ID_PERST_INDICATION_GPIO                             288
+#define NVM_CFG_ID_PCIE_CLASS_CODE_FCOE_K2_E5                        289
+#define NVM_CFG_ID_PCIE_CLASS_CODE_ISCSI_K2_E5                       290
+#define NVM_CFG_ID_NUMBER_OF_PROVISIONED_MAC                         291
+#define NVM_CFG_ID_NUMBER_OF_PROVISIONED_VF_MAC                      292
+#define NVM_CFG_ID_PROVISIONED_BMC_MAC                               293
+#define NVM_CFG_ID_OVERRIDE_AGC_THRESHOLD_K2                         294
+#define NVM_CFG_ID_WARNING_TEMPERATURE_DELTA                         295
+#define NVM_CFG_ID_ALOM_FAN_ON_AUX_DELTA                             296
+#define NVM_CFG_ID_DEAD_TEMP_TH_DELTA                                297
+#define NVM_CFG_ID_PHY_MODULE_WARNING_TEMP_TH                        298
+#define NVM_CFG_ID_DISABLE_PLDM                                      299
+#define NVM_CFG_ID_DISABLE_MCTP_OEM                                  300
 #endif /* NVM_CFG_H */
-- 
2.18.0
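
The NVM_CFG1_* options in the patch above all follow the same encoding: each
field occupies a contiguous bit range inside a 32-bit configuration word and
is described by a matching *_MASK/*_OFFSET pair plus per-value defines. A
minimal, self-contained C sketch of decoding one such field follows; the
NVM_GET_FIELD() helper and the cfg_word contents are illustrative assumptions,
only the RX_PRECODE defines are copied from the patch.

#include <stdint.h>
#include <stdio.h>

/* Copied from the NVM_CFG1_PORT_RX_PRECODE_* defines in the patch above */
#define NVM_CFG1_PORT_RX_PRECODE_MASK    0x02000000
#define NVM_CFG1_PORT_RX_PRECODE_OFFSET  25
#define NVM_CFG1_PORT_RX_PRECODE_ENABLED 0x1

/* Illustrative helper (not a driver macro): isolate the field bits with
 * the mask, then shift them down to bit 0.
 */
#define NVM_GET_FIELD(val, name) \
	(((val) & name##_MASK) >> name##_OFFSET)

int main(void)
{
	/* Hypothetical contents of the configuration dword that holds the
	 * RX precode field.
	 */
	uint32_t cfg_word = 0x02000000;

	if (NVM_GET_FIELD(cfg_word, NVM_CFG1_PORT_RX_PRECODE) ==
	    NVM_CFG1_PORT_RX_PRECODE_ENABLED)
		printf("RX PAM-4 precoding enabled\n");
	else
		printf("RX PAM-4 precoding disabled\n");

	return 0;
}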


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH 6/9] net/qede/base: move dmae code to HSI
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (4 preceding siblings ...)
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 5/9] net/qede/base: update rt defs NVM cfg and mcp code Rasesh Mody
@ 2019-09-30  2:49 ` Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 7/9] net/qede/base: update HSI code Rasesh Mody
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Move the DMA engine (DMAE) structures from the base driver to the HSI
module. Use the DMAE_PARAMS_* flags in place of ECORE_DMAE_FLAG_*, and
set them with the SET_FIELD() macro where appropriate.
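
As an illustration of the new flags layout, below is a minimal, self-contained
C sketch of how a caller fills struct dmae_params with SET_FIELD(). The
SET_FIELD() stand-in, the fixed-width typedefs and the reduced struct are
simplified stand-ins for the driver's own headers (bcm_osal.h,
ecore_hsi_common.h), not the actual definitions; only the DMAE_PARAMS_*
mask/shift values are copied from the patch.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the driver's types and SET_FIELD() macro;
 * the real SET_FIELD() in the driver also clears the field first.
 */
typedef uint32_t u32;
typedef uint8_t  u8;

#define SET_FIELD(value, name, flag) \
	((value) |= (((u32)(flag) & name##_MASK) << name##_SHIFT))

/* Mask/shift values as introduced in ecore_hsi_common.h by this patch */
#define DMAE_PARAMS_DST_VF_VALID_MASK    0x1
#define DMAE_PARAMS_DST_VF_VALID_SHIFT   2
#define DMAE_PARAMS_COMPLETION_DST_MASK  0x1
#define DMAE_PARAMS_COMPLETION_DST_SHIFT 3

/* Reduced copy of struct dmae_params, for illustration only */
struct dmae_params {
	u32 flags;
	u8 src_vf_id;
	u8 dst_vf_id;
	u8 port_id;
	u8 src_pf_id;
	u8 dst_pf_id;
};

int main(void)
{
	struct dmae_params params;

	/* Illustrative flag combination, loosely modeled on the
	 * ecore_sriov.c hunks below: zero the params, mark the
	 * destination as a VF and request completion at the destination.
	 */
	memset(&params, 0, sizeof(params));
	SET_FIELD(params.flags, DMAE_PARAMS_DST_VF_VALID, 0x1);
	SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 0x1);
	params.dst_vf_id = 5; /* hypothetical absolute VF id */

	printf("flags=0x%x dst_vf_id=%u\n",
	       (unsigned)params.flags, (unsigned)params.dst_vf_id);
	return 0;
}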

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore_dev.c        | 12 ++--
 drivers/net/qede/base/ecore_dev_api.h    | 92 ------------------------
 drivers/net/qede/base/ecore_hsi_common.h | 58 ++++++++++++++-
 drivers/net/qede/base/ecore_hw.c         | 52 ++++++++------
 drivers/net/qede/base/ecore_hw.h         | 88 +++++++++++++++++------
 drivers/net/qede/base/ecore_init_ops.c   |  4 +-
 drivers/net/qede/base/ecore_sriov.c      | 23 +++---
 7 files changed, 174 insertions(+), 155 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 749aea4e8..2c135afd2 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -950,7 +950,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 			bool b_write_access)
 {
 	u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 	enum _ecore_status_t rc;
 	u32 addr;
 
@@ -973,15 +973,15 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 	OSAL_MEMSET(&params, 0, sizeof(params));
 
 	if (b_write_access) {
-		params.flags = ECORE_DMAE_FLAG_PF_DST;
-		params.dst_pfid = pfid;
+		SET_FIELD(params.flags, DMAE_PARAMS_DST_PF_VALID, 0x1);
+		params.dst_pf_id = pfid;
 		rc = ecore_dmae_host2grc(p_hwfn, p_ptt,
 					 (u64)(osal_uintptr_t)&p_details->value,
 					 addr, 2 /* size_in_dwords */, &params);
 	} else {
-		params.flags = ECORE_DMAE_FLAG_PF_SRC |
-			       ECORE_DMAE_FLAG_COMPLETION_DST;
-		params.src_pfid = pfid;
+		SET_FIELD(params.flags, DMAE_PARAMS_SRC_PF_VALID, 0x1);
+		SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 0x1);
+		params.src_pf_id = pfid;
 		rc = ecore_dmae_grc2host(p_hwfn, p_ptt, addr,
 					 (u64)(osal_uintptr_t)&p_details->value,
 					 2 /* size_in_dwords */, &params);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index a99888097..4d5cc1a0f 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -415,98 +415,6 @@ struct ecore_eth_stats {
 	};
 };
 
-enum ecore_dmae_address_type_t {
-	ECORE_DMAE_ADDRESS_HOST_VIRT,
-	ECORE_DMAE_ADDRESS_HOST_PHYS,
-	ECORE_DMAE_ADDRESS_GRC
-};
-
-/* value of flags If ECORE_DMAE_FLAG_RW_REPL_SRC flag is set and the
- * source is a block of length DMAE_MAX_RW_SIZE and the
- * destination is larger, the source block will be duplicated as
- * many times as required to fill the destination block. This is
- * used mostly to write a zeroed buffer to destination address
- * using DMA
- */
-#define ECORE_DMAE_FLAG_RW_REPL_SRC	0x00000001
-#define ECORE_DMAE_FLAG_VF_SRC		0x00000002
-#define ECORE_DMAE_FLAG_VF_DST		0x00000004
-#define ECORE_DMAE_FLAG_COMPLETION_DST	0x00000008
-#define ECORE_DMAE_FLAG_PORT		0x00000010
-#define ECORE_DMAE_FLAG_PF_SRC		0x00000020
-#define ECORE_DMAE_FLAG_PF_DST		0x00000040
-
-struct ecore_dmae_params {
-	u32 flags; /* consists of ECORE_DMAE_FLAG_* values */
-	u8 src_vfid;
-	u8 dst_vfid;
-	u8 port_id;
-	u8 src_pfid;
-	u8 dst_pfid;
-};
-
-/**
- * @brief ecore_dmae_host2grc - copy data from source addr to
- * dmae registers using the given ptt
- *
- * @param p_hwfn
- * @param p_ptt
- * @param source_addr
- * @param grc_addr (dmae_data_offset)
- * @param size_in_dwords
- * @param p_params (default parameters will be used in case of OSAL_NULL)
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt,
-		    u64 source_addr,
-		    u32 grc_addr,
-		    u32 size_in_dwords,
-		    struct ecore_dmae_params *p_params);
-
-/**
- * @brief ecore_dmae_grc2host - Read data from dmae data offset
- * to source address using the given ptt
- *
- * @param p_ptt
- * @param grc_addr (dmae_data_offset)
- * @param dest_addr
- * @param size_in_dwords
- * @param p_params (default parameters will be used in case of OSAL_NULL)
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt,
-		    u32 grc_addr,
-		    dma_addr_t dest_addr,
-		    u32 size_in_dwords,
-		    struct ecore_dmae_params *p_params);
-
-/**
- * @brief ecore_dmae_host2host - copy data from to source address
- * to a destination address (for SRIOV) using the given ptt
- *
- * @param p_hwfn
- * @param p_ptt
- * @param source_addr
- * @param dest_addr
- * @param size_in_dwords
- * @param p_params (default parameters will be used in case of OSAL_NULL)
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
-		     struct ecore_ptt *p_ptt,
-		     dma_addr_t source_addr,
-		     dma_addr_t dest_addr,
-		     u32 size_in_dwords,
-		     struct ecore_dmae_params *p_params);
-
 /**
  * @brief ecore_chain_alloc - Allocate and initialize a chain
  *
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 7a94ed506..8fa200033 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -1953,7 +1953,11 @@ struct dmae_cmd {
 	__le16 crc16 /* crc16 result */;
 	__le16 crc16_c /* crc16_c result */;
 	__le16 crc10 /* crc_t10 result */;
-	__le16 reserved;
+	__le16 error_bit_reserved;
+#define DMAE_CMD_ERROR_BIT_MASK        0x1 /* Error bit */
+#define DMAE_CMD_ERROR_BIT_SHIFT       0
+#define DMAE_CMD_RESERVED_MASK         0x7FFF
+#define DMAE_CMD_RESERVED_SHIFT        1
 	__le16 xsum16 /* checksum16 result  */;
 	__le16 xsum8 /* checksum8 result  */;
 };
@@ -2017,6 +2021,58 @@ enum dmae_cmd_src_enum {
 };
 
 
+/*
+ * DMAE parameters
+ */
+struct dmae_params {
+	__le32 flags;
+/* If set and the source is a block of length DMAE_MAX_RW_SIZE and the
+ * destination is larger, the source block will be duplicated as many
+ * times as required to fill the destination block. This is used mostly
+ * to write a zeroed buffer to destination address using DMA
+ */
+#define DMAE_PARAMS_RW_REPL_SRC_MASK     0x1
+#define DMAE_PARAMS_RW_REPL_SRC_SHIFT    0
+/* If set, the source is a VF, and the source VF ID is taken from the
+ * src_vf_id parameter.
+ */
+#define DMAE_PARAMS_SRC_VF_VALID_MASK    0x1
+#define DMAE_PARAMS_SRC_VF_VALID_SHIFT   1
+/* If set, the destination is a VF, and the destination VF ID is taken
+ * from the dst_vf_id parameter.
+ */
+#define DMAE_PARAMS_DST_VF_VALID_MASK    0x1
+#define DMAE_PARAMS_DST_VF_VALID_SHIFT   2
+/* If set, a completion is sent to the destination function.
+ * Otherwise it is sent to the source function.
+ */
+#define DMAE_PARAMS_COMPLETION_DST_MASK  0x1
+#define DMAE_PARAMS_COMPLETION_DST_SHIFT 3
+/* If set, the port ID is taken from the port_id parameter.
+ * Otherwise, the current port ID is used.
+ */
+#define DMAE_PARAMS_PORT_VALID_MASK      0x1
+#define DMAE_PARAMS_PORT_VALID_SHIFT     4
+/* If set, the source PF ID is taken from the src_pf_id parameter.
+ * Otherwise, the current PF ID is used.
+ */
+#define DMAE_PARAMS_SRC_PF_VALID_MASK    0x1
+#define DMAE_PARAMS_SRC_PF_VALID_SHIFT   5
+/* If set, the destination PF ID is taken from the dst_pf_id parameter.
+ * Otherwise, the current PF ID is used
+ */
+#define DMAE_PARAMS_DST_PF_VALID_MASK    0x1
+#define DMAE_PARAMS_DST_PF_VALID_SHIFT   6
+#define DMAE_PARAMS_RESERVED_MASK        0x1FFFFFF
+#define DMAE_PARAMS_RESERVED_SHIFT       7
+	u8 src_vf_id /* Source VF ID, valid only if src_vf_valid is set */;
+	u8 dst_vf_id /* Destination VF ID, valid only if dst_vf_valid is set */;
+	u8 port_id /* Port ID, valid only if port_valid is set */;
+	u8 src_pf_id /* Source PF ID, valid only if src_pf_valid is set */;
+	u8 dst_pf_id /* Destination PF ID, valid only if dst_pf_valid is set */;
+	u8 reserved1;
+	__le16 reserved2;
+};
 
 
 struct fw_asserts_ram_section {
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 72cd7e9c3..6a79db52e 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -453,14 +453,15 @@ u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid)
 /* DMAE */
 
 #define ECORE_DMAE_FLAGS_IS_SET(params, flag)	\
-	((params) != OSAL_NULL && ((params)->flags & ECORE_DMAE_FLAG_##flag))
+	((params) != OSAL_NULL && \
+	 GET_FIELD((params)->flags, DMAE_PARAMS_##flag))
 
 static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 			      const u8 is_src_type_grc,
 			      const u8 is_dst_type_grc,
-			      struct ecore_dmae_params *p_params)
+			      struct dmae_params *p_params)
 {
-	u8 src_pfid, dst_pfid, port_id;
+	u8 src_pf_id, dst_pf_id, port_id;
 	u16 opcode_b = 0;
 	u32 opcode = 0;
 
@@ -468,19 +469,19 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 	 * 0- The source is the PCIe
 	 * 1- The source is the GRC.
 	 */
-	opcode |= (is_src_type_grc ? DMAE_CMD_SRC_MASK_GRC
-		   : DMAE_CMD_SRC_MASK_PCIE) << DMAE_CMD_SRC_SHIFT;
-	src_pfid = ECORE_DMAE_FLAGS_IS_SET(p_params, PF_SRC) ?
-		   p_params->src_pfid : p_hwfn->rel_pf_id;
-	opcode |= (src_pfid & DMAE_CMD_SRC_PF_ID_MASK) <<
+	opcode |= (is_src_type_grc ? dmae_cmd_src_grc : dmae_cmd_src_pcie) <<
+		  DMAE_CMD_SRC_SHIFT;
+	src_pf_id = ECORE_DMAE_FLAGS_IS_SET(p_params, SRC_PF_VALID) ?
+		    p_params->src_pf_id : p_hwfn->rel_pf_id;
+	opcode |= (src_pf_id & DMAE_CMD_SRC_PF_ID_MASK) <<
 		  DMAE_CMD_SRC_PF_ID_SHIFT;
 
 	/* The destination of the DMA can be: 0-None 1-PCIe 2-GRC 3-None */
-	opcode |= (is_dst_type_grc ? DMAE_CMD_DST_MASK_GRC
-		   : DMAE_CMD_DST_MASK_PCIE) << DMAE_CMD_DST_SHIFT;
-	dst_pfid = ECORE_DMAE_FLAGS_IS_SET(p_params, PF_DST) ?
-		   p_params->dst_pfid : p_hwfn->rel_pf_id;
-	opcode |= (dst_pfid & DMAE_CMD_DST_PF_ID_MASK) <<
+	opcode |= (is_dst_type_grc ? dmae_cmd_dst_grc : dmae_cmd_dst_pcie) <<
+		  DMAE_CMD_DST_SHIFT;
+	dst_pf_id = ECORE_DMAE_FLAGS_IS_SET(p_params, DST_PF_VALID) ?
+		    p_params->dst_pf_id : p_hwfn->rel_pf_id;
+	opcode |= (dst_pf_id & DMAE_CMD_DST_PF_ID_MASK) <<
 		  DMAE_CMD_DST_PF_ID_SHIFT;
 
 	/* DMAE_E4_TODO need to check which value to specify here. */
@@ -501,7 +502,7 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 	 */
 	opcode |= DMAE_CMD_ENDIANITY << DMAE_CMD_ENDIANITY_MODE_SHIFT;
 
-	port_id = (ECORE_DMAE_FLAGS_IS_SET(p_params, PORT)) ?
+	port_id = (ECORE_DMAE_FLAGS_IS_SET(p_params, PORT_VALID)) ?
 		  p_params->port_id : p_hwfn->port_id;
 	opcode |= port_id << DMAE_CMD_PORT_ID_SHIFT;
 
@@ -512,16 +513,16 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 	opcode |= DMAE_CMD_DST_ADDR_RESET_MASK << DMAE_CMD_DST_ADDR_RESET_SHIFT;
 
 	/* SRC/DST VFID: all 1's - pf, otherwise VF id */
-	if (ECORE_DMAE_FLAGS_IS_SET(p_params, VF_SRC)) {
+	if (ECORE_DMAE_FLAGS_IS_SET(p_params, SRC_VF_VALID)) {
 		opcode |= (1 << DMAE_CMD_SRC_VF_ID_VALID_SHIFT);
-		opcode_b |= (p_params->src_vfid << DMAE_CMD_SRC_VF_ID_SHIFT);
+		opcode_b |= (p_params->src_vf_id << DMAE_CMD_SRC_VF_ID_SHIFT);
 	} else {
 		opcode_b |= (DMAE_CMD_SRC_VF_ID_MASK <<
 			     DMAE_CMD_SRC_VF_ID_SHIFT);
 	}
-	if (ECORE_DMAE_FLAGS_IS_SET(p_params, VF_DST)) {
+	if (ECORE_DMAE_FLAGS_IS_SET(p_params, DST_VF_VALID)) {
 		opcode |= 1 << DMAE_CMD_DST_VF_ID_VALID_SHIFT;
-		opcode_b |= p_params->dst_vfid << DMAE_CMD_DST_VF_ID_SHIFT;
+		opcode_b |= p_params->dst_vf_id << DMAE_CMD_DST_VF_ID_SHIFT;
 	} else {
 		opcode_b |= DMAE_CMD_DST_VF_ID_MASK << DMAE_CMD_DST_VF_ID_SHIFT;
 	}
@@ -716,6 +717,12 @@ static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
 	return ecore_status;
 }
 
+enum ecore_dmae_address_type {
+	ECORE_DMAE_ADDRESS_HOST_VIRT,
+	ECORE_DMAE_ADDRESS_HOST_PHYS,
+	ECORE_DMAE_ADDRESS_GRC
+};
+
 static enum _ecore_status_t
 ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
 				 struct ecore_ptt *p_ptt,
@@ -806,7 +813,7 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
 			   u8 src_type,
 			   u8 dst_type,
 			   u32 size_in_dwords,
-			   struct ecore_dmae_params *p_params)
+			   struct dmae_params *p_params)
 {
 	dma_addr_t phys = p_hwfn->dmae_info.completion_word_phys_addr;
 	u16 length_cur = 0, i = 0, cnt_split = 0, length_mod = 0;
@@ -910,7 +917,7 @@ enum _ecore_status_t ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
 					 u64 source_addr,
 					 u32 grc_addr,
 					 u32 size_in_dwords,
-					 struct ecore_dmae_params *p_params)
+					 struct dmae_params *p_params)
 {
 	u32 grc_addr_in_dw = grc_addr / sizeof(u32);
 	enum _ecore_status_t rc;
@@ -933,7 +940,7 @@ enum _ecore_status_t ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
 					 u32 grc_addr,
 					 dma_addr_t dest_addr,
 					 u32 size_in_dwords,
-					 struct ecore_dmae_params *p_params)
+					 struct dmae_params *p_params)
 {
 	u32 grc_addr_in_dw = grc_addr / sizeof(u32);
 	enum _ecore_status_t rc;
@@ -955,7 +962,8 @@ ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
 		     struct ecore_ptt *p_ptt,
 		     dma_addr_t source_addr,
 		     dma_addr_t dest_addr,
-		     u32 size_in_dwords, struct ecore_dmae_params *p_params)
+		     u32 size_in_dwords,
+		     struct dmae_params *p_params)
 {
 	enum _ecore_status_t rc;
 
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 0b5b40c46..e43f337dc 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -31,23 +31,7 @@ enum reserved_ptts {
 #define MISC_REG_DRIVER_CONTROL_0_SIZE	MISC_REG_DRIVER_CONTROL_1_SIZE
 #endif
 
-enum _dmae_cmd_dst_mask {
-	DMAE_CMD_DST_MASK_NONE = 0,
-	DMAE_CMD_DST_MASK_PCIE = 1,
-	DMAE_CMD_DST_MASK_GRC = 2
-};
-
-enum _dmae_cmd_src_mask {
-	DMAE_CMD_SRC_MASK_PCIE = 0,
-	DMAE_CMD_SRC_MASK_GRC = 1
-};
-
-enum _dmae_cmd_crc_mask {
-	DMAE_CMD_COMP_CRC_EN_MASK_NONE = 0,
-	DMAE_CMD_COMP_CRC_EN_MASK_SET = 1
-};
-
-/* definitions for DMA constants */
+/* Definitions for DMA constants */
 #define DMAE_GO_VALUE	0x1
 
 #ifdef __BIG_ENDIAN
@@ -258,16 +242,78 @@ enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn	*p_hwfn);
 */
 void ecore_dmae_info_free(struct ecore_hwfn	*p_hwfn);
 
-enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
-					const u8 *fw_data);
+/**
+ * @brief ecore_dmae_host2grc - copy data from source address to
+ * dmae registers using the given ptt
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param source_addr
+ * @param grc_addr (dmae_data_offset)
+ * @param size_in_dwords
+ * @param p_params (default parameters will be used in case of OSAL_NULL)
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
+		    struct ecore_ptt *p_ptt,
+		    u64 source_addr,
+		    u32 grc_addr,
+		    u32 size_in_dwords,
+		    struct dmae_params *p_params);
 
-void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
-			 enum ecore_hw_err_type err_type);
+/**
+ * @brief ecore_dmae_grc2host - Read data from dmae data offset
+ * to source address using the given ptt
+ *
+ * @param p_ptt
+ * @param grc_addr (dmae_data_offset)
+ * @param dest_addr
+ * @param size_in_dwords
+ * @param p_params (default parameters will be used in case of OSAL_NULL)
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
+		    struct ecore_ptt *p_ptt,
+		    u32 grc_addr,
+		    dma_addr_t dest_addr,
+		    u32 size_in_dwords,
+		    struct dmae_params *p_params);
+
+/**
+ * @brief ecore_dmae_host2host - copy data from a source address
+ * to a destination address (for SRIOV) using the given ptt
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param source_addr
+ * @param dest_addr
+ * @param size_in_dwords
+ * @param p_params (default parameters will be used in case of OSAL_NULL)
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
+		     struct ecore_ptt *p_ptt,
+		     dma_addr_t source_addr,
+		     dma_addr_t dest_addr,
+		     u32 size_in_dwords,
+		     struct dmae_params *p_params);
 
 enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt,
 				       const char *phase);
 
+enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
+					const u8 *fw_data);
+
+void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
+			 enum ecore_hw_err_type err_type);
+
 /**
  * @brief ecore_ppfid_wr - Write value to BAR using the given ptt while
  *	pretending to a PF to which the given PPFID pertains.
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index 044308bf4..8f7209100 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -179,12 +179,12 @@ static enum _ecore_status_t ecore_init_fill_dmae(struct ecore_hwfn *p_hwfn,
 						 u32 addr, u32 fill_count)
 {
 	static u32 zero_buffer[DMAE_MAX_RW_SIZE];
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 
 	OSAL_MEMSET(zero_buffer, 0, sizeof(u32) * DMAE_MAX_RW_SIZE);
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
-	params.flags = ECORE_DMAE_FLAG_RW_REPL_SRC;
+	SET_FIELD(params.flags, DMAE_PARAMS_RW_REPL_SRC, 0x1);
 	return ecore_dmae_host2grc(p_hwfn, p_ptt,
 				   (osal_uintptr_t)&zero_buffer[0],
 				   addr, fill_count, &params);
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index d771ac6d4..264217252 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -347,7 +347,7 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_bulletin_content *p_bulletin;
 	int crc_size = sizeof(p_bulletin->crc);
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 	struct ecore_vf_info *p_vf;
 
 	p_vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
@@ -371,8 +371,8 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
 
 	/* propagate bulletin board via dmae to vm memory */
 	OSAL_MEMSET(&params, 0, sizeof(params));
-	params.flags = ECORE_DMAE_FLAG_VF_DST;
-	params.dst_vfid = p_vf->abs_vf_id;
+	SET_FIELD(params.flags, DMAE_PARAMS_DST_VF_VALID, 0x1);
+	params.dst_vf_id = p_vf->abs_vf_id;
 	return ecore_dmae_host2host(p_hwfn, p_ptt, p_vf->bulletin.phys,
 				    p_vf->vf_bulletin, p_vf->bulletin.size / 4,
 				    &params);
@@ -1374,7 +1374,7 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 				    u8 status)
 {
 	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 	u8 eng_vf_id;
 
 	mbx->reply_virt->default_resp.hdr.status = status;
@@ -1391,9 +1391,9 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 
 	eng_vf_id = p_vf->abs_vf_id;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_dmae_params));
-	params.flags = ECORE_DMAE_FLAG_VF_DST;
-	params.dst_vfid = eng_vf_id;
+	OSAL_MEMSET(&params, 0, sizeof(struct dmae_params));
+	SET_FIELD(params.flags, DMAE_PARAMS_DST_VF_VALID, 0x1);
+	params.dst_vf_id = eng_vf_id;
 
 	ecore_dmae_host2host(p_hwfn, p_ptt, mbx->reply_phys + sizeof(u64),
 			     mbx->req_virt->first_tlv.reply_address +
@@ -4389,16 +4389,17 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *ptt, int vfid)
 {
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 	struct ecore_vf_info *vf_info;
 
 	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
 	if (!vf_info)
 		return ECORE_INVAL;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_dmae_params));
-	params.flags = ECORE_DMAE_FLAG_VF_SRC | ECORE_DMAE_FLAG_COMPLETION_DST;
-	params.src_vfid = vf_info->abs_vf_id;
+	OSAL_MEMSET(&params, 0, sizeof(struct dmae_params));
+	SET_FIELD(params.flags, DMAE_PARAMS_SRC_VF_VALID, 0x1);
+	SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 0x1);
+	params.src_vf_id = vf_info->abs_vf_id;
 
 	if (ecore_dmae_host2host(p_hwfn, ptt,
 				 vf_info->vf_mbx.pending_req,
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH 7/9] net/qede/base: update HSI code
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (5 preceding siblings ...)
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 6/9] net/qede/base: move dmae code to HSI Rasesh Mody
@ 2019-09-30  2:49 ` Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 8/9] net/qede/base: update the FW to 8.40.25.0 Rasesh Mody
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Update the hardware-software interface (HSI) code in the base driver in
preparation for updating the firmware to version 8.40.25.0.
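
For context, the constants below reflect the version that common_hsi.h
advertises after this update (8.40.25.0). The comparison helper is only an
illustrative sketch of how a consumer might sanity-check a reported firmware
version against the headers it was built with; it is not taken from the
driver.

#include <stdio.h>

/* Version constants as set by the updated common_hsi.h */
#define FW_MAJOR_VERSION        8
#define FW_MINOR_VERSION        40
#define FW_REVISION_VERSION     25
#define FW_ENGINEERING_VERSION  0

/* Illustrative check, not driver code: compare a reported firmware
 * version against the version the headers describe.
 */
static int fw_version_matches(int major, int minor, int rev, int eng)
{
	return major == FW_MAJOR_VERSION &&
	       minor == FW_MINOR_VERSION &&
	       rev == FW_REVISION_VERSION &&
	       eng == FW_ENGINEERING_VERSION;
}

int main(void)
{
	/* Hypothetical version reported by a loaded firmware image */
	int major = 8, minor = 40, rev = 25, eng = 0;

	printf("headers expect %d.%d.%d.%d: %s\n",
	       FW_MAJOR_VERSION, FW_MINOR_VERSION,
	       FW_REVISION_VERSION, FW_ENGINEERING_VERSION,
	       fw_version_matches(major, minor, rev, eng) ?
	       "match" : "mismatch");
	return 0;
}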

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/bcm_osal.c              |   1 +
 drivers/net/qede/base/common_hsi.h            | 164 ++++--
 drivers/net/qede/base/ecore.h                 |   4 +-
 drivers/net/qede/base/ecore_cxt.c             |  23 +-
 drivers/net/qede/base/ecore_dev.c             |  21 +-
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |  42 +-
 drivers/net/qede/base/ecore_gtt_values.h      |  18 +-
 drivers/net/qede/base/ecore_hsi_common.h      | 231 +++++++--
 drivers/net/qede/base/ecore_hsi_debug_tools.h | 475 ++++++++----------
 drivers/net/qede/base/ecore_hsi_eth.h         | 134 ++---
 drivers/net/qede/base/ecore_hsi_init_func.h   |  25 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |  38 ++
 drivers/net/qede/base/ecore_hw.c              |  16 +
 drivers/net/qede/base/ecore_hw.h              |  10 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   |   7 +-
 drivers/net/qede/base/ecore_init_ops.c        |  47 --
 drivers/net/qede/base/ecore_init_ops.h        |  10 -
 drivers/net/qede/base/ecore_iro.h             | 320 ++++++------
 drivers/net/qede/base/ecore_iro_values.h      | 336 ++++++++-----
 drivers/net/qede/base/ecore_mcp.c             |   1 +
 drivers/net/qede/base/eth_common.h            | 101 +++-
 drivers/net/qede/base/reg_addr.h              |  10 +
 drivers/net/qede/qede_rxtx.c                  |  16 +-
 23 files changed, 1218 insertions(+), 832 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 9915df44f..48d016e24 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -10,6 +10,7 @@
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "ecore_hw.h"
+#include "ecore_dev_api.h"
 #include "ecore_iov_api.h"
 #include "ecore_mcp_api.h"
 #include "ecore_l2_api.h"
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index b878a92aa..74afed1ec 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -13,12 +13,12 @@
 /* Temporarily here should be added to HSI automatically by resource allocation
  * tool.
  */
-#define T_TEST_AGG_INT_TEMP    6
-#define	M_TEST_AGG_INT_TEMP    8
-#define	U_TEST_AGG_INT_TEMP    6
-#define	X_TEST_AGG_INT_TEMP    14
-#define	Y_TEST_AGG_INT_TEMP    4
-#define	P_TEST_AGG_INT_TEMP    4
+#define T_TEST_AGG_INT_TEMP  6
+#define M_TEST_AGG_INT_TEMP  8
+#define U_TEST_AGG_INT_TEMP  6
+#define X_TEST_AGG_INT_TEMP  14
+#define Y_TEST_AGG_INT_TEMP  4
+#define P_TEST_AGG_INT_TEMP  4
 
 #define X_FINAL_CLEANUP_AGG_INT  1
 
@@ -30,21 +30,20 @@
 #define ISCSI_CDU_TASK_SEG_TYPE       0
 #define FCOE_CDU_TASK_SEG_TYPE        0
 #define RDMA_CDU_TASK_SEG_TYPE        1
+#define ETH_CDU_TASK_SEG_TYPE         2
 
 #define FW_ASSERT_GENERAL_ATTN_IDX    32
 
-#define MAX_PINNED_CCFC			32
-
 #define EAGLE_ENG1_WORKAROUND_NIG_FLOWCTRL_MODE	3
 
 /* Queue Zone sizes in bytes */
-#define TSTORM_QZONE_SIZE    8	 /*tstorm_scsi_queue_zone*/
-#define MSTORM_QZONE_SIZE    16  /*mstorm_eth_queue_zone. Used only for RX
-				  *producer of VFs in backward compatibility
-				  *mode.
-				  */
-#define USTORM_QZONE_SIZE    8	 /*ustorm_eth_queue_zone*/
-#define XSTORM_QZONE_SIZE    8	 /*xstorm_eth_queue_zone*/
+#define TSTORM_QZONE_SIZE    8   /*tstorm_queue_zone*/
+/*mstorm_eth_queue_zone. Used only for RX producer of VFs in backward
+ * compatibility mode.
+ */
+#define MSTORM_QZONE_SIZE    16
+#define USTORM_QZONE_SIZE    8   /*ustorm_queue_zone*/
+#define XSTORM_QZONE_SIZE    8   /*xstorm_eth_queue_zone*/
 #define YSTORM_QZONE_SIZE    0
 #define PSTORM_QZONE_SIZE    0
 
@@ -61,7 +60,8 @@
  */
 #define ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD     112
 
-
+#define ETH_RGSRC_CTX_SIZE                6 /*Size in QREGS*/
+#define ETH_TGSRC_CTX_SIZE                6 /*Size in QREGS*/
 /********************************/
 /* CORE (LIGHT L2) FW CONSTANTS */
 /********************************/
@@ -76,15 +76,13 @@
 
 #define CORE_SPQE_PAGE_SIZE_BYTES                       4096
 
-/*
- * Usually LL2 queues are opened in pairs TX-RX.
- * There is a hard restriction on number of RX queues (limited by Tstorm RAM)
- * and TX counters (Pstorm RAM).
- * Number of TX queues is almost unlimited.
- * The constants are different so as to allow asymmetric LL2 connections
- */
+/* Number of LL2 RAM based (RX producers and statistics) queues */
+#define MAX_NUM_LL2_RX_RAM_QUEUES               32
+/* Number of LL2 context based (RX producers and statistics) queues */
+#define MAX_NUM_LL2_RX_CTX_QUEUES               208
+#define MAX_NUM_LL2_RX_QUEUES (MAX_NUM_LL2_RX_RAM_QUEUES + \
+			       MAX_NUM_LL2_RX_CTX_QUEUES)
 
-#define MAX_NUM_LL2_RX_QUEUES					48
 #define MAX_NUM_LL2_TX_STATS_COUNTERS			48
 
 
@@ -95,8 +93,8 @@
 
 
 #define FW_MAJOR_VERSION        8
-#define FW_MINOR_VERSION        37
-#define FW_REVISION_VERSION     7
+#define FW_MINOR_VERSION		40
+#define FW_REVISION_VERSION		25
 #define FW_ENGINEERING_VERSION  0
 
 /***********************/
@@ -134,6 +132,8 @@
 #define MAX_NUM_L2_QUEUES_BB	(256)
 #define MAX_NUM_L2_QUEUES_K2    (320)
 
+#define FW_LOWEST_CONSUMEDDMAE_CHANNEL   (26)
+
 /* Traffic classes in network-facing blocks (PBF, BTB, NIG, BRB, PRS and QM) */
 #define NUM_PHYS_TCS_4PORT_K2     4
 #define NUM_OF_PHYS_TCS           8
@@ -145,7 +145,6 @@
 #define NUM_OF_CONNECTION_TYPES (8)
 #define NUM_OF_TASK_TYPES       (8)
 #define NUM_OF_LCIDS            (320)
-#define NUM_OF_LTIDS            (320)
 
 /* Global PXP windows (GTT) */
 #define NUM_OF_GTT          19
@@ -172,6 +171,8 @@
 #define	CDU_CONTEXT_VALIDATION_CFG_USE_CID				(4)
 #define	CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE				(5)
 
+/*enabled, type A, use all */
+#define	CDU_CONTEXT_VALIDATION_DEFAULT_CFG				(0x3D)
 
 /*****************/
 /* DQ CONSTANTS  */
@@ -218,6 +219,7 @@
 #define DQ_XCM_TOE_TX_BD_PROD_CMD           DQ_XCM_AGG_VAL_SEL_WORD4
 #define DQ_XCM_TOE_MORE_TO_SEND_SEQ_CMD     DQ_XCM_AGG_VAL_SEL_REG3
 #define DQ_XCM_TOE_LOCAL_ADV_WND_SEQ_CMD    DQ_XCM_AGG_VAL_SEL_REG4
+#define DQ_XCM_ROCE_ACK_EDPM_DORQ_SEQ_CMD   DQ_XCM_AGG_VAL_SEL_WORD5
 
 /* UCM agg val selection (HW) */
 #define DQ_UCM_AGG_VAL_SEL_WORD0  0
@@ -292,6 +294,7 @@
 #define DQ_UCM_AGG_FLG_SHIFT_RULE1EN   7
 
 /* UCM agg counter flag selection (FW) */
+#define DQ_UCM_NVMF_NEW_CQE_CF_CMD          (1 << DQ_UCM_AGG_FLG_SHIFT_CF1)
 #define DQ_UCM_ETH_PMD_TX_ARM_CMD           (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
 #define DQ_UCM_ETH_PMD_RX_ARM_CMD           (1 << DQ_UCM_AGG_FLG_SHIFT_CF5)
 #define DQ_UCM_ROCE_CQ_ARM_SE_CF_CMD        (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
@@ -323,6 +326,9 @@
 /* PWM address mapping */
 #define DQ_PWM_OFFSET_DPM_BASE				0x0
 #define DQ_PWM_OFFSET_DPM_END				0x27
+#define DQ_PWM_OFFSET_XCM32_24ICID_BASE		0x28
+#define DQ_PWM_OFFSET_UCM32_24ICID_BASE		0x30
+#define DQ_PWM_OFFSET_TCM32_24ICID_BASE		0x38
 #define DQ_PWM_OFFSET_XCM16_BASE			0x40
 #define DQ_PWM_OFFSET_XCM32_BASE			0x44
 #define DQ_PWM_OFFSET_UCM16_BASE			0x48
@@ -342,6 +348,13 @@
 #define DQ_PWM_OFFSET_TCM_ROCE_RQ_PROD		(DQ_PWM_OFFSET_TCM16_BASE + 1)
 #define DQ_PWM_OFFSET_TCM_IWARP_RQ_PROD		(DQ_PWM_OFFSET_TCM16_BASE + 3)
 
+#define DQ_PWM_OFFSET_XCM_RDMA_24B_ICID_SQ_PROD \
+	(DQ_PWM_OFFSET_XCM32_24ICID_BASE + 2)
+#define DQ_PWM_OFFSET_UCM_RDMA_24B_ICID_CQ_CONS_32BIT \
+	(DQ_PWM_OFFSET_UCM32_24ICID_BASE + 4)
+#define DQ_PWM_OFFSET_TCM_ROCE_24B_ICID_RQ_PROD	\
+	(DQ_PWM_OFFSET_TCM32_24ICID_BASE + 1)
+
 #define DQ_REGION_SHIFT				        (12)
 
 /* DPM */
@@ -378,6 +391,10 @@
 /* number of global Vport/QCN rate limiters */
 #define MAX_QM_GLOBAL_RLS			256
 
+/* number of global rate limiters */
+#define MAX_QM_GLOBAL_RLS		256
+#define COMMON_MAX_QM_GLOBAL_RLS	(MAX_QM_GLOBAL_RLS)
+
 /* QM registers data */
 #define QM_LINE_CRD_REG_WIDTH		16
 #define QM_LINE_CRD_REG_SIGN_BIT	(1 << (QM_LINE_CRD_REG_WIDTH - 1))
@@ -431,9 +448,6 @@
 #define IGU_MEM_PBA_MSIX_RESERVED_UPPER		0x03ff
 
 #define IGU_CMD_INT_ACK_BASE			0x0400
-#define IGU_CMD_INT_ACK_UPPER			(IGU_CMD_INT_ACK_BASE + \
-						 MAX_TOT_SB_PER_PATH - \
-						 1)
 #define IGU_CMD_INT_ACK_RESERVED_UPPER		0x05ff
 
 #define IGU_CMD_ATTN_BIT_UPD_UPPER		0x05f0
@@ -446,9 +460,6 @@
 #define IGU_REG_SISR_MDPC_WOMASK_UPPER		0x05f6
 
 #define IGU_CMD_PROD_UPD_BASE			0x0600
-#define IGU_CMD_PROD_UPD_UPPER			(IGU_CMD_PROD_UPD_BASE + \
-						 MAX_TOT_SB_PER_PATH  - \
-						 1)
 #define IGU_CMD_PROD_UPD_RESERVED_UPPER		0x07ff
 
 /*****************/
@@ -701,6 +712,12 @@ struct common_queue_zone {
 	__le16 reserved;
 };
 
+struct nvmf_eqe_data {
+	__le16 icid /* The connection ID for which the EQE is written. */;
+	u8 reserved0[6] /* Alignment to line */;
+};
+
+
 /*
  * ETH Rx producers data
  */
@@ -770,6 +787,8 @@ enum protocol_type {
 	PROTOCOLID_PREROCE /* Pre (tapeout) RoCE */,
 	PROTOCOLID_COMMON /* ProtocolCommon */,
 	PROTOCOLID_TCP /* TCP */,
+	PROTOCOLID_RDMA /* RDMA */,
+	PROTOCOLID_SCSI /* SCSI */,
 	MAX_PROTOCOL_TYPE
 };
 
@@ -779,6 +798,36 @@ struct regpair {
 	__le32 hi /* high word for reg-pair */;
 };
 
+/*
+ * RoCE Destroy Event Data
+ */
+struct rdma_eqe_destroy_qp {
+	__le32 cid /* Dedicated field RoCE destroy QP event */;
+	u8 reserved[4];
+};
+
+/*
+ * RoCE Suspend Event Data
+ */
+struct rdma_eqe_suspend_qp {
+	__le32 cid /* Dedicated field RoCE Suspend QP event */;
+	u8 reserved[4];
+};
+
+/*
+ * RDMA Event Data Union
+ */
+union rdma_eqe_data {
+	struct regpair async_handle /* Host handle for the Async Completions */;
+	/* RoCE Destroy Event Data */
+	struct rdma_eqe_destroy_qp rdma_destroy_qp_data;
+	/* RoCE Suspend QP Event Data */
+	struct rdma_eqe_suspend_qp rdma_suspend_qp_data;
+};
+
+struct tstorm_queue_zone {
+	__le32 reserved[2];
+};
 
 
 /*
@@ -993,6 +1042,18 @@ struct db_pwm_addr {
 #define DB_PWM_ADDR_RESERVED1_SHIFT 28
 };
 
+/*
+ * Structure for doorbell address, in legacy mode, without DEMS
+ */
+struct db_legacy_wo_dems_addr {
+	__le32 addr;
+#define DB_LEGACY_WO_DEMS_ADDR_RESERVED0_MASK  0x3
+#define DB_LEGACY_WO_DEMS_ADDR_RESERVED0_SHIFT 0
+#define DB_LEGACY_WO_DEMS_ADDR_ICID_MASK       0x3FFFFFFF /* internal CID */
+#define DB_LEGACY_WO_DEMS_ADDR_ICID_SHIFT      2
+};
+
+
 /*
  * Parameters to RDMA firmware, passed in EDPM doorbell
  */
@@ -1025,6 +1086,43 @@ struct db_rdma_dpm_params {
 #define DB_RDMA_DPM_PARAMS_CONN_TYPE_IS_IWARP_SHIFT 31
 };
 
+/*
+ * Parameters to RDMA firmware, passed in EDPM doorbell
+ */
+struct db_rdma_24b_icid_dpm_params {
+	__le32 params;
+/* Size in QWORD-s of the DPM burst */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_SIZE_MASK                0x3F
+#define DB_RDMA_24B_ICID_DPM_PARAMS_SIZE_SHIFT               0
+/* Type of DPM transaction (DPM_RDMA) (use enum db_dpm_type) */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_DPM_TYPE_MASK            0x3
+#define DB_RDMA_24B_ICID_DPM_PARAMS_DPM_TYPE_SHIFT           6
+/* opcode for RDMA operation */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_OPCODE_MASK              0xFF
+#define DB_RDMA_24B_ICID_DPM_PARAMS_OPCODE_SHIFT             8
+/* ICID extension */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_ICID_EXT_MASK            0xFF
+#define DB_RDMA_24B_ICID_DPM_PARAMS_ICID_EXT_SHIFT           16
+/* Number of invalid bytes in last QWORD of the DPM transaction */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_INV_BYTE_CNT_MASK        0x7
+#define DB_RDMA_24B_ICID_DPM_PARAMS_INV_BYTE_CNT_SHIFT       24
+/* Flag indicating 24b icid mode is enabled */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_EXT_ICID_MODE_EN_MASK    0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_EXT_ICID_MODE_EN_SHIFT   27
+/* RoCE completion flag */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_COMPLETION_FLG_MASK      0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_COMPLETION_FLG_SHIFT     28
+/* RoCE S flag */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_S_FLG_MASK               0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_S_FLG_SHIFT              29
+#define DB_RDMA_24B_ICID_DPM_PARAMS_RESERVED1_MASK           0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_RESERVED1_SHIFT          30
+/* Connection type is iWARP */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_CONN_TYPE_IS_IWARP_MASK  0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_CONN_TYPE_IS_IWARP_SHIFT 31
+};
+
+
 /*
  * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a
  * DPM burst
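The DB_RDMA_24B_ICID_DPM_PARAMS_* masks and shifts above describe how the EDPM doorbell parameters dword is packed. A small standalone sketch of that packing follows, with field values chosen purely for illustration:

#include <stdint.h>
#include <stdio.h>

/* Mask/shift values copied from the defines above */
#define DEMO_SIZE_MASK        0x3F
#define DEMO_SIZE_SHIFT       0
#define DEMO_DPM_TYPE_MASK    0x3
#define DEMO_DPM_TYPE_SHIFT   6
#define DEMO_OPCODE_MASK      0xFF
#define DEMO_OPCODE_SHIFT     8
#define DEMO_ICID_EXT_MASK    0xFF
#define DEMO_ICID_EXT_SHIFT   16

int main(void)
{
	uint32_t params = 0;

	/* Example values: 4-QWORD burst, DPM type 1, opcode 0x0a and the
	 * upper 8 bits of a 24-bit ICID set to 0x12 (all hypothetical).
	 */
	params |= (4u & DEMO_SIZE_MASK) << DEMO_SIZE_SHIFT;
	params |= (1u & DEMO_DPM_TYPE_MASK) << DEMO_DPM_TYPE_SHIFT;
	params |= (0x0au & DEMO_OPCODE_MASK) << DEMO_OPCODE_SHIFT;
	params |= (0x12u & DEMO_ICID_EXT_MASK) << DEMO_ICID_EXT_SHIFT;

	printf("EDPM params dword = 0x%08x\n", params);
	return 0;
}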
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 524a1dd46..b1d8706c9 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -834,8 +834,8 @@ struct ecore_dev {
 	u8				cache_shift;
 
 	/* Init */
-	const struct iro		*iro_arr;
-	#define IRO (p_hwfn->p_dev->iro_arr)
+	const u32			*iro_arr;
+#define IRO	((const struct iro *)p_hwfn->p_dev->iro_arr)
 
 	/* HW functions */
 	u8				num_hwfns;
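The ecore.h change above turns iro_arr into a flat u32 array and makes IRO a cast-based accessor. The sketch below only illustrates that accessor pattern; the real struct iro layout and the generated ecore_iro_values.h contents are not part of this hunk, so the placeholder layout and numbers here are assumptions:

#include <stdint.h>
#include <stdio.h>

/* Placeholder layout standing in for struct iro (12 bytes => 3 dwords) */
struct demo_iro {
	uint32_t base;
	uint16_t m1;
	uint16_t m2;
	uint16_t m3;
	uint16_t size;
};

/* Flat dword table standing in for the generated iro_arr contents */
static const uint32_t demo_iro_arr[] = {
	0x00000010, 0x00000000, 0x00000000,   /* entry 0 */
	0x00000020, 0x00000000, 0x00000000,   /* entry 1 */
};

/* Same pattern as the new IRO macro: cast the flat array to the struct type */
#define DEMO_IRO ((const struct demo_iro *)demo_iro_arr)

int main(void)
{
	/* entry 1 starts at dword 3, so .base reads demo_iro_arr[3] */
	printf("iro[1].base = 0x%x\n", DEMO_IRO[1].base);
	return 0;
}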
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index bc5628c4e..0f04c9447 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -190,9 +190,7 @@ struct ecore_cxt_mngr {
 
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
-	/* TBD - do we want this allocated to reserve space? */
-	struct ecore_cid_acquired_map
-		acquired_vf[MAX_CONN_TYPES][COMMON_MAX_NUM_VFS];
+	struct ecore_cid_acquired_map *acquired_vf[MAX_CONN_TYPES];
 
 	/* ILT  shadow table */
 	struct ecore_dma_mem *ilt_shadow;
@@ -1040,8 +1038,8 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 
 static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 {
+	u32 type, vf, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
 		OSAL_FREE(p_hwfn->p_dev, p_mngr->acquired[type].cid_map);
@@ -1049,7 +1047,7 @@ static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 		p_mngr->acquired[type].max_count = 0;
 		p_mngr->acquired[type].start_cid = 0;
 
-		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+		for (vf = 0; vf < max_num_vfs; vf++) {
 			OSAL_FREE(p_hwfn->p_dev,
 				  p_mngr->acquired_vf[type][vf].cid_map);
 			p_mngr->acquired_vf[type][vf].cid_map = OSAL_NULL;
@@ -1087,6 +1085,7 @@ ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
 static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	u32 max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	u32 start_cid = 0, vf_start_cid = 0;
 	u32 type, vf;
 
@@ -1101,7 +1100,7 @@ static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 			goto cid_map_fail;
 
 		/* Handle VF maps */
-		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+		for (vf = 0; vf < max_num_vfs; vf++) {
 			p_map = &p_mngr->acquired_vf[type][vf];
 			if (ecore_cid_map_alloc_single(p_hwfn, type,
 						       vf_start_cid,
@@ -1236,10 +1235,10 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	u32 len, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	struct ecore_cid_acquired_map *p_map;
 	struct ecore_conn_type_cfg *p_cfg;
 	int type;
-	u32 len;
 
 	/* Reset acquired cids */
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
@@ -1257,7 +1256,7 @@ void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 		if (!p_cfg->cids_per_vf)
 			continue;
 
-		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+		for (vf = 0; vf < max_num_vfs; vf++) {
 			p_map = &p_mngr->acquired_vf[type][vf];
 			len = DIV_ROUND_UP(p_map->max_count,
 					   BITS_PER_MAP_WORD) *
@@ -1818,16 +1817,16 @@ enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
 					    enum protocol_type type,
 					    u32 *p_cid, u8 vfid)
 {
+	u32 rel_cid, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_cid_acquired_map *p_map;
-	u32 rel_cid;
 
 	if (type >= MAX_CONN_TYPES) {
 		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
 		return ECORE_INVAL;
 	}
 
-	if (vfid >= COMMON_MAX_NUM_VFS && vfid != ECORE_CXT_PF_CID) {
+	if (vfid >= max_num_vfs && vfid != ECORE_CXT_PF_CID) {
 		DP_NOTICE(p_hwfn, true, "VF [%02x] is out of range\n", vfid);
 		return ECORE_INVAL;
 	}
@@ -1913,12 +1912,12 @@ static bool ecore_cxt_test_cid_acquired(struct ecore_hwfn *p_hwfn,
 
 void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid, u8 vfid)
 {
+	u32 rel_cid, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	enum protocol_type type;
 	bool b_acquired;
-	u32 rel_cid;
 
-	if (vfid != ECORE_CXT_PF_CID && vfid > COMMON_MAX_NUM_VFS) {
+	if (vfid != ECORE_CXT_PF_CID && vfid > max_num_vfs) {
 		DP_NOTICE(p_hwfn, true,
 			  "Trying to return incorrect CID belonging to VF %02x\n",
 			  vfid);
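The ecore_cxt.c hunks above replace the fixed acquired_vf[MAX_CONN_TYPES][COMMON_MAX_NUM_VFS] table with per-type pointers sized by NUM_OF_VFS(). The allocation itself is not shown in these hunks; the sketch below only outlines the general shape of such an allocation, using plain calloc() in place of the driver's OSAL helpers:

#include <stdint.h>
#include <stdlib.h>

#define DEMO_MAX_CONN_TYPES 8   /* illustrative value */

struct demo_cid_map {
	uint32_t max_count;
	uint32_t start_cid;
	unsigned long *cid_map;
};

struct demo_cxt_mngr {
	/* one dynamically sized per-VF array per connection type */
	struct demo_cid_map *acquired_vf[DEMO_MAX_CONN_TYPES];
};

static int demo_alloc_vf_maps(struct demo_cxt_mngr *mngr, unsigned int num_vfs)
{
	unsigned int type;

	for (type = 0; type < DEMO_MAX_CONN_TYPES; type++) {
		mngr->acquired_vf[type] =
			calloc(num_vfs, sizeof(**mngr->acquired_vf));
		if (!mngr->acquired_vf[type])
			return -1; /* a real driver would unwind the earlier types */
	}
	return 0;
}

int main(void)
{
	struct demo_cxt_mngr mngr = {0};

	return demo_alloc_vf_maps(&mngr, 16) ? 1 : 0;
}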
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2c135afd2..2a11b4d29 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1843,7 +1843,7 @@ static void ecore_init_qm_vport_params(struct ecore_hwfn *p_hwfn)
 
 	/* all vports participate in weighted fair queueing */
 	for (i = 0; i < ecore_init_qm_get_num_vports(p_hwfn); i++)
-		qm_info->qm_vport_params[i].vport_wfq = 1;
+		qm_info->qm_vport_params[i].wfq = 1;
 }
 
 /* initialize qm port params */
@@ -2236,11 +2236,8 @@ static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
 	/* vport table */
 	for (i = 0; i < qm_info->num_vports; i++) {
 		vport = &qm_info->qm_vport_params[i];
-		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-			   "vport idx %d, vport_rl %d, wfq %d,"
-			   " first_tx_pq_id [ ",
-			   qm_info->start_vport + i, vport->vport_rl,
-			   vport->vport_wfq);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "vport idx %d, wfq %d, first_tx_pq_id [ ",
+			   qm_info->start_vport + i, vport->wfq);
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
 			DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ",
 				   vport->first_tx_pq_id[tc]);
@@ -2866,7 +2863,7 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	ecore_init_cau_rt_data(p_dev);
 
 	/* Program GTT windows */
-	ecore_gtt_init(p_hwfn, p_ptt);
+	ecore_gtt_init(p_hwfn);
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_dev)) {
@@ -6248,7 +6245,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 /* Calculate final WFQ values for all vports and configure it.
  * After this configuration each vport must have
- * approx min rate =  vport_wfq * min_pf_rate / ECORE_WFQ_UNIT
+ * approx min rate =  wfq * min_pf_rate / ECORE_WFQ_UNIT
  */
 static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt,
@@ -6262,11 +6259,11 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
 		u32 wfq_speed = p_hwfn->qm_info.wfq_data[i].min_speed;
 
-		vport_params[i].vport_wfq = (wfq_speed * ECORE_WFQ_UNIT) /
+		vport_params[i].wfq = (wfq_speed * ECORE_WFQ_UNIT) /
 		    min_pf_rate;
 		ecore_init_vport_wfq(p_hwfn, p_ptt,
 				     vport_params[i].first_tx_pq_id,
-				     vport_params[i].vport_wfq);
+				     vport_params[i].wfq);
 	}
 }
 
@@ -6275,7 +6272,7 @@ static void ecore_init_wfq_default_param(struct ecore_hwfn *p_hwfn)
 	int i;
 
 	for (i = 0; i < p_hwfn->qm_info.num_vports; i++)
-		p_hwfn->qm_info.qm_vport_params[i].vport_wfq = 1;
+		p_hwfn->qm_info.qm_vport_params[i].wfq = 1;
 }
 
 static void ecore_disable_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
@@ -6290,7 +6287,7 @@ static void ecore_disable_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 		ecore_init_wfq_default_param(p_hwfn);
 		ecore_init_vport_wfq(p_hwfn, p_ptt,
 				     vport_params[i].first_tx_pq_id,
-				     vport_params[i].vport_wfq);
+				     vport_params[i].wfq);
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index 8c8fed4e7..f5b11eb28 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -8,43 +8,53 @@
 #define GTT_REG_ADDR_H
 
 /* Win 2 */
-/* Access:RW   DataWidth:0x20    */
+//Access:RW   DataWidth:0x20   //
 #define GTT_BAR0_MAP_REG_IGU_CMD                                      0x00f000UL
 
 /* Win 3 */
-/* Access:RW   DataWidth:0x20    */
+//Access:RW   DataWidth:0x20   //
 #define GTT_BAR0_MAP_REG_TSDM_RAM                                     0x010000UL
 
 /* Win 4 */
-/* Access:RW   DataWidth:0x20    */
+//Access:RW   DataWidth:0x20   //
 #define GTT_BAR0_MAP_REG_MSDM_RAM                                     0x011000UL
 
 /* Win 5 */
-/* Access:RW   DataWidth:0x20    */
+//Access:RW   DataWidth:0x20   //
 #define GTT_BAR0_MAP_REG_MSDM_RAM_1024                                0x012000UL
 
 /* Win 6 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_USDM_RAM                                     0x013000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_MSDM_RAM_2048                                0x013000UL
 
 /* Win 7 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x014000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_USDM_RAM                                     0x014000UL
 
 /* Win 8 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x015000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x015000UL
 
 /* Win 9 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x016000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x016000UL
 
 /* Win 10 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x017000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x017000UL
 
 /* Win 11 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x018000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_XSDM_RAM_1024                                0x018000UL
+
+/* Win 12 */
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x019000UL
+
+/* Win 13 */
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x01a000UL
+
+/* Win 14 */
 
 #endif
diff --git a/drivers/net/qede/base/ecore_gtt_values.h b/drivers/net/qede/base/ecore_gtt_values.h
index adc20c0ce..2035bed5c 100644
--- a/drivers/net/qede/base/ecore_gtt_values.h
+++ b/drivers/net/qede/base/ecore_gtt_values.h
@@ -13,15 +13,15 @@ static u32 pxp_global_win[] = {
 	0x1c80, /* win 3: addr=0x1c80000, size=4096 bytes */
 	0x1d00, /* win 4: addr=0x1d00000, size=4096 bytes */
 	0x1d01, /* win 5: addr=0x1d01000, size=4096 bytes */
-	0x1d80, /* win 6: addr=0x1d80000, size=4096 bytes */
-	0x1d81, /* win 7: addr=0x1d81000, size=4096 bytes */
-	0x1d82, /* win 8: addr=0x1d82000, size=4096 bytes */
-	0x1e00, /* win 9: addr=0x1e00000, size=4096 bytes */
-	0x1e80, /* win 10: addr=0x1e80000, size=4096 bytes */
-	0x1f00, /* win 11: addr=0x1f00000, size=4096 bytes */
-	0,
-	0,
-	0,
+	0x1d02, /* win 6: addr=0x1d02000, size=4096 bytes */
+	0x1d80, /* win 7: addr=0x1d80000, size=4096 bytes */
+	0x1d81, /* win 8: addr=0x1d81000, size=4096 bytes */
+	0x1d82, /* win 9: addr=0x1d82000, size=4096 bytes */
+	0x1e00, /* win 10: addr=0x1e00000, size=4096 bytes */
+	0x1e01, /* win 11: addr=0x1e01000, size=4096 bytes */
+	0x1e80, /* win 12: addr=0x1e80000, size=4096 bytes */
+	0x1f00, /* win 13: addr=0x1f00000, size=4096 bytes */
+	0x1c08, /* win 14: addr=0x1c08000, size=4096 bytes */
 	0,
 	0,
 	0,
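Taken together, the two files above define the remapped GTT windows: each pxp_global_win[] entry is the GRC address of a 4 KB window divided by 0x1000, and the GTT_BAR0_MAP_REG_* defines place window N at BAR0 offset 0x00f000 + (N - 2) * 0x1000 (a relationship inferred from the defines, not stated explicitly). A small check against the values shown above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* window index / pxp_global_win[] value pairs copied from above */
	struct { unsigned int win; uint32_t val; } demo[] = {
		{ 6, 0x1d02 },   /* MSDM_RAM_2048 */
		{ 13, 0x1f00 },  /* PSDM_RAM */
	};
	unsigned int i;

	for (i = 0; i < 2; i++) {
		uint32_t grc_addr = demo[i].val * 0x1000;               /* GRC target */
		uint32_t bar_off = 0x00f000 + (demo[i].win - 2) * 0x1000;

		printf("win %u: GRC addr 0x%x, BAR0 offset 0x%x\n",
		       demo[i].win, grc_addr, bar_off);
	}
	return 0;
}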
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 8fa200033..23cfcdeff 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -57,7 +57,7 @@ struct ystorm_core_conn_st_ctx {
  * The core storm context for the Pstorm
  */
 struct pstorm_core_conn_st_ctx {
-	__le32 reserved[4];
+	__le32 reserved[20];
 };
 
 /*
@@ -75,7 +75,7 @@ struct xstorm_core_conn_st_ctx {
 
 struct xstorm_core_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
-	u8 core_state /* state */;
+	u8 state /* state */;
 	u8 flags0;
 #define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1 /* exist_in_qm0 */
 #define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
@@ -516,13 +516,20 @@ struct ustorm_core_conn_ag_ctx {
  * The core storm context for the Mstorm
  */
 struct mstorm_core_conn_st_ctx {
-	__le32 reserved[24];
+	__le32 reserved[40];
 };
 
 /*
  * The core storm context for the Ustorm
  */
 struct ustorm_core_conn_st_ctx {
+	__le32 reserved[20];
+};
+
+/*
+ * The core storm context for the Tstorm
+ */
+struct tstorm_core_conn_st_ctx {
 	__le32 reserved[4];
 };
 
@@ -549,6 +556,9 @@ struct core_conn_context {
 /* ustorm storm context */
 	struct ustorm_core_conn_st_ctx ustorm_st_context;
 	struct regpair ustorm_st_padding[2] /* padding */;
+/* tstorm storm context */
+	struct tstorm_core_conn_st_ctx tstorm_st_context;
+	struct regpair tstorm_st_padding[2] /* padding */;
 };
 
 
@@ -573,6 +583,7 @@ enum core_event_opcode {
 	CORE_EVENT_RX_QUEUE_STOP,
 	CORE_EVENT_RX_QUEUE_FLUSH,
 	CORE_EVENT_TX_QUEUE_UPDATE,
+	CORE_EVENT_QUEUE_STATS_QUERY,
 	MAX_CORE_EVENT_OPCODE
 };
 
@@ -601,7 +612,7 @@ struct core_ll2_port_stats {
 
 
 /*
- * Ethernet TX Per Queue Stats
+ * LL2 TX Per Queue Stats
  */
 struct core_ll2_pstorm_per_queue_stat {
 /* number of total bytes sent without errors */
@@ -616,16 +627,8 @@ struct core_ll2_pstorm_per_queue_stat {
 	struct regpair sent_mcast_pkts;
 /* number of total packets sent without errors */
 	struct regpair sent_bcast_pkts;
-};
-
-
-/*
- * Light-L2 RX Producers in Tstorm RAM
- */
-struct core_ll2_rx_prod {
-	__le16 bd_prod /* BD Producer */;
-	__le16 cqe_prod /* CQE Producer */;
-	__le32 reserved;
+/* number of total packets dropped due to errors */
+	struct regpair error_drop_pkts;
 };
 
 
@@ -636,7 +639,6 @@ struct core_ll2_tstorm_per_queue_stat {
 	struct regpair no_buff_discard;
 };
 
-
 struct core_ll2_ustorm_per_queue_stat {
 	struct regpair rcv_ucast_bytes;
 	struct regpair rcv_mcast_bytes;
@@ -647,6 +649,59 @@ struct core_ll2_ustorm_per_queue_stat {
 };
 
 
+/*
+ * Light-L2 RX Producers
+ */
+struct core_ll2_rx_prod {
+	__le16 bd_prod /* BD Producer */;
+	__le16 cqe_prod /* CQE Producer */;
+};
+
+
+
+struct core_ll2_tx_per_queue_stat {
+/* PSTORM per queue statistics */
+	struct core_ll2_pstorm_per_queue_stat pstorm_stat;
+};
+
+
+
+/*
+ * Structure for doorbell data, in PWM mode, for RX producers update.
+ */
+struct core_pwm_prod_update_data {
+	__le16 icid /* internal CID */;
+	u8 reserved0;
+	u8 params;
+/* aggregative command. Set DB_AGG_CMD_SET for producer update
+ * (use enum db_agg_cmd_sel)
+ */
+#define CORE_PWM_PROD_UPDATE_DATA_AGG_CMD_MASK    0x3
+#define CORE_PWM_PROD_UPDATE_DATA_AGG_CMD_SHIFT   0
+#define CORE_PWM_PROD_UPDATE_DATA_RESERVED1_MASK  0x3F /* Set 0. */
+#define CORE_PWM_PROD_UPDATE_DATA_RESERVED1_SHIFT 2
+	struct core_ll2_rx_prod prod /* Producers. */;
+};
+
+
+/*
+ * Ramrod data for rx/tx queue statistics query ramrod
+ */
+struct core_queue_stats_query_ramrod_data {
+	u8 rx_stat /* If set, collect RX queue statistics. */;
+	u8 tx_stat /* If set, collect TX queue statistics. */;
+	__le16 reserved[3];
+/* Address of RX statistic buffer. core_ll2_rx_per_queue_stat struct will be
+ * written to this address.
+ */
+	struct regpair rx_stat_addr;
+/* Address of TX statistic buffer. core_ll2_tx_per_queue_stat struct will be
+ * written to this address.
+ */
+	struct regpair tx_stat_addr;
+};
+
+
 /*
  * Core Ramrod Command IDs (light L2)
  */
@@ -658,6 +713,7 @@ enum core_ramrod_cmd_id {
 	CORE_RAMROD_TX_QUEUE_STOP /* TX Queue Stop Ramrod */,
 	CORE_RAMROD_RX_QUEUE_FLUSH /* RX Flush queue Ramrod */,
 	CORE_RAMROD_TX_QUEUE_UPDATE /* TX Queue Update Ramrod */,
+	CORE_RAMROD_QUEUE_STATS_QUERY /* Queue Statistics Query Ramrod */,
 	MAX_CORE_RAMROD_CMD_ID
 };
 
@@ -772,7 +828,8 @@ struct core_rx_gsi_offload_cqe {
 /* These are the lower 16 bit of QP id in RoCE BTH header */
 	__le16 qp_id;
 	__le32 src_qp /* Source QP from DETH header */;
-	__le32 reserved[3];
+	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
+	__le32 reserved;
 };
 
 /*
@@ -803,24 +860,21 @@ union core_rx_cqe_union {
  * Ramrod data for rx queue start ramrod
  */
 struct core_rx_start_ramrod_data {
-	struct regpair bd_base /* bd address of the first bd page */;
+	struct regpair bd_base /* Address of the first BD page */;
 	struct regpair cqe_pbl_addr /* Base address on host of CQE PBL */;
-	__le16 mtu /* Maximum transmission unit */;
+	__le16 mtu /* MTU */;
 	__le16 sb_id /* Status block ID */;
-	u8 sb_index /* index of the protocol index */;
-	u8 complete_cqe_flg /* post completion to the CQE ring if set */;
-	u8 complete_event_flg /* post completion to the event ring if set */;
-	u8 drop_ttl0_flg /* drop packet with ttl0 if set */;
-	__le16 num_of_pbl_pages /* Num of pages in CQE PBL */;
-/* if set, 802.1q tags will be removed and copied to CQE */
-/* if set, 802.1q tags will be removed and copied to CQE */
+	u8 sb_index /* Status block index */;
+	u8 complete_cqe_flg /* if set - post completion to the CQE ring */;
+	u8 complete_event_flg /* if set - post completion to the event ring */;
+	u8 drop_ttl0_flg /* if set - drop packet with ttl=0 */;
+	__le16 num_of_pbl_pages /* Number of pages in CQE PBL */;
+/* if set - 802.1q tag will be removed and copied to CQE */
 	u8 inner_vlan_stripping_en;
-/* if set and inner vlan does not exist, the outer vlan will copied to CQE as
- * inner vlan. should be used in MF_OVLAN mode only.
- */
-	u8 report_outer_vlan;
+/* if set - outer tag won't be stripped, valid only in MF OVLAN mode. */
+	u8 outer_vlan_stripping_dis;
 	u8 queue_id /* Light L2 RX Queue ID */;
-	u8 main_func_queue /* Is this the main queue for the PF */;
+	u8 main_func_queue /* Set if this is the main PFs LL2 queue */;
 /* Duplicate broadcast packets to LL2 main queue in mf_si mode. Valid if
  * main_func_queue is set.
  */
@@ -829,17 +883,21 @@ struct core_rx_start_ramrod_data {
  * main_func_queue is set.
  */
 	u8 mf_si_mcast_accept_all;
-/* Specifies how ll2 should deal with packets errors: packet_too_big and
- * no_buff
+/* If set, the inner vlan (802.1q tag) priority that is written to cqe will be
+ * zero out, used for TenantDcb
  */
+/* Specifies how ll2 should deal with RX packet errors */
 	struct core_rx_action_on_error action_on_error;
-/* set when in GSI offload mode on ROCE connection */
-	u8 gsi_offload_flag;
+	u8 gsi_offload_flag /* set for GSI offload mode */;
+/* If set, queue is subject for RX VFC classification. */
+	u8 vport_id_valid;
+	u8 vport_id /* Queue VPORT for RX VFC classification. */;
+	u8 zero_prod_flg /* If set, zero RX producers. */;
 /* If set, the inner vlan (802.1q tag) priority that is written to cqe will be
  * zero out, used for TenantDcb
  */
 	u8 wipe_inner_vlan_pri_en;
-	u8 reserved[5];
+	u8 reserved[2];
 };
 
 
@@ -959,13 +1017,14 @@ struct core_tx_start_ramrod_data {
 	u8 conn_type /* connection type that loaded ll2 */;
 	__le16 pbl_size /* Number of BD pages pointed by PBL */;
 	__le16 qm_pq_id /* QM PQ ID */;
-/* set when in GSI offload mode on ROCE connection */
-	u8 gsi_offload_flag;
+	u8 gsi_offload_flag /* set for GSI offload mode */;
+	u8 ctx_stats_en /* Context statistics enable */;
+/* If set, queue is part of VPORT and subject for TX switching. */
+	u8 vport_id_valid;
 /* vport id of the current connection, used to access non_rdma_in_to_in_pri_map
  * which is per vport
  */
 	u8 vport_id;
-	u8 resrved[2];
 };
 
 
@@ -1048,12 +1107,23 @@ struct eth_pstorm_per_pf_stat {
 	struct regpair sent_gre_bytes /* Sent GRE bytes */;
 	struct regpair sent_vxlan_bytes /* Sent VXLAN bytes */;
 	struct regpair sent_geneve_bytes /* Sent GENEVE bytes */;
-	struct regpair sent_gre_pkts /* Sent GRE packets */;
+	struct regpair sent_mpls_bytes /* Sent MPLS bytes */;
+	struct regpair sent_gre_mpls_bytes /* Sent GRE MPLS bytes (E5 Only) */;
+	struct regpair sent_udp_mpls_bytes /* Sent UDP MPLS bytes (E5 Only) */;
+	struct regpair sent_gre_pkts /* Sent GRE packets (E5 Only) */;
 	struct regpair sent_vxlan_pkts /* Sent VXLAN packets */;
 	struct regpair sent_geneve_pkts /* Sent GENEVE packets */;
+	struct regpair sent_mpls_pkts /* Sent MPLS packets (E5 Only) */;
+	struct regpair sent_gre_mpls_pkts /* Sent GRE MPLS packets (E5 Only) */;
+	struct regpair sent_udp_mpls_pkts /* Sent UDP MPLS packets (E5 Only) */;
 	struct regpair gre_drop_pkts /* Dropped GRE TX packets */;
 	struct regpair vxlan_drop_pkts /* Dropped VXLAN TX packets */;
 	struct regpair geneve_drop_pkts /* Dropped GENEVE TX packets */;
+	struct regpair mpls_drop_pkts /* Dropped MPLS TX packets (E5 Only) */;
+/* Dropped GRE MPLS TX packets (E5 Only) */
+	struct regpair gre_mpls_drop_pkts;
+/* Dropped UDP MPLS TX packets (E5 Only) */
+	struct regpair udp_mpls_drop_pkts;
 };
 
 
@@ -1176,6 +1246,8 @@ union event_ring_data {
 	struct iscsi_eqe_data iscsi_info /* Dedicated fields to iscsi data */;
 /* Dedicated fields to iscsi connect done results */
 	struct iscsi_connect_done_results iscsi_conn_done_info;
+	union rdma_eqe_data rdma_data /* Dedicated field for RDMA data */;
+	struct nvmf_eqe_data nvmf_data /* Dedicated field for NVMf data */;
 	struct malicious_vf_eqe_data malicious_vf /* Malicious VF data */;
 /* VF Initial Cleanup data */
 	struct initial_cleanup_eqe_data vf_init_cleanup;
@@ -1187,10 +1259,14 @@ union event_ring_data {
  */
 struct event_ring_entry {
 	u8 protocol_id /* Event Protocol ID (use enum protocol_type) */;
-	u8 opcode /* Event Opcode */;
-	__le16 reserved0 /* Reserved */;
+	u8 opcode /* Event Opcode (Per Protocol Type) */;
+	u8 reserved0 /* Reserved */;
+	u8 vfId /* vfId for this event, 0xFF if this is a PF event */;
 	__le16 echo /* Echo value from ramrod data on the host */;
-	u8 fw_return_code /* FW return code for SP ramrods */;
+/* FW return code for SP ramrods. Use (according to protocol) eth_return_code,
+ * or rdma_fw_return_code, or fcoe_completion_status
+ */
+	u8 fw_return_code;
 	u8 flags;
 /* 0: synchronous EQE - a completion of SP message. 1: asynchronous EQE */
 #define EVENT_RING_ENTRY_ASYNC_MASK      0x1
@@ -1320,6 +1396,22 @@ enum malicious_vf_error_id {
 	ETH_TUNN_IPV6_EXT_NBD_ERR,
 	ETH_CONTROL_PACKET_VIOLATION /* VF sent control frame such as PFC */,
 	ETH_ANTI_SPOOFING_ERR /* Anti-Spoofing verification failure */,
+/* packet scanned is too large (can be 9700 at most) */
+	ETH_PACKET_SIZE_TOO_LARGE,
+/* TX packet marked as insert VLAN when it is illegal */
+	CORE_ILLEGAL_VLAN_MODE,
+/* indicated number of BDs for the packet is illegal */
+	CORE_ILLEGAL_NBDS,
+	CORE_FIRST_BD_WO_SOP /* 1st BD must have start_bd flag set */,
+/* There are not enough BDs for transmission of even one packet */
+	CORE_INSUFFICIENT_BDS,
+/* TX packet is shorter than reported on BDs or than the minimal size */
+	CORE_PACKET_TOO_SMALL,
+	CORE_ILLEGAL_INBAND_TAGS /* TX packet has illegal inband tags marked */,
+	CORE_VLAN_INSERT_AND_INBAND_VLAN /* VLAN can't be added to inband tag */,
+	CORE_MTU_VIOLATION /* TX packet is greater than MTU */,
+	CORE_CONTROL_PACKET_VIOLATION /* VF sent control frame such as PFC */,
+	CORE_ANTI_SPOOFING_ERR /* Anti-Spoofing verification failure */,
 	MAX_MALICIOUS_VF_ERROR_ID
 };
 
@@ -1837,6 +1929,23 @@ enum vf_zone_size_mode {
 
 
 
+/*
+ * Xstorm non-triggering VF zone
+ */
+struct xstorm_non_trigger_vf_zone {
+	struct regpair non_edpm_ack_pkts /* RoCE received statistics */;
+};
+
+
+/*
+ * Tstorm VF zone
+ */
+struct xstorm_vf_zone {
+/* non-interrupt-triggering zone */
+	struct xstorm_non_trigger_vf_zone non_trigger;
+};
+
+
 
 /*
  * Attentions status block
@@ -2205,6 +2314,44 @@ struct igu_msix_vector {
 };
 
 
+struct mstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
 /*
  * per encapsulation type enabling flags
  */
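Among the additions above, core_pwm_prod_update_data describes the PWM-mode doorbell used to update LL2 RX producers. A rough sketch of filling such a record follows; it uses plain host-order integers instead of __le16 and an assumed value for DB_AGG_CMD_SET, since the db_agg_cmd_sel enum is not part of this hunk:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define DEMO_AGG_CMD_MASK   0x3
#define DEMO_AGG_CMD_SHIFT  0
#define DEMO_DB_AGG_CMD_SET 1        /* assumed enum value, for illustration */

struct demo_ll2_rx_prod {
	uint16_t bd_prod;
	uint16_t cqe_prod;
};

struct demo_pwm_prod_update {
	uint16_t icid;
	uint8_t reserved0;
	uint8_t params;
	struct demo_ll2_rx_prod prod;
};

int main(void)
{
	struct demo_pwm_prod_update upd;

	memset(&upd, 0, sizeof(upd));
	upd.icid = 0x123;                /* hypothetical internal CID */
	upd.params |= (DEMO_DB_AGG_CMD_SET & DEMO_AGG_CMD_MASK) << DEMO_AGG_CMD_SHIFT;
	upd.prod.bd_prod = 64;
	upd.prod.cqe_prod = 64;

	printf("icid=0x%x params=0x%x bd_prod=%u cqe_prod=%u\n",
	       upd.icid, upd.params, upd.prod.bd_prod, upd.prod.cqe_prod);
	return 0;
}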
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
index 085af0a3d..a959aeea7 100644
--- a/drivers/net/qede/base/ecore_hsi_debug_tools.h
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -11,98 +11,6 @@
 /****************************************/
 
 
-enum block_addr {
-	GRCBASE_GRC = 0x50000,
-	GRCBASE_MISCS = 0x9000,
-	GRCBASE_MISC = 0x8000,
-	GRCBASE_DBU = 0xa000,
-	GRCBASE_PGLUE_B = 0x2a8000,
-	GRCBASE_CNIG = 0x218000,
-	GRCBASE_CPMU = 0x30000,
-	GRCBASE_NCSI = 0x40000,
-	GRCBASE_OPTE = 0x53000,
-	GRCBASE_BMB = 0x540000,
-	GRCBASE_PCIE = 0x54000,
-	GRCBASE_MCP = 0xe00000,
-	GRCBASE_MCP2 = 0x52000,
-	GRCBASE_PSWHST = 0x2a0000,
-	GRCBASE_PSWHST2 = 0x29e000,
-	GRCBASE_PSWRD = 0x29c000,
-	GRCBASE_PSWRD2 = 0x29d000,
-	GRCBASE_PSWWR = 0x29a000,
-	GRCBASE_PSWWR2 = 0x29b000,
-	GRCBASE_PSWRQ = 0x280000,
-	GRCBASE_PSWRQ2 = 0x240000,
-	GRCBASE_PGLCS = 0x0,
-	GRCBASE_DMAE = 0xc000,
-	GRCBASE_PTU = 0x560000,
-	GRCBASE_TCM = 0x1180000,
-	GRCBASE_MCM = 0x1200000,
-	GRCBASE_UCM = 0x1280000,
-	GRCBASE_XCM = 0x1000000,
-	GRCBASE_YCM = 0x1080000,
-	GRCBASE_PCM = 0x1100000,
-	GRCBASE_QM = 0x2f0000,
-	GRCBASE_TM = 0x2c0000,
-	GRCBASE_DORQ = 0x100000,
-	GRCBASE_BRB = 0x340000,
-	GRCBASE_SRC = 0x238000,
-	GRCBASE_PRS = 0x1f0000,
-	GRCBASE_TSDM = 0xfb0000,
-	GRCBASE_MSDM = 0xfc0000,
-	GRCBASE_USDM = 0xfd0000,
-	GRCBASE_XSDM = 0xf80000,
-	GRCBASE_YSDM = 0xf90000,
-	GRCBASE_PSDM = 0xfa0000,
-	GRCBASE_TSEM = 0x1700000,
-	GRCBASE_MSEM = 0x1800000,
-	GRCBASE_USEM = 0x1900000,
-	GRCBASE_XSEM = 0x1400000,
-	GRCBASE_YSEM = 0x1500000,
-	GRCBASE_PSEM = 0x1600000,
-	GRCBASE_RSS = 0x238800,
-	GRCBASE_TMLD = 0x4d0000,
-	GRCBASE_MULD = 0x4e0000,
-	GRCBASE_YULD = 0x4c8000,
-	GRCBASE_XYLD = 0x4c0000,
-	GRCBASE_PTLD = 0x590000,
-	GRCBASE_YPLD = 0x5b0000,
-	GRCBASE_PRM = 0x230000,
-	GRCBASE_PBF_PB1 = 0xda0000,
-	GRCBASE_PBF_PB2 = 0xda4000,
-	GRCBASE_RPB = 0x23c000,
-	GRCBASE_BTB = 0xdb0000,
-	GRCBASE_PBF = 0xd80000,
-	GRCBASE_RDIF = 0x300000,
-	GRCBASE_TDIF = 0x310000,
-	GRCBASE_CDU = 0x580000,
-	GRCBASE_CCFC = 0x2e0000,
-	GRCBASE_TCFC = 0x2d0000,
-	GRCBASE_IGU = 0x180000,
-	GRCBASE_CAU = 0x1c0000,
-	GRCBASE_RGFS = 0xf00000,
-	GRCBASE_RGSRC = 0x320000,
-	GRCBASE_TGFS = 0xd00000,
-	GRCBASE_TGSRC = 0x322000,
-	GRCBASE_UMAC = 0x51000,
-	GRCBASE_XMAC = 0x210000,
-	GRCBASE_DBG = 0x10000,
-	GRCBASE_NIG = 0x500000,
-	GRCBASE_WOL = 0x600000,
-	GRCBASE_BMBN = 0x610000,
-	GRCBASE_IPC = 0x20000,
-	GRCBASE_NWM = 0x800000,
-	GRCBASE_NWS = 0x700000,
-	GRCBASE_MS = 0x6a0000,
-	GRCBASE_PHY_PCIE = 0x620000,
-	GRCBASE_LED = 0x6b8000,
-	GRCBASE_AVS_WRAP = 0x6b0000,
-	GRCBASE_MISC_AEU = 0x8000,
-	GRCBASE_BAR0_MAP = 0x1c00000,
-	MAX_BLOCK_ADDR
-};
-
-
 enum block_id {
 	BLOCK_GRC,
 	BLOCK_MISCS,
@@ -157,8 +65,6 @@ enum block_id {
 	BLOCK_MULD,
 	BLOCK_YULD,
 	BLOCK_XYLD,
-	BLOCK_PTLD,
-	BLOCK_YPLD,
 	BLOCK_PRM,
 	BLOCK_PBF_PB1,
 	BLOCK_PBF_PB2,
@@ -172,12 +78,9 @@ enum block_id {
 	BLOCK_TCFC,
 	BLOCK_IGU,
 	BLOCK_CAU,
-	BLOCK_RGFS,
-	BLOCK_RGSRC,
-	BLOCK_TGFS,
-	BLOCK_TGSRC,
 	BLOCK_UMAC,
 	BLOCK_XMAC,
+	BLOCK_MSTAT,
 	BLOCK_DBG,
 	BLOCK_NIG,
 	BLOCK_WOL,
@@ -189,8 +92,18 @@ enum block_id {
 	BLOCK_PHY_PCIE,
 	BLOCK_LED,
 	BLOCK_AVS_WRAP,
-	BLOCK_MISC_AEU,
+	BLOCK_PXPREQBUS,
 	BLOCK_BAR0_MAP,
+	BLOCK_MCP_FIO,
+	BLOCK_LAST_INIT,
+	BLOCK_PRS_FC,
+	BLOCK_PBF_FC,
+	BLOCK_NIG_LB_FC,
+	BLOCK_NIG_LB_FC_PLLH,
+	BLOCK_NIG_TX_FC_PLLH,
+	BLOCK_NIG_TX_FC,
+	BLOCK_NIG_RX_FC_PLLH,
+	BLOCK_NIG_RX_FC,
 	MAX_BLOCK_ID
 };
 
@@ -210,10 +123,13 @@ enum bin_dbg_buffer_type {
 	BIN_BUF_DBG_ATTN_REGS /* Attention registers */,
 	BIN_BUF_DBG_ATTN_INDEXES /* Attention indexes */,
 	BIN_BUF_DBG_ATTN_NAME_OFFSETS /* Attention name offsets */,
-	BIN_BUF_DBG_BUS_BLOCKS /* Debug Bus blocks */,
-	BIN_BUF_DBG_BUS_LINES /* Debug Bus lines */,
-	BIN_BUF_DBG_BUS_BLOCKS_USER_DATA /* Debug Bus blocks user data */,
+	BIN_BUF_DBG_BLOCKS /* Blocks debug data */,
+	BIN_BUF_DBG_BLOCKS_CHIP_DATA /* Blocks debug chip data */,
+	BIN_BUF_DBG_BUS_LINES /* Blocks debug bus lines */,
+	BIN_BUF_DBG_BLOCKS_USER_DATA /* Blocks debug user data */,
+	BIN_BUF_DBG_BLOCKS_CHIP_USER_DATA /* Blocks debug chip user data */,
 	BIN_BUF_DBG_BUS_LINE_NAME_OFFSETS /* Debug Bus line name offsets */,
+	BIN_BUF_DBG_RESET_REGS /* Reset registers */,
 	BIN_BUF_DBG_PARSING_STRINGS /* Debug Tools parsing strings */,
 	MAX_BIN_DBG_BUFFER_TYPE
 };
@@ -358,24 +274,95 @@ enum dbg_attn_type {
 
 
 /*
- * Debug Bus block data
+ * Block debug data
  */
-struct dbg_bus_block {
-/* Number of debug lines in this block (excluding signature & latency events) */
-	u8 num_of_lines;
-/* Indicates if this block has a latency events debug line (0/1). */
-	u8 has_latency_events;
-/* Offset of this blocks lines in the Debug Bus lines array. */
-	u16 lines_offset;
+struct dbg_block {
+	u8 name[15] /* Block name */;
+/* The letter (char) of the associated Storm, or 0 if no associated Storm. */
+	u8 associated_storm_letter;
+};
+
+
+/*
+ * Chip-specific block debug data
+ */
+struct dbg_block_chip {
+	u8 flags;
+/* Indicates if the block is removed in this chip (0/1). */
+#define DBG_BLOCK_CHIP_IS_REMOVED_MASK           0x1
+#define DBG_BLOCK_CHIP_IS_REMOVED_SHIFT          0
+/* Indicates if this block has a reset register (0/1). */
+#define DBG_BLOCK_CHIP_HAS_RESET_REG_MASK        0x1
+#define DBG_BLOCK_CHIP_HAS_RESET_REG_SHIFT       1
+/* Indicates if this block should be taken out of reset before GRC Dump (0/1).
+ * Valid only if has_reset_reg is set.
+ */
+#define DBG_BLOCK_CHIP_UNRESET_BEFORE_DUMP_MASK  0x1
+#define DBG_BLOCK_CHIP_UNRESET_BEFORE_DUMP_SHIFT 2
+/* Indicates if this block has a debug bus (0/1). */
+#define DBG_BLOCK_CHIP_HAS_DBG_BUS_MASK          0x1
+#define DBG_BLOCK_CHIP_HAS_DBG_BUS_SHIFT         3
+/* Indicates if this block has a latency events debug line (0/1). Valid only
+ * if has_dbg_bus is set.
+ */
+#define DBG_BLOCK_CHIP_HAS_LATENCY_EVENTS_MASK   0x1
+#define DBG_BLOCK_CHIP_HAS_LATENCY_EVENTS_SHIFT  4
+#define DBG_BLOCK_CHIP_RESERVED0_MASK            0x7
+#define DBG_BLOCK_CHIP_RESERVED0_SHIFT           5
+/* The DBG block client ID of this block/chip. Valid only if has_dbg_bus is
+ * set.
+ */
+	u8 dbg_client_id;
+/* The ID of the reset register of this block/chip in the dbg_reset_reg
+ * array.
+ */
+	u8 reset_reg_id;
+/* The bit offset of this block/chip in the reset register. Valid only if
+ * has_reset_reg is set.
+ */
+	u8 reset_reg_bit_offset;
+	struct dbg_mode_hdr dbg_bus_mode /* Mode header */;
+	u16 reserved1;
+	u8 reserved2;
+/* Number of Debug Bus lines in this block/chip (excluding signature and latency
+ * events). Valid only if has_dbg_bus is set.
+ */
+	u8 num_of_dbg_bus_lines;
+/* Offset of this block/chip Debug Bus lines in the Debug Bus lines array. Valid
+ * only if has_dbg_bus is set.
+ */
+	u16 dbg_bus_lines_offset;
+/* GRC address of the Debug Bus dbg_select register (in dwords). Valid only if
+ * has_dbg_bus is set.
+ */
+	u32 dbg_select_reg_addr;
+/* GRC address of the Debug Bus dbg_dword_enable register (in dwords). Valid
+ * only if has_dbg_bus is set.
+ */
+	u32 dbg_dword_enable_reg_addr;
+/* GRC address of the Debug Bus dbg_shift register (in dwords). Valid only if
+ * has_dbg_bus is set.
+ */
+	u32 dbg_shift_reg_addr;
+/* GRC address of the Debug Bus dbg_force_valid register (in dwords). Valid only
+ * if has_dbg_bus is set.
+ */
+	u32 dbg_force_valid_reg_addr;
+/* GRC address of the Debug Bus dbg_force_frame register (in dwords). Valid only
+ * if has_dbg_bus is set.
+ */
+	u32 dbg_force_frame_reg_addr;
 };
 
 
 /*
- * Debug Bus block user data
+ * Chip-specific block user debug data
+ */
+struct dbg_block_chip_user {
+/* Number of debug bus lines in this block (excluding signature and latency
+ * events).
  */
-struct dbg_bus_block_user_data {
-/* Number of debug lines in this block (excluding signature & latency events) */
-	u8 num_of_lines;
+	u8 num_of_dbg_bus_lines;
 /* Indicates if this block has a latency events debug line (0/1). */
 	u8 has_latency_events;
 /* Offset of this blocks lines in the debug bus line name offsets array. */
@@ -383,6 +370,14 @@ struct dbg_bus_block_user_data {
 };
 
 
+/*
+ * Block user debug data
+ */
+struct dbg_block_user {
+	u8 name[16] /* Block name */;
+};
+
+
 /*
  * Block Debug line data
  */
@@ -603,51 +598,42 @@ enum dbg_idle_chk_severity_types {
 
 
 /*
- * Debug Bus block data
+ * Reset register
  */
-struct dbg_bus_block_data {
-	__le16 data;
-/* 4-bit value: bit i set -> dword/qword i is enabled. */
-#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_MASK       0xF
-#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_SHIFT      0
-/* Number of dwords/qwords to shift right the debug data (0-3) */
-#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_MASK       0xF
-#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_SHIFT      4
-/* 4-bit value: bit i set -> dword/qword i is forced valid. */
-#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_MASK  0xF
-#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_SHIFT 8
-/* 4-bit value: bit i set -> dword/qword i frame bit is forced. */
-#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_MASK  0xF
-#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_SHIFT 12
-	u8 line_num /* Debug line number to select */;
-	u8 hw_id /* HW ID associated with the block */;
+struct dbg_reset_reg {
+	u32 data;
+#define DBG_RESET_REG_ADDR_MASK        0xFFFFFF /* GRC address (in dwords) */
+#define DBG_RESET_REG_ADDR_SHIFT       0
+/* indicates if this register is removed (0/1). */
+#define DBG_RESET_REG_IS_REMOVED_MASK  0x1
+#define DBG_RESET_REG_IS_REMOVED_SHIFT 24
+#define DBG_RESET_REG_RESERVED_MASK    0x7F
+#define DBG_RESET_REG_RESERVED_SHIFT   25
 };
 
 
 /*
- * Debug Bus Clients
- */
-enum dbg_bus_clients {
-	DBG_BUS_CLIENT_RBCN,
-	DBG_BUS_CLIENT_RBCP,
-	DBG_BUS_CLIENT_RBCR,
-	DBG_BUS_CLIENT_RBCT,
-	DBG_BUS_CLIENT_RBCU,
-	DBG_BUS_CLIENT_RBCF,
-	DBG_BUS_CLIENT_RBCX,
-	DBG_BUS_CLIENT_RBCS,
-	DBG_BUS_CLIENT_RBCH,
-	DBG_BUS_CLIENT_RBCZ,
-	DBG_BUS_CLIENT_OTHER_ENGINE,
-	DBG_BUS_CLIENT_TIMESTAMP,
-	DBG_BUS_CLIENT_CPU,
-	DBG_BUS_CLIENT_RBCY,
-	DBG_BUS_CLIENT_RBCQ,
-	DBG_BUS_CLIENT_RBCM,
-	DBG_BUS_CLIENT_RBCB,
-	DBG_BUS_CLIENT_RBCW,
-	DBG_BUS_CLIENT_RBCV,
-	MAX_DBG_BUS_CLIENTS
+ * Debug Bus block data
+ */
+struct dbg_bus_block_data {
+/* 4 bit value, bit i set -> dword/qword i is enabled in block. */
+	u8 enable_mask;
+/* Number of dwords/qwords to cyclically right-shift the blocks output (0-3). */
+	u8 right_shift;
+/* 4 bit value, bit i set -> dword/qword i is forced valid in block. */
+	u8 force_valid_mask;
+/* 4 bit value, bit i set -> dword/qword i frame bit is forced in block. */
+	u8 force_frame_mask;
+/* bit i set -> dword i contains this blocks data (after shifting). */
+	u8 dword_mask;
+	u8 line_num /* Debug line number to select */;
+	u8 hw_id /* HW ID associated with the block */;
+	u8 flags;
+/* 0/1. If 1, the debug line is 256b, otherwise it is 128b. */
+#define DBG_BUS_BLOCK_DATA_IS_256B_LINE_MASK  0x1
+#define DBG_BUS_BLOCK_DATA_IS_256B_LINE_SHIFT 0
+#define DBG_BUS_BLOCK_DATA_RESERVED_MASK      0x7F
+#define DBG_BUS_BLOCK_DATA_RESERVED_SHIFT     1
 };
 
 
@@ -673,15 +659,19 @@ enum dbg_bus_constraint_ops {
  * Debug Bus trigger state data
  */
 struct dbg_bus_trigger_state_data {
-	u8 data;
-/* 4-bit value: bit i set -> dword i of the trigger state block
- * (after right shift) is enabled.
- */
-#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_MASK  0xF
-#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_SHIFT 0
-/* 4-bit value: bit i set -> dword i is compared by a constraint */
-#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_MASK      0xF
-#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_SHIFT     4
+/* Message length (in cycles) to be used for message-based trigger constraints.
+ * If set to 0, message length is based only on frame bit received from HW.
+ */
+	u8 msg_len;
+/* A bit for each dword in the debug bus cycle, indicating if this dword appears
+ * in a trigger constraint (1) or not (0)
+ */
+	u8 constraint_dword_mask;
+/* Storm ID to trigger on. Valid only when triggering on Storm data.
+ * (use enum dbg_storms)
+ */
+	u8 storm_id;
+	u8 reserved;
 };
 
 /*
@@ -751,11 +741,7 @@ struct dbg_bus_storm_data {
 struct dbg_bus_data {
 	u32 app_version /* The tools version number of the application */;
 	u8 state /* The current debug bus state */;
-	u8 hw_dwords /* HW dwords per cycle */;
-/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
- * HW ID of dword/qword i
- */
-	u16 hw_id_mask;
+	u8 mode_256b_en /* Indicates if the 256 bit mode is enabled */;
 	u8 num_enabled_blocks /* Number of blocks enabled for recording */;
 	u8 num_enabled_storms /* Number of Storms enabled for recording */;
 	u8 target /* Output target */;
@@ -777,102 +763,46 @@ struct dbg_bus_data {
  * Valid only if both filter and trigger are enabled (0/1)
  */
 	u8 filter_post_trigger;
-	u16 reserved;
 /* Indicates if the recording trigger is enabled (0/1) */
 	u8 trigger_en;
-/* trigger states data */
-	struct dbg_bus_trigger_state_data trigger_states[3];
+/* A bit for each dword in the debug bus cycle, indicating if this dword
+ * appears in a filter constraint (1) or not (0)
+ */
+	u8 filter_constraint_dword_mask;
 	u8 next_trigger_state /* ID of next trigger state to be added */;
 /* ID of next filter/trigger constraint to be added */
 	u8 next_constraint_id;
-/* If true, all inputs are associated with HW ID 0. Otherwise, each input is
- * assigned a different HW ID (0/1)
+/* trigger states data */
+	struct dbg_bus_trigger_state_data trigger_states[3];
+/* Message length (in cycles) to be used for message-based filter constraints.
+ * If set to 0 message length is based only on frame bit received from HW.
  */
-	u8 unify_inputs;
+	u8 filter_msg_len;
 /* Indicates if the other engine sends it NW recording to this engine (0/1) */
 	u8 rcv_from_other_engine;
+/* A bit for each dword in the debug bus cycle, indicating if this dword is
+ * recorded (1) or not (0)
+ */
+	u8 blocks_dword_mask;
+/* Indicates if there are dwords in the debug bus cycle which are recorded
+ * by more than one block (0/1)
+ */
+	u8 blocks_dword_overlap;
+/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
+ * HW ID of dword/qword i
+ */
+	u32 hw_id_mask;
 /* Debug Bus PCI buffer data. Valid only when the target is
  * DBG_BUS_TARGET_ID_PCI.
  */
 	struct dbg_bus_pci_buf_data pci_buf;
 /* Debug Bus data for each block */
-	struct dbg_bus_block_data blocks[88];
+	struct dbg_bus_block_data blocks[132];
 /* Debug Bus data for each block */
 	struct dbg_bus_storm_data storms[6];
 };
 
 
-/*
- * Debug bus filter types
- */
-enum dbg_bus_filter_types {
-	DBG_BUS_FILTER_TYPE_OFF /* filter always off */,
-	DBG_BUS_FILTER_TYPE_PRE /* filter before trigger only */,
-	DBG_BUS_FILTER_TYPE_POST /* filter after trigger only */,
-	DBG_BUS_FILTER_TYPE_ON /* filter always on */,
-	MAX_DBG_BUS_FILTER_TYPES
-};
-
-
-/*
- * Debug bus frame modes
- */
-enum dbg_bus_frame_modes {
-	DBG_BUS_FRAME_MODE_0HW_4ST = 0 /* 0 HW dwords, 4 Storm dwords */,
-	DBG_BUS_FRAME_MODE_4HW_0ST = 3 /* 4 HW dwords, 0 Storm dwords */,
-	DBG_BUS_FRAME_MODE_8HW_0ST = 4 /* 8 HW dwords, 0 Storm dwords */,
-	MAX_DBG_BUS_FRAME_MODES
-};
-
-
-/*
- * Debug bus other engine mode
- */
-enum dbg_bus_other_engine_modes {
-	DBG_BUS_OTHER_ENGINE_MODE_NONE,
-	DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_TX,
-	DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_RX,
-	DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_TX,
-	DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_RX,
-	MAX_DBG_BUS_OTHER_ENGINE_MODES
-};
-
-
-
-/*
- * Debug bus post-trigger recording types
- */
-enum dbg_bus_post_trigger_types {
-	DBG_BUS_POST_TRIGGER_RECORD /* start recording after trigger */,
-	DBG_BUS_POST_TRIGGER_DROP /* drop data after trigger */,
-	MAX_DBG_BUS_POST_TRIGGER_TYPES
-};
-
-
-/*
- * Debug bus pre-trigger recording types
- */
-enum dbg_bus_pre_trigger_types {
-	DBG_BUS_PRE_TRIGGER_START_FROM_ZERO /* start recording from time 0 */,
-/* start recording some chunks before trigger */
-	DBG_BUS_PRE_TRIGGER_NUM_CHUNKS,
-	DBG_BUS_PRE_TRIGGER_DROP /* drop data before trigger */,
-	MAX_DBG_BUS_PRE_TRIGGER_TYPES
-};
-
-
-/*
- * Debug bus SEMI frame modes
- */
-enum dbg_bus_semi_frame_modes {
-/* 0 slow dwords, 4 fast dwords */
-	DBG_BUS_SEMI_FRAME_MODE_0SLOW_4FAST = 0,
-/* 4 slow dwords, 0 fast dwords */
-	DBG_BUS_SEMI_FRAME_MODE_4SLOW_0FAST = 3,
-	MAX_DBG_BUS_SEMI_FRAME_MODES
-};
-
-
 /*
  * Debug bus states
  */
@@ -901,6 +831,8 @@ enum dbg_bus_storm_modes {
 	DBG_BUS_STORM_MODE_LD_ST_ADDR /* load/store address (fast debug) */,
 	DBG_BUS_STORM_MODE_DRA_FSM /* DRA state machines (fast debug) */,
 	DBG_BUS_STORM_MODE_RH /* recording handlers (fast debug) */,
+/* recording handlers with store messages (fast debug) */
+	DBG_BUS_STORM_MODE_RH_WITH_STORE,
 	DBG_BUS_STORM_MODE_FOC /* FOC: FIN + DRA Rd (slow debug) */,
 	DBG_BUS_STORM_MODE_EXT_STORE /* FOC: External Store (slow) */,
 	MAX_DBG_BUS_STORM_MODES
@@ -955,14 +887,13 @@ enum dbg_grc_params {
 	DBG_GRC_PARAM_DUMP_CAU /* dump CAU memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_QM /* dump QM memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_MCP /* dump MCP memories (0/1) */,
-/* MCP Trace meta data size in bytes */
-	DBG_GRC_PARAM_MCP_TRACE_META_SIZE,
+	DBG_GRC_PARAM_DUMP_DORQ /* dump DORQ memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_CFC /* dump CFC memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_IGU /* dump IGU memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_BRB /* dump BRB memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_BTB /* dump BTB memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_BMB /* dump BMB memories (0/1) */,
-	DBG_GRC_PARAM_DUMP_NIG /* dump NIG memories (0/1) */,
+	DBG_GRC_PARAM_RESERVD1 /* reserved */,
 	DBG_GRC_PARAM_DUMP_MULD /* dump MULD memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_PRS /* dump PRS memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_DMAE /* dump PRS memories (0/1) */,
@@ -971,8 +902,9 @@ enum dbg_grc_params {
 	DBG_GRC_PARAM_DUMP_DIF /* dump DIF memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_STATIC /* dump static debug data (0/1) */,
 	DBG_GRC_PARAM_UNSTALL /* un-stall Storms after dump (0/1) */,
-	DBG_GRC_PARAM_NUM_LCIDS /* number of LCIDs (0..320) */,
-	DBG_GRC_PARAM_NUM_LTIDS /* number of LTIDs (0..320) */,
+	DBG_GRC_PARAM_RESERVED2 /* reserved */,
+/* MCP Trace meta data size in bytes */
+	DBG_GRC_PARAM_MCP_TRACE_META_SIZE,
 /* preset: exclude all memories from dump (1 only) */
 	DBG_GRC_PARAM_EXCLUDE_ALL,
 /* preset: include memories for crash dump (1 only) */
@@ -983,26 +915,12 @@ enum dbg_grc_params {
 	DBG_GRC_PARAM_DUMP_PHY /* dump PHY memories (0/1) */,
 	DBG_GRC_PARAM_NO_MCP /* dont perform MCP commands (0/1) */,
 	DBG_GRC_PARAM_NO_FW_VER /* dont read FW/MFW version (0/1) */,
+	DBG_GRC_PARAM_RESERVED3 /* reserved */,
+	DBG_GRC_PARAM_DUMP_MCP_HW_DUMP /* dump MCP HW Dump (0/1) */,
 	MAX_DBG_GRC_PARAMS
 };
 
 
-/*
- * Debug reset registers
- */
-enum dbg_reset_regs {
-	DBG_RESET_REG_MISCS_PL_UA,
-	DBG_RESET_REG_MISCS_PL_HV,
-	DBG_RESET_REG_MISCS_PL_HV_2,
-	DBG_RESET_REG_MISC_PL_UA,
-	DBG_RESET_REG_MISC_PL_HV,
-	DBG_RESET_REG_MISC_PL_PDA_VMAIN_1,
-	DBG_RESET_REG_MISC_PL_PDA_VMAIN_2,
-	DBG_RESET_REG_MISC_PL_PDA_VAUX,
-	MAX_DBG_RESET_REGS
-};
-
-
 /*
  * Debug status codes
  */
@@ -1016,15 +934,15 @@ enum dbg_status {
 	DBG_STATUS_INVALID_PCI_BUF_SIZE,
 	DBG_STATUS_PCI_BUF_ALLOC_FAILED,
 	DBG_STATUS_PCI_BUF_NOT_ALLOCATED,
-	DBG_STATUS_TOO_MANY_INPUTS,
-	DBG_STATUS_INPUT_OVERLAP,
-	DBG_STATUS_HW_ONLY_RECORDING,
+	DBG_STATUS_INVALID_FILTER_TRIGGER_DWORDS,
+	DBG_STATUS_NO_MATCHING_FRAMING_MODE,
+	DBG_STATUS_VFC_READ_ERROR,
 	DBG_STATUS_STORM_ALREADY_ENABLED,
 	DBG_STATUS_STORM_NOT_ENABLED,
 	DBG_STATUS_BLOCK_ALREADY_ENABLED,
 	DBG_STATUS_BLOCK_NOT_ENABLED,
 	DBG_STATUS_NO_INPUT_ENABLED,
-	DBG_STATUS_NO_FILTER_TRIGGER_64B,
+	DBG_STATUS_NO_FILTER_TRIGGER_256B,
 	DBG_STATUS_FILTER_ALREADY_ENABLED,
 	DBG_STATUS_TRIGGER_ALREADY_ENABLED,
 	DBG_STATUS_TRIGGER_NOT_ENABLED,
@@ -1049,7 +967,7 @@ enum dbg_status {
 	DBG_STATUS_MCP_TRACE_NO_META,
 	DBG_STATUS_MCP_COULD_NOT_HALT,
 	DBG_STATUS_MCP_COULD_NOT_RESUME,
-	DBG_STATUS_RESERVED2,
+	DBG_STATUS_RESERVED0,
 	DBG_STATUS_SEMI_FIFO_NOT_EMPTY,
 	DBG_STATUS_IGU_FIFO_BAD_DATA,
 	DBG_STATUS_MCP_COULD_NOT_MASK_PRTY,
@@ -1057,10 +975,15 @@ enum dbg_status {
 	DBG_STATUS_REG_FIFO_BAD_DATA,
 	DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA,
 	DBG_STATUS_DBG_ARRAY_NOT_SET,
-	DBG_STATUS_FILTER_BUG,
+	DBG_STATUS_RESERVED1,
 	DBG_STATUS_NON_MATCHING_LINES,
-	DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET,
+	DBG_STATUS_INSUFFICIENT_HW_IDS,
 	DBG_STATUS_DBG_BUS_IN_USE,
+	DBG_STATUS_INVALID_STORM_DBG_MODE,
+	DBG_STATUS_OTHER_ENGINE_BB_ONLY,
+	DBG_STATUS_FILTER_SINGLE_HW_ID,
+	DBG_STATUS_TRIGGER_SINGLE_HW_ID,
+	DBG_STATUS_MISSING_TRIGGER_STATE_STORM,
 	MAX_DBG_STATUS
 };
 
@@ -1108,9 +1031,9 @@ struct dbg_tools_data {
 	struct idle_chk_data idle_chk /* Idle Check data */;
 	u8 mode_enable[40] /* Indicates if a mode is enabled (0/1) */;
 /* Indicates if a block is in reset state (0/1) */
-	u8 block_in_reset[88];
+	u8 block_in_reset[132];
 	u8 chip_id /* Chip ID (from enum chip_ids) */;
-	u8 platform_id /* Platform ID */;
+	u8 hw_type /* HW Type */;
 	u8 num_ports /* Number of ports in the chip */;
 	u8 num_pfs_per_port /* Number of PFs in each port */;
 	u8 num_vfs /* Number of VFs in the chip */;
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index b1cab2910..bd7bd8658 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -34,7 +34,7 @@ struct xstorm_eth_conn_st_ctx {
 
 struct xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
+	u8 state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
 #define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
@@ -303,57 +303,6 @@ struct xstorm_eth_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
 
-/*
- * The eth storm context for the Ystorm
- */
-struct ystorm_eth_conn_st_ctx {
-	__le32 reserved[8];
-};
-
-struct ystorm_eth_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 state /* state */;
-	u8 flags0;
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1 /* exist_in_qm0 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1 /* exist_in_qm1 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
-	u8 flags1;
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1 /* cf0en */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1 /* cf1en */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1 /* cf2en */
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1 /* rule0en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1 /* rule1en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1 /* rule2en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1 /* rule3en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1 /* rule4en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
-	u8 tx_q0_int_coallecing_timeset /* byte2 */;
-	u8 byte3 /* byte3 */;
-	__le16 word0 /* word0 */;
-	__le32 terminate_spqe /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le16 tx_bd_cons_upd /* word1 */;
-	__le16 word2 /* word2 */;
-	__le16 word3 /* word3 */;
-	__le16 word4 /* word4 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-};
-
 struct tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
@@ -458,6 +407,57 @@ struct tstorm_eth_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
+/*
+ * The eth storm context for the Ystorm
+ */
+struct ystorm_eth_conn_st_ctx {
+	__le32 reserved[8];
+};
+
+struct ystorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 state /* state */;
+	u8 flags0;
+#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1 /* exist_in_qm0 */
+#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1 /* exist_in_qm1 */
+#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+	u8 flags1;
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1 /* cf0en */
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1 /* cf1en */
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1 /* cf2en */
+#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1 /* rule0en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1 /* rule1en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1 /* rule2en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1 /* rule3en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1 /* rule4en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+	u8 tx_q0_int_coallecing_timeset /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 terminate_spqe /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 tx_bd_cons_upd /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
 struct ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
@@ -557,12 +557,12 @@ struct eth_conn_context {
 	struct xstorm_eth_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
 	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
+/* tstorm aggregative context */
+	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ystorm storm context */
 	struct ystorm_eth_conn_st_ctx ystorm_st_context;
 /* ystorm aggregative context */
 	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
-/* tstorm aggregative context */
-	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
 	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
 /* ustorm storm context */
@@ -792,16 +792,34 @@ enum eth_ramrod_cmd_id {
 struct eth_return_code {
 	u8 value;
 /* error code (use enum eth_error_code) */
-#define ETH_RETURN_CODE_ERR_CODE_MASK  0x1F
+#define ETH_RETURN_CODE_ERR_CODE_MASK  0x3F
 #define ETH_RETURN_CODE_ERR_CODE_SHIFT 0
-#define ETH_RETURN_CODE_RESERVED_MASK  0x3
-#define ETH_RETURN_CODE_RESERVED_SHIFT 5
+#define ETH_RETURN_CODE_RESERVED_MASK  0x1
+#define ETH_RETURN_CODE_RESERVED_SHIFT 6
 /* rx path - 0, tx path - 1 */
 #define ETH_RETURN_CODE_RX_TX_MASK     0x1
 #define ETH_RETURN_CODE_RX_TX_SHIFT    7
 };
 
 
+/*
+ * tx destination enum
+ */
+enum eth_tx_dst_mode_config_enum {
+/* tx destination configuration override is disabled */
+	ETH_TX_DST_MODE_CONFIG_DISABLE,
+/* tx destination configuration override is enabled, vport and tx dst will be
+ * taken from the 4th bd
+ */
+	ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD,
+/* tx destination configuration override is enabled, vport and tx dst will be
+ * taken from the vport data
+ */
+	ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_VPORT,
+	MAX_ETH_TX_DST_MODE_CONFIG_ENUM
+};
+
+
 /*
  * What to do in case an error occurs
  */
@@ -1431,7 +1449,7 @@ struct vport_update_ramrod_data {
 
 struct E4XstormEthConnAgCtxDqExtLdPart {
 	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
+	u8 state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
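
A quick illustration of the widened eth_return_code layout above: the error code now
occupies bits [5:0] (mask 0x3F) and the rx/tx flag stays in bit 7. The helpers below are
a hedged sketch written for this mail, not part of the patch; they only restate the
masks/shifts introduced in the hunk.

#include <stdbool.h>
#include <stdint.h>

/* Sketch only: decode an eth_return_code byte using the masks/shifts
 * from the hunk above. err_code is 6 bits wide after this change.
 */
static inline uint8_t eth_ret_err_code(uint8_t value)
{
	return value & 0x3F;         /* ETH_RETURN_CODE_ERR_CODE, shift 0 */
}

static inline bool eth_ret_is_tx_path(uint8_t value)
{
	return (value >> 7) & 0x1;   /* ETH_RETURN_CODE_RX_TX: 0 = rx, 1 = tx */
}
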
diff --git a/drivers/net/qede/base/ecore_hsi_init_func.h b/drivers/net/qede/base/ecore_hsi_init_func.h
index d77edaa1d..7efe2eff1 100644
--- a/drivers/net/qede/base/ecore_hsi_init_func.h
+++ b/drivers/net/qede/base/ecore_hsi_init_func.h
@@ -88,7 +88,18 @@ struct init_nig_pri_tc_map_req {
 
 
 /*
- * QM per-port init parameters
+ * QM per global RL init parameters
+ */
+struct init_qm_global_rl_params {
+/* Rate limit in Mb/sec units. If set to zero, the link speed is used
+ * instead.
+ */
+	u32 rate_limit;
+};
+
+
+/*
+ * QM per port init parameters
  */
 struct init_qm_port_params {
 	u8 active /* Indicates if this port is active */;
@@ -111,24 +122,20 @@ struct init_qm_pq_params {
 	u8 wrr_group /* WRR group */;
 /* Indicates if a rate limiter should be allocated for the PQ (0/1) */
 	u8 rl_valid;
+	u16 rl_id /* RL ID, valid only if rl_valid is true */;
 	u8 port_id /* Port ID */;
-	u8 reserved0;
-	u16 reserved1;
+	u8 reserved;
 };
 
 
 /*
- * QM per-vport init parameters
+ * QM per VPORT init parameters
  */
 struct init_qm_vport_params {
-/* rate limit in Mb/sec units. a value of 0 means dont configure. ignored if
- * VPORT RL is globally disabled.
- */
-	u32 vport_rl;
 /* WFQ weight. A value of 0 means dont configure. ignored if VPORT WFQ is
  * globally disabled.
  */
-	u16 vport_wfq;
+	u16 wfq;
 /* the first Tx PQ ID associated with this VPORT for each TC. */
 	u16 first_tx_pq_id[NUM_OF_TCS];
 };
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index 1fe4bfc61..4f878d061 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -46,10 +46,24 @@ enum bin_init_buffer_type {
 	BIN_BUF_INIT_VAL /* init data */,
 	BIN_BUF_INIT_MODE_TREE /* init modes tree */,
 	BIN_BUF_INIT_IRO /* internal RAM offsets */,
+	BIN_BUF_INIT_OVERLAYS /* FW overlays (except overlay 0) */,
 	MAX_BIN_INIT_BUFFER_TYPE
 };
 
 
+/*
+ * FW overlay buffer header
+ */
+struct fw_overlay_buf_hdr {
+	u32 data;
+#define FW_OVERLAY_BUF_HDR_STORM_ID_MASK  0xFF /* Storm ID */
+#define FW_OVERLAY_BUF_HDR_STORM_ID_SHIFT 0
+/* Size of Storm FW overlay buffer in dwords */
+#define FW_OVERLAY_BUF_HDR_BUF_SIZE_MASK  0xFFFFFF
+#define FW_OVERLAY_BUF_HDR_BUF_SIZE_SHIFT 8
+};
+
+
 /*
  * init array header: raw
  */
@@ -117,6 +131,30 @@ union init_array_hdr {
 };
 
 
+enum dbg_bus_clients {
+	DBG_BUS_CLIENT_RBCN,
+	DBG_BUS_CLIENT_RBCP,
+	DBG_BUS_CLIENT_RBCR,
+	DBG_BUS_CLIENT_RBCT,
+	DBG_BUS_CLIENT_RBCU,
+	DBG_BUS_CLIENT_RBCF,
+	DBG_BUS_CLIENT_RBCX,
+	DBG_BUS_CLIENT_RBCS,
+	DBG_BUS_CLIENT_RBCH,
+	DBG_BUS_CLIENT_RBCZ,
+	DBG_BUS_CLIENT_OTHER_ENGINE,
+	DBG_BUS_CLIENT_TIMESTAMP,
+	DBG_BUS_CLIENT_CPU,
+	DBG_BUS_CLIENT_RBCY,
+	DBG_BUS_CLIENT_RBCQ,
+	DBG_BUS_CLIENT_RBCM,
+	DBG_BUS_CLIENT_RBCB,
+	DBG_BUS_CLIENT_RBCW,
+	DBG_BUS_CLIENT_RBCV,
+	MAX_DBG_BUS_CLIENTS
+};
+
+
 enum init_modes {
 	MODE_BB_A0_DEPRECATED,
 	MODE_BB,
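
As a side note, the new FW overlay buffer header packs both fields into a single dword.
A minimal decode sketch, illustrative only and based on the FW_OVERLAY_BUF_HDR_* masks
and shifts introduced above:

#include <stdint.h>

/* Sketch only: split fw_overlay_buf_hdr.data into its two fields. */
static inline uint8_t fw_overlay_storm_id(uint32_t data)
{
	return data & 0xFF;            /* STORM_ID, bits [7:0] */
}

static inline uint32_t fw_overlay_buf_size_dwords(uint32_t data)
{
	return (data >> 8) & 0xFFFFFF; /* BUF_SIZE in dwords, bits [31:8] */
}
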
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 6a79db52e..0aed043bb 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -12,6 +12,8 @@
 #include "reg_addr.h"
 #include "ecore_utils.h"
 #include "ecore_iov_api.h"
+#include "ecore_gtt_values.h"
+#include "ecore_dev_api.h"
 
 #ifndef ASIC_ONLY
 #define ECORE_EMUL_FACTOR 2000
@@ -78,6 +80,20 @@ enum _ecore_status_t ecore_ptt_pool_alloc(struct ecore_hwfn *p_hwfn)
 	return ECORE_SUCCESS;
 }
 
+void ecore_gtt_init(struct ecore_hwfn *p_hwfn)
+{
+	u32 gtt_base;
+	u32 i;
+
+	/* Set the global windows */
+	gtt_base = PXP_PF_WINDOW_ADMIN_START + PXP_PF_WINDOW_ADMIN_GLOBAL_START;
+
+	for (i = 0; i < OSAL_ARRAY_SIZE(pxp_global_win); i++)
+		if (pxp_global_win[i])
+			REG_WR(p_hwfn, gtt_base + i * PXP_GLOBAL_ENTRY_SIZE,
+			       pxp_global_win[i]);
+}
+
 void ecore_ptt_invalidate(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_ptt *p_ptt;
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index e43f337dc..238bdb9db 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -8,9 +8,8 @@
 #define __ECORE_HW_H__
 
 #include "ecore.h"
-#include "ecore_dev_api.h"
 
-/* Forward decleration */
+/* Forward declaration */
 struct ecore_ptt;
 
 enum reserved_ptts {
@@ -53,10 +52,8 @@ enum reserved_ptts {
 * @brief ecore_gtt_init - Initialize GTT windows
 *
 * @param p_hwfn
-* @param p_ptt
 */
-void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt);
+void ecore_gtt_init(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_ptt_invalidate - Forces all ptt entries to be re-configured
@@ -84,7 +81,6 @@ void ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn);
 /**
  * @brief ecore_ptt_get_bar_addr - Get PTT's external BAR address
  *
- * @param p_hwfn
  * @param p_ptt
  *
  * @return u32
@@ -95,8 +91,8 @@ u32 ecore_ptt_get_bar_addr(struct ecore_ptt	*p_ptt);
  * @brief ecore_ptt_set_win - Set PTT Window's GRC BAR address
  *
  * @param p_hwfn
- * @param new_hw_addr
  * @param p_ptt
+ * @param new_hw_addr
  */
 void ecore_ptt_set_win(struct ecore_hwfn	*p_hwfn,
 		       struct ecore_ptt		*p_ptt,
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 34bcc4249..9f614a4cf 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -663,10 +663,10 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 
 	/* Go over all PF VPORTs */
 	for (i = 0; i < num_vports; i++) {
-		if (!vport_params[i].vport_wfq)
+		if (!vport_params[i].wfq)
 			continue;
 
-		inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
+		inc_val = QM_WFQ_INC_VAL(vport_params[i].wfq);
 		if (inc_val > QM_WFQ_MAX_INC_VAL) {
 			DP_NOTICE(p_hwfn, true,
 				  "Invalid VPORT WFQ weight configuration\n");
@@ -709,8 +709,7 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 
 	/* Go over all PF VPORTs */
 	for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
-		inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl ?
-			  vport_params[i].vport_rl : link_speed);
+		inc_val = QM_RL_INC_VAL(link_speed);
 		if (inc_val > QM_VP_RL_MAX_INC_VAL(link_speed)) {
 			DP_NOTICE(p_hwfn, true,
 				  "Invalid VPORT rate-limit configuration\n");
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index 8f7209100..ad8570a08 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -534,53 +534,6 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt)
-{
-	u32 gtt_base;
-	u32 i;
-
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
-		/* This is done by MFW on ASIC; regardless, this should only
-		 * be done once per chip [i.e., common]. Implementation is
-		 * not too bright, but it should work on the simple FPGA/EMUL
-		 * scenarios.
-		 */
-		static bool initialized;
-		int poll_cnt = 500;
-		u32 val;
-
-		/* initialize PTT/GTT (poll for completion) */
-		if (!initialized) {
-			ecore_wr(p_hwfn, p_ptt,
-				 PGLUE_B_REG_START_INIT_PTT_GTT, 1);
-			initialized = true;
-		}
-
-		do {
-			/* ptt might be overrided by HW until this is done */
-			OSAL_UDELAY(10);
-			ecore_ptt_invalidate(p_hwfn);
-			val = ecore_rd(p_hwfn, p_ptt,
-				       PGLUE_B_REG_INIT_DONE_PTT_GTT);
-		} while ((val != 1) && --poll_cnt);
-
-		if (!poll_cnt)
-			DP_ERR(p_hwfn,
-			       "PGLUE_B_REG_INIT_DONE didn't complete\n");
-	}
-#endif
-
-	/* Set the global windows */
-	gtt_base = PXP_PF_WINDOW_ADMIN_START + PXP_PF_WINDOW_ADMIN_GLOBAL_START;
-
-	for (i = 0; i < OSAL_ARRAY_SIZE(pxp_global_win); i++)
-		if (pxp_global_win[i])
-			REG_WR(p_hwfn, gtt_base + i * PXP_GLOBAL_ENTRY_SIZE,
-			       pxp_global_win[i]);
-}
-
 enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
 #ifdef CONFIG_ECORE_BINARY_FW
 					const u8 *fw_data)
diff --git a/drivers/net/qede/base/ecore_init_ops.h b/drivers/net/qede/base/ecore_init_ops.h
index de7846d46..21e433309 100644
--- a/drivers/net/qede/base/ecore_init_ops.h
+++ b/drivers/net/qede/base/ecore_init_ops.h
@@ -97,14 +97,4 @@ void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
 #define STORE_RT_REG_AGG(hwfn, offset, val)			\
 	ecore_init_store_rt_agg(hwfn, offset, (u32 *)&val, sizeof(val))
 
-
-/**
- * @brief
- *      Initialize GTT global windows and set admin window
- *      related params of GTT/PTT to default values.
- *
- * @param p_hwfn
- */
-void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt);
 #endif /* __ECORE_INIT_OPS__ */
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index 12d45c1c5..b146faff9 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -35,207 +35,239 @@
 #define USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id) (IRO[7].base + \
 	((queue_zone_id) * IRO[7].m1))
 #define USTORM_COMMON_QUEUE_CONS_SIZE (IRO[7].size)
+/* Xstorm common PQ info */
+#define XSTORM_PQ_INFO_OFFSET(pq_id) (IRO[8].base + ((pq_id) * IRO[8].m1))
+#define XSTORM_PQ_INFO_SIZE (IRO[8].size)
 /* Xstorm Integration Test Data */
-#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[8].base)
-#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[8].size)
+#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base)
+#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size)
 /* Ystorm Integration Test Data */
-#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base)
-#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size)
+#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base)
+#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size)
 /* Pstorm Integration Test Data */
-#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base)
-#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size)
+#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base)
+#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size)
 /* Tstorm Integration Test Data */
-#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base)
-#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size)
+#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base)
+#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size)
 /* Mstorm Integration Test Data */
-#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base)
-#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size)
+#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[13].base)
+#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[13].size)
 /* Ustorm Integration Test Data */
-#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[13].base)
-#define USTORM_INTEG_TEST_DATA_SIZE (IRO[13].size)
+#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[14].base)
+#define USTORM_INTEG_TEST_DATA_SIZE (IRO[14].size)
+/* Xstorm overlay buffer host address */
+#define XSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[15].base)
+#define XSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[15].size)
+/* Ystorm overlay buffer host address */
+#define YSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[16].base)
+#define YSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[16].size)
+/* Pstorm overlay buffer host address */
+#define PSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[17].base)
+#define PSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[17].size)
+/* Tstorm overlay buffer host address */
+#define TSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[18].base)
+#define TSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[18].size)
+/* Mstorm overlay buffer host address */
+#define MSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[19].base)
+#define MSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[19].size)
+/* Ustorm overlay buffer host address */
+#define USTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[20].base)
+#define USTORM_OVERLAY_BUF_ADDR_SIZE (IRO[20].size)
 /* Tstorm producers */
-#define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) (IRO[14].base + \
-	((core_rx_queue_id) * IRO[14].m1))
-#define TSTORM_LL2_RX_PRODS_SIZE (IRO[14].size)
+#define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) (IRO[21].base + \
+	((core_rx_queue_id) * IRO[21].m1))
+#define TSTORM_LL2_RX_PRODS_SIZE (IRO[21].size)
 /* Tstorm LightL2 queue statistics */
 #define CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \
-	(IRO[15].base + ((core_rx_queue_id) * IRO[15].m1))
-#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[15].size)
+	(IRO[22].base + ((core_rx_queue_id) * IRO[22].m1))
+#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[22].size)
 /* Ustorm LiteL2 queue statistics */
 #define CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \
-	(IRO[16].base + ((core_rx_queue_id) * IRO[16].m1))
-#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[16].size)
+	(IRO[23].base + ((core_rx_queue_id) * IRO[23].m1))
+#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[23].size)
 /* Pstorm LiteL2 queue statistics */
 #define CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) \
-	(IRO[17].base + ((core_tx_stats_id) * IRO[17].m1))
-#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[17].size)
+	(IRO[24].base + ((core_tx_stats_id) * IRO[24].m1))
+#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[24].size)
 /* Mstorm queue statistics */
-#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[18].base + \
-	((stat_counter_id) * IRO[18].m1))
-#define MSTORM_QUEUE_STAT_SIZE (IRO[18].size)
-/* Mstorm ETH PF queues producers */
-#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) (IRO[19].base + \
-	((queue_id) * IRO[19].m1))
-#define MSTORM_ETH_PF_PRODS_SIZE (IRO[19].size)
+#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[25].base + \
+	((stat_counter_id) * IRO[25].m1))
+#define MSTORM_QUEUE_STAT_SIZE (IRO[25].size)
+/* TPA aggregation timeout in us resolution (on ASIC) */
+#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[26].base)
+#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[26].size)
 /* Mstorm ETH VF queues producers offset in RAM. Used in default VF zone size
  * mode.
  */
-#define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id) (IRO[20].base + \
-	((vf_id) * IRO[20].m1) + ((vf_queue_id) * IRO[20].m2))
-#define MSTORM_ETH_VF_PRODS_SIZE (IRO[20].size)
-/* TPA agregation timeout in us resolution (on ASIC) */
-#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[21].base)
-#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[21].size)
+#define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id) (IRO[27].base + \
+	((vf_id) * IRO[27].m1) + ((vf_queue_id) * IRO[27].m2))
+#define MSTORM_ETH_VF_PRODS_SIZE (IRO[27].size)
+/* Mstorm ETH PF queues producers */
+#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) (IRO[28].base + \
+	((queue_id) * IRO[28].m1))
+#define MSTORM_ETH_PF_PRODS_SIZE (IRO[28].size)
 /* Mstorm pf statistics */
-#define MSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[22].base + ((pf_id) * IRO[22].m1))
-#define MSTORM_ETH_PF_STAT_SIZE (IRO[22].size)
+#define MSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[29].base + ((pf_id) * IRO[29].m1))
+#define MSTORM_ETH_PF_STAT_SIZE (IRO[29].size)
 /* Ustorm queue statistics */
-#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[23].base + \
-	((stat_counter_id) * IRO[23].m1))
-#define USTORM_QUEUE_STAT_SIZE (IRO[23].size)
+#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[30].base + \
+	((stat_counter_id) * IRO[30].m1))
+#define USTORM_QUEUE_STAT_SIZE (IRO[30].size)
 /* Ustorm pf statistics */
-#define USTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[24].base + ((pf_id) * IRO[24].m1))
-#define USTORM_ETH_PF_STAT_SIZE (IRO[24].size)
+#define USTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[31].base + ((pf_id) * IRO[31].m1))
+#define USTORM_ETH_PF_STAT_SIZE (IRO[31].size)
 /* Pstorm queue statistics */
-#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[25].base + \
-	((stat_counter_id) * IRO[25].m1))
-#define PSTORM_QUEUE_STAT_SIZE (IRO[25].size)
+#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[32].base + \
+	((stat_counter_id) * IRO[32].m1))
+#define PSTORM_QUEUE_STAT_SIZE (IRO[32].size)
 /* Pstorm pf statistics */
-#define PSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[26].base + ((pf_id) * IRO[26].m1))
-#define PSTORM_ETH_PF_STAT_SIZE (IRO[26].size)
+#define PSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[33].base + ((pf_id) * IRO[33].m1))
+#define PSTORM_ETH_PF_STAT_SIZE (IRO[33].size)
 /* Control frame's EthType configuration for TX control frame security */
-#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) (IRO[27].base + \
-	((ethType_id) * IRO[27].m1))
-#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[27].size)
+#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) (IRO[34].base + \
+	((ethType_id) * IRO[34].m1))
+#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[34].size)
 /* Tstorm last parser message */
-#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[28].base)
-#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[28].size)
+#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[35].base)
+#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[35].size)
 /* Tstorm Eth limit Rx rate */
-#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) (IRO[29].base + ((pf_id) * IRO[29].m1))
-#define ETH_RX_RATE_LIMIT_SIZE (IRO[29].size)
+#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) (IRO[36].base + ((pf_id) * IRO[36].m1))
+#define ETH_RX_RATE_LIMIT_SIZE (IRO[36].size)
 /* RSS indirection table entry update command per PF offset in TSTORM PF BAR0.
  * Use eth_tstorm_rss_update_data for update.
  */
-#define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) (IRO[30].base + \
-	((pf_id) * IRO[30].m1))
-#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[30].size)
+#define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) (IRO[37].base + \
+	((pf_id) * IRO[37].m1))
+#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[37].size)
 /* Xstorm queue zone */
-#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) (IRO[31].base + \
-	((queue_id) * IRO[31].m1))
-#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[31].size)
+#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) (IRO[38].base + \
+	((queue_id) * IRO[38].m1))
+#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[38].size)
 /* Ystorm cqe producer */
-#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[32].base + \
-	((rss_id) * IRO[32].m1))
-#define YSTORM_TOE_CQ_PROD_SIZE (IRO[32].size)
+#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[39].base + \
+	((rss_id) * IRO[39].m1))
+#define YSTORM_TOE_CQ_PROD_SIZE (IRO[39].size)
 /* Ustorm cqe producer */
-#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[33].base + \
-	((rss_id) * IRO[33].m1))
-#define USTORM_TOE_CQ_PROD_SIZE (IRO[33].size)
+#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[40].base + \
+	((rss_id) * IRO[40].m1))
+#define USTORM_TOE_CQ_PROD_SIZE (IRO[40].size)
 /* Ustorm grq producer */
-#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) (IRO[34].base + \
-	((pf_id) * IRO[34].m1))
-#define USTORM_TOE_GRQ_PROD_SIZE (IRO[34].size)
+#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) (IRO[41].base + \
+	((pf_id) * IRO[41].m1))
+#define USTORM_TOE_GRQ_PROD_SIZE (IRO[41].size)
 /* Tstorm cmdq-cons of given command queue-id */
-#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) (IRO[35].base + \
-	((cmdq_queue_id) * IRO[35].m1))
-#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[35].size)
+#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) (IRO[42].base + \
+	((cmdq_queue_id) * IRO[42].m1))
+#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[42].size)
 /* Tstorm (reflects M-Storm) bdq-external-producer of given function ID,
  * BDqueue-id
  */
-#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[36].base + \
-	((func_id) * IRO[36].m1) + ((bdq_id) * IRO[36].m2))
-#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[36].size)
+#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \
+	(IRO[43].base + ((storage_func_id) * IRO[43].m1) + \
+	((bdq_id) * IRO[43].m2))
+#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[43].size)
 /* Mstorm bdq-external-producer of given BDQ resource ID, BDqueue-id */
-#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[37].base + \
-	((func_id) * IRO[37].m1) + ((bdq_id) * IRO[37].m2))
-#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[37].size)
+#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \
+	(IRO[44].base + ((storage_func_id) * IRO[44].m1) + \
+	((bdq_id) * IRO[44].m2))
+#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[44].size)
 /* Tstorm iSCSI RX stats */
-#define TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[38].base + \
-	((pf_id) * IRO[38].m1))
-#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[38].size)
+#define TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[45].base + \
+	((storage_func_id) * IRO[45].m1))
+#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[45].size)
 /* Mstorm iSCSI RX stats */
-#define MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[39].base + \
-	((pf_id) * IRO[39].m1))
-#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[39].size)
+#define MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[46].base + \
+	((storage_func_id) * IRO[46].m1))
+#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[46].size)
 /* Ustorm iSCSI RX stats */
-#define USTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[40].base + \
-	((pf_id) * IRO[40].m1))
-#define USTORM_ISCSI_RX_STATS_SIZE (IRO[40].size)
+#define USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[47].base + \
+	((storage_func_id) * IRO[47].m1))
+#define USTORM_ISCSI_RX_STATS_SIZE (IRO[47].size)
 /* Xstorm iSCSI TX stats */
-#define XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[41].base + \
-	((pf_id) * IRO[41].m1))
-#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[41].size)
+#define XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[48].base + \
+	((storage_func_id) * IRO[48].m1))
+#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[48].size)
 /* Ystorm iSCSI TX stats */
-#define YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[42].base + \
-	((pf_id) * IRO[42].m1))
-#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[42].size)
+#define YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[49].base + \
+	((storage_func_id) * IRO[49].m1))
+#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[49].size)
 /* Pstorm iSCSI TX stats */
-#define PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[43].base + \
-	((pf_id) * IRO[43].m1))
-#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[43].size)
+#define PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[50].base + \
+	((storage_func_id) * IRO[50].m1))
+#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[50].size)
 /* Tstorm FCoE RX stats */
-#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) (IRO[44].base + \
-	((pf_id) * IRO[44].m1))
-#define TSTORM_FCOE_RX_STATS_SIZE (IRO[44].size)
+#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) (IRO[51].base + \
+	((pf_id) * IRO[51].m1))
+#define TSTORM_FCOE_RX_STATS_SIZE (IRO[51].size)
 /* Pstorm FCoE TX stats */
-#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) (IRO[45].base + \
-	((pf_id) * IRO[45].m1))
-#define PSTORM_FCOE_TX_STATS_SIZE (IRO[45].size)
+#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) (IRO[52].base + \
+	((pf_id) * IRO[52].m1))
+#define PSTORM_FCOE_TX_STATS_SIZE (IRO[52].size)
 /* Pstorm RDMA queue statistics */
-#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
-	((rdma_stat_counter_id) * IRO[46].m1))
-#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
+#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[53].base + \
+	((rdma_stat_counter_id) * IRO[53].m1))
+#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[53].size)
 /* Tstorm RDMA queue statistics */
-#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[47].base + \
-	((rdma_stat_counter_id) * IRO[47].m1))
-#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[47].size)
+#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[54].base + \
+	((rdma_stat_counter_id) * IRO[54].m1))
+#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[54].size)
 /* Xstorm error level for assert */
-#define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[48].base + \
-	((pf_id) * IRO[48].m1))
-#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[48].size)
+#define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[55].base + \
+	((pf_id) * IRO[55].m1))
+#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[55].size)
 /* Ystorm error level for assert */
-#define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[49].base + \
-	((pf_id) * IRO[49].m1))
-#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[49].size)
+#define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[56].base + \
+	((pf_id) * IRO[56].m1))
+#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[56].size)
 /* Pstorm error level for assert */
-#define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[50].base + \
-	((pf_id) * IRO[50].m1))
-#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[50].size)
+#define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[57].base + \
+	((pf_id) * IRO[57].m1))
+#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[57].size)
 /* Tstorm error level for assert */
-#define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[51].base + \
-	((pf_id) * IRO[51].m1))
-#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[51].size)
+#define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[58].base + \
+	((pf_id) * IRO[58].m1))
+#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[58].size)
 /* Mstorm error level for assert */
-#define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[52].base + \
-	((pf_id) * IRO[52].m1))
-#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[52].size)
+#define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[59].base + \
+	((pf_id) * IRO[59].m1))
+#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[59].size)
 /* Ustorm error level for assert */
-#define USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[53].base + \
-	((pf_id) * IRO[53].m1))
-#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[53].size)
+#define USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[60].base + \
+	((pf_id) * IRO[60].m1))
+#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[60].size)
 /* Xstorm iWARP rxmit stats */
-#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[54].base + \
-	((pf_id) * IRO[54].m1))
-#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[54].size)
+#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[61].base + \
+	((pf_id) * IRO[61].m1))
+#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[61].size)
 /* Tstorm RoCE Event Statistics */
-#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[55].base + \
-	((roce_pf_id) * IRO[55].m1))
-#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[55].size)
+#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[62].base + \
+	((roce_pf_id) * IRO[62].m1))
+#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[62].size)
 /* DCQCN Received Statistics */
-#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) (IRO[56].base + \
-	((roce_pf_id) * IRO[56].m1))
-#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[56].size)
+#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) (IRO[63].base + \
+	((roce_pf_id) * IRO[63].m1))
+#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[63].size)
 /* RoCE Error Statistics */
-#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) (IRO[57].base + \
-	((roce_pf_id) * IRO[57].m1))
-#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[57].size)
+#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) (IRO[64].base + \
+	((roce_pf_id) * IRO[64].m1))
+#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[64].size)
 /* DCQCN Sent Statistics */
-#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) (IRO[58].base + \
-	((roce_pf_id) * IRO[58].m1))
-#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[58].size)
+#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) (IRO[65].base + \
+	((roce_pf_id) * IRO[65].m1))
+#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[65].size)
 /* RoCE CQEs Statistics */
-#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) (IRO[59].base + \
-	((roce_pf_id) * IRO[59].m1))
-#define USTORM_ROCE_CQE_STATS_SIZE (IRO[59].size)
+#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) (IRO[66].base + \
+	((roce_pf_id) * IRO[66].m1))
+#define USTORM_ROCE_CQE_STATS_SIZE (IRO[66].size)
+/* Tstorm NVMf per port per producer consumer data */
+#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id, \
+	taskpool_index) (IRO[67].base + ((port_num_id) * IRO[67].m1) + \
+	((taskpool_index) * IRO[67].m2))
+#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_SIZE (IRO[67].size)
+/* Ustorm NVMf per port counters */
+#define USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id) (IRO[68].base + \
+	((port_num_id) * IRO[68].m1))
+#define USTORM_NVMF_PORT_COUNTERS_SIZE (IRO[68].size)
 
-#endif /* __IRO_H__ */
+#endif
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index 30e632ce1..6442057ac 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -7,127 +7,221 @@
 #ifndef __IRO_VALUES_H__
 #define __IRO_VALUES_H__
 
-static const struct iro iro_arr[60] = {
-/* YSTORM_FLOW_CONTROL_MODE_OFFSET */
-	{      0x0,      0x0,      0x0,      0x0,      0x8},
-/* TSTORM_PORT_STAT_OFFSET(port_id) */
-	{   0x4cb8,     0x88,      0x0,      0x0,     0x88},
-/* TSTORM_LL2_PORT_STAT_OFFSET(port_id) */
-	{   0x6530,     0x20,      0x0,      0x0,     0x20},
-/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) */
-	{    0xb00,      0x8,      0x0,      0x0,      0x4},
-/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id) */
-	{    0xa80,      0x8,      0x0,      0x0,      0x4},
-/* USTORM_EQE_CONS_OFFSET(pf_id) */
-	{      0x0,      0x8,      0x0,      0x0,      0x2},
-/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id) */
-	{     0x80,      0x8,      0x0,      0x0,      0x4},
-/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id) */
-	{     0x84,      0x8,      0x0,      0x0,      0x2},
-/* XSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x4c48,      0x0,      0x0,      0x0,     0x78},
-/* YSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x3e38,      0x0,      0x0,      0x0,     0x78},
-/* PSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x3ef8,      0x0,      0x0,      0x0,     0x78},
-/* TSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x4c40,      0x0,      0x0,      0x0,     0x78},
-/* MSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x4998,      0x0,      0x0,      0x0,     0x78},
-/* USTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x7f50,      0x0,      0x0,      0x0,     0x78},
-/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) */
-	{    0xa28,      0x8,      0x0,      0x0,      0x8},
-/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
-	{   0x6210,     0x10,      0x0,      0x0,     0x10},
-/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
-	{   0xb820,     0x30,      0x0,      0x0,     0x30},
-/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */
-	{   0xa990,     0x30,      0x0,      0x0,     0x30},
-/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
-	{   0x4b68,     0x80,      0x0,      0x0,     0x40},
-/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id) */
-	{    0x1f8,      0x4,      0x0,      0x0,      0x4},
-/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id) */
-	{   0x53a8,     0x80,      0x4,      0x0,      0x4},
-/* MSTORM_TPA_TIMEOUT_US_OFFSET */
-	{   0xc7d0,      0x0,      0x0,      0x0,      0x4},
-/* MSTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0x4ba8,     0x80,      0x0,      0x0,     0x20},
-/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
-	{   0x8158,     0x40,      0x0,      0x0,     0x30},
-/* USTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0xe770,     0x60,      0x0,      0x0,     0x60},
-/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
-	{   0x4090,     0x80,      0x0,      0x0,     0x38},
-/* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0xfea8,     0x78,      0x0,      0x0,     0x78},
-/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */
-	{    0x1f8,      0x4,      0x0,      0x0,      0x4},
-/* TSTORM_ETH_PRS_INPUT_OFFSET */
-	{   0xaf20,      0x0,      0x0,      0x0,     0xf0},
-/* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */
-	{   0xb010,      0x8,      0x0,      0x0,      0x8},
-/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) */
-	{    0xc00,      0x8,      0x0,      0x0,      0x8},
-/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */
-	{    0x1f8,      0x8,      0x0,      0x0,      0x8},
-/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */
-	{    0xac0,      0x8,      0x0,      0x0,      0x8},
-/* USTORM_TOE_CQ_PROD_OFFSET(rss_id) */
-	{   0x2578,      0x8,      0x0,      0x0,      0x8},
-/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id) */
-	{   0x24f8,      0x8,      0x0,      0x0,      0x8},
-/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) */
-	{      0x0,      0x8,      0x0,      0x0,      0x8},
-/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
-	{    0x400,     0x18,      0x8,      0x0,      0x8},
-/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
-	{    0xb78,     0x18,      0x8,      0x0,      0x2},
-/* TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{   0xd898,     0x50,      0x0,      0x0,     0x3c},
-/* MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{  0x12908,     0x18,      0x0,      0x0,     0x10},
-/* USTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{  0x11aa8,     0x40,      0x0,      0x0,     0x18},
-/* XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
-	{   0xa588,     0x50,      0x0,      0x0,     0x20},
-/* YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
-	{   0x8f00,     0x40,      0x0,      0x0,     0x28},
-/* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
-	{  0x10e30,     0x18,      0x0,      0x0,     0x10},
-/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */
-	{   0xde48,     0x48,      0x0,      0x0,     0x38},
-/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */
-	{  0x11298,     0x20,      0x0,      0x0,     0x20},
-/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
-	{   0x40c8,     0x80,      0x0,      0x0,     0x10},
-/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
-	{   0x5048,     0x10,      0x0,      0x0,     0x10},
-/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{   0xa928,      0x8,      0x0,      0x0,      0x1},
-/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{   0xa128,      0x8,      0x0,      0x0,      0x1},
-/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{  0x11a30,      0x8,      0x0,      0x0,      0x1},
-/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{   0xf030,      0x8,      0x0,      0x0,      0x1},
-/* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{  0x13028,      0x8,      0x0,      0x0,      0x1},
-/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{  0x12c58,      0x8,      0x0,      0x0,      0x1},
-/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */
-	{   0xc9b8,     0x30,      0x0,      0x0,     0x10},
-/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */
-	{   0xed90,     0x28,      0x0,      0x0,     0x28},
-/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) */
-	{   0xad20,     0x18,      0x0,      0x0,     0x18},
-/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) */
-	{   0xaea0,      0x8,      0x0,      0x0,      0x8},
-/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) */
-	{  0x13c38,      0x8,      0x0,      0x0,      0x8},
-/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) */
-	{  0x13c50,     0x18,      0x0,      0x0,     0x18},
+/* Per-chip offsets in iro_arr in dwords */
+#define E4_IRO_ARR_OFFSET 0
+
+/* IRO Array */
+static const u32 iro_arr[] = {
+	/* E4 */
+	/* YSTORM_FLOW_CONTROL_MODE_OFFSET */
+	/* offset=0x0, size=0x8 */
+	0x00000000, 0x00000000, 0x00080000,
+	/* TSTORM_PORT_STAT_OFFSET(port_id), */
+	/* offset=0x3908, mult1=0x88, size=0x88 */
+	0x00003908, 0x00000088, 0x00880000,
+	/* TSTORM_LL2_PORT_STAT_OFFSET(port_id), */
+	/* offset=0x58f0, mult1=0x20, size=0x20 */
+	0x000058f0, 0x00000020, 0x00200000,
+	/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id), */
+	/* offset=0xb00, mult1=0x8, size=0x4 */
+	0x00000b00, 0x00000008, 0x00040000,
+	/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id), */
+	/* offset=0xa80, mult1=0x8, size=0x4 */
+	0x00000a80, 0x00000008, 0x00040000,
+	/* USTORM_EQE_CONS_OFFSET(pf_id), */
+	/* offset=0x0, mult1=0x8, size=0x2 */
+	0x00000000, 0x00000008, 0x00020000,
+	/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id), */
+	/* offset=0x80, mult1=0x8, size=0x4 */
+	0x00000080, 0x00000008, 0x00040000,
+	/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id), */
+	/* offset=0x84, mult1=0x8, size=0x2 */
+	0x00000084, 0x00000008, 0x00020000,
+	/* XSTORM_PQ_INFO_OFFSET(pq_id), */
+	/* offset=0x5618, mult1=0x4, size=0x4 */
+	0x00005618, 0x00000004, 0x00040000,
+	/* XSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x4cd0, size=0x78 */
+	0x00004cd0, 0x00000000, 0x00780000,
+	/* YSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x3e40, size=0x78 */
+	0x00003e40, 0x00000000, 0x00780000,
+	/* PSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x3e00, size=0x78 */
+	0x00003e00, 0x00000000, 0x00780000,
+	/* TSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x3890, size=0x78 */
+	0x00003890, 0x00000000, 0x00780000,
+	/* MSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x3b50, size=0x78 */
+	0x00003b50, 0x00000000, 0x00780000,
+	/* USTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x7f58, size=0x78 */
+	0x00007f58, 0x00000000, 0x00780000,
+	/* XSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0x5e58, size=0x8 */
+	0x00005e58, 0x00000000, 0x00080000,
+	/* YSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0x7100, size=0x8 */
+	0x00007100, 0x00000000, 0x00080000,
+	/* PSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0xa820, size=0x8 */
+	0x0000a820, 0x00000000, 0x00080000,
+	/* TSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0x4a18, size=0x8 */
+	0x00004a18, 0x00000000, 0x00080000,
+	/* MSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0xa5a0, size=0x8 */
+	0x0000a5a0, 0x00000000, 0x00080000,
+	/* USTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0xbde8, size=0x8 */
+	0x0000bde8, 0x00000000, 0x00080000,
+	/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id), */
+	/* offset=0x20, mult1=0x4, size=0x4 */
+	0x00000020, 0x00000004, 0x00040000,
+	/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), */
+	/* offset=0x56d0, mult1=0x10, size=0x10 */
+	0x000056d0, 0x00000010, 0x00100000,
+	/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), */
+	/* offset=0xc210, mult1=0x30, size=0x30 */
+	0x0000c210, 0x00000030, 0x00300000,
+	/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id), */
+	/* offset=0xaa08, mult1=0x38, size=0x38 */
+	0x0000aa08, 0x00000038, 0x00380000,
+	/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id), */
+	/* offset=0x3d20, mult1=0x80, size=0x40 */
+	0x00003d20, 0x00000080, 0x00400000,
+	/* MSTORM_TPA_TIMEOUT_US_OFFSET */
+	/* offset=0xbf60, size=0x4 */
+	0x0000bf60, 0x00000000, 0x00040000,
+	/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id), */
+	/* offset=0x4560, mult1=0x80, mult2=0x4, size=0x4 */
+	0x00004560, 0x00040080, 0x00040000,
+	/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id), */
+	/* offset=0x1f8, mult1=0x4, size=0x4 */
+	0x000001f8, 0x00000004, 0x00040000,
+	/* MSTORM_ETH_PF_STAT_OFFSET(pf_id), */
+	/* offset=0x3d60, mult1=0x80, size=0x20 */
+	0x00003d60, 0x00000080, 0x00200000,
+	/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id), */
+	/* offset=0x8960, mult1=0x40, size=0x30 */
+	0x00008960, 0x00000040, 0x00300000,
+	/* USTORM_ETH_PF_STAT_OFFSET(pf_id), */
+	/* offset=0xe840, mult1=0x60, size=0x60 */
+	0x0000e840, 0x00000060, 0x00600000,
+	/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id), */
+	/* offset=0x3f98, mult1=0x80, size=0x38 */
+	0x00003f98, 0x00000080, 0x00380000,
+	/* PSTORM_ETH_PF_STAT_OFFSET(pf_id), */
+	/* offset=0x100b8, mult1=0xc0, size=0xc0 */
+	0x000100b8, 0x000000c0, 0x00c00000,
+	/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id), */
+	/* offset=0x1f8, mult1=0x2, size=0x2 */
+	0x000001f8, 0x00000002, 0x00020000,
+	/* TSTORM_ETH_PRS_INPUT_OFFSET */
+	/* offset=0xa2a0, size=0x108 */
+	0x0000a2a0, 0x00000000, 0x01080000,
+	/* ETH_RX_RATE_LIMIT_OFFSET(pf_id), */
+	/* offset=0xa3a8, mult1=0x8, size=0x8 */
+	0x0000a3a8, 0x00000008, 0x00080000,
+	/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id), */
+	/* offset=0x1c0, mult1=0x8, size=0x8 */
+	0x000001c0, 0x00000008, 0x00080000,
+	/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id), */
+	/* offset=0x1f8, mult1=0x8, size=0x8 */
+	0x000001f8, 0x00000008, 0x00080000,
+	/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id), */
+	/* offset=0xac0, mult1=0x8, size=0x8 */
+	0x00000ac0, 0x00000008, 0x00080000,
+	/* USTORM_TOE_CQ_PROD_OFFSET(rss_id), */
+	/* offset=0x2578, mult1=0x8, size=0x8 */
+	0x00002578, 0x00000008, 0x00080000,
+	/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id), */
+	/* offset=0x24f8, mult1=0x8, size=0x8 */
+	0x000024f8, 0x00000008, 0x00080000,
+	/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id), */
+	/* offset=0x280, mult1=0x8, size=0x8 */
+	0x00000280, 0x00000008, 0x00080000,
+	/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), */
+	/* offset=0x680, mult1=0x18, mult2=0x8, size=0x8 */
+	0x00000680, 0x00080018, 0x00080000,
+	/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), */
+	/* offset=0xb78, mult1=0x18, mult2=0x8, size=0x2 */
+	0x00000b78, 0x00080018, 0x00020000,
+	/* TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */
+	/* offset=0xc640, mult1=0x50, size=0x3c */
+	0x0000c640, 0x00000050, 0x003c0000,
+	/* MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x12038, mult1=0x18, size=0x10 */
+	0x00012038, 0x00000018, 0x00100000,
+	/* USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x11b00, mult1=0x40, size=0x18 */
+	0x00011b00, 0x00000040, 0x00180000,
+	/* XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x94d0, mult1=0x50, size=0x20 */
+	0x000094d0, 0x00000050, 0x00200000,
+	/* YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x8b10, mult1=0x40, size=0x28 */
+	0x00008b10, 0x00000040, 0x00280000,
+	/* PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x10fc0, mult1=0x18, size=0x10 */
+	0x00010fc0, 0x00000018, 0x00100000,
+	/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id), */
+	/* offset=0xc828, mult1=0x48, size=0x38 */
+	0x0000c828, 0x00000048, 0x00380000,
+	/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id), */
+	/* offset=0x11090, mult1=0x20, size=0x20 */
+	0x00011090, 0x00000020, 0x00200000,
+	/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), */
+	/* offset=0x3fd0, mult1=0x80, size=0x10 */
+	0x00003fd0, 0x00000080, 0x00100000,
+	/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), */
+	/* offset=0x3c98, mult1=0x10, size=0x10 */
+	0x00003c98, 0x00000010, 0x00100000,
+	/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0xa868, mult1=0x8, size=0x1 */
+	0x0000a868, 0x00000008, 0x00010000,
+	/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0x97a0, mult1=0x8, size=0x1 */
+	0x000097a0, 0x00000008, 0x00010000,
+	/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0x11310, mult1=0x8, size=0x1 */
+	0x00011310, 0x00000008, 0x00010000,
+	/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0xf018, mult1=0x8, size=0x1 */
+	0x0000f018, 0x00000008, 0x00010000,
+	/* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0x12628, mult1=0x8, size=0x1 */
+	0x00012628, 0x00000008, 0x00010000,
+	/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0x11da8, mult1=0x8, size=0x1 */
+	0x00011da8, 0x00000008, 0x00010000,
+	/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id), */
+	/* offset=0xa978, mult1=0x30, size=0x10 */
+	0x0000a978, 0x00000030, 0x00100000,
+	/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id), */
+	/* offset=0xd768, mult1=0x28, size=0x28 */
+	0x0000d768, 0x00000028, 0x00280000,
+	/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id), */
+	/* offset=0x9a58, mult1=0x18, size=0x18 */
+	0x00009a58, 0x00000018, 0x00180000,
+	/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id), */
+	/* offset=0x9bd8, mult1=0x8, size=0x8 */
+	0x00009bd8, 0x00000008, 0x00080000,
+	/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id), */
+	/* offset=0x13398, mult1=0x8, size=0x8 */
+	0x00013398, 0x00000008, 0x00080000,
+	/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id), */
+	/* offset=0x126e8, mult1=0x18, size=0x18 */
+	0x000126e8, 0x00000018, 0x00180000,
+	/* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index), */
+	/* offset=0xe608, mult1=0x288, mult2=0x50, size=0x10 */
+	0x0000e608, 0x00500288, 0x00100000,
+	/* USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id), */
+	/* offset=0x12970, mult1=0x138, size=0x28 */
+	0x00012970, 0x00000138, 0x00280000,
 };
+/* Data size: 828 bytes */
+
 
 #endif /* __IRO_VALUES_H__ */
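
For readers comparing the old struct-based iro_arr with the new packed layout, here is a
minimal decode sketch. It assumes the packing documented by the comments above (dword 0 =
offset, dword 1 = mult2 in the upper 16 bits and mult1 in the lower 16, dword 2 = size in
the upper 16 bits); the helper and struct names are invented for this mail, not part of
the patch. For example, the MSTORM_ETH_VF_PRODS entry 0x00004560, 0x00040080, 0x00040000
decodes to offset 0x4560, mult1 0x80, mult2 0x4, size 0x4, matching its comment.

#include <stdint.h>

/* Hypothetical helper: unpack one 3-dword iro_arr entry into the legacy
 * (base, m1, m2, size) form used by the ecore_iro.h macros.
 */
struct iro_entry {
	uint32_t base;   /* RAM offset */
	uint16_t m1;     /* first multiplier */
	uint16_t m2;     /* second multiplier */
	uint16_t size;   /* element size */
};

static struct iro_entry iro_unpack(const uint32_t *arr, unsigned int idx)
{
	const uint32_t *e = &arr[idx * 3];
	struct iro_entry out;

	out.base = e[0];            /* e.g. 0x00004560 -> offset 0x4560 */
	out.m1   = e[1] & 0xffff;   /* low 16 bits of dword 1: mult1 */
	out.m2   = e[1] >> 16;      /* high 16 bits of dword 1: mult2 */
	out.size = e[2] >> 16;      /* high 16 bits of dword 2: size */
	return out;
}
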
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 23336c282..6559d8040 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -24,6 +24,7 @@
 
 #define CHIP_MCP_RESP_ITER_US 10
 #define EMUL_MCP_RESP_ITER_US (1000 * 1000)
+#define GRCBASE_MCP	0xe00000
 
 #define ECORE_DRV_MB_MAX_RETRIES (500 * 1000)	/* Account for 5 sec */
 #define ECORE_MCP_RESET_RETRIES (50 * 1000)	/* Account for 500 msec */
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index 9a401ed4a..4611d86d9 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -16,7 +16,7 @@
 /* ETH FP HSI Major version */
 #define ETH_HSI_VER_MAJOR                   3
 /* ETH FP HSI Minor version */
-#define ETH_HSI_VER_MINOR                   10
+#define ETH_HSI_VER_MINOR                   11
 
 /* Alias for 8.7.x.x/8.8.x.x ETH FP HSI MINOR version. In this version driver
  * is not required to set pkt_len field in eth_tx_1st_bd struct, and tunneling
@@ -24,6 +24,9 @@
  */
 #define ETH_HSI_VER_NO_PKT_LEN_TUNN         5
 
+/* Maximum number of pinned L2 connections (CIDs) */
+#define ETH_PINNED_CONN_MAX_NUM             32
+
 #define ETH_CACHE_LINE_SIZE                 64
 #define ETH_RX_CQE_GAP                      32
 #define ETH_MAX_RAMROD_PER_CON              8
@@ -48,6 +51,7 @@
 #define ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT   3
 #define ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT        2
 #define ETH_TX_MIN_BDS_PER_PKT_W_LOOPBACK_MODE      2
+#define ETH_TX_MIN_BDS_PER_PKT_W_VPORT_FORWARDING   4
 /* (QM_REG_TASKBYTECRDCOST_0, QM_VOQ_BYTE_CRD_TASK_COST) -
  * (VLAN-TAG + CRC + IPG + PREAMBLE)
  */
@@ -80,7 +84,7 @@
 /* Minimum number of free BDs in RX ring, that guarantee receiving of at least
  * one RX packet.
  */
-#define ETH_RX_BD_THRESHOLD                12
+#define ETH_RX_BD_THRESHOLD                16
 
 /* num of MAC/VLAN filters */
 #define ETH_NUM_MAC_FILTERS                 512
@@ -98,20 +102,20 @@
 #define ETH_RSS_IND_TABLE_ENTRIES_NUM       128
 /* Length of RSS key (in regs) */
 #define ETH_RSS_KEY_SIZE_REGS               10
-/* number of available RSS engines in K2 */
+/* number of available RSS engines in AH */
 #define ETH_RSS_ENGINE_NUM_K2               207
 /* number of available RSS engines in BB */
 #define ETH_RSS_ENGINE_NUM_BB               127
 
 /* TPA constants */
 /* Maximum number of open TPA aggregations */
-#define ETH_TPA_MAX_AGGS_NUM              64
-/* Maximum number of additional buffers, reported by TPA-start CQE */
-#define ETH_TPA_CQE_START_LEN_LIST_SIZE   ETH_RX_MAX_BUFF_PER_PKT
+#define ETH_TPA_MAX_AGGS_NUM                64
+/* TPA-start CQE additional BD list length. Used for backward compatibility. */
+#define ETH_TPA_CQE_START_BW_LEN_LIST_SIZE  2
 /* Maximum number of buffers, reported by TPA-continue CQE */
-#define ETH_TPA_CQE_CONT_LEN_LIST_SIZE    6
+#define ETH_TPA_CQE_CONT_LEN_LIST_SIZE      6
 /* Maximum number of buffers, reported by TPA-end CQE */
-#define ETH_TPA_CQE_END_LEN_LIST_SIZE     4
+#define ETH_TPA_CQE_END_LEN_LIST_SIZE       4
 
 /* Control frame check constants */
 /* Number of etherType values configured by driver for control frame check */
@@ -125,12 +129,12 @@
 /*
  * Destination port mode
  */
-enum dest_port_mode {
-	DEST_PORT_PHY /* Send to physical port. */,
-	DEST_PORT_LOOPBACK /* Send to loopback port. */,
-	DEST_PORT_PHY_LOOPBACK /* Send to physical and loopback port. */,
-	DEST_PORT_DROP /* Drop the packet in PBF. */,
-	MAX_DEST_PORT_MODE
+enum dst_port_mode {
+	DST_PORT_PHY /* Send to physical port. */,
+	DST_PORT_LOOPBACK /* Send to loopback port. */,
+	DST_PORT_PHY_LOOPBACK /* Send to physical and loopback port. */,
+	DST_PORT_DROP /* Drop the packet in PBF. */,
+	MAX_DST_PORT_MODE
 };
 
 
@@ -353,9 +357,13 @@ struct eth_fast_path_rx_reg_cqe {
 /* Tunnel Parsing Flags */
 	struct eth_tunnel_parsing_flags tunnel_pars_flags;
 	u8 bd_num /* Number of BDs, used for packet */;
-	u8 reserved[9];
-	struct eth_fast_path_cqe_fw_debug fw_debug /* FW reserved. */;
-	u8 reserved1[3];
+	u8 reserved;
+	__le16 reserved2;
+/* aRFS flow ID or Resource ID - Indicates a Vport ID from which packet was
+ * sent, used when sending from VF to VF Representor.
+ */
+	__le32 flow_id_or_resource_id;
+	u8 reserved1[7];
 	struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
 };
 
@@ -422,10 +430,14 @@ struct eth_fast_path_rx_tpa_start_cqe {
 	struct eth_tunnel_parsing_flags tunnel_pars_flags;
 	u8 tpa_agg_index /* TPA aggregation index */;
 	u8 header_len /* Packet L2+L3+L4 header length */;
-/* Additional BDs length list. */
-	__le16 ext_bd_len_list[ETH_TPA_CQE_START_LEN_LIST_SIZE];
-	struct eth_fast_path_cqe_fw_debug fw_debug /* FW reserved. */;
-	u8 reserved;
+/* Additional BDs length list. Used for backward compatibility. */
+	__le16 bw_ext_bd_len_list[ETH_TPA_CQE_START_BW_LEN_LIST_SIZE];
+	__le16 reserved2;
+/* aRFS or GFS flow ID or Resource ID - Indicates a Vport ID from which packet
+ * was sent, used when sending from VF to VF Representor
+ */
+	__le32 flow_id_or_resource_id;
+	u8 reserved[3];
 	struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
 };
 
@@ -602,6 +614,41 @@ struct eth_tx_3rd_bd {
 };
 
 
+/*
+ * The parsing information data for the fourth tx bd of a given packet.
+ */
+struct eth_tx_data_4th_bd {
+/* Destination Vport ID to forward the packet, applicable only when
+ * tx_dst_port_mode_config == ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD and
+ * dst_port_mode == DST_PORT_LOOPBACK, used to route the packet from VF
+ * Representor to VF
+ */
+	u8 dst_vport_id;
+	u8 reserved4;
+	__le16 bitfields;
+/* if set, dst_vport_id has a valid value and will be used in FW */
+#define ETH_TX_DATA_4TH_BD_DST_VPORT_ID_VALID_MASK  0x1
+#define ETH_TX_DATA_4TH_BD_DST_VPORT_ID_VALID_SHIFT 0
+#define ETH_TX_DATA_4TH_BD_RESERVED1_MASK           0x7F
+#define ETH_TX_DATA_4TH_BD_RESERVED1_SHIFT          1
+/* Should be 0 in all the BDs, except the first one. (for debug) */
+#define ETH_TX_DATA_4TH_BD_START_BD_MASK            0x1
+#define ETH_TX_DATA_4TH_BD_START_BD_SHIFT           8
+#define ETH_TX_DATA_4TH_BD_RESERVED2_MASK           0x7F
+#define ETH_TX_DATA_4TH_BD_RESERVED2_SHIFT          9
+	__le16 reserved3;
+};
+
+/*
+ * The fourth tx bd of a given packet
+ */
+struct eth_tx_4th_bd {
+	struct regpair addr /* Single continuous buffer */;
+	__le16 nbytes /* Number of bytes in this BD. */;
+	struct eth_tx_data_4th_bd data /* Parsing information data. */;
+};
+
+
 /*
  * Complementary information for the regular tx bd of a given packet.
  */
@@ -633,7 +680,8 @@ union eth_tx_bd_types {
 /* The second tx bd of a given packet */
 	struct eth_tx_2nd_bd second_bd;
 	struct eth_tx_3rd_bd third_bd /* The third tx bd of a given packet */;
-	struct eth_tx_bd reg_bd /* The common non-special bd */;
+	struct eth_tx_4th_bd fourth_bd /* The fourth tx bd of a given packet */;
+	struct eth_tx_bd reg_bd /* The common regular bd */;
 };
 
 
@@ -653,6 +701,15 @@ enum eth_tx_tunn_type {
 };
 
 
+/*
+ * Mstorm Queue Zone
+ */
+struct mstorm_eth_queue_zone {
+	struct eth_rx_prod_data rx_producers /* ETH Rx producers data */;
+	__le32 reserved[3];
+};
+
+
 /*
  * Ystorm Queue Zone
  */
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 9277b46fa..91d889dc8 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1235,3 +1235,13 @@
 #define NIG_REG_PPF_TO_ENGINE_SEL 0x508900UL
 #define NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL 0x501b98UL
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL 0x501b40UL
+
+#define MCP_REG_CACHE_PAGING_ENABLE 0xe06304UL
+#define PSWRQ2_REG_RESET_STT 0x240008UL
+#define PSWRQ2_REG_PRTY_STS_WR_H_0 0x240208UL
+#define PCI_EXP_DEVCTL_PAYLOAD 0x00e0
+#define PGLUE_B_REG_MASTER_DISCARD_NBLOCK 0x2aa58cUL
+#define PGLUE_B_REG_PRTY_STS_WR_H_0 0x2a8208UL
+#define DORQ_REG_VF_USAGE_CNT_LIM 0x1009ccUL
+#define PGLUE_B_REG_SR_IOV_DISABLED_REQUEST 0x2aa06cUL
+#define PGLUE_B_REG_SR_IOV_DISABLED_REQUEST_CLR 0x2aa070UL
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index abc86402d..77ee3b34f 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1602,17 +1602,17 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			/* Mark it as LRO packet */
 			ol_flags |= PKT_RX_LRO;
 			/* In split mode,  seg_len is same as len_on_first_bd
-			 * and ext_bd_len_list will be empty since there are
+			 * and bw_ext_bd_len_list will be empty since there are
 			 * no additional buffers
 			 */
 			PMD_RX_LOG(INFO, rxq,
-			    "TPA start[%d] - len_on_first_bd %d header %d"
-			    " [bd_list[0] %d], [seg_len %d]\n",
-			    cqe_start_tpa->tpa_agg_index,
-			    rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd),
-			    cqe_start_tpa->header_len,
-			    rte_le_to_cpu_16(cqe_start_tpa->ext_bd_len_list[0]),
-			    rte_le_to_cpu_16(cqe_start_tpa->seg_len));
+			 "TPA start[%d] - len_on_first_bd %d header %d"
+			 " [bd_list[0] %d], [seg_len %d]\n",
+			 cqe_start_tpa->tpa_agg_index,
+			 rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd),
+			 cqe_start_tpa->header_len,
+			 rte_le_to_cpu_16(cqe_start_tpa->bw_ext_bd_len_list[0]),
+			 rte_le_to_cpu_16(cqe_start_tpa->seg_len));
 
 		break;
 		case ETH_RX_CQE_TYPE_TPA_CONT:
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH 8/9] net/qede/base: update the FW to 8.40.25.0
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (6 preceding siblings ...)
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 7/9] net/qede/base: update HSI code Rasesh Mody
@ 2019-09-30  2:49 ` Rasesh Mody
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 9/9] net/qede: print adapter info during init failure Rasesh Mody
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

This patch updates the FW to 8.40.25.0 and makes the corresponding base
driver changes. It also updates the PMD version to 2.11.0.1. The FW update
consists of the enhancements and fixes described below.

 - Fix the VF RX queue start ramrod getting stuck due to a completion
   error: return the EQ completion with an error when loading VF data
   fails, and use the VF FID in the RX queue start ramrod
 - Fix big receive buffer initialization for 100G to address a failure
   leading to a BRB hardware assertion
 - Fix FW to not forward tunneled SYN packets to LL2; GRE tunnel traffic
   did not run when a non-L2 ethernet protocol was enabled
 - Fix the FW assert triggered during vport_update when tx-switching is
   enabled
 - Add initial FW support for VF Representors
 - Add ecore_get_hsi_def_val() API to get default HSI values; a usage
   sketch follows this list
 - Move the following from .c to .h files:
   TSTORM_QZONE_START and MSTORM_QZONE_START
   enum ilt_clients
   struct ecore_dma_mem, renamed to phys_mem_desc
 - Add ecore_cxt_set_cli() and ecore_cxt_set_blk() APIs to set client
   config and block details
 - Use SET_FIELD() macro where appropriate
 - Address spelling and code alignment issues
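
As an illustration, a minimal sketch of how the new helper could back the
chip-specific defaults, assuming a plain BB/AH lookup table keyed by the
new ecore_hsi_def_type enum added to ecore.h below; the table name and the
two entries shown are illustrative, and the in-tree ecore_dev.c
implementation may differ in detail:

	/* Hypothetical lookup table: column 0 holds the BB value,
	 * column 1 the AH (K2) value; only two entries are shown.
	 */
	static const u32 hsi_def_val[ECORE_NUM_HSI_DEFS][2] = {
		[ECORE_HSI_DEF_MAX_NUM_VFS] =
			{MAX_NUM_VFS_BB, MAX_NUM_VFS_K2},
		[ECORE_HSI_DEF_MAX_NUM_L2_QUEUES] =
			{MAX_NUM_L2_QUEUES_BB, MAX_NUM_L2_QUEUES_K2},
		/* remaining ecore_hsi_def_type entries elided */
	};

	u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev,
				  enum ecore_hsi_def_type type)
	{
		if (type >= ECORE_NUM_HSI_DEFS) {
			DP_NOTICE(p_dev, false,
				  "Unexpected HSI def type %d\n", type);
			return 0;
		}

		return ECORE_IS_BB(p_dev) ? hsi_def_val[type][0]
					  : hsi_def_val[type][1];
	}

Callers keep using the existing wrappers, e.g. NUM_OF_VFS(p_dev) now
resolves through ecore_get_hsi_def_val(p_dev, ECORE_HSI_DEF_MAX_NUM_VFS)
instead of a compile-time BB/K2 macro.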

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore.h               |  73 ++-
 drivers/net/qede/base/ecore_cxt.c           | 497 ++++++++------
 drivers/net/qede/base/ecore_cxt.h           |  12 +
 drivers/net/qede/base/ecore_dcbx.c          |   5 +-
 drivers/net/qede/base/ecore_dev.c           | 586 ++++++++++-------
 drivers/net/qede/base/ecore_init_fw_funcs.c | 681 ++++++++++----------
 drivers/net/qede/base/ecore_init_fw_funcs.h | 107 ++-
 drivers/net/qede/base/ecore_init_ops.c      |  15 +-
 drivers/net/qede/base/ecore_init_ops.h      |   2 +-
 drivers/net/qede/base/ecore_int.c           | 129 ++--
 drivers/net/qede/base/ecore_int_api.h       |  11 +-
 drivers/net/qede/base/ecore_l2.c            |  10 +-
 drivers/net/qede/base/ecore_l2_api.h        |   2 +
 drivers/net/qede/base/ecore_mcp.c           | 287 +++++----
 drivers/net/qede/base/ecore_mcp.h           |   9 +-
 drivers/net/qede/base/ecore_proto_if.h      |   1 +
 drivers/net/qede/base/ecore_sp_commands.c   |  15 +-
 drivers/net/qede/base/ecore_spq.c           |  53 +-
 drivers/net/qede/base/ecore_sriov.c         | 157 +++--
 drivers/net/qede/base/ecore_vf.c            |  18 +-
 drivers/net/qede/qede_ethdev.h              |   2 +-
 drivers/net/qede/qede_main.c                |   2 +-
 drivers/net/qede/qede_rxtx.c                |   4 +-
 23 files changed, 1584 insertions(+), 1094 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b1d8706c9..925b75cb9 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,8 +28,8 @@
 #include "mcp_public.h"
 
 #define ECORE_MAJOR_VERSION		8
-#define ECORE_MINOR_VERSION		37
-#define ECORE_REVISION_VERSION		20
+#define ECORE_MINOR_VERSION		40
+#define ECORE_REVISION_VERSION		18
 #define ECORE_ENGINEERING_VERSION	0
 
 #define ECORE_VERSION							\
@@ -467,6 +467,8 @@ struct ecore_wfq_data {
 	bool configured;
 };
 
+#define OFLD_GRP_SIZE 4
+
 struct ecore_qm_info {
 	struct init_qm_pq_params    *qm_pq_params;
 	struct init_qm_vport_params *qm_vport_params;
@@ -513,6 +515,8 @@ struct ecore_fw_data {
 	const u8 *modes_tree_buf;
 	union init_op *init_ops;
 	const u32 *arr_data;
+	const u32 *fw_overlays;
+	u32 fw_overlays_len;
 	u32 init_ops_size;
 };
 
@@ -592,6 +596,7 @@ struct ecore_hwfn {
 
 	u8				num_funcs_on_engine;
 	u8				enabled_func_idx;
+	u8				num_funcs_on_port;
 
 	/* BAR access */
 	void OSAL_IOMEM			*regview;
@@ -745,7 +750,6 @@ struct ecore_dev {
 #endif
 #define ECORE_IS_AH(dev)	((dev)->type == ECORE_DEV_TYPE_AH)
 #define ECORE_IS_K2(dev)	ECORE_IS_AH(dev)
-#define ECORE_IS_E4(dev)	(ECORE_IS_BB(dev) || ECORE_IS_AH(dev))
 
 	u16 vendor_id;
 	u16 device_id;
@@ -893,6 +897,7 @@ struct ecore_dev {
 
 #ifndef ASIC_ONLY
 	bool				b_is_emul_full;
+	bool				b_is_emul_mac;
 #endif
 	/* LLH info */
 	u8				ppfid_bitmap;
@@ -911,16 +916,52 @@ struct ecore_dev {
 	u8				engine_for_debug;
 };
 
-#define NUM_OF_VFS(dev)		(ECORE_IS_BB(dev) ? MAX_NUM_VFS_BB \
-						  : MAX_NUM_VFS_K2)
-#define NUM_OF_L2_QUEUES(dev)	(ECORE_IS_BB(dev) ? MAX_NUM_L2_QUEUES_BB \
-						  : MAX_NUM_L2_QUEUES_K2)
-#define NUM_OF_PORTS(dev)	(ECORE_IS_BB(dev) ? MAX_NUM_PORTS_BB \
-						  : MAX_NUM_PORTS_K2)
-#define NUM_OF_SBS(dev)		(ECORE_IS_BB(dev) ? MAX_SB_PER_PATH_BB \
-						  : MAX_SB_PER_PATH_K2)
-#define NUM_OF_ENG_PFS(dev)	(ECORE_IS_BB(dev) ? MAX_NUM_PFS_BB \
-						  : MAX_NUM_PFS_K2)
+enum ecore_hsi_def_type {
+	ECORE_HSI_DEF_MAX_NUM_VFS,
+	ECORE_HSI_DEF_MAX_NUM_L2_QUEUES,
+	ECORE_HSI_DEF_MAX_NUM_PORTS,
+	ECORE_HSI_DEF_MAX_SB_PER_PATH,
+	ECORE_HSI_DEF_MAX_NUM_PFS,
+	ECORE_HSI_DEF_MAX_NUM_VPORTS,
+	ECORE_HSI_DEF_NUM_ETH_RSS_ENGINE,
+	ECORE_HSI_DEF_MAX_QM_TX_QUEUES,
+	ECORE_HSI_DEF_NUM_PXP_ILT_RECORDS,
+	ECORE_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS,
+	ECORE_HSI_DEF_MAX_QM_GLOBAL_RLS,
+	ECORE_HSI_DEF_MAX_PBF_CMD_LINES,
+	ECORE_HSI_DEF_MAX_BTB_BLOCKS,
+	ECORE_NUM_HSI_DEFS
+};
+
+u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev,
+			  enum ecore_hsi_def_type type);
+
+#define NUM_OF_VFS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_VFS)
+#define NUM_OF_L2_QUEUES(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_L2_QUEUES)
+#define NUM_OF_PORTS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_PORTS)
+#define NUM_OF_SBS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_SB_PER_PATH)
+#define NUM_OF_ENG_PFS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_PFS)
+#define NUM_OF_VPORTS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_VPORTS)
+#define NUM_OF_RSS_ENGINES(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_NUM_ETH_RSS_ENGINE)
+#define NUM_OF_QM_TX_QUEUES(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_QM_TX_QUEUES)
+#define NUM_OF_PXP_ILT_RECORDS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_NUM_PXP_ILT_RECORDS)
+#define NUM_OF_RDMA_STATISTIC_COUNTERS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS)
+#define NUM_OF_QM_GLOBAL_RLS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_QM_GLOBAL_RLS)
+#define NUM_OF_PBF_CMD_LINES(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_PBF_CMD_LINES)
+#define NUM_OF_BTB_BLOCKS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_BTB_BLOCKS)
 
 #define CRC8_TABLE_SIZE 256
 
@@ -948,7 +989,6 @@ static OSAL_INLINE u8 ecore_concrete_to_sw_fid(u32 concrete_fid)
 }
 
 #define PKT_LB_TC 9
-#define MAX_NUM_VOQS_E4 20
 
 int ecore_configure_vport_wfq(struct ecore_dev *p_dev, u16 vp_id, u32 rate);
 void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
@@ -1023,4 +1063,9 @@ enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid);
 enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev);
 
+#define TSTORM_QZONE_START	PXP_VF_BAR0_START_SDM_ZONE_A
+
+#define MSTORM_QZONE_START(dev) \
+	(TSTORM_QZONE_START + (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
+
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 0f04c9447..33ad589f6 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -33,6 +33,10 @@
 /* Searcher constants */
 #define SRC_MIN_NUM_ELEMS 256
 
+/* GFS constants */
+#define RGFS_MIN_NUM_ELEMS	256
+#define TGFS_MIN_NUM_ELEMS	256
+
 /* Timers constants */
 #define TM_SHIFT	7
 #define TM_ALIGN	(1 << TM_SHIFT)
@@ -114,16 +118,6 @@ struct ecore_conn_type_cfg {
 #define CDUT_SEG_BLK(n)		(1 + (u8)(n))
 #define CDUT_FL_SEG_BLK(n, X)	(1 + (n) + NUM_TASK_##X##_SEGMENTS)
 
-enum ilt_clients {
-	ILT_CLI_CDUC,
-	ILT_CLI_CDUT,
-	ILT_CLI_QM,
-	ILT_CLI_TM,
-	ILT_CLI_SRC,
-	ILT_CLI_TSDM,
-	ILT_CLI_MAX
-};
-
 struct ilt_cfg_pair {
 	u32 reg;
 	u32 val;
@@ -133,6 +127,7 @@ struct ecore_ilt_cli_blk {
 	u32 total_size;		/* 0 means not active */
 	u32 real_size_in_page;
 	u32 start_line;
+	u32 dynamic_line_offset;
 	u32 dynamic_line_cnt;
 };
 
@@ -153,17 +148,6 @@ struct ecore_ilt_client_cfg {
 	u32 vf_total_lines;
 };
 
-/* Per Path -
- *      ILT shadow table
- *      Protocol acquired CID lists
- *      PF start line in ILT
- */
-struct ecore_dma_mem {
-	dma_addr_t p_phys;
-	void *p_virt;
-	osal_size_t size;
-};
-
 #define MAP_WORD_SIZE		sizeof(unsigned long)
 #define BITS_PER_MAP_WORD	(MAP_WORD_SIZE * 8)
 
@@ -173,6 +157,13 @@ struct ecore_cid_acquired_map {
 	unsigned long *cid_map;
 };
 
+struct ecore_src_t2 {
+	struct phys_mem_desc	*dma_mem;
+	u32			num_pages;
+	u64			first_free;
+	u64			last_free;
+};
+
 struct ecore_cxt_mngr {
 	/* Per protocl configuration */
 	struct ecore_conn_type_cfg conn_cfg[MAX_CONN_TYPES];
@@ -193,17 +184,14 @@ struct ecore_cxt_mngr {
 	struct ecore_cid_acquired_map *acquired_vf[MAX_CONN_TYPES];
 
 	/* ILT  shadow table */
-	struct ecore_dma_mem *ilt_shadow;
+	struct phys_mem_desc		*ilt_shadow;
 	u32 pf_start_line;
 
 	/* Mutex for a dynamic ILT allocation */
 	osal_mutex_t mutex;
 
 	/* SRC T2 */
-	struct ecore_dma_mem *t2;
-	u32 t2_num_pages;
-	u64 first_free;
-	u64 last_free;
+	struct ecore_src_t2		src_t2;
 
 	/* The infrastructure originally was very generic and context/task
 	 * oriented - per connection-type we would set how many of those
@@ -280,15 +268,17 @@ struct ecore_tm_iids {
 	u32 per_vf_tids;
 };
 
-static void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
+static void ecore_cxt_tm_iids(struct ecore_hwfn *p_hwfn,
+			      struct ecore_cxt_mngr *p_mngr,
 			      struct ecore_tm_iids *iids)
 {
+	struct ecore_conn_type_cfg *p_cfg;
 	bool tm_vf_required = false;
 	bool tm_required = false;
 	u32 i, j;
 
 	for (i = 0; i < MAX_CONN_TYPES; i++) {
-		struct ecore_conn_type_cfg *p_cfg = &p_mngr->conn_cfg[i];
+		p_cfg = &p_mngr->conn_cfg[i];
 
 		if (tm_cid_proto(i) || tm_required) {
 			if (p_cfg->cid_count)
@@ -490,43 +480,84 @@ static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn,
 		   p_blk->start_line);
 }
 
-static u32 ecore_ilt_get_dynamic_line_cnt(struct ecore_hwfn *p_hwfn,
-					  enum ilt_clients ilt_client)
+static void ecore_ilt_get_dynamic_line_range(struct ecore_hwfn *p_hwfn,
+					     enum ilt_clients ilt_client,
+					     u32 *dynamic_line_offset,
+					     u32 *dynamic_line_cnt)
 {
-	u32 cid_count = p_hwfn->p_cxt_mngr->conn_cfg[PROTOCOLID_ROCE].cid_count;
 	struct ecore_ilt_client_cfg *p_cli;
-	u32 lines_to_skip = 0;
+	struct ecore_conn_type_cfg *p_cfg;
 	u32 cxts_per_p;
 
 	/* TBD MK: ILT code should be simplified once PROTO enum is changed */
 
+	*dynamic_line_offset = 0;
+	*dynamic_line_cnt = 0;
+
 	if (ilt_client == ILT_CLI_CDUC) {
 		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
+		p_cfg = &p_hwfn->p_cxt_mngr->conn_cfg[PROTOCOLID_ROCE];
 
 		cxts_per_p = ILT_PAGE_IN_BYTES(p_cli->p_size.val) /
 		    (u32)CONN_CXT_SIZE(p_hwfn);
 
-		lines_to_skip = cid_count / cxts_per_p;
+		*dynamic_line_cnt = p_cfg->cid_count / cxts_per_p;
+	}
+}
+
+static struct ecore_ilt_client_cfg *
+ecore_cxt_set_cli(struct ecore_ilt_client_cfg *p_cli)
+{
+	p_cli->active = false;
+	p_cli->first.val = 0;
+	p_cli->last.val = 0;
+	return p_cli;
+}
+
+static struct ecore_ilt_cli_blk *
+ecore_cxt_set_blk(struct ecore_ilt_cli_blk *p_blk)
+{
+	p_blk->total_size = 0;
+	return p_blk;
 	}
 
-	return lines_to_skip;
+static u32
+ecore_cxt_src_elements(struct ecore_cxt_mngr *p_mngr)
+{
+	struct ecore_src_iids src_iids;
+	u32 elem_num = 0;
+
+	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
+	ecore_cxt_src_iids(p_mngr, &src_iids);
+
+	/* Both the PF and VFs searcher connections are stored in the per PF
+	 * database. Thus sum the PF searcher cids and all the VFs searcher
+	 * cids.
+	 */
+	elem_num = src_iids.pf_cids +
+		   src_iids.per_vf_cids * p_mngr->vf_count;
+	if (elem_num == 0)
+		return elem_num;
+
+	elem_num = OSAL_MAX_T(u32, elem_num, SRC_MIN_NUM_ELEMS);
+	elem_num = OSAL_ROUNDUP_POW_OF_TWO(elem_num);
+
+	return elem_num;
 }
 
 enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 {
+	u32 curr_line, total, i, task_size, line, total_size, elem_size;
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 curr_line, total, i, task_size, line;
 	struct ecore_ilt_client_cfg *p_cli;
 	struct ecore_ilt_cli_blk *p_blk;
 	struct ecore_cdu_iids cdu_iids;
-	struct ecore_src_iids src_iids;
 	struct ecore_qm_iids qm_iids;
 	struct ecore_tm_iids tm_iids;
 	struct ecore_tid_seg *p_seg;
 
 	OSAL_MEM_ZERO(&qm_iids, sizeof(qm_iids));
 	OSAL_MEM_ZERO(&cdu_iids, sizeof(cdu_iids));
-	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
 	OSAL_MEM_ZERO(&tm_iids, sizeof(tm_iids));
 
 	p_mngr->pf_start_line = RESC_START(p_hwfn, ECORE_ILT);
@@ -536,7 +567,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		   p_hwfn->my_id, p_hwfn->p_cxt_mngr->pf_start_line);
 
 	/* CDUC */
-	p_cli = &p_mngr->clients[ILT_CLI_CDUC];
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_CDUC]);
 
 	curr_line = p_mngr->pf_start_line;
 
@@ -546,7 +577,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	/* get the counters for the CDUC,CDUC and QM clients  */
 	ecore_cxt_cdu_iids(p_mngr, &cdu_iids);
 
-	p_blk = &p_cli->pf_blks[CDUC_BLK];
+	p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[CDUC_BLK]);
 
 	total = cdu_iids.pf_cids * CONN_CXT_SIZE(p_hwfn);
 
@@ -556,11 +587,12 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_CDUC);
 	p_cli->pf_total_lines = curr_line - p_blk->start_line;
 
-	p_blk->dynamic_line_cnt = ecore_ilt_get_dynamic_line_cnt(p_hwfn,
-								 ILT_CLI_CDUC);
+	ecore_ilt_get_dynamic_line_range(p_hwfn, ILT_CLI_CDUC,
+					 &p_blk->dynamic_line_offset,
+					 &p_blk->dynamic_line_cnt);
 
 	/* CDUC VF */
-	p_blk = &p_cli->vf_blks[CDUC_BLK];
+	p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[CDUC_BLK]);
 	total = cdu_iids.per_vf_cids * CONN_CXT_SIZE(p_hwfn);
 
 	ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
@@ -574,7 +606,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 				       ILT_CLI_CDUC);
 
 	/* CDUT PF */
-	p_cli = &p_mngr->clients[ILT_CLI_CDUT];
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_CDUT]);
 	p_cli->first.val = curr_line;
 
 	/* first the 'working' task memory */
@@ -583,7 +615,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		if (!p_seg || p_seg->count == 0)
 			continue;
 
-		p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(i)];
+		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[CDUT_SEG_BLK(i)]);
 		total = p_seg->count * p_mngr->task_type_size[p_seg->type];
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line, total,
 				       p_mngr->task_type_size[p_seg->type]);
@@ -598,7 +630,8 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		if (!p_seg || p_seg->count == 0)
 			continue;
 
-		p_blk = &p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)];
+		p_blk = ecore_cxt_set_blk(
+				&p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)]);
 
 		if (!p_seg->has_fl_mem) {
 			/* The segment is active (total size pf 'working'
@@ -631,7 +664,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 				       ILT_CLI_CDUT);
 	}
-	p_cli->pf_total_lines = curr_line - p_cli->pf_blks[0].start_line;
+	p_cli->pf_total_lines = curr_line - p_cli->first.val;
 
 	/* CDUT VF */
 	p_seg = ecore_cxt_tid_seg_info(p_hwfn, TASK_SEGMENT_VF);
@@ -643,7 +676,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		/* 'working' memory */
 		total = p_seg->count * p_mngr->task_type_size[p_seg->type];
 
-		p_blk = &p_cli->vf_blks[CDUT_SEG_BLK(0)];
+		p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[CDUT_SEG_BLK(0)]);
 		ecore_ilt_cli_blk_fill(p_cli, p_blk,
 				       curr_line, total,
 				       p_mngr->task_type_size[p_seg->type]);
@@ -652,7 +685,8 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 				       ILT_CLI_CDUT);
 
 		/* 'init' memory */
-		p_blk = &p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)];
+		p_blk = ecore_cxt_set_blk(
+				&p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)]);
 		if (!p_seg->has_fl_mem) {
 			/* see comment above */
 			line = p_cli->vf_blks[CDUT_SEG_BLK(0)].start_line;
@@ -664,15 +698,17 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 			ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 					       ILT_CLI_CDUT);
 		}
-		p_cli->vf_total_lines = curr_line -
-		    p_cli->vf_blks[0].start_line;
+		p_cli->vf_total_lines = curr_line - (p_cli->first.val +
+						     p_cli->pf_total_lines);
 
 		/* Now for the rest of the VFs */
 		for (i = 1; i < p_mngr->vf_count; i++) {
+			/* don't set p_blk i.e. don't clear total_size */
 			p_blk = &p_cli->vf_blks[CDUT_SEG_BLK(0)];
 			ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 					       ILT_CLI_CDUT);
 
+			/* don't set p_blk i.e. don't clear total_size */
 			p_blk = &p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)];
 			ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 					       ILT_CLI_CDUT);
@@ -680,13 +716,19 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	}
 
 	/* QM */
-	p_cli = &p_mngr->clients[ILT_CLI_QM];
-	p_blk = &p_cli->pf_blks[0];
-
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_QM]);
+	p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]);
+
+	/* At this stage, after the first QM configuration, the PF PQs amount
+	 * is the highest possible. Save this value at qm_info->ilt_pf_pqs to
+	 * detect overflows in the future.
+	 * Even though VF PQs amount can be larger than VF count, use vf_count
+	 * because each VF requires only the full amount of CIDs.
+	 */
 	ecore_cxt_qm_iids(p_hwfn, &qm_iids);
-	total = ecore_qm_pf_mem_size(qm_iids.cids,
+	total = ecore_qm_pf_mem_size(p_hwfn, qm_iids.cids,
 				     qm_iids.vf_cids, qm_iids.tids,
-				     p_hwfn->qm_info.num_pqs,
+				     p_hwfn->qm_info.num_pqs + OFLD_GRP_SIZE,
 				     p_hwfn->qm_info.num_vf_pqs);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
@@ -701,39 +743,15 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_QM);
 	p_cli->pf_total_lines = curr_line - p_blk->start_line;
 
-	/* SRC */
-	p_cli = &p_mngr->clients[ILT_CLI_SRC];
-	ecore_cxt_src_iids(p_mngr, &src_iids);
-
-	/* Both the PF and VFs searcher connections are stored in the per PF
-	 * database. Thus sum the PF searcher cids and all the VFs searcher
-	 * cids.
-	 */
-	total = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
-	if (total) {
-		u32 local_max = OSAL_MAX_T(u32, total,
-					   SRC_MIN_NUM_ELEMS);
-
-		total = OSAL_ROUNDUP_POW_OF_TWO(local_max);
-
-		p_blk = &p_cli->pf_blks[0];
-		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
-				       total * sizeof(struct src_ent),
-				       sizeof(struct src_ent));
-
-		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
-				       ILT_CLI_SRC);
-		p_cli->pf_total_lines = curr_line - p_blk->start_line;
-	}
-
 	/* TM PF */
-	p_cli = &p_mngr->clients[ILT_CLI_TM];
-	ecore_cxt_tm_iids(p_mngr, &tm_iids);
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_TM]);
+	ecore_cxt_tm_iids(p_hwfn, p_mngr, &tm_iids);
 	total = tm_iids.pf_cids + tm_iids.pf_tids_total;
 	if (total) {
-		p_blk = &p_cli->pf_blks[0];
+		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]);
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
-				       total * TM_ELEM_SIZE, TM_ELEM_SIZE);
+				       total * TM_ELEM_SIZE,
+				       TM_ELEM_SIZE);
 
 		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 				       ILT_CLI_TM);
@@ -743,7 +761,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	/* TM VF */
 	total = tm_iids.per_vf_cids + tm_iids.per_vf_tids;
 	if (total) {
-		p_blk = &p_cli->vf_blks[0];
+		p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[0]);
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
 				       total * TM_ELEM_SIZE, TM_ELEM_SIZE);
 
@@ -757,12 +775,28 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		}
 	}
 
+	/* SRC */
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_SRC]);
+	total = ecore_cxt_src_elements(p_mngr);
+
+	if (total) {
+		total_size = total * sizeof(struct src_ent);
+		elem_size = sizeof(struct src_ent);
+
+		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]);
+		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
+				       total_size, elem_size);
+		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
+				       ILT_CLI_SRC);
+		p_cli->pf_total_lines = curr_line - p_blk->start_line;
+	}
+
 	/* TSDM (SRQ CONTEXT) */
 	total = ecore_cxt_get_srq_count(p_hwfn);
 
 	if (total) {
-		p_cli = &p_mngr->clients[ILT_CLI_TSDM];
-		p_blk = &p_cli->pf_blks[SRQ_BLK];
+		p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_TSDM]);
+		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[SRQ_BLK]);
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
 				       total * SRQ_CXT_SIZE, SRQ_CXT_SIZE);
 
@@ -783,29 +817,60 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 static void ecore_cxt_src_t2_free(struct ecore_hwfn *p_hwfn)
 {
-	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_src_t2 *p_t2 = &p_hwfn->p_cxt_mngr->src_t2;
 	u32 i;
 
-	if (!p_mngr->t2)
+	if (!p_t2 || !p_t2->dma_mem)
 		return;
 
-	for (i = 0; i < p_mngr->t2_num_pages; i++)
-		if (p_mngr->t2[i].p_virt)
+	for (i = 0; i < p_t2->num_pages; i++)
+		if (p_t2->dma_mem[i].virt_addr)
 			OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
-					       p_mngr->t2[i].p_virt,
-					       p_mngr->t2[i].p_phys,
-					       p_mngr->t2[i].size);
+					       p_t2->dma_mem[i].virt_addr,
+					       p_t2->dma_mem[i].phys_addr,
+					       p_t2->dma_mem[i].size);
 
-	OSAL_FREE(p_hwfn->p_dev, p_mngr->t2);
+	OSAL_FREE(p_hwfn->p_dev, p_t2->dma_mem);
+	p_t2->dma_mem = OSAL_NULL;
+}
+
+static enum _ecore_status_t
+ecore_cxt_t2_alloc_pages(struct ecore_hwfn *p_hwfn,
+			 struct ecore_src_t2 *p_t2,
+			 u32 total_size, u32 page_size)
+{
+	void **p_virt;
+	u32 size, i;
+
+	if (!p_t2 || !p_t2->dma_mem)
+		return ECORE_INVAL;
+
+	for (i = 0; i < p_t2->num_pages; i++) {
+		size = OSAL_MIN_T(u32, total_size, page_size);
+		p_virt = &p_t2->dma_mem[i].virt_addr;
+
+		*p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+						  &p_t2->dma_mem[i].phys_addr,
+						  size);
+		if (!p_t2->dma_mem[i].virt_addr)
+			return ECORE_NOMEM;
+
+		OSAL_MEM_ZERO(*p_virt, size);
+		p_t2->dma_mem[i].size = size;
+		total_size -= size;
+	}
+
+	return ECORE_SUCCESS;
 }
 
 static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	u32 conn_num, total_size, ent_per_page, psz, i;
+	struct phys_mem_desc *p_t2_last_page;
 	struct ecore_ilt_client_cfg *p_src;
 	struct ecore_src_iids src_iids;
-	struct ecore_dma_mem *p_t2;
+	struct ecore_src_t2 *p_t2;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
@@ -823,49 +888,39 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 
 	/* use the same page size as the SRC ILT client */
 	psz = ILT_PAGE_IN_BYTES(p_src->p_size.val);
-	p_mngr->t2_num_pages = DIV_ROUND_UP(total_size, psz);
+	p_t2 = &p_mngr->src_t2;
+	p_t2->num_pages = DIV_ROUND_UP(total_size, psz);
 
 	/* allocate t2 */
-	p_mngr->t2 = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-				 p_mngr->t2_num_pages *
-				 sizeof(struct ecore_dma_mem));
-	if (!p_mngr->t2) {
+	p_t2->dma_mem = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				    p_t2->num_pages *
+				    sizeof(struct phys_mem_desc));
+	if (!p_t2->dma_mem) {
 		DP_NOTICE(p_hwfn, false, "Failed to allocate t2 table\n");
 		rc = ECORE_NOMEM;
 		goto t2_fail;
 	}
 
-	/* allocate t2 pages */
-	for (i = 0; i < p_mngr->t2_num_pages; i++) {
-		u32 size = OSAL_MIN_T(u32, total_size, psz);
-		void **p_virt = &p_mngr->t2[i].p_virt;
-
-		*p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
-						  &p_mngr->t2[i].p_phys, size);
-		if (!p_mngr->t2[i].p_virt) {
-			rc = ECORE_NOMEM;
-			goto t2_fail;
-		}
-		OSAL_MEM_ZERO(*p_virt, size);
-		p_mngr->t2[i].size = size;
-		total_size -= size;
-	}
+	rc = ecore_cxt_t2_alloc_pages(p_hwfn, p_t2, total_size, psz);
+	if (rc)
+		goto t2_fail;
 
 	/* Set the t2 pointers */
 
 	/* entries per page - must be a power of two */
 	ent_per_page = psz / sizeof(struct src_ent);
 
-	p_mngr->first_free = (u64)p_mngr->t2[0].p_phys;
+	p_t2->first_free = (u64)p_t2->dma_mem[0].phys_addr;
 
-	p_t2 = &p_mngr->t2[(conn_num - 1) / ent_per_page];
-	p_mngr->last_free = (u64)p_t2->p_phys +
-	    ((conn_num - 1) & (ent_per_page - 1)) * sizeof(struct src_ent);
+	p_t2_last_page = &p_t2->dma_mem[(conn_num - 1) / ent_per_page];
+	p_t2->last_free = (u64)p_t2_last_page->phys_addr +
+			  ((conn_num - 1) & (ent_per_page - 1)) *
+			  sizeof(struct src_ent);
 
-	for (i = 0; i < p_mngr->t2_num_pages; i++) {
+	for (i = 0; i < p_t2->num_pages; i++) {
 		u32 ent_num = OSAL_MIN_T(u32, ent_per_page, conn_num);
-		struct src_ent *entries = p_mngr->t2[i].p_virt;
-		u64 p_ent_phys = (u64)p_mngr->t2[i].p_phys, val;
+		struct src_ent *entries = p_t2->dma_mem[i].virt_addr;
+		u64 p_ent_phys = (u64)p_t2->dma_mem[i].phys_addr, val;
 		u32 j;
 
 		for (j = 0; j < ent_num - 1; j++) {
@@ -873,8 +928,8 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 			entries[j].next = OSAL_CPU_TO_BE64(val);
 		}
 
-		if (i < p_mngr->t2_num_pages - 1)
-			val = (u64)p_mngr->t2[i + 1].p_phys;
+		if (i < p_t2->num_pages - 1)
+			val = (u64)p_t2->dma_mem[i + 1].phys_addr;
 		else
 			val = 0;
 		entries[j].next = OSAL_CPU_TO_BE64(val);
@@ -921,13 +976,13 @@ static void ecore_ilt_shadow_free(struct ecore_hwfn *p_hwfn)
 	ilt_size = ecore_cxt_ilt_shadow_size(p_cli);
 
 	for (i = 0; p_mngr->ilt_shadow && i < ilt_size; i++) {
-		struct ecore_dma_mem *p_dma = &p_mngr->ilt_shadow[i];
+		struct phys_mem_desc *p_dma = &p_mngr->ilt_shadow[i];
 
-		if (p_dma->p_virt)
+		if (p_dma->virt_addr)
 			OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
 					       p_dma->p_virt,
-					       p_dma->p_phys, p_dma->size);
-		p_dma->p_virt = OSAL_NULL;
+					       p_dma->phys_addr, p_dma->size);
+		p_dma->virt_addr = OSAL_NULL;
 	}
 	OSAL_FREE(p_hwfn->p_dev, p_mngr->ilt_shadow);
 	p_mngr->ilt_shadow = OSAL_NULL;
@@ -938,28 +993,33 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 		    struct ecore_ilt_cli_blk *p_blk,
 		    enum ilt_clients ilt_client, u32 start_line_offset)
 {
-	struct ecore_dma_mem *ilt_shadow = p_hwfn->p_cxt_mngr->ilt_shadow;
-	u32 lines, line, sz_left, lines_to_skip = 0;
+	struct phys_mem_desc *ilt_shadow = p_hwfn->p_cxt_mngr->ilt_shadow;
+	u32 lines, line, sz_left, lines_to_skip, first_skipped_line;
 
 	/* Special handling for RoCE that supports dynamic allocation */
 	if (ilt_client == ILT_CLI_CDUT || ilt_client == ILT_CLI_TSDM)
 		return ECORE_SUCCESS;
 
-	lines_to_skip = p_blk->dynamic_line_cnt;
-
 	if (!p_blk->total_size)
 		return ECORE_SUCCESS;
 
 	sz_left = p_blk->total_size;
+	lines_to_skip = p_blk->dynamic_line_cnt;
 	lines = DIV_ROUND_UP(sz_left, p_blk->real_size_in_page) - lines_to_skip;
 	line = p_blk->start_line + start_line_offset -
-	    p_hwfn->p_cxt_mngr->pf_start_line + lines_to_skip;
+	       p_hwfn->p_cxt_mngr->pf_start_line;
+	first_skipped_line = line + p_blk->dynamic_line_offset;
 
-	for (; lines; lines--) {
+	while (lines) {
 		dma_addr_t p_phys;
 		void *p_virt;
 		u32 size;
 
+		if (lines_to_skip && (line == first_skipped_line)) {
+			line += lines_to_skip;
+			continue;
+		}
+
 		size = OSAL_MIN_T(u32, sz_left, p_blk->real_size_in_page);
 
 /* @DPDK */
@@ -971,8 +1031,8 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 			return ECORE_NOMEM;
 		OSAL_MEM_ZERO(p_virt, size);
 
-		ilt_shadow[line].p_phys = p_phys;
-		ilt_shadow[line].p_virt = p_virt;
+		ilt_shadow[line].phys_addr = p_phys;
+		ilt_shadow[line].virt_addr = p_virt;
 		ilt_shadow[line].size = size;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
@@ -982,6 +1042,7 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 
 		sz_left -= size;
 		line++;
+		lines--;
 	}
 
 	return ECORE_SUCCESS;
@@ -997,7 +1058,7 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 
 	size = ecore_cxt_ilt_shadow_size(clients);
 	p_mngr->ilt_shadow = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-					 size * sizeof(struct ecore_dma_mem));
+					 size * sizeof(struct phys_mem_desc));
 
 	if (!p_mngr->ilt_shadow) {
 		DP_NOTICE(p_hwfn, false, "Failed to allocate ilt shadow table\n");
@@ -1007,7 +1068,7 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
 		   "Allocated 0x%x bytes for ilt shadow\n",
-		   (u32)(size * sizeof(struct ecore_dma_mem)));
+		   (u32)(size * sizeof(struct phys_mem_desc)));
 
 	for_each_ilt_valid_client(i, clients) {
 		for (j = 0; j < ILT_CLI_PF_BLOCKS; j++) {
@@ -1058,7 +1119,7 @@ static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 }
 
 static enum _ecore_status_t
-ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
+__ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
 			   u32 cid_start, u32 cid_count,
 			   struct ecore_cid_acquired_map *p_map)
 {
@@ -1082,49 +1143,67 @@ ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
+static enum _ecore_status_t
+ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type, u32 start_cid,
+			   u32 vf_start_cid)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
-	u32 start_cid = 0, vf_start_cid = 0;
-	u32 type, vf;
+	u32 vf, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
+	struct ecore_cid_acquired_map *p_map;
+	struct ecore_conn_type_cfg *p_cfg;
+	enum _ecore_status_t rc;
 
-	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		struct ecore_conn_type_cfg *p_cfg = &p_mngr->conn_cfg[type];
-		struct ecore_cid_acquired_map *p_map;
+	p_cfg = &p_mngr->conn_cfg[type];
 
 		/* Handle PF maps */
 		p_map = &p_mngr->acquired[type];
-		if (ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
-					       p_cfg->cid_count, p_map))
-			goto cid_map_fail;
+	rc = __ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
+					  p_cfg->cid_count, p_map);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Handle VF maps */
+	for (vf = 0; vf < max_num_vfs; vf++) {
+		p_map = &p_mngr->acquired_vf[type][vf];
+		rc = __ecore_cid_map_alloc_single(p_hwfn, type, vf_start_cid,
+						  p_cfg->cids_per_vf, p_map);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	}
 
-		/* Handle VF maps */
-		for (vf = 0; vf < max_num_vfs; vf++) {
-			p_map = &p_mngr->acquired_vf[type][vf];
-			if (ecore_cid_map_alloc_single(p_hwfn, type,
-						       vf_start_cid,
-						       p_cfg->cids_per_vf,
-						       p_map))
-				goto cid_map_fail;
-		}
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	u32 start_cid = 0, vf_start_cid = 0;
+	u32 type;
+	enum _ecore_status_t rc;
+
+	for (type = 0; type < MAX_CONN_TYPES; type++) {
+		rc = ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
+						vf_start_cid);
+		if (rc != ECORE_SUCCESS)
+			goto cid_map_fail;
 
-		start_cid += p_cfg->cid_count;
-		vf_start_cid += p_cfg->cids_per_vf;
+		start_cid += p_mngr->conn_cfg[type].cid_count;
+		vf_start_cid += p_mngr->conn_cfg[type].cids_per_vf;
 	}
 
 	return ECORE_SUCCESS;
 
 cid_map_fail:
 	ecore_cid_map_free(p_hwfn);
-	return ECORE_NOMEM;
+	return rc;
 }
 
 enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 {
+	struct ecore_cid_acquired_map *acquired_vf;
 	struct ecore_ilt_client_cfg *clients;
 	struct ecore_cxt_mngr *p_mngr;
-	u32 i;
+	u32 i, max_num_vfs;
 
 	p_mngr = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_mngr));
 	if (!p_mngr) {
@@ -1132,9 +1211,6 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 		return ECORE_NOMEM;
 	}
 
-	/* Set the cxt mangr pointer prior to further allocations */
-	p_hwfn->p_cxt_mngr = p_mngr;
-
 	/* Initialize ILT client registers */
 	clients = p_mngr->clients;
 	clients[ILT_CLI_CDUC].first.reg = ILT_CFG_REG(CDUC, FIRST_ILT);
@@ -1183,6 +1259,22 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 #endif
 	OSAL_MUTEX_INIT(&p_mngr->mutex);
 
+	/* Set the cxt mangr pointer prior to further allocations */
+	p_hwfn->p_cxt_mngr = p_mngr;
+
+	max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
+	for (i = 0; i < MAX_CONN_TYPES; i++) {
+		acquired_vf = OSAL_CALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					  max_num_vfs, sizeof(*acquired_vf));
+		if (!acquired_vf) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to allocate an array of `struct ecore_cid_acquired_map'\n");
+			return ECORE_NOMEM;
+		}
+
+		p_mngr->acquired_vf[i] = acquired_vf;
+	}
+
 	return ECORE_SUCCESS;
 }
 
@@ -1220,6 +1312,8 @@ enum _ecore_status_t ecore_cxt_tables_alloc(struct ecore_hwfn *p_hwfn)
 
 void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 {
+	u32 i;
+
 	if (!p_hwfn->p_cxt_mngr)
 		return;
 
@@ -1229,7 +1323,11 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 #ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_MUTEX_DEALLOC(&p_hwfn->p_cxt_mngr->mutex);
 #endif
+	for (i = 0; i < MAX_CONN_TYPES; i++)
+		OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_cxt_mngr->acquired_vf[i]);
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_cxt_mngr);
+
+	p_hwfn->p_cxt_mngr = OSAL_NULL;
 }
 
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
@@ -1435,14 +1533,10 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		      bool is_pf_loading)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	struct ecore_mcp_link_state *p_link;
 	struct ecore_qm_iids iids;
 
 	OSAL_MEM_ZERO(&iids, sizeof(iids));
 	ecore_cxt_qm_iids(p_hwfn, &iids);
-
-	p_link = &ECORE_LEADING_HWFN(p_hwfn->p_dev)->mcp_info->link_output;
-
 	ecore_qm_pf_rt_init(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
 			    qm_info->max_phys_tcs_per_port,
 			    is_pf_loading,
@@ -1452,7 +1546,7 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			    qm_info->num_vf_pqs,
 			    qm_info->start_vport,
 			    qm_info->num_vports, qm_info->pf_wfq,
-			    qm_info->pf_rl, p_link->speed,
+			    qm_info->pf_rl,
 			    p_hwfn->qm_info.qm_pq_params,
 			    p_hwfn->qm_info.qm_vport_params);
 }
@@ -1601,7 +1695,7 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_ilt_client_cfg *clients;
 	struct ecore_cxt_mngr *p_mngr;
-	struct ecore_dma_mem *p_shdw;
+	struct phys_mem_desc *p_shdw;
 	u32 line, rt_offst, i;
 
 	ecore_ilt_bounds_init(p_hwfn);
@@ -1626,10 +1720,10 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
 			/** p_virt could be OSAL_NULL incase of dynamic
 			 *  allocation
 			 */
-			if (p_shdw[line].p_virt != OSAL_NULL) {
+			if (p_shdw[line].virt_addr != OSAL_NULL) {
 				SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL);
 				SET_FIELD(ilt_hw_entry, ILT_ENTRY_PHY_ADDR,
-					  (p_shdw[line].p_phys >> 12));
+					  (p_shdw[line].phys_addr >> 12));
 
 				DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
 					"Setting RT[0x%08x] from"
@@ -1637,7 +1731,7 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
 					" Physical addr: 0x%lx\n",
 					rt_offst, line, i,
 					(unsigned long)(p_shdw[line].
-							p_phys >> 12));
+							phys_addr >> 12));
 			}
 
 			STORE_RT_REG_AGG(p_hwfn, rt_offst, ilt_hw_entry);
@@ -1666,9 +1760,9 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 		     OSAL_LOG2(rounded_conn_num));
 
 	STORE_RT_REG_AGG(p_hwfn, SRC_REG_FIRSTFREE_RT_OFFSET,
-			 p_hwfn->p_cxt_mngr->first_free);
+			 p_hwfn->p_cxt_mngr->src_t2.first_free);
 	STORE_RT_REG_AGG(p_hwfn, SRC_REG_LASTFREE_RT_OFFSET,
-			 p_hwfn->p_cxt_mngr->last_free);
+			 p_hwfn->p_cxt_mngr->src_t2.last_free);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
 		   "Configured SEARCHER for 0x%08x connections\n",
 		   conn_num);
@@ -1699,18 +1793,18 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
 	u8 i;
 
 	OSAL_MEM_ZERO(&tm_iids, sizeof(tm_iids));
-	ecore_cxt_tm_iids(p_mngr, &tm_iids);
+	ecore_cxt_tm_iids(p_hwfn, p_mngr, &tm_iids);
 
 	/* @@@TBD No pre-scan for now */
 
-	/* Note: We assume consecutive VFs for a PF */
-	for (i = 0; i < p_mngr->vf_count; i++) {
 		cfg_word = 0;
 		SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_cids);
-		SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
 		SET_FIELD(cfg_word, TM_CFG_PARENT_PF, p_hwfn->rel_pf_id);
+	SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
 		SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all */
 
+	/* Note: We assume consecutive VFs for a PF */
+	for (i = 0; i < p_mngr->vf_count; i++) {
 		rt_reg = TM_REG_CONFIG_CONN_MEM_RT_OFFSET +
 		    (sizeof(cfg_word) / sizeof(u32)) *
 		    (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
@@ -1728,7 +1822,7 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
 	    (NUM_OF_VFS(p_hwfn->p_dev) + p_hwfn->rel_pf_id);
 	STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
 
-	/* enale scan */
+	/* enable scan */
 	STORE_RT_REG(p_hwfn, TM_REG_PF_ENABLE_CONN_RT_OFFSET,
 		     tm_iids.pf_cids ? 0x1 : 0x0);
 
@@ -1972,10 +2066,10 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 	line = p_info->iid / cxts_per_p;
 
 	/* Make sure context is allocated (dynamic allocation) */
-	if (!p_mngr->ilt_shadow[line].p_virt)
+	if (!p_mngr->ilt_shadow[line].virt_addr)
 		return ECORE_INVAL;
 
-	p_info->p_cxt = (u8 *)p_mngr->ilt_shadow[line].p_virt +
+	p_info->p_cxt = (u8 *)p_mngr->ilt_shadow[line].virt_addr +
 	    p_info->iid % cxts_per_p * conn_cxt_size;
 
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_ILT | ECORE_MSG_CXT),
@@ -2074,7 +2168,7 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MUTEX_ACQUIRE(&p_hwfn->p_cxt_mngr->mutex);
 
-	if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_virt)
+	if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr)
 		goto out0;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
@@ -2094,8 +2188,8 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 	}
 	OSAL_MEM_ZERO(p_virt, p_blk->real_size_in_page);
 
-	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_virt = p_virt;
-	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_phys = p_phys;
+	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr = p_virt;
+	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr = p_phys;
 	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].size =
 		p_blk->real_size_in_page;
 
@@ -2107,7 +2201,7 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL);
 	SET_FIELD(ilt_hw_entry,
 		  ILT_ENTRY_PHY_ADDR,
-		  (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_phys >> 12));
+		 (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr >> 12));
 
 /* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */
 
@@ -2115,21 +2209,6 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 			    reg_offset, sizeof(ilt_hw_entry) / sizeof(u32),
 			    OSAL_NULL /* default parameters */);
 
-	if (elem_type == ECORE_ELEM_CXT) {
-		u32 last_cid_allocated = (1 + (iid / elems_per_p)) *
-					 elems_per_p;
-
-		/* Update the relevant register in the parser */
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_ROCE_DEST_QP_MAX_PF,
-			 last_cid_allocated - 1);
-
-		if (!p_hwfn->b_rdma_enabled_in_prs) {
-			/* Enable RoCE search */
-			ecore_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 1);
-			p_hwfn->b_rdma_enabled_in_prs = true;
-		}
-	}
-
 out1:
 	ecore_ptt_release(p_hwfn, p_ptt);
 out0:
@@ -2196,16 +2275,16 @@ ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
 	}
 
 	for (i = shadow_start_line; i < shadow_end_line; i++) {
-		if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].p_virt)
+		if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr)
 			continue;
 
 		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
-				       p_hwfn->p_cxt_mngr->ilt_shadow[i].p_virt,
-				       p_hwfn->p_cxt_mngr->ilt_shadow[i].p_phys,
-				       p_hwfn->p_cxt_mngr->ilt_shadow[i].size);
+				    p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr,
+				    p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr,
+				    p_hwfn->p_cxt_mngr->ilt_shadow[i].size);
 
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].p_virt = OSAL_NULL;
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].p_phys = 0;
+		p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr = OSAL_NULL;
+		p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr = 0;
 		p_hwfn->p_cxt_mngr->ilt_shadow[i].size = 0;
 
 		/* compute absolute offset */
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index f8c955cac..55f08027d 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -22,6 +22,18 @@ enum ecore_cxt_elem_type {
 	ECORE_ELEM_TASK
 };
 
+enum ilt_clients {
+	ILT_CLI_CDUC,
+	ILT_CLI_CDUT,
+	ILT_CLI_QM,
+	ILT_CLI_TM,
+	ILT_CLI_SRC,
+	ILT_CLI_TSDM,
+	ILT_CLI_RGFS,
+	ILT_CLI_TGFS,
+	ILT_CLI_MAX
+};
+
 u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type,
 				  u32 *vf_cid);
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index b82ca49ff..ccd4383bb 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -310,8 +310,9 @@ ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			continue;
 
 		/* if no app tlv was present, don't override in FW */
-		ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false,
-					   priority, tc, type);
+		ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt,
+					  p_data->arr[DCBX_PROTOCOL_ETH].enable,
+					  priority, tc, type);
 	}
 
 	return ECORE_SUCCESS;
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2a11b4d29..2c47aba48 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -39,6 +39,10 @@
 static osal_spinlock_t qm_lock;
 static u32 qm_lock_ref_cnt;
 
+#ifndef ASIC_ONLY
+static bool b_ptt_gtt_init;
+#endif
+
 /******************** Doorbell Recovery *******************/
 /* The doorbell recovery mechanism consists of a list of entries which represent
  * doorbelling entities (l2 queues, roce sq/rq/cqs, the slowpath spq, etc). Each
@@ -963,13 +967,13 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 
 	/* Filter enable - should be done first when removing a filter */
 	if (b_write_access && !p_details->enable) {
-		addr = NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 + filter_idx * 0x4;
+		addr = NIG_REG_LLH_FUNC_FILTER_EN + filter_idx * 0x4;
 		ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
 			       p_details->enable);
 	}
 
 	/* Filter value */
-	addr = NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 + 2 * filter_idx * 0x4;
+	addr = NIG_REG_LLH_FUNC_FILTER_VALUE + 2 * filter_idx * 0x4;
 	OSAL_MEMSET(&params, 0, sizeof(params));
 
 	if (b_write_access) {
@@ -991,7 +995,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	/* Filter mode */
-	addr = NIG_REG_LLH_FUNC_FILTER_MODE_BB_K2 + filter_idx * 0x4;
+	addr = NIG_REG_LLH_FUNC_FILTER_MODE + filter_idx * 0x4;
 	if (b_write_access)
 		ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr, p_details->mode);
 	else
@@ -999,7 +1003,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 						 addr);
 
 	/* Filter protocol type */
-	addr = NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_BB_K2 + filter_idx * 0x4;
+	addr = NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE + filter_idx * 0x4;
 	if (b_write_access)
 		ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
 			       p_details->protocol_type);
@@ -1018,7 +1022,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 
 	/* Filter enable - should be done last when adding a filter */
 	if (!b_write_access || p_details->enable) {
-		addr = NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 + filter_idx * 0x4;
+		addr = NIG_REG_LLH_FUNC_FILTER_EN + filter_idx * 0x4;
 		if (b_write_access)
 			ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
 				       p_details->enable);
@@ -1031,7 +1035,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 }
 
 static enum _ecore_status_t
-ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type,
 			u32 high, u32 low)
 {
@@ -1054,7 +1058,7 @@ ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 }
 
 static enum _ecore_status_t
-ecore_llh_remove_filter_e4(struct ecore_hwfn *p_hwfn,
+ecore_llh_remove_filter(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx)
 {
 	struct ecore_llh_filter_details filter_details;
@@ -1066,24 +1070,6 @@ ecore_llh_remove_filter_e4(struct ecore_hwfn *p_hwfn,
 				       true /* write access */);
 }
 
-static enum _ecore_status_t
-ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		     u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type, u32 high,
-		     u32 low)
-{
-	return ecore_llh_add_filter_e4(p_hwfn, p_ptt, abs_ppfid,
-				       filter_idx, filter_prot_type,
-				       high, low);
-}
-
-static enum _ecore_status_t
-ecore_llh_remove_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			u8 abs_ppfid, u8 filter_idx)
-{
-	return ecore_llh_remove_filter_e4(p_hwfn, p_ptt, abs_ppfid,
-					  filter_idx);
-}
-
 enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
 					      u8 mac_addr[ETH_ALEN])
 {
@@ -1424,7 +1410,7 @@ void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid)
 
 	for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
 	     filter_idx++) {
-		rc = ecore_llh_remove_filter_e4(p_hwfn, p_ptt,
+		rc = ecore_llh_remove_filter(p_hwfn, p_ptt,
 						abs_ppfid, filter_idx);
 		if (rc != ECORE_SUCCESS)
 			goto out;
@@ -1464,18 +1450,22 @@ enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t
-ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			u8 ppfid)
+enum _ecore_status_t
+ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
 {
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
 	struct ecore_llh_filter_details filter_details;
 	u8 abs_ppfid, filter_idx;
 	u32 addr;
 	enum _ecore_status_t rc;
 
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
 	rc = ecore_abs_ppfid(p_hwfn->p_dev, ppfid, &abs_ppfid);
 	if (rc != ECORE_SUCCESS)
-		return rc;
+		goto out;
 
 	addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4;
 	DP_NOTICE(p_hwfn, false,
@@ -1490,7 +1480,7 @@ ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 					      filter_idx, &filter_details,
 					      false /* read access */);
 		if (rc != ECORE_SUCCESS)
-			return rc;
+			goto out;
 
 		DP_NOTICE(p_hwfn, false,
 			  "filter %2hhd: enable %d, value 0x%016lx, mode %d, protocol_type 0x%x, hdr_sel 0x%x\n",
@@ -1500,20 +1490,8 @@ ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			  filter_details.protocol_type, filter_details.hdr_sel);
 	}
 
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
-	enum _ecore_status_t rc;
-
-	if (p_ptt == OSAL_NULL)
-		return ECORE_AGAIN;
-
-	rc = ecore_llh_dump_ppfid_e4(p_hwfn, p_ptt, ppfid);
 
+out:
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -1851,6 +1829,7 @@ static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
 {
 	/* Initialize qm port parameters */
 	u8 i, active_phys_tcs, num_ports = p_hwfn->p_dev->num_ports_in_engine;
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 
 	/* indicate how ooo and high pri traffic is dealt with */
 	active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ?
@@ -1859,11 +1838,14 @@ static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
 	for (i = 0; i < num_ports; i++) {
 		struct init_qm_port_params *p_qm_port =
 			&p_hwfn->qm_info.qm_port_params[i];
+		u16 pbf_max_cmd_lines;
 
 		p_qm_port->active = 1;
 		p_qm_port->active_phys_tcs = active_phys_tcs;
-		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
-		p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
+		pbf_max_cmd_lines = (u16)NUM_OF_PBF_CMD_LINES(p_dev);
+		p_qm_port->num_pbf_cmd_lines = pbf_max_cmd_lines / num_ports;
+		p_qm_port->num_btb_blocks =
+			NUM_OF_BTB_BLOCKS(p_dev) / num_ports;
 	}
 }
 
@@ -1938,6 +1920,10 @@ static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn,
 		(pq_init_flags & PQ_INIT_PF_RL ||
 		 pq_init_flags & PQ_INIT_VF_RL);
 
+	/* The "rl_id" is set as the "vport_id" */
+	qm_info->qm_pq_params[pq_idx].rl_id =
+		qm_info->qm_pq_params[pq_idx].vport_id;
+
 	/* qm params accounting */
 	qm_info->num_pqs++;
 	if (!(pq_init_flags & PQ_INIT_SHARE_VPORT))
@@ -2247,10 +2233,10 @@ static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
 	/* pq table */
 	for (i = 0; i < qm_info->num_pqs; i++) {
 		pq = &qm_info->qm_pq_params[i];
-		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-			   "pq idx %d, port %d, vport_id %d, tc %d, wrr_grp %d, rl_valid %d\n",
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "pq idx %d, port %d, vport_id %d, tc %d, wrr_grp %d, rl_valid %d, rl_id %d\n",
 			   qm_info->start_pq + i, pq->port_id, pq->vport_id,
-			   pq->tc_id, pq->wrr_group, pq->rl_valid);
+			   pq->tc_id, pq->wrr_group, pq->rl_valid, pq->rl_id);
 	}
 }
 
@@ -2531,6 +2517,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 				  "Failed to allocate dbg user info structure\n");
 			goto alloc_err;
 		}
+
+		rc = OSAL_DBG_ALLOC_USER_DATA(p_hwfn, &p_hwfn->dbg_user_info);
+		if (rc) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to allocate dbg user info structure\n");
+			goto alloc_err;
+		}
 	} /* hwfn loop */
 
 	rc = ecore_llh_alloc(p_dev);
@@ -2652,7 +2645,7 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 {
 	int hw_mode = 0;
 
-	if (ECORE_IS_BB_B0(p_hwfn->p_dev)) {
+	if (ECORE_IS_BB(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_BB;
 	} else if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_K2;
@@ -2712,50 +2705,88 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 }
 
 #ifndef ASIC_ONLY
-/* MFW-replacement initializations for non-ASIC */
-static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
+/* MFW-replacement initializations for emulation */
+static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev,
 					       struct ecore_ptt *p_ptt)
 {
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	u32 pl_hv = 1;
-	int i;
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	u32 pl_hv, wr_mbs;
+	int i, pos;
+	u16 ctrl = 0;
 
-	if (CHIP_REV_IS_EMUL(p_dev)) {
-		if (ECORE_IS_AH(p_dev))
-			pl_hv |= 0x600;
+	if (!CHIP_REV_IS_EMUL(p_dev)) {
+		DP_NOTICE(p_dev, false,
+			  "ecore_hw_init_chip() shouldn't be called in a non-emulation environment\n");
+		return ECORE_INVAL;
 	}
 
+	pl_hv = ECORE_IS_BB(p_dev) ? 0x1 : 0x401;
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
 	if (ECORE_IS_AH(p_dev))
 		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2, 0x3ffffff);
 
-	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
-	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
-	if (!CHIP_REV_IS_EMUL(p_dev) || ECORE_IS_BB(p_dev))
+	/* Initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
+	if (ECORE_IS_BB(p_dev))
 		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4);
 
-	if (CHIP_REV_IS_EMUL(p_dev)) {
-		if (ECORE_IS_AH(p_dev)) {
-			/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
-				 (p_dev->num_ports_in_engine >> 1));
+	if (ECORE_IS_AH(p_dev)) {
+		/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
+		ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
+			 p_dev->num_ports_in_engine >> 1);
 
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
-				 p_dev->num_ports_in_engine == 4 ? 0 : 3);
-		}
+		ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
+			 p_dev->num_ports_in_engine == 4 ? 0 : 3);
 	}
 
-	/* Poll on RBC */
+	/* Signal the PSWRQ block to start initializing internal memories */
 	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_RBC_DONE, 1);
 	for (i = 0; i < 100; i++) {
 		OSAL_UDELAY(50);
 		if (ecore_rd(p_hwfn, p_ptt, PSWRQ2_REG_CFG_DONE) == 1)
 			break;
 	}
-	if (i == 100)
+	if (i == 100) {
 		DP_NOTICE(p_hwfn, true,
 			  "RBC done failed to complete in PSWRQ2\n");
+		return ECORE_TIMEOUT;
+	}
+
+	/* Indicate PSWRQ to initialize steering tag table with zeros */
+	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_RESET_STT, 1);
+	for (i = 0; i < 100; i++) {
+		OSAL_UDELAY(50);
+		if (!ecore_rd(p_hwfn, p_ptt, PSWRQ2_REG_RESET_STT))
+			break;
+	}
+	if (i == 100) {
+		DP_NOTICE(p_hwfn, true,
+			  "Steering tag table initialization failed to complete in PSWRQ2\n");
+		return ECORE_TIMEOUT;
+	}
+
+	/* Clear a possible PSWRQ2 STT parity which might have been generated by
+	 * a previous MSI-X read.
+	 */
+	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_PRTY_STS_WR_H_0, 0x8);
+
+	/* Configure PSWRQ2_REG_WR_MBS0 according to the MaxPayloadSize field in
+	 * the PCI configuration space. The value is common for all PFs, so it
+	 * is okay to do it according to the first loading PF.
+	 */
+	pos = OSAL_PCI_FIND_CAPABILITY(p_dev, PCI_CAP_ID_EXP);
+	if (!pos) {
+		DP_NOTICE(p_dev, true,
+			  "Failed to find the PCI Express Capability structure in the PCI config space\n");
+		return ECORE_IO;
+	}
+
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + PCI_EXP_DEVCTL, &ctrl);
+	wr_mbs = (ctrl & PCI_EXP_DEVCTL_PAYLOAD) >> 5;
+	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_WR_MBS0, wr_mbs);
+
+	/* Configure the PGLUE_B to discard mode */
+	ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_DISCARD_NBLOCK, 0x3f);
 
 	return ECORE_SUCCESS;
 }
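
For reference, a minimal sketch of the MaxPayloadSize decoding assumed above (standard PCIe Device Control encoding; mps_code_to_bytes() is a hypothetical helper, not part of this patch):

/* The Device Control register keeps MaxPayloadSize in bits [7:5]
 * (PCI_EXP_DEVCTL_PAYLOAD), encoded as a power-of-two step:
 * 0 -> 128B, 1 -> 256B, ..., 5 -> 4096B. The 3-bit code extracted by
 * "(ctrl & PCI_EXP_DEVCTL_PAYLOAD) >> 5" is what lands in
 * PSWRQ2_REG_WR_MBS0.
 */
static u32 mps_code_to_bytes(u16 devctl)
{
	u32 wr_mbs = (devctl & PCI_EXP_DEVCTL_PAYLOAD) >> 5;

	return 128U << wr_mbs; /* 128 ... 4096 bytes */
}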
@@ -2768,7 +2799,8 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 static void ecore_init_cau_rt_data(struct ecore_dev *p_dev)
 {
 	u32 offset = CAU_REG_SB_VAR_MEMORY_RT_OFFSET;
-	int i, igu_sb_id;
+	u32 igu_sb_id;
+	int i;
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -2866,8 +2898,8 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	ecore_gtt_init(p_hwfn);
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_dev)) {
-		rc = ecore_hw_init_chip(p_hwfn, p_ptt);
+	if (CHIP_REV_IS_EMUL(p_dev) && IS_LEAD_HWFN(p_hwfn)) {
+		rc = ecore_hw_init_chip(p_dev, p_ptt);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 	}
@@ -2885,7 +2917,8 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 				qm_info->max_phys_tcs_per_port,
 				qm_info->pf_rl_en, qm_info->pf_wfq_en,
 				qm_info->vport_rl_en, qm_info->vport_wfq_en,
-				qm_info->qm_port_params);
+				qm_info->qm_port_params,
+				OSAL_NULL /* global RLs are not configured */);
 
 	ecore_cxt_hw_init_common(p_hwfn);
 
@@ -2906,7 +2939,7 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 		/* Workaround clears ROCE search for all functions to prevent
 		 * involving non initialized function in processing ROCE packet.
 		 */
-		num_pfs = NUM_OF_ENG_PFS(p_dev);
+		num_pfs = (u16)NUM_OF_ENG_PFS(p_dev);
 		for (pf_id = 0; pf_id < num_pfs; pf_id++) {
 			ecore_fid_pretend(p_hwfn, p_ptt, pf_id);
 			ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE, 0x0);
@@ -2922,7 +2955,7 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	 * This is not done inside the init tool since it currently can't
 	 * perform a pretending to VFs.
 	 */
-	max_num_vfs = ECORE_IS_AH(p_dev) ? MAX_NUM_VFS_K2 : MAX_NUM_VFS_BB;
+	max_num_vfs = (u8)NUM_OF_VFS(p_dev);
 	for (vf_id = 0; vf_id < max_num_vfs; vf_id++) {
 		concrete_fid = ecore_vfid_to_concrete(p_hwfn, vf_id);
 		ecore_fid_pretend(p_hwfn, p_ptt, (u16)concrete_fid);
@@ -2982,8 +3015,6 @@ static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 {
 	u8 loopback = 0, port = p_hwfn->port_id * 2;
 
-	DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port);
-
 	/* XLPORT MAC MODE *//* 0 Quad, 4 Single... */
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
 			 port);
@@ -3113,6 +3144,25 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
+static u32 ecore_hw_norm_region_conn(struct ecore_hwfn *p_hwfn)
+{
+	u32 norm_region_conn;
+
+	/* The order of CIDs allocation is according to the order of
+	 * 'enum protocol_type'. Therefore, the number of CIDs for the normal
+	 * region is calculated based on the CORE CIDs, in case of non-ETH
+	 * personality, and otherwise - based on the ETH CIDs.
+	 */
+	norm_region_conn =
+		ecore_cxt_get_proto_cid_start(p_hwfn, PROTOCOLID_CORE) +
+		ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_CORE,
+					      OSAL_NULL) +
+		ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
+					      OSAL_NULL);
+
+	return norm_region_conn;
+}
+
 static enum _ecore_status_t
 ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus)
@@ -3183,8 +3233,8 @@ static enum _ecore_status_t
 ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt)
 {
+	u32 norm_region_conn, min_addr_reg1;
 	u32 pwm_regsize, norm_regsize;
-	u32 non_pwm_conn, min_addr_reg1;
 	u32 db_bar_size, n_cpus;
 	u32 roce_edpm_mode;
 	u32 pf_dems_shift;
@@ -3209,11 +3259,8 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	 * connections. The DORQ_REG_PF_MIN_ADDR_REG1 register is
 	 * in units of 4,096 bytes.
 	 */
-	non_pwm_conn = ecore_cxt_get_proto_cid_start(p_hwfn, PROTOCOLID_CORE) +
-	    ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_CORE,
-					  OSAL_NULL) +
-	    ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, OSAL_NULL);
-	norm_regsize = ROUNDUP(ECORE_PF_DEMS_SIZE * non_pwm_conn,
+	norm_region_conn = ecore_hw_norm_region_conn(p_hwfn);
+	norm_regsize = ROUNDUP(ECORE_PF_DEMS_SIZE * norm_region_conn,
 			       OSAL_PAGE_SIZE);
 	min_addr_reg1 = norm_regsize / 4096;
 	pwm_regsize = db_bar_size - norm_regsize;
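
A hedged worked example of the split computed above, with made-up numbers (ECORE_PF_DEMS_SIZE, ROUNDUP() and OSAL_PAGE_SIZE are the ones already used by this code):

/* Illustrative only: with ECORE_PF_DEMS_SIZE = 4, 20000 normal-region
 * connections and a 4 KB page size:
 *   norm_regsize  = ROUNDUP(4 * 20000, 4096) = 81920 bytes
 *   min_addr_reg1 = 81920 / 4096             = 20 (4 KB units)
 *   pwm_regsize   = db_bar_size - 81920      (left for PWM/EDPM doorbells)
 */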
@@ -3292,10 +3339,11 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt,
 					       int hw_mode)
 {
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	enum _ecore_status_t rc	= ECORE_SUCCESS;
 
 	/* In CMT the gate should be cleared by the 2nd hwfn */
-	if (!ECORE_IS_CMT(p_hwfn->p_dev) || !IS_LEAD_HWFN(p_hwfn))
+	if (!ECORE_IS_CMT(p_dev) || !IS_LEAD_HWFN(p_hwfn))
 		STORE_RT_REG(p_hwfn, NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET, 0);
 
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
@@ -3306,16 +3354,11 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_WRITE_PAD_ENABLE, 0);
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
-		return ECORE_SUCCESS;
+	if (CHIP_REV_IS_FPGA(p_dev) && ECORE_IS_BB(p_dev))
+		ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
 
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		if (ECORE_IS_AH(p_hwfn->p_dev))
-			return ECORE_SUCCESS;
-		else if (ECORE_IS_BB(p_hwfn->p_dev))
-			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
-	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		if (ECORE_IS_CMT(p_hwfn->p_dev)) {
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_CMT(p_dev)) {
 			/* Activate OPTE in CMT */
 			u32 val;
 
@@ -3334,13 +3377,24 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 				 0x55555555);
 		}
 
+		/* Set the TAGMAC default function on the port if needed.
+		 * The ppfid should be set in the vector, except in BB which has
+		 * a bug in the LLH where the ppfid is actually engine based.
+		 */
+		if (OSAL_TEST_BIT(ECORE_MF_NEED_DEF_PF, &p_dev->mf_bits)) {
+			u8 pf_id = p_hwfn->rel_pf_id;
+
+			if (!ECORE_IS_BB(p_dev))
+				pf_id /= p_dev->num_ports_in_engine;
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR, 1 << pf_id);
+		}
+
 		ecore_emul_link_init(p_hwfn, p_ptt);
-	} else {
-		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
 	}
 #endif
 
-	return rc;
+	return ECORE_SUCCESS;
 }
 
 static enum _ecore_status_t
@@ -3755,9 +3809,9 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			goto load_err;
 
 		/* Clear the pglue_b was_error indication.
-		 * In E4 it must be done after the BME and the internal
-		 * FID_enable for the PF are set, since VDMs may cause the
-		 * indication to be set again.
+		 * It must be done after the BME and the internal FID_enable for
+		 * the PF are set, since VDMs may cause the indication to be set
+		 * again.
 		 */
 		ecore_pglueb_clear_err(p_hwfn, p_hwfn->p_main_ptt);
 
@@ -4361,11 +4415,41 @@ __ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+#define RDMA_NUM_STATISTIC_COUNTERS_K2                  MAX_NUM_VPORTS_K2
+#define RDMA_NUM_STATISTIC_COUNTERS_BB                  MAX_NUM_VPORTS_BB
+
+static u32 ecore_hsi_def_val[][MAX_CHIP_IDS] = {
+	{MAX_NUM_VFS_BB, MAX_NUM_VFS_K2},
+	{MAX_NUM_L2_QUEUES_BB, MAX_NUM_L2_QUEUES_K2},
+	{MAX_NUM_PORTS_BB, MAX_NUM_PORTS_K2},
+	{MAX_SB_PER_PATH_BB, MAX_SB_PER_PATH_K2, },
+	{MAX_NUM_PFS_BB, MAX_NUM_PFS_K2},
+	{MAX_NUM_VPORTS_BB, MAX_NUM_VPORTS_K2},
+	{ETH_RSS_ENGINE_NUM_BB, ETH_RSS_ENGINE_NUM_K2},
+	{MAX_QM_TX_QUEUES_BB, MAX_QM_TX_QUEUES_K2},
+	{PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2},
+	{RDMA_NUM_STATISTIC_COUNTERS_BB, RDMA_NUM_STATISTIC_COUNTERS_K2},
+	{MAX_QM_GLOBAL_RLS, MAX_QM_GLOBAL_RLS},
+	{PBF_MAX_CMD_LINES, PBF_MAX_CMD_LINES},
+	{BTB_MAX_BLOCKS_BB, BTB_MAX_BLOCKS_K2},
+};
+
+u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev, enum ecore_hsi_def_type type)
+{
+	enum chip_ids chip_id = ECORE_IS_BB(p_dev) ? CHIP_BB : CHIP_K2;
+
+	if (type >= ECORE_NUM_HSI_DEFS) {
+		DP_ERR(p_dev, "Unexpected HSI definition type [%d]\n", type);
+		return 0;
+	}
+
+	return ecore_hsi_def_val[type][chip_id];
+}
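
The NUM_OF_*() helpers used by the hunks below are presumably thin wrappers around ecore_get_hsi_def_val(); a sketch of the assumed definitions (the enum names are assumptions, the real ones live in ecore.h):

/* Assumed wrappers (sketch, not quoted from this patch) */
#define NUM_OF_VFS(dev) \
	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_VFS)
#define NUM_OF_L2_QUEUES(dev) \
	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_L2_QUEUES)
#define NUM_OF_QM_TX_QUEUES(dev) \
	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_QM_TX_QUEUES)
#define NUM_OF_PXP_ILT_RECORDS(dev) \
	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_PXP_ILT_RECORDS)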
+
 static enum _ecore_status_t
 ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt)
 {
-	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
 	u32 resc_max_val, mcp_resp;
 	u8 res_id;
 	enum _ecore_status_t rc;
@@ -4407,27 +4491,24 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 					    u32 *p_resc_num, u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
-	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 
 	switch (res_id) {
 	case ECORE_L2_QUEUE:
-		*p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
-				 MAX_NUM_L2_QUEUES_BB) / num_funcs;
+		*p_resc_num = NUM_OF_L2_QUEUES(p_dev) / num_funcs;
 		break;
 	case ECORE_VPORT:
-		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
-				 MAX_NUM_VPORTS_BB) / num_funcs;
+		*p_resc_num = NUM_OF_VPORTS(p_dev) / num_funcs;
 		break;
 	case ECORE_RSS_ENG:
-		*p_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
-				 ETH_RSS_ENGINE_NUM_BB) / num_funcs;
+		*p_resc_num = NUM_OF_RSS_ENGINES(p_dev) / num_funcs;
 		break;
 	case ECORE_PQ:
-		*p_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
-				 MAX_QM_TX_QUEUES_BB) / num_funcs;
+		*p_resc_num = NUM_OF_QM_TX_QUEUES(p_dev) / num_funcs;
+		*p_resc_num &= ~0x7; /* The granularity of the PQs is 8 */
 		break;
 	case ECORE_RL:
-		*p_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
+		*p_resc_num = NUM_OF_QM_GLOBAL_RLS(p_dev) / num_funcs;
 		break;
 	case ECORE_MAC:
 	case ECORE_VLAN:
@@ -4435,11 +4516,10 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 		*p_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
 		break;
 	case ECORE_ILT:
-		*p_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
-				 PXP_NUM_ILT_RECORDS_BB) / num_funcs;
+		*p_resc_num = NUM_OF_PXP_ILT_RECORDS(p_dev) / num_funcs;
 		break;
 	case ECORE_LL2_QUEUE:
-		*p_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
+		*p_resc_num = MAX_NUM_LL2_RX_RAM_QUEUES / num_funcs;
 		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
@@ -4448,9 +4528,7 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 		*p_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
 		break;
 	case ECORE_RDMA_STATS_QUEUE:
-		/* @DPDK */
-		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
-				 MAX_NUM_VPORTS_BB) / num_funcs;
+		*p_resc_num = NUM_OF_RDMA_STATISTIC_COUNTERS(p_dev) / num_funcs;
 		break;
 	case ECORE_BDQ:
 		/* @DPDK */
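
A hedged worked example of the per-function split used above (totals are made up):

/* Illustrative only: a device reporting 448 QM Tx queues shared by
 * 8 functions gives each PF 448 / 8 = 56 PQs, and 56 & ~0x7 = 56, so
 * the 8-PQ granularity mask changes nothing. With 6 functions the raw
 * share is 74, which the mask trims to 72.
 */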
@@ -4588,7 +4666,7 @@ static enum _ecore_status_t ecore_hw_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
 	/* 4-ports mode has limitations that should be enforced:
 	 * - BB: the MFW can access only PPFIDs which their corresponding PFIDs
 	 *       belong to this certain port.
-	 * - AH/E5: only 4 PPFIDs per port are available.
+	 * - AH: only 4 PPFIDs per port are available.
 	 */
 	if (ecore_device_num_ports(p_dev) == 4) {
 		u8 mask;
@@ -4627,7 +4705,8 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_resc_unlock_params resc_unlock_params;
 	struct ecore_resc_lock_params resc_lock_params;
-	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u32 max_ilt_lines;
 	u8 res_id;
 	enum _ecore_status_t rc;
 #ifndef ASIC_ONLY
@@ -4703,9 +4782,9 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	}
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+	if (CHIP_REV_IS_EMUL(p_dev)) {
 		/* Reduced build contains less PQs */
-		if (!(p_hwfn->p_dev->b_is_emul_full)) {
+		if (!(p_dev->b_is_emul_full)) {
 			resc_num[ECORE_PQ] = 32;
 			resc_start[ECORE_PQ] = resc_num[ECORE_PQ] *
 			    p_hwfn->enabled_func_idx;
@@ -4713,26 +4792,27 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 
 		/* For AH emulation, since we have a possible maximal number of
 		 * 16 enabled PFs, in case there are not enough ILT lines -
-		 * allocate only first PF as RoCE and have all the other ETH
-		 * only with less ILT lines.
+		 * allocate only first PF as RoCE and have all the other as
+		 * ETH-only with less ILT lines.
+		 * In case we increase the number of ILT lines for PF0, we need
+		 * also to correct the start value for PF1-15.
 		 */
-		if (!p_hwfn->rel_pf_id && p_hwfn->p_dev->b_is_emul_full)
-			resc_num[ECORE_ILT] = OSAL_MAX_T(u32,
-							 resc_num[ECORE_ILT],
+		if (ECORE_IS_AH(p_dev) && p_dev->b_is_emul_full) {
+			if (!p_hwfn->rel_pf_id) {
+				resc_num[ECORE_ILT] =
+					OSAL_MAX_T(u32, resc_num[ECORE_ILT],
 							 roce_min_ilt_lines);
+			} else if (resc_num[ECORE_ILT] < roce_min_ilt_lines) {
+				resc_start[ECORE_ILT] += roce_min_ilt_lines -
+							 resc_num[ECORE_ILT];
+			}
+		}
 	}
-
-	/* Correct the common ILT calculation if PF0 has more */
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) &&
-	    p_hwfn->p_dev->b_is_emul_full &&
-	    p_hwfn->rel_pf_id && resc_num[ECORE_ILT] < roce_min_ilt_lines)
-		resc_start[ECORE_ILT] += roce_min_ilt_lines -
-		    resc_num[ECORE_ILT];
 #endif
 
 	/* Sanity for ILT */
-	if ((b_ah && (RESC_END(p_hwfn, ECORE_ILT) > PXP_NUM_ILT_RECORDS_K2)) ||
-	    (!b_ah && (RESC_END(p_hwfn, ECORE_ILT) > PXP_NUM_ILT_RECORDS_BB))) {
+	max_ilt_lines = NUM_OF_PXP_ILT_RECORDS(p_dev);
+	if (RESC_END(p_hwfn, ECORE_ILT) > max_ilt_lines) {
 		DP_NOTICE(p_hwfn, true,
 			  "Can't assign ILT pages [%08x,...,%08x]\n",
 			  RESC_START(p_hwfn, ECORE_ILT), RESC_END(p_hwfn,
@@ -4764,6 +4844,28 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+#ifndef ASIC_ONLY
+static enum _ecore_status_t
+ecore_emul_hw_get_nvm_info(struct ecore_hwfn *p_hwfn)
+{
+	if (IS_LEAD_HWFN(p_hwfn)) {
+		struct ecore_dev *p_dev = p_hwfn->p_dev;
+
+		/* The MF mode on emulation is either default or NPAR 1.0 */
+		p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS |
+				 1 << ECORE_MF_LLH_PROTO_CLSS |
+				 1 << ECORE_MF_LL2_NON_UNICAST;
+		if (p_hwfn->num_funcs_on_port > 1)
+			p_dev->mf_bits |= 1 << ECORE_MF_INTER_PF_SWITCH |
+					  1 << ECORE_MF_DISABLE_ARFS;
+		else
+			p_dev->mf_bits |= 1 << ECORE_MF_NEED_DEF_PF;
+	}
+
+	return ECORE_SUCCESS;
+}
+#endif
+
 static enum _ecore_status_t
 ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		      struct ecore_ptt *p_ptt,
@@ -4775,6 +4877,11 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_link_params *link;
 	enum _ecore_status_t rc;
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
+		return ecore_emul_hw_get_nvm_info(p_hwfn);
+#endif
+
 	/* Read global nvm_cfg address */
 	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
 
@@ -5122,49 +5229,17 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 		   p_hwfn->enabled_func_idx, p_hwfn->num_funcs_on_engine);
 }
 
-static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
-				      struct ecore_ptt *p_ptt)
-{
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	u32 port_mode;
-
 #ifndef ASIC_ONLY
-	/* Read the port mode */
-	if (CHIP_REV_IS_FPGA(p_dev))
-		port_mode = 4;
-	else if (CHIP_REV_IS_EMUL(p_dev) && ECORE_IS_CMT(p_dev))
-		/* In CMT on emulation, assume 1 port */
-		port_mode = 1;
-	else
-#endif
-	port_mode = ecore_rd(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB);
-
-	if (port_mode < 3) {
-		p_dev->num_ports_in_engine = 1;
-	} else if (port_mode <= 5) {
-		p_dev->num_ports_in_engine = 2;
-	} else {
-		DP_NOTICE(p_hwfn, true, "PORT MODE: %d not supported\n",
-			  p_dev->num_ports_in_engine);
-
-		/* Default num_ports_in_engine to something */
-		p_dev->num_ports_in_engine = 1;
-	}
-}
-
-static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
+static void ecore_emul_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	u32 port;
-	int i;
+	u32 eco_reserved;
 
-	p_dev->num_ports_in_engine = 0;
+	/* MISCS_REG_ECO_RESERVED[15:12]: num of ports in an engine */
+	eco_reserved = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
 
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_dev)) {
-		port = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
-		switch ((port & 0xf000) >> 12) {
+	switch ((eco_reserved & 0xf000) >> 12) {
 		case 1:
 			p_dev->num_ports_in_engine = 1;
 			break;
@@ -5176,49 +5251,43 @@ static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
 			break;
 		default:
 			DP_NOTICE(p_hwfn, false,
-				  "Unknown port mode in ECO_RESERVED %08x\n",
-				  port);
-		}
-	} else
-#endif
-		for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
-			port = ecore_rd(p_hwfn, p_ptt,
-					CNIG_REG_NIG_PORT0_CONF_K2 +
-					(i * 4));
-			if (port & 1)
-				p_dev->num_ports_in_engine++;
+			  "Emulation: Unknown port mode [ECO_RESERVED 0x%08x]\n",
+			  eco_reserved);
+		p_dev->num_ports_in_engine = 2; /* Default to something */
+		break;
 		}
 
-	if (!p_dev->num_ports_in_engine) {
-		DP_NOTICE(p_hwfn, true, "All NIG ports are inactive\n");
-
-		/* Default num_ports_in_engine to something */
-		p_dev->num_ports_in_engine = 1;
-	}
+	p_dev->num_ports = p_dev->num_ports_in_engine *
+			   ecore_device_num_engines(p_dev);
 }
+#endif
 
+/* Determine the number of ports of the device and per engine */
 static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u32 addr, global_offsize, global_addr;
 
-	/* Determine the number of ports per engine */
-	if (ECORE_IS_BB(p_dev))
-		ecore_hw_info_port_num_bb(p_hwfn, p_ptt);
-	else
-		ecore_hw_info_port_num_ah_e5(p_hwfn, p_ptt);
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_TEDIBEAR(p_dev)) {
+		p_dev->num_ports_in_engine = 1;
+		p_dev->num_ports = 2;
+		return;
+	}
+
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		ecore_emul_hw_info_port_num(p_hwfn, p_ptt);
+		return;
+	}
+#endif
 
-	/* Get the total number of ports of the device */
-	if (ECORE_IS_CMT(p_dev)) {
 		/* In CMT there is always only one port */
+	if (ECORE_IS_CMT(p_dev)) {
+		p_dev->num_ports_in_engine = 1;
 		p_dev->num_ports = 1;
-#ifndef ASIC_ONLY
-	} else if (CHIP_REV_IS_EMUL(p_dev) || CHIP_REV_IS_TEDIBEAR(p_dev)) {
-		p_dev->num_ports = p_dev->num_ports_in_engine *
-				   ecore_device_num_engines(p_dev);
-#endif
-	} else {
-		u32 addr, global_offsize, global_addr;
+		return;
+	}
 
 		addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
 					    PUBLIC_GLOBAL);
@@ -5226,7 +5295,9 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 		global_addr = SECTION_ADDR(global_offsize, 0);
 		addr = global_addr + OFFSETOF(struct public_global, max_ports);
 		p_dev->num_ports = (u8)ecore_rd(p_hwfn, p_ptt, addr);
-	}
+
+	p_dev->num_ports_in_engine = p_dev->num_ports >>
+				     (ecore_device_num_engines(p_dev) - 1);
 }
 
 static void ecore_mcp_get_eee_caps(struct ecore_hwfn *p_hwfn,
@@ -5280,15 +5351,9 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	ecore_mcp_get_capabilities(p_hwfn, p_ptt);
 
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev)) {
-#endif
 	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt, p_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
-#ifndef ASIC_ONLY
-	}
-#endif
 
 	rc = ecore_int_igu_read_cam(p_hwfn, p_ptt);
 	if (rc != ECORE_SUCCESS) {
@@ -5332,16 +5397,15 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		protocol = p_hwfn->mcp_info->func_info.protocol;
 		p_hwfn->hw_info.personality = protocol;
 	}
-
 #ifndef ASIC_ONLY
-	/* To overcome ILT lack for emulation, until at least until we'll have
-	 * a definite answer from system about it, allow only PF0 to be RoCE.
+	else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+		/* AH emulation:
+		 * Allow only PF0 to be RoCE to overcome a lack of ILT lines.
 	 */
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) {
-		if (!p_hwfn->rel_pf_id)
-			p_hwfn->hw_info.personality = ECORE_PCI_ETH_ROCE;
-		else
+		if (ECORE_IS_AH(p_hwfn->p_dev) && p_hwfn->rel_pf_id)
 			p_hwfn->hw_info.personality = ECORE_PCI_ETH;
+		else
+			p_hwfn->hw_info.personality = ECORE_PCI_ETH_ROCE;
 	}
 #endif
 
@@ -5379,6 +5443,18 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return rc;
 }
 
+#define ECORE_MAX_DEVICE_NAME_LEN (8)
+
+void ecore_get_dev_name(struct ecore_dev *p_dev, u8 *name, u8 max_chars)
+{
+	u8 n;
+
+	n = OSAL_MIN_T(u8, max_chars, ECORE_MAX_DEVICE_NAME_LEN);
+	OSAL_SNPRINTF((char *)name, n, "%s %c%d",
+		      ECORE_IS_BB(p_dev) ? "BB" : "AH",
+		      'A' + p_dev->chip_rev, (int)p_dev->chip_metal);
+}
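
A hypothetical usage sketch for the new helper, e.g. for the adapter-info print added later in the series (example_log_dev_name() is not part of the patch):

static void example_log_dev_name(struct ecore_dev *p_dev)
{
	u8 name[ECORE_MAX_DEVICE_NAME_LEN];

	ecore_get_dev_name(p_dev, name, sizeof(name));

	/* Yields e.g. "BB B0" or "AH A0" */
	DP_NOTICE(p_dev, false, "Device: %s\n", (char *)name);
}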
+
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
@@ -5423,9 +5499,9 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 	}
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_dev)) {
+	if (CHIP_REV_IS_EMUL(p_dev) && ECORE_IS_BB(p_dev)) {
 		/* For some reason we have problems with this register
-		 * in B0 emulation; Simply assume no CMT
+		 * in BB B0 emulation; Simply assume no CMT
 		 */
 		DP_NOTICE(p_dev->hwfns, false,
 			  "device on emul - assume no CMT\n");
@@ -5456,14 +5532,17 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 
 	if (CHIP_REV_IS_EMUL(p_dev)) {
 		tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
-		if (tmp & (1 << 29)) {
-			DP_NOTICE(p_hwfn, false,
-				  "Emulation: Running on a FULL build\n");
-			p_dev->b_is_emul_full = true;
-		} else {
+
+		/* MISCS_REG_ECO_RESERVED[29]: full/reduced emulation build */
+		p_dev->b_is_emul_full = !!(tmp & (1 << 29));
+
+		/* MISCS_REG_ECO_RESERVED[28]: emulation build w/ or w/o MAC */
+		p_dev->b_is_emul_mac = !!(tmp & (1 << 28));
+
 			DP_NOTICE(p_hwfn, false,
-				  "Emulation: Running on a REDUCED build\n");
-		}
+			  "Emulation: Running on a %s build %s MAC\n",
+			  p_dev->b_is_emul_full ? "full" : "reduced",
+			  p_dev->b_is_emul_mac ? "with" : "without");
 	}
 #endif
 
@@ -5533,7 +5612,7 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 	p_hwfn->p_main_ptt = ecore_get_reserved_ptt(p_hwfn, RESERVED_PTT_MAIN);
 
 	/* First hwfn learns basic information, e.g., number of hwfns */
-	if (!p_hwfn->my_id) {
+	if (IS_LEAD_HWFN(p_hwfn)) {
 		rc = ecore_get_dev_info(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS) {
 			if (p_params->b_relaxed_probe)
@@ -5543,6 +5622,33 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 		}
 	}
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && !b_ptt_gtt_init) {
+		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
+		u32 val;
+
+		/* Initialize PTT/GTT (done by MFW on ASIC) */
+		ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_START_INIT_PTT_GTT, 1);
+		OSAL_MSLEEP(10);
+		ecore_ptt_invalidate(p_hwfn);
+		val = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_INIT_DONE_PTT_GTT);
+		if (val != 1) {
+			DP_ERR(p_hwfn,
+			       "PTT and GTT init in PGLUE_B didn't complete\n");
+			goto err1;
+		}
+
+		/* Clear a possible PGLUE_B parity from a previous GRC access */
+		ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_PRTY_STS_WR_H_0, 0x380);
+
+		b_ptt_gtt_init = true;
+	}
+#endif
+
+	/* Store the precompiled init data ptrs */
+	if (IS_LEAD_HWFN(p_hwfn))
+		ecore_init_iro_array(p_hwfn->p_dev);
+
 	ecore_hw_hwfn_prepare(p_hwfn);
 
 	/* Initialize MCP structure */
@@ -5581,9 +5687,6 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 
 	/* Check if mdump logs/data are present and update the epoch value */
 	if (IS_LEAD_HWFN(p_hwfn)) {
-#ifndef ASIC_ONLY
-		if (!CHIP_REV_IS_EMUL(p_dev)) {
-#endif
 		rc = ecore_mcp_mdump_get_info(p_hwfn, p_hwfn->p_main_ptt,
 					      &mdump_info);
 		if (rc == ECORE_SUCCESS && mdump_info.num_of_logs)
@@ -5600,9 +5703,6 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 
 		ecore_mcp_mdump_set_values(p_hwfn, p_hwfn->p_main_ptt,
 					   p_params->epoch);
-#ifndef ASIC_ONLY
-		}
-#endif
 	}
 
 	/* Allocate the init RT array and initialize the init-ops engine */
@@ -5615,10 +5715,12 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 	}
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_FPGA(p_dev)) {
-		DP_NOTICE(p_hwfn, false,
-			  "FPGA: workaround; Prevent DMAE parities\n");
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2,
-			 7);
+		if (ECORE_IS_AH(p_dev)) {
+			DP_NOTICE(p_hwfn, false,
+				  "FPGA: workaround; Prevent DMAE parities\n");
+			ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+				 PCIE_REG_PRTY_MASK_K2, 7);
+		}
 
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround: Set VF bar0 size\n");
@@ -5652,10 +5754,6 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 	if (p_params->b_relaxed_probe)
 		p_params->p_relaxed_res = ECORE_HW_PREPARE_SUCCESS;
 
-	/* Store the precompiled init data ptrs */
-	if (IS_PF(p_dev))
-		ecore_init_iro_array(p_dev);
-
 	/* Initialize the first hwfn - will learn number of hwfns */
 	rc = ecore_hw_prepare_single(p_hwfn, p_dev->regview,
 				     p_dev->doorbells, p_dev->db_phys_addr,
@@ -5665,7 +5763,7 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 	p_params->personality = p_hwfn->hw_info.personality;
 
-	/* initilalize 2nd hwfn if necessary */
+	/* Initialize 2nd hwfn if necessary */
 	if (ECORE_IS_CMT(p_dev)) {
 		void OSAL_IOMEM *p_regview, *p_doorbell;
 		u8 OSAL_IOMEM *addr;
@@ -6382,7 +6480,7 @@ static int __ecore_configure_vport_wfq(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_link_state *p_link;
 	int rc = ECORE_SUCCESS;
 
-	p_link = &p_hwfn->p_dev->hwfns[0].mcp_info->link_output;
+	p_link = &ECORE_LEADING_HWFN(p_hwfn->p_dev)->mcp_info->link_output;
 
 	if (!p_link->min_pf_rate) {
 		p_hwfn->qm_info.wfq_data[vp_id].min_speed = rate;
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 9f614a4cf..4b27bb93e 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -9,31 +9,29 @@
 #include "ecore_init_ops.h"
 #include "reg_addr.h"
 #include "ecore_rt_defs.h"
-#include "ecore_hsi_common.h"
 #include "ecore_hsi_init_func.h"
-#include "ecore_hsi_eth.h"
 #include "ecore_hsi_init_tool.h"
 #include "ecore_iro.h"
 #include "ecore_init_fw_funcs.h"
-
-#define CDU_VALIDATION_DEFAULT_CFG 61
-
 static u16 con_region_offsets[3][NUM_OF_CONNECTION_TYPES] = {
-	{ 400,  336,  352,  304,  304,  384,  416,  352}, /* region 3 offsets */
-	{ 528,  496,  416,  448,  448,  512,  544,  480}, /* region 4 offsets */
-	{ 608,  544,  496,  512,  576,  592,  624,  560}  /* region 5 offsets */
+	{ 400,  336,  352,  368,  304,  384,  416,  352}, /* region 3 offsets */
+	{ 528,  496,  416,  512,  448,  512,  544,  480}, /* region 4 offsets */
+	{ 608,  544,  496,  576,  576,  592,  624,  560}  /* region 5 offsets */
 };
 static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
 
 /* General constants */
-#define QM_PQ_MEM_4KB(pq_size) (pq_size ? DIV_ROUND_UP((pq_size + 1) * \
-				QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
-#define QM_PQ_SIZE_256B(pq_size) (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : \
-				  0)
+#define QM_PQ_MEM_4KB(pq_size) \
+	(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
+#define QM_PQ_SIZE_256B(pq_size) \
+	(pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
 #define QM_INVALID_PQ_ID		0xffff
 
+/* Max link speed (in Mbps) */
+#define QM_MAX_LINK_SPEED		100000
+
 /* Feature enable */
 #define QM_BYPASS_EN			1
 #define QM_BYTE_CRD_EN			1
@@ -42,7 +40,8 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 #define QM_OTHER_PQS_PER_PF		4
 
 /* VOQ constants */
-#define QM_E5_NUM_EXT_VOQ		(MAX_NUM_PORTS_E5 * NUM_OF_TCS)
+#define MAX_NUM_VOQS			(MAX_NUM_PORTS_K2 * NUM_TCS_4PORT_K2)
+#define VOQS_BIT_MASK			((1 << MAX_NUM_VOQS) - 1)
 
 /* WFQ constants: */
 
@@ -53,8 +52,7 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 #define QM_WFQ_VP_PQ_VOQ_SHIFT		0
 
 /* Bit  of PF in WFQ VP PQ map */
-#define QM_WFQ_VP_PQ_PF_E4_SHIFT	5
-#define QM_WFQ_VP_PQ_PF_E5_SHIFT	6
+#define QM_WFQ_VP_PQ_PF_SHIFT		5
 
 /* 0x9000 = 4*9*1024 */
 #define QM_WFQ_INC_VAL(weight)		((weight) * 0x9000)
@@ -62,9 +60,6 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 /* Max WFQ increment value is 0.7 * upper bound */
 #define QM_WFQ_MAX_INC_VAL		((QM_WFQ_UPPER_BOUND * 7) / 10)
 
-/* Number of VOQs in E5 QmWfqCrd register */
-#define QM_WFQ_CRD_E5_NUM_VOQS		16
-
 /* RL constants: */
 
 /* Period in us */
@@ -110,8 +105,6 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 /* Pure LB CmdQ lines (+spare) */
 #define PBF_CMDQ_PURE_LB_LINES		150
 
-#define PBF_CMDQ_LINES_E5_RSVD_RATIO	8
-
 #define PBF_CMDQ_LINES_RT_OFFSET(ext_voq) \
 	(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \
 	 ext_voq * \
@@ -174,42 +167,25 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 	} while (0)
 
 #define WRITE_PQ_INFO_TO_RAM		1
-#define PQ_INFO_ELEMENT(vp, pf, tc, port, rl_valid, rl)	\
-	(((vp) << 0) | ((pf) << 12) | ((tc) << 16) |    \
-	 ((port) << 20) | ((rl_valid) << 22) | ((rl) << 24))
-#define PQ_INFO_RAM_GRC_ADDRESS(pq_id) \
-	(XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + 21776 + (pq_id) * 4)
 
-/******************** INTERNAL IMPLEMENTATION *********************/
+#define PQ_INFO_ELEMENT(vp_pq_id, pf, tc, port, rl_valid, rl_id) \
+	(((vp_pq_id) << 0) | ((pf) << 12) | ((tc) << 16) | ((port) << 20) | \
+	 ((rl_valid ? 1 : 0) << 22) | (((rl_id) & 255) << 24) | \
+	 (((rl_id) >> 8) << 9))
 
-/* Returns the external VOQ number */
-static u8 ecore_get_ext_voq(struct ecore_hwfn *p_hwfn,
-			    u8 port_id,
-			    u8 tc,
-			    u8 max_phys_tcs_per_port)
-{
-	if (tc == PURE_LB_TC)
-		return NUM_OF_PHYS_TCS * (MAX_NUM_PORTS_BB) + port_id;
-	else
-		return port_id * (max_phys_tcs_per_port) + tc;
-}
+#define PQ_INFO_RAM_GRC_ADDRESS(pq_id) (XSEM_REG_FAST_MEMORY + \
+	SEM_FAST_REG_INT_RAM + XSTORM_PQ_INFO_OFFSET(pq_id))
+
+/******************** INTERNAL IMPLEMENTATION *********************/
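
The VOQ() macro used by the hunks below replaces the removed ecore_get_ext_voq() helper; presumably it keeps the same mapping, along these lines (sketch, the real definition comes with the updated HSI headers):

/* Assumed definition (sketch): physical TCs map to per-port VOQs, the
 * pure-LB TC maps to a dedicated VOQ range after all physical ones.
 */
#define VOQ(port_id, tc, max_phys_tcs_per_port) \
	((tc) == PURE_LB_TC ? \
	 NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + (port_id) : \
	 (port_id) * (max_phys_tcs_per_port) + (tc))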
 
 /* Prepare PF RL enable/disable runtime init values */
 static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0);
 	if (pf_rl_en) {
-		u8 num_ext_voqs = MAX_NUM_VOQS_E4;
-		u64 voq_bit_mask = ((u64)1 << num_ext_voqs) - 1;
-
 		/* Enable RLs for all VOQs */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET,
-			     (u32)voq_bit_mask);
-#ifdef QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET
-		if (num_ext_voqs >= 32)
-			STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET,
-				     (u32)(voq_bit_mask >> 32));
-#endif
+			     VOQS_BIT_MASK);
 
 		/* Write RL period */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET,
@@ -235,12 +211,13 @@ static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en)
 			     QM_WFQ_UPPER_BOUND);
 }
 
-/* Prepare VPORT RL enable/disable runtime init values */
-static void ecore_enable_vport_rl(struct ecore_hwfn *p_hwfn, bool vport_rl_en)
+/* Prepare global RL enable/disable runtime init values */
+static void ecore_enable_global_rl(struct ecore_hwfn *p_hwfn,
+				   bool global_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET,
-		     vport_rl_en ? 1 : 0);
-	if (vport_rl_en) {
+		     global_rl_en ? 1 : 0);
+	if (global_rl_en) {
 		/* Write RL period (use timer 0 only) */
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
@@ -271,19 +248,16 @@ static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en)
  * the specified VOQ
  */
 static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn,
-					 u8 ext_voq,
+					 u8 voq,
 					 u16 cmdq_lines)
 {
-	u32 qm_line_crd;
+	u32 qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
 
-	qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
-
-	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(ext_voq),
+	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq),
 			 (u32)cmdq_lines);
-	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + ext_voq,
-			 qm_line_crd);
-	STORE_RT_REG(p_hwfn, QM_REG_VOQINITCRDLINE_RT_OFFSET + ext_voq,
-			 qm_line_crd);
+	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd);
+	STORE_RT_REG(p_hwfn, QM_REG_VOQINITCRDLINE_RT_OFFSET + voq,
+		     qm_line_crd);
 }
 
 /* Prepare runtime init values to allocate PBF command queue lines. */
@@ -293,12 +267,11 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 				     struct init_qm_port_params
 				     port_params[MAX_NUM_PORTS])
 {
-	u8 tc, ext_voq, port_id, num_tcs_in_port;
-	u8 num_ext_voqs = MAX_NUM_VOQS_E4;
+	u8 tc, voq, port_id, num_tcs_in_port;
 
 	/* Clear PBF lines of all VOQs */
-	for (ext_voq = 0; ext_voq < num_ext_voqs; ext_voq++)
-		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(ext_voq), 0);
+	for (voq = 0; voq < MAX_NUM_VOQS; voq++)
+		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0);
 
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
 		u16 phys_lines, phys_lines_per_tc;
@@ -307,8 +280,7 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 			continue;
 
 		/* Find number of command queue lines to divide between the
-		 * active physical TCs. In E5, 1/8 of the lines are reserved.
-		 * the lines for pure LB TC are subtracted.
+		 * active physical TCs.
 		 */
 		phys_lines = port_params[port_id].num_pbf_cmd_lines;
 		phys_lines -= PBF_CMDQ_PURE_LB_LINES;
@@ -323,18 +295,16 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 
 		/* Init registers per active TC */
 		for (tc = 0; tc < max_phys_tcs_per_port; tc++) {
-			ext_voq = ecore_get_ext_voq(p_hwfn, port_id, tc,
-						    max_phys_tcs_per_port);
-			if (((port_params[port_id].active_phys_tcs >> tc) &
-			    0x1) == 1)
-				ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq,
+			voq = VOQ(port_id, tc, max_phys_tcs_per_port);
+			if (((port_params[port_id].active_phys_tcs >>
+			      tc) & 0x1) == 1)
+				ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
 							     phys_lines_per_tc);
 		}
 
 		/* Init registers for pure LB TC */
-		ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PURE_LB_TC,
-					    max_phys_tcs_per_port);
-		ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq,
+		voq = VOQ(port_id, PURE_LB_TC, max_phys_tcs_per_port);
+		ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
 					     PBF_CMDQ_PURE_LB_LINES);
 	}
 }
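
A hedged worked example of the command-line split above (the per-port total is made up; PBF_CMDQ_PURE_LB_LINES = 150 as defined earlier in this file):

/* Illustrative only: a port reporting 3350 PBF command lines keeps
 * 150 for the pure-LB VOQ, leaving 3200; with 4 active physical TCs
 * each TC VOQ gets 3200 / 4 = 800 lines, and QM_VOQ_LINE_CRD() derives
 * the matching initial line credit from that count.
 */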
@@ -366,7 +336,7 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 				     port_params[MAX_NUM_PORTS])
 {
 	u32 usable_blocks, pure_lb_blocks, phys_blocks;
-	u8 tc, ext_voq, port_id, num_tcs_in_port;
+	u8 tc, voq, port_id, num_tcs_in_port;
 
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
 		if (!port_params[port_id].active)
@@ -398,24 +368,58 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
 			if (((port_params[port_id].active_phys_tcs >> tc) &
 			     0x1) == 1) {
-				ext_voq = ecore_get_ext_voq(p_hwfn, port_id, tc,
-							 max_phys_tcs_per_port);
+				voq = VOQ(port_id, tc, max_phys_tcs_per_port);
 				STORE_RT_REG(p_hwfn,
-					PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq),
+					PBF_BTB_GUARANTEED_RT_OFFSET(voq),
 					phys_blocks);
 			}
 		}
 
 		/* Init pure LB TC */
-		ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PURE_LB_TC,
-					    max_phys_tcs_per_port);
-		STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq),
+		voq = VOQ(port_id, PURE_LB_TC, max_phys_tcs_per_port);
+		STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(voq),
 			     pure_lb_blocks);
 	}
 }
 
+/* Prepare runtime init values for the specified RL.
+ * If global_rl_params is OSAL_NULL, max link speed (100Gbps) is used instead.
+ * Return -1 on error.
+ */
+static int ecore_global_rl_rt_init(struct ecore_hwfn *p_hwfn,
+				   struct init_qm_global_rl_params
+				     global_rl_params[COMMON_MAX_QM_GLOBAL_RLS])
+{
+	u32 upper_bound = QM_VP_RL_UPPER_BOUND(QM_MAX_LINK_SPEED) |
+			  (u32)QM_RL_CRD_REG_SIGN_BIT;
+	u32 inc_val;
+	u16 rl_id;
+
+	/* Go over all global RLs */
+	for (rl_id = 0; rl_id < MAX_QM_GLOBAL_RLS; rl_id++) {
+		u32 rate_limit = global_rl_params ?
+				 global_rl_params[rl_id].rate_limit : 0;
+
+		inc_val = QM_RL_INC_VAL(rate_limit ?
+					rate_limit : QM_MAX_LINK_SPEED);
+		if (inc_val > QM_VP_RL_MAX_INC_VAL(QM_MAX_LINK_SPEED)) {
+			DP_NOTICE(p_hwfn, true, "Invalid rate limit configuration.\n");
+			return -1;
+		}
+
+		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + rl_id,
+			     (u32)QM_RL_CRD_REG_SIGN_BIT);
+		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLUPPERBOUND_RT_OFFSET + rl_id,
+			     upper_bound);
+		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + rl_id,
+			     inc_val);
+	}
+
+	return 0;
+}
+
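
A minimal caller-side sketch (assumption, not taken from this patch) of how the new global_rl_params table might be populated:

/* Hypothetical setup: cap global RL 0 at 25G and leave the rest at the
 * default (rate_limit == 0 means "use QM_MAX_LINK_SPEED" above).
 */
static void example_fill_global_rls(struct init_qm_global_rl_params *rls)
{
	u16 rl_id;

	for (rl_id = 0; rl_id < COMMON_MAX_QM_GLOBAL_RLS; rl_id++)
		rls[rl_id].rate_limit = 0;	/* default: max link speed */

	rls[0].rate_limit = 25000;		/* 25 Gbps, in Mbps */
}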
 /* Prepare Tx PQ mapping runtime init values for the specified PF */
-static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
+static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt,
 				    u8 pf_id,
 				    u8 max_phys_tcs_per_port,
@@ -425,7 +429,7 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    u16 start_pq,
 				    u16 num_pf_pqs,
 				    u16 num_vf_pqs,
-				    u8 start_vport,
+				   u16 start_vport,
 				    u32 base_mem_addr_4kb,
 				    struct init_qm_pq_params *pq_params,
 				    struct init_qm_vport_params *vport_params)
@@ -435,6 +439,9 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 	u32 num_tx_pq_vf_masks = MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE;
 	u16 num_pqs, first_pq_group, last_pq_group, i, j, pq_id, pq_group;
 	u32 pq_mem_4kb, vport_pq_mem_4kb, mem_addr_4kb;
+	#if (WRITE_PQ_INFO_TO_RAM != 0)
+		u32 pq_info = 0;
+	#endif
 
 	num_pqs = num_pf_pqs + num_vf_pqs;
 
@@ -458,24 +465,22 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 
 	/* Go over all Tx PQs */
 	for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) {
-		u32 max_qm_global_rls = MAX_QM_GLOBAL_RLS;
-		u8 ext_voq, vport_id_in_pf;
-		bool is_vf_pq, rl_valid;
-		u16 first_tx_pq_id;
-
-		ext_voq = ecore_get_ext_voq(p_hwfn, pq_params[i].port_id,
-					    pq_params[i].tc_id,
-					    max_phys_tcs_per_port);
+		u16 first_tx_pq_id, vport_id_in_pf;
+		struct qm_rf_pq_map tx_pq_map;
+		bool is_vf_pq;
+		u8 voq;
+
+		voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id,
+			  max_phys_tcs_per_port);
 		is_vf_pq = (i >= num_pf_pqs);
-		rl_valid = pq_params[i].rl_valid > 0;
 
 		/* Update first Tx PQ of VPORT/TC */
 		vport_id_in_pf = pq_params[i].vport_id - start_vport;
 		first_tx_pq_id =
 		vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
 		if (first_tx_pq_id == QM_INVALID_PQ_ID) {
-			u32 map_val = (ext_voq << QM_WFQ_VP_PQ_VOQ_SHIFT) |
-				       (pf_id << (QM_WFQ_VP_PQ_PF_E4_SHIFT));
+			u32 map_val = (voq << QM_WFQ_VP_PQ_VOQ_SHIFT) |
+				      (pf_id << QM_WFQ_VP_PQ_PF_SHIFT);
 
 			/* Create new VP PQ */
 			vport_params[vport_id_in_pf].
@@ -487,20 +492,10 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				     first_tx_pq_id, map_val);
 		}
 
-		/* Check RL ID */
-		if (rl_valid && pq_params[i].vport_id >= max_qm_global_rls) {
-			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT ID for rate limiter config\n");
-			rl_valid = false;
-		}
-
 		/* Prepare PQ map entry */
-		struct qm_rf_pq_map tx_pq_map;
-
 		QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, pq_id, first_tx_pq_id,
-				  rl_valid ? 1 : 0,
-				  rl_valid ? pq_params[i].vport_id : 0,
-				  ext_voq, pq_params[i].wrr_group);
+				  pq_params[i].rl_valid, pq_params[i].rl_id,
+				  voq, pq_params[i].wrr_group);
 
 		/* Set PQ base address */
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id,
@@ -513,17 +508,15 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 					     (pq_id * 2) + j, 0);
 
 		/* Write PQ info to RAM */
-		if (WRITE_PQ_INFO_TO_RAM != 0) {
-			u32 pq_info = 0;
-
-			pq_info = PQ_INFO_ELEMENT(first_tx_pq_id, pf_id,
-						  pq_params[i].tc_id,
-						  pq_params[i].port_id,
-						  rl_valid ? 1 : 0, rl_valid ?
-						  pq_params[i].vport_id : 0);
-			ecore_wr(p_hwfn, p_ptt, PQ_INFO_RAM_GRC_ADDRESS(pq_id),
-				 pq_info);
-		}
+#if (WRITE_PQ_INFO_TO_RAM != 0)
+		pq_info = PQ_INFO_ELEMENT(first_tx_pq_id, pf_id,
+					  pq_params[i].tc_id,
+					  pq_params[i].port_id,
+					  pq_params[i].rl_valid,
+					  pq_params[i].rl_id);
+		ecore_wr(p_hwfn, p_ptt, PQ_INFO_RAM_GRC_ADDRESS(pq_id),
+			 pq_info);
+#endif
 
 		/* If VF PQ, add indication to PQ VF mask */
 		if (is_vf_pq) {
@@ -540,6 +533,8 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		if (tx_pq_vf_mask[i])
 			STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET +
 				     i, tx_pq_vf_mask[i]);
+
+	return 0;
 }
 
 /* Prepare Other PQ mapping runtime init values for the specified PF */
@@ -597,7 +592,7 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				struct init_qm_pq_params *pq_params)
 {
 	u32 inc_val, crd_reg_offset;
-	u8 ext_voq;
+	u8 voq;
 	u16 i;
 
 	inc_val = QM_WFQ_INC_VAL(pf_wfq);
@@ -608,13 +603,12 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 	}
 
 	for (i = 0; i < num_tx_pqs; i++) {
-		ext_voq = ecore_get_ext_voq(p_hwfn, pq_params[i].port_id,
-					    pq_params[i].tc_id,
-					    max_phys_tcs_per_port);
+		voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id,
+			  max_phys_tcs_per_port);
 		crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ?
 				  QM_REG_WFQPFCRD_RT_OFFSET :
 				  QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
-				 ext_voq * MAX_NUM_PFS_BB +
+				 voq * MAX_NUM_PFS_BB +
 				 (pf_id % MAX_NUM_PFS_BB);
 		OVERWRITE_RT_REG(p_hwfn, crd_reg_offset,
 				 (u32)QM_WFQ_CRD_REG_SIGN_BIT);
@@ -654,19 +648,19 @@ static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
  * Return -1 on error.
  */
 static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
-				u8 num_vports,
+				u16 num_vports,
 				struct init_qm_vport_params *vport_params)
 {
-	u16 vport_pq_id;
+	u16 vp_pq_id, vport_id;
 	u32 inc_val;
-	u8 tc, i;
+	u8 tc;
 
 	/* Go over all PF VPORTs */
-	for (i = 0; i < num_vports; i++) {
-		if (!vport_params[i].wfq)
+	for (vport_id = 0; vport_id < num_vports; vport_id++) {
+		if (!vport_params[vport_id].wfq)
 			continue;
 
-		inc_val = QM_WFQ_INC_VAL(vport_params[i].wfq);
+		inc_val = QM_WFQ_INC_VAL(vport_params[vport_id].wfq);
 		if (inc_val > QM_WFQ_MAX_INC_VAL) {
 			DP_NOTICE(p_hwfn, true,
 				  "Invalid VPORT WFQ weight configuration\n");
@@ -675,56 +669,16 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 
 		/* Each VPORT can have several VPORT PQ IDs for various TCs */
 		for (tc = 0; tc < NUM_OF_TCS; tc++) {
-			vport_pq_id = vport_params[i].first_tx_pq_id[tc];
-			if (vport_pq_id != QM_INVALID_PQ_ID) {
-				STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
-					     vport_pq_id,
-					     (u32)QM_WFQ_CRD_REG_SIGN_BIT);
-				STORE_RT_REG(p_hwfn,
-					     QM_REG_WFQVPWEIGHT_RT_OFFSET +
-					     vport_pq_id, inc_val);
-			}
+			vp_pq_id = vport_params[vport_id].first_tx_pq_id[tc];
+			if (vp_pq_id == QM_INVALID_PQ_ID)
+				continue;
+
+			STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
+				     vp_pq_id, (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+			STORE_RT_REG(p_hwfn, QM_REG_WFQVPWEIGHT_RT_OFFSET +
+				     vp_pq_id, inc_val);
 		}
 	}
-	return 0;
-}
-
-/* Prepare VPORT RL runtime init values for the specified VPORTs.
- * Return -1 on error.
- */
-static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
-				  u8 start_vport,
-				  u8 num_vports,
-				  u32 link_speed,
-				  struct init_qm_vport_params *vport_params)
-{
-	u8 i, vport_id;
-	u32 inc_val;
-
-	if (start_vport + num_vports >= MAX_QM_GLOBAL_RLS) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration\n");
-		return -1;
-	}
-
-	/* Go over all PF VPORTs */
-	for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
-		inc_val = QM_RL_INC_VAL(link_speed);
-		if (inc_val > QM_VP_RL_MAX_INC_VAL(link_speed)) {
-			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT rate-limit configuration\n");
-			return -1;
-		}
-
-		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + vport_id,
-			     (u32)QM_RL_CRD_REG_SIGN_BIT);
-		STORE_RT_REG(p_hwfn,
-			     QM_REG_RLGLBLUPPERBOUND_RT_OFFSET + vport_id,
-			     QM_VP_RL_UPPER_BOUND(link_speed) |
-			     (u32)QM_RL_CRD_REG_SIGN_BIT);
-		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + vport_id,
-			     inc_val);
-	}
 
 	return 0;
 }
@@ -768,10 +722,10 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 	return ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt);
 }
 
-
 /******************** INTERFACE IMPLEMENTATION *********************/
 
-u32 ecore_qm_pf_mem_size(u32 num_pf_cids,
+u32 ecore_qm_pf_mem_size(struct ecore_hwfn *p_hwfn,
+			 u32 num_pf_cids,
 						 u32 num_vf_cids,
 						 u32 num_tids,
 						 u16 num_pf_pqs,
@@ -787,25 +741,26 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			    u8 max_phys_tcs_per_port,
 			    bool pf_rl_en,
 			    bool pf_wfq_en,
-			    bool vport_rl_en,
+			    bool global_rl_en,
 			    bool vport_wfq_en,
 			    struct init_qm_port_params
-			    port_params[MAX_NUM_PORTS])
+				   port_params[MAX_NUM_PORTS],
+			    struct init_qm_global_rl_params
+				   global_rl_params[COMMON_MAX_QM_GLOBAL_RLS])
 {
-	u32 mask;
+	u32 mask = 0;
 
 	/* Init AFullOprtnstcCrdMask */
-	mask = (QM_OPPOR_LINE_VOQ_DEF <<
-		QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
-		(QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
-		(pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
-		(vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
-		(pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
-		(vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
-		(QM_OPPOR_FW_STOP_DEF <<
-		 QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
-		(QM_OPPOR_PQ_EMPTY_DEF <<
-		 QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_LINEVOQ,
+		  QM_OPPOR_LINE_VOQ_DEF);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ, QM_BYTE_CRD_EN);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFWFQ, pf_wfq_en);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPWFQ, vport_wfq_en);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFRL, pf_rl_en);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPQCNRL, global_rl_en);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_FWPAUSE, QM_OPPOR_FW_STOP_DEF);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY,
+		  QM_OPPOR_PQ_EMPTY_DEF);
 	STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask);
 
 	/* Enable/disable PF RL */
@@ -814,8 +769,8 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 	/* Enable/disable PF WFQ */
 	ecore_enable_pf_wfq(p_hwfn, pf_wfq_en);
 
-	/* Enable/disable VPORT RL */
-	ecore_enable_vport_rl(p_hwfn, vport_rl_en);
+	/* Enable/disable global RL */
+	ecore_enable_global_rl(p_hwfn, global_rl_en);
 
 	/* Enable/disable VPORT WFQ */
 	ecore_enable_vport_wfq(p_hwfn, vport_wfq_en);
@@ -828,6 +783,8 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 	ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
 
+	ecore_global_rl_rt_init(p_hwfn, global_rl_params);
+
 	return 0;
 }
 
@@ -842,24 +799,25 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			u16 start_pq,
 			u16 num_pf_pqs,
 			u16 num_vf_pqs,
-			u8 start_vport,
-			u8 num_vports,
+			u16 start_vport,
+			u16 num_vports,
 			u16 pf_wfq,
 			u32 pf_rl,
-			u32 link_speed,
 			struct init_qm_pq_params *pq_params,
 			struct init_qm_vport_params *vport_params)
 {
 	u32 other_mem_size_4kb;
-	u8 tc, i;
+	u16 vport_id;
+	u8 tc;
 
 	other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) *
 			     QM_OTHER_PQS_PER_PF;
 
 	/* Clear first Tx PQ ID array for each VPORT */
-	for (i = 0; i < num_vports; i++)
+	for (vport_id = 0; vport_id < num_vports; vport_id++)
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
-			vport_params[i].first_tx_pq_id[tc] = QM_INVALID_PQ_ID;
+			vport_params[vport_id].first_tx_pq_id[tc] =
+				QM_INVALID_PQ_ID;
 
 	/* Map Other PQs (if any) */
 #if QM_OTHER_PQS_PER_PF > 0
@@ -868,10 +826,12 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 #endif
 
 	/* Map Tx PQs */
-	ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, pf_id, max_phys_tcs_per_port,
-				is_pf_loading, num_pf_cids, num_vf_cids,
-				start_pq, num_pf_pqs, num_vf_pqs, start_vport,
-				other_mem_size_4kb, pq_params, vport_params);
+	if (ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, pf_id, max_phys_tcs_per_port,
+				    is_pf_loading, num_pf_cids, num_vf_cids,
+				    start_pq, num_pf_pqs, num_vf_pqs,
+				    start_vport, other_mem_size_4kb, pq_params,
+				    vport_params))
+		return -1;
 
 	/* Init PF WFQ */
 	if (pf_wfq)
@@ -884,15 +844,10 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl))
 		return -1;
 
-	/* Set VPORT WFQ */
+	/* Init VPORT WFQ */
 	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params))
 		return -1;
 
-	/* Set VPORT RL */
-	if (ecore_vport_rl_rt_init
-	    (p_hwfn, start_vport, num_vports, link_speed, vport_params))
-		return -1;
-
 	return 0;
 }
 
@@ -934,27 +889,49 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 
 int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
-			 u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq)
+			 u16 first_tx_pq_id[NUM_OF_TCS],
+			 u16 wfq)
 {
-	u16 vport_pq_id;
+	u16 vp_pq_id;
 	u32 inc_val;
 	u8 tc;
 
-	inc_val = QM_WFQ_INC_VAL(vport_wfq);
+	inc_val = QM_WFQ_INC_VAL(wfq);
 	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
 			  "Invalid VPORT WFQ weight configuration\n");
 		return -1;
 	}
 
+	/* A VPORT can have several VPORT PQ IDs for various TCs */
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
-		vport_pq_id = first_tx_pq_id[tc];
-		if (vport_pq_id != QM_INVALID_PQ_ID) {
+		vp_pq_id = first_tx_pq_id[tc];
+		if (vp_pq_id != QM_INVALID_PQ_ID) {
 			ecore_wr(p_hwfn, p_ptt,
-				 QM_REG_WFQVPWEIGHT + vport_pq_id * 4, inc_val);
+				 QM_REG_WFQVPWEIGHT + vp_pq_id * 4, inc_val);
 		}
 	}
 
+	return 0;
+}
+
+int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 u16 rl_id,
+			 u32 rate_limit)
+{
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(rate_limit);
+	if (inc_val > QM_VP_RL_MAX_INC_VAL(rate_limit)) {
+		DP_NOTICE(p_hwfn, true, "Invalid rate limit configuration.\n");
+		return -1;
+	}
+
+	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + rl_id * 4,
+		 (u32)QM_RL_CRD_REG_SIGN_BIT);
+	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + rl_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -1023,6 +1000,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 	return true;
 }
 
+#ifndef UNUSED_HSI_FUNC
 
 /* NIG: ETS configuration constants */
 #define NIG_TX_ETS_CLIENT_OFFSET	4
@@ -1246,6 +1224,9 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 	}
 }
 
+#endif /* UNUSED_HSI_FUNC */
+
+#ifndef UNUSED_HSI_FUNC
 
 /* PRS: ETS configuration constants */
 #define PRS_ETS_MIN_WFQ_BYTES		1600
@@ -1312,6 +1293,8 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 	}
 }
 
+#endif /* UNUSED_HSI_FUNC */
+#ifndef UNUSED_HSI_FUNC
 
 /* BRB: RAM configuration constants */
 #define BRB_TOTAL_RAM_BLOCKS_BB	4800
@@ -1424,13 +1407,74 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-/* In MF should be called once per port to set EtherType of OuterTag */
+#endif /* UNUSED_HSI_FUNC */
+#ifndef UNUSED_HSI_FUNC
+
+#define ARR_REG_WR(dev, ptt, addr, arr, arr_size)		\
+	do {							\
+		u32 i;						\
+		for (i = 0; i < (arr_size); i++)		\
+			ecore_wr(dev, ptt, ((addr) + (4 * i)),	\
+				 ((u32 *)&(arr))[i]);		\
+	} while (0)
+
+#ifndef DWORDS_TO_BYTES
+#define DWORDS_TO_BYTES(dwords)		((dwords) * REG_SIZE)
+#endif
+
+
+/**
+ * @brief ecore_dmae_to_grc - internal helper that writes from host memory to
+ * wide-bus registers (split registers are not supported yet)
+ *
+ * @param p_hwfn -       HW device data
+ * @param p_ptt -       ptt window used for writing the registers.
+ * @param pData - pointer to source data.
+ * @param addr - Destination register address.
+ * @param len_in_dwords - data length in DWORDS (u32)
+ */
+static int ecore_dmae_to_grc(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     u32 *pData,
+			     u32 addr,
+			     u32 len_in_dwords)
+{
+	struct dmae_params params;
+	bool read_using_dmae = false;
+
+	if (!pData)
+		return -1;
+
+	/* Set DMAE params */
+	OSAL_MEMSET(&params, 0, sizeof(params));
+
+	SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 1);
+
+	/* Execute DMAE command */
+	read_using_dmae = !ecore_dmae_host2grc(p_hwfn, p_ptt,
+					       (u64)(osal_uintptr_t)(pData),
+					       addr, len_in_dwords, &params);
+	if (!read_using_dmae)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
+			   "Failed writing to chip using DMAE, using GRC instead\n");
+
+	/* If the DMAE write failed, fall back to writing through GRC */
+	if (!read_using_dmae)
+		/* write to registers using GRC */
+		ARR_REG_WR(p_hwfn, p_ptt, addr, pData, len_in_dwords);
+
+	return len_in_dwords;
+}
+
+/* In MF, should be called once per port to set EtherType of OuterTag */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, u32 ethType)
 {
 	/* Update DORQ register */
 	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
 }
 
+#endif /* UNUSED_HSI_FUNC */
+
 #define SET_TUNNEL_TYPE_ENABLE_BIT(var, offset, enable) \
 (var = ((var) & ~(1 << (offset))) | ((enable) ? (1 << (offset)) : 0))
 #define PRS_ETH_TUNN_OUTPUT_FORMAT        -188897008
@@ -1579,8 +1623,8 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		 ip_geneve_enable ? 1 : 0);
 }
 
-#define PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET   4
-#define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT      -927094512
+#define PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET      3
+#define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT   -925189872
 
 void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt,
@@ -1598,10 +1642,10 @@ void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
 		/* set VXLAN_NO_L2_ENABLE flag */
 		reg_val |= cfg_mask;
 
-		/* update PRS FIC  register */
+		/* update PRS FIC Format register */
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 		 (u32)PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT);
-	} else  {
+	} else {
 		/* clear VXLAN_NO_L2_ENABLE flag */
 		reg_val &= ~cfg_mask;
 	}
@@ -1610,6 +1653,8 @@ void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_MSG_INFO, reg_val);
 }
 
+#ifndef UNUSED_HSI_FUNC
+
 #define T_ETH_PACKET_ACTION_GFT_EVENTID  23
 #define PARSER_ETH_CONN_GFT_ACTION_CM_HDR  272
 #define T_ETH_PACKET_MATCH_RFS_EVENTID 25
@@ -1622,6 +1667,9 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt,
 		       u16 pf_id)
 {
+	struct regpair ram_line;
+	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
+
 	/* disable gft search for PF */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 0);
 
@@ -1631,10 +1679,10 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id, 0);
 
 	/* Zero ramline */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
-				RAM_LINE_SIZE * pf_id, 0);
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
-				RAM_LINE_SIZE * pf_id + REG_SIZE, 0);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
+			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
+			  sizeof(ram_line) / REG_SIZE);
+
 }
 
 
@@ -1661,7 +1709,8 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 			       bool ipv6,
 			       enum gft_profile_type profile_type)
 {
-	u32 reg_val, cam_line, ram_line_lo, ram_line_hi, search_non_ip_as_gft;
+	u32 reg_val, cam_line, search_non_ip_as_gft;
+	struct regpair ram_line = { 0 };
 
 	if (!ipv6 && !ipv4)
 		DP_NOTICE(p_hwfn, true, "gft_config: must accept at least on of - ipv4 or ipv6'\n");
@@ -1722,35 +1771,33 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 			    PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
 
 	/* Write line to RAM - compare to filter 4 tuple */
-	ram_line_lo = 0;
-	ram_line_hi = 0;
 
 	/* Search no IP as GFT */
 	search_non_ip_as_gft = 0;
 
 	/* Tunnel type */
-	SET_FIELD(ram_line_lo, GFT_RAM_LINE_TUNNEL_DST_PORT, 1);
-	SET_FIELD(ram_line_lo, GFT_RAM_LINE_TUNNEL_OVER_IP_PROTOCOL, 1);
+	SET_FIELD(ram_line.lo, GFT_RAM_LINE_TUNNEL_DST_PORT, 1);
+	SET_FIELD(ram_line.lo, GFT_RAM_LINE_TUNNEL_OVER_IP_PROTOCOL, 1);
 
 	if (profile_type == GFT_PROFILE_TYPE_4_TUPLE) {
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_DST_IP, 1);
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_SRC_IP, 1);
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_ETHERTYPE, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_SRC_PORT, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_DST_PORT, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_DST_IP, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_SRC_IP, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_ETHERTYPE, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_SRC_PORT, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_DST_PORT, 1);
 	} else if (profile_type == GFT_PROFILE_TYPE_L4_DST_PORT) {
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_ETHERTYPE, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_DST_PORT, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_ETHERTYPE, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_DST_PORT, 1);
 	} else if (profile_type == GFT_PROFILE_TYPE_IP_DST_ADDR) {
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_DST_IP, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_ETHERTYPE, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_DST_IP, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_ETHERTYPE, 1);
 	} else if (profile_type == GFT_PROFILE_TYPE_IP_SRC_ADDR) {
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_SRC_IP, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_ETHERTYPE, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_SRC_IP, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_ETHERTYPE, 1);
 	} else if (profile_type == GFT_PROFILE_TYPE_TUNNEL_TYPE) {
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_TUNNEL_ETHERTYPE, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_TUNNEL_ETHERTYPE, 1);
 
 		/* Allow tunneled traffic without inner IP */
 		search_non_ip_as_gft = 1;
@@ -1758,23 +1805,25 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_NON_IP_AS_GFT,
 		 search_non_ip_as_gft);
-	ecore_wr(p_hwfn, p_ptt,
-		 PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
-		 ram_line_lo);
-	ecore_wr(p_hwfn, p_ptt,
-		 PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id +
-		 REG_SIZE, ram_line_hi);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
+			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
+			  sizeof(ram_line) / REG_SIZE);
 
 	/* Set default profile so that no filter match will happen */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE *
-		 PRS_GFT_CAM_LINES_NO_MATCH, 0xffffffff);
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE *
-		 PRS_GFT_CAM_LINES_NO_MATCH + REG_SIZE, 0x3ff);
+	ram_line.lo = 0xffffffff;
+	ram_line.hi = 0x3ff;
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
+			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE *
+			  PRS_GFT_CAM_LINES_NO_MATCH,
+			  sizeof(ram_line) / REG_SIZE);
 
 	/* Enable gft search */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
 }
 
+
+#endif /* UNUSED_HSI_FUNC */
+
 /* Configure VF zone size mode */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt, u16 mode,
@@ -1853,10 +1902,9 @@ static u8 cdu_crc8_table[CRC8_TABLE_SIZE];
 /* Calculate and return CDU validation byte per connection type / region /
  * cid
  */
-static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
+static u8 ecore_calc_cdu_validation_byte(struct ecore_hwfn *p_hwfn,
+					 u8 conn_type, u8 region, u32 cid)
 {
-	const u8 validation_cfg = CDU_VALIDATION_DEFAULT_CFG;
-
 	static u8 crc8_table_valid;	/*automatically initialized to 0*/
 	u8 crc, validation_byte = 0;
 	u32 validation_string = 0;
@@ -1873,15 +1921,20 @@ static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
 	 * [7:4]   = Region
 	 * [3:0]   = Type
 	 */
-	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
-		validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
-
-	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
-		validation_string |= ((region & 0xF) << 4);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
+	CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
+	validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
+#endif
 
-	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
-		validation_string |= (conn_type & 0xF);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
+	CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
+	validation_string |= ((region & 0xF) << 4);
+#endif
 
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
+	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
+	validation_string |= (conn_type & 0xF);
+#endif
 	/* Convert to big-endian and calculate CRC8*/
 	data_to_crc = OSAL_BE32_TO_CPU(validation_string);
 
@@ -1898,40 +1951,41 @@ static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
 	 * [6:3]	= connection_type[3:0]
 	 * [2:0]	= crc[2:0]
 	 */
-
-	validation_byte |= ((validation_cfg >>
+	validation_byte |= ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >>
 			     CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7;
 
-	if ((validation_cfg >>
-	     CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
-		validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
-	else
-		validation_byte |= crc & 0x7F;
-
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
+	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
+	validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
+#else
+	validation_byte |= crc & 0x7F;
+#endif
 	return validation_byte;
 }
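Assuming all the compile-time CDU validation configuration bits are enabled, the routine above
reduces to straightforward bit packing; a condensed sketch, with crc8() as a stand-in for
OSAL_CRC8() over the big-endian validation string:

typedef unsigned char u8;
typedef unsigned int u32;

u8 crc8(u32 be_string);	/* hypothetical: CRC8 over the 4 bytes of the BE string */

static u8 calc_validation_byte(u8 conn_type, u8 region, u32 cid)
{
	u32 validation_string;
	u8 crc, byte = 0;

	/* validation string: [31:8] = cid bits, [7:4] = region, [3:0] = type */
	validation_string = (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
	validation_string |= (region & 0xF) << 4;
	validation_string |= conn_type & 0xF;

	crc = crc8(validation_string);

	/* validation byte: [7] = active, [6:3] = type, [2:0] = crc[2:0] */
	byte |= 1 << 7;
	byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);

	return byte;
}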
 
 /* Calcualte and set validation bytes for session context */
-void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
+				       void *p_ctx_mem, u16 ctx_size,
 				       u8 ctx_type, u32 cid)
 {
 	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
 
-	p_ctx = (u8 *)p_ctx_mem;
+	p_ctx = (u8 * const)p_ctx_mem;
+
 	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
 	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
 	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
 
 	OSAL_MEMSET(p_ctx, 0, ctx_size);
 
-	*x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid);
-	*t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid);
-	*u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid);
+	*x_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 3, cid);
+	*t_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 4, cid);
+	*u_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 5, cid);
 }
 
 /* Calcualte and set validation bytes for task context */
-void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size, u8 ctx_type,
-				    u32 tid)
+void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
+				    u16 ctx_size, u8 ctx_type, u32 tid)
 {
 	u8 *p_ctx, *region1_val_ptr;
 
@@ -1940,16 +1994,19 @@ void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size, u8 ctx_type,
 
 	OSAL_MEMSET(p_ctx, 0, ctx_size);
 
-	*region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid);
+	*region1_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 1,
+							  tid);
 }
 
 /* Memset session context to 0 while preserving validation bytes */
-void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
+void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
+			      u32 ctx_size, u8 ctx_type)
 {
 	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
 	u8 x_val, t_val, u_val;
 
-	p_ctx = (u8 *)p_ctx_mem;
+	p_ctx = (u8 * const)p_ctx_mem;
+
 	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
 	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
 	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
@@ -1966,7 +2023,8 @@ void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
 }
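The two memset helpers follow the same save/zero/restore idea: read back the previously
computed validation byte(s), wipe the context, then put the byte(s) back. A generic sketch with
a single preserved byte:

#include <string.h>

/* Zero a context buffer while keeping one validation byte in place. */
static void memset_preserve_byte(unsigned char *buf, unsigned int size,
				 unsigned int val_offset)
{
	unsigned char saved = buf[val_offset];

	memset(buf, 0, size);
	buf[val_offset] = saved;
}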
 
 /* Memset task context to 0 while preserving validation bytes */
-void ecore_memset_task_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
+void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
+			   u32 ctx_size, u8 ctx_type)
 {
 	u8 *p_ctx, *region1_val_ptr;
 	u8 region1_val;
@@ -1987,62 +2045,15 @@ void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
 {
 	u32 ctx_validation;
 
-	/* Enable validation for connection region 3 - bits [31:24] */
-	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 24;
+	/* Enable validation for connection region 3: CCFC_CTX_VALID0[31:24] */
+	ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 24;
 	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation);
 
-	/* Enable validation for connection region 5 - bits [15: 8] */
-	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	/* Enable validation for connection region 5: CCFC_CTX_VALID1[15:8] */
+	ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 8;
 	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation);
 
-	/* Enable validation for connection region 1 - bits [15: 8] */
-	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	/* Enable validation for connection region 1: TCFC_CTX_VALID0[15:8] */
+	ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 8;
 	ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
 }
-
-
-/*******************************************************************************
- * File name : rdma_init.c
- * Author    : Michael Shteinbok
- *******************************************************************************
- *******************************************************************************
- * Description:
- * RDMA HSI functions
- *
- *******************************************************************************
- * Notes: This is the input to the auto generated file drv_init_fw_funcs.c
- *
- *******************************************************************************
- */
-static u32 ecore_get_rdma_assert_ram_addr(struct ecore_hwfn *p_hwfn,
-					  u8 storm_id)
-{
-	switch (storm_id) {
-	case 0: return TSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       TSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 1: return MSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       MSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 2: return USEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       USTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 3: return XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       XSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 4: return YSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       YSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 5: return PSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       PSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-
-	default: return 0;
-	}
-}
-
-void ecore_set_rdma_error_level(struct ecore_hwfn *p_hwfn,
-				struct ecore_ptt *p_ptt,
-				u8 assert_level[NUM_STORMS])
-{
-	u8 storm_id;
-	for (storm_id = 0; storm_id < NUM_STORMS; storm_id++) {
-		u32 ram_addr = ecore_get_rdma_assert_ram_addr(p_hwfn, storm_id);
-
-		ecore_wr(p_hwfn, p_ptt, ram_addr, assert_level[storm_id]);
-	}
-}
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 3503a90c1..1d1b107c4 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -6,7 +6,20 @@
 
 #ifndef _INIT_FW_FUNCS_H
 #define _INIT_FW_FUNCS_H
-/* Forward declarations */
+#include "ecore_hsi_common.h"
+#include "ecore_hsi_eth.h"
+
+/* Physical memory descriptor */
+struct phys_mem_desc {
+	dma_addr_t phys_addr;
+	void *virt_addr;
+	u32 size; /* In bytes */
+};
+
+/* Returns the VOQ based on port and TC */
+#define VOQ(port, tc, max_phys_tcs_per_port) \
+	((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + (port) : \
+	 (port) * (max_phys_tcs_per_port) + (tc))
 
 struct init_qm_pq_params;
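As a concrete example of the VOQ() macro above: with max_phys_tcs_per_port = 4, VOQ(1, 2, 4)
evaluates to 1 * 4 + 2 = 6, while the pure loopback TC is placed past the physical range, e.g.
VOQ(1, PURE_LB_TC, 4) = NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + 1 (numbers chosen only for
illustration).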
 
@@ -16,6 +29,7 @@ struct init_qm_pq_params;
  * Returns the required host memory size in 4KB units.
  * Must be called before all QM init HSI functions.
  *
+ * @param p_hwfn -		HW device data
  * @param num_pf_cids - number of connections used by this PF
  * @param num_vf_cids -	number of connections used by VFs of this PF
  * @param num_tids -	number of tasks used by this PF
@@ -24,7 +38,8 @@ struct init_qm_pq_params;
  *
  * @return The required host memory size in 4KB units.
  */
-u32 ecore_qm_pf_mem_size(u32 num_pf_cids,
+u32 ecore_qm_pf_mem_size(struct ecore_hwfn *p_hwfn,
+			 u32 num_pf_cids,
 						 u32 num_vf_cids,
 						 u32 num_tids,
 						 u16 num_pf_pqs,
@@ -39,20 +54,24 @@ u32 ecore_qm_pf_mem_size(u32 num_pf_cids,
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
  * @param pf_rl_en		- enable per-PF rate limiters
  * @param pf_wfq_en		- enable per-PF WFQ
- * @param vport_rl_en		- enable per-VPORT rate limiters
+ * @param global_rl_en -	  enable global rate limiters
  * @param vport_wfq_en		- enable per-VPORT WFQ
- * @param port_params - array of size MAX_NUM_PORTS with params for each port
+ * @param port_params -		  array with parameters for each port.
+ * @param global_rl_params -	  array with parameters for each global RL.
+ *				  If OSAL_NULL, global RLs are not configured.
  *
  * @return 0 on success, -1 on error.
  */
 int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
-			 u8 max_ports_per_engine,
-			 u8 max_phys_tcs_per_port,
-			 bool pf_rl_en,
-			 bool pf_wfq_en,
-			 bool vport_rl_en,
-			 bool vport_wfq_en,
-			 struct init_qm_port_params port_params[MAX_NUM_PORTS]);
+			    u8 max_ports_per_engine,
+			    u8 max_phys_tcs_per_port,
+			    bool pf_rl_en,
+			    bool pf_wfq_en,
+			    bool global_rl_en,
+			    bool vport_wfq_en,
+			  struct init_qm_port_params port_params[MAX_NUM_PORTS],
+			  struct init_qm_global_rl_params
+				 global_rl_params[COMMON_MAX_QM_GLOBAL_RLS]);
 
 /**
  * @brief ecore_qm_pf_rt_init  Prepare QM runtime init values for the PF phase
@@ -76,7 +95,6 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
  *		   be 0. otherwise, the weight must be non-zero.
  * @param pf_rl - rate limit in Mb/sec units. a value of 0 means don't
  *                configure. ignored if PF RL is globally disabled.
- * @param link_speed -		  link speed in Mbps.
  * @param pq_params - array of size (num_pf_pqs+num_vf_pqs) with parameters for
  *                    each Tx PQ associated with the specified PF.
  * @param vport_params - array of size num_vports with parameters for each
@@ -95,11 +113,10 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			u16 start_pq,
 			u16 num_pf_pqs,
 			u16 num_vf_pqs,
-			u8 start_vport,
-			u8 num_vports,
+			u16 start_vport,
+			u16 num_vports,
 			u16 pf_wfq,
 			u32 pf_rl,
-			u32 link_speed,
 			struct init_qm_pq_params *pq_params,
 			struct init_qm_vport_params *vport_params);
 
@@ -141,14 +158,30 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
  * @param first_tx_pq_id- An array containing the first Tx PQ ID associated
  *                        with the VPORT for each TC. This array is filled by
  *                        ecore_qm_pf_rt_init
- * @param vport_wfq		- WFQ weight. Must be non-zero.
+ * @param wfq -		   WFQ weight. Must be non-zero.
  *
  * @return 0 on success, -1 on error.
  */
 int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 first_tx_pq_id[NUM_OF_TCS],
-						 u16 vport_wfq);
+			 u16 wfq);
+
+/**
+ * @brief ecore_init_global_rl - Initializes the rate limit of the specified
+ * rate limiter.
+ *
+ * @param p_hwfn -		HW device data
+ * @param p_ptt -		ptt window used for writing the registers
+ * @param rl_id -	RL ID
+ * @param rate_limit -	rate limit in Mb/sec units
+ *
+ * @return 0 on success, -1 on error.
+ */
+int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 u16 rl_id,
+			 u32 rate_limit);
 
 /**
  * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
@@ -283,8 +316,9 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 
 /**
  * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
- *                                    port
+ * port.
  *
+ * @param p_hwfn -       HW device data
  * @param p_ptt     - ptt window used for writing the registers.
  * @param dest_port - vxlan destination udp port.
  */
@@ -295,6 +329,7 @@ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 /**
  * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
  *
+ * @param p_hwfn -      HW device data
  * @param p_ptt		- ptt window used for writing the registers.
  * @param vxlan_enable	- vxlan enable flag.
  */
@@ -305,6 +340,7 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
+ * @param p_hwfn -        HW device data
  * @param p_ptt          - ptt window used for writing the registers.
  * @param eth_gre_enable - eth GRE enable enable flag.
  * @param ip_gre_enable  - IP GRE enable enable flag.
@@ -318,6 +354,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
  * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination
  *                                     udp port
  *
+ * @param p_hwfn -       HW device data
  * @param p_ptt     - ptt window used for writing the registers.
  * @param dest_port - geneve destination udp port.
  */
@@ -326,8 +363,9 @@ void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				u16 dest_port);
 
 /**
- * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
+ * @brief ecore_set_geneve_enable - enable or disable GENEVE tunnel in HW
  *
+ * @param p_hwfn -         HW device data
  * @param p_ptt             - ptt window used for writing the registers.
  * @param eth_geneve_enable - eth GENEVE enable enable flag.
  * @param ip_geneve_enable  - IP GENEVE enable enable flag.
@@ -347,7 +385,7 @@ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
 
 /**
- * @brief ecore_gft_disable - Disable and GFT
+ * @brief ecore_gft_disable - Disable GFT
  *
  * @param p_hwfn -   HW device data
  * @param p_ptt -   ptt window used for writing the registers.
@@ -360,6 +398,7 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
 /**
  * @brief ecore_gft_config - Enable and configure HW for GFT
 *
+ * @param p_hwfn -   HW device data
 * @param p_ptt	- ptt window used for writing the registers.
  * @param pf_id - pf on which to enable GFT.
 * @param tcp	- set profile tcp packets.
@@ -382,12 +421,13 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
 *                                         used before first ETH queue started.
 *
-*
+ * @param p_hwfn -      HW device data
 * @param p_ptt        -  ptt window used for writing the registers. Don't care
-*                        if runtime_init used
+ *           if runtime_init used.
 * @param mode         -  VF zone size mode. Use enum vf_zone_size_mode.
-* @param runtime_init -  Set 1 to init runtime registers in engine phase. Set 0
-*                        if VF zone size mode configured after engine phase.
+ * @param runtime_init - Set 1 to init runtime registers in engine phase.
+ *           Set 0 if VF zone size mode configured after engine
+ *           phase.
 */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
 				    *p_ptt, u16 mode, bool runtime_init);
@@ -396,6 +436,7 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
  * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
  * VF zone size mode.
 *
+ * @param p_hwfn -         HW device data
 * @param stat_cnt_id         -  statistic counter id
 * @param vf_zone_size_mode   -  VF zone size mode. Use enum vf_zone_size_mode.
 */
@@ -406,6 +447,7 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
  * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
  * size mode.
 *
+ * @param p_hwfn -           HW device data
 * @param vf_id               -  vf id.
 * @param vf_queue_id         -  per VF rx queue id.
 * @param vf_zone_size_mode   -  vf zone size mode. Use enum vf_zone_size_mode.
@@ -416,6 +458,7 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
  * @brief ecore_enable_context_validation - Enable and configure context
  *                                          validation.
  *
+ * @param p_hwfn -   HW device data
  * @param p_ptt - ptt window used for writing the registers.
  */
 void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
@@ -424,12 +467,14 @@ void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
  * @brief ecore_calc_session_ctx_validation - Calcualte validation byte for
  * session context.
  *
+ * @param p_hwfn -		HW device data
  * @param p_ctx_mem -	pointer to context memory.
  * @param ctx_size -	context size.
  * @param ctx_type -	context type.
  * @param cid -		context cid.
  */
-void ecore_calc_session_ctx_validation(void *p_ctx_mem,
+void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
+				       void *p_ctx_mem,
 				       u16 ctx_size,
 				       u8 ctx_type,
 				       u32 cid);
@@ -438,12 +483,14 @@ void ecore_calc_session_ctx_validation(void *p_ctx_mem,
  * @brief ecore_calc_task_ctx_validation - Calcualte validation byte for task
  * context.
  *
+ * @param p_hwfn -		HW device data
  * @param p_ctx_mem -	pointer to context memory.
  * @param ctx_size -	context size.
  * @param ctx_type -	context type.
  * @param tid -		    context tid.
  */
-void ecore_calc_task_ctx_validation(void *p_ctx_mem,
+void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn,
+				    void *p_ctx_mem,
 				    u16 ctx_size,
 				    u8 ctx_type,
 				    u32 tid);
@@ -457,18 +504,22 @@ void ecore_calc_task_ctx_validation(void *p_ctx_mem,
  * @param ctx_size -  size to initialzie.
  * @param ctx_type -  context type.
  */
-void ecore_memset_session_ctx(void *p_ctx_mem,
+void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn,
+			      void *p_ctx_mem,
 			      u32 ctx_size,
 			      u8 ctx_type);
+
 /**
  * @brief ecore_memset_task_ctx - Memset task context to 0 while preserving
  * validation bytes.
  *
+ * @param p_hwfn -		HW device data
  * @param p_ctx_mem - pointer to context memory.
  * @param ctx_size -  size to initialzie.
  * @param ctx_type -  context type.
  */
-void ecore_memset_task_ctx(void *p_ctx_mem,
+void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn,
+			   void *p_ctx_mem,
 			   u32 ctx_size,
 			   u8 ctx_type);
 
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index ad8570a08..ea964ea2f 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -15,7 +15,6 @@
 
 #include "ecore_iro_values.h"
 #include "ecore_sriov.h"
-#include "ecore_gtt_values.h"
 #include "reg_addr.h"
 #include "ecore_init_ops.h"
 
@@ -24,7 +23,7 @@
 
 void ecore_init_iro_array(struct ecore_dev *p_dev)
 {
-	p_dev->iro_arr = iro_arr;
+	p_dev->iro_arr = iro_arr + E4_IRO_ARR_OFFSET;
 }
 
 /* Runtime configuration helpers */
@@ -473,9 +472,9 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 				    int phase, int phase_id, int modes)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	bool b_dmae = (phase != PHASE_ENGINE);
 	u32 cmd_num, num_init_ops;
 	union init_op *init;
-	bool b_dmae = false;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	num_init_ops = p_dev->fw_data->init_ops_size;
@@ -511,7 +510,6 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 		case INIT_OP_IF_PHASE:
 			cmd_num += ecore_init_cmd_phase(&cmd->if_phase, phase,
 							phase_id);
-			b_dmae = GET_FIELD(data, INIT_IF_PHASE_OP_DMAE_ENABLE);
 			break;
 		case INIT_OP_DELAY:
 			/* ecore_init_run is always invoked from
@@ -522,6 +520,9 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 
 		case INIT_OP_CALLBACK:
 			rc = ecore_init_cmd_cb(p_hwfn, p_ptt, &cmd->callback);
+			if (phase == PHASE_ENGINE &&
+			    cmd->callback.callback_id == DMAE_READY_CB)
+				b_dmae = true;
 			break;
 		}
 
@@ -567,11 +568,17 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
 	fw->modes_tree_buf = (u8 *)((uintptr_t)(fw_data + offset));
 	len = buf_hdr[BIN_BUF_INIT_CMD].length;
 	fw->init_ops_size = len / sizeof(struct init_raw_op);
+	offset = buf_hdr[BIN_BUF_INIT_OVERLAYS].offset;
+	fw->fw_overlays = (u32 *)(fw_data + offset);
+	len = buf_hdr[BIN_BUF_INIT_OVERLAYS].length;
+	fw->fw_overlays_len = len;
 #else
 	fw->init_ops = (union init_op *)init_ops;
 	fw->arr_data = (u32 *)init_val;
 	fw->modes_tree_buf = (u8 *)modes_tree_buf;
 	fw->init_ops_size = init_ops_size;
+	fw->fw_overlays = fw_overlays;
+	fw->fw_overlays_len = sizeof(fw_overlays);
 #endif
 
 	return ECORE_SUCCESS;
diff --git a/drivers/net/qede/base/ecore_init_ops.h b/drivers/net/qede/base/ecore_init_ops.h
index 21e433309..0cbf293b3 100644
--- a/drivers/net/qede/base/ecore_init_ops.h
+++ b/drivers/net/qede/base/ecore_init_ops.h
@@ -95,6 +95,6 @@ void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
 			     osal_size_t       size);
 
 #define STORE_RT_REG_AGG(hwfn, offset, val)			\
-	ecore_init_store_rt_agg(hwfn, offset, (u32 *)&val, sizeof(val))
+	ecore_init_store_rt_agg(hwfn, offset, (u32 *)&(val), sizeof(val))
 
 #endif /* __ECORE_INIT_OPS__ */
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index c8536380c..b1e127849 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -28,8 +28,10 @@ struct ecore_pi_info {
 
 struct ecore_sb_sp_info {
 	struct ecore_sb_info sb_info;
-	/* per protocol index data */
+
+	/* Per protocol index data */
 	struct ecore_pi_info pi_info_arr[MAX_PIS_PER_SB];
+	osal_size_t pi_info_arr_size;
 };
 
 enum ecore_attention_type {
@@ -58,10 +60,10 @@ struct aeu_invert_reg_bit {
 #define ATTENTION_OFFSET_MASK		(0x000ff000)
 #define ATTENTION_OFFSET_SHIFT		(12)
 
-#define ATTENTION_BB_MASK		(0x00700000)
+#define ATTENTION_BB_MASK		(0xf)
 #define ATTENTION_BB_SHIFT		(20)
 #define ATTENTION_BB(value)		((value) << ATTENTION_BB_SHIFT)
-#define ATTENTION_BB_DIFFERENT		(1 << 23)
+#define ATTENTION_BB_DIFFERENT		(1 << 24)
 
 #define	ATTENTION_CLEAR_ENABLE		(1 << 28)
 	unsigned int flags;
@@ -606,6 +608,8 @@ enum aeu_invert_reg_special_type {
 	AEU_INVERT_REG_SPECIAL_CNIG_1,
 	AEU_INVERT_REG_SPECIAL_CNIG_2,
 	AEU_INVERT_REG_SPECIAL_CNIG_3,
+	AEU_INVERT_REG_SPECIAL_MCP_UMP_TX,
+	AEU_INVERT_REG_SPECIAL_MCP_SCPAD,
 	AEU_INVERT_REG_SPECIAL_MAX,
 };
 
@@ -615,6 +619,8 @@ aeu_descs_special[AEU_INVERT_REG_SPECIAL_MAX] = {
 	{"CNIG port 1", ATTENTION_SINGLE, OSAL_NULL, BLOCK_CNIG},
 	{"CNIG port 2", ATTENTION_SINGLE, OSAL_NULL, BLOCK_CNIG},
 	{"CNIG port 3", ATTENTION_SINGLE, OSAL_NULL, BLOCK_CNIG},
+	{"MCP Latched ump_tx", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	{"MCP Latched scratchpad", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
 };
 
 /* Notice aeu_invert_reg must be defined in the same order of bits as HW; */
@@ -678,10 +684,15 @@ static struct aeu_invert_reg aeu_descs[NUM_ATTN_REGS] = {
 	  {"AVS stop status ready", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
 	  {"MSTAT", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
 	  {"MSTAT per-path", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
-	  {"Reserved %d", (6 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
-	   MAX_BLOCK_ID},
+			{"OPTE", ATTENTION_PAR, OSAL_NULL, BLOCK_OPTE},
+			{"MCP", ATTENTION_PAR, OSAL_NULL, BLOCK_MCP},
+			{"MS", ATTENTION_SINGLE, OSAL_NULL, BLOCK_MS},
+			{"UMAC", ATTENTION_SINGLE, OSAL_NULL, BLOCK_UMAC},
+			{"LED", ATTENTION_SINGLE, OSAL_NULL, BLOCK_LED},
+			{"BMBN", ATTENTION_SINGLE, OSAL_NULL, BLOCK_BMBN},
 	  {"NIG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NIG},
 	  {"BMB/OPTE/MCP", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB},
+			{"BMB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB},
 	  {"BTB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BTB},
 	  {"BRB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BRB},
 	  {"PRS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRS},
@@ -784,10 +795,17 @@ static struct aeu_invert_reg aeu_descs[NUM_ATTN_REGS] = {
 	  {"MCP Latched memory", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
 	  {"MCP Latched scratchpad cache", ATTENTION_SINGLE, OSAL_NULL,
 	   MAX_BLOCK_ID},
-	  {"MCP Latched ump_tx", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"MCP Latched scratchpad", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"Reserved %d", (28 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
-	   MAX_BLOCK_ID},
+	  {"AVS", ATTENTION_PAR | ATTENTION_BB_DIFFERENT |
+	   ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_UMP_TX), OSAL_NULL,
+	   BLOCK_AVS_WRAP},
+	  {"AVS", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT |
+	   ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_SCPAD), OSAL_NULL,
+	   BLOCK_AVS_WRAP},
+	  {"PCIe core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+	  {"PCIe link up", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+	  {"PCIe hot reset", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+	  {"Reserved %d", (9 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
+	    MAX_BLOCK_ID},
 	  }
 	 },
 
@@ -955,14 +973,22 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
 	/* @DPDK */
 	/* Reach assertion if attention is fatal */
 	if (b_fatal || (strcmp(p_bit_name, "PGLUE B RBC") == 0)) {
+#ifndef ASIC_ONLY
+		DP_NOTICE(p_hwfn, !CHIP_REV_IS_EMUL(p_hwfn->p_dev),
+			  "`%s': Fatal attention\n", p_bit_name);
+#else
 		DP_NOTICE(p_hwfn, true, "`%s': Fatal attention\n",
 			  p_bit_name);
+#endif
 
 		ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN);
 	}
 
 	/* Prevent this Attention from being asserted in the future */
 	if (p_aeu->flags & ATTENTION_CLEAR_ENABLE ||
+#ifndef ASIC_ONLY
+	    CHIP_REV_IS_EMUL(p_hwfn->p_dev) ||
+#endif
 	    p_hwfn->p_dev->attn_clr_en) {
 		u32 val;
 		u32 mask = ~bitmask;
@@ -1013,6 +1039,13 @@ static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn,
 		p_aeu->bit_name);
 }
 
+#define MISC_REG_AEU_AFTER_INVERT_IGU(n) \
+	(MISC_REG_AEU_AFTER_INVERT_1_IGU + (n) * 0x4)
+
+#define MISC_REG_AEU_ENABLE_IGU_OUT(n, group) \
+	(MISC_REG_AEU_ENABLE1_IGU_OUT_0 + (n) * 0x4 + \
+	 (group) * 0x4 * NUM_ATTN_REGS)
+
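As a quick expansion check, MISC_REG_AEU_ENABLE_IGU_OUT(2, 3) resolves to
MISC_REG_AEU_ENABLE1_IGU_OUT_0 + 2 * 0x4 + 3 * 0x4 * NUM_ATTN_REGS, i.e. the third enable
register of output group 3, which is the same address the open-coded arithmetic removed further
down used to compute.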
 /**
  * @brief - handles deassertion of previously asserted attentions.
  *
@@ -1032,8 +1065,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 	/* Read the attention registers in the AEU */
 	for (i = 0; i < NUM_ATTN_REGS; i++) {
 		aeu_inv_arr[i] = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
-					  MISC_REG_AEU_AFTER_INVERT_1_IGU +
-					  i * 0x4);
+					  MISC_REG_AEU_AFTER_INVERT_IGU(i));
 		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
 			   "Deasserted bits [%d]: %08x\n", i, aeu_inv_arr[i]);
 	}
@@ -1043,7 +1075,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 		struct aeu_invert_reg *p_aeu = &sb_attn_sw->p_aeu_desc[i];
 		u32 parities;
 
-		aeu_en = MISC_REG_AEU_ENABLE1_IGU_OUT_0 + i * sizeof(u32);
+		aeu_en = MISC_REG_AEU_ENABLE_IGU_OUT(i, 0);
 		en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en);
 		parities = sb_attn_sw->parity_mask[i] & aeu_inv_arr[i] & en;
 
@@ -1074,9 +1106,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 		for (i = 0; i < NUM_ATTN_REGS; i++) {
 			u32 bits;
 
-			aeu_en = MISC_REG_AEU_ENABLE1_IGU_OUT_0 +
-				 i * sizeof(u32) +
-				 k * sizeof(u32) * NUM_ATTN_REGS;
+			aeu_en = MISC_REG_AEU_ENABLE_IGU_OUT(i, k);
 			en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en);
 			bits = aeu_inv_arr[i] & en;
 
@@ -1249,7 +1279,6 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 	struct ecore_pi_info *pi_info = OSAL_NULL;
 	struct ecore_sb_attn_info *sb_attn;
 	struct ecore_sb_info *sb_info;
-	int arr_size;
 	u16 rc = 0;
 
 	if (!p_hwfn)
@@ -1261,7 +1290,6 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 	}
 
 	sb_info = &p_hwfn->p_sp_sb->sb_info;
-	arr_size = OSAL_ARRAY_SIZE(p_hwfn->p_sp_sb->pi_info_arr);
 	if (!sb_info) {
 		DP_ERR(p_hwfn->p_dev,
 		       "Status block is NULL - cannot ack interrupts\n");
@@ -1326,14 +1354,14 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 		ecore_int_attentions(p_hwfn);
 
 	if (rc & ECORE_SB_IDX) {
-		int pi;
+		osal_size_t pi;
 
 		/* Since we only looked at the SB index, it's possible more
 		 * than a single protocol-index on the SB incremented.
 		 * Iterate over all configured protocol indices and check
 		 * whether something happened for each.
 		 */
-		for (pi = 0; pi < arr_size; pi++) {
+		for (pi = 0; pi < p_hwfn->p_sp_sb->pi_info_arr_size; pi++) {
 			pi_info = &p_hwfn->p_sp_sb->pi_info_arr[pi];
 			if (pi_info->comp_cb != OSAL_NULL)
 				pi_info->comp_cb(p_hwfn, pi_info->cookie);
@@ -1514,7 +1542,7 @@ static void _ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
 	if (IS_VF(p_hwfn->p_dev))
 		return;/* @@@TBD MichalK- VF CAU... */
 
-	sb_offset = igu_sb_id * MAX_PIS_PER_SB;
+	sb_offset = igu_sb_id * PIS_PER_SB;
 	OSAL_MEMSET(&pi_entry, 0, sizeof(struct cau_pi_entry));
 
 	SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_PI_TIMESET, timeset);
@@ -1623,7 +1651,7 @@ void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
 {
 	/* zero status block and ack counter */
 	sb_info->sb_ack = 0;
-	OSAL_MEMSET(sb_info->sb_virt, 0, sizeof(*sb_info->sb_virt));
+	OSAL_MEMSET(sb_info->sb_virt, 0, sb_info->sb_size);
 
 	if (IS_PF(p_hwfn->p_dev))
 		ecore_int_cau_conf_sb(p_hwfn, p_ptt, sb_info->sb_phys,
@@ -1706,6 +1734,14 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 				       dma_addr_t sb_phy_addr, u16 sb_id)
 {
 	sb_info->sb_virt = sb_virt_addr;
+	struct status_block *sb_virt;
+
+	sb_virt = (struct status_block *)sb_info->sb_virt;
+
+	sb_info->sb_size = sizeof(*sb_virt);
+	sb_info->sb_pi_array = sb_virt->pi_array;
+	sb_info->sb_prod_index = &sb_virt->prod_index;
+
 	sb_info->sb_phys = sb_phy_addr;
 
 	sb_info->igu_sb_id = ecore_get_igu_sb_id(p_hwfn, sb_id);
@@ -1737,16 +1773,16 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 	/* The igu address will hold the absolute address that needs to be
 	 * written to for a specific status block
 	 */
-	if (IS_PF(p_hwfn->p_dev)) {
+	if (IS_PF(p_hwfn->p_dev))
 		sb_info->igu_addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
-		    GTT_BAR0_MAP_REG_IGU_CMD + (sb_info->igu_sb_id << 3);
+				     GTT_BAR0_MAP_REG_IGU_CMD +
+				     (sb_info->igu_sb_id << 3);
 
-	} else {
-		sb_info->igu_addr =
-		    (u8 OSAL_IOMEM *)p_hwfn->regview +
+	else
+		sb_info->igu_addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
 		    PXP_VF_BAR0_START_IGU +
-		    ((IGU_CMD_INT_ACK_BASE + sb_info->igu_sb_id) << 3);
-	}
+				     ((IGU_CMD_INT_ACK_BASE +
+				       sb_info->igu_sb_id) << 3);
 
 	sb_info->flags |= ECORE_SB_INFO_INIT;
 
@@ -1767,7 +1803,7 @@ enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn,
 
 	/* zero status block and ack counter */
 	sb_info->sb_ack = 0;
-	OSAL_MEMSET(sb_info->sb_virt, 0, sizeof(*sb_info->sb_virt));
+	OSAL_MEMSET(sb_info->sb_virt, 0, sb_info->sb_size);
 
 	if (IS_VF(p_hwfn->p_dev)) {
 		ecore_vf_set_sb_info(p_hwfn, sb_id, OSAL_NULL);
@@ -1816,11 +1852,10 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
 	void *p_virt;
 
 	/* SB struct */
-	p_sb =
-	    OSAL_ALLOC(p_hwfn->p_dev, GFP_KERNEL,
-		       sizeof(*p_sb));
+	p_sb = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_sb));
 	if (!p_sb) {
-		DP_NOTICE(p_hwfn, false, "Failed to allocate `struct ecore_sb_info'\n");
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to allocate `struct ecore_sb_info'\n");
 		return ECORE_NOMEM;
 	}
 
@@ -1838,7 +1873,7 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
 	ecore_int_sb_init(p_hwfn, p_ptt, &p_sb->sb_info,
 			  p_virt, p_phys, ECORE_SP_SB_ID);
 
-	OSAL_MEMSET(p_sb->pi_info_arr, 0, sizeof(p_sb->pi_info_arr));
+	p_sb->pi_info_arr_size = PIS_PER_SB;
 
 	return ECORE_SUCCESS;
 }
@@ -1853,14 +1888,14 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
 	u8 pi;
 
 	/* Look for a free index */
-	for (pi = 0; pi < OSAL_ARRAY_SIZE(p_sp_sb->pi_info_arr); pi++) {
+	for (pi = 0; pi < p_sp_sb->pi_info_arr_size; pi++) {
 		if (p_sp_sb->pi_info_arr[pi].comp_cb != OSAL_NULL)
 			continue;
 
 		p_sp_sb->pi_info_arr[pi].comp_cb = comp_cb;
 		p_sp_sb->pi_info_arr[pi].cookie = cookie;
 		*sb_idx = pi;
-		*p_fw_cons = &p_sp_sb->sb_info.sb_virt->pi_array[pi];
+		*p_fw_cons = &p_sp_sb->sb_info.sb_pi_array[pi];
 		rc = ECORE_SUCCESS;
 		break;
 	}
@@ -1988,10 +2023,9 @@ static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
 				     bool cleanup_set,
 				     u16 opaque_fid)
 {
-	u32 cmd_ctrl = 0, val = 0, sb_bit = 0, sb_bit_addr = 0, data = 0;
-	u32 pxp_addr = IGU_CMD_INT_ACK_BASE + igu_sb_id;
-	u32 sleep_cnt = IGU_CLEANUP_SLEEP_LENGTH;
-	u8 type = 0;		/* FIXME MichalS type??? */
+	u32 data = 0, cmd_ctrl = 0, sb_bit, sb_bit_addr, pxp_addr;
+	u32 sleep_cnt = IGU_CLEANUP_SLEEP_LENGTH, val;
+	u8 type = 0;
 
 	OSAL_BUILD_BUG_ON((IGU_REG_CLEANUP_STATUS_4 -
 			   IGU_REG_CLEANUP_STATUS_0) != 0x200);
@@ -2006,6 +2040,7 @@ static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(data, IGU_CLEANUP_COMMAND_TYPE, IGU_COMMAND_TYPE_SET);
 
 	/* Set the control register */
+	pxp_addr = IGU_CMD_INT_ACK_BASE + igu_sb_id;
 	SET_FIELD(cmd_ctrl, IGU_CTRL_REG_PXP_ADDR, pxp_addr);
 	SET_FIELD(cmd_ctrl, IGU_CTRL_REG_FID, opaque_fid);
 	SET_FIELD(cmd_ctrl, IGU_CTRL_REG_TYPE, IGU_CTRL_CMD_TYPE_WR);
@@ -2077,9 +2112,11 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
 			  igu_sb_id);
 
 	/* Clear the CAU for the SB */
-	for (pi = 0; pi < 12; pi++)
+	for (pi = 0; pi < PIS_PER_SB; pi++)
 		ecore_wr(p_hwfn, p_ptt,
-			 CAU_REG_PI_MEMORY + (igu_sb_id * 12 + pi) * 4, 0);
+			 CAU_REG_PI_MEMORY +
+			 (igu_sb_id * PIS_PER_SB + pi) * 4,
+			 0);
 }
 
 void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn,
@@ -2679,12 +2716,12 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 					  struct ecore_sb_info_dbg *p_info)
 {
 	u16 sbid = p_sb->igu_sb_id;
-	int i;
+	u32 i;
 
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
-	if (sbid > NUM_OF_SBS(p_hwfn->p_dev))
+	if (sbid >= NUM_OF_SBS(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
 	p_info->igu_prod = ecore_rd(p_hwfn, p_ptt,
@@ -2692,10 +2729,10 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
 				    IGU_REG_CONSUMER_MEM + sbid * 4);
 
-	for (i = 0; i < MAX_PIS_PER_SB; i++)
+	for (i = 0; i < PIS_PER_SB; i++)
 		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
 					      CAU_REG_PI_MEMORY +
-					      sbid * 4 * MAX_PIS_PER_SB +
+					      sbid * 4 * PIS_PER_SB +
 					      i * 4);
 
 	return ECORE_SUCCESS;
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index abea2a716..d7b6b86cc 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -24,7 +24,12 @@ enum ecore_int_mode {
 #endif
 
 struct ecore_sb_info {
-	struct status_block *sb_virt;
+	void *sb_virt; /* ptr to "struct status_block_e{4,5}" */
+	u32 sb_size; /* size of "struct status_block_e{4,5}" */
+	__le16 *sb_pi_array; /* ptr to "sb_virt->pi_array" */
+	__le32 *sb_prod_index; /* ptr to "sb_virt->prod_index" */
+#define STATUS_BLOCK_PROD_INDEX_MASK	0xFFFFFF
+
 	dma_addr_t sb_phys;
 	u32 sb_ack;		/* Last given ack */
 	u16 igu_sb_id;
@@ -42,7 +47,7 @@ struct ecore_sb_info {
 struct ecore_sb_info_dbg {
 	u32 igu_prod;
 	u32 igu_cons;
-	u16 pi[MAX_PIS_PER_SB];
+	u16 pi[PIS_PER_SB];
 };
 
 struct ecore_sb_cnt_info {
@@ -64,7 +69,7 @@ static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info)
 
 	/* barrier(); status block is written to by the chip */
 	/* FIXME: need some sort of barrier. */
-	prod = OSAL_LE32_TO_CPU(sb_info->sb_virt->prod_index) &
+	prod = OSAL_LE32_TO_CPU(*sb_info->sb_prod_index) &
 	       STATUS_BLOCK_PROD_INDEX_MASK;
 	if (sb_info->sb_ack != prod) {
 		sb_info->sb_ack = prod;
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 5dcdc84fc..b20d83762 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2323,18 +2323,17 @@ ecore_eth_tx_queue_maxrate(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt,
 			   struct ecore_queue_cid *p_cid, u32 rate)
 {
-	struct ecore_mcp_link_state *p_link;
+	u16 rl_id;
 	u8 vport;
 
 	vport = (u8)ecore_get_qm_vport_idx_rl(p_hwfn, p_cid->rel.queue_id);
-	p_link = &ECORE_LEADING_HWFN(p_hwfn->p_dev)->mcp_info->link_output;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 		   "About to rate limit qm vport %d for queue %d with rate %d\n",
 		   vport, p_cid->rel.queue_id, rate);
 
-	return ecore_init_vport_rl(p_hwfn, p_ptt, vport, rate,
-				   p_link->speed);
+	rl_id = vport; /* The "rl_id" is set as the "vport_id" */
+	return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, rate);
 }
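With the 8.40.25.0 firmware the per-queue cap no longer depends on the reported link speed; the
rate is simply programmed into the global rate limiter whose ID equals the queue's QM vport. A
call-site sketch (the 1000 Mb/s value is arbitrary):

/* Cap the Tx queue described by p_cid at 1000 Mb/s */
rc = ecore_eth_tx_queue_maxrate(p_hwfn, p_ptt, p_cid, 1000);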
 
 #define RSS_TSTORM_UPDATE_STATUS_MAX_POLL_COUNT    100
@@ -2358,8 +2357,7 @@ ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
-	       GTT_BAR0_MAP_REG_TSDM_RAM +
+	addr = (u8 *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
 	       TSTORM_ETH_RSS_UPDATE_OFFSET(p_hwfn->rel_pf_id);
 
 	*(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr);
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index acde81fad..bebf412ed 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -302,6 +302,8 @@ struct ecore_sp_vport_start_params {
 	bool b_err_big_pkt;
 	bool b_err_anti_spoof;
 	bool b_err_ctrl_frame;
+	bool b_en_rgfs;
+	bool b_en_tgfs;
 };
 
 /**
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6559d8040..a5aa07438 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -22,13 +22,23 @@
 #include "ecore_sp_commands.h"
 #include "ecore_cxt.h"
 
-#define CHIP_MCP_RESP_ITER_US 10
-#define EMUL_MCP_RESP_ITER_US (1000 * 1000)
 #define GRCBASE_MCP	0xe00000
 
+#define ECORE_MCP_RESP_ITER_US		10
 #define ECORE_DRV_MB_MAX_RETRIES (500 * 1000)	/* Account for 5 sec */
 #define ECORE_MCP_RESET_RETRIES (50 * 1000)	/* Account for 500 msec */
 
+#ifndef ASIC_ONLY
+/* Non-ASIC:
+ * The waiting interval is multiplied by 100 to reduce the impact of the
+ * built-in delay of 100usec in each ecore_rd().
+ * In addition, a factor of 4 comparing to ASIC is applied.
+ */
+#define ECORE_EMUL_MCP_RESP_ITER_US	(ECORE_MCP_RESP_ITER_US * 100)
+#define ECORE_EMUL_DRV_MB_MAX_RETRIES	((ECORE_DRV_MB_MAX_RETRIES / 100) * 4)
+#define ECORE_EMUL_MCP_RESET_RETRIES	((ECORE_MCP_RESET_RETRIES / 100) * 4)
+#endif
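Plugging in the base values: ECORE_EMUL_MCP_RESP_ITER_US = 10 * 100 = 1000 usec per poll,
ECORE_EMUL_DRV_MB_MAX_RETRIES = (500000 / 100) * 4 = 20000 polls and
ECORE_EMUL_MCP_RESET_RETRIES = (50000 / 100) * 4 = 2000 polls, so an emulated mailbox command
may wait up to roughly 20000 * 1000 usec = 20 seconds, four times the 5 second ASIC budget.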
+
 #define DRV_INNER_WR(_p_hwfn, _p_ptt, _ptr, _offset, _val) \
 	ecore_wr(_p_hwfn, _p_ptt, (_p_hwfn->mcp_info->_ptr + _offset), \
 		 _val)
@@ -186,22 +196,23 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
 						   struct ecore_ptt *p_ptt)
 {
 	struct ecore_mcp_info *p_info = p_hwfn->mcp_info;
+	u32 drv_mb_offsize, mfw_mb_offsize, val;
 	u8 cnt = ECORE_MCP_SHMEM_RDY_MAX_RETRIES;
 	u8 msec = ECORE_MCP_SHMEM_RDY_ITER_MS;
-	u32 drv_mb_offsize, mfw_mb_offsize;
 	u32 mcp_pf_id = MCP_PF_ID(p_hwfn);
 
+	val = ecore_rd(p_hwfn, p_ptt, MCP_REG_CACHE_PAGING_ENABLE);
+	p_info->public_base = ecore_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR);
+	if (!p_info->public_base) {
+		DP_NOTICE(p_hwfn, false,
+			  "The address of the MCP scratch-pad is not configured\n");
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false, "Emulation - assume no MFW\n");
-		p_info->public_base = 0;
-		return ECORE_INVAL;
-	}
+		/* Zeroed "public_base" implies no MFW */
+		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+			DP_INFO(p_hwfn, "Emulation: Assume no MFW\n");
 #endif
-
-	p_info->public_base = ecore_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR);
-	if (!p_info->public_base)
 		return ECORE_INVAL;
+	}
 
 	p_info->public_base |= GRCBASE_MCP;
 
@@ -293,7 +304,7 @@ enum _ecore_status_t ecore_mcp_cmd_init(struct ecore_hwfn *p_hwfn,
 
 	if (ecore_load_mcp_offsets(p_hwfn, p_ptt) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, false, "MCP is not initialized\n");
-		/* Do not free mcp_info here, since public_base indicate that
+		/* Do not free mcp_info here, since "public_base" indicates that
 		 * the MCP is not initialized
 		 */
 		return ECORE_SUCCESS;
@@ -334,14 +345,16 @@ static void ecore_mcp_reread_offsets(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
-	u32 org_mcp_reset_seq, seq, delay = CHIP_MCP_RESP_ITER_US, cnt = 0;
+	u32 prev_generic_por_0, seq, delay = ECORE_MCP_RESP_ITER_US, cnt = 0;
+	u32 retries = ECORE_MCP_RESET_RETRIES;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
-		delay = EMUL_MCP_RESP_ITER_US;
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+		delay = ECORE_EMUL_MCP_RESP_ITER_US;
+		retries = ECORE_EMUL_MCP_RESET_RETRIES;
+	}
 #endif
-
 	if (p_hwfn->mcp_info->b_block_cmd) {
 		DP_NOTICE(p_hwfn, false,
 			  "The MFW is not responsive. Avoid sending MCP_RESET mailbox command.\n");
@@ -351,23 +364,24 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 	/* Ensure that only a single thread is accessing the mailbox */
 	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
 
-	org_mcp_reset_seq = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
+	prev_generic_por_0 = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
 
 	/* Set drv command along with the updated sequence */
 	ecore_mcp_reread_offsets(p_hwfn, p_ptt);
 	seq = ++p_hwfn->mcp_info->drv_mb_seq;
 	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (DRV_MSG_CODE_MCP_RESET | seq));
 
+	/* Give the MFW up to 500 msec (50 * 1000 * 10 usec) to resume */
 	do {
-		/* Wait for MFW response */
 		OSAL_UDELAY(delay);
-		/* Give the FW up to 500 second (50*1000*10usec) */
-	} while ((org_mcp_reset_seq == ecore_rd(p_hwfn, p_ptt,
-						MISCS_REG_GENERIC_POR_0)) &&
-		 (cnt++ < ECORE_MCP_RESET_RETRIES));
 
-	if (org_mcp_reset_seq !=
-	    ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0)) {
+		if (ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0) !=
+		    prev_generic_por_0)
+			break;
+	} while (cnt++ < retries);
+
+	if (ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0) !=
+	    prev_generic_por_0) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "MCP was reset after %d usec\n", cnt * delay);
 	} else {
@@ -380,6 +394,71 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+#ifndef ASIC_ONLY
+static void ecore_emul_mcp_load_req(struct ecore_hwfn *p_hwfn,
+				    struct ecore_mcp_mb_params *p_mb_params)
+{
+	if (GET_MFW_FIELD(p_mb_params->param, DRV_ID_MCP_HSI_VER) !=
+	    1 /* ECORE_LOAD_REQ_HSI_VER_1 */) {
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1;
+		return;
+	}
+
+	if (!loaded)
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_ENGINE;
+	else if (!loaded_port[p_hwfn->port_id])
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_PORT;
+	else
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_FUNCTION;
+
+	/* On CMT, always tell that it's engine */
+	if (ECORE_IS_CMT(p_hwfn->p_dev))
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_ENGINE;
+
+	loaded++;
+	loaded_port[p_hwfn->port_id]++;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load phase: 0x%08x load cnt: 0x%x port id=%d port_load=%d\n",
+		   p_mb_params->mcp_resp, loaded, p_hwfn->port_id,
+		   loaded_port[p_hwfn->port_id]);
+}
+
+static void ecore_emul_mcp_unload_req(struct ecore_hwfn *p_hwfn)
+{
+	loaded--;
+	loaded_port[p_hwfn->port_id]--;
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Unload cnt: 0x%x\n", loaded);
+}
+
+static enum _ecore_status_t
+ecore_emul_mcp_cmd(struct ecore_hwfn *p_hwfn,
+		   struct ecore_mcp_mb_params *p_mb_params)
+{
+	if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	switch (p_mb_params->cmd) {
+	case DRV_MSG_CODE_LOAD_REQ:
+		ecore_emul_mcp_load_req(p_hwfn, p_mb_params);
+		break;
+	case DRV_MSG_CODE_UNLOAD_REQ:
+		ecore_emul_mcp_unload_req(p_hwfn);
+		break;
+	case DRV_MSG_CODE_GET_MFW_FEATURE_SUPPORT:
+	case DRV_MSG_CODE_RESOURCE_CMD:
+	case DRV_MSG_CODE_MDUMP_CMD:
+	case DRV_MSG_CODE_GET_ENGINE_CONFIG:
+	case DRV_MSG_CODE_GET_PPFID_BITMAP:
+		return ECORE_NOTIMPL;
+	default:
+		break;
+	}
+
+	return ECORE_SUCCESS;
+}
+#endif
+
 /* Must be called while cmd_lock is acquired */
 static bool ecore_mcp_has_pending_cmd(struct ecore_hwfn *p_hwfn)
 {
@@ -488,13 +567,18 @@ void ecore_mcp_print_cpu_info(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt)
 {
 	u32 cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2;
+	u32 delay = ECORE_MCP_RESP_ITER_US;
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		delay = ECORE_EMUL_MCP_RESP_ITER_US;
+#endif
 	cpu_mode = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
 	cpu_state = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
 	cpu_pc_0 = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
-	OSAL_UDELAY(CHIP_MCP_RESP_ITER_US);
+	OSAL_UDELAY(delay);
 	cpu_pc_1 = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
-	OSAL_UDELAY(CHIP_MCP_RESP_ITER_US);
+	OSAL_UDELAY(delay);
 	cpu_pc_2 = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
 
 	DP_NOTICE(p_hwfn, false,
@@ -617,15 +701,21 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 {
 	osal_size_t union_data_size = sizeof(union drv_union_data);
 	u32 max_retries = ECORE_DRV_MB_MAX_RETRIES;
-	u32 delay = CHIP_MCP_RESP_ITER_US;
+	u32 usecs = ECORE_MCP_RESP_ITER_US;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
-		delay = EMUL_MCP_RESP_ITER_US;
-	/* There is a built-in delay of 100usec in each MFW response read */
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
-		max_retries /= 10;
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn))
+		return ecore_emul_mcp_cmd(p_hwfn, p_mb_params);
+
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+		max_retries = ECORE_EMUL_DRV_MB_MAX_RETRIES;
+		usecs = ECORE_EMUL_MCP_RESP_ITER_US;
+	}
 #endif
+	if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
+		max_retries = DIV_ROUND_UP(max_retries, 1000);
+		usecs *= 1000;
+	}
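For example, with the ASIC defaults this turns 500000 polls of 10 usec (about 5 seconds of
busy-waiting) into DIV_ROUND_UP(500000, 1000) = 500 polls of 10 msec each, keeping the same
overall timeout while letting the CPU sleep between polls.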
 
 	/* MCP not initialized */
 	if (!ecore_mcp_is_init(p_hwfn)) {
@@ -650,7 +740,7 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 	}
 
 	return _ecore_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries,
-					delay);
+					usecs);
 }
 
 enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
@@ -660,18 +750,6 @@ enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
 
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		if (cmd == DRV_MSG_CODE_UNLOAD_REQ) {
-			loaded--;
-			loaded_port[p_hwfn->port_id]--;
-			DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Unload cnt: 0x%x\n",
-				   loaded);
-		}
-		return ECORE_SUCCESS;
-	}
-#endif
-
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
@@ -745,34 +823,6 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-#ifndef ASIC_ONLY
-static void ecore_mcp_mf_workaround(struct ecore_hwfn *p_hwfn,
-				    u32 *p_load_code)
-{
-	static int load_phase = FW_MSG_CODE_DRV_LOAD_ENGINE;
-
-	if (!loaded)
-		load_phase = FW_MSG_CODE_DRV_LOAD_ENGINE;
-	else if (!loaded_port[p_hwfn->port_id])
-		load_phase = FW_MSG_CODE_DRV_LOAD_PORT;
-	else
-		load_phase = FW_MSG_CODE_DRV_LOAD_FUNCTION;
-
-	/* On CMT, always tell that it's engine */
-	if (ECORE_IS_CMT(p_hwfn->p_dev))
-		load_phase = FW_MSG_CODE_DRV_LOAD_ENGINE;
-
-	*p_load_code = load_phase;
-	loaded++;
-	loaded_port[p_hwfn->port_id]++;
-
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Load phase: %x load cnt: 0x%x port id=%d port_load=%d\n",
-		   *p_load_code, loaded, p_hwfn->port_id,
-		   loaded_port[p_hwfn->port_id]);
-}
-#endif
-
 static bool
 ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role,
 			 enum ecore_override_force_load override_force_load)
@@ -1004,13 +1054,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	u8 mfw_drv_role = 0, mfw_force_cmd;
 	enum _ecore_status_t rc;
 
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		ecore_mcp_mf_workaround(p_hwfn, &p_params->load_code);
-		return ECORE_SUCCESS;
-	}
-#endif
-
 	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
 	in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_DEFAULT;
 	in_params.drv_ver_0 = ECORE_VERSION;
@@ -1166,15 +1209,17 @@ static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn,
 	u32 mfw_path_offsize = ecore_rd(p_hwfn, p_ptt, addr);
 	u32 path_addr = SECTION_ADDR(mfw_path_offsize,
 				     ECORE_PATH_ID(p_hwfn));
-	u32 disabled_vfs[VF_MAX_STATIC / 32];
+	u32 disabled_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
 	int i;
 
+	OSAL_MEM_ZERO(disabled_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES);
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Reading Disabled VF information from [offset %08x],"
 		   " path_addr %08x\n",
 		   mfw_path_offsize, path_addr);
 
-	for (i = 0; i < (VF_MAX_STATIC / 32); i++) {
+	for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++) {
 		disabled_vfs[i] = ecore_rd(p_hwfn, p_ptt,
 					   path_addr +
 					   OFFSETOF(struct public_path,
@@ -1193,16 +1238,11 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u32 *vfs_to_ack)
 {
-	u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
-					PUBLIC_FUNC);
-	u32 mfw_func_offsize = ecore_rd(p_hwfn, p_ptt, addr);
-	u32 func_addr = SECTION_ADDR(mfw_func_offsize,
-				     MCP_PF_ID(p_hwfn));
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
-	int i;
+	u16 i;
 
-	for (i = 0; i < (VF_MAX_STATIC / 32); i++)
+	for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++)
 		DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IOV),
 			   "Acking VFs [%08x,...,%08x] - %08x\n",
 			   i * 32, (i + 1) * 32 - 1, vfs_to_ack[i]);
@@ -1210,7 +1250,7 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
 	mb_params.p_data_src = vfs_to_ack;
-	mb_params.data_src_size = VF_MAX_STATIC / 8;
+	mb_params.data_src_size = (u8)VF_BITMAP_SIZE_IN_BYTES;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
 				     &mb_params);
 	if (rc != ECORE_SUCCESS) {
@@ -1219,13 +1259,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 		return ECORE_TIMEOUT;
 	}
 
-	/* TMP - clear the ACK bits; should be done by MFW */
-	for (i = 0; i < (VF_MAX_STATIC / 32); i++)
-		ecore_wr(p_hwfn, p_ptt,
-			 func_addr +
-			 OFFSETOF(struct public_func, drv_ack_vf_disabled) +
-			 i * sizeof(u32), 0);
-
 	return rc;
 }
 
@@ -1471,8 +1504,11 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 	u32 cmd;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+		if (b_up)
+			OSAL_LINK_UPDATE(p_hwfn);
 		return ECORE_SUCCESS;
+	}
 #endif
 
 	/* Set the shmem configuration according to params */
@@ -1853,6 +1889,13 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	struct mdump_config_stc mdump_config;
 	enum _ecore_status_t rc;
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
+		DP_INFO(p_hwfn, "Emulation: Can't get mdump info\n");
+		return ECORE_NOTIMPL;
+	}
+#endif
+
 	OSAL_MEMSET(p_mdump_info, 0, sizeof(*p_mdump_info));
 
 	addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
@@ -2042,6 +2085,9 @@ ecore_mcp_handle_ufp_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	/* update storm FW with negotiation results */
 	ecore_sp_pf_update_ufp(p_hwfn);
 
+	/* update stag pcp value */
+	ecore_sp_pf_update_stag(p_hwfn);
+
 	return ECORE_SUCCESS;
 }
 
@@ -2159,9 +2205,9 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
 	u32 global_offsize;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false, "Emulation - can't get MFW version\n");
-		return ECORE_SUCCESS;
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
+		DP_INFO(p_hwfn, "Emulation: Can't get MFW version\n");
+		return ECORE_NOTIMPL;
 	}
 #endif
 
@@ -2203,26 +2249,29 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
 					      u32 *p_media_type)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
+	*p_media_type = MEDIA_UNSPECIFIED;
 
 	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
 	if (!ecore_mcp_is_init(p_hwfn)) {
+#ifndef ASIC_ONLY
+		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+			DP_INFO(p_hwfn, "Emulation: Can't get media type\n");
+			return ECORE_NOTIMPL;
+		}
+#endif
 		DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
 		return ECORE_BUSY;
 	}
 
-	if (!p_ptt) {
-		*p_media_type = MEDIA_UNSPECIFIED;
-		rc = ECORE_INVAL;
-	} else {
-		*p_media_type = ecore_rd(p_hwfn, p_ptt,
-					 p_hwfn->mcp_info->port_addr +
-					 OFFSETOF(struct public_port,
-						  media_type));
-	}
+	if (!p_ptt)
+		return ECORE_INVAL;
+
+	*p_media_type = ecore_rd(p_hwfn, p_ptt,
+				 p_hwfn->mcp_info->port_addr +
+				 OFFSETOF(struct public_port, media_type));
 
 	return ECORE_SUCCESS;
 }
@@ -2626,9 +2675,9 @@ enum _ecore_status_t ecore_mcp_get_flash_size(struct ecore_hwfn *p_hwfn,
 	u32 flash_size;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false, "Emulation - can't get flash size\n");
-		return ECORE_INVAL;
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
+		DP_INFO(p_hwfn, "Emulation: Can't get flash size\n");
+		return ECORE_NOTIMPL;
 	}
 #endif
 
@@ -2725,6 +2774,16 @@ enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
 					      u8 vf_id, u8 num)
 {
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
+		DP_INFO(p_hwfn,
+			"Emulation: Avoid sending the %s mailbox command\n",
+			ECORE_IS_BB(p_hwfn->p_dev) ? "CFG_VF_MSIX" :
+						     "CFG_PF_VFS_MSIX");
+		return ECORE_SUCCESS;
+	}
+#endif
+
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		return ecore_mcp_config_vf_msix_bb(p_hwfn, p_ptt, vf_id, num);
 	else
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 2c052b7fa..185cc2339 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -75,11 +75,16 @@ struct ecore_mcp_mb_params {
 	u32 cmd;
 	u32 param;
 	void *p_data_src;
-	u8 data_src_size;
 	void *p_data_dst;
-	u8 data_dst_size;
 	u32 mcp_resp;
 	u32 mcp_param;
+	u8 data_src_size;
+	u8 data_dst_size;
+	u32 flags;
+#define ECORE_MB_FLAG_CAN_SLEEP         (0x1 << 0)
+#define ECORE_MB_FLAG_AVOID_BLOCK       (0x1 << 1)
+#define ECORE_MB_FLAGS_IS_SET(params, flag) \
+	((params) != OSAL_NULL && ((params)->flags & ECORE_MB_FLAG_##flag))
 };
 
 struct ecore_drv_tlv_hdr {
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index f91b25e20..64509f7cc 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -62,6 +62,7 @@ struct ecore_iscsi_pf_params {
 	u8		num_uhq_pages_in_ring;
 	u8		num_queues;
 	u8		log_page_size;
+	u8		log_page_size_conn;
 	u8		rqe_log_size;
 	u8		max_fin_rt;
 	u8		gl_rq_pi;
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 49a5ff552..9860a62b5 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -355,14 +355,16 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 		p_ramrod->outer_tag_config.inner_to_outer_pri_map[i] = i;
 
 	/* enable_stag_pri_change should be set if port is in BD mode or,
-	 * UFP with Host Control mode or, UFP with DCB over base interface.
+	 * UFP with Host Control mode.
 	 */
 	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) {
-		if ((p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS) ||
-		    (p_hwfn->p_dcbx_info->results.dcbx_enabled))
+		if (p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS)
 			p_ramrod->outer_tag_config.enable_stag_pri_change = 1;
 		else
 			p_ramrod->outer_tag_config.enable_stag_pri_change = 0;
+
+		p_ramrod->outer_tag_config.outer_tag.tci |=
+			OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13));
 	}
 
 	/* Place EQ address in RAMROD */
@@ -459,8 +461,7 @@ enum _ecore_status_t ecore_sp_pf_update_ufp(struct ecore_hwfn *p_hwfn)
 		return rc;
 
 	p_ent->ramrod.pf_update.update_enable_stag_pri_change = true;
-	if ((p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS) ||
-	    (p_hwfn->p_dcbx_info->results.dcbx_enabled))
+	if (p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS)
 		p_ent->ramrod.pf_update.enable_stag_pri_change = 1;
 	else
 		p_ent->ramrod.pf_update.enable_stag_pri_change = 0;
@@ -637,6 +638,10 @@ enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn)
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
+	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits))
+		p_ent->ramrod.pf_update.mf_vlan |=
+			OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13));
+
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 486b21dd9..6c386821f 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -185,11 +185,26 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 /***************************************************************************
  * HSI access
  ***************************************************************************/
+
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK			0x1
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT			0
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK		0x1
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT		7
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK		0x1
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT		4
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK	0x1
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT	6
+
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 				    struct ecore_spq *p_spq)
 {
+	__le32 *p_spq_base_lo, *p_spq_base_hi;
+	struct regpair *p_consolid_base_addr;
+	u8 *p_flags1, *p_flags9, *p_flags10;
 	struct core_conn_context *p_cxt;
 	struct ecore_cxt_info cxt_info;
+	u32 core_conn_context_size;
+	__le16 *p_physical_q0;
 	u16 physical_q;
 	enum _ecore_status_t rc;
 
@@ -197,41 +212,39 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_cxt_get_cid_info(p_hwfn, &cxt_info);
 
-	if (rc < 0) {
+	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Cannot find context info for cid=%d\n",
 			  p_spq->cid);
 		return;
 	}
 
 	p_cxt = cxt_info.p_cxt;
+	core_conn_context_size = sizeof(*p_cxt);
+	p_flags1 = &p_cxt->xstorm_ag_context.flags1;
+	p_flags9 = &p_cxt->xstorm_ag_context.flags9;
+	p_flags10 = &p_cxt->xstorm_ag_context.flags10;
+	p_physical_q0 = &p_cxt->xstorm_ag_context.physical_q0;
+	p_spq_base_lo = &p_cxt->xstorm_st_context.spq_base_lo;
+	p_spq_base_hi = &p_cxt->xstorm_st_context.spq_base_hi;
+	p_consolid_base_addr = &p_cxt->xstorm_st_context.consolid_base_addr;
 
 	/* @@@TBD we zero the context until we have ilt_reset implemented. */
-	OSAL_MEM_ZERO(p_cxt, sizeof(*p_cxt));
-
-	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
-		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-			  XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
-		SET_FIELD(p_cxt->xstorm_ag_context.flags1,
-			  XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
-		/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-		 *	  E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
-		 */
-		SET_FIELD(p_cxt->xstorm_ag_context.flags9,
-			  XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
-	}
+	OSAL_MEM_ZERO(p_cxt, core_conn_context_size);
+
+	SET_FIELD(*p_flags10, XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
+	SET_FIELD(*p_flags1, XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
+	SET_FIELD(*p_flags9, XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
 
 	/* CDU validation - FIXME currently disabled */
 
 	/* QM physical queue */
 	physical_q = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB);
-	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(physical_q);
+	*p_physical_q0 = OSAL_CPU_TO_LE16(physical_q);
 
-	p_cxt->xstorm_st_context.spq_base_lo =
-	    DMA_LO_LE(p_spq->chain.p_phys_addr);
-	p_cxt->xstorm_st_context.spq_base_hi =
-	    DMA_HI_LE(p_spq->chain.p_phys_addr);
+	*p_spq_base_lo = DMA_LO_LE(p_spq->chain.p_phys_addr);
+	*p_spq_base_hi = DMA_HI_LE(p_spq->chain.p_phys_addr);
 
-	DMA_REGPAIR_LE(p_cxt->xstorm_st_context.consolid_base_addr,
+	DMA_REGPAIR_LE(*p_consolid_base_addr,
 		       p_hwfn->p_consq->chain.p_phys_addr);
 }
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 264217252..deee04ac4 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -906,7 +906,7 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
  *
  * @brief ecore_iov_config_perm_table - configure the permission
  *      zone table.
- *      In E4, queue zone permission table size is 320x9. There
+ *      The queue zone permission table size is 320x9. There
  *      are 320 VF queues for single engine device (256 for dual
  *      engine device), and each entry has the following format:
  *      {Valid, VF[7:0]}
@@ -967,6 +967,9 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 
 	for (qid = 0; qid < num_rx_queues; qid++) {
 		p_block = ecore_get_igu_free_sb(p_hwfn, false);
+		if (!p_block)
+			continue;
+
 		vf->igu_sbs[qid] = p_block->igu_sb_id;
 		p_block->status &= ~ECORE_IGU_STATUS_FREE;
 		SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER, qid);
@@ -1064,6 +1067,15 @@ void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
 	p_bulletin->capability_speed = p_caps->speed_capabilities;
 }
 
+#ifndef ASIC_ONLY
+static void ecore_emul_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	/* Increase the maximum number of DORQ FIFO entries used by child VFs */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_USAGE_CNT_LIM, 0x3ec);
+}
+#endif
+
 enum _ecore_status_t
 ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
@@ -1188,18 +1200,39 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			   &link_params, &link_state, &link_caps);
 
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-	if (rc == ECORE_SUCCESS) {
-		vf->b_init = true;
-		p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] |=
+	vf->b_init = true;
+#ifndef REMOVE_DBG
+	p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] |=
 			(1ULL << (vf->relative_vf_id % 64));
+#endif
 
-		if (IS_LEAD_HWFN(p_hwfn))
-			p_hwfn->p_dev->p_iov_info->num_vfs++;
+	if (IS_LEAD_HWFN(p_hwfn))
+		p_hwfn->p_dev->p_iov_info->num_vfs++;
+
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		ecore_emul_iov_init_hw_for_vf(p_hwfn, p_ptt);
+#endif
+
+	return ECORE_SUCCESS;
 	}
 
-	return rc;
+#ifndef ASIC_ONLY
+static void ecore_emul_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	if (!ecore_mcp_is_init(p_hwfn)) {
+		u32 sriov_dis = ecore_rd(p_hwfn, p_ptt,
+					 PGLUE_B_REG_SR_IOV_DISABLED_REQUEST);
+
+		ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_SR_IOV_DISABLED_REQUEST_CLR,
+			 sriov_dis);
 }
+}
+#endif
 
 enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
@@ -1257,6 +1290,11 @@ enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			p_hwfn->p_dev->p_iov_info->num_vfs--;
 	}
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		ecore_emul_iov_release_hw_for_vf(p_hwfn, p_ptt);
+#endif
+
 	return ECORE_SUCCESS;
 }
 
@@ -1391,7 +1429,7 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 
 	eng_vf_id = p_vf->abs_vf_id;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct dmae_params));
+	OSAL_MEMSET(&params, 0, sizeof(params));
 	SET_FIELD(params.flags, DMAE_PARAMS_DST_VF_VALID, 0x1);
 	params.dst_vf_id = eng_vf_id;
 
@@ -1787,7 +1825,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	/* fill in pfdev info */
 	pfdev_info->chip_num = p_hwfn->p_dev->chip_num;
 	pfdev_info->db_size = 0;	/* @@@ TBD MichalK Vf Doorbells */
-	pfdev_info->indices_per_sb = MAX_PIS_PER_SB;
+	pfdev_info->indices_per_sb = PIS_PER_SB;
 
 	pfdev_info->capabilities = PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED |
 				   PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE;
@@ -2247,10 +2285,14 @@ static void ecore_iov_vf_mbx_start_rxq_resp(struct ecore_hwfn *p_hwfn,
 	ecore_add_tlv(&mbx->offset, CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
-	/* Update the TLV with the response */
+	/* Update the TLV with the response.
+	 * The VF Rx producers are located in the vf zone.
+	 */
 	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
 		req = &mbx->req_virt->start_rxq;
-		p_tlv->offset = PXP_VF_BAR0_START_MSDM_ZONE_B +
+
+		p_tlv->offset =
+			PXP_VF_BAR0_START_MSDM_ZONE_B +
 				OFFSETOF(struct mstorm_vf_zone,
 					 non_trigger.eth_rx_queue_producers) +
 				sizeof(struct eth_rx_prod_data) * req->rx_qid;
@@ -2350,13 +2392,15 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	if (p_cid == OSAL_NULL)
 		goto out;
 
-	/* Legacy VFs have their Producers in a different location, which they
-	 * calculate on their own and clean the producer prior to this.
+	/* The VF Rx producers are located in the vf zone.
+	 * Legacy VFs have their producers in the queue zone, but they
+	 * calculate the location by their own and clean them prior to this.
 	 */
 	if (!(vf_legacy & ECORE_QCID_LEGACY_VF_RX_PROD))
 		REG_WR(p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
-		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
+		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id,
+						  req->rx_qid),
 		       0);
 
 	rc = ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
@@ -3855,48 +3899,70 @@ ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+#define MAX_NUM_EXT_VOQS	(MAX_NUM_PORTS * NUM_OF_TCS)
+
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn,
 			  struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
 {
-	u32 cons[MAX_NUM_VOQS_E4], distance[MAX_NUM_VOQS_E4];
-	int i, cnt;
+	u32 prod, cons[MAX_NUM_EXT_VOQS], distance[MAX_NUM_EXT_VOQS], tmp;
+	u8 max_phys_tcs_per_port = p_hwfn->qm_info.max_phys_tcs_per_port;
+	u8 max_ports_per_engine = p_hwfn->p_dev->num_ports_in_engine;
+	u32 prod_voq0_addr = PBF_REG_NUM_BLOCKS_ALLOCATED_PROD_VOQ0;
+	u32 cons_voq0_addr = PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0;
+	u8 port_id, tc, tc_id = 0, voq = 0;
+	int cnt;
 
 	/* Read initial consumers & producers */
-	for (i = 0; i < MAX_NUM_VOQS_E4; i++) {
-		u32 prod;
-
-		cons[i] = ecore_rd(p_hwfn, p_ptt,
-				   PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0 +
-				   i * 0x40);
+	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
+		/* "max_phys_tcs_per_port" active TCs + 1 pure LB TC */
+		for (tc = 0; tc < max_phys_tcs_per_port + 1; tc++) {
+			tc_id = (tc < max_phys_tcs_per_port) ?
+				tc :
+				PURE_LB_TC;
+			voq = VOQ(port_id, tc_id, max_phys_tcs_per_port);
+			cons[voq] = ecore_rd(p_hwfn, p_ptt,
+					     cons_voq0_addr + voq * 0x40);
 		prod = ecore_rd(p_hwfn, p_ptt,
-				PBF_REG_NUM_BLOCKS_ALLOCATED_PROD_VOQ0 +
-				i * 0x40);
-		distance[i] = prod - cons[i];
+					prod_voq0_addr + voq * 0x40);
+			distance[voq] = prod - cons[voq];
+		}
 	}
 
 	/* Wait for consumers to pass the producers */
-	i = 0;
+	port_id = 0;
+	tc = 0;
 	for (cnt = 0; cnt < 50; cnt++) {
-		for (; i < MAX_NUM_VOQS_E4; i++) {
-			u32 tmp;
-
+		for (; port_id < max_ports_per_engine; port_id++) {
+			/* "max_phys_tcs_per_port" active TCs + 1 pure LB TC */
+			for (; tc < max_phys_tcs_per_port + 1; tc++) {
+				tc_id = (tc < max_phys_tcs_per_port) ?
+					tc :
+					PURE_LB_TC;
+				voq = VOQ(port_id, tc_id,
+					  max_phys_tcs_per_port);
 			tmp = ecore_rd(p_hwfn, p_ptt,
-				       PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0 +
-				       i * 0x40);
-			if (distance[i] > tmp - cons[i])
+					       cons_voq0_addr + voq * 0x40);
+			if (distance[voq] > tmp - cons[voq])
+				break;
+		}
+
+			if (tc == max_phys_tcs_per_port + 1)
+				tc = 0;
+			else
 				break;
 		}
 
-		if (i == MAX_NUM_VOQS_E4)
+		if (port_id == max_ports_per_engine)
 			break;
 
 		OSAL_MSLEEP(20);
 	}
 
 	if (cnt == 50) {
-		DP_ERR(p_hwfn, "VF[%d] - pbf polling failed on VOQ %d\n",
-		       p_vf->abs_vf_id, i);
+		DP_ERR(p_hwfn,
+		       "VF[%d] - pbf polling failed on VOQ %d [port_id %d, tc_id %d]\n",
+		       p_vf->abs_vf_id, voq, port_id, tc_id);
 		return ECORE_TIMEOUT;
 	}
 
@@ -3996,11 +4062,11 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt)
 {
-	u32 ack_vfs[VF_MAX_STATIC / 32];
+	u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u16 i;
 
-	OSAL_MEMSET(ack_vfs, 0, sizeof(u32) * (VF_MAX_STATIC / 32));
+	OSAL_MEM_ZERO(ack_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES);
 
 	/* Since BRB <-> PRS interface can't be tested as part of the flr
 	 * polling due to HW limitations, simply sleep a bit. And since
@@ -4019,10 +4085,10 @@ enum _ecore_status_t
 ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt, u16 rel_vf_id)
 {
-	u32 ack_vfs[VF_MAX_STATIC / 32];
+	u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	OSAL_MEMSET(ack_vfs, 0, sizeof(u32) * (VF_MAX_STATIC / 32));
+	OSAL_MEM_ZERO(ack_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES);
 
 	/* Wait instead of polling the BRB <-> PRS interface */
 	OSAL_MSLEEP(100);
@@ -4039,7 +4105,8 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 	u16 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
-	for (i = 0; i < (VF_MAX_STATIC / 32); i++)
+
+	for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "[%08x,...,%08x]: %08x\n",
 			   i * 32, (i + 1) * 32 - 1, p_disabled_vfs[i]);
@@ -4396,7 +4463,7 @@ enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
 	if (!vf_info)
 		return ECORE_INVAL;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct dmae_params));
+	OSAL_MEMSET(&params, 0, sizeof(params));
 	SET_FIELD(params.flags, DMAE_PARAMS_SRC_VF_VALID, 0x1);
 	SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 0x1);
 	params.src_vf_id = vf_info->abs_vf_id;
@@ -4785,9 +4852,9 @@ enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 int vfid, int val)
 {
-	struct ecore_mcp_link_state *p_link;
 	struct ecore_vf_info *vf;
 	u8 abs_vp_id = 0;
+	u16 rl_id;
 	enum _ecore_status_t rc;
 
 	vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
@@ -4799,10 +4866,8 @@ enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_link = &ECORE_LEADING_HWFN(p_hwfn->p_dev)->mcp_info->link_output;
-
-	return ecore_init_vport_rl(p_hwfn, p_ptt, abs_vp_id, (u32)val,
-				   p_link->speed);
+	rl_id = abs_vp_id; /* The "rl_id" is set as the "vport_id" */
+	return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, (u32)val);
 }
 
 enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 3ba6a0cf2..24846cfb5 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -257,6 +257,7 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	struct pfvf_acquire_resp_tlv *resp = &p_iov->pf2vf_reply->acquire_resp;
 	struct pf_vf_pfdev_info *pfdev_info = &resp->pfdev_info;
 	struct ecore_vf_acquire_sw_info vf_sw_info;
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	struct vf_pf_resc_request *p_resc;
 	bool resources_acquired = false;
 	struct vfpf_acquire_tlv *req;
@@ -427,20 +428,20 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	p_iov->bulletin.size = resp->bulletin_size;
 
 	/* get HW info */
-	p_hwfn->p_dev->type = resp->pfdev_info.dev_type;
-	p_hwfn->p_dev->chip_rev = (u8)resp->pfdev_info.chip_rev;
+	p_dev->type = resp->pfdev_info.dev_type;
+	p_dev->chip_rev = (u8)resp->pfdev_info.chip_rev;
 
 	DP_INFO(p_hwfn, "Chip details - %s%d\n",
-		ECORE_IS_BB(p_hwfn->p_dev) ? "BB" : "AH",
+		ECORE_IS_BB(p_dev) ? "BB" : "AH",
 		CHIP_REV_IS_A0(p_hwfn->p_dev) ? 0 : 1);
 
-	p_hwfn->p_dev->chip_num = pfdev_info->chip_num & 0xffff;
+	p_dev->chip_num = pfdev_info->chip_num & 0xffff;
 
 	/* Learn of the possibility of CMT */
 	if (IS_LEAD_HWFN(p_hwfn)) {
 		if (resp->pfdev_info.capabilities & PFVF_ACQUIRE_CAP_100G) {
 			DP_INFO(p_hwfn, "100g VF\n");
-			p_hwfn->p_dev->num_hwfns = 2;
+			p_dev->num_hwfns = 2;
 		}
 	}
 
@@ -636,10 +637,6 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn)
 	return ECORE_NOMEM;
 }
 
-#define TSTORM_QZONE_START   PXP_VF_BAR0_START_SDM_ZONE_A
-#define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
-				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
-
 /* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
 static void
 __ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
@@ -828,8 +825,7 @@ ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 		u8 hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
 		u32 init_prod_val = 0;
 
-		*pp_prod = (u8 OSAL_IOMEM *)
-			   p_hwfn->regview +
+		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
 			   MSTORM_QZONE_START(p_hwfn->p_dev) +
 			   (hw_qid) * MSTORM_QZONE_SIZE;
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index b5f93e9fa..559638508 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -44,7 +44,7 @@
 /* Driver versions */
 #define QEDE_PMD_VER_PREFIX		"QEDE PMD"
 #define QEDE_PMD_VERSION_MAJOR		2
-#define QEDE_PMD_VERSION_MINOR	        10
+#define QEDE_PMD_VERSION_MINOR	        11
 #define QEDE_PMD_VERSION_REVISION       0
 #define QEDE_PMD_VERSION_PATCH	        1
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 8a108f99c..c9caec645 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -18,7 +18,7 @@
 char qede_fw_file[PATH_MAX];
 
 static const char * const QEDE_DEFAULT_FIRMWARE =
-	"/lib/firmware/qed/qed_init_values-8.37.7.0.bin";
+	"/lib/firmware/qed/qed_init_values-8.40.25.0.bin";
 
 static void
 qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 77ee3b34f..0ab11d500 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -805,7 +805,7 @@ qede_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 		fp->rxq->hw_rxq_prod_addr = ret_params.p_prod;
 		fp->rxq->handle = ret_params.p_handle;
 
-		fp->rxq->hw_cons_ptr = &fp->sb_info->sb_virt->pi_array[RX_PI];
+		fp->rxq->hw_cons_ptr = &fp->sb_info->sb_pi_array[RX_PI];
 		qede_update_rx_prod(qdev, fp->rxq);
 		eth_dev->data->rx_queue_state[rx_queue_id] =
 			RTE_ETH_QUEUE_STATE_STARTED;
@@ -863,7 +863,7 @@ qede_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id)
 		txq->doorbell_addr = ret_params.p_doorbell;
 		txq->handle = ret_params.p_handle;
 
-		txq->hw_cons_ptr = &fp->sb_info->sb_virt->pi_array[TX_PI(0)];
+		txq->hw_cons_ptr = &fp->sb_info->sb_pi_array[TX_PI(0)];
 		SET_FIELD(txq->tx_db.data.params, ETH_DB_DATA_DEST,
 				DB_DEST_XCM);
 		SET_FIELD(txq->tx_db.data.params, ETH_DB_DATA_AGG_CMD,
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH 9/9] net/qede: print adapter info during init failure
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (7 preceding siblings ...)
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 8/9] net/qede/base: update the FW to 8.40.25.0 Rasesh Mody
@ 2019-09-30  2:49 ` Rasesh Mody
  2019-10-03  5:06 ` [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Jerin Jacob
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-09-30  2:49 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Dump the adapter info log banner with whatever information is
available in case of device initialization failure.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/qede_ethdev.c | 54 +++++++++++++++++++++-------------
 drivers/net/qede/qede_ethdev.h | 19 ++++++++----
 2 files changed, 46 insertions(+), 27 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 0c9f6590e..5dd49fc20 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -278,30 +278,38 @@ static void qede_print_adapter_info(struct qede_dev *qdev)
 {
 	struct ecore_dev *edev = &qdev->edev;
 	struct qed_dev_info *info = &qdev->dev_info.common;
-	static char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
 	static char ver_str[QEDE_PMD_DRV_VER_STR_SIZE];
 
-	DP_INFO(edev, "*********************************\n");
-	DP_INFO(edev, " DPDK version:%s\n", rte_version());
-	DP_INFO(edev, " Chip details : %s %c%d\n",
+	DP_INFO(edev, "**************************************************\n");
+	DP_INFO(edev, " DPDK version\t\t\t: %s\n", rte_version());
+	DP_INFO(edev, " Chip details\t\t\t: %s %c%d\n",
 		  ECORE_IS_BB(edev) ? "BB" : "AH",
 		  'A' + edev->chip_rev,
 		  (int)edev->chip_metal);
-	snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%d.%d.%d.%d",
-		 info->fw_major, info->fw_minor, info->fw_rev, info->fw_eng);
-	snprintf(drv_ver, QEDE_PMD_DRV_VER_STR_SIZE, "%s_%s",
-		 ver_str, QEDE_PMD_VERSION);
-	DP_INFO(edev, " Driver version : %s\n", drv_ver);
-	DP_INFO(edev, " Firmware version : %s\n", ver_str);
+	snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%s",
+		 QEDE_PMD_DRV_VERSION);
+	DP_INFO(edev, " Driver version\t\t\t: %s\n", ver_str);
+
+	snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%s",
+		 QEDE_PMD_BASE_VERSION);
+	DP_INFO(edev, " Base version\t\t\t: %s\n", ver_str);
+
+	snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%s", QEDE_PMD_FW_VERSION);
+	DP_INFO(edev, " Firmware version\t\t\t: %s\n", ver_str);
 
 	snprintf(ver_str, MCP_DRV_VER_STR_SIZE,
 		 "%d.%d.%d.%d",
-		(info->mfw_rev >> 24) & 0xff,
-		(info->mfw_rev >> 16) & 0xff,
-		(info->mfw_rev >> 8) & 0xff, (info->mfw_rev) & 0xff);
-	DP_INFO(edev, " Management Firmware version : %s\n", ver_str);
-	DP_INFO(edev, " Firmware file : %s\n", qede_fw_file);
-	DP_INFO(edev, "*********************************\n");
+		 (info->mfw_rev & QED_MFW_VERSION_3_MASK) >>
+		 QED_MFW_VERSION_3_OFFSET,
+		 (info->mfw_rev & QED_MFW_VERSION_2_MASK) >>
+		 QED_MFW_VERSION_2_OFFSET,
+		 (info->mfw_rev & QED_MFW_VERSION_1_MASK) >>
+		 QED_MFW_VERSION_1_OFFSET,
+		 (info->mfw_rev & QED_MFW_VERSION_0_MASK) >>
+		 QED_MFW_VERSION_0_OFFSET);
+	DP_INFO(edev, " Management Firmware version\t: %s\n", ver_str);
+	DP_INFO(edev, " Firmware file\t\t\t: %s\n", qede_fw_file);
+	DP_INFO(edev, "**************************************************\n");
 }
 
 static void qede_reset_queue_stats(struct qede_dev *qdev, bool xstats)
@@ -2435,6 +2443,10 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 				    dp_level, is_vf);
 	if (rc != 0) {
 		DP_ERR(edev, "qede probe failed rc %d\n", rc);
+		if (do_once) {
+			qede_print_adapter_info(adapter);
+			do_once = false;
+		}
 		return -ENODEV;
 	}
 	qede_update_pf_params(edev);
@@ -2515,6 +2527,11 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	qede_alloc_etherdev(adapter, &dev_info);
 
+	if (do_once) {
+		qede_print_adapter_info(adapter);
+		do_once = false;
+	}
+
 	adapter->ops->common->set_name(edev, edev->name);
 
 	if (!is_vf)
@@ -2571,11 +2588,6 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	eth_dev->dev_ops = (is_vf) ? &qede_eth_vf_dev_ops : &qede_eth_dev_ops;
 
-	if (do_once) {
-		qede_print_adapter_info(adapter);
-		do_once = false;
-	}
-
 	/* Bring-up the link */
 	qede_dev_set_link_state(eth_dev, true);
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 559638508..1ac2d086a 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -42,20 +42,27 @@
 #define qede_stringify(x...)		qede_stringify1(x)
 
 /* Driver versions */
+#define QEDE_PMD_DRV_VER_STR_SIZE NAME_SIZE /* 128 */
 #define QEDE_PMD_VER_PREFIX		"QEDE PMD"
 #define QEDE_PMD_VERSION_MAJOR		2
 #define QEDE_PMD_VERSION_MINOR	        11
 #define QEDE_PMD_VERSION_REVISION       0
 #define QEDE_PMD_VERSION_PATCH	        1
 
-#define QEDE_PMD_VERSION qede_stringify(QEDE_PMD_VERSION_MAJOR) "."     \
-			 qede_stringify(QEDE_PMD_VERSION_MINOR) "."     \
-			 qede_stringify(QEDE_PMD_VERSION_REVISION) "."  \
-			 qede_stringify(QEDE_PMD_VERSION_PATCH)
+#define QEDE_PMD_DRV_VERSION qede_stringify(QEDE_PMD_VERSION_MAJOR) "."     \
+			     qede_stringify(QEDE_PMD_VERSION_MINOR) "."     \
+			     qede_stringify(QEDE_PMD_VERSION_REVISION) "."  \
+			     qede_stringify(QEDE_PMD_VERSION_PATCH)
 
-#define QEDE_PMD_DRV_VER_STR_SIZE NAME_SIZE
-#define QEDE_PMD_VER_PREFIX "QEDE PMD"
+#define QEDE_PMD_BASE_VERSION qede_stringify(ECORE_MAJOR_VERSION) "."       \
+			      qede_stringify(ECORE_MINOR_VERSION) "."       \
+			      qede_stringify(ECORE_REVISION_VERSION) "."    \
+			      qede_stringify(ECORE_ENGINEERING_VERSION)
 
+#define QEDE_PMD_FW_VERSION qede_stringify(FW_MAJOR_VERSION) "."            \
+			    qede_stringify(FW_MINOR_VERSION) "."            \
+			    qede_stringify(FW_REVISION_VERSION) "."         \
+			    qede_stringify(FW_ENGINEERING_VERSION)
 
 #define QEDE_RSS_INDIR_INITED     (1 << 0)
 #define QEDE_RSS_KEY_INITED       (1 << 1)
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (8 preceding siblings ...)
  2019-09-30  2:49 ` [dpdk-dev] [PATCH 9/9] net/qede: print adapter info during init failure Rasesh Mody
@ 2019-10-03  5:06 ` Jerin Jacob
  2019-10-03  5:59   ` Rasesh Mody
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 " Rasesh Mody
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 24+ messages in thread
From: Jerin Jacob @ 2019-10-03  5:06 UTC (permalink / raw)
  To: Rasesh Mody; +Cc: dpdk-dev, Jerin Jacob, Ferruh Yigit, GR-Everest-DPDK-Dev

On Mon, Sep 30, 2019 at 8:19 AM Rasesh Mody <rmody@marvell.com> wrote:
>
> Hi,
>
> This patch series updates the FW to 8.40.25.0 and includes corresponding
> base driver changes. It also includes some enhancements and fixes.
> The PMD version is bumped to 2.11.0.1.
>
> Please apply.
>
> Thanks!
> -Rasesh
>
> Rasesh Mody (9):
>   net/qede/base: calculate right page index for PBL chains
>   net/qede/base: change MFW mailbox command log verbosity
>   net/qede/base: lock entire QM reconfiguration flow
>   net/qede/base: rename HSI datatypes and funcs
>   net/qede/base: update rt defs NVM cfg and mcp code
>   net/qede/base: move dmae code to HSI
>   net/qede/base: update HSI code
>   net/qede/base: update the FW to 8.40.25.0
>   net/qede: print adapter info during init failure

Hi Rasesh,

I understand it is the _base_ code (probably owned by someone else), but
there are a lot of checkpatch issues[1] with this patch in the base code.
Can some of these be fixed? If so, please send the next version; else let us know.
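
As a side note on the MACRO_ARG_REUSE checks below: they flag arguments
that get expanded more than once, so any side effect in the argument also
runs more than once. A minimal sketch with a made-up macro (not from this
series, purely for illustration):

  /* hypothetical macro, not from the qede code */
  #define TWICE(x) ((x) + (x))

  static int n;
  static int bump(void) { return ++n; }

  static int demo(void)
  {
          /* expands to ((bump()) + (bump())): bump() runs twice */
          return TWICE(bump());
  }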

[1]
CHECK:MACRO_ARG_REUSE: Macro argument reuse 'map' - possible side-effects?
#2927: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:163:
+#define QM_INIT_TX_PQ_MAP(p_hwfn, map, pq_id, vp_pq_id, \
+                          rl_valid, rl_id, voq, wrr) \
+       do { \
+       OSAL_MEMSET(&map, 0, sizeof(map)); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_VALID, rl_valid ? 1 : 0); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_ID, rl_id); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VP_PQ_ID, vp_pq_id); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VOQ, voq); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, wrr); \
+       STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id, *((u32 *)&map));\
        } while (0)

CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'map' may be better as
'(map)' to avoid precedence issues
#2927: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:163:
+#define QM_INIT_TX_PQ_MAP(p_hwfn, map, pq_id, vp_pq_id, \
+                          rl_valid, rl_id, voq, wrr) \
+       do { \
+       OSAL_MEMSET(&map, 0, sizeof(map)); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_VALID, rl_valid ? 1 : 0); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_ID, rl_id); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VP_PQ_ID, vp_pq_id); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VOQ, voq); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, wrr); \
+       STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id, *((u32 *)&map));\
        } while (0)

CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'pq_id' may be better as
'(pq_id)' to avoid precedence issues
#2927: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:163:
+#define QM_INIT_TX_PQ_MAP(p_hwfn, map, pq_id, vp_pq_id, \
+                          rl_valid, rl_id, voq, wrr) \
+       do { \
+       OSAL_MEMSET(&map, 0, sizeof(map)); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_VALID, rl_valid ? 1 : 0); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_ID, rl_id); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VP_PQ_ID, vp_pq_id); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VOQ, voq); \
+       SET_FIELD(map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, wrr); \
+       STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id, *((u32 *)&map));\
        } while (0)

WARNING:SUSPECT_CODE_INDENT: suspect code indent for conditional
statements (8, 8)
#2929: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:165:
+       do { \
+       OSAL_MEMSET(&map, 0, sizeof(map)); \

total: 0 errors, 1 warnings, 3 checks, 3176 lines checked
CHECK:CAMELCASE: Avoid CamelCase: <optionId>
#1028: FILE: drivers/net/qede/base/mcp_public.h:1707:
+#define SINGLE_NVM_WR_OP(optionId) \

WARNING:LONG_LINE: line over 80 characters
#1210: FILE: drivers/net/qede/base/nvm_cfg.h:1120:
+               #define
NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_MASK 0x1FE00000

WARNING:LONG_LINE: line over 80 characters
#1322: FILE: drivers/net/qede/base/nvm_cfg.h:1229:
+               #define
NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_DEFAULT 0x0

WARNING:LONG_LINE: line over 80 characters
#1447: FILE: drivers/net/qede/base/nvm_cfg.h:1354:
+               #define
NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_MASK 0x00FF0000

WARNING:LONG_LINE: line over 80 characters
#1569: FILE: drivers/net/qede/base/nvm_cfg.h:2229:
+               #define
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_BASE_R 0x10

WARNING:LONG_LINE: line over 80 characters
#1572: FILE: drivers/net/qede/base/nvm_cfg.h:2232:
+               #define
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_BASE_R 0x80

WARNING:LONG_LINE: line over 80 characters
#1574: FILE: drivers/net/qede/base/nvm_cfg.h:2234:
+               #define
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_BASE_R 0x200

WARNING:LONG_LINE: line over 80 characters
#1575: FILE: drivers/net/qede/base/nvm_cfg.h:2235:
+               #define
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS528 0x400

WARNING:LONG_LINE: line over 80 characters
#1576: FILE: drivers/net/qede/base/nvm_cfg.h:2236:
+               #define
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS544 0x800

WARNING:LONG_LINE: line over 80 characters
#1577: FILE: drivers/net/qede/base/nvm_cfg.h:2237:
+               #define
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_NONE 0x1000

WARNING:LONG_LINE: line over 80 characters
#1578: FILE: drivers/net/qede/base/nvm_cfg.h:2238:
+               #define
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_BASE_R 0x2000

WARNING:LONG_LINE: line over 80 characters
#1579: FILE: drivers/net/qede/base/nvm_cfg.h:2239:
+               #define
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS528 0x4000

WARNING:LONG_LINE: line over 80 characters
#1580: FILE: drivers/net/qede/base/nvm_cfg.h:2240:
+               #define
NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS544 0x8000

total: 0 errors, 12 warnings, 1 checks, 2043 lines checked

### net/qede/base: update HSI code

CHECK:CAMELCASE: Avoid CamelCase: <vfId>
WARNING:LONG_LINE_COMMENT: line over 80 characters
#2913: FILE: drivers/net/qede/base/ecore_iro_values.h:217:
+       /* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index),
*/

total: 0 errors, 1 warnings, 2 checks, 2956 lines checked

### net/qede/base: update the FW to 8.40.25.0

CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#433: FILE: drivers/net/qede/base/ecore_cxt.c:633:
+               p_blk = ecore_cxt_set_blk(

CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#461: FILE: drivers/net/qede/base/ecore_cxt.c:688:
+               p_blk = ecore_cxt_set_blk(

CHECK:UNNECESSARY_PARENTHESES: Unnecessary parentheses around 'line ==
first_skipped_line'
#794: FILE: drivers/net/qede/base/ecore_cxt.c:1018:
+               if (lines_to_skip && (line == first_skipped_line)) {

WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
using 'ecore_hw_init_chip', this function's name, in a string
#1482: FILE: drivers/net/qede/base/ecore_dev.c:2719:
+                         "ecore_hw_init_chip() shouldn't be called in
a non-emulation environment\n");

CHECK:MACRO_ARG_REUSE: Macro argument reuse 'pq_size' - possible side-effects?
#2348: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:26:
+#define QM_PQ_MEM_4KB(pq_size) \
+       (pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)

CHECK:MACRO_ARG_REUSE: Macro argument reuse 'pq_size' - possible side-effects?
#2350: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:28:
+#define QM_PQ_SIZE_256B(pq_size) \
+       (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)

CHECK:MACRO_ARG_REUSE: Macro argument reuse 'rl_id' - possible side-effects?
#2410: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:171:
+#define PQ_INFO_ELEMENT(vp_pq_id, pf, tc, port, rl_valid, rl_id) \
+       (((vp_pq_id) << 0) | ((pf) << 12) | ((tc) << 16) | ((port) << 20) | \
+        ((rl_valid ? 1 : 0) << 22) | (((rl_id) & 255) << 24) | \
+        (((rl_id) >> 8) << 9))

CHECK:CAMELCASE: Avoid CamelCase: <pData>
#3106: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:1438:
+                            u32 *pData,

CHECK:MACRO_ARG_REUSE: Macro argument reuse 'port' - possible side-effects?
#3524: FILE: drivers/net/qede/base/ecore_init_fw_funcs.h:20:
+#define VOQ(port, tc, max_phys_tcs_per_port) \
+       ((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + (port) : \
+        (port) * (max_phys_tcs_per_port) + (tc))

CHECK:MACRO_ARG_REUSE: Macro argument reuse 'tc' - possible side-effects?
#3524: FILE: drivers/net/qede/base/ecore_init_fw_funcs.h:20:
+#define VOQ(port, tc, max_phys_tcs_per_port) \
+       ((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM




>
>  drivers/net/qede/base/bcm_osal.c              |    1 +
>  drivers/net/qede/base/bcm_osal.h              |    5 +-
>  drivers/net/qede/base/common_hsi.h            |  257 +--
>  drivers/net/qede/base/ecore.h                 |   77 +-
>  drivers/net/qede/base/ecore_chain.h           |   84 +-
>  drivers/net/qede/base/ecore_cxt.c             |  520 ++++---
>  drivers/net/qede/base/ecore_cxt.h             |   12 +
>  drivers/net/qede/base/ecore_dcbx.c            |    7 +-
>  drivers/net/qede/base/ecore_dev.c             |  753 +++++----
>  drivers/net/qede/base/ecore_dev_api.h         |   92 --
>  drivers/net/qede/base/ecore_gtt_reg_addr.h    |   42 +-
>  drivers/net/qede/base/ecore_gtt_values.h      |   18 +-
>  drivers/net/qede/base/ecore_hsi_common.h      | 1134 +++++++-------
>  drivers/net/qede/base/ecore_hsi_debug_tools.h |  475 +++---
>  drivers/net/qede/base/ecore_hsi_eth.h         | 1386 ++++++++---------
>  drivers/net/qede/base/ecore_hsi_init_func.h   |   25 +-
>  drivers/net/qede/base/ecore_hsi_init_tool.h   |   42 +-
>  drivers/net/qede/base/ecore_hw.c              |   68 +-
>  drivers/net/qede/base/ecore_hw.h              |   98 +-
>  drivers/net/qede/base/ecore_init_fw_funcs.c   |  717 ++++-----
>  drivers/net/qede/base/ecore_init_fw_funcs.h   |  107 +-
>  drivers/net/qede/base/ecore_init_ops.c        |   66 +-
>  drivers/net/qede/base/ecore_init_ops.h        |   12 +-
>  drivers/net/qede/base/ecore_int.c             |  131 +-
>  drivers/net/qede/base/ecore_int.h             |    4 +-
>  drivers/net/qede/base/ecore_int_api.h         |   13 +-
>  drivers/net/qede/base/ecore_iov_api.h         |    4 +-
>  drivers/net/qede/base/ecore_iro.h             |  320 ++--
>  drivers/net/qede/base/ecore_iro_values.h      |  336 ++--
>  drivers/net/qede/base/ecore_l2.c              |   10 +-
>  drivers/net/qede/base/ecore_l2_api.h          |    2 +
>  drivers/net/qede/base/ecore_mcp.c             |  296 ++--
>  drivers/net/qede/base/ecore_mcp.h             |    9 +-
>  drivers/net/qede/base/ecore_proto_if.h        |    1 +
>  drivers/net/qede/base/ecore_rt_defs.h         |  870 +++++------
>  drivers/net/qede/base/ecore_sp_commands.c     |   15 +-
>  drivers/net/qede/base/ecore_spq.c             |   55 +-
>  drivers/net/qede/base/ecore_sriov.c           |  178 ++-
>  drivers/net/qede/base/ecore_sriov.h           |    4 +-
>  drivers/net/qede/base/ecore_vf.c              |   18 +-
>  drivers/net/qede/base/eth_common.h            |  101 +-
>  drivers/net/qede/base/mcp_public.h            |   59 +-
>  drivers/net/qede/base/nvm_cfg.h               |  909 ++++++++++-
>  drivers/net/qede/base/reg_addr.h              |   75 +-
>  drivers/net/qede/qede_ethdev.c                |   54 +-
>  drivers/net/qede/qede_ethdev.h                |   21 +-
>  drivers/net/qede/qede_main.c                  |    2 +-
>  drivers/net/qede/qede_rxtx.c                  |   28 +-
>  48 files changed, 5471 insertions(+), 4042 deletions(-)
>
> --
> 2.18.0
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0
  2019-10-03  5:06 ` [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Jerin Jacob
@ 2019-10-03  5:59   ` Rasesh Mody
  0 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-10-03  5:59 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Jerin Jacob Kollanukkaran, Ferruh Yigit, GR-Everest-DPDK-Dev

Hi Jerin,

>From: Jerin Jacob <jerinjacobk@gmail.com>
>Sent: Wednesday, October 02, 2019 10:06 PM
>
>On Mon, Sep 30, 2019 at 8:19 AM Rasesh Mody <rmody@marvell.com> wrote:
>>
>> Hi,
>>
>> This patch series updates the FW to 8.40.25.0 and includes
>> corresponding base driver changes. It also includes some enhancements
>> and fixes.
>> The PMD version is bumped to 2.11.0.1.
>>
>> Please apply.
>>
>> Thanks!
>> -Rasesh
>>
>> Rasesh Mody (9):
>>   net/qede/base: calculate right page index for PBL chains
>>   net/qede/base: change MFW mailbox command log verbosity
>>   net/qede/base: lock entire QM reconfiguration flow
>>   net/qede/base: rename HSI datatypes and funcs
>>   net/qede/base: update rt defs NVM cfg and mcp code
>>   net/qede/base: move dmae code to HSI
>>   net/qede/base: update HSI code
>>   net/qede/base: update the FW to 8.40.25.0
>>   net/qede: print adapter info during init failure
>
>Hi Rasesh,
>
>I understand it is the _base_ code (probably owned by someone else), but
>there are a lot of checkpatch issues[1] with this patch in the base code.
>Can some of these be fixed? If so, please send the next version; else let
>us know.

I've tried to minimize the checkpatch issues as much as possible. However, I think a few of the remaining ones, such as MACRO_ARG_PRECEDENCE, can still be fixed. I'll send a v2 to iron these out.
Please note that some, like SUSPECT_CODE_INDENT and OPEN_ENDED_LINE, are caused by resolving the 80-character long-line warnings. A few LONG_LINE warnings are left as-is to maintain code readability. UNNECESSARY_PARENTHESES seems to be a false positive, as the parentheses are helpful in this case.
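
For MACRO_ARG_PRECEDENCE, the fix is just to parenthesize the argument at
its use sites. A rough sketch with a made-up macro (not the actual change
that will go into v2, just to show the pattern):

  /* hypothetical macros, for illustration only */
  #define DOUBLE_BAD(x)  (x * 2)    /* DOUBLE_BAD(a + 1)  -> (a + 1 * 2)   */
  #define DOUBLE_GOOD(x) ((x) * 2)  /* DOUBLE_GOOD(a + 1) -> ((a + 1) * 2) */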

Thanks!
-Rasesh

>
>[1]
>CHECK:MACRO_ARG_REUSE: Macro argument reuse 'map' - possible side-
>effects?
>#2927: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:163:
>+#define QM_INIT_TX_PQ_MAP(p_hwfn, map, pq_id, vp_pq_id, \
>+                          rl_valid, rl_id, voq, wrr) \
>+       do { \
>+       OSAL_MEMSET(&map, 0, sizeof(map)); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_VALID, rl_valid ? 1 : 0); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_ID, rl_id); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VP_PQ_ID, vp_pq_id); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VOQ, voq); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, wrr); \
>+       STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id, *((u32 *)&map));\
>        } while (0)
>
>CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'map' may be better as
>'(map)' to avoid precedence issues
>#2927: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:163:
>+#define QM_INIT_TX_PQ_MAP(p_hwfn, map, pq_id, vp_pq_id, \
>+                          rl_valid, rl_id, voq, wrr) \
>+       do { \
>+       OSAL_MEMSET(&map, 0, sizeof(map)); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_VALID, rl_valid ? 1 : 0); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_ID, rl_id); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VP_PQ_ID, vp_pq_id); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VOQ, voq); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, wrr); \
>+       STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id,
>*((u32
>+*)&map));\
>        } while (0)
>
>CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'pq_id' may be better as
>'(pq_id)' to avoid precedence issues
>#2927: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:163:
>+#define QM_INIT_TX_PQ_MAP(p_hwfn, map, pq_id, vp_pq_id, \
>+                          rl_valid, rl_id, voq, wrr) \
>+       do { \
>+       OSAL_MEMSET(&map, 0, sizeof(map)); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_VALID, rl_valid ? 1 : 0); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_ID, rl_id); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VP_PQ_ID, vp_pq_id); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_VOQ, voq); \
>+       SET_FIELD(map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, wrr); \
>+       STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id,
>*((u32
>+*)&map));\
>        } while (0)
>
>WARNING:SUSPECT_CODE_INDENT: suspect code indent for conditional
>statements (8, 8)
>#2929: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:165:
>+       do { \
>+       OSAL_MEMSET(&map, 0, sizeof(map)); \
>
>total: 0 errors, 1 warnings, 3 checks, 3176 lines checked
>CHECK:CAMELCASE: Avoid CamelCase: <optionId>
>#1028: FILE: drivers/net/qede/base/mcp_public.h:1707:
>+#define SINGLE_NVM_WR_OP(optionId) \
>
>WARNING:LONG_LINE: line over 80 characters
>#1210: FILE: drivers/net/qede/base/nvm_cfg.h:1120:
>+               #define
>NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_MASK
>0x1FE00000
>
>WARNING:LONG_LINE: line over 80 characters
>#1322: FILE: drivers/net/qede/base/nvm_cfg.h:1229:
>+               #define
>NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_DEFAULT 0x0
>
>WARNING:LONG_LINE: line over 80 characters
>#1447: FILE: drivers/net/qede/base/nvm_cfg.h:1354:
>+               #define
>NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_MASK
>0x00FF0000
>
>WARNING:LONG_LINE: line over 80 characters
>#1569: FILE: drivers/net/qede/base/nvm_cfg.h:2229:
>+               #define
>NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_BASE_R 0x10
>
>WARNING:LONG_LINE: line over 80 characters
>#1572: FILE: drivers/net/qede/base/nvm_cfg.h:2232:
>+               #define
>NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_BASE_R 0x80
>
>WARNING:LONG_LINE: line over 80 characters
>#1574: FILE: drivers/net/qede/base/nvm_cfg.h:2234:
>+               #define
>NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_BASE_R 0x200
>
>WARNING:LONG_LINE: line over 80 characters
>#1575: FILE: drivers/net/qede/base/nvm_cfg.h:2235:
>+               #define
>NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS528 0x400
>
>WARNING:LONG_LINE: line over 80 characters
>#1576: FILE: drivers/net/qede/base/nvm_cfg.h:2236:
>+               #define
>NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS544 0x800
>
>WARNING:LONG_LINE: line over 80 characters
>#1577: FILE: drivers/net/qede/base/nvm_cfg.h:2237:
>+               #define
>NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_NONE 0x1000
>
>WARNING:LONG_LINE: line over 80 characters
>#1578: FILE: drivers/net/qede/base/nvm_cfg.h:2238:
>+               #define
>NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_BASE_R
>0x2000
>
>WARNING:LONG_LINE: line over 80 characters
>#1579: FILE: drivers/net/qede/base/nvm_cfg.h:2239:
>+               #define
>NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS528 0x4000
>
>WARNING:LONG_LINE: line over 80 characters
>#1580: FILE: drivers/net/qede/base/nvm_cfg.h:2240:
>+               #define
>NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS544 0x8000
>
>total: 0 errors, 12 warnings, 1 checks, 2043 lines checked
>
>### net/qede/base: update HSI code
>
>CHECK:CAMELCASE: Avoid CamelCase: <vfId>
>WARNING:LONG_LINE_COMMENT: line over 80 characters
>#2913: FILE: drivers/net/qede/base/ecore_iro_values.h:217:
>+       /* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index),
>*/
>
>total: 0 errors, 1 warnings, 2 checks, 2956 lines checked
>
>### net/qede/base: update the FW to 8.40.25.0
>
>CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
>#433: FILE: drivers/net/qede/base/ecore_cxt.c:633:
>+               p_blk = ecore_cxt_set_blk(
>
>CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
>#461: FILE: drivers/net/qede/base/ecore_cxt.c:688:
>+               p_blk = ecore_cxt_set_blk(
>
>CHECK:UNNECESSARY_PARENTHESES: Unnecessary parentheses around 'line
>== first_skipped_line'
>#794: FILE: drivers/net/qede/base/ecore_cxt.c:1018:
>+               if (lines_to_skip && (line == first_skipped_line)) {
>
>WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
>using 'ecore_hw_init_chip', this function's name, in a string
>#1482: FILE: drivers/net/qede/base/ecore_dev.c:2719:
>+                         "ecore_hw_init_chip() shouldn't be called in
>a non-emulation environment\n");
>
>CHECK:MACRO_ARG_REUSE: Macro argument reuse 'pq_size' - possible side-
>effects?
>#2348: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:26:
>+#define QM_PQ_MEM_4KB(pq_size) \
>+       (pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
>
>CHECK:MACRO_ARG_REUSE: Macro argument reuse 'pq_size' - possible side-
>effects?
>#2350: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:28:
>+#define QM_PQ_SIZE_256B(pq_size) \
>+       (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
>
>CHECK:MACRO_ARG_REUSE: Macro argument reuse 'rl_id' - possible side-
>effects?
>#2410: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:171:
>+#define PQ_INFO_ELEMENT(vp_pq_id, pf, tc, port, rl_valid, rl_id) \
>+       (((vp_pq_id) << 0) | ((pf) << 12) | ((tc) << 16) | ((port) << 20) | \
>+        ((rl_valid ? 1 : 0) << 22) | (((rl_id) & 255) << 24) | \
>+        (((rl_id) >> 8) << 9))
>
>CHECK:CAMELCASE: Avoid CamelCase: <pData>
>#3106: FILE: drivers/net/qede/base/ecore_init_fw_funcs.c:1438:
>+                            u32 *pData,
>
>CHECK:MACRO_ARG_REUSE: Macro argument reuse 'port' - possible side-
>effects?
>#3524: FILE: drivers/net/qede/base/ecore_init_fw_funcs.h:20:
>+#define VOQ(port, tc, max_phys_tcs_per_port) \
>+       ((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB +
>(port) : \
>+        (port) * (max_phys_tcs_per_port) + (tc))
>
>CHECK:MACRO_ARG_REUSE: Macro argument reuse 'tc' - possible side-
>effects?
>#3524: FILE: drivers/net/qede/base/ecore_init_fw_funcs.h:20:
>+#define VOQ(port, tc, max_phys_tcs_per_port) \
>+       ((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM
>
>
>
>
>>
>>  drivers/net/qede/base/bcm_osal.c              |    1 +
>>  drivers/net/qede/base/bcm_osal.h              |    5 +-
>>  drivers/net/qede/base/common_hsi.h            |  257 +--
>>  drivers/net/qede/base/ecore.h                 |   77 +-
>>  drivers/net/qede/base/ecore_chain.h           |   84 +-
>>  drivers/net/qede/base/ecore_cxt.c             |  520 ++++---
>>  drivers/net/qede/base/ecore_cxt.h             |   12 +
>>  drivers/net/qede/base/ecore_dcbx.c            |    7 +-
>>  drivers/net/qede/base/ecore_dev.c             |  753 +++++----
>>  drivers/net/qede/base/ecore_dev_api.h         |   92 --
>>  drivers/net/qede/base/ecore_gtt_reg_addr.h    |   42 +-
>>  drivers/net/qede/base/ecore_gtt_values.h      |   18 +-
>>  drivers/net/qede/base/ecore_hsi_common.h      | 1134 +++++++-------
>>  drivers/net/qede/base/ecore_hsi_debug_tools.h |  475 +++---
>>  drivers/net/qede/base/ecore_hsi_eth.h         | 1386 ++++++++---------
>>  drivers/net/qede/base/ecore_hsi_init_func.h   |   25 +-
>>  drivers/net/qede/base/ecore_hsi_init_tool.h   |   42 +-
>>  drivers/net/qede/base/ecore_hw.c              |   68 +-
>>  drivers/net/qede/base/ecore_hw.h              |   98 +-
>>  drivers/net/qede/base/ecore_init_fw_funcs.c   |  717 ++++-----
>>  drivers/net/qede/base/ecore_init_fw_funcs.h   |  107 +-
>>  drivers/net/qede/base/ecore_init_ops.c        |   66 +-
>>  drivers/net/qede/base/ecore_init_ops.h        |   12 +-
>>  drivers/net/qede/base/ecore_int.c             |  131 +-
>>  drivers/net/qede/base/ecore_int.h             |    4 +-
>>  drivers/net/qede/base/ecore_int_api.h         |   13 +-
>>  drivers/net/qede/base/ecore_iov_api.h         |    4 +-
>>  drivers/net/qede/base/ecore_iro.h             |  320 ++--
>>  drivers/net/qede/base/ecore_iro_values.h      |  336 ++--
>>  drivers/net/qede/base/ecore_l2.c              |   10 +-
>>  drivers/net/qede/base/ecore_l2_api.h          |    2 +
>>  drivers/net/qede/base/ecore_mcp.c             |  296 ++--
>>  drivers/net/qede/base/ecore_mcp.h             |    9 +-
>>  drivers/net/qede/base/ecore_proto_if.h        |    1 +
>>  drivers/net/qede/base/ecore_rt_defs.h         |  870 +++++------
>>  drivers/net/qede/base/ecore_sp_commands.c     |   15 +-
>>  drivers/net/qede/base/ecore_spq.c             |   55 +-
>>  drivers/net/qede/base/ecore_sriov.c           |  178 ++-
>>  drivers/net/qede/base/ecore_sriov.h           |    4 +-
>>  drivers/net/qede/base/ecore_vf.c              |   18 +-
>>  drivers/net/qede/base/eth_common.h            |  101 +-
>>  drivers/net/qede/base/mcp_public.h            |   59 +-
>>  drivers/net/qede/base/nvm_cfg.h               |  909 ++++++++++-
>>  drivers/net/qede/base/reg_addr.h              |   75 +-
>>  drivers/net/qede/qede_ethdev.c                |   54 +-
>>  drivers/net/qede/qede_ethdev.h                |   21 +-
>>  drivers/net/qede/qede_main.c                  |    2 +-
>>  drivers/net/qede/qede_rxtx.c                  |   28 +-
>>  48 files changed, 5471 insertions(+), 4042 deletions(-)
>>
>> --
>> 2.18.0
>>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 0/9] net/qede/base: update FW to 8.40.25.0
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (9 preceding siblings ...)
  2019-10-03  5:06 ` [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Jerin Jacob
@ 2019-10-06 20:14 ` Rasesh Mody
  2019-10-11  7:57   ` Jerin Jacob
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 1/9] net/qede/base: calculate right page index for PBL chains Rasesh Mody
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Hi,

This patch series updates the FW to 8.40.25.0 and includes corresponding
base driver changes. It also includes some enhancements and fixes.
The PMD version is bumped to 2.11.0.1.

v2:
   Addressed checkpatch issues
   9/9 - print adapter info for any failure (not just probe) during init

Thanks!
-Rasesh

Rasesh Mody (9):
  net/qede/base: calculate right page index for PBL chains
  net/qede/base: change MFW mailbox command log verbosity
  net/qede/base: lock entire QM reconfiguration flow
  net/qede/base: rename HSI datatypes and funcs
  net/qede/base: update rt defs NVM cfg and mcp code
  net/qede/base: move dmae code to HSI
  net/qede/base: update HSI code
  net/qede/base: update the FW to 8.40.25.0
  net/qede: print adapter info during init failure

 drivers/net/qede/base/bcm_osal.c              |    1 +
 drivers/net/qede/base/bcm_osal.h              |    5 +-
 drivers/net/qede/base/common_hsi.h            |  257 +--
 drivers/net/qede/base/ecore.h                 |   77 +-
 drivers/net/qede/base/ecore_chain.h           |   84 +-
 drivers/net/qede/base/ecore_cxt.c             |  520 ++++---
 drivers/net/qede/base/ecore_cxt.h             |   12 +
 drivers/net/qede/base/ecore_dcbx.c            |    7 +-
 drivers/net/qede/base/ecore_dev.c             |  753 +++++----
 drivers/net/qede/base/ecore_dev_api.h         |   92 --
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |   42 +-
 drivers/net/qede/base/ecore_gtt_values.h      |   18 +-
 drivers/net/qede/base/ecore_hsi_common.h      | 1134 +++++++-------
 drivers/net/qede/base/ecore_hsi_debug_tools.h |  475 +++---
 drivers/net/qede/base/ecore_hsi_eth.h         | 1386 ++++++++---------
 drivers/net/qede/base/ecore_hsi_init_func.h   |   25 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |   42 +-
 drivers/net/qede/base/ecore_hw.c              |   68 +-
 drivers/net/qede/base/ecore_hw.h              |   98 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   |  718 ++++-----
 drivers/net/qede/base/ecore_init_fw_funcs.h   |  107 +-
 drivers/net/qede/base/ecore_init_ops.c        |   66 +-
 drivers/net/qede/base/ecore_init_ops.h        |   12 +-
 drivers/net/qede/base/ecore_int.c             |  131 +-
 drivers/net/qede/base/ecore_int.h             |    4 +-
 drivers/net/qede/base/ecore_int_api.h         |   13 +-
 drivers/net/qede/base/ecore_iov_api.h         |    4 +-
 drivers/net/qede/base/ecore_iro.h             |  320 ++--
 drivers/net/qede/base/ecore_iro_values.h      |  336 ++--
 drivers/net/qede/base/ecore_l2.c              |   10 +-
 drivers/net/qede/base/ecore_l2_api.h          |    2 +
 drivers/net/qede/base/ecore_mcp.c             |  296 ++--
 drivers/net/qede/base/ecore_mcp.h             |    9 +-
 drivers/net/qede/base/ecore_proto_if.h        |    1 +
 drivers/net/qede/base/ecore_rt_defs.h         |  870 +++++------
 drivers/net/qede/base/ecore_sp_commands.c     |   15 +-
 drivers/net/qede/base/ecore_spq.c             |   55 +-
 drivers/net/qede/base/ecore_sriov.c           |  178 ++-
 drivers/net/qede/base/ecore_sriov.h           |    4 +-
 drivers/net/qede/base/ecore_vf.c              |   18 +-
 drivers/net/qede/base/eth_common.h            |  101 +-
 drivers/net/qede/base/mcp_public.h            |   59 +-
 drivers/net/qede/base/nvm_cfg.h               |  909 ++++++++++-
 drivers/net/qede/base/reg_addr.h              |   75 +-
 drivers/net/qede/qede_ethdev.c                |   81 +-
 drivers/net/qede/qede_ethdev.h                |   21 +-
 drivers/net/qede/qede_main.c                  |    2 +-
 drivers/net/qede/qede_rxtx.c                  |   28 +-
 48 files changed, 5493 insertions(+), 4048 deletions(-)

-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 1/9] net/qede/base: calculate right page index for PBL chains
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (10 preceding siblings ...)
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 " Rasesh Mody
@ 2019-10-06 20:14 ` Rasesh Mody
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 2/9] net/qede/base: change MFW mailbox command log verbosity Rasesh Mody
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

ecore_chain_set_prod/cons() sets the wrong page index for PBL chains
whose page count is not a power of 2. Fix these helpers to calculate
the correct page index from the chain's current producer/consumer
indexes.
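
For illustration only, a minimal C sketch of the page-index arithmetic
this patch introduces, written against plain integer types rather than
the driver's chain structures; the function and parameter names here are
hypothetical. As in the patch, the number of elements per page is
assumed to be a power of 2, while the page count need not be.

#include <stdint.h>

/* Sketch of the new page-index update for ecore_chain_set_prod/cons().
 * cur_idx: the chain's current producer/consumer index
 * new_idx: the index being set
 * cur_page_idx: the current PBL page index
 */
static uint32_t
pbl_set_page_idx(uint32_t cur_page_idx, uint32_t cur_idx, uint32_t new_idx,
		 uint32_t elem_per_page, uint32_t page_cnt)
{
	/* Valid because elem_per_page is a power of 2. */
	uint32_t page_mask = ~(elem_per_page - 1);

	/* "- 1" because the index reaches the first element of the next
	 * page before the page index is advanced.
	 */
	uint32_t page_diff = (((cur_idx - 1) & page_mask) -
			      ((new_idx - 1) & page_mask)) / elem_per_page;

	/* Step the page index back by page_diff, modulo a page count
	 * that need not be a power of 2.
	 */
	return (cur_page_idx - page_diff + page_cnt) % page_cnt;
}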

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore_chain.h | 84 ++++++++++++++++++++---------
 1 file changed, 58 insertions(+), 26 deletions(-)

diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index 6d0382d3a..c69920be5 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -86,8 +86,8 @@ struct ecore_chain {
 		void		**pp_virt_addr_tbl;
 
 		union {
-			struct ecore_chain_pbl_u16	u16;
-			struct ecore_chain_pbl_u32	u32;
+			struct ecore_chain_pbl_u16	pbl_u16;
+			struct ecore_chain_pbl_u32	pbl_u32;
 		} c;
 	} pbl;
 
@@ -405,7 +405,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain16.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.c.u16.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.pbl_u16.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -414,7 +414,7 @@ static OSAL_INLINE void *ecore_chain_produce(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.prod_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_prod_idx = &p_chain->u.chain32.prod_idx;
-			p_prod_page_idx = &p_chain->pbl.c.u32.prod_page_idx;
+			p_prod_page_idx = &p_chain->pbl.c.pbl_u32.prod_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_prod_elem,
 						 p_prod_idx, p_prod_page_idx);
 		}
@@ -479,7 +479,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain16.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain16.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.c.u16.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.pbl_u16.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -488,7 +488,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
 		if ((p_chain->u.chain32.cons_idx &
 		     p_chain->elem_per_page_mask) == p_chain->next_page_mask) {
 			p_cons_idx = &p_chain->u.chain32.cons_idx;
-			p_cons_page_idx = &p_chain->pbl.c.u32.cons_page_idx;
+			p_cons_page_idx = &p_chain->pbl.c.pbl_u32.cons_page_idx;
 			ecore_chain_advance_page(p_chain, &p_chain->p_cons_elem,
 						 p_cons_idx, p_cons_page_idx);
 		}
@@ -532,11 +532,11 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
 		u32 reset_val = p_chain->page_cnt - 1;
 
 		if (is_chain_u16(p_chain)) {
-			p_chain->pbl.c.u16.prod_page_idx = (u16)reset_val;
-			p_chain->pbl.c.u16.cons_page_idx = (u16)reset_val;
+			p_chain->pbl.c.pbl_u16.prod_page_idx = (u16)reset_val;
+			p_chain->pbl.c.pbl_u16.cons_page_idx = (u16)reset_val;
 		} else {
-			p_chain->pbl.c.u32.prod_page_idx = reset_val;
-			p_chain->pbl.c.u32.cons_page_idx = reset_val;
+			p_chain->pbl.c.pbl_u32.prod_page_idx = reset_val;
+			p_chain->pbl.c.pbl_u32.cons_page_idx = reset_val;
 		}
 	}
 
@@ -725,18 +725,34 @@ static OSAL_INLINE void ecore_chain_set_prod(struct ecore_chain *p_chain,
 					     u32 prod_idx, void *p_prod_elem)
 {
 	if (p_chain->mode == ECORE_CHAIN_MODE_PBL) {
-		/* Use "prod_idx-1" since ecore_chain_produce() advances the
-		 * page index before the producer index when getting to
-		 * "next_page_mask".
+		u32 cur_prod, page_mask, page_cnt, page_diff;
+
+		cur_prod = is_chain_u16(p_chain) ? p_chain->u.chain16.prod_idx
+						 : p_chain->u.chain32.prod_idx;
+
+		/* Assume that number of elements in a page is power of 2 */
+		page_mask = ~p_chain->elem_per_page_mask;
+
+		/* Use "cur_prod - 1" and "prod_idx - 1" since producer index
+		 * reaches the first element of next page before the page index
+		 * is incremented. See ecore_chain_produce().
+		 * Index wrap around is not a problem because the difference
+		 * between current and given producer indexes is always
+		 * positive and lower than the chain's capacity.
 		 */
-		u32 elem_idx =
-			(prod_idx - 1 + p_chain->capacity) % p_chain->capacity;
-		u32 page_idx = elem_idx / p_chain->elem_per_page;
+		page_diff = (((cur_prod - 1) & page_mask) -
+			     ((prod_idx - 1) & page_mask)) /
+			    p_chain->elem_per_page;
 
+		page_cnt = ecore_chain_get_page_cnt(p_chain);
 		if (is_chain_u16(p_chain))
-			p_chain->pbl.c.u16.prod_page_idx = (u16)page_idx;
+			p_chain->pbl.c.pbl_u16.prod_page_idx =
+				(p_chain->pbl.c.pbl_u16.prod_page_idx -
+				 page_diff + page_cnt) % page_cnt;
 		else
-			p_chain->pbl.c.u32.prod_page_idx = page_idx;
+			p_chain->pbl.c.pbl_u32.prod_page_idx =
+				(p_chain->pbl.c.pbl_u32.prod_page_idx -
+				 page_diff + page_cnt) % page_cnt;
 	}
 
 	if (is_chain_u16(p_chain))
@@ -756,18 +772,34 @@ static OSAL_INLINE void ecore_chain_set_cons(struct ecore_chain *p_chain,
 					     u32 cons_idx, void *p_cons_elem)
 {
 	if (p_chain->mode == ECORE_CHAIN_MODE_PBL) {
-		/* Use "cons_idx-1" since ecore_chain_consume() advances the
-		 * page index before the consumer index when getting to
-		 * "next_page_mask".
+		u32 cur_cons, page_mask, page_cnt, page_diff;
+
+		cur_cons = is_chain_u16(p_chain) ? p_chain->u.chain16.cons_idx
+						 : p_chain->u.chain32.cons_idx;
+
+		/* Assume that number of elements in a page is power of 2 */
+		page_mask = ~p_chain->elem_per_page_mask;
+
+		/* Use "cur_cons - 1" and "cons_idx - 1" since consumer index
+		 * reaches the first element of next page before the page index
+		 * is incremented. See ecore_chain_consume().
+		 * Index wrap around is not a problem because the difference
+		 * between current and given consumer indexes is always
+		 * positive and lower than the chain's capacity.
 		 */
-		u32 elem_idx =
-			(cons_idx - 1 + p_chain->capacity) % p_chain->capacity;
-		u32 page_idx = elem_idx / p_chain->elem_per_page;
+		page_diff = (((cur_cons - 1) & page_mask) -
+			     ((cons_idx - 1) & page_mask)) /
+			    p_chain->elem_per_page;
 
+		page_cnt = ecore_chain_get_page_cnt(p_chain);
 		if (is_chain_u16(p_chain))
-			p_chain->pbl.c.u16.cons_page_idx = (u16)page_idx;
+			p_chain->pbl.c.pbl_u16.cons_page_idx =
+				(p_chain->pbl.c.pbl_u16.cons_page_idx -
+				 page_diff + page_cnt) % page_cnt;
 		else
-			p_chain->pbl.c.u32.cons_page_idx = page_idx;
+			p_chain->pbl.c.pbl_u32.cons_page_idx =
+				(p_chain->pbl.c.pbl_u32.cons_page_idx -
+				 page_diff + page_cnt) % page_cnt;
 	}
 
 	if (is_chain_u16(p_chain))
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 2/9] net/qede/base: change MFW mailbox command log verbosity
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (11 preceding siblings ...)
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 1/9] net/qede/base: calculate right page index for PBL chains Rasesh Mody
@ 2019-10-06 20:14 ` Rasesh Mody
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 3/9] net/qede/base: lock entire QM reconfiguration flow Rasesh Mody
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Change the DP_VERBOSE debug module used for management FW (MFW) mailbox
commands from ECORE_MSG_SP to ECORE_MSG_HW.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore_mcp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6c6560688..1a5152ec5 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -469,7 +469,7 @@ static void __ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 	/* Set the drv command along with the sequence number */
 	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (p_mb_params->cmd | seq_num));
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
 		   "MFW mailbox: command 0x%08x param 0x%08x\n",
 		   (p_mb_params->cmd | seq_num), p_mb_params->param);
 }
@@ -594,7 +594,7 @@ _ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	ecore_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
 	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->cmd_lock);
 
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+	DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
 		   "MFW mailbox: response 0x%08x param 0x%08x [after %d.%03d ms]\n",
 		   p_mb_params->mcp_resp, p_mb_params->mcp_param,
 		   (cnt * delay) / 1000, (cnt * delay) % 1000);
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 3/9] net/qede/base: lock entire QM reconfiguration flow
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (12 preceding siblings ...)
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 2/9] net/qede/base: change MFW mailbox command log verbosity Rasesh Mody
@ 2019-10-06 20:14 ` Rasesh Mody
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 4/9] net/qede/base: rename HSI datatypes and funcs Rasesh Mody
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Multiple flows can issue QM reconfiguration, so hold the lock for the
entire duration of the reconfiguration flow rather than only around the
individual QM stop/start commands.
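
A minimal sketch of the resulting locking pattern, one lock held across
the whole flow with every exit funneled through a single unlock label,
shown here with pthreads and hypothetical step names rather than the
driver's qm_lock and ecore helpers:

#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the QM stop/start steps of the flow. */
static bool stop_pf_queues(void)  { return true; }
static bool start_pf_queues(void) { return true; }

static int qm_reconf_flow(pthread_mutex_t *lock)
{
	int rc = 0;

	/* Take the lock once, before the first step of the flow. */
	pthread_mutex_lock(lock);

	if (!stop_pf_queues()) {
		rc = -1;
		goto unlock;	/* every early exit releases the lock */
	}

	/* ... intermediate reconfiguration steps run under the lock ... */

	if (!start_pf_queues())
		rc = -1;

unlock:
	pthread_mutex_unlock(lock);
	return rc;
}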

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore_dev.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d7e1d7b32..b183519b5 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2291,18 +2291,21 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
 	bool b_rc;
-	enum _ecore_status_t rc;
+	enum _ecore_status_t rc = ECORE_SUCCESS;
+
+	/* multiple flows can issue qm reconf. Need to lock */
+	OSAL_SPIN_LOCK(&qm_lock);
 
 	/* initialize ecore's qm data structure */
 	ecore_init_qm_info(p_hwfn);
 
 	/* stop PF's qm queues */
-	OSAL_SPIN_LOCK(&qm_lock);
 	b_rc = ecore_send_qm_stop_cmd(p_hwfn, p_ptt, false, true,
 				      qm_info->start_pq, qm_info->num_pqs);
-	OSAL_SPIN_UNLOCK(&qm_lock);
-	if (!b_rc)
-		return ECORE_INVAL;
+	if (!b_rc) {
+		rc = ECORE_INVAL;
+		goto unlock;
+	}
 
 	/* clear the QM_PF runtime phase leftovers from previous init */
 	ecore_init_clear_rt_data(p_hwfn);
@@ -2313,18 +2316,17 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
 	/* activate init tool on runtime array */
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_QM_PF, p_hwfn->rel_pf_id,
 			    p_hwfn->hw_info.hw_mode);
-	if (rc != ECORE_SUCCESS)
-		return rc;
 
 	/* start PF's qm queues */
-	OSAL_SPIN_LOCK(&qm_lock);
 	b_rc = ecore_send_qm_stop_cmd(p_hwfn, p_ptt, true, true,
 				      qm_info->start_pq, qm_info->num_pqs);
-	OSAL_SPIN_UNLOCK(&qm_lock);
 	if (!b_rc)
-		return ECORE_INVAL;
+		rc = ECORE_INVAL;
 
-	return ECORE_SUCCESS;
+unlock:
+	OSAL_SPIN_UNLOCK(&qm_lock);
+
+	return rc;
 }
 
 static enum _ecore_status_t ecore_alloc_qm_data(struct ecore_hwfn *p_hwfn)
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 4/9] net/qede/base: rename HSI datatypes and funcs
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (13 preceding siblings ...)
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 3/9] net/qede/base: lock entire QM reconfiguration flow Rasesh Mody
@ 2019-10-06 20:14 ` Rasesh Mody
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 5/9] net/qede/base: update rt defs NVM cfg and mcp code Rasesh Mody
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

This patch reworks code that carries E4/E5/e4/e5/BB_K2 prefixes and
suffixes:
 - HSI datatypes: remove all e5 datatypes and rename the e4 datatypes
   to drop the prefix/suffix (s/_E4//; s/_e4//; s/E4_//).
 - HSI functions: remove the e4/e5 prefixes/suffixes.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/common_hsi.h          |   93 +-
 drivers/net/qede/base/ecore_cxt.c           |    4 +-
 drivers/net/qede/base/ecore_dcbx.c          |    2 +-
 drivers/net/qede/base/ecore_dev.c           |  116 +-
 drivers/net/qede/base/ecore_hsi_common.h    |  847 +++++-------
 drivers/net/qede/base/ecore_hsi_eth.h       | 1308 ++++++++-----------
 drivers/net/qede/base/ecore_hsi_init_tool.h |    4 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c |   42 +-
 drivers/net/qede/base/ecore_int.c           |    8 +-
 drivers/net/qede/base/ecore_int.h           |    4 +-
 drivers/net/qede/base/ecore_int_api.h       |    6 +-
 drivers/net/qede/base/ecore_iov_api.h       |    4 +-
 drivers/net/qede/base/ecore_mcp.c           |    4 +-
 drivers/net/qede/base/ecore_spq.c           |    8 +-
 drivers/net/qede/base/ecore_sriov.c         |    4 +-
 drivers/net/qede/base/ecore_sriov.h         |    4 +-
 drivers/net/qede/base/reg_addr.h            |   65 +-
 drivers/net/qede/qede_rxtx.c                |    8 +-
 18 files changed, 1077 insertions(+), 1454 deletions(-)

diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 7047eb9f8..b878a92aa 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -106,59 +106,43 @@
 /* PCI functions */
 #define MAX_NUM_PORTS_BB        (2)
 #define MAX_NUM_PORTS_K2        (4)
-#define MAX_NUM_PORTS_E5        (4)
-#define MAX_NUM_PORTS           (MAX_NUM_PORTS_E5)
+#define MAX_NUM_PORTS           (MAX_NUM_PORTS_K2)
 
 #define MAX_NUM_PFS_BB          (8)
 #define MAX_NUM_PFS_K2          (16)
-#define MAX_NUM_PFS_E5          (16)
-#define MAX_NUM_PFS             (MAX_NUM_PFS_E5)
+#define MAX_NUM_PFS             (MAX_NUM_PFS_K2)
 #define MAX_NUM_OF_PFS_IN_CHIP  (16) /* On both engines */
 
 #define MAX_NUM_VFS_BB          (120)
 #define MAX_NUM_VFS_K2          (192)
-#define MAX_NUM_VFS_E4          (MAX_NUM_VFS_K2)
-#define MAX_NUM_VFS_E5          (240)
-#define COMMON_MAX_NUM_VFS      (MAX_NUM_VFS_E5)
+#define COMMON_MAX_NUM_VFS      (MAX_NUM_VFS_K2)
 
 #define MAX_NUM_FUNCTIONS_BB    (MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
 #define MAX_NUM_FUNCTIONS_K2    (MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
-#define MAX_NUM_FUNCTIONS       (MAX_NUM_PFS + MAX_NUM_VFS_E4)
 
 /* in both BB and K2, the VF number starts from 16. so for arrays containing all
  * possible PFs and VFs - we need a constant for this size
  */
 #define MAX_FUNCTION_NUMBER_BB      (MAX_NUM_PFS + MAX_NUM_VFS_BB)
 #define MAX_FUNCTION_NUMBER_K2      (MAX_NUM_PFS + MAX_NUM_VFS_K2)
-#define MAX_FUNCTION_NUMBER_E4      (MAX_NUM_PFS + MAX_NUM_VFS_E4)
-#define MAX_FUNCTION_NUMBER_E5      (MAX_NUM_PFS + MAX_NUM_VFS_E5)
-#define COMMON_MAX_FUNCTION_NUMBER  (MAX_NUM_PFS + MAX_NUM_VFS_E5)
+#define COMMON_MAX_FUNCTION_NUMBER  (MAX_NUM_PFS + MAX_NUM_VFS_K2)
 
 #define MAX_NUM_VPORTS_K2       (208)
 #define MAX_NUM_VPORTS_BB       (160)
-#define MAX_NUM_VPORTS_E4       (MAX_NUM_VPORTS_K2)
-#define MAX_NUM_VPORTS_E5       (256)
-#define COMMON_MAX_NUM_VPORTS   (MAX_NUM_VPORTS_E5)
+#define COMMON_MAX_NUM_VPORTS   (MAX_NUM_VPORTS_K2)
 
 #define MAX_NUM_L2_QUEUES_BB	(256)
 #define MAX_NUM_L2_QUEUES_K2    (320)
-#define MAX_NUM_L2_QUEUES_E5    (320) /* TODO_E5_VITALY - fix to 512 */
-#define MAX_NUM_L2_QUEUES		(MAX_NUM_L2_QUEUES_E5)
 
 /* Traffic classes in network-facing blocks (PBF, BTB, NIG, BRB, PRS and QM) */
 #define NUM_PHYS_TCS_4PORT_K2     4
-#define NUM_PHYS_TCS_4PORT_TX_E5  6
-#define NUM_PHYS_TCS_4PORT_RX_E5  4
 #define NUM_OF_PHYS_TCS           8
 #define PURE_LB_TC                NUM_OF_PHYS_TCS
 #define NUM_TCS_4PORT_K2          (NUM_PHYS_TCS_4PORT_K2 + 1)
-#define NUM_TCS_4PORT_TX_E5       (NUM_PHYS_TCS_4PORT_TX_E5 + 1)
-#define NUM_TCS_4PORT_RX_E5       (NUM_PHYS_TCS_4PORT_RX_E5 + 1)
 #define NUM_OF_TCS                (NUM_OF_PHYS_TCS + 1)
 
 /* CIDs */
-#define NUM_OF_CONNECTION_TYPES_E4 (8)
-#define NUM_OF_CONNECTION_TYPES_E5 (16)
+#define NUM_OF_CONNECTION_TYPES (8)
 #define NUM_OF_TASK_TYPES       (8)
 #define NUM_OF_LCIDS            (320)
 #define NUM_OF_LTIDS            (320)
@@ -412,9 +396,8 @@
 #define CAU_FSM_ETH_TX  1
 
 /* Number of Protocol Indices per Status Block */
-#define PIS_PER_SB_E4    12
-#define PIS_PER_SB_E5    8
-#define MAX_PIS_PER_SB_E4	 OSAL_MAX_T(PIS_PER_SB_E4, PIS_PER_SB_E5)
+#define PIS_PER_SB    12
+#define MAX_PIS_PER_SB	 PIS_PER_SB
 
 /* fsm is stopped or not valid for this sb */
 #define CAU_HC_STOPPED_STATE		3
@@ -430,8 +413,7 @@
 
 #define MAX_SB_PER_PATH_K2			(368)
 #define MAX_SB_PER_PATH_BB			(288)
-#define MAX_SB_PER_PATH_E5			(512)
-#define MAX_TOT_SB_PER_PATH			MAX_SB_PER_PATH_E5
+#define MAX_TOT_SB_PER_PATH			MAX_SB_PER_PATH_K2
 
 #define MAX_SB_PER_PF_MIMD			129
 #define MAX_SB_PER_PF_SIMD			64
@@ -639,12 +621,8 @@
 #define MAX_NUM_ILT_RECORDS \
 	OSAL_MAX_T(PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2)
 
-#define PXP_NUM_ILT_RECORDS_E5 13664
-
-
 // Host Interface
-#define PXP_QUEUES_ZONE_MAX_NUM_E4	320
-#define PXP_QUEUES_ZONE_MAX_NUM_E5	512
+#define PXP_QUEUES_ZONE_MAX_NUM	320
 
 
 /*****************/
@@ -691,11 +669,12 @@
 /* PBF CONSTANTS  */
 /******************/
 
-/* Number of PBF command queue lines. Each line is 32B. */
-#define PBF_MAX_CMD_LINES_E4 3328
-#define PBF_MAX_CMD_LINES_E5 5280
+/* Number of PBF command queue lines. */
+#define PBF_MAX_CMD_LINES 3328 /* Each line is 256b */
 
 /* Number of BTB blocks. Each block is 256B. */
+#define BTB_MAX_BLOCKS_BB 1440 /* 2880 blocks of 128B */
+#define BTB_MAX_BLOCKS_K2 1840 /* 3680 blocks of 128B */
 #define BTB_MAX_BLOCKS 1440
 
 /*****************/
@@ -1435,40 +1414,20 @@ enum rss_hash_type {
 /*
  * status block structure
  */
-struct status_block_e4 {
-	__le16 pi_array[PIS_PER_SB_E4];
-	__le32 sb_num;
-#define STATUS_BLOCK_E4_SB_NUM_MASK      0x1FF
-#define STATUS_BLOCK_E4_SB_NUM_SHIFT     0
-#define STATUS_BLOCK_E4_ZERO_PAD_MASK    0x7F
-#define STATUS_BLOCK_E4_ZERO_PAD_SHIFT   9
-#define STATUS_BLOCK_E4_ZERO_PAD2_MASK   0xFFFF
-#define STATUS_BLOCK_E4_ZERO_PAD2_SHIFT  16
-	__le32 prod_index;
-#define STATUS_BLOCK_E4_PROD_INDEX_MASK  0xFFFFFF
-#define STATUS_BLOCK_E4_PROD_INDEX_SHIFT 0
-#define STATUS_BLOCK_E4_ZERO_PAD3_MASK   0xFF
-#define STATUS_BLOCK_E4_ZERO_PAD3_SHIFT  24
-};
-
-
-/*
- * status block structure
- */
-struct status_block_e5 {
-	__le16 pi_array[PIS_PER_SB_E5];
+struct status_block {
+	__le16 pi_array[PIS_PER_SB];
 	__le32 sb_num;
-#define STATUS_BLOCK_E5_SB_NUM_MASK      0x1FF
-#define STATUS_BLOCK_E5_SB_NUM_SHIFT     0
-#define STATUS_BLOCK_E5_ZERO_PAD_MASK    0x7F
-#define STATUS_BLOCK_E5_ZERO_PAD_SHIFT   9
-#define STATUS_BLOCK_E5_ZERO_PAD2_MASK   0xFFFF
-#define STATUS_BLOCK_E5_ZERO_PAD2_SHIFT  16
+#define STATUS_BLOCK_SB_NUM_MASK      0x1FF
+#define STATUS_BLOCK_SB_NUM_SHIFT     0
+#define STATUS_BLOCK_ZERO_PAD_MASK    0x7F
+#define STATUS_BLOCK_ZERO_PAD_SHIFT   9
+#define STATUS_BLOCK_ZERO_PAD2_MASK   0xFFFF
+#define STATUS_BLOCK_ZERO_PAD2_SHIFT  16
 	__le32 prod_index;
-#define STATUS_BLOCK_E5_PROD_INDEX_MASK  0xFFFFFF
-#define STATUS_BLOCK_E5_PROD_INDEX_SHIFT 0
-#define STATUS_BLOCK_E5_ZERO_PAD3_MASK   0xFF
-#define STATUS_BLOCK_E5_ZERO_PAD3_SHIFT  24
+#define STATUS_BLOCK_PROD_INDEX_MASK  0xFFFFFF
+#define STATUS_BLOCK_PROD_INDEX_SHIFT 0
+#define STATUS_BLOCK_ZERO_PAD3_MASK   0xFF
+#define STATUS_BLOCK_ZERO_PAD3_SHIFT  24
 };
 
 
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 5c3370e10..bc5628c4e 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -54,8 +54,8 @@
 
 /* connection context union */
 union conn_context {
-	struct e4_core_conn_context core_ctx;
-	struct e4_eth_conn_context eth_ctx;
+	struct core_conn_context core_ctx;
+	struct eth_conn_context eth_ctx;
 };
 
 /* TYPE-0 task context - iSCSI, FCOE */
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index cbc69cde7..b82ca49ff 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -159,7 +159,7 @@ ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
 	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits) &&
 	    (type == DCBX_PROTOCOL_ROCE)) {
 		ecore_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1);
-		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_PCP_BB_K2, prio << 1);
+		ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_PCP, prio << 1);
 	}
 }
 
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index b183519b5..749aea4e8 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -935,7 +935,7 @@ enum _ecore_status_t ecore_llh_set_roce_affinity(struct ecore_dev *p_dev,
 	return rc;
 }
 
-struct ecore_llh_filter_e4_details {
+struct ecore_llh_filter_details {
 	u64 value;
 	u32 mode;
 	u32 protocol_type;
@@ -944,10 +944,10 @@ struct ecore_llh_filter_e4_details {
 };
 
 static enum _ecore_status_t
-ecore_llh_access_filter_e4(struct ecore_hwfn *p_hwfn,
-			   struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx,
-			   struct ecore_llh_filter_e4_details *p_details,
-			   bool b_write_access)
+ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
+			struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx,
+			struct ecore_llh_filter_details *p_details,
+			bool b_write_access)
 {
 	u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
 	struct ecore_dmae_params params;
@@ -1008,7 +1008,7 @@ ecore_llh_access_filter_e4(struct ecore_hwfn *p_hwfn,
 							  abs_ppfid, addr);
 
 	/* Filter header select */
-	addr = NIG_REG_LLH_FUNC_FILTER_HDR_SEL_BB_K2 + filter_idx * 0x4;
+	addr = NIG_REG_LLH_FUNC_FILTER_HDR_SEL + filter_idx * 0x4;
 	if (b_write_access)
 		ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
 			       p_details->hdr_sel);
@@ -1035,7 +1035,7 @@ ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type,
 			u32 high, u32 low)
 {
-	struct ecore_llh_filter_e4_details filter_details;
+	struct ecore_llh_filter_details filter_details;
 
 	filter_details.enable = 1;
 	filter_details.value = ((u64)high << 32) | low;
@@ -1048,22 +1048,22 @@ ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			      1 : /* protocol-based classification */
 			      0;  /* MAC-address based classification */
 
-	return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx,
-					  &filter_details,
-					  true /* write access */);
+	return ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+				&filter_details,
+				true /* write access */);
 }
 
 static enum _ecore_status_t
 ecore_llh_remove_filter_e4(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx)
 {
-	struct ecore_llh_filter_e4_details filter_details;
+	struct ecore_llh_filter_details filter_details;
 
 	OSAL_MEMSET(&filter_details, 0, sizeof(filter_details));
 
-	return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx,
-					  &filter_details,
-					  true /* write access */);
+	return ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+				       &filter_details,
+				       true /* write access */);
 }
 
 static enum _ecore_status_t
@@ -1468,7 +1468,7 @@ static enum _ecore_status_t
 ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 ppfid)
 {
-	struct ecore_llh_filter_e4_details filter_details;
+	struct ecore_llh_filter_details filter_details;
 	u8 abs_ppfid, filter_idx;
 	u32 addr;
 	enum _ecore_status_t rc;
@@ -1486,9 +1486,9 @@ ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
 	     filter_idx++) {
 		OSAL_MEMSET(&filter_details, 0, sizeof(filter_details));
-		rc =  ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid,
-						 filter_idx, &filter_details,
-						 false /* read access */);
+		rc =  ecore_llh_access_filter(p_hwfn, p_ptt, abs_ppfid,
+					      filter_idx, &filter_details,
+					      false /* read access */);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 
@@ -1862,7 +1862,7 @@ static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
 
 		p_qm_port->active = 1;
 		p_qm_port->active_phys_tcs = active_phys_tcs;
-		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES_E4 / num_ports;
+		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
 		p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
 	}
 }
@@ -2730,10 +2730,8 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
-	if (CHIP_REV_IS_EMUL(p_dev) &&
-	    (ECORE_IS_AH(p_dev)))
-		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2_E5,
-			 0x3ffffff);
+	if (ECORE_IS_AH(p_dev))
+		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2, 0x3ffffff);
 
 	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
 	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
@@ -3017,49 +3015,59 @@ static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port);
 }
 
-static void ecore_emul_link_init_ah_e5(struct ecore_hwfn *p_hwfn,
+static void ecore_emul_link_init_ah(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt)
 {
+	u32 mac_base, mac_config_val = 0xa853;
 	u8 port = p_hwfn->port_id;
-	u32 mac_base = NWM_REG_MAC0_K2_E5 + (port << 2) * NWM_REG_MAC0_SIZE;
 
-	DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port);
-
-	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2_E5 + (port << 2),
-		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT) |
+	ecore_wr(p_hwfn, p_ptt, CNIG_REG_NIG_PORT0_CONF_K2 + (port << 2),
+		 (1 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_SHIFT) |
 		 (port <<
-		  CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT) |
-		 (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT));
+		  CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_SHIFT) |
+		 (0 << CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_SHIFT));
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2_E5,
-		 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT);
+	mac_base = NWM_REG_MAC0_K2 + (port << 2) * NWM_REG_MAC0_SIZE;
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2_E5,
-		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT);
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_XIF_MODE_K2,
+		 1 << ETH_MAC_REG_XIF_MODE_XGMII_K2_SHIFT);
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2_E5,
-		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT);
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_FRM_LENGTH_K2,
+		 9018 << ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_SHIFT);
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5,
-		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT);
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_IPG_LENGTH_K2,
+		 0xc << ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_SHIFT);
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5,
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_RX_FIFO_SECTIONS_K2,
+		 8 << ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_SHIFT);
+
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_TX_FIFO_SECTIONS_K2,
 		 (0xA <<
-		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT) |
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_SHIFT) |
 		 (8 <<
-		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT));
+		  ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_SHIFT));
 
-	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2_E5,
-		 0xa853);
+	/* Strip the CRC field from the frame */
+	mac_config_val &= ~ETH_MAC_REG_COMMAND_CONFIG_CRC_FWD_K2;
+	ecore_wr(p_hwfn, p_ptt, mac_base + ETH_MAC_REG_COMMAND_CONFIG_K2,
+		 mac_config_val);
 }
 
 static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
 				 struct ecore_ptt *p_ptt)
 {
-	if (ECORE_IS_AH(p_hwfn->p_dev))
-		ecore_emul_link_init_ah_e5(p_hwfn, p_ptt);
-	else /* BB */
+	u8 port = ECORE_IS_BB(p_hwfn->p_dev) ? p_hwfn->port_id * 2
+					     : p_hwfn->port_id;
+
+	DP_INFO(p_hwfn->p_dev, "Emulation: Configuring Link [port %02x]\n",
+		port);
+
+	if (ECORE_IS_BB(p_hwfn->p_dev))
 		ecore_emul_link_init_bb(p_hwfn, p_ptt);
+	else
+		ecore_emul_link_init_ah(p_hwfn, p_ptt);
+
+	return;
 }
 
 static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
@@ -4190,13 +4198,13 @@ static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn)
 	/* clear indirect access */
 	if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5, 0);
+			 PGLUE_B_REG_PGL_ADDR_E8_F0_K2, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5, 0);
+			 PGLUE_B_REG_PGL_ADDR_EC_F0_K2, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5, 0);
+			 PGLUE_B_REG_PGL_ADDR_F0_F0_K2, 0);
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5, 0);
+			 PGLUE_B_REG_PGL_ADDR_F4_F0_K2, 0);
 	} else {
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
 			 PGLUE_B_REG_PGL_ADDR_88_F0_BB, 0);
@@ -5178,7 +5186,7 @@ static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
 #endif
 		for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
 			port = ecore_rd(p_hwfn, p_ptt,
-					CNIG_REG_NIG_PORT0_CONF_K2_E5 +
+					CNIG_REG_NIG_PORT0_CONF_K2 +
 					(i * 4));
 			if (port & 1)
 				p_dev->num_ports_in_engine++;
@@ -5612,13 +5620,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 	if (CHIP_REV_IS_FPGA(p_dev)) {
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround; Prevent DMAE parities\n");
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2_E5,
+		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2,
 			 7);
 
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround: Set VF bar0 size\n");
 		ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
-			 PGLUE_B_REG_VF_BAR0_SIZE_K2_E5, 4);
+			 PGLUE_B_REG_VF_BAR0_SIZE_K2, 4);
 	}
 #endif
 
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 2ce0ea9e5..7a94ed506 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -73,306 +73,219 @@ struct xstorm_core_conn_st_ctx {
 	__le32 reserved0[55] /* Pad to 15 cycles */;
 };
 
-struct e4_xstorm_core_conn_ag_ctx {
+struct xstorm_core_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 core_state /* state */;
 	u8 flags0;
-/* exist_in_qm0 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
-/* exist_in_qm1 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
-/* exist_in_qm2 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
-/* exist_in_qm3 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
-/* bit4 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
+#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1 /* exist_in_qm0 */
+#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK            0x1 /* exist_in_qm1 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT           1
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK            0x1 /* exist_in_qm2 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT           2
+#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK         0x1 /* exist_in_qm3 */
+#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT        3
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK            0x1 /* bit4 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT           4
 /* cf_array_active */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
-/* bit6 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
-/* bit7 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK            0x1
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT           5
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK            0x1 /* bit6 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT           6
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK            0x1 /* bit7 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT           7
 	u8 flags1;
-/* bit8 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
-/* bit9 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
-/* bit10 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
-/* bit11 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
-/* bit12 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
-/* bit13 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
-/* bit14 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
-/* bit15 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK            0x1 /* bit8 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT           0
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK            0x1 /* bit9 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT           1
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK            0x1 /* bit10 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT           2
+#define XSTORM_CORE_CONN_AG_CTX_BIT11_MASK                0x1 /* bit11 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT               3
+#define XSTORM_CORE_CONN_AG_CTX_BIT12_MASK                0x1 /* bit12 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT               4
+#define XSTORM_CORE_CONN_AG_CTX_BIT13_MASK                0x1 /* bit13 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT               5
+#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK       0x1 /* bit14 */
+#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT      6
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK         0x1 /* bit15 */
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT        7
 	u8 flags2;
-/* timer0cf */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
-/* timer1cf */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
-/* timer2cf */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
+#define XSTORM_CORE_CONN_AG_CTX_CF0_MASK                  0x3 /* timer0cf */
+#define XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT                 0
+#define XSTORM_CORE_CONN_AG_CTX_CF1_MASK                  0x3 /* timer1cf */
+#define XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT                 2
+#define XSTORM_CORE_CONN_AG_CTX_CF2_MASK                  0x3 /* timer2cf */
+#define XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT                 4
 /* timer_stop_all */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
+#define XSTORM_CORE_CONN_AG_CTX_CF3_MASK                  0x3
+#define XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT                 6
 	u8 flags3;
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
+#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK                  0x3 /* cf4 */
+#define XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT                 0
+#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK                  0x3 /* cf5 */
+#define XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT                 2
+#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK                  0x3 /* cf6 */
+#define XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT                 4
+#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK                  0x3 /* cf7 */
+#define XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT                 6
 	u8 flags4;
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
-/* cf10 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
-/* cf11 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
+#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK                  0x3 /* cf8 */
+#define XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT                 0
+#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK                  0x3 /* cf9 */
+#define XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT                 2
+#define XSTORM_CORE_CONN_AG_CTX_CF10_MASK                 0x3 /* cf10 */
+#define XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT                4
+#define XSTORM_CORE_CONN_AG_CTX_CF11_MASK                 0x3 /* cf11 */
+#define XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT                6
 	u8 flags5;
-/* cf12 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
-/* cf13 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
-/* cf14 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
-/* cf15 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
+#define XSTORM_CORE_CONN_AG_CTX_CF12_MASK                 0x3 /* cf12 */
+#define XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT                0
+#define XSTORM_CORE_CONN_AG_CTX_CF13_MASK                 0x3 /* cf13 */
+#define XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT                2
+#define XSTORM_CORE_CONN_AG_CTX_CF14_MASK                 0x3 /* cf14 */
+#define XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT                4
+#define XSTORM_CORE_CONN_AG_CTX_CF15_MASK                 0x3 /* cf15 */
+#define XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT                6
 	u8 flags6;
-/* cf16 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
-/* cf_array_cf */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
-/* cf18 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
-/* cf19 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK     0x3 /* cf16 */
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT    0
+#define XSTORM_CORE_CONN_AG_CTX_CF17_MASK                 0x3 /* cf_array_cf */
+#define XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT                2
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK                0x3 /* cf18 */
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT               4
+#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK         0x3 /* cf19 */
+#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT        6
 	u8 flags7;
-/* cf20 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
-/* cf21 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
-/* cf22 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
-/* cf0en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
-/* cf1en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
+#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK             0x3 /* cf20 */
+#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT            0
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK           0x3 /* cf21 */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT          2
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK            0x3 /* cf22 */
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT           4
+#define XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK                0x1 /* cf0en */
+#define XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT               6
+#define XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK                0x1 /* cf1en */
+#define XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT               7
 	u8 flags8;
-/* cf2en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
-/* cf3en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
-/* cf4en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
-/* cf5en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
-/* cf6en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
-/* cf7en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
-/* cf8en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
-/* cf9en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
+#define XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK                0x1 /* cf2en */
+#define XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT               0
+#define XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK                0x1 /* cf3en */
+#define XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT               1
+#define XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK                0x1 /* cf4en */
+#define XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT               2
+#define XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK                0x1 /* cf5en */
+#define XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT               3
+#define XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK                0x1 /* cf6en */
+#define XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT               4
+#define XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK                0x1 /* cf7en */
+#define XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT               5
+#define XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK                0x1 /* cf8en */
+#define XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT               6
+#define XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK                0x1 /* cf9en */
+#define XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT               7
 	u8 flags9;
-/* cf10en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
-/* cf11en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
-/* cf12en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
-/* cf13en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
-/* cf14en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
-/* cf15en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
-/* cf16en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+#define XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK               0x1 /* cf10en */
+#define XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT              0
+#define XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK               0x1 /* cf11en */
+#define XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT              1
+#define XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK               0x1 /* cf12en */
+#define XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT              2
+#define XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK               0x1 /* cf13en */
+#define XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT              3
+#define XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK               0x1 /* cf14en */
+#define XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT              4
+#define XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK               0x1 /* cf15en */
+#define XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT              5
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK  0x1 /* cf16en */
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
 /* cf_array_cf_en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
+#define XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK               0x1
+#define XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT              7
 	u8 flags10;
-/* cf18en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
-/* cf19en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
-/* cf20en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
-/* cf21en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
-/* cf22en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
-/* cf23en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
-/* rule0en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
-/* rule1en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK             0x1 /* cf18en */
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT            0
+#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK      0x1 /* cf19en */
+#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT     1
+#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK          0x1 /* cf20en */
+#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT         2
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK           0x1 /* cf21en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT          3
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK         0x1 /* cf22en */
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT        4
+#define XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK               0x1 /* cf23en */
+#define XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT              5
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK           0x1 /* rule0en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT          6
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK           0x1 /* rule1en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT          7
 	u8 flags11;
-/* rule2en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
-/* rule3en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
-/* rule4en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
-/* rule5en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
-/* rule6en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
-/* rule7en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
-/* rule8en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
-/* rule9en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK           0x1 /* rule2en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT          0
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK           0x1 /* rule3en */
+#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT          1
+#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK       0x1 /* rule4en */
+#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT      2
+#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK              0x1 /* rule5en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT             3
+#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK              0x1 /* rule6en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT             4
+#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK              0x1 /* rule7en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT             5
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK         0x1 /* rule8en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT        6
+#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK              0x1 /* rule9en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT             7
 	u8 flags12;
-/* rule10en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
-/* rule11en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
-/* rule12en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
-/* rule13en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
-/* rule14en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
-/* rule15en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
-/* rule16en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
-/* rule17en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
+#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK             0x1 /* rule10en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT            0
+#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK             0x1 /* rule11en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT            1
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK         0x1 /* rule12en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT        2
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK         0x1 /* rule13en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT        3
+#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK             0x1 /* rule14en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT            4
+#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK             0x1 /* rule15en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT            5
+#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK             0x1 /* rule16en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT            6
+#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK             0x1 /* rule17en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT            7
 	u8 flags13;
-/* rule18en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
-/* rule19en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
-/* rule20en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
-/* rule21en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
-/* rule22en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
-/* rule23en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
-/* rule24en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
-/* rule25en */
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
+#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK             0x1 /* rule18en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT            0
+#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK             0x1 /* rule19en */
+#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT            1
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK         0x1 /* rule20en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT        2
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK         0x1 /* rule21en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT        3
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK         0x1 /* rule22en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT        4
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK         0x1 /* rule23en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT        5
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK         0x1 /* rule24en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT        6
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK         0x1 /* rule25en */
+#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT        7
 	u8 flags14;
-/* bit16 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
-/* bit17 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
-/* bit18 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
-/* bit19 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
-/* bit20 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
-/* bit21 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1
-#define E4_XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
-/* cf23 */
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3
-#define E4_XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
+#define XSTORM_CORE_CONN_AG_CTX_BIT16_MASK                0x1 /* bit16 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT               0
+#define XSTORM_CORE_CONN_AG_CTX_BIT17_MASK                0x1 /* bit17 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT               1
+#define XSTORM_CORE_CONN_AG_CTX_BIT18_MASK                0x1 /* bit18 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT               2
+#define XSTORM_CORE_CONN_AG_CTX_BIT19_MASK                0x1 /* bit19 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT               3
+#define XSTORM_CORE_CONN_AG_CTX_BIT20_MASK                0x1 /* bit20 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT               4
+#define XSTORM_CORE_CONN_AG_CTX_BIT21_MASK                0x1 /* bit21 */
+#define XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT               5
+#define XSTORM_CORE_CONN_AG_CTX_CF23_MASK                 0x3 /* cf23 */
+#define XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT                6
 	u8 byte2 /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 consolid_prod /* physical_q1 */;
@@ -426,89 +339,89 @@ struct e4_xstorm_core_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
 
-struct e4_tstorm_core_conn_ag_ctx {
+struct tstorm_core_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
+#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK     0x1 /* bit2 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT    2
+#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK     0x1 /* bit3 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT    3
+#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK     0x1 /* bit4 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT    4
+#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK     0x1 /* bit5 */
+#define TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT    5
+#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     6
 	u8 flags1;
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
+#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     0
+#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     2
+#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT     4
+#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT     6
 	u8 flags2;
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
+#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT     0
+#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT     2
+#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK      0x3 /* cf7 */
+#define TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT     4
+#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK      0x3 /* cf8 */
+#define TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT     6
 	u8 flags3;
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
+#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK      0x3 /* cf9 */
+#define TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT     0
+#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK     0x3 /* cf10 */
+#define TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT    2
+#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   4
+#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   5
+#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   6
+#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   7
 	u8 flags4;
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   0
+#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   1
+#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   2
+#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK    0x1 /* cf7en */
+#define TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT   3
+#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK    0x1 /* cf8en */
+#define TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT   4
+#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK    0x1 /* cf9en */
+#define TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT   5
+#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK   0x1 /* cf10en */
+#define TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT  6
+#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags5;
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define E4_TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -530,63 +443,63 @@ struct e4_tstorm_core_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct e4_ustorm_core_conn_ag_ctx {
+struct ustorm_core_conn_ag_ctx {
 	u8 reserved /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define E4_USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define E4_USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define E4_USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define E4_USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define E4_USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define USTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* timer0cf */
+#define USTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define USTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* timer1cf */
+#define USTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define USTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* timer2cf */
+#define USTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
 	u8 flags1;
-#define E4_USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
-#define E4_USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
-#define E4_USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
-#define E4_USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
+#define USTORM_CORE_CONN_AG_CTX_CF3_MASK      0x3 /* timer_stop_all */
+#define USTORM_CORE_CONN_AG_CTX_CF3_SHIFT     0
+#define USTORM_CORE_CONN_AG_CTX_CF4_MASK      0x3 /* cf4 */
+#define USTORM_CORE_CONN_AG_CTX_CF4_SHIFT     2
+#define USTORM_CORE_CONN_AG_CTX_CF5_MASK      0x3 /* cf5 */
+#define USTORM_CORE_CONN_AG_CTX_CF5_SHIFT     4
+#define USTORM_CORE_CONN_AG_CTX_CF6_MASK      0x3 /* cf6 */
+#define USTORM_CORE_CONN_AG_CTX_CF6_SHIFT     6
 	u8 flags2;
-#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
-#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
-#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
-#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
-#define E4_USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
+#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK    0x1 /* cf3en */
+#define USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT   3
+#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK    0x1 /* cf4en */
+#define USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT   4
+#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK    0x1 /* cf5en */
+#define USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT   5
+#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK    0x1 /* cf6en */
+#define USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT   6
+#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
 	u8 flags3;
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
-#define E4_USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
+#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
+#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
+#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
+#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
+#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK  0x1 /* rule5en */
+#define USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
+#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK  0x1 /* rule6en */
+#define USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
+#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK  0x1 /* rule7en */
+#define USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
+#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK  0x1 /* rule8en */
+#define USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -616,7 +529,7 @@ struct ustorm_core_conn_st_ctx {
 /*
  * core connection context
  */
-struct e4_core_conn_context {
+struct core_conn_context {
 /* ystorm storm context */
 	struct ystorm_core_conn_st_ctx ystorm_st_context;
 	struct regpair ystorm_st_padding[2] /* padding */;
@@ -626,11 +539,11 @@ struct e4_core_conn_context {
 /* xstorm storm context */
 	struct xstorm_core_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct e4_xstorm_core_conn_ag_ctx xstorm_ag_context;
+	struct xstorm_core_conn_ag_ctx xstorm_ag_context;
 /* tstorm aggregative context */
-	struct e4_tstorm_core_conn_ag_ctx tstorm_ag_context;
+	struct tstorm_core_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct e4_ustorm_core_conn_ag_ctx ustorm_ag_context;
+	struct ustorm_core_conn_ag_ctx ustorm_ag_context;
 /* mstorm storm context */
 	struct mstorm_core_conn_st_ctx mstorm_st_context;
 /* ustorm storm context */
@@ -2104,90 +2017,6 @@ enum dmae_cmd_src_enum {
 };
 
 
-struct e4_mstorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define E4_MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	__le16 word0 /* word0 */;
-	__le16 word1 /* word1 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-};
-
-
-
-
-
-struct e4_ystorm_core_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 byte1 /* state */;
-	u8 flags0;
-#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
-#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
-	u8 flags1;
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
-#define E4_YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
-	u8 byte2 /* byte2 */;
-	u8 byte3 /* byte3 */;
-	__le16 word0 /* word0 */;
-	__le32 reg0 /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le16 word1 /* word1 */;
-	__le16 word2 /* word2 */;
-	__le16 word3 /* word3 */;
-	__le16 word4 /* word4 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-};
 
 
 struct fw_asserts_ram_section {
@@ -2416,23 +2245,23 @@ struct qm_rf_opportunistic_mask {
 /*
  * QM hardware structure of QM map memory
  */
-struct qm_rf_pq_map_e4 {
+struct qm_rf_pq_map {
 	__le32 reg;
-#define QM_RF_PQ_MAP_E4_PQ_VALID_MASK          0x1 /* PQ active */
-#define QM_RF_PQ_MAP_E4_PQ_VALID_SHIFT         0
-#define QM_RF_PQ_MAP_E4_RL_ID_MASK             0xFF /* RL ID */
-#define QM_RF_PQ_MAP_E4_RL_ID_SHIFT            1
+#define QM_RF_PQ_MAP_PQ_VALID_MASK          0x1 /* PQ active */
+#define QM_RF_PQ_MAP_PQ_VALID_SHIFT         0
+#define QM_RF_PQ_MAP_RL_ID_MASK             0xFF /* RL ID */
+#define QM_RF_PQ_MAP_RL_ID_SHIFT            1
 /* the first PQ associated with the VPORT and VOQ of this PQ */
-#define QM_RF_PQ_MAP_E4_VP_PQ_ID_MASK          0x1FF
-#define QM_RF_PQ_MAP_E4_VP_PQ_ID_SHIFT         9
-#define QM_RF_PQ_MAP_E4_VOQ_MASK               0x1F /* VOQ */
-#define QM_RF_PQ_MAP_E4_VOQ_SHIFT              18
-#define QM_RF_PQ_MAP_E4_WRR_WEIGHT_GROUP_MASK  0x3 /* WRR weight */
-#define QM_RF_PQ_MAP_E4_WRR_WEIGHT_GROUP_SHIFT 23
-#define QM_RF_PQ_MAP_E4_RL_VALID_MASK          0x1 /* RL active */
-#define QM_RF_PQ_MAP_E4_RL_VALID_SHIFT         25
-#define QM_RF_PQ_MAP_E4_RESERVED_MASK          0x3F
-#define QM_RF_PQ_MAP_E4_RESERVED_SHIFT         26
+#define QM_RF_PQ_MAP_VP_PQ_ID_MASK          0x1FF
+#define QM_RF_PQ_MAP_VP_PQ_ID_SHIFT         9
+#define QM_RF_PQ_MAP_VOQ_MASK               0x1F /* VOQ */
+#define QM_RF_PQ_MAP_VOQ_SHIFT              18
+#define QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_MASK  0x3 /* WRR weight */
+#define QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_SHIFT 23
+#define QM_RF_PQ_MAP_RL_VALID_MASK          0x1 /* RL active */
+#define QM_RF_PQ_MAP_RL_VALID_SHIFT         25
+#define QM_RF_PQ_MAP_RESERVED_MASK          0x3F
+#define QM_RF_PQ_MAP_RESERVED_SHIFT         26
 };
 
 
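For context, the MASK/SHIFT pairs being renamed above follow the usual bitfield
convention in these headers: a field is read or written through generic get/set
helpers rather than open-coded shifts. The snippet below is only a standalone
sketch of that pattern; the GET_FIELD/SET_FIELD helpers are restated locally for
illustration and are not necessarily the exact macros shipped in the headers,
while the two field defines are copied from the renamed qm_rf_pq_map above.

#include <stdint.h>
#include <stdio.h>

/* Local restatement of the MASK/SHIFT access convention (illustrative only). */
#define GET_FIELD(value, name) \
	(((value) >> (name##_SHIFT)) & (name##_MASK))
#define SET_FIELD(value, name, field_val)                                      \
	do {                                                                    \
		(value) &= ~((uint32_t)(name##_MASK) << (name##_SHIFT));        \
		(value) |= ((uint32_t)(field_val) & (name##_MASK))              \
			   << (name##_SHIFT);                                   \
	} while (0)

/* Field layout taken from the renamed qm_rf_pq_map defines. */
#define QM_RF_PQ_MAP_PQ_VALID_MASK   0x1   /* PQ active */
#define QM_RF_PQ_MAP_PQ_VALID_SHIFT  0
#define QM_RF_PQ_MAP_VP_PQ_ID_MASK   0x1FF
#define QM_RF_PQ_MAP_VP_PQ_ID_SHIFT  9

int main(void)
{
	uint32_t reg = 0;

	/* Mark the PQ valid and point it at a hypothetical VPORT PQ id. */
	SET_FIELD(reg, QM_RF_PQ_MAP_PQ_VALID, 1);
	SET_FIELD(reg, QM_RF_PQ_MAP_VP_PQ_ID, 0x42);

	printf("reg=0x%08x pq_valid=%u vp_pq_id=0x%x\n",
	       reg,
	       (unsigned int)GET_FIELD(reg, QM_RF_PQ_MAP_PQ_VALID),
	       (unsigned int)GET_FIELD(reg, QM_RF_PQ_MAP_VP_PQ_ID));
	return 0;
}

Built standalone, this should print reg=0x00008401 pq_valid=1 vp_pq_id=0x42.
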
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 7bc094792..b1cab2910 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -32,312 +32,224 @@ struct xstorm_eth_conn_st_ctx {
 	__le32 reserved[60];
 };
 
-struct e4_xstorm_eth_conn_ag_ctx {
+struct xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 eth_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
 /* exist_in_qm1 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT              1
 /* exist_in_qm2 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT              2
 /* exist_in_qm3 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK               0x1 /* bit4 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT              4
 /* cf_array_active */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
-/* bit6 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
-/* bit7 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
-/* bit8 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
-/* bit9 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
-/* bit10 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
-/* bit11 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
-/* bit12 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT                  4
-/* bit13 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT                  5
-/* bit14 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK               0x1 /* bit10 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define XSTORM_ETH_CONN_AG_CTX_BIT11_MASK                   0x1 /* bit11 */
+#define XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT                  3
+#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED2_MASK            0x1 /* bit12 */
+#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED2_SHIFT           4
+#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED3_MASK            0x1 /* bit13 */
+#define XSTORM_ETH_CONN_AG_CTX_E5_RESERVED3_SHIFT           5
+#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
+#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
-/* timer0cf */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
-/* timer1cf */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
-/* timer2cf */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
+#define XSTORM_ETH_CONN_AG_CTX_CF0_MASK                     0x3 /* timer0cf */
+#define XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT                    0
+#define XSTORM_ETH_CONN_AG_CTX_CF1_MASK                     0x3 /* timer1cf */
+#define XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT                    2
+#define XSTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
+#define XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    4
 /* timer_stop_all */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
+#define XSTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
-/* cf4 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
-/* cf5 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
-/* cf6 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
-/* cf7 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
+#define XSTORM_ETH_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT                    0
+#define XSTORM_ETH_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT                    2
+#define XSTORM_ETH_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT                    4
+#define XSTORM_ETH_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
-/* cf8 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
-/* cf9 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
-/* cf10 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
-/* cf11 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
+#define XSTORM_ETH_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT                    0
+#define XSTORM_ETH_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT                    2
+#define XSTORM_ETH_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT                   4
+#define XSTORM_ETH_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
-/* cf12 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
-/* cf13 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
-/* cf14 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
-/* cf15 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
+#define XSTORM_ETH_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT                   0
+#define XSTORM_ETH_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT                   2
+#define XSTORM_ETH_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT                   4
+#define XSTORM_ETH_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
-/* cf16 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
 /* cf_array_cf */
-#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-/* cf18 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
-/* cf19 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
-/* cf20 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-/* cf21 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
-/* cf22 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
-/* cf1en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK                   0x1 /* cf0en */
+#define XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK                   0x1 /* cf1en */
+#define XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
-/* cf2en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
-/* cf3en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
-/* cf4en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
-/* cf5en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
-/* cf6en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
-/* cf7en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
-/* cf8en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
-/* cf9en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK                   0x1 /* cf4en */
+#define XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK                   0x1 /* cf5en */
+#define XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK                   0x1 /* cf6en */
+#define XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK                   0x1 /* cf7en */
+#define XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK                   0x1 /* cf8en */
+#define XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK                   0x1 /* cf9en */
+#define XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
-/* cf10en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
-/* cf11en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
-/* cf12en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
-/* cf13en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
-/* cf14en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
-/* cf15en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
-/* cf16en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK                  0x1 /* cf10en */
+#define XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK                  0x1 /* cf11en */
+#define XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK                  0x1 /* cf12en */
+#define XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK                  0x1 /* cf13en */
+#define XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK                  0x1 /* cf14en */
+#define XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK                  0x1 /* cf15en */
+#define XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
+#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
 /* cf_array_cf_en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
-/* cf18en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
-/* cf22en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
-/* rule1en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK                0x1 /* cf18en */
+#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
+#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
+#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK              0x1 /* cf21en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1 /* cf22en */
+#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
+#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK              0x1 /* rule0en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK              0x1 /* rule1en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
-/* rule2en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
-/* rule3en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
-/* rule4en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
-/* rule6en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
-/* rule7en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
-/* rule8en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK              0x1 /* rule2en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK              0x1 /* rule3en */
+#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
+#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK            0x1 /* rule8en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK                 0x1 /* rule9en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
-/* rule10en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
-/* rule11en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
-/* rule12en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
-/* rule13en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
-/* rule14en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
-/* rule15en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
-/* rule16en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
-/* rule17en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK                0x1 /* rule10en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK                0x1 /* rule11en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK            0x1 /* rule12en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK            0x1 /* rule13en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK                0x1 /* rule14en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK                0x1 /* rule15en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK                0x1 /* rule16en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK                0x1 /* rule17en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
-/* rule18en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
-/* rule19en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
-/* rule20en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
-/* rule21en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
-/* rule22en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
-/* rule23en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
-/* rule24en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
-/* rule25en */
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK                0x1 /* rule18en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK                0x1 /* rule19en */
+#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK            0x1 /* rule20en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK            0x1 /* rule21en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK            0x1 /* rule22en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK            0x1 /* rule23en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK            0x1 /* rule24en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK            0x1 /* rule25en */
+#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
-/* bit16 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define E4_XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-/* cf23 */
-#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3
-#define E4_XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
+#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
+#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
+#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 e5_reserved1 /* physical_q1 */;
@@ -398,47 +310,37 @@ struct ystorm_eth_conn_st_ctx {
 	__le32 reserved[8];
 };
 
-struct e4_ystorm_eth_conn_ag_ctx {
+struct ystorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 state /* state */;
 	u8 flags0;
-/* exist_in_qm0 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
-/* exist_in_qm1 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define E4_YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1 /* exist_in_qm0 */
+#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1 /* exist_in_qm1 */
+#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
 	u8 flags1;
-/* cf0en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
-/* cf1en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
-/* cf2en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
-/* rule0en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
-/* rule1en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
-/* rule2en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
-/* rule3en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
-/* rule4en */
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1
-#define E4_YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1 /* cf0en */
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1 /* cf1en */
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1 /* cf2en */
+#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1 /* rule0en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1 /* rule1en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1 /* rule2en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1 /* rule3en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1 /* rule4en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
 	u8 tx_q0_int_coallecing_timeset /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* word0 */;
@@ -452,89 +354,89 @@ struct e4_ystorm_eth_conn_ag_ctx {
 	__le32 reg3 /* reg3 */;
 };
 
-struct e4_tstorm_eth_conn_ag_ctx {
+struct tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
+#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK      0x1 /* exist_in_qm0 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT     0
+#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK      0x1 /* exist_in_qm1 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT     1
+#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK      0x1 /* bit2 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT     2
+#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK      0x1 /* bit3 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT     3
+#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK      0x1 /* bit4 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT     4
+#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK      0x1 /* bit5 */
+#define TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT     5
+#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK       0x3 /* timer0cf */
+#define TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT      6
 	u8 flags1;
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
+#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK       0x3 /* timer1cf */
+#define TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT      0
+#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK       0x3 /* timer2cf */
+#define TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT      2
+#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK       0x3 /* timer_stop_all */
+#define TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT      4
+#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK       0x3 /* cf4 */
+#define TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT      6
 	u8 flags2;
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
+#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK       0x3 /* cf5 */
+#define TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT      0
+#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK       0x3 /* cf6 */
+#define TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT      2
+#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK       0x3 /* cf7 */
+#define TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT      4
+#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK       0x3 /* cf8 */
+#define TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT      6
 	u8 flags3;
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
+#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK       0x3 /* cf9 */
+#define TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT      0
+#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK      0x3 /* cf10 */
+#define TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT     2
+#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK     0x1 /* cf0en */
+#define TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT    4
+#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK     0x1 /* cf1en */
+#define TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT    5
+#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK     0x1 /* cf2en */
+#define TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT    6
+#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK     0x1 /* cf3en */
+#define TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT    7
 	u8 flags4;
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
+#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK     0x1 /* cf4en */
+#define TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT    0
+#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK     0x1 /* cf5en */
+#define TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT    1
+#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK     0x1 /* cf6en */
+#define TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT    2
+#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK     0x1 /* cf7en */
+#define TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT    3
+#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK     0x1 /* cf8en */
+#define TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT    4
+#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK     0x1 /* cf9en */
+#define TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT    5
+#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK    0x1 /* cf10en */
+#define TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT   6
+#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK   0x1 /* rule0en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT  7
 	u8 flags5;
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
-#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
-#define E4_TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
+#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK   0x1 /* rule1en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT  0
+#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK   0x1 /* rule2en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT  1
+#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK   0x1 /* rule3en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT  2
+#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK   0x1 /* rule4en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT  3
+#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK   0x1 /* rule5en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT  4
+#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK  0x1 /* rule6en */
+#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
+#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK   0x1 /* rule7en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT  6
+#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK   0x1 /* rule8en */
+#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT  7
 	__le32 reg0 /* reg0 */;
 	__le32 reg1 /* reg1 */;
 	__le32 reg2 /* reg2 */;
@@ -556,88 +458,66 @@ struct e4_tstorm_eth_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
-struct e4_ustorm_eth_conn_ag_ctx {
+struct ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
+#define USTORM_ETH_CONN_AG_CTX_BIT0_MASK                    0x1
+#define USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                   0
 /* exist_in_qm1 */
-#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
-/* timer0cf */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
-/* timer1cf */
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
-/* timer2cf */
-#define E4_USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
+#define USTORM_ETH_CONN_AG_CTX_BIT1_MASK                    0x1
+#define USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                   1
+#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK     0x3 /* timer0cf */
+#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT    2
+#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK     0x3 /* timer1cf */
+#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT    4
+#define USTORM_ETH_CONN_AG_CTX_CF2_MASK                     0x3 /* timer2cf */
+#define USTORM_ETH_CONN_AG_CTX_CF2_SHIFT                    6
 	u8 flags1;
 /* timer_stop_all */
-#define E4_USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
-/* cf4 */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
-/* cf5 */
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
-/* cf6 */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
+#define USTORM_ETH_CONN_AG_CTX_CF3_MASK                     0x3
+#define USTORM_ETH_CONN_AG_CTX_CF3_SHIFT                    0
+#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK               0x3 /* cf4 */
+#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT              2
+#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK               0x3 /* cf5 */
+#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT              4
+#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK       0x3 /* cf6 */
+#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT      6
 	u8 flags2;
-/* cf0en */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
-/* cf1en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
-/* cf2en */
-#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
-/* cf3en */
-#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
-/* cf4en */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
-/* cf5en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
-/* cf6en */
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
-/* rule0en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
+#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf0en */
+#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK  0x1 /* cf1en */
+#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+#define USTORM_ETH_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                  2
+#define USTORM_ETH_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT                  3
+#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK            0x1 /* cf4en */
+#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT           4
+#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK            0x1 /* cf5en */
+#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT           5
+#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK    0x1 /* cf6en */
+#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT   6
+#define USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK                 0x1 /* rule0en */
+#define USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT                7
 	u8 flags3;
-/* rule1en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
-/* rule2en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
-/* rule3en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
-/* rule4en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
-/* rule5en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
-/* rule6en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
-/* rule7en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
-/* rule8en */
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1
-#define E4_USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
+#define USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK                 0x1 /* rule1en */
+#define USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT                0
+#define USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK                 0x1 /* rule2en */
+#define USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT                1
+#define USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK                 0x1 /* rule3en */
+#define USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT                2
+#define USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK                 0x1 /* rule4en */
+#define USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT                3
+#define USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT                4
+#define USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT                5
+#define USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT                6
+#define USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK                 0x1 /* rule8en */
+#define USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT                7
 	u8 byte2 /* byte2 */;
 	u8 byte3 /* byte3 */;
 	__le16 word0 /* conn_dpi */;
@@ -667,7 +547,7 @@ struct mstorm_eth_conn_st_ctx {
 /*
  * eth connection context
  */
-struct e4_eth_conn_context {
+struct eth_conn_context {
 /* tstorm storm context */
 	struct tstorm_eth_conn_st_ctx tstorm_st_context;
 	struct regpair tstorm_st_padding[2] /* padding */;
@@ -676,15 +556,15 @@ struct e4_eth_conn_context {
 /* xstorm storm context */
 	struct xstorm_eth_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
-	struct e4_xstorm_eth_conn_ag_ctx xstorm_ag_context;
+	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
 /* ystorm storm context */
 	struct ystorm_eth_conn_st_ctx ystorm_st_context;
 /* ystorm aggregative context */
-	struct e4_ystorm_eth_conn_ag_ctx ystorm_ag_context;
+	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
 /* tstorm aggregative context */
-	struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context;
+	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
-	struct e4_ustorm_eth_conn_ag_ctx ustorm_ag_context;
+	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
 /* ustorm storm context */
 	struct ustorm_eth_conn_st_ctx ustorm_st_context;
 /* mstorm storm context */
@@ -1875,37 +1755,37 @@ struct E4XstormEthConnAgCtxDqExtLdPart {
 };
 
 
-struct e4_mstorm_eth_conn_ag_ctx {
+struct mstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
 	u8 flags0;
-#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
-#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
+#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK  0x1 /* exist_in_qm0 */
+#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+#define MSTORM_ETH_CONN_AG_CTX_BIT1_MASK          0x1 /* exist_in_qm1 */
+#define MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT         1
+#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK           0x3 /* cf0 */
+#define MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT          2
+#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK           0x3 /* cf1 */
+#define MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT          4
+#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK           0x3 /* cf2 */
+#define MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT          6
 	u8 flags1;
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
-#define E4_MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
+#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK         0x1 /* cf0en */
+#define MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT        0
+#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK         0x1 /* cf1en */
+#define MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT        1
+#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK         0x1 /* cf2en */
+#define MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT        2
+#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK       0x1 /* rule0en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT      3
+#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK       0x1 /* rule1en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT      4
+#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK       0x1 /* rule2en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT      5
+#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK       0x1 /* rule3en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT      6
+#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK       0x1 /* rule4en */
+#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT      7
 	__le16 word0 /* word0 */;
 	__le16 word1 /* word1 */;
 	__le32 reg0 /* reg0 */;
@@ -1916,289 +1796,243 @@ struct e4_mstorm_eth_conn_ag_ctx {
 
 
 
-struct e4_xstorm_eth_hw_conn_ag_ctx {
+struct xstorm_eth_hw_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
 	u8 eth_state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
+#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT           0
 /* exist_in_qm1 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK               0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT              1
 /* exist_in_qm2 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK               0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT              2
 /* exist_in_qm3 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
-/* bit4 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
+#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT           3
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK               0x1 /* bit4 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT              4
 /* cf_array_active */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK               0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT              5
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK               0x1 /* bit6 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT              6
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK               0x1 /* bit7 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT              7
 	u8 flags1;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
-/* bit10 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
-/* bit11 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
-/* bit12 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT                  4
-/* bit13 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT                  5
-/* bit14 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
-/* bit15 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK               0x1 /* bit8 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT              0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK               0x1 /* bit9 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT              1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK               0x1 /* bit10 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT              2
+#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK                   0x1 /* bit11 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT                  3
+#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED2_MASK            0x1 /* bit12 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED2_SHIFT           4
+#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED3_MASK            0x1 /* bit13 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_E5_RESERVED3_SHIFT           5
+#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK          0x1 /* bit14 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT         6
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK            0x1 /* bit15 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT           7
 	u8 flags2;
 /* timer0cf */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK                     0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT                    0
 /* timer1cf */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK                     0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT                    2
 /* timer2cf */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK                     0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT                    4
 /* timer_stop_all */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK                     0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT                    6
 	u8 flags3;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK                     0x3 /* cf4 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT                    0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK                     0x3 /* cf5 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT                    2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK                     0x3 /* cf6 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT                    4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK                     0x3 /* cf7 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT                    6
 	u8 flags4;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK                     0x3 /* cf8 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT                    0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK                     0x3 /* cf9 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT                    2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK                    0x3 /* cf10 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT                   4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK                    0x3 /* cf11 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT                   6
 	u8 flags5;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK                    0x3 /* cf12 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT                   0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK                    0x3 /* cf13 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT                   2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK                    0x3 /* cf14 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT                   4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK                    0x3 /* cf15 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT                   6
 	u8 flags6;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
+#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK        0x3 /* cf16 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT       0
 /* cf_array_cf */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
+#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK        0x3
+#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT       2
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK                   0x3 /* cf18 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT                  4
+#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK            0x3 /* cf19 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT           6
 	u8 flags7;
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
-/* cf0en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
-/* cf1en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
+#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK                0x3 /* cf20 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT               0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK              0x3 /* cf21 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT             2
+#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK               0x3 /* cf22 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT              4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK                   0x1 /* cf0en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT                  6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK                   0x1 /* cf1en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT                  7
 	u8 flags8;
-/* cf2en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
-/* cf3en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
-/* cf4en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
-/* cf5en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
-/* cf6en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
-/* cf7en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
-/* cf8en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
-/* cf9en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK                   0x1 /* cf2en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT                  0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK                   0x1 /* cf3en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT                  1
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK                   0x1 /* cf4en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT                  2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK                   0x1 /* cf5en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT                  3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK                   0x1 /* cf6en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT                  4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK                   0x1 /* cf7en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT                  5
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK                   0x1 /* cf8en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT                  6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK                   0x1 /* cf9en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT                  7
 	u8 flags9;
-/* cf10en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
-/* cf11en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
-/* cf12en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
-/* cf13en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
-/* cf14en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
-/* cf15en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
-/* cf16en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK                  0x1 /* cf10en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT                 0
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK                  0x1 /* cf11en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT                 1
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK                  0x1 /* cf12en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT                 2
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK                  0x1 /* cf13en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT                 3
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK                  0x1 /* cf14en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT                 4
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK                  0x1 /* cf15en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT                 5
+#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK     0x1 /* cf16en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT    6
 /* cf_array_cf_en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
+#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK     0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT    7
 	u8 flags10;
-/* cf18en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
-/* cf19en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
-/* cf20en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
-/* cf21en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
-/* cf22en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
-/* cf23en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
-/* rule0en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
-/* rule1en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK                0x1 /* cf18en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT               0
+#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK         0x1 /* cf19en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT        1
+#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK             0x1 /* cf20en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT            2
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK              0x1 /* cf21en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT             3
+#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK            0x1 /* cf22en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT           4
+#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK  0x1 /* cf23en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK              0x1 /* rule0en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT             6
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK              0x1 /* rule1en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT             7
 	u8 flags11;
-/* rule2en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
-/* rule3en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
-/* rule4en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
-/* rule5en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
-/* rule6en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
-/* rule7en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
-/* rule8en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
-/* rule9en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK              0x1 /* rule2en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT             0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK              0x1 /* rule3en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT             1
+#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK          0x1 /* rule4en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT         2
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK                 0x1 /* rule5en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT                3
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK                 0x1 /* rule6en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT                4
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK                 0x1 /* rule7en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT                5
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK            0x1 /* rule8en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT           6
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK                 0x1 /* rule9en */
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT                7
 	u8 flags12;
 /* rule10en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT               0
 /* rule11en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT               1
 /* rule12en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT           2
 /* rule13en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT           3
 /* rule14en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT               4
 /* rule15en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT               5
 /* rule16en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT               6
 /* rule17en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT               7
 	u8 flags13;
 /* rule18en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT               0
 /* rule19en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK                0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT               1
 /* rule20en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT           2
 /* rule21en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT           3
 /* rule22en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT           4
 /* rule23en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT           5
 /* rule24en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT           6
 /* rule25en */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK            0x1
+#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT           7
 	u8 flags14;
-/* bit16 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
-/* bit17 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
-/* bit18 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
-/* bit19 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
-/* bit20 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
-/* bit21 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
-#define E4_XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK        0x1 /* bit16 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT       0
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK      0x1 /* bit17 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT     1
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK    0x1 /* bit18 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT   2
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK    0x1 /* bit19 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT   3
+#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK          0x1 /* bit20 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT         4
+#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK        0x1 /* bit21 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT       5
+#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK              0x3 /* cf23 */
+#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT             6
 	u8 edpm_event_id /* byte2 */;
 	__le16 physical_q0 /* physical_q0 */;
 	__le16 e5_reserved1 /* physical_q1 */;
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index 0e157f9bc..1fe4bfc61 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -23,7 +23,6 @@
 enum chip_ids {
 	CHIP_BB,
 	CHIP_K2,
-	CHIP_E5,
 	MAX_CHIP_IDS
 };
 
@@ -134,7 +133,8 @@ enum init_modes {
 	MODE_PORTS_PER_ENG_2,
 	MODE_PORTS_PER_ENG_4,
 	MODE_100G,
-	MODE_E5,
+	MODE_SKIP_PRAM_INIT,
+	MODE_EMUL_MAC,
 	MAX_INIT_MODES
 };
 
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index cfc1156eb..928d41b46 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -18,12 +18,12 @@
 
 #define CDU_VALIDATION_DEFAULT_CFG 61
 
-static u16 con_region_offsets[3][NUM_OF_CONNECTION_TYPES_E4] = {
+static u16 con_region_offsets[3][NUM_OF_CONNECTION_TYPES] = {
 	{ 400,  336,  352,  304,  304,  384,  416,  352}, /* region 3 offsets */
 	{ 528,  496,  416,  448,  448,  512,  544,  480}, /* region 4 offsets */
 	{ 608,  544,  496,  512,  576,  592,  624,  560}  /* region 5 offsets */
 };
-static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES_E4] = {
+static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
 
@@ -160,19 +160,18 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES_E4] = {
 #define QM_CMD_SET_FIELD(var, cmd, field, value) \
 	SET_FIELD(var[cmd##_##field##_OFFSET], cmd##_##field, value)
 
-#define QM_INIT_TX_PQ_MAP(p_hwfn, map, chip, pq_id, rl_valid, \
-			  vp_pq_id, rl_id, ext_voq, wrr) \
-	do {						\
-		OSAL_MEMSET(&map, 0, sizeof(map)); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_PQ_VALID, 1); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_RL_VALID, rl_valid); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_VP_PQ_ID, vp_pq_id); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_RL_ID, rl_id); \
-		SET_FIELD(map.reg, QM_RF_PQ_MAP_##chip##_VOQ, ext_voq); \
-		SET_FIELD(map.reg, \
-			  QM_RF_PQ_MAP_##chip##_WRR_WEIGHT_GROUP, wrr); \
-		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + pq_id, \
-			     *((u32 *)&map)); \
+#define QM_INIT_TX_PQ_MAP(p_hwfn, map, pq_id, vp_pq_id, \
+			   rl_valid, rl_id, voq, wrr) \
+	do { \
+		OSAL_MEMSET(&(map), 0, sizeof(map)); \
+		SET_FIELD(map.reg, QM_RF_PQ_MAP_PQ_VALID, 1); \
+		SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_VALID, rl_valid ? 1 : 0); \
+		SET_FIELD(map.reg, QM_RF_PQ_MAP_RL_ID, rl_id); \
+		SET_FIELD(map.reg, QM_RF_PQ_MAP_VP_PQ_ID, vp_pq_id); \
+		SET_FIELD(map.reg, QM_RF_PQ_MAP_VOQ, voq); \
+		SET_FIELD(map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP, wrr); \
+		STORE_RT_REG(p_hwfn, QM_REG_TXPQMAP_RT_OFFSET + (pq_id), \
+			     *((u32 *)&(map))); \
 	} while (0)
 
 #define WRITE_PQ_INFO_TO_RAM		1
@@ -497,12 +496,11 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		}
 
 		/* Prepare PQ map entry */
-		struct qm_rf_pq_map_e4 tx_pq_map;
+		struct qm_rf_pq_map tx_pq_map;
 
-		QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, E4, pq_id, rl_valid ?
-				  1 : 0,
-				  first_tx_pq_id, rl_valid ?
-				  pq_params[i].vport_id : 0,
+		QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, pq_id, first_tx_pq_id,
+				  rl_valid ? 1 : 0,
+				  rl_valid ? pq_params[i].vport_id : 0,
 				  ext_voq, pq_params[i].wrr_group);
 
 		/* Set PQ base address */
@@ -1577,9 +1575,9 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		return;
 
 	/* Update DORQ registers */
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5,
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2,
 		 eth_geneve_enable ? 1 : 0);
-	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5,
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2,
 		 ip_geneve_enable ? 1 : 0);
 }
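
Reviewer note (illustrative only, not part of the patch): QM_INIT_TX_PQ_MAP() loses its chip argument and now takes vp_pq_id ahead of the rate-limiter fields, so every caller has to be reordered. A minimal sketch of the new invocation, using the same per-PQ parameters as the call site changed above:

	struct qm_rf_pq_map tx_pq_map;

	/* new argument order: pq_id, vp_pq_id, rl_valid, rl_id, voq, wrr */
	QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, pq_id, first_tx_pq_id,
			  rl_valid ? 1 : 0,
			  rl_valid ? pq_params[i].vport_id : 0,
			  ext_voq, pq_params[i].wrr_group);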
 
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index 7368d55f7..c8536380c 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -29,7 +29,7 @@ struct ecore_pi_info {
 struct ecore_sb_sp_info {
 	struct ecore_sb_info sb_info;
 	/* per protocol index data */
-	struct ecore_pi_info pi_info_arr[PIS_PER_SB_E4];
+	struct ecore_pi_info pi_info_arr[MAX_PIS_PER_SB];
 };
 
 enum ecore_attention_type {
@@ -1514,7 +1514,7 @@ static void _ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
 	if (IS_VF(p_hwfn->p_dev))
 		return;/* @@@TBD MichalK- VF CAU... */
 
-	sb_offset = igu_sb_id * PIS_PER_SB_E4;
+	sb_offset = igu_sb_id * MAX_PIS_PER_SB;
 	OSAL_MEMSET(&pi_entry, 0, sizeof(struct cau_pi_entry));
 
 	SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_PI_TIMESET, timeset);
@@ -2692,10 +2692,10 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
 				    IGU_REG_CONSUMER_MEM + sbid * 4);
 
-	for (i = 0; i < PIS_PER_SB_E4; i++)
+	for (i = 0; i < MAX_PIS_PER_SB; i++)
 		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
 					      CAU_REG_PI_MEMORY +
-					      sbid * 4 * PIS_PER_SB_E4 +
+					      sbid * 4 * MAX_PIS_PER_SB +
 					      i * 4);
 
 	return ECORE_SUCCESS;
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index ff2310cff..5042cd1d1 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -16,8 +16,8 @@
 #define ECORE_SB_ATT_IDX	0x0001
 #define ECORE_SB_EVENT_MASK	0x0003
 
-#define SB_ALIGNED_SIZE(p_hwfn)					\
-	ALIGNED_TYPE_SIZE(struct status_block_e4, p_hwfn)
+#define SB_ALIGNED_SIZE(p_hwfn) \
+	ALIGNED_TYPE_SIZE(struct status_block, p_hwfn)
 
 #define ECORE_SB_INVALID_IDX	0xffff
 
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index 42538a46c..abea2a716 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -24,7 +24,7 @@ enum ecore_int_mode {
 #endif
 
 struct ecore_sb_info {
-	struct status_block_e4 *sb_virt;
+	struct status_block *sb_virt;
 	dma_addr_t sb_phys;
 	u32 sb_ack;		/* Last given ack */
 	u16 igu_sb_id;
@@ -42,7 +42,7 @@ struct ecore_sb_info {
 struct ecore_sb_info_dbg {
 	u32 igu_prod;
 	u32 igu_cons;
-	u16 pi[PIS_PER_SB_E4];
+	u16 pi[MAX_PIS_PER_SB];
 };
 
 struct ecore_sb_cnt_info {
@@ -65,7 +65,7 @@ static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info)
 	/* barrier(); status block is written to by the chip */
 	/* FIXME: need some sort of barrier. */
 	prod = OSAL_LE32_TO_CPU(sb_info->sb_virt->prod_index) &
-	    STATUS_BLOCK_E4_PROD_INDEX_MASK;
+	       STATUS_BLOCK_PROD_INDEX_MASK;
 	if (sb_info->sb_ack != prod) {
 		sb_info->sb_ack = prod;
 		rc |= ECORE_SB_IDX;
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 55de7086d..c998dbf8d 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -740,7 +740,7 @@ ecore_iov_pf_configure_vf_queue_coalesce(struct ecore_hwfn *p_hwfn,
  * @param p_hwfn
  * @param rel_vf_id
  *
- * @return MAX_NUM_VFS_E4 in case no further active VFs, otherwise index.
+ * @return MAX_NUM_VFS_K2 in case no further active VFs, otherwise index.
  */
 u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
 
@@ -764,7 +764,7 @@ void ecore_iov_set_vf_hw_channel(struct ecore_hwfn *p_hwfn, int vfid,
 
 #define ecore_for_each_vf(_p_hwfn, _i)					\
 	for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0);		\
-	     _i < MAX_NUM_VFS_E4;					\
+	     _i < MAX_NUM_VFS_K2;					\
 	     _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1))
 
 #endif
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 1a5152ec5..23336c282 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1703,7 +1703,7 @@ static void ecore_mcp_update_stag(struct ecore_hwfn *p_hwfn,
 
 			/* Configure DB to add external vlan to EDPM packets */
 			ecore_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1);
-			ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID_BB_K2,
+			ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID,
 				 p_hwfn->hw_info.ovlan);
 		} else {
 			ecore_wr(p_hwfn, p_ptt, NIG_REG_LLH_FUNC_TAG_EN, 0);
@@ -1711,7 +1711,7 @@ static void ecore_mcp_update_stag(struct ecore_hwfn *p_hwfn,
 
 			/* Configure DB to add external vlan to EDPM packets */
 			ecore_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 0);
-			ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID_BB_K2, 0);
+			ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID, 0);
 		}
 
 		ecore_sp_pf_update_stag(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 88ad961e7..486b21dd9 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -188,7 +188,7 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 				    struct ecore_spq *p_spq)
 {
-	struct e4_core_conn_context *p_cxt;
+	struct core_conn_context *p_cxt;
 	struct ecore_cxt_info cxt_info;
 	u16 physical_q;
 	enum _ecore_status_t rc;
@@ -210,14 +210,14 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
 		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
+			  XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
 		SET_FIELD(p_cxt->xstorm_ag_context.flags1,
-			  E4_XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
+			  XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
 		/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
 		 *	  E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
 		 */
 		SET_FIELD(p_cxt->xstorm_ag_context.flags9,
-			  E4_XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
+			  XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
 	}
 
 	/* CDU validation - FIXME currently disabled */
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 7d73ef9fb..d771ac6d4 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -1787,7 +1787,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	/* fill in pfdev info */
 	pfdev_info->chip_num = p_hwfn->p_dev->chip_num;
 	pfdev_info->db_size = 0;	/* @@@ TBD MichalK Vf Doorbells */
-	pfdev_info->indices_per_sb = PIS_PER_SB_E4;
+	pfdev_info->indices_per_sb = MAX_PIS_PER_SB;
 
 	pfdev_info->capabilities = PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED |
 				   PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE;
@@ -4383,7 +4383,7 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 			return i;
 
 out:
-	return MAX_NUM_VFS_E4;
+	return MAX_NUM_VFS_K2;
 }
 
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 50c7d2c93..e748e67d7 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -14,7 +14,7 @@
 #include "ecore_l2.h"
 
 #define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
-	(MAX_NUM_VFS_E4 * ECORE_ETH_VF_NUM_VLAN_FILTERS)
+	(MAX_NUM_VFS_K2 * ECORE_ETH_VF_NUM_VLAN_FILTERS)
 
 /* Represents a full message. Both the request filled by VF
  * and the response filled by the PF. The VF needs one copy
@@ -173,7 +173,7 @@ struct ecore_vf_info {
  * capability enabled.
  */
 struct ecore_pf_iov {
-	struct ecore_vf_info	vfs_array[MAX_NUM_VFS_E4];
+	struct ecore_vf_info	vfs_array[MAX_NUM_VFS_K2];
 	u64			pending_flr[ECORE_VF_ARRAY_LENGTH];
 
 #ifndef REMOVE_DBG
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index be59f7738..9277b46fa 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -134,7 +134,7 @@
 	0x009060UL
 #define  MISCS_REG_CLK_100G_MODE	\
 	0x009070UL
-#define MISCS_REG_RESET_PL_HV_2 \
+#define MISCS_REG_RESET_PL_HV_2_K2 \
 	0x009150UL
 #define  MSDM_REG_ENABLE_IN1 \
 	0xfc0004UL
@@ -1109,7 +1109,7 @@
 #define DORQ_REG_PF_MIN_ADDR_REG1 0x100400UL
 #define MISCS_REG_FUNCTION_HIDE 0x0096f0UL
 #define PCIE_REG_PRTY_MASK 0x0547b4UL
-#define PGLUE_B_REG_VF_BAR0_SIZE 0x2aaeb4UL
+#define PGLUE_B_REG_VF_BAR0_SIZE_K2 0x2aaeb4UL
 #define BAR0_MAP_REG_YSDM_RAM 0x1e80000UL
 #define SEM_FAST_REG_INT_RAM_SIZE 20480
 #define MCP_REG_SCRATCH_SIZE 57344
@@ -1136,12 +1136,12 @@
 #define PGLUE_B_REG_MSDM_OFFSET_MASK_B 0x2aa1c0UL
 #define PRS_REG_PKT_LEN_STAT_TAGS_NOT_COUNTED_FIRST 0x1f0a0cUL
 #define PRS_REG_SEARCH_FCOE 0x1f0408UL
-#define PGLUE_B_REG_PGL_ADDR_E8_F0 0x2aaf98UL
+#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2 0x2aaf98UL
 #define NIG_REG_DSCP_TO_TC_MAP_ENABLE 0x5088f8UL
-#define PGLUE_B_REG_PGL_ADDR_EC_F0 0x2aaf9cUL
-#define PGLUE_B_REG_PGL_ADDR_F0_F0 0x2aafa0UL
+#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2 0x2aaf9cUL
+#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2 0x2aafa0UL
 #define PRS_REG_ROCE_DEST_QP_MAX_PF 0x1f0430UL
-#define PGLUE_B_REG_PGL_ADDR_F4_F0 0x2aafa4UL
+#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2 0x2aafa4UL
 #define IGU_REG_WRITE_DONE_PENDING 0x180900UL
 #define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR 0x50196cUL
 #define PRS_REG_MSG_INFO 0x1f0a1cUL
@@ -1157,30 +1157,30 @@
 #define CDU_REG_CCFC_CTX_VALID1 0x580404UL
 #define CDU_REG_TCFC_CTX_VALID0 0x580408UL
 
-#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2_E5 0x10092cUL
-#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2_E5 0x100930UL
-#define MISCS_REG_RESET_PL_HV_2_K2_E5 0x009150UL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN_K2 0x100930UL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN_K2 0x10092cUL
 #define CNIG_REG_NW_PORT_MODE_BB 0x218200UL
 #define CNIG_REG_PMEG_IF_CMD_BB 0x21821cUL
 #define CNIG_REG_PMEG_IF_ADDR_BB 0x218224UL
 #define CNIG_REG_PMEG_IF_WRDATA_BB 0x218228UL
-#define NWM_REG_MAC0_K2_E5 0x800400UL
-#define CNIG_REG_NIG_PORT0_CONF_K2_E5 0x218200UL
-#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_E5_SHIFT 0
-#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_E5_SHIFT 1
-#define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_E5_SHIFT 3
-#define ETH_MAC_REG_XIF_MODE_K2_E5 0x000080UL
-#define ETH_MAC_REG_XIF_MODE_XGMII_K2_E5_SHIFT 0
-#define ETH_MAC_REG_FRM_LENGTH_K2_E5 0x000014UL
-#define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_E5_SHIFT 0
-#define ETH_MAC_REG_TX_IPG_LENGTH_K2_E5 0x000044UL
-#define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_E5_SHIFT 0
-#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2_E5 0x00001cUL
-#define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_E5_SHIFT 0
-#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2_E5 0x000020UL
-#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_E5_SHIFT 16
-#define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_E5_SHIFT 0
-#define ETH_MAC_REG_COMMAND_CONFIG_K2_E5 0x000008UL
+#define NWM_REG_MAC0_K2 0x800400UL
+  #define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_ENABLE_0_K2_SHIFT 0
+  #define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_NWM_PORT_MAP_0_K2_SHIFT 1
+  #define CNIG_REG_NIG_PORT0_CONF_NIG_PORT_RATE_0_K2_SHIFT 3
+#define ETH_MAC_REG_XIF_MODE_K2 0x000080UL
+  #define ETH_MAC_REG_XIF_MODE_XGMII_K2_SHIFT 0
+#define ETH_MAC_REG_FRM_LENGTH_K2 0x000014UL
+  #define ETH_MAC_REG_FRM_LENGTH_FRM_LENGTH_K2_SHIFT 0
+#define ETH_MAC_REG_TX_IPG_LENGTH_K2 0x000044UL
+  #define ETH_MAC_REG_TX_IPG_LENGTH_TXIPG_K2_SHIFT 0
+#define ETH_MAC_REG_RX_FIFO_SECTIONS_K2 0x00001cUL
+  #define ETH_MAC_REG_RX_FIFO_SECTIONS_RX_SECTION_FULL_K2_SHIFT 0
+#define ETH_MAC_REG_TX_FIFO_SECTIONS_K2 0x000020UL
+  #define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_EMPTY_K2_SHIFT 16
+  #define ETH_MAC_REG_TX_FIFO_SECTIONS_TX_SECTION_FULL_K2_SHIFT 0
+  #define ETH_MAC_REG_COMMAND_CONFIG_CRC_FWD_K2 (0x1 << 6)
+  #define ETH_MAC_REG_COMMAND_CONFIG_CRC_FWD_K2_SHIFT 6
+#define ETH_MAC_REG_COMMAND_CONFIG_K2 0x000008UL
 #define MISC_REG_XMAC_CORE_PORT_MODE_BB 0x008c08UL
 #define MISC_REG_XMAC_PHY_PORT_MODE_BB 0x008c04UL
 #define XMAC_REG_MODE_BB 0x210008UL
@@ -1192,17 +1192,12 @@
 #define XMAC_REG_RX_CTRL_BB 0x210030UL
 #define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1UL << 12)
 
-#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5 0x2aaf98UL
-#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5 0x2aaf9cUL
-#define PGLUE_B_REG_PGL_ADDR_F0_F0_K2_E5 0x2aafa0UL
-#define PGLUE_B_REG_PGL_ADDR_F4_F0_K2_E5 0x2aafa4UL
 #define PGLUE_B_REG_PGL_ADDR_88_F0_BB 0x2aa404UL
 #define PGLUE_B_REG_PGL_ADDR_8C_F0_BB 0x2aa408UL
 #define PGLUE_B_REG_PGL_ADDR_90_F0_BB 0x2aa40cUL
 #define PGLUE_B_REG_PGL_ADDR_94_F0_BB 0x2aa410UL
 #define MISCS_REG_FUNCTION_HIDE_BB_K2 0x0096f0UL
-#define PCIE_REG_PRTY_MASK_K2_E5 0x0547b4UL
-#define PGLUE_B_REG_VF_BAR0_SIZE_K2_E5 0x2aaeb4UL
+#define PCIE_REG_PRTY_MASK_K2 0x0547b4UL
 
 #define PRS_REG_OUTPUT_FORMAT_4_0_BB_K2 0x1f099cUL
 
@@ -1233,10 +1228,10 @@
 #define NIG_REG_LLH_FUNC_TAG_EN 0x5019b0UL
 #define NIG_REG_LLH_FUNC_TAG_VALUE 0x5019d0UL
 #define DORQ_REG_TAG1_OVRD_MODE 0x1008b4UL
-#define DORQ_REG_PF_PCP_BB_K2 0x1008c4UL
-#define DORQ_REG_PF_EXT_VID_BB_K2 0x1008c8UL
+#define DORQ_REG_PF_PCP 0x1008c4UL
+#define DORQ_REG_PF_EXT_VID 0x1008c8UL
 #define PRS_REG_SEARCH_NON_IP_AS_GFT 0x1f11c0UL
 #define NIG_REG_LLH_PPFID2PFID_TBL_0 0x501970UL
 #define NIG_REG_PPF_TO_ENGINE_SEL 0x508900UL
 #define NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL 0x501b98UL
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_BB_K2 0x501b40UL
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL 0x501b40UL
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index fffccf070..d6382b62c 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -569,12 +569,12 @@ qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
 		  uint16_t sb_id)
 {
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct status_block_e4 *sb_virt;
+	struct status_block *sb_virt;
 	dma_addr_t sb_phys;
 	int rc;
 
 	sb_virt = OSAL_DMA_ALLOC_COHERENT(edev, &sb_phys,
-					  sizeof(struct status_block_e4));
+					  sizeof(struct status_block));
 	if (!sb_virt) {
 		DP_ERR(edev, "Status block allocation failed\n");
 		return -ENOMEM;
@@ -584,7 +584,7 @@ qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
 	if (rc) {
 		DP_ERR(edev, "Status block initialization failed\n");
 		OSAL_DMA_FREE_COHERENT(edev, sb_virt, sb_phys,
-				       sizeof(struct status_block_e4));
+				       sizeof(struct status_block));
 		return rc;
 	}
 
@@ -683,7 +683,7 @@ void qede_dealloc_fp_resc(struct rte_eth_dev *eth_dev)
 		if (fp->sb_info) {
 			OSAL_DMA_FREE_COHERENT(edev, fp->sb_info->sb_virt,
 				fp->sb_info->sb_phys,
-				sizeof(struct status_block_e4));
+				sizeof(struct status_block));
 			rte_free(fp->sb_info);
 			fp->sb_info = NULL;
 		}
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 5/9] net/qede/base: update rt defs NVM cfg and mcp code
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (14 preceding siblings ...)
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 4/9] net/qede/base: rename HSI datatypes and funcs Rasesh Mody
@ 2019-10-06 20:14 ` Rasesh Mody
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 6/9] net/qede/base: move dmae code to HSI Rasesh Mody
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Update and add runtime array offsets (rt defs), non-volatile memory
configuration options (nvm cfg), and management co-processor (mcp)
shared code in preparation for updating the firmware to version 8.40.25.0.
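
The runtime (RT) array is an index-addressed table that the base driver
fills and the init code later flushes into chip registers, so driver and
firmware tools must agree on the offset layout or values land in the wrong
registers. As a minimal sketch of how these offsets are consumed, the
snippet below uses STORE_RT_REG, CAU_REG_PI_MEMORY_RT_OFFSET and
MAX_PIS_PER_SB, which all come from the headers touched by this series;
the wrapper function qede_demo_store_pi() is hypothetical and shown only
to illustrate the indexing scheme, not actual driver code.

/* Sketch only: write one per-protocol-index (PI) value into the runtime
 * array. Assumes the ecore headers from this series are included.
 */
static void qede_demo_store_pi(struct ecore_hwfn *p_hwfn,
			       u16 igu_sb_id, u8 pi_index, u32 pi_entry)
{
	/* Each status block owns MAX_PIS_PER_SB consecutive PI slots, so
	 * the slot index is sb * MAX_PIS_PER_SB + pi. The offset
	 * renumbering in this patch only moves where the PI block starts
	 * (CAU_REG_PI_MEMORY_RT_OFFSET); the per-SB indexing is unchanged.
	 */
	u32 slot = (u32)igu_sb_id * MAX_PIS_PER_SB + pi_index;

	STORE_RT_REG(p_hwfn, CAU_REG_PI_MEMORY_RT_OFFSET + slot, pi_entry);
}
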

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/bcm_osal.h      |   5 +-
 drivers/net/qede/base/ecore_rt_defs.h | 870 +++++++++++-------------
 drivers/net/qede/base/mcp_public.h    |  59 +-
 drivers/net/qede/base/nvm_cfg.h       | 909 +++++++++++++++++++++++++-
 4 files changed, 1351 insertions(+), 492 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 51edc4151..0f09557cf 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -148,8 +148,8 @@ void osal_dma_free_mem(struct ecore_dev *edev, dma_addr_t phys);
 			      ((u8 *)(uintptr_t)(_p_hwfn->doorbells) +	\
 			      (_db_addr)), (u32)_val)
 
-#define DIRECT_REG_WR64(hwfn, addr, value) nothing
-#define DIRECT_REG_RD64(hwfn, addr) 0
+#define DIRECT_REG_RD64(hwfn, addr) rte_read64(addr)
+#define DIRECT_REG_WR64(hwfn, addr, value) rte_write64((value), (addr))
 
 /* Mutexes */
 
@@ -455,6 +455,7 @@ u32 qede_crc32(u32 crc, u8 *ptr, u32 length);
 
 #define OSAL_DIV_S64(a, b)	((a) / (b))
 #define OSAL_LLDP_RX_TLVS(p_hwfn, tlv_buf, tlv_size) nothing
+#define OSAL_GET_EPOCH(p_hwfn)	0
 #define OSAL_DBG_ALLOC_USER_DATA(p_hwfn, user_data_ptr) (0)
 #define OSAL_DB_REC_OCCURRED(p_hwfn) nothing
 
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 3860e1a56..08b1f4700 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -24,512 +24,428 @@
 #define DORQ_REG_VF_MAX_ICID_5_RT_OFFSET                            13
 #define DORQ_REG_VF_MAX_ICID_6_RT_OFFSET                            14
 #define DORQ_REG_VF_MAX_ICID_7_RT_OFFSET                            15
-#define DORQ_REG_PF_WAKE_ALL_RT_OFFSET                              16
-#define DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET                           17
-#define DORQ_REG_GLB_MAX_ICID_0_RT_OFFSET                           18
-#define DORQ_REG_GLB_MAX_ICID_1_RT_OFFSET                           19
-#define DORQ_REG_GLB_RANGE2CONN_TYPE_0_RT_OFFSET                    20
-#define DORQ_REG_GLB_RANGE2CONN_TYPE_1_RT_OFFSET                    21
-#define DORQ_REG_PRV_PF_MAX_ICID_2_RT_OFFSET                        22
-#define DORQ_REG_PRV_PF_MAX_ICID_3_RT_OFFSET                        23
-#define DORQ_REG_PRV_PF_MAX_ICID_4_RT_OFFSET                        24
-#define DORQ_REG_PRV_PF_MAX_ICID_5_RT_OFFSET                        25
-#define DORQ_REG_PRV_VF_MAX_ICID_2_RT_OFFSET                        26
-#define DORQ_REG_PRV_VF_MAX_ICID_3_RT_OFFSET                        27
-#define DORQ_REG_PRV_VF_MAX_ICID_4_RT_OFFSET                        28
-#define DORQ_REG_PRV_VF_MAX_ICID_5_RT_OFFSET                        29
-#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_2_RT_OFFSET                 30
-#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_3_RT_OFFSET                 31
-#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_4_RT_OFFSET                 32
-#define DORQ_REG_PRV_PF_RANGE2CONN_TYPE_5_RT_OFFSET                 33
-#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_2_RT_OFFSET                 34
-#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_3_RT_OFFSET                 35
-#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_4_RT_OFFSET                 36
-#define DORQ_REG_PRV_VF_RANGE2CONN_TYPE_5_RT_OFFSET                 37
-#define IGU_REG_PF_CONFIGURATION_RT_OFFSET                          38
-#define IGU_REG_VF_CONFIGURATION_RT_OFFSET                          39
-#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET                           40
-#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET                           41
-#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET                        42
-#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET                       43
-#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET                         44
-#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET                             45
-#define CAU_REG_SB_VAR_MEMORY_RT_SIZE                               1024
-#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET                            1069
-#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE                              1024
-#define CAU_REG_PI_MEMORY_RT_OFFSET                                 2093
+#define DORQ_REG_VF_ICID_BIT_SHIFT_NORM_RT_OFFSET                   16
+#define DORQ_REG_PF_WAKE_ALL_RT_OFFSET                              17
+#define DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET                           18
+#define IGU_REG_PF_CONFIGURATION_RT_OFFSET                          19
+#define IGU_REG_VF_CONFIGURATION_RT_OFFSET                          20
+#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET                           21
+#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET                           22
+#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET                        23
+#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET                       24
+#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET                         25
+#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET                             26
+#define CAU_REG_SB_VAR_MEMORY_RT_SIZE                               736
+#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET                            762
+#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE                              736
+#define CAU_REG_PI_MEMORY_RT_OFFSET                                 1498
 #define CAU_REG_PI_MEMORY_RT_SIZE                                   4416
-#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET                6509
-#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET                  6510
-#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET                  6511
-#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET                     6512
-#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET                     6513
-#define PRS_REG_SEARCH_TCP_RT_OFFSET                                6514
-#define PRS_REG_SEARCH_FCOE_RT_OFFSET                               6515
-#define PRS_REG_SEARCH_ROCE_RT_OFFSET                               6516
-#define PRS_REG_ROCE_DEST_QP_MAX_VF_RT_OFFSET                       6517
-#define PRS_REG_ROCE_DEST_QP_MAX_PF_RT_OFFSET                       6518
-#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET                           6519
-#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET                 6520
-#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET       6521
-#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET                  6522
-#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET                           6523
-#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET                     6524
-#define SRC_REG_FIRSTFREE_RT_OFFSET                                 6525
+#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET                5914
+#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET                  5915
+#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET                  5916
+#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET                     5917
+#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET                     5918
+#define PRS_REG_SEARCH_TCP_RT_OFFSET                                5919
+#define PRS_REG_SEARCH_FCOE_RT_OFFSET                               5920
+#define PRS_REG_SEARCH_ROCE_RT_OFFSET                               5921
+#define PRS_REG_ROCE_DEST_QP_MAX_VF_RT_OFFSET                       5922
+#define PRS_REG_ROCE_DEST_QP_MAX_PF_RT_OFFSET                       5923
+#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET                           5924
+#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET                 5925
+#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET       5926
+#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET                  5927
+#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET                           5928
+#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET                     5929
+#define SRC_REG_FIRSTFREE_RT_OFFSET                                 5930
 #define SRC_REG_FIRSTFREE_RT_SIZE                                   2
-#define SRC_REG_LASTFREE_RT_OFFSET                                  6527
+#define SRC_REG_LASTFREE_RT_OFFSET                                  5932
 #define SRC_REG_LASTFREE_RT_SIZE                                    2
-#define SRC_REG_COUNTFREE_RT_OFFSET                                 6529
-#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET                          6530
-#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET                            6531
-#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET                            6532
-#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET                              6533
-#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET                              6534
-#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET                             6535
-#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET                            6536
-#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET                           6537
-#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET                            6538
-#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET                           6539
-#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET                            6540
-#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET                          6541
-#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET                           6542
-#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET                         6543
-#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET                          6544
-#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET                         6545
-#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET                          6546
-#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET                         6547
-#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET                          6548
-#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET                 6549
-#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET               6550
-#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET               6551
-#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET                           6552
-#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET                         6553
-#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET                         6554
-#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET                       6555
-#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET                     6556
-#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET                     6557
-#define PSWRQ2_REG_VF_BASE_RT_OFFSET                                6558
-#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET                            6559
-#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET                          6560
-#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET                          6561
-#define PSWRQ2_REG_TGSRC_FIRST_ILT_RT_OFFSET                        6562
-#define PSWRQ2_REG_RGSRC_FIRST_ILT_RT_OFFSET                        6563
-#define PSWRQ2_REG_TGSRC_LAST_ILT_RT_OFFSET                         6564
-#define PSWRQ2_REG_RGSRC_LAST_ILT_RT_OFFSET                         6565
-#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET                             6566
-#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE                               26414
-#define PGLUE_REG_B_VF_BASE_RT_OFFSET                               32980
-#define PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET                    32981
-#define PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET                       32982
-#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET                       32983
-#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET                          32984
-#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET                          32985
-#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET                          32986
-#define TM_REG_VF_ENABLE_CONN_RT_OFFSET                             32987
-#define TM_REG_PF_ENABLE_CONN_RT_OFFSET                             32988
-#define TM_REG_PF_ENABLE_TASK_RT_OFFSET                             32989
-#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET                 32990
-#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET                 32991
-#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            32992
+#define SRC_REG_COUNTFREE_RT_OFFSET                                 5934
+#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET                          5935
+#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET                            5936
+#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET                            5937
+#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET                              5938
+#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET                              5939
+#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET                             5940
+#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET                            5941
+#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET                           5942
+#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET                            5943
+#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET                           5944
+#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET                            5945
+#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET                          5946
+#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET                           5947
+#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET                         5948
+#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET                          5949
+#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET                         5950
+#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET                          5951
+#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET                         5952
+#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET                          5953
+#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET                 5954
+#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET               5955
+#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET               5956
+#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET                           5957
+#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET                         5958
+#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET                         5959
+#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET                       5960
+#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET                     5961
+#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET                     5962
+#define PSWRQ2_REG_VF_BASE_RT_OFFSET                                5963
+#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET                            5964
+#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET                          5965
+#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET                          5966
+#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET                             5967
+#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE                               22000
+#define PGLUE_REG_B_VF_BASE_RT_OFFSET                               27967
+#define PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET                    27968
+#define PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET                       27969
+#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET                       27970
+#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET                          27971
+#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET                          27972
+#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET                          27973
+#define TM_REG_VF_ENABLE_CONN_RT_OFFSET                             27974
+#define TM_REG_PF_ENABLE_CONN_RT_OFFSET                             27975
+#define TM_REG_PF_ENABLE_TASK_RT_OFFSET                             27976
+#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET                 27977
+#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET                 27978
+#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET                            27979
 #define TM_REG_CONFIG_CONN_MEM_RT_SIZE                              416
-#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            33408
-#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              608
-#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                34016
-#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                34017
-#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                34018
-#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           34019
-#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           34020
-#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           34021
-#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           34022
-#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           34023
-#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           34024
-#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           34025
-#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           34026
-#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           34027
-#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           34028
-#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          34029
-#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          34030
-#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          34031
-#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          34032
-#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          34033
-#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          34034
-#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          34035
-#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          34036
-#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          34037
-#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          34038
-#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          34039
-#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          34040
-#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          34041
-#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          34042
-#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          34043
-#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          34044
-#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          34045
-#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          34046
-#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          34047
-#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          34048
-#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          34049
-#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          34050
-#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          34051
-#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          34052
-#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          34053
-#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          34054
-#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          34055
-#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          34056
-#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          34057
-#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          34058
-#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          34059
-#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          34060
-#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          34061
-#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          34062
-#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          34063
-#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          34064
-#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          34065
-#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          34066
-#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          34067
-#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          34068
-#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          34069
-#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          34070
-#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          34071
-#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          34072
-#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          34073
-#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          34074
-#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          34075
-#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          34076
-#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          34077
-#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          34078
-#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          34079
-#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          34080
-#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          34081
-#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          34082
-#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            34083
+#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET                            28395
+#define TM_REG_CONFIG_TASK_MEM_RT_SIZE                              512
+#define QM_REG_MAXPQSIZE_0_RT_OFFSET                                28907
+#define QM_REG_MAXPQSIZE_1_RT_OFFSET                                28908
+#define QM_REG_MAXPQSIZE_2_RT_OFFSET                                28909
+#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET                           28910
+#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET                           28911
+#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET                           28912
+#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET                           28913
+#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET                           28914
+#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET                           28915
+#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET                           28916
+#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET                           28917
+#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET                           28918
+#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET                           28919
+#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET                          28920
+#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET                          28921
+#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET                          28922
+#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET                          28923
+#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET                          28924
+#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET                          28925
+#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET                          28926
+#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET                          28927
+#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET                          28928
+#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET                          28929
+#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET                          28930
+#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET                          28931
+#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET                          28932
+#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET                          28933
+#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET                          28934
+#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET                          28935
+#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET                          28936
+#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET                          28937
+#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET                          28938
+#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET                          28939
+#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET                          28940
+#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET                          28941
+#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET                          28942
+#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET                          28943
+#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET                          28944
+#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET                          28945
+#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET                          28946
+#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET                          28947
+#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET                          28948
+#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET                          28949
+#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET                          28950
+#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET                          28951
+#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET                          28952
+#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET                          28953
+#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET                          28954
+#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET                          28955
+#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET                          28956
+#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET                          28957
+#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET                          28958
+#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET                          28959
+#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET                          28960
+#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET                          28961
+#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET                          28962
+#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET                          28963
+#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET                          28964
+#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET                          28965
+#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET                          28966
+#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET                          28967
+#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET                          28968
+#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET                          28969
+#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET                          28970
+#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET                          28971
+#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET                          28972
+#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET                          28973
+#define QM_REG_BASEADDROTHERPQ_RT_OFFSET                            28974
 #define QM_REG_BASEADDROTHERPQ_RT_SIZE                              128
-#define QM_REG_PTRTBLOTHER_RT_OFFSET                                34211
+#define QM_REG_PTRTBLOTHER_RT_OFFSET                                29102
 #define QM_REG_PTRTBLOTHER_RT_SIZE                                  256
-#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         34467
-#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         34468
-#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          34469
-#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        34470
-#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       34471
-#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            34472
-#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            34473
-#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            34474
-#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            34475
-#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            34476
-#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            34477
-#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            34478
-#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            34479
-#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            34480
-#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            34481
-#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           34482
-#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           34483
-#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           34484
-#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           34485
-#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           34486
-#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           34487
-#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        34488
-#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        34489
-#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        34490
-#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        34491
-#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           34492
-#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           34493
-#define QM_REG_PQTX2PF_0_RT_OFFSET                                  34494
-#define QM_REG_PQTX2PF_1_RT_OFFSET                                  34495
-#define QM_REG_PQTX2PF_2_RT_OFFSET                                  34496
-#define QM_REG_PQTX2PF_3_RT_OFFSET                                  34497
-#define QM_REG_PQTX2PF_4_RT_OFFSET                                  34498
-#define QM_REG_PQTX2PF_5_RT_OFFSET                                  34499
-#define QM_REG_PQTX2PF_6_RT_OFFSET                                  34500
-#define QM_REG_PQTX2PF_7_RT_OFFSET                                  34501
-#define QM_REG_PQTX2PF_8_RT_OFFSET                                  34502
-#define QM_REG_PQTX2PF_9_RT_OFFSET                                  34503
-#define QM_REG_PQTX2PF_10_RT_OFFSET                                 34504
-#define QM_REG_PQTX2PF_11_RT_OFFSET                                 34505
-#define QM_REG_PQTX2PF_12_RT_OFFSET                                 34506
-#define QM_REG_PQTX2PF_13_RT_OFFSET                                 34507
-#define QM_REG_PQTX2PF_14_RT_OFFSET                                 34508
-#define QM_REG_PQTX2PF_15_RT_OFFSET                                 34509
-#define QM_REG_PQTX2PF_16_RT_OFFSET                                 34510
-#define QM_REG_PQTX2PF_17_RT_OFFSET                                 34511
-#define QM_REG_PQTX2PF_18_RT_OFFSET                                 34512
-#define QM_REG_PQTX2PF_19_RT_OFFSET                                 34513
-#define QM_REG_PQTX2PF_20_RT_OFFSET                                 34514
-#define QM_REG_PQTX2PF_21_RT_OFFSET                                 34515
-#define QM_REG_PQTX2PF_22_RT_OFFSET                                 34516
-#define QM_REG_PQTX2PF_23_RT_OFFSET                                 34517
-#define QM_REG_PQTX2PF_24_RT_OFFSET                                 34518
-#define QM_REG_PQTX2PF_25_RT_OFFSET                                 34519
-#define QM_REG_PQTX2PF_26_RT_OFFSET                                 34520
-#define QM_REG_PQTX2PF_27_RT_OFFSET                                 34521
-#define QM_REG_PQTX2PF_28_RT_OFFSET                                 34522
-#define QM_REG_PQTX2PF_29_RT_OFFSET                                 34523
-#define QM_REG_PQTX2PF_30_RT_OFFSET                                 34524
-#define QM_REG_PQTX2PF_31_RT_OFFSET                                 34525
-#define QM_REG_PQTX2PF_32_RT_OFFSET                                 34526
-#define QM_REG_PQTX2PF_33_RT_OFFSET                                 34527
-#define QM_REG_PQTX2PF_34_RT_OFFSET                                 34528
-#define QM_REG_PQTX2PF_35_RT_OFFSET                                 34529
-#define QM_REG_PQTX2PF_36_RT_OFFSET                                 34530
-#define QM_REG_PQTX2PF_37_RT_OFFSET                                 34531
-#define QM_REG_PQTX2PF_38_RT_OFFSET                                 34532
-#define QM_REG_PQTX2PF_39_RT_OFFSET                                 34533
-#define QM_REG_PQTX2PF_40_RT_OFFSET                                 34534
-#define QM_REG_PQTX2PF_41_RT_OFFSET                                 34535
-#define QM_REG_PQTX2PF_42_RT_OFFSET                                 34536
-#define QM_REG_PQTX2PF_43_RT_OFFSET                                 34537
-#define QM_REG_PQTX2PF_44_RT_OFFSET                                 34538
-#define QM_REG_PQTX2PF_45_RT_OFFSET                                 34539
-#define QM_REG_PQTX2PF_46_RT_OFFSET                                 34540
-#define QM_REG_PQTX2PF_47_RT_OFFSET                                 34541
-#define QM_REG_PQTX2PF_48_RT_OFFSET                                 34542
-#define QM_REG_PQTX2PF_49_RT_OFFSET                                 34543
-#define QM_REG_PQTX2PF_50_RT_OFFSET                                 34544
-#define QM_REG_PQTX2PF_51_RT_OFFSET                                 34545
-#define QM_REG_PQTX2PF_52_RT_OFFSET                                 34546
-#define QM_REG_PQTX2PF_53_RT_OFFSET                                 34547
-#define QM_REG_PQTX2PF_54_RT_OFFSET                                 34548
-#define QM_REG_PQTX2PF_55_RT_OFFSET                                 34549
-#define QM_REG_PQTX2PF_56_RT_OFFSET                                 34550
-#define QM_REG_PQTX2PF_57_RT_OFFSET                                 34551
-#define QM_REG_PQTX2PF_58_RT_OFFSET                                 34552
-#define QM_REG_PQTX2PF_59_RT_OFFSET                                 34553
-#define QM_REG_PQTX2PF_60_RT_OFFSET                                 34554
-#define QM_REG_PQTX2PF_61_RT_OFFSET                                 34555
-#define QM_REG_PQTX2PF_62_RT_OFFSET                                 34556
-#define QM_REG_PQTX2PF_63_RT_OFFSET                                 34557
-#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               34558
-#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               34559
-#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               34560
-#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               34561
-#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               34562
-#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               34563
-#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               34564
-#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               34565
-#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               34566
-#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               34567
-#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              34568
-#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              34569
-#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              34570
-#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              34571
-#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              34572
-#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              34573
-#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             34574
-#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             34575
-#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        34576
-#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        34577
-#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          34578
-#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          34579
-#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          34580
-#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          34581
-#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          34582
-#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          34583
-#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          34584
-#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          34585
-#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               34586
+#define QM_REG_VOQCRDLINE_RT_OFFSET                                 29358
+#define QM_REG_VOQCRDLINE_RT_SIZE                                   20
+#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             29378
+#define QM_REG_VOQINITCRDLINE_RT_SIZE                               20
+#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET                         29398
+#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET                         29399
+#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET                          29400
+#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET                        29401
+#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET                       29402
+#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET                            29403
+#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET                            29404
+#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET                            29405
+#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET                            29406
+#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET                            29407
+#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET                            29408
+#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET                            29409
+#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET                            29410
+#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET                            29411
+#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET                            29412
+#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET                           29413
+#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET                           29414
+#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET                           29415
+#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET                           29416
+#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET                           29417
+#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET                           29418
+#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET                        29419
+#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET                        29420
+#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET                        29421
+#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET                        29422
+#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET                           29423
+#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET                           29424
+#define QM_REG_PQTX2PF_0_RT_OFFSET                                  29425
+#define QM_REG_PQTX2PF_1_RT_OFFSET                                  29426
+#define QM_REG_PQTX2PF_2_RT_OFFSET                                  29427
+#define QM_REG_PQTX2PF_3_RT_OFFSET                                  29428
+#define QM_REG_PQTX2PF_4_RT_OFFSET                                  29429
+#define QM_REG_PQTX2PF_5_RT_OFFSET                                  29430
+#define QM_REG_PQTX2PF_6_RT_OFFSET                                  29431
+#define QM_REG_PQTX2PF_7_RT_OFFSET                                  29432
+#define QM_REG_PQTX2PF_8_RT_OFFSET                                  29433
+#define QM_REG_PQTX2PF_9_RT_OFFSET                                  29434
+#define QM_REG_PQTX2PF_10_RT_OFFSET                                 29435
+#define QM_REG_PQTX2PF_11_RT_OFFSET                                 29436
+#define QM_REG_PQTX2PF_12_RT_OFFSET                                 29437
+#define QM_REG_PQTX2PF_13_RT_OFFSET                                 29438
+#define QM_REG_PQTX2PF_14_RT_OFFSET                                 29439
+#define QM_REG_PQTX2PF_15_RT_OFFSET                                 29440
+#define QM_REG_PQTX2PF_16_RT_OFFSET                                 29441
+#define QM_REG_PQTX2PF_17_RT_OFFSET                                 29442
+#define QM_REG_PQTX2PF_18_RT_OFFSET                                 29443
+#define QM_REG_PQTX2PF_19_RT_OFFSET                                 29444
+#define QM_REG_PQTX2PF_20_RT_OFFSET                                 29445
+#define QM_REG_PQTX2PF_21_RT_OFFSET                                 29446
+#define QM_REG_PQTX2PF_22_RT_OFFSET                                 29447
+#define QM_REG_PQTX2PF_23_RT_OFFSET                                 29448
+#define QM_REG_PQTX2PF_24_RT_OFFSET                                 29449
+#define QM_REG_PQTX2PF_25_RT_OFFSET                                 29450
+#define QM_REG_PQTX2PF_26_RT_OFFSET                                 29451
+#define QM_REG_PQTX2PF_27_RT_OFFSET                                 29452
+#define QM_REG_PQTX2PF_28_RT_OFFSET                                 29453
+#define QM_REG_PQTX2PF_29_RT_OFFSET                                 29454
+#define QM_REG_PQTX2PF_30_RT_OFFSET                                 29455
+#define QM_REG_PQTX2PF_31_RT_OFFSET                                 29456
+#define QM_REG_PQTX2PF_32_RT_OFFSET                                 29457
+#define QM_REG_PQTX2PF_33_RT_OFFSET                                 29458
+#define QM_REG_PQTX2PF_34_RT_OFFSET                                 29459
+#define QM_REG_PQTX2PF_35_RT_OFFSET                                 29460
+#define QM_REG_PQTX2PF_36_RT_OFFSET                                 29461
+#define QM_REG_PQTX2PF_37_RT_OFFSET                                 29462
+#define QM_REG_PQTX2PF_38_RT_OFFSET                                 29463
+#define QM_REG_PQTX2PF_39_RT_OFFSET                                 29464
+#define QM_REG_PQTX2PF_40_RT_OFFSET                                 29465
+#define QM_REG_PQTX2PF_41_RT_OFFSET                                 29466
+#define QM_REG_PQTX2PF_42_RT_OFFSET                                 29467
+#define QM_REG_PQTX2PF_43_RT_OFFSET                                 29468
+#define QM_REG_PQTX2PF_44_RT_OFFSET                                 29469
+#define QM_REG_PQTX2PF_45_RT_OFFSET                                 29470
+#define QM_REG_PQTX2PF_46_RT_OFFSET                                 29471
+#define QM_REG_PQTX2PF_47_RT_OFFSET                                 29472
+#define QM_REG_PQTX2PF_48_RT_OFFSET                                 29473
+#define QM_REG_PQTX2PF_49_RT_OFFSET                                 29474
+#define QM_REG_PQTX2PF_50_RT_OFFSET                                 29475
+#define QM_REG_PQTX2PF_51_RT_OFFSET                                 29476
+#define QM_REG_PQTX2PF_52_RT_OFFSET                                 29477
+#define QM_REG_PQTX2PF_53_RT_OFFSET                                 29478
+#define QM_REG_PQTX2PF_54_RT_OFFSET                                 29479
+#define QM_REG_PQTX2PF_55_RT_OFFSET                                 29480
+#define QM_REG_PQTX2PF_56_RT_OFFSET                                 29481
+#define QM_REG_PQTX2PF_57_RT_OFFSET                                 29482
+#define QM_REG_PQTX2PF_58_RT_OFFSET                                 29483
+#define QM_REG_PQTX2PF_59_RT_OFFSET                                 29484
+#define QM_REG_PQTX2PF_60_RT_OFFSET                                 29485
+#define QM_REG_PQTX2PF_61_RT_OFFSET                                 29486
+#define QM_REG_PQTX2PF_62_RT_OFFSET                                 29487
+#define QM_REG_PQTX2PF_63_RT_OFFSET                                 29488
+#define QM_REG_PQOTHER2PF_0_RT_OFFSET                               29489
+#define QM_REG_PQOTHER2PF_1_RT_OFFSET                               29490
+#define QM_REG_PQOTHER2PF_2_RT_OFFSET                               29491
+#define QM_REG_PQOTHER2PF_3_RT_OFFSET                               29492
+#define QM_REG_PQOTHER2PF_4_RT_OFFSET                               29493
+#define QM_REG_PQOTHER2PF_5_RT_OFFSET                               29494
+#define QM_REG_PQOTHER2PF_6_RT_OFFSET                               29495
+#define QM_REG_PQOTHER2PF_7_RT_OFFSET                               29496
+#define QM_REG_PQOTHER2PF_8_RT_OFFSET                               29497
+#define QM_REG_PQOTHER2PF_9_RT_OFFSET                               29498
+#define QM_REG_PQOTHER2PF_10_RT_OFFSET                              29499
+#define QM_REG_PQOTHER2PF_11_RT_OFFSET                              29500
+#define QM_REG_PQOTHER2PF_12_RT_OFFSET                              29501
+#define QM_REG_PQOTHER2PF_13_RT_OFFSET                              29502
+#define QM_REG_PQOTHER2PF_14_RT_OFFSET                              29503
+#define QM_REG_PQOTHER2PF_15_RT_OFFSET                              29504
+#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET                             29505
+#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET                             29506
+#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET                        29507
+#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET                        29508
+#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET                          29509
+#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET                          29510
+#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET                          29511
+#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET                          29512
+#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET                          29513
+#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET                          29514
+#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET                          29515
+#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET                          29516
+#define QM_REG_RLGLBLINCVAL_RT_OFFSET                               29517
 #define QM_REG_RLGLBLINCVAL_RT_SIZE                                 256
-#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           34842
+#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET                           29773
 #define QM_REG_RLGLBLUPPERBOUND_RT_SIZE                             256
-#define QM_REG_RLGLBLCRD_RT_OFFSET                                  35098
+#define QM_REG_RLGLBLCRD_RT_OFFSET                                  30029
 #define QM_REG_RLGLBLCRD_RT_SIZE                                    256
-#define QM_REG_RLGLBLENABLE_RT_OFFSET                               35354
-#define QM_REG_RLPFPERIOD_RT_OFFSET                                 35355
-#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            35356
-#define QM_REG_RLPFINCVAL_RT_OFFSET                                 35357
+#define QM_REG_RLGLBLENABLE_RT_OFFSET                               30285
+#define QM_REG_RLPFPERIOD_RT_OFFSET                                 30286
+#define QM_REG_RLPFPERIODTIMER_RT_OFFSET                            30287
+#define QM_REG_RLPFINCVAL_RT_OFFSET                                 30288
 #define QM_REG_RLPFINCVAL_RT_SIZE                                   16
-#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             35373
+#define QM_REG_RLPFUPPERBOUND_RT_OFFSET                             30304
 #define QM_REG_RLPFUPPERBOUND_RT_SIZE                               16
-#define QM_REG_RLPFCRD_RT_OFFSET                                    35389
+#define QM_REG_RLPFCRD_RT_OFFSET                                    30320
 #define QM_REG_RLPFCRD_RT_SIZE                                      16
-#define QM_REG_RLPFENABLE_RT_OFFSET                                 35405
-#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              35406
-#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                35407
+#define QM_REG_RLPFENABLE_RT_OFFSET                                 30336
+#define QM_REG_RLPFVOQENABLE_RT_OFFSET                              30337
+#define QM_REG_WFQPFWEIGHT_RT_OFFSET                                30338
 #define QM_REG_WFQPFWEIGHT_RT_SIZE                                  16
-#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            35423
+#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET                            30354
 #define QM_REG_WFQPFUPPERBOUND_RT_SIZE                              16
-#define QM_REG_WFQPFCRD_RT_OFFSET                                   35439
-#define QM_REG_WFQPFCRD_RT_SIZE                                     256
-#define QM_REG_WFQPFENABLE_RT_OFFSET                                35695
-#define QM_REG_WFQVPENABLE_RT_OFFSET                                35696
-#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               35697
+#define QM_REG_WFQPFCRD_RT_OFFSET                                   30370
+#define QM_REG_WFQPFCRD_RT_SIZE                                     160
+#define QM_REG_WFQPFENABLE_RT_OFFSET                                30530
+#define QM_REG_WFQVPENABLE_RT_OFFSET                                30531
+#define QM_REG_BASEADDRTXPQ_RT_OFFSET                               30532
 #define QM_REG_BASEADDRTXPQ_RT_SIZE                                 512
-#define QM_REG_TXPQMAP_RT_OFFSET                                    36209
+#define QM_REG_TXPQMAP_RT_OFFSET                                    31044
 #define QM_REG_TXPQMAP_RT_SIZE                                      512
-#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                36721
+#define QM_REG_WFQVPWEIGHT_RT_OFFSET                                31556
 #define QM_REG_WFQVPWEIGHT_RT_SIZE                                  512
-#define QM_REG_WFQVPCRD_RT_OFFSET                                   37233
+#define QM_REG_WFQVPCRD_RT_OFFSET                                   32068
 #define QM_REG_WFQVPCRD_RT_SIZE                                     512
-#define QM_REG_WFQVPMAP_RT_OFFSET                                   37745
+#define QM_REG_WFQVPMAP_RT_OFFSET                                   32580
 #define QM_REG_WFQVPMAP_RT_SIZE                                     512
-#define QM_REG_PTRTBLTX_RT_OFFSET                                   38257
+#define QM_REG_PTRTBLTX_RT_OFFSET                                   33092
 #define QM_REG_PTRTBLTX_RT_SIZE                                     1024
-#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               39281
-#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 320
-#define QM_REG_VOQCRDLINE_RT_OFFSET                                 39601
-#define QM_REG_VOQCRDLINE_RT_SIZE                                   36
-#define QM_REG_VOQINITCRDLINE_RT_OFFSET                             39637
-#define QM_REG_VOQINITCRDLINE_RT_SIZE                               36
-#define QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET                          39673
-#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           39674
-#define NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET                      39675
-#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     39676
-#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     39677
-#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     39678
-#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     39679
-#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  39680
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           39681
+#define QM_REG_WFQPFCRD_MSB_RT_OFFSET                               34116
+#define QM_REG_WFQPFCRD_MSB_RT_SIZE                                 160
+#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34276
+#define NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET                      34277
+#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET                     34278
+#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET                     34279
+#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET                     34280
+#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET                     34281
+#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET                  34282
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET                           34283
 #define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE                             4
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        39685
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET                        34287
 #define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE                          4
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     39689
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET                     34291
 #define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE                       32
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        39721
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET                        34323
 #define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE                          16
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      39737
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET                      34339
 #define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE                        16
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             39753
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET             34355
 #define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE               16
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   39769
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET                   34371
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE                     16
-#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              39785
-#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET                         39786
+#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET                              34387
+#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET                         34388
 #define NIG_REG_PPF_TO_ENGINE_SEL_RT_SIZE                           8
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_OFFSET              39794
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_SIZE                1024
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_OFFSET                 40818
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_SIZE                   512
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_OFFSET               41330
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_SIZE                 512
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET      41842
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE        512
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_OFFSET            42354
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_SIZE              512
-#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_OFFSET                    42866
-#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_SIZE                      32
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           42898
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           42899
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           42900
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       42901
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       42902
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       42903
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       42904
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    42905
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    42906
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    42907
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    42908
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        42909
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     42910
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           42911
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      42912
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    42913
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       42914
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                42915
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    42916
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       42917
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                42918
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    42919
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       42920
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                42921
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    42922
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       42923
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                42924
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    42925
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       42926
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                42927
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    42928
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       42929
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                42930
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    42931
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       42932
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                42933
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    42934
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       42935
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                42936
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    42937
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       42938
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                42939
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    42940
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       42941
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                42942
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   42943
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      42944
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               42945
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   42946
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      42947
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               42948
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   42949
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      42950
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               42951
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   42952
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      42953
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               42954
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   42955
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      42956
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               42957
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   42958
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      42959
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               42960
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   42961
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      42962
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               42963
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   42964
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      42965
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               42966
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   42967
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      42968
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               42969
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   42970
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      42971
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               42972
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ20_RT_OFFSET                   42973
-#define PBF_REG_BTB_GUARANTEED_VOQ20_RT_OFFSET                      42974
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ20_RT_OFFSET               42975
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ21_RT_OFFSET                   42976
-#define PBF_REG_BTB_GUARANTEED_VOQ21_RT_OFFSET                      42977
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ21_RT_OFFSET               42978
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ22_RT_OFFSET                   42979
-#define PBF_REG_BTB_GUARANTEED_VOQ22_RT_OFFSET                      42980
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ22_RT_OFFSET               42981
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ23_RT_OFFSET                   42982
-#define PBF_REG_BTB_GUARANTEED_VOQ23_RT_OFFSET                      42983
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ23_RT_OFFSET               42984
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ24_RT_OFFSET                   42985
-#define PBF_REG_BTB_GUARANTEED_VOQ24_RT_OFFSET                      42986
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ24_RT_OFFSET               42987
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ25_RT_OFFSET                   42988
-#define PBF_REG_BTB_GUARANTEED_VOQ25_RT_OFFSET                      42989
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ25_RT_OFFSET               42990
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ26_RT_OFFSET                   42991
-#define PBF_REG_BTB_GUARANTEED_VOQ26_RT_OFFSET                      42992
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ26_RT_OFFSET               42993
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ27_RT_OFFSET                   42994
-#define PBF_REG_BTB_GUARANTEED_VOQ27_RT_OFFSET                      42995
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ27_RT_OFFSET               42996
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ28_RT_OFFSET                   42997
-#define PBF_REG_BTB_GUARANTEED_VOQ28_RT_OFFSET                      42998
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ28_RT_OFFSET               42999
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ29_RT_OFFSET                   43000
-#define PBF_REG_BTB_GUARANTEED_VOQ29_RT_OFFSET                      43001
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ29_RT_OFFSET               43002
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ30_RT_OFFSET                   43003
-#define PBF_REG_BTB_GUARANTEED_VOQ30_RT_OFFSET                      43004
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ30_RT_OFFSET               43005
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ31_RT_OFFSET                   43006
-#define PBF_REG_BTB_GUARANTEED_VOQ31_RT_OFFSET                      43007
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ31_RT_OFFSET               43008
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ32_RT_OFFSET                   43009
-#define PBF_REG_BTB_GUARANTEED_VOQ32_RT_OFFSET                      43010
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ32_RT_OFFSET               43011
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ33_RT_OFFSET                   43012
-#define PBF_REG_BTB_GUARANTEED_VOQ33_RT_OFFSET                      43013
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ33_RT_OFFSET               43014
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ34_RT_OFFSET                   43015
-#define PBF_REG_BTB_GUARANTEED_VOQ34_RT_OFFSET                      43016
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ34_RT_OFFSET               43017
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ35_RT_OFFSET                   43018
-#define PBF_REG_BTB_GUARANTEED_VOQ35_RT_OFFSET                      43019
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ35_RT_OFFSET               43020
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                43021
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET                           34396
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET                           34397
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET                           34398
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET                       34399
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET                       34400
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET                       34401
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET                       34402
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET                    34403
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET                    34404
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET                    34405
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET                    34406
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET                        34407
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET                     34408
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET                           34409
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET                      34410
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET                    34411
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET                       34412
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET                34413
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET                    34414
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET                       34415
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET                34416
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET                    34417
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET                       34418
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET                34419
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET                    34420
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET                       34421
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET                34422
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET                    34423
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET                       34424
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET                34425
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET                    34426
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET                       34427
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET                34428
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET                    34429
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET                       34430
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET                34431
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET                    34432
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET                       34433
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET                34434
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET                    34435
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET                       34436
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET                34437
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET                    34438
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET                       34439
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET                34440
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET                   34441
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET                      34442
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET               34443
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET                   34444
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET                      34445
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET               34446
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET                   34447
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET                      34448
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET               34449
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET                   34450
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET                      34451
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET               34452
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET                   34453
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET                      34454
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET               34455
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET                   34456
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET                      34457
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET               34458
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET                   34459
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET                      34460
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET               34461
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET                   34462
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET                      34463
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET               34464
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET                   34465
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET                      34466
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET               34467
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET                   34468
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET                      34469
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET               34470
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET                                34471
 
-#define RUNTIME_ARRAY_SIZE 43022
+#define RUNTIME_ARRAY_SIZE 34472
 
 /* Init Callbacks */
 #define DMAE_READY_CB                                               0
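
For context on how these generated definitions are consumed: each *_RT_OFFSET indexes a runtime-init shadow array of RUNTIME_ARRAY_SIZE u32 entries, and each *_RT_SIZE gives the length of a contiguous block, which is why every offset after a resized block (e.g. QM_REG_WFQPFCRD_RT_SIZE going from 256 to 160) shifts in this update. Below is a minimal sketch of that pattern; the struct and helper names (rt_shadow, store_rt_reg, store_rt_block) are illustrative only, not the base driver's actual API - the real bookkeeping lives in the ecore init code.

#include <stdbool.h>

typedef unsigned int u32;                 /* normally provided by the OSAL layer */

#define RUNTIME_ARRAY_SIZE            34472   /* from ecore_rt_defs.h above */
#define QM_REG_RLGLBLINCVAL_RT_OFFSET 29517
#define QM_REG_RLGLBLINCVAL_RT_SIZE   256

/* Illustrative runtime-init shadow: one value plus one valid flag per RT slot. */
struct rt_shadow {
	u32  init_val[RUNTIME_ARRAY_SIZE];
	bool b_valid[RUNTIME_ARRAY_SIZE];
};

/* Record a single runtime register value at its RT offset. */
static void store_rt_reg(struct rt_shadow *rt, u32 rt_offset, u32 val)
{
	rt->init_val[rt_offset] = val;
	rt->b_valid[rt_offset] = true;
}

/* Record a contiguous block, e.g. the 256-entry QM_REG_RLGLBLINCVAL array. */
static void store_rt_block(struct rt_shadow *rt, u32 rt_offset,
			   const u32 *vals, u32 size)
{
	u32 i;

	for (i = 0; i < size; i++)
		store_rt_reg(rt, rt_offset + i, vals[i]);
}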
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 13c2e2d11..98b9723dd 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -18,6 +18,15 @@
 #define MCP_PUBLIC_H
 
 #define VF_MAX_STATIC 192	/* In case of AH */
+#define VF_BITMAP_SIZE_IN_DWORDS        (VF_MAX_STATIC / 32)
+#define VF_BITMAP_SIZE_IN_BYTES         (VF_BITMAP_SIZE_IN_DWORDS * sizeof(u32))
+
+/* Extended array size to support 240 VFs (8 dwords) */
+#define EXT_VF_MAX_STATIC               240
+#define EXT_VF_BITMAP_SIZE_IN_DWORDS    (((EXT_VF_MAX_STATIC - 1) / 32) + 1)
+#define EXT_VF_BITMAP_SIZE_IN_BYTES     (EXT_VF_BITMAP_SIZE_IN_DWORDS * \
+					 sizeof(u32))
+#define ADDED_VF_BITMAP_SIZE 2
 
 #define MCP_GLOB_PATH_MAX	2
 #define MCP_PORT_MAX		2	/* Global */
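
To make the new sizing concrete: the legacy bitmap covers 192 VFs in 192 / 32 = 6 dwords, while the extended bitmap rounds 240 VFs up to ((240 - 1) / 32) + 1 = 8 dwords, i.e. ADDED_VF_BITMAP_SIZE = 2 extra dwords, appended as mcp_vf_disabled2 in the next hunk. A small self-contained sketch of the arithmetic and of per-VF bit access follows; vf_bitmap_set()/vf_bitmap_test() are illustrative helpers, not driver API.

#include <assert.h>

typedef unsigned int u32;                          /* normally from the OSAL layer */

#define VF_MAX_STATIC                192
#define VF_BITMAP_SIZE_IN_DWORDS     (VF_MAX_STATIC / 32)                  /* 6 */
#define EXT_VF_MAX_STATIC            240
#define EXT_VF_BITMAP_SIZE_IN_DWORDS (((EXT_VF_MAX_STATIC - 1) / 32) + 1)  /* 8 */
#define ADDED_VF_BITMAP_SIZE         2             /* the 2 extra dwords */

/* Illustrative helpers: set/test one VF's bit in a dword bitmap. */
static void vf_bitmap_set(u32 *bitmap, u32 vf_id)
{
	bitmap[vf_id / 32] |= 1U << (vf_id % 32);
}

static int vf_bitmap_test(const u32 *bitmap, u32 vf_id)
{
	return !!(bitmap[vf_id / 32] & (1U << (vf_id % 32)));
}

int main(void)
{
	u32 disabled[EXT_VF_BITMAP_SIZE_IN_DWORDS] = { 0 };

	/* 6 legacy dwords + 2 added dwords cover the extended 240-VF range. */
	assert(VF_BITMAP_SIZE_IN_DWORDS + ADDED_VF_BITMAP_SIZE ==
	       EXT_VF_BITMAP_SIZE_IN_DWORDS);

	vf_bitmap_set(disabled, EXT_VF_MAX_STATIC - 1);   /* highest VF, bit 239 */
	return !vf_bitmap_test(disabled, EXT_VF_MAX_STATIC - 1);
}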
@@ -591,6 +600,8 @@ struct public_path {
 #define PROCESS_KILL_GLOB_AEU_BIT_MASK		0xffff0000
 #define PROCESS_KILL_GLOB_AEU_BIT_OFFSET	16
 #define GLOBAL_AEU_BIT(aeu_reg_id, aeu_bit) (aeu_reg_id * 32 + aeu_bit)
+	/* Added to support E5 240 VFs */
+	u32 mcp_vf_disabled2[ADDED_VF_BITMAP_SIZE];
 };
 
 /**************************************/
@@ -1270,6 +1281,12 @@ struct public_drv_mb {
 /* params [31:8] - reserved, [7:0] - bitmap */
 #define DRV_MSG_CODE_GET_PPFID_BITMAP		0x43000000
 
+/* Param: [0:15] Option ID, [16] - All, [17] - Init, [18] - Commit,
+ * [19] - Free
+ */
+#define DRV_MSG_CODE_GET_NVM_CFG_OPTION		0x003e0000
+/* Param: [0:15] Option ID,             [17] - Init, [18]       , [19] - Free */
+#define DRV_MSG_CODE_SET_NVM_CFG_OPTION		0x003f0000
 /*deprecated don't use*/
 #define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED    0x02000000
 #define DRV_MSG_CODE_INITIATE_PF_FLR            0x02010000
@@ -1317,6 +1334,7 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_PHY_CORE_WRITE		0x000e0000
 /* Param: [0:3] - version, [4:15] - name (null terminated) */
 #define DRV_MSG_CODE_SET_VERSION		0x000f0000
+#define DRV_MSG_CODE_MCP_RESET_FORCE		0x000f04ce
 /* Halts the MCP. To resume MCP, user will need to use
  * MCP_REG_CPU_STATE/MCP_REG_CPU_MODE registers.
  */
@@ -1607,6 +1625,9 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_SET_LED_MODE_OPER		0x0
 #define DRV_MB_PARAM_SET_LED_MODE_ON		0x1
 #define DRV_MB_PARAM_SET_LED_MODE_OFF		0x2
+#define DRV_MB_PARAM_SET_LED1_MODE_ON		0x3
+#define DRV_MB_PARAM_SET_LED2_MODE_ON		0x4
+#define DRV_MB_PARAM_SET_ACT_LED_MODE_ON	0x6
 
 #define DRV_MB_PARAM_TRANSCEIVER_PORT_OFFSET		0
 #define DRV_MB_PARAM_TRANSCEIVER_PORT_MASK		0x00000003
@@ -1664,8 +1685,32 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_ATTRIBUTE_CMD_OFFSET		24
 #define DRV_MB_PARAM_ATTRIBUTE_CMD_MASK		0xFF000000
 
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET		0
+/* Option# */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK		0x0000FFFF
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ALL_OFFSET		16
+/* (Only for Set) Applies option's value to all entities (port/func)
+ * depending on the option type
+ */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_ALL_MASK		0x00010000
+#define DRV_MB_PARAM_NVM_CFG_OPTION_INIT_OFFSET		17
+/* When set, and state is IDLE, MFW will allocate resources and load
+ * configuration from NVM
+ */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK		0x00020000
+#define DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_OFFSET	18
+/* (Only for Set) - When set, submit changed nvm_cfg1 to flash */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK		0x00040000
+#define DRV_MB_PARAM_NVM_CFG_OPTION_FREE_OFFSET		19
+/* Free - When set, free allocated resources, and return to IDLE state. */
+#define DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK		0x00080000
+#define SINGLE_NVM_WR_OP(optionId) \
+	((((optionId) & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) << \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) | \
+	 (DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK | \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK | \
+	  DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK))
 	u32 fw_mb_header;
-#define FW_MSG_CODE_MASK                        0xffff0000
 #define FW_MSG_CODE_UNSUPPORTED			0x00000000
 #define FW_MSG_CODE_DRV_LOAD_ENGINE		0x10100000
 #define FW_MSG_CODE_DRV_LOAD_PORT               0x10110000
@@ -1704,6 +1749,12 @@ struct public_drv_mb {
 #define FW_MSG_CODE_NIG_DRAIN_DONE              0x30000000
 #define FW_MSG_CODE_VF_DISABLED_DONE            0xb0000000
 #define FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE        0xb0010000
+#define FW_MSG_CODE_ERR_RESOURCE_TEMPORARY_UNAVAILABLE	0x008b0000
+#define FW_MSG_CODE_ERR_RESOURCE_ALREADY_ALLOCATED	0x008c0000
+#define FW_MSG_CODE_ERR_RESOURCE_NOT_ALLOCATED		0x008d0000
+#define FW_MSG_CODE_ERR_NON_USER_OPTION			0x008e0000
+#define FW_MSG_CODE_ERR_UNKNOWN_OPTION			0x008f0000
+#define FW_MSG_CODE_WAIT				0x00900000
 #define FW_MSG_CODE_FLR_ACK                     0x02000000
 #define FW_MSG_CODE_FLR_NACK                    0x02100000
 #define FW_MSG_CODE_SET_DRIVER_DONE		0x02200000
@@ -1783,11 +1834,13 @@ struct public_drv_mb {
 #define FW_MSG_CODE_WOL_READ_BUFFER_OK		0x00850000
 #define FW_MSG_CODE_WOL_READ_BUFFER_INVALID_VAL	0x00860000
 
-#define FW_MSG_SEQ_NUMBER_MASK                  0x0000ffff
-
 #define FW_MSG_CODE_ATTRIBUTE_INVALID_KEY	0x00020000
 #define FW_MSG_CODE_ATTRIBUTE_INVALID_CMD	0x00030000
 
+#define FW_MSG_SEQ_NUMBER_MASK			0x0000ffff
+#define FW_MSG_SEQ_NUMBER_OFFSET		0
+#define FW_MSG_CODE_MASK			0xffff0000
+#define FW_MSG_CODE_OFFSET			16
 	u32 fw_mb_param;
 /* Resource Allocation params - MFW  version support */
 #define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK	0xFFFF0000
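
The new NVM-config option bits and the relocated FW_MSG_CODE/SEQ masks are typically used together when driving the mailbox: the driver composes drv_mb_param for DRV_MSG_CODE_SET_NVM_CFG_OPTION (the SINGLE_NVM_WR_OP() macro above bundles Init | Commit | Free for a one-shot write) and then splits the fw_mb_header reply into a response code and a sequence number. A minimal sketch with illustrative helper names (single_nvm_wr_param, decode_fw_mb_header); the actual command submission and polling are handled by the base driver's MCP code in ecore_mcp.c.

typedef unsigned int u32;                          /* normally from the OSAL layer */

/* Values taken from the mcp_public.h hunks above. */
#define DRV_MSG_CODE_SET_NVM_CFG_OPTION          0x003f0000
#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET    0
#define DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK      0x0000FFFF
#define DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK    0x00020000
#define DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK  0x00040000
#define DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK    0x00080000
#define FW_MSG_CODE_MASK                         0xffff0000
#define FW_MSG_SEQ_NUMBER_MASK                   0x0000ffff

/* One-shot NVM cfg write param: option ID plus Init | Commit | Free. */
static u32 single_nvm_wr_param(u32 option_id)
{
	return ((option_id & DRV_MB_PARAM_NVM_CFG_OPTION_ID_MASK) <<
		DRV_MB_PARAM_NVM_CFG_OPTION_ID_OFFSET) |
	       DRV_MB_PARAM_NVM_CFG_OPTION_INIT_MASK |
	       DRV_MB_PARAM_NVM_CFG_OPTION_COMMIT_MASK |
	       DRV_MB_PARAM_NVM_CFG_OPTION_FREE_MASK;
}

/* Split the fw_mb_header reply: the code is kept in the upper 16 bits so it
 * can be compared against the FW_MSG_CODE_* defines directly; the lower 16
 * bits carry the command sequence number.
 */
static void decode_fw_mb_header(u32 fw_mb_header, u32 *code, u32 *seq)
{
	*code = fw_mb_header & FW_MSG_CODE_MASK;
	*seq  = fw_mb_header & FW_MSG_SEQ_NUMBER_MASK;
}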
diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index ab86260ed..daa5437dd 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -11,20 +11,21 @@
  * Description: NVM config file - Generated file from nvm cfg excel.
  *              DO NOT MODIFY !!!
  *
- * Created:     5/8/2017
+ * Created:     1/6/2019
  *
  ****************************************************************************/
 
 #ifndef NVM_CFG_H
 #define NVM_CFG_H
 
-#define NVM_CFG_version 0x83000
 
-#define NVM_CFG_new_option_seq 23
+#define NVM_CFG_version 0x84500
 
-#define NVM_CFG_removed_option_seq 1
+#define NVM_CFG_new_option_seq 45
 
-#define NVM_CFG_updated_value_seq 4
+#define NVM_CFG_removed_option_seq 4
+
+#define NVM_CFG_updated_value_seq 13
 
 struct nvm_cfg_mac_address {
 	u32 mac_addr_hi;
@@ -54,6 +55,7 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_MF_MODE_NPAR2_0 0x5
 		#define NVM_CFG1_GLOB_MF_MODE_BD 0x6
 		#define NVM_CFG1_GLOB_MF_MODE_UFP 0x7
+		#define NVM_CFG1_GLOB_MF_MODE_DCI_NPAR 0x8
 		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK 0x00001000
 		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_OFFSET 12
 		#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_DISABLED 0x0
@@ -153,6 +155,7 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X25G 0xD
 		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G 0xE
 		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X10G 0xF
+		#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G_LIO2 0x10
 		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_MASK 0x00000100
 		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_OFFSET 8
 		#define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_DISABLED 0x0
@@ -510,6 +513,18 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_OFFSET 28
 		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_DISABLED 0x0
 		#define NVM_CFG1_GLOB_VOLTAGE_REGULATOR_TYPE_TI 0x1
+	/*  Enable/Disable PCIE Relaxed Ordering */
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_MASK 0x40000000
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_OFFSET 30
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_DISABLED 0x0
+		#define NVM_CFG1_GLOB_PCIE_RELAXED_ORDERING_ENABLED 0x1
+	/*  Reset the chip using iPOR to release PCIe due to short PERST
+	 *  issues
+	 */
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_MASK 0x80000000
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_OFFSET 31
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_DISABLED 0x0
+		#define NVM_CFG1_GLOB_SHORT_PERST_PROTECTION_ENABLED 0x1
 	u32 led_global_settings; /* 0x74 */
 		#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
 		#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
@@ -590,6 +605,11 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_RUNTIME_PORT2_SWAP_MAP_OFFSET 27
 		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_MASK 0x60000000
 		#define NVM_CFG1_GLOB_RUNTIME_PORT3_SWAP_MAP_OFFSET 29
+	/*  Option to Disable embedded LLDP, 0 - Off, 1 - On */
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_MASK 0x80000000
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_OFFSET 31
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_OFF 0x0
+		#define NVM_CFG1_GLOB_LLDP_DISABLE_ON 0x1
 	u32 mbi_version; /* 0x7C */
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
 		#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
@@ -1037,13 +1057,308 @@ struct nvm_cfg1_glob {
 		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO29 0x1E
 		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO30 0x1F
 		#define NVM_CFG1_GLOB_THERMAL_ALARM_GPIO_GPIO31 0x20
+	/*  Select the number of allowed port links in aux power */
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_MASK 0x00000300
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_OFFSET 8
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_DEFAULT 0x0
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_1_PORT 0x1
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_2_PORTS 0x2
+		#define NVM_CFG1_GLOB_NCSI_AUX_LINK_3_PORTS 0x3
+	/*  Set Trace Filter Log Level */
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_MASK 0x00000C00
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_OFFSET 10
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_ALL 0x0
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_DEBUG 0x1
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_TRACE 0x2
+		#define NVM_CFG1_GLOB_TRACE_LEVEL_ERROR 0x3
+	/*  For OCP2.0, MFW listens on SMBUS slave address 0x3e, and returns
+	 *  the temperature reading
+	 */
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_MASK 0x00001000
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_OFFSET 12
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_DISABLED 0x0
+		#define NVM_CFG1_GLOB_EMULATED_TMP421_ENABLED 0x1
+	/*  GPIO which triggers when ASIC temperature reaches nvm option 286
+	 *  value
+	 */
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_MASK 0x001FE000
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_OFFSET 13
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_GPIO_GPIO31 0x20
+	/*  Warning temperature threshold used with nvm option 286 */
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_MASK 0x1FE00000
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_THRESHOLD_OFFSET 21
+	/*  Disable PLDM protocol */
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_MASK 0x20000000
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_OFFSET 29
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_DISABLED 0x0
+		#define NVM_CFG1_GLOB_DISABLE_PLDM_ENABLED 0x1
+	/*  Disable OCBB protocol */
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_MASK 0x40000000
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_OFFSET 30
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_DISABLED 0x0
+		#define NVM_CFG1_GLOB_DISABLE_MCTP_OEM_ENABLED 0x1
 	u32 preboot_debug_mode_std; /* 0x140 */
 	u32 preboot_debug_mode_ext; /* 0x144 */
 	u32 ext_phy_cfg1; /* 0x148 */
 	/*  Ext PHY MDI pair swap value */
-		#define NVM_CFG1_GLOB_EXT_PHY_MDI_PAIR_SWAP_MASK 0x0000FFFF
-		#define NVM_CFG1_GLOB_EXT_PHY_MDI_PAIR_SWAP_OFFSET 0
-	u32 reserved[55]; /* 0x14C */
+		#define NVM_CFG1_GLOB_RESERVED_244_MASK 0x0000FFFF
+		#define NVM_CFG1_GLOB_RESERVED_244_OFFSET 0
+	/*  Define for PGOOD signal mapping for EXT PHY */
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_OFFSET 16
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_NA 0x0
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO0 0x1
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO1 0x2
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO2 0x3
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO3 0x4
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO4 0x5
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO5 0x6
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO6 0x7
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO7 0x8
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO8 0x9
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO9 0xA
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO10 0xB
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO11 0xC
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO12 0xD
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO13 0xE
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO14 0xF
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO15 0x10
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO16 0x11
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO17 0x12
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO18 0x13
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO19 0x14
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO20 0x15
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO21 0x16
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO22 0x17
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO23 0x18
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO24 0x19
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_EXT_PHY_PGOOD_GPIO31 0x20
+	/*  GPIO which triggers when PERST is asserted */
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_OFFSET 24
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_NA 0x0
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO0 0x1
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO1 0x2
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO2 0x3
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO3 0x4
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO4 0x5
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO5 0x6
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO6 0x7
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO7 0x8
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO8 0x9
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO9 0xA
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO10 0xB
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO11 0xC
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO12 0xD
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO13 0xE
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO14 0xF
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO15 0x10
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO16 0x11
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO17 0x12
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO18 0x13
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO19 0x14
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO20 0x15
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO21 0x16
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO22 0x17
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO23 0x18
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO24 0x19
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO25 0x1A
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO26 0x1B
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO27 0x1C
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO28 0x1D
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO29 0x1E
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO30 0x1F
+		#define NVM_CFG1_GLOB_PERST_INDICATION_GPIO_GPIO31 0x20
+	u32 clocks; /* 0x14C */
+	/*  Sets core clock frequency */
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_OFFSET 0
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_DEFAULT 0x0
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_375 0x1
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_350 0x2
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_325 0x3
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_300 0x4
+		#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_280 0x5
+	/*  Sets MAC clock frequency */
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_OFFSET 8
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_DEFAULT 0x0
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_782 0x1
+		#define NVM_CFG1_GLOB_MAC_CLOCK_FREQUENCY_MAC_CLK_516 0x2
+	/*  Sets storm clock frequency */
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_OFFSET 16
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_DEFAULT 0x0
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1200 0x1
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1000 0x2
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_900 0x3
+		#define NVM_CFG1_GLOB_STORM_CLOCK_FREQUENCY_STORM_CLK_1100 0x4
+	/*  A non-zero value will override the PCIe AGC threshold to improve
+	 *  the receiver
+	 */
+		#define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_OVERRIDE_AGC_THRESHOLD_OFFSET 24
+	u32 pre2_generic_cont_1; /* 0x150 */
+		#define NVM_CFG1_GLOB_50G_HLPC_PRE2_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_50G_HLPC_PRE2_OFFSET 0
+		#define NVM_CFG1_GLOB_50G_MLPC_PRE2_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_50G_MLPC_PRE2_OFFSET 8
+		#define NVM_CFG1_GLOB_50G_LLPC_PRE2_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_50G_LLPC_PRE2_OFFSET 16
+		#define NVM_CFG1_GLOB_25G_HLPC_PRE2_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_25G_HLPC_PRE2_OFFSET 24
+	u32 pre2_generic_cont_2; /* 0x154 */
+		#define NVM_CFG1_GLOB_25G_LLPC_PRE2_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_25G_LLPC_PRE2_OFFSET 0
+		#define NVM_CFG1_GLOB_25G_AC_PRE2_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_25G_AC_PRE2_OFFSET 8
+		#define NVM_CFG1_GLOB_10G_PC_PRE2_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_10G_PC_PRE2_OFFSET 16
+		#define NVM_CFG1_GLOB_PRE2_10G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_PRE2_10G_AC_OFFSET 24
+	u32 pre2_generic_cont_3; /* 0x158 */
+		#define NVM_CFG1_GLOB_1G_PRE2_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_1G_PRE2_OFFSET 0
+		#define NVM_CFG1_GLOB_5G_BT_PRE2_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_5G_BT_PRE2_OFFSET 8
+		#define NVM_CFG1_GLOB_10G_BT_PRE2_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_10G_BT_PRE2_OFFSET 16
+	/*  When temperature goes below (warning temperature - delta), the
+	 *  warning gpio is unset
+	 */
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_WARNING_TEMPERATURE_DELTA_OFFSET 24
+	u32 tx_rx_eq_50g_hlpc; /* 0x15C */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_HLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_HLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_HLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_HLPC_OFFSET 24
+	u32 tx_rx_eq_50g_mlpc; /* 0x160 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_MLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_MLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_MLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_MLPC_OFFSET 24
+	u32 tx_rx_eq_50g_llpc; /* 0x164 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_LLPC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_LLPC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_LLPC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_LLPC_OFFSET 24
+	u32 tx_rx_eq_50g_ac; /* 0x168 */
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_MASK 0x000000FF
+		#define NVM_CFG1_GLOB_INDEX0_RX_TX_EQ_50G_AC_OFFSET 0
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_MASK 0x0000FF00
+		#define NVM_CFG1_GLOB_INDEX1_RX_TX_EQ_50G_AC_OFFSET 8
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_INDEX2_RX_TX_EQ_50G_AC_OFFSET 16
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_INDEX3_RX_TX_EQ_50G_AC_OFFSET 24
+	/*  Set Trace Filter Modules Log Bit Mask */
+	u32 trace_modules; /* 0x16C */
+		#define NVM_CFG1_GLOB_TRACE_MODULES_ERROR 0x1
+		#define NVM_CFG1_GLOB_TRACE_MODULES_DBG 0x2
+		#define NVM_CFG1_GLOB_TRACE_MODULES_DRV_HSI 0x4
+		#define NVM_CFG1_GLOB_TRACE_MODULES_INTERRUPT 0x8
+		#define NVM_CFG1_GLOB_TRACE_MODULES_VPD 0x10
+		#define NVM_CFG1_GLOB_TRACE_MODULES_FLR 0x20
+		#define NVM_CFG1_GLOB_TRACE_MODULES_INIT 0x40
+		#define NVM_CFG1_GLOB_TRACE_MODULES_NVM 0x80
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PIM 0x100
+		#define NVM_CFG1_GLOB_TRACE_MODULES_NET 0x200
+		#define NVM_CFG1_GLOB_TRACE_MODULES_POWER 0x400
+		#define NVM_CFG1_GLOB_TRACE_MODULES_UTILS 0x800
+		#define NVM_CFG1_GLOB_TRACE_MODULES_RESOURCES 0x1000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_SCHEDULER 0x2000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PHYMOD 0x4000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_EVENTS 0x8000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PMM 0x10000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_DBG_DRV 0x20000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_ETH 0x40000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_SECURITY 0x80000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_PCIE 0x100000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_TRACE 0x200000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_MANAGEMENT 0x400000
+		#define NVM_CFG1_GLOB_TRACE_MODULES_SIM 0x800000
+	u32 pcie_class_code_fcoe; /* 0x170 */
+	/*  Set PCIe FCoE Class Code */
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_MASK 0x00FFFFFF
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_FCOE_OFFSET 0
+	/*  When temperature goes below (ALOM FAN ON AUX value - delta), the
+	 *  ALOM FAN ON AUX gpio is unset
+	 */
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_ALOM_FAN_ON_AUX_DELTA_OFFSET 24
+	u32 pcie_class_code_iscsi; /* 0x174 */
+	/*  Set PCIe iSCSI Class Code */
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_MASK 0x00FFFFFF
+		#define NVM_CFG1_GLOB_PCIE_CLASS_CODE_ISCSI_OFFSET 0
+	/*  When temperature goes below (Dead Temp TH - delta), the Thermal
+	 *  Event gpio is unset
+	 */
+		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_MASK 0xFF000000
+		#define NVM_CFG1_GLOB_DEAD_TEMP_TH_DELTA_OFFSET 24
+	u32 no_provisioned_mac; /* 0x178 */
+	/*  Set number of provisioned MAC addresses */
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_MASK 0x0000FFFF
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_MAC_OFFSET 0
+	/*  Set number of provisioned VF MAC addresses */
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_MASK 0x00FF0000
+		#define NVM_CFG1_GLOB_NUMBER_OF_PROVISIONED_VF_MAC_OFFSET 16
+	/*  Enable/Disable BMC MAC */
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_MASK 0x01000000
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_OFFSET 24
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_DISABLED 0x0
+		#define NVM_CFG1_GLOB_PROVISIONED_BMC_MAC_ENABLED 0x1
+	u32 reserved[43]; /* 0x17C */
 };
 
 struct nvm_cfg1_path {
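
All of the new nvm_cfg1 fields above follow the same MASK/OFFSET convention, so reading or updating an option is a shift-and-mask on the containing dword. A minimal sketch using the 'clocks' field (0x14C) from the hunk above; nvm_cfg_get()/nvm_cfg_set() and main_clk_sel() are illustrative helpers, not part of the header.

typedef unsigned int u32;                          /* normally from the OSAL layer */

/* Values taken from the nvm_cfg.h hunk above (u32 clocks, offset 0x14C). */
#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MASK             0x000000FF
#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_OFFSET           0
#define NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MAIN_CLK_DEFAULT 0x0

/* Illustrative accessors for any nvm_cfg1 MASK/OFFSET pair. */
static u32 nvm_cfg_get(u32 reg, u32 mask, u32 offset)
{
	return (reg & mask) >> offset;
}

static u32 nvm_cfg_set(u32 reg, u32 mask, u32 offset, u32 val)
{
	return (reg & ~mask) | ((val << offset) & mask);
}

/* Example: read the core clock selector from the 'clocks' dword. */
static u32 main_clk_sel(u32 clocks)
{
	return nvm_cfg_get(clocks, NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_MASK,
			   NVM_CFG1_GLOB_MAIN_CLOCK_FREQUENCY_OFFSET);
}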
@@ -1073,6 +1388,10 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_LED_MODE_PHY11 0xE
 		#define NVM_CFG1_PORT_LED_MODE_PHY12 0xF
 		#define NVM_CFG1_PORT_LED_MODE_BREAKOUT 0x10
+		#define NVM_CFG1_PORT_LED_MODE_OCP_3_0 0x11
+		#define NVM_CFG1_PORT_LED_MODE_OCP_3_0_MAC2 0x12
+		#define NVM_CFG1_PORT_LED_MODE_SW_DEF1 0x13
+		#define NVM_CFG1_PORT_LED_MODE_SW_DEF1_MAC2 0x14
 		#define NVM_CFG1_PORT_ROCE_PRIORITY_MASK 0x0000FF00
 		#define NVM_CFG1_PORT_ROCE_PRIORITY_OFFSET 8
 		#define NVM_CFG1_PORT_DCBX_MODE_MASK 0x000F0000
@@ -1220,6 +1539,16 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_OFFSET 24
 		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_DISABLED 0x0
 		#define NVM_CFG1_PORT_RESERVED_WAS_MFW_SMARTLINQ_ENABLED 0x1
+	/*  Enable/Disable RX PAM-4 precoding */
+		#define NVM_CFG1_PORT_RX_PRECODE_MASK 0x02000000
+		#define NVM_CFG1_PORT_RX_PRECODE_OFFSET 25
+		#define NVM_CFG1_PORT_RX_PRECODE_DISABLED 0x0
+		#define NVM_CFG1_PORT_RX_PRECODE_ENABLED 0x1
+	/*  Enable/Disable TX PAM-4 precoding */
+		#define NVM_CFG1_PORT_TX_PRECODE_MASK 0x04000000
+		#define NVM_CFG1_PORT_TX_PRECODE_OFFSET 26
+		#define NVM_CFG1_PORT_TX_PRECODE_DISABLED 0x0
+		#define NVM_CFG1_PORT_TX_PRECODE_ENABLED 0x1
 	u32 phy_cfg; /* 0x1C */
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
 		#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
@@ -1261,6 +1590,7 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_NONE 0x0
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM8485X 0x1
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM5422X 0x2
+		#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_88X33X0 0x3
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_MASK 0x0000FF00
 		#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_OFFSET 8
 	/*  EEE power saving mode */
@@ -1337,6 +1667,13 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_AH_50G 0x10
 		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_50G 0x20
 		#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_100G 0x40
+	/*  UID LED Blink Mode Settings */
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_MASK 0x0F000000
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_OFFSET 24
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_ACTIVITY_LED 0x1
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED0 0x2
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED1 0x4
+		#define NVM_CFG1_PORT_UID_LED_MODE_MASK_LINK_LED2 0x8
 	u32 transceiver_00; /* 0x40 */
 	/*  Define for mapping of transceiver signal module absent */
 		#define NVM_CFG1_PORT_TRANS_MODULE_ABS_MASK 0x000000FF
@@ -1379,6 +1716,11 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_OFFSET 8
 		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_MASK 0x0000F000
 		#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_OFFSET 12
+	/*  Option to override SmartAN FEC requirements */
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_MASK 0x00010000
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_OFFSET 16
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_DISABLED 0x0
+		#define NVM_CFG1_PORT_SMARTAN_FEC_OVERRIDE_ENABLED 0x1
 	u32 device_ids; /* 0x44 */
 		#define NVM_CFG1_PORT_ETH_DID_SUFFIX_MASK 0x000000FF
 		#define NVM_CFG1_PORT_ETH_DID_SUFFIX_OFFSET 0
@@ -1840,7 +2182,289 @@ struct nvm_cfg1_port {
 		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_MASK \
 			0x0000FF00
 		#define NVM_CFG1_PORT_PHY_MODULE_ALOM_FAN_ON_TEMP_TH_OFFSET 8
-	u32 reserved[115]; /* 0x8C */
+	/*  Warning temperature threshold used with nvm option 235 */
+		#define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_MASK 0x00FF0000
+		#define NVM_CFG1_PORT_PHY_MODULE_WARNING_TEMP_TH_OFFSET 16
+	u32 ext_phy_cfg1; /* 0x8C */
+	/*  Ext PHY MDI pair swap value */
+		#define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_MASK 0x0000FFFF
+		#define NVM_CFG1_PORT_EXT_PHY_MDI_PAIR_SWAP_OFFSET 0
+	u32 extended_speed; /* 0x90 */
+	/*  Sets speed in conjunction with legacy speed field */
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_MASK 0x0000FFFF
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_OFFSET 0
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_NONE 0x1
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_1G 0x2
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_10G 0x4
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_25G 0x8
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_40G 0x10
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R 0x20
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_50G_R2 0x40
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R2 0x80
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_R4 0x100
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_EXTND_SPD_100G_P4 0x200
+	/*  Sets speed capabilities in conjunction with legacy capabilities
+	 *  field
+	 */
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_MASK 0xFFFF0000
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_OFFSET 16
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_NONE 0x1
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_1G 0x2
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_10G 0x4
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_25G 0x8
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_40G 0x10
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R 0x20
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_50G_R2 0x40
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R2 0x80
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_R4 0x100
+		#define NVM_CFG1_PORT_EXTENDED_SPEED_CAP_EXTND_SPD_100G_P4 0x200
+	/*  Set speed specific FEC setting in conjunction with legacy FEC
+	 *  mode
+	 */
+	u32 extended_fec_mode; /* 0x94 */
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_NONE 0x1
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_NONE 0x2
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_10G_BASE_R 0x4
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_NONE 0x8
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_BASE_R 0x10
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_25G_RS528 0x20
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_NONE 0x40
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_40G_BASE_R 0x80
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_NONE 0x100
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_BASE_R 0x200
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS528 0x400
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_50G_RS544 0x800
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_NONE 0x1000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_BASE_R 0x2000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS528 0x4000
+		#define NVM_CFG1_PORT_EXTENDED_FEC_MODE_EXTND_FEC_100G_RS544 0x8000
+	u32 port_generic_cont_01; /* 0x98 */
+	/*  Define for GPIO mapping of SFP Rate Select 0 */
+		#define NVM_CFG1_PORT_MODULE_RS0_MASK 0x000000FF
+		#define NVM_CFG1_PORT_MODULE_RS0_OFFSET 0
+		#define NVM_CFG1_PORT_MODULE_RS0_NA 0x0
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO0 0x1
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO1 0x2
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO2 0x3
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO3 0x4
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO4 0x5
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO5 0x6
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO6 0x7
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO7 0x8
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO8 0x9
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO9 0xA
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO10 0xB
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO11 0xC
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO12 0xD
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO13 0xE
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO14 0xF
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO15 0x10
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO16 0x11
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO17 0x12
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO18 0x13
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO19 0x14
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO20 0x15
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO21 0x16
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO22 0x17
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO23 0x18
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO24 0x19
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO25 0x1A
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO26 0x1B
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO27 0x1C
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO28 0x1D
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO29 0x1E
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO30 0x1F
+		#define NVM_CFG1_PORT_MODULE_RS0_GPIO31 0x20
+	/*  Define for GPIO mapping of SFP Rate Select 1 */
+		#define NVM_CFG1_PORT_MODULE_RS1_MASK 0x0000FF00
+		#define NVM_CFG1_PORT_MODULE_RS1_OFFSET 8
+		#define NVM_CFG1_PORT_MODULE_RS1_NA 0x0
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO0 0x1
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO1 0x2
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO2 0x3
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO3 0x4
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO4 0x5
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO5 0x6
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO6 0x7
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO7 0x8
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO8 0x9
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO9 0xA
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO10 0xB
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO11 0xC
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO12 0xD
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO13 0xE
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO14 0xF
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO15 0x10
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO16 0x11
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO17 0x12
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO18 0x13
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO19 0x14
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO20 0x15
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO21 0x16
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO22 0x17
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO23 0x18
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO24 0x19
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO25 0x1A
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO26 0x1B
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO27 0x1C
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO28 0x1D
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO29 0x1E
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO30 0x1F
+		#define NVM_CFG1_PORT_MODULE_RS1_GPIO31 0x20
+	/*  Define for GPIO mapping of SFP Module TX Fault */
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_MASK 0x00FF0000
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_OFFSET 16
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_NA 0x0
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO0 0x1
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO1 0x2
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO2 0x3
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO3 0x4
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO4 0x5
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO5 0x6
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO6 0x7
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO7 0x8
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO8 0x9
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO9 0xA
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO10 0xB
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO11 0xC
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO12 0xD
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO13 0xE
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO14 0xF
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO15 0x10
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO16 0x11
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO17 0x12
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO18 0x13
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO19 0x14
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO20 0x15
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO21 0x16
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO22 0x17
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO23 0x18
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO24 0x19
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO25 0x1A
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO26 0x1B
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO27 0x1C
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO28 0x1D
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO29 0x1E
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO30 0x1F
+		#define NVM_CFG1_PORT_MODULE_TX_FAULT_GPIO31 0x20
+	/*  Define for GPIO mapping of QSFP Reset signal */
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_MASK 0xFF000000
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_OFFSET 24
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_NA 0x0
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO0 0x1
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO1 0x2
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO2 0x3
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO3 0x4
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO4 0x5
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO5 0x6
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO6 0x7
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO7 0x8
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO8 0x9
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO9 0xA
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO10 0xB
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO11 0xC
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO12 0xD
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO13 0xE
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO14 0xF
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO15 0x10
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO16 0x11
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO17 0x12
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO18 0x13
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO19 0x14
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO20 0x15
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO21 0x16
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO22 0x17
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO23 0x18
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO24 0x19
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO25 0x1A
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO26 0x1B
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO27 0x1C
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO28 0x1D
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO29 0x1E
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO30 0x1F
+		#define NVM_CFG1_PORT_QSFP_MODULE_RESET_GPIO31 0x20
+	u32 port_generic_cont_02; /* 0x9C */
+	/*  Define for GPIO mapping of QSFP Transceiver LP mode */
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_MASK 0x000000FF
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_OFFSET 0
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_NA 0x0
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO0 0x1
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO1 0x2
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO2 0x3
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO3 0x4
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO4 0x5
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO5 0x6
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO6 0x7
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO7 0x8
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO8 0x9
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO9 0xA
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO10 0xB
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO11 0xC
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO12 0xD
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO13 0xE
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO14 0xF
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO15 0x10
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO16 0x11
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO17 0x12
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO18 0x13
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO19 0x14
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO20 0x15
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO21 0x16
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO22 0x17
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO23 0x18
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO24 0x19
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO25 0x1A
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO26 0x1B
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO27 0x1C
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO28 0x1D
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO29 0x1E
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO30 0x1F
+		#define NVM_CFG1_PORT_QSFP_MODULE_LP_MODE_GPIO31 0x20
+	/*  Define for GPIO mapping of Transceiver Power Enable */
+		#define NVM_CFG1_PORT_MODULE_POWER_MASK 0x0000FF00
+		#define NVM_CFG1_PORT_MODULE_POWER_OFFSET 8
+		#define NVM_CFG1_PORT_MODULE_POWER_NA 0x0
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO0 0x1
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO1 0x2
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO2 0x3
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO3 0x4
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO4 0x5
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO5 0x6
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO6 0x7
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO7 0x8
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO8 0x9
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO9 0xA
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO10 0xB
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO11 0xC
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO12 0xD
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO13 0xE
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO14 0xF
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO15 0x10
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO16 0x11
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO17 0x12
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO18 0x13
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO19 0x14
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO20 0x15
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO21 0x16
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO22 0x17
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO23 0x18
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO24 0x19
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO25 0x1A
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO26 0x1B
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO27 0x1C
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO28 0x1D
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO29 0x1E
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO30 0x1F
+		#define NVM_CFG1_PORT_MODULE_POWER_GPIO31 0x20
+	/*  Define for LASI Mapping of Interrupt from module or PHY */
+		#define NVM_CFG1_PORT_LASI_INTR_IN_MASK 0x000F0000
+		#define NVM_CFG1_PORT_LASI_INTR_IN_OFFSET 16
+		#define NVM_CFG1_PORT_LASI_INTR_IN_NA 0x0
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI0 0x1
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI1 0x2
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI2 0x3
+		#define NVM_CFG1_PORT_LASI_INTR_IN_LASI3 0x4
+	u32 reserved[110]; /* 0xA0 */
 };
 
 struct nvm_cfg1_func {
@@ -1874,7 +2498,6 @@ struct nvm_cfg1_func {
 		#define NVM_CFG1_FUNC_PERSONALITY_ETHERNET 0x0
 		#define NVM_CFG1_FUNC_PERSONALITY_ISCSI 0x1
 		#define NVM_CFG1_FUNC_PERSONALITY_FCOE 0x2
-		#define NVM_CFG1_FUNC_PERSONALITY_ROCE 0x3
 		#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK 0x7F800000
 		#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET 23
 		#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_MASK 0x80000000
@@ -1969,6 +2592,16 @@ struct nvm_cfg1 {
 /******************************************
  * nvm_cfg structs
  ******************************************/
+
+struct board_info {
+	u16 vendor_id;
+	u16 eth_did_suffix;
+	u16 sub_vendor_id;
+	u16 sub_device_id;
+	char *board_name;
+	char *friendly_name;
+};
+
 enum nvm_cfg_sections {
 	NVM_CFG_SECTION_NVM_CFG1,
 	NVM_CFG_SECTION_MAX
@@ -1980,4 +2613,260 @@ struct nvm_cfg {
 	struct nvm_cfg1 cfg1;
 };
 
+/******************************************
+ * nvm_cfg options
+ ******************************************/
+
+#define NVM_CFG_ID_MAC_ADDRESS                                       1
+#define NVM_CFG_ID_BOARD_SWAP                                        8
+#define NVM_CFG_ID_MF_MODE                                           9
+#define NVM_CFG_ID_LED_MODE                                          10
+#define NVM_CFG_ID_FAN_FAILURE_ENFORCEMENT                           11
+#define NVM_CFG_ID_ENGINEERING_CHANGE                                12
+#define NVM_CFG_ID_MANUFACTURING_ID                                  13
+#define NVM_CFG_ID_SERIAL_NUMBER                                     14
+#define NVM_CFG_ID_PCI_GEN                                           15
+#define NVM_CFG_ID_BEACON_WOL_ENABLED                                16
+#define NVM_CFG_ID_ASPM_SUPPORT                                      17
+#define NVM_CFG_ID_ROCE_PRIORITY                                     20
+#define NVM_CFG_ID_ENABLE_WOL_ON_ACPI_PATTERN                        22
+#define NVM_CFG_ID_MAGIC_PACKET_WOL                                  23
+#define NVM_CFG_ID_AVS_MARGIN_LOW_BB                                 24
+#define NVM_CFG_ID_AVS_MARGIN_HIGH_BB                                25
+#define NVM_CFG_ID_DCBX_MODE                                         26
+#define NVM_CFG_ID_DRV_SPEED_CAPABILITY_MASK                         27
+#define NVM_CFG_ID_MFW_SPEED_CAPABILITY_MASK                         28
+#define NVM_CFG_ID_DRV_LINK_SPEED                                    29
+#define NVM_CFG_ID_DRV_FLOW_CONTROL                                  30
+#define NVM_CFG_ID_MFW_LINK_SPEED                                    31
+#define NVM_CFG_ID_MFW_FLOW_CONTROL                                  32
+#define NVM_CFG_ID_OPTIC_MODULE_VENDOR_ENFORCEMENT                   33
+#define NVM_CFG_ID_OPTIONAL_LINK_MODES_BB                            34
+#define NVM_CFG_ID_MF_VENDOR_DEVICE_ID                               37
+#define NVM_CFG_ID_NETWORK_PORT_MODE                                 38
+#define NVM_CFG_ID_MPS10_RX_LANE_SWAP_BB                             39
+#define NVM_CFG_ID_MPS10_TX_LANE_SWAP_BB                             40
+#define NVM_CFG_ID_MPS10_RX_LANE_POLARITY_BB                         41
+#define NVM_CFG_ID_MPS10_TX_LANE_POLARITY_BB                         42
+#define NVM_CFG_ID_MPS25_RX_LANE_SWAP_BB                             43
+#define NVM_CFG_ID_MPS25_TX_LANE_SWAP_BB                             44
+#define NVM_CFG_ID_MPS25_RX_LANE_POLARITY                            45
+#define NVM_CFG_ID_MPS25_TX_LANE_POLARITY                            46
+#define NVM_CFG_ID_MPS10_PREEMPHASIS_BB                              47
+#define NVM_CFG_ID_MPS10_DRIVER_CURRENT_BB                           48
+#define NVM_CFG_ID_MPS10_ENFORCE_TX_FIR_CFG_BB                       49
+#define NVM_CFG_ID_MPS25_PREEMPHASIS                                 50
+#define NVM_CFG_ID_MPS25_DRIVER_CURRENT                              51
+#define NVM_CFG_ID_MPS25_ENFORCE_TX_FIR_CFG                          52
+#define NVM_CFG_ID_MPS10_CORE_ADDR_BB                                53
+#define NVM_CFG_ID_MPS25_CORE_ADDR_BB                                54
+#define NVM_CFG_ID_EXTERNAL_PHY_TYPE                                 55
+#define NVM_CFG_ID_EXTERNAL_PHY_ADDRESS                              56
+#define NVM_CFG_ID_SERDES_NET_INTERFACE_BB                           57
+#define NVM_CFG_ID_AN_MODE_BB                                        58
+#define NVM_CFG_ID_PREBOOT_OPROM                                     59
+#define NVM_CFG_ID_MBA_DELAY_TIME                                    61
+#define NVM_CFG_ID_MBA_SETUP_HOT_KEY                                 62
+#define NVM_CFG_ID_MBA_HIDE_SETUP_PROMPT                             63
+#define NVM_CFG_ID_PREBOOT_LINK_SPEED                                67
+#define NVM_CFG_ID_PREBOOT_BOOT_PROTOCOL                             69
+#define NVM_CFG_ID_ENABLE_SRIOV                                      70
+#define NVM_CFG_ID_ENABLE_ATC                                        71
+#define NVM_CFG_ID_NUMBER_OF_VFS_PER_PF                              74
+#define NVM_CFG_ID_VF_PCI_BAR2_SIZE_K2_E5                            75
+#define NVM_CFG_ID_VENDOR_ID                                         76
+#define NVM_CFG_ID_SUBSYSTEM_VENDOR_ID                               78
+#define NVM_CFG_ID_SUBSYSTEM_DEVICE_ID                               79
+#define NVM_CFG_ID_VF_PCI_BAR2_SIZE_BB                               81
+#define NVM_CFG_ID_BAR1_SIZE                                         82
+#define NVM_CFG_ID_BAR2_SIZE_BB                                      83
+#define NVM_CFG_ID_VF_PCI_DEVICE_ID                                  84
+#define NVM_CFG_ID_MPS10_TXFIR_MAIN_BB                               85
+#define NVM_CFG_ID_MPS10_TXFIR_POST_BB                               86
+#define NVM_CFG_ID_MPS25_TXFIR_MAIN                                  87
+#define NVM_CFG_ID_MPS25_TXFIR_POST                                  88
+#define NVM_CFG_ID_MANUFACTURE_KIT_VERSION                           89
+#define NVM_CFG_ID_MANUFACTURE_TIMESTAMP                             90
+#define NVM_CFG_ID_PERSONALITY                                       92
+#define NVM_CFG_ID_FCOE_NODE_WWN_MAC_ADDR                            93
+#define NVM_CFG_ID_FCOE_PORT_WWN_MAC_ADDR                            94
+#define NVM_CFG_ID_BANDWIDTH_WEIGHT                                  95
+#define NVM_CFG_ID_MAX_BANDWIDTH                                     96
+#define NVM_CFG_ID_PAUSE_ON_HOST_RING                                97
+#define NVM_CFG_ID_PCIE_PREEMPHASIS                                  98
+#define NVM_CFG_ID_LLDP_MAC_ADDRESS                                  99
+#define NVM_CFG_ID_FCOE_WWN_NODE_PREFIX                              100
+#define NVM_CFG_ID_FCOE_WWN_PORT_PREFIX                              101
+#define NVM_CFG_ID_LED_SPEED_SELECT                                  102
+#define NVM_CFG_ID_LED_PORT_SWAP                                     103
+#define NVM_CFG_ID_AVS_MODE_BB                                       104
+#define NVM_CFG_ID_OVERRIDE_SECURE_MODE                              105
+#define NVM_CFG_ID_AVS_DAC_CODE_BB                                   106
+#define NVM_CFG_ID_MBI_VERSION                                       107
+#define NVM_CFG_ID_MBI_DATE                                          108
+#define NVM_CFG_ID_SMBUS_ADDRESS                                     109
+#define NVM_CFG_ID_NCSI_PACKAGE_ID                                   110
+#define NVM_CFG_ID_SIDEBAND_MODE                                     111
+#define NVM_CFG_ID_SMBUS_MODE                                        112
+#define NVM_CFG_ID_NCSI                                              113
+#define NVM_CFG_ID_TRANSCEIVER_MODULE_ABSENT                         114
+#define NVM_CFG_ID_I2C_MUX_SELECT_GPIO_BB                            115
+#define NVM_CFG_ID_I2C_MUX_SELECT_VALUE_BB                           116
+#define NVM_CFG_ID_DEVICE_CAPABILITIES                               117
+#define NVM_CFG_ID_ETH_DID_SUFFIX                                    118
+#define NVM_CFG_ID_FCOE_DID_SUFFIX                                   119
+#define NVM_CFG_ID_ISCSI_DID_SUFFIX                                  120
+#define NVM_CFG_ID_DEFAULT_ENABLED_PROTOCOLS                         122
+#define NVM_CFG_ID_POWER_DISSIPATED_BB                               123
+#define NVM_CFG_ID_POWER_CONSUMED_BB                                 124
+#define NVM_CFG_ID_AUX_MODE                                          125
+#define NVM_CFG_ID_PORT_TYPE                                         126
+#define NVM_CFG_ID_TX_DISABLE                                        127
+#define NVM_CFG_ID_MAX_LINK_WIDTH                                    128
+#define NVM_CFG_ID_ASPM_L1_MODE                                      130
+#define NVM_CFG_ID_ON_CHIP_SENSOR_MODE                               131
+#define NVM_CFG_ID_PREBOOT_VLAN_VALUE                                132
+#define NVM_CFG_ID_PREBOOT_VLAN                                      133
+#define NVM_CFG_ID_TEMPERATURE_PERIOD_BETWEEN_CHECKS                 134
+#define NVM_CFG_ID_SHUTDOWN_THRESHOLD_TEMPERATURE                    135
+#define NVM_CFG_ID_MAX_COUNT_OPER_THRESHOLD                          136
+#define NVM_CFG_ID_DEAD_TEMP_TH_TEMPERATURE                          137
+#define NVM_CFG_ID_TEMPERATURE_MONITORING_MODE                       139
+#define NVM_CFG_ID_AN_25G_50G_OUI                                    140
+#define NVM_CFG_ID_PLDM_SENSOR_MODE                                  141
+#define NVM_CFG_ID_EXTERNAL_THERMAL_SENSOR                           142
+#define NVM_CFG_ID_EXTERNAL_THERMAL_SENSOR_ADDRESS                   143
+#define NVM_CFG_ID_FAN_FAILURE_DURATION                              144
+#define NVM_CFG_ID_FEC_FORCE_MODE                                    145
+#define NVM_CFG_ID_MULTI_NETWORK_MODES_CAPABILITY                    146
+#define NVM_CFG_ID_MNM_10G_DRV_SPEED_CAPABILITY_MASK                 147
+#define NVM_CFG_ID_MNM_10G_MFW_SPEED_CAPABILITY_MASK                 148
+#define NVM_CFG_ID_MNM_10G_DRV_LINK_SPEED                            149
+#define NVM_CFG_ID_MNM_10G_MFW_LINK_SPEED                            150
+#define NVM_CFG_ID_MNM_10G_PORT_TYPE                                 151
+#define NVM_CFG_ID_MNM_10G_SERDES_NET_INTERFACE                      152
+#define NVM_CFG_ID_MNM_10G_FEC_FORCE_MODE                            153
+#define NVM_CFG_ID_MNM_10G_ETH_DID_SUFFIX                            154
+#define NVM_CFG_ID_MNM_25G_DRV_SPEED_CAPABILITY_MASK                 155
+#define NVM_CFG_ID_MNM_25G_MFW_SPEED_CAPABILITY_MASK                 156
+#define NVM_CFG_ID_MNM_25G_DRV_LINK_SPEED                            157
+#define NVM_CFG_ID_MNM_25G_MFW_LINK_SPEED                            158
+#define NVM_CFG_ID_MNM_25G_PORT_TYPE                                 159
+#define NVM_CFG_ID_MNM_25G_SERDES_NET_INTERFACE                      160
+#define NVM_CFG_ID_MNM_25G_ETH_DID_SUFFIX                            161
+#define NVM_CFG_ID_MNM_25G_FEC_FORCE_MODE                            162
+#define NVM_CFG_ID_MNM_40G_DRV_SPEED_CAPABILITY_MASK                 163
+#define NVM_CFG_ID_MNM_40G_MFW_SPEED_CAPABILITY_MASK                 164
+#define NVM_CFG_ID_MNM_40G_DRV_LINK_SPEED                            165
+#define NVM_CFG_ID_MNM_40G_MFW_LINK_SPEED                            166
+#define NVM_CFG_ID_MNM_40G_PORT_TYPE                                 167
+#define NVM_CFG_ID_MNM_40G_SERDES_NET_INTERFACE                      168
+#define NVM_CFG_ID_MNM_40G_ETH_DID_SUFFIX                            169
+#define NVM_CFG_ID_MNM_40G_FEC_FORCE_MODE                            170
+#define NVM_CFG_ID_MNM_50G_DRV_SPEED_CAPABILITY_MASK                 171
+#define NVM_CFG_ID_MNM_50G_MFW_SPEED_CAPABILITY_MASK                 172
+#define NVM_CFG_ID_MNM_50G_DRV_LINK_SPEED                            173
+#define NVM_CFG_ID_MNM_50G_MFW_LINK_SPEED                            174
+#define NVM_CFG_ID_MNM_50G_PORT_TYPE                                 175
+#define NVM_CFG_ID_MNM_50G_SERDES_NET_INTERFACE                      176
+#define NVM_CFG_ID_MNM_50G_ETH_DID_SUFFIX                            177
+#define NVM_CFG_ID_MNM_50G_FEC_FORCE_MODE                            178
+#define NVM_CFG_ID_MNM_100G_DRV_SPEED_CAP_MASK_BB                    179
+#define NVM_CFG_ID_MNM_100G_MFW_SPEED_CAP_MASK_BB                    180
+#define NVM_CFG_ID_MNM_100G_DRV_LINK_SPEED_BB                        181
+#define NVM_CFG_ID_MNM_100G_MFW_LINK_SPEED_BB                        182
+#define NVM_CFG_ID_MNM_100G_PORT_TYPE_BB                             183
+#define NVM_CFG_ID_MNM_100G_SERDES_NET_INTERFACE_BB                  184
+#define NVM_CFG_ID_MNM_100G_ETH_DID_SUFFIX_BB                        185
+#define NVM_CFG_ID_MNM_100G_FEC_FORCE_MODE_BB                        186
+#define NVM_CFG_ID_FUNCTION_HIDE                                     187
+#define NVM_CFG_ID_BAR2_TOTAL_BUDGET_BB                              188
+#define NVM_CFG_ID_CRASH_DUMP_TRIGGER_ENABLE                         189
+#define NVM_CFG_ID_MPS25_LANE_SWAP_K2_E5                             190
+#define NVM_CFG_ID_BAR2_SIZE_K2_E5                                   191
+#define NVM_CFG_ID_EXT_PHY_RESET                                     192
+#define NVM_CFG_ID_EEE_POWER_SAVING_MODE                             193
+#define NVM_CFG_ID_OVERRIDE_PCIE_PRESET_EQUAL_BB                     194
+#define NVM_CFG_ID_PCIE_PRESET_VALUE_BB                              195
+#define NVM_CFG_ID_MAX_MSIX                                          196
+#define NVM_CFG_ID_NVM_CFG_VERSION                                   197
+#define NVM_CFG_ID_NVM_CFG_NEW_OPTION_SEQ                            198
+#define NVM_CFG_ID_NVM_CFG_REMOVED_OPTION_SEQ                        199
+#define NVM_CFG_ID_NVM_CFG_UPDATED_VALUE_SEQ                         200
+#define NVM_CFG_ID_EXTENDED_SERIAL_NUMBER                            201
+#define NVM_CFG_ID_RDMA_ENABLEMENT                                   202
+#define NVM_CFG_ID_MAX_CONT_OPERATING_TEMP                           203
+#define NVM_CFG_ID_RUNTIME_PORT_SWAP_GPIO                            204
+#define NVM_CFG_ID_RUNTIME_PORT_SWAP_MAP                             205
+#define NVM_CFG_ID_THERMAL_EVENT_GPIO                                206
+#define NVM_CFG_ID_I2C_INTERRUPT_GPIO                                207
+#define NVM_CFG_ID_DCI_SUPPORT                                       208
+#define NVM_CFG_ID_PCIE_VDM_ENABLED                                  209
+#define NVM_CFG_ID_OEM1_NUMBER                                       210
+#define NVM_CFG_ID_OEM2_NUMBER                                       211
+#define NVM_CFG_ID_FEC_AN_MODE_K2_E5                                 212
+#define NVM_CFG_ID_NPAR_ENABLED_PROTOCOL                             213
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_PRE                            214
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_MAIN                           215
+#define NVM_CFG_ID_MPS25_ACTIVE_TXFIR_POST                           216
+#define NVM_CFG_ID_ALOM_FAN_ON_AUX_GPIO                              217
+#define NVM_CFG_ID_ALOM_FAN_ON_AUX_VALUE                             218
+#define NVM_CFG_ID_SLOT_ID_GPIO                                      219
+#define NVM_CFG_ID_PMBUS_SCL_GPIO                                    220
+#define NVM_CFG_ID_PMBUS_SDA_GPIO                                    221
+#define NVM_CFG_ID_RESET_ON_LAN                                      222
+#define NVM_CFG_ID_NCSI_PACKAGE_ID_IO                                223
+#define NVM_CFG_ID_TX_RX_EQ_25G_HLPC                                 224
+#define NVM_CFG_ID_TX_RX_EQ_25G_LLPC                                 225
+#define NVM_CFG_ID_TX_RX_EQ_25G_AC                                   226
+#define NVM_CFG_ID_TX_RX_EQ_10G_PC                                   227
+#define NVM_CFG_ID_TX_RX_EQ_10G_AC                                   228
+#define NVM_CFG_ID_TX_RX_EQ_1G                                       229
+#define NVM_CFG_ID_TX_RX_EQ_25G_BT                                   230
+#define NVM_CFG_ID_TX_RX_EQ_10G_BT                                   231
+#define NVM_CFG_ID_PF_MAPPING                                        232
+#define NVM_CFG_ID_RECOVERY_MODE                                     234
+#define NVM_CFG_ID_PHY_MODULE_DEAD_TEMP_TH                           235
+#define NVM_CFG_ID_PHY_MODULE_ALOM_FAN_ON_TEMP_TH                    236
+#define NVM_CFG_ID_PREBOOT_DEBUG_MODE_STD                            237
+#define NVM_CFG_ID_PREBOOT_DEBUG_MODE_EXT                            238
+#define NVM_CFG_ID_SMARTLINQ_MODE                                    239
+#define NVM_CFG_ID_PREBOOT_LINK_UP_DELAY                             242
+#define NVM_CFG_ID_VOLTAGE_REGULATOR_TYPE                            243
+#define NVM_CFG_ID_MAIN_CLOCK_FREQUENCY                              245
+#define NVM_CFG_ID_MAC_CLOCK_FREQUENCY                               246
+#define NVM_CFG_ID_STORM_CLOCK_FREQUENCY                             247
+#define NVM_CFG_ID_PCIE_RELAXED_ORDERING                             248
+#define NVM_CFG_ID_EXT_PHY_MDI_PAIR_SWAP                             249
+#define NVM_CFG_ID_UID_LED_MODE_MASK                                 250
+#define NVM_CFG_ID_NCSI_AUX_LINK                                     251
+#define NVM_CFG_ID_SMARTAN_FEC_OVERRIDE                              272
+#define NVM_CFG_ID_LLDP_DISABLE                                      273
+#define NVM_CFG_ID_SHORT_PERST_PROTECTION_K2_E5                      274
+#define NVM_CFG_ID_TRANSCEIVER_RATE_SELECT_0                         275
+#define NVM_CFG_ID_TRANSCEIVER_RATE_SELECT_1                         276
+#define NVM_CFG_ID_TRANSCEIVER_MODULE_TX_FAULT                       277
+#define NVM_CFG_ID_TRANSCEIVER_QSFP_MODULE_RESET                     278
+#define NVM_CFG_ID_TRANSCEIVER_QSFP_LP_MODE                          279
+#define NVM_CFG_ID_TRANSCEIVER_POWER_ENABLE                          280
+#define NVM_CFG_ID_LASI_INTERRUPT_INPUT                              281
+#define NVM_CFG_ID_EXT_PHY_PGOOD_INPUT                               282
+#define NVM_CFG_ID_TRACE_LEVEL                                       283
+#define NVM_CFG_ID_TRACE_MODULES                                     284
+#define NVM_CFG_ID_EMULATED_TMP421                                   285
+#define NVM_CFG_ID_WARNING_TEMPERATURE_GPIO                          286
+#define NVM_CFG_ID_WARNING_TEMPERATURE_THRESHOLD                     287
+#define NVM_CFG_ID_PERST_INDICATION_GPIO                             288
+#define NVM_CFG_ID_PCIE_CLASS_CODE_FCOE_K2_E5                        289
+#define NVM_CFG_ID_PCIE_CLASS_CODE_ISCSI_K2_E5                       290
+#define NVM_CFG_ID_NUMBER_OF_PROVISIONED_MAC                         291
+#define NVM_CFG_ID_NUMBER_OF_PROVISIONED_VF_MAC                      292
+#define NVM_CFG_ID_PROVISIONED_BMC_MAC                               293
+#define NVM_CFG_ID_OVERRIDE_AGC_THRESHOLD_K2                         294
+#define NVM_CFG_ID_WARNING_TEMPERATURE_DELTA                         295
+#define NVM_CFG_ID_ALOM_FAN_ON_AUX_DELTA                             296
+#define NVM_CFG_ID_DEAD_TEMP_TH_DELTA                                297
+#define NVM_CFG_ID_PHY_MODULE_WARNING_TEMP_TH                        298
+#define NVM_CFG_ID_DISABLE_PLDM                                      299
+#define NVM_CFG_ID_DISABLE_MCTP_OEM                                  300
 #endif /* NVM_CFG_H */
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 6/9] net/qede/base: move dmae code to HSI
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (15 preceding siblings ...)
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 5/9] net/qede/base: update rt defs NVM cfg and mcp code Rasesh Mody
@ 2019-10-06 20:14 ` Rasesh Mody
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 7/9] net/qede/base: update HSI code Rasesh Mody
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Move DMA engine (DMAE) structures from base driver to HSI module.
Use DMAE_PARAMS_* in place of ECORE_DMAE_FLAG_*.
Enforce use of the SET_FIELD() macro where appropriate.
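
A minimal usage sketch of the conversion (illustrative only), based on
the dmae_params and DMAE_PARAMS_* definitions added in this patch;
vf_id and the address/size arguments below are placeholders:

  struct dmae_params params;

  OSAL_MEMSET(&params, 0, sizeof(params));

  /* Before: params.flags = ECORE_DMAE_FLAG_VF_DST;
   *         params.dst_vfid = vf_id;
   * After:
   */
  SET_FIELD(params.flags, DMAE_PARAMS_DST_VF_VALID, 0x1);
  params.dst_vf_id = vf_id;

  ecore_dmae_host2host(p_hwfn, p_ptt, src_phys_addr, dst_phys_addr,
                       size_in_dwords, &params);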

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore_dev.c        | 12 ++--
 drivers/net/qede/base/ecore_dev_api.h    | 92 ------------------------
 drivers/net/qede/base/ecore_hsi_common.h | 58 ++++++++++++++-
 drivers/net/qede/base/ecore_hw.c         | 52 ++++++++------
 drivers/net/qede/base/ecore_hw.h         | 88 +++++++++++++++++------
 drivers/net/qede/base/ecore_init_ops.c   |  4 +-
 drivers/net/qede/base/ecore_sriov.c      | 23 +++---
 7 files changed, 174 insertions(+), 155 deletions(-)

diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 749aea4e8..2c135afd2 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -950,7 +950,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 			bool b_write_access)
 {
 	u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 	enum _ecore_status_t rc;
 	u32 addr;
 
@@ -973,15 +973,15 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 	OSAL_MEMSET(&params, 0, sizeof(params));
 
 	if (b_write_access) {
-		params.flags = ECORE_DMAE_FLAG_PF_DST;
-		params.dst_pfid = pfid;
+		SET_FIELD(params.flags, DMAE_PARAMS_DST_PF_VALID, 0x1);
+		params.dst_pf_id = pfid;
 		rc = ecore_dmae_host2grc(p_hwfn, p_ptt,
 					 (u64)(osal_uintptr_t)&p_details->value,
 					 addr, 2 /* size_in_dwords */, &params);
 	} else {
-		params.flags = ECORE_DMAE_FLAG_PF_SRC |
-			       ECORE_DMAE_FLAG_COMPLETION_DST;
-		params.src_pfid = pfid;
+		SET_FIELD(params.flags, DMAE_PARAMS_SRC_PF_VALID, 0x1);
+		SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 0x1);
+		params.src_pf_id = pfid;
 		rc = ecore_dmae_grc2host(p_hwfn, p_ptt, addr,
 					 (u64)(osal_uintptr_t)&p_details->value,
 					 2 /* size_in_dwords */, &params);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index a99888097..4d5cc1a0f 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -415,98 +415,6 @@ struct ecore_eth_stats {
 	};
 };
 
-enum ecore_dmae_address_type_t {
-	ECORE_DMAE_ADDRESS_HOST_VIRT,
-	ECORE_DMAE_ADDRESS_HOST_PHYS,
-	ECORE_DMAE_ADDRESS_GRC
-};
-
-/* value of flags If ECORE_DMAE_FLAG_RW_REPL_SRC flag is set and the
- * source is a block of length DMAE_MAX_RW_SIZE and the
- * destination is larger, the source block will be duplicated as
- * many times as required to fill the destination block. This is
- * used mostly to write a zeroed buffer to destination address
- * using DMA
- */
-#define ECORE_DMAE_FLAG_RW_REPL_SRC	0x00000001
-#define ECORE_DMAE_FLAG_VF_SRC		0x00000002
-#define ECORE_DMAE_FLAG_VF_DST		0x00000004
-#define ECORE_DMAE_FLAG_COMPLETION_DST	0x00000008
-#define ECORE_DMAE_FLAG_PORT		0x00000010
-#define ECORE_DMAE_FLAG_PF_SRC		0x00000020
-#define ECORE_DMAE_FLAG_PF_DST		0x00000040
-
-struct ecore_dmae_params {
-	u32 flags; /* consists of ECORE_DMAE_FLAG_* values */
-	u8 src_vfid;
-	u8 dst_vfid;
-	u8 port_id;
-	u8 src_pfid;
-	u8 dst_pfid;
-};
-
-/**
- * @brief ecore_dmae_host2grc - copy data from source addr to
- * dmae registers using the given ptt
- *
- * @param p_hwfn
- * @param p_ptt
- * @param source_addr
- * @param grc_addr (dmae_data_offset)
- * @param size_in_dwords
- * @param p_params (default parameters will be used in case of OSAL_NULL)
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt,
-		    u64 source_addr,
-		    u32 grc_addr,
-		    u32 size_in_dwords,
-		    struct ecore_dmae_params *p_params);
-
-/**
- * @brief ecore_dmae_grc2host - Read data from dmae data offset
- * to source address using the given ptt
- *
- * @param p_ptt
- * @param grc_addr (dmae_data_offset)
- * @param dest_addr
- * @param size_in_dwords
- * @param p_params (default parameters will be used in case of OSAL_NULL)
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt,
-		    u32 grc_addr,
-		    dma_addr_t dest_addr,
-		    u32 size_in_dwords,
-		    struct ecore_dmae_params *p_params);
-
-/**
- * @brief ecore_dmae_host2host - copy data from to source address
- * to a destination address (for SRIOV) using the given ptt
- *
- * @param p_hwfn
- * @param p_ptt
- * @param source_addr
- * @param dest_addr
- * @param size_in_dwords
- * @param p_params (default parameters will be used in case of OSAL_NULL)
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t
-ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
-		     struct ecore_ptt *p_ptt,
-		     dma_addr_t source_addr,
-		     dma_addr_t dest_addr,
-		     u32 size_in_dwords,
-		     struct ecore_dmae_params *p_params);
-
 /**
  * @brief ecore_chain_alloc - Allocate and initialize a chain
  *
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 7a94ed506..8fa200033 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -1953,7 +1953,11 @@ struct dmae_cmd {
 	__le16 crc16 /* crc16 result */;
 	__le16 crc16_c /* crc16_c result */;
 	__le16 crc10 /* crc_t10 result */;
-	__le16 reserved;
+	__le16 error_bit_reserved;
+#define DMAE_CMD_ERROR_BIT_MASK        0x1 /* Error bit */
+#define DMAE_CMD_ERROR_BIT_SHIFT       0
+#define DMAE_CMD_RESERVED_MASK         0x7FFF
+#define DMAE_CMD_RESERVED_SHIFT        1
 	__le16 xsum16 /* checksum16 result  */;
 	__le16 xsum8 /* checksum8 result  */;
 };
@@ -2017,6 +2021,58 @@ enum dmae_cmd_src_enum {
 };
 
 
+/*
+ * DMAE parameters
+ */
+struct dmae_params {
+	__le32 flags;
+/* If set and the source is a block of length DMAE_MAX_RW_SIZE and the
+ * destination is larger, the source block will be duplicated as many
+ * times as required to fill the destination block. This is used mostly
+ * to write a zeroed buffer to destination address using DMA
+ */
+#define DMAE_PARAMS_RW_REPL_SRC_MASK     0x1
+#define DMAE_PARAMS_RW_REPL_SRC_SHIFT    0
+/* If set, the source is a VF, and the source VF ID is taken from the
+ * src_vf_id parameter.
+ */
+#define DMAE_PARAMS_SRC_VF_VALID_MASK    0x1
+#define DMAE_PARAMS_SRC_VF_VALID_SHIFT   1
+/* If set, the destination is a VF, and the destination VF ID is taken
+ * from the dst_vf_id parameter.
+ */
+#define DMAE_PARAMS_DST_VF_VALID_MASK    0x1
+#define DMAE_PARAMS_DST_VF_VALID_SHIFT   2
+/* If set, a completion is sent to the destination function.
+ * Otherwise it is sent to the source function.
+ */
+#define DMAE_PARAMS_COMPLETION_DST_MASK  0x1
+#define DMAE_PARAMS_COMPLETION_DST_SHIFT 3
+/* If set, the port ID is taken from the port_id parameter.
+ * Otherwise, the current port ID is used.
+ */
+#define DMAE_PARAMS_PORT_VALID_MASK      0x1
+#define DMAE_PARAMS_PORT_VALID_SHIFT     4
+/* If set, the source PF ID is taken from the src_pf_id parameter.
+ * Otherwise, the current PF ID is used.
+ */
+#define DMAE_PARAMS_SRC_PF_VALID_MASK    0x1
+#define DMAE_PARAMS_SRC_PF_VALID_SHIFT   5
+/* If set, the destination PF ID is taken from the dst_pf_id parameter.
+ * Otherwise, the current PF ID is used
+ */
+#define DMAE_PARAMS_DST_PF_VALID_MASK    0x1
+#define DMAE_PARAMS_DST_PF_VALID_SHIFT   6
+#define DMAE_PARAMS_RESERVED_MASK        0x1FFFFFF
+#define DMAE_PARAMS_RESERVED_SHIFT       7
+	u8 src_vf_id /* Source VF ID, valid only if src_vf_valid is set */;
+	u8 dst_vf_id /* Destination VF ID, valid only if dst_vf_valid is set */;
+	u8 port_id /* Port ID, valid only if port_valid is set */;
+	u8 src_pf_id /* Source PF ID, valid only if src_pf_valid is set */;
+	u8 dst_pf_id /* Destination PF ID, valid only if dst_pf_valid is set */;
+	u8 reserved1;
+	__le16 reserved2;
+};
 
 
 struct fw_asserts_ram_section {
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 72cd7e9c3..6a79db52e 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -453,14 +453,15 @@ u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid)
 /* DMAE */
 
 #define ECORE_DMAE_FLAGS_IS_SET(params, flag)	\
-	((params) != OSAL_NULL && ((params)->flags & ECORE_DMAE_FLAG_##flag))
+	((params) != OSAL_NULL && \
+	 GET_FIELD((params)->flags, DMAE_PARAMS_##flag))
 
 static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 			      const u8 is_src_type_grc,
 			      const u8 is_dst_type_grc,
-			      struct ecore_dmae_params *p_params)
+			      struct dmae_params *p_params)
 {
-	u8 src_pfid, dst_pfid, port_id;
+	u8 src_pf_id, dst_pf_id, port_id;
 	u16 opcode_b = 0;
 	u32 opcode = 0;
 
@@ -468,19 +469,19 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 	 * 0- The source is the PCIe
 	 * 1- The source is the GRC.
 	 */
-	opcode |= (is_src_type_grc ? DMAE_CMD_SRC_MASK_GRC
-		   : DMAE_CMD_SRC_MASK_PCIE) << DMAE_CMD_SRC_SHIFT;
-	src_pfid = ECORE_DMAE_FLAGS_IS_SET(p_params, PF_SRC) ?
-		   p_params->src_pfid : p_hwfn->rel_pf_id;
-	opcode |= (src_pfid & DMAE_CMD_SRC_PF_ID_MASK) <<
+	opcode |= (is_src_type_grc ? dmae_cmd_src_grc : dmae_cmd_src_pcie) <<
+		  DMAE_CMD_SRC_SHIFT;
+	src_pf_id = ECORE_DMAE_FLAGS_IS_SET(p_params, SRC_PF_VALID) ?
+		    p_params->src_pf_id : p_hwfn->rel_pf_id;
+	opcode |= (src_pf_id & DMAE_CMD_SRC_PF_ID_MASK) <<
 		  DMAE_CMD_SRC_PF_ID_SHIFT;
 
 	/* The destination of the DMA can be: 0-None 1-PCIe 2-GRC 3-None */
-	opcode |= (is_dst_type_grc ? DMAE_CMD_DST_MASK_GRC
-		   : DMAE_CMD_DST_MASK_PCIE) << DMAE_CMD_DST_SHIFT;
-	dst_pfid = ECORE_DMAE_FLAGS_IS_SET(p_params, PF_DST) ?
-		   p_params->dst_pfid : p_hwfn->rel_pf_id;
-	opcode |= (dst_pfid & DMAE_CMD_DST_PF_ID_MASK) <<
+	opcode |= (is_dst_type_grc ? dmae_cmd_dst_grc : dmae_cmd_dst_pcie) <<
+		  DMAE_CMD_DST_SHIFT;
+	dst_pf_id = ECORE_DMAE_FLAGS_IS_SET(p_params, DST_PF_VALID) ?
+		    p_params->dst_pf_id : p_hwfn->rel_pf_id;
+	opcode |= (dst_pf_id & DMAE_CMD_DST_PF_ID_MASK) <<
 		  DMAE_CMD_DST_PF_ID_SHIFT;
 
 	/* DMAE_E4_TODO need to check which value to specify here. */
@@ -501,7 +502,7 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 	 */
 	opcode |= DMAE_CMD_ENDIANITY << DMAE_CMD_ENDIANITY_MODE_SHIFT;
 
-	port_id = (ECORE_DMAE_FLAGS_IS_SET(p_params, PORT)) ?
+	port_id = (ECORE_DMAE_FLAGS_IS_SET(p_params, PORT_VALID)) ?
 		  p_params->port_id : p_hwfn->port_id;
 	opcode |= port_id << DMAE_CMD_PORT_ID_SHIFT;
 
@@ -512,16 +513,16 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
 	opcode |= DMAE_CMD_DST_ADDR_RESET_MASK << DMAE_CMD_DST_ADDR_RESET_SHIFT;
 
 	/* SRC/DST VFID: all 1's - pf, otherwise VF id */
-	if (ECORE_DMAE_FLAGS_IS_SET(p_params, VF_SRC)) {
+	if (ECORE_DMAE_FLAGS_IS_SET(p_params, SRC_VF_VALID)) {
 		opcode |= (1 << DMAE_CMD_SRC_VF_ID_VALID_SHIFT);
-		opcode_b |= (p_params->src_vfid << DMAE_CMD_SRC_VF_ID_SHIFT);
+		opcode_b |= (p_params->src_vf_id <<  DMAE_CMD_SRC_VF_ID_SHIFT);
 	} else {
 		opcode_b |= (DMAE_CMD_SRC_VF_ID_MASK <<
 			     DMAE_CMD_SRC_VF_ID_SHIFT);
 	}
-	if (ECORE_DMAE_FLAGS_IS_SET(p_params, VF_DST)) {
+	if (ECORE_DMAE_FLAGS_IS_SET(p_params, DST_VF_VALID)) {
 		opcode |= 1 << DMAE_CMD_DST_VF_ID_VALID_SHIFT;
-		opcode_b |= p_params->dst_vfid << DMAE_CMD_DST_VF_ID_SHIFT;
+		opcode_b |= p_params->dst_vf_id << DMAE_CMD_DST_VF_ID_SHIFT;
 	} else {
 		opcode_b |= DMAE_CMD_DST_VF_ID_MASK << DMAE_CMD_DST_VF_ID_SHIFT;
 	}
@@ -716,6 +717,12 @@ static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
 	return ecore_status;
 }
 
+enum ecore_dmae_address_type {
+	ECORE_DMAE_ADDRESS_HOST_VIRT,
+	ECORE_DMAE_ADDRESS_HOST_PHYS,
+	ECORE_DMAE_ADDRESS_GRC
+};
+
 static enum _ecore_status_t
 ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
 				 struct ecore_ptt *p_ptt,
@@ -806,7 +813,7 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
 			   u8 src_type,
 			   u8 dst_type,
 			   u32 size_in_dwords,
-			   struct ecore_dmae_params *p_params)
+			   struct dmae_params *p_params)
 {
 	dma_addr_t phys = p_hwfn->dmae_info.completion_word_phys_addr;
 	u16 length_cur = 0, i = 0, cnt_split = 0, length_mod = 0;
@@ -910,7 +917,7 @@ enum _ecore_status_t ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
 					 u64 source_addr,
 					 u32 grc_addr,
 					 u32 size_in_dwords,
-					 struct ecore_dmae_params *p_params)
+					 struct dmae_params *p_params)
 {
 	u32 grc_addr_in_dw = grc_addr / sizeof(u32);
 	enum _ecore_status_t rc;
@@ -933,7 +940,7 @@ enum _ecore_status_t ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
 					 u32 grc_addr,
 					 dma_addr_t dest_addr,
 					 u32 size_in_dwords,
-					 struct ecore_dmae_params *p_params)
+					 struct dmae_params *p_params)
 {
 	u32 grc_addr_in_dw = grc_addr / sizeof(u32);
 	enum _ecore_status_t rc;
@@ -955,7 +962,8 @@ ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
 		     struct ecore_ptt *p_ptt,
 		     dma_addr_t source_addr,
 		     dma_addr_t dest_addr,
-		     u32 size_in_dwords, struct ecore_dmae_params *p_params)
+		     u32 size_in_dwords,
+					  struct dmae_params *p_params)
 {
 	enum _ecore_status_t rc;
 
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 0b5b40c46..e43f337dc 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -31,23 +31,7 @@ enum reserved_ptts {
 #define MISC_REG_DRIVER_CONTROL_0_SIZE	MISC_REG_DRIVER_CONTROL_1_SIZE
 #endif
 
-enum _dmae_cmd_dst_mask {
-	DMAE_CMD_DST_MASK_NONE = 0,
-	DMAE_CMD_DST_MASK_PCIE = 1,
-	DMAE_CMD_DST_MASK_GRC = 2
-};
-
-enum _dmae_cmd_src_mask {
-	DMAE_CMD_SRC_MASK_PCIE = 0,
-	DMAE_CMD_SRC_MASK_GRC = 1
-};
-
-enum _dmae_cmd_crc_mask {
-	DMAE_CMD_COMP_CRC_EN_MASK_NONE = 0,
-	DMAE_CMD_COMP_CRC_EN_MASK_SET = 1
-};
-
-/* definitions for DMA constants */
+/* Definitions for DMA constants */
 #define DMAE_GO_VALUE	0x1
 
 #ifdef __BIG_ENDIAN
@@ -258,16 +242,78 @@ enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn	*p_hwfn);
 */
 void ecore_dmae_info_free(struct ecore_hwfn	*p_hwfn);
 
-enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
-					const u8 *fw_data);
+/**
+ * @brief ecore_dmae_host2grc - copy data from source address to
+ * dmae registers using the given ptt
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param source_addr
+ * @param grc_addr (dmae_data_offset)
+ * @param size_in_dwords
+ * @param p_params (default parameters will be used in case of OSAL_NULL)
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
+		    struct ecore_ptt *p_ptt,
+		    u64 source_addr,
+		    u32 grc_addr,
+		    u32 size_in_dwords,
+		    struct dmae_params *p_params);
 
-void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
-			 enum ecore_hw_err_type err_type);
+/**
+ * @brief ecore_dmae_grc2host - Read data from dmae data offset
+ * into the destination address using the given ptt
+ *
+ * @param p_ptt
+ * @param grc_addr (dmae_data_offset)
+ * @param dest_addr
+ * @param size_in_dwords
+ * @param p_params (default parameters will be used in case of OSAL_NULL)
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
+		    struct ecore_ptt *p_ptt,
+		    u32 grc_addr,
+		    dma_addr_t dest_addr,
+		    u32 size_in_dwords,
+		    struct dmae_params *p_params);
+
+/**
+ * @brief ecore_dmae_host2host - copy data from a source address
+ * to a destination address (for SRIOV) using the given ptt
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param source_addr
+ * @param dest_addr
+ * @param size_in_dwords
+ * @param p_params (default parameters will be used in case of OSAL_NULL)
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
+		     struct ecore_ptt *p_ptt,
+		     dma_addr_t source_addr,
+		     dma_addr_t dest_addr,
+		     u32 size_in_dwords,
+		     struct dmae_params *p_params);
 
 enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
 				       struct ecore_ptt *p_ptt,
 				       const char *phase);
 
+enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
+					const u8 *fw_data);
+
+void ecore_hw_err_notify(struct ecore_hwfn *p_hwfn,
+			 enum ecore_hw_err_type err_type);
+
 /**
  * @brief ecore_ppfid_wr - Write value to BAR using the given ptt while
  *	pretending to a PF to which the given PPFID pertains.
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index 044308bf4..8f7209100 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -179,12 +179,12 @@ static enum _ecore_status_t ecore_init_fill_dmae(struct ecore_hwfn *p_hwfn,
 						 u32 addr, u32 fill_count)
 {
 	static u32 zero_buffer[DMAE_MAX_RW_SIZE];
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 
 	OSAL_MEMSET(zero_buffer, 0, sizeof(u32) * DMAE_MAX_RW_SIZE);
 
 	OSAL_MEMSET(&params, 0, sizeof(params));
-	params.flags = ECORE_DMAE_FLAG_RW_REPL_SRC;
+	SET_FIELD(params.flags, DMAE_PARAMS_RW_REPL_SRC, 0x1);
 	return ecore_dmae_host2grc(p_hwfn, p_ptt,
 				   (osal_uintptr_t)&zero_buffer[0],
 				   addr, fill_count, &params);
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index d771ac6d4..264217252 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -347,7 +347,7 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_bulletin_content *p_bulletin;
 	int crc_size = sizeof(p_bulletin->crc);
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 	struct ecore_vf_info *p_vf;
 
 	p_vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
@@ -371,8 +371,8 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
 
 	/* propagate bulletin board via dmae to vm memory */
 	OSAL_MEMSET(&params, 0, sizeof(params));
-	params.flags = ECORE_DMAE_FLAG_VF_DST;
-	params.dst_vfid = p_vf->abs_vf_id;
+	SET_FIELD(params.flags, DMAE_PARAMS_DST_VF_VALID, 0x1);
+	params.dst_vf_id = p_vf->abs_vf_id;
 	return ecore_dmae_host2host(p_hwfn, p_ptt, p_vf->bulletin.phys,
 				    p_vf->vf_bulletin, p_vf->bulletin.size / 4,
 				    &params);
@@ -1374,7 +1374,7 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 				    u8 status)
 {
 	struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 	u8 eng_vf_id;
 
 	mbx->reply_virt->default_resp.hdr.status = status;
@@ -1391,9 +1391,9 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 
 	eng_vf_id = p_vf->abs_vf_id;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_dmae_params));
-	params.flags = ECORE_DMAE_FLAG_VF_DST;
-	params.dst_vfid = eng_vf_id;
+	OSAL_MEMSET(&params, 0, sizeof(struct dmae_params));
+	SET_FIELD(params.flags, DMAE_PARAMS_DST_VF_VALID, 0x1);
+	params.dst_vf_id = eng_vf_id;
 
 	ecore_dmae_host2host(p_hwfn, p_ptt, mbx->reply_phys + sizeof(u64),
 			     mbx->req_virt->first_tlv.reply_address +
@@ -4389,16 +4389,17 @@ u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
 enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
 					   struct ecore_ptt *ptt, int vfid)
 {
-	struct ecore_dmae_params params;
+	struct dmae_params params;
 	struct ecore_vf_info *vf_info;
 
 	vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
 	if (!vf_info)
 		return ECORE_INVAL;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct ecore_dmae_params));
-	params.flags = ECORE_DMAE_FLAG_VF_SRC | ECORE_DMAE_FLAG_COMPLETION_DST;
-	params.src_vfid = vf_info->abs_vf_id;
+	OSAL_MEMSET(&params, 0, sizeof(struct dmae_params));
+	SET_FIELD(params.flags, DMAE_PARAMS_SRC_VF_VALID, 0x1);
+	SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 0x1);
+	params.src_vf_id = vf_info->abs_vf_id;
 
 	if (ecore_dmae_host2host(p_hwfn, ptt,
 				 vf_info->vf_mbx.pending_req,
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 7/9] net/qede/base: update HSI code
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (16 preceding siblings ...)
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 6/9] net/qede/base: move dmae code to HSI Rasesh Mody
@ 2019-10-06 20:14 ` Rasesh Mody
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 8/9] net/qede/base: update the FW to 8.40.25.0 Rasesh Mody
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 9/9] net/qede: print adapter info during init failure Rasesh Mody
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Update the hardware/software common base driver code in preparation for
updating the firmware to version 8.40.25.0.

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/bcm_osal.c              |   1 +
 drivers/net/qede/base/common_hsi.h            | 164 ++++--
 drivers/net/qede/base/ecore.h                 |   4 +-
 drivers/net/qede/base/ecore_cxt.c             |  23 +-
 drivers/net/qede/base/ecore_dev.c             |  21 +-
 drivers/net/qede/base/ecore_gtt_reg_addr.h    |  42 +-
 drivers/net/qede/base/ecore_gtt_values.h      |  18 +-
 drivers/net/qede/base/ecore_hsi_common.h      | 231 +++++++--
 drivers/net/qede/base/ecore_hsi_debug_tools.h | 475 ++++++++----------
 drivers/net/qede/base/ecore_hsi_eth.h         | 134 ++---
 drivers/net/qede/base/ecore_hsi_init_func.h   |  25 +-
 drivers/net/qede/base/ecore_hsi_init_tool.h   |  38 ++
 drivers/net/qede/base/ecore_hw.c              |  16 +
 drivers/net/qede/base/ecore_hw.h              |  10 +-
 drivers/net/qede/base/ecore_init_fw_funcs.c   |   7 +-
 drivers/net/qede/base/ecore_init_ops.c        |  47 --
 drivers/net/qede/base/ecore_init_ops.h        |  10 -
 drivers/net/qede/base/ecore_iro.h             | 320 ++++++------
 drivers/net/qede/base/ecore_iro_values.h      | 336 ++++++++-----
 drivers/net/qede/base/ecore_mcp.c             |   1 +
 drivers/net/qede/base/eth_common.h            | 101 +++-
 drivers/net/qede/base/reg_addr.h              |  10 +
 drivers/net/qede/qede_rxtx.c                  |  16 +-
 23 files changed, 1218 insertions(+), 832 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 9915df44f..48d016e24 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -10,6 +10,7 @@
 #include "bcm_osal.h"
 #include "ecore.h"
 #include "ecore_hw.h"
+#include "ecore_dev_api.h"
 #include "ecore_iov_api.h"
 #include "ecore_mcp_api.h"
 #include "ecore_l2_api.h"
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index b878a92aa..74afed1ec 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -13,12 +13,12 @@
 /* Temporarily here should be added to HSI automatically by resource allocation
  * tool.
  */
-#define T_TEST_AGG_INT_TEMP    6
-#define	M_TEST_AGG_INT_TEMP    8
-#define	U_TEST_AGG_INT_TEMP    6
-#define	X_TEST_AGG_INT_TEMP    14
-#define	Y_TEST_AGG_INT_TEMP    4
-#define	P_TEST_AGG_INT_TEMP    4
+#define T_TEST_AGG_INT_TEMP  6
+#define M_TEST_AGG_INT_TEMP  8
+#define U_TEST_AGG_INT_TEMP  6
+#define X_TEST_AGG_INT_TEMP  14
+#define Y_TEST_AGG_INT_TEMP  4
+#define P_TEST_AGG_INT_TEMP  4
 
 #define X_FINAL_CLEANUP_AGG_INT  1
 
@@ -30,21 +30,20 @@
 #define ISCSI_CDU_TASK_SEG_TYPE       0
 #define FCOE_CDU_TASK_SEG_TYPE        0
 #define RDMA_CDU_TASK_SEG_TYPE        1
+#define ETH_CDU_TASK_SEG_TYPE         2
 
 #define FW_ASSERT_GENERAL_ATTN_IDX    32
 
-#define MAX_PINNED_CCFC			32
-
 #define EAGLE_ENG1_WORKAROUND_NIG_FLOWCTRL_MODE	3
 
 /* Queue Zone sizes in bytes */
-#define TSTORM_QZONE_SIZE    8	 /*tstorm_scsi_queue_zone*/
-#define MSTORM_QZONE_SIZE    16  /*mstorm_eth_queue_zone. Used only for RX
-				  *producer of VFs in backward compatibility
-				  *mode.
-				  */
-#define USTORM_QZONE_SIZE    8	 /*ustorm_eth_queue_zone*/
-#define XSTORM_QZONE_SIZE    8	 /*xstorm_eth_queue_zone*/
+#define TSTORM_QZONE_SIZE    8   /*tstorm_queue_zone*/
+/*mstorm_eth_queue_zone. Used only for RX producer of VFs in backward
+ * compatibility mode.
+ */
+#define MSTORM_QZONE_SIZE    16
+#define USTORM_QZONE_SIZE    8   /*ustorm_queue_zone*/
+#define XSTORM_QZONE_SIZE    8   /*xstorm_eth_queue_zone*/
 #define YSTORM_QZONE_SIZE    0
 #define PSTORM_QZONE_SIZE    0
 
@@ -61,7 +60,8 @@
  */
 #define ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD     112
 
-
+#define ETH_RGSRC_CTX_SIZE                6 /*Size in QREGS*/
+#define ETH_TGSRC_CTX_SIZE                6 /*Size in QREGS*/
 /********************************/
 /* CORE (LIGHT L2) FW CONSTANTS */
 /********************************/
@@ -76,15 +76,13 @@
 
 #define CORE_SPQE_PAGE_SIZE_BYTES                       4096
 
-/*
- * Usually LL2 queues are opened in pairs TX-RX.
- * There is a hard restriction on number of RX queues (limited by Tstorm RAM)
- * and TX counters (Pstorm RAM).
- * Number of TX queues is almost unlimited.
- * The constants are different so as to allow asymmetric LL2 connections
- */
+/* Number of LL2 RAM based (RX producers and statistics) queues */
+#define MAX_NUM_LL2_RX_RAM_QUEUES               32
+/* Number of LL2 context based (RX producers and statistics) queues */
+#define MAX_NUM_LL2_RX_CTX_QUEUES               208
+#define MAX_NUM_LL2_RX_QUEUES (MAX_NUM_LL2_RX_RAM_QUEUES + \
+			       MAX_NUM_LL2_RX_CTX_QUEUES)
 
-#define MAX_NUM_LL2_RX_QUEUES					48
 #define MAX_NUM_LL2_TX_STATS_COUNTERS			48
 
 
@@ -95,8 +93,8 @@
 
 
 #define FW_MAJOR_VERSION        8
-#define FW_MINOR_VERSION        37
-#define FW_REVISION_VERSION     7
+#define FW_MINOR_VERSION        40
+#define FW_REVISION_VERSION     25
 #define FW_ENGINEERING_VERSION  0
 
 /***********************/
@@ -134,6 +132,8 @@
 #define MAX_NUM_L2_QUEUES_BB	(256)
 #define MAX_NUM_L2_QUEUES_K2    (320)
 
+#define FW_LOWEST_CONSUMEDDMAE_CHANNEL   (26)
+
 /* Traffic classes in network-facing blocks (PBF, BTB, NIG, BRB, PRS and QM) */
 #define NUM_PHYS_TCS_4PORT_K2     4
 #define NUM_OF_PHYS_TCS           8
@@ -145,7 +145,6 @@
 #define NUM_OF_CONNECTION_TYPES (8)
 #define NUM_OF_TASK_TYPES       (8)
 #define NUM_OF_LCIDS            (320)
-#define NUM_OF_LTIDS            (320)
 
 /* Global PXP windows (GTT) */
 #define NUM_OF_GTT          19
@@ -172,6 +171,8 @@
 #define	CDU_CONTEXT_VALIDATION_CFG_USE_CID				(4)
 #define	CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE				(5)
 
+/*enabled, type A, use all */
+#define	CDU_CONTEXT_VALIDATION_DEFAULT_CFG				(0x3D)
 
 /*****************/
 /* DQ CONSTANTS  */
@@ -218,6 +219,7 @@
 #define DQ_XCM_TOE_TX_BD_PROD_CMD           DQ_XCM_AGG_VAL_SEL_WORD4
 #define DQ_XCM_TOE_MORE_TO_SEND_SEQ_CMD     DQ_XCM_AGG_VAL_SEL_REG3
 #define DQ_XCM_TOE_LOCAL_ADV_WND_SEQ_CMD    DQ_XCM_AGG_VAL_SEL_REG4
+#define DQ_XCM_ROCE_ACK_EDPM_DORQ_SEQ_CMD   DQ_XCM_AGG_VAL_SEL_WORD5
 
 /* UCM agg val selection (HW) */
 #define DQ_UCM_AGG_VAL_SEL_WORD0  0
@@ -292,6 +294,7 @@
 #define DQ_UCM_AGG_FLG_SHIFT_RULE1EN   7
 
 /* UCM agg counter flag selection (FW) */
+#define DQ_UCM_NVMF_NEW_CQE_CF_CMD          (1 << DQ_UCM_AGG_FLG_SHIFT_CF1)
 #define DQ_UCM_ETH_PMD_TX_ARM_CMD           (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
 #define DQ_UCM_ETH_PMD_RX_ARM_CMD           (1 << DQ_UCM_AGG_FLG_SHIFT_CF5)
 #define DQ_UCM_ROCE_CQ_ARM_SE_CF_CMD        (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
@@ -323,6 +326,9 @@
 /* PWM address mapping */
 #define DQ_PWM_OFFSET_DPM_BASE				0x0
 #define DQ_PWM_OFFSET_DPM_END				0x27
+#define DQ_PWM_OFFSET_XCM32_24ICID_BASE		0x28
+#define DQ_PWM_OFFSET_UCM32_24ICID_BASE		0x30
+#define DQ_PWM_OFFSET_TCM32_24ICID_BASE		0x38
 #define DQ_PWM_OFFSET_XCM16_BASE			0x40
 #define DQ_PWM_OFFSET_XCM32_BASE			0x44
 #define DQ_PWM_OFFSET_UCM16_BASE			0x48
@@ -342,6 +348,13 @@
 #define DQ_PWM_OFFSET_TCM_ROCE_RQ_PROD		(DQ_PWM_OFFSET_TCM16_BASE + 1)
 #define DQ_PWM_OFFSET_TCM_IWARP_RQ_PROD		(DQ_PWM_OFFSET_TCM16_BASE + 3)
 
+#define DQ_PWM_OFFSET_XCM_RDMA_24B_ICID_SQ_PROD \
+	(DQ_PWM_OFFSET_XCM32_24ICID_BASE + 2)
+#define DQ_PWM_OFFSET_UCM_RDMA_24B_ICID_CQ_CONS_32BIT \
+	(DQ_PWM_OFFSET_UCM32_24ICID_BASE + 4)
+#define DQ_PWM_OFFSET_TCM_ROCE_24B_ICID_RQ_PROD	\
+	(DQ_PWM_OFFSET_TCM32_24ICID_BASE + 1)
+
 #define DQ_REGION_SHIFT				        (12)
 
 /* DPM */
@@ -378,6 +391,10 @@
 /* number of global Vport/QCN rate limiters */
 #define MAX_QM_GLOBAL_RLS			256
 
+/* number of global rate limiters */
+#define MAX_QM_GLOBAL_RLS		256
+#define COMMON_MAX_QM_GLOBAL_RLS	(MAX_QM_GLOBAL_RLS)
+
 /* QM registers data */
 #define QM_LINE_CRD_REG_WIDTH		16
 #define QM_LINE_CRD_REG_SIGN_BIT	(1 << (QM_LINE_CRD_REG_WIDTH - 1))
@@ -431,9 +448,6 @@
 #define IGU_MEM_PBA_MSIX_RESERVED_UPPER		0x03ff
 
 #define IGU_CMD_INT_ACK_BASE			0x0400
-#define IGU_CMD_INT_ACK_UPPER			(IGU_CMD_INT_ACK_BASE + \
-						 MAX_TOT_SB_PER_PATH - \
-						 1)
 #define IGU_CMD_INT_ACK_RESERVED_UPPER		0x05ff
 
 #define IGU_CMD_ATTN_BIT_UPD_UPPER		0x05f0
@@ -446,9 +460,6 @@
 #define IGU_REG_SISR_MDPC_WOMASK_UPPER		0x05f6
 
 #define IGU_CMD_PROD_UPD_BASE			0x0600
-#define IGU_CMD_PROD_UPD_UPPER			(IGU_CMD_PROD_UPD_BASE + \
-						 MAX_TOT_SB_PER_PATH  - \
-						 1)
 #define IGU_CMD_PROD_UPD_RESERVED_UPPER		0x07ff
 
 /*****************/
@@ -701,6 +712,12 @@ struct common_queue_zone {
 	__le16 reserved;
 };
 
+struct nvmf_eqe_data {
+	__le16 icid /* The connection ID for which the EQE is written. */;
+	u8 reserved0[6] /* Alignment to line */;
+};
+
+
 /*
  * ETH Rx producers data
  */
@@ -770,6 +787,8 @@ enum protocol_type {
 	PROTOCOLID_PREROCE /* Pre (tapeout) RoCE */,
 	PROTOCOLID_COMMON /* ProtocolCommon */,
 	PROTOCOLID_TCP /* TCP */,
+	PROTOCOLID_RDMA /* RDMA */,
+	PROTOCOLID_SCSI /* SCSI */,
 	MAX_PROTOCOL_TYPE
 };
 
@@ -779,6 +798,36 @@ struct regpair {
 	__le32 hi /* high word for reg-pair */;
 };
 
+/*
+ * RoCE Destroy Event Data
+ */
+struct rdma_eqe_destroy_qp {
+	__le32 cid /* Dedicated field RoCE destroy QP event */;
+	u8 reserved[4];
+};
+
+/*
+ * RoCE Suspend Event Data
+ */
+struct rdma_eqe_suspend_qp {
+	__le32 cid /* Dedicated field RoCE Suspend QP event */;
+	u8 reserved[4];
+};
+
+/*
+ * RDMA Event Data Union
+ */
+union rdma_eqe_data {
+	struct regpair async_handle /* Host handle for the Async Completions */;
+	/* RoCE Destroy Event Data */
+	struct rdma_eqe_destroy_qp rdma_destroy_qp_data;
+	/* RoCE Suspend QP Event Data */
+	struct rdma_eqe_suspend_qp rdma_suspend_qp_data;
+};
+
+struct tstorm_queue_zone {
+	__le32 reserved[2];
+};
 
 
 /*
@@ -993,6 +1042,18 @@ struct db_pwm_addr {
 #define DB_PWM_ADDR_RESERVED1_SHIFT 28
 };
 
+/*
+ * Structure for doorbell address, in legacy mode, without DEMS
+ */
+struct db_legacy_wo_dems_addr {
+	__le32 addr;
+#define DB_LEGACY_WO_DEMS_ADDR_RESERVED0_MASK  0x3
+#define DB_LEGACY_WO_DEMS_ADDR_RESERVED0_SHIFT 0
+#define DB_LEGACY_WO_DEMS_ADDR_ICID_MASK       0x3FFFFFFF /* internal CID */
+#define DB_LEGACY_WO_DEMS_ADDR_ICID_SHIFT      2
+};
+
+
 /*
  * Parameters to RDMA firmware, passed in EDPM doorbell
  */
@@ -1025,6 +1086,43 @@ struct db_rdma_dpm_params {
 #define DB_RDMA_DPM_PARAMS_CONN_TYPE_IS_IWARP_SHIFT 31
 };
 
+/*
+ * Parameters to RDMA firmware, passed in EDPM doorbell
+ */
+struct db_rdma_24b_icid_dpm_params {
+	__le32 params;
+/* Size in QWORD-s of the DPM burst */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_SIZE_MASK                0x3F
+#define DB_RDMA_24B_ICID_DPM_PARAMS_SIZE_SHIFT               0
+/* Type of DPM transaction (DPM_RDMA) (use enum db_dpm_type) */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_DPM_TYPE_MASK            0x3
+#define DB_RDMA_24B_ICID_DPM_PARAMS_DPM_TYPE_SHIFT           6
+/* opcode for RDMA operation */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_OPCODE_MASK              0xFF
+#define DB_RDMA_24B_ICID_DPM_PARAMS_OPCODE_SHIFT             8
+/* ICID extension */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_ICID_EXT_MASK            0xFF
+#define DB_RDMA_24B_ICID_DPM_PARAMS_ICID_EXT_SHIFT           16
+/* Number of invalid bytes in last QWORD of the DPM transaction */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_INV_BYTE_CNT_MASK        0x7
+#define DB_RDMA_24B_ICID_DPM_PARAMS_INV_BYTE_CNT_SHIFT       24
+/* Flag indicating 24b icid mode is enabled */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_EXT_ICID_MODE_EN_MASK    0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_EXT_ICID_MODE_EN_SHIFT   27
+/* RoCE completion flag */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_COMPLETION_FLG_MASK      0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_COMPLETION_FLG_SHIFT     28
+/* RoCE S flag */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_S_FLG_MASK               0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_S_FLG_SHIFT              29
+#define DB_RDMA_24B_ICID_DPM_PARAMS_RESERVED1_MASK           0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_RESERVED1_SHIFT          30
+/* Connection type is iWARP */
+#define DB_RDMA_24B_ICID_DPM_PARAMS_CONN_TYPE_IS_IWARP_MASK  0x1
+#define DB_RDMA_24B_ICID_DPM_PARAMS_CONN_TYPE_IS_IWARP_SHIFT 31
+};
+
+
 /*
  * Structure for doorbell data, in RDMA DPM mode, for the first doorbell in a
  * DPM burst
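
The new db_legacy_wo_dems_addr layout above keeps bits [1:0] reserved and
places the 30-bit internal CID at bit 2. A small sketch (not driver code) of
composing that doorbell address dword from the mask/shift pair:

#include <stdint.h>

#define DB_LEGACY_WO_DEMS_ADDR_ICID_MASK  0x3FFFFFFF
#define DB_LEGACY_WO_DEMS_ADDR_ICID_SHIFT 2

static inline uint32_t db_legacy_wo_dems_addr(uint32_t icid)
{
	/* Reserved bits [1:0] stay zero; the ICID occupies bits [31:2]. */
	return (icid & DB_LEGACY_WO_DEMS_ADDR_ICID_MASK) <<
		DB_LEGACY_WO_DEMS_ADDR_ICID_SHIFT;
}

(The real structure stores this as a little-endian __le32; endianness handling
is omitted in the sketch.)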
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 524a1dd46..b1d8706c9 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -834,8 +834,8 @@ struct ecore_dev {
 	u8				cache_shift;
 
 	/* Init */
-	const struct iro		*iro_arr;
-	#define IRO (p_hwfn->p_dev->iro_arr)
+	const u32			*iro_arr;
+#define IRO	((const struct iro *)p_hwfn->p_dev->iro_arr)
 
 	/* HW functions */
 	u8				num_hwfns;
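
With this change iro_arr is carried as a raw array of u32 words produced by
the firmware tools, and the IRO macro simply reinterprets it as an array of
struct iro entries. A sketch of that cast (the struct iro layout shown here is
an assumption for illustration, not a copy of the HSI definition):

#include <stdint.h>

struct iro {
	uint32_t base; /* RAM offset of the item */
	uint16_t m1;   /* multipliers applied to runtime indices */
	uint16_t m2;
	uint16_t m3;
	uint16_t size; /* size of one element */
};

static uint32_t iro_base(const uint32_t *iro_arr, unsigned int id)
{
	/* Same idea as: #define IRO ((const struct iro *)p_dev->iro_arr) */
	const struct iro *iro = (const struct iro *)(const void *)iro_arr;

	return iro[id].base;
}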
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index bc5628c4e..0f04c9447 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -190,9 +190,7 @@ struct ecore_cxt_mngr {
 
 	/* Acquired CIDs */
 	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
-	/* TBD - do we want this allocated to reserve space? */
-	struct ecore_cid_acquired_map
-		acquired_vf[MAX_CONN_TYPES][COMMON_MAX_NUM_VFS];
+	struct ecore_cid_acquired_map *acquired_vf[MAX_CONN_TYPES];
 
 	/* ILT  shadow table */
 	struct ecore_dma_mem *ilt_shadow;
@@ -1040,8 +1038,8 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 
 static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 {
+	u32 type, vf, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 type, vf;
 
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
 		OSAL_FREE(p_hwfn->p_dev, p_mngr->acquired[type].cid_map);
@@ -1049,7 +1047,7 @@ static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 		p_mngr->acquired[type].max_count = 0;
 		p_mngr->acquired[type].start_cid = 0;
 
-		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+		for (vf = 0; vf < max_num_vfs; vf++) {
 			OSAL_FREE(p_hwfn->p_dev,
 				  p_mngr->acquired_vf[type][vf].cid_map);
 			p_mngr->acquired_vf[type][vf].cid_map = OSAL_NULL;
@@ -1087,6 +1085,7 @@ ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
 static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	u32 max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	u32 start_cid = 0, vf_start_cid = 0;
 	u32 type, vf;
 
@@ -1101,7 +1100,7 @@ static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
 			goto cid_map_fail;
 
 		/* Handle VF maps */
-		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+		for (vf = 0; vf < max_num_vfs; vf++) {
 			p_map = &p_mngr->acquired_vf[type][vf];
 			if (ecore_cid_map_alloc_single(p_hwfn, type,
 						       vf_start_cid,
@@ -1236,10 +1235,10 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	u32 len, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	struct ecore_cid_acquired_map *p_map;
 	struct ecore_conn_type_cfg *p_cfg;
 	int type;
-	u32 len;
 
 	/* Reset acquired cids */
 	for (type = 0; type < MAX_CONN_TYPES; type++) {
@@ -1257,7 +1256,7 @@ void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
 		if (!p_cfg->cids_per_vf)
 			continue;
 
-		for (vf = 0; vf < COMMON_MAX_NUM_VFS; vf++) {
+		for (vf = 0; vf < max_num_vfs; vf++) {
 			p_map = &p_mngr->acquired_vf[type][vf];
 			len = DIV_ROUND_UP(p_map->max_count,
 					   BITS_PER_MAP_WORD) *
@@ -1818,16 +1817,16 @@ enum _ecore_status_t _ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
 					    enum protocol_type type,
 					    u32 *p_cid, u8 vfid)
 {
+	u32 rel_cid, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	struct ecore_cid_acquired_map *p_map;
-	u32 rel_cid;
 
 	if (type >= MAX_CONN_TYPES) {
 		DP_NOTICE(p_hwfn, true, "Invalid protocol type %d", type);
 		return ECORE_INVAL;
 	}
 
-	if (vfid >= COMMON_MAX_NUM_VFS && vfid != ECORE_CXT_PF_CID) {
+	if (vfid >= max_num_vfs && vfid != ECORE_CXT_PF_CID) {
 		DP_NOTICE(p_hwfn, true, "VF [%02x] is out of range\n", vfid);
 		return ECORE_INVAL;
 	}
@@ -1913,12 +1912,12 @@ static bool ecore_cxt_test_cid_acquired(struct ecore_hwfn *p_hwfn,
 
 void _ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid, u8 vfid)
 {
+	u32 rel_cid, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
 	struct ecore_cid_acquired_map *p_map = OSAL_NULL;
 	enum protocol_type type;
 	bool b_acquired;
-	u32 rel_cid;
 
-	if (vfid != ECORE_CXT_PF_CID && vfid > COMMON_MAX_NUM_VFS) {
+	if (vfid != ECORE_CXT_PF_CID && vfid > max_num_vfs) {
 		DP_NOTICE(p_hwfn, true,
 			  "Trying to return incorrect CID belonging to VF %02x\n",
 			  vfid);
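
The context-manager change above drops the compile-time
acquired_vf[MAX_CONN_TYPES][COMMON_MAX_NUM_VFS] matrix in favour of per-type
arrays sized by the runtime VF count, and the vfid bound checks now use
NUM_OF_VFS() as well. A plain-C sketch of the resulting shape (illustrative
only, not the patch's OSAL-based allocation code):

#include <stdlib.h>

struct cid_acquired_map_ex {            /* mirrors ecore_cid_acquired_map */
	unsigned int start_cid;
	unsigned int max_count;
	unsigned long *cid_map;         /* bitmap of acquired CIDs */
};

/* One dynamically sized array of per-VF maps for a single connection type. */
static struct cid_acquired_map_ex *alloc_vf_maps(unsigned int num_vfs)
{
	return calloc(num_vfs, sizeof(struct cid_acquired_map_ex));
}

static struct cid_acquired_map_ex *
get_vf_map(struct cid_acquired_map_ex *acquired_vf_type,
	   unsigned int vfid, unsigned int num_vfs)
{
	if (vfid >= num_vfs)            /* runtime bound, not a constant */
		return NULL;
	return &acquired_vf_type[vfid];
}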
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2c135afd2..2a11b4d29 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1843,7 +1843,7 @@ static void ecore_init_qm_vport_params(struct ecore_hwfn *p_hwfn)
 
 	/* all vports participate in weighted fair queueing */
 	for (i = 0; i < ecore_init_qm_get_num_vports(p_hwfn); i++)
-		qm_info->qm_vport_params[i].vport_wfq = 1;
+		qm_info->qm_vport_params[i].wfq = 1;
 }
 
 /* initialize qm port params */
@@ -2236,11 +2236,8 @@ static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
 	/* vport table */
 	for (i = 0; i < qm_info->num_vports; i++) {
 		vport = &qm_info->qm_vport_params[i];
-		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-			   "vport idx %d, vport_rl %d, wfq %d,"
-			   " first_tx_pq_id [ ",
-			   qm_info->start_vport + i, vport->vport_rl,
-			   vport->vport_wfq);
+		DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "vport idx %d, wfq %d, first_tx_pq_id [ ",
+			   qm_info->start_vport + i, vport->wfq);
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
 			DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "%d ",
 				   vport->first_tx_pq_id[tc]);
@@ -2866,7 +2863,7 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	ecore_init_cau_rt_data(p_dev);
 
 	/* Program GTT windows */
-	ecore_gtt_init(p_hwfn, p_ptt);
+	ecore_gtt_init(p_hwfn);
 
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_EMUL(p_dev)) {
@@ -6248,7 +6245,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
 
 /* Calculate final WFQ values for all vports and configure it.
  * After this configuration each vport must have
- * approx min rate =  vport_wfq * min_pf_rate / ECORE_WFQ_UNIT
+ * approx min rate =  wfq * min_pf_rate / ECORE_WFQ_UNIT
  */
 static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt,
@@ -6262,11 +6259,11 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 	for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
 		u32 wfq_speed = p_hwfn->qm_info.wfq_data[i].min_speed;
 
-		vport_params[i].vport_wfq = (wfq_speed * ECORE_WFQ_UNIT) /
+		vport_params[i].wfq = (wfq_speed * ECORE_WFQ_UNIT) /
 		    min_pf_rate;
 		ecore_init_vport_wfq(p_hwfn, p_ptt,
 				     vport_params[i].first_tx_pq_id,
-				     vport_params[i].vport_wfq);
+				     vport_params[i].wfq);
 	}
 }
 
@@ -6275,7 +6272,7 @@ static void ecore_init_wfq_default_param(struct ecore_hwfn *p_hwfn)
 	int i;
 
 	for (i = 0; i < p_hwfn->qm_info.num_vports; i++)
-		p_hwfn->qm_info.qm_vport_params[i].vport_wfq = 1;
+		p_hwfn->qm_info.qm_vport_params[i].wfq = 1;
 }
 
 static void ecore_disable_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
@@ -6290,7 +6287,7 @@ static void ecore_disable_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
 		ecore_init_wfq_default_param(p_hwfn);
 		ecore_init_vport_wfq(p_hwfn, p_ptt,
 				     vport_params[i].first_tx_pq_id,
-				     vport_params[i].vport_wfq);
+				     vport_params[i].wfq);
 	}
 }
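
Since the vport minimum-rate comment and code above define the relation both
ways, a short worked example helps; it assumes ECORE_WFQ_UNIT == 100 purely
for illustration (the real constant lives in the base driver):

#include <stdio.h>

#define ECORE_WFQ_UNIT 100 /* assumed value, for the example only */

int main(void)
{
	unsigned int min_pf_rate = 10000; /* PF minimum rate, e.g. Mb/s */
	unsigned int wfq_speed = 2500;    /* requested vport minimum rate */

	/* As in ecore_configure_wfq_for_all_vports() */
	unsigned int wfq = (wfq_speed * ECORE_WFQ_UNIT) / min_pf_rate;

	/* Reverse direction, from the comment:
	 * approx min rate = wfq * min_pf_rate / ECORE_WFQ_UNIT
	 */
	printf("wfq = %u, approx min rate = %u\n",
	       wfq, wfq * min_pf_rate / ECORE_WFQ_UNIT);
	return 0;
}

With these numbers wfq works out to 25 and the approximated minimum rate back
to 2500, i.e. a quarter of the PF rate.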
 
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index 8c8fed4e7..f5b11eb28 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -8,43 +8,53 @@
 #define GTT_REG_ADDR_H
 
 /* Win 2 */
-/* Access:RW   DataWidth:0x20    */
+//Access:RW   DataWidth:0x20   //
 #define GTT_BAR0_MAP_REG_IGU_CMD                                      0x00f000UL
 
 /* Win 3 */
-/* Access:RW   DataWidth:0x20    */
+//Access:RW   DataWidth:0x20   //
 #define GTT_BAR0_MAP_REG_TSDM_RAM                                     0x010000UL
 
 /* Win 4 */
-/* Access:RW   DataWidth:0x20    */
+//Access:RW   DataWidth:0x20   //
 #define GTT_BAR0_MAP_REG_MSDM_RAM                                     0x011000UL
 
 /* Win 5 */
-/* Access:RW   DataWidth:0x20    */
+//Access:RW   DataWidth:0x20   //
 #define GTT_BAR0_MAP_REG_MSDM_RAM_1024                                0x012000UL
 
 /* Win 6 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_USDM_RAM                                     0x013000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_MSDM_RAM_2048                                0x013000UL
 
 /* Win 7 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x014000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_USDM_RAM                                     0x014000UL
 
 /* Win 8 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x015000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_USDM_RAM_1024                                0x015000UL
 
 /* Win 9 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x016000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_USDM_RAM_2048                                0x016000UL
 
 /* Win 10 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x017000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_XSDM_RAM                                     0x017000UL
 
 /* Win 11 */
-/* Access:RW   DataWidth:0x20    */
-#define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x018000UL
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_XSDM_RAM_1024                                0x018000UL
+
+/* Win 12 */
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_YSDM_RAM                                     0x019000UL
+
+/* Win 13 */
+//Access:RW   DataWidth:0x20   //
+#define GTT_BAR0_MAP_REG_PSDM_RAM                                     0x01a000UL
+
+/* Win 14 */
 
 #endif
diff --git a/drivers/net/qede/base/ecore_gtt_values.h b/drivers/net/qede/base/ecore_gtt_values.h
index adc20c0ce..2035bed5c 100644
--- a/drivers/net/qede/base/ecore_gtt_values.h
+++ b/drivers/net/qede/base/ecore_gtt_values.h
@@ -13,15 +13,15 @@ static u32 pxp_global_win[] = {
 	0x1c80, /* win 3: addr=0x1c80000, size=4096 bytes */
 	0x1d00, /* win 4: addr=0x1d00000, size=4096 bytes */
 	0x1d01, /* win 5: addr=0x1d01000, size=4096 bytes */
-	0x1d80, /* win 6: addr=0x1d80000, size=4096 bytes */
-	0x1d81, /* win 7: addr=0x1d81000, size=4096 bytes */
-	0x1d82, /* win 8: addr=0x1d82000, size=4096 bytes */
-	0x1e00, /* win 9: addr=0x1e00000, size=4096 bytes */
-	0x1e80, /* win 10: addr=0x1e80000, size=4096 bytes */
-	0x1f00, /* win 11: addr=0x1f00000, size=4096 bytes */
-	0,
-	0,
-	0,
+	0x1d02, /* win 6: addr=0x1d02000, size=4096 bytes */
+	0x1d80, /* win 7: addr=0x1d80000, size=4096 bytes */
+	0x1d81, /* win 8: addr=0x1d81000, size=4096 bytes */
+	0x1d82, /* win 9: addr=0x1d82000, size=4096 bytes */
+	0x1e00, /* win 10: addr=0x1e00000, size=4096 bytes */
+	0x1e01, /* win 11: addr=0x1e01000, size=4096 bytes */
+	0x1e80, /* win 12: addr=0x1e80000, size=4096 bytes */
+	0x1f00, /* win 13: addr=0x1f00000, size=4096 bytes */
+	0x1c08, /* win 14: addr=0x1c08000, size=4096 bytes */
 	0,
 	0,
 	0,
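
Each pxp_global_win[] entry above encodes the 4 KB-aligned internal address
that the corresponding BAR0 window points at, i.e. address = value << 12,
which is exactly what the per-window comments show (0x1d02 -> 0x1d02000). A
one-line check:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t win6 = 0x1d02;                /* new window 6 value above */
	printf("win 6 -> 0x%x\n", win6 << 12); /* prints 0x1d02000 */
	return 0;
}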
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 8fa200033..23cfcdeff 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -57,7 +57,7 @@ struct ystorm_core_conn_st_ctx {
  * The core storm context for the Pstorm
  */
 struct pstorm_core_conn_st_ctx {
-	__le32 reserved[4];
+	__le32 reserved[20];
 };
 
 /*
@@ -75,7 +75,7 @@ struct xstorm_core_conn_st_ctx {
 
 struct xstorm_core_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
-	u8 core_state /* state */;
+	u8 state /* state */;
 	u8 flags0;
 #define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK         0x1 /* exist_in_qm0 */
 #define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT        0
@@ -516,13 +516,20 @@ struct ustorm_core_conn_ag_ctx {
  * The core storm context for the Mstorm
  */
 struct mstorm_core_conn_st_ctx {
-	__le32 reserved[24];
+	__le32 reserved[40];
 };
 
 /*
  * The core storm context for the Ustorm
  */
 struct ustorm_core_conn_st_ctx {
+	__le32 reserved[20];
+};
+
+/*
+ * The core storm context for the Tstorm
+ */
+struct tstorm_core_conn_st_ctx {
 	__le32 reserved[4];
 };
 
@@ -549,6 +556,9 @@ struct core_conn_context {
 /* ustorm storm context */
 	struct ustorm_core_conn_st_ctx ustorm_st_context;
 	struct regpair ustorm_st_padding[2] /* padding */;
+/* tstorm storm context */
+	struct tstorm_core_conn_st_ctx tstorm_st_context;
+	struct regpair tstorm_st_padding[2] /* padding */;
 };
 
 
@@ -573,6 +583,7 @@ enum core_event_opcode {
 	CORE_EVENT_RX_QUEUE_STOP,
 	CORE_EVENT_RX_QUEUE_FLUSH,
 	CORE_EVENT_TX_QUEUE_UPDATE,
+	CORE_EVENT_QUEUE_STATS_QUERY,
 	MAX_CORE_EVENT_OPCODE
 };
 
@@ -601,7 +612,7 @@ struct core_ll2_port_stats {
 
 
 /*
- * Ethernet TX Per Queue Stats
+ * LL2 TX Per Queue Stats
  */
 struct core_ll2_pstorm_per_queue_stat {
 /* number of total bytes sent without errors */
@@ -616,16 +627,8 @@ struct core_ll2_pstorm_per_queue_stat {
 	struct regpair sent_mcast_pkts;
 /* number of total packets sent without errors */
 	struct regpair sent_bcast_pkts;
-};
-
-
-/*
- * Light-L2 RX Producers in Tstorm RAM
- */
-struct core_ll2_rx_prod {
-	__le16 bd_prod /* BD Producer */;
-	__le16 cqe_prod /* CQE Producer */;
-	__le32 reserved;
+/* number of total packets dropped due to errors */
+	struct regpair error_drop_pkts;
 };
 
 
@@ -636,7 +639,6 @@ struct core_ll2_tstorm_per_queue_stat {
 	struct regpair no_buff_discard;
 };
 
-
 struct core_ll2_ustorm_per_queue_stat {
 	struct regpair rcv_ucast_bytes;
 	struct regpair rcv_mcast_bytes;
@@ -647,6 +649,59 @@ struct core_ll2_ustorm_per_queue_stat {
 };
 
 
+/*
+ * Light-L2 RX Producers
+ */
+struct core_ll2_rx_prod {
+	__le16 bd_prod /* BD Producer */;
+	__le16 cqe_prod /* CQE Producer */;
+};
+
+
+
+struct core_ll2_tx_per_queue_stat {
+/* PSTORM per queue statistics */
+	struct core_ll2_pstorm_per_queue_stat pstorm_stat;
+};
+
+
+
+/*
+ * Structure for doorbell data, in PWM mode, for RX producers update.
+ */
+struct core_pwm_prod_update_data {
+	__le16 icid /* internal CID */;
+	u8 reserved0;
+	u8 params;
+/* aggregative command. Set DB_AGG_CMD_SET for producer update
+ * (use enum db_agg_cmd_sel)
+ */
+#define CORE_PWM_PROD_UPDATE_DATA_AGG_CMD_MASK    0x3
+#define CORE_PWM_PROD_UPDATE_DATA_AGG_CMD_SHIFT   0
+#define CORE_PWM_PROD_UPDATE_DATA_RESERVED1_MASK  0x3F /* Set 0. */
+#define CORE_PWM_PROD_UPDATE_DATA_RESERVED1_SHIFT 2
+	struct core_ll2_rx_prod prod /* Producers. */;
+};
+
+
+/*
+ * Ramrod data for rx/tx queue statistics query ramrod
+ */
+struct core_queue_stats_query_ramrod_data {
+	u8 rx_stat /* If set, collect RX queue statistics. */;
+	u8 tx_stat /* If set, collect TX queue statistics. */;
+	__le16 reserved[3];
+/* Address of RX statistics buffer. A core_ll2_rx_per_queue_stat struct will
+ * be written to this address.
+ */
+	struct regpair rx_stat_addr;
+/* Address of TX statistics buffer. A core_ll2_tx_per_queue_stat struct will
+ * be written to this address.
+ */
+	struct regpair tx_stat_addr;
+};
+
+
 /*
  * Core Ramrod Command IDs (light L2)
  */
@@ -658,6 +713,7 @@ enum core_ramrod_cmd_id {
 	CORE_RAMROD_TX_QUEUE_STOP /* TX Queue Stop Ramrod */,
 	CORE_RAMROD_RX_QUEUE_FLUSH /* RX Flush queue Ramrod */,
 	CORE_RAMROD_TX_QUEUE_UPDATE /* TX Queue Update Ramrod */,
+	CORE_RAMROD_QUEUE_STATS_QUERY /* Queue Statistics Query Ramrod */,
 	MAX_CORE_RAMROD_CMD_ID
 };
 
@@ -772,7 +828,8 @@ struct core_rx_gsi_offload_cqe {
 /* These are the lower 16 bit of QP id in RoCE BTH header */
 	__le16 qp_id;
 	__le32 src_qp /* Source QP from DETH header */;
-	__le32 reserved[3];
+	struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
+	__le32 reserved;
 };
 
 /*
@@ -803,24 +860,21 @@ union core_rx_cqe_union {
  * Ramrod data for rx queue start ramrod
  */
 struct core_rx_start_ramrod_data {
-	struct regpair bd_base /* bd address of the first bd page */;
+	struct regpair bd_base /* Address of the first BD page */;
 	struct regpair cqe_pbl_addr /* Base address on host of CQE PBL */;
-	__le16 mtu /* Maximum transmission unit */;
+	__le16 mtu /* MTU */;
 	__le16 sb_id /* Status block ID */;
-	u8 sb_index /* index of the protocol index */;
-	u8 complete_cqe_flg /* post completion to the CQE ring if set */;
-	u8 complete_event_flg /* post completion to the event ring if set */;
-	u8 drop_ttl0_flg /* drop packet with ttl0 if set */;
-	__le16 num_of_pbl_pages /* Num of pages in CQE PBL */;
-/* if set, 802.1q tags will be removed and copied to CQE */
-/* if set, 802.1q tags will be removed and copied to CQE */
+	u8 sb_index /* Status block index */;
+	u8 complete_cqe_flg /* if set - post completion to the CQE ring */;
+	u8 complete_event_flg /* if set - post completion to the event ring */;
+	u8 drop_ttl0_flg /* if set - drop packet with ttl=0 */;
+	__le16 num_of_pbl_pages /* Number of pages in CQE PBL */;
+/* if set - 802.1q tag will be removed and copied to CQE */
 	u8 inner_vlan_stripping_en;
-/* if set and inner vlan does not exist, the outer vlan will copied to CQE as
- * inner vlan. should be used in MF_OVLAN mode only.
- */
-	u8 report_outer_vlan;
+/* if set - outer tag won't be stripped, valid only in MF OVLAN mode. */
+	u8 outer_vlan_stripping_dis;
 	u8 queue_id /* Light L2 RX Queue ID */;
-	u8 main_func_queue /* Is this the main queue for the PF */;
+	u8 main_func_queue /* Set if this is the main PF's LL2 queue */;
 /* Duplicate broadcast packets to LL2 main queue in mf_si mode. Valid if
  * main_func_queue is set.
  */
@@ -829,17 +883,21 @@ struct core_rx_start_ramrod_data {
  * main_func_queue is set.
  */
 	u8 mf_si_mcast_accept_all;
-/* Specifies how ll2 should deal with packets errors: packet_too_big and
- * no_buff
+/* If set, the inner vlan (802.1q tag) priority that is written to cqe will be
+ * zeroed out, used for TenantDcb
  */
+/* Specifies how ll2 should deal with RX packet errors */
 	struct core_rx_action_on_error action_on_error;
-/* set when in GSI offload mode on ROCE connection */
-	u8 gsi_offload_flag;
+	u8 gsi_offload_flag /* set for GSI offload mode */;
+/* If set, queue is subject to RX VFC classification. */
+	u8 vport_id_valid;
+	u8 vport_id /* Queue VPORT for RX VFC classification. */;
+	u8 zero_prod_flg /* If set, zero RX producers. */;
 /* If set, the inner vlan (802.1q tag) priority that is written to cqe will be
  * zero out, used for TenantDcb
  */
 	u8 wipe_inner_vlan_pri_en;
-	u8 reserved[5];
+	u8 reserved[2];
 };
 
 
@@ -959,13 +1017,14 @@ struct core_tx_start_ramrod_data {
 	u8 conn_type /* connection type that loaded ll2 */;
 	__le16 pbl_size /* Number of BD pages pointed by PBL */;
 	__le16 qm_pq_id /* QM PQ ID */;
-/* set when in GSI offload mode on ROCE connection */
-	u8 gsi_offload_flag;
+	u8 gsi_offload_flag /* set for GSI offload mode */;
+	u8 ctx_stats_en /* Context statistics enable */;
+/* If set, queue is part of VPORT and subject to TX switching. */
+	u8 vport_id_valid;
 /* vport id of the current connection, used to access non_rdma_in_to_in_pri_map
  * which is per vport
  */
 	u8 vport_id;
-	u8 resrved[2];
 };
 
 
@@ -1048,12 +1107,23 @@ struct eth_pstorm_per_pf_stat {
 	struct regpair sent_gre_bytes /* Sent GRE bytes */;
 	struct regpair sent_vxlan_bytes /* Sent VXLAN bytes */;
 	struct regpair sent_geneve_bytes /* Sent GENEVE bytes */;
-	struct regpair sent_gre_pkts /* Sent GRE packets */;
+	struct regpair sent_mpls_bytes /* Sent MPLS bytes */;
+	struct regpair sent_gre_mpls_bytes /* Sent GRE MPLS bytes (E5 Only) */;
+	struct regpair sent_udp_mpls_bytes /* Sent UDP MPLS bytes (E5 Only) */;
+	struct regpair sent_gre_pkts /* Sent GRE packets (E5 Only) */;
 	struct regpair sent_vxlan_pkts /* Sent VXLAN packets */;
 	struct regpair sent_geneve_pkts /* Sent GENEVE packets */;
+	struct regpair sent_mpls_pkts /* Sent MPLS packets (E5 Only) */;
+	struct regpair sent_gre_mpls_pkts /* Sent GRE MPLS packets (E5 Only) */;
+	struct regpair sent_udp_mpls_pkts /* Sent UDP MPLS packets (E5 Only) */;
 	struct regpair gre_drop_pkts /* Dropped GRE TX packets */;
 	struct regpair vxlan_drop_pkts /* Dropped VXLAN TX packets */;
 	struct regpair geneve_drop_pkts /* Dropped GENEVE TX packets */;
+	struct regpair mpls_drop_pkts /* Dropped MPLS TX packets (E5 Only) */;
+/* Dropped GRE MPLS TX packets (E5 Only) */
+	struct regpair gre_mpls_drop_pkts;
+/* Dropped UDP MPLS TX packets (E5 Only) */
+	struct regpair udp_mpls_drop_pkts;
 };
 
 
@@ -1176,6 +1246,8 @@ union event_ring_data {
 	struct iscsi_eqe_data iscsi_info /* Dedicated fields to iscsi data */;
 /* Dedicated fields to iscsi connect done results */
 	struct iscsi_connect_done_results iscsi_conn_done_info;
+	union rdma_eqe_data rdma_data /* Dedicated field for RDMA data */;
+	struct nvmf_eqe_data nvmf_data /* Dedicated field for NVMf data */;
 	struct malicious_vf_eqe_data malicious_vf /* Malicious VF data */;
 /* VF Initial Cleanup data */
 	struct initial_cleanup_eqe_data vf_init_cleanup;
@@ -1187,10 +1259,14 @@ union event_ring_data {
  */
 struct event_ring_entry {
 	u8 protocol_id /* Event Protocol ID (use enum protocol_type) */;
-	u8 opcode /* Event Opcode */;
-	__le16 reserved0 /* Reserved */;
+	u8 opcode /* Event Opcode (Per Protocol Type) */;
+	u8 reserved0 /* Reserved */;
+	u8 vfId /* vfId for this event, 0xFF if this is a PF event */;
 	__le16 echo /* Echo value from ramrod data on the host */;
-	u8 fw_return_code /* FW return code for SP ramrods */;
+/* FW return code for SP ramrods. Use (according to protocol) eth_return_code,
+ * or rdma_fw_return_code, or fcoe_completion_status
+ */
+	u8 fw_return_code;
 	u8 flags;
 /* 0: synchronous EQE - a completion of SP message. 1: asynchronous EQE */
 #define EVENT_RING_ENTRY_ASYNC_MASK      0x1
@@ -1320,6 +1396,22 @@ enum malicious_vf_error_id {
 	ETH_TUNN_IPV6_EXT_NBD_ERR,
 	ETH_CONTROL_PACKET_VIOLATION /* VF sent control frame such as PFC */,
 	ETH_ANTI_SPOOFING_ERR /* Anti-Spoofing verification failure */,
+/* packet scanned is too large (can be 9700 at most) */
+	ETH_PACKET_SIZE_TOO_LARGE,
+/* TX packet marked for VLAN insertion when that is illegal */
+	CORE_ILLEGAL_VLAN_MODE,
+/* indicated number of BDs for the packet is illegal */
+	CORE_ILLEGAL_NBDS,
+	CORE_FIRST_BD_WO_SOP /* 1st BD must have start_bd flag set */,
+/* There are not enough BDs for transmission of even one packet */
+	CORE_INSUFFICIENT_BDS,
+/* TX packet is shorter than reported on BDs or than the minimal size */
+	CORE_PACKET_TOO_SMALL,
+	CORE_ILLEGAL_INBAND_TAGS /* TX packet has illegal inband tags marked */,
+	CORE_VLAN_INSERT_AND_INBAND_VLAN /* VLAN can't be added to inband tag */,
+	CORE_MTU_VIOLATION /* TX packet is greater than MTU */,
+	CORE_CONTROL_PACKET_VIOLATION /* VF sent control frame such as PFC */,
+	CORE_ANTI_SPOOFING_ERR /* Anti-Spoofing verification failure */,
 	MAX_MALICIOUS_VF_ERROR_ID
 };
 
@@ -1837,6 +1929,23 @@ enum vf_zone_size_mode {
 
 
 
+/*
+ * Xstorm non-triggering VF zone
+ */
+struct xstorm_non_trigger_vf_zone {
+	struct regpair non_edpm_ack_pkts /* RoCE received statistics */;
+};
+
+
+/*
+ * Xstorm VF zone
+ */
+struct xstorm_vf_zone {
+/* non-interrupt-triggering zone */
+	struct xstorm_non_trigger_vf_zone non_trigger;
+};
+
+
 
 /*
  * Attentions status block
@@ -2205,6 +2314,44 @@ struct igu_msix_vector {
 };
 
 
+struct mstorm_core_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 byte1 /* state */;
+	u8 flags0;
+#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK     0x1 /* exist_in_qm0 */
+#define MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT    0
+#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK     0x1 /* exist_in_qm1 */
+#define MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT    1
+#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK      0x3 /* cf0 */
+#define MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT     2
+#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK      0x3 /* cf1 */
+#define MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT     4
+#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK      0x3 /* cf2 */
+#define MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT     6
+	u8 flags1;
+#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK    0x1 /* cf0en */
+#define MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT   0
+#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK    0x1 /* cf1en */
+#define MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT   1
+#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK    0x1 /* cf2en */
+#define MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT   2
+#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK  0x1 /* rule0en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
+#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK  0x1 /* rule1en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
+#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK  0x1 /* rule2en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
+#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK  0x1 /* rule3en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
+#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK  0x1 /* rule4en */
+#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
+	__le16 word0 /* word0 */;
+	__le16 word1 /* word1 */;
+	__le32 reg0 /* reg0 */;
+	__le32 reg1 /* reg1 */;
+};
+
+
 /*
  * per encapsulation type enabling flags
  */
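
The new core_queue_stats_query_ramrod_data above carries host buffer
addresses as regpair lo/hi words. A sketch (not the driver's code) of filling
it; the 64-bit split is shown with plain shifts, whereas the driver would use
its own CPU-to-LE helpers:

#include <stdint.h>

struct regpair_ex { uint32_t lo, hi; };   /* mirrors struct regpair */

struct stats_query_ex {                   /* mirrors the ramrod data */
	uint8_t rx_stat;
	uint8_t tx_stat;
	uint16_t reserved[3];
	struct regpair_ex rx_stat_addr;
	struct regpair_ex tx_stat_addr;
};

static void fill_stats_query(struct stats_query_ex *q,
			     uint64_t rx_phys, uint64_t tx_phys)
{
	q->rx_stat = 1;                   /* collect RX queue statistics */
	q->tx_stat = 1;                   /* collect TX queue statistics */
	q->rx_stat_addr.lo = (uint32_t)rx_phys;
	q->rx_stat_addr.hi = (uint32_t)(rx_phys >> 32);
	q->tx_stat_addr.lo = (uint32_t)tx_phys;
	q->tx_stat_addr.hi = (uint32_t)(tx_phys >> 32);
}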
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
index 085af0a3d..a959aeea7 100644
--- a/drivers/net/qede/base/ecore_hsi_debug_tools.h
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -11,98 +11,6 @@
 /****************************************/
 
 
-enum block_addr {
-	GRCBASE_GRC = 0x50000,
-	GRCBASE_MISCS = 0x9000,
-	GRCBASE_MISC = 0x8000,
-	GRCBASE_DBU = 0xa000,
-	GRCBASE_PGLUE_B = 0x2a8000,
-	GRCBASE_CNIG = 0x218000,
-	GRCBASE_CPMU = 0x30000,
-	GRCBASE_NCSI = 0x40000,
-	GRCBASE_OPTE = 0x53000,
-	GRCBASE_BMB = 0x540000,
-	GRCBASE_PCIE = 0x54000,
-	GRCBASE_MCP = 0xe00000,
-	GRCBASE_MCP2 = 0x52000,
-	GRCBASE_PSWHST = 0x2a0000,
-	GRCBASE_PSWHST2 = 0x29e000,
-	GRCBASE_PSWRD = 0x29c000,
-	GRCBASE_PSWRD2 = 0x29d000,
-	GRCBASE_PSWWR = 0x29a000,
-	GRCBASE_PSWWR2 = 0x29b000,
-	GRCBASE_PSWRQ = 0x280000,
-	GRCBASE_PSWRQ2 = 0x240000,
-	GRCBASE_PGLCS = 0x0,
-	GRCBASE_DMAE = 0xc000,
-	GRCBASE_PTU = 0x560000,
-	GRCBASE_TCM = 0x1180000,
-	GRCBASE_MCM = 0x1200000,
-	GRCBASE_UCM = 0x1280000,
-	GRCBASE_XCM = 0x1000000,
-	GRCBASE_YCM = 0x1080000,
-	GRCBASE_PCM = 0x1100000,
-	GRCBASE_QM = 0x2f0000,
-	GRCBASE_TM = 0x2c0000,
-	GRCBASE_DORQ = 0x100000,
-	GRCBASE_BRB = 0x340000,
-	GRCBASE_SRC = 0x238000,
-	GRCBASE_PRS = 0x1f0000,
-	GRCBASE_TSDM = 0xfb0000,
-	GRCBASE_MSDM = 0xfc0000,
-	GRCBASE_USDM = 0xfd0000,
-	GRCBASE_XSDM = 0xf80000,
-	GRCBASE_YSDM = 0xf90000,
-	GRCBASE_PSDM = 0xfa0000,
-	GRCBASE_TSEM = 0x1700000,
-	GRCBASE_MSEM = 0x1800000,
-	GRCBASE_USEM = 0x1900000,
-	GRCBASE_XSEM = 0x1400000,
-	GRCBASE_YSEM = 0x1500000,
-	GRCBASE_PSEM = 0x1600000,
-	GRCBASE_RSS = 0x238800,
-	GRCBASE_TMLD = 0x4d0000,
-	GRCBASE_MULD = 0x4e0000,
-	GRCBASE_YULD = 0x4c8000,
-	GRCBASE_XYLD = 0x4c0000,
-	GRCBASE_PTLD = 0x590000,
-	GRCBASE_YPLD = 0x5b0000,
-	GRCBASE_PRM = 0x230000,
-	GRCBASE_PBF_PB1 = 0xda0000,
-	GRCBASE_PBF_PB2 = 0xda4000,
-	GRCBASE_RPB = 0x23c000,
-	GRCBASE_BTB = 0xdb0000,
-	GRCBASE_PBF = 0xd80000,
-	GRCBASE_RDIF = 0x300000,
-	GRCBASE_TDIF = 0x310000,
-	GRCBASE_CDU = 0x580000,
-	GRCBASE_CCFC = 0x2e0000,
-	GRCBASE_TCFC = 0x2d0000,
-	GRCBASE_IGU = 0x180000,
-	GRCBASE_CAU = 0x1c0000,
-	GRCBASE_RGFS = 0xf00000,
-	GRCBASE_RGSRC = 0x320000,
-	GRCBASE_TGFS = 0xd00000,
-	GRCBASE_TGSRC = 0x322000,
-	GRCBASE_UMAC = 0x51000,
-	GRCBASE_XMAC = 0x210000,
-	GRCBASE_DBG = 0x10000,
-	GRCBASE_NIG = 0x500000,
-	GRCBASE_WOL = 0x600000,
-	GRCBASE_BMBN = 0x610000,
-	GRCBASE_IPC = 0x20000,
-	GRCBASE_NWM = 0x800000,
-	GRCBASE_NWS = 0x700000,
-	GRCBASE_MS = 0x6a0000,
-	GRCBASE_PHY_PCIE = 0x620000,
-	GRCBASE_LED = 0x6b8000,
-	GRCBASE_AVS_WRAP = 0x6b0000,
-	GRCBASE_MISC_AEU = 0x8000,
-	GRCBASE_BAR0_MAP = 0x1c00000,
-	MAX_BLOCK_ADDR
-};
-
-
 enum block_id {
 	BLOCK_GRC,
 	BLOCK_MISCS,
@@ -157,8 +65,6 @@ enum block_id {
 	BLOCK_MULD,
 	BLOCK_YULD,
 	BLOCK_XYLD,
-	BLOCK_PTLD,
-	BLOCK_YPLD,
 	BLOCK_PRM,
 	BLOCK_PBF_PB1,
 	BLOCK_PBF_PB2,
@@ -172,12 +78,9 @@ enum block_id {
 	BLOCK_TCFC,
 	BLOCK_IGU,
 	BLOCK_CAU,
-	BLOCK_RGFS,
-	BLOCK_RGSRC,
-	BLOCK_TGFS,
-	BLOCK_TGSRC,
 	BLOCK_UMAC,
 	BLOCK_XMAC,
+	BLOCK_MSTAT,
 	BLOCK_DBG,
 	BLOCK_NIG,
 	BLOCK_WOL,
@@ -189,8 +92,18 @@ enum block_id {
 	BLOCK_PHY_PCIE,
 	BLOCK_LED,
 	BLOCK_AVS_WRAP,
-	BLOCK_MISC_AEU,
+	BLOCK_PXPREQBUS,
 	BLOCK_BAR0_MAP,
+	BLOCK_MCP_FIO,
+	BLOCK_LAST_INIT,
+	BLOCK_PRS_FC,
+	BLOCK_PBF_FC,
+	BLOCK_NIG_LB_FC,
+	BLOCK_NIG_LB_FC_PLLH,
+	BLOCK_NIG_TX_FC_PLLH,
+	BLOCK_NIG_TX_FC,
+	BLOCK_NIG_RX_FC_PLLH,
+	BLOCK_NIG_RX_FC,
 	MAX_BLOCK_ID
 };
 
@@ -210,10 +123,13 @@ enum bin_dbg_buffer_type {
 	BIN_BUF_DBG_ATTN_REGS /* Attention registers */,
 	BIN_BUF_DBG_ATTN_INDEXES /* Attention indexes */,
 	BIN_BUF_DBG_ATTN_NAME_OFFSETS /* Attention name offsets */,
-	BIN_BUF_DBG_BUS_BLOCKS /* Debug Bus blocks */,
-	BIN_BUF_DBG_BUS_LINES /* Debug Bus lines */,
-	BIN_BUF_DBG_BUS_BLOCKS_USER_DATA /* Debug Bus blocks user data */,
+	BIN_BUF_DBG_BLOCKS /* Blocks debug data */,
+	BIN_BUF_DBG_BLOCKS_CHIP_DATA /* Blocks debug chip data */,
+	BIN_BUF_DBG_BUS_LINES /* Blocks debug bus lines */,
+	BIN_BUF_DBG_BLOCKS_USER_DATA /* Blocks debug user data */,
+	BIN_BUF_DBG_BLOCKS_CHIP_USER_DATA /* Blocks debug chip user data */,
 	BIN_BUF_DBG_BUS_LINE_NAME_OFFSETS /* Debug Bus line name offsets */,
+	BIN_BUF_DBG_RESET_REGS /* Reset registers */,
 	BIN_BUF_DBG_PARSING_STRINGS /* Debug Tools parsing strings */,
 	MAX_BIN_DBG_BUFFER_TYPE
 };
@@ -358,24 +274,95 @@ enum dbg_attn_type {
 
 
 /*
- * Debug Bus block data
+ * Block debug data
  */
-struct dbg_bus_block {
-/* Number of debug lines in this block (excluding signature & latency events) */
-	u8 num_of_lines;
-/* Indicates if this block has a latency events debug line (0/1). */
-	u8 has_latency_events;
-/* Offset of this blocks lines in the Debug Bus lines array. */
-	u16 lines_offset;
+struct dbg_block {
+	u8 name[15] /* Block name */;
+/* The letter (char) of the associated Storm, or 0 if no associated Storm. */
+	u8 associated_storm_letter;
+};
+
+
+/*
+ * Chip-specific block debug data
+ */
+struct dbg_block_chip {
+	u8 flags;
+/* Indicates if the block is removed in this chip (0/1). */
+#define DBG_BLOCK_CHIP_IS_REMOVED_MASK           0x1
+#define DBG_BLOCK_CHIP_IS_REMOVED_SHIFT          0
+/* Indicates if this block has a reset register (0/1). */
+#define DBG_BLOCK_CHIP_HAS_RESET_REG_MASK        0x1
+#define DBG_BLOCK_CHIP_HAS_RESET_REG_SHIFT       1
+/* Indicates if this block should be taken out of reset before GRC Dump (0/1).
+ * Valid only if has_reset_reg is set.
+ */
+#define DBG_BLOCK_CHIP_UNRESET_BEFORE_DUMP_MASK  0x1
+#define DBG_BLOCK_CHIP_UNRESET_BEFORE_DUMP_SHIFT 2
+/* Indicates if this block has a debug bus (0/1). */
+#define DBG_BLOCK_CHIP_HAS_DBG_BUS_MASK          0x1
+#define DBG_BLOCK_CHIP_HAS_DBG_BUS_SHIFT         3
+/* Indicates if this block has a latency events debug line (0/1). Valid only
+ * if has_dbg_bus is set.
+ */
+#define DBG_BLOCK_CHIP_HAS_LATENCY_EVENTS_MASK   0x1
+#define DBG_BLOCK_CHIP_HAS_LATENCY_EVENTS_SHIFT  4
+#define DBG_BLOCK_CHIP_RESERVED0_MASK            0x7
+#define DBG_BLOCK_CHIP_RESERVED0_SHIFT           5
+/* The DBG block client ID of this block/chip. Valid only if has_dbg_bus is
+ * set.
+ */
+	u8 dbg_client_id;
+/* The ID of the reset register of this block/chip in the dbg_reset_reg
+ * array.
+ */
+	u8 reset_reg_id;
+/* The bit offset of this block/chip in the reset register. Valid only if
+ * has_reset_reg is set.
+ */
+	u8 reset_reg_bit_offset;
+	struct dbg_mode_hdr dbg_bus_mode /* Mode header */;
+	u16 reserved1;
+	u8 reserved2;
+/* Number of Debug Bus lines in this block/chip (excluding signature and latency
+ * events). Valid only if has_dbg_bus is set.
+ */
+	u8 num_of_dbg_bus_lines;
+/* Offset of this block/chip Debug Bus lines in the Debug Bus lines array. Valid
+ * only if has_dbg_bus is set.
+ */
+	u16 dbg_bus_lines_offset;
+/* GRC address of the Debug Bus dbg_select register (in dwords). Valid only if
+ * has_dbg_bus is set.
+ */
+	u32 dbg_select_reg_addr;
+/* GRC address of the Debug Bus dbg_dword_enable register (in dwords). Valid
+ * only if has_dbg_bus is set.
+ */
+	u32 dbg_dword_enable_reg_addr;
+/* GRC address of the Debug Bus dbg_shift register (in dwords). Valid only if
+ * has_dbg_bus is set.
+ */
+	u32 dbg_shift_reg_addr;
+/* GRC address of the Debug Bus dbg_force_valid register (in dwords). Valid only
+ * if has_dbg_bus is set.
+ */
+	u32 dbg_force_valid_reg_addr;
+/* GRC address of the Debug Bus dbg_force_frame register (in dwords). Valid only
+ * if has_dbg_bus is set.
+ */
+	u32 dbg_force_frame_reg_addr;
 };
 
 
 /*
- * Debug Bus block user data
+ * Chip-specific block user debug data
+ */
+struct dbg_block_chip_user {
+/* Number of debug bus lines in this block (excluding signature and latency
+ * events).
  */
-struct dbg_bus_block_user_data {
-/* Number of debug lines in this block (excluding signature & latency events) */
-	u8 num_of_lines;
+	u8 num_of_dbg_bus_lines;
 /* Indicates if this block has a latency events debug line (0/1). */
 	u8 has_latency_events;
 /* Offset of this blocks lines in the debug bus line name offsets array. */
@@ -383,6 +370,14 @@ struct dbg_bus_block_user_data {
 };
 
 
+/*
+ * Block user debug data
+ */
+struct dbg_block_user {
+	u8 name[16] /* Block name */;
+};
+
+
 /*
  * Block Debug line data
  */
@@ -603,51 +598,42 @@ enum dbg_idle_chk_severity_types {
 
 
 /*
- * Debug Bus block data
+ * Reset register
  */
-struct dbg_bus_block_data {
-	__le16 data;
-/* 4-bit value: bit i set -> dword/qword i is enabled. */
-#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_MASK       0xF
-#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_SHIFT      0
-/* Number of dwords/qwords to shift right the debug data (0-3) */
-#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_MASK       0xF
-#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_SHIFT      4
-/* 4-bit value: bit i set -> dword/qword i is forced valid. */
-#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_MASK  0xF
-#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_SHIFT 8
-/* 4-bit value: bit i set -> dword/qword i frame bit is forced. */
-#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_MASK  0xF
-#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_SHIFT 12
-	u8 line_num /* Debug line number to select */;
-	u8 hw_id /* HW ID associated with the block */;
+struct dbg_reset_reg {
+	u32 data;
+#define DBG_RESET_REG_ADDR_MASK        0xFFFFFF /* GRC address (in dwords) */
+#define DBG_RESET_REG_ADDR_SHIFT       0
+/* indicates if this register is removed (0/1). */
+#define DBG_RESET_REG_IS_REMOVED_MASK  0x1
+#define DBG_RESET_REG_IS_REMOVED_SHIFT 24
+#define DBG_RESET_REG_RESERVED_MASK    0x7F
+#define DBG_RESET_REG_RESERVED_SHIFT   25
 };
 
 
 /*
- * Debug Bus Clients
- */
-enum dbg_bus_clients {
-	DBG_BUS_CLIENT_RBCN,
-	DBG_BUS_CLIENT_RBCP,
-	DBG_BUS_CLIENT_RBCR,
-	DBG_BUS_CLIENT_RBCT,
-	DBG_BUS_CLIENT_RBCU,
-	DBG_BUS_CLIENT_RBCF,
-	DBG_BUS_CLIENT_RBCX,
-	DBG_BUS_CLIENT_RBCS,
-	DBG_BUS_CLIENT_RBCH,
-	DBG_BUS_CLIENT_RBCZ,
-	DBG_BUS_CLIENT_OTHER_ENGINE,
-	DBG_BUS_CLIENT_TIMESTAMP,
-	DBG_BUS_CLIENT_CPU,
-	DBG_BUS_CLIENT_RBCY,
-	DBG_BUS_CLIENT_RBCQ,
-	DBG_BUS_CLIENT_RBCM,
-	DBG_BUS_CLIENT_RBCB,
-	DBG_BUS_CLIENT_RBCW,
-	DBG_BUS_CLIENT_RBCV,
-	MAX_DBG_BUS_CLIENTS
+ * Debug Bus block data
+ */
+struct dbg_bus_block_data {
+/* 4 bit value, bit i set -> dword/qword i is enabled in block. */
+	u8 enable_mask;
+/* Number of dwords/qwords to cyclically right-shift the block's output (0-3). */
+	u8 right_shift;
+/* 4 bit value, bit i set -> dword/qword i is forced valid in block. */
+	u8 force_valid_mask;
+/* 4 bit value, bit i set -> dword/qword i frame bit is forced in block. */
+	u8 force_frame_mask;
+/* bit i set -> dword i contains this block's data (after shifting). */
+	u8 dword_mask;
+	u8 line_num /* Debug line number to select */;
+	u8 hw_id /* HW ID associated with the block */;
+	u8 flags;
+/* 0/1. If 1, the debug line is 256b, otherwise it is 128b. */
+#define DBG_BUS_BLOCK_DATA_IS_256B_LINE_MASK  0x1
+#define DBG_BUS_BLOCK_DATA_IS_256B_LINE_SHIFT 0
+#define DBG_BUS_BLOCK_DATA_RESERVED_MASK      0x7F
+#define DBG_BUS_BLOCK_DATA_RESERVED_SHIFT     1
 };
 
 
@@ -673,15 +659,19 @@ enum dbg_bus_constraint_ops {
  * Debug Bus trigger state data
  */
 struct dbg_bus_trigger_state_data {
-	u8 data;
-/* 4-bit value: bit i set -> dword i of the trigger state block
- * (after right shift) is enabled.
- */
-#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_MASK  0xF
-#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_SHIFT 0
-/* 4-bit value: bit i set -> dword i is compared by a constraint */
-#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_MASK      0xF
-#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_SHIFT     4
+/* Message length (in cycles) to be used for message-based trigger constraints.
+ * If set to 0, message length is based only on frame bit received from HW.
+ */
+	u8 msg_len;
+/* A bit for each dword in the debug bus cycle, indicating if this dword appears
+ * in a trigger constraint (1) or not (0)
+ */
+	u8 constraint_dword_mask;
+/* Storm ID to trigger on. Valid only when triggering on Storm data.
+ * (use enum dbg_storms)
+ */
+	u8 storm_id;
+	u8 reserved;
 };
 
 /*
@@ -751,11 +741,7 @@ struct dbg_bus_storm_data {
 struct dbg_bus_data {
 	u32 app_version /* The tools version number of the application */;
 	u8 state /* The current debug bus state */;
-	u8 hw_dwords /* HW dwords per cycle */;
-/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
- * HW ID of dword/qword i
- */
-	u16 hw_id_mask;
+	u8 mode_256b_en /* Indicates if the 256 bit mode is enabled */;
 	u8 num_enabled_blocks /* Number of blocks enabled for recording */;
 	u8 num_enabled_storms /* Number of Storms enabled for recording */;
 	u8 target /* Output target */;
@@ -777,102 +763,46 @@ struct dbg_bus_data {
  * Valid only if both filter and trigger are enabled (0/1)
  */
 	u8 filter_post_trigger;
-	u16 reserved;
 /* Indicates if the recording trigger is enabled (0/1) */
 	u8 trigger_en;
-/* trigger states data */
-	struct dbg_bus_trigger_state_data trigger_states[3];
+/* A bit for each dword in the debug bus cycle, indicating if this dword
+ * appears in a filter constraint (1) or not (0)
+ */
+	u8 filter_constraint_dword_mask;
 	u8 next_trigger_state /* ID of next trigger state to be added */;
 /* ID of next filter/trigger constraint to be added */
 	u8 next_constraint_id;
-/* If true, all inputs are associated with HW ID 0. Otherwise, each input is
- * assigned a different HW ID (0/1)
+/* trigger states data */
+	struct dbg_bus_trigger_state_data trigger_states[3];
+/* Message length (in cycles) to be used for message-based filter constraints.
+ * If set to 0, message length is based only on frame bit received from HW.
  */
-	u8 unify_inputs;
+	u8 filter_msg_len;
 /* Indicates if the other engine sends it NW recording to this engine (0/1) */
 	u8 rcv_from_other_engine;
+/* A bit for each dword in the debug bus cycle, indicating if this dword is
+ * recorded (1) or not (0)
+ */
+	u8 blocks_dword_mask;
+/* Indicates if there are dwords in the debug bus cycle which are recorded
+ * by more than one block (0/1)
+ */
+	u8 blocks_dword_overlap;
+/* The HW IDs of the recorded HW blocks, where bits i*3..i*3+2 contain the
+ * HW ID of dword/qword i
+ */
+	u32 hw_id_mask;
 /* Debug Bus PCI buffer data. Valid only when the target is
  * DBG_BUS_TARGET_ID_PCI.
  */
 	struct dbg_bus_pci_buf_data pci_buf;
 /* Debug Bus data for each block */
-	struct dbg_bus_block_data blocks[88];
+	struct dbg_bus_block_data blocks[132];
 /* Debug Bus data for each block */
 	struct dbg_bus_storm_data storms[6];
 };
 
 
-/*
- * Debug bus filter types
- */
-enum dbg_bus_filter_types {
-	DBG_BUS_FILTER_TYPE_OFF /* filter always off */,
-	DBG_BUS_FILTER_TYPE_PRE /* filter before trigger only */,
-	DBG_BUS_FILTER_TYPE_POST /* filter after trigger only */,
-	DBG_BUS_FILTER_TYPE_ON /* filter always on */,
-	MAX_DBG_BUS_FILTER_TYPES
-};
-
-
-/*
- * Debug bus frame modes
- */
-enum dbg_bus_frame_modes {
-	DBG_BUS_FRAME_MODE_0HW_4ST = 0 /* 0 HW dwords, 4 Storm dwords */,
-	DBG_BUS_FRAME_MODE_4HW_0ST = 3 /* 4 HW dwords, 0 Storm dwords */,
-	DBG_BUS_FRAME_MODE_8HW_0ST = 4 /* 8 HW dwords, 0 Storm dwords */,
-	MAX_DBG_BUS_FRAME_MODES
-};
-
-
-/*
- * Debug bus other engine mode
- */
-enum dbg_bus_other_engine_modes {
-	DBG_BUS_OTHER_ENGINE_MODE_NONE,
-	DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_TX,
-	DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_RX,
-	DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_TX,
-	DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_RX,
-	MAX_DBG_BUS_OTHER_ENGINE_MODES
-};
-
-
-
-/*
- * Debug bus post-trigger recording types
- */
-enum dbg_bus_post_trigger_types {
-	DBG_BUS_POST_TRIGGER_RECORD /* start recording after trigger */,
-	DBG_BUS_POST_TRIGGER_DROP /* drop data after trigger */,
-	MAX_DBG_BUS_POST_TRIGGER_TYPES
-};
-
-
-/*
- * Debug bus pre-trigger recording types
- */
-enum dbg_bus_pre_trigger_types {
-	DBG_BUS_PRE_TRIGGER_START_FROM_ZERO /* start recording from time 0 */,
-/* start recording some chunks before trigger */
-	DBG_BUS_PRE_TRIGGER_NUM_CHUNKS,
-	DBG_BUS_PRE_TRIGGER_DROP /* drop data before trigger */,
-	MAX_DBG_BUS_PRE_TRIGGER_TYPES
-};
-
-
-/*
- * Debug bus SEMI frame modes
- */
-enum dbg_bus_semi_frame_modes {
-/* 0 slow dwords, 4 fast dwords */
-	DBG_BUS_SEMI_FRAME_MODE_0SLOW_4FAST = 0,
-/* 4 slow dwords, 0 fast dwords */
-	DBG_BUS_SEMI_FRAME_MODE_4SLOW_0FAST = 3,
-	MAX_DBG_BUS_SEMI_FRAME_MODES
-};
-
-
 /*
  * Debug bus states
  */
@@ -901,6 +831,8 @@ enum dbg_bus_storm_modes {
 	DBG_BUS_STORM_MODE_LD_ST_ADDR /* load/store address (fast debug) */,
 	DBG_BUS_STORM_MODE_DRA_FSM /* DRA state machines (fast debug) */,
 	DBG_BUS_STORM_MODE_RH /* recording handlers (fast debug) */,
+/* recording handlers with store messages (fast debug) */
+	DBG_BUS_STORM_MODE_RH_WITH_STORE,
 	DBG_BUS_STORM_MODE_FOC /* FOC: FIN + DRA Rd (slow debug) */,
 	DBG_BUS_STORM_MODE_EXT_STORE /* FOC: External Store (slow) */,
 	MAX_DBG_BUS_STORM_MODES
@@ -955,14 +887,13 @@ enum dbg_grc_params {
 	DBG_GRC_PARAM_DUMP_CAU /* dump CAU memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_QM /* dump QM memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_MCP /* dump MCP memories (0/1) */,
-/* MCP Trace meta data size in bytes */
-	DBG_GRC_PARAM_MCP_TRACE_META_SIZE,
+	DBG_GRC_PARAM_DUMP_DORQ /* dump DORQ memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_CFC /* dump CFC memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_IGU /* dump IGU memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_BRB /* dump BRB memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_BTB /* dump BTB memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_BMB /* dump BMB memories (0/1) */,
-	DBG_GRC_PARAM_DUMP_NIG /* dump NIG memories (0/1) */,
+	DBG_GRC_PARAM_RESERVD1 /* reserved */,
 	DBG_GRC_PARAM_DUMP_MULD /* dump MULD memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_PRS /* dump PRS memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_DMAE /* dump PRS memories (0/1) */,
@@ -971,8 +902,9 @@ enum dbg_grc_params {
 	DBG_GRC_PARAM_DUMP_DIF /* dump DIF memories (0/1) */,
 	DBG_GRC_PARAM_DUMP_STATIC /* dump static debug data (0/1) */,
 	DBG_GRC_PARAM_UNSTALL /* un-stall Storms after dump (0/1) */,
-	DBG_GRC_PARAM_NUM_LCIDS /* number of LCIDs (0..320) */,
-	DBG_GRC_PARAM_NUM_LTIDS /* number of LTIDs (0..320) */,
+	DBG_GRC_PARAM_RESERVED2 /* reserved */,
+/* MCP Trace meta data size in bytes */
+	DBG_GRC_PARAM_MCP_TRACE_META_SIZE,
 /* preset: exclude all memories from dump (1 only) */
 	DBG_GRC_PARAM_EXCLUDE_ALL,
 /* preset: include memories for crash dump (1 only) */
@@ -983,26 +915,12 @@ enum dbg_grc_params {
 	DBG_GRC_PARAM_DUMP_PHY /* dump PHY memories (0/1) */,
 	DBG_GRC_PARAM_NO_MCP /* dont perform MCP commands (0/1) */,
 	DBG_GRC_PARAM_NO_FW_VER /* dont read FW/MFW version (0/1) */,
+	DBG_GRC_PARAM_RESERVED3 /* reserved */,
+	DBG_GRC_PARAM_DUMP_MCP_HW_DUMP /* dump MCP HW Dump (0/1) */,
 	MAX_DBG_GRC_PARAMS
 };
 
 
-/*
- * Debug reset registers
- */
-enum dbg_reset_regs {
-	DBG_RESET_REG_MISCS_PL_UA,
-	DBG_RESET_REG_MISCS_PL_HV,
-	DBG_RESET_REG_MISCS_PL_HV_2,
-	DBG_RESET_REG_MISC_PL_UA,
-	DBG_RESET_REG_MISC_PL_HV,
-	DBG_RESET_REG_MISC_PL_PDA_VMAIN_1,
-	DBG_RESET_REG_MISC_PL_PDA_VMAIN_2,
-	DBG_RESET_REG_MISC_PL_PDA_VAUX,
-	MAX_DBG_RESET_REGS
-};
-
-
 /*
  * Debug status codes
  */
@@ -1016,15 +934,15 @@ enum dbg_status {
 	DBG_STATUS_INVALID_PCI_BUF_SIZE,
 	DBG_STATUS_PCI_BUF_ALLOC_FAILED,
 	DBG_STATUS_PCI_BUF_NOT_ALLOCATED,
-	DBG_STATUS_TOO_MANY_INPUTS,
-	DBG_STATUS_INPUT_OVERLAP,
-	DBG_STATUS_HW_ONLY_RECORDING,
+	DBG_STATUS_INVALID_FILTER_TRIGGER_DWORDS,
+	DBG_STATUS_NO_MATCHING_FRAMING_MODE,
+	DBG_STATUS_VFC_READ_ERROR,
 	DBG_STATUS_STORM_ALREADY_ENABLED,
 	DBG_STATUS_STORM_NOT_ENABLED,
 	DBG_STATUS_BLOCK_ALREADY_ENABLED,
 	DBG_STATUS_BLOCK_NOT_ENABLED,
 	DBG_STATUS_NO_INPUT_ENABLED,
-	DBG_STATUS_NO_FILTER_TRIGGER_64B,
+	DBG_STATUS_NO_FILTER_TRIGGER_256B,
 	DBG_STATUS_FILTER_ALREADY_ENABLED,
 	DBG_STATUS_TRIGGER_ALREADY_ENABLED,
 	DBG_STATUS_TRIGGER_NOT_ENABLED,
@@ -1049,7 +967,7 @@ enum dbg_status {
 	DBG_STATUS_MCP_TRACE_NO_META,
 	DBG_STATUS_MCP_COULD_NOT_HALT,
 	DBG_STATUS_MCP_COULD_NOT_RESUME,
-	DBG_STATUS_RESERVED2,
+	DBG_STATUS_RESERVED0,
 	DBG_STATUS_SEMI_FIFO_NOT_EMPTY,
 	DBG_STATUS_IGU_FIFO_BAD_DATA,
 	DBG_STATUS_MCP_COULD_NOT_MASK_PRTY,
@@ -1057,10 +975,15 @@ enum dbg_status {
 	DBG_STATUS_REG_FIFO_BAD_DATA,
 	DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA,
 	DBG_STATUS_DBG_ARRAY_NOT_SET,
-	DBG_STATUS_FILTER_BUG,
+	DBG_STATUS_RESERVED1,
 	DBG_STATUS_NON_MATCHING_LINES,
-	DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET,
+	DBG_STATUS_INSUFFICIENT_HW_IDS,
 	DBG_STATUS_DBG_BUS_IN_USE,
+	DBG_STATUS_INVALID_STORM_DBG_MODE,
+	DBG_STATUS_OTHER_ENGINE_BB_ONLY,
+	DBG_STATUS_FILTER_SINGLE_HW_ID,
+	DBG_STATUS_TRIGGER_SINGLE_HW_ID,
+	DBG_STATUS_MISSING_TRIGGER_STATE_STORM,
 	MAX_DBG_STATUS
 };
 
@@ -1108,9 +1031,9 @@ struct dbg_tools_data {
 	struct idle_chk_data idle_chk /* Idle Check data */;
 	u8 mode_enable[40] /* Indicates if a mode is enabled (0/1) */;
 /* Indicates if a block is in reset state (0/1) */
-	u8 block_in_reset[88];
+	u8 block_in_reset[132];
 	u8 chip_id /* Chip ID (from enum chip_ids) */;
-	u8 platform_id /* Platform ID */;
+	u8 hw_type /* HW Type */;
 	u8 num_ports /* Number of ports in the chip */;
 	u8 num_pfs_per_port /* Number of PFs in each port */;
 	u8 num_vfs /* Number of VFs in the chip */;
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index b1cab2910..bd7bd8658 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -34,7 +34,7 @@ struct xstorm_eth_conn_st_ctx {
 
 struct xstorm_eth_conn_ag_ctx {
 	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
+	u8 state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
 #define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK            0x1
@@ -303,57 +303,6 @@ struct xstorm_eth_conn_ag_ctx {
 	__le16 word15 /* word15 */;
 };
 
-/*
- * The eth storm context for the Ystorm
- */
-struct ystorm_eth_conn_st_ctx {
-	__le32 reserved[8];
-};
-
-struct ystorm_eth_conn_ag_ctx {
-	u8 byte0 /* cdu_validation */;
-	u8 state /* state */;
-	u8 flags0;
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1 /* exist_in_qm0 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1 /* exist_in_qm1 */
-#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
-#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
-#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
-	u8 flags1;
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1 /* cf0en */
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1 /* cf1en */
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1 /* cf2en */
-#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1 /* rule0en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1 /* rule1en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1 /* rule2en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1 /* rule3en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1 /* rule4en */
-#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
-	u8 tx_q0_int_coallecing_timeset /* byte2 */;
-	u8 byte3 /* byte3 */;
-	__le16 word0 /* word0 */;
-	__le32 terminate_spqe /* reg0 */;
-	__le32 reg1 /* reg1 */;
-	__le16 tx_bd_cons_upd /* word1 */;
-	__le16 word2 /* word2 */;
-	__le16 word3 /* word3 */;
-	__le16 word4 /* word4 */;
-	__le32 reg2 /* reg2 */;
-	__le32 reg3 /* reg3 */;
-};
-
 struct tstorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
@@ -458,6 +407,57 @@ struct tstorm_eth_conn_ag_ctx {
 	__le32 reg10 /* reg10 */;
 };
 
+/*
+ * The eth storm context for the Ystorm
+ */
+struct ystorm_eth_conn_st_ctx {
+	__le32 reserved[8];
+};
+
+struct ystorm_eth_conn_ag_ctx {
+	u8 byte0 /* cdu_validation */;
+	u8 state /* state */;
+	u8 flags0;
+#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK                  0x1 /* exist_in_qm0 */
+#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT                 0
+#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK                  0x1 /* exist_in_qm1 */
+#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT                 1
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK     0x3 /* cf0 */
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT    2
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK      0x3 /* cf1 */
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT     4
+#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK                   0x3 /* cf2 */
+#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT                  6
+	u8 flags1;
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK  0x1 /* cf0en */
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK   0x1 /* cf1en */
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT  1
+#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK                 0x1 /* cf2en */
+#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT                2
+#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK               0x1 /* rule0en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT              3
+#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK               0x1 /* rule1en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT              4
+#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK               0x1 /* rule2en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT              5
+#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK               0x1 /* rule3en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT              6
+#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK               0x1 /* rule4en */
+#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT              7
+	u8 tx_q0_int_coallecing_timeset /* byte2 */;
+	u8 byte3 /* byte3 */;
+	__le16 word0 /* word0 */;
+	__le32 terminate_spqe /* reg0 */;
+	__le32 reg1 /* reg1 */;
+	__le16 tx_bd_cons_upd /* word1 */;
+	__le16 word2 /* word2 */;
+	__le16 word3 /* word3 */;
+	__le16 word4 /* word4 */;
+	__le32 reg2 /* reg2 */;
+	__le32 reg3 /* reg3 */;
+};
+
 struct ustorm_eth_conn_ag_ctx {
 	u8 byte0 /* cdu_validation */;
 	u8 byte1 /* state */;
@@ -557,12 +557,12 @@ struct eth_conn_context {
 	struct xstorm_eth_conn_st_ctx xstorm_st_context;
 /* xstorm aggregative context */
 	struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
+/* tstorm aggregative context */
+	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ystorm storm context */
 	struct ystorm_eth_conn_st_ctx ystorm_st_context;
 /* ystorm aggregative context */
 	struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
-/* tstorm aggregative context */
-	struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
 /* ustorm aggregative context */
 	struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
 /* ustorm storm context */
@@ -792,16 +792,34 @@ enum eth_ramrod_cmd_id {
 struct eth_return_code {
 	u8 value;
 /* error code (use enum eth_error_code) */
-#define ETH_RETURN_CODE_ERR_CODE_MASK  0x1F
+#define ETH_RETURN_CODE_ERR_CODE_MASK  0x3F
 #define ETH_RETURN_CODE_ERR_CODE_SHIFT 0
-#define ETH_RETURN_CODE_RESERVED_MASK  0x3
-#define ETH_RETURN_CODE_RESERVED_SHIFT 5
+#define ETH_RETURN_CODE_RESERVED_MASK  0x1
+#define ETH_RETURN_CODE_RESERVED_SHIFT 6
 /* rx path - 0, tx path - 1 */
 #define ETH_RETURN_CODE_RX_TX_MASK     0x1
 #define ETH_RETURN_CODE_RX_TX_SHIFT    7
 };
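For reference only, a minimal sketch of decoding the widened return-code byte;
the helper name is hypothetical and only the masks/shifts above come from the
HSI:

/* Hypothetical helper: extract the error code and Rx/Tx direction from an
 * eth_return_code byte using the masks/shifts defined above.
 */
static void qede_decode_return_code(u8 value, u8 *err_code, u8 *is_tx)
{
	*err_code = (value >> ETH_RETURN_CODE_ERR_CODE_SHIFT) &
		    ETH_RETURN_CODE_ERR_CODE_MASK;
	*is_tx = (value >> ETH_RETURN_CODE_RX_TX_SHIFT) &
		 ETH_RETURN_CODE_RX_TX_MASK;
}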
 
 
+/*
+ * tx destination enum
+ */
+enum eth_tx_dst_mode_config_enum {
+/* tx destination configuration override is disabled */
+	ETH_TX_DST_MODE_CONFIG_DISABLE,
+/* tx destination configuration override is enabled, vport and tx dst will be
+ * taken from the 4th bd
+ */
+	ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD,
+/* tx destination configuration override is enabled, vport and tx dst will be
+ * taken from vport data
+ */
+	ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_VPORT,
+	MAX_ETH_TX_DST_MODE_CONFIG_ENUM
+};
+
+
 /*
  * What to do in case an error occurs
  */
@@ -1431,7 +1449,7 @@ struct vport_update_ramrod_data {
 
 struct E4XstormEthConnAgCtxDqExtLdPart {
 	u8 reserved0 /* cdu_validation */;
-	u8 eth_state /* state */;
+	u8 state /* state */;
 	u8 flags0;
 /* exist_in_qm0 */
 #define E4XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK            0x1
diff --git a/drivers/net/qede/base/ecore_hsi_init_func.h b/drivers/net/qede/base/ecore_hsi_init_func.h
index d77edaa1d..7efe2eff1 100644
--- a/drivers/net/qede/base/ecore_hsi_init_func.h
+++ b/drivers/net/qede/base/ecore_hsi_init_func.h
@@ -88,7 +88,18 @@ struct init_nig_pri_tc_map_req {
 
 
 /*
- * QM per-port init parameters
+ * QM per global RL init parameters
+ */
+struct init_qm_global_rl_params {
+/* Rate limit in Mb/sec units. If set to zero, the link speed is used
+ * instead.
+ */
+	u32 rate_limit;
+};
+
+
+/*
+ * QM per port init parameters
  */
 struct init_qm_port_params {
 	u8 active /* Indicates if this port is active */;
@@ -111,24 +122,20 @@ struct init_qm_pq_params {
 	u8 wrr_group /* WRR group */;
 /* Indicates if a rate limiter should be allocated for the PQ (0/1) */
 	u8 rl_valid;
+	u16 rl_id /* RL ID, valid only if rl_valid is true */;
 	u8 port_id /* Port ID */;
-	u8 reserved0;
-	u16 reserved1;
+	u8 reserved;
 };
 
 
 /*
- * QM per-vport init parameters
+ * QM per VPORT init parameters
  */
 struct init_qm_vport_params {
-/* rate limit in Mb/sec units. a value of 0 means dont configure. ignored if
- * VPORT RL is globally disabled.
- */
-	u32 vport_rl;
 /* WFQ weight. A value of 0 means dont configure. ignored if VPORT WFQ is
  * globally disabled.
  */
-	u16 vport_wfq;
+	u16 wfq;
 /* the first Tx PQ ID associated with this VPORT for each TC. */
 	u16 first_tx_pq_id[NUM_OF_TCS];
 };
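As a hedged illustration of the reworked QM parameters (values are arbitrary;
only the field names come from the structures above, and the wrapper function
is hypothetical):

/* Illustrative only: attach PQ 0 to global rate limiter 5 at 10 Gb/s and give
 * the VPORT a WFQ weight. Per the comments above, rate_limit == 0 falls back
 * to the link speed and wfq == 0 leaves WFQ unconfigured.
 */
static void qm_params_example(struct init_qm_global_rl_params *rl_params,
			      struct init_qm_pq_params *pq_params,
			      struct init_qm_vport_params *vport_params)
{
	rl_params[5].rate_limit = 10000;	/* Mb/sec */
	pq_params[0].rl_valid = 1;
	pq_params[0].rl_id = 5;
	vport_params[0].wfq = 100;
}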
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
index 1fe4bfc61..4f878d061 100644
--- a/drivers/net/qede/base/ecore_hsi_init_tool.h
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -46,10 +46,24 @@ enum bin_init_buffer_type {
 	BIN_BUF_INIT_VAL /* init data */,
 	BIN_BUF_INIT_MODE_TREE /* init modes tree */,
 	BIN_BUF_INIT_IRO /* internal RAM offsets */,
+	BIN_BUF_INIT_OVERLAYS /* FW overlays (except overlay 0) */,
 	MAX_BIN_INIT_BUFFER_TYPE
 };
 
 
+/*
+ * FW overlay buffer header
+ */
+struct fw_overlay_buf_hdr {
+	u32 data;
+#define FW_OVERLAY_BUF_HDR_STORM_ID_MASK  0xFF /* Storm ID */
+#define FW_OVERLAY_BUF_HDR_STORM_ID_SHIFT 0
+/* Size of Storm FW overlay buffer in dwords */
+#define FW_OVERLAY_BUF_HDR_BUF_SIZE_MASK  0xFFFFFF
+#define FW_OVERLAY_BUF_HDR_BUF_SIZE_SHIFT 8
+};
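For illustration only (the helper below is not part of the patch), the packed
header dword splits into its two fields with the masks and shifts declared
above:

/* Hypothetical decode of a fw_overlay_buf_hdr dword: storm ID in the low byte,
 * buffer size in dwords in the upper 24 bits.
 */
static void fw_overlay_hdr_decode(u32 data, u8 *storm_id, u32 *buf_size)
{
	*storm_id = (data >> FW_OVERLAY_BUF_HDR_STORM_ID_SHIFT) &
		    FW_OVERLAY_BUF_HDR_STORM_ID_MASK;
	*buf_size = (data >> FW_OVERLAY_BUF_HDR_BUF_SIZE_SHIFT) &
		    FW_OVERLAY_BUF_HDR_BUF_SIZE_MASK;
}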
+
+
 /*
  * init array header: raw
  */
@@ -117,6 +131,30 @@ union init_array_hdr {
 };
 
 
+enum dbg_bus_clients {
+	DBG_BUS_CLIENT_RBCN,
+	DBG_BUS_CLIENT_RBCP,
+	DBG_BUS_CLIENT_RBCR,
+	DBG_BUS_CLIENT_RBCT,
+	DBG_BUS_CLIENT_RBCU,
+	DBG_BUS_CLIENT_RBCF,
+	DBG_BUS_CLIENT_RBCX,
+	DBG_BUS_CLIENT_RBCS,
+	DBG_BUS_CLIENT_RBCH,
+	DBG_BUS_CLIENT_RBCZ,
+	DBG_BUS_CLIENT_OTHER_ENGINE,
+	DBG_BUS_CLIENT_TIMESTAMP,
+	DBG_BUS_CLIENT_CPU,
+	DBG_BUS_CLIENT_RBCY,
+	DBG_BUS_CLIENT_RBCQ,
+	DBG_BUS_CLIENT_RBCM,
+	DBG_BUS_CLIENT_RBCB,
+	DBG_BUS_CLIENT_RBCW,
+	DBG_BUS_CLIENT_RBCV,
+	MAX_DBG_BUS_CLIENTS
+};
+
+
 enum init_modes {
 	MODE_BB_A0_DEPRECATED,
 	MODE_BB,
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 6a79db52e..0aed043bb 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -12,6 +12,8 @@
 #include "reg_addr.h"
 #include "ecore_utils.h"
 #include "ecore_iov_api.h"
+#include "ecore_gtt_values.h"
+#include "ecore_dev_api.h"
 
 #ifndef ASIC_ONLY
 #define ECORE_EMUL_FACTOR 2000
@@ -78,6 +80,20 @@ enum _ecore_status_t ecore_ptt_pool_alloc(struct ecore_hwfn *p_hwfn)
 	return ECORE_SUCCESS;
 }
 
+void ecore_gtt_init(struct ecore_hwfn *p_hwfn)
+{
+	u32 gtt_base;
+	u32 i;
+
+	/* Set the global windows */
+	gtt_base = PXP_PF_WINDOW_ADMIN_START + PXP_PF_WINDOW_ADMIN_GLOBAL_START;
+
+	for (i = 0; i < OSAL_ARRAY_SIZE(pxp_global_win); i++)
+		if (pxp_global_win[i])
+			REG_WR(p_hwfn, gtt_base + i * PXP_GLOBAL_ENTRY_SIZE,
+			       pxp_global_win[i]);
+}
+
 void ecore_ptt_invalidate(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_ptt *p_ptt;
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index e43f337dc..238bdb9db 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -8,9 +8,8 @@
 #define __ECORE_HW_H__
 
 #include "ecore.h"
-#include "ecore_dev_api.h"
 
-/* Forward decleration */
+/* Forward declaration */
 struct ecore_ptt;
 
 enum reserved_ptts {
@@ -53,10 +52,8 @@ enum reserved_ptts {
 * @brief ecore_gtt_init - Initialize GTT windows
 *
 * @param p_hwfn
-* @param p_ptt
 */
-void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt);
+void ecore_gtt_init(struct ecore_hwfn *p_hwfn);
 
 /**
  * @brief ecore_ptt_invalidate - Forces all ptt entries to be re-configured
@@ -84,7 +81,6 @@ void ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn);
 /**
  * @brief ecore_ptt_get_bar_addr - Get PPT's external BAR address
  *
- * @param p_hwfn
  * @param p_ptt
  *
  * @return u32
@@ -95,8 +91,8 @@ u32 ecore_ptt_get_bar_addr(struct ecore_ptt	*p_ptt);
  * @brief ecore_ptt_set_win - Set PTT Window's GRC BAR address
  *
  * @param p_hwfn
- * @param new_hw_addr
  * @param p_ptt
+ * @param new_hw_addr
  */
 void ecore_ptt_set_win(struct ecore_hwfn	*p_hwfn,
 		       struct ecore_ptt		*p_ptt,
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 928d41b46..a0a6e3aba 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -664,10 +664,10 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 
 	/* Go over all PF VPORTs */
 	for (i = 0; i < num_vports; i++) {
-		if (!vport_params[i].vport_wfq)
+		if (!vport_params[i].wfq)
 			continue;
 
-		inc_val = QM_WFQ_INC_VAL(vport_params[i].vport_wfq);
+		inc_val = QM_WFQ_INC_VAL(vport_params[i].wfq);
 		if (inc_val > QM_WFQ_MAX_INC_VAL) {
 			DP_NOTICE(p_hwfn, true,
 				  "Invalid VPORT WFQ weight configuration\n");
@@ -710,8 +710,7 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
 
 	/* Go over all PF VPORTs */
 	for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
-		inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl ?
-			  vport_params[i].vport_rl : link_speed);
+		inc_val = QM_RL_INC_VAL(link_speed);
 		if (inc_val > QM_VP_RL_MAX_INC_VAL(link_speed)) {
 			DP_NOTICE(p_hwfn, true,
 				  "Invalid VPORT rate-limit configuration\n");
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index 8f7209100..ad8570a08 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -534,53 +534,6 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
-void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt)
-{
-	u32 gtt_base;
-	u32 i;
-
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
-		/* This is done by MFW on ASIC; regardless, this should only
-		 * be done once per chip [i.e., common]. Implementation is
-		 * not too bright, but it should work on the simple FPGA/EMUL
-		 * scenarios.
-		 */
-		static bool initialized;
-		int poll_cnt = 500;
-		u32 val;
-
-		/* initialize PTT/GTT (poll for completion) */
-		if (!initialized) {
-			ecore_wr(p_hwfn, p_ptt,
-				 PGLUE_B_REG_START_INIT_PTT_GTT, 1);
-			initialized = true;
-		}
-
-		do {
-			/* ptt might be overrided by HW until this is done */
-			OSAL_UDELAY(10);
-			ecore_ptt_invalidate(p_hwfn);
-			val = ecore_rd(p_hwfn, p_ptt,
-				       PGLUE_B_REG_INIT_DONE_PTT_GTT);
-		} while ((val != 1) && --poll_cnt);
-
-		if (!poll_cnt)
-			DP_ERR(p_hwfn,
-			       "PGLUE_B_REG_INIT_DONE didn't complete\n");
-	}
-#endif
-
-	/* Set the global windows */
-	gtt_base = PXP_PF_WINDOW_ADMIN_START + PXP_PF_WINDOW_ADMIN_GLOBAL_START;
-
-	for (i = 0; i < OSAL_ARRAY_SIZE(pxp_global_win); i++)
-		if (pxp_global_win[i])
-			REG_WR(p_hwfn, gtt_base + i * PXP_GLOBAL_ENTRY_SIZE,
-			       pxp_global_win[i]);
-}
-
 enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
 #ifdef CONFIG_ECORE_BINARY_FW
 					const u8 *fw_data)
diff --git a/drivers/net/qede/base/ecore_init_ops.h b/drivers/net/qede/base/ecore_init_ops.h
index de7846d46..21e433309 100644
--- a/drivers/net/qede/base/ecore_init_ops.h
+++ b/drivers/net/qede/base/ecore_init_ops.h
@@ -97,14 +97,4 @@ void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
 #define STORE_RT_REG_AGG(hwfn, offset, val)			\
 	ecore_init_store_rt_agg(hwfn, offset, (u32 *)&val, sizeof(val))
 
-
-/**
- * @brief
- *      Initialize GTT global windows and set admin window
- *      related params of GTT/PTT to default values.
- *
- * @param p_hwfn
- */
-void ecore_gtt_init(struct ecore_hwfn *p_hwfn,
-		    struct ecore_ptt *p_ptt);
 #endif /* __ECORE_INIT_OPS__ */
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index 12d45c1c5..b146faff9 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -35,207 +35,239 @@
 #define USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id) (IRO[7].base + \
 	((queue_zone_id) * IRO[7].m1))
 #define USTORM_COMMON_QUEUE_CONS_SIZE (IRO[7].size)
+/* Xstorm common PQ info */
+#define XSTORM_PQ_INFO_OFFSET(pq_id) (IRO[8].base + ((pq_id) * IRO[8].m1))
+#define XSTORM_PQ_INFO_SIZE (IRO[8].size)
 /* Xstorm Integration Test Data */
-#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[8].base)
-#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[8].size)
+#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base)
+#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size)
 /* Ystorm Integration Test Data */
-#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base)
-#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size)
+#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base)
+#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size)
 /* Pstorm Integration Test Data */
-#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base)
-#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size)
+#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base)
+#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size)
 /* Tstorm Integration Test Data */
-#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base)
-#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size)
+#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base)
+#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size)
 /* Mstorm Integration Test Data */
-#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base)
-#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size)
+#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[13].base)
+#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[13].size)
 /* Ustorm Integration Test Data */
-#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[13].base)
-#define USTORM_INTEG_TEST_DATA_SIZE (IRO[13].size)
+#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[14].base)
+#define USTORM_INTEG_TEST_DATA_SIZE (IRO[14].size)
+/* Xstorm overlay buffer host address */
+#define XSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[15].base)
+#define XSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[15].size)
+/* Ystorm overlay buffer host address */
+#define YSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[16].base)
+#define YSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[16].size)
+/* Pstorm overlay buffer host address */
+#define PSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[17].base)
+#define PSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[17].size)
+/* Tstorm overlay buffer host address */
+#define TSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[18].base)
+#define TSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[18].size)
+/* Mstorm overlay buffer host address */
+#define MSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[19].base)
+#define MSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[19].size)
+/* Ustorm overlay buffer host address */
+#define USTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[20].base)
+#define USTORM_OVERLAY_BUF_ADDR_SIZE (IRO[20].size)
 /* Tstorm producers */
-#define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) (IRO[14].base + \
-	((core_rx_queue_id) * IRO[14].m1))
-#define TSTORM_LL2_RX_PRODS_SIZE (IRO[14].size)
+#define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) (IRO[21].base + \
+	((core_rx_queue_id) * IRO[21].m1))
+#define TSTORM_LL2_RX_PRODS_SIZE (IRO[21].size)
 /* Tstorm LightL2 queue statistics */
 #define CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \
-	(IRO[15].base + ((core_rx_queue_id) * IRO[15].m1))
-#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[15].size)
+	(IRO[22].base + ((core_rx_queue_id) * IRO[22].m1))
+#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[22].size)
 /* Ustorm LiteL2 queue statistics */
 #define CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \
-	(IRO[16].base + ((core_rx_queue_id) * IRO[16].m1))
-#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[16].size)
+	(IRO[23].base + ((core_rx_queue_id) * IRO[23].m1))
+#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[23].size)
 /* Pstorm LiteL2 queue statistics */
 #define CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) \
-	(IRO[17].base + ((core_tx_stats_id) * IRO[17].m1))
-#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[17].size)
+	(IRO[24].base + ((core_tx_stats_id) * IRO[24].m1))
+#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[24].size)
 /* Mstorm queue statistics */
-#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[18].base + \
-	((stat_counter_id) * IRO[18].m1))
-#define MSTORM_QUEUE_STAT_SIZE (IRO[18].size)
-/* Mstorm ETH PF queues producers */
-#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) (IRO[19].base + \
-	((queue_id) * IRO[19].m1))
-#define MSTORM_ETH_PF_PRODS_SIZE (IRO[19].size)
+#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[25].base + \
+	((stat_counter_id) * IRO[25].m1))
+#define MSTORM_QUEUE_STAT_SIZE (IRO[25].size)
+/* TPA aggregation timeout in us resolution (on ASIC) */
+#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[26].base)
+#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[26].size)
 /* Mstorm ETH VF queues producers offset in RAM. Used in default VF zone size
  * mode.
  */
-#define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id) (IRO[20].base + \
-	((vf_id) * IRO[20].m1) + ((vf_queue_id) * IRO[20].m2))
-#define MSTORM_ETH_VF_PRODS_SIZE (IRO[20].size)
-/* TPA agregation timeout in us resolution (on ASIC) */
-#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[21].base)
-#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[21].size)
+#define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id) (IRO[27].base + \
+	((vf_id) * IRO[27].m1) + ((vf_queue_id) * IRO[27].m2))
+#define MSTORM_ETH_VF_PRODS_SIZE (IRO[27].size)
+/* Mstorm ETH PF queues producers */
+#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) (IRO[28].base + \
+	((queue_id) * IRO[28].m1))
+#define MSTORM_ETH_PF_PRODS_SIZE (IRO[28].size)
 /* Mstorm pf statistics */
-#define MSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[22].base + ((pf_id) * IRO[22].m1))
-#define MSTORM_ETH_PF_STAT_SIZE (IRO[22].size)
+#define MSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[29].base + ((pf_id) * IRO[29].m1))
+#define MSTORM_ETH_PF_STAT_SIZE (IRO[29].size)
 /* Ustorm queue statistics */
-#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[23].base + \
-	((stat_counter_id) * IRO[23].m1))
-#define USTORM_QUEUE_STAT_SIZE (IRO[23].size)
+#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[30].base + \
+	((stat_counter_id) * IRO[30].m1))
+#define USTORM_QUEUE_STAT_SIZE (IRO[30].size)
 /* Ustorm pf statistics */
-#define USTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[24].base + ((pf_id) * IRO[24].m1))
-#define USTORM_ETH_PF_STAT_SIZE (IRO[24].size)
+#define USTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[31].base + ((pf_id) * IRO[31].m1))
+#define USTORM_ETH_PF_STAT_SIZE (IRO[31].size)
 /* Pstorm queue statistics */
-#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[25].base + \
-	((stat_counter_id) * IRO[25].m1))
-#define PSTORM_QUEUE_STAT_SIZE (IRO[25].size)
+#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[32].base + \
+	((stat_counter_id) * IRO[32].m1))
+#define PSTORM_QUEUE_STAT_SIZE (IRO[32].size)
 /* Pstorm pf statistics */
-#define PSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[26].base + ((pf_id) * IRO[26].m1))
-#define PSTORM_ETH_PF_STAT_SIZE (IRO[26].size)
+#define PSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[33].base + ((pf_id) * IRO[33].m1))
+#define PSTORM_ETH_PF_STAT_SIZE (IRO[33].size)
 /* Control frame's EthType configuration for TX control frame security */
-#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) (IRO[27].base + \
-	((ethType_id) * IRO[27].m1))
-#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[27].size)
+#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) (IRO[34].base + \
+	((ethType_id) * IRO[34].m1))
+#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[34].size)
 /* Tstorm last parser message */
-#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[28].base)
-#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[28].size)
+#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[35].base)
+#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[35].size)
 /* Tstorm Eth limit Rx rate */
-#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) (IRO[29].base + ((pf_id) * IRO[29].m1))
-#define ETH_RX_RATE_LIMIT_SIZE (IRO[29].size)
+#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) (IRO[36].base + ((pf_id) * IRO[36].m1))
+#define ETH_RX_RATE_LIMIT_SIZE (IRO[36].size)
 /* RSS indirection table entry update command per PF offset in TSTORM PF BAR0.
  * Use eth_tstorm_rss_update_data for update.
  */
-#define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) (IRO[30].base + \
-	((pf_id) * IRO[30].m1))
-#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[30].size)
+#define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) (IRO[37].base + \
+	((pf_id) * IRO[37].m1))
+#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[37].size)
 /* Xstorm queue zone */
-#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) (IRO[31].base + \
-	((queue_id) * IRO[31].m1))
-#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[31].size)
+#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) (IRO[38].base + \
+	((queue_id) * IRO[38].m1))
+#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[38].size)
 /* Ystorm cqe producer */
-#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[32].base + \
-	((rss_id) * IRO[32].m1))
-#define YSTORM_TOE_CQ_PROD_SIZE (IRO[32].size)
+#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[39].base + \
+	((rss_id) * IRO[39].m1))
+#define YSTORM_TOE_CQ_PROD_SIZE (IRO[39].size)
 /* Ustorm cqe producer */
-#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[33].base + \
-	((rss_id) * IRO[33].m1))
-#define USTORM_TOE_CQ_PROD_SIZE (IRO[33].size)
+#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[40].base + \
+	((rss_id) * IRO[40].m1))
+#define USTORM_TOE_CQ_PROD_SIZE (IRO[40].size)
 /* Ustorm grq producer */
-#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) (IRO[34].base + \
-	((pf_id) * IRO[34].m1))
-#define USTORM_TOE_GRQ_PROD_SIZE (IRO[34].size)
+#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) (IRO[41].base + \
+	((pf_id) * IRO[41].m1))
+#define USTORM_TOE_GRQ_PROD_SIZE (IRO[41].size)
 /* Tstorm cmdq-cons of given command queue-id */
-#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) (IRO[35].base + \
-	((cmdq_queue_id) * IRO[35].m1))
-#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[35].size)
+#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) (IRO[42].base + \
+	((cmdq_queue_id) * IRO[42].m1))
+#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[42].size)
 /* Tstorm (reflects M-Storm) bdq-external-producer of given function ID,
  * BDqueue-id
  */
-#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[36].base + \
-	((func_id) * IRO[36].m1) + ((bdq_id) * IRO[36].m2))
-#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[36].size)
+#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \
+	(IRO[43].base + ((storage_func_id) * IRO[43].m1) + \
+	((bdq_id) * IRO[43].m2))
+#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[43].size)
 /* Mstorm bdq-external-producer of given BDQ resource ID, BDqueue-id */
-#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[37].base + \
-	((func_id) * IRO[37].m1) + ((bdq_id) * IRO[37].m2))
-#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[37].size)
+#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \
+	(IRO[44].base + ((storage_func_id) * IRO[44].m1) + \
+	((bdq_id) * IRO[44].m2))
+#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[44].size)
 /* Tstorm iSCSI RX stats */
-#define TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[38].base + \
-	((pf_id) * IRO[38].m1))
-#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[38].size)
+#define TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[45].base + \
+	((storage_func_id) * IRO[45].m1))
+#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[45].size)
 /* Mstorm iSCSI RX stats */
-#define MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[39].base + \
-	((pf_id) * IRO[39].m1))
-#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[39].size)
+#define MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[46].base + \
+	((storage_func_id) * IRO[46].m1))
+#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[46].size)
 /* Ustorm iSCSI RX stats */
-#define USTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[40].base + \
-	((pf_id) * IRO[40].m1))
-#define USTORM_ISCSI_RX_STATS_SIZE (IRO[40].size)
+#define USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) (IRO[47].base + \
+	((storage_func_id) * IRO[47].m1))
+#define USTORM_ISCSI_RX_STATS_SIZE (IRO[47].size)
 /* Xstorm iSCSI TX stats */
-#define XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[41].base + \
-	((pf_id) * IRO[41].m1))
-#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[41].size)
+#define XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[48].base + \
+	((storage_func_id) * IRO[48].m1))
+#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[48].size)
 /* Ystorm iSCSI TX stats */
-#define YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[42].base + \
-	((pf_id) * IRO[42].m1))
-#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[42].size)
+#define YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[49].base + \
+	((storage_func_id) * IRO[49].m1))
+#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[49].size)
 /* Pstorm iSCSI TX stats */
-#define PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[43].base + \
-	((pf_id) * IRO[43].m1))
-#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[43].size)
+#define PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) (IRO[50].base + \
+	((storage_func_id) * IRO[50].m1))
+#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[50].size)
 /* Tstorm FCoE RX stats */
-#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) (IRO[44].base + \
-	((pf_id) * IRO[44].m1))
-#define TSTORM_FCOE_RX_STATS_SIZE (IRO[44].size)
+#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) (IRO[51].base + \
+	((pf_id) * IRO[51].m1))
+#define TSTORM_FCOE_RX_STATS_SIZE (IRO[51].size)
 /* Pstorm FCoE TX stats */
-#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) (IRO[45].base + \
-	((pf_id) * IRO[45].m1))
-#define PSTORM_FCOE_TX_STATS_SIZE (IRO[45].size)
+#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) (IRO[52].base + \
+	((pf_id) * IRO[52].m1))
+#define PSTORM_FCOE_TX_STATS_SIZE (IRO[52].size)
 /* Pstorm RDMA queue statistics */
-#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
-	((rdma_stat_counter_id) * IRO[46].m1))
-#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
+#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[53].base + \
+	((rdma_stat_counter_id) * IRO[53].m1))
+#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[53].size)
 /* Tstorm RDMA queue statistics */
-#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[47].base + \
-	((rdma_stat_counter_id) * IRO[47].m1))
-#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[47].size)
+#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[54].base + \
+	((rdma_stat_counter_id) * IRO[54].m1))
+#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[54].size)
 /* Xstorm error level for assert */
-#define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[48].base + \
-	((pf_id) * IRO[48].m1))
-#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[48].size)
+#define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[55].base + \
+	((pf_id) * IRO[55].m1))
+#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[55].size)
 /* Ystorm error level for assert */
-#define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[49].base + \
-	((pf_id) * IRO[49].m1))
-#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[49].size)
+#define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[56].base + \
+	((pf_id) * IRO[56].m1))
+#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[56].size)
 /* Pstorm error level for assert */
-#define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[50].base + \
-	((pf_id) * IRO[50].m1))
-#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[50].size)
+#define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[57].base + \
+	((pf_id) * IRO[57].m1))
+#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[57].size)
 /* Tstorm error level for assert */
-#define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[51].base + \
-	((pf_id) * IRO[51].m1))
-#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[51].size)
+#define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[58].base + \
+	((pf_id) * IRO[58].m1))
+#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[58].size)
 /* Mstorm error level for assert */
-#define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[52].base + \
-	((pf_id) * IRO[52].m1))
-#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[52].size)
+#define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[59].base + \
+	((pf_id) * IRO[59].m1))
+#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[59].size)
 /* Ustorm error level for assert */
-#define USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[53].base + \
-	((pf_id) * IRO[53].m1))
-#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[53].size)
+#define USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[60].base + \
+	((pf_id) * IRO[60].m1))
+#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[60].size)
 /* Xstorm iWARP rxmit stats */
-#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[54].base + \
-	((pf_id) * IRO[54].m1))
-#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[54].size)
+#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[61].base + \
+	((pf_id) * IRO[61].m1))
+#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[61].size)
 /* Tstorm RoCE Event Statistics */
-#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[55].base + \
-	((roce_pf_id) * IRO[55].m1))
-#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[55].size)
+#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[62].base + \
+	((roce_pf_id) * IRO[62].m1))
+#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[62].size)
 /* DCQCN Received Statistics */
-#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) (IRO[56].base + \
-	((roce_pf_id) * IRO[56].m1))
-#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[56].size)
+#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) (IRO[63].base + \
+	((roce_pf_id) * IRO[63].m1))
+#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[63].size)
 /* RoCE Error Statistics */
-#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) (IRO[57].base + \
-	((roce_pf_id) * IRO[57].m1))
-#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[57].size)
+#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) (IRO[64].base + \
+	((roce_pf_id) * IRO[64].m1))
+#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[64].size)
 /* DCQCN Sent Statistics */
-#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) (IRO[58].base + \
-	((roce_pf_id) * IRO[58].m1))
-#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[58].size)
+#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) (IRO[65].base + \
+	((roce_pf_id) * IRO[65].m1))
+#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[65].size)
 /* RoCE CQEs Statistics */
-#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) (IRO[59].base + \
-	((roce_pf_id) * IRO[59].m1))
-#define USTORM_ROCE_CQE_STATS_SIZE (IRO[59].size)
+#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) (IRO[66].base + \
+	((roce_pf_id) * IRO[66].m1))
+#define USTORM_ROCE_CQE_STATS_SIZE (IRO[66].size)
+/* Tstorm NVMf per port per producer consumer data */
+#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id, \
+	taskpool_index) (IRO[67].base + ((port_num_id) * IRO[67].m1) + \
+	((taskpool_index) * IRO[67].m2))
+#define TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_SIZE (IRO[67].size)
+/* Ustorm NVMf per port counters */
+#define USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id) (IRO[68].base + \
+	((port_num_id) * IRO[68].m1))
+#define USTORM_NVMF_PORT_COUNTERS_SIZE (IRO[68].size)
 
-#endif /* __IRO_H__ */
+#endif
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index 30e632ce1..6442057ac 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -7,127 +7,221 @@
 #ifndef __IRO_VALUES_H__
 #define __IRO_VALUES_H__
 
-static const struct iro iro_arr[60] = {
-/* YSTORM_FLOW_CONTROL_MODE_OFFSET */
-	{      0x0,      0x0,      0x0,      0x0,      0x8},
-/* TSTORM_PORT_STAT_OFFSET(port_id) */
-	{   0x4cb8,     0x88,      0x0,      0x0,     0x88},
-/* TSTORM_LL2_PORT_STAT_OFFSET(port_id) */
-	{   0x6530,     0x20,      0x0,      0x0,     0x20},
-/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) */
-	{    0xb00,      0x8,      0x0,      0x0,      0x4},
-/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id) */
-	{    0xa80,      0x8,      0x0,      0x0,      0x4},
-/* USTORM_EQE_CONS_OFFSET(pf_id) */
-	{      0x0,      0x8,      0x0,      0x0,      0x2},
-/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id) */
-	{     0x80,      0x8,      0x0,      0x0,      0x4},
-/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id) */
-	{     0x84,      0x8,      0x0,      0x0,      0x2},
-/* XSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x4c48,      0x0,      0x0,      0x0,     0x78},
-/* YSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x3e38,      0x0,      0x0,      0x0,     0x78},
-/* PSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x3ef8,      0x0,      0x0,      0x0,     0x78},
-/* TSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x4c40,      0x0,      0x0,      0x0,     0x78},
-/* MSTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x4998,      0x0,      0x0,      0x0,     0x78},
-/* USTORM_INTEG_TEST_DATA_OFFSET */
-	{   0x7f50,      0x0,      0x0,      0x0,     0x78},
-/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) */
-	{    0xa28,      0x8,      0x0,      0x0,      0x8},
-/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
-	{   0x6210,     0x10,      0x0,      0x0,     0x10},
-/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
-	{   0xb820,     0x30,      0x0,      0x0,     0x30},
-/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */
-	{   0xa990,     0x30,      0x0,      0x0,     0x30},
-/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
-	{   0x4b68,     0x80,      0x0,      0x0,     0x40},
-/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id) */
-	{    0x1f8,      0x4,      0x0,      0x0,      0x4},
-/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id) */
-	{   0x53a8,     0x80,      0x4,      0x0,      0x4},
-/* MSTORM_TPA_TIMEOUT_US_OFFSET */
-	{   0xc7d0,      0x0,      0x0,      0x0,      0x4},
-/* MSTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0x4ba8,     0x80,      0x0,      0x0,     0x20},
-/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
-	{   0x8158,     0x40,      0x0,      0x0,     0x30},
-/* USTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0xe770,     0x60,      0x0,      0x0,     0x60},
-/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
-	{   0x4090,     0x80,      0x0,      0x0,     0x38},
-/* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */
-	{   0xfea8,     0x78,      0x0,      0x0,     0x78},
-/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */
-	{    0x1f8,      0x4,      0x0,      0x0,      0x4},
-/* TSTORM_ETH_PRS_INPUT_OFFSET */
-	{   0xaf20,      0x0,      0x0,      0x0,     0xf0},
-/* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */
-	{   0xb010,      0x8,      0x0,      0x0,      0x8},
-/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) */
-	{    0xc00,      0x8,      0x0,      0x0,      0x8},
-/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */
-	{    0x1f8,      0x8,      0x0,      0x0,      0x8},
-/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */
-	{    0xac0,      0x8,      0x0,      0x0,      0x8},
-/* USTORM_TOE_CQ_PROD_OFFSET(rss_id) */
-	{   0x2578,      0x8,      0x0,      0x0,      0x8},
-/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id) */
-	{   0x24f8,      0x8,      0x0,      0x0,      0x8},
-/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) */
-	{      0x0,      0x8,      0x0,      0x0,      0x8},
-/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
-	{    0x400,     0x18,      0x8,      0x0,      0x8},
-/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
-	{    0xb78,     0x18,      0x8,      0x0,      0x2},
-/* TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{   0xd898,     0x50,      0x0,      0x0,     0x3c},
-/* MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{  0x12908,     0x18,      0x0,      0x0,     0x10},
-/* USTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
-	{  0x11aa8,     0x40,      0x0,      0x0,     0x18},
-/* XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
-	{   0xa588,     0x50,      0x0,      0x0,     0x20},
-/* YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
-	{   0x8f00,     0x40,      0x0,      0x0,     0x28},
-/* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
-	{  0x10e30,     0x18,      0x0,      0x0,     0x10},
-/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */
-	{   0xde48,     0x48,      0x0,      0x0,     0x38},
-/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */
-	{  0x11298,     0x20,      0x0,      0x0,     0x20},
-/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
-	{   0x40c8,     0x80,      0x0,      0x0,     0x10},
-/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
-	{   0x5048,     0x10,      0x0,      0x0,     0x10},
-/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{   0xa928,      0x8,      0x0,      0x0,      0x1},
-/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{   0xa128,      0x8,      0x0,      0x0,      0x1},
-/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{  0x11a30,      0x8,      0x0,      0x0,      0x1},
-/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{   0xf030,      0x8,      0x0,      0x0,      0x1},
-/* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{  0x13028,      0x8,      0x0,      0x0,      0x1},
-/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
-	{  0x12c58,      0x8,      0x0,      0x0,      0x1},
-/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */
-	{   0xc9b8,     0x30,      0x0,      0x0,     0x10},
-/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */
-	{   0xed90,     0x28,      0x0,      0x0,     0x28},
-/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) */
-	{   0xad20,     0x18,      0x0,      0x0,     0x18},
-/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) */
-	{   0xaea0,      0x8,      0x0,      0x0,      0x8},
-/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) */
-	{  0x13c38,      0x8,      0x0,      0x0,      0x8},
-/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) */
-	{  0x13c50,     0x18,      0x0,      0x0,     0x18},
+/* Per-chip offsets in iro_arr in dwords */
+#define E4_IRO_ARR_OFFSET 0
+
+/* IRO Array */
+static const u32 iro_arr[] = {
+	/* E4 */
+	/* YSTORM_FLOW_CONTROL_MODE_OFFSET */
+	/* offset=0x0, size=0x8 */
+	0x00000000, 0x00000000, 0x00080000,
+	/* TSTORM_PORT_STAT_OFFSET(port_id), */
+	/* offset=0x3908, mult1=0x88, size=0x88 */
+	0x00003908, 0x00000088, 0x00880000,
+	/* TSTORM_LL2_PORT_STAT_OFFSET(port_id), */
+	/* offset=0x58f0, mult1=0x20, size=0x20 */
+	0x000058f0, 0x00000020, 0x00200000,
+	/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id), */
+	/* offset=0xb00, mult1=0x8, size=0x4 */
+	0x00000b00, 0x00000008, 0x00040000,
+	/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id), */
+	/* offset=0xa80, mult1=0x8, size=0x4 */
+	0x00000a80, 0x00000008, 0x00040000,
+	/* USTORM_EQE_CONS_OFFSET(pf_id), */
+	/* offset=0x0, mult1=0x8, size=0x2 */
+	0x00000000, 0x00000008, 0x00020000,
+	/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id), */
+	/* offset=0x80, mult1=0x8, size=0x4 */
+	0x00000080, 0x00000008, 0x00040000,
+	/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id), */
+	/* offset=0x84, mult1=0x8, size=0x2 */
+	0x00000084, 0x00000008, 0x00020000,
+	/* XSTORM_PQ_INFO_OFFSET(pq_id), */
+	/* offset=0x5618, mult1=0x4, size=0x4 */
+	0x00005618, 0x00000004, 0x00040000,
+	/* XSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x4cd0, size=0x78 */
+	0x00004cd0, 0x00000000, 0x00780000,
+	/* YSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x3e40, size=0x78 */
+	0x00003e40, 0x00000000, 0x00780000,
+	/* PSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x3e00, size=0x78 */
+	0x00003e00, 0x00000000, 0x00780000,
+	/* TSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x3890, size=0x78 */
+	0x00003890, 0x00000000, 0x00780000,
+	/* MSTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x3b50, size=0x78 */
+	0x00003b50, 0x00000000, 0x00780000,
+	/* USTORM_INTEG_TEST_DATA_OFFSET */
+	/* offset=0x7f58, size=0x78 */
+	0x00007f58, 0x00000000, 0x00780000,
+	/* XSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0x5e58, size=0x8 */
+	0x00005e58, 0x00000000, 0x00080000,
+	/* YSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0x7100, size=0x8 */
+	0x00007100, 0x00000000, 0x00080000,
+	/* PSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0xa820, size=0x8 */
+	0x0000a820, 0x00000000, 0x00080000,
+	/* TSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0x4a18, size=0x8 */
+	0x00004a18, 0x00000000, 0x00080000,
+	/* MSTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0xa5a0, size=0x8 */
+	0x0000a5a0, 0x00000000, 0x00080000,
+	/* USTORM_OVERLAY_BUF_ADDR_OFFSET */
+	/* offset=0xbde8, size=0x8 */
+	0x0000bde8, 0x00000000, 0x00080000,
+	/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id), */
+	/* offset=0x20, mult1=0x4, size=0x4 */
+	0x00000020, 0x00000004, 0x00040000,
+	/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), */
+	/* offset=0x56d0, mult1=0x10, size=0x10 */
+	0x000056d0, 0x00000010, 0x00100000,
+	/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id), */
+	/* offset=0xc210, mult1=0x30, size=0x30 */
+	0x0000c210, 0x00000030, 0x00300000,
+	/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id), */
+	/* offset=0xaa08, mult1=0x38, size=0x38 */
+	0x0000aa08, 0x00000038, 0x00380000,
+	/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id), */
+	/* offset=0x3d20, mult1=0x80, size=0x40 */
+	0x00003d20, 0x00000080, 0x00400000,
+	/* MSTORM_TPA_TIMEOUT_US_OFFSET */
+	/* offset=0xbf60, size=0x4 */
+	0x0000bf60, 0x00000000, 0x00040000,
+	/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id), */
+	/* offset=0x4560, mult1=0x80, mult2=0x4, size=0x4 */
+	0x00004560, 0x00040080, 0x00040000,
+	/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id), */
+	/* offset=0x1f8, mult1=0x4, size=0x4 */
+	0x000001f8, 0x00000004, 0x00040000,
+	/* MSTORM_ETH_PF_STAT_OFFSET(pf_id), */
+	/* offset=0x3d60, mult1=0x80, size=0x20 */
+	0x00003d60, 0x00000080, 0x00200000,
+	/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id), */
+	/* offset=0x8960, mult1=0x40, size=0x30 */
+	0x00008960, 0x00000040, 0x00300000,
+	/* USTORM_ETH_PF_STAT_OFFSET(pf_id), */
+	/* offset=0xe840, mult1=0x60, size=0x60 */
+	0x0000e840, 0x00000060, 0x00600000,
+	/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id), */
+	/* offset=0x3f98, mult1=0x80, size=0x38 */
+	0x00003f98, 0x00000080, 0x00380000,
+	/* PSTORM_ETH_PF_STAT_OFFSET(pf_id), */
+	/* offset=0x100b8, mult1=0xc0, size=0xc0 */
+	0x000100b8, 0x000000c0, 0x00c00000,
+	/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id), */
+	/* offset=0x1f8, mult1=0x2, size=0x2 */
+	0x000001f8, 0x00000002, 0x00020000,
+	/* TSTORM_ETH_PRS_INPUT_OFFSET */
+	/* offset=0xa2a0, size=0x108 */
+	0x0000a2a0, 0x00000000, 0x01080000,
+	/* ETH_RX_RATE_LIMIT_OFFSET(pf_id), */
+	/* offset=0xa3a8, mult1=0x8, size=0x8 */
+	0x0000a3a8, 0x00000008, 0x00080000,
+	/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id), */
+	/* offset=0x1c0, mult1=0x8, size=0x8 */
+	0x000001c0, 0x00000008, 0x00080000,
+	/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id), */
+	/* offset=0x1f8, mult1=0x8, size=0x8 */
+	0x000001f8, 0x00000008, 0x00080000,
+	/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id), */
+	/* offset=0xac0, mult1=0x8, size=0x8 */
+	0x00000ac0, 0x00000008, 0x00080000,
+	/* USTORM_TOE_CQ_PROD_OFFSET(rss_id), */
+	/* offset=0x2578, mult1=0x8, size=0x8 */
+	0x00002578, 0x00000008, 0x00080000,
+	/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id), */
+	/* offset=0x24f8, mult1=0x8, size=0x8 */
+	0x000024f8, 0x00000008, 0x00080000,
+	/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id), */
+	/* offset=0x280, mult1=0x8, size=0x8 */
+	0x00000280, 0x00000008, 0x00080000,
+	/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), */
+	/* offset=0x680, mult1=0x18, mult2=0x8, size=0x8 */
+	0x00000680, 0x00080018, 0x00080000,
+	/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id,bdq_id), */
+	/* offset=0xb78, mult1=0x18, mult2=0x8, size=0x2 */
+	0x00000b78, 0x00080018, 0x00020000,
+	/* TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */
+	/* offset=0xc640, mult1=0x50, size=0x3c */
+	0x0000c640, 0x00000050, 0x003c0000,
+	/* MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x12038, mult1=0x18, size=0x10 */
+	0x00012038, 0x00000018, 0x00100000,
+	/* USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x11b00, mult1=0x40, size=0x18 */
+	0x00011b00, 0x00000040, 0x00180000,
+	/* XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x94d0, mult1=0x50, size=0x20 */
+	0x000094d0, 0x00000050, 0x00200000,
+	/* YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x8b10, mult1=0x40, size=0x28 */
+	0x00008b10, 0x00000040, 0x00280000,
+	/* PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id), */
+	/* offset=0x10fc0, mult1=0x18, size=0x10 */
+	0x00010fc0, 0x00000018, 0x00100000,
+	/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id), */
+	/* offset=0xc828, mult1=0x48, size=0x38 */
+	0x0000c828, 0x00000048, 0x00380000,
+	/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id), */
+	/* offset=0x11090, mult1=0x20, size=0x20 */
+	0x00011090, 0x00000020, 0x00200000,
+	/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), */
+	/* offset=0x3fd0, mult1=0x80, size=0x10 */
+	0x00003fd0, 0x00000080, 0x00100000,
+	/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id), */
+	/* offset=0x3c98, mult1=0x10, size=0x10 */
+	0x00003c98, 0x00000010, 0x00100000,
+	/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0xa868, mult1=0x8, size=0x1 */
+	0x0000a868, 0x00000008, 0x00010000,
+	/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0x97a0, mult1=0x8, size=0x1 */
+	0x000097a0, 0x00000008, 0x00010000,
+	/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0x11310, mult1=0x8, size=0x1 */
+	0x00011310, 0x00000008, 0x00010000,
+	/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0xf018, mult1=0x8, size=0x1 */
+	0x0000f018, 0x00000008, 0x00010000,
+	/* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0x12628, mult1=0x8, size=0x1 */
+	0x00012628, 0x00000008, 0x00010000,
+	/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id), */
+	/* offset=0x11da8, mult1=0x8, size=0x1 */
+	0x00011da8, 0x00000008, 0x00010000,
+	/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id), */
+	/* offset=0xa978, mult1=0x30, size=0x10 */
+	0x0000a978, 0x00000030, 0x00100000,
+	/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id), */
+	/* offset=0xd768, mult1=0x28, size=0x28 */
+	0x0000d768, 0x00000028, 0x00280000,
+	/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id), */
+	/* offset=0x9a58, mult1=0x18, size=0x18 */
+	0x00009a58, 0x00000018, 0x00180000,
+	/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id), */
+	/* offset=0x9bd8, mult1=0x8, size=0x8 */
+	0x00009bd8, 0x00000008, 0x00080000,
+	/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id), */
+	/* offset=0x13398, mult1=0x8, size=0x8 */
+	0x00013398, 0x00000008, 0x00080000,
+	/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id), */
+	/* offset=0x126e8, mult1=0x18, size=0x18 */
+	0x000126e8, 0x00000018, 0x00180000,
+	/* TSTORM_NVMF_PORT_TASKPOOL_PRODUCER_CONSUMER_OFFSET(port_num_id,taskpool_index), */
+	/* offset=0xe608, mult1=0x288, mult2=0x50, size=0x10 */
+	0x0000e608, 0x00500288, 0x00100000,
+	/* USTORM_NVMF_PORT_COUNTERS_OFFSET(port_num_id), */
+	/* offset=0x12970, mult1=0x138, size=0x28 */
+	0x00012970, 0x00000138, 0x00280000,
 };
+/* Data size: 828 bytes */
+
 
 #endif /* __IRO_VALUES_H__ */
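Going by the per-entry comments above, each IRO entry is now three dwords: a
32-bit base offset, a dword holding mult1 in the low 16 bits and mult2 in the
high 16 bits, and a dword holding the size in its high 16 bits. A hedged
sketch of unpacking one entry follows; the struct and helper are illustrative,
not the driver's actual consumer of iro_arr:

/* Illustrative unpack of one packed iro_arr entry, assuming the layout
 * documented in the comments above (base, mult2:mult1, size << 16).
 */
struct iro_entry_example {
	u32 base;	/* RAM offset */
	u16 m1;		/* first multiplier */
	u16 m2;		/* second multiplier */
	u16 size;	/* element size */
};

static void iro_unpack(const u32 *packed, struct iro_entry_example *e)
{
	e->base = packed[0];
	e->m1 = (u16)(packed[1] & 0xffff);
	e->m2 = (u16)(packed[1] >> 16);
	e->size = (u16)(packed[2] >> 16);
}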
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 23336c282..6559d8040 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -24,6 +24,7 @@
 
 #define CHIP_MCP_RESP_ITER_US 10
 #define EMUL_MCP_RESP_ITER_US (1000 * 1000)
+#define GRCBASE_MCP	0xe00000
 
 #define ECORE_DRV_MB_MAX_RETRIES (500 * 1000)	/* Account for 5 sec */
 #define ECORE_MCP_RESET_RETRIES (50 * 1000)	/* Account for 500 msec */
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index 9a401ed4a..4611d86d9 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -16,7 +16,7 @@
 /* ETH FP HSI Major version */
 #define ETH_HSI_VER_MAJOR                   3
 /* ETH FP HSI Minor version */
-#define ETH_HSI_VER_MINOR                   10
+#define ETH_HSI_VER_MINOR                   11
 
 /* Alias for 8.7.x.x/8.8.x.x ETH FP HSI MINOR version. In this version driver
  * is not required to set pkt_len field in eth_tx_1st_bd struct, and tunneling
@@ -24,6 +24,9 @@
  */
 #define ETH_HSI_VER_NO_PKT_LEN_TUNN         5
 
+/* Maximum number of pinned L2 connections (CIDs) */
+#define ETH_PINNED_CONN_MAX_NUM             32
+
 #define ETH_CACHE_LINE_SIZE                 64
 #define ETH_RX_CQE_GAP                      32
 #define ETH_MAX_RAMROD_PER_CON              8
@@ -48,6 +51,7 @@
 #define ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT   3
 #define ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT        2
 #define ETH_TX_MIN_BDS_PER_PKT_W_LOOPBACK_MODE      2
+#define ETH_TX_MIN_BDS_PER_PKT_W_VPORT_FORWARDING   4
 /* (QM_REG_TASKBYTECRDCOST_0, QM_VOQ_BYTE_CRD_TASK_COST) -
  * (VLAN-TAG + CRC + IPG + PREAMBLE)
  */
@@ -80,7 +84,7 @@
 /* Minimum number of free BDs in RX ring, that guarantee receiving of at least
  * one RX packet.
  */
-#define ETH_RX_BD_THRESHOLD                12
+#define ETH_RX_BD_THRESHOLD                16
 
 /* num of MAC/VLAN filters */
 #define ETH_NUM_MAC_FILTERS                 512
@@ -98,20 +102,20 @@
 #define ETH_RSS_IND_TABLE_ENTRIES_NUM       128
 /* Length of RSS key (in regs) */
 #define ETH_RSS_KEY_SIZE_REGS               10
-/* number of available RSS engines in K2 */
+/* number of available RSS engines in AH */
 #define ETH_RSS_ENGINE_NUM_K2               207
 /* number of available RSS engines in BB */
 #define ETH_RSS_ENGINE_NUM_BB               127
 
 /* TPA constants */
 /* Maximum number of open TPA aggregations */
-#define ETH_TPA_MAX_AGGS_NUM              64
-/* Maximum number of additional buffers, reported by TPA-start CQE */
-#define ETH_TPA_CQE_START_LEN_LIST_SIZE   ETH_RX_MAX_BUFF_PER_PKT
+#define ETH_TPA_MAX_AGGS_NUM                64
+/* TPA-start CQE additional BD list length, kept for backward compatibility */
+#define ETH_TPA_CQE_START_BW_LEN_LIST_SIZE  2
 /* Maximum number of buffers, reported by TPA-continue CQE */
-#define ETH_TPA_CQE_CONT_LEN_LIST_SIZE    6
+#define ETH_TPA_CQE_CONT_LEN_LIST_SIZE      6
 /* Maximum number of buffers, reported by TPA-end CQE */
-#define ETH_TPA_CQE_END_LEN_LIST_SIZE     4
+#define ETH_TPA_CQE_END_LEN_LIST_SIZE       4
 
 /* Control frame check constants */
 /* Number of etherType values configured by driver for control frame check */
@@ -125,12 +129,12 @@
 /*
  * Destination port mode
  */
-enum dest_port_mode {
-	DEST_PORT_PHY /* Send to physical port. */,
-	DEST_PORT_LOOPBACK /* Send to loopback port. */,
-	DEST_PORT_PHY_LOOPBACK /* Send to physical and loopback port. */,
-	DEST_PORT_DROP /* Drop the packet in PBF. */,
-	MAX_DEST_PORT_MODE
+enum dst_port_mode {
+	DST_PORT_PHY /* Send to physical port. */,
+	DST_PORT_LOOPBACK /* Send to loopback port. */,
+	DST_PORT_PHY_LOOPBACK /* Send to physical and loopback port. */,
+	DST_PORT_DROP /* Drop the packet in PBF. */,
+	MAX_DST_PORT_MODE
 };
 
 
@@ -353,9 +357,13 @@ struct eth_fast_path_rx_reg_cqe {
 /* Tunnel Parsing Flags */
 	struct eth_tunnel_parsing_flags tunnel_pars_flags;
 	u8 bd_num /* Number of BDs, used for packet */;
-	u8 reserved[9];
-	struct eth_fast_path_cqe_fw_debug fw_debug /* FW reserved. */;
-	u8 reserved1[3];
+	u8 reserved;
+	__le16 reserved2;
+/* aRFS flow ID or Resource ID - Indicates a Vport ID from which packet was
+ * sent, used when sending from VF to VF Representor.
+ */
+	__le32 flow_id_or_resource_id;
+	u8 reserved1[7];
 	struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
 };
 
@@ -422,10 +430,14 @@ struct eth_fast_path_rx_tpa_start_cqe {
 	struct eth_tunnel_parsing_flags tunnel_pars_flags;
 	u8 tpa_agg_index /* TPA aggregation index */;
 	u8 header_len /* Packet L2+L3+L4 header length */;
-/* Additional BDs length list. */
-	__le16 ext_bd_len_list[ETH_TPA_CQE_START_LEN_LIST_SIZE];
-	struct eth_fast_path_cqe_fw_debug fw_debug /* FW reserved. */;
-	u8 reserved;
+/* Additional BDs length list. Used for backward compatibility. */
+	__le16 bw_ext_bd_len_list[ETH_TPA_CQE_START_BW_LEN_LIST_SIZE];
+	__le16 reserved2;
+/* aRFS or GFS flow ID or Resource ID - Indicates a Vport ID from which packet
+ * was sent, used when sending from VF to VF Representor
+ */
+	__le32 flow_id_or_resource_id;
+	u8 reserved[3];
 	struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
 };
 
@@ -602,6 +614,41 @@ struct eth_tx_3rd_bd {
 };
 
 
+/*
+ * The parsing information data for the fourth tx bd of a given packet.
+ */
+struct eth_tx_data_4th_bd {
+/* Destination Vport ID to forward the packet, applicable only when
+ * tx_dst_port_mode_config == ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD and
+ * dst_port_mode == DST_PORT_LOOPBACK, used to route the packet from VF
+ * Representor to VF
+ */
+	u8 dst_vport_id;
+	u8 reserved4;
+	__le16 bitfields;
+/* if set, dst_vport_id has a valid value and will be used in FW */
+#define ETH_TX_DATA_4TH_BD_DST_VPORT_ID_VALID_MASK  0x1
+#define ETH_TX_DATA_4TH_BD_DST_VPORT_ID_VALID_SHIFT 0
+#define ETH_TX_DATA_4TH_BD_RESERVED1_MASK           0x7F
+#define ETH_TX_DATA_4TH_BD_RESERVED1_SHIFT          1
+/* Should be 0 in all the BDs, except the first one. (for debug) */
+#define ETH_TX_DATA_4TH_BD_START_BD_MASK            0x1
+#define ETH_TX_DATA_4TH_BD_START_BD_SHIFT           8
+#define ETH_TX_DATA_4TH_BD_RESERVED2_MASK           0x7F
+#define ETH_TX_DATA_4TH_BD_RESERVED2_SHIFT          9
+	__le16 reserved3;
+};
+
+/*
+ * The fourth tx bd of a given packet
+ */
+struct eth_tx_4th_bd {
+	struct regpair addr /* Single continuous buffer */;
+	__le16 nbytes /* Number of bytes in this BD. */;
+	struct eth_tx_data_4th_bd data /* Parsing information data. */;
+};
+
+
 /*
  * Complementary information for the regular tx bd of a given packet.
  */
@@ -633,7 +680,8 @@ union eth_tx_bd_types {
 /* The second tx bd of a given packet */
 	struct eth_tx_2nd_bd second_bd;
 	struct eth_tx_3rd_bd third_bd /* The third tx bd of a given packet */;
-	struct eth_tx_bd reg_bd /* The common non-special bd */;
+	struct eth_tx_4th_bd fourth_bd /* The fourth tx bd of a given packet */;
+	struct eth_tx_bd reg_bd /* The common regular bd */;
 };
 
 
@@ -653,6 +701,15 @@ enum eth_tx_tunn_type {
 };
 
 
+/*
+ * Mstorm Queue Zone
+ */
+struct mstorm_eth_queue_zone {
+	struct eth_rx_prod_data rx_producers /* ETH Rx producers data */;
+	__le32 reserved[3];
+};
+
+
 /*
  * Ystorm Queue Zone
  */
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 9277b46fa..91d889dc8 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1235,3 +1235,13 @@
 #define NIG_REG_PPF_TO_ENGINE_SEL 0x508900UL
 #define NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL 0x501b98UL
 #define NIG_REG_LLH_FUNC_FILTER_HDR_SEL 0x501b40UL
+
+#define MCP_REG_CACHE_PAGING_ENABLE 0xe06304UL
+#define PSWRQ2_REG_RESET_STT 0x240008UL
+#define PSWRQ2_REG_PRTY_STS_WR_H_0 0x240208UL
+#define PCI_EXP_DEVCTL_PAYLOAD 0x00e0
+#define PGLUE_B_REG_MASTER_DISCARD_NBLOCK 0x2aa58cUL
+#define PGLUE_B_REG_PRTY_STS_WR_H_0 0x2a8208UL
+#define DORQ_REG_VF_USAGE_CNT_LIM 0x1009ccUL
+#define PGLUE_B_REG_SR_IOV_DISABLED_REQUEST 0x2aa06cUL
+#define PGLUE_B_REG_SR_IOV_DISABLED_REQUEST_CLR 0x2aa070UL
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index d6382b62c..40a246229 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1602,17 +1602,17 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			/* Mark it as LRO packet */
 			ol_flags |= PKT_RX_LRO;
 			/* In split mode,  seg_len is same as len_on_first_bd
-			 * and ext_bd_len_list will be empty since there are
+			 * and bw_ext_bd_len_list will be empty since there are
 			 * no additional buffers
 			 */
 			PMD_RX_LOG(INFO, rxq,
-			    "TPA start[%d] - len_on_first_bd %d header %d"
-			    " [bd_list[0] %d], [seg_len %d]\n",
-			    cqe_start_tpa->tpa_agg_index,
-			    rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd),
-			    cqe_start_tpa->header_len,
-			    rte_le_to_cpu_16(cqe_start_tpa->ext_bd_len_list[0]),
-			    rte_le_to_cpu_16(cqe_start_tpa->seg_len));
+			 "TPA start[%d] - len_on_first_bd %d header %d"
+			 " [bd_list[0] %d], [seg_len %d]\n",
+			 cqe_start_tpa->tpa_agg_index,
+			 rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd),
+			 cqe_start_tpa->header_len,
+			 rte_le_to_cpu_16(cqe_start_tpa->bw_ext_bd_len_list[0]),
+			 rte_le_to_cpu_16(cqe_start_tpa->seg_len));
 
 		break;
 		case ETH_RX_CQE_TYPE_TPA_CONT:
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 8/9] net/qede/base: update the FW to 8.40.25.0
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (17 preceding siblings ...)
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 7/9] net/qede/base: update HSI code Rasesh Mody
@ 2019-10-06 20:14 ` Rasesh Mody
  2019-10-11 16:13   ` Ferruh Yigit
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 9/9] net/qede: print adapter info during init failure Rasesh Mody
  19 siblings, 1 reply; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

This patch updates the FW to 8.40.25.0 and includes the corresponding
base driver changes. It also updates the PMD version to 2.11.0.1. The
FW update consists of enhancements and fixes, as described below.

 - Fix the VF RX queue start ramrod getting stuck due to a completion
   error. Return EQ completion with an error when loading VF data fails,
   and use the VF FID in the RX queue start ramrod
 - Fix big receive buffer initialization for 100G to address a failure
   leading to a BRB hardware assertion
 - Fix GRE tunnel traffic not running when a non-L2 ethernet protocol is
   enabled; the FW no longer forwards tunneled SYN packets to LL2.
 - Fix the FW assert caused during vport_update when tx-switching is
   enabled
 - Add initial FW support for VF Representors
 - Add ecore_get_hsi_def_val() API to get default HSI values
 - Move the following from .c to .h files:
   TSTORM_QZONE_START and MSTORM_QZONE_START,
   enum ilt_clients;
   rename struct ecore_dma_mem to phys_mem_desc and move it as well
 - Add ecore_cxt_set_cli() and ecore_cxt_set_blk() APIs to set client
   config and block details
 - Use SET_FIELD() macro where appropriate
 - Address spell check and code alignment issues
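
As a side note, here is a minimal, self-contained sketch of the lookup
pattern behind the new ecore_get_hsi_def_val() API listed above. It is
illustrative only: the enum entries, table values and helper names are
invented for this example and are not the driver's actual definitions.

#include <stdint.h>
#include <stdio.h>

enum chip_family { CHIP_BB, CHIP_AH, CHIP_FAMILY_MAX };

enum hsi_def_type {
	HSI_DEF_MAX_NUM_VFS,
	HSI_DEF_MAX_NUM_L2_QUEUES,
	HSI_DEF_MAX_NUM_PORTS,
	HSI_DEF_MAX
};

/* One row per default type, one column per chip family (made-up values). */
static const uint32_t hsi_defaults[HSI_DEF_MAX][CHIP_FAMILY_MAX] = {
	[HSI_DEF_MAX_NUM_VFS]       = { 120, 192 },
	[HSI_DEF_MAX_NUM_L2_QUEUES] = { 256, 512 },
	[HSI_DEF_MAX_NUM_PORTS]     = {   2,   4 },
};

/* A single table lookup replaces per-constant "is this BB or AH?" ternaries. */
static uint32_t get_hsi_def_val(enum chip_family chip, enum hsi_def_type type)
{
	if (type >= HSI_DEF_MAX || chip >= CHIP_FAMILY_MAX)
		return 0; /* the real code would log/assert instead */
	return hsi_defaults[type][chip];
}

/* Thin macros can then hide the lookup from callers. */
#define NUM_OF_VFS(chip) get_hsi_def_val((chip), HSI_DEF_MAX_NUM_VFS)

int main(void)
{
	printf("BB VFs: %u, AH VFs: %u\n",
	       (unsigned int)NUM_OF_VFS(CHIP_BB),
	       (unsigned int)NUM_OF_VFS(CHIP_AH));
	return 0;
}

The driver wraps the real helper in macros such as NUM_OF_VFS(dev) (see
the ecore.h hunk below), so callers no longer hard-code the BB vs. AH
distinction for each constant.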

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/base/ecore.h               |  73 ++-
 drivers/net/qede/base/ecore_cxt.c           | 497 ++++++++------
 drivers/net/qede/base/ecore_cxt.h           |  12 +
 drivers/net/qede/base/ecore_dcbx.c          |   5 +-
 drivers/net/qede/base/ecore_dev.c           | 586 ++++++++++-------
 drivers/net/qede/base/ecore_init_fw_funcs.c | 681 ++++++++++----------
 drivers/net/qede/base/ecore_init_fw_funcs.h | 107 ++-
 drivers/net/qede/base/ecore_init_ops.c      |  15 +-
 drivers/net/qede/base/ecore_init_ops.h      |   2 +-
 drivers/net/qede/base/ecore_int.c           | 129 ++--
 drivers/net/qede/base/ecore_int_api.h       |  11 +-
 drivers/net/qede/base/ecore_l2.c            |  10 +-
 drivers/net/qede/base/ecore_l2_api.h        |   2 +
 drivers/net/qede/base/ecore_mcp.c           | 287 +++++----
 drivers/net/qede/base/ecore_mcp.h           |   9 +-
 drivers/net/qede/base/ecore_proto_if.h      |   1 +
 drivers/net/qede/base/ecore_sp_commands.c   |  15 +-
 drivers/net/qede/base/ecore_spq.c           |  53 +-
 drivers/net/qede/base/ecore_sriov.c         | 157 +++--
 drivers/net/qede/base/ecore_vf.c            |  18 +-
 drivers/net/qede/qede_ethdev.h              |   2 +-
 drivers/net/qede/qede_main.c                |   2 +-
 drivers/net/qede/qede_rxtx.c                |   4 +-
 23 files changed, 1584 insertions(+), 1094 deletions(-)

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b1d8706c9..925b75cb9 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -28,8 +28,8 @@
 #include "mcp_public.h"
 
 #define ECORE_MAJOR_VERSION		8
-#define ECORE_MINOR_VERSION		37
-#define ECORE_REVISION_VERSION		20
+#define ECORE_MINOR_VERSION		40
+#define ECORE_REVISION_VERSION		18
 #define ECORE_ENGINEERING_VERSION	0
 
 #define ECORE_VERSION							\
@@ -467,6 +467,8 @@ struct ecore_wfq_data {
 	bool configured;
 };
 
+#define OFLD_GRP_SIZE 4
+
 struct ecore_qm_info {
 	struct init_qm_pq_params    *qm_pq_params;
 	struct init_qm_vport_params *qm_vport_params;
@@ -513,6 +515,8 @@ struct ecore_fw_data {
 	const u8 *modes_tree_buf;
 	union init_op *init_ops;
 	const u32 *arr_data;
+	const u32 *fw_overlays;
+	u32 fw_overlays_len;
 	u32 init_ops_size;
 };
 
@@ -592,6 +596,7 @@ struct ecore_hwfn {
 
 	u8				num_funcs_on_engine;
 	u8				enabled_func_idx;
+	u8				num_funcs_on_port;
 
 	/* BAR access */
 	void OSAL_IOMEM			*regview;
@@ -745,7 +750,6 @@ struct ecore_dev {
 #endif
 #define ECORE_IS_AH(dev)	((dev)->type == ECORE_DEV_TYPE_AH)
 #define ECORE_IS_K2(dev)	ECORE_IS_AH(dev)
-#define ECORE_IS_E4(dev)	(ECORE_IS_BB(dev) || ECORE_IS_AH(dev))
 
 	u16 vendor_id;
 	u16 device_id;
@@ -893,6 +897,7 @@ struct ecore_dev {
 
 #ifndef ASIC_ONLY
 	bool				b_is_emul_full;
+	bool				b_is_emul_mac;
 #endif
 	/* LLH info */
 	u8				ppfid_bitmap;
@@ -911,16 +916,52 @@ struct ecore_dev {
 	u8				engine_for_debug;
 };
 
-#define NUM_OF_VFS(dev)		(ECORE_IS_BB(dev) ? MAX_NUM_VFS_BB \
-						  : MAX_NUM_VFS_K2)
-#define NUM_OF_L2_QUEUES(dev)	(ECORE_IS_BB(dev) ? MAX_NUM_L2_QUEUES_BB \
-						  : MAX_NUM_L2_QUEUES_K2)
-#define NUM_OF_PORTS(dev)	(ECORE_IS_BB(dev) ? MAX_NUM_PORTS_BB \
-						  : MAX_NUM_PORTS_K2)
-#define NUM_OF_SBS(dev)		(ECORE_IS_BB(dev) ? MAX_SB_PER_PATH_BB \
-						  : MAX_SB_PER_PATH_K2)
-#define NUM_OF_ENG_PFS(dev)	(ECORE_IS_BB(dev) ? MAX_NUM_PFS_BB \
-						  : MAX_NUM_PFS_K2)
+enum ecore_hsi_def_type {
+	ECORE_HSI_DEF_MAX_NUM_VFS,
+	ECORE_HSI_DEF_MAX_NUM_L2_QUEUES,
+	ECORE_HSI_DEF_MAX_NUM_PORTS,
+	ECORE_HSI_DEF_MAX_SB_PER_PATH,
+	ECORE_HSI_DEF_MAX_NUM_PFS,
+	ECORE_HSI_DEF_MAX_NUM_VPORTS,
+	ECORE_HSI_DEF_NUM_ETH_RSS_ENGINE,
+	ECORE_HSI_DEF_MAX_QM_TX_QUEUES,
+	ECORE_HSI_DEF_NUM_PXP_ILT_RECORDS,
+	ECORE_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS,
+	ECORE_HSI_DEF_MAX_QM_GLOBAL_RLS,
+	ECORE_HSI_DEF_MAX_PBF_CMD_LINES,
+	ECORE_HSI_DEF_MAX_BTB_BLOCKS,
+	ECORE_NUM_HSI_DEFS
+};
+
+u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev,
+			  enum ecore_hsi_def_type type);
+
+#define NUM_OF_VFS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_VFS)
+#define NUM_OF_L2_QUEUES(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_L2_QUEUES)
+#define NUM_OF_PORTS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_PORTS)
+#define NUM_OF_SBS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_SB_PER_PATH)
+#define NUM_OF_ENG_PFS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_PFS)
+#define NUM_OF_VPORTS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_NUM_VPORTS)
+#define NUM_OF_RSS_ENGINES(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_NUM_ETH_RSS_ENGINE)
+#define NUM_OF_QM_TX_QUEUES(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_QM_TX_QUEUES)
+#define NUM_OF_PXP_ILT_RECORDS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_NUM_PXP_ILT_RECORDS)
+#define NUM_OF_RDMA_STATISTIC_COUNTERS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS)
+#define NUM_OF_QM_GLOBAL_RLS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_QM_GLOBAL_RLS)
+#define NUM_OF_PBF_CMD_LINES(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_PBF_CMD_LINES)
+#define NUM_OF_BTB_BLOCKS(dev) \
+	ecore_get_hsi_def_val(dev, ECORE_HSI_DEF_MAX_BTB_BLOCKS)
 
 #define CRC8_TABLE_SIZE 256
 
@@ -948,7 +989,6 @@ static OSAL_INLINE u8 ecore_concrete_to_sw_fid(u32 concrete_fid)
 }
 
 #define PKT_LB_TC 9
-#define MAX_NUM_VOQS_E4 20
 
 int ecore_configure_vport_wfq(struct ecore_dev *p_dev, u16 vp_id, u32 rate);
 void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
@@ -1023,4 +1063,9 @@ enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid);
 enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev);
 
+#define TSTORM_QZONE_START	PXP_VF_BAR0_START_SDM_ZONE_A
+
+#define MSTORM_QZONE_START(dev) \
+	(TSTORM_QZONE_START + (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
+
 #endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 0f04c9447..773b75ecd 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -33,6 +33,10 @@
 /* Searcher constants */
 #define SRC_MIN_NUM_ELEMS 256
 
+/* GFS constants */
+#define RGFS_MIN_NUM_ELEMS	256
+#define TGFS_MIN_NUM_ELEMS	256
+
 /* Timers constants */
 #define TM_SHIFT	7
 #define TM_ALIGN	(1 << TM_SHIFT)
@@ -114,16 +118,6 @@ struct ecore_conn_type_cfg {
 #define CDUT_SEG_BLK(n)		(1 + (u8)(n))
 #define CDUT_FL_SEG_BLK(n, X)	(1 + (n) + NUM_TASK_##X##_SEGMENTS)
 
-enum ilt_clients {
-	ILT_CLI_CDUC,
-	ILT_CLI_CDUT,
-	ILT_CLI_QM,
-	ILT_CLI_TM,
-	ILT_CLI_SRC,
-	ILT_CLI_TSDM,
-	ILT_CLI_MAX
-};
-
 struct ilt_cfg_pair {
 	u32 reg;
 	u32 val;
@@ -133,6 +127,7 @@ struct ecore_ilt_cli_blk {
 	u32 total_size;		/* 0 means not active */
 	u32 real_size_in_page;
 	u32 start_line;
+	u32 dynamic_line_offset;
 	u32 dynamic_line_cnt;
 };
 
@@ -153,17 +148,6 @@ struct ecore_ilt_client_cfg {
 	u32 vf_total_lines;
 };
 
-/* Per Path -
- *      ILT shadow table
- *      Protocol acquired CID lists
- *      PF start line in ILT
- */
-struct ecore_dma_mem {
-	dma_addr_t p_phys;
-	void *p_virt;
-	osal_size_t size;
-};
-
 #define MAP_WORD_SIZE		sizeof(unsigned long)
 #define BITS_PER_MAP_WORD	(MAP_WORD_SIZE * 8)
 
@@ -173,6 +157,13 @@ struct ecore_cid_acquired_map {
 	unsigned long *cid_map;
 };
 
+struct ecore_src_t2 {
+	struct phys_mem_desc	*dma_mem;
+	u32			num_pages;
+	u64			first_free;
+	u64			last_free;
+};
+
 struct ecore_cxt_mngr {
 	/* Per protocl configuration */
 	struct ecore_conn_type_cfg conn_cfg[MAX_CONN_TYPES];
@@ -193,17 +184,14 @@ struct ecore_cxt_mngr {
 	struct ecore_cid_acquired_map *acquired_vf[MAX_CONN_TYPES];
 
 	/* ILT  shadow table */
-	struct ecore_dma_mem *ilt_shadow;
+	struct phys_mem_desc		*ilt_shadow;
 	u32 pf_start_line;
 
 	/* Mutex for a dynamic ILT allocation */
 	osal_mutex_t mutex;
 
 	/* SRC T2 */
-	struct ecore_dma_mem *t2;
-	u32 t2_num_pages;
-	u64 first_free;
-	u64 last_free;
+	struct ecore_src_t2		src_t2;
 
 	/* The infrastructure originally was very generic and context/task
 	 * oriented - per connection-type we would set how many of those
@@ -280,15 +268,17 @@ struct ecore_tm_iids {
 	u32 per_vf_tids;
 };
 
-static void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
+static void ecore_cxt_tm_iids(struct ecore_hwfn *p_hwfn,
+			      struct ecore_cxt_mngr *p_mngr,
 			      struct ecore_tm_iids *iids)
 {
+	struct ecore_conn_type_cfg *p_cfg;
 	bool tm_vf_required = false;
 	bool tm_required = false;
 	u32 i, j;
 
 	for (i = 0; i < MAX_CONN_TYPES; i++) {
-		struct ecore_conn_type_cfg *p_cfg = &p_mngr->conn_cfg[i];
+		p_cfg = &p_mngr->conn_cfg[i];
 
 		if (tm_cid_proto(i) || tm_required) {
 			if (p_cfg->cid_count)
@@ -490,43 +480,84 @@ static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn,
 		   p_blk->start_line);
 }
 
-static u32 ecore_ilt_get_dynamic_line_cnt(struct ecore_hwfn *p_hwfn,
-					  enum ilt_clients ilt_client)
+static void ecore_ilt_get_dynamic_line_range(struct ecore_hwfn *p_hwfn,
+					     enum ilt_clients ilt_client,
+					     u32 *dynamic_line_offset,
+					     u32 *dynamic_line_cnt)
 {
-	u32 cid_count = p_hwfn->p_cxt_mngr->conn_cfg[PROTOCOLID_ROCE].cid_count;
 	struct ecore_ilt_client_cfg *p_cli;
-	u32 lines_to_skip = 0;
+	struct ecore_conn_type_cfg *p_cfg;
 	u32 cxts_per_p;
 
 	/* TBD MK: ILT code should be simplified once PROTO enum is changed */
 
+	*dynamic_line_offset = 0;
+	*dynamic_line_cnt = 0;
+
 	if (ilt_client == ILT_CLI_CDUC) {
 		p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
+		p_cfg = &p_hwfn->p_cxt_mngr->conn_cfg[PROTOCOLID_ROCE];
 
 		cxts_per_p = ILT_PAGE_IN_BYTES(p_cli->p_size.val) /
 		    (u32)CONN_CXT_SIZE(p_hwfn);
 
-		lines_to_skip = cid_count / cxts_per_p;
+		*dynamic_line_cnt = p_cfg->cid_count / cxts_per_p;
+	}
+}
+
+static struct ecore_ilt_client_cfg *
+ecore_cxt_set_cli(struct ecore_ilt_client_cfg *p_cli)
+{
+	p_cli->active = false;
+	p_cli->first.val = 0;
+	p_cli->last.val = 0;
+	return p_cli;
+}
+
+static struct ecore_ilt_cli_blk *
+ecore_cxt_set_blk(struct ecore_ilt_cli_blk *p_blk)
+{
+	p_blk->total_size = 0;
+	return p_blk;
 	}
 
-	return lines_to_skip;
+static u32
+ecore_cxt_src_elements(struct ecore_cxt_mngr *p_mngr)
+{
+	struct ecore_src_iids src_iids;
+	u32 elem_num = 0;
+
+	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
+	ecore_cxt_src_iids(p_mngr, &src_iids);
+
+	/* Both the PF and VFs searcher connections are stored in the per PF
+	 * database. Thus sum the PF searcher cids and all the VFs searcher
+	 * cids.
+	 */
+	elem_num = src_iids.pf_cids +
+		   src_iids.per_vf_cids * p_mngr->vf_count;
+	if (elem_num == 0)
+		return elem_num;
+
+	elem_num = OSAL_MAX_T(u32, elem_num, SRC_MIN_NUM_ELEMS);
+	elem_num = OSAL_ROUNDUP_POW_OF_TWO(elem_num);
+
+	return elem_num;
 }
 
 enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 {
+	u32 curr_line, total, i, task_size, line, total_size, elem_size;
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 curr_line, total, i, task_size, line;
 	struct ecore_ilt_client_cfg *p_cli;
 	struct ecore_ilt_cli_blk *p_blk;
 	struct ecore_cdu_iids cdu_iids;
-	struct ecore_src_iids src_iids;
 	struct ecore_qm_iids qm_iids;
 	struct ecore_tm_iids tm_iids;
 	struct ecore_tid_seg *p_seg;
 
 	OSAL_MEM_ZERO(&qm_iids, sizeof(qm_iids));
 	OSAL_MEM_ZERO(&cdu_iids, sizeof(cdu_iids));
-	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
 	OSAL_MEM_ZERO(&tm_iids, sizeof(tm_iids));
 
 	p_mngr->pf_start_line = RESC_START(p_hwfn, ECORE_ILT);
@@ -536,7 +567,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		   p_hwfn->my_id, p_hwfn->p_cxt_mngr->pf_start_line);
 
 	/* CDUC */
-	p_cli = &p_mngr->clients[ILT_CLI_CDUC];
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_CDUC]);
 
 	curr_line = p_mngr->pf_start_line;
 
@@ -546,7 +577,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	/* get the counters for the CDUC,CDUC and QM clients  */
 	ecore_cxt_cdu_iids(p_mngr, &cdu_iids);
 
-	p_blk = &p_cli->pf_blks[CDUC_BLK];
+	p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[CDUC_BLK]);
 
 	total = cdu_iids.pf_cids * CONN_CXT_SIZE(p_hwfn);
 
@@ -556,11 +587,12 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_CDUC);
 	p_cli->pf_total_lines = curr_line - p_blk->start_line;
 
-	p_blk->dynamic_line_cnt = ecore_ilt_get_dynamic_line_cnt(p_hwfn,
-								 ILT_CLI_CDUC);
+	ecore_ilt_get_dynamic_line_range(p_hwfn, ILT_CLI_CDUC,
+					 &p_blk->dynamic_line_offset,
+					 &p_blk->dynamic_line_cnt);
 
 	/* CDUC VF */
-	p_blk = &p_cli->vf_blks[CDUC_BLK];
+	p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[CDUC_BLK]);
 	total = cdu_iids.per_vf_cids * CONN_CXT_SIZE(p_hwfn);
 
 	ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
@@ -574,7 +606,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 				       ILT_CLI_CDUC);
 
 	/* CDUT PF */
-	p_cli = &p_mngr->clients[ILT_CLI_CDUT];
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_CDUT]);
 	p_cli->first.val = curr_line;
 
 	/* first the 'working' task memory */
@@ -583,7 +615,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		if (!p_seg || p_seg->count == 0)
 			continue;
 
-		p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(i)];
+		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[CDUT_SEG_BLK(i)]);
 		total = p_seg->count * p_mngr->task_type_size[p_seg->type];
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line, total,
 				       p_mngr->task_type_size[p_seg->type]);
@@ -598,7 +630,8 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		if (!p_seg || p_seg->count == 0)
 			continue;
 
-		p_blk = &p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)];
+		p_blk =
+		     ecore_cxt_set_blk(&p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)]);
 
 		if (!p_seg->has_fl_mem) {
 			/* The segment is active (total size pf 'working'
@@ -631,7 +664,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 				       ILT_CLI_CDUT);
 	}
-	p_cli->pf_total_lines = curr_line - p_cli->pf_blks[0].start_line;
+	p_cli->pf_total_lines = curr_line - p_cli->first.val;
 
 	/* CDUT VF */
 	p_seg = ecore_cxt_tid_seg_info(p_hwfn, TASK_SEGMENT_VF);
@@ -643,7 +676,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		/* 'working' memory */
 		total = p_seg->count * p_mngr->task_type_size[p_seg->type];
 
-		p_blk = &p_cli->vf_blks[CDUT_SEG_BLK(0)];
+		p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[CDUT_SEG_BLK(0)]);
 		ecore_ilt_cli_blk_fill(p_cli, p_blk,
 				       curr_line, total,
 				       p_mngr->task_type_size[p_seg->type]);
@@ -652,7 +685,8 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 				       ILT_CLI_CDUT);
 
 		/* 'init' memory */
-		p_blk = &p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)];
+		p_blk =
+		     ecore_cxt_set_blk(&p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)]);
 		if (!p_seg->has_fl_mem) {
 			/* see comment above */
 			line = p_cli->vf_blks[CDUT_SEG_BLK(0)].start_line;
@@ -664,15 +698,17 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 			ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 					       ILT_CLI_CDUT);
 		}
-		p_cli->vf_total_lines = curr_line -
-		    p_cli->vf_blks[0].start_line;
+		p_cli->vf_total_lines = curr_line - (p_cli->first.val +
+						     p_cli->pf_total_lines);
 
 		/* Now for the rest of the VFs */
 		for (i = 1; i < p_mngr->vf_count; i++) {
+			/* don't set p_blk i.e. don't clear total_size */
 			p_blk = &p_cli->vf_blks[CDUT_SEG_BLK(0)];
 			ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 					       ILT_CLI_CDUT);
 
+			/* don't set p_blk i.e. don't clear total_size */
 			p_blk = &p_cli->vf_blks[CDUT_FL_SEG_BLK(0, VF)];
 			ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 					       ILT_CLI_CDUT);
@@ -680,13 +716,19 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	}
 
 	/* QM */
-	p_cli = &p_mngr->clients[ILT_CLI_QM];
-	p_blk = &p_cli->pf_blks[0];
-
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_QM]);
+	p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]);
+
+	/* At this stage, after the first QM configuration, the PF PQs amount
+	 * is the highest possible. Save this value at qm_info->ilt_pf_pqs to
+	 * detect overflows in the future.
+	 * Even though VF PQs amount can be larger than VF count, use vf_count
+	 * because each VF requires only the full amount of CIDs.
+	 */
 	ecore_cxt_qm_iids(p_hwfn, &qm_iids);
-	total = ecore_qm_pf_mem_size(qm_iids.cids,
+	total = ecore_qm_pf_mem_size(p_hwfn, qm_iids.cids,
 				     qm_iids.vf_cids, qm_iids.tids,
-				     p_hwfn->qm_info.num_pqs,
+				     p_hwfn->qm_info.num_pqs + OFLD_GRP_SIZE,
 				     p_hwfn->qm_info.num_vf_pqs);
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
@@ -701,39 +743,15 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_QM);
 	p_cli->pf_total_lines = curr_line - p_blk->start_line;
 
-	/* SRC */
-	p_cli = &p_mngr->clients[ILT_CLI_SRC];
-	ecore_cxt_src_iids(p_mngr, &src_iids);
-
-	/* Both the PF and VFs searcher connections are stored in the per PF
-	 * database. Thus sum the PF searcher cids and all the VFs searcher
-	 * cids.
-	 */
-	total = src_iids.pf_cids + src_iids.per_vf_cids * p_mngr->vf_count;
-	if (total) {
-		u32 local_max = OSAL_MAX_T(u32, total,
-					   SRC_MIN_NUM_ELEMS);
-
-		total = OSAL_ROUNDUP_POW_OF_TWO(local_max);
-
-		p_blk = &p_cli->pf_blks[0];
-		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
-				       total * sizeof(struct src_ent),
-				       sizeof(struct src_ent));
-
-		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
-				       ILT_CLI_SRC);
-		p_cli->pf_total_lines = curr_line - p_blk->start_line;
-	}
-
 	/* TM PF */
-	p_cli = &p_mngr->clients[ILT_CLI_TM];
-	ecore_cxt_tm_iids(p_mngr, &tm_iids);
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_TM]);
+	ecore_cxt_tm_iids(p_hwfn, p_mngr, &tm_iids);
 	total = tm_iids.pf_cids + tm_iids.pf_tids_total;
 	if (total) {
-		p_blk = &p_cli->pf_blks[0];
+		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]);
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
-				       total * TM_ELEM_SIZE, TM_ELEM_SIZE);
+				       total * TM_ELEM_SIZE,
+				       TM_ELEM_SIZE);
 
 		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
 				       ILT_CLI_TM);
@@ -743,7 +761,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 	/* TM VF */
 	total = tm_iids.per_vf_cids + tm_iids.per_vf_tids;
 	if (total) {
-		p_blk = &p_cli->vf_blks[0];
+		p_blk = ecore_cxt_set_blk(&p_cli->vf_blks[0]);
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
 				       total * TM_ELEM_SIZE, TM_ELEM_SIZE);
 
@@ -757,12 +775,28 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 		}
 	}
 
+	/* SRC */
+	p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_SRC]);
+	total = ecore_cxt_src_elements(p_mngr);
+
+	if (total) {
+		total_size = total * sizeof(struct src_ent);
+		elem_size = sizeof(struct src_ent);
+
+		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[0]);
+		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
+				       total_size, elem_size);
+		ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
+				       ILT_CLI_SRC);
+		p_cli->pf_total_lines = curr_line - p_blk->start_line;
+	}
+
 	/* TSDM (SRQ CONTEXT) */
 	total = ecore_cxt_get_srq_count(p_hwfn);
 
 	if (total) {
-		p_cli = &p_mngr->clients[ILT_CLI_TSDM];
-		p_blk = &p_cli->pf_blks[SRQ_BLK];
+		p_cli = ecore_cxt_set_cli(&p_mngr->clients[ILT_CLI_TSDM]);
+		p_blk = ecore_cxt_set_blk(&p_cli->pf_blks[SRQ_BLK]);
 		ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
 				       total * SRQ_CXT_SIZE, SRQ_CXT_SIZE);
 
@@ -783,29 +817,60 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
 
 static void ecore_cxt_src_t2_free(struct ecore_hwfn *p_hwfn)
 {
-	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	struct ecore_src_t2 *p_t2 = &p_hwfn->p_cxt_mngr->src_t2;
 	u32 i;
 
-	if (!p_mngr->t2)
+	if (!p_t2 || !p_t2->dma_mem)
 		return;
 
-	for (i = 0; i < p_mngr->t2_num_pages; i++)
-		if (p_mngr->t2[i].p_virt)
+	for (i = 0; i < p_t2->num_pages; i++)
+		if (p_t2->dma_mem[i].virt_addr)
 			OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
-					       p_mngr->t2[i].p_virt,
-					       p_mngr->t2[i].p_phys,
-					       p_mngr->t2[i].size);
+					       p_t2->dma_mem[i].virt_addr,
+					       p_t2->dma_mem[i].phys_addr,
+					       p_t2->dma_mem[i].size);
 
-	OSAL_FREE(p_hwfn->p_dev, p_mngr->t2);
+	OSAL_FREE(p_hwfn->p_dev, p_t2->dma_mem);
+	p_t2->dma_mem = OSAL_NULL;
+}
+
+static enum _ecore_status_t
+ecore_cxt_t2_alloc_pages(struct ecore_hwfn *p_hwfn,
+			 struct ecore_src_t2 *p_t2,
+			 u32 total_size, u32 page_size)
+{
+	void **p_virt;
+	u32 size, i;
+
+	if (!p_t2 || !p_t2->dma_mem)
+		return ECORE_INVAL;
+
+	for (i = 0; i < p_t2->num_pages; i++) {
+		size = OSAL_MIN_T(u32, total_size, page_size);
+		p_virt = &p_t2->dma_mem[i].virt_addr;
+
+		*p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+						  &p_t2->dma_mem[i].phys_addr,
+						  size);
+		if (!p_t2->dma_mem[i].virt_addr)
+			return ECORE_NOMEM;
+
+		OSAL_MEM_ZERO(*p_virt, size);
+		p_t2->dma_mem[i].size = size;
+		total_size -= size;
+	}
+
+	return ECORE_SUCCESS;
 }
 
 static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
 	u32 conn_num, total_size, ent_per_page, psz, i;
+	struct phys_mem_desc *p_t2_last_page;
 	struct ecore_ilt_client_cfg *p_src;
 	struct ecore_src_iids src_iids;
-	struct ecore_dma_mem *p_t2;
+	struct ecore_src_t2 *p_t2;
 	enum _ecore_status_t rc;
 
 	OSAL_MEM_ZERO(&src_iids, sizeof(src_iids));
@@ -823,49 +888,39 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 
 	/* use the same page size as the SRC ILT client */
 	psz = ILT_PAGE_IN_BYTES(p_src->p_size.val);
-	p_mngr->t2_num_pages = DIV_ROUND_UP(total_size, psz);
+	p_t2 = &p_mngr->src_t2;
+	p_t2->num_pages = DIV_ROUND_UP(total_size, psz);
 
 	/* allocate t2 */
-	p_mngr->t2 = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-				 p_mngr->t2_num_pages *
-				 sizeof(struct ecore_dma_mem));
-	if (!p_mngr->t2) {
+	p_t2->dma_mem = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
+				    p_t2->num_pages *
+				    sizeof(struct phys_mem_desc));
+	if (!p_t2->dma_mem) {
 		DP_NOTICE(p_hwfn, false, "Failed to allocate t2 table\n");
 		rc = ECORE_NOMEM;
 		goto t2_fail;
 	}
 
-	/* allocate t2 pages */
-	for (i = 0; i < p_mngr->t2_num_pages; i++) {
-		u32 size = OSAL_MIN_T(u32, total_size, psz);
-		void **p_virt = &p_mngr->t2[i].p_virt;
-
-		*p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
-						  &p_mngr->t2[i].p_phys, size);
-		if (!p_mngr->t2[i].p_virt) {
-			rc = ECORE_NOMEM;
-			goto t2_fail;
-		}
-		OSAL_MEM_ZERO(*p_virt, size);
-		p_mngr->t2[i].size = size;
-		total_size -= size;
-	}
+	rc = ecore_cxt_t2_alloc_pages(p_hwfn, p_t2, total_size, psz);
+	if (rc)
+		goto t2_fail;
 
 	/* Set the t2 pointers */
 
 	/* entries per page - must be a power of two */
 	ent_per_page = psz / sizeof(struct src_ent);
 
-	p_mngr->first_free = (u64)p_mngr->t2[0].p_phys;
+	p_t2->first_free = (u64)p_t2->dma_mem[0].phys_addr;
 
-	p_t2 = &p_mngr->t2[(conn_num - 1) / ent_per_page];
-	p_mngr->last_free = (u64)p_t2->p_phys +
-	    ((conn_num - 1) & (ent_per_page - 1)) * sizeof(struct src_ent);
+	p_t2_last_page = &p_t2->dma_mem[(conn_num - 1) / ent_per_page];
+	p_t2->last_free = (u64)p_t2_last_page->phys_addr +
+			  ((conn_num - 1) & (ent_per_page - 1)) *
+			  sizeof(struct src_ent);
 
-	for (i = 0; i < p_mngr->t2_num_pages; i++) {
+	for (i = 0; i < p_t2->num_pages; i++) {
 		u32 ent_num = OSAL_MIN_T(u32, ent_per_page, conn_num);
-		struct src_ent *entries = p_mngr->t2[i].p_virt;
-		u64 p_ent_phys = (u64)p_mngr->t2[i].p_phys, val;
+		struct src_ent *entries = p_t2->dma_mem[i].virt_addr;
+		u64 p_ent_phys = (u64)p_t2->dma_mem[i].phys_addr, val;
 		u32 j;
 
 		for (j = 0; j < ent_num - 1; j++) {
@@ -873,8 +928,8 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 			entries[j].next = OSAL_CPU_TO_BE64(val);
 		}
 
-		if (i < p_mngr->t2_num_pages - 1)
-			val = (u64)p_mngr->t2[i + 1].p_phys;
+		if (i < p_t2->num_pages - 1)
+			val = (u64)p_t2->dma_mem[i + 1].phys_addr;
 		else
 			val = 0;
 		entries[j].next = OSAL_CPU_TO_BE64(val);
@@ -921,13 +976,13 @@ static void ecore_ilt_shadow_free(struct ecore_hwfn *p_hwfn)
 	ilt_size = ecore_cxt_ilt_shadow_size(p_cli);
 
 	for (i = 0; p_mngr->ilt_shadow && i < ilt_size; i++) {
-		struct ecore_dma_mem *p_dma = &p_mngr->ilt_shadow[i];
+		struct phys_mem_desc *p_dma = &p_mngr->ilt_shadow[i];
 
-		if (p_dma->p_virt)
+		if (p_dma->virt_addr)
 			OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
 					       p_dma->p_virt,
-					       p_dma->p_phys, p_dma->size);
-		p_dma->p_virt = OSAL_NULL;
+					       p_dma->phys_addr, p_dma->size);
+		p_dma->virt_addr = OSAL_NULL;
 	}
 	OSAL_FREE(p_hwfn->p_dev, p_mngr->ilt_shadow);
 	p_mngr->ilt_shadow = OSAL_NULL;
@@ -938,28 +993,33 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 		    struct ecore_ilt_cli_blk *p_blk,
 		    enum ilt_clients ilt_client, u32 start_line_offset)
 {
-	struct ecore_dma_mem *ilt_shadow = p_hwfn->p_cxt_mngr->ilt_shadow;
-	u32 lines, line, sz_left, lines_to_skip = 0;
+	struct phys_mem_desc *ilt_shadow = p_hwfn->p_cxt_mngr->ilt_shadow;
+	u32 lines, line, sz_left, lines_to_skip, first_skipped_line;
 
 	/* Special handling for RoCE that supports dynamic allocation */
 	if (ilt_client == ILT_CLI_CDUT || ilt_client == ILT_CLI_TSDM)
 		return ECORE_SUCCESS;
 
-	lines_to_skip = p_blk->dynamic_line_cnt;
-
 	if (!p_blk->total_size)
 		return ECORE_SUCCESS;
 
 	sz_left = p_blk->total_size;
+	lines_to_skip = p_blk->dynamic_line_cnt;
 	lines = DIV_ROUND_UP(sz_left, p_blk->real_size_in_page) - lines_to_skip;
 	line = p_blk->start_line + start_line_offset -
-	    p_hwfn->p_cxt_mngr->pf_start_line + lines_to_skip;
+	       p_hwfn->p_cxt_mngr->pf_start_line;
+	first_skipped_line = line + p_blk->dynamic_line_offset;
 
-	for (; lines; lines--) {
+	while (lines) {
 		dma_addr_t p_phys;
 		void *p_virt;
 		u32 size;
 
+		if (lines_to_skip && (line == first_skipped_line)) {
+			line += lines_to_skip;
+			continue;
+		}
+
 		size = OSAL_MIN_T(u32, sz_left, p_blk->real_size_in_page);
 
 /* @DPDK */
@@ -971,8 +1031,8 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 			return ECORE_NOMEM;
 		OSAL_MEM_ZERO(p_virt, size);
 
-		ilt_shadow[line].p_phys = p_phys;
-		ilt_shadow[line].p_virt = p_virt;
+		ilt_shadow[line].phys_addr = p_phys;
+		ilt_shadow[line].virt_addr = p_virt;
 		ilt_shadow[line].size = size;
 
 		DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
@@ -982,6 +1042,7 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
 
 		sz_left -= size;
 		line++;
+		lines--;
 	}
 
 	return ECORE_SUCCESS;
@@ -997,7 +1058,7 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 
 	size = ecore_cxt_ilt_shadow_size(clients);
 	p_mngr->ilt_shadow = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
-					 size * sizeof(struct ecore_dma_mem));
+					 size * sizeof(struct phys_mem_desc));
 
 	if (!p_mngr->ilt_shadow) {
 		DP_NOTICE(p_hwfn, false, "Failed to allocate ilt shadow table\n");
@@ -1007,7 +1068,7 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
 		   "Allocated 0x%x bytes for ilt shadow\n",
-		   (u32)(size * sizeof(struct ecore_dma_mem)));
+		   (u32)(size * sizeof(struct phys_mem_desc)));
 
 	for_each_ilt_valid_client(i, clients) {
 		for (j = 0; j < ILT_CLI_PF_BLOCKS; j++) {
@@ -1058,7 +1119,7 @@ static void ecore_cid_map_free(struct ecore_hwfn *p_hwfn)
 }
 
 static enum _ecore_status_t
-ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
+__ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
 			   u32 cid_start, u32 cid_count,
 			   struct ecore_cid_acquired_map *p_map)
 {
@@ -1082,49 +1143,67 @@ ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
+static enum _ecore_status_t
+ecore_cid_map_alloc_single(struct ecore_hwfn *p_hwfn, u32 type, u32 start_cid,
+			   u32 vf_start_cid)
 {
 	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
-	u32 max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
-	u32 start_cid = 0, vf_start_cid = 0;
-	u32 type, vf;
+	u32 vf, max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
+	struct ecore_cid_acquired_map *p_map;
+	struct ecore_conn_type_cfg *p_cfg;
+	enum _ecore_status_t rc;
 
-	for (type = 0; type < MAX_CONN_TYPES; type++) {
-		struct ecore_conn_type_cfg *p_cfg = &p_mngr->conn_cfg[type];
-		struct ecore_cid_acquired_map *p_map;
+	p_cfg = &p_mngr->conn_cfg[type];
 
 		/* Handle PF maps */
 		p_map = &p_mngr->acquired[type];
-		if (ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
-					       p_cfg->cid_count, p_map))
-			goto cid_map_fail;
+	rc = __ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
+					  p_cfg->cid_count, p_map);
+	if (rc != ECORE_SUCCESS)
+		return rc;
+
+	/* Handle VF maps */
+	for (vf = 0; vf < max_num_vfs; vf++) {
+		p_map = &p_mngr->acquired_vf[type][vf];
+		rc = __ecore_cid_map_alloc_single(p_hwfn, type, vf_start_cid,
+						  p_cfg->cids_per_vf, p_map);
+		if (rc != ECORE_SUCCESS)
+			return rc;
+	}
 
-		/* Handle VF maps */
-		for (vf = 0; vf < max_num_vfs; vf++) {
-			p_map = &p_mngr->acquired_vf[type][vf];
-			if (ecore_cid_map_alloc_single(p_hwfn, type,
-						       vf_start_cid,
-						       p_cfg->cids_per_vf,
-						       p_map))
-				goto cid_map_fail;
-		}
+	return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_cid_map_alloc(struct ecore_hwfn *p_hwfn)
+{
+	struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+	u32 start_cid = 0, vf_start_cid = 0;
+	u32 type;
+	enum _ecore_status_t rc;
+
+	for (type = 0; type < MAX_CONN_TYPES; type++) {
+		rc = ecore_cid_map_alloc_single(p_hwfn, type, start_cid,
+						vf_start_cid);
+		if (rc != ECORE_SUCCESS)
+			goto cid_map_fail;
 
-		start_cid += p_cfg->cid_count;
-		vf_start_cid += p_cfg->cids_per_vf;
+		start_cid += p_mngr->conn_cfg[type].cid_count;
+		vf_start_cid += p_mngr->conn_cfg[type].cids_per_vf;
 	}
 
 	return ECORE_SUCCESS;
 
 cid_map_fail:
 	ecore_cid_map_free(p_hwfn);
-	return ECORE_NOMEM;
+	return rc;
 }
 
 enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 {
+	struct ecore_cid_acquired_map *acquired_vf;
 	struct ecore_ilt_client_cfg *clients;
 	struct ecore_cxt_mngr *p_mngr;
-	u32 i;
+	u32 i, max_num_vfs;
 
 	p_mngr = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_mngr));
 	if (!p_mngr) {
@@ -1132,9 +1211,6 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 		return ECORE_NOMEM;
 	}
 
-	/* Set the cxt mangr pointer prior to further allocations */
-	p_hwfn->p_cxt_mngr = p_mngr;
-
 	/* Initialize ILT client registers */
 	clients = p_mngr->clients;
 	clients[ILT_CLI_CDUC].first.reg = ILT_CFG_REG(CDUC, FIRST_ILT);
@@ -1183,6 +1259,22 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 #endif
 	OSAL_MUTEX_INIT(&p_mngr->mutex);
 
+	/* Set the cxt mangr pointer prior to further allocations */
+	p_hwfn->p_cxt_mngr = p_mngr;
+
+	max_num_vfs = NUM_OF_VFS(p_hwfn->p_dev);
+	for (i = 0; i < MAX_CONN_TYPES; i++) {
+		acquired_vf = OSAL_CALLOC(p_hwfn->p_dev, GFP_KERNEL,
+					  max_num_vfs, sizeof(*acquired_vf));
+		if (!acquired_vf) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to allocate an array of `struct ecore_cid_acquired_map'\n");
+			return ECORE_NOMEM;
+		}
+
+		p_mngr->acquired_vf[i] = acquired_vf;
+	}
+
 	return ECORE_SUCCESS;
 }
 
@@ -1220,6 +1312,8 @@ enum _ecore_status_t ecore_cxt_tables_alloc(struct ecore_hwfn *p_hwfn)
 
 void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 {
+	u32 i;
+
 	if (!p_hwfn->p_cxt_mngr)
 		return;
 
@@ -1229,7 +1323,11 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
 #ifdef CONFIG_ECORE_LOCK_ALLOC
 	OSAL_MUTEX_DEALLOC(&p_hwfn->p_cxt_mngr->mutex);
 #endif
+	for (i = 0; i < MAX_CONN_TYPES; i++)
+		OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_cxt_mngr->acquired_vf[i]);
 	OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_cxt_mngr);
+
+	p_hwfn->p_cxt_mngr = OSAL_NULL;
 }
 
 void ecore_cxt_mngr_setup(struct ecore_hwfn *p_hwfn)
@@ -1435,14 +1533,10 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		      bool is_pf_loading)
 {
 	struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
-	struct ecore_mcp_link_state *p_link;
 	struct ecore_qm_iids iids;
 
 	OSAL_MEM_ZERO(&iids, sizeof(iids));
 	ecore_cxt_qm_iids(p_hwfn, &iids);
-
-	p_link = &ECORE_LEADING_HWFN(p_hwfn->p_dev)->mcp_info->link_output;
-
 	ecore_qm_pf_rt_init(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
 			    qm_info->max_phys_tcs_per_port,
 			    is_pf_loading,
@@ -1452,7 +1546,7 @@ void ecore_qm_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			    qm_info->num_vf_pqs,
 			    qm_info->start_vport,
 			    qm_info->num_vports, qm_info->pf_wfq,
-			    qm_info->pf_rl, p_link->speed,
+			    qm_info->pf_rl,
 			    p_hwfn->qm_info.qm_pq_params,
 			    p_hwfn->qm_info.qm_vport_params);
 }
@@ -1601,7 +1695,7 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
 {
 	struct ecore_ilt_client_cfg *clients;
 	struct ecore_cxt_mngr *p_mngr;
-	struct ecore_dma_mem *p_shdw;
+	struct phys_mem_desc *p_shdw;
 	u32 line, rt_offst, i;
 
 	ecore_ilt_bounds_init(p_hwfn);
@@ -1626,10 +1720,10 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
 			/** p_virt could be OSAL_NULL incase of dynamic
 			 *  allocation
 			 */
-			if (p_shdw[line].p_virt != OSAL_NULL) {
+			if (p_shdw[line].virt_addr != OSAL_NULL) {
 				SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL);
 				SET_FIELD(ilt_hw_entry, ILT_ENTRY_PHY_ADDR,
-					  (p_shdw[line].p_phys >> 12));
+					  (p_shdw[line].phys_addr >> 12));
 
 				DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
 					"Setting RT[0x%08x] from"
@@ -1637,7 +1731,7 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
 					" Physical addr: 0x%lx\n",
 					rt_offst, line, i,
 					(unsigned long)(p_shdw[line].
-							p_phys >> 12));
+							phys_addr >> 12));
 			}
 
 			STORE_RT_REG_AGG(p_hwfn, rt_offst, ilt_hw_entry);
@@ -1666,9 +1760,9 @@ static void ecore_src_init_pf(struct ecore_hwfn *p_hwfn)
 		     OSAL_LOG2(rounded_conn_num));
 
 	STORE_RT_REG_AGG(p_hwfn, SRC_REG_FIRSTFREE_RT_OFFSET,
-			 p_hwfn->p_cxt_mngr->first_free);
+			 p_hwfn->p_cxt_mngr->src_t2.first_free);
 	STORE_RT_REG_AGG(p_hwfn, SRC_REG_LASTFREE_RT_OFFSET,
-			 p_hwfn->p_cxt_mngr->last_free);
+			 p_hwfn->p_cxt_mngr->src_t2.last_free);
 	DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
 		   "Configured SEARCHER for 0x%08x connections\n",
 		   conn_num);
@@ -1699,18 +1793,18 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
 	u8 i;
 
 	OSAL_MEM_ZERO(&tm_iids, sizeof(tm_iids));
-	ecore_cxt_tm_iids(p_mngr, &tm_iids);
+	ecore_cxt_tm_iids(p_hwfn, p_mngr, &tm_iids);
 
 	/* @@@TBD No pre-scan for now */
 
-	/* Note: We assume consecutive VFs for a PF */
-	for (i = 0; i < p_mngr->vf_count; i++) {
 		cfg_word = 0;
 		SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_cids);
-		SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
 		SET_FIELD(cfg_word, TM_CFG_PARENT_PF, p_hwfn->rel_pf_id);
+	SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
 		SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all */
 
+	/* Note: We assume consecutive VFs for a PF */
+	for (i = 0; i < p_mngr->vf_count; i++) {
 		rt_reg = TM_REG_CONFIG_CONN_MEM_RT_OFFSET +
 		    (sizeof(cfg_word) / sizeof(u32)) *
 		    (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
@@ -1728,7 +1822,7 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
 	    (NUM_OF_VFS(p_hwfn->p_dev) + p_hwfn->rel_pf_id);
 	STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
 
-	/* enale scan */
+	/* enable scan */
 	STORE_RT_REG(p_hwfn, TM_REG_PF_ENABLE_CONN_RT_OFFSET,
 		     tm_iids.pf_cids ? 0x1 : 0x0);
 
@@ -1972,10 +2066,10 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
 	line = p_info->iid / cxts_per_p;
 
 	/* Make sure context is allocated (dynamic allocation) */
-	if (!p_mngr->ilt_shadow[line].p_virt)
+	if (!p_mngr->ilt_shadow[line].virt_addr)
 		return ECORE_INVAL;
 
-	p_info->p_cxt = (u8 *)p_mngr->ilt_shadow[line].p_virt +
+	p_info->p_cxt = (u8 *)p_mngr->ilt_shadow[line].virt_addr +
 	    p_info->iid % cxts_per_p * conn_cxt_size;
 
 	DP_VERBOSE(p_hwfn, (ECORE_MSG_ILT | ECORE_MSG_CXT),
@@ -2074,7 +2168,7 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 
 	OSAL_MUTEX_ACQUIRE(&p_hwfn->p_cxt_mngr->mutex);
 
-	if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_virt)
+	if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr)
 		goto out0;
 
 	p_ptt = ecore_ptt_acquire(p_hwfn);
@@ -2094,8 +2188,8 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 	}
 	OSAL_MEM_ZERO(p_virt, p_blk->real_size_in_page);
 
-	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_virt = p_virt;
-	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_phys = p_phys;
+	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr = p_virt;
+	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr = p_phys;
 	p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].size =
 		p_blk->real_size_in_page;
 
@@ -2107,7 +2201,7 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL);
 	SET_FIELD(ilt_hw_entry,
 		  ILT_ENTRY_PHY_ADDR,
-		  (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_phys >> 12));
+		 (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr >> 12));
 
 /* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */
 
@@ -2115,21 +2209,6 @@ ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
 			    reg_offset, sizeof(ilt_hw_entry) / sizeof(u32),
 			    OSAL_NULL /* default parameters */);
 
-	if (elem_type == ECORE_ELEM_CXT) {
-		u32 last_cid_allocated = (1 + (iid / elems_per_p)) *
-					 elems_per_p;
-
-		/* Update the relevant register in the parser */
-		ecore_wr(p_hwfn, p_ptt, PRS_REG_ROCE_DEST_QP_MAX_PF,
-			 last_cid_allocated - 1);
-
-		if (!p_hwfn->b_rdma_enabled_in_prs) {
-			/* Enable RoCE search */
-			ecore_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 1);
-			p_hwfn->b_rdma_enabled_in_prs = true;
-		}
-	}
-
 out1:
 	ecore_ptt_release(p_hwfn, p_ptt);
 out0:
@@ -2196,16 +2275,16 @@ ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
 	}
 
 	for (i = shadow_start_line; i < shadow_end_line; i++) {
-		if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].p_virt)
+		if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr)
 			continue;
 
 		OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
-				       p_hwfn->p_cxt_mngr->ilt_shadow[i].p_virt,
-				       p_hwfn->p_cxt_mngr->ilt_shadow[i].p_phys,
-				       p_hwfn->p_cxt_mngr->ilt_shadow[i].size);
+				    p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr,
+				    p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr,
+				    p_hwfn->p_cxt_mngr->ilt_shadow[i].size);
 
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].p_virt = OSAL_NULL;
-		p_hwfn->p_cxt_mngr->ilt_shadow[i].p_phys = 0;
+		p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr = OSAL_NULL;
+		p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr = 0;
 		p_hwfn->p_cxt_mngr->ilt_shadow[i].size = 0;
 
 		/* compute absolute offset */
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index f8c955cac..55f08027d 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -22,6 +22,18 @@ enum ecore_cxt_elem_type {
 	ECORE_ELEM_TASK
 };
 
+enum ilt_clients {
+	ILT_CLI_CDUC,
+	ILT_CLI_CDUT,
+	ILT_CLI_QM,
+	ILT_CLI_TM,
+	ILT_CLI_SRC,
+	ILT_CLI_TSDM,
+	ILT_CLI_RGFS,
+	ILT_CLI_TGFS,
+	ILT_CLI_MAX
+};
+
 u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
 				  enum protocol_type type,
 				  u32 *vf_cid);
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index b82ca49ff..ccd4383bb 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -310,8 +310,9 @@ ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			continue;
 
 		/* if no app tlv was present, don't override in FW */
-		ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false,
-					   priority, tc, type);
+		ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt,
+					  p_data->arr[DCBX_PROTOCOL_ETH].enable,
+					  priority, tc, type);
 	}
 
 	return ECORE_SUCCESS;
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 2a11b4d29..2c47aba48 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -39,6 +39,10 @@
 static osal_spinlock_t qm_lock;
 static u32 qm_lock_ref_cnt;
 
+#ifndef ASIC_ONLY
+static bool b_ptt_gtt_init;
+#endif
+
 /******************** Doorbell Recovery *******************/
 /* The doorbell recovery mechanism consists of a list of entries which represent
  * doorbelling entities (l2 queues, roce sq/rq/cqs, the slowpath spq, etc). Each
@@ -963,13 +967,13 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 
 	/* Filter enable - should be done first when removing a filter */
 	if (b_write_access && !p_details->enable) {
-		addr = NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 + filter_idx * 0x4;
+		addr = NIG_REG_LLH_FUNC_FILTER_EN + filter_idx * 0x4;
 		ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
 			       p_details->enable);
 	}
 
 	/* Filter value */
-	addr = NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 + 2 * filter_idx * 0x4;
+	addr = NIG_REG_LLH_FUNC_FILTER_VALUE + 2 * filter_idx * 0x4;
 	OSAL_MEMSET(&params, 0, sizeof(params));
 
 	if (b_write_access) {
@@ -991,7 +995,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 		return rc;
 
 	/* Filter mode */
-	addr = NIG_REG_LLH_FUNC_FILTER_MODE_BB_K2 + filter_idx * 0x4;
+	addr = NIG_REG_LLH_FUNC_FILTER_MODE + filter_idx * 0x4;
 	if (b_write_access)
 		ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr, p_details->mode);
 	else
@@ -999,7 +1003,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 						 addr);
 
 	/* Filter protocol type */
-	addr = NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_BB_K2 + filter_idx * 0x4;
+	addr = NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE + filter_idx * 0x4;
 	if (b_write_access)
 		ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
 			       p_details->protocol_type);
@@ -1018,7 +1022,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 
 	/* Filter enable - should be done last when adding a filter */
 	if (!b_write_access || p_details->enable) {
-		addr = NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 + filter_idx * 0x4;
+		addr = NIG_REG_LLH_FUNC_FILTER_EN + filter_idx * 0x4;
 		if (b_write_access)
 			ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
 				       p_details->enable);
@@ -1031,7 +1035,7 @@ ecore_llh_access_filter(struct ecore_hwfn *p_hwfn,
 }
 
 static enum _ecore_status_t
-ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type,
 			u32 high, u32 low)
 {
@@ -1054,7 +1058,7 @@ ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 }
 
 static enum _ecore_status_t
-ecore_llh_remove_filter_e4(struct ecore_hwfn *p_hwfn,
+ecore_llh_remove_filter(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx)
 {
 	struct ecore_llh_filter_details filter_details;
@@ -1066,24 +1070,6 @@ ecore_llh_remove_filter_e4(struct ecore_hwfn *p_hwfn,
 				       true /* write access */);
 }
 
-static enum _ecore_status_t
-ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-		     u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type, u32 high,
-		     u32 low)
-{
-	return ecore_llh_add_filter_e4(p_hwfn, p_ptt, abs_ppfid,
-				       filter_idx, filter_prot_type,
-				       high, low);
-}
-
-static enum _ecore_status_t
-ecore_llh_remove_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			u8 abs_ppfid, u8 filter_idx)
-{
-	return ecore_llh_remove_filter_e4(p_hwfn, p_ptt, abs_ppfid,
-					  filter_idx);
-}
-
 enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
 					      u8 mac_addr[ETH_ALEN])
 {
@@ -1424,7 +1410,7 @@ void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid)
 
 	for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
 	     filter_idx++) {
-		rc = ecore_llh_remove_filter_e4(p_hwfn, p_ptt,
+		rc = ecore_llh_remove_filter(p_hwfn, p_ptt,
 						abs_ppfid, filter_idx);
 		if (rc != ECORE_SUCCESS)
 			goto out;
@@ -1464,18 +1450,22 @@ enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-static enum _ecore_status_t
-ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
-			u8 ppfid)
+enum _ecore_status_t
+ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
 {
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
 	struct ecore_llh_filter_details filter_details;
 	u8 abs_ppfid, filter_idx;
 	u32 addr;
 	enum _ecore_status_t rc;
 
+	if (!p_ptt)
+		return ECORE_AGAIN;
+
 	rc = ecore_abs_ppfid(p_hwfn->p_dev, ppfid, &abs_ppfid);
 	if (rc != ECORE_SUCCESS)
-		return rc;
+		goto out;
 
 	addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4;
 	DP_NOTICE(p_hwfn, false,
@@ -1490,7 +1480,7 @@ ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 					      filter_idx, &filter_details,
 					      false /* read access */);
 		if (rc != ECORE_SUCCESS)
-			return rc;
+			goto out;
 
 		DP_NOTICE(p_hwfn, false,
 			  "filter %2hhd: enable %d, value 0x%016lx, mode %d, protocol_type 0x%x, hdr_sel 0x%x\n",
@@ -1500,20 +1490,8 @@ ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 			  filter_details.protocol_type, filter_details.hdr_sel);
 	}
 
-	return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
-{
-	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
-	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
-	enum _ecore_status_t rc;
-
-	if (p_ptt == OSAL_NULL)
-		return ECORE_AGAIN;
-
-	rc = ecore_llh_dump_ppfid_e4(p_hwfn, p_ptt, ppfid);
 
+out:
 	ecore_ptt_release(p_hwfn, p_ptt);
 
 	return rc;
@@ -1851,6 +1829,7 @@ static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
 {
 	/* Initialize qm port parameters */
 	u8 i, active_phys_tcs, num_ports = p_hwfn->p_dev->num_ports_in_engine;
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 
 	/* indicate how ooo and high pri traffic is dealt with */
 	active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ?
@@ -1859,11 +1838,14 @@ static void ecore_init_qm_port_params(struct ecore_hwfn *p_hwfn)
 	for (i = 0; i < num_ports; i++) {
 		struct init_qm_port_params *p_qm_port =
 			&p_hwfn->qm_info.qm_port_params[i];
+		u16 pbf_max_cmd_lines;
 
 		p_qm_port->active = 1;
 		p_qm_port->active_phys_tcs = active_phys_tcs;
-		p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
-		p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
+		pbf_max_cmd_lines = (u16)NUM_OF_PBF_CMD_LINES(p_dev);
+		p_qm_port->num_pbf_cmd_lines = pbf_max_cmd_lines / num_ports;
+		p_qm_port->num_btb_blocks =
+			NUM_OF_BTB_BLOCKS(p_dev) / num_ports;
 	}
 }
 
@@ -1938,6 +1920,10 @@ static void ecore_init_qm_pq(struct ecore_hwfn *p_hwfn,
 		(pq_init_flags & PQ_INIT_PF_RL ||
 		 pq_init_flags & PQ_INIT_VF_RL);
 
+	/* The "rl_id" is set as the "vport_id" */
+	qm_info->qm_pq_params[pq_idx].rl_id =
+		qm_info->qm_pq_params[pq_idx].vport_id;
+
 	/* qm params accounting */
 	qm_info->num_pqs++;
 	if (!(pq_init_flags & PQ_INIT_SHARE_VPORT))
@@ -2247,10 +2233,10 @@ static void ecore_dp_init_qm_params(struct ecore_hwfn *p_hwfn)
 	/* pq table */
 	for (i = 0; i < qm_info->num_pqs; i++) {
 		pq = &qm_info->qm_pq_params[i];
-		DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
-			   "pq idx %d, port %d, vport_id %d, tc %d, wrr_grp %d, rl_valid %d\n",
+		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+			   "pq idx %d, port %d, vport_id %d, tc %d, wrr_grp %d, rl_valid %d, rl_id %d\n",
 			   qm_info->start_pq + i, pq->port_id, pq->vport_id,
-			   pq->tc_id, pq->wrr_group, pq->rl_valid);
+			   pq->tc_id, pq->wrr_group, pq->rl_valid, pq->rl_id);
 	}
 }
 
@@ -2531,6 +2517,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
 				  "Failed to allocate dbg user info structure\n");
 			goto alloc_err;
 		}
+
+		rc = OSAL_DBG_ALLOC_USER_DATA(p_hwfn, &p_hwfn->dbg_user_info);
+		if (rc) {
+			DP_NOTICE(p_hwfn, false,
+				  "Failed to allocate dbg user info structure\n");
+			goto alloc_err;
+		}
 	} /* hwfn loop */
 
 	rc = ecore_llh_alloc(p_dev);
@@ -2652,7 +2645,7 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 {
 	int hw_mode = 0;
 
-	if (ECORE_IS_BB_B0(p_hwfn->p_dev)) {
+	if (ECORE_IS_BB(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_BB;
 	} else if (ECORE_IS_AH(p_hwfn->p_dev)) {
 		hw_mode |= 1 << MODE_K2;
@@ -2712,50 +2705,88 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
 }
 
 #ifndef ASIC_ONLY
-/* MFW-replacement initializations for non-ASIC */
-static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
+/* MFW-replacement initializations for emulation */
+static enum _ecore_status_t ecore_hw_init_chip(struct ecore_dev *p_dev,
 					       struct ecore_ptt *p_ptt)
 {
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	u32 pl_hv = 1;
-	int i;
+	struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+	u32 pl_hv, wr_mbs;
+	int i, pos;
+	u16 ctrl = 0;
 
-	if (CHIP_REV_IS_EMUL(p_dev)) {
-		if (ECORE_IS_AH(p_dev))
-			pl_hv |= 0x600;
+	if (!CHIP_REV_IS_EMUL(p_dev)) {
+		DP_NOTICE(p_dev, false,
+			  "ecore_hw_init_chip() shouldn't be called in a non-emulation environment\n");
+		return ECORE_INVAL;
 	}
 
+	pl_hv = ECORE_IS_BB(p_dev) ? 0x1 : 0x401;
 	ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV + 4, pl_hv);
 
 	if (ECORE_IS_AH(p_dev))
 		ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2_K2, 0x3ffffff);
 
-	/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
-	/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
-	if (!CHIP_REV_IS_EMUL(p_dev) || ECORE_IS_BB(p_dev))
+	/* Initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
+	if (ECORE_IS_BB(p_dev))
 		ecore_wr(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB, 4);
 
-	if (CHIP_REV_IS_EMUL(p_dev)) {
-		if (ECORE_IS_AH(p_dev)) {
-			/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
-				 (p_dev->num_ports_in_engine >> 1));
+	if (ECORE_IS_AH(p_dev)) {
+		/* 2 for 4-port, 1 for 2-port, 0 for 1-port */
+		ecore_wr(p_hwfn, p_ptt, MISC_REG_PORT_MODE,
+			 p_dev->num_ports_in_engine >> 1);
 
-			ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
-				 p_dev->num_ports_in_engine == 4 ? 0 : 3);
-		}
+		ecore_wr(p_hwfn, p_ptt, MISC_REG_BLOCK_256B_EN,
+			 p_dev->num_ports_in_engine == 4 ? 0 : 3);
 	}
 
-	/* Poll on RBC */
+	/* Signal the PSWRQ block to start initializing internal memories */
 	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_RBC_DONE, 1);
 	for (i = 0; i < 100; i++) {
 		OSAL_UDELAY(50);
 		if (ecore_rd(p_hwfn, p_ptt, PSWRQ2_REG_CFG_DONE) == 1)
 			break;
 	}
-	if (i == 100)
+	if (i == 100) {
 		DP_NOTICE(p_hwfn, true,
 			  "RBC done failed to complete in PSWRQ2\n");
+		return ECORE_TIMEOUT;
+	}
+
+	/* Indicate PSWRQ to initialize steering tag table with zeros */
+	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_RESET_STT, 1);
+	for (i = 0; i < 100; i++) {
+		OSAL_UDELAY(50);
+		if (!ecore_rd(p_hwfn, p_ptt, PSWRQ2_REG_RESET_STT))
+			break;
+	}
+	if (i == 100) {
+		DP_NOTICE(p_hwfn, true,
+			  "Steering tag table initialization failed to complete in PSWRQ2\n");
+		return ECORE_TIMEOUT;
+	}
+
+	/* Clear a possible PSWRQ2 STT parity which might have been generated by
+	 * a previous MSI-X read.
+	 */
+	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_PRTY_STS_WR_H_0, 0x8);
+
+	/* Configure PSWRQ2_REG_WR_MBS0 according to the MaxPayloadSize field in
+	 * the PCI configuration space. The value is common for all PFs, so it
+	 * is okay to do it according to the first loading PF.
+	 */
+	pos = OSAL_PCI_FIND_CAPABILITY(p_dev, PCI_CAP_ID_EXP);
+	if (!pos) {
+		DP_NOTICE(p_dev, true,
+			  "Failed to find the PCI Express Capability structure in the PCI config space\n");
+		return ECORE_IO;
+	}
+
+	OSAL_PCI_READ_CONFIG_WORD(p_dev, pos + PCI_EXP_DEVCTL, &ctrl);
+	wr_mbs = (ctrl & PCI_EXP_DEVCTL_PAYLOAD) >> 5;
+	ecore_wr(p_hwfn, p_ptt, PSWRQ2_REG_WR_MBS0, wr_mbs);
+
+	/* Configure the PGLUE_B to discard mode */
+	ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_DISCARD_NBLOCK, 0x3f);
 
 	return ECORE_SUCCESS;
 }
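
For reference, the value programmed into PSWRQ2_REG_WR_MBS0 above is the raw 3-bit MaxPayloadSize code taken from bits [7:5] of the PCIe Device Control register; per the PCIe spec a code of n corresponds to a payload of 128 << n bytes. A minimal standalone sketch of that decoding (the mask follows the standard pci_regs.h definition; the sample register value is made up):

#include <stdio.h>
#include <stdint.h>

#define PCI_EXP_DEVCTL_PAYLOAD	0x00e0	/* MaxPayloadSize, bits [7:5] */

int main(void)
{
	uint16_t devctl = 0x0020;	/* sample value: MPS code 1 */
	uint32_t wr_mbs = (devctl & PCI_EXP_DEVCTL_PAYLOAD) >> 5;

	printf("WR_MBS code %u -> MaxPayloadSize %u bytes\n",
	       wr_mbs, 128u << wr_mbs);
	return 0;
}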
@@ -2768,7 +2799,8 @@ static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
 static void ecore_init_cau_rt_data(struct ecore_dev *p_dev)
 {
 	u32 offset = CAU_REG_SB_VAR_MEMORY_RT_OFFSET;
-	int i, igu_sb_id;
+	u32 igu_sb_id;
+	int i;
 
 	for_each_hwfn(p_dev, i) {
 		struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -2866,8 +2898,8 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	ecore_gtt_init(p_hwfn);
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_dev)) {
-		rc = ecore_hw_init_chip(p_hwfn, p_ptt);
+	if (CHIP_REV_IS_EMUL(p_dev) && IS_LEAD_HWFN(p_hwfn)) {
+		rc = ecore_hw_init_chip(p_dev, p_ptt);
 		if (rc != ECORE_SUCCESS)
 			return rc;
 	}
@@ -2885,7 +2917,8 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 				qm_info->max_phys_tcs_per_port,
 				qm_info->pf_rl_en, qm_info->pf_wfq_en,
 				qm_info->vport_rl_en, qm_info->vport_wfq_en,
-				qm_info->qm_port_params);
+				qm_info->qm_port_params,
+				OSAL_NULL /* global RLs are not configured */);
 
 	ecore_cxt_hw_init_common(p_hwfn);
 
@@ -2906,7 +2939,7 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 		/* Workaround clears ROCE search for all functions to prevent
 		 * involving non initialized function in processing ROCE packet.
 		 */
-		num_pfs = NUM_OF_ENG_PFS(p_dev);
+		num_pfs = (u16)NUM_OF_ENG_PFS(p_dev);
 		for (pf_id = 0; pf_id < num_pfs; pf_id++) {
 			ecore_fid_pretend(p_hwfn, p_ptt, pf_id);
 			ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE, 0x0);
@@ -2922,7 +2955,7 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
 	 * This is not done inside the init tool since it currently can't
 	 * perform a pretending to VFs.
 	 */
-	max_num_vfs = ECORE_IS_AH(p_dev) ? MAX_NUM_VFS_K2 : MAX_NUM_VFS_BB;
+	max_num_vfs = (u8)NUM_OF_VFS(p_dev);
 	for (vf_id = 0; vf_id < max_num_vfs; vf_id++) {
 		concrete_fid = ecore_vfid_to_concrete(p_hwfn, vf_id);
 		ecore_fid_pretend(p_hwfn, p_ptt, (u16)concrete_fid);
@@ -2982,8 +3015,6 @@ static void ecore_emul_link_init_bb(struct ecore_hwfn *p_hwfn,
 {
 	u8 loopback = 0, port = p_hwfn->port_id * 2;
 
-	DP_INFO(p_hwfn->p_dev, "Configurating Emulation Link %02x\n", port);
-
 	/* XLPORT MAC MODE *//* 0 Quad, 4 Single... */
 	ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
 			 port);
@@ -3113,6 +3144,25 @@ static void ecore_link_init_bb(struct ecore_hwfn *p_hwfn,
 }
 #endif
 
+static u32 ecore_hw_norm_region_conn(struct ecore_hwfn *p_hwfn)
+{
+	u32 norm_region_conn;
+
+	/* The order of CIDs allocation is according to the order of
+	 * 'enum protocol_type'. Therefore, the number of CIDs for the normal
+	 * region is calculated based on the CORE CIDs, in case of non-ETH
+	 * personality, and otherwise - based on the ETH CIDs.
+	 */
+	norm_region_conn =
+		ecore_cxt_get_proto_cid_start(p_hwfn, PROTOCOLID_CORE) +
+		ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_CORE,
+					      OSAL_NULL) +
+		ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
+					      OSAL_NULL);
+
+	return norm_region_conn;
+}
+
 static enum _ecore_status_t
 ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus)
@@ -3183,8 +3233,8 @@ static enum _ecore_status_t
 ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt)
 {
+	u32 norm_region_conn, min_addr_reg1;
 	u32 pwm_regsize, norm_regsize;
-	u32 non_pwm_conn, min_addr_reg1;
 	u32 db_bar_size, n_cpus;
 	u32 roce_edpm_mode;
 	u32 pf_dems_shift;
@@ -3209,11 +3259,8 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
 	 * connections. The DORQ_REG_PF_MIN_ADDR_REG1 register is
 	 * in units of 4,096 bytes.
 	 */
-	non_pwm_conn = ecore_cxt_get_proto_cid_start(p_hwfn, PROTOCOLID_CORE) +
-	    ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_CORE,
-					  OSAL_NULL) +
-	    ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, OSAL_NULL);
-	norm_regsize = ROUNDUP(ECORE_PF_DEMS_SIZE * non_pwm_conn,
+	norm_region_conn = ecore_hw_norm_region_conn(p_hwfn);
+	norm_regsize = ROUNDUP(ECORE_PF_DEMS_SIZE * norm_region_conn,
 			       OSAL_PAGE_SIZE);
 	min_addr_reg1 = norm_regsize / 4096;
 	pwm_regsize = db_bar_size - norm_regsize;
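
To make the arithmetic above concrete: the normal (non-PWM) doorbell region covers one DEMS per normal-region connection, rounded up to the OS page size, DORQ_REG_PF_MIN_ADDR_REG1 is that size in 4 KB units, and whatever is left of the doorbell BAR becomes the PWM region. A standalone sketch with illustrative numbers (the DEMS size, connection count and BAR size below are assumptions, not values from the driver):

#include <stdio.h>
#include <stdint.h>

#define ROUNDUP(x, align)	(((x) + (align) - 1) & ~((uint32_t)(align) - 1))

int main(void)
{
	uint32_t dems_size = 4;			/* bytes per DEMS (assumed) */
	uint32_t norm_region_conn = 4352;	/* CORE + ETH CIDs (assumed) */
	uint32_t page_size = 4096;
	uint32_t db_bar_size = 512 * 1024;	/* doorbell BAR size (assumed) */

	uint32_t norm_regsize = ROUNDUP(dems_size * norm_region_conn, page_size);
	uint32_t min_addr_reg1 = norm_regsize / 4096;	/* 4 KB units */
	uint32_t pwm_regsize = db_bar_size - norm_regsize;

	printf("norm %u B, MIN_ADDR_REG1 %u, pwm %u B\n",
	       norm_regsize, min_addr_reg1, pwm_regsize);
	return 0;
}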
@@ -3292,10 +3339,11 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt,
 					       int hw_mode)
 {
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	enum _ecore_status_t rc	= ECORE_SUCCESS;
 
 	/* In CMT the gate should be cleared by the 2nd hwfn */
-	if (!ECORE_IS_CMT(p_hwfn->p_dev) || !IS_LEAD_HWFN(p_hwfn))
+	if (!ECORE_IS_CMT(p_dev) || !IS_LEAD_HWFN(p_hwfn))
 		STORE_RT_REG(p_hwfn, NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET, 0);
 
 	rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
@@ -3306,16 +3354,11 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_WRITE_PAD_ENABLE, 0);
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
-		return ECORE_SUCCESS;
+	if (CHIP_REV_IS_FPGA(p_dev) && ECORE_IS_BB(p_dev))
+		ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
 
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
-		if (ECORE_IS_AH(p_hwfn->p_dev))
-			return ECORE_SUCCESS;
-		else if (ECORE_IS_BB(p_hwfn->p_dev))
-			ecore_link_init_bb(p_hwfn, p_ptt, p_hwfn->port_id);
-	} else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		if (ECORE_IS_CMT(p_hwfn->p_dev)) {
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		if (ECORE_IS_CMT(p_dev)) {
 			/* Activate OPTE in CMT */
 			u32 val;
 
@@ -3334,13 +3377,24 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
 				 0x55555555);
 		}
 
+		/* Set the TAGMAC default function on the port if needed.
+		 * The ppfid should be set in the vector, except in BB which has
+		 * a bug in the LLH where the ppfid is actually engine based.
+		 */
+		if (OSAL_TEST_BIT(ECORE_MF_NEED_DEF_PF, &p_dev->mf_bits)) {
+			u8 pf_id = p_hwfn->rel_pf_id;
+
+			if (!ECORE_IS_BB(p_dev))
+				pf_id /= p_dev->num_ports_in_engine;
+			ecore_wr(p_hwfn, p_ptt,
+				 NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR, 1 << pf_id);
+		}
+
 		ecore_emul_link_init(p_hwfn, p_ptt);
-	} else {
-		DP_INFO(p_hwfn->p_dev, "link is not being configured\n");
 	}
 #endif
 
-	return rc;
+	return ECORE_SUCCESS;
 }
 
 static enum _ecore_status_t
@@ -3755,9 +3809,9 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
 			goto load_err;
 
 		/* Clear the pglue_b was_error indication.
-		 * In E4 it must be done after the BME and the internal
-		 * FID_enable for the PF are set, since VDMs may cause the
-		 * indication to be set again.
+		 * It must be done after the BME and the internal FID_enable for
+		 * the PF are set, since VDMs may cause the indication to be set
+		 * again.
 		 */
 		ecore_pglueb_clear_err(p_hwfn, p_hwfn->p_main_ptt);
 
@@ -4361,11 +4415,41 @@ __ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+#define RDMA_NUM_STATISTIC_COUNTERS_K2                  MAX_NUM_VPORTS_K2
+#define RDMA_NUM_STATISTIC_COUNTERS_BB                  MAX_NUM_VPORTS_BB
+
+static u32 ecore_hsi_def_val[][MAX_CHIP_IDS] = {
+	{MAX_NUM_VFS_BB, MAX_NUM_VFS_K2},
+	{MAX_NUM_L2_QUEUES_BB, MAX_NUM_L2_QUEUES_K2},
+	{MAX_NUM_PORTS_BB, MAX_NUM_PORTS_K2},
+	{MAX_SB_PER_PATH_BB, MAX_SB_PER_PATH_K2, },
+	{MAX_NUM_PFS_BB, MAX_NUM_PFS_K2},
+	{MAX_NUM_VPORTS_BB, MAX_NUM_VPORTS_K2},
+	{ETH_RSS_ENGINE_NUM_BB, ETH_RSS_ENGINE_NUM_K2},
+	{MAX_QM_TX_QUEUES_BB, MAX_QM_TX_QUEUES_K2},
+	{PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2},
+	{RDMA_NUM_STATISTIC_COUNTERS_BB, RDMA_NUM_STATISTIC_COUNTERS_K2},
+	{MAX_QM_GLOBAL_RLS, MAX_QM_GLOBAL_RLS},
+	{PBF_MAX_CMD_LINES, PBF_MAX_CMD_LINES},
+	{BTB_MAX_BLOCKS_BB, BTB_MAX_BLOCKS_K2},
+};
+
+u32 ecore_get_hsi_def_val(struct ecore_dev *p_dev, enum ecore_hsi_def_type type)
+{
+	enum chip_ids chip_id = ECORE_IS_BB(p_dev) ? CHIP_BB : CHIP_K2;
+
+	if (type >= ECORE_NUM_HSI_DEFS) {
+		DP_ERR(p_dev, "Unexpected HSI definition type [%d]\n", type);
+		return 0;
+	}
+
+	return ecore_hsi_def_val[type][chip_id];
+}
+
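
The table above lets callers pick a chip-specific HSI constant by definition type instead of open-coding b_ah ? X_K2 : X_BB at every call site. A minimal standalone sketch of the same pattern (the enum names and numbers here are placeholders, not the real HSI values):

#include <stdio.h>
#include <stdint.h>

enum chip { CHIP_BB, CHIP_K2, MAX_CHIPS };
enum def_type { DEF_MAX_VFS, DEF_MAX_VPORTS, NUM_DEFS };

/* [definition][chip], mirroring ecore_hsi_def_val[][] */
static const uint32_t hsi_def_val[NUM_DEFS][MAX_CHIPS] = {
	[DEF_MAX_VFS]		= { 120, 192 },	/* placeholder values */
	[DEF_MAX_VPORTS]	= { 160, 208 },	/* placeholder values */
};

static uint32_t get_hsi_def_val(enum def_type type, int is_bb)
{
	if (type >= NUM_DEFS)
		return 0;
	return hsi_def_val[type][is_bb ? CHIP_BB : CHIP_K2];
}

int main(void)
{
	printf("max VFs, K2-class: %u\n", get_hsi_def_val(DEF_MAX_VFS, 0));
	return 0;
}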
 static enum _ecore_status_t
 ecore_hw_set_soft_resc_size(struct ecore_hwfn *p_hwfn,
 			    struct ecore_ptt *p_ptt)
 {
-	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
 	u32 resc_max_val, mcp_resp;
 	u8 res_id;
 	enum _ecore_status_t rc;
@@ -4407,27 +4491,24 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 					    u32 *p_resc_num, u32 *p_resc_start)
 {
 	u8 num_funcs = p_hwfn->num_funcs_on_engine;
-	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 
 	switch (res_id) {
 	case ECORE_L2_QUEUE:
-		*p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
-				 MAX_NUM_L2_QUEUES_BB) / num_funcs;
+		*p_resc_num = NUM_OF_L2_QUEUES(p_dev) / num_funcs;
 		break;
 	case ECORE_VPORT:
-		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
-				 MAX_NUM_VPORTS_BB) / num_funcs;
+		*p_resc_num = NUM_OF_VPORTS(p_dev) / num_funcs;
 		break;
 	case ECORE_RSS_ENG:
-		*p_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
-				 ETH_RSS_ENGINE_NUM_BB) / num_funcs;
+		*p_resc_num = NUM_OF_RSS_ENGINES(p_dev) / num_funcs;
 		break;
 	case ECORE_PQ:
-		*p_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
-				 MAX_QM_TX_QUEUES_BB) / num_funcs;
+		*p_resc_num = NUM_OF_QM_TX_QUEUES(p_dev) / num_funcs;
+		*p_resc_num &= ~0x7; /* The granularity of the PQs is 8 */
 		break;
 	case ECORE_RL:
-		*p_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
+		*p_resc_num = NUM_OF_QM_GLOBAL_RLS(p_dev) / num_funcs;
 		break;
 	case ECORE_MAC:
 	case ECORE_VLAN:
@@ -4435,11 +4516,10 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 		*p_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
 		break;
 	case ECORE_ILT:
-		*p_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
-				 PXP_NUM_ILT_RECORDS_BB) / num_funcs;
+		*p_resc_num = NUM_OF_PXP_ILT_RECORDS(p_dev) / num_funcs;
 		break;
 	case ECORE_LL2_QUEUE:
-		*p_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
+		*p_resc_num = MAX_NUM_LL2_RX_RAM_QUEUES / num_funcs;
 		break;
 	case ECORE_RDMA_CNQ_RAM:
 	case ECORE_CMDQS_CQS:
@@ -4448,9 +4528,7 @@ enum _ecore_status_t ecore_hw_get_dflt_resc(struct ecore_hwfn *p_hwfn,
 		*p_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
 		break;
 	case ECORE_RDMA_STATS_QUEUE:
-		/* @DPDK */
-		*p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
-				 MAX_NUM_VPORTS_BB) / num_funcs;
+		*p_resc_num = NUM_OF_RDMA_STATISTIC_COUNTERS(p_dev) / num_funcs;
 		break;
 	case ECORE_BDQ:
 		/* @DPDK */
@@ -4588,7 +4666,7 @@ static enum _ecore_status_t ecore_hw_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
 	/* 4-ports mode has limitations that should be enforced:
 	 * - BB: the MFW can access only PPFIDs which their corresponding PFIDs
 	 *       belong to this certain port.
-	 * - AH/E5: only 4 PPFIDs per port are available.
+	 * - AH: only 4 PPFIDs per port are available.
 	 */
 	if (ecore_device_num_ports(p_dev) == 4) {
 		u8 mask;
@@ -4627,7 +4705,8 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 {
 	struct ecore_resc_unlock_params resc_unlock_params;
 	struct ecore_resc_lock_params resc_lock_params;
-	bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u32 max_ilt_lines;
 	u8 res_id;
 	enum _ecore_status_t rc;
 #ifndef ASIC_ONLY
@@ -4703,9 +4782,9 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	}
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+	if (CHIP_REV_IS_EMUL(p_dev)) {
 		/* Reduced build contains less PQs */
-		if (!(p_hwfn->p_dev->b_is_emul_full)) {
+		if (!(p_dev->b_is_emul_full)) {
 			resc_num[ECORE_PQ] = 32;
 			resc_start[ECORE_PQ] = resc_num[ECORE_PQ] *
 			    p_hwfn->enabled_func_idx;
@@ -4713,26 +4792,27 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 
 		/* For AH emulation, since we have a possible maximal number of
 		 * 16 enabled PFs, in case there are not enough ILT lines -
-		 * allocate only first PF as RoCE and have all the other ETH
-		 * only with less ILT lines.
+		 * allocate only first PF as RoCE and have all the other as
+		 * ETH-only with fewer ILT lines.
+		 * In case we increase the number of ILT lines for PF0, we need
+		 * also to correct the start value for PF1-15.
 		 */
-		if (!p_hwfn->rel_pf_id && p_hwfn->p_dev->b_is_emul_full)
-			resc_num[ECORE_ILT] = OSAL_MAX_T(u32,
-							 resc_num[ECORE_ILT],
+		if (ECORE_IS_AH(p_dev) && p_dev->b_is_emul_full) {
+			if (!p_hwfn->rel_pf_id) {
+				resc_num[ECORE_ILT] =
+					OSAL_MAX_T(u32, resc_num[ECORE_ILT],
 							 roce_min_ilt_lines);
+			} else if (resc_num[ECORE_ILT] < roce_min_ilt_lines) {
+				resc_start[ECORE_ILT] += roce_min_ilt_lines -
+							 resc_num[ECORE_ILT];
+			}
+		}
 	}
-
-	/* Correct the common ILT calculation if PF0 has more */
-	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) &&
-	    p_hwfn->p_dev->b_is_emul_full &&
-	    p_hwfn->rel_pf_id && resc_num[ECORE_ILT] < roce_min_ilt_lines)
-		resc_start[ECORE_ILT] += roce_min_ilt_lines -
-		    resc_num[ECORE_ILT];
 #endif
 
 	/* Sanity for ILT */
-	if ((b_ah && (RESC_END(p_hwfn, ECORE_ILT) > PXP_NUM_ILT_RECORDS_K2)) ||
-	    (!b_ah && (RESC_END(p_hwfn, ECORE_ILT) > PXP_NUM_ILT_RECORDS_BB))) {
+	max_ilt_lines = NUM_OF_PXP_ILT_RECORDS(p_dev);
+	if (RESC_END(p_hwfn, ECORE_ILT) > max_ilt_lines) {
 		DP_NOTICE(p_hwfn, true,
 			  "Can't assign ILT pages [%08x,...,%08x]\n",
 			  RESC_START(p_hwfn, ECORE_ILT), RESC_END(p_hwfn,
@@ -4764,6 +4844,28 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
 
+#ifndef ASIC_ONLY
+static enum _ecore_status_t
+ecore_emul_hw_get_nvm_info(struct ecore_hwfn *p_hwfn)
+{
+	if (IS_LEAD_HWFN(p_hwfn)) {
+		struct ecore_dev *p_dev = p_hwfn->p_dev;
+
+		/* The MF mode on emulation is either default or NPAR 1.0 */
+		p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS |
+				 1 << ECORE_MF_LLH_PROTO_CLSS |
+				 1 << ECORE_MF_LL2_NON_UNICAST;
+		if (p_hwfn->num_funcs_on_port > 1)
+			p_dev->mf_bits |= 1 << ECORE_MF_INTER_PF_SWITCH |
+					  1 << ECORE_MF_DISABLE_ARFS;
+		else
+			p_dev->mf_bits |= 1 << ECORE_MF_NEED_DEF_PF;
+	}
+
+	return ECORE_SUCCESS;
+}
+#endif
+
 static enum _ecore_status_t
 ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 		      struct ecore_ptt *p_ptt,
@@ -4775,6 +4877,11 @@ ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_link_params *link;
 	enum _ecore_status_t rc;
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
+		return ecore_emul_hw_get_nvm_info(p_hwfn);
+#endif
+
 	/* Read global nvm_cfg address */
 	nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
 
@@ -5122,49 +5229,17 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
 		   p_hwfn->enabled_func_idx, p_hwfn->num_funcs_on_engine);
 }
 
-static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
-				      struct ecore_ptt *p_ptt)
-{
-	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	u32 port_mode;
-
 #ifndef ASIC_ONLY
-	/* Read the port mode */
-	if (CHIP_REV_IS_FPGA(p_dev))
-		port_mode = 4;
-	else if (CHIP_REV_IS_EMUL(p_dev) && ECORE_IS_CMT(p_dev))
-		/* In CMT on emulation, assume 1 port */
-		port_mode = 1;
-	else
-#endif
-	port_mode = ecore_rd(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB);
-
-	if (port_mode < 3) {
-		p_dev->num_ports_in_engine = 1;
-	} else if (port_mode <= 5) {
-		p_dev->num_ports_in_engine = 2;
-	} else {
-		DP_NOTICE(p_hwfn, true, "PORT MODE: %d not supported\n",
-			  p_dev->num_ports_in_engine);
-
-		/* Default num_ports_in_engine to something */
-		p_dev->num_ports_in_engine = 1;
-	}
-}
-
-static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
+static void ecore_emul_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 					 struct ecore_ptt *p_ptt)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
-	u32 port;
-	int i;
+	u32 eco_reserved;
 
-	p_dev->num_ports_in_engine = 0;
+	/* MISCS_REG_ECO_RESERVED[15:12]: num of ports in an engine */
+	eco_reserved = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
 
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_dev)) {
-		port = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
-		switch ((port & 0xf000) >> 12) {
+	switch ((eco_reserved & 0xf000) >> 12) {
 		case 1:
 			p_dev->num_ports_in_engine = 1;
 			break;
@@ -5176,49 +5251,43 @@ static void ecore_hw_info_port_num_ah_e5(struct ecore_hwfn *p_hwfn,
 			break;
 		default:
 			DP_NOTICE(p_hwfn, false,
-				  "Unknown port mode in ECO_RESERVED %08x\n",
-				  port);
-		}
-	} else
-#endif
-		for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
-			port = ecore_rd(p_hwfn, p_ptt,
-					CNIG_REG_NIG_PORT0_CONF_K2 +
-					(i * 4));
-			if (port & 1)
-				p_dev->num_ports_in_engine++;
+			  "Emulation: Unknown port mode [ECO_RESERVED 0x%08x]\n",
+			  eco_reserved);
+		p_dev->num_ports_in_engine = 2; /* Default to something */
+		break;
 		}
 
-	if (!p_dev->num_ports_in_engine) {
-		DP_NOTICE(p_hwfn, true, "All NIG ports are inactive\n");
-
-		/* Default num_ports_in_engine to something */
-		p_dev->num_ports_in_engine = 1;
-	}
+	p_dev->num_ports = p_dev->num_ports_in_engine *
+			   ecore_device_num_engines(p_dev);
 }
+#endif
 
+/* Determine the number of ports of the device and per engine */
 static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	u32 addr, global_offsize, global_addr;
 
-	/* Determine the number of ports per engine */
-	if (ECORE_IS_BB(p_dev))
-		ecore_hw_info_port_num_bb(p_hwfn, p_ptt);
-	else
-		ecore_hw_info_port_num_ah_e5(p_hwfn, p_ptt);
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_TEDIBEAR(p_dev)) {
+		p_dev->num_ports_in_engine = 1;
+		p_dev->num_ports = 2;
+		return;
+	}
+
+	if (CHIP_REV_IS_EMUL(p_dev)) {
+		ecore_emul_hw_info_port_num(p_hwfn, p_ptt);
+		return;
+	}
+#endif
 
-	/* Get the total number of ports of the device */
-	if (ECORE_IS_CMT(p_dev)) {
 		/* In CMT there is always only one port */
+	if (ECORE_IS_CMT(p_dev)) {
+		p_dev->num_ports_in_engine = 1;
 		p_dev->num_ports = 1;
-#ifndef ASIC_ONLY
-	} else if (CHIP_REV_IS_EMUL(p_dev) || CHIP_REV_IS_TEDIBEAR(p_dev)) {
-		p_dev->num_ports = p_dev->num_ports_in_engine *
-				   ecore_device_num_engines(p_dev);
-#endif
-	} else {
-		u32 addr, global_offsize, global_addr;
+		return;
+	}
 
 		addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
 					    PUBLIC_GLOBAL);
@@ -5226,7 +5295,9 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
 		global_addr = SECTION_ADDR(global_offsize, 0);
 		addr = global_addr + OFFSETOF(struct public_global, max_ports);
 		p_dev->num_ports = (u8)ecore_rd(p_hwfn, p_ptt, addr);
-	}
+
+	p_dev->num_ports_in_engine = p_dev->num_ports >>
+				     (ecore_device_num_engines(p_dev) - 1);
 }
 
 static void ecore_mcp_get_eee_caps(struct ecore_hwfn *p_hwfn,
@@ -5280,15 +5351,9 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 
 	ecore_mcp_get_capabilities(p_hwfn, p_ptt);
 
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_ASIC(p_hwfn->p_dev)) {
-#endif
 	rc = ecore_hw_get_nvm_info(p_hwfn, p_ptt, p_params);
 	if (rc != ECORE_SUCCESS)
 		return rc;
-#ifndef ASIC_ONLY
-	}
-#endif
 
 	rc = ecore_int_igu_read_cam(p_hwfn, p_ptt);
 	if (rc != ECORE_SUCCESS) {
@@ -5332,16 +5397,15 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 		protocol = p_hwfn->mcp_info->func_info.protocol;
 		p_hwfn->hw_info.personality = protocol;
 	}
-
 #ifndef ASIC_ONLY
-	/* To overcome ILT lack for emulation, until at least until we'll have
-	 * a definite answer from system about it, allow only PF0 to be RoCE.
+	else if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+		/* AH emulation:
+		 * Allow only PF0 to be RoCE to overcome a lack of ILT lines.
 	 */
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) {
-		if (!p_hwfn->rel_pf_id)
-			p_hwfn->hw_info.personality = ECORE_PCI_ETH_ROCE;
-		else
+		if (ECORE_IS_AH(p_hwfn->p_dev) && p_hwfn->rel_pf_id)
 			p_hwfn->hw_info.personality = ECORE_PCI_ETH;
+		else
+			p_hwfn->hw_info.personality = ECORE_PCI_ETH_ROCE;
 	}
 #endif
 
@@ -5379,6 +5443,18 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	return rc;
 }
 
+#define ECORE_MAX_DEVICE_NAME_LEN (8)
+
+void ecore_get_dev_name(struct ecore_dev *p_dev, u8 *name, u8 max_chars)
+{
+	u8 n;
+
+	n = OSAL_MIN_T(u8, max_chars, ECORE_MAX_DEVICE_NAME_LEN);
+	OSAL_SNPRINTF((char *)name, n, "%s %c%d",
+		      ECORE_IS_BB(p_dev) ? "BB" : "AH",
+		      'A' + p_dev->chip_rev, (int)p_dev->chip_metal);
+}
+
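
The new helper simply formats the device family and stepping into a short string; the same formatting, built outside the driver as a hedged usage sketch (the field values are examples only):

#include <stdio.h>

int main(void)
{
	char name[8];			/* ECORE_MAX_DEVICE_NAME_LEN */
	int is_bb = 0, chip_rev = 1, chip_metal = 0;	/* example: AH, rev B */

	snprintf(name, sizeof(name), "%s %c%d",
		 is_bb ? "BB" : "AH", 'A' + chip_rev, chip_metal);
	printf("%s\n", name);		/* prints "AH B0" */
	return 0;
}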
 static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 					       struct ecore_ptt *p_ptt)
 {
@@ -5423,9 +5499,9 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 	}
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_dev)) {
+	if (CHIP_REV_IS_EMUL(p_dev) && ECORE_IS_BB(p_dev)) {
 		/* For some reason we have problems with this register
-		 * in B0 emulation; Simply assume no CMT
+		 * in BB B0 emulation; Simply assume no CMT
 		 */
 		DP_NOTICE(p_dev->hwfns, false,
 			  "device on emul - assume no CMT\n");
@@ -5456,14 +5532,17 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_hwfn *p_hwfn,
 
 	if (CHIP_REV_IS_EMUL(p_dev)) {
 		tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
-		if (tmp & (1 << 29)) {
-			DP_NOTICE(p_hwfn, false,
-				  "Emulation: Running on a FULL build\n");
-			p_dev->b_is_emul_full = true;
-		} else {
+
+		/* MISCS_REG_ECO_RESERVED[29]: full/reduced emulation build */
+		p_dev->b_is_emul_full = !!(tmp & (1 << 29));
+
+		/* MISCS_REG_ECO_RESERVED[28]: emulation build w/ or w/o MAC */
+		p_dev->b_is_emul_mac = !!(tmp & (1 << 28));
+
 			DP_NOTICE(p_hwfn, false,
-				  "Emulation: Running on a REDUCED build\n");
-		}
+			  "Emulation: Running on a %s build %s MAC\n",
+			  p_dev->b_is_emul_full ? "full" : "reduced",
+			  p_dev->b_is_emul_mac ? "with" : "without");
 	}
 #endif
 
@@ -5533,7 +5612,7 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 	p_hwfn->p_main_ptt = ecore_get_reserved_ptt(p_hwfn, RESERVED_PTT_MAIN);
 
 	/* First hwfn learns basic information, e.g., number of hwfns */
-	if (!p_hwfn->my_id) {
+	if (IS_LEAD_HWFN(p_hwfn)) {
 		rc = ecore_get_dev_info(p_hwfn, p_hwfn->p_main_ptt);
 		if (rc != ECORE_SUCCESS) {
 			if (p_params->b_relaxed_probe)
@@ -5543,6 +5622,33 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 		}
 	}
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) && !b_ptt_gtt_init) {
+		struct ecore_ptt *p_ptt = p_hwfn->p_main_ptt;
+		u32 val;
+
+		/* Initialize PTT/GTT (done by MFW on ASIC) */
+		ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_START_INIT_PTT_GTT, 1);
+		OSAL_MSLEEP(10);
+		ecore_ptt_invalidate(p_hwfn);
+		val = ecore_rd(p_hwfn, p_ptt, PGLUE_B_REG_INIT_DONE_PTT_GTT);
+		if (val != 1) {
+			DP_ERR(p_hwfn,
+			       "PTT and GTT init in PGLUE_B didn't complete\n");
+			goto err1;
+		}
+
+		/* Clear a possible PGLUE_B parity from a previous GRC access */
+		ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_PRTY_STS_WR_H_0, 0x380);
+
+		b_ptt_gtt_init = true;
+	}
+#endif
+
+	/* Store the precompiled init data ptrs */
+	if (IS_LEAD_HWFN(p_hwfn))
+		ecore_init_iro_array(p_hwfn->p_dev);
+
 	ecore_hw_hwfn_prepare(p_hwfn);
 
 	/* Initialize MCP structure */
@@ -5581,9 +5687,6 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 
 	/* Check if mdump logs/data are present and update the epoch value */
 	if (IS_LEAD_HWFN(p_hwfn)) {
-#ifndef ASIC_ONLY
-		if (!CHIP_REV_IS_EMUL(p_dev)) {
-#endif
 		rc = ecore_mcp_mdump_get_info(p_hwfn, p_hwfn->p_main_ptt,
 					      &mdump_info);
 		if (rc == ECORE_SUCCESS && mdump_info.num_of_logs)
@@ -5600,9 +5703,6 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 
 		ecore_mcp_mdump_set_values(p_hwfn, p_hwfn->p_main_ptt,
 					   p_params->epoch);
-#ifndef ASIC_ONLY
-		}
-#endif
 	}
 
 	/* Allocate the init RT array and initialize the init-ops engine */
@@ -5615,10 +5715,12 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
 	}
 #ifndef ASIC_ONLY
 	if (CHIP_REV_IS_FPGA(p_dev)) {
-		DP_NOTICE(p_hwfn, false,
-			  "FPGA: workaround; Prevent DMAE parities\n");
-		ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK_K2,
-			 7);
+		if (ECORE_IS_AH(p_dev)) {
+			DP_NOTICE(p_hwfn, false,
+				  "FPGA: workaround; Prevent DMAE parities\n");
+			ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+				 PCIE_REG_PRTY_MASK_K2, 7);
+		}
 
 		DP_NOTICE(p_hwfn, false,
 			  "FPGA: workaround: Set VF bar0 size\n");
@@ -5652,10 +5754,6 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 	if (p_params->b_relaxed_probe)
 		p_params->p_relaxed_res = ECORE_HW_PREPARE_SUCCESS;
 
-	/* Store the precompiled init data ptrs */
-	if (IS_PF(p_dev))
-		ecore_init_iro_array(p_dev);
-
 	/* Initialize the first hwfn - will learn number of hwfns */
 	rc = ecore_hw_prepare_single(p_hwfn, p_dev->regview,
 				     p_dev->doorbells, p_dev->db_phys_addr,
@@ -5665,7 +5763,7 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
 
 	p_params->personality = p_hwfn->hw_info.personality;
 
-	/* initilalize 2nd hwfn if necessary */
+	/* Initialize 2nd hwfn if necessary */
 	if (ECORE_IS_CMT(p_dev)) {
 		void OSAL_IOMEM *p_regview, *p_doorbell;
 		u8 OSAL_IOMEM *addr;
@@ -6382,7 +6480,7 @@ static int __ecore_configure_vport_wfq(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_link_state *p_link;
 	int rc = ECORE_SUCCESS;
 
-	p_link = &p_hwfn->p_dev->hwfns[0].mcp_info->link_output;
+	p_link = &ECORE_LEADING_HWFN(p_hwfn->p_dev)->mcp_info->link_output;
 
 	if (!p_link->min_pf_rate) {
 		p_hwfn->qm_info.wfq_data[vp_id].min_speed = rate;
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index a0a6e3aba..d746aaed1 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -9,31 +9,29 @@
 #include "ecore_init_ops.h"
 #include "reg_addr.h"
 #include "ecore_rt_defs.h"
-#include "ecore_hsi_common.h"
 #include "ecore_hsi_init_func.h"
-#include "ecore_hsi_eth.h"
 #include "ecore_hsi_init_tool.h"
 #include "ecore_iro.h"
 #include "ecore_init_fw_funcs.h"
-
-#define CDU_VALIDATION_DEFAULT_CFG 61
-
 static u16 con_region_offsets[3][NUM_OF_CONNECTION_TYPES] = {
-	{ 400,  336,  352,  304,  304,  384,  416,  352}, /* region 3 offsets */
-	{ 528,  496,  416,  448,  448,  512,  544,  480}, /* region 4 offsets */
-	{ 608,  544,  496,  512,  576,  592,  624,  560}  /* region 5 offsets */
+	{ 400,  336,  352,  368,  304,  384,  416,  352}, /* region 3 offsets */
+	{ 528,  496,  416,  512,  448,  512,  544,  480}, /* region 4 offsets */
+	{ 608,  544,  496,  576,  576,  592,  624,  560}  /* region 5 offsets */
 };
 static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 	{ 240,  240,  112,    0,    0,    0,    0,   96}  /* region 1 offsets */
 };
 
 /* General constants */
-#define QM_PQ_MEM_4KB(pq_size) (pq_size ? DIV_ROUND_UP((pq_size + 1) * \
-				QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
-#define QM_PQ_SIZE_256B(pq_size) (pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : \
-				  0)
+#define QM_PQ_MEM_4KB(pq_size) \
+	(pq_size ? DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000) : 0)
+#define QM_PQ_SIZE_256B(pq_size) \
+	(pq_size ? DIV_ROUND_UP(pq_size, 0x100) - 1 : 0)
 #define QM_INVALID_PQ_ID		0xffff
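
As a worked example of the reformatted sizing macros: assuming QM_PQ_ELEMENT_SIZE is 4 bytes (an assumption for illustration only), a PQ of 1024 elements needs DIV_ROUND_UP((1024 + 1) * 4, 0x1000) = 2 pages of 4 KB, and QM_PQ_SIZE_256B(1024) evaluates to DIV_ROUND_UP(1024, 0x100) - 1 = 3. A standalone check:

#include <stdio.h>

#define DIV_ROUND_UP(a, b)	(((a) + (b) - 1) / (b))
#define QM_PQ_ELEMENT_SIZE	4	/* bytes, assumed for illustration */

int main(void)
{
	unsigned int pq_size = 1024;	/* queue elements */

	printf("4KB pages: %u, 256B units - 1: %u\n",
	       DIV_ROUND_UP((pq_size + 1) * QM_PQ_ELEMENT_SIZE, 0x1000),
	       DIV_ROUND_UP(pq_size, 0x100) - 1);
	return 0;
}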
 
+/* Max link speed (in Mbps) */
+#define QM_MAX_LINK_SPEED		100000
+
 /* Feature enable */
 #define QM_BYPASS_EN			1
 #define QM_BYTE_CRD_EN			1
@@ -42,7 +40,8 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 #define QM_OTHER_PQS_PER_PF		4
 
 /* VOQ constants */
-#define QM_E5_NUM_EXT_VOQ		(MAX_NUM_PORTS_E5 * NUM_OF_TCS)
+#define MAX_NUM_VOQS			(MAX_NUM_PORTS_K2 * NUM_TCS_4PORT_K2)
+#define VOQS_BIT_MASK			((1 << MAX_NUM_VOQS) - 1)
 
 /* WFQ constants: */
 
@@ -53,8 +52,7 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 #define QM_WFQ_VP_PQ_VOQ_SHIFT		0
 
 /* Bit  of PF in WFQ VP PQ map */
-#define QM_WFQ_VP_PQ_PF_E4_SHIFT	5
-#define QM_WFQ_VP_PQ_PF_E5_SHIFT	6
+#define QM_WFQ_VP_PQ_PF_SHIFT		5
 
 /* 0x9000 = 4*9*1024 */
 #define QM_WFQ_INC_VAL(weight)		((weight) * 0x9000)
@@ -62,9 +60,6 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 /* Max WFQ increment value is 0.7 * upper bound */
 #define QM_WFQ_MAX_INC_VAL		((QM_WFQ_UPPER_BOUND * 7) / 10)
 
-/* Number of VOQs in E5 QmWfqCrd register */
-#define QM_WFQ_CRD_E5_NUM_VOQS		16
-
 /* RL constants: */
 
 /* Period in us */
@@ -110,8 +105,6 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 /* Pure LB CmdQ lines (+spare) */
 #define PBF_CMDQ_PURE_LB_LINES		150
 
-#define PBF_CMDQ_LINES_E5_RSVD_RATIO	8
-
 #define PBF_CMDQ_LINES_RT_OFFSET(ext_voq) \
 	(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \
 	 ext_voq * \
@@ -175,42 +168,25 @@ static u16 task_region_offsets[1][NUM_OF_CONNECTION_TYPES] = {
 	} while (0)
 
 #define WRITE_PQ_INFO_TO_RAM		1
-#define PQ_INFO_ELEMENT(vp, pf, tc, port, rl_valid, rl)	\
-	(((vp) << 0) | ((pf) << 12) | ((tc) << 16) |    \
-	 ((port) << 20) | ((rl_valid) << 22) | ((rl) << 24))
-#define PQ_INFO_RAM_GRC_ADDRESS(pq_id) \
-	(XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM + 21776 + (pq_id) * 4)
 
-/******************** INTERNAL IMPLEMENTATION *********************/
+#define PQ_INFO_ELEMENT(vp_pq_id, pf, tc, port, rl_valid, rl_id) \
+	(((vp_pq_id) << 0) | ((pf) << 12) | ((tc) << 16) | ((port) << 20) | \
+	 ((rl_valid ? 1 : 0) << 22) | (((rl_id) & 255) << 24) | \
+	 (((rl_id) >> 8) << 9))
 
-/* Returns the external VOQ number */
-static u8 ecore_get_ext_voq(struct ecore_hwfn *p_hwfn,
-			    u8 port_id,
-			    u8 tc,
-			    u8 max_phys_tcs_per_port)
-{
-	if (tc == PURE_LB_TC)
-		return NUM_OF_PHYS_TCS * (MAX_NUM_PORTS_BB) + port_id;
-	else
-		return port_id * (max_phys_tcs_per_port) + tc;
-}
+#define PQ_INFO_RAM_GRC_ADDRESS(pq_id) (XSEM_REG_FAST_MEMORY + \
+	SEM_FAST_REG_INT_RAM + XSTORM_PQ_INFO_OFFSET(pq_id))
+
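
The reworked PQ_INFO_ELEMENT() packs the first Tx PQ of the VPORT/TC at bit 0, the PF at bit 12, the TC at bit 16, the port at bit 20, the RL-valid flag at bit 22, the low 8 bits of the RL ID at bit 24 and the remaining RL ID bits starting at bit 9. A standalone sketch of that packing (the field values are arbitrary examples, not taken from the driver):

#include <stdio.h>
#include <stdint.h>

/* Same packing as PQ_INFO_ELEMENT(); not the driver macro itself */
static uint32_t pq_info_element(uint32_t vp_pq_id, uint32_t pf, uint32_t tc,
				uint32_t port, int rl_valid, uint32_t rl_id)
{
	return (vp_pq_id << 0) | (pf << 12) | (tc << 16) | (port << 20) |
	       ((rl_valid ? 1u : 0u) << 22) | ((rl_id & 255u) << 24) |
	       ((rl_id >> 8) << 9);
}

int main(void)
{
	/* VP PQ 5, PF 2, TC 3, port 1, RL enabled, RL ID 260 */
	printf("pq_info = 0x%08x\n", pq_info_element(5, 2, 3, 1, 1, 260));
	return 0;
}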
+/******************** INTERNAL IMPLEMENTATION *********************/
 
 /* Prepare PF RL enable/disable runtime init values */
 static void ecore_enable_pf_rl(struct ecore_hwfn *p_hwfn, bool pf_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLPFENABLE_RT_OFFSET, pf_rl_en ? 1 : 0);
 	if (pf_rl_en) {
-		u8 num_ext_voqs = MAX_NUM_VOQS_E4;
-		u64 voq_bit_mask = ((u64)1 << num_ext_voqs) - 1;
-
 		/* Enable RLs for all VOQs */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_RT_OFFSET,
-			     (u32)voq_bit_mask);
-#ifdef QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET
-		if (num_ext_voqs >= 32)
-			STORE_RT_REG(p_hwfn, QM_REG_RLPFVOQENABLE_MSB_RT_OFFSET,
-				     (u32)(voq_bit_mask >> 32));
-#endif
+			     VOQS_BIT_MASK);
 
 		/* Write RL period */
 		STORE_RT_REG(p_hwfn, QM_REG_RLPFPERIOD_RT_OFFSET,
@@ -236,12 +212,13 @@ static void ecore_enable_pf_wfq(struct ecore_hwfn *p_hwfn, bool pf_wfq_en)
 			     QM_WFQ_UPPER_BOUND);
 }
 
-/* Prepare VPORT RL enable/disable runtime init values */
-static void ecore_enable_vport_rl(struct ecore_hwfn *p_hwfn, bool vport_rl_en)
+/* Prepare global RL enable/disable runtime init values */
+static void ecore_enable_global_rl(struct ecore_hwfn *p_hwfn,
+				   bool global_rl_en)
 {
 	STORE_RT_REG(p_hwfn, QM_REG_RLGLBLENABLE_RT_OFFSET,
-		     vport_rl_en ? 1 : 0);
-	if (vport_rl_en) {
+		     global_rl_en ? 1 : 0);
+	if (global_rl_en) {
 		/* Write RL period (use timer 0 only) */
 		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLPERIOD_0_RT_OFFSET,
 			     QM_RL_PERIOD_CLK_25M);
@@ -272,19 +249,16 @@ static void ecore_enable_vport_wfq(struct ecore_hwfn *p_hwfn, bool vport_wfq_en)
  * the specified VOQ
  */
 static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn,
-					 u8 ext_voq,
+					 u8 voq,
 					 u16 cmdq_lines)
 {
-	u32 qm_line_crd;
+	u32 qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
 
-	qm_line_crd = QM_VOQ_LINE_CRD(cmdq_lines);
-
-	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(ext_voq),
+	OVERWRITE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq),
 			 (u32)cmdq_lines);
-	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + ext_voq,
-			 qm_line_crd);
-	STORE_RT_REG(p_hwfn, QM_REG_VOQINITCRDLINE_RT_OFFSET + ext_voq,
-			 qm_line_crd);
+	STORE_RT_REG(p_hwfn, QM_REG_VOQCRDLINE_RT_OFFSET + voq, qm_line_crd);
+	STORE_RT_REG(p_hwfn, QM_REG_VOQINITCRDLINE_RT_OFFSET + voq,
+		     qm_line_crd);
 }
 
 /* Prepare runtime init values to allocate PBF command queue lines. */
@@ -294,12 +268,11 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 				     struct init_qm_port_params
 				     port_params[MAX_NUM_PORTS])
 {
-	u8 tc, ext_voq, port_id, num_tcs_in_port;
-	u8 num_ext_voqs = MAX_NUM_VOQS_E4;
+	u8 tc, voq, port_id, num_tcs_in_port;
 
 	/* Clear PBF lines of all VOQs */
-	for (ext_voq = 0; ext_voq < num_ext_voqs; ext_voq++)
-		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(ext_voq), 0);
+	for (voq = 0; voq < MAX_NUM_VOQS; voq++)
+		STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0);
 
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
 		u16 phys_lines, phys_lines_per_tc;
@@ -308,8 +281,7 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 			continue;
 
 		/* Find number of command queue lines to divide between the
-		 * active physical TCs. In E5, 1/8 of the lines are reserved.
-		 * the lines for pure LB TC are subtracted.
+		 * active physical TCs.
 		 */
 		phys_lines = port_params[port_id].num_pbf_cmd_lines;
 		phys_lines -= PBF_CMDQ_PURE_LB_LINES;
@@ -324,18 +296,16 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
 
 		/* Init registers per active TC */
 		for (tc = 0; tc < max_phys_tcs_per_port; tc++) {
-			ext_voq = ecore_get_ext_voq(p_hwfn, port_id, tc,
-						    max_phys_tcs_per_port);
-			if (((port_params[port_id].active_phys_tcs >> tc) &
-			    0x1) == 1)
-				ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq,
+			voq = VOQ(port_id, tc, max_phys_tcs_per_port);
+			if (((port_params[port_id].active_phys_tcs >>
+			      tc) & 0x1) == 1)
+				ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
 							     phys_lines_per_tc);
 		}
 
 		/* Init registers for pure LB TC */
-		ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PURE_LB_TC,
-					    max_phys_tcs_per_port);
-		ecore_cmdq_lines_voq_rt_init(p_hwfn, ext_voq,
+		voq = VOQ(port_id, PURE_LB_TC, max_phys_tcs_per_port);
+		ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
 					     PBF_CMDQ_PURE_LB_LINES);
 	}
 }
@@ -367,7 +337,7 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 				     port_params[MAX_NUM_PORTS])
 {
 	u32 usable_blocks, pure_lb_blocks, phys_blocks;
-	u8 tc, ext_voq, port_id, num_tcs_in_port;
+	u8 tc, voq, port_id, num_tcs_in_port;
 
 	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
 		if (!port_params[port_id].active)
@@ -399,24 +369,58 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
 		for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
 			if (((port_params[port_id].active_phys_tcs >> tc) &
 			     0x1) == 1) {
-				ext_voq = ecore_get_ext_voq(p_hwfn, port_id, tc,
-							 max_phys_tcs_per_port);
+				voq = VOQ(port_id, tc, max_phys_tcs_per_port);
 				STORE_RT_REG(p_hwfn,
-					PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq),
+					PBF_BTB_GUARANTEED_RT_OFFSET(voq),
 					phys_blocks);
 			}
 		}
 
 		/* Init pure LB TC */
-		ext_voq = ecore_get_ext_voq(p_hwfn, port_id, PURE_LB_TC,
-					    max_phys_tcs_per_port);
-		STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(ext_voq),
+		voq = VOQ(port_id, PURE_LB_TC, max_phys_tcs_per_port);
+		STORE_RT_REG(p_hwfn, PBF_BTB_GUARANTEED_RT_OFFSET(voq),
 			     pure_lb_blocks);
 	}
 }
 
+/* Prepare runtime init values for the specified RL.
+ * If global_rl_params is OSAL_NULL, max link speed (100Gbps) is used instead.
+ * Return -1 on error.
+ */
+static int ecore_global_rl_rt_init(struct ecore_hwfn *p_hwfn,
+				   struct init_qm_global_rl_params
+				     global_rl_params[COMMON_MAX_QM_GLOBAL_RLS])
+{
+	u32 upper_bound = QM_VP_RL_UPPER_BOUND(QM_MAX_LINK_SPEED) |
+			  (u32)QM_RL_CRD_REG_SIGN_BIT;
+	u32 inc_val;
+	u16 rl_id;
+
+	/* Go over all global RLs */
+	for (rl_id = 0; rl_id < MAX_QM_GLOBAL_RLS; rl_id++) {
+		u32 rate_limit = global_rl_params ?
+				 global_rl_params[rl_id].rate_limit : 0;
+
+		inc_val = QM_RL_INC_VAL(rate_limit ?
+					rate_limit : QM_MAX_LINK_SPEED);
+		if (inc_val > QM_VP_RL_MAX_INC_VAL(QM_MAX_LINK_SPEED)) {
+			DP_NOTICE(p_hwfn, true, "Invalid rate limit configuration.\n");
+			return -1;
+		}
+
+		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + rl_id,
+			     (u32)QM_RL_CRD_REG_SIGN_BIT);
+		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLUPPERBOUND_RT_OFFSET + rl_id,
+			     upper_bound);
+		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + rl_id,
+			     inc_val);
+	}
+
+	return 0;
+}
+
 /* Prepare Tx PQ mapping runtime init values for the specified PF */
-static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
+static int ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt,
 				    u8 pf_id,
 				    u8 max_phys_tcs_per_port,
@@ -426,7 +430,7 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				    u16 start_pq,
 				    u16 num_pf_pqs,
 				    u16 num_vf_pqs,
-				    u8 start_vport,
+				   u16 start_vport,
 				    u32 base_mem_addr_4kb,
 				    struct init_qm_pq_params *pq_params,
 				    struct init_qm_vport_params *vport_params)
@@ -436,6 +440,9 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 	u32 num_tx_pq_vf_masks = MAX_QM_TX_QUEUES / QM_PF_QUEUE_GROUP_SIZE;
 	u16 num_pqs, first_pq_group, last_pq_group, i, j, pq_id, pq_group;
 	u32 pq_mem_4kb, vport_pq_mem_4kb, mem_addr_4kb;
+	#if (WRITE_PQ_INFO_TO_RAM != 0)
+		u32 pq_info = 0;
+	#endif
 
 	num_pqs = num_pf_pqs + num_vf_pqs;
 
@@ -459,24 +466,22 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 
 	/* Go over all Tx PQs */
 	for (i = 0, pq_id = start_pq; i < num_pqs; i++, pq_id++) {
-		u32 max_qm_global_rls = MAX_QM_GLOBAL_RLS;
-		u8 ext_voq, vport_id_in_pf;
-		bool is_vf_pq, rl_valid;
-		u16 first_tx_pq_id;
-
-		ext_voq = ecore_get_ext_voq(p_hwfn, pq_params[i].port_id,
-					    pq_params[i].tc_id,
-					    max_phys_tcs_per_port);
+		u16 first_tx_pq_id, vport_id_in_pf;
+		struct qm_rf_pq_map tx_pq_map;
+		bool is_vf_pq;
+		u8 voq;
+
+		voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id,
+			  max_phys_tcs_per_port);
 		is_vf_pq = (i >= num_pf_pqs);
-		rl_valid = pq_params[i].rl_valid > 0;
 
 		/* Update first Tx PQ of VPORT/TC */
 		vport_id_in_pf = pq_params[i].vport_id - start_vport;
 		first_tx_pq_id =
 		vport_params[vport_id_in_pf].first_tx_pq_id[pq_params[i].tc_id];
 		if (first_tx_pq_id == QM_INVALID_PQ_ID) {
-			u32 map_val = (ext_voq << QM_WFQ_VP_PQ_VOQ_SHIFT) |
-				       (pf_id << (QM_WFQ_VP_PQ_PF_E4_SHIFT));
+			u32 map_val = (voq << QM_WFQ_VP_PQ_VOQ_SHIFT) |
+				      (pf_id << QM_WFQ_VP_PQ_PF_SHIFT);
 
 			/* Create new VP PQ */
 			vport_params[vport_id_in_pf].
@@ -488,20 +493,10 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 				     first_tx_pq_id, map_val);
 		}
 
-		/* Check RL ID */
-		if (rl_valid && pq_params[i].vport_id >= max_qm_global_rls) {
-			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT ID for rate limiter config\n");
-			rl_valid = false;
-		}
-
 		/* Prepare PQ map entry */
-		struct qm_rf_pq_map tx_pq_map;
-
 		QM_INIT_TX_PQ_MAP(p_hwfn, tx_pq_map, pq_id, first_tx_pq_id,
-				  rl_valid ? 1 : 0,
-				  rl_valid ? pq_params[i].vport_id : 0,
-				  ext_voq, pq_params[i].wrr_group);
+				  pq_params[i].rl_valid, pq_params[i].rl_id,
+				  voq, pq_params[i].wrr_group);
 
 		/* Set PQ base address */
 		STORE_RT_REG(p_hwfn, QM_REG_BASEADDRTXPQ_RT_OFFSET + pq_id,
@@ -514,17 +509,15 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 					     (pq_id * 2) + j, 0);
 
 		/* Write PQ info to RAM */
-		if (WRITE_PQ_INFO_TO_RAM != 0) {
-			u32 pq_info = 0;
-
-			pq_info = PQ_INFO_ELEMENT(first_tx_pq_id, pf_id,
-						  pq_params[i].tc_id,
-						  pq_params[i].port_id,
-						  rl_valid ? 1 : 0, rl_valid ?
-						  pq_params[i].vport_id : 0);
-			ecore_wr(p_hwfn, p_ptt, PQ_INFO_RAM_GRC_ADDRESS(pq_id),
-				 pq_info);
-		}
+#if (WRITE_PQ_INFO_TO_RAM != 0)
+		pq_info = PQ_INFO_ELEMENT(first_tx_pq_id, pf_id,
+					  pq_params[i].tc_id,
+					  pq_params[i].port_id,
+					  pq_params[i].rl_valid,
+					  pq_params[i].rl_id);
+		ecore_wr(p_hwfn, p_ptt, PQ_INFO_RAM_GRC_ADDRESS(pq_id),
+			 pq_info);
+#endif
 
 		/* If VF PQ, add indication to PQ VF mask */
 		if (is_vf_pq) {
@@ -541,6 +534,8 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
 		if (tx_pq_vf_mask[i])
 			STORE_RT_REG(p_hwfn, QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET +
 				     i, tx_pq_vf_mask[i]);
+
+	return 0;
 }
 
 /* Prepare Other PQ mapping runtime init values for the specified PF */
@@ -598,7 +593,7 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 				struct init_qm_pq_params *pq_params)
 {
 	u32 inc_val, crd_reg_offset;
-	u8 ext_voq;
+	u8 voq;
 	u16 i;
 
 	inc_val = QM_WFQ_INC_VAL(pf_wfq);
@@ -609,13 +604,12 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 	}
 
 	for (i = 0; i < num_tx_pqs; i++) {
-		ext_voq = ecore_get_ext_voq(p_hwfn, pq_params[i].port_id,
-					    pq_params[i].tc_id,
-					    max_phys_tcs_per_port);
+		voq = VOQ(pq_params[i].port_id, pq_params[i].tc_id,
+			  max_phys_tcs_per_port);
 		crd_reg_offset = (pf_id < MAX_NUM_PFS_BB ?
 				  QM_REG_WFQPFCRD_RT_OFFSET :
 				  QM_REG_WFQPFCRD_MSB_RT_OFFSET) +
-				 ext_voq * MAX_NUM_PFS_BB +
+				 voq * MAX_NUM_PFS_BB +
 				 (pf_id % MAX_NUM_PFS_BB);
 		OVERWRITE_RT_REG(p_hwfn, crd_reg_offset,
 				 (u32)QM_WFQ_CRD_REG_SIGN_BIT);
@@ -655,19 +649,19 @@ static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
  * Return -1 on error.
  */
 static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
-				u8 num_vports,
+				u16 num_vports,
 				struct init_qm_vport_params *vport_params)
 {
-	u16 vport_pq_id;
+	u16 vp_pq_id, vport_id;
 	u32 inc_val;
-	u8 tc, i;
+	u8 tc;
 
 	/* Go over all PF VPORTs */
-	for (i = 0; i < num_vports; i++) {
-		if (!vport_params[i].wfq)
+	for (vport_id = 0; vport_id < num_vports; vport_id++) {
+		if (!vport_params[vport_id].wfq)
 			continue;
 
-		inc_val = QM_WFQ_INC_VAL(vport_params[i].wfq);
+		inc_val = QM_WFQ_INC_VAL(vport_params[vport_id].wfq);
 		if (inc_val > QM_WFQ_MAX_INC_VAL) {
 			DP_NOTICE(p_hwfn, true,
 				  "Invalid VPORT WFQ weight configuration\n");
@@ -676,56 +670,16 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
 
 		/* Each VPORT can have several VPORT PQ IDs for various TCs */
 		for (tc = 0; tc < NUM_OF_TCS; tc++) {
-			vport_pq_id = vport_params[i].first_tx_pq_id[tc];
-			if (vport_pq_id != QM_INVALID_PQ_ID) {
-				STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
-					     vport_pq_id,
-					     (u32)QM_WFQ_CRD_REG_SIGN_BIT);
-				STORE_RT_REG(p_hwfn,
-					     QM_REG_WFQVPWEIGHT_RT_OFFSET +
-					     vport_pq_id, inc_val);
-			}
+			vp_pq_id = vport_params[vport_id].first_tx_pq_id[tc];
+			if (vp_pq_id == QM_INVALID_PQ_ID)
+				continue;
+
+			STORE_RT_REG(p_hwfn, QM_REG_WFQVPCRD_RT_OFFSET +
+				     vp_pq_id, (u32)QM_WFQ_CRD_REG_SIGN_BIT);
+			STORE_RT_REG(p_hwfn, QM_REG_WFQVPWEIGHT_RT_OFFSET +
+				     vp_pq_id, inc_val);
 		}
 	}
-	return 0;
-}
-
-/* Prepare VPORT RL runtime init values for the specified VPORTs.
- * Return -1 on error.
- */
-static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
-				  u8 start_vport,
-				  u8 num_vports,
-				  u32 link_speed,
-				  struct init_qm_vport_params *vport_params)
-{
-	u8 i, vport_id;
-	u32 inc_val;
-
-	if (start_vport + num_vports >= MAX_QM_GLOBAL_RLS) {
-		DP_NOTICE(p_hwfn, true,
-			  "Invalid VPORT ID for rate limiter configuration\n");
-		return -1;
-	}
-
-	/* Go over all PF VPORTs */
-	for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
-		inc_val = QM_RL_INC_VAL(link_speed);
-		if (inc_val > QM_VP_RL_MAX_INC_VAL(link_speed)) {
-			DP_NOTICE(p_hwfn, true,
-				  "Invalid VPORT rate-limit configuration\n");
-			return -1;
-		}
-
-		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + vport_id,
-			     (u32)QM_RL_CRD_REG_SIGN_BIT);
-		STORE_RT_REG(p_hwfn,
-			     QM_REG_RLGLBLUPPERBOUND_RT_OFFSET + vport_id,
-			     QM_VP_RL_UPPER_BOUND(link_speed) |
-			     (u32)QM_RL_CRD_REG_SIGN_BIT);
-		STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + vport_id,
-			     inc_val);
-	}
 
 	return 0;
 }
@@ -769,10 +723,10 @@ static bool ecore_send_qm_cmd(struct ecore_hwfn *p_hwfn,
 	return ecore_poll_on_qm_cmd_ready(p_hwfn, p_ptt);
 }
 
-
 /******************** INTERFACE IMPLEMENTATION *********************/
 
-u32 ecore_qm_pf_mem_size(u32 num_pf_cids,
+u32 ecore_qm_pf_mem_size(struct ecore_hwfn *p_hwfn,
+			 u32 num_pf_cids,
 						 u32 num_vf_cids,
 						 u32 num_tids,
 						 u16 num_pf_pqs,
@@ -788,25 +742,26 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 			    u8 max_phys_tcs_per_port,
 			    bool pf_rl_en,
 			    bool pf_wfq_en,
-			    bool vport_rl_en,
+			    bool global_rl_en,
 			    bool vport_wfq_en,
 			    struct init_qm_port_params
-			    port_params[MAX_NUM_PORTS])
+				   port_params[MAX_NUM_PORTS],
+			    struct init_qm_global_rl_params
+				   global_rl_params[COMMON_MAX_QM_GLOBAL_RLS])
 {
-	u32 mask;
+	u32 mask = 0;
 
 	/* Init AFullOprtnstcCrdMask */
-	mask = (QM_OPPOR_LINE_VOQ_DEF <<
-		QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
-		(QM_BYTE_CRD_EN << QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ_SHIFT) |
-		(pf_wfq_en << QM_RF_OPPORTUNISTIC_MASK_PFWFQ_SHIFT) |
-		(vport_wfq_en << QM_RF_OPPORTUNISTIC_MASK_VPWFQ_SHIFT) |
-		(pf_rl_en << QM_RF_OPPORTUNISTIC_MASK_PFRL_SHIFT) |
-		(vport_rl_en << QM_RF_OPPORTUNISTIC_MASK_VPQCNRL_SHIFT) |
-		(QM_OPPOR_FW_STOP_DEF <<
-		 QM_RF_OPPORTUNISTIC_MASK_FWPAUSE_SHIFT) |
-		(QM_OPPOR_PQ_EMPTY_DEF <<
-		 QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_LINEVOQ,
+		  QM_OPPOR_LINE_VOQ_DEF);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_BYTEVOQ, QM_BYTE_CRD_EN);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFWFQ, pf_wfq_en);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPWFQ, vport_wfq_en);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_PFRL, pf_rl_en);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_VPQCNRL, global_rl_en);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_FWPAUSE, QM_OPPOR_FW_STOP_DEF);
+	SET_FIELD(mask, QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY,
+		  QM_OPPOR_PQ_EMPTY_DEF);
 	STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask);
 
 	/* Enable/disable PF RL */
@@ -815,8 +770,8 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 	/* Enable/disable PF WFQ */
 	ecore_enable_pf_wfq(p_hwfn, pf_wfq_en);
 
-	/* Enable/disable VPORT RL */
-	ecore_enable_vport_rl(p_hwfn, vport_rl_en);
+	/* Enable/disable global RL */
+	ecore_enable_global_rl(p_hwfn, global_rl_en);
 
 	/* Enable/disable VPORT WFQ */
 	ecore_enable_vport_wfq(p_hwfn, vport_wfq_en);
@@ -829,6 +784,8 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
 	ecore_btb_blocks_rt_init(p_hwfn, max_ports_per_engine,
 				 max_phys_tcs_per_port, port_params);
 
+	ecore_global_rl_rt_init(p_hwfn, global_rl_params);
+
 	return 0;
 }
 
@@ -843,24 +800,25 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			u16 start_pq,
 			u16 num_pf_pqs,
 			u16 num_vf_pqs,
-			u8 start_vport,
-			u8 num_vports,
+			u16 start_vport,
+			u16 num_vports,
 			u16 pf_wfq,
 			u32 pf_rl,
-			u32 link_speed,
 			struct init_qm_pq_params *pq_params,
 			struct init_qm_vport_params *vport_params)
 {
 	u32 other_mem_size_4kb;
-	u8 tc, i;
+	u16 vport_id;
+	u8 tc;
 
 	other_mem_size_4kb = QM_PQ_MEM_4KB(num_pf_cids + num_tids) *
 			     QM_OTHER_PQS_PER_PF;
 
 	/* Clear first Tx PQ ID array for each VPORT */
-	for (i = 0; i < num_vports; i++)
+	for (vport_id = 0; vport_id < num_vports; vport_id++)
 		for (tc = 0; tc < NUM_OF_TCS; tc++)
-			vport_params[i].first_tx_pq_id[tc] = QM_INVALID_PQ_ID;
+			vport_params[vport_id].first_tx_pq_id[tc] =
+				QM_INVALID_PQ_ID;
 
 	/* Map Other PQs (if any) */
 #if QM_OTHER_PQS_PER_PF > 0
@@ -869,10 +827,12 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 #endif
 
 	/* Map Tx PQs */
-	ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, pf_id, max_phys_tcs_per_port,
-				is_pf_loading, num_pf_cids, num_vf_cids,
-				start_pq, num_pf_pqs, num_vf_pqs, start_vport,
-				other_mem_size_4kb, pq_params, vport_params);
+	if (ecore_tx_pq_map_rt_init(p_hwfn, p_ptt, pf_id, max_phys_tcs_per_port,
+				    is_pf_loading, num_pf_cids, num_vf_cids,
+				    start_pq, num_pf_pqs, num_vf_pqs,
+				    start_vport, other_mem_size_4kb, pq_params,
+				    vport_params))
+		return -1;
 
 	/* Init PF WFQ */
 	if (pf_wfq)
@@ -885,15 +845,10 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 	if (ecore_pf_rl_rt_init(p_hwfn, pf_id, pf_rl))
 		return -1;
 
-	/* Set VPORT WFQ */
+	/* Init VPORT WFQ */
 	if (ecore_vp_wfq_rt_init(p_hwfn, num_vports, vport_params))
 		return -1;
 
-	/* Set VPORT RL */
-	if (ecore_vport_rl_rt_init
-	    (p_hwfn, start_vport, num_vports, link_speed, vport_params))
-		return -1;
-
 	return 0;
 }
 
@@ -935,27 +890,49 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
 
 int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
-			 u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq)
+			 u16 first_tx_pq_id[NUM_OF_TCS],
+			 u16 wfq)
 {
-	u16 vport_pq_id;
+	u16 vp_pq_id;
 	u32 inc_val;
 	u8 tc;
 
-	inc_val = QM_WFQ_INC_VAL(vport_wfq);
+	inc_val = QM_WFQ_INC_VAL(wfq);
 	if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
 		DP_NOTICE(p_hwfn, true,
 			  "Invalid VPORT WFQ weight configuration\n");
 		return -1;
 	}
 
+	/* A VPORT can have several VPORT PQ IDs for various TCs */
 	for (tc = 0; tc < NUM_OF_TCS; tc++) {
-		vport_pq_id = first_tx_pq_id[tc];
-		if (vport_pq_id != QM_INVALID_PQ_ID) {
+		vp_pq_id = first_tx_pq_id[tc];
+		if (vp_pq_id != QM_INVALID_PQ_ID) {
 			ecore_wr(p_hwfn, p_ptt,
-				 QM_REG_WFQVPWEIGHT + vport_pq_id * 4, inc_val);
+				 QM_REG_WFQVPWEIGHT + vp_pq_id * 4, inc_val);
 		}
 	}
 
+	return 0;
+}
+
+int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 u16 rl_id,
+			 u32 rate_limit)
+{
+	u32 inc_val;
+
+	inc_val = QM_RL_INC_VAL(rate_limit);
+	if (inc_val > QM_VP_RL_MAX_INC_VAL(rate_limit)) {
+		DP_NOTICE(p_hwfn, true, "Invalid rate limit configuration.\n");
+		return -1;
+	}
+
+	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + rl_id * 4,
+		 (u32)QM_RL_CRD_REG_SIGN_BIT);
+	ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + rl_id * 4, inc_val);
+
 	return 0;
 }
 
@@ -1024,6 +1001,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
 	return true;
 }
 
+#ifndef UNUSED_HSI_FUNC
 
 /* NIG: ETS configuration constants */
 #define NIG_TX_ETS_CLIENT_OFFSET	4
@@ -1247,6 +1225,9 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
 	}
 }
 
+#endif /* UNUSED_HSI_FUNC */
+
+#ifndef UNUSED_HSI_FUNC
 
 /* PRS: ETS configuration constants */
 #define PRS_ETS_MIN_WFQ_BYTES		1600
@@ -1313,6 +1294,8 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
 	}
 }
 
+#endif /* UNUSED_HSI_FUNC */
+#ifndef UNUSED_HSI_FUNC
 
 /* BRB: RAM configuration constants */
 #define BRB_TOTAL_RAM_BLOCKS_BB	4800
@@ -1425,13 +1408,74 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
 	}
 }
 
-/* In MF should be called once per port to set EtherType of OuterTag */
+#endif /* UNUSED_HSI_FUNC */
+#ifndef UNUSED_HSI_FUNC
+
+#define ARR_REG_WR(dev, ptt, addr, arr, arr_size)		\
+	do {							\
+		u32 i;						\
+		for (i = 0; i < (arr_size); i++)		\
+			ecore_wr(dev, ptt, ((addr) + (4 * i)),	\
+				 ((u32 *)&(arr))[i]);		\
+	} while (0)
+
+#ifndef DWORDS_TO_BYTES
+#define DWORDS_TO_BYTES(dwords)		((dwords) * REG_SIZE)
+#endif
+
+
+/**
+ * @brief ecore_dmae_to_grc - internal function that writes from host to
+ * wide-bus registers (split registers are not supported yet)
+ *
+ * @param p_hwfn -       HW device data
+ * @param p_ptt -       ptt window used for writing the registers.
+ * @param pData - pointer to source data.
+ * @param addr - Destination register address.
+ * @param len_in_dwords - data length in DWORDS (u32)
+ */
+static int ecore_dmae_to_grc(struct ecore_hwfn *p_hwfn,
+			     struct ecore_ptt *p_ptt,
+			     u32 *pData,
+			     u32 addr,
+			     u32 len_in_dwords)
+{
+	struct dmae_params params;
+	bool read_using_dmae = false;
+
+	if (!pData)
+		return -1;
+
+	/* Set DMAE params */
+	OSAL_MEMSET(&params, 0, sizeof(params));
+
+	SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 1);
+
+	/* Execute DMAE command */
+	read_using_dmae = !ecore_dmae_host2grc(p_hwfn, p_ptt,
+					       (u64)(osal_uintptr_t)(pData),
+					       addr, len_in_dwords, &params);
+	if (!read_using_dmae)
+		DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
+			   "Failed writing to chip using DMAE, using GRC instead\n");
+
+	/* If the DMAE write failed, fall back to writing via GRC */
+	if (!read_using_dmae)
+		/* write to registers using GRC */
+		ARR_REG_WR(p_hwfn, p_ptt, addr, pData, len_in_dwords);
+
+	return len_in_dwords;
+}
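
The helper above first tries a DMAE host-to-GRC copy and, if the DMAE command fails, falls back to per-dword GRC writes, which is the same loop ARR_REG_WR() expands to. A standalone sketch of that fallback pattern with a stubbed register-write callback (everything below is illustrative, not driver code):

#include <stdio.h>
#include <stdint.h>

/* Stand-in for ecore_wr(); here it just logs the write */
static void reg_wr(uint32_t addr, uint32_t val)
{
	printf("wr 0x%08x <- 0x%08x\n", addr, val);
}

/* Per-dword fallback, mirroring the ARR_REG_WR() loop */
static void arr_reg_wr(uint32_t addr, const uint32_t *arr, uint32_t n_dwords)
{
	uint32_t i;

	for (i = 0; i < n_dwords; i++)
		reg_wr(addr + 4 * i, arr[i]);
}

int main(void)
{
	uint32_t ram_line[2] = { 0xffffffff, 0x3ff };	/* e.g. a GFT RAM line */

	arr_reg_wr(0x1000, ram_line, 2);
	return 0;
}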
+
+/* In MF, should be called once per port to set EtherType of OuterTag */
 void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn, u32 ethType)
 {
 	/* Update DORQ register */
 	STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
 }
 
+#endif /* UNUSED_HSI_FUNC */
+
 #define SET_TUNNEL_TYPE_ENABLE_BIT(var, offset, enable) \
 (var = ((var) & ~(1 << (offset))) | ((enable) ? (1 << (offset)) : 0))
 #define PRS_ETH_TUNN_OUTPUT_FORMAT        -188897008
@@ -1580,8 +1624,8 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
 		 ip_geneve_enable ? 1 : 0);
 }
 
-#define PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET   4
-#define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT      -927094512
+#define PRS_ETH_VXLAN_NO_L2_ENABLE_OFFSET      3
+#define PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT   -925189872
 
 void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
 				  struct ecore_ptt *p_ptt,
@@ -1599,10 +1643,9 @@ void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
 		/* set VXLAN_NO_L2_ENABLE flag */
 		reg_val |= cfg_mask;
 
-		/* update PRS FIC  register */
+		/* update PRS FIC Format register */
 		ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0_BB_K2,
 		 (u32)PRS_ETH_VXLAN_NO_L2_OUTPUT_FORMAT);
-	} else  {
 		/* clear VXLAN_NO_L2_ENABLE flag */
 		reg_val &= ~cfg_mask;
 	}
@@ -1611,6 +1654,8 @@ void ecore_set_vxlan_no_l2_enable(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_MSG_INFO, reg_val);
 }
 
+#ifndef UNUSED_HSI_FUNC
+
 #define T_ETH_PACKET_ACTION_GFT_EVENTID  23
 #define PARSER_ETH_CONN_GFT_ACTION_CM_HDR  272
 #define T_ETH_PACKET_MATCH_RFS_EVENTID 25
@@ -1623,6 +1668,9 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
 		       struct ecore_ptt *p_ptt,
 		       u16 pf_id)
 {
+	struct regpair ram_line;
+	OSAL_MEMSET(&ram_line, 0, sizeof(ram_line));
+
 	/* disable gft search for PF */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 0);
 
@@ -1632,10 +1680,10 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id, 0);
 
 	/* Zero ramline */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
-				RAM_LINE_SIZE * pf_id, 0);
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
-				RAM_LINE_SIZE * pf_id + REG_SIZE, 0);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
+			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
+			  sizeof(ram_line) / REG_SIZE);
+
 }
 
 
@@ -1662,7 +1710,8 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 			       bool ipv6,
 			       enum gft_profile_type profile_type)
 {
-	u32 reg_val, cam_line, ram_line_lo, ram_line_hi, search_non_ip_as_gft;
+	u32 reg_val, cam_line, search_non_ip_as_gft;
+	struct regpair ram_line = { 0 };
 
 	if (!ipv6 && !ipv4)
 		DP_NOTICE(p_hwfn, true, "gft_config: must accept at least on of - ipv4 or ipv6'\n");
@@ -1723,35 +1772,33 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 			    PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
 
 	/* Write line to RAM - compare to filter 4 tuple */
-	ram_line_lo = 0;
-	ram_line_hi = 0;
 
 	/* Search no IP as GFT */
 	search_non_ip_as_gft = 0;
 
 	/* Tunnel type */
-	SET_FIELD(ram_line_lo, GFT_RAM_LINE_TUNNEL_DST_PORT, 1);
-	SET_FIELD(ram_line_lo, GFT_RAM_LINE_TUNNEL_OVER_IP_PROTOCOL, 1);
+	SET_FIELD(ram_line.lo, GFT_RAM_LINE_TUNNEL_DST_PORT, 1);
+	SET_FIELD(ram_line.lo, GFT_RAM_LINE_TUNNEL_OVER_IP_PROTOCOL, 1);
 
 	if (profile_type == GFT_PROFILE_TYPE_4_TUPLE) {
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_DST_IP, 1);
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_SRC_IP, 1);
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_ETHERTYPE, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_SRC_PORT, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_DST_PORT, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_DST_IP, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_SRC_IP, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_ETHERTYPE, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_SRC_PORT, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_DST_PORT, 1);
 	} else if (profile_type == GFT_PROFILE_TYPE_L4_DST_PORT) {
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_ETHERTYPE, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_DST_PORT, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_OVER_IP_PROTOCOL, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_ETHERTYPE, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_DST_PORT, 1);
 	} else if (profile_type == GFT_PROFILE_TYPE_IP_DST_ADDR) {
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_DST_IP, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_ETHERTYPE, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_DST_IP, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_ETHERTYPE, 1);
 	} else if (profile_type == GFT_PROFILE_TYPE_IP_SRC_ADDR) {
-		SET_FIELD(ram_line_hi, GFT_RAM_LINE_SRC_IP, 1);
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_ETHERTYPE, 1);
+		SET_FIELD(ram_line.hi, GFT_RAM_LINE_SRC_IP, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_ETHERTYPE, 1);
 	} else if (profile_type == GFT_PROFILE_TYPE_TUNNEL_TYPE) {
-		SET_FIELD(ram_line_lo, GFT_RAM_LINE_TUNNEL_ETHERTYPE, 1);
+		SET_FIELD(ram_line.lo, GFT_RAM_LINE_TUNNEL_ETHERTYPE, 1);
 
 		/* Allow tunneled traffic without inner IP */
 		search_non_ip_as_gft = 1;
@@ -1759,23 +1806,25 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_NON_IP_AS_GFT,
 		 search_non_ip_as_gft);
-	ecore_wr(p_hwfn, p_ptt,
-		 PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
-		 ram_line_lo);
-	ecore_wr(p_hwfn, p_ptt,
-		 PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id +
-		 REG_SIZE, ram_line_hi);
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
+			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
+			  sizeof(ram_line) / REG_SIZE);
 
 	/* Set default profile so that no filter match will happen */
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE *
-		 PRS_GFT_CAM_LINES_NO_MATCH, 0xffffffff);
-	ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE *
-		 PRS_GFT_CAM_LINES_NO_MATCH + REG_SIZE, 0x3ff);
+	ram_line.lo = 0xffffffff;
+	ram_line.hi = 0x3ff;
+	ecore_dmae_to_grc(p_hwfn, p_ptt, (u32 *)&ram_line,
+			  PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE *
+			  PRS_GFT_CAM_LINES_NO_MATCH,
+			  sizeof(ram_line) / REG_SIZE);
 
 	/* Enable gft search */
 	ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
 }
 
+
+#endif /* UNUSED_HSI_FUNC */
+
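
The profile mask is assembled in a local struct regpair via SET_FIELD() and
pushed out in one transfer. A simplified, self-contained illustration of
building such a 64-bit mask; the SET_FIELD() mirror and the RAM_LINE_* layouts
below are hypothetical placeholders, not the real GFT_RAM_LINE_* definitions.

#include <stdio.h>
#include <stdint.h>

/* Simplified local mirror of the driver's SET_FIELD() helper */
#define SET_FIELD(value, name, field_val)				\
	((value) = ((value) & ~((name##_MASK) << (name##_SHIFT))) |	\
		   (((uint32_t)(field_val) & (name##_MASK)) << (name##_SHIFT)))

/* Hypothetical single-bit field layouts, for illustration only */
#define RAM_LINE_ETHERTYPE_MASK   0x1
#define RAM_LINE_ETHERTYPE_SHIFT  0
#define RAM_LINE_SRC_PORT_MASK    0x1
#define RAM_LINE_SRC_PORT_SHIFT   1
#define RAM_LINE_DST_PORT_MASK    0x1
#define RAM_LINE_DST_PORT_SHIFT   2
#define RAM_LINE_DST_IP_MASK      0x1
#define RAM_LINE_DST_IP_SHIFT     0

struct regpair { uint32_t lo, hi; };

int main(void)
{
	struct regpair ram_line = { 0, 0 };

	/* a 4-tuple style profile: ethertype + L4 ports in lo, IPs in hi */
	SET_FIELD(ram_line.lo, RAM_LINE_ETHERTYPE, 1);
	SET_FIELD(ram_line.lo, RAM_LINE_SRC_PORT, 1);
	SET_FIELD(ram_line.lo, RAM_LINE_DST_PORT, 1);
	SET_FIELD(ram_line.hi, RAM_LINE_DST_IP, 1);

	printf("ram_line lo=0x%08x hi=0x%08x\n",
	       (unsigned int)ram_line.lo, (unsigned int)ram_line.hi);
	return 0;
}
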
 /* Configure VF zone size mode */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
 				    struct ecore_ptt *p_ptt, u16 mode,
@@ -1854,10 +1903,9 @@ static u8 cdu_crc8_table[CRC8_TABLE_SIZE];
 /* Calculate and return CDU validation byte per connection type / region /
  * cid
  */
-static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
+static u8 ecore_calc_cdu_validation_byte(struct ecore_hwfn *p_hwfn,
+					 u8 conn_type, u8 region, u32 cid)
 {
-	const u8 validation_cfg = CDU_VALIDATION_DEFAULT_CFG;
-
 	static u8 crc8_table_valid;	/*automatically initialized to 0*/
 	u8 crc, validation_byte = 0;
 	u32 validation_string = 0;
@@ -1874,15 +1922,20 @@ static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
 	 * [7:4]   = Region
 	 * [3:0]   = Type
 	 */
-	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
-		validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
-
-	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
-		validation_string |= ((region & 0xF) << 4);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
+	CDU_CONTEXT_VALIDATION_CFG_USE_CID) & 1)
+	validation_string |= (cid & 0xFFF00000) | ((cid & 0xFFF) << 8);
+#endif
 
-	if ((validation_cfg >> CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
-		validation_string |= (conn_type & 0xF);
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
+	CDU_CONTEXT_VALIDATION_CFG_USE_REGION) & 1)
+	validation_string |= ((region & 0xF) << 4);
+#endif
 
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
+	CDU_CONTEXT_VALIDATION_CFG_USE_TYPE) & 1)
+	validation_string |= (conn_type & 0xF);
+#endif
 	/* Convert to big-endian and calculate CRC8*/
 	data_to_crc = OSAL_BE32_TO_CPU(validation_string);
 
@@ -1899,40 +1952,41 @@ static u8 ecore_calc_cdu_validation_byte(u8 conn_type, u8 region, u32 cid)
 	 * [6:3]	= connection_type[3:0]
 	 * [2:0]	= crc[2:0]
 	 */
-
-	validation_byte |= ((validation_cfg >>
+	validation_byte |= ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >>
 			     CDU_CONTEXT_VALIDATION_CFG_USE_ACTIVE) & 1) << 7;
 
-	if ((validation_cfg >>
-	     CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
-		validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
-	else
-		validation_byte |= crc & 0x7F;
-
+#if ((CDU_CONTEXT_VALIDATION_DEFAULT_CFG >> \
+	CDU_CONTEXT_VALIDATION_CFG_VALIDATION_TYPE_SHIFT) & 1)
+	validation_byte |= ((conn_type & 0xF) << 3) | (crc & 0x7);
+#else
+	validation_byte |= crc & 0x7F;
+#endif
 	return validation_byte;
 }
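
For context on what ecore_calc_cdu_validation_byte() returns: the byte packs a
fixed active-configuration bit, the connection type and a truncated CRC, as
laid out in the comment inside the function. A minimal sketch of just that
packing step, assuming the validation-type configuration bit is set; the CRC is
taken as an input here, whereas the real code derives it with CRC-8 over the
CID/region/type string.

#include <stdio.h>
#include <stdint.h>

/*
 * Validation byte layout:
 *   [7]   = active configuration bit
 *   [6:3] = connection_type[3:0]
 *   [2:0] = crc[2:0]
 */
static uint8_t pack_validation_byte(int active, uint8_t conn_type, uint8_t crc)
{
	uint8_t val = 0;

	val |= (uint8_t)((active ? 1 : 0) << 7);
	val |= (uint8_t)((conn_type & 0xF) << 3);
	val |= (uint8_t)(crc & 0x7);

	return val;
}

int main(void)
{
	/* active=1, connection type 3, crc low bits 0x5 -> 0x9d */
	printf("validation byte: 0x%02x\n", pack_validation_byte(1, 3, 0x5));
	return 0;
}
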
 
 /* Calculate and set validation bytes for session context */
-void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
+void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
+				       void *p_ctx_mem, u16 ctx_size,
 				       u8 ctx_type, u32 cid)
 {
 	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
 
-	p_ctx = (u8 *)p_ctx_mem;
+	p_ctx = (u8 * const)p_ctx_mem;
+
 	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
 	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
 	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
 
 	OSAL_MEMSET(p_ctx, 0, ctx_size);
 
-	*x_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 3, cid);
-	*t_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 4, cid);
-	*u_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 5, cid);
+	*x_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 3, cid);
+	*t_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 4, cid);
+	*u_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 5, cid);
 }
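
Usage-wise the calc helper zeroes the context image and then writes one
validation byte per region at fixed offsets. A rough standalone sketch of that
placement; the offsets and byte values are invented for illustration and do not
come from con_region_offsets[].

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
	uint8_t ctx[64];
	/* hypothetical X/T/U region offsets for one context type */
	const size_t region_off[3] = { 8, 24, 40 };
	const uint8_t val_byte[3] = { 0x9d, 0xa5, 0xad };

	/* zero the whole context, then place only the validation bytes */
	memset(ctx, 0, sizeof(ctx));
	for (int i = 0; i < 3; i++)
		ctx[region_off[i]] = val_byte[i];

	printf("ctx[8]=0x%02x ctx[24]=0x%02x ctx[40]=0x%02x\n",
	       (unsigned int)ctx[8], (unsigned int)ctx[24],
	       (unsigned int)ctx[40]);
	return 0;
}
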
 
 /* Calculate and set validation bytes for task context */
-void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size, u8 ctx_type,
-				    u32 tid)
+void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
+				    u16 ctx_size, u8 ctx_type, u32 tid)
 {
 	u8 *p_ctx, *region1_val_ptr;
 
@@ -1941,16 +1995,19 @@ void ecore_calc_task_ctx_validation(void *p_ctx_mem, u16 ctx_size, u8 ctx_type,
 
 	OSAL_MEMSET(p_ctx, 0, ctx_size);
 
-	*region1_val_ptr = ecore_calc_cdu_validation_byte(ctx_type, 1, tid);
+	*region1_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 1,
+							  tid);
 }
 
 /* Memset session context to 0 while preserving validation bytes */
-void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
+void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
+			      u32 ctx_size, u8 ctx_type)
 {
 	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
 	u8 x_val, t_val, u_val;
 
-	p_ctx = (u8 *)p_ctx_mem;
+	p_ctx = (u8 * const)p_ctx_mem;
+
 	x_val_ptr = &p_ctx[con_region_offsets[0][ctx_type]];
 	t_val_ptr = &p_ctx[con_region_offsets[1][ctx_type]];
 	u_val_ptr = &p_ctx[con_region_offsets[2][ctx_type]];
@@ -1967,7 +2024,8 @@ void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
 }
 
 /* Memset task context to 0 while preserving validation bytes */
-void ecore_memset_task_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
+void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
+			   u32 ctx_size, u8 ctx_type)
 {
 	u8 *p_ctx, *region1_val_ptr;
 	u8 region1_val;
@@ -1988,62 +2046,15 @@ void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
 {
 	u32 ctx_validation;
 
-	/* Enable validation for connection region 3 - bits [31:24] */
-	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 24;
+	/* Enable validation for connection region 3: CCFC_CTX_VALID0[31:24] */
+	ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 24;
 	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID0, ctx_validation);
 
-	/* Enable validation for connection region 5 - bits [15: 8] */
-	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	/* Enable validation for connection region 5: CCFC_CTX_VALID1[15:8] */
+	ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 8;
 	ecore_wr(p_hwfn, p_ptt, CDU_REG_CCFC_CTX_VALID1, ctx_validation);
 
-	/* Enable validation for connection region 1 - bits [15: 8] */
-	ctx_validation = CDU_VALIDATION_DEFAULT_CFG << 8;
+	/* Enable validation for connection region 1: TCFC_CTX_VALID0[15:8] */
+	ctx_validation = CDU_CONTEXT_VALIDATION_DEFAULT_CFG << 8;
 	ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
 }
-
-
-/*******************************************************************************
- * File name : rdma_init.c
- * Author    : Michael Shteinbok
- *******************************************************************************
- *******************************************************************************
- * Description:
- * RDMA HSI functions
- *
- *******************************************************************************
- * Notes: This is the input to the auto generated file drv_init_fw_funcs.c
- *
- *******************************************************************************
- */
-static u32 ecore_get_rdma_assert_ram_addr(struct ecore_hwfn *p_hwfn,
-					  u8 storm_id)
-{
-	switch (storm_id) {
-	case 0: return TSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       TSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 1: return MSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       MSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 2: return USEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       USTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 3: return XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       XSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 4: return YSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       YSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-	case 5: return PSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
-		       PSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
-
-	default: return 0;
-	}
-}
-
-void ecore_set_rdma_error_level(struct ecore_hwfn *p_hwfn,
-				struct ecore_ptt *p_ptt,
-				u8 assert_level[NUM_STORMS])
-{
-	u8 storm_id;
-	for (storm_id = 0; storm_id < NUM_STORMS; storm_id++) {
-		u32 ram_addr = ecore_get_rdma_assert_ram_addr(p_hwfn, storm_id);
-
-		ecore_wr(p_hwfn, p_ptt, ram_addr, assert_level[storm_id]);
-	}
-}
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 3503a90c1..1d1b107c4 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -6,7 +6,20 @@
 
 #ifndef _INIT_FW_FUNCS_H
 #define _INIT_FW_FUNCS_H
-/* Forward declarations */
+#include "ecore_hsi_common.h"
+#include "ecore_hsi_eth.h"
+
+/* Physical memory descriptor */
+struct phys_mem_desc {
+	dma_addr_t phys_addr;
+	void *virt_addr;
+	u32 size; /* In bytes */
+};
+
+/* Returns the VOQ based on port and TC */
+#define VOQ(port, tc, max_phys_tcs_per_port) \
+	((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + (port) : \
+	 (port) * (max_phys_tcs_per_port) + (tc))
 
 struct init_qm_pq_params;
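
A quick self-contained illustration of the VOQ() mapping defined above; the
values chosen for NUM_OF_PHYS_TCS, MAX_NUM_PORTS_BB and PURE_LB_TC are
stand-ins picked only to make the arithmetic visible, not the real HSI
constants.

#include <stdio.h>

/* Stand-in values, for illustration only */
#define NUM_OF_PHYS_TCS		4
#define MAX_NUM_PORTS_BB	2
#define PURE_LB_TC		NUM_OF_PHYS_TCS /* first TC past the physical ones */

#define VOQ(port, tc, max_phys_tcs_per_port) \
	((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + (port) : \
	 (port) * (max_phys_tcs_per_port) + (tc))

int main(void)
{
	/* physical TCs map linearly per port ... */
	printf("port 1, tc 2  -> VOQ %d\n", VOQ(1, 2, 4));          /* 1*4+2 = 6 */
	/* ... while the pure-LB TC gets its own range past all physical VOQs */
	printf("port 1, LB TC -> VOQ %d\n", VOQ(1, PURE_LB_TC, 4)); /* 4*2+1 = 9 */
	return 0;
}
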
 
@@ -16,6 +29,7 @@ struct init_qm_pq_params;
  * Returns the required host memory size in 4KB units.
  * Must be called before all QM init HSI functions.
  *
+ * @param p_hwfn -		HW device data
  * @param num_pf_cids - number of connections used by this PF
  * @param num_vf_cids -	number of connections used by VFs of this PF
  * @param num_tids -	number of tasks used by this PF
@@ -24,7 +38,8 @@ struct init_qm_pq_params;
  *
  * @return The required host memory size in 4KB units.
  */
-u32 ecore_qm_pf_mem_size(u32 num_pf_cids,
+u32 ecore_qm_pf_mem_size(struct ecore_hwfn *p_hwfn,
+			 u32 num_pf_cids,
 						 u32 num_vf_cids,
 						 u32 num_tids,
 						 u16 num_pf_pqs,
@@ -39,20 +54,24 @@ u32 ecore_qm_pf_mem_size(u32 num_pf_cids,
  * @param max_phys_tcs_per_port	- max number of physical TCs per port in HW
  * @param pf_rl_en		- enable per-PF rate limiters
  * @param pf_wfq_en		- enable per-PF WFQ
- * @param vport_rl_en		- enable per-VPORT rate limiters
+ * @param global_rl_en -	  enable global rate limiters
  * @param vport_wfq_en		- enable per-VPORT WFQ
- * @param port_params - array of size MAX_NUM_PORTS with params for each port
+ * @param port_params -		  array with parameters for each port.
+ * @param global_rl_params -	  array with parameters for each global RL.
+ *				  If OSAL_NULL, global RLs are not configured.
  *
  * @return 0 on success, -1 on error.
  */
 int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
-			 u8 max_ports_per_engine,
-			 u8 max_phys_tcs_per_port,
-			 bool pf_rl_en,
-			 bool pf_wfq_en,
-			 bool vport_rl_en,
-			 bool vport_wfq_en,
-			 struct init_qm_port_params port_params[MAX_NUM_PORTS]);
+			    u8 max_ports_per_engine,
+			    u8 max_phys_tcs_per_port,
+			    bool pf_rl_en,
+			    bool pf_wfq_en,
+			    bool global_rl_en,
+			    bool vport_wfq_en,
+			  struct init_qm_port_params port_params[MAX_NUM_PORTS],
+			  struct init_qm_global_rl_params
+				 global_rl_params[COMMON_MAX_QM_GLOBAL_RLS]);
 
 /**
  * @brief ecore_qm_pf_rt_init  Prepare QM runtime init values for the PF phase
@@ -76,7 +95,6 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
  *		   be 0. otherwise, the weight must be non-zero.
  * @param pf_rl - rate limit in Mb/sec units. a value of 0 means don't
  *                configure. ignored if PF RL is globally disabled.
- * @param link_speed -		  link speed in Mbps.
  * @param pq_params - array of size (num_pf_pqs+num_vf_pqs) with parameters for
  *                    each Tx PQ associated with the specified PF.
  * @param vport_params - array of size num_vports with parameters for each
@@ -95,11 +113,10 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
 			u16 start_pq,
 			u16 num_pf_pqs,
 			u16 num_vf_pqs,
-			u8 start_vport,
-			u8 num_vports,
+			u16 start_vport,
+			u16 num_vports,
 			u16 pf_wfq,
 			u32 pf_rl,
-			u32 link_speed,
 			struct init_qm_pq_params *pq_params,
 			struct init_qm_vport_params *vport_params);
 
@@ -141,14 +158,30 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
  * @param first_tx_pq_id- An array containing the first Tx PQ ID associated
  *                        with the VPORT for each TC. This array is filled by
  *                        ecore_qm_pf_rt_init
- * @param vport_wfq		- WFQ weight. Must be non-zero.
+ * @param wfq -		   WFQ weight. Must be non-zero.
  *
  * @return 0 on success, -1 on error.
  */
 int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 u16 first_tx_pq_id[NUM_OF_TCS],
-						 u16 vport_wfq);
+			 u16 wfq);
+
+/**
+ * @brief ecore_init_global_rl - Initializes the rate limit of the specified
+ * rate limiter.
+ *
+ * @param p_hwfn -		HW device data
+ * @param p_ptt -		ptt window used for writing the registers
+ * @param rl_id -	RL ID
+ * @param rate_limit -	rate limit in Mb/sec units
+ *
+ * @return 0 on success, -1 on error.
+ */
+int ecore_init_global_rl(struct ecore_hwfn *p_hwfn,
+			 struct ecore_ptt *p_ptt,
+			 u16 rl_id,
+			 u32 rate_limit);
 
 /**
  * @brief ecore_init_vport_rl - Initializes the rate limit of the specified
@@ -283,8 +316,9 @@ void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
 
 /**
  * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
- *                                    port
+ * port.
  *
+ * @param p_hwfn -       HW device data
  * @param p_ptt     - ptt window used for writing the registers.
  * @param dest_port - vxlan destination udp port.
  */
@@ -295,6 +329,7 @@ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
 /**
  * @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
  *
+ * @param p_hwfn -      HW device data
  * @param p_ptt		- ptt window used for writing the registers.
  * @param vxlan_enable	- vxlan enable flag.
  */
@@ -305,6 +340,7 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
 /**
  * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
  *
+ * @param p_hwfn -        HW device data
  * @param p_ptt          - ptt window used for writing the registers.
  * @param eth_gre_enable - eth GRE enable flag.
  * @param ip_gre_enable  - IP GRE enable flag.
@@ -318,6 +354,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
  * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination
  *                                     udp port
  *
+ * @param p_hwfn -       HW device data
  * @param p_ptt     - ptt window used for writing the registers.
  * @param dest_port - geneve destination udp port.
  */
@@ -326,8 +363,9 @@ void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
 				u16 dest_port);
 
 /**
- * @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
+ * @brief ecore_set_geneve_enable - enable or disable GENEVE tunnel in HW
  *
+ * @param p_hwfn -         HW device data
  * @param p_ptt             - ptt window used for writing the registers.
  * @param eth_geneve_enable - eth GENEVE enable flag.
  * @param ip_geneve_enable  - IP GENEVE enable flag.
@@ -347,7 +385,7 @@ void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
 				   struct ecore_ptt *p_ptt);
 
 /**
- * @brief ecore_gft_disable - Disable and GFT
+ * @brief ecore_gft_disable - Disable GFT
  *
  * @param p_hwfn -   HW device data
  * @param p_ptt -   ptt window used for writing the registers.
@@ -360,6 +398,7 @@ void ecore_gft_disable(struct ecore_hwfn *p_hwfn,
 /**
  * @brief ecore_gft_config - Enable and configure HW for GFT
 *
+ * @param p_hwfn -   HW device data
 * @param p_ptt	- ptt window used for writing the registers.
  * @param pf_id - pf on which to enable GFT.
 * @param tcp	- set profile tcp packets.
@@ -382,12 +421,13 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
 * @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
 *                                         used before first ETH queue started.
 *
-*
+ * @param p_hwfn -      HW device data
 * @param p_ptt        -  ptt window used for writing the registers. Don't care
-*                        if runtime_init used
+ *           if runtime_init used.
 * @param mode         -  VF zone size mode. Use enum vf_zone_size_mode.
-* @param runtime_init -  Set 1 to init runtime registers in engine phase. Set 0
-*                        if VF zone size mode configured after engine phase.
+ * @param runtime_init - Set 1 to init runtime registers in engine phase.
+ *           Set 0 if VF zone size mode configured after engine
+ *           phase.
 */
 void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
 				    *p_ptt, u16 mode, bool runtime_init);
@@ -396,6 +436,7 @@ void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn, struct ecore_ptt
  * @brief ecore_get_mstorm_queue_stat_offset - Get mstorm statistics offset by
  * VF zone size mode.
 *
+ * @param p_hwfn -         HW device data
 * @param stat_cnt_id         -  statistic counter id
 * @param vf_zone_size_mode   -  VF zone size mode. Use enum vf_zone_size_mode.
 */
@@ -406,6 +447,7 @@ u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
  * @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
  * size mode.
 *
+ * @param p_hwfn -           HW device data
 * @param vf_id               -  vf id.
 * @param vf_queue_id         -  per VF rx queue id.
 * @param vf_zone_size_mode   -  vf zone size mode. Use enum vf_zone_size_mode.
@@ -416,6 +458,7 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
  * @brief ecore_enable_context_validation - Enable and configure context
  *                                          validation.
  *
+ * @param p_hwfn -   HW device data
  * @param p_ptt - ptt window used for writing the registers.
  */
 void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
@@ -424,12 +467,14 @@ void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
  * @brief ecore_calc_session_ctx_validation - Calculate validation byte for
  * session context.
  *
+ * @param p_hwfn -		HW device data
  * @param p_ctx_mem -	pointer to context memory.
  * @param ctx_size -	context size.
  * @param ctx_type -	context type.
  * @param cid -		context cid.
  */
-void ecore_calc_session_ctx_validation(void *p_ctx_mem,
+void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
+				       void *p_ctx_mem,
 				       u16 ctx_size,
 				       u8 ctx_type,
 				       u32 cid);
@@ -438,12 +483,14 @@ void ecore_calc_session_ctx_validation(void *p_ctx_mem,
  * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
  * context.
  *
+ * @param p_hwfn -		HW device data
  * @param p_ctx_mem -	pointer to context memory.
  * @param ctx_size -	context size.
  * @param ctx_type -	context type.
  * @param tid -		    context tid.
  */
-void ecore_calc_task_ctx_validation(void *p_ctx_mem,
+void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn,
+				    void *p_ctx_mem,
 				    u16 ctx_size,
 				    u8 ctx_type,
 				    u32 tid);
@@ -457,18 +504,22 @@ void ecore_calc_task_ctx_validation(void *p_ctx_mem,
  * @param ctx_size -  size to initialize.
  * @param ctx_type -  context type.
  */
-void ecore_memset_session_ctx(void *p_ctx_mem,
+void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn,
+			      void *p_ctx_mem,
 			      u32 ctx_size,
 			      u8 ctx_type);
+
 /**
  * @brief ecore_memset_task_ctx - Memset task context to 0 while preserving
  * validation bytes.
  *
+ * @param p_hwfn -		HW device data
  * @param p_ctx_mem - pointer to context memory.
  * @param ctx_size -  size to initialize.
  * @param ctx_type -  context type.
  */
-void ecore_memset_task_ctx(void *p_ctx_mem,
+void ecore_memset_task_ctx(struct ecore_hwfn *p_hwfn,
+			   void *p_ctx_mem,
 			   u32 ctx_size,
 			   u8 ctx_type);
 
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index ad8570a08..ea964ea2f 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -15,7 +15,6 @@
 
 #include "ecore_iro_values.h"
 #include "ecore_sriov.h"
-#include "ecore_gtt_values.h"
 #include "reg_addr.h"
 #include "ecore_init_ops.h"
 
@@ -24,7 +23,7 @@
 
 void ecore_init_iro_array(struct ecore_dev *p_dev)
 {
-	p_dev->iro_arr = iro_arr;
+	p_dev->iro_arr = iro_arr + E4_IRO_ARR_OFFSET;
 }
 
 /* Runtime configuration helpers */
@@ -473,9 +472,9 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 				    int phase, int phase_id, int modes)
 {
 	struct ecore_dev *p_dev = p_hwfn->p_dev;
+	bool b_dmae = (phase != PHASE_ENGINE);
 	u32 cmd_num, num_init_ops;
 	union init_op *init;
-	bool b_dmae = false;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 	num_init_ops = p_dev->fw_data->init_ops_size;
@@ -511,7 +510,6 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 		case INIT_OP_IF_PHASE:
 			cmd_num += ecore_init_cmd_phase(&cmd->if_phase, phase,
 							phase_id);
-			b_dmae = GET_FIELD(data, INIT_IF_PHASE_OP_DMAE_ENABLE);
 			break;
 		case INIT_OP_DELAY:
 			/* ecore_init_run is always invoked from
@@ -522,6 +520,9 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
 
 		case INIT_OP_CALLBACK:
 			rc = ecore_init_cmd_cb(p_hwfn, p_ptt, &cmd->callback);
+			if (phase == PHASE_ENGINE &&
+			    cmd->callback.callback_id == DMAE_READY_CB)
+				b_dmae = true;
 			break;
 		}
 
@@ -567,11 +568,17 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
 	fw->modes_tree_buf = (u8 *)((uintptr_t)(fw_data + offset));
 	len = buf_hdr[BIN_BUF_INIT_CMD].length;
 	fw->init_ops_size = len / sizeof(struct init_raw_op);
+	offset = buf_hdr[BIN_BUF_INIT_OVERLAYS].offset;
+	fw->fw_overlays = (u32 *)(fw_data + offset);
+	len = buf_hdr[BIN_BUF_INIT_OVERLAYS].length;
+	fw->fw_overlays_len = len;
 #else
 	fw->init_ops = (union init_op *)init_ops;
 	fw->arr_data = (u32 *)init_val;
 	fw->modes_tree_buf = (u8 *)modes_tree_buf;
 	fw->init_ops_size = init_ops_size;
+	fw->fw_overlays = fw_overlays;
+	fw->fw_overlays_len = sizeof(fw_overlays);
 #endif
 
 	return ECORE_SUCCESS;
diff --git a/drivers/net/qede/base/ecore_init_ops.h b/drivers/net/qede/base/ecore_init_ops.h
index 21e433309..0cbf293b3 100644
--- a/drivers/net/qede/base/ecore_init_ops.h
+++ b/drivers/net/qede/base/ecore_init_ops.h
@@ -95,6 +95,6 @@ void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
 			     osal_size_t       size);
 
 #define STORE_RT_REG_AGG(hwfn, offset, val)			\
-	ecore_init_store_rt_agg(hwfn, offset, (u32 *)&val, sizeof(val))
+	ecore_init_store_rt_agg(hwfn, offset, (u32 *)&(val), sizeof(val))
 
 #endif /* __ECORE_INIT_OPS__ */
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index c8536380c..b1e127849 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -28,8 +28,10 @@ struct ecore_pi_info {
 
 struct ecore_sb_sp_info {
 	struct ecore_sb_info sb_info;
-	/* per protocol index data */
+
+	/* Per protocol index data */
 	struct ecore_pi_info pi_info_arr[MAX_PIS_PER_SB];
+	osal_size_t pi_info_arr_size;
 };
 
 enum ecore_attention_type {
@@ -58,10 +60,10 @@ struct aeu_invert_reg_bit {
 #define ATTENTION_OFFSET_MASK		(0x000ff000)
 #define ATTENTION_OFFSET_SHIFT		(12)
 
-#define ATTENTION_BB_MASK		(0x00700000)
+#define ATTENTION_BB_MASK		(0xf)
 #define ATTENTION_BB_SHIFT		(20)
 #define ATTENTION_BB(value)		((value) << ATTENTION_BB_SHIFT)
-#define ATTENTION_BB_DIFFERENT		(1 << 23)
+#define ATTENTION_BB_DIFFERENT		(1 << 24)
 
 #define	ATTENTION_CLEAR_ENABLE		(1 << 28)
 	unsigned int flags;
@@ -606,6 +608,8 @@ enum aeu_invert_reg_special_type {
 	AEU_INVERT_REG_SPECIAL_CNIG_1,
 	AEU_INVERT_REG_SPECIAL_CNIG_2,
 	AEU_INVERT_REG_SPECIAL_CNIG_3,
+	AEU_INVERT_REG_SPECIAL_MCP_UMP_TX,
+	AEU_INVERT_REG_SPECIAL_MCP_SCPAD,
 	AEU_INVERT_REG_SPECIAL_MAX,
 };
 
@@ -615,6 +619,8 @@ aeu_descs_special[AEU_INVERT_REG_SPECIAL_MAX] = {
 	{"CNIG port 1", ATTENTION_SINGLE, OSAL_NULL, BLOCK_CNIG},
 	{"CNIG port 2", ATTENTION_SINGLE, OSAL_NULL, BLOCK_CNIG},
 	{"CNIG port 3", ATTENTION_SINGLE, OSAL_NULL, BLOCK_CNIG},
+	{"MCP Latched ump_tx", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
+	{"MCP Latched scratchpad", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
 };
 
 /* Notice aeu_invert_reg must be defined in the same order of bits as HW; */
@@ -678,10 +684,15 @@ static struct aeu_invert_reg aeu_descs[NUM_ATTN_REGS] = {
 	  {"AVS stop status ready", ATTENTION_SINGLE, OSAL_NULL, MAX_BLOCK_ID},
 	  {"MSTAT", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
 	  {"MSTAT per-path", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
-	  {"Reserved %d", (6 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
-	   MAX_BLOCK_ID},
+			{"OPTE", ATTENTION_PAR, OSAL_NULL, BLOCK_OPTE},
+			{"MCP", ATTENTION_PAR, OSAL_NULL, BLOCK_MCP},
+			{"MS", ATTENTION_SINGLE, OSAL_NULL, BLOCK_MS},
+			{"UMAC", ATTENTION_SINGLE, OSAL_NULL, BLOCK_UMAC},
+			{"LED", ATTENTION_SINGLE, OSAL_NULL, BLOCK_LED},
+			{"BMBN", ATTENTION_SINGLE, OSAL_NULL, BLOCK_BMBN},
 	  {"NIG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NIG},
 	  {"BMB/OPTE/MCP", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB},
+			{"BMB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB},
 	  {"BTB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BTB},
 	  {"BRB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BRB},
 	  {"PRS", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_PRS},
@@ -784,10 +795,17 @@ static struct aeu_invert_reg aeu_descs[NUM_ATTN_REGS] = {
 	  {"MCP Latched memory", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
 	  {"MCP Latched scratchpad cache", ATTENTION_SINGLE, OSAL_NULL,
 	   MAX_BLOCK_ID},
-	  {"MCP Latched ump_tx", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"MCP Latched scratchpad", ATTENTION_PAR, OSAL_NULL, MAX_BLOCK_ID},
-	  {"Reserved %d", (28 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
-	   MAX_BLOCK_ID},
+	  {"AVS", ATTENTION_PAR | ATTENTION_BB_DIFFERENT |
+	   ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_UMP_TX), OSAL_NULL,
+	   BLOCK_AVS_WRAP},
+	  {"AVS", ATTENTION_SINGLE | ATTENTION_BB_DIFFERENT |
+	   ATTENTION_BB(AEU_INVERT_REG_SPECIAL_MCP_SCPAD), OSAL_NULL,
+	   BLOCK_AVS_WRAP},
+	  {"PCIe core", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+	  {"PCIe link up", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+	  {"PCIe hot reset", ATTENTION_SINGLE, OSAL_NULL, BLOCK_PGLCS},
+	  {"Reserved %d", (9 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
+	    MAX_BLOCK_ID},
 	  }
 	 },
 
@@ -955,14 +973,22 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
 	/* @DPDK */
 	/* Reach assertion if attention is fatal */
 	if (b_fatal || (strcmp(p_bit_name, "PGLUE B RBC") == 0)) {
+#ifndef ASIC_ONLY
+		DP_NOTICE(p_hwfn, !CHIP_REV_IS_EMUL(p_hwfn->p_dev),
+			  "`%s': Fatal attention\n", p_bit_name);
+#else
 		DP_NOTICE(p_hwfn, true, "`%s': Fatal attention\n",
 			  p_bit_name);
+#endif
 
 		ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN);
 	}
 
 	/* Prevent this Attention from being asserted in the future */
 	if (p_aeu->flags & ATTENTION_CLEAR_ENABLE ||
+#ifndef ASIC_ONLY
+	    CHIP_REV_IS_EMUL(p_hwfn->p_dev) ||
+#endif
 	    p_hwfn->p_dev->attn_clr_en) {
 		u32 val;
 		u32 mask = ~bitmask;
@@ -1013,6 +1039,13 @@ static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn,
 		p_aeu->bit_name);
 }
 
+#define MISC_REG_AEU_AFTER_INVERT_IGU(n) \
+	(MISC_REG_AEU_AFTER_INVERT_1_IGU + (n) * 0x4)
+
+#define MISC_REG_AEU_ENABLE_IGU_OUT(n, group) \
+	(MISC_REG_AEU_ENABLE1_IGU_OUT_0 + (n) * 0x4 + \
+	 (group) * 0x4 * NUM_ATTN_REGS)
+
 /**
  * @brief - handles deassertion of previously asserted attentions.
  *
@@ -1032,8 +1065,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 	/* Read the attention registers in the AEU */
 	for (i = 0; i < NUM_ATTN_REGS; i++) {
 		aeu_inv_arr[i] = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
-					  MISC_REG_AEU_AFTER_INVERT_1_IGU +
-					  i * 0x4);
+					  MISC_REG_AEU_AFTER_INVERT_IGU(i));
 		DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
 			   "Deasserted bits [%d]: %08x\n", i, aeu_inv_arr[i]);
 	}
@@ -1043,7 +1075,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 		struct aeu_invert_reg *p_aeu = &sb_attn_sw->p_aeu_desc[i];
 		u32 parities;
 
-		aeu_en = MISC_REG_AEU_ENABLE1_IGU_OUT_0 + i * sizeof(u32);
+		aeu_en = MISC_REG_AEU_ENABLE_IGU_OUT(i, 0);
 		en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en);
 		parities = sb_attn_sw->parity_mask[i] & aeu_inv_arr[i] & en;
 
@@ -1074,9 +1106,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
 		for (i = 0; i < NUM_ATTN_REGS; i++) {
 			u32 bits;
 
-			aeu_en = MISC_REG_AEU_ENABLE1_IGU_OUT_0 +
-				 i * sizeof(u32) +
-				 k * sizeof(u32) * NUM_ATTN_REGS;
+			aeu_en = MISC_REG_AEU_ENABLE_IGU_OUT(i, k);
 			en = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en);
 			bits = aeu_inv_arr[i] & en;
 
@@ -1249,7 +1279,6 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 	struct ecore_pi_info *pi_info = OSAL_NULL;
 	struct ecore_sb_attn_info *sb_attn;
 	struct ecore_sb_info *sb_info;
-	int arr_size;
 	u16 rc = 0;
 
 	if (!p_hwfn)
@@ -1261,7 +1290,6 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 	}
 
 	sb_info = &p_hwfn->p_sp_sb->sb_info;
-	arr_size = OSAL_ARRAY_SIZE(p_hwfn->p_sp_sb->pi_info_arr);
 	if (!sb_info) {
 		DP_ERR(p_hwfn->p_dev,
 		       "Status block is NULL - cannot ack interrupts\n");
@@ -1326,14 +1354,14 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
 		ecore_int_attentions(p_hwfn);
 
 	if (rc & ECORE_SB_IDX) {
-		int pi;
+		osal_size_t pi;
 
 		/* Since we only looked at the SB index, it's possible more
 		 * than a single protocol-index on the SB incremented.
 		 * Iterate over all configured protocol indices and check
 		 * whether something happened for each.
 		 */
-		for (pi = 0; pi < arr_size; pi++) {
+		for (pi = 0; pi < p_hwfn->p_sp_sb->pi_info_arr_size; pi++) {
 			pi_info = &p_hwfn->p_sp_sb->pi_info_arr[pi];
 			if (pi_info->comp_cb != OSAL_NULL)
 				pi_info->comp_cb(p_hwfn, pi_info->cookie);
@@ -1514,7 +1542,7 @@ static void _ecore_int_cau_conf_pi(struct ecore_hwfn *p_hwfn,
 	if (IS_VF(p_hwfn->p_dev))
 		return;/* @@@TBD MichalK- VF CAU... */
 
-	sb_offset = igu_sb_id * MAX_PIS_PER_SB;
+	sb_offset = igu_sb_id * PIS_PER_SB;
 	OSAL_MEMSET(&pi_entry, 0, sizeof(struct cau_pi_entry));
 
 	SET_FIELD(pi_entry.prod, CAU_PI_ENTRY_PI_TIMESET, timeset);
@@ -1623,7 +1651,7 @@ void ecore_int_sb_setup(struct ecore_hwfn *p_hwfn,
 {
 	/* zero status block and ack counter */
 	sb_info->sb_ack = 0;
-	OSAL_MEMSET(sb_info->sb_virt, 0, sizeof(*sb_info->sb_virt));
+	OSAL_MEMSET(sb_info->sb_virt, 0, sb_info->sb_size);
 
 	if (IS_PF(p_hwfn->p_dev))
 		ecore_int_cau_conf_sb(p_hwfn, p_ptt, sb_info->sb_phys,
@@ -1706,6 +1734,14 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 				       dma_addr_t sb_phy_addr, u16 sb_id)
 {
 	sb_info->sb_virt = sb_virt_addr;
+	struct status_block *sb_virt;
+
+	sb_virt = (struct status_block *)sb_info->sb_virt;
+
+	sb_info->sb_size = sizeof(*sb_virt);
+	sb_info->sb_pi_array = sb_virt->pi_array;
+	sb_info->sb_prod_index = &sb_virt->prod_index;
+
 	sb_info->sb_phys = sb_phy_addr;
 
 	sb_info->igu_sb_id = ecore_get_igu_sb_id(p_hwfn, sb_id);
@@ -1737,16 +1773,16 @@ enum _ecore_status_t ecore_int_sb_init(struct ecore_hwfn *p_hwfn,
 	/* The igu address will hold the absolute address that needs to be
 	 * written to for a specific status block
 	 */
-	if (IS_PF(p_hwfn->p_dev)) {
+	if (IS_PF(p_hwfn->p_dev))
 		sb_info->igu_addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
-		    GTT_BAR0_MAP_REG_IGU_CMD + (sb_info->igu_sb_id << 3);
+				     GTT_BAR0_MAP_REG_IGU_CMD +
+				     (sb_info->igu_sb_id << 3);
 
-	} else {
-		sb_info->igu_addr =
-		    (u8 OSAL_IOMEM *)p_hwfn->regview +
+	else
+		sb_info->igu_addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
 		    PXP_VF_BAR0_START_IGU +
-		    ((IGU_CMD_INT_ACK_BASE + sb_info->igu_sb_id) << 3);
-	}
+				     ((IGU_CMD_INT_ACK_BASE +
+				       sb_info->igu_sb_id) << 3);
 
 	sb_info->flags |= ECORE_SB_INFO_INIT;
 
@@ -1767,7 +1803,7 @@ enum _ecore_status_t ecore_int_sb_release(struct ecore_hwfn *p_hwfn,
 
 	/* zero status block and ack counter */
 	sb_info->sb_ack = 0;
-	OSAL_MEMSET(sb_info->sb_virt, 0, sizeof(*sb_info->sb_virt));
+	OSAL_MEMSET(sb_info->sb_virt, 0, sb_info->sb_size);
 
 	if (IS_VF(p_hwfn->p_dev)) {
 		ecore_vf_set_sb_info(p_hwfn, sb_id, OSAL_NULL);
@@ -1816,11 +1852,10 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
 	void *p_virt;
 
 	/* SB struct */
-	p_sb =
-	    OSAL_ALLOC(p_hwfn->p_dev, GFP_KERNEL,
-		       sizeof(*p_sb));
+	p_sb = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_sb));
 	if (!p_sb) {
-		DP_NOTICE(p_hwfn, false, "Failed to allocate `struct ecore_sb_info'\n");
+		DP_NOTICE(p_hwfn, false,
+			  "Failed to allocate `struct ecore_sb_info'\n");
 		return ECORE_NOMEM;
 	}
 
@@ -1838,7 +1873,7 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
 	ecore_int_sb_init(p_hwfn, p_ptt, &p_sb->sb_info,
 			  p_virt, p_phys, ECORE_SP_SB_ID);
 
-	OSAL_MEMSET(p_sb->pi_info_arr, 0, sizeof(p_sb->pi_info_arr));
+	p_sb->pi_info_arr_size = PIS_PER_SB;
 
 	return ECORE_SUCCESS;
 }
@@ -1853,14 +1888,14 @@ enum _ecore_status_t ecore_int_register_cb(struct ecore_hwfn *p_hwfn,
 	u8 pi;
 
 	/* Look for a free index */
-	for (pi = 0; pi < OSAL_ARRAY_SIZE(p_sp_sb->pi_info_arr); pi++) {
+	for (pi = 0; pi < p_sp_sb->pi_info_arr_size; pi++) {
 		if (p_sp_sb->pi_info_arr[pi].comp_cb != OSAL_NULL)
 			continue;
 
 		p_sp_sb->pi_info_arr[pi].comp_cb = comp_cb;
 		p_sp_sb->pi_info_arr[pi].cookie = cookie;
 		*sb_idx = pi;
-		*p_fw_cons = &p_sp_sb->sb_info.sb_virt->pi_array[pi];
+		*p_fw_cons = &p_sp_sb->sb_info.sb_pi_array[pi];
 		rc = ECORE_SUCCESS;
 		break;
 	}
@@ -1988,10 +2023,9 @@ static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
 				     bool cleanup_set,
 				     u16 opaque_fid)
 {
-	u32 cmd_ctrl = 0, val = 0, sb_bit = 0, sb_bit_addr = 0, data = 0;
-	u32 pxp_addr = IGU_CMD_INT_ACK_BASE + igu_sb_id;
-	u32 sleep_cnt = IGU_CLEANUP_SLEEP_LENGTH;
-	u8 type = 0;		/* FIXME MichalS type??? */
+	u32 data = 0, cmd_ctrl = 0, sb_bit, sb_bit_addr, pxp_addr;
+	u32 sleep_cnt = IGU_CLEANUP_SLEEP_LENGTH, val;
+	u8 type = 0;
 
 	OSAL_BUILD_BUG_ON((IGU_REG_CLEANUP_STATUS_4 -
 			   IGU_REG_CLEANUP_STATUS_0) != 0x200);
@@ -2006,6 +2040,7 @@ static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
 	SET_FIELD(data, IGU_CLEANUP_COMMAND_TYPE, IGU_COMMAND_TYPE_SET);
 
 	/* Set the control register */
+	pxp_addr = IGU_CMD_INT_ACK_BASE + igu_sb_id;
 	SET_FIELD(cmd_ctrl, IGU_CTRL_REG_PXP_ADDR, pxp_addr);
 	SET_FIELD(cmd_ctrl, IGU_CTRL_REG_FID, opaque_fid);
 	SET_FIELD(cmd_ctrl, IGU_CTRL_REG_TYPE, IGU_CTRL_CMD_TYPE_WR);
@@ -2077,9 +2112,11 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
 			  igu_sb_id);
 
 	/* Clear the CAU for the SB */
-	for (pi = 0; pi < 12; pi++)
+	for (pi = 0; pi < PIS_PER_SB; pi++)
 		ecore_wr(p_hwfn, p_ptt,
-			 CAU_REG_PI_MEMORY + (igu_sb_id * 12 + pi) * 4, 0);
+			 CAU_REG_PI_MEMORY +
+			 (igu_sb_id * PIS_PER_SB + pi) * 4,
+			 0);
 }
 
 void ecore_int_igu_init_pure_rt(struct ecore_hwfn *p_hwfn,
@@ -2679,12 +2716,12 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 					  struct ecore_sb_info_dbg *p_info)
 {
 	u16 sbid = p_sb->igu_sb_id;
-	int i;
+	u32 i;
 
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
-	if (sbid > NUM_OF_SBS(p_hwfn->p_dev))
+	if (sbid >= NUM_OF_SBS(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
 	p_info->igu_prod = ecore_rd(p_hwfn, p_ptt,
@@ -2692,10 +2729,10 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
 	p_info->igu_cons = ecore_rd(p_hwfn, p_ptt,
 				    IGU_REG_CONSUMER_MEM + sbid * 4);
 
-	for (i = 0; i < MAX_PIS_PER_SB; i++)
+	for (i = 0; i < PIS_PER_SB; i++)
 		p_info->pi[i] = (u16)ecore_rd(p_hwfn, p_ptt,
 					      CAU_REG_PI_MEMORY +
-					      sbid * 4 * MAX_PIS_PER_SB +
+					      sbid * 4 * PIS_PER_SB +
 					      i * 4);
 
 	return ECORE_SUCCESS;
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index abea2a716..d7b6b86cc 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -24,7 +24,12 @@ enum ecore_int_mode {
 #endif
 
 struct ecore_sb_info {
-	struct status_block *sb_virt;
+	void *sb_virt; /* ptr to "struct status_block_e{4,5}" */
+	u32 sb_size; /* size of "struct status_block_e{4,5}" */
+	__le16 *sb_pi_array; /* ptr to "sb_virt->pi_array" */
+	__le32 *sb_prod_index; /* ptr to "sb_virt->prod_index" */
+#define STATUS_BLOCK_PROD_INDEX_MASK	0xFFFFFF
+
 	dma_addr_t sb_phys;
 	u32 sb_ack;		/* Last given ack */
 	u16 igu_sb_id;
@@ -42,7 +47,7 @@ struct ecore_sb_info {
 struct ecore_sb_info_dbg {
 	u32 igu_prod;
 	u32 igu_cons;
-	u16 pi[MAX_PIS_PER_SB];
+	u16 pi[PIS_PER_SB];
 };
 
 struct ecore_sb_cnt_info {
@@ -64,7 +69,7 @@ static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info)
 
 	/* barrier(); status block is written to by the chip */
 	/* FIXME: need some sort of barrier. */
-	prod = OSAL_LE32_TO_CPU(sb_info->sb_virt->prod_index) &
+	prod = OSAL_LE32_TO_CPU(*sb_info->sb_prod_index) &
 	       STATUS_BLOCK_PROD_INDEX_MASK;
 	if (sb_info->sb_ack != prod) {
 		sb_info->sb_ack = prod;
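
The ack path now goes through the cached sb_prod_index pointer, but the 24-bit
producer mask logic is unchanged. A small standalone sketch of that check; the
le32-to-cpu conversion is omitted and the dword is passed in already converted.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define STATUS_BLOCK_PROD_INDEX_MASK	0xFFFFFF

/* Returns true when the status block advanced, updating the ack copy.
 * prod_index stands for *sb_info->sb_prod_index after le32-to-cpu conversion.
 */
static bool sb_update_sb_idx(uint32_t *sb_ack, uint32_t prod_index)
{
	uint32_t prod = prod_index & STATUS_BLOCK_PROD_INDEX_MASK;

	if (*sb_ack != prod) {
		*sb_ack = prod;
		return true;	/* new events to handle */
	}

	return false;
}

int main(void)
{
	uint32_t ack = 0;
	bool changed;

	/* the top byte of the dword is not part of the producer index */
	changed = sb_update_sb_idx(&ack, 0xAB000005u);
	printf("changed=%d ack=0x%06x\n", changed, (unsigned int)ack);

	/* same producer value again -> nothing new to handle */
	changed = sb_update_sb_idx(&ack, 0xCD000005u);
	printf("changed=%d ack=0x%06x\n", changed, (unsigned int)ack);
	return 0;
}
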
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 5dcdc84fc..b20d83762 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2323,18 +2323,17 @@ ecore_eth_tx_queue_maxrate(struct ecore_hwfn *p_hwfn,
 			   struct ecore_ptt *p_ptt,
 			   struct ecore_queue_cid *p_cid, u32 rate)
 {
-	struct ecore_mcp_link_state *p_link;
+	u16 rl_id;
 	u8 vport;
 
 	vport = (u8)ecore_get_qm_vport_idx_rl(p_hwfn, p_cid->rel.queue_id);
-	p_link = &ECORE_LEADING_HWFN(p_hwfn->p_dev)->mcp_info->link_output;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
 		   "About to rate limit qm vport %d for queue %d with rate %d\n",
 		   vport, p_cid->rel.queue_id, rate);
 
-	return ecore_init_vport_rl(p_hwfn, p_ptt, vport, rate,
-				   p_link->speed);
+	rl_id = vport; /* The "rl_id" is set as the "vport_id" */
+	return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, rate);
 }
 
 #define RSS_TSTORM_UPDATE_STATUS_MAX_POLL_COUNT    100
@@ -2358,8 +2357,7 @@ ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
-	       GTT_BAR0_MAP_REG_TSDM_RAM +
+	addr = (u8 *)p_hwfn->regview + GTT_BAR0_MAP_REG_TSDM_RAM +
 	       TSTORM_ETH_RSS_UPDATE_OFFSET(p_hwfn->rel_pf_id);
 
 	*(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr);
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index acde81fad..bebf412ed 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -302,6 +302,8 @@ struct ecore_sp_vport_start_params {
 	bool b_err_big_pkt;
 	bool b_err_anti_spoof;
 	bool b_err_ctrl_frame;
+	bool b_en_rgfs;
+	bool b_en_tgfs;
 };
 
 /**
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 6559d8040..a5aa07438 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -22,13 +22,23 @@
 #include "ecore_sp_commands.h"
 #include "ecore_cxt.h"
 
-#define CHIP_MCP_RESP_ITER_US 10
-#define EMUL_MCP_RESP_ITER_US (1000 * 1000)
 #define GRCBASE_MCP	0xe00000
 
+#define ECORE_MCP_RESP_ITER_US		10
 #define ECORE_DRV_MB_MAX_RETRIES (500 * 1000)	/* Account for 5 sec */
 #define ECORE_MCP_RESET_RETRIES (50 * 1000)	/* Account for 500 msec */
 
+#ifndef ASIC_ONLY
+/* Non-ASIC:
+ * The waiting interval is multiplied by 100 to reduce the impact of the
+ * built-in delay of 100usec in each ecore_rd().
+ * In addition, a factor of 4 compared to ASIC is applied.
+ */
+#define ECORE_EMUL_MCP_RESP_ITER_US	(ECORE_MCP_RESP_ITER_US * 100)
+#define ECORE_EMUL_DRV_MB_MAX_RETRIES	((ECORE_DRV_MB_MAX_RETRIES / 100) * 4)
+#define ECORE_EMUL_MCP_RESET_RETRIES	((ECORE_MCP_RESET_RETRIES / 100) * 4)
+#endif
+
 #define DRV_INNER_WR(_p_hwfn, _p_ptt, _ptr, _offset, _val) \
 	ecore_wr(_p_hwfn, _p_ptt, (_p_hwfn->mcp_info->_ptr + _offset), \
 		 _val)
@@ -186,22 +196,23 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
 						   struct ecore_ptt *p_ptt)
 {
 	struct ecore_mcp_info *p_info = p_hwfn->mcp_info;
+	u32 drv_mb_offsize, mfw_mb_offsize, val;
 	u8 cnt = ECORE_MCP_SHMEM_RDY_MAX_RETRIES;
 	u8 msec = ECORE_MCP_SHMEM_RDY_ITER_MS;
-	u32 drv_mb_offsize, mfw_mb_offsize;
 	u32 mcp_pf_id = MCP_PF_ID(p_hwfn);
 
+	val = ecore_rd(p_hwfn, p_ptt, MCP_REG_CACHE_PAGING_ENABLE);
+	p_info->public_base = ecore_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR);
+	if (!p_info->public_base) {
+		DP_NOTICE(p_hwfn, false,
+			  "The address of the MCP scratch-pad is not configured\n");
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false, "Emulation - assume no MFW\n");
-		p_info->public_base = 0;
-		return ECORE_INVAL;
-	}
+		/* Zeroed "public_base" implies no MFW */
+		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+			DP_INFO(p_hwfn, "Emulation: Assume no MFW\n");
 #endif
-
-	p_info->public_base = ecore_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR);
-	if (!p_info->public_base)
 		return ECORE_INVAL;
+	}
 
 	p_info->public_base |= GRCBASE_MCP;
 
@@ -293,7 +304,7 @@ enum _ecore_status_t ecore_mcp_cmd_init(struct ecore_hwfn *p_hwfn,
 
 	if (ecore_load_mcp_offsets(p_hwfn, p_ptt) != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, false, "MCP is not initialized\n");
-		/* Do not free mcp_info here, since public_base indicate that
+		/* Do not free mcp_info here, since "public_base" indicates that
 		 * the MCP is not initialized
 		 */
 		return ECORE_SUCCESS;
@@ -334,14 +345,16 @@ static void ecore_mcp_reread_offsets(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 				     struct ecore_ptt *p_ptt)
 {
-	u32 org_mcp_reset_seq, seq, delay = CHIP_MCP_RESP_ITER_US, cnt = 0;
+	u32 prev_generic_por_0, seq, delay = ECORE_MCP_RESP_ITER_US, cnt = 0;
+	u32 retries = ECORE_MCP_RESET_RETRIES;
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
-		delay = EMUL_MCP_RESP_ITER_US;
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+		delay = ECORE_EMUL_MCP_RESP_ITER_US;
+		retries = ECORE_EMUL_MCP_RESET_RETRIES;
+	}
 #endif
-
 	if (p_hwfn->mcp_info->b_block_cmd) {
 		DP_NOTICE(p_hwfn, false,
 			  "The MFW is not responsive. Avoid sending MCP_RESET mailbox command.\n");
@@ -351,23 +364,24 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 	/* Ensure that only a single thread is accessing the mailbox */
 	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->cmd_lock);
 
-	org_mcp_reset_seq = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
+	prev_generic_por_0 = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
 
 	/* Set drv command along with the updated sequence */
 	ecore_mcp_reread_offsets(p_hwfn, p_ptt);
 	seq = ++p_hwfn->mcp_info->drv_mb_seq;
 	DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (DRV_MSG_CODE_MCP_RESET | seq));
 
+	/* Give the MFW up to 500 msec (50*1000*10usec) to resume */
 	do {
-		/* Wait for MFW response */
 		OSAL_UDELAY(delay);
-		/* Give the FW up to 500 second (50*1000*10usec) */
-	} while ((org_mcp_reset_seq == ecore_rd(p_hwfn, p_ptt,
-						MISCS_REG_GENERIC_POR_0)) &&
-		 (cnt++ < ECORE_MCP_RESET_RETRIES));
 
-	if (org_mcp_reset_seq !=
-	    ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0)) {
+		if (ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0) !=
+		    prev_generic_por_0)
+			break;
+	} while (cnt++ < retries);
+
+	if (ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0) !=
+	    prev_generic_por_0) {
 		DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 			   "MCP was reset after %d usec\n", cnt * delay);
 	} else {
@@ -380,6 +394,71 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
 	return rc;
 }
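
The reworked reset wait is a bounded poll: delay, re-read GENERIC_POR_0, stop
as soon as it differs from the value sampled before the command was issued. A
compact standalone model of that loop, with the register read faked by a
counter-driven stub.

#include <stdio.h>
#include <stdint.h>

/* Fake register: changes value after a few polls, like MISCS_REG_GENERIC_POR_0
 * changing once the MCP comes back from reset.
 */
static uint32_t fake_generic_por_0(void)
{
	static int polls;

	return (++polls >= 3) ? 0xdeadbeef : 0x12345678;
}

int main(void)
{
	const unsigned int retries = 10;
	uint32_t prev = 0x12345678;	/* value sampled before the reset */
	unsigned int cnt = 0;

	do {
		/* OSAL_UDELAY(delay) would go here */
		if (fake_generic_por_0() != prev)
			break;
	} while (cnt++ < retries);

	if (fake_generic_por_0() != prev)
		printf("MCP was reset after %u iterations\n", cnt);
	else
		printf("MCP reset timed out\n");

	return 0;
}
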
 
+#ifndef ASIC_ONLY
+static void ecore_emul_mcp_load_req(struct ecore_hwfn *p_hwfn,
+				    struct ecore_mcp_mb_params *p_mb_params)
+{
+	if (GET_MFW_FIELD(p_mb_params->param, DRV_ID_MCP_HSI_VER) !=
+	    1 /* ECORE_LOAD_REQ_HSI_VER_1 */) {
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_REFUSED_HSI_1;
+		return;
+	}
+
+	if (!loaded)
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_ENGINE;
+	else if (!loaded_port[p_hwfn->port_id])
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_PORT;
+	else
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_FUNCTION;
+
+	/* On CMT, always tell that it's engine */
+	if (ECORE_IS_CMT(p_hwfn->p_dev))
+		p_mb_params->mcp_resp = FW_MSG_CODE_DRV_LOAD_ENGINE;
+
+	loaded++;
+	loaded_port[p_hwfn->port_id]++;
+
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+		   "Load phase: 0x%08x load cnt: 0x%x port id=%d port_load=%d\n",
+		   p_mb_params->mcp_resp, loaded, p_hwfn->port_id,
+		   loaded_port[p_hwfn->port_id]);
+}
+
+static void ecore_emul_mcp_unload_req(struct ecore_hwfn *p_hwfn)
+{
+	loaded--;
+	loaded_port[p_hwfn->port_id]--;
+	DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Unload cnt: 0x%x\n", loaded);
+}
+
+static enum _ecore_status_t
+ecore_emul_mcp_cmd(struct ecore_hwfn *p_hwfn,
+		   struct ecore_mcp_mb_params *p_mb_params)
+{
+	if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		return ECORE_INVAL;
+
+	switch (p_mb_params->cmd) {
+	case DRV_MSG_CODE_LOAD_REQ:
+		ecore_emul_mcp_load_req(p_hwfn, p_mb_params);
+		break;
+	case DRV_MSG_CODE_UNLOAD_REQ:
+		ecore_emul_mcp_unload_req(p_hwfn);
+		break;
+	case DRV_MSG_CODE_GET_MFW_FEATURE_SUPPORT:
+	case DRV_MSG_CODE_RESOURCE_CMD:
+	case DRV_MSG_CODE_MDUMP_CMD:
+	case DRV_MSG_CODE_GET_ENGINE_CONFIG:
+	case DRV_MSG_CODE_GET_PPFID_BITMAP:
+		return ECORE_NOTIMPL;
+	default:
+		break;
+	}
+
+	return ECORE_SUCCESS;
+}
+#endif
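
The emulation-only load request keeps the old rule - the first function on the
engine gets ENGINE, the first on a port gets PORT, everything else FUNCTION -
just relocated into the mailbox emulation path. A standalone model of that
decision (the CMT special case is left out), with plain counters standing in
for the driver's static loaded/loaded_port state.

#include <stdio.h>

enum load_phase { LOAD_ENGINE, LOAD_PORT, LOAD_FUNCTION };

static int loaded;		/* functions loaded on the whole device */
static int loaded_port[2];	/* functions loaded per port */

static enum load_phase emul_load_req(int port_id)
{
	enum load_phase phase;

	if (!loaded)
		phase = LOAD_ENGINE;		/* first function on the engine */
	else if (!loaded_port[port_id])
		phase = LOAD_PORT;		/* first function on this port */
	else
		phase = LOAD_FUNCTION;

	loaded++;
	loaded_port[port_id]++;

	return phase;
}

int main(void)
{
	int a = emul_load_req(0);	/* -> LOAD_ENGINE   (0) */
	int b = emul_load_req(1);	/* -> LOAD_PORT     (1) */
	int c = emul_load_req(1);	/* -> LOAD_FUNCTION (2) */

	printf("%d %d %d\n", a, b, c);
	return 0;
}
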
+
 /* Must be called while cmd_lock is acquired */
 static bool ecore_mcp_has_pending_cmd(struct ecore_hwfn *p_hwfn)
 {
@@ -488,13 +567,18 @@ void ecore_mcp_print_cpu_info(struct ecore_hwfn *p_hwfn,
 			      struct ecore_ptt *p_ptt)
 {
 	u32 cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2;
+	u32 delay = ECORE_MCP_RESP_ITER_US;
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		delay = ECORE_EMUL_MCP_RESP_ITER_US;
+#endif
 	cpu_mode = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
 	cpu_state = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
 	cpu_pc_0 = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
-	OSAL_UDELAY(CHIP_MCP_RESP_ITER_US);
+	OSAL_UDELAY(delay);
 	cpu_pc_1 = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
-	OSAL_UDELAY(CHIP_MCP_RESP_ITER_US);
+	OSAL_UDELAY(delay);
 	cpu_pc_2 = ecore_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
 
 	DP_NOTICE(p_hwfn, false,
@@ -617,15 +701,21 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 {
 	osal_size_t union_data_size = sizeof(union drv_union_data);
 	u32 max_retries = ECORE_DRV_MB_MAX_RETRIES;
-	u32 delay = CHIP_MCP_RESP_ITER_US;
+	u32 usecs = ECORE_MCP_RESP_ITER_US;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
-		delay = EMUL_MCP_RESP_ITER_US;
-	/* There is a built-in delay of 100usec in each MFW response read */
-	if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
-		max_retries /= 10;
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn))
+		return ecore_emul_mcp_cmd(p_hwfn, p_mb_params);
+
+	if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+		max_retries = ECORE_EMUL_DRV_MB_MAX_RETRIES;
+		usecs = ECORE_EMUL_MCP_RESP_ITER_US;
+	}
 #endif
+	if (ECORE_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
+		max_retries = DIV_ROUND_UP(max_retries, 1000);
+		usecs *= 1000;
+	}
 
 	/* MCP not initialized */
 	if (!ecore_mcp_is_init(p_hwfn)) {
@@ -650,7 +740,7 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
 	}
 
 	return _ecore_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries,
-					delay);
+					usecs);
 }
 
 enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
@@ -660,18 +750,6 @@ enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
 
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		if (cmd == DRV_MSG_CODE_UNLOAD_REQ) {
-			loaded--;
-			loaded_port[p_hwfn->port_id]--;
-			DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Unload cnt: 0x%x\n",
-				   loaded);
-		}
-		return ECORE_SUCCESS;
-	}
-#endif
-
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = cmd;
 	mb_params.param = param;
@@ -745,34 +823,6 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
-#ifndef ASIC_ONLY
-static void ecore_mcp_mf_workaround(struct ecore_hwfn *p_hwfn,
-				    u32 *p_load_code)
-{
-	static int load_phase = FW_MSG_CODE_DRV_LOAD_ENGINE;
-
-	if (!loaded)
-		load_phase = FW_MSG_CODE_DRV_LOAD_ENGINE;
-	else if (!loaded_port[p_hwfn->port_id])
-		load_phase = FW_MSG_CODE_DRV_LOAD_PORT;
-	else
-		load_phase = FW_MSG_CODE_DRV_LOAD_FUNCTION;
-
-	/* On CMT, always tell that it's engine */
-	if (ECORE_IS_CMT(p_hwfn->p_dev))
-		load_phase = FW_MSG_CODE_DRV_LOAD_ENGINE;
-
-	*p_load_code = load_phase;
-	loaded++;
-	loaded_port[p_hwfn->port_id]++;
-
-	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
-		   "Load phase: %x load cnt: 0x%x port id=%d port_load=%d\n",
-		   *p_load_code, loaded, p_hwfn->port_id,
-		   loaded_port[p_hwfn->port_id]);
-}
-#endif
-
 static bool
 ecore_mcp_can_force_load(u8 drv_role, u8 exist_drv_role,
 			 enum ecore_override_force_load override_force_load)
@@ -1004,13 +1054,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
 	u8 mfw_drv_role = 0, mfw_force_cmd;
 	enum _ecore_status_t rc;
 
-#ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		ecore_mcp_mf_workaround(p_hwfn, &p_params->load_code);
-		return ECORE_SUCCESS;
-	}
-#endif
-
 	OSAL_MEM_ZERO(&in_params, sizeof(in_params));
 	in_params.hsi_ver = ECORE_LOAD_REQ_HSI_VER_DEFAULT;
 	in_params.drv_ver_0 = ECORE_VERSION;
@@ -1166,15 +1209,17 @@ static void ecore_mcp_handle_vf_flr(struct ecore_hwfn *p_hwfn,
 	u32 mfw_path_offsize = ecore_rd(p_hwfn, p_ptt, addr);
 	u32 path_addr = SECTION_ADDR(mfw_path_offsize,
 				     ECORE_PATH_ID(p_hwfn));
-	u32 disabled_vfs[VF_MAX_STATIC / 32];
+	u32 disabled_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
 	int i;
 
+	OSAL_MEM_ZERO(disabled_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES);
+
 	DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
 		   "Reading Disabled VF information from [offset %08x],"
 		   " path_addr %08x\n",
 		   mfw_path_offsize, path_addr);
 
-	for (i = 0; i < (VF_MAX_STATIC / 32); i++) {
+	for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++) {
 		disabled_vfs[i] = ecore_rd(p_hwfn, p_ptt,
 					   path_addr +
 					   OFFSETOF(struct public_path,
@@ -1193,16 +1238,11 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 					  struct ecore_ptt *p_ptt,
 					  u32 *vfs_to_ack)
 {
-	u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
-					PUBLIC_FUNC);
-	u32 mfw_func_offsize = ecore_rd(p_hwfn, p_ptt, addr);
-	u32 func_addr = SECTION_ADDR(mfw_func_offsize,
-				     MCP_PF_ID(p_hwfn));
 	struct ecore_mcp_mb_params mb_params;
 	enum _ecore_status_t rc;
-	int i;
+	u16 i;
 
-	for (i = 0; i < (VF_MAX_STATIC / 32); i++)
+	for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++)
 		DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IOV),
 			   "Acking VFs [%08x,...,%08x] - %08x\n",
 			   i * 32, (i + 1) * 32 - 1, vfs_to_ack[i]);
@@ -1210,7 +1250,7 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 	OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
 	mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
 	mb_params.p_data_src = vfs_to_ack;
-	mb_params.data_src_size = VF_MAX_STATIC / 8;
+	mb_params.data_src_size = (u8)VF_BITMAP_SIZE_IN_BYTES;
 	rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
 				     &mb_params);
 	if (rc != ECORE_SUCCESS) {
@@ -1219,13 +1259,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
 		return ECORE_TIMEOUT;
 	}
 
-	/* TMP - clear the ACK bits; should be done by MFW */
-	for (i = 0; i < (VF_MAX_STATIC / 32); i++)
-		ecore_wr(p_hwfn, p_ptt,
-			 func_addr +
-			 OFFSETOF(struct public_func, drv_ack_vf_disabled) +
-			 i * sizeof(u32), 0);
-
 	return rc;
 }
 
@@ -1471,8 +1504,11 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
 	u32 cmd;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+		if (b_up)
+			OSAL_LINK_UPDATE(p_hwfn);
 		return ECORE_SUCCESS;
+	}
 #endif
 
 	/* Set the shmem configuration according to params */
@@ -1853,6 +1889,13 @@ ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
 	struct mdump_config_stc mdump_config;
 	enum _ecore_status_t rc;
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
+		DP_INFO(p_hwfn, "Emulation: Can't get mdump info\n");
+		return ECORE_NOTIMPL;
+	}
+#endif
+
 	OSAL_MEMSET(p_mdump_info, 0, sizeof(*p_mdump_info));
 
 	addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
@@ -2042,6 +2085,9 @@ ecore_mcp_handle_ufp_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
 	/* update storm FW with negotiation results */
 	ecore_sp_pf_update_ufp(p_hwfn);
 
+	/* update stag pcp value */
+	ecore_sp_pf_update_stag(p_hwfn);
+
 	return ECORE_SUCCESS;
 }
 
@@ -2159,9 +2205,9 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
 	u32 global_offsize;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false, "Emulation - can't get MFW version\n");
-		return ECORE_SUCCESS;
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
+		DP_INFO(p_hwfn, "Emulation: Can't get MFW version\n");
+		return ECORE_NOTIMPL;
 	}
 #endif
 
@@ -2203,26 +2249,29 @@ enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
 					      u32 *p_media_type)
 {
-	enum _ecore_status_t rc = ECORE_SUCCESS;
+	*p_media_type = MEDIA_UNSPECIFIED;
 
 	/* TODO - Add support for VFs */
 	if (IS_VF(p_hwfn->p_dev))
 		return ECORE_INVAL;
 
 	if (!ecore_mcp_is_init(p_hwfn)) {
+#ifndef ASIC_ONLY
+		if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+			DP_INFO(p_hwfn, "Emulation: Can't get media type\n");
+			return ECORE_NOTIMPL;
+		}
+#endif
 		DP_NOTICE(p_hwfn, false, "MFW is not initialized!\n");
 		return ECORE_BUSY;
 	}
 
-	if (!p_ptt) {
-		*p_media_type = MEDIA_UNSPECIFIED;
-		rc = ECORE_INVAL;
-	} else {
-		*p_media_type = ecore_rd(p_hwfn, p_ptt,
-					 p_hwfn->mcp_info->port_addr +
-					 OFFSETOF(struct public_port,
-						  media_type));
-	}
+	if (!p_ptt)
+		return ECORE_INVAL;
+
+	*p_media_type = ecore_rd(p_hwfn, p_ptt,
+				 p_hwfn->mcp_info->port_addr +
+				 OFFSETOF(struct public_port, media_type));
 
 	return ECORE_SUCCESS;
 }
@@ -2626,9 +2675,9 @@ enum _ecore_status_t ecore_mcp_get_flash_size(struct ecore_hwfn *p_hwfn,
 	u32 flash_size;
 
 #ifndef ASIC_ONLY
-	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
-		DP_NOTICE(p_hwfn, false, "Emulation - can't get flash size\n");
-		return ECORE_INVAL;
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
+		DP_INFO(p_hwfn, "Emulation: Can't get flash size\n");
+		return ECORE_NOTIMPL;
 	}
 #endif
 
@@ -2725,6 +2774,16 @@ enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt,
 					      u8 vf_id, u8 num)
 {
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && !ecore_mcp_is_init(p_hwfn)) {
+		DP_INFO(p_hwfn,
+			"Emulation: Avoid sending the %s mailbox command\n",
+			ECORE_IS_BB(p_hwfn->p_dev) ? "CFG_VF_MSIX" :
+						     "CFG_PF_VFS_MSIX");
+		return ECORE_SUCCESS;
+	}
+#endif
+
 	if (ECORE_IS_BB(p_hwfn->p_dev))
 		return ecore_mcp_config_vf_msix_bb(p_hwfn, p_ptt, vf_id, num);
 	else
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 2c052b7fa..185cc2339 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -75,11 +75,16 @@ struct ecore_mcp_mb_params {
 	u32 cmd;
 	u32 param;
 	void *p_data_src;
-	u8 data_src_size;
 	void *p_data_dst;
-	u8 data_dst_size;
 	u32 mcp_resp;
 	u32 mcp_param;
+	u8 data_src_size;
+	u8 data_dst_size;
+	u32 flags;
+#define ECORE_MB_FLAG_CAN_SLEEP         (0x1 << 0)
+#define ECORE_MB_FLAG_AVOID_BLOCK       (0x1 << 1)
+#define ECORE_MB_FLAGS_IS_SET(params, flag) \
+	((params) != OSAL_NULL && ((params)->flags & ECORE_MB_FLAG_##flag))
 };
 
 struct ecore_drv_tlv_hdr {
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index f91b25e20..64509f7cc 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -62,6 +62,7 @@ struct ecore_iscsi_pf_params {
 	u8		num_uhq_pages_in_ring;
 	u8		num_queues;
 	u8		log_page_size;
+	u8		log_page_size_conn;
 	u8		rqe_log_size;
 	u8		max_fin_rt;
 	u8		gl_rq_pi;
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 49a5ff552..9860a62b5 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -355,14 +355,16 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
 		p_ramrod->outer_tag_config.inner_to_outer_pri_map[i] = i;
 
 	/* enable_stag_pri_change should be set if port is in BD mode or,
-	 * UFP with Host Control mode or, UFP with DCB over base interface.
+	 * UFP with Host Control mode.
 	 */
 	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits)) {
-		if ((p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS) ||
-		    (p_hwfn->p_dcbx_info->results.dcbx_enabled))
+		if (p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS)
 			p_ramrod->outer_tag_config.enable_stag_pri_change = 1;
 		else
 			p_ramrod->outer_tag_config.enable_stag_pri_change = 0;
+
+		p_ramrod->outer_tag_config.outer_tag.tci |=
+			OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13));
 	}
 
 	/* Place EQ address in RAMROD */
@@ -459,8 +461,7 @@ enum _ecore_status_t ecore_sp_pf_update_ufp(struct ecore_hwfn *p_hwfn)
 		return rc;
 
 	p_ent->ramrod.pf_update.update_enable_stag_pri_change = true;
-	if ((p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS) ||
-	    (p_hwfn->p_dcbx_info->results.dcbx_enabled))
+	if (p_hwfn->ufp_info.pri_type == ECORE_UFP_PRI_OS)
 		p_ent->ramrod.pf_update.enable_stag_pri_change = 1;
 	else
 		p_ent->ramrod.pf_update.enable_stag_pri_change = 0;
@@ -637,6 +638,10 @@ enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn)
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
+	if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits))
+		p_ent->ramrod.pf_update.mf_vlan |=
+			OSAL_CPU_TO_LE16(((u16)p_hwfn->ufp_info.tc << 13));
+
 	return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
 }
 
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 486b21dd9..6c386821f 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -185,11 +185,26 @@ ecore_spq_fill_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry *p_ent)
 /***************************************************************************
  * HSI access
  ***************************************************************************/
+
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK			0x1
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT			0
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK		0x1
+#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT		7
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK		0x1
+#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT		4
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK	0x1
+#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT	6
+
 static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 				    struct ecore_spq *p_spq)
 {
+	__le32 *p_spq_base_lo, *p_spq_base_hi;
+	struct regpair *p_consolid_base_addr;
+	u8 *p_flags1, *p_flags9, *p_flags10;
 	struct core_conn_context *p_cxt;
 	struct ecore_cxt_info cxt_info;
+	u32 core_conn_context_size;
+	__le16 *p_physical_q0;
 	u16 physical_q;
 	enum _ecore_status_t rc;
 
@@ -197,41 +212,39 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
 
 	rc = ecore_cxt_get_cid_info(p_hwfn, &cxt_info);
 
-	if (rc < 0) {
+	if (rc != ECORE_SUCCESS) {
 		DP_NOTICE(p_hwfn, true, "Cannot find context info for cid=%d\n",
 			  p_spq->cid);
 		return;
 	}
 
 	p_cxt = cxt_info.p_cxt;
+	core_conn_context_size = sizeof(*p_cxt);
+	p_flags1 = &p_cxt->xstorm_ag_context.flags1;
+	p_flags9 = &p_cxt->xstorm_ag_context.flags9;
+	p_flags10 = &p_cxt->xstorm_ag_context.flags10;
+	p_physical_q0 = &p_cxt->xstorm_ag_context.physical_q0;
+	p_spq_base_lo = &p_cxt->xstorm_st_context.spq_base_lo;
+	p_spq_base_hi = &p_cxt->xstorm_st_context.spq_base_hi;
+	p_consolid_base_addr = &p_cxt->xstorm_st_context.consolid_base_addr;
 
 	/* @@@TBD we zero the context until we have ilt_reset implemented. */
-	OSAL_MEM_ZERO(p_cxt, sizeof(*p_cxt));
-
-	if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev)) {
-		SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-			  XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
-		SET_FIELD(p_cxt->xstorm_ag_context.flags1,
-			  XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
-		/* SET_FIELD(p_cxt->xstorm_ag_context.flags10,
-		 *	  E4_XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN, 1);
-		 */
-		SET_FIELD(p_cxt->xstorm_ag_context.flags9,
-			  XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
-	}
+	OSAL_MEM_ZERO(p_cxt, core_conn_context_size);
+
+	SET_FIELD(*p_flags10, XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN, 1);
+	SET_FIELD(*p_flags1, XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE, 1);
+	SET_FIELD(*p_flags9, XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN, 1);
 
 	/* CDU validation - FIXME currently disabled */
 
 	/* QM physical queue */
 	physical_q = ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_LB);
-	p_cxt->xstorm_ag_context.physical_q0 = OSAL_CPU_TO_LE16(physical_q);
+	*p_physical_q0 = OSAL_CPU_TO_LE16(physical_q);
 
-	p_cxt->xstorm_st_context.spq_base_lo =
-	    DMA_LO_LE(p_spq->chain.p_phys_addr);
-	p_cxt->xstorm_st_context.spq_base_hi =
-	    DMA_HI_LE(p_spq->chain.p_phys_addr);
+	*p_spq_base_lo = DMA_LO_LE(p_spq->chain.p_phys_addr);
+	*p_spq_base_hi = DMA_HI_LE(p_spq->chain.p_phys_addr);
 
-	DMA_REGPAIR_LE(p_cxt->xstorm_st_context.consolid_base_addr,
+	DMA_REGPAIR_LE(*p_consolid_base_addr,
 		       p_hwfn->p_consq->chain.p_phys_addr);
 }
 
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 264217252..deee04ac4 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -906,7 +906,7 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
  *
  * @brief ecore_iov_config_perm_table - configure the permission
  *      zone table.
- *      In E4, queue zone permission table size is 320x9. There
+ *      The queue zone permission table size is 320x9. There
  *      are 320 VF queues for single engine device (256 for dual
  *      engine device), and each entry has the following format:
  *      {Valid, VF[7:0]}
@@ -967,6 +967,9 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
 
 	for (qid = 0; qid < num_rx_queues; qid++) {
 		p_block = ecore_get_igu_free_sb(p_hwfn, false);
+		if (!p_block)
+			continue;
+
 		vf->igu_sbs[qid] = p_block->igu_sb_id;
 		p_block->status &= ~ECORE_IGU_STATUS_FREE;
 		SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER, qid);
@@ -1064,6 +1067,15 @@ void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
 	p_bulletin->capability_speed = p_caps->speed_capabilities;
 }
 
+#ifndef ASIC_ONLY
+static void ecore_emul_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	/* Increase the maximum number of DORQ FIFO entries used by child VFs */
+	ecore_wr(p_hwfn, p_ptt, DORQ_REG_VF_USAGE_CNT_LIM, 0x3ec);
+}
+#endif
+
 enum _ecore_status_t
 ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			 struct ecore_ptt *p_ptt,
@@ -1188,18 +1200,39 @@ ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			   &link_params, &link_state, &link_caps);
 
 	rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-	if (rc == ECORE_SUCCESS) {
-		vf->b_init = true;
-		p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] |=
+	vf->b_init = true;
+#ifndef REMOVE_DBG
+	p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] |=
 			(1ULL << (vf->relative_vf_id % 64));
+#endif
 
-		if (IS_LEAD_HWFN(p_hwfn))
-			p_hwfn->p_dev->p_iov_info->num_vfs++;
+	if (IS_LEAD_HWFN(p_hwfn))
+		p_hwfn->p_dev->p_iov_info->num_vfs++;
+
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		ecore_emul_iov_init_hw_for_vf(p_hwfn, p_ptt);
+#endif
+
+	return ECORE_SUCCESS;
 	}
 
-	return rc;
+#ifndef ASIC_ONLY
+static void ecore_emul_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
+					  struct ecore_ptt *p_ptt)
+{
+	if (!ecore_mcp_is_init(p_hwfn)) {
+		u32 sriov_dis = ecore_rd(p_hwfn, p_ptt,
+					 PGLUE_B_REG_SR_IOV_DISABLED_REQUEST);
+
+		ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_SR_IOV_DISABLED_REQUEST_CLR,
+			 sriov_dis);
 }
+}
+#endif
 
 enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
@@ -1257,6 +1290,11 @@ enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
 			p_hwfn->p_dev->p_iov_info->num_vfs--;
 	}
 
+#ifndef ASIC_ONLY
+	if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
+		ecore_emul_iov_release_hw_for_vf(p_hwfn, p_ptt);
+#endif
+
 	return ECORE_SUCCESS;
 }
 
@@ -1391,7 +1429,7 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
 
 	eng_vf_id = p_vf->abs_vf_id;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct dmae_params));
+	OSAL_MEMSET(&params, 0, sizeof(params));
 	SET_FIELD(params.flags, DMAE_PARAMS_DST_VF_VALID, 0x1);
 	params.dst_vf_id = eng_vf_id;
 
@@ -1787,7 +1825,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn       *p_hwfn,
 	/* fill in pfdev info */
 	pfdev_info->chip_num = p_hwfn->p_dev->chip_num;
 	pfdev_info->db_size = 0;	/* @@@ TBD MichalK Vf Doorbells */
-	pfdev_info->indices_per_sb = MAX_PIS_PER_SB;
+	pfdev_info->indices_per_sb = PIS_PER_SB;
 
 	pfdev_info->capabilities = PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED |
 				   PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE;
@@ -2247,10 +2285,14 @@ static void ecore_iov_vf_mbx_start_rxq_resp(struct ecore_hwfn *p_hwfn,
 	ecore_add_tlv(&mbx->offset, CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
 
-	/* Update the TLV with the response */
+	/* Update the TLV with the response.
+	 * The VF Rx producers are located in the vf zone.
+	 */
 	if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
 		req = &mbx->req_virt->start_rxq;
-		p_tlv->offset = PXP_VF_BAR0_START_MSDM_ZONE_B +
+
+		p_tlv->offset =
+			PXP_VF_BAR0_START_MSDM_ZONE_B +
 				OFFSETOF(struct mstorm_vf_zone,
 					 non_trigger.eth_rx_queue_producers) +
 				sizeof(struct eth_rx_prod_data) * req->rx_qid;
@@ -2350,13 +2392,15 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
 	if (p_cid == OSAL_NULL)
 		goto out;
 
-	/* Legacy VFs have their Producers in a different location, which they
-	 * calculate on their own and clean the producer prior to this.
+	/* The VF Rx producers are located in the vf zone.
+	 * Legacy VFs have their producers in the queue zone, but they
+	 * calculate the location by their own and clean them prior to this.
 	 */
 	if (!(vf_legacy & ECORE_QCID_LEGACY_VF_RX_PROD))
 		REG_WR(p_hwfn,
 		       GTT_BAR0_MAP_REG_MSDM_RAM +
-		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
+		       MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id,
+						  req->rx_qid),
 		       0);
 
 	rc = ecore_eth_rxq_start_ramrod(p_hwfn, p_cid,
@@ -3855,48 +3899,70 @@ ecore_iov_vf_flr_poll_dorq(struct ecore_hwfn *p_hwfn,
 	return ECORE_SUCCESS;
 }
 
+#define MAX_NUM_EXT_VOQS	(MAX_NUM_PORTS * NUM_OF_TCS)
+
 static enum _ecore_status_t
 ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn,
 			  struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
 {
-	u32 cons[MAX_NUM_VOQS_E4], distance[MAX_NUM_VOQS_E4];
-	int i, cnt;
+	u32 prod, cons[MAX_NUM_EXT_VOQS], distance[MAX_NUM_EXT_VOQS], tmp;
+	u8 max_phys_tcs_per_port = p_hwfn->qm_info.max_phys_tcs_per_port;
+	u8 max_ports_per_engine = p_hwfn->p_dev->num_ports_in_engine;
+	u32 prod_voq0_addr = PBF_REG_NUM_BLOCKS_ALLOCATED_PROD_VOQ0;
+	u32 cons_voq0_addr = PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0;
+	u8 port_id, tc, tc_id = 0, voq = 0;
+	int cnt;
 
 	/* Read initial consumers & producers */
-	for (i = 0; i < MAX_NUM_VOQS_E4; i++) {
-		u32 prod;
-
-		cons[i] = ecore_rd(p_hwfn, p_ptt,
-				   PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0 +
-				   i * 0x40);
+	for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
+		/* "max_phys_tcs_per_port" active TCs + 1 pure LB TC */
+		for (tc = 0; tc < max_phys_tcs_per_port + 1; tc++) {
+			tc_id = (tc < max_phys_tcs_per_port) ?
+				tc :
+				PURE_LB_TC;
+			voq = VOQ(port_id, tc_id, max_phys_tcs_per_port);
+			cons[voq] = ecore_rd(p_hwfn, p_ptt,
+					     cons_voq0_addr + voq * 0x40);
 		prod = ecore_rd(p_hwfn, p_ptt,
-				PBF_REG_NUM_BLOCKS_ALLOCATED_PROD_VOQ0 +
-				i * 0x40);
-		distance[i] = prod - cons[i];
+					prod_voq0_addr + voq * 0x40);
+			distance[voq] = prod - cons[voq];
+		}
 	}
 
 	/* Wait for consumers to pass the producers */
-	i = 0;
+	port_id = 0;
+	tc = 0;
 	for (cnt = 0; cnt < 50; cnt++) {
-		for (; i < MAX_NUM_VOQS_E4; i++) {
-			u32 tmp;
-
+		for (; port_id < max_ports_per_engine; port_id++) {
+			/* "max_phys_tcs_per_port" active TCs + 1 pure LB TC */
+			for (; tc < max_phys_tcs_per_port + 1; tc++) {
+				tc_id = (tc < max_phys_tcs_per_port) ?
+					tc :
+					PURE_LB_TC;
+				voq = VOQ(port_id, tc_id,
+					  max_phys_tcs_per_port);
 			tmp = ecore_rd(p_hwfn, p_ptt,
-				       PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0 +
-				       i * 0x40);
-			if (distance[i] > tmp - cons[i])
+					       cons_voq0_addr + voq * 0x40);
+			if (distance[voq] > tmp - cons[voq])
+				break;
+		}
+
+			if (tc == max_phys_tcs_per_port + 1)
+				tc = 0;
+			else
 				break;
 		}
 
-		if (i == MAX_NUM_VOQS_E4)
+		if (port_id == max_ports_per_engine)
 			break;
 
 		OSAL_MSLEEP(20);
 	}
 
 	if (cnt == 50) {
-		DP_ERR(p_hwfn, "VF[%d] - pbf polling failed on VOQ %d\n",
-		       p_vf->abs_vf_id, i);
+		DP_ERR(p_hwfn,
+		       "VF[%d] - pbf polling failed on VOQ %d [port_id %d, tc_id %d]\n",
+		       p_vf->abs_vf_id, voq, port_id, tc_id);
 		return ECORE_TIMEOUT;
 	}
 
@@ -3996,11 +4062,11 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 					      struct ecore_ptt *p_ptt)
 {
-	u32 ack_vfs[VF_MAX_STATIC / 32];
+	u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 	u16 i;
 
-	OSAL_MEMSET(ack_vfs, 0, sizeof(u32) * (VF_MAX_STATIC / 32));
+	OSAL_MEM_ZERO(ack_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES);
 
 	/* Since BRB <-> PRS interface can't be tested as part of the flr
 	 * polling due to HW limitations, simply sleep a bit. And since
@@ -4019,10 +4085,10 @@ enum _ecore_status_t
 ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
 				struct ecore_ptt *p_ptt, u16 rel_vf_id)
 {
-	u32 ack_vfs[VF_MAX_STATIC / 32];
+	u32 ack_vfs[EXT_VF_BITMAP_SIZE_IN_DWORDS];
 	enum _ecore_status_t rc = ECORE_SUCCESS;
 
-	OSAL_MEMSET(ack_vfs, 0, sizeof(u32) * (VF_MAX_STATIC / 32));
+	OSAL_MEM_ZERO(ack_vfs, EXT_VF_BITMAP_SIZE_IN_BYTES);
 
 	/* Wait instead of polling the BRB <-> PRS interface */
 	OSAL_MSLEEP(100);
@@ -4039,7 +4105,8 @@ bool ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
 	u16 i;
 
 	DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Marking FLR-ed VFs\n");
-	for (i = 0; i < (VF_MAX_STATIC / 32); i++)
+
+	for (i = 0; i < VF_BITMAP_SIZE_IN_DWORDS; i++)
 		DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
 			   "[%08x,...,%08x]: %08x\n",
 			   i * 32, (i + 1) * 32 - 1, p_disabled_vfs[i]);
@@ -4396,7 +4463,7 @@ enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
 	if (!vf_info)
 		return ECORE_INVAL;
 
-	OSAL_MEMSET(&params, 0, sizeof(struct dmae_params));
+	OSAL_MEMSET(&params, 0, sizeof(params));
 	SET_FIELD(params.flags, DMAE_PARAMS_SRC_VF_VALID, 0x1);
 	SET_FIELD(params.flags, DMAE_PARAMS_COMPLETION_DST, 0x1);
 	params.src_vf_id = vf_info->abs_vf_id;
@@ -4785,9 +4852,9 @@ enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
 						 struct ecore_ptt *p_ptt,
 						 int vfid, int val)
 {
-	struct ecore_mcp_link_state *p_link;
 	struct ecore_vf_info *vf;
 	u8 abs_vp_id = 0;
+	u16 rl_id;
 	enum _ecore_status_t rc;
 
 	vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
@@ -4799,10 +4866,8 @@ enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
 	if (rc != ECORE_SUCCESS)
 		return rc;
 
-	p_link = &ECORE_LEADING_HWFN(p_hwfn->p_dev)->mcp_info->link_output;
-
-	return ecore_init_vport_rl(p_hwfn, p_ptt, abs_vp_id, (u32)val,
-				   p_link->speed);
+	rl_id = abs_vp_id; /* The "rl_id" is set as the "vport_id" */
+	return ecore_init_global_rl(p_hwfn, p_ptt, rl_id, (u32)val);
 }
 
 enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 3ba6a0cf2..24846cfb5 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -257,6 +257,7 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	struct pfvf_acquire_resp_tlv *resp = &p_iov->pf2vf_reply->acquire_resp;
 	struct pf_vf_pfdev_info *pfdev_info = &resp->pfdev_info;
 	struct ecore_vf_acquire_sw_info vf_sw_info;
+	struct ecore_dev *p_dev = p_hwfn->p_dev;
 	struct vf_pf_resc_request *p_resc;
 	bool resources_acquired = false;
 	struct vfpf_acquire_tlv *req;
@@ -427,20 +428,20 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
 	p_iov->bulletin.size = resp->bulletin_size;
 
 	/* get HW info */
-	p_hwfn->p_dev->type = resp->pfdev_info.dev_type;
-	p_hwfn->p_dev->chip_rev = (u8)resp->pfdev_info.chip_rev;
+	p_dev->type = resp->pfdev_info.dev_type;
+	p_dev->chip_rev = (u8)resp->pfdev_info.chip_rev;
 
 	DP_INFO(p_hwfn, "Chip details - %s%d\n",
-		ECORE_IS_BB(p_hwfn->p_dev) ? "BB" : "AH",
+		ECORE_IS_BB(p_dev) ? "BB" : "AH",
 		CHIP_REV_IS_A0(p_hwfn->p_dev) ? 0 : 1);
 
-	p_hwfn->p_dev->chip_num = pfdev_info->chip_num & 0xffff;
+	p_dev->chip_num = pfdev_info->chip_num & 0xffff;
 
 	/* Learn of the possibility of CMT */
 	if (IS_LEAD_HWFN(p_hwfn)) {
 		if (resp->pfdev_info.capabilities & PFVF_ACQUIRE_CAP_100G) {
 			DP_INFO(p_hwfn, "100g VF\n");
-			p_hwfn->p_dev->num_hwfns = 2;
+			p_dev->num_hwfns = 2;
 		}
 	}
 
@@ -636,10 +637,6 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn)
 	return ECORE_NOMEM;
 }
 
-#define TSTORM_QZONE_START   PXP_VF_BAR0_START_SDM_ZONE_A
-#define MSTORM_QZONE_START(dev)   (TSTORM_QZONE_START + \
-				   (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
-
 /* @DPDK - changed enum ecore_tunn_clss to enum ecore_tunn_mode */
 static void
 __ecore_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
@@ -828,8 +825,7 @@ ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
 		u8 hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
 		u32 init_prod_val = 0;
 
-		*pp_prod = (u8 OSAL_IOMEM *)
-			   p_hwfn->regview +
+		*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
 			   MSTORM_QZONE_START(p_hwfn->p_dev) +
 			   (hw_qid) * MSTORM_QZONE_SIZE;
 
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index b5f93e9fa..559638508 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -44,7 +44,7 @@
 /* Driver versions */
 #define QEDE_PMD_VER_PREFIX		"QEDE PMD"
 #define QEDE_PMD_VERSION_MAJOR		2
-#define QEDE_PMD_VERSION_MINOR	        10
+#define QEDE_PMD_VERSION_MINOR	        11
 #define QEDE_PMD_VERSION_REVISION       0
 #define QEDE_PMD_VERSION_PATCH	        1
 
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 8a108f99c..c9caec645 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -18,7 +18,7 @@
 char qede_fw_file[PATH_MAX];
 
 static const char * const QEDE_DEFAULT_FIRMWARE =
-	"/lib/firmware/qed/qed_init_values-8.37.7.0.bin";
+	"/lib/firmware/qed/qed_init_values-8.40.25.0.bin";
 
 static void
 qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 40a246229..a28dd0a07 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -805,7 +805,7 @@ qede_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 		fp->rxq->hw_rxq_prod_addr = ret_params.p_prod;
 		fp->rxq->handle = ret_params.p_handle;
 
-		fp->rxq->hw_cons_ptr = &fp->sb_info->sb_virt->pi_array[RX_PI];
+		fp->rxq->hw_cons_ptr = &fp->sb_info->sb_pi_array[RX_PI];
 		qede_update_rx_prod(qdev, fp->rxq);
 		eth_dev->data->rx_queue_state[rx_queue_id] =
 			RTE_ETH_QUEUE_STATE_STARTED;
@@ -863,7 +863,7 @@ qede_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id)
 		txq->doorbell_addr = ret_params.p_doorbell;
 		txq->handle = ret_params.p_handle;
 
-		txq->hw_cons_ptr = &fp->sb_info->sb_virt->pi_array[TX_PI(0)];
+		txq->hw_cons_ptr = &fp->sb_info->sb_pi_array[TX_PI(0)];
 		SET_FIELD(txq->tx_db.data.params, ETH_DB_DATA_DEST,
 				DB_DEST_XCM);
 		SET_FIELD(txq->tx_db.data.params, ETH_DB_DATA_AGG_CMD,
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 9/9] net/qede: print adapter info during init failure
  2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
                   ` (18 preceding siblings ...)
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 8/9] net/qede/base: update the FW to 8.40.25.0 Rasesh Mody
@ 2019-10-06 20:14 ` Rasesh Mody
  19 siblings, 0 replies; 24+ messages in thread
From: Rasesh Mody @ 2019-10-06 20:14 UTC (permalink / raw)
  To: dev, jerinj, ferruh.yigit; +Cc: Rasesh Mody, GR-Everest-DPDK-Dev

Dump the info log banner with whatever information is available in case
of device initialization failure.
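
For reference, the change boils down to routing every failing init step
through a common error label and printing the banner exactly once on
either path. A simplified, standalone sketch of that pattern follows
(placeholder names such as common_dev_init(), print_adapter_info() and
fail_early stand in for the real driver symbols used in the diff below):

#include <stdbool.h>
#include <stdio.h>

static bool do_once = true;

static void print_adapter_info(void)
{
        printf("**** adapter info banner ****\n");
}

static int common_dev_init(bool fail_early)
{
        int rc = 0;

        if (fail_early) {
                rc = -1;        /* e.g. -ENODEV in the real driver */
                goto err;
        }

        /* success path: banner printed once after device info is known */
        if (do_once) {
                print_adapter_info();
                do_once = false;
        }
        return 0;

err:
        /* failure path: still print whatever info is available */
        if (do_once) {
                print_adapter_info();
                do_once = false;
        }
        return rc;
}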

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/qede_ethdev.c | 81 ++++++++++++++++++++++------------
 drivers/net/qede/qede_ethdev.h | 19 +++++---
 2 files changed, 67 insertions(+), 33 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 0c9f6590e..53fdfde9a 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -278,30 +278,44 @@ static void qede_print_adapter_info(struct qede_dev *qdev)
 {
 	struct ecore_dev *edev = &qdev->edev;
 	struct qed_dev_info *info = &qdev->dev_info.common;
-	static char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
 	static char ver_str[QEDE_PMD_DRV_VER_STR_SIZE];
 
-	DP_INFO(edev, "*********************************\n");
-	DP_INFO(edev, " DPDK version:%s\n", rte_version());
-	DP_INFO(edev, " Chip details : %s %c%d\n",
+	DP_INFO(edev, "**************************************************\n");
+	DP_INFO(edev, " DPDK version\t\t\t: %s\n", rte_version());
+	DP_INFO(edev, " Chip details\t\t\t: %s %c%d\n",
 		  ECORE_IS_BB(edev) ? "BB" : "AH",
 		  'A' + edev->chip_rev,
 		  (int)edev->chip_metal);
-	snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%d.%d.%d.%d",
-		 info->fw_major, info->fw_minor, info->fw_rev, info->fw_eng);
-	snprintf(drv_ver, QEDE_PMD_DRV_VER_STR_SIZE, "%s_%s",
-		 ver_str, QEDE_PMD_VERSION);
-	DP_INFO(edev, " Driver version : %s\n", drv_ver);
-	DP_INFO(edev, " Firmware version : %s\n", ver_str);
+	snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%s",
+		 QEDE_PMD_DRV_VERSION);
+	DP_INFO(edev, " Driver version\t\t\t: %s\n", ver_str);
+
+	snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%s",
+		 QEDE_PMD_BASE_VERSION);
+	DP_INFO(edev, " Base version\t\t\t: %s\n", ver_str);
+
+	if (!IS_VF(edev))
+		snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%s",
+			 QEDE_PMD_FW_VERSION);
+	else
+		snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%d.%d.%d.%d",
+			 info->fw_major, info->fw_minor,
+			 info->fw_rev, info->fw_eng);
+	DP_INFO(edev, " Firmware version\t\t\t: %s\n", ver_str);
 
 	snprintf(ver_str, MCP_DRV_VER_STR_SIZE,
 		 "%d.%d.%d.%d",
-		(info->mfw_rev >> 24) & 0xff,
-		(info->mfw_rev >> 16) & 0xff,
-		(info->mfw_rev >> 8) & 0xff, (info->mfw_rev) & 0xff);
-	DP_INFO(edev, " Management Firmware version : %s\n", ver_str);
-	DP_INFO(edev, " Firmware file : %s\n", qede_fw_file);
-	DP_INFO(edev, "*********************************\n");
+		 (info->mfw_rev & QED_MFW_VERSION_3_MASK) >>
+		 QED_MFW_VERSION_3_OFFSET,
+		 (info->mfw_rev & QED_MFW_VERSION_2_MASK) >>
+		 QED_MFW_VERSION_2_OFFSET,
+		 (info->mfw_rev & QED_MFW_VERSION_1_MASK) >>
+		 QED_MFW_VERSION_1_OFFSET,
+		 (info->mfw_rev & QED_MFW_VERSION_0_MASK) >>
+		 QED_MFW_VERSION_0_OFFSET);
+	DP_INFO(edev, " Management Firmware version\t: %s\n", ver_str);
+	DP_INFO(edev, " Firmware file\t\t\t: %s\n", qede_fw_file);
+	DP_INFO(edev, "**************************************************\n");
 }
 
 static void qede_reset_queue_stats(struct qede_dev *qdev, bool xstats)
@@ -2427,7 +2441,8 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 	qed_ops = qed_get_eth_ops();
 	if (!qed_ops) {
 		DP_ERR(edev, "Failed to get qed_eth_ops_pass\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		goto err;
 	}
 
 	DP_INFO(edev, "Starting qede probe\n");
@@ -2435,7 +2450,8 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 				    dp_level, is_vf);
 	if (rc != 0) {
 		DP_ERR(edev, "qede probe failed rc %d\n", rc);
-		return -ENODEV;
+		rc = -ENODEV;
+		goto err;
 	}
 	qede_update_pf_params(edev);
 
@@ -2456,7 +2472,8 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	if (rte_intr_enable(&pci_dev->intr_handle)) {
 		DP_ERR(edev, "rte_intr_enable() failed\n");
-		return -ENODEV;
+		rc = -ENODEV;
+		goto err;
 	}
 
 	/* Start the Slowpath-process */
@@ -2491,7 +2508,8 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 		if (rc != 0) {
 			DP_ERR(edev, "Unable to start periodic"
 				     " timer rc %d\n", rc);
-			return -EINVAL;
+			rc = -EINVAL;
+			goto err;
 		}
 	}
 
@@ -2500,7 +2518,8 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 		DP_ERR(edev, "Cannot start slowpath rc = %d\n", rc);
 		rte_eal_alarm_cancel(qede_poll_sp_sb_cb,
 				     (void *)eth_dev);
-		return -ENODEV;
+		rc = -ENODEV;
+		goto err;
 	}
 
 	rc = qed_ops->fill_dev_info(edev, &dev_info);
@@ -2510,11 +2529,17 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 		qed_ops->common->remove(edev);
 		rte_eal_alarm_cancel(qede_poll_sp_sb_cb,
 				     (void *)eth_dev);
-		return -ENODEV;
+		rc = -ENODEV;
+		goto err;
 	}
 
 	qede_alloc_etherdev(adapter, &dev_info);
 
+	if (do_once) {
+		qede_print_adapter_info(adapter);
+		do_once = false;
+	}
+
 	adapter->ops->common->set_name(edev, edev->name);
 
 	if (!is_vf)
@@ -2571,11 +2596,6 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 
 	eth_dev->dev_ops = (is_vf) ? &qede_eth_vf_dev_ops : &qede_eth_dev_ops;
 
-	if (do_once) {
-		qede_print_adapter_info(adapter);
-		do_once = false;
-	}
-
 	/* Bring-up the link */
 	qede_dev_set_link_state(eth_dev, true);
 
@@ -2621,6 +2641,13 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 	DP_INFO(edev, "Device initialized\n");
 
 	return 0;
+
+err:
+	if (do_once) {
+		qede_print_adapter_info(adapter);
+		do_once = false;
+	}
+	return rc;
 }
 
 static int qedevf_eth_dev_init(struct rte_eth_dev *eth_dev)
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 559638508..1ac2d086a 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -42,20 +42,27 @@
 #define qede_stringify(x...)		qede_stringify1(x)
 
 /* Driver versions */
+#define QEDE_PMD_DRV_VER_STR_SIZE NAME_SIZE /* 128 */
 #define QEDE_PMD_VER_PREFIX		"QEDE PMD"
 #define QEDE_PMD_VERSION_MAJOR		2
 #define QEDE_PMD_VERSION_MINOR	        11
 #define QEDE_PMD_VERSION_REVISION       0
 #define QEDE_PMD_VERSION_PATCH	        1
 
-#define QEDE_PMD_VERSION qede_stringify(QEDE_PMD_VERSION_MAJOR) "."     \
-			 qede_stringify(QEDE_PMD_VERSION_MINOR) "."     \
-			 qede_stringify(QEDE_PMD_VERSION_REVISION) "."  \
-			 qede_stringify(QEDE_PMD_VERSION_PATCH)
+#define QEDE_PMD_DRV_VERSION qede_stringify(QEDE_PMD_VERSION_MAJOR) "."     \
+			     qede_stringify(QEDE_PMD_VERSION_MINOR) "."     \
+			     qede_stringify(QEDE_PMD_VERSION_REVISION) "."  \
+			     qede_stringify(QEDE_PMD_VERSION_PATCH)
 
-#define QEDE_PMD_DRV_VER_STR_SIZE NAME_SIZE
-#define QEDE_PMD_VER_PREFIX "QEDE PMD"
+#define QEDE_PMD_BASE_VERSION qede_stringify(ECORE_MAJOR_VERSION) "."       \
+			      qede_stringify(ECORE_MINOR_VERSION) "."       \
+			      qede_stringify(ECORE_REVISION_VERSION) "."    \
+			      qede_stringify(ECORE_ENGINEERING_VERSION)
 
+#define QEDE_PMD_FW_VERSION qede_stringify(FW_MAJOR_VERSION) "."            \
+			    qede_stringify(FW_MINOR_VERSION) "."            \
+			    qede_stringify(FW_REVISION_VERSION) "."         \
+			    qede_stringify(FW_ENGINEERING_VERSION)
 
 #define QEDE_RSS_INDIR_INITED     (1 << 0)
 #define QEDE_RSS_KEY_INITED       (1 << 1)
-- 
2.18.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/9] net/qede/base: update FW to 8.40.25.0
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 " Rasesh Mody
@ 2019-10-11  7:57   ` Jerin Jacob
  0 siblings, 0 replies; 24+ messages in thread
From: Jerin Jacob @ 2019-10-11  7:57 UTC (permalink / raw)
  To: Rasesh Mody; +Cc: dpdk-dev, Jerin Jacob, Ferruh Yigit, GR-Everest-DPDK-Dev

On Mon, Oct 7, 2019 at 1:44 AM Rasesh Mody <rmody@marvell.com> wrote:
>
> Hi,
>
> This patch series updates the FW to 8.40.25.0 and includes corresponding
> base driver changes. It also includes some enhancements and fixes.
> The PMD version is bumped to 2.11.0.1.
>
> v2:
>    Addressed checkpatch issues
>    9/9 - print adapter info for any failure (not just probe) during init
>
> Thanks!
> -Rasesh

Series applied to dpdk-next-net-mrvl/master. Thanks.



>
> Rasesh Mody (9):
>   net/qede/base: calculate right page index for PBL chains
>   net/qede/base: change MFW mailbox command log verbosity
>   net/qede/base: lock entire QM reconfiguration flow
>   net/qede/base: rename HSI datatypes and funcs
>   net/qede/base: update rt defs NVM cfg and mcp code
>   net/qede/base: move dmae code to HSI
>   net/qede/base: update HSI code
>   net/qede/base: update the FW to 8.40.25.0
>   net/qede: print adapter info during init failure
>
>  drivers/net/qede/base/bcm_osal.c              |    1 +
>  drivers/net/qede/base/bcm_osal.h              |    5 +-
>  drivers/net/qede/base/common_hsi.h            |  257 +--
>  drivers/net/qede/base/ecore.h                 |   77 +-
>  drivers/net/qede/base/ecore_chain.h           |   84 +-
>  drivers/net/qede/base/ecore_cxt.c             |  520 ++++---
>  drivers/net/qede/base/ecore_cxt.h             |   12 +
>  drivers/net/qede/base/ecore_dcbx.c            |    7 +-
>  drivers/net/qede/base/ecore_dev.c             |  753 +++++----
>  drivers/net/qede/base/ecore_dev_api.h         |   92 --
>  drivers/net/qede/base/ecore_gtt_reg_addr.h    |   42 +-
>  drivers/net/qede/base/ecore_gtt_values.h      |   18 +-
>  drivers/net/qede/base/ecore_hsi_common.h      | 1134 +++++++-------
>  drivers/net/qede/base/ecore_hsi_debug_tools.h |  475 +++---
>  drivers/net/qede/base/ecore_hsi_eth.h         | 1386 ++++++++---------
>  drivers/net/qede/base/ecore_hsi_init_func.h   |   25 +-
>  drivers/net/qede/base/ecore_hsi_init_tool.h   |   42 +-
>  drivers/net/qede/base/ecore_hw.c              |   68 +-
>  drivers/net/qede/base/ecore_hw.h              |   98 +-
>  drivers/net/qede/base/ecore_init_fw_funcs.c   |  718 ++++-----
>  drivers/net/qede/base/ecore_init_fw_funcs.h   |  107 +-
>  drivers/net/qede/base/ecore_init_ops.c        |   66 +-
>  drivers/net/qede/base/ecore_init_ops.h        |   12 +-
>  drivers/net/qede/base/ecore_int.c             |  131 +-
>  drivers/net/qede/base/ecore_int.h             |    4 +-
>  drivers/net/qede/base/ecore_int_api.h         |   13 +-
>  drivers/net/qede/base/ecore_iov_api.h         |    4 +-
>  drivers/net/qede/base/ecore_iro.h             |  320 ++--
>  drivers/net/qede/base/ecore_iro_values.h      |  336 ++--
>  drivers/net/qede/base/ecore_l2.c              |   10 +-
>  drivers/net/qede/base/ecore_l2_api.h          |    2 +
>  drivers/net/qede/base/ecore_mcp.c             |  296 ++--
>  drivers/net/qede/base/ecore_mcp.h             |    9 +-
>  drivers/net/qede/base/ecore_proto_if.h        |    1 +
>  drivers/net/qede/base/ecore_rt_defs.h         |  870 +++++------
>  drivers/net/qede/base/ecore_sp_commands.c     |   15 +-
>  drivers/net/qede/base/ecore_spq.c             |   55 +-
>  drivers/net/qede/base/ecore_sriov.c           |  178 ++-
>  drivers/net/qede/base/ecore_sriov.h           |    4 +-
>  drivers/net/qede/base/ecore_vf.c              |   18 +-
>  drivers/net/qede/base/eth_common.h            |  101 +-
>  drivers/net/qede/base/mcp_public.h            |   59 +-
>  drivers/net/qede/base/nvm_cfg.h               |  909 ++++++++++-
>  drivers/net/qede/base/reg_addr.h              |   75 +-
>  drivers/net/qede/qede_ethdev.c                |   81 +-
>  drivers/net/qede/qede_ethdev.h                |   21 +-
>  drivers/net/qede/qede_main.c                  |    2 +-
>  drivers/net/qede/qede_rxtx.c                  |   28 +-
>  48 files changed, 5493 insertions(+), 4048 deletions(-)
>
> --
> 2.18.0
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v2 8/9] net/qede/base: update the FW to 8.40.25.0
  2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 8/9] net/qede/base: update the FW to 8.40.25.0 Rasesh Mody
@ 2019-10-11 16:13   ` Ferruh Yigit
  0 siblings, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2019-10-11 16:13 UTC (permalink / raw)
  To: Rasesh Mody, dev, jerinj; +Cc: GR-Everest-DPDK-Dev

On 10/6/2019 9:14 PM, Rasesh Mody wrote:
> This patch updates the FW to 8.40.25.0 and corresponding base driver
> changes. It also updates the PMD version to 2.11.0.1. The FW updates
> consists of enhancements and fixes as described below.
> 
>  - VF RX queue start ramrod can get stuck due to completion error.
>    Return EQ completion with error, when fail to load VF data. Use VF
>    FID in RX queue start ramrod
>  - Fix big receive buffer initialization for 100G to address failure
>    leading to BRB hardware assertion
>  - GRE tunnel traffic doesn't run when non-L2 ethernet protocol is enabled,
>    fix FW to not forward tunneled SYN packets to LL2.
>  - Fix the FW assert that is caused during vport_update when
>    tx-switching is enabled
>  - Add initial FW support for VF Representors
>  - Add ecore_get_hsi_def_val() API to get default HSI values
>  - Move following from .c to .h files:
>    TSTORM_QZONE_START and MSTORM_QZONE_START
>    enum ilt_clients
>    renamed struct ecore_dma_mem to phys_mem_desc and moved
>  - Add ecore_cxt_set_cli() and ecore_cxt_set_blk() APIs to set client
>    config and block details
>  - Use SET_FIELD() macro where appropriate
>  - Address spell check and code alignment issues
> 
> Signed-off-by: Rasesh Mody <rmody@marvell.com>

<...>

> -void ecore_calc_session_ctx_validation(void *p_ctx_mem, u16 ctx_size,
> +void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
> +				       void *p_ctx_mem, u16 ctx_size,
>  				       u8 ctx_type, u32 cid)
>  {
>  	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
>  
> -	p_ctx = (u8 *)p_ctx_mem;
> +	p_ctx = (u8 * const)p_ctx_mem;

This is causing a build error with icc [1]; I will remove the 'const' while merging.

[1]
error #191: type qualifier is meaningless on cast type

<...>

> -void ecore_memset_session_ctx(void *p_ctx_mem, u32 ctx_size, u8 ctx_type)
> +void ecore_memset_session_ctx(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
> +			      u32 ctx_size, u8 ctx_type)
>  {
>  	u8 *x_val_ptr, *t_val_ptr, *u_val_ptr, *p_ctx;
>  	u8 x_val, t_val, u_val;
>  
> -	p_ctx = (u8 *)p_ctx_mem;
> +	p_ctx = (u8 * const)p_ctx_mem;

Ditto
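
For anyone hitting the same diagnostic elsewhere: the qualifier in such a
cast applies only to the temporary rvalue, so it has no effect, which is
what icc flags. A minimal standalone illustration (not taken from the
patch; function names are made up):

#include <stdint.h>

static uint8_t *ctx_base_warns(void *p_ctx_mem)
{
        return (uint8_t * const)p_ctx_mem;      /* icc: error #191 */
}

static uint8_t *ctx_base_clean(void *p_ctx_mem)
{
        return (uint8_t *)p_ctx_mem;            /* equivalent, no diagnostic */
}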

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2019-10-11 16:14 UTC | newest]

Thread overview: 24+ messages
2019-09-30  2:49 [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Rasesh Mody
2019-09-30  2:49 ` [dpdk-dev] [PATCH 1/9] net/qede/base: calculate right page index for PBL chains Rasesh Mody
2019-09-30  2:49 ` [dpdk-dev] [PATCH 2/9] net/qede/base: change MFW mailbox command log verbosity Rasesh Mody
2019-09-30  2:49 ` [dpdk-dev] [PATCH 3/9] net/qede/base: lock entire QM reconfiguration flow Rasesh Mody
2019-09-30  2:49 ` [dpdk-dev] [PATCH 4/9] net/qede/base: rename HSI datatypes and funcs Rasesh Mody
2019-09-30  2:49 ` [dpdk-dev] [PATCH 5/9] net/qede/base: update rt defs NVM cfg and mcp code Rasesh Mody
2019-09-30  2:49 ` [dpdk-dev] [PATCH 6/9] net/qede/base: move dmae code to HSI Rasesh Mody
2019-09-30  2:49 ` [dpdk-dev] [PATCH 7/9] net/qede/base: update HSI code Rasesh Mody
2019-09-30  2:49 ` [dpdk-dev] [PATCH 8/9] net/qede/base: update the FW to 8.40.25.0 Rasesh Mody
2019-09-30  2:49 ` [dpdk-dev] [PATCH 9/9] net/qede: print adapter info during init failure Rasesh Mody
2019-10-03  5:06 ` [dpdk-dev] [PATCH 0/9] net/qede/base: update FW to 8.40.25.0 Jerin Jacob
2019-10-03  5:59   ` Rasesh Mody
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 " Rasesh Mody
2019-10-11  7:57   ` Jerin Jacob
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 1/9] net/qede/base: calculate right page index for PBL chains Rasesh Mody
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 2/9] net/qede/base: change MFW mailbox command log verbosity Rasesh Mody
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 3/9] net/qede/base: lock entire QM reconfiguration flow Rasesh Mody
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 4/9] net/qede/base: rename HSI datatypes and funcs Rasesh Mody
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 5/9] net/qede/base: update rt defs NVM cfg and mcp code Rasesh Mody
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 6/9] net/qede/base: move dmae code to HSI Rasesh Mody
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 7/9] net/qede/base: update HSI code Rasesh Mody
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 8/9] net/qede/base: update the FW to 8.40.25.0 Rasesh Mody
2019-10-11 16:13   ` Ferruh Yigit
2019-10-06 20:14 ` [dpdk-dev] [PATCH v2 9/9] net/qede: print adapter info during init failure Rasesh Mody
